Generative Artificial Intelligence Image Tools among Future Designers: A Usability, User Experience, and Emotional Analysis

Generative Artificial Intelligence (GenAI) image tools hold the promise of revolutionizing a designer's creative process. The increasing supply of this type of tool leads us to consider whether they suit future design professionals. This study aims to unveil whether three GenAI image tools—Midjourney 5.2, DreamStudio beta, and Adobe Firefly 2—meet future designers' expectations. Do these tools have good Usability, offer a sufficient User Experience (UX), induce positive emotions, and provide satisfactory results? A literature review was performed, and a quantitative empirical study based on a multidimensional analysis was executed to answer the research questions. Sixty users used the GenAI image tools and then responded to a holistic evaluation framework. The results showed that while the GenAI image tools received favorable ratings for Usability, they fell short of achieving high scores, indicating room for improvement. None of the platforms received a positive evaluation on all UX scales, highlighting areas for enhancement. The benchmark comparison revealed that all platforms, except for Adobe Firefly's Efficiency scale, require enhancements in pragmatic and hedonic qualities. Despite inducing neutral to above-average positive emotions and minimal negative emotions, overall satisfaction was moderate, with Midjourney aligning more closely with user expectations. This study emphasizes the need for significant improvements in Usability, positive emotional resonance, and result satisfaction, and even more so in UX, so that GenAI image tools can meet future designers' expectations.


Introduction
In the contemporary design landscape, the use of GenAI image tools, driven by machine learning algorithms and large language models, rose within the design community in the summer of 2022. Design practitioners have shown a growing curiosity about these tools, which has increased their availability on the market. Tools like DALL-E 2, Midjourney, Stable Diffusion, and Adobe Firefly, among many others, promise to revolutionize the paradigms of image production. These and other tools allow designers to redefine the creative process and improve methodologies, resulting in faster ideation and prototype development and in the quicker creation, manipulation, and exploration of visual and multimedia content [1]. However, the extent to which these conversational interface tools cater to the needs and preferences of future designers remains a subject of inquiry. Some authors advise that despite the potential benefits of AI image tools, there is a lack of research on their effectiveness and Usability [2,3], which prompted us to delve into this research topic. Nielsen [3] indicates that this new user interface paradigm, Intent-Based Outcome Specification, has deep-rooted Usability problems that require knowledge of prompt engineering, such as the need to use prose text that obliges the user to tell the computer what outcome they want rather than telling it what to do. The user tells the computer what they want but not how to accomplish it, reversing the locus of control; all these new characteristics prove the need to analyze these user interfaces. Yu, Dong, and Wu's [4] study also warns about the UX problems of GenAI tools, highlighting their high learning costs and limited effectiveness. In addition, the results suggest that Midjourney positively influences creativity.
Our brief literature review shows that GenAI text and image tools and their relationship with design and the creative industry have attracted interest and produced cutting-edge research topics, covering areas like visualization design, communication design, web design, design methodologies and processes, ethics, UX, and user interfaces. There is still a need to understand whether distinct GenAI image tools align with future designers' expectations; this understanding could help improve the development of these types of tools.

Quantitative Experimental Design
Our research endeavors to explore whether three GenAI image tools respond to the requirements of the future generation of designers by offering good Usability and UX, inducing positive emotions, and providing satisfactory results. Therefore, the research questions driving this study are as follows: RQ1. Do GenAI image tools (Midjourney 5.2, DreamStudio beta, and Adobe Firefly 2) have good Usability? Moreover, do they have a positive evaluation of Usefulness, Ease of Use, Ease of Learning, and user Satisfaction? RQ2. Do GenAI image tools (Midjourney 5.2, DreamStudio beta, and Adobe Firefly 2) have sufficient UX for design students? Additionally, do they receive favorable assessments for their Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty? Furthermore, do they align with the standards of the "Good" category when compared to the benchmark values?
RQ3. Do GenAI image tools (Midjourney 5.2, DreamStudio beta, and Adobe Firefly 2) induce positive/negative emotions in design students when they use them?
We conducted an experiment involving three user groups, each utilizing and assessing one of the three designated tools, to address these research questions. The three groups provided responses to a holistic questionnaire that contained the USE Questionnaire: Usefulness, Satisfaction, and Ease of Use (USEQ); the User Experience Questionnaire (UEQ); the Positive Affect Negative Affect Scale (PANAS); and a question related to satisfaction with the generated results.

Procedure
We assigned each of the three GenAI image tools (Midjourney, DreamStudio, and Adobe Firefly) to a group of users. Then, between 4 and 19 December 2023, we asked users to visit a laboratory at the University of Beira Interior, where they followed a briefing: create a circus poster in the style of Paul Rand. They used and experimented with the GenAI tool assigned to them for as long as they considered necessary, and following the experience, the students completed an online holistic questionnaire. Access to the Google Forms questionnaire was facilitated through a link provided on the University of Beira Interior Moodle platform.

Participants of the Study
We enlisted sixty volunteers (N = 60). The study participants were undergraduates pursuing a Bachelor's Degree in Multimedia Design within the Art Department at the University of Beira Interior, Portugal. These 60 participants were then distributed randomly into three groups of 20 users each. A sample size between 20 and 30 respondents guaranteed that we could attain trustworthy results [32].

Data Gathering
Information was gathered through the Portuguese version of the USEQ [5,33], the PANAS [7,8], the Portuguese version of the UEQ [6], and a question related to satisfaction with the results. Some studies recommend employing a variety of questionnaires to obtain a complete measurement of UX in conversational interfaces or to enhance the assessment of a particular UX dimension [34]. Three instruments and one query were chosen to respond to the research questions: (a) The USEQ was chosen because it is a valid and reliable [35] survey instrument with 30 items that examine four dimensions of Usability: Usefulness, Ease of Use, Ease of Learning, and Satisfaction. Each item was rated on a seven-point Likert scale ranging from 1 = Strongly Disagree to 7 = Strongly Agree. This questionnaire evaluated the self-perceived Usability of the GenAI image tools. (b) The UEQ was used because it facilitates the analysis of the complete UX beyond mere Usability. The questionnaire scales encompass Usability elements such as Perspicuity, Efficiency, and Dependability as well as UX factors like Novelty and Stimulation. This comprehensive approach provides a holistic understanding of the UX across product/system touchpoints [32]. The UEQ consists of 26 items spread across six seven-point Likert-type scales: Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty. Each item in the UEQ is structured as a semantic differential, with two opposing terms representing each item. The reliability and validity of the UEQ have undergone thorough scrutiny in numerous studies [32,36]. (c) The PANAS is among the most frequently utilized scales for assessing moods or emotions. This brief scale comprises 20 items, with half measuring positive affect (e.g., inspired and excited) and the other half measuring negative affect (e.g., afraid and upset). Each item employs a five-point Likert scale, from 1 = Very Slightly to 5 = Extremely, to measure the degree of emotion experienced within a defined timeframe [7]. This scale can measure emotional responses to events, such as the experience with GenAI image tools. Emotions are an increasingly important factor in human-computer interaction. Nevertheless, traditional Usability has mostly ignored the affective factors of the user and the user interface [37]. This scale can complement Usability testing by adding an information layer about the user's affective state during testing. (d) A question about satisfaction with the results obtained with the GenAI image tools was included and was rated on a seven-point Likert scale.

Materials
(1) Independent variables. The variables manipulated independently in this study were the three GenAI image tools. The chosen tools were Midjourney (see Figure 1), DreamStudio (see Figure 2), and Adobe Firefly (see Figure 3). We chose Midjourney because it is a leading generative AI tool; we chose DreamStudio because it offers an easy-to-use interface built on the latest version of the Stable Diffusion model and is a leading generative AI tool as well; and we chose Adobe Firefly because it was created by Adobe, a company that makes software specifically for designers and creatives.
Midjourney runs on the Discord interface. Although the company is creating a new interface, it is not fully functional in Portugal. The image generation platform first entered open beta in July 2022. Midjourney has a text prompt box that admits the use of commands and parameters. The results presented allow the user to scale images, create slight image variations, and rerun the original prompt. Midjourney also allows terms to be interpreted separately, allows images to be uploaded, has a negative prompt, and has the creative tag prompt for unconventional outputs. Adobe Firefly was released in June 2023. Its features include a text prompt box, a negative prompt, image upload, image proportions, style controls, and buttons for image effects, light, color, composition, and style intensity. It also offers different possibilities beyond text to image, including generative fill, text effects, generative recolor, and text to vectors. Adobe Firefly's developers are now working on the possibilities of 3D to image and sketch to image.
(2) Dependent variables The dependent variables used in this study were as follows: Usability, UX, Emotional Induction, and Generated Results Satisfaction.
(a) Usability: This variable was explored to ascertain the extent to which participants feel the system is easy to use and provides benefits in helping users obtain information or create a result. Four dimensions were used to evaluate this variable: Usefulness, Ease of Use, Ease of Learning, and Satisfaction. Given this aim, thirty statements and two open questions were used. Important as they are for other interfaces, the described Usability components are not sufficient for evaluating GenAI image tools because these are not traditional interactive devices: their process is highly random and unexplainable, and the results may interfere with the Usability evaluation. Because of this, other variables were considered in this study. (b) UX: UX focuses on the perceptions and behaviors of the user during their interactions with technical systems or products (ISO 9241-210:2019) [38]. Measuring UX means measuring hedonic (non-goal-oriented) and pragmatic (goal-oriented) dimensions. Six dimensions were used to evaluate this variable: Efficiency (shows whether users can accomplish their tasks without undue effort), Attractiveness (indicates the overall impression of the product and gauges users' preference for it), Perspicuity (shows whether users can quickly become acquainted with the product and grasp its usage), Stimulation (indicates whether using the product is stimulating and engaging), Dependability (demonstrates whether users perceive a sense of control during interaction), and Novelty (reflects the product's level of innovation and creativity and its ability to capture the user's interest). Attractiveness represents a pure valence dimension. Efficiency, Perspicuity, and Dependability pertain to pragmatic quality, while Stimulation and Novelty relate to hedonic quality [39]. (c) Emotional Induction: This variable was studied to determine the emotional impact exerted by the independent variable on participants. Participant motivation, ease of
memorization, and ability to solve problems can be influenced by positive emotions [40]. Also, some studies consider the existence of carry-over effects of affective states on Usability appraisals [41]. We used twenty items to assess the emotions experienced during the experiment, and participants were queried about the extent to which they felt each emotion. The positive emotion items were attentive, active, enthusiastic, alert, determined, excited, interested, inspired, strong, and proud. The negative emotion items were scared, afraid, jittery, nervous, irritable, hostile, ashamed, guilty, distressed, and upset. (d) Generated Results Satisfaction: Because users' evaluations of GenAI image tools may fixate on the results generated and confound the other measurable scales, we opted to measure the users' satisfaction with the results generated.

Data Analysis
For the data analysis, we proceeded differently for each variable. For Usability (USEQ), we analyzed the average of the scales Usefulness, Ease of Use, Ease of Learning, and Satisfaction. Then, we measured the total average of all scales to obtain the total score of the Usability variable. The values were converted into percentages to facilitate the interpretation of the results. We also translated the responses to the open questions related to positive and negative items and transcribed the most frequently named arguments.
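As an illustrative sketch (not the authors' own analysis script), a mean on the 1-7 Likert scale can be converted to a percentage of the scale maximum; Midjourney's reported overall mean of 4.72, for example, corresponds to the reported 67.43%:

```python
def usability_percentage(scale_means, max_point=7):
    """Average USEQ scale means (1-7 Likert) and convert the result
    to a percentage of the scale maximum."""
    total_mean = sum(scale_means) / len(scale_means)
    return round(total_mean / max_point * 100, 2)

# Midjourney's reported overall mean of 4.72 maps to 67.43%
print(usability_percentage([4.72]))  # 67.43
```

The same conversion reproduces the other reported scores: 4.12 maps to 58.86% and 4.36 to 62.29%.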
For the UX, we analyzed each UEQ item and then used a benchmark to compare the results. Scale values between −0.8 and 0.8 represent a neutral evaluation, values >0.8 represent a positive evaluation, and values <−0.8 represent a negative evaluation [32]. Upon obtaining scores for each scale, the data were analyzed through a benchmark graph to evaluate the quality of each GenAI image tool in comparison to other products within the UEQ Data Analysis Tool dataset. Subsequently, we reference the benchmark intervals for the UEQ as outlined by Schrepp, Hinderks, and Thomaschewski [42]. The feedback is categorized into five levels:

• Excellent: the evaluated product is among the best 10% of results.
• Good: 10% of the benchmark results are better than the evaluated product, and 75% of the results are worse.
• Above Average: 25% of the benchmark results are better than the evaluated product, and 50% of the results are worse.
• Below Average: 50% of the results in the benchmark are better than the evaluated product, and 25% of the results are worse.
• Bad: the evaluated product is among the worst 25% of results.
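The −0.8/0.8 convention for single scale means can be sketched as follows (an illustrative helper, not part of the UEQ toolkit):

```python
def classify_ueq_mean(value):
    """Classify a UEQ scale mean: > 0.8 is a positive evaluation,
    < -0.8 is negative, and anything in between is neutral."""
    if value > 0.8:
        return "positive"
    if value < -0.8:
        return "negative"
    return "neutral"

# e.g., an Efficiency mean of 0.950 counts as a positive evaluation
print(classify_ueq_mean(0.950))  # positive
```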
Additionally, Cronbach's Alpha was utilized to assess the reliability of the UEQ scales. A Cronbach's Alpha value between 0.90 and 1.00 indicates excellent internal consistency; between 0.70 and 0.90, good internal consistency; between 0.60 and 0.70, acceptable consistency; between 0.50 and 0.60, poor consistency; and less than 0.50, unacceptable consistency.
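As a minimal sketch of this reliability check, the standard Cronbach's Alpha formula and the consistency bands above can be written as follows (assuming per-respondent item scores; this is illustrative, not the authors' analysis code):

```python
def cronbach_alpha(items):
    """Cronbach's Alpha for a scale: `items` is a list of item-score lists,
    one list per item, with respondents in the same order.
    Uses sample variances (n - 1 denominator)."""
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    person_totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(person_totals))

def consistency_band(alpha):
    """Map an alpha value to the consistency bands listed above."""
    if alpha >= 0.90:
        return "excellent"
    if alpha >= 0.70:
        return "good"
    if alpha >= 0.60:
        return "acceptable"
    if alpha >= 0.50:
        return "poor"
    return "unacceptable"
```

For instance, `consistency_band(0.96)` returns "excellent", matching the label given below to Midjourney's Attractiveness scale.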
For the PANAS, we calculated the total scores by summing the ten positive items and the ten negative items separately, each total falling in a range of 10-50.
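A sketch of this scoring, assuming one rating (1-5) per PANAS item keyed by item name (the item lists are those given in the Materials section; the function itself is illustrative):

```python
POSITIVE_ITEMS = ["attentive", "active", "enthusiastic", "alert", "determined",
                  "excited", "interested", "inspired", "strong", "proud"]
NEGATIVE_ITEMS = ["scared", "afraid", "jittery", "nervous", "irritable",
                  "hostile", "ashamed", "guilty", "distressed", "upset"]

def panas_totals(ratings):
    """Sum the ten positive and ten negative items (each rated 1-5),
    yielding two totals, each in the 10-50 range."""
    positive = sum(ratings[item] for item in POSITIVE_ITEMS)
    negative = sum(ratings[item] for item in NEGATIVE_ITEMS)
    return positive, negative

# A respondent rating every positive item 3 and every negative item 1:
sample = {item: 3 for item in POSITIVE_ITEMS} | {item: 1 for item in NEGATIVE_ITEMS}
print(panas_totals(sample))  # (30, 10)
```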
For the Generated Results Satisfaction variable, we measured the total average for each GenAI image tool. Again, values were converted into percentages to facilitate the interpretation of the results.

Results
Considering the questionnaires completed by the 60 participants, we determined that the average age of our participants was 21 years, with 25 (41.7%) male respondents, 34 (56.7%) female respondents, and one (1.7%) respondent identifying as another gender. A total of 32 (53.3%) participants had never used GenAI image tools, and 28 (46.7%) had already experimented with such tools. Most of these 28 participants had experimented with several GenAI tools; among the reported platforms, 12 (20%) participants reported having used DALL-E, 8 (13.3%) indicated that they had used Adobe Firefly, 7 (11.7%) had experimented with Midjourney, 4 (6.7%) had used Stable Diffusion, and 14 (23.3%) also indicated that they had used other GenAI image tools. From the analysis of each group, we can say that they are homogeneous regarding the participants' previous use of GenAI image tools. In the DreamStudio group, 11 participants had never used GenAI tools, and 9 had; in the Midjourney group, 9 participants had never used GenAI tools, and 11 had; and in the Adobe Firefly group, 8 participants had never used GenAI tools, and 12 had.

Impact upon Usability
The overall value of Usability for Midjourney, on a scale of 1-7, is 4.72, corresponding to 67.43%. DreamStudio reaches a 58.86% (4.12) score, and Adobe Firefly has a 62.29% (4.36) score (Table 1). These values are only slightly above average, revealing the need to improve aspects of the user interface and the system itself to increase Usability. Regarding the dimensions of the Usability variable, Usefulness (61.71%), Ease of Use (65.86%), Satisfaction (66.14%), and Ease of Learning (76.29%) are higher for the Midjourney platform. The dimension Ease of Learning is the one that reaches the highest values. Almost all the dimensions reach positive values for all the platforms; the exception is the dimension Usefulness for the platform DreamStudio (45.57%), which is below average. The results of the questions related to positive and negative points were essential for gathering qualitative feedback that could improve the GenAI image tools. The positive points mentioned most often for the Midjourney platform were related to the very quick generation of results and the possibility to inspire and improve creativity. Negative points included complaints about a confusing interface, difficulties related to image editing, results that did not meet users' expectations, concerns over ethical and copyright issues, and difficulties in creating prompts.
For the DreamStudio GenAI image tool, the qualitative positive points mentioned by the participants highlight the platform's ease of use, being fun to use, and its quick creation of images.The negative points emphasized by the participants indicated difficulties in editing the image results, results that did not change when altering the prompts, results that are too repetitive, an interface that should be simpler and organized, and copyright concerns.
For Adobe Firefly, students highlighted the positive points of being easy to use, having an intuitive interface, creating exciting results, and the quick creation of results. The negative points accentuated were difficulties in uploading referential images, concerns over copyright, and the need for more editing tools.

Midjourney GenAI Tool
The reliability assessment of the scale Attractiveness (α = 0.96) indicated excellent internal consistency; the analysis of the scales Perspicuity (α = 0.81), Efficiency (α = 0.87), Stimulation (α = 0.88), and Novelty (α = 0.75) indicated good internal consistency; and the scale Dependability (α = 0.50) indicated poor internal consistency. The experience with Midjourney created two types of results. Initially, we examined the value of each UEQ item (see Figure 4), where the averages indicated a positive evaluation of the UX for the scales Efficiency (0.950) and Stimulation (0.858). The averages reveal a neutral evaluation for the scales Attractiveness (0.608), Perspicuity (0.675), Dependability (0.238), and Novelty (0.600). Secondly, we acquired additional results based on the UEQ benchmark (see Figure 5), analyzing the UX of the Midjourney GenAI tool compared to other digital products.
The diagram shows that the scale values of Efficiency, Stimulation, and Novelty are in the "Below Average" category, which indicates that 50% of the benchmark products have a better UX than this GenAI tool. The scenario is worse when we measure the scales of Attractiveness, Perspicuity, and Dependability, which are in the "Bad" category, meaning that 75% of the benchmark products have a better UX than this GenAI tool.

DreamStudio GenAI Tool
The reliability examination of the scales Attractiveness (α = 0.83), Efficiency (α = 0.76), Stimulation (α = 0.73), and Novelty (α = 0.80) indicated good internal consistency. The analysis of the scales Perspicuity (α = 0.50) and Dependability (α = 0.55) indicated poor internal consistency. With the DreamStudio experience, we can perceive the value of each UEQ item (see Figure 6). The average analysis reveals a positive evaluation of the UX for the scales Perspicuity (0.888) and Efficiency (1.038). It also indicates a neutral evaluation for the following scales: Attractiveness (0.458), Dependability (0.238), Stimulation (0.138), and Novelty (0.188). The outcomes of the UEQ benchmark (see Figure 7) for this GenAI tool reveal that the scale values for Perspicuity, Efficiency, and Novelty fall within the "Below Average" category; this suggests that 50% of the benchmarked products outperform this GenAI tool in these aspects. On the other hand, the scales Attractiveness, Dependability, and Stimulation are in the "Bad" category, meaning that this product is among the 25% worst UX results.
Adobe Firefly GenAI Tool
The user experience with the GenAI tool Adobe Firefly also produced two types of results. First, we can observe the value of each UEQ item (see Figure 8), where the averages indicate a positive evaluation of the UX for several scales, including Attractiveness. Second, the UEQ benchmark results (see Figure 9) indicate that the scale values for Attractiveness, Perspicuity, Dependability, Stimulation, and Novelty fall within the "Above Average" category. These results suggest that only 25% of the benchmark products outperform this GenAI tool in these properties. Conversely, the Efficiency scale falls within the "Good" category, ranking among the top 25% of results.

UX Comparison between the Three GenAI Tools
In the graphs presented below (see Figure 10 and Figure 11), it is possible to compare the UEQ results of the three GenAI tools.

Impact on Emotional Induction
Table 2 shows the positive and negative average scores from the experience with Midjourney, DreamStudio, and Adobe Firefly. The results reveal that Midjourney (29.455) achieved the highest average positive affect score. Adobe Firefly reveals a neutral level of students' positive emotions, while DreamStudio reveals a low level of positive emotions. The Midjourney (3.09, on a scale of 1-5) and Adobe Firefly (3.15, on a scale of 1-5) environments induce higher enthusiasm, while DreamStudio (3.05, on a scale of 1-5) induces higher interest in students.
All the negative affect scores obtained with the PANAS questionnaire reveal a low level of students' negative emotions. Low levels of negative emotional induction indicate a lack of negative engagement, reflecting calmness and serenity [8]. Midjourney (2.14, on a scale of 1-5) and Adobe Firefly (2.15, on a scale of 1-5) induce higher levels of jitteriness, while DreamStudio (2.5, on a scale of 1-5) induces a higher level of distress.

Generated Results Satisfaction
The analysis of the students' perceptions (Table 3) of the results indicates that students were most satisfied with the results obtained with Midjourney (67.14%). Below, it is possible to see some of the results (Table 4) created by the students during the experience.

Adobe Firefly
Prompt: Inside circus tent vector old retro vintage style of Paul Rand.
Digital 2024, 4, FOR PEER REVIEW

Adobe Firefly
Prompt: Vintage circus host with big pointed black hat, two women seated and two circus tents on the background vector poster Paul Rand style.

Discussion
We developed this fieldwork study to understand whether GenAI image tools, namely Midjourney, DreamStudio, and Adobe Firefly, are suitable for future designers, by determining whether these individuals recognize that these tools have good Usability and sufficient UX, provoke positive emotions, and provide satisfactory results in line with their expectations. The research questions were answered throughout the investigation. We consider the first research question: RQ1. Do GenAI image tools have good Usability? Moreover, do they have a positive evaluation of Usefulness, Ease of Use, Ease of Learning, and user Satisfaction? The results indicate that future designers positively evaluate Usability in all the GenAI tools, but the scores reach only slightly above-average values. The Midjourney tool shows the highest Usability level. This tool also achieves the highest levels in all four Usability parameters: Usefulness, Ease of Use, Ease of Learning, and Satisfaction. Curiously, the dimension that reached the highest level for all the platforms was learnability; users considered these tools quick and easy to learn, requiring little effort to use, and, once learned, easy to remember how to use. All the tools reach positive values for almost all the dimensions of Usability; the exception is the dimension of Usefulness in the DreamStudio tool, which has a negative evaluation.
The USEQ has two open questions that are very important for gathering qualitative feedback. For all tools, students referred to positive points, such as the readiness to create results, and negative points, such as difficulties in editing images and concerns over ethical and copyright issues. For Midjourney, students indicated that the platform could be a vehicle of inspiration that improves creativity; this is consistent with the results of other studies [31]. Students also indicated that when using Midjourney, it was challenging to create prompts, the interface could be confusing initially, and the results might not meet users' expectations. For DreamStudio, students stated that the platform was fun and easy to use, the results were too repetitive and did not change when altering prompts, and the interface could be more organized and straightforward. For Adobe Firefly, the students underlined that the interface is intuitive and easy to use and the results are interesting; they also indicated that it is difficult to upload referential images.

Adobe Firefly
Prompt: Vintage circus host with big pointed black hat, t wo women seated and two circus tents on the background vector poster Paul Rand style.

Discussion
We developed this fieldwork study to understand if GenAI image tools, namely Midjourney, DreamStudio, and Adobe Firefly, are suitable for future designers, demonstrating that these individuals recognize that these tools have Usability, sufficient UX, provoke positive emotions, and provide satisfactory results in line with their expectations.The research questions were answered throughout the investigation.We consider the first research question: RQ1.Do GenAI image tools have Usability?Moreover, do they have a positive evaluation of Usefulness, Ease of Use, Ease of Learning, and user Satisfaction?The results indicate that future designers positively evaluate Usability in all GenAI tools but reach very little above-average values.The Midjourney tool shows a higher Usability level.This tool also achieves higher levels in all four Usability parameters: Usefulness, Ease of Use, Ease of Learning, and Satisfaction.Curiously, the dimension that reached the highest level for all the platforms was learnability; users considered it quick and easy to learn, without much effort to use these tools, and that once learned, it was easy to remember how to use them.All the tools reach positive values for almost all the dimensions of Usability; the exception is the dimension of Usefulness in the DreamStudio tool, which has a negative evaluation.
Next, we regard the second research question, RQ2: Do GenAI image tools offer sufficient UX for design students? Additionally, do they receive favorable assessments for Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty? Furthermore, do they align with the "Good" category when compared to the benchmark values? The analysis of each GenAI tool's UX shows that Efficiency is the only scale that is positive for all platforms, meaning users agree that all platforms allow tasks to be solved without unnecessary effort, which matches the USEQ results. The scale evaluated as neutral on all platforms was Novelty, indicating that these platforms do not catch our students' interest and are not perceived as innovative. For Midjourney, the other scale that reaches a positive level is Stimulation, meaning users find the tool exciting and motivating to use. For DreamStudio, the other positive scale is Perspicuity, indicating that users easily became familiar with the platform and learned how to use it. Adobe Firefly is the platform with the most positive scales (Attractiveness, Perspicuity, Dependability, and Stimulation), which suggests that users like the tool overall, find it easy to learn, feel in control of the interactions, and feel excited and motivated to use it. For the analysis of the UX benchmark, it is essential to remember that "the general UX expectations have grown over time. Since the benchmark also contains data from established products, a new product should reach at least the Good category on all scales" [42] (p. 43). None of the analyzed platforms reached this goal. Only one platform, Adobe Firefly, reached the "Good" category, and only on one scale, Efficiency, a pragmatic, goal-oriented quality, meaning users feel they can solve their tasks without much effort. On all platforms, the scales Attractiveness, Perspicuity, Dependability, Stimulation, and Novelty fall below the "Good" category. Midjourney (Attractiveness, Perspicuity, and Dependability) and DreamStudio (Attractiveness, Dependability, and Stimulation) have scales in the "Bad" category, meaning these products are among the 25% worst results in the benchmark.
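The two comparisons made here, the per-scale positive/neutral/negative reading and the benchmark category lookup, can be sketched as follows. The ±0.8 cut-offs follow the usual UEQ convention for scale means on the −3 to +3 range; the category bounds shown are placeholder values for illustration, not the official UEQ benchmark table:

```python
def evaluation(scale_mean):
    """UEQ convention: means lie in [-3, 3]; > 0.8 reads as positive,
    < -0.8 as negative, and anything in between as neutral."""
    if scale_mean > 0.8:
        return "positive"
    if scale_mean < -0.8:
        return "negative"
    return "neutral"

def benchmark_category(scale_mean, thresholds):
    """Map a scale mean to a benchmark category.

    `thresholds` lists the lower bounds for Excellent, Good, Above average,
    and Below average; anything lower falls into Bad (the worst 25% of the
    benchmark data set). The numbers used below are illustrative placeholders,
    not the official UEQ benchmark values.
    """
    names = ["Excellent", "Good", "Above average", "Below average"]
    for name, bound in zip(names, thresholds):
        if scale_mean >= bound:
            return name
    return "Bad"

# Illustrative lower bounds for one scale (placeholder values):
efficiency_thresholds = (1.78, 1.47, 0.98, 0.54)
```

A product whose scale mean clears the "positive" cut-off can still land below "Good" in the benchmark, which is exactly the gap the Discussion describes.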
Although our group of design students recognized that these tools allow tasks to be solved easily and without much effort, the tools need significant improvements in their pragmatic and hedonic qualities. The improvements should address the overall impression of the GenAI tools: making users feel in control of the interactions, making the tools easy to learn and become familiar with, making them more exciting and motivating to use, and transforming them so that they catch the user's interest and are perceived as innovative products. In short, these GenAI image tools do not have sufficient UX.
Next, we look at the third research question, RQ3: Do GenAI image tools induce positive/negative emotions in design students when they use them? The results reveal that these tools induce low levels of negative emotions, provoking calmness and serenity [8]. Positive emotions reach low levels for DreamStudio, neutral levels for Adobe Firefly, and slightly above-average levels for Midjourney.
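PANAS scores of this kind aggregate 20 emotion items rated from 1 to 5, ten tapping positive affect and ten tapping negative affect; a minimal sketch with illustrative values (not the study's data):

```python
from statistics import mean

# PANAS: 20 items on a 1-5 scale, 10 for positive affect (PA) and 10 for
# negative affect (NA). The response values below are illustrative only.
pa_items = [3, 4, 2, 3, 3, 4, 3, 2, 3, 3]
na_items = [1, 2, 1, 1, 2, 1, 1, 2, 1, 1]

def affect_score(items, as_mean=True):
    """Mean (range 1-5) or summed (range 10-50) affect score for one person."""
    return mean(items) if as_mean else sum(items)

pa = affect_score(pa_items)  # moderate positive affect
na = affect_score(na_items)  # low negative affect
```

Reporting the mean keeps the score on the original 1-5 item scale, which makes "low", "neutral", and "above-average" readings like those above easy to interpret.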
We then consider the fourth research question, RQ4: Do GenAI image tools provide satisfactory results for design students? The results indicate that students are most satisfied with Midjourney's results, although the values are not much above average. This pattern suggests a possible connection between the achieved image results, Usability, and the emotional induction scores.
This study contributes to the growing body of quantitative research aimed at evaluating the UX, Usability, and Emotions of GenAI tools [29,43,44], specifically GenAI image tools in the design domain [31,45]. It shows that our design students consider these platforms to have slightly above-average Usability, with insufficient UX scores, even more so when compared to other products. The positive emotions the tools induce in the students range between low and just above average, the negative emotions induced are low, and satisfaction with the results obtained could be higher. The results of this experiment indicate that GenAI image tools need improvements in their Usability, Induced Emotions, and Satisfaction with the generated results; however, even greater efforts are needed to improve the UX.
Because these tools have specific characteristics and the results obtained with them are not fully controllable, users' responses may be influenced by the particular results they obtain. In future studies, we recognize that it would be important to propose holistic measuring instruments designed specifically for Generative AI products, with a benchmark related to AI-infused products. Such an instrument should evaluate not only hedonic and pragmatic qualities, the obtained image results, and emotions but also trustworthiness and reliability, concerns that our student participants revealed. Qualitative studies could also gather more data on what should be changed in these tools to achieve higher levels of Usability and UX. As in any study, there were limitations to this research. The sample size could be larger, including future designers from different universities and domain experts. The study could be complemented with a Usability test to obtain more qualitative data, and more GenAI image tools could be evaluated.

Conclusions
This study assessed the Usability, UX, Emotional Induction, and Results Satisfaction of three GenAI image tools, namely Midjourney, DreamStudio, and Adobe Firefly, with 60 future designers as participants. While all platforms received favorable ratings for Usability, they fell short of achieving high scores, indicating room for improvement. Midjourney was perceived as having the highest Usability, particularly in Usefulness, Ease of Use, Ease of Learning, and Satisfaction. The students agreed that it is easy and quick to learn how to work with all the tools; the Ease of Learning dimension reached the highest values. With respect to UX, none of the platforms reached a positive evaluation on all scales, with only Adobe Firefly achieving positive ratings on five scales. Students positively evaluated all GenAI tools regarding Efficiency, agreeing that they can solve tasks without unnecessary effort. The analyzed tools failed to meet the UX goal requirements concerning Novelty, meaning students do not perceive these tools as innovative products and the tools do not catch their interest. The comparison with the benchmark shows that all GenAI image tools need improvements in their pragmatic and hedonic qualities; the exception is the Efficiency scale for Adobe Firefly. In fact, some Midjourney and DreamStudio scales fall within the worst 25% of benchmark results. Despite inducing above-average positive emotions, particularly for Midjourney, and low negative emotions for all tools, the overall result satisfaction was moderate, with Midjourney meeting expectations more closely. The results of this study underscore the need for significant improvements in Usability, Emotional Induction, and Generated Results Satisfaction, and even further improvement in UX, so that these tools can meet the expectations of future designers.
Funding: This research received no external funding.

Figure 3. Adobe Firefly 2 user interface.

Midjourney runs on the Discord interface. Although the company is creating a new interface, it is not yet fully functional in Portugal. The image generation platform first entered open beta in July 2022. Midjourney has a text prompt box that accepts commands and parameters. The results presented allow the user to scale images and create slight image variations.

DreamStudio is a user-friendly interface designed for image creation, leveraging the latest iteration of the Stable Diffusion image generation model. It was released in August 2022. The DreamStudio interface features style options, a text prompt box, a negative prompt, image upload, image settings, advanced image settings, and the Dream button.

Figure 8. Average UEQ scale values of Adobe Firefly.

Table 4. Results created by the participants, with the respective GenAI tool and prompt used.

GenAI Tool; Prompt/Image Upload; ID, Seed:

Midjourney. Prompt: /imagine prompt create a poster of a circus based on the work of the designer Paul Rand --v 5.2. Id: 933008ae-8519-4c48-8043-628c79c9191b

Midjourney. Prompt: /imagine prompt contortionists and jugglers for a circus poster in the designer paul rand style --v 5.2. Id: 7d3a0416-1c5e-4368-869a-2bdc4324c57e

DreamStudio. Prompt: Poster for a circus with the influence of Paul Rand, that is, with various geometric figures and color. The poster needs to contain objective elements of the circus. Seed: 554,335. Image:

DreamStudio. Prompt: Colorful poster for a circus, white background, geometrical objects in primary colors, different texture, minimalistic design. Seed: 93,581

Adobe Firefly. Prompt: Inside circus tent vector old retro vintage style of Paul Rand.

Adobe Firefly. Prompt: Vintage circus host with big pointed black hat, two women seated and two circus tents on the background vector poster Paul Rand style.

Table 1. Total results of the USE Questionnaire.

Table 2. Average scores resulting from PANAS.

Table 3. Evaluation of the results obtained by the students.
