
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/342834499

Visual presentation of mental healthcare chatbots for user experience

Article in Journal of the HCI Society of Korea · May 2020

DOI: 10.17210/jhsk.2020.06.15.2.39


All content following this page was uploaded by Seung Jin Chung on 10 July 2020.



* First author: PhD student, Dept. of Human Environment & Design, Yonsei University ** Corresponding author: Professor, Dept. of Human Environment & Design, Yonsei University; e-mail: hyunju@yonsei.ac.kr ■ Received: 10 February 2020 / Reviewed: 16 March 2020 / Accepted: 15 May 2020

Visual presentation of mental healthcare chatbots for user experience

Seung Jin Chung*, Hyunju Lee**

Abstract: Today the digital healthcare market and interest in mental health chatbots are growing. People can easily access chatbot interactions, and chatbots can be used to support people's emotional stability and psychological well-being. Consequently, many mental healthcare chatbots have been designed to reduce the likelihood of mental health issues by improving self-metacognition through early interventions. However, few studies have discussed the specific factors through which mental healthcare chatbot design may affect user experience. Moreover, although visual presentations throughout a chatbot system influence users' positive and negative experiences, most chatbot studies so far have focused on identity design rather than graphical interfaces and non-verbal visual communication tools. While it is important to examine specific visual design elements, it is also important to examine overall visual design requirements. Therefore, this study explored the user experience of mental health chatbots in terms of identity design, chatbot interface design, and visual communication tools. Participants' data were collected through pre/post preference evaluations, the System Usability Scale questionnaire, and semi-structured interviews on selected chatbot systems (i.e. Replika, Youper, Sayana, Woebot). The collected data were qualitatively analysed, and design considerations for mental healthcare chatbots were suggested.

Keywords: Chatbot, Mental Healthcare, AI Bot, Chatbot Design, Emotion-Aware Conversation


1. Introduction

Mental health concerns are not issues that affect only particular groups of people; anyone living in a fast-paced society can occasionally suffer from mental difficulties such as burnout, chronic fatigue syndrome, depression, anxiety, and sleep problems. Although many of these difficulties need to be addressed before they cause significant disruption in people's lives, social prejudice may keep sufferers from seeking treatment. Therefore, it is vital to reduce social discrimination toward people with mental health problems and to encourage individuals to cope with the normal stresses of life by monitoring their mental well-being[1]. People sometimes feel uncomfortable acknowledging their mental concerns, and mental health applications can overcome this discomfort as well as support mental health. Self-assessments suggested by chatbots can be effective in avoiding self-stigma concerns[2,3]. In addition, mental healthcare chatbots have been developed based on basic psychological counselling principles. Psychological counselling generally begins by addressing cognition in the form of individual thoughts, emotions, and behaviours. For example, cognitive behavioural therapy is one of the representative treatments aimed at stopping negative thinking or behaviour patterns by understanding the patient's thoughts and feelings. Similarly, mental healthcare chatbots have been developed to build trustworthy relationships with users. Communication through appropriate visual presentations can help users recognise their emotions. Online environments are fundamentally visual, so visual design can be adjusted to increase user attention and improve perception and decision-making[4]. However, despite the importance of visual design, its scope in chatbot research has been limited to chatbot identity design. Thus, this study aims to examine mental healthcare chatbot users' experiences through overall visual presentations. For a comprehensive discussion of the research objective, this study collected participants' data and analysed them qualitatively. Data were gathered through pre/post preference assessments, the System Usability Scale (SUS) questionnaire, and 1:1 interviews, and then examined for what and how visual presentations affected user experience.

2. Literature Review

2.1. Mental Healthcare Chatbots

Conversational agents respond to users in natural language, and empathic conversations are needed in mental healthcare systems. Mental healthcare chatbots are categorised according to how users interact with them: text-based, voice-based, and visual-language-based. Text-based chatbots are most commonly found on mobile devices, but combining two or more interaction types is also frequent[5]. Text-based chatbots need to lead meaningful conversations according to context, and voice-based chatbots need various tones of voice[6]. For empathic conversations, researchers have explored how chatbots can mimic human emotions, adjust their speech patterns to express shared understanding, offer new perspectives, and account for the situations and feelings communicated by users[7]. Above all, message interaction is the main feature of chatbot systems, and researchers consider that chat conversations have the most influence on the user experience[8,9]. Go & Sundar[8] found that message interactivity can compensate for low anthropomorphic visualisation and identity cues. Also, in terms of mental healthcare, building a trustworthy relationship is significant for successful message interaction between a user and a chatbot[10]. Therefore, strategies are linked to a chatbot's conversation style, and an attachment bond between mental healthcare chatbots and users can be secured through relational cues such as small talk, self-disclosure, empathy, humour, meta-relational talk, and continuity[11].

2.2. Chatbot User Experience Evaluations

Evaluating the effectiveness of mental healthcare chatbots requires a holistic approach. User experience (UX) research can explore cognitive (pragmatic) and affective/hedonic factors, including objective and subjective appraisals before, during, and after digital interactions[12]. Following the user need hierarchy, usability is a fundamental factor in understanding UX, as a higher-order need than functionality[13]. However, usability assessments of chatbots involve distinctive factors such as conversational intelligence, chatbot personality, and chat interface[14]. In this regard, Holmes et al.[2] developed the Chatbot Usability Questionnaire (CUQ) because Shneiderman's eight golden rules and Nielsen's ten usability heuristics may not easily be adapted to studying chatbots. To understand UX, most researchers have collected quantitative and qualitative data using several evaluation tools[15-18]. A longitudinal study by Winckler et al.[18] identified six UX dimensions based on the HCI literature: visual and aesthetic experience, emotion, stimulation, identification, meaning and value, and social relatedness and co-experience. They used several methods to assess UX, including thinking aloud, the Self-Assessment Manikin (SAM) questionnaire, the AttrakDiff questionnaire, the SUS questionnaire, and semi-structured interviews.


Of these methods, the SUS questionnaire is used to examine usability while the other methods examine the wider UX. Zarour & Alharbi[19] studied software products and identified three aspects of UX that corresponded to user needs: pragmatic & hedonic, brand, and technology. Table 1 shows that studies on chatbots evaluated chatbot systems using several methods selected according to the studies' objectives.

Authors | Objective | Methods
Kuligowska (2015) | Assess and compare commercial chatbots | Interviews identifying 10 key quality attributes; 1-5 rating scale
Meira & Canuto (2015) | Assess the quality of embodied emotional agents | Three-level measurement framework
Kaleem et al. (2016) | Assess the quality of conversational agents | Goal-question-metric approach developed by Fenton & Pfleeger (1998); pre/post test scores; perception of learning; correct/incorrect responses and time in system
Jain et al. (2018)[14] | Evaluate text messaging-based conversational agents | Qualitative data analysis by face-to-face meeting
Go & Sundar (2019)[8] | Determine the effectiveness of customer-service chatbots in e-commerce | Participants factorial experiment (survey)
Holmes et al. (2019)[2] | Determine the usability of healthcare chatbots | SUS, UEQ and CUQ

Table 1. Chatbot UX evaluation methods ([10], Table 2)

2.3. Visual Presentations of Mental Healthcare Chatbots

Visual design requirements include the perspective of usability and the emotional design context[20]. General principles of visual design were suggested by Gestalt psychologists and can improve system learnability and readability. Design can also refer to elements likely to affect users' emotions, such as colours, shapes, and sounds. In particular, shapes and colours can generate positive or negative feelings. According to research by Um et al.[21], saturated, analogously bright, warm colour combinations including yellow, orange, and pink, and illustrations or characters with round shapes are useful for emotional design treatment in multimedia environments. The currently expanded consideration of visual design focuses on sensory experiences, as studies have indicated that visually perceived feelings can increase the accessibility of thoughts and feelings[22]. This study examined the following visual presentations of mental healthcare chatbots: chatbot identity/personality, chat interface, and visual communication as non-verbal message interaction. First, a chatbot's identity is a visualisation of artificial intelligence, which can be indicated by its level of anthropomorphism. The closer an image is to a real-looking person, the more interpersonal closeness people can feel with a chatbot[11]. From another point of view, however, people can feel uncomfortable due to the uncanny valley effect observed in human and non-human interaction[23]. Second, the chat interface relates to usability and also to user feelings, generating a certain atmosphere through design choices for various elements including background colours, overall layout, and button interactions. Sending and receiving text and basic multimedia messages is a common chatbot feature, and structured messages are also provided in many chatbots for quick replies[24]. Graphical interface design generally follows the modern principles of creating simple, clean layouts with a minimal number of active controls on-screen[25].

Lastly, non-textual messages can be effective for emotional support; consequently, recent messaging dialogues include various communication methods such as images, videos, and sounds[26]. Above all, visual communication tools are provided for users' perception and the encouragement of positive feelings and thoughts. Since users may struggle to describe their emotional and psychological states clearly, visual-supportive communication strategies can help users articulate these states more clearly than textual communication can. In addition, image-based assessments can reduce stress more than text-based ones[27], and likewise, Augmentative and Alternative Communication (AAC) based on visual images can be successfully adapted for people with cognitive or emotional disorders.

Category | Visual presentation examples
Chatbot identity/personality | Anthropomorphic visual cues (avatar, icon, profile, etc.)
Chat interface | Graphic design styles (backgrounds, layout, colours, etc.)
Visual communication tools / non-verbal message interactions | Emoticons, illustrations, pictures, videos, graphs, etc.

Table 2. Mental healthcare chatbot visual presentations


3. Methods

The study selected four mental healthcare chatbot applications. Mobile applications, which are easy to access, are regarded as an advantageous tool for mental health[5,28]. The target chatbots were required to be intelligent conversational agents that could interact with humans cognitively and emotionally and were available on mobile devices. Services providing only simplified interactions and menus were eliminated, and four mental healthcare chatbots were finally chosen: Replika (R), Youper (Y), Sayana (S), and Woebot (W). In terms of language, chatbots that could communicate in Korean or English were considered; as a result, all selected chatbots supported English. For the UX analysis, pre/post preference assessments, the SUS questionnaire, and 1:1 interviews were conducted. The pre-preference assessment was performed before experiencing each service. SUS questions were provided after each chatbot platform experience. The SUS is a reliable and commonly used evaluation tool developed to measure usability based on effectiveness, efficiency, and satisfaction. The post-preference ranking was marked after experiencing all chatbots, and a 1:1 interview was then conducted to gather qualitative data. The detailed experimental process was as follows: the purpose of the study was shared along with the user participation consent form; participants were asked for their preference order of the four chatbots using images (i.e. app icons, key screenshots); SUS questions were provided after chatting with each chatbot for about 5 minutes; preference was asked again after using all chatbots; and lastly a semi-structured interview of about 30 minutes explored overall UX relating to each visual presentation (i.e. chatbot identity design, chat interface design, visual communication tools). Participants were required to be literate in English, to have never used the four selected mental health chatbots, to be familiar with mobile applications such as computer chatting, and to be aged in their 20s to 30s.
Participants were recruited according to these criteria with an informed consent form, and 9 participants took part in the research. All participants were female, and most were university students (students = 7, others = 2). Participants received printed instructions, including each chatbot's images, to mark the pre-preference and SUS assessments. Participants were also encouraged to write down or speak their thoughts, feelings, and questions during all sessions.

4. Results

Comparing the pre and post preference evaluations, all participants changed their attitudes after using the systems. The preference for Sayana (S) was highest before using the systems; however, Youper (Y) was highest after testing all chatbot systems. The pre-preference ranking from highest to lowest was Sayana (S), Replika (R), Youper (Y), and Woebot (W). After the chatbot experiences, the ranking changed to Youper (Y), Sayana (S), Woebot (W), and Replika (R). Individual SUS scores are shown in Figure 1, following the guideline that the highest score is 100, obtained by multiplying the final raw score by 2.5. Of the 10 items on the questionnaire, a few items showed high standard deviations (Figure 2).

Figure 1. Means and standard deviations of SUS scores (n=9)
Figure 2. Means and standard deviations for the 10 items of the SUS

During the experiments, 102 comments (Common: 8, Replika: 18, Youper: 23, Sayana: 26, Woebot: 27) were collected and analysed according to positive/negative feedback and suggestions. Participants' comments focused on feedback about message interaction. Remarkably, participants evaluated a chatbot negatively when it demanded a positive attitude change, requested an answer repeatedly, or provided irrelevant answers.
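The SUS scoring guideline referred to above can be sketched in a few lines of Python. This follows the standard SUS convention (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 so the maximum is 100); the example responses are illustrative, not this study's data.

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses (item 1 first).

    Standard SUS scoring: odd items contribute (response - 1), even items
    contribute (5 - response); the total is scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# The best possible answers (5 on odd items, 1 on even items) yield 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
```

Per-participant scores computed this way are what the means and standard deviations in Figures 1 and 2 summarise.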


From the interviews, 57 meaningful comments were generated: 23 were about chatbot identities, 18 about chat interfaces, 12 about visual communication tools, and 4 were categorised as common statements. In terms of chatbot identity, there was some variation in preference, but the anthropomorphic identity was preferred overall. Only one participant preferred the abstract identity design (participant 3). As a conversational partner, an anthropomorphic chatbot identity design may be necessary, but a higher level of anthropomorphism can be uncomfortable in some cases due to the uncanny valley effect.

(R/participant 5) "I am not really into that egg. I cannot catch the meaning of that. And even, it's broken… it feels a little weird."
(Y/participant 6) "My favourite was the one that looked like a person."
(S/participant 9) "This typical robot appearance is better for me because a chatbot is not a real human after all. I would feel uncomfortable with a chatbot pretending to be human."
(R/participant 3) "I am curious about this the most. It got me interested."

Regarding the chatbot interface, participants preferred lighter background colours. The background colour was considered a crucial factor in the chat interface, and opinions differed about using illustrations as a background. For instance, regarding Sayana (S)'s background illustration, one participant said, "This graphic is qualified, but it just wasn't for me", and another said, "It looks complicated", while some participants were satisfied with the illustrations. Lastly, visual communication tools were evaluated positively when they were presented according to the conversational context. Participants did not prefer images with a lot of text.

(R/participant 5) "The GIF made me confused as it suddenly came out in conversation. For me, it is better to send nothing than an irrelevant image."
(S/participant 4) (Regarding the presented image cards) "They are not bad but I think they are not effective, and they have too much text."
(Y/participant 1) "Emotional healing is more important for me than learning something. I was stressed out because this chatbot kept asking me questions."
(Y/participant 2) "It is very nice to be able to express my feelings with colours. It's smart because I am not sure how to define my feeling in a single word."

5. Discussion

5.1. Visual Presentation Decisions in Mental Healthcare Chatbots

Chatbot identities can significantly influence users' first impressions and their decision whether to try the service. However, it is hard to conclude that identity design is a crucial factor in determining future use. Instead, considering the interview results, conversation experiences were more important. Participants expect emotional conversations, and consequently it was important to feel closeness through friendly and reliable interactions[10,11]. In light of these expectations, participants preferred chatbots with an anthropomorphic design, but their high expectations meant they could be easily disappointed after conversations. Accordingly, strategic anthropomorphism is needed to match the level of conversational interaction[8]. Furthermore, some participants were uncomfortable communicating with chatbots whose identities fell into the uncanny valley[23].

With regard to interface design, participants liked graphical interface designs that followed standard rules, as in previous research[25]. Participants generally preferred bright colours and background music that matched the designs, and various options for background images seem to be needed based on participants' comments. One participant mentioned, "I don't want to use this service frequently because it is too dark and makes me depressed". Therefore, bright backgrounds can be effective as a default, and options to personalise interfaces further are recommended.

In addition, visual supports can make emotions easier to understand than text[27]. Users can recognise their situations and express their emotions through visual supports. Visual tools included exercises to relieve stress, body scanning during meditation, self-diagnosis and mood recording, and entertainment. Most of all, a chatbot's visual communication tools should be easy to understand and should correspond to the context. Besides, participants preferred using colours to record their feelings rather than emoticons. The interviews showed that most participants had difficulty describing their emotions specifically and felt stressed when asked to select a single emotion they were feeling. This is similar to findings in other studies on emotional design and colours[20,21]. Further study of multisensory emotional expression is needed.

5.2. Considerations for Designing Mental Healthcare Chatbots

Based on the results of this study, the following design considerations are suggested for future mental healthcare chatbot development.

1) Consistent chatbot design: A chatbot should exhibit a consistent personality and visual attributes so that users perceive it as one identical entity.
2) Intuitive image supports: Visual communication tools should be designed to reduce users' stress. Only the necessary amount of text should be included with images.
3) Structured messages rather than open styles: Structured messaging can benefit user participation. Users may feel stressed if they must lead conversations all the time.
4) Alternatives for expressing feelings: It can be effective to let users define their feelings through colours or abstract images rather than words.
5) Personalised interfaces: A chat session with attractive background colours and music can effectively switch users' moods. A system can recommend daily backgrounds or provide options for users.
6) Strategic identity design: A chatbot's conversation level should match its anthropomorphic level, and this may relate to marketing strategy.
7) Tips for emotional intimacy with a chatbot: Users might feel uncomfortable if a chatbot repeatedly asks for answers or compulsorily demands behaviour changes.
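Structured messages and colour-based feeling alternatives can be illustrated with a small sketch of a quick-reply payload. Everything here is hypothetical: the field names, labels, and colour values are invented for illustration and do not come from any of the studied chatbots.

```python
# Hypothetical structured check-in message: the bot offers fixed reply
# options (so users need not lead the conversation) and pairs each option
# with a colour, as an alternative to naming a feeling in a single word.

def make_mood_prompt():
    """Build a structured mood check-in message with colour-coded
    quick-reply options. All field names and values are illustrative."""
    return {
        "text": "How are you feeling right now?",
        "quick_replies": [
            {"label": "Calm", "payload": "mood:calm", "colour": "#9BD1E5"},
            {"label": "Tense", "payload": "mood:tense", "colour": "#E57373"},
            {"label": "Low", "payload": "mood:low", "colour": "#90A4AE"},
        ],
    }

msg = make_mood_prompt()
print([r["label"] for r in msg["quick_replies"]])  # -> ['Calm', 'Tense', 'Low']
```

A prompt like this keeps the amount of reading small and lets the chatbot carry the conversational burden, in line with the considerations above.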

5.3. Limitations & Future Works

This study has the following limitations. First, it followed qualitative research strategies; therefore, it may be difficult to generalise the results. In this regard, further work engaging experts and stakeholders is needed to verify the research suggestions. Second, the participant scope of this study was limited; thus, additional studies with a wider variety of participants are recommended. Lastly, this study aimed to explore overall visual presentation; specific design guidelines remain for future research.

6. Conclusion

Visual design studies on chatbots have commonly focused on chatbots' identity designs. Therefore, this study explored the UX of mental healthcare chatbots in relation to three types of visual presentation: identity design, interface design, and visual communication tools. Based on qualitative data analysis, this study suggested design considerations for mental healthcare chatbots. The combination of visual elements is significant in chatbot development, and mental healthcare chatbots in particular need to consider how to support cognitive and emotional recognition through visualisation. Finally, despite the interest in digital healthcare systems, domestic mental healthcare chatbot services are not yet popularly accepted; thus, improvements must continue to appeal to potential users.

References

[1] Martínez, C. and Farhan, I. Making the right choices: Using data-driven technology to transform mental healthcare. file:///Users/chungseungjin/Downloads/Making-right-choices-using-data-driven%20technology-19.pdf May 5. 2020.
[2] Holmes, S., Moorhead, A., Bond, R., Zheng, H., Coates, V. and McTear, M. Usability testing of a healthcare chatbot: Can we use conventional methods to assess conversational user interfaces?. In Proceedings of the 31st European Conference on Cognitive Ergonomics. Belfast, United Kingdom. pp. 207-214. 2019.
[3] Terry, N. P. and Gunter, T. D. Regulating mobile mental health apps. Behavioral Sciences and the Law. 36(2). John Wiley & Sons. pp. 136-144. 2018.
[4] Kahn, B. E. Using visual design to improve customer perceptions of online assortments. Journal of Retailing. 93(1). Elsevier. pp. 29-42. 2017.
[5] Abd-alrazaq, A. A., Alajlani, M., Alalwan, A. A., Bewick, B. M., Gardner, P. and Househ, M. An overview of the features of chatbots in mental health: A scoping review. International Journal of Medical Informatics. 132. Elsevier. pp. 1-7. 2018.
[6] Mensio, M., Rizzo, G. and Morisio, M. The rise of emotion-aware conversational agents: Threats in digital emotions. In WWW '18 Companion: Proceedings of the The Web Conference. Lyon, France. pp. 1541-1544. 2018.
[7] Morris, R. R., Kouddous, K., Kshirsagar, R. and Schueller, S. M. Towards an artificially empathic conversational agent for mental health applications: System design and user perceptions. Journal of Medical Internet Research. 20(6). JMIR Publications. p. e10148. 2018.
[8] Go, E. and Sundar, S. Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior. 97. Elsevier. pp. 304-316. 2019.
[9] Radziwill, N. M. and Benton, M. C. Evaluating quality of chatbots and intelligent conversational agents. https://arxiv.org/pdf/1704.04579.pdf May 5. 2020.
[10] Lisetti, C., Amini, R., Yasavur, U. and Rishe, N. I can help you change! An empathic virtual agent delivers behavior change health interventions. ACM Transactions on Management Information Systems (TMIS). 4(4). ACM. pp. 1-28. 2013.
[11] Kowatsch, T., Nißen, M., Rüegger, D., Stieger, M., Flückiger, C., Allemand, M. and von Wangenheim, F. The impact of interpersonal closeness cues in text-based healthcare chatbots on attachment bond and the desire to continue interacting: An experimental design. 26th European Conference on Information Systems. Portsmouth, UK. pp. 1-13. 2018.
[12] Minge, M., Thüring, M., Wagner, I. and Kuhr, C. V. The meCUE questionnaire: A modular tool for measuring user experience. In Advances in Ergonomics Modeling, Usability & Special Populations. NY: Springer. pp. 115-128. 2017.
[13] Gallula, D. and Frank, A. J. User empowering design. In Proceedings of the 2014 European Conference on Cognitive Ergonomics. Vienna, Austria. pp. 1-3. 2014.
[14] Jain, M., Kumar, P., Kota, R. and Patel, S. N. Evaluating and informing the design of chatbots. In Proceedings of the 2018 Designing Interactive Systems Conference. Hong Kong, China. pp. 895-906. 2018.
[15] Law, E. L. C., Van Schaik, P. and Roto, V. Attitudes towards user experience (UX) measurement. International Journal of Human-Computer Studies. 72(6). Elsevier. pp. 526-541. 2014.
[16] Satti, F. A., Hussain, J., Bilal, H. S. M., Khan, W. A., Khattak, A. M., Yeon, J. E. and Lee, S. Holistic User eXperience in Mobile Augmented Reality Using User eXperience Measurement Index. In 2019 Conference on Next Generation Computing Applications. Mauritius. pp. 1-6. 2019.
[17] Urrutia, J. I. G., Brangier, E. and Cessat, L. Is a holistic criteria-based approach possible in user experience?. In International Conference of Design, User Experience, and Usability. Vancouver, Canada. pp. 395-409. 2017.
[18] Winckler, M., Bernhaupt, R. and Bach, C. Identification of UX dimensions for incident reporting systems with mobile applications in urban contexts: A longitudinal study. Cognition, Technology & Work. 18(4). Springer. pp. 673-694. 2016.
[19] Zarour, M. and Alharbi, M. User experience framework that combines aspects, dimensions, and measurement methods. Cogent Engineering. 4(1). Taylor & Francis. pp. 1-25. 2017.
[20] Dillman, D. A., Gertseva, A. and Mahon-Haft, T. Achieving usability in establishment surveys through the application of visual design principles. Journal of Official Statistics. 21(2). Statistics Sweden. pp. 183-214. 2005.
[21] Um, E., Plass, J. L., Hayward, E. O. and Homer, B. D. Emotional design in multimedia learning. Journal of Educational Psychology. 104(2). APA Publishing. pp. 485-498. 2012.
[22] Baek, E., Choo, H. J. and Lee, S. H. M. Using warmth as the visual design of a store: Intimacy, relational needs, and approach intentions. Journal of Business Research. 88. Elsevier. pp. 91-101. 2018.
[23] Ciechanowski, L., Przegalinska, A., Magnuski, M. and Gloor, P. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems. 92. Elsevier. pp. 539-548. 2019.
[24] Klopfenstein, L. C., Delpriori, S., Malatini, S. and Bogliolo, A. The rise of bots: A survey of conversational interfaces, patterns, and paradigms. In Proceedings of the 2017 Conference on Designing Interactive Systems. ACM. pp. 555-565. 2017.
[25] Modrzejewski, M. and Rokita, P. Graphical interface design for chatbots for the needs of artificial intelligence support in web and mobile applications. In International Conference on Computer Vision and Graphics. Warsaw, Poland. pp. 48-56. 2018.
[26] Følstad, A. and Brandtzæg, P. B. Chatbots and the new world of HCI. Interactions. 24(4). ACM. pp. 38-42. 2017.
[27] Marengo, D., Settanni, M. and Giannotta, F. Development and preliminary validation of an image-based instrument to assess depressive symptoms. Psychiatry Research. 279. Elsevier. pp. 180-185. 2019.
[28] Ahn, S. and Lee, H. Use of mobile mental health application for mental health promotion: Based on the Information-Motivation-Behavioral Skills model. Korean Society for Journalism and Communication Studies. 62(6). KSJCS. pp. 167-194. 2018.
