Motivation and emotion/Book/2023/Artificial emotion

Artificial emotion:
To what extent can artificial intelligence experience emotion and how can this be applied?

Overview

 
Figure 1. Some forms of artificial intelligence can accurately recognise human emotions through facial expressions.

Emotion artificial intelligence (AI) does not involve teary-eyed robots who can't seem to move on from their ex; rather, it refers to the field of affective computing, in which various forms of AI attempt to accurately recognise, interpret, and mimic human emotions (Zhao et al., 2022). This capability is achieved by using AI technology to analyse the facial expressions (e.g., see Figure 1), body language, verbal cues, and behavioural responses of other intelligent entities, gathering data and inferring the most likely emotional state (Assuncao et al., 2022). Current forms of AI are not able to 'feel' emotions; the technology does not possess any innate emotions, nor is it likely to reach that level of complexity without a physical living body. Nevertheless, there are numerous beneficial applications of AI's 'experience' of emotion across the globe (Cao et al., 2022). For example, emotion AI is increasingly valuable in fields such as healthcare, business, marketing, employment, and early intervention, all of which are discussed in the following chapter alongside the potential limitations and ethical considerations of implementing emotion AI in today's society.

Focus questions:

  • What are the ethical considerations of emotion in current and future AI systems?
  • What is the relationship between Artificial Intelligence and models of emotion?
  • How can emotion recognition AI be used to detect psychological disorders?


Defining Artificial Intelligence

First established in the 1950s, AI is a branch of computer science in which researchers have attempted to create technology and machines whose embedded software is able to 'think' and make comprehensive decisions as if mimicking the human brain (Haenlein & Kaplan, 2019). AI may also refer to these machines or robots themselves, which have highly advantageous capabilities in computing large quantities of data and accurately deciphering this information in a fraction of the time taken by human neurological processes (Kharb, 2018). There are various forms of AI technology available and emerging in today's society, the most common being 'reactive machines'. This type of AI does not hold any capacity for memory, but rather responds to its environment and surroundings. Other common AI sub-types include limited memory AI (which harnesses memory functions for learning and response improvement), theory of mind AI (which is able to comprehend others' needs), and self-aware AI (which closely replicates human intelligence) (Islam et al., 2022).

Ethical considerations of emotion in artificial intelligence systems

Ethics can be described as the innate and moral set of principles or 'rules' that dictate one's actions and behaviours in society (Ghotbi, 2022). But how is this relevant to the field of artificial intelligence? Considering that most emotion AI attempts to directly mimic human emotion and behaviour, there are various ethical considerations involved, namely the possibility of unreliability and errors of judgement. While an overwhelming percentage of modern AI technology is deemed highly accurate and efficient in its decision-making, no system is fail-proof and errors can occur (LaGrandeur, 2015). This is particularly true for instances of more complex comprehension and ethical reasoning, where AI cannot quite match the level of human intellect required. Due to the ever-increasing presence of AI software in areas such as healthcare, business, and marketing, an error in judgement by AI can have a multitude of negative consequences for real people. For example, many AI systems are being introduced to assist with precision surgery (see Figure 2). If AI were to make an error in the placement of a bodily incision or determine an incorrect level or type of anaesthetic, the patient would be placed at increased risk of harm and potential loss of life. The same reliability concern applies to emotion AI specifically: a misread emotional state in, for instance, a mental health assessment could lead to an inappropriate response or a missed warning sign.

Potential AI bias and corruption

Similarly, there is growing suspicion surrounding the potential for AI to become biased or corrupt, engaging in unethical decision-making. For example, Monteith et al. (2022) provide evidence of AI being used to determine the most suitable applicants for job positions in highly elite organisations. While this may seem an efficient and unbiased method of sorting through applicants for the best possible employee, recent studies show that some AI can in fact hold predetermined biases toward varying racial facial features, depending on the coding used to develop the recognition software in the first place (LaGrandeur, 2015). Furthermore, the initial data used to program and train AI systems may be biased to associate certain aspects of gender with specific occupations, increasing the inaccuracy of final decisions. Because emotion AI relies heavily on facial analysis, these same biases can also skew how accurately it reads emotional expressions across different demographic groups.


There is also speculation that 'emotional AI' is predominantly used as a marketing stunt to make such technologies more attractive to the average human consumer (Kharb, 2018). It is no surprise that people would be more willing to buy into products, companies, and schemes if they feel the technology used is somewhat capable of understanding and resonating with their own emotions. This instigates a further conflict of morals and ethics over whether it is fair to compare and reduce human emotion and sensitivity to a piece of artificial intelligence at all (Stark & Hoey, 2021).

So what do you think? Should consumers take this seeming perk at face value, or is this upgrade in AI capability purely for profit and financial gain?

CASE STUDY

An artificial intelligence system referred to as 'Deep Patient' was used to assist with the diagnosis and intervention of real patients at Mount Sinai Hospital. Unfortunately, it was reportedly later found that up to 28% of Deep Patient's assessments had been incorrect, providing patients with a diagnosis or intervention that did not fit the reality of their symptoms.

  • Imagine you are a patient being treated at Mount Sinai Hospital and Deep Patient is going to assess your mental health.
  • What might be some of the consequences for your health and wellbeing should Deep Patient make an error?

Relationship between artificial intelligence and models of emotion

 
Figure 3. Basic human emotions (those most identifiable to emotion AI software)

Emotion is a highly complex and multifaceted concept; however, we can attempt to understand emotions as varying psychological states which occur as a result of intertwining neurophysiological changes (Borod, 2000). This encompasses our thoughts, feelings, mood, relationships, mental and physical behavioural responses, and experience of pleasure. Depending on the current emotional state of the body and mind, one can use this information to inform and develop 'feelings'; in other words, feelings are the interpretation of emotional states based on initial thoughts and responses to that emotion (Heimerl et al., 2020).

Paul Ekman first posed the idea of six basic emotions and accompanying expressions, a concept which remains widely accepted in the field of emotion research to this day (Bartneck et al., 2017). Ekman's model of core emotions consists of happiness, sadness, anger, fear, surprise, and disgust, each of which most individuals are able to readily display and also recognise in others. These emotions can be recognised by evaluating the combination of physiological state (body language, facial expressions), verbal depictions (tone of voice, language, volume), and resulting behaviours (punching, hugging, crying, etc.) (Kagan, 2008).

Facial expression detection

Although current forms of AI cannot truly feel emotions for themselves, there is significant evidence to suggest AI can be used to accurately recognise, interpret, mimic, and display the emotional expressions of other entities (Assuncao et al., 2022). Using the information provided in emotional models, advanced forms of artificial intelligence can detect and analyse certain behavioural indications that humans may be experiencing specific emotions (Heimerl et al., 2020). Through the integration of highly developed motion detection software, emotion AI is able to analyse and extract key information about the human face, including its shape, size, fine lines and indentations, and motion, and compare this data against pre-existing knowledge of human emotional expression (Cowen et al., 2020). Such detection capabilities hold immense benefits, ranging from increased quality and relatability of media content to police officials being able to immediately identify a threat or suspicious activity, potentially preventing major crime or identifying perpetrators. Despite these advantageous applications, emotion AI is only deemed accurate in identifying basic core emotions (see Figure 3) and remains limited in more complex emotional recognition (Pfeifer, 1988). There is also rising scepticism concerning the logic behind facial expression and emotion detection AI, with claims that not all facial expressions convey exactly the emotions someone may be feeling; for example, a person can still produce a physical smile while experiencing inner frustration, sadness, or anger (Cowen et al., 2020).

Table 1. The four-step process of AI emotion analysis

  1) Detect: acquire, detect, and localise an image of the human face using an inbuilt or external video camera (e.g., CCTV, IP, or USB).
  2) Optimise: resize, clarify, crop, colour-correct, and rotate the image to improve the final emotion detection.
  3) Extract: analyse and extract key facial features using a convolutional neural network (CNN), highlighting the size, shape, texture, and motion of the face.
  4) Classify: perform a final classification of the face, assigning one or more emotion labels (e.g., happy, sad, angry, excited).
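The following Python sketch shows roughly how these four steps might be wired together; it is a minimal illustration under stated assumptions, not a specific published system. The OpenCV Haar-cascade detector stands in for steps 1 and 2, and the file name emotion_cnn.h5, its 48x48 greyscale input size, and the emotion label order are all assumptions made for the example.

```python
# A minimal sketch of the detect -> optimise -> extract -> classify pipeline.
# Assumes a hypothetical pre-trained CNN saved as "emotion_cnn.h5" that takes
# 48x48 greyscale face crops and outputs scores over six basic emotions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]  # assumed label order

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_cnn = load_model("emotion_cnn.h5")  # hypothetical model file

def classify_emotions(frame):
    # 1) Detect: localise faces in the camera frame
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # 2) Optimise: crop, resize, and normalise the face region for the network
        face = cv2.resize(grey[y:y + h, x:x + w], (48, 48))
        face = face.astype("float32") / 255.0
        face = face.reshape(1, 48, 48, 1)  # add batch and channel dimensions
        # 3) Extract + 4) Classify: the CNN extracts features and scores each emotion
        scores = emotion_cnn.predict(face, verbose=0)[0]
        results.append((EMOTIONS[int(np.argmax(scores))], float(scores.max())))
    return results

# Usage: read one frame from a webcam and print the inferred emotion labels
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    print(classify_emotions(frame))
capture.release()
```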

Natural Language Processing

Natural language processing (NLP) refers to the capacity of certain AI systems to hear, understand, and process spoken words as if the technology itself were human (Mathews, 2019). Unsurprisingly, this form of AI can also be used to detect emotion, as so much of our emotion and feeling is conveyed to others through tone, volume, and language. Artificial intelligence uses NLP to identify slight changes in an individual's mood via interpretation of vocal patterns and speech indications (Mathews, 2019). In fact, there are specific combinations of tone and language which can indicate which emotion is most likely being experienced at any given time. For example, common 'aggressive' phrases such as "Whatever" and "You're not listening!", when paired with a higher-pitched voice and increased volume of speech, allow AI to detect the likely emotion of anger (Basu et al., 2017). This information can then be used to form an appropriate response to that emotion, such as replying in a calm and even manner, or presenting media likely to soothe or support the present feelings.
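To make this concrete, below is a deliberately simplified, rule-based sketch of how lexical and vocal cues might be combined into an emotion label. The phrase lists, pitch and volume thresholds, and the Utterance structure are all invented for illustration; real NLP emotion systems use learned models rather than hand-written rules.

```python
# A toy illustration of combining lexical and vocal cues to infer a likely emotion.
# The cue lists and thresholds below are invented for this example, not empirically derived.
from dataclasses import dataclass

ANGER_PHRASES = {"whatever", "you're not listening", "leave me alone"}
SADNESS_PHRASES = {"i'm tired", "nothing matters", "i give up"}

@dataclass
class Utterance:
    text: str          # transcribed speech
    pitch_hz: float    # average fundamental frequency
    volume_db: float   # average loudness

def infer_emotion(u: Utterance) -> str:
    text = u.text.lower()
    # Lexical cues: check for phrases associated with each emotion
    angry_words = any(p in text for p in ANGER_PHRASES)
    sad_words = any(p in text for p in SADNESS_PHRASES)
    # Vocal cues: raised pitch and volume often accompany anger,
    # while a flat, quiet delivery often accompanies sadness
    raised_voice = u.pitch_hz > 220 and u.volume_db > 70
    flat_voice = u.pitch_hz < 160 and u.volume_db < 55
    if angry_words and raised_voice:
        return "anger"
    if sad_words or flat_voice:
        return "sadness"
    return "neutral"

# Example: an aggressive phrase delivered loudly at a high pitch
print(infer_emotion(Utterance("You're not listening!", pitch_hz=260, volume_db=78)))  # -> "anger"
```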

NLP artificial intelligence has a multitude of growing applications, including language translation, data analysis, computer-generated calls, and digital assistants, and as such is becoming an increasingly prominent part of using technology of any kind. To give a more specific example, one study found that NLP software combined with AI was able to accurately match gaming opponents' emotional states and demonstrate 'empathy' for their losses (Marchi et al., 2019). Emerging capabilities of AI are also working towards creating and matching the voices of the real human beings with whom they are conversing, presenting the same tone and demeanour. Although impressive, this technology does pose some security risks in terms of voice-activated locks and protection, as voice recognition AI could be used to illegally infiltrate the personal data of others.

Flash quiz!

Facial expressions are always an accurate representation of our current emotions:

True
False

Detection of psychological disorders using emotion recognition AI

 
Figure 4: Visual representation of emotions experienced by a depressed individual

Mental health is currently one of the biggest topics of conversation around the globe, with an estimated 280 million people suffering from depression and anxiety worldwide (Carvajal-Velez et al., 2021). But how does this relate to artificial intelligence? Present and emerging forms of emotion recognition AI are suggested to be able to accurately detect the presence of psychological illness in human beings through verbal interactions and facial/behaviour analysis software (Lee et al., 2021). Specifically, innovative AI systems have demonstrated the capacity to break down emotional cues displayed by human patients (see Figure 4) in order to detect potential signs of depression and grounds for diagnosis (Nemesure et al., 2021).

Some forms of AI can even detect symptoms of depression by analysing an individual's interactions with social media content, from the images they post to the type of language they use when commenting on others' content (Ahmed et al., 2022). In one such study, this data was cross-referenced with assessments from qualified medical professionals, finding emotion recognition AI to be up to 70% accurate in detecting depressive symptoms in real individuals. Despite these figures seeming highly advantageous for AI in mental illness detection, Monteith et al. (2022) reinforce the notion that any AI diagnosis should be regarded as an interpretation of likely symptoms rather than conclusive medical fact.
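As a rough sketch of how a text-based screening model of this kind might be built, the snippet below fits a TF-IDF plus logistic-regression classifier on a handful of posts with clinician-style labels. The example posts, labels, and the specific model choice are assumptions for illustration only; a real system would require a large, validated dataset and would output a risk indicator, not a diagnosis.

```python
# A minimal sketch of training a text classifier to flag possible depressive
# language in social media posts. The example posts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a great day at the beach with friends",
    "i can't get out of bed, nothing feels worth it anymore",
    "so excited for the concert this weekend",
    "i feel empty and exhausted all the time",
]
labels = [0, 1, 0, 1]  # 1 = clinician-flagged depressive language (illustrative)

# Pipeline: convert posts to TF-IDF features, then fit a logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Screening a new post returns a probability of the flagged class, not a diagnosis
new_post = ["lately i just feel hopeless about everything"]
print(model.predict_proba(new_post)[0][1])
```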

Definition: Depressive Disorder

Depressive disorder (depression) is a highly common psychological disorder characterised by intensely low mood for extended periods of time. Typical symptoms can include low motivation, loss of appetite, drowsiness, increased irritability, and feelings of hopelessness (Tolentino & Schmidt, 2018).

Treatment outcomes and early intervention

So we've established the role of AI in the detection and diagnosis of depression, but what next? Various studies show significant statistical evidence for the implementation of AI technology in the planning phase of treatment for those currently suffering from psychological illness, as well as potential benefits for early intervention (Balcombe & De Leo, 2022). Emotion AI has the capability to accurately recognise indicative symptoms across a wide range of data collection systems, allowing a collective consensus to be drawn on which demographics are currently most at risk of developing depression (Joshi & Kanoongo, 2022). This creates new potential for early intervention in such populations, such as providing further education about mental illness and its consequences as well as offering preventative therapy options.

Artificial intelligence can also improve the symptoms of depression on a more individual level, with the increased ability to create personalised and unique treatment plans based on specific data analysis of the patient at hand (Lee & Park, 2022). Moreover, the concept of 'wearable' AI is on the rise, with new devices being developed that individuals can wear on their person at all times (Abd-alrazaq et al., 2023). This technology provides round-the-clock insight into any developments in the patient's health condition, improving the fundamental diagnosis and monitoring of depression and anxiety symptoms.

No matter the method, implementing AI for early intervention and treatment of mental disorders expands the horizons of providing assistance before more serious symptoms develop. This benefits not only individuals at risk but also society more broadly, as reducing the prevalence of psychological disorders and the need for treatment would ease a massive economic and emotional burden (Lee et al., 2021).

Consequences of false assumption

Emotion AI is not always 100% accurate in its judgements and reasoning, leaving unavoidable room for error and the potential for false detection of depressive symptoms. All forms of emotion recognition artificial intelligence are initially given 'training', or predetermined programming, for how to detect emotion, whether through facial recognition, language, or other data. Not all systems are perfect on the first try, so unfortunately, problematic programming persists in some AI. This can lead to a biased reading of other entities' emotional states and therefore an incorrect interpretation of psychological symptoms (Nemesure et al., 2021). The consequences of false emotion detection can be extremely detrimental for the patient in question. For example, if a patient were diagnosed with anxiety rather than depression (the pair have many overlapping symptoms), they may be provided with a treatment plan or medication which does not accurately address their underlying condition (Joshi & Kanoongo, 2022). This can result in unnecessary prolonged suffering for the patient, financial loss on incorrect therapy, and a damaged reputation for the organisation which implemented the AI to begin with.

Food for thought

  • Would you trust emotional AI to accurately interpret your emotions?
  • Do you believe technology holds a deserving place in the treatment of psychological illness and wellbeing?

Conclusion

So it all comes down to the million-dollar question: can artificial intelligence experience emotion? Professionals agree that while current AI systems are not able to truly feel human emotions for themselves, there is sound evidence for the ability of AI to accurately recognise, interpret, and even mimic the emotions experienced by humans in their vicinity (Assuncao et al., 2022). Emotion recognition AI thrives on emotion as represented through facial expression and variances in language and tone, with current applications being harnessed for the successful early detection of, and intervention in, certain psychological disorders, particularly depression. As with anything in life, there are a number of ethical considerations to be pondered, including potential bias, errors in judgement, and whether it is morally acceptable to consider technological systems to be on the same level of sensitivity and emotion as real human beings (Stark & Hoey, 2021). Having analysed the vast range of research concerning emotional AI, there appears to be an overwhelmingly positive consensus on the potential for AI in generations to come, with hopeful advancements in technological abilities on the near horizon. Although AI can sometimes predict mental illness and assist in the preparation of treatments or interventions, these detections are not yet considered sufficient for a full diagnosis without the secondary opinion of a human medical professional (Lee & Park, 2022). But who knows? With all the advancements AI has already accomplished and its continuously growing technological ability, perhaps a solid diagnosis from a machine alone will be a viable concept in the near future.

See also

References

Abd-alrazaq, A., AlSaad, R., Aziz, S., Ahmed, A., Denecke, K., Househ, M., Farooq, F., & Sheikh, J. (2023). Wearable Artificial Intelligence for Anxiety and Depression: Scoping Review. Journal of Medical Internet Research, 25, e42672. https://doi.org/10.2196/42672

Ahmed, A., Aziz, S., Toro, C. T., Alzubaidi, M., Irshaidat, S., Serhan, H. A., Abd-alrazaq, A. A., & Househ, M. (2022). Machine Learning Models to Detect Anxiety and Depression through Social Media: A Scoping Review. Computer Methods and Programs in Biomedicine Update, 100066. https://doi.org/10.1016/j.cmpbup.2022.100066

Assuncao, G., Patrao, B., Castelo-Branco, M., & Menezes, P. (2022). An Overview of Emotion in Artificial Intelligence. IEEE Transactions on Artificial Intelligence, 1–1. https://doi.org/10.1109/tai.2022.3159614

Balcombe, L., & De Leo, D. (2022). Human-Computer Interaction in Digital Mental Health. Informatics, 9(1), 14. https://doi.org/10.3390/informatics9010014

Bartneck, C., Lyons, M. J., & Saerbeck, M. (2017, June 28). The Relationship Between Emotion Models and Artificial Intelligence. ArXiv.org. https://doi.org/10.48550/arXiv.1706.09554

Basu, S., Chakraborty, J., Bag, A., & Aftabuddin, Md. (2017). A review on emotion recognition using speech. 2017 International Conference on Inventive Communication and Computational Technologies (ICICCT). https://doi.org/10.1109/icicct.2017.7975169

Borod, J. C. (2000). The neuropsychology of emotion. Oxford University Press.

Cao, S., Fu, D., Yang, X., Wermter, S., Liu, X., & Wu, H. (2022, December 1). Can AI detect pain and express pain empathy? A review from emotion recognition and a human-centered AI perspective. ArXiv.org. https://doi.org/10.48550/arXiv.2110.04249

Carvajal-Velez, L., Harris Requejo, J., Ahs, J. W., Idele, P., Adewuya, A., Cappa, C., Guthold, R., Kapungu, C., Kieling, C., Patel, V., Patton, G., Scott, J. G., Servili, C., Wasserman, D., & Kohrt, B. A. (2021). Increasing Data and Understanding of Adolescent Mental Health Worldwide: UNICEF’s Measurement of Mental Health Among Adolescents at the Population Level Initiative. Journal of Adolescent Health, 72(1). https://doi.org/10.1016/j.jadohealth.2021.03.019

Cowen, A. S., Keltner, D., Schroff, F., Jou, B., Adam, H., & Prasad, G. (2020). Sixteen facial expressions occur in similar contexts worldwide. Nature, 589(7841), 251–257. https://doi.org/10.1038/s41586-020-3037-7

Ghotbi, N. (2022). The Ethics of Emotional Artificial Intelligence: A Mixed Method Analysis. Asian Bioethics Review. https://doi.org/10.1007/s41649-022-00237-y

Haenlein, M., & Kaplan, A. (2019). A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review, 61(4), 5–14. https://doi.org/10.1177/0008125619864925

Heimerl, A., Weitz, K., Baur, T., & Andre, E. (2020). Unraveling ML Models of Emotion with NOVA: Multi-Level Explainable AI for Non-Experts. IEEE Transactions on Affective Computing, 1–1. https://doi.org/10.1109/taffc.2020.3043603

Islam, M. R., Ahmed, M. U., Barua, S., & Begum, S. (2022). A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Applied Sciences, 12(3), 1353.

Joshi, M. L., & Kanoongo, N. (2022). Depression detection using emotional artificial intelligence and machine learning: A closer review. Materials Today: Proceedings. https://doi.org/10.1016/j.matpr.2022.01.467

Kagan, J. (2008). What is emotion?: History, measures, and meanings. Yale University Press.

Kharb, L. (2018). A Perspective View on Commercialization of Cognitive Computing. https://doi.org/10.1109/confluence.2018.8442728

LaGrandeur, K. (2015). Emotion, Artificial Intelligence, and Ethics. Topics in Intelligent Engineering and Informatics, 97–109. https://doi.org/10.1007/978-3-319-09668-1_7

Lee, E. E., Torous, J., De Choudhury, M., Depp, C. A., Graham, S. A., Kim, H.-C., Paulus, M. P., Krystal, J. H., & Jeste, D. V. (2021). Artificial Intelligence for Mental Healthcare: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 6(9). https://doi.org/10.1016/j.bpsc.2021.02.001

Lee, Y.-S., & Park, W.-H. (2022). Diagnosis of Depressive Disorder Model on Facial Expression Based on Fast R-CNN. Diagnostics, 12(2), 317. https://doi.org/10.3390/diagnostics12020317

Marchi, E., Baltrusaitis, T., Adams, A., Mahmoud, M., Golan, O., Fridenson-Hayo, S., Tal, S., Newman, S., Meir-Goren, N., Camurri, A., Piana, S., Schuller, B., Bolte, S., Sezgin, M., Alyuz, N., Rynkiewicz, A., Baranger, A., Baird, A., Baron-Cohen, S., & Lassalle, A. (2019). The ASC-Inclusion Perceptual Serious Gaming Platform for Autistic Children. IEEE Transactions on Games, 11(4), 328–339. https://doi.org/10.1109/tg.2018.2864640

Mathews, S. M. (2019). Explainable Artificial Intelligence Applications in NLP, Biomedical, and Malware Classification: A Literature Review. Advances in Intelligent Systems and Computing, 1269–1292. https://doi.org/10.1007/978-3-030-22868-2_90

Monteith, S., Glenn, T., Geddes, J., Whybrow, P. C., & Bauer, M. (2022). Commercial Use of Emotion Artificial Intelligence (AI): Implications for Psychiatry. Current Psychiatry Reports, 24(3), 203–211. https://doi.org/10.1007/s11920-022-01330-7

Nemesure, M. D., Heinz, M. V., Huang, R., & Jacobson, N. C. (2021). Predictive modeling of depression and anxiety using electronic health records and a novel machine learning approach with artificial intelligence. Scientific Reports, 11(1), 1980. https://doi.org/10.1038/s41598-021-81368-4

Pfeifer, R. (1988). Artificial Intelligence Models of Emotion. Cognitive Perspectives on Emotion and Motivation, 287–320. https://doi.org/10.1007/978-94-009-2792-6_12

Stark, L., & Hoey, J. (2021). The Ethics of Emotion in Artificial Intelligence Systems. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3442188.3445939

Tolentino, J. C., & Schmidt, S. L. (2018). DSM-5 criteria and depression severity: Implications for clinical practice. Frontiers in Psychiatry, 9(450). https://doi.org/10.3389/fpsyt.2018.00450

Zhao, G., Li, Y., & Xu, Q. (2022). From Emotion AI to Cognitive AI. International Journal of Network Dynamics and Intelligence, 65–72. https://doi.org/10.53941/ijndi0101006

External links