Empathic AI can’t get under the skin
Personalized LLMs built with the capacity for emulating empathy are right around the corner. The effects on individual users need careful consideration.

It doesn’t take much for humans to recognize human-like traits and abilities in chatbots. The German American computer scientist Joseph Weizenbaum discovered this effect in the 1960s, when he built a program called ELIZA that is widely regarded as the first chatbot. The user typed in statements and the program generated responses that could emulate some forms of natural language conversation between people and computers. ELIZA was primitive compared with today’s large language models (LLMs), operating primarily by identifying ‘keywords’ and performing ‘text manipulation’. Many users were nevertheless captivated, perceiving human characteristics such as understanding or empathy when interacting with the chatbot. Weizenbaum was surprised and dismayed by the power of the illusion and became concerned about overreliance on artificial intelligence (AI) and its undermining of human values1,2.
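To make concrete the kind of ‘keyword’ identification and ‘text manipulation’ that ELIZA relied on, the following Python sketch shows a minimal rule-based responder. The patterns, reply templates and the respond function are invented for illustration and are far simpler than Weizenbaum’s original program; they merely indicate how shallow pattern matching can produce superficially attentive replies.

```python
import re

# A minimal, illustrative sketch of ELIZA-style keyword matching and text
# manipulation. The patterns and reply templates below are invented for
# illustration; they are far simpler than Weizenbaum's original program.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please, go on."  # fallback when no keyword is found


def respond(user_input: str) -> str:
    """Scan the input for a keyword pattern and reflect it back in a template."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # 'Text manipulation': reuse the user's own words in the reply.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY


if __name__ == "__main__":
    print(respond("I am worried about the future"))  # Why do you say you are worried about the future?
    print(respond("I feel lonely."))                 # What makes you feel lonely?
```

Despite the obvious shallowness of such rules, users readily read understanding into the replies, which is the effect that dismayed Weizenbaum.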
Several decades later, LLMs are capable of fluent, human-like conversations, producing output in any desired style. With the wide availability and popularity of LLMs, and the emerging capability to personalize chatbots with user-specific data, the tendency of humans to project human traits onto computer programs needs further examination, given the potential widespread effects on individual users and society.

In a Correspondence in this issue, Garriy Shteynberg et al. consider a human ability and trait that is increasingly emulated by LLMs: empathy, the ability to understand and share the feelings of others. LLMs, in the form of romantic chatbots, personal assistants, mental health apps or therapists3, can give users the illusion of understanding, empathy, caring and love. However, language models cannot experience any of these psychological states, leading the authors to ask an interesting question: does it matter if empathic AI has no empathy?
When it comes to cognitive abilities such as reasoning or understanding, there appears to be a grey area in which LLMs can plausibly, according to some experts, demonstrate such abilities4. For example, one might believe that an LLM that can solve a complex problem via prompts is performing a form of reasoning, even if it is not exactly how humans would solve it. Likewise, an LLM might be said to ‘understand’ a topic if it can converse cogently about it in a detailed way. But empathy is different, or so it seems. LLMs may give linguistic responses that appear empathic (such as ‘I feel sorry’), but they are not equipped with complex, machine versions of the biology and psychology of empathy, which, in humans, involves the integration of internal regulatory mechanisms such as homeostatic processes and the control of neural pathways5. Unlike many instances of cognitive abilities, empathy often involves autonomic signals: put facetiously, no LLM has shown changes in heart rate or the galvanic skin response when making empathic claims. Yet, although LLMs cannot feel empathy, they can use the language of empathy, and may induce real feelings and emotions within their human users.
Shteynberg et al. call for research to probe the ethical questions and consequences of empathic AI. For example, an argument in defence of empathic AI applications is that users are informed or warned that the AI chatbot they interact with only simulates empathy, friendship or love. However, the efficacy of empathic LLMs depends on how much users believe that the chatbot they interact with truly feels empathy. A research question posed by the authors is whether long-term users of empathic LLMs can (or should) sustain the belief that AI empathy is simulated rather than real. Another question they explore is how the experience of LLM disillusionment compares with losing a human social bond. Examining such ethical questions about users’ interactions with empathic chatbots will be important, given the increasingly wide adoption of AI platforms with which users can create personalized AI-based companions.
In a recent Perspective, Hannah Kirk et al.6 highlight personalization of LLMs as a frontier development in AI. As they discuss, the potential benefits of LLMs tailored to individual preferences are extensive: information retrieval, tutorship and mentoring, and mental health support might become more efficient and targeted. However, the risks are real. In addition to issues with privacy infringement, there is the concern that individuals using personalized LLMs become caught in an echo chamber. A further risk is that users may foster a perceived emotional connection with the LLM and, as a result, form unhealthy attachments or reveal sensitive information.

There is no doubt that the technology will develop quickly, as tech companies are racing to integrate LLM-based products into everyday applications. In a potential future scenario, personalized LLMs could rapidly become the norm. However, without proper ethical consideration of the effects on users and responsible deployment, the dangers raised by Shteynberg et al. and Kirk et al.6 may inadvertently become rooted in everyday life. Weizenbaum’s apprehensions about the human tendency to attribute human-like qualities to machines have become more urgent with the rise of sophisticated LLM-based chatbots that may seem attuned to our emotional needs. Kirk et al.6 ask a critical question: what are the appropriate bounds of personalization, and who decides?
References

1. Tarnoff, B. _The Guardian_ https://go.nature.com/4bwUIUK (2023).
2. Weizenbaum, J. _Computer Power and Human Reason: From Judgment to Calculation_ (W. H. Freeman & Co., 1976).
3. Robb, A. _The Guardian_ https://go.nature.com/3UxUfL9 (2023).
4. Mitchell, M. & Krakauer, D. C. _Proc. Natl Acad. Sci. USA_ 120, e2215907120 (2023).
5. Preston, S. D. et al. _Soc. Neurosci._ 2, 254–275 (2007).
6. Kirk, H. R., Vidgen, B., Röttger, P. & Hale, S. A. _Nat. Mach. Intell._ 6, 383–392 (2024).