As artificial intelligence (AI) becomes increasingly integrated into products and services, there is an ongoing debate about how AI will impact societies and their citizens. Even among experts, attitudes and forecasts vary widely: Nobel laureate Geoffrey Hinton has warned about the potential risks of AI, while Yann LeCun, Meta’s chief AI scientist, has expressed a more optimistic view regarding the capabilities of large language models (LLMs). Against this background, it is crucial to recognise that public perception and attitudes towards AI will play a role in shaping the future of AI development and its applications in society.
AI is Changing Society
Despite the challenges of accurately predicting AI’s trajectory, it is clear that AI is changing society. From the recommendation engines behind social media feeds to the voice assistants on smartphones and advancements in autonomous vehicles, AI is here to stay. Therefore, it is important for governments around the world to understand whether their citizens are prepared for this rapid evolution and possess the necessary skills to leverage AI to enhance their competitiveness. After all, AI does not destroy jobs; it is the way humans choose to use it that does.
Assessing Attitudes About AI
To better understand what people think and know about AI, researchers have developed various psychometric tools. In our study, Shedding light on relationships between various attitudes towards AI measures and trust in generative AIs ChatGPT/Ernie Bot in a Chinese sample, published in the Journal of Psychology and AI, we applied some of these tools to explore global attitudes towards AI, focusing on whether people have positive or negative perceptions of AI. Critics may argue that this approach has limitations, as the concept of AI is broad and not entirely clear. For example, some people might associate AI with science fiction, like Hollywood’s cybernetic Terminator, while others might think of mundane practical applications such as voice assistants. Nevertheless, our research found that global attitudes towards AI correlate with greater trust in, and use of, specific LLM products such as ChatGPT and Ernie Bot. This suggests that assessing global attitudes towards AI has diagnostic value: it can help forecast AI adoption across different areas of life.
Our findings also show that it is meaningful to distinguish between positive and negative attitudes towards AI. For example, participants with positive attitudes did not necessarily report fewer negative attitudes. In fact, some participants expressed enthusiasm about AI’s potential while simultaneously feeling anxious about it. This ambivalence is understandable, given the unpredictable nature of the AI revolution.
It is important for government officials to understand people’s attitudes and objective knowledge about AI. Such insights can inform the design of educational programmes and regulations to better prepare citizens for an AI-driven future, ultimately fostering AI wellbeing.
What is AI Wellbeing?
AI wellbeing can be understood in various ways, but its overarching goal is to foster wellbeing in a society where everything, from refrigerators to cars, will be AI-powered. To achieve this, engineers and policymakers must design and implement AI systems in a humane and ethically sound manner. In another paper, On the relevance of Maslow’s need theory in the age of artificial intelligence, we argue that Maslow’s hierarchy of needs provides a useful framework for this purpose. Guided by Maslow’s theory, AI systems should not only address basic human needs but also assist individuals in reaching their full potential through human-AI interaction, thereby facilitating ‘self-actualisation’ as described in the theory. In addition, promoting AI wellbeing involves creating positive emotional experiences during interactions with AI systems and products. To measure AI wellbeing, we have developed the ‘AI-interaction positivity scale’, which evaluates how happy, excited, delighted and satisfied individuals feel when they interact with an AI system or product. This scale was recently introduced in our paper, Introduction of the AI-Interaction Positivity Scale and its relations to satisfaction with life and trust in ChatGPT, published in the journal Computers in Human Behavior.
The findings from this study show that higher scores on AI wellbeing correlate with greater trust in ChatGPT, a widely used LLM-based chatbot with over one billion users. Although our data is cross-sectional, it suggests that higher AI wellbeing may enhance trust in AI products. This can be beneficial when the AI system is benevolent. However, the emotional benefits of interacting with AI also come with potential risks. Overreliance on AI systems may lead to cognitive outsourcing in situations where independent thinking is essential. These risks are explored in another paper, The darker side of positive AI attitudes: Investigating associations with (problematic) social media use. Given how heavily social media platforms rely on AI, this study examined whether positive attitudes towards AI go hand in hand with problematic social media use, and found that individuals with more positive attitudes towards AI are indeed more likely to report signs of social media addiction, especially among male participants. Viewed this way, the study suggests that overreliance on AI could foster addictive patterns of use. While I recommend avoiding the term ‘addiction’ in this context, both to avoid overpathologising everyday behaviour and because ‘social media addiction’ is not an officially recognised diagnosis, we must remain vigilant about these potential risks and maintain our ability to think independently while enjoying the convenience AI offers.
How are AI Wellbeing and Attitudes Towards AI Related?
Readers may wonder how AI wellbeing and attitudes towards AI are linked. These constructs are indeed related: higher AI wellbeing correlates with attitudes towards AI, with effect sizes ranging from moderate to strong. This relationship is not surprising, as measures of attitudes towards AI capture a more cognitive approach to understanding human reactions to AI, while the ‘AI-interaction positivity scale’ primarily captures emotional responses. Despite these differences, understanding people’s attitudes towards AI remains valuable for developing AI systems and products that increase AI wellbeing.
The Opportunities and Challenges that AI Offers
AI offers numerous opportunities, but both citizens and governments need to adopt a thoughtful and responsible approach to its diverse applications. To understand public perceptions of AI and its potential to foster wellbeing, many factors need to be considered. Another paper from our research group, Considering the IMPACT framework to understand the AI-well-being complex from an interdisciplinary perspective, published in Telematics and Informatics Reports, presents a new theoretical framework called ‘IMPACT’. This framework outlines key considerations for successfully navigating the AI revolution. These include the ‘Interplay of Modality’ of an AI system, human factors (‘Person-variable’), the ‘Area’ where AI is applied, cultural and regulatory aspects (‘Country/Culture’), and the level of ‘Transparency’ of AI systems. I believe that the interdisciplinary work being done at the Institute of Collaborative Innovation at the University of Macau holds promise for addressing the complex challenges brought about by the rise of AI.
About the author(s)
Christian Montag is Distinguished Professor for Cognitive and Brain Sciences, and associate director of the Institute of Collaborative Innovation at the University of Macau. He works at the intersection of psychology, neuroscience and computer science.
Text: Christian Montag
Photo: Christian Montag, Editorial board
Source: UMagazine Issue 32
Academic Research is a contribution column. The views expressed are solely those of the author(s).