Trust, attitudes and use of artificial intelligence: A global study 2025
Given the transformative effects of Artificial Intelligence (AI) technologies on work, education and society, it is important to understand the public's trust and attitudes towards AI, their experience of the impacts of AI use, and their expectations for the management and governance of these technologies. This understanding is critical to aligning public policy and industry practice with evolving societal needs and expectations, and to informing a human-centred approach to AI deployment.
This study offers a comprehensive examination of public trust, attitudes and use of AI based on survey data collected from 48,340 people from 47 countries using nationally representative sampling. It also examines how employees and students use AI at work and in education and their experiences of the impacts of AI in these specific settings.
Data was collected in each country between November 2024 and mid-January 2025 using an online survey completed by a minimum of 1000 respondents per country. Samples were nationally representative of the adult population on gender, age and regional distribution matched against official national statistics. The countries surveyed covered all global geographic regions: 1) North America (Canada, United States of America), 2) Latin America and Caribbean (Argentina, Brazil, Chile, Colombia, Costa Rica, Mexico), 3) Northern and Western Europe (Austria, Belgium, Denmark, Estonia, Finland, France, Germany, Ireland, Latvia, Lithuania, Netherlands, Norway, Sweden, Switzerland, United Kingdom), 4) Southern Europe (Greece, Italy, Portugal, Slovenia, Spain), 5) Eastern Europe (Czech Republic, Hungary, Poland, Romania, Slovakia), 6) Africa (Egypt, Nigeria, South Africa), 7) Western Asia (Israel, Saudi Arabia, Türkiye, United Arab Emirates), 8) Eastern, Southern and Central Asia (China, Japan, India, Singapore, Republic of Korea), and 9) Oceania (Australia, New Zealand).
Surveys were conducted in the native language(s) of each country. To ensure question equivalence across countries, surveys were professionally translated from English into each respective language and then back-translated by separate translators. The research was approved through the ethical review process of The University of Queensland and adhered to its guidelines and the National Statement on Ethical Conduct in Human Research.
The report provides timely, global research insights on a range of questions, including: the extent to which people trust, use and understand AI systems; how they perceive and experience the benefits, risks and impacts of AI use in society, at work and in education; their expectations for the management, governance and regulation of AI by organizations and governments; and perceived organizational support for the responsible use of AI. It draws out commonalities and differences in these key dimensions across countries and sub-groups of the population.
Key insights include:
- Most people are wary of trusting AI systems, particularly in advanced economies, and report both optimism and worry. However, most people accept the use of AI.
- People are experiencing a range of beneficial outcomes, as well as negative impacts, from the use of AI in society, with more people believing the benefits of AI outweigh the risks, particularly in emerging economies.
- The public expect AI regulation at both the national and international level, as well as co-regulation with industry. Yet the current regulatory landscape is falling short of public expectations.
- AI literacy and training are lagging behind AI adoption, yet both are important for responsible and effective use.
- There are notable differences between advanced and emerging economies: people in emerging economies report greater trust, acceptance and adoption of AI, higher levels of AI literacy and training, and more realized benefits from AI.
- The majority of employees report intentionally using AI at work on a regular basis.
- The use of AI at work is delivering a range of performance benefits, coupled with mixed impacts on workload, stress, human collaboration and compliance.
- Many employees report having used AI at work in complacent and inappropriate ways, and concealing their use of AI in their work.
- Most students report regularly using AI for their studies and deriving a range of benefits from its use, but also report mixed impacts on critical thinking, communication and collaboration with peers and instructors, and equity of assessment.
- Most students report having used AI in complacent and inappropriate ways in their studies and over-relying on AI.
The survey is the fourth in a program of research, providing a unique opportunity to examine changes in public attitudes over time across the 17 countries surveyed in both late 2022 (just prior to the release of ChatGPT) and late 2024. This comparison revealed a trend of declining trust and increasing concern and worry about AI as adoption has grown.
The research identifies evidence-based pathways for strengthening the responsible use of AI systems and the trusted adoption of AI in work and society.
The findings have important implications for public policy and industry practice. They can inform responsible AI strategy, practice and policy within business, government and educational institutions, as well as AI guidelines, policy and regulation at the national, international and pan-governmental level. The research insights can help policymakers, organizational leaders, and those involved in developing, deploying and governing AI systems to understand and align with evolving public expectations, and to deepen their understanding of the opportunities and challenges of integrating AI into work, education and society.
To cite this research:
Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. https://doi.org/10.26188/28822919
This research was conducted in collaboration with industry partner KPMG.