When human savvy meets machine intelligence
- Caroline Sapriel

AI is revolutionising the way we work and reshaping workplace dynamics, but we should not underestimate the value of the human dimension, says Caroline Sapriel.
This article first appeared in the Autumn–Winter 2025 issue of CODI, the magazine for the European Association of Communication Directors.

AI is driving a workforce evolution, one with far-reaching economic, legal, political, and regulatory implications.
Generative AI tools like ChatGPT improve our productivity, so we spend less time on routine and repetitive tasks and more time on higher-value work. The danger lies in relying on ChatGPT for tasks that require confidentiality, critical thinking, or high-stakes ethical decisions.
A groundbreaking study from MIT’s Media Lab suggests there’s a cognitive cost to using AI like ChatGPT. Researchers examined the neurophysiological effects of using AI for a complex task, and found that passive use of an AI assistant doesn’t just outsource work; it significantly reduces the critical brain activity associated with creativity, deep thinking, and memory formation. While the study has limitations, it highlights the need to consider the impact of using AI and its implications for our industry.
AI doesn’t think. It predicts. Chatbots like ChatGPT are trained on large collections of text data from books, articles, and web pages across different topics. They respond to user queries based on data used to train them, providing consistent (although not always reliable) responses to questions without the influence of emotions, fatigue, or personal biases.
Consider what happened when Japanese journalist and filmmaker Shiori Itō accused a prominent television journalist of rape. Itō’s bold move to speak out sparked massive backlash. In Japan, the emphasis is on maintaining group harmony and avoiding shame, rather than publicly pointing fingers. She received threats and hate mail, but persisted. Eventually, she won damages in court, influenced crucial amendments to the country’s century-old rape laws, and became a central figure in Japan’s MeToo movement. Would ChatGPT have provided the cultural competence needed to respond to Itō’s communication needs?
Chatbots are a security concern. While jurisdictions including China, the EU, the UK, and the US are still developing and implementing their regulatory frameworks for AI, client confidentiality and data protection should remain priorities. Entering third-party confidential information or personal data into a generative AI tool violates data protection law and breaches an obligation of confidentiality. In such cases, organisations do not have oversight of, or control over, the use of data and cannot guarantee its security. Crucially, you do not want to train AI on data that could be used against a client. Nor do you want to make it easier for AI to find and understand discussions with clients.
Consider the need for tight confidentiality of communication during a crisis, and the importance of legal and governance oversight. How can you preserve this while using chatbots to draft critical and sensitive messages?
Another issue is the proliferation of AI-generated deepfakes, which may compromise the trustworthiness and quality of data, and heighten the risks associated with inadvertently using such data. OpenAI’s recent launch of Sora 2, which allows virtually anyone to create their own premium-grade video content, shows that this technology is developing fast, as is its potential for misuse.
AI is a powerful tool, but an increased dependency on it is eroding the insights that come from our deep knowledge and experience. The critical skill development and curiosity of the next generation, who are more reliant on AI, are at risk. As AI capabilities continue to evolve, we should remain vigilant about its impact on our ability to learn, and use it wisely so we can focus on what makes us uniquely human: creativity, strategic oversight, ethical decision-making, and genuine human connection.
Caroline Sapriel is the founder and Managing Partner of CS&A International, a specialist risk, crisis, and business continuity management firm launched in 1991, focusing on helping multinational organisations build crisis resilience. With over 30 years of experience, she is recognised as an industry leader and acknowledged for her ability to provide customised, results-driven counsel and training at the highest level. A Fellow of the IABC, she has authored books and articles and lectures at universities.