Are companies prepared to communicate with employees during an AI-driven crisis?
Employees are often the last stakeholder group to get any attention. That's problematic on a number of levels. Employees are not only the people who keep the organization running; they are also the most credible source of information about the company for external audiences, including customers, partners, and even the media. If they are left in the dark, misinformation and speculation can spread quickly, damaging trust and morale. Worse, a lack of clear communication can lead to confusion, disengagement, and even increased turnover at a time when stability is critical.
Research shows that effective internal crisis communication fosters employee engagement, unity, and resilience, which in turn bolsters organizational performance during turbulent times.
Artificial Intelligence (AI) is making this situation even more untenable. The rise of AI-generated misinformation -- from deepfake videos to algorithmically generated fake news -- poses new challenges that require companies to evolve their strategies and put employees at the center of crisis communication.
New Hurdles
Thanks to the rise of generative AI, internal communicators now have to vault over several new hurdles:
Unmatched speed and volume -- AI tools can produce massive volumes of text, images, and/or videos that appear credible, and social media ensures they spread virally in seconds. False narratives can spread like wildfire, obliterating your organization's corrections. During a crisis, this means employees might encounter rumors or fake news externally (or via personal chats) before official channels can respond. A deepfake video or a fabricated "internal memo" could circulate within minutes, catching employees and leaders off-guard. The sheer velocity of AI-powered misinformation forces internal communicators to react faster than ever to prevent falsehoods from taking hold.
High Realism and Credibility -- Gen AI models can create fake content that is highly realistic. For example, an audio clip mimicking a CEO's voice giving false instructions, or an image of an executive appearing to take a bribe. Because of the realism baked into these assets, employees could be more prone to believe they're real. In an internal context, a well-crafted deepfake could cause employees to question a leader's integrity or the company's stance, undermining trust based on a hoax. It could even lead them to take an action believing it is required when, in fact, it will damage the company and its reputation.
Erosion of Trust and Uncertainty -- Even when misinformation is identified as false, it can leave lasting damage. Repeated exposure to fake news can make people cynical or unsure about what to believe. Studies have shown that exposure to misinformation is linked to lower trust in reliable sources. In a company, if employees see AI-generated rumors or doctored content about the crisis, they might start doubting official communications as well. The "truth decay" effect means communicators have to work harder to prove authenticity. Furthermore, bad actors may target employees directly with false information (for instance, phishing emails that impersonate leaders), exploiting any breakdown in trust. As AI gets better at impersonation, employees may struggle to distinguish official guidance from fake communications, risking operational chaos.
Scale and Noise -- AI can create not just one piece of misinformation, but an entire swarm of narratives. During a crisis, communicators may find themselves fighting multiple false stories simultaneously, each appealing to different fears. This "infodemic" can drown out the organization’s message. Filtering signal from noise becomes difficult when so much content is being generated. AI-driven fake accounts (bots) might amplify rumors on internal forums or social media, making a fringe false claim seem widely believed. The volume of AI misinformation can overwhelm traditional verification processes, demanding more sophisticated monitoring (discussed later).
Psychological Impact -- AI misinformation often plays on emotions: fear, anger, or doubt. If employees encounter sensational but false claims (e.g. a conspiracy theory that the crisis was caused by internal negligence or that management is hiding something), it can trigger anxiety or outrage. For instance, an AI-generated propaganda post might claim a company’s safety measures are a cover-up for nefarious activities; an already upset employee could more readily believe it, leading to paranoia spreading internally. In crises, people are emotionally vulnerable, and misinformation exploits those vulnerabilities, potentially causing panic or division among staff when unity is most needed.
Internal Comms in a Modern Crisis
To leap high enough over these hurdles, communication leaders need to develop new strategies and tools that ensure employees get timely, accurate information -- and trust what they hear.
Strengthen trusted internal channels -- Establish and maintain your official channels -- an intranet, official email updates, private chat groups, town halls, SMS text messages, or any of the other channels typically employed for internal communications -- and communicate regularly so employees understand that these are the channels they can trust. It's also vital that employees are able to reach these channels, regardless of where they are or what device they can use in the moment, and that the channels are two-way, enabling employees to ask questions or seek clarification. The goal is to create an internal single source of truth. These channels must also be hardened against misuse; if bad actors get access and start spreading disinformation behind the firewall, all bets are off.
Improve employee media literacy -- Work with your IT and training teams to get employees up to speed on recognizing fake or manipulated content. Teach them to be skeptical of unverified sources. Give them tools to vet information before they share it. Consider tactics like distributing tip sheets that highlight red flags of false information and the importance of verifying with official sources. (If your company is not providing regular reminders and updates on cybersecurity, why not?)
Implement rapid-response AI monitoring -- Since AI-generated misinformation moves at lightning speed, your response needs to match, if not exceed, that pace. That could include using AI yourself as part of your rapid-response effort, establishing processes and technologies to scan for emerging misinformation. Develop a protocol that establishes who monitors these alerts, how they validate flagged items, and how quickly they craft and distribute a counter-message.
Use digital watermarks and verification tools -- As AI-generated disinformation surges, organizations are increasingly exploring digital verification technologies. One such approach is using digital watermarks or content credentials on official media. A digital watermark is like an invisible signature embedded in digital content (be it a document, image, or video) that can verify its source and integrity. By watermarking official internal documents or leadership video messages, companies can later prove those were legitimately produced and unaltered. Other verification tools include digital signatures and email authentication. Pay attention to emerging standards, like the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA). These industry-wide methods attach metadata to content, which means a photo or video the company publishes carries a digital certificate of origin. Knowing how to find these signs of authenticity is another topic for your cybersecurity training and communication.
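To make the signing-and-verification idea concrete, here is a simplified Python sketch using an HMAC from the standard library. This is a stand-in for the real digital-signature and C2PA-style provenance mechanisms described above, not an implementation of them, and the hard-coded key is an assumption; in practice keys live in a managed secrets store.

```python
# Illustrative sketch: signing official internal content so tools (or people)
# can later confirm it is authentic and unaltered. HMAC is a simplified
# stand-in for public-key signatures or C2PA content credentials.
import hmac
import hashlib

SIGNING_KEY = b"replace-with-a-managed-secret"  # assumption: kept in a secrets manager

def sign_message(body: bytes) -> str:
    """Produce a hex signature the comms team attaches to official content."""
    return hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()

def verify_message(body: bytes, signature: str) -> bool:
    """Check that content claiming to be official carries a valid signature."""
    expected = sign_message(body)
    return hmac.compare_digest(expected, signature)

memo = b"All-hands meeting moved to 3 PM Friday. -- Communications"
sig = sign_message(memo)
print(verify_message(memo, sig))                  # True: authentic and intact
print(verify_message(memo + b" (edited)", sig))   # False: content was altered
```

Even this toy version shows the core property employees rely on: any change to the content, however small, invalidates the signature, so a doctored "internal memo" cannot carry a valid credential.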
Tool categories for combating AI-driven misinformation include...
Implement AI-Resilient Internal Communication Strategies
Develop an ongoing approach to embedding these processes and strategies into your organization's overall crisis communication plan. Steps in the process include...
Identifying vulnerabilities in your current strategies -- Conduct a risk assessment of your communication strategies to determine where you might have weaknesses addressing misinformation or AI manipulation. Map out all the ways information flows to employees – official channels, unofficial channels (like WhatsApp groups employees might have), social media, etc. For each, consider "What if this channel were used to spread false information? Are there safeguards?" Identify who the key trusted voices are in your organization (CEO, managers, HR) and how they could be impersonated. For example, is there a risk of someone spoofing the CEO's email or Slack account? Could an employee inadvertently share a fake news article on the intranet?
Also review past incidents: Have there been instances of rumors or misinformation internally during previous crises (even pre-AI)? How did they spread and why? Those are vulnerabilities to fix. Identifying vulnerabilities might reveal needs such as: better access control on channels, clearer policies on who can send mass messages, lack of a rumor-response team, or insufficient verification steps for critical communications.
Establishing AI monitoring and misinformation detection protocols -- Once vulnerabilities are known, set up formal protocols to monitor for and detect misinformation quickly. Decide on the tools or services for social listening and internal monitoring (as discussed above). For example, an organization might establish a small "information integrity team" within Communications or in partnership with IT/security, tasked with watching for signs of fake news related to the company. This team should have clear instructions: what keywords or platforms to monitor, how to flag suspected false content, and how to escalate for verification and response. Document these protocols in the crisis communication plan. Your plan should articulate unequivocal protocols for promptly recognizing and addressing false narratives before they find footing. That means knowing who will investigate a suspicious video, how to technically analyze it (maybe engaging an AI forensics vendor if needed), and who has authority to broadcast a correction.
Training leadership and employees on misinformation resilience -- Training is essential at all levels. In addition to the companywide cybersecurity training we have already discussed, leadership (executives and managers) should be trained in effective crisis communication techniques, including how to convey empathy and credibility, as well as how to handle misinformation-related scenarios. Leaders need to understand the importance of being visible and transparent during crises (to pre-empt rumors). They should also practice the protocol: for instance, the CEO might run through a simulation where a deepfake of them appears, so they know how they would publicly respond and internally reassure employees ("There's a fake video of me -- here's how you can tell it's fake -- and I'm here live to answer your concerns."). This kind of leadership readiness is crucial; if leaders panic or go silent when misinformation strikes, the harm magnifies.
Developing proactive crisis communication strategies -- Finally, internal communicators should shift from a reactive stance to a proactive and preventive approach wherever possible. In the AI era, that means anticipating what misinformation might arise and addressing concerns before they spiral. Part of this is pre-crisis communication: If you know a potential issue is brewing, start preparing employees with background information and education so that there is less fertile ground for rumors. Proactive strategy also involves pre-drafting holding statements and Q&As for various crisis types, including those involving AI fakery. For example, have a template ready for "We are aware of [content] circulating that appears to involve our company; we are investigating its authenticity and will update shortly." Speed matters, and having these baselines ready shaves off minutes or hours.
During a crisis, a proactive internal comms strategy emphasizes transparency and frequent updates. Even if not all information is available, it’s better to say “We are aware of the situation and will provide more details as we confirm them” than to go silent, which allows rumors to fill the gap. As a best practice, prioritize transparent communication to strengthen resilience against false narratives. If there’s bad news, deliver it straight to employees rather than letting them hear a distorted version elsewhere. A proactive stance might also mean debunking common myths related to the crisis in advance.
Be sure to incorporate values and ethics into crisis messaging consistently. Organizations that regularly demonstrate their integrity and social responsibility build up a reservoir of goodwill and credibility. If a lie emerges that contradicts the company’s known values, employees who have seen leaders walk the talk are less likely to believe it.
What now?
The future of crisis communication is being rewritten in real time, and internal communicators are on the front lines. AI-driven misinformation presents unprecedented challenges, but it also underscores a timeless truth: employees must be at the heart of any crisis response. Now is the time to fortify your internal channels, empower employees with media literacy, and build a culture of trust that can withstand even the most sophisticated falsehoods. The stakes have never been higher, but neither has the opportunity. By acting now, you can transform internal communication into a powerful shield against misinformation and a force for clarity, confidence, and resilience. The path forward isn’t just about responding faster—it’s about leading smarter.