Business Acumen

Ethical AI in Communication Practice Is a Trust Strategy

Artificial intelligence (AI) didn’t create the trust crisis. But it is quietly accelerating it, especially in a moment when audiences are already retreating into smaller, familiar circles of belief and belonging.

The 2026 Edelman Trust Barometer frames this shift as a slide into insularity: globally, 70% of people report being unwilling or hesitant to trust someone with different values, facts, approaches, or cultural backgrounds. In that environment, the margin for error in organizational communication gets slimmer. A single inaccurate claim, an unacknowledged bias, or a “helpful” AI-generated draft that feels inauthentic can become the spark that confirms what skeptical stakeholders already suspect.

This is why AI ethics in communication practice shouldn’t be treated as a compliance add-on or a tech policy. It’s a trust strategy, and professional communicators are uniquely positioned (and obligated!) to lead it.

Start With Principles, Not Platforms

The IABC Guiding Principles for the Ethical Use of AI offer a clear foundation because they translate ethical AI into the actual risks communicators manage every day: accuracy, transparency, confidentiality, cultural sensitivity, legality, and professional accountability. They reinforce a simple truth: AI can assist the work, but it can’t own the work. Humans do.

From that base, the real question becomes practical: What does ethical, trust-building AI use look like in the daily reality of communication work? In my experience, most breakdowns happen in three trust fracture points.

Trust Fracture Point No. 1: Accuracy Without Rigor

Generative AI is optimized to sound plausible, not to be correct. That matters because communication outputs are often treated as authoritative: a leadership message, a policy explainer, a customer note, a media statement.

IABC’s principles are blunt for a reason. Communicators must independently verify AI outputs, ensure content is not plagiarized, and maintain the same professional rigor they would apply to any high-stakes message.

What this looks like in practice:

  • Treat AI like a drafting assistant, not a subject-matter expert.
  • Use a source-first mindset: if you make a claim, find the original source (data, policy, report, etc.) and cite it.
  • Apply the standard you already know from issues and media work: if you can’t defend it under pressure, don’t publish it.

This isn’t theoretical. A global study by the University of Melbourne and KPMG reports that many employees rely on AI outputs without checking accuracy, and that AI use is already associated with mistakes at work. When accuracy slips, trust doesn’t just drop in the message; it drops in the messenger.

Trust Fracture Point No. 2: Confidentiality Leaks Through Harmless Prompts

For communicators, confidentiality is both a legal issue and an ethical one. You know very well that drafting communications often involves proprietary strategy, internal dynamics, employee situations, financial information, or sensitive stakeholder relationships.

IABC’s guidance is clear: Don’t put confidential or proprietary information into prompts or searches, and protect personal and confidential information unless you have explicit permission to share it.

Ethical AI use here means building muscle memory:

  • Assume prompts may be visible beyond the moment you type them (even when a tool claims privacy safeguards).
  • Replace specifics with placeholders when using external tools (e.g., “Client X,” “Product Y,” “Region Z”), then apply the real details in your internal environment.
  • Use approved, enterprise-grade systems when your organization provides them, and push for governance where it doesn’t.

This is one of the most important places communicators can lead. We understand how quickly a draft could become a screenshot, a leak, or a headline.

Trust Fracture Point No. 3: Transparency That Backfires

There is a paradox here: ethical practice calls for transparency, yet disclosure can sometimes reduce trust.

We’re increasingly seeing that AI disclosure can trigger a legitimacy penalty: people may judge work as less trustworthy when they learn AI was involved, even if the content is accurate and well made. That doesn’t mean we should hide AI use, but it does mean communicators need to practice meaningful transparency.

Meaningful transparency shifts the focus from the tool to the safeguards:

  • Who is accountable for the final message?
  • What verification occurred?
  • What wasn’t delegated to AI?
  • How were fairness and cultural impact considered?

A simple “AI wrote this” label invites the worst assumption: that no one was steering. Meaningful transparency communicates the opposite: human leadership, review, and responsibility.

A Quick Ethical AI Checklist for Communicators

If you want something your team can use immediately, try this six-question check before anything AI-assisted leaves your desk:

  1. Truth: What claims are we making and what sources verify them?
  2. Ownership: Who is the human accountable for the final output?
  3. Attribution: Could this reproduce third-party language or ideas without proper credit or permission?
  4. Confidentiality: Did any prompt include proprietary, personal, or confidential information?
  5. Fairness: Who could be harmed or misrepresented by bias, missing context, or cultural blind spots?
  6. Transparency: What does the audience reasonably deserve to know about how this was created?

The Ethical Opportunity: Communicators as Trust Brokers

Edelman’s 2026 findings emphasize that employers are uniquely positioned to help broker trust in a fractured environment. If that’s true, then communication leaders have a specific responsibility: ensure AI use strengthens trust rather than undermines it.

The profession doesn’t need to choose between innovation and ethics. We need to insist they travel together, with human judgment in the lead, rigorous verification behind every claim, confidentiality treated as non-negotiable, and transparency practiced in a way audiences can actually understand.

For a deeper look at the trust context shaping this moment, see the 2026 Edelman Trust Barometer. And for a clear ethical framework grounded in our profession’s commitments, return to IABC’s Ethical Use of AI.

In recognition of Ethics Month, the IABC Ethics Committee is offering two webinars in February to help communicators navigate the evolving landscape of responsible AI. Save the date and join us for these sessions featuring expert panelists: