Innovation

Generative AI and Humans: Frenemies Forever?

This line is written by a human. This sentence was automatically generated by an AI language model, designed to craft a clear, stakeholder-aligned communication. Should these two authors coexist, collaborate, or compete?

AI has existed in various forms for some time. Spam filters that automatically detect junk emails are an example of artificial narrow intelligence, while suggested content feeds on social media showcase analytic AI in action. What truly captured global attention, however, was the rise of generative AI (Gen AI).

Until recently, creativity was considered the final frontier that separated humans from machines. That changed with Gen AI’s ability to produce coherent texts, images, videos, and even music. The same question that surfaced with the introduction of personal computers has returned: Will this take over our jobs?

If history is any indication, the answer probably lies in what followed the computer revolution. Those who quickly embraced the new tools and upskilled gained a distinct advantage over those who resisted change. As with most technology, human direction is essential to unlock its full potential.

How Do We Collaborate With AI?

This is where prompt engineering becomes essential. Just as organizations brief external agencies, the quality of the input determines the value of the output: garbage in, garbage out. As Ogilvy notes, effective prompts typically include four key elements.
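By way of illustration, a brief-style prompt can be assembled programmatically. This is a hedged sketch only: the element names used here (role, task, context, constraints) are a common generic framing for illustration, not a claim about Ogilvy's specific four elements.

```python
# Hypothetical sketch: assembling a structured, brief-style prompt.
# The element names below are illustrative, not Ogilvy's framework.

def build_prompt(role, task, context, constraints):
    """Combine the parts of a creative brief into one prompt string."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}"
    )

prompt = build_prompt(
    role="You are an internal communications writer.",
    task="Draft a 150-word note announcing the new office recycling program.",
    context="Audience: all employees; tone: warm and practical.",
    constraints="Do not make up information; use only the details provided.",
)
print(prompt)
```

The point is the discipline, not the code: a prompt that names the audience, the task, and the guardrails gives the model far less room to return garbage.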


How Much Should We Collaborate With AI?

While the power of AI is undeniable, it’s equally important to exercise prudence in its use, especially in stakeholder communications. This raises two important questions:

  1. Can audiences differentiate between AI-generated and human-written text?
  2. If they can, does it affect how they perceive the message?

A 2024 Harvard Business School working paper offers answers to both questions. The researchers developed the Wade Test to assess whether employees could distinguish authentic CEO communications from messages generated by an AI model trained on the CEO’s past writing.

Participants correctly identified AI-generated content only 59% of the time, slightly above random chance. More importantly, messages believed to be written by AI were rated as less helpful, regardless of whether they were actually AI-generated. A follow-up study with a general audience found similar results.

Why Is Human Intervention Required at All?

In light of this study, one needs to be careful not to fall into the trap of overreliance on AI. Some red flags to watch out for are as follows:

  • Facts First: AI often hallucinates facts and rarely cites sources. Including guidance like “Do not make up information” in prompts and verifying outputs is always advisable.
  • Context is King: Without human input and perspective, AI tends to be generic. Prompted to write about World Environment Day, for example, it may produce bland, templated copy. Human writers know their own stories best: if the author first became a tree hugger at age five, that personal detail is what readers actually want.
  • Avoid Foot-in-Mouth Moments: AI lacks social sensitivity. It has been known to include inappropriate content, like referencing costs or profitability in external-facing messages, which is why applying a human filter is imperative.
  • The Telltale Signs: Posts overloaded with emojis, excessive em-dashes, rhetorical questions, “not only… but also” loops, or oddly enthusiastic phrasing often signal AI-written copy. Because many of us use AI without bringing our own voice to the result, these hallmarks of AI-generated content are easy for a trained eye to spot.

What Are the Consequences of This Collaboration?

A recent study by MIT shows the results of over-dependence on large language models (LLMs) to write essays. People who consistently used LLMs displayed low cognitive engagement during the task and struggled to remember what they had written. Over time, they underperformed at neural, linguistic, and behavioral levels. The study raises alarms about the long-term implications of relying on AI for writing. Will it lead to an accumulated cognitive debt that erodes our basic skills?

According to the World Economic Forum’s Future of Jobs Report 2025, the fastest-growing skills by 2030 will span several areas. Technological skills include AI, big data, and technology literacy. Cognitive skills cover creative and analytical thinking. Skills for managing yourself include resilience, flexibility, agility, curiosity, and lifelong learning. Finally, skills for leading others focus on leadership and social influence.

That’s what sets us humans apart from AI … at least until the next version comes out.