Is ChatGPT About to Become Every Tech Writer and Content Strategist’s Frenemy?

Is ChatGPT your frenemy?

With all the hype about ChatGPT these days, we tech writers and content strategists can get excited about the opportunities coming our way, as long as we stay aware of the downsides.

What is ChatGPT?

ChatGPT is an artificial intelligence (AI) driven chatbot built on a Large Language Model (LLM), specifically a Generative Pre-trained Transformer (GPT). With these tools, the term “artificial intelligence” is a bit misleading.

ChatGPT is not a virtual, sentient being. It’s a combination of complex math, sophisticated language theory and clever algorithms, run with a lot of computing power, that can generate text that seems like it was written by a person.

These kinds of tools scan vast amounts of information on the web, then use algorithms to “learn” how to write the way humans do. For a detailed explanation of how they work, see Stephen Wolfram’s article “What Is ChatGPT Doing … and Why Does It Work?”

Will ChatGPT Help or Harm?

Tools like ChatGPT are both impressive and unsettling. Because they are designed to generate “baseless assertions with unfailing confidence,” it can be difficult to tell the output of an AI chatbot from text written by a human.

As these tools become commonplace, it’s natural to fear that they will render our skills as writers obsolete, or at least less valued. Thanks to science fiction, many of us harbour a latent fear that one day we’ll be replaced by robots.

As communications professionals, we should be more worried about their potential to generate massive volumes of content in a very short period of time, which could proliferate, overwhelm and dominate discourse, squeezing out facts and truth as well as stories of nuanced human experience.

Governments are grappling with the technology and with how to monitor and potentially regulate it. We likely have much more to learn about what powerful content generation tools can do to help or harm society.

No matter what you think of these tools in general, it’s important to understand their potential impact on professional communications work.

Tricks Are About to Get a Whole Lot Trickier

Every public-facing interface is already absorbing computer-generated communications. Your organization may receive only a small amount of AI-generated traffic, or it might be inundated, overwhelming your support teams and content writers.

Our teams need to start watching for ChatGPT-generated communications in:

  • job applications, through generated resumes and cover letters
  • support requests
  • forum posts and responses
  • social media posts that may appear to be from your organization when they are not
  • responses to your social media posts
  • comments on your blog or videos
  • email phishing, ransomware and other scam attempts

Sometimes well-meaning people use these tools to compensate for skills they struggle with. For example, a job applicant who really wants to shine uses the tool to generate a professional-looking cover letter, or a person who needs to contact sales or support in a language other than their own uses it for translation.

However, the nature of these tools means someone can inadvertently generate seemingly factual content tied to you or your organization. Since these tools use online information to build their text, details about your organization (phone numbers, emails, employee names that appear on public-facing websites, social media and elsewhere) can be pieced together into realistic, seemingly factual text about a “pretend” organization. You may then be contacted with useless information because generated content containing snippets tied to you slips into your communication tools.

We’re already familiar with this in the email spam we receive. We’ve learned to spot spam bots and similar tools fairly reliably through the nonsensical combinations of keywords they send out en masse. AI chatbots will unleash a cleverer type of spam without the usual tells, spam that looks human-written to both our filters and our eyes. Tricks to fool employees into revealing data or installing ransomware are about to get a whole lot trickier to spot.

A New Frenemy?

You can’t expect a human to accurately process thousands of calculations in a short time, and you shouldn’t expect a computer to have intuition, understand nuance and replicate what a skilled human can. However, as writers, we can and should use technology to help us be faster, better, stronger at what we do.

Rather than replacing our skill as writers, ChatGPT and similar tools could support us. If we learn about them, we have an opportunity to find new ways to utilize their capabilities. They can help us:

  • brainstorm new topics or solutions
  • explore alternative explanations
  • speed up editing and localization
  • improve consistency of style and tone
  • spot time-wasting AI-generated communications

What if you use ChatGPT as a lateral thinking brainstorming tool? Even if the AI-generated text is inaccurate or nonsensical, reading alternatives to your own thinking can help spur new ideas.

In technical writing, we often need to explain complex processes in simple terms. Sometimes this is very difficult, and no matter how hard we try, our explanation comes out muddled. Khan Academy founder Sal Khan has spoken out against banning ChatGPT in educational settings, arguing it could transform education by letting educators and students generate alternative ways of explaining concepts. What if we too could use ChatGPT to generate alternative explanations to clarify our own thinking? Like bouncing ideas off your favourite colleague.

The Skills You’ll Need

To use a tool like ChatGPT effectively, you need to learn how to prompt it to generate useful results.

The great thing is, as writers, we’re already good at writing to get results from computers. We’ve got a few decades of writing for search engine optimization behind us. We just need to learn what language works for ChatGPT and how best to use what it generates.

Once you get good at using the tool and at crafting effective prompts, you’ll understand both its potential and its possible downsides.
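If you want to experiment beyond the chat window, a few lines of code are enough. Here’s a minimal sketch using OpenAI’s Python library; the model name and prompts are illustrative placeholders, and it assumes you have an API key set in the OPENAI_API_KEY environment variable.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            # The system message sets the role and tone for the whole exchange.
            {"role": "system", "content": "You are an experienced technical editor."},
            # The user message is the prompt itself.
            {"role": "user", "content": "Suggest three plain-language ways to explain what an API is."},
        ],
    )

    print(response.choices[0].message.content)

Separating the role-setting system message from the task prompt makes experimenting easier: you can hold one constant while you refine the other.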

Want to Try Some Prompts?

Plenty of published articles can help you get started with ChatGPT prompts.

Once you’ve learned how to prompt the tool to get good results, you can start to refine, customize and create your own prompts. These are useful skills to develop.
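One simple way to make prompts refinable and reusable is a template. This is a purely illustrative sketch (the wording and parameters are invented for the example), but it shows the idea: tune the prompt once, then customize it per task.

    # A reusable prompt template: refine the wording once, reuse it everywhere.
    TEMPLATE = (
        "You are a technical editor. Rewrite the following {doc_type} "
        "for {audience}, keeping it under {word_limit} words:\n\n{text}"
    )

    prompt = TEMPLATE.format(
        doc_type="release note",
        audience="non-technical customers",
        word_limit=100,
        text="Fixed an issue where the sync service crashed on malformed input.",
    )
    print(prompt)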

What to Watch Out For

As you’re learning about LLM tools and writing prompts, watch out for a few things.

Never put confidential or identifying information into an AI tool.

This seems obvious, but as we start to see how useful these tools can be, it will be tempting to write prompts for anything. As soon as you do, that information is fair game for the tool to reuse, adapt and weave into its output for others. We have to be even more careful with safeguarding corporate communications, client conversations, or even words that seem innocuous at the time. (Our privacy policies might need a revamp too if we intend to apply ChatGPT to any internal support tools.)
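As a sketch of what a basic guardrail could look like (the patterns below are illustrative, not exhaustive, and real redaction takes far more care), you could scrub obvious identifiers before a prompt ever leaves your machine:

    import re

    # Illustrative patterns for obvious identifiers; real redaction needs more.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace anything matching a known pattern with a labelled placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    prompt = "Draft a reply to jane.doe@example.com; her number is 555-123-4567."
    print(redact(prompt))
    # Draft a reply to [email removed]; her number is [phone removed].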

Don’t rely on an AI tool to provide true, accurate information.

As linguist Emily M. Bender points out, the tools aren’t designed to create factual content. Imagine the problems with content created by tools that aren’t driven by truth or facts, but by plausibility.

As Bender states, “How should we interpret the natural-sounding (i.e., humanlike) words that come out of LLMs? The models are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They’re great at mimicry and bad at facts.”
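To make that concrete, here’s a toy sketch of next-word guessing. It bears no resemblance to the scale or architecture of a real LLM, but it shows the core mechanic: count patterns in text, then emit whatever word seems plausible next, with no notion of truth anywhere.

    import random
    from collections import Counter, defaultdict

    # A toy "language model": count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and the cat saw the dog".split()
    next_words = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current][following] += 1

    # Generate text by repeatedly guessing a plausible next word.
    word, output = "the", ["the"]
    for _ in range(8):
        options = next_words[word]
        if not options:
            break
        word = random.choices(list(options), weights=options.values())[0]
        output.append(word)

    print(" ".join(output))  # fluent-looking output; mimicry, not facts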

This is what could happen…

Imagine that a journalist writes a timely article that goes viral. They spend months researching and bring important information into public discourse. However, they make a small mistake citing a source, referring to the wrong pages in a scientific journal. Other journalists, rather than paying for access to the journal, reuse the incorrect reference, copying and pasting the error into their articles. This happens again and again until the error becomes commonplace, so common that an AI also starts using the incorrect source because it is so widely distributed.

As AI tools become more pervasive, we’re all going to have to become better critical thinkers and fact checkers. Just because something is popular doesn’t mean it is correct.

AI tools need human supervision and intervention.

While Large Language Model AI tools can produce convincing-looking outputs, they don’t generate reliable content on subjects that require deep specialization and knowledge. That’s likely because this kind of content isn’t represented in large quantities online. Furthermore, the AI tools aren’t applying knowledge and expertise; they are copying the most popular examples of human-generated content online.

If you have specialized expertise, whether subject matter knowledge, stylistic training or translation accreditation, be sure to verify whatever these tools generate against your own skills and judgment.

When it comes to language translation, Vogue Translations, LLC cleverly used ChatGPT to generate an article that outlines some of the shortcomings of Large Language Model tools: “4 Reasons Why Companies Should Avoid Using ChatGPT for Their Translations (As Told by ChatGPT)”

The article concludes: “...while ChatGPT is an excellent tool for translating texts, it is not a replacement for human expertise. Companies should avoid relying solely on ChatGPT for translating critical documents or communications. Instead, they should seek the assistance of professional human translators who can provide accurate translations, taking into consideration the context, vocabulary, tone, and emotions in the original document.”

These tools should complement human skill and expertise, not distract or create more work. Frenemies indeed.
