As a tech lover, I’m amazed by Generative AI. I’ve used it to streamline processes and offload some of my most hated tasks. I’m a terrible note-taker, for instance, so using it to capture discussions and summarize meetings has saved both time and frustration.
It’s also tremendously useful in helping us scale our work. Upload a blog post to Claude or ChatGPT, and it can create additional assets insanely quickly.
Still, I’m wary of the ethical issues surrounding our use of GenAI. Building trust and credibility for our clients is central to our work. So, to avoid any missteps, we’ve embedded guardrails in our editorial process, focused on three key areas: authenticity, accountability, and accuracy.
Authenticity Is a Uniquely Human Quality
Take a quick look at the name of this blog, and you’ll immediately understand why this is a hot button for me. Our goal is to create authentic, market-ready content that captures our clients’ unique styles, voices, and messages.
Whether it’s B2B or B2C marketing, people want to hear from other people. They want to buy from reliable brands, and authenticity is the first building block of trust.
Feed enough content to AI, and it can produce fluent, competent prose. Just like any writer, though, GenAI needs a human editor. That’s because authenticity is uniquely human: machine intelligence can’t substitute for the personal, in-the-moment emotional hooks that only a human editor can supply.
As writers and editors in the AI age, we have an ethical responsibility to ensure our clients connect with their audiences on a human level. We might use AI for research or an early draft, but our final content deliverable is always human-driven.
Accountability Is a Financial Imperative
The early days of the internet were wild and confusing, with few guardrails. Today, we’re in a similar situation. There are few universal legal rules around the use of AI. While laws are invaluable for protecting vulnerable audiences, they’re not the only reason for clients to do what’s right.
Brands have a financial incentive too. Buyers want to work with companies that are transparent about who they are.
That’s what makes accountability the second building block of trust.
As audiences engage more with AI models, they’re learning to distinguish between human-created and machine-created content. At the same time, AI has made it significantly easier to spread misinformation. When brands aren’t clear about how or whether they’re using AI-generated content, they risk losing the audience’s confidence – and their business.
(Full disclosure: This post was written by a human.)
Accuracy Is Nonnegotiable
Over my career, I’ve developed a fear of getting things wrong. Fact-checking is a nonnegotiable step in our editorial process, and for good reason: our clients’ credibility rides on our ability to present information truthfully, accurately, and fairly. Their buyers aren’t interested in working with a brand they feel is misrepresenting the truth.
Accuracy is the third building block of trust.
In GenAI’s early days, I was reluctant to use it for research. AI summaries often contained links to 404 pages or to works that simply didn’t exist. Thankfully, it has improved, and now I value it. I find Claude exceptional at pulling accurate resources and providing the succinct, useful summaries I don’t have time to create on my own. It can be an excellent starting point.
And yet…
My team and I still routinely vet sources and facts provided by AI, just as we would when editing the work of human writers. Our bar remains high. We only cite known, reputable sources, and we only trust information if we can find multiple sources to substantiate it.
We also know GenAI has significantly increased the risk of plagiarism. Since we never want to use a line of copy without crediting the original author, we tend to be conservative about our use of AI-generated content. It’s the only way to ensure we’re honoring intellectual property rights.
Balance AI with Editorial Oversight
In its guide on AI ethics (you must be a member to download), PRSA sums up the responsible use of AI in communications in this way:
“AI is not a peer with moral responsibility. Accountability belongs to people. What matters most is leading it, explaining it, and using it in ways that are ethical, transparent, and build trust.”
That’s a fairly accurate assessment of what is, essentially, a tool. And it’s why I believe the most successful AI-driven storytelling will rest on the same building blocks as traditional storytelling: authentic human voices, accurate information, and accountable editorial oversight.
