Ethics, AI, and Content Writing

ChatGPT recently had a birthday, of sorts. OpenAI released the innovative software to the public last November, making the artificial intelligence tool a truly precocious one-year-old.

Just as other one-year-olds are learning to talk, ChatGPT’s large language model (LLM) becomes more sophisticated with every conversation it has with users. It’s learning and growing much faster than previous technology, with the potential to become an expert on a wide range of topics. It’s even got new siblings in the form of other AI chatbots like Bard, Claude, and Copilot.

Unlike most other one-year-olds, however, ChatGPT has revolutionized the way we research, write, and even make decisions. As companies increasingly incorporate AI into their processes, policymakers and industry leaders have been trying to sort out the ethical issues that come with this new technology.

Ethical Concerns Around AI

Although AI has many benefits, there are valid concerns about the way it might be used to intentionally or inadvertently cause harm. AI is not a neutral system – it’s just as susceptible to error as humans are. It’s trained on data produced by humans and its output is reinforced by users. Therefore, it can produce answers that seem authoritative but may contain biases or inaccuracies.

In two notable examples, both CNET Money and Sports Illustrated failed to disclose the use of AI to write articles. Last winter, CNET had to post extensive corrections to AI-generated content, acknowledging factual errors and “phrases that were not entirely original.” Nearly a year later, the CEO of Sports Illustrated’s publisher was ousted after it was discovered that AI had been used to publish articles under fake author names with AI-generated headshots.

As AI technology continues to advance at breakneck speed, it’s important to apply rigorous ethical standards to its use and the answers it produces. When it comes to content marketing, there are guidelines we can follow to ensure our writing is enhanced, not hindered, by AI.

Applying those standards is one thing everyone – not just professional writers – agrees on and is working toward, although progress can be slow when it comes to such a complicated topic. Major governing bodies and professional organizations have been hard at work developing policies regarding the responsible use of AI. Here’s a summary of a few.

The White House Blueprint for an AI Bill of Rights and AI Executive Order

The White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in October 2022, just before the public launch of ChatGPT. The document proposes five principles for the responsible design and use of automated systems to protect the civil rights of the American public. While not limited specifically to AI, the five principles are:

  1. Safe and Effective Systems: Automated systems must be designed to protect humans and their data from harm.
  2. Algorithmic Discrimination Protections: Developers must proactively remove harmful bias from automated systems and ensure equitable access.
  3. Data Privacy: Users must have agency over how their personal information is used.
  4. Notice and Explanation: There must be transparency around the use and outcomes of automated systems.
  5. Human Alternatives, Consideration, and Fallback: Users must be able to opt out of using an automated system and work with a person instead.

These principles are guidelines, not law, and are part of an ongoing initiative to govern automated systems and protect users. In July 2023, the White House announced that seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) had voluntarily committed to making their technology safe, secure, and transparent. Since then, more companies have added their commitments, bringing the total to fifteen.

In October 2023, President Biden issued an Executive Order on safe, secure, and trustworthy artificial intelligence, directing developers to protect Americans from potential harm caused by their systems. It establishes new standards for safety, privacy, and equity while promoting the responsible use of AI for innovation and competition.

Although both documents encompass automated systems used in fields like education and financial services, their guidelines are also relevant for writers. When using generative tools like ChatGPT to come up with ideas or produce text, writers must be aware of potential issues with the way the LLM is trained.

That means that anything published with the aid of AI must maintain the same standards of civil liberty as traditionally written work: it must be free of bias; protect the privacy and personal data of sources; avoid spreading harmful misinformation; and clearly mark AI-generated content.

The Artificial Intelligence Act

The European Union reached a provisional agreement on the Artificial Intelligence Act in early December 2023, although the law likely will not take effect until 2025. The AI Act imposes legally binding rules on tech companies and their products.

Companies must notify users when they’re interacting with AI software, such as chatbots and biometric scanners. Text and images produced by AI must be labeled and detectable. Certain systems, especially those used for surveillance or behavioral manipulation, are banned entirely, and noncompliance will be penalized by steep fines.

The landmark legislation is especially strict for high-risk or powerful AI models, requiring risk mitigation systems, open documentation, and human oversight. The AI Act imposes specific obligations on generative AI systems to avoid the production of manipulative content and protect against the infringement of intellectual property rights. While these obligations fall on the tech companies and developers behind the systems, writers using AI in their work must hold themselves to the same standards.

Promises and Pitfalls: The Ethical Use of AI for Public Relations Practitioners

In November 2023, the Public Relations Society of America (PRSA) Board of Ethics and Professional Standards published a comprehensive set of guidelines for the use of AI in the public relations industry. This document maps AI ethical considerations onto the five provisions outlined in the PRSA Code of Ethics, which defines the society’s obligations to organizations and the public.

Each section identifies how generative AI relates to the provision, gives examples of improper applications, and provides guidance on the proper use of AI to create ethical content.

  1. Free Flow of Information: Anything generated by AI must be validated and checked for accuracy to prevent the dissemination of harmful or incorrect information.
  2. Competition: If a hiring manager uses AI to screen applicants, they must be aware of implicit biases in automated systems and take steps to ensure equitable hiring procedures.
  3. Disclosure of Information: This section has several examples of the transparent use of AI to protect truthful information. It calls for users to clearly identify when and how AI is used in their work, to correct and stop the spread of misinformation, and to call out the improper use of AI by digital imposters or malfeasants.
  4. Safeguarding Confidences: AI tools can learn from and retain the data users feed them, so users must exercise caution when managing sensitive information with publicly accessible or “open” tools.
  5. Enhancing the Profession: PR professionals must maintain high ethical standards when using AI, exercising critical thinking and serving as AI’s conscience.

The document also lays out a cost-benefit analysis for the use of generative AI tools, cautioning users on the applications and limits of the technology.

While the PRSA Code of Ethics is written for the PR industry, much of its guidance can also be applied to marketing and other writing professions. For example, freelance content writers need to be especially careful around using AI with proprietary client information.

AI Guidelines for Journalists and Publications

Many other governing bodies, associations, and publications are examining how they can ethically employ AI in their processes.

News organizations like the Associated Press, Reuters, the Guardian, and the CBC call for human oversight of all published content as well as transparency when AI has been employed. Others, like Wired and Insider, have stated they will never post stories written by AI, but will use the technology in other parts of their writing workflow.

The Version A Approach to AI Ethics

Like the rest of the world, we’ve spent the last year figuring out how AI fits into our writing ethos. The truth is, we’re still working it out – but it hasn’t been as much of an adjustment as we had initially expected.

That’s because Version A’s roots are in the newsroom, and we’ve always followed sound journalistic practices with high ethical standards. We treat AI with the same rigorous scrutiny as we would any other source. It’s a tool for improving (but not replacing) our writing process. That process might change a bit with AI adoption, but our mission – to provide accurate, authentic, well-written content that people can trust – will always be the same.

Anna O'Neill

As a writer at Version A, I spend my days crafting all sorts of content for our clients. From blogs to white papers to customer stories - you name it, I’ve probably written it! My background in science and the arts means that I approach each project through a double lens of research and creativity. Whatever the topic, I look at every piece as an opportunity to teach myself something new, and hopefully help readers learn something too. My constant writing companion is my little mutt, Scout.