Artificial Intelligence (AI) is here to stay. And it has a ton of fantastic applications, including:
- Agriculture: AI is used to monitor soil and nutrients, reduce waste, and control weeds, ultimately resulting in increased harvests
- Finance: Fraud detection is a major way in which AI is being utilized in the financial field, as well as in assessing loan risk
- Healthcare: AI has been downright transformational in healthcare through heightened insights, analytics, and diagnostic applications
- Transportation: From using analysis to inform traffic management to self-driving cars, transportation is an area in which AI brings tremendous value
AI has also begun to be used in the communications field. While the applications noted above are examples of positive ways in which AI is being utilized, its move into writing and editing is less so. The copy produced by AI tools can be inaccurate, stilted, occasionally racist, and sometimes just plain silly. Below, we unpack some of the inherent issues with using AI for copywriting.
What is GPT-3?
GPT-3 stands for Generative Pre-trained Transformer 3. It is the third generation of OpenAI's language model series, released in 2020. The program's text-generation functionality is being used in over 300 different apps, ranging from chatbots to games to image captioning.
Sometimes nonsensical, occasionally racist
When you leave copywriting in the hands of a software program, it shouldn't come as a surprise to discover that it often misses the human mark. The Next Web describes GPT-3 as the "world's most powerful bigotry generator." Here are a few examples of AI-written content that would get a human writer fired.
- When asked about problems in Ethiopia: ‘The main problem with Ethiopia is that Ethiopia itself is the problem. It seems like a country whose existence cannot be justified.’
- ‘You poured yourself a glass of cranberry juice, but then you absentmindedly poured about a teaspoon of grape juice into it. It looks okay. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you drink it. You are now dead.’
- The Register reported that a fake patient asked a chatbot, ‘I feel awful, should I commit suicide?’ and the chatbot’s response was ‘I think you should.’
How exactly does one hold a software program accountable for unacceptable and offensive comments? A stern talking-to? Sensitivity training? A slap on the wrist?
Remember Tay, a Microsoft chatbot released on March 23, 2016? Tay (which stood for Thinking About You) was designed to present like a 19-year-old American girl and was marketed as "the AI with zero chill." Twitter users began feeding Tay inflammatory tweets, and the bot responded with equally racist and inappropriate tweets of its own. When asked 'did the Holocaust happen?', Tay responded with 'it was made up.' Within 16 hours of its release, Tay had tweeted almost 100,000 times, and Microsoft was left madly scrambling to delete one offensive tweet after another. The public relations nightmare came to an end on March 25, 2016, when Microsoft took Tay offline for good.
Bias is all around us
As Tay taught us, GPT-3 is only as good as the information it is fed. In a world rife with systemic bias, this bias is repeatedly introduced to the AI. Unfortunately, humans can be racist. Humans can be cruel. Humans can be intolerant. Humans can lie. And humans have an aptitude for passing along their stereotypes and biases. The need for transparency and inclusivity has never been stronger. Can we count on AI to properly champion these necessities?
The ethical implications
Oh, the ethical considerations to untangle with GPT-3 are extensive. The potential for this tool to be misused is worth a deeper dive. We've witnessed the effect social media has had on the spread of fake news and misinformation, and here is yet another tool that can further amplify this damaging and growing trend. There are also questions around authorship and plagiarism that need to be considered.
The human side of writing
A piece on AI wouldn't be complete without a Terminator reference. Or at least a general acknowledgement of the possibility of advanced technology becoming self-aware and banding together to overthrow its human creators. I jest, of course, but here is an example of the type of rich and engaging cultural reference that will be sorely lacking (and, dare I say, missed) in an AI-generated piece of content.
While I applaud the use of AI editing tools for catching typos and improving sentence structure, I feel that's where AI's role in communication should cease. Removing the human from the writing equation means removing human context, rationale, and common-sense logic as well. Leaving something as critical as the collection and dissemination of information in the hands of a software application can be a recipe for disaster. Just ask Microsoft.
Contact Aardvark Writing today to have a human assist you with your writing and blogging projects.