Understanding GPT-3 as a Persuasive Technology

This is a short note on the persuasive potential of GPT-3, the language model recently released by OpenAI, and on the demos produced by the designers and coders who have already got their hands on the API.

If you are not yet convinced about the game-changing nature of the GPT-3 natural language model, please read this article (in full). Without spoiling the surprise at the end of that article, we can reflect on the fact that the text-generating capabilities of AI are more powerful than ever before, and that high-fidelity text generation and more advanced machine imagination will fundamentally transform how we think about authenticity, authorship and original thought.

As digital media and content are already being weaponized and used extensively for political ends, this development ought to be examined more closely within the field of persuasive technology – computer products and services designed with the intent to influence people’s behaviour and attitudes. People already spend a growing share of their lives through a ‘smart’ layer of technology: devices and software which subtly nudge our thoughts and actions towards goals that may or may not be in line with our personal desires. When we outsource cognitive and physical processes to technology, we place part of our agency in the hands of the material environment. That also means we must trust that these services are on our side.

To get a sense of what new kinds of services GPT-3 and similar models may lead to, we can speculate a little based on some of the demos that have been published recently.

These demos are but a few examples of what GPT-3 is capable of today, in a handful of experimental service interfaces that have been released to the public. There are also examples of strange texts generated by GPT-3, which reflect some of the flaws that still remain in natural language processing. Some of the demos may not seem that impressive yet, but extrapolating their potential a few years into the future, the effects may be profound.
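As a rough illustration of what sits behind such demos, here is a minimal sketch of how a developer might request a completion from the GPT-3 API. It assumes the 2020-era `openai` Python client, beta API access, and a hypothetical prompt chosen purely for illustration; it does not reproduce any particular demo.

```python
# Minimal sketch of a GPT-3 completion request (assumes the 2020-era
# `openai` Python client and an API key from the OpenAI beta programme).
import os
import openai

# Hypothetical: the key is read from an environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# A hypothetical persuasive-copy prompt, purely for illustration.
prompt = (
    "Write a short, friendly product description for a reusable water bottle "
    "aimed at commuters:\n"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine exposed by the beta API
    prompt=prompt,
    max_tokens=80,      # cap the length of the generated continuation
    temperature=0.7,    # moderate randomness in the sampled text
)

print(response.choices[0].text.strip())
```

Much of the design space the demos explore comes down to little more than this: a carefully written prompt, a completion call, and an interface wrapped around the generated text.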

In academia, text is the fundamental vehicle for research: research is shared and scrutinised in textual form. Original thought and authorship are cornerstones of the academic process, but with the introduction of algorithms that can produce texts indistinguishable from those written by a human, it may soon be impossible to discern whether a work is man-made or machine-made.

In business, persuasive texts are used everywhere: in sales, in marketing and at the strategic level, for example in investor pitches or reporting. Machine-generated dialogue software is already commonplace in customer support, as virtual assistants and chatbots proliferate across industries. VAs such as Alexa and Siri already provide interfaces for search and shopping for millions of people. More persuasive texts mean more persuasive services, but they also increase the risk of coercion or deception.

For people in general, it will become even more difficult to navigate the information landscape and discern whether an actor is a human or a machine. Perhaps we will place more value on authenticity and human interaction when machine content is abundantly available. Perhaps we need to develop services that are ‘on our side’, working to fend off invasive, persuasive algorithms in our environments, like more advanced spam filters that protect our attention from predatory apps. We will also need to develop our sensitivity towards computer-generated content in whatever form or medium it appears, so that we are not led or misled by hidden forces that we do not have the bandwidth to understand.

These are but a few ways in which society will be impacted by natural language processing technologies acting as persuasive technologies. A longer post will follow soon, *perhaps* written by me.

GPT-3 may not be a game-changer just yet, but it has definitely opened people’s minds to a universe of possible use cases and an unexplored design space that is infinitely interesting.

Persuasive technology scholars should definitely be on their toes.

Other resources:
https://openai.com/blog/openai-api/
https://arxiv.org/abs/2005.14165

by Gustav Borgefalk
gustav.borgefalk@network.rca.ac.uk
