The generative generation

The greatest technological advances grow to be so fundamental to human society that they are no longer thought of as technology at all. 

They become so embedded in everyday life that they are invisible. Great leaps forward like artificial intelligence, the internet or the internal combustion engine are hugely significant, but none of them is invisible. We think of them as creations – something achieved. Perhaps the most fundamental technological advance is the alphabet: a way of codifying language and communication itself.

Language is a technology: a human-created, artificial means of expressing ideas. It is the backbone of all human communication.

In fact, the term technology itself comes from two Greek words – techne (‘craft’ or ‘art’) and logos (‘word’ or ‘reason’). So not only is language a form of technology: in a very real sense, it is technology itself. By extension, writing is two things at once: a way of organising thoughts and a mode of communicating ideas.

I note this as context for our main event today: Overtone’s thoughts on the newest generation of generative AI models. At first glance, these bots appear able to write expressively and communicatively.

In addition, recent advances have made the models incredibly easy to interact with. This user-friendliness did two things: it helped chatbots break into the online mainstream, and it allowed people to gloss over obvious shortcomings (being incorrect, having no long-term memory, being ethics-free, not actually creating anything, etc.).


1. Why do people care so much?

One reason for the wild mix of reactions is how closely the work generative AI can do approximates human work – not so much because of what it can do, but because of how it makes us reconsider the work we do. The thinking runs something like this: creativity is human; this AI’s outputs look like human creativity; it must therefore somehow be human too; so what is it that we, the humans, are doing in our offices all day, and what does it mean?

The ground-shift in applied natural language processing raises the question of what human creativity actually is. The models can clearly replicate some of what humans do, which prompts a further question: what will *they* do, and what will *we* do? That’s bound to cause some people sleepless nights.


2. Who gets replaced?

In the same way that these generative models extrapolate likely next words from vast amounts of pre-chewed human data, let’s follow through with the most obvious next steps.

To start with, this technology won’t simply replace people. It’s a statistical model that predicts patterns. It uses countless texts – created by humans – to suggest words that may come next. That word ‘next’ is important: these models don’t start, they continue, based on a prompt from a human. Therefore, the more likely effect is slightly subtler: people who use generative models well are going to replace people who don’t.
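To make that ‘continue, don’t start’ point concrete, here is a deliberately tiny sketch in Python. It is a bigram counter, nothing like a large language model, and the corpus, function name and prompt are invented for illustration: it learns which word tends to follow which from human-written text, then extends a human-supplied prompt one likely word at a time.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram counter, not a large language model.
# It learns which word tends to follow which from human-written text,
# then continues a human-supplied prompt one likely word at a time.

corpus = (
    "language is a technology and writing is a technology "
    "a model predicts the next word from the words that came before"
).split()

# Count how often each word follows each other word.
follow_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def continue_prompt(prompt, length=8):
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = follow_counts.get(words[-1])
        if not candidates:
            break  # nothing learned follows this word, so stop
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_prompt("writing is"))  # e.g. "writing is a technology and ..."
```

Real systems work with vastly more data and far richer context, but the basic asymmetry holds: the human supplies the starting point, the model supplies a plausible continuation.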

When the automobile took off, one concern was for the fate of hostlers and grooms, given that horses were about to become nearly obsolete as a form of transport. Nowadays, those terms are uncommon in daily use (a sign that this obsolescence did happen), but more pertinently: people’s jobs changed. As in the aftermath of any highly impactful technology, most jobs will change and workers (some with the help of training, others left to fend for themselves) will adapt. Criminals will find ways to exploit the technology; others will advance their careers trying to stop them. There will be jobs galore in co-authoring; in ethical supervision; in content filtering; in distribution, advertising and monetization; and in countless ways we haven’t yet thought of.


3. Where does it go from there?

Clearly, the new generative landscape can give rise to a new generation of content creators with the ability to ‘write’ limitless pieces of content on endless topics across countless platforms.

The logical follow-up to a technology’s hype and usefulness is that a sustainable business model will develop. Currently, ChatGPT is estimated to cost its creator, OpenAI, something like $100,000 per day to run. In time, market effects will help launch and float such models, while societal needs and expectations will provide the necessary counterweight.

Given what this technology can create at the moment, those business models will likely develop in places where the need is for merely ‘passable copy’ rather than precision or eloquence, making generative AI more attractive than a purely human approach.


4. So what do we do?

Since its inception, Overtone has talked about the ‘ocean of content’ – all that stuff out there, written and rehashed ad nauseam, that readers need to wade through. In 2020, Christopher, our CPO, warned of the coming influx of generated content. That ocean is about to get wider and deeper. It is going to be an ocean of ‘passable copy.’

In a world in which anyone can become a content creator – and publisher, and platform – with minimal effort, we need tools to sort through all that content. We need distinction. A knee-jerk reaction is to think we should separate AI from non-AI. That’s not necessarily helpful, and besides, it’s difficult to do: we have been blurring that line for years. Consider Siri, Alexa and the writing suggestions in your email.

Instead, we need to distinguish between types of content so that individual pieces serve purposes that ultimately help people, whether the text is written by a human or a machine. Quantitative methods (like counting keyword frequency or the number of click-throughs and likes) aren’t enough. As some bots create SEO content, others will spread those articles based on clicks or shares. The ocean of creative content will fill up with whirlpools of automated amplification with no real regard for the human experience of reading it.
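As a rough illustration of why a purely quantitative signal falls short, here is a toy ranker in Python. The articles, numbers and weighting are all invented; the point is simply that a score built from keyword counts and clicks will happily rank keyword-stuffed, heavily amplified filler above a more meaningful piece.

```python
# Toy illustration: ranking articles on quantitative signals alone.
# The articles, click counts and weighting below are invented.

articles = [
    {"title": "Thoughtful analysis",
     "text": "a careful look at what readers actually need from their news",
     "clicks": 120},
    {"title": "SEO filler",
     "text": "best content best content best content tips tips content",
     "clicks": 4000},
]

def quantitative_score(article, keyword):
    """Score a piece purely on keyword hits and engagement."""
    keyword_hits = article["text"].lower().split().count(keyword)
    return keyword_hits * 10 + article["clicks"] / 100

for article in sorted(articles, key=lambda a: quantitative_score(a, "content"), reverse=True):
    print(article["title"], quantitative_score(article, "content"))

# The filler piece wins comfortably, even though the score says nothing
# about whether reading it was worth anyone's time.
```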

Our generation of readers and writers needs tools to help make sense of all this. To ensure people get the best online experience, we need to sort through articles in a better way. Not just fact from fiction, or popular from niche, or news from opinion (though all of these are useful). We need to use these and other qualitative differences to sort the relevant from the inconsequential. The high-effort from the low-impact. We need to create, sort and share articles according to their ultimate value: their meaningfulness to you and me.