AI-generated image created with Midjourney, utilising quotes from this article as prompts.

Philosopher Bots: should government chatbots parrot the party line?

By Alex Read

“Brief me on prison overcrowding and suggest a solution” – not a minister barking at a civil servant, but a civil servant prompting a government-issue policy-bot. If you think that’s far-fetched, consider this: earlier this month the government’s new AI taskforce told us that the first public service AI models would be piloted by the end of this year.

Given the enormous media interest in AI’s rapid growth in capability, this should not surprise us. GPT-4, OpenAI’s latest large language model (LLM), was found to score in the top 10 per cent of test takers on a simulated bar exam; its predecessor, GPT-3.5 – the model behind the original ChatGPT – scored in the bottom 10 per cent.

These models are so knowledgeable that they have the potential to outsmart the best of our civil servants in a wide range of scenarios, so it seems logical for the government to train these systems to tackle some of our trickiest national issues.

But before we unleash these systems on sensitive government data and put them to work, a profound question needs to be answered: with what underlying values should these models be taught to reason? Indeed, AI’s potential is vast, but its ultimate utility hinges on the principles it is programmed to uphold. The algorithms may be neutral, but their outputs often aren’t.

All governments strive towards seamless and cost-effective delivery of public services. They often fail, in part because the reality of policymaking presents weighty ethical dilemmas. Scarce resources, conflicts of interest, and election cycles mean trade-offs need to be made, often with serious implications for society.

Here we might expect a human politician to fall back on their ideology, a set of pre-ordained beliefs about the world that takes tricky decisions out of their hands and surrenders them to a shared sense of what is just, proper, and politically salient. We all know this, which is why we tend to vote for politicians who hold similar beliefs to our own.

We shouldn’t assume that AI will let us escape this reliance on ideology. It’s crucial to remember that AI, no matter how advanced, is not sentient. It does not innately understand human culture, ethics, or values. It does not comprehend the nuances of fairness, justice, or empathy. It processes what it’s been trained on, nothing more, nothing less.

When chatbots answer our queries, they don’t just spit out cold, hard facts. They echo the biases buried in their training data, even when those biases aren’t explicitly stated. Let’s imagine a system plotting the best path for a new high-speed rail link. Though trained on reams of government research, the model will also unwittingly inherit the biases from that data, biases which could be tied to race, gender, or socioeconomic status. The result? Its recommended route could unintentionally marginalise certain communities or wreak havoc on natural habitats.

The ‘black box’ issue exacerbates the problem. Because these models are so vast and their decision-making processes so complex, we can judge their outputs, but we cannot interrogate the reasoning that produced them. It’s akin to diagnosing a car breakdown without being able to lift the bonnet. This opacity raises serious questions about transparency and accountability when it comes to policymaking.

So, if we’re to entrust our societal challenges to these AI models, we need to ensure they are guided by a robust ethical compass. The government’s task, therefore, is not just to implement AI, but to imbue it with a deep understanding of the values that underpin our society and the Government of the day.

It therefore seems logical (even if instinctively a bit Orwellian) to program these systems with a particular ethos. After all, is it really any different to the government setting an ideologically driven agenda for civil servants? In essence, isn’t a government chatbot just another tool in the hands of the government to translate its ideology into policy?

Despite their computational prowess, these systems grapple with ethical crossroads just as humans do. Their responses, no matter how complex, are defined by the biases they’ve been trained on.

This debate is in its infancy in the public sector, but it’s already gaining momentum in the business world. OpenAI’s CEO Sam Altman admits there’s no clear answer to who sets these guidelines, while Elon Musk, who co-founded OpenAI alongside Altman, has criticised the creation of ‘woke’ chatbots that censor content deemed hurtful.

So, whose values should AI mirror? The government’s, the public’s, or an international standard? This question isn’t confined to technology; it’s a democratic challenge. We must tackle these questions head-on; the values we instil in AI today will steer its course for our society in the future.

Polls often show that most people don’t know what our politicians stand for. As we move towards AI-assisted policymaking, ideological clarity will become even more of a necessity for politicians. With a government chatbot potentially only months away, this debate is long overdue.