This blog piece offers my initial thoughts on the research paper produced by the Analysis and Research Team (ART) of the Council of the EU entitled “ChatGPT in the Public Sector ‘Overhyped’ or ‘Overlooked’?” (ART Note, available here), in the aftermath of the Future of Life Institute’s Open Letter on AI, which I discussed in a previous piece, “A Call for A ‘Luddite Rebellion’ or a ‘Tech (AI) Bretton Woods’?” (available here).
The ART note starts by providing a brief but well-presented historical background of the evolution of AI Technology. It then looks at the possible applications of Large Language Models (LLMs) in the public sector and considers the advantages and the associated risks.
It highlights the impact that private sector initiatives, primarily in the United States, have had in the field and contrasts this with the approach witnessed in Europe thus far. More specifically, it observes that research in the field in Europe relies significantly on public and publicly financed initiatives, in contrast to the privately financed initiatives in the USA. It points out that although the majority of the leaders in Large Language Model (LLM) technologies are based in the US, which appears to be at least two years ahead of the EU in technological development, there are initiatives which aim at boosting European capacities, such as the Consortium for High Performance Language Technologies (HPLT) and the European High Performance Computing Joint Undertaking (EuroHPC JU). The latter, an initiative which aims at strengthening European supercomputing capacities, is led by the European Commission and the EU Member States (EU27) plus Montenegro, North Macedonia, Norway, Serbia and Turkey, together with private partners such as the European Technology Platform for High Performance Computing (ETP4HPC), the Big Data Value Association (BDVA) and the European Quantum Industry Consortium (QuIC).
Concerning the use of LLMs in the public sector, the ART note correctly points out that LLMs can have a variety of applications but can also “…affect the main principles which underpin the work of the public sector” (ART Note, p. 9).
It also takes a “glass-half-full” approach to the possible impact of AI technology on public sector jobs, stating that although many jobs would be affected, this might not lead to job losses but rather to a change in the skill sets required for performing public sector work.
The note then identifies the risks which, at the present stage, are associated with the use of LLMs in the public sector, measured against the principles that characterise the functioning and aims of public sector governance (transparency, accountability, equality and impartiality, and efficiency), for example regarding:
- accountability, due to the so-called “black box” problem, namely the lack of granular knowledge of the way in which AI reaches the results it presents (GPT-3, the model underlying ChatGPT, has approximately 175 billion parameters);
- equality and impartiality due to biases that may emerge from the data sets;
- data security due to privacy issues;
- efficiency and quality, due to the fact that the technology is not immune to producing wrong results;
- the pursuit of the public interest, which may be influenced by private interests;
- citizens’ trust, because arguably the use of AI in the public sector might lead to the loss of citizens’ trust.
Regarding the ART note’s forecast of AI’s impact on the public sector workforce, I believe that the impact has the potential to be more severe. Perhaps the impact on public sector jobs will be slower than in the private sector, because of the inertia that often characterises the former in comparison to the latter. Although in the long run maintaining the workforce at the same levels would not be sustainable, the public sector environment may allow for a more “controlled impact” paradigm.
An interesting point (which may prove ironic in the long term) is the suggestion of using AI in the screening of candidate CVs for public sector posts…
Regarding the assessment of the risks posed by AI to public sector governance, I think that the analysis is accurate. However, I would like to play devil’s advocate here, in order to push the conversation further, and ask: what is the counterfactual? A perfect AI system, or the current state of what I would call the ‘analog administration’? If it is the latter, then one may argue that the counterfactual is an equally imperfect system, affected by biases and subject to private sector influences. The exercise then becomes one of comparing degrees of bias and imperfection. The key area, in my view, is that of the risks regarding accountability, acknowledging though that the level of accountability under the ‘analog system of governance’ is often suboptimal too.
All in all, the ART note focuses on AI’s short term implications but says little about the long term ones, especially with regard to the actions and initiatives the EU collectively ought to take.
The preoccupation regarding the position of the EU industry in this technological field is logical but demonstrates an “in frame”, “old school” approach to the regulation of new technologies, one in which geopolitical comparative advantages count. Although this preoccupation is valid, it misses a significant part of the problem at hand. As I mentioned elsewhere (“A Call for A ‘Luddite Rebellion’ or a ‘Tech (AI) Bretton Woods’?”, here), the challenge/risk posed by Artificial General Intelligence (AGI) is that it has the potential of reaching a “…point of no return after which the notion of ‘technological advantage’ in geopolitical terms would be meaningless.” I believe that, just as in the case of global warming only globally agreed initiatives have a chance of success (uncoordinated local or regional initiatives certainly have none), so in the case of AGI a serious, immediate “AGI Bretton Woods” approach is necessary. The recent discussions at the G7 strike an optimistic chord…on verra…