The trigger for writing this post was something I read during a lunch break at Badia Fiesolana, the European University Institute, a place which invites and facilitates deeper reflection.
The relevant read was the Future of Life Institute’s Open Letter (here) calling for an immediate pause of at least six months in the training of all AI systems more powerful than Generative Pre-trained Transformer (GPT) 4.
The open letter put into perspective a lot of my assumptions, endeavours and thoughts. After all, it is not unusual to be sceptical about the frequently observed presentation of AI as a panacea for achieving efficiency in almost every walk of life; yet the open letter takes scepticism towards AI to another level. What follows is a brief account of my first thoughts and reactions.
Open Letter: Content and Endorsement
The open letter is signed by a long (and growing) list of “tech gurus”, such as Elon Musk and Steve Wozniak, as well as AI experts and AI funders. I will refer to them in the rest of this piece, in a loose, oversimplified way, as “the creators”.
In other words, the open letter was not written and signed by those who oppose technological development a priori, and it has been endorsed by many of those who set the AI ball rolling and contributed to and/or financed its development.
I find the open letter compelling for two reasons.
Firstly, because the call to action is for a pause. Why a pause?
The choice for a “pause” can be explained in two alternative ways:
Either the proposed pause is deemed sufficient to make the necessary plans and adaptations to mitigate the risks posed by the new technology and to prevent Artificial General Intelligence (AGI) and the associated technological singularity, namely the point after which technological development becomes uncontrollable and irreversible by humans; a “Rubicon-crossing” moment for life as we know it.
Or the call for a pause is simply an acknowledgment that stopping this technology is no longer possible, and therefore pausing remains the only (albeit second-best) option to delay the inevitable.
Under normal circumstances (or, better, from a human sense of time), six months is not such a long period, one might say.
Which takes me to the second reason for which I find the message in the open letter compelling: the sense of urgency in the letter, which renders its message more a cry than a call to action. Not just a pause, but an immediate pause. Every moment that passes makes the technology more powerful and brings singularity closer.
The creators’ cry for an immediate pause relates to a sense of wariness, a ‘gut-feeling’ to put it simply, of perilous “unknown-unknowns” associated with their “creation”.
A “Luddite Rebellion” or a “Tech Bretton-Woods”?
Although prima facie the open letter might give the impression of a rally to a Luddite-type struggle against AI, the tone of the message and the identity of those who endorse it do not, in my view, justify such a reading.
First, the open letter focuses on what can be termed the more advanced end of the AI spectrum, which continues to expand, and proposes the development and implementation of “safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Open Letter, para. 4), with the participation of all the relevant stakeholders.
“I read the letter more as a call for the establishment of a ‘Tech Bretton Woods’ where the rules of engagement regarding the further development of this powerful technology would be agreed, and applied, across the world”
What’s more, the open letter calls on Governments to step in if such action is not taken quickly by the largely private actors involved in the development of the technology. I believe that such coordinated development of standards, regulation and technical protocols, with all stakeholders including Governments, is essential.
For these reasons I read the letter more as a call for the establishment of a “Tech Bretton Woods” where the rules of engagement regarding the further development of this powerful technology would be agreed, and applied, across the world.
Enter The Dragon
Experience has shown that for a Bretton Woods-type initiative to work, all the stakeholders need to be on the same wavelength and willing to play the proverbial “ball”. However, the party which has the advantage and feels most constrained by the “accords” often succumbs to the temptation to break ranks, as the Nixon Administration did in 1971 with the original Bretton Woods system.
Why would China consider jeopardising its competitive advantage by pausing the further development of AI, assuming that the reports about the considerable technological lead that China enjoys in the field of emerging technologies such as AI are true (see “Global ‘Tug of War’ – Artificial Intelligence (AI) and Procurement Reform” here)? The only answer is the realisation of the risks of the “unknown-unknowns” stressed by the “creators” in the open letter. The fact that some of the open letter signatories are based in China is promising.
“[Meaningful engagement] can only happen if the Chinese political elite is convinced about the risks connected with the point of no return after which the notion of ‘technological advantage’ in geopolitical terms would be meaningless.”
However, it is also clear that any meaningful Chinese involvement in such an endeavour requires sanctioning at the highest political level. The latter can only happen if the Chinese political elite is convinced about the risks connected with the point of no return, after which the notion of “technological advantage” in geopolitical terms would be meaningless.
Such a realisation requires clear explanations and lobbying by the relevant Chinese AI expert stakeholders (the Chinese “creators”), but it would not be unfair to suggest that the Chinese political ecosystem is not geared towards facilitating such frank discussions.
Furthermore, even if the Dragon is convinced that a “Tech Bretton Woods” initiative is necessary, there is the problem of the current geopolitical environment, which undermines the development of trust among the various stakeholders. It is for this reason that the reduction of tensions and the re-engagement of national governments with multilateral fora (be it in the field of trade, the environment, the economy and so on), as a means of rebuilding trust, is so important.
It could be argued, though, that the realisation of the risks connected with the uncontrolled slalom towards AGI puts everyone on the same side of the court against a common adversary, and this, in itself, should be a sufficient factor to incentivise cooperation.
However, even if there is a common understanding about the necessity of a “Tech Bretton Woods” system, underpinned by common protocols and the appropriate institutional framework, there is another challenge, which is none other than agreement about the terms of reference of the initiative.
The open letter raises some very interesting points in this regard about the role of AI in our society more generally. For example, objectives such as optimisation and efficiency, which have informed decision-making and technological development through the ages, might need to be redefined. Considerations about the utility of human work as an end in itself might come to the fore and inform discussions about whether there should be some ceiling on the replacement of human work by automation. And if such a ceiling is worthwhile, where should the line be drawn? These are not just philosophical, but existential questions.
And the clock is ticking.
#ArtificialIntelligence #futureoflifeinstitute #AI #AGI #machinelearning #ML #bigdata #singularity #OpenletterAI #OfficeforAI #EUAIAct #China_AI #AIBrettonWoods