AI is a force for good – and Britain needs to be a maker of ideas, not a mere taker | Will Hutton
Just 11 years ago, Professor Stephen Hawking warned that explosive, unchecked growth in artificial intelligence could threaten the future of humanity.
Two years ago, more than a thousand AI leaders, fearing a “loss of control” as the technology grows exponentially towards unknown outcomes, called for an immediate six-month pause in AI research pending the establishment of common safety standards. In two weeks, France and India will co-host an international summit in Paris seeking better agreements to ensure the safety of AI, following the British-hosted 2023 summit at Bletchley Park.
All noble stuff – but such initiatives to protect human agency, indeed humanity, from the wholesale outsourcing of our decisions to machines have just been thrown back into history. Of all the many concerns about Donald Trump – from his threat to American public health and the US constitution to the potential annexation of Greenland – his revocation last week of Joe Biden’s executive order on AI safety ranks among the most dangerous. AI companies had been required to share the safety tests of new models with the US government before releasing them to the public, to ensure they would not harm America’s economic, social or security interests. In particular, the order required common testing standards for any “chemical, biological, radiological, nuclear and cybersecurity” risks. No more.
Trump attacked Biden’s pioneering safety order as burdensome, anti-innovation red tape. But what gives the threat real teeth is that the revocation was accompanied by the launch of the new Stargate project – $500bn of spending on AI over the next four years, with $100bn allocated to immediately building the necessary AI infrastructure, including energy-hungry data centres. The goal is to entrench the dominance of American AI, so that machines made in the US and American intellectual property drive the mass automation that is expected to raise productivity.
So it may. Goldman Sachs has predicted that 18% of all employment worldwide could be lost to automation in the near future – 300 million jobs globally. Indeed, as the AI observer Professor Anthony Elliott argues in his recent book, Algorithms of Anxiety: Fear in the Digital Age, outsourcing our decision-making to machines and their growing control over how we drive, what we watch or the speed at which we work is sparking an epidemic of personal anxiety. (He set out his case last week on the We Society podcast, which I host.) AI might also take our jobs. And this is before Trump’s AI tsunami hits us.
The US “free speech” tech giants will unleash a torrent of misinformation that dramatically distorts our understanding of reality, adding to a deluge of online harms that incite sexual abuse and fuel violence. There will be no scrutiny of the biased AI algorithms used to guide everything from court rulings to recommendations on hiring. Hacking will explode – and employers will use AI to monitor every second at work. There are other, more existential risks from Trump’s unilateral recklessness – some AI-driven misjudgment in gene editing, for example. Worse still, AI-directed drones could kill indiscriminately from the air, against all the rules of war. Might AI-controlled nuclear weapons fail? Few believe – including, until recently, Elon Musk – that the leading US AI companies have processes in place to manage machine-generated intelligence safely. Now, in their race for commercial advantage, they don’t have to care.
The difference in stance between Trump’s careless dismissal of these risks and the UK government’s AI Opportunities Action Plan, published earlier this month, could hardly be starker. Artificial intelligence, the plan notes, is a technology with transformative power for good. DeepMind’s AlphaFold, for example, is estimated to have saved 400m years of researcher time in examining protein structures by deploying the computing power of AI. There are opportunities across the board – in personalising education and training, in dramatically better health diagnostics, in finding patterns in vast datasets to make all forms of research more comprehensive and faster.
But there is a tightrope to walk. The plan acknowledges that there are “significant risks presented by AI” from which the public must be protected in order to foster vital trust. That means regulation that is “well designed and implemented” to protect the public while not obstructing innovation. Nor should Britain be merely a taker of AI ideas from the largely American companies on which we rely, and which are set to build most of our data centres – but a maker of AI. “To secure Britain’s future, we need homegrown AI,” the report says, and to that end it proposes a new government unit, UK Sovereign AI, with a mandate, in partnership with the private sector, to ensure Britain is present at the frontiers of artificial intelligence. The prime minister, Keir Starmer, rightly endorsed the report: he would put “the full weight of the British state” behind all 50 recommendations – the centrepiece of the government’s industrial strategy.
But to the new imperialists in Washington this is an act of deliberate rebellion – a declaration of independence from intended American hegemony. Britain had significant AI capabilities but, like Arm Holdings (sold to Japan’s SoftBank after Brexit in 2016 – to the delight of Nigel Farage, who said it was proof that Britain was “open for business” – and now at the heart of Trump’s Stargate) and DeepMind (bought by Google), they were allowed to slip away. No more. Generating national AI champions (and, I would add, protecting our civilisation) would mean strategic industrial activism “akin to Japan’s MITI [ministry of international trade and industry] or Singapore’s Economic Development Board in the 1960s”, says the Starmer-backed report.
It may be possible, even necessary, to do deals with Trump on trade and corporate taxation – but to deliver on the ambitions of the AI report is to deliver on our economic future and the kind of society we want to live in. Nor should we throw ourselves on the mercy of China. There is an opportunity for the government to stand up for Britain, and in the process forge new alliances in the EU and beyond. We need a Boston tea party of our own – no AI without representation – and to resist the attempt at American AI imperial supremacy.