Editorial: Artificial intelligence poses real risks

Published: 06-07-2024 8:01 PM

Modified: 06-10-2024 3:52 PM


“Existential threat” has become the go-to catchphrase for warning about all sorts of looming catastrophes, from climate change to a second presidential term for Donald Trump. Add to the list “artificial general intelligence,” or A.G.I., which, The New York Times informs us, is the industry term for computer programs capable of doing anything humans can do (but maybe better, or worse).

Given how far artificial intelligence has already come, this is a terrifying prospect, and not just to technophobes. A chorus of Cassandras among industry insiders sounded the alarm this past week in an open letter alleging that OpenAI and other “frontier” AI companies are recklessly chasing profits in the race to develop A.G.I. at the expense of assessing, monitoring and ensuring safety.

The letter, signed by a group that includes 11 former and current employees of OpenAI (which began life as a nonprofit research lab and captured the public imagination with its 2022 release of ChatGPT), opens by acknowledging “the potential of AI technology to deliver unprecedented benefits to humanity.”

Then the other shoe dropped. The letter warned of risks ranging “from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.” Thus, the existential threat.

Nor is the threat remote in time. Daniel Kokotajlo, a former researcher at OpenAI who helped orchestrate the group’s letter, believes there is a 50% chance that A.G.I. will become a reality in just three years, given the rapid progress in the field. “OpenAI is really excited about building A.G.I.,” he told the Times, “and they are recklessly racing to be the first there.”

For its part, the now privately held company, in which Microsoft owns a 49% stake, says it believes in its “scientific approach to assessing risk.” The problem is that virtually nobody outside the company knows what that approach entails, or why the company can be relied on to adhere to it rigorously when a boatload of money is at stake. Other tech companies, such as those in social media, have ignored serious risks in rolling out their technology, in the process doing substantial harm to individuals, to social cohesion and to democratic government. A.G.I. would ramp up the potential for disaster exponentially.

Last month, two senior researchers at OpenAI who focused on managing risks of powerful AI tools left the company, with one of them saying publicly that “safety culture and processes have taken a back seat to shiny products.”

And when Kokotajlo left the company in April, he told his safety team that he had lost confidence that OpenAI would behave responsibly as its systems approach A.G.I. capability. “The world isn’t ready, and we aren’t ready,” he said. “And I’m concerned that we are rushing forward regardless and rationalizing our actions.”

Given these concerns, an alert government would have stepped in long ago to provide effective oversight and, if required, regulation, as the European Union has done. U.S. regulators are only now increasing their scrutiny of the industry.

Until government steps in, insiders are left to serve as the conscience of the industry. As the open letter puts it: “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.” But the letter goes on to say that their ability to hold companies to account is hampered by broad confidentiality and non-disparagement agreements that prevent them from speaking out about their safety concerns. Congress could at least rouse itself from its slumber long enough to ensure that those critical voices are not silenced.

Ultimately, though, the public needs to get involved. This technology is too powerful to let unaccountable corporations shroud it in secrecy until they spring it on an unsuspecting world. It seems to us that the question to be asked of each new breakthrough technology is not what is its highest and best use, but what is its lowest and worst use. Because bad actors will always find a way to use these advances for their own nefarious ends. The warning light is blinking red.