Saturday, June 3, 2023

OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation


The EU is finalizing new AI regulations, but OpenAI CEO Sam Altman says he has ‘many concerns’ about the law. The EU AI Act would require the company to disclose details of its training methods and data sources.


OpenAI CEO Sam Altman has warned that the company might pull its services from the European market in response to AI regulation being developed by the EU.

Speaking to reporters after a talk in London, Altman said he had “many concerns” about the EU AI Act, which is currently being finalized by lawmakers. The terms of the Act have been expanded in recent months to include new obligations for makers of so-called “foundation models” — large-scale AI systems that power services like OpenAI’s ChatGPT and DALL-E.

“The details really matter,” said Altman, according to a report from The Financial Times. “We will try to comply, but if we can’t comply we will cease operating.”

In comments reported by Time, Altman said the concern was that systems like ChatGPT would be designated “high risk” under the EU legislation. This means OpenAI would have to meet a number of safety and transparency requirements. “Either we’ll be able to solve those requirements or not,” said Altman. “[T]here are technical limits to what’s possible.”

In addition to technical challenges, disclosures required under the EU AI Act also present potential business threats to OpenAI. One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”) and provide “summaries of copyrighted data used for training.”

OpenAI used to share this sort of information but has stopped as its tools have become increasingly commercially valuable. In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to prevent its work from being copied by rivals.

In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

The recent comments from Altman help fill out a more nuanced picture of the company’s desire for regulation. Altman has told US politicians that regulation should mostly apply to future, more powerful AI systems. By contrast, the EU AI Act is much more focused on the current capabilities of AI software.
