Analysis: Tech giants push to water down Europe’s artificial intelligence law By Reuters

By Martin Coulter

LONDON (Reuters) – The world’s biggest technology companies have embarked on a last-ditch effort to persuade the European Union to take a lax approach to regulating artificial intelligence as they seek to avoid the risk of billions of dollars in fines.

EU lawmakers in May approved the AI Act, the world’s first comprehensive set of rules governing the technology, after months of intense negotiations between different political groups.

But until accompanying codes of practice are finalised, it remains unclear how strictly the rules will be applied to “general purpose” AI (GPAI) systems such as OpenAI’s ChatGPT, and how many copyright lawsuits and multi-million dollar fines companies may face.

The EU has invited companies, academics and others to help draft the code of practice, and has received nearly 1,000 applications, an unusually high number, according to a source familiar with the matter who asked not to be named because they were not authorized to speak publicly.

The AI code of practice will not be legally binding when it comes into force at the end of next year, but it will provide companies with a checklist they can use to demonstrate their compliance. A company that claims to be complying with the law while ignoring the code could face legal action.

“The code of practice is crucial. If we get it right, we can continue to innovate,” said Boniface de Champris, policy director at trade organization CCIA Europe, whose members include Amazon, Google and Meta.

“If it’s too narrow or too specific, it’s going to be very difficult,” he added.

DATA SCRAPING

Companies like Stability AI and OpenAI have faced questions about whether using best-selling books or photo archives to train their models without permission from their creators constitutes copyright infringement.

Under the AI Act, companies will be required to provide “detailed summaries” of the data used to train their models. In theory, a content creator who discovers that their work has been used to train an AI model could claim compensation, although this is currently being tested in court.

Some business leaders have said the required summaries should contain few details to protect trade secrets, while others say copyright holders have a right to know if their content has been used without permission.

OpenAI, which has faced criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person familiar with the matter, who asked not to be identified.

Google has also submitted an application, a spokesperson told Reuters. Meanwhile, Amazon has said it looks forward to “contributing our expertise and ensuring the code of practice is successful.”

Maximilian Gahntz, head of artificial intelligence policy at the Mozilla Foundation, the nonprofit behind the Firefox web browser, expressed concern that companies “are doing everything they can to avoid transparency.”

“The AI Act represents the best opportunity to shed light on this crucial aspect and illuminate at least part of the black box,” he said.

LARGE COMPANIES AND PRIORITIES

Some business leaders have criticized the EU for prioritizing technological regulation over innovation, and those drafting the code of practice will be working to find a compromise.

Last week, former European Central Bank chief Mario Draghi told the bloc it needed better coordinated industrial policy, faster decision-making and massive investment to keep pace with China and the United States.

Thierry Breton, an outspoken advocate of EU regulation and critic of tech companies that flout rules, resigned this week as European Internal Market Commissioner after clashing with Ursula von der Leyen, the president of the bloc’s executive arm.

In a context of rising protectionism within the EU, local tech companies are hoping that exceptions will be introduced in the AI Act to benefit European start-ups.

“We have insisted that these obligations must be manageable and, if possible, tailored to startups,” said Maxime Ricard, policy manager at Allied for Startups, a network of trade organizations representing smaller tech companies.

Once the code is published early next year, tech companies will have until August 2025 before their compliance efforts begin to be measured against it.

© Reuters. FILE PHOTO: People are silhouetted in front of a Google logo during the inauguration of a new center in France dedicated to the artificial intelligence (AI) sector, at the Google France headquarters in Paris, France, February 15, 2024. REUTERS/Gonzalo Fuentes/File Photo

Nonprofit organizations including Access Now, the Future of Life Institute and Mozilla have also asked to help write the code.

Gahntz said: “As we enter the stage where many of the AI Act’s obligations are being spelled out in more detail, we must be careful not to allow big AI players to dilute important transparency mandates.”
