France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, which is expected to accelerate negotiations at the European level.
In a joint paper released at the weekend, the governments of the three countries agreed to support "mandatory self-regulation through codes of conduct" for so-called foundation models of AI - which are designed to produce a broad range of outputs - but they oppose "un-tested norms."
"Together we underline that the AI Act regulates the application of AI and not the technology as such," the joint paper said.
"The inherent risks lie in the application of AI systems rather than in the technology itself."
The European Commission, the European Parliament and the EU Council are negotiating how the bloc should position itself on this topic.
Generative AI platforms such as ChatGPT are trained on vast amounts of data to enable them to answer questions, even complex ones, in human-like language.
They are also used to generate and manipulate imagery.
But the technology has also triggered warnings about the dangers of misuse, from blackmailing people with "deepfake" images to the spread of harmful disinformation.
'Code of conduct', not sanctions
The paper explains that developers of foundation models would have to define model cards, which are used to provide information about a machine learning model.
"The model cards shall include the relevant information to understand the functioning of the model, its capabilities and its limits and will be based on best practices within the developers community," the paper said.
"An AI governance body could help to develop guidelines and could check the application of model cards," the joint paper said.
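The joint paper leaves the exact contents of a model card to "best practices within the developers community" rather than prescribing a format. Purely as an illustration (the field names below are hypothetical, not taken from the paper), such a card could be represented and checked along these lines:

```python
# Illustrative sketch only: the joint paper does not define a model-card schema.
# All field names here are hypothetical, modeled on common model-card practice.
model_card = {
    "model_name": "example-foundation-model",      # hypothetical identifier
    "description": "General-purpose text model",
    "capabilities": ["text generation", "question answering"],
    "limitations": [
        "may produce inaccurate answers",
        "not suitable for legal or medical advice",
    ],
    "intended_use": "Research and application prototyping",
}

def missing_fields(card: dict) -> list[str]:
    """Return required fields absent from a card (empty list means complete)."""
    required = {"model_name", "capabilities", "limitations", "intended_use"}
    return sorted(required - card.keys())

print(missing_fields(model_card))  # prints [] - all required fields present
```

A governance body of the kind the paper envisages could run exactly this sort of completeness check when reviewing developers' model cards.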
Initially, no sanctions should be imposed.
If violations of the code of conduct are identified after a certain period of time, however, a system of sanctions could be set up.
Regulating AI application, not development
Germany's Economy Ministry, which is in charge of the topic together with the Ministry of Digital Affairs, said laws and state control should not regulate AI itself, but rather its application.
Digital Affairs Minister Volker Wissing said he was very pleased an agreement had been reached with France and Italy to regulate only the use of AI.
"We need to regulate the applications and not the technology if we want to play in the top AI league worldwide," Wissing said.
Political and tech leaders tackle AI safety at inaugural summit
As governments around the world seek to capture the economic benefits of AI, Britain in November hosted its first AI safety summit.
The German government is hosting a digital summit in Jena, in the state of Thuringia, on Monday and Tuesday that will bring together representatives from politics, business and science.
Issues surrounding AI will also be on the agenda when the German and Italian governments hold talks in Berlin on Wednesday.
Last week, Jean-Noël Barrot, the French minister for Digital Development, extended an invitation to Sam Altman, the former head of OpenAI.
"Sam Altman, his team and their talents are welcome if they wish in France where we are accelerating to put artificial intelligence at the service of the common good," Barrot wrote on social media on Saturday.
The invitation came the day after a major announcement concerning the development of AI in France: business leaders Xavier Niel (Iliad) and Rodolphe Saade (CMA CGM) announced on Friday the creation in Paris of an AI research laboratory called "Kyutai", endowed with €300 million.