The EU plans to regulate generative artificial intelligence at three tiers: the more powerful the model, the stricter the rules.

The European Union is considering a tiered approach to regulating generative artificial intelligence (AI) models. According to a proposal, the EU would set rules for three tiers of foundation models and require additional external testing of the most powerful AI technology.
Under the proposed rules, the first tier would cover all foundation models. The second tier would target "very powerful" foundation models, distinguished by the computing power used to train their large language models; the proposal says these models may be "beyond the current state of the art and may not be fully understood". The third tier, called large-scale general-purpose AI systems, would cover the most popular AI tools, measured by their total number of users.
Chatbots such as ChatGPT rely on large language models, AI algorithms developed by training on vast datasets. Generative AI software built on these models can respond to human prompts with text, images and video, and its level of skill is at times both surprising and worrying.
Many countries around the world are now trying to put guardrails around generative AI to address the security risks posed by this fast-moving technology. The EU is expected to become the first Western government to impose mandatory rules on artificial intelligence. Under its proposed Artificial Intelligence Act, companies that develop and deploy AI systems would have to conduct risk assessments and label AI-generated content, and biometric surveillance would be banned outright, among other measures. Negotiators hope to advance the legislation at their next meeting on October 25, with the goal of finalizing the bill before the end of the year.
At a meeting earlier this month, representatives of the three EU institutions broadly backed the tiered approach to oversight, while technical experts put forward more specific suggestions. According to an October 16 document seen by Bloomberg, these ideas are still taking shape and may change as negotiations get under way.
According to people familiar with the matter, the aim is to avoid placing an excessive regulatory burden on new start-ups while still keeping large companies in check.
The following are the proposal's requirements for each of the three tiers of oversight:
1. All foundation models.
AI developers would have to meet transparency requirements before placing any model on the market. They would have to document the model and its training process, including the results of internal "red-teaming", in which independent experts try to push the model into bad behavior to gauge how safe it is. Developers would also have to evaluate their models against standardized protocols.
Once a model is on the market, the company would have to provide information to businesses that use the technology and enable them to test the foundation model.
Companies would have to provide a "sufficiently detailed" summary of the content used to develop the model and explain how they handle copyright, including ensuring that rights holders can opt out of having their content used to train models. Companies would also have to make sure AI-generated content can be distinguished from other material.
Negotiators propose defining a foundation model as a system capable of "performing various unique tasks".
2. Very powerful foundation models.
Companies developing technology at this tier would face stricter rules. Before these models could be placed on the market, they would have to undergo regular reviews by external experts vetted by a newly established EU AI Office, and the test results would be sent to that body.
Companies would also have to put systems in place to help identify systemic risks. Once these models are on the market, the EU would have independent auditors and researchers carry out compliance checks, including verifying whether the companies follow the transparency rules.
Negotiators are also considering creating a forum where companies could discuss best practices, along with voluntary codes of conduct that would be recognized by the European Commission.
Very powerful foundation models would be classified by the computing power needed to train them, using a measure called FLOPs (floating-point operations). The exact threshold would be set by the European Commission at a later stage and updated as necessary.
Companies could contest such an assessment. Conversely, the Commission could, after an investigation, deem a model "very powerful" even if it does not reach the threshold. Negotiators are also weighing a model's "potential impact", based on the number of high-risk AI applications built on top of it, as another way to classify the technology.
3. Large-scale general-purpose AI systems.
These systems would also have to be red-teamed by external experts to uncover vulnerabilities, with the results sent to the Commission's AI Office. Companies would also have to put risk assessment and mitigation systems in place.
The EU would treat any system with 10,000 registered business users or 45 million registered end users as a large-scale general-purpose AI system. The Commission would decide later how users are to be counted.
Companies could appeal their systems' designation as large-scale general-purpose AI. Conversely, the EU could apply these additional rules to other systems or models that fall below the thresholds if they may "pose risks".
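Purely as an illustration of how the tiering described above might work in practice, the following Python sketch encodes the reported thresholds. The user-count figures come from the proposal as reported; the training-compute cutoff is a placeholder assumption, since the Commission has not yet set one, and the real rules would also depend on how users are counted and on any Commission designations.

# A hypothetical sketch of the proposal's tiering logic, not an official rule.
# The user thresholds are from the reported proposal; the compute threshold
# is a placeholder assumption, since the Commission has not set one yet.

PLACEHOLDER_TRAINING_FLOPS = 1e25   # assumption, not a figure from the proposal
BUSINESS_USER_THRESHOLD = 10_000    # registered business users (reported)
END_USER_THRESHOLD = 45_000_000     # registered end users (reported)

def classify_model(training_flops, business_users, end_users,
                   designated_by_commission=False):
    """Return the set of regulatory tiers a model would fall into."""
    tiers = {"foundation model"}  # tier 1: every foundation model
    # Tier 2: very powerful models, by training compute or Commission designation.
    if training_flops >= PLACEHOLDER_TRAINING_FLOPS or designated_by_commission:
        tiers.add("very powerful foundation model")
    # Tier 3: large-scale general-purpose AI systems, by user counts.
    if business_users >= BUSINESS_USER_THRESHOLD or end_users >= END_USER_THRESHOLD:
        tiers.add("large-scale general-purpose AI system")
    return tiers

# Example: a widely used chatbot with 50 million registered end users.
print(classify_model(training_flops=5e24, business_users=2_000, end_users=50_000_000))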
The proposal also indicates that further discussion is needed on guardrails to ensure that neither general-purpose AI systems nor very powerful foundation models generate illegal or harmful content.
The additional rules for large-scale general-purpose AI systems and very powerful foundation models would be overseen by the new AI Office. The agency could request documents, organize compliance tests, create a registry for vetting red-team testers and conduct investigations. As "a last resort", it could even suspend a model.
Although the AI Office would sit within the European Commission, it would be "independent". The EU could levy fees on large-scale general-purpose AI systems and very powerful foundation models to fund the office's staff.
(This article is from The Paper.)