
Regulating AI Should Be a Joint Endeavor

2024-06-12 09:52:45


By Staff Reporters

Regulation of AI is now firmly on the global agenda. On May 21, 16 companies from different countries and regions, including AI-tech giants OpenAI, Google, Amazon and Microsoft, signed the Frontier AI Safety Commitments at the AI Seoul Summit. On the same day, the EU AI Act received final approval.

Since the public release of ChatGPT at the end of 2022, awareness has grown of the risks surrounding advanced AI systems, which may challenge many norms of human life.

Generally speaking, AI risks perpetuating discrimination, spreading misinformation, and exposing sensitive personal information. In one reported case, a training data extraction attack on GPT-2 recovered personally identifiable information, including phone numbers and email addresses, that had been published online.

Large language models faithfully mirror the language found in their training data, which may come from sources as varied as online books and internet forums.

Unlike humans, who usually weigh societal and personal factors such as listeners' emotional responses, social impact, and ideological attitudes when they speak, these models lack such judgment, and this incapacity may aggravate discrimination against certain groups of people. It is therefore reasonable to conclude that the quality of AI training data cannot be fully guaranteed.

AI-induced misinformation may get worse. Unlike discriminatory attitudes, which in some cases develop unconsciously over time as people absorb information without careful reflection, misinformation is false information that spreads regardless of any intent to mislead, eroding societal trust in shared information.

On the regulatory front, governments need to establish agile AI regulatory agencies and fund them adequately. The annual budget of the AI Safety Research Institute in the United States is currently $10 million, which may sound substantial but pales in comparison to the $6.7 billion budget of the U.S. Food and Drug Administration.

Countries across the globe should not only establish laws grounded in their existing legal systems, but also seek viable approaches to international cooperation. The AI field requires stricter risk assessments and enforceable measures rather than reliance on vague model evaluations. Meanwhile, AI development companies should be required to prioritize safety and to demonstrate that their systems will not cause harm. In a nutshell, AI developers must take responsibility for ensuring the safety of their technologies.

The international community has now decided that comprehensive regulation of AI needs to be accelerated. It needs to be a united effort, and every company involved in AI development must commit to upholding these standards to protect societal values and trust. Only through vigilant regulation and cooperation can we harness the full potential of AI while mitigating its risks.

Editor: 林雨晨
