Can governments turn talk about AI safety into action?

Andriy Onufriyenko/Getty Images

At the Asia Tech x Singapore 2024 summit, several speakers were ready for high-level discussions and heightened awareness of the importance of artificial intelligence (AI) safety to be turned into action. Many want to prepare everyone, from organizations to individuals, with the tools to deploy this technology properly.

Also: How to use ChatGPT to analyze PDF files for free

“Take pragmatic and practical action. That’s what’s missing,” said Ieva Martinekaite, head of research and innovation at Telenor Group, who spoke to ZDNET on the sidelines of the summit. Martinekaite is a board member of the Open AI Lab in Norway and a member of the Singapore Advisory Council on the Ethical Use of AI and Data. She also served as an expert member on the European Commission’s High-Level Expert Group on AI from 2018 to 2020.

Martinekaite noted that senior officials are also beginning to recognize this problem.

Delegates at the conference, which included senior government ministers from several nations, joked that they were simply burning jet fuel by attending high-level meetings on AI safety, most recently in South Korea and the United Kingdom, since they still have little to show in terms of concrete measures.

Martinekaite said it is time for governments and international bodies to start implementing guidance, frameworks and benchmarking tools to help businesses and users ensure they are deploying and consuming AI safely. She added that continued investments are also needed to facilitate such efforts.

AI-generated deepfakes, in particular, carry significant risks and can affect critical infrastructure, she warned. They are already a reality today: images and videos of politicians, public figures, and even Taylor Swift have surfaced.

Also: There are more political deepfakes than you think

Martinekaite added that the technology is now more sophisticated than it was a year ago, making it increasingly difficult to identify deepfakes. Cybercriminals can exploit this technology to steal credentials and gain illegal access to systems and data.

“Hackers don’t hack, they log in,” she said. This is a critical issue in some sectors, such as telecommunications, where deepfakes can be used to penetrate critical infrastructure and amplify cyberattacks. Martinekaite noted that employee IDs can be spoofed and used to access data centers and IT systems, adding that if this issue is not addressed, the world risks a potentially devastating attack.

Users should be equipped with the training and tools necessary to identify and combat such risks, she said. Technologies to detect and prevent AI-generated content, including text and images, also need to be developed, such as digital watermarking and media forensics. Martinekaite believes these should be implemented alongside legislation and international collaboration.
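To make the watermarking idea concrete, the sketch below is a deliberately simplified Python illustration: it hides a short identifier in the least-significant bits of an image and reads it back. This is a toy example only; the tag name, file names, and functions are hypothetical, and production watermarks of the kind Martinekaite describes are designed to survive compression, cropping, and other edits, which this sketch is not.

```python
# Toy illustration of invisible watermarking: hide a short tag in the
# least-significant bit of each pixel channel, then read it back.
# Real content-provenance watermarks are robust to editing; this is not.
from PIL import Image

TAG = b"GENAI"  # hypothetical payload marking AI-generated content

def embed_tag(in_path: str, out_path: str, tag: bytes = TAG) -> None:
    img = Image.open(in_path).convert("RGB")
    flat = [c for px in img.getdata() for c in px]          # flatten channels
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                       # overwrite lowest bit
    pixels = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    out = Image.new("RGB", img.size)
    out.putdata(pixels)
    out.save(out_path, format="PNG")                         # lossless keeps the bits

def read_tag(path: str, length: int = len(TAG)) -> bytes:
    flat = [c for px in Image.open(path).convert("RGB").getdata() for c in px]
    result = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (flat[i * 8 + j] & 1) << j               # reassemble each byte
        result.append(byte)
    return bytes(result)

# embed_tag("photo.png", "photo_tagged.png")
# print(read_tag("photo_tagged.png"))  # b'GENAI' if the tag survived
```

The point of the sketch is only to show why watermarking needs to be paired with detection tooling and media forensics: the mark is machine-readable but invisible to a human viewer, so it is worthless unless platforms and investigators actually check for it.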

However, she noted that legislative frameworks should not regulate the technology itself, or AI innovation could be stifled, affecting potential advances in healthcare, for example.

Instead, regulations should address where deepfake technology has the greatest impact, such as critical infrastructure and government services. Requirements such as watermarking, source authentication and security barriers for data access and tracking can be implemented for high-risk sectors and relevant technology providers, Martinekaite said.
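One way to read the "source authentication" requirement is as signed provenance metadata attached to a piece of content, which a consuming system can verify before trusting it. The following is a minimal sketch using only Python's standard library, under the assumption of a shared secret key; real schemes such as C2PA content credentials use public-key certificates and standardized manifests, and every name here is illustrative.

```python
# Hedged sketch of source authentication: a publisher signs a provenance
# manifest for a media file; a verifier checks the signature and the file hash.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical; real systems use asymmetric keys

def sign_manifest(media_bytes: bytes, source: str) -> dict:
    manifest = {
        "source": source,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and unsigned.get("sha256") == hashlib.sha256(media_bytes).hexdigest()
    )

media = b"...video bytes..."
m = sign_manifest(media, source="verified-newsroom.example")
print(verify_manifest(media, m))         # True: signature and hash match
print(verify_manifest(media + b"x", m))  # False: the content was altered
```

A requirement like this is what makes the "high-risk sectors" framing workable: a telecom operator or government service can refuse to act on media whose manifest does not verify, without regulators having to constrain the underlying generative models.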

According to Natasha Crampton, Microsoft’s chief responsible AI officer, the company has seen an increase in deepfakes, non-consensual imagery, and cyberbullying. During a panel discussion at the summit, she said Microsoft is focusing on tracking deceptive AI-generated election content online, especially with several elections taking place this year.

Stefan Schnorr, state secretary of Germany’s Federal Ministry of Digital and Transport, said deepfakes can potentially spread false information and mislead voters, resulting in a loss of trust in democratic institutions.

Also: What TikTok content credentials mean to you

Protecting against this also involves a commitment to safeguarding personal data and privacy, Schnorr added. He stressed the need for technology companies to adhere to cyber laws implemented to boost AI safety, such as the EU’s AI Act, and for international cooperation to that end.

If left unchecked, deepfakes could affect decision-making, said Zeng Yi, director of the Brain-Inspired Cognitive Intelligence Laboratory and the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences.

Zeng also emphasized the need for international cooperation and suggested that a deepfake “observatory” should be established worldwide to drive better understanding and information sharing on disinformation, in an effort to prevent such content from spreading rampantly across countries.

A global infrastructure that fact-checks disinformation can also help inform the general public about deepfakes, he said.

Singapore updates generative AI governance framework

Meanwhile, Singapore has published the final version of its governance framework for generative AI, which extends its existing AI governance framework, first introduced in 2019 and last updated in 2020.

The Model AI Governance Framework for Generative AI sets out a “systematic and balanced” approach that Singapore says weighs the need to address GenAI concerns against the need to drive innovation. It covers nine dimensions, including incident reporting, content provenance, security, and testing and assurance, and provides suggestions on initial steps to take.

At a later stage, the AI Verify Foundation, the group behind the framework, will add more detailed guidelines and resources under the nine dimensions. To support interoperability, it will also map the governance framework to international AI guidelines, such as the G7 Hiroshima Principles.

Also: Apple’s AI features and Nvidia’s AI training speed top innovation index

Good governance is as important as innovation in realizing Singapore’s vision of AI for the public good, and can help enable sustained innovation, said Josephine Teo, Singapore’s Minister for Communications and Information and Minister-in-charge of Smart Nation and Cybersecurity, during her speech at the summit.

“We need to recognize that it is one thing to deal with the harmful effects of AI, but another to prevent them from occurring in the first place… through proper design and upstream measures,” Teo said. She added that risk mitigation measures are essential and that new regulations that are “evidence-based” can result in more meaningful and impactful AI governance.

In addition to establishing AI governance, Singapore is also looking to expand its governance capabilities, such as building a center for advanced technology in online safety that focuses on malicious AI-generated online content.

Users must also understand the risks. Teo noted that it is in the public interest for organizations using AI to understand both its benefits and limitations.

Teo believes businesses should then equip themselves with the right mindset, capabilities, and tools to do so. She added that Singapore’s Model AI Governance Framework offers practical guidelines on the safeguards that should be implemented. It also sets baseline requirements for AI deployments, regardless of a company’s size or resources.

According to Martinekaite, for Telenor, AI governance also means monitoring the use of new AI tools and reassessing potential risks. The Norwegian telecommunications company is currently testing Microsoft Copilot, which is built on OpenAI’s technology, against Telenor’s own ethical AI principles.

Asked whether OpenAI’s recent dispute involving its voice mode had affected her trust in using the technology, Martinekaite said major companies that run critical infrastructure, such as Telenor, have the capacity and controls in place to ensure they deploy trusted AI tools, including third-party platforms such as OpenAI. This also includes working with partners such as cloud providers and smaller solution providers to understand and learn about the tools they use.

Telenor created a task force last year to oversee the adoption of responsible AI. Martinekaite explained that this involves establishing principles that its employees must adhere to, creating rulebooks and tools to guide the use of AI, and establishing standards that its partners, including Microsoft, must adhere to.

These measures are intended to ensure that the technology the company uses is legal and secure, she added. Telenor also has an internal team reviewing its governance and risk management structures to account for its use of GenAI. It will evaluate the tools and measures needed to ensure it has the right governance structure to manage its use of AI in high-risk areas, Martinekaite said.

Also: Enterprise cloud security flaws ‘concerning’ as AI threats accelerate

As organizations use their own data to train and fine-tune large language models and smaller AI models, Martinekaite believes businesses and AI developers will increasingly discuss how this data is used and managed.

She also believes that the need to comply with new laws, such as the EU AI Act, will further drive such conversations, as companies work to ensure they meet the additional requirements for high-risk AI deployments. For example, they will need to know how their AI training data is selected and tracked.

There is much more scrutiny and concern from organizations, which will want to closely examine their contractual agreements with AI developers.



