President Biden on Monday will sign what the White House is calling a ‘landmark’ executive order that contains the ‘most sweeping actions ever taken to protect Americans from the potential risks of AI systems.’ 

Among them is a requirement that artificial intelligence developers share the results of their safety tests – a practice known as red-team testing – with the federal government. 

‘In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests,’ the White House says. ‘These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.’ 

‘The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release,’ the White House continued. 


‘The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board,’ it added. ‘The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.’ 

In response to concerns about AI putting people out of work, the White House says the executive order will ‘[p]roduce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.’ 


The White House also says the executive order is aimed at protecting Americans from AI-enabled fraud by establishing standards and best practices to differentiate between AI-generated and authentic content. 

‘The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content,’ it says. ‘Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.’ 

The moves by the White House come after the Senate hosted its first-ever bipartisan AI forum last month. 

‘More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation,’ the White House says. 


This post appeared first on FOX NEWS