Microsoft Corp. is calling on Congress to pass a comprehensive law to crack down on images and audio created with artificial intelligence — known as deepfakes — that aim to interfere in elections or maliciously target individuals.
Noting that the tech sector and nonprofit groups have taken steps to address the problem, Microsoft President Brad Smith on Tuesday said, “It has become apparent that our laws will also need to evolve to combat deepfake fraud.” He urged lawmakers to pass a “deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
The company is also pushing for Congress to require that AI-generated content be labeled as synthetic, and for federal and state laws that penalize the creation and distribution of sexually exploitative deepfakes.
The goal, Smith said, is to safeguard elections, thwart scams and protect women and children from online abuses. Congress is currently mulling several proposed bills that would regulate the distribution of deepfakes.
“Civil society plays an important role in ensuring that both government regulation and voluntary industry action uphold fundamental human rights, including freedom of expression and privacy,” Smith said in a statement. “By fostering transparency and accountability, we can build public trust and confidence in AI technologies.”
Manipulated audio and video have already stirred controversy in this year’s US presidential campaign.