
Microsoft President Voices Concerns over Deepfakes, Calls for AI Regulation

Smith was delivering a speech in Washington about AI-generated deepfakes

Aditya Saikrishna

UNITED STATES: Brad Smith, the president of Microsoft, has expressed serious concern about artificial intelligence (AI) as deepfakes continue to proliferate. Deepfakes are manipulated images or videos that appear authentic but are in fact fabricated with editing or AI software.

In recent months, fake images have circulated online depicting Donald Trump's arrest, Elon Musk walking alongside General Motors CEO Mary Barra, and even Pope Francis sporting a white puffer jacket and sunglasses. Generative AI has made such deepfakes far easier to produce, and Smith has now voiced his apprehensions about the technology.


During a speech in Washington, he addressed the risks posed by AI-generated deepfakes, singling out issues such as foreign cyber-influence operations conducted by governments including those of Russia, China, and Iran.

Smith stressed the urgency of taking steps to protect against the use of AI to manipulate legitimate content in order to deceive or defraud people.


He proposed licensing AI with obligations to safeguard various aspects, including physical security, cybersecurity, and national security.

Smith also highlighted the need for new or updated export controls to prevent the theft or misuse of AI models in violation of a country's export control requirements.


The call for AI regulation extends beyond Microsoft’s president. Sam Altman, the CEO of OpenAI and business partner of Smith, echoed similar sentiments during a recent Senate panel hearing in the United States.

Altman emphasized the potential dangers of AI and stressed the need for regulation to mitigate potential harms. He expressed OpenAI’s willingness to collaborate with the government to prevent adverse consequences of AI.

Altman acknowledged the possibility of AI technology going “quite wrong” and stressed the importance of being proactive in regulating its development.

He further emphasized the need for collaboration between technology companies and the government to establish safeguards and prevent potential misuse.

The concerns raised by Smith and Altman highlight the growing recognition within the tech industry of the potential risks associated with AI, particularly concerning deepfakes.

The proliferation of manipulated content can deceive and sway public opinion, posing threats to many aspects of society.

As AI development progresses, regulation and proactive measures to address these challenges become increasingly crucial to ensuring the responsible and ethical use of this powerful technology.

