
Microsoft Engineer Says Company’s AI Tool Generates Sexual And Violent Images

2024-03-07 05:44:04

Mr Jones claims he previously warned Microsoft management but saw no action

A Microsoft AI engineer, Shane Jones, raised concerns in a letter on Wednesday. He alleges that the company’s AI image generator, Copilot Designer, lacks safeguards against producing inappropriate content, such as violent or sexual imagery. Mr Jones claims he previously warned Microsoft management but saw no action, prompting him to send the letter to the Federal Trade Commission and Microsoft’s board.

“Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers,” Mr Jones states in the letter, which he published on LinkedIn. He lists his title as “principal software engineering manager”.

In response to the allegations, a Microsoft spokesperson denied neglecting safety concerns, The Guardian reported, emphasizing the existence of “robust internal reporting channels” for addressing issues related to generative AI tools. Mr Jones has not yet responded to the spokesperson’s statement.

The central concern raised in the letter is Microsoft’s Copilot Designer, an image-generation tool powered by OpenAI’s DALL-E 3 system, which creates images from text prompts.

This incident is part of a broader trend in the generative AI field, which has seen a surge in activity over the past year. Alongside this rapid development, concerns have arisen regarding the potential misuse of AI for spreading disinformation and generating harmful content that promotes misogyny, racism, and violence.

“Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear,” Mr Jones states in the letter, which included examples of image generations. “It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car.”

Microsoft countered the accusations by stating that it has dedicated teams specifically tasked with evaluating potential safety concerns within its AI tools. The company also says it facilitated meetings between Mr Jones and its Office of Responsible AI, suggesting a willingness to address his concerns through internal channels.

“We are committed to addressing any concerns employees have in accordance with our company policies, and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety,” a spokesperson for Microsoft said in a statement to The Guardian.

Last year, Microsoft unveiled Copilot, its “AI companion”, and has extensively promoted it as a groundbreaking way to integrate artificial intelligence tools into both business and creative work. Positioning it as a user-friendly product for the general public, the company showcased Copilot in a Super Bowl advertisement last month, emphasizing its accessibility with the slogan “Anyone. Anywhere. Any device.” Mr Jones contends that portraying Copilot Designer as universally safe to use is reckless, and that Microsoft is neglecting to disclose widely recognized risks linked to the tool.

 
