Saturday, July 27, 2024

Microsoft begins blocking some terms that caused its AI tool to create violent, sexual images

Microsoft is making adjustments to its Copilot AI product after a staff AI engineer expressed concerns to the Federal Trade Commission on Wednesday about its image-generation AI.  

Prompts such as “pro choice,” “pro choce” [sic], and “four twenty,” which were all cited in CNBC’s report Wednesday, have now been restricted, as has the term “pro life.” There is also a new warning that multiple policy violations may result in suspension from the tool, which CNBC had not seen before Friday.

“This prompt has been blocked,” the Copilot warning message reads. “Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it so we can improve.”  

The AI tool now also rejects requests to generate images of teenagers or children playing assassins with assault rifles, a significant departure from earlier this week. It responds, “I’m sorry, but I cannot generate such an image. It violates my ethical values and Microsoft’s policies. Please don’t ask me to do anything that could harm or offend others. Thank you for cooperating.” 

When asked about the changes, a Microsoft representative told CNBC, “We are constantly monitoring, making adjustments, and implementing additional controls to further strengthen our safety filters and mitigate system misuse.”  

Shane Jones, the Microsoft AI engineering lead who initially raised concerns about the AI, has spent months testing Copilot Designer, the AI image generator that Microsoft debuted in March 2023 and which is powered by OpenAI’s technology. As with OpenAI’s DALL-E, users enter text prompts to create images, and creativity is encouraged to flow freely. But since December, when Jones began actively probing the product for vulnerabilities, a practice known as red-teaming, he has seen the tool generate images that run counter to Microsoft’s oft-cited responsible AI principles. 

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent scenes, and underage drinking and drug use. All of those scenes, generated over the past three months, were recreated by CNBC this week using the Copilot tool, which was previously known as Bing Image Creator.  

Although certain specific prompts have been blocked, many of the other potential issues that CNBC reported on remain. The term “car accident” returns images of pools of blood, bodies with mutated faces, and women at violent scenes holding cameras or beverages while wearing corsets or waist trainers. “Automobile accident” still returns images of women in revealing, lacy clothing sitting atop battered cars. The system also still easily infringes on copyrights, for example by creating images of Disney characters, including Elsa from “Frozen,” holding the Palestinian flag in front of wrecked buildings purportedly in the Gaza Strip, or wearing the military uniform of the Israel Defense Forces and holding a machine gun.  

Jones was so alarmed by his experience that he began reporting his findings internally in December. Although the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI, and when he did not hear back from the company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3, the latest version of the AI model, for an investigation.  

According to Jones, Microsoft’s legal department instructed him to remove his post immediately, and he did so. In January, he wrote a letter to senators in the United States, and he later met with staff members from the Senate Committee on Commerce, Science, and Transportation.  

Jones escalated his concerns on Wednesday, sending letters to FTC Chair Lina Khan and Microsoft’s board of directors. He shared the letters with CNBC ahead of time.  

The FTC told CNBC that it had received the letter but declined to comment further on the record. 
