A Microsoft Corp software engineer sent letters to the company’s board, lawmakers and the Federal Trade Commission warning that the tech giant is not doing enough to safeguard its AI image generation tool, Copilot Designer, from creating abusive and violent content.
Shane Jones said he discovered a security vulnerability in OpenAI’s latest DALL-E image generator model that allowed him to bypass guardrails that prevent the tool from creating harmful images. The DALL-E model is embedded in many of Microsoft’s AI tools, including Copilot Designer.
Jones said he reported the findings to Microsoft and “repeatedly urged” the Redmond, Washington-based company to “remove Copilot Designer from public use until better safeguards could be put in place,” according to a letter sent to the FTC on Wednesday that was reviewed by Bloomberg.
“While Microsoft is publicly marketing Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers,” Jones wrote. “Microsoft Copilot Designer does not include the necessary product warnings or disclosures needed for consumers to be aware of these risks.”
In the letter to the FTC, Jones said Copilot Designer had a tendency to randomly generate an “inappropriate, sexually objectified image of a woman in some of the pictures it creates.” He also said the AI tool created “harmful content in a variety of other categories including: political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few.”
The FTC confirmed it received the letter but declined to comment further.
The broadside echoes mounting concerns about the tendency of AI tools to generate harmful content. Last week, Microsoft said it was investigating reports that its Copilot chatbot was generating responses users called disturbing, including mixed messages on suicide. In February, Alphabet Inc.’s flagship AI product, Gemini, took heat for generating historically inaccurate scenes when prompted to create images of people.
Jones also wrote to the Environmental, Social and Public Policy Committee of Microsoft’s board, which includes Penny Pritzker and Reid Hoffman as members. “I don’t believe we need to wait for government regulation to ensure we are transparent with consumers about AI risks,” Jones said in the letter. “Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”
CNBC reported the letters’ existence earlier.
In a statement, Microsoft said it’s “committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety.”
OpenAI didn’t respond to a request for comment.
Jones said he expressed his concerns to the company several times over the past three months. In January, he wrote to Democratic Senators Patty Murray and Maria Cantwell, who represent Washington State, and Representative Adam Smith. In one letter, he asked lawmakers to investigate the risks of “AI image generation technologies and the corporate governance and responsible AI practices of the companies building and marketing these products.”
The lawmakers didn’t immediately respond to requests for comment.
© 2024 Bloomberg