Microsoft engineer warns company’s AI tool creates violent, sexual images, ignores copyrights

On a late night in December, Shane Jones, an artificial intelligence engineer at Microsoft, felt sickened by the images popping up on his computer.

Jones was noodling with Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI’s technology. Like with OpenAI’s DALL-E, users enter text prompts to create pictures. Creativity is encouraged to run wild.

Since the month prior, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming. In that time, he saw the tool generate images that ran far afoul of Microsoft’s oft-cited responsible AI principles.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator.

“It was an eye-opening moment,” Jones, who continues to test the image generator, told CNBC in an interview. “It’s when I first realized, wow, this is really not a safe model.”

Jones has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington. He said he doesn’t work on Copilot in a professional capacity. Rather, as a red teamer, Jones is among an army of employees and outsiders who, in their free time, choose to test the company’s AI technology and see where problems may be surfacing.

Jones was so alarmed by his experience that he started internally reporting his findings in December. While the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn’t hear back from the company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

Microsoft’s legal department told Jones to remove his post immediately, he said, and he complied. In January, he wrote a letter to U.S. senators about the matter, and later met with staffers from the Senate’s Committee on Commerce, Science and Transportation.

Now, he’s further escalating his concerns. On Wednesday, Jones sent a letter to Federal Trade Commission Chair Lina Khan, and another to Microsoft’s board of directors. He shared the letters with CNBC ahead of time.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter to Khan. He added that, since Microsoft has “refused that recommendation,” he is calling on the company to add disclosures to the product and change the rating on Google’s Android app to make clear that it’s only for mature audiences.

“Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device,’” he wrote. Jones said the risk “has been known by Microsoft and OpenAI prior to the public release of the AI model last October.”

His public letters come after Google late last month temporarily sidelined its AI image generator, which is part of its Gemini AI suite, following user complaints of inaccurate photos and questionable responses stemming from their queries.

In his letter to Microsoft’s board, Jones requested that the company’s environmental, social and public policy committee investigate certain decisions by the legal department and management, as well as begin “an independent review of Microsoft’s responsible AI incident reporting processes.”

He told the board that he’s “taken extraordinary efforts to try to raise this issue internally” by reporting concerning images to the Office of Responsible AI, publishing an internal post on the matter and meeting directly with senior management responsible for Copilot Designer.

“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson told CNBC. “When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”

‘Not very many limits’
Jones is wading into a public debate about generative AI that’s picking up heat ahead of a huge year for elections around the world, which will affect some 4 billion people in more than 40 countries. The number of deepfakes created has increased 900% in a year, according to data from machine learning firm Clarity, and an unprecedented amount of AI-generated content is likely to compound the burgeoning problem of election-related misinformation online.

Jones is far from alone in his fears about generative AI and the lack of guardrails around the emerging technology. Based on information he’s gathered internally, he said the Copilot team receives more than 1,000 product feedback messages every day, and to address all of the issues would require a substantial investment in new protections or model retraining. Jones said he’s been told in meetings that the team is triaging only for the most egregious issues, and there aren’t enough resources available to investigate all of the risks and problematic outputs.

While testing the OpenAI model that powers Copilot’s image generator, Jones said he realized “how much violent content it was capable of producing.”

“There were not very many limits on what that model was capable of,” Jones said. “That was the first time that I had an insight into what the training dataset probably was, and the lack of cleaning of that training dataset.”

Source: https://www.cnbc.com/2024/03/06/microsoft-ai-engineer-says-copilot-designer-creates-disturbing-images.html
