Microsoft Engineer Sounds Alarm On AI Image-Generator To US Officials And Company’s Board
A Microsoft engineer is raising concerns about offensive and harmful images that the company’s artificial intelligence image-generator tool can create too easily. On Wednesday, he sent letters to US regulators and the tech giant’s board of directors, urging them to intervene.
Shane Jones told The Associated Press that he considers himself a whistleblower and visited with U.S. Senate staffers last month to discuss his concerns.
The Federal Trade Commission confirmed receiving his letter on Wednesday but declined to comment further.
Microsoft said it is committed to addressing employee concerns in accordance with its company policies and appreciates Jones’ “effort in studying and testing our latest technology to further enhance its safety.” It said it had recommended he use the company’s “robust internal reporting channels” to investigate and address the issues. CNBC was first to report on the letters.
Jones, a principal software engineering lead who works on AI products for Microsoft’s retail customers, said he had spent three months trying to address his safety concerns about Microsoft’s Copilot Designer, a tool that generates images from written prompts. The tool is built on DALL-E 3, an AI image-generator developed by Microsoft’s close business partner, OpenAI.
“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user,” he said in his letter to FTC Chair Lina Khan. “For example, when using just the prompt, ‘car accident’, Copilot Designer has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”
Other harmful content includes “political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion, to name a few,” he told the FTC. Jones said he repeatedly urged the company to take the product off the market until it is safer, or at least to change the app’s age rating to make clear it is intended for mature audiences.
His letter to Microsoft’s board asks it to launch an independent investigation into whether Microsoft is marketing unsafe products “without disclosing known risks to consumers, including children.”
This is not the first time Jones has publicly voiced his concerns. He said Microsoft initially advised him to take his findings directly to OpenAI.
When that did not work, he publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, prompting a manager to warn him that Microsoft’s legal team “demanded that I delete the post, which I reluctantly did,” according to his letter to the board.
Jones has also raised his concerns with the attorney general of Washington state, where Microsoft is headquartered, and with the U.S. Senate Commerce Committee.
Jones told the AP that while the “core issue” lies with OpenAI’s DALL-E model, people who use OpenAI’s ChatGPT to generate AI images will not see the same harmful results because the two companies apply different safeguards to their products.
“Many of the issues with Copilot Designer are already addressed with ChatGPT’s own safeguards,” he said in a text message.
A number of impressive AI image-generators emerged in 2022, notably OpenAI’s DALL-E 2. That, and the subsequent release of OpenAI’s chatbot ChatGPT, heightened public interest and put commercial pressure on tech giants such as Microsoft and Google to build their own versions.
However, without proper safeguards, the technology poses risks, including the ease with which users can create harmful “deepfake” images of political figures, war zones, or nonconsensual nudity that falsely appear to show real people with recognizable faces. Google recently suspended its Gemini chatbot’s ability to generate images of people in response to controversy over how it depicted race and ethnicity, such as by portraying people of colour in Nazi-era military uniforms.
SOURCE – (AP)