Microsoft Engineer Raises Concerns Over Harmful Content Created by AI Image Generator
In a recent development, Microsoft engineer Shane Jones has drawn attention to offensive and harmful imagery being generated by the company’s AI image-generator tool. Jones has taken action by sending letters to U.S. regulators and Microsoft’s board of directors, urging them to address the issue.
The Federal Trade Commission has confirmed receipt of the letter, and Microsoft has responded by saying it appreciates Jones’ efforts to enhance the safety of its technology. Jones specifically raised concerns about Microsoft’s Copilot Designer tool, which has reportedly been generating harmful content such as sexually objectified images and violent scenes.
Jones has called on Microsoft to either take the product off the market or change its age rating to prevent further dissemination of harmful content. He has also brought his concerns to the U.S. Senate’s Commerce Committee and the state attorney general in Washington.
Jones believes the issues with Copilot Designer have already been addressed in OpenAI’s ChatGPT, which he says offers better safeguards against generating harmful content. Since AI image generators emerged in 2022, tech giants have been under pressure to release their own versions, but without proper safeguards these tools can produce harmful “deepfake” images.
In a similar vein, Google recently faced controversies over its Gemini chatbot’s ability to generate images of people, prompting the tech giant to temporarily suspend this feature following concerns over the depiction of race and ethnicity.
As concerns grow over the potential misuse of AI technology, it remains to be seen how companies like Microsoft and Google will address these challenges to ensure the responsible use of their products. Stay tuned to Poh Diaries for further updates on this developing story.