Microsoft adds more security to AI tool that created Taylor Swift deepfakes
khaskhabar.com : Tuesday, January 30, 2024 1:30 PM
San Francisco. Microsoft has introduced additional protections to its AI text-to-image generation tool, Designer, amid concerns over its misuse in creating non-consensual images of celebrities.
The change comes after AI-generated nude images of American singer-songwriter Taylor Swift went viral last week. According to a 404 Media report, the images originated in a Telegram channel where people post explicit AI-generated photos of celebrities, and were created using Designer.
“We are investigating these reports and taking appropriate action to address them,” a Microsoft spokesperson was quoted as saying.
They added, “Our code of conduct prohibits the use of our tools to create adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service. We have large teams working on the development of guardrails and other safety systems in line with our Responsible AI principles.”
Microsoft said its ongoing investigation has been unable to confirm whether the images of Swift circulating on X were created using Designer. However, the company said it is continuing to strengthen its text-filtering safeguards and address the misuse of its services outlined in the report.
Meanwhile, Microsoft Chairman and CEO Satya Nadella has called the Swift AI fakes alarming and terrible.
In an interview with NBC Nightly News, Nadella said he believes the industry needs to act quickly on the issue.
Swift is reportedly considering possible legal action against the website responsible for creating the deepfakes.
–IANS