    Microsoft and the Taylor Swift genAI deepfake problem

    The past couple of weeks have been a PR bonanza for Taylor Swift, in both good ways and bad. On the good side, her boyfriend Travis Kelce was on the winning team at the Super Bowl, and her reactions during the game got plenty of air time. On the much, much worse side, generative AI-created fake nude images of her recently flooded the internet.

    As you'd expect, condemnation of the creation and distribution of those images followed swiftly, including from generative AI (genAI) companies and, notably, Microsoft CEO Satya Nadella. In addition to denouncing what happened, Nadella shared his thoughts on a solution: "I go back to what I think's our responsibility, which is all of the guardrails that we need to place around the technology so that there's more safe content that's being produced."

    Microsoft weighed in on the problem of deepfakes again yesterday (though without mentioning Swift). In a blog post, Microsoft Vice Chair and President Brad Smith decried the proliferation of deepfakes and said the company is taking steps to limit their spread. "Tools unfortunately also become weapons, and this pattern is repeating itself," he wrote. "We're currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors, including through deepfakes based on AI-generated video, audio, and images. This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying."

    Smith pledged "a robust and comprehensive approach" from Microsoft, including: "We're committed to ongoing innovation that will help users quickly determine if an image or video is AI generated or manipulated." As far as it goes, the Microsoft statement is certainly true, and it's the standard all-purpose, knee-jerk response one would expect from the world's biggest and most influential genAI company.
    But what Nadella and Smith left out is that there's evidence the company's own AI tools created the Swift images. Even more damning, a Microsoft AI developer says he warned the company ahead of time that proper guardrails didn't exist, and Microsoft did nothing about it.

    The case against Microsoft

    Evidence that Microsoft tools were used to create the deepfakes comes from a 404 Media article, which claims the images originated in a Telegram group dedicated to creating "non-consensual porn"; the group recommends Microsoft Designer for generating the porn images. The article notes that "Designer theoretically refuses to produce images of famous people, but AI generators are easy to bamboozle, and 404 found you could break its rules with small tweaks to prompts."

    More damning still, a Microsoft AI engineer says he warned Microsoft in December that the safety guardrails of OpenAI's image generator DALL-E, the brains behind Microsoft Designer, could be bypassed to create explicit and violent images. He claims Microsoft ignored his warnings and tried to keep him from saying anything publicly about what he found.

    The engineer, Shane Jones, wrote in a letter to US Sens. Patty Murray (D-WA) and Maria Cantwell (D-WA), Rep. Adam Smith (D-WA), and Washington state Attorney General Bob Ferguson that he "discovered a security vulnerability that allowed me to bypass some of the guardrails that are designed to prevent the [DALL-E] model from creating and distributing harmful images…. I reached the conclusion that DALL·E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model.

    "The vulnerabilities in DALL·E 3, and products like Microsoft Designer that use DALL·E 3, makes it easier for people to abuse AI in generating harmful images.
    Microsoft was aware of these vulnerabilities and the potential for abuse."

    Jones claims that when Microsoft refused to act, he posted a public letter about the issue on LinkedIn, and that his manager then told him to delete the letter because Microsoft's legal department demanded it.

    In his letter, Jones mentions the explicit images of Swift and says, "This is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL·E 3 from public use and reported my concerns to Microsoft." According to GeekWire, Microsoft said in a statement that it "investigated the employee's report and confirmed that the techniques he shared did not bypass our safety filters in any of our AI-powered image generation solutions."

    All of this is, to a certain extent, circumstantial evidence. There's no confirmation the images were created with Microsoft Designer, and we don't know whether to trust Microsoft or Jones. But we do know that Microsoft has a history of downplaying or ignoring the dangers of genAI.

    As I wrote last May, Microsoft slashed the staffing of a 30-member team that was responsible for making sure genAI was being developed ethically at the company, and then eliminated the team entirely. The staffing cuts came several months before the release of Microsoft's genAI chatbot; the team's elimination came several months after.

    Before the release of the chatbot, John Montgomery, Microsoft corporate vice president of AI, told the team why it was being decimated: "The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers' hands at a very high speed." He added that the ethics team stood in the way of that.

    When a team member responded that there are significant dangers in AI that must be addressed, and asked him to reconsider, Montgomery answered: "Can I reconsider?
    I don't think I will. 'Cause unfortunately the pressures remain the same. You don't have the view that I have, and probably you can be thankful for that. There's a lot of stuff being ground up into the sausage."

    Once the team was gone, Microsoft was off and running with genAI. And that accomplished exactly what the company wanted. Its stock has skyrocketed, and thanks to AI, it has become the most valuable company on the planet — only the second company (behind Apple) ever valued at more than $3 trillion.

    That's three trillion reasons you shouldn't expect Microsoft to change its tune about the potential dangers of AI, whether or not Microsoft Designer was used to create the Taylor Swift deepfakes. And that doesn't bode well for the year ahead, which will likely bring a tsunami of deepfakes, especially with a contested presidential election in the US.

    Copyright © 2024 IDG Communications, Inc.
