"Deep Fakes" - Weaponizing Artificial Intelligence


Episode 63


What happens when you no longer control your own likeness? Is there an ethical line to be crossed with posthumous product spokesmanship? We skirt the line in this episode and get topical, talking about ethics and artificial intelligence and how online communities have banned "deep fakes": pornographic simulations produced by artificial intelligence.

Deep Fakes: weaponizing AI

  • We're seeing a really gross intersection of what we talked about on our predictions show around digital personal identity rights with body image and data technology, and how it advances with consumer products.

  • The Verge and an AP article both discuss the emergence of "deep fakes": pornographic content made by applying people's likenesses with AI. These communities take hi-res videos and still frames of notable actresses as training data and apply their likenesses to nude photos.

  • There's no real legal consensus on deep fakes and their consequences, so many of these online platforms have come together and banned both the content and the communities producing it.

  • This hits right on the topic of that scary, Black Mirror-esque world we now live in, where your face can be applied without your consent to literally any context, in some of the seediest and darkest ways, with no way for you to manage it.

Legal Ramifications of Deep Fakes

  • The legal ramifications are unclear because we've never had this sophisticated level of technology.

  • This is something that will come up in law, and we'll probably start to see entire bills at the federal level.

  • There are no federal regulations yet that address how your body data is handled.

The Historical Blind Eye to Invasive Technology

  • It's troubling that a lot of communities turned a blind eye for years to faked still images.

  • There was this incredible story in Wired about 10-12 years ago about how Gillian Anderson at one point was the most photoshopped face on the internet, and many of the photos were suggestive.

  • The article suggested it was because of her facial symmetry.

  • Those types of images have been around for decades with no one doing anything about it.

  • We can all agree that applying someone's likeness in that way is harmful to them in some way.

  • The advent of AI-assisted fakery is taking it to the next level and blurring the line of realism.

Incredibly complex technology in the hands of the people

  • We were talking to Greg Steinberg at Something Digital and he asked hypothetically, what if we applied this to products? You could change any scene, from commercials to videos, to represent your product with AI.

  • You could apply the Coke filter to any image, and anything anyone is holding becomes a can of Coke.

  • Amazing movie technology is now available in the palm of your hand.

  • It's now available for consumers and businesses to take advantage of in a pretty easy way.

  • In 2016, at the Adobe MAX creativity conference, Adobe announced a Creative Cloud tool where, after training on about 20 minutes of someone's recorded speech, an AI/ML algorithm could parrot back typed phrases in that person's voice.

  • About a year and a half ago, a tech demo showcased a face-to-face reenactment algorithm applied to fake CNN broadcasts: a source actor's performance was overlaid onto political figures, showing George W. Bush and Vladimir Putin saying things they didn't actually say.

  • Is there even one way that this is a positive contribution to society?

Manipulative technology for sales

Published 7 years, 9 months ago





