Be warned! Live deepfake audio-visual tech is being used to scam us!
“First, use visual cues to verify who you’re talking to. Deepfakes still can’t do complex movements in real time, so if in doubt, ask your video conference counterpart to write a word or phrase on a piece of paper and show it on camera. You could also ask them to pick up a nearby book or perform a unique gesture, like touching their ear or waving a hand, all of which are difficult for deepfakes to replicate convincingly in real time.
Second, watch the mouth. Look out for discrepancies in lip syncing or weird facial expressions that go beyond a typical connection glitch.
Third, employ multi-factor authentication. For sensitive meetings, consider involving a secondary conversation via email, SMS or an authenticator app, to make sure the participants are who they claim to be.
Fourth, use other secure channels. For critical meetings that will involve sensitive information or financial transactions, you and the other meeting participants could verify your identities through an encrypted messaging app like Signal or confirm decisions such as financial transactions through those same channels.”
In a domestic or personal context, e.g. between family members, it can be wise to agree on a secret code word to verify identity on suspicious calls, e.g. when a caller claims to have lost their phone and to need help, or claims that a child has been kidnapped.
https://lnkd.in/gUDCXGvr?
#AI #artificialintelligence #deepfake
Absolutely. The rise of AI deepfakes is a double-edged sword. Companies must bolster their security protocols, and it will be interesting to see how detection and countermeasure technology evolves to meet these challenges. Which strategies do you think companies should prioritize right now?