30/01/2024

Artificial Intelligence Impersonation

Scammers target PM Lee in fake online ads

Fake advertisements that use Prime Minister Lee Hsien Loong’s name and image to promote cryptocurrency scams, among other schemes, have been appearing on the Internet recently, Mr Lee said on Facebook on Saturday night.

He said such advertisements, which tend to surface after a major speech or announcement with lots of media coverage, have re-emerged in the past few days.

“If the ad uses my image to sell you a product, or asks you to invest in some scheme, or even uses my voice to tell you to send money, it’s not me,” he added.


Deepfake video of DPM Lawrence Wong promoting investment scam circulating on social media
A deepfake video of Deputy Prime Minister Lawrence Wong promoting an investment scam has been circulating on Facebook and Instagram.

In the video, his mouth is noticeably altered to synchronise with a fake voice-over promoting an investment scam. The voice-over mimics the pitch and intonation of his real voice. The Straits Times’ logo is used at the top right-hand corner of the video.

The video has modified footage of DPM Wong at a media doorstop interview recorded by ST. An SPH Media spokeswoman said the video in question was not created or published by the company or ST.


DEEPFAKE VIDEO OF PM LEE PROMOTING AN INVESTMENT SCAM

Imagine this: you’re leisurely scrolling through your usual YouTube Shorts, and suddenly, an unexpected advertisement pops up.

Prime Minister (PM) Lee Hsien Loong appears to be promoting a crypto-trading platform in a video on the Beijing-based news outlet China Global Television Network (CGTN). Yes, PM Lee seems to be discussing the benefits of a hands-free crypto trading platform, which boasts the ability to compute algorithms, analyse market trends, make strategic investment decisions, and execute trades—all autonomously, without any manual input from the user.

On 29 Dec, PM Lee shared a recent deepfake video that has been circulating online. Elaborating on the type of scam involved, PM Lee explained that scammers employ AI (artificial intelligence) technology to mimic our voices and images. They transform real footage of us, taken from official events, into very convincing but entirely bogus videos of us purportedly saying things we have never said. PM Lee urged people not to respond to such scam videos, which promise guaranteed returns on investments.


DEEPFAKE VIDEO OF DPM LAWRENCE WONG PROMOTING AN INVESTMENT SCAM

With the rise of artificial intelligence (AI), it’s sometimes difficult to tell what is real anymore. A deepfake video of Deputy Prime Minister Lawrence Wong promoting an investment scam has been circulating on Facebook and Instagram. The worst part is that it looks real.

Deepfakes are media that have been altered by AI to look or sound like someone. In the video, DPM Wong’s mouth is altered to synchronise with a fake voiceover that sounds like him. Yes, the voiceover mimics the pitch and intonation of DPM Wong’s actual voice. Don’t believe me? You can watch the deepfake video here.

Notably, the video was made from modified footage of DPM Wong giving an interview recorded by The Straits Times. The deepfake video promotes an investment scam, even using terms reminiscent of a DPM speech, like “my dear Singaporeans”.


Scammers are using AI to impersonate your loved ones. Here's what to watch out for
The next time you get a call from a family member or friend in need, you might want to make sure it's not a robot first

Imagine getting a phone call that your loved one is in distress. In that moment, your instinct would most likely be to do anything to help them get out of danger's way, including wiring money. Scammers are aware of this Achilles' heel and are now using AI to exploit it. 

A report from The Washington Post featured an elderly couple, Ruth and Greg Card, who fell victim to an impersonation phone call scam. Ruth, 73, got a phone call from a person she thought was her grandson. He told her he was in jail, with no wallet or cell phone, and needed cash fast. Like any concerned grandparents would, Ruth and her husband Greg, 75, rushed to the bank to get the money. It was only at the second bank they visited that a manager warned them he had seen a similar case before that ended up being a scam, and that this one was likely a scam too.

This scam isn't an isolated incident. The report indicates that in 2022, impostor scams were the second most popular racket in America, with over 36,000 people falling victim to calls impersonating their friends and family. Of those scams, 5,100 happened over the phone, robbing people of over $11 million, according to FTC officials.

Generative AI has been making quite a buzz lately, thanks to the increasing popularity of programs such as OpenAI's ChatGPT and DALL-E. These programs are mostly associated with advanced capabilities that can increase productivity among users. However, the same techniques that are used to train those helpful language models can be used to train more harmful programs, such as AI voice generators.


Thousands scammed by AI voices mimicking loved ones in emergencies
In 2022, $11 million was stolen through thousands of impostor phone scams

AI models designed to closely simulate a person’s voice are making it easier for bad actors to mimic loved ones and scam vulnerable people out of thousands of dollars, The Washington Post reported.

Quickly evolving in sophistication, some AI voice-generating software requires just a few sentences of audio to convincingly produce speech that conveys the sound and emotional tone of a speaker’s voice, while other options need as little as three seconds. For those targeted—often the elderly, the Post reported—it can be increasingly difficult to detect when a voice is inauthentic, even when the emergency circumstances described by scammers seem implausible.

Tech advancements seemingly make it easier to prey on people’s worst fears and spook victims, who told the Post they felt “visceral horror” hearing what sounded like direct pleas from friends or family members in dire need of help. One couple sent $15,000 through a bitcoin terminal to a scammer after believing they had spoken to their son. The AI-generated voice, which they took to be their son’s, said he needed money for legal fees after being involved in a car accident that killed a US diplomat.


Scammers use deepfakes to create voice recordings and videos to trick victims’ family, friends

Scammers are tapping sophisticated artificial intelligence (AI) tools to create deepfake voice recordings and videos of people, to fool their relatives and friends into transferring money.

Speaking at the Regional Anti-Scam Conference 2023 at the Police Cantonment Complex on Tuesday, Minister of State for Home Affairs Sun Xueling said scammers can also use deepfake technology to clone authority figures.

“We have already seen overseas examples of bad actors making use of deepfake technology to create convincing clones – whether voice or videos of public figures – to spread disinformation,” she said. “As such, we need to constantly monitor this threat, work with research institutes, relevant government agencies, market players who themselves are at the forefront of these technologies, to study ways to counter them.” Her comments come in the wake of a rise in AI-driven fraud, and amid reports of countries like China rolling out new rules to curb the use of generative AI to alter online content.


Broken English no longer a sign of scams as crooks tap AI bots like ChatGPT
Chatbots like ChatGPT have helped scammers craft messages in near-perfect language

Bad grammar has long been a telltale sign that a message or a job offer is likely to be a scam.

But cyber-security experts said those days may be over as generative artificial intelligence (AI) chatbots like ChatGPT have helped scammers craft messages in near-perfect language.

Cyber-security experts said they have observed improvements in the language used in phishing scams in recent months – coinciding with the rise of ChatGPT – and warned that end users will need to be even more vigilant for other signs of a scam.


New AI Tech Can Mimic Any Voice
Emerging technologies in speech generation raise ethics and security concerns

Even the most natural-sounding computerized voices—whether it’s Apple’s Siri or Amazon’s Alexa—still sound like, well, computers. Montreal-based start-up Lyrebird is looking to change that with an artificially intelligent system that learns to mimic a person’s voice by analyzing speech recordings and the corresponding text transcripts as well as identifying the relationships between them. Introduced last week, Lyrebird’s speech synthesis can generate thousands of sentences per second—significantly faster than existing methods—and mimic just about any voice, an advancement that raises ethical questions about how the technology might be used and misused.

The ability to generate natural-sounding speech has long been a core challenge for computer programs that transform text into spoken words. Artificial intelligence (AI) personal assistants such as Siri, Alexa, Microsoft’s Cortana and the Google Assistant all use text-to-speech software to create a more convenient interface with their users. Those systems work by cobbling together words and phrases from prerecorded files of one particular voice. Switching to a different voice—such as having Alexa sound like a man—requires a new audio file containing every possible word the device might need to communicate with users.
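As a rough sketch of that cobbling-together approach, the snippet below stitches prerecorded per-word WAV clips into a single utterance using only Python's standard library. The voice_bank files and the sample sentence are hypothetical stand-ins; real concatenative systems work at a much finer granularity and smooth the joins between units.

```python
import wave

# Hypothetical prerecorded clips, one WAV file per word, all assumed to
# share the same sample rate, sample width, and channel count.
WORD_FILES = {
    "your": "voice_bank/your.wav",
    "order": "voice_bank/order.wav",
    "has": "voice_bank/has.wav",
    "arrived": "voice_bank/arrived.wav",
}

def speak(sentence: str, out_path: str) -> None:
    """Concatenate prerecorded word clips into one utterance."""
    frames = []
    params = None
    for word in sentence.lower().split():
        with wave.open(WORD_FILES[word], "rb") as clip:
            if params is None:
                params = clip.getparams()  # reuse rate/width/channels
            frames.append(clip.readframes(clip.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

speak("your order has arrived", "utterance.wav")
```

The limitation the article describes falls straight out of this design: every word the system can say must already exist as a recording in that one voice, so switching voices means recording the whole word bank again.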

Lyrebird’s system can learn the pronunciations of characters, phonemes and words in any voice by listening to hours of spoken audio. From there it can extrapolate to generate completely new sentences and even add different intonations and emotions. Key to Lyrebird’s approach are artificial neural networks—which use algorithms designed to help them function like a human brain—that rely on deep-learning techniques to transform bits of sound into speech. A neural network takes in data and learns patterns by strengthening connections between layered neuronlike units.
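To make the neural-network idea concrete, here is a deliberately tiny, illustrative PyTorch sketch of the general text-to-speech pattern, not Lyrebird's actual (proprietary) model: character embeddings feed a recurrent network, and each hidden state is mapped to a frame of acoustic features that a separate vocoder would turn into audio.

```python
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Toy character-to-acoustic-features model (illustrative only)."""
    def __init__(self, vocab_size: int = 128, hidden: int = 256, n_mels: int = 80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)   # characters -> vectors
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)         # hidden state -> mel frame

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(char_ids)
        x, _ = self.rnn(x)
        return self.to_mel(x)  # [batch, time, n_mels] mel-spectrogram frames

model = TinyTTS()
char_ids = torch.tensor([[ord(c) for c in "hello"]])  # naive ASCII "tokenizer"
mel = model(char_ids)
print(mel.shape)  # torch.Size([1, 5, 80])
```

A system like Lyrebird's would additionally condition on a speaker identity and train on hours of transcribed audio; this skeleton only shows how the pieces fit together, which is why a new voice needs data rather than a new word bank.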


Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio

On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker's emotional tone.

Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't), and audio content creation when combined with other generative AI models like GPT-3.

Microsoft calls VALL-E a “neural codec language model,” and it builds off of a technology called EnCodec, which Meta announced in October 2022. Unlike other text-to-speech methods that typically synthesize speech by manipulating waveforms, VALL-E generates discrete audio codec codes from text and acoustic prompts. It basically analyzes how a person sounds, breaks that information into discrete components (called “tokens”) thanks to EnCodec, and uses training data to match what it “knows” about how that voice would sound if it spoke other phrases outside of the three-second sample. Or, as Microsoft puts it in the VALL-E paper: “We train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work.”
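To see what those discrete audio codec codes look like in practice, here is a minimal sketch using Meta's open-source encodec package, the codec VALL-E builds on. It reproduces only the tokenization step, not VALL-E itself, and the audio file path is a placeholder.

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Load the pretrained 24 kHz EnCodec model; the target bandwidth
# determines how many codebooks (parallel token streams) are used.
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

wav, sr = torchaudio.load("three_second_sample.wav")  # placeholder path
wav = convert_audio(wav, sr, model.sample_rate, model.channels)

with torch.no_grad():
    frames = model.encode(wav.unsqueeze(0))

# Each frame holds a [batch, n_codebooks, time] tensor of integer codes;
# these discrete tokens are what a model like VALL-E learns to predict.
codes = torch.cat([codes for codes, _ in frames], dim=-1)
print(codes.shape, codes.dtype)
```

Because the codes are just integer sequences, speech generation can be treated the way a language model treats text: predict the next token, then decode the tokens back into a waveform.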


Artificial Intelligence Impersonating a Human: The Impact of Design Facilitator Identity on Human Designers

Advances in artificial intelligence (AI) offer new opportunities for human–AI cooperation in engineering design. Human trust in AI is a crucial factor in ensuring an effective human–AI cooperation, and several approaches to enhance human trust in AI have been explored in prior studies.

However, it remains an open question in engineering design whether human designers have more trust in an AI and achieve better joint performance when they are deceived into thinking they are working with another human designer. This research assesses the impact of design facilitator identity (“human” versus AI) on human designers through a human subjects study, in which participants work with the same AI design facilitator and can adopt the facilitator’s design at any time during the study. Half of the participants are told that they are working with an AI, and the other half are told that they are working with another human participant, when in fact they are working with the AI design facilitator.

The results demonstrate that, for this study, human designers adopt their facilitator’s design less often on average when they are deceived about the identity of the AI design facilitator as another human designer. However, design facilitator identity does not have a significant impact on human designers’ average performance, perceived workload, and perceived competency and helpfulness of their design facilitator in the study. These results caution against deceiving human designers about the identity of an AI design facilitator in engineering design.


The Ick of AI That Impersonates Humans


PHILIP K. DICK was living a few miles north of San Francisco when he wrote Do Androids Dream of Electric Sheep?, which envisioned a world where artificially intelligent androids are indistinguishable from humans. In that world, the Turing Test has been passed, and it’s impossible to know who, or what, to trust.

A version of that world will soon be a reality in San Francisco. Google announced this week that Duplex, the company's phone-calling AI, will be rolled out to Pixel phones in the Bay Area and a few other US cities before the end of the year. You might remember Duplex from a shocking demonstration back in May, when Google showed how the software could call a hair salon and book an appointment. To the receptionist on the other end of the line, Duplex sounded like a bona fide person, complete with pauses and “ums” for more human-like authenticity.

Duplex is part of a growing trend to offload basic human interaction to robots. More and more text messages are being automated: ride-sharing apps text you when your car is there; food-delivery apps text you when your order has arrived; airlines text you about delays; political campaigns send you reminders to vote. Smartphones predict the words you might want to complete your own texts; recently, Google’s Gmail has attempted to automate your side of the conversation in emails as well, with smart responses and suggested autocomplete.


When you realize your favorite new song was written and performed by ... AI

Music fans responded with disbelief this week to the release of the viral song "Heart on My Sleeve" on streaming and social media platforms.

The hosts of the popular music-related YouTube channel LawTWINZ were among the many who weighed in, discussing whether the track, which uses artificial intelligence to simulate the music of pop stars Drake and The Weeknd, even surpasses the real artists' own work. Advances in AI have gotten to the point where the technology can quickly create new songs like "Heart on My Sleeve" that sound like they're the work of real artists.

Recent examples, which include a faux song that sounds a lot like something the British alt-rock band Oasis would put out, hint at AI's bold, creative possibilities and its ethical and legal limitations. Now, artists, lawyers and other industry players are trying to figure out how the technology can be used responsibly.


We Spoke To The Guy Who Created The Viral AI Image Of The Pope That Fooled The World

Over the weekend, a photo of Pope Francis looking dapper in a white puffer jacket went mega-viral on social media. The 86-year-old sitting pontiff, it appeared, has some serious drip. But there was just one problem: The image is not real. It was made using the AI art tool Midjourney.

As word spread across the internet that the image was generated by AI, many expressed surprise. “I thought the pope’s puffer jacket was real and didnt give it a second thought,” Chrissy Teigen tweeted. “no way am I surviving the future of technology.” Garbage Day newsletter writer and former BuzzFeed News reporter Ryan Broderick called it “the first real mass-level AI misinformation case,” following hot on the heels of faked images of Donald Trump being arrested by police in New York last week.

Now, for the first time, the image’s creator has shared the story of how he generated the photograph that fooled the world. Pablo Xavier, a 31-year-old construction worker from the Chicago area who declined to share his last name over fears that he could be attacked for creating the images, said he was tripping on shrooms last week when he came up with the idea for the image.


AI-faked images of Donald Trump’s imagined arrest swirl on Twitter
An AI-generated photo faking Donald Trump's possible arrest, created by Eliot Higgins using Midjourney v5

As the world waits to see if former President Donald Trump will actually be indicted today over hush-money payments to porn star Stormy Daniels, AI-generated images began circulating on Twitter imagining what that arrest would look like. Showing Trump resisting arrest and being dragged off by police, the realistic but very fake photos have already been viewed by millions.

“Making pictures of Trump getting arrested while waiting for Trump's arrest,” tweeted Eliot Higgins, who is the founder and creative director of Bellingcat, an independent international collective of researchers, investigators, and citizen journalists. In a tweet, Higgins confirmed that he used the impressively realistic AI engine Midjourney v5 to generate the fake images.

Ars couldn’t immediately reach Higgins for comment on the images, some of which have been viewed 2.2 million times on Twitter as of this writing. Twitter guidelines say that users “may not deceptively share synthetic or manipulated media that are likely to cause harm” and suggest that, at the very least, the images may soon be labeled to “help people understand their authenticity and to provide additional context.” Ars reached out to Twitter for comment on the images, but—as CEO Elon Musk tweeted the company would do days ago—Twitter only responded with a poop emoji.


AI-generated deepfakes are moving fast. Policymakers can't keep up
An image from a Republican National Committee ad against President Biden features imagery generated by artificial intelligence. The spread of AI-generated images, video and audio presents a challenge for policymakers

This week, the Republican National Committee used artificial intelligence to create a 30-second ad imagining what President Joe Biden's second term might look like. It depicts a string of fictional crises, from a Chinese invasion of Taiwan to the shutdown of the city of San Francisco, illustrated with fake images and news reports. A small disclaimer in the upper left says the video was "Built with AI imagery."

The ad was just the latest instance of AI blurring the line between real and make believe. In the past few weeks, fake images of former President Donald Trump scuffling with police went viral. So did an AI-generated picture of Pope Francis wearing a stylish puffy coat and a fake song using cloned voices of pop stars Drake and The Weeknd.

Artificial intelligence is quickly getting better at mimicking reality, raising big questions over how to regulate it. And as tech companies unleash the ability for anyone to create fake images, synthetic audio and video, and text that sounds convincingly human, even experts admit they're stumped.


Send in the clones: Using artificial intelligence to digitally replicate human voices
A model replica of Wolfgang von Kempelen's Speaking Machine

The science behind making machines talk just like humans is very complex, because our speech patterns are so nuanced. "The voice is not easy to grasp," says Klaus Scherer, emeritus professor of the psychology of emotion at the University of Geneva. "To analyze the voice really requires quite a lot of knowledge about acoustics, vocal mechanisms and physiological aspects. So it is necessarily interdisciplinary, and quite demanding in terms of what you need to master in order to do anything of consequence."

So it's not surprising that it has taken well over 200 years for synthetic voices to get from the first speaking machine, invented by Wolfgang von Kempelen around 1800 – a boxlike contraption that used bellows, pipes and a rubber mouth and nose to simulate a few recognizably human utterances, like mama and papa – to a Samuel L. Jackson voice clone delivering the weather report on Alexa today.

Talking machines like Siri, Google Assistant and Alexa, or a bank's automated customer service line, are now sounding quite human. Thanks to advances in artificial intelligence, or AI, we've reached a point where it's sometimes difficult to distinguish synthetic voices from real ones.


Deepfake Technology: What Are Deepfakes? How Do They Make Deepfakes?
Deepfake is a new media technology wherein a person takes existing text, pictures, video, or audio and manipulates, i.e., ‘fakes’, it to look like someone else using advanced artificial intelligence (AI) and neural network (NN) technology.

After its first appearance a few years back, deepfake technology has evolved from an innocuous tech geek’s chicanery to a malicious slandering weapon. In this article, we’ll see what exactly this dreaded deepfake tech is, how it works, what different forms it comes in, and how we can detect or bust a deepfake.

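The excerpt stops short of the mechanics, so as a hedged illustration, here is a heavily simplified PyTorch sketch of the classic face-swap recipe popularized by early deepfake tools: a single shared encoder is trained with one decoder per person, and at swap time a face of person A is decoded with person B's decoder. The shapes, random stand-in data, and training loop are illustrative assumptions, not a production pipeline.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns a common representation of faces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Stand-ins for aligned 64x64 face crops of persons A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # toy training loop
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "fake": encode person A's face, reconstruct it as person B.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns pose and expression common to both people, while each decoder learns one identity; swapping decoders transfers B's face onto A's expression, which is why convincing deepfakes need many aligned face images of both subjects.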