That white guy who can’t get a job at Tim Hortons? He’s AI

A series of AI-generated videos that show a white man complaining about how difficult it is to get a job in Canada have been taken down by TikTok, following inquiries made by the CBC News Visual Investigations team.

The social media platform says the videos violated its community guidelines, because it wasn’t clear enough that they were made with AI.

Most of the videos feature what looks like a white man in his 20s named “Josh,” who speaks to the camera and makes racially charged statements about immigrants and their role in the job market. In fact, “Josh” was created with AI and doesn’t exist.

In one video, he complains he can’t get a job because people from India have taken them all, particularly at Tim Hortons. He claims that he applied for a job at the doughnut shop and was asked if he spoke Punjabi.

In a statement, Tim Hortons said the emergence of videos such as this has been extremely frustrating and concerning for the company, and that it has had difficulty getting them taken down.

A TikTok account that featured AI-generated videos of a white man complaining he couldn’t get a job in Canada has since been taken down. It’s part of a trend known as ‘fake-fluencing.’ (Unemployedflex/TikTok)

In another video, “Josh” attacks Canada’s immigration policy, asking why so many people are admitted to Canada when there aren’t enough jobs to go around.

It’s part of a trend known as “fake-fluencing.” That’s when companies create fake personas with AI in order to make it look like a real person is endorsing a product or service. The company in this case is Nexa, an AI firm that develops software that other companies can use to recruit new hires. Some of the videos feature Nexa logos in the scene. The company’s founder and CEO Divy Nayyar calls that a “subconscious placement” of advertising.

The man in the videos complains he can’t get a job because Indian immigrants have taken them all. There are subtle clues he isn’t real. His hand holds a coffee cup unconvincingly, and is a different colour from his other hand. There is also a small logo for Google’s Veo AI software in the corner. (Unemployedflex/TikTok)

In an interview with CBC News, he said he wanted to “have fun” with the idea held by some that “Indians are taking over the job market.” He says he created the “Josh” persona as a way of connecting with those who have similar views: young people just out of school who are looking for work.

Marketing experts say it’s deceptive and unethical. 

“This type of content and highly polarizing storytelling is something that we would expect from far-right groups,” said York University marketing professor Markus Giesler.

“For a company to use this kind of campaign tonality in order to attract consumers to its services is highly, highly problematic and highly, highly unethical and unlike anything that I’ve ever seen.”

Far more convincing

Making videos such as this has never been easier. Nayyar says his company made them with Google’s Veo AI software and some other tools. The latest iteration, Veo3, was released in May, and can make videos from text prompts that are far more convincing than previous versions.

Obvious clues such as people with extra fingers or physical impossibilities appear less frequently in Veo3. The audio is often indistinguishable from real human voices, and matches the lip movements of the characters in the scene, something previous AI video generators struggled with.

A screenshot of a comment about one of the videos. Some TikTok users spotted the fakery, while others complained about the racist messaging. (Unemployedflex/TikTok)

Some TikTok users were not fooled, calling the videos out as AI-generated in the comments. Others, however, responded to the racist message as though it were genuine, suggesting they believed they were watching a real person. In some cases, “Josh,” the fake character, replied to commenters to defend himself, further implying he is real.

Marvin Ryder, an associate professor of marketing at McMaster University in Hamilton, says he was initially taken in. “I was convinced that this was a real character and had a real story that he was trying to tell in his little eight-second videos,” he said. 

Ryder says we may reach a point in the coming years where fakery is undetectable. “How are we as consumers of social media, even if it was just for entertainment, supposed to discern reality from fiction?”

Other clues that the videos were made with AI include street signs without real words and, apart from ‘Job fair,’ gibberish text on the poster to the man’s left. (Unemployedflex/TikTok)

TikTok says it wants clear labelling

TikTok didn’t comment on the inflammatory and controversial message of the videos. It said they were taken down because its guidelines say AI-generated videos that show realistic-appearing scenes or people must be clearly marked with a label, caption, watermark or sticker.

After reviewing Nexa’s videos of “Josh,” TikTok said the labelling wasn’t clear enough. There is a Google Veo watermark in the bottom right corner of the videos, but TikTok said it should have been more prominent, or the post should have carried an AI label. When that label is applied, a message reads, “Creator labelled as AI-generated.”

Nayyar said he was trying to make something that looked as realistic as possible, but claimed people would use “common sense” and conclude the videos were made with AI. He also said videos such as this are often labelled automatically by TikTok as AI-generated. But TikTok labels are not automatic.

It’s not clear how rigorously TikTok enforces its policy. Although some AI-generated videos on the platform are labelled, and others have an #ai hashtag, many offer no clear indication.

Giesler says the problem is going to get worse, because AI makes it easier than ever to create videos, seemingly of real people, with hateful messages that find an audience on social media. “I would say it’s an irresponsible utilization of emotional branding tactics. We should not condone this.”
