Genuine Video vs. Deepfake Video


To fight the danger of deepfake videos, all video makers are called on to add a one-sentence disclaimer: “This is a genuine video. No deepfake of any kind has been used.”

What is a genuine video?

As the antonym of a deepfake video, a genuine video uses real people in their own physical bodies and in their own real voices. It does not fabricate, misappropriate, or alter images, footage, speeches, or sounds with generative AI and machine learning, or create and use look-alike avatars, in order to misrepresent or mislead.

A genuine video therefore does not create or synthesize non-existing faces and voices, or replace faces and speeches, or manipulate facial expressions, or alter spoken words, to portray anyone saying or doing something they never actually said or did. 

Within the scope of editing a genuine video, technology may still be used to edit video footage and audio recordings and to create animations and special effects, for the purpose of representing the real people and subject matters that a genuine video is based on, in order to inform, articulate, entertain, promote, and influence.


Defining a genuine video requires understanding what a deepfake is 

Merriam-Webster Dictionary defines the word “deepfake” as: “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” It further explains its usage: “The term deepfake is typically used to refer to a video that has been edited using an algorithm to replace the person in the original video with someone else (especially a public figure) in a way that makes the video look authentic.”

The US Department of Homeland Security in a published DHS paper has a broader definition for “deepfakes”: “Deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened.”

The same DHS paper warned that “Deepfakes, synthetic media, and disinformation in general pose challenges to our society. They can impact individuals and institutions from small businesses to nation states. All may be impacted by them.”

A deepfake recently tricked a finance worker into a $25 million fraudulent payment

A news report by CNN on Feb. 4, 2024 stated: “A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.

‘(In the) multi-person video conference, it turns out that everyone [he saw] was fake,’ senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.

Chan said the worker had grown suspicious after he received a message that was purportedly from the company’s UK-based chief financial officer. Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.

However, the worker put aside his early doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized, Chan said.”

My own experience with a deepfake call, and the harm caused by deepfake videos, audio, speeches, photos, and visuals

I once got a pre-recorded message on my voicemail. In perfect English, a too-perfect male voice said: “This is the IRS. We need to review some issues with your taxes. Please call 1-800-XXX-XXXX right away…”

I bet that all of us have already experienced being approached, or even fooled, by deepfakes, in one way or another.

Beyond videos, deepfake technology can fake human voices and put words into the mouths of fictional or real people, mimic real people’s voices to spread lies or steal money,[1] create fictional photos from words, and turn text into videos within minutes, using software subscriptions that “start at just a few dollars a month.”

This is undermining our trust in information, since deepfakes can be “leveraged to defame, impersonate, and spread disinformation”, per Wikipedia.

Deepfakes threaten democracy, open society, and civilization

When seeing is not believing, when truth and trust are replaced by falsehood and distrust, the foundation for fact-based decision making is like a rug being pulled out from under our feet. This is a real threat to democracy and civilized societies.

Laws and regulations are far too slow to defeat deepfakes, compared to the speed of AI. Like a frog being comfortably boiled to death in slowly warming water, before we know it we are yielding control to AI without even the desire to pull the plug.

What are you waiting for? For the government to step in and regulate? Even if it happens eventually, it will be too little, too late.  

Doing nothing means you are waiting for deepfakes to take over your right and ability to access truthful information – you are allowing deepfakes to further mislead yourself and humanity.

Or we each can do something. 

We can fight deepfakes and protect truth, democracy, and open society.

To differentiate Genuine Videos from Deepfakes, start with a simple disclaimer

We at 10 Plus Brand will, from today on, proudly and prominently label every video we produce with this simple statement: “This is a Genuine Video. No deepfake of any kind has been used.” In fact, we have never used any deepfake in any video bearing the 10+ logo, whether brand promotional videos produced for our clients or our own vlogs and interviews, so the 10+ logo already connotes a completely Genuine Video.

I ask all video makers in the world to do the same: add a one-sentence disclaimer at either the beginning or the end of each video, in text or in audio: “This is a genuine video. No deepfake of any kind has been used.”

Let’s all leapfrog out of this slowly boiling gen-AI pot, before it is too late.

[Footnote 1:] “Audio can be deepfaked too, to create ‘voice skins’ or ‘voice clones’ of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.”


© Joanne Z. Tan   All rights reserved.

