Can technology and innovation (blockchain, AI, big data, software) authenticate digital information, its authors, and its sources? Can they tell whether code, news stories, and algorithms were written by humans, or whether machines produced fake news, fabricated videos, and false data? Can machines verify and certify the difference between truth and falsehood?

“In Machines We Trust”? – Verifying Authenticity by AI, Blockchain, and Digital Trust (pt 1)

Can software be developed to verify authenticity, using blockchain as “digital trust”? Can AI create fake news? Who is liable: machines or humans?

“Information-authenticating tech” is the most desired technology today

This article is continued in Part 2

A prestigious Silicon Valley club, the Churchill Club, makes annual predictions of the top 10 trends in technology. In May 2018, the winner by popular vote among the several hundred attendees was the prediction of a new class of software that guarantees authenticity for news, authorship, information, digital data, and more. Such software would answer questions like:

Did a computer write it, or a human?

How do I know if the facts are facts?  Which media to trust?

How do I know if I am talking to a human or a computer voice? What’s real? “Who are you?”

Even though four of the five panelists and almost 80% of the audience’s popular vote agreed that this new class of information-authenticating software is needed, most of the panelists, prominent Silicon Valley venture capitalists and prognosticators, believed that software alone is not the answer. Presenting truthful and authentic information, digital or in print, is a societal issue. Some noted that in an age when publishing companies are increasingly driven by advertising dollars (and “what sells”), and when social media companies and users filter information, the real issue is not “authenticity” but rather “what is real and unreal.”

For news, fake or not, technology alone cannot verify authenticity, said another panelist. Did the journalists follow the ethics of journalism in creating the news? The source of the news may carry more or less trustworthiness, which I call brand power.

Technology can play only a limited role in auditing statistics and sources, provided that the sources are reliable and the numbers are truthful. Even this is undermined by facts that call the reliability of machine-generated results into question. Many website visitors are bots rather than human beings, and recent studies show that 95% of the public comments about net neutrality were fabricated. We are still dealing with election-related fact-or-fiction issues created by bots, humans, crawlers, Russians… and mixtures thereof. In my opinion, the issue is how much weight we can place on “digital trust.”

AI is already creating content, even art

Artificial intelligence (AI) today is not just verifying authenticity but creating content independently. The Chinese “Xiao Ice” social chatbot AI system can write poems. It can also create and publish a children’s book in hours rather than months. I look forward to the day when poems and books written by AI systems and by humans can compete with each other, just as the computer Deep Blue defeated world chess champion Garry Kasparov in a game on 10 February 1996. But I do hope AI’s own creativity is confined to the fictional world and that AI will never be used to invent facts or write fake news stories. There is only so much faith we can put in “digital trust.”

Can blockchain and AI contribute to verifying the authenticity and factuality of news? Where is the digital trust?

Machines and technology such as cameras, recording devices, and other tools have always aided humans in capturing news events; however, technology’s role has been limited to helping humans tell the truth. Humans have been the sole authors and owners of the final news stories.

Blockchain techniques can trace the real author and the source of a piece of information, thus holding its originators accountable. They can help tell computer-generated writing, art, code, algorithms, videos, voices, and the like apart from human-generated ones. But the tasks of reporting news stories, writing history, and producing authored information have always been, and will always be, in the hands of humans. Richard Nixon once said: “What history says […] will depend upon who writes history.” Like it or not, like him or not.
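To make the provenance idea concrete, here is a minimal sketch, in Python, of how a hash-chained ledger might tie each published story to its author and a fingerprint of its text, so that later tampering with the content or the attribution breaks the chain. The class and field names (ProvenanceLedger, add_record, content_hash) are hypothetical illustrations, not any particular blockchain product; a real system would add digital signatures, consensus, and distributed storage.

```python
# Illustrative sketch only: a toy, in-memory hash chain for content provenance.
# Real blockchain systems add digital signatures, consensus, and replication.
import hashlib
import json
import time


def sha256(text: str) -> str:
    """Return the SHA-256 hex digest of a string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


class ProvenanceLedger:
    """Append-only chain of records tying content fingerprints to authors."""

    def __init__(self) -> None:
        self.chain = []  # each record links to the hash of the previous record

    def add_record(self, author: str, content: str) -> dict:
        """Register who published what, sealed against the previous record."""
        prev_hash = self.chain[-1]["record_hash"] if self.chain else "0" * 64
        record = {
            "author": author,
            "content_hash": sha256(content),  # fingerprint of the story text
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hashing the record itself seals author, content, time, and the link.
        record["record_hash"] = sha256(json.dumps(record, sort_keys=True))
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Return True only if no record, and no link to the past, was altered."""
        for i, record in enumerate(self.chain):
            body = {k: v for k, v in record.items() if k != "record_hash"}
            if record["record_hash"] != sha256(json.dumps(body, sort_keys=True)):
                return False
            expected_prev = self.chain[i - 1]["record_hash"] if i else "0" * 64
            if record["prev_hash"] != expected_prev:
                return False
        return True


# Usage: register a story, then confirm the chain still verifies.
ledger = ProvenanceLedger()
ledger.add_record("Jane Reporter", "Full text of the news story...")
print(ledger.verify())  # True while the records and their order are untouched
```

In this sketch, anyone holding a copy of the story can recompute its fingerprint and compare it with the recorded content_hash, while verify() catches tampering with the ledger itself.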

Blockchain technology, with its irrefutable records of transactions, can decentralize the institutionalized trust traditionally held by banks, financial firms, and other industries. That is the best, and perhaps the furthest, we can rely on “digital trust.” But AI and blockchain should not be used to assist with, or to lend a veneer of verification to, the fabrication of news or the manufacturing of records. Before catastrophic damage is done, lawmakers need to prepare to regulate the use of technology to manufacture false data and dishonest information. In September 2018, California Governor Jerry Brown signed SB-1001 into law, requiring companies to disclose to customers when they are communicating with a bot rather than a human, one of the first pieces of legislation addressing the use of AI and chatbots.

Humans, not machines, are ultimately responsible for truth or falsehood in news stories

Humans are responsible for presenting trustworthy facts by verifying authenticity and adhering to the principles of honesty and truthfulness in observing events, collecting data, and writing news, even with the aid of audio, visual, AI, and blockchain technology. Humans, not machines, are ultimately held liable for the facts in news stories, even in the best scenario where machines and humans work together seamlessly. Thus the duty of veracity, and the trust built on the prerequisite ethics of telling the truth, belong only to humans. If machines are fed only dishonest and wrong information, no amount of machine learning can make it honest and right. Garbage in, garbage out. Without reliable data as its “raw material,” software can only “verify” and “authenticate” the available dishonest misinformation and call it “authenticated.”

Perhaps in the future, when AI and blockchain technology are mature, certain untruthful data may be ignored by the technology as long as plentiful, correctly labeled data is available (my thanks to Dr. Kai-Fu Lee, author of the book “AI Superpowers,” for answering my questions about AI, blockchain, big data, and clean data). I do not claim to be even remotely knowledgeable about AI, blockchain, big data, or other technology. But this is common sense: the more truthful accounts of news events written by humans there are in the digital world, the better the chance for both machines and humans to accurately authenticate such information.

(To be continued in Part 2.)

Written by Joanne Tan, edited by Glenn Perkins, 10/19-21/2018.   © Joanne Tan, all rights reserved.
