“Information-Authenticating Tech” is most desired today

A prestigious Silicon Valley club, the Churchill Club, holds an annual event predicting the top 10 trends in technology.  In May 2018, the prediction that won the popular vote of the several hundred attendees was the emergence of a new class of software that guarantees authenticity for news, authorship, information, digital data, etc.  Such software would answer questions like:

Did a computer write it, or a human?

How do I know whether the facts are really facts?  Which media can I trust?

How do I know if I am talking to a human or a computer voice? What’s real? “Who are you?”

Even though four of the five panelists and almost 80% of the audience's popular vote agreed that this new class of information-authenticating software is needed, most of the panelists, prominent Silicon Valley venture capitalists and prognosticators, believed that software alone is not the answer.  Presenting truthful, authentic information, digital or in print, is a societal issue.  Some noted that in an age when publishing companies are increasingly driven by advertising dollars (and “what sells”), and when social media companies and their users filter information, the real issue is not “authenticity,” but rather “what is real and what is unreal.”

Another panelist said that for news, fake or not, technology alone cannot establish its authenticity. Did the journalists follow the ethics of journalism in creating the news?  The source of a news story may carry more or less trustworthiness, which I call its brand power.

Technology can play only a limited role in auditing statistics and sources, provided that the sources are reliable and the numbers are truthful.  Even this is compromised by evidence that machine-generated results can be unreliable: many website visitors are bots rather than human beings, and recent studies show that 95% of the public comments about net neutrality were fabricated.  We are still dealing with election-related fact-or-fiction issues created by bots, humans, crawlers, Russians, and mixtures thereof.

AI is already creating content, even art

Artificial Intelligence (AI) today is not just verifying but creating content independently. The Chinese social chatbot AI system XiaoIce can write poems. It can also create and publish a children’s book in hours rather than months. I look forward to the day when poems and books written by AI systems and by humans can compete, just as the computer Deep Blue first defeated world chess champion Garry Kasparov in a game on 10 February 1996 (and went on to win a full match against him in 1997).  But I do hope AI’s creativity remains limited to the fictional world and that AI will never be used for inventing facts and writing fake news stories.

Can blockchain and AI contribute to news’ authenticity and factuality?  

Machines and technology such as cameras, recording devices and other tools have always aided humans in capturing news events; however, technology’s role has been limited to helping humans tell the truth.  Humans have been the sole authors and owners of the final news stories.

Blockchain techniques can trace the real author and the source of information, thus holding the originators of information accountable.  They can distinguish between computer-generated and human-generated writing, art, code, algorithms, videos, voices, etc. But the tasks of reporting news stories, writing history and producing authored information have always been, and will always be, in the hands of humans.  Richard Nixon once said: “History depends on who is writing it.” Like it or not, like him or not.
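The core idea of tracing authorship through a tamper-evident hash chain can be sketched in a few lines of Python. This is a toy illustration, not a real blockchain: the `ProvenanceLedger` class and its record fields are invented here for demonstration, and a production system would add digital signatures, timestamps, and distributed consensus.

```python
import hashlib
import json

def record_hash(record):
    # Deterministic SHA-256 hash of a record's contents.
    data = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each entry links to the previous entry's hash,
    so altering any past record breaks the chain."""

    def __init__(self):
        self.entries = []

    def publish(self, author, content):
        # Store only a hash of the content, plus a link to the prior entry.
        entry = {
            "author": author,
            "content_hash": hashlib.sha256(content.encode("utf-8")).hexdigest(),
            "prev_hash": record_hash(self.entries[-1]) if self.entries else "0" * 64,
        }
        self.entries.append(entry)
        return entry

    def verify(self):
        # Recompute the hash chain; any tampering breaks a link.
        for prev, curr in zip(self.entries, self.entries[1:]):
            if curr["prev_hash"] != record_hash(prev):
                return False
        return True
```

Publishing two stories and then editing the first entry’s author field causes `verify()` to return `False`, which is the sense in which such a ledger holds originators accountable: history cannot be silently rewritten.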

Blockchain technology, with its irrefutable transaction records, can decentralize the institutionalized trust traditionally held by banks, financial firms and other industries. But AI and blockchain should not be used to assist with or verify the fabrication of news or the manufacturing of records.  Before catastrophic damage is done, lawmakers need to prepare to regulate the use of technology to manufacture false data and dishonest information.  In September 2018, California Governor Jerry Brown signed SB-1001 into law, requiring companies to disclose to customers when they are communicating with a bot rather than a human, the first piece of legislation governing the use of AI and chatbots.

Humans, not machines, are ultimately responsible for truth or falsehood in news stories

Humans are responsible for presenting trustworthy facts by adhering to the principles of honesty and truthfulness in observing events, collecting data and writing news, even with the aid of audio, visual, AI and blockchain technology.  Humans, not machines, are ultimately held liable for the facts in news stories, even in the best scenario where machines and humans work together seamlessly. Thus the duty of veracity, and the trust based on the prerequisite ethics of telling the truth, belong only to humans. If machines are fed only dishonest and wrong information, no amount of machine learning can make it honest and right. Garbage in, garbage out.  Without reliable data as the “raw materials,” software can only “verify” and “authenticate” the available dishonest misinformation and call it “authenticated.”

Perhaps in the future, when AI and blockchain technology are mature, certain untruthful data may be ignored by the technology as long as plenty of correctly labeled data is available, thanks to the answers by Dr. Kai-Fu Lee to my questions about AI, blockchain, big data and clean data. Dr. Lee is the author of the book “AI Superpowers.” I do not claim to be even remotely knowledgeable about AI, blockchain, big data, or other technology. But this is common sense: the more truthful accounts of news events written by humans are out there in the digital world, the better the chance for both machines and humans to accurately authenticate such information.

(To be continued in Part 2.)

Written by Joanne Tan, edited by Glenn Perkins, 10/19-21/2018.   © Joanne Tan, all rights reserved.