The PKI Guy discusses AI and security with technologist and author Charles Jennings
Q&A with Charles Jennings, author of the new book Artificial Intelligence: Rise of the Lightspeed Learners
TPG: What are some misconceptions about AI?
CJ: At a rudimentary level, many people conflate AIs with robots — I’ve seen major articles in both the New York Times and the New Yorker that have done so this year. AI is the intelligence inside some robots — but also inside search engines, weather forecasting systems, disease diagnosis services, and so on.
Another misconception is that AI is not yet taking away jobs. In the book, I tell the story of Goldman Sachs’ currency exchange department, as reported by the MIT Technology Review. In 2010, that department took up three floors in its headquarters building, occupied by 600 highly compensated humans; today, it has fewer than 10 humans, and resides mainly in an AI-powered cloud. Uber and Lyft leveraged AI heavily in eliminating taxi driver jobs around the world. John Deere has a new AI-driven greens thinner (a kind of lettuce-savvy, automated hand plow) that is killing farm worker jobs. Much of the debate about AI and jobs looks too far into the future; AI is already having a big impact on the American job market.
TPG: Tell us about your new book, Artificial Intelligence: Rise of the Lightspeed Learners.
CJ: My starting premise was that the more Americans know about AI, the better for all of us. So I tried to write a book about AI that was accessible (to any reader not afraid of a few tech terms and concepts), and which consisted mostly of stories, along with a high-level examination of AI’s role in the world. In the audiobook, we added original music, poetry, and political rants. It’s available in print, e-book and audiobook, with all necessary details at www.lightspeedlearners.com.
TPG: What is your work background? How did it lead you to writing your book?
CJ: I’ve been both an entrepreneur and a writer all my life. In 1992, I started my first tech company, which in 1999 had a welcome IPO; in 2014, I started my last company (probably), an AI startup partnered with Caltech/JPL. What I learned, 2014–17 as a CEO in the AI industry, made me want to sound alarm bells. Hence this book.
TPG: How is AI disrupting markets? What will AI’s impact on businesses be in the next five years?
CJ: Let’s start with this: in five years, if your job involves using technology in any way and you don’t understand AI and can’t work with it, you’ll be at a huge competitive disadvantage. AI is rapidly entering nearly every market sector on Earth. AI-driven market disruption times will vary by sector, but they are coming nonetheless. Take the medical radiology market, for example. In five years, it too will largely exist in an AI cloud, because AIs today are already better than radiologists at reading x-rays.
Working in tech for over 30 years, I’ve seen firsthand the disruptions caused by the Internet, mobile phones, social networks, and cloud computing. I would be shocked if AI doesn’t produce the biggest marketplace disruption yet.
TPG: Why is AI cybersecurity so critical?
CJ: Most contemporary discussion of AI and cybersecurity falls into the AI vs. AI category: good guys against bad guys, in a cyber arms race, using AI cyber defense to combat malevolent AI malware. These discussions are not unimportant, but this is a race we good guys should win.
More important are the security, safety, and information integrity of AI systems themselves. Even in the biggest data breaches, no one dies. A hacked AI-managed air traffic control system could result in thousands of deaths.
TPG: You discuss a horserace between the U.S. and China. Please elaborate.
CJ: China, Russia, India, Israel, Germany, France, Canada, Japan, South Korea, and Singapore all have national AI strategies. The U.S. does not. China has the strongest, most aggressive national AI program, and is publishing more scientific AI research papers, and producing far more AI PhDs, than we are. Hangzhou’s “CityBrain” AI — built by Alibaba and financed by Xi Jinping — is arguably the most sophisticated and successful AI in the world.
I lived and worked in China for two years. I think the world is better off if the United States retains its current world leadership in AI. In the book, I give my reasons why.
TPG: How can AI help cybersecurity vendors stay ahead of threats?
CJ: Advanced polymorphic malware — a threat that keeps changing form — is impossible for traditional signature-based applications to detect. Intel’s hardware-enhanced security platform uses AI to monitor malware behavior at the level of silicon telemetry. Only an AI-enhanced service can keep learning fast enough to identify ever-changing polymorphic threats (especially cryptomining). AI gives serious cyber defenders an important new tool.
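The core idea above — catch malware by how it behaves, not by what its bytes look like — can be illustrated with a toy sketch. This is not Intel's platform; the feature names (CPU load, cache-miss rate) are hypothetical stand-ins for silicon telemetry, and the method shown is a simple z-score baseline rather than a production model:

```python
import math

def fit_baseline(samples):
    """Compute per-feature mean and std-dev from benign telemetry samples.
    Each sample is a list of numeric features (e.g. CPU %, cache-miss
    rate) -- hypothetical stand-ins for real silicon telemetry."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(dims)]
    stds = [
        math.sqrt(sum((s[i] - means[i]) ** 2 for s in samples) / n) or 1.0
        for i in range(dims)
    ]
    return means, stds

def anomaly_score(sample, means, stds):
    """Mean absolute z-score: how far this sample's *behavior* deviates
    from the benign baseline, regardless of the code's byte signature."""
    return sum(abs((x - m) / s)
               for x, m, s in zip(sample, means, stds)) / len(sample)

# Benign baseline: modest CPU use, low cache-miss rate.
benign = [[12.0, 0.02], [15.0, 0.03], [11.0, 0.02], [14.0, 0.025]]
means, stds = fit_baseline(benign)

# A polymorphic cryptominer presents a new byte signature every run,
# but its behavior -- sustained CPU and cache pressure -- still stands out.
normal_score = anomaly_score([13.0, 0.025], means, stds)  # near baseline
miner_score = anomaly_score([95.0, 0.40], means, stds)    # far outlier
```

The point of the sketch: the miner's signature never repeats, so a signature database is useless, but its behavioral profile scores orders of magnitude above the benign baseline.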
The larger question is how cybersecurity vendors can support AI — how they can help make good-guy AIs the best cyber protectors ever. The answer: collect and curate all your data.
AI-ready datasets, when properly constructed, can be powerful balance sheet assets. They can also be used to train AIs to perform information assurance tasks no human would even attempt. Most cybersecurity vendors today have some sort of AI initiative, but if not grounded in the collection of clean, AI-ready data, it will likely fail.
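What "clean, AI-ready data" might mean in practice can be sketched with a minimal curation pass. The field names (`payload`, `label`) and rules here are illustrative assumptions, not a standard — the point is simply that deduplication and label validation happen before any training:

```python
import hashlib
import json

def curate(records, allowed_labels):
    """Minimal curation pass toward an 'AI-ready' dataset: drop records
    with missing fields or unknown labels, and drop exact duplicates.
    Field names ('payload', 'label') are illustrative assumptions."""
    seen, clean = set(), []
    for rec in records:
        if not rec.get("payload") or rec.get("label") not in allowed_labels:
            continue  # incomplete or mislabeled: unusable for training
        digest = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode()
        ).hexdigest()
        if digest in seen:
            continue  # exact duplicate would skew class balance
        seen.add(digest)
        clean.append(rec)
    return clean

raw = [
    {"payload": "GET /login", "label": "benign"},
    {"payload": "GET /login", "label": "benign"},    # duplicate
    {"payload": "union select *", "label": "malicious"},
    {"payload": "", "label": "benign"},              # missing payload
    {"payload": "ping sweep", "label": "unknwn"},    # bad label
]
clean = curate(raw, {"benign", "malicious"})  # 2 usable records survive
```

A real pipeline would add provenance tracking, near-duplicate detection, and class-balance checks, but even this toy version shows why curation is the foundation the passage describes.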
TPG: On the flipside, what hacking concerns does AI pose?
CJ: AI creates many new opportunities for hackers, but none bigger than in the realm of social engineering. Take FaceApp, for example. Its Russian parent company, headquartered in the Russian government’s leading tech innovation center, has apparently convinced tens of millions of Americans to hand over AI gold: multi-dimensional personal data (name, photo, address, credit card number). Just the kind of data that gets the best price in Dark Web identity markets, and ideal fuel for election hacking exploits. Deepfakes are another AI hack that can be used for social engineering. I expect we will see many more AI-driven political hacks leading up to our 2020 election.
TPG: With AI generating deepfake content, what kind of tools and security will we need to develop in order to distinguish genuine content from fraudulent?
CJ: Basically, we need a global AI Truth Engine — actually, many of them. Such AI truth engines would be under the control of notable, highly trustworthy humans, experts in each engine’s domain. But the AIs would sort out all conflicting facts and assertions.
We’re also going to need a concerted deepfake debunking effort, especially in the waning days of the 2020 election campaign.
TPG: What are your thoughts on AI and ethical concerns?
CJ: AIs are quite capable of operating with human — and humane — ethical motivation. But humans have to program them that way, and humans themselves often fall short of the ethical ideal.
AIs might ultimately compel us to come up with a set of ethical guidelines for all intelligent life on Earth, AIs included.
In the short run, my biggest ethical concern about AIs involves their use in warfare. Over 90 countries now have some kind of AI-driven automated warfighter program. AI terrorism is also a serious threat. We need nuclear weapons-style treaties to halt the use of AI weapons, now.
Ultimately, the central AI ethical question will be: how do we humans stay in control of them?
Mark B. Cooper
President & Founder at PKI Solutions, Leading PKI Cybersecurity Subject Matter Expert, Author, Speaker, Trainer, Microsoft Certified Master.
View All Posts by Mark B. Cooper