A well-known optimist on AI is Ray Kurzweil. He is an inventor whose innovations include advances in OCR (optical character recognition) and NLP (natural language processing). He has also written various books on topics like futurism and healthcare. Since 2012, Kurzweil has served as a director of engineering at Google.

So what are his predictions about AI? Well, he has noted: “Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”Footnote 1

But of course, when it comes to predictions, especially those about technology, the failure rate is very high! So it should be no surprise that there are many AI pessimists as well.

One of the most famous was the late physicist and professor Stephen Hawking. In an interview with the BBC, he said, “The development of full artificial intelligence could spell the end of the human race.”Footnote 2

Such optimism-versus-pessimism debates are definitely enlightening and helpful, as they provide a way to think through the consequences of AI.

But regardless, one thing seems certain: the technology will continue to grow rapidly and significantly impact the world. So in this chapter, we’ll take a look at some of the major trends in AI.

5G

5G stands for the fifth generation of the mobile network. This version, though, will perhaps be the most transformative, as it will allow not only for much higher speeds, reliability, and capacity but also for seamless connectivity across machines and devices.

According to research from Qualcomm, 5G is forecast to enable $13.2 trillion worth of goods and services by 2035 and could lead to the creation of 22.3 million new jobs.Footnote 3

But 5G will also likely be a game changer for AI. Because of the high speeds, it will be possible to handle much more of the analytics processing in the cloud. This will go a long way toward increasing the power of AI models in the real world.
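To make this concrete, here is a minimal Python sketch of the cloud-offloading pattern. The endpoint URL, payload format, and classify_remotely function are all hypothetical illustrations, not a real service:

```python
import json
import urllib.request

# Hypothetical cloud endpoint -- a stand-in for a real inference service.
INFERENCE_URL = "https://example.com/api/v1/predict"

def classify_remotely(sensor_readings):
    """Send raw sensor data to a cloud-hosted model and return its prediction.

    With 5G-class bandwidth and latency, round-tripping data like this
    becomes practical even for near-real-time applications.
    """
    payload = json.dumps({"readings": sensor_readings}).encode("utf-8")
    request = urllib.request.Request(
        INFERENCE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example: a device streams a window of accelerometer samples for analysis.
# print(classify_remotely([0.1, 0.4, 0.35, 0.9]))
```

The design point is that the device stays simple and cheap while the heavy model lives in the cloud; 5G narrows the latency gap that previously forced such models onto the device itself.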

For example, there will be much more innovation in the IoT (Internet of Things). AI systems will be connected across complex global supply chains to improve efficiencies and predictions, such as for demand forecasting. This may even extend to things like AI-connected refrigerators that can help with understanding usage patterns. And there are other interesting applications, like remote surgery, faster drug discovery, and autonomous cars.

Let’s look at an interesting use case: Neteera. The founder, Isaac Litman, is a top entrepreneur who was the CEO of Mobileye (a pioneer of automobile safety systems). In 2017, he sold the company to Intel for $15.3 billion, and it became the centerpiece of the chip giant’s self-driving efforts.

As for Neteera, it develops AI systems that can detect very small movements in a person’s skin without making any contact. This minimizes the risk of contamination and also allows for a much wider use of the technology, say in cars or homes.

The result is that, with a high-speed mobile approach, people can have easy access to medical diagnoses. There are also potential applications like sleep apnea detection and SIDS prevention for babies, just to name a few.

Regulation

At the Davos conference in early 2020, Alphabet CEO Sundar Pichai made some headlines when he said: “There is no question in my mind that artificial intelligence needs to be regulated. The question is how best to approach this.”Footnote 4

He would go on to propose “sensible regulation.” But for the most part, he was vague on what this would entail.

Yet it was a clear-cut sign that he realized government regulation would likely get more onerous. And he was not alone among the mega-tech operators. The CEOs of Microsoft and Facebook have also indicated their openness to more regulation.

But of course, their opinions involve much hedging. For example, in the case of Pichai, he does not want regulation that would stifle innovation.

It’s important to keep in mind that there are emerging laws, such as for privacy, that are already impacting the development of AI. Examples include the General Data Protection Regulation (GDPR), which is the privacy framework for the European Union, and the California Consumer Privacy Act (CCPA).

In fact, some companies are even putting risk factors for AI regulation in their SEC filings. Here’s an example from Lemonade: “State and federal lawmakers, and insurance regulators are focusing upon the use of AI broadly, including concerns about transparency, deception, and fairness in particular. Changes in laws or regulations, or changes in the interpretation of laws or regulations by a regulatory authority, specific to the use of AI, may decrease our revenues and earnings and may require us to change the manner in which we conduct some aspects of our business. In addition, our business and operations are subject to various U.S. federal, state, and local consumer protection laws, including laws which place restrictions on the use of automated tools and technologies to communicate with wireless telephone subscribers or consumers generally.”Footnote 5

In other words, it seems inevitable that there will be more regulation of AI, and businesses will need to find ways to navigate this.

Quantum Computing

Moore’s Law, which states that the number of transistors on a microchip doubles about every two years, has been a main driver of growth and innovation. But the prospects for this trend appear to be in danger.
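To see what this doubling implies, here is a small Python sketch with purely illustrative figures (the starting year and transistor count are hypothetical):

```python
# Moore's Law: transistor counts double roughly every two years.
base_year, base_count = 2010, 1_000_000_000  # hypothetical 1-billion-transistor chip

for year in range(2010, 2031, 4):
    count = base_count * 2 ** ((year - base_year) / 2)
    print(f"{year}: ~{count / 1e9:.0f} billion transistors")
```

Over 20 years, that compounding yields roughly a 1,000-fold increase, which is why even a modest slowdown in the doubling rate matters so much for compute-hungry AI workloads.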

At the 2019 CES event, Nvidia CEO and co-founder Jensen Huang said, “Moore’s Law isn’t possible anymore.”Footnote 6 That is, traditional chip technologies are running into diminishing returns. This is particularly troublesome for AI since it relies heavily on high-end computer systems.

Granted, Huang believes that his company’s GPU technology is the future, and he is probably right. This approach has proven quite powerful.

But there are other innovations. And perhaps the most important is quantum computing. It’s a category that has become a priority of the world’s largest tech companies like Microsoft, Alibaba, IBM, and Google.

Quantum computing is based on the complex physics of subatomic particles. Instead of relying on 0s and 1s (bits) for computations, there are blended values from 0 to 1 (called qubits). This is essentially about handling probabilities and working in parallel. To pull this off, there is a need for such things as cryogenics and superconductivity.
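To make the idea of a "blend" of 0 and 1 a bit more concrete, here is a minimal Python sketch that simulates a single qubit with NumPy. This is an ordinary classical simulation, not real quantum hardware; the gate and state names follow standard quantum computing conventions:

```python
import numpy as np

# A qubit's state is a 2-element complex vector: amplitudes for |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)  # starts as a definite 0

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 blend of 0 and 1
```

With n qubits, the state vector holds 2**n amplitudes at once, which is the sense in which a quantum machine "works in parallel."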

This does seem like something straight out of science fiction. But keep in mind that quantum computing is still in the experimental phase, and commercialization is likely still a few years away. There is still much that needs to be worked out.

Yet the potential benefits of the technology are transformative. Quantum computing systems will be able to process enormous amounts of data at high speeds. All in all, it will make it easier to create advanced AI.

“Don’t try to beat classical computers at ML/AI problems that classical computers are good at because they are really good at them,” said David Hayes, who is the Head of Honeywell’s Quantum Theory team. “Quantum computers are better suited for ML/AI problems that are still hard for classical computers like modeling complex probability distributions, or generative model problems.”Footnote 7

What Does Hinton Think?

In Chapter 2, we learned about Geoffrey Hinton, who pioneered major breakthroughs in AI, such as with backpropagation. His theories ultimately led to the creation of deep learning.

Hinton’s background is definitely interesting and inspirational. Even when he was a teenager in the 1960s, he wanted to be a professor and study AI! Then again, his mom would say to him, “Be an academic or be a failure.”Footnote 8

When Hinton attended the University of Edinburgh for his PhD, the timing was terrible, as it was a dark period of the first AI Winter. But this did not concern him. He was convinced that neural networks would provide a great way to advance AI, and so he continued his work confidently. True, many people thought he was wasting his time, as the perception was that AI was mostly a fringe topic. But hey, Hinton liked being a rebel and was willing to take the long view of things. He knew that computing power would eventually make it possible to show the true value of neural networks.

And yes, he was eventually vindicated. He is now called the “Godfather of Deep Learning” and won the Turing Award in 2018, along with Yoshua Bengio and Yann LeCun.

So whenever Hinton talks about AI, people listen.

What, then, is his view of the future? What are some of his takeaways about AI? Well, here are some quotes:

  • “No, there’s not going to be an AI winter, because it drives your cellphone. In the old AI winters, AI wasn’t actually part of your everyday life. Now it is.”Footnote 9

  • “I think things like reasoning, abstract reasoning, they’re the kind of last things we learn to do, and I think they’ll be among the last things these neural nets learn to do…Well, we are neural nets. Anything we can do they can do.”Footnote 10

  • “Instead of programming them [computers], we now show them, and they figure it out. That’s a completely different way of using computers, and computer science departments are built around the idea of programming computers. And they don’t understand that sort of this showing computers is going to be as big as programming computers. Except they don’t understand that half the people in the department should be people who get computers to do things by showing them.”Footnote 11

  • “If you can dramatically increase productivity and make more goodies to go around, that should be a good thing. Whether or not it turns out to be a good thing depends entirely on the social system, and doesn’t depend at all on the technology. People are looking at the technology as if the technological advances are a problem. The problem is in the social systems, and whether we’re going to have a social system that shares fairly, or one that focuses all the improvement on the 1% and treats the rest of the people like dirt. That’s nothing to do with technology.”Footnote 12

Conclusion

We have come to the end of the book. And we’ve covered quite a bit. We looked at the fundamentals of AI and the various steps in the process for a successful implementation.

Granted, on your own AI journey, there will certainly be many tough challenges and complex issues. The technology is still evolving. So the key is to always be learning new ideas and approaches.

AI is also a team sport. Much collaboration is needed to make an AI project that gets results. It’s absolutely critical.

So then, good luck on your own journey. By reading this book, you will have a set of tools to get off to a great start!

Key Takeaways

  • Predicting the trends for AI is quite difficult, if not impossible. Even the greatest minds in the industry have widely differing views. But there seems to be one thing that is certain: the growth will continue for the long haul. And the technology will increasingly become essential for business success.

  • 5G stands for the fifth generation of the mobile network, which will mean much higher speeds. Because of this, the technology will likely have a major impact on AI. This will be especially the case since more processing can be done in the cloud. Some of the applications include remote surgery, faster drug discovery, autonomous cars, and the Internet of Things.

  • Regulation of AI is likely to increase. It is far from clear how this will play out. But large tech companies like Google, Facebook, IBM, and Microsoft have set forth proactive strategies to allow for reasonable protections while also finding ways to not dampen innovation.

  • Quantum computing is a radical new approach to building machines. Instead of using 0s and 1s, there is a blend from 0 to 1, and the values vary based on probabilities. While quantum computing will take time to reach commercialization, the technology does have the potential to provide much more powerful AI models.