The era of artificial intelligence (AI) is finally here, and it's everywhere we look. From Google Home and Alexa in our houses to Pandora's musical DNA and Amazon's knack for predicting our shopping preferences, AI already plays a significant role in our day-to-day lives.
But while the current state of AI is a good starting point, we’re currently in the “Commodore 64 phase.”

Today's AI is equivalent to the iconic Commodore 64 PC. (Image: Shutterstock)

In short: AI is bound for more. The benefits we currently get from AI often center on automating the mundane, allowing us to take our hands off the wheel in some way. But to get from the Commodore 64 level to the PlayStation 4, AI needs to learn how to learn.
Take driverless cars, for example. It is estimated that each car will transmit approximately 4,000 gigabytes of data every day. While much of that may stay onboard, a significant portion will travel to mobile towers and sensors around the city over 4G and, soon, 5G networks. For comparison, a Boeing 787 generates some 500 GB of data per flight. There is clearly a mountain of data to sift through: a perfect application for AI.
In a hypothetical case of mass gridlock, a smart-roads system leveraging today's AI would sift through all available traffic data, determine the paths of least resistance, and direct as many autonomous cars as possible out of the gridlock onto different routes, while steering those on the verge of merging into the gridlock away from it.
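The rerouting step above can be sketched in miniature: model the road network as a weighted graph whose edge weights reflect current congestion, then route each car along the least-congested path with Dijkstra's algorithm. This is an illustrative toy, not any real smart-roads system; all the node names and congestion costs are made up.

```python
import heapq

def least_congestion_path(graph, start, goal):
    """Dijkstra's shortest path where edge weights are congestion scores.

    graph: {node: [(neighbor, congestion_cost), ...]}
    Returns the lowest-total-congestion path as a list of nodes, or None.
    """
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if goal != start and goal not in prev:
        return None  # goal unreachable
    # Walk the predecessor chain back from goal to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical road grid: intersection A is gridlocked (high edge cost),
# so cars are routed around it via B and C.
roads = {
    "home": [("A", 90), ("B", 10)],
    "A":    [("work", 5)],
    "B":    [("C", 10)],
    "C":    [("work", 10)],
}
print(least_congestion_path(roads, "home", "work"))  # ['home', 'B', 'C', 'work']
```

Real traffic systems would recompute these weights continuously as congestion data streams in, but the core decision is the same shortest-path query.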
For this purpose alone, the Commodore 64 level of AI is good enough: its fast decision-making will optimize the connections between the network and the driverless car and act on the best outcomes determined by the network operators.
But what if the network is breached and the underlying programs are altered? What happens in the event of a failure or a network outage?
It’s why we believe that for the network of tomorrow to get the most out of AI, it must do more than automate — it needs to learn and adapt.
The optimum networks of tomorrow will operate by means of a control loop that provides automation, recommendation, optimization, assurance and estimation. There is potential to create a network alert enough to enable faster and more predictable changes: one that adapts to changing instructions, recognizes when things go wrong and is able to fix the problem. This is an AI that would accelerate decision-making, increase network predictability and analyze timelines to identify issues instantly and automate change.
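That control loop can be sketched as an observe-analyze-act cycle over telemetry: flag an anomaly, apply a remediation, and keep going. Every name, threshold and "remediation" here is a hypothetical stand-in for illustration, not a real network API.

```python
def analyze(sample, threshold_ms=30.0):
    """Flag an anomaly when link latency exceeds a simple threshold."""
    return sample["link_latency_ms"] > threshold_ms

def act(network_state):
    """Apply a remediation: fail traffic over to a backup path."""
    network_state["active_path"] = "backup"
    return network_state

def control_loop(samples, network_state):
    """Run an observe -> analyze -> act loop over a telemetry stream."""
    log = []
    for sample in samples:
        if analyze(sample):
            network_state = act(network_state)
            log.append(("remediated", sample["link_latency_ms"]))
        else:
            log.append(("ok", sample["link_latency_ms"]))
    return network_state, log

# Simulated telemetry with a latency spike at the second sample.
samples = [{"link_latency_ms": v} for v in (12.0, 45.0, 8.0)]
state, log = control_loop(samples, {"active_path": "primary"})
print(state)  # {'active_path': 'backup'}
```

A production loop would of course close with a verification step and learn better thresholds from history; the point is only that "assurance" is a loop, not a one-shot script.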
This future is not far off, and we’re already seeing AI tools incorporated in networks within the more progressive markets across the globe, such as Australia (Telstra), South Korea (SKT) and Singapore (Singtel).
In fact, AT&T is also exploring the use of machine learning in its networks to quickly predict where issues or breaks may occur, including in hardware, and to self-heal autonomously. That network includes customers who rely on a series of connected devices: the AT&T AI system can predict the likelihood that, say, a smartphone's vitals are showing signs of imminent failure based on a number of data points, then adapt the network to minimize the impact and send resources to the consumer as required.
This is ultimately the end game of bringing AI into the network: it will be able to self-heal. It will be able to rebuild itself completely in a matter of minutes, identify irregularities in the network, such as a cybersecurity breach or a failure at a critical point, and fix them before any of us has a chance to notice. And it will compress decision-making timelines by orders of magnitude, so issues can be identified instantly.
It's why we're confident that, decades down the track, we'll look back on the current state of AI in much the same way we reminisce about the Commodore 64 and marvel at how far we've come.
Rick Hamilton is Senior VP, Global Software & Services, Ciena Corporation.