
Will the Real AI Please Stand Up?


Cole, A. (2017, October 2). Will the Real AI Please Stand Up?

Takeaway: There's a lot of hype about artificial intelligence, but just how intelligent is it?

Artificial intelligence has garnered so much attention in enterprise circles that many IT leaders can be excused for thinking it will provide all the answers to an increasingly complex data ecosystem. But while it certainly has the potential to make many meaningful improvements to existing technology, it is also fair to say that some of the expectations surrounding its efficacy are overblown.

In fact, there is relatively little understanding of exactly what AI is, how it really functions and what it can actually do. And this is leading to broad misconceptions surrounding its role in the enterprise and the way it will relate to existing infrastructure and the humans who operate it.

AI in the Hype Cycle

According to Gartner’s most recent Hype Cycle, key AI subsets like deep learning, machine learning and cognitive computing sit at the Peak of Inflated Expectations, which means they are on the cusp of the long slide into the Trough of Disillusionment. While this is par for the course for virtually every disruptive technology of the past 30 years, it points to the fact that the projected impact of AI in the enterprise, derived mainly from controlled lab tests, is about to run headlong into the realities of the production environment. (Check out a history of computing innovations in From Ada Lovelace to Deep Learning.)

Nevertheless, Gartner researcher Mike Walker expects AI to become ubiquitous over the next decade through a combination of advancing compute power, which is leading to the development of such constructs as the neural network, and the mere fact that the enterprise data load has become so immense and so complex that human operators can no longer cope on their own.

One of the first things the enterprise needs to understand about AI is that it plays fast and loose with the term “intelligence.” As Swiss neuroscientist Pascal Kaufmann explained to ZDNet recently, there are profound differences in the ways a computer algorithm and a human brain process information to arrive at a conclusion. Given enough processing power, a computer algorithm can compare millions, billions, perhaps even trillions of data sets to make a simple determination, such as whether an image of a cat is indeed an image of a cat. But even a small child, given very little data, can instinctively determine that it is a cat and will forever after know what a cat is and what it looks like.
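To see how data-hungry even a simple algorithm is, consider a toy sketch (purely illustrative, not drawn from the article or any real vision system): a nearest-centroid classifier separating two made-up categories of 2-D "features." Its accuracy hinges on how many labeled examples it is shown, whereas the child needs only a handful.

```python
import random

# Hypothetical sketch: a minimal nearest-centroid classifier on synthetic
# 2-D features. The point is that its quality depends on the volume of
# labeled training data it sees, unlike a child learning from one example.

def make_points(n, center, seed):
    rng = random.Random(seed)
    return [(center[0] + rng.gauss(0, 1.0), center[1] + rng.gauss(0, 1.0))
            for _ in range(n)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(point, centroids):
    # Pick the label whose centroid is closest (squared Euclidean distance).
    return min(centroids, key=lambda label:
               (point[0] - centroids[label][0]) ** 2 +
               (point[1] - centroids[label][1]) ** 2)

def accuracy(n_train):
    train = {"cat": make_points(n_train, (0, 0), seed=1),
             "dog": make_points(n_train, (2, 2), seed=2)}
    cents = {label: centroid(pts) for label, pts in train.items()}
    test = ([("cat", p) for p in make_points(200, (0, 0), seed=3)] +
            [("dog", p) for p in make_points(200, (2, 2), seed=4)])
    hits = sum(1 for label, p in test if classify(p, cents) == label)
    return hits / len(test)

print(accuracy(3))    # a few examples: noisy centroids
print(accuracy(500))  # many examples: centroids settle near the true centers
```

With a handful of examples the estimated centroids wander with the noise; only with hundreds of samples do they stabilize, which is the gap Kaufmann is pointing at.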

By this standard, even the leading example of AI at work – Google DeepMind AlphaGo’s mastery of the strategy game Go – was not really artificial intelligence but a cross-section of big data, analytics and automation that was capable of rationalizing a rules-based approach to winning. Interestingly, Kaufmann adds that a true example of artificial intelligence would be if AlphaGo had figured out how to cheat to win. In order to do this, however, science will first have to crack the “brain code” that powers our ability to process information, retrieve knowledge and store memories. (Learn more about automation with Automation: The Future of Data Science and Machine Learning?)

So Far, Not So Good

Indeed, despite fears that AI is about to subsume everyone’s job, the results so far are almost comical. Fans of George R.R. Martin’s “Game of Thrones” are so impatient for the next installment of the series that many flocked to a chapter of almost pure gobbledygook written by a form of AI called a recurrent neural network. Meanwhile, IBM is taking flak from oncology researchers who were told that Watson would unleash a new era in diagnosis and treatment, but which is instead still struggling just to differentiate between basic forms of cancer. Given this track record, when AI is first introduced into the typical enterprise, it will likely require more effort from human operators just to track and monitor all the mistakes it makes.

But here’s the rub: AI will get better over time without having to be reprogrammed. As Cornell Tech researcher Daniel Huttenlocker told TechCrunch recently, AI is more likely to displace traditional software – and all the pesky patches, updates and fixes it requires – than human operators. This does not mean AI does not need to be programmed, but that the approach is vastly simplified. With today’s software, the programmer needs to define not only the task to be solved but the exact steps with which to solve it. With AI, all that is needed is the goal, and the software should be able to handle the rest, provided it has the right data to work with.
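That goal-versus-steps distinction can be made concrete with a toy sketch (an illustration of the general idea, not anything from Huttenlocker). Traditional code spells out the exact rule; a learning approach is handed only a goal – minimize error on examples – and works out the rule from data:

```python
# Traditional software: the programmer encodes every step of the rule.
def f_to_c_explicit(f):
    return (f - 32) * 5.0 / 9.0

# Learning approach: given only example (input, output) pairs and the goal
# of minimizing squared error, fit the rule by gradient descent without
# ever writing the formula down.
examples = [(f, (f - 32) * 5.0 / 9.0) for f in range(-40, 101, 10)]

w, b = 0.0, 0.0                      # start knowing nothing about the rule
lr = 0.0001                          # small learning rate for stability
for _ in range(200000):
    grad_w = grad_b = 0.0
    for f, c in examples:
        err = (w * f + b) - c        # prediction error on one example
        grad_w += 2 * err * f        # d(err^2)/dw
        grad_b += 2 * err            # d(err^2)/db
    w -= lr * grad_w / len(examples)
    b -= lr * grad_b / len(examples)

print(round(w, 3), round(b, 3))      # approaches 5/9 and -160/9
```

The learned coefficients converge toward the true conversion constants, but notice the caveat the article keeps raising: everything depends on the example data being right in the first place.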

It All Hinges on the Data

That last point is crucial because, at the end of the day, AI is simply an algorithm, and algorithms are only as good as the data they are fed. This means that in addition to building a proper AI operational framework, the enterprise will have to establish a fairly rigorous data conditioning environment so that analytics results are based on accurate information going in. As ActiveCampaign CEO Jason VandeBoom told Forbes recently, the old rule of “garbage in, garbage out” still applies, so it could be a while before organizations see the true benefits of their AI investment.
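In practice, the simplest form of that data conditioning is a sanity-check pass that rejects obviously bad records before they ever reach a model. A hypothetical sketch (the field names and rules are invented for illustration):

```python
# Hypothetical data-conditioning pass: filter out records that would
# poison downstream analytics ("garbage in, garbage out").

RAW_RECORDS = [
    {"customer_id": "C001", "age": 34, "monthly_spend": 120.50},
    {"customer_id": "C002", "age": -7, "monthly_spend": 88.00},   # impossible age
    {"customer_id": "",     "age": 45, "monthly_spend": 300.00},  # missing ID
    {"customer_id": "C004", "age": 29, "monthly_spend": None},    # missing value
    {"customer_id": "C005", "age": 61, "monthly_spend": 54.25},
]

def is_clean(record):
    """Return True only if the record passes basic sanity checks."""
    if not record.get("customer_id"):
        return False
    age = record.get("age")
    if not isinstance(age, (int, float)) or not (0 < age < 120):
        return False
    spend = record.get("monthly_spend")
    if not isinstance(spend, (int, float)) or spend < 0:
        return False
    return True

clean = [r for r in RAW_RECORDS if is_clean(r)]
print(len(clean))   # only the sane records survive conditioning
```

Real pipelines layer on deduplication, schema validation and outlier handling, but the principle is the same: the model never sees what the conditioning stage throws out.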

Given all of this, the enterprise should not expect AI to provide a quick fix for the emerging challenges of big data and the IoT. The learning curve for both humans and machines is likely to be quite long, and the results are uncertain at best.

But if it all works out as planned, both the enterprise and the knowledge workforce should see substantial benefits in the long run. Just think of the most mundane, tedious and time-consuming task that is slowing down your processes at the moment, and imagine never having to do it again, ever.
