AI for Business: What’s All the Fuss About?

By: Glen Hilford

As we begin our look at AI, let’s ground ourselves by answering some foundational questions.

IS IT REAL?

The most fundamental question – Is AI real? – is also the easiest to answer. While there has been reasonable skepticism in the past, today the reality of AI is undeniable. From groundbreaking medical diagnoses to automatic language translation to autonomous vehicles, AI has quickly become ubiquitous in our lives. Just consider the omnipresence of Siri and Alexa, and the incredible accuracy of today’s weather forecasting.

At the same time, many vendors are using the term AI to hype a variety of products and services that are in no way “intelligent,” artificially or otherwise. Simply slapping the label “AI” on a product doesn’t make it so. Caveat emptor.

IS IT INTELLIGENT?

You may recognize Alan Turing, commonly acknowledged as the father of theoretical computer science and artificial intelligence, as the key character in The Imitation Game. Turing created what he called the imitation game, and what we now know as the Turing Test, when he observed that “a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

Consider two well-known examples, both board games played against human experts. In 1997, IBM’s Deep Blue AI defeated the reigning world chess champion, Garry Kasparov. In 2015, DeepMind Technologies pitted its first-generation AlphaGo AI against a professional Go player and won. A later iteration, AlphaGo Zero, is considered to be the world’s best Go “player,” and its successor, AlphaZero, is arguably the world’s best at chess as well. While both matches were well publicized as man vs. machine (AI), had the human opponents not known this, could these AIs have passed the Turing Test?

A quick note on nomenclature. Like many emerging fields, Artificial Intelligence has generated its own vocabulary. As we move forward in this blog series, I’ll try to point out this jargon, and provide some definition and context for its use.

AI – You might have noticed the term AI used as if it were a human being (e.g., “could these AIs have passed the Turing Test?”). I suspect that this is a result of early AI enthusiasts promoting the idea of computer “intelligence”. Over time, this usage has become shorthand for describing an AI model, solution, implementation, and the like.

IS IT NEW?

In some ways, AI is as old as modern computing. Turing created his imitation game and made the observation quoted above in 1950. While this was theoretical work, most of the AI techniques that we’ll look at grew out of Turing’s groundbreaking ideas. Perhaps surprisingly, many of these techniques have been around for decades. For example, Arthur Samuel developed an AI-powered checkers player in 1952 [1], just two years after Turing’s observation.

As early as the 1990s, AI techniques were, somewhat stealthily, creeping into mainstream business. I first worked with both symbolic AI and neural network applications during this timeframe, even though I wasn’t aware that they were “AI” techniques. At the time they were just esoteric and effective methods for solving hard business problems such as gas pipeline optimization and demand forecasting.

More recently, four things have changed to bring AI to the forefront:

    • First, and least important, AI has become the flavor of the week – what we in the field refer to as buzzword-compliant.
    • Second, supporting technologies and computing power have advanced to make these AI techniques more achievable and widely applicable to business and consumer needs. Twenty years ago, AI could predict tomorrow’s weather with a high degree of accuracy. Unfortunately, using that era’s computing environments, the calculations took more than a day to complete.
    • Third, as AI techniques have evolved and become more powerful and accessible, new and novel use cases have been identified. Uber would not exist without a brilliant idea and the AI to make it a reality.
    • Most importantly, today’s digital environment generates vast amounts of readily and quickly available data that, in turn, power and enable AI. More on this later.

IS IT RISKY?

The answer to this simple question is complex, not only for today’s business environment, but also for the future.

Evolutionary Risks – Any new, powerful, and somewhat immature technology comes with risks – some we can foresee and others that appear as the technology evolves over time. When Facebook broke onto the scene, who could have foreseen the sociological side effects or the overwhelming influence that it has had on modern society (and, to make this discussion circular, Facebook now makes heavy use of AI)?

The recent evolution of AI has followed a similar pattern: enormous value and insights have become reality, while questions about explainability and ethics, and the side effects of AI on worker behaviors and organizations, continue to emerge.

Contemporary risks continue to develop as the velocity at which AI interacts with the physical world increases. While we might not be able to precisely identify the emerging risks from AI applications such as autonomous vehicles (cars, trucks, rail, ships, airplanes, spaceships?), military applications, air traffic control, and medical diagnoses, we can certainly anticipate that some will appear. At best, the effects of these risks could be disruptive and potentially dangerous. At worst, catastrophic. These are the types of AI-related risks that we should be concerned with in today’s business world – it’s up to us to attempt to anticipate them and make informed decisions about how to react.


Equivalency Risks – Today, AI is best described as Artificial Narrow Intelligence, where a machine is able to perform a single task extremely well, often better than a human. Weather forecasting is a good example, as no human being can begin to compete with AI.

Sometime in the next 20-30 years, AI is expected to evolve into Artificial General Intelligence, where machines can think and function as the equivalent of a human mind. I can imagine a much smarter C-3PO (perhaps without the gaudy gold plating, fussy persona, and stiff mannerisms) as a personal assistant. Unfortunately, no human can anticipate the effects (good, bad, or indifferent) that this level of equivalence will have on society and business. But we can all sense that there will be effects and that they could be significant.


Singularity – In James Cameron’s sci-fi classic Terminator 2: Judgment Day, Arnold Schwarzenegger’s character (an AI-enabled cybernetic organism) defends humanity against Skynet, an AI that has reached “singularity”, in other words, one that has become self-aware and super-intelligent. Several visionaries, including Stephen Hawking and Elon Musk, have expressed concern that singularity could result in the extinction of humans. While the specter of singularity is (at least theoretically) terrifying, fortunately for us, it would be far in the future.

WHAT’S NEXT?

Now that we have a 30,000-foot view of AI, it’s time to move past marketing hype, technological mystique, and Hollywood fear-mongering. In our next installment, we will dig deeper into the thing we call AI to better understand its capabilities, limitations, and application to the business world.

References: [1] Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA.

FOR MORE INFORMATION

Stay tuned for the next blog in our AI for Business series, “Deconstructing AI: Part 1.”
