Unforced Errors: Four Costly AI Mistakes

By: Glen Hilford

Overview:

Business leaders are increasingly being asked to sponsor, approve, and fund AI projects without a complete understanding of their underlying risks in terms of project viability, ROI, and organizational impact. At the same time, potentially valuable opportunities to leverage AI to generate revenue, reduce costs, and mitigate risk are being overlooked.

The first two installments in this webinar series, AI for Business, prepared us to ask the right questions when considering an AI investment and presented the AI techniques most used by mainstream businesses to solve their common challenges.

This installment expands on this background to examine common and costly mistakes organizations make when pursuing an AI solution. Equipped with this information, business leaders can avoid unnecessary risks, what we refer to as unforced errors, while driving AI project success. As a bonus, we’ll also look at ways to identify valuable AI opportunities that might otherwise fly under the radar.

Key Takeaways include:

– An overview of risks unique to AI initiatives
– An examination of four common mistakes organizations make when pursuing an AI solution
– A method for identifying potentially valuable AI opportunities

Presentation Transcript

Unforced Errors: Four Costly AI Mistakes

Presented By Glen Hilford

Julia:

Hi, everyone. Welcome to Unforced Errors: Four Costly AI Mistakes. Our speaker is currently out of town and having bandwidth issues, so he will not be able to use his camera for today's webinar. It will just be audio. But other than that, we're good to go. So, the first two installments in our AI for Business webinar series prepared us to ask the right questions when considering an AI investment and presented the AI techniques most used by mainstream businesses to solve their common challenges. This installment expands on that background to examine common and costly mistakes organizations make when pursuing an AI solution. Equipped with this information, business leaders can avoid unnecessary risks, what we refer to as unforced errors, while driving AI project success. As a bonus, we'll also look at ways to identify valuable AI opportunities that might otherwise fly under the radar.

Today's webinar is being led by our VP of Corporate Development, Glen Hilford. Glen is Access Sciences' AI expert, heading our AI initiatives and projects. He has published a lot of content on the topic of AI for business, which you can view by scanning the QR code on the screen; we'll also give you a chance to scan it at the end of the presentation. Also, feel free to ask Glen questions using the Q&A feature throughout the presentation, and he will address those at the end. Alright, let's get started.

Glen:

Thank you, Julia. Good afternoon, everybody. And thank you for joining us. Today, we're going to talk about risk, not everyone's favorite topic, but one that's vitally important as we consider investing in AI. AI can be risky, but it doesn't have to be. As we've seen in the first two episodes of this webinar series, AI can do some amazing things, things that were once unimaginable: predicting the future, autonomous vehicles, language translation, and the list goes on. But to achieve much, we must sometimes risk much. While there will always be some level of risk involved in any AI initiative, with some foresight and discipline, significant and predictable risks can be mitigated. That's why we're here today, to identify these common risks and look at ways to deal with them before they get out of hand.

How do we achieve this? Julia, can you go to the next slide, please? As business leaders, many of us are familiar with M&A, especially the due diligence process, where we examine an acquisition's offerings, financials, contracts, obligations, litigation, and the like. To avoid AI risk, that same level of rigor needs to be applied when we're looking at AI opportunities. Next slide, please. It's baseball season, so a baseball analogy is probably in order. Since some AI risks are predictable, we can refer to ignoring them as unforced errors. Avoiding them includes recognizing when a solution just won't work and hitting the eject button as soon as possible; understanding the true cost and realistic return of an investment upfront; ensuring that our employees and our organizations are prepared for the changes that AI will assuredly impose on us; and learning to identify otherwise unseen opportunities.

By addressing these head on, we have the opportunity to drive enormous value into our organizations while minimizing any downsides. And we're going to look at these in turn. Next, please. Here's a problem we recently worked on for a manufacturing client. They wanted to identify a component in a CAD drawing using an AI technique known as object detection. If you look on the left side of the slide, you can see a symbol that represents a component within the CAD drawing, and the CAD drawing itself is on the right. Conceptually, the business problem was solvable. They had the right data, in this case, PDFs of CAD drawings, and any number of examples, thousands of examples. It was a very nice opportunity.

But before we attempted to develop a production system, we built a proof of concept. And during that process, we determined that the resolution of the CAD drawings wasn't clear enough for an object detection technique to work. So, despite having a valuable use case, the solution just wasn't viable. As a result, we stopped the project before the client made a significant investment. This is what we mean by viability.
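To make that proof-of-concept step concrete, here is a minimal sketch of the kind of resolution check a team might run before committing to object detection. It assumes the drawings are scanned raster images embedded in the PDF and that the PyMuPDF library is available; the file name drawing.pdf and the 300-DPI threshold are hypothetical stand-ins, not details from the client project.

```python
# A hedged sketch: estimate the effective resolution of raster images
# embedded in a CAD drawing PDF before attempting object detection.
# Assumes PyMuPDF (pip install pymupdf); "drawing.pdf" and MIN_DPI are
# hypothetical. Vector-only PDFs will report no embedded images.
import fitz  # PyMuPDF

MIN_DPI = 300  # illustrative threshold; tune to your detector's needs

doc = fitz.open("drawing.pdf")
for page_num, page in enumerate(doc, start=1):
    page_width_in = page.rect.width / 72  # PDF units are points (1/72 inch)
    for xref, *_ in page.get_images(full=True):
        img = doc.extract_image(xref)
        # Rough estimate: assumes the image spans the full page width.
        dpi = img["width"] / page_width_in
        verdict = "ok" if dpi >= MIN_DPI else "likely too coarse"
        print(f"page {page_num}: embedded image ~{dpi:.0f} DPI -> {verdict}")
```

A check like this takes minutes to run and can flag a non-viable input set before any model is trained.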

If you could go to the next slide, we'll look at this a little deeper. As we just saw, some seemingly viable AI solutions don't pan out. This can happen for any number of reasons, and I'm just going to name a few. Some business problems aren't solvable using an AI technique. In our first webinar, we talked about how machine learning solutions can approximate an answer, but not give you a precise answer. It's not a calculator. Trying to solve a traditional transactional problem, like the things we handle with ERP, CRM, or accounting and finance systems, using artificial intelligence rarely succeeds. What we're looking for are problems that lend themselves to artificial intelligence. Machine learning models, such as prediction and classification models, use historical data for training. But if we don't identify the right variables, the right input data, we won't get an accurate answer. This is the old garbage in, garbage out dilemma. And sometimes the right variables don't even exist, meaning that the problem isn't solvable.
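As a quick illustration of garbage in, garbage out, here is a minimal sketch, assuming scikit-learn, that trains the same classifier twice: once on informative inputs, once on random noise standing in for the wrong variables. The data is synthetic and purely illustrative.

```python
# Garbage in, garbage out: identical model, different inputs.
# Assumes scikit-learn and NumPy; all data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labels driven by ten genuinely informative variables.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=10,
                           n_redundant=0, random_state=0)
noise = np.random.default_rng(0).normal(size=X.shape)  # the "wrong" variables

for name, features in [("right inputs", X), ("garbage inputs", noise)]:
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy {acc:.2f}")
```

The first run should score well above chance; the second hovers around 50%, no matter how sophisticated the model.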

Assuming we've identified the appropriate inputs, we still need to have enough historical data to train an AI model on, and an ongoing source for the same data for use in production. Sometimes an external factor that we didn't anticipate pops up. Murphy always gets a vote. Some models don't produce accurate results; sometimes these things just don't work, and that can happen for any number of reasons. In other cases, we can determine early on that even a technically successful project isn't going to deliver the value that we anticipated. In one famous example, IBM invested $62 million in a cancer diagnosis and treatment solution. That's a lot of money, and it failed miserably. There were signals early on in the project that accurate input data wasn't available, and they had to develop some synthetic or artificial data in order to build and test the solution. That should have been a glaring signal.

With adequate due diligence, they could have cut their losses earlier and saved some of that enormous investment. This is an example of the sunk cost fallacy, what we see on the slide, which is an age-old description of throwing good money after bad. What we want to do is nip these things in the bud when we see early on that they won't deliver value. We should always ask ourselves: Have we identified an AI approach that directly addresses our business problem? Have we performed due diligence to determine if our approach is valid and can reasonably be expected to produce the needed result? And can we make this determination early enough in the process to avoid prolonging investment in a dead end?

In AI, recognizing a dead-end initiative early is how we avoid the sunk cost fallacy. Now, let's look at value and return on investment. As we compete for limited budgets, and we all do, we have to be able to demonstrate value in terms of increased revenue and margin, reduced costs, or mitigated risks. If an AI opportunity doesn't address at least one of these, we should take a good hard look at why we're considering it. Value and ROI should always drive investment decisions. Before we can calculate an ROI, we first need to determine a solution's real cost, and this is not apparent to most folks as we're moving into an AI world. The technical solution is obviously important, but it's only the first consideration. Once we've identified the appropriate data inputs, what we talked about on the last slide, they have to be provisioned internally, or the data needs to be purchased.

A data infrastructure, what I like to call plumbing, has to be implemented to ensure that the data's available, clean, consistent, timely, and that it can be governed. Poor data always equals poor results. It's unusual for a business system to function in isolation, and AI solutions are no different. We should anticipate that an AI solution is going to be integrated with other systems, adding to the overall cost of the solution. As business conditions evolve, supporting AI solutions need to evolve with them, and the cost for ongoing training or retraining and maintenance of the solution has to be factored in. As we'll discuss in episode five of the series, governing AI solutions is more complex than governing traditional IT systems, and the cost of governance isn't trivial.

And finally, change management is critical to solution adoption. If our users don't adopt the solutions, then we have an investment that's not going to pay off. We'll talk more about this in a moment. Now, equipped with this information, we should be able to calculate a solution's real cost and use that to make an informed investment decision.
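To make that calculation concrete, here is a back-of-the-envelope sketch that tallies the cost categories we just walked through and computes a simple ROI. Every figure is hypothetical; the point is that the model itself is only one line item among many.

```python
# A hedged ROI sketch using the cost categories discussed above.
# All numbers are hypothetical, for illustration only.
annual_costs = {
    "model development / licensing": 150_000,
    "data provisioning / purchase":   60_000,
    "data infrastructure (plumbing)": 80_000,
    "systems integration":            50_000,
    "retraining and maintenance":     40_000,
    "governance":                     30_000,
    "change management":              45_000,
}
annual_benefit = 600_000  # hypothetical revenue gains plus cost savings

total_cost = sum(annual_costs.values())
roi = (annual_benefit - total_cost) / total_cost  # simple ROI formula
print(f"total annual cost: ${total_cost:,}")
print(f"simple ROI: {roi:.0%}")
```

Run with these made-up figures, the model line item is under a third of the total cost, which is exactly the point of the exercise.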

In addition to solution viability and monetary investment, AI initiatives can be expensive in terms of the time they take from our valuable subject matter experts, who are always in high demand; the opportunity cost of not investing in something that could be more valuable to the organization; the turmoil created when people's jobs and the supporting organization change; and the risks to personal, professional, and organizational reputations. And these aren't insignificant. Developing a clear value proposition and business case right up front helps mitigate these risks and provides a tangible way to measure a project's value. One of the interesting things about developing presentations like this is selecting the images we use to illustrate concepts, and this image jumped out at me as a picture of radical change. Imagine what it must have felt like to the occupants of the house when that high-rise popped up next to them. Study after study shows that lack of adoption is a primary cause of AI failure. The premise and the promise of AI are predicated on change; otherwise, why would we be doing it? But change is hard. Employees' jobs often change with the introduction of AI, frequently for the better. Machines can glean insights from data that humans can't, and these insights can help employees make more informed business decisions.

AI can also automate an employee's rote activities, the routine things we do every day, freeing them to apply that time to higher-value work, benefiting both the worker and the organization. The challenge is helping our employees embrace that they'll be working alongside machines. It's a really weird concept for people to grasp, and a change that's often poorly communicated, which can foster resistance. Let's look at an example. In our first webinar, we looked at an example of a pipeline that receives natural gas in the south and southwest and delivers it to the Chicago market area. It takes about three days to get a molecule of gas from the production fields up to Chicago. If the company could predict demand for gas in Chicago three days in advance, it could save millions annually in operating costs. To do this, we implemented a machine learning model to forecast demand. It's a prediction model; we looked at those in our last webinar.
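For a flavor of what such a prediction model looks like, here is a minimal sketch, assuming scikit-learn, that forecasts synthetic daily demand three days ahead from the previous week of observations. The data and features are illustrative stand-ins; a real pipeline model would draw on weather, temperature, and market signals as well.

```python
# A hedged sketch: forecast demand three days ahead from lagged history.
# Assumes scikit-learn and NumPy; the demand data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
days = np.arange(1000)
# Synthetic demand: a weekly cycle plus noise.
demand = 100 + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, days.size)

HORIZON, LAGS = 3, 7  # predict 3 days out from the last 7 days of demand
X = np.array([demand[t - LAGS:t] for t in range(LAGS, days.size - HORIZON)])
y = demand[LAGS + HORIZON:]

split = int(len(X) * 0.8)  # keep time order: train on the past, test on the future
model = GradientBoostingRegressor().fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"mean absolute error, 3 days ahead: {mae:.1f} units")
```

The operational payoff comes from acting on those forecasts, which is where the operators enter the story.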

But the problem was the pipeline operators. These are the people who manage the flow of gas through the pipeline; it's their job, they're professionals, and they're good at it. They fought the idea of a computer replacing their seat-of-the-pants expertise. But in this case, it was the right thing to do, because the model proved to be more accurate than their experience could provide. So, it's a good case in point: even though you may have a really good solution, one that brings additional value to a business problem, you have to walk users through the journey to embrace the change that's going to occur as a result. Now, we've talked about change being hard on individuals, but it's sometimes even harder on an organization.

We just talked about how workers' jobs could change, but what about the underlying organization? It has to change as well. One way to proactively address both types of change, at the worker level and at the organizational level, is using an approach like ADKAR. ADKAR provides a structured framework for communicating, addressing, and reinforcing change: raising awareness within the workforce; building desire for the change, and that can be a little challenging; imparting knowledge, helping workers understand what the change is and how it's going to affect their jobs; developing ability, helping them learn how to use these new capabilities to best advantage; and ongoing reinforcement, because nobody gets it right the first time.

If you tune into the next installment of our AI for business webinar series on July 20th, Yvette Clark and Todd Brown are going to do a much deeper dive into AI and change. Now we get to my favorite part of the presentation. Our final topic isn't really an unforced error, it's more of a challenge. We've all experienced it, that moment when the light bulb comes on. AI delivers value, sometimes incredible value, to organizations like ours. As leaders, how can we recognize opportunities to leverage it? More importantly, how can we equip and lead our organizations to do the same? Here are some questions from our last webinar, but they remain valid: How can we recognize opportunities in our organizations to make predictions about the future, classify seemingly unclassifiable information and images, discover hidden patterns and groupings in data, visually recognize objects, intelligently automate processes, convert text into meaningful information, and drive connectivity using semantic relationships?

That's an enormous challenge and opportunity. The short answer is something called ideation. Like many industries, AI has its own vocabulary, and this is one of those instances. Ideation is AI-speak for brainstorming, something we've all done: identifying valuable opportunities to leverage AI. We'll start with a non-AI example of ideation, but one that's really on point: the discovery of Post-it notes. Some time back, a 3M researcher was tasked with creating a better adhesive for the aerospace industry. The researcher failed and created a weak adhesive, but one that didn't leave any residue. Later, a different researcher looked for a way to keep his place in a church hymnal without damaging the pages, and tried using yellow scrap paper with the failed adhesive. That was the aha moment, ideation at its finest.

By 1981, more than 50 billion units were sold annually. That's not bad for a failure. And on a side note, as a fun fact, the yellow scrap paper used by that researcher is the reason that yellow Post-its were and remain predominant in the industry. At first, AI's uniqueness makes ideation, recognizing opportunities that might be right in front of us, seem like finding a needle in a haystack. But with a little effort, the odds can be much better, especially when we acknowledge that the best ideas often come from unlikely places or unlikely people. While brainstorming is necessary, it's rarely sufficient for truly valuable ideation. And without making ideation systematic, organizations can find themselves falling prey to three common mistakes.

Let's look at those. Mistake number one: what's a camel? It's a horse designed by committee, one of my favorite analogies. If you've never encountered a camel, they're not very pleasant creatures, and they're not designed very well. Ideation frequently begins with a well-intentioned meeting where participants are asked to brainstorm ideas. It sounds good, right? Unfortunately, something called group dynamics comes into play. Leaders are going to lead, presenting their ideas first, causing less assertive participants to hesitate, defer to authority, and avoid rocking the boat. And strong personalities will dominate the discussion, also minimizing opportunities for less assertive participants to share their ideas. Even when they are able to voice their ideas, those ideas often get discounted and filtered out by louder voices. We've all experienced this phenomenon, whether we're on one side of that dynamic or the other.

The second mistake is what I call ignoring stakeholders. AI solutions are rarely focused on just one work group within an organization. Business problems can come from anywhere, both internal and external to the organization. More importantly, the results of an AI initiative, the model's outputs, can affect multiple groups and be used or reused in different contexts. If ideas are only solicited from and discussed within one group, we miss the opportunity to benefit from multiple viewpoints and insights. If we go back to our pipeline example, the initial use case was forecasting demand in the Chicago market area, a very valuable use case. A very successful implementation too, I might add.

But a second business problem, what we call capacity recapture, proved to be much more valuable. Here, the contracts group, the people that sold contracts, or capacity, on the pipe, had not been included in the ideation process, and only learned about the demand forecast via the grapevine. But when they did, they immediately saw an opportunity to recapture an enormous amount of money, bottom-line money, by reselling capacity that was going unused on the pipeline. That turned out to be worth more than $10 million annually, and probably continues to this day. If that group had not stumbled on the demand forecasting project and its results, they would've missed the opportunity to recapture that additional revenue. It's an amazing opportunity.

The third mistake is a little more esoteric, but it's very important. Some organizations use facilitators to drive ideation efforts, and there's nothing wrong with that. In fact, it's a good idea. But a lot of times, facilitators are inexperienced in the AI domain and don't have the skills to lead the effort. AI experience, when facilitating for an AI opportunity, is an absolute prerequisite. But even an AI-skilled facilitator isn't enough to minimize the groupthink that we talked about and include perspectives from across the organization. The facilitator needs a system to guide the group through the ideation process, and needs to do that while making sure all voices are heard and given credence, and that all work groups are included. Only then does this work.

We refer to this process as systematic facilitation. Using systematic facilitation, valuable ideas are bound to surface, where they can then be evaluated for, you guessed it, viability, value, and organizational impact, leading to new opportunities and improved outcomes.
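One way to picture that evaluation step is a small scoring sketch that ranks candidate ideas on the three criteria. The ideas, weights, and scores below are all hypothetical; the value is in making the comparison explicit rather than letting the loudest voice decide.

```python
# A hedged sketch: rank ideation output on viability, value, and
# organizational impact. Ideas, weights, and 1-5 scores are hypothetical.
weights = {"viability": 0.4, "value": 0.4, "impact": 0.2}

ideas = {
    "demand forecasting":  {"viability": 4, "value": 5, "impact": 3},
    "capacity recapture":  {"viability": 4, "value": 5, "impact": 4},
    "invoice auto-coding": {"viability": 3, "value": 2, "impact": 2},
}

def weighted_score(scores: dict) -> float:
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(ideas.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: weighted score {weighted_score(scores):.1f} of 5")
```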

So, to recap, we've learned that due diligence can mitigate AI risks, and that there are four primary risks, or unforced errors, that we need to address head on. First is viability, where we want to avoid the sunk cost fallacy; we want to make a determination about viability as early in the process as possible. Second is understanding the true costs and realistic returns of an opportunity and ensuring that the opportunity will be valuable to the organization. Third is understanding the impacts to the organization, both at the worker level and at the organizational level, and being prepared for that inevitable change. And fourth is looking for otherwise undiscovered or unseen AI opportunities within the organization, and then using something like systematic facilitation to drive effective ideation. Okay, Julia, I think you've got a poll for us.

Julia:

Thanks, Glen. So, we want to know which of these risks is most prevalent in your organization. I'll pull up the poll, and we'll give you a few minutes to answer. Just give it a minute. Looks like 75% of people struggle the most with value, understanding the true cost. So, if you're already wanting more on the topic of AI for business, be sure to register for our next webinar, Navigating Change on Your AI Journey, which will be led by principal consultant Yvette Clark and director Todd Brown. It will take place on Wednesday, July 20th, and will provide takeaways on engagement, communication, and training initiatives before, during, and after AI implementations. More immediately, you can check out Glen's AI for business blog series. I'll be sending out an email shortly with links to today's recording, the slide deck, and these links.

Now we have some time for questions, if anyone wants to drop those in the Q&A feature. I’ll give a few minutes if anyone wants to ask something.

Looks like no questions are coming in at the moment. So, again, thank you all for joining us. Please take the time to fill out the survey before you log out of GoToWebinar. We always appreciate some feedback. And please join us next month for our next AI for business webinar.
