
Deconstructing AI – A Deeper Dive Into Common AI Solutions

By: Glen Hilford


Overview:

Autonomous vehicles, disease detection, industrial robots, automatic language translation, facial recognition… At first glance, the AI landscape can be overwhelming, but it doesn’t have to be. For most businesses, the vast majority of AI use cases can be solved using a small set of straightforward AI techniques.

Our first installment in this webinar series, AI for Business – Demystifying AI, prepared us to ask the right questions when considering an AI investment. This installment dives deeper to better understand the AI techniques used by businesses to solve their most common problems. As we examine these techniques, we'll look at examples of how they're constructed, where they have succeeded and failed, discuss their pros and cons, and dispel a few myths along the way.

Key Takeaways include:

  • An overview of the AI domain from a human perspective
  • The seven most common business challenges that AI addresses
  • The seven AI techniques used to solve these challenges
Presentation Transcript

Deconstructing AI – A Deeper Dive Into Common AI Solutions

Presented By Glen Hilford

Julia:

Hi, everyone. Welcome to today’s webinar, Deconstructing AI: A Deeper Dive into Common AI Solutions. As we go through the presentation today, please feel free to leave your questions in the chat, and we will be addressing those at the end. After the presentation, we will work on sending out the slide deck and recording, as well as your choice of the Panera Bread gift card or donation to the Barbara Bush Literacy Foundation. So today’s webinar is the second in our AI for Business series… If you want to click the slide, Glen. Thanks… following Demystifying AI, which prepared us to ask the right questions when considering an AI investment. And today, we’ll dive deeper to better understand the AI techniques businesses use to solve their most common problems, with examples for how they’re constructed, where they have succeeded and failed, and their pros and cons. And then next time, we’ll examine four common mistakes organizations make when trying to implement AI. Next slide.

To kick off the webinar, we want to get an idea of which AI capabilities you or your organization are most interested in. So I'm going to launch a poll. And you can choose between machine learning, text analytics, object detection, RPA, or knowledge graphs. Okay. We have lots of responses coming in. I'll give it a few more seconds. Okay. I'm going to close the poll. It looks like a majority of people are interested in learning more about machine learning. Okay. So, I'd like to introduce you guys to Glen Hilford. Glen is the VP of corporate development and AI expert for Access Sciences. Feel free to scan the QR code to connect with Glen, and check out his content on AI for Business.

Glen:

Great. Thank you, Julia. And good afternoon, everyone. Thank you for joining us. Today, we're tasked with deconstructing AI. So let's get started by looking at how AI mimics human functionality. AI does some human-like things that include cognition, the ability for AI to learn, to make deductions and decisions, and to ingest history as a way of thinking, if you will. It can also recognize and act on visual information, much like we can with our eyes. It can recognize and act on human language, both spoken and written, it can automate physical activity and processes, and finally, it can coordinate interrelationships, much like our nervous system connects the pieces of the body. That's a pretty tall order. Let's look at our human functions through the lens of AI. The AI domain includes many different types of functionality, as we've seen. Here, we refer to those as human functions. It is also enabled by AI techniques or solutions. You've probably heard of some of these, things like robotic process automation or machine learning.

One thing we'll see is that our human functions can be recategorized or relabeled in industry-standard terms. So we talk about AI in terms of machine learning, computer vision, robotics. And while it's easy to categorize AI in these terms, there's also a little bit of a danger. The techniques that we are going to look at tend to morph or blend across these categories. And while this is an easy way for us to represent AI and the things that it can do, it's important for us to understand how things like object detection and recognition, which sit in both machine learning and computer vision, are really one and the same, just different manifestations of the same types of technology. We'll see this with autonomous cars and navigation. Autonomous cars use a form of computer vision that helps with navigation. But at the same time, the autonomous vehicles themselves are robotic entities, and those two combine or morph as well. So let's be a little bit careful when we think about these categories.

While there are any number of solutions, and what we looked at on the previous screen certainly was not a comprehensive list, there are about seven techniques that I think are most important for mainstream businesses. Your list may differ. I'm sure I've left off a couple of things that you may include, or maybe have included something that you might not consider core to mainstream businesses, but these are certainly important, and we're going to look at them in more detail as we go through. The first thing we should do here is switch away from our human functionality language and use industry-standard terms. So henceforth, we're going to look at things like machine learning, natural language processing, computer vision, robotics, and something called knowledge graphs, which may not be familiar to many people on the call, but we'll look at those in more detail. In terms of the techniques we're going to examine, we'll look at things in the machine learning category called prediction, classification, and clustering.

We'll look at a specialization in natural language processing, or NLP, known as text analytics, object detection, robotic process automation, which is a bit of a misnomer, and of course, knowledge graphs. One of our sons and his wife work for ESPN as graphics producers. One of the things they do is create the score lines, or the statistics, that you see at the bottom of the screen. The graphical information that ESPN puts into a game, they call that a bug. I have no idea where they got that terminology, but we're going to use a similar concept to guide us through the presentation as we look at these seven techniques. So you can see our bug at the bottom of the screen. The first technique we'll look at is prediction, a machine learning technique. Who wouldn't like to have a crystal ball? Well, with the right data and the right business problem, sometimes you can. Sometimes you can predict or forecast the future using historical data that trains a prediction model.

Let's look at a couple of flavors of this. The first we'll look at is value. And the term prediction is a bit of a misnomer here. What we're doing is determining a current value based on a model trained on historical data. A very simple example is trying to predict what the price of a house should be. We can certainly gather things about a house, like the number of bedrooms, the number of bathrooms, how old it is, its condition, the neighborhood that it's in, the schools that it's zoned to, and we can use that historical information, along with the sales price, to train a model, and then use that model to predict the price of a house that's going on the market, based on the characteristics of that house. Time series, on the other hand, uses the same mechanism for training. We'll use historical data, but here we are truly trying to predict the future. A couple of very common examples are predicting the price of a stock in the future, which is pretty useful, or weather forecasts. We're all accustomed to using weather forecasts to plan our activities for tomorrow or the next day. Those are examples of time series forecasts.
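To make value prediction concrete, here is a minimal sketch in plain Python. It uses a k-nearest-neighbors average rather than a trained regression model, and the house features and prices are invented for illustration.

```python
def predict_price(history, features, k=3):
    """Average the sale prices of the k most similar past sales."""
    def distance(a, b):
        # Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda pair: distance(pair[0], features))[:k]
    return sum(price for _, price in nearest) / k

# Hypothetical past sales: (bedrooms, bathrooms, age in years) -> sale price
past_sales = [
    ((3, 2, 10), 250_000),
    ((4, 3, 5), 340_000),
    ((2, 1, 40), 150_000),
    ((3, 2, 8), 260_000),
    ((5, 4, 2), 450_000),
]

# Price a house with 3 bedrooms, 2 bathrooms, 12 years old.
estimate = predict_price(past_sales, (3, 2, 12))
print(round(estimate))
```

A production model would be trained on thousands of sales and many more features, but the principle is the same: historical examples with known prices determine the value of a new one.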

Let's start with an example of value prediction. And my apologies, I had to include this image. It may be the worst Photoshop ever, but it's very illustrative of the problem we're going to describe. So could this catastrophe have been prevented? I didn't know this a couple of months ago, but the highest home insurance claims are caused by water damage. And if a leak is discovered early, that damage can be limited. But if it isn't discovered, what may be a $10,000 or $15,000 repair can quickly become a six-figure disaster. How can AI help with that? How can prediction help with that challenge? My wife and I recently moved to a member-owned insurance company, and they're keenly interested in reducing or limiting their losses and claims. Their motto is, what happens if a loss never happens? They offer a device that monitors a house's water usage, and it learns usage patterns, things like running the dishwasher and the washing machine, or taking a shower. After training, it can automatically shut off water to the house when it detects something abnormal, something outside of those normal usage patterns. So here, what we're seeing is a value prediction that learns from the past to determine if an event is abnormal or something that should be acted on.

Now, let's switch gears and look at a real-world success story that deals with time-based prediction. Our example focuses on a natural gas transportation system, a set of pipelines, if you will, that takes natural gas produced in the south and southwest, in what are marked as the production areas on the diagram, and transports it up to the Chicago market area, where it's consumed. The important thing is it takes about three days to get a molecule of gas from the production areas up to Chicago. So if we could predict gas demand in the Chicago market area three days in advance, we could optimize the configuration of our pipeline. And why does that matter? It matters because we can save millions of dollars by setting up the pipeline in a way that delivers that gas most effectively and efficiently. A good point about this is that not every application of AI has to be flashy to be valuable. In fact, in many cases, it's the mundane applications, the mundane business problems, that can bring enormous value to the operation. Keep that in mind as we move through this presentation.
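The talk doesn't describe the pipeline's actual model, so as a stand-in, here is the simplest possible time-series forecaster, a moving average in plain Python. Real demand models add seasonality, weather, and trained regression, but the shape of the idea, estimating a future value from recent history, is the same. The demand numbers are invented.

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

daily_demand = [100, 104, 98, 107, 110, 103]  # invented demand units
forecast = moving_average_forecast(daily_demand)
print(round(forecast, 2))  # mean of the last three observations
```

In practice you would backtest a forecaster like this against held-out history before trusting it to configure anything as expensive as a pipeline.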

Here are a few examples of prediction. The first one should be familiar to everybody. We use the weather apps on our phones. We take these for granted today, but 20 or 30 years ago, being able to predict tomorrow's weather, or the weather a week from now, was a crapshoot, and it was difficult to depend on those forecasts. Today, they're amazingly accurate due to prediction models. Client churn is another opportunity, not only to save money, but to direct our resources more effectively. If we can predict when a customer is likely to leave and move to a competitor, we can focus our customer retention efforts on that person or those people, use our resources more effectively, and hopefully retain those clients. Predictive maintenance for equipment kind of speaks for itself. If we can predict when a piece of equipment is going to fail, we can act proactively and deal with the situation. During the COVID surges, the ability to predict demand for hospital beds was very important.

Skipping down, if we look at product demand and inventory levels, take Walmart. If you've ever been into a Walmart storeroom, the back part of the store, you'll see that their inventory levels are surprisingly small. What they do very well there, and what has really differentiated Walmart from their competitors over the last 20 years, is their ability to predict what their customers want at the store level and at the product level. They are able to keep their inventory costs and levels low as a result of their ability to predict demand. You might think of it this way: Walmart is really an AI company that happens to sell retail goods, rather than the other way around. Let's move on to the next machine learning technique, called classification. We all classify and categorize information, so frequently that we often do it subconsciously. In the working world, some of these tasks are so repetitive, time-consuming, and error-prone that there has to be a better way. That's where classification comes in. Let's look at a couple of examples.

Let's start with my first encounter with classification. It was February in Houston. I purchased something from San Diego using my phone and my credit card. I then drove to the airport, and on the way, purchased gas, again using my credit card, and flew to Calgary. When I got to Calgary, I presented my credit card at the hotel desk and found that my account had been locked. Now, mind you, it's Calgary. It's February. It's nighttime. And in Calgary, they plug their cars in to keep them from freezing at night. And by the way, Visa's 800 number doesn't work in Canada. It was not a comfortable situation. And a good plug for the Best Western in southern Calgary: they let me stay the evening, which I always appreciated. So what happened here? Well, classification happened. A classification model that was trained to detect credit card fraud determined that using the same credit card to purchase something in San Diego, Houston, and Calgary, all within six hours, probably wasn't me, and so it locked my card. It was not a fun evening, but it was certainly informative.
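A real fraud detector is a trained classification model over many signals, but one signal it can learn, impossible travel, can be hand-coded as a sketch. The coordinates and timings below are illustrative, not actual transaction data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_fraudulent(tx1, tx2, max_kmh=900):
    """Flag two card-present transactions if the implied travel speed
    between them exceeds roughly airliner speed."""
    (lat1, lon1, t1), (lat2, lon2, t2) = tx1, tx2  # (lat, lon, hour)
    hours = abs(t2 - t1)
    if hours == 0:
        return True
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Houston at hour 0, Calgary two hours later: ~2,800 km in 2 hours.
houston = (29.76, -95.37, 0)
calgary = (51.05, -114.07, 2)
print(looks_fraudulent(houston, calgary))  # True: ~1,400 km/h implied
```

The trained model in the story learns many such patterns from labeled historical transactions rather than having them written by hand, which is exactly what makes classification scale.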

Classification can also work on images. It can classify pictures, and that's something we call image classification. Let's look at a good example of this. Public works departments are tasked with maintaining our roadways, but just assessing, categorizing, and prioritizing road repairs is a monumental task in and of itself, not to mention the actual repairs. So how can government organizations assess and categorize road conditions quickly and accurately, without a lot of human time involved, so that they can use their limited budget for the actual repairs most effectively? We've all encountered image-capturing vehicles, I'm not sure what the real term for that is, but these are the Google Earth cars that drive around our neighborhoods with cameras strapped on top. Using these same technologies, and maybe even the images generated by those Google Earth cars, machine learning can classify road conditions. That's what we see in the illustration on the slide.

You can see a road condition that is classified as poor, and one that's substandard, with asphalt starting to break up. The other two are a little hard to tell apart, but one is satisfactory and one is good. Those are the results of an image classification model, a classification model using machine learning that takes this task out of the hands of humans and puts it in the hands of a machine. It's a very powerful use of this tool. There are any number of classification use cases out there. In fact, this is a very widely used technology. Let's look at a couple of these before we move on. The first is facial recognition. If you use the camera to unlock your cell phone, you're using facial recognition, which is just a form of image recognition. Many organizations use classification to filter job candidates, to look at resumes and background information, and classify them as to their appropriateness for a job. If you're an organization like ours, accounts receivable is very important.

Being able to identify who is likely to be a late payer is very valuable information. That way, you can direct your efforts towards them. Detecting fraudulent transactions, we saw an example of that in Calgary. And then spam filtering. If you use Outlook, you're using classification right now. Our last machine learning technique is clustering. Unlike prediction and classification, clustering doesn't require training. We don't pre-train the model. Instead, we present information to it and have the clustering model look for patterns or groupings in that set of data that may be hidden to human beings. As an example, let's look at document analysis. Any organization dealing with high volumes of documents can benefit by organizing them as they're generated. That means being able to understand the underlying themes in the documents, and then being able to compare these to other documents. Clustering examines the text in the documents, and then groups them into clusters of different themes. That way, they can be quickly and automatically organized according to their actual content.
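The grouping step described above can be sketched with a tiny k-means in plain Python. In real document clustering the points would be text vectors, such as TF-IDF scores or embeddings; here, invented 2-D points stand in so the mechanics are visible.

```python
def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Tiny k-means: assign each point to its nearest center, recenter, repeat."""
    centers = list(points[:k])  # deterministic init: the first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # move the center to the mean of its cluster
                centers[j] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return clusters

# Two obvious groups of "documents" in a toy 2-D feature space.
docs = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
clusters = kmeans(docs, k=2)
print(sorted(len(c) for c in clusters))  # the two themes separate cleanly
```

Notice there are no labels anywhere: the algorithm discovers the two groups on its own, which is exactly the "no training" property the talk describes.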

So, a few clustering examples. The first two are in the marketing world, where we're attempting to discover the types of customers, or their personas, and data trends for cross-selling products. So if I buy a product, I can often be prompted to go and look at a different product that may be somehow related to the first product I looked at. We see this in Amazon all the time. Identifying patterns of terrorist behavior: in Iraq and Afghanistan, the military used clustering technology to gather information from cell phone conversations, text messages, social media, and email, and combined that with the clustering mechanism to identify terrorist behavior, identify individual terrorists, and in some cases, pinpoint where they were at a certain point in time. It is a very valuable tool for our military. Medical condition discovery, I think that speaks for itself. Detecting criminal fraud in the workplace, things like employee theft, fraud, insider trading, and money laundering. And forensic analysis and legal discovery.

This harkens back to the document analysis use case we looked at: being able to look at a set of documents, in this case unstructured content, and find patterns or relationships within that information. Just as a side note, clustering can also be used as a pre-processing step for machine learning, and often is. It's especially useful with classification, where we can use clustering to identify classes or categories, and then use those as the target classifications or categories for a classification network. Let's look at the difference between prediction and classification on the one hand, and clustering on the other. You may have noticed that prediction and classification work differently than clustering. In essence, those models use historical data to learn from past history, and then use that information to predict and classify new information. This class of technique is called supervised learning, where a model uses historical information to learn, somewhat like a baby learns to talk and to walk and to feed itself.

In this situation with supervised learning, the model is first trained, as we can see in the diagram, using a set of historical information. And then we use that trained model to classify, in this case, an image of a valve, and it tells us it's a valve. Clustering is an unsupervised learning technique, in contrast to the one we just looked at. Here, a data set is presented to the algorithm. We can see the set of images on the left. And the algorithm, the clustering model, discovers patterns and groupings that a human may not be able to. In this instance, we can see that it finds examples of compressors, valves, and pumps. And it's worth noting that the data it ingests can be images, as well as non-image data. So that's about it for machine learning. But before we move on, it's worth noting the importance of machine learning to AI. In my opinion, machine learning's cognitive abilities, its ability to learn from history, to think, if you will, to make decisions, to discern, are what make AI intelligent.

And it's not surprising that we'll find machine learning embedded in many of the other techniques that we're going to look at today. Let's move on to a new technique called text analytics. Here, we're using a text analytics algorithm to extract or mine actionable information from unstructured content. And when we talk about unstructured content, we're referring to things like documents and PDFs and drawings, things with text. We can use text analytics to discover patterns and trends within the text. Now, unlike some of the other techniques that we're looking at today, text analytics doesn't have a lot of standalone use cases that are interesting to us, but it is frequently used as a pre-processor for other AI techniques. For example, it can be used as input to something like machine learning or robotic process automation.
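A small sketch of that pre-processing step: pulling structured fields out of free text so a downstream model can use them. The message format, field names, and equipment-tag pattern below are invented for illustration.

```python
import re

def extract_fields(text):
    """Pull dates, dollar amounts, and equipment tags out of raw text."""
    return {
        # ISO-style dates, e.g. 2022-03-14
        "dates": re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text),
        # Dollar amounts, commas stripped, converted to floats
        "amounts": [float(a.replace(",", ""))
                    for a in re.findall(r"\$([\d,]+(?:\.\d{2})?)", text)],
        # Hypothetical equipment tags like PU-101 or VA-207
        "tags": re.findall(r"\b[A-Z]{2}-\d{3}\b", text),
    }

note = "On 2022-03-14, pump PU-101 was repaired for $12,500.00; valve VA-207 inspected."
record = extract_fields(note)
print(record)
```

Real text analytics goes well beyond regular expressions, using entity recognition and language models, but the output is the same kind of thing: structured records a machine learning model can consume.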

Let's look at an example. Here, we're using text mining, or text analytics, as a pre-processor for machine learning. So we're looking at things like documents, drawings, email, text messages, and the like, to identify, extract, and transform data within those objects into actionable information. We then feed that into the machine learning model for more processing. You'll see this used over and over, and we'll talk about it again at the end of the presentation. Our next technique, object detection, is widely used. You'll see hundreds of use cases for object detection, so let's look at it a little more deeply. An object detection model is a specialized machine learning classification model, like the ones we looked at a moment ago, but it is trained on and acts on images, either still images or video. It differs from image classification in that it detects and identifies objects within an image, rather than treating the image as a whole.

On the slide, we can see images of retinas, that part of the eye, where the object detection model has isolated the optic nerve center, where the optic nerve comes together, for early glaucoma detection. If you look closely, you can see blue boxes around the optic nerve center. Those are called bounding boxes, and that's what isolates the object. So once again, we're going to look just at that object, and not at the image as a whole. Once we've detected the object, we can then use the model to classify it, to categorize the objects, in this case the optic nerve centers, according to their likelihood of exhibiting glaucoma symptoms. So, we're isolating the object out of an image, and then we're classifying it to solve a problem. Here's another example where object detection is used to detect equipment in an industrial setting. In the left-hand image, we haven't done anything to it. We run it through the object detection model, which, remember, is a classification model that we've trained, and it results in a set of objects that have been identified within that image. You can see a blowout of a flange. These can then be acted on. So we can isolate all of the flanges, or all of the valves, and so forth, and then act on those individually.
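One small, concrete piece of object detection worth showing in code is how a bounding-box match is scored. Intersection-over-union (IoU) is the standard metric for comparing a predicted box against a reference box; the coordinates below are illustrative, not from the retina example.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (left, top, right, bottom); 1.0 means a perfect match,
    0.0 means no overlap at all.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlap rectangle (zero if disjoint).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

predicted = (10, 10, 50, 50)
reference = (20, 20, 60, 60)
print(round(iou(predicted, reference), 3))  # 0.391
```

Detectors are typically evaluated by counting a prediction as correct when its IoU with a labeled box clears a threshold such as 0.5.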

These are just a few examples of object detection. Like I mentioned, there are hundreds of use cases. This is very useful technology, and we'll run into it over and over. The first one we're going to talk about is identifying manufacturing defects and anomalies. Consider, if you will, a manufacturing process that produces circuit boards. Being able to detect flaws in the circuits is just an excruciating task. There are thousands of opportunities for errors in that process. And having human beings visually examine each one is just mind-numbing, and it's not going to work very well. But if we use object detection, it does this automatically. It can do this at a scale that we can't, and with a much higher success rate. So you see manufacturing defect detection used all over the place when things are being manufactured. Worker safety is an emerging area for object detection, where if a worker does not have the proper personal protective equipment, PPE, and they enter a restricted area, some action can occur. So when the worker is detected going into that area, perhaps the process stops, or an alarm sounds.

In autonomous vehicles, object detection is used heavily in navigation and in processing. Here, we're looking to detect external objects like pedestrians or road hazards or traffic lights, and having the ability to act on those conditions. Detecting skin lesions, I think that speaks for itself in the medical world. Video surveillance: if you have a Ring camera at your house, or something similar, it is performing object detection. It's looking for humans, and it can notify you when a human comes into the scene, as opposed to a dog or a cat. And I think crowd counting speaks for itself. So RPA, what is RPA? We hear that acronym all the time. It stands for robotic process automation, but spoiler alert, there's no robot involved. RPA is essentially process automation, which many of us have seen for a couple of decades, but with machine learning layered on to make it somewhat intelligent. A common misconception is that RPA is a robotic phone system, which we all hate when we deal with banks and such.

And while a phone or voice interface may be part of an RPA implementation, that's not RPA's primary role. RPA automates back-office tasks, like extracting data, filling in forms, and moving files, and it does those across unrelated systems. The way it learns is it observes human users performing these computer-based tasks, and then mimics those within the computer's graphical interface. So why does that matter? Well, RPA automates tasks that we really don't like to do. And it doesn't complain, it doesn't take coffee breaks, it doesn't make clerical errors, and it works 24 hours a day, 365 days a year. It's just there all the time. Process-centric businesses, things like banks and insurance companies, absolutely love RPA. And that's why we have to deal with the front end of it all the time. I'm not going to go through these examples. I think they're self-explanatory. And it's difficult to illustrate RPA with a picture, but one thing we can do is look at a commonly overlooked opportunity to leverage it. And that's in the manufacturing world, once again.
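Real RPA platforms work by replaying recorded UI actions, which doesn't reduce to a short script, but the kind of back-office re-keying they take over can be sketched in plain Python: reading records exported by one system and re-entering them in another system's format. The file layouts here are invented.

```python
import csv
import io

def rekey_invoices(export_text):
    """Turn one system's CSV export into another system's entry lines,
    with no human copying and pasting in between."""
    reader = csv.DictReader(io.StringIO(export_text))
    return [f"INVOICE|{row['id']}|{row['vendor']}|{float(row['amount']):.2f}"
            for row in reader]

# A hypothetical export from the source system.
export = "id,vendor,amount\n1001,Acme,250.5\n1002,Globex,99\n"
entries = rekey_invoices(export)
print(entries)
```

The RPA version of this would drive the target system's actual screens rather than a file format, but the value proposition is identical: the transfer runs constantly and never makes a clerical error.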

If you go into a chemical plant or a refinery, or some other process industry, and go into a control room, you'll find process operators who are copying information from one system and reentering it in another system. It happens all the time, even in 2022. There's a huge opportunity to take RPA into the manufacturing world and automate those processes, not only for efficiency, but also for safety. The last AI technique we're going to look at is knowledge graphs, and knowledge graphs are an interesting animal. Not many people are aware of them or know how they work. Yet, at the same time, they're probably the next big thing in AI. So if you're not familiar with them, you might want to pay a little closer attention to this portion of the webinar. Knowledge graphs are used to connect and extract meaningful data through something called semantic relationships. And this sounds pretty basic, but connecting systems and data silos using semantic relationships, and then augmenting this with machine learning, allows enterprises to discover new and powerful ways to leverage their information. Rather than try to describe this with words, let's look at a major example to learn more about the basics.

So, we've all used Google and Google search, and we may also be familiar with the Google knowledge panel. That's what we're seeing here. In this case, we've searched for an actress named Penelope Cruz. On the left is a representation of the knowledge graph. Knowledge graphs have two components. They're made up of objects, things such as people, locations, and assets, and relationships, more specifically semantic relationships, that connect those things. In our illustration, the bubbles represent the objects, and the connecting lines represent the semantic relationships. So our friend Penelope is an object. She's a person. Let's walk through how the interconnections Penelope has with other objects on the knowledge graph are important. And to do that, we're going to look at the links shown in her knowledge panel.

If you look down, you can see that she was born in a town in Spain. She has a spouse named Javier. She has two children, Luna and Leo, and two parents, Encarna and Eduardo. So from that information, we know that she's married, and she's married to Javier Bardem. Well, Javier is also an object on our knowledge graph. He's also a person, but the important part here is they share a semantic relationship, a spousal relationship. They're married to one another. So on the diagram, you can see that Penelope has a spouse named Javier, and the converse is also true. And we can also infer that they have two children. We can see Luna and Leo on the knowledge graph, and that they are also person objects. We won't illustrate it on the screen, but the children have a different semantic relationship with each of their parents. You can think of that as a has-parent relationship. Let's take this one step further. We can see from the knowledge panel that Javier appears in movies, and one of those movies happens to be Dune, which came out recently.

And so, the semantic relationship between the object Javier, who's a person, and the object Dune, which is a movie, is that Javier appears in the movie. So here we can see how the knowledge graph expands. These objects and relationships don't end here. They only form a new piece of a virtually infinite network of objects and relationships within Google. And as you can see, simply by using that search, you're already using a knowledge graph. So with those basics under our hats, let's work toward a business example. But before we get there, let's look at some examples. Knowledge graphs are used in some pretty prominent applications, but they may not be apparent to people at first glance. One area is institutional knowledge capture and later retrieval. NASA uses this to capture the knowledge that has accumulated as decades of people have worked on the various space programs, starting with Mercury, Gemini, and Apollo, on to the space station, and now as we're looking at things like going to Mars and back to the moon.

As we go through generational change, especially the great [inaudible 00:39:45] change that's been threatened for the last couple of decades, but that has really manifested itself in the last few years, especially with COVID, being able to capture that institutional knowledge and pass it to the next generation of workers, and their ability to confidently find and retrieve that information, is paramount. Knowledge graphs provide a way to do that. And I think you can probably see how they work, by linking knowledge objects, perhaps a person, perhaps some piece of knowledge itself, with semantic relationships, and navigating through those knowledge objects to find what we're looking for, much like we do with a search engine. We've already looked at Google, so we'll move beyond that. Let's look at Wikipedia for a moment. Wikipedia is simply the combination of an encyclopedia and a knowledge graph. And you've probably experienced this as well. It's easy to get lost in Wikipedia. Look up one topic, find a link to another topic, and pretty soon you've spent an hour floating around the internet, looking at interesting information that may or may not be terribly productive.

I have to admit I do that more than I should. The last example we're going to look at is enterprise data integration, and we'll look at that in more detail on the next slide. In an enterprise setting, and we'll use the example on the screen, Maximo, SAP, Oracle, and SharePoint can each describe the same asset, a building or some other asset, but they do it in a different way, one that's appropriate given the context of that application. So Maximo is looking at things like the equipment in the building. SAP is looking at the value of the building and other aspects of its ongoing financial needs. Oracle has a different perspective, as does SharePoint. A knowledge graph provides an overarching information architecture, and this is an important point, and the semantic relationships between these systems, so that they can be used in harmony.

So, let's think about this: information management structures. What does that mean? That means things like a taxonomy or an ontology, but one that is built using a knowledge graph, so that we can then guide the configuration of these individual applications, but do it in a way that is in harmony with the other applications, and that supports semantic relationships between the entities, the objects, represented within each application. That's a hugely powerful concept, and one that's really hard to describe in words. We can implement a common security model, but one that is uniquely implemented at the application level. So our security model can span all four of these applications, but be implemented in a way that meets the requirements of each individual application. We can also provide attributes, or metadata, to each of these applications, along with controlled vocabularies, the values that make up these metadata fields, in a way that is common across the applications. This is a very important concept.
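To make the idea concrete, here is a minimal sketch of a knowledge graph as subject-predicate-object triples linking one building asset across the four systems mentioned above. The system names are from the example, but the asset IDs, predicates, and traversal function are invented for illustration, not drawn from any real deployment.

```python
# Minimal knowledge-graph sketch: subject-predicate-object triples linking
# the same building asset as it appears in four enterprise systems.
# Asset IDs and predicate names are illustrative only.
triples = [
    ("Building-42", "hasMaintenanceRecord", "Maximo:WO-1001"),
    ("Building-42", "hasFinancialRecord", "SAP:Asset-77"),
    ("Building-42", "hasLocationRecord", "Oracle:Loc-9"),
    ("Building-42", "hasDocumentLibrary", "SharePoint:/sites/bldg42"),
    ("Maximo:WO-1001", "concernsEquipment", "HVAC-Unit-3"),
]

def related(subject, triples):
    """Return every (predicate, object) pair reachable from a subject."""
    return [(p, o) for s, p, o in triples if s == subject]

# Navigate outward from the asset, the way a search across systems would.
for predicate, obj in related("Building-42", triples):
    print(f"Building-42 --{predicate}--> {obj}")
```

Real knowledge graphs use standards like RDF and query languages like SPARQL, but the core structure, objects joined by named semantic relationships, is exactly this.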

Let's look at an example. A couple of weeks ago, I was at an AI conference, and a global energy company presented how they are using a knowledge graph, in conjunction with machine learning, to operate in a safer environment. What they've done is use the knowledge graph to connect the enterprise systems within their company, use semantic relationships to find and, in essence, extract information about certain objects, in this case, risks that are potentially incurred within their operations, and pull that back into a machine learning model, in this case a predictive model, so that they can predict when a risk event may happen. That's important to them, because safety is paramount, and the ability to predict when a risk might occur gives them the opportunity to act on that potential risk before it happens. That leads to a safer working environment and lowers their liability for dangerous events. It's all goodness. And the way they've been able to do that, which really was unavailable in the past, was by using knowledge graphs and the semantic relationships they support.

So those are the seven AI techniques that we think are key to most mainstream businesses. Before we go, let's look at a couple of themes that transcend all of them. First, something called composite AI. AI solutions frequently require multiple AI capabilities to be combined. We saw an example of this when we talked about using text analytics as a pre-processor for machine learning. Stringing these techniques together is an increasingly common and effective way to address a business problem, especially when no single commercial software product meets the problem's requirements on its own. So you'll see this very common construct: the need to put one technique behind another in order to solve a business problem. Of course, the AI industry has a buzzword for this concept, and that's composite AI.
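The text-analytics-feeding-machine-learning pattern mentioned above can be sketched as a two-stage pipeline. Both stages below are deliberately toy stand-ins, assumed for illustration: the risk vocabulary, threshold, and report text are all made up.

```python
# Composite-AI sketch: a text-analytics stage feeds a predictive stage.
# Vocabulary, threshold, and sample text are invented for illustration.
import re
from collections import Counter

RISK_TERMS = {"leak", "corrosion", "overheat"}  # hypothetical vocabulary

def extract_features(report: str) -> Counter:
    """Text-analytics stage: count risk-related terms in a report."""
    words = re.findall(r"[a-z]+", report.lower())
    return Counter(w for w in words if w in RISK_TERMS)

def predict_risk(features: Counter) -> str:
    """Predictive stage: flag a report when risk terms pass a threshold."""
    return "high" if sum(features.values()) >= 2 else "low"

report = "Inspection found corrosion near valve 7 and a minor leak."
print(predict_risk(extract_features(report)))  # -> high
```

In a production system, each stage would be a real model (an NLP extractor and a trained classifier), but the composite shape, one technique's output becoming the next technique's input, is the same.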

Finally, as we saw in the pipeline demand forecasting example, AI solutions don't need to be flashy. They don't have to be dramatic. Sometimes the most ordinary, but pressing, business issues can be solved using AI. Let's not overlook the mundane. So as we wrap up, let's recap what we've learned. We've learned that AI mimics human functions, and that there is a set of AI techniques most relevant to mainstream businesses. Machine learning is what makes AI intelligent; it really lays the foundation for artificial intelligence. AI techniques are frequently used in combination, as we just saw with composite AI. Knowledge graphs are emerging as a powerful tool for harnessing enterprise information and knowledge, and acting on it in ways we couldn't before. And solving mundane business problems can be very valuable. So, with that, I appreciate your time today, and I'm going to hand the time back to Julia to wrap this up.

Julia:

Thanks, Glen. So everyone in the audience, we're about to do questions, so please take a few seconds to put your questions in the chat. And if you're ready for more on the topic of AI, check out our blog on AI ethics written by our CEO, Steve Erickson, and Glen's blog series, AI for Business. Also, be sure to sign up for Glen's next webinar on four costly AI mistakes, which will take place on Wednesday, June 8th. The links will be available in this slide deck, which we'll send to all attendees shortly. So now, on to questions. One that we got in is: what is the difference between image classification and object detection? They seem to be the same thing. Glen, you're on mute.

Glen:

Thank you for the question. Image classification deals with an entire image, the entire picture. So if you think of an image as a whole, the example we used was facial recognition with your camera, it acts on the entire picture. Object detection is a way to train a model, a classification model, to look for a specific object within that image and isolate it, if you recall the blue boxes and yellow boxes we showed in the examples. Then we can use that model to identify what that object is. In the industrial example we looked at, we were able to isolate valves and flanges and such, and identify them confidently so that they could then be acted on.
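One way to see the distinction is in the shape of each task's output: classification returns a single label for the whole image, while detection returns a label plus a bounding box for every object it finds. The functions, labels, and box coordinates below are made-up illustrations of those output shapes, not real model output.

```python
# Classification vs. detection, contrasted by the shape of their outputs.
# Labels and bounding boxes here are invented for illustration.

def classify_image(image_id: str) -> str:
    """Image classification: one label for the entire picture."""
    return "industrial_site"

def detect_objects(image_id: str) -> list:
    """Object detection: a label plus a bounding box per object found."""
    return [
        {"label": "valve", "box": (120, 40, 180, 95)},
        {"label": "flange", "box": (210, 60, 260, 110)},
    ]

print(classify_image("plant.jpg"))          # one answer for the whole image
for obj in detect_objects("plant.jpg"):     # one answer per detected object
    print(obj["label"], obj["box"])
```

The boxes in the second output are what the blue and yellow rectangles in the slides represent.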

Julia:

Okay. Another one is, can you use text analytics to classify documents in a document management system?

Glen:

The short answer is yes. There are a number of products that claim to be able to do that; whether they actually use AI techniques to do it is up for debate, and we would need to look at the individual product. But text analytics, in and of itself, is capable of doing that, though it absolutely takes some work.
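As a rough sketch of the idea, a document classifier can score each document against per-category term lists and pick the best match. The categories, keywords, and sample text below are invented for illustration; real products use trained statistical or neural models rather than hand-built keyword sets.

```python
# Hedged sketch of text-analytics document classification.
# Categories and keyword lists are invented for illustration; production
# systems would learn these associations from labeled training documents.
CATEGORY_KEYWORDS = {
    "invoice": {"invoice", "amount", "due", "payment"},
    "contract": {"agreement", "party", "term", "liability"},
}

def classify_document(text: str) -> str:
    """Assign the category whose keywords overlap the document most."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify_document("Payment due on invoice 4417, amount $980"))
```

The "work" mentioned above is largely in building or training those category models and tuning them on the organization's own documents.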

Julia:

Okay. One other question we got is, can you recommend other educational resources to further our learning conferences or organizations?

Glen:

Absolutely. Coursera has some phenomenal AI learning resources, and you can look for those. If you'll send a note, or if we have your information, I can point out some specific ones. I don't have them at hand at the moment, but I've been very pleased with some of the content.

Julia:

Okay. That’s all the questions we have in right now. Any others? Okay, doesn’t look like it. Thank you so much, Glen.

Glen:

Thank you. Thank you, everybody.

Julia:

All right. Thank you. We’ll be sending out an email with the recording and slide deck soon. Bye.
