Watch the replay: Create a radically efficient supply chain with AI
Our first webcast ever in December 2018 was all about creating a radically efficient supply chain with AI. Hosted by Noodlers Jeff Alpert and Jenn Gamble, it was a great primer for using AI in your supply chain. Looking back on it almost a year later, their advice holds up.
Watch the video replay now:
Radically Efficient Supply Chain with AI webcast transcript:
Note: This transcript was generated using NLP AI, then lightly edited for clarity.
Leslie Poston 00:00
Hello, and welcome to the Noodle AI webcast “How to drive radical efficiencies in the supply chain with artificial intelligence.” We’re so glad you could join us! My name is Leslie Poston. I’m the Director of Content and Social at Noodle AI, and I’ll be your host today. First, a little housekeeping. Everyone in the audience should automatically be on mute. If for some reason you’re not, please mute your phone or computer to prevent feedback. We have three modules alongside our presentation today. A Q&A window, a downloadable white paper (you can find it by clicking the yellow Noodle logo), and a Twitter feed. You can ask questions anytime in the Q&A window and we’ll find a break in the conversation to relay them to our presenters, time permitting. We’d love it if you tweeted your notes during the webcast today as well. Our hashtag is #NoodleAIWebcast, all one word. The Twitter console on your dashboard should add that hashtag automatically. Our presenters today bring a wealth of knowledge about artificial intelligence, data science, and the supply chain challenges that you face. Jennifer Gamble is a principal data scientist here at Noodle AI who specializes in the development of AI and machine learning systems for enterprise applications, topological data analysis, network analysis, and the visualization of high dimensional data. She received her Ph.D. in electrical engineering from North Carolina State University and did her undergraduate and Master’s degrees in mathematics and statistics from the University of Alberta. Jeff Alpert is a director within our Enterprise Services Group here at Noodle AI who founded his first business while studying mechanical and aerospace engineering at Princeton University. He then became interested in the intersection of analytics and business strategy, and worked for 12 years as a management consultant servicing the needs of Fortune 100 companies with a focus on supply chain strategy before joining us at Noodle. 
Jeff and Jenn, thanks for presenting today. Why don’t you take it from here.
Jeff Alpert 01:58
Thanks, Leslie. This is Jeff. The subject of the webinar today is how to drive radical efficiencies in the supply chain with AI. And just to let the audience know and to level-set expectations, Noodle will be hosting a series of webinars. Some will go very deep on specific topics. The idea for the webinar today is “where do you start” – a 101 of sorts – to really introduce the concept of AI in the supply chain and AI in the enterprise without going too deep on any one topic. Just to give people an idea of, “Hey, what do I need to do if I want to get started?” We want to give an introduction to how we think about AI and machine learning within the enterprise. Okay. So, the agenda, as you can see: first we’ll talk about who’s actually using AI, then we’ll go into a rough guide for enterprise AI and where it can be applied. Next, a little bit about algorithms (that’s sort of a pet peeve of mine as well), then where to start when it comes to data. Lots and lots of our potential customers bring up issues that they might have, like they think they don’t have the right kind of data, or that their data is no good. So we’ll talk a bit about that. And we’ll talk about what we’ve found to be the secrets to successful AI/ML projects, and then tie it all together at the end.
Jenn Gamble 03:31
So who is really using AI right now? Hi, everyone, this is Jenn. We tend to see that the two main types of companies that are mature relative to the rest of the industry are companies that are super digital. Foundationally, they are tech companies; these are the Google, Amazon, Facebook, Netflix type of companies. There are also some “normal” companies that work in different spaces but were founded and built from the beginning with a focus on analytics. Stitch Fix and Airbnb are great examples of this, where they have an analytics and data science methodology interwoven throughout many of their business practices. Most other companies don’t actually seem to be so deep in the weeds in AI, despite the hype we see in the media. If you feel like you’re behind because of what you see happening in the media, or you see companies around you setting up a lot of advanced analytics institutes within their organizations, that doesn’t necessarily imply that they’re as far into AI as they seem to be.
Jeff Alpert 04:44
Yeah. And then we have lots of friends who work in data science at a lot of those truly digital companies, right?
Jenn Gamble 04:51
Oh, yes, exactly.
Jeff Alpert 04:52
I feel like here in Silicon Valley it’s easy to forget the fact that there are lots of old-line manufacturing companies, or even newer companies with really complex supply chains. And, for a lot of them, there are hype-filled newsletters that come out, magazines, and so on. The other day I saw one with a robot that was picking up a guy in a suit, saying that AI was going to take everyone’s job immediately. There’s just a lot of media hype out there. From our experience it’s sort of like dating in sixth grade. Everyone loves to be on the playground talking about how far they got, but it’s really mostly in the realm of theory at that point. A lot of companies, even with advanced analytics departments, still tell us they’re just dipping a toe into AI. So I guess my whole point is that no one should feel discouraged if they are still dipping a toe, if they’re not sure what to do, or if they feel behind. The best time to start is now, because everyone is still figuring out what’s going to work.
Jenn Gamble 06:21
Yes, absolutely. And so, in terms of getting started, we do find that many executives are not quite sure yet how to think about advanced analytics: where to use it within the organization, what it can do, etc. We tend to see that there are three main classes that we use to think about ways to start applying AI in business. We rank them in terms of easy, medium, and difficult levels. The very easiest is when you already have some current analyses that you’re doing, maybe current predictions that you’re making, whether it’s just some type of statistical model or something more advanced than that. Then you take a more advanced analytical approach, using some machine learning algorithms, to the problems you’re already tackling. A good example of this that we’ve dealt with a number of times is a classical demand forecasting situation. You need to be doing some planning, and you have some current method of making demand forecasts. It might be a linear regression, a statistical model, maybe just estimates based off of year-over-year trends, or you have some bottom-up processes and top-down practices and people are meeting in the middle and hashing things out. Whatever your current demand forecasting process is, you have some numbers that you get. Maybe it’s at the SKU-by-SKU level, maybe it’s broken down geographically. Then you move from that to more advanced machine learning algorithms, maybe an ensemble, tree-based method (this could be random forests or gradient boosted trees), and we’ve also had a lot of good experiences using deep learning approaches in this type of setting. The important thing is that the model can incorporate data not just in a univariate or small multivariate way, but by pulling from data across the entire portfolio, maybe incorporating other external variables as well, lots of time series. The output is the same: you’re just getting a demand forecast at this SKU-by-SKU or geographic level.
But the downstream applications of what you do with those forecasts are the same. Changing the modeling technique, maybe incorporating some more advanced feature engineering and data processing, even when the underlying raw data that’s available is the same, can yield pretty considerable improvements in many cases. We’ve seen examples where clients get a 5 or 10% gain over traditional methods. And the improvement tends to be higher in the prediction subgroups that have more volatile or more intermittent demand patterns; those might be in the range of even 10 to 20% improvement in mean absolute prediction error, for example. So in that easiest setting, you’re taking some analysis you’re already doing, and you’re just applying a more advanced machine learning-style algorithm and data pipeline to the problem.
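To make that “easy tier” concrete, here is a minimal sketch of the kind of upgrade Jenn describes: replacing a last-value forecast with gradient boosted trees over lagged demand. The data is synthetic and the feature setup is illustrative, not taken from any Noodle system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic weekly demand for one SKU: yearly seasonality plus noise.
weeks = np.arange(400)
demand = 100 + 25 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 2, weeks.size)

# Lagged features: the previous 8 weeks of demand predict this week.
lags = 8
X = np.column_stack([demand[i:demand.size - lags + i] for i in range(lags)])
y = demand[lags:]

# Time-ordered split, as you would for a forecasting problem.
split = 300
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# Naive baseline: next week's demand equals last week's.
naive_mae = mean_absolute_error(y_test, X_test[:, -1])

# Gradient boosted trees over the same lagged inputs.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
gbm_mae = mean_absolute_error(y_test, model.predict(X_test))

print(f"naive MAE: {naive_mae:.2f}  gbm MAE: {gbm_mae:.2f}")
```

A real deployment would add cross-portfolio and external features and use walk-forward validation, but even this toy series shows the learned model pulling ahead of the last-value baseline on mean absolute error, the same metric Jenn quotes 5–20% gains on.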
Jeff Alpert 09:22
Yeah, and I think that’s probably the easiest one to think about, right? That’s the place where you look at your business and say, we have a process X, how do we add AI to this process? How can we do this better? And in some cases, using modern methods really will give you good results. Just talking about demand forecasting, having worked on that kind of project here, we’ve seen that there are definitely better results from those sorts of things. But there are some other problems where, unless you’re really approaching them in a new way, you’re probably going to be a little bit underwhelmed. So that’s the second class of problem we want to discuss, which is approaching old problems in really new ways. One of the good examples that we have of this is our app that does fill rate predictions and offers recommendations for what to do about them within a supply chain planning system. I think we have a lot of supply chain experts on the call today, so this might be near and dear to your heart: the idea that a typical planning system is really rigid. It’s not good at differentiating between big risks and small risks; everything’s just based on rules. If there’s an issue in the supply chain, your planners have to figure out what’s important in the moment and what’s not based on feel, based on past mistakes, based on history, based on their own personality, maybe on what they ate that day. The idea is that the systems currently being used are mostly rules-based. There might be some optimization there, maybe not, but for the most part, it’s just really difficult to know what’s important. Your ERP just says, “so this happened” and “if x happened, do y”, which may not be true and may not be important. So planners have to apply a lot of judgment. What we do instead is sort of flip the problem on its head and say, instead of starting from the back of the problem, start from the front.
Let’s do an actual forecast of what we think the fill rate is going to be for a particular node, for a particular product, a particular factory, etc., whatever it is. Let’s start with that. Let’s start with the understanding of what’s actually important. And of course, we build that up from lots of things that are going on inside the business. What this does is it helps you say, okay, rather than looking at the back of the chain and saying “what was my capacity? What was this? What was that?” and building forward, we start with “where’s the risk?” Right? Directly forecasting fill rate, and then backing up to say, “now, what do you need to do to adjust that? Where are the risks?” to give the planner a much better recommendation on what to do next, because we’re approaching the problem from the other direction. That’s a good example of an application of more modern techniques, whereas in the past, directly forecasting was maybe never going to work very well, because either you didn’t have a computer to do it, or you didn’t have data scientists working on the problem in this way, or it wasn’t a generalizable enough thing you could do within your business. So SAP, for example, built something that was more rules-based, and that’s all fine. What we’re saying is there are modern methods now that flip old methods on their head, and that’s where you’re really going to see a bigger gain than only using an older method. So that’s the medium difficulty level, and I think that’s a little bit more difficult to think about. It still involves processes that are new to the organization. The last one, the one that’s very difficult, and one that I think everyone gets excited about, is what everyone thinks the potential of machine learning and AI is going to be. One example of this is solving strategic problems.
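As a rough sketch of what “forecasting fill rate directly” might look like, the toy model below trains a random forest to predict a node-level fill rate from planning features and then surfaces the riskiest node/product snapshots first. Everything here, the feature names, the synthetic relationship, the risk ranking, is invented for illustration and is not Noodle’s actual application.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Hypothetical node/product snapshots; the feature names are illustrative.
snap = pd.DataFrame({
    "inventory_days": rng.uniform(0, 30, n),
    "supplier_lead_time": rng.uniform(2, 40, n),
    "open_order_qty": rng.uniform(0, 1000, n),
    "forecast_demand": rng.uniform(100, 1200, n),
})

# Synthetic "true" fill rate: more inventory helps; long lead times and
# demand not covered by open orders hurt.
uncovered = (snap["forecast_demand"] - snap["open_order_qty"]).clip(lower=0)
fill_rate = np.clip(
    0.6
    + 0.015 * snap["inventory_days"]
    - 0.005 * snap["supplier_lead_time"]
    - 0.0002 * uncovered
    + rng.normal(0, 0.03, n),
    0, 1,
)

# Fit on historical snapshots (here, the same synthetic batch for brevity).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(snap, fill_rate)

# Score current snapshots and rank by predicted risk, so planners start
# from "where is service most at risk" rather than rule-generated alerts.
snap["predicted_fill_rate"] = model.predict(snap)
at_risk = snap.sort_values("predicted_fill_rate").head(5)
print(at_risk[["inventory_days", "supplier_lead_time", "predicted_fill_rate"]])
```

From here, each low predicted fill rate can be traced back through the model’s inputs to suggest an adjustment, which is the “backing it up” step Jeff describes.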
The idea is you can take a strategic issue. For instance, many organizations will set up a strategic issue with many conflicting KPIs, where one group has a KPI of maybe customer satisfaction, another group has a KPI of throughput if you’re in production, and another group has a KPI of fill rate, right? And so when there are trade-offs to be made, how do you do it? You get everybody in a room, and they argue with each other. The idea is that because you give people conflicting KPIs, they’re going to work out some sort of solution. I think the hope is that we’ll be able to do this with interconnected AI systems, right?
Jenn Gamble 14:09
So best case scenario, they actually get into a room and hash it out. In many other situations, they just have conflicting KPIs, and they make decisions completely independently, with totally different incentives, and then have to use the output from each other’s decision-making processes in a way that’s totally not optimal if you were to look at the system as a whole.
Jeff Alpert 14:28
Oh, yeah. You get your finance people saying, let’s take last year, sales needs a certain percent increase. You get your category management people looking at point-of-sale data saying, okay, here’s what I think is going to happen. And then you get your statistical forecast in your supply chain saying something different. None of it agrees because of all the conflicting incentives. Maybe that can be good, but a lot of times it can be bad, and I think people feel that pain all the time. I can give a live example, based on some work that we’ve done on a problem for a factory with a bottlenecked process. We ran an optimization on that process. Part of the issue was the conflicting KPIs, right? It was, do you optimize for throughput? Do you optimize for customer priority? And we asked the question, “What’s the trade-off here? Do we take away some customer priority for some throughput or cost, or whatever else?” No one could really give us an answer to that. So what we did was set the optimization up to just give us a fairly balanced set of KPIs. But the real value, and something that we’re doing right now, which is, I think, the real hope for AI, is where we could say, “All right, let’s set up a sort of supply chain twin.” Let’s set all these incentives up, and let’s really understand under what circumstances certain sets of priorities are more important or less important. Then we can dynamically adjust that optimization, right? Rather than having a rigid set of rules, which is how most systems work, we would actually have different point solutions within the organization giving us information: “this is what’s happening with your customers,” “this is what’s happening with your throughput in the factory.” They’re talking to each other and saying, “Okay, I’m in a situation where I’ve been late with this customer several times in a row. My customer module is saying that’s not a good situation to be in.
Let’s prioritize that.” I think that’s the real vision, right? Where people want AI solutions to go. And we’re constantly optimizing towards that. What do you think the current state of most businesses is right now?
Jenn Gamble 16:45
Not even close to that right now.
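One simple way to picture the dynamic trade-off Jeff describes is a scalarized optimization whose KPI weights shift with context. The sketch below is a deliberately tiny linear program with toy KPI numbers and a made-up weighting rule; it illustrates the idea, not how Noodle’s systems actually work.

```python
from scipy.optimize import linprog

def plan(capacity_hours, late_streak):
    """Allocate machine hours between priority and bulk orders by maximizing
    a weighted sum of two KPIs. The weighting rule is illustrative: a streak
    of late priority deliveries raises the service weight."""
    w_service = min(0.9, 0.4 + 0.1 * late_streak)
    w_throughput = 1.0 - w_service

    # Per-hour KPI contributions (toy numbers): priority orders score high
    # on service, bulk orders score high on throughput.
    service = [5.0, 1.0]
    throughput = [2.0, 6.0]

    # linprog minimizes, so negate the weighted objective.
    c = [-(w_service * service[i] + w_throughput * throughput[i]) for i in range(2)]
    res = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[capacity_hours],
                  bounds=[(0, 80), (0, 80)])
    return res.x  # hours assigned to [priority, bulk]

calm = plan(100, late_streak=0)    # balanced incentives favor throughput
crunch = plan(100, late_streak=4)  # repeated lateness shifts hours to priority
print("calm:", calm, "crunch:", crunch)
```

The point of the toy: nothing about the plant changed between the two calls, only the context signal, and the allocation shifts accordingly instead of following one fixed rule.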
Leslie Poston 16:52
Speaking of customers, we got a question from Twitter that is really two questions back to back. I think this is the perfect time to break in with it. “How much did customers get involved in setting up the plan versus leading them through the process? And how much do you need to know about the customer to create a plan?”
Jeff Alpert 17:10
Yeah, so I’ll speak first; Jenn, you can answer as well. We see a very wide range of expertise within the customers that we deal with. We’ve had several smaller customers who have no analytics group, and they don’t know where to start. They come and talk to us, and they rely very heavily on us. As a matter of fact, it’s one of the things I think makes Noodle a little bit different. I’ll give a little plug for my team and the organization: I work with the enterprise AI services team, which is comprised of our business consultants and supply chain experts. So when a customer doesn’t have a really clear vision of exactly what they want to do, how they want to do it, what data they have, what problem they are exactly trying to solve, we help find those answers. In a lot of the projects that we start, I’d say probably 50/50, it’s not totally clear exactly what we’re trying to solve, or exactly how they’re going to put it into production, exactly how they’re going to use it. My group, along with data science, helps figure that out before we deploy our apps. Then we’ve got a whole other set of customers who come in, and we’re working with one right now, who already have an analytics group. They have great ideas, and they’re looking to us for really creative solutions, but they already know how they’re going to use this, exactly what they want to do. They have a roadmap. So there’s a pretty wide gamut, I would say, and we’re able to work with both those types of customers with a good degree of success.
Jenn Gamble 18:39
Yeah, and the thing that’s really important is that the customers need to be experts in their own business, right? Because that’s the part that really needs to be fed into the entire analytical framing process to get a really good outcome. In a couple more slides, we’ll talk in a bit more detail about how it’s not always just the underlying algorithms that make these applications so useful and bring a lot of business value. It’s also all of the decision making around what is even the right problem to be solving, what is the right information to be surfacing, who are the end users, what are they going to be using this information for, etc. And so it’s working really closely with the business people to make sure that whatever the solution is, whether it’s one that’s already framed up really nicely or one that needs to be discovered through a bit more of an iterative process, the output from it is really usable and is also very high value.
Jeff Alpert 19:37
Yeah, that’s a great segue into the next piece of it. I think a lot of the education and hype in the space right now is all about the individual algorithms. If you do research into “what do I need to know about AI for business,” right, a lot of people are going to give you books, or you’re going to go take courses online, or you’re going to read articles, a lot of which are highly technical. And they’re always going to give almost the exact same examples: neural machine translation, natural language processing, image identification, self-driving cars, things like that. And that certainly helps illustrate how cool the solutions can be. But I talk to a lot of professionals who struggle to understand what that means for an enterprise. They say, “I don’t know what I would do with machine translation.” How do you bridge that gap? I think that just says to us that there’s all of this obsession with the algorithms: I’ve got to learn how a neural network works, and I’ve got to know exactly this whole toolkit of data science tools. We would say, that’s not really the point. As a matter of fact, that’s kind of missing the point. And I feel like that might be part of the reason a lot of people feel like they don’t know where to start. They feel like they have to build up this whole knowledge base of exactly how things work from a technical perspective. But the truth is, most of the algorithms are open source. At most companies that are doing machine learning today, the algorithms are open source; anyone can use them. The secret is not in the algorithm. The secret is in how you solve a business problem using what the algorithms can offer you. That’s the magic.
Jenn Gamble 21:19
Exactly. The phrase I was using before is the analytical framing of the problem. By this, we really mean cleverly figuring out how to approach the problem: what information is or is not important, connecting the solution to the business process, building a solution that the business can actually use and wants to use. I like to judge the success of an enterprise AI application or system by whether the end user ends up using it on a daily basis afterwards. If they’re saying, “I don’t ever want to have to do my job without this thing again,” then that’s a very successful integration of that system. And this matters a lot, because a lot of the workbenches or dashboards that companies are promoting right now are very helpful, maybe, from an analyst’s perspective, or as a data scientist, and can speed up the discovery process or some of this “trying to turn data into insights” process. But if you want to go beyond some nice PowerPoint summaries into something that’s actually deployed as a system that is, like you said, in use every day, then you need that entire piece about “What are we solving? Who is it for? What predictions are we making? What kind of information do we need to put up for the end user along with the predictions, so that they really have the right information in the right context to be able to make relevant decisions off of it?” It’s not just about a model. The models are a key part, but it’s the whole stack: exactly understanding what the algorithms allow you to do that you couldn’t actually do before.
Jeff Alpert 23:10
Yeah, and I think that’s a really important point. Everyone’s so obsessed with the algorithm, but what you use the algorithm to do is much more important. As an example, Jenn and I are actually working on a project together right now with an industrial company, and they have a process with no automatic control on it. The reason for that is because it’s very difficult to model the physics of the process. With typical automatic control, I remember this from mechanical engineering school, you have to model the physics of the process, and then you can set up a controller that’s based on what it believes the outcome from the process will be, and make sure that you’re getting it. In this case, you can’t model the physics in any normal, traditional way, so what we’re doing is using a variation on neural networks. The reason we’re using those is because they’re able to model much more nonlinear, very difficult to model processes. So we’re able to feed them data off the line. The hardest part of this whole thing is figuring out where the signal is, right? Yes, the algorithm will take care of the modeling of the physics once we identify the signal, but we’re working right now on trying to identify the frequency of the motors, for example. And for me, knowing what the model can handle has allowed us to sort of open up this kind of control for the factory. So that is an example of where we’re using modern techniques in a place where you literally could not have done this in the past.
Jenn Gamble 25:04
Yeah, I mean, even 10 years ago, nothing.
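For readers who want a feel for why a neural network helps here, the sketch below fits a small multilayer perceptron to a made-up nonlinear process response driven by a motor-frequency signal, the kind of relationship Jeff says resists a hand-written physics model, and compares it to a linear fit. The “sensors” and the response function are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 2000

# Invented sensor signals; "motor_freq" stands in for the kind of signal
# the team describes hunting for in the line data.
motor_freq = rng.uniform(10, 60, n)
temperature = rng.uniform(20, 90, n)

# A nonlinear process response with a resonance-like peak and an
# interaction term that no simple linear model captures.
quality = (
    np.exp(-((motor_freq - 35) ** 2) / 50)
    + 0.01 * temperature * np.sin(motor_freq / 5)
    + rng.normal(0, 0.02, n)
)

X = np.column_stack([motor_freq, temperature])
X_train, X_test = X[:1500], X[1500:]
y_train, y_test = quality[:1500], quality[1500:]

linear = LinearRegression().fit(X_train, y_train)
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X_train, y_train)

lin_r2 = r2_score(y_test, linear.predict(X_test))
mlp_r2 = r2_score(y_test, mlp.predict(X_test))
print(f"linear R^2: {lin_r2:.3f}  MLP R^2: {mlp_r2:.3f}")
```

In the real project the hard part is upstream of a fit like this: identifying which raw signals (motor frequency, in their example) carry the information at all.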
Jeff Alpert 25:08
Yeah. Yeah. Right. On to the next topic. So, you’ve listened to us talk a bit about our opinions on what makes a good project and how to think about this. A lot of people have questions on where to start. So okay, I’m sitting in an enterprise right now, and someone told me to go find some cool AI project. How do I think about this? Where do I start? What do you think about that?
Jenn Gamble 25:38
In the previous couple of slides, we were talking about these easy and difficult versions of a lot of projects. And definitely, people tend to want to start with easy things and do very small proofs of concept. But a place where that can get you into trouble is when you’re trying to justify effort with results. If you choose something very small that really has no financial impact, it can sometimes be difficult to get the relevant, you know, CXO on board. They’ll say, “Yeah, you did this, and it seems like it worked, but there isn’t any larger financial value to this, so why would we expand it?” So actually, choosing something that’s more of a middle ground often seems to be a better approach than starting too small, because you want something that is successful and, once you get it up and running and it’s working, has really relevant impact in and of itself. Then, on top of that, you want to be able to build off of the things that you’ve already built, to have it be a stepping stone onto larger, more complicated problems. You definitely don’t want to start with something that’s a massive, thorny kind of thing. You need to layer in the basic levels of data quality and data pipeline, and a lot of the derived tables and processes and models that you build with these medium-sized solutions you can then start to layer on top of each other. As we mentioned before, if you have point solutions, maybe in different departments, and then you start getting them to talk to each other, you can actually have these decision-making processes be based off of the same underlying data understanding. But it’s impossible to just skip straight to the super complex problem without building up this relevant success in medium-sized problems.
The most important thing with each of the medium-sized problems is that you don’t ever want them to be only a stepping stone, where you say, “Oh, this is going to be relevant once it starts getting hooked up to these other things.” We want each one to be standalone, so that there’s clear value in each piece that you’re doing in and of itself, and it’s also building towards a more complicated, interconnected whole.
Jeff Alpert 28:01
We would not recommend the “if you build it, they will come” sort of mentality for AI problems. What that’s going to lead to is someone working for a very long period of time with very little impact, and then someone else asking, “What did I get for what I paid for?” And there’s not really an answer to that beyond, “Well, this is going to be valuable in the future.” I mean, you know, it might be alluring to say, “We’ll just wait for it, we’re going to get the value later,” but the truth is, that’s just not how a business works, right? Everyone wants to understand, “Okay, so is this working? Am I getting value along the way?” So what we recommend is starting with those use cases that are valuable in themselves, maybe a little bit more complicated. As a matter of fact, you might feel like, okay, well, we have a couple of processes as part of our business that aren’t quite perfect yet. There’s this mentality that everything must be perfect in order to start a project. We would say, not really. As a matter of fact, these projects can be used to uncover issues, like we were talking about with the optimization challenge where they had the conflicting KPIs and no one could tell us what the right answer was. These projects can be used to highlight those things and fix those things along the way as well. So if you start with a project that’s valuable in and of itself, that you can build off of, we see a lot of success. The first ones here are sort of the IoT, industrial plays, like predictive and fleet maintenance, and predictive quality control. Then, when you get into supply chain, we’ve seen really good, target-rich environments: again, supply network risk mitigation, where we’re talking about fill rate prediction. What do I do? How do I change my plan to be able to deliver and make sure I don’t go out of stock?
That’s something that a lot of companies struggle with, and we can definitely help. With demand intelligence and its downstream applications, we’ve seen that we’re able to make pretty big gains, and that’s one that can stand on its own, because you can get value as soon as you tie a good prediction into a downstream application. But I will say that a prediction in and of itself is useless, right? If you’ve got a great, perfect demand prediction and no one’s executing on it, they’re not doing anything with inventory, not matching anything with fill rates, that’s another issue. You’ve got to connect it to value, so I’ll reiterate that point again. We’ve had success as well in on-time order prediction: how do I make my customers happy? What can I commit to? And energy prediction and shaping. Again, you can see how each of these will be standalone; each of these will deliver value on its own. Then, when you begin to connect those things together: What’s going on in my factory? Is my equipment going to go down? How will that affect my fill rates? Once these things start talking to each other, that’s when there’s really, really big value, and it’s sort of our passion here. But the truth is, you have to take baby steps along the way.
Jenn Gamble 30:55
Exactly. The question that people always have when we start working with them is, “Yeah, but my data is not great. I’m really pumped about the idea of doing this project, and we have data, you know, we’re running our business off of some data that we have, right? But we feel like the quality is not that good.” Or maybe it’s in a bunch of different systems and they’re not connected to each other in any way: “We have this database over here and this database over there, and we have no, you know, common join keys for these different processes.”
Jeff Alpert 31:29
“And you want me to build a whole data lake and spend three years doing that before I hire you to help me solve my problems?” Right, that’s always a question that we get.
Jenn Gamble 31:36
Yeah. And so our rule of thumb is: if you have data that you’re running your business on now, it’s generally correct, and it’s in a database or some type of structured format, then that’s enough to get started. Of course, it’s better if you do have things in a more sophisticated state. And maybe you can’t get started yet if you’re working primarily off of pen and paper, for example; then we’d say you’re maybe not at the stage where you want to be getting started with this. There are going to be different levels of lift required to set things up. If the decisions that you make require data from a bunch of places, then there will be a little bit of extra work consolidating those pipelines together, and that can be part of your initiative to build this product. And as Jeff was mentioning earlier, these projects tend to get, you know, some attention within the company; maybe they have some kind of executive-level oversight. So when you’re working in an area that does have some basic underlying process issues that might need fixing, these types of projects or deployments can actually be a pretty good opportunity to encourage the organization to fix some processes that maybe wouldn’t get fixed without the impetus that they need to feed into this type of machine learning application.
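On the “no common join keys” worry specifically: often a usable key can be derived rather than waiting for a finished data lake. A minimal pandas sketch, with made-up system names and SKU formats:

```python
import pandas as pd

# Two hypothetical source systems that were never designed to talk to each
# other: the ERP uses dashed SKU codes, the warehouse system uses bare ones.
erp_orders = pd.DataFrame({
    "sku_code": ["AB-1001", "AB-1002", "CD-2001"],
    "open_qty": [120, 40, 75],
})
wms_inventory = pd.DataFrame({
    "item": ["AB1001", "CD2001", "CD2002"],
    "on_hand": [300, 10, 55],
})

# Derive a shared key by normalizing each system's identifier.
erp_orders["key"] = erp_orders["sku_code"].str.replace("-", "", regex=False)
wms_inventory["key"] = wms_inventory["item"]

# Outer merge with an indicator column, so records that exist in only one
# system are surfaced instead of silently dropped.
combined = erp_orders.merge(wms_inventory, on="key", how="outer", indicator=True)
print(combined[["key", "open_qty", "on_hand", "_merge"]])
```

The indicator column makes the mismatches visible, which is exactly the kind of underlying process issue Jenn says these projects tend to flush out.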
Jeff Alpert 33:03
You won't know until you start, right? Absolutely. You can think all day and hypothesize about which parts of your process need to get fixed, but you really need to just get started and begin to understand what you need to fix within your organization. We hear people saying, "Well, my data is here, my data is there." We would love it if a customer has a data lake, but the reality is, we have data scientists and engineers; we're a full-service company. In the same way we talked about the services group helping solve problems before, our data engineering group helps collect all the data, design the pipelines, and work with the customer on the design of their data lakes. We've been along that journey many, many times, so it's not something that should prevent you from taking a step forward and trying to solve a problem. You might make a different decision about whether or not you want to use an outside vendor depending on your current level of data or analytical maturity and your current internal skill sets. Building out these types of things definitely requires skill sets like data engineering, data science, and this kind of business process understanding. If you want a UI component, a user design and user experience skill set can be really helpful there too. Some companies do have these internally and can bring them together into specific teams to deliver these projects, along with, obviously, software engineering from a legacy systems integration perspective.
Jenn Gamble 34:40
But if that feels a little daunting, that's when it's really helpful, I think, to work with an external vendor.
Jeff Alpert 34:47
Yeah, it makes sense. So, okay, the next thing we'd like to talk about is what makes an AI project successful, and this is based on our experience working with our customers over time. I just want to give a quick list of five things here. Number one is just get started. You're never going to see success if you never actually get started, even along the lines of planning. So number one, you just have to get started. Number two is make sure you've identified executive sponsorship that's both on board and has a long view of what this is going to do for the business. And when I say that, I don't mean a rosy view that this is going to transform our whole business overnight and we need to rethink exactly how everything works. I mean the idea that, look, this is a journey; we're going to start with some point solutions that are going to start delivering value over time. One thing I've certainly learned is these are not the easiest things to stand up. You don't just dump the data into a cement mixer and get an answer out of the other end. You should have realistic expectations that this will deliver value, and that the value will increase over time as the algorithms learn and as you mature. The point is, we're thinking about problems in new ways; that's one of the points you made earlier, right? The really successful projects turn old problems on their head, and when you're doing that, there's going to be a learning curve. So make sure executives are on board with the idea that we're approaching things in brand-new ways, and that it's not going to be, "Oh, yes, I absolutely love change management," because people are probably going to have to make adjustments to how they do their jobs. Next point: just because you have executive sponsorship does not guarantee success.
I think a lot of people in large organizations know this: just because an executive says to do something doesn't mean that people are actually going to fall in line. So our next point is that the second line of business managers, the people who are actually going to be executing this, need to be on board, understand the value, and be committed to delivering the results. The best thing we see is when someone builds the expected results into their budget; they're fully committed to it. People are not willing to do that unless they really believe there's value there. Next, define what success looks like at the outset. Like we said, the "if you build it, they will come" sort of mentality does not work. Be very, very clear: what are the KPIs we'd like to see move, and by when? What does success look like for us? Be realistic about it, but also be clear, so that we know when we're done with the first phase.
Jenn Gamble 37:31
And what decision-making processes do we expect to change and improve through the process? Exactly.
Jeff Alpert 37:36
Yeah. And then the last one is a clear line of sight to value and adoption, like what Jenn was saying before. The absolute best thing that happens when we deploy an application is we have users of that application saying, "I don't know what I would do without this." They really love using the application, they've adopted it, and we're really beginning to see some value there. So identify what's in it for someone to adopt the application, and what value we expect out of it. Are we going to be reducing scrap? Reducing our out-of-stock rates? Reducing inventory? Make sure you know exactly which KPIs you want to move, and do your best, which can be very difficult, to attribute the movement of those KPIs to the entire process you go through as you deploy these apps. So those are our five things, number one being, of course, just get started, that we've seen precede very successful projects.
Jenn Gamble 38:31
Yeah, and this whole approach of being really focused on the end users, and the decision-making processes they're going to use the applications for, or the output from the machine learning or AI systems, doesn't have to be something that's 100% decided before you get started. We definitely advocate this kind of minimum viable model approach, or minimum viable system. You do want to get the right people, as early as possible, making the decisions about what we're building, who it's for, and what types of predictions we want to make, and then you can start building out a baby version of that system as early as possible. Before you even have the high-powered algorithms and all the details of the advanced feature engineering, or whatever semi-complicated stuff will eventually be going on under the hood, you can build a baby version that's maybe just using some of the basic input data that's available and some simpler algorithmic methods, but that actually connects into systems and surfaces things to people, saying, "Okay, this is the type of output I would give you. What would you do with it?" Exactly. And then you can start layering on requirements: "It's not good enough yet; we need to make sure we're incorporating these other features," et cetera. Yeah, Leslie?
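(Editor's note: as a hypothetical illustration of the "minimum viable model" idea, not code from the webcast, a baby version of a demand forecast for a planner could be as simple as a trailing moving average over recent sales, surfaced for feedback before any advanced feature engineering is layered on. The function and variable names here are assumptions.)

```python
# Minimal "baby version" of a demand forecast: a trailing moving average.
# weekly_sales is a list of historical weekly demand figures (hypothetical).

def moving_average_forecast(weekly_sales, window=4):
    """Forecast next week's demand as the mean of the last `window` weeks."""
    if len(weekly_sales) < window:
        raise ValueError("need at least `window` weeks of history")
    recent = weekly_sales[-window:]
    return sum(recent) / window

# Surface a simple prediction to a planner and ask: "What would you do with this?"
history = [120, 135, 128, 140, 150, 145]
print(moving_average_forecast(history))
```

The point of starting this simple is that the plumbing (connecting to source systems, showing output to users) gets exercised early, and the baseline can later be swapped for a more sophisticated model without changing the surrounding process.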
Leslie Poston 39:52
Hi, we have a couple more questions from the Twitterverse. One is: how do you talk this through with managers who think the project could threaten their jobs? And related: how do you get managers engaged?
Jenn Gamble 40:10
I would say that these sweet-spot applications we're talking about are often the ones where the people using them are managers, and the applications make their jobs actively easier and better. A lot of the time, people have decisions that they're forced to make on a regular basis in their job that they feel they don't really have sufficient information to make in an informed way. At many of our clients, the people who end up using these applications are just so excited when they come out. They're like, "Oh, this is great, I'm going to be able to do my job so much better now." Those types of situations are often the ones people get really excited about, and then there's not so much...
Jeff Alpert 40:56
Yeah, tension. And I would also say, and you went through this with one of our customers before: when you first deploy an application, it's not like the day we deploy it, it's going to totally blow up a process or dramatically change how things work on day one. There's always a learning period, or workshop period, where, like Jenn said, the whole point is to make that person's job easier and have them make better decisions. For instance, if we're giving a recommendation to a planner, or someone who's providing, say, price quotes, they're not necessarily going to trust the system on day one. It sits next to the person, and they look at it and say, "Okay, I'm going to use this information to make better decisions." Maybe that's where it ends. And then you see that this person is making better decisions; they don't have to go between eight different screens to find the data they need. All their recommendations are right there, and that helps them do their job. That makes for a successful project, rather than this big, scary, hairy "we are going to completely replace you." That's just not the reality of how these applications work.
Jenn Gamble 42:06
And the parts that actually do get automated away are often the parts people like the least, because they don't really require much critical thinking or human-intelligence-style decision making. It's more like having to go through the same rote steps a bunch of times, and people are happy to not have to do those parts of their jobs anymore.
Jeff Alpert 42:26
And a lot of the UI design we do is with those people. So I understand the question; I understand there might be people who are not bought in, or who are scared about what this means for them. But I think once they begin the process, at least from what we've seen, everyone sort of understands: "Okay, maybe this is a little different from what I thought it was going to be. It's still extremely powerful, and I'm going to get a lot of value out of it, but it's not necessarily a major threat." It's a threat to the boring parts of their job, probably, where we're automating and giving good recommendations that they otherwise would have had to piece together from who knows how many different sources. It's not going directly at the core of what they do. And I think once you talk to people about that, those fears begin to go away.
Jenn Gamble 43:09
And one other thing I would recommend: whoever the end users of the system are, or the managers for the department the system applies to, you want to have them describe what they want. But then don't have whoever's building the analytics for the system just go off and build it completely independently and come back when it's done and say, "Here, ta-da," because usually there's a good amount of iteration and feedback required. So whether it's your own internal analytics team or an external vendor you're working with, a regular communication, feedback, and iteration process between them and the end users during development is really critical, I would say, both for buy-in purposes and to ensure the end solution is as useful and usable as possible.
Jeff Alpert 44:00
Yeah, our supply chain application is made to be used by demand planners, not to replace them. It's meant to make their jobs easier and their decision-making processes better. It was designed with those people in mind; it was not designed to replace those people. I'm not going to lie and say that we never encounter resistance within organizations; we absolutely do. But a lot of the time it's either misplaced fear or just typical organizational politics, where they say, "The analytics guys are pushing this solution on me and I have my own vendor that I like," or something like that. So it's usually more political in nature than anything else. Once you describe the actual solution to people, they get much more excited once they hear about it. And like Jenn said, we work with them from the start: anyone who's going to be using it on a day-to-day basis is brought in at the very beginning and helps us think through it. So, we're getting towards the end here. What should you take with you today? We would say the three action items for you as a supply chain professional or enterprise manager would be:
1. Shortlist your most promising use cases.
2. Identify the data you’ll need, even if it’s not perfect.
3. Find your keys to success. What do you think the value of this could potentially be? Who are the people that are going to be involved in adopting this? What processes may need to change as you go forward? What technologies are involved? Maybe you have a big KPI this needs to move, or a legacy ERP system in place it has to play with. You have to really think about the whole ecosystem, because that can stifle projects big time. Often the IT group already has lots of ideas about what they want to do, and the S&OP team has something different. You really have to think about the technologies involved as well.
And then our last point is just get started. 2019 is definitely the perfect time to start this; no better time than now. Just do it, just get started. Okay, I think we've come to the end. Leslie, are there any outstanding questions that came through at the last minute?
Leslie Poston 47:43
We had one last-minute question just come in, and we have a couple of minutes, so if anyone listening wants to send a question right now, before we close out, you still have a chance. From one of our attendees: "Can you estimate time to value for a particular project?"
Jeff Alpert 48:02
Yeah, this is something we deal with every time we scope a project; it's a question that always comes up. So yes, that is absolutely possible, and I think it's actually necessary to do, even if it's not going to be perfectly precise. We absolutely try to give a customer an idea of what they'll get along the way. Oftentimes you start with a pilot, so you're going to get value on a small side of the business first. Then, as your accuracy improves, as your models get better, as you feed in more data, and as adoption goes up, things get better, and we track that along the way. But it's not the kind of thing where you tell executives, "Hey, just wait for it, we'll deal with it in the future." That's completely unacceptable within an enterprise. You have to provide an estimate of time to value, and it is generally possible, especially if it's an app we've deployed more than once; many times we can give some sense based on our experience.
Jenn Gamble 49:08
And then you can break it down: how long does it take to get the data into a usable state for modeling? How long does it take to go from there to actually having both the model and the system up and running in a production environment, where the output is in a usable state for the end users? And then, once the end users are actually taking in the information and starting to use it to inform their business processes, do you see value from day one, or is there some type of ramp-up or cumulative impact that happens over time? In general, we would say you can't be promising value in 18 months or two years from now. It needs to be kept within a year, or ideally closer to six months from project start; that's a good rule of thumb. And that original step of getting the data into a usable format can be somewhat variable: sometimes you can go in and get started on modeling right away, and in other cases there's a heavier lift at the beginning, and that can change timelines a bit.
Jeff Alpert 50:15
Yeah, and one thing we find as well is that we can often deliver value along the way, just as part of the process. You're going to be exploring data; you're going to be finding things that are interesting. You can begin making process changes right away, and you can begin having discussions about what other types of things need to be collected: "Hey, why is this done this way?" A lot of the time we're having weekly, bi-weekly, or monthly meetings with the people we're working with, and they want to know what we're finding every single time. They gain...
Jenn Gamble 50:48
...insights into their current processes, yeah. And even before the system is actually up and running, you can still make some one-time batch predictions that they use on an interim basis. So it's always building over time, delivering value at each intermediate stage, but then there's the big milestone of having a system up and running and delivering ongoing value on a daily basis.
Jeff Alpert 51:14
Yeah, I would caution against making promises that are too grandiose, but if you don't believe your application is going to deliver value, why are you building it in the first place? So when we talk about what makes a good machine learning problem: if you can't identify what the value will be and roughly how long it will take to get there, that might not be the best problem to solve right now, because people aren't just going to wait for you to fail repeatedly.
Leslie Poston 51:50
Right. So I think we have time for one more question; one did just come in, and after that I'll close it out. The question is: can you talk about your experience helping companies with work process or change management issues, to ensure the AI solution actually gets adopted and used on a regular basis?
Jeff Alpert 52:10
Sure. So, Jenn, do you want to talk about things you've built that customers began using, and how that worked in practice?
Jenn Gamble 52:16
Yeah. Definitely, the part we mentioned earlier about having ongoing discussions throughout the development process makes a huge difference in terms of ensuring that the final product is very usable, and also very used. In general, don't expect that the very first version of the output is going to be the final state everything lives in. So whether you're on an internal analytics team trying to build something for people within your company, or you're the sponsor for a project that's getting started, you're always working at every stage to ensure you have as good an understanding as possible of who's going to be using it, what decisions will be made off of it, and how it will integrate into their existing decision-making processes or existing systems. Then, once that beta version comes out, you typically want some kind of heavy feedback period, where people actually use the information and give feedback like, "Oh, when I see this particular prediction, I really wish I could see this other piece of context to help with the decision I'm making." Typically you can then have some pretty tight iterations and quick wins in terms of the initial improvements that get it to a super usable state. And as you carry on over time, there will be bigger requests for more major features and updates: "Oh, could you get me this? Could you get me that?" That's how these types of systems and processes build on themselves and start to connect with other point solutions being built within the company.
Jeff Alpert 54:10
Yeah. And I would say, beyond that, for my part of the organization, the ES Group, we've found that the change management process, the adoption process, is maybe the most important part of the whole thing. You can have the best model in the world, you can make the best predictions in the world, and if people don't know how to use it, or can't use it, it's useless. We learned over time that the algorithms and the models don't stand alone. So, like Jenn said, we involve people from the very beginning. That's why we had those recommendations: get executives involved, and make sure the people who are actually going to be executing this are involved from the very beginning, because if they're not bought in, your project's not going to be successful. That's where the change management has to come from. And honestly, there are some projects where it's obvious; I'm not really needed that much on projects where the company already says, "My planning process is X; now you provide me a better plan and I'll just use it." Okay, fine, let's talk about who's going to use it, and then it goes away. But on other projects, this is what we do: there's a whole parallel track on an AI project, where the data science team is following the path to build a model, and my group is talking to potential users, understanding processes and challenges, to surface those as early as possible. If you get to the end of the modeling stage, deliver your model, and people say, "This is great; I don't know what to do with this," that's a failed project. That's a project that has gone wrong, and we see that sometimes; internal groups can get beat up for that reason. So we'd say it's absolutely critical to understand what change management will be required from the outset and begin working on it from the beginning.
Jenn Gamble 56:02
And often that type of situation happens because you talk to people at the beginning who are theoretically end users, and they say, "Oh, if you could give me these predictions, that would be fantastic." Then you take them at their word and go build that, but maybe they hadn't quite thought it all the way through. So in addition to knowing what information somebody wants to have, knowing how they're going to use it is really key, because sometimes, if they can't answer that question, then even once you've given them that information, it might not be very useful.
Jeff Alpert 56:33
Yeah, we just had a discussion with a customer a couple of days ago. They said, "If you could give me X, it'd be great." We thought about it for a few days and asked, "What would you actually do with that information?" And then we said, "I don't think that information is going to be helpful; what you really want is this other thing." They said, "Oh, actually, you're right." That's something that happens all the time. So you've got to make sure you have someone who's looking at the entire process.
Leslie Poston 57:01
Okay, that's fantastic. Jeff and Jenn, thank you so much for doing this webcast. It has been extremely informative. We have been recording it, and it will be posted to our website, so people who had to drop off early or were unable to attend will be able to get the recording. Again, if you're on here live and you click the yellow Noodle logo in the top left of your screen, you'll be able to grab our latest supply chain white paper. And if you have any questions after the webcast, just tweet us using the hashtag (we monitor Twitter frequently), or email us at firstname.lastname@example.org at any time, and we'll make sure someone answers your question and helps you. Thank you so much for coming.
Want more AI in your supply chain?
- Atlas, Noodle.ai’s Machine Learning (ML) Framework Part 2: Design Premise & Architecture
- Experiencing Supply Chain Blindspots?… Enter Athena Insights
- Atlas, Noodle.ai’s Machine Learning (ML) Framework Part 1: Challenges With Building AI Applications
- Selling Certainty in an Uncertain World: Why Noodle.ai is a “Top 15 Startup to Emerge Stronger from the Crisis”
- The Marketing of Industry 4.0: The Hype Cycle Ends Where Reality Begins