Webcast replay: Noodle.ai sits down with Intel and Forrester to talk smart factories
We sat down with Noodle.ai’s CTO Dr. Ted Gaubert, Forrester’s Mike Gualtieri, and Intel’s Rick Lisa to talk about how artificial intelligence can create a learning factory in our webcast “The Future of Manufacturing is the Self-Driving Factory.” CMO Gail Moody-Byrd moderated the chat.
Transcript of “The Future of Manufacturing is the Self-Driving Factory”
(Unedited, as produced using Otter.ai transcription service)
Gail Moody-Byrd 00:02
Hello, and welcome to our new webcast, The Future of Manufacturing is the Self-Driving Factory. This will be a conversation between some of the best thought leaders in AI, machine learning, and IoT, discussing what’s required to enable smart factories that fully leverage the data you’re gathering, making sure that data is providing measurable business value for your operations. My name is Gail Moody-Byrd and I’ll be your host today; I’m the CMO at Noodle. A quick reminder, there will be an opportunity to ask questions at the end, and you will get a webcast replay link 24 to 48 hours after the webcast. So without further ado, let me introduce you to your speakers today. First, from Noodle: Dr. Ted Gaubert is the chief technology officer and co-founder of Noodle.ai, where he leads a world-class team in developing the company’s core technologies. He’s a globally recognized technology thought leader, an innovator, and a disruptor. When he founded Noodle, he had a vision about how enterprise AI would enable the next generation of digital transformation. Noodle has been ranked a top company by LinkedIn, named a Gartner Cool Vendor, and received many other awards. So we’re eager to hear about Ted’s vision for the self-driving factory. Our second speaker, Mike Gualtieri, is a VP and a principal analyst at Forrester. He serves application development and delivery professionals at Forrester, and his research focuses on software technologies, platforms, and practices that enable technology professionals to deliver digital transformations. Mike also leads an expertise area at the intersection of business strategy, AI, and innovation. Last but not least, we have Rick Lisa from Intel. At Intel, Rick is the director of IoT sales and marketing. He has deep knowledge and experience selling in B2B environments across multiple channels.
He’s a thought leader within Intel on the subject of IoT, and he has a lot to say about that subject. He’s an experienced component, systems, and solutions sales executive with proven skills in operational improvement, systems of execution, and OEM and strategic partner alliances. So there you have our speakers for the day. And we’ll kick off our conversation, which is really going to be a three-part interchange between our three experts. We’re going to start with the topic of the self-driving factory. So Ted, I’d like to kick it off with you by asking: what is the self-driving factory? And how do you think about this topic?
Ted Gaubert 03:07
I think one way to think about it is: how do you build a factory that continuously learns and improves over time? Right, so that there are various levels of autonomy in the way that things happen within the factory and its execution. Just to make sure we’re clear on how we’re defining it: if we look at the Purdue reference model, it’s everything from level zero up through, effectively, level five. And just for the audience, if they’re not familiar with that model: level zero is usually at the component level. Level one is usually the programmable logic controller, the process level. Level two is usually around the HMI layer, or supervisory control. Level three, on the IT side, is manufacturing execution systems. And levels four and five are typically your ERP-style applications. The reason that’s important in the context of the self-driving factory is that we see opportunities not only in terms of optimizations at each level, but in how those levels interact with one another in very complex ways, and being able to quickly understand the trade-offs, because many things that happen locally in the process can impact other things at different levels. You know, for instance, if there is a process problem, that would impact scheduling, you know, scheduling and product mix. So when we talk about the autonomous factory, it’s not just in terms of component health or product quality, but really, you know, looking at all levels, so you get a higher level of automation and learning and higher-level decision making.
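To make the level structure Ted walks through concrete, here is a minimal Python sketch of the Purdue-style automation stack. The level names and descriptions are illustrative glosses of what is said in the transcript, not an official mapping from the standard:

```python
from enum import IntEnum

# Rough sketch of the Purdue reference model levels Ted describes.
# Names and descriptions are illustrative, not normative.
class PurdueLevel(IntEnum):
    COMPONENT = 0    # sensors, actuators, individual components
    CONTROL = 1      # PLCs: programmable logic controllers
    SUPERVISORY = 2  # HMI / SCADA supervisory layer
    OPERATIONS = 3   # MES: manufacturing execution systems
    ENTERPRISE = 4   # ERP-style business planning applications

def describe(level: PurdueLevel) -> str:
    """Return a short description for a given automation level."""
    return {
        PurdueLevel.COMPONENT: "field devices and components",
        PurdueLevel.CONTROL: "programmable logic controllers",
        PurdueLevel.SUPERVISORY: "HMI/SCADA supervision",
        PurdueLevel.OPERATIONS: "manufacturing execution systems",
        PurdueLevel.ENTERPRISE: "ERP and planning applications",
    }[level]

print(describe(PurdueLevel.CONTROL))  # programmable logic controllers
```

Ted's point is that optimization opportunities exist both within each level and in the interactions across levels, for example a level-one process problem rippling up into level-three scheduling.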
Gail Moody-Byrd 05:03
Okay, thank you. So based on what Ted has done in terms of introducing the topic, Mike, do you have anything to add about your thoughts on autonomous factories and what it takes to build them?
Mike Gualtieri 05:18
Yeah, so I think when you think of AI technologies and how they can apply, they can apply at all those levels that Ted mentioned. And when you actually think of the opportunities, there can be hundreds, if not thousands, of different models that come into play in various aspects of the factory. And when I think of self-driving in manufacturing, I also think of self-healing as well, which is, you know, very different from a car; once it crashes, it crashes. But factories can have all kinds of production problems, process issues, things that come up, and I think AI can also help there. Part of being self-driving is also being self-healing. The other thing I think about is the real-time nature of the factory. For a lot of issues that occur, overcoming them has to do with the speed at which you can make the decision. So when we think about the implementation of artificial intelligence, it may have to be pushed down to IoT devices, but there would certainly also have to be something that can coordinate across devices. And at Forrester, we have something we call perishable insights. These are insights that you have to act on; they have a shelf life, right? And in some instances that shelf life is seconds, and in others it’s an hour. And what I’ve seen largely in the IoT world is a lot of companies collecting the data for later analysis. But I think to create a self-driving factory, that’s not good enough. You still have to do that, because you need to make some tactical and strategic changes by analyzing that data offline, but you also have to analyze it in real time, and then make those real-time decisions and course corrections.
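A minimal sketch of the "perishable insights" idea Mike describes: an insight carries a shelf life, and acting on it after expiry is worth far less than acting while it is fresh. The class and field names are illustrative, not any Forrester or Noodle API:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

# Sketch: an insight with a shelf life, per Mike's "perishable insights".
@dataclass
class Insight:
    message: str
    shelf_life_s: float                               # seconds before expiry
    created_at: float = field(default_factory=time.monotonic)

    def is_fresh(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        return (now - self.created_at) <= self.shelf_life_s

def act_on(insight: Insight) -> str:
    """Act in real time while fresh; otherwise archive for offline analysis."""
    if insight.is_fresh():
        return f"ACT NOW: {insight.message}"
    return f"EXPIRED: {insight.message} (archive for offline analysis)"
```

An insight with a shelf life of seconds would be acted on at the edge; one with an hour's shelf life could be routed through a coordinator, matching Mike's point that both real-time and offline analysis are needed.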
Gail Moody-Byrd 07:13
Yes. And something that I think we want to get to is AI at the edge and in controllers. And Ted, I know you want to say something about that.
Ted Gaubert 07:23
Yeah, I think that’s a great point, Mike. Many times when people are talking, at least today, I see a lot of nomenclature around computing at the edge. Edge AI today is really what you described, which is a lot of data collection at the edge, with the processing happening somewhere else. But I see the next step in the journey as really the ability to push AI back down to the edge, where it is able to take real-time data and make real-time decisions effectively on a lot of the context that’s happening. Because, as you rightly pointed out, many times there is a very short shelf life to the data, and effectively any time you’re getting into the world of, you know, closed-loop process control and those optimizations, you have to act on it in very short order. So there’s a whole world actually developing around what I call edge AI solutions, or AI at the edge. And with it comes a number of other challenges. So, you know, it’s not just that the model may work the first time you put it in; how do you then deal with model drift over several weeks? How do you deal with the security concerns if you have a model that needs to be retrained at the edge? How do you ensure integrity for version two and version three? How do you handle rollback? There’s a whole new paradigm of security concerns when you’re actually getting these deployed and they’re interacting. So how do you ensure safety? I think that’s the next evolution. We’re seeing a lot of that play out as people realize there really is a short shelf life to the data, and there needs to be the ability to actually have that intelligence right at the edge. In doing so, there is a whole new set of technologies that have to be developed to make sure that all of that process can happen in a secure way.
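Ted raises version integrity and rollback for models retrained at the edge. Here is one minimal way to sketch that concern: a registry that fingerprints each deployed model payload and supports verification and rollback. The registry class and hashing scheme are hypothetical, shown only to make the idea concrete:

```python
import hashlib

# Sketch of edge model versioning: deploy, verify integrity, roll back.
class EdgeModelRegistry:
    def __init__(self):
        self.versions = []  # list of (version, payload, digest)

    def deploy(self, payload: bytes) -> int:
        """Record a new model version with an integrity fingerprint."""
        digest = hashlib.sha256(payload).hexdigest()
        version = len(self.versions) + 1
        self.versions.append((version, payload, digest))
        return version

    def verify(self, version: int, payload: bytes) -> bool:
        """Integrity check: does the payload match what was deployed?"""
        _, _, digest = self.versions[version - 1]
        return hashlib.sha256(payload).hexdigest() == digest

    def rollback(self) -> int:
        """Drop the newest version and fall back to the previous one."""
        if len(self.versions) > 1:
            self.versions.pop()
        return self.versions[-1][0]
```

A real deployment would also need signing, transport security, and safe activation, which is the "whole new set of technologies" Ted alludes to.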
Gail Moody-Byrd 09:29
So Rick, I want to pull you into this. I know that we’ve spoken several times about the difference between theory and actualization at the customer level. And so as you hear about the theories of the self driving factory, I think you’re meeting some customers who are having some difficulty realizing the value. Would you like to jump in here and speak about this?
Rick Lisa 09:52
Yes, thanks, Gail. I appreciate the intro. Sure, we’ve been working with a number of companies around the world, actually, and you know, we’re starting to get an appreciation for the amount of data that can be created within a factory environment. One thing that a lot of people forget, obviously, is that we sort of eat our own dog food at Intel. Most people recognize us as the silicon in the PC or the server, but we’re actually a fairly significant manufacturing company, and we’ve begun to understand the value of data in our own facilities. And we’re sharing that learning with people in other manufacturing environments across multiple industries. Clearly, what we’re seeing is that the rate of change of technology in the manufacturing space is pretty varied. We’re seeing, you know, state-of-the-art factories that are being built new, with fresh, connected, data-capable technologies being deployed in those factories. Some older factories, obviously, are looking at how to continue to utilize existing assets and existing capabilities, and so rip-and-replace is not always an option. So because of the differences at the deep edge, at the equipment level (the level zero and level one areas that Mike and Ted described earlier), we’re focusing on the data platform and the application platform that then supports value delivery back into the organization. And so, to the points that have been made, data has a shelf life, and data is a journey, actually; you can start at one point and end up at a completely different point in terms of what data matters and what data doesn’t. So the data is actually a continuous, living thing in and of itself.
But what we’re seeing is that if we focus on the data and application platform, and the value delivery and the user experiences that are being built around that platform, including the opportunity to build inference and machine learning training models, and then to deliver those into closed-loop analytics environments, that allows for essentially artificially intelligent environments where, to the points that have been made, the systems self-learn, they heal, they become more intelligent and better over time. The more data that’s presented to them, the faster they heal. And we’re helping a lot of companies around the world go through that journey. It is a complex journey, as has been said: the merging of the information technology space together with the operational technology space, and how those two environments are now operating collaboratively and delivering new experiences within the factory, is actually an exciting space for us. And we’re seeing a lot of companies that are struggling with how to plug all the pieces together in order to build capable factories and capable infrastructure that allows for the delivery of return value. So we’ve developed a couple of mechanisms by which we help coach companies through the process. And, you know, we share that openly, based on our own learnings within our own facilities and our learnings from across multiple industries around the world.
Gail Moody-Byrd 13:35
And would you like to share one of those theories or formulas? I think it’s something we spoke about prior to this call.
Rick Lisa 13:44
Sure. You know, two things that we’ve learned. A couple of folks, Cisco in particular, did an impactful study a couple of years ago, where they found that a large percentage of IoT implementations have, you know, run into difficulties in terms of scale. It’s one thing to implement a proof of concept and do a one-off capability. But what we’ve been learning, and we’ve been learning it across multiple industries, is what a couple of firms have now called pilot purgatory: these POCs get built, each of which has a unique capability or delivers a unique value, but they don’t necessarily tie into enterprise value. It becomes sort of a desert of a lot of effort but not a lot of results, and probably not the value that a lot of companies have been looking to deliver from the promise of IoT. So one thing that we’ve implemented is a discussion point with our customers around how to build enterprise IoT capabilities, to actually have an investable IoT strategy. By that, we mean we look across the company’s business enterprise. We look at the processes that, you know, drive their business and create their products and allow them to deliver and deploy products to their customers. We look at the premises within which they work, and so we’re looking at how we help them architect smart facilities that make great places to work, live, play, visit, whatever the case may be. And then beyond that, we also look at how we start connecting people into that infrastructure. And so, when we start looking at the processes, the premises, and the people, both internal and external to the operation, you know, we begin to see opportunities where we can actually start to prioritize effort and define true value return from these different activities, applying a lot of the concepts that folks have talked about so far from artificial intelligence and machine learning capabilities.
We also start looking at the company’s products and services that they’re deploying out to their customers, so that they can build, you know, product capabilities and services platforms with their products that allow them to stay connected with their customers and their products as they leave their facilities and their factories. So we take processes, premises, people, and products; we call that the four P’s. And we try to build that infrastructure, that corporate enterprise infrastructure, on top of a very strong information technology base that’s bridging IT and OT and allowing for the opportunity of corporate value return. So that’s one of the ways in which we help companies push through this, and it’s something we’re applying within our own business.
Ted Gaubert 17:03
So Rick, to add to that a little bit: you know, we found a lot of the very same things in terms of a lot of pilot projects, but a lot of challenges in getting synergies between those projects and a coherent roadmap. I think a lot of, you know, what it means to create an autonomous factory, or go on this journey of Industry 4.0, is that it’s part of a journey, but it’s also a destination, right? And many times we see the same thing, where there are lots of small pieces being made successful, but they’re very disaggregated, right, and there’s no cohesive structure. So at the end of this, you’ve got all these small little successes, but not in such a way that it’s actually moving the organization forward. And one of the ways that we have found to help give cohesion, which I think also goes back to what you were talking about in terms of platform, is really being able to have a platform where you start to pull together all of these point projects, but it’s done in such a way that you’ve got a longer-term roadmap and vision. Where are we all headed with it? Not just very short-term successes, right? So that there’s good buy-in (you were talking about the people aspect), and then being able to also set the vision of where we’re going. Where is all this leading? What is the next step after we’ve, you know, shown success in this proof of concept? How do we scale that out, right? If you’ve proven it in one place, how do you scale to 40 or 400 or 4,000 places, and do so in a somewhat manageable way? And I think that goes back to having a platform, a platform that allows not only this connectedness but also this way of being able to push forward from a scalability standpoint. I’m going to tee up another piece that I think we want to talk about, which is a little bit around the data piece. I think everyone realizes the need to capture data.
And I think one of the places where we’ve actually found, you know, some challenges in the marketplace is that everyone is collecting data. But are they collecting the right data? And is it the right data specifically to make AI and ML successful? Quite often, you know, the answer is not quite. Many people have collected a lot of data, but when the original strategy was implemented, it was really driven by a lot of human-based use cases, right. It was a human-based use case, either for reporting or historical record, or having some data for an engineer to go back and look through to see what was going on. But it was not necessarily designed specifically around the needs of AI and ML. The way to explain this, at least by way of analogy, is to liken it to the self-driving car. It would be like people collecting a lot of GPS data, a lot of mapping data; let’s say we have a huge data set available. And then we gave that to a team and said, you know, create a self-driving car, here’s all of the road data available for this geography. Well, the reality is, that’s a really great, useful data set, but it’s insufficient for driving a self-driving car. It’s completely sufficient for a human being: if you want to go from the west coast of the US to the east coast and have a human do it, it would be a perfect data set, because you know how to map it. But it’s completely insufficient for the self-driving car. And I think we see that a lot, where, you know, they’ve collected a massive amount of data akin to GPS road data, which is helpful, but really the data set we actually needed would have been, you know, a lidar data set or, you know, a camera data set. And the good news is that in the self-driving car world, we’ve had to retrofit the cars, right? We’ve had to add lidar, we’ve had to add cameras.
And many times in our manufacturing context, the good news is that a lot of the sensory inputs already exist on the lines, and the data actually exists. It’s just simply not being stored, or it’s not being sampled at a frequency that is usable for AI. And so the good news is, many of the customers we work with are actually, you know, 60 to 70% of the way there, at least in terms of the telemetry piece of it. The 30% piece that often is missing is simply either storing the data or sampling the data at the right frequency so it can be useful for a lot of these downstream use cases that they want to enable.
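Ted's sampling-frequency point can be made concrete with the Nyquist rule of thumb: to capture a phenomenon, you need to sample at no less than twice its highest frequency of interest. The function and the example rates below are illustrative, not from the webcast:

```python
# Sketch of Ted's point: historian data logged for human review is often
# sampled far too slowly for AI use cases. Nyquist rule of thumb: the
# sample rate must be at least twice the signal frequency of interest.
def sampling_sufficient(sample_rate_hz: float, signal_hz: float) -> bool:
    """True if the sample rate can capture a signal of the given frequency."""
    return sample_rate_hz >= 2.0 * signal_hz

# A 500 Hz vibration signature needs at least 1 kHz sampling; a
# once-a-minute log (~0.017 Hz) cannot see it at all.
print(sampling_sufficient(1000.0, 500.0))  # True
print(sampling_sufficient(1 / 60, 500.0))  # False
```

This is why a plant can be "60 to 70% of the way there" on telemetry: the sensors exist, but the logging cadence was chosen for human reporting rather than model training.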
Mike Gualtieri 21:57
Yeah, I agree with that, Ted. I see that a lot in our client base too: the need to sort of create additional sensors, or even bring in more IoT devices, to better sense the world, right? So you have more information. A couple of other roadblocks that we see as well: there’s often a misunderstanding of how machine learning works, so actual IT departments aren’t providing the data, or they’re providing a roadblock, because they’ll often ask, well, why do you need that? And a data scientist building that model will say, well, I’m not sure, right? Because the way machine learning works is you give it a lot of data, you hypothesize what data matters, and then the algorithm actually figures out, okay, out of these hundred variables, it’s actually these six variables. So I think there’s a mismatch in understanding that sometimes adds delays to the implementation of projects. The other thing I’ve seen as well is particularly in predictive maintenance. This is where, you know, you have an AI system that wants to predict when a particular machine is failing. And the vendors, the makers of those machines, are often susceptible to this as well: you train a model, and it’s working for that machine in that factory, right, and they just want to take that model and put it in a different factory. But the thing is, that machine operating in a facility in northern Canada is operating differently from the machine operating in Brazil, for example. So what people also have to realize is that sometimes those models have to be trained not just for the machine in one setting, but for the machine in all of the unique settings in which you’re using that machine.
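A toy stand-in for Mike's point that the algorithm, not the analyst, surfaces which few of many candidate variables actually matter. Here variables are simply ranked by absolute Pearson correlation with the target; real pipelines use model-based importance, but the principle (feed in many hypothesized inputs, let the math pick the handful that matter) is the same. All names and data are invented for illustration:

```python
from statistics import mean

def correlation(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def top_features(columns, target, k=3):
    """Return the k column names most correlated (in magnitude) with target."""
    scored = {name: abs(correlation(col, target)) for name, col in columns.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]
```

Handing the data scientist only the columns IT pre-judged as relevant defeats this process, which is the source of the "why do you need that?" friction Mike describes.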
Ted Gaubert 23:50
Yeah, that’s actually a really nice segue into this concept of what we call model management. Which is: you take the model, let’s say for the machine in Brazil; we may have 400 other deployments of the identical machine. They happen to be made in different years, right, and there are slight deviations even though they’re supposedly identical, just because they were made in different years; they may have had different motors and parts in them. So each one’s unique. So then the question becomes: how do you effectively manage 400 different models that are out in the field? Because you’re effectively having to train each model for each machine. And then over time, every machine changes, right? So it’s not only being able to predict, you know, machine failure, but also this idea that if you’ve got a machine, it’s going to have a normal wear pattern, right? It may not necessarily be a failure; it just may be that over time the equipment is wearing in its bearings and things like that. It still works, but the model has to effectively be retrained to compensate for the machine’s new operating state, the way the machine operates today, right? Day one, it’s going to behave one way; two years down the road, there may not necessarily be anything wrong with the machine, it’s just that, you know, the bearings are a little looser and it’s probably vibrating a bit more, and that’s okay, as long as you can characterize it. But the challenge is: how do you do retraining? How do you push new models back out to these 400 assets, knowing that you’ve got 400 different models out in the field? So it gets into this whole concept of model management, and how do you do that? And I think that’s a new challenge in the technology space that’s rapidly developing.
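Ted's model-management problem, sketched minimally: one model per asset, each retrained when its error drifts as the machine wears. The "model" here is just a rolling baseline and the drift threshold is arbitrary; both are stand-ins to show the fleet-of-models shape, not Noodle's actual approach:

```python
from collections import deque
from statistics import mean

# Sketch: per-asset baseline model with drift detection and retraining.
class AssetModel:
    def __init__(self, asset_id, window=50, drift_threshold=5.0):
        self.asset_id = asset_id
        self.readings = deque(maxlen=window)  # recent sensor values
        self.baseline = None                  # learned "normal" for this asset
        self.drift_threshold = drift_threshold
        self.retrain_count = 0

    def observe(self, value):
        self.readings.append(value)
        # First fit once the window fills; later retrains are explicit.
        if self.baseline is None and len(self.readings) == self.readings.maxlen:
            self.retrain()

    def retrain(self):
        self.baseline = mean(self.readings)
        self.retrain_count += 1

    def drifted(self):
        """Has recent behavior moved away from the learned baseline?"""
        if self.baseline is None or not self.readings:
            return False
        return abs(mean(self.readings) - self.baseline) > self.drift_threshold

# A fleet is one model per asset id: 400 machines means 400 models.
fleet = {f"machine-{i}": AssetModel(f"machine-{i}") for i in range(3)}
```

Drift here is not necessarily failure: looser bearings shift the baseline, and retraining absorbs the new normal, exactly the wear-pattern case Ted describes.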
Rick Lisa 25:46
Yeah, to amplify the points you’ve both made: one, to Mike’s point on the different applications of an appliance, it’s amazing to us to see how differently a given appliance can be used in what is almost identically the same environment, but in completely different ways. What’s optimal in a paper factory is not what’s optimal in a shipbuilding factory, or in an earthmoving equipment factory. But at the end of the day, they’re all welding, right? Or they’re all using, you know, robotic technologies, or they’re all using conveyance or conveyor technologies. But it’s completely different from one to the next. The vendors may be identical, but the equipment is completely different; the tool sets, by the way, are completely different. So, you know, one point you made, Ted, is around all the different sources of data, and how all those different sources of data inform different operational capabilities, or what makes a good operation better, and the data journey itself. What we’ve found, as we’ve tried to characterize what makes a useful solution in the hands of a factory owner, is that the last-mile customization of all the tools that are available in the industry is really where the value gets delivered, because for each user, how they deploy their equipment is very unique. So again, kind of where we started the discussion today: you know, a lot of people perceive that when they want to get to a self-driving, autonomous, or self-healing factory, or predictive or preventative or prescriptive capabilities within their factories, somehow they have to give up where they’ve been in order to move forward. And what we’re seeing is that actually all the things they’ve done so far are valuable, if the data analysis, training, and learning platform is built the right way.
It actually uses all of the established tools and sources of data going forward to build new capabilities on top of that, without destroying all the old value in the process. So it’s very much an evolutionary process, not a revolutionary process. Two points are important here. First, the new generation of artificially intelligent, machine learning, you know, inference-based training capabilities that are built with data doesn’t need to be a rip-and-replace dynamic; it needs to be an incremental value on top of what’s already been created, using and sourcing data from the tools that are already established in the factory. And second, to the point Mike made, it’s very important to understand that there’s probably no one way we’re going to deploy an appliance everywhere in the world. And so in this operational technology space, building platforms that are resilient and know how to adapt and be modified around the given tools in an environment, and around the machines in the environment and how they’re being used, is something that the industry needs to actually recognize and embrace, because it’s in that capability and in that recognition that we’re going to find tremendous opportunities to move forward quickly and at scale, to your point earlier, Ted.
Mike Gualtieri 29:36
Yeah, and I think what shows a lot of promise, though it’s probably beyond the scope of this call, is other machine learning techniques, specifically transfer learning. That’s where you build a model for one particular environment, so you’re sort of priming the pump; you don’t have to duplicate the work for every single particular application. But then there’s also reinforcement learning, where it can learn as it’s going. Because typically you gather the data, a data scientist builds the model, they deploy the model, you monitor it, and you periodically retrain it. Reinforcement learning can learn as it goes, so to speak. It doesn’t work for all use cases, but I think technologies and techniques like that are also going to help incredibly in scaling towards the self-driving factory.
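A toy illustration of Mike's contrast between periodic offline retraining and learning as you go: an epsilon-greedy bandit that updates its action-value estimates from every observed reward, with no separate retraining step. This is the simplest reinforcement-learning setting, chosen for brevity; it is not a claim about what either company deploys:

```python
import random

# Sketch: online learning via an epsilon-greedy multi-armed bandit.
class EpsilonGreedy:
    def __init__(self, n_actions, epsilon=0.1):
        self.values = [0.0] * n_actions  # estimated reward per action
        self.counts = [0] * n_actions
        self.epsilon = epsilon           # exploration rate

    def choose(self):
        """Mostly exploit the best-known action; sometimes explore."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, action, reward):
        """Incremental mean: the agent improves with every interaction."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]
```

Each `update` call is the "learn as it goes" step: there is no gather-train-deploy-retrain cycle, just continuous adjustment from feedback.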
Ted Gaubert 30:34
Yeah. To build on that, Mike: there is a very unique use case where we’ve actually developed a lot of tech on the front of both of those, and it really comes into this idea of dynamic recipe optimization. It’s closed-loop control, but it’s really where you have a lot of variation in the input material somewhere in the process that’s uncontrollable. How do you dynamically adjust the recipe in real time to effectively hit some type of quality metric? An interesting thing about that is that it’s a very classical problem, but the tools and techniques you’re talking about are a new approach to solving it, where effectively you use complex models to model it in a way that classically would be very difficult to do. So that gives you your neural network foundation. And then you can supplement that with not only real data but synthetic data, where you’re using, you know, computational fluid dynamics to create a sim environment; that gives you a base set. And then you get into what you were talking about with reinforcement learning, where you effectively have it making decisions on this process and also observing, based on these actions, how is this changing in terms of quality? How is this changing in terms of the tolerance of the material I’m making? And that gives you this feedback loop that is continually training the neural network. The really exciting piece about this is that when we have a client who is (audio segment lost or damaged)
Gail Moody-Byrd 33:30
Excuse me, Ted, we’re having some trouble with your audio. But I wonder if what you’re describing has been documented in any way, and if there’s something that folks could read that talks about some of our work with reinforcement learning.
Ted Gaubert 33:46
I think we do have some things
Gail Moody-Byrd 33:49
much better, by the way.
Ted Gaubert 33:50
Okay, yeah. And I can try changing my audio. We do describe it, I think, in a very generic sense. When we start getting very specific on the use case, you know, some of it’s pretty confidential, so I don’t exactly know how much of that is published. But there are things out there, at least around transfer learning, and some things we are doing that are very interesting around deep neural networks, and then applying that to some of these dynamic recipe optimization problems.
Gail Moody-Byrd 34:24
Right, that we’re actually working on in customer environments.
Ted Gaubert 34:28
Mike Gualtieri 34:30
There’s another, you know, challenge that some companies face, and that is in trusting the models, because they are probabilistic. They’re developed on historical or current data, but they are probabilistic. And so we’ve seen a lot of organizations, especially when an executive has to say go on this, be very reticent, especially when exploring edge use cases. And, you know, there’s actually quite a simple solution to this, which is another technology: business rules, right? It’s very simple: you don’t have to do what the model tells you to do, right? So if you’re uncomfortable with a model making a decision for certain edge use cases, you simply don’t have to let it. So we’ve seen a lot of successful implementations where they surround the models with business rules. I’ll give you a non-manufacturing example. You know, machine learning is often used to determine whether a loan should be made. And if an executive says, well, what if it’s wrong on a million-dollar loan? Very simple: you just say, if this is a million-dollar loan, we’re not going to use the model, right? So it’s not all or nothing. It’s not AI versus the humans. You can actually set rules for when humans get involved in the decision, or when they should be in the loop, versus the decisions that you’re comfortable with. So we’re seeing that as a great way to help, organizationally, people who are reticent to move forward to move forward. And then one other thing that I’ve seen in a manufacturing environment is operations research, and I get a lot of questions: well, what’s the difference between OR and machine learning? And, you know, operations research, meaning mixed integer programming, sometimes called mathematical programming: these are extraordinarily complementary technologies, and they’re not at odds in the least bit. So you use the machine learning model to make a prediction, say that,
You know, there’s a supply chain issue he was or to determine what to do about it. So so there’s also a lot of complementary technologies that that and those those two groups, the AI machine learning people and the OR people should actually love one another and the technology they bring to bear.
Rick Lisa 37:07
I’d add to that. Sorry, Gail. No, I was just going to add to one of the comments Mike made in that, you know, a lot of people kind of see it as a black and white decision to implement machine learning and artificial intelligence in effect to turn the machines on and let them run the show. Actually, the whole idea behind you know, digital twins as an idea is the ability to run a simulated model of urban environment, using machine learning and artificial intelligence and machine practices along with decision business decision and policy rules and engines to build out decision systems for how data and services are applied to a given workflow. But you always have the option to run a digital twin, if you will, in parallel with a human operating And so it’s not we actually spend a lot of time talking with with teams about how what we’re doing in artificial intelligence, machine learning, data processing, algorithmic development of control systems in order to augment the worker as opposed to displace the worker. And when you can actually combine the two things, where you can allow the digital model to run side by side with an operator, and then decide at what level you transition from one to the other, or begin to shift the workload to the machine in order to get better results, better productivity, better output. You know, there’s that opportunity to basically use that kind of process in order to actually grow into these ideas around artificially intelligent and autonomous factories without having to again go through as I said, Before our revolution, you can evolve. And so what we’re doing a lot of with folks is helping them understand the process of implementing a system of messaging of worker augmentation. 
And and then we’re required allowing the machine learning and artificial intelligence cape, the artificial intelligence capabilities actually begin to control the process where it adds value to the operation and where there’s clear opportunity to take advantage of those capabilities.
Rick Lisa 39:34
Yeah, that’s right
Gail Moody-Byrd 39:36
Excellent. That leads to a question that came from the audience about change management, about how to build organizations around implementing this. What have you seen in the field? What are the best practices in terms of managing this change, building an environment of trust, getting buy-in from the people who might think you're there to replace them? Talk about change management for a moment.
Rick Lisa 40:04
So I guess I'll jump in on that one, if everybody doesn't mind. You know, what we have seen, as IT and OT systems begin to bridge, and as we've worked across, again, that idea of enterprise capabilities, what we have found to be successful in change management is when you can get the primary stakeholders from across the company in the room, openly exchanging ideas, concerns, frustrations, and ambitions, and doing so collaboratively. You know, when you get the constituent organizations breaking down the silos of operational value across OT, IT, R&D, development organizations, product organizations, and lines of business, the more collaborative it is across the organization, the faster we've seen people able to implement.
Ted Gaubert 41:08
Yeah, to follow on to Rick's thought, you know, one of the things that we find is that there's a lot of tribal process knowledge, very valuable knowledge in terms of how that process works. And also, to get to the trust aspect, in terms of taking the first step: many times, it doesn't have to be completely autonomous. Simply being able to tell the factory workers, you know, these are the likely reasons that this defect is occurring, is very, very helpful, especially when there can be hundreds if not thousands of variables upstream that could be causing that quality problem. Simply narrowing it down to a top-10 list, these are the most likely things that are causing the problem today, helps them start building trust in the system. It gives them focus. Then they can effectively use their tribal knowledge to go, okay, out of those 10, it's probably one of these three, let's go to the floor, let's figure it out. And if they see that every time they're having a problem, it's always on the top-10 list, they start to get more confidence in the system, they really start to trust it. And I think that starts to bridge the people gap: people trust it because they realize they're not being replaced by it. They're just now not having to spend four or five hours hunting around the plant trying to figure out what upstream is causing the problem. They can, in 20 or 30 minutes, figure out that it's a hydraulic system issue upstream that's causing the problem, right?
And so now you've got the people who really understand the process much more hyper-focused on being able to improve the process and leverage that tribal knowledge, and you're getting an efficiency gain and making sure that people are doing the most valuable work. I see that as the first step: if an organization can achieve that at the ground level, that's a big win, not only from the trust aspect, but from really being able to show the value. And then that sets the stage for the next step in the evolution of how the company is maturing along its journey.
Gail Moody-Byrd 43:24
So I’m sure that the listeners have gotten some great information from this. Thank you so much for sharing and collaborating. I wonder if each of you would like to take a minute or two and just provide some parting thoughts based on what you’ve heard today from your other two colleagues. So what do you thinking what do you want to leave the listener with as they embark upon this journey to move beyond a really overhyped topic to something that’s real? What words would you leave them with? Anyone can start?
Ted Gaubert 43:57
Yeah, I think we've talked about several different themes. You know, we've talked about the people aspect, the need to make sure that you are on a journey that's progressing, right? And how do you get out of this world where we're doing lots of small POCs, but they're all disconnected? Part of that, I think, is having not only a cohesive strategy, but a platform to be able to enable that, and at the same time ensuring that that platform is really designed for the future state you're designing for, to make AI and ML successful. Realizing it may not be day one that you have AI and ML, but if you design the right platform, and you're collecting the right data to make AI and ML successful, you will then be able to get to the destination. So I think that's the theme I've heard on this call: being able to tie those pieces together and move the organization forward on this journey.
Gail Moody-Byrd 44:56
Thank you, Mike.
Mike Gualtieri 45:00
I’d like to add that I agree that the journey to the autonomous factory is a journey. And there is one narrow use case probably in most manufacturing facilities that that is worth millions of dollars and savings. So a single use case certainly would prove the value and and increase the trust. And I think a lot of what Rick said about and content about the the PO sees making those successful is a very important step because that’s what we create that trust. There are hundreds of use cases, but just one can be can be worth millions of dollars.
Gail Moody-Byrd 45:50
Okay, thank you. And wrapping it up with Rick. Rick, how would you like to leave your legacy on this webcast?
Rick Lisa 46:01
I think, you know, Ted and Mike have summed it up really well, what this call was trying to get across to the folks out there in the audience. I guess, you know, we've already touched on this idea of a vision, a game plan going forward. And actually, that starts with the value proposition around implementing autonomous or smart-infrastructure-type factories. If the organization has a clear vision of where the value will come from, my suggestion is just get started. Pick that first point of opportunity that drives the greatest return of value to the organization, the one that would compel the next level of investment. And as they build out that platform, as Ted has described it, make sure that that platform has the ability to scale to the next one, and the next one, and the next one, so that you can start with the most important one or three and grow over time to essentially an infinitely scalable platform that allows you to bring any use case or application into that environment, and to know that what you build today will drive value, and the things you build will drive greater value as you go forward.
Gail Moody-Byrd 47:29
All right. Well, thank you. I'd like to thank all of the attendees, all the listeners, and certainly each of you, Ted, Mike, and Rick, for the information that you've imparted. Everyone who has attended will get a recording. And if there are any documents that you'd like to append to the follow-up email, we'll do that as well. I think this is the beginning of a really interesting conversation, so hopefully there will be future versions of this, because it's a topic that's just emerging. So thank you so much, and we will see you all out at a factory somewhere near you. Thank you.
Rick Lisa 48:18
Great. Thank you.