Introducing a Plug-and-Play Sensorless Autonomous Machining System

Supercharge your predictive analytics applications with high-frequency machine data straight from the PLC to diagnose, predict, and avoid failures on your manufacturing equipment. No sensors required.
Mar 26, 2021

It is a messy, tedious activity to acquire, parse, and clean data for analysis in a factory. When tools break, it can be costly. A tool can become damaged yet still make parts that seem to be in spec, but those parts often end up getting scrapped. Subtle anomalies in machine load, torque, acceleration, and spindle speed can cause parts to be made outside of required tolerances. All of this costs you time and money. MachineMetrics has connected to thousands of machine tools, enabling our data scientists to build algorithms that can predict quality defects and extend tool life. Machine tool operators label data using the operator interface when tool failures or quality defects occur. Our ML/AI algorithms detect patterns in the hundreds of data items collected from each machine, catch these problems before they occur again, and stop the machine before failure. This technology is a cornerstone of the automated factory of the future.



Transcript:

Lou Zhang:

Thank you for the very nice introduction, Stephen.

Stephen LaMarca:

You're very welcome, Lou, as always.

Lou Zhang:

Of course. So, let me go ahead and kick it off, if everyone is ready.

Stephen LaMarca:

Absolutely.

Lou Zhang:

So, today, I want to talk about something that we have, at MachineMetrics, researched and investigated for the better part of the last three years or so, and this is actually a product that we've recently released that has been hugely beneficial, not just to our own customers, but to the industry as a whole. So, I thought that this was an appropriate forum to share it. Really, what we're talking about here is what I call a plug and play autonomous machining system, which only uses embedded machine sensors. So, let's go through that slowly.

Lou Zhang:

Really, when we talk about autonomous systems, autonomous machines or systems are able to operate without being controlled directly by humans. So, if you think about a typical CNC machine, there's a lot of human activity involved. There's loading, there are your guys listening to hear if the machine is perhaps having excess vibration or some other problem. But what we found is that this could actually be done automatically, and without any aftermarket sensor installations.

Lou Zhang:

So, first of all, what are we monitoring here? So, this is [inaudible 00:01:34] cutting, cutting, and you'll see here that it fails right there. So, this is any machinist's nightmare. It turns out that there are actually patterns in the motor data on a machine that precede this failure and that you can actually pick up. So, that is mainly what we're going to be talking about today. So, our thesis here is that we extract rich data from embedded sensors on the machine to predict and prevent scenarios that could prove costly.

Lou Zhang:

So, breaking that down further: we extract rich data. Basically, this means that we pull data at a frequency of about one kilohertz. So, that's once every millisecond. We get about 170 million data points per motor every single day on the machine, and we're getting it from embedded sensors on the machine. So, we don't go onsite and install additional sensors anywhere. We don't do any of that fancy integration stuff. We literally just use what's on the machine already. What's the value of this? Well, it's to predict and prevent different scenarios that could prove costly.
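A quick sanity check on that data-volume arithmetic. The two-signals-per-motor figure below is an assumption made to reconcile the quoted 170 million points per motor, not something stated in the talk:

```python
# One kilohertz sampling means one sample every millisecond.
SAMPLE_RATE_HZ = 1_000
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

samples_per_signal_per_day = SAMPLE_RATE_HZ * SECONDS_PER_DAY
print(samples_per_signal_per_day)   # 86400000

# ~170M points per motor per day is consistent with roughly two
# signals per motor (e.g., load plus spindle speed), an assumption
# here rather than something stated in the talk.
points_per_motor_per_day = 2 * samples_per_signal_per_day
print(points_per_motor_per_day)     # 172800000
```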

Lou Zhang:

Basically, we save our customers a lot of money through this, and you can imagine tool failures, scrap parts, bearing failures, they're very expensive. It turns out that they can indeed be prevented. So, what's the traditional way of thinking about this? This is obviously not a new problem. So, in academia, there's been a lot of activity around this, which has been both good and bad. So, we'll talk about both aspects of it. Usually, you have these professors coming up with very complex systems or models using neural networks or deep learning and things like that, and they require very complex integrations. You have to place four sensors inside the machining table, you have to dedicate a computer, and the sensors are really, really expensive: $50,000 for the one in this paper.

Lou Zhang:

Really, it's a bit disconnected from industry, because if you break apart the machining table to put sensors in, you are actually going to void your warranty with a lot of these OEMs. So, this is from DMG and it says, "If you do this purported installation here, you will no longer be able to get service from us or be able to use your warranty." So, this really happens because there's been a very large disconnect as well, we found, between industry and academia over the past couple of decades around this. The goal of academia is often to publish papers and move the industry forward through these isolated experiments that are siloed for specific applications.

Lou Zhang:

So, yeah, we can predict tool wear on this particular tool for only stainless steel or something like that. But that's not a generalizable approach, and industry just wants something that they can deploy and that can help them save money. They can't spend their time reconfiguring all of these things, especially if they're a contract manufacturer and they have dozens of different jobs that they go through every single month. So, what's the solution to this? All right. So, these things really don't cut it right now, frankly.

Lou Zhang:

So, one, they're unscalable. Aftermarket sensor installations, just like any physical device, are difficult to standardize. Where do you put it? Difficult to install. How are you going to get the guy to put it in, and how do you know he's doing the right job? And they're subject to degradation. The machining environment is a complex, very hostile environment with lots of things going on, especially if you try and do it inside of the machine. You're dealing with lubricant, coolant, all sorts of different things that can really screw up that sensor.

Lou Zhang:

They're unreliable. So, what if you have a guy that walks by the machine and bumps it out of place or something? What if you're doing a changeover and you hit that sensor somehow? They're expensive. With good sensors, which sample at a high enough frequency and are reliable, the cost scales in a linear fashion, which means that every single time you add a machine, every single time you add a sensor, you're going to add cost to that project. And they're inflexible. So, all of these overheads act as barriers to making any changes. So, you have a different job, or say you want to sensor up a different machine. Well, that's a whole new project right there.

Lou Zhang:

So, the solution that we deployed can be best explained through a case study, actually. So, one of our customers, BC Machining in North Carolina, they're a contract manufacturer, and what this picture here shows is one of their projects that they have continuously running. So, they make these tourniquet handles that go inside of tourniquet sleeves, and you can see these are pretty small parts. They're made inside these Swiss CNC machines, you know, high precision, high volume machines.

Lou Zhang:

Really, we first saw this problem a couple of months ago and they said, "Yeah. We run these machines 24 hours a day. They really produce the vast majority of our revenue. But look at all these end mill failures that we're having on these machines. Every single week, we have end mills break dozens of times on these Star machines that are manufacturing these parts, and it is not good because it's costing us tons of money." So, as we saw before, a good part looks like this, nice and shiny, good finishes. But a bad part cut with a bad tool looks like this. The slot that's here, when it's cut with this bad tool, doesn't get that nice finish.

Lou Zhang:

So, they can't sell these parts, they don't pass QA, and they end up having to throw away thousands of these parts every single month because of this quality issue. They do this because they run these machines way past their rated capacity. It actually makes more economic sense for them to run their machines really, really hard and make more parts, but also more scrap. So, what we said was, "All right. How about this? You can still run your machines hard. But we'll make it so that you don't make any more scrap." That's what this one project was really all about.

Lou Zhang:

So, if we step back a little, at every single machine that has MachineMetrics, there's something called an operator tablet. What it allows is for your machinist to add context to what's going on on a machine. So, when a machine that has our software on it stops, and this is all cloud connected, cloud based software, essentially, we know when the machine goes from active to inactive, whenever that happens, and we prompt you to give a reason why that happened. So, in this case, catastrophic tool failure at 5:35 for five minutes, and they said tool 34 broke.

Lou Zhang:

So, over many, many weeks, we were essentially gathering training data here, labeled training data from these mass produced sorts of scenarios, where we could get massive amounts of labeled data from frankly thousands of operators across the US, and you can imagine that that is extremely valuable for us. So, we know exactly, down to the second, when every single maintenance issue happened. So, our approach, and this is another type of machine, but I'm using it just to demonstrate the purpose, is to collect motor data directly from the control. Every single tool that cuts, and there are six tools here, has a specific signature in terms of the power that it uses to make each cut.

Lou Zhang:

Really, for our purposes, power, current, load, and torque are all the same thing. They're essentially the amount of energy that the machine is drawing to make that particular action. You can see that over time, if you integrate the area under the curve, this is the amount of energy that's being consumed for every single cut that's being made. If you look at that over many, many parts, you can see that, essentially, patterns emerge that can tell you how the machine is performing.
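The integrate-the-area-under-the-curve idea can be sketched in a few lines. The load trace below is synthetic, a stand-in for real motor data:

```python
import numpy as np

# Hypothetical 1 kHz load trace for a single 2-second cut (units: % load).
t = np.arange(0.0, 2.0, 0.001)                # 1 ms spacing, as in the talk
load = 40.0 + 10.0 * np.sin(2 * np.pi * t)    # synthetic stand-in for real motor load

# Area under the load curve over the cut ~ energy consumed by that cut.
# (Manual trapezoidal rule so this works across NumPy versions.)
energy = float(np.sum(0.5 * (load[:-1] + load[1:]) * np.diff(t)))
```

Computing one such number per cut turns a raw waveform into a single per-part value whose trend over many parts can be compared and thresholded.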

Lou Zhang:

So, keeping that in mind, why do we need data at such high frequency? Why do we need it at 1,000 hertz, 1,000 times a second? Well, if you look at these four parts that are being cut right here, this is the exact same part being cut each time. For some reason, you have spikes here in the low frequency data. This is one hertz, and some artifacts just exist in some parts that don't exist in others. Well, it turns out that this is not actually the case. This is a consequence of aliasing, in fact. The true data that lies beneath this, the high frequency data, is exactly the same from part to part.
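The aliasing effect described here is easy to reproduce: sample the same hypothetical high-frequency waveform at 1 Hz with two slightly different start times, and the 1 Hz traces disagree even though the underlying 1 kHz signals are the same waveform:

```python
import numpy as np

t = np.arange(0.0, 10.0, 0.001)               # 10 s of "cutting" at 1 kHz

# Two cuts of the exact same part: identical 7.3 Hz content, but the
# 1 Hz sampling clock effectively starts 40 ms later on the second cut.
part_a = np.sin(2 * np.pi * 7.3 * t)
part_b = np.sin(2 * np.pi * 7.3 * (t + 0.040))

one_hz_a = part_a[::1000]                     # what a 1 Hz machine API surfaces
one_hz_b = part_b[::1000]

# The 1 kHz signals are the same waveform, yet the 1 Hz traces disagree
# badly: 7.3 Hz is far above the 0.5 Hz Nyquist limit of 1 Hz sampling,
# so each 1 Hz trace is an aliased artifact, not the real pattern.
worst_disagreement = float(np.max(np.abs(one_hz_a - one_hz_b)))
```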

Lou Zhang:

You can see that this red line right here, the low frequency data, looks the way it does just because of undersampling. So, typical machine APIs will give you data at one hertz, because that's essentially what is being surfaced on the HMI itself to the operators. If you try and use one hertz data to look at these patterns, you will get noise. You put junk in, you're going to get junk out. But with high frequency data, we start to see those patterns that we demonstrated in the previous few slides. For these two tools, you see that pattern for tool 401 and tool 805, and the reason why you see these jumps here is actually because they're making offsets, and every time they make an offset, that constitutes a different power signature for that tool.

Lou Zhang:

Okay. So, what's going on at a live level here? So, this is about 20 seconds of data, 20,000 points that we pull, and every single line here is a different part. So, every single part, it's cutting, it's cutting, it's cutting, and then all of a sudden, you might not really see it, but there's this sudden bump right here in load. It just jumps all of a sudden. That may not be obvious to the naked eye, but when you take the integral over time, it is extremely obvious. There is essentially a huge jump in load about a dozen parts or so before the actual labeled failure, which is this red line right here.

Lou Zhang:

If you look at that over many tens of thousands of parts and dozens of annotations, you can see that a particular pattern emerges. So, every single time you have that jump, you also have an annotation minutes later where the operator's like, "Help. My machine broke." Now, you can see that some of these don't have black lines. That's just because the operator forgot to put in the annotation in those cases. But whenever we have an obvious pattern like this, we can basically implement a predictive system around it.
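A minimal sketch of such a predictive system, flagging the first part whose integrated load jumps well above a recent baseline. The function name, window size, and threshold multiplier are illustrative assumptions, not the production algorithm:

```python
import numpy as np

def detect_load_jump(part_energies, window=20, k=4.0):
    """Return the index of the first part whose integrated load sits
    k robust-sigmas above the median of the preceding window of parts,
    or None if no jump is found. One energy value per part."""
    e = np.asarray(part_energies, dtype=float)
    for i in range(window, len(e)):
        baseline = e[i - window:i]
        med = np.median(baseline)
        # Median absolute deviation, scaled to approximate a std dev.
        mad = np.median(np.abs(baseline - med)) or 1e-9
        if e[i] - med > k * 1.4826 * mad:
            return i          # first suspicious part -> issue a feed hold
    return None

# Synthetic example: stable per-part energies, then a sudden step up.
energies = [100.0 + 0.1 * ((i * 7) % 5 - 2) for i in range(50)] + [108.0] * 5
suspect = detect_load_jump(energies)   # -> 50, the first stepped-up part
```

The robust median/MAD baseline is one simple choice here; it tolerates occasional outlier parts better than a plain mean and standard deviation would.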

Lou Zhang:

So, that is exactly what we did. This is a snapshot of our product, and you can see that we're integrating load, and we're integrating, we're integrating, and then all of a sudden, it jumps. What we do is we issue a feed hold to the machine. MachineMetrics is two way communication. Not only do we read data from the machine, we can also write data back to your machine if you so wish. So, we stop the machine in its tracks, and they look at the part or look at the tool and they're like, "Oh. It does indeed look like it's compromised." Then they change it out, and the load goes back to normal.

Lou Zhang:

So, we're monitoring this 24/7. You don't have to do anything, essentially. The software does all the work. Just to remind everyone, we never set foot on this person's factory floor during this deployment. It was deployed in its entirety during COVID-19. So, they were very happy that, essentially, they don't have any overhead on their part. They already had our software. Really, all it is is an edge device that you plug into your machine that lets you start collecting data, and this is just one additional feature that you can get from that.

Lou Zhang:

So, what's our performance here? So, over the past four months, we've issued 55 feed holds. Of those 55, 52 have been accurate. So, we've had three false positives, and we've had two failures that we haven't detected. So, that's a 96% recall rate. What this resulted in was over 2,000 scrap parts prevented, and the numbers don't do it justice. Basically, they were saying that their time savings have been monumental. They would previously lose about a third of a shift's worth of parts. Then those go into these bins and someone has to sort out the scrap parts from the good parts, and it's no good.
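Restating those numbers in standard precision/recall terms (the quoted 96% corresponds to recall, detected failures over all actual failures):

```python
# The feed-hold accuracy numbers above, as precision and recall.
feed_holds_issued = 55
correct_feed_holds = 52            # true positives
false_positives = feed_holds_issued - correct_feed_holds    # 3
missed_failures = 2                # false negatives

actual_failures = correct_feed_holds + missed_failures      # 54
recall = correct_feed_holds / actual_failures               # 52/54
precision = correct_feed_holds / feed_holds_issued          # 52/55
print(round(recall * 100, 1))      # 96.3 -> the ~96% recall quoted
print(round(precision * 100, 1))   # 94.5
```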

Lou Zhang:

So, now they're not wasting any time doing that. Every single second a machine is running, it's making good parts. There's just a much more efficient and streamlined outcome here. So, let me check my time real quick. All right, I've got 12 minutes. So, I'm going to go over one more use case real quickly. So, we're not a one trick pony. This is actually at an automotive manufacturer. They make these brake rotors, and you can see that here, we're not monitoring load. We're actually monitoring spindle speed.

Lou Zhang:

Spindle speed is commanded to be a certain RPM, and when the machine is machining, basically, it's trying to keep the RPM that the control is telling it to hold. But sometimes, it can't. So, here, on line two, cut four, you can see that there's something awry going on. The reason why this particular thing happens here is that the machine can't keep that spindle speed anymore, because there's some sort of a compromise in the tool that's doing the cutting. The tool itself is fractured and can't do the job that it's being commanded to do. So, the spindle speed gets all wobbly, the part isn't cut correctly, they have to scrap them, and the tool fails very soon afterwards because of this fracture.

Lou Zhang:

So, we could create just a simple rolling standard deviation and single out when this happens, put a threshold on it, and prevent it from happening. This gets to another topic of ours, which is that really, if you look at tools, a freshly replaced tool vs. a worn out tool, they do have a different load signature, different power signature. A worn out tool is going to use more energy than a freshly replaced tool. This is kind of like when you're writing with a pencil and it gets more and more dull, you have to use more force when that pencil is dull vs. when it's fresh. Same thing is happening to your machine tool.
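The rolling-standard-deviation idea can be sketched like this, with a made-up spindle-speed trace; the window size and trip threshold are assumptions that would be tuned per machine in practice:

```python
import numpy as np

def rolling_std(x, window):
    """Trailing-window rolling standard deviation (NaN until the
    window is full)."""
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)
    for i in range(window - 1, len(x)):
        out[i] = np.std(x[i - window + 1:i + 1])
    return out

# Commanded 3000 RPM; hypothetical trace where the spindle starts
# wobbling halfway through because the tool can no longer hold speed.
rpm = np.concatenate([np.full(500, 3000.0),
                      3000.0 + 25.0 * np.sin(np.arange(500) * 0.5)])

wobble = rolling_std(rpm, window=100)

THRESHOLD_RPM = 5.0   # assumed trip level, tuned per machine in practice
alarm_at = int(np.argmax(wobble > THRESHOLD_RPM))   # first sample over threshold
```

Once `alarm_at` fires, the same feed-hold mechanism described earlier can stop the machine before the fractured tool scraps more parts.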

Lou Zhang:

You can see that the difference is minuscule. It probably could not be picked up by external sensors very easily. But because we're directly tapping into that motor data, we can very clearly see that there is a difference. What this allows us to do is anticipate these failures as well. Also, I know tool life has been a real hot topic for manufacturers for a long time, and the typical way to deal with tool life is just to set a tool counter based on, frankly, what is many times an arbitrary number of parts that you think can be made, or a manufacturer recommended number of parts.

Lou Zhang:

Here, we can actually look directly at the motor data, and we're looking at the noise over time. So, the noisier the tool gets, the closer it is to the end of its life. So, you can imagine, you can just put a threshold here and say, "Oh, well, I want this tool to be 80% to end of life before I change it over," vs. just kind of guessing and being either too short or too long in your estimation. So, really what this allows us to do is physics based monitoring. All right? So, notice here that I am a data scientist, but I never mention anything about deep learning or machine learning or any fancy methodologies like that, and that is for a reason.
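That percent-to-end-of-life thresholding might look like the following sketch, where the fresh-tool and end-of-life noise levels are assumed to be learned from each tool's history:

```python
import numpy as np

def percent_to_end_of_life(noise_now, noise_fresh, noise_eol):
    """Map the current noise level onto a 0-100% tool-life scale.
    noise_fresh and noise_eol are assumed to be learned per tool
    from historical failures; clipped so the result stays in range."""
    span = noise_eol - noise_fresh
    pct = 100.0 * (noise_now - noise_fresh) / span
    return float(np.clip(pct, 0.0, 100.0))

# Policy from the talk: change the tool at 80% of end of life,
# instead of guessing with a fixed part counter.
CHANGE_AT_PCT = 80.0
pct = percent_to_end_of_life(0.9, noise_fresh=0.2, noise_eol=1.0)
needs_change = pct >= CHANGE_AT_PCT    # ~87.5% -> change the tool
```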

Lou Zhang:

It's because CNC machines are physical devices, which are, surprise, surprise, bound by the laws of physics. So, physics is deterministic. Every single time some sort of failure happens on a machine, just like every single time you throw a ball on the ground, it's going to do the same thing. So, that's why a physics based model is so predictive with these machines. The trick is in getting the right data and then cleaning that data so that these models can work. So, what we're really monitoring here on the machine, if you think about it, is, through power, things like acceleration and friction. In a way, we can even get things like geometry based off of offset data and different power patterns.

Lou Zhang:

I'll go over that more in a bit. But really, what this means is that if we look at a Swiss lathe over one tool engagement period, so this is about 1,000 parts, which they made in a day, very fast parts, 30 seconds, 45 seconds each, you can see that every single time that load decreases and jumps again, that is from a bar feeder change. So, this is actually the machine running out that bar, and as that bar gets shorter and shorter, the amount of power that's being used to accelerate that bar gets less and less. So, we can actually separate that out.

Lou Zhang:

So, what's actually happening on this particular cut is that most of this load change is from material and spindle effects, not from cutting effects. The way that we actually do this is proprietary. There is a proprietary physics model behind this. But basically, we only care about the cutting effects on the machine itself, because the cutting effects are essentially what are going to tell us how far along that tool is in terms of failure, whether there's a fracture or a compromise, whether it can't keep its spindle speed. All of that is embedded in this orange line here. This green line is essentially the remainder that we don't care to look at.

Lou Zhang:

Another thing we can actually look at is we did an experiment and we looked at temperature at one of our customers. They're based in Murphy, North Carolina, and we can essentially see that the frictional instability on the machine, as evidenced by the amount of noise on the main spindle from these motor data signals that we're collecting, the noise is directly correlated to the temperature. This is because at that particular factory, they don't use a lot of heat or AC. They actually just open up the bay of their machine shop, giant garage door, whatever, at the beginning of every day to let fresh air in.

Lou Zhang:

It turns out that the colder the factory is, the more instability there is on the spindle. We just thought that this was incredibly interesting, that you can actually use our data as a sort of temperature gauge as well. So, we're not a one trick pony. We have hundreds of customers, thousands of machines that we're attached to. We look at these examples that are pouring in from our customers, and we can create all sorts of algorithms around them. So, this is at one of our customers, Hypertherm, actually, and back in November, they labeled a piece of information on our software and they said, "All right. There was unplanned maintenance and we had to do a spindle rebuild and burn-in."

Lou Zhang:

Shortly after, the guy emailed me and he was like, "Well, we had a bearing failure on this one machine. Initially, we thought that this was it and there wasn't any more damage, but it turns out that we also damaged our spindle housing." So, this is obviously a very expensive, multi-thousand dollar repair, and it turns out that if we look at the data directly before failure, about two hours before, we can indeed see this giant creep up in load that happens. This is because directly before the bearing failure, the machine is basically not able to function as normal anymore. A bearing failure is going to influence everything on your machine, especially the tools that are being used.

Lou Zhang:

So, you put a simple threshold here and stop the machine before it happens, great. You've prevented that spindle housing damage, at least. The load started climbing because the bearing, supposedly, was already beyond repair and fractured. But it turns out that we can also prevent the bearing failure itself. If we look at this data over many, many days, all the way back a week before it happened, you can see that there were very consistent signals beforehand until about two days before the bearing failure. Then you see this giant jump here in load before the catastrophic exponential increase at the very end.

Lou Zhang:

So, again, whenever you see this consistently over time, as we have, you can put a threshold, create an algorithm, whatever, and stop the machine before any of this happens. That's what we're in the process of doing with this customer as well: trying to figure out when exactly you want the machine stopped, and what you want to do when you see patterns like this. So, there are multiple examples, really. There's the "hmm, the load is creeping up on my machine" scenario. So, what happened here was an operator made a bad offset, and it caused the load to increase until the machine ultimately failed.

Lou Zhang:

Then there's the "something broke and my oscillation spiked" scenario. So, here, we're looking at vibration. For those of you who are more technical, vibration here is estimated by applying a Savitzky-Golay filter to the torque on the machine. So, this is an analog to what you would get from your vibration sensors. So, oh, no. It looks like there's a giant spike here. That's because the tool is compromised, and basically, something bad is going to happen on your machine unless you do something about it. So, eventually, the tool fails, and things go back to normal after it gets swapped out. This is another scenario that we could have prevented.
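A sketch of that vibration proxy, using SciPy's Savitzky-Golay filter on a synthetic torque trace. The specific frequencies, fracture time, and filter window here are illustrative assumptions, not MachineMetrics' actual parameters:

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(0.0, 2.0, 0.001)                     # 2 s of torque data at 1 kHz
torque = 5.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # slow commanded motion

# After a hypothetical tool fracture at t = 1.2 s, high-frequency
# ringing appears on top of the slow trend.
torque[1200:] += 0.4 * np.sin(2 * np.pi * 180.0 * t[1200:])

# Smooth out the slow trend; the residual is the vibration proxy,
# an analog to what an external vibration sensor would report.
smooth = savgol_filter(torque, window_length=101, polyorder=3)
vibration = torque - smooth

rms_before = float(np.sqrt(np.mean(vibration[:1000] ** 2)))
rms_after = float(np.sqrt(np.mean(vibration[1300:] ** 2)))
```

The residual RMS spikes after the fracture, which is exactly the "oscillation spiked" signature that a threshold can act on.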

Lou Zhang:

Then finally, these bearing failures. If you look at these load traces, you can see that over time, the load really degenerates exponentially. This is very different from the end mill failure that we saw before, because really what this is, is the machine not being able to keep spec anymore. With the end mill failures, there was this giant jump. Here, there's this gradual worsening, but it gets exponentially worse with every single cut. I think this is actually a pretty good demonstration of the value of the data here. We can see at such high fidelity, at such high frequency, exactly what's going on with the motor of the machine, and that's how we do everything here.

Lou Zhang:

So, takeaways, and I wanted to leave plenty of time for questions, because I know this is a fairly new, greenfield solution. So, one, high fidelity. This is kilohertz scale, physics rich motor data streamed directly from the machine control. It's extremely dense: 170 million points per day at, I think, a precision of one one-thousandth in terms of spindle speed and one ten-thousandth in terms of continuous load. It's plug and play. So, it's sensor free, really. It's using the embedded sensors, the information that is already necessarily there for the machine to even be able to run itself. The machine needs this torque data; it needs spindle speed to be able to even do what it's supposed to be doing.

Lou Zhang:

It's autonomously actionable. So, not only can you issue things like feed holds, but you can also tell your machine, because MachineMetrics is a two way protocol, to change an offset or reset the program or move one way or the other. So, really, this essentially allows you to take any data point that we collect, and it could be load, it could be spindle speed, it could even be what other machines are doing or not doing, or it could be what jobs are running or not running, because we get the program code as well, and it allows you to create actions on them.

Lou Zhang:

So, you have basically the entire arsenal of what's going on on your machines from a data perspective on one side, and then you have the entire arsenal of what you can do to your machine on the other side. So, really, the possibilities are endless. As I said, yup, they're infinitely customizable. You could implement simple threshold algorithms like we've shown today, you could implement machine learning models, you could make small geometric adjustments to your machining, and you could stop an entire production line if you want.

Lou Zhang:

So, really, this is a new offering that we're bringing to market. We've already had many customers adopt and productionize this. So, we're excited to share it with the industry today. We think that it can make a really big difference in terms of [inaudible 00:30:16] maintenance, especially for these small and medium sized machine shops who haven't really had access to this sort of Industry 4.0 technology. We don't need complex integrations.

Lou Zhang:

We basically put the onus on ourselves to absorb all that complexity in our software and in our data science so that you, our customers, don't have to do anything on your side. So, that's it, and I do have an appendix slide for those who are more interested in the technical aspects. But first, I want to stop my screen share and go ahead and take some questions for the last 15 minutes.

Stephen LaMarca:

Lou, thank you so much. That was really awesome. A man after my own heart, implementing physics as opposed to a hardware headache, if you would. So, basically, what you're saying is this: instead of throwing all of these new sensors at your existing machine tools and other manufacturing technology in your plant or factory, and trying to implement and install a bunch of new hardware and a bird's nest of cables, as I would imagine, you can just take the data that is already being pulled from your machine as it is and throw a little bit more calculation, physics equations, if you would, at it to get the data that you might be looking for, instead of making that hardware investment.

Lou Zhang:

Yes. Exactly. What we're doing is something called indirect measurement, whereas measurement with sensors is something called direct measurement. So, with direct measurement, you're buying these sensors, vibration sensors, current transducers, you're slapping them on directly where you think the problem is. Oh, there's vibration going on on the spindle housing or this tool, and you do something like a simple Fourier transform to get the data from that.

Lou Zhang:

With indirect measurement, we're not directly measuring via sensors that particular component that's failing. Right? We're actually measuring the second order effects here. So, it is a bit more complex in terms of the mathematics and the physics. But ultimately, we believe that this is the scalable approach. It puts, again, the onus on us to figure out how to do it. But what our customers see is that they don't have to buy additional sensors. They don't have to have electricians come onsite for days or weeks on end messing with their machines, and they would much prefer this approach where we do everything on the backend vs. them trying to figure out all the stuff with sensors.

Stephen LaMarca:

Right. Right. Thomas, what did you think?

Thomas Feldhausen:

I loved it, Steve. We've got a lot of great questions here.

Stephen LaMarca:

We do.

Thomas Feldhausen:

Shall we get started with them?

Stephen LaMarca:

Sure.

Thomas Feldhausen:

All right. So, the first one is from Dr. Pavel. It says what brand was the machine control and what protocol did you use to collect the data?

Lou Zhang:

Yeah. That's a great question. So, the particular machine control that we showed in our use case was a Fanuc control. So, right now, we can do this on all Fanuc controls. We started with Fanuc because they, I think, own about 70% of the US market, and our plan is basically to expand to Siemens, Heidenhain, Citizen. I see another question here about getting permission from the OEM to do this. We don't need to get permission from the OEM. This was all done ... We do have a partnership with Fanuc, but we didn't need to ask them if they would allow us to do this. This is an API that they have. It's called the high frequency API, and it is open and somewhat documented online. So, really, all this entails is us figuring out the idiosyncrasies of all of these OEM APIs and basically writing C++ adapters on our edge device to be able to implement this.

Stephen LaMarca:

Cool. The next question we've got coming in is, is the data evaluated locally at the machine shop or is it over broadband?

Lou Zhang:

Yeah. So, it's evaluated on the edge device itself, which is a local device. It's about this big. It's just a very simple Windows or Linux IoT computer that we place on your factory floor. Usually, one edge device can service about 15 or 20 machines at high frequency. Essentially, all of the algorithms, all of the initial evaluation, is run on that edge device itself, and then the summary metrics are beamed up to our cloud, so to speak.

Stephen LaMarca:

Got you. I assume the reason for having the data polled directly to an onboard machine, an onboard device, is so you can get the latency, the frequency of data polling, the stream, if you would, that you guys are looking for, because you touched on earlier that one data frame a second isn't enough, or you need, I think you said, 1,000 snapshots in a second, and that just simply wouldn't be possible over broadband, I take it, especially if you've got a facility with 100 or more machines that you're all trying to monitor.

Lou Zhang:

Yep. So, latency is one of the issues. There's latency both on the polling side and on the data pushing side. So, you can imagine, since we're dealing with cyber physical systems here, timing is extremely important. If you miss a feed hold by one second, or even half a second, it could potentially be catastrophic, because you could then be feed holding the machine in a compromised position. So, we're doing this off of the edge device, basically doing edge computing on these things. We have hundreds, thousands of edge devices deployed across the US. They constitute our fleet of, essentially, executor nodes in this case.

Lou Zhang:

They execute these actions for us because they are onsite and they have a direct connection to the machine itself. So, latency's really important. Then the other thing is really security. We don't need to stream every single data point up to our cloud, especially if we've figured out what algorithm we can use for your particular application.

Lou Zhang:

But a lot of these customers, in aerospace and medical, are subject to FDA regulations. They don't want everything leaving their factory floor, and we can be selective about what leaves your factory floor. We don't need every single thing. So, it's kept on that edge device, it's local to you, and you can disconnect it when you want. Our customers feel a lot of, I guess, peace of mind around that. So, that's also an important part of this.

Stephen LaMarca:

Right, and before we go on to the next question, I just want to touch back. You mentioned that there can also be latency issues with the data being pushed from the machine to the device. Have you seen any cases where the computational power of the machine tool in question wasn't powerful enough, didn't have enough capacity or bandwidth to send the data to the device fast enough where you've had to say, "You guys need to upgrade this machine's computational capability to keep up with what we're doing"?

Lou Zhang:

Yep. So, initially, we thought that we could do this over 4G at some of our factories that were a bit more isolated and didn't have good Wi-Fi. You can imagine that if you could do this over cellular, the possibilities are really endless, because you could have a completely remote device somewhere in the middle of a field. And it doesn't have to be just machine tools. You could do it with even some sort of, I don't know, Caterpillar digger or something like that. Unfortunately, it doesn't work over 4G. The 4G protocol is not steady enough for this.

Lou Zhang:

Again, because we are dealing with systems that actually influence the physical world, we really have to get latency under about 20 milliseconds; that's what we feel comfortable with. For that to happen, Wi-Fi works, LAN works, and then we find that there's really no lag between the edge and the machine itself. But if you want to do any sort of cloud-side processing, then the connection type is an important consideration.

Stephen LaMarca:

All right. Our next question, would this work on almost any sized tool, and in particular, a really small tool where the load could be minuscule?

Lou Zhang:

That's a great question. So, we actually have a lot of customers try to stress test us with this sort of application. They're like, "I've been here 35 years. I don't believe this. I don't believe you can just do it over software." So, one of the tests that people gave us came from one of our customers in the medical device industry. They make these needles out of plastic that go inside their medical devices. So, they cut these needles with six-thousandths-of-an-inch diameter tools. They are tiny. The tiniest tools I've ever seen.

Lou Zhang:

Oftentimes, these tools break and they get embedded or stuck inside the parts that they're making. They're like, "Can you see when this is happening?" It turns out we can. The load is at, I think, the half-a-percent level. But because the fidelity we get is way higher than that (we get many, many decimal places past zero), we can still have a pretty high signal-to-noise ratio, even on those small-diameter tools, and indeed, we can see when it's cutting normally and when it's not. So, that's a great question.
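The point about fidelity can be made concrete with a toy detector. A 0.5% load rise is tiny in absolute terms, but if the measurement noise floor is much smaller, a simple threshold on the deviation from baseline separates cutting from air-cutting. All values below are illustrative assumptions, not real machine data.

```python
def is_cutting(load_pct, baseline=0.0, threshold=0.2):
    """Return True when the load deviates from baseline by more than
    `threshold` percent. With high-fidelity load data, even a ~0.5%
    rise from a six-thousandths-inch tool sits well above the noise
    floor, so this simple check is enough to tell cutting from air."""
    return abs(load_pct - baseline) > threshold

air = [0.01, -0.02, 0.015, 0.0]   # measurement noise around zero
cut = [0.48, 0.51, 0.49, 0.50]    # tiny tool engaging material
air_flags = [is_cutting(x) for x in air]
cut_flags = [is_cutting(x) for x in cut]
```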

Stephen LaMarca:

Right, and our last one for you, to wrap things up, in the machine shop example, is it now possible to predict the behavior of that same tool on other parts, or is it specific to just that one particular part?

Lou Zhang:

Yeah. That's a great question. So, if another part is similar enough, then you can use the same algorithm. What we have found is that oftentimes, to make it part agnostic, all you have to do is take the first derivative, which gives you the rate of change. So, you're no longer looking at a raw threshold. You're looking at a rate of change threshold, and you can imagine on every single part that's being cut, when you see a giant jump in terms of the rate of change of load, it's not great.

Lou Zhang:

So, there's different transformations you can do on the data that will essentially make it part agnostic. I think I actually have an example to show with that, but wondering if there's any other questions either from you guys or from the audience before I show that.
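The first-derivative transformation Lou describes can be sketched in a few lines: instead of thresholding the raw load (which differs part to part), threshold the sample-to-sample rate of change. The function name and limit value are illustrative, not MachineMetrics' actual algorithm.

```python
def rate_of_change_alarms(load, limit):
    """Flag sample indices where the first difference (a discrete
    first derivative) of the load signal exceeds `limit`.
    Thresholding the rate of change instead of the raw load makes
    the check largely part-agnostic."""
    return [i for i in range(1, len(load))
            if abs(load[i] - load[i - 1]) > limit]

# A smooth cut with one sudden jump at index 4 (e.g. a tool fracture).
load = [1.0, 1.1, 1.2, 1.3, 5.0, 5.1]
alarms = rate_of_change_alarms(load, limit=1.0)
# alarms == [4]
```

The same `limit` can then apply across different parts, because a healthy cut ramps load gradually regardless of the part geometry, while a fracture produces a step change.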

Stephen LaMarca:

That's our last one that we've got so far, and if you can do it in about 60, or actually 120 seconds, go for it.

Lou Zhang:

Yeah. Okay. So, first, I want to geek out a little bit. All right. So-

Stephen LaMarca:

Yes.

Lou Zhang:

... this is really what we're collecting. All right? So, this is the feedback loop on a Fanuc control. This is why we can collect the data that we can: when a machine tool is contacting metal, it has to alter the amount of power that it draws. Right? So, how does it actually do that? Well, it turns out that it contacts metal and then it's like, "Oh, crap. It looks like I'm contacting something. Looks like I need to alter the amount of power to change my speed and my position."

Lou Zhang:

The way it does that is through this lever called commanded torque. Commanded torque is basically the amount of power that it's drawing. So, the commanded torque changes. This is exactly what we poll. This is what we call load. So, when the commanded torque changes, it reflects all sorts of phenomena happening on the machine. Is it changing because it contacted metal? Is it changing because it contacted metal and there's a giant fracture in the tool? So, this exists inside of every single modern CNC.
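The feedback loop Lou describes can be caricatured as a proportional controller: when cutting load rises, commanded torque rises to hold spindle speed. This is a toy model for intuition only, not Fanuc's actual control law; all gains and values are assumptions.

```python
def commanded_torque_loop(target_speed, load_torque, kp=0.5, steps=200):
    """Toy proportional control loop. When the tool contacts metal
    (load_torque rises), the controller raises commanded torque to
    hold the spindle at target_speed. The commanded torque is the
    signal that gets polled as 'load'."""
    speed, torque = 0.0, 0.0
    for _ in range(steps):
        error = target_speed - speed
        torque = kp * error + load_torque    # commanded torque absorbs the load
        speed += 0.1 * (torque - load_torque)  # net torque changes speed
    return speed, torque

# At steady state, speed reaches the target and commanded torque
# settles near the external load: that settling value is the signature
# an observer can read from the control.
speed, torque = commanded_torque_loop(target_speed=100.0, load_torque=2.0)
```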

Lou Zhang:

The commanded torque changes the commanded spindle speed, which changes the commanded position as well. So, it's really a giant feedback loop that allows you to even be able to control your machine in the first place. So, that's the first thing. Second thing is we did look at bearing failures on multiple machines, and we actually wrote a blog post about this. So, this is two completely separate machines, and this is from February and this is from November, two different machines. You can see that the bearing failure essentially looks exactly the same on both of these machines.

Lou Zhang:

It's a little different, but really, this is the load signature of what a bearing failure looks like. The reason why these are so similar is because, again, we're looking at physical phenomena that are governed by the same laws of physics, whether it's November or February, and we see this across many of our factories. We're not just connected to one or 10 machines. We're connected to thousands. So, you can imagine leveraging failures that have happened at other customers to help every single customer; that's incredibly powerful. We're a cloud-based software and we've been collecting this data for a while.

Lou Zhang:

So, I think there's a lot of potential here for the industry to move towards a model where you actually have scalable predictive analytics and predictive maintenance, and that's why we really took this approach. We could have sensored up all our machines, but the cost would have been prohibitive for our customers and really for us.

Stephen LaMarca:

Sure. Lou Zhang with MachineMetrics, everybody. Lou, thank you so much for being here today. That was awesome.

Author
Lou Zhang
Lead Data Scientist