So, here's a question that I got asked recently: what's out there, "on the market", currently, for using AI/Deep Learning in industrial automation? Not research projects or PhD theses, but actual stuff we can buy now?
Turns out, either I'm really bad at Google, or there's not a lot out there. Oh, there's plenty of research papers (most buried behind paywalls), but actual usable products that an integrator could offer to customers as part of a typical turnkey production solution? Not so much.
Fanuc has an "AI" servo-tuning system. Cognex's VisionPro ViDi uses Deep Learning for inspection tasks. Kinema has a self-optimizing depalletizing system (but that appears to be mostly vision, again). And a lot of conversations about the Big Data side of IIoT, like Siemens MindSphere, use the AI/DL terms a lot. But that's about it, unless someone has other examples that I failed to find?
After some thought, I think I know why there seems to be so little. Basically, AI/DL is (currently) best suited for applications with lots of variables, lots of variance, and hard-to-predict behavior, whereas classical industrial automation has evolved over decades to optimize highly repeatable, low-variance applications. Also, with big robots flinging heavy objects around at high speeds, most automation needs to be highly deterministic and repeatable in its behaviors -- we need to be able to rely on it failing safely when something unexpected happens.
Machine Vision quality inspection is one of the low-hanging fruits for DL. As the features being inspected become more numerous and complex, you soon reach a point where it's easier to "teach" a neural network with hundreds (or thousands) of "good" and "bad" images than to have someone manually program each feature with Blob- and Shape-find tools, with failovers using multiple exposure times and alternate digital filtering. Ditto for the IIoT Big Data stuff -- an AI is good at finding correlations that programmers might never think of when creating algorithms. OTOH, for a simple Vision Guidance application, would an AI be necessary, or even helpful? Finding a target and lining up to it is usually pretty straightforward for "classical" Machine Vision tools.
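To make the "teach with examples" contrast concrete, here's a toy sketch: a single-neuron classifier (logistic regression, about the simplest possible "neural network") learns to separate "good" from "bad" parts purely from labeled examples -- no hand-coded Blob or Shape rules. The three-number "images" and their feature values are invented stand-ins for real inspection data, not any vendor's API.

```python
import math
import random

random.seed(0)

# Toy stand-in for inspection images: each "image" is a feature vector
# (say, edge density, blob count, brightness). Good parts cluster in one
# region of feature space, bad parts in another.
def make_part(good):
    base = [0.8, 0.2, 0.5] if good else [0.3, 0.7, 0.5]
    features = [b + random.uniform(-0.1, 0.1) for b in base]
    return features, 1.0 if good else 0.0

train = [make_part(True) for _ in range(50)] + [make_part(False) for _ in range(50)]

# One-neuron "network" trained by gradient descent: the labels, not
# hand-programmed feature rules, are what shape the decision boundary.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5
for epoch in range(200):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "good"
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def classify(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "good" if z > 0 else "bad"

print(classify([0.8, 0.2, 0.5]))  # a part near the "good" cluster
```

Adding a new defect type here means adding labeled examples and retraining, not writing a new feature-finding routine -- which is exactly the trade-off that starts to favor DL as inspections get complicated.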
The downside of AI, from what I've seen (mostly in online setups like Google's Deep Dream), is that it tends to be "brittle" -- that is, it works great 90%, or even 99%, of the time, but when it does fail, it fails hard. It also isn't predictable the way a hard-coded set of algorithms is, because the "thinking" of a taught machine isn't visible from the outside -- once you have a neural network that works, you don't really know why it works; you can't run a variable trace through it the way you can with a normal algorithm (at least, not yet). Imagine trying to get safety qualification for a Collaborative Robot that uses an AI/DL neural network and cameras to determine when a human is too close, or liable to get hurt.
On the other side of the coin, what do we need AI for in industrial applications? A well-designed spot-welding line can work with what we already have -- Big Data IIoT may be able to assist, with things like predictive maintenance, or identifying subtle process interactions, but that's post-processing, looking at many days/months/years of data and drawing connections. Right now, trying to teach a Neural Network to spot-weld an entire car would require a sophisticated metrology rig, very sensitive collision detection, and hundreds (thousands?) of hours of trial-and-error. Why bother, when a decent robot programmer can do it in a day or two?
Well, I thought of a few possibilities: imagine a CoBot that helps a human operator pick up a large, moderately heavy object (say, a car dashboard panel) and install it, with sophisticated on-board force-torque sensing. After the human operator teaches the robot how to do this task across a dozen or two cars, maybe the robot can do the job 50-75% of the time without help. When it gets stuck, the operator selects "learn" mode and pushes/pulls the dash panel into place (the robot mostly just counters gravity, reducing the weight, and follows the operator's physical cues). The critical difference is that the robot learns again, updating its neural network, every time it has to ask for help. Done well, the robot might learn "when encountering this set of force/torque errors at this position and orientation, the human fixed it this way, so I'll try that." Could we end up with a robot that would get progressively better over time, with some human help?
(The fact that this is very similar to how a novice human worker would learn from a senior co-worker is not coincidental -- neural-network AI/DL is, basically, an attempt at bio-mimicry of the human learning process.)
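The ask-for-help loop above can be sketched in a few lines. This is a hypothetical toy, not a real robot controller: the "state" is a pair of force/torque error readings, ask_operator is a stand-in for the operator physically guiding the part in "learn" mode, and recall is a simple nearest-neighbor lookup over remembered (situation, fix) pairs -- the "when I saw these errors before, the human fixed it this way" idea.

```python
import math

memory = []  # (state, correction) pairs taught by the operator

def nearest_fix(state, max_dist=0.5):
    """Recall the operator's correction for the most similar past state."""
    best, best_d = None, max_dist
    for s, fix in memory:
        d = math.dist(state, s)
        if d < best_d:
            best, best_d = fix, d
    return best

def ask_operator(state):
    # Stand-in for "learn" mode: the operator's pushes/pulls become a
    # corrective nudge (here, simply cancelling the observed error).
    return (-state[0], -state[1])

def attempt_install(state):
    fix = nearest_fix(state)
    if fix is None:
        # Stuck, and nothing similar in memory: ask the human, then remember.
        fix = ask_operator(state)
        memory.append((state, fix))
    return fix

# First encounter: the robot has to ask for help, and records the fix.
fix1 = attempt_install((0.4, -0.2))
# A similar jam later: the robot recalls the taught correction on its own.
fix2 = attempt_install((0.38, -0.22))
```

A real system would replace the lookup table with a neural network retrained on each new demonstration, but the learn-on-demand structure -- attempt, fail, get taught, improve -- is the same.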
A lot of our current automated industrial processes just don't seem like they'd benefit from AI. But... is that just because those processes have mostly been optimized for what our robots have been best at for the past few decades? In automotive, every part has locator holes, and every cell has locator pins that use those holes to ensure 100% repeatability of part placement, part-to-part, for hundreds of thousands of cycles. What if we threw all that away? What if the work cell just contained a good vision system, three robots (one welder, two material handlers), and some loose parts lying on the floor? Could we generatively teach a neural network AI to assemble those parts, and do it optimally? Would it be worth the time and effort (and the number of "broken eggs" during the teaching process)?

Well, what if the AI was at the Simulation/OLP level, instead of directly on the factory floor? Right now, a good Sim takes hundreds of man-hours of CAD design and simulation running, with innumerable try-fail-repeat cycles -- what if we could speed that up with AI? Combine "generative design" of tooling with "generative processing" of robot programs? And line layout, process flows, even selecting between (for example) spot welding, MIG welding, or riveting for particular processes?
I guess what I'm trying to get at here is, for the automation processes we already do, where are the "entry points" for AI/DL systems? Where would they be "value added" in a short time frame, and not just a "blue sky" research paper proposal? And what products are out there that are already leveraging this capability?