March 20, 2019, 01:27:05 AM
Robotforum | Industrial Robots Community

 AI or Deep Learning in Industrial Automation

Author Topic:  AI or Deep Learning in Industrial Automation  (Read 247 times)

0 Members and 1 Guest are viewing this topic.

January 09, 2019, 10:07:43 PM


Global Moderator
So, here's a question that I got asked recently:  what's out there, "on the market", currently, for using AI/Deep Learning in industrial automation?  Not research projects or PhD theses, but actual stuff we can buy now?

Turns out, either I'm really bad at Google, or there's not a lot out there.  Oh, there's plenty of research papers (most buried behind paywalls), but actual usable products, that an integrator could offer to customers as part of a typical turnkey production solution?  Not so much.

Fanuc has an "AI" servo-tuning system.  Cognex's VisionPro ViDi uses Deep Learning for inspection tasks.  Kinema has a self-optimizing depalletizing system (but that appears to be mostly vision, again).  And a lot of conversations about the Big Data side of IIoT, like Siemens Mindsphere, use the AI/DL terms a lot.  But that's about it, unless someone has other examples that I failed to find?

After some thought, I think I know why there seems to be so little.  Basically, AI/DL is (currently) best suited to applications with lots of variables, lots of variance, and outcomes that are hard to predict.  Classical industrial automation, by contrast, has evolved over decades to optimize highly repeatable, low-variance applications.  Also, with big robots flinging heavy objects around at high speeds, most automation needs to be highly deterministic and repeatable in its behaviors -- we need to be able to rely on it failing safely when something unexpected happens.

Machine Vision quality inspection is one of the low-hanging fruits for DL.  As the features being inspected become more numerous and complex, you soon reach a point where it's easier to "teach" a neural network with hundreds (or thousands) of "good" and "bad" images than to have someone manually program each feature with Blob- and Shape-find tools, with failovers using multiple exposure times and alternate digital filtering.  Ditto for the IIoT Big Data stuff -- an AI is good at finding correlations that programmers might never think of when creating algorithms.  OTOH, for a simple Vision Guidance application, would an AI be necessary, or even helpful?  Finding a target and lining up to it is usually pretty straightforward for "classical" Machine Vision tools.
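To make the "teach by example" idea concrete, here's a toy sketch -- nothing to do with any actual product, just plain NumPy logistic regression on synthetic 8x8 "images" (real tools like ViDi use deep CNNs, but the workflow of labeling good/bad samples instead of hand-coding blob finders is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, defective):
    """Synthetic 8x8 'parts'; defective ones get a bright 2x2 blob (a fake scratch)."""
    imgs = rng.normal(0.5, 0.05, size=(n, 8, 8))
    if defective:
        for img in imgs:
            r, c = rng.integers(0, 7, size=2)
            img[r:r + 2, c:c + 2] += 0.8  # the "defect"
    return imgs.reshape(n, -1)

# "Teach" with labeled examples instead of hand-coding a blob finder
X = np.vstack([make_images(200, False), make_images(200, True)])
y = np.array([0] * 200 + [1] * 200)
X = X - X.mean(axis=0)  # center features so the bias stays small

# Plain logistic regression trained by gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point isn't the model (a linear classifier is nowhere near DL); it's that nobody wrote a "find scratches" rule -- the labels did the programming.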

The downside of AI, from what I've seen (mostly in online setups like Google's Deep Dream) is that it tends to be "brittle" -- that is, it works great 90%, or even 99% of the time, but when it does fail, it fails hard.  It also isn't predictable the way a hard-coded set of algorithms is, because the thinking of a "taught" machine isn't visible from the outside -- once you have a neural network that works, you don't really know why it works; you can't run a variable trace through it, the way you can with a normal algorithm (at least, not yet).  Imagine trying to get safety qualification for a Collaborative Robot that uses an AI/DL neural network and cameras to determine when a human is too close, or liable to get hurt.

On the other side of the coin, what do we need AI for in industrial applications?  A well-designed spot-welding line can work with what we already have -- Big Data IIoT may be able to assist, with things like predictive maintenance, or identifying subtle process interactions, but that's post-processing, looking at many days/months/years of data and drawing connections.  Right now, trying to teach a Neural Network to spot-weld an entire car would require a sophisticated metrology rig, very sensitive collision detection, and hundreds (thousands?) of hours of trial-and-error.  Why bother, when a decent robot programmer can do it in a day or two?

Well, I thought of a few possibilities:  imagine a CoBot with sophisticated on-board force-torque sensing that helps a human operator pick up a large, moderately heavy object (say, a car dashboard panel) and install it.  After the human operator teaches the robot how to do this task across a dozen or two cars, maybe the robot can do the job 50-75% of the time without help.  When it gets stuck, the operator selects "learn" mode and pushes/pulls the dash panel into place (the robot mostly just counters gravity, reducing the weight, and follows the operator's physical cues).  The critical difference is that the robot learns again, updating its neural network, every time it has to ask for help.  Done well, the robot might learn "when encountering this set of force/torque errors at this position and orientation, the human fixed it this way, so I'll try that."  Could we end up with a robot that would get progressively better over time, with some human help?
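The "remember how the human fixed it" loop could be sketched like this (purely my illustration -- a real system would update an actual learned model, not a lookup table, and all the names here are made up):

```python
import numpy as np

class CorrectionMemory:
    """Toy 'learn from the operator' store: each time a human rescues the
    robot, remember the (pose, force/torque error) situation and the
    correction they applied; next time a similar situation comes up,
    try the correction from the nearest remembered one."""

    def __init__(self):
        self.situations = []   # concatenated pose + F/T error vectors
        self.corrections = []  # what the human did about it

    def record_help(self, pose, ft_error, correction):
        self.situations.append(np.concatenate([pose, ft_error]))
        self.corrections.append(np.asarray(correction))

    def suggest(self, pose, ft_error):
        if not self.situations:
            return None  # never been helped yet: ask the operator
        query = np.concatenate([pose, ft_error])
        dists = [np.linalg.norm(query - s) for s in self.situations]
        return self.corrections[int(np.argmin(dists))]

mem = CorrectionMemory()
# Operator nudged the panel +2 mm in X when it jammed with this error signature
mem.record_help(pose=[0.5, 0.1, 0.3], ft_error=[12.0, 0.0, 0.0],
                correction=[0.002, 0.0, 0.0])
# A similar jam later: reuse the remembered fix
print(mem.suggest(pose=[0.5, 0.1, 0.31], ft_error=[11.0, 0.5, 0.0]))
```

A neural network would generalize between remembered situations instead of snapping to the nearest one, but the ask-for-help-then-update cycle is the same.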
(the fact that this is very similar to how a novice human worker would learn from a senior co-worker is not coincidental -- neural network AI/DL is, basically, an attempt at bio-mimicry of the human learning process).

A lot of our current automated industrial processes just don't seem like they'd benefit from AI.  But... is that just because those processes have mostly been optimized for what our robots have been best at for the past few decades?  In automotive, every part has locator holes, every cell has locator pins that use those holes to ensure 100% repeatability of part placement, part-to-part, for hundreds of thousands of cycles.  What if we threw all that away?  What if the work cell just contained a good vision system, three robots (one welder, two material handlers), and some loose parts lying on the floor?  Could we generatively teach a neural network AI to assemble those parts, and do it optimally?  Would it be worth the time and effort (and the number of "broken eggs" during the teaching process)?  Well, what if the AI was at the Simulation/OLP level, instead of directly on the factory floor?  Right now, a good Sim takes hundreds of man-hours of CAD design and simulation running, with innumerable try-fail-repeat cycles -- what if we could speed that up with AI?  Combine "generative design" of tooling with "generative processing" of robot programs?  And line layout, process flows, even selecting between (for example) spot welding, MIG welding, or riveting for particular processes?
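A crude picture of "generative processing" at the Sim level -- again just my sketch, with a stand-in cost function: let a search loop, rather than a programmer, pick the order in which the robot visits weld spots, scoring each candidate in "simulation" (here, just travel distance):

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight weld spots on a flat panel; the "simulation" here is nothing but
# the travel distance of visiting them in a given order.
spots = rng.uniform(0.0, 1.0, size=(8, 2))

def travel(order):
    pts = spots[order]
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

# Dumb random search stands in for a real optimizer (GA, RL, etc.):
# generate candidate "programs", score them in sim, keep the best.
best = np.arange(len(spots))
best_cost = travel(best)
for _ in range(2000):
    cand = rng.permutation(len(spots))
    cost = travel(cand)
    if cost < best_cost:
        best, best_cost = cand, cost

print(f"best weld-spot order found, travel = {best_cost:.3f}")
```

Swap the toy cost function for a real OLP simulator (cycle time, reach, collisions) and the search loop for something smarter, and you have the try-fail-repeat cycle running without a human in it.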

I guess what I'm trying to get at here is, for the automation processes we already do, where are the "entry points" for AI/DL systems?  Where would they be "value added" in a short time frame, and not just a "blue sky" research paper proposal?  And what products are out there that are already leveraging this capability?


January 21, 2019, 02:48:39 PM
Reply #1


You're right about those (predictive maintenance, IIoT applications, etc.), but mobile robots probably have the simplest implementation of a fully generated path program of any robot type out there.

In regards to your ideas about using AI/DL to generate process without programming, ROS-Industrial recently did a project with MTConnect focused on enabling machine-to-machine communication in a machine-tending application. ROS presented at IMTS 2018, showing a UR5 robot, a CNC mill and a CMM, all pinging each other to start and change tasks (e.g., the CNC program is done, so the robot opens the door and takes the part, puts it in the CMM, and the CMM program starts). The robot's motions were not hand-programmed; the entire cell was simulated. I could see how AI/DL could be used to optimize processes like this between machines.
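The mill-to-robot-to-CMM handshake boils down to a small state machine. A toy version (all names made up for illustration -- the actual demo used ROS-Industrial nodes and MTConnect messaging, not this code):

```python
from enum import Enum, auto

class Cell(Enum):
    MILLING = auto()
    ROBOT_TRANSFER = auto()
    MEASURING = auto()
    DONE = auto()

def step(state, event):
    """Advance the cell when one of the machines reports it has finished."""
    transitions = {
        (Cell.MILLING, "cnc_program_done"): Cell.ROBOT_TRANSFER,
        (Cell.ROBOT_TRANSFER, "part_loaded_in_cmm"): Cell.MEASURING,
        (Cell.MEASURING, "cmm_program_done"): Cell.DONE,
    }
    # Out-of-order or unknown events leave the state unchanged
    return transitions.get((state, event), state)

state = Cell.MILLING
for ev in ["cnc_program_done", "part_loaded_in_cmm", "cmm_program_done"]:
    state = step(state, ev)
print(state)  # Cell.DONE
```

The AI/DL opportunity would be in the layer above this: learning which sequencing and timing of these events minimizes idle time across the cell.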

ROS + MTConnect study:
« Last Edit: January 21, 2019, 02:51:04 PM by IsaacMaw »

January 21, 2019, 04:23:46 PM
Reply #2


Global Moderator
On arm collision detection and avoidance, I found this pretty interesting:

Not really AI or DL, more a means of using FPGAs to massively parallelize parts of the collision-detection algorithm.  But I can see where an AI/DL approach might work for generating those FPGA circuits.

Going further, I keep wondering if there might be a way to emulate that parallelization in simulation environments.  Keeping Collision Detection turned on is one of the biggest time sinks in most Sim programs I'm aware of -- most Sim jockeys keep it turned off in order to get their simulations to run in anything resembling real time, then turn it back on temporarily to check for collisions.
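As a rough software-side illustration of what parallelizing the checks buys (purely my sketch, not the paper's method): batch every pairwise sphere-sphere test into one vectorized pass, the way an FPGA would evaluate all pairs at once, instead of looping over them:

```python
import numpy as np

def any_collision(centers_a, radii_a, centers_b, radii_b):
    """Check every sphere in set A against every sphere in set B in a
    single vectorized pass -- the same all-pairs test an FPGA would do
    in parallel, here mapped onto NumPy's array ops instead of a loop."""
    # (n,1,3) - (1,m,3) broadcasts to all n*m pairwise difference vectors
    diffs = centers_a[:, None, :] - centers_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    limits = radii_a[:, None] + radii_b[None, :]
    return bool((dists < limits).any())

# Two robot-link spheres vs. two fixture spheres
robot = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 1.0]])
fixture = np.array([[0.0, 0.0, 1.05], [2.0, 0.0, 0.0]])
print(any_collision(robot, np.array([0.1, 0.1]),
                    fixture, np.array([0.05, 0.05])))  # True: two spheres overlap
```

Real Sim packages use far better broadphase structures than brute-force all-pairs, but the principle -- trade per-pair cleverness for massive parallelism -- is the same one the FPGA approach exploits.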
