Vision Systems With Fanuc Robots

  • Hello, I was just looking for opinions on Keyence 3D vision-guided robotics vs. FANUC's iRVision 3D Area Sensor. We recently installed a Keyence system, but all of my experience is with FANUC's. To me it seems the FANUC system is much easier to maintain and correct when there is an issue. For example, all of the robot motions are dictated by the Keyence system, which makes touch-ups extremely difficult given that I have no prior experience with their software, whereas with the FANUC system, where vision offsets are applied to a taught position, touch-ups are relatively simple. What are your opinions if you have experience with both, and why would integrators favor the Keyence system over the FANUC?
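
    For reference, this is what "relatively simple" looks like on the FANUC side -- a minimal iRVision-style sketch, where the vision process name 'BINPICK' and the register/position numbers are placeholders rather than our actual program:

      VISION RUN_FIND 'BINPICK' ;
      VISION GET_OFFSET 'BINPICK' VR[1] JMP LBL[99] ;
      !P[1]/P[2] are ordinary taught points ;
      L P[1:Approach] 500mm/sec CNT50 VOFFSET,VR[1] ;
      L P[2:Pick] 250mm/sec FINE VOFFSET,VR[1] ;
      LBL[99] ;

    If a pick drifts, you touch up P[1]/P[2] on the pendant like any other position; the vision offset is applied on top and never needs to be disturbed.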

  • Generally speaking, with an "external" vision system, you can have much more fine-grained control over the vision operations. Any "internal" vision system (that is, one built into the robot, usually by the robot manufacturer) generally gives you less control, in favor of easier setup and use. It's a bit like Windows vs Linux, if that makes sense.


    Another reason someone might favor a 3rd-party vision system is that they have multiple robot brands and want to stick with one vision system type. Or the 3rd-party system might offer features that haven't been added to iRVision yet. And robots don't have GPUs -- if your vision application needs lots of processing horsepower, a 3rd-party "super gaming" PC might be the only way to get enough cores/clock cycles to throw at the problem.

  • Any "internal" vision system (that is, one built into the robot, usually by the robot manufacturer) generally gives you less control, in favor of easier setup and use. It's a bit like Windows vs Linux, if that makes sense.

    So basically we just shot ourselves in the foot and handcuffed ourselves to the integrator. I have years of experience with the FANUC vision system, and today, what would have been a simple five-minute touch-up in iRVision resulted in hours of downtime, because the motions are dictated by the Keyence system and nothing can be done from the robot side. I highly doubt our bin-picking application needs enough extra features to make it worth installing a system that is more difficult for the end user.

  • My opinion: always use the integrated vision unless it isn't capable of doing what you need. But in my experience, iRVision is plenty capable. I looked into using 3rd-party vision a couple of times, and it was more trouble than it was worth. Not that they didn't work; it was just extra effort.

  • the motions are dictated by the Keyence system

    I'm confused on this, though. When I've used Keyence vision in the past, re-teaching the Vision-controlled points was no big deal. It did require some careful setup up-front to enable easy re-teaching, but there's seldom anything in the vision system that precludes doing so. What is it about your Keyence that's causing this difficulty?


    My opinion: always use the integrated vision unless it isn't capable of doing what you need. But in my experience, iRVision is plenty capable. I looked into using 3rd-party vision a couple of times, and it was more trouble than it was worth. Not that they didn't work; it was just extra effort.

    I come at it from the opposite side -- I bounce between robot brands a lot, and tend to have a preferred vision system on a per-application basis. Using 3rd-party vision systems gives me flexibility to carry my vision applications and experience from robot to robot. Also, in the past, every time I get stuck using "integrated" vision, I feel handcuffed. I like having my vision systems as separate units communicating with my robot over I/O interfaces.


    Of course, the dirty little secret is that a lot of "integrated" vision is just a partial implementation of a 3rd-party vision system with a branded "skin" on it. KUKA VisionTech is just Cognex VisionPro done through the teach pendant (but doesn't have access to all the Cognex tools). ABB Integrated Vision (as opposed to FlexVision) is literally just Cognex In-Sight with an "easy" interface layer, but again, you lose the fine-grained ability to really control all the Cognex tools.


    At the end of the day, like everything, it comes down to what works best for a particular user/application combo.

  • I'm confused on this, though. When I've used Keyence vision in the past, re-teaching the Vision-controlled points was no big deal. It did require some careful setup up-front to enable easy re-teaching, but there's seldom anything in the vision system that precludes doing so. What is it about your Keyence that's causing this difficulty?

    This would require completely rewriting every program the robot uses, as the integrator has every single motion programmed as PR[1] with an offset established by the Keyence system, except for the home position, which is PR[2]. Every single motion the robot makes is PR[1], and even the speed of the motion is programmed as a register value dictated by the Keyence system, so it enters the bin at 100% speed and causes a collision that bends the tool and results in mispicks.

    It is a nightmare, and I have recommended that every new vision system installation be quoted with the FANUC system going forward. Keyence must just have good salesmen, because in my opinion, after working exclusively with the FANUC system, it is an expensive piece of junk that does not do enough, nor is it user-friendly enough, to justify the cost. Our maintenance guys cannot work on it, do simple touch-ups, or even correct a failed image, so we cannot maintain it through three shifts without them calling in both me and the integrator to recalibrate on an almost daily basis. In my years of working with FANUC, I have never had as much trouble as I have with this system.
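
    To make the structure concrete, the whole program effectively boils down to something like this (the PR/R numbers are from our cell; the exact lines are my approximation, not the integrator's literal code):

      !PR[1] is rewritten by the vision system before every move ;
      !R[1] (the speed) is also written by the vision system ;
      L PR[1] R[1]mm/sec FINE ;
      !...every motion in every program looks like the line above... ;
      !PR[2] = home, the only taught position ;
      J PR[2] 100% FINE ;

    There is no taught point anywhere to touch up, which is why everything has to go back through the vision side.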

  • My opinion: always use the integrated vision unless it isn't capable of doing what you need. But in my experience, iRVision is plenty capable. I looked into using 3rd-party vision a couple of times, and it was more trouble than it was worth. Not that they didn't work; it was just extra effort.

    Thank you. This is exactly what I thought, and after using iRVision for years, there aren't many things I have seen it incapable of doing. For some reason production wanted the Keyence system after speaking with their salesmen and sort of went behind my back and purchased it without consulting the lead robotics engineer who would be in charge of maintaining the system.

  • This would require completely rewriting every program the robot uses, as the integrator has every single motion programmed as PR[1] with an offset established by the Keyence system, except for the home position, which is PR[2]. Every single motion the robot makes is PR[1], and even the speed of the motion is programmed as a register value dictated by the Keyence system, so it enters the bin at 100% speed and causes a collision that bends the tool and results in mispicks.

    Well, that definitely seems like a badly designed program. This is a bin-picking application, then? It's true that bin-picking needs to give the vision system more control over the robot path.


    Is the problem Keyence, or the way the original integrator set up the Keyence/robot control? I'll admit I've never used Keyence's bin-picking solution, but their other products I've used in the past have always been serviceable. Having the vision system dictate robot speed is downright bizarre, for certain.

    For some reason production wanted the Keyence system after speaking with their salesmen and sort of went behind my back and purchased it without consulting the lead robotics engineer who would be in charge of maintaining the system.

    Oh, dear. Yes, that's always a recipe for disaster, regardless of what they're buying. Systems designed by salesmen, I have PTSD flashbacks about them....

  • It certainly seems that the program wasn't structured correctly. I'm sure the Keyence system could be set up in a manner that works better. At least you could remove the speed control by just not using that register.
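
    Something like this, with hypothetical register/position numbers:

      !Before: speed dictated by the vision system ;
      L PR[1] R[1]mm/sec FINE ;
      !After: a fixed, sane entry speed into the bin ;
      L PR[1] 100mm/sec FINE ;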


    I do understand standardizing if you work on multiple systems.


    I have used FANUC bin picking, and for me it worked really well. It was accurate and not too difficult to set up after I learned how a few things worked, such as interference checking and tool orientation.

  • Well, that definitely seems like a badly designed program. This is a bin-picking application, then? It's true that bin-picking needs to give the vision system more control over the robot path.


    Is the problem Keyence, or the way the original integrator set up the Keyence/robot control? I'll admit I've never used Keyence's bin-picking solution, but their other products I've used in the past have always been serviceable. Having the vision system dictate robot speed is downright bizarre, for certain.

    Oh, dear. Yes, that's always a recipe for disaster, regardless of what they're buying. Systems designed by salesmen, I have PTSD flashbacks about them....

    Yes, it is a bin-picking application, and yes, it is poorly designed. For instance, their robotics expert who wrote the programs used joint motions for tool changes, which require linear movement to couple the tool changer and depart the tool stand. This resulted in collisions at every tool change, and when I tried to explain that the motion type was wrong for the application, he doubted me and wouldn't let me make that simple change until I explained how linear and joint motions work (sketched at the end of this post for anyone unfamiliar). I am just speechless. :face_with_rolling_eyes:

    Honestly, I think it is a combination of Keyence and this "robotics expert" that has us fighting this system every day, as he seems to have limited robotics experience for a so-called "expert". The Keyence system seems overcomplicated for the application and doesn't perform well even in the best environment. We are only getting about 15% successful picks, even with a high image score above 80%. We cannot touch up the pick positions from the robot and have to do touch-ups from what is essentially a CAD drawing, with no real idea of how accurate the CAD calibration is to the actual tool.

    I have reached out to Keyence, but since the integrators have not released the rights to the equipment, they cannot assist. I am just hoping that once the line is released and we can get Keyence in here, things will improve. Until then, I am left fantasizing about running the robot into the cameras and ripping them off the stand. :grinning_squinting_face:
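
    For anyone unfamiliar with the joint-vs-linear point: a joint move lets the axes take whatever path is convenient for them, while a linear move keeps the TCP on a straight line, which is what a tool changer's docking stroke needs. Roughly (positions and speeds hypothetical):

      !Joint move to a safe point clear of the tool stand ;
      J P[1:Clear] 50% FINE ;
      !Linear-only from here so the TCP stays on the ;
      !changer's axis during the docking stroke ;
      L P[2:Align] 200mm/sec FINE ;
      L P[3:Couple] 50mm/sec FINE ;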

  • their robotics expert who wrote the programs used joint motions for tool changes

    :icon_eek:    :gaah: :censored: :mad:

    We cannot touch up the pick positions from the robot and have to do touch-ups from what is essentially a CAD drawing, with no real idea of how accurate the CAD calibration is to the actual tool.

    Hm... in my (limited) experience with bin-picking apps, programming the Pick positions with the CAD model instead of by hand is normal. But it absolutely requires the CAD data of the tool to be 100% accurate not only to the tool, but to how the tool is mounted to the robot. The TCP has to be precise, and the CAD has to be accurate in order for the path planner to simulate out all the collisions in advance.


    How rigid is the tool? I've been burned in the past by flimsy tools whose TCP data could change by a few mm just by changing the tool angle relative to gravity.


    From what you describe, you have ample cause to withhold final payment from the integrator until they make this thing meet your requirements. And I would insist that those requirements include a robust, well-documented, and reasonably simple re-teaching/re-calibration procedure, ideally with support programs built into the robot and vision system.
