iRVision Calibration for Pick and Place

  • Robot newbie here, so dumb it down for me..


    I am working with a Fanuc 4 axis robot and iRVision for a simple pick and place. When trying to do a re-calibration of the vision, the robot started going to seemingly random locations trying to pick parts.


    We use a fixed camera with a calibration grid placed on the table where parts feed out to.
    To calibrate I went to Menu -> Setup -> Frames -> User Frames -> Frame 5 (our calibration grid frame) -> Clear Frame -> Enter, and then proceeded with a four point calibration.
    Immediately after this I went to Vision Setup->Camera. Here I setup the grid within the reference box, it locates all the points,
    I deleted points with high error, saved it. Then removed the grid from the table and placed a part.
    Went into my vision program for locating parts, taught a reference position of my GPM tool.
    Then stepped my robot through its pick and place program, taking an image, then moving the robot to the part. I manually moved it to the exact position I wanted and taught that position to the position register, with subtracting vision and tool offsets.
    Then whenever I tried to jog through the program again, it takes an image, starts to move to the part, and is off by >50mm.


    Any help you guys have is greatly appreciated.

  • Do you have the correct user frame selected in your vision process? Are you setting the correct user and tool frame in your program, and is it the same as when you jogged and taught the position register? Did you set the correct z height in the vision process?


    Sent from my SM-G930V using Tapatalk

  • aceofcourts


    Welcome to the robot forum


    Sorry, but I had to break up your post; it was hard to read.
    Anyway


    Do you have the correct grid selected?
    Are you saying you taught the grid with 4 points? It should be only Origin, X, and Y.
    "with subtracting vision and tool offsets": when you teach the RefPos you have to answer NO, then for touch-up answer YES.

    Retired but still helping

  • HawkME
    We only use a single user frame for our entire process. I don't have a solid understanding of what exactly user frames are, but it remains constant. Whenever I look at the frames in the pendant it is user frame 1, and all the X, Y, and Z values are 0; I'm not sure what that implies. The Z height is correct, within reason. Our part is a "cup" shape: a cylinder with one of the circular faces removed. The robot picks it up either way, so sometimes it picks a part at 33mm off the table, and sometimes at 2mm off the table. We use vacuum to determine when a part is ready to be picked up.


    Fabian Munoz
    I apologize. I was trying to be as detailed as possible because I was not sure what would be my critical mistake.


    Yes, we use a grid with dot spacing of 15mm. It is the only grid we have ever used; it is actually the only one we have. I think I misspoke when I said a simple pick and place, because the rotation/orientation of the part is important for the pick. The system integrator that actually built our system said that because we have rotation we have to do a four point calibration, though whether that's factual or not I don't know. But for our calibration we have System Origin (center of grid), X, Y, and coordinate origin.

    "" with subtracting vision and tool offsets " When you teach the RefPos you have to answer NO, , then for touch up answer YES"


    This is something I was definitely messing up. But I went back this morning and tried to teach the reference position, responding no when the pendant asks to subtract vision offsets, and I didn't have any different results. Although, I'm not sure that I fully understand how the reference position is used.


    Whenever I teach the reference position, here is my process.
    First, I place the part on the table near the center of the camera's field of view, go into my vision program, snap and find, then hit Set Reference Position.
    Then, I step through the pick and place program, making sure I don't move the part, and stop at the step before my pick position, which is an offset of my pick position.
    Next, I manually jog the robot to the pick position I desire, hit touch-up, no to vision offset, yes to tool offset.
    Then I let it run through the rest of the program, placing the part and doing its thing.
    But whenever I set the part in the exact same position and run it again, it's off by 100+mm.


    I've browsed through the forums but haven't been able to find anything that seems helpful. A Fanuc training course would likely help, but that doesn't help me much now.


    I appreciate the help, guys. Let me know what you think.

    1) Next, I manually jog the robot to the pick position I desire, hit touch-up, no to vision offset, yes to tool offset.
    2) Then I let it run through the rest of the program, place the part and doing its thing.
    3) But whenever I set the part in the exact same position and run it again, off by 100+mm.


    1) and 2) are ok
    You are done with teaching at this point


    Put a part down again, run, and when you see the robot moving toward the part, stop it, move the robot to the part (yes, just like you did before), touch it up, and this time say YES.


  • Fabian,
    I attempted to go through the procedure you outlined and it resulted in a similar outcome. I taught the point, saying no to the vision offset. Ran through the program again and taught the point saying yes to the vision offset. The next time I ran through the program it went away from the part again.


    This time I watched the vision offset register. The first time I taught the pick position the offset was essentially zero, which made sense because the part was exactly where the reference image was taught. The second time I ran through the program I set the part as close as I could eyeball to the original position. The parts are picked from a table so there are no fixed positions. When I went to pick the part it was off by ~10mm, so I touched up the position, saying yes to vision offsets this time. The third time I ran through the program I didn't set the part quite as precisely; I was probably off by 5-10mm. After the vision ran, the vision offset was something like 75mm in each direction.


    This makes me think there is something going on with the camera, but I'm not sure what.

  • I don't think you completely answered my questions on the frame and z height set in the vision process. It would help if you took a screenshot of the vision process setup screen.


    The Z height needs to be set to the height of the part above the user frame selected in the vision process. It is critical that this is accurate, or your vision offset will never work correctly.
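    To put a rough number on how critical: with a fixed camera looking down at the table, a 2D vision process effectively intersects the camera's sight ray with the plane at the configured Z height, so a Z error turns into an X/Y error by simple parallax. A minimal sketch (all numbers here are invented for illustration, not taken from this setup):

```python
# Illustrative numbers only -- not from this thread's actual cell.
# A fixed 2D camera sees the part along a sight ray; the vision process
# intersects that ray with a plane at the configured Z height, so an
# error in Z shifts the computed X/Y by similar-triangles parallax.

def xy_error_from_z_error(camera_height_mm, radial_offset_mm, z_error_mm):
    """Approximate X/Y error caused by a Z-height error of z_error_mm.

    camera_height_mm: camera lens height above the part plane (assumed)
    radial_offset_mm: part's horizontal distance from the camera's optical axis
    """
    return radial_offset_mm * z_error_mm / camera_height_mm

# Example: camera 1000mm up, part 200mm off the optical axis, Z off by 17.5mm
err = xy_error_from_z_error(1000.0, 200.0, 17.5)
print(err)  # 3.5 -> 3.5mm of X/Y offset error from the Z mistake alone
```

    The farther the part sits from the camera's optical axis, the larger the X/Y error a given Z mistake produces, which is why a wrong Z can look fine near one spot on the table and drift badly elsewhere.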



  • HawkME


    We use the world frame for all of our applications. The z height is set correctly, assuming the world frame is the table, which may not be true. How would I go about setting up a user frame at the table?


    I haven't been able to connect to the robot controller via laptop, but here are some pictures of our vision process from the teach pendant. For reference, the part in the image is 80mm in diameter. One thing I'm noticing: moving the part ~100mm results in a >1000mm change in the location seen by the vision.

    That is most likely the problem. The world frame has nothing to do with the table; it is typically the intersection of J1 and J2 on the robot. If user frame 1 was taught as your table, then you need to use that in your vision process. Or, since it looks like you used UF 5 for the calibration grid, you can use that and then enter the Z height difference from the surface of the grid to the surface of your part.


    An easy way to double check your work is to put the robot in the UF that you have selected in your vision process, then touch a properly taught TCP pointer to the surface of your part. The Z value of your current position should match the Z height that you type in the vision process.
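    To illustrate why the frame selection matters so much, here is a minimal 2D sketch (the frame values are invented, not this robot's): the same X/Y that the vision process reports lands in completely different places depending on which user frame it is interpreted in.

```python
import math

def frame_to_world(frame, point):
    """Map an (x, y) point expressed in a user frame to world coordinates.

    frame = (x0, y0, theta_deg): the frame's origin and Z rotation in world.
    (2D only for clarity; a real user frame is a full 6-DOF transform.)
    """
    x0, y0, th = frame
    c, s = math.cos(math.radians(th)), math.sin(math.radians(th))
    px, py = point
    return (x0 + c * px - s * py, y0 + s * px + c * py)

# Invented example frames -- not the values from this cell:
grid_frame  = (500.0, 300.0, 30.0)  # UF 5, taught on the calibration grid
world_frame = (0.0, 0.0, 0.0)       # UF 0: robot base, nowhere near the table

found = (100.0, 50.0)  # an X/Y the vision process reports, valid only in UF 5

print(frame_to_world(grid_frame, found))   # where the part really is
print(frame_to_world(world_frame, found))  # same numbers, far from the part
```

    The vision offset is only meaningful in the frame the calibration grid was taught in, so applying it while a different frame is active sends the robot to coordinates that have nothing to do with the part.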

  • HawkME,


    Using the calibration frame for the vision program worked. Vision is picking up parts and all the measurements look exactly like I'd expect. But a new problem has emerged: Position Unreachable. Everything on the table is unreachable and I'm not sure why. The parts are exactly where the calibration grid was during calibration. I went ahead and changed the user frame throughout my pick program to the same user frame I calibrated in and did my vision program in. Was this a mistake?

    I re-taught all the points that don't use the vision offset, such as home position, because it said these points were not reachable. Those points worked after I re-taught them. I have re-taught the pick position multiple times with subtracting the vision offset and it seems to be hit or miss: some positions it can reach and others it cannot.

    Our part is not symmetrical. One of the holes is further from the center of the part; this is how we determine which way for the robot to turn to pick up the part. I am able to walk through the coordinates that correspond to the pick position, but the robot won't move there on its own. I assume that the offsets are added to the position:


                         X        Y        Z        W       P        R
    Pick Position      1.402   17.27   14.725  -179.52   0.05   83.944
    Rotation offset    0       0        0        0       0       0
    V(R)              29.9    30.1      0        0       0    -179.5

    Pick w/ Offset    30.302  47.37   14.725  -179.52   0.05  -95.556


    I have no problem walking the robot to this combination of coordinates; it's right over the table and over where I calibrated. Not sure what's going on.
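    As a sanity check, the table above can be compared against a simplified model that just adds the vision register to the taught position componentwise, wrapping the angles to the ±180 range (this is only an approximation: the controller's actual fixed-frame offset also rotates the taught X/Y through the offset's R component):

```python
# Simplified sketch only: assumes the controller applies the vision offset
# additively in the user frame, summing the angles and wrapping to [-180, 180).
# Fanuc's real fixed-frame VOFFSET also rotates the taught X/Y through the
# offset's R, so treat this as a back-of-envelope check, not the real math.

def wrap180(deg):
    """Wrap an angle in degrees into the range [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

def apply_offset(pos, voff):
    """pos and voff are (x, y, z, w, p, r) tuples in the same user frame."""
    xyz = [a + b for a, b in zip(pos[:3], voff[:3])]
    wpr = [wrap180(a + b) for a, b in zip(pos[3:], voff[3:])]
    return tuple(xyz + wpr)

pick = (1.402, 17.27, 14.725, -179.52, 0.05, 83.944)  # taught pick position
vr   = (29.9, 30.1, 0.0, 0.0, 0.0, -179.5)            # vision register
print(apply_offset(pick, vr))
```

    Straight addition gives 31.302 in X rather than the 30.302 listed above, so either one digit was transcribed off or the rotation term is in play; the Y and R components match exactly.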


    Also, I just went out and did some more testing: my Z axis won't rotate all the way around in one press. It will turn around 30 degrees and then stop, telling me the position is not reachable. When I press the button again it will rotate another 30-ish degrees and then error out. Any ideas? :wallbash:

  • I don't have experience with 4 axis robots, but I wonder if there is an issue with orientation. I believe a 4 axis robot must be orthogonal in W and P. I suspect the vision process doesn't necessarily protect you from this during a rotation.


    To rule this out you can make a copy of your program but teach everything in world, i.e. set user frame to 0. Also make a copy of your vision process in UF 0, then update your Z height to be that distance. You can verify the Z height by touching the part with a taught TCP selected as the UT and UF = 0.



  • Update for you guys..


    The Z height was incorrect, but I'm not sure what changed. When we set up the robot the pick position was very good and repeatable across the whole table. This was with the height set to 27.5mm. Over time the pick position got worse and worse; I messed with the vision and lighting attempting to improve it, and that seemed to help some. I measured the current Z height at 45mm, retaught the reference position in the vision program, and the program took off.


    What could cause this? We have experienced multiple crashes directly in the Z axis, which resulted in OVC alarms and Pulse Mismatch errors. Could this cause the world frame to move in the -Z direction?


    I appreciate all the help from you guys!!

  • If that large amount of change was due to damage to the actual robot then I don't think your robot would be running. You can verify the robot mastering by sending each joint to 0 degrees and checking the witness marks. Possible explanations: fixture was moved, robot mounting was moved, user frame was modified, robot motor was replaced and not mastered correctly.

    I know how to solve your part-not-reachable problem. Go to your user frame 5 and make sure that your W and P axes are zeroed. I know it sounds crazy that Fanuc would even let there be a value in there, because it's a 4 axis robot and they don't have a fifth or sixth axis, but I have gone through this struggle before. After you zero W and P on your user frame, recalibrate the camera, but don't touch your user frame.
