iRVision localization issue

  • Hello everybody,


    I am having a localization problem with iRVision.


    I use a simple 2D Process Vision with a GPM locator to identify the position of my target in the robotic cell.

    However, I am having a problem with the location accuracy, because the X and Y values returned by iRVision seem to depend on where the target sits in the camera's field of view.


    Let me explain: if the target is in the upper right corner of my photo or in the lower left corner, the position returned by iRVision will not be the same at all, even though my object has not moved.


    I would like the position returned by iRVision to match the actual position of my target in the world frame.


    Is there a way to do this?


    Thank you

  • The value in X and Y that iRVision returns to me.


    My camera is mounted on the robot and I performed the calibration with the grid supplied by Fanuc.

  • The part needs to be on a surface that you taught an accurate user frame for, and you need to set the Z height in your vision process. Then your offset frame will use that user frame to output X, Y, and R.


    You really don't want to use world frame. It would require the robot to be perfectly level to the part.

  • Thank you for all your answers.


    In fact, I calibrated the camera with the calibration grid, which is recorded in a UFrame. I set the Z height and then started setting up my vision process.

    The calibration works well, but when I move the object and run the vision process again, the reported position is not accurate.


    The objective would be to scan the entire cell until I find my part and thus determine its position.

  • How did you determine Z height? How big is your work area? If it is much larger than the grid then your z height will not be consistent the further away you get. In that case manually teach a UF that matches the size of your work area.

  • In fact, the Z value does not matter to me because it does not vary over time. On the other hand, the position of my part in X, Y, and R is important.


    My robotic cell is about 2 square meters and I would like to sweep it using the robot to find my part. I use a loop that moves my robot forward until I identify the position of the part. The problem is that the location of the part returned by iRVision is not correct.


    If I understand your point correctly, would I have to have a much larger grid to find my target in the cell?

  • If you don't have an accurate Z value set in the vision process then X and Y will never be accurate. The Z height setting is critical to a 2D vision application.


    You don't need a larger grid, just teach a standard 3 point user frame using a correctly taught pointer. Then you can take your pointer and touch the surface of the part while in that user and tool frame to find out the Z height. Finally go back into your vision process and set the Z height.
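
    To see why the Z height is so critical, here is a minimal sketch (plain Python with hypothetical numbers, not anything from iRVision itself) of the similar-triangles relationship in a 2D camera: if the assumed part plane is off by some dZ, every detected lateral offset gets scaled by roughly assumed_Z / true_Z, so the error grows the farther the part sits from the optical axis.

```python
# Pinhole-camera sketch: why a wrong Z height skews reported X/Y.
# All numbers are hypothetical; this is not iRVision's internal math.

def reported_offset(true_offset_mm, true_z_mm, assumed_z_mm):
    """A 2D system back-projects the image point onto the assumed part
    plane; by similar triangles the recovered lateral offset scales
    with assumed_z / true_z."""
    return true_offset_mm * (assumed_z_mm / true_z_mm)

camera_to_part = 500.0   # actual camera-to-part distance (mm)
assumed_z = 520.0        # Z height set in the vision process (20 mm off)

for x in (10.0, 100.0, 200.0):   # part offsets from the optical axis
    x_rep = reported_offset(x, camera_to_part, assumed_z)
    print(f"true X = {x:6.1f} mm -> reported X = {x_rep:6.1f} mm "
          f"(error {x_rep - x:+.1f} mm)")
```

    Note how the error is near zero at the image center and grows toward the edges, which is exactly the "position depends on where the part is in the field of view" symptom described in this thread.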

  • Ok thank you, I understand your point very well. However, I have properly set the Z height of my vision process.


    I taught the grid frame with a fine pointer whose TCP was accurately calibrated.


    My problem is that depending on where my part is in the field of view of my camera, the position in X and Y is not the same. (The room is fixed in the cell and it's my camera that moves to look for the room)

  • the position in X and Y is not the same.

    You keep saying this, but without details that has no meaning. How different? In which axes? How are you determining this? Is the error consistent or random? Does it change based on where the object has been moved to?

    (The room is fixed in the cell and it's my camera that moves to look for the room)

    Room? Cells are usually inside rooms, which are inside buildings. What does this mean?


    The camera is robot-mounted? Is IRVision configured for that? Usually, with robot-mounted cameras, corrections are generated in tool coordinates, to allow the robot to move to the part, without much regard to the part's position in World or UFrame.

  • I think my case is not clear, so let me start again:


    My vision process is correctly configured, both regarding the frame of the calibration grid and the Z height of my part.


    The problem is that depending on where my part is in the field of view of my camera, the returned position is different.


    Here is a concrete example:

    iRVision gives me the X, Y and R position of my part

  • Mmmmm ok thank you I understand. So there is no way to avoid this?


    I would like to scan my entire cell with my camera looking for my part which can be anywhere in my robotic cell.

  • That depends on how your system is configured, and what you want to achieve.


    Do you need to measure the part's location in World or UFrame? Or do you just need the robot to be able to grab the part?


    I'm not certain if IRVision directly supports using a robot-carried camera to measure a location in World or UFrame (that's normally done with a fixed overhead camera), but if your UFrame and the calibration grid are set up correctly, it should be possible to use a combination of LPOS() and the camera feedback to calculate the UFrame position of the part.


    I'm not certain how to do it in TP. It would require having the IrVision UFrame active, having a TCP at the center of the camera active, then taking LPOS() and multiplying/offsetting it by a PR set to the vision results.
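
    As a rough sketch of that frame composition (plain Python with NumPy, planar poses only; all values are hypothetical and this is not TP syntax): the part's position in the UFrame is the camera pose from LPOS() composed with the vision-reported offset.

```python
import numpy as np

# Planar (X, Y, R) poses as 3x3 homogeneous transforms; real robot
# poses carry full W, P, R angles, this keeps only rotation about Z.

def pose_2d(x, y, r_deg):
    r = np.radians(r_deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Hypothetical LPOS(): camera TCP pose in the iRVision UFrame.
camera_in_uframe = pose_2d(300.0, 150.0, 90.0)
# Hypothetical vision result: part offset relative to the camera.
part_in_camera = pose_2d(20.0, -10.0, 0.0)

# Part pose in the UFrame = camera pose composed with the offset.
part_in_uframe = camera_in_uframe @ part_in_camera
print(f"part in UFrame: X={part_in_uframe[0, 2]:.1f} "
      f"Y={part_in_uframe[1, 2]:.1f}")
```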


    Or, a cruder but simpler way might be to simply let the robot guide onto the part "normally", by moving in Tool coordinates until the IrVision position error is 0, then record LPOS().
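
    That cruder loop can be sketched like this (plain Python simulation; the loop, the error getter, and the move function are all hypothetical stand-ins, not TP or KAREL calls):

```python
# Sketch of the "guide onto the part" approach: step the tool by the
# reported vision error until the part is centered, then read LPOS().

def guide_to_part(get_vision_error, move_tool, tol=0.5, max_steps=50):
    """Close the loop in tool coordinates until the error is small."""
    for _ in range(max_steps):
        ex, ey = get_vision_error()
        if abs(ex) < tol and abs(ey) < tol:
            return True   # centered: LPOS() now gives the part pose
        move_tool(ex, ey)
    return False

# Tiny simulation standing in for the robot:
part = [120.0, -45.0]     # where the part actually is
tool = [0.0, 0.0]         # current tool position

def sim_error():
    return part[0] - tool[0], part[1] - tool[1]

def sim_move(dx, dy):
    tool[0] += dx
    tool[1] += dy

converged = guide_to_part(sim_error, sim_move)
print(converged, tool)    # the tool converges onto the part
```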

  • A robot-mounted camera absolutely can find parts relative to a user frame. It must be set up for a fixed frame offset, then a reference position must be set.


    Can you show a screenshot of your vision process setup?
