Details about GPM locator and Aspect function

  • Hello everybody,


    Please excuse me if the question has already been asked, but after some research I did not find anything relevant.


    I am currently working on Fanuc vision processes, using the Gaze Line feature available in iRVision, with a 2D camera mounted on the robot.


    My goal is to identify the rotation of my part on the W, P and R axes. For that I use the GPM locator function of iRVision to teach a pattern to my program.


    The recognition of the rotation on the Z axis is going perfectly, but I cannot detect the rotation on X and Y.

    It seems that the Aspect function should be able to detect these rotations, but it does not work in my application.


    The pattern I use is a simple circle with a cross in the middle. In the case of a rotation about the X axis, for example, the circle tends to become an oval, so logically iRVision should be able to calculate the degree of inclination, but the W value it returns seems wrong.


    Do you have any tips for identifying the rotation of a part in X and Y using a simple 2D camera?


    Thank you in advance!

  • How off are you? A few degrees? Several?


    The problem with doing it that way is that there are two solutions: a negative angle and a positive angle will generate the same aspect ratio.


    Another issue with that is that the aspect ratio is a bit noisy.
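    To make the two-solution problem concrete, here is a rough sketch (Python, with made-up numbers; this is just the back-of-the-envelope geometry, not iRVision's actual algorithm). A circle of diameter D tilted by angle t projects to an ellipse whose minor axis is D*cos(t), and cos(t) = cos(-t):

    Code:
    import math

    def tilt_from_aspect(major_px, minor_px):
        """Tilt angle of a circle from its apparent aspect ratio.

        aspect = minor/major = cos(tilt), and since cos(t) == cos(-t),
        a single 2D image cannot tell +t from -t.
        """
        aspect = minor_px / major_px
        t = math.degrees(math.acos(min(aspect, 1.0)))
        return +t, -t  # both solutions are equally valid

    # Example: a pattern found with a 100 px major / 98 px minor axis.
    print(tilt_from_aspect(100.0, 98.0))  # (~ +11.5, -11.5 degrees) -- ambiguous

    # Noise sensitivity: near zero tilt the aspect barely changes with
    # angle, so a 1 px measurement error swings the result by degrees.
    print(tilt_from_aspect(100.0, 99.0))  # ~8.1 degrees
    print(tilt_from_aspect(100.0, 97.0))  # ~14.1 degrees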

  • Thank you for your answer!


    Only a few degrees. But you're right about the two possible solutions; I hadn't thought of that.

    Maybe I could use a different pattern that would be recognizable in each orientation?


    Maximum precision is not required; I just need to detect whether the angle is off at all.

  • I don't think you can get W, P, and R from a 2D vision process. For this you would need to use the 3DL vision process, which is essentially a 2D camera with a laser mounted next to it. The camera and laser are calibrated together, and when running the vision process the camera takes two extra images that include a laser cross-section to establish W, P, and R. I went through the 3DL class at FANUC in Rochester Hills, Michigan, and there is no way for the GPM to find a 3-dimensional "angle" using a 2-dimensional system. You would need the 3DL system for this purpose.
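    To illustrate why the laser adds the missing dimension, here is a deliberately simplified triangulation sketch (Python; orthographic camera looking straight down, invented numbers -- FANUC's real 3DL calibration model is more involved). The laser line lands at a different image position depending on surface height, and two height samples along the projected cross give the tilt:

    Code:
    import math

    def height_from_laser_shift(du_px, px_per_mm, laser_angle_deg):
        """Height above the reference plane from the laser line's pixel shift.

        Simplified model: a surface h mm above the reference plane shifts
        the laser line sideways by h * tan(laser angle), seen as du pixels.
        """
        return (du_px / px_per_mm) / math.tan(math.radians(laser_angle_deg))

    # Two samples along the laser cross => two heights => surface tilt.
    h1 = height_from_laser_shift(12.0, 10.0, 30.0)  # ~2.1 mm
    h2 = height_from_laser_shift(36.0, 10.0, 30.0)  # ~6.2 mm
    spacing_mm = 40.0  # distance between the two samples on the part
    tilt = math.degrees(math.atan((h2 - h1) / spacing_mm))
    print(f"tilt ~ {tilt:.1f} deg")  # ~5.9 deg -- what a single plain 2D image cannot give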


  • Thank you for your answer, but I do not understand why the Gaze Line process, which is a 2D vision process, sends me back all 6 degrees of freedom.


    As you can see, it returns a position with all 6 degrees of freedom. However, the W and P values are wrong and I cannot figure out where this result comes from.

  • I believe those values are relative to your camera position, origin position, and the offset frame you are using. They are not set by the 2D vision process because, as I said before, a 2D vision process cannot find the third dimension needed to calculate the angle at which the part is lying; that requires the 3DL vision process. Think about the terms they are using: 2D and 3DL (the L stands for laser), so basically 2D and 3D. 2D is a flat image, like the original Mario on Nintendo, so you can only locate a flat image, whereas 3D gives depth, like the new Mario, allowing you to calculate the angle of the workpiece relative to the camera and origin position.

  • Do W and P ever change? They might just be default values for that offset frame.

  • In fact, the values seem really random, but they remain in the same order of magnitude despite a strong tilt.


    I think you're right; it's strange that FANUC doesn't give more details about the calculations the algorithm makes, or why it displays values for W and P if they are wrong.

  • It does this because it still needs to use those values to calculate a proper offset or physical position, so it assumes the part is flat and pulls data from those other positions to find the actual location of the part.

  • Also, it is important to remember that this all depends on lighting: values can and will change with variations in the supplied light. So even if your image appears the same from one piece to the next, minute variations in light will result in different calculations, even in Z, W, and P, which are not established through the 2D process. I was wrong before when I stated that W, P, and R are not established with a 2D process: it can do X, Y, and R, whereas the 3DL process establishes Z, W, and P. My apologies, I have not used the 2D system independently in quite a while.

  • It is true that lighting is essential for all vision-related operations. I added LEDs to properly illuminate the imaging area, so there is no problem at that level.


    The repeatability of the measurement is very good; I see very little variation between shots spaced over time.


    The Z position acquired with the 2D camera is also good. The only problem is the orientation in W and P, which does not correspond to reality.


    For the detection of the Z position, I used the GPM locator to teach the robot the size of a given feature at a given height, and then taught a second reference position with the pad closer to the camera. The robot can then build a scale and calculate the height of my part from the apparent size of the pattern I taught it.


    This method works well for estimating the height in Z; however, I don't understand how the measurement in W and P is done. Logically, if the circle appears oval to the camera, it should be able to estimate its orientation with more or less precision, but unfortunately this is not the case.
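    For what it's worth, the two-reference trick reduces to a pinhole relation. A rough sketch (Python, invented numbers; this is my own model of the idea, not iRVision's internals): apparent size is s = k / (H - z), so two taught pairs (size, height) pin down both the scale constant k and the camera height H:

    Code:
    def make_height_estimator(s1_px, z1_mm, s2_px, z2_mm):
        """Z estimator from two taught references under a pinhole model."""
        H = (s1_px * z1_mm - s2_px * z2_mm) / (s1_px - s2_px)  # camera height
        k = s1_px * (H - z1_mm)                                # scale constant
        return lambda s_px: H - k / s_px

    # Hypothetical teach data: 100 px at z = 0 mm, 125 px at z = 50 mm.
    z_of = make_height_estimator(100.0, 0.0, 125.0, 50.0)
    print(z_of(110.0))  # ~22.7 mm -- part sits between the two references

    # Note: apparent size changes faster per mm the closer the part is to
    # the camera, which is why scale-based Z is more accurate up close.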

  • The camera is going to look for the trained image. If the circle appears as an oval, your vision score will drop because the image differs from the trained feature. It cannot determine how much the part is tilted, because it is not measuring a 3-dimensional object; it is only comparing the orientation of a 2D image with the trained image. Therefore, even if it sees an oval instead of a circle, it will only reduce the score of the vision process and will not establish values in a third dimension. Since you are using a 2D process, you will only get 2D results. I rarely use the 2D process independently since the 3DL system is more accurate, so I had never thought of using two reference points to establish Z. 👍

  • I am fairly sure of this. I believe the aspect ratio is used to pass/fail the vision process based on the aspect of the part. Changing this value allows the aspect to be skewed more and still pass the vision process; I don't believe it can actually use this feature to calculate the angle of the object. If it could, there would be no need for the 3DL system at all, and the inclusion of a laser would be only for redundancy. The only way to get accurate 3D results is to use the 3DL process. Also, if you change the value to allow more tolerance in an object's angle, it will increase the scan time of the process.
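    As a rough illustration of that pass/fail behaviour (Python, with hypothetical thresholds, not FANUC's actual implementation):

    Code:
    def aspect_gate(major_px, minor_px, aspect_min=0.90, aspect_max=1.00):
        """Pass/fail a found part on apparent aspect only.

        Widening the min/max window lets more skewed parts pass (at the
        cost of search time); it never reports the skew angle itself.
        """
        aspect = minor_px / major_px
        return aspect_min <= aspect <= aspect_max

    print(aspect_gate(100.0, 97.0))  # True  -- slightly skewed, still passes
    print(aspect_gate(100.0, 85.0))  # False -- too oval, the find fails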

  • I have never seen a way to determine W & P from a single 2D image.


    There is a built-in process to get Z based on scale, called the "Depalletizing vision process". It is more accurate the closer the camera is to the part.

  • There isn't a way to do it; that's why they have the 3DL vision system. I think they were trying to use the Aspect function to calculate the angle of the part in order to set W and P, which is not possible. The Aspect function is only used to pass/fail objects that are skewed: if you want a part skewed +/- a certain percentage to pass, you set your min/max to whatever is acceptable for your application. I left my 2D and 3DL books from these courses at home, otherwise I'd just quote FANUC directly.

  • Thanks for all your answers; I will keep looking. Maybe a method using several 2D photos taken from different angles could get me W and P.
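    To sketch what that would involve outside iRVision (a numpy toy under idealized assumptions -- known camera poses, matched features, no pixel noise; every number below is invented): triangulate a few feature points from two views, fit a plane, and read the tilt off the normal:

    Code:
    import numpy as np

    def triangulate(c1, d1, c2, d2):
        """Midpoint triangulation of one feature seen along two rays.

        c*: camera centers, d*: unit ray directions toward the feature
        (in practice these come from pixel coords plus calibration)."""
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        b = np.array([d1 @ (c2 - c1), d2 @ (c2 - c1)])
        t1, t2 = np.linalg.solve(A, b)
        return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2.0

    def plane_tilt(points):
        """Tilt of the best-fit plane about X and Y (roughly W and P;
        signs depend on your frame conventions)."""
        pts = np.asarray(points)
        normal = np.linalg.svd(pts - pts.mean(axis=0))[2][-1]
        if normal[2] < 0:
            normal = -normal
        return (np.degrees(np.arctan2(normal[1], normal[2])),   # about X
                np.degrees(np.arctan2(normal[0], normal[2])))   # about Y

    # Synthetic check: three points on a plane tilted 10 deg about X,
    # seen from two made-up camera positions.
    rx = np.radians(10.0)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    true_pts = [Rx @ np.array(p, float) for p in ([0, 0, 0], [50, 0, 0], [0, 50, 0])]
    c1, c2 = np.array([0.0, 0.0, 400.0]), np.array([150.0, 0.0, 400.0])
    ray = lambda c, p: (p - c) / np.linalg.norm(p - c)
    recon = [triangulate(c1, ray(c1, p), c2, ray(c2, p)) for p in true_pts]
    print(plane_tilt(recon))  # ~(-10.0, 0.0) up to sign convention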

  • I don't think that will be possible either, as the software most likely doesn't include the computational algorithm to perform this calculation. The best option is to just get the correct equipment for what you are trying to accomplish. If you already have the 2D system installed, it shouldn't be hard to install and set up the 3DL system. It will be more effective, more accurate, and less troublesome in the future, since it is already a FANUC system, as opposed to trying to create a separate process to accomplish the same goal. Also, you would not see a huge increase in cycle time with the 3DL system, whereas with the multi-photo approach you have to take into account the extra time to move to and snap multiple pictures, as well as the processing time to perform the calculations needed to establish W and P. That could add a noticeable amount of cycle time, which in most industries is unacceptable.

  • You're right; however, my application does not necessarily require a short cycle time. Of course, it must be kept reasonable.


    As for the 3DL system: since I already have a 2D camera, is it possible to add a simple laser to my robot and calibrate the camera and laser together so I can use the 3DL program?

  • You would need to get the 3DL head and the cables to run to the controller. These are specific to the iRVision application, so they need to be the correct components; otherwise they will not communicate with the controller. That's really about it. Since you are already using the 2D system, you shouldn't need any extra software.
