OK, thank you, I understand. I will test this early next week and get back to you. Thanks, everyone.
Posts by RudiVoller
-
Have you tried this?
Yes, I used that, but the found position is not correct.
That depends on how your system is configured, and what you want to achieve.
Do you need to measure the part's location in World or UFrame? Or do you just need the robot to be able to grab the part?
I'm not certain that iRVision directly supports using a robot-carried camera to measure a location in World or UFrame (that's normally done with a fixed overhead camera), but if your UFrame and the calibration grid are set up correctly, it should be possible to use a combination of LPOS() and the camera feedback to calculate the UFrame position of the part.
I'm not certain how to do it in TP. It would require having the iRVision UFrame active, having a TCP at the center of the camera active, then taking LPOS() and multiplying/offsetting it by a PR set to the vision results.
Or, a cruder but simpler way might be to simply let the robot guide onto the part "normally", by moving in Tool coordinates until the iRVision position error is 0, then record LPOS().
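Something along these lines in TP, as a rough untested sketch ('FIND_PART', the frame/tool numbers and the registers are all placeholders for your own setup):

    !Sketch: center on the part via the vision offset, then record LPOS ;
    UFRAME_NUM=1 ;
    UTOOL_NUM=2 ;
    VISION RUN_FIND 'FIND_PART' ;
    VISION GET_OFFSET 'FIND_PART' VR[1] JMP LBL[99] ;
    !Approach position, offset by the vision result ;
    L P[1] 100mm/sec FINE VOFFSET,VR[1] ;
    !Current position in the active UFrame = part location ;
    PR[10]=LPOS ;
    LBL[99] ;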
I appreciate this approach but would like something more robust and precise.
A robot-mounted camera absolutely can find parts relative to a user frame. It must be set up for a fixed frame offset, then a reference must be set.
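With a fixed frame offset process, the found position comes back directly in the application user frame; in TP you should be able to read it from the vision register with something like this (sketch only; the process name and register numbers are placeholders):

    VISION RUN_FIND 'FIND_PART' ;
    VISION GET_OFFSET 'FIND_PART' VR[1] JMP LBL[99] ;
    !Found position of the part in the application UFrame ;
    PR[10]=VR[1].FOUND_POS[1] ;
    LBL[99] ;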
Can you show a screenshot of your vision process setup?
Here is a screenshot of my vision process:
-
Mmm, OK, thank you, I understand. So there is no way to avoid this?
I would like to scan my entire cell with my camera looking for my part which can be anywhere in my robotic cell.
-
Now if I jog the robot 3 mm in Y, the Y position returned by iRVision also changes by 3 mm. But my part has not moved at all.
-
I think my case is not clear, so let me start again:
My vision process is correctly configured, both for the frame of the calibration grid and for the Z height of my part.
The problem is that depending on where my part is in the field of view of my camera, the returned position is different.
Here is a concrete example:
iRVision gives me the X, Y and R position of my part
-
Ok thank you, I understand your point very well. However, I have properly set the Z height of my vision process.
I touched up the calibration grid frame with a fine pointer that is accurately calibrated as the TCP.
My problem is that depending on where my part is in the field of view of my camera, the X and Y position is not the same. (The part is fixed in the cell and it is my camera that moves to look for it.)
-
In fact, the Z value does not interest me because it does not vary over time. On the other hand, my part's position in X, Y and R is important.
My robotic cell is about 2 square meters and I would like to sweep it using the robot to find my part. I use a loop that moves my robot forward until I identify the position of the part. The problem is that the location of the part returned by iRVision is not correct.
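Roughly, my search loop looks like this in TP (just a sketch; 'FIND_PART', the step size and the registers are examples):

    LBL[1] ;
    VISION RUN_FIND 'FIND_PART' ;
    VISION GET_OFFSET 'FIND_PART' VR[1] JMP LBL[2] ;
    !Found: store the part location and stop scanning ;
    PR[10]=VR[1].FOUND_POS[1] ;
    JMP LBL[3] ;
    LBL[2] ;
    !Not found: step the camera forward 100mm and retry ;
    PR[11,1]=PR[11,1]+100 ;
    L PR[11] 500mm/sec FINE ;
    JMP LBL[1] ;
    LBL[3] ;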
If I understand your point correctly, would I have to have a much larger grid to find my target in the cell?
-
Thank you for all your answers.
In fact, I used the calibration grid, which is recorded in a UFrame, to calibrate the camera. I set the height in Z and then started to set up my vision process.
The calibration works well, but when I move the object and run the find again, the returned positions are not exact.
The objective would be to scan the entire cell until I find the location of my part and thus determine its position.
-
The value in X and Y that iRVision returns to me.
My camera is mounted on the robot and I performed the calibration with the grid supplied by Fanuc.
-
Hello everybody,
I am having a localization problem with iRVision.
I use a simple 2D vision process with a GPM locator to identify the position of my target in the robotic cell.
However, I am having a problem with the location accuracy, because the X and Y given by iRVision seem to depend on the position of the target in the field of view of the camera.
Let me explain: if the target is in the upper right corner of my photo or in the lower left corner, the position returned by iRVision will not be at all the same even though my object has not moved.
I would like the position returned by iRVision to match the actual position of my target in the world frame.
Is there a way to do this?
Thank you
-
You're right; however, my application does not necessarily require a short cycle time. Of course, this must be kept reasonable.
As for the 3DL system, since I already have a 2D camera, is it possible to add a simple laser to my robot and pair the camera and laser to use the 3DL process?
-
Thanks for all your answers, I will keep looking. Maybe a method using several 2D photos from different angles could allow me to get W and P.
-
Are you sure about this?
In the details of the GPM Locator, I found the following parameter: -
It is true that lighting is essential for all vision-related operations. I added LEDs to properly illuminate the imaging area, so there is no problem at that level.
The repeatability of the measurement is very good, I have very little variation between different shots spaced over time.
The Z position acquired with the 2D camera is also good. The only problem is the orientation in W and P, which does not correspond to reality.
For the detection of the Z position, I used the GPM locator to teach the robot the size of a given feature at a given height, and then taught it a second reference position with the part closer to the camera. The robot is thus able to build a scale and calculate the height of my part from the apparent size of the pattern I taught it.
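In other words, assuming the simple pinhole relation where apparent size is inversely proportional to distance:

    scale = measured_size / reference_size
    Z     = Z_ref / scale
    e.g.  Z_ref = 400 mm and scale = 1.25  ->  Z = 400 / 1.25 = 320 mm

(the numbers are just an illustration, not my actual values).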
This method works well for estimating the height in Z; however, I don't understand how the measurement in W and P is done. Logically, if the circle appears oval to the camera, it should be able to estimate its orientation with more or less precision, but unfortunately this is not the case.
-
Do W and P ever change? They might just be default values for that offset frame.
In fact, the values seem really random but remain in the same order of magnitude despite a strong tilt.
I believe those values are relative to your camera position, origin position, and the offset frame you are using. They are not set by the 2D vision process because, as I said before, a 2D vision process cannot find the third dimension needed to calculate the angle at which the part is lying. That requires the 3DL vision process. Think about the terms they are using: 2D and 3DL (the L stands for laser), so basically 2D and 3D. 2D is a flat image, like the original Mario on Nintendo, so you can only locate a flat image, whereas 3D gives depth, like the new Mario, allowing you to calculate the angle of the workpiece relative to the camera and origin position.
I think you're right. It's strange that FANUC doesn't give more details about the calculations made by the algorithm. And why display W and P values at all if they are wrong?
-
Thank you for your answer, but I do not understand why the Gaze Line process, which is a 2D vision process, returns 6 degrees of freedom.
As we can see, it is able to return a position with 6 degrees of freedom. However, W and P are wrong and I cannot figure out where this result comes from.
-
Thank you for your answer !
Only a few degrees. But you're right about the two possible solutions; I hadn't thought of that.
Maybe use a different pattern that would be recognizable in each orientation?
It does not matter if the precision is not maximal, as long as I can at least detect whether there is a problem with the angle.
-
Hello everybody,
Please excuse me if the question has already been asked but after some research I did not find anything relevant.
I am currently working on Fanuc vision processes. I am using the Gaze Line feature available in iRVision. I am equipped with a 2D camera mounted on the robot.
My goal is to identify the rotation of my part on the W, P and R axes. For that I use the GPM locator function of iRVision to teach a pattern to my program.
Recognition of the rotation about the Z axis works perfectly, but I cannot detect the rotation about X and Y.
It seems that the Aspect function can detect these rotations, but it does not work in my application.
The pattern I use is a simple circle with a cross in the middle. In the case of a rotation about the X axis, for example, the circle tends to become oval, and logically iRVision should be able to calculate the degree of inclination, but the value of W seems wrong.
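If my geometry is right, the tilt should follow from the axis ratio of the projected ellipse, although for small angles the effect is tiny and easily lost in measurement noise:

    b / a = cos(theta)        (minor over major axis of the projected ellipse)
    theta = arccos(b / a)
    e.g.  theta = 5 deg  ->  b/a ~ 0.996  (only about 0.4 % foreshortening)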
Do you have any tips for identifying the rotation of a part in X and Y using a simple 2D camera?
Thank you in advance !
-
You're right about the fixtures. One could imagine a simple positioning system against which the tool presses, thus blocking two rotations and the Z translation. But this Z translation will vary depending on the tool used, and it is therefore necessary to evaluate it at each calibration.
-
A single point laser sensor only gives you 1 value. So you would need to measure or probe 6 points to get 6 DoF.
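For the plane alone (Z, W and P), three non-collinear probed points would already be enough, since they define the surface normal:

    v1 = P2 - P1
    v2 = P3 - P1
    n  = v1 x v2    (surface normal; W and P then follow from the components of n, per the controller's WPR convention)

The remaining X, Y and R would still need other features or probe points.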
Right, but is it possible to use laser sensors that can detect several points within their range while remaining in the same position?
Is it possible to pre-locate the part? Push it against a known edge? Slide it into a corner?
In fact, it is not possible to put the object into position in the same corner because its geometry can vary.
I would like the calibration to be independent of how the object is fixed relative to a surface. However, it is possible to position it in the cell to within a few centimeters; the calibration method should then do the rest.