Hello All,
I have an R-30iB controller with a 2D iRVision system (Sony camera, 640x480). I have some experience with this system in a fixed-camera configuration.
But now I have a new task: picking items from drawers. There can be different types of parts with different sizes in a drawer, and they are not fixed, so the current offset of the selected item needs to be read by the camera. I will get the approximate position of the part from a PLC (X, Y and Z in a drawer frame), and I have a robot-mounted camera. My idea is to move the camera above the center of a part, read the part offset using a vision process, and pick the part with the robot tool.
First I tried to solve this task using the fixed-camera approach: I set up a fixed-mounted camera calibration and taught the part. Then I moved the robot with the camera to the position from the PLC. In this position iRVision found an offset from the reference part (the values looked correct), and I tried to add this camera offset to the part offset from the PLC. That failed badly: the offset from the vision process was in matrix form, while the offset from the PLC was in Cartesian form, so there is no way to simply add these two offsets into one correct offset value. Next, I tried adding the PLC offset to the part reference position first and using the result as a new part reference point, then applying the offset from the vision process to this new reference point. That was the second failure: the working tool on the robot moved to a completely wrong position in every trial I made.
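To illustrate why I think the two offsets cannot just be added, here is a rough sketch (NumPy; the function names and example numbers are my own, and I am assuming FANUC's W, P, R are fixed angles about X, Y, Z composed as Rz·Ry·Rx) of how I understand the offsets would have to be combined:

```python
import numpy as np

def xyzwpr_to_matrix(x, y, z, w, p, r):
    """Build a 4x4 homogeneous transform from X, Y, Z, W, P, R
    (angles in degrees), assuming the fixed-angle convention
    R = Rz(r) @ Ry(p) @ Rx(w)."""
    w, p, r = np.radians([w, p, r])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(w), -np.sin(w)],
                   [0, np.sin(w),  np.cos(w)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical values, both expressed in the same frame:
T_plc    = xyzwpr_to_matrix(100, 50, 0, 0, 0, 10)  # coarse position from PLC
T_vision = xyzwpr_to_matrix(2, -3, 0, 0, 0, 5)     # fine offset from vision

# The correct composition is matrix multiplication, not element-wise
# addition; once a rotation is involved the two give different results.
T_total = T_plc @ T_vision
```

If this is the right way to chain the PLC offset and the vision offset, my problem reduces to converting between this matrix form and the Cartesian form the PLC uses.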
Now I have these questions:
1. Is there any way to solve my task with a robot-mounted camera but with fixed-camera process settings and a manual calculation of the final offset? For example, how can I calculate the Cartesian offset (which I see in the result table of the vision process) from the matrix form (which I can read into a position register)? Or is it possible to make it work if I apply the offset from the PLC by changing the application frame before starting the vision process? Or is it possible to use the part position from the vision process settings instead of the part offset for my purposes?
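To make the first question concrete, this is the kind of conversion I mean: a sketch (NumPy; assuming the fixed-angle convention R = Rz(r)·Ry(p)·Rx(w), which I believe FANUC uses, and ignoring the gimbal-lock case) of recovering Cartesian X, Y, Z, W, P, R from the matrix form:

```python
import numpy as np

def matrix_to_xyzwpr(T):
    """Recover X, Y, Z (translation) and W, P, R (degrees) from a 4x4
    homogeneous transform, assuming the fixed-angle convention
    R = Rz(r) @ Ry(p) @ Rx(w). Gimbal lock (|P| = 90 deg) is not handled."""
    x, y, z = T[:3, 3]
    R = T[:3, :3]
    # With R = Rz(r) @ Ry(p) @ Rx(w):
    #   R[2,0] = -sin(p),  R[2,1] = cos(p)*sin(w),  R[2,2] = cos(p)*cos(w)
    #   R[1,0] = sin(r)*cos(p),  R[0,0] = cos(r)*cos(p)
    p = np.degrees(np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)))
    w = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    r = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return x, y, z, w, p, r
```

Is a calculation like this the intended way to get a Cartesian offset out of the matrix in the position register, or does iRVision provide it somewhere directly?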
2. Is it possible to solve this task with a robot-mounted camera? Unfortunately, I don't have any experience with this configuration. If I want to use it, is a two-plane calibration required (the PDFs from FANUC use the words 'should be' without explanation)? What values will I get from the vision process if I change the camera position before every snap & find? Is there any document that can explain this configuration to me? (I have the iRVision Operation Manual 8.1 from FANUC and also the iRVision Startup Manual, but I can't find answers to my questions there.)
Any help would be very much appreciated.