Retaught the reference position in the vision program, program took off.
Thanks for this tip - my 2D vision program had also suddenly started being off in the X direction of the VR offset by >75. This solved my problem. Thank you!
Yes, the parts are stacked like a log pile, if you will, with just the ends of the cylinders showing. In the X or Y direction there would be roughly a 1-1/2 to 2 cylinder offset between the center of one cylinder and the next; moving on a diagonal, the center-to-center offset would be about one cylinder.
I am new to Fanuc vision, so please forgive the question. If I were to save the VR offset data into a PR, would I then combine the two (adding or subtracting, depending on direction) so the vision system can find the next pick position?
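As a sketch of the arithmetic being asked about (not Fanuc syntax; the function name and tuple layout are illustrative), combining a stored position with a found offset is element-wise addition, provided both values are expressed in the same user frame:

```python
def apply_offset(position, offset):
    """Add a vision-style (x, y, r) offset to a stored (x, y, r)
    position. Illustrative only: both tuples must be expressed in
    the same user frame, with r a rotation in degrees."""
    x, y, r = position
    dx, dy, dr = offset
    return (x + dx, y + dy, r + dr)

# Nominal pick position plus a found offset from the vision system:
print(apply_offset((100.0, 250.0, 0.0), (3.0, -1.5, 0.5)))
# -> (103.0, 248.5, 0.5)
```

On the controller itself, though, you usually don't do this by hand: iRVision lets you apply the vision register directly on the motion line (the VOFFSET option with a VR), so copying the VR into a PR is mainly useful when you want to reuse the same offset across several moves. Check your iRVision manual for the exact instruction syntax.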
Thank you.
Currently, I have a PR set at the approach/retreat point in front of the first part, a PR at the pick point of the first part, and a PR at the photo-taking position of the first part.
I was planning to calculate the PR offset from either the approach/retreat PR or the photo-taking PR for each "point" in the matrix where a part may be found.
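Since each candidate position is a fixed step from the first one, the grid of PR offsets can be generated rather than taught one by one. A minimal sketch, assuming a uniform pitch; the 5 x 7 count and 4.5-inch pitch below are only illustrative values based on the rough dimensions mentioned in the thread:

```python
def grid_offsets(cols, rows, pitch_x, pitch_y):
    """Yield (dx, dy) offsets for each cell of a vertical matrix,
    scanning left-to-right from the top-left (row 0, col 0).
    Y offsets are negative because the scan moves downward."""
    for row in range(rows):
        for col in range(cols):
            yield (col * pitch_x, -row * pitch_y)

# e.g. 4.5 in parts on a rough 4.5 in pitch, 5 across x 7 down:
offsets = list(grid_offsets(cols=5, rows=7, pitch_x=4.5, pitch_y=4.5))
print(offsets[0], offsets[1])  # (0.0, 0.0) (4.5, 0.0)
```

Each generated (dx, dy) would then be added to the photo-taking PR (or the approach PR) to produce the next candidate position.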
Can anyone recommend a way to move through a vertical matrix of cylindrical parts stacked lengthwise? I have a 2D Vision Processing system set up with iRVision and a robot-mounted 12 mm Kowa camera. The camera base sits 25 inches from the vertical plane of the matrix, which is approximately 25 inches wide by 35 inches tall. The matrix is loosely arranged, in that the stacked parts (4.5 inches in diameter) create each position of the matrix.
The vision processing system is to pick and place found parts, starting at top-left position of the vertical matrix. I'm currently taking the photo of the top-left position in a parallel plane about 10 inches from the parts. (The vision system works well for this first part.)
Sometimes a part will be "found" by the 2D vision system from the snap taken by the camera; sometimes a part will not be found. When a part is not found, I would like the robot to continue to the next offset of the PR[ ] and take a new photo. If a part is found, we will pick and place.
I understand position register offsets, and I have a basic understanding of vision registers used by the camera system to locate the next part. What I do not understand is how to combine PR[ ] and VR[ ].
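The logic for tying the two together is: move the camera by the next grid offset (PR arithmetic), snap, and either pick at the grid offset plus the vision offset, or move on to the next cell. A sketch with the vision call stubbed out; `find_part` and `pick_at` are placeholders for the real snap-and-find and motion routines, not actual API names:

```python
def search_and_pick(cells, find_part, pick_at):
    """Walk the candidate cells left-to-right, top-to-bottom. At each
    cell, attempt a find; on success, pick at the cell offset plus
    the vision offset and stop, otherwise try the next cell."""
    for dx, dy in cells:
        vr = find_part(dx, dy)            # returns (vx, vy) or None
        if vr is not None:
            pick_at(dx + vr[0], dy + vr[1])
            return (dx, dy)               # cell where the part was found
    return None                           # nothing found in the matrix

# Stubbed demo: the only part sits near the third cell, slightly off-center.
cells = [(0.0, 0.0), (4.5, 0.0), (9.0, 0.0)]
found = search_and_pick(
    cells,
    find_part=lambda dx, dy: (0.5, -0.25) if (dx, dy) == (9.0, 0.0) else None,
    pick_at=lambda x, y: print(f"pick at ({x}, {y})"),
)
print(found)  # (9.0, 0.0)
```

On the controller this structure roughly corresponds to a loop that writes the next grid offset into a PR, moves with that PR as an offset, runs the vision process, branches on the found/not-found status, and applies the VR on the pick move.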
Thank you very much.