Hello All,
I am trying to set up a simple 2D vision process with a fixed camera. I believe I've trained the TCP and User Frame correctly: the TCP is just a pointer with a Z offset, and the User Frame has been calibrated and recalibrated using both the 3-point and 4-point methods; both give the same values. Question: does it matter what frame you are in while training the User Frame?
My issue is with setting up the camera. I use a fixed (bolted-in-place) calibration grid, the same one used to teach the User Frame. At the calibration step, if I use Perspective with automatic camera-distance calculation, it reports a distance of about 1 m, while my measured height is 416 mm. That method gives good errors: 0.25 pix mean and 0.693 pix max. When I use Perspective but override the standoff distance with my measured value, the errors rise to roughly 1 pix mean and 3 pix max. Orthogonal gives mean and max errors equal to those I get with the standoff distance overridden.
My camera is a Basler acA1920-48gm with a Fujinon 1:1.6/35mm lens.
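As a rough sanity check on which standoff is plausible, I put together a quick thin-lens calculation of the object-space scale at the two candidate distances, so I can compare the predicted mm/pixel against the measured pixel spacing of the grid dots in the image. The 5.86 µm pixel pitch is my assumption for this camera model, and this ignores distortion, so treat it as a ballpark only:

```python
# Thin-lens sanity check: object-space scale (mm/pixel) vs. standoff distance.
# Assumptions: 5.86 um pixel pitch for the acA1920-48gm, nominal 35 mm focal
# length, simple pinhole/thin-lens model, no lens distortion.

PIXEL_PITCH_MM = 0.00586   # assumed sensor pixel pitch, mm
FOCAL_MM = 35.0            # nominal lens focal length, mm
SENSOR_WIDTH_PX = 1920     # horizontal resolution

def mm_per_pixel(standoff_mm):
    """Object-space footprint of one pixel at the given working distance."""
    magnification = FOCAL_MM / (standoff_mm - FOCAL_MM)
    return PIXEL_PITCH_MM / magnification

for standoff in (416.0, 1000.0):
    scale = mm_per_pixel(standoff)
    fov = scale * SENSOR_WIDTH_PX
    print(f"standoff {standoff:6.0f} mm -> {scale:.4f} mm/px, "
          f"horizontal FOV ~ {fov:.0f} mm")
```

Whichever distance predicts a field of view and grid-dot pixel spacing that match what I actually see in the image should be the one that's physically correct.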
All three calibration methods (perspective auto, perspective override, and orthogonal), when used in the same vision program, give about a 20 mm offset from the part.
I had trouble uploading pictures, but they can be found at these links: