Hello,
I am developing an application that labels electronic cards using a suction-cup gripper mounted on my robot, which is equipped with an iRVision 2D camera. Every electronic card blank carries two markers, each a 3 mm × 3 mm square. My aim is to use these markers to precisely adjust the positioning of the labels, which I place with the suction cup on the gripper.
Since a card blank can contain multiple cards, I use an offset matrix to determine the shift needed for each label. The blanks are fed in by a supply system, and the referencing can sometimes drift because of variations in blank size. I am therefore trying to identify the most suitable vision method for this process. In short, I want to correct the referencing offsets by capturing images of the two markers with the iRVision camera. I have come across several methods such as Tool Offset, Fixed Frame Offset, etc., and I am unsure which one applies here.
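For reference, the correction I have in mind is essentially a 2D rigid transform (rotation plus translation) computed from the two markers: compare their measured positions against the nominal ones, then apply the same transform to each label position from the offset matrix. This is only a sketch of the geometry, not iRVision code, and the function names are just illustrative:

```python
import math

def fit_2d_transform(p1, p2, q1, q2):
    """Rigid 2D transform mapping nominal marker positions (p1, p2)
    to measured positions (q1, q2). Returns (theta, tx, ty)."""
    # Rotation: angle between the nominal and measured marker-to-marker vectors.
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]
    bx, by = q2[0] - q1[0], q2[1] - q1[1]
    theta = math.atan2(by, bx) - math.atan2(ay, ax)
    c, s = math.cos(theta), math.sin(theta)
    # Translation: whatever remains after rotating the first marker.
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return theta, tx, ty

def apply_transform(theta, tx, ty, point):
    """Apply the fitted transform to a nominal label position."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = point
    return (c * x - s * y + tx, s * x + c * y + ty)

# Example: blank shifted by (2, 1) mm with no rotation.
theta, tx, ty = fit_2d_transform((0, 0), (100, 0), (2, 1), (102, 1))
corrected = apply_transform(theta, tx, ty, (10, 20))  # nominal label position
```

My understanding is that the offset methods in iRVision (e.g. Fixed Frame Offset) compute an equivalent correction internally, which is exactly why I would like guidance on which one fits this setup.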
I have not started any testing yet, but I would like clarification on these points before beginning experiments. I am also wondering whether it is necessary to create a UFRAME on the supply system for this process.
Thank you in advance for your assistance!