Thank you so much for the quick and detailed response; I will be sure to try the suggestions listed above and see if they improve overall accuracy.
Let me start by describing where I currently am in the debugging process:
I am attempting to use two LR-Mates to assemble two parts, one flexible and one rigid, each with matching icons, to a tolerance of 0.25 mm in XY and in the rotational alignment of the outside profiles. Each robot has a foam EOAT that shifts the part upon picking, so I take a second snap with a tool offset; this helps account for the flexible part rotating, but not consistently enough to avoid calling in an adjustment offset.

I currently have a vision process with two parent tools established: one looks at the icon for XY accuracy, the other looks at the outside profile of the part for rotational accuracy. I have two model IDs taught, each with its own offset data taught to the vision. In my programming I send the GET.OFFSET and FOUND.POSITION data for the XY/icon to VR2 and VR3, and the FOUND.POSITION data for the rotation/outside profile to VR4. I then extract the received VR data from all components into their respective position registers, where I call a matrix and inverse to compile the pieces of the offset data I need, and apply the result to the final application position.

The flexible part has an adhesive backing that sticks to the rigid part during application, so I set up a shared user frame between the two robots for this handshake.
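To make the matrix-and-inverse step concrete, here is a minimal 2D sketch in Python/NumPy of what I mean by compiling a single offset from separately measured pieces: translation from the icon tool, rotation from the profile tool. All of the numeric values are made up for illustration and are not my actual VR data.

```python
import numpy as np

def offset_frame(x, y, theta_deg):
    """Build a 2D homogeneous transform from an XY shift (mm)
    and a rotation (degrees), e.g. values pulled from vision registers."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Hypothetical stand-ins for VR2/VR3 (icon XY) and VR4 (profile angle)
icon_xy = (0.18, -0.07)   # mm, from the icon tool
profile_theta = 1.4       # deg, from the outside-profile tool

# Compose one offset: rotation from the profile, translation from the icon
T = offset_frame(icon_xy[0], icon_xy[1], profile_theta)

# The inverse "undoes" the offset, as in the inverse step of the program
T_inv = np.linalg.inv(T)
print(np.allclose(T @ T_inv, np.eye(3)))  # True
```

One thing this sketch makes visible: the rotation here is applied about the frame origin, so any feature that does not sit at that origin gets moved in XY by the rotation as well, which is worth keeping in mind when mixing rotation from one tool with translation from another.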
The above process holds rotation very well but compromises my XY icon alignment accuracy every time. I have also exhausted the adjustment offset feature, which did help but unfortunately did not resolve the issue.
In the past I had a vision process set up with one parent tool looking at my XY icon and a child tool looking at the overall rotation of the outside profile, from which I called in a positional adjustment tool for rotation/angle. In that process I was able to achieve the XY assembly to the needed tolerance, but the outside profiles were rotated off of one another and I had to reject those parts.
I am wondering whether it would be possible to combine the two vision processes listed above: extract the XY data from the second process, and take an average of the two sets of rotational data (one from each process) to help keep one outside profile within the other. Alternatively, is there a way to weight, or take a percentage of, how much of the rotational offset data I apply?
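To illustrate what I mean by weighting the two rotations, here is a quick Python sketch with made-up angles. The weight `w` is hypothetical; averaging on the unit circle rather than averaging the raw degree values avoids a wrong answer when the two angles straddle the +/-180 wrap.

```python
import math

def blend_angles(theta_a_deg, theta_b_deg, w=0.5):
    """Weighted average of two angles in degrees, wrap-safe.
    w is the weight on theta_a; (1 - w) goes to theta_b."""
    a = math.radians(theta_a_deg)
    b = math.radians(theta_b_deg)
    # Average the unit vectors, then take the angle of the result,
    # so the 180/-180 discontinuity does not corrupt the average
    x = w * math.cos(a) + (1 - w) * math.cos(b)
    y = w * math.sin(a) + (1 - w) * math.sin(b)
    return math.degrees(math.atan2(y, x))

# Hypothetical rotations from the two vision processes
print(blend_angles(2.0, 1.0, w=0.7))  # leans toward the first process (~1.7)
print(blend_angles(179.0, -179.0))    # near +/-180, not the naive 0.0
```

For small rotations like mine the wrap case may never occur, but it costs nothing to handle, and the same `w` gives a direct knob for taking "a percentage" of one process's rotation over the other's.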
I'm not sure whether anyone has had success extracting vision data in the manner I am and manipulating it mathematically, but I'm happy to include my programs for reference if anyone needs them to help with this issue.
If there are any other suggestions or recommendations someone can offer me I would sincerely appreciate it!
Thank you for your time,