I have an Epson N2 with a Cognex In-Sight vision system, and I am trying to develop a vision-guided pick-and-place application. The hardware decision was made before I joined this project, so Epson vision is NOT going to be used.
History: I have done this type of application with Cognex and Fanuc/ABB/others in the past. Epson is a little different: it does not have step-through code execution, it does not have mid-cycle position touch-up, and it does not have a subtract-an-offset function. Epson also lacks the INVERSE and RELATIVE operators that other systems have, so I am working on rolling my own versions of those functions for this system.
FYI, this system will not have a dead-center TCP/U-tool where the robot can just move to the X, Y, U reported by the vision system. I will ultimately have multiple styles of parts, and each one's pick orientation will have to be taught on a part-by-part basis, so I will have to do the snap-and-find, then teach and remember the golden location.
Currently I have an Excel spreadsheet into which I can type the Cognex values for a golden location and the run-time location, and it returns the difference as an offset, the same as is found in Fanuc KAREL --> RUN_OFFSET = RUN_LOC : INV(GOLD_LOC), which gives me a set of X, Y, and R values (Epson U values). I expected I could use those values in the Epson and make moves such as
MOVE P5 +X(RUN_OFFSET_X) +Y(RUN_OFFSET_Y) +U(RUN_OFFSET_U).
I would just teach P5 with no offset and save the RUN_LOC values as my GOLD_LOC values for that point. This is because when GOLD_LOC is exactly the same as RUN_LOC, a nil pose (an all-zero offset) is returned: NILPOS_OFFSET = RUN_LOC : INV(GOLD_LOC)
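For anyone following along, here is a minimal Python sketch of the 2-D pose math I am reproducing (the equivalent of KAREL's RUN_LOC : INV(GOLD_LOC)). The function names and the sample gold/run values are mine, just for illustration:

```python
import math

def pose_inv(x, y, u_deg):
    """Inverse of a 2-D rigid pose (x, y, rotation U in degrees)."""
    u = math.radians(u_deg)
    c, s = math.cos(u), math.sin(u)
    # Inverse rotation is -U; inverse translation is -R(-U) * t
    xi = -( c * x + s * y)
    yi = -(-s * x + c * y)
    return xi, yi, -u_deg

def pose_mul(a, b):
    """Compose two poses: result = a : b (apply b, then a)."""
    ax, ay, au = a
    u = math.radians(au)
    c, s = math.cos(u), math.sin(u)
    x = ax + c * b[0] - s * b[1]
    y = ay + s * b[0] + c * b[1]
    return x, y, au + b[2]

gold = (100.0, 50.0, 10.0)           # hypothetical golden location from Cognex
run  = (105.0, 48.0, 15.0)           # hypothetical run-time location
off  = pose_mul(run, pose_inv(*gold))  # RUN_OFFSET = RUN_LOC : INV(GOLD_LOC)
```

When run equals gold, off comes back as all zeros, which is the nil-pose check above.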
So I wrote the equivalent trig-based math function for the Epson, created a faux set of matrix arrays to do the translate/rotate math, and it returns the same X, Y, U values in the Epson as my Excel worksheet, so I know that math works. Now I am not sure the Epson motion offset works as expected. When I translate in X, Y only, it tracks pretty well, but I see errors when I translate and also have U rotation. I expected the X, Y translate / U rotate math to be the same on an Epson system as on a Fanuc or other industrial robot. Is this not so?
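To illustrate the kind of discrepancy I mean, here is a small numeric sketch (the offset and point values are made up). A matrix-style offset OFFSET : P rotates the taught point about the offset frame's origin, while a plain additive +X/+Y/+U (which is how I understand the Epson move above to behave) does not, so the two only agree when the U component is zero or the point sits at the origin:

```python
import math

def rot(u_deg, x, y):
    """Rotate a 2-D point about the origin by u_deg degrees."""
    u = math.radians(u_deg)
    c, s = math.cos(u), math.sin(u)
    return c * x - s * y, s * x + c * y

off = (5.0, -2.0, 15.0)    # hypothetical dx, dy, dU (degrees)
p5  = (300.0, 120.0, 0.0)  # hypothetical taught point, away from the origin

# Additive application: component-wise, as MOVE P5 +X(..) +Y(..) +U(..) appears to do
additive = (p5[0] + off[0], p5[1] + off[1], p5[2] + off[2])

# Matrix-style composition OFFSET : P5 (Fanuc-style): rotate first, then translate
rx, ry = rot(off[2], p5[0], p5[1])
composed = (off[0] + rx, off[1] + ry, off[2] + p5[2])
```

With dU = 15 degrees and a point that far from the origin, the two XY targets differ by tens of millimeters, which matches the pattern I see: X/Y-only offsets track well, offsets with rotation do not.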
If anyone here has implemented an Epson robot with Cognex vision, I would appreciate some input. Please excuse the rambling post; I was trying to be descriptive.