Hello,
I have a special vision application: I need to check whether a "basket" is empty or not. The basket contains metal parts, which the robot first drops into a flowpacker machine.
After the drop, the robot must move to the vision system to check that the basket is really empty (it sometimes happens that some parts stay stuck in the basket during the dropping process).
Currently we use a Keyence colour camera for this check, but due to high reflectivity and very poor repeatability of the basket colours, our first tests were NOK.
So Keyence suggested that we replace the camera with a laser profile sensor (LJ-V7080). The robot will move the basket in front of the sensor and trigger the vision system during the motion. The vision system can then acquire "a few" images (up to 64000 per second...) and, once the basket has been fully scanned, send a GOOD or NOT GOOD signal to my PLC.
For this, Keyence needs an encoder pulse signal physically wired to the vision controller! This signal is used as the trigger to acquire each image.
Does anyone know if there is a way to output a robot motor encoder signal to another system, or to simulate an encoder signal based on the robot position?
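To illustrate the second option, here is a minimal sketch of turning sampled robot positions into quadrature-style encoder states. This is purely illustrative Python, not a vendor API: the function name, the sample list, and the 0.05 mm resolution are all my own assumptions, and a real implementation would have to run on the robot controller or a fast PLC task and toggle physical outputs.

```python
# Illustrative sketch (assumed names, not a real robot/PLC API):
# convert a stream of sampled robot positions along the scan axis
# into A/B quadrature encoder states, one entry per emitted edge.

MM_PER_PULSE = 0.05  # assumed resolution: one quadrature step per 0.05 mm


def pulses_from_positions(positions_mm, mm_per_pulse=MM_PER_PULSE):
    """Return the list of (A, B) channel states produced while the
    position advances through the given samples (forward motion only)."""
    # Standard quadrature sequence for forward motion: 00 -> 10 -> 11 -> 01
    seq = [(0, 0), (1, 0), (1, 1), (0, 1)]
    states = [seq[0]]          # both channels low at the start
    emitted = 0                # quadrature steps emitted so far
    origin = positions_mm[0]
    for p in positions_mm[1:]:
        # How many steps are "owed" for the distance travelled so far
        target = int((p - origin) / mm_per_pulse)
        while emitted < target:
            emitted += 1
            states.append(seq[emitted % 4])
    return states
```

One caveat worth checking before going this route: whether it is feasible at all depends on how fast the robot can report its position. Many controllers only update position outputs every few milliseconds, which may be far too coarse to drive a sensor that samples at up to 64 kHz, so a physical encoder on the axis (or a hardware pulse generator synchronized to the motion) may still be the more robust solution.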
Best regards,
B.Jouret