The cloud app could be used for data collection.
App1 will get the image data from a camera mounted in the robot cell and send it to App2, which
processes this raw image with computer vision and an AI algorithm, and also validates the trajectories generated by the AI algorithm before sending them, together with the positions of the parts detected in the image, back to the robot.
App1 only sends the image data to App2 when App2 requests it, and App1 commands the robot to move along the trajectories provided by App2.
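To make the request/response handshake concrete, below is roughly the kind of KAREL socket exchange I have in mind for App1's side. It is only a sketch and not tested on a controller: the server tag 'S3:', the program name, and the message strings are placeholders I made up, error handling is minimal, and the actual image transfer would still need a proper binary protocol on top of this.

```
PROGRAM app1_link
-- Sketch of App1's side of the request/response link to App2.
-- Assumes server tag S3: has been configured for TCP/IP under
-- SETUP > Host Comm; tag name and message strings are placeholders.
%NOLOCKGROUP
%ENVIRONMENT flbt
VAR
  comm_file : FILE
  status    : INTEGER
  request   : STRING[32]
BEGIN
  -- Wait for App2 to connect on the server tag
  MSG_CONNECT('S3:', status)
  IF status <> 0 THEN
    WRITE('MSG_CONNECT failed, status = ', status, CR)
    ABORT
  ENDIF

  OPEN FILE comm_file ('RW', 'S3:')

  -- Block until App2 sends a request line, then acknowledge.
  -- In the real program this is where the image data (or a
  -- reference to it) would be written back to App2.
  READ comm_file (request)
  IF IO_STATUS(comm_file) = 0 THEN
    WRITE comm_file ('IMG_READY', CR)
  ENDIF

  CLOSE FILE comm_file
  MSG_DISCO('S3:', status)
END app1_link
```

On the App2 side I would just open a plain TCP client socket, send the request line, and read the reply.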
This is what needs to be done; I am exploring ways to get there and drawing on the experience of the users of this forum to keep me on track. So what do you think of the approach I am following? Also, have you ever worked with image data in KAREL?