PickControl and ConveyorTech expertise wanted!

  • Hi everybody,


    I've purchased four KUKA KR4 R600 robots with KR C5 micro controllers running KSS 8.7, together with KUKA.ConveyorTech 8.1 - I will elaborate on that below. I'm working with WorkVisual 6.0.


    First of all, I have not yet received my robots, so I can't test anything yet - I'm trying to prepare as much as I can beforehand because I'm on a tight schedule.

    The intended operation is as follows: three of the robots will pick from one conveyor. The position and orientation of the workpieces are given by a vision system (non-VisionTech) running on an IPC, which sends the information for each workpiece to a PLC; the PLC then distributes a given workpiece to a given robot. Now, I have been advised to purchase ConveyorTech over PickControl, but I have my doubts that ConveyorTech will be able to fulfill the requirements of the system. My main concerns are:


    1. Load sharing. Specifically, when one robot is unable to pick an item, PickControl will allow the next robot to pick it instead, whereas with ConveyorTech this is something to be implemented in the PLC. However, it is not critical that all items are picked, since unpicked items roll back into the feeder, but I presume the load-sharing features of PickControl will give a nicer flow.
    2. External vision system. I've not been able to find out from the ConveyorTech manual how to "marry" a workpiece's position and orientation (given by the vision system) with the tracked workpiece in ConveyorTech (given by the sync switch). It is critical that the workpieces are picked with the correct orientation of the tool because of later processing. Does anybody know how to do this in practice?

      Additionally, it seems that it is not quite straightforward to integrate an external (non-VisionTech) vision system in PickControl - does anybody have any experience with this?
    3. Object types. The workpieces are of different types, which carry information on how they should be handled in a later operation (by the same robot). After reading the PickControl manual, I understand that you can assign object markers to the items to be picked, which might be able to carry the information I'd like to have attached to each item. Does anybody know of a similar thing in ConveyorTech?
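    On point 1, the PLC-side load sharing can be as simple as a downstream fallback chain: each registered item is offered to the robots in conveyor order, and the first robot with room in its backlog takes it; items nobody accepts simply roll back into the feeder, which is acceptable here. A minimal Python sketch of that idea (the queue limit and all names are hypothetical - the real logic would live in the PLC):

```python
# Hypothetical sketch of PLC-style load sharing for three pick robots.
# Each robot has a limited work queue; an item declined by one robot is
# offered to the next robot downstream, and items nobody takes fall back
# into the feeder (acceptable for this application).

from collections import deque

ROBOT_QUEUE_LIMIT = 5  # assumed per-robot backlog limit


class Robot:
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def can_accept(self):
        return len(self.queue) < ROBOT_QUEUE_LIMIT


def assign_item(item, robots):
    """Offer the item to each robot in conveyor order; return the robot
    that accepted it, or None if the item rolls back to the feeder."""
    for robot in robots:
        if robot.can_accept():
            robot.queue.append(item)
            return robot
    return None


robots = [Robot("R1"), Robot("R2"), Robot("R3")]
for item_id in range(12):
    assign_item(item_id, robots)
```

    A real implementation would also dequeue items as each robot reports a completed pick; the point is only that the fallback chain itself is a few lines of logic, not a reason by itself to choose one option package over the other.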

    All in all, the good thing about ConveyorTech is that it seems a lot easier to set up than PickControl - I'm just unsure whether it will perform the way I want it to, despite it being KUKA's recommendation.


    So, you guys: does anybody have advice on the three points highlighted above, or on the choice of ConveyorTech vs. PickControl for this specific use case?

  • I am not sure how simple (or not) this has become.


    Years ago (before PickControl was conceived), I was working on creating such a solution using only ConveyorTech, VisionTech, and a lot of customization. It did work, but it required advanced knowledge and was still way too cumbersome...


    PickControl is supposed to make all this much simpler. From what I understand, this also means one may no longer need a separate resolver for each robot to track the conveyor(s). It should be doable with just one resolver and Ethernet messaging. VisionTech could also run on an external PC. This would allow higher performance, which could be important when handling a variety of complex parts (since they could also be oriented differently).


    Load sharing is the easy part; an option for this is already part of ConveyorTech (Conveyor.Skip).

    Adding a 3rd-party vision system will only complicate things further.

    1) read pinned topic: READ FIRST...

    2) if you have an issue with robot, post question in the correct forum section... do NOT contact me directly

    3) read 1 and 2

  • I've not been able to find out from the ConveyorTech manual how to "marry" a workpiece's position and orientation (given by the vision system) with the tracked workpiece in ConveyorTech (given by the sync switch). It is critical that the workpieces are picked with the correct orientation of the tool because of later processing. Does anybody know how to do this in practice?

    ConveyorTech - at least the versions I worked with - will not see the part orientation given by the vision system, so you will need to craft your own code to take care of this.


    But this is doable. A similar-looking application that comes to mind is wheel assembly on car bodies at automakers. The robots use ConveyorTech to synchronize with the Power & Free line's linear movement, and also use corrections sent by vision systems to align the wheel holes with the car body's hub.

  • Thank you for your quick response, panic mode. Firstly, the new version of ConveyorTech allows for just one resolver connected to the first robot controller; the resolver signal is then shared over a RoboTeam network, so they've made this part a bit easier than having three resolvers for three controllers. However, I do agree with you that it seems I will still need quite a bit of customization with the setup I have.


    Unfortunately, it is not viable in this specific project to use VisionTech, as the vision task is not trivial - we will have to utilize deep learning, so VisionTech will just not cut it. Hence the need for an external vision system, even though this complicates things further as you've correctly stated.

    massula, thank you for your response. That is exactly what I was afraid of: that the orientation information could not be passed on. I see how your example application relates to this, but I can't help wondering if I'd just be wasting time "reinventing the wheel" when it seems PickControl will do this for you. You mention you've worked with ConveyorTech; have you also worked with PickControl?

  • VisionTech is quite powerful and allows a lot of customization. I was modifying the C# code and adding my own data, etc., to the results. If you use another vision system, you can do pretty much anything as long as the message is in the format that PickControl expects (if you are still using PickControl).


    Note that synchronization of the camera and the conveyor can be an issue depending on conveyor speed. This usually means using a high-speed output from the camera to latch the conveyor position, using a separate registration sensor, or keeping the conveyor speed sufficiently low.
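    To get a rough feel for how much trigger timing matters: the position error along the conveyor is simply conveyor speed multiplied by the timing uncertainty of the capture. A quick illustrative calculation (the jitter figures are assumptions, not measurements of any particular camera):

```python
# Back-of-the-envelope: conveyor position error caused by camera trigger
# jitter. error [mm] = conveyor speed [mm/s] * timing uncertainty [s].

def position_error_mm(speed_mm_s, jitter_s):
    return speed_mm_s * jitter_s

# Illustrative numbers: at 300 mm/s, a 10 ms software-triggered capture
# already shifts the part by 3 mm along the belt, while a 0.1 ms
# hardware trigger keeps the error at 0.03 mm.
soft_trigger = position_error_mm(300.0, 0.010)
hard_trigger = position_error_mm(300.0, 0.0001)
```

    This is why a hardware registration sensor (or a hardware trigger line from the camera) is usually preferred over software timestamps once the conveyor runs at a few hundred mm/s.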


  • I cannot comment on anything else, but just out of curiosity: is the vision system by any chance based on an Omron FH controller?


    Even as a big fan of ML, if you can make it work without the ML part you will most definitely have a much easier task at hand. What are you trying to find that you cannot do without ML? :thinking_face:

  • Not sure what "ML" is or what this deep learning vision system is.

    Kuka VisionTech is based on Cognex VisionPro.


  • I cannot comment on anything else, but just out of curiosity: is the vision system by any chance based on an Omron FH controller?


    Even as a big fan of ML, if you can make it work without the ML part you will most definitely have a much easier task at hand. What are you trying to find that you cannot do without ML? :thinking_face:

    No, the vision is not based on an Omron FH controller.

    I agree that using "traditional" computer vision would be a much nicer road to go down. However, the reason for using ML (machine learning) or deep learning is that the items to be picked have little to no markers that would allow us to detect the correct orientation of the item (critical for further processing) with standard vision techniques, and the items will vary widely. What we are seeing today might not be what we are seeing a week from now, but we should still be able to find the "right way up" in terms of orientation.

  • VisionTech is quite powerful and allows a lot of customization. I was modifying the C# code and adding my own data, etc., to the results. If you use another vision system, you can do pretty much anything as long as the message is in the format that PickControl expects (if you are still using PickControl).


    Note that synchronization of the camera and the conveyor can be an issue depending on conveyor speed. This usually means using a high-speed output from the camera to latch the conveyor position, using a separate registration sensor, or keeping the conveyor speed sufficiently low.

    Ah, okay! We were under the impression that there was little to no customization possible with VisionTech. That's good to know.


    With regards to synchronization, does this primarily apply to high conveyor speeds? The conveyor speed for this application will be around 300 mm/s. We want to trigger the vision on a switch on the conveyor and start tracking the item upon registration, and once the image has been processed, we want to "marry" the registered item with the correct (x, y) position on the conveyor and orientation of the item given by the vision system. I believe I read that, at least with PickControl, this was possible with the use of a timestamp check.
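    The timestamp check described above can be sketched as a simple nearest-in-time association: each sync-switch registration stores the time (or conveyor position) at the trigger, and when a vision result arrives it is married to the registration whose timestamp is closest, within a tolerance window. A hedged Python sketch of the matching idea (the data structures and tolerance are assumptions, not a ConveyorTech or PickControl API):

```python
# Hypothetical sketch: marry vision results to tracked registrations by
# timestamp. Not a ConveyorTech/PickControl API - just the matching idea.

TOLERANCE_S = 0.05  # assumed max clock skew between camera and tracking


def marry(registrations, vision_result):
    """registrations: list of dicts with 't' (sync-switch trigger time, s).
    vision_result: dict with 't', 'x', 'y', 'angle' from the camera.
    Attaches the vision pose to the closest-in-time registration, or
    returns None if no registration is plausibly the same part."""
    best = min(registrations,
               key=lambda r: abs(r["t"] - vision_result["t"]),
               default=None)
    if best is None or abs(best["t"] - vision_result["t"]) > TOLERANCE_S:
        return None  # no plausible match; let the part roll back
    best.update(x=vision_result["x"], y=vision_result["y"],
                angle=vision_result["angle"])
    return best


regs = [{"t": 10.000}, {"t": 10.400}, {"t": 10.800}]
match = marry(regs, {"t": 10.402, "x": 12.5, "y": -3.1, "angle": 87.0})
```

    Whether this matching runs on the IPC, the PLC, or the robot controller is a design choice; what matters is that both sides stamp their events from the same clock (or the same conveyor encoder count), since any offset between the two timebases turns directly into position error at the speeds discussed above.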
