Vision-guided conveyor tracking

  • I've been signed up for a cell that will be doing conveyor tracking on a car body that will have a vision offset applied. I'm trying to wrap my head around how to apply the vision offset to the points, and I just want to do a sanity check to see if I'm on the right track.


    Here is what I am thinking for a typical move:


    Code
    CVLMOVE vis_frame+vis_offs+point[1]


    The robot is laser-shot into the vision frame, which is about 4 meters upstream; that frame is represented by vis_frame.

    The vision system will produce a delta from its master part; that delta is vis_offs.

    point[1] is where the robot will be going while tracking.


    The question is, will this work, or do I have to add an additional transform to account for the movement of the conveyor?


  • I am not familiar with Kawasaki, but the conveyor position will need to be reset by a registration sensor. Then the conveyor position can be used as a dynamic base, allowing the robot to track it. Here is some mention of this:


  • Look at the DECOMPOSE, TRANS, and POINT commands in AS Language; that's where I'd start. I'm pretty sure Kawasaki has a vision and synchronous conveyor software add-on, so that might have some more bells and whistles added to it. Reach out to your Kawasaki rep and see if they can pass on the associated literature.
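
    As a quick sketch of those three commands working together (the names elem, flat, and target are placeholders, not from this thread):

    Code
    DECOMPOSE elem[0] = vis_frame                     ; split a location into its X,Y,Z,O,A,T reals
    POINT flat = TRANS(elem[0],elem[1],elem[2],0,0,0) ; rebuild just the translation part
    POINT target = flat+point[1]                      ; compound the result with a taught point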

  • After doing some testing in K-Roset, it appears that the method mentioned in my first post works.


    Now to figure out how to convert a tracking path taught in world into the vision frame.


    Edit:

    Figured it out. Here is how (assuming no vision offset):

    Code
    POINT visionpnt = -vis_frame+worldpnt
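
    As a sanity check, composing the result back onto the frame should reproduce the world point; a minimal sketch, with back, w, and b as placeholder names:

    Code
    POINT back = vis_frame+visionpnt   ; round trip: vision frame back out to world
    DECOMPOSE w[0] = worldpnt          ; split both locations into XYZOAT reals
    DECOMPOSE b[0] = back
    FOR i = 0 TO 5
      PRINT w[i]-b[i]                  ; each difference should print as 0
    END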


  • The thing to always remember about Kawasaki is that the stored value for any transformation is only an XYZOAT value and does not contain ANY information about where/what it was taught relative to.


    As a simple example of what I mean:

    Code
    HERE vis_frame + vis_offset + point[1]


    vis_frame must have been defined already and will probably be relative to the robot base.

    vis_offset must have been defined already relative to vis_frame.

    point[1] is then defined in relation to vis_offset - i.e. HERE defines that variable and stores its XYZOAT value.

    So only one variable - the right-most one - is defined, relative to everything prefixed to it.


    If you then look at the location values (LIST/L), you will see the XYZOAT values stored.


    The same principle applies when moving to a compound transformation too.

    - The right-most variable or TRANS value used is applied as the offset to the prefixed variables.


    The way I operate while using compounds is that the first variable used is, for want of a better word, my reference frame, and all other points are just offsets to it.
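
    To make that concrete, here is a minimal sketch; the TRANS values and the names ref_frame, offs, and p are made up purely for illustration:

    Code
    POINT ref_frame = TRANS(4000,0,0,0,0,0) ; reference frame, e.g. 4 m along base X
    POINT offs = TRANS(2,-1,0,0,0,0)        ; small offset, relative to ref_frame
    HERE ref_frame+offs+p                   ; only p is written, as the XYZOAT value
                                            ; of the TCP relative to ref_frame+offs
    LMOVE ref_frame+offs+p                  ; resolves back to the same world pose, so
                                            ; redefining ref_frame drags p along with it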


    From what you've described above, it should work no problem.


    Just be mindful if the conveyor is being driven using the robot's external axis, as this forms the synchronous part of the positional data and would need to be programmed as an external axis, away from the usual synchronous positions, which will include JT7 motor positional values.


    If the conveyor is being independently controlled and you are just feeding the conveyor position directly from an encoder to the robot, you should be golden.

  • And to concur with panic mode: at the start of conveyor tracking, you would indeed reset the tracking value, either to 0 or to any offset you require, based on a part detection sensor.


    Regarding the link panic mode was pointing to: if you look at my K-Roset demo and would like the base code I did for it, send me 'conv' and I'll send it over to you.

    It's purely for simulation purposes, but it follows all Kawasaki programming principles and functionality.


    In that demo, I am also using the tracked conveyor position to set my external slide axis to follow the conveyor too.

  • The robot has no control over the conveyor. We are mounting an encoder on a friction wheel and installing that to monitor the conveyor position.


    The plan is to have the car break a photo eye as it enters the cell. I'll either have the motion program waiting on that bit, or a background program watching for it. It will depend on the spacing of the cars in the plant. Looks like CVSET handles this.
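
    For the motion-program variant, here is a minimal sketch, assuming the photo eye lands on external input 1001; the CVSET argument shown is an assumption too, so check the conveyor option manual for the real syntax:

    Code
    SWAIT 1001                          ; block until the car breaks the photo eye
    CVSET 1                             ; register the part / reset tracking here
                                        ; (argument is a guess - see the manual)
    CVLMOVE vis_frame+vis_offs+point[1] ; then start the tracked moves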


  • Update on this thread. Turns out the above method does not work. Well, it does work, but only if the position you are trying to shift is at the zero conveyor position.


    In order to get it to work on all tracking positions, I had to do the following (sketched in code after the list):

    1. Extract the taught conveyor position of the point I want to shift.
    2. Move the visframe downstream by that amount.
    3. Apply the offset to the downstream shifted visframe.
    4. Project the shifted visframe back to where the original was.
      1. In my case along the shifted X vector, since it pointed back to the original vis frame.
    5. Find the delta between the original and the projected back frame.
    6. Apply that delta to the original visframe.
    7. Use the new visframe+delta to move to the original point.
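
    Here is that sequence as a sketch in AS, assuming the conveyor travel lies along the vis frame's X axis, with X pointing back upstream at the original frame (flip the signs if yours points the other way); d and the intermediate names are placeholders:

    Code
    d = 1500                                      ; 1. taught conveyor position of point[1], in mm
    POINT shifted = vis_frame+TRANS(-d,0,0,0,0,0) ; 2. slide vis_frame downstream by d
    POINT offsd = shifted+vis_offs                ; 3. apply the vision offset out there
    POINT projd = offsd+TRANS(d,0,0,0,0,0)        ; 4. project back along the shifted X vector
    POINT delta = -vis_frame+projd                ; 5. delta between original and projected frames
    CVLMOVE vis_frame+delta+point[1]              ; 6.+7. move using the delta-corrected frame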

