Positioning panels with sensors

  • Hello Everybody,


I'm new to robotics, and working on a project where I have to find the orientation of the products relative to the TCP of the robot.


    Setup:
Robot: KR 60-3
    Controller: KRC4 8.8.33


We have to handle multiple panels, each with different dimensions.
The panels will be picked from multiple infeed stations; a vision system will identify the product type.
Based on that, I'm able to roughly locate the orientation of the panel in the gripper.


The attached image shows the steps of the measurement procedure I was thinking about.
The result I need is just a corner of the panel (X/Y) and its rotation A.


Step 1:
The robot moves to the measurement area.
This position will always have the same TCP rotation; based on the product length and width I can define the X/Y offset.


    Step 2:
    Find the length of the panel by moving the robot towards S2/S3.


    Step 3:
Based on which sensor is found first, the robot has to rotate around it to find the other sensor.
From there I'm able to define the rotation of the panel relative to my gripper (TCP), and the offset in width.


    Step 4:
    Find S1 and define the length offset of the panel.



I understand that I can find the sensor trip positions with an interrupt, save the position, and so on.


But I need a little help with step 3. My first idea is to calculate a TCP at the switch position of whichever sensor is found first, using the actual robot position,
then rotate around that point to find the other sensor.


I was wondering: are there easier options to do this?
For example:
Is it possible to skip the calculation of a TCP at the sensor, as I suggested before?
Something like: S2 is found, then rotate around the fixed position of the sensor switch point towards S3.



    Thanks.

  • Wouldn't it be simpler to do a simple search for S1, then S2, then use a trigonometry calculation to determine the angle between them? That would eliminate the need for a slow rotation to line up to the long edge.
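The two-point idea boils down to a single atan2 call on the two recorded trip positions. A minimal sketch of that trigonometry in Python (all coordinates are made up for illustration):

```python
import math

def edge_angle_deg(p1, p2):
    """Angle of the line through two recorded sensor trip positions,
    measured in the base X/Y plane, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

# Hypothetical robot positions latched when S1 and S2 tripped
s1 = (100.0, 50.0)
s2 = (300.0, 60.0)
skew = edge_angle_deg(s1, s2)  # skew of the panel edge relative to base X
```

The same two positions also give the perpendicular offset of the edge, so no rotation move is needed just to measure the angle.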

Thanks for your answer, SkyeFire.


    That would be nice and even quicker.


But the sensors are mechanical switches, so I can't pass through them like a laser beam.


Then the next question comes to mind: for calculating the orientation I need at least 3 points. <- Correct me if I'm wrong.
The challenge is that the product in the gripper has an unknown orientation, and the TCP is not always at the same spot (length/width) of the panel.


Your suggestion sounds to me like:


1. Find S1 and save the position.
2. Move away from S1, otherwise I will probably damage the sensor.
3. Find S2 and save the position.
4. Move away from S2, otherwise I will probably damage the sensor.
5. Search S1 again and save the position.
6. Calculate.
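With two touch points on one edge and one on the adjacent edge, the corner and rotation follow directly. A sketch of that geometry (hypothetical points; assumes the two panel edges are perpendicular, which holds for rectangular panels):

```python
import math

def panel_pose(pA, pB, pC):
    """Corner X/Y and rotation A from three touch points:
    pA and pB on the long edge, pC on the short edge.
    Assumes the two panel edges are perpendicular."""
    a = math.atan2(pB[1] - pA[1], pB[0] - pA[0])   # long-edge direction
    # distance of pC along the long edge, measured from pA
    t = (pC[0] - pA[0]) * math.cos(a) + (pC[1] - pA[1]) * math.sin(a)
    # corner = foot of the perpendicular from pC onto the long edge
    corner = (pA[0] + t * math.cos(a), pA[1] + t * math.sin(a))
    return corner, math.degrees(a)

corner, angle = panel_pose((0.0, 0.0), (400.0, 0.0), (50.0, 120.0))
```

So three points are indeed enough: two fix the edge line and its angle, the third fixes where the corner sits along that line.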

I did the exact same job a few years ago.
My approach was to make a TCP at each sensor and a base lined up with sensors S1, S2, S3.



    then:
    1. move panel towards S1/S2
2. when either S1 or S2 is sensed (by interrupt), save which sensor was tripped and move the panel back to the position where the interrupt occurred
3. rotate about the TCP of the tripped sensor (S1 for example) until the other sensor is triggered (by interrupt)
4. rotate back to the position where the interrupt occurred. now the panel is lined up with S1/S2.
5. move in the base until S3 is triggered, then move back to the position of the interrupt. now the panel is completely aligned.



    advantage of this is that one does not need to do any math on their own.
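For anyone who does want to check the geometry, the step-3 move is just a plain 2-D rotation about a fixed pivot, which is exactly what placing an external TCP at the tripped sensor gives you for free. A sketch with made-up coordinates (not controller code):

```python
import math

def rotate_about(point, pivot, deg):
    """Rotate `point` about the fixed `pivot` by `deg` degrees,
    as the robot does when the external TCP sits at the tripped sensor."""
    r = math.radians(deg)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(r) - dy * math.sin(r),
            pivot[1] + dx * math.sin(r) + dy * math.cos(r))

# The tripped sensor stays put; a point on the panel swings around it
s1 = (200.0, 0.0)
p = rotate_about((500.0, 0.0), s1, 90.0)
```

The pivot itself never moves during the rotation, so the already-found edge point stays on its sensor while the other edge sweeps toward the second one.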

    1) read pinned topic: READ FIRST...

    2) if you have an issue with robot, post question in the correct forum section... do NOT contact me directly

    3) read 1 and 2

  • Thanks Panic mode,


A base lined up with S1, S2, S3 was already in my plan, but I still have 2 questions.


    1:

    Quote

    my approach was to make TCP at each sensor and base lined up with sensors S1,S2,S3.


Did you predefine the TCP points for S1, S2, S3, or did you create them based on the actual position when the sensor triggered?


    2:
Is there an advantage to searching S1/S2 first?


For example: if S1 is triggered first, rotating around it will most of the time result in triggering S3,
unless the corner of the plate has passed beyond S2.
See the attachment where I sketched the situation.


Thank you.

Sorry,


In my description the sensor order was different: S1 and S2 were both on the same (long) side of the sheet, S3 was on the short side.


I was not pressed for time but still wanted decent timing... this is why I moved the broad side towards both sensors, without knowing which one would be tripped first...


Actually, in this case there was a total of six sensors (three per side) due to the wide range of products (some 40 products, lengths from 0.9 to 5.8 m), and the client wanted the first edge to be aligned to be what they called "the good edge" (not necessarily the long one).


As SkyeFire mentioned, rotation was the slow part, but this was due to the physical setup being less than optimal: the integrator mounted the lasers only about an inch (25 mm) from a hardstop, which meant the stopping distance had to be very short. With more room, the robot could have used a higher speed and finished the alignment sooner. I was given tons of time (up to 55 s was acceptable), but despite the low speed and huge distances, the entire alignment took about 10-12 s.


Also, in this case precision of alignment was the highest priority. This is why, after each edge was detected, I moved the sheet back to the interrupt position before scanning for the next edge. It was nice to see that the fine laser point was just hitting the edge and... it STAYS on it during further operations (rotation and linear scan).


When speed is important, I would consider using the X33 interface (high-speed measurement) or two-speed edge detection. That means you can use a high search speed to detect the edge the first time: detection is quick but the accuracy is not great. However, you are now close to the sensor and can search in reverse at low speed (only a short distance is traveled at low speed, since you are already close). I would do some comparison measurements to see if this is really justified; in many cases the slow back-search may not be needed.
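The two-speed idea is easy to put in numbers. Assuming the latched position simply overshoots the true edge by speed × input delay, a quick sketch (the 12 ms delay and all speeds are made-up figures for illustration):

```python
def latched_edge(true_edge, speed, t_react, direction):
    """Edge position the controller latches when the input is seen
    t_react seconds late while moving at `speed` (mm/s) in `direction` (+1/-1)."""
    return true_edge + direction * speed * t_react

T_IO = 0.012                                    # assumed ~12 ms input delay
coarse = latched_edge(500.0, 200.0, T_IO, +1)   # fast pass: ~2.4 mm past the edge
fine = latched_edge(500.0, 10.0, T_IO, -1)      # slow reverse pass: ~0.12 mm short
```

The fast pass only has to get you near the edge; the slow reverse pass then shrinks the latency error by the speed ratio.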


Thank you,


Very helpful!


Now I have a plan and I can write some code; I have to do the modification next month.



Sorry, my post was not ready.


About the cycle time of the measurement, I will find out what it turns out to be.
All the panels have a max error on how they're put in the infeed stations, so I'm able to work out a safe position before I hit any sensor.


About the X33 interface: is it an option, or is it available as standard on KUKA controllers?

  • As Panic stated, the Fast Measurement inputs have a faster reaction time than the "regular" $INs -- 5us, IIRC, vs 12ms. Combined with an Interrupt using BRAKE and RESUME, this can allow for faster search-to-contact motions than the slower inputs. Inertia is still an issue, though -- no matter how fast the electronics react, it still takes time and distance for the robot to physically come to a halt. So you'll need to work out your fastest safe search speed by trial and error.


    KRC4 supports 4 fast-measurement inputs, but you need a special cable to access them from the CCU board.
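The gain from the fast inputs is easy to estimate: the position uncertainty contributed by input latency alone is speed × reaction time (the 12 ms and 5 µs figures are the ones quoted above; the 200 mm/s search speed is an assumption):

```python
def trip_error_mm(speed_mm_s, t_react_s):
    """Position uncertainty contributed by input latency alone."""
    return speed_mm_s * t_react_s

slow_in = trip_error_mm(200.0, 0.012)   # regular $IN at ~12 ms: ~2.4 mm
fast_in = trip_error_mm(200.0, 5e-6)    # fast-measure input at ~5 us: ~0.001 mm
```

Note this is only the latency term; braking distance after the trip still depends on speed and deceleration, which is why the search speed must be found by trial and error.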

  • Well, we applied the Fast Measure inputs on X33.


The interrupt works very nicely, and so does the rotation around the sensor.


But I am struggling with the offset I calculate.
First, what I want to achieve are the following offsets:


the rotation around the Z-axis: the A-offset in the tool;
the width of the panel: the X-offset in the tool;
and finally the length of the panel: the Y-offset of the tool.
If I have these values, I have the distance and orientation of the panel corner relative to the origin of my tool.


    What i do:


    1. Move panel to S2 and S3 (see attachment in previous post).
2. When S2 or S3 is triggered, save the position at the trigger, then rotate around it.
3. When the opposite sensor is detected, save the position at the trigger; the panel is now aligned with the sensors.
4. Now I can calculate the orientation: I compare it against the angle of the base through S1 and S2, then subtract the actual rotation of the gripper (keeping the -180/180 wrap in mind).
This works fine: if I put the offset angle in the new tool, the robot keeps the orientation.
5. Now that the rotation of the panel is aligned with the sensors, I'm able to compare the actual robot position with the sensor position on the X-axis.
The difference between them is the offset, which I apply to the new tool.
6. Then move to S1. When the sensor triggers, I subtract the actual position from the sensor position.
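For the -180/180 bookkeeping in step 4, a helper that normalizes any angle difference into a single half-open range avoids the wrap-around edge cases (a small Python sketch, not KRL; the example values are made up):

```python
def wrap_deg(a):
    """Normalize an angle to the half-open range [-180.0, +180.0) degrees."""
    return (a + 180.0) % 360.0 - 180.0

# Hypothetical values: panel edge angle minus current gripper A
offset_a = wrap_deg(170.0 - (-150.0))   # raw difference 320 wraps to -40
```

Applying this to every subtraction of two A angles means the sign flip at the +/-180 boundary can never leak into the tool data.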



After all that, I activate the tool and move in another base towards a point, for example X 100, Y 100, A 180, B 0, C 180.


This seems to work, but I'm struggling with the following situation.


When testing, I move the robot to a fixed pick-up position.
The panel is placed randomly; after positioning, the panel moves to the point suggested before.
I have drawn the contours of the panel on paper once positioning is done, so I can check the repeatability of the system.


When I retry the procedure and change the X/Y position of the panel before it gets picked, it comes back to the same position.
However, if I change the rotation of the panel, that results in a totally different end position.
I'm racking my brain over it: the calculated X and Y positions look good,
but a change in rotation produces strange results.


Perhaps one of you has experience with this.
Is there something weird in my calculation procedure? My main question is how to do the procedure properly.



Does not sound good to me... you need to do a sanity check before programming. I would suggest practicing with props and moving them by hand until you figure out the CORRECT sequence, then do the programming... for example, use a business card as a panel substitute.


Your step 2 is not ok... are you sure you are rotating about the correct point? You are skipping things... you are supposed to bring the panel edge BACK to the tripped sensor, then rotate ABOUT the SENSOR. This can be done in different ways; probably the simplest is to create an external tool TCP at each sensor.


