array/grid-wise measurement with kuka robot

  • ok, first things first... you should really get to know KRL basics before writing KRL code.


    for example:


    1. variables can be declared in both SRC and DAT, but where you declare them determines their lifetime and scope, as well as your ability to monitor them. i don't have time to go in depth, but you probably want to declare and initialize most of them in the DAT file.


    2. things you marked as "globals" are not global at all, they are pure runtime variables (see the beginning of your SRC file). in KRL all globals MUST be declared in the DAT file. i would add that they should also be initialized in the DAT file, even if the value changes later. this lets you see the last value after a reboot, for example. you can still override the initial value in SRC if you need to (see the sketch at the end of this post).


    3. variables declared in SRC need to be declared at the top of the file, but any user-added initialization needs to go either inside the designated fold within INI... or... after INI. you ended up setting the orientation type before INI, and of course it has no effect - because INI is executed later and resets all motion parameters. the same goes for other motion parameters, including velocity and acceleration. you set those using a FOR loop, but that is completely useless since INI resets everything.

    even if you moved this after the INI fold, it still would have no effect, because most of your motions are inline form instructions and they set everything locally - position, speed, acceleration etc. and not just that: once an ILF motion is executed and all those parameters are set by it, all motion parameters keep those values until something else modifies them (tool, base, speed, accel, approximation etc.). so the ILF motion to P4 sets everything not just for P4 but also for

    PTP_REL {X 250, Y -1000, E1 300}

    and

    PTP_REL newLine


    then once the PTP P6 is executed, all of the motions in your FOR loop will use the P6 settings. the only motion that will be different is the final PTP HOME1.


    4. in KRL, position data can be declared in two ways - either in Cartesian or in axis form. home positions are in axis form; most other positions are Cartesian. Cartesian positions do define the correct place and orientation of the TCP, but to define the robot arm pose one must also assign values to Status and Turn. without those values specified, the robot will try to figure out on its own which S and T values to use. and that is not something you want to leave to a brainless machine unless you really know what is going on.
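    for illustration, here is a minimal sketch pulling points 1-4 together (module name, values and the S/T numbers are made up - adapt them to your program):

    Code
      ; MYPROG.DAT - globals live here, initialized with defaults
      DEFDAT MYPROG PUBLIC
        DECL REAL velHome = 20.0 ; visible outside the module thanks to PUBLIC
        DECL E6POS startPos = {X 100.0, Y 0.0, Z 800.0, A 0.0, B 90.0, C 0.0, S 6, T 27, E1 0.0}
      ENDDAT

      ; MYPROG.SRC - declarations at the very top, user setup AFTER the INI fold
      DEF MYPROG()
        DECL INT i ; pure runtime variable, fine to keep in SRC
        ;FOLD INI
          ; ... generated initialization - resets all motion parameters ...
        ;ENDFOLD (INI)
        $ORI_TYPE = #CONSTANT ; placed after INI, so it is not overwritten
        FOR i = 1 TO 6
          $VEL_AXIS[i] = 30 ; percent of maximum axis velocity
        ENDFOR
        PTP startPos ; E6POS with S and T -> unambiguous arm pose
      END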


  • hey panic,


    first of all, i wanna really thank you for your patience and engagement.


    although you said you don't have time for this, you still kindly took the time to give detailed tips.


    recently i was away from this kuka project for a week, so unfortunately i can't work on it full time.


    further, you are right ... i never had any kuka krl lessons or anything like that. the task is that we wanna use the kuka to help us calibrate our system, but we don't have the human or financial resources to do this thing "right"


    so realistically i won't get much better at coding krl without tips like the ones you guys give here


    so yeah, obviously i misinterpreted the initialization behavior of krl and the deeper links and meanings of inline coding vs. expert coding. but at this point, all i can really do is read the pdfs i've gathered and try to adapt those code examples..


    edit: jesus, the text below the pic was dropped >.>:

    ...but honestly i think those are not great documents at all, or maybe i just can't read my wanted information between the lines.


    well anyway.


    yesterday i gave the kuka another 15 min and tried to fix the parameter placement. i put the velocity limitation loop and the $ORI_TYPE parameter (meant to keep the orientation of the flange) right in the default INI section.


    the velocities immediately started to work. the ori_type thing did not. the plain x-y trajectories near the linear axis are still massively changing the flange's orientation. i hope that today i can experiment with placing setup commands in the xxx.dat file, as you mentioned. maybe there is another ori-type command somewhere which overwrites mine from the INI section. i don't know.


    so i wanna thank you again and will keep you guys updated. so far my issue is not gone. the KR C2 is still not behaving as in my simulation video (posted at the top)


    have a nice day/night :smiling_face:

  • i suggest you work your way up, and start with the fundamentals.

    you only use PTP motions and, as the name suggests, PTP is "point-to-point".

    in terms of PATH and knowing where the TCP is, you only get those two points - the start and the end of the motion. if the robot is not at the start or end of the programmed motion, it does not keep track of the path. basically everything in between those two points is unknown and unpredictable.


    yes, you can read the current robot coordinates pretty much any time, but the path is in no way planned or anticipated. as a result the TCP can (and does) go wild through space, simply interpolating between the only two known points. in other words there is no tool orientation control here.


    if you want to control tool orientation, you must use CP motions and not PTP motions. CP means "continuous path": during the entire motion (from start to end), the TCP is on a planned path (a straight line or a circle).


    there is also a difference in how the robot moves....


    in PTP motions all axes start and stop moving at the same time. this is called a synchronous move. an axis that needs to move less will simply travel slower, so it still arrives at the same time as the others.

    since all axes are moving together, their combined speeds result in the fastest robot motion possible.


    in CP motions, the TCP moves along a predefined line or curve. one axis may run fast initially, then slow down, then run faster again during the move. the objective here is path accuracy, so the robot axes are not synchronized. but the added control constraints introduce new challenges - singularities, having to have tool and base specified, etc.
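    a simplified sketch to illustrate the difference (positions made up):

    Code
      ; PTP: all axes synchronized, TCP path between the points is undefined
      PTP {X 500, Y 0, Z 800, A 0, B 90, C 0} ; tool orientation en route: uncontrolled

      ; CP: TCP follows a straight line and orientation can be held
      $ORI_TYPE = #CONSTANT ; keep tool orientation fixed during CP motions
      LIN {X 500, Y 500, Z 800, A 0, B 90, C 0} ; needs valid $TOOL and $BASE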


  • yep, yep, yep, ... you are right again. now that you're describing this, i remember some lines in the expert programming guide..



    I can also see there that $ORI_TYPE corresponds to CP motions (see page 70 in the KRC2 Expert Programming guide). It's not stated entirely unambiguously, but it is in there.


    So again, i really thank you for your hints and i will try to change it now.

  • grml .. it is still not working

    strongly simplified:

    1. hint:

    use .dat to declare and set globals

    -> since it's just a basic run with a massively flat program hierarchy, i technically don't need to do anything with the .dat


    2. hint:

    use init area to set and declare

    -> since this is crucial, i placed vel_axis and acc_axis and ori_type in the INI area.

    -> since in this test scenario i don't care about the path between the grid points of interest, S and T are untouched and PTP_REL was used

    -> result: movements and velocity are proper, but trajectories with PTP_REL commands lose the flange's orientation not only along the path, but even at the grid points


    3. hint:

    use ori_type in combination with CP commands, e.g. LIN_REL

    -> since i still don't care about the flange's orientation while moving to a grid point, S and T are still untouched

    -> result: orientation at the grid positions is still broken



    i don't get it...

    with a super simplified code example, where i don't care about anything but the flange's orientation at the start and end point, and using an x-y movement command, one has to be able to get this fu... flange into the correct position. what the hell.

    i don't care what happens between the grid points. even if the robot jumped off the external axis and started to dance, as long as the flange is horizontal again when the "tcp" reaches the target grid position, i would say "ok, nice... now i can improve the in-between"


    ggggrrrrr, jesus.. i just want to reach this.. even without keeping the in-between orientation



    can it be so hard ....

  • Okay, let's back up. You're just trying to "walk" the robot through a grid (2D?) of locations, correct? With no real regard for the position or orientation between those grid points?


    I wrote a Grid library that's in the Manuals, Software & Tools sub-forum. It's based on using Base shifts, but it might be helpful.


    Still, this should be simple. For one thing, as long as you don't alter the ABC values of your point or frame, your destination should have exactly the same orientation, even if the orientation changes "en route". LIN or PTP should be irrelevant.


    First, you need one "hand-taught" point that sets up your ideal orientation. Ideally, this would be the first point of your grid, but it doesn't have to be. If this is an ILF, then it will set all the motion parameters for that motion type. I'm going to stick to PTPs to keep this simple.


    This assumes that your grid is in the XY plane, and that moving 'up' in Z will give you clearance 'away' from the surface you're working near. Adjust the values as needed. Creating a Base that's aligned with your work surface would be a good idea, and your ILF hand-taught point needs to use that Base.

    This example uses shifts in the Base coordinate frame (see the sketch below). The ':' operator does matrix multiplication of two frame/position type variables, and the order matters. If you used _eStartPoint : _fShift, your shifts would be executed in Tool coordinates, and would be vulnerable to your initial starting orientation.


    Recording _eStartPoint before the FOR loops, and not altering it, means that each point in the grid should be independent of the one before it, eliminating cumulative errors.


    A FRAME variable is just XYZABC, with no S&T values, so with this example the PTP moves "inherit" the S&T from the previous position. However, if the shifted Cartesian position pushes the robot into a different S&T combination, the robot will choose whichever S&T disambiguation requires the least motion (in joint space) from wherever it is starting.
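    A rough sketch of what I have in mind (the point name, loop bounds, and grid pitch are hypothetical - adjust them to your cell):

    Code
      DECL FRAME _fShift
      DECL E6POS _eStartPoint
      DECL INT _nRow, _nCol
      ; (declarations go at the top of the DEF, as usual)

      PTP XP1 ; XP1 = your hand-taught ILF point - sets Tool, Base, speeds
      _eStartPoint = XP1 ; record the taught pose, including S&T

      FOR _nRow = 0 TO 2
        FOR _nCol = 0 TO 7
          _fShift = $NULLFRAME
          _fShift.X = _nCol * 200 ; grid pitch in mm (made-up value)
          _fShift.Y = _nRow * 200
          PTP _fShift : _eStartPoint ; shift expressed in Base coordinates
        ENDFOR
      ENDFOR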

  • Your code looks OK to me.

    At the beginning you set $TOOL, $BASE and velocities, but afterwards you use inline movements, which overwrite your settings.

    You set $TOOL to zero, but the movements use tool 14.

    But anyway, even with these small bugs, the orientation should be constant in your program.

    Does the robot TCP stay still when you move the robot's external axis while world coordinates are active?
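    If you want the expert code and the inline forms to agree, something like this at the top would do (the base number is just a guess):

    Code
      $TOOL = TOOL_DATA[14] ; the same tool number the inline forms use
      $BASE = BASE_DATA[1] ; the base your points were taught in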

  • Hi there,


    after weeks of doing other things, i finally got back to my kuka work. Sorry for not answering for so long.

    ...

    But anyway, even with these small bugs, the orientation should be constant in your program.

    Does the robot TCP stay still when you move the robot's external axis while world coordinates are active?

    Technically this is good to hear, since I'm also convinced that it should work.


    Q: Does the TCP stay still while E1 is moving?

    A: Yes, it stays where it is (visual inspection and the "actual position" view/window).


    Now i gave another try at mimicking the expert code from the Expert Programming guide (..and i guess that's why some earlier comments from you guys, aka "that's not expert programming so far", occurred).

    I'm referring to the attached *.pdf, page 76.
    There one can see example code for a constant-orientation movement.


    Now i created a new file via NEW - FILE - EXPERT, not via NEW - FILE - MODULE, and typed roughly the following (the exact code was lost with the post; this is only a sketch along the lines of the guide's example, values made up):
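    Code
      $ORI_TYPE = #CONSTANT ; hold the flange orientation along the path
      $VEL.CP = 0.3 ; path velocity in m/s (value made up)
      LIN_REL {X 250, Y -1000} ; CP motion - orientation stays fixed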


    And now i think this is theoretically working.

    The flange's misalignment seems way lower than before (visual inspection).


    Indeed, when i put a (bubble) level on the flange, i can see that the flange does not stay horizontal, but i think this is because we don't have a calibration tool to set the adjustment correctly.

    We can only adjust the robot by "keeping its white marks aligned in HOME position" (thus visually).



  • Now the "slightly misalignment" means,..

    - when i move the robot to grid points that are on the right-hand side of the centered position (A1 = 0), the level gauge is at one end (a)

    - when i move the robot to grid points that are on the left-hand side of the centered position (A1 = 0), the level gauge is at the other end (b)


    Am i right / could i be right in assuming that this is because we did not use an EMT or another reference to adjust the robot's world frame?

    I assume this remaining, symmetrical misalignment of the flange's orientation is probably there because i cannot adjust the robot with proper tools.

  • We can only adjust the robot by "keeping its white marks aligned in HOME position" (thus visually).

    You are aware that the marks are not in the mastering position? They only show where a mastering movement should start. You could reach better results even without the mastering tool from KUKA by using a pen or a measurement clock/dial gauge. This has been explained many times in this forum.

  • You are aware that the marks are not in the mastering position? They only show where a mastering movement should start. You could reach better results even without the mastering tool from KUKA by using a pen or a measurement clock/dial gauge. This has been explained many times in this forum.

    okkkkaaayyy =-)


    "of course" this is new to me. i kinda think, it's difficult to know/search something, that you/one don't ever minded to look for.. so far :grinning_face_with_smiling_eyes:

    but yeah, i will check it out. until now i just thought one could only adjust the robot properly with certain tools


    thank you very much for your hint

  • there are different calibration marks possible - coarse ones and fine ones.


    the coarse ones are big, wide, and usually painted white. they cannot be used for precision; in fact they are often a result of casting, not machining. therefore they serve only as a starting point to begin calibration. the actual reference point is usually some 2-3 deg away from there, normally in the negative direction.

    example:


    the fine marks are precise, narrow, and usually engraved or laser etched. they do coincide with the actual reference position.

    example:


    fine marks are used rarely. if present at all, they are normally only found on the robot wrist axes (most have them on A6, but some robots may have them on A4, A5 and A6).


    yes, one can adjust the robot properly if one has the correct tools. messing with mastering is something to be avoided unless there is a real reason for doing it and one really knows what to do. but i have encountered plenty of people who assumed something was correct and just did it anyway. :loudly_crying_face:


    Quote


    it's difficult to know/search for something that one never thought to look for


    that is why there is the pinned topic READ FIRST. it tells you where to start and what to watch for. this very topic, for example, covers conversations spanning weeks. in much less time one could read a single manual, such as the Programming Manual for System Integrators, and many things would become much clearer.


  • UPDATE:


    Many thanks again, Fubini and Panic, for your patience and explanations about calibration.

    So the actual calibration process seems to have worked well enough.
    I picked up the idea of the clock gauge and created an adapter (see figure).


    [figure: clock gauge adapter]


    While i gently pressed the rod, a mate slowly performed the movements. While doing so i was watching for the bottom of the "V". After this, my orientation is kept well now and I can live with that. So indeed the assumption in post #31 was right: my KUKA was anything but calibrated.


    I recorded a short video clip for further documentation here (company), where you can see the current array-like movement. The corresponding code file is also attached. This code leads exactly to the trajectories shown in the linked video file.


    my_nextcloud_link


    In this run, I spanned a workspace area of 1400 mm in X by 2000 mm in Y and tested it with an 8 by 3 grid.

    So my "interface" in *.src file is this..


    Code
      ;IMPORTANT user's input here:
    gridWidth = 1400.0 ;geometric width of the workspace area in [mm]
    gridHeight = 2000.0 ;geometric "height" of the workspace area in [mm]
    
    tPulse = 0.5 ;output pulse duration in [s]..
                 ;..to measure inductive voltages (used in function call)
    
    nRow = 8 ;grid's height (number of rows)
    nCol = 3 ;grid's width (number of columns)
      ;IMPORTANT user's input end


    Here i define my area and the number of rows and columns. The rest is done automatically with statements like:


    Code
            IF (row == idxDragE) THEN ;"drag E1 to enlarge Y-workspace" functionality
              nextP.Y = nextP.Y + yStep ;
              nextP.E1 = nextP.E1 - yStepE1 ; 
              LIN nextP ;
            ELSE ;regular Y-step functionality
              nextP.Y = nextP.Y + yStep ;
              LIN nextP ;
            ENDIF
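    The step sizes, for example, are derived from those inputs roughly like this (a simplified sketch - the actual code is in the attached file):

    Code
      xStep = gridWidth / (nCol - 1) ; column spacing in [mm]
      yStep = gridHeight / (nRow - 1) ; row spacing in [mm]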


    The grid elements critical for singularities are those where the KUKA's wrist is near the external linear axis.

    In the combination of rows and columns shown here, there was no singularity issue, even if it looks like it (in the video).

    Next i will successively increase the number of rows and columns, and when a singularity occurs I can try the approaches you guys gave here.
    I am optimistic.


    - First, I would try setting $CP_VEL_TYPE to #VAR_ALL at an appropriate position in the code (one-liner below this list).

    - Second, I would try to avoid CP motions in these critical spots (maybe I can hold the flange's orientation anyway).

    - Third, I would try to keep the external axis far away from the current row, so the robot always has to lean over more to reach the current grid point.
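    For the first approach it should be enough to place one line after INI, before the CP motions (as far as I understand it, the controller then reduces the path velocity near a singularity instead of stopping with an error):

    Code
      $CP_VEL_TYPE = #VAR_ALL ; allow automatic CP velocity reduction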


    So the first part/issue of the array-wise movement is solved (orientation).

    The second part/issue (singularity) is not solved yet.


    UPDATE ends
