Posts by arikrundquist

    Aha! I wouldn't have thought of that, but it works perfectly! Thanks kwakisaki!

    Why do you need a system variable?

    The UO[2] "System ready" indicates exactly what you need.

    I was using UO[2]; unfortunately, UO[2] goes high a decent bit before the robot will respond to streamed positions, so I was hoping there was another value that I could read that signaled exactly when the robot would begin to respond to commanded motions. I reached out to Fanuc and got this from them:

    Quote from Fanuc Technical Support

    Normally for a newer robot controller of R-30iB or B Plus the servos will take about two seconds to fully recover from a "servo off event" before motion will occur. If the robot is just sitting still and after about 20 seconds with no command of motion then it will go into a "power save mode". Upon recovery from a power save mode will normally take a little over a second, about 1.5 seconds then motion will occur. You may use these times for your [application] to start the streaming process after stopped motion[.]

    It seems the official response is that no such value exists... the closest we can get is watching for SYSRDY and then waiting a second and a half :thinking_face:
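Since there's no real signal, here's the workaround I landed on, sketched in Python: gate streaming on SYSRDY plus the ~1.5 s settle time Fanuc quoted. `read_sysrdy()` / how you sample UO[2] is up to your own interface; the clock injection is just to make the logic testable.

```python
import time

SETTLE_TIME = 1.5  # seconds, per Fanuc support's estimate after power save recovery

class MotionGate:
    """Tracks SYSRDY (UO[2]) and only reports 'ready to stream' once the
    signal has been high for SETTLE_TIME. Purely a client-side heuristic."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._sysrdy_since = None  # timestamp of the last SYSRDY rising edge

    def update(self, sysrdy: bool) -> bool:
        """Feed the current SYSRDY state; returns True once it is
        (probably) safe to start streaming positions."""
        now = self._clock()
        if not sysrdy:
            self._sysrdy_since = None   # servos dropped; reset the timer
            return False
        if self._sysrdy_since is None:
            self._sysrdy_since = now    # rising edge of SYSRDY
        return now - self._sysrdy_since >= SETTLE_TIME
```

It's not elegant, but it matches what Fanuc themselves recommend: watch SYSRDY, then wait.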

    Is there a system variable I can read to see if my robot is ready for motion? I've noticed that even after the robot "clicks in", there is often a short interval (a couple hundred milliseconds) between when the servos seem to engage and the robot can actually start moving. Our system streams positions for the robot to move to, and works really well except during that interval. What it looks like is happening is the servos engage, the motion program begins running and commanding positions, the robot receives those positions and "moves", but then doesn't actually, physically move for ~300ms. This wouldn't be a problem, but then it jumps to try to catch up to where it was expected to be after those 300ms before smoothly following the rest of the planned motion.

    This isn't an acceleration problem -- there are no issues starting the motion planner if the servos are already engaged, and we can mask this issue by adding an artificial "if the servos were off and just came on, wait half a second before streaming the path" to our planner's logic, but ideally we just want the robot to be able to signal that it is actually ready to receive motion commands.

    1. ssh instead of telnet, scp or sftp instead of ftp -- it's 2019, security is important.

    2. Dynamic memory allocation and pointers in KAREL. Actually, I'd probably be okay with just support for data structures other than statically sized arrays and structs (Vectors/ArrayLists, maps, etc).

    3. More ways to transform position datatypes, particularly relating to orientation.

    4. Termination type to move through a point without stopping or slowing down. I know they are working on a package that might do this, but from what I've heard it'll be pretty restrictive and prone to throwing errors.

    5. Ability to read safety IO directly, instead of having to map safety IO to regular IO and then reading that.

    6. Better methods for communicating between TP and KAREL programs.


    I agree, welding at high speeds is not a good idea. Still, sometimes I wish I could run through a program quickly, without actually welding, just to verify that the motion and weaving will line up with my parts.

    My understanding of WELD_SPEED is that it just defaults the motion speed to the speed specified in the schedule. I didn't think it affected weaving at all (other than through the same robot-speed-affects-weaving phenomenon that this thread is about)... am I mistaken?

    Hey y'all!

    I was playing around with the FANUC welding package and the different weaving options it has, and I noticed that my robot will weave at low programmed speeds, but not at high programmed speeds and a low general override.

    For example, if I do

    Weave Sine[1]
    L P[1] 20inch/min CNT100
    Weave End

    I can clearly see the robot weave back and forth, but if I do

    Weave Sine[1]
    L P[1] 4000inch/min CNT100
    Weave End

    and then run that at like 1% override, the robot does not seem to weave at all.

    My guess is that the robot sees the high programmed speed, calculates that it can complete maybe half a sine weave, and then slowly plays that half-weave back over the course of the entire move rather than calculating the weave based off of the current speed.
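A quick back-of-the-envelope check of that guess (the frequency and move distance here are made-up example numbers, not anything from my weave schedule): compare how many weave cycles fit in the move at the programmed speed versus the actual override speed.

```python
# If the planner fixes the weave pattern at the programmed speed, a fast move
# only has room for a fraction of one cycle, even though the same move run at
# 1% override would have had time for many full cycles.

def weave_cycles(distance_in, speed_in_per_min, freq_hz):
    """Number of weave cycles that fit in one move."""
    move_time_s = distance_in / (speed_in_per_min / 60.0)
    return freq_hz * move_time_s

programmed = weave_cycles(2.0, 4000.0, 5.0)         # planned at full speed
actual = weave_cycles(2.0, 4000.0 * 0.01, 5.0)      # same move at 1% override
```

With these numbers the planned move has room for well under one cycle while the override-slowed move would have had room for dozens, which would explain why no weaving is visible.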

    Can anyone confirm (or reject) this? Is there any way to alter this behavior? I want to proof some motion and run through my programs at higher speeds to make sure everything (including my weave) looks good before I actually start welding parts.



    1. What are the communication protocols available for us to communicate the Position and motion setpoints/co-ordinates to the FANUC controller?
    With PC communication through TCP/IP, what is the rate at which we can update coordinates to the robot controller?
    Ethernet-IP communication is feasible? If yes, is the speed of communication better than TCP-IP scanning rates for FANUC? What is the scanning Rate for TCP and Ethernet-IP?

    You have three choices for sending positions to the robot: DPM (software package), Stream Motion (software package), and Socket Messaging (uses KAREL and TCP).
    If all of your pick and place operations are pretty much the same, DPM might be the way to go. It's designed to dynamically modify a previously taught path based on real time sensor data. However, if you want to give your machine learning algorithm more flexibility to create the path the robot follows, then you want either Stream Motion or Socket Messaging. Stream Motion is a fairly new package from Fanuc. I've used it a bit but haven't had much luck with it (frequent OS crashes, often complained about jerk or acceleration limits). Socket Messaging is essentially Fanuc's prototype of Stream Motion. It is essentially what you described -- the idea is to fill a buffer of positions using data from your controller, and then have a TP program loop through those positions. My company uses this approach for our own application -- I was doubtful that we would have the fine control we wanted over the robot, but it actually works really well. From our own experimentation, we found that using a circular buffer of three position registers with high speed and CNT values works really well. Fanuc has example programs for this, but they're more complex than they need to be and use unnecessary features that break compatibility on robots that do not have those options installed. All you really need to do is have a KAREL loop fill position registers and set an output indicating that that position is valid, and a TPP loop waiting for each position to become valid and then move there.
    The fastest you'll be able to update is every 8 ms; that's a limit imposed by the controller itself. Someone mentioned a specialized software package that ran every 2 ms, but honestly you probably don't need it that fast. Play around with the timing of your position streaming and how far apart each point is.
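To make the handshake concrete, here's a toy single-threaded simulation of the three-slot scheme. In the real system the "slots" are position registers (PR[1..3]), the valid flags are outputs, and the consumer is the TPP loop doing the actual moves -- the names here are illustrative only.

```python
NUM_SLOTS = 3  # three position registers worked well for us

class CircularBuffer:
    def __init__(self):
        self.slots = [None] * NUM_SLOTS
        self.valid = [False] * NUM_SLOTS
        self.w = 0  # producer index (KAREL side)
        self.r = 0  # consumer index (TPP side)

    def produce(self, pos) -> bool:
        """KAREL side: write a position and raise its valid flag.
        Returns False if the slot is still pending (buffer full)."""
        if self.valid[self.w]:
            return False
        self.slots[self.w] = pos
        self.valid[self.w] = True
        self.w = (self.w + 1) % NUM_SLOTS
        return True

    def consume(self):
        """TPP side: if the next flag is set, 'move there' and clear it.
        Returns None when no position is ready yet."""
        if not self.valid[self.r]:
            return None
        pos = self.slots[self.r]
        self.valid[self.r] = False
        self.r = (self.r + 1) % NUM_SLOTS
        return pos
```

The producer can only ever get two slots ahead of the consumer, which is exactly the behavior you want: the robot always has a next point queued, but your planner can't run far ahead of the motion.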


    2. Are there the APIs/Methods available for us to programmatically control the FANUC robot by talking to the controller, remotely from our PC? (I understand that there is karel programming to be deployed into Controller but is there any lapse of movement functions that limit us from doing a particular type of movement using robot when giving dynamic commands vs when a TP based predetermined paths are executed?)

    Are you asking if streaming positions has all of the same capabilities as a TP program? The answer to that is probably not: since you are dynamically streaming positions, you are necessarily taking away Fanuc's ability to plan its path. You will still have good motion, but you might not be able to do precise path planning -- for example, a TP program could move in a circle by using several arc instructions. If you wanted to stream the same path, you might send several hundred points on that circle. However, there is no guarantee that Fanuc will move in a perfectly circular motion (still, it would almost certainly be essentially the same path). That's the tradeoff you would be making, but I don't think you'll run into any issues because of it. As for other capabilities, it would be pretty easy to expand your system to set IO, read and set system variables, etc., so in that sense you aren't losing anything.
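Sampling that circle on the PC side is trivial -- something like this (center, radius, and point count are arbitrary example values; you'd feed these through whatever streaming channel you end up using):

```python
import math

def circle_points(cx, cy, radius, n):
    """Return n (x, y) points evenly spaced around a circle,
    starting at angle 0 and going counterclockwise."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

path = circle_points(500.0, 0.0, 100.0, 360)  # one point per degree
```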


    3. What is the rate at which we can update the motion setpoints or co-ordinates? For eg: Lets say the Robot was asked to move from Point 1 to point 2 at 0.5 metres/second speed.. While it was moving, lets say, the code wants to change direction and wants to move to point3. Within how much time will the control change its path when a new setpoint is given? Otherwise what is the Robot’s Response time to Setpoints? Universal robots for example in its e-series says 120 Hz. Which means I can send 120 different setpoints within a second and robot will respond to it.

    The robot is pretty responsive, so if you're sending and consuming points that fast I don't anticipate you seeing significant lag. Do be aware, however, that if you're using the Socket Messaging approach, the robot may not always be right where you command it to be by the time you send your next point. It can be easy to plan a motion and stream points from that path, expecting the robot to move along the path at the same rate you sample from it -- get position feedback from the robot so that you can handle cases where you get a little ahead of it. This isn't a huge issue, but it is something you should be looking out for.
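The feedback check can be as simple as this sketch -- only release the next streamed point once the robot has closed to within some tolerance of the last commanded point. The tolerance value and how you obtain the actual position are your own choices:

```python
def distance(a, b):
    """Euclidean distance between two same-length coordinate tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def next_point_ready(commanded, actual, tolerance=5.0):
    """True when the robot is close enough (in mm, say) to the last
    commanded point that sending the next one won't let the streamer
    run ahead of the physical motion."""
    return distance(commanded, actual) <= tolerance
```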


    4. The rate of response for queries on the FANUC robots coordinates and other internal & process data?
    If coordinate set-points cannot be updated in run-time, can we execute a stop command while the robot is in motion, rewrite the coordinates and then restart the motion with the new trajectory? How fast can this action be executed?
    After sending of commands from PC, what is the response time for the robot to begin executing the command?

    This is not the way you want to go. This is going to be significantly slower and have higher latency than any other method.
    You could, however, generate TPP scripts and FTP those to the robot, though I would budget a couple extra seconds for the transfer. It might be interesting to do a combination of TPP and KAREL position streaming: you could have a general TPP program that abstractly performs the task you want (e.g. move towards the object until it is reached, attempt to grab the object until successful, move the object, attempt to place the object until successful), then communicates, via KAREL, with your external system to determine where to move, whether an operation completed successfully, etc. I haven't tried that myself, but it would be pretty cool if you got it working.
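A rough sketch of the generate-and-FTP idea in Python. The LS skeleton here is deliberately minimal and illustrative -- a real TP source file needs a fuller /ATTR section and position data -- and the host and program name are placeholders:

```python
from ftplib import FTP
import io

def build_tp_program(name, moves):
    """Build a minimal TP source text. moves is a list of TP motion
    lines, e.g. 'J PR[1] 100% CNT100'. Skeleton only -- not a complete
    LS file."""
    lines = ["/PROG  " + name, "/MN"]
    for i, m in enumerate(moves, start=1):
        lines.append("%4d:%s ;" % (i, m))
    lines.append("/END")
    return "\r\n".join(lines) + "\r\n"

def upload(host, name, source):
    """FTP the generated program to the controller (placeholder host;
    controller FTP servers typically accept an anonymous login)."""
    with FTP(host) as ftp:
        ftp.login()
        ftp.storbinary("STOR " + name + ".ls",
                       io.BytesIO(source.encode("ascii")))

src = build_tp_program("STREAM1", ["J PR[1] 100% CNT100",
                                   "J PR[2] 100% CNT100"])
# upload("192.168.1.10", "STREAM1", src)  -- when a controller is reachable
```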


    5. Can you share the possibility of obtaining training for Karel programming and advanced robot programming?

    As some of the other posts have mentioned, KAREL training isn't really necessary as long as you have the KAREL manual (having the Programming Instructions manual (for TP programs) is also helpful). There are some weird KAREL quirks that you won't find in the manual, but I doubt you'd learn about them in a training course either; you just have to pinpoint each issue yourself and then be conscious of it going forward. For example, yesterday and today I was struggling with a stack overflow error, even though my call stack was only three routines deep. It turned out to be an issue with KAREL trying to pass robot IO values by reference to my routine. I'm still not sure what the robot thought it was doing, but forcing the IO to be passed by value fixed the issue. My point is, KAREL isn't that difficult a language and can easily be learned without training, and the issues you run into would probably not be covered in a training course anyway.
    Still, if you own a robot, it's pretty easy to get TP programming training from the customer portal.

    I've heard rumors of an official Fanuc brake test that I can run to make sure each servo's brakes are functioning properly, but cannot seem to find any other information about what it is or where to get it.

    Does anyone have a copy of or any information concerning the official brake test? Thanks!

    Hey y'all!

    Hopefully this is a quick question.
    I've been poking around the documentation for the Offset, PR[ x ] option, which, as far as I can tell, has two modes depending on the value of $OFFSET_CART.

    In the program instructions manual, it says:


    If $OFFSET_CART is TRUE, offsets for Cartesian positions are treated as frames and used to pre-multiply positions. If it is FALSE, offsets for XYZWPR positions are added field by field (for example, target.w = pos.w + offset.w).

    For the sake of discussion, suppose I have this motion instruction:

    :J P[1] 50% FINE Offset, PR[1];

    I understand the manual to mean that if $OFFSET_CART is true, then the point the robot moves to is the position of P[1] relative to PR[1] relative to the current user frame, i.e. USERFRAME:PR[1]:P[1]. If $OFFSET_CART is false, then it moves to position A (still relative to the current user frame) where A is the fieldwise sum of each of the components of P[1] and PR[1] (assuming P[1] and PR[1] are of the same type, joint or Cartesian).
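To make the distinction concrete, here's a toy 2-D sketch of what I believe the two behaviors are (position x, y plus a rotation r in degrees) -- not Fanuc code, just the underlying math as I understand it:

```python
import math

def premultiply(offset, point):
    """Apply 'offset' as a frame: rotate the point's position by the
    offset's rotation, then translate, and compose the rotations."""
    ox, oy, orot = offset
    px, py, prot = point
    c = math.cos(math.radians(orot))
    s = math.sin(math.radians(orot))
    return (ox + c * px - s * py,
            oy + s * px + c * py,
            orot + prot)

def fieldwise(offset, point):
    """Add the offset to the point component by component,
    e.g. target.w = pos.w + offset.w."""
    return tuple(o + p for o, p in zip(offset, point))
```

With a pure-translation offset the two agree; as soon as the offset carries a rotation, pre-multiplication swings the point around the offset frame's origin while fieldwise addition only nudges the orientation numbers.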

    However, in the vision operator's manual, I found this:


    If $OFFSET_CART is FALSE, the matrix format is used. If the value is TRUE, the XYZWPR format is used

    I'm not entirely sure what this is trying to say, but it feels backwards from the previous explanation.

    Can anyone explain what, exactly, Offset, PR[ x ] does? I am looking for a way to temporarily adjust my current frame without setting any system variables (to avoid forgetting to undo an adjustment and potentially break other programs), and this seems like my best bet.


    Hey y'all!

    A few months ago I switched from using KUKA robots to FANUC. I am trying to convert several KRL programs into TPP. For the most part, this is pretty straightforward, but I'm not quite sure how to handle splines. In KRL, you can just do

      SPL P1
      SPL P2
      SPL P3

    and the robot will smoothly interpolate through all of the points (I think it's a cubic spline, but I'm not positive about that).

    I doubt I can create a program as simple as this:

    :J P[1] 100% CNT0;
    :J P[2] 100% CNT0;
    :J P[3] 100% CNT0;

    Is there a way to spline through points on FANUC, or is my best bet to calculate a bunch of intermediate points, put them in position registers, and do something like

    :J P[1] 100% CNT0;
    :J PR[1] 100% CNT100;
    :J PR[2] 100% CNT100;
    :J P[2] 100% CNT0;
    :J PR[98] 100% CNT100;
    :J PR[99] 100% CNT100;
    :J P[3] 100% CNT0;


    Suppose I had the following point P1 = {X 300, Y 50, Z 300, S 6, T 3} (let's ignore orientation for now).
    I can then run

    PTP P1

    without any issues. If instead, however, I wanted to run

    P2 = P1
    P2.Y = P2.Y - 100
    PTP P2

    the robot throws an error (because P2 is on the other side of the robot). Ideally, I would want the robot to recognize that it is about to cross the turn boundary, and automatically PTP to {X 300, Y -50, Z 300, S 6, T 2}, but it looks like it still wants to go to {X 300, Y -50, Z 300, S 6, T 3}, which it cannot do. The code works if I call P2 a FRAME, but that seems like a hack and I would rather not.

    For my application, I have a sequence of points to move to that I want to loop through, offset by some amount, and execute again. For each of these sequences, the robot will start fairly close to the start of the sequence.

    My code looks like this (it's not this, but the same concept):

    Because of the way this application works, I cannot just convert

    PTP P1
    PTP P2
    PTP P3


    frameVar = P1
    PTP frameVar
    frameVar = P2
    PTP frameVar
    frameVar = P3
    PTP frameVar

    The entire sequence of steps is determined during runtime by an external system and pushed to the robot using DirectoryLoader. I have little access to what motions the robot must execute, my job is just to make sure it can execute those steps, offset them some number of times, and execute them again. Again, offsetting the points is not a problem, it's just making sure that the status and turn of the robot change fluidly and do not cause any weird motion.

    Thanks for the assistance!

    I should add that I tried moving to a point 0.1mm away from where I was but that gave me a CPMO-036 JBF set not valid error (which the manual says is the result of an internal INTR error which should never occur in normal operation). I also tried lowering the stop tolerance values in $PARAM_GROUP[1].$stoptol[1-6] but that did not work either.

    Is there a way to allow 0 distance motion? I want to use DPM to move to an offset from my current position but I keep running into a Zero Distance error, especially if my calculated offset in each channel is 0 (which happens fairly often).


    Update, the entire chain of variables I use [above] exists on my controller, I was able to find them in the Variables tab. However, I still cannot use them in a program because KTRANS does not recognize them as available pieces of data.

    The more general question here is how can I make KTRANS aware of additional software packages I have activated on my controller?

    Thanks again!

    I am trying to set up DPM on my robot and have decided that directly setting offsets in the DPM system variables will work best for my application; however, when I attempted to set

    $dpm_sch[1].$grp[1].$ofs[1].$ini_ofs = x        -- offset in x direction, type REAL

    as described in the manual, I got the following output from ktrans:

    17 $dpm_sch[1].$grp[1].$ofs[1].$ini_ofs = x
                ^ ERROR
    Id must be defined before this use.  Id: $DPM_SCH

    How do I set my offsets with KAREL?


    Hey guys, I got it!
    I just had to do

    READ output(msg::i)

    to specify how many bytes I wanted to read. I had thought that READ defaulted to reading a full line or to the end of the buffer (whichever comes first), but it looks like maybe that is not the case? I want to take a further look at that, but this works for now.
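For anyone hitting the same thing from the PC side, the equivalent idea in Python looks like this: rather than hoping a read stops at a line boundary, ask for exactly the number of bytes you already know are waiting (the KAREL analogue of pairing BYTES_AHEAD with READ output(msg::i)). Plain `recv()` may return short, so loop:

```python
def read_exactly(sock, n):
    """Read exactly n bytes from a connected socket."""
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)
```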

    Hey guys!
    I am trying to communicate with a remote PLC via a socket. I have been able to send messages from my robot to the PLC, but am having trouble communicating in the other direction. I can tell that data has been received by the robot, but when I try to capture and use it I get an uninitialized data warning and the program crashes. Here's the relevant section of code:

    WHILE i = 0 DO                -- i is always 0 when I reach this point
      BYTES_AHEAD(output, i, i2)  -- output is the ID for "S3:"; I can write to this channel without issue.
                                  -- i is set to the number of bytes in the buffer; I have printed i to
                                  -- prove to myself that a message is received: it stays 0 until my TCP
                                  -- client sends, then becomes the message length including the "\n\r"
                                  -- EOL constant I append (sending "payload\n\r" makes i 9), and the
                                  -- loop exits.
      DELAY 250                   -- gate my loop
    ENDWHILE
    READ output(msg)              -- read my input; it used to be READ output(msg, CR) but that did not
                                  -- work either. msg is declared as STRING[128].
    WRITE(msg, CR)                -- write the message I received; this is the line where it crashes,
                                  -- saying msg is uninitialized. Just before this segment I added
                                  -- msg = 'some text' and WRITE(msg, CR), which prints without issue.
                                  -- The KAREL manual says that if READ fails it sets the variables it
                                  -- reads into to [i]uninitialized[/i], but I did not see anything
                                  -- about how to fix the issue.

    I am not totally sure how the READ statement works or how to debug it, since I don't believe there is a way to get a status message from it as there is for functions such as MSG_CONNECT or BYTES_AHEAD (in my case, BYTES_AHEAD puts a status code in i2, though I currently do not use this value). Does anyone know what I am doing wrong?

