Posts by SkyeFire

    Couldn't be simpler or safer IMHO to remove T2 modes on ALL robots.

    I'm not sure I'd go that far. I've had too many situations where I needed T2 in order to debug a system.

    That said, T2 scares me, and the situations where I've needed it are a fairly small % of the total. Striking the balance is a thorny issue. I can't really disagree with the majority of end users who buy controllers with the T2 option eliminated.

    I will say, the great majority of the hazardous incidents I've seen with T2 arose from people forgetting they were in T2 (usually jogging and touching-up points), and hitting "run" while expecting T1 speed. That's why I really like what KUKA did with T2 in KSS 8: Switching to T2 automatically reduces the override to 10%, creates a message that has to be manually acknowledged, and (most importantly) blocks jogging -- T2 can only perform program playback. This handily eliminates 90% of the risk I've seen in T2 over the years.

    the bot always adds an 'X' in front of the position name.

    Only if the points are made using Inline Forms, though.

    How do I edit a position in a KUKA controller, KR C4, while the program is running?

    What kind of points? Created and used in what way? The answer depends on details you have not provided. KSS version also matters -- "KRC4" tells us very little.

    Usually, points are recorded in .DAT files as static, nonvolatile variables. If you know the full name of the point variable, and the module that contains it, you can edit it using the VarCor while the robot runs. Of course, if you fat-finger 1000 where you meant to type 100.0, well....

    If you want to adjust points remotely, say from a PLC/HMI, then you need to create some sort of I/O handshake in your program that takes the value from the PLC and applies it to your point.
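    A minimal sketch of what that handshake pattern looks like, written in Python purely for illustration -- on a real KRC this logic would live in KRL reading mapped I/O signals, and every name below is invented:

```python
# Illustrative PLC -> robot value-transfer handshake (all names invented).
# The PLC raises a strobe with a new value; the robot copies the value,
# raises an acknowledge, and resets once the PLC drops the strobe.

def robot_cycle(io, point):
    """One polling pass of the robot-side handshake logic."""
    if io["new_value_strobe"] and not io["robot_ack"]:
        # Example scaling: PLC sends a Z offset in units of 0.01 mm
        point["Z"] = io["plc_value"] / 100.0
        io["robot_ack"] = True           # tell the PLC the value was taken
    elif not io["new_value_strobe"]:
        io["robot_ack"] = False          # strobe dropped; ready for next transfer
    return point

io = {"new_value_strobe": True, "plc_value": 1250, "robot_ack": False}
point = {"X": 100.0, "Y": 50.0, "Z": 0.0}
point = robot_cycle(io, point)           # robot sees strobe, applies 12.5
```

The strobe/acknowledge pair is the important part: without it, the robot can't tell a fresh value from a stale one, and the PLC can't tell whether the value was actually applied.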

    The encoder value will not be visible/readable until pulse is established, which seems to need more than one motor revolution.

    It should be -- an absolute encoder will always have "live" data of its absolute position, as long as it has power. Many of these encoders are glass disks with binary values inscribed into their surface in radial patterns. The same goes for DRO encoders on machine tools.

    Whether Fanuc servos use this type of encoder, I'm not sure. I have a hazy memory of seeing one disassembled many years ago and it having a photo-etched encoder disk, but I could easily be remembering incorrectly.

    I don't pretend to understand how it works, but I think it is very disappointing that in the year of 2023 we still need backup batteries for encoders.

    Good news! KUKAs don't. :icon_smile: OTOH, they get around the battery issue by relying on EEPROMs that do eventually hit their end of life, although they do seem to last decades in most usages.

    Jokes aside, I suppose the manufacturers just haven't felt enough pressure from their customers to put more R&D into a "if it ain't broke, don't fix it" situation. Not to mention the pain factor of getting any new replacement certified through all the various safety requirements.

    In an ideal world, we'd have absolute encoders wrapped around every axis, but the number of custom parts would be a nightmare. Not to mention what field repairs would require.

    So, my understanding of this subject is incomplete, but what the heck:

    1. The Fanuc encoders are absolute encoders, mounted to the servo motor driveshaft. As the motor rotates, the absolute value from the encoder ranges between 0 and some maximum (I've heard 4096, but that may be wrong) -- if the motor rotates one way, it counts down to 0 then "underflows" to 4096, and if moving the other way, it counts up to 4096 then "overflows" to 0. So (assuming 4096-count encoders), the motor would have a rotation resolution of 360deg/4096=0.08789deg.

    2. The servo motor rotates multiple times for each degree of axis rotation. The motor feeds into a gear train (harmonic drives, wave gears, etc.) that multiplies the motor's torque at the cost of speed. It's not unusual for these gear trains to have input/output ratios of 150-250:1.

    3. When the robot is booted with dead batteries, the value of the encoder is visible, but the number of rotations (how many times the encoder has passed 0 in which direction) is lost. So the robot knows exactly where the motor is within one rotation, but the larger context has been lost.

    4. The factory encoder values that come with every robot (and unique to each one) represent the exact position of the "Master" position of each axis within the encoder rotation. That's why Mastering works by a "be within one motor rotation of the physical Master position" rule -- the human eyeball is accurate enough to achieve that with good vernier marks. So the human eye provides the "rough" Mastering (to within +/- one motor rotation), and the factory encoder value provides the "fine" Mastering.

    5. #4 happens because it would be impractical to try to perfectly align each axis, and each motor, at 0 before assembly at the factory. Not to mention replacing motors in the field. And when replacing a motor in the field, the 'factory' encoder value must be updated, b/c it's essentially impossible to get two different motors rotated to exactly the same encoder value in that situation.
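    To make the arithmetic in #1-#3 concrete, here's a small worked example. The 4096-count figure and the 200:1 ratio are just illustrative numbers in the range discussed above, not specs for any particular motor:

```python
# Worked example of the encoder arithmetic, assuming a 4096-count
# single-turn encoder and a 200:1 gear ratio (illustrative figures only).
COUNTS_PER_REV = 4096
GEAR_RATIO = 200   # motor revolutions per axis revolution

# Motor resolution within one rotation:
motor_res_deg = 360.0 / COUNTS_PER_REV        # ~0.0879 deg per count
# Axis resolution after the gear train:
axis_res_deg = motor_res_deg / GEAR_RATIO     # ~0.00044 deg per count

def axis_angle_deg(rev_count, encoder_count):
    """Axis position from the full-revolution counter (battery-backed)
    plus the absolute count within the current revolution (always live)."""
    motor_revs = rev_count + encoder_count / COUNTS_PER_REV
    return motor_revs * 360.0 / GEAR_RATIO

# With rev counters intact, the axis position is fully known:
full = axis_angle_deg(rev_count=350, encoder_count=1024)

# After a battery failure, encoder_count survives but rev_count is lost,
# so the axis position is only known modulo one motor rotation:
ambiguity_deg = 360.0 / GEAR_RATIO   # 1.8 deg of axis travel per motor rev
```

That last line is why eyeballing the vernier marks is good enough for re-Mastering: the human only has to get the axis within that one-motor-rev window, and the stored encoder value resolves the rest.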

    ABB works essentially the same way as Fanuc. KUKA does it differently -- instead of absolute encoders, KUKA servos have resolvers -- like a simple quadrature encoder, except generating two sine waves instead of two pulse trains. KUKAs keep track of both the "rough" and "fine" Mastering in memory (and on an EEPROM chip). This makes KUKAs less vulnerable to losing Mastering due to dead batteries, but requires extremely fine Mastering (more than the human eye can manage) any time an axis needs re-Mastering. This is why KUKAs have special tools for Mastering.

    I've tried to use the "Import KRL files" tool to import files from a backup into the right folder in KUKA.Sim, but every time I start a new simulation, the values of things like tool_data or base_data are set to 0.

    When writing a specific value to tool_data or base_data in the robot settings, the same thing happens: the value in robot settings is set to 0.

    You mean the values in BASE_DATA and TOOL_DATA are all set to {X 0,Y 0,Z 0,A 0,B 0,C 0}?

    Are you importing the entire backup, or only certain files? The Base and Tool arrays are recorded in $CONFIG.DAT. Normally copying $CONFIG.DAT between robots is a bad idea, unless the robots are identical, but if you're trying to build a virtual clone of a real-world robot, copying in $CONFIG.DAT should be harmless.

    So, had my first Yumi experience today -- a local school had one and the batteries had failed, so I took a whack at it. I'm going to leave my notes here in case anyone finds them useful.

    First thing to know: the Yumi is officially listed as the IRB 14000, so the Product Manual is located under that name in the ABB online library:…014000%20product%20manual

    Opening up the robot to get to the batteries takes a T10 torx driver, but two of the screws are recessed deep, so make sure you have a long skinny torx driver, not a socket-drive one. I learned this the hard way.

    Each arm (Left and Right) is its own mechanical unit, and its own Task, in the controller. You can switch between arms in the jogging screen, just like when you use an external-axis positioner. Each arm has 7 axes (A7 is in the middle of the arm), so the "axis group" button scrolls through 3 screens: 1-2-3, 4-5-6, and 7.

    Selecting a program for either arm requires opening that arm's Task before using, for example, PP To Main.

    The calibration marks on the arm axes are pretty good, but it's easy to hit the physical limit of, for example, Axis 6 while trying to jog to the marks. Fortunately, the internal collision detection appears to handle this well, but it doesn't throw any error messages -- instead, there was a loud click, and then Axis 5 started sagging as if it had lost brakes. Scared me good, but giving it a minute seemed to take care of everything.

    The Axis 7 calibration marks are tricky -- there are two different marks on the same axis. The trick is that one mark is labelled R and the other L -- you use the R mark on the Right arm (as seen from "behind" the robot), and the L mark on the Left arm (again, as seen from "behind" the robot).

    Once you have the marks lined up, you can Call the routine CalHal, or just go through the Calibration screen and hit the "CalHal" button instead of "Manual Calibration". Run this program like you would any other RAPID program. It will default to having all 7 axes [x]'d, and you can un-check any you don't want or need to calibrate. You'll get a couple TPReadFK pages to do this. Then the program will run each joint back and forth a few degrees (one at a time, 1-7) and use some Hall sensors inside each axis to re-zero itself. I didn't try the Fine Calibration option, just the Update Rev Counters option, and it worked fine.

    Oddly, one battery was held down by a piece of velcro tape, and the other needed a pair of zip ties.

    I need to calculate the compression force exerted by a fixed nozzle on a print bed that is being handled by a Kuka KR6 R900 robot. Does the robot controller already calculate the load on it?

    If so, how to obtain that information?

    No, not really. The torque on each servo is available, but translating that into actual force at the point of contact is something you would have to do yourself, and would be a non-trivial exercise. Generally, to get good contact-force data, the best practice is to add a load cell to the end effector. And if you want the robot path to adjust in realtime to that pressure, you would need the FTC or RSI options.

    Now, if you're only trying to do bed mesh levelling by detecting contact, that might be doable using collision detection or interrupts based on the torque feedback, but it's still not going to be very accurate -- for one thing, the joint torque levels change as the robot reaches near or far, due to gravity effects.
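    To illustrate why a fixed torque threshold doesn't work, here's a toy sketch in Python. The gravity model and every number in it are invented for illustration -- nothing here comes from the controller:

```python
# Toy contact detection via torque threshold with a pose-dependent baseline.
# Shows why a fixed absolute threshold fails: the static gravity load on a
# joint grows with horizontal reach. All numbers and the model are invented.

def gravity_torque_nm(reach_m, payload_kg=5.0, g=9.81):
    """Very crude static model: torque ~ payload weight * horizontal reach."""
    return payload_kg * g * reach_m

def contact_detected(measured_nm, reach_m, margin_nm=8.0):
    """Flag contact when the measured torque deviates from the expected
    gravity torque at this reach by more than the margin."""
    return abs(measured_nm - gravity_torque_nm(reach_m)) > margin_nm

# The same 55 Nm reading means different things at different reaches:
contact_detected(55.0, reach_m=0.5)   # baseline ~24.5 Nm -> contact
contact_detected(55.0, reach_m=1.1)   # baseline ~54.0 Nm -> just gravity
```

A real implementation would need the full dynamic model (all links, plus the payload), which is exactly why a load cell on the end effector is the cleaner answer.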

    This bug has been around for a while. Not really sure when it was fixed, but I haven't encountered it for over a year now. You should try to update your roboguide to a newer revision.

    My current version of RG is less than a year old, though?


    Quick fix is to restart roboguide.

    I tried that, but it didn't have any effect.

    Also, not sure, but if you get it to work with a simple box, then go into properties for that box, change the model from "box" to "CAD" and then apply and see if the parts-tab is still present?

    Yes. "Box" or "CAD," after hitting Apply I still get all the tabs:



    Well, I got a chance to use my OfficePC (KSS 8.3) for a bit this morning. I was able to create two different Virtual interfaces with separate IPs, and ping them both from my remote computer. However, I found that I could only have one interface with the "Windows Interface" box checked. And I could only ping out from the KRC, or get an RDP connection to the KRC, over the interface with Windows Interface checked.

    So, the next question would be if EKI needs to operate through the Windows Interface. My guess would be "yes", but the EKI manuals don't seem to indicate either way.

    Yeah, I did the Add Part first, before I started adding fixtures. And I just started adding fixtures without making any part changes (also rebooted RG a couple times to see if that would help -- it didn't). And I'm pretty sure that the last time I did this in RG, that same table didn't give me this problem.


    RoboGuide 9 Rev.ZB

    Was doing an exercise with HandlingTool, and ran into something very odd. I added a Fixture table (picked at random) from the standard RG CAD library -- the one named table02, in this case. But when I opened the Properties window for the table, it only had the General and Spray Simulation tabs. If I used a generic Box for my Fixture, I got all the tabs one would expect. I also tried a few different Tables from the RG Fixture CAD library, and some had all the tabs, and some had the same missing tabs as table02.

    It's puzzling. Any ideas what could cause this? Not to mention, why would Fanuc include Fixtures in their standard library that block certain tabs? Or is this some kind of bug in my RG?



    Now I changed my setup, as I needed more DI and DO, so now I have:

    4x 8-channel DI
    3x 8-channel DO

    INB0=10,0,x4 ;$DIN[1-32]
    OUTB0=10,0,x3 ;$DOUT[1-24]

    This looks correct. How are the various DI and DO slices arranged physically? Wago modules can be fussy about having all the input slices first, followed by the output slices (or vice versa, it's been a long time). You may want to dig into the Wago documentation, this will be in there (though not always easy to find).
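    For what it's worth, here's how I read that mapping, sketched in Python. It assumes the slices pack into the mapped bytes in physical order, which is exactly the part the Wago documentation needs to confirm:

```python
# My reading of the IOSYS mapping above (assumption: slices pack in
# physical order into the mapped bytes):
#   INB0=10,0,x4  -> 4 input bytes  -> $IN[1..32]
#   OUTB0=10,0,x3 -> 3 output bytes -> $OUT[1..24]

def din_number(slice_index, channel):
    """$IN number for channel (1-8) on the Nth 8-channel DI slice (0-based)."""
    return slice_index * 8 + channel

# Channel 3 on the third DI slice (index 2) should land on $IN[19]:
din_number(2, 3)
```

If the coupler orders the slices differently than you expect (e.g. all DI before DO, or vice versa), every signal shifts by a whole byte, which would look exactly like "no inputs visible" with no error.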

    the devicenet is connected with no problem, but I cannot see the inputs and cannot set the outputs, and I don't get any errors whatsoever.

    any thoughts?

    What do the indicator lights on the Wago bus coupler show? If the KRC shows no errors, it's possible there's an issue between the Wago bus coupler and the DI/DO slices.

    IIRC, Wago slices get their communication power over the backplane (Data Contacts in the diagram), so they'll "talk" to the bus coupler, but they need power on the Power Jumper Contacts in order to actually perform I/O functions. If there's no power on those PJCs, that might explain what you're seeing.

    Most I/O slices I remember working with had separate wire terminals for I/O power (or at least the DO modules did), but the ones whose part numbers you listed appear not to. There may be a need for a separate "power" slice, in between the I/O slices and the bus coupler, to apply I/O power to the PJCs.