There's a lot you can do with DAT files. But it depends on exactly what data you're trying to extract. I still don't have a good idea of what you're trying to do, or what the actual requirements are.
Posts by SkyeFire
-
KSR_Configurator is the program on the KSR USB stick that allows you to configure what the KSR stick does when you boot the robot with the stick installed.
Part of the config is to set up network paths for saving the image, rather than saving it to the stick. The OP's question was a bit incoherent, but this is my best guess as to what they were referring to.
-
Using INVERSE() would not be possible if you were using FTC/RSI, because it is a KRL function that is not accessible from RSI.
However, since you are not using RSI (I misunderstood), you probably can make use of INVERSE().
Any interpolated motion near a singularity is going to produce large axis motions for small Cartesian motions -- it's practically the definition of a singularity. Unfortunately, there's no way around this, as it's an issue with the physical design of the robot. The only way is to avoid physical poses that approach singularity too closely.
INVERSE might be able to help you with that. One thing you could potentially do would be to use INVERSE to evaluate multiple S&T combinations of your desired Cartesian coordinate in advance, and pick the best one. For example, if you want to evaluate a LIN motion, you could iterate through multiple INVERSE results from both the start and end of the LIN path, and pick the combination that stays inside the axis limits, and avoids passing through singularity.
Of course, this isn't simple -- a long LIN motion might have start and end points that look fine, but still have singularity issues in the middle. So to perform this pre-check, it would probably be necessary to break your LIN motion into a series of points along the path, down to a certain spatial resolution, and check all of them with INVERSE. This may be too computationally intensive to perform on the fly, but at current KRC4 processor speeds, you can do quite a bit of math in a fraction of a second.
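A rough KRL sketch of that sampling pre-check. I'm writing the INVERSE() call from memory, so treat the signature as an assumption and check it against the KRL documentation; orientation interpolation is also ignored here for simplicity:

```
DEF CHECK_LIN_PATH(P_START:IN, P_END:IN)
  ; Sample the straight-line path and run each sample through
  ; INVERSE(), chaining the seed so solutions stay consistent.
  DECL E6POS P_START, P_END, P_TEST
  DECL E6AXIS AX_SEED, AX_RESULT
  DECL INT I, STAT
  DECL REAL FRAC

  AX_SEED = $AXIS_ACT   ; seed the solver from the current pose

  FOR I = 0 TO 10       ; 11 samples -- resolution is a tradeoff
    FRAC = I / 10.0
    P_TEST = P_START
    P_TEST.X = P_START.X + FRAC * (P_END.X - P_START.X)
    P_TEST.Y = P_START.Y + FRAC * (P_END.Y - P_START.Y)
    P_TEST.Z = P_START.Z + FRAC * (P_END.Z - P_START.Z)
    AX_RESULT = INVERSE(P_TEST, AX_SEED, STAT)
    ; Here: check STAT, compare AX_RESULT.A1..A6 against the
    ; $SOFTP_END/$SOFTN_END limits, and watch for large axis
    ; jumps between consecutive samples (singularity symptom).
    AX_SEED = AX_RESULT
  ENDFOR
END
```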
The alternative solution would be to create an entire inverse solving algorithm on your external PC, but that is a very non-trivial exercise -- I've participated in such an effort before. It can be done, but it requires advanced math and programming skills, a thorough understanding of kinematics, and a full DH model of the robot arm.
-
That sounds about right. You just have to have your SRC program set $TOOL, $ACT_TOOL, and $LOAD to the correct values before running any motions. If you use inline-form programmed points, this is as simple as selecting the tool number that you calibrated. If you are writing raw KRL, you will need to set all three variables as part of your SRC code. If you do not, then the robot will not be applying the calibration correctly.
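In raw KRL, that looks something like this (the slot number 1 here is just a placeholder for wherever your calibration data actually lives):

```
; Set before any motion commands. Slot 1 is an example only.
$TOOL = TOOL_DATA[1]
$LOAD = LOAD_DATA[1]
$ACT_TOOL = 1          ; keep the active tool number consistent

; Or use the BAS package, which sets $TOOL and $LOAD together:
BAS(#TOOL, 1)
```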
For testing the TCP, don't rotate an axis -- program a rotation around the tool axis. If you are using an Absolute Accuracy robot, one important fact is that the Absolute Accuracy calibration is inactive when the robot is being jogged. So any test of accuracy must be carried out by running a program.
When I need to tune in a critically important TCP, I generally mount a dial indicator to a fixed location, then program the tool to rotate 180deg while in contact with the indicator. Then I observe the measurement, and adjust the TCP XYZ values until the deflection is minimized. This usually takes several iterations for each axis. I also need to perform "sliding" tests, where the TCP holds orientation but moves along X, Y, or Z, and use those results to tune the ABC values of the TCP.
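As a rough sketch of the rotation test, assuming your KSS version supports the #TOOL option for relative motions and the tool under test is active:

```
; Rotate 180deg about the tool's Z axis while (ideally) holding
; the TCP stationary against the dial indicator. If the LIN
; balks at a full 180deg orientation change, split it into
; two 90deg steps.
$TOOL = TOOL_DATA[1]   ; tool under test -- slot 1 is a placeholder
$LOAD = LOAD_DATA[1]
LIN_REL {A 180} #TOOL
```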
-
Do you mean KSRConfigurator?
-
Whoops. Lost track of the discussion -- I thought you did have AA. Well, so much for that theory.
Setting a negative value into the M element of $LOAD causes the robot to assume "default load", which is rated maximum. If you look at the LOAD_DATA array in $CONFIG.DAT, any entries that have an M of -1 are still at the factory default. Which is one of the reasons that setting correct load data on a Kukabot will generally make it go faster -- if you use the default loads, the robot assumes it's carrying its max payload, and de-rates all motions accordingly.
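A quick way to audit that is to scan the array. This is only a sketch -- the size of LOAD_DATA[] is an assumption here; check your own $CONFIG.DAT:

```
; Sketch: count load sets still at factory default (M = -1).
DECL INT I, DEFAULT_COUNT
DEFAULT_COUNT = 0
FOR I = 1 TO 16   ; array size assumed -- see your $CONFIG.DAT
  IF LOAD_DATA[I].M < 0 THEN
    ; still default -- the robot will de-rate as if carrying
    ; maximum payload whenever this load set is selected
    DEFAULT_COUNT = DEFAULT_COUNT + 1
  ENDIF
ENDFOR
```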
PID tuning of servo parameters can be done, but normally only for external axes -- the settings for the arm are not editable (or rather, you can edit the file, but the changes will be overwritten by the "correct" data when you save). But since KMCs exist, it should be possible to unlock those settings -- you'll just have to get KUKA to tell you that, and acknowledge that you're voiding whatever warranties this robot might still have.
The big issue, though, is that tuning individual servos is only a small part of the overall problem. Tuning the Cartesian motion model is... mathematically hellacious.
One thing that I think might potentially be helpful: make an O-Scope trace of the robot's position and velocity while this "shaky" motion is happening. The O-scope is one of the KRC2's greatest features when trying to deal with issues like this, and I wish every robot manufacturer would add it.
-
There is no simple way to do this. And since you are running FTC, you cannot perform this kind of check in KRL using INVERSE() -- you would have to do it in your FTC RSI module.
At minimum, I think you would need an RSI task monitoring $AXIS_ACT vs $SOFTx_END in realtime, and "pushing" axes away from the soft limits using a PD algorithm as they get too close, outputting through an AXISCORR object. The real trick is how this would interact with the FTC -- I've been told that multiple AXISCORR and MOVECORR objects active at the same time essentially perform summation at the motion control layer, but I've never experimented with that.
Getting the two competing CORR objects to interact correctly, without resonances, and tuning them to prevent $SOFT clashes without overcontrolling the FTC... that would take quite a bit of careful tuning.
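To make the idea concrete, here's a purely hypothetical sketch of the PD "push-away" law for one axis. Every name and gain here is invented -- in a real system this would be built from RSI function blocks feeding an AXISCORR object, not written as plain KRL, and the gains would have to be found by experiment:

```
; HYPOTHETICAL sketch -- names and gains are invented.
; Would run at the RSI cycle rate, once per axis.
DIST = SOFT_LIMIT_POS - AXIS_ACT       ; distance to the + soft limit
IF DIST < MARGIN THEN
  ERR = MARGIN - DIST                  ; how far inside the margin we are
  CORR = (KP * ERR) + (KD * (ERR - ERR_OLD) / CYCLE_T)
  ; CORR would be summed into the AXISCORR channel for this axis,
  ; nudging it back away from the limit
ENDIF
ERR_OLD = ERR
```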
-
From experience, I know that Abs-Acc only works well for robots running at least 50% (more like 75%) of their rated load. Part of the issue is simply that AA calibration is done at the factory using a near-maximum payload, and the further your actual payload is from the calibration payload, the less precise the corrective math becomes. And even without AA, running a robot so far under its rated load can still give the gravity compensation fits.
The root issue here is that no one really has a reliable model for all the ways that a load affects the robot's accuracy, across wide ranges of loads and distances. This is a problem that has been keeping a lot of researchers busy for a long time, without a satisfactory practical answer. The best results so far that I've seen have been from New River Kinematics' SARCA module, which gets its performance by building a volumetric correction table across the robot's working volume, calibrating in the actual working conditions with the actual working payload. But I've never seen anyone come up with a predictive model that works -- the working solutions all depend on individually calibrating the robot under working conditions.
I do have one quick suggestion: try turning AA off. It sounds counterintuitive, but under conditions like this, AA can actually have negative effects.
-
Hm... $OUTs are set "instantaneously" when commanded -- essentially they act like a hardware interrupt. So the internal processor delay should be on the order of 5 nanoseconds or so. (unlike $INs, which are only checked every 12ms)
But yes, there's then the KEB bus delay, followed by the "bridge" delay to whatever I/O bus you're using. Or, if you use Ethernet/IP or ProfiNet, it would be only the packet delay over the KLI bus. Those delays should probably be less than 500us, most of the time.
Probably the biggest issue is the refresh cycle of the polled I/O, which on "legacy" fieldbusses is generally longer than 500us. So the signal timing could be inconsistent, depending on where the bus is in its refresh cycle when the $OUT is set. EIP or Profinet... hm. I've never looked into their refresh timing at that resolution.
Even using IPO_FAST, RSI only updates at 4ms, which is about 8x your timing resolution requirement.
Let's look at this from a different angle: what drives this 500us requirement? Is precise timing the main driver, or is consistency more important?
-
Where would I find that value of "ret"? The error on that line was just "General Error: Ethernet1". Is there an interface that I can bring up on the TP that displays the current values of program variables? Sorry, I'm not used to programming in this format (I use KUKA PRC within Grasshopper). Thanks
Simplest way is to declare Ret in your application's .DAT file (I think the "stock" example program declares it in the SRC). Then, after selecting the program, you can go to DISPLAY>VARIABLE>SINGLE, type in the name of the variable, and start monitoring it (pay attention to the Refresh button). That should show you the value of the variable.
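For reference, the variable in the stock EthernetKRL examples is the package's status structure, and a global declaration in the .DAT file is one line (the file name here is just an example):

```
; In your application's .DAT file, e.g. MyEkiProgram.dat:
DECL EKI_STATUS Ret
```

Once it's declared at DAT-file scope, the variable monitor can see it any time the program is selected, not just while the SRC is executing.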
-
What is the value of ret after the error occurs?
-
Adding a virtual network interface to the KLI does not help... You can have up to 5 of them, but:
a) EthernetIP (and ProfiNet) can ONLY use virtual5. This is why one cannot have both EIP and PNET on the same KRC4.
b) There can only be one setup for virtual5.
c) Only one of those 5 virtual adapters can be the Windows interface. If needed, this can be different from virtual5.
d) Both EIP and PNET (one at a time, of course) can be both master and slave. EIP can be up to 5 slaves at once... but all must be on the same subnet.

Good to know -- I couldn't recall if EIP/PN had to be on one specific Virtual, or not.
And now it occurs to me I'm over-thinking this -- the robot is already using EIP, just only as the Master. So it must be running over Virtual5. All I should have to do is set my PLC to an IP compatible with the robot's existing Virtual5 subnet, and activate Slave1 in WorkVisual. D'oh!
-
So, this one is a tad unusual. I'm supposed to connect several "legacy" KRC4s to some "legacy" PLCs, using EIP. When I say "legacy," I mean that all these units already have a lot of hardware they're connected to, and a lot of configurations that the customer doesn't want changed.
For example, I'm looking at a robot that already has EIP and is acting as an EIP Master to its end effector and related hardware. I should be able to just activate the EIP Slave config in the robot and let the PLC be its Master on that circuit.
But there's a complication (of course there is, why else would I be posting?). The KRC4 and the PLC have incompatible IP ranges (and the customer doesn't want them changed, of course). My first thought was to simply add another virtual network to the robot, but Virtual 5 is already in use for the Windows interface, and Virtual 6 is already set up for RSI.
So, I haven't had to mess with the virtual networks outside of the typical pre-defined ones before. Is it simple to just add a Virtual 7 with an IP compatible with the PLC, and set it up for just EIP traffic? Or am I wading into deep waters here?
-
$FLANGE is "mounted" to A6. So the position and orientation of $FLANGE in space is always a transform relative to $BASE at any given time. $FLANGE Z+ is always "straight out" from the A6 flange, colinear with the rotation axis of A6.
For most 6-axis KUKAbots, when the robot axes are all at what Panic calls "Cannon position," the $FLANGE Z+ axis will be parallel to $WORLD X+, both Y+ axes will be parallel, and $FLANGE X+ will be anti-parallel to $WORLD Z+.
IIRC, on the 4-axis palletizer robots, the $FLANGE Z axis is always anti-parallel to $WORLD Z, but I'm not sure which way X and Y point when the axes are at "Cannon position."
-
Okay, back up -- what signal are you trying to send? EKI is non-deterministic, and vulnerable to network delays. Using a normal polled output signal would be more reliable, from a timing standpoint.
In a TRIGGER command, you can perform a simple Boolean assignment ($OUT[1]=TRUE), or a subroutine call after the DO. The Trigger commands are tuneable in their space/time relationship to a particular motion point in space.
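For example (a minimal sketch -- P_END and MY_SIGNAL_SUB are placeholder names, and the sign convention for PATH offsets is worth double-checking in the manual):

```
; Fire $OUT[1] 20ms before the robot reaches the end point:
TRIGGER WHEN DISTANCE=1 DELAY=-20 DO $OUT[1]=TRUE
LIN P_END

; Or switch a fixed distance before the end of a CP motion,
; calling a subroutine instead of a plain assignment
; (subroutine triggers require a PRIO; -1 lets the system pick):
TRIGGER WHEN PATH=-50 DELAY=0 DO MY_SIGNAL_SUB() PRIO=-1
LIN P_END
```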
-
Are there any warning messages about the battery voltage? Does the controller stay up for 1-2 minutes after you kill main power, or does it "die" immediately? Most often, losing Mastering on power-down is due to the batteries holding insufficient charge.
Another possibility is that the RDC card EEPROM has worn out and isn't accepting the Mastering write during power-down.
-
"Mastering distance exceeded" means that the axis has turned beyond a set number of degrees without detecting the divot/notch in the top of the mastering gauge block.
Could be an issue with the EMT, or the block, or with the spring-loaded pin in the gauge that the EMT screws into (they get bent sometimes). Sometimes it's as simple as dirt on the pin or in the divot. The divot and the pin tip are machined to precisely matching angles, and the EMT detects the mastering location by detecting the moment at which the pin instantly reverses direction from up to down.
On the EMT, you should see this happen in the LEDs: as the pin tip moves across the flat top of the block, the red and green LEDs will flicker wildly. When the pin hits the downslope of the divot, only one LED will light (the red, I think) and stay on solidly. Then, at the moment the pin hits the bottom of the divot, the two LEDs should swap, and stay solid, until the Mastering process ends about 500-750ms later. That little instant swap between the two LEDs indicates the bottom center of the divot -- if you jog the EMT across the mastering block at 1% speed, you should see it happen.
Try doing a Check Mastering on another axis -- if it works, then the EMT is good and the problem is limited to A1. If you can't find another axis that the EMT can work on, then you probably have an EMT issue.
-
So, this happens when you Jog E1 in LIN mode, correct? And you see the TCP hold the correct position in Y and Z, but move incorrectly in X? Does the TCP "lag" or "lead" E1 as E1 moves?
Quick double-check: write a program that runs to the same XYZABC point in space, but at two very different E1 values. Ideally, your TCP would be touching a fixed point in space that you can use as a reference.
Assuming the only TCP positional error you see is along the axis of E1, then it's a simple matter of $RAT_MOT_AX[7]. By comparing the mm of commanded E1 motion to the mm of TCP drift, you can adjust $RAT_MOT_AX to tune out the drift.
$RAT_MOT_AX is a "fraction" -- a STRUC variable of two integers, N (numerator) and D (denominator). So once you find the ratio of the TCP drift to the E1 motion, you can multiply it into $RAT_MOT_AX.
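As a sketch with invented numbers: suppose 1000mm of commanded E1 motion produces 2mm of TCP drift. Then the configured ratio is off by a factor of 1002/1000, and you scale the existing fraction accordingly. Note this is machine data, so expect an expert login, and keep in mind the original N and D values below are made up:

```
; Invented numbers for illustration only.
; Existing entry, e.g.:
;   $RAT_MOT_AX[7] = {N 1000, D 81}
; 1000mm commanded vs 1002mm actual -> multiply N by 1002
; and D by 1000 (reduce the fraction afterward if you can):
$RAT_MOT_AX[7] = {N 1002000, D 81000}
```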
-
If your robot has output relays, and you can wire the existing valve to those relays in place of the existing switch, doing so is quite simple.
How to do it, I cannot say, since you have provided no details as to your robot brand, model, configuration, electrical schematics, valve type, voltages, I/O hardware, etc....