Posts by SkyeFire
-
-
Well, you're running into one of the root issues with using articulated robots for applications dependent upon accuracy, as opposed to precision/repeatability.
Articulated robots, compared to (for example) CNC machines, trade accuracy and rigidity for lower cost and a much larger working envelope. So you need to understand before starting that any attempt to calibrate a robot with a laser tracker will have a certain degree of error.
There are, however, ways to reduce that error somewhat. I'll get to that in a moment.

Your concept of generating the origin of the robot base frame by intersecting the axes of the Axis-1 and Axis-2 motion arcs is good, although generally the origin of the robot's "Base 0" is not located exactly at that intersection, but at some distance from it (usually along the axis of Axis 1). That distance can generally be obtained from the CAD model of the robot. That generally works well enough, but any as-built tolerance errors in the robot's construction will be the limiting factor on how well it works.
The method I've generally used is to build a grid of points shared by the robot and the sensor, to do a best-fit calculation on. To minimize the error in the measurement, I use the following best-practice techniques:
1. Properly configured payload data in the robot. Most robots attempt to compensate for gravity effects of the payload (given that the robot lacks rigidity), so making this accurate (mass and CG) is important.
2. Limited volume. I make the volume I'm attempting to calibrate as small as possible for the process I am trying to make accurate.
3. Limited orientation changes. Ideally, every measurement point should use the same orientation of the TCP. Orientation changes throw off the robot's accuracy faster than almost anything else.
4. Avoid positions with different gravity effects as much as possible. For example, positions with Axis 2 forward of vertical have different gravity effects on the backlash ("lost motion") of Axis 2, compared to positions that have Axis 2 behind vertical (this also depends on payload, and on Axes 3-6, so it becomes potentially quite complex).
5. Minimize axis backlash. Since every axis has some degree of "lost motion" which cannot be reduced, minimizing the effects of that backlash is the next best thing. General practice for this is to perform an anti-backlash move at each measurement location. This generally consists of arriving at the "nominal" measurement position, then performing a small anti-backlash motion on all 6 axes -- for example, rotate all axes +0.1deg, then -0.1deg (see the sketch after this list). This does not make the robot more accurate, but it reduces the randomness of the backlash effects by "biasing" the backlash in a consistent direction at each measurement position.
6. Thermal stability. This usually consists of making the measurements fairly quickly, so that temperature changes over time (either in the ambient temperature, or simply from the waste heat of the robot servo motors) don't have time to skew the measurements. Also, avoid generating excessive waste heat in the robot as much as practical -- use low speeds and accelerations.
7. Position holding. Most robots perform energy-saving operations when not in motion, by shutting down power to the motors and engaging the internal motor brakes. On some robots, in some poses, this "handover" from "servo holding" to "brake holding" can cause the robot to sag or twitch very slightly. Best practice is to keep the servos energized after reaching each measurement point (and performing the anti-backlash move) while taking the laser tracker measurement. That way, the measurements are all taken in a consistent context.
8. Target calibration. This relates back to #3. The dimensional relationship between where your sensor target is mounted to the robot, and where the robot thinks it is mounted, will directly drive errors in your comparative measurement. With a common TCP orientation at every measurement point, this error will be very nearly static in Cartesian space. However, if you have any orientation changes, the error will rapidly become highly parametric and grow on an exponential curve. So bringing your sensor, and your robot, into agreement on where the target is mounted can range from important (no TCP orientation changes) to critical (substantial orientation changes). Exactly how to do this calibration would make a good PhD thesis paper, but I've gotten reasonably good results (for industrial applications) by performing the calibration in the same limited volume (see Rule #2), and performing iterative motions of the TCP while correcting the TCP values in the robot until the robot and sensor agree on how the target is moving.

It's not always possible to follow all of these rules, but you need to be aware that, for every rule you bend or break, your measurement error increases. Breaking two rules doesn't double your error, but more likely quadruples it (or more). I would recommend setting up some practice tests in circumstances where you can obey all the rules, then experiment with breaking one rule at a time to get a reasonable figure of merit for how each rule contributes to accuracy (or lack thereof).
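To illustrate Rule 5, here's a minimal KRL-flavored sketch of the anti-backlash move at a single measurement point. The point name, settle time, and handshake output are all hypothetical -- adapt them to your own cell:

    PTP P_MEAS_01   ; arrive at the nominal measurement position (hypothetical point)
    ; bias the backlash: nudge every axis +0.1deg, then return from the same direction
    PTP_REL {A1 0.1, A2 0.1, A3 0.1, A4 0.1, A5 0.1, A6 0.1}
    PTP_REL {A1 -0.1, A2 -0.1, A3 -0.1, A4 -0.1, A5 -0.1, A6 -0.1}
    WAIT SEC 0.5    ; give the arm a moment to settle before measuring
    $OUT[10]=TRUE   ; hypothetical handshake to trigger the laser tracker

The same sequence gets repeated at every point in the measurement grid, so every measurement is taken with the backlash biased in the same direction.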
-
FATs are generally customer-driven. But as a general rule, you need to test and certify everything safety-related -- robot reach vs the fences, E-Stops, safety gates, etc. Testing for random events, like sudden power failures, and recovery from same, is generally done. Also, test how the system handles things like cables being cut, sensors failing, or items wearing out and breaking mid-process -- ensure that if something like that happens, the system fails in a non-fatal, non-destructive way.
Floor foundations... generally, any given robot has a standard floor spec from the robot manufacturer. This floor spec is made for the worst-case scenario for that robot -- maximum load at maximum speed and inertia, plus a generous safety margin. Often a robot mfgr can supply alternatives; a robot that might normally require 8-inch concrete beneath it might also be usable on a 4-inch-thick floor, if and only if the robot is mounted to a very wide steel plate that spreads the forces out further. Usually the robot mfgr has some standard specs for alternatives like this.
The biggest issues for flooring are caused by inertia, rather than load -- the likely failure mode is concrete anchors pulling out around the robot base. This isn't surprising when you consider that concrete is much weaker in tension than in compression, and that a robot in motion will produce high torsion loads across its baseplate.

One thing to keep in mind here is that an insufficiently-strong floor could survive for quite some time, and then suddenly fail under load without warning. And possibly go right through a safety fence and kill someone. So the spec for floor strength needs to be taken seriously. Similar to the safety certification for passenger elevators, the spec is generous, but deliberately so -- if one of your concrete anchors is driven into a bubble void in the concrete, you'll never know it. So the spec has to allow for variations in concrete strength, age, temperature changes, etc....
-
I guess I don't understand why you're trying to daisy-chain KRL motion commands with RSI active. Why not simply have all the target-following performed in RSI with a MOVESENS?
Assuming that it's possible to daisy-chain KRL motions with RSI active, you would have to approximate (blend) those motions -- it doesn't matter if they're LIN, LIN_REL, PTP, etc.
-
Well, on a KRC2 with DeviceNet, you can use the Telnet diagnostics to query any slave devices on the bus and get them to report back their parameters. I don't have my manuals handy, but it's in the forum archives. You'll have to try a few different means, though, since the exact commands changed over different versions of KSS. But doing this before and after removing the DI8, and comparing the results, should shed some light on what's going on.
-
Well, it depends on what brand of robot you're using, and on what degree of precision you require.
Cartesian position and orientation are generally represented by a combination of XYZ coordinates, and a set of Euler angles or Quaternions. Normally, to perform mathematical transforms on these positions, you convert them into a specific type of 4x4 matrix -- Google "NOAP Vectors" for a starting point.
The good news here is that only the conversion is brand-specific. Once the conversion is done, the matrices are identical for ABB, KUKA, Fanuc, etc, and can be worked on using identical means. This is the best book I've found covering this subject: https://amzn.to/2ZLVadz
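For reference (this is generic robotics notation, not any particular brand's), the resulting 4x4 homogeneous matrix has the form

\[
T=\begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]

where n, o, and a are the unit vectors of the frame's X, Y, and Z axes expressed in the reference frame, and p is the position of the frame's origin. The Euler angles or quaternion only determine the upper-left 3x3 rotation block; the XYZ values drop straight into p.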
The transform between two different position/orientation frames is generally obtained by multiplying the two matrices (strictly, the inverse of one times the other). Keep in mind that matrix multiplication is non-commutative -- reversing the order gives different results. Inv(A)*B will generate the position/orientation of B, treating A as the frame of reference. Inv(B)*A will generate A's position with B as the frame of reference. These calculations are all relative.
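In equation form, with T_A and T_B being the two frames' matrices relative to a common reference:

\[
{}^{A}T_{B} = T_A^{-1}\,T_B \qquad\qquad {}^{B}T_{A} = T_B^{-1}\,T_A = \left({}^{A}T_{B}\right)^{-1}
\]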
Now, you didn't provide any details about your sensor, so I'll have to speak generally. But you need a good way to measure several points in space, in both coordinate systems (robot and sensor). This usually requires some kind of metrology. Three points is the absolute minimum, but I strongly suggest using more. Once you have two matching "point clouds" in both reference frames, you can perform a best-fit between them to establish the transforms between the two reference frames.
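Formally, if p_i are the points measured in the robot frame and q_i are the same points measured by the sensor, the best fit is the rotation R and translation t that minimize

\[
\sum_{i=1}^{N}\bigl\lVert R\,p_i + t - q_i\bigr\rVert^{2}
\]

The closed-form solution (the Kabsch/Horn method, via an SVD of the cross-covariance matrix) is built into most metrology software packages, so you rarely have to code it yourself.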
-
To be clear -- this post is in regards to running RobotStudio Online, in a Windows VM, on a Mac host, using Parallels?
-
That must be it -- I'm still using Version 8.3.2.0, and the time zone option is definitely not available.

This made me run around in circles for a while -- I'd kicked a copy of the KRCDiag file to FOCUS, and we couldn't figure out why we were seeing completely different lists of events, until I eventually figured out that everything on their readout was exactly 5hrs different from mine.
The weird thing there is that the robot that produced the KRCDiag is located in UTC-0's time zone....

I need to correct myself -- the Time Zone option is in my version of LogViewer, I just kept overlooking it somehow. I do see some weird things -- there's more than one Time Zone shown as "UTC - 0", but selecting each of them generates a 1-hr difference in the time stamps. Not sure what's up with that. But I can at least find the right time zone by trial and error, if necessary.
-
Well, what kind of motion behavior are you trying to achieve?
...how does that code even work, anyway? Your program pointer can never reach the RSI_MOVECORR() call.
What you have is not going to achieve a "no pauses" kind of motion. Your LIN_REL is not continuous, for one. And I don't think an RSI_MOVECORR() can be called without breaking the advance pointer, anyway.
So, off the top of my head, I see two possible approaches:
1. Activate your RSI_MOVECORR, followed by a daisy-chain of continuous motions. I'm not sure if this works, I've never seen it tried
2. Generate the entire motion from inside RSI. This can be difficult, as RSI is inherently parallel, not sequential. But I've had luck with KRL interrupts running in the Level 1 interpreter and periodically firing SETPUBLICPAR() functions to cause changes in the RSI container's behavior.

OTOH, if the pauses are less of a problem than the "jump" in following the Sine wave, you might keep your current code but take steps to de-randomize where the next motion "picks up" on the Sine. That could be a bit tricky, but I could see connecting a $SEN_PINT variable to the Sine output, and blocking the motion until the Sine output reached a certain level.
-
Post the code. That message, IIRC, comes up when the syntax of the SIGNAL DECL is incorrect. It's also possible that you used a KRL reserved keyword for the Signal name -- the manuals don't cover this well, but you can't (for example) name a Signal "WAIT", b/c KRL treats that as a reserved keyword.
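For comparison, here's roughly what a legal declaration looks like -- the signal names and I/O numbers below are made up:

    SIGNAL GRIPPER_OPEN $OUT[17]        ; single output -- OK
    SIGNAL PART_CODE $IN[1] TO $IN[8]   ; group of 8 inputs -- OK
    SIGNAL WAIT $OUT[20]                ; fails -- WAIT is a reserved KRL keyword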
-
Hm... yes, that is in the manual. Never noticed that. I wonder if that's a misprint, or a change for KSS 8.x?
Classically, KRL did not allow any Boolean logic beyond single comparisons in the INTERRUPT DECL statement. That's one of the reasons that Cyclic Flags exist. I don't have a KRC4 handy to try out the AND/OR/etc on right now....
Aside from that, your basic understanding of the Interrupt was correct, however -- the Interrupt only triggers on a change of the logical condition. One "gotcha" here is that, if the condition is already true when you execute the INTERRUPT ON, the Interrupt will not trigger until after the condition becomes false, then true again. So, whether putting the AND logic into the DECL is allowed, or you have to use a Cyclic Flag, the Interrupt will trigger as soon as the "output" of the AND transitions from False to True.
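As a rough sketch of the Cyclic Flag approach (the interrupt number, inputs, and subroutine name are all made up):

    INTERRUPT DECL 15 WHEN $CYCFLAG[1]==TRUE DO PART_LOST()
    $CYCFLAG[1] = $IN[10] AND $IN[12]  ; combined condition, re-evaluated cyclically by the controller
    INTERRUPT ON 15
    ; ... motions ...
    ; PART_LOST() fires only when the combined condition transitions False->True
    INTERRUPT OFF 15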
As to whether the robot will have to decelerate, that depends entirely on what you're doing inside your Interrupt Routine.
-
-
Hi, guys...
I have a problem.
There is an error at KRC2 controller. RTWLoadVxD( LPVXWRT (#0x1) - Device Driver could not be initialized (not found or wrong path)
Can anybody help me?

What is the history of this controller? Are you just powering it up for the first time after years in storage? Or was it working correctly until this happened? Have there been any changes made to the robot recently?
-
What is the specific fault when this happens (text AND error code)? An over-current fault on an E-Stop is... odd. I would expect a dynamic braking error, or something related to the ballast resistor.
It may be possible to adjust the E-stop braking behavior of the E1 axis, but I don't have access to my manuals at the moment.
Does this error occur only when E1 is moving, or does it occur even if E1 is stationary? Is it connected to the speed E1 is moving at when the E-Stop occurs?
-
Why did you remove the second DI8? I would suggest putting it back in, restoring the old IOSYS settings, and see if that works.
Is it possible that the power wiring to the DO8 was damaged or removed when the DI8 was removed?
-
Look up "Status and turn" in the manuals. Briefly, because it is possible to achieve a given position&orientation in Cartesian space with multiple physical contortions of the robot arm, S&T are used to disambiguate which contortion options to use.
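Purely for illustration (the numbers here are invented), a fully-specified Cartesian position in KRL carries S and T along with the XYZABC values:

    DECL E6POS P_PICK
    P_PICK = {E6POS: X 1000, Y 0, Z 1200, A 0, B 90, C 0, S 6, T 27}  ; external axes omitted for brevity
    ; S (Status) is a bitfield selecting among the discrete arm configurations
    ; (e.g. which side of A1 the wrist sits, elbow up/down, wrist flip), and
    ; T (Turn) records the sign of each axis angle, so a PTP to this point is unambiguous.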
-
Depends. At lower speeds, you can push A5 closer to 0. Most people use the 15deg rule, though.
-
Versions? Check READ FIRST topic.
Where is the rest of the code? Where are your RSI_ON and RSI_OFF?
Most likely, without an RSI_OFF, your Sine object keeps cycling in the background and, as such, is at some random non-zero value when the next LIN_REL starts.
-
What is the error code on that message? It's odd that you would get this error from any program -- only CELL.SRC makes use of those signals. Hm... unless something in BAS-Init is performing the check unconditionally.
Is that the exact text of the error message? Because on a quick search, I don't see any messages that match that in the normal KRC2 files. What is the source module displayed with this error message?
What KSS version is this robot running?
It's been a loooong time since I looked at ArcTech, but... possibly the error is coming from the ArcTech config, rather than the CELL/P00 support files. That might explain why it shows up when running the INI of any module, rather than just CELL. I'd look at the variables set in A20.DAT. The A20 manuals should be on the robot hard drive, located in D:\KUKA_OPT -- you'll have to dig a bit, but you should find an explanation of each of the user-configurable variables inside A20.DAT. The PDF-search function in Adobe Acrobat is a godsend in situations like this.
-
Can't believe I got the resistor ohm value wrong...
Power rating... I've used 1/4-watt resistors without any issues. There's very little power dissipation through these resistors -- they're really just there to damp out harmonics and reflections from the cable ends.