Posts by SkyeFire

    I have exactly the same project, but I can't make the search option work. I checked the program prepared on this forum and followed the directions, but while it's running, when I get an input signal from the sensor, I get many errors.

    WHAT errors? How can we help if you don't tell us what the errors are? Be specific.


    Your program is declaring Interrupt 11 in SEARCH -- this must take place in MAIN.


    MOVEP should end with a WAIT FOR TRUE or WAIT SEC 0, to ensure the Advance Run Pointer cannot return to Main before the Interrupt fires.
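

    Roughly, the structure should look something like this (the input number, priority, and routine names here are placeholders, not taken from your program):

        DEF MAIN()
          ; the Interrupt declaration belongs here, in MAIN
          INTERRUPT DECL 11 WHEN $IN[7]==TRUE DO PART_FOUND()
          ; ...
          SEARCH()
        END

        DEF SEARCH()
          INTERRUPT ON 11
          LIN P_SearchEnd    ; the search motion
          WAIT SEC 0         ; advance-run stop: the ARP cannot run back into MAIN before the Interrupt fires
          INTERRUPT OFF 11
        END

        DEF PART_FOUND()
          BRAKE              ; stop the search motion (a complete program would also handle RESUME, etc)
        END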

    One specific weakness of controlling $OV_PRO from the SPS is that, if the SPS is trying to set $OV_PRO to 0% while someone is trying to change the override from the pendant +/- softkeys, the KCP display can become "disconnected" from the real value of $OV_PRO. One way to check whether this has happened is to open the Variable Monitor and type in $OV_PRO. This will show the real value of $OV_PRO. If this value disagrees with the number displayed in the KCP's upper-right corner, you've encountered this issue.
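

    For context, this is the sort of SPS code that creates that tug-of-war (the input number is just an example):

        ; somewhere in the LOOP of sps.sub:
        IF $IN[10] THEN     ; e.g. a "hold" signal from the PLC
          $OV_PRO = 0       ; the SPS forces the override to 0 -- and now it fights the +/- softkeys
        ENDIF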


    Another cross-check is to open the Position Monitor while attempting to run the program. If the robot is simply moving or accelerating very slowly, you should still be able to see consistent motion in the Position Monitor. If there is no motion in the PM while the program is running, this strongly suggests that $OV_PRO has been set to 0%, as other posters have suggested.

    I'm not sure there is a simple solution. A normal 6-axis robot can, worst case, have something like 16-32 different joint solutions for a single Cartesian location, and either calculates the least-effort disambiguation on the fly, or contains additional disambiguation data as part of the programmed location.


    The issue with robots like the iiWA is that adding the 7th axis mid-arm creates a near-infinitude of valid joint poses for any given Cartesian coordinate set. Typically the iiWA resolves this by treating the mid-arm axis as separately controllable, rather like an integrated external axis. So the mid-arm axis position is explicitly controlled, and the IK solution is generated the same as for a 6-axis arm, just with the arm "shape" altered to match the programmed angle of the mid-arm axis.


    When you get into some of the more unique abilities of the iiWA (like "bending" the arm in response to external forces without altering the TCP position/path), I'm not privy to the details, but I suspect the kinematics are resolved by limiting the solution space in realtime to poses very close to the current pose, and iterating this every interpolation cycle -- rather similar to how a Linear motion is made up of many small Joint motions calculated on the fly.

    Add S, T and E1-E6 to your points. They have the form of FRAMEs, not E6POS, in your case.

    That's strongly advised, but not actually required -- KRL will accept them as-is. The issue would be that they lack the S&T disambiguation. The E values can be left out entirely unless the robot actually has external axes. But leaving out those values is not what's causing Koppel's current issue.


    Koppel: are you trying to use those points in an inline form, or a "raw" KRL command? Inline forms automatically add an "X" prefix to the name of any point. But for raw KRL, what you have should work. Are there any linking errors?
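

    For illustration, this is the difference between the two aggregate forms (the values are invented):

        DECL E6POS P1

        ; FRAME-style aggregate -- KRL will accept it, but there is no S/T disambiguation:
        P1 = {X 500.0, Y 0.0, Z 800.0, A 0.0, B 90.0, C 0.0}

        ; full E6POS aggregate with Status and Turn (E1-E6 only matter if external axes exist):
        P1 = {X 500.0, Y 0.0, Z 800.0, A 0.0, B 90.0, C 0.0, S 6, T 27}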

    The camera is mounted to the robot end effector, yes?


    It sounds like you are using a Tool-mounted camera to perform Base shifts. That's... odd. Doable, but odd.


    How did you do your original robot<>camera calibration? And how are you changing XPick's rotation?


    Generally, to use a Tool-mounted camera, you create a TCP that is aligned with the camera, and apply the camera corrections in that TCP.


    So, you would move to the photo-taking position using Base shifts, as normal. Then, when taking the camera shot and getting the correction data, apply that in Tool. So, something like:
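
    (A rough sketch only -- the tool/base numbers, point names, and the way the correction is read back are placeholders, not from any real program:)

        DECL FRAME CamCorr
        DECL E6POS CorrectedPos

        $TOOL = TOOL_DATA[2]              ; a TCP aligned with the camera
        $BASE = BASE_DATA[1]
        LIN XPhotoPos                     ; photo position, reached via the normal Base shift
        ; ...trigger the camera here and copy its reported offsets into CamCorr...
        CorrectedPos = $POS_ACT : CamCorr ; apply the correction relative to the camera TCP, i.e. "in Tool"
        LIN CorrectedPos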

    There are programmatically better and much more robust ways to do this, but that's the general idea.

    As Panic says, message 205 indicates that an axis (in your case, two axes) is at its software limit. This prohibits program execution. Also, any point programmed at a location that hits an axis limit will not execute.


    On the KCP, the color of the "R" icon in the bottom status bar indicates the program status. Yellow indicates that the program has been Reset or freshly Selected, and has not begun execution. Green indicates that the program is executing. Red indicates the program has halted for some reason. Black indicates the final line of the program has been reached and there are no more lines to execute.


    In T1, when you press Step Forward, the "R" should change from Yellow to Green. If the program hits an error, or if you release Step Forward or the deadman, it will change to Red.

    Bingo! That was it! Never occurred to me that it might be using radians.


    Hm... so, I'm guessing that the rotx function "wraps" rotations larger than 360deg around, and just tries to move the robot to the equivalent endpoint? That would make sense, and emulates what the robot's organic function would do.

    Positions can be manipulated in KRL to any degree you like, as long as you have sufficient data. A 3-DOF vision system, for example, will not provide enough data for a 6-DOF correction.


    You need to study the KUKA documentation on the Geometric Operator, the INV_POS function, and the relationships between $WORLD, $BASE, and $TOOL.
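

    A couple of quick examples of the kind of manipulation those allow (all names and values here are invented):

        DECL FRAME VisionOffset
        DECL E6POS XPick_Corrected, P_InWorld

        ; shift a taught point in its own coordinate system, e.g. by a camera correction:
        VisionOffset = {X 10.0, Y -5.0, Z 0.0, A 2.0, B 0.0, C 0.0}
        XPick_Corrected = XPick : VisionOffset

        ; express a point taught relative to Base 1 in $WORLD coordinates:
        P_InWorld = BASE_DATA[1] : XPick

        ; INV_POS() returns the inverse transformation, for going the other direction
        ; (World back into a Base, and so on) -- check the docs for the exact types it accepts.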

    Panic's right.


    About the only way I can see to even try this without ConveyorTech would be to write a fast SPS loop that monitors the encoder and manipulates $OV_PRO on the fly. But that'll be a pretty crude approximation of what CT does, and probably would only work at low speeds and with a conveyor whose speed is pretty stable.
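

    Very roughly, something like this in sps.sub -- the I/O mapping and the scaling constants are pure invention, just to show the shape of it:

        ; in the declaration section of sps.sub:
        DECL REAL ConvSpeed, NomSpeed, NewOV

        ; in the USER LOOP of sps.sub:
        NomSpeed = 500.0                      ; conveyor speed (mm/s) the path was programmed at
        ConvSpeed = $ANIN[1] * 1000.0         ; measured conveyor speed, mm/s (scaling invented)
        NewOV = 100.0 * ConvSpeed / NomSpeed  ; scale the program override to track the conveyor

        IF NewOV > 100.0 THEN
          NewOV = 100.0
        ENDIF
        IF NewOV < 0.0 THEN
          NewOV = 0.0
        ENDIF

        $OV_PRO = NewOV                       ; $OV_PRO is an INT percentage; the REAL value gets rounded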


    A far-out option might be to try using the Function Generator (which underpins CT), but that feature is so undocumented that it would be a high-risk approach without bringing in someone who knows all its ins and outs. And there are darn few of those people, even inside KUKA.

    Yeah. It really depends. There are some things Machine Vision could see and detect well, other things that MV would be almost useless at. And a huge range in between where choosing specific sensors, lighting, spectra, angles, etc, could make it possible.


    Your best bet would be to look for vendors that specialize in this application, and ask them how they would handle your specific application. But in general, "classic" MV is only going to find really obvious flaws in a weld, and even then, what's "obvious" to a human eye is often not to MV.


    There might be some Machine-Learning based MV options on the market that might suit. Those need to be trained, however -- basically, pass a very large number of parts through the MV system, and tell it which images showed good welds and which ones were bad. The weakness of ML inspection systems is that they can fail unpredictably when you present them a type of flaw that never turned up in their training. ML systems seem to have intuition, but really, they're just complex pattern-matching algorithms.

    I've attached a zip containing my current test cell.


    ...and now, I can't get the reversing behavior to repeat. :wallbash:


    Even so, there's something not right with how the rotx is behaving.


    In the simulated cell, if I move to the position RP1_Enter, and then execute the Python script ToolRotateX+, the TCP rotates clockwise around X when the rotx value is positive. But that's backward -- a positive rotation should result in a counterclockwise rotation.


    If I make the rotx value negative, the TCP rotates counterclockwise.

    I've been trying to emulate the behavior of a real-world KUKA KR16 in RoboDK, and have run into something very strange (at least, to me).


    This robot moves to several hand-taught positions, and then runs a mathematically generated, several-hundred-point motion pattern: a sort of "corkscrew" that translates along the Tool X axis while rotating around it. The corkscrew pattern is a series of offsets from the hand-taught Start Position.


    In my RoboDK cell, I've been able to establish the Base, Tool, and hand-teach the starting points. I've also hand-taught a few of the mathematically-generated points, just as a sanity check.


    To carry out the corkscrew pattern, I have a Python script that, when called from the RDK program, takes the robot pose (using robot.Pose()) and uses transl and rotx in a simple FOR loop.


    The strange behavior I'm seeing is that, when I execute target_i = target_ref * transl(LinOffset,0,0) * rotx(-RotOffset) with a LinOffset value of 0, the TCP rotates in the correct direction. But, if LinOffset has any non-zero value, the TCP rotates in the opposite direction, even though RotOffset is the same value for every test. I've tested this extensively, and the behavior is consistent.


    Is there any full documentation of the transl and rotx functions that explains their behavior in depth?

    RSI (RobotSensorInterface) is "realtime" motion control. It can work over a TCP/IP connection, or over any FieldBus (EIP, ProfiNet, DeviceNet, etc) that has been installed and configured on the KRC. The KRC4 can support almost any of the common industrial FBs, but they each require adding either software or hardware options. RSI updates every 12ms (4ms for some configurations).


    EthernetKRLXML (EKI) is the official KUKA option package that enables KRL programs to use TCP/IP (or UDP) connections. This is a non-realtime interface, but a well-written SPS program can crudely approximate an RSI connection. For simple position reporting, this would probably serve.
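

    For a sense of scale, a bare-bones EKI "send the current position" routine might look something like this -- the channel name and XML structure are placeholders, and would have to match an XML config file created on the controller:

        DECL EKI_STATUS RET

        RET = EKI_Init("PosReport")    ; loads the matching XML channel config from the EthernetKRL config folder
        RET = EKI_Open("PosReport")

        ; e.g. cyclically, push the current Cartesian position out to the PC:
        RET = EKI_SetReal("PosReport", "Pos/@X", $POS_ACT.X)
        RET = EKI_SetReal("PosReport", "Pos/@Y", $POS_ACT.Y)
        RET = EKI_SetReal("PosReport", "Pos/@Z", $POS_ACT.Z)
        RET = EKI_Send("PosReport", "Pos")

        ; when finished:
        RET = EKI_Close("PosReport")
        RET = EKI_Clear("PosReport")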


    OpenShowVar is an unofficial 3rd-party option that operates rather like EKI. I've never used it, but my understanding is that it can "expose" global variables on the robot to a remote system over TCP/IP, without needing any paid KUKA options.


    OPC Server is a KUKA paid add-on that, in the robot, looks/feels like a FieldBus. The main difference is that OPC can work on regular PCs without requiring special hardware add-ons (unlike, say, ProfiNet). OPC's other big advantage is that it's an open standard that almost all automation brands support, unlike (for example) the Profinet/EIP divide.

    You haven't stated what the problem is. Are you still getting high levels of noise in your force sensor when you activate the grinder?


    Is the grinder electric? Do the cables run alongside or near the sensor cabling? That can introduce EMI into the sensor wiring. Or the grinder itself could be emitting radio frequency noise.


    What is the frequency of the noise on the force sensor signal? Is it a fixed frequency? Does it vary with the grinder RPM?


    I can't speak to ABB FC, but on KUKAs, the FC software includes the ability to add filter modules. If ABB has a similar capability, I would try either a notch filter or a low-pass filter. This worked for me using FC on a robot pair that was performing rivet hammering (very violent force spikes) and had to keep an average force on the rivet tail while it was being "squashed".

    Was the mastering successful?


    Usually, the "work envelope exceeded" message indicates that the robot is being commanded to a position that is physically impossible -- for example, a position 5m from $WORLD, when the arm is only 3m long at maximum extension.


    Check the robot's current position, in both Axis and Cartesian. My guess is that something went wrong with the Mastering process.

    Not really. VW kept using all the KRs until they were fully depreciated. But on their regular replacement schedule, they bought Fanucs to replace the KRs. Those KRs would have ended up on the 2nd-hand market at about the same time, in about the same shape, even if they'd been replaced with more KUKAs.

    Technically one could make a car auto-park itself by bumping into things. But unless that "bumper car" process was carried out at very low speeds, there's going to be a lot of damage in the process. The root issue is inertia, and stopping distance. Like a car, a robot can stop instantly... if it's moving at something like 1mm/sec. As the speed and payload go up, so does the stopping distance. A fully-loaded KR60, for example, moving at 2m/s, could easily take 100mm to stop even for a 0-category safety stop (basically, kill power and slam brakes on). There's the length of time it takes for the robot to detect the torque, then issue a stop command, then the physical deceleration to a complete stop.


    Bottom line: Collision Detection only reacts after a collision has started. It can reduce damage from a collision, but its ability to do this drops off drastically as speed increases.


    There's also the "numbness" factor. A KR60, rated to carry a 60kg payload at high speeds, wouldn't even notice a 1kg external force, most of the time. Bigger robots are even more "numb" to small forces. This is one of the ways that robots can be fatally dangerous -- people are "squishy" enough that a mid- or high-payload robot could easily apply lethal crushing force, even at low speeds, before the robot ever notices that it's hit something.


    (if you've ever been trapped between a wall/fence, and a horse that's decided to lean on you, you get the idea. Except robots are even dumber than horses)