Hi there
I'm looking for offline programming software, but I face a major problem: our cell layout changes constantly (2~3 times a day). Is there any software that allows me to create a program based on position variables instead of fixed coordinates? I know I'd lose a lot in terms of precision, but this is not an issue for our processes.
Offline programming without calibrating
-
dardo -
April 22, 2025 at 10:14 PM -
Thread is Unresolved
-
95devils
April 22, 2025 at 10:44 PM -
It's not clear (to me) what you are asking / seeking. When you mention "calibration", do you mean use of those complicated expensive tracker devices? I've never used them. I've never known anyone who could afford them.
I've done many high-precision / high-tolerance robot apps with OLP. Almost any OLP software will support this method. The workflow goes like this:
- Plan your programming with extensive employment of User Frames or coordinate systems in order to calculate path locations relative to the Frames. These Frames need to be mathematically attached to fixtures, workpieces, whatever makes sense.
- The goal is to teach (or calculate) your path points relative to a Frame. Your path locations will be calculated (in programming) via relative coordinate transformations.
- Then go out to the work cell and teach those Frame locations to "calibrate" the physical work cell to the CAD or OLP work cell. For most applications, "close" is close enough.
- Frames can be aligned with corners of tables & fixtures, or on any three "touchable & repeatable" arbitrary points on parts, components, anything. Frames needn't be orthogonally aligned with anything in the physical work cell; even arbitrarily skewed Frames based upon physical points work just as well.
- Design your CAD or OLP work cell "close to" the physical cell.
- Make approximate measurements to the User Frame(s) from the robot's origin planes. I used a tape measure for this, hoping for ±100 mm location accuracy.
- Construct the CAD or OLP cell using this location data
- Build your OLP simulation model and make all path points relative to the appropriate User Frame(s)
- Prepare your physical work cell / program.
- include one or more Frame-location calculations in the program, using the three common Frame-defining location variables (typically: Origin, X-Axis point, Y-Axis direction)
- teach those Frame calculation locations as location variables. I used a fabricated pointer tool with appropriate pointer tool transform invoked for the purpose
- program path points will all be relative to the User Frame(s), so this relative operator must be part of the motion statement
- this requires (back to Step 3) that your Frames and Frame locations all be "touchable & repeatable" with the pointer tool
- test, adjust, and run your program using those User Frames
This method adapts easily to dynamic work cell changes: all that is required is to re-teach the three Frame locations as needed. With smart design, a pointer tool can be easily integrated into existing tools. I have even designed and constructed specialized "Frame teaching appliances" that allowed me to quickly, easily, and accurately re-teach a Frame. I no longer have the eyes for such high-tolerance work, but in my day I was able to hold path accuracy to ±0.010 mm on my OLP CAD→Path applications.
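The three-point Frame math behind this workflow can be sketched in a few lines. This is an illustrative sketch in plain Python (not any vendor's controller code), assuming the common convention of teaching an Origin point, a point on the +X axis, and a point in the XY plane; the taught coordinates below are invented example values:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def norm(a):
    m = math.sqrt(dot(a, a))
    return [x / m for x in a]

def frame_from_3_points(origin, x_pt, xy_pt):
    """Build an orthonormal Frame (axis vectors + origin) from three
    taught points: Origin, a point on +X, and a point in the XY plane."""
    x_axis = norm(sub(x_pt, origin))
    z_axis = norm(cross(x_axis, sub(xy_pt, origin)))
    y_axis = cross(z_axis, x_axis)          # already unit length
    return x_axis, y_axis, z_axis, origin

def frame_to_world(frame, p_rel):
    """Transform a Frame-relative path point into robot-base coordinates."""
    x, y, z, o = frame
    return [o[i] + p_rel[0]*x[i] + p_rel[1]*y[i] + p_rel[2]*z[i]
            for i in range(3)]

# Re-teach the three points whenever the fixture moves; the path data
# (p_rel) never changes.
f = frame_from_3_points([100.0, 50.0, 0.0],   # taught Origin
                        [200.0, 50.0, 0.0],   # taught +X point
                        [100.0, 150.0, 0.0])  # taught XY-plane point
print(frame_to_world(f, [10.0, 20.0, 5.0]))  # → [110.0, 70.0, 5.0]
```

The same math runs (with vendor-specific syntax) inside the controller's Frame or Work Object instruction; the point is that only the three taught points ever need updating.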
-
Changing how? By how much? Are there just new obstacles the robot needs to avoid, or do the process points the robot must reach move about?
If you want to program offline, your virtual model has to match your real world condition in order to achieve anything. It sounds almost as if you want to make changes to the final program output without changing the virtual world? That's not going to work.
What are you actually trying to accomplish? In some circumstances, it might actually be easier to make the on-the-fly changes on the real robot, rather than in whatever offline software you're using. This does, however, require you to structure your program properly to support that from the beginning.
It would also be helpful to know what brand of robot you're using, as they all handle this rather differently.
-
Thank you for your reply.
-
Not new safety fences or anything to avoid (in most cases). I guess that re-teaching the points relative to a frame would be the best solution. We're using ABB and Kawasaki robots.
-
Still leaving multiple questions unanswered....
If you want to chase "moving targets", the simplest approach is to teach all points relative to a reference frame, and then adjust that reference frame. The reference frame can be created in the simulation in an "optimal" condition, then in use can be "tweaked" as needed.
The typical method is to create a frame using the 3-point method that most major robot brands support. ABB supports this with Work Objects, and Kawasakis support something equivalent whose name I can't remember ATM. Your simulation and real frames would have to match up to make this work, but it is doable.
This is a pretty typical use case for tweaking sim OLPs to match real-world conditions -- build the simulation with a 3-point-touchable frame on each piece of tooling, generate the OLP in those frames, then touch-up each frame to the same 3 points in the real world to "snap in" the OLP to match the real-world as-built conditions.
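To illustrate the "snap in" idea numerically, here is a hedged sketch in plain Python (not ABB RAPID or Kawasaki AS syntax; the fixture shift below is an invented example). The path point is stored only in fixture-frame coordinates, so re-touching the same three physical points relocates the whole path with no change to the path data:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def unit(a):
    m = math.sqrt(sum(x * x for x in a))
    return [x / m for x in a]

def three_point_frame(origin, on_x, in_xy):
    """3-point frame definition: Origin, point on +X, point in XY plane."""
    x = unit(sub(on_x, origin))
    z = unit(cross(x, sub(in_xy, origin)))
    y = cross(z, x)
    return x, y, z, origin

def to_world(frame, p):
    """Express a frame-relative point in robot-base coordinates."""
    x, y, z, o = frame
    return [o[i] + p[0]*x[i] + p[1]*y[i] + p[2]*z[i] for i in range(3)]

# Path point as generated by the OLP, in fixture-frame coordinates:
p_rel = [50.0, 0.0, 10.0]

# Frame as built in the simulation:
sim = three_point_frame([0, 0, 0], [1, 0, 0], [0, 1, 0])

# Frame re-touched after the fixture shifted 30 mm in X and rotated
# 90° about Z (as-built condition):
real = three_point_frame([30, 0, 0], [30, 1, 0], [29, 0, 0])

print(to_world(sim, p_rel))   # → [50.0, 0.0, 10.0]
print(to_world(real, p_rel))  # → [30.0, 50.0, 10.0]
```

On an ABB this is what redefining the Work Object does for you; the program's robtargets stay untouched while their world positions follow the re-taught frame.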