# Work Object Orientation for Vision

• Hello all,

Here's an interesting question.

We're currently commissioning two ABB robots, each with a seventh axis (a track).

The cell uses a vision system to determine the correct position of the work object as it arrives in the cell.

Now, the problem is such:

We use 4 cameras to determine the displacement of the work object and the rotation. The vision guy gave us the zero position coordinates for his camera system relative to a cell zero position that was measured previously. Each work object has its own origin for the measurement and we will be receiving X, Y, Z and RX, RY, RZ displacement offsets.

Using laser measuring, we've also determined the zero position for the work object for the two robots and created a WOBJ for each robot.

Now, where we're a bit stuck is this:

How do we define WOBJs for each work object origin now that we have a cell zero position on each robot? The robots are facing each other, but their coordinate systems are rotated 180 degrees relative to each other.
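For what it's worth, the 180-degree relationship between the two base frames can be sketched numerically. A minimal Python illustration (the base positions and part location are invented numbers, rotation about Z only):

```python
import math

def cell_to_base(p_cell, base_origin_in_cell, base_rz_deg):
    """Express a point measured in the shared cell frame in one
    robot's base frame (rotation about Z only, for illustration)."""
    rz = math.radians(base_rz_deg)
    dx = p_cell[0] - base_origin_in_cell[0]
    dy = p_cell[1] - base_origin_in_cell[1]
    # inverse rotation takes cell coordinates into base coordinates
    x =  math.cos(rz) * dx + math.sin(rz) * dy
    y = -math.sin(rz) * dx + math.cos(rz) * dy
    return (x, y, p_cell[2])

# Two robots facing each other across the cell, bases rotated 180 deg:
part_in_cell = (1000.0, 500.0, 0.0)
p1 = cell_to_base(part_in_cell, (0.0, 0.0), 0.0)       # robot 1's view
p2 = cell_to_base(part_in_cell, (3000.0, 0.0), 180.0)  # robot 2's view
```

The same physical point comes out with different (here, mirrored) coordinates in each base frame, which is exactly why a shared cell-zero WOBJ per robot is convenient.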

I don't know if this is terribly clear as far as explanations go, but I'll only have access to the vision guy next week to pick his brains. I'd rather get some trajectories programmed this weekend so I won't have to go over them again later on when we're doing vision tests.

• Are they set up independently or as a MultiMove system?

I assume that each robot is set up with the track in the same task, and as "Baseframe moved by", but are the two robots set up on independent controllers or as part of a MultiMove system?

• Independent in this case.

My colleague just had a brainstorm: we'll try creating our cell WOBJ and then create a number of Object Frames relative to this WOBJ to represent the various work objects we'll be dealing with.

• So, if I get this right....

You've established a "Zero" position for each robot.

You know the relationship / distance between that Zero and the Camera(s) Zero?
The Camera system will feed you the distance / offset from the Camera-Zero to the part?

I'd probably look into PoseVect and PoseMult - assuming I'm understanding the setup correctly.
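Assuming I've understood the chain correctly, the PoseMult idea is just composing transforms: robot-zero to camera-zero, then camera-zero to part. A rough Python analogue using homogeneous matrices (all coordinates here are made-up for illustration, and only a Z rotation is shown):

```python
import numpy as np

def pose(x, y, z, rz_deg=0.0):
    """Build a 4x4 homogeneous transform: translation plus a
    rotation about Z (enough to illustrate planar offsets)."""
    rz = np.radians(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(rz), -np.sin(rz), 0],
                 [np.sin(rz),  np.cos(rz), 0],
                 [0,           0,          1]]
    T[:3, 3] = [x, y, z]
    return T

# Robot-zero -> camera-zero (measured once), then camera-zero -> part
# (reported by the vision system each cycle). Chaining them, the way
# PoseMult chains poses, gives robot-zero -> part directly.
robot_to_camera = pose(1500.0, 200.0, 0.0, rz_deg=90.0)
camera_to_part  = pose(12.3, -4.5, 0.0, rz_deg=1.5)
robot_to_part   = robot_to_camera @ camera_to_part
```

Note that because the camera frame is rotated here, the camera's X/Y offsets land on different axes in the robot frame, which is the whole reason matrix composition (or PoseMult) beats naive addition of offsets.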

• Close enough, you got it.

The camera feeds us an X, Y, Z, RX, RY, RZ offset of the Object Frame.

We defined a WOBJ with the cell zero coordinates, and defined the work object itself as an Object Frame on the WOBJ - which seems really obvious looking back, but it's our first time working with a camera system this complex.

After that it was just a matter of calculating a PDISP pose out of the information we got from the camera. We ran tests all day yesterday and, fortunately, it's working like a dream. Thank you for the input.

• All the robots think they are in the same location (their base frames match). This means they should all use the same workobject, so that a single vision result can be given to every robot in the cell. Wherever the shared location of the workobject is, put that into the user frame (uframe), and put the vision offsets into the object frame (oframe). Here's the basic structure, simplified slightly.

```
VAR pos VisionTranslationPoints := [0, 0, 0];
VAR num VisionXRotation := 0;
VAR num VisionYRotation := 0;
VAR num VisionZRotation := 0;
PERS wobjdata WobjExample;   ! workobject type is wobjdata; PERS so motion instructions can use it

VisionTranslationPoints.x := giVisionX;
VisionTranslationPoints.y := giVisionY;
VisionTranslationPoints.z := giVisionZ;
VisionXRotation := giRotX;
VisionYRotation := giRotY;
VisionZRotation := giRotZ;

WobjExample.oframe.trans := VisionTranslationPoints;
! Note: OrientZYX expects its angles in Z, Y, X order
WobjExample.oframe.rot := OrientZYX(VisionZRotation, VisionYRotation, VisionXRotation);
```
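As a sanity check on the orientation step: RAPID's OrientZYX takes its angles in Z, Y, X order, and the quaternion it produces can be sketched in Python like this (my own reimplementation for checking values, not ABB code):

```python
import math

def orient_zyx(rz_deg, ry_deg, rx_deg):
    """Quaternion [q1..q4] = [w, x, y, z] for a ZYX Euler rotation,
    i.e. rotate about Z, then Y, then X, matching the argument
    order of RAPID's OrientZYX(Rz, Ry, Rx)."""
    rz, ry, rx = (math.radians(a) / 2 for a in (rz_deg, ry_deg, rx_deg))
    cz, sz = math.cos(rz), math.sin(rz)
    cy, sy = math.cos(ry), math.sin(ry)
    cx, sx = math.cos(rx), math.sin(rx)
    # closed-form product qz * qy * qx
    return [cz*cy*cx + sz*sy*sx,
            cz*cy*sx - sz*sy*cx,
            cz*sy*cx + sz*cy*sx,
            sz*cy*cx - cz*sy*sx]

# A pure 90-degree yaw should give roughly [0.707, 0, 0, 0.707]
q = orient_zyx(90.0, 0.0, 0.0)
```

Comparing a few hand-entered angles against the values the controller reports is a quick way to confirm the vision system and the robot agree on rotation conventions before running real parts.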

• Out of curiosity, which vision vendor are you using?

For this particular project we're using an ISRA system.

• Okay, ISRA systems work in WorkObject coordinates, rather than Tool. It's been quite a few years, but I did a lot of ISRA rack-picking with IRC5s way back when.

(Disclaimer: I'm talking about ISRA Mono3D here, their product line may have changed since)

The trick is that, during system configuration, you have to give the ISRA system the actual robot position, with (IIRC) the WorkObject you're going to be using, and Tool 0 active. Then, the ISRA will give you an offset to apply to the WorkObject. You apply that offset to the WorkObject, and run from there. ISRA has (or used to) a RAPID module they would give you that would handle all this -- you pass their function the active WorkObject, the ISRA function modifies the OFrame portion of the WorkObject, and then you run your motions.

• SkyeFire,

I've worked with ISRA as recently as this year, and your statements are still accurate.

• > Okay, ISRA systems work in WorkObject coordinates, rather than Tool. [...] ISRA has (or used to) a RAPID module they would give you that would handle all this.

Yeaaaah, not in this case. I got a pat on the back from ISRA mostly.

Thankfully, PROMIA to the rescue! Seems like there's a whole module dedicated to this stuff. Once we cherry-picked all we needed from it, things ran smoothly.

Our problems stemmed from the fact that we didn't know exactly how to reference an object to a work object. This project has been very educational for me. Always a plus!

• > Yeaaaah, not in this case. I got a pat on the back from ISRA mostly.

Hm! That's... odd. Well, they used to give it to everyone using Mono3D with ABBs, so:

Now, while it's production code that ran well for millions of cycles, it's perhaps not 100% polished. And it does include some inherent assumptions -- for example, that the ISRA will be sending signed 16-bit integers (pre-multiplied by 10 for the mm values and by 1000 for the degrees), while the RAPID Group Inputs are unsigned integers -- hence the extra math if the incoming value is >32767, i.e. if the sign bit is set.
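The sign-bit handling described above is just a two's-complement conversion plus a scale factor. A minimal Python sketch (function name and example values are mine; scale factors as described):

```python
def gi_to_offset(raw, scale):
    """Convert an unsigned 16-bit group-input value to a signed
    physical offset. The vision system pre-multiplies mm by 10 and
    degrees by 1000 before sending, so the scale is divided back out."""
    if raw > 32767:      # sign bit set: undo the two's-complement wrap
        raw -= 65536
    return raw / scale

# e.g. a -12.5 mm offset is sent as -125, which arrives on the
# unsigned group input as 65536 - 125 = 65411
offset_mm = gi_to_offset(65411, 10)
```

The same conversion applies to the rotation channels, just with the 1000 divisor instead of 10.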

But basically, you pass it a WorkObject and the ISRA style/recipe/type number; it handles all the communications, modifies the WorkObject, and passes it back, along with True for success or False for an error.

It should be bus-agnostic (this system was using DeviceNet), as long as you build your I/O in EIO.CFG the same way.

Of course, it's been enough years, modern ISRA systems may have different I/O maps now, so your mileage may vary.

• Couldn't you just do a vector addition of the vision result plus the work object?

• > Hm! That's... odd. Well, they used to give it to everyone using Mono3D with ABBs, so:

Except that we're using CAPMES in this case.

• CAPMES? Well, I can't say I know anything about that one. Although, given the way ISRA normally does things, I'd be surprised if the robot interface was extremely different from Mono3D.