Posts by CLanford_Conmet

    Thanks! Hadn't considered using CartRep programmatically. Basically I'm just doing some X, Y, Z adjustments to PR210 with 2D vision and a laser for the Z offset, but always relative to the origin of UFrame 5, which is why I was using it as a reference in the program. I'll try this and see if that fixes it.

    Ok so similar issue and need some feedback on this. I have set the user frame with a specific TCP orientation:

    When I move this User Frame 5 to PR210 and then try to move to PR210, J5 and J6 twist into a weird orientation, and I cannot figure out how to keep the TCP orientation the same as it was when I taught the User Frame. This is what happens when I try to move to PR210 after moving UFrame 5 to it:

    Is it related to the CONF settings? If so, I don't know how to transform them in a way that keeps the TCP orientation to the pallet the same as when I taught the frame. I tried updating the CONF options, but nothing seems to work.

    Do I understand correctly that you wish to allow people to update a userframe's origin, but not its orientation?

    Because that would be possible by teaching the frame with the 4-point method, where the first three points are used to create the frame's orientation, and the origin is shifted to the fourth point.
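    In pseudo-form, the construction looks roughly like this (a Python sketch of my understanding, assuming a right-handed frame: point 1 is the taught origin, point 2 sets +X, point 3 fixes the XY plane, and point 4 becomes the new origin; names and numbers are illustrative only):

```python
import numpy as np

def frame_from_4_points(p1, p2, p3, new_origin):
    """Orientation from the first three points; origin shifted to the fourth."""
    x = np.asarray(p2, float) - np.asarray(p1, float)
    x /= np.linalg.norm(x)                    # +X runs from point 1 toward point 2
    v = np.asarray(p3, float) - np.asarray(p1, float)
    z = np.cross(x, v)
    z /= np.linalg.norm(z)                    # +Z is normal to the taught XY plane
    y = np.cross(z, x)                        # +Y completes the right-handed frame
    R = np.column_stack([x, y, z])            # rotation part of the user frame
    return R, np.asarray(new_origin, float)   # orientation kept, origin replaced

# Re-teaching only the fourth point moves the origin but leaves R untouched.
R, origin = frame_from_4_points([0, 0, 0], [1, 0, 0], [0, 1, 0], [100, 50, 25])
```

    The key property for this use case: someone can touch up the fourth point alone and the frame's W, P, R stay exactly as originally taught.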

    Yes that's what I'm trying to accomplish! I'll take a look at the 4 point method and see how I can make that work.

    I was also trying to put the origin and orientation (a fully defined robot TCP position) into a PR, and then have a program drive the robot to that PR, with some comments instructing folks how to update that origin in frame setup should it ever need to be changed. I can always just do it with an internal program position and some instruction comments on how to update the origin, unless there is a better way to do that part.

    Thanks for the tip, I'll try it out!

    The only way to ensure that you maintain the same directions while teaching the UFrame is to move in the same directions as World, or to put all 0s in W, P, R.

    So I could avoid this difference in rotation values by simply teaching my user frame X and Y directions the same as World? I can't change my W, P, R values to 0 because I need them at these specific positions.


    First off, I am not a robot expert by any stretch of the imagination, so please forgive my ignorance!

    I am working on a fixed-mount 3D vision guided application using an R2000iC/210F running a handling tool program. I've decided to create a fixed reference within the camera's field of view and teach the robot a User Frame (in this case, UF5) at the same origin point. From there the plan is to send X-Y coordinates to the robot for precise locating of parts.

    Once I trained the camera and taught the robot user frame (3-point method) to the same point in space, I backed up the robot program and have been doing some offline testing in RoboGuide. But I've run into an issue that I don't fully understand and was hoping for some help here.

    I initially wanted to run the robot to the taught user frame origin, and followed other threads' recommendations by setting the UFrame to a PR. The problem is I keep getting Position Not Reachable faults. Upon further investigation and reading other threads, I set all of the coordinates to 0,0,0,0,0,0 and tried jogging to that position with the robot's coordinate system set to UF5, but still got the same fault.

    So next I tried jogging the robot to the UF origin using the "Move To" function in the User Frame setup detail, then compared the robot's position in World vs. User, and noticed that W, P, and R had values I couldn't explain. Attached are the three sets of values for the same UF position. The first screenshot shows the UF values from the UF setup screen, and below it the actual position in World when I drove the robot to that origin point. The second screenshot shows the same position in User coordinates. I understand why the User X, Y, Z coordinates would be 0,0,0 because it's at the origin, but I don't understand W, P, and R.

    UF5 in User.JPG

    My primary issue is that I plan on giving people the ability to re-teach the robot this origin by touching up the user frame origin if needed, but I need to be able to programmatically ensure the W, P, R values reflect the newly taught position. I also need to be able to jog the robot to this origin programmatically in case anyone does change the origin. Can anyone help explain how to translate the W, P, R values shown in World for the UF5 setup vs. what they represent when I actually drive the robot to the origin and show the position in User?
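    For anyone following along, here is the relationship as I understand it (a Python sketch; it assumes FANUC's W, P, R are fixed angles about X, Y, Z composed as Rz(R)·Ry(P)·Rx(W), and the angle values are made up for illustration):

```python
import numpy as np

def wpr_to_matrix(w_deg, p_deg, r_deg):
    """Fixed-angle rotations about X (W), Y (P), Z (R), composed Rz @ Ry @ Rx."""
    w, p, r = np.radians([w_deg, p_deg, r_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(w), -np.sin(w)], [0, np.sin(w), np.cos(w)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

uf_world   = wpr_to_matrix(0, 0, 30)     # hypothetical UF5 orientation in World
tool_world = wpr_to_matrix(180, 0, 30)   # hypothetical tool orientation in World

# W, P, R displayed in User coordinates describe the tool orientation relative
# to the user frame, i.e. the inverse of the uframe orientation composed with
# the tool's World orientation.
tool_user = uf_world.T @ tool_world
```

    With these numbers the User-relative orientation comes out as W=180, P=0, R=0 even though neither World W, P, R set is zero, which (if my assumption about the convention is right) is why the displayed values look unrelated at first glance.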

    One thing to note: I was also getting faults when attempting to write the UF to a PR, but it was because $PR_CARTREP was set to FALSE. Set it to TRUE and was able to get past that. The robot was taught the UF with this set to FALSE. Not sure if this matters, just info for you experts in case it does!


    Resurrecting a thread from last month. Got another robot with a "Safety Network Number Not Set: Device out of box" error. Followed every step from these previous posts, but I cannot get the Safety Network Number on the robot to match what I generated on the GuardLogix. Slightly different system:

    1769-L30ERMS Processor

    R30iB Plus controller running an R2000iC/165F robot.

    Steps I took:

    1) PLC in program mode

    2) Robot ethernet module inhibited on PLC

    3) Set Ethernet IP connection with PLC to FALSE on ROBOT

    4) Generated new configuration signature on ROBOT

    5) Input new configuration signature in Safety tab on PLC

    6) Generated new SNN on PLC

    7) Reset Ownership

    This did not work. I tried changing the DCS CIP Safety signature configuration to Fixed and to Actual; no change. Tried all combinations of EIP Scanner TRUE/FALSE and CIP Safety ENABLE/BYPASS.

    The robot SNN stays at FFFF_FFFF_FFFF_FFFF no matter what I do. The Fanuc forum wasn't much help; they just sent me the DCS configuration PDF.

    Any thoughts?

    Update: Program mode worked, but something else to consider. When I reset ownership, the PLC then prompted me that the config signature didn't match. Someone else had made other DCS changes that started all of this, and I had to generate a new signature and then copy that code back into the module in the Logix processor to get comms back.


    Hope y'all don't mind me resurrecting this, but I'm having the exact same problem. I've tried all of the above, but I'm running into the following issues:

    - Cannot inhibit the module (the box is grayed out), and I'm getting code 16#080d (Safety Network Number not set, device out of box)

    - The SNN in the robot was initially set to Fixed. I tried setting it to Auto, applying DCS, resetting, and then resetting ownership, but the config still states Remote (SNN: FFFF-FFFF-FFFF, Address:)

    - I was able to get the SNN in the robot to match the SNN in the Fanuc AOP in the GuardLogix processor once, but it still balked at the ownership reset. When the robot reset, it went back to FFFF-FFFF-FFFF and now won't change no matter what I do.

    - I also cannot change the SNN in the Fanuc AOP (not sure if I should be allowed to do that).

    - The one thing I have not tried is doing this with the PLC in program mode. Is this a requirement?

    Any ideas?

    Ok, so I figured this out myself. These instructions are add-ons included on a Fanuc robot when the Servo Robot Fuji Cam option is installed (using RoboCom). Once you have this add-on, you can use the instruction DETECT_JOINT(5,x), where x is the number of the PR you want to update.

    After setting up a joint profile in the camera's software, this instruction triggers the camera to compare the current "image" (a laser profile) against the expected image. When that happens, the Fuji Cam writes the current robot position to whichever PR you passed to DETECT_JOINT, including a Z offset you can specify if you want the weld to be at a specific offset from where the Fuji Cam is mounted on the robot's EOAT.

    This instruction effectively sets up interpolation points for the robot to move through during a weld, so it is a pre-weld scan function.
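    To illustrate the Z offset idea with plain math (a hypothetical Python sketch, not the Fuji Cam's actual calculation; all numbers made up):

```python
import numpy as np

def apply_standoff(detected_xyz, tool_z_axis, standoff_mm):
    """Back the target position off along the tool Z axis by standoff_mm,
    so the torch ends up a fixed distance from the detected seam point."""
    z = np.asarray(tool_z_axis, float)
    z /= np.linalg.norm(z)                       # unit vector along tool Z
    return np.asarray(detected_xyz, float) - standoff_mm * z

# Detected seam point at (500, 120, 80) mm, tool Z pointing along world Z:
pr_value = apply_standoff([500.0, 120.0, 80.0], [0.0, 0.0, 1.0], 10.0)
```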

    If you don't need that level of synchronization, then I would keep the robots separate. It is much simpler to have separate robot controllers and programs.

    Unfortunately I don't have that luxury. I am making a change to an existing cell where both robots run off one controller, but I may need to do separate movements. I think I've figured out a way to do it with individual GRP PR updates using hard-coded numbers prior to starting movement, but I'll need to test it and see if it works.

    Basically, I'm trying to do two linear welds in a dual-robot cell, but I have to rotate the weld head away from a device that sits in close proximity to the weld seam. These devices are not in the same position for each robot, so I'm going to try creating PRs for each device's position for each robot, but only apply hard-coded joint offsets on the robot that has the device nearby and leave the other robot's offset alone. I'll do the same for the other side and set the other robot's head offsets back to their normal position.

    This is my preference since both of these run in one TP program and I won't have to sync everything up, but I've never done anything like this before, so wish me luck!

    This is correct.

    And yes, you can monitor multiple programs.

    Great, thank you! If both programs are performing motion, and I want to ensure that the programs are synchronized, can I just turn on a flag or DO in each program (for example, DO1 for PRG1 and DO2 for PRG2), and then turn those off at the end of each program's sequence?

    Then I could look for the appropriate DO status at the start of these programs to ensure that the motion stays coordinated. Does that sound like the right approach?
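    As a sanity check for myself, the pattern I'm describing can be sketched as a plain Python analogy (not TP code; a Barrier stands in for "both DOs on before motion"):

```python
import threading

# Two "programs" each signal that they have reached the start of their motion
# section and wait for the partner before moving, mimicking a DO handshake.
start_gate = threading.Barrier(2)   # releases only when both sides arrive
log, lock = [], threading.Lock()

def program(name):
    start_gate.wait()               # like: set my DO, wait for partner's DO
    with lock:
        log.append(f"{name} motion")  # motion happens only after both arrive

t1 = threading.Thread(target=program, args=("PRG1",))
t2 = threading.Thread(target=program, args=("PRG2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

    One thing the analogy surfaces: if each side clears its DO immediately after its own motion, the partner could miss the pulse, so it may be safer to clear only once both sides have confirmed.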

    Could you elaborate on this? Also interested in this subject.

    I've read that you can execute the RUN command and it will effectively run a second TP program simultaneously with the one you're currently in. I believe the motion group used by the program started with RUN has to be different from the motion group of the TP program that issues the RUN command.

    Is this correct? If so, is it possible to watch both TP programs executing simultaneously (i.e., in a double view on the TP)? Also, how do you sync the tasks?

    This applies to some work I'm doing as well.

    ARs are temporary, and there is a different instance for every called program. You see what their values are by looking at the program that calls the program containing the ARs. You have to go up one level.
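    A plain-Python analogy of that point (ZONEREQ here is a made-up stand-in, not a real program): ARs are just call arguments, so their values are whatever each individual CALL passes in.

```python
registers = {}

def zonereq(ar1, ar2):
    """Stand-in for a TP program that receives AR[1], AR[2] and copies them
    into registers; the values come from whichever program CALLs it."""
    registers["R[80:Zone 2 Request]"] = ar1
    registers["R[81:Robot Safe Zone]"] = ar2

zonereq(1, 5)                 # like CALL ZONEREQ(1,5) from one caller
first_call = dict(registers)  # snapshot after the first call
zonereq(2, 9)                 # a different caller passes different values
```

    Nothing inside the called program ever "writes to" the ARs; the values simply differ because a different caller supplied them.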

    Thanks for the tip. I did eventually find where these ARs are being modified. I guess this was more about my ignorance of AR usage than anything else. So I'm guessing GESNDDAT and Sendsysvr are effectively Karel programs that handle the AR mapping between TP programs?

    Thank you all for the responses and help so far.

    What I still don't understand is what is passing data into the ARs. The AR values themselves are written to individual registers in some routines, which are then used to jump to LBLs that control specific DOs. These DOs are sent over Ethernet to a PLC, which then turns on DIs that allow the robots to enter and exit cells. I have done a full ASCII dump of all TP programs and cross-referenced the ARs, and I only see them used to write to the registers mentioned above and in these functions. Yet I'm seeing the AR values change dynamically based on robot movement. So somehow these ARs are being updated, but I can't figure out how.

    I tried to attach a screenshot of my Notepad++ cross-reference of the ARs, but it wouldn't let me attach photos. Pasting the text version of the cross-reference below. You can see that they only appear to be used to write to registers within my TP programs; nothing appears to write to them from the TP programs that I can see:

    Searching for all ARs using the syntax "AR[" in my search:

    TP Programs ASCII\ (9 hits)

    Line 24: 2: CALL GESNDDAT(AR[1],1,AR[2],AR[3],AR[4],AR[5],AR[6],AR[7],AR[8],AR[9]) ;

    TP Programs ASCII\ (2 hits)

    Line 24: 2: CALL GESNDEVT(AR[1],1,AR[2]) ;

    TP Programs ASCII\ (10 hits)

    Line 24: 2: CALL GESNDDAT(AR[1],AR[2],AR[3],AR[4],AR[5],AR[6],AR[7],AR[8],AR[9],AR[10]) ;

    TP Programs ASCII\ (3 hits)

    Line 24: 2: CALL GESNDEVT(AR[1],AR[2],AR[3]) ;

    TP Programs ASCII\ (6 hits)

    Line 24: 2: CALL GESNDSYS(AR[1],AR[2],AR[3],AR[4],AR[5],AR[6]) ;

    TP Programs ASCII\ (2 hits)

    Line 22: 1: CALL ACSET(AR[1],AR[2]) ;

    TP Programs ASCII\ (8 hits)

    Line 40: 15: R[80:Zone 2 Request]=AR[1] ;

    Line 41: 16: R[81:Robot Safe Zone]=AR[2] ;

    Line 42: 17: R[82:OP45 DN Request]=AR[3] ;

    Line 43: 18: R[83:Screw Request]=AR[4] ;

    Line 44: 19: R[84:Press Request]=AR[5] ;

    Line 45: 20: R[85:Snap Request]=AR[6] ;

    Line 46: 21: R[86:OP50 Request]=AR[7] ;

    Line 47: 22: R[87:Reject Request]=AR[8] ;

    TP Programs ASCII\ (8 hits)

    Line 40: 15: R[80:Zone 2 Request]=AR[1] ;

    Line 41: 16: R[81:Robot Safe Zone]=AR[2] ;

    Line 42: 17: R[82:OP45 DN Request]=AR[3] ;

    Line 43: 18: R[83:Screw Request]=AR[4] ;

    Line 44: 19: R[84:Press Request]=AR[5] ;

    Line 45: 20: R[85:Snap Request]=AR[6] ;

    Line 46: 21: R[86:OP50 Request]=AR[7] ;

    Line 47: 22: R[87:Reject Request]=AR[8] ;

    TP Programs ASCII\ (8 hits)

    Line 40: 15: R[80:Zone 2 Request]=AR[1] ;

    Line 41: 16: R[81:Robot Safe Zone]=AR[2] ;

    Line 42: 17: R[82:OP45 DN Request]=AR[3] ;

    Line 43: 18: R[83:Screw Request]=AR[4] ;

    Line 44: 19: R[84:Press Request]=AR[5] ;

    Line 45: 20: R[85:Snap Request]=AR[6] ;

    Line 46: 21: R[86:OP50 Request]=AR[7] ;

    Line 47: 22: R[87:Reject Request]=AR[8] ;
