I am working through the same issue right now, but with an ABB robot. robin_gdwl would you be able to share some of your spreadsheet showing the buffer setup and EIP cells?
Posts by ColoradoTaco
-
-
And here's what happens when my parts get too close to the edge. The calipers to find the ends of the screw (and therefore calculate length) exceed the edge of the image, and it fails the measurement.
-
And here is another image, showing a closeup of the tool I'm using (twice, in opposite directions) to find the edges of the screw at each end. I then do a distance measure between those edges to find the length.
In the Fanuc iRVision, this appears to be MUCH EASIER. At least, based on what I've been able to figure out the previous programmer did... Looks like their blob tool can output major axis (length and orientation). Am I missing an easier way of doing this?
-
I have a vision job that can find and extract all the relevant information from a single part, and I *could* use that for the bulk processing, but one of the tools doesn't work near the edge of the camera field and I haven't found a suitable alternative.
Images of a test part and the data collection
-
You could create a program that would ask the operator for inputs (part name, part ID number, array index, etc), then would take the "training shot" you mentioned, then save all those user-entered values, along with the length/diameter/etc of the trained part, into an array entry in RAPID.
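A rough sketch of what that could look like in RAPID (the record name, fields, and array size below are all illustrative, not from any existing program):

Code
MODULE PartLibrary
    ! Illustrative record holding one trained part profile
    RECORD partprofile
        string partName;
        num partID;
        num length;
        num diameter;
    ENDRECORD

    ! Array of trained parts; declare as PERS with an initial value
    ! if the data needs to survive a restart
    VAR partprofile trainedParts{20};
    PERS num partCount:=0;

    ! Called after the training shot; measured values come from the vision job
    PROC StorePart(string name,num id,num len,num dia)
        partCount:=partCount+1;
        trainedParts{partCount}:=[name,id,len,dia];
    ENDPROC
ENDMODULE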
The idea is to use the first part image essentially in place of any operator input. We actually have a Fanuc robot running iRVision that is doing this (although rudimentary). It looks at blob area, circumference, circularity, and major axis length, and uses those data points to filter results from all the production images.
I'm trying to replicate the same thing in Integrated Vision (we have a Cognex 5000-series smart camera, connected to an OmniCore controller). I had hoped I might be able to do it without measuring all the discrete data every single time, because that gets slow and tedious. Something like First Part Image >> Locate black-on-white blob >> save as pattern in second job to ID all other parts
-
As the title suggests... Is it possible to use one vision job to help define the parameters of a second job? I am trying to set up a pick-and-place arrangement that should be capable of handling a vast array of parts. I want to have the operator place a single part on the feeder tray, in order to "train" the current part profile, then have a second job that simply uses that first piece as a pattern in order to locate all the similar parts for picking.
Parts vary in length, diameter, and shape (all different types of orthopedic screws and fasteners).
My alternative is to use the first piece to pull a bunch of measurements using calipers and such, then measure EVERY SINGLE PART with the same calipers, and compare the results in RAPID to determine whether or not each part is correct. I would really like to avoid all that data transfer, though, if I can just extract a PatMax pattern from the single-part first image and run everything else against it (until I reset it).
Ideas? Or am I dreaming?
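For what it's worth, if it does come down to measuring every part, the RAPID side of the comparison is at least simple. A sketch, with made-up reference variables and tolerances:

Code
! Trained reference values, written once from the first-part measurements
PERS num refLength:=0;
PERS num refDiameter:=0;

! Compare a measured part against the trained reference, within tolerance
FUNC bool PartMatches(num measLen,num measDia)
    CONST num lenTol:=0.5;    ! mm, illustrative
    CONST num diaTol:=0.2;    ! mm, illustrative
    RETURN Abs(measLen-refLength)<=lenTol AND Abs(measDia-refDiameter)<=diaTol;
ENDFUNC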
-
I am trying to navigate IP addresses, ethernet switches, etc. on an OmniCore controller. And I'm a dunce when it comes to networks and communications. I am currently trying to establish socket messaging between the controller and an Asyril vibratory feeder. Asyril has confirmed that this should work, and I can communicate with the feeder directly from my PC if I connect to it.
OmniCore has the following options that pertain to my issue
- 3020-2 PROFINET Device
- 3023-2 PROFIsafe Device
- 3024-2 Ethernet/IP Adapter
- 3114-1 Multitasking
Public (WAN) Settings
IP Address: 192.168.10.10
Subnet: 255.255.255.0
Gateway: 192.168.10.10
Port Speed: Auto
Asyril Feeder Settings
IP Address: 192.168.10.151
Subnet: 255.255.255.0
TCP Port: 4001
So I have a background task that will (hopefully, eventually) handle all the communication and logic for the feeder. In that task, I am currently just trying to set up the socket and test the communication by sending a command and waiting for a reply.
Code
MODULE Asycube_MAIN
    VAR socketdev socket_CUBE1;
    VAR socketdev socket_CUBE2;
    VAR string received_string;

    CONST string CUBE1_IP:="192.168.10.151";
    CONST string CUBE2_IP:="192.168.10.152";
    CONST num CUBE1_Port:=4001;
    CONST num CUBE2_Port:=4001;

    PROC main()
        Init_Cubes;
        WaitTime 1;
        AsyCube_GetRecipe;
        WaitDI diBGR1_RingSens,1;
    ENDPROC

    PROC Init_Cubes()
        SocketCreate socket_CUBE1;
        SocketBind socket_CUBE1,CUBE1_IP,CUBE1_Port;
    ENDPROC
However, when I reboot the controller to fire off this background task, I get the following error:
Quote
41575: Socket error
Description
Task: T_AsyCube.
The specified address is invalid. The only valid addresses are any public WAN addresses or the service port address, 192.168.125.1.
Program ref: /Asycube_MAIN/Init_Cubes/SocketBind/19.
Actions
Specify a WAN address or the service port address.
Recovery: ERR_SOCK_ADDR_INVALID
So yeah, umm..... help? haha
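In case it helps anyone hitting the same error: SocketBind is for the server (listening) side of a connection, and it only accepts the controller's own addresses, which matches the error text exactly. Since the feeder is the one listening on port 4001, the robot is the client and would normally use SocketConnect instead. An untested sketch of Init_Cubes along those lines:

Code
PROC Init_Cubes()
    SocketCreate socket_CUBE1;
    ! Connect out to the feeder as a client; the controller
    ! chooses its own local port automatically
    SocketConnect socket_CUBE1,CUBE1_IP,CUBE1_Port\Time:=5;
ENDPROC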
-
Nation, if the camera were to lose connection, would the program still proceed to GET OFFSET, or would the error stop it prior to that line?
Is there any other way outside of KAREL to flag that communication error condition?
-
Why doesn't the TP accept your changes?
I've tried adding several variations of the following line, and always get a splash screen saying the line is invalid.
VISION RUN_FIND SR[1] JMP LBL[1]
I've defined the label ahead of time, so that's not the issue. What I'm trying to understand is how to have RUN_FIND kick me down to the label if it is unsuccessful (no response from the camera is the specific case we are trying to handle).
Quote
Is the program write protected? Is the password protection option present on the controller, and you are not logged in at an appropriate level? Is the TP turned on when you try to make the edits?
I have appropriate access/credentials. TP is enabled. I have tried on the physical TP (we have the tablet option), as well as in the web browser editor on my laptop.
Quote
If you are trying to add a jump label to the end of "VISION RUN FIND", that is not possible.
Okay... so what we need is a way to set a digital output if we attempt to take an image and do not get a valid response from the camera. (We force the condition by simply disconnecting the camera.) Can you give me an example of how to do that with VISION RUN_FIND, or some other method?
I'm an old ABB guy, and the FANUC code is just not very intuitive for me yet.
-
Trying to expand the error handling on a robot that was programmed by a coworker who recently left. He and I were both new to Fanuc, but he was able to spend a lot more time on the project than I was.
I am trying to set an alarm ID register when I request an image using VISION RUN_FIND, and a result is not obtained. (Essentially a VISION TIMEOUT alarm). My only examples are the existing code that was written, but when I try to duplicate it, the TP doesn't accept my changes. We don't have RoboGuide, and are limited to the pendant or the browser editor.
Here is an example of how errors are handled throughout the system. "JMP" is appended to a line of code, then the associated LBL sets an AlarmID and StatementID.
Code: Snippet from VISION Program
CALL IRVWAITLOG
VISION RUN_FIND SR[1]
VISION GET_NFOUND SR[1] R[3]
IF (R[3:QtyFound]=1) THEN
--eg:SCREW FOUND, GET VR
VISION GET_OFFSET SR[1] VR[1] JMP LBL[1]
--eg:STORE MES VALUES
R[22]=VR[1].MES[1]
R[23]=VR[1].MES[2]
R[24]=VR[1].MES[3]
R[25]=VR[1].MES[4]
--eg:CHECK CIRCULARITY
IF (R[25:CircularityRef]>R[103:CircularityMax]),JMP LBL[2]
.......
--eg:GET VR FAILED
LBL[1]
R[2:AlarmID]=5
--eg:SET NEXT STATE TO ALARM HANDLER
R[1:StatementID]=7
--eg:JUMP OUT OF PROGRAM
JMP LBL[999]
--eg:CIRCULARITY FAILED
LBL[2]
R[2:AlarmID]=4
DO[147]=PULSE,1.0sec
--eg:SET NEXT STATE TO ALARM HANDLER
R[1:StatementID]=7
--eg:JUMP OUT OF PROGRAM
JMP LBL[999]
Then there is a separate program, ALARM_HANDLER which does any number of things depending on the AlarmID set elsewhere.
- Call other programs (such as GO_HOME)
- Set UALM register
- Set a DO
- PAUSE the program
So how can I cause a camera communication/vision timeout error to trigger a JUMP and then call the ALARM_HANDLER program?
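One caveat first: if a disconnected camera makes VISION RUN_FIND fault outright, execution stops before any following line runs, and TP alone can't catch that without KAREL or a resume/error program. But if RUN_FIND returns with no result, the existing pattern extends naturally. A sketch in the same style as the code above (AlarmID 6 and LBL[3] are made up for the example):

Code
CALL IRVWAITLOG
VISION RUN_FIND SR[1]
VISION GET_NFOUND SR[1] R[3]
--eg:NO RESULT FROM CAMERA
IF (R[3:QtyFound]=0),JMP LBL[3]
...
--eg:VISION FAILED
LBL[3]
R[2:AlarmID]=6
--eg:SET NEXT STATE TO ALARM HANDLER
R[1:StatementID]=7
--eg:JUMP OUT OF PROGRAM
JMP LBL[999]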
-
chandsavaliya9768 Here is a very recent version, specifically for the OmniCore controller. Should be pretty close in most regards.
3HAC066559 AM Functional safety and SafeMove for OmniCore-en.pdf
-
Hello I am having issues with mapping Safety IO's. Do you have any manuals where it explains how to do mappings for safety Io's?
Probably should have been a new thread, but no worries. You will want to look at the Functional Safety or SafeMove manual for whatever version of RobotWare you're running.
Start here: https://library.abb.com
-
Not sure if this counts as an easter egg, but another time was where the robot had a large metal spike as part of its tooling. I nicknamed the robot "Stab-a-tron".
Was this robot working in sand-casting by chance? Most prominent use of robo-spikes I've ever seen was stabbing vent holes in sand casting molds.
-
You have a working solution with roughly 10 lines of code, and now you want a solution that saves 3 or 4 lines. Have a drink and be happy with your solution.
I see your point. I just felt like I was maybe missing something because it seems like it should be cleaner and easier to move in an error handler. I will go sip my coffee now.
-
I had a customer at a foundry in Pennsylvania many years ago who employed just one person with the knowledge to program ABB robots, and he was bitter and strange. Went out there for a service call and found robtargets named all manner of inappropriate things. Milder examples I can remember were "titty" "buttcrack" and "cleavage". It got way worse than that. About a year later I had another service call out there and the guy was gone and all his programs had been completely wiped.
Not really an "easter egg" per se. But I'm sure he thought it was hilarious.
-
Works from a normal routine as well, but I still have to include that litany of instructions to prevent this error...
41739: StorePath required
Description
Task: T_ROB1.
Instruction MoveJ is executing in an error handler or a trap routine. Use StorePath before using a movement instruction on other level than base.
Program ref: /SERVICE/IdleDance/MoveJ/188.
Causes
A movement instruction executed without having the path stored.
Actions
Execute StorePath before using movement instruction MoveJ. Read Programming type examples in the RAPID manual to see how to use movement instructions in TRAP routines and error handlers.
-------------
Do I really need all this, just to move the robot in an error handler??
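Short answer: yes, motions on anything other than the base level need the path stored first. The minimal error-handler pattern the RAPID manual describes looks like this (positions, speeds, and the routine name are placeholders):

Code
PROC DoWork()
    MoveJ pWork,v500,fine,tool0;
ERROR
    StorePath;                       ! move execution to a new path level
    MoveJ pSafe,v200,fine,tool0;     ! recovery motion on that level
    RestoPath;                       ! restore the interrupted path
    StartMove;                       ! restart the stored movement
    RETRY;
ENDPROC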
-
Have you tried calling a normal routine from the trap and that routine has the motions?
I have not. Honestly didn't even realize I could. I'll definitely try that!
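For reference, a sketch of that layout (names invented). Note the called routine still executes on the trap level, so StorePath/RestoPath are still required around the call; the gain is mainly that the trap stays thin and the motions live in a plain procedure:

Code
TRAP tr_IdleCheck
    StopMove;
    StorePath;
    IdleDanceMoves;       ! normal routine containing the motions
    RestoPath;
    StartMove;
ENDTRAP

PROC IdleDanceMoves()
    MoveJ pIdleCheck1,v200,z50,tool0;
    MoveJ pIdleCheck2,v200,z50,tool0;
ENDPROC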
-
Because it came up in a discussion yesterday while polishing up an in-house HMI for a new system... Have you ever added, found, or been haunted by an easter egg or other "undocumented feature" in an industrial robot? Would love to hear your funny stories here (if you are at liberty to share).
-
Code: Trap Routine
TRAP tr_IdleCheck
    SkipWarn;
    RestoPath;
    StopMove;
    ClearPath;
    StorePath;
    IdleTime:=ClkRead(clk_IDLE);
    IF IdleTime<MaxIdleTime OR doRobotAtHome<>1 THEN
        RETURN;
    ELSE
        ISleep int_IDLE;
        IdleMinutes:=IdleTime/60;
        ErrWrite\W,"Idle time alert.";
        FOR i FROM 1 TO 3 DO
            MoveJ pIdleCheck1,v200,z50,tool0;
            MoveJ pIdleCheck2,v200,z50,tool0;
            MoveJ pIdleCheck3,v200,z50,tool0;
            MoveJ pIdleCheck4,v200,z50,tool0;
        ENDFOR
    ENDIF
    MoveAbsJ jt_HomePos,v200,fine,tGripper_0Deg;
    IWatch int_IDLE;
    RestoPath;
    StartMove;
    RETURN;
ENDTRAP
-
Follow-up... Adding the same instructions as above and replacing the RETRY with RETURN, my interrupt works the same way with no issues. It still seems more cluttered than it should be, though, with all the SkipWarn/RestoPath/StopMove/ClearPath/StorePath calls just so I can make some other motions and then come back to the WAIT instruction.