We had an issue where a YRC1000micro controller looks to have blown a capacitor. We have a spare controller, but nobody seems to have a backup of the jobs that were on the damaged one. On inspection we found an internal SD card and were wondering if we can swap it into the spare controller, whether that card is where the jobs are stored, and whether swapping it would cause us any other issues.
If anyone knows an easy way to get the jobs off the damaged controller and onto the spare one that doesn't involve writing them all again, that would be great.
Posts by steveo86
-
I've been training new guys at work for years. Usually I don't care about programming; I do care that they understand 3 things:
1) Jogging the robot linearly or by joint. They need to understand the difference.
2) Teaching a point as L or J. They need to understand that has nothing to do with how they jog the robot.
3) Understanding a simple A-to-B program. Teach A, jog the robot all over the place, teach B. Then run.
They need to understand that the robot only knows how to go from A to B and all the other motions they did are "forgotten".
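As a rough idea, that whole A-to-B job is only a couple of lines of INFORM (the speeds here are just placeholders):
NOP
'teach this position as A
MOVJ VJ=25.00
'jog the robot anywhere you like in between, then teach this one as B
MOVJ VJ=25.00
END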
Thanks for the input. I went over all the basic stuff like that with them. I was more looking for little projects to give them that would use a wide range of instructions and get them thinking, kind of like a little puzzle for them to solve.
-
A few things that come to mind.
1. Modifying points
Have them move an object from A to B. Move the object to a different location. Do this enough times and they'll get tired. Then show them how to use the shift function (there's a quick sketch after this list).
2. Tooldata
Make them move an object from A to B, then switch the tool to a similar one where the TCP coordinates aren't the same. Tell them that they are not allowed to modify the points.
3. Workobject
Use a table on wheels and have the robot move a part from A to B. Move the table. Here too they aren't allowed to modify any points.
4. Optimization
Create a slow and weird process and make them optimize the movements.
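For the shift-function exercise, since the trainees are on Motoman, a minimal INFORM sketch could look something like this (P010 and the speeds are placeholders; P010 would hold the measured offset between the old and new part location):
'apply the offset in P010 (robot frame) to everything between SFTON and SFTOF
SFTON P010 RF
MOVL P001 V=200.0
MOVL P002 V=200.0
SFTOF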
I've covered a few of them, however the tooldata one had skipped my mind; that's a good one for me to give them.
I've set them a task at the moment: make a program that picks from point A and places at point B in 10 mm increments on the X or Y position for 10 cycles, then picks from point C and places at point D, incrementing on the Z this time, for 5 cycles, then reverses the entire thing and takes everything back to the positions it came from.
Might have gone a bit far, but youngsters these days pick stuff up easily enough.
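For anyone who wants to see roughly what the first half of that looks like in INFORM, a sketch could be something like this (variable numbers, speeds and the gripper handling are all placeholders, and SUB Pxxx Pxxx is just one way of zeroing a position variable):
SET I000 0
'P011 holds the 10 mm increment in X only (10000 = 10.000 mm)
SUB P010 P010
SUB P011 P011
SETE P011 (1) 10000
*LOOP
'pick from point A
MOVJ P001 VJ=25.00
'close gripper here
'place at point B, shifted by the accumulated offset in P010
SFTON P010 RF
MOVL P002 V=200.0
SFTOF
'open gripper here
'grow the offset by 10 mm and go round again until 10 cycles are done
ADD P010 P011
INC I000
JUMP *LOOP IF I000<10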
-
I was wondering if anyone has or can suggest some simple training exercises I can give to a couple of people who have joined where I work.
I've been over loads with them and have set a couple of tasks that I was set when I went on Motoman training here in the UK, but wondered if people had any suggestions for good practice exercises and tests to give them. I've done bits like incrementing a counter and, once it reaches a certain number, jumping over parts of the job and then resetting to start again. I'm new to training people and just thought someone here might have some good suggestions.
Thanks for your help in advance
Steve
-
worked great. Thank you. I will take a look when I get to the office.
No problem, best of luck. If you have any issues, I have the guy who helped us set up the AnyFeeder coming to my place on Wednesday. If you let me know what issue you're having he may be able to offer some advice.
-
Well, if you want to attach it I would take a look at it.
AnyFeeder_UsersGuide_en_201707_R235I-E-02_tcm849-116241.zip
Hopefully this worked and it's of some help to you.
-
I have the user guide but it only says how to set it up with a robot. Unless you have something different?
The one I have, I don't see anything about setting it up with a robot at all. It just has all the ASCII codes and so on to send to make the AnyFeeder move.
-
I have a PDF of the user guide but don't know how much help that would be.
Yeah, that would be fine. I got it in yesterday and am sending code with no response, even though the PLC reports a successful send. I'm even using an Omron PLC, and there are no docs on setting it up with a PLC, only through a robot, which I find odd.
-
Did you use a PLC to control the AnyFeeder? Any issues? Project coming up and just curious.
Yeah, the PLC is controlling the AnyFeeder, sort of. To be fair we are only using dispense, feed forward and feed backward. However, what we have found is that you have to load the ASCII code every time you turn it on or off. It doesn't retain the code you program in, so with the PLC we had to have a 'reload ASCII' button to load everything back up every time the machine shuts down.
-
So after many hours today I managed to get it working. Doubtful it's the correct way to do it, but I thought I would post how I managed it in case anyone in the distant future finds this post hoping for some help.
Here is my solution, right or wrong, but it works:
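'grab the current position into P001 and convert it to robot frame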
GETS PX001 $PX000
CNVRT PX001 PX001 RF
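'pull out the Z element (3), drop it by 67 mm (values are in 0.001 mm) and write it back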
GETE D080 P001 (3)
SET D080 EXPRESS D080 + -67000
SETE P001 (3) D080
MOVL P001 VMAX=100 PL=0
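'switch output OT#42 on (the gripper in this case) and give it 200 ms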
DOUT OT#(42) ON
TIMER T=0.200
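'same again from the new position, but raising Z by 60 mm this time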
GETS PX002 $PX000
CNVRT PX002 PX002 RF
GETE D081 P002 (3)
SET D081 EXPRESS D081 + 60000
SETE P002 (3) D081
MOVL P002 VMAX=100
-
So I have a program working for a vision-guided system and am slowly making tweaks as I go along. I was originally using IMOVs to move into position, but when we had someone in from Motoman he suggested using SFTON and a MOVL, which I have done for the initial X, Y and angle. That seems to have helped with speed and also reduced, if not completely removed, the excessive segment alarms we were getting. He said he wouldn't use IMOVs at all as he doesn't like them. At the minute I am still using IMOV to move Z minus and then Z plus. I'm trying to get rid of those and replace them with a MOVL that moves a set distance on the Z minus and plus, but I can't get it working. I tried using SFTON again with a position variable, but that just didn't work at all. Then I tried a GETS of the current position into a position variable, converted it to robot frame, then set element 3 to -67000 to come down 67 mm, but the robot only moved a couple of mm down.
Does anyone have any ideas on the best way to move down a set distance from the position the camera has given, and then move up a set distance after the robot has gripped the part?
Hopefully this makes sense; looking at the same thing over and over and getting nowhere has started to frazzle my brain a bit.
-
This is over my head but I just wanted to say how rad this project is.
Sounds more complicated than it is. It's very cool though, I must admit. It was awesome when we first got it working, such a great feeling.
-
What kind of waits are in the program? A simple timer, or is there a handshake going on? Can you set up the robot to echo back the data so that the PLC can verify it matches what was sent? Perhaps some read-ahead or race of some kind is going on.
The waits are more of a handshake. Seems the issue was that the R and B axes weren't calibrated properly when the robot left the factory. That was adjusted yesterday, the work home position was reset, and it's been great ever since.
-
Yes, so basically we bring the robot back to the 0,0 point and then use the IMOV to move relative from that position to the position sent from the camera, so it always moves relative from the same position.
We had someone from Yaskawa in yesterday and the B and R axes were out, possibly from the factory, which wasn't helping. He also adjusted the 0,0 position to the camera slightly and the difference is night and day.
-
Let me see if this summation is accurate:
1) You did a pixel-to-world conversion: if the part is at this XY pixel location, the robot is here; if the XY pixel location is there, the robot is there. How many points did you use? Are you using a User Frame to match the camera angle?
2) The data coming into the robot always matches the camera data and PLC data.
3) The robot will go to the correct location, but maybe not on the first try. It may require two or more tries.
This last one is the greatest concern to me. Is another image acquired between the miss and the get? If the robot is reacting to the same image/same data, does the data stay the same on a miss/get, or does the data change? Is the camera being triggered from a robot job that is being CALLed or PSTARTed? Basically, sequentially or in parallel?
Where are you using IMOVs?
Easier to quote so I can see everything and answer easily.
1) I used a grid on the camera and placed points at 500,500 pixels and at 1500,1500 pixels, drove the robot to each point and noted the robot's position on the X and Y axes. I also placed a grid on the AnyFeeder matched to the camera centre, with points 50 mm off the centre on X and Y, and drove to those points after working out the conversion factor to check it was correct.
Not using a user frame at the moment as the robot and camera are parallel and it seemed to be working OK.
2) From everything we have been able to establish, the data coming into the robot always matches the camera and PLC data.
3) Yes, that's the big one that we just can't get our heads round. It will go to the coordinates and miss the part completely, then go back and pick the part up perfectly the second time around.
Another image is taken and new data sent after every single pick, even on a missed one. The data technically changes, but when checking the camera images and so on, the data sent seems to be the same data just sent again, as it has still found that part as the best one to pick.
The camera is being triggered by the PLC, and we have waits in the robot program until the data has been collected by the PLC and sent to the robot. The robot then does a calculation in the program to work out where it needs to go in X, Y and Theta.
IMOVs are being used for the first move, using P000 to move in X, Y and Rz, then an IMOV with P001 to move the Z axis down to the part, and then a final IMOV with P002 to move the robot up on the Z axis clear of the shaker, so it can carry on placing the part with normal MOVJ and MOVL.
-
Good morning, afternoon or evening depending where you are reading this from.
I have been working on a vision-guided robot system for the last few months and we have come extremely far, seeing as this is the first time we have done this sort of thing. I'm using an Omron FH-series camera system with an Omron FZ camera for the vision side of things, the new MotoMini robot from Yaskawa for the robot side, and a Siemens PLC for the communications between them. I'm also using an Omron AnyFeeder to feed the parts to the camera system, and the robot picks from the bed of the AnyFeeder.
So the issue I am having is that there is a good 5-10 mm variation in the pick point the robot goes to. The detection point on the camera system remains very consistent and doesn't seem to have much variation; if the camera were out by the amount the robot is, I would see that point moving around, which it doesn't.
I'll explain as much of the process as I can in the hope someone might be able to offer some help. We have set the robot to the 0,0 point of the camera by working out a set distance in pixels, driving the robot between the corresponding points, working out the difference in mm, and using that to create a conversion factor between pixels and mm. In the camera program we then use that conversion factor to send the distance from 0,0 in mm to the PLC, which we send on to the robot, and we do calculations within the robot to turn the data sent into the mm the camera is reporting. We have checked the camera's mm output against the position variable we are setting in the robot, and the distances in mm match up perfectly every single time, which would imply the data isn't being altered between the two. We are using the X, Y and Theta from the camera system and have checked that it's not consistently out on only the X position, only the Y position or just certain angles; it can be out on any of them and there doesn't seem to be a particular position that is always a problem. The weird part is that the robot will go to a position and completely miss the part, then come back to the same part and pick it up perfectly. The tool point has been set, and we are using IMOVs as the move to get to the position needed. The speed is fairly high as we have a cycle-time goal of under 3.6 seconds for the complete machine cycle, as this is what it was with an operator before we changed to a vision-guided robot.
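In case the conversion bit isn't clear, it boils down to this (the numbers below are made up for illustration, they're not from our system):
conversion factor (mm per pixel) = distance the robot moves between the two reference points (mm) / distance between them in pixels
X in mm = (X in pixels - X of the 0,0 point in pixels) x conversion factor, and the same for Y
For example, if the two reference points are 1000 pixels apart and the robot moves 150 mm between them, the factor is 0.15 mm/pixel, so a part found 200 pixels from 0,0 in X would be sent as 30 mm.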
I don't know if that's enough information for someone to have an idea of what I could be doing wrong, but any advice would be extremely helpful as I'm completely out of ideas at the moment.
-
Sounds like in the math you are coming up with rotational values that are high numbers the robot can't get to. I saw you mention the IMOV instruction. It is a linear move, from wherever the robot currently is, by the amount and direction in the P-variable. IMOVs typically have small values in them for X, Y and Z and 0 in Rx, Ry and Rz. There are exceptions, but if the values are large in Rx, Ry or Rz the robot can't move linearly through that rotation.
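As a bare-bones illustration of that kind of IMOV (the variable number and distance are placeholders), moving straight down 67 mm in robot frame from wherever the robot currently is would look something like:
'zero the variable, then put -67 mm into the Z element only
SUB P005 P005
SETE P005 (3) -67000
IMOV P005 V=100.0 RF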
Could you briefly describe what the part looks like (cylinder, square, etc.), how the end effector engages the part (inside grip, outside grip, vacuum on top, clamp sides, etc.), and whether the camera is giving an offset from a trained position or an absolute position?
I am only trying to move it by small values, which is why I don't understand what's giving it the error. I'm trying to move Ry by +7 or -7 degrees depending on the rotation of the part, and Rx is also only moving 15-20 degrees, I can't remember the exact figure.
The part is a cylindrical elbow with one side of the elbow a larger diameter than the other. I'm gripping the outside of the part on the smaller end, but I need to pick it up as square as I can with the manual angles I have for Rx and Ry.
The camera is giving an absolute position for the centre point of where we want to pick up the part, and the TCP goes directly to that point to grip the part. I have worked around it by having the gripper start in that angled position, then switching the angle over if the part is the mirrored version.
-
What frame of reference are your position variables in? Are the parts all at the same angle every time? If so, a user frame would work. If the parts can be at all different angles (Rx or Ry or both) then it's into 3D.
By frame of reference I take it you mean pulse, base, robot, etc.? If so, then it is in Robot.
The parts lie at the same angle all the time, just the rotation of the part (T) is different, which is already calculated. To be honest we haven't ever used user frames; at the moment we have the X and Y of the camera parallel with the X and Y of the robot, so we haven't had the need to delve into them.
I just find it weird that it won't drive to the angle I input using IMOV and P variables.
-
Hi, we have used Motoman robots at work for many years, but only in a very basic way. We only ever used MOVL and MOVJ with set positions, as everything is in a fixed location.
I've recently been tasked with trialling a vision-guided system. We are taking a picture of a load of parts with an Omron camera system, relaying the X, Y and T to the PLC and then to the robot, and picking the best part. We have it working fine with parts that lie flat, but now I am trying to do it with parts that sit at an angle. I've used the same program, driven to that point, adjusted Ry and Rx to the correct position, taken their values in robot frame, then added them to a P variable. The problem is that when I get down to the correct pick-up height and I want it to correct Ry and Rx, I get an interpolation error no matter what I try.
Does anyone have any ideas on how to make this work correctly with the tool? Any help or info people can offer on vision-guided systems would be absolutely brilliant.
Thank you in advance
PS: using an MH5F robot on an FS100 controller, but if it all works the application will be done with a MotoMini robot.