Posts by Nation

    Here is my configuration for connecting to the first simulated robot in roboguide:

    I'm not in control of the touch-up of the path, so when the people responsible for the touch-up finish, I want to verify that my chosen CVWAIT positions are still valid.

    Also, the robot's path is overloaded. I need every mm of upstream real estate I can grab in order not to fall behind.

    In addition to all that, the entire path is shifted by a vision system. A point at a given conveyor position may be reachable with one offset and not another.

    Since I got it working I was able to make a graph of the upstream reachability of one of the paths:

    I got it to work.

    INRANGE does function as a reachability test, but with how Kawasakis do their line tracking point teaching, it is pretty counterintuitive. From my testing, INRANGE checks reachability of the XYZOAT position of the pose, and then just checks whether the external axis is within its motion limits.

    Since I was originally running INRANGE on taught points, of course every point came back reachable. The key is to shift the XYZOAT of the point along the conveyor tracking vector by the delta between where the point was taught and where the conveyor currently is, then run the INRANGE test.
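    A minimal sketch of that shift, in Python for illustration (the function and parameter names are my own; on the controller this would be AS transform arithmetic with the value pulled from CVPOS):

```python
def shift_along_conveyor(xyzoat, track_vector, taught_cv, current_cv):
    """Shift a taught pose along the conveyor tracking vector.

    xyzoat       -- (x, y, z, o, a, t) as taught
    track_vector -- unit vector of conveyor travel in base coordinates
    taught_cv    -- conveyor value when the point was taught
    current_cv   -- conveyor value now (e.g. from CVPOS)
    Returns the pose to feed to an INRANGE-style reachability test.
    """
    delta = current_cv - taught_cv  # how far the part has moved since teaching
    x, y, z, o, a, t = xyzoat
    # Translate only the position; orientation rides along unchanged.
    return (x + delta * track_vector[0],
            y + delta * track_vector[1],
            z + delta * track_vector[2],
            o, a, t)
```

    The shifted pose, not the taught one, is what the reachability test should see.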

    What is it you are actually looking at being reachable and at what point in the process?

    When you start tracking, at what point are you looking whether it is reachable or not?

    It all depends on how fast the reachability check ends up being. Ideally I would like to do the check repeatedly while the conveyor is moving, before moving to a point, and then release the robot to move when the point becomes reachable. Maybe with some padding so the robot doesn't get into an elbow-lock position.
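    The real-time version would be a polling loop along these lines (Python sketch; `is_reachable` stands in for the shifted-INRANGE test, `read_cv` for CVPOS, and the way padding is applied here is my own assumption):

```python
import time

def wait_until_reachable(point, is_reachable, read_cv, padding=50.0, poll_s=0.01):
    """Block until `point` becomes reachable, with some conveyor padding.

    is_reachable(point, cv) -- hypothetical hook: the shifted-pose test
    read_cv()               -- hypothetical hook: current conveyor value
    padding                 -- also require reachability a little further
                               along the conveyor, as a margin against
                               releasing into a near-elbow-lock pose
    """
    while True:
        cv = read_cv()
        # Require the point reachable both now and `padding` further along.
        if is_reachable(point, cv) and is_reachable(point, cv + padding):
            return cv  # release the robot to move
        time.sleep(poll_s)
```

    The loop returns the conveyor value at release, which could be logged to compare against an offline prediction.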

    If I can't do it in real time, I would like to develop an offline check that outputs the range of conveyor values for which a specific point is reachable.
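    The offline variant could simply sweep conveyor values and record where the test passes. A Python sketch, with `is_reachable` again standing in for the shifted-pose test (names and signature are my own):

```python
def reachable_window(point, is_reachable, cv_min, cv_max, step=1.0):
    """Scan conveyor values from cv_min to cv_max in increments of `step`.

    Returns (first, last) conveyor values at which `point` tested reachable,
    or None if it never did. Resolution is limited by `step`.
    """
    first = last = None
    cv = cv_min
    while cv <= cv_max:
        if is_reachable(point, cv):
            if first is None:
                first = cv  # entering the reachable window
            last = cv       # keep extending while still reachable
        cv += step
    return None if first is None else (first, last)
```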

    Is this for outside of the robot envelope (error), or within a certain zone to cancel the tracking of it?

    For outside of robot envelope.

    Have you looked at CVPOS command?

    Yes, I will need to use that command to pull out the current conveyor value for INRANGE testing.

    Have you looked at ULIMIT and LLIMIT commands?

    Haven't looked at those, but after checking them in the AS manual, I'm not sure they will help that much. I'd like to keep the joint limits of the robot as open as possible.

    Have you looked at DISTANCE and DEST commands?

    DISTANCE will be used if I have to go with the IK solver idea. I don't plan on using DEST as I only want to do the check before moving to the next point.

    Maybe I can use DISTANCE to check the current position against where the next point is. I would like to check orientation too, though.

    Anyone have any advice on testing whether a point is reachable? I want to verify reachability before attempting to run to the point.

    The kicker is that this is on a line tracking cell. The INRANGE function seems not to work on this type of robot; after discussing with Kawasaki tech support, it only checks whether the passed-in point is within joint bounds, not whether it is actually reachable. It was returning 64 every time, but once I opened up the bounds on J7 (the conveyor tracking axis), it just returned 0 for every point tested, even for points that were obviously unreachable.

    My next attempt was to convert a pose to joint values and use the error catch of ONE if the conversion failed. Either I didn't set it up right, or something else is wrong: the controller faults the moment I attempt the conversion, and the ONE program is never entered/called.

    My next idea is to write an IK solver in AS, but I would prefer not to do that.

    I reverse engineered what the sFc section does a while ago.

    My comments:

    Your tool may be over the payload rating of the robot. That, or the payload data was never entered into the robot, or the data is incorrect.

    I want to write a program on the robot controller that can set these robot registers itself. I need this to simulate my robot - the program will update these positions without actually moving the robot. Is it possible? What do I need to do?

    If this is something that doesn't need to happen all the time, you could turn off motion in the test menu. I believe CURJPOS will update to where the robot "believes" it is, not where it actually is.

    I tried updating these values manually from the Kepware client, but they didn't change. So I assumed it was impossible. Or maybe I need to enable something on the controller for this?

    They may have updated for a scan on the robot side, but then your background program probably immediately overwrote it.

    I had the same thing happen, and I did extensive research into it. In short, it is a custom connector made by Hirose for Fanuc. They will not sell it to anyone but Fanuc. I asked them.

    With that said, it looks like it is the same as this connector, with one of the keys rotated by 45 degrees. If I was in a jam, I would buy that connector, and just file off the incompatible key.

    The robot has no control over the conveyor. We are mounting an encoder on a friction wheel, and installing that to monitor conveyor position.

    The plan is to have the car break a photo eye as it enters the cell. I'll either have the motion program waiting on that bit, or a background program watching for it. It will depend on the spacing of the cars in the plant. Looks like CVSET handles this.

    After doing some testing in K-Roset, it appears that the method mentioned in my first post works.

    Now to figure out how to convert a tracking path taught in world into the vision frame.


    Figured it out. Here is how (assuming no vision offset):

    visionpnt = -(vis_frame)+worldpnt
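    In matrix terms, the AS expression -(vis_frame)+worldpnt composes the inverse of the frame with the world point. A Python sketch using translation-only homogeneous transforms (rotations omitted to keep it short; all numbers are illustrative, not from the cell):

```python
import numpy as np

def frame(x, y, z):
    """Pure-translation 4x4 homogeneous transform (rotation left as identity)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def world_to_vision(vis_frame, worldpnt):
    """AS's -(vis_frame)+worldpnt: premultiply by the frame's inverse."""
    return np.linalg.inv(vis_frame) @ worldpnt
```

    With the vision frame 4000 mm upstream, a world point at x=4100 comes out at x=100 in the vision frame, which matches the intuition of re-expressing the point relative to the frame.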

    I've been signed up for a cell that will be doing conveyor tracking on a car body that will have a vision offset applied. I'm trying to wrap my head around how to apply the vision offset to the points, and I just want to do a sanity check to see if I'm on the right track.

    Here is what I am thinking for a typical move:

    CVLMOVE vis_frame+vis_offs+point[1]

    The robot is laser shot into the vision frame, which is upstream about 4 meters, and that is represented by vis_frame.

    The vision system will produce a delta from its master part, that is vis_offs.

    Point[1] is where the robot will be going, while tracking.

    The question is, will this work, or do I have to add an additional transform to account for the movement of the conveyor?
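    Whether tracking adds the conveyor delta internally is exactly the open question, so the sketch below deliberately leaves that out and only shows what the composition vis_frame+vis_offs+point[1] does on its own (Python, translation-only transforms for clarity; AS's + on transform values behaves like matrix composition; all numbers are illustrative):

```python
import numpy as np

def frame(x, y, z):
    """Pure-translation 4x4 homogeneous transform (rotation left as identity)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def compose(*transforms):
    """Left-to-right composition, mirroring AS's `+` on transform values."""
    T = np.eye(4)
    for t in transforms:
        T = T @ t
    return T

vis_frame = frame(4000.0, 0.0, 0.0)  # vision frame ~4 m upstream of the robot
vis_offs  = frame(5.0, -3.0, 2.0)    # delta from the vision system's master part
point1    = frame(100.0, 50.0, 0.0)  # taught point, relative to the vision frame

target = compose(vis_frame, vis_offs, point1)
```

    With rotations included, vis_offs would also rotate point[1], which is why the order of composition matters.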