Camera Calibration

  • Hi!

    I'm working with an RS030N and want to use a camera for visual servoing. I need to calibrate the camera (I don't know yet whether it will be eye-to-hand or eye-in-hand), but I haven't found any working solution to my question. Could anybody help me, or suggest videos/tutorials/papers that explain this procedure?

    TIA

  • What is it you are trying to accomplish exactly?

    Remember we are dealing with industrial robots here and not vision systems.

    As far as calibration is concerned, there are two areas: calibration of the camera itself, and calibration of the camera frame to the robot frame.


    Basic Overview:

    Camera calibration (always brand specific):

    - Establish a communication protocol to use between camera and robot in order to receive transformation data (TCP/IP or UDP for example).

    - The camera's field of view should have a relative coordinate system to operate from.

    - Train the camera system to detect an object, match an object, determine centre of the object, determine the coordinate/orientation within the field of view.

    - This coordinate data is then used as the robot target data (a rough sketch of this camera-side step follows this list).
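
    As a rough illustration of the detection step above, here is a minimal Python/OpenCV sketch. The Otsu threshold and the assumption that the part is the largest contour are simplifications for the example, not a prescribed method:

    Code

    import cv2

    frame = cv2.imread("frame.png")  # one acquired image (example path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Take the largest contour as the trained/matched object.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    obj = max(contours, key=cv2.contourArea)

    # Centre and in-plane orientation within the camera's field of view.
    (cx, cy), (w, h), angle = cv2.minAreaRect(obj)
    print(f"centre=({cx:.1f},{cy:.1f}) px, angle={angle:.1f} deg")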


    Robot calibration:

    - Establish a communication protocol to use between camera and robot in order to receive transformation data (TCP/IP or UDP for example).

    - For the camera's field of view, you would either use relative positioning or simply create a FRAME to match it.

    - Create a simple data routine to wait/request image acquisition.

    - Decode the received data into XYZOAT values (see the data-transfer sketch after this list).

    - Add the received values to the relative or FRAME of the robot to create the target.

    - Check the robot can achieve the decoded location (i.e. create a neutral posture so the robot can cover all aspects of location/orientation variability).

    - Then instruct the robot to move to it.
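
    As a concrete illustration of the data-transfer steps, here is a minimal Python sketch of the vision-PC end. The address, port and comma-separated X,Y,Z,O,A,T string format are assumptions to be agreed between both ends, not a fixed standard; the robot end would use the AS-language TCP/IP functions to receive and decode the same string:

    Code

    import socket

    ROBOT_IP = "192.168.0.2"  # assumed controller address
    ROBOT_PORT = 10000        # assumed port agreed with the robot program

    def send_target(x, y, z, o, a, t):
        """Send one XYZOAT target as a comma-separated ASCII string."""
        msg = f"{x:.2f},{y:.2f},{z:.2f},{o:.2f},{a:.2f},{t:.2f}\n"
        with socket.create_connection((ROBOT_IP, ROBOT_PORT), timeout=5.0) as s:
            s.sendall(msg.encode("ascii"))

    def decode_target(msg):
        """The receiving side decodes the same string back into six floats."""
        x, y, z, o, a, t = (float(v) for v in msg.strip().split(","))
        return x, y, z, o, a, t

    send_target(512.3, -120.8, 35.0, 0.0, 90.0, 0.0)  # example values only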


    The main areas to consider are usually:

    - Ambient light control to avoid exposure issues and incorrect detection.

    - Accurately 'trained' detection methods.

    - Speed of detection, data transfer and data decoding.

    - An accurately defined relative position/FRAME relating the robot to the camera field.


    I have never set a camera up from scratch, but I have interacted with them and been involved with robot programming and adjusting the robot interface; the principle is fairly straightforward from a robot perspective.


    Kawasaki has its own vision software: K-VFinder, K-VAssist and K-VStereo.

    I've never used them, but I doubt they are free; you would need to contact your nearest distributor for prices etc.

    However, you could consider using alternative methods, as all you are doing from a robot perspective is receiving target data and applying it to a predetermined area of motion in order to execute motion to it.

  • Thanks for your answer.


    The task is to pick a bent iron bar (with one or more bends) off a vertical plane, where the background is not well defined (gears and devices, shadows, reliefs).


    I use a Sapera GigE Genie Nano.

    I've written all the software to detect the piece and transfer the coordinates to the robot over TCP/IP, and it works fine.

    At first I tested on a white table, and to convert the coordinates from pixels to mm I just used the known measurements of the piece in a frame from a fixed camera (pixelsPerMillimeters = f.size().width / widthPiece). All was good, and the robot moved to the right coordinates with the right orientation of the end-effector.
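
    (In Python terms that scaling amounts to something like the sketch below; the numbers are placeholders, and it assumes the camera looks straight at the plane so a single scale factor applies everywhere:)

    Code

    # Pixel-to-mm scaling for a fixed camera looking straight at the plane.
    piece_width_px = 640    # measured width of the piece in the image (placeholder)
    width_piece_mm = 400.0  # known real width of the piece in mm (placeholder)

    pixels_per_mm = piece_width_px / width_piece_mm

    def px_to_mm(u_px, v_px):
        """Convert a detected pixel coordinate to millimetres."""
        return u_px / pixels_per_mm, v_px / pixels_per_mm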

    Now I want to generalise everything and test on a vertical planar surface.

    I've built a more effective detection algorithm using an R-CNN (with just 100 images in training, so it could certainly be improved).


    - Detection works but is slow; any hints to improve it?


    - Studying books and reading some algorithms on the web, I found the calibration method that uses a printed chessboard.

    Now I want to understand more about this and how it works.


    - I fitted an IR light (Smart Vision Lights L300) behind the camera, but it could surely be improved with a different position and settings.

    Hints?


    Thank you

  • Quote

    Remember we are dealing with industrial robots here and not vision systems.

    Not being a vision system expert, I can only offer generalised comments; this board is related to the Kawasaki industrial robot family, not vision systems.


    It sounds like you have all the components in place already as far as the Kawasaki is concerned.

    Your results are then purely dependent on the camera, software, calibration and training methods, and you are just looking for enhancements.


    Results can be improved in some scenarios for image acquisition when the camera is mounted directly to the manipulator, but not in all cases.

    It depends on the application.

    - External influences are then reduced/minimised and can therefore further enhance results.


    You're referencing a requirement for further investigation into vision as opposed to robotics.

    I would therefore suggest some further searches in our general robots section and post a specific question in there.


    Remember the robot is receiving a target location ONLY... the accuracy of these values is determined by the camera, software and training methods used.

  • Hey, I don't know if you found a solution to speed everything up, but I am planning on doing something similar soon with a Kawasaki FS30L. I have little experience with converting information from camera to robot, but for detection I can recommend a couple of things.

    Have you checked out TensorFlow's object_detection GitHub?

    https://github.com/tensorflow/…research/object_detection

    It uses frozen graphs of pretrained networks, allowing faster processing. However, when you increase speed you sacrifice accuracy; TensorFlow has several different networks to choose from, so you can select one that fits your application:

    https://github.com/tensorflow/…oc/detection_model_zoo.md

    I would then apply transfer learning to whichever network you choose, tailoring it to the set of items you wish to classify. Save the trained network and 'freeze' it using TensorFlow to call back during your script. This should improve speed during computation.
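
    For example, loading and running a frozen graph looks roughly like this (a sketch using the TF1-style API; the file name and tensor names follow the object_detection export convention, so check them against your own export):

    Code

    import numpy as np
    import tensorflow as tf

    # Parse the exported frozen graph into a GraphDef.
    graph_def = tf.compat.v1.GraphDef()
    with tf.compat.v1.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")

    image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame

    with tf.compat.v1.Session(graph=graph) as sess:
        boxes, scores = sess.run(
            ["detection_boxes:0", "detection_scores:0"],
            feed_dict={"image_tensor:0": image[None, ...]})  # batch of one
    print(boxes.shape, scores.shape)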

    For other trainable networks, check out YOLO and Darknet on GitHub; both are very helpful in guiding you through creating your own classifier.


    Alternatively, if you are using this detection just for rods, I would look into simple line detection using OpenCV or other image processing.
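
    A sketch of that idea with OpenCV's probabilistic Hough transform (the Canny and Hough parameters here are starting points to tune, not recommended values):

    Code

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # example path
    edges = cv2.Canny(img, 50, 150)

    # Returns endpoints (x1, y1, x2, y2) of detected line segments.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # rod orientation
            print((x1, y1), (x2, y2), round(angle, 1))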


    As far as the checkerboard goes, this is used to calibrate cameras. If you are using eye-to-hand this is crucial, as it will eliminate the "bowl" or "fish-eye" effect the camera lens has around the edge of the image, giving you a more accurate result. I like the OpenCV guide, as it explains the process for both Python and C.

    https://docs.opencv.org/2.4/do…n/camera_calibration.html
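
    Condensed, the chessboard procedure from that guide looks like this (a sketch; the 9x6 inner-corner pattern, square size and image folder are assumptions for the example, and it assumes the folder contains usable board shots):

    Code

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)   # inner corners of the printed chessboard
    square_mm = 25.0   # printed square size, sets the output scale

    # 3D corner positions in the board's own plane (Z = 0).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = (np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
                   * square_mm)

    obj_points, img_points = [], []
    for path in glob.glob("calib_images/*.png"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix K and lens distortion coefficients from all views.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    fixed = cv2.undistort(cv2.imread(path), K, dist)  # removes the fish-eye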

    If you have access to MATLAB, there is an add-on for calibration that uses a GUI and is much more user-friendly if you have no idea what is going on.

    https://www.mathworks.com/help…amera-calibrator-app.html


    Hope this was helpful and not a pointless rant. Also, let me know if you find any other alternatives!

  • Welcome to the forum... :beerchug:

    I think it is useful for sure and many thanks for adding to the discussion, I'll certainly have a read into this too for my own learning.

    However, try and maintain the discussion towards the Kawasaki robot and not just turn this into a vision system thread.

    If you would like, I could transfer this thread to the Robot Vision and Vision Products board; it may yield some further discussion and information. Let me know and I'll transfer it over... :top:

  • I'm sorry, that was actually my first post, so I wasn't really sure how the threads worked, or whether I should create my own thread, but it builds off visual servoing with a Kawasaki arm.

    So back to Kawasaki robots!


    Like I mentioned earlier, I will be attempting something very similar to this on an FS30L using a D+ controller; it is the only thing I had accessible to work with. I created a callback function that allows me to update the robot's pose with respect to X, Y, Z, O, A, T while searching for a new location to be defined by the image classification. I don't know if there is an easier way to do this with the D+ controller, and I would love to know if anyone else has managed to create an online programming method in a different way. In the past I have been using LMOVE, as my understanding is that LMOVE is with respect to the robot's base frame. However, suppose I wanted to add a fixed tool tip and start using the FLMOVE command instead. Does that mean the robot will now traverse with respect to the fixed tool's axes, or simply still move with respect to base-frame coordinates with the tool tip at the centre of the defined pose?


    The overall scope of the project is to play a game of billiards against the robotic arm; the fixed tool is a linear motion device I have made to simulate a pool stroke. Ideally I would like to classify the balls in the image, determine the shot, and then command the robot to LMOVE to the centre location of the cue ball with respect to the tool tip, so that no matter what, after actuation the tool tip will strike directly through the middle of the cue ball. I am just getting confused with defining the tool tip and the associated commands to use, as the FS30L has a rather strange home position: joint angles of [0 0 0 0 0 0] place the robot into a singularity in a completely vertical position, which we know the Kawasaki doesn't like at all. Do you happen to be familiar with this? I have all the PDFs associated with the robot and have read them numerous times; there is something I am just not grasping.

  • No problem at all; if this was directed more towards vision, then it may get better results in the Robot Vision and Vision Products board, where it is generically better suited. That's what I was suggesting.


    Updating the robot's pose can be done in many ways; TCP/IP, I believe, is the more favoured route for location data transfer.

    Since you haven't mentioned how you're doing it, it's hard to make any suggestions.


    Regarding LMOVE, I must add to what you've written.

    - LMOVE is just a Linear Move where the robot translates the current TCP from A to B.

    - In its totality, yes, it is relative to the BASE.

    - However, when used as a compound transform, you make it relative to another location (see the sketch below).

    - Overall, the target produced is relative to BASE.
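
    To show the "relative to another location" idea numerically, here is a small Python/numpy sketch composing two poses as 4x4 homogeneous transforms. The rotation is a simplified single-axis one just to illustrate the composition; it is not a full XYZOAT implementation, and the values are made up:

    Code

    import numpy as np

    def pose(x, y, z, yaw_deg):
        """Homogeneous transform: rotation about Z plus a translation."""
        c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
        T[:3, 3] = [x, y, z]
        return T

    base_T_frame = pose(500, 200, 0, 30)     # a FRAME taught relative to BASE
    frame_T_target = pose(12.5, -3.0, 0, 5)  # camera offset, relative to FRAME

    # Compound transform: the final target is still expressed in BASE.
    base_T_target = base_T_frame @ frame_T_target
    print(base_T_target[:3, 3])  # XYZ of the target in BASE coordinates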


    Regarding Fixed Tool.

    - Fixed Tool is normally remote from the Robot.

    - This will be set up relative to the BASE (it has to be, in order for the robot to know where it is).

    - ALL 'F' motion instructions will be relative to the Fixed Tool vectors you set.

    - You also need to be aware that the robot is moving the Tool TCP relative to this as well.

    - You can still use any motion instruction as before; you don't have to use only 'F' motions.


    Regarding HOME position:

    - In Kawasaki you have two: HOME1 (HOME) and HOME2.

    - These can be freely modified to suit.

    - SETHOME and SET2HOME respectively or via Aux Func 04.


    No robot likes singularities, not just Kawasaki; your own arms hate them too.

    You can convert transformations to joint angles and vice versa and use UWRIST/DWRIST commands to assist with singularities too.


    Just from what you're saying about LMOVEs, I believe you may have limited yourself in your thinking.

    Read up more in the AS Language Manual concerning motion and transform instructions.

    LAPPRO, LDEPART, ALIGN and TRANS specifically, and possibly FRAME.

    Also, the POINT command and differences between Transformation and Joint Displacement Values.

    I am sure you will benefit from utilizing all the above in your application.

  • Hi guys, I am interested to find out how I can get accurate intrinsic parameters for a zoom-lens camera. I am thinking that, depending on the FOV angle, the matrix should be different.

    Short of doing a calibration for every FOV, I wonder if anyone here could point me in the right direction? :smiling_face:

    Thank you all!

  • Please post your question in a relevant board; this board is for Kawasaki robots, not camera settings.

    Robots receive data from the camera; the accuracy of these values is determined by the camera/software settings, not by the robot.

  • Hi, and sorry to bother you again! I'm back to work with the robot and now have more details.

    The camera is fixed and not on-board.

    Camera calibration with the chessboard method is done; it works well and the distortion has disappeared.

    Now what remains is to transform the camera coordinates into robot coordinates, so I need the rotation matrices. I've studied this in books, but how do I do it in practice? My rough understanding so far is sketched below.
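
    For reference, what I understand from the books is: record several matched point pairs (the same physical features expressed in camera millimetres and in robot millimetres, e.g. by jogging the TCP to chessboard corners), then fit the rotation and translation. A sketch using the SVD/Kabsch method, with made-up example points:

    Code

    import numpy as np

    def rigid_transform(cam_pts, rob_pts):
        """Best-fit R, t with rob ~= R @ cam + t (Kabsch algorithm)."""
        cam = np.asarray(cam_pts, float)
        rob = np.asarray(rob_pts, float)
        cam_c, rob_c = cam.mean(axis=0), rob.mean(axis=0)
        H = (cam - cam_c).T @ (rob - rob_c)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, rob_c - R @ cam_c

    # At least 3 non-collinear pairs; these numbers are placeholders.
    cam = [[0, 0, 500], [100, 0, 500], [0, 100, 500], [100, 100, 480]]
    rob = [[850, -120, 40], [850, -20, 42], [750, -118, 40], [748, -20, 60]]
    R, t = rigid_transform(cam, rob)
    print(R @ np.array([50, 50, 490]) + t)    # new camera point, robot coords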

    TIA

  • You'll probably have to do some searching around for those; I personally have never gone into that side of things (never had a need to).

    You may well benefit from firing an email off to KHI for that and see what they come back with.

    Or perhaps contact someone from RoboDK; I hear they have Kawasaki modelling in their simulation offering and may give you a heads-up for that information.

  • Quote

    You'll probably have to do some searching around for those; I personally have never gone into that side of things (never had a need to).

    You may well benefit from firing an email off to KHI for that and see what they come back with.

    Or perhaps contact someone from RoboDK; I hear they have Kawasaki modelling in their simulation offering and may give you a heads-up for that information.

    Do you have a specific mail address where I can make contact?

    TIA

  • There is a RoboDK sub forum here on Robot Forum that one of the guys from RoboDK monitors:

    RoboDK

    It has just recently been launched, so I don't know how active it is at present, but it's definitely worth posting a question there.


    As far as KHI is concerned, go here and locate your regional office:

    http://global.kawasaki.com/en/…noselect&business=gl-b100
