DOBOT with vision sensor? How do I get the coordinates from the camera?

  • Miller P.
  • December 17, 2024 at 3:37 AM
  • Thread is Unresolved
  • Miller P.
    Posts
    4
    • December 17, 2024 at 3:37 AM
    • #1

    Hi,

    I am a Robotics and Automation student at RMUTL (Rajamangala University of Technology Lanna), and I'm currently working on a project that uses a DOBOT with a vision sensor to pick and place small objects.

    The problem is, I don't really have any experience with getting coordinates out of a vision sensor.

    I know the basics of vision sensors and object detection, and I can get some digital output out of one (like using the camera to trigger an input).

    But I have no idea how to get the coordinates from the camera itself. I've tried looking it up online, but I still couldn't wrap my head around it.

    So I would like to ask for help, if there's anyone who could explain how to do it in general (down to the monkey level).

    Thank you in advance.

    P.S. Sorry for my bad English :loudly_crying_face:

  • Nation December 17, 2024 at 4:02 AM

    Approved the thread.
  • SkyeFire
    Reactions Received
    1,060
    Trophies
    12
    Posts
    9,456
    • December 17, 2024 at 8:41 PM
    • #2

    Is the issue getting the coordinates, or applying them to robot motions? What vision sensor are you using?

    Getting the coordinates from the sensor depends entirely on the brand and model of the vision sensor. Applying them to robot motion depends on the robot type, and the relationship between the sensor and robot (for example, if the sensor is carried by the robot or fixed).

    A typical 2D vision sensor should provide 3 coordinates (X, Y, and Rz) in the sensor's internal coordinate frame. These will typically be in pixel counts. To make use of this data, you need two things: a conversion between pixels and millimeters (which depends on the sensor, lens, and working distance), and the geometric relationship between the camera and the robot.
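    To make the pixel-to-mm step concrete, here's a minimal sketch. The 0.25 mm-per-pixel scale factor is made up; in practice you measure it by imaging a target of known size at your actual working distance:

```python
# Convert a camera detection from pixel counts to millimeters.
# MM_PER_PIXEL is a hypothetical value -- measure yours by imaging
# a target of known size at the actual working distance.
MM_PER_PIXEL = 0.25

def pixels_to_mm(px, py):
    """Scale pixel coordinates into millimeters in the camera frame."""
    return px * MM_PER_PIXEL, py * MM_PER_PIXEL

print(pixels_to_mm(400, 120))  # (100.0, 30.0)
```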

    To apply the vision coordinates to robot motions, you need the geometric relationship between the camera and the robot's reference frame. For example, if the camera reference frame is rotated 45 degrees around Z relative to the robot reference frame, you will need sine and cosine functions to convert the camera X and Y values into robot offsets. Then you will need to offset either the robot's base frame (for a fixed camera) or tool frame (for a robot-carried camera).
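    The sine/cosine step above is a standard 2D rotation. A small sketch (the angle and offsets are illustrative; your camera-to-robot angle comes from calibration):

```python
import math

def camera_to_robot(x_cam, y_cam, theta_deg):
    """Rotate camera-frame offsets (in mm) into the robot frame.

    theta_deg is the camera frame's rotation about Z relative to the
    robot reference frame (45 degrees in the example above).
    """
    t = math.radians(theta_deg)
    x_rob = x_cam * math.cos(t) - y_cam * math.sin(t)
    y_rob = x_cam * math.sin(t) + y_cam * math.cos(t)
    return x_rob, y_rob

# A point 10 mm along the camera's X axis, with the camera frame
# rotated 90 degrees about Z, lands on the robot's Y axis:
x, y = camera_to_robot(10.0, 0.0, 90.0)
```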

    For this to work well, your camera's Z axis should be perpendicular to the work surface and parallel to the Z axis of the robot reference frame. Otherwise, the camera's accuracy suffers badly, and the math for applying the offset conversions becomes very complicated.

  • Miller P.
    • December 18, 2024 at 4:57 AM
    • #3

    First of all, thank you for answering, SkyeFire

    And to answer your question: yes, to both of them.

    My knowledge of vision sensors is only surface-level; getting digital I/O out of one is all I can do at the moment, let alone sending coordinate data to the robot itself.

    As for the brand of the camera, I haven't decided yet (because I'm not sure which one to choose), but I do have this picture to illustrate my concept.

    My concept is to use a vision sensor to detect objects on the tray (really small objects, roughly the size of a resistor), have it remember "where" each object is, and send an array of coordinate data to the robot so it can pick the objects up and place them in the jig (like a sorting robot of sorts).

    Accuracy has been my concern (the objects the robot is intended to pick up are really small; they're like resistors you can only touch by the legs), but right now, making the robot go to an object's coordinates is my biggest hiccup.

    I hope you could give me some advice, thank you!

    Images

    • ads.jpg
      • 54.48 kB
      • 1,152 × 648
      • 1
  • SkyeFire
    • December 18, 2024 at 6:35 PM
    • #4

    Well, getting the coordinates out of the camera will depend entirely on what sensor you buy. Different brands and models all support different methods. Also, I don't know anything about how the Dobot robot is driven.

    Your sketch shows a fixed camera. You'll need to determine your XY FOV, and how high the camera has to be to allow the robot to pass under. That, plus your required pixel resolution, will drive your sensor resolution and lens selection.

    With a fixed sensor, you'll want to use the Base Offset method. I don't know how Dobot does this, but typically with most robots you can create a reference frame on the "tray" by touching three specific points. Locating the same three points in the sensor can bring the robot and sensor reference frames into alignment. Usually a checkerboard pattern of some type is used for this.
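    As a sketch of what that three-point alignment computes, here is a small pure-Python fit of an affine map (rotation, scale, and translation) from three camera points to the same three points in the robot frame. The point values are invented for illustration; real calibration tools solve the same kind of system from checkerboard corners:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(M, b):
    """Solve the 3x3 linear system M x = b by Cramer's rule."""
    d = det3(M)
    out = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = b[r]
        out.append(det3(Mc) / d)
    return out

def fit_affine(cam_pts, rob_pts):
    """Fit x_r = a*x_c + b*y_c + tx and y_r = c*x_c + d*y_c + ty
    from three non-collinear point correspondences, and return a
    function mapping camera coordinates to robot coordinates."""
    M = [[x, y, 1.0] for (x, y) in cam_pts]
    a, b, tx = solve3(M, [p[0] for p in rob_pts])
    c, d, ty = solve3(M, [p[1] for p in rob_pts])
    return lambda x, y: (a * x + b * y + tx, c * x + d * y + ty)

# Example: a camera frame rotated 90 degrees and shifted (10, 20) mm
# relative to the robot frame, recovered from three touched points.
cam_to_rob = fit_affine([(0, 0), (1, 0), (0, 1)],
                        [(10, 20), (10, 21), (9, 20)])
```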

    I would suggest getting your feet wet with a cheap webcam and OpenCV.
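    For a feel of what a vision library does under the hood, here is a toy, dependency-free version of the threshold-and-centroid step, run on a made-up 5x5 grayscale image (OpenCV's cv2.threshold and cv2.moments do this far more robustly):

```python
def find_centroid(image, threshold=128):
    """Return the (x, y) centroid of pixels brighter than threshold,
    or None if no pixel passes. image is a list of rows of 0-255 values."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if val > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

# A bright 2x2 "object" in the upper-left of a dark 5x5 image:
img = [[0] * 5 for _ in range(5)]
img[1][1] = img[1][2] = img[2][1] = img[2][2] = 255
print(find_centroid(img))  # (1.5, 1.5)
```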

  • Miller P.
    • December 19, 2024 at 2:14 AM
    • #5

    Thank you for the reply!

    I see. I've played with the Dobot software a bit and found that you can change the data in the software itself, and thus change the base's coordinates depending on a string input.

    The main goal is to detect each individual object placed in the picking area (far left of the sketch), with the objects placed randomly.

    But I think I can poke around and see what I can do. If I understand this correctly, some (if not most) cameras can communicate via the Modbus protocol, which means I can get the x, y coordinates of a detected object and send them to the robot.
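    One note: Modbus carries values as 16-bit registers rather than strings, so a coordinate usually arrives as one or two registers that you decode and scale. A hedged sketch of the decoding step; the register order and the 0.01 mm-per-count scale are assumptions, and the camera's manual defines the real mapping:

```python
import struct

def registers_to_mm(high, low, scale=0.01):
    """Combine two 16-bit Modbus holding registers (big-endian, high
    word first -- an assumption, check the camera manual) into a
    signed 32-bit count, then scale to millimeters."""
    raw = struct.unpack(">i", struct.pack(">HH", high, low))[0]
    return raw * scale

print(registers_to_mm(0x0000, 0x2710))  # 10000 counts -> 100.0 mm
print(registers_to_mm(0xFFFF, 0xFFFF))  # -1 count -> -0.01 mm
```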

    My final goal is to detect AND pick up multiple objects with one snap of the camera, thus eliminating the wait time for the camera to take a picture of each object I need to pick. But I guess for now I should focus on picking them up one at a time, to at least get a better understanding of how this works.

    Also, are Keyence cameras any good for this kind of work? I didn't plan to get one, but I looked around online and they keep popping up.

  • SkyeFire
    • December 19, 2024 at 3:32 PM
    • #6

    Keyence makes a lot of different cameras. They're generally pretty good, but each type of camera is optimized for different types of jobs.

Tags

  • education
  • student
  • VISION SENSOR
  • DOBOT