Are there high precision sensors for height for robot ?

  • Hello,


    Does anyone know whether there are Wi-Fi or radio transmitters/receivers with which, as in the picture, one can measure the height difference of the receiver relative to the transmitter (∆h) to millimetre precision or better? I know there are GPS gauges and sensors that derive height from temperature and pressure, but those are not millimetre-precise (the best pressure/temperature sensors are accurate to about 8 cm). I need this to work indoors.



  • Aleksandari

    Changed the title of the thread from “Sensors height ∆h for robot ?” to “Are there high precision sensors for height for robot ?”.
  • 0.1 mm is still pushing things pretty hard. You're probably looking at optical metrology, something like a laser tracker.


    Something like Nikon's Indoor GPS would be even more expensive.


    If you can afford the physical obstructions and speed limits, you could perhaps use a couple of draw-wire encoders like these and do some trigonometry to generate a height value.
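    A minimal sketch of that trigonometry (the anchor placement, units, and function name are my assumptions, not a vendor API): with two encoders anchored at (0, 0) and (baseline, 0) on the same level, and both wires attached to the same point on the robot, the point is the lower intersection of two circles.

```python
import math

def height_from_wires(baseline, r1, r2):
    """Locate a point from two draw-wire lengths (all units mm).

    Anchors are assumed at (0, 0) and (baseline, 0) on the same
    horizontal level; r1 and r2 are the measured wire lengths.
    Returns (x, drop), where drop is the vertical distance of the
    point below the anchor line (the lower circle intersection).
    """
    x = (baseline**2 + r1**2 - r2**2) / (2.0 * baseline)
    drop_sq = r1**2 - x**2
    if drop_sq < 0:
        raise ValueError("inconsistent wire lengths")
    return x, math.sqrt(drop_sq)

# Anchors 2000 mm apart, point at x = 800 mm, 600 mm below the line:
x, drop = height_from_wires(2000.0, math.hypot(800.0, 600.0),
                            math.hypot(1200.0, 600.0))
print(round(x, 1), round(drop, 1))  # 800.0 600.0
```

    Tracking the change in `drop` between samples would give the ∆h the draw-wire approach can provide.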

  • Maybe a camera could determine the difference in height?


    I need this for a robot (a robot arm on a moving platform) that travels along the wall of a room. The idea is to mount a camera on it that records the wall in real time during the movement, and to obtain precise parameters of the robot's motion from that footage plus a distance sensor: x, y, z coordinates, the current position, distance travelled, total distance travelled, and so on, relative to some starting reference point.


    In theory, for example: when we sit in a car and look out the window while the car is moving, our eyes see all the exterior objects "moving" in the direction opposite to the car's motion. So if we place a camera on the robot's moving platform, facing the wall, the pixels in the camera image will also move in the opposite direction as the robot moves; if the robot goes uphill, the pixels will move obliquely downward. I think that with a good camera and the appropriate program, those pixels could be tracked to derive the robot's spatial orientation from the speed and direction of the pixel motion.


    This is possible in theory, but how does it work in practice? Is it explained anywhere on the internet which camera is best for this, and how to program it?
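    What you describe is essentially what computer vision calls optical flow (or, when used to reconstruct the robot's path, visual odometry); those are the terms to search for. As a toy illustration of the core idea (the synthetic "wall" data and all names here are mine), a brute-force search for the pixel shift between two frames:

```python
import random

def estimate_shift(frame1, frame2, max_shift=5):
    """Brute-force estimate of the integer (dx, dy) pixel shift that
    best maps frame1 onto frame2, by minimising the mean squared
    difference over the overlapping region. Frames are 2D lists."""
    h, w = len(frame1), len(frame1[0])
    best, best_ssd = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        d = frame1[y][x] - frame2[y2][x2]
                        ssd += d * d
                        n += 1
            if ssd / n < best_ssd:
                best_ssd, best = ssd / n, (dx, dy)
    return best

# Synthetic "wall texture", and a copy shifted 3 px right and 1 px down:
random.seed(0)
wall = [[random.random() for _ in range(20)] for _ in range(20)]
moved = [[wall[y - 1][x - 3] if 0 <= y - 1 < 20 and 0 <= x - 3 < 20 else 0.0
          for x in range(20)] for y in range(20)]
print(estimate_shift(wall, moved))  # (3, 1)
```

    Real implementations (e.g. in OpenCV) track many feature points with sub-pixel precision and run far faster, but the principle is the same: the image shift, combined with the camera-to-wall distance, gives the platform motion.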

  • I’m trying to design a robotic arm with a moving platform for technical drawing on a wall, and for that I need an appropriate sensor assembly, so that the robotic arm does not meander because of floor unevenness.


    For example, I want to draw straight lines like in the picture:




    so that unevenness in the floor doesn't produce this kind of drawing on the wall:


  • Okay, these are the details we needed. The key problem here is that the robot knows nothing about the floor, or its relationship to the floor or wall. Usually, this is overcome by deliberately building the robot platform to be as fixed and horizontal as possible.


    Does the robot have to draw while the platform is moved? Or could the drawing process be adapted to draw a limited area, then pause while the platform is moved, re-calibrate the robot to the new platform position, then draw the next section?


    Something like the draw-wire encoders might work for this. The Maslow CNC does this in reverse -- it uses motors located at the upper corners of the work area to control the location of the cutting tool with surprising precision. For a flat plane, two precise distances from the upper corners (rejecting the redundant solution) would be enough to locate the TCP on the drawing plane. You might want to avoid using a floor-mounted robot for this operation and try a drawbot-type solution -- this would remove the floor conditions from the problem entirely.


    However, you're still going to have issues. First off:

    1. How dynamic are the changes in the robot platform position? For example, hitting a small bump in the floor could cause the robot's base to tilt by 1° very suddenly, which could have 5-10 mm effects on the TCP position, and the robot would have great difficulty reacting quickly enough. This gets into dynamic control theory problems.

    2. We cannot simply treat this as a 2D problem if the floor causes the robot to tilt towards/away from the drawing plane. Even if the 2D position is well measured in real time, the drawing tool could be crushed against the wall or lift free of it.

    3. Dynamic control of the robot will not be simple. Most industrial robots are not made for realtime path adjustment. Some brands offer realtime control suites that can be added (for a cost), but the programming burden is still pretty heavy to make this work well.
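    To put point 1 in rough numbers (the 500 mm reach is an assumed figure, not from the post): a base tilt of θ displaces a tool at radius r by roughly r·sin θ.

```python
import math

reach_mm = 500.0  # assumed distance from the tilt axis to the TCP
tilt_deg = 1.0    # sudden base tilt from hitting a small bump
offset_mm = reach_mm * math.sin(math.radians(tilt_deg))
print(round(offset_mm, 1))  # 8.7
```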

  • I think a floor can never be perfectly flat; it always varies by about 1 mm (1-1.5 mm) in places, especially if there is parquet (or similar). Therefore it is important that the robot registers and measures the unevenness.

  • OK. So I would assume the speeds are not very fast. If you had a perfect line drawn on a wall, or maybe even a taut wire, you could have a camera (not too expensive), or two, which could give feedback on the height of your system. Use that feedback to correct the drawing tool's "Z" position. However, any roll (side-to-side) or pitch would complicate things.
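    As a sketch of that feedback loop (the gain and names are mine; a real controller would also need filtering and the roll/pitch handling just mentioned), a simple proportional correction of the Z command:

```python
def correct_z(z_command, z_error_mm, gain=0.8):
    """Proportionally correct the drawing tool's Z command using the
    camera-measured height error (mm); gain < 1 damps overshoot."""
    return z_command - gain * z_error_mm

# Toy loop: the tool starts 2 mm too high; repeated corrections converge.
z, target = 102.0, 100.0
for _ in range(5):
    z = correct_z(z, z - target)
print(round(z - target, 4))  # residual error in mm: 0.0006
```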

  • If only one knew what such a system and monitoring procedure is professionally called. Cameras are widely used in all kinds of measurement and monitoring, but what is this specific technique called by experts? What exactly should I type into a search engine? "Using a camera" is a pretty broad term.


    And where can I find programming examples of all this?

  • I do not know of any off-the-shelf solution to this problem. There are some brand-specific tools that might help, but which of them we can discuss would depend entirely on what robot brand you are going to buy.


    This becomes a complex 3D or even 6D issue because the floor does not merely change the robot's height, but can also introduce tilt (in more than one axis) to the robot's base. And to accurately draw on the wall, all of these multi-dimensional errors would have to be taken into account.


    A vision-based solution would have to have a field of view large enough to span the entire drawing area, and might require 3D stereo vision. It would have to be calibrated to the drawing area, and the robot would have to carry a target that the vision system could locate reliably and accurately. Then, every time the moving platform is halted, the robot would need to carry out a calibration routine, moving to multiple points (without drawing) while the vision system measured the motion. Then a dimensional transform between the programmed locations and the vision locations would be carried out to characterize the delta between the robot's actual location and its ideal location. Finally, a correction for this delta would have to be applied to the robot's program.
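    That dimensional-transform step can be sketched as a least-squares rigid fit (rotation plus translation) between the programmed and vision-measured points; the closed form below is the standard 2D case, with function names of my choosing:

```python
import math

def fit_rigid_2d(programmed, measured):
    """Least-squares rigid transform (rotation + translation) that maps
    the programmed points onto the measured points. Both arguments are
    lists of (x, y) tuples. Returns (theta_rad, tx, ty)."""
    n = len(programmed)
    pcx = sum(p[0] for p in programmed) / n
    pcy = sum(p[1] for p in programmed) / n
    mcx = sum(m[0] for m in measured) / n
    mcy = sum(m[1] for m in measured) / n
    # Accumulate cross- and dot-products of the centred point sets.
    s_cross = s_dot = 0.0
    for (px, py), (mx, my) in zip(programmed, measured):
        ax, ay = px - pcx, py - pcy
        bx, by = mx - mcx, my - mcy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = mcx - (c * pcx - s * pcy)
    ty = mcy - (s * pcx + c * pcy)
    return theta, tx, ty

# Example: the platform settled 2 mm over, 1 mm up, rotated 0.5 degrees.
theta_true = math.radians(0.5)
c, s = math.cos(theta_true), math.sin(theta_true)
prog = [(0, 0), (100, 0), (100, 100), (0, 100)]
meas = [(c * x - s * y + 2.0, s * x + c * y + 1.0) for x, y in prog]
theta, tx, ty = fit_rigid_2d(prog, meas)
print(round(math.degrees(theta), 3), round(tx, 3), round(ty, 3))  # 0.5 2.0 1.0
```

    In practice the fit would be 3D (or a full 6D pose), but the structure of the correction is the same: estimate the transform from a few measured points, then apply its inverse to the programmed path.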


    But a vision system loses resolution as the field of view increases. So a long-distance vision system would be either imprecise, or very expensive.
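    Rough numbers for that trade-off (the camera resolution and field of view are assumed for illustration):

```python
fov_mm = 3000.0  # field of view spanning an assumed 3 m drawing area
pixels = 1920    # horizontal resolution of an assumed HD camera
mm_per_px = fov_mm / pixels
print(round(mm_per_px, 2))  # 1.56
```

    So even before lens distortion and lighting effects, a single HD camera watching a 3 m wall resolves only about 1.6 mm per pixel; sub-millimetre accuracy needs sub-pixel interpolation, a narrower view, or a much more expensive sensor.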


    Another option might be to have a grid pattern of some kind pre-existing on the drawing surface. Then, a small, close-range vision system mounted to the robot end effector could be used to search for several points on the grid, and perform a similar offset correction.
