Hello everyone,
I'm currently working on a project that involves controlling a Mitsubishi industrial robot (RV-2FD) using a Raspberry Pi 4B, and I would really appreciate your insights and suggestions.
Project Overview:
The aim of my project is to perform visual inspection of copper pipes used in systems like air conditioners, dryers, and refrigerators, specifically at their welded joints, which carry refrigerants (such as Freon). I want to detect possible defects or leak points using image processing and then guide the Mitsubishi robot to those coordinates for further inspection or marking.
Here’s how I plan to approach it:
- I'm using OpenCV on Raspberry Pi for real-time image processing.
- A camera module attached to the Raspberry Pi will capture images of the copper pipe welds.
- I’m creating a custom dataset of defective and non-defective weld joints to train a model for detection.
- The images will be processed on the Raspberry Pi, and defect coordinates will be extracted in pixel space (a rough sketch of this step follows this list).
- These coordinates need to be translated into robot motion commands so the Mitsubishi robot can move to the detected defect points.
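To make the plan concrete, here is a minimal sketch of the per-frame vision step. The threshold-and-contour segmentation is only a stand-in for the trained model, and the camera index, threshold value, and minimum blob area are placeholder assumptions:

```python
import cv2

def find_defect_pixels(frame, min_area=50):
    """Return (u, v) pixel centroids of candidate defect regions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Placeholder segmentation: dark spots relative to the bright copper weld.
    _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

cap = cv2.VideoCapture(0)  # Pi camera via V4L2; device index is an assumption
ok, frame = cap.read()
if ok:
    print(find_defect_pixels(frame))
cap.release()
```

The trained defect model would replace the threshold step, but the output I care about is the same: a list of pixel coordinates to hand to the robot.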
Communication & Integration Questions:
- What is the best way to establish communication between the Raspberry Pi and the Mitsubishi robot? Are there supported protocols like serial (RS-232), Ethernet/IP, CC-Link, Modbus TCP, or any others that are compatible with both?
- Would it be better to use a PLC as an intermediary, or is direct communication between the Raspberry Pi and the robot feasible?
- Are there software tools, SDKs, or APIs available for Mitsubishi robots that can be integrated with a Raspberry Pi (running a Linux-based OS)?
- How can I send coordinates from the Raspberry Pi to the robot controller? Is there a standard way to convert image (pixel) coordinates into robot world coordinates? (My tentative approach is sketched after this list.)
- Can the robot receive commands from external systems (e.g., via socket communication, HTTP, or a serial port)?
- Are there any existing examples or libraries (open-source or vendor-provided) that could help bridge Raspberry Pi with Mitsubishi robots?
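On the image-to-world question, my current thinking (please correct me if this is the wrong approach) is a planar calibration: jog the robot to a few reference marks the camera can see, record the pixel position and the robot X/Y at each mark, and fit a homography. The coordinate values below are made-up placeholders, and Z is assumed fixed by a flat pipe fixture:

```python
import numpy as np
import cv2

# (u, v) pixel positions of four calibration marks (placeholder values).
pixel_pts = np.array([[120, 90], [500, 95], [505, 380], [118, 375]], dtype=np.float32)
# Robot X/Y in mm recorded at the same four marks (placeholder values).
robot_pts = np.array([[350.0, -120.0], [350.0, 120.0],
                      [550.0, 120.0], [550.0, -120.0]], dtype=np.float32)

# Fit the pixel -> robot-frame mapping from the four point pairs.
H = cv2.getPerspectiveTransform(pixel_pts, robot_pts)

def pixel_to_robot_xy(u, v):
    """Map one defect pixel to robot-frame X/Y using the fitted homography."""
    src = np.array([[[u, v]]], dtype=np.float32)
    x, y = cv2.perspectiveTransform(src, H)[0, 0]
    return float(x), float(y)

print(pixel_to_robot_xy(320, 240))
```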
Ultimately, I want the Raspberry Pi to act as both the vision system and the decision-making unit, while the Mitsubishi robot executes the movements based on the coordinates extracted from image analysis.
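On the Pi side, if the controller (or a MELFA BASIC program running on it) can be set up to listen on a TCP port, I imagine the handoff being as simple as the sketch below. The IP address, port number, and "X,Y,Z" text format are pure assumptions until the robot-side program is defined; this is not a documented Mitsubishi protocol, just the kind of client I would write on the Pi:

```python
import socket

ROBOT_IP = "192.168.0.20"   # controller address, placeholder assumption
ROBOT_PORT = 10001          # listening port, placeholder assumption

def send_target(x, y, z=100.0):
    """Send one target point to the robot-side program as an ASCII line."""
    msg = f"{x:.2f},{y:.2f},{z:.2f}\r\n".encode("ascii")
    with socket.create_connection((ROBOT_IP, ROBOT_PORT), timeout=2.0) as s:
        s.sendall(msg)
        # Wait for a short acknowledgement, if the robot-side program sends one.
        reply = s.recv(64)
    return reply

send_target(412.5, -37.8)
```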
Any documentation, tutorials, experience-based suggestions, or links to similar projects would be extremely helpful. I want to confirm feasibility and settle on the architecture before finalizing the system.
Thanks in advance for your help!