Posts by DChernyshev

    Hello Alexandru!


    Thank you for your answer, it was helpful.


    Could I ask you, if possible, to attach a small sketch of how you physically connected the F-DQ outputs through the SICK safety relay (RLY3-OSSD400) to the dedicated safety controller inputs X7 and X8?


    Thank you in advance for sharing your information and experience.

    Hello Alexandru!


    Thank you for your reply and sorry for the delayed answer.


    So in my application I have to connect an E-Stop circuit, two safety doors (separately) and three light curtains (also separately) to an F-DI module of an F-CPU 1512SP F-1 PN. As I mentioned before, I will also use two Kawasaki BX300L series robots with E02 controllers and an installed Cubic-S module. All of the described safety signals, after processing in the safety program of the named F-CPU, must be output at an F-DQ and connected to the safety inputs of the Cubic-S module. The reason I am doing it this way is that, besides the robots, my small plant also has some other machines that will receive the safety signals over the PROFIsafe layer.


    So I would like to establish the following connections between the F-DQ and the Cubic-S:


    - The E-Stop will call the "Emergency Stop Function".

    - Each safety door will call the "Protective Stop Function". I would also like to mention that each door acts with an interlock, so it can be opened only after every machine in my small plant has stopped, in accordance with performance level PLd (Cat. 3).

    - The first light curtain will activate a "selectable prohibited area" for the robot.

    - The second light curtain will call the "Protective stop function".

    - The third light curtain will call the "Speed monitoring function".

    First of all, I am interested in whether it is possible to realize a direct connection between a Siemens F-DQ and the Cubic-S without any safety relays in between, for instance a two-channel relay such as the PILZ PNOZ s3. But since you said you managed to connect an F-DQ to the dedicated safety inputs X7 and X8 of your controller, I hope it should also work for the Cubic-S module.


    At this point I would like to ask you some hardware questions. If I understood correctly, to establish a direct connection between a Siemens F-DQ and the Cubic-S module I need to use the PP-switching type, not the PM-switching type? I ask because the output modules of the SP series can only be ordered as either PP- or PM-switching, unlike the MP series, where it can be adjusted in TIA Portal.


    Also, in the description of the connection of the safety inputs the following is indicated: "Ground the 0V (GND) of the power that is used". As I understand it, this is called "functional grounding", but I don't fully understand how it can be physically done. In the image below I have given an example of how a connection between a light curtain and a safety input channel of the Cubic-S can be made, where I marked the grounding of the 0V of the 24 VDC power supply. I also think that in the presented image an F-DQ could be used instead of the light curtain, connected in the same manner, but the question remains about the correct supply of the modules and the grounding of the power supply (grounding of the M terminal of the 24 VDC supply):



    Could you also tell me whether you connected all E-Stop buttons in series to a single F-DI channel, or used a separate fail-safe input channel of the F-DI for each E-Stop button?


    Secondly, as far as I know, all safety inputs of the Cubic-S module are dual-channel, which has a certain justification for compliance with performance level PLd, for instance evaluating the discrepancy time between the two inputs of a channel. At the same time, the fail-safe outputs of a Siemens F-DQ are individual. That's why I am interested in how the outputs must be programmed correctly in the Siemens safety program so that the dual-channel evaluation at the Cubic-S inputs is preserved.


    I would be very grateful if you would share your experience in solving this problem.


    Thank you in advance and have a nice day!

    Hello to all forum members!


    Dear kwakisaki, could I ask you about your first message in this thread on the basics of programming selectable zones using the Cubic-S module, namely about the sixth and seventh points:



    In the attached folder below I have included a table from the Cubic-S manual "90210-1272DEC_Cubic-S_Instruction Manual", where it is indicated that monitoring of the selected zone occurs when the signal at the input is OFF. May I ask you to explain this, because it seems I have understood it incorrectly:


    Application example.zip


    I have also included in the attached folder two pictures from my application, where the robot should be stopped instantly if a person crosses the light barrier, i.e. when the input for the area indicated in red turns OFF. That is, in this case the OSSD signals from the light barrier become OFF, the Cubic-S begins to monitor this zone, and if the robot is inside it, as in the attached picture, it must be shut off. Please correct me if I am wrong.


    I would also like to ask whether someone has experience with connecting the Cubic-S module to a fail-safe CPU. In my application I am going to use a Siemens S7-1500 F-CPU, so that all safety signals such as E-Stop, safety door position and light barrier will be read at the fail-safe digital inputs of the Siemens F-CPU. Then I need to establish a connection between the fail-safe digital outputs of the F-CPU and the Cubic-S module. If someone has already solved a similar problem, it would be great if you could share your experience.


    Thank you in advance for your information and have a nice weekend!

    Good day!


    I apologize for the very long delay in answering. I hope my topic has not been forgotten yet :)


    So I ran the test with the changes you suggested, namely:


    Regarding your test, I would be tempted to:

    - Change ACCURACY 0.5 FINE ; prior to executing the LMOVE target instruction

    - Replace BRAKE with STABLE 0.1
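
    In AS terms, the modified test sequence looked roughly like this (just a sketch: the point and variable names are hypothetical, and signal 1007 is the simulated sensor input):


    Code
      ACCURACY 0.5 FINE                ; settle to within 0.5 mm, as suggested
      SPEED 100 MM/S                   ; one of the tested speed values
      XMOVE target_pose TILL 1007      ; move toward the target until the sensor fires
      STABLE 0.1                       ; wait 0.1 s for the arm to settle (replaces BRAKE)
      HERE stop_pose                   ; latch the pose where the motion stopped
      DECOMPOSE p[1] = stop_pose       ; p[1] = X coordinate in BASE
      PRINT "X at signal transition: ", p[1]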


    The results are the following:


    X coordinate in BASE at the signal transition of the simulated sensor [mm]:

    SPEED [mm/s]   10       20       50       100      150      200
                   192.20   289.88   294.16   286.01   301.18   290.64
                   290.08   289.88   294.26   298.41   296.38   284.62
                   289.98   290.08   293.06   285.61   283.49   233.44
                   290.30   291.00   289.17   297.22   304.79   288.64
                   192.48   289.49   291.06   297.41   302.68   282.64


    So the results seem to be quite similar, with the exception of a few outliers (192.20, 192.48 and 233.44). But why these couple of results dropped out of the general range is not clear to me.


    Thanks again kwakisaki for your advice :thumbs_up:


    I will now wait for the possibility of performing tests on a real robot and will post the results in this thread.

    Thank you kwakisaki for the time you are spending on my issue and also for the code templates you gave!


    Below I tried to answer your questions:

    Is the deviation a retarded deviation or advanced?

    I carried out several measurements in K-ROSET with 100% DECEL and different SPEED values. The target X value is 291 mm. The results are the following:


    X coordinate in BASE at the signal transition of the simulated sensor [mm]:

    SPEED [mm/s]   10       20       50       100      150      200
                   291.25   291.25   292.56   296.01   295.78   300.63
                   289.76   289.76   291.86   289.21   303.58   290.22
                   290.16   290.16   288.76   288.81   299.98   296.63
                   291.36   291.36   292.56   287.61   301.48   303.03
                   292.04   292.04   290.66   291.81   290.38   290.63


    The method I used is the following:



    Is the deviation a result of speed/accuracy to the target or the result of detection method not working quickly enough (ie is it being triggered too late)?


    So I think that the accuracy of latching the position when signal 1007 appears depends mainly on the speed at which the bar passes the sensor. At a speed of 100 mm/s the difference from the required value reaches 5 mm, which is the tolerance in my task. At speeds of 150 and 200 mm/s the fluctuations became larger, although, as you can see, values close to the desired position were also measured, namely 290.38 and 290.63 respectively.


    Have you tried adjusting the model location at the pickup point to see if this deviation repeats itself?

    I also ran a test where I placed the bars in the bundle not flush (please see the picture below): the first one +100 mm from the theoretical position, the second one at the theoretical position and the third one -100 mm from the theoretical position. This means the coordinates to be obtained for the first and third bars are 191 mm and 391 mm respectively (the target value is 291 mm). The indicated coordinates of 191 mm and 391 mm were again obtained at low speeds.



    I think I should now wait for the opportunity to test with real robots before drawing conclusions about whether this function is feasible and at what speed. After I run the tests, I will definitely write about the results.


    I have one more small question, could you please answer it: is there a possibility to monitor the actual linear velocity value and also the acceleration/deceleration values?


    Thank you kwakisaki for your great support :top:

    Thank you for your answer! It was helpful :thumbs_up:


    So the deviation during the simulation is up to 50 mm from the value it should be.


    Without having access to your actual project, it would be difficult to pinpoint where the problem is occurring, even then, it could be difficult.

    Unfortunately I am not allowed to post the whole project, as there is some information I have to hide, but I promise I will prepare a cut-down version where this issue can be simulated. Sorry for that!


    You mentioned the clamp zone twice within the K-ROSET settings, which might influence the simulation results. As the clamp method I used "Work nearest to TCP". Should I change it to "Work nearest the inside of ClampZone" to achieve the best results, or doesn't it matter?


    Have you tried any other form of detecting the edge of the bar, ie the XMOVE for instance?

    As for your question, I thought the XMOVE instruction is not allowed in cooperative motion, at least it is not present in the list of motion instructions used for COOP. That's why I did not try it. Or am I wrong?




    The reason I used the HSENSESET and HSENSE instructions is that I thought I would get the exact TCP position when a signal transition from a sensor is detected, in my case when the front edge of a bar crosses the light beam. But I was also thinking of using templates like the following:


    But for positions saved with the HERE instruction I also get different results depending on the velocity I predefine in my code.


    If XMOVE is allowed in COOP mode, then I suppose it could be used like this:


    Code
      XMOVE theoretical_pose TILL 1007   ; move toward the theoretical pose, stop when signal 1007 comes ON
      HERE actual_position               ; save the pose where the motion stopped
      BRAKE                              ; make sure any remaining motion is aborted
    ; ***********************
    ; further, the difference in X between the theoretical pose and the actual one
    ; will be calculated and the robot will be moved to a modified pose
    ; ***********************
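
    For completeness, the correction step mentioned in the comments could look roughly like this (again only a sketch: dest_pose and the other names are hypothetical, and I assume the deviation is purely along X in BASE):


    Code
      DECOMPOSE t[1] = theoretical_pose                  ; t[1] = X of the theoretical pose
      DECOMPOSE a[1] = actual_position                   ; a[1] = X where signal 1007 was seen
      dx = t[1]-a[1]                                     ; deviation along X in BASE
      POINT corrected_pose = SHIFT(dest_pose BY dx,0,0)  ; shift the placing pose by dx in X
      LMOVE corrected_pose                               ; move to the corrected position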

    So, as a summary, it is important for me to know whether it is possible in a real-world application to latch a position with high repeatability from a sensor signal.


    Thank you in advance for your information!

    Good day kwakisaki and all forum members!


    First of all, I want to thank kwakisaki again for the help he has already provided.


    I am still working on my application in K-ROSET and would like to test a function that will be applied to the real BX300L robots. I decided to keep the same thread even though the topic I want to touch on is different from the one discussed earlier.


    As described earlier, the BX300L series robots have to pick steel bars from a bundle using cooperative motion and place them onto a conveyor, with or without rotation depending on the orientation of their legs in the bundle. After that, a bar must be moved along the X-Base axis into a saw and placed there at a predefined position. In theory, the distance between the saw body and the front edge of the bar after placing it on the conveyor is always the same, but in reality this value varies, since the bars do not lie perfectly in the bundle: some of them stick out, and some are hidden somewhat inside. That's why the actual distance between the saw body and the front edge of a bar has to be measured before the final positioning of the bar in the saw. For this purpose I am thinking of using an optical sensor, which is installed by default at the entrance to the saw. The beam emitted by the sensor is directed perpendicular to the X-Base axis; that is, the sensor's default task is to establish the fact that the bar has entered the saw.


    To simulate the sensor action I used a trick in K-ROSET: I created an obstacle (shown in the attached picture as a photo sensor) and assigned a small motion to it in the Y-Base direction. I made the appropriate settings under "Plug-ins/Action/Move Env Model", then checked the box "Set I/O when Collide" and chose an input signal that will be set when the steel bar contacts the simulated sensor.


    In order to get the exact position where the signal was detected, I used the HSENSESET instruction. When the signal is detected I brake the current movement to the theoretical position using the BRAKE instruction, read the sensed position from the buffer with the HSENSE instruction, calculate the difference in X between the theoretical position and the actual one, modify the X coordinate accordingly using the SHIFT function, and move the TCP to that new position.
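
    In outline, my sequence looks roughly like this (a sketch only: the point names are hypothetical, signal 1007 is the simulated sensor input, and the exact argument forms of HSENSESET/HSENSE should be taken from the AS option manual, so treat them here as assumptions):


    Code
      HSENSESET 1007                  ; arm high-speed sensing on signal 1007 (assumed form)
      LMOVE saw_pose                  ; start moving to the theoretical position in the saw
      SWAIT 1007                      ; program waits here while the motion continues
      BRAKE                           ; sensor fired: abort the remaining motion
      HSENSE sensed_pose              ; read the latched pose from the buffer (assumed form)
      DECOMPOSE t[1] = trigger_pose   ; X where the signal should occur in theory
      DECOMPOSE s[1] = sensed_pose    ; X where it actually occurred
      POINT final_pose = SHIFT(saw_pose BY t[1]-s[1],0,0)
      LMOVE final_pose                ; place the bar at the corrected X position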


    But unfortunately, when I read the saved position from the buffer with the HSENSE instruction after a signal transition was detected, I don't get the desired result: the robot keeps stopping in the saw at different positions. I checked this by outputting the coordinate to the terminal with the PRINT command, and I also measured the distance between the front edge of the steel bar and the large red plane which indicates the sawing plane (also shown in the attached picture).


    The code I used, a small video of my simulation and an image are below:


    Sensor test.zip


    I would also like to ask your advice on whether it makes sense to test this in the simulator at all, i.e. whether the results obtained will be the same as with a real robot.


    I will be glad of your advice, and thank you in advance!

    Good day!


    May I ask you one more question? As you mentioned in one of your previous messages, I still got the error (E1118) "Command value for JT4 suddenly changed" before executing the circular interpolation. The interesting thing is that I did not change anything beforehand, I just wanted to run the movements again. I managed to get around this error either by reducing the rotational speed or by an additional rotation around RX in the Base coordinate system, so as to increase the JT4 angle before starting the circular interpolation. But if I rotate too far around RX, increasing JT4 too much, I get E1118 again. The corresponding code is in the attachment below:


    Code.zip


    where:

    Code
      LMOVE #stelle_22_3    ; NO ERROR E1118
      ;LMOVE #stelle_22_4   ; ERROR E1118
      ;LMOVE #stelle_22_5   ; ERROR E1118
      ;LMOVE #stelle_22_6   ; ERROR E1118


    Could you therefore please explain your statement below:


    It's only when you transfer between upper and lower hemispheres it can create 'JTxx suddenly moves' errors.


    Thank you in advance for your information!

    Good day!


    Thank you for your detailed answer! It was helpful.


    A short video of a simulation where I used only linear and circular interpolation is below:


    Simulation.zip


    So, as you suggested, after picking a bar and placing it in an intermediate posture I added an additional movement in the +X direction in BASE coordinates, and after that I executed the circular interpolation. I did not perform a return movement in -X in BASE coordinates before placing the bar on the conveyor, as I still need to move the steel bar in +X for its next handling. Also, I performed the circular interpolation at a constant X coordinate in order to change the gripping point of the steel bar.
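
    In simplified form the motion sequence is now (a sketch with hypothetical point names; the 100 mm offset is just an example value):


    Code
      LMOVE intermediate_pose                     ; taught intermediate posture after picking
      LMOVE SHIFT(intermediate_pose BY 100,0,0)   ; additional movement in +X in BASE
      C1MOVE circle_mid                           ; first half of the rotation, X kept constant
      C2MOVE circle_end                           ; second half: bar rotated 180 degrees
      LMOVE conveyor_pose                         ; place the bar on the conveyor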


    As you can see in the attached video, there is some jerky movement during the bar rotation, namely in its second half. Could you tell me how I can avoid it?

    I taught the points manually; maybe the better way would be to calculate them mathematically according to the circle geometry, based on some taught posture?


    Thank you in advance for your information!

    Good day!


    Thank you for your answer.


    I created the code just for one robot, using joint movements and configuration-changing commands, to show how the steel bars should be handled by the robots acting together, i.e. how the bars should be picked from a bundle, rotated 180 degrees and placed on a conveyor. The robot in the foreground will later act as master and the second one in the background as slave, so that the robots move synchronously using so-called cooperative motion.


    My mistake was that I read the corresponding manual for cooperative motion only after I had created the code presented above; I did not realize that for cooperative motion only linear and circular interpolations can be used, i.e. commands like MLLMOVE, MRLMOVE, MLC1MOVE, MRC1MOVE etc. That's why I started to change the posted code, replacing the joint movements with linear and circular interpolations. As a result I got the error E6007.


    On the one hand there is no way to test cooperative motion in K-ROSET for this robot type, it can only be tested with physical robots; on the other hand I think that if I manage to rewrite the code for one robot, as if it were working alone, using only linear and circular interpolations, it can be transferred to the same robot acting as master in the real application by changing LMOVE to MLLMOVE, C1MOVE to MLC1MOVE, C2MOVE to MLC2MOVE etc.


    But I still get the singularity error and would like to ask you how it can be avoided.


    Thank you in advance for your help!

    Good day everyone!


    Could I ask you to help me solve the following problem?


    In my application I have to program two BX300L robots. At the moment I am doing this in K-ROSET. In one function the robots must work in cooperative motion. As far as I know, there is no way to test cooperative motion in K-ROSET for this robot type. Also, after studying the manuals for cooperative motion, I found that only linear and circular interpolation can be used, i.e. no joint movements. So I tried to write code for just one robot using only linear and circular interpolations and test it individually in K-ROSET. In one part of the code the robot must rotate a workpiece 180 degrees. For this purpose I used circular interpolation and got the error E6007, which is related to a singularity.


    A short video with the required movements and the corresponding code are attached in the file below (E6007.zip). To show how it should look, I wrote code using joint movements to avoid the singularity and the commands DWRIST and UWRIST to change the configuration:


    E6007.zip
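
    For context, the pattern in my joint-motion version is roughly the following (simplified, with hypothetical point names):


    Code
      UWRIST                ; select the "up wrist" configuration
      JMOVE pick_pose       ; joint interpolation passes the critical posture without E6007
      DWRIST                ; switch to the "down wrist" configuration
      JMOVE rotate_pose     ; the configuration change takes effect on this joint motion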


    If someone has encountered a similar situation, I would be grateful if you would share your experience.


    Thank you in advance for your information!
