
Simulating UR in ROS #59

Open
murtazabasu opened this issue Oct 26, 2019 · 10 comments

Comments

@murtazabasu

murtazabasu commented Oct 26, 2019

Hello, I have been looking for such a repository for a long time. I am currently working on a project that involves simulating and coordinating two UR5s in ROS/Gazebo for a pick-and-place task: the first robot picks up an object from a given position and moves it to a handover position, and the second robot takes the object from the first robot and carries it to the goal position. I am using Robotiq85 grippers for grasping, and I am able to control both robots. So far I have performed this task by hard-coding it; you can find the video here: https://www.youtube.com/watch?v=n6Vk9lIxKkg. Now I want to perform this task using PPO, for which I need to create an environment such that when the agent takes an action, that action is executed in the simulated world through the MoveIt Python interface (the library used to command robot motions in the Gazebo-simulated world). What sort of modifications do I need to make to the environment and the agent in order to train the robot in simulation?

@seivazi

seivazi commented Nov 11, 2019

Hello,

I guess you need to write your own Communicator, e.g. a ROSCommunicator, and later your own task. Take a look at https://github.com/kindredresearch/SenseAct/tree/master/docs (a rough skeleton follows below).

Could you share your ROS/Gazebo code?
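
For reference, a rough skeleton of such a ROS communicator. This is a sketch only: it assumes SenseAct's Communicator base class with the _sensor_handler/_actuator_handler hooks and the SharedBuffer semantics described in the linked docs, and all topic names and buffer sizes are placeholders.

    import numpy as np
    import rospy
    from sensor_msgs.msg import JointState
    from std_msgs.msg import Float64MultiArray
    from senseact.communicator import Communicator

    class ROSCommunicator(Communicator):
        """Bridges SenseAct's shared buffers to ROS topics (sketch)."""

        def __init__(self):
            # Buffer shapes/types are placeholders; see SenseAct's docs for
            # the exact sensor_args/actuator_args format.
            sensor_args = {'array_len': 6, 'array_type': 'd', 'np_array_type': 'd'}
            actuator_args = {'array_len': 6, 'array_type': 'd', 'np_array_type': 'd'}
            super(ROSCommunicator, self).__init__(
                use_sensor=True, use_actuator=True,
                sensor_args=sensor_args, actuator_args=actuator_args)
            rospy.init_node('senseact_ros_communicator')
            self._latest_joints = np.zeros(6)
            rospy.Subscriber('/joint_states', JointState, self._joint_cb)
            self._cmd_pub = rospy.Publisher('/joint_command', Float64MultiArray,
                                            queue_size=1)

        def _joint_cb(self, msg):
            self._latest_joints = np.array(msg.position)

        def _sensor_handler(self):
            # Push the newest joint reading into the shared sensor buffer.
            self.sensor_buffer.write(self._latest_joints)

        def _actuator_handler(self):
            # Pop the newest action from the shared actuator buffer and publish it.
            # read_update() semantics follow SenseAct's SharedBuffer; check the docs.
            actuation, _, _ = self.actuator_buffer.read_update()
            self._cmd_pub.publish(Float64MultiArray(data=list(np.ravel(actuation))))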

@cambel

cambel commented Nov 15, 2019

@Murtaza786-oss Hi, that is a cool video you have there.

I have been working with ROS/Gazebo for some time too, mostly on this repo: https://github.com/cambel/ur3. I am using a UR3, but the changes needed to use a UR5 aren't that big. I'm also starting to create some gym environments based on openai_ros, so feel free to check it out, and I'd be glad to help you with any questions/issues.

@murtazabasu
Author

@cambel Thank you for replying. I have started working on my task with RL using PPO. I looked at the repo you referred to; it seems that you are not using the MoveIt Python interface and are defining the FK and IK manually, while I am using the MoveIt Python interface for motion planning. Right now, I have defined my environment (states, actions and rewards), and the agent (PPO) gives me an action, i.e. joint angles. But for some reason I am facing issues in publishing that joint-angle action to the MoveIt interface so that it is simulated in Gazebo. My question is: why didn't you prefer the MoveIt commander interface with Python, which would actually save a lot of coding?
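
For reference, sending a joint-angle action through the MoveIt Python interface usually looks roughly like this. A minimal sketch: "manipulator" is the conventional UR move-group name and may differ in your SRDF.

    import sys
    import rospy
    import moveit_commander

    # Initialize MoveIt's Python bindings and a ROS node.
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('ppo_action_executor')

    # "manipulator" is the usual UR move-group name; check your SRDF.
    group = moveit_commander.MoveGroupCommander('manipulator')

    def execute_action(joint_angles):
        """Execute a 6-element joint-angle action through MoveIt."""
        group.go(list(joint_angles), wait=True)
        group.stop()  # guard against residual motion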

@cambel

cambel commented Nov 16, 2019

@Murtaza786-oss Yeah, I don't use moveit-python because, on one hand, I found it to be a bit slower compared to a direct control approach. On the other hand, since I want the robot to move one step at a time and not along a predefined trajectory, a motion planner such as MoveIt is not necessary. Therefore, I use two different approaches to control the robot, each with some pros and cons. The main controller I use is here: https://github.com/cambel/ur3/blob/master/ur_control/src/ur_control/arm.py (there is one for the gripper too).
It uses the standard ROS FollowJointTrajectory controller, which allows me to define a trajectory with multiple points. The other option is to publish directly to the robot's control node, which allows me to send and overwrite motion commands (e.g. if the robot is still moving but I have a new target goal, I can send a new command and the robot will update its trajectory without stopping).
In any case, I would have liked to save coding, but for me the MoveIt interface didn't quite work either.
I have been using this repo to train an agent using GPS, and recently I managed to make it work with SAC.
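
For reference, the FollowJointTrajectory approach described above looks roughly like this. A sketch: the action name /arm_controller/follow_joint_trajectory is a common ur_gazebo default and may differ in your setup.

    import rospy
    import actionlib
    from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
    from trajectory_msgs.msg import JointTrajectoryPoint

    JOINT_NAMES = ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint',
                   'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']

    rospy.init_node('ur_trajectory_client')
    client = actionlib.SimpleActionClient(
        '/arm_controller/follow_joint_trajectory', FollowJointTrajectoryAction)
    client.wait_for_server()

    def move_to(joint_positions, seconds=1.0):
        """Send a single-point trajectory to the standard ROS controller."""
        goal = FollowJointTrajectoryGoal()
        goal.trajectory.joint_names = JOINT_NAMES
        point = JointTrajectoryPoint()
        point.positions = list(joint_positions)
        point.time_from_start = rospy.Duration(seconds)
        goal.trajectory.points.append(point)
        client.send_goal(goal)
        client.wait_for_result()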

@murtazabasu
Author

@cambel Yeah, that makes a lot of sense! Which versions of ROS and Python did you use for your project? I tried to run catkin_make after cloning your repo, but it is throwing this error:

CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
 Could NOT find PythonLibs: Found unsuitable version "2.7.15+", but required
 is at least "3" (found /usr/lib/x86_64-linux-gnu/libpython2.7.so)
@cambel

cambel commented Nov 16, 2019

@Murtaza786-oss I'm using Python 3 right now, but with a few minor changes it should work with Python 2 as well. Open an issue in my repo if you have any trouble; I'd be happy to help.

@murtazabasu
Author

@cambel Hi, I cannot open a new issue in your repo because I don't see any option to open one. By the way, I have built my agent and it is learning in the environment; the only problem is collision checking. The UR5 arm collides with the surface and objects, which disrupts the entire simulation. Did you by any chance develop a contact sensor for the UR5 that publishes a ROS message when the end effector is close to some object? Then I could tell the agent: if the collision rosmsg prints something, RESET! Right now I am using the Z position of the end effector as a substitute, saying that if Z is less than the Z of the table, then RESET! The problem with this is that the Z of the end effector is only known after the agent's action has been simulated, so if the agent chooses a bad action where the end effector goes and collides with the table, the entire simulation gets disrupted.
Below is a code snippet of the current collision check using the Z of the end effector:


    self.reward_coll = 0
    self.iterator += 1
    coll_check = self._obs_pose1.position.z  # current end-effector height

    if coll_check < 0.25:  # 0.22 is the table height; 0.25 adds a small margin
        self.reset_robot2.call()   # reset the disrupted simulation
        self.reward_coll = -0.5    # collision penalty
        self.collided += 1
        rospy.sleep(1)

I am also sharing the current training video of one of the UR5s; it can be seen that the robot collides with itself and with the table because there is no contact sensor installed:

https://www.dropbox.com/s/b6bd5qjyj1bh9x3/Training_UR5_PPO.mp4?dl=0
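
For reference, Gazebo's bumper plugin can provide exactly the kind of contact message asked about above. A sketch: it assumes a <sensor type="contact"> with libgazebo_ros_bumper.so has been added to the URDF, and the topic name /robot_bumper is a placeholder.

    import rospy
    from gazebo_msgs.msg import ContactsState

    def contact_cb(msg):
        # msg.states is non-empty whenever the monitored link touches something.
        if msg.states:
            rospy.logwarn('Collision with %s', msg.states[0].collision2_name)
            # trigger the environment reset here

    rospy.init_node('collision_monitor')
    rospy.Subscriber('/robot_bumper', ContactsState, contact_cb)
    rospy.spin()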

@cambel

cambel commented Nov 20, 2019

@Murtaza786-oss I re-uploaded the repository so that now Issues can be created.

I do not have a collision detector for the UR, but I implemented a force/torque sensor at the end effector that you could use virtually to check whether a force is detected. From what I can see in the video, though, it seems that you are using direct joint commands as the robot's actions. From experience, I would recommend you avoid that; it tends not to work because there is no continuity to the robot's motion. What I do instead of direct joint commands is step joint commands, i.e., instead of joint_command = action, use joint_command = current_joints + action. Then make the actions smaller, limited to something suitable for your task, such as a maximum of 5 cm per step or less. It then becomes easy to compute the forward kinematics of the expected new position and validate the Z of the end effector before the robot even moves (see the sketch below).
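
A sketch of that step-command idea. The step limit and the forward_kinematics helper are placeholders; any FK routine for the UR5 will do.

    import numpy as np

    MAX_STEP = 0.05  # maximum change per joint per step; tune for your task

    def apply_action(current_joints, action, forward_kinematics, table_z=0.25):
        """Turn a relative action into a joint command, rejecting unsafe ones."""
        step = np.clip(action, -MAX_STEP, MAX_STEP)
        candidate = np.asarray(current_joints) + step

        # Validate the end-effector height *before* executing the motion.
        # forward_kinematics() is a placeholder returning the EE position (x, y, z).
        x, y, z = forward_kinematics(candidate)
        if z < table_z:
            return current_joints, -0.5  # reject the move and penalize the agent
        return candidate, 0.0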

@ZZWang21

ZZWang21 commented Dec 8, 2022

@cambel Hi, have you finished the openai_ros setup for the UR3 robot? I am doing a project on a UR5e and I am stuck coding the task_envs.

@cambel

cambel commented Dec 9, 2022

@ZZWang21 Hi, yes I have been using it for a while now. You can find a working demo here https://github.com/cambel/robot-learning-cl-dr
