
About 3D-LiDAR Sensor #150

Closed
namjiwon1023 opened this issue Jul 1, 2024 · 3 comments

Comments

@namjiwon1023

Thank you for your contribution!

I have two questions regarding the 3D LiDAR:

  1. In the environment code, the point-cloud processing keeps only data with a Z-axis value greater than -0.2. If I raise the 3D LiDAR mount from 0.23 to 0.35, do I need to change the -0.2 to -0.3?

  2. Regardless of whether the input covers 180° or 360°, the self.gaps angle limits should make it possible to output the 20 minimum values for the 180° directly in front. Is that correct?
    I ask because I changed the 3D LiDAR's range to 360°, but I only need the minimum values for the front 180°.
    While reviewing the callback code I found that the readings are processed using the angle between the robot's origin and each point of the cloud, so I want to confirm this.

Thank you.

@reiniscimurs
Owner

Hi

  1. You are correct. Generally we want to filter out the floor plane, so if we place the sensor higher we should lower the z-axis threshold accordingly. This is briefly described in the turtlebot branch: https://github.com/reiniscimurs/DRL-robot-navigation/blob/Noetic-Turtlebot/TD3/velodyne_env.py#L24-L26
  2. Yes, we check the angle to figure out which gap each reading belongs to. It should still work for a 360° FOV, but it would be good to double-check. Mind you, it would be rather suboptimal, as half of the points in your sensor reading would be passed through all of the checks even though they are never used. That is computationally expensive and could be optimized, for example by discarding rear points early (see the sketch after this list).
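Below is a minimal sketch of that filtering, assuming a Velodyne callback shaped like the one in velodyne_env.py. The names environment_dim, self.gaps, and self.velodyne_data follow that file; the early rear-point rejection (x <= 0) and the -0.3 floor threshold are assumptions for the raised, 360° sensor rather than code from the repository:

    import math
    import numpy as np
    from sensor_msgs import point_cloud2 as pc2

    def velodyne_callback(self, msg):
        # Reset the per-gap minimum distances (e.g. 20 gaps over the front 180°)
        self.velodyne_data = np.ones(self.environment_dim) * 10
        for x, y, z in pc2.read_points(msg, skip_nans=True, field_names=("x", "y", "z")):
            # 1) Drop floor returns: with the sensor mounted at 0.35 m the ground
            #    sits near z = -0.35, so a threshold around -0.3 is reasonable
            if z <= -0.3:
                continue
            # 2) Drop rear points early so a 360° cloud does not run through
            #    every gap check; the front half-space is simply x > 0
            if x <= 0:
                continue
            # Angle of the point relative to the robot's forward (x) axis
            beta = math.atan2(y, x)
            dist = math.sqrt(x ** 2 + y ** 2 + z ** 2)
            # 3) Keep the minimum range per angular gap
            for j in range(len(self.gaps)):
                if self.gaps[j][0] <= beta < self.gaps[j][1]:
                    self.velodyne_data[j] = min(self.velodyne_data[j], dist)
                    break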
@namjiwon1023
Author

I am working on implementing the orientation-related state processing as described in your paper 'You Are Not Alone: Towards Cleaning Robot Navigation in Shared Environments through Deep Reinforcement Learning.'

I am unsure if there are any errors in the code I wrote and would like to confirm this with you.

[screenshot of the author's code]

Thank you.

@reiniscimurs

@reiniscimurs
Owner

Hi,
These should be the functions to get the sin/cos/distance representation of the "polar coordinates". Hope this helps.

    # Assumes numpy is imported as np and Quaternion(w, x, y, z) comes from squaternion
    def get_sincos_dist(self, odom, goal_x, goal_y):
        # Current robot position from odometry data
        odom_x = odom.pose.pose.position.x
        odom_y = odom.pose.pose.position.y
        # Robot heading (yaw) from the odometry quaternion
        quaternion = Quaternion(
            odom.pose.pose.orientation.w,
            odom.pose.pose.orientation.x,
            odom.pose.pose.orientation.y,
            odom.pose.pose.orientation.z,
        )
        euler = quaternion.to_euler(degrees=False)
        angle = round(euler[2], 4)
        # Unit vector of the robot heading and vector from robot to goal
        pose_vector = [np.cos(angle), np.sin(angle)]
        goal_vector = [goal_x - odom_x, goal_y - odom_y]

        distance = np.linalg.norm(goal_vector)
        cos, sin = self.cossin(pose_vector, goal_vector)

        return distance, cos, sin, angle

    @staticmethod
    def cossin(vec1, vec2):
        # Normalize both vectors, then take the dot product for the cosine and the
        # 2-D (scalar) cross product for the sine of the angle between them
        vec1 = vec1 / np.linalg.norm(vec1)
        vec2 = vec2 / np.linalg.norm(vec2)
        cos = np.dot(vec1, vec2)
        sin = np.cross(vec1, vec2).item()
        return cos, sin
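As a quick sanity check of what cossin() returns (a made-up example, not from the repository): with the robot heading along +x (yaw = 0) and the goal directly to its left, the relative angle is +90°, so cos should be about 0 and sin about +1:

    import numpy as np

    pose_vector = np.array([1.0, 0.0])    # heading unit vector for yaw = 0
    goal_vector = np.array([0.0, 2.0])    # goal 2 m to the robot's left
    v1 = pose_vector / np.linalg.norm(pose_vector)
    v2 = goal_vector / np.linalg.norm(goal_vector)
    cos = np.dot(v1, v2)                  # ~0.0
    sin = v1[0] * v2[1] - v1[1] * v2[0]   # scalar 2-D cross product, ~+1.0 (goal to the left)
    print(cos, sin)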