How to achieve replicable results in SenseAct? #51

Open
fisherxue opened this issue May 15, 2019 · 3 comments

Comments

@fisherxue

Sorry for the simple question.
I'm trying to replicate my results across runs in SenseAct using PPO. I've set a constant seed to get a fixed random state and have verified that the state is the same across runs. However, the simulator still seems to randomly generate both the initial network and the targets/resets: the returns are very inconsistent across runs, and so are the observations.

In Appendix A.5 of Benchmarking Reinforcement Learning Algorithms on Real-World Robots, it is mentioned that "For the agent, randomization is used to initialize the network and sample actions. For the environment, randomization is used to generate targets and resets. By using the same randomization seed across multiple experiments in this set of experiments, we ensure that the environment generates the same sequence of targets and resets, the agent is initialized with the same network, and it generates the same or similar sequence of actions for a particular task. "

Could someone please clarify how this is done? I have tried setting a fixed seed for both numpy and TensorFlow, and I have also tried giving each TensorFlow operation a fixed operation-level seed.
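
Concretely, what I tried looks roughly like this (a minimal sketch; the seed value 0 is arbitrary, and the calls are the TF1-style API used elsewhere in this thread):

import random
import numpy as np
import tensorflow as tf

SEED = 0  # any fixed value

random.seed(SEED)         # Python's built-in RNG
np.random.seed(SEED)      # numpy global RNG
tf.set_random_seed(SEED)  # TensorFlow graph-level seed (TF1 API)

# operation-level seed on an individual op, in addition to the graph-level seed
noise = tf.random_normal([2, 3], seed=SEED)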

Thanks!

@gauthamvasan
Collaborator

Hi @fisherxue, we had a similar discussion in this issue. Please look at the entire discussion and try the steps listed there. If you're still running into issues, let me know.

@fisherxue
Author

What I've done: I saved the random state to a file and load it with pickle before the env is created. I then set the TensorFlow random seed and the Python random seed after sess.__enter__().

I am also passing the random state I load from file into the environment.
I'm fairly sure my hardware is not the bottleneck.

I am able to get relatively consistent results when I run two simulations at the same time. However, when I run one after the other, I get vastly different results. Any advice?

This is what I have:

# Imports: pickle/random/numpy/tensorflow are standard; tf_set_seeds and
# DoubleInvertedPendulumEnv come from the SenseAct examples, and U is
# baselines.common.tf_util (module paths may differ in your checkout).
import pickle
import random

import numpy as np
import tensorflow as tf
import baselines.common.tf_util as U

from senseact.utils import tf_set_seeds
from senseact.envs.sim_double_pendulum.sim_double_pendulum import DoubleInvertedPendulumEnv

# Use a fixed random state loaded from disk so every run starts identically.
with open('random.obj', 'rb') as f:
    rand_state = pickle.load(f)
np.random.set_state(rand_state)
tf_set_seeds(np.random.randint(1, 2**31 - 1))

# Create asynchronous simulation of the InvertedDoublePendulum-v2 mujoco environment,
# passing the same random state so targets/resets are generated deterministically.
env = DoubleInvertedPendulumEnv(agent_dt=0.005,
                                sensor_dt=[0.01, 0.0033333],
                                is_render=False,
                                random_state=rand_state
                               )
# Start environment processes
env.start()

# Create baselines PPO session and seed the TF and Python RNGs inside it.
sess = U.single_threaded_session()
sess.__enter__()
seed = np.random.randint(1, 2**31 - 1)
tf.set_random_seed(seed)
random.seed(seed)
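
For completeness, a minimal sketch of how a file like random.obj could be produced, assuming it just stores the output of np.random.get_state():

import pickle
import numpy as np

# capture the current numpy global RNG state and save it so later runs can reuse it
rand_state = np.random.get_state()
with open('random.obj', 'wb') as f:
    pickle.dump(rand_state, f)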

Thanks!

@fisherxue
Author

@gauthamvasan
I'm getting this warning when running on one machine:
WARNING:root:Agent has over-run its allocated dt, it has been 0.008300065994262695 since the last observation, 0.003300065994262695 more than allowed
However, on the other machine I only get that warning at the start of each iteration. I'm still failing to get tight repeatability curves on the double pendulum.
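
For reference, the numbers in that warning line up with the agent_dt=0.005 passed to the env constructor above; a minimal sketch of the arithmetic:

# the "allowed" time in the warning is the agent's allocated cycle time
agent_dt = 0.005                   # allocated dt passed to DoubleInvertedPendulumEnv
elapsed = 0.008300065994262695     # time since the last observation (from the warning)
print(elapsed - agent_dt)          # ~0.0033, the "more than allowed" amount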

Any tips?
Thanks!
