Hi,
In the unity3d_env_local.py example script, I set num_workers > 0. Is it possible to collect experience in parallel this way?
I know that in RLlib, an environment that inherits from gym.Env gets wrapped automatically: when such a gym environment is used, the RolloutWorker creates a vector environment (vector_env) on its own. Can a Unity3DEnv be created automatically in the same way?

In the Medium post "Reinforcement Learning with RLlib in the Unity Game Engine" by Sven Mika (Distributed Computing with Ray), it says:

In RLlib, a “worker” is a parallelized Ray process that runs in the background, collecting data from its own copy of the environment (the compiled game) and sending this data to a centralized “driver” process (basically our script). Each worker has its own copy of the neural network, which it uses to compute actions for each game frame.
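For a plain gym.Env, this is how I understand the parallelism described above (a minimal sketch only; the env name "my_cartpole" and the exact config values are just for illustration):

```python
import gym

import ray
from ray import tune
from ray.tune.registry import register_env

# A plain gym.Env: RLlib's RolloutWorkers wrap copies of it into a vector
# env on their own (num_envs_per_worker) and run several workers in
# parallel (num_workers), so no manual vectorization code is needed.
register_env("my_cartpole", lambda env_config: gym.make("CartPole-v0"))

ray.init()
tune.run(
    "PPO",
    config={
        "env": "my_cartpole",
        "num_workers": 2,          # two parallel rollout workers (Ray actors)
        "num_envs_per_worker": 4,  # each worker steps 4 env copies as a vector env
    },
    stop={"training_iteration": 1},
)
```

My question is whether a Unity3DEnv behaves the same way.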
We set num_workers=2, but the attached program (https://raw.githubusercontent.com/ray-project/ray/master/rllib/examples/unity3d_env_local.py) contains:
#For running in editor, force to use just one Worker (we only have
# one Unity running)!
"num_workers": args.num_workers if args.file_name else 0,
Do I need to manually open the two environments myself? In that case the two environments would have the same IP and port; will this cause a conflict? Do I need to write a vectorized environment function myself?
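For example, would something like this be required (a purely hypothetical sketch; BASE_PORT and the per-worker port offset are my own invention), or does Unity3DEnv already take care of giving each worker its own port?

```python
from ray.tune.registry import register_env
from ray.rllib.env.wrappers.unity3d_env import Unity3DEnv

BASE_PORT = 5005  # hypothetical base port, only to illustrate the question


def unity_env_creator(env_config):
    # env_config is RLlib's EnvContext: worker_index is 1, 2, ... on rollout
    # workers (and 0 on the driver), so each worker could get its own port.
    worker_index = getattr(env_config, "worker_index", 0)
    return Unity3DEnv(
        file_name=env_config.get("file_name"),
        port=BASE_PORT + worker_index,
    )


register_env("unity3d_per_worker_port", unity_env_creator)
```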