How to run unity3d_env.py in parallel

Hi,
In unity3d_env_local.py, I set num_workers > 0. Is it possible to collect experience in parallel this way?
I know that in RLlib the vector env wraps gym.Env: when a gym environment is used, the RolloutWorker automatically creates a vectorized environment. Can Unity3DEnv be vectorized automatically in the same way? In the Medium post "Reinforcement Learning with RLlib in the Unity Game Engine" (by Sven Mika, Distributed Computing with Ray), you write:

In RLlib, a “worker” is a parallelized Ray process that runs in the background, collecting data from its own copy of the environment (the compiled game) and sending this data to a centralized “driver” process (basically our script). Each worker has its own copy of the neural network, which it uses to compute actions for each game frame.

In the post, we set num_workers=2. But inside the attached example script https://raw.githubusercontent.com/ray-project/ray/master/rllib/examples/unity3d_env_local.py there is:

    # For running in editor, force to use just one Worker (we only have
    # one Unity running)!
    "num_workers": args.num_workers if args.file_name else 0,

Do I need to open the two environments manually? In that case, both environments would have the same IP and port; will that cause a conflict? Or do I need to write a vectorized environment function myself?
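
For reference, this is roughly how I start training; a minimal sketch based on the example script, with the game path as a placeholder:

    import ray
    from ray import tune
    from ray.rllib.env.wrappers.unity3d_env import Unity3DEnv

    ray.init()

    # Register the Unity3D env; with file_name set, each rollout worker
    # starts its own copy of the compiled game.
    tune.register_env(
        "unity3d",
        lambda config: Unity3DEnv(
            file_name="C:/path/to/3DBall.exe",  # placeholder path
            episode_horizon=1000,
        ),
    )

    # Multi-agent policy setup, as in the example script.
    policies, policy_mapping_fn = Unity3DEnv.get_policy_configs_for_game("3DBall")

    tune.run(
        "PPO",
        config={
            "env": "unity3d",
            "num_workers": 2,  # this is what I would like to run in parallel
            "multiagent": {
                "policies": policies,
                "policy_mapping_fn": policy_mapping_fn,
            },
            "framework": "torch",
        },
    )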

Hey @robot-xyh , yes, it's possible (and you should do this if you really want to learn a harder env, like the ones with image spaces or requiring an LSTM). You should compile your game in the Unity editor, then follow the instructions in my blog post here:

Where I talk about the soccer example (not the simpler 3DBall one). :slight_smile:

Hello @sven1977, I ran the Unity environment in parallel according to your blog post. The 3DBall and Soccer environments work very well. I noticed that during environment initialization, I can call

    UnityEnvironment(
        file_name=file_name,
        worker_id=worker_id_,
        base_port=port_,
        seed=seed,
        no_graphics=no_graphics,
        timeout_wait=timeout_wait,
    )

to create the environment copies in parallel, giving each one its own worker_id and base_port.
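
For example (a sketch; the game path and the number of copies are placeholders):

    from mlagents_envs.environment import UnityEnvironment

    # Two copies of the compiled game side by side: each copy gets a
    # unique worker_id, which ML-Agents maps to its own port
    # (base_port + worker_id), so the instances do not clash.
    envs = [
        UnityEnvironment(
            file_name="C:/path/to/3DBall.exe",  # placeholder path
            worker_id=i,
            base_port=5005,
            seed=i,
            no_graphics=True,
            timeout_wait=300,
        )
        for i in range(2)
    ]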

My main research work now is to use AirSim to simulate cars and drones. I tested your PR (UE4 AirSim Car adapter and example script), which gave me a lot of inspiration. As you said, it is best to use parallel environments when using images as observations.
It seems that AirSim does not directly provide a way to run in parallel. I need to set the IP and port in the settings file and, when launching an instance, point it at a different settings file, like this:
AirSim.exe --settings 'C:\path\to\settings.json'
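
So to start several instances, I do something like the following (a sketch; the exe name and the "ApiServerPort" settings key are my assumptions about AirSim's settings format):

    import json
    import os
    import subprocess
    import tempfile

    # Write one settings.json per AirSim instance with a distinct API
    # port, then launch each instance with its own settings file.
    procs = []
    for i in range(2):
        settings = {
            "SettingsVersion": 1.2,
            "SimMode": "Multirotor",
            "ApiServerPort": 41451 + i,  # assumed key: unique RPC port per instance
        }
        path = os.path.join(tempfile.gettempdir(), f"airsim_settings_{i}.json")
        with open(path, "w") as f:
            json.dump(settings, f)
        procs.append(subprocess.Popen(["AirSim.exe", "--settings", path]))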
Therefore, I tried to vectorize the environment at initialization time (see Discussion 2294: Custom vector env example and fix). With ray==1.3, the parallel environment could not run well; I described the error in the comments of that discussion.
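
What I am after is roughly this pattern, where each RLlib rollout worker connects to its own AirSim instance (a sketch; the env class is hypothetical and the AirSim client wiring is omitted):

    import gym
    import numpy as np
    from ray.rllib.env.env_context import EnvContext

    # Hypothetical per-worker env: derive a unique AirSim port from the
    # worker_index that RLlib passes in via EnvContext.
    class AirSimDroneEnv(gym.Env):
        def __init__(self, config: EnvContext):
            # worker_index is 0 on the driver and 1..num_workers on the
            # remote rollout workers, so every worker gets its own port.
            self.port = 41451 + config.worker_index
            self.observation_space = gym.spaces.Box(
                -np.inf, np.inf, (84, 84, 1), np.float32)
            self.action_space = gym.spaces.Box(-1.0, 1.0, (3,), np.float32)
            # ...connect an AirSim client to self.port here...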

When I upgraded to ray==1.4, I got the following error.

  [Previous line repeated 1 more time]
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\util\iter.py", line 471, in base_iterator
    yield ray.get(futures, timeout=timeout)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\_private\client_mode_hook.py", line 62, in wrapper
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\worker.py", line 1494, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(AttributeError): ray::RolloutWorker.par_iter_next()::Exiting (pid=3176, ip=10.30.21.53)
  File "python\ray\_raylet.pyx", line 501, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 451, in ray._raylet.execute_task.function_executor
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\_private\function_manager.py", line 563, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\util\iter.py", line 1151, in par_iter_next
    return next(self.local_it)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 333, in gen_rollouts
    yield self.sample()
  File "C:\ProgramData\Anaconda3\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 718, in sample
    elif self.input_reader is None:
AttributeError: 'RolloutWorker' object has no attribute 'input_reader'
The trial PPO_ue4_airsim_drone_8400b_00000 errored with parameters=
{'env': 'ue4_airsim_drone', 'env_config': {'episode_horizon': 100}, 'num_workers': 2,
 'lr': 0.0003, 'lambda': 0.95, 'gamma': 0.99, 'sgd_minibatch_size': 256,
 'train_batch_size': 512, 'num_gpus': 0, 'num_sgd_iter': 20,
 'rollout_fragment_length': 200, 'clip_param': 0.2,
 'multiagent': {'policies': {'Drone': (None, Box(-inf, inf, (84, 84, 1), float32), Box(-1.0, 1.0, (3,), float32), {})},
  'policy_mapping_fn': <function UnrealEngine4AirSimDroneEnv.get_policy_configs_for_game.<locals>.policy_mapping_fn at 0x000002119CB95280>},
 'model': {'conv_filters': [[84, [4, 4], 4], [42, [4, 4], 4], [21, [5, 5], 2]], 'conv_activation': 'relu'},
 'framework': 'torch', 'no_done_at_end': True}.
Error file: C:\Users\pc\ray_results\PPO\PPO_ue4_airsim_drone_8400b_00000_0_2021-06-13_21-08-59\error.txt

I also tried to use ExternalMultiAgentEnv for parallelization, but that had no effect either.
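
For completeness, this is the shape of what I tried with ExternalMultiAgentEnv (a simplified sketch; the zero observations and rewards stand in for the real AirSim plumbing):

    import gym
    import numpy as np
    from ray.rllib.env.external_multi_agent_env import ExternalMultiAgentEnv

    class AirSimExternalEnv(ExternalMultiAgentEnv):
        def __init__(self):
            obs_space = gym.spaces.Box(-np.inf, np.inf, (84, 84, 1), np.float32)
            act_space = gym.spaces.Box(-1.0, 1.0, (3,), np.float32)
            super().__init__(act_space, obs_space)

        def run(self):
            # Drive episodes ourselves and feed RLlib via the external
            # env API instead of letting it step a gym.Env.
            while True:
                episode_id = self.start_episode()
                obs = {"Drone": np.zeros((84, 84, 1), np.float32)}
                for _ in range(100):
                    self.get_action(episode_id, obs)
                    obs = {"Drone": np.zeros((84, 84, 1), np.float32)}
                    self.log_returns(episode_id, {"Drone": 0.0})
                self.end_episode(episode_id, obs)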

I have been stuck on this problem for a long time, and I would be very grateful for any ideas.