As I’m running my simulation within a game engine, I need to retrieve the action and observation spaces from the game at initialization.
In the single-agent case, it’s easy: connect to the game, ask for the space information, and define `env.observation_space` and `env.action_space` from it. If the policies don’t have any spaces defined, the ones from the env will be used.
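The single-agent flow can be sketched like this. `GameClient` and its methods are hypothetical stand-ins for my real connection to the game engine, and plain Python values stand in for gym spaces to keep the sketch dependency-free:

```python
class GameClient:
    """Hypothetical stand-in for the real game-engine connection."""

    def connect(self):
        pass  # the real client would open a socket to the running game

    def query_spaces(self):
        # The real client would ask the game; values here are hard-coded
        # purely for illustration.
        return {"obs_shape": (4,), "num_actions": 2}


class GameEnv:
    """Minimal env wrapper: spaces are fetched from the game at init time."""

    def __init__(self):
        client = GameClient()
        client.connect()
        spaces = client.query_spaces()
        # In a real RLlib env these would be gym.spaces.Box / Discrete.
        self.observation_space = spaces["obs_shape"]
        self.action_space = spaces["num_actions"]


env = GameEnv()
print(env.observation_space, env.action_space)
```

Since the env defines both attributes, RLlib can pick them up without any spaces being declared on the policy side.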
In the multi-agent case I’m in trouble: I can’t define the spaces within the environment because there are multiple action/observation spaces, so I have to define them in the `PolicySpec`s.
But I need to define the policies before starting the game, which is not possible in my case since I’m getting the spaces from the game.
I think there is a hacky way to make it work: give each env a callback that is called after initialization and creates the policies if the trainer doesn’t have any yet.
But when using Tune I don’t have any reference to the trainer until the first `on_train_result` callback, so that’s not possible.
Another way would be to run a fake environment (so basically launch the game once before starting the training) and retrieve the essential data up front. This might lead to annoying issues with remote execution in the future.
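The “probe once” workaround would look roughly like this. `launch_game` and the space descriptions are hypothetical stand-ins for the real game handshake, and plain tuples stand in for RLlib’s `PolicySpec(observation_space=..., action_space=...)`:

```python
def launch_game():
    # Hypothetical: the real call would boot the game engine once and ask it
    # which agent types exist and what their spaces are.
    return {
        "fighter": {"obs_shape": (8,), "num_actions": 4},
        "scout": {"obs_shape": (6,), "num_actions": 3},
    }


def build_policy_specs(agent_spaces):
    """Turn per-agent space info into a policies dict for the trainer config.

    In real RLlib code the values would be PolicySpec objects; the 4-tuple
    (policy_class, obs_space, action_space, config) mirrors that shape.
    """
    return {
        agent_id: (None, spaces["obs_shape"], spaces["num_actions"], {})
        for agent_id, spaces in agent_spaces.items()
    }


# Probe the game once, then build the multi-agent config from the answer.
agent_spaces = launch_game()
policies = build_policy_specs(agent_spaces)
policy_mapping_fn = lambda agent_id, *args, **kwargs: agent_id
print(policies)
```

The downside mentioned above still applies: the probe has to run wherever the config is built, which gets awkward once workers are remote.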
I saw some parts of the API for the agent_id → agent_space mapping.
Here it uses a Dict space to map agent_id to sub-spaces: ray/test_nested_observation_spaces.py at 740def0a131a152a9408b22eaede28c62a848e3b · ray-project/ray · GitHub
Here there is a mention of multi-agent space mapping: ray/multi_agent_env.py at 740def0a131a152a9408b22eaede28c62a848e3b · ray-project/ray · GitHub. `_check_if_space_maps_agent_id_to_sub_space()` and `_spaces_in_preferred_format` seem useful for my case, but I cannot find any working examples, and most of it is marked as ExperimentalAPI.
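If I understand those links correctly, the convention is just that the env-level space is a Dict keyed by agent id. A rough, dependency-free illustration of the check (a plain dict stands in for `gym.spaces.Dict`, and this function only mimics the idea behind `_check_if_space_maps_agent_id_to_sub_space()`, not its actual implementation):

```python
def space_maps_agent_id_to_sub_space(space, agent_ids):
    # The space is "in preferred format" when its top-level keys are exactly
    # the env's agent ids, each mapping to that agent's own sub-space.
    return isinstance(space, dict) and set(space.keys()) == set(agent_ids)


agent_ids = {"fighter", "scout"}
obs_space = {
    "fighter": {"obs_shape": (8,)},
    "scout": {"obs_shape": (6,)},
}
print(space_maps_agent_id_to_sub_space(obs_space, agent_ids))  # True
```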
Does anyone see a “cleaner” way to do it?