RLlib external environment example throws ValueError

How severely does this issue affect your experience of using Ray?

  • Medium: It contributes to significant difficulty in completing my task, but I can work around it.

Hi! I am currently working on a project with the Gazebo Simulator and want to use RLlib to handle the reinforcement learning part.

I was looking into external environments and how I could create a wrapper for Gazebo. However, the example mentioned in the documentation (Environments — Ray 2.2.0), which runs the CartPole environment as an external environment, does not work and throws an error.

I can start the cartpole_server.py just fine, but the cartpole_client.py throws the following error:

Env checking isn't implemented for RemoteBaseEnvs, ExternalMultiAgentEnv, ExternalEnvs or environments that are Ray actors.
Traceback (most recent call last):
  File "/home/ydenker/Repositories/pushing-robot/RLlib/cartpole_example_client.py", line 81, in <module>
    client = PolicyClient(
  File "/home/ydenker/.local/lib/python3.10/site-packages/ray/rllib/env/policy_client.py", line 79, in __init__
    self._setup_local_rollout_worker(update_interval)
  File "/home/ydenker/.local/lib/python3.10/site-packages/ray/rllib/env/policy_client.py", line 261, in _setup_local_rollout_worker
    (self.rollout_worker, self.inference_thread) = _create_embedded_rollout_worker(
  File "/home/ydenker/.local/lib/python3.10/site-packages/ray/rllib/env/policy_client.py", line 405, in _create_embedded_rollout_worker
    rollout_worker = RolloutWorker(**kwargs)
  File "/home/ydenker/.local/lib/python3.10/site-packages/ray/rllib/evaluation/rollout_worker.py", line 826, in __init__
    self.sampler = SyncSampler(
  File "/home/ydenker/.local/lib/python3.10/site-packages/ray/rllib/evaluation/sampler.py", line 246, in __init__
    self._env_runner_obj = EnvRunnerV2(
  File "/home/ydenker/.local/lib/python3.10/site-packages/ray/rllib/evaluation/env_runner_v2.py", line 236, in __init__
    raise ValueError(
ValueError: Policies using the new Connector API do not support ExternalEnv.
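
For reference, the call that fails (line 81 of the client) is the PolicyClient construction; paraphrased from the example script:

client = PolicyClient(
    f"http://{args.server}:{args.port}",
    # inference_mode="local" (the example's default) builds the embedded
    # rollout worker seen in _setup_local_rollout_worker in the traceback above.
    inference_mode=args.inference_mode,
)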

Is there a quick fix for this? If I can't get the example to work, how am I supposed to create my own adapter class that works? Thanks for your help in advance.
Here are some more details about my setup:
Ray version: 3.0.0.dev0 (commit: a830694359a272703aace4ac6f7e5f98b1d5d4a9)
OS: Ubuntu 22.04.1 LTS
Python version: Python 3.10.6

I have not edited the cartpole_client.py file in any way. It is an exact copy of:

Please Help :wink:
Yannick

Hi @YDenker, and welcome to the forum. I am running this on master and the example runs fine. Can you switch to master and check if you can reproduce?

Hi @Lars_Simon_Zehnder, thanks, happy to be here.
I currently have Ray with RLlib installed via the following command:
pip install https://s3-us-west-2.amazonaws.com/ray-wheels/master/a830694359a272703aace4ac6f7e5f98b1d5d4a9/ray-3.0.0.dev0-cp310-cp310-manylinux2014_x86_64.whl
If I am not mistaken, this should be directly from the master branch, or am I misunderstanding something?
I tried to run it again today and got the same error.

Do you just have the repo cloned on your device and it works?
I can try that as well.
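
To double-check which build Python actually imports on my machine, I can print the commit hash Ray was built from (assuming ray.__commit__ is set on these wheels):

python -c "import ray; print(ray.__version__, ray.__commit__)"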

@YDenker, this is master. Can you reinstall with the option --no-cache-dir and see if this solves the problem?

Also, install the additional packages for RLlib via:

python -m pip install -U --no-cache-dir "ray[rllib] @ https://s3-us-west-2.amazonaws.com/ray-wheels/master/a830694359a272703aace4ac6f7e5f98b1d5d4a9/ray-3.0.0.dev0-cp310-cp310-manylinux2014_x86_64.whl"

@Lars_Simon_Zehnder I uninstalled and reinstalled using your command, and the error persists.
I don't really know what I am doing differently, if I am doing anything differently at all.
The error seems to revolve around "Env checking":

'Env checking isn't implemented for RemoteBaseEnvs, ExternalMultiAgentEnv, ExternalEnvs or environments that are Ray actors.'

What exactly is env checking, and how can it not be supported if it works on your end?

The resulting error hints at the new Connector API not supporting what I'm planning to do:
'ValueError: Policies using the new Connector API do not support ExternalEnv.'

Can I maybe circumvent the issue by using the old connector API, assuming there is such a thing?

Or is the error not related to those at all and I'm just missing something crucial?

I'm not sure how to proceed from here. Any help is much appreciated.

Yannick

Hi @YDenker, it appears that in the newer versions the Connector API is enabled by default. You can switch it off in your config:

import gymnasium as gym
from ray.tune.registry import get_trainable_cls

# `args`, `MyCallbacks`, and `_input` are defined in the example server script.
config = (
    get_trainable_cls(args.run).get_default_config()
    # Indicate that the Algorithm we setup here doesn't need an actual env.
    # Allow spaces to be determined by user (see below).
    .environment(
        env=None,
        # TODO: (sven) make these settings unnecessary and get the information
        #  about the env spaces from the client.
        observation_space=gym.spaces.Box(float("-inf"), float("inf"), (4,)),
        action_space=gym.spaces.Discrete(2),
    )
    # DL framework to use.
    .framework(args.framework)
    # Create a "chatty" client/server or not.
    .callbacks(MyCallbacks if args.callbacks_verbose else None)
    # Use the `PolicyServerInput` to generate experiences.
    .offline_data(input_=_input)
    # Use n worker processes to listen on different ports.
    .rollouts(
        num_rollout_workers=args.num_workers,
        # Disable the new Connector API, which does not support ExternalEnv.
        enable_connectors=False,
    )
    # Disable OPE, since the rollouts are coming from online clients.
    .evaluation(off_policy_estimation_methods={})
    # Set to INFO so we'll see the server's actual address:port.
    .debugging(log_level="INFO")
)
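
For context, `_input` above is the PolicyServerInput factory from the example server script; roughly (paraphrased, so treat SERVER_ADDRESS and the port arithmetic as approximate):

from ray.rllib.env.policy_server_input import PolicyServerInput

def _input(ioctx):
    # Start a PolicyServerInput on each rollout worker (or on the local
    # worker if num_rollout_workers == 0); other workers provide no input.
    if ioctx.worker_index > 0 or ioctx.worker.num_workers == 0:
        return PolicyServerInput(
            ioctx,
            SERVER_ADDRESS,
            args.port + ioctx.worker_index - (1 if ioctx.worker_index > 0 else 0),
        )
    else:
        return None

With enable_connectors=False, the server falls back to the pre-connector sampling path, which still supports ExternalEnv.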

Thanks for the tip, I will try that. I am currently not using the example at all; instead of using a TCP connection, I created my own wrapper class that inherits from gymnasium's Env.
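
Roughly, such a wrapper looks like this (a sketch; the self.sim calls, the spaces, and the reward logic are placeholders for the actual Gazebo interface):

import gymnasium as gym
import numpy as np

class GazeboEnv(gym.Env):
    # Gymnasium wrapper around a Gazebo simulation. `sim` is a placeholder
    # handle for whatever transport (e.g. ROS topics) talks to Gazebo.
    def __init__(self, sim):
        self.sim = sim
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, (4,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        obs = self.sim.reset()  # placeholder: reset the simulation, get first observation
        return np.asarray(obs, dtype=np.float32), {}

    def step(self, action):
        # placeholder: apply the action, advance the simulation, read back the result
        obs, reward, done = self.sim.step(action)
        return np.asarray(obs, dtype=np.float32), reward, done, False, {}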