Getting errors while running the documentation sample code

Hi,
I was trying to run this code on my PC, but I ran into some problems.

import ray
from ray import train, tune
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()

config = PPOConfig().training(lr=tune.grid_search([0.01, 0.001, 0.0001]))

tuner = tune.Tuner(
    "PPO",
    run_config=train.RunConfig(
        stop={"episode_reward_mean": 150},
    ),
    param_space=config,
)

tuner.fit()

After running this code, I received an error along with a message telling me to check the error.txt file, which I did. Here is the content of error.txt:

Failure # 1 (occurred at 2024-04-22_12-37-26)
The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=13436, ip=127.0.0.1, actor_id=f3849e3512be3dc081c920c601000000, repr=PPO)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 229, in _setup
    validate=config.validate_workers_after_construction,
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 593, in add_workers
    raise result.get()
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\utils\actor_manager.py", line 481, in __fetch_result
    result = ray.get(r)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\_private\auto_init_hook.py", line 24, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\_private\worker.py", line 2549, in get
    raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=5940, ip=127.0.0.1, actor_id=ef81ae01ff5edfb2444ae6fd01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x00000181C8464F48>)
  File "python\ray\_raylet.pyx", line 1616, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 1556, in ray._raylet.execute_task.function_executor
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\_private\function_manager.py", line 726, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 473, in __init__
    default_policy_class=self.default_policy_class,
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\algorithms\algorithm_config.py", line 2974, in get_multi_agent_setup
    "observation_space not provided in PolicySpec for "
ValueError: observation_space not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no observation_space specified in config!

During handling of the above exception, another exception occurred:

ray::PPO.__init__() (pid=13436, ip=127.0.0.1, actor_id=f3849e3512be3dc081c920c601000000, repr=PPO)
  File "python\ray\_raylet.pyx", line 1610, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 1704, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 1616, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 1556, in ray._raylet.execute_task.function_executor
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\_private\function_manager.py", line 726, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\algorithms\algorithm.py", line 520, in __init__
    **kwargs,
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\tune\trainable\trainable.py", line 185, in __init__
    self.setup(copy.deepcopy(self.config))
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\util\tracing\tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\algorithms\algorithm.py", line 646, in setup
    logdir=self.logdir,
  File "C:\Users\Hamid\anaconda3\envs\uavrl\lib\site-packages\ray\rllib\evaluation\worker_set.py", line 179, in __init__
    raise e.args[0].args[2]
ValueError: observation_space not provided in PolicySpec for default_policy and env does not have an observation space OR no spaces received from other workers' env(s) OR no observation_space specified in config!
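
Reading the final ValueError, my guess is that the config never specifies an environment, so RLlib cannot infer an observation space. Do I need to set one explicitly? Something like the sketch below, where "CartPole-v1" is just a placeholder I picked, not necessarily the env the docs intended:

import ray
from ray import train, tune
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()

# Setting an env explicitly should let RLlib infer the observation/action spaces.
# "CartPole-v1" is a placeholder; I would substitute my actual environment here.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .training(lr=tune.grid_search([0.01, 0.001, 0.0001]))
)

tuner = tune.Tuner(
    "PPO",
    run_config=train.RunConfig(stop={"episode_reward_mean": 150}),
    param_space=config,
)

tuner.fit()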

At the beginning of training, my resource usage looked like this:

Logical resource usage: 9/12 CPUs, 0/1 GPUs