Actor died error when running trials on CPU or with a single GPU

I have a multi-agent custom environment setup. When I run training with CPU only or with "num_gpus = 1", an actor died error is raised after the first trial completes, but when num_gpus is set to 2 the error does not occur. Log files for the trials can be accessed here

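For reference, the run is launched roughly like the sketch below. The algorithm, hyperparameters, and the env registration are illustrative placeholders, not my exact setup; the only setting that matters for the issue is num_gpus.

```python
import ray
from ray import tune
from ray.tune.registry import register_env

# Hypothetical registration -- my actual HierarchicalGraphColorEnv class and
# its config are not shown here.
# from my_project.envs import HierarchicalGraphColorEnv
# register_env("HierarchicalGraphColorEnv", lambda cfg: HierarchicalGraphColorEnv(cfg))

ray.init()

tune.run(
    "PPO",                                 # placeholder algorithm
    name="experiment",
    config={
        "env": "HierarchicalGraphColorEnv",
        "framework": "torch",
        "num_workers": 2,
        # Fails after the first trial with num_gpus=0 or num_gpus=1,
        # but runs fine with num_gpus=2.
        "num_gpus": 1,
    },
    num_samples=4,                         # several trials, so the second one hits the error
    stop={"training_iteration": 10},
)
```

The traceback printed by the trial runner is: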
2021-10-23 16:31:03,946	ERROR trial_runner.py:810 -- Trial experiment_HierarchicalGraphColorEnv_67e7a_00000: Error processing event.
Traceback (most recent call last):
  File "/home/cs20mtech12003/anaconda3/envs/rllib_env/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 776, in _process_trial
    results = self.trial_executor.fetch_result(trial)
  File "/home/cs20mtech12003/anaconda3/envs/rllib_env/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py", line 759, in fetch_result
    result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
  File "/home/cs20mtech12003/anaconda3/envs/rllib_env/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 89, in wrapper
    return func(*args, **kwargs)
  File "/home/cs20mtech12003/anaconda3/envs/rllib_env/lib/python3.7/site-packages/ray/worker.py", line 1622, in get
    raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
2021-10-23 16:31:03,947	WARNING worker.py:1214 -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff047e3d2cf236eb02d7f84a2f01000000 Worker ID: 4591979c296a4ee32e96c08c9e6ebe6535df6da89f8a05bb821f9ef8 Node ID: 177c4c41b8fbbc5ea16f472d6214df12a84c484a704bfecaefacc64c Worker IP address: 192.168.50.100 Worker port: 42657 Worker PID: 2840624