How to use my own Docker image to run a local on-premise cluster?

The CUDA version in the official Ray Docker image is 10.1, but my program needs 10.2 to run correctly, so I tried to build and use my own image. That attempt failed.
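My understanding is that the image mainly needs a compatible Python and the same Ray version (1.8.0) installed. I built it roughly like the sketch below (the CUDA base tag and install commands here are illustrative assumptions, not my exact Dockerfile):

```dockerfile
# Illustrative sketch: a CUDA 10.2 base instead of the CUDA 10.1 in the official image.
FROM nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04

# Python + pip (Ubuntu 18.04 ships Python 3.6, matching the py36 env on the head node).
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Install the same Ray version that the head node runs.
RUN pip3 install "ray[default]==1.8.0"
```

When I launch the cluster with this image, the monitor log shows: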

==> /tmp/ray/session_latest/logs/monitor.log <==
2021-11-24 19:19:45,697	INFO autoscaler.py:699 -- StandardAutoscaler: Queue 2 new nodes for launch
2021-11-24 19:19:45,698	INFO node_launcher.py:78 -- NodeLauncher0: Got 2 nodes to launch.
2021-11-24 19:19:45,699	ERROR node_launcher.py:72 -- Launch failed
Traceback (most recent call last):
  File "/root/miniconda3/envs/py36/lib/python3.6/site-packages/ray/autoscaler/_private/node_launcher.py", line 70, in run
    self._launch_node(config, count, node_type)
  File "/root/miniconda3/envs/py36/lib/python3.6/site-packages/ray/autoscaler/_private/node_launcher.py", line 40, in _launch_node
    launch_config = copy.deepcopy(config["worker_nodes"])
KeyError: 'worker_nodes'

ray version: 1.8.0
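For context, the cluster config I pass to `ray up` looks roughly like this (a sketch: the IPs and image name are placeholders, and the field names follow the Ray 1.8 on-premise example config):

```yaml
cluster_name: onprem-gpu

provider:
  type: local
  head_ip: 192.168.0.10        # placeholder
  worker_ips:                  # placeholders
    - 192.168.0.11
    - 192.168.0.12

auth:
  ssh_user: ubuntu

docker:
  image: my-registry/ray-cuda102:latest   # my custom image instead of rayproject/ray:1.8.0-gpu
  container_name: ray_container
  run_options:
    - --gpus all

head_start_ray_commands:
  - ray stop
  - ray start --head --port=6379 --autoscaling-config=~/ray_bootstrap_config.yaml

worker_start_ray_commands:
  - ray stop
  - ray start --address=$RAY_HEAD_IP:6379
```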
What features does the image need so that it can be used as the cluster image?