Ray indicates that the requested resources are insufficient

The environment:
Ray 2.2.0
2 nodes with 8 CPUs and 1 GPU each.

The Slurm error log is as follows. What could be the reason?
2022-12-19 15:37:22,116	WARNING insufficient_resources_manager.py:128 -- Ignore this message if the cluster is autoscaling. You asked for 3.0 cpu and 2.0 gpu per trial, but the cluster only has 8.0 cpu and 1.0 gpu. Stop the tuning job and adjust the resources requested per trial (possibly via `resources_per_trial` or via `num_workers` for rllib) and/or add more resources to your Ray runtime.
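
For context, the 3.0 cpu / 2.0 gpu per trial appears to come from the ScalingConfig used below: 1 CPU for the coordinating trainable plus 1 CPU and 1 GPU per worker. This is only a rough sketch of that arithmetic, assuming Ray AIR's default of 1 CPU per worker:

# Sketch of the per-trial resource request implied by ScalingConfig(num_workers=2, use_gpu=True).
# Assumption: 1 CPU for the trainable that coordinates training, plus 1 CPU
# (and 1 GPU when use_gpu=True) for each training worker.
num_workers = 2
use_gpu = True
cpus_per_trial = 1 + num_workers * 1                   # 1 + 2 = 3.0 cpu
gpus_per_trial = num_workers * (1 if use_gpu else 0)   # 2.0 gpu
print(cpus_per_trial, gpus_per_trial)                  # matches the numbers in the warning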

Slurm script contents:

#!/bin/bash
#SBATCH --job-name=ray
#SBATCH --partition=GPU
#SBATCH --nodes=2
#SBATCH --tasks-per-node=1
#SBATCH --gpus-per-task=1
#SBATCH --cpus-per-task=4
#SBATCH --output=job.%j.out
#SBATCH --error=job.%j.err
source /opt/Miniconda3/bin/activate xgboost

set -x

# Getting the node names
nodes=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
nodes_array=($nodes)

head_node=${nodes_array[0]}
head_node_ip=$(srun --nodes=1 --ntasks=1 -w "$head_node" hostname --ip-address)

# if the head node IP contains a space (IPv6 and IPv4 reported together),
# keep only the IPv4 address.
if [[ "$head_node_ip" == *" "* ]]; then
    IFS=' ' read -ra ADDR <<<"$head_node_ip"
    if [[ ${#ADDR[0]} -gt 16 ]]; then
        head_node_ip=${ADDR[1]}
    else
        head_node_ip=${ADDR[0]}
    fi
    echo "IPV6 address detected. We split the IPV4 address as $head_node_ip"
fi
# head_address_end

# head_ray_start
port=6379
ip_head=$head_node_ip:$port
export ip_head
echo "IP Head: $ip_head"

echo "Starting HEAD at $head_node"
srun --nodes=1 --ntasks=1 -w "$head_node" \
    ray start --head --node-ip-address="$head_node_ip" --port=$port --block &
# head_ray_end

# worker_ray_start
sleep 10

# number of nodes other than the head node
worker_num=$((SLURM_JOB_NUM_NODES - 1))

for ((i = 1; i <= worker_num; i++)); do
    node_i=${nodes_array[$i]}
    echo "Starting WORKER $i at $node_i"
    srun --nodes=1 --ntasks=1 -w "$node_i" \
        ray start --address "$ip_head" --block &
    sleep 5
done
# worker_ray_end

python xgboost-ray.py
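
For reference, the driver script could also be pointed at the running cluster explicitly rather than attaching implicitly. This is only a sketch, assuming the ip_head variable exported by the batch script above:

import os
import ray

# Sketch: connect the driver to the cluster started by the Slurm script;
# ip_head (e.g. "10.0.0.1:6379") is exported earlier in the batch script.
ray.init(address=os.environ["ip_head"])
print(ray.cluster_resources())  # expected to show 16 CPU / 2 GPU once both nodes have joined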

Python file contents (xgboost-ray.py):

import ray
from ray.train.xgboost import XGBoostTrainer
from ray.air.config import ScalingConfig

# Load data.
dataset = ray.data.read_csv("/172.16.0.43/NFSSharedFiles/xg_case/example-data/breast_cancer.csv")

# Split data into train and validation.
train_dataset, valid_dataset = dataset.train_test_split(test_size=0.3)

trainer = XGBoostTrainer(
    scaling_config=ScalingConfig(
        # Number of workers to use for data parallelism.
        num_workers=2,
        # Whether to use GPU acceleration.
        use_gpu=True,
    ),
    params={
        # XGBoost specific params
        # "objective": "binary:logistic",
        "tree_method": "gpu_hist",  # uncomment this to use GPU for training
        "eval_metric": ["logloss", "error"],
    },
    label_column="target",
    num_boost_round=20,
    datasets={"train": train_dataset, "valid": valid_dataset},
)
result = trainer.fit()
print(result.metrics)
Output of ray status:
======== Autoscaler status: 2022-12-19 15:36:14.930115 ========
Node status
---------------------------------------------------------------
Healthy:
 1 node_9942d42c4010be17933e994abb232b175b4e3698798cf2a982a1f65f
 1 node_98dd6584393de9fb935f9b2f38b565bd086bc35f46f77c363c42ee64
Pending:
 (no pending nodes)
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Usage:
 0.0/16.0 CPU
 0.0/2.0 GPU
 0.0/2.0 accelerator_type:V100S
 0.00/39.893 GiB memory
 0.00/18.409 GiB object_store_memory

Demands:
 (no resource demands)