Controlling number of worker processes in an on-premise cluster

I have four Linux machines in a local cluster with the configuration described below.

cluster_name: default
provider:
    type: local
    head_ip: boxcox
    worker_ips: [london, paris, milan]
auth:
    ssh_user: ubuntu
    ssh_private_key: ~/.ssh/id_rsa
min_workers: 3
max_workers: 3

I have defined a task to run in parallel that requires a specific device attached to the worker. Each worker machine is equipped with only one such device, and only one task is allowed to run on a worker at a time.

Under these conditions, I am looking for a way to set up my local cluster to limit the number of worker processes on each node.

I want to put 0 workers on the head node, since the device is not installed there, and to limit each of the three worker nodes to 1 worker, to make sure only one task uses a device at a time.
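In case it clarifies what I am after, here is the direction I was considering, sketched under the assumption that this is a Ray-style on-premise cluster config and that `ray start` accepts a `--resources` flag (please correct me if either assumption is wrong): tag each worker node with a custom `device` resource so the scheduler itself caps concurrency per node.

```yaml
# Hypothetical sketch, not something I have verified:
# start each worker with a custom "device" resource of capacity 1,
# so at most one device-bound task can be scheduled per worker node.
# The head node is started without this resource, so it gets 0 such tasks.
worker_start_ray_commands:
    - ray stop
    - ray start --address=$RAY_HEAD_IP:6379 --resources='{"device": 1}'
```

The task would then request that resource when it is declared, e.g. `@ray.remote(resources={"device": 1})`, so only nodes advertising a free `device` unit are eligible to run it. Is this the intended mechanism, or is there a more direct way to limit workers per node?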

I have read through the documentation but could not find a way so far. I want to confirm whether there really is no way to control the number of workers on a node, so that I can move on and try other approaches.

Any kind of comments would be appreciated.