Ray 2.7.1, Ray Train and AWS Neuron cores


I’m trying to get Ray Train to use AWS Neuron cores. I can see how it should work with @ray.remote, but I have not been able to correctly set the accelerator type via ray.train.TorchTrainer. The logs show that the Neuron cores are correctly detected, but setting trainer_resources in the scaling_config seems to have no effect. Does anyone know how to make this work? Thanks!

neuron_resources = {f"accelerator_type:{AWS_NEURON_CORE}": 2000}
scaling_config = ScalingConfig(trainer_resources=neuron_resources, num_workers=1, use_gpu=use_gpu)
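For context, here is a trimmed-down version of my full setup. train_func is just a placeholder for my real training loop, and I am assuming AWS_NEURON_CORE is the constant from ray.util.accelerators; this is a sketch of what I am running, not a verified working configuration:

```python
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from ray.util.accelerators import AWS_NEURON_CORE  # assumed source of the constant

def train_func():
    # Placeholder for the actual per-worker training loop.
    pass

# My attempt: request Neuron cores for the trainer via trainer_resources.
# This is the part that seems to have no effect.
neuron_resources = {f"accelerator_type:{AWS_NEURON_CORE}": 2000}

trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(
        trainer_resources=neuron_resources,
        num_workers=1,
        use_gpu=False,
    ),
)
# result = trainer.fit()
```

This runs against a cluster where the logs show the Neuron cores being detected, but the trainer never appears to be scheduled onto them.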

Cheers, Cos.