I’m trying to get Ray Train to use AWS Neuron cores. I can see how this should work with @ray.remote, but I haven’t been able to set the accelerator type correctly via ray.train.torch.TorchTrainer. The logs show that the Neuron cores are detected, but setting trainer_resources in the ScalingConfig seems to have no effect. Does anyone know how to make this work? Thanks!
```python
scaling_config=ScalingConfig(trainer_resources=neuron_resources, num_workers=1, use_gpu=use_gpu)
```
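For context, here is a minimal sketch of my full trainer setup. `train_loop` is a placeholder for my actual training function, and I’m assuming `"neuron_cores"` is the resource key Ray reports when it detects the Neuron devices (that’s the name I see in the logs):

```python
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop(config):
    # Placeholder: my real loop is a standard PyTorch training loop
    ...

use_gpu = False
# Assumed resource key based on what Ray's autodetection logs show
neuron_resources = {"neuron_cores": 2}

trainer = TorchTrainer(
    train_loop,
    scaling_config=ScalingConfig(
        trainer_resources=neuron_resources,
        num_workers=1,
        use_gpu=use_gpu,
    ),
)
result = trainer.fit()
```

This runs without errors, but the workers don’t appear to be scheduled onto the Neuron cores.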