Example in ray.train for TensorFlow distributed training?

I've been following the Ray Train TensorFlow guide. Under "Distributed Model Training" one can choose TensorFlow or PyTorch.

For TensorFlow the guide shows this snippet:

from ray.train import ScalingConfig
from ray.train.tensorflow import TensorflowTrainer

# For GPU Training, set `use_gpu` to True.
use_gpu = False

trainer = TensorflowTrainer(
    train_loop_per_worker=train_func_distributed,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=use_gpu),
)
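
For reference, `train_func_distributed` is defined earlier on the same docs page. A minimal self-contained sketch of what such a function can look like (the toy model and data below are my own stand-ins, not the MNIST example from the docs):

import numpy as np
import tensorflow as tf

def train_func_distributed():
    # Ray Train sets TF_CONFIG on each worker, so the strategy
    # discovers the multi-worker cluster automatically.
    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    # Toy data, just to keep the sketch self-contained.
    x = np.random.random((1024, 8)).astype("float32")
    y = np.random.random((1024, 1)).astype("float32")

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(8,)),
            tf.keras.layers.Dense(16, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    model.fit(x, y, batch_size=64, epochs=3)

Training is then kicked off with `trainer.fit()`.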

It does not run with TensorFlow 2.18 and Ray 2.39.0 on Ubuntu 24.04.

Have others run this example successfully?

Others seem to report the same issue here.

I suspect it's because of Keras 3, since users report the example breaking from tf==2.16 onwards, which is when the new Keras became the default.
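
A quick way to confirm which Keras a given TensorFlow build resolves to (in TF 2.16+, `tf.keras` points at Keras 3 unless the legacy flag is set):

import tensorflow as tf

print(tf.__version__)        # e.g. 2.18.0
print(tf.keras.__version__)  # a 3.x version means Keras 3 is active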

One can set `os.environ["TF_USE_LEGACY_KERAS"] = "1"` (for TF 2.16+; this also requires the `tf-keras` package), as sketched below.
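
For anyone who does want the workaround: the variable has to be set before TensorFlow is imported, and it has to reach every Ray worker process, not just the driver. A sketch of one way to do that (`runtime_env` with `env_vars` is standard Ray API; the tf-keras requirement comes from the TF 2.16 release notes):

import os

# Must be set before TensorFlow is imported anywhere in this process.
# Also requires the tf-keras package (pip install tf-keras).
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import ray

# Propagate the flag to the Ray worker processes as well; setting it
# only in the driver is not enough.
ray.init(runtime_env={"env_vars": {"TF_USE_LEGACY_KERAS": "1"}})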

But I'm not interested in this workaround (and I suspect a large share of users won't be either, now or in the near future).