How do I separate the APPO learner from rollout workers in CPU-only training?

Hi, I’m running a Ray cluster on AWS with 100+ rollout workers and an APPO algorithm. The APPO learner is always scheduled on the same node as some of the workers, and I’d like to separate them, mainly for better network stability.
There’s a fairly straightforward path to this with GPU training (the learner requests a GPU, so it lands on the GPU node), but it’s not so obvious when everything trains on CPU.

Is there a way to use ray.remote() resource options, or some custom tagging scheme, to tell the scheduler to keep them apart?

Or can I just wrap the APPO algorithm in an actor, tag it with a custom resource, and plug that in via `config=trial_config`?
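For concreteness, here is roughly what I’m imagining. This is a sketch, not something I’ve verified: I advertise a made-up custom resource (`learner_node` is my own name, set in the cluster YAML on the node that should host the learner) and then pass a `PlacementGroupFactory` to Tune whose first bundle, which Tune uses for the trial driver/learner, requests a sliver of that resource. `trial_config` stands in for my existing APPO config.

```python
# Hypothetical cluster config snippet for the node that should host the
# learner ("learner_node" is a custom resource name I invented):
#
#   available_node_types:
#     learner_node_type:
#       resources: {"CPU": 16, "learner_node": 1}

from ray import tune

# First bundle = the trial driver (where the APPO learner runs); pinning it
# to the tagged node via the custom resource. The remaining bundles are
# one per rollout worker, CPU-only.
pg_factory = tune.PlacementGroupFactory(
    [{"CPU": 1, "learner_node": 0.01}]   # learner / driver bundle
    + [{"CPU": 1}] * 100,                # rollout worker bundles
    strategy="SPREAD",
)

tune.run(
    "APPO",
    config=trial_config,                 # my existing APPO config
    resources_per_trial=pg_factory,
)
```

Would this actually keep the learner off the worker nodes, or does RLlib’s own `default_resource_request` override whatever I pass here?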