Not able to use second worker's CPU and memory

Hey everyone! I am performing dimensionality reduction on a very large dataset, so I am
using a second system.
The second system connects to my cluster; I can verify this via cluster.available_resources().

But when I run the script, only the host computer's CPU and memory are utilized; the second worker is not using any of its CPU power or memory at all.

Where am I going wrong?

OS: Windows 10, Python 3.7.9, Ray 1.6.0

How severe does this issue affect your experience of using Ray?

  • High: It blocks me from completing my task.

Hi @sohail_4233. Could you provide more details?

  • Can you share the configuration of your cluster?
  • What does your script look like? How are you running your script?
  • Can you share the exact output of cluster.available_resources()?

This will make it easier to see what’s going wrong.

Hi @cade

Operating system: Windows 10, Python 3.9.12, Ray 1.6.0.
I am not using any cloud service.

  1. I do the following: first, in cmd I type " ray start --head ".
    It starts and gives me the node IP address and other details.

  2. I turn on another laptop and connect from cmd by typing the following command:
    " ray start --address=number " (where number is the address printed in step 1).
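For reference, the two steps above can be sketched as shell commands. The <head-ip>:6379 address is a placeholder for whatever the head node actually prints when it starts; 6379 is only Ray's default port, and yours may differ:

```shell
# On the head laptop: start the head node
ray start --head

# On the worker laptop: join the cluster.
# Replace <head-ip>:6379 with the exact address printed by the head node.
ray start --address=<head-ip>:6379
```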

It gets connected; then I open a Jupyter notebook, and I am sharing the script below.

I run the script and open Task Manager; I notice my host computer doing all the work, while Task Manager on the worker laptop shows it is idle.

But when I type cluster.available_resources(), it correctly shows all the available resources.
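One thing worth checking (a guess, since the script itself isn't shown here): a cluster can show all resources in cluster.available_resources() and still run everything on the head node if the script never declares @ray.remote tasks, or if it calls ray.init() with no address, which starts a fresh local Ray instead of joining the cluster. Here is a minimal sketch to see which nodes actually execute tasks (where_am_i is a hypothetical name, and this assumes a running multi-node cluster):

```python
import socket
import ray

# Join the existing cluster started with "ray start --head".
# A plain ray.init() with no address would start a new local Ray instead.
ray.init(address="auto")

@ray.remote
def where_am_i():
    # Returns the hostname of the node that actually ran this task.
    return socket.gethostname()

# Launch enough tasks that Ray has a reason to schedule some on the worker.
hosts = ray.get([where_am_i.remote() for _ in range(50)])
print(set(hosts))  # if both laptops are doing work, both hostnames should appear
```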

I am attaching the screenshot and a view of the .ipynb file below.

** Please note that both laptops have the same versions of Python and Ray installed.