Run computationally intensive, licensed software on my PC from a remote PC

Hey, this is a general feasibility question (careful: noob warning). Here's the scenario:

At the place where I work we have PC1, which has licensed simulation software installed that I can't install on my private PC2. We basically all have to share PC1 so everyone can run their sims, so I'd like to use my limited time on it efficiently.
How it works is:

  1. I remote log into PC1 (over the shared VPN).
  2. Execute a batch script which starts the simulation with the correct params (each sim can use only one core). Each sim takes roughly 20 minutes.
  3. When it's done, it saves the output to a specified location; each result file is around 1 GB.

I will need roughly 5000 sims in total.
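For scale (rough arithmetic from the numbers above): 5000 sims × 20 min is about 1,667 core-hours, so roughly 8.7 days of wall time if all 8 of PC1's cores run flat out, and around 5 TB of result files.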

PC1 specs:
- Windows
- 8 cores

PC2 specs:
- Windows
- 12 cores

My first idea was to use a basic Ray parallel for loop, to at least make use of all 8 of PC1's cores.
Basically like this:

import ray
from subprocess import check_output

ray.init()  # by default Ray uses all local cores, i.e. 8 on PC1

@ray.remote
def f(input_pars):
    # start_sim.bat launches one simulation with the given params and writes its result file
    log = check_output(["start_sim.bat", input_pars], shell=True)
    print(log)
    return log

result_ids = []
for i in range(5000):
    result_ids.append(f.remote(input_pars[i]))

ray.get(result_ids)  # block until all sims have finished

Now my question is: would it make sense to create a cluster over the VPN with PC1 and PC2, to have 20 cores total and thus reduce the computation time?
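Purely on the mechanics (leaving the licensing question aside for a moment), I imagine forming the cluster would look roughly like this; the head address is a placeholder for PC1's VPN IP, and I haven't tested multi-node Ray on Windows, so treat it as a sketch:

    # On PC1 (head node), from a command prompt:
    ray start --head --port=6379

    # On PC2, connecting over the VPN (replace <PC1-VPN-IP> with PC1's actual address):
    ray start --address=<PC1-VPN-IP>:6379

    # In the driver script, instead of a plain ray.init():
    ray.init(address="auto")

One thing to keep in mind: any task Ray schedules on PC2 would execute start_sim.bat locally on PC2, so whatever the batch script needs would have to be present there too.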

I appreciate your help!

Naively, I would think the processes on PC2 wouldn't be able to run the licensed software; otherwise no company would need to buy more than one license.

Yes, but I think that's how it works in a lot of places at the moment when dealing with expensive specialised software: the institution buys one or two licenses, installs them on headless PCs, and everyone remote-connects to them.

Gotcha. Still, in the "remote connect" situation you mention, I would imagine the actual computation is taking place on PC1, while PC2 is just being used as a remote terminal. It should be possible to verify this by monitoring CPU usage on PC1 while running an intensive task with the simulation software.
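For example, something quick and dirty like this with psutil (assuming it is installed on PC1; watching Task Manager would show the same thing):

    # Run this on PC1 while a simulation is going; prints per-core usage once a second.
    import psutil

    for _ in range(60):
        print(psutil.cpu_percent(interval=1, percpu=True))

If PC1's cores are pegged while PC2 sits idle, the work is clearly happening on PC1.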