I am on the data scientist plan and have bought GPU power-ups, since I need to get some simulations done this weekend.
Does anyone know how GPUs are allocated when running multiple workspaces at the same time?
The simulations are very similar to each other; running them separately on Google Colab gave roughly the same training time (±10%).
I started the first two workspaces and the speed seemed as expected, but when I started two more, training took about three times longer than expected on workspaces 3 and 4.
Is GPU capacity divided across my workspaces, or does each of the 4 workspaces get its own GPU?
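In case it helps diagnose, here is a small sketch (assuming `nvidia-smi` is available inside the workspace) that I could run in each workspace to see which GPU it actually has and how loaded it is:

```python
import subprocess

def gpu_info():
    """Ask the NVIDIA driver which GPUs this workspace can see.

    Returns one line per visible GPU (name, utilization, memory used/total),
    or a short message if nvidia-smi isn't available here.
    """
    try:
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=name,utilization.gpu,memory.used,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() or "no GPUs visible"
    except (FileNotFoundError, subprocess.CalledProcessError):
        return "nvidia-smi not available in this workspace"

print(gpu_info())
```

If workspaces 3 and 4 report the same GPU name and already-high memory usage as workspaces 1 and 2, that would suggest the workspaces are sharing GPUs rather than each getting a dedicated one.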