A couple of questions to help with calculating hardware performance tables:
Is there a way to know how many work units you should be running at the same time for a given project (other than as many as it takes for full GPU usage)?
Does running multiple tasks on a GPU slow down the individual tasks?
I try to max out the GPU usage.
As I only have mid-range GPUs, it's always just one task per GPU; they then run at 95-100%.
You can run two tasks per GPU so that the time between one task ending and the next one starting isn't wasted, but I think that only gains a few percent in total.
Your GPU can run as many tasks simultaneously as fit in its VRAM.
Running one or multiple doesn't make a difference in total output once the GPU is already saturated.
If one task alone needs 1 min and already uses 100% of the GPU, then with 5 tasks each one needs 5 min.
The other way around, if that one task uses only 20%, then running 5 tasks gets your GPU to the wanted 100%.
Honestly, it's been a long time since I tried multiple GPU tasks, but logically this should be the behavior... :)
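To put that logic into a rough formula, here's a quick sketch of my own (made-up function names, simple proportional-sharing assumption; real tasks also contend for VRAM, PCIe, etc., so treat it as a ballpark):

    # Rough model: each task wants `solo_util` of the GPU; once the GPU is
    # oversubscribed, the tasks share it and each one slows down proportionally.
    def estimated_time_per_task(solo_time_s, solo_util, n_tasks):
        slowdown = max(1.0, n_tasks * solo_util)
        return solo_time_s * slowdown

    # The two cases from above:
    print(estimated_time_per_task(60, 1.0, 5))  # 300.0 -> five 100%-tasks take 5 min each
    print(estimated_time_per_task(60, 0.2, 5))  # 60.0  -> five 20%-tasks still take ~1 min each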
No, I don't have a super accurate way of determining how many work units I should run simultaneously, other than looking at GPU usage and trying to max it out, and then comparing how many work units I'm outputting per unit of time. For example, I did a quick test for PrimeGrid, with only 1 task per GPU, to see what the difference is:
TITAN V
1 task - 1:50 per WU
2 tasks - 2:30 per WU
1080 Ti
1 task - 3:20 per WU
2 tasks - 5:00 per WU
So you can see here that running two tasks in parallel on the TITAN ends up producing the equivalent of 1 WU per 75 seconds, whereas running one task at a time produces 1 WU per 110 seconds. So yes, running multiple tasks does slow down the individual tasks, but overall you get more output, as long as GPU usage wasn't already maxed out.
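For anyone filling in the tables, this is the quick math I used (a throwaway Python snippet with my own measured numbers from the test above, nothing official from the project):

    # Effective seconds per WU when n identical tasks run in parallel
    # and each one finishes in `wall_time_s` seconds.
    def seconds_per_wu(wall_time_s, n_tasks):
        return wall_time_s / n_tasks

    # TITAN V, PrimeGrid:
    one = seconds_per_wu(110, 1)   # 1 task,  1:50 per WU -> 110.0 s/WU
    two = seconds_per_wu(150, 2)   # 2 tasks, 2:30 per WU ->  75.0 s/WU
    print(one / two)               # ~1.47 -> roughly 47% more WUs per hour with 2 tasks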