Can’t they just make sub-75W GPUs that require basic cooling?
that can also do GenAI work for a similar “hardware cost per output”? No
FYI, the server hosts for the cards often have eight of the cards each. The power draw becomes the host server’s RAM and CPU, plus eight times 750w (or whatever). It scales up quickly.
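The scaling math above can be sketched with some back-of-the-envelope numbers (the 1,000 W host budget is an assumption for illustration; only the eight-card, 750 W-per-card figures come from the thread):

```python
# Rough per-server power estimate: eight accelerator cards at ~750 W each,
# plus an assumed host-side budget for CPU, RAM, storage, and fans.
NUM_GPUS = 8
GPU_WATTS = 750      # per-card draw, figure from the thread ("or whatever")
HOST_WATTS = 1000    # assumed host budget; real servers vary widely

total_watts = HOST_WATTS + NUM_GPUS * GPU_WATTS
kwh_per_day = total_watts / 1000 * 24  # energy at sustained full load

print(f"{total_watts} W per server")   # 7000 W per server
print(f"{kwh_per_day} kWh/day")        # 168.0 kWh/day
```

At these assumed numbers, a single eight-card server pulls around 7 kW, so even a small rack of them lands in the tens of kilowatts, which is why "it scales up quickly."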
Seems like an optimization issue.
If they can’t train and run a big AI model on iGPU-level power at very fast speeds, then they’re useless as developer companies.
So much bloat.