Ampere AI published a case study showing that AI training performance needs can be met with CPU instances alone: using OCI Ampere A1 instances reduced training time by 30%.
You’ll find more details in a PDF from the OCI Ampere A1 Compute page.
That’s pretty cool tech from Matoah; it will be interesting to see how it scales out to make better use of established resources.
I’d like to read more case studies, ideally with more detail: computation times, training/testing metrics, node types or cores/clock speeds, and so on, to allow a more direct comparison.