Conference Day Two: Tuesday, 17 September 2019
OpenAI recently blogged about the growth of compute applied to AI training since 2012. According to their statistics, compute has grown by a factor of 300,000, doubling roughly every 3.5 months (more than five times faster than Moore's Law). Large real-world models like BERT can now be trained in under an hour on 1,472 cooperating V100 GPUs in a DGX-2-based SuperPOD, an architecture that has evolved over the past few years into a premier platform for AI research. But it's not just hardware, or the largest models: NVIDIA's recent MLPerf benchmark results (v0.6) show year-over-year performance gains of between 20% and 75% on a variety of problems on the same hardware. Software and innovation up and down the stack are what enable the industry to sustain this relentless pace of performance, fueling research breakthroughs and real-world applications. Leave with an understanding of how GPU acceleration is advancing research and applications across data science.
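The figures quoted above can be sanity-checked with a little arithmetic. The sketch below (a back-of-the-envelope calculation, assuming the conventional ~24-month Moore's Law doubling period; the 300,000x and 3.5-month numbers come from the text) shows how the 300,000x growth factor, the doubling period, and the "more than five times faster than Moore's Law" claim fit together:

```python
import math

growth_factor = 300_000            # compute growth since 2012, as cited above
months_per_doubling = 3.5          # doubling period quoted in the text

# 300,000x growth corresponds to log2(300,000) doublings
doublings = math.log2(growth_factor)            # about 18.2 doublings
span_months = doublings * months_per_doubling   # about 64 months (~5.3 years)

# Moore's Law is conventionally a doubling every ~24 months (an assumption here)
moore_months = 24
speedup_vs_moore = moore_months / months_per_doubling  # about 6.9x faster

print(f"{doublings:.1f} doublings over {span_months:.0f} months, "
      f"{speedup_vs_moore:.1f}x faster than Moore's Law")
```

At roughly 6.9x, the pace is indeed "more than five times faster" than a 24-month Moore's Law doubling cadence.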