Graphcore, a well-funded and ambitious British chip designer that focuses solely on AI applications, has unveiled what it says is the world’s most complex chip: the Colossus MK2 or GC200 IPU.
The processor has 59.4 billion transistors and offers an eightfold performance increase over the company’s Colossus MK1, says Graphcore. That surpasses the 54 billion transistors in Nvidia’s A100, which the American firm announced earlier this year and which previously held the title of the world’s largest processor.
Each GC200 chip has 1,472 independent processor cores and 8,832 separate parallel threads, all supported by 900MB of in-processor RAM. Graphcore will make the GC200 available via its new IPU Machine, the M2000, a pizza-box-sized unit that contains four GC200 chips and delivers 1 petaflop of total compute. The firm says its new hardware is “completely plug-and-play” and that customers will be able to connect up to 64,000 IPUs together for a total of 16 exaflops of computing power.
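The scaling figures above are internally consistent, as a quick back-of-the-envelope check shows (a minimal sketch using only the numbers quoted in the article):

```python
# Sanity-check Graphcore's compute-scaling claims using figures from the article.
CHIPS_PER_M2000 = 4            # each M2000 contains four GC200 chips
PETAFLOPS_PER_M2000 = 1.0      # one M2000 delivers 1 petaflop of total compute

# Implied per-chip compute: 1 PF / 4 chips = 0.25 PF per GC200.
petaflops_per_chip = PETAFLOPS_PER_M2000 / CHIPS_PER_M2000

# Claimed maximum configuration: 64,000 IPUs connected together.
max_ipus = 64_000
total_petaflops = max_ipus * petaflops_per_chip   # 16,000 petaflops
total_exaflops = total_petaflops / 1_000          # 1 exaflop = 1,000 petaflops

print(f"{total_exaflops:.0f} exaflops")  # prints "16", matching the article's claim
```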
<img src="https://cdn.vox-cdn.com/thumbor/r24E_U1w34fTNIGlcrfCHf0G9d0=/250x250/cdn.vox-cdn.com/uploads/chorus_asset/file/20083246/GC011_IPURACK_002_W4K.jpg" alt="Graphcore’s new M2000 IPU Machine contains four GC200 chips.">Graphcore’s new M2000 IPU Machine contains four GC200 chips. Image: Graphcore
The announcement comes at a time when the chip world is still being shaken up by the advent of artificial intelligence. Training AI models requires highly parallel processors, a demand which has drawn new players to the market (like Graphcore) and even encouraged some existing tech giants (like Google) to make their own specialized chips.
So far, though, Nvidia has dominated the market, as its GPUs, originally designed to accelerate graphics rendering in video games, have proved a perfect fit for AI processing. Graphcore is trying to challenge that dominance and has already attracted significant funding and veteran tech-industry backers like Microsoft and Dell.
Earlier this year, Graphcore announced it had raised $150 million for R&D in its latest funding round, giving it a total valuation of $1.95 billion. Founded in 2012, around the time deep learning really took off, the company claims its biggest advantage is that its chips were designed from the ground up with AI in mind.
Karl Freund, an analyst at Moor Insights & Strategy, told The Verge that he was impressed by Graphcore’s latest offering, particularly the upgrades to its software, which is key to properly harnessing the huge parallel processing power needed for AI.
“What Graphcore is focused on is not just the chip, but the system,” says Freund. “Training meaningful neural networks can’t be done on a single chip, it has to be done on hundreds, thousands, maybe tens of thousands of them, and that scalability factor is really what, in my mind, makes Graphcore stand out.”
He notes, for example, that Graphcore’s support software is “very complete for a startup,” able to interface with a wide range of AI frameworks and offering the sort of workload monitoring tools that allow researchers to get the most out of their hardware.
It seems that even in a hardware market where firms compete over how many transistors they can fit onto a chip, it’s software that will still make or break a company’s fortunes.