In the war over deep learning that’s currently taking place in Silicon Valley, Facebook just dropped a bomb.
The social network is one of the Valley’s most invested when it comes to building out artificial intelligence technology to help its products think and act like humans. It’s a competitive endeavor — Google, IBM, Uber and Baidu are just a few of the companies racing Facebook to scoop up deep learning experts, the rare minds capable of building this type of software.
You’d think, then, that Facebook would keep its AI advancements under wraps and away from the competition. Not so, apparently.
The company announced Thursday that it built new AI-specific servers — the physical hardware that runs the AI software its employees are creating — to do things like automate text conversations and understand what’s visible in a photograph. The new servers, called Big Sur, are twice as fast as the ones Facebook used before and hold twice as many graphics processing units — GPUs, chips originally built to render video and images on a screen (like your smartphone’s), which have turned out to be well suited to the heavy number-crunching that training AI systems requires.
In other words, the new servers can handle bigger models and more data on a single machine, which means Facebook’s research team will be able to move faster, explained Serkan Piantino, engineering director of Facebook AI Research.
“Because we can train bigger models and we can make use of more data and just do things faster,” he said, “we can shorten our iteration cycles [and] we will discover more things in the fields of machine learning and AI as a result.”
That’s great! But the other key thing is that Facebook is open sourcing the Big Sur design, which means it’s passing out the blueprint for how to reproduce this server to anyone who wants to see it. Facebook open sources a lot of technology, so this isn’t unprecedented. It has already open sourced some of its deep learning code library in a project called Torch. Google’s doing this, too, and has open sourced its own software called TensorFlow.
For both companies, these libraries can be powerful recruiting tools — a way to find the smart scientists hacking away at their code before scooping them up. They may also serve as a vehicle for commanding control of how AI is used in the future. The more companies use Torch or TensorFlow, the more control Facebook or Google has over the future AI ecosystem.
The key difference, until now: Google has been unwilling to hand over any intel on its research hardware. Facebook is doing just that. Now researchers and companies using deep learning tech can use both Facebook’s software and hardware.
So there’s a recruiting and control benefit for Facebook. But the other benefit is financial. The hope is that, by open sourcing its servers, Facebook can establish an industry standard for AI-specific hardware. If other startups (or even larger companies) use Facebook’s design, it will become cheaper and more common in the long term, explained Yann LeCun, director of Facebook’s AI Research team.
“This is a way of [telling manufacturers], ‘here’s what we use, here’s what we need,'” he added.
Ask and you shall receive (Facebook hopes). In case you want to hear how Facebook describes its AI efforts, here is a helpful video.
https://www.youtube.com/watch?v=l8V0jh62zvE
Additional reporting by Mark Bergen.
This article originally appeared on Recode.net.