SAN JOSE, Calif. — Facebook is using OpenGL to deploy machine-learning-based visual effects to smartphones. The open API delivers solid performance across iOS and Android phones; however, a lead developer called for a move to the more modern Vulkan or Metal APIs to ease mobile graphics programming.
That was one of several news nuggets from @Scale, the social network’s event targeting software engineers. In other developments, an exhibitor showed a copper alternative to solder, a startup demoed its 16-lens camera, and an academic described progress using DNA for computer storage.
Facebook runs the event in various cities to foster a collaborative ecosystem that uses open-source software to solve some of the biggest problems plaguing large data centers.
At one booth, the company showed image recognition and special-effects filters running on smartphone cameras at rates from 30 to 45 frames/second using OpenGL-based inference code that it developed in-house. By contrast, Qualcomm’s new neural-networking SDK for Snapdragon delivers just 15 frames/second and doesn’t support iOS.
“The Qualcomm software breaks more often, and when it does, you have to go to Qualcomm to fix it because it’s not open-source,” said Fabio Riccardi, a Facebook engineer who wrote the OpenGL code.
Riccardi shows Facebook's inference software running on his iPhone. (Images: EE Times)
Facebook expects to deploy generations of OpenGL-based inference code on smartphones for at least two or three years. It first showed machine-learning inference on handsets at an event in April.
OpenGL is widely used in handsets, but it is a relatively old API that is hard to program. The newer Khronos Vulkan and Apple Metal APIs deliver higher performance and easier programming, but so far they are used on only a few high-end phones, said Riccardi.
“The advantage of our code is [that] you write something once and it works on virtually all smartphones, even though Android is very fragmented,” said Riccardi, who wrote camera software for Apple for five years before joining Facebook last September.
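Facebook has not published the code Riccardi described, but the general pattern of running neural-network layers through OpenGL is well established: each layer is written as a fragment shader that reads input activations from a texture and renders its output into another texture. Below is a minimal, hypothetical C++ sketch of one such pass, a 3×3 convolution with a fused ReLU, targeting OpenGL ES 2.0, the lowest common denominator implied by the write-once portability claim. It assumes a GL context is already current, and every name in it is illustrative rather than taken from Facebook's software.

```cpp
// Hypothetical sketch: one 3x3 convolution layer as an OpenGL ES 2.0
// fragment-shader pass. Assumes an EGL/GLES2 context is already current
// (e.g., created by the host camera app). Not Facebook's actual code.
#include <GLES2/gl2.h>
#include <cstdio>

// Fragment shader: each output pixel gathers a 3x3 neighborhood of the
// input texture, weights each tap (kernel fed as uniforms), and applies
// a ReLU. Indexing uses only loop counters, as GLSL ES 1.00 requires
// for uniform arrays in fragment shaders.
static const char* kConvFrag = R"(
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uInput;   // input activations
uniform vec2 uTexelSize;    // 1.0 / texture dimensions
uniform float uWeights[9];  // 3x3 kernel, row-major
void main() {
  vec4 acc = vec4(0.0);
  for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx)
      acc += uWeights[(dy + 1) * 3 + (dx + 1)] *
             texture2D(uInput, vTexCoord + vec2(dx, dy) * uTexelSize);
  gl_FragColor = max(acc, 0.0);  // fused ReLU activation
}
)";

// Vertex shader: full-screen quad, mapping clip space to [0,1] UVs.
static const char* kQuadVert = R"(
attribute vec2 aPos;
varying vec2 vTexCoord;
void main() {
  vTexCoord = aPos * 0.5 + 0.5;
  gl_Position = vec4(aPos, 0.0, 1.0);
}
)";

// Compile one shader stage and report errors.
static GLuint Compile(GLenum type, const char* src) {
  GLuint s = glCreateShader(type);
  glShaderSource(s, 1, &src, nullptr);
  glCompileShader(s);
  GLint ok = 0;
  glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
  if (!ok) {
    char log[512];
    glGetShaderInfoLog(s, 512, nullptr, log);
    std::fprintf(stderr, "shader: %s\n", log);
  }
  return s;
}

// Run the convolution: draw a full-screen quad into an FBO whose color
// attachment serves as the next layer's input texture.
void RunConvLayer(GLuint inputTex, GLuint outputFbo,
                  int w, int h, const float weights[9]) {
  static GLuint prog = 0;
  if (!prog) {  // build the pipeline once, reuse every frame
    prog = glCreateProgram();
    glAttachShader(prog, Compile(GL_VERTEX_SHADER, kQuadVert));
    glAttachShader(prog, Compile(GL_FRAGMENT_SHADER, kConvFrag));
    glBindAttribLocation(prog, 0, "aPos");
    glLinkProgram(prog);
  }
  glUseProgram(prog);
  glBindFramebuffer(GL_FRAMEBUFFER, outputFbo);
  glViewport(0, 0, w, h);
  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, inputTex);
  glUniform1i(glGetUniformLocation(prog, "uInput"), 0);
  glUniform2f(glGetUniformLocation(prog, "uTexelSize"), 1.0f / w, 1.0f / h);
  glUniform1fv(glGetUniformLocation(prog, "uWeights"), 9, weights);

  static const GLfloat quad[] = { -1,-1,  1,-1,  -1,1,  1,1 };
  glEnableVertexAttribArray(0);
  glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, quad);
  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // one draw call = one layer
}
```

In a full pipeline, dozens of such passes would be chained by ping-ponging between framebuffer-attached textures, which is also where the API's age shows: Vulkan and Metal expose compute shaders and explicit memory control that make this kind of general-purpose GPU work far less contorted.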
Although Facebook is not using the Qualcomm neural-net SDK for its service, the company encouraged the more than 3,000 developers attending the event to see a talk on it here.
“Being able to scale and run on the [consumer] device is really important,” said Jay Parikh, head of engineering and infrastructure at Facebook, noting that the SDK gave Snapdragon chips a five-fold boost on some machine-learning tasks.
Separately, Facebook announced that it now updates its live code about every two hours, with tens to hundreds of changes each time. Google used the event to talk about its language-translation services as well as another system it runs that contains a whopping two billion lines of code.
“We are trying to build a community to share best practices,” said Parikh of @Scale.