When Facebook’s chief technology officer Mike Schroepfer took the stage at the Web Summit technology conference in Lisbon on Tuesday to show off a way to put filters over videos, you’d be forgiven for thinking it was a minor product update.
But the demonstration was actually Schroepfer’s way of showing off a technological breakthrough by the social media giant.
The “style transfer” feature allows you to stylize videos so they look like Van Gogh paintings. Traditionally this would be difficult: the video would have to be sent to data centers for processing, which requires a fast internet connection and still introduces a slight lag.
But Facebook has managed to pack the AI into its app, allowing people to add filters to their videos in real time. Computing tasks that once required large data centers can now run on the mobile device itself.
“This is one application of AI on the device, it’s one of the first. But the real breakthrough here is being able to train and build models on a big server…and deploy them directly to your pocket so you can run them in real time wherever you are. That is the exciting future of AI,” Schroepfer said during a keynote speech.
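To make that “train on a big server, deploy to your pocket” idea concrete, here is a minimal sketch using PyTorch rather than Facebook’s actual Caffe2 stack; the tiny network, tensor sizes and file name are illustrative assumptions, not details from the announcement:

```python
# Minimal sketch (not Facebook's Caffe2Go pipeline): build a small
# image-to-image network on a server, then export it in a form a mobile
# runtime can load. Architecture and file names are illustrative only.
import torch
import torch.nn as nn

class TinyStyleNet(nn.Module):
    """A toy feed-forward stylization network: three conv layers, no downsampling."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame):
        # Residual connection keeps the output close to the input frame.
        return torch.clamp(frame + self.net(frame), 0.0, 1.0)

model = TinyStyleNet().eval()

# Trace with a dummy video frame (1 x 3 x 256 x 256) so the computation graph
# can be shipped without Python and run frame-by-frame on the phone.
example_frame = torch.rand(1, 3, 256, 256)
scripted = torch.jit.trace(model, example_frame)
scripted.save("style_net_mobile.pt")  # load this file from the mobile runtime
```

The exported file contains the trained weights and the computation graph, so the phone only has to run the forward pass on each camera frame, with no round trip to a data center.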
The “deep learning” system behind the feature is called Caffe2Go, and it is what makes stylized videos in the app possible.
“By condensing the size of the AI model used to process images and videos by 100x, we’re able to run various deep neural networks with high efficiency on both iOS and Android,” Facebook wrote in a blog post.
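The blog post does not spell out which optimizations produce that 100x figure, but one standard ingredient of shrinking models for phones is weight quantization: storing each weight in 8 bits instead of 32. The sketch below is purely illustrative and is not a description of Caffe2Go internals:

```python
# Illustrative sketch of 8-bit weight quantization, one common way to shrink
# a neural network's storage footprint. Not Facebook's actual technique.
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a scale factor, shrinking storage ~4x."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # a toy weight matrix
q, scale = quantize_int8(w)

print("float32 bytes:", w.nbytes)                  # 262144
print("int8 bytes:   ", q.nbytes)                  # 65536 (4x smaller)
print("max error:    ", np.abs(w - dequantize(q, scale)).max())
```

Quantization alone gives roughly a 4x reduction; reaching something like 100x typically also requires smaller network architectures and pruning away redundant weights.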
Neural networks are AI systems that loosely mimic the human brain: they learn from examples and form connections in the data. They are seen as a key technology within the AI field.
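As a toy illustration of what “learning” means here, the sketch below trains a two-layer network to reproduce the XOR function from four examples; it is orders of magnitude smaller than anything Facebook runs and is only meant to show the mechanics of adjusting weights to fit data:

```python
# A toy "neural network that learns": two layers trained to reproduce XOR.
# Purely illustrative; production networks are vastly larger.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: two layers of weighted sums and nonlinearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, applied with a small step.
    delta2 = (p - y) * p * (1 - p)
    grad_W2 = h.T @ delta2
    grad_b2 = delta2.sum(axis=0)
    delta1 = (delta2 @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ delta1
    grad_b1 = delta1.sum(axis=0)
    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= 0.5 * grad

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

Scaled up to millions of weights and trained on images rather than four rows of numbers, the same mechanics drive features like style transfer. AI developments at Facebook also link to its efforts in virtual reality (VR).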
“In VR, image and video processing software powered by computer vision is improving immersive experiences and helping to support hardware advances. Earlier this year we announced a new stabilization technology for 360 videos, powered by computer vision. And computer vision software is enabling inside-out tracking to help usher in a whole new category of VR beyond PC and mobile, as we announced at Oculus Connect 3 last month. This will help make it possible to build high-quality, standalone VR headsets that aren’t tethered to a PC,” Schroepfer wrote in a blog post.
Schroepfer also talked about the current limitations of AI, particularly its lack of understanding of context. He gave the example of a water bottle on the edge of a table: a human will understand that it is about to fall, while a computer won’t. That’s because humans have learned through a process called “predictive learning”, by forming hypotheses and testing them.
Facebook’s aim is to build computers that “learn, plan and reason like humans”, and a combination of context, knowledge, reasoning and the ability to predict events will get AI to that stage.
“When our research succeeds in teaching computers all the abilities I outlined … these will add up to something like what we call common sense. And when computers have common sense they can interact with us in better, more natural ways, from surfacing the most relevant information for us and assisting us with tasks to enabling whole new ways for people to connect,” Schroepfer said.
Source: CNBC