The machine knows what you want to look at
Speedy Neural Networks for Smart Auto-Cropping of Images
Once they’d trained a neural network to identify these areas, they needed to optimize it to work in real time on the site. Luckily for them, the cropping needed for a photo preview is fairly coarse: you’re only narrowing an image down to roughly its most interesting third, not zeroing in on fine detail. That meant Twitter could pare the network down using a technique called “knowledge distillation,” in which a smaller, faster network is trained to mimic the predictions of the original, larger one.
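The distillation idea can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions, not Twitter's actual model: the "teacher" here is a fixed linear map standing in for the large saliency network, and the "student" is a low-rank (hence cheaper) model trained to match the teacher's outputs rather than the original eye-tracking labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "teacher": a linear map from flattened 8x8 patches to a 16-value
# saliency grid. A stand-in for the large, accurate model.
W_teacher = rng.standard_normal((64, 16)) * 0.1

def teacher(x):
    return x @ W_teacher

# "Student": a low-rank factorization with far fewer parameters,
# so it runs much faster at inference time.
A = rng.standard_normal((64, 2)) * 0.1
B = rng.standard_normal((2, 16)) * 0.1

def student(x):
    return x @ A @ B

x_eval = rng.standard_normal((128, 64))
mse_before = np.mean((student(x_eval) - teacher(x_eval)) ** 2)

# Distillation loop: the student learns to reproduce the teacher's
# outputs on unlabeled inputs via gradient descent on the squared error.
lr = 0.05
for _ in range(500):
    x = rng.standard_normal((32, 64))
    err = student(x) - teacher(x)
    gB = (x @ A).T @ err / len(x)    # gradient w.r.t. B (up to a constant factor)
    gA = x.T @ (err @ B.T) / len(x)  # gradient w.r.t. A (up to a constant factor)
    A -= lr * gA
    B -= lr * gB

mse_after = np.mean((student(x_eval) - teacher(x_eval)) ** 2)
```

After training, the student's error against the teacher drops well below its starting value, even though the student never sees the original training labels.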
The end result was a neural network ten times faster than its original design. “This lets us perform saliency detection on all images as soon as they are uploaded and crop them in real-time,” write Theis and Wang.
The company says the new feature is currently rolling out to all users on desktop and in the iOS and Android apps. So the next time a photo preview on Twitter invites you to click, remember to thank a neural network.
Cropping using saliency
A better way to crop is to focus on “salient” image regions. A region has high saliency if a person is likely to look at it when freely viewing the image. Academics have studied and measured saliency using eye trackers, which record which pixels people fixate on. In general, people tend to pay the most attention to faces, text, and animals, as well as other objects and regions of high contrast. This data can be used to train neural networks and other algorithms to predict what people might want to look at.
The basic idea is to use these predictions to center a crop around the most interesting region.
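That centering step can be sketched as follows. This is an illustrative NumPy implementation under the assumption of a per-pixel saliency map the same size as the image; the function name and crop logic are hypothetical, not Twitter's production code.

```python
import numpy as np

def crop_around_saliency(image, saliency, crop_h, crop_w):
    """Center a crop_h x crop_w window on the saliency peak, clamped to the image."""
    # Assumes the saliency map has the same height/width as the image.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy usage: a 10x10 "image" whose most salient pixel is near the top-right.
img = np.arange(100).reshape(10, 10)
sal = np.zeros((10, 10))
sal[2, 7] = 1.0
crop = crop_around_saliency(img, sal, 4, 4)  # 4x4 window containing img[2, 7]
```

Clamping the window to the image bounds means the salient point stays inside the crop even when it sits near an edge or corner.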
Source: Twitter official blog, 2018.