Shutterstock has launched a beta test of its Composition Aware Search, which lets users specify one or more keywords, or search for copy space, and arrange them spatially on a canvas to reflect the specific layout of the image they are seeking.
The new technology goes live today in beta on Shutterstock Labs, the company’s test site for innovative search tools.
This patent-pending tool uses a combination of machine vision, natural language processing, and state-of-the-art information retrieval techniques to find strong matches against complex, spatially aware search criteria. For example, a user can look for images of wine and cheese where the wine is on the left and the cheese is on the right. By simply moving the placements on the canvas, users see the requested changes reflected in the image results. Shutterstock customers can then license and edit the image for use in their work.
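Shutterstock has not published implementation details, but the basic idea of composition-aware ranking can be illustrated with a toy sketch. The example below assumes each candidate image already has detected objects with labels and normalized (x, y) centers, and scores images by how closely those detections sit to the positions the user placed on the canvas; all names here (Placement, DetectedObject, rank_by_composition, the penalty constant) are hypothetical, not Shutterstock's API.

```python
# Toy sketch of spatially aware matching, not Shutterstock's actual system.
from dataclasses import dataclass
from math import hypot

@dataclass
class Placement:          # where the user dropped a keyword on the canvas
    label: str
    x: float              # 0.0 (left) .. 1.0 (right)
    y: float              # 0.0 (top)  .. 1.0 (bottom)

@dataclass
class DetectedObject:     # what a vision model found in an image
    label: str
    x: float
    y: float

def composition_score(query: list[Placement],
                      objects: list[DetectedObject]) -> float:
    """Lower is better: sum of distances from each requested placement to the
    nearest detected object with the same label; missing labels are penalized."""
    score = 0.0
    for placement in query:
        matches = [o for o in objects if o.label == placement.label]
        if not matches:
            score += 10.0  # arbitrary penalty when the concept is absent
            continue
        score += min(hypot(o.x - placement.x, o.y - placement.y) for o in matches)
    return score

def rank_by_composition(query, catalog):
    """Rank (image_id, detections) pairs by how well they match the layout."""
    return sorted(catalog, key=lambda item: composition_score(query, item[1]))

# Example: wine on the left, cheese on the right.
query = [Placement("wine", 0.2, 0.5), Placement("cheese", 0.8, 0.5)]
catalog = [
    ("img_a", [DetectedObject("wine", 0.25, 0.5), DetectedObject("cheese", 0.75, 0.55)]),
    ("img_b", [DetectedObject("wine", 0.8, 0.5), DetectedObject("cheese", 0.2, 0.5)]),
]
print([image_id for image_id, _ in rank_by_composition(query, catalog)])  # ['img_a', 'img_b']
```

Dragging the wine placement to the right side of the canvas would flip the ranking, mirroring how the tool updates results as placements move.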
“Shutterstock is on the front lines of improving the future of visual search technology using pixel data, deep learning, and artificial intelligence. What’s remarkable about this breakthrough is that we only trained our model to learn what things are, but our deep network learned how to represent where things are,” said Jon Oringer, Founder and CEO of Shutterstock. “For marketers, searching for an image with copy space using this tool will save a significant amount of time. We continue to innovate on this valuable search technology and invest in machine learning to improve the customer experience and provide more time for productivity and creativity.”
Composition Aware Search is the latest innovation leveraging Shutterstock’s investment in deep learning, following the launch of Reverse Image Search and Visually Similar Search last year. These innovations were developed by Shutterstock’s in-house computer vision team whose focus is on creating new ways to search and providing an unparalleled customer experience.
Learn more about Composition Aware Search in the white paper, try the beta on Shutterstock Labs, or watch the demo video.