Amazon researchers have produced a series of papers to be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
In the papers, Amazon researchers propose AI systems that could form the basis of an assistant that helps customers try on clothes online before buying them.
One of the systems lets shoppers fine-tune searches by describing variations on a product image, while another suggests garments based on items the customer already owns and likes to wear. A third combines data and images to generate a photo of a model wearing the chosen garment, or several chosen garments combined into a single look.
Amazon already uses artificial intelligence to enhance its services (not yet available in Italy). Style by Alexa, for example, is a feature of the Amazon Shopping app that suggests, compares and rates clothing using algorithms and human operators. Another is Prime Wardrobe, a way to try on clothes at home: it allows users to order clothes online, try them on, and return the ones they don't want to buy.
With these solutions, Amazon is aiming for a larger share of sales, even with products that customers might not normally choose.
Tests in a virtual dressing room
Researchers from Lab126, the Amazon laboratory behind products like Fire TV, Kindle Fire and Echo, developed an image-based virtual try-on system called Outfit-VITON. Outfit-VITON is designed to help visualize how clothing items in reference photos would look on a person.
The system can be trained on a single image using a generative adversarial network (GAN). If you don't know what a GAN is, here I explain it clearly (I hope).
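The adversarial idea behind a GAN can be shown in miniature: a discriminator learns to tell real samples from generated ones, and the generator is then updated to fool it. The toy 1-D setup below is purely illustrative (a logistic discriminator and a shift generator stand in for deep networks) and is not Amazon's model.

```python
import numpy as np

def discriminator(x, w, b):
    # Logistic "probability this sample is real".
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

# "Real" samples cluster around 4; the generator g(z) = z + theta starts at 0.
real = np.linspace(3.0, 5.0, 32)
z = np.linspace(-1.0, 1.0, 32)
theta, w, b, lr = 0.0, 0.0, 0.0, 0.1

# Discriminator updates: ascend log D(real) + log(1 - D(fake)).
for _ in range(200):
    fake = z + theta
    d_real, d_fake = discriminator(real, w, b), discriminator(fake, w, b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

# One generator update: ascend log D(fake), i.e. move fakes toward "real".
d_fake = discriminator(z + theta, w, b)
theta += lr * np.mean((1 - d_fake) * w)

print(np.mean(discriminator(real, w, b)) > np.mean(discriminator(z, w, b)))  # True
print(theta > 0)  # True: the generator was nudged toward the real data
```

After training, the discriminator scores real samples higher than fakes, and the generator's single update already shifts its output toward the real distribution; alternating these two updates is the whole adversarial game.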
Online clothing shopping offers the convenience of shopping from the comfort of your home, a wide selection of items to choose from, and access to the latest products.
However, it does not let you physically try items on, and this limitation has encouraged the development of virtual fitting rooms, in which images of a customer wearing selected clothing are generated automatically. This makes it easier to compare garments and choose the most appealing one (or the whole look).
How Outfit-VITON works
Outfit-VITON is built around a shape generation model with two kinds of input: a query image of a person, which serves as the template for the final image, and a number of reference images showing the clothes that will be transferred onto that person.
In the initial phase, the AI segments the input images and computes a body model of the person making the request. The selected garment segments are then "sewn" together and virtually recombined on the model's body, creating a complete image of the avatar wearing the chosen outfit.
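The segment-then-recombine flow described above can be sketched as plain data plumbing. Everything below is a placeholder: the real system uses deep segmentation and GAN-based synthesis, not these toy dictionaries and function names.

```python
# Hypothetical sketch of the Outfit-VITON-style pipeline: segment the
# reference images, fit a body model, and compose the requested garments.

def segment(image):
    """Stand-in for a parser that splits an image into labelled regions."""
    return image["regions"]  # e.g. {"shirt": ..., "jacket": ...}

def fit_body_model(query_image):
    """Stand-in for estimating the customer's pose/shape from their photo."""
    return {"pose": query_image["pose"]}

def compose(body, garment_labels):
    """'Sew' the selected garment segments onto the body model."""
    return {"body": body, "wearing": sorted(garment_labels)}

def virtual_try_on(query_image, reference_images, wanted):
    body = fit_body_model(query_image)
    garments = {}
    for ref in reference_images:
        for label, region in segment(ref).items():
            if label in wanted:           # keep only requested items
                garments[label] = region  # later references override earlier
    return compose(body, garments.keys())

customer = {"pose": "standing", "regions": {}}
refs = [{"regions": {"shirt": "striped", "shoes": "white"}},
        {"regions": {"jacket": "denim"}}]
result = virtual_try_on(customer, refs, wanted={"shirt", "jacket"})
print(result["wearing"])  # ['jacket', 'shirt']
```

The key design point survives the simplification: garments come from multiple reference photos, but the body model comes from a single query image, so the final composition always matches the customer's own shape.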
One of the papers addresses the challenge of using text to refine a request in the "virtual dressing room". A customer can say something as abstract as "I would like something more formal" or as precise as "Change the style of the sleeves", and the system is trained to modify the generated images based on these requests.
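To make the idea concrete, text-driven refinement can be caricatured as mapping a request onto edits of a garment's attributes. The rule table below is a hypothetical stand-in for the learned model: the real system learns this mapping from data rather than using hand-written rules.

```python
# Hypothetical sketch: map refinement requests to attribute edits.
RULES = {
    "more formal": {"formality": +1},          # abstract request
    "shorten the sleeves": {"sleeves": "short"},  # precise request
}

def refine(attributes, request):
    """Apply the edit matched by the request; unknown requests change nothing."""
    out = dict(attributes)
    for key, value in RULES.get(request.lower(), {}).items():
        if isinstance(value, int):
            out[key] = out.get(key, 0) + value  # relative edit
        else:
            out[key] = value                    # absolute edit
    return out

look = {"formality": 0, "sleeves": "long"}
print(refine(look, "More formal"))  # {'formality': 1, 'sleeves': 'long'}
```

A learned system generalizes where this table cannot, but the interface is the same: current look in, natural-language request in, edited look out.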
In tests, the researchers say, the AI system matched text requests 58% more often than its best-performing predecessor.
Recovery of complementary articles
The latest paper examines a technique for large-scale retrieval. The AI system predicts the compatibility of a garment with other garments and accessories, allowing a customer trying on clothes online, such as shirts or jackets, to receive matching shoe recommendations.
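A common baseline for this kind of compatibility prediction, assumed here for illustration and not necessarily Amazon's method, is to embed each item as a vector and score pairs by cosine similarity, recommending the candidate closest to the outfit being tried on.

```python
import numpy as np

# Hypothetical 3-D "style embeddings" for candidate shoes.
catalogue = {
    "white sneakers": np.array([0.9, 0.1, 0.2]),
    "black oxfords":  np.array([0.1, 0.9, 0.3]),
    "hiking boots":   np.array([0.2, 0.2, 0.9]),
}

def cosine(a, b):
    # Compatibility score: cosine of the angle between two embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(outfit_embedding, candidates):
    # Return the candidate most compatible with the outfit being tried on.
    return max(candidates, key=lambda name: cosine(outfit_embedding, candidates[name]))

casual_outfit = np.array([0.8, 0.2, 0.1])  # e.g. a jeans + t-shirt embedding
print(recommend(casual_outfit, catalogue))  # white sneakers
```

At production scale, the `max` over a dictionary would be replaced by approximate nearest-neighbour search over millions of items, but the scoring idea is the same.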
Trying clothes online: a new standard?
"Customers often buy clothing items that fit well with what has been selected or purchased before", the researchers wrote. “Being able to recommend compatible items at the right time will improve their shopping experience. Our system is designed for large scale and exceeds the state of the art in terms of compatibility prediction ”.
In short, we will have a virtual salesman who advises us (and graciously "pesters" us with) outfit after outfit to try on online, suggesting perfect combinations. And we won't even have to step behind a curtain to get changed.