
DeepFashion2: A Versatile Benchmark for Fashion Image Understanding


- By Yuying Ge¹, Ruimao Zhang, Lingyun Wu², Xiaogang Wang, Xiaoou Tang¹, and Ping Luo
¹The Chinese University of Hong Kong
²SenseTime Research


Even as fashion image analysis gains traction among image recognition researchers, understanding fashion images remains challenging in real-world applications due to large deformations, occlusions, and the discrepancies between consumer and commercial clothing images across domains.
DeepFashion is a large-scale clothes database introduced in 2016 by a research team from the Chinese University of Hong Kong (CUHK). The dataset contains over 800K diverse fashion images, each annotated with a category label (from 50 categories), descriptive attributes (drawn from a pool of 1,000), a bounding box, and clothing landmarks.
DeepFashion was a solid foundation, but it left room for improvement: each image is annotated with only a single clothing item, the landmarks are sparse (only 4–8 per item), and there are no per-pixel masks. CUHK researchers recently teamed up with Chinese AI giant SenseTime to develop a greatly improved successor, DeepFashion2, a large-scale benchmark with comprehensive tasks and annotations for fashion image understanding.
DeepFashion2 contains 491K images of 13 popular clothing categories. A full spectrum of tasks is defined on it, including clothes detection and recognition, landmark and pose estimation, segmentation, and commercial-consumer clothes verification and retrieval. All of these tasks are supported by rich annotations.
The dataset annotates a total of 801K clothing items across these images. Each item is labeled with scale, occlusion, zoom-in, viewpoint, bounding box, dense landmarks, and a per-pixel mask. The items are grouped into 43.8K clothing identities, where a clothing identity represents a class of apparel with nearly identical cut, pattern, and design. Images of the same clothing identity are collected from both buyers and sellers, and an item from a buyer image together with the corresponding item from a seller image forms a commercial-consumer pair.
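To make the annotation structure concrete, here is a minimal sketch of how one might group such per-item labels into clothing identities. It assumes one JSON-like record per image with illustrative field names (pair_id, style, category_id, bounding_box, landmarks, segmentation); the keys in the actually released files may differ, so treat this purely as a reading aid rather than the dataset's official format.

```python
import json
from collections import defaultdict
from pathlib import Path

def load_identities(annos_dir):
    """Group annotated clothing items into identities (a sketch).

    Assumes one JSON file per image, where each "item*" entry carries
    fields such as category_id, bounding_box, landmarks, segmentation,
    and style, plus an image-level pair_id linking buyer and seller
    shots of the same garment. Field names are assumptions here.
    """
    identities = defaultdict(list)  # (pair_id, style) -> list of items
    for path in Path(annos_dir).glob("*.json"):
        record = json.loads(path.read_text())
        pair_id = record.get("pair_id")
        source = record.get("source")  # e.g. seller (shop) vs. buyer (user) image
        for key, item in record.items():
            if not key.startswith("item"):
                continue
            identities[(pair_id, item.get("style"))].append({
                "image": path.stem,
                "source": source,
                "category_id": item.get("category_id"),
                "bbox": item.get("bounding_box"),    # [x1, y1, x2, y2]
                "landmarks": item.get("landmarks"),  # (x, y, visibility) triples
                "mask": item.get("segmentation"),    # polygon(s) for the per-pixel mask
            })
    return identities

# A buyer item and a seller item sharing the same (pair_id, style) key
# would form one commercial-consumer retrieval pair.
```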
Researchers say the work makes three main contributions:
  1. Compared with other clothes datasets, DeepFashion2 annotations are at least 3.5× those of DeepFashion, 6.7× those of ModaNet, and 8× those of FashionAI.
  2. A full spectrum of tasks is carefully defined on the proposed dataset.
  3. Researchers extensively evaluated Mask R-CNN on DeepFashion2 as a strong baseline. A novel Match R-CNN is also proposed that aggregates the features learned for clothes categories, poses, and masks to solve clothing image retrieval in an end-to-end manner (a minimal sketch of this matching idea follows the list).
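The sketch below illustrates the matching idea behind Match R-CNN: two images pass through a shared encoder, and a small head predicts whether they show the same clothing identity. This is not the paper's implementation; it substitutes a plain ResNet-18 embedding (via torchvision) for the Mask R-CNN RoI features that the paper aggregates across its detection, pose, and mask branches.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MatchHead(nn.Module):
    """Siamese matching sketch: shared encoder + pairwise classifier.

    A simplification of the Match R-CNN idea: the paper aggregates RoI
    features from the category, landmark, and mask branches of Mask R-CNN,
    whereas this sketch uses a single global ResNet-18 embedding.
    """

    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = resnet18(weights=None)   # stand-in encoder, not the paper's backbone
        backbone.fc = nn.Identity()         # keep the 512-d pooled feature
        self.encoder = backbone
        self.classifier = nn.Sequential(    # matching head on |f_a - f_b|
            nn.Linear(embed_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),              # logits: different vs. same identity
        )

    def forward(self, img_a, img_b):
        feat_a = self.encoder(img_a)
        feat_b = self.encoder(img_b)
        return self.classifier((feat_a - feat_b).abs())

# Usage: score a buyer/seller image pair (random tensors stand in for crops).
model = MatchHead()
buyer = torch.randn(1, 3, 224, 224)
seller = torch.randn(1, 3, 224, 224)
logits = model(buyer, seller)               # trained with cross-entropy on pair labels
same_prob = logits.softmax(dim=-1)[0, 1]
```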
The research team believes the rich data and labels of DeepFashion2 will accelerate the development of future algorithms for understanding fashion images. The paper DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images is on arXiv.
