Mohammad Azam Khan

Postdoc


Curriculum vitae




Mining Multi-Label Samples from Single Positive Labels


Conference paper


Youngin Cho, Daejin Kim, Mohammad Azam Khan, Jaegul Choo
Neural Information Processing Systems (NeurIPS), 2022

Cite
APA
Cho, Y., Kim, D., Khan, M. A., & Choo, J. (2022). Mining Multi-Label Samples from Single Positive Labels. Neural Information Processing Systems (NeurIPS).


Chicago/Turabian
Cho, Youngin, Daejin Kim, Mohammad Azam Khan, and Jaegul Choo. “Mining Multi-Label Samples from Single Positive Labels.” Neural Information Processing Systems (NeurIPS) (2022).


MLA
Cho, Youngin, et al. “Mining Multi-Label Samples from Single Positive Labels.” Neural Information Processing Systems (NeurIPS), 2022.


BibTeX

@inproceedings{cho2022mining,
  title     = {Mining Multi-Label Samples from Single Positive Labels},
  author    = {Cho, Youngin and Kim, Daejin and Khan, Mohammad Azam and Choo, Jaegul},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2022}
}

Abstract

Conditional generative adversarial networks (cGANs) have shown superior results in class-conditional generation tasks. To simultaneously control multiple conditions, cGANs require multi-label training datasets, where multiple labels can be assigned to each data instance. Nevertheless, the tremendous annotation cost limits the accessibility of multi-label datasets in real-world scenarios. Therefore, in this study, we explore the practical setting called the single positive setting, where each data instance is annotated by only one positive label with no explicit negative labels. To generate multi-label data in the single positive setting, we propose a novel sampling approach called single-to-multi-label (S2M) sampling, based on the Markov chain Monte Carlo method. As a widely applicable "add-on" method, our proposed S2M sampling enables existing unconditional and conditional GANs to draw high-quality multi-label data with a minimal annotation cost. Extensive experiments on real image datasets verify the effectiveness and correctness of our method, even when compared to a model trained with fully annotated datasets.
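The abstract does not spell out the sampling procedure, but the core idea it names — Markov chain Monte Carlo sampling in a generator's latent space, steered by per-class classifiers so that accepted samples are positive for several labels at once — can be illustrated with a minimal sketch. The toy generator, toy classifiers, and the random-walk Metropolis-Hastings acceptance rule below are hypothetical stand-ins for illustration only, not the paper's actual S2M models or acceptance rule:

import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Toy stand-in "generator": maps a 2-D latent code to a sample.
    return np.tanh(z)

def classifier_probs(x):
    # Toy stand-in per-class classifiers: p(class k present | x) for k = 0, 1.
    return 1.0 / (1.0 + np.exp(-3.0 * x))

def target_density(z, wanted=(0, 1)):
    # Unnormalized target: a standard-normal prior on z times the product of
    # the wanted classes' presence probabilities, so high-density latents
    # generate samples that are positive for all wanted labels at once.
    p = classifier_probs(generator(z))
    prior = np.exp(-0.5 * z @ z)
    return prior * np.prod([p[k] for k in wanted])

def mh_sample(n_steps=5000, step=0.5):
    # Random-walk Metropolis-Hastings chain over the latent space.
    z = rng.standard_normal(2)
    draws = []
    for _ in range(n_steps):
        z_prop = z + step * rng.standard_normal(2)
        # Symmetric proposal, so accept z' with probability min(1, pi(z')/pi(z)).
        if rng.random() < target_density(z_prop) / max(target_density(z), 1e-300):
            z = z_prop
        draws.append(generator(z))
    return np.array(draws)

samples = mh_sample()
print("average generated sample (last 1000 draws):", samples[-1000:].mean(axis=0))

In this sketch the chain drifts toward latent codes whose generated samples score highly under every wanted classifier, which is the intuition behind drawing multi-label data from models trained with only single positive labels.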

