Generate, Segment, and Refine: Towards Generic Manipulation Segmentation

Authors

  • Peng Zhou University of Maryland, College Park
  • Bor-Chun Chen University of Maryland, College Park
  • Xintong Han Huya Inc
  • Mahyar Najibi University of Maryland, College Park
  • Abhinav Shrivastava University of Maryland, College Park
  • Ser-Nam Lim Facebook
  • Larry Davis University of Maryland, College Park

DOI:

https://doi.org/10.1609/aaai.v34i07.7007

Abstract

Detecting manipulated images has become a significant emerging challenge. The advent of image sharing platforms and the easy availability of advanced photo editing software have resulted in large quantities of manipulated images being shared on the internet. While the intent behind such manipulations varies widely, concern about the spread of false news and misinformation is growing. Current state-of-the-art methods for detecting these manipulated images suffer from a lack of training data due to the laborious labeling process. In this paper, we address this problem by introducing a manipulated image generation process that creates true positives from currently available datasets. Drawing from traditional work on image blending, we propose a novel generator for creating such examples. In addition, we propose to further create examples that force the algorithm to focus on boundary artifacts during training. Strong experimental results validate our proposal.
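The abstract describes generating true positives by compositing content from one image into another and then emphasizing the boundary of the spliced region during training. The following is a minimal, illustrative sketch of that idea (not the paper's actual generator or architecture): it splices a donor patch into a host image, records the ground-truth manipulation mask, and derives the boundary pixels that a boundary-focused loss could supervise. The function names `splice` and `boundary_mask` are hypothetical, chosen here for illustration.

```python
# Illustrative sketch only: a copy-paste splice plus a boundary mask,
# mimicking in miniature the kind of training example the paper generates.
# Images are 2-D lists of grayscale pixel values for simplicity.

def splice(host, donor, top, left, h, w):
    """Copy an h-by-w patch of `donor` into `host` at (top, left).

    Returns (forged, mask), where `mask` is 1 on spliced pixels
    and 0 elsewhere -- the segmentation ground truth.
    """
    forged = [row[:] for row in host]
    mask = [[0] * len(host[0]) for _ in host]
    for i in range(h):
        for j in range(w):
            forged[top + i][left + j] = donor[top + i][left + j]
            mask[top + i][left + j] = 1
    return forged, mask


def boundary_mask(mask):
    """Mark spliced pixels that border a non-spliced pixel (4-neighborhood).

    These boundary pixels are where splicing artifacts concentrate,
    so a training objective can weight them more heavily.
    """
    rows, cols = len(mask), len(mask[0])
    edge = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j]:
                continue
            nbrs = [mask[a][b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < rows and 0 <= b < cols]
            # Edge pixel: touches a non-spliced neighbor or the image border.
            if len(nbrs) < 4 or any(n == 0 for n in nbrs):
                edge[i][j] = 1
    return edge


if __name__ == "__main__":
    host = [[0] * 6 for _ in range(6)]   # blank host image
    donor = [[9] * 6 for _ in range(6)]  # uniform donor image
    forged, mask = splice(host, donor, top=1, left=1, h=3, w=3)
    edge = boundary_mask(mask)
    # 9 spliced pixels; 8 of them form the ring around the single interior pixel.
    print(sum(map(sum, mask)), sum(map(sum, edge)))  # prints: 9 8
```

In practice the paper draws on traditional image blending rather than a hard copy-paste, precisely so that the generated positives exhibit the subtler boundary artifacts real forgeries do; this sketch only shows where the supervisory masks come from.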

Published

2020-04-03

How to Cite

Zhou, P., Chen, B.-C., Han, X., Najibi, M., Shrivastava, A., Lim, S.-N., & Davis, L. (2020). Generate, Segment, and Refine: Towards Generic Manipulation Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 13058-13065. https://doi.org/10.1609/aaai.v34i07.7007

Section

AAAI Technical Track: Vision