
Synthetic image translation for football players pose estimation

EasyChair Preprint no. 785

17 pages · Date: February 20, 2019


In this paper, we present an approach to football player pose estimation on very low-resolution images. The camera recording a football match is placed far from the pitch so that it captures at least half of it; as a result, even with very high-resolution cameras, the image area covering any single player is very small. Variable weather conditions, shadows, and reflections make the task even harder, and such images are very difficult for humans to annotate. In our research we therefore assume no manually annotated training data from our target distribution. Instead of manually annotating a large dataset, we write a simple Python script that renders synthetic images with perfect annotations. We then train a vanilla CycleGAN to transform the raw synthetic images into more realistic ones, and use the transformed images to train a CPN model. Without bells and whistles, we achieve precision on our images similar to that of the same CPN model trained on the COCO keypoints dataset.
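The key idea above is that rendered images come with annotations for free: because the renderer places every joint itself, the keypoint labels are exact by construction. The sketch below illustrates this with a deliberately crude stick-figure renderer; the function name, image size, and keypoint set are all illustrative assumptions, not the authors' actual script.

```python
import numpy as np

def render_synthetic_player(h=64, w=32, seed=0):
    """Render a crude stick-figure 'player' on a noisy green background
    and return the image together with perfect keypoint annotations.
    Purely illustrative; the paper's script is not reproduced here."""
    rng = np.random.default_rng(seed)
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[..., 1] = 96 + rng.integers(0, 32, size=(h, w))  # pitch-like green

    # Keypoints in (row, col) order: the renderer decides these, so the
    # annotations are exact by construction.
    head = (h // 8, w // 2)
    hip = (h // 2, w // 2)
    lfoot = (7 * h // 8, w // 3)
    rfoot = (7 * h // 8, 2 * w // 3)
    keypoints = {"head": head, "hip": hip, "l_foot": lfoot, "r_foot": rfoot}

    def draw_line(p, q, color=(255, 255, 255)):
        # Rasterize a straight segment by sampling enough points to
        # cover every pixel between the two endpoints.
        n = max(abs(q[0] - p[0]), abs(q[1] - p[1])) + 1
        rows = np.linspace(p[0], q[0], n).round().astype(int)
        cols = np.linspace(p[1], q[1], n).round().astype(int)
        img[rows, cols] = color

    draw_line(head, hip)
    draw_line(hip, lfoot)
    draw_line(hip, rfoot)
    return img, keypoints
```

In the paper's pipeline, images like these would then be passed through the trained CycleGAN generator to make them look realistic before being used to train the CPN keypoint model.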

Keyphrases: deep convolutional neural network, image translation, pose estimation, synthetic dataset

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:785,
  author = {Michał Sypetkowski and Grzegorz Sarwas and Tomasz Trzciński},
  title = {Synthetic image translation for football players pose estimation},
  howpublished = {EasyChair Preprint no. 785},
  year = {EasyChair, 2019}}