PifPaf: Composite Fields for Human Pose Estimation

Project description

We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
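The Laplace loss mentioned above lets the network predict a spread alongside each regressed coordinate, so confident predictions are penalized more for errors than uncertain ones. A minimal per-coordinate sketch of that idea (illustrative only, not the project's actual implementation, which also handles scale normalization and masking):

```python
import math

def laplace_loss(x_pred, x_true, b_pred):
    """Negative log-likelihood of a Laplace distribution:
    an L1 localization error scaled by the predicted spread b,
    plus a log(2b) term that keeps b from growing without bound."""
    return abs(x_pred - x_true) / b_pred + math.log(2.0 * b_pred)
```

For a fixed error, a small predicted spread (confidence) makes the L1 term dominate, while a large spread trades it for the log(2b) premium.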

@article{kreiss2019pifpaf,
  title = {PifPaf: Composite Fields for Human Pose Estimation},
  author = {Kreiss, Sven and Bertoni, Lorenzo and Alahi, Alexandre},
  journal = {CVPR},
  year = {2019},
  month = {3}
}

Demo

Example prediction: docs/coco/000000081988.jpg.skeleton.png

Image credit: “Learning to surf” by fotologic which is licensed under CC-BY-2.0.

Created with:

python -m openpifpaf.predict \
  --checkpoint outputs/resnet101block5-pifs-pafs-edge401-l1-190131-083451.pkl \
  data-mscoco/images/val2017/000000081988.jpg -o docs/coco/ --show

Install

Create a virtualenv. Use --system-site-packages so that openpifpaf.webcam can access a system-wide OpenCV 3 installation.

python3 -m venv venv3 --system-site-packages

Inside the virtualenv, install with the optional train and test dependencies:

pip install numpy cython
pip install 'openpifpaf[train,test]'

# from source:
pip install --editable '.[train,test]'

Interfaces

  • python -m openpifpaf.train --help

  • python -m openpifpaf.eval_coco --help

  • python -m openpifpaf.logs --help

  • python -m openpifpaf.predict --help

  • python -m openpifpaf.webcam --help

Pre-trained Networks

Download the pre-trained model files from the linked Google Drive folder and put them into your outputs folder.

Visualize logs:

python -m openpifpaf.logs \
  outputs/resnet50-pif-paf-rsmooth0.5-181209-192001.pkl.log \
  outputs/resnet101-pif-paf-rsmooth0.5-181213-224234.pkl.log \
  outputs/resnet152-pif-paf-l1-181230-201001.pkl.log

Train

See the datasets documentation for setup instructions. See studies.ipynb for previous studies.

Train a model:

python -m openpifpaf.train

# or refine a pre-trained model
python -m openpifpaf.train \
  --lr=1e-3 \
  --epochs=75 \
  --lr-decay 60 70 \
  --batch-size=8 \
  --basenet=resnet50block5 \
  --headnets pif paf \
  --square-edge=401 \
  --regression-loss=laplace \
  --lambdas 10 3 1 10 3 3 \
  --freeze-base=1

Every 5 minutes, check the directory for new snapshots to evaluate:

while true; do \
  CUDA_VISIBLE_DEVICES=0 find outputs/ -name "resnet101block5-pif-paf-l1-190109-113346.pkl.epoch???" -exec \
    python -m openpifpaf.eval_coco --checkpoint {} -n 500 --long-edge=641 --skip-existing \; \
  ; \
  sleep 300; \
done

Person Skeletons

Three variants are available: the standard COCO skeleton, a kinematic-tree skeleton, and a dense skeleton (see the skeleton images in the repository).

Created with python -m openpifpaf.data.
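For reference, the 17 keypoints that all three skeleton variants connect are the standard COCO 2017 set; the plain COCO variant uses the standard COCO skeleton edges. Listed below as plain Python data (the kinematic-tree and dense variants rewire the same keypoints differently):

```python
# Standard COCO 2017 keypoint names, in annotation order.
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle',
]

# Standard COCO skeleton as 1-indexed keypoint pairs.
COCO_SKELETON = [
    (16, 14), (14, 12), (17, 15), (15, 13), (12, 13),
    (6, 12), (7, 13), (6, 7), (6, 8), (7, 9), (8, 10),
    (9, 11), (2, 3), (1, 2), (1, 3), (2, 4), (3, 5),
    (4, 6), (5, 7),
]
```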

Video

Processing a video frame by frame from video.avi to video-pose.mp4 using ffmpeg:

ffmpeg -i video.avi -qscale:v 2 -vf scale=641:-1 -f image2 video-%05d.jpg
python -m openpifpaf.predict --checkpoint outputs/resnet101block5-pifs-pafs-edge401-l1-190213-100439.pkl video-*0.jpg
ffmpeg -framerate 24 -pattern_type glob -i 'video-*.jpg.skeleton.png' -vf scale=640:-1 -c:v libx264 -pix_fmt yuv420p video-pose.mp4
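The extract and re-assemble steps above can be driven from Python; a sketch that only builds the two ffmpeg command lines (paths, scales, and flags taken from the snippet above) for use with subprocess:

```python
def extract_cmd(video_in, pattern='video-%05d.jpg'):
    """ffmpeg command to split the input video into numbered JPEG frames."""
    return ['ffmpeg', '-i', video_in, '-qscale:v', '2',
            '-vf', 'scale=641:-1', '-f', 'image2', pattern]

def assemble_cmd(video_out, frame_glob='video-*.jpg.skeleton.png', fps=24):
    """ffmpeg command to re-assemble annotated frames into an H.264 video."""
    return ['ffmpeg', '-framerate', str(fps), '-pattern_type', 'glob',
            '-i', frame_glob, '-vf', 'scale=640:-1',
            '-c:v', 'libx264', '-pix_fmt', 'yuv420p', video_out]
```

Pass each list to subprocess.run(); the predict step in between stays as the command shown above.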

Evaluations

See the evaluation logs for a long list. The result below was produced with:

python -m openpifpaf.eval_coco \
  --checkpoint outputs/resnet101block5-pif-paf-edge401-190313-100107.pkl \
  --long-edge=641 --loader-workers=8

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.662
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.872
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.724
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.623
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.721
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.712
Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.895
Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.768
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.660
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.785
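When scripting many such evaluations, the pycocotools summary lines can be collected into structured records with a small helper; a sketch that assumes the line format shown above:

```python
import re

# Matches pycocotools summary lines such as:
# Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.662
_SUMMARY_RE = re.compile(
    r'Average (?:Precision|Recall)\s+\((AP|AR)\) @\[ IoU=([0-9.:]+)\s*\|'
    r' area=\s*(\w+) \| maxDets=\s*\d+ \] = ([0-9.]+)')

def parse_coco_summary(text):
    """Extract metric, IoU range, area and value from each summary line."""
    return [
        {'metric': m.group(1), 'iou': m.group(2),
         'area': m.group(3), 'value': float(m.group(4))}
        for m in _SUMMARY_RE.finditer(text)
    ]
```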

