FutureHuman3D: Forecasting Complex Long-Term 3D Human Behavior from Video Observations (CVPR'24)

Technical University of Munich · Google

Abstract

We present a generative approach to forecast long-term future human behavior in 3D, a fundamental task for many downstream applications, requiring only weak supervision from readily available 2D human action data.
The required ground-truth data is hard to capture in 3D (mocap suits, expensive setups) but easy to acquire in 2D (simple RGB cameras). Thus, we design our method to only require 2D RGB data while being able to generate 3D human motion sequences. We use a differentiable 2D projection scheme in an autoregressive manner for weak supervision, and an adversarial loss for 3D regularization.
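The core of the weak supervision is that predicted 3D poses are projected to 2D and compared against 2D annotations. A minimal numpy sketch of this reprojection loss is below; the function and variable names and the pinhole intrinsics are illustrative assumptions, and the actual method implements the projection differentiably in an autograd framework so that gradients from the 2D loss flow back to the 3D pose prediction.

```python
import numpy as np

def project_points(joints_3d, K):
    """Pinhole projection of 3D joints (N, 3) in camera coordinates
    to 2D pixel coordinates (N, 2) using intrinsics K (3, 3)."""
    proj = joints_3d @ K.T             # homogeneous image coordinates
    return proj[:, :2] / proj[:, 2:3]  # perspective divide

def weak_2d_loss(pred_joints_3d, gt_joints_2d, K):
    """Mean squared reprojection error against 2D keypoints only:
    3D supervision is never required."""
    pred_2d = project_points(pred_joints_3d, K)
    return np.mean(np.sum((pred_2d - gt_joints_2d) ** 2, axis=-1))

# Toy example: one predicted 3D skeleton, supervised by 2D keypoints.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pred = np.array([[0.0,  0.0, 2.0],   # joint on the optical axis, 2m away
                 [0.1, -0.2, 2.5]])
gt_2d = project_points(pred, K)      # perfect prediction -> zero loss
print(weak_2d_loss(pred, gt_2d, K))
```

A joint on the optical axis projects to the principal point (320, 240), and a prediction matching the 2D ground truth yields zero loss, which is the signal the autoregressive generator is trained against.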
Our method predicts long and complex behavior sequences (e.g. cooking, assembly) consisting of multiple sub-actions. We tackle this in a semantically hierarchical manner, jointly predicting high-level coarse action labels together with their low-level fine-grained realizations as characteristic 3D human poses. We observe that these two action representations are coupled in nature, and joint prediction benefits both action and pose forecasting.
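The coupling of coarse action labels and fine-grained 3D poses can be sketched as a single autoregressive step with a shared representation feeding two prediction heads. The sketch below is an illustrative toy (random weights, hypothetical dimensions); the real model is a learned network, but the structure shows how each modality conditions the other at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS, POSE_DIM, HIDDEN = 10, 24 * 3, 64  # hypothetical sizes

# Hypothetical weights; in the actual model these are learned parameters.
W_h = rng.standard_normal((NUM_ACTIONS + POSE_DIM, HIDDEN)) * 0.01
W_action = rng.standard_normal((HIDDEN, NUM_ACTIONS)) * 0.01
W_pose = rng.standard_normal((HIDDEN, POSE_DIM)) * 0.01

def joint_step(prev_action_onehot, prev_pose):
    """One autoregressive step: a shared hidden state conditions both
    the coarse action head and the fine-grained 3D pose head, so the
    two predictions can inform each other."""
    x = np.concatenate([prev_action_onehot, prev_pose])
    h = np.tanh(x @ W_h)
    action_logits = h @ W_action
    next_pose = prev_pose + h @ W_pose   # predict a pose residual
    return action_logits, next_pose

# Roll out a short future sequence of (action, characteristic pose) pairs.
action = np.eye(NUM_ACTIONS)[0]
pose = np.zeros(POSE_DIM)
for _ in range(5):
    logits, pose = joint_step(action, pose)
    action = np.eye(NUM_ACTIONS)[np.argmax(logits)]
```

Feeding both the previous action and the previous pose into one shared state is one simple way to realize the claimed coupling: the action sequence constrains plausible poses, while the pose trajectory disambiguates the next action.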
Our experiments demonstrate the complementary nature of joint action and 3D pose prediction: our joint approach outperforms each task treated individually, enables robust longer-term sequence prediction, and outperforms alternative approaches to forecast actions and characteristic 3D poses.


Teaser

Teaser. We propose a novel generative approach to model long-term future human behavior by jointly forecasting a sequence of coarse action labels and their concrete realizations as 3D body poses. For broad applicability, our autoregressive method only requires weak supervision and past observations in the form of 2D RGB video data, together with a database of uncorrelated 3D human poses.

Video

You can download a high-quality version of this video here.

Paper


Bibtex

If you find this work useful for your research, please consider citing:

@inproceedings{diller2023futurehuman3d,
    title={FutureHuman3D: Forecasting Complex Long-Term 3D Human Behavior from Video Observations},
    author={Diller, Christian and Funkhouser, Thomas and Dai, Angela},
    booktitle={Proc. Computer Vision and Pattern Recognition (CVPR), IEEE},
    year={2024}
}