[About Me]

Libin Liu | 刘利斌

Assistant Professor

School of Intelligence Science and Technology
Peking University

Email: libin.liu [at] pku.edu.cn

I am an assistant professor at the School of Intelligence Science and Technology, Peking University. Before joining Peking University, I was the Chief Scientist of DeepMotion Inc. I was a postdoctoral research fellow at Disney Research and the University of British Columbia. I received my Ph.D. in computer science and my B.S. in mathematics and physics from Tsinghua University.

I am interested in character animation, physics-based simulation, motion control, and related areas such as reinforcement learning, deep learning, and robotics. Much of my work focuses on realizing agile human motions on simulated characters and robots.

[New] I am seeking highly motivated PhD students to start in Fall 2025, with a focus on embodied AI, particularly humanoid robotics and physics-based characters. If you are interested, please email me your CV and a statement of research interests. To qualify, applicants must hold a master's degree, or expect to obtain one, before Fall 2025.

[News]
[Projects]

MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations

Heyuan Yao, Zhenhua Song, Yuyang Zhou, Tenglong Ao, Baoquan Chen, Libin Liu†

We present MoConVQ, a unified framework that enables simulated avatars to acquire diverse skills from large, unstructured datasets. Leveraging a rich and scalable discrete skill representation, MoConVQ supports a broad range of applications, including pose estimation, interactive control, text-to-motion generation, and the integration of motion generation with Large Language Models (LLMs).

ACM Transactions on Graphics, Vol 43 Issue 4, Article 144 (SIGGRAPH 2024).

Semantic Gesticulator: Semantics-aware Co-speech Gesture Synthesis

Zeyi Zhang*, Tenglong Ao*, Yuyao Zhang*, Qingzhe Gao, Chuan Lin, Baoquan Chen, Libin Liu†

We introduce Semantic Gesticulator, a novel framework designed to synthesize realistic co-speech gestures with strong semantic correspondence. Semantic Gesticulator fine-tunes an LLM to retrieve suitable semantic gesture candidates from a motion library. Combined with a novel, GPT-style generative model, the generated gesture motions demonstrate strong rhythmic coherence and semantic appropriateness.

ACM Transactions on Graphics, Vol 43 Issue 4, Article 136 (SIGGRAPH 2024).

Cinematographic Camera Diffusion Model

Hongda Jiang, Xi Wang, Marc Christie, Libin Liu, Baoquan Chen

We present a cinematographic camera diffusion model that uses a transformer-based architecture to handle temporality and exploits the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions.

Computer Graphics Forum, 2024 (Eurographics 2024).

MuscleVAE: Model-Based Controllers of Muscle-Actuated Characters

Yusen Feng, Xiyan Xu, Libin Liu†

We present a novel framework for simulating and controlling muscle-actuated characters. This framework generates biologically plausible motion and accounts for fatigue effects using model-based generative controllers.

ACM SIGGRAPH Asia 2023 Conference Track.

Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors

Yiming Wang*, Qingzhe Gao*, Libin Liu†, Lingjie Liu†, Christian Theobalt, Baoquan Chen† (*: equal contribution, †: corresponding author)

We propose a new method for learning a generalized animatable neural human representation from a sparse set of multi-view imagery of multiple persons.

IEEE Transactions on Visualization and Computer Graphics, 2023

MotionBERT: A Unified Perspective on Learning Human Motion Representations

Wentao Zhu, Xiaoxuan Ma, Zhaoyang Liu, Libin Liu, Wayne Wu, Yizhou Wang†

We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources.

ICCV 2023

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents

Tenglong Ao, Zeyi Zhang, Libin Liu†

We introduce GestureDiffuCLIP, a CLIP-guided, co-speech gesture synthesis system that creates stylized gestures in harmony with speech semantics and rhythm using arbitrary style prompts. Our highly adaptable system supports style prompts in the form of short texts, motion sequences, or video clips and provides body part-specific style control.

ACM Transactions on Graphics, Vol 42 Issue 4, Article 40 (SIGGRAPH 2023). (SIGGRAPH 2023 Honorable Mention Award)

Control VAE: Model-Based Learning of Generative Controllers for Physics-Based Characters

Heyuan Yao, Zhenhua Song, Baoquan Chen, Libin Liu†

We introduce Control VAE, a novel model-based framework for learning generative motion control policies, which allows high-level task policies to reuse various skills to accomplish downstream control tasks.

ACM Transactions on Graphics, Vol 41 Issue 6, Article 183 (SIGGRAPH Asia 2022).

Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings

Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, Libin Liu†

We present a novel co-speech gesture synthesis method that achieves convincing results in both rhythm and semantics.

ACM Transactions on Graphics, Vol 41 Issue 6, Article 209 (SIGGRAPH Asia 2022). (SIGGRAPH Asia 2022 Best Paper Award)

Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users

Yongjing Ye, Libin Liu†, Lei Hu, Shihong Xia†

We present a method for real-time full-body tracking using three VR trackers provided by a typical VR system: one HMD (head-mounted display) and two hand-held controllers.

Computer Graphics Forum, Vol 41 Issue 8, Pages 183-194 (SCA 2022).

Learning to Use Chopsticks in Diverse Gripping Styles

Zeshi Yang, KangKang Yin, Libin Liu†

We propose a physics-based learning and control framework for using chopsticks. Robust hand controls for multiple hand morphologies and holding positions are first learned through Bayesian optimization and deep reinforcement learning. For tasks such as object relocation, the low-level controllers track collision-free trajectories synthesized by a high-level motion planner.

ACM Transactions on Graphics, Vol 41 Issue 4, Article 95 (SIGGRAPH 2022).

Camera Keyframing with Style and Control

Hongda Jiang, Marc Christie, Xi Wang, Libin Liu, Bin Wang, Baoquan Chen

We present a tool that enables artists to synthesize camera motions following a learned camera behavior while enforcing user-designed keyframes as constraints along the sequence.

ACM Transactions on Graphics, Vol 40 Issue 6, Article 209 (SIGGRAPH Asia 2021).

Learning Skeletal Articulations With Neural Blend Shapes

Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga Sorkine-Hornung, Baoquan Chen

We present a technique for articulating 3D characters with a pre-defined skeletal structure and high-quality deformation, using neural blend shapes: corrective, pose-dependent shapes that improve deformation quality in joint regions.

ACM Transactions on Graphics, Vol 40 Issue 4, Article 130 (SIGGRAPH 2021).

Unsupervised Co-part Segmentation through Assembly

Qingzhe Gao, Bin Wang, Libin Liu, Baoquan Chen

We propose an unsupervised learning approach for co-part segmentation from images.

Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139:3576-3586, 2021.

Learning Basketball Dribbling Skills Using Trajectory Optimization and Deep Reinforcement Learning

Libin Liu, Jessica K. Hodgins

We present a method based on trajectory optimization and deep reinforcement learning for learning robust controllers for various basketball dribbling skills, such as dribbling between the legs, running, and crossovers.

ACM Transactions on Graphics, Vol 37 Issue 4, Article 142 (SIGGRAPH 2018).

Learning to Schedule Control Fragments for Physics-Based Characters Using Deep Q-Learning

Libin Liu, Jessica K. Hodgins

We present a deep Q-learning based method for learning a scheduling scheme that reorders short control fragments as necessary at runtime to achieve robust control of challenging skills such as skateboarding.

ACM Transactions on Graphics, Vol 36 Issue 3, Article 29. (presented at SIGGRAPH 2017)

Guided Learning of Control Graphs for Physics-Based Characters

Libin Liu, Michiel van de Panne, KangKang Yin

We present a method for learning robust control graphs that support real-time physics-based simulation of multiple characters, each capable of a diverse range of movement skills.

ACM Transactions on Graphics, Vol 35, Issue 2, Article 29. (presented at SIGGRAPH 2016)

Learning Reduced-Order Feedback Policies for Motion Skills

Kai Ding, Libin Liu, Michiel van de Panne, KangKang Yin

We introduce a method for learning low-dimensional linear feedback strategies for the control of physics-based animated characters around a given reference trajectory.

Proc. ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2015 (SCA Best Paper Award)

Deformation Capture and Modeling of Soft Objects

Bin Wang, Longhua Wu, KangKang Yin, Uri Ascher, Libin Liu, Hui Huang.

We present a data-driven method for deformation capture and modeling of general soft objects.

ACM Transactions on Graphics, Vol 34, Issue 4, Article 94 (SIGGRAPH 2015)

Improving Sampling-based Motion Control

Libin Liu, KangKang Yin, Baining Guo.

We address several limitations of the sampling-based motion control method. A variety of highly agile motions, ranging from stylized walking and dancing to gymnastic and martial arts routines, can now be easily reconstructed.

Computer Graphics Forum 34(2) (Eurographics 2015).

Simulation and Control of Skeleton-driven Soft Body Characters

Libin Liu, KangKang Yin, Bin Wang, Baining Guo.

We present a physics-based framework for simulation and control of human-like skeleton-driven soft body characters. We propose a novel pose-based plasticity model to achieve large skin deformation around joints. We further reconstruct controls from reference trajectories captured from human subjects by augmenting a sampling-based algorithm.

ACM Transactions on Graphics, Vol 32, Issue 6, Article 215 (SIGGRAPH Asia 2013)

Terrain Runner: Control, Parameterization, Composition, and Planning for Highly Dynamic Motions

Libin Liu, KangKang Yin, Michiel van de Panne, Baining Guo.

We present methods for the control, parameterization, composition, and planning for highly dynamic motions. More specifically, we learn the skills required by real-time physics-based avatars to perform parkour-style fast terrain crossing using a mix of running, jumping, speed-vaulting, and drop-rolling.

ACM Transactions on Graphics, Vol 31, Issue 6, Article 154 (SIGGRAPH Asia 2012)

Sampling-based Contact-rich Motion Control

Libin Liu, KangKang Yin, Michiel van de Panne, Tianjia Shao, Weiwei Xu.

Given a motion capture trajectory, we propose to extract its control by randomized sampling.

ACM Transactions on Graphics, Vol 29, Issue 4, Article 128 (SIGGRAPH 2010)

[Professional Activities]
Editorial Board:
  • 2024 - now: IEEE Transactions on Visualization and Computer Graphics (TVCG)
Program Committee:
  • ACM SIGGRAPH North America 2019, 2020, 2024
  • ACM SIGGRAPH Asia 2022, 2023
  • Eurographics 2024
  • Pacific Graphics 2018, 2019, 2022, 2024
  • SCA 2015-2019, 2021-2024
  • MIG 2014, 2016-2019, 2022
  • Eurographics Short Papers 2020, 2021
  • SIGGRAPH Asia 2014 Posters and Technical Briefs
  • CASA (Computer Animation and Social Agents) 2017, 2023
  • Graphics Interface 2023
  • CAD/Graphics 2017, 2019
Paper Reviewing (partial list):
  • SIGGRAPH NA/Asia
  • ACM Transactions on Graphics (TOG)
  • IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  • IEEE Transactions on Visualization and Computer Graphics (TVCG)
  • International Conference on Computer Vision (ICCV)
  • Eurographics (European Association for Computer Graphics)
  • Pacific Graphics
  • Computer Graphics Forum
  • IEEE International Conference on Robotics and Automation (ICRA)
  • ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA)
  • ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG)
  • Computer Animation and Social Agents (CASA)
  • Graphics Interface
  • Computers & Graphics
  • Graphical Models