Animation Blend Spaces without Triangulation
Quaternion Weighted Average
BVHView
Dead Blending Node in Unreal Engine
Propagating Velocities through Animation Systems
Cubic Interpolation of Quaternions
Dead Blending
Perfect Tracking with Springs
Creating Looping Animations from Motion Capture
My Favourite Things
Inertialization Transition Cost
Scalar Velocity
Tags, Ranges and Masks
Fitting Code Driven Displacement
atoi and Trillions of Whales
SuperTrack: Motion Tracking for Physically Simulated Characters using Supervised Learning
Joint Limits
Code vs Data Driven Displacement
Exponential Map, Angle Axis, and Angular Velocity
Encoding Events for Neural Networks
Visualizing Rotation Spaces
Spring-It-On: The Game Developer's Spring-Roll-Call
Interviewing Advice from the Other Side of the Table
Saguaro
Learned Motion Matching
Why Can't I Reproduce Their Results?
Latinendian vs Arabendian
Machine Learning, Kolmogorov Complexity, and Squishy Bunnies
Subspace Neural Physics: Fast Data-Driven Interactive Simulation
Software for Rent
Naraleian Caterpillars
The Scientific Method is a Virus
Local Minima, Saddle Points, and Plateaus
Robust Solving of Optical Motion Capture Data by Denoising
Simple Concurrency in Python
The Software Thief
ASCII : A Love Letter
My Neural Network isn't working! What should I do?
Phase-Functioned Neural Networks for Character Control
17 Line Markov Chain
14 Character Random Number Generator
Simple Two Joint IK
Generating Icons with Pixel Sorting
Neural Network Ambient Occlusion
Three Short Stories about the East Coast Main Line
The New Alphabet
"The Color Munifni Exists"
A Deep Learning Framework For Character Motion Synthesis and Editing
The Halting Problem and The Moral Arbitrator
The Witness
Four Seasons Crisp Omelette
At the Bottom of the Elevator
Tracing Functions in Python
Still Things and Moving Things
water.cpp
Making Poetry in Piet
Learning Motion Manifolds with Convolutional Autoencoders
Learning an Inverse Rig Mapping for Character Animation
Infinity Doesn't Exist
Polyconf
Raleigh
The Skagerrak
Printing a Stack Trace with MinGW
The Border Pines
You could have invented Parser Combinators
Ready for the Fight
Earthbound
Turing Drawings
Lost Child Announcement
Shelter
Data Science, how hard can it be?
Denki Furo
In Defence of the Unitype
Maya Velocity Node
Sandy Denny
What type of Machine is the C Preprocessor?
Which AI is more human?
Gone Home
Thoughts on Japan
Can Computers Think?
Counting Sheep & Infinity
How Nature Builds Computers
Painkillers
Correct Box Sphere Intersection
Avoiding Shader Conditionals
Writing Portable OpenGL
The Only Cable Car in Ireland
Is the C Preprocessor Turing Complete?
The aesthetics of code
Issues with SDL on iOS and Android
How I learned to stop worrying and love statistics
PyMark
AutoC Tools
Scripting xNormal with Python
Six Myths About Ray Tracing
The Web Giants Will Fall
PyAutoC
The Pirate Song
Dear Esther
Unsharp Anti Aliasing
The First Boy
Parallel programming isn't hard, optimisation is.
Skyrim
Recognizing a language is solving a problem
Could an animal learn to program?
RAGE
Pure Depth SSAO
Synchronized in Python
3d Printing
Real Time Graphics is Virtual Reality
Painting Style Renderer
A very hard problem
Indie Development vs Modding
Corange
3ds Max PLY Exporter
A Case for the Technical Artist
Enums
Scorpions have won evolution
Dirt and Ashes
Lazy Python
Subdivision Modelling
The Owl
Mouse Traps
Updated Art Reel
Tech Reel
Graphics Aren't the Enemy
On Being A Games Artist
The Bluebird
Everything2
Duck Engine
Boarding Preview
Sailing Preview
Exodus Village Flyover
Art Reel
LOL I DREW THIS DRAGON
One Cat Just Leads To Another

Publications



SuperTrack: Motion Tracking for Physically Simulated Characters using Supervised Learning

ACM SIGGRAPH Asia '21

Levi Fussell, Kevin Bergamin, Daniel Holden

Webpage • Paper • Video • Article

In this research we present a method for motion tracking of physically simulated characters which relies on supervised learning rather than reinforcement learning. To achieve this we train a world-model to predict the movements of the physically simulated character and use it as an approximate differentiable simulator through which a policy can be learned directly. Compared to previous methods, our approach is faster to train, has better animation quality, and scales to much larger databases of animation.
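
To make the idea concrete, here is a minimal sketch in PyTorch of the general pattern of learning a policy by backpropagating through a learned world model. Everything here (network sizes, the random stand-in data, the tracking loss) is an illustrative assumption and not the code from the paper, where the world model would first be trained to predict the simulation before the policy is optimised through it.

import torch
import torch.nn as nn

state_dim, action_dim = 8, 4

# Stand-in world model: in practice this would first be trained on recorded
# simulation rollouts so that it predicts the simulated character's next state.
world_model = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                            nn.Linear(64, state_dim))
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, action_dim))

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(1000):
    state = torch.randn(32, state_dim)    # batch of simulated start states (random stand-ins)
    target = torch.randn(32, state_dim)   # corresponding kinematic target states
    loss = 0.0
    for t in range(8):                    # short rollout through the world model
        action = policy(state)
        state = world_model(torch.cat([state, action], dim=-1))
        loss = loss + ((state - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                       # gradients flow back through the world model to the policy
    opt.step()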



Learned Motion Matching

ACM SIGGRAPH '20

Daniel Holden, Oussama Kanoun, Maksym Perepichka, Tiberiu Popa

Webpage • Paper • Video • Supplementary Video • Article • Code • Data

In this research we present a drop-in replacement for Motion Matching which has vastly lower memory usage and scales to large datasets. By replacing specific parts of the Motion Matching algorithm with learned alternatives we can emulate the behavior of Motion Matching while removing the reliance on animation data in memory. This retains the positive properties of Motion Matching such as control, quality, debuggability, and predictability, while overcoming its core limitation: memory usage.
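
For reference, the part of standard Motion Matching being replaced is essentially a brute-force nearest-neighbour search over a database of per-frame features held in memory. A rough sketch of that search (the feature layout and weighting are assumptions, not the paper's):

import numpy as np

def motion_matching_search(feature_db, query, weights):
    # feature_db: (num_frames, feature_dim) matching features for every animation frame
    # query: (feature_dim,) desired features, weights: (feature_dim,) per-feature scaling
    diff = (feature_db - query) * weights
    cost = (diff * diff).sum(axis=1)
    return int(np.argmin(cost))           # index of the best matching frame

In Learned Motion Matching this search and the database lookup behind it are emulated by small neural networks, so the feature and animation databases no longer need to be kept in memory.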



DReCon: Data-Driven Responsive Control of Physics-Based Characters

ACM SIGGRAPH Asia '19

Kevin Bergamin, Simon Clavet, Daniel Holden, James Richard Forbes

Paper • Video • Article • Data

In this research we present a method for interactive control of physically simulated characters. Unlike previous methods, which either track fixed animation clips or offer very unresponsive interactive control, we put an interactive Motion Matching based kinematic controller inside the training environment, controlled by a virtual player, and train a Reinforcement Learning policy to imitate this controller. This allows for fast and responsive interactive control appropriate for applications such as video games.
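
As a rough illustration of the imitation part (not the paper's exact reward), a tracking-style reward might score how closely the simulated character follows the pose produced by the kinematic Motion Matching controller:

import numpy as np

def imitation_reward(sim_pos, sim_vel, kin_pos, kin_vel, w_pos=2.0, w_vel=0.1):
    # sim_* are the simulated character's joint positions/velocities, kin_* the
    # kinematic controller's; all of shape (num_joints, 3). The weights are made up.
    pos_err = np.linalg.norm(sim_pos - kin_pos, axis=-1).mean()
    vel_err = np.linalg.norm(sim_vel - kin_vel, axis=-1).mean()
    return np.exp(-w_pos * pos_err) + np.exp(-w_vel * vel_err)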



Subspace Neural Physics: Fast Data-Driven Interactive Simulation

ACM SIGGRAPH/Eurographics SCA '19

Daniel Holden, Bang Chi Duong, Sayantan Datta, Derek Nowrouzezahrai

Webpage • Paper • Video • Article • GDC Talk

In this research we show a method of accelerating specific physics simulations using a data-driven approach that combines subspace simulation with a neural network trained to approximate the internal and external forces applied to a given object. Our method can achieve performance gains of between 300 and 5000 times on simulations it has been trained on, making it particularly suitable for games and other performance-sensitive interactive applications.
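
A minimal sketch of the subspace half of this idea, assuming a simple PCA basis built from example vertex animation. The basis size and data layout are illustrative assumptions; the learned force approximation would then operate on the small vectors of subspace coefficients rather than on full vertex positions.

import numpy as np

def build_subspace(vertex_frames, k=32):
    # vertex_frames: (num_frames, num_verts * 3) example frames of the simulated object
    mean = vertex_frames.mean(axis=0)
    _, _, Vt = np.linalg.svd(vertex_frames - mean, full_matrices=False)
    return mean, Vt[:k]                        # k basis vectors spanning the subspace

def to_subspace(x, mean, basis):               # full vertex positions -> k coefficients
    return basis @ (x - mean)

def from_subspace(z, mean, basis):             # k coefficients -> full vertex positions
    return mean + basis.T @ z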



Robust Solving of Optical Motion Capture Data by Denoising

ACM SIGGRAPH '18

Daniel Holden

Webpage • Paper • Video • Article

In this research we present a method for computing the locations of a character's joints from optical motion capture marker data. The method is extremely robust to errors in the input, completely removing the need for any manual cleaning of the marker data. The core component of our method is a deep neural network trained to map from optical markers to joint positions and rotations. To make this network robust to errors in the input, we train it on synthetic data produced from a large database of skeletal motion capture: the marker locations are reconstructed and then corrupted with a noise function designed to emulate the kinds of errors that appear in a typical optical motion capture setup.
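
A minimal sketch of the kind of corruption applied to the synthetic marker data. The probabilities, noise magnitudes, and specific corruptions here are placeholder assumptions, not the noise model from the paper.

import numpy as np

def corrupt_markers(markers, p_occlude=0.05, p_swap=0.02, jitter_std=0.01, rng=None):
    # markers: (num_markers, 3) reconstructed marker positions for one frame
    rng = np.random.default_rng() if rng is None else rng
    out = markers + rng.normal(0.0, jitter_std, markers.shape)   # small positional jitter
    occluded = rng.random(len(out)) < p_occlude
    out[occluded] = 0.0                                          # occluded markers zeroed out
    if len(out) >= 2 and rng.random() < p_swap:                  # occasional marker label swap
        i, j = rng.choice(len(out), size=2, replace=False)
        out[[i, j]] = out[[j, i]]
    return out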



Phase-Functioned Neural Networks for Character Control

ACM SIGGRAPH '17

Daniel Holden, Taku Komura, Jun Saito

Webpage • Paper • Slides • Video • Extras • Demo Code & Data • Talk • GDC Talk

This paper uses a new kind of neural network called a "Phase-Functioned Neural Network" to produce a character controller for games that generates high quality motion, requires very little memory, is very fast to compute, and can handle complex and difficult environments such as rough terrain.
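
The core trick is that the network's weights are not fixed but are produced by a function of a phase variable tracking where the character is in its locomotion cycle. A minimal sketch of that idea, using simple linear blending between stored weight sets; the interpolation scheme and number of weight sets here are illustrative, not the exact scheme from the paper.

import numpy as np

def phase_blend(weight_sets, phase):
    # weight_sets: list of arrays of identical shape, one per phase control point
    # phase: scalar in [0, 1) describing progress through the locomotion cycle
    n = len(weight_sets)
    x = phase * n
    i0, i1 = int(x) % n, (int(x) + 1) % n
    t = x - int(x)
    return (1.0 - t) * weight_sets[i0] + t * weight_sets[i1]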



Neural Network Ambient Occlusion

ACM SIGGRAPH Asia '16 Technical Briefs

Daniel Holden, Jun Saito, Taku Komura

Webpage • Paper • Video • Slides • Shader & Filters • Code & Data

This short paper uses Machine Learning to produce ambient occlusion from screen-space depth and normals. A large database of ambient occlusion is rendered offline and a neural network is trained to produce ambient occlusion from a small patch of screen-space information. This network is then converted into a fast runtime shader that runs in a single pass and can be used as a drop-in replacement for other screen-space ambient occlusion techniques.
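
A minimal sketch of the inference step, assuming a tiny one-hidden-layer network over a patch of screen-space depth and normals. The patch size, feature layout, and the network itself are assumptions for illustration, not the trained model or shader from the paper.

import numpy as np

def ao_at_pixel(depth, normals, x, y, w1, b1, w2, b2, half=2):
    # depth: (H, W), normals: (H, W, 3); assumes the pixel is away from the image border
    d = depth[y-half:y+half+1, x-half:x+half+1].ravel() - depth[y, x]
    n = normals[y-half:y+half+1, x-half:x+half+1].ravel()
    h = np.maximum(np.concatenate([d, n]) @ w1 + b1, 0.0)        # hidden ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))                  # occlusion value in [0, 1]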



A Deep Learning Framework For Character Motion Synthesis and Editing

ACM SIGGRAPH '16

Daniel Holden, Jun Saito, Taku Komura

Webpage • Paper • Video • Slides • Code • Data • Talk

In this work we show how to apply deep learning techniques to character animation data.

We present a number of applications, including very fast motion synthesis, natural motion editing, and style transfer, and describe the potential for future applications and work. Unlike previous methods, our technique requires no manual preprocessing of the data, instead learning as much as possible in an unsupervised manner.



Learning Motion Manifolds with Convolutional Autoencoders

ACM SIGGRAPH Asia '15 Technical Briefs

Daniel Holden, Jun Saito, Taku Komura, Thomas Joyce

Webpage • Paper • Video • Slides

In this work we show how a motion manifold can be constructed using deep convolutional autoencoders.

Once constructed, the motion manifold has many uses in animation research and machine learning. It can be used to fix corrupted motion data, fill in missing motion data, and naturally interpolate between different motions or measure the distance between them.
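
A minimal sketch of the kind of network involved and of how a corrupted clip can be cleaned by projecting it through the trained model. The channel count, kernel sizes, and layer widths are illustrative assumptions, not the architecture from the paper.

import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Conv1d(73, 256, kernel_size=25, padding=12), nn.ReLU(),   # encode motion channels over time
    nn.MaxPool1d(kernel_size=2),                                 # compressed "manifold" representation
    nn.Upsample(scale_factor=2),                                 # decode
    nn.Conv1d(256, 73, kernel_size=25, padding=12),
)

# After training the autoencoder to reconstruct clean motion, a corrupted clip
# laid out as (batch, channels, frames) can be "fixed" by projecting it through the network.
corrupted = torch.randn(1, 73, 240)
with torch.no_grad():
    cleaned = autoencoder(corrupted)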



Learning an Inverse Rig Mapping for Character Animation

ACM SIGGRAPH/Eurographics SCA '15

Daniel Holden, Jun Saito, Taku Komura

Webpage • Paper • Video • Slides • Journal Paper

In this work we present a technique for mapping skeletal joint positions, such as those found via motion capture, onto rig controls: the controls used by animators in keyframed animation environments.

This technique performs the mapping in real time, allowing artistic tools that work in the space of joint positions to be used seamlessly by keyframing artists, a big step toward applying many existing animation tools to character animation.
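
A minimal sketch of the mapping idea, assuming example poses exported from the rig are available as paired (joint position, rig control) training data. Linear least squares is used here purely for illustration; it is not the regression model from the paper.

import numpy as np

def fit_inverse_rig(joint_examples, control_examples):
    # joint_examples: (n, num_joints * 3) joint positions sampled from the rig
    # control_examples: (n, num_controls) rig control values producing those poses
    X = np.hstack([joint_examples, np.ones((len(joint_examples), 1))])   # add bias term
    W, *_ = np.linalg.lstsq(X, control_examples, rcond=None)
    return lambda joints: np.hstack([joints, np.ones((len(joints), 1))]) @ W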


Other Publications

Google Scholar

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

Computer Graphics Forum '23 • Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F. Troje, Marc-André Carbonneau

Fast Neural Style Transfer for Motion Data

IEEE Computer Graphics and Applications '17 • Daniel Holden, Ikhsanul Habibie, Taku Komura, Ikuo Kusajima

Carpet unrolling for character control on uneven terrain

ACM SIGGRAPH/Eurographics MIG '15 • Mark Miller, Daniel Holden, Rami Al-Ashqar, Christophe Dubach, Kenny Mitchell, Taku Komura

A Recurrent Variational Autoencoder for Human Motion Synthesis

British Machine Vision Conference '17 • Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, Taku Komura

Scanning and animating characters dressed in multiple-layer garments

The Visual Computer '17 • Pengpeng Hu, Taku Komura, Daniel Holden, Yueqi Zhong
