Watch our webinar on the DDH pipeline
Integration with Character Creator
DDH avatars are now based on Character Creator's Headshot plug-in, which generates fully rigged digital doubles, reducing the amount of work needed to build and rig the model. Only one 3D scan is required, and it can be produced with a structured light scanner or an iPhone 12.
One-camera performance capture
That's right! All you need is a single standard 2K camera (even a smartphone) to capture your subject's facial performance. Our patented technology does the rest. You create your game-engine animation assets (FBX files) with standard retargeting software such as Faceware, and the DDH process creates a video-as-texture layer locked to and synced with the 3D model's animations.
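For a sense of how that sync works in practice, here is a minimal sketch (illustrative Python only, not part of the DDH toolchain) of mapping an animation timestamp to the video frame that should be bound as the face texture:

```python
# Illustrative only: keeping a video-as-texture stream locked to an FBX
# animation timeline. Function and parameter names are hypothetical.

def video_frame_for_animation_time(anim_time_s: float,
                                   video_fps: float = 30.0,
                                   video_frame_count: int = 900) -> int:
    """Map an animation timestamp (seconds) to the index of the video frame
    that should be sampled as the face texture, clamped to the clip length."""
    frame = int(round(anim_time_s * video_fps))
    return max(0, min(frame, video_frame_count - 1))

# Example: 2.5 s into the animation, a 30 fps capture should show frame 75.
assert video_frame_for_animation_time(2.5) == 75
```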
Simplified process
Our patented process of projecting video as a three-dimensional texture produces a perfectly synchronised mouth performance for speech, without lengthy keyframe animation clean-up.
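To illustrate the general idea of projecting video onto geometry (a textbook projective-texturing sketch, not our patented process), the snippet below maps a head-mesh vertex through the capture camera's view and projection matrices to the UV at which the current video frame would be sampled:

```python
# Illustrative projective texturing; matrices and conventions are assumed.
import numpy as np

def project_vertex_to_video_uv(vertex_ws: np.ndarray,
                               view: np.ndarray,
                               proj: np.ndarray) -> np.ndarray:
    """vertex_ws: (3,) world-space position; view, proj: 4x4 camera matrices.
    Returns (u, v) in [0, 1] for sampling the current video frame."""
    p = proj @ view @ np.append(vertex_ws, 1.0)   # to clip space
    ndc = p[:2] / p[3]                            # perspective divide -> [-1, 1]
    return (ndc + 1.0) * 0.5                      # remap to texture coordinates

# With an identity camera, a vertex at the origin maps to the image centre.
print(project_vertex_to_video_uv(np.zeros(3), np.eye(4), np.eye(4)))  # [0.5 0.5]
```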
Physically based rendering
Using PBR shading and lighting models to accurately represent real-world materials, our DDH process seamlessly combines animation and video. In run-time applications, our process requires only one draw call.
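As background, the sketch below evaluates a generic Cook-Torrance PBR term in which the per-pixel base colour is sampled from the current video frame; because the whole head shares one material, it can be drawn in a single draw call. This is a standard textbook model, not the actual DDH shader:

```python
# Generic Cook-Torrance (GGX) evaluation for one white light; illustrative only.
import numpy as np

def pbr_shade(albedo, n, v, l, roughness=0.5, f0=0.04):
    """albedo: RGB sampled from the video-as-texture; n, v, l: unit vectors
    (surface normal, view direction, light direction)."""
    h = (v + l) / np.linalg.norm(v + l)          # half vector
    ndl = max(np.dot(n, l), 1e-4)
    ndv = max(np.dot(n, v), 1e-4)
    ndh = max(np.dot(n, h), 0.0)
    hdv = max(np.dot(h, v), 0.0)
    a2 = roughness ** 4                          # alpha = roughness^2, squared
    d = a2 / (np.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)         # GGX NDF
    k = (roughness ** 2) / 2.0                   # one common Smith remapping
    g = (ndl / (ndl * (1 - k) + k)) * (ndv / (ndv * (1 - k) + k))  # masking
    f = f0 + (1.0 - f0) * (1.0 - hdv) ** 5       # Schlick Fresnel
    spec = d * g * f / (4.0 * ndl * ndv)
    diff = np.asarray(albedo) / np.pi            # Lambertian diffuse
    return (diff + spec) * ndl

# Example: a skin-toned texel lit and viewed head-on.
n = v = l = np.array([0.0, 0.0, 1.0])
print(pbr_shade(albedo=np.array([0.8, 0.6, 0.5]), n=n, v=v, l=l))
```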
Supported platforms
The DDH avatar can be assembled in both Unreal and Unity, or rendered in any animation software. Our characters can run in high-detail cinematics with path tracing and advanced shaders, and an optimised workflow lets them run in real-time scenarios, including mobile 6DoF (e.g. Oculus Quest).
Lightweight assets
Using video as texture, DDH avatars can have very low polygon counts without compromising image fidelity, leaving plenty of headroom for game or application design.