Jan 21, 2026
Announcing Clothing Data + Motion Capture for AI Development
Protege and Render Ready are excited to announce our partnership to deliver multimodal clothing datasets that pair high-fidelity garment captures with human motion. Together, we aim to significantly narrow the gap between how clothes look and how they move in 3D environments, enabling more realistic games and simulations, stronger machine learning models, and safer, more accurate digital twins.

About Protege: Protege is the single platform for real-world AI training data. We aggregate data across industries to help cutting-edge model builders train and evaluate their models, with sources ranging from media to healthcare to motion capture and more.
In particular, we specialize in combining multiple modalities from different data partners to generate novel datasets, such as combining Render Ready’s digitized, production-ready clothing data with professional-grade motion capture data.
About Render Ready: Render Ready is focused on a singular mission: collecting the most accurate, ground-truth garment data available. They specialize in acquiring and processing high-fidelity, production-ready data sourced directly from real-world clothing. This isn't just data; it's the essential, foundational layer that AI builders need to train models. Their meticulous focus on real-world accuracy is the critical difference that allows machine learning to finally understand and replicate how clothing actually moves, folds, and interacts on a body in motion.
If your team needs production-ready 3D assets, measurements, and metadata aligned to real human motion, the Protege media and motion capture team is ready to talk more about how these multimodal datasets can help drive your AI roadmap.
To learn more, check out our case study examples below! Interested in this unique data offering? Fill out our data access form here, and our team will be in touch.
CASE STUDY: What Protege + Render Ready Unlocks
Multimodal training data: Combines high-fidelity clothing datasets (photos, scans, measurements, materials) with human motion capture to model how garments behave on real bodies in motion.
Real-world variability at scale: Captures diverse garments and dynamic states (zipped/unzipped, sleeves rolled, different fits) to reflect how clothes actually look, move, and wrinkle.
Production-ready formats: Delivers point clouds, textured 3D assets, and structured metadata (CSV/Excel) in the formats teams already use (see the sketch after this list).
Faster from idea to implementation: Clear data cataloging and examples shorten the path from exploration to integration in engines and pipelines.
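To make the formats point concrete, here is a minimal Python sketch of how a team might join the structured metadata with the 3D assets on disk. The directory layout and column names (garment_id, mesh_file, category, material) are hypothetical placeholders, not the actual delivery schema.

```python
import csv
from pathlib import Path

# Hypothetical delivery layout: one metadata row per captured garment,
# with asset filenames relative to the dataset root. Column names
# (garment_id, mesh_file, category, material) are illustrative
# placeholders, not the actual schema.
DATASET_ROOT = Path("protege_render_ready")

def load_garment_index(metadata_csv: Path) -> list[dict]:
    """Read the per-garment metadata CSV and attach resolved asset paths."""
    records = []
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            row["mesh_path"] = DATASET_ROOT / "meshes" / row["mesh_file"]
            row["photo_dir"] = DATASET_ROOT / "photos" / row["garment_id"]
            records.append(row)
    return records

index = load_garment_index(DATASET_ROOT / "garments.csv")
# Example filter: long coats with a recorded material composition.
coats = [r for r in index if r["category"] == "coat" and r["material"]]
```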
Who this is for:
Gaming and simulation teams: More realistic, performant cloth behavior for characters; assets aligned to runtime needs.
ML and research teams: High-quality, labeled, dynamic garment data to train models for cloth simulation, pose-conditioned deformation, and embodied AI.
Robotics and digital twin builders: Safer, more accurate human–robot and environment simulations where clothing affects interaction and occlusion.
Apparel and manufacturing innovators: Precise measurements, materials, and 3D captures to inform fit, prototyping, QA, and virtual try-on.
Example use cases
Wrinkle/fit prediction conditioned on pose and body shape (see the sketch after this list)
Runtime cloth deformation learned from diverse garment states
Motion-aware garment testing (e.g., stairs, reaching, sleeve rolling)
Cross-modal studies linking scans, photos, and metadata for robust models
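As a sketch of the first use case above, wrinkle/fit prediction can be framed as a network that maps pose and body-shape parameters to per-vertex garment displacements. The PyTorch toy below assumes SMPL-style dimensions (72 pose values, 10 shape coefficients) and a fixed garment topology; these are illustrative assumptions, not the dataset's schema.

```python
import torch
import torch.nn as nn

class PoseConditionedDeformer(nn.Module):
    """Toy MLP mapping pose and body-shape parameters to per-vertex
    garment displacements from a rest-pose mesh. All dimensions are
    illustrative assumptions, not the dataset's actual schema."""

    def __init__(self, pose_dim=72, shape_dim=10, num_vertices=5000):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(pose_dim + shape_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, num_vertices * 3),  # xyz offset per vertex
        )

    def forward(self, pose, shape):
        x = torch.cat([pose, shape], dim=-1)
        return self.net(x).view(-1, self.num_vertices, 3)

model = PoseConditionedDeformer()
pose = torch.randn(4, 72)     # e.g., SMPL-style joint rotations
shape = torch.randn(4, 10)    # body-shape coefficients
offsets = model(pose, shape)  # (4, 5000, 3) displacements to supervise
```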
The case studies below dive deeper into each of these problems and show how Protege's combination of Render Ready's clothing data with proprietary motion capture datasets can address them.
Case Study #1: The Staircase Problem

The problem:
Complex motion: Hip and knee flexion, variable stride length, and changing contact points create challenging, non-periodic cloth behavior.
Garment–body interactions: Long coats catch on thighs, hems collide with knees, and sleeves ride up with shoulder elevation and wrist extension.
Edge cases that break pipelines: Button plackets gape, vents flare, zippers buckle, and layered garments self-intersect—often causing artifacts in engines.
Sparse, single-modality training data: Most datasets don't pair motion context (e.g., stair ascent) with garment state, fit, and materials in sufficient variety.
What builders need:
Pose-aware captures: Synchronized motion data with garment geometry and textures under real stair-climb kinematics.
Dynamic state coverage: Variations like coat open vs. closed, belt tightness, sleeve positions, and different step heights.
Production-ready formats: Clean meshes/point clouds, high-res photos, and structured metadata that drop into existing tools.
How Protege addresses the problem by combining Render Ready data with motion capture:
Multimodal pairing: Render Ready's garment captures (3D scans with textures, full photo turnarounds, measurements/materials in CSV) combined with Protege's motion capture data covering stair ascent and descent sequences.
Variability at scale: Same garment in multiple states—zipped/unzipped, belt adjusted, sleeves rolled/unrolled—across body sizes and step geometries to cover real-world diversity.
Metadata that matters: Per-capture details like garment dimensions, material composition, wearer size, range of motion angles, and environment notes (step height, cadence).
Engine-aligned deliverables: OBJ/FBX/glTF meshes, point clouds if needed, and labeled clip sets for quick evaluation in Unreal/Unity and ML pipelines.
Example application scenario:
Long coat on stair ascent: Side-by-side frames of motion capture + textured 3D asset showing hem interaction with the leading knee, vent flare on push-off, and wrinkle evolution across three steps. Variants include coat open vs. buttoned; different cadence; backpack on/off.
Potential model improvements from the Protege x Render Ready datasets:
Better runtime behavior: Train or validate cloth solvers against real stair-climb dynamics to reduce clipping and jitter (see the sketch after this list).
Fewer iteration loops: Targeted datasets surface failure modes (hem–knee collisions, sleeve ride-up) earlier, cutting rework.
Generalization: Models trained on these paired captures transfer to adjacent motions (curbs, ramps, steep grades).
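One way to quantify clipping when validating a solver against these captures is a per-frame penetration score: the fraction of garment vertices that end up inside the body mesh. Below is a minimal sketch using the open-source trimesh library, assuming a watertight body mesh for each frame.

```python
import numpy as np
import trimesh

def clipping_fraction(body_mesh: trimesh.Trimesh,
                      garment_vertices: np.ndarray) -> float:
    """Fraction of garment vertices that penetrate the body mesh.
    Requires a watertight body mesh for a reliable inside/outside test."""
    inside = body_mesh.contains(garment_vertices)
    return float(inside.mean())

# Per-frame evaluation over a stair-ascent clip (the frame sequences are
# hypothetical; in practice they would come from the labeled clip set):
# scores = [clipping_fraction(body, garment.vertices)
#           for body, garment in zip(body_frames, garment_frames)]
```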
Case Study #2: Rolled-Up Sleeves

The problem:
Nonlinear topology/state change: Fabric transitions from covering the forearm to layered folds at the elbow/upper arm.
Self-contact and collision: Multiple layers of cuff and sleeve collide, stick, and slide, making this a common source of self-intersections.
Material- and fit-dependence: Knit vs. denim vs. shell fabrics behave differently; tight vs. loose fits change fold patterns and stability.
Sparse supervision: Few datasets pair the roll-up action with measurements, materials, and motion context for reliable training/validation.
What builders need:
Action shots: Synchronized hand/arm kinematics with garment geometry during the roll-up sequence and the post-roll settle.
State diversity: Single vs. double roll, partial rolls, asymmetric sleeves, different cuff constructions, and repeated rolls over time.
Durable formats: Clean meshes/point clouds, high-res photos, and structured metadata ready for engines and ML.
How Protege addresses the problem by combining Render Ready data with motion capture:
Multimodal combination: Render Ready’s garment scans, texture turnarounds, and measurements/materials (CSV/Excel) combined with Protege motion data for roll-up actions.
Controlled variability: Same shirt across fabrics, sizes, and roll styles; captures include pre-roll, in-motion, post-roll settle, and re-adjustments.
Rich metadata: Sleeve length/circumference, fabric composition and thickness, friction notes, wearer forearm measurements, motion timing, and repeatability.
Pipeline-ready formats: OBJ/FBX/glTF meshes, point clouds if needed, and labeled clip sets with timestamps for quick use in Unreal/Unity and ML training (see the sketch below).
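As a small sketch of how the labeled timestamps might be used, the snippet below slices a roll-up take into its phases (pre-roll, in-motion, post-roll settle) for training or evaluation. The labels, times, and frame rate are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ClipSegment:
    label: str      # e.g., "pre_roll", "in_motion", "post_roll_settle"
    start_s: float  # segment start, seconds from clip start
    end_s: float    # segment end, seconds from clip start

def frames_for_segment(segment: ClipSegment, fps: float) -> range:
    """Map a labeled time span onto frame indices at the clip's frame rate."""
    return range(int(segment.start_s * fps), int(segment.end_s * fps))

# Hypothetical labels for one roll-up take captured at 120 fps:
segments = [
    ClipSegment("pre_roll", 0.0, 1.5),
    ClipSegment("in_motion", 1.5, 4.0),
    ClipSegment("post_roll_settle", 4.0, 6.0),
]
settle_frames = frames_for_segment(segments[-1], fps=120.0)  # frames 480-719
```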
Example application scenario:
Character interacting with long-sleeve cotton shirt: Sequence showing hand grasp, initial fold, layered buildup at the forearm, micro-slippage during settle, and range-of-motion tests (reach, rotate wrist). Variants: single vs. double roll; slim vs. relaxed fit.
Potential model improvements from the Protege x Render Ready datasets:
Fewer modeling issues: Reduce cuff popping, texture stretching, and self-intersection in engines.
More robust models: Train deformation/solver components on layered-fabric interactions and post-adjustment stability.
Better UX: Predict when a rolled sleeve will drift down during motion; inform animation and gameplay states (a toy sketch follows this list).
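As a toy illustration of that last point, sleeve drift can be posed as a simple classification problem over per-take features drawn from the metadata. The feature set and values below are invented placeholders; a real model would be fit on the actual captures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented per-take features: fabric thickness (mm), roll count,
# forearm circumference (cm), peak elbow angular velocity (rad/s).
X = np.array([
    [0.4, 1, 27.0, 3.2],
    [0.8, 2, 29.5, 1.1],
    [0.3, 1, 25.0, 4.0],
    [0.9, 2, 31.0, 0.8],
])
y = np.array([1, 0, 1, 0])  # 1 = sleeve drifted down during the take

clf = LogisticRegression().fit(X, y)
p_drift = clf.predict_proba([[0.5, 1, 26.0, 3.5]])[0, 1]
```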
Case Study #3: Motion-Aware Garment States

The problem:
Combinatorial states: Zipped vs. unzipped, half-zipped, belts tightened/loose, sleeves rolled/unrolled multiply quickly across sizes and body shapes.
State-dependent deformation: Open jackets flap and invert at edges; closed jackets transfer tension across the zipper and bias wrinkle fields.
Layering and collision: Belts, plackets, hood cords, and layered tops introduce frequent self-contact and clipping under motion.
Sparse paired data: Few datasets link garment state, wearer size, and motion context (walk, run, reach, stairs) with consistent metadata.
What builders need:
Systematic state coverage: Captures that enumerate key garment states across multiple body sizes and motion primitives.
Motion-conditioning: Synchronized motion data for everyday actions (idle, walk, jog, reach, turn, stairs) per state.
Consistent metadata: Clear labels for state, adjustments, wearer dimensions, and materials to enable learning and validation.
How Protege addresses the problem by combining Render Ready data with motion capture:
Multimodal pairing: Render Ready’s detailed garment captures (3D scans with textures, photo turnarounds, measurements/materials in CSV/Excel) combined with Protege motion sequences across common actions.
Factorial design: Same jackets recorded zipped, half-zipped, unzipped; belts at discrete tightness levels; sleeves rolled/unrolled; repeated across body sizes to isolate effects (enumerated in the sketch after this list).
Rich, comparable labels: Per-take metadata including garment state, wearer height/weight/chest/waist, fabric composition/thickness, step cadence/speed, and environment notes.
Drop-in deliverables: OBJ/FBX/glTF meshes, point clouds if needed, labeled clip sets and thumbnails for Unreal/Unity and ML pipelines.
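The factorial design lends itself to direct enumeration. Below is a sketch of a capture plan under assumed factor levels; the actual levels and counts would come from the dataset documentation.

```python
from itertools import product

# Assumed factor levels; the real capture plan would come from the
# dataset documentation.
ZIP_STATES = ["zipped", "half_zipped", "unzipped"]
SLEEVES = ["rolled", "unrolled"]
BELT = ["snug", "loose"]
SIZES = ["S", "M", "L"]
MOTIONS = ["idle", "walk", "jog", "reach", "turn", "stairs"]

capture_plan = [
    {"zip": z, "sleeves": s, "belt": b, "size": sz, "motion": m}
    for z, s, b, sz, m in product(ZIP_STATES, SLEEVES, BELT, SIZES, MOTIONS)
]
print(len(capture_plan))  # 3 * 2 * 2 * 3 * 6 = 216 takes per jacket
```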
Example application scenario:
Lightweight jacket across states and sizes:
State A: Zipped, sleeves down, belt snug; brisk walk and arm reach. Note zipper tension, reduced hem flare, tighter chest wrinkles.
State B: Unzipped, sleeves down; jog and directional turn. Observe front panel flap, placket flutter, increased hem inversion.
State C: Half-zipped, sleeves rolled; stair ascent and phone reach. Mixed constraint at chest, sleeve ride-up stability, vent behavior.
Variants: S/M/L body sizes to illustrate how identical motions yield different deformation fields.
Potential model improvements from the Protege x Render Ready datasets:
Reliable state handling: Reduce clipping and jitter by training/validating solvers on matched motion across garment states.
Faster iteration: Clear labels and comparable clips surface state-specific failures (placket gape, hem inversion, belt buckle collisions).
Generalization: Models learn to predict deformation transitions when users change garment state in gameplay or simulation.
Interested in this unique clothing + motion capture data offering from Protege? Fill out our data access form here, and our team will be in touch.
