A contact solver for physics-based simulations involving 👚 shells, 🪵 solids and 🪢 rods. All made by ZOZO.
- 💪 Robust: Contact resolutions are completely penetration-free. No snagging intersections.
- ⏲ Scalable: An extreme case handles more than 150 million contacts, not just a million.
- 🚲 Cache Efficient: Everything runs on the GPU in single precision; no double precision is used.
- 🥼 Inextensible: Cloth never stretches beyond strict strain limits, such as 1%.
- 📐 Physically Accurate: Our deformable solver is driven by the Finite Element Method.
- ⚔️ Highly Stressed: Our GitHub Actions run stress tests 10 times in a row.
- 🚀 Massively Parallel: Both contact and elasticity solvers are run on the GPU.
- 🐳 Docker Sealed: Everything is designed to work out of the box.
- 🌐 JupyterLab Included: Open your browser and run examples right away (Video).
- 🐍 Documented Python APIs: Our Python code is fully docstringed and lintable (Video).
- ☁️ Cloud-Ready: Our solver can be seamlessly deployed on major cloud platforms.
- ✨ Stay Clean: You can remove all traces after use.
- 📝 Change History
- 🎓 Technical Materials
- ⚡️ Requirements
- 💨 Getting Started
- 🐍 How To Use
- 📚 Python APIs and Parameters
- 🔍 Obtaining Logs
- 🖼️ Catalogue
- 🚀 GitHub Actions
- 📡 Deploying on Cloud Services
- ✒️ Citation
- 🙏 Acknowledgements
- 🧑‍💻 Setting Up Your Development Environment (Markdown)
- 🐞 Bug Fixes and Updates (Markdown)
- (2025.10.03) Massive refactor of the codebase (Markdown). Note that this change includes breaking changes to our Python APIs.
- (2025.08.09) Added a hindsight note in eigensystem analysis to acknowledge prior work by Poya et al. (2023).
- (2025.05.01) Simulation states can now be saved and loaded (Video).
- (2025.04.02) Added 9 examples. See the catalogue.
- (2025.03.03) Added a budget table on AWS.
- (2025.02.28) Added a reference branch and a Docker image of our TOG paper.
- (2025.02.26) Added Floating-Point Rounding Errors in ACCD in hindsight.
- (2025.02.07) Updated the trapped example (Video) with squishy balls.
- 📚 Published in ACM Transactions on Graphics (TOG) Vol.43, No.6
- 🎥 Main video (Video)
- 🎥 Additional video examples (Directory)
- 🎥 Presentation videos (Short) (Long)
- 📃 Main paper (PDF) (Hindsight)
- 📊 Supplementary PDF (PDF)
- 🤖 Supplementary scripts (Directory)
- 🔍 Singular-value eigenanalysis (Markdown)
The main branch is undergoing frequent updates and will deviate from the paper 🚧. To retain consistency with the paper, we have created a new branch sigasia-2024.
- 🛠️ Only maintenance updates are planned for this branch.
- 🚫 General users should not use this branch as it is not optimized for best performance.
- 🚫 All algorithmic changes listed in this (Markdown) are excluded from this branch.
- 📦 We also provide a pre-compiled Docker image of this branch: ghcr.io/st-tech/ppf-contact-solver-compiled-sigasia-2024:latest.
- 🌐 Template Link for vast.ai
- 🌐 Template Link for RunPods
- 🔥 A modern NVIDIA GPU (CUDA 12.8 or newer)
- 🐳 A Docker environment (see below)
Install an 🎮 NVIDIA driver (Link) on your 💻 host system and follow the 📝 instructions below for your 🖥️ operating system to get 🐳 Docker running:
| 🐧 Linux | 🪟 Windows |
|---|---|
| Install the Docker engine from here (Link). Also, install the NVIDIA Container Toolkit (Link). Just to make sure that the Container Toolkit is loaded, run sudo service docker restart. | Install Docker Desktop (Link). You may need to log out or reboot after the installation. After logging back in, launch Docker Desktop to ensure that Docker is running. |
Next, run the following command to start the 📦 container:
⏳ Wait a while until the container reaches a steady state. Next, open your 🌐 browser and navigate to http://localhost:8080, where 8080 is the port number specified by the MY_WEB_PORT variable. Keep your terminal window open.
🎉 Now you are ready to go! 🚀
To shut down the container, just press Ctrl+C in the terminal. The container will be removed and all traces will be 🧹 cleaned up.
If you wish to build the container from scratch 🛠️, please refer to the cleaner installation guide (Markdown) 📝.
Our frontend is accessible through 🌐 a browser using our built-in JupyterLab 🐍 interface. Everything is set up when you open it for the first time. Results can be viewed interactively in the browser and exported as needed.
This allows you to interact with the simulator on your 💻 laptop while the actual simulation runs on a remote headless server over 🌍 the internet. This means that you don't have to own ⚙️ NVIDIA hardware, but can rent it at vast.ai or RunPod for less than 💵 $0.5 per hour. For example, this (Video) was recorded on a vast.ai instance. The experience is 👍 good!
Our Python interface is designed with the following principles in mind:
- 🛠️ Dynamic Tri/Tet Creation: Relying on non-integrated third-party tools for triangulation, tetrahedralization, and loading can make it difficult to dynamically adjust resolutions. Our built-in tri/tet creation tools eliminate this limitation.
- 🚫 No Mesh Data: Preparing mesh data using external tools can be cumbersome. Our frontend minimizes this effort by allowing meshes to be created on the fly or downloaded when needed.
- 🔗 Method Chaining: We adopt the method chaining style from JavaScript, making the API intuitive and easy to understand.
- 📦 Single Import for Everything: All frontend features are accessible by simply importing with from frontend import App.
Here's an example of draping five sheets over a sphere with two corners pinned. Please look into the examples directory for more examples.
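Below is a minimal sketch of what such a script can look like. The specific method names (app.mesh.square, app.asset.add.tri, scene.add, obj.pin, session.start, and the "frames" parameter key) are illustrative assumptions chosen to show the method-chaining style; refer to the examples directory and the API documentation for the actual script and signatures.

```python
# Minimal sketch only: the method names below are assumptions,
# not the confirmed API. See examples/ for the real drape script.
from frontend import App

app = App.create()

# Create meshes on the fly: a square cloth sheet and a sphere (no external files).
V, F = app.mesh.square(res=64)
app.asset.add.tri("sheet", V, F)
V, F = app.mesh.icosphere(r=0.25)
app.asset.add.tri("sphere", V, F)

# Build the scene: five sheets stacked above a fixed sphere,
# each sheet pinned at two corners.
scene = app.scene.create("five-sheets")
for i in range(5):
    sheet = scene.add("sheet").at(0, 0.05 * (i + 1), 0)
    sheet.pin(sheet.grab([-1, -1]) + sheet.grab([1, -1]))  # two corner vertices
scene.add("sphere").at(0, 0, 0).pin()

# Set parameters, run the simulation, and preview the result in JupyterLab.
fixed = scene.build()
param = app.session.param().set("frames", 100)
session = app.session.create(fixed)
session.start(param).preview()
```

Note how the scene setup reads as a chain of short calls; this is the method-chaining style mentioned above, and it keeps the entire setup in a single script with no external mesh files.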
- Full API documentation 📖 is available on our GitHub Pages. The major APIs are documented using docstrings ✍️ and compiled with Sphinx ⚙️. We have also included jupyter-lsp to provide interactive linting assistance 🛠️ and display docstrings as you type. See this video (Video) for an example. This behavior can be changed through the settings.
- A list of parameters used in param.set(key,value) is documented here: (Global Parameters) (Object Parameters).
Note
⚠️ Please note that our Python APIs are subject to breaking changes as this repository undergoes frequent iterations. 🚧
📊 Logs for the simulation can also be queried through the Python APIs 🐍. Here's an example of how to get a list of recorded logs 📝, fetch them 📥, and compute the average 🧮.
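The snippet below is a minimal sketch of such a query. Only session.get.stderr() is explicitly referenced later in this section; the session.get.log accessor names used here are assumptions, so consult the API documentation for the exact calls. The log name newton-steps is taken from the table that follows.

```python
# Sketch only: the session.get.log.* accessor names are assumptions.
names = session.get.log.names()               # list of recorded log names
print(names)

# Fetch "newton-steps": recorded as list[(vid_time, count)] per the table below.
entries = session.get.log.numbers("newton-steps")
counts = [count for _, count in entries]

# Average number of Newton iterations per step recorded so far.
average = sum(counts) / len(counts) if counts else 0.0
print(f"average newton steps per step: {average:.2f}")
```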
Below are some representative entries. vid_time refers to the video time in seconds and is recorded as a float. ms refers to the consumed simulation time in milliseconds, recorded as an int. vid_frame is the video frame count, recorded as an int.
| Log Name | Description | Format |
|---|---|---|
| time-per-frame | Time per video frame | list[(vid_frame,ms)] |
| matrix-assembly | Matrix assembly time | list[(vid_time,ms)] |
| pcg-linsolve | Linear system solve time | list[(vid_time,ms)] |
| line-search | Line search time | list[(vid_time,ms)] |
| time-per-step | Time per step | list[(vid_time,ms)] |
| newton-steps | Newton iterations per step | list[(vid_time,count)] |
| num-contact | Contact count | list[(vid_time,count)] |
| max-sigma | Max stretch | list[(vid_time,float)] |
The full list of log names and their descriptions is documented here: (GitHub Pages).
Note that some entries have multiple records at the same video time ⏱️. This occurs because the same operation is executed multiple times 🔄 within a single step during the inner Newton's iterations 🧮. For example, the linear system solve is performed at each Newton's step, so if multiple Newton's steps are 🔁 executed, multiple linear system solve times appear in the record at the same 📊 video time.
If you would like to retrieve the raw log stream, you can do so by
This will output something like:
If you would like to read stderr, you can do so using session.get.stderr() (if it exists). Both calls return list[str]. All the log files 📂 are available ✅ and can be fetched ⬇️ during the simulation 💻.
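As a minimal sketch (session.get.stdout() is an assumption that mirrors the session.get.stderr() accessor mentioned above):

```python
# Sketch: session.get.stderr() is referenced in the text; session.get.stdout()
# is assumed to mirror it. Both are expected to return list[str].
for line in session.get.stdout()[-10:]:   # last few lines of the raw log stream
    print(line)

errors = session.get.stderr()             # may not exist or may be empty
if errors:
    print("\n".join(errors))
```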
Below is a table summarizing the estimated costs for running our examples on an NVIDIA L4 instance (g6.2xlarge) in the Amazon Web Services US regions (us-east-1 and us-east-2).
- 💰 Uptime cost is approximately $1 per hour.
- ⏳ Deployment time is approximately 8 minutes ($0.13). Instance loading takes 3 minutes, and Docker pull & load takes 5 minutes.
- 🎮 The NVIDIA L4 delivers 30.3 TFLOPS for FP32, offering approximately 36% of the performance of an RTX 4090.
- 🎥 Video frame rate is 60fps.
| Example | Cost | Time | #Frame | #Vert | #Face | #Tet | #Rod | Strain Limit |
|---|---|---|---|---|---|---|---|---|
| trapped | $0.37 | 22.6m | 300 | 263K | 299K | 885K | N/A | N/A |
| twist | $0.91 | 55m | 500 | 203K | 406K | N/A | N/A | N/A |
| stack | $0.60 | 36.2m | 120 | 166.7K | 327.7K | 8.8K | N/A | 5% |
| trampoline | $0.74 | 44.5m | 120 | 56.8K | 62.2K | 158.0K | N/A | 1% |
| needle | $0.31 | 18.4m | 120 | 86K | 168.9K | 8.8K | N/A | 5% |
| cards | $0.29 | 17.5m | 300 | 8.7K | 13.8K | 1.9K | N/A | 5% |
| domino | $0.12 | 4.3m | 250 | 0.5K | 0.8K | N/A | N/A | N/A |
| drape | $0.10 | 3.5m | 100 | 81.9K | 161.3K | N/A | N/A | 5% |
| curtain | $0.33 | 19.6m | 300 | 64K | 124K | N/A | N/A | 5% |
| friction | $0.17 | 10m | 700 | 1.1K | N/A | 1K | N/A | N/A |
| hang | $0.12 | 7.5m | 200 | 16.3K | 32.2K | N/A | N/A | 1% |
| belt | $0.19 | 11.4m | 200 | 12.3K | 23.3K | N/A | N/A | 5% |
| codim | $0.36 | 21.6m | 240 | 122.7K | 90K | 474.1K | 1.3K | N/A |
| fishingknot | $0.38 | 22.5m | 830 | 19.6K | 36.9K | N/A | N/A | 5% |
| fitting | $0.03 | 1.54m | 240 | 28.4K | 54.9K | N/A | N/A | 10% |
| noodle | $0.14 | 8.45m | 240 | 116.2K | N/A | N/A | 116.2K | N/A |
| ribbon | $0.23 | 13.9m | 480 | 34.9K | 52.9K | 8.8K | N/A | 5% |
| woven | $0.58 | 34.6m | 450 | 115.6K | N/A | N/A | 115.4K | N/A |
| yarn | $0.01 | 0.24m | 120 | 28.5K | N/A | N/A | 28.5K | N/A |
| roller | $0.03 | 2.08m | 240 | 21.4K | 22.2K | 61.0K | N/A | N/A |
Large-scale examples are run on a vast.ai instance with an RTX 4090. At the moment, not all large-scale examples are ready, but they will be added and updated one by one. The author is actively working on this.
| large-twist | cbafbd2 | 3.2M | 6.4M | N/A | N/A | 56.7M | 2,000 | 46.4s |
We implemented GitHub Actions to test all of our examples except the large-scale ones, which take hours to days to finish. We perform explicit intersection checks 🔍 at the end of each step, raising an error ❌ if an intersection is detected. This ensures that all steps are confirmed to be penetration-free when the tests pass ✅. The runner types are described as follows:
The runner 🚀 used for this action is the Ubuntu NVIDIA GPU-Optimized Image for AI and HPC with an NVIDIA Tesla T4 (16 GB VRAM) and driver version 570.133.20. This is not a self-hosted runner, meaning that each time the runner launches, all environments are 🌱 fresh.
We use the GitHub-hosted runner 🖥️, but the actual simulation runs on a g6e.2xlarge AWS instance 🌐. Since we start with a fresh 🌱 instance, the environment is clean 🧹 every time. We take advantage of the ability to deploy on the cloud; this action is performed in parallel, which reduces the total action time.
We generate zipped action artifacts 📦 for each run. These artifacts include:
- 📝 Logs: Detailed logs of the simulation runs.
- 📊 Metrics: Performance metrics and statistics.
- 📹 Videos: Simulated animations.
Please note that these artifacts will be deleted after a month.
We know that you can't judge the reliability of contact resolution simply by watching a single successful 🎥 video example. To ensure greater transparency, we run many of our examples via automated GitHub Actions ⚙️, not just once, but 10 times in a row 🔁. This means that a single failure out of 10 runs is considered a failure of the entire test suite!
Also, we apply small jitters to the positions of objects in the scene 🔄, so the scene is slightly different at each run.
Our contact solver is designed for heavy use in cloud services ☁️, enabling us to:
- 💰 Cost-Effective Development: Quickly deploy testing environments 🚀 and delete 🗑️ them when not in use, saving costs.
- 📈 Flexible Scalability: Scale as needed based on demand 📈. For example, you can launch multiple instances before a specific deadline ⏰.
- 🌍 High Accessibility: Allow anyone with an internet connection 🌍 to try our solver, even on a smartphone 📱 or tablet 🖥️.
- 🐛 Easier Bug Tracking: Users and developers can easily share the same hardware, kernel, and driver environment, making it easier to track and fix bugs.
- 🛠️ Maintenance-Free: No need to maintain hardware for everyday operations or to introduce redundancy against malfunctions.
This is made possible by our purely web-based frontend 🌐 and scalable deployment capability 🧩. Our main target is the NVIDIA L4 🖱️, a data-center GPU 🖥️ with reasonable pricing 💲 that delivers both practical performance 💪 and scalability 📊 without requiring an investment in expensive hardware 💻.
Below, we describe how to deploy our solver on major cloud services ☁️. These instructions are up to date as of late 2024 📅 and are subject to change 🔄.
Important: For all the services below, don't forget to ❌ delete the instance after use, or you’ll be 💸 charged for nothing.
- Select our template (Link).
- Create an instance and click Open button.
- Follow this link (Link) and deploy an instance using our template.
- Click Connect button and open the HTTP Services link.
- Set zone to fr-par-2
- Select type L4-1-24G or GPU-3070-S
- Choose Ubuntu Jammy GPU OS 12
- Do not skip the Docker container creation in the installation process; it is required.
- This setup costs approximately €0.76 per hour.
- CLI instructions are described in (Markdown).
- Amazon Machine Image (AMI): Deep Learning Base AMI with Single CUDA (Ubuntu 22.04)
- Instance Type: g6.2xlarge (Recommended)
- This setup costs around $1 per hour.
- Do not skip the Docker container creation in the installation process; it is required.
- Select GPUs. We recommend the GPU type NVIDIA L4 because it's affordable and accessible, as it does not require a high quota. You may select T4 instead for testing purposes.
- Do not check Enable Virtual Workstation (NVIDIA GRID).
- We recommend the machine type g2-standard-8.
- Choose the OS type Deep Learning VM with CUDA 12.4 M129 and set the disk size to 50GB.
- As of late 2024, this configuration costs approximately $0.86 per hour in us-central1 (Iowa) and $1.00 per hour in asia-east1 (Taiwan).
- Port number 8080 is reserved by the OS image. Set $MY_WEB_PORT to 8888. When connecting via gcloud, use the following format: gcloud compute ssh --zone "xxxx" "instance-name" -- -L 8080:localhost:8888.
- Do not skip the Docker container creation in the installation process; it is required.
- CLI instructions are described in (Markdown).
The author thanks ZOZO, Inc. for permitting the release of the code and the team members for assisting with the internal paperwork for this project.