```
├── LICENSE (omitted)
├── README.md (500 tokens)
├── assets/
│   └── teaser.png
```

## /README.md

# PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers

<h4 align="center">

[Yuchen Lin<sup>*</sup>](https://wgsxm.github.io), [Chenguo Lin<sup>*</sup>](https://chenguolin.github.io), [Panwang Pan<sup>†</sup>](https://paulpanwang.github.io), [Honglei Yan](https://openreview.net/profile?id=~Honglei_Yan1), [Yiqiang Feng](https://openreview.net/profile?id=~Feng_Yiqiang1), [Yadong Mu](http://www.muyadong.com), [Katerina Fragkiadaki](https://www.cs.cmu.edu/~katef/)

[arXiv](https://arxiv.org/abs/2506.05573) [Project Page](https://wgsxm.github.io/projects/partcrafter) [License](./LICENSE)

<h2 align="center">
Code and checkpoints will be released before July 15th!
</h2>

<p>
  <img width="90%" alt="pipeline" src="./assets/teaser.png">
</p>

</h4>

This repository will contain the official implementation of the paper:
[PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers](https://wgsxm.github.io/projects/partcrafter/).
PartCrafter is a structured 3D generative model that jointly generates multiple parts and objects from a single RGB image in one shot.

Here is our [Project Page](https://wgsxm.github.io/projects/partcrafter).
Feel free to contact me (linyuchen@stu.pku.edu.cn) or open an issue if you have any questions or suggestions.

## 📢 News

- **2025-06-09**: PartCrafter is on arXiv.

## 📋 TODO

- [ ] Release inference scripts and pretrained checkpoints before **July 15th**. Stay tuned!
- [ ] Release training code and data.
- [ ] Provide a HuggingFace 🤗 demo.

## 😊 Acknowledgement

We would like to thank the authors of [DiffSplat](https://github.com/chenguolin/DiffSplat), [TripoSG](https://yg256li.github.io/TripoSG-Page/), [HoloPart](https://vast-ai-research.github.io/HoloPart/), and [MIDI-3D](https://huanngzh.github.io/MIDI-Page/) for their great work and for generously providing their source code, which inspired our work and helped us a lot with the implementation.

## 📚 Citation

If you find our work helpful, please consider citing:

```bibtex
@misc{lin2025partcrafter,
  title={PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers},
  author={Yuchen Lin and Chenguo Lin and Panwang Pan and Honglei Yan and Yiqiang Feng and Yadong Mu and Katerina Fragkiadaki},
  year={2025},
  eprint={2506.05573},
  url={https://arxiv.org/abs/2506.05573}
}
```

## /assets/teaser.png

Binary file available at https://raw.githubusercontent.com/wgsxm/PartCrafter/refs/heads/main/assets/teaser.png
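
Since the PartCrafter code and checkpoints are not yet released, the project's own API cannot be shown here. As a purely illustrative sketch of the part-wise output format the README describes (multiple parts generated jointly for one object), the snippet below assembles a set of per-part meshes into a single structured scene and exports it as a GLB using `trimesh`. The placeholder boxes and all names are assumptions standing in for generated parts; only the `trimesh` calls are real.

```python
# Hypothetical post-processing sketch: PartCrafter's inference API is not public yet,
# so placeholder primitives stand in for the generated per-part meshes.
import trimesh

# Pretend these meshes came from a part-wise generator (assumption, not the real API).
parts = {
    "seat": trimesh.creation.box(extents=[1.0, 1.0, 0.1]),
    "leg_front_left": trimesh.creation.box(extents=[0.1, 0.1, 0.8]),
}
parts["leg_front_left"].apply_translation([0.45, 0.45, -0.45])

# Keep each part as its own named node so the part structure is preserved,
# instead of merging everything into a single unsegmented mesh.
scene = trimesh.Scene()
for name, mesh in parts.items():
    scene.add_geometry(mesh, node_name=name, geom_name=name)

scene.export("assembled_object.glb")  # GLB retains the per-part node hierarchy
```

Exporting to GLB (rather than concatenating into one mesh) keeps each part addressable downstream, which matches the "structured" output emphasized in the paper title.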