High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
Official repo for the paper "Structured 3D Latents for Scalable and Versatile 3D Generation" (CVPR'25 Spotlight).
Single Image to 3D using Cross-Domain Diffusion for 3D Generation
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
[ICCV 2023] Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
[NeurIPS 2023] Official code of "One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization"
PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers
[ICLR24] Official PyTorch Implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
From Images to High-Fidelity 3D Assets with Production-Ready PBR Material
TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions" (CVPR 2024)
Text2Room generates textured 3D meshes from a given text prompt using 2D text-to-image models (ICCV2023).
Unifying 3D Mesh Generation with Language Models
[ICCV 2025] Official impl. of "MV-Adapter: Multi-view Consistent Image Generation Made Easy"
[CVPR 2025 Highlight] 3DTopia-XL: High-Quality 3D PBR Asset Generation via Primitive Diffusion
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention
From anything to mesh like human artists. Official impl. of "MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization"