
DreamFusion: Text-to-3D using 2D Diffusion

Ben Poole
Google Research
Ajay Jain
UC Berkeley
Jonathan T. Barron
Google Research
Ben Mildenhall
Google Research

Abstract

Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss. The resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment. Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.

Given a caption, DreamFusion generates relightable 3D objects with high-fidelity appearance, depth, and normals. Objects are represented as a Neural Radiance Field and leverage a pretrained text-to-image diffusion prior such as Imagen.



Example generated objects

DreamFusion generates objects and scenes from diverse captions. Search through hundreds of generated assets in our full gallery.


Composing objects into a scene


Mesh exports

Our generated NeRF models can be exported to meshes using the marching cubes algorithm for easy integration into 3D renderers or modeling software.
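Below is a minimal sketch of such an export, assuming the trained NeRF's density field has already been sampled onto a regular voxel grid. The helper query_density, the grid resolution, the scene bounds, and the density threshold are illustrative placeholders, not the actual DreamFusion pipeline; the sketch uses scikit-image's marching cubes and trimesh for writing the mesh.

import numpy as np
from skimage import measure
import trimesh

# Sample the NeRF density on a regular voxel grid.
# `query_density` is a stand-in for whatever function evaluates the trained
# NeRF's density at a batch of 3D points; resolution and bounds are illustrative.
resolution = 256
bounds = 1.0
xs = np.linspace(-bounds, bounds, resolution)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)   # (R, R, R, 3)
density = query_density(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

# Extract an isosurface with marching cubes; the density threshold is a tunable choice.
verts, faces, normals, _ = measure.marching_cubes(density, level=10.0)

# Map vertices from voxel indices back to world coordinates and save the mesh.
verts = verts / (resolution - 1) * 2 * bounds - bounds
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("dreamfusion_mesh.obj")

The exported OBJ can then be loaded directly into standard 3D renderers or modeling software.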


How does DreamFusion work?

Given a caption, DreamFusion uses a text-to-image generative model called Imagen to optimize a 3D scene. We propose Score Distillation Sampling (SDS), a way to generate samples from a diffusion model by optimizing a loss function. SDS allows us to optimize samples in an arbitrary parameter space, such as a 3D space, as long as we can map back to images differentiably. We use a 3D scene parameterization similar to Neural Radiance Fields, or NeRFs, to define this differentiable mapping. SDS alone produces reasonable scene appearance, but DreamFusion adds additional regularizers and optimization strategies to improve geometry. The resulting trained NeRFs are coherent, with high-quality normals, surface geometry and depth, and are relightable with a Lambertian shading model.
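The core of SDS is a gradient that pushes rendered images toward high-density regions of the diffusion model, without backpropagating through the diffusion model itself. The following is a minimal sketch of one SDS step in PyTorch, under stated assumptions: render_fn is a differentiable renderer of the scene parameters from a random camera, denoiser is a frozen text-conditioned diffusion model that predicts the added noise, text_emb is the caption embedding, and alphas_cumprod is the diffusion noise schedule. All of these names are placeholders, not the DreamFusion or Imagen API.

import torch

def sds_step(nerf_params, render_fn, denoiser, text_emb, alphas_cumprod):
    # Render an image from a random viewpoint with the current scene parameters.
    x = render_fn(nerf_params)                       # (1, 3, H, W), values in [0, 1]

    # Sample a diffusion timestep and add the corresponding amount of noise.
    t = torch.randint(20, 980, (1,))
    alpha_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = torch.sqrt(alpha_bar) * x + torch.sqrt(1.0 - alpha_bar) * eps

    # Query the frozen diffusion model for its noise estimate; no gradients flow through it.
    with torch.no_grad():
        eps_pred = denoiser(x_t, t, text_emb)

    # SDS gradient: w(t) * (eps_pred - eps), backpropagated only through the renderer.
    # w(t) = 1 - alpha_bar is one common weighting choice.
    w = 1.0 - alpha_bar
    grad = w * (eps_pred - eps)
    loss = (grad.detach() * x).sum()                 # d(loss)/dx equals the SDS gradient
    loss.backward()                                  # accumulates gradients in nerf_params
    return loss

In the full system, the renders come from a shaded NeRF at the resolution of the Imagen base model (64x64), the noise prediction uses classifier-free guidance with a large guidance weight, and prompts are augmented with view-dependent text, but the gradient structure is the one sketched above.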


Citation

@article{poole2022dreamfusion,
  author = {Poole, Ben and Jain, Ajay and Barron, Jonathan T. and Mildenhall, Ben},
  title = {DreamFusion: Text-to-3D using 2D Diffusion},
  journal = {arXiv},
  year = {2022},
}