Editable Neural Radiance Fields Convert 2D to 3D Furniture Texture

Authors

  • Chaoyi Tan, Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
  • Chenghao Wang, Georgia Institute of Technology, USA
  • Zheng Lin, University of California, Santa Cruz, USA
  • Shuyao He, Information Systems, Northeastern University, Boston, USA
  • Chao Li, Georgetown University, USA

DOI:

https://doi.org/10.5281/zenodo.12662936

Keywords:

Neural Radiance Fields, 2D, 3D, Texture

Abstract

Our work presents a neural network that converts textual descriptions into 3D furniture models. Using an encoder-decoder architecture, we fuse the text with object attributes such as shape, color, and position, and feed the combined representation into a generator that predicts new furniture objects carrying this detail.[1] Each predicted object is then passed through an encoder to extract feature information, which the loss function uses to propagate errors and update the model weights. Once trained, the network can generate new 3D objects from textual input alone, demonstrating the potential of our approach for producing customizable 3D models from descriptive text.[2]
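To make the described pipeline concrete, below is a minimal PyTorch sketch of the flow the abstract outlines: text and attributes are fused by an encoder, decoded by a generator into a furniture object, and the generated object is re-encoded so its features can drive the loss. All module names, dimensions, and placeholder tensors are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TextAttributeEncoder(nn.Module):
    """Fuses a text embedding with shape/color/position attributes."""
    def __init__(self, text_dim=128, attr_dim=16, latent_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + attr_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, text_emb, attrs):
        # Concatenate the two modalities, then project to one latent code.
        return self.fuse(torch.cat([text_emb, attrs], dim=-1))

class FurnitureGenerator(nn.Module):
    """Decodes the fused latent code into a furniture representation
    (a flat vector here, standing in for e.g. radiance-field parameters)."""
    def __init__(self, latent_dim=256, object_dim=1024):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, object_dim),
        )

    def forward(self, z):
        return self.decode(z)

class FeatureEncoder(nn.Module):
    """Re-encodes a generated object so its features can drive the loss."""
    def __init__(self, object_dim=1024, feat_dim=256):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(object_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, obj):
        return self.encode(obj)

# One hypothetical training step: fuse, generate, re-encode, compare features.
encoder = TextAttributeEncoder()
generator = FurnitureGenerator()
feature_encoder = FeatureEncoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=1e-4
)

text_emb = torch.randn(8, 128)      # placeholder text embeddings
attrs = torch.randn(8, 16)          # placeholder shape/color/position attributes
target_feats = torch.randn(8, 256)  # placeholder features of real furniture

optimizer.zero_grad()
z = encoder(text_emb, attrs)        # combine text with attributes
generated = generator(z)            # predict a new furniture object
feats = feature_encoder(generated)  # extract feature information
loss = nn.functional.mse_loss(feats, target_feats)
loss.backward()                     # propagate errors
optimizer.step()                    # update model weights
```

The feature-space loss is the key design choice the abstract highlights: rather than comparing generated objects to targets directly, the generated object is re-encoded and its features are matched, which is what lets the gradient update both the fusion encoder and the generator.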

Published

2024-06-29

How to Cite

Chaoyi Tan, Chenghao Wang, Zheng Lin, Shuyao He, & Chao Li. (2024). Editable Neural Radiance Fields Convert 2D to 3D Furniture Texture. International Journal of Engineering and Management Research, 14(3), 62–65. https://doi.org/10.5281/zenodo.12662936

Issue

Vol. 14 No. 3 (2024)

Section

Articles