Deep Convolutional Priors for Indoor Scene Synthesis
Given the importance and ubiquity of indoor spaces in our everyday lives, computational models that can understand, model, and synthesize indoor scenes are vital to industries such as interior design, architecture, gaming, and virtual reality. Previous work toward this goal has relied on constrained synthesis with statistical priors on object-pair relationships, "human-centric relationship priors," or constraints based on "hand-crafted interior design principles." Moreover, owing to the difficulty of unconstrained room-scale synthesis of indoor scenes, prior work has focused either on small regions within a room or on additional inputs (a fixed set of objects, manually specified relationships, a natural-language description, a sketch, or a 3D scan of the room) as constraints; deep generative models such as GANs and VAEs, meanwhile, struggle to produce multi-modal outputs. Driven by the success of convolutional neural networks (CNNs) in scene synthesis tasks and the availability of large 3D scene datasets, this paper proposes the first CNN-based autoregressive model for designing interior spaces: given the wall structure and type of a room, the model predicts the selection and placement of objects. ...
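To make the autoregressive formulation concrete, the loop below is a minimal sketch of synthesizing a scene one object at a time. The function and module names (`should_continue`, `predict_category`, `predict_placement`) are illustrative assumptions, not the paper's actual API; the trained CNN modules are stubbed with toy rules so the sketch runs on its own.

```python
import random

# Hypothetical object vocabulary; the paper draws categories from a
# large 3D scene dataset, not this fixed list.
CATEGORIES = ["bed", "nightstand", "wardrobe", "desk", "chair"]

def should_continue(scene):
    """Stub for a CNN deciding whether to add another object
    to the partial scene (toy stopping rule: at most 3 objects)."""
    return len(scene) < 3

def predict_category(scene):
    """Stub for a CNN that picks the next object category,
    conditioned on the partial scene."""
    return random.choice(CATEGORIES)

def predict_placement(scene, category):
    """Stub for a CNN that predicts (x, y, orientation) for the
    chosen category from a top-down view of the partial scene."""
    return (random.uniform(0, 4),
            random.uniform(0, 4),
            random.choice([0, 90, 180, 270]))

def synthesize(room_type="bedroom", seed=0):
    """Autoregressive loop: repeatedly query the modules until the
    'stop' decision, growing the scene one object at a time.
    (room_type is unused in this toy sketch; in the paper it
    conditions the model.)"""
    random.seed(seed)
    scene = []  # list of (category, x, y, orientation) tuples
    while should_continue(scene):
        cat = predict_category(scene)
        x, y, theta = predict_placement(scene, cat)
        scene.append((cat, x, y, theta))
    return scene

print(synthesize())
```

The key design point this sketch mirrors is that each prediction conditions on all objects placed so far, so later choices (e.g. a nightstand) can depend on earlier ones (e.g. a bed), rather than sampling the whole scene in one shot.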