Anyone who creates 3D content knows how difficult it is to recreate all kinds of objects in virtual worlds. Now NVIDIA wants to make this task easier with its 3D MoMa technology, which uses artificial intelligence to enable so-called inverse rendering.
These systems extract 3D objects from 2D images. The technique works from a series of photos taken from different perspectives, something like a simplified form of 3D scanning, but the results, at least according to NVIDIA, are impressive.
Play it again, Sam
The technology was presented at a recent computer vision conference in New Orleans, where NVIDIA demonstrated this inverse rendering process that harnesses the power of the GPU “to produce 3D objects quickly that creators can import, edit, and scale without limitation from existing tools.”
NVIDIA’s demo used this framework to reconstruct musical instruments such as a trumpet, a saxophone, and a clarinet. After photos are taken from different perspectives, the process combines them all into a mesh of triangles that recreates the design as an initial 3D model.
This model is compatible with existing modeling tools, and the reconstruction includes not only the 3D triangle mesh but also the materials, textures, and lighting applied to it.
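The core idea behind inverse rendering is "analysis by synthesis": guess the scene parameters, render them with a forward model, and adjust the guess until the render matches the photos. The sketch below illustrates that loop in a deliberately toy setting, recovering a single unknown surface albedo from a few observations under known lighting via gradient descent. It is a hypothetical, minimal illustration of the principle, not NVIDIA's 3D MoMa pipeline, which optimizes full triangle meshes, materials, and lighting.

```python
# Toy "analysis by synthesis" inverse rendering: recover an unknown
# albedo by minimizing the mismatch between rendered and observed pixels.
# This is an illustrative sketch, not NVIDIA's actual 3D MoMa method.

def render(albedo, light):
    """Forward model: a Lambertian pixel value is albedo * incoming light."""
    return albedo * light

# "Photos" of the same surface under known lighting from three viewpoints.
lights = [0.8, 1.0, 0.6]
true_albedo = 0.45
observations = [render(true_albedo, light) for light in lights]

# Gradient descent on the squared error between rendered and observed pixels.
albedo = 0.1  # initial guess
lr = 0.1
for _ in range(200):
    grad = sum(2 * (render(albedo, light) - obs) * light
               for light, obs in zip(lights, observations))
    albedo -= lr * grad

print(round(albedo, 3))  # converges to the true albedo, 0.45
```

Real systems like 3D MoMa apply the same loop at scale, using a differentiable renderer so that gradients can flow back from pixel errors to mesh vertices, material parameters, and light positions.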
In a video, NVIDIA shows how, after reconstructing these objects, its team imported them into NVIDIA Omniverse, the company's simulation and collaboration platform, for editing.
The objects also behave correctly: the development team verified this with the so-called Cornell box, a well-known graphics test that evaluates rendering quality.
With it, they were able to confirm that, for example, the objects reflected light correctly according to the material with which they were modeled. The work is another milestone in NVIDIA's ongoing effort to demonstrate practical use cases for its graphics cards and the artificial intelligence algorithms they accelerate.