Door to CG Vol.37: NVIDIA GTC 2022 Report / From an Art and AI Perspective

NVIDIA GTC 2022 has taken place

NVIDIA’s biggest event, the GPU Technology Conference 2022 (GTC 2022), was held online on March 23, 2022. Until a few years ago, NVIDIA was known mainly as a maker of graphics chips for games and CG, but with the recent spread of GPUs (Graphics Processing Units) in AI, the company has increased its presence not only in AI but also in fields such as embedded computing and cloud computing.

At GTC 2022, a new GPU called the H100 (code-named Hopper) was announced, named after Grace Hopper, a pioneer behind COBOL, a programming language still widely used in core systems such as banking. The H100 follows on from NVIDIA’s existing A100 GPU and is designed entirely for AI rather than graphics. NVIDIA positions it for “enterprise AI,” and it is expected to be used for even larger-scale deep learning.

Also announced were the “DGX H100,” an AI and machine learning system equipped with eight H100s, and the “DGX SuperPOD,” a supercomputer that interconnects 32 DGX H100 systems for a total of 256 H100 GPUs; even larger configurations appear to be possible. These systems should find use in fields such as medicine, energy, security, language models, the space industry, the automotive industry, and agriculture. A minimally configured A100 system currently on the market costs roughly 3–5 million yen, but the same class of GPU can also be used briefly and inexpensively in various cloud environments, including during idle server time, so I expect the range of users to keep expanding.

NVIDIA DGX H100
https://www.nvidia.com/ja-jp/data-center/dgx-h100/

NVIDIA DGX SuperPOD
https://www.nvidia.com/ja-jp/data-center/dgx-superpod/

GTC 2022 Conference Official Website (Japanese)
https://www.nvidia.com/ja-jp/gtc/

This computing power will also be used to simulate autonomous driving in virtual space with NVIDIA DRIVE Sim and to simulate industrial robots in virtual space with NVIDIA Isaac.

Instant NeRF (Instantly generate 3D scenes from 2D photos)

Training and rendering with instant-ngp

As NVIDIA notes, 75 years ago the advent of the Polaroid instant camera made it possible to capture the 3D world instantly as a 2D image; Instant NeRF does the reverse. Instant NeRF (Neural Radiance Fields) is a technology from NVIDIA Research that reconstructs a 3D scene almost instantly from a few dozen 2D photographs. Reconstructing 3D shapes and spaces from multiple photographs has been researched before, but Instant NeRF completes the analysis and reconstruction roughly 1,000 times faster than earlier methods. Although some preparation is required, the reconstructed 3D scene can be rendered in tens of milliseconds.
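To make the underlying idea concrete, here is a minimal, self-contained Python/NumPy sketch of the volume-rendering step at the heart of any NeRF. It is not NVIDIA’s implementation: the toy_field function is a hypothetical stand-in for the trained neural network that maps a 3D position to color and density, and Instant NeRF’s contribution is making that network extremely fast to train and query (via a multiresolution hash encoding and heavily optimized CUDA code).

    import numpy as np

    def toy_field(points):
        # Hypothetical stand-in for the trained NeRF network:
        # maps 3D sample positions to (RGB color, volume density).
        # Here: an orange, semi-transparent sphere of radius 1 at the origin.
        inside = np.linalg.norm(points, axis=-1) < 1.0
        color = np.where(inside[..., None], np.array([1.0, 0.5, 0.1]), 0.0)
        density = np.where(inside, 5.0, 0.0)
        return color, density

    def render_ray(origin, direction, near=0.0, far=4.0, n_samples=128):
        # Classic NeRF volume rendering along one camera ray:
        #   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i
        # where T_i is the transmittance accumulated before sample i.
        t = np.linspace(near, far, n_samples)
        delta = np.append(np.diff(t), 1e10)                    # spacing between samples
        points = origin + t[:, None] * direction               # sample positions on the ray
        color, sigma = toy_field(points)
        alpha = 1.0 - np.exp(-sigma * delta)                   # opacity of each segment
        trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))   # transmittance T_i
        weights = trans * alpha
        return (weights[:, None] * color).sum(axis=0)          # composited pixel color

    # Render one pixel: a ray shot from z = 3 straight toward the sphere.
    print(render_ray(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0])))

In the instant-ngp repository linked below, this kind of computation runs as optimized CUDA code, and camera poses for your own photographs are typically estimated with COLMAP as a preprocessing step, which is the “preparation” mentioned above.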

This kind of speed opens up uses that have been difficult until now, such as turning real people into avatars in virtual space, converting a conference room and its participants into 3D for every videoconference, and converting cartographic information into 3D. The Instant NeRF introduction video pays homage to Andy Warhol, who favored Polaroid cameras for self-portraits and turned instant photos into works of art.

Session: GTC 2022 on-demand video (registration required)
Instant Neural Graphics Primitives
https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41441/

Official NVIDIA Commentary (Japanese)
NVIDIA Research instantly converts 2D photos into 3D scenes with AI
https://blogs.nvidia.co.jp/2022/04/07/instant-nerf-research-3d-ai/

Source Code: Instant Neural Graphics Primitives
https://github.com/NVlabs/instant-ngp

Bringing AI art closer to viewers through storytelling and whimsy

bitGAN training data

This talk was given by Pindar Van Arman, creator of bitGANs, which combine pixel art and AI. He is also the creator of the artonomous painting robot featured in “Door to CG Vol.26: The Role of AI in Accelerating Art #GTC2021 Report”. Because viewers of AI art tend to reject what they do not understand and lose interest, Van Arman gives the art that emerges from computers and networks a story and a sense of whimsy, and he says this has let him feel a connection with viewers. Many of the bitGAN artworks created this way are sold as NFTs (Non-Fungible Tokens) on the OpenSea platform.
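For readers curious what “training a GAN” actually involves, here is a minimal, hypothetical sketch of the standard adversarial training loop in PyTorch. It is not Van Arman’s bitGAN pipeline; the 16x16 image size and the random placeholder dataset are assumptions made purely for illustration. A generator learns to turn random noise into small pixel-art-like images, while a discriminator learns to tell them apart from real training images.

    import torch
    import torch.nn as nn

    IMG = 16 * 16     # flattened 16x16 pixel image (an assumed size for illustration)
    NOISE = 64        # dimensionality of the generator's input noise

    # Generator: noise vector -> flattened image with values in [0, 1]
    G = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, IMG), nn.Sigmoid())
    # Discriminator: flattened image -> probability that it came from the real dataset
    D = nn.Sequential(nn.Linear(IMG, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    # Placeholder "real" data; in practice this would be the curated pixel-art dataset.
    real_images = torch.rand(256, IMG)

    for step in range(1000):
        real = real_images[torch.randint(0, 256, (32,))]
        fake = G(torch.randn(32, NOISE))

        # Discriminator step: label real images 1 and generated images 0.
        opt_d.zero_grad()
        d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make the discriminator output 1 for generated images.
        opt_g.zero_grad()
        g_loss = bce(D(fake), torch.ones(32, 1))
        g_loss.backward()
        opt_g.step()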

How to make a GAN train on bitGAN training data
https://www.youtube.com/watch?v=q8HwFbOz9eI

Session: GTC 2022 on-demand video (registration required)
Making AI art more accessible through storytelling and fantasy
https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41036/

OpenSea’s bitGAN art (prices start at thousands of dollars per bitGAN)
https://opensea.io/collection/bitgans

Leverage Machine Learning in Art Workflows

An example of using training data for a purpose different from its original objective

Jane Adams’ session advocated the idea of curating training data: because AI-based art, such as StyleGAN-based works, is not bound by preconceptions, it does not necessarily require a perfectly trained model built on ideal data.

Curation, as at a museum exhibition, is an editorial act: selecting appropriate artists and works according to a theme, and designing the arrangement and composition of the exhibition and the order in which viewers encounter the works. In machine learning it is generally considered important to collect a necessary and sufficient amount of unbiased training data, but when training data is used for AI art, the data may be biased on purpose; arbitrary as that is, the session suggests it can be part of the creative production itself.

She also argues that writing code is not the only way to solve a problem, since preparing suitable training data can lead to a solution as well, and that the value of machine learning is not only practical: it also stimulates aesthetics, narrative, expression, and curiosity.
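As a concrete, hypothetical illustration of curation as part of the production (not Adams’ actual workflow), the sketch below deliberately biases a training set by keeping only the images whose average hue falls within a chosen range; the folder names and the hue range are assumptions for the example.

    from pathlib import Path
    import numpy as np
    from PIL import Image

    def mean_hue(path):
        # Average hue of an image in PIL's HSV convention (0-255 over the color wheel).
        hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
        return hsv[..., 0].mean()

    def curate(src_dir, dst_dir, hue_range=(120, 180)):
        # Copy only images whose average hue lies in hue_range -- a deliberate bias.
        # The biased subset in dst_dir then becomes the training set.
        dst = Path(dst_dir)
        dst.mkdir(parents=True, exist_ok=True)
        kept = 0
        for path in sorted(Path(src_dir).glob("*.png")):   # only PNG files are scanned here
            if hue_range[0] <= mean_hue(path) <= hue_range[1]:
                Image.open(path).save(dst / path.name)
                kept += 1
        print(f"kept {kept} images in {dst}")

    # Hypothetical folders; the hue range roughly selects a cyan-to-blue palette.
    curate("photos/raw", "photos/curated_blue")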

Session: GTC 2022 on-demand video (registration required)
Leverage Machine Learning in Art Workflows
https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41450/

Data Art: An Emerging Complement to Data Science | Jane Adams | TEDxSpringfield
https://www.youtube.com/watch?v=RBkYGKPH6a4

Art in the Mind of a Machine

From Machine Hallucinations by Refik Anadol Studio

Refik Anadol is a multimedia artist who designs spaces with AI-driven video works, including projection mapping on buildings and interactive theaters. He was also featured in “Door to CG Vol.26: The Role of AI in Accelerating Art #GTC2021 Report”. In his GTC session he presents a number of installation works and explains their context, the process behind them, and what inspires him. It is clear that data of many kinds, from nature and architecture to large amounts of artificially generated data, serve as the raw material for the works.

Session: GTC 2022 on-demand video (registration required)
Art in the Mind of a Machine
https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41954/

Art in the Age of Artificial Intelligence | Refik Anadol
https://www.youtube.com/watch?v=UxQDG6WQT5s

Machine Hallucinations: NYC
https://vimeo.com/450313024

Other Notable AI-Related Sessions

Last year’s GTC 2021 had a section introducing art-focused works, but the more than 900 session titles at this year’s GTC 2022 show that the overall focus was on solid “enterprise AI.” The main topics have been covered above, but here are some AI-related sessions that could not be covered:

  1. From natural movement to real sculpture: 3D motion capture and modeling using Deep Learning
  2. Express character AI emotion through style transfer paintings and neural music composition
  3. AI-powered video editing for creators at all levels

Moore’s law, which states that the integration density of semiconductor circuits doubles every one and a half to two years, is said to have broken down, but technological progress remains exponential: there are many cases where what once required a huge computer costing hundreds of millions of yen can now be done on a smartphone costing tens of thousands of yen. It is easy to predict that AI, which currently requires enormous computing power, will at some point become inexpensive and practical to use. The only question may be whether that world arrives soon or takes a while longer.

Future plans for this series: In “Door to CG,” we will continue to introduce topics in the context of CG/VFX and art, a slightly different angle from AI alone. Expect coverage of AI in video production, advanced tools that have gained new value through AI, topics that hint at future potential, and technology news. If there are topics you would like us to cover, please let us know.

Door to CG:

Vol.36: AI for Creation – The Future of AI and Human Creativity: Nao Tokui Conference Report

Vol.35: Machine Learning That Supports The Marvel Cinematic Universe

Vol.34: #SIGGRAPHAsia2021 CG Festival Throwback from Featured Article

Vol.33: Inevitability of AI #SIGGRAPHAsia2021 Report

Vol.32: Know the evolution direction of Adobe Sneaks

Vol.31: “Face” that artificial intelligence thinks and “face” that people think

Vol.30: SIGGRAPH 2021 Report “Battle with Deepfake”

Vol.29: The world of CG research that benefits from AI. Excerpt from article #SIGGRAPH2021

Vol.28: Application of standard methods to other fields, image processing AI derived from natural language processing AI

Vol.27: Catch Your Eyes And Overtake? Machine Learning Evolved Camera

Vol.26: The Role of AI in Accelerating Art #GTC2021 Report

Vol.25: The Face That Can Be Transformed Is Actually An Artificial Intelligence

≫≫ Click here for all back issues

Contributor: Yukio Ando
