Here we will talk about the graphics of the upcoming game Grand Theft Auto 6. GTA 6 is an open-world game with a vast virtual environment that players can explore freely, and building such a world can take even large game studios four to five years. The time it takes to launch a new game is therefore significant.
About Grand Theft Auto 6 Graphics
Here, deep learning algorithms can help reduce development time by taking over the visual design and rendering work that artists would otherwise do by hand. This article looks at two recent research papers that could help design and visualize virtual worlds like GTA's.
You can train neural networks to learn the shapes of the various objects or assets you want to include in the game's virtual world. You can then feed in a semantic label map describing the locations of these objects to render a realistic-looking visual.
Researchers at MIT and NVIDIA have published a paper, “Video-to-Video Synthesis,” that can synthesize high-resolution, temporally consistent video from semantic label maps. A remarkable GAN architecture ensures that the synthesized frames look realistic and maintain visual consistency from one frame to the next.
GAN architecture used for vid2vid
The generator in the vid2vid network does not use only the previous frame’s semantic map and the current semantic map you want to convert. It also takes the previously generated output frame and combines these inputs to estimate an optical-flow map.

The flow map captures how pixels move between two consecutive frames, which allows the network to synthesize images that are consistent over time.
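The flow-based warping idea can be sketched in a few lines. This is a simplified, hypothetical illustration rather than the vid2vid implementation: it warps the previous frame with a dense flow field using nearest-neighbour sampling, then blends the warped frame with a newly hallucinated frame via a soft occlusion mask (all function names and arrays here are made up for the sketch).

```python
import numpy as np

def warp_with_flow(prev_frame, flow):
    """Warp the previous frame by a dense flow field (nearest-neighbour).

    prev_frame: (H, W, 3) float array, the last generated frame.
    flow:       (H, W, 2) float array, per-pixel (dy, dx) displacements.
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates: where each output pixel comes from in prev_frame.
    src_y = np.clip(np.rint(ys - flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - flow[..., 1]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

def combine(warped, hallucinated, mask):
    """Blend warped and newly generated pixels with a soft occlusion mask.

    mask is (H, W, 1) in [0, 1]: 1 = trust the warped pixel, 0 = newly drawn.
    """
    return mask * warped + (1.0 - mask) * hallucinated

# Tiny usage example with random data.
prev = np.random.rand(4, 4, 3)
flow = np.zeros((4, 4, 2))          # zero flow: warping is the identity
hall = np.random.rand(4, 4, 3)
mask = np.ones((4, 4, 1))           # fully trust the warped frame
out = combine(warp_with_flow(prev, flow), hall, mask)
assert np.allclose(out, prev)       # zero flow + full mask reproduces prev
```

The mask is the key design choice: it lets the network reuse pixels that are explained well by motion and hallucinate only the newly revealed regions.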
On the discriminator side, vid2vid uses an image discriminator to control per-frame quality and adds a video discriminator that checks whether the sequence of synthesized frames makes sense according to the flow map. This greatly reduces flicker between frames.
It also uses a coarse-to-fine approach that starts by perfecting low-resolution output and then builds on that to produce high-resolution output incrementally. You can see the excellent results of this network in the picture below.
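The coarse-to-fine schedule can be illustrated with a toy training loop. This is only a structural sketch under my own assumptions, not the paper's code: the network is first fit against heavily downsampled targets, and the working resolution doubles at each stage (`train_stage` is a hypothetical stand-in for real generator training).

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an (H, W, C) image by an integer factor."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def train_stage(targets, resolution):
    # Stand-in for actually training the generator at this resolution.
    print(f"stage: {resolution}x{resolution}, {len(targets)} target frames")

full_res = 256
target_frames = [np.random.rand(full_res, full_res, 3) for _ in range(4)]

# Start at low resolution, then double until we reach full resolution.
schedule = []
res = 32
while res <= full_res:
    factor = full_res // res
    stage_targets = [downsample(f, factor) for f in target_frames]
    train_stage(stage_targets, res)
    schedule.append(res)
    res *= 2
```

Each stage reuses what the previous stage learned, so the network only has to learn the extra high-frequency detail at every step.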
Problems with a GAN-based Approach
The visual quality of the vid2vid GAN network is impressive, but there are practical problems if you want to use it in a game. GTA has day and night cycles that change the look of the virtual world, and weather effects such as rain and fog make the same world look completely different again.
In other words, a neural network that renders the graphics of a virtual world should be able to do so in the various visual styles that correspond to lighting and weather effects. However, producing images in a variety of styles is a problem for GANs because of mode collapse.
Mode collapse in GAN training
Imagine the training images as points in a high-dimensional coordinate space; in the figure above, this is shown in two dimensions. Some of these points represent daytime samples and some represent night images. In unconditional GAN training, we first generate random images by pushing noise through the generator.

The training process then pushes these fake images towards the training images to make them look natural. However, some training images end up uncovered and unused, so as training goes on, the generator learns to produce only one kind of image.

This is why GANs suffer from mode collapse, and why the images they create lack visual diversity. I then found a research paper that solves this problem using Implicit Maximum Likelihood Estimation.
Diverse Image Synthesis in GTA 6
Researchers at Berkeley have published a paper titled “Diverse Image Synthesis from Semantic Layouts via Conditional IMLE” that addresses the mode-collapse problem of GAN-based training in the vid2vid network. Rather than improving the quality of the output frames, this work focuses on synthesizing diverse images from the same semantic map.
In other words, unlike GAN, where one semantic label can only produce one output, you can use this method to render the same scene under all lighting or weather conditions.
The paper shows how to use Implicit Maximum Likelihood Estimation, or IMLE. Let’s try to understand why IMLE works better than a GAN in this case.
IMLE reverses the direction of GAN training. First, select a training image and pull the closest randomly generated image towards it. Then select another training image and pull another generated image towards it. This process is repeated until all the training images have been covered.

Because every training image, both day and night, now takes part in training, the generator learns to produce images in all of these styles.
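The steps above can be sketched with a toy nearest-sample loop. This is a deliberately simplified, hypothetical illustration that uses 2-D points instead of images: every training point repeatedly selects its nearest generated sample and pulls it closer, so every mode of the data gets covered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training images": two modes, e.g. a day-like and a night-like point.
data = np.array([[0.0, 0.0], [10.0, 10.0]])

# Toy "generated samples" from a random generator.
samples = rng.uniform(0.0, 10.0, size=(8, 2))

# IMLE-style updates: each DATA point selects its nearest generated sample
# and pulls that sample towards itself. (A GAN works in the opposite
# direction: fakes are pushed towards data, which can leave modes uncovered.)
for step in range(300):
    for point in data:
        nearest = np.argmin(np.linalg.norm(samples - point, axis=1))
        samples[nearest] += 0.1 * (point - samples[nearest])

# After training, some generated sample sits near EACH data mode.
for point in data:
    assert np.linalg.norm(samples - point, axis=1).min() < 0.5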
That was the unconditional version of IMLE, which starts from a random noise image rather than a semantic label map, but the training process is the same in both cases. Only the input encoding changes when using semantic maps, so let’s look at that.
In the conditional case of IMLE
In the conditional case, the input is a semantic label map rather than an image. Random noise channels are added to the input encoding, and these are used to control the visual style of the network’s output. Instead of using an RGB semantic label map as input, the map is split into separate channels, one per object class.

Each channel marks where one object type appears on the map. Here is the part of the paper I found most interesting: the extra noise channels control the style of the output. For one random noise image on these channels, the output follows a fixed style, such as a daytime look.
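Splitting a label map into per-class channels is standard one-hot encoding. Here is a minimal sketch, assuming integer class IDs rather than RGB colours (the class list is hypothetical):

```python
import numpy as np

# Hypothetical class IDs: 0 = road, 1 = building, 2 = sky.
NUM_CLASSES = 3

def one_hot_label_map(label_map, num_classes):
    """Convert an (H, W) integer label map into (H, W, C) binary channels,
    one channel per object class."""
    return np.eye(num_classes, dtype=np.float32)[label_map]

label_map = np.array([[0, 0, 2],
                      [1, 1, 2]])
channels = one_hot_label_map(label_map, NUM_CLASSES)

# The "sky" channel is 1 exactly where class 2 appears.
assert channels.shape == (2, 3, 3)
assert channels[0, 2, 2] == 1.0 and channels[0, 0, 2] == 0.0
```

The noise channels are then stacked onto these class channels before being fed to the generator.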
Changing the channels to another random noise image produces a different style, such as a night-time look. And by interpolating between these two random images, you can control the time of day in the output image. That is really cool!
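The interpolation itself is just a linear blend of the two noise images. A small sketch of the idea, with hypothetical names (a real generator would consume the interpolated noise alongside the semantic channels):

```python
import numpy as np

rng = np.random.default_rng(1)

H, W = 4, 4
noise_day = rng.normal(size=(H, W))    # noise image tied to the daytime style
noise_night = rng.normal(size=(H, W))  # noise image tied to the night style

def interpolate_noise(z_a, z_b, t):
    """Linear interpolation between two noise images; t=0 -> style A, t=1 -> style B."""
    return (1.0 - t) * z_a + t * z_b

# Sweeping t from 0 to 1 corresponds to moving from day to night output.
midday = interpolate_noise(noise_day, noise_night, 0.0)
dusk = interpolate_noise(noise_day, noise_night, 0.5)
night = interpolate_noise(noise_day, noise_night, 1.0)

assert np.allclose(midday, noise_day)
assert np.allclose(night, noise_night)
```

Feeding each intermediate noise image through the generator would give a smooth day-to-night transition for the same semantic map.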
Use this AI to render GTA 5
I tried to reproduce this effect on a short in-game clip from GTA 5. I used an image segmentation network to get the game’s semantic labels and then ran them through the IMLE-trained network. The result is fascinating, considering that the same generator network produces both the daytime and the night-time versions of the footage!
Grand Theft Auto 6 system requirements
Minimum requirements
- Operating system: Win 10 64
- Processor: Intel Core i5-4460 3.2GHz / AMD FX-8350
- Graphics: AMD Radeon R9 390 or NVIDIA GeForce GTX 970 4GB
- VRAM: 4 GB
- System Memory: 8GB RAM
- Storage Space: 100 GB Hard Drive Space
- DirectX 12 compatible graphics card.
Recommended requirements
- Operating system: Win 10 64
- Processor: Intel Core i7-8700K 6-Core 3.7GHz / AMD Ryzen R7 1700X
- Graphics: AMD Radeon RX Vega 64 Liquid 8GB or NVIDIA GeForce GTX 1080 Ti
- VRAM: 8 GB
- System Memory: 16GB RAM
- Storage Space: 55 GB Hard Drive Space.
Final words about Grand Theft Auto 6 Graphics
You can see how far AI-based graphics rendering has come by comparing the two papers we covered today: vid2vid and IMLE-based image synthesis. There are still a few hurdles to tackle before you can experiment with this new AI-based graphics technology in shipping games.
Within about a decade, Grand Theft Auto is expected to use AI-based asset rendering to help shorten game development time. The future of game development is exciting!