One question gamers have been asking more or less since the invention of graphics is, "How long until GPUs can make a game look 'real'?" This has always been a difficult lift. While it's possible to create photorealistic renders, doing so typically requires a great deal of careful work from the artist, not to mention a lot of processing time. Games, which typically run between 30 and 60 frames per second, have a budget of roughly 16 to 33 milliseconds per frame; they can't afford to spend a full second on one, much less several hours.
Intel researchers have published a new paper illustrating how photorealism could come to gaming courtesy of AI enhancement. The video below explains how the new system works, but I'll cover it in text for anyone who can't watch:
The Intel researchers began with rendered images, which were then passed through an image enhancement network. That's fairly standard for upscaling or any other type of image enhancement. In addition, the network reads the graphics pipeline's G-buffers, pulling data from the game engine about the materials, shapes, and lighting in the current scene. That information is then passed through a G-buffer encoder network to produce feature tensors. (There's a rough sketch of this flow in code below.)
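To make that flow concrete, here's a minimal PyTorch sketch of the general shape of such a pipeline. To be clear, this is my illustration, not Intel's actual architecture: the channel counts, layer sizes, and names (GBufferEncoder, EnhancementNetwork) are all placeholder assumptions.

```python
import torch
import torch.nn as nn


class GBufferEncoder(nn.Module):
    """Toy stand-in for a G-buffer encoder: maps engine data (albedo,
    normals, depth, material IDs, ...) stacked as image channels into
    a feature tensor for the enhancement network."""

    def __init__(self, gbuffer_channels: int = 12, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(gbuffer_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
        )

    def forward(self, gbuffers: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffers)


class EnhancementNetwork(nn.Module):
    """Consumes the rendered RGB frame plus the encoded G-buffer
    features and predicts a correction to the frame."""

    def __init__(self, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # Residual design (my assumption): start from the render, nudge it.
        return frame + self.net(torch.cat([frame, feats], dim=1))


# One rendered 256x256 frame plus a hypothetical 12-channel G-buffer stack.
frame = torch.rand(1, 3, 256, 256)
gbuffers = torch.rand(1, 12, 256, 256)
enhanced = EnhancementNetwork()(frame, GBufferEncoder()(gbuffers))
print(enhanced.shape)  # torch.Size([1, 3, 256, 256])
```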
The network uses a perceptual discriminator to score how realistic each scene is, judging it against real photographs. Data from the scene is also extracted and labeled so the network knows to treat trees differently than cars, for example. (Trees with glossy showroom paint jobs tend to stick out.) A second sketch below shows how that label conditioning might look.
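Again, the structure and names in this sketch are my assumptions, not the paper's; the point is only that the realism critic sees semantic labels alongside the pixels, so a tree is judged against real trees rather than real cars.

```python
import torch
import torch.nn as nn


class PerceptualDiscriminator(nn.Module):
    """Toy label-conditioned realism critic: scores image patches with
    a one-hot semantic map (tree, car, road, ...) concatenated in, so
    each class is judged on its own terms."""

    def __init__(self, num_classes: int = 8, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_classes, features, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(features, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # One realism score per spatial patch rather than per whole image.
        return self.net(torch.cat([image, labels], dim=1))


disc = PerceptualDiscriminator()
image = torch.rand(1, 3, 256, 256)   # an enhanced frame or a real photo
labels = torch.rand(1, 8, 256, 256)  # stand-in semantic label map
print(disc(image, labels).shape)     # torch.Size([1, 1, 64, 64])
```

During training, scores on real photos get pushed up and scores on enhanced frames get pushed down, which is the standard adversarial setup a discriminator like this implies.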
We’ve put together a set of comparison shots to give you a sense of the before and after. Additional images are available here:
The open road, in default GTA V.
The exact same frame, but with Intel's image enhancements applied. The foliage on the hills is more verdant and the asphalt looks far more like real pavement. Colors pop more, but in a way that looks more realistic (to my eye) than the original GTA V image.
A comparison shot of the two outputs. Here’s another example:
This isn’t a very interesting image at first glance, but that’s actually why I chose it. Compare the default GTA V screenshot with the AI-enhanced output:
The Intel-enhanced image turns a brightly lit scene with some interesting splashes of color into a dull, gray, overcast environment. In the first shot, the sky is a high white and the day merely looks cloudy; in the second, the clouds hang oppressively overhead and the boards look like they need staining. But the shot, in totality, also looks more realistic to my eye.
I think it's fair to ask whether the AI-enhanced version changes the shot so much that it could affect the ambient experience of playing the game, though the goal of this project was to create photorealistic output, not to "improve the graphics" as such. That raises the question of whether we want games to look real in the first place; plenty of game universes rely on visual conventions that are explicitly not intended to be true to life. Either way, Intel's work demonstrates real progress in the field.
Finally, here’s a comparison where the AI clearly needs some work. First, the original GTA V screenshot:
Now, Intel’s version:
The square raindrops are an obvious oversight, and probably a solvable one. The Cityscapes dataset the researchers trained on may not have included many photos of rain, especially not of rain puddles.
Some people aren't going to like the way this changes the look of GTA V, and that's completely fine. The goal of the project wasn't to create a "realism" filter that gets slapped on every title; it's a concrete step toward better photorealism overall. Give it a few more years, and developers may use these kinds of approaches to improve content before it ships.
Now Read:
- Sony Patents AI That Plays Games for You
- IBM Built an AI Capable of Holding Its Own Against Humans in a Debate
- Cerebras Unveils 2nd Gen Wafer Scale Engine: 850,000 Cores, 2.6 Trillion Transistors