
This piece is a companion to “I Can Upscale Star Trek Deep Space Nine and Voyager to HD, So Why Can’t Paramount?” Both stories contain their own unique videos and images of restored Star Trek, but this article is aimed at people interested in the technical aspects of upscaling and what can be accomplished with Topaz Video Enhance AI. The other story focuses as much on Star Trek itself.

The Deep Space Nine and Voyager Upscale project relies on AviSynth, VapourSynth, DaVinci Resolve 17, and Topaz Video Enhance AI. All of these applications are available for free except for TVEAI.

The past week beat the absolute crap out of me. My project SSD failed at the worst possible time, catching me between backups. I did not intend to recreate a lot of this footage, especially not in such haste. As a result, this article shows more work-in-progress (WIP) material and samples that demonstrate various ideas in action, in lieu of some of the finished work I originally intended to show.

There are multiple finished Deep Space Nine clips embedded in this story that are not shown in the other story, and a few Voyager samples as well. If you would like to see all of my current before and after shots comparing the original DVDs to the new restoration, a single imgsli link with drop-down menus to select your comparison targets is available here.

This imgsli comparison contains every Before / After shot I’ve shown in this update. Note that there are other upscale methods for the Defiant that maintain more noise and grain in the final image; we’ll check some of those out later in this story.

Workflow Changes

My goal, when I began the Deep Space Nine Upscale project, was to create a method of improving Deep Space Nine that anyone could follow. I still plan to publish an updated tutorial based on these methods for DS9 (with video this time around) and will be publishing a Voyager tutorial as well in the weeks to come. What I’ll be discussing today, however, is several steps beyond feeding a single input through Topaz Video Enhance AI.

I’ve developed three different methods for improving the absolute quality of AI upscales. The first one is to inject additional grain and noise into the video. This can be done with AviSynth or VapourSynth via QTGMC. It can theoretically be done with other applications and filters as well, but I’ve had the best and most consistent luck with GrainRestore and NoiseRestore in QTGMC. I find NoiseRestore and GrainRestore values between 1.0 and 1.75 work best depending on the content. Some experimentation is usually helpful. Do not expect miracles. Do expect improvements.
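As a rough illustration of that injection step, here’s what it can look like in VapourSynth via havsfunc’s QTGMC port. The file name and exact restore values are my own illustrative assumptions, not a script from this project:

```python
# Minimal VapourSynth sketch: re-inject grain/noise before upscaling.
# File name and parameter values are illustrative, not this project's script.
import vapoursynth as vs
import havsfunc as haf  # community script that provides QTGMC

core = vs.core
clip = core.ffms2.Source('ds9_episode_detelecined.mkv')  # hypothetical input

# InputType=1 tells QTGMC the clip is already progressive; here we mainly
# want its GrainRestore / NoiseRestore stages (the 1.0-1.75 range above).
clip = haf.QTGMC(clip, Preset='Slower', InputType=1,
                 GrainRestore=1.5, NoiseRestore=1.25)

clip.set_output()
```

The same idea works in AviSynth by passing GrainRestore / NoiseRestore directly to QTGMC there.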

Janeway with and without noise and grain addition via AviSynth. User-adjustable comparison available on imgsli.com

Second is blending footage from multiple models. I’ve used DaVinci Resolve for this, though I’m considering exploring other solutions. My overall experience with Resolve has been problematic, to say the least. Resolve, ironically, is not very good at compositing video. It has ruined multiple encodes by introducing motion errors during composition. It sometimes refuses to decode files properly if they are provided in MP4 or MKV format (Resolve supports MKV as of version 17). I had some luck moving to MOV but have not resolved the problem yet.

DaVinci Resolve has been critical to my quality improvements and also a giant, sabotaging pain in my ass. I’ve had to unmake and remake multiple clips manually to fix motion errors. I’ve had to rely on other tools to splice in audio because DaVinci doesn’t always understand what to do with audio files attached to video files. Without DaVinci Resolve, I couldn’t make some of the improvements I’ve made. With a better version of DaVinci Resolve, I might have gotten an extra 12 hours of sleep last week. I have mixed feelings about this application.

Then again, I also have mixed feelings about Topaz Video Enhance AI. One of the great apparent ironies of video editing is that for best results, one does not edit video at all. You edit sequences of image frames that have already been dumped to disk. Topaz Video Enhance AI and DaVinci Resolve both struggle with maintaining audio properly through a video encode. DaVinci sometimes struggles with maintaining proper frame motion when compositing video.

I currently have more of a love/hate relationship with DaVinci than I do with Topaz Video Enhance AI because I have a better sense of the things TVEAI is bad at. DaVinci will silently change the timing in your video and never bother to tell you that it did so, or why, or whether it could be fixed by using a .MOV file instead of a .MP4 or .MKV. It throws random “Media Offline” errors in MKV files for no reason and will refuse to read certain frame sequences that extract perfectly and scan as ordinary files in every file utility imaginable. Manual frame replacement in the video will sometimes fix these problems. Giving DaVinci the same footage in a different wrapper will also sometimes fix them. Adobe Premiere might work just as well as DaVinci Resolve for my purposes, but I haven’t had time to test it yet.

Defiant, Alternate Grade

Third, I recommend moving sharpening out of AviSynth / VapourSynth and performing it in either Topaz VEAI or DaVinci Resolve. It can still be worth handling noise/grain injection in these utilities, but TVEAI’s sharpening works better if it isn’t paired with AviSynth / VapourSynth sharpening. Sharpening in AviSynth and VapourSynth causes severe haloing compared with the approaches available in other applications. If you are struggling not to over-sharpen content when you try to improve it, leave sharpening off altogether and try Proteus settings like 40-40-0-0-0-35 x2. These are conservative settings for most content. Be aware that Proteus’ dehaloing tool can cause haloing and its noise removal tool can wind up injecting noise.

TVEAI’s 2x models continue to be generally better than 4x models and 1280×960 remains a better target resolution than 2560×1920, at least until the very last step. DaVinci can resize 1280×960 into 2560×1920 (or 1310×960 into 2624×1920 if you prefer). 2560×1920 is preferred for any YouTube uploads if you don’t want to eat an ugly quality hit.
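If you’d rather script that final resize step instead of doing it in DaVinci, here’s a minimal VapourSynth sketch; the file name and the Spline36 resampler are my own assumptions, not this project’s exact settings:

```python
# Hypothetical final-step resize: 1280x960 -> 2560x1920 before upload.
import vapoursynth as vs

core = vs.core
clip = core.ffms2.Source('upscaled_1280x960.mov')  # illustrative file name
# Spline36 is one reasonable resampler for a clean 2x enlargement; it is an
# assumption here, not necessarily what DaVinci uses internally.
clip = core.resize.Spline36(clip, width=2560, height=1920)
clip.set_output()
```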

Exploring the Impact of Noise and Grain Injection

The following comparisons also show the impact of injecting grain and noise and applying additional sharpening at various points in the pipeline, as opposed to skipping those steps.

I wish I had more footage from Year of Hell Parts 1 & 2 to show, but it’s one of the episodes I lost when my SSD died earlier this week and I wasn’t able to fully recreate my own process in time for this article.

Voyager, “Year of Hell.” You’ll need to click this one to see it, but injecting grain and noise substantially improves detail retention on the wreck. User-adjustable comparison on imgsli.com

Proper footage treatment can yield dramatic differences in final image quality. Both of the frames below were upscaled using the same Proteus settings in TVEAI. The only difference is how I pretreated the footage. I adjusted the color grade on the image below to make the difference more visible in a preview image.

Harry Kim, with and without noise and grain preinjected before upscaling in Topaz Video Enhance AI. User-adjustable comparison available on imgsli.

It’s possible to overshoot the mark and then use TVEAI to correct the problem. Here’s a deliberately broken video example from Endgame. It’s over-noised and over-grained, and there’s a visible pattern in the video as a result.

How do we fix it?

Obviously one way is to recreate the video but with less pre-injected grain and noise. But that’s not the only way. Some Topaz models are pretty good at removing unwanted noise and grain. Experiment with Artemis High Quality and Low Quality (LQ is stronger) if you need some noise and grain removal, Medium Quality if you also need some smoothing, and the Artemis DeHalo and Strong DeHalo options if you need a particularly heavy hand applied to an over-sharpened or otherwise damaged bit of content. Gaia HQ is generally used to add noise and grain, while Proteus and AHQ do not remove it as much as ALQ or AMQ.

I’m not suggesting that the below clip represents an idealized output, but it does a great job of illustrating how Topaz can work as a finishing tool to clean up content you’ve already rendered. I’d probably prefer a 50/50 blend of this output and the original high-grain / high-noise version.

You may or may not like my specific blend choices, but that’s less important than understanding how to apply the principle. Mixing grain and noise into material you intend to upscale may not be the way to process normal video, but it should be understood as one of the foundations of good AI upscaling with Topaz Video Enhance AI.

The Benefits of Compositing

Blending / compositing footage is another way to improve your output from Topaz Video Enhance AI. Here’s an example of three frames from “Far Beyond the Stars.” They are drawn from the same base file but have been put through different models.

Input A has great eye and nose detail and good facial lines, but it verges on too clear and over-smoothed. It also oversharpens the background.

Input B has a good balance of characteristics between Kira’s face and her clothing, but it doesn’t have as much detail as Input A. Kira doesn’t “pop” as much, so the image is less striking.

Input C combines the balanced look of Input B and adds back a lot of grain and noise that A & B both lack, but it’s less clear and heavily noised.

This image is a 33/33/33 weighted blend of the previous three images. The composite strengthens each output where it was weak without compromising any of the areas where it was strong. A user-controllable comparison model is available on imgsli.

Again, you might not like my specific blend choices, but I literally picked this frame at random. It’s the opposite of a cherry-picked example.

Blending works best when you have specific facets of different images that you want to utilize, without any problematic artifacts that will poke through as well. The version of Trials and Tribble-ations I built used parts of 18 different encodes and file-processing methods. This is not strictly required to restore an episode, but I wanted to see just how far I could improve the show.

The video and images presented in this article are composites. They are composited from multiple AviSynth files that were preprocessed in multiple ways. Those files were then run through multiple Topaz models to create outputs that were themselves blended together. In some cases, 1280×960 outputs were combined and layered in DaVinci Resolve Studio 17 and then fed through Topaz again as single inputs, creating 2560×1920 output. This 2560×1920 output was then back-blended with resized 1280×960 output to create the final product.
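If you want to experiment with this kind of weighted blend outside an NLE, per-frame blending is simple to script against exported frame sequences. Here’s a minimal Python sketch using OpenCV; the paths and weights are illustrative assumptions, not my exact pipeline:

```python
# Weighted blend of same-sized frames exported from different models.
# Paths and weights below are hypothetical examples.
import cv2
import numpy as np

def blend_frames(paths, weights):
    """Blend same-resolution frames with the given weights (summing to ~1)."""
    acc = None
    for path, w in zip(paths, weights):
        frame = cv2.imread(path).astype(np.float64)
        acc = frame * w if acc is None else acc + frame * w
    return np.clip(acc, 0, 255).astype(np.uint8)

# A 33/33/33 blend of three model outputs for one frame:
out = blend_frames(
    ['model_a/000123.png', 'model_b/000123.png', 'model_c/000123.png'],
    [1 / 3, 1 / 3, 1 / 3],
)
cv2.imwrite('blended/000123.png', out)
```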

Jadzia Dax, single-model upscale based on DVD. Footage was detelecined but no other modification was performed.

Jadzia Dax after blending with multiple inputs. User-adjustable image comparison available on imgsli.com

Here’s another example of how some simple compositing can improve the final result. On the left is Reg Barclay upscaled with Proteus at 2x, from a version of the show that was detelecined but not processed with any other filter. The right-hand image is a 50/50 blend: 50 percent of it is the image on the left, and the other 50 percent is a high-noise version of the same episode upscaled using the Gaia High Quality model at 2x.
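A two-input blend like this can also be expressed directly in VapourSynth with std.Merge, skipping the NLE round-trip entirely. The file names here are hypothetical:

```python
# Hypothetical 50/50 clip-level blend of two model outputs in VapourSynth.
import vapoursynth as vs

core = vs.core
proteus = core.ffms2.Source('barclay_proteus_2x.mov')  # illustrative names
gaia = core.ffms2.Source('barclay_gaia_hq_2x.mov')

# weight=0.5 mixes the two clips equally; both inputs must share the same
# format and dimensions for std.Merge to work.
blend = core.std.Merge(proteus, gaia, weight=0.5)
blend.set_output()
```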

How Much Do These Changes Matter?

I’m going to show some examples of how my own work has improved over the last two years, not to toot my own horn but to illustrate that I’ve spent a lot of time experimenting with these applications. Let’s do some apples-to-apples comparisons. This comparison of the Defiant under fire in “Way of the Warrior” shows how far I’ve come since September 2020:

The restored Defiant as of September, 2020. I worked darn hard for this quality uplift at the time.

Adding grain and noise injection, switching to Proteus, and blending outputs together has collectively provided a significant net quality improvement over and above what was previously achievable. Here’s an imgsli comparison of the same frames.

Here’s a comparison of Sisko from the Defiant battle in “Way of the Warrior.” I’ve cropped both of these from YouTube, to make certain you’re seeing what viewers actually see, not what I see in uncompressed images. If you’ve watched my work on YT, this is the detail improvement to uploaded footage. This comparison also shows the quality loss between YT and downloaded video. It’s not much here, but some episodes take a heavier hit than others (smoke and clouds tend to get uglified by YT’s encoder).

My original feature image for the story “What No Fan Has Seen Before.”

Updated version of the same frame. Variant #1.

Alternate, updated version of the same frame. Variant #2.

Here’s another comparison from Sacrifice of Angels. The older image in this case is from a 60 fps encode of the episode done in Gaia-CG. It doesn’t look bad by any means, but the uplift between now and two years ago is substantial.

Voyager Credits Comparison

I’ve created two full versions of the Voyager credits in 23.976 fps and 30 fps and a partial 60 fps version. I converted to 30 fps using Topaz’s “Chronos” feature to see how the application would deal with some jerky movement. I felt like the overall results were mixed. Moving the credits to 30 fps definitely helps motion in the initial fly-past, but it doesn’t do anything for the ship’s camera-facing flyby a few moments later. It also introduces some jerky motion in the planetary rings scene later in the credits.

First, here’s the credits in restored 23.976.

Next, here’s the Topaz Video Enhance AI-converted version at 30 fps. I tested doing this in DaVinci as well, but the output wasn’t as good as Topaz Video Enhance AI. If you want to muck around with frame speed changes, I recommend testing TVEAI first.
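For a scriptable point of comparison, motion-compensated rate conversion can also be done with the MVTools plugin in VapourSynth. This is a generic sketch of that approach, not a reimplementation of Chronos, and the file name is an assumption:

```python
# Hypothetical 23.976 -> ~30 fps motion-compensated conversion via MVTools.
import vapoursynth as vs

core = vs.core
clip = core.ffms2.Source('voyager_credits_23976.mkv')  # illustrative name

sup = core.mv.Super(clip)
bvec = core.mv.Analyse(sup, isb=True)   # backward motion vectors
fvec = core.mv.Analyse(sup, isb=False)  # forward motion vectors

# FlowFPS synthesizes intermediate frames at the new rate (30000/1001 fps).
clip = core.mv.FlowFPS(clip, sup, bvec, fvec, num=30000, den=1001)
clip.set_output()
```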

Personally, I think it’s a bit of a wash. The 30 fps conversion looks better in some specific spots and the 23.976 native looks best in others.

Problems and Challenges

Trials and Tribble-ations restored beautifully, but there were a few pain points. One challenge in this episode was dealing with how TVEAI handled foreground actors versus projected backgrounds. This is also a problem with Emissary, where the lower level of detail in the background green screening makes it even harder to deal with.

Sisko pops a bit out of the frame here. The effect can be reduced by back-blending in less sharp material, but it was difficult to eradicate completely. Better color grading would also have helped adjust TOS-era faces, which tend to shine, but finding one approach that worked for both the TOS-era and DS9-era footage was already difficult.

Also, YouTube likes to stutter on my end at the tail of this clip. This doesn’t happen when watching the original footage from a file. I’m not sure what causes it.

This clip from Year of Hell illustrates how hard it is to get Voyager and Deep Space Nine to hold detail when upscaling. This clip isn’t bad, but it doesn’t meet my own criteria for “HD” because it didn’t preserve enough detail in the upscale. This was a work-in-progress encode that I happened to still have after my SSD failure last week.

Beyond these issues, the greatest challenge I’m currently facing is untangling what’s giving DaVinci such a set of hairballs and finding a more reliable way to create this kind of quality uplift. I’m confident I will. If I hadn’t spent last week recreating my previous work without the benefit of any of my project records, I might have found a solution already.

And Just For Fun

I promised some clips that weren’t featured in the other story, and I didn’t just mean WIPs or intermediate files. First up, here are the first two fleet engagements from Deep Space Nine’s “Sacrifice of Angels.”

I’ve also uploaded the final battle from “Way of the Warrior,” restored again specifically for this iteration of the project.

“It’s no illusion.” No, it jolly well isn’t. The reason some color grades vary from clip to clip and from image to image is that I’m still learning how to color grade and I experiment a fair bit. What looks good in one scene may not look good in the next. When I make multi-scene clips, I tend to be much more conservative with color rather than risk wrecking something.

I don’t have a great or specific way to end this story, except to say that I hope this discussion of how to achieve the highest-quality results in TVEAI is useful to some of you, and that you generally enjoy the footage and samples. I’ll have updated tutorials soon and, hopefully, a method of getting consistent, high-quality output from DaVinci that I can share without ruining someone’s day if they try to duplicate my work. Fixing the weird motion errors it’s introducing has to happen first.


from ExtremeTech https://ift.tt/ktDwZ6X
