Thursday, August 4, 2022


For the last 2.5 years, I’ve worked to restore Star Trek: Deep Space Nine and more recently, Star Trek: Voyager. I’ve written about those efforts on multiple occasions, but today’s story is a bit different, and it casts a much broader net. Instead of focusing on Star Trek, this article is a comprehensive overview of what AI upscaling software does well, where it falls short, and how to improve your final upscale quality. In some scenarios, a properly processed, upscaled DVD can rival or even surpass an official studio Blu-ray release.

I know that last claim raises eyebrows. It’s one of the topics we’ll be discussing today, and you can see the evidence below.

The native Blu-ray is on the left, upscaled DVD on the right.

I have tested multiple AI upscaling applications, both paid and free, using everything from old VHS-equivalent footage to native 720p HD content. This article is also a guide to maximizing your output quality while avoiding some of the pitfalls that can wreck your video. There is some Star Trek footage here, but it’s only one type of content among many.

The question I get most often is “Can AI upscaling improve a video?” Here’s one of my answers to that. If you like what you see, keep reading.

Based on what I’ve heard from readers, there’s a lot of confusion about when AI upscaling is useful, how much improvement can be achieved, and whether paid products like AVCLabs Video Enhancer, DVDFab Video Enhancer AI, or Topaz Video Enhance AI are worth the money. My goal is to help you answer these questions and give you a feel for what’s possible, no matter what your project is.

It is also possible to train your own AI models at home if you have a sufficiently powerful GPU and a training data set, but that’s a separate topic from the question of whether any currently available paid or free AI products are worth using. This article does not address the relative merits of training one’s own model versus using already-available products. This story also does not consider online services, as these are not suitable for large-scale image processing.

How to Read This Article

I’ve never had to write a section telling people how to read one of my stories, but I’ve also never written a story that was 16,000 words long with 50 or so videos across eight pages.

This is an omnibus article that deals with three distinct topics. The text on this page discusses what AI upscalers can and can’t do, with example images and videos embedded throughout. There’s also a section on advanced AI upscaling tips for those looking to get more out of these applications, and a specific demonstration of how an upscaled version of Stargate SG-1 based on a DVD source can beat the quality of MGM’s official Blu-ray release.

The links in the table of contents below allow you to jump to any section of the article while the video links point to a dedicated page for each clip. These supplemental pages contain additional encodes and side-by-side comparisons for interested readers. At the end of each section, you’ll see two small links labeled “Contents” and “Video.” Those links will return you to the table of contents and the list of video appendices, respectively. I’ve also written a separate page for anyone who wants advanced tips on maximizing upscale quality. All of the supplemental appendices will be linked again at the bottom of the page for easy navigation, so you don’t have to jump back up to the top to access them.

If you have questions about what AI upscaling is, what it can do for you, and how to achieve the highest possible quality, you’ll want to keep reading this page. If you want to know more about advanced methods for improving upscale quality, grab the “Master Class” link. If you want to see the DVD v. Blu-ray discussion, specifically, it’s the last link in the video list below.

Table of Contents

I opted to split the story up this way after realizing there was no practical way to discuss both the software and the video it creates in detail in a single-page article. The video-specific pages at the bottom of this article document every step of the workflow. Every appendix contains multiple encodes and comparisons, but I created over 20 videos for Final Fantasy X, specifically. If you want a one-page comparison of how different AI upscalers and non-AI methods for improving content compare against each other, check there.

The Blu-ray v. DVD claim is addressed on the last page with a comparison between the Blu-ray release of Stargate SG-1 versus an upscaled version of the same episodes from DVD. All of the videos shown in this story were resized to 4K for upload to minimize the impact of YouTube’s bad low-resolution video encoding.

Videos

You will want to set YouTube to 4K, or as high as your monitor supports, in all cases. You can use the “<” and “>” keys to navigate frame-by-frame in a YouTube video. When you see a label like “Original Video” in a comparison, it means you are seeing the original, unaltered footage or an upscaled version of that footage with no other processing applied. I deinterlaced/detelecined these clips if necessary but changed nothing else. Terms like “filtered” and “processed” mean the video was edited in a different application before I upscaled it.

The sample videos ranged in size from 320×240 to 1280×720, and their baseline quality and level of baked-in damage varied widely. Each of the individual video pages shows the footage at every stage of processing, from the initial video through to the final upscale. This will help you track where improvements come from and how each stage of processing changes the final output.

You may not like every output I created. You may not like any output I created. That’s fine. The goal is to show you the breadth and depth of what’s possible with various settings and applications, not to sell you on my own work as the crème de la crème of upscaling.

Before we get started, a few quick terms: “Original source” means “The original video file created by the authoring device.” In the context of a DVD, original source might be an .m2v or set of VOB files. “Native” refers to the original resolution of a file. A video can still be in its native resolution without being original source. I will also sometimes refer to a 200 percent upscale as a 2x upscale and a 400 percent upscale as a 4x upscale. These terms are equivalent.

Table of Contents / Videos

What Is AI Upscaling?

Upscaling — with or without artificial intelligence — is the process of converting a video or image from a lower resolution to a higher one. When you resize an image from 640×480 to 1280×960 in a program like Photoshop or Paint.net, you are also upscaling the image, just not with AI. Some video and image-editing applications allow the end-user to choose a specific image resizing algorithm, including options like Lanczos, Bicubic, Bilinear, and Nearest Neighbor.

The image below shows how different image resizing algorithms interpret the same source image. By using a different algorithm, you can change how your newly-resized image looks whether you are making it smaller or larger than before. It isn’t very easy to compare all the filtering patterns, so I cropped the Bilinear, Lanczos, None, Spline36, and Spline16 samples and uploaded them to imgsli if you’d like to see a close-up comparison. Use the slider to check differences and the drop-down menu to select different images. The little square icon will maximize the comparison.

Image by MATLAB
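If you’d like to see the difference for yourself, the short Python sketch below resizes the same image with several of these classic, non-AI filters using the Pillow library. The file names are placeholders; this has nothing to do with any upscaler’s internal workflow, it simply illustrates conventional resizing.

    from PIL import Image

    src = Image.open("source_640x480.png")   # placeholder file name
    target = (1280, 960)                      # a 200 percent (2x) upscale

    # Each resampling filter fills in the new pixels differently.
    filters = {
        "nearest": Image.NEAREST,
        "bilinear": Image.BILINEAR,
        "bicubic": Image.BICUBIC,
        "lanczos": Image.LANCZOS,
    }

    for name, resample in filters.items():
        src.resize(target, resample=resample).save(f"upscaled_{name}.png")

Open the resulting files side by side and you’ll see the same kinds of differences the comparison above shows: Nearest Neighbor preserves hard pixel edges, while Bicubic and Lanczos produce smoother, subtly different results.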

AI upscalers also process and resize images/video, but they don’t rely on the same scaling algorithms. AI upscalers use models trained with machine learning to process video and increase its apparent quality and/or display resolution. Developers can choose to train the same base model on different types of content to create similar models with different specializations, or they can use new training sets for each model. Similar models with different specializations are often said to belong to the same model family. One model in a family might be tuned only to remove noise, while another might also sharpen the picture. Run the same video through two different AI models, and you’ll get two different-looking outputs.
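To make the idea of a “trained model” concrete, here is a rough per-frame sketch of what an AI upscaler does internally, written with PyTorch. It assumes a hypothetical model file that stores a complete serialized network; many community models actually ship only the weights plus separate architecture code, which applications like Cupscale handle for you behind the scenes.

    import numpy as np
    import torch
    from PIL import Image

    # Hypothetical file containing a fully serialized upscaling network.
    model = torch.load("example_2x_model.pth", map_location="cpu")
    model.eval()

    # Convert one frame into the normalized NCHW tensor most models expect.
    frame = np.asarray(Image.open("frame_in.png")).astype(np.float32) / 255.0
    tensor = torch.from_numpy(frame).permute(2, 0, 1).unsqueeze(0)

    with torch.no_grad():
        upscaled = model(tensor)   # the network returns a larger (here, 2x) tensor

    out = upscaled.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy()
    Image.fromarray((out * 255).astype(np.uint8)).save("frame_out.png")

Everything interesting happens inside model(tensor): the learned weights, not a fixed mathematical formula, decide what the new pixels look like, which is why two models produce two different-looking outputs from the same frame.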

The example below compares frame 85,020 from the original Star Trek: Deep Space Nine episode “The Way of the Warrior” with the same frame after upscaling with different models in Topaz Video Enhance AI. Included are examples from the Dione Robust, Artemis High Quality (AHQ), and Proteus models as rendered in TVEAI 2.6.4, as well as AHQ from the just-released Topaz 3.0 Beta. The color tweaks between the OG source and the three upscaled frames from TVEAI 2.6.4 were intentional on my part. The 3.0 Beta shifted color a bit in its own direction, however.

User-adjustable image comparison available on Imgsli. Each of these images is derived from the same base source.

The model you choose determines what your output will look like, no matter what AI upscaling application you use. Run live-action content through an AI designed to upscale animation, and you’ll get results that could charitably be described as “odd.” Pick a model that includes heavy denoising when your video doesn’t need it, and your output video may be over-smoothed. Choosing an AI model to resize content is conceptually similar to choosing a specific resizing algorithm in an image editing application, even though the mechanism of action is very different. In both cases, you are selecting a particular processing method that will impact how your resized image looks.

Table of Contents / Videos

What Kinds of Upscaling Software Exist?

There are a variety of AI upscaling applications you can download, both paid and free. Out of all of them, there are two clear leaders: Topaz Video Enhance AI and Cupscale. Topaz Video Enhance AI is easily the best paid product you can purchase, but at $200, it’s not cheap. Cupscale is a free alternative worth investigating if you don’t have the cash, though there are some tradeoffs we’ll discuss in this section.

Cupscale’s UI and upscaling options. If you are interested in a non-paid application, this is the one I recommend.

Initially, I had planned to include multiple paid applications across this entire article, but I ran into serious problems with both AVCLabs Video Enhancer AI and DVDFab’s Video Enhancer. DVDFab’s problem is that it produces generally low-quality output that doesn’t qualify as “improved” in any meaningful sense of the word. AVCLabs had similar problems and often broke at scene boundaries by pulling information from the next scene backward into the current one. I have included results for both AVCLabs and DVDFab in the Final Fantasy X appendix but did not run extensive tests with these applications.

In this comparison, AVCLabs Broken 1 is blurry and destroys detail while Broken 2 causes certain items of clothing to pop out of the scene. Both cause unintended color shifts. DVDFab Broken 1 is full of odd vertical bars, Broken 2 flattens everything and has a lovely error on the bottom-right side of the frame, and Broken 3 gives up on upscaling in favor of smearing Vaseline on your display. The Video2x and Cupscale outputs all have color shifts, but they do not cause damage like AVCLabs or DVDFab.

User-adjustable comparison available on imgsli. Use the drop-down menus to access different images. While all of the non-Topaz models cause large color shifts, Cupscale and Video2x at least deliver some degree of improvement. DVDFab and AVCLabs deliver nothing but sorrow.

As of this writing, neither AVCLabs nor DVDFab has built a video upscaler worth spending money on. If you decide to spend money on an upscaler after reading this article, Topaz Video Enhance AI is the one to buy. If you don’t think TVEAI is worth spending money on, there is no other paid application I’m aware of that remotely compares. The content samples provided throughout this article and in the appendices should help you make that decision.

There is another free application that I also tested: Video2x. It produced some reasonable results in Final Fantasy X that I’ve included on that video’s supplemental page, but it’s a bit tougher to use than Cupscale and somewhat less flexible. Overall, I think Cupscale gives a better sense of what modern AI can accomplish, but Video2x is also worth a look.

Note: This article focuses on Topaz Video Enhance AI 2.6.4 and Cupscale 1.390f1, with specific sample videos for AVCLabs (paid), DVDFab (paid), and Video2x (free). I’ve also made some intentional color changes during upscaling and will note where those were deliberate.

Table of Contents / Videos

What’s the Difference Between Cupscale and Topaz Video Enhance AI?

Cupscale and TVEAI are aimed at rather different audiences. Topaz VEAI very much wants to be an easy, one-click solution for improving videos. Unfortunately, the app cannot automagically determine which models and settings would give the best results for your specific video. As a result, the end-user must experiment to discover which model(s) produce the best results. The application has a preview function that’s quite useful for this.

The Topaz Video Enhance AI preview window.

Model choice isn’t the only thing that impacts your final output. Users also need to test the difference between 200 percent and 400 percent upscales. 400 percent upscales are not just bigger versions of 200 percent upscales. The subjective difference between 200 percent and 400 percent can be just as large as the difference between two different model families. I recommend testing 200 percent upscales before 400 percent because the 400 percent models are not as flexible at dealing with a wide range of content as the 200 percent models and they also take much longer to process.

200 percent (left) versus 400 percent upscale. User-adjustable comparison available on imgsli.

Topaz Video Enhance AI’s 200 percent models tend to maintain detail better and sometimes render more accurately than its 400 percent models. 400 percent models are sometimes better at smoothing and noise-removal. The 200 percent models also require less VRAM and render more quickly. This is not always the case with Cupscale, where individual model processing times sometimes vary to the point that a fast 4x model might outperform a slow 2x or even 1x model. If you’ve read earlier stories I’ve written, you may recall that I used to recommend TVEAI 400 percent models over 200 percent, but I’ve changed my approach as Topaz’s video upscaler has matured. I suggest starting with 200 percent models and testing 400 percent models as needed. Final resolution is much less important when upscaling than people think (the ‘Master Class’ page has more details on this).

The video below compares an unmodified clip of Battlestar Galactica with a clip I upscaled 200 percent in TVEAI using the Artemis High Quality (AHQ) preset. Longtime special effects artist Adam “Mojo” Leibowitz created updated VFX for this clip back in 2010 to show what a modern remaster of the show might look like. This video and a number of other comparisons are posted on the BSG dedicated page.

The comparison above represents the beginning of the improvements Topaz Video Enhance AI can deliver, not the end. The next video shows a Cupscale-created video rendered with the 4xUniscaleRestore model and using a version of this clip that was preprocessed in Hybrid with VapourSynth.

Topaz VEAI includes features Cupscale lacks, including the option to only upscale a specific subset of frames within a larger video. It supports at least one file type (.mov) that Cupscale doesn’t. TVEAI’s models create color shifts less often and the degree of change is typically smaller. Topaz Video Enhance AI’s user interface is also a bit easier to navigate, in my opinion.

Cupscale’s base installation includes several AI models, but the real strength of the application is its ability to run a plethora of user-created models available from pages like this one and scattered around the web. You can experiment with a much larger total range of models in Cupscale than you can in TVEAI, and installing them is as simple as copying them to the appropriate subdirectory. There are Cupscale-compatible models that can improve a video just as much as any paid application I’ve seen, including Topaz’s. The output below is color-shifted, but the overall level of detail improvement is reasonable for a single pass on the original video.

I had to install Python 3.9 to enable the Python AI network model; the included embedded version did not work with my GPU. If you follow this set of instructions closely you should be able to get it up and running. Python proved faster than NCNN, but not always dramatically. I did not test Cupscale on AMD or Intel GPUs, but both are supported through Vulkan.

I really like Cupscale — some of the videos in this article are TVEAI + Cupscale hybrids — but the application also has some downsides that being free doesn’t fix.

First, Cupscale is slower than Topaz Video Enhance AI. This is true whether you use the ESRGAN (Pytorch) network or the ESRGAN (NCNN) option. My understanding is that Pytorch is Nvidia-only and that AMD GPUs need to use Vulkan. I tested both options on an RTX 3080 and found Pytorch to be anywhere from roughly equal to 3x faster than Vulkan, depending on the model. My RTX 3080 was able to upscale Final Fantasy X’s “The Dance” CGI scene at roughly 41.5 frames per minute when using the ESRGAN (NCNN) AI network and the 2x_KemonoScale_v2 model. The fastest absolute performance I saw in any clip when using Pytorch was ~90 fpm. Unfortunately, even 90 fpm does not compare to Topaz. Topaz Video Enhance AI is anywhere from 4x – 10x faster than Cupscale. The performance gap is large enough to meaningfully improve your ability to check samples and proof content.

The difference between Cupscale and Topaz Video Enhance AI isn’t just a matter of speed. Topaz Video Enhance AI’s models collectively do a much better job dealing with content. I refer to a model’s ability to produce good results in a wide range of samples as flexibility. A model is flexible if you can apply it to many different kinds of video with reasonably good results and inflexible if it is limited to specific types of content.

Inflexibility is not automatically a bad thing. A number of TVEAI models are designed to deal with problems like haloing, and they don’t yield great results if applied to all content across the board. Unfortunately, many of the Cupscale-compatible models you find online aren’t very flexible.

A rather public example of inflexibility went viral earlier this year after the Oscars. After Will Smith slapped Chris Rock, a rumor made the rounds that Chris had been wearing a flesh-colored patch. He wasn’t. The rumor got started when someone fed a photo of the event into an upscaling service, which generated this:

A model that creates data like this is said to be hallucinating detail — in this case, generating a patch where none exists.

Technically, this is an example of the AI “hallucinating” detail that isn’t there, but it’s also an example of what I mean when I say a model is inflexible. An inflexible model might work well for space combat or special effects shots, but break when asked to handle humans. Alternatively, it might work beautifully with live-action up-close shots but create errors in backgrounds, as shown in the image below.

Jadzia looks pretty fine. Everybody and everything behind her, not so much. 4xBSRGAN, Cupscale.

This critique really only applies to models that are designed for general use cases. A model designed for animation might not handle live-action very well because it isn’t intended to. Inflexibility in this case would be an animation model that can only handle one specific show. Even then, that might not be a problem; I know several people working on show-specific AI models. Inflexibility is not always a bad thing, but it limits the scenarios in which a model can be used and may require the end-user to test more models to find one that works.

Topaz Video Enhance AI’s models sometimes break, but they break much less than any other application I have tested. The errors TVEAI occasionally introduces can almost always be reduced or fixed completely through the use of other software, provided two things are true: 1). Your video’s quality is high enough for it to upscale well, and 2). You’re willing to put some effort into fixing the problem.

As for Cupscale, the user-created AI models you can download and run are more likely to introduce or magnify the following effects, based on my testing:

  • Strobing (rapid light/dark shifts across large parts of the scene).
  • Texture flicker in fine details.
  • Significant color shifts.
  • Improperly rendered detail.
  • Improper changes in foreground/background image focus.
  • Inconsistent output quality from scene to scene.

Cupscale also occasionally has a problem with skipping and will periodically jump forward in a video and begin upscaling a new scene. Frames 0-750 might be followed by frames 11235 – 12570. I couldn’t figure out the reason behind this behavior, but extracting the video to individual frames first and telling the app to upscale those images seems to prevent the problem.
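If you run into the skipping problem, the workaround is to turn the clip into numbered image files and point Cupscale at the folder instead of the video. Here’s a minimal sketch that calls ffmpeg from Python; the paths are placeholders, and you can just as easily run the ffmpeg command directly from a terminal.

    import subprocess
    from pathlib import Path

    src = "episode_clip.mkv"          # placeholder input video
    out_dir = Path("frames_in")
    out_dir.mkdir(exist_ok=True)

    # Dump every frame as a numbered PNG so Cupscale upscales still images
    # instead of decoding the video itself.
    subprocess.run(["ffmpeg", "-i", src, str(out_dir / "%06d.png")], check=True)

    # After upscaling, the finished frames can be reassembled with ffmpeg's
    # image-sequence input (e.g. -framerate 23.976 -i frames_out/%06d.png)
    # and the original audio muxed back in.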

After investigating some 16 Cupscale models, I’d recommend 4x-UniRestore for live action. I also had good results with the Kemono family in both Final Fantasy X and live-action footage, apart from some significant and unexpected color shifts, as shown below.

User-adjustable comparison available on Imgsli. AviSynth competes well with Cupscale’s KemonoScale after one run and preserves color better. Running KemonoScaleLite_v1_Kiwami twice yields better detail than AviSynth, but a significant color shift, as shown here.

After extensive testing, I’ve found Cupscale is more likely than TVEAI to crash if you launch another GPU-accelerated application while already upscaling a video. In some cases, just launching a hardware-accelerated application can trigger a crash. Topaz Video Enhance AI used to have this problem but it’s much better behaved than it once was.

The fact that Cupscale models are more likely to break and cause a wider range of problems does not mean that no Cupscale-compatible model is capable of matching or even beating Topaz Video Enhance AI’s quality. I’ve been genuinely impressed with the 4x-UniRestore and Kemono models. The UniScale_CartoonRestore_Lite model is shown below, this time run on a video that had been preprocessed in AviSynth. Don’t worry if that sentence doesn’t make sense to you — we’ll discuss what AviSynth is a little farther on.

Unfortunately, Cupscale’s slower rendering speed and the relative inflexibility of its models means you’ll spend more time waiting for clips to render so you can check to see if the output is useful or not. Individual frames can be found in the “frames-out” subdirectory if you want to check mid-run output, but certain problems may not be visible until you see the clip in motion. Run enough video through enough models and you’ll develop a better sense of whether any given model will work for a piece of content… eventually. The same learning process happens more quickly in Topaz VEAI.

Topaz Video AI 2.6.4 is not perfect. It occasionally misunderstands aspect ratios or introduces small contrast changes. Despite these lingering growing pains, the application has matured since I last wrote about it and now supports Intel, AMD, and Nvidia GPUs, as well as both x86 and Apple Silicon Macs. The company has continued to improve its AI upscaling models and has added new frame rate interpolation models over the past year.

Here’s how I would summarize the comparison between TVEAI and Cupscale:

Cupscale is a lot of fun if you are an enthusiast who likes testing various models or you think you want to build your own someday. I said from the beginning that I wanted my Deep Space Nine project to be based on free software, and while Cupscale isn’t fast enough to handle a job that size quite yet, you can squint and see the day coming.  Cupscale is worth testing as an adjunct or additional option for upscaling content, especially if TVEAI isn’t yielding good results. The fact that the app is as flexible, powerful, and stable as it is says nothing but good things about its authors.

The clip below is a 50/50 blend between two Cupscale models. I discuss blending in more detail on the “Master Class” page, but it’s a great way to minimize errors while capturing the benefits upscaling can provide.

Topaz Video AI is the application to use if you have a project of any significant size, you need somewhat predictable results, and/or if you care about performance. TVEAI also requires experimentation, but it renders much more quickly and its models are less likely to break.

Now that we’ve discussed what AI upscaling is and some of the differences between the upscalers themselves, I’d like to switch gears a bit and talk about two important applications: AviSynth and VapourSynth. These applications are not upscalers themselves, but they often play a large role in determining an upscale’s final quality.

Table of Contents / Videos

The Impact of Pre-Processing

Topaz Video Enhance AI is an application focused on one task: Upscaling video content. Unfortunately, a lot of older videos need more than just a trip through an AI upscaler to look their best. While the Dione model family offers deinterlacing and the Proteus model features user-adjustable denoising, deringing, and sharpening options, deinterlacing and other general video editing tasks are not what Topaz VEAI or Cupscale are designed to do.

AviSynth (technically AviSynth+) and VapourSynth are frameservers with extensive video editing capabilities. Both applications can deinterlace and detelecine video, convert content from NTSC to PAL (or vice-versa), shift video from one container format and/or codec to another, and offer a wide range of repair and modification functions for virtually every type of content. There are filters for antialiasing, sharpening, smoothing, denoising, degraining, line darkening, line thinning, rainbow removal, fixing chroma errors, and more. A lot more. Below, I’ve embedded a video demonstrating the same BSG clip as earlier, only this time post-filtering.

Experiment with filters and you’ll often find a combination that results in a net improvement in your final upscale. For example, both AviSynth and VapourSynth can inject grain and noise in ways that improve TVEAI’s output. Cupscale models are also affected by adding grain and noise, but not in the same way. I am not certain the technique is as helpful when working with Cupscale, though the benefit would always be model-dependent. Readers should be aware that grain and noise injection do not always improve detail and some experimentation may be needed to find the right filter. QTGMC’s GrainRestore and NoiseRestore functions are a great place to start.
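Here’s a minimal VapourSynth sketch of the kind of pre-processing I’m describing. It assumes the ffms2 source plugin and the havsfunc Python module (which provides QTGMC) are installed; the file path, field order, and restore values are purely illustrative, not a recommendation for any particular show.

    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core

    # Load the interlaced source (placeholder path).
    clip = core.ffms2.Source("episode_source.mkv")

    # QTGMC deinterlaces and can re-inject grain and noise on the way out,
    # which Topaz Video Enhance AI often rewards with better detail retention.
    clip = haf.QTGMC(
        clip,
        Preset="Slower",
        TFF=True,          # assume a top-field-first source
        FPSDivisor=2,      # keep the original frame rate instead of doubling it
        GrainRestore=0.8,
        NoiseRestore=0.4,
    )

    clip.set_output()

The script is then fed to an encoder, or loaded through a front-end like Hybrid or StaxRip, to produce the file you hand to the upscaler.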

Here’s a still from Battlestar Galactica. It can be difficult to see the difference between processed and unprocessed footage in rapid motion, so this mirrored comparison should make it easier. Pull the slider right and left and you’ll see the images swap as you cross the center line. Processing the video first removes rainbowing and improves overall detail.

The video below shows the difference between upscaling the original Final Fantasy X clip with no processing in AviSynth versus upscaling it after AviSynth processing. The Final Fantasy X appendix page shows two different methods of processing this video in AviSynth and the different output each produces. This is the second method.

VapourSynth is a port of AviSynth that’s been rewritten to use Python. AviSynth scripting is said to be simpler if you aren’t a programmer; VapourSynth is more flexible but requires the end-user to learn more programming syntax. There are GUI front-ends that make using either an easier task. I use StaxRip as a front-end for AviSynth and Hybrid as a front-end for VapourSynth.

Again, you can typically use AviSynth or VapourSynth to process your video. I came across Hybrid after I had a workflow established in StaxRip, and I like the application, so I’ve kept using it. AviSynth is older than VapourSynth and so has more filters, but many common AVS filters have been ported to VS. There’s also a handy encyclopedia devoted to AviSynth if you want to know more.

For simplicity, I will sometimes refer to both applications as AVS/VS, but you probably don’t need to use both unless a specific filter is only available for one of them. I refer to running AVS/VS as “pre-processing” in this context because the video is being processed in preparation for upscaling rather than ordinary viewing.

If you want access to all the models TVEAI offers and/or you have telecined content that you want to revert back to a 23.976 fps progressive frame rate, you’ll need to use a third-party application. AviSynth and/or VapourSynth is typically the best way to go, but there’s definitely a learning curve associated with these programs. Handbrake is another option, and while it’s not my favorite app personally, it sometimes does a reasonable job. As far as paid applications go, TMPGEnc 7 did a good job detelecining shows like Deep Space Nine and Voyager when I tested it earlier this year. AviSynth and VapourSynth are both free, as is Handbrake. TMPGEnc 7 is a paid application, but it does have a 30-day free trial.
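For telecined film content specifically, the usual VapourSynth recipe is field matching plus decimation rather than deinterlacing. A minimal sketch using the VIVTC plugin (assumed to be installed; the path and field order are placeholders):

    import vapoursynth as vs

    core = vs.core

    # A 29.97 fps telecined source (placeholder path).
    clip = core.ffms2.Source("telecined_episode.mkv")

    # VFM matches fields back into the original progressive film frames,
    # then VDecimate drops the duplicate frame from each five-frame cycle,
    # returning the clip to ~23.976 fps progressive.
    matched = core.vivtc.VFM(clip, order=1)   # order=1 assumes top-field-first
    progressive = core.vivtc.VDecimate(matched)

    progressive.set_output()

Shows that mix film-rate and video-rate material, as most late-90s TV does, need more careful per-scene handling than this, which is part of why the learning curve exists.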

It is sometimes possible to use AviSynth and/or VapourSynth to repair a non-viable video to the point that it can benefit from upscaling, but this must be evaluated on a case-by-case basis. It may take longer to repair a video than you care to spend. In some cases, the filter choices you’d make to achieve a high-quality upscale are not the same as what you’d choose if you were planning to watch the video sans upscaling. Topaz often creates more detailed output if additional noise and grain are injected into the video, as shown below:

I originally published these frames in a Star Trek article earlier this year to demonstrate the impact of injecting extra grain and noise. I altered the color in these images to make the effect easier to see, and I applied more grain and noise than you’d probably want to use. You don’t have to inject this much grain and noise to see a benefit. This is important, as not all of these changes are beneficial ones.

Voyager, “Year of Hell.” You’ll need to click this one to see it, but injecting grain and noise substantially improves detail retention on the wreck. User-adjustable comparison on imgsli.com

Proper footage treatment can yield dramatic differences in final image quality. Both of the frames below were upscaled using the same Proteus settings in TVEAI. The only difference is how I pre-processed the footage. The settings I recommend testing in the appendices will not produce this strong an effect.

Harry Kim, with and without noise and grain preinjected before upscaling in Topaz Video Enhance AI. User-adjustable comparison available on imgsli.

The relationship between AviSynth, VapourSynth, and Topaz Video Enhance AI is best understood as complementary. It is often better to perform deinterlacing using the QTGMC filter in AviSynth or VapourSynth as opposed to using one of the Dione models in Topaz Video Enhance AI. Similarly, injecting grain and noise via the QTGMC filter often improves TVEAI’s final output.

Because an upscaler will enthusiastically enhance errors as well as desired detail, the only way to upscale some footage and wind up with a net improvement is to fix what’s broken first. Proper pre-processing is pivotal to the process.

Table of Contents / Videos

Is Your Video a Good Candidate for Upscaling?

Now that we’ve discussed the basic capabilities and differences of AI upscalers and their supporting applications, let’s talk about how to evaluate a video. Although the only way to know for certain how a video will look is to feed it through the upscaler, there are broad guidelines that can help you determine if your video is a good candidate. Videos that tend to upscale well are videos that:

  • Were professionally edited/mastered/captured by someone who knew what they were doing and who took the time to treat the material well.
  • Are either the original source video with the original noise/grain pattern or as close to that video as possible.
  • Have relatively few problems to start with.
  • Were shot in the relatively recent past, on relatively modern equipment.

Here’s what this means, in practical terms:

It is much easier to upscale footage of a sunny outdoor birthday party shot on a Samsung Galaxy S5 than it is to upscale footage of a sunny outdoor birthday party shot back in 1991 on a camcorder. A good outcome is fairly likely in the first case and highly unlikely in the second. The lower quality the source video, the less likely you are to get a satisfactory result, at all times, in all cases.

The video below was shot at 720p back in July 2010 on a Nikon D300S by professional photographer and colleague David Cardinal. As you can see, it cleans up pretty well.

There is a certain minimum quality threshold that a video needs to meet before an upscaler can improve it. If the video is below that threshold, it will require additional processing (at minimum) before it can benefit from upscaling. Sufficiently low-quality videos do not benefit from current common upscaling tools, no matter how they are processed first. Upscaling may make such footage look worse.

The last few questions have nothing to do with your video quality, but they’re arguably the most important when it comes to evaluating your chances of success:

  • How much time and energy are you willing to devote to this project?
  • Are you willing to learn to use other applications if they improve your video’s final quality?
  • Are you willing to test different methods of processing a video until you find the result you are looking for?

There are no wrong answers to these questions, but you need to consider them. I suggest taking advantage of Topaz Video Enhance AI’s free trial before buying the program. Test it on the actual content you want to upscale, if possible. The old joke about “Enhance” being able to extract infinite levels of detail from a fuzzy 32×32 pixel image is much closer to reality than it used to be, but we still don’t have anything like a push-button solution. Repairing damaged video to prep it for upscaling is often a fair bit of work.

Table of Contents / Videos

Why Is Upscaling Controversial?

Upscaling is not well-loved in certain corners of the video editing community. The reasons vary depending on the individual. Some people dislike Topaz Video Enhance AI specifically, some dislike the entire concept of AI upscaling, and some people are fine with the concept but unhappy with the way current products are marketed and/or what they can achieve.

Get “When Bad Footage Breaks Good Upscalers,” coming soon to VHS. AviSynth arguably wins here against Cupscale. Topaz Video Enhance AI’s 2x AHQ produces some particularly ugly errors. User-adjustable comparison available at Imgsli.

The big-picture problem is that AI upscaler output isn’t always very good. Sometimes, it’s downright bad. It’s possible for 38 minutes of a 43-minute video to upscale beautifully, while five minutes of it look as if they’ve been scraped through an MPEG-1 decoder and left out in the rain for a week. Final upscale quality can vary considerably, even within the same professionally produced episode of television. It doesn’t help that the entire field is new and people are still figuring out what works and what doesn’t.

Here’s an example of a video that doesn’t upscale very well if you just toss the original clip through Topaz Video Enhance AI:

What’s clear is that the back and forth online has left readers confused. I’ve heard from at least a half-dozen people who weren’t sure what to think about AI upscaling because my previous articles and sample clips showcasing Deep Space Nine and Voyager argued in favor of one conclusion, while knowledgeable video editors in online forums had made very different arguments.

I’ll take “Inappropriate behavior” for $500, Alex. Output courtesy of AVCLabs.

Instead of arguing over theory, I launched this project to explore the facts. I did not cherry-pick my sources for this story. I was asked to restore the Dick Tracy and BSG clips by their respective owners, the 320×240 Jack Russell video is one of the oldest and lowest-resolution videos I had sitting around, and I chose the Final Fantasy X video on a whim after seeing another restoration project on YouTube. I asked David Cardinal for an older video that would let me test upscaling on 720p content, but he picked the video to share.

Take the same video clip above, put it through several AI models and different processing workflows, and the end result is different. While the degree of improvement is not enormous, the upscaler no longer causes damage and can even be said to improve the content somewhat.

The general video editing community is well aware of the myriad ways AI upscaling can go wrong, but there’s less awareness of the ways that AI upscaling can be nudged into going right, even if it wasn’t headed that way to start with.

Table of Contents / Videos

How Much Improvement Is It Possible to Get?

This is a difficult question to answer. What people want to hear is something specific, like: “Easily upscale DVDs to 1080p HD quality!” Some companies make claims like this in their marketing literature. They shouldn’t. This is a rare situation in which speaking literally can leave people with the wrong impression. When I say that improving DVD footage to the point that it looks like native 720p is difficult, I do not mean the literal shift from 720×480 to 1280×960. I mean the difficulty of raising a DVD’s perceived quality high enough that it could be mistaken for a native 720p source.

Those who wish to try and fully bridge the gap between SD and HD will find themselves on the hook for more than a quick single pass through any video-editing application, including Topaz Video Enhance AI or Cupscale. One reason I’ve included so many samples in this article is to illustrate not just how much video can be improved, but where those improvements come from and how much work it takes to get them.

As far as a rough guide is concerned, here’s what I can offer:

Imagine a quality scale with notches at the usual points – VHS, DVD, 720p, 1080p, 4K. Now, add four notches between each standard. VHS —*—*—*—*—DVD. These points represent dimensionless intermediate improvements in image quality that are large enough for you to notice, but not large enough to declare that your video looks like it was shot in a later standard. Four is an arbitrary number I chose to make my example math work. An upscaled DVD that improved by three points would look like an excellent DVD, but it wouldn’t quite fool a knowledgeable viewer into thinking they were watching 720p or 1080p-native source. This four-point scale works best for VHS -> DVD or DVD -> 720p. There would probably only be 2-3 points of “quality” between 720p and 1080p.

If you ask Topaz Video Enhance AI to upscale the equivalent of a high-quality DVD source file with no pre-processing, you might reasonably expect a gain of 1-2 points. If your source is of middling quality, you might get 0.5 – 1. Low-quality source, and you’ll get anywhere from -3 to +1 points at most. I went negative here to acknowledge that upscaling poor-quality footage can reduce its quality.

The point of pre-processing, model blending, and other application workflows I discuss in the “Master Class” section is to increase the quality of your final upscale. Put your footage through AviSynth or VapourSynth first, and you might step up 1.25 – 2.5 points instead of 1-2. Are you willing to blend multiple upscaling models? Add another 0.5 – 1 points, depending on the condition of your source and the quality of your pre-processing. Are you willing to experiment with different pre-processing methods and with filters in utilities beyond AVS/VS, and to blend all of that video together, possibly with more than one trip through more than one upscaler? If you do, and your source footage is of sufficiently high quality, you may be able to gain enough perceived visual quality to pass as a native source of the next-highest standard. Even then, your video probably won’t maintain this illusion perfectly in every scene, and the overall quality will still dip in wide shots with a lot of people in them. Your chances of achieving this level of improvement are better if you know how to color grade.

An example of my own work that I would argue meets the “could pass for native 720p” threshold is below. While I did not cherry-pick this episode or this scene, I would say that “Trials and Tribble-ations” is one of the better-looking episodes of Deep Space Nine to start with.

I suggest focusing less on whether or not your upscale would pass as native HD and more on whether or not you like the output. Some TV shows never received high-quality releases and it is very difficult to compensate for a low-quality source. Raising the perceived visual quality of an upscale gets easier the higher your base resolution and overall quality. 720p is easier to convert to 1080p than DVD to 720p. DVD to ~720p is easier than boosting VHS to DVD-equivalent quality.

There is no consumer AI upscaling application that can take DVD footage and transform it into 4K-equivalent. 1080p-equivalent might be possible with literally ideal footage and a master video editor + color grader who knew every aspect of every application in their workflow. Anyone who claims they can upscale Deep Space Nine, Voyager, or any other late 1990s DVD-based show to 4K is misrepresenting the scope of what’s achievable by pretending a literal 4K resolution will deliver 4K-equivalent quality.

Table of Contents / Videos

Conclusion: The Current State of Consumer AI Upscaling Software

Right now, the only paid application worth considering is Topaz Video Enhance AI. None of the other paid apps we tested were fast enough and they all produced broken output. TVEAI is far more capable, but readers should be aware that older, lower-quality footage is much harder to improve. A lot of professionally produced DVD footage from the mid-1990s is still marginal and requires extra work to bring it to best quality.

Cupscale’s speed is good enough for small clips but a bit painful for professional use. At its best, its quality can rival or even surpass Topaz Video Enhance AI, but finding the right models is a slower affair and the app does not play nice with other GPU applications. If you do not have the money to spend on Topaz but you want to experiment with AI, I highly recommend Cupscale. I would’ve been thrilled to see an app this good two years ago, and I expect it will continue to improve over time.

Neither TVEAI nor Cupscale is magic. They don’t – and can’t – replace the Mark I Human Eyeball or the need for some good old-fashioned experimentation. Neither application can ingest a DVD and magically spit out a native 4K-equivalent video. That doesn’t mean upscaling can’t work wonders – it just means reality continues to demand a troublesome amount of actual work and expertise that science-fiction TV shows often manage to skip.

Personally? I find the level of achievable improvement astonishing. The DVD release of SG-1 can be repaired and improved to the point that it rivals or surpasses the Blu-ray. Under ideal circumstances, shows like Deep Space Nine and Voyager can be enormously improved over their current states. Proper handling allows more marginal videos to be recovered. Upscaling is not the right tool for every job, but it shares that distinction with every video filter and application ever created.

If you’ve looked through the results I’ve shared and don’t find them particularly interesting, check back in two years and see how things have changed. I’ve watched upscalers become far more capable in just the past two years and I don’t see any reason to think improvements are going to stall. GPUs from AMD and Nvidia continue to improve at a rapid clip. AMD and Intel will continue to improve on-chip AI performance through some combination of SIMD instruction support, integrated GPU capabilities, and specialized, on-chip accelerators. If your favorite TV show is languishing on a mediocre DVD transfer, that’s a static workload that isn’t going anywhere while software and hardware continue improving at a rapid pace. In 5 years we ought to have real-time application-level AI acceleration that delivers better quality than Cupscale and TVEAI do today. By then, the performance of non-real-time applications will have leaped ahead as well.

When I started this project in 2020, top-end performance was ~0.44s per frame. 2.5 years later, top-end performance is more like 0.09s/frame for the same DVD source material. Real-time at 23.976 fps requires ~0.04s/frame with a little room for overhead. It’s not crazy to think GPUs might be able to deliver this kind of uplift in the not-so-distant future, given how quickly performance has improved thus far.

If you would like to see more samples from the videos above, you can use the links below to access some video-specific pages. There are additional samples and footage, plus more information on how each video was processed. If you’d like to read more about advanced AI processing and how to squeeze every last detail out of your video, click here.


AI Master Class: How to Maximize Uplift

If you’re here, I’m assuming it’s because you are looking for some additional ideas to improve your own upscale quality. I touched on some of this on the first page of the article but didn’t go into much detail.

The key to improving your upscale quality is thinking about video a bit differently. Most people instinctively think of AI upscaling as a process in which you take one (1) video, drop it into one program, choose one model, and get one output. The video processing methods that I’ve developed for the Star Trek Deep Space Nine and Voyager Upscale Project go well beyond this. They are designed to avoid the visual problems and errata that leave people with a bad opinion of upscaling. Pre-processing helps, as previously discussed, but pre-processing a video is just one tactic for improving overall quality. Blending multiple outputs together to create a new composite video is the next step.

Blending is the process of upscaling a video multiple times using different models, then combining those outputs together in an application like DaVinci Resolve or Adobe Premiere. This process creates a new composite video with the blended characteristics of its contributing sources.
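Under the hood, stacking layers at partial opacity is just a weighted average of the frames. The sketch below shows the math on a single frame with NumPy and Pillow; the file names are placeholders, and in practice the blend is done on entire videos inside Resolve or Premiere rather than frame by frame in Python.

    import numpy as np
    from PIL import Image

    def blend(path_top, path_bottom, top_opacity=0.33):
        """Composite one upscaled frame over another at the given opacity."""
        top = np.asarray(Image.open(path_top)).astype(np.float32)
        bottom = np.asarray(Image.open(path_bottom)).astype(np.float32)
        mixed = top * top_opacity + bottom * (1.0 - top_opacity)
        return Image.fromarray(np.clip(mixed, 0, 255).astype(np.uint8))

    # e.g. a Proteus upscale at 33 percent opacity over an Artemis HQ upscale:
    # blend("frame_proteus.png", "frame_artemis_hq.png", 0.33).save("frame_blend.png")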

Blending in DaVinci Resolve Studio 17. Note the “33 percent” opacity setting to the right.

The image above shows this creation process in DaVinci Resolve. By varying the opacity of each layer, I can change the characteristics of the final video. The opacity of each layer affects the visibility of the layers underneath. Blending videos together like this is a great way to improve the overall consistency of your upscale quality across scene boundaries and in any footage with a mixture of distance and close-up shots. Blending does not always improve your top-drawer quality; it’s more reliably useful as a way of lifting the bottom of the barrel. It’s always a balancing act: pushing your best-looking scenes as hard as you can without breaking the footage that doesn’t upscale as well. This next comparison shows the impact of filtering the Deep Space Nine episode “The Way of the Warrior” in Cupscale instead of AviSynth:

User-adjustable comparison available on Imgsli. Cupscale also responds to AviSynth pre-processing, but not in the same way as Topaz Video Enhance AI.

No current single model in any application can be trusted to perform optimally across 20-40 minutes of footage. This is not to say that every model will create visual errors, but if you check scene by scene, you’d likely find some where you’d prefer Model A and some where you’d prefer Model B. Blending various outputs together helps avoid this problem, provided you choose your models well. I have advocated for future versions of TVEAI to offer blended output options by default, so users do not have to perform this step manually with the aid of a third-party application. For now, however, this remains a manual process.

While blending outputs can make good-looking scenes even better, I primarily use it to improve low-quality scenes, not to extract higher and higher levels of detail from already good ones. It works because two things are true:

1). Different models treat the same content differently.
2). By pre-processing video differently, you can change what any given AI model sees and emphasizes.

You can take advantage of these two facts to create virtually any kind of output you like. Here’s a set of four frames upscaled from two different sources in both Cupscale and TVEAI. Note that both Cupscale and TVEAI are impacted by pre-processing the video, as we’ve previously stated. What’s fascinating is that the two outputs are affected very differently. Cupscale’s Kemono_ScaleLite_v1_Kiwami model treats pre-processing in Hybrid as a reason to create a much smoother video with less noise and weaker lines. Topaz doesn’t do nearly as much additional denoising when given the filtered file, but it does apply an entirely different level of sharpening across the scene. Preprocessing strengthens line detail in Artemis High Quality and weakens it in v1_Kiwami.

With blending, you can take advantage of these differences to precision-target how you want your final videos to look. Is your final upscale too strong? Drop the original video back on top at 15-25 percent opacity. Is it over-sharpened in a few places beyond what the original video can help you diminish? Spin up a custom set of filters in AVS/VS to deliberately smooth and soften your source file. Upscale this new  version of your video and drop it on top of the stack.

Deinterlace With AVS/VS, Not TVEAI

Detelecine/deinterlace in either AviSynth or VapourSynth, not Topaz VEAI. The QTGMC filter available in both AviSynth and VapourSynth is a great deinterlacer and image processor. Generally speaking, older videos often deinterlace / detelecine better in AviSynth and VapourSynth than in paid applications like DaVinci Resolve or Adobe Premiere. AviSynth and VapourSynth are also better at handling DVD footage from television shows that mixed both 23.976 fps and 29.97 fps content — which is pretty much all late-90s sci-fi and fantasy TV.

I also recommend allowing TVEAI to handle sharpening in most cases; you’ll get less ringing that way. If you use QTGMC, I recommend experimenting with the NoiseRestore and GrainRestore functions at settings of 0.25 – 1.25, depending on your content. TVEAI tends to perform better when grain and noise are injected, and it likes QTGMC’s injected grain and noise better than filters like GrainFactory or AddGrainC. EZGrain can also be useful.

Blend Various Outputs (and Possibly Inputs)

The blending method I’ve discussed so far starts with one video and creates multiple upscaled outputs in a single application. These outputs are then blended according to user preference in a video editing application like Resolve or Premiere. This is the most straightforward way to blend, but it’s not the only way. There are three other methods I’d like to briefly touch on. These other types of blending can be used in conjunction with the method I’ve already described, provided you are careful about keeping motion and aspect ratios synchronized between applications.

Test different AviSynth or VapourSynth filters to find pleasing options that emphasize different desirable characteristics of your video. Imagine you’ve got two videos you’ve already put through AVS/VS. Video A emphasizes noise and grain by adding both via QTGMC. Video B applies antialiasing and derainbowing filters. Assume you’ve tested and found that creating a new composite video from two or more differently-tuned outputs results in a better final upscale than applying the same set of filters against a single clip. This isn’t a hypothetical — I’ve seen it happen.

You might choose to run Video A through a smoothing model like Artemis Medium Quality or Artemis Low Quality, while Video B is upscaled with Artemis High Quality. Alternately, you might combine Video A and Video B using DaVinci Resolve before even touching an upscaler. Then, you’d test your new Video AB against various Cupscale or TVEAI models to see which you preferred. In some cases, blending pre-processed inputs together can lessen/avoid the need to upscale in multiple models later on. This may be an option you prefer if you have a strong CPU but a relatively weak GPU, since it emphasizes blending lower resolution video and puts less strain on the graphics card. You will still have the option to upscale the video in multiple models and perform a second blending step if you wish to do so.

If you choose to test this method, I recommend using an intermediate codec like DNxHR, ProRes, or Cineform when you blend your AviSynth / VapourSynth outputs together. Your video will retain more data and your upscale is likely to be slightly higher quality. Note: If you go this route, I strongly suggest not changing anything in your deinterlacing or detelecine filter calls without testing the change first to see if it creates motion errors.
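For reference, here is one way to write a blended master to an intermediate codec with ffmpeg before it ever touches an upscaler. This sketch targets DNxHR HQ; ProRes works the same way with -c:v prores_ks. The file names are placeholders and the exact settings should be tuned to your source.

    import subprocess

    # Encode the blended master to DNxHR HQ in an MOV container so as little
    # detail as possible is thrown away before upscaling (placeholder names).
    subprocess.run(
        [
            "ffmpeg", "-i", "blended_master.mkv",
            "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",
            "-pix_fmt", "yuv422p",
            "-c:a", "copy",
            "intermediate_dnxhr.mov",
        ],
        check=True,
    )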

Upscale your video once in Topaz and once in Cupscale before combining the outputs together instead of relying solely on one or the other. Be advised that this may not work seamlessly with every video  — AI upscalers do not always treat aspect ratios properly — but it can yield good results.

Process your video in entirely different video editing applications prior to upscaling. As an example: You might process one version of your video in AviSynth or VapourSynth and one in an application like TMPGenc, Resolve, Premiere, Handbrake, or any other video editing app with a specific set of features you presumably wish to take advantage of.

This is the trickiest method of blending, with the highest chance of failure. Different applications make different default assumptions about how the same piece of content should be (re)sized at import. While it’s often possible to harmonize settings between applications to ensure identical crops and aspect ratios across non-identical workflows, it may take some additional experimenting to find the right settings. It might seem like this is a simple thing, easily controlled for. It is not. Seasoned video editors may have little trouble, but new folks — and I was brand-new at the start of this — will need to do some experimenting to figure out how different applications treat the same video.

The advantage of blending multiple videos from different workflows is that it gives you the highest chance of finding an idealized source file for maximum quality when upscaling a single source in Topaz Video Enhance AI. The disadvantage is that you are deliberately creating a fragile, prone-to-break workflow and gambling that you can deal with any issues related to aspect ratios, color shifts, flickering, or other problems introduced by differences in filters and/or upscaling applications. You are also betting that merging multiple AVS/VS-processed files will not cause motion errors and that you can eyeball the various model-blending options to create a whole greater than the sum of its parts. While I’ve used this strategy successfully, there are usually other ways to improve image quality that don’t require going to such lengths.

Don’t be afraid to test running content through TVEAI more than once, especially if you are starting from a low resolution. You may also want to experiment with running Topaz Video Enhance AI in 100 percent output mode. In this mode, the model applies certain effects to the video but does not resize it. If you run a video through TVEAI more than once, I recommend experimenting with different models rather than sticking to the same one. This trick can also work in Cupscale, but the effects are even harder to predict.

Another option worth exploring is running videos through Cupscale after you’ve run them through TVEAI (or vice-versa). Model output can change dramatically if your source has already been through an upscaler. My own favorite FFX video output is a combination of TVEAI and Cupscale output.

Mixing videos created from different pre-processed versions of your source becomes riskier the farther apart the video workflows diverge. Sometimes different versions of a video processed with different AVS/VS filter settings will have perfectly aligned motion through 99 percent of the clip, but a specific rapid-movement sequence will come out slightly differently in one video versus the other. Either output might be fine on its own, but blending them together creates an ugly, ghosted scene. This is more likely to occur if you are experimenting with different deinterlacers, different deinterlacing settings within the same filter, or testing the various “InputType” settings in QTGMC. (InputType=0 doubles your frame rate and is intended for interlaced content; InputTypes 1, 2, and 3 are used for progressive content. More details available here.)
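
For reference, here’s a hedged sketch of what that QTGMC call looks like in VapourSynth via the havsfunc port. It assumes havsfunc and its plugin dependencies are installed; the filename and preset are placeholders.

import vapoursynth as vs
import havsfunc as haf
core = vs.core

clip = core.ffms2.Source("episode_dvd_source.mkv")   # hypothetical interlaced DVD source

# InputType=0 performs true deinterlacing and doubles the frame rate; TFF=True means top field first
deinterlaced = haf.QTGMC(clip, Preset="Slower", InputType=0, TFF=True)

# For progressive material with residual combing, you would use InputType=1 (or 2/3) instead
deinterlaced.set_output()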

Similarly, attempting to combine the output of two different upscalers can run into problems if one of them changes the content aspect ratio and/or crop and the other upscaler doesn’t. It is trivial to keep the aspect ratio and crop of two videos identical if they began the process at identical resolutions and have both been through the exact same workflow. Change the applications, output codecs, and processing steps you use, and all bets are off. Sometimes, things will come together perfectly. Sometimes, they very much won’t.

There is one more way of improving your upscale quality that I haven’t yet mentioned. Instead of mucking around with rendering the same video 2-5x and combining all the outputs together, you could instead choose to upscale Frames 0 – 150 with Artemis High Quality, Frames 151 – 467 with Artemis Low Quality, Frames 468 – 751 with Proteus v3, and so on. The reason I don’t recommend this method is that it requires you to test scene-by-scene to find the best model. It also assumes that there is a single best model for each scene.

If you only need to upscale a few thousand frames, you might not mind checking and testing a different model for each scene, but the idea of doing this for 14 seasons of TV across two shows left me wanting to die, so I opted for blending. Both blending and scene-by-scene model-checking take far more time than a single pass, but if I’m rendering out the same video in multiple models, I can leave the PC to do the work.
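
If you did want to go the per-scene route, the stitching step itself is simple once you have each model’s output on disk. A hedged VapourSynth sketch using the frame ranges above (filenames hypothetical, ffms2 source plugin assumed, and all three clips must share resolution, frame rate, and format):

import vapoursynth as vs
core = vs.core

ahq = core.ffms2.Source("upscale_artemis_hq.mov")     # rendered in Artemis High Quality
alq = core.ffms2.Source("upscale_artemis_lq.mov")     # rendered in Artemis Low Quality
prot = core.ffms2.Source("upscale_proteus_v3.mov")    # rendered in Proteus v3

# Python slicing selects frame ranges; "+" splices them back together in order
final = ahq[0:151] + alq[151:468] + prot[468:752]
final.set_output()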

How I Choose Models, Modifications, and Output Settings

Mixing applications, models, and filter settings allows the end user to target virtually any desired “look” for a video, provided that the clip meets the necessary quality requirements. I pick models based on the characteristics of the input. I often lean on Gaia HQ for noise and grain, plus one or two of the Artemis models. Dione Robust (the non-frame-doubling version) can also be a good choice. The Artemis model family is often best if you want a single-model upscale without going to the trouble of blending output. The dehaloing models have more specialized uses; they often look good in specific scenes but will cause heavy blurring in others.

Proteus at default settings can be used as a stand-in for Gaia HQ if you want a similar output with slightly different characteristics. Proteus at settings like 40-40-0-0-0-20 (200% upscale) will often clarify and denoise content in a pleasing way, but the effect may be too strong when used as a single model. Test it before assuming. If you don’t like the results at 40-40-0-0-0-20, try -20 for the final value instead.

Ultimately, I pick my models based on the characteristics I want the final video to possess. Sometimes I’ll encode a noisier, grainier version of a video because I know I’m going to use a model with strong denoising, and injecting the additional grain and noise beforehand improves final output quality. In some cases, I’ve poured in noise by the bucket and achieved very pleasing results, even though this isn’t a normal way to process video intended for human viewing.
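
As an illustration of that pre-noising trick, here’s a hedged VapourSynth sketch that deliberately adds grain before handing the clip to a strong denoising model. It assumes the AddGrain plugin is installed; the strength values and filename are placeholders to experiment with, not recommendations.

import vapoursynth as vs
core = vs.core

clip = core.ffms2.Source("pre_processed_episode.mkv")   # hypothetical pre-processed source

# var sets luma grain strength, uvar sets chroma grain; small values go a long way
grained = core.grain.Add(clip, var=2.0, uvar=0.5)
grained.set_output()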

I might choose a noisy, grainy output; a clear, sharper output (provided this does not introduce ringing); and a smooth, slightly blurred output. To accomplish this, I’d try the following models first:

For Noise/Grain: GaiaHQ, followed by Proteus (all 0’s) or sometimes whatever Proteus’ defaults are for a given clip. GaiaHQ almost always leaves clips noisier than they were before processing.

For Clarity / Sharpness: Proteus v3, at settings similar to 40-40-0-0-0-20 @ 200%. Proteus is my go-to here, but every now and then Dione Robust or AHQ will surprise me.

For Smoothness and Repair: Honestly, you’ve got a lot of options here. Artemis Medium Quality, Artemis Low Quality, Artemis DeHalo, and Artemis Strong Dehalo all apply various levels of smoothing and noise removal. I typically check the Artemis models first unless I know I need one of the others. Artemis Anti-Aliasing and Moire is the mildest of the Artemis filters and usually makes the fewest changes to the video while upscaling it. It can provide a nice polish on top of a blended video.

I chose these three characteristics to show how each video can compensate for weaknesses in the other two. Did you end up with more noise than you wanted? Lean on the sharp, clear model. Is your output a little too sharp or does it have some damage poking through? Add a dash of your chosen smoothing model. Does the final blended video look a little too over-processed? Drop the original source video on top of your stack at 10-20 percent opacity, as previously mentioned. While many of these options must be deployed judiciously, careful testing of what you blend and how you blend it can yield surprising improvements.

By the time I create a final episode of Star Trek: Deep Space Nine or Star Trek: Voyager, I’ve rendered it out at multiple resolutions and with a mix of models. To avoid data loss, tell Topaz Video Enhance AI to output as either a TIF or a PNG. If you are using Resolve, be advised that codecs like DNxHR and CineForm will create much higher-quality intermediate files than Resolve’s native H.264 encoder. You do not need to use these options if you are only upscaling footage once, but if you intend to run it through TVEAI multiple times I recommend using intermediate codecs to do it.
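
If you do output a TIF/PNG image sequence from Topaz, you can wrap it back into an intermediate codec with ffmpeg. The sketch below (run from Python for convenience) assumes a hypothetical frame-name pattern and frame rate; adjust both to match what the upscaler actually wrote out.

import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "24000/1001",      # match your source frame rate
        "-i", "upscaled_%06d.png",       # hypothetical image-sequence pattern
        "-c:v", "prores_ks",             # ffmpeg's CPU ProRes encoder
        "-profile:v", "3",               # profile 3 = ProRes 422 HQ
        "-pix_fmt", "yuv422p10le",       # 10-bit 4:2:2
        "upscaled_intermediate.mov",
    ],
    check=True,
)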

There are a few final ways you can increase performance and upscale quality when working with applications like Cupscale or TVEAI. Let’s turn our attention to them.

Final Tips and Tricks

If you are seeing slow performance in Topaz Video Enhance AI, open the Preferences menu and try allocating more VRAM to the application. This will reduce available VRAM for other tasks, but it may improve TVEAI performance. This is especially true if you are attempting to upscale anything with a base resolution of 720p or above.

For best performance, run multiple instances of Topaz Video Enhance AI at the same time. This may not work if your GPU is VRAM-limited (4GB or below), but if you are upscaling native 480p footage and have at least 6GB of VRAM you might be able to run at least two application instances. Larger GPUs with more VRAM can run more (I’ve successfully used up to four simultaneously with a net speed gain compared to three).

This is obviously helpful when you are planning to blend output, but it’s also helpful when you only need a single output, provided you don’t mind outputting to image sequences. Assign Instance #1 to Frames 0 – 15,000, Instance #2 to Frames 15,001 – 30,000, and so on, until you’ve evenly split the workload across your GPU. If you have sufficient VRAM, Topaz Video Enhance AI runs two instances nearly as quickly as it runs one, and I’ve seen net performance improvements up to four instances total. This trick does not seem to work as well for Cupscale. While it is possible to run multiple application instances, Cupscale performance does not appear to improve from running in parallel.
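
If you want to sanity-check the frame math before entering ranges in each instance, a tiny helper like the sketch below does the splitting. The frame total and instance count here are hypothetical; the ranges themselves are still set inside TVEAI’s own interface.

def split_frame_ranges(total_frames, instances):
    # Return inclusive (start, end) frame pairs, one per instance
    per_instance = total_frames // instances
    ranges = []
    start = 0
    for i in range(instances):
        end = total_frames - 1 if i == instances - 1 else start + per_instance - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

print(split_frame_ranges(60_000, 4))
# [(0, 14999), (15000, 29999), (30000, 44999), (45000, 59999)]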

Do not assume that larger files = better output or that higher resolution is automatically better. This is a tricky point because we all grow up thinking that higher resolution = better files. This is not the case when upscaling. Because 200 percent and 400 percent models produce different outputs and 200 percent models are typically better, targeting 400 percent instead of 200 percent may result in a 2560×1920 upscale that looks much worse than a 1280×960 version of the same content. You may be better off targeting a 2x upscale and then resizing in a conventional video editor if you want 4K output (or whatever your target resolution is).
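
Here’s what that last conventional step might look like in VapourSynth; a standard resampler such as Lanczos or Spline36 handles the jump from the 2x AI output to your delivery resolution. The filename and target dimensions are hypothetical.

import vapoursynth as vs
core = vs.core

clip = core.ffms2.Source("upscaled_2x_output.mov")   # the 200 percent model output

# Conventional Lanczos resample up to UHD; Spline36 is another common choice
uhd = core.resize.Lanczos(clip, width=3840, height=2160)
uhd.set_output()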

Experiment with codec settings when exporting output, whether in AviSynth and VapourSynth or in an application like Resolve. Topaz responds differently to different codecs. Also, be aware that GPU encoders typically produce worse output than CPU encoders because they prioritize speed over quality. In some cases, you may be able to mitigate these differences by adjusting codec output options, but some applications offer limited control over output quality.

Test sharpening and adding grain and noise in other applications, not just AviSynth or VapourSynth via QTGMC. Testing has demonstrated that how you manipulate footage in other applications can increase or decrease final upscale quality. Don’t be afraid to experiment.

I recommend telling Topaz to output images rather than videos. If you choose to output to video, do not attempt to retain existing audio. Audio can be re-attached later in programs like MKVToolnix and DaVinci Resolve. If you do want to output to video, Apple ProRes is a good choice for maintaining high quality. Future versions of Topaz will support GPU-assisted encoding, but you may need to change the default codec settings to avoid a quality loss compared to lossless image output or a codec like ProRes.
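
Re-attaching the original audio can be as simple as a stream-copy remux. Here’s a hedged example using ffmpeg (filenames hypothetical; mkvmerge from MKVToolnix accomplishes the same thing):

import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "upscaled_video.mov",    # new video, no audio
        "-i", "original_source.mkv",   # original file carrying the audio you want to keep
        "-map", "0:v:0",               # take the video stream from the first input
        "-map", "1:a:0",               # take the audio stream from the second input
        "-c", "copy",                  # stream copy, no re-encode
        "final_with_audio.mkv",
    ],
    check=True,
)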

It can also be useful to reduce your video resolution when running a video through TVEAI multiple times, though this does not always yield good results. If you wind up with over-processed footage, drop your source video back on top at 15-25 percent opacity.

If you use Cupscale and Resolve, be advised that Resolve 17 often seems to dislike how other applications attach audio and will sometimes crash rather than deal with it. Removing the audio from these files will solve the problem.

Applications that run on your graphics card are often bad at sharing it with other programs, and apps that rely on GPU hardware acceleration are not always well-behaved. Even when they don’t crash, closing Photoshop can meaningfully improve Resolve’s performance and vice-versa; GPU-accelerated applications often run faster when they are the only GPU-using program open, even if the others are sitting idle in the background. The more VRAM your GPU has, the easier time you’ll have running multiple image and video processing applications simultaneously. Try shutting down other GPU-using applications if you are having performance problems.

While I do not present formal benchmark figures in this article, my tests on an AMD RX 6800 XT suggest this GPU is roughly as fast as the RTX 3080 while offering an additional 6GB of VRAM. The one downside to the AMD GPU that I could find is that it will reboot your system if you attempt to run multiple application instances while simultaneously upscaling in Gaia-HQ. If you don’t use Gaia-HQ, you can run 3-4 instances of TVEAI simultaneously without a problem. This is an excellent way to boost performance, provided you are writing to an SSD and not an HDD. Hard drives may struggle to keep up.

Understand the Limits of the Software

The last point I want to leave you with is this: Despite everything I’ve said about improving final output, there are scenes that resist improvement. Some frames look bad no matter what, and putting them through an upscaler results in nothing but more-noticeable damage. Upscaling can greatly improve perceived image quality under the right circumstances, but there are limits to how much improvement applications like TVEAI can offer. Close-up shots sourced from DVD footage clean up beautifully. Distance shots? Not so much.

The problem here is straightforward. A DVD frame doesn’t contain much information to start with, but in a close-up where someone’s face takes up 60 percent of the frame, that’s ~203,000 pixels worth of data for an upscaler to use. In a wide-angle shot of an entire room, each individual might be represented by a few thousand or even just a few hundred pixels. That’s not much information to start with.

Most TV shows are a mixture of wide-angle and close facial shots, and that means occasionally ping-ponging between frames that could rival a Blu-ray restoration and frames that look as if someone scraped them out of a smoker’s moldering VCR.

I have spent years developing techniques for minimizing bad output from upscaling, but there are still frames and scenes that do not look great. Deep Space Nine Seasons 1 & 2 are in much worse shape than Seasons 4-7, particularly the special effects. Sometimes, leaving some blur in a video is the right way to minimize damage that upscaling makes painfully visible.

I would not have spent so much time on Deep Space Nine and Voyager if I felt the quality uplift wasn’t worth the effort, but it’s important to be realistic about just how much improvement can be extracted from DVD-quality frames. There are visible problems and even some motion errors baked into certain episodes. Some of those problems can’t truly be fixed, only ameliorated, and some repair techniques have side effects that the viewer may not like.

Even well-processed content sometimes looks better at TV viewing distance (5-7 feet) as opposed to typical PC viewing distance (24 – 36 inches).

Chance v. Stream

Source History: This is a video of a Jack Russell terrier I once owned playing in / declaring war against a small stream of water that ran outside my apartment after it rained. This dog would lie about needing to go to the bathroom whenever it rained, just to get his paws into that stream.

Original Resolution: 320×240

Output Resolution: 1280×960

Included Outputs:

Processing Notes

I processed this file by upscaling both the original version of the video and a version I created using VapourSynth (via Hybrid). Upscales based on both outputs were then blended into a single new video. This new video – “Gen 1” in my parlance – went through four more processing cycles. I increment the generation counter every time I use Resolve Studio to create a new blended output.

Although I typically blend the videos of each generation together, I will occasionally grab either a specific model from a previous generation, up to and including the original output. In this case, I also blended the VapourSynth-created output back into the upscales at several points. This video is nine seconds long, so it didn’t actually take much time to upscale on an RTX 3080, but I did wrangle 16 different versions of the clip across five generations to create the final output.

One thing I want to specifically draw attention to is the final quality of the output video. While it’s a definite improvement over the original, it scarcely looks HD. At best, I’d argue the quality has gone from “Bad VCR tape” to “Much nicer-looking VCR tape.”


I am wary of recommending TVEAI or any AI upscaler for repairing content that starts off this low quality. The lower quality the source file, the less total improvement you’ll achieve. In some cases, no improvement will be achievable. If this happens, don’t give up. Applications like AviSynth and VapourSynth can sometimes fix problems that AI upscaling currently doesn’t deal with very well.

Why I Chose This Video

I chose this video for two reasons. First, my dog Chance was ridiculously aggressive about attacking running water. Had he ever glimpsed Niagara Falls, I suspect he would’ve died of fury on the spot. I don’t mind sharing the chuckle. Second, it gives me a good way to demonstrate what happens when a single-model upscale goes wrong. The result is ugly, error-filled content that looks worse than the original video.

Even after careful tuning, it’s not clear that the upscaled video is much better than the non-upscaled video. It would’ve been a better use of time to lean on AviSynth alone.

The Dance: Final Fantasy X (TVEAI)

Source History: I extracted this video source from the original PS2 disc.

Original Resolution: 582×416

Upscaled Resolutions: 1162×864 / 3840×2160 (Resized)

Included Outputs: I wanted at least one sample that was pre-processed and upscaled in as many different ways as was practical. I have created so many versions of FFX: The Dance that I need to subdivide them to make it easier to parse what’s here. A lot of people like using upscaling for animation, so I spent extra time creating different versions of this animated content. All of these clips were resized to 4K to make certain YouTube would encode them at reasonable quality.

If you want to see the difference between the various upscales and filtering methods, I recommend pausing 17 – 25 seconds in. The close-up on Yuna’s face makes it easier to compare.

There are six “versus” videos below that showcase two different outputs side-by-side, plus a final comparison of the original against my best upscale. These are intended to make it easier to compare various outputs. There are too many to reasonably embed, so I’ve provided text links below.

FFX: Side-by-Side Comparisons

1). OG video vs. AviSynth processing (Method 2) (embedded above)
2). Original video vs. Original video (2x AHQ)
3). AviSynth Method 2 (Not Upscaled) versus Maybe Better (2x AHQ)
4). OG 2x AHQ vs. AviSynth 2x AHQ
5). Cupscale Kenomo v. Maybe Better (Not Upscaled)
6). Cupscale UniRestore v. Video2x
20). OG v. Best Upscale

The first video shows the impact of AviSynth processing versus no processing, with no upscaling involved at all. The second video shows the impact of upscaling, but only with one run through TVEAI using Artemis High Quality at 200 percent. Video #3 repeats this comparison, but uses the pre-processed AviSynth source instead of the original, unprocessed source. Video #4 shows how the original video looks upscaled compared to the AviSynth-processed version.

Comparison #5 shows a single-model Cupscale encode against AviSynth output with no upscaling at all, to emphasize how much AviSynth alone can improve a video without relying on upscaling.

Beyond these videos, I’ve also uploaded the following outputs.

FFX: Not Upscaled

7). Original video.
8). Original video after AviSynth processing (Method 1)
9). Original video after AviSynth processing (Method 2)

I’ve included the original video, plus three different methods of processing this clip in AviSynth, to show how filter settings change the final output. When I talk about blending different outputs together to create a multi-source, pre-processed file before you ever use TVEAI, this is the kind of place you might use that strategy, striking your desired balance of smoothness and noise in the underwater sections of the video.

FFX: Single-Model Upscaling

10). Original video after one upscale pass in TVEAI (AHQ, no pre-processing)
11). Original video after one upscale pass in TVEAI (AHQ, pre-processed with AviSynth Method 1)
12). Original video after one upscale pass in TVEAI (AHQ, pre-processed with Method 2)
13). Original video after one pass through Cupscale (2x Kemeno_Kiwami, no pre-processing) (embedded above)
14). AviSynth Method 2 video after one pass through Cupscale (2x Uniscale_Cartoon)

These videos showcase how upscaling output depends on the pre-processing method. In some cases, the differences between two videos will be magnified when the video is upscaled. Even though these videos have only been through TVEAI or Cupscale once, each is distinctly different from the others. As I said in the main body of the article, AI upscaling is not one “thing” and you can target a wide range of sharpness levels, color adjustments, and denoising for your video, depending on what you think it should look like.

FFX: Multi-Model Upscaling

15). Original video after one blend (AHQ + 4x Prot)
16). Original video after one blend (Cupscale + Cupscale) (embedded above)
17). Multi-Model Blend (TVEAI)
18). Multi-Model Blend (TVEAI + Cupscale) (embedded at top of page)
19). Multi-Model Blend (Cupscale)

The multi-model upscale list shows off the improvements from using more than one model and blending the output together in increasingly sophisticated ways. #15 shows the impact of blending just two videos together, #16 showcases the same effect in Cupscale (note the dramatic color shift in this case), #17 illustrates what can be achieved in a multi-model, multi-generational video with several trips through the upscaler, #18 shows how TVEAI and Cupscale look when outputs are combined, and #19 is a multi-model blend of Cupscale alone.

Cupscale models can also benefit from multiple passes through the application, but they don’t see the boost that TVEAI does. In some cases, the uplift is marginal or counteracted by color shifts. The video embedded at the top of this section benefits from a second pass through the model but it comes at a high cost to color accuracy.

Broken Outputs:

DVDFab Output 1
DVDFab Output 2
AVCLabs Output 1
AVCLabs Output 2

What the Avalanche Tells Us

While there’s a lot of output here, the big picture trends break down pretty readily. First, the two different AviSynth processing methods show that this content can be cleaned up in multiple ways with varying impacts on the final output. The Cupscale outputs aren’t nearly as strong as the TVEAI outputs, but I found a way to use that to my advantage in #18, where I was able to blend both TVEAI output and Cupscale output together. The darker color in #18 is my own doing.

Processing Notes

I decided to work on Final Fantasy X’s iconic “The Dance” video after seeing a video someone else had created. One thing I think this video illustrates is how different approaches and processing methods can yield very different results. All of the Final Fantasy X results on this page are different from one another, very much including the upscaled videos. The “right” version, ultimately, is the one that feels right to you and that reflects the vision of the video you had in your mind’s eye when you set out to create it.

The trickiest part of improving this video is not wrecking the underwater scenes. The flowers have a slight oil painting look to them at several points, and there isn’t always enough detail in the distant shots of Yuna dancing for the video to upscale cleanly.

Oftentimes when upscaling, the trick is to clean up the bad-looking frames more so than to improve the frames that come out well. I took a variety of approaches to this video. Video #17 was created entirely from Topaz Video Enhance AI blending, Video #18 is a blend of TVEAI and Cupscale, and Video #19 is Cupscale alone.

Why I Included This Video

A lot of upscaling work is focused on animation and I wanted to include an animated source. Taking on a clip from the PS2 era was a fun challenge. I’m really pleased with how this one turned out and I feel like it showcases some of the strengths of AI upscaling.

Video: Battlestar Galactica: The Living Legend Project

Source History: The original source for this video was created by Adam “Mojo” Liebowitz. He asked if I’d take a shot at upscaling the video, to show his own remastering work on the series at higher quality.

Original Resolution: 720×400

Output Resolution: 2880×1600

Included Outputs:
Original video (as provided)
Original video upscaled in TVEAI, 2x AHQ-only, no pre-processing.
Multi-Model Upscale #1
Multi-Model Upscale #2
Cupscale 4xUniScale_Restore
Cupscale 4xUniScale_Restore v. 2x AHQ

There is a slight color difference between the single-model video and the multi-model video. This was the result of other processing decisions I made, not something introduced directly by TVEAI. TVEAI can cause small color shifts, but it did not cause this one.

Processing Notes, Comparisons

The impact of pre-processing is particularly clear in this video, as shown in the following image comparisons. First, here’s a comparison between the pre-processed and original version of the file with no upscaling applied at all. Next up, here’s the same set of image comparisons, only this time both videos have been upscaled using Artemis High Quality. Sometimes, the difference between two different encodes is small until both videos are upscaled, at which point it becomes larger. That’s not really the case for this video. Running VapourSynth via Hybrid makes a major difference, even before the video is upscaled.

Finally, here’s a frame-by-frame comparison of a single-model AHQ upscale against my multi-model, multi-generational technique. To see the difference in either the Frame 118 or Frame 167 comparison particularly well, hit the + button and look at the letters spelling out “Galactica.”

Why I Included This Video

This was an interesting opportunity to visit an iconic show. Mojo’s remastered CGI effects already fit flawlessly into the episode, but it wasn’t clear if TVEAI would handle both sets of footage equally well. In this case, it did. This BSG video also shows the impact of pre-processing quite well. The gap between the single-model upscale and the blended model is real, but smaller than the gap between pre-processing and not pre-processing the footage.

Mamma Bear

Source History: This video was shot by professional photographer and ET author David Cardinal. He provided it when I asked for an example of a video he’d shot on older equipment, to test what kind of improvements were possible. A great deal of content has been shot on 720p or 1080p equipment over the last 20 years. This video is our representative test for this type of content.

Original Resolution: 1280×720

Output Resolution: 2560×1440

Included Outputs:

Original Video
Upscaled Video Comparison (Cupscale v. TVEAI, Single Model)
Upscaled Video (Single Model, Cupscale)

Processing Notes

In this case, I asked David directly for his opinion on the upscale.

“I set up a dual-monitor system with a 1080p and a 4K monitor, and played both the original 720p version and Joel’s upscaled version on each of them side-by-side — alternating. The original version looks okay at 1080p, but really shows its age on a 4K display. That said, the 4K version on the 1080p monitor still looked much more clear than the original version did on either monitor. Lots of cool detail that typically wouldn’t be possible without actually capturing in native 4K. Hat’s off!”

This video cleaned up nicely. I didn’t actually use AviSynth or VapourSynth on this output; they weren’t necessary. The gap between the single-model AHQ and the multi-model blend is not very large here.

Why I Chose This Video

I chose this video to illustrate how Topaz Video Enhance AI would perform when tested against relatively modern, professionally shot footage. Skipping the pre-processing steps saves some time, though working with 1280×720 video also takes a fair bit longer than 720×480 or below.

The Make-Up of Dick Tracy: An Interview With Doug Drexler and John Caglione

This is a short interview/interest piece on two of the lead makeup artists behind Dick Tracy: Doug Drexler and John Caglione. The pair won an Academy Award for Best Makeup for the 1990 movie. This clip is narrated by the inimitable Don LaFontaine.

Source History: MediaInfo claims the authoring application was an iPhone XS Max. Clearly, this is not the original source, since the video was shot ~1990. I have no information on the video’s journey from then to now.

Original Resolution: 568×320

Upscaled Resolutions: 1136×640 / 2272×1280

Included Outputs:

The original video
The original video filtered in AviSynth
Filtered, Upscaled Version (Method #1)

Why I Included This Video

I included this video to showcase how TVEAI performs with older, low-quality content and to illustrate the limits of the software, even when it is able to improve final quality. None of this output looks particularly good, but the footage that’s been through both Resolve and TVEAI looks better than the footage that’s only been through Resolve, or only through TVEAI.

These videos and images also show how blending multiple outputs together can avoid the damage caused by using a single model.

Beyond the Blu-ray: Stargate SG-1

I am not kidding or exaggerating when I say that it’s possible for an upscaled DVD to beat an official Blu-ray release from a Hollywood studio, but there is a reason this is the case. The reason it’s possible to upscale an SG-1 DVD to the point that it beats the Blu-ray like a rented mule is that the Blu-ray release of Stargate SG-1 sucks. Sometimes, quality is a subjective thing. Sometimes, it isn’t. The official Blu-ray release of Stargate SG-1 is egregiously damaged in some places. The first examples I’m going to show you are from the Season 7 episode “Inauguration.”

I don’t know what VFX company and/or national zoo was responsible for transferring these episodes from DVD to Blu-ray, but it’s difficult to believe a human was involved. Compare how the same uniform looks on an upscaled version of the DVD (left) versus the officially restored Blu-ray (right).


The upscaled DVD is on the left, native Blu-ray on the right. Here’s the full shot, with the native Blu-ray first. You can click on either image to open it in a new window.

From the SG-1 episode “Inauguration.” Native Blu-ray. (Above)

From the SG-1 episode “Inauguration.” Upscaled DVD. I’d rather have a little banding than flickering patches and checkerboard crawl across most surfaces, but the banding can be dealt with regardless. I just didn’t have time to re-do the clip before this article ran.

One of the major themes of this article is how easy it is to tune upscaling output for whatever characteristics you’d like it to have as far as smoothness, sharpness, and noise/grain. You don’t have to love my own choices in the frames above to recognize three facts:

1). There is objectively more detail preserved in the upscale.
2). The Blu-ray is stretched out of its original aspect ratio while the DVD preserves it.
3). The general’s uniform in the native Blu-ray is damaged in ways the DVD is not.

These are not one-time errors; the entire episode is shot full of similar problems. There’s an odd checkerboard pattern on the edges of many objects in virtually every scene of the episode, including articles of clothing, flags, cups, and patches. It appears on people’s faces and it shimmers in motion in an extremely distracting way. I made multiple attempts to fix it with various AviSynth and VapourSynth utilities. While there are some filters that can “remove” damage of this sort, they also destroy detail. Less damaging methods did not fix the problem.

This is a close-up of the first image comparison linked above. You can see the checkerboard damage creeping up the side of the man’s face until we cross over to the upscaled DVD, at which point the damage vanishes.

This checkerboard doesn’t appear in every episode of the show — I haven’t done a full enough sweep of the discs to say how common it is — but one thing I can say is the quality of this “Blu-ray” release is deeply disappointing. The video has been effectively sandblasted to remove any trace of noise or grain. That’s on top of the checkerboard damage and the retained interlacing damage.

I can tell why the people who created this video stretched it out of aspect ratio — it’s because the corners of the video are damaged and have distinct lines visible on the sides. It would have been vastly preferable to leave tiny black bars on either side than to do what’s been done here. Part of the reason the picture quality is poor — even in episodes that lack checkerboard damage — is because the video has been stretched out of true, possibly by a machine of similar vintage to those used during SG-1’s run.

Not every episode looks this bad. The Season 7 finale “Lost City” doesn’t show the same weird checkerboard pattern and the overall quality seems higher, but there’s still scarcely any additional detail to be had over the DVD, especially once you run the DVD through AviSynth.

User-adjustable comparison available at Imgsli. The native Blu-ray does have a small detail advantage over the original DVD in this episode, and the new release has been color graded — but the gap is not large.

If you’re curious to see what these files look like when upscaled, I’ve got that output as well. The Blu-ray footage actually upscales much less well than the DVD. AHQ-Hybrid and AHQ-AviSynth show two different grain and noise injection patterns, one stronger than the other.

I wanted to create the same kind of comparison video for Lost City as I did for Inauguration, but Lost City appears to have been edited relative to its DVD version. Unlike Inauguration, where the Blu-ray and the DVD align frame-for-frame, Lost City does not; there are specific clips where the output is synchronized and later clips where it is not.

My clip from Lost City is shorter due to time, but I’ve included the upscaled DVD footage here. Apologies for the somewhat muddled audio, but the video is what I suspect you’re here for.

The Blu-ray version of “Lost City” may lack for checkerboard damage, but it’s damaged in other ways. There’s a scene at the climax of the episode where Earth’s only functional starship, the Prometheus, is making a doomed run on Anubis’ flagship. Here’s how that looks on the native Blu-ray versus the DVD. You may need to click on the image to see the difference between the backgrounds on the left versus the right.

Once again, that’s Blu-ray on the left, DVD on the right.

Whatever denoising method the “restoration” team used managed to denoise the stars right out of the sky. I didn’t insert new stars in my upscale and I didn’t make any special effort to preserve or expand the number of stars already there. I am aware that VFX houses often work in very challenging conditions on tight deadlines, but if the best output I could create with Topaz Video Enhance AI looked like the above, I would have never written my first article. I’d sooner quit a job than ask fans of a TV show to buy garbage that looks like this. MGM ought to be ashamed of itself for the way it’s treated Stargate: SG-1.

Not all Blu-ray releases look like this. At the other end of the spectrum, there’s Star Trek: The Next Generation, which got a full film remaster, recolor, and new special effects. I spent some serious time earlier this year working to see how close I could even potentially get to matching TNG’s quality with nothing but a DVD. Spoiler — not very close. But Stargate: SG-1‘s Blu-ray release sits on store shelves just as legitimately as TNG’s. This is the product MGM has put its imprimatur upon and brought to market — and that product isn’t very good.

Ultimately, it doesn’t matter if the reason an upscaled DVD can beat a Blu-ray is that some Blu-ray releases suck. If I were an SG-1 fan considering which version of the show to buy, I’d buy the DVD collection and upscale it before I’d pay a dime for the Blu-ray. I only had a few days to spend on this project before I had to write it up for this article, and I am confident I can further improve on these results given time. One more difference between the Blu-ray and DVD versions of the show is that upscalers are still getting better. Over time, the upscaled version of the DVD will improve. The Blu-ray is stuck with itself.

If you stopped off here at the beginning of the story because you wanted to see the DVD v. Blu-ray comparison, here’s a link to get you back to the rest of the article. If you hit this page after looking through the others, I appreciate your truly impressive stamina, and thank you for taking the time to look through all this content.
