Aladdin4d Yeah, I was talking about de-Bayering and image processing... both of which are very GPU intensive, kind of overlooked the compression/decompression part.
Not that any of that makes QuickSync any better, of course.
@CNK a lot of people use Quick Sync for low-priority things. To use a common example you'll see on forums, a user might rip an entire season of a TV show from Blu-ray or DVD and want to watch it on a phone or tablet on the way to/from work. In this case, if you're transcoding 20 hours of footage (the specific post I'm thinking of was ripping Babylon 5, so 22-episode seasons) to something you're going to view on a tiny 5-inch screen, Quick Sync is fine.
@Triem23 But what about the compression artifacts and sound quality?
Obviously I'm just coming up with a scenario that I might end up in sometime in the future, though I can't stand the poor quality of streaming or recompressed Blu-rays or whatever else is out there.
You would have to trade faster encoding for a larger file size, right?
@CNK Quick Sync would yield lower quality, but for that specific example--transcoding dozens of hours of footage to watch on a five-inch screen--who cares? The lower quality would certainly be obvious on a larger monitor or 50" TV, but when doing a quick and dirty transcode to watch on a phone--compression artifacts won't look bad on a phone--then something that encodes four or five times faster is the right tool...for that specific use! For the guy who wants to watch all of Babylon 5 on the train over a month or two of work commutes, it's worth saving tens of hours of encode time. I mean, it's Babylon 5 and it's standard-def anyway.
Now, for the stuff we're doing, where it's original work that we want to look good, I'll take the slower render time and higher quality output every time.
As far as faster encoding vs larger file size goes, well, file size is more a function of bitrate than of the encoding engine, I think.
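To put rough numbers on that, here's a quick sketch (the bitrates and episode length are made-up examples, not tied to any particular encoder):

```python
# File size is driven by average bitrate and duration, not by which
# encoding engine (x264, Quick Sync, NVENC) produced the stream.
# The bitrates and episode length below are illustrative assumptions.

def file_size_gb(bitrate_mbps: float, duration_min: float) -> float:
    """Approximate file size in GB for a given average bitrate."""
    bits = bitrate_mbps * 1_000_000 * duration_min * 60
    return bits / 8 / 1_000_000_000  # bits -> bytes -> gigabytes

# A 42-minute episode at two candidate bitrates:
print(f"{file_size_gb(8, 42):.2f} GB")  # 8 Mbps: ~2.5 GB
print(f"{file_size_gb(2, 42):.2f} GB")  # 2 Mbps: ~0.6 GB, plenty for a phone
```

Same duration, quadruple the bitrate, quadruple the file--which is why the choice of encoding engine mostly affects speed and quality at a given bitrate, not size.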
@Triem23 Yeah, but then there's the next problem, which is YouTube. Did anyone make a comparison between a high-quality export on YouTube vs a "low"-quality export on YouTube yet? I'm guessing the difference is not very big. And I read somewhere that the vast majority of viewers view content on their tablets, phones, and laptops, not larger monitors or TVs. I feel like you can make the pros and cons list really long, but it still depends on the situation you're in. It's an interesting topic though.
@CNK no clue if anyone's done that comparison.
It's my understanding that when you upload to YouTube your video gets re-encoded--often at a lower bitrate. Kaby Lake also supports the new YouTube codec (VP9?), for what it's worth.
Back to the thread.
So the consensus is (please correct me):
x264 scores highest in the Sixth MPEG-4 AVC/H.264 Video Codecs Comparison 2013.
However, the Intel GPU wins in the HEVC Video Codecs Comparison 2016. That test was done using Skylake. Kaby Lake GPU H.265 encoding/decoding is improved (see my AnandTech link above) and might score higher.
Compositing is unique to the HitFilm NLE. I assume it's more GPU-demanding than editing. I only have a Gigabyte G1 GTX 960 4GB discrete GPU. Is it adequate for compositing?
Yup, YouTube re-encodes video to its spec once uploaded. Another point on Quick Sync: Quick Sync here is being tested for encoding, but encoding is a small part of exporting from HitFilm. Before a frame can be encoded, it has to be rendered--computed and drawn. So while Quick Sync will speed up the encoding process, that's just dealing with writing the frame to disk after it's been rendered. If you've done basically nothing--just something like a Let's Play video--then there might be a speed advantage here (if lower quality), but if you're doing anything with lots of color grading, effects, 3D models, particle sims, keying, and heavy compositing, then Quick Sync is absolutely wrong. I'm sure I'll restate this as I hit your numbered questions. :-)
1) Correct. In fact, HitFilm seeks to use the fastest GPU a system has. To use Quick Sync on a system with an AMD or Nvidia GPU, the AMD/Nvidia card would have to be disabled just to access Quick Sync. That would slow down all rendering operations to speed up the disk encode, which would actually be counterproductive.
2) Again--correct. h.264 is, in general, a high-overhead codec, not well suited for editing. NormanPCN's optimized settings make h.264 pretty fast, but a "real" intermediate codec like ProRes, Cineform or DNxHD is still a better choice.
3) I'm not certain here. HitFilm doesn't seem to offer multiple H.264 encoders. I'm not certain which one it uses, although I'm thinking MainConcept? Perhaps @Aladdin4d or @NormanPCN can clarify?
Yes, compositing is more intensive than editing. Simply put, it's the difference between having one image with some effects applied and having several different images layered up with transparency, with each image having its own effects. A 4GB 960 is a perfectly good card and should do well for you. Obviously a brand-spanking-new 1080 would be much faster, but there will always be a newer, faster card (until Moore's law runs out, which will be in about three or four iterations).
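For anyone curious why each extra layer costs so much, here's a minimal sketch of the standard Porter-Duff "over" blend that layering with transparency boils down to (the pixel values are made up for illustration; HitFilm's actual pipeline runs on the GPU and is far more sophisticated):

```python
# Minimal "over" compositing of premultiplied-alpha RGBA pixels: every
# extra layer means another full image to read, blend, and write back.
# The pixel values below are made up purely for illustration.

def over(fg, bg):
    """Composite one premultiplied-alpha RGBA pixel over another."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    inv = 1.0 - fa
    return (fr + br * inv, fg_g + bg_g * inv, fb + bb * inv, fa + ba * inv)

# Stack two semi-transparent layers over an opaque background pixel;
# a real composite runs this for every pixel of every layer, per frame.
layers = [
    (0.2, 0.0, 0.0, 0.5),  # semi-transparent red layer (premultiplied)
    (0.0, 0.3, 0.0, 0.6),  # semi-transparent green layer
]
pixel = (0.0, 0.0, 0.1, 1.0)  # opaque dark background
for layer in layers:
    pixel = over(layer, pixel)
print(pixel)
```

Editing a single stream touches one image per frame; a composite touches one per layer plus a blend pass each time, which is where the extra GPU load comes from.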
FWIW, Futuremark currently ranks the GTX 960 as the 42nd fastest GPU of about 200 tested. That puts it (barely) in the top quarter. A better comparison is that your 960 benchmarks at roughly 20 times the speed of the minimum-spec Intel HD 4000... Now, the HD 4000 would be on a chip that has Quick Sync, so, for my final argument... would anyone REALLY want to give up 95% of their GPU performance for slightly faster file encoding?
It should do reasonably well. I can do some basic compositing on my machine, and I don't yet have a dedicated gpu. I can't go above HD/2K on my machine though.
One issue for a developer not talked about yet is licensing. Intel point blank says no license fees are covered by Quick Sync and it's up to the developer to meet any and all obligations for actually using it. Making matters worse, Quick Sync ends up being both a hardware and software decoder and a hardware and software encoder. Licensing for H.264 through MPEG LA is notoriously complicated under the best of circumstances, and there are differences in the licensing depending on what's being done, for what purpose, and whether it's hardware or software encoding or decoding. Because the licensing can be such a nightmare to figure out and handle, a lot of developers choose to go with a third-party implementation like MainConcept, which is what HitFilm uses.
@Triem23 Thanks for the info. Any suggestions on my planned i7 Kaby upgrade? Coming 42nd out of 200 is like not even being in the race. Glad to hear my handicapped 960 will work - at least it looks impressive with 3 fans and blue led! I've seen 4k workstations at $20-30k++ using two or more Nvidia $5k gpus - which makes it amazing we can do anything on a home desktop.
@Aladdin4d - Intel refers to it as the "Intel® Quick Sync Video (QSV) H.264 codec". If we are using their free codec, I would assume any other licensing would be Intel's problem, assuming they used patented products. Last I heard, the USA ruled that patents on H.264-compliant products were unenforceable (Wiki).
@FishyAl Without knowing your budget it's hard to call. Note that a lot of those other 41 GPUs included 6 variants of the 980/1060/1080 etc. from different "brands," so 42nd isn't actually that bad.
For reference I'd recommend Passmark or Tom's Hardware which rank all cards by power, and rank all cards by value for your dollar.
That said, today the best value GPU overall seems to be the Nvidia 1060.
@Triem23 The 1060 3GB is quite a bit weaker than the 6 GB model, though the story should more or less be the same in HitFilm if your work doesn't require more than 3 GB VRAM. I have actually not seen any data about the importance of VRAM in HitFilm, would be happy to see that to aid future buyers.
Something worth mentioning is that YouTube does allow you to fake 4K in order to achieve a higher bitrate for native 1080 content. The vast majority of "4K" options that you see are in fact 1080 videos at a higher bitrate. Several trusted sources have conducted side-by-side tests of native 4K versus higher-bitrate 1080, and for content that ends up on YouTube there is essentially no visible difference between the two despite the much higher resolution.
For that reason I think the improvement in 4K editing, or whatever they actually claimed in their advertising, doesn't really make sense beyond being a fancy buzzword. 4K basically only makes sense if you put it on a Blu-ray disc, IMO (and in many others' opinions). 4K isn't ready to be delivered by streaming services yet. Experts say 4K streams are beaten by 1080 Blu-rays in both sound and picture quality, so that's enough for me to forget about 4K on the internet completely (for now).
I'm curious how FinalCut implements QuickSync though, because it's able to render your project while you're working. It's beyond my technical understanding and I'm not able to find any easy to follow information on how their software works.
@FishyAl Here's a response from Intel after being asked about patent licensing and Quick Sync
"License fees are not covered by Intel. We provide the tool and technical support. It is up to you to comply with any obligations to other parties like MPEG LA."
"Intel has no blanket agreement with MPEG-LA like you describe. It will be best to take this discussion to them. We can help you to get your application working, but questions on distribution obligations are beyond what we can answer."
AFAIK the patent unenforceability is limited to two Qualcomm patents, because Qualcomm failed to inform MPEG or VCEG of the patents even though they were a participant in the joint team setting the standards for H.264 and had a duty to inform everyone involved. Everybody else's patents related to H.264 are still fully enforceable.
@CNK VRAM is like regular RAM: having more VRAM doesn't speed things up, but more VRAM means you can process more data. Back on my previous computer I had a project I could never reload--when contacting support, it turned out the massive models and particle sims needed twice as much VRAM as my system had. The fact that HitFilm got through an initial setup, edit, and draft render speaks to the stability of the code!
In HitFilm, at least 4GB of VRAM is needed for greater-than-4K work and is recommended for 4K work.
Think of it this way (this is oversimplified and I'm sure Norman or Aladdin will make corrections, but this is close enough to make the point)--whatever compression a file may have on disk, it's uncompressed internally. A 4K frame is roughly 8 megapixels. With an alpha channel that's about 33 megabytes per frame. At 24 fps, roughly 800 megs a second. Per layer. Take two layers of video (say we're doing a custom wipe), add in the required control map... put in a grade layer, and now we're at over 3 gigs/second of throughput. Now how many effects are chained on that grade layer? Effect B processes after Effect A, so that's requiring MORE memory...
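Here's that back-of-envelope math as a sketch; the exact figures shift with the bit-depth and frame-rate assumptions (8-bit RGBA at 24 fps assumed here):

```python
# Estimate uncompressed memory throughput for layered 4K compositing.
# 8-bit RGBA at 24 fps is assumed; real pipelines vary (float buffers,
# for instance, would multiply these numbers by 4).

W, H = 3840, 2160           # UHD "4K" frame
BYTES_PER_PIXEL = 4         # R, G, B + alpha at 8 bits each
FPS = 24

frame_mb = W * H * BYTES_PER_PIXEL / 1_000_000
layer_mb_s = frame_mb * FPS

print(f"{frame_mb:.0f} MB per frame")       # ~33 MB
print(f"{layer_mb_s:.0f} MB/s per layer")   # ~800 MB/s

# Two video layers + a control map + a grade layer = four full streams,
# before any chained effects add their own intermediate buffers:
print(f"{4 * layer_mb_s / 1000:.1f} GB/s")  # ~3.2 GB/s
```

Every chained effect reads and writes another full-size buffer on top of this, which is why VRAM runs out long before the codec or the disk becomes the problem.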
What if I'm using models and particles and HFP 2017 depth mapping? That's a memory hog...
And user X with an i3 and Intel HD 4000 wonders why he can't do 4k...
@Triem23 Sorry, my planned upgrade is from an i5 Haswell, for 4K with HitFilm Pro on Win 10 64-bit. Suggestions welcomed:
Kaby Lake i7 7700K water cooled (80% o/clock to 5 GHz)
Asus Prime Z270-A m/board (great for o/clock)
32 GB DDR4-3200
SSD Samsung 960 M.2 512GB (3,500 MB/sec read)
2x 2TB WD Blue 7200rpm HDDs
A second 24"HD monitor
I'll keep the GTX 960 4GB until I can afford a 1080.
@CNK Interesting YouTube observation but it makes sense.
I would guess FCP allows you to keep working because QSV runs with almost no CPU load, leaving the CPU free to continue editing--but that's a guess.
Kaby Lake also claims 4K streaming for Netflix, but I assume that's UHD.
@FishyAl Uhm, an 80% OC would get you to 7.5 GHz; you'd be able to travel back to the future with that kind of speed. =P
Kind of an insane machine you're going for there. If you can justify spending that much money then by all means, do it... I'm just curious how you came to the conclusion that you need a super fast drive; I don't think it makes a difference, because exporting and playback will never reach the kinds of speeds that particular drive offers.
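A rough sanity check of that point (all stream bitrates are ballpark assumptions): even a heavy intermediate-codec stream uses a fraction of what the NVMe drive, or even a plain SATA SSD, can deliver.

```python
# Compare typical video stream data rates against drive throughput.
# The stream bitrates below are ballpark assumptions for illustration.

drive_mb_s = 3500    # quoted sequential read of the Samsung 960 M.2
sata_mb_s = 550      # rough ceiling of a SATA3 SSD, for comparison

streams_mbps = {
    "UHD H.264 camera file": 100,   # ~100 Mbps consumer camera footage
    "UHD intermediate codec": 800,  # Cineform/ProRes-class data rate
}

for name, mbps in streams_mbps.items():
    mb_s = mbps / 8  # megabits per second -> megabytes per second
    print(f"{name}: {mb_s:.1f} MB/s, "
          f"{drive_mb_s / mb_s:.0f}x headroom on NVMe, "
          f"{sata_mb_s / mb_s:.0f}x on SATA")
```

Even the heavy stream reads at roughly 100 MB/s, so sequential throughput is rarely the bottleneck for single-stream playback or export; a fast NVMe drive pays off more for many simultaneous streams or scratch-disk work.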
For the record, here are my PC specs:
AMD A8 5500 3.4 GHz
R9 380 4 GB
240 GB SATA3 SSD
4 GB RAM
My PC is so incredibly underpowered on the CPU and RAM side, but I still get by. I can't even fathom the performance increase I would see with that kind of system...
@Aladdin4d - Thanks. I'm no lawyer but would guess Intel is covering their backsides. Wiki says: "In December 2008, the US Court of Appeals for the Federal Circuit affirmed the District Court's order that the patents be unenforceable but remanded to the District Court with instructions to limit the scope of unenforceability to H.264 compliant products". "H.264 compliant products" covers a lot of territory. Outside the USA may be another story. Are there no open source equivalents to H.264 AVC? Other software using QSV and the AVC codec, including the free GoPro Studio, outputs the AVC codec with no patent warnings.
Professional video codecs like Cineform, Avid, Apple, and Canopus (Grass Valley) HQX were unobtainable or very expensive but are now free for all to use. Software that supports the use of these codecs does not supply the codecs, to avoid licensing issues. Cineform support has been added to Premiere, but the user has to download the codec. This would imply that HitFilm could add QSV support using the downloadable Intel API without needing any licenses, as they are not supplying it. The end user has to get a CPU with QSV, and as with the codecs on my PC, it's the user who agrees to the end-user license terms, not the application software.
@CNK - Sorry, not an 80% overclock. Asus tested hundreds of these CPUs and found that 80% of them will overclock to a stable 5 GHz, and 100% to 4.8 GHz.
@FishyAl Yep, that's Qualcomm vs Broadcom, and that Wikipedia page has links to the two patents Qualcomm sued over. The ruling only applies to those two patents and Qualcomm. The District Court originally ruled there was no set of circumstances under which Qualcomm could ever claim patent infringement. The appellate court disagreed with that broad a ruling and said Qualcomm could potentially claim patent infringement outside of anything H.264-related.
NormanPCN re Cineform gpu acceleration - Info came from many sources. Here is an example: $300 CPU Beats $4000 CPU?? - Cores vs clockspeed for video encoding .
@FishyAl I would not believe anything you read/watch on the NET. And of course that goes for what I say.
I will state that those tests shown in the video are all-inclusive. Meaning, you don't know exactly what is doing what. In a simple transcode you have decode of the source, possible scaling, with Cineform input and/or output a possible active metadata application, and the encode. Exactly what is using what (CPU vs GPU) one cannot really say. Only when you can isolate every function in a stream can one knowledgeably say.
I will say that in GoPro Studio I see 0% GPU utilization on a transcode with no metadata or scaling applied. The source (decode) was AVC (GoPro specifically) and of course the output (encode) was Cineform. When something like GoPro Studio, HitFilm, or Vegas plays back Cineform you will see some GPU utilization, but what is it doing with the GPU? At a minimum it is displaying the stream on screen and probably scaling to fit the preview window. On my Nvidia setup the GPU certainly does not jump at all from the 2D/idle clock rate, so the use of the shaders is questionable.
So when one tests a specific app doing a specific thing with specific data input, then one can reasonably give results. Trying to take pieces of a specific test and applying them to something else is not straightforward. How Adobe Media Encoder operates doing something specific versus GoPro Studio or HitFilm or Vegas is nowhere near 1:1.
@FishyAl and @NormanPCN
This is from a Jake Seagraves article on the Cineform site.
"CUDA is additive: It is not always an either/or decision. CUDA acceleration applies equally well to effects and transitions applied on top of CineForm media. The CineForm codec is not accelerated by CUDA as it is blazingly fast already (200+ frames per second on a single-quad i7), but Adobe effects are all accelerated using CUDA even for CineForm media."
In the video FishyAl posted, the tests were done using Adobe Media Encoder, and CPU vs GPU was handled by enabling or disabling GPU acceleration in Media Encoder. That's not a good representation of what resources a particular codec uses. There's no telling how Media Encoder used the results, or how it uses the GPU to accelerate encoding with a codec that doesn't use the GPU.
Thanks - I know Jake. He was responsible for Cineform integration into Adobe and Resolve. I should have done my homework. It seems the complex maths in codecs would lend themselves to GPUs. There is another new codec I am using for 4K called MagicYUV.
I recently had a very good experience with MagicYUV: super fast render time--actually faster than Animation, which blew me away--a third of the file size, great quality, and it's free.
"Believe nothing you read on the internet" Thomas Jefferson
One more idea: assuming Cineform is a good intermediate codec for UHD/4K in HitFilm, why not add it to the Create Optimized Media function as an option prior to editing, to save the extra work of using third-party software for H.264 camera formats?
Because it costs money to do so, and GoPro Studio, which already does it, is a) already available and b) free? Just like Handbrake is for other formats. You'd spend the same amount of time waiting for it to chew through the file in HitFilm as you would doing it externally, except people would then be asking "Why isn't HitFilm quicker at transcoding the files before I can edit them?"
Note that in Hitfilm Pro 2017 with the new export queue and native Cineform export, you absolutely can batch transcode files to Cineform--assuming they aren't VFR, in which case you'll still need to use external software.
Thanks, but if they already have Cineform transcode support for export, it shouldn't cost much to add transcoding for imported files before editing for 4K/UHD. H.264 4K editing is tough for consumer PCs. It makes sense to use intermediate codecs as part of the workflow. A convenient, useful feature at low cost.
You just confirmed my point: low cost still = cost. Want to pay more for HitFilm? No, me neither. Although, as Triem23 and Aladdin4d pointed out, you can just pass your files directly through HitFilm and it does it for you.
@FishyAl and adding an automatic transcode before allowing you to edit? If I wanted my NLE to do THAT annoying crap, I'd have stayed on Avid. Your NLE should never FORCE you to transcode on input. And I'm also giving FCPX a stinkeye over that.