Hitfilm is multithreaded software, but to what point?
How many cores does Hitfilm use on average?
Is a 5.2 GHz 8-core CPU better than a 4 GHz 12-core CPU, ONLY FOR HITFILM?
"Hitfilm is multithreaded software, but to what point?
How many cores does Hitfilm use on average?"
How many cores? It depends on the exact specifics of a given test. Multiple CPU cores are mostly going to be used for media file decode. UHD should want more threads than HD. With media decode, remember that these days most AVC files will use hardware decode (no CPU).
If you are doing media decode and your timeline is 16/32-bit float, then I have questions about how Hitfilm goes about this. I wonder if they are using the GPU for the integer-to-float conversion. If not, then a fast CPU can help here.
AVC encode/export will use many CPU threads. What is unknown is how well the Mainconcept encoder scales with cores. By that I mean what the scaling is when going from 4 -> 8 -> 12 -> 16 cores. Do you get a near-linear increase in performance, or does it start running out of steam, and if so at what level? I don't know if anyone has done such tests. Years ago I remember seeing such core-scaling tests for the x264 AVC encoder, but not for other encoders.
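To illustrate what "running out of steam" looks like, here's a quick Amdahl's law sketch. The 95% parallel fraction is a made-up number for illustration, not a measured property of the Mainconcept encoder:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallel fraction of the work and n is the core count.
def amdahl_speedup(cores, parallel_fraction):
    """Ideal speedup on `cores` threads when only `parallel_fraction`
    of the encode can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical 95%-parallel encoder: note the diminishing returns.
for n in (4, 8, 12, 16):
    print(n, "cores ->", round(amdahl_speedup(n, 0.95), 2), "x")
```

Even at 95% parallel, 16 cores only buy roughly a 9x speedup, so going from 8 to 16 cores is far from doubling throughput. That curve is exactly what a real core-scaling test would pin down.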
I may have tested the Cineform encoder in Hitfilm for threading, but I cannot remember any results. Image sequence export seems to be single-threaded except for EXR, and EXR is lightly threaded as I remember.
"Is a 5.2 GHz 8-core CPU better than a 4 GHz 12-core CPU, ONLY FOR HITFILM?"
Only someone who has two such machines and has constructed a test case can answer. And then the answer might only be relevant to that specific test case, so one would need many test cases.
The GPU handles almost all effects, and GPUs are massively parallel, but there is likely a single CPU thread setting up (defining) the work for the GPU. Something has to tell the GPU what to do. The OpenGL driver is multithreaded, but the common word is that the OpenGL pipeline is not well suited to multithreading: poor core scaling. Exactly what that means is unknown to me. Where does the driver run out of steam, and what things in the driver can use multiple threads? Hitfilm is not like typical GL apps, which do VR/game-type worlds. It has a different dataflow and GPU use model.
Many things in Hitfilm are single- or low-threaded, so a high-clock CPU is a potential advantage here. The particle simulator and image sequence export are prime examples.
You want a definite answer that probably does not exist.
We can't really expect FxHome to set up and construct such tests. They have better things to do with their time. A PC builder like Puget Systems has done such testing for apps like Resolve, Fusion, Premiere, After Effects, and Photoshop. It is one thing they do to attract business from the creative market in PC purchasing.
In the past I have posted in this forum some BS tests at UHD p30 for AVC (software), Cineform, ProRes, and DNxHR. BS in that this was quick and dirty. That was on my old 4 GHz 4770K. Currently I have a 4.8/5 GHz 9900K. The problem is that the old thread used Hitfilm 2017, which is faster than current Hitfilms, so a comparison is not apples to apples. Maybe 13 is back to 2017 speed, but I have not tested, and frankly it's not my job.
My old 4770K machine still exists, collecting dust with my old GTX 980 in it. I could run the new machine at the same 4 GHz for a core-scaling test for at least some things. There is the matter of memory bandwidth, but it's close enough really. Anyway, testing is not fun, so I'm not terribly interested on that basis alone.
@NormanPCN Thank you very much for your response. I have been doing some tests in Hitfilm with my overclocked i9-9900K and 2070 SUPER and came to a conclusion. I did some workflow tests with different media encoders and different editing and VFX scenarios. Finally, it turned out that Hitfilm rarely uses more than 4 cores on the timeline EXCEPT ON 4K mp4 60 FPS and EXPORT, when all cores were used. But as you mentioned, Hitfilm does almost all rendering on the GPU. I also prepared a test bench and discovered that you will notice a bigger improvement moving from an Nvidia 1070/1070 Ti to a 2070 SUPER than moving from an i7-2600K, 7700K, or i7-8700K to an i9-9900K, even when overclocked. The general HF workflow is much more GPU dependent than CPU dependent.
@8KMAX "The general HF workflow is much more GPU dependent than CPU dependent."
Thanks for dialing up some tests. I wish we could inscribe your quote at the top of the Forum. This question comes up often. Good to have some quantitative results.
General rule of thumb (as @NormanPCN said) is CPU for encoding/decoding and generally dealing with disk I/O but GPU for effects and layers (that's where the rubber meets the road).
"General rule of thumb (as @NormanPCN said) is CPU for encoding/decoding and generally dealing with disk I/O but GPU for effects and layers (that's where the rubber meets the road) ": Totally correct.
"Finally, it turned out that Hitfilm rarely uses more than 4 cores on the timeline EXCEPT ON 4K mp4 60 FPS and EXPORT,"
For media decode, multi-core use is somewhat dependent on the frame size. A larger frame gives more opportunity for efficient parallel decode. CPU threads working on the same task (data) can get in each other's way, and they tend to work best with a little elbow room. So yes, 4K media decode can more easily use more threads than 1080. This excludes hardware decode for AVC, as previously stated.
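A toy model of that "elbow room" point (the slice size and thread cap here are made-up numbers, not Hitfilm internals): if a decoder splits a frame into independent horizontal slices, a taller frame simply has more slices to hand out to threads:

```python
# Toy slice model: each thread takes one slice of rows; a bigger frame
# yields more slices, so more threads can get useful work.
def usable_threads(frame_height, rows_per_slice=128, max_threads=16):
    """Threads that can each get a full slice for a given frame height."""
    slices = frame_height // rows_per_slice
    return min(slices, max_threads)

print("1080p:", usable_threads(1080), "threads")  # HD
print("2160p:", usable_threads(2160), "threads")  # UHD
```

With these assumed numbers, UHD feeds twice as many threads as HD before the work runs out, which matches the "4K decode uses more threads" observation.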
Also, in playback we are frame-rate locked, so of course 60p will use more CPU than 30p in terms of utilization. The same number of threads can be used in both 30p and 60p, but CPU use dies off for the remainder of the frame period until the next frame, when it starts up again. 60p simply has twice the decode work per second as 30p. I suspect Hitfilm is synchronous here with respect to media decode and the timeline, so this start/stop is likely occurring.
With something like RAM preview we are not frame-rate locked, but Hitfilm is terrible at GPU readback performance. Export has readback as well, but export adds other work to the dataflow.
So with Hitfilm you need enough CPU (clock and threads) to get the timeline moving to where the GPU takes over. Once you have that, more is a waste. What is "enough" very much depends on your media type, the frame size and rate, and how many simultaneous media streams (compositing) you want handled with no stutter in basic decode/play. Of course you always want as much clock as you can get, since there is still much that is single- or minimally threaded. I question how well Hitfilm performs async pipelined operation, so in cases like these, clock can be a benefit when you push the timeline hard.