Achieving CinemaScope or other 'ultra' widescreen ratios in 2022?

Scarlet_Impaler Posts: 3 Just Starting Out

Hello,

I've recently learned about the odder aspect ratios used in cinema in the past, and I want to use one in future videos. CinemaScope is the more well-known technique, and those aspect ratios were at or around 2.4:1, instead of 16:9 or even 21:9.

Is this something that can be done in 2022, without expensive pro cameras or finding and using old Hollywood tech? It doesn't necessarily need to be 2.4:1 precisely, but something more than 21:9.

I can easily add black bars to 16:9 in HitFilm, but that looks real cheap and it's easy to see through the effect.

Apparently, some of the techniques back then used two cameras on the same rig and stitched the footage together in editing. If it's as simple as mounting two identical smartphones or point-and-shoots on a single rig, perfectly leveled, is there a simple way in HitFilm to layer the two 16:9 video tracks over each other to make a wider aspect ratio? My concern would be lining up the same details on top of each other perfectly.

Also, I understand that I'd likely still have to export the final film in 16:9 for Youtube and video players.

Answers

  • JavertValbarr Posts: 373 Staff

    If I'm understanding correctly, all you would have to do is set your Project Settings to something like 1920x800 or 3840x1600.

    "Also, I understand that I'd likely still have to export the final film in 16:9 for Youtube and video players."

    YouTube supports different aspect ratios: https://youtu.be/1xo3af_6_Jk

  • Scarlet_Impaler Posts: 3 Just Starting Out
    Sorry if I didn't make it clearer. I would still need to record in that aspect ratio; that's the main issue. AFAIK, most common cameras will only film in 16:9, 4:3, or maybe 21:9, not 2.4:1 or anything like that. Which is why I brought up the two-camera-one-rig trick they used back in the day.
  • Triem23 Posts: 20,276 Power User

    OK, the two-camera rig might work, but not with smartphones, unless both are the most recent iPhone Pro.

    With the exception of the iPhone 14 Pro (and maybe the 13), every smartphone records variable frame rate (VFR). This means the recording speed of the phone can constantly change - if you've set the phone for 23.976/24 fps, it's recording more like "24, 23, 25, 18, 20, 24, 16, 25, 24." Yes, VFR might go faster than the target frame rate, and the fps swing might be far wider than the example given - one forum user had phone footage that swung between 35 and 0.5(!) fps on a "29.97" recording.

    And you can't guarantee both phones will vary in the same way, so your half-frames might not align correctly.

    I've never tried to shoot wide with two phones, but I've tried shooting multi-clone shots on a motion control rig using a phone as the camera, and my passes didn't line up. There was stuttering and tearing at the seams.

    Get cameras which can record constant frame rate (CFR), and this can work.
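
    If you want to sanity-check a specific camera or phone, one quick test - assuming you have ffmpeg/ffprobe installed - is to pull the timestamp of every video packet from a sample clip and look at the spacing. A rough Python sketch (the clip path comes from the command line):

        import subprocess, sys

        def frame_intervals(path):
            # ffprobe prints the presentation timestamp of every video packet
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-select_streams", "v:0",
                 "-show_entries", "packet=pts_time", "-of", "csv=p=0", path],
                capture_output=True, text=True, check=True).stdout
            times = sorted(float(t) for t in out.split() if t and t != "N/A")
            return [b - a for a, b in zip(times, times[1:])]

        deltas = frame_intervals(sys.argv[1])
        print(f"frame spacing: {min(deltas)*1000:.2f} ms to {max(deltas)*1000:.2f} ms")
        # near-identical numbers mean CFR; a wide spread means VFR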

    What version of Hitfilm are you using? Express 2021.3 and earlier have a resolution cap of 3840x2160. Hitfilm Free 2022.1 and 2022.2 have a resolution cap of 1920x1080. Pro (and, I think, the Creator tier subscription) has no resolution caps.

    For wide-screen, instead of bars, you're better off just setting the Timeline to the desired resolution. Rendering a full 16:9 frame with bars is resource-wasteful (Hitfilm has to calculate all the pixels, only for a bunch of them to be hidden with black), and it locks your render at true 16:9. YouTube and any media player will cheerfully accept non-16:9 media, and the player will just fill out the screen with black for you.

    Creating as close as possible to the true aspect ratio future-proofs your work. I'm sure you've seen old SD "wide-screen" media that had hard bars added, so it plays at 4:3 with bars on all four sides? Creating close to the true aspect ratio also accounts for odd monitor/phone sizes, like a 21:9 monitor or the 17:1 phone I'm typing on.

    Obviously you set your vertical resolution by dividing the horizontal resolution by the aspect ratio. So, for 2.4:1 that's 1920/2.4=800, giving a 1920x800 resolution. Or 3840x1600 for 4k.

    Note that the way mp4 encoding works, you need a vertical resolution divisible by 8. For 2.4:1, hey! It works! 2.35:1 is an issue.

    1920/2.35=817.02 pixels. You can't have a fraction of a pixel, so you have to round, and 817 isn't divisible by 8 (817/8=102.125). You either go with 816 (probably no one will notice the missing pixel, and it's 2.353:1) or 824, which also probably won't be noticed but is actually 2.33:1. Go with 1920x816. For 4k it's 3840x1632, which is a 2.353:1 aspect ratio, or 3840x1640, which is a 2.341:1 aspect ratio.
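
    If you want to play with other ratios, the arithmetic is easy to script. A minimal Python sketch (the only constraint assumed is the multiple-of-8 rule above):

        def mp4_safe_height(width, ratio, multiple=8):
            # divide width by the aspect ratio, then snap to the nearest multiple of 8
            exact = width / ratio
            return exact, round(exact / multiple) * multiple

        for width in (1920, 3840):
            for ratio in (2.4, 2.35):
                exact, h = mp4_safe_height(width, ratio)
                print(f"{width} @ {ratio}:1 -> exact {exact:.2f}, use {width}x{h} ({width/h:.3f}:1)")

    That prints 1920x800 and 3840x1600 for 2.4:1, and 1920x816 and 3840x1632 for 2.35:1 - the same numbers as above.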

    Bear in mind you could create your comp at 1920x824 or 3840x1640 and use small bars to mask to your correct ratio, which reduces the "error" to a fraction of a pixel and is less wasteful than putting bars on a full 16:9 frame.

    Note plenty of films still shoot at a 4:3 or 16:9 aspect ratio and reframe in post. Depending on the camera/monitor you might have aspect ratio bars as a guide, or you can attach a cheap screen protector and marker in your own framing guides.

    Hope this helps!

  • Scarlet_Impaler Posts: 3 Just Starting Out
    Thanks. This helps a lot. I didn't even think about frame rates, and 1920x800 doesn't seem too far out of the norm. And I'm using Hitfilm Express 2021.3, so I should be good there.

    With that said, do you know if those point-and-shoot cameras in the $100 range (that can do video) have CFR? Or is that something I'll have to research per camera?

    With everything being ideal, is it as simple as placing the two clips at the two sides of the frame, with the middle just lining up properly? Or is there another editing method I need to use?

    Also, when it comes to the a two camera rig... should cameras line up like this:
    ||
    ||
    Or, like this:
    //
    \\

    I feel like the latter is the proper way to get that wider view, and the former would only get a bit more on each side. I'm sure I can model and 3D print some sort of brace to hold them both at the exact angles, if needed.
  • Triem23 Posts: 20,276 Power User

    Sorry, I moved to a different country this year, am renting, and had a lot of property viewings this week. Forgot to follow up with you.

    I have no idea if varied sub-$100 cameras record VFR or CFR. You'll need to check specs on specific cameras. That said, I think it's mostly phones and screen recorders that are VFR.

    Another caveat I forgot to mention. You'll absolutely have to color match your cameras. Two cameras of identical make and model will still have slightly different color response due to slight differences in the individual sensors. Even more annoying, as a sensor warms up while the camera is on, its color response will drift. I did a lot of broadcast work on multicam shoots, and even with $20k broadcast cameras there's a Tech Director (TD) on set whose job is to use an external camera control box to adjust the tint (yellow/blue), phase (green/magenta), pedestal (black), gamma (mid gray), and white levels to match before the shoot. During the shoot the TD will continue to adjust color as the cameras drift. You're going to have to color match in post.
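
    Just to illustrate the crudest possible match in post (a sketch, nothing like what a TD or a real grading tool does): sample the same neutral object - say a gray card both cameras can see - in a frame from each, and compute per-channel gains. Assumes numpy and 8-bit frames; all the names are made up:

        import numpy as np

        def match_to_reference(a_patch, b_patch, b_frame):
            # per-channel gains that make camera B's gray-card patch match camera A's
            gains = a_patch.reshape(-1, 3).mean(axis=0) / b_patch.reshape(-1, 3).mean(axis=0)
            return np.clip(b_frame * gains, 0, 255).astype(b_frame.dtype)

    Since the sensors drift over time, you'd re-sample every so often rather than trust one set of gains for a whole shoot.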

    Film needed color correction as well. Even two reels of identical film stock will have different color response, because it's an analog, chemical process. Even within a single roll of film, color will vary; since it's a slow, frame-by-frame drift, it's not noticed. An example of this can be seen in the 1982 film "Tron." The post work was done on kodaliths (basically a photographic paper) before optical compositing. Well, during post the kodaliths all got scrambled out of sequential order, and there were sudden flickers in brightness. This was covered up with (sometimes) additional light effects and/or sound effects, as if there were a minor power fluctuation in the digital world, but it was really desperation fixing of an "oh, crap!" error.

    OK, back to your brackets. I've never done this, so, from here on out I'm speculating.

    Two 16:9 cameras side by side give potential 32:9, or 3.555:1, coverage - much wider than 2.4:1. You'll want some overlap so you can blend your frames together with a feathered mask (which will also help hide color drift). You could leave the overall coverage wider than 2.4:1 to give yourself some reframing room in post.
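
    Inside HitFilm you'd do that blend with a feathered mask on the top layer, but to show the idea in code - a minimal numpy sketch, where left and right are same-height frames from the two cameras and overlap is the number of columns you've chosen to share at the seam (all assumptions):

        import numpy as np

        def stitch_wide(left, right, overlap):
            w = left.shape[1]
            ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across the seam
            seam = left[:, w - overlap:] * ramp + right[:, :overlap] * (1.0 - ramp)
            return np.hstack([left[:, :w - overlap],
                              seam.astype(left.dtype),
                              right[:, overlap:]])

    In practice you'd crop each frame first so only a narrow strip overlaps, then feather just that strip - e.g. two 1920x1080 frames landing at 2592x1080 (2.4:1) share 2x1920-2592 = 1248 columns of coverage, but you'd only feather a band in the middle of it.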

    How to angle the brackets?

    For these diagrams, the lenses are pointing toward the right side of this comment. To define our directions:

    Parallel:

    ||

    ||

    Tilt IN:

    //

    \\

    Tilt OUT:

    \\

    //

    No clue, but, since 3D shooting uses two cameras tilted IN - and the point where the two cameras' sightlines converge is the "surface" of the screen - my guess is you'd want to try for either parallel or tilted OUT a bit. Tilting IN will likely give parallax errors: anything in front of or behind the point of convergence won't line up. Again, I haven't done this, but I think you'll have fewer parallax issues with a parallel or tilt-OUT mount. Just make sure you've got some overlap on your captures so you can soft-mask. That's the key thing.
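
    For a feel of why parallax bites you, here's the back-of-envelope math for the simple parallel case - the pixel shift of a subject between the two frames falls off with distance (all numbers invented):

        def disparity_px(baseline_m, distance_m, focal_px):
            # pinhole-camera disparity: focal length (in pixels) x baseline / distance
            return focal_px * baseline_m / distance_m

        # e.g. lenses 10 cm apart, roughly 1500 px equivalent focal length:
        for d in (1, 3, 10, 100):
            print(f"subject at {d:3d} m -> {disparity_px(0.10, d, 1500):6.1f} px shift")

    Distant backgrounds (1.5 px at 100 m) basically line up; anything near the rig (150 px at 1 m) won't, which is exactly what the overlap and soft mask have to absorb.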

    Now. After all this discussion I hate to kinda step on your idea, because I think it's really neat you're trying this old technique (and would want to see your tests if it works), but why do you want to do this?

    Film moved away from dual cameras to anamorphic lenses, which squeeze the image as it's recorded to fit a 2.4:1 picture onto a 4:3 negative. A second anamorphic lens on the projector re-stretches the image when projected. Obviously an anamorphic lens is easier than messing with two cameras, and anamorphic lenses are still used in digital filming.

    Another option - especially since you're OK with 1080p output - is simply to record 16:9, stick a screen protector on your camera's view screen, draw in a 2.4:1 guide for framing, and just edit at 1920x800. This even gives you "extra" vertical space for a reframe in post. Shooting 4k (or higher) lets you do the same thing - add a shooting guide to your monitor - with even more reframing room. If you're shooting anything commercial, or shooting personal films with other people, this is just going to be faster than having a client or actor friend waiting around while you check the alignment of your rig, and it will save you a lot of time in post not lining up shots, masking, and color correcting. So it does beg the question: "Why?"

    If you're just having fun with the attempt or trying for a unique look - because you will get a different character of frame compared to an anamorphic lens or an overscan-and-crop - then, yeah, go for it! As I said, I'd like to see how it works out! It does seem like a lot of extra work. Then again, I'm biased. I worked video production for decades, do have a "time is money" mindset, and am almost always looking for ways to speed up/simplify a shoot - even if it's a personal/fun shoot (although sometimes you just gotta slow down and do the odd thing).
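
    P.S. If you go the overscan-and-crop route and ever want to batch-crop masters outside HitFilm, ffmpeg's crop filter does it in one pass. A Python sketch (file names are placeholders):

        import subprocess

        subprocess.run([
            "ffmpeg", "-i", "master_16x9.mp4",
            "-vf", "crop=1920:800:0:140",  # width:height:x:y - trims 140 px top and bottom
            "-c:a", "copy", "scope_2_4.mp4",
        ], check=True)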