@Triem23 HAHAHA I managed to find a bit of time to log in this week and was greeted by a wonderful joke. 20 years is a long time.
@TriFlixFilms it is. That last code was (I think) a bit of code to generate hypar structures in CAD 3D on the ancient Atari ST.
@Aladdin4d Blimey, go on a short break and come back to a flipping maths lecture, which is all well and good if it's relevant, but it doesn't fix the problem. That's not so much a sledgehammer to crack a nut as a crane and wrecking ball (with Miley Cyrus still attached) to knock the skin off a rice pudding.

However it might be doing it, it isn't doing it as well as it could. If the hardware (?) can't even draw a line between two points without the sort of wobbly geometry you used to get on early PS1 titles, then perhaps it should be done in software instead? A line is a line is a line, and adding texture to it is a piece of gateau, because you use yet another line algorithm to choose your pixels from the source texture. No wobbles, no wandering points. Also no matrices or quaternions, or quadrilaterals, or anything else more complicated than a couple of tight loops with add and jump-if-carry-set. Also... no perspective, unless you add that to the second algorithm, but that'd be trivially fast these days and you'd do it for every pixel instead of every 8 or 16, which would also fix the perspective 'popping' problem.

If the maths is just messing up where the ends go, then that's a different hovercraft full of eels, but as no staff have commented, who knows?

So, anyway, I thought I'd put together the project for people to play with and... found a new crash bug that happens 100% of the time when I turn on Motion Blur after the Quad Warp when using the 3-pixel-high .jpg image, instead of just the flickering that it has with the whole original image.

http://www.mediafire.com/download/170e34ykb3mvqvb/wobbly_ends.rar
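For what it's worth, the "couple of tight loops" idea can be sketched in a few lines. This is a hypothetical Python illustration (definitely not HitFilm's code, and the function name is made up): one interpolation steps a pixel at a time along the destination line, a second chooses each pixel's colour from a 1-D source texture, and the two endpoints land exactly where they were given.

```python
def draw_textured_line(x0, y0, x1, y1, texture):
    """Plot a straight line from (x0, y0) to (x1, y1), sampling colours
    evenly along a 1-D source texture -- the 'two tight loops' idea."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)  # pixels to plot along the line
    dx = (x1 - x0) / steps                      # per-pixel increment in x
    dy = (y1 - y0) / steps                      # per-pixel increment in y
    dt = (len(texture) - 1) / steps             # per-pixel step through the texture
    pixels = []
    for i in range(steps + 1):
        x = round(x0 + dx * i)                  # endpoints are exact:
        y = round(y0 + dy * i)                  # i=0 -> (x0,y0), i=steps -> (x1,y1)
        colour = texture[round(dt * i)]
        pixels.append((x, y, colour))
    return pixels
```

The point of the sketch is only that the end pixels are computed from the given endpoints directly, so they cannot wander.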
@Palacono Ease up there camper!
Anyway, if I'm right about the "how", then none of this is applicable:
"A line is a line is a line and adding texture to it is a piece of gateau, because you use yet another line algorithm to choose your pixels from the source texture. No wobbles, no wandering points. Also no matrices or quaternions, or quadrilaterals or anything else more complicated than a couple of tight loops with add and jump if carry set. Also.... no perspective unless you add that to the second algorithm, but that'd be trivially fast these days and you'd do it for every pixel instead of every 8 or 16, which would also fix the perspective 'popping' problem."
That's purely linear and actually has some pretty severe limitations (it's ok when everything is known beforehand but not much good in an interactive use case) whereas projective mapping is rational linear using a compound matrix. Linear interpolation just flat out won't work.
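The "rational linear" distinction can be shown with a small hypothetical example (plain Python, illustrative only): plain linear interpolation of a texture coordinate ignores depth, while the projective version interpolates u/w and 1/w and then divides, so the two disagree the moment the endpoints have different depths.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def affine_u(u0, u1, t):
    # plain linear interpolation of the texture coordinate
    return lerp(u0, u1, t)

def projective_u(u0, w0, u1, w1, t):
    # rational linear: interpolate u/w and 1/w separately, then divide
    return lerp(u0 / w0, u1 / w1, t) / lerp(1 / w0, 1 / w1, t)
```

With equal depths at both ends the two give the same answer; with different depths the projective result is pulled toward the nearer endpoint, which is exactly the perspective foreshortening the linear version cannot produce.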
@Aladdin4d "Linear interpolation just flat out won't work."

Eh? It does work. What doesn't is what we have now. What do you mean by "it's ok when everything is known beforehand but not much good in an interactive use case"?

Sure, the "how" is speculation by both of us, although it looks like what happens when you draw lines by plotting them a pixel at a time - if the GPU can be made to do that. But if you're telling me we can't plot the ends of the lines in the right places because of quadrilateral voodoo, then I thumb my nose in your general direction. And I've barely looked at Bezier Warp yet, which is probably 4x Quad Warps stitched together... although it was useful to add something to an environment map to make it look convincingly bendy.

Anyway... Quad Warp + Motion Blur on thin textures = 100% crash. Unless it's my GPU/CPU combination; can anyone else confirm that from the project file in my last post?
"Eh? It does work. What doesn't is what we have now. "
First, I was talking about projective mapping, and if that is the case then linear interpolation won't work, period. Even if I wasn't talking about projective mapping, linear interpolation still isn't a solution. It would work in some specific cases but fail miserably in others.
"What do you mean by 'it's ok when everything is known beforehand but not much good in an interactive use case'"
What I mean is linear (or affine) mapping only has 6 degrees of freedom vs 8 degrees of freedom for projective mapping. This is ok for triangle-to-triangle mapping, and you can even do a rectangle to a parallelogram, but you can't do a rectangle to a quadrilateral directly. If everything is known beforehand you could conceivably break a quadrilateral up into triangles and parallelograms and solve for each, but that isn't good for an interactive use case. Because you only have 6 degrees of freedom, you can only define three points in the destination plane at a time, and you need four points to define a quadrilateral. This means either making the user manually define everything as parallelograms and triangles first, or making everything else 10 times more complicated than it has to be, with the odds of internal edges not matching going up tremendously. Another major issue with this approach in general is it really, really wants to bend horizontal and vertical lines.
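The 6-degrees-of-freedom point can be made concrete with a tiny sketch (hypothetical Python, with made-up coefficients): once three corners of a square have been placed by an affine map, you get no say over the fourth - it always lands so that the result is a parallelogram.

```python
def affine(M, p):
    # affine map: x' = a*x + b*y + c, y' = d*x + e*y + f
    (a, b, c), (d, e, f) = M
    x, y = p
    return (a * x + b * y + c, d * x + e * y + f)

# arbitrary example coefficients (6 degrees of freedom)
M = ((2.0, 0.5, 3.0), (0.25, 1.5, -1.0))

# unit square corners; note that D = B + C - A
A, B, C, D = (0, 0), (1, 0), (0, 1), (1, 1)
tA, tB, tC, tD = (affine(M, p) for p in (A, B, C, D))
```

Because an affine map preserves point combinations whose coefficients sum to 1, T(D) = T(B) + T(C) - T(A) for any choice of the six coefficients: the fourth destination corner is never free, which is exactly why rectangle-to-arbitrary-quadrilateral is out of reach.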
For the use case, an interactive quad warp, you need at least a bilinear method giving 8 degrees of freedom, like a projective mapping. With bilinear mapping you can do a square to a quadrilateral fairly easily, but you need to do that twice and compose the results to be able to do a rectangle to a quadrilateral, and the composition of two bilinear mappings is no longer bilinear but biquadratic. The inverse mapping needed to get something on screen isn't bilinear either, and it's multi-valued, making it complex with a high computational penalty (multiple quadratic equations). Unlike linear mapping, this method inherently preserves horizontal and vertical lines and equidistant spacing, but diagonals? Well, they get curved into parabolas.
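A minimal sketch of the square-to-quadrilateral bilinear mapping described above (illustrative Python; the corner layout is an assumption, not HitFilm's): all four corners are hit exactly thanks to the 8 degrees of freedom, but the u = v diagonal is quadratic in u, which is the "diagonals get curved" behaviour.

```python
def bilinear(u, v, quad):
    """Map (u, v) in the unit square onto quad = (A, B, C, D), where
    A, B, C, D are the images of (0,0), (1,0), (0,1) and (1,1)."""
    A, B, C, D = quad
    wa, wb, wc, wd = (1-u)*(1-v), u*(1-v), (1-u)*v, u*v
    return (wa*A[0] + wb*B[0] + wc*C[0] + wd*D[0],
            wa*A[1] + wb*B[1] + wc*C[1] + wd*D[1])
```

For a non-parallelogram quad, the image of the diagonal's midpoint is the average of the four corners rather than the midpoint of the straight A-D chord, so the mapped diagonal visibly bows.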
"Sure the "how" is speculation by both of us, although it looks like what happens when you draw lines by plotting them a pixel at a time"
Partial speculation on my part. You're right in that I don't know the exact method but I can say with absolute certainty it involves some form of "quadrilateral voodoo" because that's the nuts and bolts computer science needed to accomplish the task. The problem with going solely by what you see on screen like "drawing lines by plotting them a pixel at a time" is that's the final step and is ignoring 95% of the process. Everything I've described or something similar to it has to happen before there can even be a texture in texture space to be read and drawn on screen.
It should also be noted for completeness that part of the issue is one of the most basic things about the process: you are mathematically projecting images onto a fixed-resolution grid. Image scaling is always a little bit of voodoo (images only scale down cleanly to 50% or 25% resolution; 33.333...% or 66.666...% are the painful ones because of the non-repeating infinite decimal). There's a lot of software figuring out on the fly what colour a pixel is going to end up as when the final projection contains elements of many source pixels. Hence anti-aliasing and other solutions that basically blur to hide the problem.
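The 50%-is-clean point can be made concrete with a small sketch (hypothetical Python, assuming the usual centre-aligned sampling convention): at 2:1 every destination pixel centre falls exactly halfway between two source pixels, so a clean average exists, while an awkward ratio lands at fractional positions that force uneven blending of neighbouring pixels.

```python
def source_positions(src_w, dst_w):
    """Source x-coordinate each destination pixel centre maps back to,
    using the centre-aligned convention: src_x = (i + 0.5) * ratio - 0.5."""
    return [(i + 0.5) * src_w / dst_w - 0.5 for i in range(dst_w)]
```

Scaling 8 pixels to 4 gives positions 0.5, 2.5, 4.5, 6.5 (always exactly between two source pixels); scaling 10 pixels to 3 gives 7/6, 4.5, 47/6, so most destination pixels must blend unequal fractions of their neighbours - the "voodoo" part.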
So, whatever method is being used to do the projection, the weak part of the equation is always going to be the render code.
@Aladdin4d I know there is more than one way to skin a cat, but what you're describing sounds like overkill for what should/could be a relatively simple task, if you wanted it to be...

OK, let's forget textures and take a simple white line and draw it on the screen. The two knowns are the end points, and even if they're only rendered to whole pixel positions, there should be no issue with them wobbling away from the points they are following/parented to, because they're following the same rules too. Sure, there would be the issue of sub-pixel positioning if you're interpolating a single pixel's movement over 3 frames and you want to plot at 100.33, then 100.66, then 101, but let's just throw away any decimals; whatever is happening there should be the same as for anything else (a plane, another single pixel, a tracked position) parented to the same point, and they would all make the same 'mistakes' in positioning in the final render. And to simplify things, I'm ignoring anti-aliasing and assuming the final render is 1:1 with the pixel coordinates. Any of that would apply to everything, so they'd still all move together if AA was applied or it was scaled.

That's just not happening with Quad Warp.

Now, it should be possible, because using Lightning to make a line - by removing all the branches and wiggles, making it thin and parenting the two ends to the same points - works perfectly: it follows the points BL and BR and stays locked to the corner of the small white plane, which is also following the BR point. It also works with Text. So apparently anything else is capable of being parented to a point and staying attached to it except Quad Warp (and possibly/probably Bezier Warp).

But a line is not a quad, you say? Well, draw more lines between TL and TR, then TL and BL, and TR and BR, and you have an outline around the phone's screen, which is sort of a start and certainly something to compare the Quad Warp's outline to.
All 4 of these lines follow the points correctly, and better than the 4 corners of the Quad Warp texture. If you created 100 lines and interpolated the positions of their end points from TL>BL at one end and TR>BR at the other (be nice if you could use the output from one calculation as the input for another, wouldn't it?), then you'd have a Venetian blind effect between the outline you created (because more than 100 lines are needed to cover the phone face, so there would be gaps). Do enough lines to have no gaps and you have a 'manual' version of Quad Warp... for a single colour. Sourcing the correct pixels to plot on each line from a texture isn't a big additional step, code-wise, for a Perspective Off version: it's literally just linear interpolation across the source texture, using the destination line's length to calculate the step size.

The ends of all of those lines (interpolated from their accurate starting corner points) stick to the phone's screen more accurately than the edges of the Quad Warped texture, because the Quad Warp's interpolation is between its 4 corners, which are moving about, so it cannot avoid having everything else move with them. If you do what I just said with the 4 lines, you can see the edges of the Quad Warp wiggling away from the lines, but the lines themselves look like a stuck-on part of the original background video. I want Quad Warp to look like that. Not much to ask, is it?

So I don't think it's necessarily the render code that's at fault, as it may be accurately rendering the resultant quad to the wrong coordinates it gets supplied with. Accuracy is obviously reduced on the corner positions somewhere (see how it requires a change of 3 pixels in BR before the Quad Warp corner bothers to update in the video), which then causes everything else to fall down. But the mistake could be made well before it ever has to render the result, or it could be given the coordinates accurately and then mangle them internally.
No way to know. It just needs sorting. @Triem23 the white test plane, lightning lines and some text all manage to be positioned and drawn correctly, even though they'd have the same issues calculating sub-pixel positional accuracy. Just Quad Warp's out of step.
"I know there is more than one way to skin a cat, but what you're describing sounds like overkill for what should/could be a relatively simple task. If you wanted it to be... "
Sorry you think it's overkill or meaningless garbage, but reality check: when I said fundamental I really did mean absolute rotgut, ultra-basic, ultra-simplistic fundamental building blocks. If we were talking about genetics, these techniques would be amino acids that join together to form a DNA sequence that produces a specific genetic trait. You might think they're ridiculous or useless, or that I'm an idiot, and that's ok, but before you keep thinking that, take a moment of your time to Google fundamental image warping. You're going to see these same fundamental techniques over and over and over and over. There's no escaping them, and even if you didn't realize it, you've been utilizing them for decades, like when programming a Sega Saturn emulator (your description sounds like linear/affine mapping to me, BTW) or playing a game or doing anything that needs a perspective manipulation.
Skipping ahead to......
"I want Quad Warp to look like that. Not much to ask, is it? "
Well it just might be too much to ask I don't know. My issue, the same one that started this whole mess is I don't think you can make a relevant or meaningful comparison to a warp using simple lines or planes. It's like trying to compare an Astra diesel to a Ferrari 430 and saying they should perform identically because they both happen to be on the same road.
Oh, I know there's always a mathematical model for something. I fairly recently threw away my stack of graphics books with all the equations and ray-traced images of chrome balls reflecting checkerboard patterns etc. from about 30 years ago. It all seems overkill considering my original line algorithm used about a quarter as many machine code bytes as there are letters in this sentence, and the ends stayed where I put them. Yep, it was affine mapping. If using more complex maths produces an inferior result decades later, then something is definitely up.

Aside: the Sega Saturn did its Quad Warping in the dumbest way ever. It drew every pixel of the source texture onto the destination quad, which meant that a 512x512 source would overdraw itself dozens of times to produce a 32x32 (or smaller) screen quad, which was also slooooow. The suggested "solution" was to also have versions at 256x256, 128x128, 64x64 etc. and choose the relevant one to try and minimise the overdraw to be closer to 1:1, although you'd be lucky to get better than 3:1, as you couldn't afford to have all the relevant sizes for every texture or you'd run out of memory, and the checks slowed everything down too. I didn't do that in my emulator, as the proper method is to iterate over the destination quad's pixels and choose the source texture pixels, plotting each pixel only once; but Sega were in a hurry to get something out the door after the Genesis/32X fiasco. One positive, though: the Saturn's quad's ends also stayed where you put them on the screen. No wobbling...
"It all seems overkill considering my original line algorithm used about a quarter of the machine code bytes than there are letters in this sentence and the ends stayed where I put them. Yep, was Affine mapping. "
Regardless of the code you used, this is what you were actually solving: the affine mapping x' = a*x + b*y + c, y' = d*x + e*y + f, fitted from three point correspondences.
This yields 6 equations in 6 unknowns, but it simplifies into two 3x3 systems, one for x and one for y. At first glance this is simpler than projective mapping with its 8 equations in 8 unknowns, but there are other things to consider before making that assumption. This method usually needs more computationally involved resampling and filtering than the others. Once you take that into consideration, it doesn't look quite as appealing.
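The "two 3x3 systems" can be shown directly (an illustrative pure-Python sketch; the helper names are made up): the x coefficients (a, b, c) and the y coefficients (d, e, f) come from the same 3x3 matrix of source points, just with different right-hand sides.

```python
def solve3(rows, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    aug = [row[:] + [b] for row, b in zip(rows, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]       # partial pivot
        for r in range(3):
            if r != col:
                f = aug[r][col] / aug[col][col]        # eliminate column
                for c in range(col, 4):
                    aug[r][c] -= f * aug[col][c]
    return [aug[r][3] / aug[r][r] for r in range(3)]

def affine_from_triangles(src, dst):
    """Fit x' = a*x + b*y + c and y' = d*x + e*y + f from three point pairs:
    6 unknowns split into two independent 3x3 systems."""
    M = [[x, y, 1.0] for x, y in src]
    xs = solve3(M, [x for x, _ in dst])   # solves for (a, b, c)
    ys = solve3(M, [y for _, y in dst])   # solves for (d, e, f)
    return xs + ys
```

Note that both solves share the same left-hand matrix; only the right-hand side changes, which is exactly why the 6x6 problem decouples.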
Another thing to consider is the use case. Any quad warp method for HitFilm must be able to do rectangle and/or square to arbitrary quadrilateral (ideally arbitrary quadrilateral to arbitrary quadrilateral too). Affine mapping can do a lot of things like scale, rotation, shear and even some translations, but one thing it just cannot do, without a lot more passes and user intervention, is rectangle and/or square to quadrilateral. This is why the world has bilinear and projective mapping. I would imagine those two methods turned up about two weeks after affine mapping was figured out. In the end, projective mapping is only slightly more complex than affine mapping, and code-wise all three are about the same.
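And the projective version, for comparison (again an illustrative pure-Python sketch, not HitFilm's code): 4 corner correspondences give the 8 equations in 8 unknowns, and solving them recovers a mapping that really does take a square to an arbitrary quadrilateral.

```python
def solve(M, rhs):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(rhs)
    aug = [row[:] + [b] for row, b in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                f = aug[r][col] / aug[col][col]
                for c in range(col, n + 1):
                    aug[r][c] -= f * aug[col][c]
    return [aug[r][n] / aug[r][r] for r in range(n)]

def homography(src, dst):
    """8 equations in 8 unknowns: two rows per corner correspondence,
    from x' = (a*x + b*y + c) / (g*x + h*y + 1) and similarly for y'."""
    M, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); rhs.append(xp)
        M.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); rhs.append(yp)
    return solve(M, rhs)

def apply_homography(h, x, y):
    """Rational linear: evaluate the two numerators and divide by w."""
    a, b, c, d, e, f, g, hh = h
    w = g * x + hh * y + 1.0
    return ((a * x + b * y + c) / w, (d * x + e * y + f) / w)
```

The division by w at the end is the "rational" part: it is what the affine version lacks, and what buys the two extra degrees of freedom.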
All three methods eventually end up inferring values somewhere along the chain, but projective a little more so than the others. This is where I'm about 85% sure the problem lies: when values are inferred during the warping process, long before anything gets drawn to the screen. The other possibility is the constraints being imposed by the 4 points of the quadrilateral. You're looking for 1:1 movement of one of those 4 points with another point, but instead are seeing a 3:1 ratio. It's possible the ratio is 3:1 to account for, or because of, the other three points of the quadrilateral in some way. In short, I don't think the 3:1 ratio is accidental.
@Aladdin4d "It's possible the ratio is 3:1 to account for or because of the other three points of the quadrilateral in some way. In short I don't think the 3:1 ratio is accidental."

Did you watch the video? If the point you're following moves by 3 pixels before you update (on the Y), then that's just hopeless, whatever the method. And I've since parented them all to the same coordinate, so why aren't they even moving together?

I was going to see if it did the same in the X axis, to see if there was a 3x3 dead spot around the ends, and found what I should have found before if I'd looked further: the 4 corners are drawn differently - as well as moving differently - even though they're parented to the same point. It looks like it's drawing very thin triangles to simulate lines, and whatever else the fundamentals say (remembered one of those books I threw away: Fundamentals of Interactive Computer Graphics), it needs to be drawing lines to produce the curves you get when it folds on itself.

It also looks like they are drawn as right-angled triangles ABC, with A as the apex, AB on the left and C on the right. What this produces in the 480x3 quad is a sharp bottom-left corner B that moves in whole pixels, in X and Y, but it takes two pixels of movement in the parented point before it moves in either X or Y, so the dead zone is 2x2 for B. But for A, it is drawing anti-aliased sub-pixels on every pixel change, so the height of the quad varies between 2 and 3 pixels, with the top edge gradually alternating from hard to soft, then hard again. At the C end on the right, it's similar but slightly different: the Y axis changes produce whole-pixel changes at 3:1, as shown in the video, on the bottom of the quad, but the top of the quad is again showing sub-pixel anti-aliasing, and it alternates between a soft and hard edge. It looks a little softer than at A, presumably because there are fewer pixels being drawn/stretched at the tip.
The difference is in what it does for X changes at C. That also produces sub-pixels smeared across the changes, and as the coordinate changes from xx.1 to xx.9 the tip of the triangle is smeared across a pair of pixels the whole time, with 1/10 to 9/10 of the texture visible in the end pixel until it skips onto the next whole-pixel boundary. It almost never looks sharp and is always moving. The texture is so thin that the triangles used to draw it are clearly visible: the base is straight, the corner B is drawn at whole-pixel boundaries, the tip A is a little softer and the tip C is very soft, with what would be the TR corner of a quad the softest of all.

Using triangles is expected, but it doesn't explain the 2x2 dead zone around the point B, which is just making things worse. I wonder if alternating the directions or orientation of the triangles throughout the Quad Warp would produce sharper results - once the coordinate accuracy issue is fixed. E.g. drawing two right-angle triangles, one above the other with AB at the left and C at the right, is always going to give a sharper left side and bottom, with a saw-toothed right side. But one with AB and C on the bottom and another with C and BA above it, so they combine to form a rectangle, would give you 4 sharp edges. It doesn't appear to be doing that on this very small texture, even though it can't be more than 2 or 3 triangle-lines high. I wouldn't mind if it took longer to render something like that if the results were better.

Here's a little test project to see what's going on. All 4 corners are parented to BR, with offsets to spread it across the bottom of the phone display. If you zoom the display right into the bottom-right corner, you can manipulate the point BR directly in X and Y and see how the two ends BR and TR change the quad's behaviour. If you want to see the other end, manipulate 'BR Handle', which BR is parented to, as you can't see BR directly: it's off screen when you're zoomed in to see BL and TL.
Wobbly Ends 2 Project
This is short and sweet: the crash bug I mentioned previously.
Hey @Grumpy! @Palacono! Focus!
Notice it's now your white plane that's having problems when BR is moved. Quad Warp, while still not absolutely perfect, is much improved.
@Aladdin4d That is better. OK, I suppose you're going to tell me that's how it works in the update that came down on the day I originally posted about the problem, which I've not updated to because: a) I thought it unlikely timing that it would be fixed when I hadn't complained yet, and b) I only update periodically, when I remember and after there is confirmation that an update doesn't break something else. Been caught too many times before (in other programs). (Rassen frassen updates. Rassen frassen timing.) Well, I've got to shoot now, so will have to update everything and check this out later.
Did it also fix the Motion Blur flicker/crash bug? That really would be a coincidence! Also, have they quietly fixed Export Frame applying AA? So many things to check on later.
@Aladdin4d OK, I've updated HF4 Pro and Express to the latest versions and... it's still broken and not like yours at all; so what gives? GPU card/drivers? If so, that's going to be tricky...
@Palacono I don't want to tell you what gives because then I would have to explain it and end up typing 150,000+ words
@Aladdin4d What's to explain? It's an effect that either works or it doesn't. Neither Perspective On/Off nor running Pro or Express makes any difference for me. Still different on each corner, even when parented to a single point. See video. If it works on your machine and you're running the same version as I (now) am, then the only other variable I can think of is different GPUs.
GPU is a big enough variable. Since Hitfilm is rendering based on Open GL calls, logically the quality of the GPU's hardware rendering and Open GL implementation will have massive potential effects on render quality
@Triem23 Well, that's a pain in the backside if it is the case, because my video above is clearly nothing like @Aladdin4d's. I wonder what else is affected.
First part - The white plane appears to be jittery now because of the scaling factor of the viewer. As you adjust the scale of the viewer up and down you'll notice the movement ratio change too.
Second part - Remember when I mentioned constraints? Well if I'm right then easing the constraints should help. Makes sense to me anyway but how in the heck do you do it? My theory was if you could feed Quad Warp a boundary of null values it would ease any constraints on the remaining non-null values. I tried it by embedding the pic to be warped in a comp, masking the entire layer then setting Feather to In with a value of 4 pixels with the worst case being it might help, well, mask the issue if nothing else. When you try it you'll notice it does more than that as corner/point movement is more stable and fluid across the board. Like I said earlier, not perfect but much better.
Side notes - When you're doing something like this after you apply Quad Warp duplicate that layer, move it to the bottom of the stack, hide it and drop in a new grade layer just above it. Now add Set Matte to the phone video layer and choose the grade layer as the source layer and in this case, Alpha as the source. This will show you where you're "off" a little clearer and you can use it if you need to place the warped layer below in order to hide some things. Adjust the corners on this layer then copy Quad Warp back to the layer you want to use.
Since a modern phone screen is glass, I tried adding Caustics to the warp layer with a Depth of 20 and a Refractive Index of 1.51. I think the refracted edges give it a nice sense of depth. I also just used the phone layer as an environment map, mostly because I didn't take the time to hunt down something else to use. It could use a little more tweaking and a different environment map, but I do like the effect.
Just rendered but it should be available soon.
P.S. I didn't have the time to post this all earlier and I'm sorry but I couldn't resist........
"GPU is a big enough variable. Since Hitfilm is rendering based on Open GL calls,"
Not really. For 3D model rendering then certainly yes. For everything else the code is a pure Hitfilm invention and should be GPU invariant.
This stuff is 99.9% likely implemented as OpenGL fragment shaders. These are not API calls to draw. The GL shader language is a programming language much like CUDA and OpenCL. Fragment shaders are intimately tied to the frame buffer and its pixels. Shader kernels are how you get the speed of massive parallel execution via the GPU on these types of pixel functions.
If you want to warp the geometry of a GL object, then you can use a vertex shader. It is intimately tied to the geometry buffer of an object. The warps in the Boris effects in Hitfilm are an example of Vertex shaders in action.
This is a big difference between OpenCL/CUDA and OpenGL. GL is intimately tied to graphics. CL/CUDA are to the GPU compute ALUs as C/C++, C#, Modula-2, Pascal, Ada95 and so on are to your CPU: completely generic.
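In CPU terms, a fragment shader is essentially a small function invoked once per pixel of the frame buffer, with the GPU running thousands of those invocations in parallel. A toy software analogue (hypothetical Python, just to illustrate the execution model described above):

```python
def run_fragment_shader(width, height, shader):
    """Invoke a per-pixel kernel over a frame buffer. On a GPU, the 'loop'
    here is thousands of shader invocations running in parallel."""
    return [[shader(x, y) for x in range(width)] for y in range(height)]

def checker(x, y):
    # a trivial kernel: 2-pixel checkerboard, like a one-liner GLSL shader
    return 1.0 if (x // 2 + y // 2) % 2 == 0 else 0.0
```

The key property is that each pixel is computed independently from its coordinates alone, which is what makes the massive parallelism possible.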
@Aladdin4d It does look more realistically part of the phone's screen, especially the little reflected edge at the bottom.

I know the viewer scale makes a slight difference to how smoothly the points are apparently followed, because the position markers are drawn at the screen's resolution - they stay the same size - and are drawn at a higher accuracy than the centres of the pixels they represent when you zoom in. After all, they are held to 3 decimal places (maybe more internally, but that's all we see), so their smoother movement is more visible when you zoom in; at 800% zoom they can move by what looks like 1/8 of a pixel. But the white plane, some text and the lightning lines I drew around the edge all followed each other perfectly, and within a pixel of the BR marker. If you edit out the decimals and just increment it manually, you'll see that happening. I can upload that project if it would help, or you could just add the bits yourself. I used a large letter 'T' for the text, but no need to do the lines: they move in exactly the same way.

It's funny that your method of smoothing out the drift at the edges is exactly what I'd done previously for HitFilm 3 Pro's Quad Warp, because that was truly wobbly; but I'd done it mainly because HF3P's Quad Warp had no AA applied, and the edges were really jagged if the texture was at any kind of an angle and stuck out like a(n even) sore(r) thumb. I'd also feather-masked it by 4 pixels, as just the right amount to blend in without losing too much of the image or being obvious. I thought that was all behind us, but... It doesn't help with the weird way Perspective On 'pops' as the angle changes on a turning surface, though. Oh well, one thing at a time.

The Set Matte idea is something I hadn't thought of, though. Not sure I followed what you were doing entirely, as I haven't tried the steps yet, but thinking at the keyboard and guessing at your image with the hole in it: could you (are you?) be using a version of the Quad Warp to cut out a hole in the background image (feathered mask too?), then placing a slightly larger Quad Warp behind the cutout, so the wobbly edges are hidden by the border around the hole in the original video and only the centre portion shows through the hole? I'll have to try that. Or whatever it was you actually did, if that was something else.
@NormanPCN Thanks for the info. I was worried I'd need a different GPU, but above you'll see Aladdin4d revealed he was pulling my leg. I probably deserved it, having harped on about this for a couple of weeks now.
"I thought that was all behind us, but... Doesn't help with the weird way the Perspective On 'pops' as the angle changes on a turning surface though. Oh well, one thing at a time."
Actually it might but I've never had that happen to the extent you have and haven't tested it yet. In fact on the Tower Turn project render I put up sometime back you mentioned there was still some pop and I forgot to tell you that what you're seeing isn't in the project or the uncompressed render. It's only in the YouTube render. Double and triple checked that because that's just weird.
"could you (are you?) using a version of the Quad Warp to cut out a hole in the background image"
Yes that's exactly what I'm doing.
"then placing a slightly largerQuad Warp behind the cutout, so the wobbly edges are hidden by the border around the hole in the original video and only the centre portion shows through the hole?"
I did not do that; my Quad Warp is on top, but yes, you could do exactly that, which is why I brought it up. A side benefit was not having to use spill removal, after some careful adjustment and applying Caustics.
@Palacono"I was worried I'd need a different GPU"
Of course, there can always be bugs, even if a fragment shader is 100% certifiably bug-free. In GL you pass the source code of a shader kernel to GL, so the driver needs to compile that code for the GPU. Like anything, a compiler can have a bug.
@NormanPCN Cheers, I think we know it's probably a universal problem now.

@Aladdin4d I've had another idea... well, an extension of yours, TBH. Although masking around the edge is a fairly effective way to reduce the more obvious wobbles, we're still at the mercy of the original shape, and where it wobbles a lot, it'll still wobble a lot; it'd just be slightly less obvious. In your video, the top edge was still flickering very slightly on the left side. Now that could be that we're down to worrying about the accuracy of the original tracking, and a slight expansion of the quad would have hidden it, or spill removal (the green thumb), or both; but I thought of another, possibly better way...

How about an accurate way to slice the edges off the quad first, that would also apply a nice AA edge to it and that followed the points perfectly? Answer: Lightning lines. I've been using them to show where the edge should be; I could just as easily (well, not quite yet... I'll come to that) use them to trim the edges of the Quad Warp and adjust the tracking markers to account for the 0.5-1.0 pixels that are sliced off. It would also improve the overall accuracy, as there was no 'dead zone' around the corners for the lines; so if the quad doesn't move and lags behind the image, it's going to get the edges guillotined anyway, to look like it does.
And because the lines have a nice AA edge to them, they'll produce a nice AA edge on the quad. On some textures it might be more evident that the centre didn't move - although with the weird way things are happening, it will probably be drawn differently, so it might appear to - but I suspect that wobbly edges and corners are easier to spot for most people (rassen frassen centre Perspective On 'pop' aside), and it would certainly be an improvement until FXHome do the real thing.

Only thing is, I'm a bit fuzzy on the order of operations, and on combining things to use a line to chop off the edge of the Quad Warp - probably involving an embedded comp, Set Matte on the outline and a copy of the tracked corner points, etc. - so I'm only there in theory at the moment, and I'm off out in a little while, so I'll come back to actually playing with that later.

Meanwhile, here's the project file using the lines (and text) to show where the edges should be. Feel free to experiment. Wobbly Quad with Straight Lines Preset

Anyone else reading this (just us three or four, I'd have thought, but the view counter goes up about 50 a day) who wants to be able to draw lines: save the Lightning effect as a 2D preset called Lightning Line and just apply it to a simple black plane above whatever you want to draw a line on. Adjust width (both ends, or just one for wedgy lines) to taste. It's not good enough to upload to @InScapeDigital 's page, but useful all the same. You can also do something similar with the Lightsword effect, but that requires an add-on for Express.

Also, to expand on what I think it's doing when it draws the quad, based on the nice solid bottom-left corner that only moves in whole pixels (well, every 2, so it still sucks) and the very soft and wobbly right side: I think it's being drawn a little like the top image, and it would produce better results if it was more like the bottom one.

http://i854.photobucket.com/albums/ab106/pickaname2/quadshape.jpg
@Aladdin4d Actually, I couldn't wait, as I thought it might be quick to do and, Woohoo! it works! Nice edges and following the points. Could feather it more with slightly thicker lines perhaps - these are 0.5 pixels wide - but it makes all the difference. You could add your Caustics and Environment Map onto that and it would look close to perfect.
@Palacono You're right, the top left was still flickering in one generation of the video, but that was because I forgot to adjust the upper left corner before rendering. Oops!
That particular flicker was from a plane underneath the phone video layer. I put it there to highlight stupid mistakes on my part, so I'd know when I needed to adjust something; it just didn't work out that way.
Anyway, we're kind of on the same page here, because I did start looking at other ways to get to the same place and cut down the pixel width, and found out that any cutting, regardless of type or width, makes a huge difference. I didn't try Lightning though, and that's a great idea because it should work in pretty much any situation.
@Aladdin4d I experimented with adding Caustics with your values, and that added very jagged, un-anti-aliased edges to the Quad Warp - which was in an embedded comp, so I could use that to apply the Set Matte to in the main comp. But by changing the order so the Set Matte > Lightning Lines is done last, those jagged edges were guillotined away again.

Now, we still have the Perspective 'pop' to deal with, as that was what alerted me to Quad Warp's problems initially, and that banner on the building doesn't even have an edge; it's Hue & RGB keyed away anyway. That's FXHome's problem, though.

With regard to your previous attempt with the tower + banner using Mocha, and you saying you didn't see the same perspective 'pop': are you using it with corner points in 2D and Quad Warp, or as a 3D camera and a static texture in the correct position? The latter will obviously not be moving the corners in the same way, just interpolating from a single set of values, which will look different. Also, if you remember, your video seemed to have Perspective On both times, rather than one being Off, so I wasn't clear about what your render method was.

With HF3Pro, I had to use the camera and projected plane method, because there was no corner point option in that version of Mocha Hitfilm, and even if you did it manually by attaching a Quad Warp to the 4 corners of the tracked plane, it was far too wobbly. My solution was to use the centre point only and orient and scale the texture plane manually, just like orienting the purple plane grid in mocha, with a few keyframes on the angle and scale, as required, to keep it in place.

Even if you are using mocha-generated Quad Warp corner points, because they're generated from the 4 corners of the tracked plane they may be slightly more smoothly consistent - with respect to each other - than 4 individually tracked points from inside Hitfilm, which will each have their own sub-pixel noise, which could exacerbate the visible wobbling.
Although it's clearly still there even when they all follow a single tracked point, so it's lose-lose whatever the method.
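As a rough illustration of that sub-pixel-noise point (not anything HitFilm or mocha actually exposes as a feature), a small moving average over each corner's keyframes would knock down the independent jitter before the points ever reach the Quad Warp. The helper and the sample track below are hypothetical:

```python
# Hypothetical sketch: smooth one tracked corner's (x, y) keyframes with a
# centred moving average, so four independently noisy corners jitter less
# relative to each other. Window size and data are made up for illustration.

def smooth(track, window=5):
    """Centred moving average over a list of (x, y) tracking keyframes."""
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        xs = [p[0] for p in track[lo:hi]]
        ys = [p[1] for p in track[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

# a roughly stationary corner oscillating +/-0.4 px: the averaged track
# stays much closer to the true position of x=100
noisy = [(100 + (-1) ** i * 0.4, 50.0) for i in range(20)]
print(smooth(noisy)[10])
```

The trade-off is lag: too wide a window and the corner starts trailing real motion, which is exactly the guillotine-the-edges situation again, so it only suits slow or near-static shots.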