A really nice case study on shooting Leonard in slow motion, presented by VFX supervisor Hugo Guerra. I came across it while watching a Nuke Studio course (also by Hugo), which goes into much more detail about the workflow and the whole production of Leonard, done in Nuke Studio.
Jurassic World Behind the Scenes with Image Engine. Compositing supervisor Joao Sita explains the pipeline and overall workflow of creating the visual effects for Jurassic World.
Three tutorials from Escape Studios about the new features of some nodes in Nuke 9.
The new motion estimation algorithm in Kronos 2 improves the quality of your retimes, giving you a smoother warp with fewer artifacts and improved image reconstruction. NUKE’s OFlow tool has been further integrated and enhanced to give you new control over retime curves in the source time range, combining to give you concise, intuitive control over your speed ramps. Both features have been GPU accelerated.
Vector Generator & Motion Blur
Based on Kronos 2, the new Vector Generator produces improved-quality vectors from the new motion estimation algorithm, and comes with Blink GPU acceleration. MotionBlur 2 also gives you faster and more accurate results, as well as Blink GPU acceleration.
NUKE STUDIO and NUKEX’s Planar Tracker allows you to track areas in your image sequence that lie on a plane. You can quickly place new 2D elements on a flat surface, such as the face of a building, the floor or the side of a car, and have them automatically animated with correct perspective as required.
I just saw a film made by RocketJump Film School, and it’s so true!
An article from Nukepedia: 10 tips for optimising Nuke and creating efficient workflows, written by Scott Chambers.
1. B PIPE
Keep every layer operation piping into the B-pipe stream (your main branch). Among other benefits, this means you can disable a merge and the image stream will still flow. As for include/exclude mask ops like Shake’s popular ‘inside’ and ‘outside’: you will have to get used to using ‘mask’ and ‘stencil’.
2. BOUNDING BOX

Make sure you are optimising the bounding box on every element you have in the comp. If the image is full frame, take care it doesn’t grow larger than the full format (from blurs, transforms etc.), and if it is smaller than full frame, make sure the bounding box sits tightly around the element.
When merging, it is important to choose the ‘set bbox to’ option that is most optimised for what you are trying to achieve, with the smallest possible bbox as the paramount goal.
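Under the hood, the ‘set bbox to’ options boil down to simple rectangle maths. Here is a minimal plain-Python sketch of what ‘union’ and ‘intersection’ produce (this is illustration only, not the Nuke API; the `(x, y, r, t)` tuple layout is an assumption for the example):

```python
# Illustrative only: plain Python, not Nuke code.
# A bbox here is (x, y, r, t): left, bottom, right, top.

def bbox_union(a, b):
    # 'union' keeps everything from both inputs -> the biggest bbox
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def bbox_intersection(a, b):
    # 'intersection' keeps only the overlap -> the smallest bbox
    return (max(a[0], b[0]), max(a[1], b[1]),
            min(a[2], b[2]), min(a[3], b[3]))

fg = (100, 100, 400, 300)   # a small element
bg = (0, 0, 1920, 1080)     # a full-frame plate

print(bbox_union(fg, bg))         # (0, 0, 1920, 1080)
print(bbox_intersection(fg, bg))  # (100, 100, 400, 300)
```

For an over of a small element onto a plate, ‘B’ (the plate’s bbox) or ‘intersection’ usually gives the cheapest result to process downstream.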
If it is a CG pass, your 3D department should be rendering EXRs with bounding boxes built in, but if not, you can create them yourself. AutoCrop can analyse the zero-data pixels in a frame and draw a bounding box around the element. You then need to copy the AutoCrop data into a Crop node to make use of it.
When rendering out an EXR file sequence, of say a precomp of an element, you can check the autocrop option on the Write node. Bear in mind that this option only appears when you are rendering out EXRs. It is quite a slow process, as it consumes quite a bit of memory, but the beauty is that you only need to do it once: when you bring the sequence back in as a Read node, the bounding box is baked in.
3. CONCATENATING TRANSFORMS
Geometric transforms should concatenate to retain the integrity of your plates and elements. Why? Whenever you filter pixels (transforms, convolves/blurs etc.) you are approximating new pixels with filter algorithms that are essentially a visual cheat, and a cheat that degrades the image integrity, albeit normally ever so slightly; but if these degradations pile up on top of each other you start to see unwanted artifacts in your plates and elements. Concatenation means that the mathematics behind multiple transform nodes can be ‘folded’ into one operation. It is useful to have multiple transform nodes for the utmost control over transform operations, and the 3D environment in Nuke will concatenate with 2D transforms.

Say you wanted to move an element around but have independent control over movement in X/Y, scale and rotation. By splitting these operations into three transform nodes you get total control over adjusting, removing or just quickly disabling these now independent transforms. If Nuke didn’t concatenate the three transforms, you would be degrading the image with every transform. Luckily it does, but only if you follow the golden rule: keep transforms one after another and don’t ‘break’ them by placing colour correction nodes or merges between them! In Shake, a handy green line would appear connecting transform nodes to give you visual feedback that your transforms were concatenating; alas, Nuke doesn’t do this (yet? hopefully!).
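The ‘folding’ is just matrix multiplication: three stacked transforms compose into a single matrix, so the pixels are filtered once instead of three times. A plain-Python sketch of the idea (3x3 homogeneous matrices; this is the underlying maths, not Nuke code):

```python
import math

def mat_mul(a, b):
    # 3x3 matrix multiply (row-major lists of lists)
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def scale(s):          return [[s, 0, 0], [0, s, 0], [0, 0, 1]]
def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

# Three "transform nodes": rotate, then scale, then translate...
t, s, r = translate(10, 20), scale(2.0), rotate(90)

# ...fold into ONE matrix, so the image is only resampled once.
combined = mat_mul(t, mat_mul(s, r))

x, y = apply(combined, 5, 0)   # point (5,0) -> rotate -> (0,5) -> scale -> (0,10) -> translate -> (10,30)
print(round(x, 6), round(y, 6))
```

Inserting a Grade or Merge between the transform nodes is the equivalent of forcing the point (and every pixel) through each matrix separately, with a fresh filter hit each time.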
4. CARD3D

Use Card3Ds when you can, instead of cards in a 3D setup with a ScanlineRender. They are much, much faster to render, and if you only have a single card you are essentially doing the same thing.
5. BLUR INSTEAD OF DEFOCUS
Although Nuke’s Defocus node is pretty fast, a Blur beats it for speed, and you should only need the Defocus node when you want optical ‘bokeh’ effects (the blooming of highlights when defocused). Don’t use Defocus nodes on mattes, or just to soften images when you aren’t after that optical effect.
6. EXPOSURE = MULT
This isn’t really an optimisation, but remember that the Exposure node is only an RGB multiply, like the multiply in a Grade node; the only difference is that the parameters are on an exposure scale. It is handy if you are used to working in stops or printer lights, or if your supervisor or director has asked you to take it up or down a stop. There is no magic in there; you can just use a regular Grade node if you are colour correcting.
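The conversion is simple: each stop is a factor of two, so an exposure change in stops maps to a plain multiply. A quick sketch (assuming the usual convention that +1 stop doubles the value):

```python
def stops_to_mult(stops):
    # +1 stop doubles the exposure, -1 stop halves it
    return 2.0 ** stops

print(stops_to_mult(1))    # 2.0 -> same as a multiply of 2 in a Grade
print(stops_to_mult(-1))   # 0.5
print(stops_to_mult(0.5))  # ~1.414, half a stop up
```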
7. LOWER RES / LESS FRAMES PREVIEW
Previewing at lower resolutions/frame rates. In the process of compositing a shot, it’s not necessary to view all your frames at full res all the time. You can get away with working in proxy modes, and also rendering in proxy modes just to check how things are going in the comp. Sure, when it comes time to submit, or you are getting pretty close, rendering full res is what’s required, but for just getting comps up to scratch or problem solving errors, rendering at proxy sizes is more efficient.

The same applies to frames: in the early stages of your comp work you can get away with rendering, say, every fifth frame instead of every frame. The render time will be five times faster (10 minutes instead of 50 minutes, for instance) and you will be able to spot most errors this way. I wouldn’t recommend doing this all day, every day, as you will need to see the in-between frames, but it’s a very quick way of getting up to speed on your comp and it really does save you and the rest of the team render farm time! Most flipbook viewers can play back at various frame rates, so if you do render every fifth frame you can play back at 20% of the speed. Sure, this will look steppy, but you will get the idea of the timing. Even rendering every second frame is of course two times faster, something to think about.
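Generating a stepped frame list for a preview render is trivial; a plain-Python sketch of the idea (the shot range is made up for the example):

```python
def preview_frames(first, last, step=5):
    # Every Nth frame: roughly a step-times-faster preview render
    return list(range(first, last + 1, step))

frames = preview_frames(1001, 1020, step=5)
print(frames)        # [1001, 1006, 1011, 1016]
print(len(frames))   # 4 frames instead of 20
```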
8. PRECOMP

Precomp sections of your script that you aren’t changing or that have been signed off. Even so-called ‘still frames’ can use quite a bit of processing per frame, so if it really is supposed to be just a ‘held’ frame, or a matte painting that has a lot of additional work done in Nuke, it’s best to pre-render it as a still frame. You can also precomp on the fly while rendering your whole script. If you set render orders on the write nodes, you can go down through your tree creating write nodes, with read nodes straight after them reading in the write nodes’ output. If you haven’t rendered these write nodes yet, you will have to manually fill out each read node with the path from its write node, and remember to set the frame range! Nuke will default to 1 during this process. You will also get an error from the read nodes saying the file doesn’t exist; and it doesn’t, yet! So, working your way down, set the write node render orders so your final main comp has the highest value: if you had two precomps in your script, the first would be 1, the second would be 2 and your main comp write node would be 3. Nuke now has a ‘read file’ checkbox on write nodes, saving you from creating both read and write nodes for precomps. You will have to write the files out first before you enable it, otherwise it will error.
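The render-order logic above is simply “execute the Writes in ascending order”. A hedged sketch in plain Python, with hypothetical node names (not the Nuke Python API):

```python
# Hypothetical write nodes as (name, render_order) pairs.
# Precomps get low orders; the main comp gets the highest.
writes = [
    ("Write_main_comp", 3),
    ("Write_precomp_A", 1),
    ("Write_precomp_B", 2),
]

# Sorting by render order gives the execution sequence:
# precomps are on disk before the main comp's Reads need them.
for name, order in sorted(writes, key=lambda w: w[1]):
    print(f"rendering {name} (order {order})")
```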
Nuke now also has a Precomp node that saves a selected portion of your script into a new script and adds a write node to it. You can also manage versioning of this new script and its output for bringing back into your comp. If you have rendered an EXR sequence from this precomp script, Nuke is smart enough to realise when the script has been updated but the file sequence being read is out of date (Nuke picks this up from the hash information in the EXR). Although you can create these precomp scripts for your own use, I prefer to manage my precomps in the same script with my own read/write breaks, and keep the Precomp script function for collaborative work (with lighting TDs or other compers, for example). Refer to the excellent Nuke user manual for more information.
9. RENDER LOCALLY
Render locally in the background. Most modern workstations have tonnes of RAM and multiple processors; you will probably find that you can get away with rendering in the background via the command line yet still work pretty comfortably in terms of interactive responsiveness, especially if your frame ranges are short, or if the farm is clogged or slow to pick up jobs (if you have one!).
10. VECTOR BLURS
Use vector blurs instead of multisampled blurs: MotionBlur2D and MotionBlur3D. The Transform node now has a MotionBlur2D setup under the hood, with standard user parameters in the properties tab.
Additional tips regarding Nuke slowdowns and renders that fail:
1. When Nuke seems to freeze or is slow, always check the terminal for any information, errors or warnings.
2. Check the input resolution: spatial resolution (format and bbox) and colour resolution, e.g. are you reading 32-bit EXRs where 16-bit would do? Avoid TIFFs; those memory hogs are for print and have no place in a compositing pipeline (have fun explaining that to your matte painter).
3. Channel output: how many channels are being written to disk, and are all the output channels actually needed? Use a Remove node to control this when the Write node is set to “all”.
4. Check the size of any 3D scenes: geo building is single threaded, and you won’t see a scanline until the geo has been generated for a given frame.
Common causes of failed renders:

1. RGBA output into the Cineon format (the DPX and Cineon file formats don’t officially support alpha channels; you can run into big trouble by doing this)
2. Wrong Nuke or plugin version
3. Conflicting render orders in Write nodes (a Read is being used before its respective Write has executed)
4. Missing alpha channel(s) in a precomp’s output (e.g. when using multiple Writes with render orders)
5. Output directory does not exist
6. Trying to read images (e.g. CG renders) that have not finished rendering, giving e.g. a “zlib decompression error”
7. Trying to render in proxy mode when you haven’t set a file path for the proxy option in the Write node
Whoaaa, I wanna learn this! So cool.
3D Point Generator, projections and the Reconcile3D node. The first is by Joe Raasch, and the second is from The Foundry.
FROM ISAAC NEWTON TO THE COEN BROTHERS
If you wanna read more, see FilmmakerIQ.