KeenTools
KeenTools is, in a nutshell, a plugin suite for Nuke that assists with 3D object tracking. And I have to say, it’s really, really smart and straightforward. The suite consists of five tools: GeoTracker, FaceTracker, FaceBuilder, PinTool, and the latest addition, ReadRiggedGeo. Most of them use the PinTool as their foundation.
The PinTool is an incredibly simple concept with some intelligent math behind it. If you have a 3D object that needs to be placed in a scene, you grab a vertex as a control point and drag it to where it should sit in the frame, then repeat with other vertices. The object quickly lines up into place. This is most helpful if you are replacing a practical object with a 3D one, or have a specific spot it needs to occupy. If you have a tracked, moving camera, you can move to a new frame, make adjustments, and Keen will refine the position. Repeat until your object is locked.
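For the curious, here’s a minimal sketch of the kind of 2D-to-3D alignment math at play, using OpenCV’s solvePnP as a stand-in (this is a generic pose solve, not KeenTools’ actual code, and the pin coordinates are made up):

```python
# Generic pose-from-pins sketch: given a few model vertices (the "pins") and
# where the artist dragged them in the frame, solve for the object's pose.
import numpy as np
import cv2

# 3D vertices on the model that were grabbed as pins (object space).
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 0.0, 1.0],
], dtype=np.float64)

# Where each pin was dropped in the frame (pixel coordinates, made up here).
image_points = np.array([
    [320.0, 240.0], [410.0, 235.0], [318.0, 150.0],
    [330.0, 255.0], [405.0, 148.0], [420.0, 250.0],
], dtype=np.float64)

# Simple pinhole camera: focal length in pixels, principal point at center.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())
```

Each new pin adds a constraint, which is why the object snaps into place after only a handful of them.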
GeoTracker and FaceTracker are built on the same foundation, the difference being that once you place your object à la PinTool, Keen will analyze the pixel data to track your object to what you’ve aligned it to, allowing you to make further refinements along the way. GeoTracker is designed to work with rigid objects, while FaceTracker can track changing human facial expressions. FaceTracker uses FaceBuilder (covered next) to first establish geo that matches the character’s head, and then tracks the deformation to match the performance.
FaceBuilder, in turn, uses PinTool technology to generate head geo that matches the structure of the character, utilizing various photos of the performer. Like PinTool, you grab points on the face geo and match them to the actor; the geo transforms and deforms the more points you align. You line up one view, then switch to another photo and repeat. With each view of the head, the solve becomes more and more accurate. Then, you can use the photos to project textures onto the geo.
Finally, the ReadRiggedGeo node allows you to bring in an FBX file of a rigged character and actually animate that character within Nuke. You can also bring in animation data from outside and apply it to the character. I’m not quite sure you can sell animators on the idea of animating in Nuke, and I’m not entirely clear on how it would be beneficial, but I’m totally open to the idea that I’m overlooking something.
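To make that concrete, here’s a hypothetical sketch of loading one through Nuke’s Python API. Fair warning: the “ReadRiggedGeo” class name and the “file” knob are my assumptions, so check the actual node and knob names in your install.

```python
# Hypothetical sketch: bring a rigged FBX into Nuke's 3D system via Python.
# The "ReadRiggedGeo" class name and "file" knob name are assumptions.
import nuke

geo = nuke.createNode("ReadRiggedGeo")          # KeenTools' rigged-geo reader
geo["file"].setValue("/path/to/character.fbx")  # the rigged FBX to load

scene = nuke.createNode("Scene")                # a standard Nuke 3D scene
scene.setInput(0, geo)                          # drop the character into it
```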
However, my ignorance aside, the other tools in the suite are so fast and so powerful that I’m not sure why you wouldn’t have them in your arsenal. I don’t see myself rendering out CG characters with FaceTracker and GeoTracker, but for adding digital makeup or doing beauty work, I can see so many applications. One day, I will also probably use FaceBuilder to quickly get a model of an actor’s head.
This may be a small suite, but it’s quite mighty.
GeoTracker Price: $199 (personal), $299 (commercial), $399
FumeFX 5.0
I have been a big fan of FumeFX for years and years. Kresimir Tkalcec at Sitni Sati has been making fire and smoke for a long time, and doesn’t appear to be slowing down. With FumeFX 5, we get a mix of performance and interactivity that allows us to get where we want to be faster, with higher quality.
The first thing you’ll notice when you open up the new FumeFX is that the UI has gotten a bit of a facelift — mostly to work with the new GPU viewport, and to consolidate controls into the FumeFX window rather than bouncing around between interfaces.
So, the GPU Viewport Display: It’s incredibly helpful — dare I say, vital — to be able to see what is happening with your simulation. I mean, how do you know if something isn’t working? Do you want to wait for a 400-frame simulation to find out that it broke on frame 20? Before, we had our little render viewport, plus a pixel-y representation in the 3ds Max viewport that gave you a hint of what the simulation was doing. But in FumeFX 5, the results are soft and smooth, and live right in the 3D scene. You get volume shadows. Geo occludes the sim, and the sim reflects changes you make to the shading parameters. Also, depending on the power of your GPU, FumeFX can display millions of voxels in the viewport.
Retiming — always a huge thing in simulations — was already pretty impressive, utilizing wavelets and velocities to retime a simulation without destroying its integrity. Under the hood, a new advection type minimizes numerical losses, Vorticity II optimizations mean faster calculations with a lower memory footprint, and simulations can scale in density while retaining similar results.
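As a toy illustration of how velocities enable retiming (nothing like Sitni Sati’s actual algorithm), you can generate an in-between frame by pushing a cached density field along its cached velocity, semi-Lagrangian style:

```python
# Toy retime: advect a cached 2D density field along its velocity by a
# fractional timestep to synthesize an in-between frame.
import numpy as np
from scipy.ndimage import map_coordinates

def advect(density, velocity, dt):
    """Backtrace each cell along (vx, vy) by dt and sample the density there."""
    ny, nx = density.shape
    ys, xs = np.mgrid[0:ny, 0:nx].astype(np.float64)
    src_x = xs - dt * velocity[0]   # where this cell's contents came from
    src_y = ys - dt * velocity[1]
    return map_coordinates(density, [src_y, src_x], order=1, mode="nearest")

# Retiming 2x slower: "frame 10.5" is frame 10 advected half a step forward.
density_f10 = np.random.rand(64, 64)                   # stand-in cached density
velocity_f10 = np.random.rand(2, 64, 64) * 2.0 - 1.0   # stand-in cached velocity
density_f10_5 = advect(density_f10, velocity_f10, dt=0.5)
```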
Another cool, nerdy feature is that you can choose different kinds of compression for different kinds of data. If your velocity channel doesn’t need lots of detail, a lossy (irreversible) compression can be used to help with storage issues, while you retain the detail in the places that need it most. These output files are multithreaded during I/O, so access and writing are faster. This allows for faster caching for previewing, which can also be optimized by changing the resolution of the cache for viewport playback. In addition, the results can be saved to OpenVDB for use in Arnold, Redshift and Houdini.
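Conceptually, per-channel compression boils down to something like this little sketch (my own illustration, not FumeFX’s file format): keep the density channel lossless, and quantize the velocity channel down to half floats where full precision isn’t needed.

```python
# Per-channel cache compression, conceptually: lossless where detail matters,
# lossy (quantized) where it doesn't.
import numpy as np

density = np.random.rand(128, 128, 128).astype(np.float32)
velocity = np.random.rand(3, 128, 128, 128).astype(np.float32) * 2.0 - 1.0

np.savez_compressed(
    "cache_frame_0042.npz",
    density=density,                       # lossless: full float32 detail kept
    velocity=velocity.astype(np.float16),  # lossy: half the storage per voxel
)
```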
Also, a new licensing scheme allows for license rentals, just in case you aren’t burning stuff year round!
www.afterworks.com/FumeFX.asp
Price: $295 (upgrade), $695 (permanent license)
V-Ray Next
Holy cats! The new-gen V-Ray has arrived. It’s been out for less than a month and it’s already changed my workflow!
First up: Obviously, V-Ray Next is faster. It’s tied into your GPU, but it’s not the raw power that gives it the speed. The math behind the render is smarter and more efficient, allowing it to pick and choose where to put most of its effort. During the light caching, V-Ray is learning about the scene it’s about to render and choosing which lights to sample or ignore.
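Here’s a toy model of that light-picking behavior (my own illustration of the idea, not Chaos Group’s implementation; the light names and intensities are invented): estimate each light’s contribution during a learning pass, then spend samples in proportion to it.

```python
# Toy adaptive light sampling: sample lights in proportion to their estimated
# contribution, so negligible lights stop eating render time.
import random

estimated_contribution = {"key": 5.0, "fill": 1.2, "rim": 0.6, "bounce": 0.02}

total = sum(estimated_contribution.values())
weights = {name: c / total for name, c in estimated_contribution.items()}

def pick_light():
    return random.choices(list(weights), weights=list(weights.values()))[0]

# Out of 1,000 samples, "bounce" gets almost none instead of a full quarter.
samples = [pick_light() for _ in range(1000)]
print({name: samples.count(name) for name in estimated_contribution})
```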
Starting with the new Dome Light: it is now adaptive in this way. Using a dome light can be very expensive, especially for interiors, where one is contributing light from the outside in. V-Ray Next analyzes the scene, determines which parts of the HDRI in the Dome Light are going to contribute to the interior, and eliminates the rest. This makes the render far more efficient, and it’s more physically accurate than the cheat used in the past: placing a light portal in front of the windows.
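The classic technique underneath this (a standard importance-sampling approach, not necessarily V-Ray Next’s exact code) is to weight every environment pixel by its luminance, so the bright parts of the HDRI get the samples and the parts that contribute nothing get skipped:

```python
# HDRI importance sampling: build a CDF over pixel luminance and draw
# environment samples from it.
import numpy as np

hdri = np.random.rand(256, 512, 3).astype(np.float32)  # stand-in lat-long HDR

luminance = hdri @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
cdf = np.cumsum(luminance.ravel() / luminance.sum())

def sample_environment(u):
    """Map a uniform random number to a (row, col) on the HDRI."""
    idx = np.searchsorted(cdf, u)
    return np.unravel_index(idx, luminance.shape)

print(sample_environment(np.random.rand()))
```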
In the same way V-Ray got smarter with its lights, it has also become smarter in the camera. V-Ray Next checks for the best exposure and white balance for the scene and can adjust the camera accordingly. The exposure adjustment comes through a change in the ISO, so it doesn’t affect your f-stop or shutter, which means your artistically chosen depth of field (tied to your aperture’s f-stop) isn’t affected.
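The arithmetic behind an ISO-only auto exposure is simple enough to sketch on a napkin (this is my illustration of the principle, not V-Ray’s code): exposure scales linearly with ISO, so you just push the average luminance toward middle gray.

```python
# ISO-only auto exposure: correct brightness with an ISO ratio, leaving the
# f-stop (depth of field) and shutter (motion blur) untouched.
import numpy as np

preview = np.random.rand(270, 480) * 0.08   # stand-in: an underexposed pass

MIDDLE_GRAY = 0.18
current_iso = 100.0

new_iso = current_iso * (MIDDLE_GRAY / preview.mean())
print(f"suggested ISO: {new_iso:.0f}")
```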
And to make the user smarter about lighting, there’s a Lighting Analysis tool along with heat maps showing light temperature, and a Light Meter utility. That’s right, you may have to look up how to use a light meter — or even worse, what a light meter is.
In the same vein as the smarter adaptive lights, a new denoiser has been incorporated: the NVIDIA OptiX Denoiser — and it’s smart and learns deeply. In fact, out of the box, the OptiX denoiser is the result of thousands of images rendered through Iray, and through deep learning, it has begun to understand the most efficient way to denoise an image. Pretty cool, yes? But not as cool as it continuing to learn about your image as you go through your progressive renders to adjust lighting and shaders. Your test renders become a classroom for OptiX to figure out how the image should be denoised.
The downside may be that you need an NVIDIA card to take advantage of the denoiser. But before we leave denoising, it’s worth mentioning that render elements can be denoised, too, so that your comps will remain as clean as the final image.
In the materials arena, V-Ray Next supports the hundreds of VRscans from Chaos Group. There is a new physical hair material that really gets into the physiology of hair — like melanin content and a eumelanin-to-pheomelanin ratio. (Yeah, I had to break out Google, too.) But what I’m most excited about, and I know it’s probably not a big deal, is that the V-Ray material now has “metalness” reflection controls to play nicer with PBR shaders coming from Substance Painter and Designer.
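For anyone wondering what the metalness workflow actually does, here’s the standard PBR convention in miniature (the usual conversion, not V-Ray’s internals): one base color and a metalness value stand in for separate diffuse and reflection colors.

```python
# Standard metalness-to-diffuse/reflection split used by PBR pipelines.
def split_metalness(base_color, metalness):
    """Convert a Substance-style base color + metalness into two colors."""
    DIELECTRIC_F0 = 0.04  # ~4% facing reflectance for typical dielectrics
    diffuse = tuple(c * (1.0 - metalness) for c in base_color)
    reflection = tuple(
        DIELECTRIC_F0 * (1.0 - metalness) + c * metalness for c in base_color
    )
    return diffuse, reflection

# A gold-ish base color, fully metallic: no diffuse, tinted reflections.
print(split_metalness((1.0, 0.77, 0.34), metalness=1.0))
```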
So much goodness! If I could have rendered this review in V-Ray, I would have!
chaosgroup.com
Price: Commercial annual license, $470 (3ds Max); $520 (Maya); educational license $99.
ZBrush 2018.1
It’s difficult to keep up with the changes that happen in ZBrush from version to version. The advances are substantial and numerous, and it just overwhelms me to the point of inaction. However, for ZBrush 2018, I simply had to overcome my fears and dig in. Some of the new stuff is just … so … good.
Firstly, most digital sculptors knew about Sculptris even before it became part of the Pixologic ecosystem. It’s a fast, light sculpting system centered around the concept of molding a lump of clay, and Pixologic offers it as a free downloadable primer for getting into the world of digital sculpting. The super cool thing about Sculptris is that as you sculpt, it dynamically tessellates the mesh to accommodate the detail you need. Now, take that concept and apply it to the ZBrush workflow.
Pixologic has taken the best of Sculptris, plussed it into Sculptris Pro, and embedded it into ZBrush 2018. This is a big deal: In the past, one had to subdivide the sculpt while carving in more and more detail. Whether through subdivision levels or DynaMesh-ing, you were increasing the resolution of the mesh globally and exponentially, so the whole mesh was getting up-rez’d, even in places that didn’t have detail. Sculptris Pro makes it so your hundreds of available brushes dynamically rez up the mesh just around the strokes you are making. Conversely, smoothing the surface will decimate it — lowering the resolution and optimizing your model so you won’t hit a wall when it reaches a gazillion polygons.
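A toy version of the idea looks something like this (my own illustration, nowhere near ZBrush’s actual topology code): inside the brush radius, edges that are too long for the detail being added get split; a smoothing pass would instead collapse edges that are shorter than needed.

```python
# Toy local tessellation: refine only the edges under the brush.
import math

def refine_edges(edges, brush_center, brush_radius, target_len):
    """edges: list of ((x,y,z), (x,y,z)) pairs. Returns a refined edge list."""
    out = []
    for a, b in edges:
        mid = tuple((a[i] + b[i]) / 2.0 for i in range(3))
        if math.dist(mid, brush_center) < brush_radius and math.dist(a, b) > target_len:
            out.append((a, mid))   # split: add resolution under the brush...
            out.append((mid, b))
        else:
            out.append((a, b))     # ...while untouched geometry stays coarse
    return out

edges = [((0, 0, 0), (2, 0, 0)), ((5, 0, 0), (7, 0, 0))]
print(refine_edges(edges, brush_center=(1, 0, 0), brush_radius=1.5, target_len=0.5))
```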
The Sculptris Pro feature is activated through a button on your tool shelf, and the settings can be saved on a per-brush basis, including a feature that dynamically adjusts the size of the stroke so it stays the same size relative to the model, rather than relative to its size on screen. This “locking” of parameters to brushes is also new in ZBrush 2018 and can apply to your normal brushes, not just the Sculptris Pro features.
The Sculptris Pro feature alone should be enough to upgrade. But there is more. ZBrush 4R8 introduced the concept of live primitives, where creating a primitive comes with a number of contextual little cones that generally live on the corners of the bounding box or on active points. Manipulating these cones changes parameters of the primitive like resolution, twist, symmetry, etc. — all within close reach of your working area. Then there is the idea of Insert Multi Mesh (IMM) brushes, which you can use to actually embed a mesh into another mesh. Take those two concepts together and you kind of have ZBrush 2018’s Project Primitive.
An extension of the primitives and deformers menu, Project Primitive embeds primitives into a mesh, but the merge remains live while you manipulate the parameters through the cone UX controls. Along with the primitives comes a vastly increased selection of deformers to further massage the combination. This allows one to quickly prototype complex objects — which in turn allows one to iterate faster and more often.
Price: $895 (single user license)
RealityCapture
Scanning and photogrammetry have quickly become all the rage. Yes, they’ve been around for a while, but with faster and less expensive computers and more efficient software, they’re becoming accessible to everyone. Between my DSLR and my DJI Mavic Pro, it is a very real concern that I may end up scanning everything in my life. And RealityCapture is a new(er) tool in the toolbox for attaining that goal.
RealityCapture is photogrammetry software from a company called Capturing Reality. (True story!) Like its competitors, Agisoft PhotoScan and Autodesk ReCap and such, the idea is that you take a whole bunch of photos of an object, person, terrain, etc., and the software triangulates features in the images, derives a point cloud, builds a mesh, and textures it. Basically, there is a lot of math involved!
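For a heavily simplified taste of that math, here’s a generic two-view step with OpenCV (a textbook structure-from-motion fragment, not RealityCapture’s pipeline; the filenames and camera matrices are placeholders):

```python
# Two-view sketch: match features between photos, then triangulate the
# matches into 3D points.
import numpy as np
import cv2

img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float64([kp1[m.queryIdx].pt for m in matches]).T  # 2xN pixel coords
pts2 = np.float64([kp2[m.trainIdx].pt for m in matches]).T

# Placeholder projection matrices; a real solver estimates these from the
# matches themselves (that's the hard part).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])

points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, 4xN
point_cloud = (points_h[:3] / points_h[3]).T          # Nx3 world points
print(point_cloud.shape)
```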
The UX feels rather simplistic, and some have compared it to a Microsoft Word-y type of thing. But that simplicity is deceptive. On the surface, it steps you through each of the phases of the workflow. Well actually, in the Workflow tab, you can load in all your images and click the Start button, and off it goes. For the most part, you get something pretty usable out of the gate — as long as you shot your images correctly. From there, you can determine how much further you need to push things to get a better solve.
Step one: Press Start again! Seemingly, the system knows you weren’t happy with the result, and it shifts its approach — whether with a different algorithm, or by weighing some images more than others, or something. Frequently, you’ll get a better result, or if it utterly failed the first time, you actually might get something. From there, you can see, “Oh, I need more data there” — and, if the area is accessible, you can go shoot more photos of it and add them to the solution. The solve will get better. And it’s not just photos; you can use lidar scans, geo-referencing, DSMs — basically, the more data you have, the more accurate your solve.
Then you can dig in and really push for manual solutions. If your solve comes up in three different pieces because RC couldn’t figure out how to connect some photos, you can tell it which features in the photos are the same, bridging those holes. Select two images from each component, find four common points, and RC will do the rest.
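The math under that merge is a standard rigid alignment from corresponding points (the classic Kabsch solve, sketched here as my own illustration rather than RC’s solver): given the same four control points in both components’ coordinate frames, find the rotation and translation that snap one onto the other.

```python
# Kabsch alignment: least-squares rigid transform between two point sets.
import numpy as np

def align(points_a, points_b):
    """Return (R, t) mapping points_a onto points_b."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# The four shared control points, as reconstructed by each component.
pts_comp1 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
pts_comp2 = np.array([[2, 1, 0], [3, 1, 0], [2, 2, 0], [2, 1, 1]], float)
R, t = align(pts_comp2, pts_comp1)             # transform component 2 into 1
print(np.round(R, 3), np.round(t, 3))
```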
Meshes can be extremely dense, but RC has tools to simplify them before exporting. And it has some UV unwrapping tools to help with the projections. The UDIM support isn’t as robust as I would like, but it looks like that’s in the works. In the meantime, you can export your cameras along with the mesh to re-project textures in your favorite texturing program.
RealityCapture is really straightforward, even when you have to do some stuff manually. It’s pretty darn fast, and the solutions are great. For the hobbyist or indie artist, the price is within reach. You just have a few restrictions, like only 2,500 photos per project! The cost does skyrocket when you go commercial. There’s also a free trial version; you just can’t export meshes or textures, and renders will be watermarked. But I definitely suggest you take it out for a spin.
www.capturingreality.com
Price: Starts at 99€ for three months