
Modo 10
modo.thefoundry.co.uk
Modo was picking up steam even before The Foundry acquired it and began incorporating it into its suite of products. After numerous iterations, with advances in UV mapping and animation and the incorporation of a sophisticated Boolean tool called MeshFusion, Modo is up to version 10.1, and its momentum doesn’t appear to be slowing down.
The biggest splash for Modo 10.1 is the addition of what is referred to as the layer stack — which is a dynamic and procedural way to model and rig. Now, to be honest, this idea isn’t groundbreaking. The 3ds Max modifier stack is based on this concept. And anyone you talk to who works in Houdini — well, “procedural” is the name of the game. But for longtime Modo users, or Maya users who have migrated to Modo, this idea of procedural modeling could be a game changer.
The concept is that a model is made up of a number of tasks: create base mesh, select faces, bevel, subdivide, etc. Frequently this is a linear process, and it’s difficult to go backward. Procedural modeling keeps each step alive and accessible, and (to an extent) if you make changes upstream, those changes propagate downstream through the rest of the stack. It’s quite powerful.
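To make the idea concrete, here is a toy sketch in Python of what an operation stack is conceptually doing. This is not Modo’s API; the op names and parameters are invented for illustration. Each step stays live, and editing an upstream parameter simply re-evaluates everything downstream.

```python
# A toy procedural "op stack" -- illustrative only, not Modo's API.
class MeshOp:
    def __init__(self, name, **params):
        self.name = name
        self.params = params            # parameters stay live and editable

    def apply(self, mesh):
        # A real system would modify geometry; here we just record the step.
        return mesh + ["%s(%s)" % (self.name, self.params)]

class OpStack:
    def __init__(self, ops):
        self.ops = ops

    def evaluate(self, base_mesh):
        mesh = list(base_mesh)
        for op in self.ops:             # every step is re-run, in order
            mesh = op.apply(mesh)
        return mesh

stack = OpStack([
    MeshOp("select_faces", pattern="top"),
    MeshOp("bevel", width=0.2),
    MeshOp("subdivide", levels=2),
])
print(stack.evaluate(["cube"]))

# Change an upstream parameter and re-evaluate: the edit propagates through
# every later step without any manual rework.
stack.ops[1].params["width"] = 0.5
print(stack.evaluate(["cube"]))
```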
These procedural methods have been incorporated into other new tools such as some advanced text tools. The layer stack allows you to change text, font, etc., without having to remodel.
But Modo doesn’t stop at modeling. The parameters in those tasks are open to data input and can be driven dynamically by conditions or by user input, so the structure of the model can be set up to change based on circumstances. Modo has also adopted a Houdini/ICE/Bifrost-style nodal system to control the rig. In this day and age, it’s really the best way to go; everyone’s getting into the act.
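In the same toy spirit, here is what a driven parameter might look like: instead of a number typed in by hand, the value is a function of some external condition. Again, this is a hypothetical sketch of the concept, not Modo’s rigging or schematic system.

```python
# Hypothetical sketch of a condition-driven parameter -- not Modo's API.
def bevel_width(context):
    # Driven by an external condition, e.g. a user control or another
    # object's channel.
    return 0.1 if context["lod"] == "low" else 0.4

class DrivenOp:
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def apply(self, mesh, context):
        # Resolve any callable parameters against the current conditions.
        resolved = {k: (v(context) if callable(v) else v)
                    for k, v in self.params.items()}
        return mesh + ["%s(%s)" % (self.name, resolved)]

op = DrivenOp("bevel", width=bevel_width)
print(op.apply(["cube"], {"lod": "low"}))     # narrow bevel
print(op.apply(["cube"], {"lod": "high"}))    # wide bevel
```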
Not all of the modeling tools have made their way into the layer stack as of 10.1, but the plan is to continue migrating them as the software evolves. With Modo jumping into this modeling methodology — and I predict Bifrost’s philosophy will bleed throughout the rest of Maya — we are going to see a mass movement among modelers toward this way of thinking. And once clients get wind that you can dynamically iterate versions, they will demand it. If you are still in the old-school, linear-modeling mindset? Well, you’d just better be a damn fast modeler.

Fusion 8
www.blackmagicdesign.com/products/fusion
I’ve been hemming and hawing for the last couple of years, ever since Blackmagic Design acquired the compositing tool Digital Fusion and rebranded it as Fusion. I’ve concluded that I’ve been holding off because, outside of the mind-blowing announcement that there would be a free version and a professional version for under $1,000, there actually weren’t many advances to discuss.
This was my error, because Fusion doesn’t get, and hasn’t gotten, the recognition it deserves, and I should have been a voice — albeit a small one — helping along what is, in fact, an incredibly powerful compositing system.
I started compositing with Eyeon’s Digital Fusion at Imageworks way back in 1997 — yeah, back when Nuke was still proprietary under the roof of a visual effects house in Venice Beach named Digital Domain. I adopted Fusion for my own little effects boutique for years. I also used it while at Blur Studios and Uncharted Territory — both of which still use it in their productions. And they’ve done some pretty darn high-profile stuff. Not to mention that VFX guru Douglas Trumbull (2001: A Space Odyssey, Close Encounters of the Third Kind, Star Trek: The Motion Picture) has made it his compositor of choice as he continues to push the technological boundaries with stereo and high frame rates.
For a relatively unknown product, it still has a pedigree.
So, why hasn’t it grown in leaps and bounds since Blackmagic acquired it? Well, Blackmagic has been hard at work tackling a problem that has kept the Fusion community so small: It was limited to Windows. Now, as of Fusion 8, the platform has expanded to OS X — in both free and studio versions. And, hot on its heels, there is word of a Linux version (as a studio version). OS X opens up the user base to the motion-design houses that like their Macs, but I feel it will be the Linux version that opens the doors to the wide world of feature films, whose VFX studios almost exclusively run on Linux.
I will keep you all apprised of further developments from here on out. But let’s start the discussion by saying that if you are a blossoming compositor or a small one-man show doing freelance work here and there, Fusion is definitely a way to dive into robust node-based compositing. The free version is limited only by the exclusion of networking and network rendering, third-party OFX plugins, some optical flow tools (these analyze motion to derive vectors used in generating pseudo motion blur or slow-motion effects) and the more robust stereo tools. Maybe that sounds like a lot to miss, but when you balance it against what you get, you can accomplish 95 percent of what you need to learn or do. There are way too many features to touch on here, because I would have to talk about them all. The best thing to do is download it and start compositing.
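For the curious, here is a rough sketch of the idea behind those optical flow tools, using OpenCV in Python. It is not Fusion’s implementation, and the filenames are placeholders; the point is just to show motion vectors being derived from two frames and then used to warp toward a pseudo in-between frame, which is the basis of flow-based retiming and motion blur.

```python
# Sketch of optical-flow retiming with OpenCV -- not Fusion's implementation.
import cv2
import numpy as np

# Placeholder filenames for two consecutive frames.
frame_a = cv2.imread("frame_0100.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_0101.png", cv2.IMREAD_GRAYSCALE)

# Dense per-pixel motion vectors from frame_a to frame_b.
flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Warp frame_a halfway along the vectors to approximate a 50% in-between
# frame (a crude backward warp, good enough to show the concept).
h, w = frame_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
in_between = cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame_0100_5.png", in_between)
```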

Mocha Pro 5
www.imagineersystems.com/products/mocha-pro
Mocha Pro has made itself the go-to tool for tracking, track-assisted rotoscoping and object removal. So much so that its team was awarded a Science and Technology Award from the Academy of Motion Picture Arts and Sciences. Not too shabby. And the latest version makes the tools even easier to incorporate while expanding into new technological fields.
The biggest announcement for Mocha Pro 5 is that there are now plugins that open a pipe directly from After Effects, Premiere Pro, Avid or HitFilm into Mocha Pro. What does this do? First, because the tools are sharing data, there is no need to render additional file sequences for Mocha to track — it can use what is available within After Effects (for example), including compositions. Not only can that footage be used as the track source, it can be the element that is tracked into the plate. And in turn, tracks and masks can be sent directly back to the host software.
Additionally, other procedures can be pulled from Mocha and then rendered directly in After Effects. Object removal, for instance, would be processed in the comp rather than requiring an entirely new sequence to be rendered out.
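To picture what is actually being handed back and forth, here is a small OpenCV sketch in Python of what a planar track amounts to: four tracked corner positions per frame driving a corner-pin. This is not Mocha’s plugin API, and the filenames and corner values are made up.

```python
# Corner-pin from a planar track, sketched with OpenCV -- not Mocha's API.
import cv2
import numpy as np

plate = cv2.imread("plate_0042.png")        # placeholder filenames
insert = cv2.imread("screen_insert.png")

h, w = insert.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Tracked corner positions for this frame, as a planar tracker would export.
dst = np.float32([[612, 184], [1108, 201], [1095, 566], [598, 540]])

matrix = cv2.getPerspectiveTransform(src, dst)
size = (plate.shape[1], plate.shape[0])
warped = cv2.warpPerspective(insert, matrix, size)

# Drop the warped insert over the plate wherever it has coverage.
coverage = cv2.warpPerspective(np.full((h, w), 255, np.uint8), matrix, size)
plate[coverage > 0] = warped[coverage > 0]
cv2.imwrite("comp_0042.png", plate)
```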
And don’t worry — the plugin for Nuke is on its way.
The open pipe between packages also means that Mocha Pro can be used with other plugins in the mix. For instance, a problem that is becoming more and more prominent as virtual reality becomes a thing is incorporating VFX into VR, or repairing it — like … where does the film crew hide? VR tools such as Mettle’s allow for reconfiguring and processing of VR data. The VR footage can be flattened and manipulated, then fed into Mocha, where objects can be tracked in or removed. Then the footage is restored to its original format for use in the target VR system.
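That flatten-fix-restore round trip is easier to picture with a little math. Here is an illustrative Python/OpenCV sketch of the flatten half: sampling a flat, rectilinear window out of an equirectangular 360 frame so ordinary 2D tools can work on it (the restore step is the inverse mapping). This is not Mettle’s or Mocha’s code, and the filename and field of view are placeholders.

```python
# Pull a rectilinear "window" out of an equirectangular 360 frame.
# Illustrative only -- not Mettle's or Mocha's implementation.
import cv2
import numpy as np

def equirect_to_rectilinear(equi, fov_deg=90.0, yaw_deg=0.0, pitch_deg=0.0,
                            out_size=(960, 960)):
    H, W = equi.shape[:2]
    out_w, out_h = out_size
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)

    # A viewing ray for each output pixel (x right, y down, z forward).
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    xx, yy = np.meshgrid(xs, ys)
    rays = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Aim the window: yaw around the vertical axis, pitch around horizontal.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T

    # Rays to longitude/latitude, then to equirectangular pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    map_x = ((lon / np.pi + 1.0) * 0.5 * W).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * H).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)

flat = equirect_to_rectilinear(cv2.imread("vr_frame.png"), yaw_deg=45.0)
cv2.imwrite("vr_frame_flat.png", flat)
```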
If you are in VFX, Mocha Pro should already be in your arsenal. If you are starting to look into virtual reality, you are walking into a minefield of post-production unknowns, and the Mocha team is one group looking ahead at the problems of a nascent industry.

After Effects
www.adobe.com/products/aftereffects.html
The latest version of After Effects has a bunch of cool tools that make things easier and faster for us compositors and designers. But I really want to focus on a tool that, in fact, is still technically in a preview state. And that is Character Animator.
When it first came out, I was pretty dismissive of it as a viable animation tool. And, given the simplicity of the setup and the fact that it requires no earned skill, I predicted that we would probably see a whole ton of awful animation before it eventually got into the hands of artists who knew what they were doing.
And finally, it happened. Through a collaborative effort between Adobe and Film Roman, the tools were refined and honed to help create a live Q&A with Homer Simpson. The experienced animators from the show took the tools Adobe had created and provided feedback, which Adobe took to heart, and together they crafted it into something kind of amazing. Following in their footsteps, a cartoon Trump appeared on The Late Show with Stephen Colbert in a number of live segments. This was all done with a new feature in Character Animator that captures a performance on a webcam, transposes the motion to a rigged character and then feeds the animation to the broadcast.
And it’s all technology that is available — just add talent and stir.
That said, I’ll briefly touch on what Character Animator does. You set up a character in Photoshop or Illustrator in various states — phonemes, head turns, etc. Those are brought into After Effects and tagged as specific elements, which are then driven by you on your webcam. The lip sync, movements, etc., fire the triggers and call up elements depending on the performance. All of which can be overridden, of course, to fix, refine or add to the animation.
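To demystify the tagging a little, here is a toy Python sketch of the trigger idea. It is not Adobe’s implementation, and the tag and layer names are invented: detected phonemes pick which piece of tagged artwork shows on each frame, and an artist’s manual overrides always win.

```python
# Toy sketch of the tag-and-trigger idea -- not Adobe's implementation.
MOUTH_LAYERS = {            # tag -> artwork layer name (from the PSD/AI file)
    "Aa": "mouth_open",
    "Ee": "mouth_wide",
    "M":  "mouth_closed",
    "Oh": "mouth_round",
    "rest": "mouth_neutral",
}

def frames_for_performance(detected_visemes, overrides=None):
    """Pick the artwork layer for each frame, honoring manual overrides."""
    overrides = overrides or {}
    for frame, viseme in enumerate(detected_visemes):
        viseme = overrides.get(frame, viseme)           # artist fix-ups win
        yield frame, MOUTH_LAYERS.get(viseme, MOUTH_LAYERS["rest"])

# A few frames of audio analysis, with frame 2 manually corrected.
performance = ["rest", "Aa", "Aa", "M", "Oh"]
for frame, layer in frames_for_performance(performance, overrides={2: "Ee"}):
    print(frame, layer)
```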
In Preview 4, things have become more streamlined: where character setup used to require elements to be explicitly named for the tags to work, you can now connect the tags visually. So you can make Square486 the right eye if you want — but I wouldn’t recommend it. You can now set up auto blinks to happen randomly or based on behavior. The facial-analysis algorithms have more fidelity. And it works with Syphon (on OS X) to feed the animation into a live broadcast situation — like The Simpsons or Colbert.
I was skeptical. But I’m happy to be proven wrong.

Premiere Pro
www.adobe.com/products/premiere.html
Premiere Pro got a nice little boost with its latest update, which was announced at NAB this year but just recently went live.
The increased use of Ultra High Definition footage like 4K and 6K, compounded with editors’ more frequent use of laptops and mobile devices, requires us to be a bit more diligent with our media management. So Adobe has thrown in a process for easily ingesting footage: copying, transcoding and proxifying (is that a word?) footage all at the same time. When you drag footage from the Media Browser into your project, your ingest settings kick in and Media Encoder begins working in the background to prepare the different representations of the footage. Better yet, the ingestion can take place — get this — while you are editing. You have immediate access to the full-resolution footage, and you can begin work while the proxies and such cook away. When ingestion is done, you can swap between versions at will. You can also save your settings as easily accessible presets, and you can even set a watermark for proxies.
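If you want a mental model of what is going on behind the scenes, here is a rough Python sketch of the ingest-while-you-edit idea: copy the full-resolution clips immediately and cook lighter proxies in the background. This is emphatically not Adobe’s Media Encoder pipeline; the ffmpeg settings, paths and threading are placeholders to show the concept.

```python
# Rough sketch of "ingest while you edit" -- not Adobe's pipeline.
# Full-res clips are copied right away; half-res proxies are transcoded in a
# background thread so editing can start immediately against the originals.
import shutil
import subprocess
import threading
from pathlib import Path

def ingest(clips, project_dir):
    project_dir = Path(project_dir)
    (project_dir / "full_res").mkdir(parents=True, exist_ok=True)
    (project_dir / "proxies").mkdir(parents=True, exist_ok=True)

    def make_proxies():
        for clip in clips:
            proxy = project_dir / "proxies" / (Path(clip).stem + "_proxy.mp4")
            # Placeholder ffmpeg settings: half resolution, lighter codec.
            subprocess.run(["ffmpeg", "-y", "-i", clip,
                            "-vf", "scale=iw/2:ih/2",
                            "-c:v", "libx264", "-crf", "23",
                            str(proxy)], check=True)

    for clip in clips:                      # full-res copies, available now
        shutil.copy2(clip, project_dir / "full_res")

    worker = threading.Thread(target=make_proxies, daemon=True)
    worker.start()                          # proxies cook in the background
    return worker                           # join() later to swap to proxies

ingest(["card01/A001_0042.mov"], "my_project")   # placeholder paths
```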
The Lumetri color system has received some new features that refine the ability to control and manipulate color with secondary HSL controls. You can isolate particular color ranges in your shots and then shift the hues within that range. This provides a really intense level of control over the colors in your scene. Frankly, it’s getting more and more like SpeedGrade isn’t even needed in the Adobe suite — yeah, you colorists out there, I said it — as ignorant as that probably is.
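For the curious, the guts of an HSL secondary aren’t magic. Here is a small Python/OpenCV sketch of the idea: isolate a hue range, then adjust only inside that mask. It’s not Lumetri’s implementation, and the filename and hue numbers are placeholders.

```python
# The gist of an HSL secondary, sketched with OpenCV -- not Lumetri's code.
import cv2
import numpy as np

img = cv2.imread("shot_0017.png")                 # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16)

# Isolate, say, the blues (OpenCV hue runs 0-179).
hue = hsv[..., 0]
mask = (hue >= 100) & (hue <= 130)

# Push the selected hues toward teal and mildly desaturate them.
hsv[..., 0] = np.where(mask, (hue - 10) % 180, hue)
hsv[..., 1] = np.where(mask, hsv[..., 1] * 0.85, hsv[..., 1])

graded = cv2.cvtColor(hsv.clip(0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("shot_0017_graded.png", graded)
```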
Another larger feature, among a laundry list of features and fixes, is support for incorporating and editing 360-degree virtual-reality footage. As with the team over at Mocha Pro, looking into VR means looking into a future with so many unpredictable pitfalls that I don’t even like to think about it. But as long as we’re here, let’s do it anyway, because editing VR footage is a new beast. Adobe has provided tools for cutting the new footage together — in fact, you can change your viewer so that you can see what the result will look like, watching the cut footage in a view you can spin around in. Mettle tools are available within Premiere for manipulating and converting the 360-degree footage. And, as if that isn’t enough to wrap your head around, there are tools to view or separate VR stereoscopic footage. Yeah. Crazy, right? Anyway, keep your eyes on this technology. I don’t know how it’s going to play out, but we’re all learning about it together.

NeoFur
www.neoglyphic.com/neofur
So, I’m generally a visual effects and animation person, doing things for films and such, and outside of cinematics and promos for E3, I haven’t really dipped my toe into the game industry or the tools that support it. But I suppose it’s been long enough that I need to face the fact that our technologies are merging: as games become faster and their fidelity higher, the same technology that drives them also benefits artists in the post-production world.
My first mini-review and installation is for a thing called NeoFur. Well, technically, NeoFur is my second game installation, because I had to install Unreal first. Then I could get to NeoFur.
NeoFur perked up my ears because I know how compute-intensive hair and fur are when we have to calculate them for Rocket Raccoon or Shere Khan or Richard Parker (maybe those are the same cats). So seeing fur calculated in real time is worthy of attention.
NeoFur is easy to install, easy to learn and easy to use. Knowing how hair works in VFX doesn’t hurt either. The methods are very similar, but with the Unreal engine driving things, development and tweaking happen in real time. Anyone familiar with a 3D program will slide right in and start making things. The interface feels like developing materials or shaders — driving parameters with maps or sliders. Additionally, you can use meshes to determine the volume of a hair structure and control splines to shape the groom.
The physics engine dynamically and fluidly moves the hair and reacts to changes as you work with it. The character can even cycle through animation to show you how the hair will work in motion. And many of the NeoFur parameters are open to Unreal’s Blueprint, so you can get custom reactions driven by what might be happening in the interactive platform.
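To give a flavor of what the physics side involves, here is a toy Python strand solver. It is absolutely not NeoFur’s code, and the numbers are arbitrary: each point on a guide hair springs back toward its groomed rest shape while gravity and damping act on it, which is the kind of cheap per-strand dynamics that can stay real-time.

```python
# A toy guide-hair solver -- illustrative only, not NeoFur's implementation.
import numpy as np

def step_strand(points, velocities, rest_points,
                stiffness=40.0, damping=4.0, gravity=(0.0, -9.8, 0.0),
                dt=1.0 / 60.0):
    spring = stiffness * (rest_points - points)   # pull back toward the groom
    accel = spring + np.asarray(gravity) - damping * velocities
    velocities = velocities + accel * dt
    points = points + velocities * dt
    points[0] = rest_points[0]                    # root stays pinned to skin
    velocities[0] = 0.0
    return points, velocities

# A short guide hair standing straight up, left to settle under gravity.
rest = np.column_stack([np.zeros(5), np.linspace(0.0, 1.0, 5), np.zeros(5)])
pts, vel = rest.copy(), np.zeros_like(rest)
for _ in range(240):
    pts, vel = step_strand(pts, vel, rest)
print(np.round(pts, 3))    # drooped slightly, but held in shape by the spring
```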
Now, the fidelity is nowhere near feature-film simmed-and-rendered hair, so don’t expect to throw millions of hairs through Unreal and maintain responsiveness. But that’s not the purpose of NeoFur. You are developing simulations that work in game situations, which requires a certain structure and economy. And within those restrictions, NeoFur performs incredibly well.
NeoFur is absolutely within the budget of people who just want to play around ($19), small developers making an indie game ($99) or larger game companies ($349, or custom packages).