
December 2016 Tech Review

The Foundry’s Katana

www.thefoundry.co.uk

Katana is a unique beast in the CG toolbox. Its functionality is focused and limited, which makes it exceptional at what it does rather than pretty good at a whole lot of things (a problem many generalist 3D packages suffer from). So what does Katana do? It collects geometry assets, assigns shaders, and uses lights and cameras to build a scene, which it then packages up and hands off to a render engine to produce the final images (along with AOVs). So, it’s basically a scene assembler. You don’t model in it. You don’t animate in it. You don’t simulate in it. You develop looks, light assets, and render them.

Katana uses a Nuke-like (or Houdini-like, if you prefer) node-based workflow to build scenes. Loading a camera, loading ABC animated geometry, and so on all happen as nodes, which essentially makes the setup procedural and reusable. Change something upstream and the result pops out downstream. So, if you have a bunch of similar scenes, you can recycle the same setup with different cameras or animation while getting the same look. The node tree, or even part of the node tree, can be saved as a “recipe,” which can be shared with other lighting artists.
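To make that concrete, here’s a minimal sketch of building such a node graph with Katana’s Python API (NodegraphAPI). The asset path and scene-graph locations are hypothetical, and node/parameter details can vary between Katana versions:

    # Build a tiny Katana recipe in Python: geometry + camera merged together.
    from Katana import NodegraphAPI

    root = NodegraphAPI.GetRootNode()

    # Bring in animated Alembic geometry (asset path is hypothetical).
    geo = NodegraphAPI.CreateNode('Alembic_In', root)
    geo.getParameter('abcAsset').setValue('/show/assets/hero_anim.abc', 0)
    geo.getParameter('name').setValue('/root/world/geo/hero', 0)

    # Create a camera node.
    cam = NodegraphAPI.CreateNode('CameraCreate', root)

    # Merge both branches so downstream nodes see a single scene graph.
    merge = NodegraphAPI.CreateNode('Merge', root)
    geo.getOutputPort('out').connect(merge.addInputPort('i0'))
    cam.getOutputPort('out').connect(merge.addInputPort('i1'))

Swap in a different Alembic_In or CameraCreate upstream and everything downstream of the Merge just re-cooks; that is the recycling the paragraph above describes.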

Look dev artists can work in parallel with the lighters, publishing new variants to audition without necessarily breaking everything in the process. Since shaders are applied within the Katana node tree rather than to the model directly, the workflow becomes a bit safer. Additionally, Katana shaders drive versions for all the installed render plug-ins: RenderMan, Arnold, V-Ray, and 3Delight each receive their own version of the shader when the scene is sent to render.
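As a rough illustration of that node-based assignment, here is a sketch using Katana’s stock Material and MaterialAssign nodes. The CEL expression and material location are hypothetical, and the exact parameter layout (e.g., paths.celString) may differ between Katana versions:

    # Assign a material to geometry via nodes, not on the model itself.
    from Katana import NodegraphAPI

    root = NodegraphAPI.GetRootNode()
    mat = NodegraphAPI.CreateNode('Material', root)

    assign = NodegraphAPI.CreateNode('MaterialAssign', root)
    # CEL expression selecting which scene-graph locations get the material
    # (parameter layout can vary by Katana version).
    assign.getParameter('paths.celString').setValue('/root/world/geo/hero//*', 0)
    assign.getParameter('materialAssign').setValue('/root/materials/heroMtl', 0)

Because the assignment lives in the graph, a look dev artist can republish the material and the lighter’s scenes pick it up without anyone touching the geometry.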

So these are all cool, and in my mind kinda critical. But it’s Katana’s scene management that really makes things worthwhile. It loads in assets, or subsets of assets, and works in conjunction with deferred-read processes (without having to save out to a specific file format per renderer). It only worries about the pieces of the scene that you really need, giving you an interactive render without bringing in the 2 billion polygons of the full geometry. This keeps load times down and the UI responsive. Because if you have artists waiting for files to load, that’s a lot of money wasted. Better to have them be productive.
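The practical upshot is that you can point an interactive render at just the branch you care about. A sketch using Katana’s stock Isolate node, again with hypothetical scene-graph locations:

    # Render only a sub-branch of the scene graph for fast interaction.
    from Katana import NodegraphAPI

    iso = NodegraphAPI.CreateNode('Isolate', NodegraphAPI.GetRootNode())
    locs = iso.getParameter('isolateLocations')
    locs.resizeArray(1)
    locs.getChildByIndex(0).setValue('/root/world/geo/hero/head', 0)

Everything outside that location is simply never expanded, which is where the load-time savings come from.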

Katana is powerful, and it caters to a niche market, one that may grow now that it’s available for Windows. It’s an expensive investment and may not be worth it for MomNPop’s VFX Studio. It also becomes more powerful if you can bolster it with pipeline tools developed in Python, or internally using Lua, which requires a bit more of a support team. So, all in all, it’s extremely useful, and it has been proven in studios like Imageworks (where it was first created), Digital Domain, ILM, and MPC, just to name a few. But it may be a luxury for tiny boutiques.


The Foundry’s Cara VR

www.thefoundry.co.uk

Virtual reality is still seemingly all the rage, and with things that are all the rage, people want to dive right in and start developing for them without really investigating the problems that stem from new and relatively unproven technology, especially when it comes to developing the actual content. VR in completely synthetic worlds was (comparatively) cake: you build CG stuff, and then through a real-time engine you get to look around and experience the new world.

But what happens when you try to develop live action for VR? It’s a whole new ball of wax. You have multiple cameras whose footage needs to be stitched together. Each camera has slightly different color. And then there is a film crew hanging around; how do you get rid of them? And I haven’t even gotten to the part where you incorporate visual effects! Yikes.

Well, as usual, The Foundry has a smarty-pants team (with a bunch of Ph.D.s, I’m sure) that has been looking into these problems, and the result is Cara VR, a plug-in that lives inside Nuke and provides a whole new toolset for dealing with the issues of VR.

The first step in the VR process is to solve the camera array. Cara VR ships with presets for the most-used VR camera rigs, but the solver is built to analyze all of the camera footage together to construct a digital rig, and it combines everything into a single flattened lat-long output image. Because VR is stereo, depth has to be considered, and the convergence solve can be split so you get the best convergence for both distant and closer objects.
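In practice that setup is just a few nodes in the Nuke script. A minimal sketch using Nuke’s Python API; the C_-prefixed node class is Cara VR’s, while the footage paths and camera count are hypothetical:

    # Solve a multi-camera VR rig from its per-camera plates.
    import nuke

    # One Read node per camera in the rig (paths are hypothetical).
    cams = []
    for i in range(6):
        r = nuke.createNode('Read')
        r['file'].setValue('/shots/vr010/cam%d/plate.####.exr' % i)
        cams.append(r)

    solver = nuke.createNode('C_CameraSolver')  # Cara VR's rig solver
    for i, r in enumerate(cams):
        solver.setInput(i, r)
    # Rig presets (popular camera arrays) are chosen in the node's properties;
    # the solve then refines camera positions from the footage itself.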

But now what? Each camera looks different, and there are mismatches along borders and ghosting on elements where the convergence is off. Cara VR has a ColourMatcher node for balancing color and exposure differences, and a Stitcher node for massaging those edges together. Traditional Nuke tools still work in the Cara VR world, so you can remove or paint out items that cause trouble if you can’t solve the problems automagically with the Cara VR toolset.
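Downstream, the match and stitch are just more nodes in the chain. Continuing the solver snippet above (knobs left at their defaults here):

    # Balance the cameras, then blend the overlaps into one lat-long image.
    match = nuke.createNode('C_ColourMatcher')
    match.setInput(0, solver)

    stitch = nuke.createNode('C_Stitcher')
    stitch.setInput(0, match)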

Now the cameras match, but rotational movement in the rig makes the footage look like you are viewing it through an aquarium, which will cause nausea and headaches in your VR audience. Once again, Cara VR comes to the rescue with a combination of a CameraTracker and a SphericalTransform (because VR is ostensibly a projection onto a sphere) to stabilize the footage. The stabilization is then fed through a MetaDataTransform node back to the original cameras to correct things before the stitch takes place, which maintains the fidelity of the image.
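The sphere is why this works: every pixel in a lat-long frame corresponds to a direction on the unit sphere, so undoing a rig rotation is just rotating those direction vectors rather than doing distortion-prone 2D warping. A quick illustration of the mapping (plain Python, not Cara VR code):

    import math

    def latlong_to_ray(u, v):
        """Map normalized lat-long image coords (u, v in [0,1]) to a unit direction."""
        lon = (u - 0.5) * 2.0 * math.pi   # longitude: -pi .. pi across the frame
        lat = (0.5 - v) * math.pi         # latitude: +pi/2 at top, -pi/2 at bottom
        return (math.cos(lat) * math.sin(lon),
                math.sin(lat),
                math.cos(lat) * math.cos(lon))

    # The center of the frame looks straight down +Z:
    print(latlong_to_ray(0.5, 0.5))  # -> (0.0, 0.0, 1.0)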

I have to kind of wrap this up, but something else to mention is that Cara VR also plays nice with the RayTracer introduced in Nuke 10, so you can render 3D elements directly in Nuke using the solved camera rig. On top of that, there is a new “slit-scan” renderer that fixes the pinching at the poles of the sphere.

Like Katana, this is a niche product, and for a plug-in it is kind of expensive. But hey, if you want to be at the forefront of technology, that’s the price of admission. When VR production becomes more ubiquitous, I suspect the price will come down as demand increases.


Esri CityEngine

www.esri.com

Cities are crazy, complex, organic creatures. When it comes to building and designing them from scratch, there isn’t a good way to approach it. Take a 2,000-year-old city like Cairo, for example: you are talking about a gajillion decisions by individuals and committees deciding where to put streets, where to build buildings, how to fix the dock after the last flood. A city is an evolving entity, and there is no way one artist is going to recreate that, at least not from scratch. And don’t think that recreating a real city is much easier.

CityEngine provides tools for generating cities, both fictional and real. It is developed by Esri, a company that has been in the business of geographic information systems for nearly 50 years. Esri has collected an enormous amount of data and developed algorithms for analyzing that data, which it does for companies and governments interested in how geography affects their operations. But someone figured out that all this data could help the visual effects industry, because we are always creating these things, and generally it takes a lot of work.

The engine within CityEngine allows you to import existing ArcGIS data from the Esri database (say, if you want to destroy a fictional skyscraper among the existing buildings of Los Angeles). Both the 2D information and building footprints, as well as any existing 3D data with textures, can be imported. Or, say you wanted to lay out a totally new (or old) city like … Troy. You could take the topography and generate a city pattern with new footprints.
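Here’s roughly what the import looks like in CityEngine’s built-in Python console. The CE scripting object is CityEngine’s own; the file names here are hypothetical:

    # Import GIS footprints and terrain into the current CityEngine scene.
    from scripting import *

    ce = CE()
    # Building footprints and terrain exported from ArcGIS (hypothetical files).
    ce.importFile(ce.toFSPath('data/la_downtown_footprints.shp'))
    ce.importFile(ce.toFSPath('data/la_downtown_terrain.tif'))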

Buildings and structures can be generated dynamically with a set of rules that determine and randomize height, facades, roof styles, and so on, and the geometry can be customized if needed. These fill in the original footprints to populate a fully fledged city, which can then be exported to numerous 3D programs through an FBX exporter.
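The rule-driven generation and FBX export can be scripted the same way. A sketch, assuming a hypothetical CGA rule file rules/building.cga with a start rule named Lot; the exact export-settings methods may differ across CityEngine versions:

    # Apply a CGA rule to every footprint shape, generate models, export FBX.
    from scripting import *

    ce = CE()
    lots = ce.getObjectsFrom(ce.scene, ce.isShape)
    ce.setRuleFile(lots, 'rules/building.cga')  # heights/facades randomized by the rule
    ce.setStartRule(lots, 'Lot')
    models = ce.generateModels(lots)

    settings = FBXExportModelSettings()
    settings.setOutputPath(ce.toFSPath('export'))
    ce.export(models, settings)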

Most of the applications for ArcGIS and CityEngine are about visualizing functional aspects (you can even determine the heat contribution from reflections off a new, yet-to-be-built building, which is kinda cool). This means it’s not an “export and render” type of tool; you’ll still have to put in some time to make it pretty. But by removing so much of the decision-making from the layout process, I can see CityEngine saving weeks if not months of development time, time you can then spend on making it look good.
