
April 2016 Tech Review

Hewlett-Packard Z Workstation

store.hp.com

The HP Z series of workstations continues to bring substantial power through hardware, firmware and software updates — even in the entry-level models. While I’m a fan of the 800s because I’m usually doing pretty robust visual effects tasks, the 200s should not be ignored as a viable option — especially as an introductory machine, or for artists who don’t need all that horsepower. Animators come to mind, as do tracking and roto artists.

My review system was the Z240 SFF (Small Form Factor) configuration, which is nearly half the size of its tower sibling — made to sit on your desk rather than under it — but it still packs a lot of punch.

The quad-core processor is the step up from Haswell to Skylake at 3.5GHz, but that’s not really the primary source of the speed. That comes from the NVMe PCIe SSD slots, which accept an HP Z Turbo Drive G2, providing extremely fast data access compared to typical SATA drives. This is critical for retrieving large data sets like particles in fluid sims, or simply long image sequences. And with a potential 64GB of RAM across the four UDIMM slots, you can throw quite a bit at the machine without taking it down.
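For a sense of what that bandwidth means in practice, here’s a back-of-the-envelope sketch in Python. The throughput figures are ballpark assumptions (roughly 550 MB/s for a SATA III SSD, around 2 GB/s for an NVMe drive of this era), not benchmarks of the review unit:

    # Rough load-time estimate for a 1,000-frame EXR sequence at ~50 MB per frame.
    # Throughput numbers are ballpark assumptions, not measured benchmarks.
    frames = 1000
    frame_mb = 50.0
    total_mb = frames * frame_mb           # 50,000 MB of image data

    sata_mbps = 550.0                      # practical ceiling of a SATA III SSD
    nvme_mbps = 2000.0                     # typical NVMe sequential read of the era

    print("SATA SSD: {:.0f} s".format(total_mb / sata_mbps))   # ~91 seconds
    print("NVMe SSD: {:.0f} s".format(total_mb / nvme_mbps))   # ~25 seconds

Shaving a minute off every sequence load adds up quickly over a day of iterations.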

Graphics are driven by either NVIDIA or AMD. My machine sports an NVIDIA Quadro K1200 with 4GB of VRAM — which is pretty beefy. I do pretty beefy stuff. Lower-cost models would have a FirePro W2100, or an NVIDIA Quadro K420 or K620, which should provide more than enough pixel power for most artists. But if you are using GPU-accelerated compositing or 3D tools, I’d recommend going for broke.

With all this power, you’d think the box would be jet-engine noisy. But because HP is always looking for a balance of power and energy conservation, there is a real effort to reduce heat, which lightens the workload on the cooling fans and makes for quieter machines. That, and the case design does a great job of keeping things pretty whispery.

For individuals, this is a great entry system, a workstation powerful enough to get most animation, art and visual effects tasks done — especially if you boost it up with some RAM and a Turbo Drive. But for studios, you could populate an entire roto or tracking department with a fleet of these machines at a fraction of the cost of the Z840s — which are great machines, but potential overkill.

Chaos Group VRscans

www.vrscans.com

The idea of creating photorealistic shaders from scratch is daunting … for any render engine. There may be repositories and libraries of pre-built shaders that you can start from, but those never really work out of the box, and could require hours of tweaking to get even an approximation of the original surface.

Well, the developers over at Chaos Group — the guys who brought us V-Ray — have been working for the past couple of years on a scanner that records not only diffuse color data, but reflectance and glossiness as well. The information is saved into a BTF (Bidirectional Texture Function), which can be used within V-Ray 3.3 as a VRscan material — different from the more traditional BRDF functions that other shader systems use (including V-Ray’s regular shader). Since all these components work together to generate what we perceive as “leather” or “satin” or whatnot, the scan brings you close to photoreal, and you can begin tweaking from there.
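For context, hooking up one of these scanned materials in V-Ray for Maya comes down to creating the material node and pointing it at the .vrscan file. Here’s a minimal sketch; the node type name ‘VRayScannedMtl’ and its ‘file’ attribute are assumptions based on the shipping plug-in, so verify them against your V-Ray build:

    # Minimal sketch: assign a VRscans material in Maya with V-Ray 3.3+.
    # The node type 'VRayScannedMtl' and its 'file' attribute are assumptions;
    # check the V-Ray docs for your build before relying on them.
    import maya.cmds as cmds

    def assign_vrscan(objects, vrscan_path):
        """Create a scanned material, load a .vrscan file, assign it to objects."""
        mtl = cmds.shadingNode('VRayScannedMtl', asShader=True, name='scan_mtl')
        cmds.setAttr(mtl + '.file', vrscan_path, type='string')
        sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                       name=mtl + 'SG')
        cmds.connectAttr(mtl + '.outColor', sg + '.surfaceShader', force=True)
        cmds.sets(objects, edit=True, forceElement=sg)
        return mtl

    # Example: assign_vrscan(cmds.ls(selection=True), '/scans/brown_leather.vrscan')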

The whole idea is similar to Quixel’s Megascans. But the difference is that Megascans feed into map channels of standard renderer shaders — which you still need to dial in once applied. The VRscan shader incorporates the scanned values into the shader itself, which can then be used as a baseline reference for typical shader development, or incorporated into something like a game engine.

The approach is great when you have to match surfaces to ones captured photographically. But it’s also amazing for industries outside of entertainment (as if) — like fabrication, where you are trying to prototype products before investing in the raw materials to actually build them. Real-world scans will let you visualize that stuff with confidence before making costly decisions.

Despite the long development time, the tech was just released and is starting to get traction, both as a subscription service with access to a growing library, and as a scanning service where clients can send in project-specific materials to be scanned. The process is limited to opaque hard surfaces — so no skin, or glass, or anything like that. But this is a pretty amazing start.

Glyph Software Mattepainting Toolkit

www.glyphfx.com

One component of visual effects that doesn’t really get much love, technically speaking, is matte painting. The technique itself is one of the oldest in the book, starting with painted sets from Georges Méliès around 1900. Willis O’Brien used matte paintings in King Kong 80-some years ago. Albert Whitlock was frequently hired by Hitchcock. But back then, the artists would paint on glass, and it would be photographed either with a piece painted black in front of it to generate a matte, or with the paint scraped away so the live action could be shot through the matte painting — capturing it all in one pass.

Then along came digital painting. And after that, we could project paintings onto geometry. And then, everyone was all like, “Send it to DMP — they’ll fix it” (DMP = Digital Matte Painting). So, with the high demand for such things, it became necessary to have tools to manage it all.

Traditionally (in digital terms), you have a matte painting that is supposed to be viewed from one camera angle, projected onto geometry the way a film projector throws an image onto a screen. A damaged building in a city, for example. If you move to the side and reveal the other wall, the painting doesn’t work anymore, and you have to make another painting from the new angle. But that painting doesn’t work from the first position, so you need to blend the two with a mask.
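To appreciate how much wiring even one of those projections takes by hand, here’s a minimal sketch of a single projection layer built with vanilla Maya nodes (the function and names are illustrative, not Glyph’s code):

    # One camera projection in vanilla Maya: file texture -> projection node,
    # locked to a specific projection camera.
    import maya.cmds as cmds

    def make_projection(painting_path, camera_shape):
        """Project one painting from one camera; names are illustrative."""
        tex = cmds.shadingNode('file', asTexture=True, name='dmp_painting')
        cmds.setAttr(tex + '.fileTextureName', painting_path, type='string')

        proj = cmds.shadingNode('projection', asUtility=True, name='dmp_proj')
        cmds.setAttr(proj + '.projType', 8)  # 8 = perspective projection
        cmds.connectAttr(tex + '.outColor', proj + '.image')
        cmds.connectAttr(camera_shape + '.message', proj + '.linkedCamera')
        return proj

    # proj.outColor then feeds a shader's color; the second angle's projection
    # gets layered over it with a blend mask.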

Now imagine that there are fifty buildings. This is where Glyph comes in.

Glyph Software’s Mattepainting Toolkit (gs_mptk) is a simple but powerful tool that creates layered shaders, lets you manage the textures (a.k.a. paintings) for each layer (up to 16), each tied to its own projection camera, and then controls the geometry that the shader is attached to. It uses Viewport 2.0 in Maya to display the paintings in the context of the shot. And on top of that, it has a toolset that makes managing everything easier.

For instance, you can generate occlusion and coverage maps. The coverage maps show which parts of the objects in the scene the shot camera sees from the beginning to the end of the shot — revealing to the matte painter where the painting ends, and avoiding unnecessary work.
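Conceptually, a coverage map is just a visibility test accumulated over the length of the shot. Here’s an illustrative pure-Python sketch; ‘project’ stands in for the shot camera’s world-to-screen transform, and a real implementation would also have to account for occlusion by other geometry:

    # A vertex contributes to the coverage map if it lands inside the image
    # on any frame of the shot. Illustrative only.
    def in_frame(ndc):
        x, y, z = ndc
        return -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0 and z > 0.0

    def coverage(vertices, camera_per_frame, project):
        covered = set()
        for cam in camera_per_frame:           # one camera pose per frame
            for i, v in enumerate(vertices):
                if in_frame(project(cam, v)):  # seen on this frame?
                    covered.add(i)
        return covered  # paint only the regions these vertices map to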

Then there are mattes in many different flavors, which are used to blend the different projections. Shadow Occlusion essentially turns the projection camera into a light, and whatever geometry is not “illuminated” reveals the next projected painting down in the layered shader — a different projection from a different camera. Facing Ratio does a similar thing, but fades the mask as the geometry’s faces turn away from the camera. And finally, you can go old school and explicitly paint the areas you want to blend using Maya’s internal paint tools. Once you are done, you can bake the textures down to the original UV maps on the objects.
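Glyph bundles these into its layered shader, but the Facing Ratio idea maps onto a plain Maya utility network. A minimal sketch, not Glyph’s actual implementation:

    # Facing-ratio blend mask with vanilla Maya nodes: samplerInfo.facingRatio
    # is ~1 where the surface faces the camera and falls toward 0 at grazing
    # angles; remapValue shapes the falloff.
    import maya.cmds as cmds

    info = cmds.shadingNode('samplerInfo', asUtility=True, name='facing_info')
    remap = cmds.shadingNode('remapValue', asUtility=True, name='facing_falloff')
    cmds.connectAttr(info + '.facingRatio', remap + '.inputValue')
    cmds.setAttr(remap + '.inputMin', 0.3)   # start fading at this facing ratio
    cmds.setAttr(remap + '.inputMax', 0.7)   # fully visible above this value

    blend = cmds.shadingNode('blendColors', asUtility=True, name='proj_blend')
    cmds.connectAttr(remap + '.outValue', blend + '.blender')
    # Frontal projection -> blend.color1, side projection -> blend.color2:
    # the frontal painting wins where the surface faces its camera.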

This is the core functionality of Glyph’s Toolkit… but it doesn’t stop there. You can also import point clouds generated from photogrammetry software like PhotoScan and Photosynth.

For matte painters, this tool is a must. If I were to quibble, I would love the texture baking to utilize UDIM UV space — for feature film FX, the traditional 0-1 UV space just doesn’t cut it anymore. But maybe we’ll see that in future versions.
