The way Fajardo explains it, SSS research at the moment falls into one of two camps. One camp does not change the workflow; it just makes the images look a bit better. You get more sharpness in the pores of the skin. It is hard to see, but when you see it, it is good. That is one axis, but there is another: making the whole process more efficient, and that is what we have done. We are really proud of this new system; it changes the way you think about SSS and makes it a lot easier.
The memory issue is really key; in fact the memory constraints on this job would have made the project impossible to complete without a major rethink, had it not been for the new approach. We co-developed it with Sony Pictures Imageworks, and the results are really good in terms of performance compared to point clouds. The new approach scales much better, works much better with multi-threading and has lower memory requirements.
Users do not need to worry about the density of point clouds or about tweaking parameters. Fajardo is incredibly excited about the creative windows this opens and about the scalability, especially the notion of being able to render entire Massive crowds with ray traced SSS. This is a remarkable commitment to image quality, as almost the entire industry has stopped short of a full Monte Carlo solution for large scale production. But this new implementation is different. The Spider-Man technique was single scattering; Fajardo explains the difference.
You can use SSS to simulate just what we call the first bounce under the surface; that is single scattering. It is an easier and well defined problem, and that is what they did on Spider-Man. Along with Sony, we helped develop single bounce scattering done more efficiently with GI. Now we are talking about multiple scattering, which is what gives you the softness and bleeding of light. That is a lot more difficult, and it is only now becoming possible as people start to do it with ray tracing.
Up to now you really needed to use point clouds, and it was painful. I am so happy that we are putting the final nail in the coffin of point clouds. I can't even tell you! For many years SSS has been the last place you needed point clouds. A few people have been trying to do multiple scattering with ray tracing, and we touch on this in our talk, but it was not very efficient. We use a new importance sampling technique for sub-surface scattering, which we call BSSRDF Importance Sampling.
Most of the time people find just one sampler or method for a task, but if you are smart enough you can find multiple samplers for the same task and then combine them. The user should never need to know; it is unrelated to the art of using a renderer. Yet while the user may never need to know about it directly, multiple importance sampling (MIS) is incredibly important to image quality and render speed. Importance sampling is used in the new SSS work above and also with area lights. Area lights are not only great tools for producing very attractive lighting, as any DOP knows; they are also key to using IBL with HDR lights in a scene, and to many other areas of modern production.
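The idea of combining several samplers with MIS can be shown in a toy form. The sketch below (an illustration only, not Arnold's code) estimates a simple one-dimensional integral with two sampling techniques combined by the balance heuristic, the weighting most renderers use:

```python
import random


def f(x):
    return x * x  # toy integrand; its true integral over [0, 1] is 1/3


def p_uniform(x):
    return 1.0          # pdf of the uniform sampler on [0, 1]


def p_linear(x):
    return 2.0 * x      # pdf of a sampler that favors large x


def balance_weight(p_this, p_other):
    # MIS balance heuristic: weight a technique by its share of total density
    return p_this / (p_this + p_other)


def mis_estimate(n, rng=random.random):
    """Unbiased MIS estimate of the integral of f over [0, 1]."""
    total = 0.0
    for _ in range(n):
        # one sample from each technique, each weighted by the balance heuristic
        x1 = rng()                        # uniform sampler
        x2 = (1.0 - rng()) ** 0.5         # linear sampler, avoiding density zero
        total += balance_weight(p_uniform(x1), p_linear(x1)) * f(x1) / p_uniform(x1)
        total += balance_weight(p_linear(x2), p_uniform(x2)) * f(x2) / p_linear(x2)
    return total / n
```

Neither sampler alone is ideal for every integrand, but the combined estimator stays robust wherever either technique has high density, which is exactly why MIS matters for area lights and BSSRDFs.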
This rather dry sounding paper explains how much more sensibly lights can be sampled given the spherical projection nature of working in computer graphics. In the diagram below left it can be seen that a rectangular light can appear bowed, in much the same way a real light such as a Kino Flo appears bowed when shot with an 8mm fisheye lens (right). This effect is much stronger than one might imagine; it is worth checking any square for yourself. Count in from the left and bottom and you can see that both of the shapes in A, marked Area sampling, are exactly the same. What is needed is to start from a distribution over the solid angle the light subtends.
This directed sampling is simply a refinement that falls under improved importance sampling, and it is most noticeable closer to the lights. Exclusively, we can show an animation rendered with the normal area sampling and then, with no other change than the new IS, the spherical sampling version. The reduction in noise is dramatic. But Fajardo says he would add a fourth: threading scalability.
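The spirit of sampling a light by the solid angle it subtends, rather than by its surface area, can be sketched for the simple case of a spherical light. This is a toy illustration, not the paper's rectangle technique; the helper name and the assumption that the shading point sits at the origin looking along the light axis are mine:

```python
import math


def sample_sphere_light_cone(center_dist, radius, u1, u2):
    """Uniformly sample the cone of directions subtended by a spherical
    light of the given radius at distance center_dist (hypothetical
    helper; the light axis is taken to be +z from the shading point)."""
    sin2_theta_max = (radius / center_dist) ** 2
    cos_theta_max = math.sqrt(max(0.0, 1.0 - sin2_theta_max))
    # uniform over the spherical cap: cos(theta) in [cos_theta_max, 1]
    cos_theta = 1.0 - u1 * (1.0 - cos_theta_max)
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    direction = (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
    # constant pdf over the cap's solid angle, so no sample is wasted
    pdf = 1.0 / (2.0 * math.pi * (1.0 - cos_theta_max))
    return direction, pdf
```

Every sampled direction is guaranteed to hit the light, whereas naive area sampling draws points on the light's surface whose projected density on the hemisphere is uneven, which is the source of the extra noise near the lights.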
Today machines can have 32 threads, and that number is only going to increase. Arnold has incredible multi-threading performance. Fajardo argues that things one might take for granted, like texture mapping, can become threading bottlenecks unless the renderer and its development team benchmark, analyze and optimize on machines with many cores. In the case of texture mapping, the problem is that you need a texture cache to hold the hundreds of GB of texture data required to render a complex VFX shot.
This becomes increasingly important, of course, as artists do lighting work on increasingly complex scenes on powerful workstations with an ever increasing number of CPU cores. But run it on all the threads of a powerful machine, as we did, on a simple scene with a single Ptex-textured polygon, and the results are abysmal. Katana has never been thread-safe and therefore forces single-threaded loading of geometry, though I imagine they will fix this eventually.
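Why a texture cache becomes a threading bottleneck is easiest to see in code: a single global lock serializes every tile lookup across all render threads. One common mitigation is to shard the cache so threads usually take different locks. The sketch below is a generic illustration of that idea, not Arnold's implementation:

```python
import threading
from collections import OrderedDict


class ShardedTextureCache:
    """Toy sharded LRU cache: one lock per shard, so threads fetching
    different tiles rarely contend on the same lock (a sketch under
    assumed names, not any particular renderer's code)."""

    def __init__(self, shards=16, capacity_per_shard=256):
        self._shards = [OrderedDict() for _ in range(shards)]
        self._locks = [threading.Lock() for _ in range(shards)]
        self._capacity = capacity_per_shard

    def get(self, tile_key, load_fn):
        i = hash(tile_key) % len(self._shards)
        with self._locks[i]:
            shard = self._shards[i]
            if tile_key in shard:
                shard.move_to_end(tile_key)   # mark most recently used
                return shard[tile_key]
        data = load_fn(tile_key)              # do the slow disk load outside the lock
        with self._locks[i]:
            shard = self._shards[i]
            shard[tile_key] = data
            shard.move_to_end(tile_key)
            if len(shard) > self._capacity:
                shard.popitem(last=False)     # evict least recently used tile
        return data
```

The key design choice is loading outside the lock: a 32-thread render must never hold a shared lock across a disk read, or every other thread touching that shard stalls for the duration.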
Most hair-generation pipelines are ancient and therefore not ready for multi-threading. Unless a company is hell-bent on systems performance on modern machines, as Solid Angle is, multi-threading scalability is the Achilles heel of production renderers. Arnold also has a code base inside Sony Pictures, due to the historical development of the product (see our original rendering article). The cloud work in Oz and the Apollo rocket launch in Men in Black 3 are both excellent examples of the impressive new volumetric innovations.
These innovations, as has historically happened with all such advances, have been shared between SPI and Solid Angle. This work builds on research that Christopher Kulla at SPI has been doing and publishing in conjunction with Solid Angle, and with Fajardo in particular. The volumetric lights have proven very popular with clients. There are two aspects to volumetric lighting: homogeneous, or uniform, lighting (a spot light in an even fog with a beautiful cone of light) and heterogeneous, non-uniform lighting, which is of course much more difficult.
V-Ray from Chaos Group is one of the most successful third party renderers, with wide adoption. Stuart White, head of 3D at Fin Design, a boutique high end commercials, animation, design and effects company in Sydney, uses V-Ray and finds it a perfect fit, providing high end, ray traced, accurate results without the pipeline and artist overhead of non-raytraced solutions.
It makes consistently beautiful images whilst being easy to use, affordable and pretty bulletproof, even in the face of some seriously heavy scenes. As seen above, V-Ray produces excellent images, with particularly good fur and SSS, and is used around the world by large facilities, but especially by mid-sized companies producing high end work.
It is also now available on several popular cloud services, and was used that way by Atomic Fiction for Flight. There are various versions of V-Ray supporting different products, such as Max, Maya, Rhino, SketchUp and more, but for the purposes of this article we can assume they are the same from a rendering point of view. V-Ray is fundamentally a ray tracer, and it does brute force ray tracing very well, but the team at Chaos Group has added all types of optimizations for architectural visualization and other areas. The product therefore has radiance caches and a number of other features which would be classed as biased, yet it can work very much as an unbiased renderer.
From there, there is always the artistry. The product has used MIS since its inception. V-Ray is very much a modern renderer: sampling is often handled for the artist, keeping the interface very clean, by using adaptive sampling. The adaptive sampler increases sample counts based on a noise threshold system.
The renderer checks neighboring pixels and applies more samples until the noise threshold is reached. In the early days of the product the company had to deal with efficient memory use, to allow scenes to be rendered in what was then a very small amount of RAM. The team deployed a proxy system which was very successful and is still used today.
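A noise-threshold-driven sampler of the kind described can be sketched as follows. This is a generic illustration of adaptive sampling, not V-Ray's actual algorithm; the function and parameter names are hypothetical:

```python
import math


def render_pixel(sample_fn, noise_threshold=0.01, min_samples=16, max_samples=4096):
    """Keep adding samples until the estimated standard error of the
    pixel mean falls below noise_threshold (toy sketch of threshold-
    driven adaptive sampling)."""
    n, mean, m2 = 0, 0.0, 0.0
    while n < max_samples:
        x = sample_fn()
        n += 1
        delta = x - mean          # Welford's online mean/variance update
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_samples:
            std_error = math.sqrt(m2 / (n - 1)) / math.sqrt(n)
            if std_error < noise_threshold:
                break             # pixel is clean enough; stop early
    return mean, n
```

Smooth pixels terminate after a handful of samples while noisy ones (soft shadows, glossy reflections) automatically receive more, which is why the artist only ever sees a single quality threshold in the interface.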
The proxy system avoids having to load all the geometry at once. We are also looking to implement a simple skin shader with simple, artist-friendly settings. Some customers have written their own SSS shaders for V-Ray, including multipole and quantized diffusion implementations. It is possible to render a full brute force solution inside V-Ray, but it will naturally be slow. When Dan Roarty is working, he sets up a few area lights behind the head to see how much light passes through the ears.
This helps him gauge how thick the SSS should be. With that said, there are a couple of specific techniques that some artists might not be aware of. One very useful technique is to use a separate bump map for the specular component. The advantage of this approach is that you can introduce an extremely fine bump map that affects only the specular, which is very useful for controlling the microstructure of a surface.
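The separate specular bump idea can be sketched in a few lines: the diffuse term is shaded with the base normal, while only the specular lobe sees the finer, bumped normal. This is a toy Blinn-Phong illustration with hypothetical names, not V-Ray shader code:

```python
def shade(base_normal, spec_bump_normal, light_dir, view_dir,
          kd=0.8, ks=0.2, shininess=64):
    """Diffuse uses the base normal; the specular lobe alone uses a
    normal perturbed by a much finer bump map. All vectors are assumed
    normalized 3-tuples."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # diffuse lobe: coarse geometry only
    diffuse = kd * max(0.0, dot(base_normal, light_dir))

    # Blinn-Phong half vector, evaluated against the specular-only normal
    half = tuple(l + v for l, v in zip(light_dir, view_dir))
    norm = sum(h * h for h in half) ** 0.5
    half = tuple(h / norm for h in half)
    specular = ks * max(0.0, dot(spec_bump_normal, half)) ** shininess

    return diffuse + specular
```

Because the fine bump never touches the diffuse term, it sharpens and breaks up the highlight (skin micro-detail, dry lips) without making the overall shading look crunchy.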
A great example of this quality can be seen in something like dry lips, where you have two very distinct materials interacting with each other: the soft, highly scattering skin of the lips as a whole, and the more diffuse, dry skin on top. Chaos Group is expecting to ship version 3 soon. Hair rendering should be 10 to 15 times faster in version 3.
There will also be a new simplified skin shader in version 3, along with the open source support mentioned above: Alembic and OpenEXR 2. Viz maps are also being introduced; these are V-Ray material definitions which can be common across multiple applications like Max and Maya.
Also, as mentioned above, version 3 will support OSL. Both OSL support and the Viz maps are now at the testing stage. Maxwell Render is a standalone unbiased renderer designed to replicate light transport using physically accurate models. Apart from that, we wanted to create a very easy to use tool and make it very compatible, so everybody can use it no matter what platform they are on. Maxwell Render is unbiased; this means that the render process will always converge to a physically correct result, without the use of tricks.
This is very important both in terms of quality and ease of use. Maxwell really does mirror the way light works, without tricks and hacks. The software can fully capture all light interactions between the elements in a scene, and all lighting calculations are performed using spectral information and high dynamic range data. A good example of this is the sharp caustics which can be rendered using the Maxwell bi-directional ray tracer, with something of a Metropolis Light Transport (MLT) approach as well.
The algorithms of Maxwell use an advanced bi-directional path tracer with a hybrid Metropolis implementation that is unique in the industry, together with multi-core threading to optimize speed in real world production environments. The team is focused on practical issues such as multi-threading. We have been very focused on multi-threading, so when you had just one or two cores Maxwell might have been slow, but now people have 8 or 12 cores.
It is common now to use Maxwell for animation, something that was fairly unrealistic just four or five years ago. Normal path tracing is slowed or confounded by optical phenomena such as bright caustics, chromatic aberration, fluorescence or iridescence. MLT can be very fast on complex shots and yet more expensive to render on others.
For example, its approach of mutating paths bi-directionally helps it focus in on problems such as light coming through a keyhole into a darkened room, or producing very accurate caustics. But full MLT can be slower than other algorithms when rendering simple scenes. With MLT you cannot always use the same sampling techniques you can use with a path tracing system, at least not everywhere in the code.
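The core Metropolis idea, mutating the current sample and accepting the mutation with a probability given by the ratio of "brightness", can be shown with a toy one-dimensional sampler. This is an illustration of the principle only; real MLT mutates full light paths, not numbers:

```python
import random


def metropolis_mean(f, n_steps, step=0.1, rng=random):
    """Toy Metropolis sampler over [0, 1]: propose a small local mutation
    of the current sample and accept it with probability f(new)/f(old).
    Returns the running mean, which converges to the mean of the
    distribution proportional to f."""
    x = 0.5
    fx = f(x)
    total = 0.0
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)   # small local mutation
        if 0.0 <= y <= 1.0:
            fy = f(y)
            if fy > 0.0 and rng.random() < min(1.0, fy / fx):
                x, fx = y, fy              # accept the mutation
        # rejected mutations repeat the current sample, as Metropolis requires
        total += x
    return total / n_steps
```

The strength and the weakness are both visible here: once the sampler finds a bright region (the keyhole shaft of light) its local mutations stay there and explore it cheaply, but on a simple, evenly lit scene all that machinery buys nothing over plain independent sampling.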
While pure MLT does not seem to be favored by any part of the industry, Next Limit believes there is a lot to be learnt from MLT, and they are constantly exploring how to improve bi-directional path tracing. Maxwell Render includes Maxwell FIRE, a fast preview renderer which calculates an image progressively, so renders can be stopped and resumed at any time. If the renderer is left long enough it will simply converge to the correct full final solution.
It is very good for preview, but normally once an artist is happy with the look they switch to the production renderer for the final frames. One of the most challenging things for an unbiased renderer is SSS. Combined with the multi-light feature, advanced ray tracing, massive scene handling, procedural geometry for fur, hair and particles, and a Python SDK for custom tools, Maxwell is a production tool today.
RealFlow has been hugely successful in fluid simulation, so providing good rendered visualisation of simulations is a great bonus; after all, most sim artists are not necessarily lighters, so easy, high quality renders give the sims team more information on what the sims will look like. In countless films now, it seems, there is a Houdini component helping with fluid effects, destruction sequences, smoke, flames or just procedural implementations of complex animation.
Like many people, the first time I fired up a Mantra render I was thoroughly disappointed by the lack of prettiness, the clunky speed, and having to go to a few different places in the software to start adjusting things. But when it came time to get the job done, Mantra has never let me down. What at first seems like a slow render of a sphere manifests itself in production as a highly efficient render of millions of particles with full motion blur.
And what seems like a lack of user interface ease for lighting and submission turns into highly automated and dependable systems in the later stages of production. I submitted my final elements as lighting elements.
Everyone was on board thinking how well we had lit the elements, except for the compositing department, who wanted to know why the motion blur was of such high quality. Of course Houdini can be used for any 3D animation, but today it is known for its effects animation more than anything else. Mantra is included with Houdini. fxguide celebrated the 25th anniversary of the company, and in that story we wrote about one early hire: Mark Elendt, who at the time was working for an insurance company.
Mantra is still to this day the renderer packaged with Side Effects Houdini. Today Mantra is very much a powerful, solid option for rendering, one of the best known in-house renderers from any of the primary 3D vendors. It is very much a tool that could be marketed separately, but it has always been part of Houdini. With ray tracing, Mantra does not refine geometry if it knows how to ray trace it natively.
The other advantage is that when you have polygons smaller than a pixel, you can otherwise spend a lot of time breaking up objects that are already smaller than a pixel. VEX is a high-performance expression language used in many places in Houdini, and Mantra uses VEX for all shading computation, including light, surface, displacement and fog shaders. Thanks to the flexibility of VEX, the core ray tracer could have either a biased or an unbiased renderer written on top of it.
Side Effects has seen a lot of interest in physically plausible rendering, though they actually built support some time ago, before there was as much interest. Today there is far more interest in physically plausible pipelines, which has validated a lot of the early work Side Effects did in this area. Mantra and Houdini are known for their volumetric work, having won technical Oscars in this general area of research (micro-voxels). Side Effects was one of the first companies to work with DreamWorks on OpenVDB, partnering with them to help make it open source.
Side Effects really supports open source, also very actively supporting Alembic, for example. They have also done serious work in volumetric lighting, providing, say, fire as a light source, a generalization of their area lights to handle volumes as well as surfaces. The next release will not only have improved Alembic support but also new lighting tools for Houdini and Mantra interaction. As the next release is not until later in the year, Side Effects may ship support for OpenEXR 2 before then.
Mantra has had its own format for deep data for some time, but this would now be output in the new OpenEXR 2 format. Mantra supports SSS using a point cloud approach with an irradiance cache, based on the Jensen dipole model.
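The Jensen dipole evaluates a radial diffusion profile R_d(r): the diffuse reflectance at distance r from the point where light enters, computed from a real source placed below the surface and a mirrored virtual source above it. The sketch below is a direct transcription of the published model, not Mantra's code; the default index of refraction is illustrative:

```python
import math


def dipole_rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Jensen et al. dipole diffusion profile R_d(r) for absorption
    sigma_a and reduced scattering sigma_s_prime."""
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)   # effective transport coeff.

    # diffuse Fresnel reflectance approximation and boundary term A
    fdr = -1.440 / eta ** 2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a_coef = (1.0 + fdr) / (1.0 - fdr)

    z_r = 1.0 / sigma_t_prime                  # depth of the real source
    z_v = z_r * (1.0 + 4.0 * a_coef / 3.0)     # height of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)         # distance to real source
    d_v = math.sqrt(r * r + z_v * z_v)         # distance to virtual source

    def contrib(z, d):
        return z * (sigma_tr * d + 1.0) * math.exp(-sigma_tr * d) / d ** 3

    return alpha_prime / (4.0 * math.pi) * (contrib(z_r, d_r) + contrib(z_v, d_v))
```

A point cloud SSS solver evaluates exactly this kind of falloff profile between cached irradiance samples, which is why the profile, not ray counts, dominates the look of the result.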
There is a ray tracing / path tracing approach in the lab, but mainly to have a ground truth to compare the point cloud against. Research is continuing, but there are no immediate plans to change the system or approach. Mantra continues to improve its speed; this is especially true of the ray tracer. Clinton joked that some of the work is new algorithms, and some is more dumb stuff that was broken.
In one isolated case a simple fix to opacity made a huge difference to fur rendering: literally one tweak yielded renders several orders of magnitude faster on complex fur for one client. Like many other companies, Side Effects is working hard on moving things from single threaded to multi-threaded. Here a really wide benefit can be felt by customers, especially those on newer 8 and 12 core machines.
It is up to the user to choose whichever renderer they feel comfortable with and is best for the project. There may also be a V-Ray update coming. The key area to watch for with V-Ray is support for light mapping. Light mapping, also called light caching, is a technique for approximating GI in a scene.
This method was developed by Chaos Group and will be in R15, to be announced on July 23rd. It is very similar to photon mapping, but without many of its limitations. The light cache, or map, is built by tracing many eye paths from the camera. Each bounce in a path stores the illumination from the rest of the path into a 3D structure very similar to a photon map; in a sense, though, it is the exact opposite of the photon map, which traces paths from the lights and stores the accumulated energy from the beginning of the path.
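The build step described above can be sketched as follows. This is a generic illustration of the idea, not Chaos Group's implementation; `trace_eye_path` is a hypothetical callback that returns, for one eye path, each bounce position with the radiance carried by the rest of that path:

```python
from collections import defaultdict


def build_light_cache(trace_eye_path, n_paths, cell_size=0.5):
    """Trace many eye paths and, at each bounce, accumulate the radiance
    of the rest of the path into a spatial hash keyed by quantized
    position. Returns per-cell average radiance for later lookup."""
    cache = defaultdict(lambda: [0.0, 0])   # cell -> [radiance sum, sample count]
    for _ in range(n_paths):
        for position, radiance in trace_eye_path():
            cell = tuple(int(c // cell_size) for c in position)
            entry = cache[cell]
            entry[0] += radiance
            entry[1] += 1
    # average per cell; the final render looks these values up instead of
    # tracing full secondary bounces
    return {cell: total / count for cell, (total, count) in cache.items()}
```

Starting from the camera is what distinguishes this from photon mapping: the cache is naturally densest exactly where the camera actually looks, rather than wherever the lights happen to deposit energy.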
Since version 13 there has been a second, physical renderer; the light mapping is in the physical renderer, for example. The SSS shader was completely rewritten from scratch for version 13, and thus is fairly new. The standard SSS set, with its varying wavelength adjustments, has proven popular with customers. There is a desire amongst C4D users, as amongst many others, to move to a simpler lighting model, with no loss in quality but with an easier, more natural lighting setup phase that behaves more as one might expect and involves fewer hacks and tricks.
The product is the leading application for motion graphics, but it is more and more used in visual effects, and while that is not a primary focus for the company, they are happy with the growth the product has experienced in both the entertainment space and the product visualisation community. Maxon has customers in the automotive industry and at many other major product design companies. The main goal remains the motion graphics industry. While not a rendering issue directly, this has helped bring the product to an even wider audience and given the brand vast extra international exposure.
There is also a live or dynamic link from Premiere to AE which allows teams to work together more effectively and concurrently in a production. As Tim Clapham notes, this, combined with a central location to control global samples for blurry effects, area shadows, sub-surface scattering and occlusion shaders, results in an enhanced workflow with more realistic renders. Modo from Luxology, now at The Foundry, is expanding on several fronts: firstly because, as part of The Foundry, it is more exposed to the high end effects market, but also because key supervisors such as John Knoll, senior visual effects supervisor and now chief creative officer at ILM, have independently been forthcoming in saying how much they like the clean and fresh user experience of Modo and its renderer.
This allowed actors to know where to look, and anyone to judge what the framing should allow for; in effect it was a virtual set, on set, via Modo and an iPad. ILM uses a variety of renderers and Knoll is no different, but he seems to genuinely like the Modo tools and renderer for certain projects or tasks. Modo is a hybrid renderer: if one keeps an eye on the settings, it can be run in a physically plausible, unbiased way.
The renderer is not as mature as some; for example its EIS (Environment Importance Sampling) does not yet provide IS on directional lights, nor full MIS covering materials. But EIS does work well for both Monte Carlo and irradiance caching approaches and produces greater realism from HDR light probe captures. Furthermore, the team plans to expand IS throughout the product. Peebler points out that every renderer makes pretty pictures and can render photorealistic images; the key now is getting there faster.
Conversely, it was design and architectural clients who requested embedded Python, which has since been a big boost to many effects and animation customers. Luxology is one of the companies focused on a variety of markets, pointing out that some of their design customers are doing vfx work, while vfx companies like Pixomondo are doing design work to even out production cycles.
For Peebler, the belief is that they can cover multiple markets with the same core product, without the need to bifurcate to address them individually. The Modo renderer is provided both as a final render engine and as an optimized preview renderer that updates as you model, paint or change any item property within Modo.