These two demos show final-frame video generated with the CentiLeo renderer for two scenes: a New York downtown scene of 25 million polygons and 8 GB of textures, and a dynamic simulation of the Boeing 777 scene of 360 million polygons (14 GB of geometry data). The latter scene was provided by David Kasik, Boeing Corp. For both demos, each frame was produced with 1000 iterations of progressive path tracing (ray tracing) at 720p, yielding smooth global illumination with all reflections enabled.
Full production of a single frame (including dynamic data processing and ray tracing) takes up to 5 minutes on a laptop with a mobile GPU: a GeForce GTX 485M. All of the renderer's computations run on the GPU, while the scene content exceeds physical GPU memory by an order of magnitude.
These results stand out because you no longer need ultra-expensive computers or processors to run challenging projects. With the CentiLeo approach, consumer computers (even mobile ones) are sufficient for your rendering needs.
Since 2011 we have added fast dynamic polygon support (building a high-quality spatial index), massive texture support, an out-of-core ray tracer that runs several times faster in the worst cases, removed lossy geometry compression (full precision is now used), and a friendly pipeline architecture for programmable shaders. See details on the Features page.
You can test it at the SIGGRAPH 2012 exhibition (booth 1012)!
This demo proves that it is possible to explore huge scenes with lots of textures on a consumer computer with a single GPU, thanks to an efficient virtual memory manager.
The New York downtown scene is represented by 25 million polygons and 8 gigabytes of textures (more than 1000 high-resolution textures); the Boeing 777 scene is represented by 360 million polygons (250 million polygons in some shots).
The demo shows captured footage of the CentiLeo program, where the image updates as the viewer's camera moves through the scene. While the camera is stationary, frame updates are accumulated together, progressively reducing the Monte Carlo noise in the image.
1280x720 image updates run 10 times per second with 10-bounce path tracing (ray tracing) on a laptop GPU, enabling interactive exploration of huge scenes with global illumination.
Laptop specs: 200 GB SSD storage, 16 GB RAM, and an NVIDIA GeForce GTX 485M graphics card with 2 GB of memory (the laptop cost about $2000 in 2011).
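The progressive accumulation described above can be sketched as a running average of noisy per-frame Monte Carlo estimates (a minimal illustration, not CentiLeo code; `render_sample` is a hypothetical stand-in for one path-traced frame):

```python
import random

def render_sample(width, height, seed):
    """Stand-in for one noisy path-traced frame (random values here)."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(width)] for _ in range(height)]

def accumulate(width, height, iterations):
    """Average successive frames; Monte Carlo noise shrinks as 1/sqrt(N)."""
    acc = [[0.0] * width for _ in range(height)]
    for n in range(1, iterations + 1):
        frame = render_sample(width, height, seed=n)
        for y in range(height):
            for x in range(width):
                # Incremental running mean: acc += (sample - acc) / n
                acc[y][x] += (frame[y][x] - acc[y][x]) / n
    return acc

# 1000 accumulated iterations, as in the final-frame demos above.
img = accumulate(4, 4, 1000)
```

When the camera moves, the accumulator is simply reset and averaging starts again from the first sample, which is why the image appears noisy during navigation and refines once the camera stops.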
Another summer at SIGGRAPH will be marked by the participation of CentiLeo 3D rendering technology!
Much faster than before, the first interactive GPU ray tracer for huge 3D content sets a new state of the art in the CG industry! Higher resolution, more features and capabilities, including out-of-core dynamic geometry and textures and interactive ray-traced frame updates of massive models on a single GPU… even a laptop GPU!
Visit our technical talk and exhibition booth #1012 at SIGGRAPH 2012, where you can meet our team and try the features of the CentiLeo renderer.
CentiLeo, the out-of-core interactive GPU ray tracer for massive models, will be presented this summer at SIGGRAPH in Vancouver, Canada: 9 August, 9:00 am - 10:30 am, West Building, Rooms 109/110, in the out-of-core session.
It is now disclosed that the CentiLeo implementation uses CUDA and is based on Kirill Garanzha's PhD research at the Keldysh Institute of Applied Mathematics, Russian Academy of Sciences.
Join us for the fastest experience of GPU rendering/ray tracing for large models, composed of several hundred million polygons!
This video demonstrates interactive ray-traced rendering (3-10 FPS) of an up to 400-million-polygon Boeing 777 model (courtesy of David Kasik, Boeing Corp) on a desktop PC with an NVIDIA GTX 480 graphics card, a 4-core AMD Phenom CPU, and 16 GB of DDR RAM (the computer cost approximately $1500 in 2010).
Path-tracing-based lighting is shown as a memory-access stress test for out-of-core geometry rendering.
The demonstration presents the first interactive out-of-core GPU ray tracing in the world. The demo shows, within Windows, how we load the scene, navigate through the fine mechanical details, and toggle the global illumination preview on and off.
With this video we show how an engineer or movie artist can explore, edit, and tune the illumination of a very large scene fully interactively on a desktop PC equipped with GPUs.
In the past it was hard to process and render huge scenes because of the slow render times of earlier software. Using supercomputers or render networks was also complicated by cost, support complexity, and network latency.
Our technology relies mostly on an NVIDIA GPU (Graphics Processing Unit, GTX 480) rather than the CPU (Central Processing Unit). Such GPUs are widely available in all computer markets and offer 1.5 teraflops of programmable compute power (they can run other software), which is valuable for other tasks as well.
Until now, the largest scene fully rendered with ray tracing on a desktop GPU (with memory comparable to our 1.5 GB GTX 480) had around 10-20 million polygons, i.e. 20x smaller than what we present. We break this limit because we have developed a novel virtual memory manager for this kind of GPU application.
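The internals of CentiLeo's virtual memory manager are not described here; the general idea behind out-of-core rendering can nevertheless be sketched as a cache that keeps only recently used geometry pages resident in (simulated) GPU memory, evicting the least recently used page on overflow. All names below are illustrative, not CentiLeo APIs:

```python
from collections import OrderedDict

class GeometryPageCache:
    """LRU cache of fixed-size geometry pages; models a GPU-resident pool
    much smaller than the full out-of-core scene."""

    def __init__(self, capacity_pages, load_page):
        self.capacity = capacity_pages
        self.load_page = load_page      # fetch from host RAM / disk on a miss
        self.resident = OrderedDict()   # page_id -> page data (LRU order)
        self.misses = 0

    def fetch(self, page_id):
        if page_id in self.resident:
            self.resident.move_to_end(page_id)   # mark most recently used
            return self.resident[page_id]
        self.misses += 1
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)    # evict least recently used
        page = self.load_page(page_id)
        self.resident[page_id] = page
        return page

# Toy scene: cache holds 10 pages out of a much larger out-of-core scene.
cache = GeometryPageCache(10, load_page=lambda i: f"page-{i}")
for _ in range(100):
    cache.fetch(3)   # rays repeatedly hitting a hot region miss only once
```

Ray tracing exploits this because rays are usually coherent: nearby rays touch the same geometry pages, so a small resident working set serves most accesses even when the scene is 20x larger than GPU memory.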
CPU-based rendering solutions (even many-core ones, e.g. 16-core) are around 25x slower than our GPU-based software because of memory bandwidth limits and a less effective cache hierarchy. Such 16-core CPUs are also 3-4x more expensive than our test GPU.
This level of high-performance ray-traced rendering of large models had not previously been demonstrated on inexpensive computers, GPU-based or otherwise.