Ray tracing – the inevitable step in 3D hardware evolution
Many of you might know Imagination as a leading UK-based semiconductor IP company that designs the PowerVR GPUs and MIPS CPUs found in some of the best-known consumer and enterprise products on the market.
When it comes to 3D graphics hardware, Imagination has been championing the Tile-Based Deferred Rendering (TBDR) concept for more than two decades, pioneering the adoption of high-end 3D graphics in mobile and embedded devices. More recently, Imagination has been busy growing and developing a new aspect of our business: real-time, power-efficient ray tracing IP.
What is ray tracing?
Ray tracing takes computer graphics technology to the next level – simulating the behaviour of light to enable the creation of photorealistic images. Ray tracing allows the seamless blending of captured ‘real life’ with computer-generated images; it is far better at simulating optical effects such as reflection, refraction, scattering, and dispersion than traditional rendering methods such as scanline rasterization.
With real-time ray tracing, power efficiency is the Holy Grail for Imagination, since it will enable us to make this exciting technology a reality for products such as laptops and ultimately even mobile phones and tablets.
Where is ray tracing used today?
Even though you might not have heard of ray tracing before, there’s a high probability you’ve experienced it without realising. This is because ray tracing is the primary rendering technique behind Hollywood special effects: by modelling the physics of light, it produces photorealistic images that blend seamlessly with filmed footage.
This is not a real-life picture but a rendered object created by Mads Drøschler with the help of our ray tracing IP.
Ray tracing also plays a role in graphics beyond Hollywood. Since the 1990s, continuously increasing GPU power has led to ever-better looking images in games, augmented reality, and other graphical applications. The downside of this progress is that each advance in visual quality makes the content more complicated and costly to generate. Every single effect (each shadow, reflection, highlight, or subtle change in illumination, for example) must be anticipated by the content designer or game engine, and special-cased. This work often results in massive game title budgets, slower time to market, and ultimately less content on the platform. Often the techniques to produce realistic-looking games today use ray tracing offline (not real-time), in a process called “pre-baking”.
Bringing real-time ray tracing to mobile
When people think of real-time ray tracing today, a bank of desktop-class GPUs running a compute-based (e.g. OpenCL) application usually springs to mind: a great deal of computation is needed to keep track of coherency between rays. But while desktop GPUs have the luxury of almost 300 watts and massive memory bandwidth to work with, mobile GPUs have almost two orders of magnitude less power available and must therefore operate with much higher efficiency. This meant that Imagination had to look at the problem differently if it was to bring ray tracing to mobile GPUs.
Instead of relying on the computation-intensive approach, the PowerVR engineering team set out to design dedicated hardware modelled on how ray tracing actually works: rather than treating ray tracing as a computation problem, the team treated it as a database problem.
Simply put, rendering in typical 3D graphics boils down to finding the intersections between a set of triangles (or other polygons) and a set of pixels. All typical graphics processing units (GPUs) use raster-based algorithms to solve this. PowerVR TBDR GPUs, for example, split the scene into a grid of tiles; each tile is then tested for visibly overlapping triangles in a ray-casting fashion, and hidden surfaces are removed before texturing and shading begin.
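The first stage of that tile-based pipeline – sorting triangles into the screen tiles they overlap – can be sketched as below. The tile size and the bounding-box test are illustrative assumptions for clarity, not PowerVR's actual implementation, which uses exact per-tile visibility tests.

```python
# Sketch of binning triangles into screen tiles, the first stage of a
# tile-based renderer. TILE and the data layout are assumed values,
# not PowerVR's real hardware parameters.

TILE = 32  # tile edge in pixels (assumed)

def bin_triangles(triangles, width, height):
    """Map each tile (tx, ty) to the ids of triangles whose screen-space
    bounding box overlaps it (a conservative first pass)."""
    bins = {}
    for tri_id, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the triangle's bounding box to the viewport, then
        # convert it to a range of tile coordinates.
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        if x0 > x1 or y0 > y1:
            continue  # triangle is entirely off-screen
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri_id)
    return bins

# One triangle spanning pixels (10,10)-(40,40) lands in four 32x32 tiles:
bins = bin_triangles([[(10, 10), (40, 10), (25, 40)]], 128, 128)
```

A real binner would follow this conservative pass with an exact triangle/tile overlap test, since a bounding box can touch tiles the triangle itself misses.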
Ray tracing, on the other hand, starts with a pixel. For each pixel it casts a ray into the scene, finds the ray's closest intersection with the relevant primitives, and then runs a shader at the hit point, just as rasterization runs shaders on covered pixels.
Traditional raster graphics vs. ray tracing.
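The core operation of that per-pixel step – testing a ray against one triangle – is a small piece of geometry. A minimal sketch using the well-known Möller-Trumbore algorithm (one common choice; the source does not say which test PowerVR hardware uses):

```python
# Minimal ray-triangle intersection (Moller-Trumbore). Plain tuples keep
# the sketch self-contained; real ray tracers run this inner loop in
# dedicated hardware or SIMD.

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                         a[2]*b[0]-a[0]*b[2],
                         a[0]*b[1]-a[1]*b[0])

def intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None      # hits behind the origin don't count

# A ray fired from the origin along +z towards a triangle in the z = 5 plane:
t = intersect((0, 0, 0), (0, 0, 1), (-1, -1, 5), (1, -1, 5), (0, 1, 5))
```

Here `t` comes back as 5.0: the ray travels five units before striking the triangle.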
Behind the scenes, the ray tracing unit (RTU) assembles the triangles into a searchable database representing the 3D world. When a shader runs for a pixel, it can emit rays directly into that world, letting the content creator implement shadows, reflections, transparency, and other lighting behaviour very easily. The RTU then steps in again, searching the database for the triangle that is the closest intersection with the ray. If a ray intersects a triangle, that triangle's shader is scheduled on the GPU, and it can in turn contribute colour to the pixel or cast additional rays. These "secondary rays" can, in turn, trigger the execution of further shaders.
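The control flow described above can be sketched in a few lines: query the scene database for the closest hit, run that primitive's shader, and let the shader cast secondary rays by calling back into the tracer. The flat linear search below stands in for the RTU's accelerated database lookup, and all names are illustrative, not Imagination's API.

```python
# Sketch of the closest-hit / shader / secondary-ray loop. The linear scan
# over `scene` stands in for the RTU's accelerated database search.

MAX_DEPTH = 3  # cap on secondary-ray recursion (assumed value)

def trace(scene, origin, direction, depth=0):
    """Return the colour contributed along one ray."""
    if depth > MAX_DEPTH:
        return (0.0, 0.0, 0.0)        # stop chasing secondary rays
    # Closest-hit query: in hardware, this is the RTU's database search.
    best_t, best_prim = None, None
    for prim in scene:
        t = prim["intersect"](origin, direction)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_prim = t, prim
    if best_prim is None:
        return (0.0, 0.0, 0.0)        # ray escaped the scene: background
    hit = tuple(o + best_t * d for o, d in zip(origin, direction))
    # The hit primitive's shader contributes colour and may emit more
    # rays (shadows, reflections, ...) by calling trace() recursively.
    return best_prim["shade"](scene, hit, direction, depth)

# A single "floor" plane at z = 5, shaded with a flat grey (viewed from z < 5):
floor = {
    "intersect": lambda o, d: (5.0 - o[2]) / d[2] if d[2] > 1e-9 else None,
    "shade": lambda scene, hit, d, depth: (0.5, 0.5, 0.5),
}
colour = trace([floor], (0, 0, 0), (0, 0, 1))
```

A reflective shader would simply return a blend of its own colour with `trace(scene, hit, reflected_direction, depth + 1)`, which is exactly the secondary-ray cascade the text describes.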
The potential of real-time ray tracing in mobile
The key reason to enable ray tracing on a mobile PowerVR GPU is to simplify content creation for games and other media while substantially improving visual impact. Enabling ray tracing on mobile requires a massive gain in efficiency, which is only possible with specialised hardware that manages the scene database and keeps track of secondary rays and their coherency.
The two cars in the scene are rendered with ray tracing while the other elements rely on traditional graphics.
Real-time ray tracing is widely seen as one of the biggest opportunities to significantly advance the mobile user experience. Even though raster content today already looks impressive, a large gap remains between the photorealism that ray tracing offers and what we see on current-generation smartphones and tablets. We believe our ray tracing IP will close this gap and enable the next advances in graphics for mobile devices, ushering in a new standard of quality and application development.