
Lighting with Linearly Transformed Cosines

a new approach to real-time area lights


I have recently read this:

Real-Time Polygonal-Light Shading with Linearly Transformed Cosines

It might just be me, but there seems to be a renewed interest in analytic solutions to rendering problems. This might be because GPU ALU counts are growing faster than memory bandwidth. It might be because analytic solutions offer potentially better anti-aliasing opportunities – and with the new interest in VR, good anti-aliasing has become important.

Whatever the reason, this paper provides an interesting approach to the analytic rendering of area light sources. There is already a surprisingly elegant analytic solution to irradiance from a polygonal area light source:

E(r) = \frac{1}{2\pi} \sum_{i=1}^{n} \beta_i \cos\gamma_i

where βi is the angle subtended by edge i at the receiver point r, and γi is the angle between the surface normal and the normal of the plane containing both edge i and the receiver point. This is computed in tangent space, so the edge endpoints are obtained simply by projecting the corresponding vertices and normalizing them onto the unit sphere. This seems like a useful formula – why is it not more widely used in graphics? The authors suggest:

Problem 1: Integrating parametric spherical distributions over spherical polygons is difficult in general, even with the simplest distribution

Problem 2: State-of-the-art physically based material models are not simple distributions; they have sophisticated shapes with anisotropic stretching and skewness that need to be represented for the material to appear realistic.

Problem 2 is important. All modern game engines now use some form of physically based rendering: Unreal Engine 4, for example, uses the Disney-tweaked (here) version of the GGX microfacet model:

f(l, v) = f_{\mathrm{diffuse}} + \frac{D(h)\, F(v, h)\, G(l, v, h)}{4\,(n \cdot l)(n \cdot v)}
Here l and v are the incident light and viewer directions respectively, and h is the half-vector between them. D is a non-linear (e.g. Gaussian) distribution function, F is the non-linear Fresnel term, and G is a geometry term defined by the shape of the microfacets that make up the surface. The diffuse term is the diffuse lighting contribution – often just (arguably incorrectly) Lambertian reflectance. The key thing to note is that this is not, in general, a simple expression.
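For illustration (this is the standard Cook–Torrance arrangement, not code from the paper), a GLSL sketch of the specular part – GGX for D, the Schlick approximation for F, and a height-correlated Smith term for G – might look like the following; all names and parameter conventions here are my own assumptions:

    // GGX normal distribution function D
    float D_GGX( float NoH, float alpha )
    {
        float a2 = alpha * alpha;
        float d  = NoH * NoH * (a2 - 1.0) + 1.0;
        return a2 / (3.14159265 * d * d);
    }

    // Schlick approximation to the Fresnel term F
    vec3 F_Schlick( float VoH, vec3 f0 )
    {
        return f0 + (1.0 - f0) * pow(1.0 - VoH, 5.0);
    }

    // height-correlated Smith geometry term G, folded together with the
    // 1 / ( 4 (n.l)(n.v) ) denominator of the microfacet expression
    float G_SmithGGX( float NoV, float NoL, float alpha )
    {
        float a2 = alpha * alpha;
        float gv = NoL * sqrt(NoV * NoV * (1.0 - a2) + a2);
        float gl = NoV * sqrt(NoL * NoL * (1.0 - a2) + a2);
        return 0.5 / (gv + gl);
    }

    vec3 SpecularGGX( vec3 N, vec3 V, vec3 L, vec3 f0, float alpha )
    {
        vec3  H   = normalize(V + L);
        float NoV = max(dot(N, V), 1e-5);
        float NoL = max(dot(N, L), 1e-5);
        float NoH = max(dot(N, H), 0.0);
        float VoH = max(dot(V, H), 0.0);
        return D_GGX(NoH, alpha) * F_Schlick(VoH, f0) * G_SmithGGX(NoV, NoL, alpha);
    }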

This complexity is an issue. For the most part, area light sources provide most value when used in conjunction with glossy surfaces. For diffuse surfaces they add little value, as the shape of the source cannot readily be observed and the overall effect can more easily be approximated by cheaper lighting methods. For glossy reflectors, environment maps are more likely to be used, though they have difficulty representing local light sources. Without a good way of handling physically based BRDFs, analytic models are of relatively little interest.

The paper introduces a novel approach – Linearly Transformed Cosines – to computing the response to complex materials. From the abstract:

We show that applying a linear transformation—represented by a 3×3 matrix—to the direction vectors of a spherical distribution yields another spherical distribution, for which we derive a closed-form expression. With this idea, we can use any spherical distribution as a base shape to create a new family of spherical distributions with parametric roughness, elliptic anisotropy and skewness. If the original distribution has an analytic expression, normalization, integration over spherical polygons, and importance sampling, then these properties are inherited by the linearly transformed distributions.

The idea is to start with Lambertian polygonal irradiance and define a linear transformation between this space and the non-Lambertian BRDF:

[Figure: a clamped cosine distribution and its linearly transformed variants – (a) the original cosine (identity matrix), (b) uniformly scaled for low roughness, (c) anisotropically scaled, (d) skewed for grazing view angles]
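From the paper: if each direction ωo of the original cosine distribution Do maps to ω = M ωo / ||M ωo|| under a 3×3 matrix M, the transformed distribution has the closed form

D(\omega) = D_o(\omega_o)\,\frac{\partial \omega_o}{\partial \omega} = D_o\!\left(\frac{M^{-1}\omega}{\|M^{-1}\omega\|}\right)\frac{\det(M^{-1})}{\|M^{-1}\omega\|^{3}}

which is what lets normalization, polygonal integration and importance sampling carry over from the original distribution.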

This relationship is non-linear in general. The approach taken is similar to that used for integrating complex BRDFs against lighting defined by spherical harmonics: parametrize the BRDF by the view angle and the surface roughness, and at each of these sample points find the best linear transformation from the original BRDF to the simple Lambertian cosine space. This table is computed offline and stored in a texture. In practice, as the authors demonstrate, transforming the lighting polygon is equivalent to transforming the BRDF, and this is what is actually done.

Intuitively, this makes sense – a mirror-like surface responds strongly over a smaller fraction of the hemisphere than a rougher surface, and the linear approximation expresses this as a scaling of the lighting polygon. This is what we see in (b) above: such a matrix would describe a low-roughness surface at a zero viewing angle. The two non-unity diagonal terms describe the roughness and anisotropy (c) – in visual terms, the shape of the highlight. The off-diagonal skew terms express the change in the BRDF with respect to view angle (d). Diffuse materials can simply use the identity in place of the linear transform (a). Two further terms are stored for the Fresnel term F() and the overall magnitude.

The pseudo-code – adapted from code taken from the authors' site – looks like:

vec3 LTC_Evaluate( vec3 N, vec3 V, vec3 P, mat3 Minv, vec3 points[4] )
{
    // construct an orthonormal tangent basis around N, aligned with V
    vec3 T1 = normalize(V - N*dot(V, N));
    vec3 T2 = cross(N, T1);

    // rotate into the (T1, T2, N) basis, then apply the LTC matrix
    Minv = Minv * transpose(mat3(T1, T2, N));

    // transform the polygon into LTC-transformed tangent space, relative
    // to the shading point P (5 entries: horizon clipping can add a vertex)
    vec3 L[5];
    L[0] = Minv * (points[0] - P);
    L[1] = Minv * (points[1] - P);
    L[2] = Minv * (points[2] - P);
    L[3] = Minv * (points[3] - P);

    // clip against the horizon - should exit early if all points are clipped
    ClipToHorizon(L);

    // project the clipped polygon onto the unit sphere
    normalizePoints(L);

    // integrate with the Lambert formula above
    return IntegrateEdges(L);
}
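In use, Minv comes from the precomputed table. A sketch of the lookup, with assumed names – note that the channel packing shown here is illustrative, not necessarily the authors' exact layout:

    // index the LTC table by roughness and view angle (assumed names)
    float theta = acos(dot(N, V));
    vec2  uv    = vec2(roughness, theta / (0.5 * 3.14159265));
    vec4  t     = texture(ltcMatTex, uv);

    // rebuild the inverse transform from the four stored matrix terms
    // (illustrative packing, not the authors' documented layout)
    mat3 Minv = mat3(
        vec3(1.0, 0.0, t.y),
        vec3(0.0, t.z, 0.0),
        vec3(t.w, 0.0, t.x)
    );

    vec3 spec = LTC_Evaluate(N, V, P, Minv, points);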

The matrix Minv is looked up from the texture, and the IntegrateEdges function computes the irradiance from the lighting polygon using the formula above. This requires about 80k of data for a 64×64 sampling of view-angle and roughness space. Reducing this to an 8×8 sampling still yields good results, albeit with some banding. The four matrix terms can be packed into the four channels of an f32 texture. More compact representations are almost certainly possible.
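For concreteness, a minimal sketch of what IntegrateEdges might look like, directly following the Lambert formula above – since we are in tangent space, cos γi appears as the z-component of each edge plane's unit normal. The vertex count argument and the one-sided clamp are my own assumptions:

    // minimal sketch: L[] holds the clipped polygon vertices, already
    // normalized onto the unit sphere; n is the vertex count after clipping
    vec3 IntegrateEdges( vec3 L[5], int n )
    {
        float sum = 0.0;
        for (int i = 0; i < n; ++i)
        {
            vec3 p0 = L[i];
            vec3 p1 = L[(i + 1 == n) ? 0 : i + 1];

            // beta_i: the angle subtended by the edge;
            // cos(gamma_i): the z-component of the edge plane's unit normal
            float beta     = acos(clamp(dot(p0, p1), -1.0, 1.0));
            float cosGamma = normalize(cross(p0, p1)).z;
            sum += beta * cosGamma;
        }
        // one-sided light: clamp the back-facing case to zero
        return vec3(max(sum / (2.0 * 3.14159265), 0.0));
    }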

The approach is extended to textured area light sources. This assumes that the shape term given by the linearly transformed Lambertian irradiance expression is independent of the colour term given by the texture. The colour term is obtained by transforming the texture coordinates – this causes rough BRDFs to sample from the upper (lower frequency) end of the mip chain. With suitable filtering, an accurate-looking response can be obtained. For evaluation purposes, I skimped on the correct filtering and just used the existing mip chain generation with a clamped texture look-up. This looks fine, but I would want to revisit it if implementing in anger.
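My shortcut looked roughly like this – a hypothetical helper (not from the paper) that clamps the look-up and picks a mip level heuristically from the roughness, rather than the more principled filtering the authors describe:

    // hypothetical helper: clamped look-up into the light texture, with the
    // mip level driven by roughness so rough BRDFs sample the blurrier
    // (lower frequency) end of the chain
    vec3 FetchLightColour( sampler2D lightTex, vec2 uv, float roughness )
    {
        const float maxLod = 8.0;   // assumes a 256x256 light texture
        float lod = roughness * maxLod;
        return textureLod(lightTex, clamp(uv, 0.0, 1.0), lod).rgb;
    }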

Timings seem a little on the high side – 0.64ms for a single untextured quad light at 1280×720 (on a GTX 980). At a very rough estimate that is a few hundred ALU ops per edge – I suspect that restricting yourself to quads would let you exploit symmetry to reduce the transform overhead. One would also want to look at the acos transcendental in the edge integration and see whether it can be removed – these operations are costly on modern GPUs.

Still, it seems an elegant technique with practical application to games. The authors supply plenty of material (here): an executable demo with source, an online WebGL demo, and the BRDF fitting code. The code is very easy to understand in the context of the paper.

A few examples from my experiments: top left, roughness of 1.0; bottom right, roughness of 0.