Volumetric Fog

Joshua Lanman

Advances in computing hardware and the introduction of the Graphics Processing Unit (GPU) have empowered computer graphics in many fields, including the movie and video game industries. These advances have led to the widespread use of complex and realistic special effects such as explosions, fire, smoke, rain, fog, dust, and shafts of light or shadow. While these effects appear widely in publicly released products, the details of their illumination models and implementations are often protected as trade secrets. This is especially true in the video game industry, where the difference between being a leader and just another competitor in a crowded field of studios and independents can come down to small quality advantages one company's product holds over its competitors [1], [2].

The focus of this project is the understanding and implementation of one volumetric effect: fog. Our project has three goals. The first is to build an understanding of the illumination model of fog in general and of how this effect can be approximated and rendered in modern imagery using a GPU. The second is to demonstrate that understanding by implementing an interactive system in which these effects can be manipulated and examined. The third is to provide a starting point for future researchers and enthusiasts by sharing both the lessons we have learned and the resources we have developed during this project.

Many challenges exist in creating realistic volumetric fog. Volumetric illumination is, by itself, a complex process to model. The challenge grows when we consider the interaction between other objects in the scene and the fog particles, and real fog further varies in density as a function of distance, altitude, and time. Faced with an overwhelming number of calculations, modern techniques strike a balance among rigorous mathematical modeling, advanced hardware techniques, and approximation to meet quality and performance goals.
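
As a concrete illustration, one common approximation treats fog density as an exponential function of altitude and computes the fraction of light surviving along a view ray using Beer-Lambert extinction. The minimal Python sketch below shows the idea; the function names, parameter values, and sampling scheme are illustrative assumptions rather than the exact model used in our implementation.

import math

def fog_density(height, base_density=0.02, falloff=0.1):
    """Fog density at a given altitude (exponential height falloff)."""
    return base_density * math.exp(-falloff * height)

def transmittance(camera_height, target_height, distance, steps=32):
    """Fraction of light surviving along a ray, via Beer-Lambert extinction.

    The optical depth is accumulated by sampling the density at evenly
    spaced points between the camera and the target.
    """
    optical_depth = 0.0
    step_len = distance / steps
    for i in range(steps):
        t = (i + 0.5) / steps                     # midpoint of each segment
        h = camera_height + t * (target_height - camera_height)
        optical_depth += fog_density(h) * step_len
    return math.exp(-optical_depth)               # T = exp(-integral of density)

# Example: an object 200 units away at ground level, seen from 10 units up.
print(transmittance(camera_height=10.0, target_height=0.0, distance=200.0))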

In this project, we studied both the science behind light interactions with a participating medium and different techniques for modeling this behavior, culminating in the system we have implemented. To meet our goal of providing a starting point for future researchers and game enthusiasts, we chose to implement our project in the free, cross-platform game engine Unity. Our system can generate fog either scene-wide or within finely controlled fog volumes at specified locations in the scene. It can control the density of the fog as a function of both distance and altitude, and it can introduce random, variable density along with velocity settings to simulate wind. The system also supports fog interaction with multiple lights in the scene, as well as shafts of light and shadow within the fog volume. Each effect can be enabled and adjusted independently, giving the user precise control over the resulting fog. Our system, with its well-documented implementation, is an excellent learning and exploration tool for anyone interested in volumetric fog.
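
To suggest how these features typically fit together, the sketch below marches along a single view ray, sampling a height- and noise-modulated density field and accumulating in-scattered light from one point light wherever the sample point is not shadowed. This is a conceptual Python sketch, not our actual Unity shader code; the function names, the isotropic phase function, and all constants are illustrative assumptions.

import math

def density_at(position, time, noise_scale=0.5, wind=(1.0, 0.0, 0.0)):
    """Height-based density modulated by a cheap animated 'noise' term."""
    x, y, z = position
    wx, wy, wz = wind
    base = 0.02 * math.exp(-0.1 * max(y, 0.0))
    # Stand-in for 3D noise scrolled by the wind vector; a real system
    # would sample Perlin/simplex noise at (position - wind * time).
    wobble = 0.5 + 0.5 * math.sin(noise_scale * (x - wx * time) + z - wz * time)
    return base * wobble

def in_scattered_light(cam, target, light_pos, light_intensity,
                       time=0.0, steps=32, light_visible=lambda p: True):
    """Accumulate single scattering toward the camera along one view ray."""
    scattered = 0.0
    view_transmittance = 1.0
    step = [(t - c) / steps for c, t in zip(cam, target)]
    step_len = math.sqrt(sum(s * s for s in step))
    pos = list(cam)
    for _ in range(steps):
        pos = [p + s for p, s in zip(pos, step)]
        sigma = density_at(pos, time)
        view_transmittance *= math.exp(-sigma * step_len)
        if light_visible(pos):                    # shadowed samples add no light
            dist2 = sum((l - p) ** 2 for l, p in zip(light_pos, pos)) + 1e-4
            phase = 1.0 / (4.0 * math.pi)         # isotropic phase function
            scattered += (view_transmittance * sigma * phase *
                          light_intensity / dist2 * step_len)
    return scattered

# Example: one point light above and in front of the camera.
print(in_scattered_light(cam=(0.0, 10.0, 0.0), target=(0.0, 0.0, 200.0),
                         light_pos=(0.0, 50.0, 100.0), light_intensity=5000.0))

Passing a real shadow test as light_visible (for example, a shadow-map lookup) is what produces visible shafts of light and shadow within the fog volume.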


Yusuf Pisan, Computing & Software Systems (CSS), University of Washington Bothell