Air quality models
Air quality models are used to study the atmospheric boundary layer (i.e. the part of the troposphere closest to the ground) and, especially, to determine the concentration of pollutants in the air.
A distinction is made between assessment models on the one hand and forecast models on the other. Assessment models are used to assess the air quality over a certain period of time (e.g. one year), or to map real-time measurement data from a telemetric network. Forecast models, in contrast, are used to make predictions about air quality and are supported by, among other things, equations from fluid dynamics, chemical reaction schemes and neural networks.
Besides assessment models and forecast models, interpolation techniques are used. These are mathematical methods originally used in geology to map geological units (for example the concentration of a certain substance in the ground), based on a limited number of measurement points. Interpolation techniques are used in the same way in the field of air quality, namely to map concentrations of certain pollutants in a region, starting from measurements gathered at different measuring stations. The main difference with geology is that soil concentrations are more or less static, which is certainly not the case for air pollutants. For certain substances, precautions therefore have to be taken for the interpolation techniques used to yield credible results.
There are four main factors defining air quality:
- Emission sources. Pollutants are emitted into the atmosphere by various sources. The most important of these are industry, traffic, the heating (or cooling) of buildings, the energy sector and agriculture. The amount of emitted pollutants varies with time and depends on location: traffic emissions, for example, will be higher during rush hours and in urban areas (with a lot of traffic). It is not always easy to determine the exact amount of emitted pollutants.
- Meteorology. The meteorological conditions are a second important factor defining air quality. In winter, for example, with windless weather conditions and a temperature inversion, pollutants disperse poorly, resulting in increased concentrations. Wind, rain and maritime air currents will, in our region, result in periods with less air pollution.
- Chemical processes in the atmosphere. A plethora of complicated chemical reactions can take place in the atmosphere. In summer, for instance, with sunny and warm weather conditions, there is a lot of photochemical activity, which may result in ozone being formed from the pollutants present. Chemical reactions also play an important role in the formation of particulate matter.
- Transport of air pollution. Air pollution can be transported by the wind to other regions, over long distances. An important part of the air pollution in the different Belgian regions is introduced from abroad. However, air pollution is not only imported: locally produced pollutants may also be exported abroad. The respective amounts of locally produced and imported pollution depend on the residence time of the pollutant in the atmosphere.
These four factors are four sources of uncertainty that have to be characterized as well as possible in forecast and assessment models.
The general aim of assessment models is to map the concentrations of pollutants, based on the concentrations measured at the different measuring stations. Using these data, the spatial distribution of the concentration can be estimated, either in real time or afterwards. These models also make it possible to evaluate the air quality over a certain period of time, for example by calculating the population exposure to high concentrations of polluting substances.
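As a concrete illustration of the last point, population exposure can be estimated by combining a modelled concentration map with population counts per grid cell. The sketch below is a minimal, invented example (the 50 µg/m³ threshold, the concentrations and the population figures are all hypothetical), not IRCEL's actual methodology:

```python
# Sketch: population exposure estimate from an assessment model's
# concentration grid. All values and the threshold are illustrative.

def population_exposure(concentrations, populations, threshold):
    """Return the number of people living in grid cells whose pollutant
    concentration exceeds the given threshold."""
    return sum(pop for conc, pop in zip(concentrations, populations)
               if conc > threshold)

# Hypothetical PM10 concentrations (µg/m3) and inhabitants per grid cell.
conc = [18.0, 55.2, 41.7, 62.3]
pop = [12000, 30000, 8000, 25000]
print(population_exposure(conc, pop, threshold=50.0))  # -> 55000
```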
Common interpolation techniques such as Inverse Distance Weighting (IDW) and Ordinary Kriging (OK) require every measuring point to be representative of an equally large spatial area. In practice, this is not the case for air pollution. Concentrations measured close to a pollution source will mostly be representative only of a small area around the source, whereas concentrations measured in a rural zone will generally be representative on a larger scale. The RIO interpolation technique was developed to take this more or less local character of air pollution into account.
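To illustrate how such a common technique works, here is a minimal Inverse Distance Weighting sketch; the station coordinates and NO2 values are invented:

```python
# Minimal Inverse Distance Weighting (IDW): the value at an unmeasured
# location is a distance-weighted mean of the station values, with closer
# stations receiving larger weights.
import math

def idw(stations, values, x, y, power=2.0):
    """Interpolate a value at (x, y) from station measurements."""
    num, den = 0.0, 0.0
    for (sx, sy), v in zip(stations, values):
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # exactly on a station: return its measurement
        w = d ** -power
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
no2 = [40.0, 20.0, 25.0]  # hypothetical NO2 concentrations, µg/m3
print(round(idw(stations, no2, 2.0, 2.0), 1))  # -> 36.7
```

The estimate at (2, 2) is pulled strongly towards the nearby 40 µg/m³ station, which is exactly the behaviour that makes plain IDW problematic when that station is only locally representative.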
RIO is an intelligent interpolation technique in which the local influence of each measuring site is removed first, in order to obtain a spatially homogeneous dataset of air quality measurements. The values obtained in this way can then be interpolated via Ordinary Kriging. The local character of a measuring site is determined by a statistical analysis of long time series of concentrations at the measuring sites and of the land use (Corine Land Cover) in the vicinity of the stations. This analysis shows that a robust correlation exists between land use and concentration levels. This correlation between the concentrations and the land use is summarized in trend functions. Because the land use is known for all of Belgium, the local character of every location where an interpolation is made can be taken into account. For the interpolation of PM2.5, aerosol optical depth (AOD) was used in addition to land use to determine the local character.
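The RIO idea of removing the local character before interpolating can be sketched as follows. This is a strong simplification: the trend values per land-use class are invented, and simple inverse-distance weighting stands in for the Ordinary Kriging step used by the real technique:

```python
# Schematic RIO-style interpolation: detrend per station using a
# land-use-based trend, interpolate the residuals, then re-apply the
# trend of the target location. All numbers are invented.
import math

# Hypothetical trend: expected concentration offset per land-use class.
TREND = {"urban": 12.0, "suburban": 5.0, "rural": 0.0}

def interpolate_rio_style(stations, target_xy, target_class, power=2.0):
    """stations: list of ((x, y), land_use_class, measured_value)."""
    num, den = 0.0, 0.0
    tx, ty = target_xy
    for (sx, sy), cls, value in stations:
        residual = value - TREND[cls]          # step 1: remove local character
        d = math.hypot(tx - sx, ty - sy) or 1e-9
        w = d ** -power                        # step 2: interpolate residuals
        num += w * residual
        den += w
    return num / den + TREND[target_class]     # step 3: re-apply local trend

stations = [((0, 0), "urban", 42.0), ((8, 1), "rural", 24.0),
            ((2, 9), "suburban", 31.0)]
print(round(interpolate_rio_style(stations, (4, 4), "rural"), 1))
```

Note that the urban station's high measurement no longer dominates the rural target, because only its detrended residual enters the interpolation.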
The spatial resolution of the RIO interpolation technique is 4x4 km². Using RIO, the air quality can be calculated for every hour in all the 4x4 km² grid cells in Belgium. The RIO method is used on the IRCEL website to show the real-time air quality data.
The RIO-IFDM model couples two data sources: on the one hand the interpolation of air quality measurements with the RIO interpolation model, and on the other hand the air quality calculated by a dispersion model from meteorological data and the emissions of air pollutants. The IFDM model calculates the impact of the emissions of point and line sources on the air quality in the direct vicinity of these sources. An example of a point source is a factory chimney, while a line source is the emission of the traffic on (a part of) a road.
The exact location of industrial sources and the amount of air pollution they emit are known. Via traffic counts and the average composition of the vehicle fleet, a good estimate of traffic-based emissions can be made per road (segment). The emissions per road (segment) are known for the most important city and regional roads, in addition to highways and ring roads.
The IFDM model is a bi-Gaussian plume model, which starts from the emission sources and models the dispersion of their emissions by means of meteorological parameters (see box on deterministic models). The result is stationary and represents a snapshot of a plume that is advected by the wind field in one direction and diffuses in a Gaussian way in the two perpendicular directions (hence the name bi-Gaussian). At a given moment, the plume of the considered source is calculated based on the wind speed and direction and on the amount of pollutants emitted at that moment. In addition, a chemical module is used, which takes into account the photochemistry of ozone and nitrogen dioxide in a simplified manner.
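The core of such a bi-Gaussian calculation can be sketched as follows, assuming a crude linear growth of the dispersion parameters with downwind distance; IFDM's actual parameterisations (and its chemistry module) are more refined, and all source parameters here are invented:

```python
# Sketch of a bi-Gaussian plume calculation for a single point source,
# with ground reflection. Sigma parameterisations and inputs are invented.
import math

def plume_concentration(q, u, x, y, z, h, a=0.08, b=0.06):
    """Stationary bi-Gaussian plume concentration (g/m3).

    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind distance (m), z: receptor height (m), h: stack height (m).
    """
    sigma_y = a * x          # crude linear growth of plume width downwind
    sigma_z = b * x
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 500 m downwind of a 50 m stack, on the axis:
c = plume_concentration(q=10.0, u=4.0, x=500.0, y=0.0, z=0.0, h=50.0)
print(f"{c * 1e6:.1f} µg/m3")
```

The two exponentials in the vertical term implement the usual "mirror source" trick, so that no pollutant is lost through the ground.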
The IFDM model does not use measurements like the RIO interpolation method, but calculates concentrations of air pollutants based on emission data and on meteorological data such as wind speed, wind direction and temperature. The meteorological data determine to what extent and in which direction the pollution spreads. For some compounds, such as nitrogen dioxide (NO2) or ozone (O3), chemical reactions taking place in the atmosphere also play a role. The speed of these reactions depends on, among other things, the temperature. In the current version of IFDM, only traffic and industrial sources are modelled. Other sources of air pollution, such as agriculture and households, are not modelled separately, but are part of the RIO 4x4 km² background. In contrast with point or line sources, the emissions originating from households or agriculture are spread out over larger areas. To avoid counting the traffic and industrial sources twice, a double counting correction is applied: the contributions of these sources are, after all, already included in the RIO 4x4 km² results, which are the result of interpolated measurements.
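The double counting correction can be understood schematically as subtracting the cell-averaged contribution of the explicitly modelled sources from the RIO background before adding their plume contribution at the receptor. This is only a schematic reading of the approach, with invented values, not IRCEL's exact formulation:

```python
# Schematic double-counting correction: the modelled local sources are
# already partly contained in the interpolated RIO background, so their
# cell-averaged contribution is removed before the local plume is added.

def corrected_concentration(rio_background, plume_at_receptor,
                            plume_cell_average):
    """Combine RIO background with an IFDM plume without counting the
    explicitly modelled sources twice."""
    return rio_background - plume_cell_average + plume_at_receptor

# Hypothetical NO2 values (µg/m3) for one 4x4 km2 cell:
print(corrected_concentration(rio_background=28.0,
                              plume_at_receptor=12.5,
                              plume_cell_average=3.0))  # -> 37.5
```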
The RIO-IFDM model is currently still being validated.
The general goal of forecast models is to make maps of the air pollution concentrations for the present day and the following days. Different kinds of forecast models exist: deterministic models, which are based on evolution equations from physics and chemistry; neural network models, which mimic the behaviour of biological neural networks in order to predict concentrations; and so on. Other subdivisions and types of models exist as well, but here we will stick to the models used at IRCEL.
These models are based on evolution equations from physics, in addition to a whole series of chemical reactions of the pollutants present. The general aim of a deterministic air quality model is to determine, based on the emissions from various sources, the atmospheric pollutant concentrations corresponding with the measured ones. These concentrations are also known as immission concentrations. Pollutants are released into the atmosphere by a source, are then transported and undergo physical transformations and chemical reactions. The eventually measured concentrations of atmospheric pollution thus correspond with the emitted substances after transport, dilution and diffusion in the environment and physical and/or chemical alterations. When a pollutant is inert (and thus does not react with other substances), it is called a passive tracer. The measured atmospheric concentrations of these compounds are only affected by transport. One example of a passive tracer is black carbon.
In general, a deterministic air quality model has to solve the Navier-Stokes equations, which describe fluid dynamics (conservation of momentum, energy, mass and specific humidity), as well as the advection-diffusion equation for the pollutant concentration. The Navier-Stokes equations are non-linear differential equations and are thus very complex to solve. In practice, this is done using computational fluid dynamics (and thus using computers). The chemical reactions pollutants can undergo in the atmosphere are numerous and also non-linear (for example the photochemical reactions of ozone). A number of chemical terms for the production of certain compounds (about one hundred in general) are also added to the advection-diffusion equation, in order to complete the description of the studied pollutants. The physics of aerosols (a mixture of dust particles or liquid droplets suspended in the air) also has to be taken into account, using the General Dynamic Equation (GDE), as well as the physicochemical interactions between gases and aerosols (condensation, nucleation, growth, etc.). A number of pollutants can also leave the atmosphere, by being deposited on the ground under gravity or by being washed out by rain (in-cloud and below-cloud scavenging), etc.
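Schematically, the advection-diffusion equation solved for each pollutant concentration c can be written as follows, where u is the wind field, K a turbulent diffusivity, R(c) the chemical production and loss terms and S the emission sources (the notation here is schematic, not the exact form used in any particular model):

```latex
\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c)
  = \nabla \cdot (K \nabla c) + R(c) + S
```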
Deterministic models can be divided into two subcategories: online coupled models, which solve the coupled system (Navier-Stokes + evolution of the concentrations), and offline coupled models. In offline models, the fluid dynamics are decoupled from the evolution of the pollutant concentrations, and, for example, the results of a meteorological model are used as input for the advection-diffusion equation. This assumption may safely be made for air quality models, but it is not valid for climate models, because of the importance of the interaction between matter and radiation in those models.
The Navier-Stokes equations describe turbulent flows, in which motion is observed on multiple scales. As such, they form a true challenge for modern computers, for reasons of numerical stability of the solution. In an online model, the Navier-Stokes equations can be solved directly, but this requires substantial computing infrastructure. The equations can also be solved using a Large Eddy Simulation (LES), whereby the large dynamic eddies are resolved on the computational grid and only the small scales are modelled, which drastically decreases the computation time. This is also the most widely used method in scientific research. A last method, Reynolds averaging (RANS, for Reynolds-averaged Navier-Stokes), separates the mean components from the fluctuating components. This, however, requires an additional assumption regarding the way turbulence is modelled. Most of the time, a gradient theory is used, which describes the turbulent flux of a quantity as proportional to the negative gradient of that quantity, in analogy with Fick's law. For offline models, Reynolds averaging is applied to the advection-diffusion-reaction equation.
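The gradient theory mentioned above can be written down compactly. Each quantity is decomposed into a mean and a fluctuating part, e.g. c = c̄ + c′, and the unknown turbulent flux is closed in analogy with Fick's law (K_i is a turbulent diffusion coefficient; the notation is again schematic):

```latex
\overline{u_i' c'} \;=\; -K_i \,\frac{\partial \bar{c}}{\partial x_i}
```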
This immediately demonstrates that the description of fluid dynamics (in the framework of air quality) is very complex, and that it is necessary to use approximations in the numerical calculations, on the one hand to limit the calculation time and the required IT resources, and on the other hand for reasons of numerical stability.
Artificial neural networks are a group of (machine learning) techniques developed in analogy to the biological brain. In general, an artificial neural network consists of multiple layers of neurons: the first layer is an input layer, which lets the information flow to the next layer, and so on, up to the output layer. Between each pair of layers, the data are mathematically transformed by transfer functions.
Now let's discuss the use of neural networks within the field of air quality. A forecast model based on a neural network first needs to undergo a training phase for each pollutant and measuring point. After this training phase, the model is operational and can make forecasts based on the acquired transfer functions. The network is trained by providing a long historical series (multiple years) of input data (concentrations and meteorological parameters) together with the corresponding desired output values, and continually adjusting the transfer functions by recalculation. In operation, the model needs the pollutant concentrations measured in the morning, as well as the meteorological forecasts for that day.
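The training loop described above can be sketched with a toy example: a one-hidden-layer network learns a mapping from two "meteorological" inputs to a "concentration" output by gradient descent. The data, architecture and learning rate are all invented for illustration; operational models such as those used at IRCEL are far larger and carefully tuned:

```python
# Toy neural-network training: one hidden tanh layer, stochastic gradient
# descent on an invented (wind speed, temperature) -> PM10-like dataset.
import math
import random

random.seed(0)

# Invented training set: (wind speed, temperature) -> concentration.
data = [((u, t), 30.0 - 2.0 * u + 0.5 * t)
        for u in range(6) for t in range(-5, 16, 5)]

n_hidden = 4
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
b2 = 0.0
lr = 0.001

def forward(x):
    """One pass through the network: hidden activations and output."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w1, b1)]
    return hidden, sum(w * h for w, h in zip(w2, hidden)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

before = mse()
for _ in range(2000):                # training phase: adjust weights repeatedly
    x, y = random.choice(data)
    hidden, out = forward(x)
    err = out - y
    for j, h in enumerate(hidden):
        grad_h = err * w2[j] * (1 - h * h)   # backpropagate through tanh
        w2[j] -= lr * err * h
        for i in range(2):
            w1[j][i] -= lr * grad_h * x[i]
        b1[j] -= lr * grad_h
    b2 -= lr * err
after = mse()
print(f"mean squared error: {before:.1f} -> {after:.1f}")
```

After training, the "transfer functions" (here simply the weights) encode the learned relation, and a forecast is a single cheap forward pass, which is exactly why the operational phase of such models is so fast.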
The largest advantages of a forecast model based on a neural network, in comparison with deterministic models, are the reduced need for input data and the shorter calculation time. The training phase can, however, be quite demanding. Neural network models can also perform regressions on non-linear functions in multidimensional environments. In comparison with traditional statistical techniques, neural network models offer a great number of possibilities and more flexibility.
The biggest disadvantage of a neural network model is that it can only make predictions for points for which the model has been trained, i.e. for points where measurements have been carried out. This means the model has to be trained for every measuring point. Another drawback is that the model does not describe the underlying physics of the problem, which may lead to restrictions regarding the use of the data.
Three models are used at IRCEL to predict air quality.
- OVL (particulate matter). The OVL computer model is a neural network model. It generates a prediction of the particulate matter (PM10) concentrations based on predicted meteorological parameters and historical series of air pollution measurements. The advantage of neural network models is that they can make predictions for a specific location very quickly, with simple input data. The drawback is that they can only generate forecasts for locations with sufficient air quality measurements. The OVL model was developed by VITO.
- SMOGSTOP (ozone). The SMOGSTOP computer model is a cluster of multiple statistical models (among which a neural network). The model generates a forecast of the highest hourly mean ozone concentrations based on predicted meteorological parameters and historical data series of air pollution. The advantage of SMOGSTOP is that it can make a quick prediction with simple input data for a specific location. The drawback, however, is that it can only do so for locations where ozone has already been measured for a sufficiently long time.
- CHIMERE (ozone, particulate matter, nitrogen dioxide, ...). The CHIMERE computer model is a deterministic model that tries to simulate the very complex physical processes and chemical reactions that can occur in the atmosphere, based on meteorological forecasts, emissions of pollutants and land use data. The advantage of deterministic models is that they can also estimate the air quality in places where it is not measured. The drawbacks, on the other hand, are the much more complex input data and the long calculation times (although the latter is becoming less important as computers get faster). The current resolution of the CHIMERE model is around 50x50 km. The model thus calculates concentrations representative of a large area. Locally (near industrial sites, important roads, ...), the real concentrations may thus be higher. The CHIMERE model was developed by the Institut Pierre-Simon Laplace (IPSL, Paris) and was adapted in Belgium by IRCEL.
In general, all models, and air quality models in particular, tend to approximate reality increasingly well, but they always contain a degree of error that only regular use and experience make it possible to understand. The art of forecasting is to make the most of the qualities of the models, while remaining aware of their limitations.
Combined with our own expertise, we use the results of the OVL, SMOGSTOP and CHIMERE models, and sometimes of other models not described above, to inform the population about ozone peaks in summer and about episodes with high levels of particulate matter.