The integration of different methodologies to address interactions across a wide range of scales and processes has progressed rapidly, probably much faster than anticipated at the time when HD(CP)2 was conceived. The conference may well be remembered as one of the first meetings that successfully strove toward this goal.
More specifically, a wide range of modelling approaches were presented: GCMs with conventional and emerging convection parameterization schemes and with super-parameterizations, models with mesoscale resolution on global to regional scales, kilometer-scale convection-resolving models, and LES models run with both idealized and realistic boundary conditions. Model simulations considered domain sizes from global, to continental, down to a few kilometers, over time scales from days to decades.
Oral and poster presentations contributed toward closing the gap between climate models on the one hand and boundary-layer and cloud models on the other. A wide range of specific mesoscale dynamical processes affecting the organization of clouds and convection were beginning to be addressed with the behavior of the climate system in mind, among them the dynamics of updrafts and downdrafts, the role of propagating cold-air pools, the spatial organization of convection, the role of the diurnal cycle, as well as aspects of mesoscale and synoptic-scale dynamical processes that affect clouds and precipitation. In addition, different methodologies to address microphysical, radiative, and turbulent subgrid-scale processes were exploited, and promising progress on the further development of these schemes was presented.
Consideration was also given to the exploitation of emerging computer architectures such as GPUs and inexact hardware, and to reduced-precision approaches for moderating memory-bandwidth constraints.
There was much interaction and discussion between the different modeling groups. It became evident that the development of convection schemes is increasingly informed by process understanding, and that a closer alliance between software engineers, model developers, and domain scientists will be required to take best advantage of advances in computing.
In parallel to the technical advances in computing and numerics that enable the ambitious modeling program of HD(CP)2, advances in instrumentation, algorithms, and computing have opened new observational windows on turbulence, cloud, and precipitation behavior at scales from the local to the global. In particular, the synergetic use of instruments probing the same scene with complementary techniques enables new insights into atmospheric processes, be it from the ground, aircraft, or satellite.
Substantial progress has been made in the ability to remotely measure the microphysical state of clouds and precipitation. Some work has begun to make the ambitious move from the traditional 1D view to 3D using scanning instrumentation. Advances have been enabled in part by new radar technologies exploiting Doppler velocity spectra and multiple frequencies, leading to stronger constraints on ice microphysical processes including riming and aggregation. This new information, soon to be supplemented by space-borne radar and lidar, arrives as the parameterization community is engaged in a deep reconsideration of how ice microphysics is best parameterized in models. Complementary techniques are also emerging to better constrain droplet concentrations in clouds.
The value of synergy — using a diversity of complementary measurements to obtain a more complete picture of the state of the atmosphere — was especially clear in Berlin, in applications ranging from high-resolution profiling of atmospheric temperature and water vapor to estimation of cloud drop number. Formal methods for synthesizing information and improved estimates of conditional uncertainty continue to develop, though it remains unclear how to treat systematic errors in interpretive models.
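A minimal illustration of such formal synthesis is inverse-variance weighting, the simplest way to combine independent estimates of the same atmospheric quantity and to track the resulting conditional uncertainty. The instruments and numbers below are purely illustrative, not measurements from the meeting:

```python
def combine(estimates):
    """Inverse-variance weighted mean of independent Gaussian estimates.

    estimates: list of (value, sigma) pairs.
    Returns (combined value, combined sigma).
    """
    weights = [1.0 / s**2 for _, s in estimates]
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return value, sigma

# Hypothetical example: a lidar retrieval of 8.0 +/- 0.8 g/kg water vapor
# combined with a radiometer retrieval of 7.0 +/- 0.4 g/kg.
v, s = combine([(8.0, 0.8), (7.0, 0.4)])
print(round(v, 2), round(s, 2))  # -> 7.2 0.36, pulled toward the more precise instrument
```

Note that this sketch assumes independent, unbiased errors; as the text observes, it is precisely the systematic errors in the interpretive models that such simple weighting cannot treat.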
Novel datasets have been developed from long-term satellite observations that allow scientists to investigate processes like glaciation and precipitation on a statistically sound basis. These inversions are increasingly being aided by forward models that connect remote sensing measurements to atmospheric variables.
The very high resolution of the HD(CP)2 simulations poses new challenges for model evaluation. Several presentations and posters described ambitious observational efforts, some at scales commensurate with the developing simulations. The wealth of high-resolution, partly scanning observations operated within a radius of less than 5 km during the HD(CP)2 field experiment (HOPE) revealed new insights into boundary-layer processes and transports, though some puzzling observations point to the need for a better understanding of the heterogeneous land surface. As such high-resolution information is relatively new, substantial effort along these lines can be expected in the second phase of HD(CP)2.
On a narrower but quite important topic, advances in active ground-based remote sensing are poised to shed new light on turbulent transports in the boundary layer. Observations so far have focused on the turbulent characteristics of the boundary layer as made visible by fluctuations in single variables; input from the modeling community made clear the value of looking at co-variances to describe transport.
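The distinction matters because turbulent transport is, by definition, a covariance rather than a single-variable statistic: the eddy flux of a scalar is the mean product of the fluctuations of vertical velocity and that scalar. A minimal sketch, with entirely synthetic numbers:

```python
def eddy_flux(w, q):
    """Eddy-covariance flux estimate: mean of w'q', the product of
    fluctuations of vertical velocity w and a scalar q about their means."""
    n = len(w)
    wm = sum(w) / n
    qm = sum(q) / n
    return sum((wi - wm) * (qi - qm) for wi, qi in zip(w, q)) / n

# Synthetic illustration only: fluctuating vertical velocity (m/s) and
# specific humidity (g/kg) sampled at the same times.
w = [0.5, -0.3, 0.8, -0.6, 0.1]
q = [9.1,  8.7, 9.4,  8.5, 9.0]
print(eddy_flux(w, q))  # positive: updrafts coincide with moister air, i.e. upward moisture transport
```

Single-variable statistics (the variances of w or q alone) cannot reveal this flux; only the joint fluctuations do, which is the modeling community's point.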
Improvements in the parameterization of deep convection focus, on the one hand, on the representation of multiple plumes or an explicit mass-flux or cloud-size distribution, similar in spirit to the original formulation of Arakawa and Schubert. On the other hand, the treatment of cloud organization is seen as an essential feature of next-generation convection parameterizations. Progress on the latter is often closely tied to the use of stochastic methods such as cellular automata, and the application and usefulness of stochastic approaches was evident in many presentations. It was also shown that the superparameterization approach, also known as the multiscale modeling framework (MMF), must be interpreted as a stochastic parameterization, which provides new opportunities for evaluating stochastic parameterizations.
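To make the cellular-automaton idea concrete, the toy model below evolves a 1-D periodic field of "convecting" (1) and quiescent (0) cells: active cells decay stochastically, while inactive cells can be triggered by active neighbors, so convective activity can spread, cluster, or die out. The rule and parameters are illustrative inventions, not any published scheme:

```python
import random

def step(state, birth_prob=0.3, death_prob=0.2, rng=random):
    """Advance a 1-D periodic cellular automaton one time step."""
    n = len(state)
    new = []
    for i in range(n):
        neighbors = state[(i - 1) % n] + state[(i + 1) % n]
        if state[i] == 1:
            # active (convecting) cells decay stochastically
            new.append(0 if rng.random() < death_prob else 1)
        else:
            # quiescent cells may be triggered by an active neighbor
            new.append(1 if neighbors > 0 and rng.random() < birth_prob else 0)
    return new

random.seed(0)
state = [0] * 50
state[25] = 1  # a single convective seed
for _ in range(20):
    state = step(state)
print(sum(state), "active cells after 20 steps")
```

In a real scheme the CA would run on a finer sub-grid beneath the model grid and feed back on the convection scheme, providing both spatial organization and stochastic variability.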
Many presentations on cloud microphysics emphasized the importance of cloud glaciation and the related uncertainty. A lack of knowledge exists especially regarding glaciation within convective updrafts in regimes ranging from the deep tropics to the mid-latitudes and the Arctic. This uncertainty is present in parameterizations as well as in cloud-resolving models.
The central theme of the session was the necessity of using modelling and observations comprehensively and simultaneously, together with conceptual interpretations, in order to advance constraints on climate prediction.
Emergent constraints use the spread of a multi-model ensemble to link relevant unknowns (such as climate sensitivity or a proxy for it) to observables. Impressive recent progress was documented, pointing to a relatively large climate sensitivity. Detailed investigation of models made it possible to link such results to individual processes (e.g., mixing), sometimes in ways that begin to call the inferences into question.
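The mechanics of an emergent constraint can be sketched in a few lines: regress the unknown on the observable across the ensemble, then read off the value implied by the real-world observation. All numbers below are synthetic, chosen only to illustrate the procedure:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Synthetic ensemble: one point per model. The observable might be a
# present-day mixing metric; the unknown is climate sensitivity (degC per
# CO2 doubling). These values are invented for illustration.
observable  = [0.2, 0.4, 0.5, 0.7, 0.9]
sensitivity = [2.1, 2.8, 3.0, 3.9, 4.6]

slope, intercept = linear_fit(observable, sensitivity)
observed_value = 0.6  # hypothetical real-world measurement of the observable
constrained = slope * observed_value + intercept
print(round(constrained, 2))  # -> 3.5
```

The caveat raised at the meeting applies directly: the regression is only as meaningful as the process link behind it, and a tight statistical relationship across an ensemble can still be an "emergent non-constraint".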
Several talks illustrated how hypotheses about climate feedbacks derived from data alone can be misleading, for which the term "emergent non-constraints" was coined. Statistical relationships need corroboration and interpretation from models to advance climate prediction.
Modeling is also needed to quantify mechanisms postulated from theoretical considerations: for example, the remaining high-cloud changes may be large enough to have a profound effect on climate sensitivity, even if a fixed anvil temperature is a good assumption; and changes in (extreme) precipitation that scale with temperature changes are not readily traceable to simple explanations.
Detailed model analyses are necessary for constructing as well as interpreting metrics to constrain relevant processes, for example freezing mechanisms and their manifestation in the liquid fraction – temperature relationship.