Future Directions in Precipitation Science

Precipitation science is a growing research field. It is concerned with the study of the water cycle from a broad perspective, from tropical to polar research and from solid precipitation to humidity and microphysics, and it includes both modeling and observations. Drawing on the results of several meetings within the International Collaborative Experiments for the PyeongChang 2018 Olympic and Paralympic Winter Games (ICE-POP 2018), and on two Special Issues hosted by Remote Sensing starting with “Winter weather research in complex terrain during ICE-POP 2018”, this paper completes the “Precipitation and Water Cycle” Special Issue by providing a perspective on future research directions in the field.

The International Collaborative Experiments for the PyeongChang 2018 Olympic and Paralympic Winter Games (ICE-POP 2018) were conducted over the northeastern region of the Korean peninsula. The main scientific purpose of the field campaigns and associated experiments was to obtain various observational datasets at high spatial and temporal resolutions that could provide insight into cloud microphysics processes and the detailed structures of snow formation [1]. However, ICE-POP 2018 also acted as a lively and dynamic meeting point for a panoply of international scientists working on precipitation science. The various meetings held between 2016 and October 2020 were a unique opportunity to explore many of the perspectives and challenges faced by the precipitation science community. The Special Issue in Remote Sensing on "Winter weather research in complex terrain during ICE-POP 2018" included some of the ICE-POP 2018 results that can be used to envision the future of the field [2][3][4][5]. Additional contributions in the "Precipitation and Water Cycle" Special Issue can also help unravel the many dimensions of precipitation science, including ground observations (rain gauges, ground radars, disdrometers), satellite estimates (radars, radiometers) and models, including numerical weather prediction (NWP) models, regional climate models (RCMs), global climate/circulation models (GCMs), Earth system models (ESMs) and variable resolution models (VRMs) [6][7][8][9][10][11].
Indeed, one of the motivations of scientific literature is to share results, ideas and visions so they can be compared and refined. Drawing heavily on these contributions and on the recent literature, this paper provides a vision of the future of the field with a focus on satellite technology. As a perspective, the authors have enjoyed some latitude to speculate and it is possible, and even expected, that not everyone will agree with some of the statements. It is hoped that some of the ideas we discuss could help eventually define how community plans develop in the coming decades. The following ideas are not driven by any programmatic or project-driven purpose: while we are aware of the interests and strategies of many players, such as those involved in the Decadal Survey [12], our comments are not prompted by any other interest than that of advancing our scientific knowledge on precipitation. These ideas may, or may not, be aligned with the goals of several governments, agencies, organizations, institutions and research centers.
Precipitation science today appears to be a thriving field, but one at a crossroads. A recent milestone can help frame the challenges and opportunities faced by scientists working in this field. The 'Destination Earth' project [13] aims to build a 1-km resolution numerical model for the whole planet with realistic boundary conditions, i.e., also modeling the ocean and the land cover. At such resolution, some eddies can be resolved, convection can be more directly modeled and modeled precipitation is expected to improve over coarse-resolution simulations [14,15]. The first simulations of the albedo through a radiative transfer simulator are visually almost indistinguishable from satellite views. Previous similar attempts, aimed mainly at testing computing resources, were made on an Aquaplanet idealization, but the new simulations can be compared one-to-one with reality. The climatological precipitation metrics of the results have not yet been evaluated, but the results for some months' worth of data are promising. The resulting dataset will have unprecedented resolution at global scale. The Energy Exascale Earth System Model (E3SM) project is heading in the same direction, with the support of the U.S. Department of Energy (DOE; [16]). In the light of these projects, it seems natural to ask what the remote-sensing community working in precipitation science will do now that models are on the point of catching up with observations. This question leads to further ones: What will we be doing in the next 25 years? What will be the situation of precipitation science in 2045? Figure 1 shows a visual idea of the main aspects of precipitation science in 2021, including datasets of past missions such as CloudSat. Ground observations, models and satellite platforms provide more or less direct estimates of the planet's precipitation on several spatial and temporal scales.
Observations, retrievals and forecasts are then used for a variety of applications, including renewable energy management [17]; insurance [18]; agriculture [19]; urban hydrological management [20]; fresh water supply networks [21] and flood readiness [22]. Satellites provide reasonable estimates of current precipitation. Their skill scores and performance metrics are reasonable when compared with rain gauges at degree resolution and monthly accumulation [23][24][25][26][27][28][29][30][31][32][33]. With the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm, which uses Global Precipitation Measurement (GPM) mission data, it is possible to obtain estimates at about 8-km spatial resolution about 3 h after the episode of interest, but then the skill is far lower due to the large spatio-temporal variability of precipitation.
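As a sketch of the kind of gauge-based validation just described, the following minimal example computes common continuous and categorical skill metrics on synthetic monthly accumulations at degree resolution. The data, the 1 mm rain/no-rain threshold and the function name `validation_metrics` are invented for illustration; they do not reproduce any specific study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly accumulations (mm) on a 1-degree global grid: gauge
# "truth" and a satellite estimate with multiplicative bias and random error.
gauge = rng.gamma(shape=2.0, scale=40.0, size=(180, 360))
satellite = gauge * rng.lognormal(mean=0.05, sigma=0.3, size=gauge.shape)

def validation_metrics(est, ref, rain_threshold=1.0):
    """Common continuous and categorical skill metrics (illustrative)."""
    bias = np.mean(est - ref)                      # mean error (mm)
    rmse = np.sqrt(np.mean((est - ref) ** 2))      # root-mean-square error
    corr = np.corrcoef(est.ravel(), ref.ravel())[0, 1]
    hits = np.sum((est >= rain_threshold) & (ref >= rain_threshold))
    misses = np.sum((est < rain_threshold) & (ref >= rain_threshold))
    false_alarms = np.sum((est >= rain_threshold) & (ref < rain_threshold))
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false-alarm ratio
    return {"bias": bias, "rmse": rmse, "corr": corr, "pod": pod, "far": far}

metrics = validation_metrics(satellite, gauge)
print(metrics)
```

The same metrics computed at finer spatial and temporal scales degrade quickly, which is precisely the behavior described above for sub-daily, kilometer-scale products.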
The situation on the modeling side is quite different. Models such as the Destination Earth model can provide reasonable estimates of precipitation at around 1-km resolution. However, they can do this more than 48 h in advance (as a reference, the fastest estimate of precipitation from satellites arrives 3 h after precipitation occurs). It is widely accepted that modeled precipitation is still not sufficiently good for most of the applications mentioned before, as the forecasts often miss the location or the timing, or both. However, models can provide a glimpse of future precipitation when it is most useful to know it, hours in advance, and this is critical for many interesting applications, such as planning a trip [34][35][36].
What about the conceivable future? Can we already see sufficient progress in the precise location and timing of model estimates to state that models are catching up with observations? We believe we can, and this means profound changes in the precipitation science field. One can easily imagine the evolution of satellite precipitation products reaching 1-km resolution, with the data available as soon as 30 min after collection. One can even imagine a nearly perfect estimate of present precipitation: it is easy to picture such an ideal outcome by resorting to several geostationary microwave sensors, combining their data with a dense constellation of radiometers and radars. However, that would still only provide instantaneous estimates. What is evident is that models can not only estimate current precipitation, but also forecast it. Based on today's technology, one can see that models are soon likely to go below the 1-km scale. This is just a question of computing time [37,38], which is a technological, not a scientific or fundamental, barrier. Even if the models are just 'good enough' for most applications, say hydrology, agriculture, safety and security, traffic, etc., they will always have the edge of being capable of predicting the future. This is something that is often forgotten when products from satellites are compared with model outputs.
The bottom line of the discussion is that models can tell us about the future, but they are still not good enough [36,39,40]. This, however, could soon change as we get better resolution, which is a defining factor in improving the modeling of precipitation [41]. What seems clear is that eventually models will catch up with observations in terms of accuracy and precision, and it will then be obvious that models can also forecast. The elephant in the room in remote sensing is that models are soon likely to take center stage in precipitation science for most applications. While not everyone may agree with this statement, it seems clear that more computing power and less need for some parameterizations [38] will result in models being the primary provider of precipitation data in far less than 20 years' time. Indeed, satellite data will be extensively used to drive models towards realistic states and more accurate measurements will certainly improve forecasts, but the final product will be more a model forecast than an observation.
Such a realization is a pivotal idea to start thinking about the future. Speculating about the evolution of precipitation science includes considering future directions in regional climate models (RCMs), limited area models, variable resolution models, Earth system models (ESMs) and seamless prediction over scales, and of course parameterizations. Topics of interest in the discussion include the future of precipitation from space, the role of ground radars, the future of disdrometers and gauges, and the role of soil moisture sensors and river flow measurements in the bigger picture of precipitation science.
Regarding RCMs, last year marked the 30th anniversary of this valuable tool. There are at least two papers on future directions in this field with contrasting views on the topic [42,43]. Regarding criticisms, on the one hand, it is argued that a major issue is that the results depend too heavily on the parent model: RCMs inherit all the defects of their parents, most critically their large-scale flow deficiencies. This is a problem in terms of tuning the model for different areas and parent models, and in deciding which combination is best, as small ensembles are known to be insufficient to gauge the uncertainties involved. On the other hand, and more critically, RCMs are heavily tuned to present-day conditions at the regional level, so we do not know their actual behavior as CO2 and other greenhouse gases continue increasing. There is no reference for this. Nonetheless, even with all the RCM tuning and fixes (such as moving the domains, etc.), the performance of the models is less than notable when compared with observational datasets [44][45][46][47]. Therefore, it has been suggested that RCMs should be kept in the research domain because they are not suited to providing useful, consistent policy advice. They might have a role in rapid prototyping but not in policy advice [43]. Even more critically, ESMs are catching up with RCMs in terms of resolution. We already have several ESMs with a resolution comparable to RCMs in the Coupled Model Intercomparison Project Phase 6 (CMIP6) [15,48]. There is good reason to expect that in 2045 even improved RCMs (convection-permitting and including complex biophysical processes) will be limited to the research realm, having been replaced for mitigation and adaptation applications by more comprehensive and global approaches.
As for limited-area models (LAMs), the situation is somewhat different. Their outputs are highly dependent on the large-scale flow and they also present large differences depending on the parent model [49]: the same LAM nested in the Global Forecast System (GFS) or in the European Centre for Medium-Range Weather Forecasts (ECMWF) model can produce very different local forecasts. Indeed, again in this realm, global high-resolution models are also catching up with limited-area approaches (the ECMWF model is rapidly increasing its level of detail), but assimilation plays a more critical role here, and LAMs with their own assimilation system can outperform the parent model [7,[50][51][52][53]. LAMs are required to provide vital information for people, for example, through weather applications ("apps") on mobile phones, and to help inform authorities about local severe weather, storms and weather conditions relevant to human activities. So, they have a role that global models are unlikely to fulfil. It is unreasonable to run a global model at, for example, the 100-m resolution required to properly model runoff in a canyon during a flash flood, and so the approach is likely to remain unchanged for a time. Note that the same reasoning does not apply to RCMs since, at climatological 30-year scales, the actual conditions of the canyon are very likely to change (urbanization, land use/cover changes, etc.) in a way that renders the boundary conditions highly hypothetical and thus disconnected from the actual historical evolution. An informed look back at the evolution of European landscapes in the last 30 years shows that changes have been significant enough to make high-resolution RCM outputs under the 1990s' conditions unrealistic in terms of what actually happened at local scale. Indeed, as one aggregates the estimates (say from 10 km to 100 km grids), such changes become less important, but this betrays the purpose of RCMs since nothing could then be said about local scales.
LAMs are likely to remain for a time, but not for long. It seems unavoidable that in twenty-five years' time they will be phased out and everyone will be using variable resolution models (VRMs). These models represent a sort of 'middle way': in areas where the fields have small spatial variability, there is little need for a detailed grid, whereas areas of contrasting terrain require detail to account for large spatial variability and gradients. Additionally, the infusion of large-scale flows into local conditions can be done progressively by gradually refining the mesh. This is a sensible way to convey large-scale, global inputs to higher resolutions using the same physics, which is a plus in terms of consistency.
The approach mitigates most of the shortcomings exhibited by LAMs in this respect [54,55]. The model's level of detail, or how large the meshes are, is only limited by computing power and not by any strong structural constraint. VRMs such as the Variable Resolution Community Earth System Model (VR-CESM) or the Model for Prediction Across Scales (MPAS) also depend on good assimilation at the weather temporal scale, and are very useful for people and economic activities, also having the potential to inform the public about alerts and severe weather [56]. It is foreseeable that by 2045 VRMs will be extraordinarily accurate, with very high, tens-of-meters spatial resolution in the region of interest, and that computing power will allow for a large number of tailored, application-based simulations.
It is likely that the influence of VRMs will reach beyond weather simulations. ESMs are today's standard for climate research. The CMIP5 and CMIP6 projects have produced a great deal of new information on how the Earth system works [57,58]. These models include most of the known cycles, as well as the social layer through representative concentration pathways or other approaches, and are becoming increasingly complex. Indeed, they need massive computing infrastructures, but remain the basic tool for climate impact assessments [59][60][61]. In the next decade, however, they are likely to be superseded by, or perhaps we should say embedded in, the overarching idea of seamless prediction over scales using VRMs. The approach here is to run the same model across timescales, covering weather, seasonal forecasts and then climate. A functional description of the next step would be the widespread use of ESMs of variable resolution, or "variable resolution ESMs providing seamless prediction" [62,63]. These models will likely be the basis of future Intergovernmental Panel on Climate Change (IPCC) assessments. They will be exceedingly complex and computationally expensive tools, but in the near future quantum computing could fulfil a long-awaited promise and advance sufficiently to solve some of the technological challenges posed by this approach. Even if this sort of model remains concentrated in a few centers worldwide, the advances in atmospheric sciences and, by extension, in precipitation research will herald a new era of more detailed understanding of the complex physics of precipitation.
What about parameterizations? A few will no longer be necessary given the improvements in the spatial resolution of the models [38], but this is unlikely to be the case for the microphysics of precipitation, as many of the relevant processes occur at centimeter scale. The microphysics of precipitation is today an active field with many unknowns [64][65][66][67][68]. Critical experiments on the physics behind the parameterizations are needed. More field work, more laboratory experiments and more in-cloud measurements are known to be required. The need is especially acute for solid precipitation. Precipitation phase (rainfall versus snowfall) is difficult to capture, and even to differentiate, whether in ground observations, satellite estimates or models, and ongoing global change further complicates the research. Possible directions to overcome present difficulties include dedicated campaigns, more laboratory work and advances in stochastic parameterizations below the kilometer scale.
The limit of the mechanistic approach to the subject is more theoretical than practical, because the problem is ultimately a statistical one, in the sense of statistical physics, and so the need for parameterization in the microphysics will likely remain. This contrasts with other parameterizations, such as those for convection or the planetary boundary layer (PBL). The need for better or more efficient ways to do computations, through machine learning for instance, has been a constant over recent decades [69][70][71]. One approach has been to use microphysics as a validation tool for physical hypotheses, as it will be difficult to directly include the results of the experiments in the codes. In the future, the microphysics of precipitation is likely to be extremely complex and detailed, requiring extraordinary resources and quantum computing approaches.
What about precipitation from space? This represents another way to understand this extremely complex process. It is not speculation, but rather a fact, to say that in the near future we will have access to massive amounts of data, including radio-occultations and several more radars in space [72][73][74][75]. It is less safe to say, but worth sharing the vision, that in the future satellites observing precipitation will be used mainly in the assimilation chain, or for very precise validation experiments through intensive observation programs. It is unlikely, though, that precipitation from space in the future will be mainly used to derive products or climate data records. There is a growing overlap in the field, with algorithms such as IMERG [76] now using model-derived vertically integrated water vapor fields to propagate microwave measurements. The next logical step of such a procedure is to improve the model so it can produce precipitation in the right amount, time and location, thanks to the observations. The obvious advantage is that the model will be capable of predicting the precipitation field. Regarding soil moisture, it is also likely that this meteorological field will reasonably soon be derived from models as well.
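The propagation step mentioned above can be caricatured as advecting the last microwave overpass with a model-derived motion field. The following minimal backward, nearest-neighbour semi-Lagrangian sketch uses an invented precipitation field and a uniform displacement; the function `morph` and all numbers are illustrative assumptions, and the actual IMERG morphing scheme is far more elaborate.

```python
import numpy as np

def morph(precip, u, v, dt):
    """Advect a precipitation field by a displacement of (u*dt, v*dt) grid
    cells, a crude stand-in for the model-vapor-driven propagation used
    between microwave overpasses (backward trajectory, nearest neighbour)."""
    ny, nx = precip.shape
    jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Value at (j, i) comes from the upstream point (j - v*dt, i - u*dt).
    src_j = np.clip(np.round(jj - v * dt).astype(int), 0, ny - 1)
    src_i = np.clip(np.round(ii - u * dt).astype(int), 0, nx - 1)
    return precip[src_j, src_i]

# Last microwave overpass: a single rain cell; uniform eastward motion.
field = np.zeros((50, 50))
field[20:25, 10:15] = 5.0                         # mm/h
propagated = morph(field, u=2.0, v=0.0, dt=3.0)   # cell displaced 6 columns east
```

The point of the example is the direction of travel of the field: once the motion comes from the model, improving the model until it produces the precipitation itself, constrained by the observations, is the natural next step described above.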
As for ground-based radars, the most likely development is instruments having more bands, being more precise and increasing even further the resolution of the estimates [77][78][79][80]. It is also safe to assume that the technology will become more economical, and that integrated networks will be the norm. Radar will greatly help elucidate the physics, especially the dynamics or the processes involved. Nowcasting will certainly evolve to fully automatic systems using machine learning, and radars will likely be used for alerts and warnings in small basins, or for certain very specific applications such as aviation. The dynamic information driven by integrated networks of radars will also help models provide better representation and prediction of the dynamics and, subsequently, of the microphysics, probably integrated into a multisource global assimilation system.
Disdrometers are likely to be gone by 2045. Radar technology will probably cover the research realm that disdrometers cover today, and more advanced, integrated instruments will likely soon be available [81][82][83]. Rain gauges, however, will certainly continue in use. They will remain the reference data source for many years. Moreover, we have long series of historical data that we would like to continue extending. The key defining element of rain gauges is that they measure something physical: an actual quantity of water. This is indisputable, because the measurement is direct. We know there are many errors and biases affecting gauges, but these are something we already account for, and there are quite robust methods to homogenize and adjust the series. It is safe to say that we will have advanced versions of this old and reputable instrument in the future [84][85][86]. River flow sensors will also stay in use, not only for legacy reasons like rain gauges, but also because they will become cheaper; rivers will be fully integrated into urban management systems as society evolves into a fully connected system, and they will help provide real-time control of the flows.
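As an illustration of the kind of series adjustment alluded to above, the following sketch applies a simple ratio-based correction after a known breakpoint, relative to a homogeneous reference series. The synthetic data and the helper `adjust_after_break` are hypothetical; real homogenization methods for long climate series are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual totals (mm): a homogeneous reference and a candidate
# gauge whose exposure changed at year 30, introducing a 15% low bias.
years = 60
reference = 800 + 100 * rng.standard_normal(years)
candidate = reference * (1 + 0.05 * rng.standard_normal(years))
breakpoint = 30
candidate[breakpoint:] *= 0.85     # inhomogeneity after the site change

def adjust_after_break(cand, ref, bp):
    """Scale the post-break segment so its mean ratio to the reference
    matches the pre-break mean ratio (a crude double-mass-style fix)."""
    ratio_before = np.mean(cand[:bp] / ref[:bp])
    ratio_after = np.mean(cand[bp:] / ref[bp:])
    adjusted = cand.copy()
    adjusted[bp:] *= ratio_before / ratio_after
    return adjusted

homogenized = adjust_after_break(candidate, reference, breakpoint)
```

The pre-break segment is left untouched, so the long historical record keeps its direct, physical character while the post-break segment is made comparable with it.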
Regarding soil moisture sensors, one can envision a future in which they are fully integrated into the Internet of Things (IoT), or as part of meteorological sensors in buildings, clothes, vehicles, etc. In the future, they will be as cheap as a global positioning system (GPS) receptor [87,88]. Today, all the electronics have a GPS chip. In the future, everything will have meteorological sensors inside.
To summarize, a highly speculative overview of the landscape of precipitation science in 2045 may look something like this:

1. Seamless variable resolution models providing precipitation estimates at several resolutions across scales. Perhaps we will soon speak simply about "The model". As mentioned, all efforts in modeling might soon converge in a single approach within the framework of quantum computing.
2. IoT/wearables/cheap electronics for meteorological data. The likely problem will be dealing with the vast amount of data and making sense of the physics.
3. Parameterizations of precipitation microphysics. This will still be an active research field in 2045. Targeted and high-quality measurements to elucidate specific processes will be a major reason for satellite missions, which should be combined with extensive ground field experiments.
4. International observatory of precipitation (IOP). It is not hard to imagine an international observatory of precipitation, probably a constellation of satellites with far more capabilities than today's systems. The evolution of the GPM constellation towards a more multinational effort with far more radars could be the basis of this IOP.
5. Assimilation. Meteorological satellites will be devoted mainly to providing data for assimilation. This will be the main driver for meteorological satellite missions, rather than producing climate data records competing with increasingly precise re-analyses.
6. Advanced rain gauges. These will remain the ultimate truth and reference source for precipitation on the ground and will still be used to validate model outputs.
All these ideas and projections have been considered under the supposition of no major changes in the field and a smooth evolution of the technology. Evidently, one cannot account for revolutions or paradigm changes in the field, which are, almost by definition, impossible to forecast. With the benefit of hindsight, readers in 2045 (or sooner) may well find this paper both candid and fundamentally misguided. That is not, however, a reason to refrain from venturing into the future with the information we have now. Indeed, forecasting trends is a difficult task, especially in a human endeavor, science, that is intended to actually invent the future. Quoting Niels Bohr, "Prediction is very difficult, especially if it's about the future." Such an astute observation should not, however, deter us from imagining what lies beyond the horizon we currently observe in precipitation science.