Potential Instability Of Weather Events

Thunderstorms are mesoscale weather phenomena with a spatial extent ranging from a few kilometres to a few hundred kilometres and lifetimes from less than an hour to several hours. A thunderstorm develops when there is a “potential instability” in the atmosphere. Such an instability can arise when relatively warm air is overlain by cooler, heavier air. The cooler air tends to sink, displacing the warmer air upwards. If a sufficiently large volume of warm, moist air rises, an updraft forms, which leads to the condensation of water vapour and the formation of cumulus clouds.

The cumulus clouds that form initially evaporate quickly, but they serve to transport moisture to the middle layers of the troposphere. The atmosphere eventually becomes humid enough for the clouds to undergo vertical development without evaporating, and cumulonimbus clouds form, resulting in precipitation and downdrafts.

The columns of cooled air sinking downward strike the ground with strong horizontal winds. At the same time, electrical charges accumulate on the cloud particles, which, when sufficiently large, lead to the occurrence of lightning discharges.

Rapid, intense heating of the air along the lightning channel produces shock waves, which manifest as claps and rolls of thunder. The temperate and tropical regions of the world are the most prone to thunderstorms, while they are rare in the polar regions. Central Europe and Asia have an average of 20 to 60 thunderstorm days per year. It has been estimated that at any moment approximately 1,800 thunderstorms are in progress throughout the world.

Thunderstorms constitute a major fraction of tropical rainfall.

Thunderstorms and the associated strong winds and lightning have emerged as major weather hazards in recent years, affecting different parts of India. More than 80 thunderstorm days per year occur over the northeastern part of India and some parts of Kerala and Jammu & Kashmir. Many hazardous phenomena accompany thunderstorms, such as strong gusts, lashing rain, and hail. Cloud-to-ground lightning associated with thunderstorms poses hazards such as forest fires and casualties. Thunderstorms with extremely intense and heavy precipitation can also result in floods. Thunderstorms can have a devastating impact on several sectors, including agriculture, aviation, surface transport, power and communication, and may also lead to loss of human and animal life and property.

A year-wise record of human deaths due to thunderstorms and lightning in India from 2001 to 2018 illustrates the scale of the hazard. Timely and location-specific prediction of thunderstorms will help caution the community and hence mitigate some of the damaging effects. Forecasting thunderstorms, however, can be extremely difficult due to their limited spatial and temporal extent and the fact that their initiation and evolution depend heavily on the localized thermodynamic conditions of the atmosphere.

Data Assimilation

Numerical Weather Prediction is highly dependent on the initial conditions provided to the system. Small changes in the initial conditions can have drastic effects on the prediction accuracy of the model; better initial conditions imply improved accuracy of prediction. Data assimilation is a technique that optimally blends observational data with output from a numerical model, usually a short-range forecast or a climatological value, to produce the best possible initial conditions for the system considered. The number of observations is generally small compared to the degrees of freedom of the model state, and the spatial distribution of the observations is also uneven.

Data assimilation usually takes a forecast (also known as the first guess, or background) and applies a correction to it based on a set of observations and estimates of the errors present in both the observations and the forecast itself. The difference between the forecast and the observations at that time is called the innovation. A weighting factor is applied to the innovation to determine how much of a correction should be made to the forecast. The best estimate of the state of the system, obtained by adding the weighted innovation to the forecast, is called the analysis. Current operational data assimilation systems use an analysis cycle, as shown in Figure 1.1, in which short-range forecasts of 6 h serve as the background or first guess.
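For a single scalar variable, the correction step above can be sketched as follows; the weight formula is the standard variance-based blend, and the numerical values are illustrative assumptions, not values from any operational system.

```python
# Minimal sketch of one assimilation update for a single scalar state
# variable. The error variances are illustrative assumptions.

def analysis(background, observation, sigma_b2, sigma_o2):
    """Blend a background (first guess) with an observation.

    sigma_b2, sigma_o2: error variances of the background and the
    observation. The weight determines how much of the innovation
    (observation minus background) is added to the background.
    """
    innovation = observation - background       # y - x_b
    weight = sigma_b2 / (sigma_b2 + sigma_o2)   # between 0 and 1
    return background + weight * innovation     # the analysis x_a

# Example: a 6-h forecast of 300 K and an observation of 302 K with
# equal error variances -> the analysis splits the difference at 301 K.
x_a = analysis(300.0, 302.0, sigma_b2=1.0, sigma_o2=1.0)
print(x_a)  # -> 301.0
```

A larger background error variance pushes the weight toward 1, so the analysis trusts the observation more; a larger observation error variance does the opposite.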

Figure 1.1: Data assimilation flowchart

Data assimilation can be broadly categorized into two types: sequential and non-sequential. In sequential assimilation, only observations made before the time of assimilation are taken into consideration; sequential systems are predominantly real-time data assimilation systems. In non-sequential data assimilation, observations from the future are also considered; an example of such a system is the reanalysis exercise.

Common Data Assimilation Methods

The earliest data assimilation methods were based on empirical approaches such as the Successive Correction Method (SCM) and nudging. In the SCM, the background fields serve as the first guess, and this estimate is corrected iteratively using suitable weights. In the nudging approach, a term is added to the prognostic equations that relaxes the solution towards the observations. For observations with a coarse resolution, the grid values are nudged point by point towards a three-dimensional, space- and time-interpolated analysis field. For fine-scale observations, the points near the observations are nudged based on the model error at the observation site.
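The nudging idea can be illustrated with a toy scalar example: a relaxation term −(x − x_obs)/τ is added to the tendency of a single prognostic variable, pulling the model state toward the observed value. The tendency function, time step and relaxation time below are illustrative assumptions.

```python
# Toy Newtonian nudging: one prognostic variable relaxed toward an
# observed value with relaxation time tau. All values are illustrative.

def step_with_nudging(x, x_obs, dt, tau, tendency=lambda x: 0.0):
    """One Euler step of dx/dt = tendency(x) - (x - x_obs)/tau."""
    return x + dt * (tendency(x) - (x - x_obs) / tau)

x = 280.0                     # model state, e.g. temperature in K
for _ in range(100):          # integrate with the physics switched off
    x = step_with_nudging(x, x_obs=285.0, dt=60.0, tau=3600.0)
print(x)                      # relaxes from 280 K toward 285 K
```

A short relaxation time τ forces the model hard toward the observations, while a long τ lets the model dynamics dominate; operational nudging also weights the term by distance from the observation site.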

Subsequently, assimilation algorithms based on statistical estimation theory, using both sequential and variational approaches, were employed. In least-squares methods, the analysis error is minimized by determining optimal weights via a least-squares approach; the most common such method of data assimilation has been optimal interpolation (OI). Least-squares methods differ from successive correction and nudging methods in that the observations are weighted according to known or estimated statistics of their errors, rather than by purely empirical values. Thus, observations from different sources can be weighted differently based on known instrument and other errors; for example, radiosonde measurements of temperature can be given a greater weight than satellite-derived temperatures. The optimal interpolation method attempts to minimize the total error of all the observations to arrive at an ideal weighting for the observations.

In variational data assimilation, a cost function is defined that measures the distance of the analysis from the background and from the observations. This cost function is then minimized to obtain the analysis. In 4D-Var, the cost function includes the distance to observations over a time interval known as the assimilation window. The difference between the OI and 3D-Var approaches lies only in the method of solution: in OI, the weights are calculated for each grid cell, while in 3D-Var, the cost function is minimized directly to produce the analysis. Kalman filtering is very similar to OI, but in this case the forecast error covariance is computed using the forecast model itself, and the gain is determined by least-squares minimization of the expected error of the updated state.
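In the standard notation (x the model state, x_b the background, y the observations, H the observation operator, and B and R the background and observation error covariance matrices), the cost function described above is commonly written as:

```latex
J(\mathbf{x}) = \frac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\,\mathbf{B}^{-1}\,(\mathbf{x}-\mathbf{x}_b)
  + \frac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}\,\mathbf{R}^{-1}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

In 4D-Var, the second term is summed over all observation times falling within the assimilation window, with the model propagating the state between those times.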

The KF’s main strength is that it explicitly accounts for modeling and measurement uncertainties in the updating process, as well as providing an estimate of the uncertainty of the system state. However, it assumes that the errors are Gaussian and that the dynamic model is linear. The Ensemble Kalman Filter (EnKF) is a popular sequential data assimilation algorithm that is mainly applied to nonlinear models. The EnKF is essentially a Monte Carlo approximation of the KF, in which an ensemble of K data assimilation cycles is carried out simultaneously. All the cycles assimilate the same real observations, but, in order to keep them realistically independent, different sets of random perturbations are added to the observations assimilated by each ensemble member. This ensemble of data assimilation systems is used to estimate the forecast error covariance.
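The perturbed-observation scheme can be sketched with a toy scalar example. The ensemble size, variances and values below are assumptions chosen for illustration; a real EnKF works with high-dimensional states and full covariance matrices rather than a scalar variance.

```python
import random
import statistics

# Toy perturbed-observation EnKF update for a scalar state.
random.seed(0)
K = 50                                                    # ensemble size
ensemble = [random.gauss(300.0, 1.0) for _ in range(K)]   # forecast ensemble
y, sigma_o = 302.0, 0.5                 # observation and its error std-dev

# The forecast error variance is estimated from the ensemble itself,
# which is the defining feature of the EnKF.
sigma_b2 = statistics.variance(ensemble)
gain = sigma_b2 / (sigma_b2 + sigma_o ** 2)   # Kalman gain (scalar case)

# Each member assimilates a differently perturbed copy of the observation,
# keeping the members statistically independent.
ensemble = [x + gain * (y + random.gauss(0.0, sigma_o) - x)
            for x in ensemble]

print(statistics.mean(ensemble))   # analysis mean, pulled toward 302
```

After the update, the ensemble spread also shrinks, reflecting the reduced uncertainty of the analysed state.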

3D-Var Assimilation

In 3D-Var, the analysis, i.e., the optimal state of the model, is determined by minimizing the cost function described above. Here, observed quantities can enter the cost function directly through the observation operator, rather than first being converted into analysis variables as in optimal interpolation. This is a major advantage, as it implies that raw data can be assimilated directly. Assimilating the data at the observation location avoids possible errors from the retrieval step used to extract the required fields from raw data; such extra processing is often a source of error in addition to the observation errors themselves.

Doppler Weather Radars

The Doppler effect, first described for sound by Christian Andreas Doppler, is the observation that waves from a source approaching a stationary observer have a higher frequency, while waves from a receding source have a lower frequency. The principle Doppler established for sound waves proved valid for all electromagnetic waves as well. A Doppler Weather Radar (DWR) is an active remote sensing instrument that transmits microwave pulses and receives the reflected echoes, which aid in the investigation of the properties of the atmosphere. The frequency of the echo returned from a fixed target is the same as that of the transmitted wave, while the frequency of the echo from a moving target is shifted by the Doppler frequency. By measuring the phase shift between the transmitted and received signals, the radial wind can be ascertained. The amplitude of the returning pulses is used to estimate the reflectivity, which can be related to precipitation intensity.
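The relation between the measured Doppler shift and the radial velocity can be sketched as v_r = λ f_d / 2, where the factor of two accounts for the two-way path of the pulse; the C-band wavelength below is a typical value assumed for illustration, and sign conventions for motion toward or away from the radar vary between systems.

```python
# Back-of-the-envelope conversion of a Doppler frequency shift to a
# radial velocity. The 5.3 cm C-band wavelength is a typical assumed
# value, not a parameter of any specific radar.

WAVELENGTH_C_BAND = 0.053  # metres

def radial_velocity(doppler_shift_hz, wavelength=WAVELENGTH_C_BAND):
    """Radial velocity (m/s) from the measured Doppler shift (Hz)."""
    return wavelength * doppler_shift_hz / 2.0

# A 500 Hz shift at C band corresponds to about 13 m/s along the beam.
print(radial_velocity(500.0))  # -> 13.25
```

The small wavelength-to-velocity ratio is why even modest wind speeds produce easily measurable frequency shifts at microwave frequencies.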

The DWR also measures the velocity spread (spectrum width), which can be related to turbulence. In addition, polarimetric DWRs can measure parameters such as differential reflectivity, correlation coefficient, linear depolarization ratio and specific differential phase. Together, these seven parameters can be used to extract information such as the rainfall rate and hydrometeor classification, and even the type of rain, by determining features like drop shape and size. Since the radar operates at microwave frequencies, its beam penetrates thunderstorms and clouds to reveal the dynamical structure inside otherwise unobservable events. This “inside look” helps researchers understand the life cycle and dynamics of storms. The radial velocity and reflectivity measured by a Doppler weather radar have special features compared to conventional observations.

Radial velocity provides information about motion along the radar beam, while reflectivity provides a good measure of the different precipitation hydrometeors present. Assimilation of radar reflectivity and radial velocity could hence have a considerable positive impact on short-range numerical weather prediction of convective events. Xiao et al. (2007) performed multiple-radar data assimilation experiments to demonstrate the ability of the WRF 3DVAR system to simulate the three-dimensional structure of a squall-line system as well as the Quantitative Precipitation Forecast (QPF). Several studies have shown the impact of radar data assimilation on forecasts of strong convective events over the Indian region. Abilash et al. (2007), using MM5 3DVar for three strong convective events over Chennai and Kolkata, indicated that DWR data assimilation improves the initial field and enhances the QPF skill.

Srivatsava et al. (2010) used ARPS to show the positive impact of DWR radial velocity and reflectivity on the short-range prediction of convective systems over the Indian region. Prasad et al. (2014) studied two severe thunderstorms over Gangetic West Bengal and found that DWR observations improve the dynamic and thermodynamic features of the thunderstorm, with improved wind and moisture in the boundary layer. Abilash et al. (2012), using WRF 3DVar, showed that the improvement in QPF skill with radar data assimilation is seen more clearly in heavy rainfall events than in light rainfall. Thiruvengadam et al. (2019) studied the impact of using the unconventional control variables of horizontal wind components for the assimilation of DWR data on the skill of high-intensity precipitation forecasts.

Thunderstorms over south India

The south Indian peninsula has an interesting and complex topography, and severe convective activity is observed over this region. Tyagi (2007) studied the climatology of thunderstorms over India and reported that the thunderstorm frequency is between 40 and 60 days per year over Karnataka and the northern parts of Tamil Nadu, and between 60 and 80 days over Kerala and adjoining south Tamil Nadu. In the premonsoon season, the highest frequency, of more than 40 days, is observed over Kerala, among other regions. Thunderstorm frequency is 15-20 days over Orissa and adjoining parts of Andhra, while it is between 20 and 30 days over south Tamil Nadu and Karnataka. The highest number of thunderstorms in India during the postmonsoon season occurs over Kerala (20 to 25 days), whereas Tamil Nadu has a frequency of 10 to 15 days.

Manohar et al. (2005) reported the seasonal and latitudinal variation of thunderstorm activity over the Indian region. The latitudinal variation of thunderstorm frequency during the premonsoon and postmonsoon seasons is shown in Figures 1.2 and 1.3. In both cases, maximum thunderstorm activity (10 to 12 storms per month) is observed over the southern tip of Kerala. G Agnihotri and A P Dimri (2018) published a report on the observed structure of convective echoes over the southern Indian Peninsula during the premonsoon season using the TRMM radar. They studied the characteristics of 25 convective events and reported that 43.3% of convective echoes over the south peninsula are very deep, with vertical extents between 10 and 15 km, followed by 29.3% in the range of 8-10 km. 18.1% and 9.1% of the echoes have heights of less than 8 km and greater than 15 km, respectively.

They concluded from the cumulative frequency distributions of 30 and 40 dBZ echoes that nearly 23% and 7% of the convective echoes, respectively, cross 10 km height over the south Indian region in the premonsoon season. Many destructive effects have been reported in connection with intense convective events over south India (Agnihotri et al., 2018), including damage to roads, electric poles, plantations, crops and buildings, as well as human and animal casualties. While short-range forecasting of intense convective systems over West Bengal and the northeastern parts of the country has been studied extensively, only a few studies have addressed the prediction of thunderstorm events over south India.

Objectives

The main objective of this study is to assimilate radar data (equivalent reflectivity and radial velocity) from the C-Band polarimetric Doppler weather radar at the Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram using 3D-Var for the prediction of thunderstorms over South India.
