Search Results (11)

Search Parameters:
Journal = Encyclopedia
Section = Earth Sciences

27 pages, 12108 KB  
Entry
Two Geophysical Technologies Used in Archaeological Research Simplified and Explained
by Philip Reeder
Encyclopedia 2025, 5(3), 151; https://doi.org/10.3390/encyclopedia5030151 - 15 Sep 2025
Viewed by 903
Definition
The geophysical techniques ground penetrating radar (GPR) and electrical resistivity tomography (ERT) are commonly used data collection methodologies in numerous disciplines, including archaeology. Many researchers are now, or will be in the future, associated with projects that use these geophysical techniques, but are not well versed in the instrumentation, its function, related terminology, data interpretation, and outcomes. This entry outlines the general approach and background for completing this type of research, dissects the methodology of a completed geoarchaeological project that used both GPR and ERT, and provides concise definitions and explanations for all facets of the methodology. Based on this methodology, 21 terms or concepts related to GPR are explained in detail, as are 26 terms or concepts related to ERT, and some of these terms and concepts are further illuminated via 11 figures. There are also 133 references linked to the various concepts and terms presented in this entry.
(This article belongs to the Section Earth Sciences)
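
The depth-conversion terms this entry defines for GPR (two-way travel time, relative permittivity, wave velocity) follow a standard textbook relationship. A minimal sketch, not taken from the entry itself, with an illustrative permittivity value:

```python
# Standard GPR depth conversion: wave velocity v = c / sqrt(eps_r), and the
# reflector depth is v * t / 2 because t is the two-way travel time.
C = 0.3  # speed of light in m/ns

def gpr_depth(two_way_time_ns: float, rel_permittivity: float) -> float:
    """Estimate reflector depth (m) from two-way travel time (ns)."""
    v = C / rel_permittivity ** 0.5   # wave velocity in the medium, m/ns
    return v * two_way_time_ns / 2.0

# Dry sand (eps_r ~ 4 is an assumed, typical value): a 40 ns two-way time
# corresponds to roughly 3 m depth.
print(gpr_depth(40.0, 4.0))  # → 3.0
```

In practice the relative permittivity varies strongly with material and moisture content, so the velocity is usually calibrated against a target of known depth.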

16 pages, 334 KB  
Entry
Data Structures for 2D Representation of Terrain Models
by Eric Guilbert and Bernard Moulin
Encyclopedia 2025, 5(3), 98; https://doi.org/10.3390/encyclopedia5030098 - 7 Jul 2025
Viewed by 646
Definition
This entry gives an overview of the main data structures and approaches used for a two-dimensional representation of the terrain surface using a digital elevation model (DEM). A DEM represents the elevation of the Earth's surface from a set of points. It is used for terrain analysis, visualisation and interpretation. DEMs are most commonly defined as a grid where an elevation is assigned to each grid cell. Due to its simplicity, the square grid is the most common DEM structure. However, it is less adaptive and shows limitations for more complex processing and reasoning. The triangulated irregular network (TIN) is a more adaptive structure that explicitly stores the relationships between the points. Other topological structures (contour graphs, contour trees) have been developed to study terrain morphology. Topological relationships are also captured in another structure, the surface network (SN), composed of critical points (peaks, pits, saddles) and critical lines (thalwegs, ridge lines). The SN can be computed from either a TIN or a grid. Morse theory provides a mathematical framework for studying the topology of surfaces, which is applied to the SN. It has been used for terrain simplification, multi-resolution modelling, terrain segmentation and landform identification. The extended surface network (ESN) extends the classical SN by integrating both the surface and the drainage networks. The ESN can itself be extended for the cognitive representation of the terrain based on saliences (typical points, lines and regions) and skeleton lines (linking critical points), while capturing the context in which landforms appear using topo-contexts.
(This article belongs to the Section Earth Sciences)
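
As a concrete illustration of the square-grid DEM structure described above, here is a minimal sketch (not from the entry) that stores elevations in a 2D array and derives slope by central differences:

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cellsize: float) -> np.ndarray:
    """Slope magnitude (degrees) of a grid DEM via central differences."""
    # np.gradient returns derivatives along rows (y) then columns (x).
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# An inclined plane rising 1 m per 1 m cell in x has a uniform 45° slope.
plane = np.arange(5, dtype=float)[None, :].repeat(5, axis=0)
print(slope_degrees(plane, cellsize=1.0))
```

A TIN would instead store the points and their triangulation explicitly, trading this array simplicity for adaptivity.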

17 pages, 9577 KB  
Entry
Geodynamics of the Mediterranean Region: Primary Role of Extrusion Processes
by Enzo Mantovani, Marcello Viti, Caterina Tamburelli and Daniele Babbucci
Encyclopedia 2025, 5(3), 97; https://doi.org/10.3390/encyclopedia5030097 - 7 Jul 2025
Viewed by 751
Definition
Tectonic activity in the Mediterranean region has been driven by the convergence of the confining plates (Nubia, Arabia and Eurasia). This convergence has been accommodated by the consumption of the oceanic domains that were present in the late Oligocene. It is suggested that this process has been enabled by the lateral escape of orogenic belts in response to constrictional contexts. Where this condition was not present, subduction did not occur. This interpretation can plausibly and coherently account for the very complex pattern of tectonic processes in the whole area since the early Miocene. It is also suggested, through some examples, that the geodynamic context proposed here might help us to recognize the connection between the ongoing tectonic processes and the spatio-temporal distribution of past major earthquakes. Finally, the incompatibilities of the main alternative geodynamic interpretation (slab pull) with the observed deformation pattern are discussed.
(This article belongs to the Section Earth Sciences)

13 pages, 1507 KB  
Entry
Revisiting Lorenz’s Error Growth Models: Insights and Applications
by Bo-Wen Shen
Encyclopedia 2024, 4(3), 1134-1146; https://doi.org/10.3390/encyclopedia4030073 - 14 Jul 2024
Cited by 1 | Viewed by 2531
Definition
This entry examines Lorenz’s error growth models with quadratic and cubic hypotheses, highlighting their mathematical connections to the non-dissipative Lorenz 1963 model. The quadratic error growth model is the logistic ordinary differential equation (ODE) with a quadratic nonlinear term, while the cubic model is derived by replacing the quadratic term with a cubic one. A variable transformation shows that the cubic model can be converted to the same form as the logistic ODE. The relationship between the continuous logistic ODE and its discrete version, the logistic map, illustrates chaotic behaviors, demonstrating computational chaos with large time steps. A variant of the logistic ODE is proposed to show how finite predictability horizons can be determined, emphasizing the continuous dependence on initial conditions (CDIC) related to stable and unstable asymptotic values. This review also presents the mathematical relationship between the logistic ODE and the non-dissipative Lorenz 1963 model.
(This article belongs to the Section Earth Sciences)
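
The quadratic error-growth hypothesis discussed above is the logistic ODE dE/dt = aE(1 − E/E_max). A hedged numerical sketch (the coefficients are illustrative, not Lorenz's fitted values) of both behaviors the entry mentions: smooth growth to saturation with a small time step, and computational chaos with a large one:

```python
def euler_logistic(e0, a, e_max, dt, steps):
    """Forward-Euler integration of dE/dt = a*E*(1 - E/E_max)."""
    e = e0
    out = [e]
    for _ in range(steps):
        e = e + dt * a * e * (1.0 - e / e_max)
        out.append(e)
    return out

# Small time step: the error grows smoothly toward the asymptote E_max = 1.
small_dt = euler_logistic(0.01, 1.0, 1.0, 0.01, 2000)
print(round(small_dt[-1], 4))  # → 1.0

# Large time step: the discrete map becomes equivalent to a chaotic logistic
# map, so the iterates wander irregularly instead of converging.
big_dt = euler_logistic(0.01, 1.0, 1.0, 3.0, 2000)
```

With dt = 3 the Euler update x + 3x(1 − x) is conjugate to the logistic map y → 4y(1 − y), the classic chaotic case noted in the entry.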

18 pages, 765 KB  
Review
A Review of Event-Based Conceptual Rainfall-Runoff Models: A Case for Australia
by Sabrina Ali, Ataur Rahman and Rehana Shaik
Encyclopedia 2024, 4(2), 966-983; https://doi.org/10.3390/encyclopedia4020062 - 12 Jun 2024
Cited by 2 | Viewed by 3848
Abstract
Event-based models focus on modelling of peak runoff from rainfall data. Conceptual models indicate simplified models that provide reasonably accurate answers despite their crude nature. Rainfall-runoff models are used to transform a rainfall event into a runoff event. This paper focuses on reviewing computational simulation of rainfall-runoff processes over a catchment. Lumped conceptual, event-based rainfall-runoff models have remained the dominant practice for design flood estimation in Australia for many years due to their simplicity, flexibility, and accuracy under certain conditions. Attempts to establish regionalization methods for prediction of design flood hydrographs in ungauged catchments have seen little success. Therefore, as well as reviewing key rainfall-runoff model components for design flood estimation with a special focus on event-based conceptual models, this paper covers aspects of regionalization to promote the application of these models to ungauged catchments.
(This article belongs to the Section Earth Sciences)
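
One common event-based conceptual component in Australian practice is the initial loss–continuing loss (IL-CL) model. A toy sketch (the parameter values are hypothetical) that converts a rainfall hyetograph into rainfall excess:

```python
def rainfall_excess(rain_mm, initial_loss=10.0, continuing_loss=2.5):
    """Per-interval rainfall excess (mm) under an IL-CL loss model.

    The initial-loss store (mm) absorbs rain first; once it is full, a
    constant continuing loss (mm per interval) is subtracted.
    """
    il_left = initial_loss
    excess = []
    for r in rain_mm:
        absorbed = min(r, il_left)          # fill the initial-loss store
        il_left -= absorbed
        remainder = r - absorbed
        excess.append(max(0.0, remainder - continuing_loss) if il_left <= 0.0 else 0.0)
    return excess

# A four-interval storm: the first 10 mm satisfy the initial loss, then
# 2.5 mm per interval is lost before any excess is produced.
print(rainfall_excess([5, 8, 12, 6]))  # → [0.0, 0.5, 9.5, 3.5]
```

In a full event-based model this excess would then be routed through a unit hydrograph or storage model to produce the runoff hydrograph.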

18 pages, 317 KB  
Entry
Optimization Examples for Water Allocation, Energy, Carbon Emissions, and Costs
by Angelos Alamanos and Jorge Andres Garcia
Encyclopedia 2024, 4(1), 295-312; https://doi.org/10.3390/encyclopedia4010022 - 8 Feb 2024
Cited by 2 | Viewed by 2318
Definition
The field of Water Resources Management (WRM) is becoming increasingly interdisciplinary, realizing its direct connections with energy, food, and social and economic sciences, among others. Computationally, this leads to more complex models, wherein the achievement of multiple goals is sought. Optimization processes have found various applications in such complex WRM problems. This entry considers the main factors involved in modern WRM and casts them as a single optimization problem, including water allocation from different sources to different uses and non-renewable and renewable energy supplies, with their associated carbon emissions and costs. The entry explores the problem mathematically by presenting different optimization approaches, such as linear, fuzzy, dynamic, goal, and non-linear programming models. Furthermore, codes for each model are provided in Python, an open-source language. This entry has an educational character, and the examples presented are easily reproducible, so it is expected to be a useful resource for students, modelers, researchers, and water managers.
(This article belongs to the Section Earth Sciences)
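
In the spirit of the entry's reproducible Python examples, a toy linear program (all numbers here are invented, and `scipy` is assumed to be available): water from two sources is allocated to two uses at minimum cost, subject to demand and supply-capacity constraints:

```python
from scipy.optimize import linprog

# Decision variables x = [s1_to_u1, s1_to_u2, s2_to_u1, s2_to_u2],
# with a unit cost for each source-to-use route (illustrative values).
cost = [1.0, 1.2, 2.0, 0.8]

# Each use's demand must be met exactly: u1 = 50, u2 = 30 units.
a_eq = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
b_eq = [50, 30]

# Each source's total supply is capped: s1 <= 60, s2 <= 40 units.
a_ub = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_ub = [60, 40]

res = linprog(cost, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
# Optimal plan: serve u1 entirely from s1 and u2 entirely from s2,
# giving a cost of 50*1.0 + 30*0.8 = 74.
print(res.x, res.fun)
```

Carbon-emission or energy terms would enter the same way, as extra cost coefficients or extra constraint rows.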
13 pages, 1021 KB  
Entry
Lorenz’s View on the Predictability Limit of the Atmosphere
by Bo-Wen Shen, Roger A. Pielke, Xubin Zeng and Xiping Zeng
Encyclopedia 2023, 3(3), 887-899; https://doi.org/10.3390/encyclopedia3030063 - 22 Jul 2023
Cited by 5 | Viewed by 7139
Definition
To determine whether (or not) the intrinsic predictability limit of the atmosphere is two weeks and whether (or not) Lorenz’s approaches support this limit, this entry discusses the following topics: (A) The Lorenz 1963 model qualitatively revealed the essence of a finite predictability within a chaotic system such as the atmosphere. However, the Lorenz 1963 model did not determine a precise limit for atmospheric predictability. (B) In the 1960s, using real-world models, the two-week predictability limit was originally estimated based on a doubling time of five days. The finding was documented by Charney et al. in 1966 and has become a consensus. Throughout this entry, Major Points A and B are used as respective references for these topics. A literature review and an analysis suggested that the Lorenz 1963 model qualitatively revealed a finite predictability, and that findings of the Lorenz 1969 model with a saturation assumption supported the idea of the two-week predictability limit, which, in the 1960s, was estimated based on a doubling time of five days obtained using real-world models. However, the theoretical Lorenz 1963 and 1969 models have limitations, such as missing processes and simplifying assumptions, and, therefore, cannot represent an intrinsic predictability limit of the atmosphere. This entry suggests an optimistic view for searching for a predictability limit using different approaches and is supported by recent promising simulations that go beyond two weeks.
(This article belongs to the Section Earth Sciences)
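
Major Point B rests on simple doubling-time arithmetic; a sketch with illustrative numbers that are not from the entry:

```python
import math

def predictability_limit_days(initial_error_ratio, doubling_time_days=5.0):
    """Days until an error at `initial_error_ratio` of saturation reaches 1.

    The number of doublings needed is log2(1 / ratio); each doubling takes
    `doubling_time_days` (five days in the 1960s estimates).
    """
    doublings = math.log2(1.0 / initial_error_ratio)
    return doublings * doubling_time_days

# An error at 1/8 of saturation doubles three times in 15 days...
print(predictability_limit_days(1 / 8))            # → 15.0
# ...while roughly 1/7 of saturation gives the famous two weeks.
print(round(predictability_limit_days(1 / 7), 1))  # → 14.0
```

This makes explicit why the limit is an estimate: it depends on both the assumed doubling time and the assumed initial error.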

9 pages, 716 KB  
Entry
A Methodology for Air Temperature Extrema Characterization Pertinent to Improving the Accuracy of Climatological Analyses
by Ana Žaknić-Ćatović and William A. Gough
Encyclopedia 2023, 3(1), 371-379; https://doi.org/10.3390/encyclopedia3010023 - 19 Mar 2023
Viewed by 1812
Definition
The suggested methodology for the characterization of temperature extrema presents a multistep preprocessing procedure intended to derive time series of correctly identified and thermally defined daily air temperature extrema pairs. The underlying conceptual framework for this approach was developed in response to the existing gaps in the current state of daily extrema identification and the development of extrema-based synthetic air temperature time series. A code consisting of a series of algorithms was developed to establish four-parameter criteria for a more accurate representation of daily variability that allows easy replication of temperature distribution based on the correct characterization of daily temperature patterns. The first preprocessing step consists of subjecting the high-frequency temperature time series to a theoretical diurnal observing window that imposes latitudinally and seasonally crafted limits for the individual identification of daily minima and maxima. The following preprocessing step involves supplementing the air temperature extrema with information on the timing of their occurrence, which is vital for the reconstruction of the temperature time series. The subsequent step involves the application of an innovative temperature pattern recognition algorithm that identifies physically homogeneous air temperature populations based on the information obtained in previous steps. The last step involves the use of a metric for the assessment of extrema temperature and timing parameters’ susceptibility to climate change. The application of the presented procedure to high-frequency temperature data yields two sets of physically homogeneous extrema time series with the preserved characteristics of the overall temperature variability.
In the present form, individual elements of this methodology are applicable for correcting historical sampling and air temperature averaging biases, improving the reproducibility of daily air temperature variation, and enhancing the performance of temperature index formulae based on daily temperature extrema. The objective of this analysis is the eventual implementation of the presented methodology into the practice of systematic temperature extrema identification and preprocessing of temperature time series for the configuration of physically homogeneous air temperature subpopulations.
(This article belongs to the Section Earth Sciences)
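
A much-simplified sketch of the first preprocessing step described above: identify the daily minimum and maximum, each with its time of occurrence, inside a prescribed diurnal observing window (the window bounds here are hypothetical, not the latitudinally and seasonally crafted limits of the entry):

```python
def daily_extrema(hourly_temps, window=(0, 24)):
    """Return ((min_hour, t_min), (max_hour, t_max)) within the window.

    `hourly_temps` is one day of hourly temperatures indexed by hour;
    `window` is a half-open [start, end) range of hours to search.
    """
    lo, hi = window
    in_window = [(h, t) for h, t in enumerate(hourly_temps) if lo <= h < hi]
    t_min = min(in_window, key=lambda p: p[1])
    t_max = max(in_window, key=lambda p: p[1])
    return t_min, t_max

# Synthetic day with a pre-dawn minimum and a mid-afternoon maximum:
temps = [3, 2, 1, 0, -1, 0, 2, 5, 8, 11, 13, 15,
         16, 17, 18, 17, 15, 12, 10, 8, 6, 5, 4, 3]
print(daily_extrema(temps, window=(2, 22)))  # → ((4, -1), (14, 18))
```

Retaining the timing alongside the values is exactly the information the entry flags as vital for reconstructing the temperature time series.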

10 pages, 3706 KB  
Entry
Three Kinds of Butterfly Effects within Lorenz Models
by Bo-Wen Shen, Roger A. Pielke, Xubin Zeng, Jialin Cui, Sara Faghih-Naini, Wei Paxson and Robert Atlas
Encyclopedia 2022, 2(3), 1250-1259; https://doi.org/10.3390/encyclopedia2030084 - 4 Jul 2022
Cited by 19 | Viewed by 10095
Definition
Within Lorenz models, the three major kinds of butterfly effects (BEs) are the sensitive dependence on initial conditions (SDIC), the ability of a tiny perturbation to create an organized circulation at large distances, and the hypothetical role of small-scale processes in contributing to finite predictability, referred to as the first, second, and third kinds of butterfly effects (BE1, BE2, and BE3), respectively. A well-accepted definition of the butterfly effect is the BE1 with SDIC, which was rediscovered by Lorenz in 1963. In fact, the use of the term “butterfly” appeared in a conference presentation by Lorenz in 1972, when Lorenz introduced the BE2 as the metaphorical butterfly effect. In 2014, the so-called “real butterfly effect”, which is based on the features of Lorenz’s study in 1969, was introduced as the BE3.
(This article belongs to the Section Earth Sciences)
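
BE1/SDIC is easy to demonstrate with the Lorenz 1963 system itself; a standard sketch (classic parameter values, crude forward-Euler integration, not code from the entry) in which two states starting 10⁻⁸ apart diverge by many orders of magnitude:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # perturbed by 1e-8 in x only
max_gap = 0.0
for _ in range(3000):          # integrate to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    max_gap = max(max_gap, abs(a[0] - b[0]))
print(max_gap)                 # the 1e-8 gap has grown by many orders of magnitude
```

This is BE1 only; BE2 and BE3 concern spatial remote influence and small-scale processes, which this single low-order model cannot exhibit.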

16 pages, 5714 KB  
Review
A General Description of Karst Types
by Márton Veress
Encyclopedia 2022, 2(2), 1103-1118; https://doi.org/10.3390/encyclopedia2020073 - 6 Jun 2022
Cited by 4 | Viewed by 5253
Abstract
This study presents a general description of the Earth’s karst types based on literature data and field observations. An improved classification of karst types distinguishes main groups, groups, and subgroups, while a division of karst types involves a main karst type, karst type, subtype, variety, and non-individual karst type. The relation between karst type and karst area is described. The role of various characteristics of karsts in the development of primary, secondary, and tertiary karst types is analyzed. Their structure is studied, which includes the geomorphic agent, process, feature, feature assemblage, karst system and the characteristics of the bearing karst area. Dominant, tributary, and accessory features are distinguished. The conditions of the stability and development of types are studied, transformation ways are classified, and the effect of climate on types is described.
(This article belongs to the Section Earth Sciences)

16 pages, 7469 KB  
Entry
Spatial Hurst–Kolmogorov Clustering
by Panayiotis Dimitriadis, Theano Iliopoulou, G.-Fivos Sargentis and Demetris Koutsoyiannis
Encyclopedia 2021, 1(4), 1010-1025; https://doi.org/10.3390/encyclopedia1040077 - 29 Sep 2021
Cited by 13 | Viewed by 4102
Definition
Stochastic analysis in the scale domain (instead of the traditional lag or frequency domains) is introduced as a robust means to identify, model and simulate the Hurst–Kolmogorov (HK) dynamics, ranging from small (fractal) scales to large scales exhibiting the clustering behavior (also known as the Hurst phenomenon or long-range dependence). HK clustering is an attribute of a multidimensional (1D, 2D, etc.) spatio-temporal stationary stochastic process with an arbitrary marginal distribution function, whose dependence structure exhibits fractal behavior at small spatio-temporal scales and power-type behavior at large scales, yielding a high probability that low- or high-magnitude events group together in space and time. This behavior is preferably analyzed through second-order statistics in the scale domain, via the stochastic metric of the climacogram, i.e., the variance of the averaged spatio-temporal process versus spatio-temporal scale.
(This article belongs to the Section Earth Sciences)
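
An empirical climacogram, following the definition above (variance of the scale-averaged process versus scale), can be sketched in a few lines; the test series here is synthetic white noise, not data from the entry:

```python
import random

def climacogram(series, scales):
    """Variance of non-overlapping block means of `series` at each scale."""
    out = {}
    for k in scales:
        n = len(series) // k
        means = [sum(series[i * k:(i + 1) * k]) / k for i in range(n)]
        mu = sum(means) / n
        out[k] = sum((m - mu) ** 2 for m in means) / n
    return out

random.seed(1)
white = [random.gauss(0, 1) for _ in range(10000)]
g = climacogram(white, [1, 10, 100])
# For white noise the climacogram decays like 1/k with scale k; an HK
# process with strong clustering would decay markedly more slowly.
print(g[1], g[10], g[100])
```

Fitting the decay rate of this curve at large scales is one common way to estimate the Hurst parameter.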
