Search Results (73)

Search Parameters:
Keywords = BigTable

24 pages, 11759 KB  
Review
Data Sources for Traffic Analysis in Urban Canyons—The Comprehensive Literature Review
by Michał Zawodny and Maciej Kruszyna
Appl. Sci. 2025, 15(19), 10686; https://doi.org/10.3390/app151910686 - 3 Oct 2025
Viewed by 1257
Abstract
We propose a comprehensive literature review based on big data and V2X research to find promising tools to detect vehicles for traffic research and to provide safe autonomous vehicle (AV) traffic. The presented data sources can provide real-time data for V2X systems and offline databases from VANETs for micro- and macro-modeling in traffic research. The authors present a set of sources that are not based on GNSS or other systems that could be disrupted by high-rise buildings and dense smart city infrastructure, as well as a review of big data sources in traffic modeling that can be useful in future traffic research. The findings of both reviews are summarized in tables at the end of the review sections of the paper. The authors add propositions in the form of two hypotheses on how traffic models can obtain data in the urban canyon connected environment scenario. The first hypothesis uses Roadside Units (RSUs) to retrieve data in ways similar to cellular data in traffic research and shows that this source is data-rich. The second acknowledges the research potential of Bluetooth/Wi-Fi scanners in V2X environments.
(This article belongs to the Special Issue Mapping and Localization for Intelligent Vehicles in Urban Canyons)

27 pages, 1118 KB  
Article
Enabling Intelligent Data Modeling with AI for Business Intelligence and Data Warehousing: A Data Vault Case Study
by Andreea Vines, Ana-Ramona Bologa and Andreea-Izabela Bostan
Systems 2025, 13(9), 811; https://doi.org/10.3390/systems13090811 - 16 Sep 2025
Viewed by 1373
Abstract
This study explores the innovative application of Artificial Intelligence (AI) in transforming data engineering practices, with a specific focus on optimizing data modeling and data warehouse automation for Business Intelligence (BI) systems. The proposed framework automates the creation of Data Vault models directly from raw source tables by leveraging the advanced capabilities of Large Language Models (LLMs). The approach involves multiple iterations and uses a set of LLMs from various providers to improve accuracy and adaptability. These models identify relevant entities, relationships, and historical attributes by analyzing the metadata, schema structures, and contextual relationships embedded within the source data. To ensure the generated models are valid and reliable, the study introduces a rigorous validation methodology that combines syntactic, structural, and semantic evaluations into a single comprehensive validity coefficient. This metric provides a quantifiable measure of model quality, facilitating both automated evaluation and human understanding. Through iterative refinement and multi-model experimentation, the system significantly reduces manual modeling efforts, enhances consistency, and accelerates the data warehouse development lifecycle. This exploration serves as a foundational step toward understanding the broader implications of AI-driven automation in advancing the state of modern Big Data warehousing and analytics.
(This article belongs to the Special Issue Business Intelligence and Data Analytics in Enterprise Systems)
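
Aggregating syntactic, structural, and semantic scores into one number can be sketched as a weighted mean. The paper's exact formula is not given in the abstract, so the weights and score names below are illustrative assumptions only.

```python
# Hypothetical sketch of a combined validity coefficient; the weights
# (0.3, 0.3, 0.4) and the [0, 1] score convention are assumptions,
# not the paper's actual formula.

def validity_coefficient(syntactic: float, structural: float, semantic: float,
                         weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Weighted aggregate of three [0, 1] evaluation scores."""
    scores = (syntactic, structural, semantic)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each score must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))

# Example: a generated Data Vault model that parses cleanly (syntactic),
# has the expected hub/link/satellite structure (structural), but only
# partially matches source semantics.
print(validity_coefficient(1.0, 0.9, 0.6))  # 0.81
```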

29 pages, 919 KB  
Article
DDoS Defense Strategy Based on Blockchain and Unsupervised Learning Techniques in SDN
by Shengmin Peng, Jialin Tian, Xiangyu Zheng, Shuwu Chen and Zhaogang Shu
Future Internet 2025, 17(8), 367; https://doi.org/10.3390/fi17080367 - 13 Aug 2025
Cited by 1 | Viewed by 1116
Abstract
With the rapid development of technologies such as cloud computing, big data, and the Internet of Things (IoT), Software-Defined Networking (SDN) is emerging as a new network architecture for the modern Internet. SDN separates the control plane from the data plane, allowing a central controller, the SDN controller, to quickly direct the routing devices within the topology to forward data packets, thus providing flexible traffic management for communication between information sources. However, traditional Distributed Denial of Service (DDoS) attacks still significantly impact SDN systems. This paper proposes a novel dual-layer strategy capable of detecting and mitigating DDoS attacks in an SDN network environment. The first layer of the strategy enhances security by using blockchain technology to replace the SDN flow table storage container in the northbound interface of the SDN controller. Smart contracts are then used to process the stored flow table information. We employ the time window algorithm and the token bucket algorithm to construct the first-layer strategy to defend against obvious DDoS attacks. To detect and mitigate less obvious DDoS attacks, we design a second-layer strategy that uses a composite data feature correlation coefficient calculation method and the Isolation Forest algorithm from unsupervised learning to perform binary classification, thereby identifying abnormal traffic. We conduct experimental validation using the publicly available DDoS dataset CIC-DDoS2019. The results show that using this strategy in the SDN network reduces the average deviation of round-trip time (RTT) by approximately 38.86% compared with the original SDN network without this strategy. Furthermore, the accuracy of DDoS attack detection reaches 97.66%, with an F1 score of 92.2%. Compared with other similar methods, under comparable detection accuracy, the deployment of our strategy in small-scale SDN network topologies provides faster detection of DDoS attacks and exhibits less fluctuation in detection time. This indicates that the strategy can effectively identify DDoS attacks without affecting the stability of data transmission in the SDN network environment.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
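
As a rough illustration of the second-layer detector, the sketch below flags anomalous flows with scikit-learn's Isolation Forest. The feature set, contamination rate, and synthetic data are assumptions, and the paper's composite feature-correlation step is omitted.

```python
# Minimal sketch: flag anomalous flows with Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy flow features: [packets/s, bytes/s, distinct source IPs] (assumed).
normal = rng.normal([100, 8e4, 20], [10, 5e3, 3], size=(500, 3))
attack = rng.normal([900, 6e5, 400], [50, 4e4, 30], size=(25, 3))
X = np.vstack([normal, attack])

# contamination sets the expected fraction of anomalies in the data.
clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = clf.predict(X)          # +1 = normal, -1 = anomalous
print("flagged as DDoS-like:", int((labels == -1).sum()))
```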

22 pages, 1159 KB  
Article
Compaction-Aware Flash Memory Remapping for Key–Value Stores
by Jialin Wang, Zhen Yang, Yi Fan and Yajuan Du
Micromachines 2025, 16(6), 699; https://doi.org/10.3390/mi16060699 - 11 Jun 2025
Viewed by 1807
Abstract
With the rapid development of big data and artificial intelligence, the demand for memory has exploded. As a key data structure in modern databases and distributed storage systems, the Log-Structured Merge Tree (LSM-tree) has been widely employed (such as in LevelDB and RocksDB) in key–value (KV) systems due to its efficient write performance. In LSM-tree-based KV stores, typically deployed on systems with DRAM-SSD storage, KV items are first organized into a MemTable, a main-memory buffer for SSTables. When the buffer size exceeds a threshold, the MemTable is flushed to the SSD and reorganized into an SSTable, which is then passed down level by level through compaction. However, compaction degrades write performance and SSD endurance due to significant write amplification. To address this issue, recent proposals have mostly focused on redesigning the structure of LSM-trees. We discover the prevalence of unchanged data blocks (UDBs) in the LSM-tree compaction process, i.e., blocks that are written back to the SSD exactly as they were read into memory, which induces extra write amplification and degrades I/O performance. In this paper, we propose a KV store design in SSD, called RemapCom, to exploit remapping of these UDBs. RemapCom first identifies UDBs with a lightweight state machine integrated into the compaction merge process. To increase the ratio of UDBs, RemapCom also designs a UDB retention method that further develops the benefit of remapping. Moreover, we implement a prototype of RemapCom on LevelDB by providing two primitives for the remapping. Compared to the state of the art, the evaluation results demonstrate that RemapCom can reduce write amplification by up to 53% and improve write throughput by up to 30%.
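
The core observation, that an output block byte-identical to an input block need not be rewritten, can be illustrated with a simple hash comparison. This is a conceptual sketch, not RemapCom's lightweight state machine or its FTL remapping primitives.

```python
# Illustrative sketch: find unchanged data blocks (UDBs) in a compaction
# merge by hashing input and output blocks. RemapCom's real design works
# inside the merge loop and remaps at the flash translation layer.
import hashlib

def block_hash(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def find_udbs(input_blocks: list[bytes], output_blocks: list[bytes]) -> list[int]:
    """Return indices of output blocks that already exist among the inputs."""
    seen = {block_hash(b) for b in input_blocks}
    return [i for i, b in enumerate(output_blocks) if block_hash(b) in seen]

inputs = [b"k1:v1", b"k2:v2", b"k3:v3"]
outputs = [b"k1:v1", b"k2:v2-updated", b"k3:v3"]   # k2 changed during merge
print(find_udbs(inputs, outputs))  # [0, 2] -> candidates for remapping
```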

20 pages, 3225 KB  
Article
Merging Multiple System Perspectives: The Key to Effective Inland Shipping Emission-Reduction Policy Design
by Solange van der Werff, Fedor Baart and Mark van Koningsveld
J. Mar. Sci. Eng. 2025, 13(4), 716; https://doi.org/10.3390/jmse13040716 - 3 Apr 2025
Cited by 1 | Viewed by 1226
Abstract
Policymakers in the maritime sector face the challenge of designing and implementing decarbonization policies while maintaining safe navigation. Herein, the inland sector serves as a promising stepping stone due to the possibility of creating a dense energy supply infrastructure and the shorter distances compared to marine shipping. A key challenge is to consider the totality of operational profiles resulting from the range of vessels and routes encountering varying local circumstances. In this study, we use a new scheme called an "event table" to transform big data on vessel trajectories (AIS data), combined with energy-estimating algorithms, into shipping-emission outcomes that can be evaluated from multiple perspectives. We can subsequently tie observations in one perspective (for example, large-scale spatial patterns on a map) to supporting explanations based on another perspective (for example, water currents, vessel speeds, or engine ages and their contributions to emissions). Hence, combining these outcomes from multiple perspectives and evaluation scales provides an essential understanding of how the system works and what the most effective improvement measures will be. With our approach, we can translate large quantities of data from multiple sources into multiple linked perspectives on the shipping system.
(This article belongs to the Special Issue Green Shipping Corridors and GHG Emissions)
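
A toy version of the event-table idea: one row per vessel event, enriched with estimated emissions, can then be aggregated from different perspectives. Column names and the emission factor below are invented for illustration.

```python
# Sketch of an "event table": the same per-event rows support multiple
# linked perspectives (per river segment, per vessel).
import pandas as pd

events = pd.DataFrame({
    "vessel_id":  ["A", "A", "B", "B"],
    "segment":    ["lock-1", "reach-2", "reach-2", "lock-1"],
    "speed_ms":   [2.1, 3.4, 3.0, 1.8],
    "energy_kWh": [120.0, 310.0, 280.0, 95.0],
})
CO2_PER_KWH = 0.65  # kg CO2 per kWh; assumed factor, not from the paper
events["co2_kg"] = events["energy_kWh"] * CO2_PER_KWH

# Two perspectives from one table: spatial and per-vessel.
print(events.groupby("segment")["co2_kg"].sum())
print(events.groupby("vessel_id")["co2_kg"].sum())
```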

23 pages, 1811 KB  
Article
EGA: An Efficient GPU Accelerated Groupby Aggregation Algorithm
by Zhe Wang, Yao Shen and Zhou Lei
Appl. Sci. 2025, 15(7), 3693; https://doi.org/10.3390/app15073693 - 27 Mar 2025
Viewed by 959
Abstract
With the exponential growth of big data, efficient groupby aggregation (GA) has become critical for real-time analytics across industries. GA is a key method for extracting valuable information. Current CPU-based solutions (such as large-scale parallel processing platforms) face computational throughput limitations. Since CPU-based platforms struggle to support real-time big data analysis, GPUs have been introduced to support real-time GA analysis. Most GPU GA algorithms are based on hashing methods, and these algorithms experience performance degradation when the load factor of the hash table is too high or when the data volume exceeds the GPU memory capacity. This paper proposes an efficient hash-based GPU-accelerated groupby aggregation algorithm (EGA) that addresses these limitations. EGA features different designs for different scenarios: single-pass EGA (SP-EGA) maintains high efficiency when data fit in GPU memory, while multipass EGA (MP-EGA) supports GA for data exceeding GPU memory capacity. EGA demonstrates significant acceleration: SP-EGA outperforms SOTA hash-based GPU algorithms by 1.16–5.39× at load factors >0.90 and surpasses SOTA sort-based GPU methods by 1.30–2.48×. MP-EGA achieves a 6.45–29.12× speedup over SOTA CPU implementations.
(This article belongs to the Special Issue Methods and Software for Big Data Analytics and Applications)
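
The hash-based groupby-aggregation pattern that EGA parallelizes can be shown on the CPU in a few lines. The sketch below mirrors the algorithmic idea only, not the paper's GPU design or its out-of-memory multipass variant.

```python
# CPU-side sketch of hash-based groupby aggregation: build a hash table
# keyed by the group column and fold each row's value into a running sum.
from collections import defaultdict

def hash_groupby_sum(keys: list[str], values: list[float]) -> dict[str, float]:
    table: dict[str, float] = defaultdict(float)
    for k, v in zip(keys, values):   # on a GPU, rows are processed in parallel
        table[k] += v                # with atomic updates into the hash table
    return dict(table)

print(hash_groupby_sum(["x", "y", "x", "z"], [1.0, 2.0, 3.0, 4.0]))
# {'x': 4.0, 'y': 2.0, 'z': 4.0}
```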

16 pages, 7829 KB  
Article
Fusion of Remotely Sensed Data with Monitoring Well Measurements for Groundwater Level Management
by César de Oliveira Ferreira Silva, Rodrigo Lilla Manzione, Epitácio Pedro da Silva Neto, Ulisses Alencar Bezerra and John Elton Cunha
AgriEngineering 2025, 7(1), 14; https://doi.org/10.3390/agriengineering7010014 - 9 Jan 2025
Cited by 1 | Viewed by 1602
Abstract
In the realm of hydrological engineering, integrating extensive geospatial raster data from remote sensing (Big Data) with sparse field measurements offers a promising approach to improving prediction accuracy in groundwater studies. In this study, we integrated multisource data by applying the linear model of coregionalization (LMC) to model the spatial relationships of variables and then utilized block-support regularization with collocated block cokriging (CBCK) to enhance our predictions. A critical engineering challenge addressed in this study is support homogenization, in which we adjusted punctual variances to block variances to ensure consistency in spatial predictions. Our case study focused on mapping groundwater table depth to improve water management and planning in a mixed land use area in Southeast Brazil occupied by sugarcane crops, silviculture (Eucalyptus), regenerating fields, and natural vegetation. We utilized the 90 m resolution TanDEM-X digital surface model and STEEP (Seasonal Tropical Ecosystem Energy Partitioning) data with a 500 m resolution to support the spatial interpolation of groundwater table depth measurements collected from 56 locations during the hydrological year 2015–16. Ordinary block kriging (OBK) and CBCK methods were employed. The CBCK method provided more reliable and accurate spatial predictions of groundwater depth levels (RMSE = 0.49 m), outperforming the OBK method (RMSE = 2.89 m). The OBK-based map concentrated deeper values near their wells and gave shallow depths for most points during estimation, whereas the CBCK-based map shows deeper predicted values at more points due to its relationship with the covariates. Using covariates improved the groundwater table depth mapping by detecting the interconnection of varied land uses, supporting water management for agronomic planning connected with ecosystem sustainability.
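
For readers unfamiliar with the kriging family compared here, the sketch below implements plain ordinary kriging in NumPy. CBCK's covariates and block-support regularization are not reproduced, and the variogram parameters and sample values are assumptions.

```python
# Minimal ordinary-kriging sketch: solve the kriging system for the
# weights at one target location. Variogram (exponential, assumed
# sill/range) and the three sample depths are made up for illustration.
import numpy as np

def gamma(h, sill=1.0, rng_=300.0):
    """Exponential semivariogram (assumed parameters)."""
    return sill * (1.0 - np.exp(-h / rng_))

def ok_predict(coords, z, target):
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)          # variogram matrix between samples
    A[-1, -1] = 0.0               # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    lam = np.linalg.solve(A, b)[:n]   # kriging weights
    return float(lam @ z)

coords = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
depths = np.array([3.2, 4.1, 2.8])    # groundwater table depths (m), toy data
print(ok_predict(coords, depths, np.array([50.0, 50.0])))
```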

26 pages, 7294 KB  
Article
Public Authentic-Replica Sampling Mechanism in Distributed Storage Environments
by Jiale Ye, Yongmei Bai, Jiang Xu, Shitao Huang, Zhaoyang Han and Wei Wan
Electronics 2024, 13(21), 4167; https://doi.org/10.3390/electronics13214167 - 23 Oct 2024
Cited by 1 | Viewed by 1402
Abstract
With the rapid development of wireless communication and big data analysis technologies, the storage of massive amounts of data relies on third-party trusted storage, such as cloud storage. However, once data are stored on third-party servers, data owners lose physical control over their data, making it challenging to ensure data integrity and security. To address this issue, researchers have proposed integrity auditing mechanisms that allow for the auditing of data integrity on cloud servers without retrieving all the data. To further enhance the availability of data stored on cloud servers, multiple replicas of the original data are stored on the server. However, existing multi-replica auditing schemes suffer from server fraud, where the server does not actually store the corresponding data replicas. To tackle this issue, this paper presents a formal definition of authentic replicas along with a security model for the authentic-replica sampling mechanism. Based on time-lock puzzles, identity-based encryption (IBE) mechanisms, and succinct proof techniques, we design an authentic-replica auditing mechanism. This mechanism ensures the authenticity of replicas and can resist outsourcing attacks and generation attacks. Additionally, our schemes replace the combination of random numbers and replica correspondence tables with Linear Feedback Shift Registers (LFSRs), reducing the client-side generation and uploading of replica parameters from linear in the number of replicas to constant. Furthermore, our schemes allow for the public recovery of replica parameters, enabling any third party to verify the replicas through these parameters. As a result, the schemes achieve public verifiability and meet the efficiency requirements for authentic-replica sampling in multi-cloud environments, making them well suited to distributed storage environments. The experiments show that our scheme makes the time for generating replica parameters negligible while also greatly optimizing the time required for replica generation. As the amount of replica data increases, the time spent does not grow linearly. Due to the multi-party aggregation design, the verification time is also optimal: compared to the latest schemes, it is reduced by approximately 30%.
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems, Volume II)
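
Why an LFSR helps here: the entire pseudo-random sequence is recomputable by anyone from one seed and the tap positions, so a client need not generate and upload one parameter per replica. The 16-bit Fibonacci LFSR below is a generic illustration, not the paper's concrete construction.

```python
# Generic 16-bit Fibonacci LFSR; the taps correspond to a maximal-length
# polynomial, so the sequence repeats only after 2^16 - 1 steps.

def lfsr16(seed: int, taps=(16, 14, 13, 11)):
    """Yield an endless bit stream from a 16-bit Fibonacci LFSR."""
    state = seed & 0xFFFF
    assert state != 0, "LFSR state must be nonzero"
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1     # XOR the tapped bits
        state = ((state << 1) | bit) & 0xFFFF
        yield bit

# Any verifier holding the public seed can re-derive the same parameters.
gen = lfsr16(0xACE1)
replica_param = sum(next(gen) << i for i in range(32))  # 32 derived bits
print(hex(replica_param))
```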

18 pages, 8484 KB  
Article
Feasibility of Emergency Flood Traffic Road Damage Assessment by Integrating Remote Sensing Images and Social Media Information
by Hong Zhu, Jian Meng, Jiaqi Yao and Nan Xu
ISPRS Int. J. Geo-Inf. 2024, 13(10), 369; https://doi.org/10.3390/ijgi13100369 - 18 Oct 2024
Cited by 6 | Viewed by 2414
Abstract
In the context of global climate change, the frequency of sudden natural disasters is increasing. Assessing traffic road damage post-disaster is crucial for emergency decision-making and disaster management. Traditional ground observation methods for evaluating traffic road damage are limited by the timeliness and coverage of data updates; relying solely on them does not adequately support rapid assessment and emergency management during extreme natural disasters. Social media, a major source of big data, can effectively address these limitations by providing more timely and comprehensive disaster information. Motivated by this, we utilized multi-source heterogeneous data to assess the damage to traffic roads under extreme conditions and established a new framework for evaluating traffic roads in cities prone to flood disasters caused by rainstorms. The approach involves several steps. First, the surface area affected by precipitation is extracted using a threshold method constrained by confidence intervals derived from microwave remote sensing images. Second, disaster information is collected from the Sina Weibo platform, where social media information is screened and cleaned; a quantification table for road traffic loss assessment is defined, and a social media disaster information classification model combining a text convolutional neural network with attention mechanisms (TextCNN-Attention) is proposed. Finally, traffic road information on social media is matched with basic geographic data, the classification of traffic road disaster risk levels is visualized, and the assessment of traffic road disaster levels is completed based on multi-source heterogeneous data. Using the "7.20" rainstorm event in Henan Province as an example, this research categorizes the disaster's impact on traffic roads into five levels—particularly severe, severe, moderate, mild, and minimal—derived from remote sensing image monitoring and social media information analysis. The evaluation framework for flood disaster traffic roads based on multi-source heterogeneous data provides important data and methodological support for enhancing disaster management capabilities and systems.
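
A minimal PyTorch sketch of a TextCNN-with-attention classifier of the kind the paper proposes: all hyperparameters, dimensions, and the five-class output are assumptions based on the abstract, not the authors' architecture.

```python
# Sketch: convolutional n-gram features over token embeddings, then a
# learned attention pooling over positions instead of max-pooling, then
# a linear head over the five assumed damage classes.
import torch
import torch.nn as nn

class TextCNNAttention(nn.Module):
    def __init__(self, vocab=10000, emb=128, n_filters=64,
                 kernels=(3, 5, 7), n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, n_filters, k, padding=k // 2) for k in kernels)
        self.attn = nn.Linear(n_filters * len(kernels), 1)
        self.fc = nn.Linear(n_filters * len(kernels), n_classes)

    def forward(self, tokens):                     # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)     # (batch, emb, seq)
        feats = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        feats = feats.transpose(1, 2)              # (batch, seq, filters*3)
        w = torch.softmax(self.attn(feats), dim=1) # attention over positions
        pooled = (w * feats).sum(dim=1)            # weighted sum pooling
        return self.fc(pooled)

logits = TextCNNAttention()(torch.randint(0, 10000, (2, 40)))
print(logits.shape)  # torch.Size([2, 5])
```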

98 pages, 28240 KB  
Article
Water and the Origin of Life
by Marc Henry
Water 2024, 16(19), 2854; https://doi.org/10.3390/w16192854 - 8 Oct 2024
Cited by 2 | Viewed by 25156
Abstract
This article reviews all the major stages in the origins of life, from the emergence of matter in the initial Big Bang to the modern, civilized human being. On an immaterial level, it is proposed and explained how consciousness necessarily takes precedence over matter. Next, we explain how consciousness, with its ability to process information, selected the water molecule to breathe life into the periodic table of elements. We also explain why the notion of entropy allows us to evolve, "Die Entropie der Welt strebt einem Maximum zu" ("the entropy of the world tends toward a maximum", the second principle), and, therefore, takes precedence over the notion of energy, which, on the contrary, encourages us to preserve what we have, "Die Energie der Welt bleibt konstant" ("the energy of the world remains constant", the first principle). This is followed by a discussion of the importance of quantum coherence and the need to rely on a second quantization formalism for a proper understanding of the physical–biochemical properties of water. Moreover, throughout the argument developed on the best and most fundamental things science has to offer, care is taken to link this knowledge to the great philosophies of the West (Greece), the East (China and India), and even to practices of a shamanic nature (Africa and America). Hence, finally, we propose reconsidering all musical practice within the framework of the diapason of water at a frequency of 429.62 Hz, as well as all therapeutic practice on the basis of seven clearly identified and established frameworks of thought.

16 pages, 2290 KB  
Article
Why Do Companies Cook the Books? Empirical Study of the Motives of Creative Accounting of Slovak Companies
by Jakub Michulek, Lubica Gajanova, Anna Krizanova and Roman Blazek
Adm. Sci. 2024, 14(7), 158; https://doi.org/10.3390/admsci14070158 - 22 Jul 2024
Cited by 4 | Viewed by 2314
Abstract
Studies on creative accounting date back to the latter part of the 20th century. Creative accounting is still a big challenge in financial accounting. The problem of financial statement manipulation might be investigated, for instance, from an accounting, legal, ethical, or psychological perspective. This research aims to identify the main motives for the use of creative accounting and to find out whether corporate culture has an impact on the motives leading to the use of creative accounting. Data collection took place from 18 November 2023 to 18 December 2022. In the research, we used Pearson's χ2 test to determine the dependence of the studied variables in contingency tables. Subsequently, correspondence analysis was used. The type of corporate culture does not have an impact on the motives that lead to creative accounting. It was proven that the type of corporate culture has an impact on the performance of creative accounting actions based on the request of a senior employee. The uniqueness of the research lies in the investigation of creative accounting from a psychological and managerial point of view in the territory of the Slovak Republic.
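
The paper's first statistical step, Pearson's χ2 test of independence on a contingency table, looks like this in SciPy. The culture types, motives, and counts below are invented for illustration.

```python
# Chi-squared test of independence: does corporate-culture type depend on
# the reported motive for creative accounting? Counts are made up.
from scipy.stats import chi2_contingency

#            motive: tax    creditors  investors
table = [
    [18, 7, 5],    # clan culture
    [12, 14, 9],   # market culture
    [6, 11, 13],   # hierarchy culture
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
# p >= 0.05 would be consistent with the paper's finding that culture type
# and motive are independent.
```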

18 pages, 957 KB  
Article
Research on the Optimization of Pricing and the Replenishment Decision-Making Problem Based on LightGBM and Dynamic Programming
by Wenyue Tao, Chaoran Wu, Ting Wu and Fuyuan Chen
Axioms 2024, 13(4), 257; https://doi.org/10.3390/axioms13040257 - 13 Apr 2024
Cited by 1 | Viewed by 2149
Abstract
Vegetables have a short period of freshness, and therefore, the purchase of vegetables has to be carefully matched with sales, especially in the "small production and big market" setting prevalent in China. Therefore, it is worthwhile to develop a systematic and comprehensive mathematical model of replenishment plans and pricing strategies for each category of vegetables and individual products. In this paper, we analyze the following three questions. Question One: What is the distribution law and relationship between the sales volume of vegetable categories and single products? Question Two: What is the relationship between total sales volume and cost-plus pricing of vegetable categories? And is it possible to provide the daily total replenishment and pricing strategy of each vegetable category for the following week to maximize supermarket profit? Question Three: How can we incorporate the market demand for single vegetable products into a profit-maximizing program for supermarkets? Is it possible to further formulate the replenishment plan requirements for single products? To answer the first question, we created pivot tables to analyze occupancy. We found that mosaic leaves, peppers, and edible mushrooms accounted for a larger proportion of occupancy, while cauliflowers, aquatic rhizomes, and eggplants accounted for a smaller proportion. Among the single items, lettuce, cabbage, green pepper, screw pepper, enoki mushroom, and shiitake mushroom accounted for a large proportion of their respective categories. We used the Pearson correlation coefficient and the Mfuzz package based on the fuzzy c-means (FCM) algorithm to analyze the correlation between vegetable categories and single products. We found that there was a strong correlation between vegetable categories; moreover, the sales of vegetable items belonging to the same category exhibited the same patterns of change over time. To address the second question, we established a LightGBM sales forecasting model. Combined with previous sales data, we forecasted and planned an efficient daily replenishment volume for each vegetable category in the coming week. In addition, we developed a pricing strategy for vegetable categories to maximize supermarket profits. For the third question, we built a dynamic programming model combining an optimal replenishment volume with a product pricing strategy for single items, allowing the supermarket to maximize its expected profits.
(This article belongs to the Special Issue Multi-Criteria Decision Making (MCDM) with Preference Modeling)
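
The LightGBM forecasting step can be sketched as a regressor over simple calendar and price features. The features, synthetic data, and hyperparameters below are invented, and the downstream dynamic-programming pricing model is not reproduced.

```python
# Sketch: fit daily sales volume from day-of-week and unit price, then
# forecast demand for a planned replenishment day.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(42)
n = 400
X = np.column_stack([
    rng.integers(0, 7, n),           # day of week (0 = Monday)
    rng.uniform(2.0, 8.0, n),        # unit price (CNY/kg)
])
# Toy demand: price-sensitive, with a weekend bump plus noise.
y = 50 - 4.0 * X[:, 1] + 5.0 * (X[:, 0] >= 5) + rng.normal(0, 2, n)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y)
tomorrow = np.array([[5, 4.5]])      # Saturday at 4.5 CNY/kg
print(f"forecast demand: {model.predict(tomorrow)[0]:.1f} kg")
```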

24 pages, 26431 KB  
Review
When Taekwondo Meets Artificial Intelligence: The Development of Taekwondo
by Min-Chul Shin, Dae-Hoon Lee, Albert Chung and Yu-Won Kang
Appl. Sci. 2024, 14(7), 3093; https://doi.org/10.3390/app14073093 - 7 Apr 2024
Cited by 5 | Viewed by 9317
Abstract
This study explores the comprehensive understanding of taekwondo, the application of fourth industrial revolution technologies in various kinds of sports, the development of taekwondo through artificial intelligence (AI), and essential technology in the fourth industrial revolution, while suggesting advanced science directions through a literature review. Literature was sourced from six internet search electronic databases, consisting of three English databases and three Korean databases, from January 2016 to August 2023. The literature indicated cases of sports convergence with the application of fourth industrial revolution technologies, such as the game of go, golf, table tennis, soccer, American football, skiing, archery, and fencing. These sports use not only big data but also virtual reality and augmented reality. Taekwondo is a traditional martial art that originated in the Republic of Korea and gradually became a globally recognized sport. Because taekwondo competition analysis currently relies on researchers manually recording events, it takes a very long time, and the scale of the analysis varies depending on the researcher's tendencies. This study presents the development of an AI taekwondo performance improvement analysis and evaluation system and a metaverse-based virtual taekwondo pumsae/fighting coaching platform through an AI-based motion tracking analysis method.

10 pages, 210 KB  
Article
The Meeting: Ideas for an Architecture of Interreligious Civic Collaboration
by Steven G. Smith
Religions 2024, 15(3), 360; https://doi.org/10.3390/rel15030360 - 18 Mar 2024
Viewed by 2054
Abstract
Interreligious engagement (IE) has been experienced and theorized mainly as the pursuit of a shared respectful awareness of the beliefs, practices, and social experiences of multiple religious communities. In rare instances, it has been possible to create architecture specifically to foster IE, as in the "tri-faith" Abrahamic campus in Omaha and the Berlin House of One. The theme is: Here we are, accepting that we share the world. Another form of IE that deserves to attract more interest is multireligious collaboration in civic work (addressing homelessness, urban blight, illiteracy, etc.). Some adherents of the intrinsically cosmopolitan "world" religions are actively cosmopolitan to the extent of seeking this engagement. The theme is: Let us share the work of the world, including sharing our religiously inflected processing of what the practical issues facing us are. There is a new initiative of this sort in my city, Jackson, Mississippi, named (from M. L. King) the "Beloved Community". An architectural thought experiment may prove helpful in articulating the ideals for such an endeavor. What would be the physical desiderata for its headquarters? Let us imagine a new downtown building, The Meeting, dedicated to housing meetings where mixed religious groups learn about civic issues and coordinate efforts to address them. Full interreligious sharing of a space seems to require a neutral design lacking any definite religious inspiration. But there are nonsectarian ways to create an appreciably special, non-ordinary space, as in courtrooms and classrooms. Could a civic IE headquarters be special, expressive of practical optimism, and contain a sufficient religious allusion to qualify as a "next-to-sacred space" in which religious actors felt supported in the civic extension of their religious lives? I offer suggestions for discussion, including (1) a pavilion-style building suggestive of being set up for a special purpose—not soaringly grandiose but with a vertical feature such as a central roof lantern; (2) at least one major porch, with benches and tables; (3) an outside water fountain with public water supply (a historical allusion to the Islamic sabil); (4) inside, right-sized meeting rooms around the glass-walled periphery; (5) a big "living room" lounge in the center, usable for larger meetings, with access to a kitchen, and with a big project board for tracking work completed and work in hand next to a large map of the city; (6) a moderate descent of several steps into each meeting room so that there is a feeling of commitment in attending a meeting and a sense of challenge in going forth from one; (7) otherwise a main floor levelness and openness facilitating movement in and out, as in a train station; and (8) upstairs small offices for religious and other qualifying organizations. Answering the aesthetic and practical questions these suggestions raise takes us into imagining civic IE more concretely.
(This article belongs to the Special Issue Inter-Religious Encounters in Architecture and Other Public Art)
18 pages, 2905 KB  
Article
Combining Radon Deficit, NAPL Concentration, and Groundwater Table Dynamics to Assess Soil and Groundwater Contamination by NAPLs and Related Attenuation Processes
by Martina Mattia, Paola Tuccimei, Giancarlo Ciotoli, Michele Soligo, Claudio Carusi, Elisa Rainaldi and Mario Voltaggio
Appl. Sci. 2023, 13(23), 12813; https://doi.org/10.3390/app132312813 - 29 Nov 2023
Cited by 5 | Viewed by 1696
Abstract
Soil and groundwater contamination by NAPLs (Non-Aqueous Phase Liquids) is certainly a big issue for protecting the environment. In situ clean-up actions are routinely applied to mitigate the risk and are supplemented by monitoring surveys to assess the degree, extension, and evolution of the contamination. Radon gas is used here as a tracer of contamination because of its high solubility in non-polar solvents, which produces a reduced concentration of the gas in polluted soil and groundwater relative to radon levels in adjacent "clean" areas. This approach was employed in two sites where gasoline and diesel spillage occurred, causing soil and groundwater contamination. The two case studies were chosen because of their differences in hydrogeological features, age of the spillage, composition of residual NAPLs, and clean-up measures, allowing us to test the advantages and limits of the approach in a variety of settings. Radon data, NAPL concentrations in the groundwater (mainly total hydrocarbons, Methyl Tertiary-Butyl Ether, and Ethyl Tertiary-Butyl Ether), and the depth of the groundwater table were periodically collected in surveys spanning a period of two years. This dataset was statistically processed using principal component analysis to unravel which factors and attenuation processes are at work in the sites and how the radon deficit approach responds to this complex series of phenomena concurrently occurring there.
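
The final processing step, principal component analysis over the monitored variables, can be sketched with scikit-learn. The synthetic numbers below merely mimic the radon-deficit relationship described in the abstract.

```python
# PCA sketch: standardize the monitoring variables (water table depth,
# NAPL concentration, radon) and inspect which ones co-vary on PC1.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
depth = rng.uniform(2, 6, 30)                      # water table depth (m)
napl = 10 + 3 * depth + rng.normal(0, 1, 30)       # total hydrocarbons (toy)
radon = 40 - 2.5 * napl + rng.normal(0, 2, 30)     # radon deficit effect (toy)
X = StandardScaler().fit_transform(np.column_stack([depth, napl, radon]))

pca = PCA(n_components=2).fit(X)
print("explained variance:", pca.explained_variance_ratio_.round(2))
print("PC1 loadings (depth, NAPL, radon):", pca.components_[0].round(2))
```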
