Journal Description
Software is an international, peer-reviewed, open access journal on all aspects of software engineering, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.8 days after submission; acceptance to publication takes 6.6 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Software is a companion journal of Electronics.
Latest Articles
RbfCon: Construct Radial Basis Function Neural Networks with Grammatical Evolution
Software 2024, 3(4), 549-568; https://doi.org/10.3390/software3040027 - 11 Dec 2024
Abstract
Radial basis function networks are a machine learning tool that can be applied to a wide range of classification and regression problems across various research topics of the modern world. However, in many cases, the training method used to fit the parameters of these models can produce poor results, either because of unstable numerical operations or because it fails to locate the lowest value of the error function. The current work proposes a novel method that constructs the architecture of this model and estimates the values of its parameters with the incorporation of Grammatical Evolution. The proposed method was coded in ANSI C++, and the resulting software was tested for its effectiveness on a wide series of datasets. The experimental results confirmed the ability of the new method to solve difficult problems, and in the vast majority of cases, the classification or function-approximation error was significantly lower than when the original training method was applied.
Full article
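For readers unfamiliar with the model class the abstract refers to, a Gaussian RBF network's output is a weighted sum of radial basis activations. The sketch below is an illustrative Python rendering of that model only; the paper's actual software is written in ANSI C++, its architecture construction via Grammatical Evolution is not shown, and the function and parameter names here are hypothetical.

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    # Gaussian RBF network: y(x) = sum_i weights[i] * exp(-||x - centers[i]||^2 / widths[i]^2)
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / widths ** 2)   # one basis activation per centre
    return phi @ weights              # weighted sum of activations
```

Training such a model amounts to choosing the centres, widths, and weights; the paper's contribution is evolving that structure rather than fitting it with the usual two-stage procedure.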
Open Access Article
Implementing Mathematics of Arrays in Modern Fortran: Efficiency and Efficacy
by Arjen Markus and Lenore Mullin
Software 2024, 3(4), 534-548; https://doi.org/10.3390/software3040026 - 30 Nov 2024
Abstract
Mathematics of Arrays (MoA) concerns the formal description of algorithms working on arrays of data and their efficient and effective implementation in software and hardware. Since (multidimensional) arrays are among the most important data structures in Fortran, as witnessed by their native support in the language and the numerous operations and functions that take arrays as inputs and outputs, it is natural to examine how Fortran can be used as an implementation language for MoA. This article presents the first results, both in terms of code and of performance, regarding this union. It may serve as a basis for further research, both with respect to the formal theory of MoA and to improving the practical implementation of array-based algorithms.
Full article
Open Access Article
Analysing Quality Metrics and Automated Scoring of Code Reviews
by Owen Sortwell, David Cutting and Christine McConnellogue
Software 2024, 3(4), 514-533; https://doi.org/10.3390/software3040025 - 29 Nov 2024
Abstract
Code reviews are an important part of the software development process, and a wide variety of approaches is used to perform them. While it is generally agreed that code reviews are beneficial and result in higher-quality software, there has been little work investigating best practices and approaches or exploring which factors affect code review quality. Our approach first analyses current best practices and procedures for undertaking code reviews, along with an examination of metrics often used to assess a review’s quality and of current offerings for automated code review assessment. A maximum of one thousand code review comments per project were mined from GitHub pull requests across seven open-source projects that have previously been analysed in similar studies. Several identified metrics are tested across these projects using Python’s Natural Language Toolkit, including stop word ratio, overall sentiment, and detection of code snippets through the GitHub markdown language. Comparisons are drawn with regard to each project’s culture and the language used in the code review process, with pros and cons for each. The results show that the stop word ratio remained consistent across all projects, with only one project exceeding an average of 30%, and that the percentage of positive comments was also broadly similar across the projects. The suitability of these metrics is also discussed with regard to the creation of a scoring framework and the development of an automated code review analysis tool. We conclude that the software written is an effective means of comparing practices and cultures across projects and can provide benefits by promoting a positive review culture within an organisation.
However, rudimentary sentiment analysis and detection of GitHub code snippets may not be sufficient to assess a code review’s overall usefulness, as many terms that are important in a programmer’s lexicon, such as ‘error’ and ‘fail’, cause a review to be classified as negative. Code snippets included outside of the markdown language are also excluded from the analysis. Recommendations for future work are suggested, including the development of a more robust sentiment analysis system that can detect emotions such as frustration, and the creation of a programming dictionary to exclude programming terms from sentiment analysis.
Full article
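Two of the metrics named in the abstract, stop word ratio and markdown-based code-snippet detection, can be sketched in a few lines of Python. This is an illustrative sketch only: the study uses NLTK's stop-word list, whereas the toy list and the regex below are assumptions of this example.

```python
import re

# Toy stop-word list; the study uses NLTK's English stop words instead.
STOP_WORDS = {"the", "a", "an", "is", "it", "this", "that", "of", "to", "and"}

def stop_word_ratio(comment: str) -> float:
    # Fraction of tokens in a review comment that are stop words.
    words = re.findall(r"[a-z']+", comment.lower())
    return sum(w in STOP_WORDS for w in words) / len(words) if words else 0.0

def has_code_snippet(comment: str) -> bool:
    # GitHub markdown code: fenced ``` blocks or inline `code` spans.
    return bool(re.search(r"```|`[^`]+`", comment))
```

As the abstract notes, snippets pasted without markdown backticks would escape a detector of this kind.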
Open Access Article
Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification
by Surajudeen Shina Ajibosin and Deniz Cetinkaya
Software 2024, 3(4), 498-513; https://doi.org/10.3390/software3040024 - 28 Nov 2024
Abstract
In this work, we studied the use of Quantum Machine Learning (QML) algorithms for binary classification and compared their performance with classical Machine Learning (ML) methods. QML merges principles of Quantum Computing (QC) and ML, offering improved efficiency and potential quantum advantage in data-driven tasks and when solving complex problems. In binary classification, where the goal is to assign data to one of two categories, QML uses quantum algorithms to process large datasets efficiently. Quantum algorithms like Quantum Support Vector Machines (QSVM) and Quantum Neural Networks (QNN) exploit quantum parallelism and entanglement to enhance performance over classical methods. This study focuses on two common QML algorithms, Quantum Support Vector Classifier (QSVC) and QNN. We used the Qiskit software and conducted the experiments with three different datasets. Data preprocessing included dimensionality reduction using Principal Component Analysis (PCA) and standardization using scalers. The results showed that quantum algorithms demonstrated competitive performance against their classical counterparts in terms of accuracy, while QSVC performed better than QNN. These findings suggest that QML holds potential for improving computational efficiency in binary classification tasks. This opens the way for more efficient and scalable solutions in complex classification challenges and shows the complementary role of quantum computing.
Full article
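The preprocessing step the abstract describes, standardization followed by PCA-based dimensionality reduction, is classical and can be sketched directly. The NumPy sketch below is a generic stand-in (the study likely uses library scalers and Qiskit's data encoders, which are not reproduced here), and it assumes no feature is constant.

```python
import numpy as np

def preprocess(X, n_components=2):
    # Standardize each feature, then reduce dimensionality with PCA (via SVD),
    # so the data fit the small number of qubits used for encoding.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # assumes no constant feature
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:n_components].T
```

Reducing to two components means each sample can be encoded with a correspondingly small quantum circuit before the QSVC or QNN is applied.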
Open Access Article
A Brief Overview of the Pawns Programming Language
by Lee Naish
Software 2024, 3(4), 473-497; https://doi.org/10.3390/software3040023 - 19 Nov 2024
Abstract
This paper describes the Pawns programming language, currently under development, which uses several novel features to combine the functional and imperative programming paradigms. It supports pure functional programming (including algebraic data types, higher-order programming and parametric polymorphism), where the representation of values need not be considered. It also supports lower-level C-like imperative programming with pointers and the destructive update of all fields of the structs used to represent the algebraic data types. All destructive update of variables is made obvious in Pawns code, via annotations on statements and in type signatures. Type signatures must also declare sharing between any arguments and result that may be updated. For example, if two arguments of a function are trees that share a subtree and the subtree is updated within the function, both variables must be annotated at that point in the code, and the sharing and update of both arguments must be declared in the type signature of the function. The compiler performs extensive sharing analysis to check that the declarations and annotations are correct. This analysis allows destructive update to be encapsulated: a function with no update annotations in its type signature is guaranteed to behave as a pure function, even though the value returned may have been constructed using destructive update within the function. Additionally, the sharing analysis helps support a constrained form of global variables that also allows destructive update to be encapsulated and safe update of variables with polymorphic types to be performed.
Full article
Open Access Article
Software Development and Maintenance Effort Estimation Using Function Points and Simpler Functional Measures
by Luigi Lavazza, Angela Locoro and Roberto Meli
Software 2024, 3(4), 442-472; https://doi.org/10.3390/software3040022 - 29 Oct 2024
Abstract
Functional size measures are widely used for estimating software development effort. After the introduction of Function Points, a few “simplified” measures have been proposed, aiming to make measurement simpler and applicable when fully detailed software specifications are not yet available. However, some practitioners believe that, when considering “complex” projects, traditional Function Point measures support more accurate estimates than simpler functional size measures, which do not account for greater-than-average complexity. In this paper, we aim to produce evidence that confirms or disproves such a belief via an empirical study that separately analyzes projects that involved developments from scratch and extensions and modifications of existing software. Our analysis shows that there is no evidence that traditional Function Points are generally better at estimating more complex projects than simpler measures, although some differences appear in specific conditions. Another result of this study is that functional size metrics—both traditional and simplified—do not seem to effectively account for software complexity, as estimation accuracy decreases with increasing complexity, regardless of the functional size metric used. To improve effort estimation, researchers should look for a way of measuring software complexity that can be used in effort models together with (traditional or simplified) functional size measures.
Full article
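Effort models built on functional size measures commonly take the parametric form effort = a · size^b, fitted by least squares in log-log space. The sketch below shows that generic form only; it is an assumption of this example, not the specific estimation models evaluated in the paper.

```python
import numpy as np

def fit_effort_model(size, effort):
    # Fit effort = a * size^b by least squares in log-log space, a common
    # parametric form for functional-size-based effort estimation.
    b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
    return np.exp(log_a), b
```

Whichever size measure feeds `size` here (traditional Function Points or a simplified measure), the paper's finding is that estimation accuracy degrades with complexity either way.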
Open Access Article
Opening Software Research Data 5Ws+1H
by Anastasia Terzi and Stamatia Bibi
Software 2024, 3(4), 411-441; https://doi.org/10.3390/software3040021 - 26 Sep 2024
Abstract
Open Science describes the movement of making any research artifact available to the public, fostering sharing and collaboration. While sharing the source code is a popular Open Science practice in software research and development, there is still a lot of work to be done to achieve openness across the whole research and development cycle, from the conception to the preservation phase. In this direction, the software engineering community faces significant challenges in adopting open science practices due to the complexity of the data, the heterogeneity of the development environments, and the diversity of the application domains. In this paper, through the discussion of the 5Ws+1H (Why, Who, What, When, Where, and How) questions, referred to as Kipling’s framework, we aim to provide a structured guideline to motivate and assist the software engineering community on the journey to data openness. We also demonstrate the practical application of these guidelines through a use case on opening research data.
Full article
Open Access Article
A Software Tool for ICESat and ICESat-2 Laser Altimetry Data Processing, Analysis, and Visualization: Description, Features, and Usage
by Bruno Silva and Luiz Guerreiro Lopes
Software 2024, 3(3), 380-410; https://doi.org/10.3390/software3030020 - 18 Sep 2024
Abstract
This paper presents a web-based software tool designed to process, analyze, and visualize satellite laser altimetry data, specifically from the Ice, Cloud, and land Elevation Satellite (ICESat) mission, which collected data from 2003 to 2009, and ICESat-2, which was launched in 2018 and is currently operational. These data are crucial for studying and understanding changes in Earth’s surface and cryosphere, offering unprecedented accuracy in quantifying such changes. The software tool ICEComb provides the capability to access the available data from both missions, interactively visualize it on a geographic map, locally store the data records, and process, analyze, and explore the data in a detailed, meaningful, and efficient manner. This creates a user-friendly online platform for the analysis, exploration, and interpretation of satellite laser altimetry data. ICEComb was developed using well-known and well-documented technologies, simplifying the addition of new functionalities and extending its applicability to support data from different satellite laser altimetry missions. The tool’s use is illustrated throughout the text by its application to ICESat and ICESat-2 laser altimetry measurements over the Mirim Lagoon region in southern Brazil and Uruguay, which is part of the world’s largest complex of shallow-water coastal lagoons.
Full article
Open Access Article
Signsability: Enhancing Communication through a Sign Language App
by Din Ezra, Shai Mastitz and Irina Rabaev
Software 2024, 3(3), 368-379; https://doi.org/10.3390/software3030019 - 12 Sep 2024
Abstract
The integration of sign language recognition systems into digital platforms has the potential to bridge communication gaps between the deaf community and the broader population. This paper introduces an advanced Israeli Sign Language (ISL) recognition system designed to interpret dynamic motion gestures, addressing a critical need for more sophisticated and fluid communication tools. Unlike conventional systems that focus solely on static signs, our approach incorporates both deep learning and Computer Vision techniques to analyze and translate dynamic gestures captured in real-time video. We provide a comprehensive account of our preprocessing pipeline, detailing every stage from video collection to the extraction of landmarks using MediaPipe, including the mathematical equations used for preprocessing these landmarks and the final recognition process. The dataset utilized for training our model is unique in its comprehensiveness and is publicly accessible, enhancing the reproducibility and expansion of future research. The deployment of our model on a publicly accessible website allows users to engage with ISL interactively, facilitating both learning and practice. We discuss the development process, the challenges overcome, and the anticipated societal impact of our system in promoting greater inclusivity and understanding.
Full article
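Pipelines of this kind typically normalize the extracted landmarks before recognition so that gestures are compared independently of where the hand appears in the frame and how large it is. The sketch below shows one generic translation-and-scale normalization; it is an assumption of this example, not the specific equations from the paper, and the MediaPipe extraction step is not reproduced.

```python
import numpy as np

def normalize_landmarks(landmarks):
    # Translate so the first landmark sits at the origin, then scale to unit
    # size, making the representation position- and scale-invariant.
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]
    scale = np.linalg.norm(pts, axis=1).max()
    return pts / scale if scale > 0 else pts
```

For dynamic gestures, a sequence of such normalized frames would then be fed to the recognition model.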
Open Access Article
Sligpt: A Large Language Model-Based Approach for Data Dependency Analysis on Solidity Smart Contracts
by Xiaolei Ren and Qiping Wei
Software 2024, 3(3), 345-367; https://doi.org/10.3390/software3030018 - 5 Aug 2024
Abstract
The advent of blockchain technology has revolutionized various sectors by providing transparency, immutability, and automation. Central to this revolution are smart contracts, which facilitate trustless and automated transactions across diverse domains. However, the proliferation of smart contracts has exposed significant security vulnerabilities, necessitating advanced analysis techniques. Data dependency analysis is a critical program analysis method used to enhance the testing and security of smart contracts. This paper introduces Sligpt, an innovative methodology that integrates a large language model (LLM), specifically GPT-4o, with the static analysis tool Slither, to perform data dependency analyses on Solidity smart contracts. Our approach leverages both the advanced code comprehension capabilities of GPT-4o and the advantages of a traditional analysis tool. We empirically evaluate Sligpt using a curated dataset of Ethereum smart contracts. Sligpt achieves significant improvements in precision, recall, and overall analysis depth compared with Slither and GPT-4o, providing a robust solution for data dependency analysis. This paper also discusses the challenges encountered, such as the computational resource requirements and the inherent variability in LLM outputs, while proposing future research directions to further enhance the methodology. Sligpt represents a significant advancement in the field of static analysis on smart contracts, offering a practical framework for integrating LLMs with static analysis tools.
Full article
Open Access Article
Software Update Methodologies for Feature-Based Product Lines: A Combined Design Approach
by Abir Bazzi, Adnan Shaout and Di Ma
Software 2024, 3(3), 328-344; https://doi.org/10.3390/software3030017 - 5 Aug 2024
Abstract
The automotive industry is experiencing a significant shift, transitioning from traditional hardware-centric systems to more advanced software-defined architectures. This change is enabling enhanced autonomy, connectivity, and safety, as well as improved in-vehicle experiences. Service-oriented architecture is crucial for achieving software-defined vehicles and creating new business opportunities for original equipment manufacturers. A variability-rich software update approach based on Merkle trees is proposed for new vehicle architecture requirements. Given the complexity of software updates in vehicles, particularly when dealing with multiple distributed electronic control units, this software-centric approach can be optimized to handle various architectures and configurations, ensuring consistency across all platforms. In this paper, our software update approach is expanded to cover the solution space of feature-based product line engineering, and we show how to combine our approach with product line engineering in creative and unique ways to form a software-defined vehicle modular architecture. We then offer insights into the design of the Merkle trees utilized in our approach, emphasizing the relationships among the software modules, with a focus on their impact on software update performance. This approach streamlines the software update process and ensures that both the safety and the security of the vehicle are continuously maintained.
Full article
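The Merkle tree underpinning such an update scheme hashes each software module and then hashes pairs of nodes upward until a single root summarizes the whole configuration, so a changed module changes the root. The sketch below is a generic Merkle root computation, not the paper's specific tree design relating modules to features.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each leaf, then pairwise-hash one level at a time until one root remains.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Comparing roots lets an updater detect, with one hash comparison, whether any module in a vehicle's software configuration differs from the intended one.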
Open Access Article
Towards a Block-Level Conformer-Based Python Vulnerability Detection
by Amirreza Bagheri and Péter Hegedűs
Software 2024, 3(3), 310-327; https://doi.org/10.3390/software3030016 - 31 Jul 2024
Abstract
Software vulnerabilities pose a significant threat to computer systems because they can jeopardize the integrity of both software and hardware. The existing tools for detecting vulnerabilities are inadequate. Machine learning algorithms may struggle to interpret enormous datasets because of their limited ability to understand intricate linkages within high-dimensional data. Traditional procedures, on the other hand, take a long time and require a lot of manual labor. Furthermore, earlier deep-learning approaches failed to acquire adequate feature data. Self-attention mechanisms can process information across large distances, but they do not collect structural data. This work addresses the critical problem of inadequate vulnerability detection in software systems. We propose a novel method that combines self-attention with convolutional networks to enhance the detection of software vulnerabilities by capturing both localized, position-specific features and global, content-driven interactions. Our contribution lies in the integration of these methodologies to improve the precision and F1 score of vulnerability detection systems, achieving unprecedented results on complex Python datasets. In addition, we improve the self-attention approaches by changing the denominator to address the issue of excessive attention heads creating irrelevant disturbances. We assessed the effectiveness of this strategy using six complex Python vulnerability datasets obtained from GitHub. Our rigorous study and comparison of data with previous studies resulted in the most precise outcomes and F1 score (99%) ever attained by machine learning systems.
Full article
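For context on the "changing the denominator" remark, standard scaled dot-product attention divides the query-key scores by the square root of the key dimension before the softmax. The NumPy sketch below shows only that textbook baseline; the paper's modified denominator and its convolutional components are not reproduced here.

```python
import numpy as np

def attention(Q, K, V):
    # Standard scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    # The paper modifies this denominator; only the baseline is shown.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

A Conformer-style block combines this content-driven mechanism with convolutions that capture localized, position-specific structure.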
Open Access Article
Mapping Petri Nets onto a Calculus of Context-Aware Ambients
by François Siewe, Vasileios Germanos and Wen Zeng
Software 2024, 3(3), 284-309; https://doi.org/10.3390/software3030015 - 18 Jul 2024
Abstract
Petri nets are a graphical notation for describing a class of discrete event dynamic systems whose behaviours are characterised by concurrency, synchronisation, mutual exclusion and conflict. They have been used over the years for the modelling of various distributed systems applications. With the advent of pervasive systems and the Internet of Things, the Calculus of Context-aware Ambients (CCA) has emerged as a suitable formal notation for analysing the behaviours of these systems. In this paper, we are interested in comparing the expressive power of Petri nets to that of CCA. That is, can the class of systems represented by Petri nets be modelled in CCA? To answer this question, an algorithm is proposed that maps any Petri net onto a CCA process. We prove that a Petri net and its corresponding CCA process are behaviourally equivalent. It follows that CCA is at least as expressive as Petri nets, i.e., any system that can be specified in Petri nets can also be specified in CCA. Moreover, tools developed for CCA can also be used to analyse the behaviours of Petri nets.
Full article
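The Petri net semantics being mapped is the usual token game: a transition is enabled when its input places hold enough tokens, and firing it moves tokens from input to output places. A minimal sketch in Python (representing markings and arc weights as dictionaries, an encoding chosen for this example):

```python
def enabled(marking, pre):
    # A transition is enabled when every input place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Firing consumes tokens from the input places and produces tokens in the
    # output places, yielding the successor marking.
    assert enabled(marking, pre), "transition is not enabled"
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

The paper's mapping must preserve exactly this firing behaviour in the corresponding CCA process for the behavioural equivalence result to hold.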
Open Access Article
Using Behavior-Driven Development (BDD) for Non-Functional Requirements
by Shexmo Santos, Tacyanne Pimentel, Fabio Gomes Rocha and Michel S. Soares
Software 2024, 3(3), 271-283; https://doi.org/10.3390/software3030014 - 18 Jul 2024
Abstract
In software engineering, communication among interested parties must be clear so that requirements can be elicited, through suitable frameworks, that capture the behaviors expected of the software. Problem: A lack of clarity in the requirement-elicitation stage can impact subsequent stages of software development. Solution: We conducted a case study focusing on the performance efficiency characteristic expressed in the ISO/IEC/IEEE 25010 standard, using Behavior-Driven Development (BDD). Method: The case study was performed with professionals who use BDD to elicit non-functional requirements at a company that develops software. Summary of Results: The result obtained was the validation of the elicitation of non-functional requirements related to the performance efficiency characteristic of the ISO/IEC/IEEE 25010 standard using the BDD framework, through a real case study in a software development company. Contributions and impact: The article’s main contribution is to demonstrate the effectiveness of using BDD to elicit non-functional requirements concerning the performance efficiency characteristic of the ISO/IEC/IEEE 25010 standard.
Full article
Open Access Article
E-SERS: An Enhanced Approach to Trust-Based Ranking of Apps
by Nahida Chowdhury, Ayush Maharjan and Rajeev R. Raje
Software 2024, 3(3), 250-270; https://doi.org/10.3390/software3030013 - 13 Jul 2024
Abstract
The number of mobile applications (“Apps”) has grown significantly in recent years. App Stores rank/recommend Apps based on factors such as average star ratings and the number of installs. Such rankings do not focus on the internal artifacts of Apps (e.g., security vulnerabilities). If internal artifacts are ignored, users may fail to estimate the potential risks associated with installing Apps. In this research, we present a framework called E-SERS (Enhanced Security-related and Evidence-based Ranking Scheme) for comparing Android Apps that offer similar functionalities. E-SERS uses internal and external artifacts of Apps in the ranking process. E-SERS is a significant enhancement of our past evidence-based ranking framework called SERS. We have evaluated E-SERS on publicly accessible Apps from the Google Play Store and compared our rankings with prevalent ranking techniques. Our experiments demonstrate that E-SERS, leveraging its holistic approach, excels in identifying malicious Apps and consistently outperforms existing alternatives in ranking accuracy. By emphasizing comprehensive assessment, E-SERS empowers users, particularly those less experienced with technology, to make informed decisions and avoid potentially harmful Apps. This contribution addresses a critical gap in current App-ranking methodologies, enhancing the safety and security of today’s technologically dependent society.
Full article
Open Access Article
CORE-ReID: Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-Identification
by Trinh Quoc Nguyen, Oky Dicky Ardiansyah Prima and Katsuyoshi Hotta
Software 2024, 3(2), 227-249; https://doi.org/10.3390/software3020012 - 3 Jun 2024
Abstract
This study introduces a novel framework, “Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-identification (CORE-ReID)”, to address Unsupervised Domain Adaptation (UDA) for Person Re-identification (ReID). The framework utilizes CycleGAN to generate diverse data that harmonize differences in image characteristics from different camera sources in the pre-training stage. In the fine-tuning stage, based on a pair of teacher–student networks, the framework integrates multi-view features for multi-level clustering to derive diverse pseudo-labels. A learnable Ensemble Fusion component that focuses on fine-grained local information within global features is introduced to enhance learning comprehensiveness and avoid the ambiguity associated with multiple pseudo-labels. Experimental results on three common UDA benchmarks in Person ReID demonstrated significant performance gains over state-of-the-art approaches. Additional enhancements, such as the Efficient Channel Attention Block and Bidirectional Mean Feature Normalization, mitigate deviation effects, and the adaptive fusion of global and local features using the ResNet-based model further strengthens the framework. The proposed framework ensures clarity in fusion features, avoids ambiguity, and achieves high accuracy in terms of Mean Average Precision, Top-1, Top-5, and Top-10, positioning it as an advanced and effective solution for UDA in Person ReID.
Full article
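The abstract describes the Ensemble Fusion component only at a high level. Purely as a hypothetical sketch of the general idea of adaptively blending a global feature vector with a fine-grained local one (the scalar gate `w` and the blend rule below are assumptions, not the paper's actual design):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(global_feat, local_feat, w):
    """Blend a global feature vector with a local feature vector
    via a learnable scalar gate w (hypothetical illustration)."""
    g = sigmoid(w)  # gate in (0, 1)
    return [g * gf + (1.0 - g) * lf for gf, lf in zip(global_feat, local_feat)]

# With w = 0 the gate is 0.5, giving an equal blend of both views.
fused = fuse([1.0, 1.0], [0.0, 0.0], w=0.0)  # -> [0.5, 0.5]
```

In an actual model, `w` would be a trained parameter (possibly per channel), so the network learns how much fine-grained local detail to inject into each global feature.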
Open Access Expression of Concern
Expression of Concern: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427–446
by
Software Editorial Office
Software 2024, 3(2), 226; https://doi.org/10.3390/software3020011 - 6 May 2024
Abstract
With this notice, the Software Editorial Office states its awareness of the concerns regarding the appropriateness of the authorship and the origins of the study of the published manuscript [...]
Full article
Open Access Article
A MongoDB Document Reconstruction Support System Using Natural Language Processing
by
Kohei Hamaji and Yukikazu Nakamoto
Software 2024, 3(2), 206-225; https://doi.org/10.3390/software3020010 - 2 May 2024
Abstract
Document-oriented databases, a type of Not Only SQL (NoSQL) database, are gaining popularity owing to their flexibility in data handling and their performance on large-scale data. MongoDB, a typical document-oriented database, stores data in the JSON format, in which a parent field contains child fields and related fields share the same parent. One feature of document-oriented databases is that data are dynamically stored in arbitrary locations without a schema being explicitly defined in advance. This flexibility can violate the above property and causes difficulties for application program readability and database maintenance. To address these issues, we propose a reconstruction support method for document structures in MongoDB. The method uses natural language processing to measure the strength of the Has-A relationship between parent and child fields, as well as the similarity of field names in MongoDB documents, in order to reconstruct the data structure in MongoDB. As a result, the method transforms the parent and child fields into more coherent data structures. We evaluated our method using real-world data and demonstrated its effectiveness.
Full article
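The abstract does not specify how field-name similarity is computed. As a simple illustration of grouping related fields by name, one could tokenize camelCase/snake_case field names and compare token overlap (the tokenizer and the Jaccard measure below are assumptions, not the authors' method):

```python
import re

def tokens(field_name):
    """Split a field name like 'userAddressCity' or 'user_address_city'
    into a set of lowercase word tokens."""
    spaced = re.sub(r'([a-z])([A-Z])', r'\1 \2', field_name)
    return set(re.split(r'[\s_]+', spaced.lower()))

def name_similarity(a, b):
    """Jaccard similarity between the token sets of two field names."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

# Fields whose names share all tokens are strong merge candidates.
print(name_similarity("userName", "user_name"))  # -> 1.0
print(name_similarity("userName", "orderId"))    # -> 0.0
```

A reconstruction tool would combine such a score with structural evidence (e.g., how often a candidate child actually appears under a parent) before proposing a new document layout.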
Open Access Article
Defining and Researching “Dynamic Systems of Systems”
by
Rasmus Adler, Frank Elberzhager, Rodrigo Falcão and Julien Siebert
Software 2024, 3(2), 183-205; https://doi.org/10.3390/software3020009 - 1 May 2024
Cited by 1
Abstract
Digital transformation is advancing across industries, enabling products, processes, and business models that change the way we communicate, interact, and live. It radically influences the evolution of existing systems of systems (SoSs), such as mobility systems, production systems, energy systems, or cities, that have grown over a long time. In this article, we discuss what this means for the future of software engineering based on the results of a research project called DynaSoS. We present the data collection methods we applied, including interviews, a literature review, and workshops. As one contribution, we propose a classification scheme for deriving and structuring research challenges and directions. The scheme comprises two dimensions: scope and characteristics. The scope motivates and structures the trend toward an increasingly connected world. The characteristics enhance and adapt established SoS characteristics in order to include novel aspects and to better align them with the structuring of research into different research areas or communities. As a second contribution, we present research challenges using the classification scheme. We have observed that the scheme puts research challenges into context, which is needed for interpreting them. Accordingly, we conclude that our proposals contribute to a common understanding and vision for engineering dynamic SoSs.
Full article
(This article belongs to the Topic Software Engineering and Applications)
Open Access Article
NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors
by
Nunziato Sorrentino, Massimiliano Razzano, Francesco Di Renzo, Francesco Fidecaro and Gary Hemming
Software 2024, 3(2), 169-182; https://doi.org/10.3390/software3020008 - 18 Apr 2024
Abstract
NICE (Noise Interactive Catalogue Explorer) is a web service developed for rapid, qualitative glitch analysis in gravitational wave data. Glitches are transient noise events that can smother the gravitational wave signal in data recorded by gravitational wave interferometer detectors. NICE provides interactive graphical tools to support detector noise characterization activities, in particular the analysis of glitches from past and current observing runs, ranging from glitch population visualization to individual glitch characterization. The NICE back-end API consists of a multi-database structure that brings order to glitch metadata generated by external detector characterization tools, so that such information can be easily requested by gravitational wave scientists. Another novelty introduced by NICE is the interactive front-end infrastructure focused on investigating the instrumental and environmental origins of glitches, which uses labels determined by their time–frequency morphology. NICE is intended for integration with the Advanced Virgo, Advanced LIGO, and KAGRA characterization pipelines, and it will interface with systematic classification activities related to the transient noise sources present in the Virgo detector.
Full article
(This article belongs to the Topic Software Engineering and Applications)
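The abstract does not describe the schema of the glitch metadata that the NICE back end serves. As a hypothetical sketch of the kind of catalogue query such a service enables (record fields like `label` and `snr` are illustrative only, not the real API schema):

```python
# Toy glitch metadata records, in the spirit of a glitch catalogue
# (field names and values are invented for illustration).
glitches = [
    {"id": 1, "label": "Blip",      "peak_freq_hz": 150.0, "snr": 12.3},
    {"id": 2, "label": "Scattered", "peak_freq_hz": 35.0,  "snr": 8.1},
    {"id": 3, "label": "Blip",      "peak_freq_hz": 220.0, "snr": 25.7},
]

def select_glitches(records, label, min_snr=0.0):
    """Return glitches with a given time-frequency morphology label
    and at least the requested signal-to-noise ratio."""
    return [g for g in records if g["label"] == label and g["snr"] >= min_snr]

loud_blips = select_glitches(glitches, "Blip", min_snr=10.0)  # ids 1 and 3
```

Filtering a glitch population by morphology label and loudness like this is the first step from population-level visualization down to individual glitch characterization.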
Topics
Topic in
Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2025
Topic in
Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 31 August 2025
Topic in
Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2025
Special Issues
Special Issue in
Software
Empower Connectivity: Software-Driven Solutions for Interoperable Blockchains
Guest Editors: Diego Pennino, Jianbo Gao
Deadline: 22 December 2024
Special Issue in
Software
Software Reliability, Security and Quality Assurance
Guest Editors: Tadashi Dohi, Junjun Zheng, Xiao-Yi Zhang
Deadline: 31 December 2024
Special Issue in
Software
Software Product Line Testing
Guest Editors: Edson OliveiraJr, Wesley Assunção, Elder Rodrigues
Deadline: 25 January 2025
Special Issue in
Software
Advances in Computational Software for Chemistry and Materials Science
Guest Editors: Xin Chen, Yongtao Ma
Deadline: 20 February 2025