<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/"
 xmlns:dc="http://purl.org/dc/elements/1.1/"
 xmlns:dcterms="http://purl.org/dc/terms/"
 xmlns:cc="http://web.resource.org/cc/"
 xmlns:prism="http://prismstandard.org/namespaces/basic/2.0/"
 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 xmlns:admin="http://webns.net/mvcb/"
 xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel rdf:about="https://www.mdpi.com/rss/journal/software">
		<title>Software</title>
		<description>Latest open access articles published in Software at https://www.mdpi.com/journal/software</description>
		<link>https://www.mdpi.com/journal/software</link>
		<admin:generatorAgent rdf:resource="https://www.mdpi.com/journal/software"/>
		<admin:errorReportsTo rdf:resource="mailto:support@mdpi.com"/>
		<dc:publisher>MDPI</dc:publisher>
		<dc:language>en</dc:language>
		<dc:rights>Creative Commons Attribution (CC-BY)</dc:rights>
		<prism:copyright>MDPI</prism:copyright>
		<prism:rightsAgent>support@mdpi.com</prism:rightsAgent>
		<image rdf:resource="https://pub.mdpi-res.com/img/design/mdpi-pub-logo.png?13cf3b5bd783e021?1772794056"/>
				<items>
			<rdf:Seq>
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/12" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/11" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/10" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/9" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/8" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/7" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/6" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/5" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/4" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/3" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/2" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/5/1/1" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/33" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/32" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/31" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/30" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/29" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/28" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/27" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/26" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/25" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/24" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/23" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/4/22" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/21" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/20" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/19" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/18" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/17" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/16" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/15" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/3/14" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/13" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/12" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/11" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/10" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/9" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/8" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/2/7" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/1/6" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/1/5" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/1/4" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/1/3" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/1/2" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/4/1/1" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/29" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/28" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/27" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/26" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/25" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/24" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/23" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/22" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/4/21" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/20" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/19" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/18" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/17" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/16" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/15" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/14" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/3/13" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/2/12" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/2/11" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/2/10" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/2/9" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/2/8" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/2/7" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/1/6" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/1/5" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/1/4" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/1/3" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/1/2" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/3/1/1" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/4/23" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/4/22" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/4/21" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/3/20" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/3/19" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/3/18" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/3/17" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/3/16" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/3/15" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/14" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/13" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/12" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/11" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/10" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/9" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/8" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/2/7" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/1/6" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/1/5" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/1/4" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/1/3" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/1/2" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/2/1/1" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/1/4/20" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/1/4/19" />
            				<rdf:li rdf:resource="https://www.mdpi.com/2674-113X/1/4/18" />
                    	</rdf:Seq>
		</items>
				<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/" />
	</channel>

        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/12">

	<title>Software, Vol. 5, Pages 12: Assessing the Generalizability of Mobile Software Engineering Research Through Combined Systematic Methods</title>
	<link>https://www.mdpi.com/2674-113X/5/1/12</link>
	<description>Mobile Software Engineering has emerged as a distinct subfield, raising questions about the transferability of its research findings to general software engineering. This paper addresses the challenge of evaluating the generalizability of mobile-specific research, using Green Computing as a representative case. We propose combining systematic methods for identifying potentially overlooked mobile-specific papers with a focused literature review to assess their broader relevance. Applying this approach, we find that several mobile-specific studies offer insights applicable beyond their original context, particularly in areas such as energy efficiency guidelines, measurement, and trade-offs. The results demonstrate that systematic identification and evaluation can reveal valuable contributions for the wider software engineering community. The proposed method provides a structured framework for future research to assess the generalizability of findings from specialized domains, fostering greater integration and knowledge transfer across software engineering disciplines.</description>
	<pubDate>2026-03-03</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 12: Assessing the Generalizability of Mobile Software Engineering Research Through Combined Systematic Methods</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/12">doi: 10.3390/software5010012</a></p>
	<p>Authors:
		Robin Nunkesser
		</p>
	<p>Mobile Software Engineering has emerged as a distinct subfield, raising questions about the transferability of its research findings to general software engineering. This paper addresses the challenge of evaluating the generalizability of mobile-specific research, using Green Computing as a representative case. We propose combining systematic methods for identifying potentially overlooked mobile-specific papers with a focused literature review to assess their broader relevance. Applying this approach, we find that several mobile-specific studies offer insights applicable beyond their original context, particularly in areas such as energy efficiency guidelines, measurement, and trade-offs. The results demonstrate that systematic identification and evaluation can reveal valuable contributions for the wider software engineering community. The proposed method provides a structured framework for future research to assess the generalizability of findings from specialized domains, fostering greater integration and knowledge transfer across software engineering disciplines.</p>
	]]></content:encoded>

	<dc:title>Assessing the Generalizability of Mobile Software Engineering Research Through Combined Systematic Methods</dc:title>
			<dc:creator>Robin Nunkesser</dc:creator>
		<dc:identifier>doi: 10.3390/software5010012</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-03-03</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-03-03</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>12</prism:startingPage>
		<prism:doi>10.3390/software5010012</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/12</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/11">

	<title>Software, Vol. 5, Pages 11: Is Code Co-Committal an Indicator of Evolutionary Coupling in Software Repositories?</title>
	<link>https://www.mdpi.com/2674-113X/5/1/11</link>
	<description>Software repositories such as Git are significant sources of metadata about software projects, containing information such as modified files, change authors, and often commentary describing the change. An emerging approach to support software change impact analysis is to exploit this metadata to determine which files are linked by co-committal, i.e., when two files are frequently updated together within the same Git commit. Such information can serve as an indicator for identifying potential change-impact sets in future development activities. The aim of this study is to determine whether co-committal is a reliable indicator of links between software artifacts stored in Git and, if so, whether these links persist as the artifacts evolve&#8212;thereby offering a potentially valuable dimension for change impact analysis. To investigate this, we mined the metadata of five large Git repositories comprising over 14K commits and extracted co-change sets from the resulting data. The results show that: (1) co-committal links between artifacts vary widely in both strength and frequency, with these variations strongly influenced by the development style and activity levels of the contributing developers, and (2) although co-committal can serve as an indicator of evolutionary coupling in certain scenarios, its usefulness depends on project-specific development practices and observable patterns of developer behavior.</description>
	<pubDate>2026-03-01</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 11: Is Code Co-Committal an Indicator of Evolutionary Coupling in Software Repositories?</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/11">doi: 10.3390/software5010011</a></p>
	<p>Authors:
		Niall Price
		David Cutting
		Vahid Garousi
		</p>
	<p>Software repositories such as Git are significant sources of metadata about software projects, containing information such as modified files, change authors, and often commentary describing the change. An emerging approach to support software change impact analysis is to exploit this metadata to determine which files are linked by co-committal, i.e., when two files are frequently updated together within the same Git commit. Such information can serve as an indicator for identifying potential change-impact sets in future development activities. The aim of this study is to determine whether co-committal is a reliable indicator of links between software artifacts stored in Git and, if so, whether these links persist as the artifacts evolve&mdash;thereby offering a potentially valuable dimension for change impact analysis. To investigate this, we mined the metadata of five large Git repositories comprising over 14K commits and extracted co-change sets from the resulting data. The results show that: (1) co-committal links between artifacts vary widely in both strength and frequency, with these variations strongly influenced by the development style and activity levels of the contributing developers, and (2) although co-committal can serve as an indicator of evolutionary coupling in certain scenarios, its usefulness depends on project-specific development practices and observable patterns of developer behavior.</p>
	]]></content:encoded>

	<dc:title>Is Code Co-Committal an Indicator of Evolutionary Coupling in Software Repositories?</dc:title>
			<dc:creator>Niall Price</dc:creator>
			<dc:creator>David Cutting</dc:creator>
			<dc:creator>Vahid Garousi</dc:creator>
		<dc:identifier>doi: 10.3390/software5010011</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-03-01</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-03-01</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>11</prism:startingPage>
		<prism:doi>10.3390/software5010011</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/11</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/10">

	<title>Software, Vol. 5, Pages 10: CONSENT: A Software Architecture for Dynamic and Secure Consent Management</title>
	<link>https://www.mdpi.com/2674-113X/5/1/10</link>
	<description>Current research in consent management techniques focuses on isolated aspects of data security, privacy, or auditability, but important issues like (i) dynamically integrating regulatory updates into form generation, (ii) support in content generation with verifiable audit trails, and (iii) tools that make compliance reasoning transparent for non-legal users are not yet addressed. This paper introduces CONSENT, an architecture that integrates AI-based consent reasoning using Large Language Models (LLMs) for automated consent-form drafting and compliance evaluation, alongside blockchain technology for secure and auditable storage. The architecture builds on prior work to address the aforementioned issues by introducing three supporting mechanisms: (a) specialized AI models, coordinated through expert routing, that handle subtasks such as automated form generation and regulatory compliance, (b) Retrieval-Augmented Generation (RAG) that supports the integration of regulatory updates into forms, and (c) Explainable AI (XAI) for the reasoning behind form content and compliance assessments. The CONSENT architecture is evaluated through 250 test cases and a pilot case study for clinical trial consent management involving 20 engineers and attorneys, who evaluated the prototype on form quality (i.e., coherence, conciseness, factuality, fluency, and relevance) as well as time and effort efficiency. Results show that CONSENT substantially reduces the manual effort in consent-form creation while providing transparent, audit-ready compliance assessments, highlighting its potential for dynamic, user-centric consent management.</description>
	<pubDate>2026-02-26</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 10: CONSENT: A Software Architecture for Dynamic and Secure Consent Management</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/10">doi: 10.3390/software5010010</a></p>
	<p>Authors:
		Christina Zoi
		Ioannis Zozas
		Stamatia Bibi
		</p>
	<p>Current research in consent management techniques focuses on isolated aspects of data security, privacy, or auditability, but important issues like (i) dynamically integrating regulatory updates into form generation, (ii) support in content generation with verifiable audit trails, and (iii) tools that make compliance reasoning transparent for non-legal users are not yet addressed. This paper introduces CONSENT, an architecture that integrates AI-based consent reasoning using Large Language Models (LLMs) for automated consent-form drafting and compliance evaluation, alongside blockchain technology for secure and auditable storage. The architecture builds on prior work to address the aforementioned issues by introducing three supporting mechanisms: (a) specialized AI models, coordinated through expert routing, that handle subtasks such as automated form generation and regulatory compliance, (b) Retrieval-Augmented Generation (RAG) that supports the integration of regulatory updates into forms, and (c) Explainable AI (XAI) for the reasoning behind form content and compliance assessments. The CONSENT architecture is evaluated through 250 test cases and a pilot case study for clinical trial consent management involving 20 engineers and attorneys, who evaluated the prototype on form quality (i.e., coherence, conciseness, factuality, fluency, and relevance) as well as time and effort efficiency. Results show that CONSENT substantially reduces the manual effort in consent-form creation while providing transparent, audit-ready compliance assessments, highlighting its potential for dynamic, user-centric consent management.</p>
	]]></content:encoded>

	<dc:title>CONSENT: A Software Architecture for Dynamic and Secure Consent Management</dc:title>
			<dc:creator>Christina Zoi</dc:creator>
			<dc:creator>Ioannis Zozas</dc:creator>
			<dc:creator>Stamatia Bibi</dc:creator>
		<dc:identifier>doi: 10.3390/software5010010</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-02-26</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-02-26</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>10</prism:startingPage>
		<prism:doi>10.3390/software5010010</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/10</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/9">

	<title>Software, Vol. 5, Pages 9: Verifying Machine Learning Interpretability and Explainability Requirements Through Provenance</title>
	<link>https://www.mdpi.com/2674-113X/5/1/9</link>
	<description>Machine learning (ML) engineering increasingly incorporates principles from software and requirements engineering to improve development rigor; however, key non-functional requirements (NFRs) such as interpretability and explainability remain difficult to specify and verify using traditional requirements practices. Although prior work defines these qualities conceptually, their lack of measurable criteria prevents systematic verification. This paper presents a novel provenance-driven approach that decomposes ML interpretability and explainability NFRs into verifiable functional requirements (FRs) by leveraging model and data provenance to make model behavior transparent. The approach identifies the specific provenance artifacts required to validate each FR and demonstrates how their verification collectively establishes compliance with interpretability and explainability NFRs. The results show that ML provenance can operationalize otherwise abstract NFRs, transforming interpretability and explainability into quantifiable, testable properties and enabling more rigorous, requirements-based ML engineering.</description>
	<pubDate>2026-02-14</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 9: Verifying Machine Learning Interpretability and Explainability Requirements Through Provenance</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/9">doi: 10.3390/software5010009</a></p>
	<p>Authors:
		Lynn Vonderhaar
		Juan Couder
		Tyler Thomas Procko
		Eva Lueddeke
		Daryela Cisneros
		Omar Ochoa
		</p>
	<p>Machine learning (ML) engineering increasingly incorporates principles from software and requirements engineering to improve development rigor; however, key non-functional requirements (NFRs) such as interpretability and explainability remain difficult to specify and verify using traditional requirements practices. Although prior work defines these qualities conceptually, their lack of measurable criteria prevents systematic verification. This paper presents a novel provenance-driven approach that decomposes ML interpretability and explainability NFRs into verifiable functional requirements (FRs) by leveraging model and data provenance to make model behavior transparent. The approach identifies the specific provenance artifacts required to validate each FR and demonstrates how their verification collectively establishes compliance with interpretability and explainability NFRs. The results show that ML provenance can operationalize otherwise abstract NFRs, transforming interpretability and explainability into quantifiable, testable properties and enabling more rigorous, requirements-based ML engineering.</p>
	]]></content:encoded>

	<dc:title>Verifying Machine Learning Interpretability and Explainability Requirements Through Provenance</dc:title>
			<dc:creator>Lynn Vonderhaar</dc:creator>
			<dc:creator>Juan Couder</dc:creator>
			<dc:creator>Tyler Thomas Procko</dc:creator>
			<dc:creator>Eva Lueddeke</dc:creator>
			<dc:creator>Daryela Cisneros</dc:creator>
			<dc:creator>Omar Ochoa</dc:creator>
		<dc:identifier>doi: 10.3390/software5010009</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-02-14</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-02-14</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>9</prism:startingPage>
		<prism:doi>10.3390/software5010009</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/9</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/8">

	<title>Software, Vol. 5, Pages 8: Towards Service-Oriented Knowledge-Based Process Planning Supporting Service-Based Smart Production Environments</title>
	<link>https://www.mdpi.com/2674-113X/5/1/8</link>
	<description>The increasing decentralization of industrial processes in Industry 4.0 necessitates the distribution and coordination of resources such as machines, materials, expertise, and knowledge across organizations in a value chain. To facilitate effective operations in such distributed environments, it is essential to digitize processes and resources, establish interconnectedness, and implement a scalable management approach. The present paper addresses these challenges through the knowledge-based production planning (KPP) system, which was originally developed as a monolithic prototype. It is argued that the KPP system must evolve towards a service-oriented architecture (SOA) in order to align with distributed and interoperable Industry 4.0 requirements. The paper provides a comprehensive overview of the motivation and background of KPP, identifies the key research questions that are to be addressed, and presents a conceptual design for transitioning KPP into an SOA. The approach is designed for compatibility with the Arrowhead Framework (AF), which is intended to ensure interoperability with smart production environments. The contribution of this work is the first architectural concept that demonstrates how KPP components can be encapsulated as services and integrated into local cloud environments, thus laying the foundation for adaptive, ontology-based process planning in distributed manufacturing. In addition to the conceptual architecture, the first implementation phase has been conducted to validate the proposed approach. This includes the realization and evaluation of the mediator-based service layer, which operationalizes the transformation of planning data into semantic function blocks (FBs) and enables the interaction of distributed services within the envisioned SO-KPP architecture. The implementation demonstrates the feasibility of the service-oriented transformation and provides a functional proof of concept for ontology-based integration in future adaptive production planning systems.</description>
	<pubDate>2026-02-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 8: Towards Service-Oriented Knowledge-Based Process Planning Supporting Service-Based Smart Production Environments</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/8">doi: 10.3390/software5010008</a></p>
	<p>Authors:
		Kathrin Gorgs
		Heiko Friedrich
		Tobias Vogel
		Matthias L. Hemmje
		</p>
	<p>The increasing decentralization of industrial processes in Industry 4.0 necessitates the distribution and coordination of resources such as machines, materials, expertise, and knowledge across organizations in a value chain. To facilitate effective operations in such distributed environments, it is essential to digitize processes and resources, establish interconnectedness, and implement a scalable management approach. The present paper addresses these challenges through the knowledge-based production planning (KPP) system, which was originally developed as a monolithic prototype. It is argued that the KPP system must evolve towards a service-oriented architecture (SOA) in order to align with distributed and interoperable Industry 4.0 requirements. The paper provides a comprehensive overview of the motivation and background of KPP, identifies the key research questions that are to be addressed, and presents a conceptual design for transitioning KPP into an SOA. The approach is designed for compatibility with the Arrowhead Framework (AF), which is intended to ensure interoperability with smart production environments. The contribution of this work is the first architectural concept that demonstrates how KPP components can be encapsulated as services and integrated into local cloud environments, thus laying the foundation for adaptive, ontology-based process planning in distributed manufacturing. In addition to the conceptual architecture, the first implementation phase has been conducted to validate the proposed approach. This includes the realization and evaluation of the mediator-based service layer, which operationalizes the transformation of planning data into semantic function blocks (FBs) and enables the interaction of distributed services within the envisioned SO-KPP architecture. The implementation demonstrates the feasibility of the service-oriented transformation and provides a functional proof of concept for ontology-based integration in future adaptive production planning systems.</p>
	]]></content:encoded>

	<dc:title>Towards Service-Oriented Knowledge-Based Process Planning Supporting Service-Based Smart Production Environments</dc:title>
			<dc:creator>Kathrin Gorgs</dc:creator>
			<dc:creator>Heiko Friedrich</dc:creator>
			<dc:creator>Tobias Vogel</dc:creator>
			<dc:creator>Matthias L. Hemmje</dc:creator>
		<dc:identifier>doi: 10.3390/software5010008</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-02-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-02-12</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>8</prism:startingPage>
		<prism:doi>10.3390/software5010008</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/8</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/7">

	<title>Software, Vol. 5, Pages 7: A Functional Yield-Based Traversal Pattern for Concise, Composable, and Efficient Stream Pipelines</title>
	<link>https://www.mdpi.com/2674-113X/5/1/7</link>
	<description>The stream pipeline idiom provides a fluent and composable way to express computations over collections. It gained widespread popularity after its introduction in .NET in 2005, later influencing many platforms, including Java in 2014 with the introduction of Java Streams, and continues to be adopted in contemporary languages such as Kotlin. However, the set of operations available in standard libraries is limited, and developers often need to introduce operations that are not provided out of the box. Two options typically arise: implementing custom operations using the standard API or adopting a third-party collections library that offers a richer suite of operations. In this article, we show that both approaches may incur performance overhead, and that the former can also suffer from verbosity and reduced readability. We propose an alternative approach that remains faithful to the stream-pipeline pattern: developers implement the unit operations of the pipeline from scratch using a functional yield-based traversal pattern. We demonstrate that this approach requires low programming effort, eliminates the performance overheads of existing alternatives, and preserves the key qualities of a stream pipeline. Our experimental results show up to a 3&#215; speedup over the use of native yield in custom extensions.</description>
	<pubDate>2026-02-10</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 7: A Functional Yield-Based Traversal Pattern for Concise, Composable, and Efficient Stream Pipelines</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/7">doi: 10.3390/software5010007</a></p>
	<p>Authors:
		Fernando Miguel Carvalho
		</p>
	<p>The stream pipeline idiom provides a fluent and composable way to express computations over collections. It gained widespread popularity after its introduction in .NET in 2005, later influencing many platforms, including Java in 2014 with the introduction of Java Streams, and continues to be adopted in contemporary languages such as Kotlin. However, the set of operations available in standard libraries is limited, and developers often need to introduce operations that are not provided out of the box. Two options typically arise: implementing custom operations using the standard API or adopting a third-party collections library that offers a richer suite of operations. In this article, we show that both approaches may incur performance overhead, and that the former can also suffer from verbosity and reduced readability. We propose an alternative approach that remains faithful to the stream-pipeline pattern: developers implement the unit operations of the pipeline from scratch using a functional yield-based traversal pattern. We demonstrate that this approach requires low programming effort, eliminates the performance overheads of existing alternatives, and preserves the key qualities of a stream pipeline. Our experimental results show up to a 3&times; speedup over the use of native yield in custom extensions.</p>
	]]></content:encoded>

	<dc:title>A Functional Yield-Based Traversal Pattern for Concise, Composable, and Efficient Stream Pipelines</dc:title>
			<dc:creator>Fernando Miguel Carvalho</dc:creator>
		<dc:identifier>doi: 10.3390/software5010007</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-02-10</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-02-10</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>7</prism:startingPage>
		<prism:doi>10.3390/software5010007</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/7</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/6">

	<title>Software, Vol. 5, Pages 6: Integrating Continuous Compliance into DevSecOps Pipelines: A Data Engineering Perspective</title>
	<link>https://www.mdpi.com/2674-113X/5/1/6</link>
	<description>Modern DevSecOps environments face a persistent tension between accelerating deployment velocity and maintaining verifiable compliance with regulatory, security, and internal governance standards. Traditional snapshot-in-time audits and fragmented compliance tooling struggle to capture the dynamic nature of containerized continuous delivery, often resulting in compliance drift and delayed remediation. This paper introduces the Continuous Compliance Framework (CCF), a data-centric reference architecture that embeds compliance validation directly into CI/CD pipelines. The framework treats compliance as a first-class, computable system property by combining declarative policies-as-code, standardized evidence collection, and cryptographically verifiable attestations. Central to the approach is a Compliance Data Lakehouse that transforms heterogeneous pipeline artifacts into a queryable, time-indexed compliance data product, enabling audit-ready evidence generation and continuous assurance. The proposed architecture is validated through an end-to-end synthetic microservice implementation. Experimental results demonstrate full policy lifecycle enforcement with a minimal pipeline overhead and sub-second policy evaluation latency. These findings indicate that compliance can be shifted from a post hoc audit activity to an intrinsic, verifiable property of the software delivery process without materially degrading deployment velocity.</description>
	<pubDate>2026-02-10</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 6: Integrating Continuous Compliance into DevSecOps Pipelines: A Data Engineering Perspective</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/6">doi: 10.3390/software5010006</a></p>
	<p>Authors:
		Aleksandr Zakharchenko
		</p>
	<p>Modern DevSecOps environments face a persistent tension between accelerating deployment velocity and maintaining verifiable compliance with regulatory, security, and internal governance standards. Traditional snapshot-in-time audits and fragmented compliance tooling struggle to capture the dynamic nature of containerized continuous delivery, often resulting in compliance drift and delayed remediation. This paper introduces the Continuous Compliance Framework (CCF), a data-centric reference architecture that embeds compliance validation directly into CI/CD pipelines. The framework treats compliance as a first-class, computable system property by combining declarative policies-as-code, standardized evidence collection, and cryptographically verifiable attestations. Central to the approach is a Compliance Data Lakehouse that transforms heterogeneous pipeline artifacts into a queryable, time-indexed compliance data product, enabling audit-ready evidence generation and continuous assurance. The proposed architecture is validated through an end-to-end synthetic microservice implementation. Experimental results demonstrate full policy lifecycle enforcement with a minimal pipeline overhead and sub-second policy evaluation latency. These findings indicate that compliance can be shifted from a post hoc audit activity to an intrinsic, verifiable property of the software delivery process without materially degrading deployment velocity.</p>
	]]></content:encoded>

	<dc:title>Integrating Continuous Compliance into DevSecOps Pipelines: A Data Engineering Perspective</dc:title>
			<dc:creator>Aleksandr Zakharchenko</dc:creator>
		<dc:identifier>doi: 10.3390/software5010006</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-02-10</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-02-10</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>6</prism:startingPage>
		<prism:doi>10.3390/software5010006</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/6</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/5">

	<title>Software, Vol. 5, Pages 5: Data-Centric Serverless Computing with LambdaStore</title>
	<link>https://www.mdpi.com/2674-113X/5/1/5</link>
	<description>LambdaStore is a data-centric serverless platform that breaks the split between stateless functions and external storage in classic cloud computing platforms. By scheduling serverless invocations near data instead of pulling data to compute, LambdaStore substantially reduces the state access cost that dominates today&amp;rsquo;s serverless workloads. Leveraging its transactional storage engine, LambdaStore delivers serializable guarantees and exactly-once semantics across chains of lambda invocations&amp;mdash;a capability missing in current Function-as-a-Service offerings. We make three key contributions: (1) an object-oriented programming model that ties function invocations to their data; (2) a transaction layer with adaptive lock granularity and an optimistic concurrency control protocol designed for serverless workloads to keep contention low while preserving serializability; and (3) an elastic storage system that preserves the elasticity of the serverless paradigm while lambda functions run close to their data. Under read-heavy workloads, LambdaStore lifts throughput by orders of magnitude over existing serverless platforms while holding end-to-end latency below 20 ms.</description>
	<pubDate>2026-01-21</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 5: Data-Centric Serverless Computing with LambdaStore</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/5">doi: 10.3390/software5010005</a></p>
	<p>Authors:
		Kai Mast
		Suyan Qu
		Aditya Jain
		Andrea Arpaci-Dusseau
		Remzi Arpaci-Dusseau
		</p>
	<p>LambdaStore is a data-centric serverless platform that breaks the split between stateless functions and external storage in classic cloud computing platforms. By scheduling serverless invocations near data instead of pulling data to compute, LambdaStore substantially reduces the state access cost that dominates today&rsquo;s serverless workloads. Leveraging its transactional storage engine, LambdaStore delivers serializable guarantees and exactly-once semantics across chains of lambda invocations&mdash;a capability missing in current Function-as-a-Service offerings. We make three key contributions: (1) an object-oriented programming model that ties function invocations to their data; (2) a transaction layer with adaptive lock granularity and an optimistic concurrency control protocol designed for serverless workloads to keep contention low while preserving serializability; and (3) an elastic storage system that preserves the elasticity of the serverless paradigm while lambda functions run close to their data. Under read-heavy workloads, LambdaStore lifts throughput by orders of magnitude over existing serverless platforms while holding end-to-end latency below 20 ms.</p>
	]]></content:encoded>

	<dc:title>Data-Centric Serverless Computing with LambdaStore</dc:title>
			<dc:creator>Kai Mast</dc:creator>
			<dc:creator>Suyan Qu</dc:creator>
			<dc:creator>Aditya Jain</dc:creator>
			<dc:creator>Andrea Arpaci-Dusseau</dc:creator>
			<dc:creator>Remzi Arpaci-Dusseau</dc:creator>
		<dc:identifier>doi: 10.3390/software5010005</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-01-21</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-01-21</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>5</prism:startingPage>
		<prism:doi>10.3390/software5010005</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/5</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/4">

	<title>Software, Vol. 5, Pages 4: Mitigating Prompt Dependency in Large Language Models: A Retrieval-Augmented Framework for Intelligent Code Assistance</title>
	<link>https://www.mdpi.com/2674-113X/5/1/4</link>
	<description>Background: The implementation of Large Language Models (LLMs) in software engineering has provided new and improved approaches to code synthesis, testing, and refactoring. However, even with these new approaches, the practical efficacy of LLMs is restricted due to their reliance on user-given prompts. The problem is that these prompts can vary widely in quality and specificity, which leads to inconsistent or suboptimal results from the LLM application. Methods: This research therefore aims to alleviate these issues by developing an LLM-based code assistance prototype with a framework based on Retrieval-Augmented Generation (RAG) that automates the prompt-generation process and improves the outputs of LLMs using contextually relevant external knowledge. Results: The tool aims to reduce dependence on the manual preparation of prompts and enhance accessibility and usability for developers of all experience levels. The tool achieved a Code Correctness Score (CCS) of 162.0 and an Average Code Correctness (ACC) score of 98.8% in the refactoring task. These results can be compared to those of the generated tests, which scored CCS 139.0 and ACC 85.3%, respectively. Conclusions: This research contributes to the growing list of Artificial Intelligence (AI)-powered development tools and offers new opportunities for boosting the productivity of developers.</description>
	<pubDate>2026-01-21</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 4: Mitigating Prompt Dependency in Large Language Models: A Retrieval-Augmented Framework for Intelligent Code Assistance</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/4">doi: 10.3390/software5010004</a></p>
	<p>Authors:
		Saja Abufarha
		Ahmed Al Marouf
		Jon George Rokne
		Reda Alhajj
		</p>
	<p>Background: The implementation of Large Language Models (LLMs) in software engineering has provided new and improved approaches to code synthesis, testing, and refactoring. However, even with these new approaches, the practical efficacy of LLMs is restricted due to their reliance on user-given prompts. The problem is that these prompts can vary widely in quality and specificity, which leads to inconsistent or suboptimal results from the LLM application. Methods: This research therefore aims to alleviate these issues by developing an LLM-based code assistance prototype with a framework based on Retrieval-Augmented Generation (RAG) that automates the prompt-generation process and improves the outputs of LLMs using contextually relevant external knowledge. Results: The tool aims to reduce dependence on the manual preparation of prompts and enhance accessibility and usability for developers of all experience levels. The tool achieved a Code Correctness Score (CCS) of 162.0 and an Average Code Correctness (ACC) score of 98.8% in the refactoring task. These results can be compared to those of the generated tests, which scored CCS 139.0 and ACC 85.3%, respectively. Conclusions: This research contributes to the growing list of Artificial Intelligence (AI)-powered development tools and offers new opportunities for boosting the productivity of developers.</p>
	]]></content:encoded>

	<dc:title>Mitigating Prompt Dependency in Large Language Models: A Retrieval-Augmented Framework for Intelligent Code Assistance</dc:title>
			<dc:creator>Saja Abufarha</dc:creator>
			<dc:creator>Ahmed Al Marouf</dc:creator>
			<dc:creator>Jon George Rokne</dc:creator>
			<dc:creator>Reda Alhajj</dc:creator>
		<dc:identifier>doi: 10.3390/software5010004</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-01-21</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-01-21</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>4</prism:startingPage>
		<prism:doi>10.3390/software5010004</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/4</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/3">

	<title>Software, Vol. 5, Pages 3: RETRACTED: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427&amp;ndash;446</title>
	<link>https://www.mdpi.com/2674-113X/5/1/3</link>
	<description>The Journal retracts the article titled, &amp;ldquo;A Differential Datalog Interpreter&amp;rdquo; [...]</description>
	<pubDate>2026-01-21</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 3: RETRACTED: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427&ndash;446</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/3">doi: 10.3390/software5010003</a></p>
	<p>Authors:
		Matthew James Stephenson
		</p>
	<p>The Journal retracts the article titled, &ldquo;A Differential Datalog Interpreter&rdquo; [...]</p>
	]]></content:encoded>

	<dc:title>RETRACTED: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427&amp;ndash;446</dc:title>
			<dc:creator>Matthew James Stephenson</dc:creator>
		<dc:identifier>doi: 10.3390/software5010003</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-01-21</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-01-21</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Retraction</prism:section>
	<prism:startingPage>3</prism:startingPage>
		<prism:doi>10.3390/software5010003</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/3</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/2">

	<title>Software, Vol. 5, Pages 2: rUnit&amp;mdash;A Framework for Test Analysis of C Programs</title>
	<link>https://www.mdpi.com/2674-113X/5/1/2</link>
	<description>Asserting program correctness is a longstanding challenge in software development that consumes substantial resources and manpower. It is often accomplished through software testing at various levels. One such level is unit testing, where the behaviour of individual components is tested. In this paper, we introduce the concept of test analysis, which instead of executing unit tests, analyses them to establish their outcome. This is in line with previous approaches towards using formal methods for program verification; however, we introduce a middle layer called the test analysis framework, which allows for the introduction of new capabilities. We (briefly) formalize ordinary testing and test analysis to define the relation between the two. We introduce the notion of rich tests with a syntax and semantics instantiated for C. A prototype framework is implemented and extended to handle property-based stubbing and non-deterministic string variables. A few select examples are presented to demonstrate the capabilities of the framework.</description>
	<pubDate>2026-01-02</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 2: rUnit&mdash;A Framework for Test Analysis of C Programs</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/2">doi: 10.3390/software5010002</a></p>
	<p>Authors:
		Peter Backeman
		</p>
	<p>Asserting program correctness is a longstanding challenge in software development that consumes substantial resources and manpower. It is often accomplished through software testing at various levels. One such level is unit testing, where the behaviour of individual components is tested. In this paper, we introduce the concept of test analysis, which instead of executing unit tests, analyses them to establish their outcome. This is in line with previous approaches towards using formal methods for program verification; however, we introduce a middle layer called the test analysis framework, which allows for the introduction of new capabilities. We (briefly) formalize ordinary testing and test analysis to define the relation between the two. We introduce the notion of rich tests with a syntax and semantics instantiated for C. A prototype framework is implemented and extended to handle property-based stubbing and non-deterministic string variables. A few select examples are presented to demonstrate the capabilities of the framework.</p>
	]]></content:encoded>

	<dc:title>rUnit&amp;mdash;A Framework for Test Analysis of C Programs</dc:title>
			<dc:creator>Peter Backeman</dc:creator>
		<dc:identifier>doi: 10.3390/software5010002</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2026-01-02</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2026-01-02</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>2</prism:startingPage>
		<prism:doi>10.3390/software5010002</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/2</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/5/1/1">

	<title>Software, Vol. 5, Pages 1: RoboDeploy: A Metamodel-Driven Framework for Automated Multi-Host Docker Deployment of ROS 2 Systems in IoRT Environments</title>
	<link>https://www.mdpi.com/2674-113X/5/1/1</link>
	<description>Robotic systems increasingly operate in complex and distributed environments, where software deployment and orchestration pose major challenges. This paper presents a model-driven approach that automates the containerized deployment of robotic systems in Internet of Robotic Things (IoRT) environments. Our solution integrates Model-Driven Engineering (MDE) with containerization technologies to improve scalability, reproducibility, and maintainability. A dedicated metamodel introduces high-level abstractions for describing deployment architectures, repositories, and container configurations. A web-based tool enables collaborative model editing, while an external deployment automator generates validated Docker and Compose artifacts to support seamless multi-host orchestration. We validated the approach through real-world experiments, which show that the method effectively automates deployment workflows, ensures consistency across development and production environments, and significantly reduces configuration effort. These results demonstrate that model-driven automation can bridge the gap between Software Engineering (SE) and robotics, enabling Software-Defined Robotics (SDR) and supporting scalable IoRT applications.</description>
	<pubDate>2025-12-19</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 5, Pages 1: RoboDeploy: A Metamodel-Driven Framework for Automated Multi-Host Docker Deployment of ROS 2 Systems in IoRT Environments</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/5/1/1">doi: 10.3390/software5010001</a></p>
	<p>Authors:
		Miguel Ángel Barcelona
		Laura García-Borgoñón
		Pablo Torner
		Ariadna Belén Ruiz
		</p>
	<p>Robotic systems increasingly operate in complex and distributed environments, where software deployment and orchestration pose major challenges. This paper presents a model-driven approach that automates the containerized deployment of robotic systems in Internet of Robotic Things (IoRT) environments. Our solution integrates Model-Driven Engineering (MDE) with containerization technologies to improve scalability, reproducibility, and maintainability. A dedicated metamodel introduces high-level abstractions for describing deployment architectures, repositories, and container configurations. A web-based tool enables collaborative model editing, while an external deployment automator generates validated Docker and Compose artifacts to support seamless multi-host orchestration. We validated the approach through real-world experiments, which show that the method effectively automates deployment workflows, ensures consistency across development and production environments, and significantly reduces configuration effort. These results demonstrate that model-driven automation can bridge the gap between Software Engineering (SE) and robotics, enabling Software-Defined Robotics (SDR) and supporting scalable IoRT applications.</p>
	]]></content:encoded>

	<dc:title>RoboDeploy: A Metamodel-Driven Framework for Automated Multi-Host Docker Deployment of ROS 2 Systems in IoRT Environments</dc:title>
			<dc:creator>Miguel Ángel Barcelona</dc:creator>
			<dc:creator>Laura García-Borgoñón</dc:creator>
			<dc:creator>Pablo Torner</dc:creator>
			<dc:creator>Ariadna Belén Ruiz</dc:creator>
		<dc:identifier>doi: 10.3390/software5010001</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-12-19</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-12-19</prism:publicationDate>
	<prism:volume>5</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>1</prism:startingPage>
		<prism:doi>10.3390/software5010001</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/5/1/1</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/33">

	<title>Software, Vol. 4, Pages 33: Graph Generalization for Software Engineering</title>
	<link>https://www.mdpi.com/2674-113X/4/4/33</link>
	<description>Graph generalization is a powerful concept with a wide range of potential applications. While established algorithms exist for generalizing simple graphs, practical approaches for more complex graphs remain elusive. We introduce a novel formal model and algorithm (GGA) that generalizes labeled directed graphs without assuming label identity. We evaluate GGA by focusing on its information preservation relative to its input graphs, its scalability in execution, and its utility for three applications: abstract syntax trees, class graphs, and call graphs. Our findings reveal the superiority of GGA over alternative tools. GGA outperforms ASGard by an average of 5&amp;ndash;18% on metrics related to information preservation; GGA matches 100% with diffsitter, indicating the correctness of the output. For class graphs, GGA achieves 77.1% in precision at 5, while for call graphs, it exhibits 60% in precision at 5 for a specific application problem. We also test performance for the first two applications: GGA&amp;rsquo;s execution time scales linearly with respect to the product of vertex count and edge count. Our research demonstrates the ability of GGA to preserve information in diverse applications while performing efficiently, signaling its potential to advance the field.</description>
	<pubDate>2025-12-08</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 33: Graph Generalization for Software Engineering</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/33">doi: 10.3390/software4040033</a></p>
	<p>Authors:
		Mohammad Reza Kianifar
		Robert J. Walker
		</p>
	<p>Graph generalization is a powerful concept with a wide range of potential applications. While established algorithms exist for generalizing simple graphs, practical approaches for more complex graphs remain elusive. We introduce a novel formal model and algorithm (GGA) that generalizes labeled directed graphs without assuming label identity. We evaluate GGA by focusing on its information preservation relative to its input graphs, its scalability in execution, and its utility for three applications: abstract syntax trees, class graphs, and call graphs. Our findings reveal the superiority of GGA over alternative tools. GGA outperforms ASGard by an average of 5&ndash;18% on metrics related to information preservation; GGA matches 100% with diffsitter, indicating the correctness of the output. For class graphs, GGA achieves 77.1% in precision at 5, while for call graphs, it exhibits 60% in precision at 5 for a specific application problem. We also test performance for the first two applications: GGA&rsquo;s execution time scales linearly with respect to the product of vertex count and edge count. Our research demonstrates the ability of GGA to preserve information in diverse applications while performing efficiently, signaling its potential to advance the field.</p>
	]]></content:encoded>

	<dc:title>Graph Generalization for Software Engineering</dc:title>
			<dc:creator>Mohammad Reza Kianifar</dc:creator>
			<dc:creator>Robert J. Walker</dc:creator>
		<dc:identifier>doi: 10.3390/software4040033</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-12-08</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-12-08</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>33</prism:startingPage>
		<prism:doi>10.3390/software4040033</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/33</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/32">

	<title>Software, Vol. 4, Pages 32: Dynamic Frontend Architecture for Runtime Component Versioning and Feature Flag Resolution in Regulated Applications</title>
	<link>https://www.mdpi.com/2674-113X/4/4/32</link>
	<description>Regulated web systems require traceable, rollback-safe UI delivery, yet conventional static deployments and Boolean flagging struggle to provide per-user versioning, deterministic fallbacks, and audit-grade observability. The objective of this research is to develop and validate a runtime frontend architecture that enables per-session component versioning with deterministic fallbacks and audit-grade traceability for regulated systems. We present a dynamic frontend architecture that integrates typed GraphQL flag schemas, runtime module federation, and structured observability to enable per-session and per-route component versioning with deterministic fallbacks. We formalize a version-resolution function v = f(u, r, t) and implement a production system that achieved a 96% reduction in MTTR, a P90 fallback rate below 0.7%, and over 280 k session-level logs across 45 days. Compared to static delivery and standard flag evaluators, our approach adds schema-driven targeting, component-level isolation, and audit-ready render traces suitable for compliance. Limitations include cold-start overhead and governance complexity; we provide mitigation strategies and discuss portability beyond fintech.</description>
	<pubDate>2025-12-08</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 32: Dynamic Frontend Architecture for Runtime Component Versioning and Feature Flag Resolution in Regulated Applications</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/32">doi: 10.3390/software4040032</a></p>
	<p>Authors:
		Roman Fedytskyi
		</p>
	<p>Regulated web systems require traceable, rollback-safe UI delivery, yet conventional static deployments and Boolean flagging struggle to provide per-user versioning, deterministic fallbacks, and audit-grade observability. The objective of this research is to develop and validate a runtime frontend architecture that enables per-session component versioning with deterministic fallbacks and audit-grade traceability for regulated systems. We present a dynamic frontend architecture that integrates typed GraphQL flag schemas, runtime module federation, and structured observability to enable per-session and per-route component versioning with deterministic fallbacks. We formalize a version-resolution function v = f(u, r, t) and implement a production system that achieved a 96% reduction in MTTR, a P90 fallback rate below 0.7%, and over 280 k session-level logs across 45 days. Compared to static delivery and standard flag evaluators, our approach adds schema-driven targeting, component-level isolation, and audit-ready render traces suitable for compliance. Limitations include cold-start overhead and governance complexity; we provide mitigation strategies and discuss portability beyond fintech.</p>
	]]></content:encoded>

	<dc:title>Dynamic Frontend Architecture for Runtime Component Versioning and Feature Flag Resolution in Regulated Applications</dc:title>
			<dc:creator>Roman Fedytskyi</dc:creator>
		<dc:identifier>doi: 10.3390/software4040032</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-12-08</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-12-08</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>32</prism:startingPage>
		<prism:doi>10.3390/software4040032</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/32</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/31">

	<title>Software, Vol. 4, Pages 31: Building Shared Alignment for Agile at Scale: A Tool-Supported Method for Cross-Stakeholder Process Synthesis</title>
	<link>https://www.mdpi.com/2674-113X/4/4/31</link>
	<description>Organizations increasingly rely on Agile software development to navigate the complexities of digital transformation. Agile emphasizes flexibility, empowerment, and emergent design, yet large-scale initiatives often extend beyond single teams to include multiple subsidiaries, business units, and regulatory stakeholders. In such contexts, team-level mechanisms such as retrospectives, backlog refinement, and planning events may prove insufficient to achieve alignment across diverse perspectives, organizational boundaries, and compliance requirements. To address this limitation, this paper introduces a complementary framework and a supporting software tool that enable systematic cross-stakeholder alignment. Rather than replacing Agile practices, the framework enhances them by capturing heterogeneous stakeholder views, surfacing tacit knowledge, and systematically reconciling differences into a shared alignment artifact. The methodology combines individual Functional Resonance Analysis Method (FRAM)-based process modeling, iterative harmonization, and an evidence-supported selection mechanism driven by quantifiable performance indicators, all operationalized through a prototype tool. The approach was evaluated in a real industrial case study within the regulated gaming sector, involving practitioners from both a parent company and a subsidiary. The results show that the methodology effectively revealed misalignments among stakeholders&amp;rsquo; respective views of the development process, supported structured negotiation to reconcile these differences, and produced a consolidated process model that improved transparency and alignment across organizational boundaries. The study demonstrates the practical viability of the methodology and its value as a complementary mechanism that strengthens Agile ways of working in complex, multi-stakeholder environments.</description>
	<pubDate>2025-12-03</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 31: Building Shared Alignment for Agile at Scale: A Tool-Supported Method for Cross-Stakeholder Process Synthesis</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/31">doi: 10.3390/software4040031</a></p>
	<p>Authors:
		Giulio Serra
		Antonio De Nicola
		</p>
	<p>Organizations increasingly rely on Agile software development to navigate the complexities of digital transformation. Agile emphasizes flexibility, empowerment, and emergent design, yet large-scale initiatives often extend beyond single teams to include multiple subsidiaries, business units, and regulatory stakeholders. In such contexts, team-level mechanisms such as retrospectives, backlog refinement, and planning events may prove insufficient to achieve alignment across diverse perspectives, organizational boundaries, and compliance requirements. To address this limitation, this paper introduces a complementary framework and a supporting software tool that enable systematic cross-stakeholder alignment. Rather than replacing Agile practices, the framework enhances them by capturing heterogeneous stakeholder views, surfacing tacit knowledge, and systematically reconciling differences into a shared alignment artifact. The methodology combines individual Functional Resonance Analysis Method (FRAM)-based process modeling, iterative harmonization, and an evidence-supported selection mechanism driven by quantifiable performance indicators, all operationalized through a prototype tool. The approach was evaluated in a real industrial case study within the regulated gaming sector, involving practitioners from both a parent company and a subsidiary. The results show that the methodology effectively revealed misalignments among stakeholders’ respective views of the development process, supported structured negotiation to reconcile these differences, and produced a consolidated process model that improved transparency and alignment across organizational boundaries. The study demonstrates the practical viability of the methodology and its value as a complementary mechanism that strengthens Agile ways of working in complex, multi-stakeholder environments.</p>
	]]></content:encoded>

	<dc:title>Building Shared Alignment for Agile at Scale: A Tool-Supported Method for Cross-Stakeholder Process Synthesis</dc:title>
			<dc:creator>Giulio Serra</dc:creator>
			<dc:creator>Antonio De Nicola</dc:creator>
		<dc:identifier>doi: 10.3390/software4040031</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-12-03</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-12-03</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>31</prism:startingPage>
		<prism:doi>10.3390/software4040031</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/31</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/30">

	<title>Software, Vol. 4, Pages 30: Software Quality Assurance and AI: A Systems-Theoretic Approach to Reliability, Safety, and Security</title>
	<link>https://www.mdpi.com/2674-113X/4/4/30</link>
	<description>The integration of modern artificial intelligence into software systems presents transformative opportunities and novel challenges for software quality assurance (SQA). While AI enables powerful enhancements in testing, monitoring, and defect prediction, it also introduces non-determinism, continuous learning, and opaque behavior that challenge traditional quality and reliability paradigms. This paper proposes a framework for addressing these issues, drawing on concepts from systems theory. We argue that AI-enabled software systems should be understood as dynamical systems, i.e., stateful adaptive systems whose behavior depends on prior inputs, feedback, and environmental interaction, as well as components embedded within broader socio-technical ecosystems. From this perspective, quality assurance becomes a matter of maintaining stability by enforcing constraints as well as designing robust feedback and control mechanisms that account for interactions across the full ecosystem of stakeholders, infrastructure, and operational environments. This paper outlines how the systems-theoretic perspective can inform the development of modern SQA processes. This ecosystem-aware approach repositions software quality as an ongoing, systemic responsibility, especially important in mission-critical AI applications.</description>
	<pubDate>2025-11-13</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 30: Software Quality Assurance and AI: A Systems-Theoretic Approach to Reliability, Safety, and Security</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/30">doi: 10.3390/software4040030</a></p>
	<p>Authors:
		Joseph R. Laracy
		Ziyuan Meng
		Vassilka D. Kirova
		Cyril S. Ku
		Thomas J. Marlowe
		</p>
	<p>The integration of modern artificial intelligence into software systems presents transformative opportunities and novel challenges for software quality assurance (SQA). While AI enables powerful enhancements in testing, monitoring, and defect prediction, it also introduces non-determinism, continuous learning, and opaque behavior that challenge traditional quality and reliability paradigms. This paper proposes a framework for addressing these issues, drawing on concepts from systems theory. We argue that AI-enabled software systems should be understood as dynamical systems, i.e., stateful adaptive systems whose behavior depends on prior inputs, feedback, and environmental interaction, as well as components embedded within broader socio-technical ecosystems. From this perspective, quality assurance becomes a matter of maintaining stability by enforcing constraints as well as designing robust feedback and control mechanisms that account for interactions across the full ecosystem of stakeholders, infrastructure, and operational environments. This paper outlines how the systems-theoretic perspective can inform the development of modern SQA processes. This ecosystem-aware approach repositions software quality as an ongoing, systemic responsibility, especially important in mission-critical AI applications.</p>
	]]></content:encoded>

	<dc:title>Software Quality Assurance and AI: A Systems-Theoretic Approach to Reliability, Safety, and Security</dc:title>
			<dc:creator>Joseph R. Laracy</dc:creator>
			<dc:creator>Ziyuan Meng</dc:creator>
			<dc:creator>Vassilka D. Kirova</dc:creator>
			<dc:creator>Cyril S. Ku</dc:creator>
			<dc:creator>Thomas J. Marlowe</dc:creator>
		<dc:identifier>doi: 10.3390/software4040030</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-11-13</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-11-13</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>30</prism:startingPage>
		<prism:doi>10.3390/software4040030</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/30</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/29">

	<title>Software, Vol. 4, Pages 29: RCEGen: A Generative Approach for Automated Root Cause Analysis Using Large Language Models (LLMs)</title>
	<link>https://www.mdpi.com/2674-113X/4/4/29</link>
	<description>Root cause analysis (RCA) identifies the faults and vulnerabilities underlying software failures, informing better design and maintenance decisions. Earlier approaches typically framed RCA as a classification task, predicting coarse categories of root causes. With recent advances in large language models (LLMs), RCA can be treated as a generative task that produces natural language explanations of faults. We introduce RCEGen, a framework that leverages state-of-the-art open-source LLMs to generate root cause explanations (RCEs) directly from bug reports. Using 298 reports, we evaluated five LLMs in conjunction with human developers and LLM judges across three key aspects: correctness, clarity, and reasoning depth. Qwen2.5-Coder-Instruct achieved the strongest performance (correctness ≈ 0.89, clarity ≈ 0.88, reasoning ≈ 0.65, overall ≈ 0.79), and RCEs exhibited high semantic fidelity (CodeBERTScore ≈ 0.98) to developer-written references despite low lexical overlap. The results demonstrated that LLMs achieve high accuracy in root cause identification from bug report titles and descriptions, particularly when reports contained error logs and reproduction steps.</description>
	<pubDate>2025-11-07</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 29: RCEGen: A Generative Approach for Automated Root Cause Analysis Using Large Language Models (LLMs)</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/29">doi: 10.3390/software4040029</a></p>
	<p>Authors:
		Rubel Hassan Mollik
		Arup Datta
		Anamul Haque Mollah
		Wajdi Aljedaani
		</p>
	<p>Root cause analysis (RCA) identifies the faults and vulnerabilities underlying software failures, informing better design and maintenance decisions. Earlier approaches typically framed RCA as a classification task, predicting coarse categories of root causes. With recent advances in large language models (LLMs), RCA can be treated as a generative task that produces natural language explanations of faults. We introduce RCEGen, a framework that leverages state-of-the-art open-source LLMs to generate root cause explanations (RCEs) directly from bug reports. Using 298 reports, we evaluated five LLMs in conjunction with human developers and LLM judges across three key aspects: correctness, clarity, and reasoning depth. Qwen2.5-Coder-Instruct achieved the strongest performance (correctness ≈ 0.89, clarity ≈ 0.88, reasoning ≈ 0.65, overall ≈ 0.79), and RCEs exhibited high semantic fidelity (CodeBERTScore ≈ 0.98) to developer-written references despite low lexical overlap. The results demonstrated that LLMs achieve high accuracy in root cause identification from bug report titles and descriptions, particularly when reports contained error logs and reproduction steps.</p>
	]]></content:encoded>

	<dc:title>RCEGen: A Generative Approach for Automated Root Cause Analysis Using Large Language Models (LLMs)</dc:title>
			<dc:creator>Rubel Hassan Mollik</dc:creator>
			<dc:creator>Arup Datta</dc:creator>
			<dc:creator>Anamul Haque Mollah</dc:creator>
			<dc:creator>Wajdi Aljedaani</dc:creator>
		<dc:identifier>doi: 10.3390/software4040029</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-11-07</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-11-07</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>29</prism:startingPage>
		<prism:doi>10.3390/software4040029</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/29</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/28">

	<title>Software, Vol. 4, Pages 28: Software Engineering Aspects of Federated Learning Libraries: A Comparative Survey</title>
	<link>https://www.mdpi.com/2674-113X/4/4/28</link>
	<description>Federated Learning (FL) has emerged as a pivotal paradigm for privacy-preserving machine learning. While numerous FL libraries have been developed to operationalize this paradigm, their rapid proliferation has created a significant challenge for practitioners and researchers: selecting the right tool requires a deep understanding of their often undocumented software architectures and extensibility, aspects that are largely overlooked by existing algorithm-focused surveys. This paper addresses this gap by conducting the first comprehensive survey of FL libraries from a software engineering perspective. We systematically analyze ten popular open-source FL libraries, dissecting their architectural designs, support for core and advanced FL features, and most importantly, their extension mechanisms for customization. Our analysis produces a novel taxonomy of FL concepts grounded in software implementation, a practical decision framework for library selection, and an in-depth discussion of architectural limitations and pathways for future development. The findings provide developers with actionable guidance for selecting and extending FL tools and offer researchers a clear roadmap for advancing FL infrastructure.</description>
	<pubDate>2025-11-05</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 28: Software Engineering Aspects of Federated Learning Libraries: A Comparative Survey</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/28">doi: 10.3390/software4040028</a></p>
	<p>Authors:
		Hiba Alsghaier
		Tian Zhao
		</p>
	<p>Federated Learning (FL) has emerged as a pivotal paradigm for privacy-preserving machine learning. While numerous FL libraries have been developed to operationalize this paradigm, their rapid proliferation has created a significant challenge for practitioners and researchers: selecting the right tool requires a deep understanding of their often undocumented software architectures and extensibility, aspects that are largely overlooked by existing algorithm-focused surveys. This paper addresses this gap by conducting the first comprehensive survey of FL libraries from a software engineering perspective. We systematically analyze ten popular open-source FL libraries, dissecting their architectural designs, support for core and advanced FL features, and most importantly, their extension mechanisms for customization. Our analysis produces a novel taxonomy of FL concepts grounded in software implementation, a practical decision framework for library selection, and an in-depth discussion of architectural limitations and pathways for future development. The findings provide developers with actionable guidance for selecting and extending FL tools and offer researchers a clear roadmap for advancing FL infrastructure.</p>
	]]></content:encoded>

	<dc:title>Software Engineering Aspects of Federated Learning Libraries: A Comparative Survey</dc:title>
			<dc:creator>Hiba Alsghaier</dc:creator>
			<dc:creator>Tian Zhao</dc:creator>
		<dc:identifier>doi: 10.3390/software4040028</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-11-05</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-11-05</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>28</prism:startingPage>
		<prism:doi>10.3390/software4040028</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/28</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/27">

	<title>Software, Vol. 4, Pages 27: The Evolution of Software Usability in Developer Communities: An Empirical Study on Stack Overflow</title>
	<link>https://www.mdpi.com/2674-113X/4/4/27</link>
	<description>This study investigates how software developers discuss usability on Stack Overflow through an analysis of posts from 2008 to 2024. Although the importance of usability for software success is widely recognized, research on developer engagement with usability topics remains limited. Using mixed methods that combine quantitative metric analysis and qualitative content review, we examine temporal trends, comparative engagement patterns across eight non-functional requirements, and programming context-specific usability issues. Our findings show a significant decrease in usability posts since 2010, contrasting with other non-functional requirements, such as performance and security. Despite this decline, usability posts exhibit high resolution efficiency, achieving the highest answer and acceptance rates among all topics, suggesting that the community is highly effective at resolving these specialized questions. We identify distinctive platform-specific usability concerns: web development prioritizes responsive layouts and form design; desktop applications emphasize keyboard navigation and complex controls; and mobile development focuses on touch interactions and screen constraints. These patterns indicate a transformation in the sharing of usability knowledge, reflecting the maturation of the field, its integration into frameworks, and the migration to specialized communities. This first longitudinal analysis of usability discussions on Stack Overflow provides insights into developer engagement with usability and highlights opportunities for integrating usability guidance into technical contexts.</description>
	<pubDate>2025-10-31</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 27: The Evolution of Software Usability in Developer Communities: An Empirical Study on Stack Overflow</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/27">doi: 10.3390/software4040027</a></p>
	<p>Authors:
		Hans Djalali
		Wajdi Aljedaani
		Stephanie Ludi
		</p>
	<p>This study investigates how software developers discuss usability on Stack Overflow through an analysis of posts from 2008 to 2024. Although the importance of usability for software success is widely recognized, research on developer engagement with usability topics remains limited. Using mixed methods that combine quantitative metric analysis and qualitative content review, we examine temporal trends, comparative engagement patterns across eight non-functional requirements, and programming context-specific usability issues. Our findings show a significant decrease in usability posts since 2010, contrasting with other non-functional requirements, such as performance and security. Despite this decline, usability posts exhibit high resolution efficiency, achieving the highest answer and acceptance rates among all topics, suggesting that the community is highly effective at resolving these specialized questions. We identify distinctive platform-specific usability concerns: web development prioritizes responsive layouts and form design; desktop applications emphasize keyboard navigation and complex controls; and mobile development focuses on touch interactions and screen constraints. These patterns indicate a transformation in the sharing of usability knowledge, reflecting the maturation of the field, its integration into frameworks, and the migration to specialized communities. This first longitudinal analysis of usability discussions on Stack Overflow provides insights into developer engagement with usability and highlights opportunities for integrating usability guidance into technical contexts.</p>
	]]></content:encoded>

	<dc:title>The Evolution of Software Usability in Developer Communities: An Empirical Study on Stack Overflow</dc:title>
			<dc:creator>Hans Djalali</dc:creator>
			<dc:creator>Wajdi Aljedaani</dc:creator>
			<dc:creator>Stephanie Ludi</dc:creator>
		<dc:identifier>doi: 10.3390/software4040027</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-10-31</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-10-31</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>27</prism:startingPage>
		<prism:doi>10.3390/software4040027</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/27</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/26">

	<title>Software, Vol. 4, Pages 26: Using Genetic Algorithms for Research Software Structure Optimization</title>
	<link>https://www.mdpi.com/2674-113X/4/4/26</link>
	<description>Our goal is to generate restructuring recommendations for research software systems based on software architecture descriptions that were obtained via reverse engineering. We reconstructed these software architectures via static and dynamic analysis methods in the reverse engineering process. To do this, we combined static and dynamic analysis for call relationships and dataflow into a hierarchy of six analysis methods. For generating optimal restructuring recommendations, we use genetic algorithms, which optimize the module structure. For optimizing the modularization, we use coupling and cohesion metrics as fitness functions. We applied these methods to Earth System Models to test their efficacy. In general, our results confirm the applicability of genetic algorithms for optimizing the module structure of research software. Our experiments show that the analysis methods have a significant impact on the optimization results. A specific observation from our experiments is that the pure dynamic analysis produces significantly better modularizations than the optimizations based on the other analysis methods that we used for reverse engineering. Furthermore, a guided, interactive optimization with a domain expert’s feedback improves the modularization recommendations considerably. For instance, cohesion is improved by 57% with guided optimization.</description>
	<pubDate>2025-10-28</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 26: Using Genetic Algorithms for Research Software Structure Optimization</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/26">doi: 10.3390/software4040026</a></p>
	<p>Authors:
		Henning Schnoor
		Wilhelm Hasselbring
		Reiner Jung
		</p>
	<p>Our goal is to generate restructuring recommendations for research software systems based on software architecture descriptions that were obtained via reverse engineering. We reconstructed these software architectures via static and dynamic analysis methods in the reverse engineering process. To do this, we combined static and dynamic analysis for call relationships and dataflow into a hierarchy of six analysis methods. For generating optimal restructuring recommendations, we use genetic algorithms, which optimize the module structure. For optimizing the modularization, we use coupling and cohesion metrics as fitness functions. We applied these methods to Earth System Models to test their efficacy. In general, our results confirm the applicability of genetic algorithms for optimizing the module structure of research software. Our experiments show that the analysis methods have a significant impact on the optimization results. A specific observation from our experiments is that the pure dynamic analysis produces significantly better modularizations than the optimizations based on the other analysis methods that we used for reverse engineering. Furthermore, a guided, interactive optimization with a domain expert’s feedback improves the modularization recommendations considerably. For instance, cohesion is improved by 57% with guided optimization.</p>
	]]></content:encoded>

	<dc:title>Using Genetic Algorithms for Research Software Structure Optimization</dc:title>
			<dc:creator>Henning Schnoor</dc:creator>
			<dc:creator>Wilhelm Hasselbring</dc:creator>
			<dc:creator>Reiner Jung</dc:creator>
		<dc:identifier>doi: 10.3390/software4040026</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-10-28</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-10-28</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>26</prism:startingPage>
		<prism:doi>10.3390/software4040026</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/26</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/25">

	<title>Software, Vol. 4, Pages 25: From Intention to Adoption: Managerial Misalignment in Cybersecurity Training Investments for Software Development Organizations</title>
	<link>https://www.mdpi.com/2674-113X/4/4/25</link>
	<description>For a software engineering organization, cybersecurity training initiatives are among the important investment decisions that management must make, both to ensure adequate skill development and to maintain competitive advantage. This study builds upon three case organizations in Sweden and Greece, where managers’ and software developers’ perceptions of trialability and observability effects are analyzed, grounded in the theory of innovation diffusion. Using interviews and a developer-centric survey, both quantitative and qualitative data are collected and used in combination to support the development of a pre-investment framework for management. The analysis includes thematic analysis, cosine similarity comparison, and, to some extent, sentiment polarity scoring. Based on the empirical findings of the study, a pre-investment framework consisting of seven concrete steps is proposed.</description>
	<pubDate>2025-10-13</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 25: From Intention to Adoption: Managerial Misalignment in Cybersecurity Training Investments for Software Development Organizations</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/25">doi: 10.3390/software4040025</a></p>
	<p>Authors:
		Hannes Salin
		Vasileios Gkougkaras
		</p>
	<p>For a software engineering organization, cybersecurity training initiatives are among the important investment decisions that management must make, both to ensure adequate skill development and to maintain competitive advantage. This study builds upon three case organizations in Sweden and Greece, where managers’ and software developers’ perceptions of trialability and observability effects are analyzed, grounded in the theory of innovation diffusion. Using interviews and a developer-centric survey, both quantitative and qualitative data are collected and used in combination to support the development of a pre-investment framework for management. The analysis includes thematic analysis, cosine similarity comparison, and, to some extent, sentiment polarity scoring. Based on the empirical findings of the study, a pre-investment framework consisting of seven concrete steps is proposed.</p>
	]]></content:encoded>

	<dc:title>From Intention to Adoption: Managerial Misalignment in Cybersecurity Training Investments for Software Development Organizations</dc:title>
			<dc:creator>Hannes Salin</dc:creator>
			<dc:creator>Vasileios Gkougkaras</dc:creator>
		<dc:identifier>doi: 10.3390/software4040025</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-10-13</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-10-13</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>25</prism:startingPage>
		<prism:doi>10.3390/software4040025</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/25</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/24">

	<title>Software, Vol. 4, Pages 24: The Agile PMO Paradox: Embracing DevOps in the UAE</title>
	<link>https://www.mdpi.com/2674-113X/4/4/24</link>
	<description>This study investigates how Development and Operations (DevOps) practices impact Project Management Office (PMO) governance within the technology sector of the United Arab Emirates (UAE). It addresses the need for agile-aligned governance frameworks by exploring how DevOps principles affect traditional PMO structures. A quantitative cross-sectional survey was conducted, and data was collected from 321 DevOps and PMO professionals in UAE organizations. The analysis, using Partial Least Squares Structural Equation Modelling (PLS-SEM), revealed a moderate positive correlation between specific DevOps practices—such as microservices, Minimum Viable Experience (MVE) culture, continuous value streams, automated configuration, and continuous delivery—and effective PMO governance. The study’s novel theoretical contribution is the integration of the Dynamic Capabilities Framework (DCF) with the Agile DevOps Reference Model (ADRM) to examine this alignment, bridging strategic agility and operational execution. This research offers actionable insights for UAE organizations and policymakers seeking to enhance governance and digital maturity.</description>
	<pubDate>2025-09-24</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 24: The Agile PMO Paradox: Embracing DevOps in the UAE</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/24">doi: 10.3390/software4040024</a></p>
	<p>Authors:
		Ibrahim Peerzada
		</p>
	<p>This study investigates how Development and Operations (DevOps) practices impact Project Management Office (PMO) governance within the technology sector of the United Arab Emirates (UAE). It addresses the need for agile-aligned governance frameworks by exploring how DevOps principles affect traditional PMO structures. A quantitative cross-sectional survey was conducted, and data was collected from 321 DevOps and PMO professionals in UAE organizations. The analysis, using Partial Least Squares Structural Equation Modelling (PLS-SEM), revealed a moderate positive correlation between specific DevOps practices—such as microservices, Minimum Viable Experience (MVE) culture, continuous value streams, automated configuration, and continuous delivery—and effective PMO governance. The study’s novel theoretical contribution is the integration of the Dynamic Capabilities Framework (DCF) with the Agile DevOps Reference Model (ADRM) to examine this alignment, bridging strategic agility and operational execution. This research offers actionable insights for UAE organizations and policymakers seeking to enhance governance and digital maturity.</p>
	]]></content:encoded>

	<dc:title>The Agile PMO Paradox: Embracing DevOps in the UAE</dc:title>
			<dc:creator>Ibrahim Peerzada</dc:creator>
		<dc:identifier>doi: 10.3390/software4040024</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-09-24</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-09-24</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Hypothesis</prism:section>
	<prism:startingPage>24</prism:startingPage>
		<prism:doi>10.3390/software4040024</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/24</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/23">

	<title>Software, Vol. 4, Pages 23: Parallel Towers of Hanoi via Generalized Nets: Simulated with OnlineGN</title>
	<link>https://www.mdpi.com/2674-113X/4/4/23</link>
	<description>This paper introduces a variant of the classic Towers of Hanoi (TH) puzzle in which parallel moves—simultaneous transfers of multiple discs—are permitted. The problem is formalized with Generalized Nets (GN), an extension of Petri nets (PN) whose tokens and transitions encode the ordering and movement of the n discs among three rods under the usual constraints. The resulting GN model, implemented on the OnlineGN platform, provides a clear, precise, and systematic representation that highlights the role of parallelism and supports interactive experimentation. This framework enables the exploration of strategies that reduce the number of parallel steps (PS) and, more broadly, illustrates how GN-captured parallelism can shorten the sequential depth for selected problems with exponential-time solutions.</description>
	<pubDate>2025-09-23</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 23: Parallel Towers of Hanoi via Generalized Nets: Simulated with OnlineGN</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/23">doi: 10.3390/software4040023</a></p>
	<p>Authors:
		Angel Dimitriev
		Krassimir Atanassov
		Nora Angelova
		</p>
	<p>This paper introduces a variant of the classic Towers of Hanoi (TH) puzzle in which parallel moves&mdash;simultaneous transfers of multiple discs&mdash;are permitted. The problem is formalized with Generalized Nets (GN), an extension of Petri nets (PN) whose tokens and transitions encode the ordering and movement of the n discs among three rods under the usual constraints. The resulting GN model, implemented on the OnlineGN platform, provides a clear, precise, and systematic representation that highlights the role of parallelism and supports interactive experimentation. This framework enables the exploration of strategies that reduce the number of parallel steps (PS) and, more broadly, illustrates how GN-captured parallelism can shorten the sequential depth for selected problems with exponential-time solutions.</p>
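	<p>For context on what the parallel variant improves upon: the GN model is the paper's contribution, but the classic sequential puzzle it generalizes needs 2^n - 1 single-disc moves, which the sketch below reproduces (rod names are illustrative, and this is background only, not the authors' model):</p>

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Classic sequential Towers of Hanoi: move n discs from src to dst.

    Returns the list of (from_rod, to_rod) moves; its length is 2**n - 1,
    the sequential depth that parallel-move strategies aim to shorten.
    """
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
    else:
        hanoi(n - 1, src, dst, aux, moves)   # park n-1 discs on the spare rod
        moves.append((src, dst))             # move the largest disc
        hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 discs on top
    return moves

print(len(hanoi(3)))  # 7, i.e. 2**3 - 1
```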
	]]></content:encoded>

	<dc:title>Parallel Towers of Hanoi via Generalized Nets: Simulated with OnlineGN</dc:title>
			<dc:creator>Angel Dimitriev</dc:creator>
			<dc:creator>Krassimir Atanassov</dc:creator>
			<dc:creator>Nora Angelova</dc:creator>
		<dc:identifier>doi: 10.3390/software4040023</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-09-23</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-09-23</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>23</prism:startingPage>
		<prism:doi>10.3390/software4040023</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/23</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/4/22">

	<title>Software, Vol. 4, Pages 22: Automatic Complexity Analysis of UML Class Diagrams Using Visual Question Answering (VQA) Techniques</title>
	<link>https://www.mdpi.com/2674-113X/4/4/22</link>
	<description>Context: Modern software systems have become increasingly complex, making it difficult to interpret raw requirements and effectively utilize traditional tools for software design and analysis. Unified Modeling Language (UML) class diagrams are widely used to visualize and understand system architecture, but analyzing them manually, especially for large-scale systems, poses significant challenges. Objectives: This study aims to automate the analysis of UML class diagrams by assessing their complexity using a machine learning approach. The goal is to support software developers in identifying potential design issues early in the development process and to improve overall software quality. Methodology: To achieve this, this research introduces a Visual Question Answering (VQA)-based framework that integrates both computer vision and natural language processing. Vision Transformers (ViTs) are employed to extract global visual features from UML class diagrams, while the BERT language model processes natural language queries. By combining these two models, the system can accurately respond to questions related to software complexity, such as class coupling and inheritance depth. Results: The proposed method demonstrated strong performance in experimental trials. The ViT model achieved an accuracy of 0.8800, with both the F1 score and recall reaching 0.8985. These metrics highlight the effectiveness of the approach in automatically evaluating UML class diagrams. Conclusions: The findings confirm that advanced machine learning techniques can be successfully applied to automate software design analysis. This approach can help developers detect design flaws early and enhance software maintainability. Future work will explore advanced fusion strategies, novel data augmentation techniques, and lightweight model adaptations suitable for environments with limited computational resources.</description>
	<pubDate>2025-09-23</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 22: Automatic Complexity Analysis of UML Class Diagrams Using Visual Question Answering (VQA) Techniques</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/4/22">doi: 10.3390/software4040022</a></p>
	<p>Authors:
		Nimra Shehzadi
		Javed Ferzund
		Rubia Fatima
		Adnan Riaz
		</p>
	<p>Context: Modern software systems have become increasingly complex, making it difficult to interpret raw requirements and effectively utilize traditional tools for software design and analysis. Unified Modeling Language (UML) class diagrams are widely used to visualize and understand system architecture, but analyzing them manually, especially for large-scale systems, poses significant challenges. Objectives: This study aims to automate the analysis of UML class diagrams by assessing their complexity using a machine learning approach. The goal is to support software developers in identifying potential design issues early in the development process and to improve overall software quality. Methodology: To achieve this, this research introduces a Visual Question Answering (VQA)-based framework that integrates both computer vision and natural language processing. Vision Transformers (ViTs) are employed to extract global visual features from UML class diagrams, while the BERT language model processes natural language queries. By combining these two models, the system can accurately respond to questions related to software complexity, such as class coupling and inheritance depth. Results: The proposed method demonstrated strong performance in experimental trials. The ViT model achieved an accuracy of 0.8800, with both the F1 score and recall reaching 0.8985. These metrics highlight the effectiveness of the approach in automatically evaluating UML class diagrams. Conclusions: The findings confirm that advanced machine learning techniques can be successfully applied to automate software design analysis. This approach can help developers detect design flaws early and enhance software maintainability. Future work will explore advanced fusion strategies, novel data augmentation techniques, and lightweight model adaptations suitable for environments with limited computational resources.</p>
	]]></content:encoded>

	<dc:title>Automatic Complexity Analysis of UML Class Diagrams Using Visual Question Answering (VQA) Techniques</dc:title>
			<dc:creator>Nimra Shehzadi</dc:creator>
			<dc:creator>Javed Ferzund</dc:creator>
			<dc:creator>Rubia Fatima</dc:creator>
			<dc:creator>Adnan Riaz</dc:creator>
		<dc:identifier>doi: 10.3390/software4040022</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-09-23</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-09-23</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>22</prism:startingPage>
		<prism:doi>10.3390/software4040022</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/4/22</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/21">

	<title>Software, Vol. 4, Pages 21: MetaFFI-Multilingual Indirect Interoperability System</title>
	<link>https://www.mdpi.com/2674-113X/4/3/21</link>
	<description>The development of software applications using multiple programming languages has increased in recent years, as it allows the selection of the most suitable language and runtime for each component of the system and the integration of third-party libraries. However, this practice involves complexity and error proneness, due to the absence of an adequate system for the interoperability of multiple programming languages. Developers are compelled to resort to workarounds, such as library reimplementation or language-specific wrappers, which are often dependent on C as the common denominator for interoperability. These challenges render the use of multiple programming languages a burdensome and demanding task that necessitates highly skilled developers for implementation, debugging, and maintenance, and raise doubts about the benefits of interoperability. To overcome these challenges, we propose MetaFFI, introducing a fully in-process, plugin-oriented, runtime-independent architecture based on a minimal C abstraction layer. It provides deep binding without relying on a shared object model, virtual machine bytecode, or manual glue code. This architecture is scalable (O(n) integration for n languages) and supports true polymorphic function and object invocation across languages. MetaFFI is based on leveraging FFI and embedding mechanisms, which minimize restrictions on language selection while still enabling full-duplex binding and deep integration. This is achieved by exploiting the less restrictive shallow binding mechanisms (e.g., Foreign Function Interface) to offer deep binding features (e.g., object creation, methods, fields). MetaFFI provides a runtime-independent framework to load and xcall (Cross-Call) foreign entities (e.g., getters, functions, objects). MetaFFI uses Common Data Types (CDTs) to pass parameters and return values, including objects and complex types, and even cross-language callbacks and dynamic calling conventions for optimization. 
The indirect interoperability approach of MetaFFI has the significant advantage of requiring only 2n mechanisms to support n languages, compared to direct interoperability approaches that need n² mechanisms. We developed and tested a proof-of-concept tool interoperating three languages (Go, Python, and Java) on Windows and Ubuntu. To evaluate the approach and the tool, we conducted a user study, with promising results. The MetaFFI framework is available as open source software, including its full source code and installers, to facilitate adoption and collaboration across academic and industrial communities.</description>
	<pubDate>2025-08-26</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 21: MetaFFI-Multilingual Indirect Interoperability System</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/21">doi: 10.3390/software4030021</a></p>
	<p>Authors:
		Tsvi Cherny-Shahar
		Amiram Yehudai
		</p>
	<p>The development of software applications using multiple programming languages has increased in recent years, as it allows the selection of the most suitable language and runtime for each component of the system and the integration of third-party libraries. However, this practice involves complexity and error proneness, due to the absence of an adequate system for the interoperability of multiple programming languages. Developers are compelled to resort to workarounds, such as library reimplementation or language-specific wrappers, which are often dependent on C as the common denominator for interoperability. These challenges render the use of multiple programming languages a burdensome and demanding task that necessitates highly skilled developers for implementation, debugging, and maintenance, and raise doubts about the benefits of interoperability. To overcome these challenges, we propose MetaFFI, introducing a fully in-process, plugin-oriented, runtime-independent architecture based on a minimal C abstraction layer. It provides deep binding without relying on a shared object model, virtual machine bytecode, or manual glue code. This architecture is scalable (O(n) integration for n languages) and supports true polymorphic function and object invocation across languages. MetaFFI is based on leveraging FFI and embedding mechanisms, which minimize restrictions on language selection while still enabling full-duplex binding and deep integration. This is achieved by exploiting the less restrictive shallow binding mechanisms (e.g., Foreign Function Interface) to offer deep binding features (e.g., object creation, methods, fields). MetaFFI provides a runtime-independent framework to load and xcall (Cross-Call) foreign entities (e.g., getters, functions, objects). MetaFFI uses Common Data Types (CDTs) to pass parameters and return values, including objects and complex types, and even cross-language callbacks and dynamic calling conventions for optimization. 
The indirect interoperability approach of MetaFFI has the significant advantage of requiring only 2n mechanisms to support n languages, compared to direct interoperability approaches that need n² mechanisms. We developed and tested a proof-of-concept tool interoperating three languages (Go, Python, and Java) on Windows and Ubuntu. To evaluate the approach and the tool, we conducted a user study, with promising results. The MetaFFI framework is available as open source software, including its full source code and installers, to facilitate adoption and collaboration across academic and industrial communities.</p>
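	<p>The 2n-versus-n² scaling claim above is easy to verify: an indirect approach needs one mechanism into and one out of the common layer per language, while direct bridging needs one per ordered language pair. The sketch below is illustrative arithmetic we supply, not MetaFFI code:</p>

```python
def mechanisms(n):
    """Mechanism counts to interoperate n languages.

    indirect: 2 per language (to and from a common representation),
    as in MetaFFI's CDT-based design; direct: one bridge per ordered
    pair of languages, i.e. n**2.
    """
    indirect = 2 * n
    direct = n ** 2
    return indirect, direct

for n in (3, 10):
    ind, direct = mechanisms(n)
    print(f"n={n}: indirect={ind}, direct={direct}")
```

For the three languages of the proof of concept the gap is modest (6 vs. 9), but at ten languages it is 20 vs. 100.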
	]]></content:encoded>

	<dc:title>MetaFFI-Multilingual Indirect Interoperability System</dc:title>
			<dc:creator>Tsvi Cherny-Shahar</dc:creator>
			<dc:creator>Amiram Yehudai</dc:creator>
		<dc:identifier>doi: 10.3390/software4030021</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-08-26</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-08-26</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>21</prism:startingPage>
		<prism:doi>10.3390/software4030021</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/21</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/20">

	<title>Software, Vol. 4, Pages 20: Enabling Progressive Server-Side Rendering for Traditional Web Template Engines with Java Virtual Threads</title>
	<link>https://www.mdpi.com/2674-113X/4/3/20</link>
	<description>Modern web applications increasingly demand rendering techniques that optimize performance, responsiveness, and scalability. Progressive Server-Side Rendering (PSSR) bridges the gap between Server-Side Rendering and Client-Side Rendering by progressively streaming HTML content, improving perceived load times. Still, traditional HTML template engines often rely on blocking interfaces that hinder their use in asynchronous, non-blocking contexts required for PSSR. This paper analyzes how Java virtual threads, introduced in Java 21, enable non-blocking execution of blocking I/O operations, allowing the reuse of traditional template engines for PSSR without complex asynchronous programming models. We benchmark multiple engines across Spring WebFlux, Spring MVC, and Quarkus using reactive, suspendable, and virtual thread-based approaches. Results show that virtual threads allow blocking engines to scale comparably to those designed for non-blocking I/O, achieving high throughput and responsiveness under load. This demonstrates that virtual threads provide a compelling path to simplify the implementation of PSSR with familiar HTML templates, significantly lowering the barrier to entry while maintaining performance.</description>
	<pubDate>2025-08-13</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 20: Enabling Progressive Server-Side Rendering for Traditional Web Template Engines with Java Virtual Threads</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/20">doi: 10.3390/software4030020</a></p>
	<p>Authors:
		Bernardo Pereira
		Fernando Miguel Carvalho
		</p>
	<p>Modern web applications increasingly demand rendering techniques that optimize performance, responsiveness, and scalability. Progressive Server-Side Rendering (PSSR) bridges the gap between Server-Side Rendering and Client-Side Rendering by progressively streaming HTML content, improving perceived load times. Still, traditional HTML template engines often rely on blocking interfaces that hinder their use in asynchronous, non-blocking contexts required for PSSR. This paper analyzes how Java virtual threads, introduced in Java 21, enable non-blocking execution of blocking I/O operations, allowing the reuse of traditional template engines for PSSR without complex asynchronous programming models. We benchmark multiple engines across Spring WebFlux, Spring MVC, and Quarkus using reactive, suspendable, and virtual thread-based approaches. Results show that virtual threads allow blocking engines to scale comparably to those designed for non-blocking I/O, achieving high throughput and responsiveness under load. This demonstrates that virtual threads provide a compelling path to simplify the implementation of PSSR with familiar HTML templates, significantly lowering the barrier to entry while maintaining performance.</p>
	]]></content:encoded>

	<dc:title>Enabling Progressive Server-Side Rendering for Traditional Web Template Engines with Java Virtual Threads</dc:title>
			<dc:creator>Bernardo Pereira</dc:creator>
			<dc:creator>Fernando Miguel Carvalho</dc:creator>
		<dc:identifier>doi: 10.3390/software4030020</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-08-13</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-08-13</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>20</prism:startingPage>
		<prism:doi>10.3390/software4030020</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/20</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/19">

	<title>Software, Vol. 4, Pages 19: Research and Development of Test Automation Maturity Model Building and Assessment Methods for E2E Testing</title>
	<link>https://www.mdpi.com/2674-113X/4/3/19</link>
	<description>Background: While several test-automation maturity models (e.g., CMMI, TMMi, TAIM) exist, none explicitly integrate ISO 9001-based quality management systems (QMS), leaving a gap for organizations that must align E2E test automation with formal quality assurance. Objective: This study proposes a test-automation maturity model (TAMM) that bridges E2E automation capability with ISO 9001/ISO 9004 self-assessment principles, and evaluates its reliability and practical impact in industry. Methods: TAMM comprises eight maturity dimensions, 39 requirements, and 429 checklist items. Three independent assessors applied the checklist to three software teams; inter-rater reliability was ensured via consensus review (Cohen&amp;rsquo;s &amp;kappa; = 0.75). Short-term remediation actions based on the checklist were implemented over six months and re-assessed. Synergy with the organization&amp;rsquo;s ISO 9001 QMS was analyzed using ISO 9004 self-check scores. Results: Within six months of remediation, the mean TAMM score rose from 2.75 to 2.85, and inter-rater reliability reached Cohen&amp;rsquo;s &amp;kappa; = 0.75. Conclusions: The proposed TAMM delivers measurable, short-term maturity gains and complements ISO 9001-based QMS without introducing conflicting processes. Practitioners can use the checklist to identify actionable gaps, prioritize remediation, and quantify progress, while researchers may extend TAMM to other domains or automate scoring via repository mining.</description>
	<pubDate>2025-08-05</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 19: Research and Development of Test Automation Maturity Model Building and Assessment Methods for E2E Testing</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/19">doi: 10.3390/software4030019</a></p>
	<p>Authors:
		Daiju Kato
		Ayane Mogi
		Hiroshi Ishikawa
		Yasufumi Takama
		</p>
	<p>Background: While several test-automation maturity models (e.g., CMMI, TMMi, TAIM) exist, none explicitly integrate ISO 9001-based quality management systems (QMS), leaving a gap for organizations that must align E2E test automation with formal quality assurance. Objective: This study proposes a test-automation maturity model (TAMM) that bridges E2E automation capability with ISO 9001/ISO 9004 self-assessment principles, and evaluates its reliability and practical impact in industry. Methods: TAMM comprises eight maturity dimensions, 39 requirements, and 429 checklist items. Three independent assessors applied the checklist to three software teams; inter-rater reliability was ensured via consensus review (Cohen&rsquo;s &kappa; = 0.75). Short-term remediation actions based on the checklist were implemented over six months and re-assessed. Synergy with the organization&rsquo;s ISO 9001 QMS was analyzed using ISO 9004 self-check scores. Results: Within six months of remediation, the mean TAMM score rose from 2.75 to 2.85, and inter-rater reliability reached Cohen&rsquo;s &kappa; = 0.75. Conclusions: The proposed TAMM delivers measurable, short-term maturity gains and complements ISO 9001-based QMS without introducing conflicting processes. Practitioners can use the checklist to identify actionable gaps, prioritize remediation, and quantify progress, while researchers may extend TAMM to other domains or automate scoring via repository mining.</p>
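	<p>For readers unfamiliar with the agreement statistic reported above: Cohen's kappa compares observed rater agreement with the agreement expected by chance. A minimal sketch follows; the ratings in the example are made up for illustration, not the study's data:</p>

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected from each rater's label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label]
              for label in set(counts_a) | set(counts_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two raters agree on 3 of 4 illustrative pass/fail checklist items.
print(cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```

A kappa of 0.75, as reported, is conventionally read as substantial agreement.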
	]]></content:encoded>

	<dc:title>Research and Development of Test Automation Maturity Model Building and Assessment Methods for E2E Testing</dc:title>
			<dc:creator>Daiju Kato</dc:creator>
			<dc:creator>Ayane Mogi</dc:creator>
			<dc:creator>Hiroshi Ishikawa</dc:creator>
			<dc:creator>Yasufumi Takama</dc:creator>
		<dc:identifier>doi: 10.3390/software4030019</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-08-05</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-08-05</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>19</prism:startingPage>
		<prism:doi>10.3390/software4030019</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/19</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/18">

	<title>Software, Vol. 4, Pages 18: Intersectional Software Engineering as a Field</title>
	<link>https://www.mdpi.com/2674-113X/4/3/18</link>
	<description>Intersectionality is a concept used to explain the power dynamics and inequalities that some groups experience owing to the interconnection of social differences such as gender, sexual identity, poverty status, race, geographic location, disability, and education. The relation between software engineering, feminism, and intersectionality has been addressed by some studies thus far, but it has never been codified before. In this paper, we employ the commonly used ABC Framework for empirical software engineering to show the contributions of intersectional software engineering (ISE) as a field of software engineering. In addition, we highlight the power dynamics unique to ISE studies and define gender-forward intersectionality as a way to use gender as a starting point to identify and examine inequalities and discrimination. We show that ISE is a field of study in software engineering that uses gender-forward intersectionality to produce knowledge about power dynamics in software engineering in its specific domains and environments. Employing empirical software engineering research strategies, we explain the importance of recognizing and evaluating ISE through four dimensions of dynamics, which are people, processes, products, and policies. Beginning with a set of 10 seminal papers that enable us to define the initial concepts and the search query, we conduct a systematic mapping study that leads to a dataset of 140 primary papers, of which 15 are chosen as example papers. We apply the principles of ISE to these example papers to show how the field functions. Finally, we conclude the paper by advocating the recognition of ISE as a specialized field of study in software engineering.</description>
	<pubDate>2025-07-30</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 18: Intersectional Software Engineering as a Field</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/18">doi: 10.3390/software4030018</a></p>
	<p>Authors:
		Alicia Julia Wilson Takaoka
		Claudia Maria Cutrupi
		Letizia Jaccheri
		</p>
	<p>Intersectionality is a concept used to explain the power dynamics and inequalities that some groups experience owing to the interconnection of social differences such as gender, sexual identity, poverty status, race, geographic location, disability, and education. The relation between software engineering, feminism, and intersectionality has been addressed by some studies thus far, but it has never been codified before. In this paper, we employ the commonly used ABC Framework for empirical software engineering to show the contributions of intersectional software engineering (ISE) as a field of software engineering. In addition, we highlight the power dynamics unique to ISE studies and define gender-forward intersectionality as a way to use gender as a starting point to identify and examine inequalities and discrimination. We show that ISE is a field of study in software engineering that uses gender-forward intersectionality to produce knowledge about power dynamics in software engineering in its specific domains and environments. Employing empirical software engineering research strategies, we explain the importance of recognizing and evaluating ISE through four dimensions of dynamics, which are people, processes, products, and policies. Beginning with a set of 10 seminal papers that enable us to define the initial concepts and the search query, we conduct a systematic mapping study that leads to a dataset of 140 primary papers, of which 15 are chosen as example papers. We apply the principles of ISE to these example papers to show how the field functions. Finally, we conclude the paper by advocating the recognition of ISE as a specialized field of study in software engineering.</p>
	]]></content:encoded>

	<dc:title>Intersectional Software Engineering as a Field</dc:title>
			<dc:creator>Alicia Julia Wilson Takaoka</dc:creator>
			<dc:creator>Claudia Maria Cutrupi</dc:creator>
			<dc:creator>Letizia Jaccheri</dc:creator>
		<dc:identifier>doi: 10.3390/software4030018</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-07-30</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-07-30</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>18</prism:startingPage>
		<prism:doi>10.3390/software4030018</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/18</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/17">

	<title>Software, Vol. 4, Pages 17: Investigating Reproducibility Challenges in LLM Bugfixing on the HumanEvalFix Benchmark</title>
	<link>https://www.mdpi.com/2674-113X/4/3/17</link>
	<description>Benchmark results for large language models often show inconsistencies across different studies. This paper investigates the challenges of reproducing these results in automatic bugfixing using LLMs, on the HumanEvalFix benchmark. To determine the cause of the differing results in the literature, we attempted to reproduce a subset of them by evaluating 12 models in the DeepSeekCoder, CodeGemma, CodeLlama, and WizardCoder model families, in different sizes and tunings. A total of 35 unique results were reported for these models across studies, of which we successfully reproduced 12. We identified several relevant factors that influenced the results. The base models can be confused with their instruction-tuned variants, making their results better than expected. Incorrect prompt templates or generation length can decrease benchmark performance, as well as using 4-bit quantization. Using sampling instead of greedy decoding can increase the variance, especially with higher temperature values. We found that precision and 8-bit quantization have less influence on benchmark results.</description>
	<pubDate>2025-07-14</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 17: Investigating Reproducibility Challenges in LLM Bugfixing on the HumanEvalFix Benchmark</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/17">doi: 10.3390/software4030017</a></p>
	<p>Authors:
		Balázs Szalontai
		Balázs Márton
		Balázs Pintér
		Tibor Gregorics
		</p>
	<p>Benchmark results for large language models often show inconsistencies across different studies. This paper investigates the challenges of reproducing these results in automatic bugfixing using LLMs, on the HumanEvalFix benchmark. To determine the cause of the differing results in the literature, we attempted to reproduce a subset of them by evaluating 12 models in the DeepSeekCoder, CodeGemma, CodeLlama, and WizardCoder model families, in different sizes and tunings. A total of 35 unique results were reported for these models across studies, of which we successfully reproduced 12. We identified several relevant factors that influenced the results. The base models can be confused with their instruction-tuned variants, making their results better than expected. Incorrect prompt templates or generation length can decrease benchmark performance, as well as using 4-bit quantization. Using sampling instead of greedy decoding can increase the variance, especially with higher temperature values. We found that precision and 8-bit quantization have less influence on benchmark results.</p>
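	<p>Benchmarks in the HumanEval family are typically scored with the pass@k estimator, and under sampling the estimate depends on how many generations are drawn, one of the variance sources noted above. The sketch below shows the standard unbiased pass@k formula as background we supply; it is not reproduced from this abstract:</p>

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generated samples of which c are
    correct, passes the tests. pass@1 with greedy decoding reduces to
    a single deterministic sample (n = k = 1).
    """
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# 1 of 2 sampled fixes passes: pass@1 estimate is 0.5.
print(pass_at_k(2, 1, 1))  # 0.5
```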
	]]></content:encoded>

	<dc:title>Investigating Reproducibility Challenges in LLM Bugfixing on the HumanEvalFix Benchmark</dc:title>
			<dc:creator>Balázs Szalontai</dc:creator>
			<dc:creator>Balázs Márton</dc:creator>
			<dc:creator>Balázs Pintér</dc:creator>
			<dc:creator>Tibor Gregorics</dc:creator>
		<dc:identifier>doi: 10.3390/software4030017</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-07-14</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-07-14</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>17</prism:startingPage>
		<prism:doi>10.3390/software4030017</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/17</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/16">

	<title>Software, Vol. 4, Pages 16: New Editor-in-Chief of Software</title>
	<link>https://www.mdpi.com/2674-113X/4/3/16</link>
	<description>I would like to introduce myself as the new Editor-in-Chief of Software [...]</description>
	<pubDate>2025-07-10</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 16: New Editor-in-Chief of Software</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/16">doi: 10.3390/software4030016</a></p>
	<p>Authors:
		Mirko Viroli
		</p>
	<p>I would like to introduce myself as the new Editor-in-Chief of Software [...]</p>
	]]></content:encoded>

	<dc:title>New Editor-in-Chief of Software</dc:title>
			<dc:creator>Mirko Viroli</dc:creator>
		<dc:identifier>doi: 10.3390/software4030016</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-07-10</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-07-10</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Editorial</prism:section>
	<prism:startingPage>16</prism:startingPage>
		<prism:doi>10.3390/software4030016</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/16</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/15">

	<title>Software, Vol. 4, Pages 15: Analysing Concurrent Queues Using CSP: Examining Java&amp;rsquo;s ConcurrentLinkedQueue</title>
	<link>https://www.mdpi.com/2674-113X/4/3/15</link>
	<description>In this paper, we examine the OpenJDK library implementation of the ConcurrentLinkedQueue. We use model checking to verify that it behaves according to the algorithm it is based on: Michael and Scott&amp;rsquo;s fast and practical non-blocking concurrent queue algorithm. In addition, we develop a simple concurrent queue specification in CSP and verify that Michael and Scott&amp;rsquo;s algorithm satisfies it. We conclude that both the algorithm and the implementation are correct and both conform to our simpler concurrent queue specification, which we can use in place of either implementation in future verification tasks. The complete code is available on GitHub.</description>
	<pubDate>2025-07-07</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 15: Analysing Concurrent Queues Using CSP: Examining Java&rsquo;s ConcurrentLinkedQueue</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/15">doi: 10.3390/software4030015</a></p>
	<p>Authors:
		Kevin Chalmers
		Jan Bækgaard Pedersen
		</p>
	<p>In this paper, we examine the OpenJDK library implementation of the ConcurrentLinkedQueue. We use model checking to verify that it behaves according to the algorithm it is based on: Michael and Scott&rsquo;s fast and practical non-blocking concurrent queue algorithm. In addition, we develop a simple concurrent queue specification in CSP and verify that Michael and Scott&rsquo;s algorithm satisfies it. We conclude that both the algorithm and the implementation are correct and both conform to our simpler concurrent queue specification, which we can use in place of either implementation in future verification tasks. The complete code is available on GitHub.</p>
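	<p>As a minimal illustration of the library under study, the sketch below exercises java.util.concurrent.ConcurrentLinkedQueue, the OpenJDK implementation of Michael and Scott's lock-free FIFO algorithm. The drain helper and the sample values are illustrative additions, not taken from the paper.</p>

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    // Drain the queue from the head; poll() returns null once it is empty.
    static String drain(Queue<Integer> q) {
        StringBuilder sb = new StringBuilder();
        for (Integer x = q.poll(); x != null; x = q.poll()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(x);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // ConcurrentLinkedQueue follows Michael and Scott's non-blocking
        // design: offer() CAS-appends at the tail and poll() CAS-advances
        // the head, so neither operation takes a lock.
        Queue<Integer> q = new ConcurrentLinkedQueue<>();
        q.offer(1);
        q.offer(2);
        q.offer(3);
        System.out.println(drain(q)); // FIFO order: 1 2 3
    }
}
```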
	]]></content:encoded>

	<dc:title>Analysing Concurrent Queues Using CSP: Examining Java&amp;rsquo;s ConcurrentLinkedQueue</dc:title>
			<dc:creator>Kevin Chalmers</dc:creator>
			<dc:creator>Jan Bækgaard Pedersen</dc:creator>
		<dc:identifier>doi: 10.3390/software4030015</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-07-07</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-07-07</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>15</prism:startingPage>
		<prism:doi>10.3390/software4030015</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/15</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/3/14">

	<title>Software, Vol. 4, Pages 14: Machine Learning Techniques for Requirements Engineering: A Comprehensive Literature Review</title>
	<link>https://www.mdpi.com/2674-113X/4/3/14</link>
	<description>Software requirements engineering is one of the most critical and time-consuming phases of the software-development process. The lack of communication with stakeholders and the use of natural language for communicating lead to misunderstanding and misidentification of requirements or the creation of ambiguous requirements, which can jeopardize all subsequent steps in the software-development process and can compromise the quality of the final software product. Natural Language Processing (NLP) is an old area of research; however, it is currently being strongly and positively impacted by recent advances in the area of Machine Learning (ML), namely the emergence of Deep Learning and, more recently, the so-called transformer models such as BERT and GPT. Software requirements engineering is also being strongly affected by the entire evolution of ML and other areas of Artificial Intelligence (AI). In this article, we conduct a systematic review on how AI, ML and NLP are being used in the various stages of requirements engineering, including requirements elicitation, specification, classification, prioritization, requirements management, requirements traceability, etc. Furthermore, we identify which algorithms are most used in each of these stages, uncover challenges and open problems and suggest future research directions.</description>
	<pubDate>2025-06-28</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 14: Machine Learning Techniques for Requirements Engineering: A Comprehensive Literature Review</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/3/14">doi: 10.3390/software4030014</a></p>
	<p>Authors:
		António Miguel Rosado da Cruz
		Estrela Ferreira Cruz
		</p>
	<p>Software requirements engineering is one of the most critical and time-consuming phases of the software-development process. The lack of communication with stakeholders and the use of natural language for communicating lead to misunderstanding and misidentification of requirements or the creation of ambiguous requirements, which can jeopardize all subsequent steps in the software-development process and can compromise the quality of the final software product. Natural Language Processing (NLP) is an old area of research; however, it is currently being strongly and positively impacted by recent advances in the area of Machine Learning (ML), namely the emergence of Deep Learning and, more recently, the so-called transformer models such as BERT and GPT. Software requirements engineering is also being strongly affected by the entire evolution of ML and other areas of Artificial Intelligence (AI). In this article, we conduct a systematic review on how AI, ML and NLP are being used in the various stages of requirements engineering, including requirements elicitation, specification, classification, prioritization, requirements management, requirements traceability, etc. Furthermore, we identify which algorithms are most used in each of these stages, uncover challenges and open problems and suggest future research directions.</p>
	]]></content:encoded>

	<dc:title>Machine Learning Techniques for Requirements Engineering: A Comprehensive Literature Review</dc:title>
			<dc:creator>António Miguel Rosado da Cruz</dc:creator>
			<dc:creator>Estrela Ferreira Cruz</dc:creator>
		<dc:identifier>doi: 10.3390/software4030014</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-06-28</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-06-28</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>14</prism:startingPage>
		<prism:doi>10.3390/software4030014</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/3/14</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/13">

	<title>Software, Vol. 4, Pages 13: Characterizing Agile Software Development: Insights from a Data-Driven Approach Using Large-Scale Public Repositories</title>
	<link>https://www.mdpi.com/2674-113X/4/2/13</link>
	<description>This study investigates the prevalence and impact of Agile practices by leveraging metadata from thousands of public GitHub repositories through a novel data-driven methodology. To facilitate this analysis, we developed the AgileScore index, a metric designed to identify and evaluate patterns, characteristics, performance and community engagement in Agile-oriented projects. This approach enables comprehensive, large-scale comparisons between Agile methodologies and traditional development practices within digital environments. Our findings reveal a significant annual growth of 16% in the adoption of Agile practices and validate the AgileScore index as a systematic tool for assessing Agile methodologies across diverse development contexts. Furthermore, this study introduces innovative analytical tools for researchers in software project management, software engineering and related fields, providing a foundation for future work in areas such as cost estimation and hybrid project management. These insights contribute to a deeper understanding of Agile&amp;rsquo;s role in fostering collaboration and adaptability in dynamic digital ecosystems.</description>
	<pubDate>2025-05-24</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 13: Characterizing Agile Software Development: Insights from a Data-Driven Approach Using Large-Scale Public Repositories</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/13">doi: 10.3390/software4020013</a></p>
	<p>Authors:
		Carlos Moreno Martínez
		Jesús Gallego Carracedo
		Jaime Sánchez Gallego
		</p>
	<p>This study investigates the prevalence and impact of Agile practices by leveraging metadata from thousands of public GitHub repositories through a novel data-driven methodology. To facilitate this analysis, we developed the AgileScore index, a metric designed to identify and evaluate patterns, characteristics, performance and community engagement in Agile-oriented projects. This approach enables comprehensive, large-scale comparisons between Agile methodologies and traditional development practices within digital environments. Our findings reveal a significant annual growth of 16% in the adoption of Agile practices and validate the AgileScore index as a systematic tool for assessing Agile methodologies across diverse development contexts. Furthermore, this study introduces innovative analytical tools for researchers in software project management, software engineering and related fields, providing a foundation for future work in areas such as cost estimation and hybrid project management. These insights contribute to a deeper understanding of Agile&rsquo;s role in fostering collaboration and adaptability in dynamic digital ecosystems.</p>
	]]></content:encoded>

	<dc:title>Characterizing Agile Software Development: Insights from a Data-Driven Approach Using Large-Scale Public Repositories</dc:title>
			<dc:creator>Carlos Moreno Martínez</dc:creator>
			<dc:creator>Jesús Gallego Carracedo</dc:creator>
			<dc:creator>Jaime Sánchez Gallego</dc:creator>
		<dc:identifier>doi: 10.3390/software4020013</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-05-24</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-05-24</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>13</prism:startingPage>
		<prism:doi>10.3390/software4020013</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/13</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/12">

	<title>Software, Vol. 4, Pages 12: AI Testing for Intelligent Chatbots&amp;mdash;A Case Study</title>
	<link>https://www.mdpi.com/2674-113X/4/2/12</link>
	<description>The decision tree test method works as a flowchart structure for conversational flow. It has predetermined questions and answers that guide the user through specific tasks. Inspired by principles of the decision tree test method in software engineering, this paper discusses intelligent AI test modeling chat systems, including basic concepts, quality validation, test generation and augmentation, testing scopes, approaches, and needs. The paper&amp;rsquo;s novelty lies in an intelligent AI test modeling chatbot system built and implemented based on an innovative 3-dimensional AI test model for AI-powered functions in intelligent mobile apps to support model-based AI function testing, test data generation, and adequate test coverage result analysis. As a result, a case study is provided using Wysa, a mental health and emotional intelligence chatbot system that helps track and analyze mood and supports sentiment analysis.</description>
	<pubDate>2025-05-15</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 12: AI Testing for Intelligent Chatbots&mdash;A Case Study</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/12">doi: 10.3390/software4020012</a></p>
	<p>Authors:
		Jerry Gao
		Radhika Agarwal
		Prerna Garsole
		</p>
	<p>The decision tree test method works as a flowchart structure for conversational flow. It has predetermined questions and answers that guide the user through specific tasks. Inspired by principles of the decision tree test method in software engineering, this paper discusses intelligent AI test modeling chat systems, including basic concepts, quality validation, test generation and augmentation, testing scopes, approaches, and needs. The paper&rsquo;s novelty lies in an intelligent AI test modeling chatbot system built and implemented based on an innovative 3-dimensional AI test model for AI-powered functions in intelligent mobile apps to support model-based AI function testing, test data generation, and adequate test coverage result analysis. As a result, a case study is provided using Wysa, a mental health and emotional intelligence chatbot system that helps track and analyze mood and supports sentiment analysis.</p>
	]]></content:encoded>

	<dc:title>AI Testing for Intelligent Chatbots&amp;mdash;A Case Study</dc:title>
			<dc:creator>Jerry Gao</dc:creator>
			<dc:creator>Radhika Agarwal</dc:creator>
			<dc:creator>Prerna Garsole</dc:creator>
		<dc:identifier>doi: 10.3390/software4020012</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-05-15</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-05-15</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>12</prism:startingPage>
		<prism:doi>10.3390/software4020012</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/12</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/11">

	<title>Software, Vol. 4, Pages 11: Improving the Fast Fourier Transform for Space and Edge Computing Applications with an Efficient In-Place Method</title>
	<link>https://www.mdpi.com/2674-113X/4/2/11</link>
	<description>Satellite and edge computing designers develop algorithms that restrict resource utilization and execution time. Among these design efforts, optimizing the Fast Fourier Transform (FFT), key to many tasks, has led mainly to in-place FFT-specific hardware accelerators. Aiming at improving the FFT performance on processors and computing devices with limited resources, the current paper enhances the efficiency of the radix-2 FFT by exploring the benefits of an in-place technique. First, we present the advantages of organizing the single memory bank of processors to store two (2) FFT elements in each memory address and provide parallel load and store of each FFT pair of data. Second, we optimize the floating point (FP) and block floating point (BFP) configurations to improve the FFT Signal-to-Noise (SNR) performance and the resource utilization. The resulting techniques reduce the memory requirements by a factor of two and significantly improve the time performance for the overall prevailing BFP representation. The execution of inputs ranging from 1K to 16K FFT points, using 8-bit or 16-bit FP or BFP numbers, on the space-proven Atmel AVR32 and Vision Processing Unit (VPU) Intel Movidius Myriad 2, the edge device Raspberry Pi Zero 2W and a low-cost accelerator on Xilinx Zynq 7000 Field Programmable Gate Array (FPGA), validates the method&amp;rsquo;s performance improvement.</description>
	<pubDate>2025-05-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 11: Improving the Fast Fourier Transform for Space and Edge Computing Applications with an Efficient In-Place Method</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/11">doi: 10.3390/software4020011</a></p>
	<p>Authors:
		Christoforos Vasilakis
		Alexandros Tsagkaropoulos
		Ioannis Koutoulas
		Dionysios Reisis
		</p>
	<p>Satellite and edge computing designers develop algorithms that restrict resource utilization and execution time. Among these design efforts, optimizing the Fast Fourier Transform (FFT), key to many tasks, has led mainly to in-place FFT-specific hardware accelerators. Aiming at improving the FFT performance on processors and computing devices with limited resources, the current paper enhances the efficiency of the radix-2 FFT by exploring the benefits of an in-place technique. First, we present the advantages of organizing the single memory bank of processors to store two (2) FFT elements in each memory address and provide parallel load and store of each FFT pair of data. Second, we optimize the floating point (FP) and block floating point (BFP) configurations to improve the FFT Signal-to-Noise (SNR) performance and the resource utilization. The resulting techniques reduce the memory requirements by a factor of two and significantly improve the time performance for the overall prevailing BFP representation. The execution of inputs ranging from 1K to 16K FFT points, using 8-bit or 16-bit FP or BFP numbers, on the space-proven Atmel AVR32 and Vision Processing Unit (VPU) Intel Movidius Myriad 2, the edge device Raspberry Pi Zero 2W and a low-cost accelerator on Xilinx Zynq 7000 Field Programmable Gate Array (FPGA), validates the method&rsquo;s performance improvement.</p>
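	<p>A minimal sketch of the pairing idea summarized above, assuming 16-bit elements packed two per 32-bit word so that one memory access moves a whole FFT pair. The method names pack/lo/hi are illustrative, not from the paper.</p>

```java
public class PackedPair {
    // Pack two 16-bit elements into one 32-bit word: the low halfword
    // holds the first element, the high halfword the second.
    static int pack(short lo, short hi) {
        return (hi << 16) | (lo & 0xFFFF);
    }

    // Recover the low 16-bit element (sign-extended by the short cast).
    static short lo(int word) { return (short) word; }

    // Recover the high 16-bit element.
    static short hi(int word) { return (short) (word >>> 16); }

    public static void main(String[] args) {
        int word = pack((short) 123, (short) -456);
        System.out.println(lo(word)); // 123
        System.out.println(hi(word)); // -456
    }
}
```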
	]]></content:encoded>

	<dc:title>Improving the Fast Fourier Transform for Space and Edge Computing Applications with an Efficient In-Place Method</dc:title>
			<dc:creator>Christoforos Vasilakis</dc:creator>
			<dc:creator>Alexandros Tsagkaropoulos</dc:creator>
			<dc:creator>Ioannis Koutoulas</dc:creator>
			<dc:creator>Dionysios Reisis</dc:creator>
		<dc:identifier>doi: 10.3390/software4020011</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-05-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-05-12</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>11</prism:startingPage>
		<prism:doi>10.3390/software4020011</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/11</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/10">

	<title>Software, Vol. 4, Pages 10: Enhancing DevOps Practices in the IoT&amp;ndash;Edge&amp;ndash;Cloud Continuum: Architecture, Integration, and Software Orchestration Demonstrated in the COGNIFOG Framework</title>
	<link>https://www.mdpi.com/2674-113X/4/2/10</link>
	<description>This paper presents COGNIFOG, an innovative framework under development that is designed to leverage decentralized decision-making, machine learning, and distributed computing to enable autonomous operation, adaptability, and scalability across the IoT&amp;ndash;edge&amp;ndash;cloud continuum. The work emphasizes Continuous Integration/Continuous Deployment (CI/CD) practices, development, and versatile integration infrastructures. The described methodology ensures efficient, reliable, and seamless integration of the framework, offering valuable insights into integration design, data flow, and the incorporation of cutting-edge technologies. Through three real-world trials in smart cities, e-health, and smart manufacturing and the development of a comprehensive QuickStart Guide for deployment, this work highlights the efficiency and adaptability of the COGNIFOG platform, presenting a robust solution for addressing the complexities of next-generation computing environments.</description>
	<pubDate>2025-04-15</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 10: Enhancing DevOps Practices in the IoT&ndash;Edge&ndash;Cloud Continuum: Architecture, Integration, and Software Orchestration Demonstrated in the COGNIFOG Framework</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/10">doi: 10.3390/software4020010</a></p>
	<p>Authors:
		Kostas Petrakis
		Evangelos Agorogiannis
		Grigorios Antonopoulos
		Themistoklis Anagnostopoulos
		Nasos Grigoropoulos
		Eleni Veroni
		Alexandre Berne
		Selma Azaiez
		Zakaria Benomar
		Harry Kakoulidis
		Marios Prasinos
		Philippos Sotiriades
		Panagiotis Mavrothalassitis
		Kosmas Alexopoulos
		</p>
	<p>This paper presents COGNIFOG, an innovative framework under development that is designed to leverage decentralized decision-making, machine learning, and distributed computing to enable autonomous operation, adaptability, and scalability across the IoT&ndash;edge&ndash;cloud continuum. The work emphasizes Continuous Integration/Continuous Deployment (CI/CD) practices, development, and versatile integration infrastructures. The described methodology ensures efficient, reliable, and seamless integration of the framework, offering valuable insights into integration design, data flow, and the incorporation of cutting-edge technologies. Through three real-world trials in smart cities, e-health, and smart manufacturing and the development of a comprehensive QuickStart Guide for deployment, this work highlights the efficiency and adaptability of the COGNIFOG platform, presenting a robust solution for addressing the complexities of next-generation computing environments.</p>
	]]></content:encoded>

	<dc:title>Enhancing DevOps Practices in the IoT&amp;ndash;Edge&amp;ndash;Cloud Continuum: Architecture, Integration, and Software Orchestration Demonstrated in the COGNIFOG Framework</dc:title>
			<dc:creator>Kostas Petrakis</dc:creator>
			<dc:creator>Evangelos Agorogiannis</dc:creator>
			<dc:creator>Grigorios Antonopoulos</dc:creator>
			<dc:creator>Themistoklis Anagnostopoulos</dc:creator>
			<dc:creator>Nasos Grigoropoulos</dc:creator>
			<dc:creator>Eleni Veroni</dc:creator>
			<dc:creator>Alexandre Berne</dc:creator>
			<dc:creator>Selma Azaiez</dc:creator>
			<dc:creator>Zakaria Benomar</dc:creator>
			<dc:creator>Harry Kakoulidis</dc:creator>
			<dc:creator>Marios Prasinos</dc:creator>
			<dc:creator>Philippos Sotiriades</dc:creator>
			<dc:creator>Panagiotis Mavrothalassitis</dc:creator>
			<dc:creator>Kosmas Alexopoulos</dc:creator>
		<dc:identifier>doi: 10.3390/software4020010</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-04-15</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-04-15</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>10</prism:startingPage>
		<prism:doi>10.3390/software4020010</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/10</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/9">

	<title>Software, Vol. 4, Pages 9: Regression Testing in Agile&amp;mdash;A Systematic Mapping Study</title>
	<link>https://www.mdpi.com/2674-113X/4/2/9</link>
	<description>Background: Regression testing is critical in agile software development, as it ensures that frequent changes do not introduce defects into previously working functionalities. While agile methodologies emphasize rapid iterations and value delivery, regression testing research has predominantly focused on optimizing technical efficiency rather than aligning with agile principles. Aim: This study aims to systematically map research trends and gaps in regression testing within agile environments, identifying areas that require further exploration to enhance alignment with agile practices and value-driven outcomes. Method: A systematic mapping study analyzed 35 primary studies. The research categorized studies based on their focus areas, evaluation metrics, agile frameworks, and methodologies, providing a comprehensive overview of the field. Results: The findings strongly emphasize test prioritization and selection, reflecting the need for optimized fault detection and execution efficiency in agile workflows. However, areas such as test generation, test minimization, and cost analysis are under-explored. Current evaluation metrics primarily address technical outcomes, neglecting agile-specific aspects like defect severity&amp;rsquo;s business impact and iterative workflows. Additionally, the research highlights the dominance of continuous integration frameworks, with limited attention to other agile practices like Scrum and a lack of datasets capturing agile-specific attributes such as testing costs and user story importance. Conclusions: This study underscores the need for research to expand beyond existing focus areas, exploring diverse testing techniques and developing agile-centric metrics and datasets. By addressing these gaps, future work can enhance the applicability of regression testing strategies and align them more closely with agile development principles.</description>
	<pubDate>2025-04-14</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 9: Regression Testing in Agile&mdash;A Systematic Mapping Study</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/9">doi: 10.3390/software4020009</a></p>
	<p>Authors:
		Suddhasvatta Das
		Kevin Gary
		</p>
	<p>Background: Regression testing is critical in agile software development, as it ensures that frequent changes do not introduce defects into previously working functionalities. While agile methodologies emphasize rapid iterations and value delivery, regression testing research has predominantly focused on optimizing technical efficiency rather than aligning with agile principles. Aim: This study aims to systematically map research trends and gaps in regression testing within agile environments, identifying areas that require further exploration to enhance alignment with agile practices and value-driven outcomes. Method: A systematic mapping study analyzed 35 primary studies. The research categorized studies based on their focus areas, evaluation metrics, agile frameworks, and methodologies, providing a comprehensive overview of the field. Results: The findings strongly emphasize test prioritization and selection, reflecting the need for optimized fault detection and execution efficiency in agile workflows. However, areas such as test generation, test minimization, and cost analysis are under-explored. Current evaluation metrics primarily address technical outcomes, neglecting agile-specific aspects like defect severity&rsquo;s business impact and iterative workflows. Additionally, the research highlights the dominance of continuous integration frameworks, with limited attention to other agile practices like Scrum and a lack of datasets capturing agile-specific attributes such as testing costs and user story importance. Conclusions: This study underscores the need for research to expand beyond existing focus areas, exploring diverse testing techniques and developing agile-centric metrics and datasets. By addressing these gaps, future work can enhance the applicability of regression testing strategies and align them more closely with agile development principles.</p>
	]]></content:encoded>

	<dc:title>Regression Testing in Agile&amp;mdash;A Systematic Mapping Study</dc:title>
			<dc:creator>Suddhasvatta Das</dc:creator>
			<dc:creator>Kevin Gary</dc:creator>
		<dc:identifier>doi: 10.3390/software4020009</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-04-14</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-04-14</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>9</prism:startingPage>
		<prism:doi>10.3390/software4020009</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/9</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/8">

	<title>Software, Vol. 4, Pages 8: Uplifting Moods: Augmented Reality-Based Gamified Mood Intervention App with Attention Bias Modification</title>
	<link>https://www.mdpi.com/2674-113X/4/2/8</link>
	<description>Attention Bias Modification (ABM) is a cost-effective mood intervention that has the potential to be used in daily settings beyond clinical environments. However, its interactivity and user engagement are known to be limited and underexplored. Here, we propose Uplifting Moods, a novel mood intervention app that combines gamified ABM and augmented reality (AR) to address the limitation associated with the repetitive nature of ABM. By harnessing the benefits of mobile AR&amp;rsquo;s low-cost, portable, and accessible characteristics, this approach aims to help users easily take part in ABM, positively shifting one&amp;rsquo;s emotions. We conducted a mixed methods study with 24 participants, which involved a controlled experiment with the Self-Assessment Manikin as its primary measure and a semi-structured interview. Our analysis reports that the approach uniquely adds fun, exploring, and challenging features, helping improve engagement and feeling more cheerful and less under control. It also highlights the importance of personalization and consideration of gaming style, music preference, and socialization in designing a daily AR ABM game as an effective mental wellbeing intervention.</description>
	<pubDate>2025-04-01</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 8: Uplifting Moods: Augmented Reality-Based Gamified Mood Intervention App with Attention Bias Modification</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/8">doi: 10.3390/software4020008</a></p>
	<p>Authors:
		Yun Jung Yeh
		Sarah S. Jo
		Youngjun Cho
		</p>
	<p>Attention Bias Modification (ABM) is a cost-effective mood intervention that has the potential to be used in daily settings beyond clinical environments. However, its interactivity and user engagement are known to be limited and underexplored. Here, we propose Uplifting Moods, a novel mood intervention app that combines gamified ABM and augmented reality (AR) to address the limitation associated with the repetitive nature of ABM. By harnessing the benefits of mobile AR&rsquo;s low-cost, portable, and accessible characteristics, this approach aims to help users easily take part in ABM, positively shifting one&rsquo;s emotions. We conducted a mixed methods study with 24 participants, which involved a controlled experiment with the Self-Assessment Manikin as its primary measure and a semi-structured interview. Our analysis reports that the approach uniquely adds fun, exploring, and challenging features, helping improve engagement and feeling more cheerful and less under control. It also highlights the importance of personalization and consideration of gaming style, music preference, and socialization in designing a daily AR ABM game as an effective mental wellbeing intervention.</p>
	]]></content:encoded>

	<dc:title>Uplifting Moods: Augmented Reality-Based Gamified Mood Intervention App with Attention Bias Modification</dc:title>
			<dc:creator>Yun Jung Yeh</dc:creator>
			<dc:creator>Sarah S. Jo</dc:creator>
			<dc:creator>Youngjun Cho</dc:creator>
		<dc:identifier>doi: 10.3390/software4020008</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-04-01</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-04-01</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>8</prism:startingPage>
		<prism:doi>10.3390/software4020008</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/8</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/2/7">

	<title>Software, Vol. 4, Pages 7: Empirical Analysis of Data Sampling-Based Decision Forest Classifiers for Software Defect Prediction</title>
	<link>https://www.mdpi.com/2674-113X/4/2/7</link>
	<description>The strategic significance of software testing in ensuring the success of software development projects is paramount. Comprehensive testing, conducted early and consistently across the development lifecycle, is vital for mitigating defects, especially given the constraints on time, budget, and other resources often faced by development teams. Software defect prediction (SDP) serves as a proactive approach to identifying software components that are most likely to be defective. By predicting these high-risk modules, teams can prioritize thorough testing and inspection, thereby preventing defects from escalating to later stages where resolution becomes more resource-intensive. SDP models must be continuously refined to improve predictive accuracy and performance. This involves integrating clean and preprocessed datasets, leveraging advanced machine learning (ML) methods, and optimizing key metrics. Statistical-based and traditional ML approaches have been widely explored for SDP. However, statistical-based models often struggle with scalability and robustness, while conventional ML models face challenges with imbalanced datasets, limiting their prediction efficacy. In this study, innovative decision forest (DF) models were developed to address these limitations. Specifically, this study evaluates the cost-sensitive forest (CS-Forest), forest penalizing attributes (FPA), and functional trees (FT) as DF models. These models were further enhanced using homogeneous ensemble techniques, such as bagging and boosting. The experimental analysis on benchmark SDP datasets demonstrates that the proposed DF models effectively handle class imbalance, accurately distinguishing between defective and non-defective modules. Compared to baseline and state-of-the-art ML and deep learning (DL) methods, the suggested DF models exhibit superior prediction performance and offer scalable solutions for SDP. Consequently, the application of DF-based models is recommended for advancing defect prediction in software engineering and similar ML domains.</description>
	<pubDate>2025-03-21</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 7: Empirical Analysis of Data Sampling-Based Decision Forest Classifiers for Software Defect Prediction</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/2/7">doi: 10.3390/software4020007</a></p>
	<p>Authors:
		Fatima Enehezei Usman-Hamza
		Abdullateef Oluwagbemiga Balogun
		Hussaini Mamman
		Luiz Fernando Capretz
		Shuib Basri
		Rafiat Ajibade Oyekunle
		Hammed Adeleye Mojeed
		Abimbola Ganiyat Akintola
		</p>
	<p>The strategic significance of software testing in ensuring the success of software development projects is paramount. Comprehensive testing, conducted early and consistently across the development lifecycle, is vital for mitigating defects, especially given the constraints on time, budget, and other resources often faced by development teams. Software defect prediction (SDP) serves as a proactive approach to identifying software components that are most likely to be defective. By predicting these high-risk modules, teams can prioritize thorough testing and inspection, thereby preventing defects from escalating to later stages where resolution becomes more resource-intensive. SDP models must be continuously refined to improve predictive accuracy and performance. This involves integrating clean and preprocessed datasets, leveraging advanced machine learning (ML) methods, and optimizing key metrics. Statistical-based and traditional ML approaches have been widely explored for SDP. However, statistical-based models often struggle with scalability and robustness, while conventional ML models face challenges with imbalanced datasets, limiting their prediction efficacy. In this study, innovative decision forest (DF) models were developed to address these limitations. Specifically, this study evaluates the cost-sensitive forest (CS-Forest), forest penalizing attributes (FPA), and functional trees (FT) as DF models. These models were further enhanced using homogeneous ensemble techniques, such as bagging and boosting. The experimental analysis on benchmark SDP datasets demonstrates that the proposed DF models effectively handle class imbalance, accurately distinguishing between defective and non-defective modules. Compared to baseline and state-of-the-art ML and deep learning (DL) methods, the suggested DF models exhibit superior prediction performance and offer scalable solutions for SDP. Consequently, the application of DF-based models is recommended for advancing defect prediction in software engineering and similar ML domains.</p>
	]]></content:encoded>

	<dc:title>Empirical Analysis of Data Sampling-Based Decision Forest Classifiers for Software Defect Prediction</dc:title>
			<dc:creator>Fatima Enehezei Usman-Hamza</dc:creator>
			<dc:creator>Abdullateef Oluwagbemiga Balogun</dc:creator>
			<dc:creator>Hussaini Mamman</dc:creator>
			<dc:creator>Luiz Fernando Capretz</dc:creator>
			<dc:creator>Shuib Basri</dc:creator>
			<dc:creator>Rafiat Ajibade Oyekunle</dc:creator>
			<dc:creator>Hammed Adeleye Mojeed</dc:creator>
			<dc:creator>Abimbola Ganiyat Akintola</dc:creator>
		<dc:identifier>doi: 10.3390/software4020007</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-03-21</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-03-21</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>7</prism:startingPage>
		<prism:doi>10.3390/software4020007</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/2/7</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/1/6">

	<title>Software, Vol. 4, Pages 6: Designing Microservices Using AI: A Systematic Literature Review</title>
	<link>https://www.mdpi.com/2674-113X/4/1/6</link>
	<description>Microservices architecture has emerged as a dominant approach for developing scalable and modular software systems, driven by the need for agility and independent deployability. However, designing these architectures poses significant challenges, particularly in service decomposition, inter-service communication, and maintaining data consistency. To address these issues, artificial intelligence (AI) techniques, such as machine learning (ML) and natural language processing (NLP), have been applied with increasing frequency to automate and enhance the design process. This systematic literature review examines the application of AI in microservices design, focusing on AI-driven tools and methods for improving service decomposition, decision-making, and architectural validation. This review analyzes research studies published between 2018 and 2024 that specifically focus on the application of AI techniques in microservices design, identifying key AI methods used, challenges encountered in integrating AI into microservices, and the emerging trends in this research area. The findings reveal that AI has effectively been used to optimize performance, automate design tasks, and mitigate some of the complexities inherent in microservices architectures. However, gaps remain in areas such as distributed transactions and security. The study concludes that while AI offers promising solutions, further empirical research is needed to refine AI&amp;rsquo;s role in microservices design and address the remaining challenges.</description>
	<pubDate>2025-03-19</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 6: Designing Microservices Using AI: A Systematic Literature Review</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/1/6">doi: 10.3390/software4010006</a></p>
	<p>Authors:
		Daniel Narváez
		Nicolas Battaglia
		Alejandro Fernández
		Gustavo Rossi
		</p>
	<p>Microservices architecture has emerged as a dominant approach for developing scalable and modular software systems, driven by the need for agility and independent deployability. However, designing these architectures poses significant challenges, particularly in service decomposition, inter-service communication, and maintaining data consistency. To address these issues, artificial intelligence (AI) techniques, such as machine learning (ML) and natural language processing (NLP), have been applied with increasing frequency to automate and enhance the design process. This systematic literature review examines the application of AI in microservices design, focusing on AI-driven tools and methods for improving service decomposition, decision-making, and architectural validation. This review analyzes research studies published between 2018 and 2024 that specifically focus on the application of AI techniques in microservices design, identifying key AI methods used, challenges encountered in integrating AI into microservices, and the emerging trends in this research area. The findings reveal that AI has effectively been used to optimize performance, automate design tasks, and mitigate some of the complexities inherent in microservices architectures. However, gaps remain in areas such as distributed transactions and security. The study concludes that while AI offers promising solutions, further empirical research is needed to refine AI&rsquo;s role in microservices design and address the remaining challenges.</p>
	]]></content:encoded>

	<dc:title>Designing Microservices Using AI: A Systematic Literature Review</dc:title>
			<dc:creator>Daniel Narváez</dc:creator>
			<dc:creator>Nicolas Battaglia</dc:creator>
			<dc:creator>Alejandro Fernández</dc:creator>
			<dc:creator>Gustavo Rossi</dc:creator>
		<dc:identifier>doi: 10.3390/software4010006</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-03-19</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-03-19</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>6</prism:startingPage>
		<prism:doi>10.3390/software4010006</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/1/6</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/1/5">

	<title>Software, Vol. 4, Pages 5: A Systematic Approach for Assessing Large Language Models&amp;rsquo; Test Case Generation Capability</title>
	<link>https://www.mdpi.com/2674-113X/4/1/5</link>
	<description>Software testing ensures the quality and reliability of software products, but manual test case creation is labor-intensive. With the rise of Large Language Models (LLMs), there is growing interest in unit test creation with LLMs. However, effective assessment of LLM-generated test cases is limited by the lack of standardized benchmarks that comprehensively cover diverse programming scenarios. To address the assessment of an LLM&amp;rsquo;s test case generation ability and the lack of an evaluation dataset, we propose the Generated Benchmark from Control-Flow Structure and Variable Usage Composition (GBCV) approach, which systematically generates programs used for evaluating LLMs&amp;rsquo; test generation capabilities. By leveraging basic control-flow structures and variable usage, GBCV provides a flexible framework to create a spectrum of programs ranging from simple to complex. Because GPT-4o and GPT-3.5-Turbo are publicly accessible models that reflect real-world regular users&amp;rsquo; use cases, we use GBCV to assess their performance. Our findings indicate that GPT-4o performs better on composite program structures, while all models effectively detect boundary values in simple conditions but face challenges with arithmetic computations. This study highlights the strengths and limitations of LLMs in test generation, provides a benchmark framework, and suggests directions for future improvement.</description>
	<pubDate>2025-03-10</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 5: A Systematic Approach for Assessing Large Language Models&amp;rsquo; Test Case Generation Capability</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/1/5">doi: 10.3390/software4010005</a></p>
	<p>Authors:
		Hung-Fu Chang
		Mohammad Shokrolah Shirazi
		</p>
	<p>Software testing ensures the quality and reliability of software products, but manual test case creation is labor-intensive. With the rise of Large Language Models (LLMs), there is growing interest in unit test creation with LLMs. However, effective assessment of LLM-generated test cases is limited by the lack of standardized benchmarks that comprehensively cover diverse programming scenarios. To address the assessment of an LLM&rsquo;s test case generation ability and the lack of an evaluation dataset, we propose the Generated Benchmark from Control-Flow Structure and Variable Usage Composition (GBCV) approach, which systematically generates programs used for evaluating LLMs&rsquo; test generation capabilities. By leveraging basic control-flow structures and variable usage, GBCV provides a flexible framework to create a spectrum of programs ranging from simple to complex. Because GPT-4o and GPT-3.5-Turbo are publicly accessible models that reflect real-world regular users&rsquo; use cases, we use GBCV to assess their performance. Our findings indicate that GPT-4o performs better on composite program structures, while all models effectively detect boundary values in simple conditions but face challenges with arithmetic computations. This study highlights the strengths and limitations of LLMs in test generation, provides a benchmark framework, and suggests directions for future improvement.</p>
	]]></content:encoded>

	<dc:title>A Systematic Approach for Assessing Large Language Models&amp;rsquo; Test Case Generation Capability</dc:title>
			<dc:creator>Hung-Fu Chang</dc:creator>
			<dc:creator>Mohammad Shokrolah Shirazi</dc:creator>
		<dc:identifier>doi: 10.3390/software4010005</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-03-10</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-03-10</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>5</prism:startingPage>
		<prism:doi>10.3390/software4010005</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/1/5</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/1/4">

	<title>Software, Vol. 4, Pages 4: On the Execution and Runtime Verification of UML Activity Diagrams</title>
	<link>https://www.mdpi.com/2674-113X/4/1/4</link>
	<description>The unified modelling language (UML) is an industrial de facto standard for system modelling. It consists of a set of graphical notations (also known as diagrams) and has been used widely in many industrial applications. Although the graphical nature of UML is appealing to system developers, the official documentation of UML does not provide formal semantics for UML diagrams. This makes UML unsuitable for formal verification and, therefore, limited when it comes to the development of safety/security-critical systems where faults can cause damage to people, properties, or the environment. The UML activity diagram is an important UML graphical notation, which is effective in modelling the dynamic aspects of a system. This paper proposes a formal semantics for UML activity diagrams based on the calculus of context-aware ambients (CCA). An algorithm (semantic function) is proposed that maps any activity diagram onto a process in CCA, which describes the behaviours of the UML activity diagram. This process can then be executed and formally verified using the CCA simulation tool ccaPL and the CCA runtime verification tool ccaRV. Hence, design flaws can be detected and fixed early during the system development lifecycle. The pragmatics of the proposed approach are demonstrated using a case study in e-commerce.</description>
	<pubDate>2025-02-27</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 4: On the Execution and Runtime Verification of UML Activity Diagrams</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/1/4">doi: 10.3390/software4010004</a></p>
	<p>Authors:
		François Siewe
		Guy Merlin Ngounou
		</p>
	<p>The unified modelling language (UML) is an industrial de facto standard for system modelling. It consists of a set of graphical notations (also known as diagrams) and has been used widely in many industrial applications. Although the graphical nature of UML is appealing to system developers, the official documentation of UML does not provide formal semantics for UML diagrams. This makes UML unsuitable for formal verification and, therefore, limited when it comes to the development of safety/security-critical systems where faults can cause damage to people, properties, or the environment. The UML activity diagram is an important UML graphical notation, which is effective in modelling the dynamic aspects of a system. This paper proposes a formal semantics for UML activity diagrams based on the calculus of context-aware ambients (CCA). An algorithm (semantic function) is proposed that maps any activity diagram onto a process in CCA, which describes the behaviours of the UML activity diagram. This process can then be executed and formally verified using the CCA simulation tool ccaPL and the CCA runtime verification tool ccaRV. Hence, design flaws can be detected and fixed early during the system development lifecycle. The pragmatics of the proposed approach are demonstrated using a case study in e-commerce.</p>
	]]></content:encoded>

	<dc:title>On the Execution and Runtime Verification of UML Activity Diagrams</dc:title>
			<dc:creator>François Siewe</dc:creator>
			<dc:creator>Guy Merlin Ngounou</dc:creator>
		<dc:identifier>doi: 10.3390/software4010004</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-02-27</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-02-27</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>4</prism:startingPage>
		<prism:doi>10.3390/software4010004</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/1/4</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/1/3">

	<title>Software, Vol. 4, Pages 3: The Scalable Detection and Resolution of Data Clumps Using a Modular Pipeline with ChatGPT</title>
	<link>https://www.mdpi.com/2674-113X/4/1/3</link>
	<description>This paper explores a modular pipeline architecture that integrates ChatGPT, a Large Language Model (LLM), to automate the detection and refactoring of data clumps&amp;mdash;a prevalent type of code smell that complicates software maintainability. Data clumps refer to clusters of code that are often repeated and should ideally be refactored to improve code quality. The pipeline leverages ChatGPT&amp;rsquo;s capabilities to understand context and generate structured outputs, making it suitable for addressing complex software refactoring tasks. Through systematic experimentation, our study not only addresses the research questions outlined but also demonstrates that the pipeline can accurately identify data clumps, particularly excelling in cases that require semantic understanding&amp;mdash;where localized clumps are embedded within larger codebases. While the solution significantly enhances the refactoring workflow, facilitating the management of distributed clumps across multiple files, it also presents challenges such as occasional compiler errors and high computational costs. Feedback from developers underscores the usefulness of LLMs in software development but also highlights the essential role of human oversight to correct inaccuracies. These findings demonstrate the pipeline&amp;rsquo;s potential as a scalable and efficient solution for addressing code smells, contributing to the broader goal of enhancing software maintainability in large-scale, real-world projects.</description>
	<pubDate>2025-02-02</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 3: The Scalable Detection and Resolution of Data Clumps Using a Modular Pipeline with ChatGPT</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/1/3">doi: 10.3390/software4010003</a></p>
	<p>Authors:
		Nils Baumgartner
		Padma Iyenghar
		Timo Schoemaker
		Elke Pulvermüller
		</p>
	<p>This paper explores a modular pipeline architecture that integrates ChatGPT, a Large Language Model (LLM), to automate the detection and refactoring of data clumps&mdash;a prevalent type of code smell that complicates software maintainability. Data clumps refer to clusters of code that are often repeated and should ideally be refactored to improve code quality. The pipeline leverages ChatGPT&rsquo;s capabilities to understand context and generate structured outputs, making it suitable for addressing complex software refactoring tasks. Through systematic experimentation, our study not only addresses the research questions outlined but also demonstrates that the pipeline can accurately identify data clumps, particularly excelling in cases that require semantic understanding&mdash;where localized clumps are embedded within larger codebases. While the solution significantly enhances the refactoring workflow, facilitating the management of distributed clumps across multiple files, it also presents challenges such as occasional compiler errors and high computational costs. Feedback from developers underscores the usefulness of LLMs in software development but also highlights the essential role of human oversight to correct inaccuracies. These findings demonstrate the pipeline&rsquo;s potential as a scalable and efficient solution for addressing code smells, contributing to the broader goal of enhancing software maintainability in large-scale, real-world projects.</p>
	]]></content:encoded>

	<dc:title>The Scalable Detection and Resolution of Data Clumps Using a Modular Pipeline with ChatGPT</dc:title>
			<dc:creator>Nils Baumgartner</dc:creator>
			<dc:creator>Padma Iyenghar</dc:creator>
			<dc:creator>Timo Schoemaker</dc:creator>
			<dc:creator>Elke Pulvermüller</dc:creator>
		<dc:identifier>doi: 10.3390/software4010003</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-02-02</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-02-02</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>3</prism:startingPage>
		<prism:doi>10.3390/software4010003</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/1/3</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/1/2">

	<title>Software, Vol. 4, Pages 2: German Translation and Psychometric Analysis of the SOLID-SD: A German Inventory for Assessing Security Culture in Software Companies</title>
	<link>https://www.mdpi.com/2674-113X/4/1/2</link>
	<description>The SOLID-S is an inventory assessing six dimensions of organizational (software) security culture, which is currently available in English. Here, we present the German version, SOLID-SD, along with its translation process and psychometric analysis. With a partial credit model based on a sample of N = 280 persons, we found, overall, highly satisfactory measurement properties for the instrument. There were no threshold permutations, no serious differential item functioning, and good item fits. The subscales&amp;rsquo; internal consistencies and the inter-scale correlations show very high similarities between the SOLID-SD and the original English version, indicating a successful translation of the instrument.</description>
	<pubDate>2025-01-24</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 2: German Translation and Psychometric Analysis of the SOLID-SD: A German Inventory for Assessing Security Culture in Software Companies</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/1/2">doi: 10.3390/software4010002</a></p>
	<p>Authors:
		Christina Glasauer
		Hollie N. Pearl
		Rainer W. Alexandrowicz
		</p>
	<p>The SOLID-S is an inventory assessing six dimensions of organizational (software) security culture, which is currently available in English. Here, we present the German version, SOLID-SD, along with its translation process and psychometric analysis. With a partial credit model based on a sample of N = 280 persons, we found, overall, highly satisfactory measurement properties for the instrument. There were no threshold permutations, no serious differential item functioning, and good item fits. The subscales&rsquo; internal consistencies and the inter-scale correlations show very high similarities between the SOLID-SD and the original English version, indicating a successful translation of the instrument.</p>
	]]></content:encoded>

	<dc:title>German Translation and Psychometric Analysis of the SOLID-SD: A German Inventory for Assessing Security Culture in Software Companies</dc:title>
			<dc:creator>Christina Glasauer</dc:creator>
			<dc:creator>Hollie N. Pearl</dc:creator>
			<dc:creator>Rainer W. Alexandrowicz</dc:creator>
		<dc:identifier>doi: 10.3390/software4010002</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-01-24</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-01-24</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>2</prism:startingPage>
		<prism:doi>10.3390/software4010002</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/1/2</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/4/1/1">

	<title>Software, Vol. 4, Pages 1: A Common Language of Software Evolution in Repositories (CLOSER)</title>
	<link>https://www.mdpi.com/2674-113X/4/1/1</link>
	<description>Version Control Systems (VCSs) are used by development teams to manage the collaborative evolution of source code, and there are several widely used industry standard VCSs. In addition to the code files themselves, metadata about the changes made are also recorded by the VCS, and this is often used with analytical tools to provide insight into the software development, a process known as Mining Software Repositories (MSRs). MSR tools are numerous but are most often limited to a single VCS format, which restricts their scope of application and adds the initial effort of implementing parsers for verbose textual VCS output. To address this limitation, a domain-specific language (DSL), the Common Language of Software Evolution in Repositories (CLOSER), was defined that abstracted away from specific implementations while isomorphically mapping to the data model of all major VCS formats. Using CLOSER directly as a data model or as an intermediate stage in a conversion analysis approach could make use of all major repositories rather than be limited to a single format. The initial barrier to adoption for MSR approaches was also lowered as CLOSER output is a concise, easily machine-readable format. CLOSER was implemented in tooling and tested against a number of common expected use cases, including a direct use in MSR analysis, proving the fidelity of the model and implementation. CLOSER was also successfully used to convert raw output logs from one VCS format to another, offering the possibility that legacy analysis tools could be used on other technologies without any changes being required. In addition to the advantages of a generic model opening all major VCS formats to analysis, the CLOSER format was found to require less code and to parse faster than traditional VCS logging outputs.</description>
	<pubDate>2025-01-06</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 4, Pages 1: A Common Language of Software Evolution in Repositories (CLOSER)</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/4/1/1">doi: 10.3390/software4010001</a></p>
	<p>Authors:
		Jordan Garrity
		David Cutting
		</p>
	<p>Version Control Systems (VCSs) are used by development teams to manage the collaborative evolution of source code, and there are several widely used industry standard VCSs. In addition to the code files themselves, metadata about the changes made are also recorded by the VCS, and this is often used with analytical tools to provide insight into the software development, a process known as Mining Software Repositories (MSRs). MSR tools are numerous but are most often limited to a single VCS format, which restricts their scope of application and adds the initial effort of implementing parsers for verbose textual VCS output. To address this limitation, a domain-specific language (DSL), the Common Language of Software Evolution in Repositories (CLOSER), was defined that abstracted away from specific implementations while isomorphically mapping to the data model of all major VCS formats. Using CLOSER directly as a data model or as an intermediate stage in a conversion analysis approach could make use of all major repositories rather than be limited to a single format. The initial barrier to adoption for MSR approaches was also lowered as CLOSER output is a concise, easily machine-readable format. CLOSER was implemented in tooling and tested against a number of common expected use cases, including a direct use in MSR analysis, proving the fidelity of the model and implementation. CLOSER was also successfully used to convert raw output logs from one VCS format to another, offering the possibility that legacy analysis tools could be used on other technologies without any changes being required. In addition to the advantages of a generic model opening all major VCS formats to analysis, the CLOSER format was found to require less code and to parse faster than traditional VCS logging outputs.</p>
	]]></content:encoded>

	<dc:title>A Common Language of Software Evolution in Repositories (CLOSER)</dc:title>
			<dc:creator>Jordan Garrity</dc:creator>
			<dc:creator>David Cutting</dc:creator>
		<dc:identifier>doi: 10.3390/software4010001</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2025-01-06</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2025-01-06</prism:publicationDate>
	<prism:volume>4</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>1</prism:startingPage>
		<prism:doi>10.3390/software4010001</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/4/1/1</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/29">

	<title>Software, Vol. 3, Pages 587-594: Dental Loop Chatbot: A Prototype Large Language Model Framework for Dentistry</title>
	<link>https://www.mdpi.com/2674-113X/3/4/29</link>
	<description>The Dental Loop Chatbot was developed as a real-time, evidence-based guidance system for dental practitioners using a fine-tuned large language model (LLM) and Retrieval-Augmented Generation (RAG). This paper outlines the development and preliminary evaluation of the chatbot as a scalable clinical decision-support tool designed for resource-limited settings. The system&amp;rsquo;s architecture incorporates Quantized Low-Rank Adaptation (QLoRA) for efficient fine-tuning, while dynamic retrieval mechanisms ensure contextually accurate and relevant responses. This prototype lays the groundwork for future triaging and diagnostic support systems tailored specifically to the field of dentistry.</description>
	<pubDate>2024-12-17</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 587-594: Dental Loop Chatbot: A Prototype Large Language Model Framework for Dentistry</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/29">doi: 10.3390/software3040029</a></p>
	<p>Authors:
		Md Sahadul Hasan Arian
		Faisal Ahmed Sifat
		Saif Ahmed
		Nabeel Mohammed
		Taseef Hasan Farook
		James Dudley
		</p>
	<p>The Dental Loop Chatbot was developed as a real-time, evidence-based guidance system for dental practitioners using a fine-tuned large language model (LLM) and Retrieval-Augmented Generation (RAG). This paper outlines the development and preliminary evaluation of the chatbot as a scalable clinical decision-support tool designed for resource-limited settings. The system&rsquo;s architecture incorporates Quantized Low-Rank Adaptation (QLoRA) for efficient fine-tuning, while dynamic retrieval mechanisms ensure contextually accurate and relevant responses. This prototype lays the groundwork for future triaging and diagnostic support systems tailored specifically to the field of dentistry.</p>
	]]></content:encoded>

	<dc:title>Dental Loop Chatbot: A Prototype Large Language Model Framework for Dentistry</dc:title>
			<dc:creator>Md Sahadul Hasan Arian</dc:creator>
			<dc:creator>Faisal Ahmed Sifat</dc:creator>
			<dc:creator>Saif Ahmed</dc:creator>
			<dc:creator>Nabeel Mohammed</dc:creator>
			<dc:creator>Taseef Hasan Farook</dc:creator>
			<dc:creator>James Dudley</dc:creator>
		<dc:identifier>doi: 10.3390/software3040029</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-12-17</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-12-17</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Communication</prism:section>
	<prism:startingPage>587</prism:startingPage>
		<prism:doi>10.3390/software3040029</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/29</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/28">

	<title>Software, Vol. 3, Pages 569-586: A Fuzzing Tool Based on Automated Grammar Detection</title>
	<link>https://www.mdpi.com/2674-113X/3/4/28</link>
	<description>Software testing is an important step in the software development life cycle to ensure the quality and security of software. Fuzzing is a security testing technique that finds vulnerabilities automatically without accessing the source code. We built an effective fuzzing tool, called JIMA-Fuzzing, that utilizes grammar detected from sample input. Based on the detected grammar, JIMA-Fuzzing selects a portion of the valid user input and fuzzes that portion. For example, the tool may greatly increase the size of the input, truncate the input, replace numeric values with new values, replace words with numbers, etc. This paper discusses how JIMA-Fuzzing works and shows the evaluation results after testing against the DARPA Cyber Grand Challenge (CGC) dataset. JIMA-Fuzzing is capable of extracting grammar from sample input files, meaning that it does not require access to the source code to generate effective fuzzing files. This feature allows it to work with proprietary or non-open-source programs and significantly reduces the effort needed from human testers. In addition, compared to fuzzing tools guided by symbolic execution or taint analysis, JIMA-Fuzzing takes much less computing power and time to analyze sample input and generate fuzzing files. However, the limitation is that JIMA-Fuzzing relies on good sample inputs and works primarily on programs that require user interaction/input.</description>
	<pubDate>2024-12-14</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 569-586: A Fuzzing Tool Based on Automated Grammar Detection</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/28">doi: 10.3390/software3040028</a></p>
	<p>Authors:
		Jia Song
		Jim Alves-Foss
		</p>
	<p>Software testing is an important step in the software development life cycle to ensure the quality and security of software. Fuzzing is a security testing technique that finds vulnerabilities automatically without accessing the source code. We built an effective fuzzing tool, called JIMA-Fuzzing, that utilizes grammar detected from sample input. Based on the detected grammar, JIMA-Fuzzing selects a portion of the valid user input and fuzzes that portion. For example, the tool may greatly increase the size of the input, truncate the input, replace numeric values with new values, replace words with numbers, etc. This paper discusses how JIMA-Fuzzing works and shows the evaluation results after testing against the DARPA Cyber Grand Challenge (CGC) dataset. JIMA-Fuzzing is capable of extracting grammar from sample input files, meaning that it does not require access to the source code to generate effective fuzzing files. This feature allows it to work with proprietary or non-open-source programs and significantly reduces the effort needed from human testers. In addition, compared to fuzzing tools guided by symbolic execution or taint analysis, JIMA-Fuzzing takes much less computing power and time to analyze sample input and generate fuzzing files. However, the limitation is that JIMA-Fuzzing relies on good sample inputs and works primarily on programs that require user interaction/input.</p>
	]]></content:encoded>

	<dc:title>A Fuzzing Tool Based on Automated Grammar Detection</dc:title>
			<dc:creator>Jia Song</dc:creator>
			<dc:creator>Jim Alves-Foss</dc:creator>
		<dc:identifier>doi: 10.3390/software3040028</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-12-14</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-12-14</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>569</prism:startingPage>
		<prism:doi>10.3390/software3040028</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/28</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/27">

	<title>Software, Vol. 3, Pages 549-568: RbfCon: Construct Radial Basis Function Neural Networks with Grammatical Evolution</title>
	<link>https://www.mdpi.com/2674-113X/3/4/27</link>
	<description>Radial basis function networks are a machine learning tool that can be applied to a wide range of classification and regression problems arising in various research topics. However, in many cases, the initial training method used to fit the parameters of these models can produce poor results, either due to unstable numerical operations or its inability to effectively locate the lowest value of the error function. The current work proposed a novel method that constructs the architecture of this model and estimates the values for each parameter of the model with the incorporation of Grammatical Evolution. The proposed method was coded in ANSI C++, and the produced software was tested for its effectiveness on a wide range of datasets. The experimental results certified the adequacy of the new method for solving difficult problems, and in the vast majority of cases, the error in the classification or approximation of functions was significantly lower than when the original training method was applied.</description>
	<pubDate>2024-12-11</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 549-568: RbfCon: Construct Radial Basis Function Neural Networks with Grammatical Evolution</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/27">doi: 10.3390/software3040027</a></p>
	<p>Authors:
		Ioannis G. Tsoulos
		Ioannis Varvaras
		Vasileios Charilogis
		</p>
	<p>Radial basis function networks are a machine learning tool that can be applied to a wide range of classification and regression problems arising in various research topics. However, in many cases, the initial training method used to fit the parameters of these models can produce poor results, either due to unstable numerical operations or its inability to effectively locate the lowest value of the error function. The current work proposed a novel method that constructs the architecture of this model and estimates the values for each parameter of the model with the incorporation of Grammatical Evolution. The proposed method was coded in ANSI C++, and the produced software was tested for its effectiveness on a wide range of datasets. The experimental results certified the adequacy of the new method for solving difficult problems, and in the vast majority of cases, the error in the classification or approximation of functions was significantly lower than when the original training method was applied.</p>
	]]></content:encoded>

	<dc:title>RbfCon: Construct Radial Basis Function Neural Networks with Grammatical Evolution</dc:title>
			<dc:creator>Ioannis G. Tsoulos</dc:creator>
			<dc:creator>Ioannis Varvaras</dc:creator>
			<dc:creator>Vasileios Charilogis</dc:creator>
		<dc:identifier>doi: 10.3390/software3040027</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-12-11</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-12-11</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>549</prism:startingPage>
		<prism:doi>10.3390/software3040027</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/27</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/26">

	<title>Software, Vol. 3, Pages 534-548: Implementing Mathematics of Arrays in Modern Fortran: Efficiency and Efficacy</title>
	<link>https://www.mdpi.com/2674-113X/3/4/26</link>
	<description>Mathematics of Arrays (MoA) concerns the formal description of algorithms working on arrays of data and their efficient and effective implementation in software and hardware. Since (multidimensional) arrays are one of the most important data structures in Fortran, as witnessed by their native support in its language and the numerous operations and functions that take arrays as inputs and outputs, it is natural to examine how Fortran can be used as an implementation language for MoA. This article presents the first results, both in terms of code and of performance, regarding this union. It may serve as a basis for further research, both with respect to the formal theory of MoA and to improving the practical implementation of array-based algorithms.</description>
	<pubDate>2024-11-30</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 534-548: Implementing Mathematics of Arrays in Modern Fortran: Efficiency and Efficacy</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/26">doi: 10.3390/software3040026</a></p>
	<p>Authors:
		Arjen Markus
		Lenore Mullin
		</p>
	<p>Mathematics of Arrays (MoA) concerns the formal description of algorithms working on arrays of data and their efficient and effective implementation in software and hardware. Since (multidimensional) arrays are one of the most important data structures in Fortran, as witnessed by their native support in its language and the numerous operations and functions that take arrays as inputs and outputs, it is natural to examine how Fortran can be used as an implementation language for MoA. This article presents the first results, both in terms of code and of performance, regarding this union. It may serve as a basis for further research, both with respect to the formal theory of MoA and to improving the practical implementation of array-based algorithms.</p>
	]]></content:encoded>

	<dc:title>Implementing Mathematics of Arrays in Modern Fortran: Efficiency and Efficacy</dc:title>
			<dc:creator>Arjen Markus</dc:creator>
			<dc:creator>Lenore Mullin</dc:creator>
		<dc:identifier>doi: 10.3390/software3040026</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-11-30</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-11-30</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>534</prism:startingPage>
		<prism:doi>10.3390/software3040026</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/26</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/25">

	<title>Software, Vol. 3, Pages 514-533: Analysing Quality Metrics and Automated Scoring of Code Reviews</title>
	<link>https://www.mdpi.com/2674-113X/3/4/25</link>
	<description>Code reviews are an important part of the software development process, and a wide variety of approaches are used to perform them. While it is generally agreed that code reviews are beneficial and result in higher-quality software, there has been little work investigating best practices and approaches or exploring which factors impact code review quality. Our approach firstly analyses current best practices and procedures for undertaking code reviews, along with an examination of metrics often used to analyse a review&amp;rsquo;s quality and current offerings for automated code review assessment. A maximum of one thousand code review comments per project were mined from GitHub pull requests across seven open-source projects which have previously been analysed in similar studies. Several identified metrics are tested across these projects using Python&amp;rsquo;s Natural Language Toolkit, including stop word ratio, overall sentiment, and detection of code snippets through the GitHub markdown language. Comparisons are drawn with regard to each project&amp;rsquo;s culture and the language used in the code review process, with pros and cons for each. The results show that the stop word ratio remained consistent across all projects, with only one project exceeding an average of 30%, and that the percentage of positive comments was also broadly similar across the projects. The suitability of these metrics is also discussed with regard to the creation of a scoring framework and the development of an automated code review analysis tool. We conclude that the software written is an effective method of comparing practices and cultures across projects and can provide benefits by promoting a positive review culture within an organisation. However, rudimentary sentiment analysis and detection of GitHub code snippets may not be sufficient to assess a code review&amp;rsquo;s overall usefulness, as many terms that are important in a programmer&amp;rsquo;s lexicon, such as &amp;lsquo;error&amp;rsquo; and &amp;lsquo;fail&amp;rsquo;, cause a code review to be classed as negative. Code snippets included outside of the markdown language are also excluded from analysis. Recommendations for future work are suggested, including the development of a more robust sentiment analysis system that can detect emotions such as frustration, and the creation of a programming dictionary to exclude programming terms from sentiment analysis.</description>
	<pubDate>2024-11-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 514-533: Analysing Quality Metrics and Automated Scoring of Code Reviews</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/25">doi: 10.3390/software3040025</a></p>
	<p>Authors:
		Owen Sortwell
		David Cutting
		Christine McConnellogue
		</p>
	<p>Code reviews are an important part of the software development process, and a wide variety of approaches are used to perform them. While it is generally agreed that code reviews are beneficial and result in higher-quality software, there has been little work investigating best practices and approaches or exploring which factors impact code review quality. Our approach firstly analyses current best practices and procedures for undertaking code reviews, along with an examination of metrics often used to analyse a review&rsquo;s quality and current offerings for automated code review assessment. A maximum of one thousand code review comments per project were mined from GitHub pull requests across seven open-source projects which have previously been analysed in similar studies. Several identified metrics are tested across these projects using Python&rsquo;s Natural Language Toolkit, including stop word ratio, overall sentiment, and detection of code snippets through the GitHub markdown language. Comparisons are drawn with regard to each project&rsquo;s culture and the language used in the code review process, with pros and cons for each. The results show that the stop word ratio remained consistent across all projects, with only one project exceeding an average of 30%, and that the percentage of positive comments was also broadly similar across the projects. The suitability of these metrics is also discussed with regard to the creation of a scoring framework and the development of an automated code review analysis tool. We conclude that the software written is an effective method of comparing practices and cultures across projects and can provide benefits by promoting a positive review culture within an organisation. However, rudimentary sentiment analysis and detection of GitHub code snippets may not be sufficient to assess a code review&rsquo;s overall usefulness, as many terms that are important in a programmer&rsquo;s lexicon, such as &lsquo;error&rsquo; and &lsquo;fail&rsquo;, cause a code review to be classed as negative. Code snippets included outside of the markdown language are also excluded from analysis. Recommendations for future work are suggested, including the development of a more robust sentiment analysis system that can detect emotions such as frustration, and the creation of a programming dictionary to exclude programming terms from sentiment analysis.</p>
	]]></content:encoded>

	<dc:title>Analysing Quality Metrics and Automated Scoring of Code Reviews</dc:title>
			<dc:creator>Owen Sortwell</dc:creator>
			<dc:creator>David Cutting</dc:creator>
			<dc:creator>Christine McConnellogue</dc:creator>
		<dc:identifier>doi: 10.3390/software3040025</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-11-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-11-29</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>514</prism:startingPage>
		<prism:doi>10.3390/software3040025</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/25</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/24">

	<title>Software, Vol. 3, Pages 498-513: Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification</title>
	<link>https://www.mdpi.com/2674-113X/3/4/24</link>
	<description>In this work, we studied the use of Quantum Machine Learning (QML) algorithms for binary classification and compared their performance with classical Machine Learning (ML) methods. QML merges principles of Quantum Computing (QC) and ML, offering improved efficiency and potential quantum advantage in data-driven tasks and when solving complex problems. In binary classification, where the goal is to assign data to one of two categories, QML uses quantum algorithms to process large datasets efficiently. Quantum algorithms like Quantum Support Vector Machines (QSVM) and Quantum Neural Networks (QNN) exploit quantum parallelism and entanglement to enhance performance over classical methods. This study focuses on two common QML algorithms, Quantum Support Vector Classifier (QSVC) and QNN. We used the Qiskit software and conducted the experiments with three different datasets. Data preprocessing included dimensionality reduction using Principal Component Analysis (PCA) and standardization using scalers. The results showed that quantum algorithms demonstrated competitive performance against their classical counterparts in terms of accuracy, while QSVC performed better than QNN. These findings suggest that QML holds potential for improving computational efficiency in binary classification tasks. This opens the way for more efficient and scalable solutions in complex classification challenges and shows the complementary role of quantum computing.</description>
	<pubDate>2024-11-28</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 498-513: Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/24">doi: 10.3390/software3040024</a></p>
	<p>Authors:
		Surajudeen Shina Ajibosin
		Deniz Cetinkaya
		</p>
	<p>In this work, we studied the use of Quantum Machine Learning (QML) algorithms for binary classification and compared their performance with classical Machine Learning (ML) methods. QML merges principles of Quantum Computing (QC) and ML, offering improved efficiency and potential quantum advantage in data-driven tasks and when solving complex problems. In binary classification, where the goal is to assign data to one of two categories, QML uses quantum algorithms to process large datasets efficiently. Quantum algorithms like Quantum Support Vector Machines (QSVM) and Quantum Neural Networks (QNN) exploit quantum parallelism and entanglement to enhance performance over classical methods. This study focuses on two common QML algorithms, Quantum Support Vector Classifier (QSVC) and QNN. We used the Qiskit software and conducted the experiments with three different datasets. Data preprocessing included dimensionality reduction using Principal Component Analysis (PCA) and standardization using scalers. The results showed that quantum algorithms demonstrated competitive performance against their classical counterparts in terms of accuracy, while QSVC performed better than QNN. These findings suggest that QML holds potential for improving computational efficiency in binary classification tasks. This opens the way for more efficient and scalable solutions in complex classification challenges and shows the complementary role of quantum computing.</p>
	]]></content:encoded>

	<dc:title>Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification</dc:title>
			<dc:creator>Surajudeen Shina Ajibosin</dc:creator>
			<dc:creator>Deniz Cetinkaya</dc:creator>
		<dc:identifier>doi: 10.3390/software3040024</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-11-28</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-11-28</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>498</prism:startingPage>
		<prism:doi>10.3390/software3040024</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/24</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/23">

	<title>Software, Vol. 3, Pages 473-497: A Brief Overview of the Pawns Programming Language</title>
	<link>https://www.mdpi.com/2674-113X/3/4/23</link>
	<description>This paper describes the Pawns programming language, currently under development, which uses several novel features to combine the functional and imperative programming paradigms. It supports pure functional programming (including algebraic data types, higher-order programming and parametric polymorphism), where the representation of values need not be considered. It also supports lower-level C-like imperative programming with pointers and the destructive update of all fields of the structs used to represent the algebraic data types. All destructive update of variables is made obvious in Pawns code, via annotations on statements and in type signatures. Type signatures must also declare sharing between any arguments and result that may be updated. For example, if two arguments of a function are trees that share a subtree and the subtree is updated within the function, both variables must be annotated at that point in the code, and the sharing and update of both arguments must be declared in the type signature of the function. The compiler performs extensive sharing analysis to check that the declarations and annotations are correct. This analysis allows destructive update to be encapsulated: a function with no update annotations in its type signature is guaranteed to behave as a pure function, even though the value returned may have been constructed using destructive update within the function. Additionally, the sharing analysis helps support a constrained form of global variables that also allows destructive update to be encapsulated and safe update of variables with polymorphic types to be performed.</description>
	<pubDate>2024-11-19</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 473-497: A Brief Overview of the Pawns Programming Language</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/23">doi: 10.3390/software3040023</a></p>
	<p>Authors:
		Lee Naish
		</p>
	<p>This paper describes the Pawns programming language, currently under development, which uses several novel features to combine the functional and imperative programming paradigms. It supports pure functional programming (including algebraic data types, higher-order programming and parametric polymorphism), where the representation of values need not be considered. It also supports lower-level C-like imperative programming with pointers and the destructive update of all fields of the structs used to represent the algebraic data types. All destructive update of variables is made obvious in Pawns code, via annotations on statements and in type signatures. Type signatures must also declare sharing between any arguments and result that may be updated. For example, if two arguments of a function are trees that share a subtree and the subtree is updated within the function, both variables must be annotated at that point in the code, and the sharing and update of both arguments must be declared in the type signature of the function. The compiler performs extensive sharing analysis to check that the declarations and annotations are correct. This analysis allows destructive update to be encapsulated: a function with no update annotations in its type signature is guaranteed to behave as a pure function, even though the value returned may have been constructed using destructive update within the function. Additionally, the sharing analysis helps support a constrained form of global variables that also allows destructive update to be encapsulated and safe update of variables with polymorphic types to be performed.</p>
	]]></content:encoded>

	<dc:title>A Brief Overview of the Pawns Programming Language</dc:title>
			<dc:creator>Lee Naish</dc:creator>
		<dc:identifier>doi: 10.3390/software3040023</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-11-19</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-11-19</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>473</prism:startingPage>
		<prism:doi>10.3390/software3040023</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/23</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/22">

	<title>Software, Vol. 3, Pages 442-472: Software Development and Maintenance Effort Estimation Using Function Points and Simpler Functional Measures</title>
	<link>https://www.mdpi.com/2674-113X/3/4/22</link>
	<description>Functional size measures are widely used for estimating software development effort. After the introduction of Function Points, a few &amp;ldquo;simplified&amp;rdquo; measures have been proposed, aiming to make measurement simpler and applicable when fully detailed software specifications are not yet available. However, some practitioners believe that, when considering &amp;ldquo;complex&amp;rdquo; projects, traditional Function Point measures support more accurate estimates than simpler functional size measures, which do not account for greater-than-average complexity. In this paper, we aim to produce evidence that confirms or disproves such a belief via an empirical study that separately analyzes projects that involved developments from scratch and extensions and modifications of existing software. Our analysis shows that there is no evidence that traditional Function Points are generally better at estimating more complex projects than simpler measures, although some differences appear in specific conditions. Another result of this study is that functional size metrics&amp;mdash;both traditional and simplified&amp;mdash;do not seem to effectively account for software complexity, as estimation accuracy decreases with increasing complexity, regardless of the functional size metric used. To improve effort estimation, researchers should look for a way of measuring software complexity that can be used in effort models together with (traditional or simplified) functional size measures.</description>
	<pubDate>2024-10-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 442-472: Software Development and Maintenance Effort Estimation Using Function Points and Simpler Functional Measures</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/22">doi: 10.3390/software3040022</a></p>
	<p>Authors:
		Luigi Lavazza
		Angela Locoro
		Roberto Meli
		</p>
	<p>Functional size measures are widely used for estimating software development effort. After the introduction of Function Points, a few &ldquo;simplified&rdquo; measures have been proposed, aiming to make measurement simpler and applicable when fully detailed software specifications are not yet available. However, some practitioners believe that, when considering &ldquo;complex&rdquo; projects, traditional Function Point measures support more accurate estimates than simpler functional size measures, which do not account for greater-than-average complexity. In this paper, we aim to produce evidence that confirms or disproves such a belief via an empirical study that separately analyzes projects that involved developments from scratch and extensions and modifications of existing software. Our analysis shows that there is no evidence that traditional Function Points are generally better at estimating more complex projects than simpler measures, although some differences appear in specific conditions. Another result of this study is that functional size metrics&mdash;both traditional and simplified&mdash;do not seem to effectively account for software complexity, as estimation accuracy decreases with increasing complexity, regardless of the functional size metric used. To improve effort estimation, researchers should look for a way of measuring software complexity that can be used in effort models together with (traditional or simplified) functional size measures.</p>
	]]></content:encoded>

	<dc:title>Software Development and Maintenance Effort Estimation Using Function Points and Simpler Functional Measures</dc:title>
			<dc:creator>Luigi Lavazza</dc:creator>
			<dc:creator>Angela Locoro</dc:creator>
			<dc:creator>Roberto Meli</dc:creator>
		<dc:identifier>doi: 10.3390/software3040022</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-10-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-10-29</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>442</prism:startingPage>
		<prism:doi>10.3390/software3040022</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/22</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/4/21">

	<title>Software, Vol. 3, Pages 411-441: Opening Software Research Data 5Ws+1H</title>
	<link>https://www.mdpi.com/2674-113X/3/4/21</link>
	<description>Open Science describes the movement of making any research artifact available to the public, fostering sharing and collaboration. While sharing the source code is a popular Open Science practice in software research and development, much work remains to achieve openness across the whole research and development cycle, from the conception to the preservation phase. In this direction, the software engineering community faces significant challenges in adopting open science practices due to the complexity of the data, the heterogeneity of the development environments and the diversity of the application domains. In this paper, through the discussion of the 5Ws+1H (Why, Who, What, When, Where, and How) questions, collectively known as Kipling&amp;rsquo;s framework, we aim to provide a structured guideline to motivate and assist the software engineering community on the journey to data openness. Also, we demonstrate the practical application of these guidelines through a use case on opening research data.</description>
	<pubDate>2024-09-26</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 411-441: Opening Software Research Data 5Ws+1H</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/4/21">doi: 10.3390/software3040021</a></p>
	<p>Authors:
		Anastasia Terzi
		Stamatia Bibi
		</p>
	<p>Open Science describes the movement of making any research artifact available to the public, fostering sharing and collaboration. While sharing the source code is a popular Open Science practice in software research and development, much work remains to achieve openness across the whole research and development cycle, from the conception to the preservation phase. In this direction, the software engineering community faces significant challenges in adopting open science practices due to the complexity of the data, the heterogeneity of the development environments and the diversity of the application domains. In this paper, through the discussion of the 5Ws+1H (Why, Who, What, When, Where, and How) questions, collectively known as Kipling&rsquo;s framework, we aim to provide a structured guideline to motivate and assist the software engineering community on the journey to data openness. Also, we demonstrate the practical application of these guidelines through a use case on opening research data.</p>
	]]></content:encoded>

	<dc:title>Opening Software Research Data 5Ws+1H</dc:title>
			<dc:creator>Anastasia Terzi</dc:creator>
			<dc:creator>Stamatia Bibi</dc:creator>
		<dc:identifier>doi: 10.3390/software3040021</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-09-26</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-09-26</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>411</prism:startingPage>
		<prism:doi>10.3390/software3040021</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/4/21</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/20">

	<title>Software, Vol. 3, Pages 380-410: A Software Tool for ICESat and ICESat-2 Laser Altimetry Data Processing, Analysis, and Visualization: Description, Features, and Usage</title>
	<link>https://www.mdpi.com/2674-113X/3/3/20</link>
	<description>This paper presents a web-based software tool designed to process, analyze, and visualize satellite laser altimetry data, specifically from the Ice, Cloud, and land Elevation Satellite (ICESat) mission, which collected data from 2003 to 2009, and ICESat-2, which was launched in 2018 and is currently operational. These data are crucial for studying and understanding changes in Earth&amp;rsquo;s surface and cryosphere, offering unprecedented accuracy in quantifying such changes. The software tool ICEComb provides the capability to access the available data from both missions, interactively visualize it on a geographic map, locally store the data records, and process, analyze, and explore the data in a detailed, meaningful, and efficient manner. This creates a user-friendly online platform for the analysis, exploration, and interpretation of satellite laser altimetry data. ICEComb was developed using well-known and well-documented technologies, simplifying the addition of new functionalities and extending its applicability to support data from different satellite laser altimetry missions. The tool&amp;rsquo;s use is illustrated throughout the text by its application to ICESat and ICESat-2 laser altimetry measurements over the Mirim Lagoon region in southern Brazil and Uruguay, which is part of the world&amp;rsquo;s largest complex of shallow-water coastal lagoons.</description>
	<pubDate>2024-09-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 380-410: A Software Tool for ICESat and ICESat-2 Laser Altimetry Data Processing, Analysis, and Visualization: Description, Features, and Usage</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/20">doi: 10.3390/software3030020</a></p>
	<p>Authors:
		Bruno Silva
		Luiz Guerreiro Lopes
		</p>
	<p>This paper presents a web-based software tool designed to process, analyze, and visualize satellite laser altimetry data, specifically from the Ice, Cloud, and land Elevation Satellite (ICESat) mission, which collected data from 2003 to 2009, and ICESat-2, which was launched in 2018 and is currently operational. These data are crucial for studying and understanding changes in Earth&rsquo;s surface and cryosphere, offering unprecedented accuracy in quantifying such changes. The software tool ICEComb provides the capability to access the available data from both missions, interactively visualize it on a geographic map, locally store the data records, and process, analyze, and explore the data in a detailed, meaningful, and efficient manner. This creates a user-friendly online platform for the analysis, exploration, and interpretation of satellite laser altimetry data. ICEComb was developed using well-known and well-documented technologies, simplifying the addition of new functionalities and extending its applicability to support data from different satellite laser altimetry missions. The tool&rsquo;s use is illustrated throughout the text by its application to ICESat and ICESat-2 laser altimetry measurements over the Mirim Lagoon region in southern Brazil and Uruguay, which is part of the world&rsquo;s largest complex of shallow-water coastal lagoons.</p>
	]]></content:encoded>

	<dc:title>A Software Tool for ICESat and ICESat-2 Laser Altimetry Data Processing, Analysis, and Visualization: Description, Features, and Usage</dc:title>
			<dc:creator>Bruno Silva</dc:creator>
			<dc:creator>Luiz Guerreiro Lopes</dc:creator>
		<dc:identifier>doi: 10.3390/software3030020</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-09-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-09-18</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>380</prism:startingPage>
		<prism:doi>10.3390/software3030020</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/20</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/19">

	<title>Software, Vol. 3, Pages 368-379: Signsability: Enhancing Communication through a Sign Language App</title>
	<link>https://www.mdpi.com/2674-113X/3/3/19</link>
	<description>The integration of sign language recognition systems into digital platforms has the potential to bridge communication gaps between the deaf community and the broader population. This paper introduces an advanced Israeli Sign Language (ISL) recognition system designed to interpret dynamic motion gestures, addressing a critical need for more sophisticated and fluid communication tools. Unlike conventional systems that focus solely on static signs, our approach incorporates both deep learning and Computer Vision techniques to analyze and translate dynamic gestures captured in real-time video. We provide a comprehensive account of our preprocessing pipeline, detailing every stage from video collection to the extraction of landmarks using MediaPipe, including the mathematical equations used for preprocessing these landmarks and the final recognition process. The dataset utilized for training our model is unique in its comprehensiveness and is publicly accessible, enhancing the reproducibility and expansion of future research. The deployment of our model on a publicly accessible website allows users to engage with ISL interactively, facilitating both learning and practice. We discuss the development process, the challenges overcome, and the anticipated societal impact of our system in promoting greater inclusivity and understanding.</description>
	<pubDate>2024-09-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 368-379: Signsability: Enhancing Communication through a Sign Language App</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/19">doi: 10.3390/software3030019</a></p>
	<p>Authors:
		Din Ezra
		Shai Mastitz
		Irina Rabaev
		</p>
	<p>The integration of sign language recognition systems into digital platforms has the potential to bridge communication gaps between the deaf community and the broader population. This paper introduces an advanced Israeli Sign Language (ISL) recognition system designed to interpret dynamic motion gestures, addressing a critical need for more sophisticated and fluid communication tools. Unlike conventional systems that focus solely on static signs, our approach incorporates both deep learning and Computer Vision techniques to analyze and translate dynamic gestures captured in real-time video. We provide a comprehensive account of our preprocessing pipeline, detailing every stage from video collection to the extraction of landmarks using MediaPipe, including the mathematical equations used for preprocessing these landmarks and the final recognition process. The dataset utilized for training our model is unique in its comprehensiveness and is publicly accessible, enhancing the reproducibility and expansion of future research. The deployment of our model on a publicly accessible website allows users to engage with ISL interactively, facilitating both learning and practice. We discuss the development process, the challenges overcome, and the anticipated societal impact of our system in promoting greater inclusivity and understanding.</p>
	]]></content:encoded>

	<dc:title>Signsability: Enhancing Communication through a Sign Language App</dc:title>
			<dc:creator>Din Ezra</dc:creator>
			<dc:creator>Shai Mastitz</dc:creator>
			<dc:creator>Irina Rabaev</dc:creator>
		<dc:identifier>doi: 10.3390/software3030019</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-09-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-09-12</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>368</prism:startingPage>
		<prism:doi>10.3390/software3030019</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/19</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/18">

	<title>Software, Vol. 3, Pages 345-367: Sligpt: A Large Language Model-Based Approach for Data Dependency Analysis on Solidity Smart Contracts</title>
	<link>https://www.mdpi.com/2674-113X/3/3/18</link>
	<description>The advent of blockchain technology has revolutionized various sectors by providing transparency, immutability, and automation. Central to this revolution are smart contracts, which facilitate trustless and automated transactions across diverse domains. However, the proliferation of smart contracts has exposed significant security vulnerabilities, necessitating advanced analysis techniques. Data dependency analysis is a critical program analysis method used to enhance the testing and security of smart contracts. This paper introduces Sligpt, an innovative methodology that integrates a large language model (LLM), specifically GPT-4o, with the static analysis tool Slither, to perform data dependency analyses on Solidity smart contracts. Our approach leverages both the advanced code comprehension capabilities of GPT-4o and the advantages of a traditional analysis tool. We empirically evaluate Sligpt using a curated dataset of Ethereum smart contracts. Sligpt achieves significant improvements in precision, recall, and overall analysis depth compared with Slither and GPT-4o, providing a robust solution for data dependency analysis. This paper also discusses the challenges encountered, such as the computational resource requirements and the inherent variability in LLM outputs, while proposing future research directions to further enhance the methodology. Sligpt represents a significant advancement in the field of static analysis on smart contracts, offering a practical framework for integrating LLMs with static analysis tools.</description>
	<pubDate>2024-08-05</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 345-367: Sligpt: A Large Language Model-Based Approach for Data Dependency Analysis on Solidity Smart Contracts</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/18">doi: 10.3390/software3030018</a></p>
	<p>Authors:
		Xiaolei Ren
		Qiping Wei
		</p>
	<p>The advent of blockchain technology has revolutionized various sectors by providing transparency, immutability, and automation. Central to this revolution are smart contracts, which facilitate trustless and automated transactions across diverse domains. However, the proliferation of smart contracts has exposed significant security vulnerabilities, necessitating advanced analysis techniques. Data dependency analysis is a critical program analysis method used to enhance the testing and security of smart contracts. This paper introduces Sligpt, an innovative methodology that integrates a large language model (LLM), specifically GPT-4o, with the static analysis tool Slither, to perform data dependency analyses on Solidity smart contracts. Our approach leverages both the advanced code comprehension capabilities of GPT-4o and the advantages of a traditional analysis tool. We empirically evaluate Sligpt using a curated dataset of Ethereum smart contracts. Sligpt achieves significant improvements in precision, recall, and overall analysis depth compared with Slither and GPT-4o, providing a robust solution for data dependency analysis. This paper also discusses the challenges encountered, such as the computational resource requirements and the inherent variability in LLM outputs, while proposing future research directions to further enhance the methodology. Sligpt represents a significant advancement in the field of static analysis on smart contracts, offering a practical framework for integrating LLMs with static analysis tools.</p>
	]]></content:encoded>

	<dc:title>Sligpt: A Large Language Model-Based Approach for Data Dependency Analysis on Solidity Smart Contracts</dc:title>
			<dc:creator>Xiaolei Ren</dc:creator>
			<dc:creator>Qiping Wei</dc:creator>
		<dc:identifier>doi: 10.3390/software3030018</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-08-05</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-08-05</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>345</prism:startingPage>
		<prism:doi>10.3390/software3030018</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/18</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/17">

	<title>Software, Vol. 3, Pages 328-344: Software Update Methodologies for Feature-Based Product Lines: A Combined Design Approach</title>
	<link>https://www.mdpi.com/2674-113X/3/3/17</link>
	<description>The automotive industry is experiencing a significant shift, transitioning from traditional hardware-centric systems to more advanced software-defined architectures. This change is enabling enhanced autonomy, connectivity, safety, and improved in-vehicle experiences. Service-oriented architecture is crucial for achieving software-defined vehicles and creating new business opportunities for original equipment manufacturers. A software update approach that is rich in variability and based on a Merkle tree approach is proposed for new vehicle architecture requirements. Given the complexity of software updates in vehicles, particularly when dealing with multiple distributed electronic control units, this software-centric approach can be optimized to handle various architectures and configurations, ensuring consistency across all platforms. In this paper, our software update approach is expanded to cover the solution space of the feature-based product line engineering, and we show how to combine our approach with product line engineering in creative and unique ways to form a software-defined vehicle modular architecture. Then, we offer insights into the design of the Merkle trees utilized in our approach, emphasizing the relationship among the software modules, with a focus on their impact on software update performance. This approach streamlines the software update process and ensures that the safety as well as the security of the vehicle are continuously maintained.</description>
	<pubDate>2024-08-05</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 328-344: Software Update Methodologies for Feature-Based Product Lines: A Combined Design Approach</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/17">doi: 10.3390/software3030017</a></p>
	<p>Authors:
		Abir Bazzi
		Adnan Shaout
		Di Ma
		</p>
	<p>The automotive industry is experiencing a significant shift, transitioning from traditional hardware-centric systems to more advanced software-defined architectures. This change is enabling enhanced autonomy, connectivity, safety, and improved in-vehicle experiences. Service-oriented architecture is crucial for achieving software-defined vehicles and creating new business opportunities for original equipment manufacturers. A software update approach that is rich in variability and based on a Merkle tree approach is proposed for new vehicle architecture requirements. Given the complexity of software updates in vehicles, particularly when dealing with multiple distributed electronic control units, this software-centric approach can be optimized to handle various architectures and configurations, ensuring consistency across all platforms. In this paper, our software update approach is expanded to cover the solution space of the feature-based product line engineering, and we show how to combine our approach with product line engineering in creative and unique ways to form a software-defined vehicle modular architecture. Then, we offer insights into the design of the Merkle trees utilized in our approach, emphasizing the relationship among the software modules, with a focus on their impact on software update performance. This approach streamlines the software update process and ensures that the safety as well as the security of the vehicle are continuously maintained.</p>
	]]></content:encoded>

	<dc:title>Software Update Methodologies for Feature-Based Product Lines: A Combined Design Approach</dc:title>
			<dc:creator>Abir Bazzi</dc:creator>
			<dc:creator>Adnan Shaout</dc:creator>
			<dc:creator>Di Ma</dc:creator>
		<dc:identifier>doi: 10.3390/software3030017</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-08-05</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-08-05</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>328</prism:startingPage>
		<prism:doi>10.3390/software3030017</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/17</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/16">

	<title>Software, Vol. 3, Pages 310-327: Towards a Block-Level Conformer-Based Python Vulnerability Detection</title>
	<link>https://www.mdpi.com/2674-113X/3/3/16</link>
	<description>Software vulnerabilities pose a significant threat to computer systems because they can jeopardize the integrity of both software and hardware. The existing tools for detecting vulnerabilities are inadequate. Machine learning algorithms may struggle to interpret enormous datasets because of their limited ability to understand intricate linkages within high-dimensional data. Traditional procedures, on the other hand, take a long time and require a lot of manual labor. Furthermore, earlier deep-learning approaches failed to acquire adequate feature data. Self-attention mechanisms can process information across large distances, but they do not collect structural data. This work addresses the critical problem of inadequate vulnerability detection in software systems. We propose a novel method that combines self-attention with convolutional networks to enhance the detection of software vulnerabilities by capturing both localized, position-specific features and global, content-driven interactions. Our contribution lies in the integration of these methodologies to improve the precision and F1 score of vulnerability detection systems, achieving unprecedented results on complex Python datasets. In addition, we improve the self-attention approaches by changing the denominator to address the issue of excessive attention heads creating irrelevant disturbances. We assessed the effectiveness of this strategy using six complex Python vulnerability datasets obtained from GitHub. Our rigorous study and comparison of data with previous studies resulted in the most precise outcomes and F1 score (99%) ever attained by machine learning systems.</description>
	<pubDate>2024-07-31</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 310-327: Towards a Block-Level Conformer-Based Python Vulnerability Detection</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/16">doi: 10.3390/software3030016</a></p>
	<p>Authors:
		Amirreza Bagheri
		Péter Hegedűs
		</p>
	<p>Software vulnerabilities pose a significant threat to computer systems because they can jeopardize the integrity of both software and hardware. The existing tools for detecting vulnerabilities are inadequate. Machine learning algorithms may struggle to interpret enormous datasets because of their limited ability to understand intricate linkages within high-dimensional data. Traditional procedures, on the other hand, take a long time and require a lot of manual labor. Furthermore, earlier deep-learning approaches failed to acquire adequate feature data. Self-attention mechanisms can process information across large distances, but they do not collect structural data. This work addresses the critical problem of inadequate vulnerability detection in software systems. We propose a novel method that combines self-attention with convolutional networks to enhance the detection of software vulnerabilities by capturing both localized, position-specific features and global, content-driven interactions. Our contribution lies in the integration of these methodologies to improve the precision and F1 score of vulnerability detection systems, achieving unprecedented results on complex Python datasets. In addition, we improve the self-attention approaches by changing the denominator to address the issue of excessive attention heads creating irrelevant disturbances. We assessed the effectiveness of this strategy using six complex Python vulnerability datasets obtained from GitHub. Our rigorous study and comparison of data with previous studies resulted in the most precise outcomes and F1 score (99%) ever attained by machine learning systems.</p>
	]]></content:encoded>

	<dc:title>Towards a Block-Level Conformer-Based Python Vulnerability Detection</dc:title>
			<dc:creator>Amirreza Bagheri</dc:creator>
			<dc:creator>Péter Hegedűs</dc:creator>
		<dc:identifier>doi: 10.3390/software3030016</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-07-31</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-07-31</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>310</prism:startingPage>
		<prism:doi>10.3390/software3030016</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/16</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/15">

	<title>Software, Vol. 3, Pages 284-309: Mapping Petri Nets onto a Calculus of Context-Aware Ambients</title>
	<link>https://www.mdpi.com/2674-113X/3/3/15</link>
	<description>Petri nets are a graphical notation for describing a class of discrete event dynamic systems whose behaviours are characterised by concurrency, synchronisation, mutual exclusion and conflict. They have been used over the years for the modelling of various distributed systems applications. With the advent of pervasive systems and the Internet of Things, the Calculus of Context-aware Ambients (CCA) has emerged as a suitable formal notation for analysing the behaviours of these systems. In this paper, we are interested in comparing the expressive power of Petri nets to that of CCA. That is, can the class of systems represented by Petri nets be modelled in CCA? To answer this question, an algorithm is proposed that maps any Petri net onto a CCA process. We prove that a Petri net and its corresponding CCA process are behaviourally equivalent. It follows that CCA is at least as expressive as Petri nets, i.e., any system that can be specified in Petri nets can also be specified in CCA. Moreover, tools developed for CCA can also be used to analyse the behaviours of Petri nets.</description>
	<pubDate>2024-07-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 284-309: Mapping Petri Nets onto a Calculus of Context-Aware Ambients</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/15">doi: 10.3390/software3030015</a></p>
	<p>Authors:
		François Siewe
		Vasileios Germanos
		Wen Zeng
		</p>
	<p>Petri nets are a graphical notation for describing a class of discrete event dynamic systems whose behaviours are characterised by concurrency, synchronisation, mutual exclusion and conflict. They have been used over the years for the modelling of various distributed systems applications. With the advent of pervasive systems and the Internet of Things, the Calculus of Context-aware Ambients (CCA) has emerged as a suitable formal notation for analysing the behaviours of these systems. In this paper, we are interested in comparing the expressive power of Petri nets to that of CCA. That is, can the class of systems represented by Petri nets be modelled in CCA? To answer this question, an algorithm is proposed that maps any Petri net onto a CCA process. We prove that a Petri net and its corresponding CCA process are behaviourally equivalent. It follows that CCA is at least as expressive as Petri nets, i.e., any system that can be specified in Petri nets can also be specified in CCA. Moreover, tools developed for CCA can also be used to analyse the behaviours of Petri nets.</p>
	]]></content:encoded>

	<dc:title>Mapping Petri Nets onto a Calculus of Context-Aware Ambients</dc:title>
			<dc:creator>François Siewe</dc:creator>
			<dc:creator>Vasileios Germanos</dc:creator>
			<dc:creator>Wen Zeng</dc:creator>
		<dc:identifier>doi: 10.3390/software3030015</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-07-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-07-18</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>284</prism:startingPage>
		<prism:doi>10.3390/software3030015</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/15</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/14">

	<title>Software, Vol. 3, Pages 271-283: Using Behavior-Driven Development (BDD) for Non-Functional Requirements</title>
	<link>https://www.mdpi.com/2674-113X/3/3/14</link>
	<description>In software engineering, clear communication among interested parties is essential to elicit the requirements for software development through frameworks that achieve the behaviors expected of the software. Problem: A lack of clarity in the requirement-elicitation stage can impact subsequent stages of software development. Solution: We proposed a case study focusing on the performance efficiency characteristic expressed in the ISO/IEC/IEEE 25010 standard using Behavior-Driven Development (BDD). Method: The case study was performed with professionals who use BDD to elicit the non-functional requirements of a company that develops software. Summary of Results: The result obtained was the validation related to the elicitation of non-functional requirements aimed at the performance efficiency characteristic of the ISO/IEC/IEEE 25010 standard using the BDD framework through a real case study in a software development company. Contributions and impact: The article&amp;rsquo;s main contribution is to demonstrate the effectiveness of using BDD to elicit non-functional requirements related to the performance efficiency characteristic of the ISO/IEC/IEEE 25010 standard.</description>
	<pubDate>2024-07-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 271-283: Using Behavior-Driven Development (BDD) for Non-Functional Requirements</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/14">doi: 10.3390/software3030014</a></p>
	<p>Authors:
		Shexmo Santos
		Tacyanne Pimentel
		Fabio Gomes Rocha
		Michel S. Soares
		</p>
	<p>In software engineering, clear communication among interested parties is essential to elicit the requirements for software development through frameworks that achieve the behaviors expected of the software. Problem: A lack of clarity in the requirement-elicitation stage can impact subsequent stages of software development. Solution: We proposed a case study focusing on the performance efficiency characteristic expressed in the ISO/IEC/IEEE 25010 standard using Behavior-Driven Development (BDD). Method: The case study was performed with professionals who use BDD to elicit the non-functional requirements of a company that develops software. Summary of Results: The result obtained was the validation related to the elicitation of non-functional requirements aimed at the performance efficiency characteristic of the ISO/IEC/IEEE 25010 standard using the BDD framework through a real case study in a software development company. Contributions and impact: The article&rsquo;s main contribution is to demonstrate the effectiveness of using BDD to elicit non-functional requirements related to the performance efficiency characteristic of the ISO/IEC/IEEE 25010 standard.</p>
	]]></content:encoded>

	<dc:title>Using Behavior-Driven Development (BDD) for Non-Functional Requirements</dc:title>
			<dc:creator>Shexmo Santos</dc:creator>
			<dc:creator>Tacyanne Pimentel</dc:creator>
			<dc:creator>Fabio Gomes Rocha</dc:creator>
			<dc:creator>Michel S. Soares</dc:creator>
		<dc:identifier>doi: 10.3390/software3030014</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-07-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-07-18</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>271</prism:startingPage>
		<prism:doi>10.3390/software3030014</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/14</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/3/13">

	<title>Software, Vol. 3, Pages 250-270: E-SERS: An Enhanced Approach to Trust-Based Ranking of Apps</title>
	<link>https://www.mdpi.com/2674-113X/3/3/13</link>
	<description>The number of mobile applications (&amp;ldquo;Apps&amp;rdquo;) has grown significantly in recent years. App Stores rank/recommend Apps based on factors such as average star ratings and the number of installs. Such rankings do not focus on the internal artifacts of Apps (e.g., security vulnerabilities). If internal artifacts are ignored, users may fail to estimate the potential risks associated with installing Apps. In this research, we present a framework called E-SERS (Enhanced Security-related and Evidence-based Ranking Scheme) for comparing Android Apps that offer similar functionalities. E-SERS uses internal and external artifacts of Apps in the ranking process. E-SERS is a significant enhancement of our past evidence-based ranking framework called SERS. We have evaluated E-SERS on publicly accessible Apps from the Google Play Store and compared our rankings with prevalent ranking techniques. Our experiments demonstrate that E-SERS, leveraging its holistic approach, excels in identifying malicious Apps and consistently outperforms existing alternatives in ranking accuracy. By emphasizing comprehensive assessment, E-SERS empowers users, particularly those less experienced with technology, to make informed decisions and avoid potentially harmful Apps. This contribution addresses a critical gap in current App-ranking methodologies, enhancing the safety and security of today&amp;rsquo;s technologically dependent society.</description>
	<pubDate>2024-07-13</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 250-270: E-SERS: An Enhanced Approach to Trust-Based Ranking of Apps</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/3/13">doi: 10.3390/software3030013</a></p>
	<p>Authors:
		Nahida Chowdhury
		Ayush Maharjan
		Rajeev R. Raje
		</p>
	<p>The number of mobile applications (&amp;ldquo;Apps&amp;rdquo;) has grown significantly in recent years. App Stores rank/recommend Apps based on factors such as average star ratings and the number of installs. Such rankings do not focus on the internal artifacts of Apps (e.g., security vulnerabilities). If internal artifacts are ignored, users may fail to estimate the potential risks associated with installing Apps. In this research, we present a framework called E-SERS (Enhanced Security-related and Evidence-based Ranking Scheme) for comparing Android Apps that offer similar functionalities. E-SERS uses internal and external artifacts of Apps in the ranking process. E-SERS is a significant enhancement of our past evidence-based ranking framework called SERS. We have evaluated E-SERS on publicly accessible Apps from the Google Play Store and compared our rankings with prevalent ranking techniques. Our experiments demonstrate that E-SERS, leveraging its holistic approach, excels in identifying malicious Apps and consistently outperforms existing alternatives in ranking accuracy. By emphasizing comprehensive assessment, E-SERS empowers users, particularly those less experienced with technology, to make informed decisions and avoid potentially harmful Apps. This contribution addresses a critical gap in current App-ranking methodologies, enhancing the safety and security of today&amp;rsquo;s technologically dependent society.</p>
	]]></content:encoded>

	<dc:title>E-SERS: An Enhanced Approach to Trust-Based Ranking of Apps</dc:title>
			<dc:creator>Nahida Chowdhury</dc:creator>
			<dc:creator>Ayush Maharjan</dc:creator>
			<dc:creator>Rajeev R. Raje</dc:creator>
		<dc:identifier>doi: 10.3390/software3030013</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-07-13</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-07-13</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>250</prism:startingPage>
		<prism:doi>10.3390/software3030013</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/3/13</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/2/12">

	<title>Software, Vol. 3, Pages 227-249: CORE-ReID: Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-Identification</title>
	<link>https://www.mdpi.com/2674-113X/3/2/12</link>
	<description>This study introduces a novel framework, &amp;ldquo;Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-identification (CORE-ReID)&amp;rdquo;, to address Unsupervised Domain Adaptation (UDA) for Person Re-identification (ReID). The framework utilizes CycleGAN to generate diverse data that harmonize differences in image characteristics from different camera sources in the pre-training stage. In the fine-tuning stage, based on a pair of teacher&amp;ndash;student networks, the framework integrates multi-view features for multi-level clustering to derive diverse pseudo-labels. A learnable Ensemble Fusion component that focuses on fine-grained local information within global features is introduced to enhance learning comprehensiveness and avoid the ambiguity associated with multiple pseudo-labels. Experimental results on three common UDA benchmarks in Person ReID demonstrated significant performance gains over state-of-the-art approaches. Additional enhancements, such as an Efficient Channel Attention Block and Bidirectional Mean Feature Normalization to mitigate deviation effects, together with the adaptive fusion of global and local features in the ResNet-based model, further strengthen the framework. The proposed framework ensures clarity in fusion features, avoids ambiguity, and achieves high accuracy in terms of Mean Average Precision, Top-1, Top-5, and Top-10, positioning it as an advanced and effective solution for UDA in Person ReID.</description>
	<pubDate>2024-06-03</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 227-249: CORE-ReID: Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-Identification</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/2/12">doi: 10.3390/software3020012</a></p>
	<p>Authors:
		Trinh Quoc Nguyen
		Oky Dicky Ardiansyah Prima
		Katsuyoshi Hotta
		</p>
	<p>This study introduces a novel framework, &amp;ldquo;Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-identification (CORE-ReID)&amp;rdquo;, to address Unsupervised Domain Adaptation (UDA) for Person Re-identification (ReID). The framework utilizes CycleGAN to generate diverse data that harmonize differences in image characteristics from different camera sources in the pre-training stage. In the fine-tuning stage, based on a pair of teacher&amp;ndash;student networks, the framework integrates multi-view features for multi-level clustering to derive diverse pseudo-labels. A learnable Ensemble Fusion component that focuses on fine-grained local information within global features is introduced to enhance learning comprehensiveness and avoid the ambiguity associated with multiple pseudo-labels. Experimental results on three common UDA benchmarks in Person ReID demonstrated significant performance gains over state-of-the-art approaches. Additional enhancements, such as an Efficient Channel Attention Block and Bidirectional Mean Feature Normalization to mitigate deviation effects, together with the adaptive fusion of global and local features in the ResNet-based model, further strengthen the framework. The proposed framework ensures clarity in fusion features, avoids ambiguity, and achieves high accuracy in terms of Mean Average Precision, Top-1, Top-5, and Top-10, positioning it as an advanced and effective solution for UDA in Person ReID.</p>
	]]></content:encoded>

	<dc:title>CORE-ReID: Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-Identification</dc:title>
			<dc:creator>Trinh Quoc Nguyen</dc:creator>
			<dc:creator>Oky Dicky Ardiansyah Prima</dc:creator>
			<dc:creator>Katsuyoshi Hotta</dc:creator>
		<dc:identifier>doi: 10.3390/software3020012</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-06-03</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-06-03</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>227</prism:startingPage>
		<prism:doi>10.3390/software3020012</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/2/12</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/2/11">

	<title>Software, Vol. 3, Pages 226: Expression of Concern: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427&amp;ndash;446</title>
	<link>https://www.mdpi.com/2674-113X/3/2/11</link>
	<description>With this notice, the Software Editorial Office states their awareness of the concerns regarding the appropriateness of the authorship and origins of the study of the published manuscript [...]</description>
	<pubDate>2024-05-06</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 226: Expression of Concern: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427&amp;ndash;446</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/2/11">doi: 10.3390/software3020011</a></p>
	<p>Authors:
		Software Editorial Office
		</p>
	<p>With this notice, the Software Editorial Office states their awareness of the concerns regarding the appropriateness of the authorship and origins of the study of the published manuscript [...]</p>
	]]></content:encoded>

	<dc:title>Expression of Concern: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427&amp;ndash;446</dc:title>
			<dc:creator>Software Editorial Office</dc:creator>
		<dc:identifier>doi: 10.3390/software3020011</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-05-06</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-05-06</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Expression of Concern</prism:section>
	<prism:startingPage>226</prism:startingPage>
		<prism:doi>10.3390/software3020011</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/2/11</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/2/10">

	<title>Software, Vol. 3, Pages 206-225: A MongoDB Document Reconstruction Support System Using Natural Language Processing</title>
	<link>https://www.mdpi.com/2674-113X/3/2/10</link>
	<description>Document-oriented databases, a type of Not Only SQL (NoSQL) database, are gaining popularity owing to their flexibility in data handling and their performance on large-scale data. MongoDB, a typical document-oriented database, stores data in the JSON format, in which an upper (parent) field contains lower (child) fields, and related fields share the same parent. A feature of document-oriented databases is that data can be stored dynamically at arbitrary locations without explicitly defining a schema in advance. This flexibility can violate the above structural property and cause difficulties for application-program readability and database maintenance. To address these issues, we propose a reconstruction support method for document structures in MongoDB. The method uses the strength of the Has-A relationship between parent and child fields, as well as the similarity of field names in MongoDB documents computed with natural language processing, to reconstruct the data structure in MongoDB. As a result, the method transforms the parent and child fields into more coherent data structures. We evaluated our method using real-world data and demonstrated its effectiveness.</description>
	<pubDate>2024-05-02</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 206-225: A MongoDB Document Reconstruction Support System Using Natural Language Processing</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/2/10">doi: 10.3390/software3020010</a></p>
	<p>Authors:
		Kohei Hamaji
		Yukikazu Nakamoto
		</p>
	<p>Document-oriented databases, a type of Not Only SQL (NoSQL) database, are gaining popularity owing to their flexibility in data handling and their performance on large-scale data. MongoDB, a typical document-oriented database, stores data in the JSON format, in which an upper (parent) field contains lower (child) fields, and related fields share the same parent. A feature of document-oriented databases is that data can be stored dynamically at arbitrary locations without explicitly defining a schema in advance. This flexibility can violate the above structural property and cause difficulties for application-program readability and database maintenance. To address these issues, we propose a reconstruction support method for document structures in MongoDB. The method uses the strength of the Has-A relationship between parent and child fields, as well as the similarity of field names in MongoDB documents computed with natural language processing, to reconstruct the data structure in MongoDB. As a result, the method transforms the parent and child fields into more coherent data structures. We evaluated our method using real-world data and demonstrated its effectiveness.</p>
	]]></content:encoded>

	<dc:title>A MongoDB Document Reconstruction Support System Using Natural Language Processing</dc:title>
			<dc:creator>Kohei Hamaji</dc:creator>
			<dc:creator>Yukikazu Nakamoto</dc:creator>
		<dc:identifier>doi: 10.3390/software3020010</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-05-02</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-05-02</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>206</prism:startingPage>
		<prism:doi>10.3390/software3020010</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/2/10</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/2/9">

	<title>Software, Vol. 3, Pages 183-205: Defining and Researching &amp;ldquo;Dynamic Systems of Systems&amp;rdquo;</title>
	<link>https://www.mdpi.com/2674-113X/3/2/9</link>
	<description>Digital transformation is advancing across industries, enabling products, processes, and business models that change the way we communicate, interact, and live. It radically influences the evolution of existing systems of systems (SoSs), such as mobility systems, production systems, energy systems, or cities, that have grown over a long time. In this article, we discuss what this means for the future of software engineering based on the results of a research project called DynaSoS. We present the data collection methods we applied, including interviews, a literature review, and workshops. As one contribution, we propose a classification scheme for deriving and structuring research challenges and directions. The scheme comprises two dimensions: scope and characteristics. The scope motivates and structures the trend toward an increasingly connected world. The characteristics enhance and adapt established SoS characteristics in order to include novel aspects and to better align them with the structuring of research into different research areas or communities. As a second contribution, we present research challenges using the classification scheme. We have observed that the scheme puts research challenges into context, which is needed for interpreting them. Accordingly, we conclude that our proposals contribute to a common understanding and vision for engineering dynamic SoS.</description>
	<pubDate>2024-05-01</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 183-205: Defining and Researching &amp;ldquo;Dynamic Systems of Systems&amp;rdquo;</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/2/9">doi: 10.3390/software3020009</a></p>
	<p>Authors:
		Rasmus Adler
		Frank Elberzhager
		Rodrigo Falcão
		Julien Siebert
		</p>
	<p>Digital transformation is advancing across industries, enabling products, processes, and business models that change the way we communicate, interact, and live. It radically influences the evolution of existing systems of systems (SoSs), such as mobility systems, production systems, energy systems, or cities, that have grown over a long time. In this article, we discuss what this means for the future of software engineering based on the results of a research project called DynaSoS. We present the data collection methods we applied, including interviews, a literature review, and workshops. As one contribution, we propose a classification scheme for deriving and structuring research challenges and directions. The scheme comprises two dimensions: scope and characteristics. The scope motivates and structures the trend toward an increasingly connected world. The characteristics enhance and adapt established SoS characteristics in order to include novel aspects and to better align them with the structuring of research into different research areas or communities. As a second contribution, we present research challenges using the classification scheme. We have observed that the scheme puts research challenges into context, which is needed for interpreting them. Accordingly, we conclude that our proposals contribute to a common understanding and vision for engineering dynamic SoS.</p>
	]]></content:encoded>

	<dc:title>Defining and Researching &amp;ldquo;Dynamic Systems of Systems&amp;rdquo;</dc:title>
			<dc:creator>Rasmus Adler</dc:creator>
			<dc:creator>Frank Elberzhager</dc:creator>
			<dc:creator>Rodrigo Falcão</dc:creator>
			<dc:creator>Julien Siebert</dc:creator>
		<dc:identifier>doi: 10.3390/software3020009</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-05-01</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-05-01</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>183</prism:startingPage>
		<prism:doi>10.3390/software3020009</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/2/9</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/2/8">

	<title>Software, Vol. 3, Pages 169-182: NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors</title>
	<link>https://www.mdpi.com/2674-113X/3/2/8</link>
	<description>NICE&amp;mdash;Noise Interactive Catalogue Explorer&amp;mdash;is a web service developed for rapid qualitative glitch analysis in gravitational wave data. Glitches are transient noise events that can smother the gravitational wave signal in data recorded by gravitational wave interferometer detectors. NICE provides interactive graphical tools to support detector noise characterization activities, in particular, the analysis of glitches from past and current observing runs, passing from glitch population visualization to individual glitch characterization. The NICE back-end API consists of a multi-database structure that brings order to glitch metadata generated by external detector characterization tools so that such information can be easily requested by gravitational wave scientists. Another novelty introduced by NICE is the interactive front-end infrastructure focused on investigating the instrumental and environmental origin of glitches, which uses labels determined by their time&amp;ndash;frequency morphology. The NICE domain is intended for integration with the Advanced Virgo, Advanced LIGO, and KAGRA characterization pipelines, and it will interface with systematic classification activities related to the transient noise sources present in the Virgo detector.</description>
	<pubDate>2024-04-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 169-182: NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/2/8">doi: 10.3390/software3020008</a></p>
	<p>Authors:
		Nunziato Sorrentino
		Massimiliano Razzano
		Francesco Di Renzo
		Francesco Fidecaro
		Gary Hemming
		</p>
	<p>NICE&amp;mdash;Noise Interactive Catalogue Explorer&amp;mdash;is a web service developed for rapid qualitative glitch analysis in gravitational wave data. Glitches are transient noise events that can smother the gravitational wave signal in data recorded by gravitational wave interferometer detectors. NICE provides interactive graphical tools to support detector noise characterization activities, in particular, the analysis of glitches from past and current observing runs, passing from glitch population visualization to individual glitch characterization. The NICE back-end API consists of a multi-database structure that brings order to glitch metadata generated by external detector characterization tools so that such information can be easily requested by gravitational wave scientists. Another novelty introduced by NICE is the interactive front-end infrastructure focused on investigating the instrumental and environmental origin of glitches, which uses labels determined by their time&amp;ndash;frequency morphology. The NICE domain is intended for integration with the Advanced Virgo, Advanced LIGO, and KAGRA characterization pipelines, and it will interface with systematic classification activities related to the transient noise sources present in the Virgo detector.</p>
	]]></content:encoded>

	<dc:title>NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors</dc:title>
			<dc:creator>Nunziato Sorrentino</dc:creator>
			<dc:creator>Massimiliano Razzano</dc:creator>
			<dc:creator>Francesco Di Renzo</dc:creator>
			<dc:creator>Francesco Fidecaro</dc:creator>
			<dc:creator>Gary Hemming</dc:creator>
		<dc:identifier>doi: 10.3390/software3020008</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-04-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-04-18</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>169</prism:startingPage>
		<prism:doi>10.3390/software3020008</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/2/8</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/2/7">

	<title>Software, Vol. 3, Pages 146-168: Revolutionizing Coffee Farming: A Mobile App with GPS-Enabled Reporting for Rapid and Accurate On-Site Detection of Coffee Leaf Diseases Using Integrated Deep Learning</title>
	<link>https://www.mdpi.com/2674-113X/3/2/7</link>
	<description>Coffee leaf diseases are a significant challenge for coffee cultivation. They can reduce yields, impact bean quality, and necessitate costly disease management efforts. Manual monitoring is labor-intensive and time-consuming. This research introduces a pioneering mobile application equipped with global positioning system (GPS)-enabled reporting capabilities for on-site coffee leaf disease detection. The application integrates advanced deep learning (DL) techniques to empower farmers and agronomists with a rapid and accurate tool for identifying and managing coffee plant health. Leveraging the ubiquity of mobile devices, the app enables users to capture high-resolution images of coffee leaves directly in the field. These images are then processed in real-time using a pre-trained DL model optimized for efficient disease classification. Five models (Xception, ResNet50, Inception-v3, VGG16, and DenseNet) were evaluated on the dataset. All models showed promising performance; however, DenseNet achieved high scores on all four leaf classes, with a training accuracy of 99.57%. The inclusion of GPS functionality allows precise geotagging of each captured image, providing valuable location-specific information. Through extensive experimentation and validation, the app demonstrates impressive accuracy rates in disease classification. The results indicate the potential of this technology to revolutionize coffee farming practices, leading to improved crop yield and overall plant health.</description>
	<pubDate>2024-04-16</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 146-168: Revolutionizing Coffee Farming: A Mobile App with GPS-Enabled Reporting for Rapid and Accurate On-Site Detection of Coffee Leaf Diseases Using Integrated Deep Learning</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/2/7">doi: 10.3390/software3020007</a></p>
	<p>Authors:
		Eric Hitimana
		Martin Kuradusenge
		Omar Janvier Sinayobye
		Chrysostome Ufitinema
		Jane Mukamugema
		Theoneste Murangira
		Emmanuel Masabo
		Peter Rwibasira
		Diane Aimee Ingabire
		Simplice Niyonzima
		Gaurav Bajpai
		Simon Martin Mvuyekure
		Jackson Ngabonziza
		</p>
	<p>Coffee leaf diseases are a significant challenge for coffee cultivation. They can reduce yields, impact bean quality, and necessitate costly disease management efforts. Manual monitoring is labor-intensive and time-consuming. This research introduces a pioneering mobile application equipped with global positioning system (GPS)-enabled reporting capabilities for on-site coffee leaf disease detection. The application integrates advanced deep learning (DL) techniques to empower farmers and agronomists with a rapid and accurate tool for identifying and managing coffee plant health. Leveraging the ubiquity of mobile devices, the app enables users to capture high-resolution images of coffee leaves directly in the field. These images are then processed in real-time using a pre-trained DL model optimized for efficient disease classification. Five models (Xception, ResNet50, Inception-v3, VGG16, and DenseNet) were evaluated on the dataset. All models showed promising performance; however, DenseNet achieved high scores on all four leaf classes, with a training accuracy of 99.57%. The inclusion of GPS functionality allows precise geotagging of each captured image, providing valuable location-specific information. Through extensive experimentation and validation, the app demonstrates impressive accuracy rates in disease classification. The results indicate the potential of this technology to revolutionize coffee farming practices, leading to improved crop yield and overall plant health.</p>
	]]></content:encoded>

	<dc:title>Revolutionizing Coffee Farming: A Mobile App with GPS-Enabled Reporting for Rapid and Accurate On-Site Detection of Coffee Leaf Diseases Using Integrated Deep Learning</dc:title>
			<dc:creator>Eric Hitimana</dc:creator>
			<dc:creator>Martin Kuradusenge</dc:creator>
			<dc:creator>Omar Janvier Sinayobye</dc:creator>
			<dc:creator>Chrysostome Ufitinema</dc:creator>
			<dc:creator>Jane Mukamugema</dc:creator>
			<dc:creator>Theoneste Murangira</dc:creator>
			<dc:creator>Emmanuel Masabo</dc:creator>
			<dc:creator>Peter Rwibasira</dc:creator>
			<dc:creator>Diane Aimee Ingabire</dc:creator>
			<dc:creator>Simplice Niyonzima</dc:creator>
			<dc:creator>Gaurav Bajpai</dc:creator>
			<dc:creator>Simon Martin Mvuyekure</dc:creator>
			<dc:creator>Jackson Ngabonziza</dc:creator>
		<dc:identifier>doi: 10.3390/software3020007</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-04-16</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-04-16</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>146</prism:startingPage>
		<prism:doi>10.3390/software3020007</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/2/7</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/1/6">

	<title>Software, Vol. 3, Pages 107-145: A Process for Monitoring the Impact of Architecture Principles on Sustainability: An Industrial Case Study</title>
	<link>https://www.mdpi.com/2674-113X/3/1/6</link>
	<description>Architecture principles affect a software system holistically. Given their alignment with a business strategy, they should be incorporated within the validation process covering aspects of sustainability. However, current research discusses the influence of architecture principles on sustainability in a limited context. Our objective was to introduce a reusable process for monitoring and evaluating the impact of architecture principles on sustainability from a software architecture perspective. We sought to demonstrate the application of such a process in professional practice. A qualitative case study was conducted in the context of a Dutch airport management company. Data collection involved a case analysis and the execution of two rounds of expert interviews. We (i) identified a set of case-related key performance indicators, (ii) utilized commonly accepted measurement tools, and (iii) employed graphical representations in the form of spider charts to monitor the sustainability impacts. The real-world observations were evaluated through a concluding focus group. Our findings indicated that architecture principles were a feasible mechanism with which to address sustainability across all different architecture layers within the enterprise. The experts considered the sustainability analysis valuable in guiding the software architecture process towards sustainability. With the emphasis on principles, we facilitate industry adoption by embedding sustainability in existing mechanisms.</description>
	<pubDate>2024-03-13</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 107-145: A Process for Monitoring the Impact of Architecture Principles on Sustainability: An Industrial Case Study</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/1/6">doi: 10.3390/software3010006</a></p>
	<p>Authors:
		Markus Funke
		Patricia Lago
		Roberto Verdecchia
		Roel Donker
		</p>
	<p>Architecture principles affect a software system holistically. Given their alignment with a business strategy, they should be incorporated within the validation process covering aspects of sustainability. However, current research discusses the influence of architecture principles on sustainability in a limited context. Our objective was to introduce a reusable process for monitoring and evaluating the impact of architecture principles on sustainability from a software architecture perspective. We sought to demonstrate the application of such a process in professional practice. A qualitative case study was conducted in the context of a Dutch airport management company. Data collection involved a case analysis and the execution of two rounds of expert interviews. We (i) identified a set of case-related key performance indicators, (ii) utilized commonly accepted measurement tools, and (iii) employed graphical representations in the form of spider charts to monitor the sustainability impacts. The real-world observations were evaluated through a concluding focus group. Our findings indicated that architecture principles were a feasible mechanism with which to address sustainability across all different architecture layers within the enterprise. The experts considered the sustainability analysis valuable in guiding the software architecture process towards sustainability. With the emphasis on principles, we facilitate industry adoption by embedding sustainability in existing mechanisms.</p>
	]]></content:encoded>

	<dc:title>A Process for Monitoring the Impact of Architecture Principles on Sustainability: An Industrial Case Study</dc:title>
			<dc:creator>Markus Funke</dc:creator>
			<dc:creator>Patricia Lago</dc:creator>
			<dc:creator>Roberto Verdecchia</dc:creator>
			<dc:creator>Roel Donker</dc:creator>
		<dc:identifier>doi: 10.3390/software3010006</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-03-13</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-03-13</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>107</prism:startingPage>
		<prism:doi>10.3390/software3010006</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/1/6</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/1/5">

	<title>Software, Vol. 3, Pages 81-106: Emergent Information Processing: Observations, Experiments, and Future Directions</title>
	<link>https://www.mdpi.com/2674-113X/3/1/5</link>
	<description>Science is currently becoming aware of the challenges in the understanding of the very root mechanisms of massively parallel computations that are observed in literally all scientific disciplines, ranging from cosmology to physics, chemistry, biochemistry, and biology. This leads us to the main motivation and simultaneously to the central thesis of this review: &amp;ldquo;Can we design artificial, massively parallel, self-organized, emergent, error-resilient computational environments?&amp;rdquo; The thesis is solely studied on cellular automata. Initially, an overview of the basic building blocks enabling us to reach this end goal is provided. Important information dealing with this topic is reviewed along with highly expressive animations generated by the open-source, Python, cellular automata software GoL-N24. A large number of simulations along with examples and counter-examples, finalized by a list of the future directions, are giving hints and partial answers to the main thesis. Together, these pose the crucial question of whether there is something deeper beyond the Turing machine theoretical description of massively parallel computing. The perspective, future directions, including applications in robotics and biology of this research, are discussed in the light of known information.</description>
	<pubDate>2024-03-05</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 81-106: Emergent Information Processing: Observations, Experiments, and Future Directions</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/1/5">doi: 10.3390/software3010005</a></p>
	<p>Authors:
		Jiří Kroc
		</p>
	<p>Science is currently becoming aware of the challenges in the understanding of the very root mechanisms of massively parallel computations that are observed in literally all scientific disciplines, ranging from cosmology to physics, chemistry, biochemistry, and biology. This leads us to the main motivation and simultaneously to the central thesis of this review: &ldquo;Can we design artificial, massively parallel, self-organized, emergent, error-resilient computational environments?&rdquo; The thesis is solely studied on cellular automata. Initially, an overview of the basic building blocks enabling us to reach this end goal is provided. Important information dealing with this topic is reviewed along with highly expressive animations generated by the open-source, Python, cellular automata software GoL-N24. A large number of simulations along with examples and counter-examples, finalized by a list of the future directions, are giving hints and partial answers to the main thesis. Together, these pose the crucial question of whether there is something deeper beyond the Turing machine theoretical description of massively parallel computing. The perspective, future directions, including applications in robotics and biology of this research, are discussed in the light of known information.</p>
	]]></content:encoded>

	<dc:title>Emergent Information Processing: Observations, Experiments, and Future Directions</dc:title>
			<dc:creator>Jiří Kroc</dc:creator>
		<dc:identifier>doi: 10.3390/software3010005</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-03-05</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-03-05</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Review</prism:section>
	<prism:startingPage>81</prism:startingPage>
		<prism:doi>10.3390/software3010005</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/1/5</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/1/4">

	<title>Software, Vol. 3, Pages 62-80: Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems</title>
	<link>https://www.mdpi.com/2674-113X/3/1/4</link>
	<description>This paper presents a pioneering methodology for refining product recommender systems, introducing a synergistic integration of unsupervised models&amp;mdash;K-means clustering, content-based filtering (CBF), and hierarchical clustering&amp;mdash;with the cutting-edge GPT-4 large language model (LLM). Its innovation lies in utilizing GPT-4 for model evaluation, harnessing its advanced natural language understanding capabilities to enhance the precision and relevance of product recommendations. A Flask-based API simplifies its implementation for e-commerce owners, allowing for the seamless training and evaluation of the models using CSV-formatted product data. The unique aspect of this approach lies in its ability to empower e-commerce with sophisticated unsupervised recommender system algorithms, while the GPT model significantly contributes to refining the semantic context of product features, resulting in a more personalized and effective product recommendation system. The experimental results underscore the superiority of this integrated framework, marking a significant advancement in the field of recommender systems and providing businesses with an efficient and scalable solution to optimize their product recommendations.</description>
	<pubDate>2024-02-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 62-80: Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/1/4">doi: 10.3390/software3010004</a></p>
	<p>Authors:
		Konstantinos I. Roumeliotis
		Nikolaos D. Tselikas
		Dimitrios K. Nasiopoulos
		</p>
	<p>This paper presents a pioneering methodology for refining product recommender systems, introducing a synergistic integration of unsupervised models&mdash;K-means clustering, content-based filtering (CBF), and hierarchical clustering&mdash;with the cutting-edge GPT-4 large language model (LLM). Its innovation lies in utilizing GPT-4 for model evaluation, harnessing its advanced natural language understanding capabilities to enhance the precision and relevance of product recommendations. A Flask-based API simplifies its implementation for e-commerce owners, allowing for the seamless training and evaluation of the models using CSV-formatted product data. The unique aspect of this approach lies in its ability to empower e-commerce with sophisticated unsupervised recommender system algorithms, while the GPT model significantly contributes to refining the semantic context of product features, resulting in a more personalized and effective product recommendation system. The experimental results underscore the superiority of this integrated framework, marking a significant advancement in the field of recommender systems and providing businesses with an efficient and scalable solution to optimize their product recommendations.</p>
	]]></content:encoded>

	<dc:title>Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems</dc:title>
			<dc:creator>Konstantinos I. Roumeliotis</dc:creator>
			<dc:creator>Nikolaos D. Tselikas</dc:creator>
			<dc:creator>Dimitrios K. Nasiopoulos</dc:creator>
		<dc:identifier>doi: 10.3390/software3010004</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-02-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-02-29</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>62</prism:startingPage>
		<prism:doi>10.3390/software3010004</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/1/4</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/1/3">

	<title>Software, Vol. 3, Pages 47-61: Deep-SDM: A Unified Computational Framework for Sequential Data Modeling Using Deep Learning Models</title>
	<link>https://www.mdpi.com/2674-113X/3/1/3</link>
	<description>Deep-SDM is a unified layer framework built on TensorFlow/Keras and written in Python 3.12. The framework aligns with the modular engineering principles for the design and development strategy. Transparency, reproducibility, and recombinability are the framework&amp;rsquo;s primary design criteria. The platform can extract valuable insights from numerical and text data and utilize them to predict future values by implementing long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN). Its end-to-end machine learning pipeline involves a sequence of tasks, including data exploration, input preparation, model construction, hyperparameter tuning, performance evaluations, visualization of results, and statistical analysis. The complete process is systematic and carefully organized, from data import to model selection, encapsulating it into a unified whole. The multiple subroutines work together to provide a user-friendly and conducive pipeline that is easy to use. We utilized the Deep-SDM framework to predict the Nepal Stock Exchange (NEPSE) index to validate its reproducibility and robustness and observed impressive results.</description>
	<pubDate>2024-02-28</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 47-61: Deep-SDM: A Unified Computational Framework for Sequential Data Modeling Using Deep Learning Models</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/1/3">doi: 10.3390/software3010003</a></p>
	<p>Authors:
		Nawa Raj Pokhrel
		Keshab Raj Dahal
		Ramchandra Rimal
		Hum Nath Bhandari
		Binod Rimal
		</p>
	<p>Deep-SDM is a unified layer framework built on TensorFlow/Keras and written in Python 3.12. The framework aligns with the modular engineering principles for the design and development strategy. Transparency, reproducibility, and recombinability are the framework&rsquo;s primary design criteria. The platform can extract valuable insights from numerical and text data and utilize them to predict future values by implementing long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN). Its end-to-end machine learning pipeline involves a sequence of tasks, including data exploration, input preparation, model construction, hyperparameter tuning, performance evaluations, visualization of results, and statistical analysis. The complete process is systematic and carefully organized, from data import to model selection, encapsulating it into a unified whole. The multiple subroutines work together to provide a user-friendly and conducive pipeline that is easy to use. We utilized the Deep-SDM framework to predict the Nepal Stock Exchange (NEPSE) index to validate its reproducibility and robustness and observed impressive results.</p>
	]]></content:encoded>

	<dc:title>Deep-SDM: A Unified Computational Framework for Sequential Data Modeling Using Deep Learning Models</dc:title>
			<dc:creator>Nawa Raj Pokhrel</dc:creator>
			<dc:creator>Keshab Raj Dahal</dc:creator>
			<dc:creator>Ramchandra Rimal</dc:creator>
			<dc:creator>Hum Nath Bhandari</dc:creator>
			<dc:creator>Binod Rimal</dc:creator>
		<dc:identifier>doi: 10.3390/software3010003</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-02-28</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-02-28</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>47</prism:startingPage>
		<prism:doi>10.3390/software3010003</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/1/3</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/1/2">

	<title>Software, Vol. 3, Pages 28-46: Automating SQL Injection and Cross-Site Scripting Vulnerability Remediation in Code</title>
	<link>https://www.mdpi.com/2674-113X/3/1/2</link>
	<description>Internet-based distributed systems dominate contemporary software applications. To enable these applications to operate securely, software developers must mitigate the threats posed by malicious actors. For instance, the developers must identify vulnerabilities in the software and eliminate them. However, to do so manually is a costly and time-consuming process. To reduce these costs, we designed and implemented Code Auto-Remediation for Enhanced Security (CARES), a web application that automatically identifies and remediates the two most common types of vulnerabilities in Java-based web applications: SQL injection (SQLi) and Cross-Site Scripting (XSS). As is shown by a case study presented in this paper, CARES mitigates these vulnerabilities by refactoring the Java code using the Intercepting Filter design pattern. The flexible, microservice-based CARES design can be readily extended to support other injection vulnerabilities, remediation design patterns, and programming languages.</description>
	<pubDate>2024-01-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 28-46: Automating SQL Injection and Cross-Site Scripting Vulnerability Remediation in Code</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/1/2">doi: 10.3390/software3010002</a></p>
	<p>Authors:
		Kedar Sambhus
		Yi Liu
		</p>
	<p>Internet-based distributed systems dominate contemporary software applications. To enable these applications to operate securely, software developers must mitigate the threats posed by malicious actors. For instance, the developers must identify vulnerabilities in the software and eliminate them. However, to do so manually is a costly and time-consuming process. To reduce these costs, we designed and implemented Code Auto-Remediation for Enhanced Security (CARES), a web application that automatically identifies and remediates the two most common types of vulnerabilities in Java-based web applications: SQL injection (SQLi) and Cross-Site Scripting (XSS). As is shown by a case study presented in this paper, CARES mitigates these vulnerabilities by refactoring the Java code using the Intercepting Filter design pattern. The flexible, microservice-based CARES design can be readily extended to support other injection vulnerabilities, remediation design patterns, and programming languages.</p>
	]]></content:encoded>

	<dc:title>Automating SQL Injection and Cross-Site Scripting Vulnerability Remediation in Code</dc:title>
			<dc:creator>Kedar Sambhus</dc:creator>
			<dc:creator>Yi Liu</dc:creator>
		<dc:identifier>doi: 10.3390/software3010002</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-01-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-01-12</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>28</prism:startingPage>
		<prism:doi>10.3390/software3010002</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/1/2</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/3/1/1">

	<title>Software, Vol. 3, Pages 1-27: A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach</title>
	<link>https://www.mdpi.com/2674-113X/3/1/1</link>
	<description>Automated software testing is a crucial yet resource-intensive aspect of software development. This burden on resources affects widespread adoption, with expertise and cost being the primary challenges preventing adoption. This paper focuses on automated testing driven by manually created test cases, acknowledging its advantages while critically analysing its implications across various development stages that are affecting its adoption. Additionally, it analyses the differences in perception between those in nontechnical and technical roles, where nontechnical roles (e.g., management) predominantly strive to reduce costs and delivery time, whereas technical roles are often driven by quality and completeness. This study investigates the difference in attitudes toward automated testing (AtAT), specifically focusing on why it is not adopted. This article presents a survey conducted among software industry professionals that spans various roles to determine common trends and draw conclusions. A two-stage approach is presented, comprising a comprehensive descriptive analysis and the use of Principal Component Analysis. In total, 81 participants received a series of 22 questions, and their responses were compared against job role types and experience levels. In summary, six key findings are presented that cover expertise, time, cost, tools and techniques, utilisation, organisation, and capacity.</description>
	<pubDate>2024-01-02</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 3, Pages 1-27: A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/3/1/1">doi: 10.3390/software3010001</a></p>
	<p>Authors:
		George Murazvu
		Simon Parkinson
		Saad Khan
		Na Liu
		Gary Allen
		</p>
	<p>Automated software testing is a crucial yet resource-intensive aspect of software development. This burden on resources affects widespread adoption, with expertise and cost being the primary challenges preventing adoption. This paper focuses on automated testing driven by manually created test cases, acknowledging its advantages while critically analysing its implications across various development stages that are affecting its adoption. Additionally, it analyses the differences in perception between those in nontechnical and technical roles, where nontechnical roles (e.g., management) predominantly strive to reduce costs and delivery time, whereas technical roles are often driven by quality and completeness. This study investigates the difference in attitudes toward automated testing (AtAT), specifically focusing on why it is not adopted. This article presents a survey conducted among software industry professionals that spans various roles to determine common trends and draw conclusions. A two-stage approach is presented, comprising a comprehensive descriptive analysis and the use of Principal Component Analysis. In total, 81 participants received a series of 22 questions, and their responses were compared against job role types and experience levels. In summary, six key findings are presented that cover expertise, time, cost, tools and techniques, utilisation, organisation, and capacity.</p>
	]]></content:encoded>

	<dc:title>A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach</dc:title>
			<dc:creator>George Murazvu</dc:creator>
			<dc:creator>Simon Parkinson</dc:creator>
			<dc:creator>Saad Khan</dc:creator>
			<dc:creator>Na Liu</dc:creator>
			<dc:creator>Gary Allen</dc:creator>
		<dc:identifier>doi: 10.3390/software3010001</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2024-01-02</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2024-01-02</prism:publicationDate>
	<prism:volume>3</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>1</prism:startingPage>
		<prism:doi>10.3390/software3010001</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/3/1/1</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/4/23">

	<title>Software, Vol. 2, Pages 504-516: A Comparative Study on the Ethical Responsibilities of Key Role Players in Software Development</title>
	<link>https://www.mdpi.com/2674-113X/2/4/23</link>
	<description>Background: Issues of lack of consideration for professional responsibility by software engineers (SEs) present major challenges and concerns to software users. Previous studies on the subject of ethical responsibility in software development assessed whether software development key stakeholders should take ethical responsibility for their actions in software development. However, such studies focused on assessing responses from a particular grouping in software development. Objective: Based on the revelation, this study seeks to evaluate the perceived ethical responsibilities in software development by juxtaposing the perceptions of students, educators and industry-based software practitioners on the ethical responsibility of software development key stakeholders in South Africa. Methods: To meet this objective, the study collected data using a survey, which was shared on an online platform. A total of 561 (44 from computing academics; 103 from industry-based software practitioners and 414 from software development students) responses were received. The collected data were analysed using descriptive and variance statistical analysis approaches. Results: The study found that there is no significant statistical difference in how students, educators and software practitioners perceive the ethical responsibility of software development key stakeholders. Conclusions: This finding of the study shows that the prevailing view is that various software development key stakeholders should be held ethically responsible for their contribution to software development. Furthermore, the organisation of ethical responsibilities used in this study provides a useful framework to guide future studies on this subject.</description>
	<pubDate>2023-12-05</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 504-516: A Comparative Study on the Ethical Responsibilities of Key Role Players in Software Development</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/4/23">doi: 10.3390/software2040023</a></p>
	<p>Authors:
		Senyeki Milton Marebane
		Robert Toyo Hans
		</p>
	<p>Background: Issues of lack of consideration for professional responsibility by software engineers (SEs) present major challenges and concerns to software users. Previous studies on the subject of ethical responsibility in software development assessed whether software development key stakeholders should take ethical responsibility for their actions in software development. However, such studies focused on assessing responses from a particular grouping in software development. Objective: Based on the revelation, this study seeks to evaluate the perceived ethical responsibilities in software development by juxtaposing the perceptions of students, educators and industry-based software practitioners on the ethical responsibility of software development key stakeholders in South Africa. Methods: To meet this objective, the study collected data using a survey, which was shared on an online platform. A total of 561 (44 from computing academics; 103 from industry-based software practitioners and 414 from software development students) responses were received. The collected data were analysed using descriptive and variance statistical analysis approaches. Results: The study found that there is no significant statistical difference in how students, educators and software practitioners perceive the ethical responsibility of software development key stakeholders. Conclusions: This finding of the study shows that the prevailing view is that various software development key stakeholders should be held ethically responsible for their contribution to software development. Furthermore, the organisation of ethical responsibilities used in this study provides a useful framework to guide future studies on this subject.</p>
	]]></content:encoded>

	<dc:title>A Comparative Study on the Ethical Responsibilities of Key Role Players in Software Development</dc:title>
			<dc:creator>Senyeki Milton Marebane</dc:creator>
			<dc:creator>Robert Toyo Hans</dc:creator>
		<dc:identifier>doi: 10.3390/software2040023</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-12-05</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-12-05</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>504</prism:startingPage>
		<prism:doi>10.3390/software2040023</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/4/23</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/4/22">

	<title>Software, Vol. 2, Pages 476-503: Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation</title>
	<link>https://www.mdpi.com/2674-113X/2/4/22</link>
	<description>BTR code (originally&amp;mdash;&amp;ldquo;Beam Transmission and Re-ionization&amp;rdquo;, 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include the beam interaction with plasmas and direct beam losses in tokamak. For many years, BTR has been widely used for various NBI designs for efficient heating and current drive in nuclear fusion devices for plasma scenario control and diagnostics. BTR analysis is especially important for &amp;lsquo;beam-driven&amp;rsquo; fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input in non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses with an account of ionized beam fractions and background electromagnetic fields; these results are used for the overall NBI performance analysis. BTR code is open for public usage; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, numerical methods, graphical user interface, and examples of BTR application. The code is still in evolution; basic support is available to all BTR users.</description>
	<pubDate>2023-10-24</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 476-503: Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/4/22">doi: 10.3390/software2040022</a></p>
	<p>Authors:
		Eugenia Dlougach
		Margarita Kichik
		</p>
	<p>BTR code (originally&mdash;&ldquo;Beam Transmission and Re-ionization&rdquo;, 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include the beam interaction with plasmas and direct beam losses in tokamak. For many years, BTR has been widely used for various NBI designs for efficient heating and current drive in nuclear fusion devices for plasma scenario control and diagnostics. BTR analysis is especially important for &lsquo;beam-driven&rsquo; fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input in non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses with an account of ionized beam fractions and background electromagnetic fields; these results are used for the overall NBI performance analysis. BTR code is open for public usage; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, numerical methods, graphical user interface, and examples of BTR application. The code is still in evolution; basic support is available to all BTR users.</p>
	]]></content:encoded>

	<dc:title>Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation</dc:title>
			<dc:creator>Eugenia Dlougach</dc:creator>
			<dc:creator>Margarita Kichik</dc:creator>
		<dc:identifier>doi: 10.3390/software2040022</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-10-24</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-10-24</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>476</prism:startingPage>
		<prism:doi>10.3390/software2040022</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/4/22</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/4/21">

	<title>Software, Vol. 2, Pages 447-475: A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain</title>
	<link>https://www.mdpi.com/2674-113X/2/4/21</link>
	<description>Software testing and debugging are standard practices of software quality assurance since they enable the identification and correction of failures. Benchmarks have been used in that context as a group of programs to support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art about the proposition of benchmarks on software testing and debugging domains. The study was carried out using systematic mapping procedures according to the guidelines widely followed by software engineering literature. The search identified 1674 studies, from which, 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use for their creation. The lack of data to support the comparison between available and novel software testing and debugging techniques is the main motivation for the proposition of benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required. Establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.</description>
	<pubDate>2023-10-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 447-475: A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/4/21">doi: 10.3390/software2040021</a></p>
	<p>Authors:
		Deuslirio da Silva-Junior
		Valdemar V. Graciano-Neto
		Diogo M. de-Freitas
		Plinio de Sá Leitão-Junior
		Mohamad Kassab
		</p>
	<p>Software testing and debugging are standard practices of software quality assurance since they enable the identification and correction of failures. Benchmarks have been used in that context as groups of programs that support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art on the proposition of benchmarks in the software testing and debugging domains. The study was carried out using systematic mapping procedures following guidelines widely adopted in the software engineering literature. The search identified 1674 studies, from which 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use for their creation. The lack of data to support the comparison between available and novel software testing and debugging techniques is the main motivation for the proposition of benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required. Establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.</p>
	]]></content:encoded>

	<dc:title>A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain</dc:title>
			<dc:creator>Deuslirio da Silva-Junior</dc:creator>
			<dc:creator>Valdemar V. Graciano-Neto</dc:creator>
			<dc:creator>Diogo M. de-Freitas</dc:creator>
			<dc:creator>Plinio de Sá Leitão-Junior</dc:creator>
			<dc:creator>Mohamad Kassab</dc:creator>
		<dc:identifier>doi: 10.3390/software2040021</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-10-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-10-12</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Systematic Review</prism:section>
	<prism:startingPage>447</prism:startingPage>
		<prism:doi>10.3390/software2040021</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/4/21</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/3/20">

	<title>Software, Vol. 2, Pages 427-446: RETRACTED: A Differential Datalog Interpreter</title>
	<link>https://www.mdpi.com/2674-113X/2/3/20</link>
	<description>The core reasoning task for datalog engines is materialization: the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computation is the recursive application of inference rules. Because materialization is costly, datalog engines must provide incremental materialization; that is, they must adjust the computation to new data instead of restarting from scratch. One major caveat is that deleting data is notoriously more involved than adding it, since one has to take into account all data that has been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance for additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations: one built on top of a lightweight relational engine, and two others that are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice.</description>
	<pubDate>2023-09-21</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 427-446: RETRACTED: A Differential Datalog Interpreter</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/3/20">doi: 10.3390/software2030020</a></p>
	<p>Authors:
		Matthew Stephenson
		</p>
	<p>The core reasoning task for datalog engines is materialization: the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computation is the recursive application of inference rules. Because materialization is costly, datalog engines must provide incremental materialization; that is, they must adjust the computation to new data instead of restarting from scratch. One major caveat is that deleting data is notoriously more involved than adding it, since one has to take into account all data that has been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance for additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations: one built on top of a lightweight relational engine, and two others that are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice.</p>
	]]></content:encoded>

	<dc:title>RETRACTED: A Differential Datalog Interpreter</dc:title>
			<dc:creator>Matthew Stephenson</dc:creator>
		<dc:identifier>doi: 10.3390/software2030020</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-09-21</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-09-21</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>427</prism:startingPage>
		<prism:doi>10.3390/software2030020</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/3/20</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/3/19">

	<title>Software, Vol. 2, Pages 400-426: User Authorization in Microservice-Based Applications</title>
	<link>https://www.mdpi.com/2674-113X/2/3/19</link>
	<description>Microservices have emerged as a prevalent architectural style in modern software development, replacing traditional monolithic architectures. The decomposition of business functionality into distributed microservices offers numerous benefits, but introduces increased complexity to the overall application. Consequently, the complexity of authorization in microservice-based applications necessitates a comprehensive approach that integrates authorization as an inherent component from the beginning. This paper presents a systematic approach for achieving fine-grained user authorization using Attribute-Based Access Control (ABAC). The proposed approach emphasizes structure preservation, facilitating traceability throughout the various phases of application development. As a result, authorization artifacts can be traced seamlessly from the initial analysis phase to the subsequent implementation phase. One significant contribution is the development of a language to formulate natural language authorization requirements and policies. These natural language authorization policies can subsequently be implemented using the policy language Rego. By leveraging the analysis of software artifacts, the proposed approach enables the creation of comprehensive and tailored authorization policies.</description>
	<pubDate>2023-09-19</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 400-426: User Authorization in Microservice-Based Applications</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/3/19">doi: 10.3390/software2030019</a></p>
	<p>Authors:
		Niklas Sänger
		Sebastian Abeck
		</p>
	<p>Microservices have emerged as a prevalent architectural style in modern software development, replacing traditional monolithic architectures. The decomposition of business functionality into distributed microservices offers numerous benefits, but introduces increased complexity to the overall application. Consequently, the complexity of authorization in microservice-based applications necessitates a comprehensive approach that integrates authorization as an inherent component from the beginning. This paper presents a systematic approach for achieving fine-grained user authorization using Attribute-Based Access Control (ABAC). The proposed approach emphasizes structure preservation, facilitating traceability throughout the various phases of application development. As a result, authorization artifacts can be traced seamlessly from the initial analysis phase to the subsequent implementation phase. One significant contribution is the development of a language to formulate natural language authorization requirements and policies. These natural language authorization policies can subsequently be implemented using the policy language Rego. By leveraging the analysis of software artifacts, the proposed approach enables the creation of comprehensive and tailored authorization policies.</p>
	]]></content:encoded>

	<dc:title>User Authorization in Microservice-Based Applications</dc:title>
			<dc:creator>Niklas Sänger</dc:creator>
			<dc:creator>Sebastian Abeck</dc:creator>
		<dc:identifier>doi: 10.3390/software2030019</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-09-19</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-09-19</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>400</prism:startingPage>
		<prism:doi>10.3390/software2030019</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/3/19</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/3/18">

	<title>Software, Vol. 2, Pages 377-399: A Quantitative Review of the Research on Business Process Management in Digital Transformation: A Bibliometric Approach</title>
	<link>https://www.mdpi.com/2674-113X/2/3/18</link>
	<description>In recent years, research on digital transformation (DT) and business process management (BPM) has gained significant attention in the field of business and management. This paper conducts a comprehensive bibliometric analysis of global research on DT and BPM from 2007 to 2022. A total of 326 papers were selected from Web of Science and Scopus for analysis. Using bibliometric methods, we evaluated the current state and future research trends of DT and BPM. Our analysis reveals that the number of publications on DT and BPM has grown significantly over time, with the Business Process Management Journal being the most active outlet for this research. The countries that have contributed the most to this field are Germany (with four universities in the top 10) and the USA. The analysis showed that “artificial intelligence” is a technology that has been studied extensively and is increasingly asserted to influence companies’ business processes. Additionally, the study provides valuable insights from the co-citation network analysis. Based on our findings, we provide recommendations for future research directions on DT and BPM. This study contributes to a better understanding of the current state of research on DT and BPM and provides insights for future research.</description>
	<pubDate>2023-09-01</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 377-399: A Quantitative Review of the Research on Business Process Management in Digital Transformation: A Bibliometric Approach</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/3/18">doi: 10.3390/software2030018</a></p>
	<p>Authors:
		Bui Quang Truong
		Anh Nguyen-Duc
		Nguyen Thi Cam Van
		</p>
	<p>In recent years, research on digital transformation (DT) and business process management (BPM) has gained significant attention in the field of business and management. This paper conducts a comprehensive bibliometric analysis of global research on DT and BPM from 2007 to 2022. A total of 326 papers were selected from Web of Science and Scopus for analysis. Using bibliometric methods, we evaluated the current state and future research trends of DT and BPM. Our analysis reveals that the number of publications on DT and BPM has grown significantly over time, with the Business Process Management Journal being the most active outlet for this research. The countries that have contributed the most to this field are Germany (with four universities in the top 10) and the USA. The analysis showed that “artificial intelligence” is a technology that has been studied extensively and is increasingly asserted to influence companies’ business processes. Additionally, the study provides valuable insights from the co-citation network analysis. Based on our findings, we provide recommendations for future research directions on DT and BPM. This study contributes to a better understanding of the current state of research on DT and BPM and provides insights for future research.</p>
	]]></content:encoded>

	<dc:title>A Quantitative Review of the Research on Business Process Management in Digital Transformation: A Bibliometric Approach</dc:title>
			<dc:creator>Bui Quang Truong</dc:creator>
			<dc:creator>Anh Nguyen-Duc</dc:creator>
			<dc:creator>Nguyen Thi Cam Van</dc:creator>
		<dc:identifier>doi: 10.3390/software2030018</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-09-01</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-09-01</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Systematic Review</prism:section>
	<prism:startingPage>377</prism:startingPage>
		<prism:doi>10.3390/software2030018</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/3/18</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/3/17">

	<title>Software, Vol. 2, Pages 350-376: Challenges and Solutions for Engineering Applications on Smartphones</title>
	<link>https://www.mdpi.com/2674-113X/2/3/17</link>
	<description>This paper starts by presenting the concept of a mobile application. A literature review is conducted, which shows that smartphone applications in the engineering domain are still lacking as independent simulation applications rather than mere extensions of existing smartphone tools. The challenges behind this gap are then discussed. Subsequently, three case studies of engineering applications for both smartphones and the internet are presented, alongside their solutions to the challenges identified. The first case study concerns an engineering application for systems control. The second case study focuses on an engineering application for composite materials. The third case study focuses on the finite element method and structure generation. The solutions to the presented challenges are then described through their implementation in the applications. The three case studies demonstrate a new way of thinking about the development of engineering smartphone applications.</description>
	<pubDate>2023-08-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 350-376: Challenges and Solutions for Engineering Applications on Smartphones</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/3/17">doi: 10.3390/software2030017</a></p>
	<p>Authors:
		Anthony Khoury
		Mohamad Abbas Kaddaha
		Maya Saade
		Rafic Younes
		Rachid Outbib
		Pascal Lafon
		</p>
	<p>This paper starts by presenting the concept of a mobile application. A literature review is conducted, which shows that smartphone applications in the engineering domain are still lacking as independent simulation applications rather than mere extensions of existing smartphone tools. The challenges behind this gap are then discussed. Subsequently, three case studies of engineering applications for both smartphones and the internet are presented, alongside their solutions to the challenges identified. The first case study concerns an engineering application for systems control. The second case study focuses on an engineering application for composite materials. The third case study focuses on the finite element method and structure generation. The solutions to the presented challenges are then described through their implementation in the applications. The three case studies demonstrate a new way of thinking about the development of engineering smartphone applications.</p>
	]]></content:encoded>

	<dc:title>Challenges and Solutions for Engineering Applications on Smartphones</dc:title>
			<dc:creator>Anthony Khoury</dc:creator>
			<dc:creator>Mohamad Abbas Kaddaha</dc:creator>
			<dc:creator>Maya Saade</dc:creator>
			<dc:creator>Rafic Younes</dc:creator>
			<dc:creator>Rachid Outbib</dc:creator>
			<dc:creator>Pascal Lafon</dc:creator>
		<dc:identifier>doi: 10.3390/software2030017</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-08-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-08-18</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>350</prism:startingPage>
		<prism:doi>10.3390/software2030017</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/3/17</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/3/16">

	<title>Software, Vol. 2, Pages 332-349: A Synthesis-Based Stateful Approach for Guiding Design Thinking in Embedded System Development</title>
	<link>https://www.mdpi.com/2674-113X/2/3/16</link>
	<description>Embedded systems have attracted more attention and have become more critical due to recent advancements in computer technology and their applications in various areas, such as healthcare, transportation, and manufacturing. Traditional software design approaches and finite state machines cannot provide sufficient support for two major reasons: the increasing need for more functions when designing an embedded system and the sequential controls in the implementation. This deficiency particularly discourages inexperienced engineers who use conventional methods to design embedded software. Hence, we proposed a design method, the Synthesis-Based Stateful Software Design Approach (SSSDA), which synthesizes two existing methods, the Synthesis-Based Software Design Framework (SSDF) and Process and Artifact State Transition Abstraction (PASTA), to remedy the drawbacks of conventional methods. To show how to conduct our proposed design approach and investigate how it supports embedded system design, we studied an industrial project developed by a sophomore student team. Our results showed that our proposed approach could significantly help students lay out modules, improve testability, and reduce defects.</description>
	<pubDate>2023-08-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 332-349: A Synthesis-Based Stateful Approach for Guiding Design Thinking in Embedded System Development</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/3/16">doi: 10.3390/software2030016</a></p>
	<p>Authors:
		Hung-Fu Chang
		Supannika Koolmanojwong Mobasser
		</p>
	<p>Embedded systems have attracted more attention and have become more critical due to recent advancements in computer technology and their applications in various areas, such as healthcare, transportation, and manufacturing. Traditional software design approaches and finite state machines cannot provide sufficient support for two major reasons: the increasing need for more functions when designing an embedded system and the sequential controls in the implementation. This deficiency particularly discourages inexperienced engineers who use conventional methods to design embedded software. Hence, we proposed a design method, the Synthesis-Based Stateful Software Design Approach (SSSDA), which synthesizes two existing methods, the Synthesis-Based Software Design Framework (SSDF) and Process and Artifact State Transition Abstraction (PASTA), to remedy the drawbacks of conventional methods. To show how to conduct our proposed design approach and investigate how it supports embedded system design, we studied an industrial project developed by a sophomore student team. Our results showed that our proposed approach could significantly help students lay out modules, improve testability, and reduce defects.</p>
	]]></content:encoded>

	<dc:title>A Synthesis-Based Stateful Approach for Guiding Design Thinking in Embedded System Development</dc:title>
			<dc:creator>Hung-Fu Chang</dc:creator>
			<dc:creator>Supannika Koolmanojwong Mobasser</dc:creator>
		<dc:identifier>doi: 10.3390/software2030016</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-08-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-08-12</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>332</prism:startingPage>
		<prism:doi>10.3390/software2030016</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/3/16</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/3/15">

	<title>Software, Vol. 2, Pages 310-331: Comparing Measured Agile Software Development Metrics Using an Agile Model-Based Software Engineering Approach versus Scrum Only</title>
	<link>https://www.mdpi.com/2674-113X/2/3/15</link>
	<description>This study compares the reliability of estimation, productivity, and defect rate metrics for sprints driven by a specific instance of the agile approach (i.e., scrum) and an agile model-based software engineering (MBSE) approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP) when developing a software system. The quasi-experimental study conducted ten sprints using each approach. The approaches were then evaluated based on their effectiveness in helping the product development team estimate the backlog items that they could build during a time-boxed sprint and deliver more product backlog items (PBI) with fewer defects. The commitment reliability (CR) was calculated to compare the reliability of estimation, with a measured average scrum-driven value of 0.81 versus a statistically different average sMBSAP-driven value of 0.94. Similarly, the average sprint velocity (SV) for the scrum-driven sprints was 26.8 versus 31.8 for the sMBSAP-driven sprints. The average defect density (DD) for the scrum-driven sprints was 0.91, while that of the sMBSAP-driven sprints was 0.63. The average defect leakage (DL) for the scrum-driven sprints was 0.20, while that of the sMBSAP-driven sprints was 0.15. The t-test analysis concluded that the sMBSAP-driven sprints were associated with a statistically significant larger mean CR and SV and a smaller mean DD and DL than the scrum-driven sprints. The overall results demonstrate formal quantitative benefits of an agile MBSE approach compared to an agile approach alone, thereby strengthening the case for considering agile MBSE methods within the software development community. Future work might include comparing agile and agile MBSE methods using alternative research designs and further software development objectives, techniques, and metrics.</description>
	<pubDate>2023-07-26</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 310-331: Comparing Measured Agile Software Development Metrics Using an Agile Model-Based Software Engineering Approach versus Scrum Only</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/3/15">doi: 10.3390/software2030015</a></p>
	<p>Authors:
		Moe Huss
		Daniel R. Herber
		John M. Borky
		</p>
	<p>This study compares the reliability of estimation, productivity, and defect rate metrics for sprints driven by a specific instance of the agile approach (i.e., scrum) and an agile model-based software engineering (MBSE) approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP) when developing a software system. The quasi-experimental study conducted ten sprints using each approach. The approaches were then evaluated based on their effectiveness in helping the product development team estimate the backlog items that they could build during a time-boxed sprint and deliver more product backlog items (PBI) with fewer defects. The commitment reliability (CR) was calculated to compare the reliability of estimation, with a measured average scrum-driven value of 0.81 versus a statistically different average sMBSAP-driven value of 0.94. Similarly, the average sprint velocity (SV) for the scrum-driven sprints was 26.8 versus 31.8 for the sMBSAP-driven sprints. The average defect density (DD) for the scrum-driven sprints was 0.91, while that of the sMBSAP-driven sprints was 0.63. The average defect leakage (DL) for the scrum-driven sprints was 0.20, while that of the sMBSAP-driven sprints was 0.15. The t-test analysis concluded that the sMBSAP-driven sprints were associated with a statistically significant larger mean CR and SV and a smaller mean DD and DL than the scrum-driven sprints. The overall results demonstrate formal quantitative benefits of an agile MBSE approach compared to an agile approach alone, thereby strengthening the case for considering agile MBSE methods within the software development community. Future work might include comparing agile and agile MBSE methods using alternative research designs and further software development objectives, techniques, and metrics.</p>
	]]></content:encoded>

	<dc:title>Comparing Measured Agile Software Development Metrics Using an Agile Model-Based Software Engineering Approach versus Scrum Only</dc:title>
			<dc:creator>Moe Huss</dc:creator>
			<dc:creator>Daniel R. Herber</dc:creator>
			<dc:creator>John M. Borky</dc:creator>
		<dc:identifier>doi: 10.3390/software2030015</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-07-26</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-07-26</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>3</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>310</prism:startingPage>
		<prism:doi>10.3390/software2030015</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/3/15</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/14">

	<title>Software, Vol. 2, Pages 292-309: A Case Study on Applications of the Hook Model in Software Products</title>
	<link>https://www.mdpi.com/2674-113X/2/2/14</link>
	<description>The Hook model is used in digital products to engage and retain users through the mechanism of habit formation. This paper explores the use of Hook model techniques in two mobile applications, one a popular taxi service (Uber) and the other a social network (Instagram). The goal of this paper is to explore the Hook cycle patterns in the two products and to identify commonalities and differences in how they are applied. Our results suggest that Hook cycle patterns appear with similar frequency; however, Instagram includes more internal Trigger calls. Uber uses fewer triggers to encourage usage, most probably because users already have a specific need for the application. For the same reason, Uber has less opportunity to fail in the reward delivery, while Instagram can use the failure (in providing a reward) as another trigger if the usage habit is already established. In addition, we introduce two types of Hook cycle patterns: internal (within a single use case) and external (transition between use cases). The insights obtained through the case studies serve as a practical reference for developing engaging and retention-focused applications.</description>
	<pubDate>2023-05-16</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 292-309: A Case Study on Applications of the Hook Model in Software Products</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/14">doi: 10.3390/software2020014</a></p>
	<p>Authors:
		Elena Lukyanchikova
		Nursultan Askarbekuly
		Hamna Aslam
		Manuel Mazzara
		</p>
	<p>The Hook model is used in digital products to engage and retain users through the mechanism of habit formation. This paper explores the use of Hook model techniques in two mobile applications, one a popular taxi service (Uber) and the other a social network (Instagram). The goal of this paper is to explore the Hook cycle patterns in the two products and to identify commonalities and differences in how they are applied. Our results suggest that Hook cycle patterns appear with similar frequency; however, Instagram includes more internal Trigger calls. Uber uses fewer triggers to encourage usage, most probably because users already have a specific need for the application. For the same reason, Uber has less opportunity to fail in the reward delivery, while Instagram can use the failure (in providing a reward) as another trigger if the usage habit is already established. In addition, we introduce two types of Hook cycle patterns: internal (within a single use case) and external (transition between use cases). The insights obtained through the case studies serve as a practical reference for developing engaging and retention-focused applications.</p>
	]]></content:encoded>

	<dc:title>A Case Study on Applications of the Hook Model in Software Products</dc:title>
			<dc:creator>Elena Lukyanchikova</dc:creator>
			<dc:creator>Nursultan Askarbekuly</dc:creator>
			<dc:creator>Hamna Aslam</dc:creator>
			<dc:creator>Manuel Mazzara</dc:creator>
		<dc:identifier>doi: 10.3390/software2020014</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-05-16</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-05-16</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>292</prism:startingPage>
		<prism:doi>10.3390/software2020014</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/14</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/13">

	<title>Software, Vol. 2, Pages 276-291: Efficient Measurement Method: Development of a System Using Measurement Templates for an Orthodontic Measurement Project</title>
	<link>https://www.mdpi.com/2674-113X/2/2/13</link>
	<description>We have developed a new system for measuring dental, gnathic, and facial areas with cephalogram-equivalent images created from computed tomographic imaging data. An advantage of this collaborative system is that a measurement template and automated processing are used. First, experienced orthodontists were provided with the measurement templates; they then moved the measurement markers to the specified landmarks on the cephalogram in the template. Subsequently, the program automatically detected the coordinates of the markers and calculated the distance between those coordinates. The appropriate use of this system leads to highly accurate results in large quantities of measurements in a short time by means of both manual and automatic processing. The system was developed to contribute to worldwide research into dental and craniofacial measurements; the research involved 500 patients, and the system worked successfully.</description>
	<pubDate>2023-05-12</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 276-291: Efficient Measurement Method: Development of a System Using Measurement Templates for an Orthodontic Measurement Project</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/13">doi: 10.3390/software2020013</a></p>
	<p>Authors:
		Harumichi Koga
		Katsuhiko Taki
		Ayano Masugi
		</p>
	<p>We have developed a new system for measuring dental, gnathic, and facial areas with cephalogram-equivalent images created from computed tomographic imaging data. An advantage of this collaborative system is its use of measurement templates and automated processing. First, experienced orthodontists were provided with the measurement templates; they then moved the measurement markers to the specified landmarks on the cephalogram in the template. Subsequently, the program automatically detected the coordinates of the markers and calculated the distance between those coordinates. Appropriate use of this system yields highly accurate results for large quantities of measurements in a short time, combining manual and automatic processing. The system was developed to contribute to worldwide research into dental and craniofacial measurements; the research involved 500 patients, and the system worked successfully.</p>
	]]></content:encoded>

	<dc:title>Efficient Measurement Method: Development of a System Using Measurement Templates for an Orthodontic Measurement Project</dc:title>
			<dc:creator>Harumichi Koga</dc:creator>
			<dc:creator>Katsuhiko Taki</dc:creator>
			<dc:creator>Ayano Masugi</dc:creator>
		<dc:identifier>doi: 10.3390/software2020013</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-05-12</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-05-12</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>276</prism:startingPage>
		<prism:doi>10.3390/software2020013</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/13</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/12">

	<title>Software, Vol. 2, Pages 258-275: Transforming a Computational Model from a Research Tool to a Software Product: A Case Study from Arc Welding Research</title>
	<link>https://www.mdpi.com/2674-113X/2/2/12</link>
	<description>Arc welding is a thermal plasma process widely used to join metals. An arc welding model that couples fluid dynamic and electromagnetic equations was initially developed as a research tool. Subsequently, it was applied to improve and optimise industrial implementations of arc welding. The model includes the arc plasma, the electrode, and the workpiece in the computational domain. It incorporates several features to ensure numerical accuracy and reduce computation time and memory requirements. The arc welding code has been refactored into commercial-grade Windows software, ArcWeld, to address the needs of industrial customers. The methods used to develop ArcWeld and its extension to new arc welding regimes, which used the Workspace workflow platform, are presented. The transformation of the model to an integrated software application means that non-experts can now run the code after only elementary training. The user can easily visualise the results, improving the ability to analyse and generate insights into the arc welding process being modelled. These changes mean that scientific progress is accelerated, and that the software can be used in industry and assist welders&#8217; training. The methods used are transferrable to many other research codes.</description>
	<pubDate>2023-05-08</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 258-275: Transforming a Computational Model from a Research Tool to a Software Product: A Case Study from Arc Welding Research</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/12">doi: 10.3390/software2020012</a></p>
	<p>Authors:
		Anthony B. Murphy
		David G. Thomas
		Fiona F. Chen
		Junting Xiang
		Yuqing Feng
		</p>
	<p>Arc welding is a thermal plasma process widely used to join metals. An arc welding model that couples fluid dynamic and electromagnetic equations was initially developed as a research tool. Subsequently, it was applied to improve and optimise industrial implementations of arc welding. The model includes the arc plasma, the electrode, and the workpiece in the computational domain. It incorporates several features to ensure numerical accuracy and reduce computation time and memory requirements. The arc welding code has been refactored into commercial-grade Windows software, ArcWeld, to address the needs of industrial customers. The methods used to develop ArcWeld and its extension to new arc welding regimes, which used the Workspace workflow platform, are presented. The transformation of the model to an integrated software application means that non-experts can now run the code after only elementary training. The user can easily visualise the results, improving the ability to analyse and generate insights into the arc welding process being modelled. These changes mean that scientific progress is accelerated, and that the software can be used in industry and assist welders&rsquo; training. The methods used are transferrable to many other research codes.</p>
	]]></content:encoded>

	<dc:title>Transforming a Computational Model from a Research Tool to a Software Product: A Case Study from Arc Welding Research</dc:title>
			<dc:creator>Anthony B. Murphy</dc:creator>
			<dc:creator>David G. Thomas</dc:creator>
			<dc:creator>Fiona F. Chen</dc:creator>
			<dc:creator>Junting Xiang</dc:creator>
			<dc:creator>Yuqing Feng</dc:creator>
		<dc:identifier>doi: 10.3390/software2020012</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-05-08</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-05-08</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>258</prism:startingPage>
		<prism:doi>10.3390/software2020012</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/12</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/11">

	<title>Software, Vol. 2, Pages 234-257: An Agile Model-Based Software Engineering Approach Illustrated through the Development of a Health Technology System</title>
	<link>https://www.mdpi.com/2674-113X/2/2/11</link>
	<description>Model-Based Software Engineering (MBSE) is an architecture-based software development approach. Agile, on the other hand, is a lightweight system development approach that originated in software development. To bring together the benefits of both approaches, this article proposes an integrated Agile MBSE approach that adopts a specific instance of the Agile approach (i.e., Scrum) in combination with a specific instance of an MBSE approach (i.e., Model-Based System Architecture Process&#8212;&#8220;MBSAP&#8221;) to create an Agile MBSE approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP). The proposed approach was validated through a pilot study that developed a health technology system over one year, successfully producing the desired software product. This work focuses on determining whether the proposed sMBSAP approach can deliver the desired Product Increments with the support of an MBSE process. The interaction of the Product Development Team with the MBSE tool, the generation of the system model, and the delivery of the Product Increments were observed. The preliminary results showed that the proposed approach contributed to achieving the desired system development outcomes and, at the same time, generated complete system architecture artifacts that would not have been developed if Agile had been used alone. Therefore, the main contribution of this research lies in introducing a practical and operational method for merging Agile and MBSE. In parallel, the results suggest that sMBSAP is a middle ground that is more aligned with federal and state regulations, as it addresses technical debt concerns. Future work will analyze the results of a quasi-experiment on this approach focused on measuring system development performance through common metrics.</description>
	<pubDate>2023-04-17</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 234-257: An Agile Model-Based Software Engineering Approach Illustrated through the Development of a Health Technology System</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/11">doi: 10.3390/software2020011</a></p>
	<p>Authors:
		Moe Huss
		Daniel R. Herber
		John M. Borky
		</p>
	<p>Model-Based Software Engineering (MBSE) is an architecture-based software development approach. Agile, on the other hand, is a lightweight system development approach that originated in software development. To bring together the benefits of both approaches, this article proposes an integrated Agile MBSE approach that adopts a specific instance of the Agile approach (i.e., Scrum) in combination with a specific instance of an MBSE approach (i.e., Model-Based System Architecture Process&mdash;&ldquo;MBSAP&rdquo;) to create an Agile MBSE approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP). The proposed approach was validated through a pilot study that developed a health technology system over one year, successfully producing the desired software product. This work focuses on determining whether the proposed sMBSAP approach can deliver the desired Product Increments with the support of an MBSE process. The interaction of the Product Development Team with the MBSE tool, the generation of the system model, and the delivery of the Product Increments were observed. The preliminary results showed that the proposed approach contributed to achieving the desired system development outcomes and, at the same time, generated complete system architecture artifacts that would not have been developed if Agile had been used alone. Therefore, the main contribution of this research lies in introducing a practical and operational method for merging Agile and MBSE. In parallel, the results suggest that sMBSAP is a middle ground that is more aligned with federal and state regulations, as it addresses technical debt concerns. Future work will analyze the results of a quasi-experiment on this approach focused on measuring system development performance through common metrics.</p>
	]]></content:encoded>

	<dc:title>An Agile Model-Based Software Engineering Approach Illustrated through the Development of a Health Technology System</dc:title>
			<dc:creator>Moe Huss</dc:creator>
			<dc:creator>Daniel R. Herber</dc:creator>
			<dc:creator>John M. Borky</dc:creator>
		<dc:identifier>doi: 10.3390/software2020011</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-04-17</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-04-17</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>234</prism:startingPage>
		<prism:doi>10.3390/software2020011</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/11</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/10">

	<title>Software, Vol. 2, Pages 218-233: Analysing and Transforming Graph Structures: The Graph Transformation Framework</title>
	<link>https://www.mdpi.com/2674-113X/2/2/10</link>
	<description>Interconnected data or, in particular, graph structures are a valuable source of information. Gaining insights and knowledge from graph structures is applied throughout a wide range of application areas, for which efficient tools are desired. In this work we present an open source Java graph transformation framework. The framework provides a simple fluent Application Programming Interface (API) to transform a provided graph structure to a desired target format and, in turn, allow further analysis. First, we provide an overview of the architecture of the framework and its core components. Second, we provide an illustrative example which shows how to use the framework&#8217;s core API for transforming and verifying graph structures. In addition, we present an instantiation of the framework in the context of analyzing the third-party dependencies amongst open source libraries on the Android platform. The example provides insights into a typical scenario in which the graph transformation framework is applied to efficiently process complex graph structures. The framework is open-source and actively developed, and we further provide information on how to obtain it from its official GitHub page.</description>
	<pubDate>2023-04-06</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 218-233: Analysing and Transforming Graph Structures: The Graph Transformation Framework</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/10">doi: 10.3390/software2020010</a></p>
	<p>Authors:
		Andreas H. Schuler
		Christoph Praschl
		Andreas Pointner
		</p>
	<p>Interconnected data or, in particular, graph structures are a valuable source of information. Gaining insights and knowledge from graph structures is applied throughout a wide range of application areas, for which efficient tools are desired. In this work we present an open source Java graph transformation framework. The framework provides a simple fluent Application Programming Interface (API) to transform a provided graph structure to a desired target format and, in turn, allow further analysis. First, we provide an overview of the architecture of the framework and its core components. Second, we provide an illustrative example which shows how to use the framework&rsquo;s core API for transforming and verifying graph structures. In addition, we present an instantiation of the framework in the context of analyzing the third-party dependencies amongst open source libraries on the Android platform. The example provides insights into a typical scenario in which the graph transformation framework is applied to efficiently process complex graph structures. The framework is open-source and actively developed, and we further provide information on how to obtain it from its official GitHub page.</p>
	]]></content:encoded>

	<dc:title>Analysing and Transforming Graph Structures: The Graph Transformation Framework</dc:title>
			<dc:creator>Andreas H. Schuler</dc:creator>
			<dc:creator>Christoph Praschl</dc:creator>
			<dc:creator>Andreas Pointner</dc:creator>
		<dc:identifier>doi: 10.3390/software2020010</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-04-06</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-04-06</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>218</prism:startingPage>
		<prism:doi>10.3390/software2020010</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/10</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/9">

	<title>Software, Vol. 2, Pages 197-217: Vision-Autocorrect: A Self-Adapting Approach towards Relieving Eye-Strain Using Facial-Expression Recognition</title>
	<link>https://www.mdpi.com/2674-113X/2/2/9</link>
	<description>The last two years have seen a rapid rise in the amount of time that both adults and children spend on screens, driven by the recent COVID-19 pandemic. A key adverse effect is digital eye strain (DES). Recent trends in human-computer interaction and user experience have proposed voice or gesture-guided designs that present more effective and less intrusive automated solutions. These approaches inspired the design of a solution that uses facial expression recognition (FER) techniques to detect DES and autonomously adapt the application to enhance the user&#8217;s experience. This study sourced and adapted popular open FER datasets for DES studies, trained convolutional neural network models for DES expression recognition, and designed a self-adaptive solution as a proof of concept. Initial experimental results yielded a model with an accuracy of 77% and resulted in the adaptation of the user application based on the FER classification results. We also provide the developed application, model source code, and adapted dataset used for further improvements in the area. Future work should focus on detecting posture, ergonomics, or distance from the screen.</description>
	<pubDate>2023-03-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 197-217: Vision-Autocorrect: A Self-Adapting Approach towards Relieving Eye-Strain Using Facial-Expression Recognition</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/9">doi: 10.3390/software2020009</a></p>
	<p>Authors:
		Leah Mutanu
		Jeet Gohil
		Khushi Gupta
		</p>
	<p>The last two years have seen a rapid rise in the amount of time that both adults and children spend on screens, driven by the recent COVID-19 pandemic. A key adverse effect is digital eye strain (DES). Recent trends in human-computer interaction and user experience have proposed voice or gesture-guided designs that present more effective and less intrusive automated solutions. These approaches inspired the design of a solution that uses facial expression recognition (FER) techniques to detect DES and autonomously adapt the application to enhance the user&rsquo;s experience. This study sourced and adapted popular open FER datasets for DES studies, trained convolutional neural network models for DES expression recognition, and designed a self-adaptive solution as a proof of concept. Initial experimental results yielded a model with an accuracy of 77% and resulted in the adaptation of the user application based on the FER classification results. We also provide the developed application, model source code, and adapted dataset used for further improvements in the area. Future work should focus on detecting posture, ergonomics, or distance from the screen.</p>
	]]></content:encoded>

	<dc:title>Vision-Autocorrect: A Self-Adapting Approach towards Relieving Eye-Strain Using Facial-Expression Recognition</dc:title>
			<dc:creator>Leah Mutanu</dc:creator>
			<dc:creator>Jeet Gohil</dc:creator>
			<dc:creator>Khushi Gupta</dc:creator>
		<dc:identifier>doi: 10.3390/software2020009</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-03-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-03-29</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>197</prism:startingPage>
		<prism:doi>10.3390/software2020009</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/9</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/8">

	<title>Software, Vol. 2, Pages 177-196: A Review to Find Elicitation Methods for Business Process Automation Software</title>
	<link>https://www.mdpi.com/2674-113X/2/2/8</link>
	<description>Several organizations have invested in business process automation software to improve their processes. Unstandardized processes with high variance and unstructured data encumber the requirements elicitation for business process automation software. This study conducted a systematic literature review to discover methods to understand business processes and elicit requirements for business process automation software. The review revealed many methods used to understand business processes, but only one was employed to elicit requirements for business process automation software. In addition, the review identified some challenges and opportunities. The challenges of developing business process automation software include dealing with business processes, meeting the needs of the organization, choosing the right approach, and adapting to changes in the process during the development. These challenges open opportunities for proposing specific approaches to elicit requirements in this context.</description>
	<pubDate>2023-03-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 177-196: A Review to Find Elicitation Methods for Business Process Automation Software</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/8">doi: 10.3390/software2020008</a></p>
	<p>Authors:
		Thiago Menezes
		</p>
	<p>Several organizations have invested in business process automation software to improve their processes. Unstandardized processes with high variance and unstructured data encumber the requirements elicitation for business process automation software. This study conducted a systematic literature review to discover methods to understand business processes and elicit requirements for business process automation software. The review revealed many methods used to understand business processes, but only one was employed to elicit requirements for business process automation software. In addition, the review identified some challenges and opportunities. The challenges of developing business process automation software include dealing with business processes, meeting the needs of the organization, choosing the right approach, and adapting to changes in the process during the development. These challenges open opportunities for proposing specific approaches to elicit requirements in this context.</p>
	]]></content:encoded>

	<dc:title>A Review to Find Elicitation Methods for Business Process Automation Software</dc:title>
			<dc:creator>Thiago Menezes</dc:creator>
		<dc:identifier>doi: 10.3390/software2020008</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-03-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-03-29</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Systematic Review</prism:section>
	<prism:startingPage>177</prism:startingPage>
		<prism:doi>10.3390/software2020008</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/8</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/2/7">

	<title>Software, Vol. 2, Pages 163-176: End-to-End Database Software Security</title>
	<link>https://www.mdpi.com/2674-113X/2/2/7</link>
	<description>End-to-end security is essential for relational database software. Most database management software provides data protection at the server side and in transit, but data are no longer protected once they arrive at the client software. In this paper, we present a methodology that, in addition to server-side security, protects data in transit and at rest on the application client side. Our solution enables flexible attribute-based and role-based access control, such that, for a given role or user with a given set of attributes, access can be granted to a relation, a column, or even to a particular data cell of the relation, depending on the data content. Our attribute-based access control model considers the client&#8217;s attributes, such as versions of the operating system and the web browser, as well as the type of the client&#8217;s device. The solution supports decentralized data access and peer-to-peer data sharing in the form of an encrypted and digitally signed spreadsheet container that stores data retrieved by SQL queries from a database, along with data privileges. For extra security, keys for data encryption and decryption are generated on the fly. We show that our solution is successfully integrated with the PostgreSQL&#174; database management system and enables more flexible access control for added security.</description>
	<pubDate>2023-03-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 163-176: End-to-End Database Software Security</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/2/7">doi: 10.3390/software2020007</a></p>
	<p>Authors:
		Denis Ulybyshev
		Michael Rogers
		Vadim Kholodilo
		Bradley Northern
		</p>
	<p>End-to-end security is essential for relational database software. Most database management software provides data protection at the server side and in transit, but data are no longer protected once they arrive at the client software. In this paper, we present a methodology that, in addition to server-side security, protects data in transit and at rest on the application client side. Our solution enables flexible attribute-based and role-based access control, such that, for a given role or user with a given set of attributes, access can be granted to a relation, a column, or even to a particular data cell of the relation, depending on the data content. Our attribute-based access control model considers the client&rsquo;s attributes, such as versions of the operating system and the web browser, as well as the type of the client&rsquo;s device. The solution supports decentralized data access and peer-to-peer data sharing in the form of an encrypted and digitally signed spreadsheet container that stores data retrieved by SQL queries from a database, along with data privileges. For extra security, keys for data encryption and decryption are generated on the fly. We show that our solution is successfully integrated with the PostgreSQL&reg; database management system and enables more flexible access control for added security.</p>
	]]></content:encoded>

	<dc:title>End-to-End Database Software Security</dc:title>
			<dc:creator>Denis Ulybyshev</dc:creator>
			<dc:creator>Michael Rogers</dc:creator>
			<dc:creator>Vadim Kholodilo</dc:creator>
			<dc:creator>Bradley Northern</dc:creator>
		<dc:identifier>doi: 10.3390/software2020007</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-03-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-03-29</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>2</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>163</prism:startingPage>
		<prism:doi>10.3390/software2020007</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/2/7</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/1/6">

	<title>Software, Vol. 2, Pages 133-162: Approach to Formalizing Software Projects for Solving Design Automation and Project Management Tasks</title>
	<link>https://www.mdpi.com/2674-113X/2/1/6</link>
	<description>GitHub and GitLab contain many project repositories. Each repository contains many design artifacts and specific project management features. Developers can automate the processes of design and project management with the approach proposed in this paper. We describe the knowledge base model and diagnostic analytics method for solving design automation and project management tasks. This paper also presents example use cases for applying the proposed approach.</description>
	<pubDate>2023-03-08</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 133-162: Approach to Formalizing Software Projects for Solving Design Automation and Project Management Tasks</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/1/6">doi: 10.3390/software2010006</a></p>
	<p>Authors:
		Aleksey Filippov
		Anton Romanov
		Anton Skalkin
		Julia Stroeva
		Nadezhda Yarushkina
		</p>
	<p>GitHub and GitLab contain many project repositories. Each repository contains many design artifacts and specific project management features. Developers can automate the processes of design and project management with the approach proposed in this paper. We describe the knowledge base model and diagnostic analytics method for solving design automation and project management tasks. This paper also presents example use cases for applying the proposed approach.</p>
	]]></content:encoded>

	<dc:title>Approach to Formalizing Software Projects for Solving Design Automation and Project Management Tasks</dc:title>
			<dc:creator>Aleksey Filippov</dc:creator>
			<dc:creator>Anton Romanov</dc:creator>
			<dc:creator>Anton Skalkin</dc:creator>
			<dc:creator>Julia Stroeva</dc:creator>
			<dc:creator>Nadezhda Yarushkina</dc:creator>
		<dc:identifier>doi: 10.3390/software2010006</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-03-08</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-03-08</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>133</prism:startingPage>
		<prism:doi>10.3390/software2010006</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/1/6</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/1/5">

	<title>Software, Vol. 2, Pages 121-132: AutodiDAQt: Simple Scientific Data Acquisition Software with Analysis-in-the-Loop</title>
	<link>https://www.mdpi.com/2674-113X/2/1/5</link>
	<description>Scientific data acquisition is a problem domain that has been underserved by its computational tools despite the need to efficiently use hardware, to guarantee validity of the recorded data, and to rapidly test ideas by configuring experiments quickly and inexpensively. High-dimensional physical spectroscopies, such as angle-resolved photoemission spectroscopy, make these issues especially apparent because, while they use expensive instruments to record large data volumes, they require very little acquisition planning. The burden of writing data acquisition software falls to scientists, who are not typically trained to write maintainable software. In this paper, we introduce AutodiDAQt to address these shortfalls in the scientific ecosystem. To ground the discussion, we demonstrate its merits for angle-resolved photoemission spectroscopy and high bandwidth spectroscopies. AutodiDAQt addresses the essential needs for scientific data acquisition by providing simple concurrency, reproducibility, retrospection of the acquisition sequence, and automated user interface generation. Finally, we discuss how AutodiDAQt enables a future of highly efficient machine-learning-in-the-loop experiments and analysis-driven experiments without requiring data acquisition domain expertise by using analysis code for external data acquisition planning.</description>
	<pubDate>2023-02-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 121-132: AutodiDAQt: Simple Scientific Data Acquisition Software with Analysis-in-the-Loop</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/1/5">doi: 10.3390/software2010005</a></p>
	<p>Authors:
		Conrad H. Stansbury
		Alessandra Lanzara
		</p>
	<p>Scientific data acquisition is a problem domain that has been underserved by its computational tools despite the need to efficiently use hardware, to guarantee validity of the recorded data, and to rapidly test ideas by configuring experiments quickly and inexpensively. High-dimensional physical spectroscopies, such as angle-resolved photoemission spectroscopy, make these issues especially apparent because, while they use expensive instruments to record large data volumes, they require very little acquisition planning. The burden of writing data acquisition software falls to scientists, who are not typically trained to write maintainable software. In this paper, we introduce AutodiDAQt to address these shortfalls in the scientific ecosystem. To ground the discussion, we demonstrate its merits for angle-resolved photoemission spectroscopy and high bandwidth spectroscopies. AutodiDAQt addresses the essential needs for scientific data acquisition by providing simple concurrency, reproducibility, retrospection of the acquisition sequence, and automated user interface generation. Finally, we discuss how AutodiDAQt enables a future of highly efficient machine-learning-in-the-loop experiments and analysis-driven experiments without requiring data acquisition domain expertise by using analysis code for external data acquisition planning.</p>
	]]></content:encoded>

	<dc:title>AutodiDAQt: Simple Scientific Data Acquisition Software with Analysis-in-the-Loop</dc:title>
			<dc:creator>Conrad H. Stansbury</dc:creator>
			<dc:creator>Alessandra Lanzara</dc:creator>
		<dc:identifier>doi: 10.3390/software2010005</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-02-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-02-18</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>121</prism:startingPage>
		<prism:doi>10.3390/software2010005</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/1/5</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/1/4">

	<title>Software, Vol. 2, Pages 71-120: Evaluation of Compliance Rule Languages for Modelling Regulatory Compliance Requirements</title>
	<link>https://www.mdpi.com/2674-113X/2/1/4</link>
	<description>Compliance in business processes has become a fundamental requirement given the constant rise in regulatory requirements and competitive pressures that have emerged in recent decades. While in other areas of business process modelling and execution, considerable progress towards automation has been made (e.g., process discovery, executable process models), the interpretation and implementation of compliance requirements is still a highly complex task requiring human effort and time. To increase the level of “mechanization” when implementing regulations in business processes, compliance research seeks to formalize compliance requirements. Formal representations of compliance requirements should, then, be leveraged to design correct process models and, ideally, would also serve for the automated detection of violations. To formally specify compliance requirements, however, multiple process perspectives, such as control flow, data, time and resources, have to be considered. This leads to the challenge of representing such complex constraints which affect different process perspectives. To this end, current approaches in business process compliance make use of a varied set of languages. However, every approach has been devised based on different assumptions and motivating scenarios. In addition, these languages and their presentation usually abstract from real-world requirements, which often would imply introducing a substantial amount of domain knowledge and interpretation, thus hampering the evaluation of their expressiveness. This is a serious problem, since comparisons of different formal languages based on real-world compliance requirements are lacking, meaning that users of such languages are not able to make informed decisions about which language to choose.
To close this gap and to establish a uniform evaluation basis, we introduce a running example for evaluating the expressiveness and complexity of compliance rule languages. For language selection, we conducted a literature review. Next, we briefly introduce and demonstrate the languages’ grammars and vocabularies based on the representation of a number of legal requirements. In doing so, we pay attention to semantic subtleties, which we evaluate by adopting a normative classification framework that differentiates between different deontic assignments. Finally, on top of that, we apply Halstead’s well-known metrics for calculating the relevant characteristics of the different languages in our comparison, such as the volume, difficulty and effort for each language. With this, we are finally able to better understand the lexical complexity of the languages in relation to their expressiveness. In sum, we provide a systematic comparison of different compliance rule languages based on real-world compliance requirements which may inform future users and developers of these languages. Finally, we advocate for a more user-aware development of compliance languages which should consider a trade-off between expressiveness, complexity and usability.</description>
	<pubDate>2023-01-28</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 71-120: Evaluation of Compliance Rule Languages for Modelling Regulatory Compliance Requirements</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/1/4">doi: 10.3390/software2010004</a></p>
	<p>Authors:
		Andrea Zasada
		Mustafa Hashmi
		Michael Fellmann
		David Knuplesch
		</p>
	<p>Compliance in business processes has become a fundamental requirement given the constant rise in regulatory requirements and competitive pressures that have emerged in recent decades. While in other areas of business process modelling and execution, considerable progress towards automation has been made (e.g., process discovery, executable process models), the interpretation and implementation of compliance requirements is still a highly complex task requiring human effort and time. To increase the level of “mechanization” when implementing regulations in business processes, compliance research seeks to formalize compliance requirements. Formal representations of compliance requirements should, then, be leveraged to design correct process models and, ideally, would also serve for the automated detection of violations. To formally specify compliance requirements, however, multiple process perspectives, such as control flow, data, time and resources, have to be considered. This leads to the challenge of representing such complex constraints which affect different process perspectives. To this end, current approaches in business process compliance make use of a varied set of languages. However, every approach has been devised based on different assumptions and motivating scenarios. In addition, these languages and their presentation usually abstract from real-world requirements, which often would imply introducing a substantial amount of domain knowledge and interpretation, thus hampering the evaluation of their expressiveness. This is a serious problem, since comparisons of different formal languages based on real-world compliance requirements are lacking, meaning that users of such languages are not able to make informed decisions about which language to choose. To close this gap and to establish a uniform evaluation basis, we introduce a running example for evaluating the expressiveness and complexity of compliance rule languages.
For language selection, we conducted a literature review. Next, we briefly introduce and demonstrate the languages’ grammars and vocabularies based on the representation of a number of legal requirements. In doing so, we pay attention to semantic subtleties, which we evaluate by adopting a normative classification framework that differentiates between different deontic assignments. Finally, on top of that, we apply Halstead’s well-known metrics for calculating the relevant characteristics of the different languages in our comparison, such as the volume, difficulty and effort for each language. With this, we are finally able to better understand the lexical complexity of the languages in relation to their expressiveness. In sum, we provide a systematic comparison of different compliance rule languages based on real-world compliance requirements which may inform future users and developers of these languages. Finally, we advocate for a more user-aware development of compliance languages which should consider a trade-off between expressiveness, complexity and usability.</p>
	]]></content:encoded>

	<dc:title>Evaluation of Compliance Rule Languages for Modelling Regulatory Compliance Requirements</dc:title>
			<dc:creator>Andrea Zasada</dc:creator>
			<dc:creator>Mustafa Hashmi</dc:creator>
			<dc:creator>Michael Fellmann</dc:creator>
			<dc:creator>David Knuplesch</dc:creator>
		<dc:identifier>doi: 10.3390/software2010004</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-01-28</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-01-28</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>71</prism:startingPage>
		<prism:doi>10.3390/software2010004</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/1/4</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/1/3">

	<title>Software, Vol. 2, Pages 21-70: A Model-Driven Approach for Software Process Line Engineering</title>
	<link>https://www.mdpi.com/2674-113X/2/1/3</link>
	<description>It has become increasingly preferable to construct bespoke software development processes according to the specifications of the project at hand; however, defining a separate process for each project is time-consuming and costly. One solution is to use a Software Process Line (SPrL), a specialized Software Product Line (SPL) in the context of process definition. However, instantiating an SPrL is a slow and error-prone task if performed manually; an adequate degree of automation is therefore essential, which can be achieved by using a Model-Driven Development (MDD) approach. Furthermore, we have identified specific shortcomings in existing approaches for SPrL Engineering (SPrLE). To address the identified shortcomings, we propose a novel MDD approach specifically intended for SPrLE; this approach can be used by method engineers and project managers to first define an SPrL, and then construct custom processes by instantiating it. The proposed approach uses a modeling framework for modeling an SPrL, and applies transformations to provide a high degree of automation when instantiating the SPrL. The proposed approach addresses the shortcomings by providing adequate coverage of four activities: Feasibility analysis, Enhancing the core process, Managing configuration complexity, and Post-derivation enhancement. The proposed approach has been validated through an industrial case study and an experiment; the results have shown that the proposed approach can improve the processes being used in organizations, and is rated highly for usefulness and ease of use.</description>
	<pubDate>2023-01-20</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 21-70: A Model-Driven Approach for Software Process Line Engineering</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/1/3">doi: 10.3390/software2010003</a></p>
	<p>Authors:
		Halimeh Agh
		Raman Ramsin
		</p>
	<p>It has become increasingly preferable to construct bespoke software development processes according to the specifications of the project at hand; however, defining a separate process for each project is time-consuming and costly. One solution is to use a Software Process Line (SPrL), a specialized Software Product Line (SPL) in the context of process definition. However, instantiating an SPrL is a slow and error-prone task if performed manually; an adequate degree of automation is therefore essential, which can be achieved by using a Model-Driven Development (MDD) approach. Furthermore, we have identified specific shortcomings in existing approaches for SPrL Engineering (SPrLE). To address the identified shortcomings, we propose a novel MDD approach specifically intended for SPrLE; this approach can be used by method engineers and project managers to first define an SPrL, and then construct custom processes by instantiating it. The proposed approach uses a modeling framework for modeling an SPrL, and applies transformations to provide a high degree of automation when instantiating the SPrL. The proposed approach addresses the shortcomings by providing adequate coverage of four activities: Feasibility analysis, Enhancing the core process, Managing configuration complexity, and Post-derivation enhancement. The proposed approach has been validated through an industrial case study and an experiment; the results have shown that the proposed approach can improve the processes being used in organizations, and is rated highly for usefulness and ease of use.</p>
	]]></content:encoded>

	<dc:title>A Model-Driven Approach for Software Process Line Engineering</dc:title>
			<dc:creator>Halimeh Agh</dc:creator>
			<dc:creator>Raman Ramsin</dc:creator>
		<dc:identifier>doi: 10.3390/software2010003</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-01-20</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-01-20</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>21</prism:startingPage>
		<prism:doi>10.3390/software2010003</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/1/3</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/1/2">

	<title>Software, Vol. 2, Pages 19-20: Acknowledgment to the Reviewers of Software in 2022</title>
	<link>https://www.mdpi.com/2674-113X/2/1/2</link>
	<description>High-quality academic publishing is built on rigorous peer review [...]</description>
	<pubDate>2023-01-18</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 19-20: Acknowledgment to the Reviewers of Software in 2022</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/1/2">doi: 10.3390/software2010002</a></p>
	<p>Authors:
		Software Editorial Office
		</p>
	<p>High-quality academic publishing is built on rigorous peer review [...]</p>
	]]></content:encoded>

	<dc:title>Acknowledgment to the Reviewers of Software in 2022</dc:title>
			<dc:creator>Software Editorial Office</dc:creator>
		<dc:identifier>doi: 10.3390/software2010002</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2023-01-18</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2023-01-18</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Editorial</prism:section>
	<prism:startingPage>19</prism:startingPage>
		<prism:doi>10.3390/software2010002</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/1/2</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/2/1/1">

	<title>Software, Vol. 2, Pages 1-18: Are Infinite-Failure NHPP-Based Software Reliability Models Useful?</title>
	<link>https://www.mdpi.com/2674-113X/2/1/1</link>
	<description>In the literature, infinite-failure software reliability models (SRMs), such as the Musa-Okumoto SRM (1984), have been demonstrated to be effective in quantitatively characterizing software testing processes and assessing software reliability. This paper primarily focuses on the infinite-failure (type-II) non-homogeneous Poisson process (NHPP)-based SRMs and comprehensively evaluates the performance of these SRMs by comparing them with the existing finite-failure (type-I) NHPP-based SRMs. In more specific terms, to describe the software fault-detection time distribution, we postulate 11 representative probability distribution functions that can be categorized into the generalized exponential distribution family and the extreme-value distribution family. Then, we compare the goodness-of-fit and predictive performances of the associated 11 type-I and type-II NHPP-based SRMs. In numerical experiments, we analyze software fault-count data, collected from 16 actual development projects, which are commonly known in the software industry as fault-count time-domain data and fault-count time-interval data (group data). The maximum likelihood method is utilized to estimate the model parameters in both NHPP-based SRMs. In a comparison of the type-I and type-II SRMs, it is shown that the type-II NHPP-based SRMs could exhibit better predictive performance than the existing type-I NHPP-based SRMs, especially in the early stage of software testing.</description>
	<pubDate>2022-12-23</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 2, Pages 1-18: Are Infinite-Failure NHPP-Based Software Reliability Models Useful?</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/2/1/1">doi: 10.3390/software2010001</a></p>
	<p>Authors:
		Siqiao Li
		Tadashi Dohi
		Hiroyuki Okamura
		</p>
	<p>In the literature, infinite-failure software reliability models (SRMs), such as the Musa-Okumoto SRM (1984), have been demonstrated to be effective in quantitatively characterizing software testing processes and assessing software reliability. This paper primarily focuses on the infinite-failure (type-II) non-homogeneous Poisson process (NHPP)-based SRMs and comprehensively evaluates the performance of these SRMs by comparing them with the existing finite-failure (type-I) NHPP-based SRMs. In more specific terms, to describe the software fault-detection time distribution, we postulate 11 representative probability distribution functions that can be categorized into the generalized exponential distribution family and the extreme-value distribution family. Then, we compare the goodness-of-fit and predictive performances of the associated 11 type-I and type-II NHPP-based SRMs. In numerical experiments, we analyze software fault-count data, collected from 16 actual development projects, which are commonly known in the software industry as fault-count time-domain data and fault-count time-interval data (group data). The maximum likelihood method is utilized to estimate the model parameters in both NHPP-based SRMs. In a comparison of the type-I and type-II SRMs, it is shown that the type-II NHPP-based SRMs could exhibit better predictive performance than the existing type-I NHPP-based SRMs, especially in the early stage of software testing.</p>
	]]></content:encoded>

	<dc:title>Are Infinite-Failure NHPP-Based Software Reliability Models Useful?</dc:title>
			<dc:creator>Siqiao Li</dc:creator>
			<dc:creator>Tadashi Dohi</dc:creator>
			<dc:creator>Hiroyuki Okamura</dc:creator>
		<dc:identifier>doi: 10.3390/software2010001</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2022-12-23</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2022-12-23</prism:publicationDate>
	<prism:volume>2</prism:volume>
	<prism:number>1</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>1</prism:startingPage>
		<prism:doi>10.3390/software2010001</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/2/1/1</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/1/4/20">

	<title>Software, Vol. 1, Pages 473-484: Analysis of Faults in Software Systems Using Tsallis Distribution: A Unified Approach</title>
	<link>https://www.mdpi.com/2674-113X/1/4/20</link>
	<description>The identification of the appropriate distribution of faults is important for ensuring the reliability of a software system and its maintenance. It has been observed that different distributions explain faults in different types of software. Faults in large and complex software systems are best represented by the Pareto distribution, whereas the Weibull distribution fits enterprise software well. An analysis of faults in open-source software endorses the generalized Pareto distribution. This paper presents a model, called the Tsallis distribution, derived using the maximum-entropy principle, which explains faults in many diverse software systems. The effectiveness of the Tsallis distribution is ascertained by carrying out experiments on many real data sets from enterprise and open-source software systems. It is found that the Tsallis distribution describes software faults better and more precisely than the Weibull and generalized Pareto distributions in both cases. The applications of the Tsallis distribution in (i) software fault-prediction using the Bayesian inference method, and (ii) the Goel and Okumoto software-reliability model, are discussed.</description>
	<pubDate>2022-11-11</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 1, Pages 473-484: Analysis of Faults in Software Systems Using Tsallis Distribution: A Unified Approach</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/1/4/20">doi: 10.3390/software1040020</a></p>
	<p>Authors:
		Shachi Sharma
		</p>
	<p>The identification of the appropriate distribution of faults is important for ensuring the reliability of a software system and its maintenance. It has been observed that different distributions explain faults in different types of software. Faults in large and complex software systems are best represented by the Pareto distribution, whereas the Weibull distribution fits enterprise software well. An analysis of faults in open-source software endorses the generalized Pareto distribution. This paper presents a model, called the Tsallis distribution, derived using the maximum-entropy principle, which explains faults in many diverse software systems. The effectiveness of the Tsallis distribution is ascertained by carrying out experiments on many real data sets from enterprise and open-source software systems. It is found that the Tsallis distribution describes software faults better and more precisely than the Weibull and generalized Pareto distributions in both cases. The applications of the Tsallis distribution in (i) software fault-prediction using the Bayesian inference method, and (ii) the Goel and Okumoto software-reliability model, are discussed.</p>
	]]></content:encoded>

	<dc:title>Analysis of Faults in Software Systems Using Tsallis Distribution: A Unified Approach</dc:title>
			<dc:creator>Shachi Sharma</dc:creator>
		<dc:identifier>doi: 10.3390/software1040020</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2022-11-11</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2022-11-11</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>473</prism:startingPage>
		<prism:doi>10.3390/software1040020</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/1/4/20</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/1/4/19">

	<title>Software, Vol. 1, Pages 450-472: Security Requirements Prioritization Techniques: A Survey and Classification Framework</title>
	<link>https://www.mdpi.com/2674-113X/1/4/19</link>
	<description>Security Requirements Engineering (SRE) is an activity conducted during the early stage of the SDLC. SRE involves eliciting, analyzing, and documenting security requirements. Thorough SRE can help software engineers incorporate countermeasures against malicious attacks into the software’s source code itself. Even though all security requirements are considered relevant, implementing all security mechanisms that protect against every possible threat is not feasible. Security requirements must compete not only with time and budget, but also with the constraints they inflict on a software’s availability, features, and functionalities. Thus, the process of security requirements prioritization becomes an integral task in the discipline of risk analysis and trade-off analysis. A sound prioritization technique provides guidance for software engineers to make educated decisions on which security requirements are of topmost importance. Even though previous research has proposed various security requirement prioritization techniques, none of the existing research efforts have provided a detailed survey and comparative analysis of existing techniques. This paper uses a literature survey approach to first define security requirements engineering. Next, we identify the state-of-the-art techniques that can be adopted to impose a well-established prioritization criterion for security requirements. Our survey identified, summarized, and compared seven (7) security requirements prioritization approaches proposed in the literature.</description>
	<pubDate>2022-10-28</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 1, Pages 450-472: Security Requirements Prioritization Techniques: A Survey and Classification Framework</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/1/4/19">doi: 10.3390/software1040019</a></p>
	<p>Authors:
		Shada Khanneh
		Vaibhav Anu
		</p>
	<p>Security Requirements Engineering (SRE) is an activity conducted during the early stage of the SDLC. SRE involves eliciting, analyzing, and documenting security requirements. Thorough SRE can help software engineers incorporate countermeasures against malicious attacks into the software’s source code itself. Even though all security requirements are considered relevant, implementing all security mechanisms that protect against every possible threat is not feasible. Security requirements must compete not only with time and budget, but also with the constraints they inflict on a software’s availability, features, and functionalities. Thus, the process of security requirements prioritization becomes an integral task in the discipline of risk analysis and trade-off analysis. A sound prioritization technique provides guidance for software engineers to make educated decisions on which security requirements are of topmost importance. Even though previous research has proposed various security requirement prioritization techniques, none of the existing research efforts have provided a detailed survey and comparative analysis of existing techniques. This paper uses a literature survey approach to first define security requirements engineering. Next, we identify the state-of-the-art techniques that can be adopted to impose a well-established prioritization criterion for security requirements. Our survey identified, summarized, and compared seven (7) security requirements prioritization approaches proposed in the literature.</p>
	]]></content:encoded>

	<dc:title>Security Requirements Prioritization Techniques: A Survey and Classification Framework</dc:title>
			<dc:creator>Shada Khanneh</dc:creator>
			<dc:creator>Vaibhav Anu</dc:creator>
		<dc:identifier>doi: 10.3390/software1040019</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2022-10-28</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2022-10-28</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>450</prism:startingPage>
		<prism:doi>10.3390/software1040019</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/1/4/19</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
        <item rdf:about="https://www.mdpi.com/2674-113X/1/4/18">

	<title>Software, Vol. 1, Pages 417-449: BPM2DDD: A Systematic Process for Identifying Domains from Business Processes Models</title>
	<link>https://www.mdpi.com/2674-113X/1/4/18</link>
	<description>Domain-driven design is one of the most used approaches for identifying microservice architectures, which should be built around business capabilities. A number of documents provide principles and patterns for its application. However, despite its increasing use, there is still a lack of systematic approaches for creating the context maps that will be used to design the microservices. This article presents BPM2DDD, a systematic approach for the identification of bounded contexts and their relationships based on the analysis of business process models, which provide a business view of an organisation. We present an example of its application to a real business process, which has also been used to perform a comparative application with external analysts. The technique has been applied to a real project in the department of transport of a Brazilian state capital, and has been incorporated into the software development process they employ to develop their new system.</description>
	<pubDate>2022-09-29</pubDate>

	<content:encoded><![CDATA[
	<p><b>Software, Vol. 1, Pages 417-449: BPM2DDD: A Systematic Process for Identifying Domains from Business Processes Models</b></p>
	<p>Software <a href="https://www.mdpi.com/2674-113X/1/4/18">doi: 10.3390/software1040018</a></p>
	<p>Authors:
		Carlos Eduardo da Silva
		Eduardo Luiz Gomes
		Soumya Sankar Basu
		</p>
	<p>Domain-driven design is one of the most used approaches for identifying microservice architectures, which should be built around business capabilities. A number of documents provide principles and patterns for its application. However, despite its increasing use, there is still a lack of systematic approaches for creating the context maps that will be used to design the microservices. This article presents BPM2DDD, a systematic approach for the identification of bounded contexts and their relationships based on the analysis of business process models, which provide a business view of an organisation. We present an example of its application to a real business process, which has also been used to perform a comparative application with external analysts. The technique has been applied to a real project in the department of transport of a Brazilian state capital, and has been incorporated into the software development process they employ to develop their new system.</p>
	]]></content:encoded>

	<dc:title>BPM2DDD: A Systematic Process for Identifying Domains from Business Processes Models</dc:title>
			<dc:creator>Carlos Eduardo da Silva</dc:creator>
			<dc:creator>Eduardo Luiz Gomes</dc:creator>
			<dc:creator>Soumya Sankar Basu</dc:creator>
		<dc:identifier>doi: 10.3390/software1040018</dc:identifier>
	<dc:source>Software</dc:source>
	<dc:date>2022-09-29</dc:date>

	<prism:publicationName>Software</prism:publicationName>
	<prism:publicationDate>2022-09-29</prism:publicationDate>
	<prism:volume>1</prism:volume>
	<prism:number>4</prism:number>
	<prism:section>Article</prism:section>
	<prism:startingPage>417</prism:startingPage>
		<prism:doi>10.3390/software1040018</prism:doi>
	<prism:url>https://www.mdpi.com/2674-113X/1/4/18</prism:url>
	
	<cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
</item>
    
<cc:License rdf:about="https://creativecommons.org/licenses/by/4.0/">
	<cc:permits rdf:resource="https://creativecommons.org/ns#Reproduction" />
	<cc:permits rdf:resource="https://creativecommons.org/ns#Distribution" />
	<cc:permits rdf:resource="https://creativecommons.org/ns#DerivativeWorks" />
</cc:License>

</rdf:RDF>
