Information for Business and Management – Software Development for Data Processing and Management

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 58644

Special Issue Editor


Prof. Dr. Aneta Poniszewska-Maranda
Guest Editor
Institute of Information Technology, Lodz University of Technology, 90-924 Lodz, Poland
Interests: software engineering; information systems security; multi-agent-based systems; cloud computing; internet of things; mobile security; blockchain; data analysis; machine learning; data processing; distributed systems

Special Issue Information

Dear Colleagues,

Today, data and information are among the most important resources in many areas of our lives and the economy. Data and information are created, generated, collected, stored, and then processed and shared in various ways. All these activities rely on contemporary software, applications, IT systems, and their components.

Thus, in addition to creating the software itself, it is becoming increasingly important to manage the data and information that the software uses, processes, and stores. What matters, therefore, is not only the software and its development process, but also information management at the appropriate level, while maintaining a sufficiently high level of protection for data, information, and its flows.

The processes of software development and information management are becoming increasingly interconnected and interdependent, as both strive to develop and support a modern society based on knowledge and modern technologies.

Therefore, this Special Issue aims to present various aspects of creating and developing software designed for fast, easy, and secure processing and management of data and information.

The areas of interest for this Special Issue include the following topics:

  • Software analysis and design for processing and management of data and information
  • Software deployment for data processing
  • Business analysis
  • Business rules
  • Requirements engineering
  • Software development process
  • Information management systems
  • Knowledge management solutions
  • Software for security and privacy of data
  • Software for data mining
  • Software for knowledge management

Prof. Dr. Aneta Poniszewska-Maranda
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Software engineering for data
  • Requirements engineering for information management
  • Data processing and management
  • Business analysis
  • Knowledge management
  • Security and privacy of data
  • Software for data mining

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (17 papers)

Research

27 pages, 8048 KiB  
Article
Pervasive Real-Time Analytical Framework—A Case Study on Car Parking Monitoring
by Francisca Barros, Beatriz Rodrigues, José Vieira and Filipe Portela
Information 2023, 14(11), 584; https://doi.org/10.3390/info14110584 - 25 Oct 2023
Viewed by 1688
Abstract
Due to the amount of data emerging, it is necessary to use an online analytical processing (OLAP) framework capable of responding to the needs of industries. Processes such as drill-down, roll-up, three-dimensional analysis, and data filtering are fundamental for the perception of information. This article demonstrates the OLAP framework developed as a valuable and effective solution in decision making. To develop an OLAP framework, it was necessary to create the extract, transform, and load (ETL) process, build a data warehouse, and develop the OLAP layer via cube.js. Finally, it was essential to design a solution that adds more value to the organizations and presents several characteristics to support the entire data analysis process. A backend API (application programming interface) to route the data via MySQL was required, as well as a frontend and a data visualization layer. The OLAP framework was developed for the ioCity project. However, its great advantage is its versatility, which allows any industry to use it in its system. One ETL process, one data warehouse, one OLAP model, six indicators, and one OLAP framework were developed (with one frontend and one API backend). In conclusion, this article demonstrates the importance of a modular, adaptable, and scalable tool in the data analysis process and in supporting decision making.
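
To make the pipeline concrete, here is a minimal pandas sketch of the extract-transform-load and roll-up/drill-down steps the abstract describes; the column names, lot identifiers, and sample rows are illustrative assumptions, not the ioCity schema, and the paper's actual stack (MySQL, cube.js) is replaced by an in-memory fact table.

```python
import pandas as pd

# Extract: raw parking events (hypothetical rows standing in for the
# real source the ETL process would read, e.g., a sensor feed or CSV).
raw = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-05-01 08:15", "2023-05-01 09:40", "2023-05-02 08:05"]),
    "lot_id": ["A1", "A1", "B2"],
    "occupied_spots": [42, 55, 17],
})

# Transform: derive the dimensions an OLAP cube would be built on.
raw["date"] = raw["timestamp"].dt.date
raw["hour"] = raw["timestamp"].dt.hour

# Load: a single fact table standing in for the data warehouse.
fact = raw[["date", "hour", "lot_id", "occupied_spots"]]

# Roll-up: average occupancy per lot per day (hour dimension collapsed).
print(fact.groupby(["date", "lot_id"])["occupied_spots"].mean())

# Drill-down: restore the hour dimension for a single lot.
print(fact[fact["lot_id"] == "A1"].groupby(["date", "hour"])["occupied_spots"].mean())
```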

21 pages, 2858 KiB  
Article
How Could Consumers’ Online Review Help Improve Product Design Strategy?
by Wei Miao, Kai-Chieh Lin, Chih-Fu Wu, Jie Sun, Weibo Sun, Wei Wei and Chao Gu
Information 2023, 14(8), 434; https://doi.org/10.3390/info14080434 - 1 Aug 2023
Cited by 2 | Viewed by 1999
Abstract
This study aims to explore the utilization of user-generated content for product improvement and decision-making processes. In the era of big data, the channels through which enterprises obtain user feedback information are transitioning from traditional methods to online platforms. The original data for this study were obtained from customer reviews of cordless hairdryers on JD.com. The specific process is as follows: First, we used the Python Requests package to crawl 20,157 initial comments. Subsequently, the initial data were cleaned, resulting in 1405 valid comments. Next, the cleaned and valid comments were segmented into Chinese words using the HanLP package. Finally, the Latent Dirichlet Allocation (LDA) method was applied for topic modeling. The visualization of the topic clustering was generated using pyLDAvis, and three optimal topics were identified. These topics were named “User Experience”, “Product Evaluation”, and “Product Features”, respectively. Through data analysis and expert consultation, this study developed product design improvement strategies based on online reviews and verified the validity of the developed cordless hairdryer design index system through a questionnaire survey, providing practical references and innovative theoretical foundations for future product design assessments.
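
The topic-modeling step translates into a short scikit-learn sketch; the three-review corpus below is a stand-in for the 1405 cleaned comments, and the HanLP segmentation and pyLDAvis visualization used in the paper are omitted.

```python
# Hedged sketch of the review-mining step: LDA topic modeling over
# cleaned review texts with a plain bag-of-words pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["battery lasts long and dries hair fast",
           "too noisy but lightweight design",
           "good airflow and cordless use is convenient"]  # stand-in corpus

bow = CountVectorizer(max_features=1000)
X = bow.fit_transform(reviews)

# Three topics, matching the optimum reported in the abstract.
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = bow.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```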

15 pages, 629 KiB  
Article
Challenges in Agile Software Maintenance for Local and Global Development: An Empirical Assessment
by Mohammed Almashhadani, Alok Mishra, Ali Yazici and Muhammad Younas
Information 2023, 14(5), 261; https://doi.org/10.3390/info14050261 - 27 Apr 2023
Cited by 6 | Viewed by 2978
Abstract
Agile methods have gained wide popularity recently due to their characteristics in software development. Despite the success of agile methods in the software maintenance process, several challenges have been reported. In this study, we investigate the challenges and measure the impact of agile methods in software maintenance in terms of quality factors. A survey was conducted to collect data from agile practitioners to establish their opinions about existing challenges. As a result of the statistical analysis of the survey data, it has been observed that there are moderately effective challenges in manageability, scalability, communication, collaboration, and transparency. Further research is required to validate software maintenance challenges in agile methods.

16 pages, 2874 KiB  
Article
Market Analysis with Business Intelligence System for Marketing Planning
by Treerak Kongthanasuwan, Nakarin Sriwiboon, Banpot Horbanluekit, Wasakorn Laesanklang and Tipaluck Krityakierne
Information 2023, 14(2), 116; https://doi.org/10.3390/info14020116 - 13 Feb 2023
Cited by 3 | Viewed by 6428
Abstract
The automotive and auto parts industries are important economic sectors in Thailand. With rapidly changing technology, every organization should clearly understand what needs to be improved and shift its strategies to meet evolving consumer demands. The purpose of this research is to develop a Business Intelligence system for a brake pad manufacturing company in Thailand. By analyzing the relationship between market demand and the company's supply components through regression analysis and the principles of the marketing mix, we develop a product lifecycle curve for forecasting product sales. The developed system increases the workflow efficiency of the case study company, simplifying the traditional data preparation process that requires employees to collect and summarize data every time a request is made. An intelligence dashboard is subsequently created to help support decision-making, facilitate communication within the company, and eventually improve team efficiency and productivity.
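
The product-lifecycle forecasting idea can be sketched as a curve fit; the logistic form and the monthly sales figures below are assumptions for illustration, since the paper's actual regression model and data are not reproduced here.

```python
# Minimal sketch: fit a logistic product-lifecycle curve to cumulative
# sales and read off saturation, growth rate, and midpoint.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative sales: saturation K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

months = np.arange(1, 13)
cum_sales = np.array([12, 30, 70, 150, 280, 430, 560, 650,
                      700, 730, 745, 750])  # illustrative data

(K, r, t0), _ = curve_fit(logistic, months, cum_sales, p0=[800, 0.5, 6])
print(f"saturation={K:.0f} units, growth={r:.2f}, midpoint=month {t0:.1f}")
```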

34 pages, 10875 KiB  
Article
EverAnalyzer: A Self-Adjustable Big Data Management Platform Exploiting the Hadoop Ecosystem
by Panagiotis Karamolegkos, Argyro Mavrogiorgou, Athanasios Kiourtis and Dimosthenis Kyriazis
Information 2023, 14(2), 93; https://doi.org/10.3390/info14020093 - 3 Feb 2023
Cited by 4 | Viewed by 2108
Abstract
Big Data is a phenomenon that affects today’s world, with new data being generated every second. Today’s enterprises face major challenges from the increasingly diverse data, as well as from indexing, searching, and analyzing such enormous amounts of data. In this context, several frameworks and libraries for processing and analyzing Big Data exist. Among those frameworks, Hadoop MapReduce, Mahout, Spark, and MLlib appear to be the most popular, although it is unclear which of them is best suited to, and performs best in, various data processing and analysis scenarios. This paper proposes EverAnalyzer, a self-adjustable Big Data management platform built to fill this gap by exploiting all of these frameworks. The platform is able to collect data both in a streaming and in a batch manner, utilizing the metadata obtained from its users’ processing and analytical processes applied to the collected data. Based on this metadata, the platform recommends the optimum framework for the data processing/analytical activities that the users aim to execute. To verify the platform’s efficiency, numerous experiments were carried out using 30 diverse datasets related to various diseases. The results revealed that EverAnalyzer correctly suggested the optimum framework in 80% of the cases, indicating that the platform made the best selections in the majority of the experiments.
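
The recommendation idea can be illustrated with a deliberately simple rule-based sketch; the metadata fields and thresholds are hypothetical, whereas EverAnalyzer derives its recommendations from metadata recorded about previously executed jobs.

```python
# Illustrative sketch: pick a processing framework from simple job
# metadata. Rules and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class JobMeta:
    streaming: bool   # does data arrive continuously?
    iterative: bool   # multi-pass analytics (e.g., ML training)?
    size_gb: float    # input volume

def recommend(meta: JobMeta) -> str:
    if meta.streaming:
        return "Spark"             # stream/micro-batch processing
    if meta.iterative:
        return "MLlib" if meta.size_gb > 1 else "Mahout"
    return "Hadoop MapReduce"      # large one-pass batch jobs

print(recommend(JobMeta(streaming=False, iterative=True, size_gb=20.0)))
```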

21 pages, 1317 KiB  
Article
Data-Oriented Software Development: The Industrial Landscape through Patent Analysis
by Konstantinos Georgiou, Nikolaos Mittas, Apostolos Ampatzoglou, Alexander Chatzigeorgiou and Lefteris Angelis
Information 2023, 14(1), 4; https://doi.org/10.3390/info14010004 - 22 Dec 2022
Cited by 3 | Viewed by 2506
Abstract
The large amounts of information produced daily by organizations and enterprises have led to the development of specialized software that can process high volumes of data. Given that the technologies and methodologies used to develop software are constantly changing, offering significant market opportunities, organizations turn to patenting their inventions to secure their ownership as well as their commercial exploitation. In this study, we investigate the landscape of data-oriented software development via the collection and analysis of information extracted from patents. In this regard, we made use of advanced statistical and machine learning approaches, namely Latent Dirichlet Allocation and brokerage analysis, for the identification of technological trends and thematic axes related to software development patent activity dedicated to data processing and data management processes. Our findings reveal that high-profile countries and organizations are engaging in patent granting, while the main thematic circles found in the retrieved patent data revolve around data updates, integration, version control and software deployment. The results indicate that patent grants in this technological domain are expected to continue their increasing trend in the following years, given that technologies evolve and the need for efficient data processing becomes even more present.
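
The network side of such an analysis can be sketched with networkx; betweenness centrality is used here as a simpler stand-in for the brokerage measures the paper applies, and the patent records are invented.

```python
# Sketch: build a co-assignee graph from patent records and score
# broker-like positions via betweenness centrality (a proxy, not the
# paper's brokerage analysis).
import itertools
import networkx as nx

patents = [
    {"id": "P1", "assignees": ["A", "B"]},
    {"id": "P2", "assignees": ["B", "C"]},
    {"id": "P3", "assignees": ["C", "D", "A"]},
]

G = nx.Graph()
for p in patents:
    # Link every pair of co-assignees appearing on the same patent.
    G.add_edges_from(itertools.combinations(p["assignees"], 2))

print(nx.betweenness_centrality(G))
```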

13 pages, 283 KiB  
Article
No-Show in Medical Appointments with Machine Learning Techniques: A Systematic Literature Review
by Luiz Henrique Américo Salazar, Wemerson Delcio Parreira, Anita Maria da Rocha Fernandes and Valderi Reis Quietinho Leithardt
Information 2022, 13(11), 507; https://doi.org/10.3390/info13110507 - 22 Oct 2022
Cited by 2 | Viewed by 3561
Abstract
No-show appointments in healthcare are a problem faced by medical centers around the world, and understanding the factors associated with no-show behavior is essential. In recent decades, artificial intelligence has taken its place in the medical field, and machine learning algorithms can now work as an efficient tool to understand patients’ behavior and to achieve better medical appointment allocation in scheduling systems. In this work, we provide a systematic literature review (SLR) of machine learning techniques applied to no-show appointments, aiming at establishing the current state of the art. Based on an SLR following the PRISMA procedure, 24 articles were found and analyzed, in which the characteristics of the database, algorithms and performance metrics of each study were synthesized. Results regarding which factors have a higher impact on missed appointment rates were analyzed too. The results indicate that the most appropriate algorithms for building the models are decision tree algorithms. Furthermore, the most significant determinants of no-show were related to the patient’s age, whether the patient missed a previous appointment, and the time between scheduling and the appointment.
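
A minimal sketch of the modeling approach the review identifies as most appropriate: a decision tree over the three strongest determinants it reports. The toy records below are fabricated purely for illustration.

```python
# Decision tree on no-show determinants: age, prior no-show, and the
# scheduling-to-appointment interval (days).
from sklearn.tree import DecisionTreeClassifier

# Features: [age, missed_previous (0/1), days_between_scheduling_and_visit]
X = [[25, 1, 30], [60, 0, 2], [35, 1, 21], [70, 0, 5],
     [19, 1, 45], [50, 0, 1], [42, 0, 14], [23, 1, 60]]
y = [1, 0, 1, 0, 1, 0, 0, 1]  # 1 = no-show

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[30, 1, 40]]))  # likely no-show under this toy model
```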

16 pages, 1063 KiB  
Article
Representation of Women in Slovak Science and Research: An Analysis Based on the CRIS System Data
by Danica Zendulková, Gabriela Gavurníková, Anna Krivjanska, Zuzana Staňáková, Andrea Putalová and Mária Janková
Information 2022, 13(10), 482; https://doi.org/10.3390/info13100482 - 8 Oct 2022
Viewed by 2129
Abstract
The article examines the possibilities of processing data on the representation of women in science and research, based on data collected in Slovakia as part of the Gender Equality Plan. The methodology follows this intention and consists of three steps. The first step is the identification of sources of sex-disaggregated data from the field of science and research in the Slovak Republic. Then follows the examination of the state of the art of tracking data in the identified data sources. The analysis of available data and the processing of the results is the next step. The share of women in Slovak science and research is demonstrated by the composition of project teams and by the statistical data of the supplementary statistical survey of research and development potential, which are collected through the national information system for research, development, and innovation, named SK CRIS. The result is a detailed analysis of the position of women in Slovak science and research, classified by research area and academic career stage. Based on the research conducted and the results achieved, we underline the importance of building national information systems in science and research. Data from these systems can significantly contribute to the creation and parameterization of science policy, including the principles of gender equality.

17 pages, 4731 KiB  
Article
A Flexible Data Evaluation System for Improving the Quality and Efficiency of Laboratory Analysis and Testing
by Yonghui Tu, Haoye Tang, Hua Gong and Wenyou Hu
Information 2022, 13(9), 424; https://doi.org/10.3390/info13090424 - 8 Sep 2022
Cited by 1 | Viewed by 2095
Abstract
In a chemical analysis laboratory, most analytical devices produce raw data that must be processed into validated data reports, including raw data filtering, editing, effectiveness evaluation, error correction, etc. This process is usually carried out manually by analysts. When the sample detection volume is large, the data processing involved becomes time-consuming and laborious, and manual errors may be introduced. In addition, analytical laboratories typically use a variety of analytical devices with different measurement principles, leading to the use of various heterogeneous control software systems from different vendors with different export data formats. Different formats introduce difficulties to laboratory automation. This paper proposes a modular data evaluation system that uses a global unified management and maintenance mode that can automatically filter data, evaluate quality, generate valid reports, and distribute reports. This modular software design concept allows the proposed system to be applied to different analytical devices; its integration into existing laboratory information management systems (LIMS) could maximise automation and improve the analysis and testing quality and efficiency in a chemical analysis laboratory, while meeting the analysis and testing requirements.
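
The modular idea can be sketched as interchangeable pipeline stages; the validity range and quality-control threshold below are invented placeholders, not values from the system described.

```python
# Structural sketch: chain filter -> quality evaluation -> report,
# with each stage swappable per device type.
from typing import Callable

Stage = Callable[[list[float]], list[float]]

def filter_stage(readings: list[float]) -> list[float]:
    # Drop obviously invalid raw values (hypothetical validity range).
    return [r for r in readings if 0.0 <= r <= 1000.0]

def evaluate_stage(readings: list[float]) -> list[float]:
    # Fail fast if relative spread exceeds a placeholder QC limit.
    mean = sum(readings) / len(readings)
    spread = (max(readings) - min(readings)) / mean
    assert spread < 0.10, "quality check failed"
    return readings

def run_pipeline(readings: list[float], stages: list[Stage]) -> str:
    for stage in stages:
        readings = stage(readings)
    return f"VALID REPORT: mean={sum(readings) / len(readings):.2f}"

print(run_pipeline([101.2, 99.8, 100.5, 5000.0],
                   [filter_stage, evaluate_stage]))
```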

24 pages, 443 KiB  
Article
GaSubtle: A New Genetic Algorithm for Generating Subtle Higher-Order Mutants
by Fadi Wedyan, Abdullah Al-Shishani and Yaser Jararweh
Information 2022, 13(7), 327; https://doi.org/10.3390/info13070327 - 7 Jul 2022
Cited by 2 | Viewed by 2101
Abstract
Mutation testing is an effective, yet costly, testing approach, as it requires generating and running large numbers of faulty programs, called mutants. Mutation testing also suffers from a fundamental problem, which is having a large percentage of equivalent mutants. These are mutants that produce the same output as the original program and, therefore, cannot be detected. Higher-order mutation is a promising approach that can produce hard-to-detect faulty programs, called subtle mutants, with a low percentage of equivalent mutants. Subtle higher-order mutants constitute a small subset of the large space of mutants, which grows even larger as the order of mutation becomes higher. In this paper, we developed a genetic algorithm for finding subtle higher-order mutants. The proposed approach uses a new mechanism in the crossover phase and five selection techniques to select the mutants that go to the next generation in the genetic algorithm. We implemented a tool, called GaSubtle, that automates the process of creating subtle mutants. We evaluated the proposed approach using 10 subject programs. Our evaluation shows that the proposed crossover generates more subtle mutants than the technique used in a previous genetic algorithm, with less execution time. Results vary across the selection strategies, suggesting a dependency on the tested code.
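
A skeleton of the genetic-algorithm loop described above may help; the fitness function is a random stand-in for GaSubtle's test-suite-based evaluation, and one-point crossover plus truncation selection stand in for the paper's new crossover mechanism and five selection techniques.

```python
# GA skeleton: a "mutant" is modeled as a tuple of first-order mutation
# indices drawn from a hypothetical pool of N_OPS operators.
import random

random.seed(0)
N_OPS = 100  # available first-order mutations (hypothetical)

def fitness(mutant: tuple) -> float:
    # Stand-in: GaSubtle runs the test suite and rewards hard-to-kill,
    # non-equivalent mutants. Here: a random score.
    return random.random()

def crossover(a: tuple, b: tuple) -> tuple:
    # One-point crossover over the parents' mutation lists.
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

population = [tuple(random.sample(range(N_OPS), 3)) for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]                       # truncation selection
    children = [crossover(*random.sample(parents, 2)) for _ in range(10)]
    population = parents + children

print(population[0])  # top candidate from the final selection
```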

16 pages, 3700 KiB  
Article
Reviewing the Applications of Neural Networks in Supply Chain: Exploring Research Propositions for Future Directions
by Ieva Meidute-Kavaliauskiene, Kamil Taşkın, Shahryar Ghorbani, Renata Činčikaitė and Roberta Kačenauskaitė
Information 2022, 13(5), 261; https://doi.org/10.3390/info13050261 - 20 May 2022
Cited by 2 | Viewed by 4849
Abstract
Supply chains have received significant attention in recent years. Neural networks (NNs), an emerging artificial intelligence (AI) technique, have a strong appeal for a wide range of applications and can help overcome many issues associated with supply chains. This study aims to provide a comprehensive view of NN applications in supply chain management (SCM), serving as a reference for future research directions for SCM researchers and as application insight for SCM practitioners. The study introduces NNs in general and explains their use in five areas identified in the supply chain literature (optimization, forecasting, modeling and simulation, clustering, and decision support), as well as the possibility of using NNs in supply chain management. The results showed that NN applications in SCM were still at a developmental stage, since there were not enough high-yielding authors to form a strong group force in the research of NN applications in SCM.

19 pages, 1250 KiB  
Article
Data Processing in Cloud Computing Model on the Example of Salesforce Cloud
by Witold Marańda, Aneta Poniszewska-Marańda and Małgorzata Szymczyńska
Information 2022, 13(2), 85; https://doi.org/10.3390/info13020085 - 12 Feb 2022
Cited by 5 | Viewed by 5980
Abstract
Data processing is integrated with every aspect of enterprise operations—from accounting to marketing, internal communication, and the control of production processes. The best place to store information is a properly prepared data center. There are many providers of cloud computing and methods of data storage and processing. Every business must carefully consider how the data at its disposal are to be managed. The main purpose of this paper is the research and comparison of available methods of data processing and storage outside the enterprise in the cloud computing model. The cloud in the SaaS (software as a service) model—Salesforce.com—and a free development platform offered by Salesforce.com—force.com—were used to perform the research. The paper presents the analysis results of available methods of processing and storing data outside the enterprise in the cloud computing model on the example of the Salesforce cloud. Salesforce.com offers several benefits, but each service provider offers different services, systems, products, and forms of data protection. The customer's choice depends on individual needs and business plans for the future. A comparison of available methods of data processing and storage outside the enterprise in the cloud computing model was presented. On the basis of the collected results, it was determined for what purposes the data processing methods available on the platform are suitable and how they can meet the needs of enterprises.
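
As a concrete example of programmatic access to data held in the Salesforce cloud, here is a minimal sketch using the Salesforce REST query endpoint; the instance URL, API version, and token are placeholders that a real org would supply.

```python
# Minimal SOQL query against the Salesforce REST API.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder
TOKEN = "00D...session_token"                        # placeholder OAuth token

resp = requests.get(
    f"{INSTANCE}/services/data/v57.0/query",
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},  # SOQL query
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```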

13 pages, 4035 KiB  
Article
Automation of Basketball Match Data Management
by Łukasz Chomątek and Kinga Sierakowska
Information 2021, 12(11), 461; https://doi.org/10.3390/info12110461 - 8 Nov 2021
Cited by 1 | Viewed by 2255
Abstract
Despite the fact that sport plays a substantial role in people’s lives, funding varies significantly from one discipline to another. For example, in Poland, women’s basketball in the lower divisions is primarily developing thanks to enthusiasts. The aim of the work was to design and implement a system for analyzing match protocols containing data about the match. Particular attention was devoted to the course of the game, i.e., the order of scoring points. This type of data is not typically stored on the official websites of basketball associations but is significant from the point of view of coaches. The obtained data can be utilized to analyze the team’s game during the season, the quality of players, etc. In terms of obtaining data from match protocols, a dedicated algorithm for identifying the table was used, while a neural network was utilized to recognize the numbers (with 70% accuracy). The conducted research has shown that the proposed system is well suited for data acquisition based on match protocols, which implies the possibility of increasing the availability of data on games. This will allow the development of this sport discipline. The conclusions can be generalized to other disciplines where games are recorded in paper form.
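
The digit-recognition step can be sketched on a standard dataset; note that the paper trains on digits cropped from scanned match protocols, which is why its reported 70% accuracy sits well below what this clean-data baseline achieves.

```python
# Baseline sketch: a small neural network recognizing handwritten-style
# digits from scikit-learn's built-in 8x8 digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 8x8 grayscale digits
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.2f}")
```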

9 pages, 892 KiB  
Article
Graph Analysis Using Fast Fourier Transform Applied on Grayscale Bitmap Images
by Pawel Baszuro and Jakub Swacha
Information 2021, 12(11), 454; https://doi.org/10.3390/info12110454 - 1 Nov 2021
Cited by 2 | Viewed by 2151
Abstract
There is growing interest in graph analysis, mainly sparked by social network analysis done for various purposes. With social network graphs often reaching very large sizes, there is a need for capable tools to perform such an analysis. In this article, we contribute to this area by presenting an original approach to calculating various graph morphisms, designed with overall performance and scalability as the primary concern. The proposed method generates a list of candidates for further analysis by first decomposing a complex network into a set of sub-graphs, transforming the sub-graphs into intermediary structures, which are then used to generate grayscale bitmap images, and, eventually, performing image comparison using the Fast Fourier Transform. The paper discusses a proof-of-concept implementation of the method and provides experimental results achieved on sub-graphs of different sizes randomly chosen from a reference dataset. Planned future developments and key considered areas of application are also described.
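
The core of the method compresses into a short sketch: render graphs as grayscale bitmaps via their adjacency matrices, take the 2-D FFT, and compare magnitude spectra. The rendering and the similarity score below are simplified stand-ins for the paper's intermediary structures.

```python
# FFT-based graph comparison: adjacency matrix -> bitmap -> spectrum.
import numpy as np
import networkx as nx

def graph_to_spectrum(g: nx.Graph, size: int = 32) -> np.ndarray:
    a = nx.to_numpy_array(g)                 # adjacency as a bitmap
    bitmap = np.zeros((size, size))
    n = min(size, a.shape[0])
    bitmap[:n, :n] = a[:n, :n]
    return np.abs(np.fft.fft2(bitmap))       # magnitude spectrum

def similarity(g1: nx.Graph, g2: nx.Graph) -> float:
    s1, s2 = graph_to_spectrum(g1), graph_to_spectrum(g2)
    # Normalized correlation between the two spectra.
    return float((s1 * s2).sum() / (np.linalg.norm(s1) * np.linalg.norm(s2)))

ring = nx.cycle_graph(20)
print(similarity(ring, nx.cycle_graph(20)))   # ~1.0: same structure
print(similarity(ring, nx.star_graph(19)))    # lower: different shape
```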

13 pages, 1389 KiB  
Article
Spatial Pattern and Influencing Factors of Outward Foreign Direct Investment Enterprises in the Yangtze River Economic Belt of China
by Fei Shi, Haiying Xu, Wei-Ling Hsu, Yee-Chaur Lee and Juhua Zhu
Information 2021, 12(9), 381; https://doi.org/10.3390/info12090381 - 18 Sep 2021
Cited by 3 | Viewed by 2726
Abstract
This paper studies outward foreign direct investment (OFDI) enterprises in the Yangtze River Economic Belt. Using geographical information system (GIS) spatial analysis and SPSS correlation analysis methods, it analyzes the change in the spatial distribution of OFDI enterprises in 2010, 2014, and 2018, and explores the factors influencing this change. The results show the following: (1) The geographical distribution of OFDI enterprises in the Yangtze River Economic Belt is uneven. In the downstream region, OFDI enterprises have significant advantages in both quantity and quality over those in the mid- and up-stream regions. In recent years, a multi-core spatial pattern has gradually emerged. (2) The factors influencing the spatial distribution of OFDI enterprises have been gradually changing from one dominant factor, i.e., technological innovation capability, to four core factors, namely, urbanization level, economic development level, technological innovation capability, and degree of economic openness. The research results serve as an important reference for future policy adjustment in the Yangtze River Economic Belt. First, the Yangtze River Economic Belt should adjust industrial policies; comprehensively increase the level of OFDI; accelerate the upgrading and transformation of regional industries; and, at the same time, inject vitality into the development of the world economy. Moreover, the downstream region should fully play a leading role in the Yangtze River Economic Belt, especially in encouraging OFDI enterprises to establish global production networks. Meanwhile, enterprises in the upstream region are encouraged to establish regional production networks to accelerate the development of inland open highlands.

12 pages, 274 KiB  
Article
Use Dynamic Scheduling Algorithm to Assure the Quality of Educational Programs and Secure the Integrity of Reports in a Quality Management System
by Yasser Ali Alshehri and Najwa Mordhah
Information 2021, 12(8), 315; https://doi.org/10.3390/info12080315 - 6 Aug 2021
Cited by 1 | Viewed by 2418
Abstract
The implementation of quality processes is essential for an academic setting to meet the standards of different accreditation bodies. However, the processes are complex because they involve several steps and several entities. Manual implementation (i.e., using paperwork), which many institutions use, makes it difficult to follow up on progress and close the cycle. It becomes more challenging when more processes are in place, especially when an academic department runs more than one program: having n programs per department means that the work is replicated n times. Our proposal in this study is to use the concept of the Tomasulo algorithm to schedule all processes of an academic institution dynamically. Because of the similarities between computer tasks and workplace processes, applying this method enhances work efficiency and reduces effort. Further, the method provides a mechanism to secure the integrity of the reports of these processes. In this paper, we provide an educational institution case study to show the mechanism of this method and how it can be applied in an actual workplace. The case study includes operational activities that are implemented to assure the program’s quality.
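
The scheduling analogy can be sketched as a loop that dispatches any quality process whose prerequisite reports are complete, in the spirit of Tomasulo's reservation stations; the process names and dependencies are invented for illustration.

```python
# Toy dynamic scheduler: steps wait on their prerequisites and fire as
# soon as all dependencies are satisfied.
pending = {
    "course_report":    set(),                  # no prerequisites
    "survey_analysis":  set(),
    "program_report":   {"course_report", "survey_analysis"},
    "improvement_plan": {"program_report"},
}

completed: set[str] = set()
while pending:
    # Dispatch every step whose dependencies are all satisfied.
    ready = [p for p, deps in pending.items() if deps <= completed]
    if not ready:
        raise RuntimeError("cyclic dependency among processes")
    for p in ready:
        print("executing:", p)
        completed.add(p)
        del pending[p]
```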

15 pages, 2680 KiB  
Article
Tracing CVE Vulnerability Information to CAPEC Attack Patterns Using Natural Language Processing Techniques
by Kenta Kanakogi, Hironori Washizaki, Yoshiaki Fukazawa, Shinpei Ogata, Takao Okubo, Takehisa Kato, Hideyuki Kanuka, Atsuo Hazeyama and Nobukazu Yoshioka
Information 2021, 12(8), 298; https://doi.org/10.3390/info12080298 - 26 Jul 2021
Cited by 27 | Viewed by 7211
Abstract
For effective vulnerability management, vulnerability and attack information must be collected quickly and efficiently. A security knowledge repository can collect such information. The Common Vulnerabilities and Exposures (CVE) list provides known vulnerabilities of products, while the Common Attack Pattern Enumeration and Classification (CAPEC) stores attack patterns, which are descriptions of common attributes and approaches employed by adversaries to exploit known weaknesses. Because the information in these two repositories is not linked, identifying related CAPEC attack information from CVE vulnerability information is challenging. Currently, the related CAPEC-ID can be traced from the CVE-ID using the Common Weakness Enumeration (CWE) in some, but not all, cases. Here, we propose a method to automatically trace the related CAPEC-IDs from a CVE-ID using three similarity measures: TF–IDF, Universal Sentence Encoder (USE), and Sentence-BERT (SBERT). We prepared and used 58 CVE-IDs as test input data and tested whether we could trace the CAPEC-IDs related to each of them. We experimentally confirmed that TF–IDF is the best similarity measure, as it traced 48 of the 58 CVE-IDs to the related CAPEC-ID.
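
The best-performing measure, TF–IDF, reduces to a few lines with scikit-learn; the two abbreviated CAPEC entries and the CVE description below are stand-ins for the real repository texts.

```python
# Rank CAPEC entries by TF-IDF cosine similarity to a CVE description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

capec = {
    "CAPEC-66": "SQL injection through user-controllable input ...",
    "CAPEC-63": "cross-site scripting by injecting malicious script ...",
}
cve_text = ("SQL injection vulnerability allows remote attackers "
            "to execute arbitrary SQL commands")

vec = TfidfVectorizer().fit(list(capec.values()) + [cve_text])
scores = cosine_similarity(vec.transform([cve_text]),
                           vec.transform(list(capec.values())))[0]

# Print candidate attack patterns, best match first.
for (cid, _), s in sorted(zip(capec.items(), scores), key=lambda t: -t[1]):
    print(cid, round(float(s), 3))
```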
