Proceeding Paper

Evaluating Low-Code Development Platforms: A MULTIMOORA Approach †

Danial Serekov, Alibek Bissembayev, Teodor Iliev, Assel Mukasheva and Jeong Won Kang
1 School of Information Technology and Engineering, Kazakh-British Technical University, Almaty 050000, Kazakhstan
2 Department of Telecommunication, University of Ruse, 7017 Ruse, Bulgaria
3 Department of Transportation System Engineering, Korea National University of Transportation, Uiwang 16106, Republic of Korea
* Authors to whom correspondence should be addressed.
† Presented at the International Conference on Electronics, Engineering Physics and Earth Science (EEPES 2025), Alexandroupolis, Greece, 18–20 June 2025.
Eng. Proc. 2025, 104(1), 15; https://doi.org/10.3390/engproc2025104015
Published: 25 August 2025

Abstract

Swiftly advancing low-code development platforms (LCDPs) have created a new branch of software development, allowing applications to be built rapidly with minimal coding knowledge. However, despite the opportunities they offer, selecting the most appropriate platform is difficult because the available alternatives differ widely in features, usage scenarios, and performance characteristics. The MULTIMOORA method, combined with a robust evaluation and weighting scheme, can greatly facilitate the selection process and improve the quality of the resulting decision. The evaluation scheme comprises ten global criteria, each with internal sub-criteria covering different factors. Seven of the most popular platforms were tested: Kissflow, Salesforce App Cloud, Zoho Creator, OutSystems, MS Power App, Mendix, and Appian. The results demonstrate the practical value of the approach by highlighting the strengths of the evaluated platforms as well as of the method itself. This study offers a multifaceted and sustainable approach to platform validation that supports the use of LCDPs in various applications and helps organizations make rational decisions.

1. Introduction

In recent years, low-code development platforms (LCDPs) have attracted considerable interest as tools that accelerate application development and allow various organizations to streamline software development by reducing the time, cost, and effort required for domain-specific tasks [1]. LCDPs make it possible to create such applications with minimal coding and have proven their ability to simplify and speed up the software development process [2]. Rapid technological advances are contributing to the emergence of many different platforms on the market [3]. Notably, not only desktop but also mobile applications can be built on such platforms toward a common goal [4]. Within these applications, analytics can be performed in various ways to optimize space and subsequently reduce task completion time [5]. With user-friendly graphical interfaces and cloud computing capabilities, these platforms enable users with limited programming knowledge to create complete working applications, reducing the complexity associated with traditional code-based software development [6]. In this way, development can involve non-professional staff with little experience in designing architectures or writing code for conventional software [7].
Empirical studies document the frequency of flaws and limitations in LCDPs (in one study, more than 40% of developers encountered serious problems) and identify the factors that influence companies’ decisions to accept or reject a particular platform, as well as barriers to adoption [8]. Assessment frameworks and taxonomies have also been used to improve understanding and facilitate comparisons of low-code development platforms [9]. Such frameworks are designed to give decision-makers the resources they need to assess and choose the platform that best suits their needs, keeping security, scalability, and compatibility in mind [10]. Prior studies have also explored in detail topics such as process automation, system customization, and the continued use of models in reuse scenarios [11].
Often, when evaluating a particular platform, beneficiaries rely on subjective criteria and opinions and do not consider the multifaceted nature of platform selection [12]. The number of platforms on the market has crossed the 200 mark [13], so choosing the optimal platform for a particular project becomes complex and can easily lead to poor decisions, increasing implementation time and financial costs. In today’s business environment, it is unacceptable to choose a platform irrationally, without objective techniques and analysis [14].
Different approaches are recommended for evaluation; some focus on specific factors (e.g., product cost or scalability), while others also address subjective criteria such as user experience [15]. Such methods may be useful in certain situations, but methods covering a wider range of objective criteria give a more complete picture; therefore, the MULTIMOORA method, a robust approach specifically designed for multi-criteria decision-making (MCDM), is proposed.

2. Literature Review

The MULTIMOORA method is a refinement of the older MOORA (multi-objective optimization by ratio analysis) approach: it builds a matrix of alternatives against objectives, applies ratio analysis, and additionally uses a reference (anchor) point method [16]. The approach was first applied in fields outside computer science, such as engineering, mathematics, and economics, before being refined into its current MULTIMOORA form [17]. The method offers versatility and flexibility by allowing weights to be assigned to the various criteria, ensuring that the elements most important to a specific project are prioritized in the final assessment [18].
The growing popularity of LCDPs has created a complexity that must be addressed in order to identify the most complete solution for development. Although a multitude of LCDPs exist, each has its own strengths and weaknesses across functionality, user experience, and development approach. This complexity requires a robust and objective evaluation to determine the platform that best meets a wide range of developer needs, which the MULTIMOORA method satisfies [19]. In contrast to traditional evaluation methods, this technique enables a thorough examination of the potentially conflicting criteria that are pertinent to the selection of a general-purpose LCDP.
This study combines a set of metrics with weighted scores to produce an overall rating for each platform against the selected criteria, so that organizations can identify the platform best suited to their unique needs and goals. The goal of this study is to evaluate platforms objectively using structured methods such as MULTIMOORA, helping organizations select and successfully implement a platform.
Existing LCDP evaluation frameworks mostly focus on performance, usability, and functionality; they often use the Technical, Organizational, and Environmental (TOE) framework to consider aspects affecting adoption. In order to understand how business requirements and IT resources influence platform adoption, studies consider adoption archetypes (e.g., “democratizers” of application development and “mitigators of IT resource scarcity”) [20]. For example, Use Case Point (UCP) ignores the special characteristics of LCDPs (e.g., rapid prototyping and reduced code writing), but fits well with standard software development processes [21]. Moreover, the Task Technology Fit (TTF) theory assesses how well task and platform capabilities match, considering development resources and application complexity [22]. Systems that extend quality models, such as ISO/IEC 25010:2011, face challenges in assessing the dynamic nature of IoT systems, which is a key application of LCDPs [23,24,25]. Understanding the actual usefulness of these platforms is limited by the unavailability of models that consider both operational efficiency and quality.

3. Materials and Methods

3.1. MULTIMOORA Framework

The MULTIMOORA method combines two main constructs drawn from the MOORA method: the ratio system and the reference point approach. MOORA evaluates alternatives through performance ratios on several criteria and compares them with an ideal reference point to determine the most appropriate option. Together, these sub-methods provide a unified overview of the decision problem.
Ratio System (RS) Formula. The ratio system normalizes the performance of each alternative on a given criterion relative to the performance of all alternatives on that criterion. The formula is:
$$ RS_{ij} = \frac{x_{ij}}{\sum_{j=1}^{m} x_{ij}}, $$
where:
  • $RS_{ij}$ is the normalized score of alternative $j$ with respect to criterion $i$;
  • $x_{ij}$ is the evaluation of the effectiveness of alternative $j$ with respect to criterion $i$;
  • $m$ is the total number of possible alternatives.
MOORA Formula. The MOORA method aggregates the normalized scores across all criteria into a single weighted score for each alternative. The formula is:
$$ S_{j} = \sum_{i=1}^{n} \omega_{i} \times RS_{ij}, $$
where:
  • $S_{j}$ is the aggregated performance score of alternative $j$;
  • $\omega_{i}$ is the weight of criterion $i$;
  • $n$ is the total number of criteria.
The simplicity of these formulas does not compromise mathematical rigor; together they yield a methodology that produces a comprehensive, comparable evaluation of the platforms. A small illustrative sketch follows.
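As an illustrative sketch only (not the authors’ published code), the following Python fragment applies the two formulas above to a small hypothetical decision matrix using numpy: the ratio system normalizes each criterion over all alternatives, and the weighted sum aggregates the normalized scores per alternative.

```python
import numpy as np

# Hypothetical decision matrix: rows = criteria (i = 1..n), columns = alternatives (j = 1..m).
X = np.array([
    [7, 6, 7, 4],   # e.g., Graphical User Interface component counts
    [2, 2, 2, 2],   # e.g., Compatibility Assistance
    [3, 3, 3, 2],   # e.g., Reusability Support
], dtype=float)

# Criterion weights (0.1 per component, following the scheme described in Section 3.2).
w = np.array([0.9, 0.2, 0.3])

# Ratio system: RS_ij = x_ij / (sum over all alternatives j of x_ij), per criterion i.
RS = X / X.sum(axis=1, keepdims=True)

# Weighted aggregation: S_j = sum over criteria i of w_i * RS_ij.
S = w @ RS

print("Normalized ratio-system matrix:\n", RS)
print("Aggregated MOORA scores per alternative:", S)
```

The ordering of the aggregated scores then determines the ranking of the alternatives.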

3.2. Data Collection and Evaluation Criteria

Data were collected from scientific publications, industry reports, and the official technical documentation of each platform; these primary sources provide accurate, referenceable information and support a full understanding of each LCDP’s assessment. The collected documentation was a valuable source of information on the capabilities, features, and limitations of each platform, offering insight into its technical details and performance metrics.
As a result, the list of evaluated platforms is as follows: Kissflow, Salesforce App Cloud, Zoho Creator, OutSystems, MS Power App, Mendix, and Appian.
Table 1 shows that each criterion (feature) has corresponding components and a weighting factor. Each of the ten criteria has its own weight, determined by the number of components it contains: every component, regardless of the criterion, is worth 0.1 points. For example, a criterion with five components has a weighting factor of 0.5, and a platform that provides four of those five components receives a score of 0.4 for that criterion (a sketch of this rule is given below). Together, these weights provide an objective score for each criterion for general-purpose development.
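A minimal sketch of this weighting rule, using hypothetical helper functions (the function names are illustrative and are not taken from the authors’ implementation):

```python
# Each component contributes 0.1 points, regardless of the criterion it belongs to.
def criterion_weight(num_components: int) -> float:
    """Weighting factor of a criterion that contains num_components components."""
    return round(0.1 * num_components, 1)

def platform_score(num_supported: int) -> float:
    """Score a platform receives for a criterion when it provides num_supported components."""
    return round(0.1 * num_supported, 1)

# Example from the text: a five-component criterion has weight 0.5,
# and a platform covering four of those components scores 0.4 on it.
assert criterion_weight(5) == 0.5
assert platform_score(4) == 0.4
```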
Table 2 provides a structured overview of the key features in low-code development platforms (LCDPs), detailing their functionality and role in application development. The resulting data and scale scores were calculated using the MULTIMOORA index, which presents an overall score; the higher the score, the better the platform.

3.3. Evaluation

The steps of the method were constructed to systematically evaluate LCDPs according to predetermined criteria. First, using an extensive review, the relevant criteria were identified. Next, to provide comparability and reflect the relative relevance of each criterion, they were normalized and weighted. Ratio analysis was applied to evaluate each LCDP, obtaining performance scores for each criterion relative to the ideal. The total of these ratings at the end yielded a thorough evaluation of the overall suitability of each platform based on all the assessed factors.
The integration of the method into program code involved importing libraries for the calculations: numpy for mathematical computation of the coefficients, os for working with the operating system, pandas for structured data processing, and openpyxl for working with Excel files (Figure 1 illustrates the workflow). Functions were defined to standardize the decision matrix, compute the MULTIMOORA indices, and extract selected DataFrame columns as integer matrices; data preparation consisted of organizing the evaluation criteria and platform names into pandas DataFrames. Processing the resulting data entailed converting the DataFrame columns into integer numpy arrays and applying the weights to each criterion. The method indices were then calculated, the scores in the DataFrame were updated, the total scores were computed, the platforms were sorted by score, and the results were exported to an Excel file for further analysis; a simplified sketch of this workflow is shown below.
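The following sketch is a simplified illustration of the workflow just described; it assumes hypothetical column names, weights, and an output file name, and is not the authors’ actual script.

```python
import numpy as np
import pandas as pd  # pandas uses openpyxl under the hood for .xlsx output

# Hypothetical criterion scores per platform (one column per criterion).
criteria = ["GUI", "Compatibility", "Safety", "Reusability"]
weights = np.array([0.9, 0.2, 0.2, 0.3])
df = pd.DataFrame({
    "Platform": ["Kissflow", "Salesforce App Cloud", "Zoho Creator"],
    "GUI": [7, 6, 7],
    "Compatibility": [2, 2, 2],
    "Safety": [2, 2, 2],
    "Reusability": [3, 3, 3],
})

# Extract the criterion columns as a numeric matrix (platforms x criteria).
X = df[criteria].to_numpy(dtype=float)

# Ratio-system normalization per criterion, then weighted aggregation per platform.
RS = X / X.sum(axis=0, keepdims=True)
df["MULTIMOORA index"] = RS @ weights

# Total raw criterion score, sorting by the index, and export to Excel for review.
df["Total"] = X.sum(axis=1)
df = df.sort_values("MULTIMOORA index", ascending=False)
df.to_excel("lcdp_multimoora_results.xlsx", index=False)
```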

4. Results

The data were extracted from an Excel file after the LCDP study was completed. The ten most important criteria were considered in the evaluation. Table 3 shows the ratings of the platforms; each criterion value is the number of components available on the platform. The platforms are listed from left to right in descending order of total score, and the bottom rows give the calculated MULTIMOORA index and the total score (the sum of the criterion component scores and the index). At first glance, the differences may seem decisive, but a complete overview of the components in the table reveals the advantages of each platform; in certain scenarios these may outweigh the scores within a criterion group, the total score, or the index.
A fundamental factor influencing the choice of an LCDP and its subsequent implementation is the platform’s technical capability, including security and scalability; without these two elements, business processes cannot flow smoothly. The method helps decision-makers by evaluating such factors in a structured way. During evaluation and selection, organizations may encounter inconsistencies when looking at multiple criteria, but MULTIMOORA allows them to select the most appropriate platform for their unique needs while mitigating risk through a comprehensive evaluation.
Figure 2 presents a comparison chart by platform and criterion. Kissflow and Zoho Creator performed exceptionally well in the Graphical User Interface (GUI) assessment, scoring highest on visual components. All platforms received full marks for safety assistance. However, Zoho Creator and MS Power App lacked real-time support for cooperative development. In terms of reusability support, Kissflow, Zoho Creator, and Salesforce App Cloud had the best scores. With the exception of Appian, all platforms performed well on scalability. Kissflow, Salesforce App Cloud, and OutSystems were strongest in business logic specification techniques. Every platform received an equal score for application development methods. While Kissflow led in the variety of supported applications, Salesforce App Cloud, OutSystems, Mendix, and Appian stood out in terms of deployment support.
Figure 3 shows the MULTIMOORA index on a heat scale, where warmer colors (yellow, orange) correspond to high values and cooler colors (blue, violet) to low values. Kissflow leads on most criteria, scoring 30.8638 points with a MULTIMOORA index of 0.8638, followed by Salesforce App Cloud with a score of 28.5497 and an index of 0.5497. Conversely, Mendix and Appian received the lowest totals of 22.2696 and 21.3081, with indices of 0.2696 and 0.3081, respectively; the index is an important attribute in the weighting of scores that was not considered in previous, inconsistent evaluation methodologies. The fact that the index and the total score do not move in lockstep is clearly visible for Zoho Creator, OutSystems, and MS Power App, whose total scores were 26.661, 25.3697, and 23.479, with indices of 0.661, 0.3697, and 0.479, respectively.
Figure 4 offers a multidimensional view of platform performance. This perspective examines each criterion individually, visually showing the balance (or imbalance) between different platform capabilities and enabling trend analysis, which helps us understand which platforms are well rounded and which excel in only a few areas. Because the maximum number of points in each criterion component group is limited, the chart may appear monotonous, but striking differences and patterns of platform orientation can still be seen. Spider charts of this kind emphasize the strengths of each platform and make it easier to identify areas that need improvement. By examining these patterns in detail, platform selection decisions can be grounded in thorough analysis, providing the best long-term adaptability and scalability for dynamic product development needs.
Although the results are sensitive to the underlying data, the equal weighting of components yields an unbiased evaluation of all platforms, as can be observed in Figure 5, which orders the platforms from the lowest to the highest functionality. The gaps between the total score and the MULTIMOORA index are visible on the graph because the index treats the criteria as equally important and applies normalization rather than simply summing scores, so dominance in key aspects can outweigh the raw totals and highlight platforms with unique strengths. In this way, this study addresses some of the gaps in previous work that used approaches of markedly different structure and comprehensiveness for evaluating low-code platforms.
The MULTIMOORA analysis refined the evaluation of scalability, support for collaborative development, and the rating of business logic specifications. As a result, these findings, grounded in functional benefits rather than incomplete or inconsistent previous evaluations, provide a clear basis for selecting the most appropriate platform.

5. Conclusions

LCDPs have become considerably popular in both the academic and corporate worlds over the past decade. Studying and comparing multiple platforms is challenging and requires a careful, structured, and responsible approach. At this refinement stage, interaction with various LCDPs is necessary to confirm the generated classification and to avoid the obstacles encountered when building a reference application.
From a wide range of platforms and an abundance of criteria, it was possible to build an objective evaluation and classification that produced reliable results, which could be studied comprehensively to reach an understanding of the preferred option.
This study offers an alternative perspective on LCDP selection and demonstrates how well the MULTIMOORA approach supports detailed, data-driven evaluation of low-code platforms. Although this study focuses on a specific set of LCDPs and evaluation guidelines, future research in this area could not only broaden the coverage to a wider range of platforms, but also explore additional guidelines, criteria, and new comparison and selection methods suited to evolving low-code application development models.

Author Contributions

Conceptualization: D.S., A.B., A.M., T.I. and J.W.K. Methodology: D.S., A.B., A.M. and J.W.K. Investigation: D.S., T.I., A.B., A.M. and J.W.K. Writing—Original Draft: J.W.K. Writing—Review and Editing: A.B. and J.W.K. Resources: D.S., A.B. and A.M. Supervision: D.S., A.B., T.I. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Institute for Advancement of Technology (KIAT), grant provided by the Korea Government (MOTIE) (RS-2022-KI002562, HRD Program for Industrial Innovation). This research was supported by the Ministry of Trade, Industry and Energy (MOTIE), and the Korea Institute for Advancement of Technology (KIAT) through the “Support for Middle Market Enterprises and Regional innovation Alliances (RS-2025-02633071)” program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LCDP	Low-code development platform
MCDM	Multi-criteria decision-making
RS	Ratio system
MOORA	Multi-Objective Optimization on the Basis of Ratio Analysis

References

  1. Bock, A.C.; Frank, U. Low-code platform. Bus. Inf. Syst. Eng. 2021, 63, 733–740. [Google Scholar] [CrossRef]
  2. Picek, R. Low-code/no-code platforms and modern ERP systems. In Proceedings of the 2023 International Conference on Information Management (ICIM), Oxford, UK, 17–19 March 2023; pp. 44–49. [Google Scholar] [CrossRef]
  3. Di Ruscio, D.; Kolovos, D.; de Lara, J.; Pierantonio, A.; Tisi, M.; Wimmer, M. Low-code development and model-driven engineering: Two sides of the same coin? Softw. Syst. Model. 2022, 21, 437–446. [Google Scholar] [CrossRef]
  4. Phalake, V.; Joshi, S.; Rade, K.; Phalke, V. Modernized application development using optimized low code platform. In Proceedings of the 2nd Asian Conference on Innovation in Technology (ASIANCON), Ravet, India, 26–28 August 2022; pp. 1–4. [Google Scholar] [CrossRef]
  5. Phalake, V.S.; Joshi, S.D.; Rade, K.A.; Phalke, V.S.; Molawade, M. Optimization for achieving sustainability in low code development platform. Int. J. Interact. Des. Manuf. 2023, 18, 1–8. [Google Scholar] [CrossRef] [PubMed]
  6. Juhas, G.; Molnár, L.; Juhásová, A.; Ondrišová, M.; Mladoniczky, M.; Kováčik, T. Low-code platforms and languages: The future of software development. In Proceedings of the 20th International Conference on Emerging eLearning Technologies and Applications (ICETA), Stary Smokovec, Slovakia, 20–21 October 2022; pp. 286–293. [Google Scholar] [CrossRef]
  7. Tang, L. ERP low-code cloud development. In Proceedings of the 2022 IEEE 13th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 21–23 October 2022; pp. 319–323. [Google Scholar] [CrossRef]
  8. Liu, D.; Jiang, H.; Guo, S.; Chen, Y.; Qiao, L. What’s wrong with low-code development platforms? An empirical study of low-code development platform bugs. IEEE Trans. Reliab. 2023, 73, 695–709. [Google Scholar] [CrossRef]
  9. Overeem, M.; Jansen, S. Proposing a framework for impact analysis for low-code development platforms. In Proceedings of the ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), Fukuoka, Japan, 10–15 October 2021; pp. 88–97. [Google Scholar] [CrossRef]
  10. Almonte, L.; Cantador, I.; Guerra, E.; de Lara, J. Towards automating the construction of recommender systems for low-code development platforms. In Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, Virtual Event, 16–23 October 2020; pp. 1–10. [Google Scholar] [CrossRef]
  11. Dalibor, M.; Heithoff, M.; Michael, J.; Netz, L.; Pfeiffer, J.; Rumpe, B.; Varga, S.; Wortmann, A. Generating customized low-code development platforms for digital twins. J. Comput. Lang. 2022, 70, 101117. [Google Scholar] [CrossRef]
  12. Käss, S.; Strahringer, S.; Westner, M. Practitioners’ perceptions on the adoption of low code development platforms. IEEE Access 2023, 11, 29009–29034. [Google Scholar] [CrossRef]
  13. Sahay, A.; Indamutsa, A.; Di Ruscio, D.; Pierantonio, A. Supporting the understanding and comparison of low-code development platforms. In Proceedings of the 2020 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Portoroz, Slovenia, 26–28 August 2020; pp. 171–178. [Google Scholar] [CrossRef]
  14. Philippe, J.; Coullon, H.; Tisi, M.; Sunyé, G. Towards transparent combination of model management execution strategies for low-code development platforms. In Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, Virtual Event, 16–23 October 2020; pp. 1–10. [Google Scholar] [CrossRef]
  15. Ibrahimi, I.; Moudilos, D. Towards model reuse in low-code development platforms based on knowledge graphs. In Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, Montreal, QC, Canada, 23–28 October 2022; pp. 826–836. [Google Scholar] [CrossRef]
  16. Brauers, W.K.M.; Zavadskas, E.K. The MOORA method and its application to privatization in a transition economy. Control Cybern. 2006, 35, 445–469. [Google Scholar]
  17. Ji, S.; Agunbiade, M.; Rajabifard, A.; Kalantari, M. Strategies for improving land delivery for residential development: A case of the north-west metropolitan Melbourne. Int. J. Geogr. Inf. Sci. 2015, 29, 1649–1667. [Google Scholar] [CrossRef]
  18. Baležentis, T.; Baležentis, A. A survey on development and applications of the multi-criteria decision making method MULTIMOORA. J. Multi-Criteria Decis. Anal. 2014, 21, 209–222. [Google Scholar] [CrossRef]
  19. Brauers, W.K.M.; Zavadskas, E.K. Robustness of MULTIMOORA: A method for multi-objective optimization. Informatica 2012, 23, 1–25. [Google Scholar] [CrossRef]
  20. Käss, S.; Strahringer, S.; Westner, M. A multiple mini case study on the adoption of low code development platforms in work systems. IEEE Access 2023, 11, 118762–118786. [Google Scholar] [CrossRef]
  21. Abdurrasyid; Susanti, M.N.I.; Indrianto. Implication of low-code development platform on use case point methods. In Proceedings of the 2022 International Conference on ICT for Smart Society (ICISS), Bandung, Indonesia, 10–11 August 2022; pp. 1–6. [Google Scholar] [CrossRef]
  22. Gomes, P.M.; Brito, M.A. Low-code development platforms: A descriptive study. In Proceedings of the 2022 17th Iberian Conference on Information Systems and Technologies (CISTI), Madrid, Spain, 22–25 June 2022; pp. 1–4. [Google Scholar] [CrossRef]
  23. Ihirwe, F.; Di Ruscio, D.; Gianfranceschi, S.; Pierantonio, A. Assessing the quality of low-code and model-driven engineering platforms for engineering IoT systems. In Proceedings of the 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), Guangzhou, China, 5–9 December 2022; pp. 583–594. [Google Scholar] [CrossRef]
  24. Gabdullin, M.T.; Suinullayev, Y.; Kabi, Y.; Kang, J.W.; Mukasheva, A. Comparative Analysis of Hadoop and Spark Performance for Real-time Big Data Smart Platforms Utilizing IoT Technology in Electrical Facilities. J. Electr. Eng. Technol. 2024, 19, 4595–4606. [Google Scholar] [CrossRef]
  25. Gabdullin, M.T.; Mukasheva, A.; Koishiyeva, D.; Umarov, T.; Bissembayev, A.; Kim, K.-S.; Kang, J.W. Automatic cancer nuclei segmentation on histological images: Comparison study of deep learning methods. Biotechnol. Bioproc. 2024, 29, 1034–1047. [Google Scholar] [CrossRef]
Figure 1. Complete workflow of the MULTIMOORA method code.
Figure 2. Comparison of low-code platforms by criteria (numbers correspond to weighted evaluation results).
Figure 3. Relationship between the MULTIMOORA index and total score.
Figure 4. Spider chart of the strength of platform criteria.
Figure 5. Comparison of low-code platforms based on the MULTIMOORA index and total score.
Table 1. Features, components, and weights of LCDPs.
Feature | Components | Weight
Graphical User Interface | Drag-and-Drop Builder, Click-to-Configure Tools, Prebuilt Forms/Reports, Ready Dashboards, Responsive Forms, Milestone Monitoring, Detailed Reporting, Integrated Workflows, Flexible Workflows | 0.9
Compatibility Assistance | Third-Party Service Integration, Data Access Connectivity | 0.2
Safety Assistance | Software Safety, Environment-Level Safety | 0.2
Cooperative Development | Offline Team Interaction, Real-Time Interaction | 0.2
Reusability Support | Embedded Workflow Templates, Pre-Designed Forms/Reports, Pre-Configured Dashboards | 0.3
Scalability | User Capacity Expansion, Data Traffic Handling, Data Capacity Growth | 0.3
Business Process Definition | Rules Management System, Graphical Workflow Designer, AI-Enhanced Tools | 0.3
Application Development Methods | Automated Generation of Code, Runtime Model Execution | 0.2
Deployment Support | Cloud-Based Deployment, On-Premises Deployment | 0.2
Kinds of Supported Applications | Event Tracking, Workflow Automation, Approval Management, Escalation Tracking, Inventory Tracking, Quality Management, Workflow Management | 0.7
Table 2. Description of core LCDP features.
Feature | Description
Graphical User Interface | Simplifies interaction with drag-and-drop builders, prebuilt forms, ready dashboards, progress tracking, and customizable workflows.
Compatibility Assistance | Provides seamless integration with data flows and various services.
Safety Assistance | Provides robust application- and platform-level safety measures.
Cooperative Development | Enhances teamwork with offline and real-time online interaction tools.
Reusability Support | Includes reusable templates, forms, and dashboards for efficiency and consistency.
Scalability | Supports user expansion, data traffic handling, and capacity and storage growth.
Business Process Definition | Streamlines logic with a rules engine, graphical editors, and AI-powered tools.
Application Development Methods | Enables quick development with automated generation of code and runtime execution.
Deployment Support | Offers cloud-based and on-premises deployment options.
Kinds of Supported Applications | Facilitates event tracking, workflow automation, approval management, and more.
Table 3. Evaluation of low-code development platforms using the MULTIMOORA method.
Feature | Kissflow | Salesforce App Cloud | Zoho Creator | OutSystems | MS Power App | Mendix | Appian
Graphical User Interface (GUI) | 7 | 6 | 7 | 4 | 5 | 3 | 3
  Drag-and-Drop Builder: 6/7
  Click-to-Configure Tools: 1/7
  Prebuilt Forms/Reports: 7/7
  Ready Dashboards: 5/7
  Responsive Forms: 2/7
  Milestone Monitoring: 7/7
  Detailed Reporting: 1/7
  Integrated Workflows: 3/7
  Flexible Workflows: 3/7
Compatibility Assistance | 2 | 2 | 2 | 2 | 2 | 2 | 2
  Third-Party Service Integration: 7/7
  Data Access Connectivity: 7/7
Safety Assistance | 2 | 2 | 2 | 2 | 2 | 2 | 2
  Software Safety: 7/7
  Environment-Level Safety: 7/7
Cooperative Development | 2 | 2 | 1 | 2 | 1 | 2 | 2
  Offline Team Interaction: 7/7
  Real-Time Interaction: 5/7
Reusability Support | 3 | 3 | 3 | 2 | 2 | 1 | 1
  Embedded Workflow Templates: 3/7
  Pre-Designed Forms/Reports: 7/7
  Pre-Configured Dashboards: 5/7
Scalability | 3 | 3 | 3 | 3 | 3 | 3 | 1
  User Capacity Expansion: 7/7
  Data Traffic Handling: 6/7
  Data Storage Growth: 6/7
Business Process Definition | 3 | 3 | 1 | 3 | 1 | 2 | 2
  Rules Management System: 7/7
  Graphical Workflow Designer: 4/7
  AI-Enhanced Tools: 4/7
Application Development Methods | 1 | 1 | 1 | 1 | 1 | 1 | 1
  Automated Generation of Code: 1/7
  Runtime Model Execution: 6/7
Deployment Support | 1 | 2 | 1 | 2 | 1 | 2 | 2
  Cloud-Based Deployment: 7/7
  On-Premises Deployment: 4/7
Kinds of Supported Applications | 6 | 4 | 5 | 4 | 5 | 4 | 5
  Event Tracking: 7/7
  Workflow Automation: 5/7
  Approval Management: 0/7
  Escalation Tracking: 1/7
  Inventory Tracking: 7/7
  Quality Management: 6/7
  Workflow Management: 7/7
MULTIMOORA Index | 0.8638 | 0.5497 | 0.661 | 0.3697 | 0.479 | 0.2696 | 0.3081
Total | 30.864 | 28.5497 | 26.661 | 25.3697 | 23.479 | 22.2696 | 21.308
Indented component rows show n/7, the number of the seven evaluated platforms that provide the component.
The color in the table is intentionally used to highlight the results obtained during the measurements.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
