Article

Machine Learning-Based Three-Way Decision Model for E-Commerce Adaptive User Interfaces

1 Faculty of Management, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
2 Faculty of Information and Communication Technology, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2026, 8(1), 20; https://doi.org/10.3390/make8010020
Submission received: 30 November 2025 / Revised: 13 January 2026 / Accepted: 14 January 2026 / Published: 16 January 2026
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

Abstract

In the world of e-commerce, ensuring customer satisfaction and retention depends on delivering an optimal user experience. As the primary point of contact between businesses and consumers, a user interface’s success hinges on personalized human–computer interaction. The goal of this paper is to introduce the concept of a self-adaptive multi-variant user interface based on a novel application of a three-way decision-making model, which allows for “accept”, “reject”, or “delay” decisions on UI changes. The proposed framework enables the delivery of a multi-variant e-commerce user interface. It leverages human-centered machine learning to identify homogeneous groups of customers for whom a layout tailored to their behavior can be offered. The functionality of the solution was verified through pilot implementation and experimental studies. The results positively validated the three-way decision algorithm and highlighted clear directions for its refinement. The primary contribution of this work is the novel adaptation of the three-way decision model to create an automated framework for e-commerce UI personalization, moving beyond the limitations of traditional binary A/B testing. This study demonstrates the practical feasibility of using a self-adaptive, multi-variant interface to significantly improve user experience and key business metrics. The proposed framework represents a promising solution to the challenges posed by static interfaces and demonstrates the potential for wider application in the e-commerce domain and beyond.

1. Introduction

Personalization, defined by Oxford Languages as the action of designing or producing something to meet someone’s individual requirements, is one of the most important trends in e-commerce today. Its practical importance is underlined by market research conducted by major consultancies. For example, 41% of customers say that personalization is the reason they currently subscribe or plan to subscribe to a particular product or service [1]. In addition, 71% of consumers expect personalization, and 76% are frustrated when they do not find it [2]. At the same time, the growing importance of e-commerce in the global economy should not be overlooked. Its share of retail sales is forecast to rise from 17% to 41% by 2027 [3].
Implementing the human-centered concept in e-commerce requires improving the customer’s user experience (UX). To achieve this goal, it is critical to focus on the user interface (UI)—the appearance and content presented [4]. This is due to the nature of this channel, where the interaction between business and consumer takes place on websites or in emails. On the one hand, this may seem like a disadvantage because it limits the ability to influence the audience in multiple directions, but on the other hand, it is an advantage because it allows for a focus on optimizing communication through the UI of the e-commerce system. It is worth noting that usability and perceived usefulness can be predictors of the frequency of use of e-commerce solutions [5], and consequently affect business efficiency.
Typically, UI design involves UX experts identifying trade-offs to satisfy diverse customer needs. A/B testing is often used for this purpose [6]. However, this approach leads to a situation where a one-size-fits-all layout has to meet the expectations of different users, which contradicts the idea of personalization and individualization of communication. In this case, technical limitations preclude a human-centered approach. An alternative is to use a multi-variant UI, which allows serving different versions of the layout, tailored to specific customer groups [7]. Such a solution requires, on the one hand, the collection of user behavior information to segment users and analyze their characteristics and, on the other hand, it must allow the design, implementation, and verification of modifications aimed at personalizing the different UI variants. Tailoring of UI variants should be based on feedback resulting from customer choices (similar to A/B testing [8]), but can be managed by a human expert or result from implemented self-adaptation algorithms. The latter option is discussed further in the paper. The proposed solution combines the concept of human-centered machine learning [9], which is the processing of information about user behavior to improve their experience in interacting with an information system, with the automation of user interface adaptation, leading to the idea of intelligent user interface (IUI) [10].
Adaptive user interfaces (AUIs) have been evolving practically since the beginning of computer applications in business in the 1960s [11]. They can be described as an interface that can change its behavior to suit an individual or a group of individuals [12]. It is worth noting that adaptive does not always mean the same thing. At least four different levels of adaptivity can be distinguished, ranging from manual to fully adaptive, with intermediate levels [13].
In general, it is possible to customize the user interface for all IT systems, but the specifics of particular applications must be taken into account. A key issue is the number of users of the system that should have self-adaptation functionality available. In the case of intra-organizational applications (such as Enterprise Resource Planning systems—ERP, Workflow), there is a fixed and finely counted set of individuals or individual groups whose behavior should be analyzed and used in adaptation mechanisms [14]. The situation is different for software applications that can have any number of users, such as e-commerce systems. As the number of users grows, so does the amount of information generated from customer journeys [15], increasing the challenge of processing large amounts of data. Despite this limitation, the potential of AUI for e-commerce applications is being recognized. Attempts to bring personalization to e-commerce date back to the early days of this sales channel [16]. The topic has become increasingly popular in recent years, as evidenced by the large number of publications highlighting the marketing relevance of such solutions [17].
Another challenge in designing an AUI is placing the system in a business context. Approaches that work well under some conditions are not necessarily optimal under others, although attempts are being made to develop domain- and device-independent, model-based AUI methodologies [18]. It is also necessary to consider psychological aspects in order to optimize the engagement of information system users [19].
Current trends in AUI development—the proliferation of AI and ML applications—also should not be overlooked [20]. As early as the end of the 20th century, machine learning began to be considered for its potential in this area [21], and today this approach is already widely used, leading to the so-called human-in-the-loop learning [22]. The key to UI adaptation becomes information derived from continuous user tracking with dedicated tools. User preferences are an extremely important aspect in the design of self-adaptive systems due to their unique and dynamic nature [23]. Interesting areas that could have a significant impact on the incorporation of user feedback into the UI adaptation process in the future are the increasingly popular Virtual Reality (VR) and Digital Twin approaches [24]. However, the risks and negative consequences of AI applications in the human–computer interaction (HCI) domain cannot be overlooked [25] and must be weighed against the potential benefits when deciding on human-centered AI solutions [26].
Early attempts to personalize the user interface in e-commerce were limited in depth and scope [27]. Over time, however, they have increasingly turned to data-driven solutions. An example of such an application can be a generic software platform for automatically generating web user interfaces based on analyzed behavioral patterns using machine learning [28]. This type of approach could lead to the development of Intelligent User Interfaces [29], which represent the next stage of AUI evolution. Solutions that automatically tailor the layout to the customer’s needs and preferences represent both a challenge and an opportunity for deeper personalization [7]. This is especially important in channels such as e-commerce, where the competitive nature and lack of barriers to entry make it necessary to look for ways to stand out in order to attract and retain customers.
Various AI/ML techniques are used to personalize communication, which in e-commerce is mainly done through the UI. The use of clustering algorithms for customer segmentation is a very popular way of grouping users [30]. The level of complexity of the proposed solutions varies from simple services aimed at identifying groups of users based on basic data sets to complex systems using deep learning [31]. However, the use of ML in auto-adaptive systems poses several challenges. They include dealing with the uncertainty and dynamics of the changing environment, which requires the implementation of a lifelong learning approach [32], and dealing with large data sets that need to be processed efficiently. For example, it can be noted that not all clustering algorithms can be used in practice due to the size of data sets containing e-commerce customer behavior [33]. Computationally complex methods will not be effective in this business context, and an analysis of publications shows the dominance of the K-means algorithm, which allows combining high quality clustering with limited resource consumption [34]. The costs associated with using machine learning techniques should be one of the criteria for designing the solution architecture [35], as the return on investment, such as in e-commerce personalization, will ultimately be key for decision makers.
Product recommendation systems using various approaches such as Content-Based Filtering (CBF) and Collaborative Filtering (CF) are widely used in practice [36]. While such systems are the most common way of personalizing communication in e-commerce, other approaches to the problem can also be found, such as the use of the Semantic Enhanced Markov Model [37] and image-based product recommendations [38]. It is also possible to use the clustering methods mentioned above to create personalized product lists that are shown to customers to encourage them to buy [39]. However, product recommendations are not the only opportunities presented by the use of big data in e-commerce [40], and UI personalization can be much broader, including techniques for adapting website layout.
Finally, it is worth mentioning that from the perspective of personalized UI applications in e-commerce, the issues related to the customer’s reception of the dedicated message cannot be overlooked. The product-related information, the consistency of product and background presentation, and the variety of product information presentation have been closely related to the perceived quality of e-commerce systems for many years [41]. On the one hand, measurable business results are a good measure of quality, but on the other hand, user satisfaction must be kept in mind [42]. The specific nature of e-commerce means that loyalty is extremely important and should be prioritized over short-term benefits [43]. There are also important issues related to the emotions that often drive online shopping decisions [44] and influencing the decision-making process through non-coercive methods (known as digital nudging—DN [45]).
An analysis of the literature on AUI reveals a research gap resulting from the widespread use of a single UI variant in e-commerce. While this is not a limitation for generating product recommendations that are always displayed in the same place for each customer, it can be a problem when there is a need for more intensive personalization that affects the entire layout. Furthermore, existing validation methods like A/B testing are limited to a binary decision framework. They lack a mechanism to formally handle inconclusive outcomes, often forcing a premature decision or the discarding of potentially valuable modifications. This paper addresses that gap by introducing a three-way decision model that explicitly incorporates an “indecision” or “delay” option, better suited for the iterative and complex nature of UI optimization.
Human-centered multi-variant UIs, based on user behavior and choices, offer a much wider range of possibilities for adapting the design and content presented to users, and their deeper analysis could make an interesting contribution to the development of intelligent user interfaces in e-commerce. An additional challenge is the automation of the UI adaptation process, leading to autonomous solutions that optimize the user experience without human intervention. These issues are addressed in this paper by analyzing the potential of ML-based self-adaptive multi-variant UIs to implement the concept of human-centered development to improve customer UX. The proposed solution is based on the concept of the Three-Way Decision Model (3WD), so it can work effectively in situations where a clear decision is not possible.
The goal of this paper is to introduce the concept of a self-adaptive multi-variant user interface based on a novel application of a three-way decision-making model, which allows for “accept”, “reject”, or “delay” decisions on UI changes. We present an end-to-end framework that integrates human-centered machine learning with this decision model to enable automated and scalable UI optimization for e-commerce platforms. To structure our investigation into this area, this study is guided by the following research questions:
  • RQ1: How can a three-way decision model be effectively adapted to create a self-adaptive framework for managing multi-variant user interfaces in an e-commerce environment?
  • RQ2: What is the practical impact of implementing a self-adaptive, multi-variant user interface on key e-commerce performance metrics, such as conversion rates and average order value?
  • RQ3: What are the key challenges and limitations encountered during the practical implementation of the proposed framework, and what directions do they suggest for future refinements?
By addressing these questions, this paper aims to provide a comprehensive validation of the proposed model, moving from theoretical concepts to practical implementation and evaluation. These questions directly address the research gap identified in the literature, particularly the limitations of static interfaces and binary (A/B testing) decision frameworks.
This paper makes several original contributions. Theoretically, it introduces a novel application of the three-way decision (3WD) model to the domain of UI adaptation, providing a formal mechanism to handle the uncertainty inherent in A/B testing. Practically, it presents an end-to-end framework for a self-adaptive, multi-variant e-commerce interface that goes beyond simple personalization. Methodologically, it integrates human-centered machine learning with this decision model to enable automated and scalable UI optimization. The remainder of this paper is structured as follows: Section 2 details the methods behind the framework. Section 3 presents the results of our empirical study. Section 4 discusses the implications and limitations of our findings, and Section 5 concludes the paper with a summary and directions for future research.

2. Methods

2.1. E-Commerce UI Self-Adaptation Mechanism

There are many ways to personalize the user interface in e-commerce. Product recommendation systems are the most common in practice, but the needs and opportunities are undoubtedly greater. One solution that takes a comprehensive approach to tailoring the UI to the specific characteristics of particular customer groups is the multi-variant user interface (MultiUI) [7]. Such a solution can operate in an expert-managed or self-adaptive mode. The two approaches differ in the process of optimizing the UI variant served to specific groups of customers, obtained through clustering based on behavioral data. In the first case, decisions about the selection of personalized modifications are made by a UX specialist and then verified experimentally. In the second case, the adaptation is automatic, with individual modifications being introduced and evaluated in successive iterations.
The general scheme of the platform supporting a multi-variant user interface is shown in Figure 1.
The first component is responsible for collecting data on customer behavior. In order to make quality recommendations, it is crucial to gather all possible information about how customers use the online store. This means not limiting data collection to basic actions such as adding a product to the cart or placing an order, but capturing all activities (e.g., filter settings, selecting product attributes, expanding descriptions, searching) and their context (pages where the action was performed and time spent on the site). Web analytics systems such as Google Analytics or Matomo can be used for this purpose. It is important to record customer behavior in the form of a clickstream, which allows detailed analysis of user behavior and, if necessary, the identification of bottlenecks and critical steps in the purchase path (e.g., those where the purchase is interrupted). An example of the range of behavioral and contextual data collected by the Matomo system is shown in Figure 2.
Since the collected data includes not only user behavior but also contextual information, it is possible to use comprehensive knowledge about the customer during clustering, thus increasing the quality of the results obtained.
Another important aspect of data collection is privacy. Today’s market standards place increasing emphasis on the informed consent of customers to the tracking of their activities, and regulations enforce great care in the collection and processing of personal data. It is an ethical prerequisite that all data collection for the purposes described in this paper is performed only after obtaining explicit and informed user consent, in compliance with regulations such as GDPR. This includes being transparent with users about what data is being collected and how it will be used to personalize their experience. Therefore, in order to collect data on customer behavior, it is necessary to obtain their consent (e.g., by adding relevant marketing consents) and to anonymize the information (e.g., by tokenizing users). Data about how customers use the online store is used in two ways within the platform described: it forms the basis for clustering users using machine learning algorithms, and it allows analyzing the effectiveness of changes implemented through the self-adaptation mechanism.
Tokens associated with the devices used can be used to ensure privacy, but these will not always be unique identifiers. For this reason, it is worth considering the use of Universal Unique Identifiers (UUIDs) [46] stored in cookies to identify users. This solution also has drawbacks (e.g., inability to distinguish between multiple clients using the same device, or sensitivity to applications that restrict the use of cookies), but because of its simplicity it may be sufficient.
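As a sketch of the identification approach described above, the snippet below mints a version-4 UUID the first time a browser is seen and reuses it on subsequent visits. The `visitor_id` cookie name and the dictionary standing in for the request's cookie jar are illustrative assumptions, not part of the platform.

```python
import uuid


def get_or_create_visitor_id(cookies: dict) -> str:
    """Return the visitor's pseudonymous ID, minting a new UUID if absent.

    `cookies` stands in for the request's cookie jar; in a real web
    application the new value would be written back as a persistent cookie.
    """
    visitor_id = cookies.get("visitor_id")
    if visitor_id is None:
        # Random 128-bit identifier (RFC 4122, version 4); carries no
        # personal data, which supports the anonymization requirement.
        visitor_id = str(uuid.uuid4())
        cookies["visitor_id"] = visitor_id
    return visitor_id


cookies = {}
first = get_or_create_visitor_id(cookies)   # first visit: token is minted
second = get_or_create_visitor_id(cookies)  # same browser: token is reused
```

The token only identifies a browser profile, which reproduces the limitations noted above: several people sharing one device collapse into a single "user", and clearing cookies creates a new one.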
The second component of the platform is designed to group customers based on their characteristic behavior. Due to the large amount of data to be processed, clustering algorithms commonly used for customer segmentation are useful [34]. However, it is important to note that not all clustering methods are equally useful for grouping customers in order to serve them with specific UI variants. Three main aspects should be considered when choosing an algorithm: computational complexity, clustering quality metrics, and business context.
Computational complexity affects the resources (primarily memory and processor power) that must be allocated to successfully perform clustering. Some algorithms may not be able to handle large data sets due to insufficient computing power. Although there are solutions today that allow resources to be scaled, this is not always possible to the desired level and is usually expensive. Given these limitations, it is worth considering using methods that cluster large datasets well, such as K-means [33].
Several indicators can be used to evaluate the quality of clustering, including the Silhouette Score [47], the Calinski–Harabasz Index [48], the Dunn Index [49], and the Davies–Bouldin Index [50]. They measure various characteristics of the clusters, such as the degree of cohesion and separation of data points, the similarity between objects, and the compactness of the clusters. When selecting measures of clustering quality, it is important to keep in mind that the resulting recommendations may differ from each other and that it is better to rely on an aggregated indicator that considers several metrics rather than a single measure. In addition, the values of the indicators depend on the number of clusters (k), which makes it a challenge to determine this value [51].
An often overlooked criterion for selecting a clustering algorithm, but a key one for practical use, is the fit with the business context. When serving dedicated UI variants, the correct distribution of customers is key, especially in terms of the number of customers in each cluster. The self-adaptation mechanism requires the collection of a sufficient amount of feedback, which means that dedicated UI variants must be served to many customers in order to be evaluated. It should also be noted that in online stores, the percentage of returning users is typically in the tens of percent (on an annual basis) [52], and the conversion rate is typically a few percent [53]. This means that for clusters with too few customers, a dedicated UI variant is rarely served and it would take a very long time to collect data to assess macro conversion (resulting from orders placed). In turn, prolonging the time to evaluate the changes made increases the risk of external factors influencing the evaluation, which could skew the results. Because of these limitations, clustering algorithms should be chosen so that the division of customers into clusters is as even as possible, and the number of users in the smallest cluster is not less than an assumed threshold (e.g., 5% of the total population). In addition, these requirements exclude algorithms that are not guaranteed to generate a certain number of clusters (e.g., DBSCAN), since there should not be too many clusters due to the cost of designing and maintaining UI variants. A comprehensive comparative analysis of different clustering methods, including K-means, DBSCAN, and Gaussian Mixture Models, was conducted separately to determine the most suitable approach for this e-commerce context [54].
That analysis concluded that the K-means algorithm provided the best balance of computational efficiency, cluster quality, and segment size equality, making it the most practical choice for creating actionable user groups for A/B testing of UI variants.
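The selection logic described above can be sketched as follows: K-means partitions are scored with the three quality indicators mentioned earlier, alongside the smallest cluster's share of the population as the business-context constraint. The synthetic data, feature count, and candidate range of k are illustrative assumptions, not the study's actual dataset.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (
    silhouette_score,
    calinski_harabasz_score,
    davies_bouldin_score,
)

# Synthetic stand-in for behavioral session features (4 latent groups).
X, _ = make_blobs(n_samples=2000, centers=4, n_features=6, random_state=42)


def evaluate_k(X, k):
    """Cluster with K-means and report quality metrics plus the smallest
    cluster's population share (the business constraint, e.g. >= 5%)."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    shares = np.bincount(labels) / len(labels)
    return {
        "silhouette": silhouette_score(X, labels),          # higher is better
        "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher is better
        "davies_bouldin": davies_bouldin_score(X, labels),  # lower is better
        "min_cluster_share": shares.min(),
    }


scores = {k: evaluate_k(X, k) for k in range(2, 7)}
# In practice an aggregated indicator over several metrics is preferable;
# the silhouette alone is used here only to keep the sketch short.
best_k = max(scores, key=lambda k: scores[k]["silhouette"])
```

A candidate k that scores well but leaves its smallest cluster below the assumed population threshold would be rejected on business grounds, regardless of the quality metrics.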
The third component is responsible for assessing the impact of changes on customer choices in the online store and is a key element of the self-adaptation mechanism (Figure 3).
The decision model is based on a set of metrics to verify whether the implemented changes have positively impacted user behavior. These metrics can include macro conversion metrics related to orders placed, such as Conversion Rate (CR) and Average Order Value (AOV), as well as micro conversion metrics related to single actions, like Click Through Rate (CTR), or sequences of actions, such as Partial Conversion Rate (PCR [7]). Descriptions of the metrics can be found in Table 1.
This last indicator can be flexibly adapted to the specifics of the changes being studied and allows the analysis of different sequences of customer activity, together with the specific priorities of certain events. Unlike the others, it is not commonly used, hence the need for clarification. PCR is calculated according to the following formula [7]:
PCR_c = (1/n) · Σ_{i=1}^{n} Σ_{j=1}^{s_i} CVV_{ij}
where:
  • n is the number of sessions related to customers from the cluster c;
  • s_i is the number of activities within session i;
  • PCR_c is the calculated PCR metric value for the cluster c;
  • CVV_{ij} is the score of an activity j during a session i.
This metric offers considerable flexibility in selecting and prioritizing activities for study by assigning different CVV values to actions. This versatility is valuable for studying the impact of changes on different elements of the user interface without having to wait for the purchase process to complete.
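A minimal sketch of the PCR computation, assuming illustrative activity names and CVV scores (the actual scoring scheme is defined in [7]):

```python
def partial_conversion_rate(sessions, cvv):
    """Compute PCR for one cluster: the average, over sessions, of the
    summed CVV scores of the activities performed in each session.

    `sessions` is a list of sessions, each a list of activity names;
    `cvv` maps an activity name to its score. Activities absent from
    `cvv` contribute nothing, which is how the metric focuses the
    analysis on selected elements of the interface.
    """
    if not sessions:
        return 0.0
    per_session = [sum(cvv.get(a, 0.0) for a in s) for s in sessions]
    return sum(per_session) / len(sessions)


# Hypothetical scoring: micro-actions score low, checkout scores high.
cvv = {"view_product": 0.1, "add_to_cart": 0.4, "checkout": 1.0}
sessions = [
    ["view_product", "view_product", "add_to_cart"],  # scores 0.6
    ["view_product"],                                  # scores 0.1
    ["view_product", "add_to_cart", "checkout"],       # scores 1.5
]
pcr = partial_conversion_rate(sessions, cvv)
```

Reweighting the `cvv` dictionary is what makes the metric flexible: assigning a high score to, say, filter usage would measure the impact of a filter-layout change without waiting for completed purchases.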
Detailed configuration of the self-adaptation mechanism allows defining rules for accepting or rejecting changes. It can also include additional checks if the results are inconclusive. In the simplest case, it can be assumed that a change is accepted if at least one of the selected indicators for the custom UI has a value that is at least X% better than the value for the standard UI (acceptance threshold X is set as a parameter value), and the values of the other indicators do not deteriorate. Similarly, a rejection rule can be defined—if at least one of the indicators for the dedicated UI has a value that is at least X% lower than the value of that indicator for the standard UI, and the values of the other indicators are not improved. In ambiguous cases, the study can be repeated in the next period, with a finite number of iterations. If subsequent iterations do not result in a decision, a decision should be made based on the predefined settings of the system—accepting or rejecting changes that ambiguously influence customer decisions.
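The acceptance and rejection rules described above can be sketched as follows. The metric names and the 5% threshold are illustrative assumptions, and a real deployment would also apply statistical significance checks before acting on the relative differences.

```python
def decide(metrics_custom: dict, metrics_standard: dict, threshold: float = 0.05) -> str:
    """Apply the simple rule from the text: accept if at least one metric
    improves by >= threshold and none deteriorates; reject symmetrically;
    otherwise defer the decision ("delay") for another iteration.
    """
    # Relative change of each indicator for the dedicated UI vs. the standard UI.
    deltas = {
        name: (metrics_custom[name] - metrics_standard[name]) / metrics_standard[name]
        for name in metrics_standard
    }
    if any(d >= threshold for d in deltas.values()) and all(d >= 0 for d in deltas.values()):
        return "accept"
    if any(d <= -threshold for d in deltas.values()) and all(d <= 0 for d in deltas.values()):
        return "reject"
    return "delay"


# Hypothetical readings: CR up 10%, AOV up 2% -> clear acceptance.
outcome = decide({"CR": 0.033, "AOV": 102.0}, {"CR": 0.030, "AOV": 100.0})
```

Mixed signals (one indicator up past the threshold, another slightly down) fall through both rules and yield "delay", which is exactly the boundary case the three-way model is introduced to handle.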
The fourth component is the design of the changes made to the UI variants. They result from the set of potential modifications that are the input to the described auto-adaptation mechanism and decision algorithm. Although the analyzed UI modifications are drawn from a common pool, the customization of each UI variant takes place independently, and the time taken to check all options may vary. This is due to the fact that feedback is collected at different speeds, depending on the frequency of use of the e-shop and the behavior of users from specific clusters.
The last component allows for serving dedicated UI variants. In adaptation mode, client clusters are split, and one part receives a standard UI while the other receives a dedicated UI. This allows the impact of the changes to be compared, evaluated, and justified.
In the mechanism’s standard operation mode, all cluster clients receive a customized, dedicated UI variant, and the system returns to adaptation mode as new potential changes are added to the analysis.

2.2. Adaptation of the 3WD Model in Decision-Making Algorithms

The proposed self-adaptation decision mechanism is based on the 3WD concept, which is rooted in decision theory and granular computing. This framework extends the traditional two-way decision model (‘accept’ vs. ‘reject’) by introducing a third option, often referred to as deferral, delay, or indecision. Such an approach is advantageous because it allows for postponing the decision to accept or reject a UI modification in order to further verify its impact on e-commerce customer behavior. Formally, the application of 3WD causes decisions to fall into three categories based on the level of certainty or risk:
  • Positive Area (P): High certainty supports acceptance of the UI change under analysis;
  • Negative Area (N): High certainty supports rejection of the analyzed UI change;
  • Boundary Area (B): Uncertainty or risk suggests deferring the decision for further analysis (e.g., re-verifying the impact of the change) or action (e.g., expert decision to reject/accept the change).
With such designations, the entire framework (labeled U—the universe of decision objects) can be described by the relations: U = P ∪ N ∪ B and P ∩ N = P ∩ B = N ∩ B = ∅. Traditional 3WD assumes that for a given object x ∈ U, a function Θ(x) represents some measure of certainty or probability. The decision-making process, in turn, is based on two thresholds (α and β), where α > β, and:
  • the assignment to area P occurs if Θ(x) ≥ α;
  • the assignment to area N occurs if Θ(x) ≤ β;
  • the assignment to area B occurs if β < Θ(x) < α.
However, this single-criterion approach is insufficient for complex UI modifications, which require evaluating multiple performance indicators simultaneously. In such cases, there are multiple measures of certainty, which can be represented as Θ(x), Φ(x), Ψ(x), etc., where each measure corresponds to a different metric that describes the impact of UI changes on e-commerce customer behavior. Similarly, instead of two thresholds (α and β), there appear 2n thresholds, where n is the number of performance indicators of the analyzed changes. The thresholds can be labeled respectively: α_Θ, β_Θ, α_Φ, β_Φ, α_Ψ, β_Ψ, etc. Assignment to areas in such a situation must take into account all decision-making criteria, and an extremely restrictive variant can be stated as follows:
  • to area P if Θ(x) ≥ α_Θ ∧ Φ(x) ≥ α_Φ ∧ Ψ(x) ≥ α_Ψ;
  • to area N if Θ(x) ≤ β_Θ ∧ Φ(x) ≤ β_Φ ∧ Ψ(x) ≤ β_Ψ;
  • to area B if β_Θ < Θ(x) < α_Θ ∧ β_Φ < Φ(x) < α_Φ ∧ β_Ψ < Ψ(x) < α_Ψ.
Less restrictive options may assume different rules for allocation to areas (e.g., exceeding thresholds for most decision factors).
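A minimal sketch of the restrictive multi-criteria assignment, assuming relative metric changes as the certainty measures. One operational choice is made here that the formal statement leaves open: any object satisfying neither the P nor the N condition is routed to the boundary area B, so the three areas always partition U.

```python
def assign_area(measures: dict, thresholds: dict) -> str:
    """Restrictive multi-criteria 3WD assignment.

    `measures` maps each metric name to its certainty value (here: the
    relative change of the metric); `thresholds` maps the same names to
    (alpha, beta) pairs with alpha > beta. Metric names are illustrative.
    """
    if all(v >= thresholds[m][0] for m, v in measures.items()):
        return "P"  # positive area: accept the UI change
    if all(v <= thresholds[m][1] for m, v in measures.items()):
        return "N"  # negative area: reject the UI change
    return "B"      # boundary area: defer for further analysis


# Hypothetical thresholds: +/-5% relative change for both indicators.
thresholds = {"CR": (0.05, -0.05), "CTR": (0.05, -0.05)}
area = assign_area({"CR": 0.08, "CTR": -0.01}, thresholds)  # mixed signals
```

In the mixed-signal example above, CR clears its α but CTR does not, so the change lands in the boundary area rather than being forced into a premature accept/reject, which is the core advantage over binary A/B testing.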
This 3WD model adaptation proposal was verified in an experimental study to evaluate its usefulness in practical applications in the process of accepting UI modifications in e-commerce.

3. Results

This section presents the empirical results of our study, conducted to directly address our research questions. We begin by detailing the preparation of the UI self-adaptation mechanism (addressing RQ1) and then present the quantitative outcomes of the two experimental iterations, which provide a direct answer to RQ2.

3.1. Preparation of the UI Self-Adaptation Mechanism

To demonstrate how the described UI self-adaptation mechanism works, a sample study was conducted in an online store operating in the clothing industry. The research was conducted using the AIM2 platform developed by Fast White Cat S.A., an official Adobe partner founded in 2012 and an experienced global e-commerce company.
The general architecture of the AIM2 platform is shown in Figure 4.
The solution was developed using the following technologies [7]: PHP 8.1+, Symfony 6.1+, API Platform 3+, and Docker.
The study consisted of several phases. The first was to collect data and group customers. Based on the previously mentioned approach [54], users were divided into 4 clusters using the K-means algorithm, based on a training data set of 670,766 user sessions. The number of clusters (k = 4) was determined to be the optimal choice by evaluating a range of clustering quality indicators, including the Silhouette Score, Calinski–Harabasz Score, and Davies–Bouldin Score, which collectively indicated the most distinct and meaningful segmentation at this value. Figure 5 shows the visualization of the clusters using the t-SNE technique [55]. The t-SNE dimensionality reduction was performed using the Scikit-learn library in Python 3.12, and the resulting two-dimensional data was plotted using Matplotlib 3.8.2 to generate the visualization.
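The visualization step can be sketched as follows, with random data standing in for the real session features and K-means labels; the Matplotlib scatter call is shown as comments since the reduction itself is the substantive part.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for the session feature matrix and the K-means cluster labels.
features = rng.normal(size=(300, 8))
labels = rng.integers(0, 4, size=300)

# Reduce to 2-D for plotting; perplexity=30 is a common default choice.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

# Plotting (requires Matplotlib), coloring points by cluster label:
# import matplotlib.pyplot as plt
# plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=5)
# plt.show()
```

Note that t-SNE distorts global distances, so the plot is a qualitative check of cluster separation rather than evidence in itself; the quantitative indices listed below carry that role.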
The choice of K-means resulted in well-balanced cluster sizes, with the smallest cluster containing almost 20% of the study population. The relatively small variation in the number of users per cluster allows for a flexible choice of the sample group: regardless of the choice made, a similar number of customers will be served the dedicated UI variant during the change verification phase. The following cluster sizes were obtained: 29,297 (Label0), 30,835 (Label1), 45,463 (Label2), and 45,299 (Label3), meaning that the smallest cluster contained 19.42% of the total user population. The cluster designated Label2 was selected for further analysis as it contained the most active users, thereby maximizing the potential for collecting statistically significant feedback.
The values of the classical indicators describing the quality of the clustering also indicate that this combination of clustering method and number of clusters is a good choice:
  • Silhouette Score = 0.278644;
  • Calinski–Harabasz Score = 33,392.592567;
  • Davies–Bouldin Score = 2.095141;
  • Entropy = 1.365417.
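As an illustration, this kind of cluster-count evaluation can be sketched with Scikit-learn. The feature matrix `X` below is a random stand-in and `score_k` is a hypothetical helper; the study's actual session features and preprocessing are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (
    silhouette_score,          # higher is better
    calinski_harabasz_score,   # higher is better
    davies_bouldin_score,      # lower is better
)

def score_k(X, k, seed=42):
    """Fit K-means for a given k and return the three clustering quality indicators."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    return {
        "k": k,
        "silhouette": silhouette_score(X, labels),
        "calinski_harabasz": calinski_harabasz_score(X, labels),
        "davies_bouldin": davies_bouldin_score(X, labels),
    }

# Stand-in for the real per-session behavioural features.
X = np.random.default_rng(0).normal(size=(200, 4))

# Evaluate a range of candidate k values; the indicators are then
# compared jointly to choose the most meaningful segmentation.
scores = [score_k(X, k) for k in range(2, 6)]
```

In the study, this kind of joint comparison of the indicators pointed to k = 4 as the most distinct segmentation.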
In the next phase, two sets of UI changes suggested by a UX expert were designed. The first set contained seven changes, including the layout of the filters, the search bar, the item description on the product card, the free shipping information, and the color scheme of the widgets.
The second set included five changes in the following areas: image gallery settings, the size and position of the price and item ID on the product card, the menu appearance (changed to a mobile-like layout), and the footer settings (Table 2).
The next phase of experimental research involved the application of the UI self-adaptation algorithm. Both sets of modifications were reviewed sequentially. If the first set of changes was accepted, the changes from the second set would be added to the changes from the first one. If the first set was rejected, the changes it contained would not be included in the second part of the study.
The experiment assumed one iteration of testing for each set of changes (without retesting in the case of an inconclusive recommendation). Implementing the proposed mechanism required additional assumptions regarding the decision thresholds. These thresholds were not chosen arbitrarily; they were defined by aligning a risk-averse business strategy with established statistical principles from A/B testing [56]:
  • The acceptance threshold was set to 10% (α_x = 10%). This threshold is conceptually linked to the Minimum Detectable Effect (MDE), which is the smallest improvement one wishes to be able to detect in an experiment with a given level of statistical significance [57,58]. In practice, setting the MDE is a crucial first step in experiment design, as it balances the desire to detect small effects with the practical constraints of traffic and time [59]. A smaller MDE requires a much larger sample size to achieve statistical significance. For this study, we set a relatively high MDE of 10% for two strategic reasons:
    As highlighted by industry experts, the MDE should reflect the “business significance” of an outcome—a change is only worth implementing if it drives a meaningful impact [57,59]. In the competitive e-commerce industry, a double-digit uplift in a macro-conversion metric is widely considered a substantial and commercially meaningful gain that justifies the investment in deploying a change. We were interested in detecting changes that provide a clear and impactful business benefit, rather than marginal improvements that might not be worth the implementation cost.
    Setting a higher MDE allows for conclusive results to be reached more quickly with a limited amount of user traffic [58]. It minimizes the risk of implementing changes based on minor, potentially random fluctuations (a false positive), ensuring that only modifications with a strong, statistically sound impact are permanently adopted. This pragmatic approach prioritizes testing velocity and confidence in the results over the detection of very small effects.
  • The rejection threshold was set to 0% (no negative change) (β_x = 0%). This reflects a conservative “do no harm” principle, which is critical in a live commercial environment [56]. While the acceptance threshold focuses on the magnitude of a positive change, the rejection threshold is designed to have maximum sensitivity to any negative impact. Any statistically significant deterioration of a key metric is considered an unacceptable outcome, as it can directly impact revenue and customer trust. The asymmetry between the acceptance threshold (a high MDE of 10%) and the rejection threshold (a near-zero tolerance for negative effects) deliberately prioritizes the avoidance of harm over the pursuit of marginal gains. This is a common and prudent risk management strategy in user experience optimization and online controlled experiments [56].
  • The default decision for inconclusive results was: reject the changes. This aligns with the risk-averse strategy, reverting to the established baseline when the benefits of a change are not clearly proven.
It is important to note that these thresholds are not fixed constants but are configurable parameters of the framework. Their optimal values can be tuned based on a specific business’s risk appetite, market conditions, and the specific goals of the optimization efforts (e.g., prioritizing user engagement over immediate revenue).
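To make the threshold logic concrete, the sketch below classifies the percentage change of a single indicator into the three 3WD regions and then aggregates per-indicator verdicts. The aggregation rule (reject on any deterioration, accept when most indicators exceed the threshold, otherwise fall back to the risk-averse default) is one plausible reading consistent with the outcomes reported in this study; all names are illustrative and not taken from the AIM2 code base:

```python
def three_way_decision(change_pct, alpha=10.0, beta=0.0):
    """Classify the percentage change of one indicator into a 3WD region."""
    if change_pct >= alpha:
        return "accept"   # positive region: improvement exceeds the MDE
    if change_pct < beta:
        return "reject"   # negative region: any deterioration is unacceptable
    return "delay"        # boundary region: inconclusive result

def decide(changes, default="reject"):
    """Aggregate per-indicator verdicts into a single recommendation."""
    verdicts = {name: three_way_decision(pct) for name, pct in changes.items()}
    if any(v == "reject" for v in verdicts.values()):
        return "reject", verdicts   # "do no harm": any deterioration rejects the set
    if sum(v == "accept" for v in verdicts.values()) > len(verdicts) / 2:
        return "accept", verdicts   # most indicators exceed the acceptance threshold
    return default, verdicts        # inconclusive: risk-averse default applies

decision, detail = decide({"CR": 13.0, "AOV": 11.0, "PCR": 1.01})
```

With the configurable thresholds noted above, `alpha` and `beta` would simply be passed through as parameters rather than hard-coded defaults.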
The decision rule used in the recommendation system was based on two macro-conversion indicators (CR and AOV) and one micro-conversion indicator (PCR). These correspond to the Θ, Φ, and Ψ measures from the formal model described in the previous section.
The PCR indicator took into account the following user actions: moving from the home page or a listing to the product card (10 points each), moving from the home page to a listing (10 points), adding a product to the cart (20 points), and moving from one product card to another (5 points). These values were assigned by the UX expert based on each action's proximity to the final purchase decision, with actions closer to the end of the conversion funnel (e.g., adding to cart) weighted more heavily than exploratory actions (e.g., viewing another product). This structured approach, while relying on expert judgment, provides a transparent and logical framework for prioritizing user actions that more strongly indicate purchase intent.
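The expert-assigned weighting can be sketched as a simple lookup over session events. The event identifiers below are illustrative placeholders, not the platform's actual event names:

```python
# Expert-assigned points per micro-conversion action (from the study);
# the event keys themselves are hypothetical identifiers.
ACTION_POINTS = {
    "home_to_product": 10,     # home page -> product card
    "listing_to_product": 10,  # listing -> product card
    "home_to_listing": 10,     # home page -> listing
    "add_to_cart": 20,         # adding a product to the cart
    "product_to_product": 5,   # product card -> another product card
}

def session_score(events):
    """Sum the points of the recognised micro-conversion actions in one session."""
    return sum(ACTION_POINTS.get(event, 0) for event in events)

# Example session: browse a listing, open a product, add it to the cart.
score = session_score(["home_to_listing", "listing_to_product", "add_to_cart"])
# 10 + 10 + 20 = 40 points for this session
```

Per-session scores of this kind can then be averaged per UI variant to form the PCR indicator compared in the decision rule.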

3.2. The First Iteration of Verifying Implemented UI Changes

During the first iteration of the analysis, information was collected on 6001 user sessions from the selected cluster, during which the dedicated UI variant was served 2983 times and the standard variant was served 3018 times. The CR, AOV, and PCR values are shown in Table 3. To determine whether the observed differences were statistically significant, we conducted hypothesis testing. For the Conversion Rate, which is a proportional metric, we used the chi-square (χ²) test of independence. For the Average Order Value and the Partial Conversion Rate, which are continuous metrics, we used the two-sample independent t-test. All tests were performed at a significance level (α) of 0.05, corresponding to a 95% confidence level. In addition to determining statistical significance with p-values, we calculated the practical significance, or effect size, of the changes. For the proportional CR metric we used the Phi (ϕ) coefficient, and for the continuous AOV and PCR metrics we used Cohen's d. This provides a standardized measure of the magnitude of the impact, independent of sample size, where an effect size of ∼0.2 is small, ∼0.5 is medium, and ∼0.8 is large.
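The testing procedure can be sketched with SciPy as follows. The counts and samples below are synthetic illustrations, not the study's data, and the t-test shown is the Welch (unequal-variance) variant of the two-sample test:

```python
import numpy as np
from scipy import stats

def cr_test(conv_a, n_a, conv_b, n_b):
    """Chi-square test of independence on conversion counts, with phi effect size."""
    table = np.array([[conv_a, n_a - conv_a], [conv_b, n_b - conv_b]])
    chi2, p, _, _ = stats.chi2_contingency(table, correction=False)
    phi = float(np.sqrt(chi2 / table.sum()))  # effect size for a 2x2 table
    return p, phi

def continuous_test(a, b):
    """Two-sample (Welch) t-test for continuous metrics, with Cohen's d."""
    _, p = stats.ttest_ind(a, b, equal_var=False)
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    d = float((np.mean(a) - np.mean(b)) / pooled_sd)
    return p, d

# Synthetic example: 300/3000 vs 240/3000 conversions, and two AOV samples.
rng = np.random.default_rng(7)
p_cr, phi = cr_test(300, 3000, 240, 3000)
p_aov, d = continuous_test(rng.normal(110, 30, 300), rng.normal(100, 30, 300))
```

Each p-value is then compared against the 0.05 significance level, and the effect size is read against the small/medium/large benchmarks noted above.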
The percentage change was calculated using the following formula:
CH_X = ((X_ded − X_std) / X_std) × 100%
where CH_X is the percentage change in the value of indicator X, X_ded is the value of indicator X for the dedicated interface, and X_std is the value of indicator X for the standard interface.
The “++” symbol in the last line indicates that the value exceeds the assumed positive threshold, and the “+” symbol indicates a higher indicator value for the dedicated UI, but below the assumed threshold. A “−” symbol indicates a worse indicator value for the dedicated UI variant, but one not exceeding the assumed rejection threshold.
Analysis of the results allowed the recommendation system to accept the first set of changes. For two indicators (CR and AOV), the values for the dedicated interface were significantly higher and exceeded the threshold (Figure 6). The statistical analysis confirms this conclusion, with p-values well below 0.05, indicating that these improvements are highly unlikely to be due to random chance. Furthermore, the calculated effect sizes quantify the practical impact of these changes: the large effect size for CR (ϕ = 0.82) and the medium effect size for AOV (d = 0.55) confirm that the improvements were not just statistically significant but also practically meaningful. The values obtained for the third indicator (PCR) were similar for both UI variants, with a minimal advantage for the dedicated interface. This is reflected in the high p-value (p = 0.34), which shows that the 1.01% difference is not statistically significant, and in the trivial effect size (d = 0.04), which confirms the lack of any real impact of the changes on this specific metric. On this basis, the changes were confirmed and incorporated into the new standard UI in the next iteration of the study.

3.3. The Second Iteration of Verifying Implemented UI Changes

In the second iteration of the research, 6175 customer sessions were identified from the analyzed cluster. During this period, the dedicated UI variant was served 3057 times, and the standard variant (taking into account the modifications from the first iteration) was served 3118 times. The indicator values obtained in this part of the study are presented in Table 4.
This time, the results were inconclusive. For no indicator did the difference between the dedicated and standard interfaces exceed the threshold (Figure 7). The statistical tests for all three metrics yielded p-values greater than the 0.05 significance level, confirming that none of the observed changes (positive or negative) were statistically significant. Furthermore, the calculated effect sizes were trivial to small for all metrics: CR showed a small positive effect (ϕ = 0.15), AOV a small negative effect (d = −0.19), and PCR a trivial effect (d = −0.03). This combination of non-significant results and small effect sizes provides strong evidence that the second set of UI modifications had no meaningful impact, either positive or negative, on customer behavior. Two indicators showed a deterioration in the quality of the dedicated interface, and one indicated an improvement. In this situation, the recommendation system rejected the set of changes according to the decision rule and the additional assumptions made. This outcome was technically correct based on the predefined rules, but it highlighted a key practical challenge: the system's sensitivity to marginal, statistically insignificant changes, an issue explored further in the discussion.

4. Discussion

The experimental results provide significant insights into the practical application of the 3WD model for UI adaptation. In this section, we synthesize these findings to answer our research questions more broadly. We discuss the implications of our results for RQ2, focusing on the observed impact on e-commerce metrics, and then address RQ3 by analyzing the challenges and limitations revealed during the study, proposing avenues for future research that will be critical for maturing this technology.
The presented algorithm for adapting UI modifications to e-commerce customer behavior, based on the 3WD model, is a flexible tool that can be developed in multiple directions. Figure 8 illustrates the algorithm's real-world application, where it correctly processed both a clear positive outcome and a complex, inconclusive one. The first iteration resulted in acceptance, while the second led to a rejection. This demonstrates the framework's primary strength: its ability to provide a structured, automated response not just to clear wins, but also to ambiguous results that would typically require manual analysis and debate in traditional A/B testing. This formal handling of uncertainty is what makes the tool uniquely robust for real-world e-commerce environments, where inconclusive outcomes are common.
The presented experimental results show the general principle of its operation, but contain simplifications that warrant deeper discussion. Primarily, the practice of grouping multiple UI changes into a single test, while expedient, creates analytical challenges. Our approach successfully assesses the aggregate impact, but it cannot deconstruct the outcome to identify the specific drivers. This “bundling” risks a valuable modification being rejected because it was packaged with a poor one, or vice-versa. This is a critical trade-off in optimization—while testing every atomic change is often infeasible due to the statistical confidence required, large bundles obscure causality. Therefore, a key takeaway for practitioners is the need for a strategic methodology for bundling—grouping changes thematically (e.g., all related to the checkout process) or by anticipated impact—to find a practical balance between testing velocity and analytical clarity.
A key practical insight from this study is the need for more sophisticated handling of marginal outcomes. This issue became evident in the first iteration: while the values for two indicators were significantly higher for the dedicated interface, the PCR indicator's +1.01% improvement was positive but marginal. Fortunately, in this case, the dedicated variant was slightly better, so the set of modifications was accepted. However, if the standard interface had been slightly better, the algorithm would have considered such modifications as having an indeterminate effect. We therefore propose introducing a "dead zone", or indifference margin (e.g., ±2% or ±X/2%), around a 0% change. Results falling within this range would be formally classified as neutral, preventing statistically insignificant noise from triggering a rejection or complicating an acceptance decision. This is not merely a technical tweak; it is a crucial enhancement to the 3WD model's practical application. It acknowledges the stochastic nature of user behavior and prevents the system from overreacting to randomness. Implementing this margin would make the framework more robust and efficient, reducing the need for manual review of inconclusive tests and better aligning the automated decisions with real-world business logic.
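The proposed indifference margin could be implemented as a small extension of the three-way classification rule. The ±2% default and the function name below are illustrative choices, not part of the evaluated system:

```python
def three_way_with_dead_zone(change_pct, alpha=10.0, margin=2.0):
    """3WD classification with an indifference margin ("dead zone") around 0%."""
    if change_pct >= alpha:
        return "accept"    # improvement exceeds the acceptance threshold
    if abs(change_pct) <= margin:
        return "neutral"   # within the dead zone: treat as statistical noise
    if change_pct < 0:
        return "reject"    # deterioration beyond the margin
    return "delay"         # positive but below the acceptance threshold
```

Under this rule, the +1.01% PCR result from the first iteration would be classified as neutral rather than counted as evidence in either direction.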
Another key consideration is the methodology for weighting the Partial Conversion Rate indicator. In this study, the weights were assigned by a UX expert based on established conversion funnel principles. We acknowledge this introduces an element of subjectivity, which is a limitation of the current implementation. This expert-led approach, while practical, represents a potential point of failure or bias. A more advanced and objective system would move towards data-driven weighting. For instance, the weight for each user action could be dynamically calculated based on the historical, empirical probability that a user performing that action will ultimately complete a purchase. Such a model would transition the PCR from an expert-guided metric to a more objective, empirically validated indicator of user engagement and purchase intent. This shift from human-heuristic to machine-derived weights is a critical step towards creating a truly autonomous and intelligent optimization system, directly addressing the limitations identified in RQ3 and setting a clear agenda for future research.
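Such data-driven weighting could be derived as sketched below, where each action's weight is the historical probability that a session containing it ended in a purchase. The data structures, names, and toy history are illustrative assumptions:

```python
from collections import Counter

def empirical_weights(sessions):
    """sessions: iterable of (events, purchased) pairs -> per-action weights.

    Each weight is the fraction of historical sessions containing that
    action that ultimately ended in a purchase.
    """
    seen, purchased = Counter(), Counter()
    for events, bought in sessions:
        for action in set(events):   # count each action once per session
            seen[action] += 1
            if bought:
                purchased[action] += 1
    return {action: purchased[action] / seen[action] for action in seen}

# Toy history of (session events, purchase completed) pairs.
history = [
    (["add_to_cart", "home_to_listing"], True),
    (["add_to_cart"], True),
    (["home_to_listing"], False),
    (["product_to_product"], False),
]
weights = empirical_weights(history)
# add_to_cart -> 1.0, home_to_listing -> 0.5, product_to_product -> 0.0
```

These empirical weights would replace the expert-assigned point values in the PCR calculation, and could be recomputed periodically as new session data accumulates.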
A related direction for future research is the justification and optimization of the decision thresholds (α and β) themselves. Our choice of α = 10% and β = 0% was based on a sound, risk-averse business strategy common in e-commerce. However, these static values may not be optimal for all contexts. A more advanced implementation of our framework would involve a process for tuning these thresholds. For instance, a business could perform a sensitivity analysis using historical experiment data to model the impact of different threshold levels on long-term growth and risk exposure. Furthermore, one could develop an adaptive threshold mechanism, where the values of α and β are dynamically adjusted based on factors like the maturity of the website, business seasonality, or the company's shifting tolerance for risk. This would transform the thresholds from static parameters into a dynamic component of a larger optimization strategy, representing a significant step towards a more autonomous and intelligent system.
Beyond the technical implementation, it is crucial to consider the ethical dimensions of automated personalization. While the goal of this framework is to improve user experience, any system that tracks user behavior carries a responsibility to do so ethically. The core ethical challenge lies in balancing the benefits of a personalized interface with the user’s right to privacy and autonomy. Transparency is paramount: users should be clearly informed that their experience is being dynamically adapted and have control over their data. Furthermore, personalization algorithms must be designed to avoid manipulative practices (so-called “dark patterns”) that exploit cognitive biases to drive conversions against the user’s best interests. The framework presented here is intended as a tool to serve users with more relevant content, not to coerce them. Future research should continue to explore methods for building “explainable AI” into such systems, allowing for greater transparency and user trust.
While the study effectively demonstrates the proposed framework's viability, it is important to acknowledge its limitations, which in turn suggest directions for future research. A key limitation is that the experiment was conducted in a single online store within the clothing industry. This context specificity restricts the generalizability of our findings. Therefore, a critical next step is to validate the framework across a diverse range of e-commerce sectors. Future studies should test the model in domains such as electronics, groceries, or digital services, which are characterized by different customer decision-making processes, purchase cycles, and user interface requirements. Such multi-domain validation would be essential to confirm the robustness of the 3WD model and to refine its application for broader practical use.
The second key limitation of our study is its cross-sectional nature, which captures only the immediate impact of UI modifications. This approach cannot account for long-term user experience phenomena such as the novelty effect, where an initial performance uplift may be due to the change itself rather than its intrinsic quality, or adaptation fatigue, where users may become disoriented or frustrated by a constantly evolving interface. To address this, future research must include a longitudinal analysis. Such a study would involve tracking user cohorts over an extended period (e.g., several months) and focusing on long-term engagement and retention metrics, such as repeat purchase rate, customer lifetime value, and churn rate. This would provide crucial insights into the sustainable impact of the adaptive framework on customer loyalty and trust.
Future research could also move towards a more autonomous system by integrating machine learning models, such as reinforcement learning, to not only evaluate but also generate UI modification proposals, thereby reducing the reliance on human experts. Finally, incorporating qualitative user feedback through surveys or interviews could provide deeper insights into user satisfaction and trust, complementing the quantitative behavioral metrics used in this study.
It is also worth noting the importance of the threshold used in the decision algorithm, as it determines the sensitivity of the self-adaptation mechanism. In the studies described, a threshold of α = 10% was used. Raising the threshold would make modifications harder to accept, resulting in a slower but safer adaptation of the UI variant to the customer group. Lowering the threshold, on the other hand, would make decisions easier and faster, but would increase the risk of a false-positive conclusion. Optimizing the threshold is clearly a challenge that should be addressed in further research. This issue is related to the timing of feedback collection. If the e-shop's customers return frequently, providing an opportunity to analyze their behavior with a modified UI variant, a higher threshold can be considered. However, if there are few returning users, a lower threshold may be necessary due to the timing of the study.

5. Conclusions

This paper has presented and empirically validated a self-adaptive framework for e-commerce user interfaces, marking a significant advancement over traditional optimization methods. Our primary contribution is the novel adaptation of the three-way decision model to the domain of UI personalization. By moving beyond the rigid, binary paradigm of conventional A/B testing, we have furnished a more nuanced “accept-reject-delay” mechanism that is inherently better suited to the complexities of user behavior. This methodological shift addresses a key gap in the A/B testing literature, which often lacks a formal process for handling the ambiguous or statistically insignificant outcomes frequently encountered in practice.
Our work also extends the broader e-commerce personalization literature. While much prior research has focused on content personalization through methods like collaborative and content-based filtering for product recommendations, our framework addresses the relatively underexplored challenge of adapting the UI layout and interaction design itself. By doing so, we move from personalizing what a user sees to how they interact with the entire platform.
In addressing our research questions, we have demonstrated both the theoretical soundness and practical utility of this approach. We successfully designed and implemented the 3WD framework (RQ1), showing that it can be effectively operationalized to manage UI modifications. Our experimental results confirmed that this system can drive statistically and practically significant improvements in key performance metrics (RQ2), leading to enhanced conversion rates and average order values. Finally, through a critical discussion of the study’s challenges and the need to optimize its decision parameters, we have outlined a clear path for future refinement and development (RQ3).
The implications of this work are twofold. For practitioners, our framework offers a tangible pathway to automate the delivery of personalized e-commerce experiences, fostering stronger customer relationships and sustainable growth. For the academic community, our study serves as a proof-of-concept, demonstrating that decision-making models from granular computing can be successfully applied to solve complex problems in Human–Computer Interaction and opening new directions for research in intelligent, adaptive systems.
By integrating machine learning algorithms, user data analysis, and responsive design, e-commerce platforms can tailor their interfaces to individual users. This approach enhances user satisfaction and positively impacts conversion rates and overall business success. As the crucial bridge between business and consumer, the user interface’s ability to adapt to customer behaviors and preferences is paramount. Currently, however, most UI personalization in e-commerce is limited to product recommendation engines. Our research demonstrates a viable path forward, showing that deeper, layout-level adaptation is not only possible but can be managed through a robust, data-driven, and automated framework.
To meet evolving market dynamics, the exploration of deeper, specialized interactions is necessary. The multi-variant user interface is one such innovative solution, offering customers layout options tailored to the needs of different user demographics instead of a single, static design. While its implementation is a complex undertaking—requiring consideration of specific e-commerce behaviors, customized data processing, and new mechanisms for delivering UI modifications—the potential economic and marketing benefits justify the investment.
The system framework presented in this paper facilitates the deployment of a multi-variant e-commerce user interface. It includes the ability to independently modify specific variants, going beyond the commonly used A/B testing. The operation of the self-adaptation feature is elucidated through an empirical study. Its assumptions were tested in practice and the conclusions were used to make improvements. In addition, the paper proposes a strategy for the further development of this methodology to improve the fine-tuning of user interface variants through the application of ML algorithms.
Our approach successfully embeds the theoretical principles of the three-way decision model into a practical, real-world business context. The research conducted confirmed its effectiveness and identified potential limitations and directions for development. Future work should focus on validating this model across diverse e-commerce domains, exploring the long-term effects on user experience, and developing more autonomous ML-driven systems for generating UI modifications. The constant evolution of technology and the ever-changing landscape of user expectations underscore the importance of ongoing research and enhancement in this area. As e-commerce UI self-adaptation becomes more sophisticated, it opens the door to a future where online shopping experiences are intuitive, efficient, and enjoyable for users. By prioritizing the improvement of the user experience through adaptive interfaces, companies can build stronger customer relationships, foster brand loyalty, and position themselves for sustainable growth in the dynamic world of online commerce.

Author Contributions

A.W.: Conceptualization, Methodology, Formal analysis, Investigation, Writing—Original Draft; J.S.: Validation, Writing—Original Draft, Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

The experimental research was carried out under the project “Self-adaptation of the online store interface for the customer requirements and behaviour” co-funded by the National Centre for Research and Development under the Sub-Action 1.1.1 of the Operational Programme Intelligent Development 2014–2020.

Data Availability Statement

The datasets used in the study described in the paper are available for download from https://doi.org/10.7910/DVN/9HSDTA.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Durand-Hayes, S.; Gooding, M.; Crane, B.; Roesch, H.; Pedersen, K. Decision Points: Sharpening the Pre-Purchase Consumer Experience; Technical Report; PricewaterhouseCoopers (PWC): London, UK, 2023. [Google Scholar]
  2. Arora, N.; Liu, W.W.; Robinson, K.; Stein, E.; Ensslen, D.; Fiedler, L.; Schuler, G. The Value of Getting Personalization Right—Or Wrong—Is Multiplying; Technical Report; McKinsey: New York, NY, USA, 2021. [Google Scholar]
  3. Barthel, M.; Faraldi, A.; Robnett, S.; Darpö, O.; Lellouche Tordjman, K.; Derow, R.; Ernst, C. Winning Formulas for E-Commerce Growth; Technical Report; Boston Consulting Group (BCG): Boston, MA, USA, 2023. [Google Scholar]
  4. Gunawan, R.; Anthony, G.; Vendly; Anggreainy, M.S. The Effect of Design User Interface (UI) E-Commerce on User Experience (UX). In Proceedings of the 2021 6th International Conference on New Media Studies (CONMEDIA), Virtually, 12–13 October 2021; pp. 95–98. [Google Scholar] [CrossRef]
  5. Guzzo, T.; Ferri, F.; Grifoni, P. A model of e-commerce adoption (MOCA): Consumer’s perceptions and behaviours. Behav. Inf. Technol. 2016, 35, 196–209. [Google Scholar] [CrossRef]
  6. Rahutomo, R.; Lie, Y.; Perbangsa, A.S.; Pardamean, B. Improving Conversion Rates for Fashion e-Commerce with A/B Testing. In Proceedings of the 2020 International Conference on Information Management and Technology (ICIMTech), Virtually, 13 August 2020; pp. 266–270. [Google Scholar] [CrossRef]
  7. Wasilewski, A. Multi-Variant User Interfaces in E-Commerce. A Practical Approach to UI Personalization; Springer: Berlin/Heidelberg, Germany, 2024. [Google Scholar]
  8. Koukouvis, K.; Cubero, R.; Pelliccione, P. A/B Testing in E-Commerce Sales Processes; Springer: Cham, Switzerland, 2016; pp. 133–148. [Google Scholar] [CrossRef]
  9. Chancellor, S. Toward Practices for Human-Centered Machine Learning. Commun. ACM 2023, 66, 78–85. [Google Scholar] [CrossRef]
  10. Jalil, N. Introduction to Intelligent User Interfaces (IUIs); IntechOpen: London, UK, 2021. [Google Scholar] [CrossRef]
  11. Miraz, M.H.; Ali, M.; Excell, P.S. Adaptive user interfaces and universal usability through plasticity of user interface design. Comput. Sci. Rev. 2021, 40, 100363. [Google Scholar] [CrossRef]
  12. Browne, D. Adaptive User Interfaces; Academic Press: London, UK, 2016. [Google Scholar]
  13. Lavie, T.; Meyer, J. Benefits and costs of adaptive user interfaces. Int. J. Hum.-Comput. Stud. 2010, 68, 508–524. [Google Scholar] [CrossRef]
  14. Smereka, M.; Kołaczek, G.; Sobecki, J.; Wasilewski, A. Adaptive user interface for workflow-ERP system. Procedia Comput. Sci. 2023, 225, 2381–2391. [Google Scholar] [CrossRef]
  15. Gao, Y.; Liu, H. Artificial intelligence-enabled personalization in interactive marketing: A customer journey perspective. J. Res. Interact. Mark. 2022, 17, 663–680. [Google Scholar] [CrossRef]
  16. Höök, K. Evaluating the utility and usability of an adaptive hypermedia system. In Proceedings of the 2nd International Conference on Intelligent User Interfaces, Orlando, FL, USA, 6–9 January 1997; pp. 179–186. [Google Scholar]
  17. Chandra, S.; Verma, S.; Lim, W.M.; Kumar, S.; Donthu, N. Personalization in personalized marketing: Trends and ways forward. Psychol. Mark. 2022, 39, 1529–1562. [Google Scholar] [CrossRef]
  18. Hussain, J.; Hassan, A.U.; Bilal, H.; Ali, R.; Afzal, M.; Hussain, S.; Bang, J.; Banos, O.; Lee, S. Model-based adaptive user interface based on context and user experience evaluation. J. Multimodal User Interfaces 2018, 12, 1–16. [Google Scholar] [CrossRef]
  19. Sutcliffe, A. Designing for User Engagement: Aesthetic and Attractive User Interfaces; Springer: Cham, Switzerland, 2022. [Google Scholar]
  20. Bhatia Khan, S.; Chandna, S. Chapter 1—Introduction to human-computer interaction using artificial intelligence. In Innovations in Artificial Intelligence and Human-Computer Interaction in the Digital Era; Bhatia Khan, S., Namasudra, S., Chandna, S., Mashat, A., Xhafa, F., Eds.; Intelligent Data-Centric Systems; Academic Press: Cambridge, MA, USA, 2023; pp. 1–6. [Google Scholar] [CrossRef]
  21. Langley, P. Machine learning for adaptive user interfaces. In Proceedings of the Annual Conference on Artificial Intelligence, Freiburg, Germany, 9–12 September 1997; Springer: Heidelberg, Germany, 1997; pp. 53–62. [Google Scholar]
  22. Monarch, R.M. Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
  23. Li, N.; Zhang, M.; Li, J.; Kang, E.; Tei, K. Preference Adaptation: User satisfaction is all you need! In Proceedings of the 2023 IEEE/ACM 18th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), Melbourne, Australia, 15–16 May 2023; pp. 133–144. [Google Scholar] [CrossRef]
  24. Yigitbas, E.; Karakaya, K.; Jovanovikj, I.; Engels, G. Enhancing Human-in-the-Loop Adaptive Systems through Digital Twins and VR Interfaces. In Proceedings of the 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), SEAMS ’21, Madrid, Spain, 23–24 May 2021; pp. 30–40. [Google Scholar] [CrossRef]
  25. Capel, T.; Brereton, M. What is Human-Centered about Human-Centered AI? A Map of the Research Landscape. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, Hamburg, Germany, 23–28 April 2023; Association for Computing Machinery: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  26. Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. Int. J. Hum.-Comput. Interact. 2020, 36, 495–504. [Google Scholar]
  27. Adolphs, C.; Winkelmann, A. Personalization research in e-commerce—A state of the art review (2000–2008). J. Electron. Commer. Res. 2010, 11, 326. [Google Scholar]
  28. Rathnayake, N.; Meedeniya, D.; Perera, I.; Welivita, A. A Framework for Adaptive User Interface Generation based on User Behavioural Patterns. In Proceedings of the 2019 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, 3–5 July 2019; pp. 698–703. [Google Scholar] [CrossRef]
  29. Johnston, V.; Black, M.; Wallace, J.; Mulvenna, M.; Bond, R. A framework for the development of a dynamic adaptive intelligent user interface to enhance the user experience. In Proceedings of the 31st European Conference on Cognitive Ergonomics, Belfast, UK, 10–13 September 2019; pp. 32–35. [Google Scholar]
  30. Singh, A.; Kumar, A.; Gupta, R.; Gupta, S. Utilizing Machine Learning Methods for Customer Segmentation Analysis. Lect. Notes Netw. Syst. 2025, 1417, 311–329. [Google Scholar] [CrossRef]
  31. Wang, K.; Zhang, T.; Xue, T.; Lu, Y.; Na, S.G. E-commerce personalized recommendation analysis by deeply-learned clustering. J. Vis. Commun. Image Represent. 2020, 71, 102735. [Google Scholar] [CrossRef]
  32. Gheibi, O.; Weyns, D. Lifelong Self-Adaptation: Self-Adaptation Meets Lifelong Machine Learning. In Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS ’22, Pittsburgh, PA, USA, 18–23 May 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–12. [Google Scholar] [CrossRef]
  33. Wasilewski, A.; Wasilewska, B. Data-Driven E-Commerce UI Personalization: Going Beyond Product Recommendations. Int. J. Hum.-Comput. Interact. 2025, 1–24. [Google Scholar] [CrossRef]
  34. Alves Gomes, M.; Meisen, T. A review on customer segmentation methods for personalized customer targeting in e-commerce use cases. Inf. Syst. e-Bus. Manag. 2023, 21, 527–570. [Google Scholar] [CrossRef]
  35. Osypanka, P.; Nawrocki, P. Resource usage cost optimization in cloud computing using machine learning. IEEE Trans. Cloud Comput. 2020, 10, 2079–2089. [Google Scholar] [CrossRef]
  36. Liao, M.; Sundar, S.S. When E-Commerce Personalization Systems Show and Tell: Investigating the Relative Persuasive Appeal of Content-Based versus Collaborative Filtering. J. Advert. 2022, 51, 256–267. [Google Scholar] [CrossRef]
  37. Nasir, M.; Ezeife, C. Semantic enhanced Markov model for sequential E-commerce product recommendation. Int. J. Data Sci. Anal. 2023, 15, 67–91. [Google Scholar] [CrossRef]
  38. Alamdari, P.M.; Navimipour, N.J.; Hosseinzadeh, M.; Safaei, A.A.; Darwesh, A. An image-based product recommendation for E-commerce applications using convolutional neural networks. Acta Inform. Pragensia 2022, 11, 15–35. [Google Scholar] [CrossRef]
  39. Wattimena, F.Y.; Rofi’i, Y.U. E-Commerce Product Recommendation System Using Case-Based Reasoning (CBR) and K-Means Clustering. Int. J. Softw. Eng. Comput. Sci. (IJSECS) 2023, 3, 162–173. [Google Scholar]
  40. Cao, J. E-Commerce Big Data Mining and Analytics; Springer Nature: Berlin, Germany, 2023. [Google Scholar]
  41. Kim, J.; Lee, J. Critical design factors for successful e-commerce systems. Behav. Inf. Technol. 2002, 21, 185–199. [Google Scholar] [CrossRef]
  42. Deuschel, T.; Scully, T. On the Importance of Spatial Perception for the Design of Adaptive User Interfaces. In Proceedings of the 2016 IEEE 10th International Conference on Self-Adaptive and Self-Organizing Systems (SASO), Augsburg, Germany, 12–16 September 2016; pp. 70–79. [Google Scholar] [CrossRef]
  43. Aslam, W.; Hussain, A.; Farhat, K.; Arif, I. Underlying factors influencing consumers’ trust and loyalty in E-commerce. Bus. Perspect. Res. 2020, 8, 186–204. [Google Scholar] [CrossRef]
  44. Bielozorov, A.; Bezbradica, M.; Helfert, M. The Role of User Emotions for Content Personalization in e-Commerce: Literature Review. In Proceedings of the HCI in Business, Government and Organizations, eCommerce and Consumer Behavior, Orlando, FL, USA, 26–31 July 2019; Nah, F.F.H., Siau, K., Eds.; Springer: Cham, Switzerland, 2019; pp. 177–193. [Google Scholar]
  45. Sadeghian, A.H.; Otarkhani, A. Data-driven digital nudging: A systematic literature review and future agenda. Behav. Inf. Technol. 2023, 43, 3834–3862. [Google Scholar] [CrossRef]
  46. Leach, P.J.; Salz, R.; Mealling, M.H. A Universally Unique IDentifier (UUID) URN Namespace; DataPower Technology, Inc.: Cambridge, MA, USA, 2005; RFC 4122. [Google Scholar] [CrossRef]
  47. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
  48. Calinski, T.; Harabasz, J. A Dendrite Method for Cluster Analysis. Commun. Stat.-Theory Methods 1974, 3, 1–27. [Google Scholar] [CrossRef]
  49. Dunn, J.C. A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters. J. Cybern. 1973, 3, 32–57. [Google Scholar] [CrossRef]
  50. Davies, D.L.; Bouldin, D.W. A Cluster Separation Measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227. [Google Scholar] [CrossRef]
  51. Shi, C.; Wei, B.; Wei, S.; Wang, W.; Liu, H.; Liu, J. A quantitative discriminant method of elbow point for the optimal number of clusters in clustering algorithm. Eurasip J. Wirel. Commun. Netw. 2021, 2021, 31. [Google Scholar] [CrossRef]
  52. Dopson, E. Ecommerce Customer Retention Marketing: How to Use Emails, Loyalty Programs & Communities to Improve Retention. Available online: https://www.shopify.com/blog/customer-retention-program (accessed on 15 November 2025).
  53. Saleh, K. The Average Website Conversion Rate by Industry (Updated 2023); Technical Report; Invesp: Chicago, IL, USA, 2023. [Google Scholar]
  54. Pawełek-Lubera, E.; Przyborowski, M.; Ślęzak, D.; Wasilewski, A. Multi-criteria selection of data clustering methods for e-commerce personalization. Appl. Soft Comput. 2025, 182, 113559. [Google Scholar] [CrossRef]
  55. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  56. Kohavi, R.; Tang, D.; Xu, Y. Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar] [CrossRef]
  57. How to Wrap Your Head Around Minimum Detectable Effect (MDE). 2024. Available online: https://www.convert.com/blog/a-b-testing/minimum-detectable-effect-mde-ab-testing/ (accessed on 5 January 2026).
  58. Understanding Minimum Detectable Effect (MDE). 2025. Available online: https://help.vwo.com/hc/en-us/articles/36876638315929-Understanding-Minimum-Detectable-Effect-MDE (accessed on 5 January 2026).
  59. MDE in A/B Testing: Setting Realistic Expectations for Your Experiments. 2025. Available online: https://www.statsig.com/perspectives/mde-ab-testing-expectations (accessed on 5 January 2026).
Figure 1. General scheme of MultiUI service. Source: [7].
Figure 2. Example of behavioral and contextual data collected. Source: Own elaboration, based on the Matomo tool (https://matomo.org/, accessed on 15 November 2025).
Figure 3. Decision algorithm for self-adaptation. Source: Own elaboration.
Figure 4. The general architecture of the AIM² platform. Source: [7].
Figure 5. Visualization of the clusters from the study. Source: Own elaboration, based on data from the AIM² platform.
Figure 6. Results of the first iteration of the study. Source: Own elaboration.
Figure 7. Results of the second iteration of the study. Source: Own elaboration.
Figure 8. Decision flows in iterations of the study. Source: Own elaboration.
Table 1. E-commerce efficiency metrics.

| Metric | Type | Description |
|---|---|---|
| CR | macro conversion | the percentage of website visitors who take a desired action (e.g., purchase) |
| AOV | macro conversion | the average amount of money a customer spends on a single order |
| CTR | micro conversion | the ratio of clicks on a specific link or ad to the number of times it was displayed |
| PCR | micro conversion | the degree to which the actual customer journey matches the journey expected by the e-store owner |

Source: Adapted from [7].
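The macro- and micro-conversion metrics in Table 1 can be computed directly from session-level data. The following is a minimal sketch under an assumed session schema — the `Session` fields and the example log are illustrative, not the study's actual data model. PCR, which compares observed customer journeys with the journey expected by the e-store owner, is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class Session:
    purchased: bool        # did the session end in an order?
    order_value: float     # 0.0 when no purchase occurred
    ad_impressions: int    # times a tracked link/ad was shown
    ad_clicks: int         # clicks on that link/ad

def conversion_rate(sessions):
    """CR: share of sessions that end in a purchase."""
    return sum(s.purchased for s in sessions) / len(sessions)

def average_order_value(sessions):
    """AOV: mean value of completed orders only."""
    orders = [s.order_value for s in sessions if s.purchased]
    return sum(orders) / len(orders)

def click_through_rate(sessions):
    """CTR: total clicks divided by total impressions."""
    clicks = sum(s.ad_clicks for s in sessions)
    impressions = sum(s.ad_impressions for s in sessions)
    return clicks / impressions

log = [
    Session(True, 50.0, 10, 1),
    Session(False, 0.0, 8, 0),
    Session(True, 30.0, 12, 2),
    Session(False, 0.0, 10, 1),
]
print(conversion_rate(log))      # 0.5
print(average_order_value(log))  # 40.0
print(click_through_rate(log))   # 0.1
```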
Table 2. Sets of microchanges.

| Set 1 | Set 2 | Description [7] |
|---|---|---|
|  |  | a popup appears displaying a login form and a registration link |
|  |  | free delivery information is displayed in the left column on a grey background; once the free delivery threshold is reached, the background changes to green |
|  |  | by default, only the first filter is expanded, with all other filters collapsed |
|  |  | the broken filter view displays categories in the left column, while other filters are positioned above the product content; updates include keeping the category filter in its classic left-column layout, converting filters in the header to dropdown menus (collapsing into an additional window upon selection), and repainting active filters |
|  |  | the input field is shown, and the magnifying glass icon is enlarged to 25 × 25 px |
|  |  | tabs on the product card are located on the left side, beneath the galleries, regardless of gallery layout |
|  |  | buttons above products in category listings are red |
|  |  | rating information is positioned above the product name on both desktop and mobile views, with enlarged star icons |
|  |  | an inline “Add to Wishlist” icon is displayed next to the “Add to Cart” button in the mobile version, with the icon slightly enlarged |
|  |  | the footer remains fully expanded on desktop, while on mobile, additional arrows are added in an accordion format |
|  |  | product SKU is displayed above the product name |
|  |  | the product price is enlarged |

Source: [7].
Table 3. Results of iteration 1.

| Interface | CR | AOV | PCR |
|---|---|---|---|
| Dedicated | 3.49% | 50.71 | 39.31 |
| Standard | 2.85% | 45.05 | 38.74 |
| Change | +22.56% | +12.56% | +1.01% |
| Decision | ++ | ++ | + |
| p-value | <0.01 | <0.01 | 0.34 |
| Effect Size (φ/d) | 0.82 (large) | 0.55 (medium) | 0.04 (trivial) |

p-values < 0.05 are considered statistically significant. Source: Own elaboration.
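The per-metric decisions reported in Tables 3 and 4 follow the accept/reject/delay pattern of the three-way decision model. The snippet below is one possible formalization based on the direction of the change and its statistical significance; the threshold `alpha = 0.05` is an illustrative assumption, not necessarily the exact policy applied in the study.

```python
def three_way_decision(change_pct: float, p_value: float,
                       alpha: float = 0.05) -> str:
    """Return 'accept', 'reject', or 'delay' for one UI-variant metric."""
    if p_value < alpha:
        # Evidence is strong enough for a definite decision.
        return "accept" if change_pct > 0 else "reject"
    # Inconclusive evidence: defer and keep collecting data.
    return "delay"

# Iteration 1, CR: +22.56% at p < 0.01 -> accept
print(three_way_decision(22.56, 0.005))  # accept
# Iteration 2, AOV: -7.74% at p = 0.08 -> delay under this sketch
print(three_way_decision(-7.74, 0.08))   # delay
```

Note that in iteration 2 the study reported definite decisions even at p ≥ 0.05, so the actual rule evidently also weighs the sign and size of the change when significance is not reached.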
Table 4. Results of iteration 2.

| Interface | CR | AOV | PCR |
|---|---|---|---|
| Dedicated | 4.45% | 42.58 | 53.08 |
| Standard | 4.17% | 46.15 | 53.68 |
| Change | +6.71% | −7.74% | −1.12% |
| Decision | + | − | − |
| p-value | 0.18 | 0.08 | 0.41 |
| Effect Size (φ/d) | 0.15 (small) | −0.19 (small) | −0.03 (trivial) |

p-values < 0.05 are considered statistically significant. Source: Own elaboration.

Share and Cite

MDPI and ACS Style

Wasilewski, A.; Sobecki, J. Machine Learning-Based Three-Way Decision Model for E-Commerce Adaptive User Interfaces. Mach. Learn. Knowl. Extr. 2026, 8, 20. https://doi.org/10.3390/make8010020

