Article

Research on the Optimization of A/B Testing System Based on Dynamic Strategy Distribution

School of Computer Science and Engineering, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Processes 2023, 11(3), 912; https://doi.org/10.3390/pr11030912
Submission received: 4 January 2023 / Revised: 2 March 2023 / Accepted: 15 March 2023 / Published: 17 March 2023

Abstract

With the development of society, users demand an increasingly high-quality product experience, and the pursuit of a high profit conversion rate places ever higher demands on product details in a competitive market. Product providers need to iterate products quickly and with high quality to enhance user stickiness and activity, thereby improving the profit conversion rate efficiently. A/B testing is a technical method that conducts experiments on target users with different iteration strategies and observes, through log embedding and statistical analysis, which strategy performs better. Usually, the different businesses of a company are supported by different business systems, and the A/B tests of these systems need to be operated in a unified manner. At present, most A/B testing systems cannot serve more than one business system at the same time, and they face problems with high concurrency, scalability, reusability, and flexibility. In this regard, this paper proposes the idea of dynamic strategy distribution, based on which a configuration-driven traffic-multiplexing A/B testing model is constructed and implemented. The model solves the high-concurrency problem of requesting experimental strategies by introducing message middleware and a strategy cache module, making the system lightweight, flexible, and efficient enough to meet the A/B testing requirements of multiple business systems.

1. Introduction

With the development of the Internet, undifferentiated mass traffic can no longer meet users' needs [1], and it is difficult for such traffic to bring greater profit conversion to enterprises [2,3]. Requirements for refined product design keep rising, and product providers need to update and iterate products more quickly in areas such as personalized recommendation, efficient search, and user interfaces. Different iteration strategies may produce different effects. When a strategy's effect cannot be observed in advance, accurately launching product iterations that achieve effective growth in the market has become a major problem faced by many companies.
A/B testing, also known as A/B/n testing, is a method of comparing the performance of different strategies through objective indicators in order to select the best one. In an A/B test, two (A/B) or more (A/B/n) versions of a strategy are randomly assigned to subsets of users at the same time; the data of the different user groups are then collected and analyzed with statistical methods, and the best-performing version is finally adopted.
When A/B testing is applied to Internet products, it is essentially a comprehensive service for conducting large-scale online experiments that integrates the front end, back end, data, and customer feedback. Since Google applied online experiment technology to Internet products in 2000 [3], A/B testing has become an effective method for Internet companies in strategy verification, product iteration, algorithm optimization, and risk control. Through A/B testing, providers can also gain an in-depth understanding of user behavior patterns and find further ways to increase profit conversion rates. By 2010, A/B testing had been gradually adopted across the Internet industry [4].
A/B testing platforms can be roughly divided into two types: self-developed and third-party. Long-established Internet companies with considerable data and traffic volume can generally afford, in terms of cost and accumulated technology, to develop their own A/B testing systems. Such companies often choose to build in-house A/B testing platforms for their own business systems, both for data security and because a large, complex user base brings more complex individual needs; examples include Google [4], Microsoft [5,6], Amazon [7,8], Facebook, LinkedIn [9,10], ByteDance, Gemini, and Alibaba Group's Tesla. Smaller companies, with less testing content and insufficient budget for self-development, mostly use third-party A/B testing tools. Influential third-party platforms include Optimizely [11,12], AppAdhoc, and AB Tasty [13].
However, although these A/B testing systems each have their own characteristics, they also share various problems. For example, to remain lightweight, A/B testing services often run as plug-ins dependent on a single business system, which makes it difficult for one A/B testing system to serve multiple business systems. A/B testing systems that can serve multiple business systems simultaneously are mostly third-party products; because such products tend to target small enterprises with small business volumes and few visits, they usually do not consider the problem of high-concurrency pressure.
At present, the business complexity of the systems that need A/B testing keeps growing, and it often happens that multiple business systems depend on one A/B testing system at the same time. A large number of product iteration requirements increase both the number of experiments and the number of users, which means that the number of requests the A/B testing system must bear surges. A/B testing systems thus face exponentially increasing concurrency pressure [14,15] and the consequent risk of a single point of failure [4]. When an A/B testing system runs slowly under excessive pressure, it seriously affects the normal operation of the business systems. It is therefore necessary to build a lightweight, efficient A/B testing system that can run experiments stably under high-concurrency pressure while supporting changes to the traffic distribution at any time, so as to enable quick and efficient gray (canary) release of products [13,16] and complete product iterations.
In view of the above requirements, this paper proposes the idea of dynamic strategy distribution and, based on it, designs a configuration-driven traffic-multiplexing A/B testing system. By distributing the experimental strategy configuration information asynchronously, the problem of high-concurrency pressure is solved, and the A/B testing system can be driven in a lightweight, flexible, and efficient manner.
Section 1 of this article introduces the concept of A/B testing and common problems currently faced by A/B testing systems in the Internet industry, and explains the research motivation; Section 2 reviews common A/B testing systems on the market and other related research; Section 3 introduces the idea of dynamic strategy distribution, the corresponding system model, and the strategy configuration information; Section 4 presents the configuration-driven traffic-multiplexing A/B testing system and its business process; Section 5 reports the experiment and analysis; and Section 6 concludes by summarizing the advantages of the system.

2. Related Work

Around 2000, led by Google and riding the development of big data, Internet companies began to adopt A/B testing to support decision-making management, thereby reducing the cost of bad decisions and promoting business growth. Rochelle King et al. [17] summarized a complete methodology of A/B testing on the basis of data-driven thinking, formed a complete and feasible method for building an experimental framework, and systematically answered questions about A/B testing based on actual needs. In 2010, Google's Diane Tang et al. proposed the overlapping experiment framework [4]. Through its domain and layer design, a system can run multiple experiments at the same time while retaining the ease of use and speed of a single-layer experimental system. A system using this framework can stop invalid experiments and experiments with poor results in time, can quickly set up and start new experiments, and can increase or decrease user traffic during an experiment. The support for multiple concurrent experiments in the overlapping experiment framework lays the foundation for one A/B testing system to serve multiple business systems at the same time, and Google's framework became the basis of most A/B testing system designs. After that, A/B testing began to be productized and was gradually introduced into commercial use by the domestic Internet industry; together with growth systems, it became an important tool for corporate decision making.
Alibaba Group's Tesla system, built with reference to Google's overlapping experiment framework, can run multiple experiments at the same time, supports flexible traffic multiplexing and management, and integrates analysis functions. However, in pursuit of being lightweight, Tesla is a library that must be embedded in the front end or back end of a particular system, which prevents it from serving multiple business systems at the same time.
Gemini is a self-developed A/B testing system that improves on the Google framework; its modularity enables it to cope with greater concurrency pressure. However, its infrastructure is the same as that of Microsoft's and Amazon's self-developed A/B testing systems: it integrates baseline code and experimental code into the business logic and embeds A/B testing modules in the business servers. This minimizes dependence on external systems, but it has a large impact on the business, makes grayscale release inconvenient, complicates code maintenance, and limits scalability.
Among third-party A/B testing platforms, Optimizely has the highest share of the international market. Its biggest advantages are visual operation and ease of getting started, while it also provides more sophisticated methods for stricter tracking. Optimizely can also be integrated with analysis tools, which adds to its convenience.

3. Dynamic Strategy Distribution

3.1. Overview

Most existing A/B testing systems use similar architectures: when a user participating in an experiment requests an experiment strategy, the experiment management module is accessed directly. If the scale of experimentation expands and the numbers of users and experiments increase, this communication pattern puts huge concurrency pressure on the experiment management module, which stores the experiment strategies. The core of dynamic strategy distribution is to implement asynchronous message delivery and system decoupling by introducing modules such as message middleware, so that each business system handles its own users' requests, the access pressure is shared, and the high-concurrency problem is solved.
The basis for realizing dynamic strategy distribution is to clearly assign the core functions and key steps of the A/B testing system to individual modules at the modular design stage. Each module implements one core function, and the system's functions are completed simply by calling the interfaces between modules, which exchange small, uniformly formatted messages.
The core of dynamic strategy distribution lies in the "dynamic" part: how to distribute strategies dynamically and asynchronously. Strategy distribution means that the module responsible for editing the experiment strategy sends it to the other modules that use experiment strategies, and these modules store the strategies as needed. Dynamic strategy distribution redistributes and updates strategies promptly when the system is initialized or restarted and whenever an experimental strategy changes. Compared with the approach used by most current A/B testing services, the biggest advantage of the dynamic approach is that the A/B testing system does not communicate every time it receives a request for an experiment strategy. There is no need to occupy high bandwidth sending complete experiment strategies; instead, only a small amount of configuration information is distributed at a few special moments, such as when strategies are updated, which keeps every module of the A/B testing system and the business systems abreast of the latest experimental strategies. When the experimental strategy does not change, there is no large volume of communication inside the A/B testing system. When users request experimental strategies from the business system, the business side parses the configuration information to obtain a complete strategy and distributes it to users. The entire A/B testing system is thus driven by the dynamic distribution of strategy configuration information.
In this way, when the proposed A/B testing system is applied to business systems with large numbers of experiments and users, it avoids frequent, data-heavy communication when large numbers of experimental strategy requests occur, solving the high-concurrency problem faced by traditional A/B testing systems when serving multiple business systems.

3.2. System Model

Based on the idea of dynamic strategy distribution, this paper designs a configuration-driven traffic-multiplexing A/B testing system model. The testing process in this model mainly involves three parties: the A/B testing subsystem, the business side, and the data analysis subsystem. The A/B testing subsystem includes the experiment management module, the strategy cache module, and the message middleware. The business side includes the business system, the general strategy analysis tool, and the user front ends. The data analysis subsystem includes a log module and an analysis and report visualization module. The system model is shown in Figure 1.
The A/B testing subsystem is the producer of the strategy configuration information, responsible for producing and storing it. The business side is the consumer: the general strategy analysis module and the business system communicate with the A/B testing subsystem. These two modules take the experimental-layer scene in the experiment-related information as the monitoring topic and use the experiment ID as the unique key to monitor the message middleware continuously. Whenever the strategy configuration information is updated, the latest configuration information is obtained promptly, as shown in Figure 2.
The detailed description of each module of the A/B testing subsystem is as follows:
(1) A/B testing subsystem—experiment management module: This module is responsible for experiment strategy editing, i.e., creating, querying, updating, and deleting experiment-related information. After the experiment information is configured or updated, the module generates or updates the configuration information of the experiment strategy accordingly and then sends the new strategy configuration information to the strategy cache module and the message middleware at the same time, as shown in the "A/B testing subsystem" section of Figure 1.
(2) A/B testing subsystem—message middleware: Thanks to its asynchronous message delivery, the message middleware is the key to solving the high-concurrency problem. When a new experiment is created, the strategy configuration information generated by the experiment management module is put into the message queue. When an experiment is updated, the experiment management module pushes the new strategy configuration information to the message middleware: for traffic distribution operations such as experimental traffic expansion, deletion of buckets, and traffic push, the updated traffic configuration is pushed to the message queue, and when the experimental parameters are updated, the new parameter list is pushed. This guarantees that the latest configuration information is cached in the message middleware. The asynchrony of the middleware manifests itself in two ways: the experiment management module communicates with the middleware only when creating and updating experiments, while the business side obtains the configuration information through continuous monitoring and communicates only when the monitored configuration information changes; the communication on the two sides is asynchronous and does not interfere with each other. A minimal sketch of this producer-consumer interaction follows this list.
(3) A/B testing subsystem—strategy cache module: This module updates its configuration information in a way similar to the message middleware, but it is mainly used to record the traffic distribution and to back up information. When a new experiment is created, the strategy cache module responds to queries from the experiment management module and returns the traffic already assigned to the different experiments. When the business system restarts its service, it sends a strategy access request to the strategy cache module to obtain the current strategy configuration information backed up in the cache.
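The interaction among these three modules can be illustrated with a minimal sketch. The paper does not prescribe concrete products, so the sketch assumes Kafka as the message middleware and Redis as the strategy cache, with JSON-encoded payloads; the topic/key layout (topic = experiment-layer scene, key = experiment ID) follows the monitoring rule described above.

```python
# Minimal sketch of dynamic strategy distribution (assumptions: Kafka as the
# message middleware, Redis as the strategy cache; neither is prescribed by
# the paper). Payloads are JSON-encoded strategy configuration strings.
import json

import redis
from kafka import KafkaConsumer, KafkaProducer

cache = redis.Redis(host="localhost", port=6379)
producer = KafkaProducer(bootstrap_servers="localhost:9092")


def publish_config(layer_scene: str, experiment_id: str, config: dict) -> None:
    """Experiment management side: push a new strategy configuration to the
    strategy cache (backup) and the message middleware (distribution)."""
    payload = json.dumps(config).encode("utf-8")
    cache.set(f"abtest:{layer_scene}:{experiment_id}", payload)  # backup copy
    producer.send(layer_scene, key=experiment_id.encode(), value=payload)
    producer.flush()


def monitor_configs(layer_scene: str, experiment_id: str):
    """Business side: continuously monitor the middleware and yield the
    latest configuration whenever it changes (asynchronous consumption)."""
    consumer = KafkaConsumer(layer_scene, bootstrap_servers="localhost:9092",
                             group_id="business-system-1")
    for msg in consumer:
        if msg.key == experiment_id.encode():  # experiment ID as unique key
            yield json.loads(msg.value)
```

On restart, a business system would first read the backup copy from the cache (e.g., `cache.get(...)`) before resuming monitoring, matching the restart behavior described above.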
In addition to its core, the A/B testing subsystem, the complete A/B testing process also involves the business side and the data analysis subsystem.
Business side: The business side includes the business systems whose products are iterated through A/B testing. During an A/B test, when the business system receives a user request from the front end, it sends the user ID together with the monitored strategy configuration information to the strategy analysis tool for parsing. By logically processing the parameter list obtained after parsing, the business system returns the corresponding A/B strategy to the user, who then officially enters the experiment.
Data analysis subsystem: The user front end writes the experimental information and embedded-point data generated during the experiment, such as the conversion rate and click-through rate, into the log. After statistical analysis, these data and the analysis results are presented through the analysis and report visualization module. Based on this information, business personnel and experimental strategy designers can understand the differences between buckets and select the best strategy for product release, or use it to guide the next step of the experimental strategy, forming an A/B testing analysis loop.

3.3. Strategy Configuration Information

The strategy configuration information is essentially a JSON string consisting of the experiment ID, bucket information, parameter information, and other fields; each experiment has its own strategy configuration information. To ensure accuracy, the bucket information consists of 10,000 "Id: Bucket Name" pairs with IDs from 0 to 9999. Assuming the experiment has n buckets, the parameter information consists of n "Bucket Name: parameter list" pairs. The strategy configuration information is shown in Figure 3, and a concrete example follows below.
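As an illustration, a strategy configuration of the shape described above might look like the following sketch; the key names are hypothetical, since the paper fixes the content but not the exact field names.

```python
import json

# Hypothetical key names: the paper fixes the content (experiment ID, 10,000
# "Id: Bucket Name" pairs, per-bucket parameter lists) but not the exact keys.
strategy_config = {
    "experimentId": "exp_1024",
    "buckets": {                  # 10,000 entries with IDs 0..9999
        "0": "noExp",             # ID not assigned to this experiment
        "1": "bucketA",
        "2": "bucketB",
        # ... entries 3..9999 omitted for brevity
    },
    "parameters": {               # one parameter list per bucket
        "bucketA": {"buttonColor": "blue", "rankAlgo": "v1"},
        "bucketB": {"buttonColor": "red", "rankAlgo": "v2"},
    },
}
config_string = json.dumps(strategy_config)  # distributed as a JSON string
```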
New bucketing information is generated whenever a traffic bucketing operation is performed, such as creating a new experiment, expanding traffic, deleting buckets, or pushing traffic. By querying the traffic usage in the strategy cache module, the idle traffic belonging to the experiment, i.e., a set of available IDs, is obtained. In the bucket information, IDs that do not belong to this experiment have their bucket name uniformly set to "noExp" and their parameter list set to empty. For each ID belonging to the experiment's traffic, the corresponding hash value is computed with the MurMurHash [18] algorithm (as shown in Figure 4) and taken modulo 10,000 × the traffic fraction occupied by this experiment, and the n bucket names are allocated according to the traffic ratio of each bucket (see the sketch below). After generation is complete, the new strategy configuration information is asynchronously distributed to each business end.
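A minimal sketch of this bucketing step follows, assuming the idle IDs have already been obtained from the strategy cache and using the mmh3 Python binding of MurmurHash; the bucket names and ratios are illustrative.

```python
import mmh3  # Python binding of MurmurHash3

def build_bucket_table(idle_ids, traffic_fraction, buckets):
    """Assign the experiment's idle IDs to buckets; all other IDs get 'noExp'.

    idle_ids         -- iterable of IDs (0..9999) available to this experiment
    traffic_fraction -- share of the total traffic this experiment occupies
    buckets          -- list of (bucket_name, ratio) pairs whose ratios sum to 1
    """
    table = {i: "noExp" for i in range(10000)}   # default: not in experiment
    slots = int(10000 * traffic_fraction)        # modulus for this experiment
    bounds, acc = [], 0.0                        # cumulative ratio boundaries
    for name, ratio in buckets:
        acc += ratio
        bounds.append((acc, name))
    for i in idle_ids:
        slot = mmh3.hash(str(i)) % slots         # hash the ID, then take mod
        for bound, name in bounds:
            if slot < bound * slots:
                table[i] = name
                break
    return table

# Example: an experiment occupying 20% of traffic, split 50/50 into two buckets.
table = build_bucket_table(range(2000), 0.2, [("bucketA", 0.5), ("bucketB", 0.5)])
```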
The business side obtains a unique hash value by applying MurMurHash to the user ID and takes it modulo 10,000 to obtain an ID. It then queries the strategy configuration information, finds the bucketing information corresponding to this ID, obtains the parameter list and related fields, and finally parses this information to derive the user's experimental strategy, as sketched below.
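Reusing the hypothetical `strategy_config` from the example above, the business-side lookup reduces to one hash and two table queries:

```python
import mmh3

def resolve_strategy(user_id: str, config: dict) -> dict:
    """Map a user to a bucket and return that bucket's parameter list."""
    bucket_id = mmh3.hash(user_id) % 10000                    # MurMurHash, mod 10,000
    bucket_name = config["buckets"].get(str(bucket_id), "noExp")
    if bucket_name == "noExp":
        return {}                                             # user not in this experiment
    return config["parameters"][bucket_name]

params = resolve_strategy("user-42", strategy_config)
```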
To make better use of limited user traffic, multiplexing the traffic is a basic requirement. With traffic layering and multiplexing, the same user can participate in multiple experiments of different natures in different experimental layers, realizing traffic sharing. After user traffic flows through one experimental layer, the current hash value undergoes the MurMurHash operation again and then flows into the buckets of the next experimental layer. Within one experimental layer, the traffic of the different buckets of each experiment is mutually exclusive; different experimental layers do not influence each other, and their traffic is orthogonal, as shown in Figure 5. The strategy configuration information of different experimental layers is distinguished by the topic in the message header.
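The re-hashing between layers can be sketched as follows. Using a per-layer seed is one common way to realize "hash again for the next layer" (the paper does not specify how the second hash is parameterized), and it makes bucket assignments in different layers statistically independent, i.e., orthogonal:

```python
import mmh3

LAYER_SEEDS = {"ui-layer": 101, "rank-layer": 202}  # hypothetical layer seeds

def bucket_id_in_layer(user_id: str, layer: str) -> int:
    """Re-hash per layer so that assignments in different layers are orthogonal."""
    h = mmh3.hash(user_id)                      # hash value from the user ID
    h = mmh3.hash(str(h), LAYER_SEEDS[layer])   # hash again for this layer
    return h % 10000

# The same user lands in statistically independent buckets across layers:
ui_bucket = bucket_id_in_layer("user-42", "ui-layer")
rank_bucket = bucket_id_in_layer("user-42", "rank-layer")
```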

4. Configuration-Driven Traffic-Multiplexing A/B Testing System

4.1. System Structure

Based on the above idea of dynamic strategy distribution, this paper proposes a configuration-driven traffic-multiplexing A/B testing system. From bottom to top, the system comprises a data management layer, a configuration layer, an interface layer, and a report layer, as shown in Figure 6.

4.2. Business Process

With the configuration-driven traffic-multiplexing A/B testing system in place, its business process is roughly shown in Figure 7.
In the configuration-driven traffic-multiplexing A/B testing system, the detailed steps of the complete A/B testing process are as follows:
Step 1—Business personnel configure or update the experiment-related information in the A/B testing system according to business requirements. The data managed by the experiment management system are shown in Figure 8 and mainly consist of three parts: experiment information, experiment-layer information, and indicator information. The indicator information refers to the indicators, such as the click-through rate, that the experiment should focus on. After creating the experiment layer, specific experiments can be created within it, with parameters, buckets, and so on. Once the experiment has been created following the above steps, it can be set to the "debug" state. If an exception occurs while this information is being configured, the transaction must be rolled back, and the experiment creation ultimately fails.
Step 2—Generate or update the strategy configuration information based on the basic information of the experiment.
(1) Initialize the configuration information.
(2) Allocate the idle traffic. Compare the traffic required by the experiment (from the experiment-related information) with the remaining idle traffic obtained by querying the cache module, compute the portion of the idle traffic that belongs to this experiment, and update the strategy configuration information initialized in (1).
(3) Bucket the traffic. For each hash ID assigned to the experiment, calculate its hash value with the MurMurHash algorithm, take it modulo 10,000, and distribute the traffic according to the proportion assigned to each bucket.
(4) Update the strategy configuration information. For each bucket's traffic allocated in (3), match the bucket name with its parameter list, and update the strategy configuration information initialized in (1).
Step 3—Transmit the strategy configuration information to the strategy cache module and the message middleware at the same time. This step covers three cases:
(1) When an experiment is created, the strategy cache module stores the experiment strategy, with its assigned user traffic, for each experiment layer.
(2) When an experiment is updated, the strategy cache module updates the experiment strategy configuration information.
(3) When the business system restarts its service, it sends a configuration access request to the strategy cache module to obtain the latest configuration information of the experiments.
Step 4—The business system monitors the message middleware using the experiment ID. Monitoring rule: use the experiment-layer scene field and the experiment ID, respectively, as the topic and key of the message middleware to obtain the experiment strategy configuration information.
Step 5—When a front-end user requests the experiment strategy, obtain and parse the strategy configuration information. The business system takes the user ID carried by the front-end request, computes its hash value with the MurMurHash algorithm, and takes it modulo 10,000 to obtain an integer between 0 and 9999. Among the 10,000 entries (IDs 0 to 9999) in the configuration information, the entry matching this value gives the user's configuration, which includes the bucket name and parameter list. The business system performs logical processing according to the bucket name and parameter list and returns this user's experimental strategy to the user front end. The front end writes the bucket name, experiment ID, and other embedded data into the log for report analysis, which guides the next update or deletion of the experiment, forming an A/B testing loop and completing the A/B testing service. A sketch tying these pieces together is shown below.
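A business-side handler for this step might look like the following sketch; `latest_config` stands for the configuration obtained by monitoring in Step 4, and the log record layout is illustrative rather than prescribed by the paper.

```python
import json
import logging

import mmh3

logger = logging.getLogger("abtest.embedded")

def handle_user_request(user_id: str, latest_config: dict) -> dict:
    """Step 5 sketch: hash the user ID, look up the bucket, return the
    parameter list, and emit the embedded-point record for the log module."""
    bucket_id = mmh3.hash(user_id) % 10000
    bucket_name = latest_config["buckets"].get(str(bucket_id), "noExp")
    params = latest_config["parameters"].get(bucket_name, {})
    # Embedded data consumed later by the analysis and report module.
    logger.info(json.dumps({
        "experimentId": latest_config["experimentId"],
        "userId": user_id,
        "bucket": bucket_name,
    }))
    return params
```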

5. Experiment and Analysis

Based on the idea of dynamic strategy distribution, this paper studies the optimization of the A/B testing system and proposes a configuration-driven traffic-multiplexing A/B testing system. Compared with the A/B testing systems currently on the market, its core advantage is that it relieves high-concurrency pressure. The system is deployed on a server whose configuration is shown in Table 1.
We tested the high-concurrency performance of the system. When 10,000 simulated users request experimental strategies from the system at the same time, the performance indicators of the system response are as shown in Table 2 and Figure 9.
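The paper does not name its load-testing tool (the reported fields resemble a standard load-test summary). For illustration only, a minimal concurrent test against a hypothetical strategy endpoint could be sketched with the Python standard library:

```python
# Illustrative load-test sketch (the paper does not name its tool); the
# endpoint URL and worker count are hypothetical.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/abtest/strategy?userId={}"

def one_request(uid: int) -> float:
    start = time.perf_counter()
    urllib.request.urlopen(URL.format(uid), timeout=10).read()
    return (time.perf_counter() - start) * 1000.0   # latency in milliseconds

with ThreadPoolExecutor(max_workers=1000) as pool:
    latencies = list(pool.map(one_request, range(10000)))

print("avg %.0f ms" % statistics.mean(latencies))
print("p50 %.0f ms" % statistics.median(latencies))
print("p90 %.0f ms" % statistics.quantiles(latencies, n=10)[8])
print("p99 %.0f ms" % statistics.quantiles(latencies, n=100)[98])
```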
Simultaneous access by 10,000 users is an extreme case. Figure 10 shows how the system performs under more typical conditions with varying numbers of concurrent users.
As shown in Figure 10, the response performance of the system was tested with 2000 to 10,000 users requesting at the same time. In Figure 10a, the response time changes significantly around 3000 and 4500 concurrent users. In Figure 10b, the throughput and the sending and receiving speeds fluctuate around a certain level with no obvious trend. The deviation grows as the number of users increases, indicating that the stability of the system response decreases with increasing concurrency.

6. Conclusions

Based on the idea of dynamic strategy distribution, this paper proposes a configuration-driven traffic-multiplexing A/B testing model and builds the corresponding system. With the help of the strategy analysis plug-in on the business side, the experimental information can be recorded in the strategy configuration information. Through modular design and asynchronous message distribution exploiting the characteristics of message middleware, the production, caching, and consumption of the experimental strategy configuration information are decoupled, and the performance of the A/B testing system is improved, overcoming the high-concurrency pressure caused by growth in experiments and user traffic. At the same time, as an independently runnable system, it can serve multiple business systems simultaneously and is lightweight, flexible, and well scalable.

Author Contributions

Conceptualization, B.W. and J.S.; methodology, H.L.; software, H.L.; validation, H.L.; formal analysis, H.L. and J.S.; writing—original draft preparation, H.L.; writing—review and editing, J.S. and H.L.; supervision, B.W.; project administration, B.W.; funding acquisition, J.S. and B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under grant No. 2018YFB1003602.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baier, D.; Rese, A. Increasing Conversion Rates Through Eye Tracking, TAM, A/B Tests: A Case Study. In Advanced Studies in Behaviormetrics and Data Science; Springer: Singapore, 2020; pp. 341–353.
  2. Kohavi, R.; Longbotham, R. Online experiments: Lessons learned. Computer 2007, 40, 103–105.
  3. Siroker, D.; Koomen, P. A/B Testing: The Most Powerful Way to Turn Clicks into Customers; John Wiley & Sons: Hoboken, NJ, USA, 2013.
  4. Tang, D.; Agarwal, A.; O’Brien, D.; Meyer, M. Overlapping Experiment Infrastructure: More, Better, Faster Experimentation. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 24–28 July 2010; pp. 17–26.
  5. Kohavi, R.; Longbotham, R.; Sommerfield, D.; Henne, R.M. Controlled experiments on the web: Survey and practical guide. Data Min. Knowl. Discov. 2009, 18, 140–181.
  6. Azevedo, E.M.; Deng, A.; Montiel Olea, J.L.; Rao, J.; Weyl, E.G. A/B testing with fat tails. J. Political Econ. 2020, 128, 46–54.
  7. Deng, A.; Lu, J.; Litz, J. Trustworthy analysis of online A/B tests: Pitfalls, challenges and solutions. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining; ACM: New York, NY, USA, 2017; pp. 641–649.
  8. Hill, D.N.; Nassif, H.; Liu, Y.; Iyer, A.; Vishwanathan, S.V.N. An Efficient Bandit Algorithm for Realtime Multivariate Optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1813–1821.
  9. Bojinov, I.; Chen, A.; Liu, M. The Importance of Being Causal. In Harvard Data Science Review; MIT Press: Cambridge, MA, USA, 2020; p. 2.
  10. Saveski, M.; Pouget-Abadie, J.; Saint-Jacques, G.; Duan, W.; Ghosh, S.; Xu, Y.; Airoldi, E.M. Detecting Network Effects: Randomizing over Randomized Experiments. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1027–1035.
  11. Jiang, S.; Martin, J.; Wilson, C. Who’s the Guinea Pig? Investigating Online A/B/n Tests in-the-Wild. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; pp. 201–210.
  12. Johari, R.; Koomen, P.; Pekelis, L. Peeking at A/B Tests: Why It Matters, and What to Do about It. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1517–1525.
  13. Claeys, E.; Gancarski, P.; Maumy-Bertrand, M.; Wassner, H. Dynamic allocation optimization in A/B tests using classification-based preprocessing. IEEE Trans. Knowl. Data Eng. 2021, 35, 335–349.
  14. Nandy, P.; Basu, K.; Chatterjee, S.; Tu, Y. A/B testing in dense large-scale networks: Design and inference. Adv. Neural Inf. Process. Syst. 2020, 33, 2870–2880.
  15. Kokkonis, G.; Psannis, K.E.; Roumeliotis, M. Network adaptive flow control algorithm for haptic data over the internet–NAFCAH. In International Conference on Genetic and Evolutionary Computing; Springer: Cham, Switzerland, 2015; pp. 93–102.
  16. Deng, A.; Lu, J.; Chen, S. Continuous monitoring of A/B tests without pain: Optional stopping in Bayesian testing. In Proceedings of the 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), Montreal, QC, Canada, 17–19 October 2016; pp. 243–252.
  17. King, R.; Churchill, E.F.; Tan, C. Designing with Data: Improving the User Experience with A/B Testing; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2017.
  18. Yamaguchi, F.; Nishi, H. Hardware-based hash functions for network applications. In Proceedings of the 2013 19th IEEE International Conference on Networks (ICON), Singapore, 11–13 December 2013; pp. 1–6.
Figure 1. System model.
Figure 2. Business system monitoring the message middleware.
Figure 3. Strategy configuration information.
Figure 4. MurMurHash.
Figure 5. Layered multiplexing of traffic.
Figure 6. A/B testing system structure.
Figure 7. The business process of A/B Testing Service.
Figure 8. Experiment-management-related information.
Figure 9. Statistics of system response time.
Figure 10. Statistics of system response.
Table 1. Server configuration.

Label      Configuration
CPU        AMD Ryzen 9 3900 12-core processor
GPU        NVIDIA GeForce RTX 2070, 8 GB
RAM        32 GB
Storage    512 GB SSD + 1.8 TB HDD
Table 2. Performance indicators of the system response.

Label                Value
Average              410 ms
50% line (median)    204 ms
90% line             1244 ms
95% line             1336 ms
99% line             1761 ms
Min                  3 ms
Max                  1835 ms
Error rate           0.00%
Std. deviation       475 ms
Throughput           2335.4 requests/s
Received             399.1 KB/s
Sent                 305.6 KB/s