Abstract
The growing number of algorithmic decision-making environments, which blend machine and bounded human rationality, strengthens the need for a holistic performance assessment of such systems. Indeed, this combination amplifies the risk of local rationality, necessitating a robust evaluation framework. We propose a novel simulation-based model that combines causal modelling and data science algorithms to quantify the impact of algorithmic interventions within organisational contexts. To test our framework’s viability, we present a case study of a bike-share system, focusing on inventory balancing through crowdsourced user actions. Using data from New York’s Citi Bike service, we highlight the frequent misalignment between incentives and the actual need for them. Our model examines the interaction dynamics between the rule-driven responses of users and the service provider and the algorithms that predict flow rates, demonstrating why understanding these dynamics is essential for devising effective incentive policies. The study shows how sophisticated machine learning models, able to forecast underlying market demand unconstrained by historical supply issues, can create imbalances that alter user behaviour and undermine plans unless timely interventions are made. Our approach allows such problems to surface during the design phase, potentially avoiding costly deployment errors in the joint performance of human and AI decision-makers.
1. Introduction
Despite the rapid advancement of Artificial Intelligence (AI), particularly with generative AI [1], human judgment remains critical in decision-making [2,3,4,5].
From a macroeconomic perspective, AI, as a General-Purpose Technology (GPT), requires complementary structures, processes and systems to maximise its value [6,7]. This transition is gradual, particularly for organisations not born digital, which will blend old and new processes for some time [8]. While automation replicates existing rules, an augmentation approach—combining humans and AI—creates opportunities to innovate and extend task boundaries, effectively “growing the pie” by expanding the range of available options [9,10].
From an organisational decision-making perspective, AI’s handling of tasks involving tacit knowledge is often unreliable due to the variability in such tasks [11,12,13]. Effective AI integration requires a shift from “point solutions” to a system-level approach that considers interdependencies within the organisation [8]. This transition from narrow “digitisation” to a holistic, problem-led “digital” view acknowledges the contextual richness of organisational decision-making, recognising the limitations of reducing complex problems to a few variables [14,15,16]. As Gigerenzer points out, simplifications are inadequate in a world that lacks stability, necessitating a systemic approach that accounts for both uncertainty and the complexity of real-world contexts [17].
Technologically, the limitations of Large Language Models (LLMs) and the biases inherent in machine learning highlight that data and models are not entirely objective [18,19,20]. Machine learning models inevitably reflect the biases of their designers, refuting claims of “theory-free” AI [21,22].
These mutually reinforcing perspectives establish human–AI collaboration as the default for complex problem-solving. Research shows that coordination dynamics, already complex with traditional technologies and likened to navigating a rugged landscape, become even more intricate with the introduction of machine learning [23,24,25].
In light of the need for collaborative decision-making between humans and AI, the challenge is to make the dynamics of joint decisions transparent. Our proposed framework addresses this by integrating deep learning as the centrepiece of the algorithmic component, complemented by human judgment, within a simulation workbench designed to evaluate and refine decision policies. To demonstrate the framework’s application, we applied it to inventory balancing in docked bike-sharing systems, which often experience spatial and temporal inventory asymmetries, where stations with similar initial stocks diverge over time. Two main motivating factors for choosing this problem were the availability of high-quality, publicly accessible datasets that Citi Bike regularly publishes [26] and its use of a crowdsourcing model for inventory balancing [27]. While recruiting users for inventory balancing instead of using carrier vehicles offers environmental benefits, the need to account for user behaviours in response to bike saturation at stations and incentives adds to the complexity of the coordination problem in a docked model. Our framework tackles this challenge by integrating heuristic and data-driven methods, enabling a comprehensive assessment of their performance across different experimental conditions.
From a systems perspective, organisations with hierarchical structures, interacting parts, beliefs, rules and goals can be viewed as complex systems. System dynamics, which focuses on dynamic complexity arising from interactions over time rather than the number of components, offers a viable method for understanding organisational behaviour [28,29].
In developing system dynamics, Forrester’s early work examined fluctuations in performance factors within a manufacturing supply chain under stable customer demand patterns [30]—the so-called “bullwhip” effect, or amplification of demand variability up the supply chain. He called the general behaviour patterns in this context “industrial dynamics”. Forrester then applied the system dynamics method to policy matters at the city level (“urban dynamics”) and even on a global scale (“world dynamics”) [31].
Although system dynamics has proven adept at modelling problems at different scales since its inception, its focus has traditionally been on broad societal issues rather than organisational topics [32]. Moreover, even when applied in organisational contexts, system dynamics models typically rely on heuristics to represent the mental model guiding decisions (e.g., [33,34,35]).
Despite the introduction of the Python library PySD, which allows Python’s rich data science methods to be combined with simpler judgmental rules in system dynamics models [36], AI use in system dynamics has been restricted mainly to policy optimisation [37] and to searching the space of inputs for a specific mode of behaviour [38].
Since one of the core principles of a complex system is its hierarchical structure—what Simon calls “boxes-within-boxes” [39]—characterised by more interactions within subsystems and fewer between them, there is an untapped potential for modelling organisational behaviour as a mix of relatively closed algorithmic subsystems and more open rule-based ones cohabiting, using PySD [40]. While traditional inventory balancing is often modelled as an “optimisation” problem involving carrier vehicles (e.g., [41,42,43,44]), incorporating user engagement complicates the planning problem and justifies a combination of heuristic and algorithmic approaches. As shown in Figure 1, modern organisations blend rule-based and AI-driven models, mirroring the heuristic–data science approach used in our bike-sharing study. However, despite integrating human judgment and algorithmic decision-making, organisations often fail to evaluate the overall performance of joint decision-making comprehensively. This gap underscores the novelty of our model in facilitating double-loop learning [45], allowing real-world feedback to iteratively refine both heuristic and AI elements of the mental model, fostering more adaptive human–AI collaboration.
Figure 1.
Double-loop learning applied to refine collaborative human–AI decision-making.
The remainder of this paper is structured as follows: Section 2 outlines the modelling framework, focusing on Phase 2 of the decision-making process (evaluation) and highlighting the integration of machine learning and heuristic methods. Section 3 applies this framework to the bike-share case, starting with the business context and problem illustration and progressing through data sources, causal mapping for evaluation, and the development of simulation and ML models. This section includes detailed subsections on demand forecasting, causal inference and stocks and flows modelling, leading into partial model testing and comprehensive integration analysis, such as assessing starting inventory policies and mapping performance across the decision landscape. Insights and implications derived from these evaluations are then discussed. Finally, Section 4 summarises the key findings, contributions and potential directions for future research.
2. Modelling Framework
Our primary deliverable is a quantitative model designed to bridge the gap between recognising the need for machine learning (ML) in decision-making and implementing it. We define a “quantitative” model following [46], where causal relationships between variables are defined and quantified. The aim is to shift from qualitative conceptual models that suggest how joint human–ML decision-making might be structured to quantitative models that evaluate whether existing organisational policies and processes are conducive to successful ML implementation.
This approach follows established guidelines for system dynamics modelling [45,47], emphasising the need for a holistic understanding of problem-solving [48]. The framework operates within the evaluation phase of decision-making, which bridges conceptualisation and implementation. It leverages systems thinking principles, where the structure of the system determines its behaviour [28].
As depicted in Figure 2, the evaluation phase comprises four main steps: Causal Mapping, Formulation of Simulation and Machine Learning Models, Partial Model Testing and Integration and Policy Analysis. Each step plays a crucial role in transitioning from conceptual understanding to an actionable and testable model.
Figure 2.
Overview of the modelling framework.
Causal Mapping is the foundation, representing current and future decision-making states through causal diagrams. This step abstracts operational details to define system boundaries, identify critical feedback structures, and set the stage for more detailed model formulation. The purpose is to make explicit the interdependencies and feedback loops within the system, ensuring that critical elements influencing decision outcomes are accounted for.
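To make the causal mapping step concrete, the short sketch below encodes a causal diagram as a signed directed graph and enumerates its feedback loops, classifying each as reinforcing or balancing. It is a minimal illustration only: the variable names (station_inventory, incentive_level and so on) are assumptions for exposition, not the diagram used in the case study.

```python
# Minimal sketch: a causal loop diagram as a signed directed graph.
# Variable names are illustrative placeholders, not the paper's CLD.
import networkx as nx

cld = nx.DiGraph()
# Each link carries a polarity: +1 (change in the same direction) or -1 (opposite).
links = [
    ("station_inventory", "perceived_availability", +1),
    ("perceived_availability", "rental_rate", +1),
    ("rental_rate", "station_inventory", -1),
    ("station_inventory", "incentive_level", -1),
    ("incentive_level", "user_rebalancing", +1),
    ("user_rebalancing", "station_inventory", +1),
]
for src, dst, polarity in links:
    cld.add_edge(src, dst, polarity=polarity)

# Enumerate feedback loops: an odd number of negative links makes a loop balancing.
for cycle in nx.simple_cycles(cld):
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    negatives = sum(1 for u, v in edges if cld[u][v]["polarity"] < 0)
    kind = "balancing" if negatives % 2 else "reinforcing"
    print(" -> ".join(cycle), ":", kind)
```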
Formulation of Simulation and Machine Learning Models follows, employing tools such as Vensim, PySD and ML techniques (e.g., Recurrent Neural Network (RNN)). Here, causal maps guide the development of simulation models that integrate both ML and heuristic methods. The simulation models are structured to allow interaction with ML submodels, facilitating the exchange of information between predictive algorithms and rule-based processes. Data collation and transformation, shown in the diagram, prepare input data for ML training and testing, while simulation elements are parametrised to create and test various scenarios using different configurations of control variables.
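As an illustration of how a predictive submodel can exchange information with the simulation, the sketch below feeds a stand-in demand forecast into a PySD model as an exogenous, time-varying input; in the full framework, this series would come from the trained RNN. The model file bike_station.mdl and the variable names forecasted_demand, starting_inventory, station_inventory and unmet_demand are placeholders assumed for illustration, not artefacts of our repository.

```python
# Hedged sketch: couple a forecast series to a PySD simulation run.
import numpy as np
import pandas as pd
import pysd

model = pysd.read_vensim("bike_station.mdl")  # placeholder Vensim model file

# Stand-in for an ML submodel's hourly demand forecast over one week.
hours = np.arange(168)
forecast = pd.Series(10 + 5 * np.sin(2 * np.pi * hours / 24), index=hours)

results = model.run(
    params={
        "forecasted_demand": forecast,   # time-varying exogenous input
        "starting_inventory": 20,        # control variable for this scenario
    },
    return_columns=["station_inventory", "unmet_demand"],
)
print(results.tail())
```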
Partial Model Testing is crucial for verifying the local rationality of subsystems, ensuring alignment with process-level objectives before full integration. In this phase, code-based simulations with embedded ML models are run to assess the performance of individual subsystems, such as those depicted as Subsystems A, B, C and D. This iterative testing step helps identify potential biases or misalignments within isolated components, allowing for adjustments before the entire model is synthesised.
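In code, a partial model test can be a small, repeatable check that exercises one subsystem under fixed, idealised inputs and asserts a process-level objective before integration. The sketch below reuses the placeholder model and variable names introduced above; the bounds checked are illustrative assumptions rather than calibrated targets.

```python
# Hedged sketch: local-rationality check for an isolated incentive subsystem.
import pysd

model = pysd.read_vensim("bike_station.mdl")  # placeholder model file

def test_incentive_subsystem():
    # Hold the rest of the system steady: constant demand, ample stock.
    results = model.run(
        params={"forecasted_demand": 12, "starting_inventory": 40},
        return_columns=["incentive_level", "user_rebalancing"],
    )
    # Process-level objectives: incentives stay within the budgeted range
    # and crowdsourced rebalancing never goes negative.
    assert results["incentive_level"].between(0, 1).all()
    assert results["user_rebalancing"].min() >= 0

test_incentive_subsystem()
print("Partial model test passed.")
```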
Integration and Policy Analysis assesses the global rationality of the intervention by simulating system-wide outcomes and comparing them against organisational goals. This step explores potential misalignments and examines the robustness of different policy choices, informing adjustments needed for improved performance. The final output of this phase serves as a low-risk tool for evaluating whether to proceed with or adapt the human–AI strategy in a real-world implementation.
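In practice, this step can be organised as a sweep over control-variable configurations, recording a system-wide performance measure for each scenario so that the resulting landscape, including its ruggedness, can be inspected for robust regions. The parameter grid, model file and variable names below are again illustrative assumptions.

```python
# Hedged sketch: map system-wide performance across policy configurations.
import itertools
import pandas as pd
import pysd

model = pysd.read_vensim("bike_station.mdl")  # placeholder model file

starting_inventories = [10, 20, 30, 40]
incentive_thresholds = [0.2, 0.4, 0.6, 0.8]

records = []
for inv, thr in itertools.product(starting_inventories, incentive_thresholds):
    run = model.run(
        params={"starting_inventory": inv, "incentive_threshold": thr},
        return_columns=["unmet_demand"],
    )
    records.append({
        "starting_inventory": inv,
        "incentive_threshold": thr,
        "total_unmet_demand": run["unmet_demand"].sum(),
    })

landscape = pd.DataFrame(records).pivot(
    index="starting_inventory",
    columns="incentive_threshold",
    values="total_unmet_demand",
)
# Large jumps between neighbouring cells indicate rugged, less robust regions.
print(landscape)
```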
In summary, this modelling framework leverages the strengths of both simulation and ML to create a cohesive system that bridges conceptualisation and implementation. Using tools like PySD for programmatic simulation and incorporating a combination of heuristic and algorithmic approaches, the framework facilitates double-loop learning; this ensures that real-world feedback informs and adapts both heuristic and ML components, fostering a more adaptive human–AI collaboration.
4. Conclusions
Successive innovations in AI, particularly transformer-based models, along with increased investments in data and computing, continue to drive rapid growth in the field. These advancements create new opportunities for automation but also introduce novel risks, highlighting the need for human oversight. Recognising these risks and the potential for new tasks through human–AI collaboration has shifted the focus towards augmentation rather than pure automation in decision-making. Although frameworks for augmentation offer guidelines—often drawing on decision theory, systems theory and empirical data—they remain only a starting point, as each organisation’s approach is shaped by its unique decision-making routines and resources. To bridge the gap between theory and practice, organisations need to evaluate their specific blend of human judgment, AI and, more broadly, algorithmic decision-making. The system dynamics-based simulation modelling framework proposed in this paper addresses the “last mile” challenge of moving from a hypothesised teaming of human and AI agents to practical implementation, enabling quantification that accounts for firm-specific decision complements.
We applied the model to the inventory balancing problem in docked bike-share systems, where incentivising users to perform balancing, although eco-friendly, poses a coordination challenge. The model leverages the complementarity between stations or station clusters with asymmetrical demand patterns to optimise inventory management. Our experiments support recent research suggesting that introducing ML as a novel learning agent creates a decision-making landscape that is likely to be rugged. The flexibility of ML, unconstrained by the prior beliefs that shape human decision-making, allows it to explore a broader decision space. This flexibility creates opportunities for substantial improvement over traditional approaches but also introduces risks. For instance, our simulations of multiple policy variants for inventory balancing, combining judgmental heuristics with data science approaches (including deep learning), showed that, while ML’s superior ability to anticipate future flows often leads to significant improvements, there are also scenarios where performance declines compared to using no incentives.
Furthermore, by simulating the assumed future-state policy variant and neighbouring scenarios, our approach reveals that ruggedness can result in significant variability: a variant performing well above the baseline may have nearby scenarios performing much worse. Given irreducible uncertainty, prioritising robust scenarios over elusive optimal ones is essential. Our approach enables the mapping of the performance landscape, offering the opportunity to develop robust policies through parameter fine-tuning and structural adjustments that deliver synergistic human–AI teaming.
Future work could extend the analysis to an entire network of stations, leveraging the scalability of our PySD-based simulation workbench to run parallel simulations for multiple demand clusters. Incorporating structural changes to the model, such as the ability to simulate carrier-based rebalancing alongside crowdsourced strategies, would add realism and support a more comprehensive analysis of balancing measures. Additionally, validating the assumptions underpinning perceived availability and risk—as well as other heuristics that synthesise algorithmic predictions and human judgment—would be essential when implementing the framework in real-world settings. These adaptations would need to align with firm-specific decision-making routines to ensure applicability across different operational environments. Finally, while this study focused on bike-sharing, the framework could be adapted to other complex systems requiring a blend of human and machine-led decision-making, such as supply chain planning and resource allocation problems.
Author Contributions
Conceptualization, G.S. (Ganesh Sankaran) and M.A.P.; Methodology, G.S. (Ganesh Sankaran); Software, G.S. (Ganesh Sankaran); Validation, G.S. (Ganesh Sankaran) and M.A.P.; Investigation, G.S. (Ganesh Sankaran); Writing—original draft, G.S. (Ganesh Sankaran); Writing—review & editing, M.A.P. and G.S. (Guido Siestrup); Visualization, G.S. (Ganesh Sankaran); Supervision, M.A.P., M.K. and G.S. (Guido Siestrup); Project administration, M.A.P. and G.S. (Guido Siestrup); Funding acquisition, G.S. (Guido Siestrup). All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The ML code, system dynamics simulation files and data are available on GitHub under: bss-model-E3BF (https://anonymous.4open.science/r/bss-model-E3BF (accessed on 10 October 2024)).
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A
Table A1.
Bike-share forecast model characteristics.
| Aspect | Description |
|---|---|
| Model Architecture | |
| Features Used | |
| Optional Features | |
| Model Parameters | |
| Loss Function | |
| Optimisation | |
| Data Preprocessing | |
| Training Process | |
| Evaluation Metrics | |
| Implementation | |
| Data Granularity | |
| Forecasting Approach | |
| Notable Features | |
| Reproducibility | |
Figure A1.
CLD of the bike-share two-stock model.
References
- Raschka, S. Build a Large Language Model from Scratch; Manning Publications: Shelter Island, NY, USA, 2024; 400p. [Google Scholar]
- Malone, T.W. How Human-Computer “Superminds” Are Redefining the Future of Work. MIT Sloan Manag. Rev. 2018, 59, 34–41. [Google Scholar]
- Agrawal, A.; Gans, J.S.; Goldfarb, A. What to Expect from Artificial Intelligence. MIT Sloan Manag. Rev. 2017, 58, 23. Available online: https://sloanreview.mit.edu/article/what-to-expect-from-artificial-intelligence/ (accessed on 14 September 2021).
- Saenz, M.J.; Revilla, E.; Simón, C. Designing AI Systems With Human-Machine Teams. MIT Sloan Manag. Rev. 2020, 61, 1–5. Available online: https://sloanreview.mit.edu/article/designing-ai-systems-with-human-machine-teams/ (accessed on 8 September 2021).
- Raisch, S.; Krakowski, S. Artificial Intelligence and Management: The Automation–Augmentation Paradox. AMR 2021, 46, 192–210. [Google Scholar] [CrossRef]
- Brynjolfsson, E.; Mitchell, T. What can machine learning do? Workforce implications. Science 2017, 358, 1530–1534. [Google Scholar] [CrossRef]
- Autor, D. Polanyi’s Paradox and the Shape of Employment Growth; Report No.: 20485; National Bureau of Economic Research: Cambridge, MA, USA, 2014; Available online: https://www.nber.org/papers/w20485 (accessed on 8 September 2021).
- Agrawal, A.; Gans, J.; Goldfarb, A. Power and Prediction: The Disruptive Economics of Artificial Intelligence; Harvard Business Review Press: Boston, MA, USA, 2022; 288p. [Google Scholar]
- Brynjolfsson, E. The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus 2022, 151, 272–287. [Google Scholar] [CrossRef]
- Acemoglu, D.; Johnson, S. Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity, 1st ed.; Public Affairs: New York, NY, USA, 2023; 560p. [Google Scholar]
- Kambhampati, S. Polanyi’s Revenge and AI’s New Romance with Tacit Knowledge. Commun. ACM 2021, 64, 31–32. [Google Scholar] [CrossRef]
- Lebovitz, S.; Levina, N.; Lifshitz-Assaf, H. Is AI Ground Truth Really “True?” The Dangers of Training and Evaluating AI Tools Based on Experts’ Know-What. Manag. Inf. Syst. Q. 2021, 45, 1501–1526. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3839601 (accessed on 28 October 2022). [CrossRef]
- Raghu, M.; Blumer, K.; Corrado, G.; Kleinberg, J.; Obermeyer, Z.; Mullainathan, S. The Algorithmic Automation Problem: Prediction, Triage, and Human Effort. arXiv 2019, arXiv:1903.12220. [Google Scholar]
- Ross, J. Don’t Confuse Digital with Digitization. MIT Sloan Manag. Rev. 2019. Available online: https://sloanreview.mit.edu/article/dont-confuse-digital-with-digitization/ (accessed on 7 November 2022).
- Moser, C.; den Hond, F.; Lindebaum, D. What Humans Lose When We Let AI Decide. MIT Sloan Manag. Rev. 2022, 63, 12–14. [Google Scholar]
- Morgan, G. Images of Organization; Updated edition; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2006; 520p. [Google Scholar]
- Gigerenzer, G. How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms; Penguin: Portland, Oregon, 2022; 307p. [Google Scholar]
- Chiang, T. ChatGPT Is a Blurry JPEG of the Web. The New Yorker, 2023. Available online: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web (accessed on 24 November 2023).
- Babic, B.; Cohen, I.G.; Evgeniou, T.; Gerke, S. When Machine Learning Goes Off the Rails. Harv. Bus. Rev. 2021, 132–138. Available online: https://hbr.org/2021/01/when-machine-learning-goes-off-the-rails (accessed on 25 May 2022).
- Smith, B.C. The Promise of Artificial Intelligence: Reckoning and Judgment; Illustrated Edition; The MIT Press: Cambridge, MA, USA, 2019; 184p. [Google Scholar]
- Kitchin, R. Big Data, New Epistemologies and Paradigm Shifts. Big Data Soc. 2014, 1, 2053951714528481. [Google Scholar] [CrossRef]
- Domingos, P. A Few Useful Things to Know About Machine Learning. Commun. ACM 2012, 55, 78–87. [Google Scholar] [CrossRef]
- Levinthal, D.A. Adaptation on Rugged Landscapes. Manag. Sci. 1997, 43, 934–950. [Google Scholar] [CrossRef]
- Sturm, T.; Gerlach, J.P.; Pumplun, L.; Mesbah, N.; Peters, F.; Tauchert, C.; Nan, N.; Buxmann, P. Coordinating Human and Machine Learning for Effective Organizational Learning. MIS Q. 2021, 45, 1581–1602. [Google Scholar] [CrossRef]
- Dell’Acqua, F.; McFowland, E., III; Mollick, E.R.; Lifshitz-Assaf, H.; Kellogg, K.; Rajendran, S.; Lakhani, K.R. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321 (accessed on 13 November 2023).
- Oliveira, G.N.; Sotomayor, J.L.; Torchelsen, R.P.; Silva, C.T.; Comba, J.L.D. Visual analysis of bike-sharing systems. Comput. Graph. 2016, 60, 119–129. [Google Scholar] [CrossRef]
- Chung, H.; Freund, D.; Shmoys, D.B. Bike Angels: An Analysis of Citi Bike’s Incentive Program. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies (COMPASS’ 18), New York, NY, USA, 20–22 June 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–9. [Google Scholar] [CrossRef]
- Meadows, D.H. Thinking in Systems: A Primer; Wright, D., Ed.; Chelsea Green Publishing: Hartford, VT, USA, 2008; 240p. [Google Scholar]
- Senge, P.M. The Fifth Discipline: The Art & Practice of The Learning Organization; Revised & Updated edition; Doubleday: New York, NY, USA, 2006; 445p. [Google Scholar]
- Forrester, J.W. Industrial Dynamics. Harv. Bus. Rev. 1958, 36, 37–66. [Google Scholar]
- Pruyt, E. Small System Dynamics Models for Big Issues: Triple Jump Towards Real-World Complexity; TU Delft Library: Delft, The Netherlands, 2013; 324p. [Google Scholar]
- Cassidy, R.; Singh, N.S.; Schiratti, P.-R.; Semwanga, A.; Binyaruka, P.; Sachingongu, N.; Chama-Chiliba, C.M.; Chalabi, Z.; Borghi, J.; Blanchet, K. Mathematical modelling for health systems research: A systematic review of system dynamics and agent-based models. BMC Health Serv. Res. 2019, 19, 845. [Google Scholar] [CrossRef]
- Morecroft, J.D. System dynamics: Portraying bounded rationality. Omega 1983, 11, 131–142. [Google Scholar] [CrossRef]
- Lyneis, J.M. System dynamics for market forecasting and structural analysis. Syst. Dyn. Rev. 2000, 16, 3–25. [Google Scholar] [CrossRef]
- Vlachos, D.; Georgiadis, P.; Iakovou, E. A system dynamics model for dynamic capacity planning of remanufacturing in closed-loop supply chains. Comput. Oper. Res. 2007, 34, 367–394. [Google Scholar] [CrossRef]
- Houghton, J.; Siegel, M. Advanced data analytics for system dynamics models using PySD. In Proceedings of the 33rd International Conference of the System Dynamics Society, Cambridge, MA, USA, 19–23 July 2015; System Dynamics Society: Cambridge, UK, 2015. [Google Scholar]
- Chen, Y.T.; Tu, Y.M.; Jeng, B. A Machine Learning Approach to Policy Optimization in System Dynamics Models. Syst. Res. Behav. Sci. 2011, 28, 369–390. [Google Scholar] [CrossRef]
- Edali, M. Pattern-oriented analysis of system dynamics models via random forests. Syst. Dyn. Rev. 2022, 38, 135–166. [Google Scholar] [CrossRef]
- Simon, H.A. The Sciences of the Artificial, 3rd ed.; The MIT Press: Cambridge, MA, USA, 1996; 248p. [Google Scholar]
- Sankaran, G.; Palomino, M.A.; Knahl, M.; Siestrup, G. A modeling approach for measuring the performance of a human-AI collaborative process. Appl Sci. 2022, 12, 11642. [Google Scholar] [CrossRef]
- Caggiani, L.; Ottomanelli, M. A Dynamic Simulation based Model for Optimal Fleet Repositioning in Bike-sharing Systems. Procedia-Soc. Behav. Sci. 2013, 87, 203–210. [Google Scholar] [CrossRef]
- Lowalekar, M.; Varakantham, P.; Ghosh, S.; Jena, S.; Jaillet, P. Online Repositioning in Bike Sharing Systems. In Proceedings of the International Conference on Automated Planning and Scheduling, Pittsburgh, PA, USA, 18–23 June 2017; Volume 27, pp. 200–208. Available online: https://ojs.aaai.org/index.php/ICAPS/article/view/13824 (accessed on 6 November 2024).
- Legros, B. Dynamic repositioning strategy in a bike-sharing system; how to prioritise and how to rebalance a bike station. Eur. J. Oper. Res. 2019, 272, 740–753. [Google Scholar] [CrossRef]
- Ghosh, S.; Trick, M.; Varakantham, P. Robust Repositioning to Counter Unpredictable Demand in Bike Sharing Systems. In Proceedings of the 25th International Joint Conference on Artificial Intelligence IJCAI 2016, New York, NY, USA, 9–15 July 2016; pp. 3096–3102. Available online: https://ink.library.smu.edu.sg/sis_research/3456 (accessed on 6 November 2024).
- Sterman, J.D. Business Dynamics; International edition; McGraw-Hill Education: Boston, MA, USA, 2000; 993p. [Google Scholar]
- Bertrand, J.W.M.; Fransoo, J.C. Operations management research methodologies using quantitative modeling. Int. J. Oper. Prod. Manag. 2002, 22, 241–264. [Google Scholar] [CrossRef]
- Morecroft, J.D.W. Strategic Modelling and Business Dynamics: A Feedback Systems Approach, 2nd ed.; Wiley: Hoboken, NJ, USA, 2015; 504p. [Google Scholar]
- Mitroff, I.I.; Betz, F.; Pondy, L.R.; Sagasti, F. On Managing Science in the Systems Age: Two Schemas for the Study of Science as a Whole Systems Phenomenon. Interfaces 1974, 4, 46–58. Available online: https://www.jstor.org/stable/25059093 (accessed on 22 October 2021). [CrossRef]
- Shen, Y.; Zhang, X.; Zhao, J. Understanding the usage of dockless bike sharing in Singapore. Int. J. Sustain. Transp. 2018, 12, 686–700. [Google Scholar] [CrossRef]
- Shaheen, S.A.; Guzman, S.; Zhang, H. Bikesharing in Europe, the Americas, and Asia: Past, Present, and Future. Transp. Res. Rec. 2010, 2143, 159–167. [Google Scholar] [CrossRef]
- Singla, A.; Santoni, M.; Bartók, G.; Mukerji, P.; Meenen, M.; Krause, A. Incentivizing Users for Balancing Bike Sharing Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/9251 (accessed on 11 August 2023).
- El Sibai, R.; Challita, K.; Bou Abdo, J.; Demerjian, J. A New User-Based Incentive Strategy for Improving Bike Sharing Systems’ Performance. Sustainability 2021, 13, 2780. Available online: https://www.mdpi.com/2071-1050/13/5/2780 (accessed on 6 November 2024). [CrossRef]
- Makridakis, S.G.; Wheelwright, S.C.; Hyndman, R.J. Forecasting: Methods and Applications, 3rd ed.; Wiley: New York, NY, USA, 1998; 923p. [Google Scholar]
- Sankaran, G.; Sasso, F.; Kepczynski, R.; Chiaraviglio, A. Improving Forecasts with Integrated Business Planning: From Short-Term to Long-Term Demand Planning Enabled by SAP IBP; Management for Professionals; Springer International Publishing: Cham, Switzerland, 2019; Available online: http://link.springer.com/10.1007/978-3-030-05381-9 (accessed on 27 August 2024).
- Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 3rd ed.; O’Reilly Media: Sebastopol, CA, USA, 2019; 856p. [Google Scholar]
- Chollet, F. Deep Learning with Python, 2nd ed.; Manning: Shelter Island, NY, USA, 2021; 504p. [Google Scholar]
- Brodersen, K.H.; Gallusser, F.; Koehler, J.; Remy, N.; Scott, S.L. Inferring causal impact using Bayesian structural time-series models. Ann. Appl. Stat. 2015, 9, 247–274. [Google Scholar] [CrossRef]
- Morecroft, J.D.W. Rationality in the Analysis of Behavioral Simulation Models. Manag. Sci. 1985, 31, 900–916. [Google Scholar] [CrossRef]
- Makridakis, S.; Spiliotis, E.; Assimakopoulos, V. Statistical and Machine Learning forecasting methods: Concerns and ways forward. PLoS ONE 2018, 13, e0194889. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).