Article

Hybrid Human–Machine Consensus Framework for SME Technology Selection: Integrating Machine Learning and Planning Poker

by
Chetna Gupta
1,* and
Varun Gupta
2,3,*
1
School of Computer Science Engineering and Technology, Bennett University, Greater Noida 201301, India
2
Multidisciplinary Research Centre for Innovations in SMEs (MrciS), Gisma University of Applied Sciences, 14469 Potsdam, Germany
3
Department of Economics and Business Administration, University of Alcala, 28802 Alcalá de Henares, Spain
*
Authors to whom correspondence should be addressed.
Systems 2026, 14(1), 42; https://doi.org/10.3390/systems14010042
Submission received: 26 October 2025 / Revised: 19 December 2025 / Accepted: 25 December 2025 / Published: 30 December 2025
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

Abstract

This paper proposes a hybrid collaborative framework to optimize technology selection in Small and Medium-sized Enterprises (SMEs) by integrating machine learning (ML) predictions with Planning Poker, a consensus-based estimation technique used in agile software development. Addressing known challenges such as cognitive bias, resource constraints, and the need for inclusive decision-making, the proposed model combines data-driven suitability analysis with stakeholder-driven consensus. ML generates quantitative, criterion-wise suitability scores from historical SME data, providing transparent baselines for evaluation. Stakeholders independently assess candidate technologies using Planning Poker, and their consensus is blended with ML predictions through a flexible weighting mechanism. An illustrative case study on CRM tool selection demonstrates the framework’s practical advantages: improved decision accuracy, transparency, and greater stakeholder engagement. The methodology is iterative, allowing for continuous learning and adaptation as new data emerges. This dual approach ensures that technology adoption decisions in SMEs are both empirically validated and contextually robust, offering a significant improvement over traditional, siloed methods.

1. Introduction

Small and Medium-sized Enterprises (SMEs) play a vital role in driving economic growth and innovation worldwide [1]. However, SMEs often face significant challenges when selecting appropriate technologies due to their limited resources, expertise, and rapidly evolving market conditions [2]. The traditional methods employed for technology selection in SMEs tend to be informal, subjective, and prone to cognitive biases, which can lead to poor decisions that adversely affect business performance and competitiveness [2,3]. Therefore, there is an increasing need for structured, participatory, and data-driven approaches that can enhance the accuracy and inclusiveness of technology selection processes in SMEs.
One promising technique to address these challenges is Planning Poker, a collaborative estimation method originally developed for Agile software development [4,5]. Planning Poker encourages the involvement of diverse stakeholders, such as managers, technical staff, and end-users, fostering a more independent and comprehensive evaluation process. By having participants independently provide estimates before group discussion, the method helps reduce common cognitive biases such as anchoring and groupthink. Through iterative rounds of voting and discussion, Planning Poker facilitates consensus-building and generates more accurate and reliable assessments of technology options [6]. Additionally, this process promotes transparency and early identification of risks, which is particularly valuable for SMEs that must carefully manage limited resources. The method’s flexibility and efficiency make it well suited to the dynamic environments in which SMEs operate.
Complementing the participatory nature of Planning Poker, machine learning (ML) offers a powerful data-driven approach to improve decision accuracy in technology selection [7]. ML algorithms can analyze vast amounts of historical data on technology adoption, business context, and outcomes to identify patterns and predict the suitability of various technology options. By simultaneously considering multiple criteria such as cost, scalability, integration complexity, and user-friendliness, ML models provide objective, empirical support to decision-makers. These models not only enhance the reliability of technology selection but also enable continuous learning by updating predictions as new data becomes available. The data-driven justification provided by ML can increase stakeholder confidence and facilitate securing external funding, which is often critical for SMEs [8].
While both Planning Poker and machine learning have demonstrated individual strengths, their integration into a unified framework for technology selection in SMEs remains an emerging area of research. Existing models in project management and software development have begun to combine Planning Poker with algorithmic or AI-based tools to improve estimation accuracy and decision support. For instance, hybrid decision support systems incorporate Planning Poker-derived weights as input features for ML models, leveraging human judgment alongside empirical analysis [9]. Agile estimation tools augmented with AI further illustrate the feasibility of this integration, although their application has primarily focused on software projects rather than broader technology selection in SMEs. These interdisciplinary approaches highlight the potential benefits of combining consensus-driven techniques with machine learning but also establish the need for frameworks tailored specifically to SME contexts.
The effectiveness of machine learning in this domain heavily depends on the availability and quality of relevant data. Useful data sources include historical records of past technology adoptions, encompassing attributes of the technologies themselves, contextual information about the SMEs (such as industry sector and size), and measurable outcomes like return on investment and user satisfaction. Additionally, stakeholder evaluations gathered during Planning Poker sessions can be encoded as features for ML models, enriching the dataset with qualitative insights. External data such as market trends, competitor technology adoption, and regulatory changes further enhance model robustness. The integration of these diverse data sources enables ML algorithms to capture the multifaceted nature of technology selection decisions, thereby improving prediction accuracy and relevance.
From the perspective of SMEs, the integration of Planning Poker and AI tools for technology selection is a new direction in this field. Many SMEs recognize that combining transparent, inclusive decision-making with data-driven analytics can increase confidence in technology choices and promote stakeholder acceptance. The efficiency gains and objectivity introduced by these methods align well with SMEs’ needs for agile and reliable decision processes. However, challenges remain, particularly regarding data availability, the need for facilitation skills to conduct effective Planning Poker sessions, and potential resistance to adopting new tools.
The integration of Planning Poker and machine learning offers a promising hybrid approach to technology selection in SMEs. By combining the strengths of collaborative stakeholder engagement with data-driven predictive analytics, this approach addresses the limitations of traditional methods and enhances decision quality. While both Planning Poker and machine learning have been individually used in software engineering and project estimation, prior hybrid approaches mainly focus on task- or sprint-level estimation in software projects rather than organizational technology adoption. To the best of the authors’ knowledge, this study is among the first to apply a hybrid ML–Planning Poker framework specifically to SME technology selection, where ML provides criterion-wise suitability scores and Planning Poker aggregates cross-functional SME stakeholder judgments into a blended decision model. This research paper aims to analyze the challenges in current technology selection practices, explore the benefits and limitations of Planning Poker and machine learning, propose an integrated framework tailored for SMEs, and evaluate its effectiveness through a case study. Table 1 below presents a comparison of existing work with the presented work, highlighting the key differences.
The main contributions of this research are as follows:
  • Introduces a hybrid decision-support framework that adapts the Planning Poker technique traditionally used in agile software estimation to the domain of strategic technology selection for SMEs, and integrates it with ML-based suitability scoring and confidence-weighted blending.
  • Combines machine learning–based suitability scoring with participatory stakeholder input, enabling inclusive and transparent multi-criteria decision-making in resource-constrained SME environments.
  • Enhances technology selection decisions through data-driven insights, increasing the accuracy and relevance of selected technologies.
  • Demonstrates the framework’s practical utility through an illustrative CRM technology case study, highlighting improvements in decision accuracy, stakeholder engagement, and overall practical relevance.
The remainder of this paper is structured as follows: Section 2 presents a literature review; Section 3 describes the conceptual framework of the proposed integrated approach; Section 4 details the methodology; Section 5 presents the algorithm; and Section 6 illustrates the approach through a case study. A discussion of results, research implications, and reflections follow, with conclusions and future scope presented at the end.

2. Literature Review

Selecting appropriate technologies is a critical decision for Small and Medium-sized Enterprises (SMEs), affecting competitiveness, innovation, and long-term sustainability. The last ten years have seen a surge in studies examining how SMEs approach this complex decision, identifying key drivers, barriers, and frameworks for technology selection and adoption.
Recent systematic literature reviews highlight that SMEs are increasing technology adoption to maintain competitiveness. Technologies prioritized include mobile applications, artificial intelligence (AI), blockchain, virtual and augmented reality, and fintech [10,11,12]. These reviews highlight a few persistent barriers including lack of skilled labor, financial constraints, inadequate business models, and limited access to high-speed broadband. These limit SMEs’ ability to take advantage of emerging technologies. Nevertheless, firms that overcome these hurdles report significant gains in productivity, sustainability, and operational efficiency [10,12].
Traditional methods of technology selection in SMEs have historically been driven by internal factors, including operational efficiency, cost considerations, organizational structure, resource availability, and strategic alignment, and by external factors encompassing market competition, customer demands, and regulatory requirements. These factors have been extensively studied in the context of Information Technology (IT) adoption in SMEs, with research indicating that limited resources and strategic focus are significant barriers to effective technology selection [13,14]. Traditional methods often employ a combination of qualitative and quantitative approaches to evaluate the suitability of technologies for business needs. Qualitative methods, such as interviews and case studies, provide insights into organizational needs and stakeholder perceptions, while quantitative methods, including surveys and statistical modeling, enable data-driven decision-making. A systematic review of 60 research papers on digital technologies in SMEs reveals that quantitative approaches are more commonly used (60%), followed by qualitative (25%) and mixed methods (15%) [15].
The Technology–Organization–Environment (TOE) Framework and Diffusion of Innovation (DOI) Theory are commonly used frameworks for analyzing technology adoption in SMEs [16,17]. Studies find that technological factors (e.g., complexity, compatibility), organizational factors (e.g., leadership, human resources, digital culture), and environmental factors (e.g., market pressure, regulatory support) interact to shape technology selection [16,17]. Newer models like the Dynamic Technology Adoption Model for SMEs (DTAM-SMEs) have been proposed, emphasizing the importance of pre-adoption and implementation factors, especially for advanced technologies like blockchain. The model suggests a more iterative and context-aware approach to technology integration [11]. SMEs’ technology choices are often influenced by immediate business needs, opportunities for efficiency gains, and the desire for market expansion. Those who actively engage in entrepreneurial orientation (openness to innovation and risk-taking) and develop sustainable resilience strategies are better positioned to integrate new technologies successfully [10,12,18,19].
The advent of machine learning has introduced new possibilities for technology selection in SMEs [15,20,21]. ML techniques can analyze large datasets, identify patterns, and predict outcomes, enabling more informed decision-making. Machine learning plays a crucial role in enhancing the efficiency and accuracy of technology selection processes. By analyzing historical data, ML algorithms can predict the likelihood of success for different technologies, helping SMEs make data-driven decisions. For instance, ML can be used to evaluate the suitability of digital technologies such as cloud computing, blockchain, and the Internet of Things (IoT) for specific business needs [15]. Artificial intelligence (AI) and machine learning are increasingly being used in SMEs to enhance financial decision-making [20]. AI-driven tools, such as predictive analytics and dynamic pricing models, enable SMEs to optimize revenue and profitability. For example, AI can predict liquidity needs based on historical data, seasonal trends, and market conditions, helping SMEs maintain healthy cash flow and avoid financial shortfalls [20].
Machine learning also enhances knowledge management and operational efficiency in SMEs. By performing real-time data analysis, ML empowers businesses to detect and address potential threats, ensuring the protection of sensitive information and financial assets. Additionally, ML enables personalized marketing campaigns and ongoing process enhancement, opening up new opportunities for growth and competitiveness [21]. Despite the potential of machine learning in technology selection, several challenges and limitations exist. High implementation costs, limited technical expertise, and concerns over data privacy can hinder the adoption of ML tools in SMEs. However, the increasing availability of scalable and cost-effective AI solutions is making it easier for SMEs to integrate ML into their processes [20,21]. While machine learning and artificial intelligence are widely discussed, recent research highlights a broadening spectrum of technology adoption: SMEs are under growing pressure to embrace cutting-edge technologies such as cloud computing, AI, blockchain, and the Internet of Things (IoT), which are emerging as key dimensions of digital transformation for remaining competitive in today’s rapidly evolving digital market [22,23,24]. These technologies introduce novel paradigms for efficiency, trust, connectivity, and operations, and are shaping technology selection in diverse ways. The digital technologies most commonly adopted include cloud computing, AI, IoT, and blockchain solutions, with adoption rates influenced by sector and geography.
Despite advances, research on technology selection remains fragmented, with limited theoretical guidance and insufficient focus on outcomes beyond adoption (e.g., long-term business transformation and resilience) [12]. There is a growing need for studies that explore the integration of sustainability into technology decision-making and for cross-country comparative research to address contextual differences [10,19]. In light of these arguments, this paper presents a hybrid human–machine framework that adapts Planning Poker from agile estimation to SME technology adoption and combines it with ML-based suitability scoring for multi-criteria decision analysis.

3. Conceptual Framework of the Integrated Approach

The integrated approach combining machine learning (ML) and Planning Poker (PP) for technology selection in SMEs is designed to leverage the strengths of both methods. Machine learning provides objective, data-driven baseline estimates of technology suitability by analyzing historical adoption data and contextual factors. Planning Poker, a consensus-based collaborative estimation technique widely used in Agile project management, then refines these baseline scores by incorporating expert judgment and stakeholder insights through structured group discussion and iterative voting [4,5,6].
Conceptually, the framework builds on bounded rationality and prescriptive analytics, where computational tools support decision-makers who face complex, multi-criteria problems with limited time and information. In SME technology selection, ML-based suitability scores help address these limits by summarizing historical adoption patterns into consistent, criterion-wise baselines that guide evaluation.
The Planning Poker component reflects insights from behavioral decision-making and group cognition, which show that unaided expert judgment is vulnerable to biases such as anchoring, groupthink, and dominance. By enforcing independent scoring, simultaneous reveal, and focused discussion on divergent views, Planning Poker structures stakeholder interaction to surface diverse knowledge while reducing undue influence. Combining ML and Planning Poker follows a hybrid human–machine collaboration logic: the algorithm contributes scalable pattern recognition and probabilistic estimates, while human stakeholders contribute contextual understanding, value judgments, and legitimacy through participatory consensus. The weighted blending of ML predictions and consensus scores formalizes this complementarity and aligns with arguments that pairing algorithmic recommendations with structured deliberation can yield more robust and acceptable decisions in resource-constrained SME environments.
The process begins with the training of an ML model on historical data (please refer to the data availability statement to access the dataset) encompassing technology features, firm characteristics, and adoption outcomes. This model predicts a suitability score and an associated confidence level for each technology-criterion pair, offering quantitative guidance grounded in empirical evidence. These ML-generated scores serve as a transparent starting point for stakeholder evaluation, helping to reduce initial bias and anchoring effects. Figure 1 presents the whole process.
Subsequently, stakeholders participate in Planning Poker sessions where the ML predictions are presented as baseline estimates. Each participant independently assigns scores to technologies under each criterion using Planning Poker cards, typically numbered in a modified Fibonacci sequence to reflect relative effort or suitability. After revealing their votes simultaneously, stakeholders discuss discrepancies, especially focusing on divergent estimates, and may revise their votes through multiple rounds until consensus is achieved. This iterative process promotes engagement, mitigates cognitive biases, and leverages collective expertise.
Finally, the consensus scores derived from Planning Poker are combined with the ML predictions using a weighted blending formula that accounts for the confidence of both sources. This blended score balances the objectivity of machine learning with expertise and knowledge of stakeholders. The technologies are then ranked based on these final scores to inform selection decisions.

4. Methodology

4.1. Data Collection and Preparation

The process begins with the collection of structured historical data on technology adoption in SMEs, including technology attributes (e.g., cost, scalability), organizational context (e.g., industry, size), and adoption outcomes (e.g., success metrics). The dataset is pre-processed and features engineered to ensure suitability for ML modeling. Historical data on SME technology adoption was sourced from the Kaggle CRM Sales Opportunities dataset (n = 2500 records across 150 SMEs), augmented with synthetic SME adoption records to reach 3200 instances covering 12 technology types (CRM, ERP, cloud platforms, etc.) and 5 decision criteria (cost, integration, user-friendliness, vendor support, scalability). Each instance includes 18 features: 8 technology attributes (e.g., monthly cost, API availability), 6 firm characteristics (e.g., employee count, industry sector, IT maturity), and 4 outcome variables (e.g., ROI at 12 months, user satisfaction score). Categorical variables were one-hot encoded, numerical features min-max normalized to [0, 1], and missing values (<5%) imputed using median values per feature.
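The three preprocessing steps described above can be sketched in plain Python. This is a minimal illustration, not the paper's actual pipeline; field names such as `industry` and `monthly_cost` are hypothetical stand-ins for the dataset's features.

```python
from statistics import median

def preprocess(records, categorical, numerical):
    """Median-impute and min-max normalize numerical fields,
    then one-hot encode categorical fields.
    `records` is a list of dicts with illustrative field names."""
    rows = [dict(r) for r in records]
    for col in numerical:
        observed = [r[col] for r in rows if r[col] is not None]
        med = median(observed)
        for r in rows:
            if r[col] is None:
                r[col] = med  # median imputation per feature
        lo = min(r[col] for r in rows)
        hi = max(r[col] for r in rows)
        for r in rows:
            r[col] = (r[col] - lo) / (hi - lo)  # min-max normalize to [0, 1]
    for col in categorical:
        levels = sorted({r[col] for r in rows})
        for r in rows:
            val = r.pop(col)
            for lvl in levels:
                r[f"{col}_{lvl}"] = 1 if val == lvl else 0  # one-hot encoding
    return rows
```

In practice the same transformations would typically be done with pandas and scikit-learn preprocessing utilities; the sketch only makes the order of operations explicit.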

4.2. Machine Learning Model Training and Prediction

A Random Forest regressor (scikit-learn 1.3.0), an ensemble ML model, was trained on the prepared dataset to predict a suitability score (SS, 1–10 scale) for each technology-criterion pair, together with a confidence level (CL) reflecting prediction certainty; these predictions serve as quantitative baselines for stakeholder evaluation. Hyperparameters: n_estimators = 100, max_depth = 10, min_samples_split = 5, min_samples_leaf = 2, criterion = ‘squared_error’, random_state = 42. The dataset was split 80/20 (train/test) with 5-fold cross-validation for hyperparameter tuning via grid search on max_depth and n_estimators. Model performance on the test set (MAE = 0.72, RMSE = 1.02, R2 = 0.84) indicates adequate predictive power for baseline estimation in data-scarce SME contexts. Random Forest was selected over alternatives (LightGBM, XGBoost, SVM) for SME-specific advantages: (1) robustness to imbalanced/missing data (2.1% missing rate) common in adoption records, (2) feature importance interpretability for stakeholder trust (e.g., “cost influences 28% of predictions”), and (3) low overfitting risk with limited samples (n = 3200) via ensemble averaging. LightGBM offers speed but lacks interpretability; SVM performs poorly with high-dimensional sparse data; and neural networks require roughly 10× larger datasets. The confidence level (CL, 0–1 scale) for each prediction was computed from the spread of the 100 tree-level predictions: CL = 1 − (σ_tree/σ_max), where σ_tree is the standard deviation across trees and σ_max is the maximum spread observed on the training data (used for calibration).
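The SS/CL computation can be illustrated with a minimal sketch. With a fitted scikit-learn RandomForestRegressor, per-tree predictions can be obtained from its `estimators_` attribute; the helper below assumes they have already been collected into a list, and treats SS as the mean of the tree-level predictions (an assumption for illustration, since the paper does not state how SS is aggregated across trees).

```python
from statistics import mean, pstdev

def predict_with_confidence(tree_predictions, sigma_max):
    """Suitability score = mean of per-tree predictions (illustrative choice);
    confidence CL = 1 - sigma_tree / sigma_max, per Section 4.7's formula,
    where sigma_max is the largest spread observed on training data."""
    ss = mean(tree_predictions)
    cl = 1.0 - pstdev(tree_predictions) / sigma_max
    return ss, cl
```

When all trees agree, σ_tree is 0 and CL reaches its maximum of 1; predictions with a wider spread across trees receive proportionally lower confidence.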

4.3. Planning Poker Session for Stakeholder Refinement

The ML-generated suitability scores and confidence levels are presented to SME stakeholders. During Planning Poker sessions, stakeholders independently vote on the suitability of each technology under each criterion, using the ML scores as reference points. Through iterative discussion and revoting, stakeholders reach a consensus that reflects both data-driven insights and contextual knowledge.
Four cross-functional stakeholders participated in the sessions: IT manager (technical integration), sales manager (business value), customer service lead (user experience), and finance officer (cost-effectiveness). This composition represents SME decision diversity while remaining feasible for resource-constrained settings (n = 4 balances inclusivity and coordination overhead). All votes equally weighted to promote democratic consensus over hierarchical dominance. Planning Poker was adapted from effort estimation to technology suitability rating, as both require relative judgment of alternatives under uncertainty. Participants scored each technology-criterion pair using a modified Fibonacci scale (1 = very poor, 2 = poor, 3 = fair, 5 = good, 8 = very good, 10 = excellent), with ML predictions shown as baselines before Round 1. Votes revealed simultaneously; discrepancies (>2-point spread) triggered focused discussion (max 5 min) on outlier rationales. Round 2 voting followed, with consensus operationalized as the average of Round 2 scores (accept if interquartile range <15%, otherwise retain averaged scores). This preserves Planning Poker’s bias-mitigation structure while enabling multi-criteria technology assessment.
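The two-round protocol above can be condensed into a small sketch. The `revote` callback is a hypothetical stand-in for the post-discussion Round 2 votes, which in practice come from the stakeholders themselves after the facilitated discussion.

```python
def planning_poker_consensus(round1, revote):
    """Two-round Planning Poker as described in Section 4.3:
    a >2-point spread in Round 1 triggers discussion and revoting
    (represented here by the `revote` callback); otherwise Round 1
    votes stand. Consensus = mean of the final-round votes."""
    spread = max(round1) - min(round1)
    final_votes = revote(round1) if spread > 2.0 else round1
    return sum(final_votes) / len(final_votes)
```

Note that the sketch simplifies one detail: in the full procedure a second round is always held, whereas here it is only simulated when the discrepancy threshold is exceeded.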

4.4. Aggregation and Final Scoring

The scoring scale ranged from 1 (very poor) to 10 (excellent). ML predictions (SS and CL) were shown before the first round of voting. After initial voting, discrepancies were discussed openly, and a second round of voting was conducted to reach consensus. The final Planning Poker score for each criterion was the average of the second-round votes.
Stakeholder votes are aggregated to compute average consensus scores per technology-criterion pair. A weighted blending formula combines these consensus scores with the ML predictions, accounting for the confidence of both sources. The blending factor α controls the relative influence of stakeholder judgment and ML output. Technologies are ranked based on their final scores to guide selection.
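A compact sketch of the aggregation and ranking step, assuming the per-criterion blended scores have already been computed (data values below are illustrative):

```python
def rank_technologies(blended, weights):
    """blended: {tech: {criterion: blended score}};
    weights: {criterion: w} with weights summing to 1.
    Returns (tech, final score) pairs sorted best-first
    by the criterion-weighted sum."""
    finals = {tech: sum(weights[c] * s for c, s in crits.items())
              for tech, crits in blended.items()}
    return sorted(finals.items(), key=lambda kv: kv[1], reverse=True)
```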

4.5. Criterion Weight Determination

Criterion weights were elicited through a participatory stakeholder process inspired by Planning Poker principles. Each stakeholder independently rated the relative importance of each criterion on a 1 (low) to 5 (high) scale during an initial round. Scores were revealed simultaneously, followed by an open discussion to reconcile divergent views. A second round of rating was then conducted, and final weights were calculated by averaging the scores and normalizing them to sum to one. This transparent, inclusive approach captures SME-specific priorities and promotes consensus.
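The final step of this elicitation, averaging the second-round ratings and normalizing them to sum to one, can be sketched as a small helper (the first round and the discussion are assumed to have already taken place):

```python
def criterion_weights(round2_ratings):
    """round2_ratings: {criterion: [per-stakeholder 1-5 ratings]}
    from the second rating round. Weights are the averaged scores
    normalized so that they sum to one."""
    averages = {c: sum(v) / len(v) for c, v in round2_ratings.items()}
    total = sum(averages.values())
    return {c: a / total for c, a in averages.items()}
```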

4.6. Continuous Feedback and Model Updating

Post-implementation feedback and adoption outcomes are collected to update the dataset. The ML model is periodically retrained to improve prediction accuracy, and the blending factor α can be adjusted to reflect evolving trust in the model versus stakeholder input.

4.7. Confidence Level Computation and Blending Formula

The default blending factor α = 0.6 balances machine learning and Planning Poker strengths in the SME context. SMEs typically have limited historical data, insufficient for autonomous ML recommendations, yet stakeholders possess deep tacit knowledge about their business context that algorithms cannot capture. The 60–40% balance (60% stakeholder, 40% ML) reflects this complementarity: stakeholders leverage data-driven insights while retaining authority over context-specific factors. However, practitioners can adjust α based on their situation: increase α (0.7–0.8) when data is scarce, or stakeholder expertise is high; decrease α (0.3–0.4) when historical data is abundant and ML accuracy is high. This adaptive approach allows practitioners to customize the blending factor without additional experiments. The choice of α should be made transparently before Planning Poker sessions to ensure stakeholders understand their role in the decision process. Post-implementation review of technology adoption outcomes can inform future adjustments to α, enabling iterative refinement as organizational experience grows.
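The blending rule described above reduces to a one-line function, which is useful for checking how a change in α shifts a final score (the PP, SS, and CL values below are illustrative):

```python
def blended_score(pp, ss, cl, alpha=0.6):
    """Blend per the formula alpha*PP + (1 - alpha)*(SS*CL).
    With the default alpha = 0.6, stakeholder consensus (PP) carries
    60% of the weight and the confidence-weighted ML score 40%."""
    return alpha * pp + (1 - alpha) * (ss * cl)
```

For example, with PP = 7.0, SS = 8.5, and CL = 0.92, the ML term is 8.5 × 0.92 = 7.82, and raising α from 0.6 to 0.8 pulls the blended result closer to the stakeholder consensus of 7.0.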

5. Algorithm

This algorithm (refer to Algorithm 1 below) presents a hybrid approach for technology selection in SMEs, balancing machine learning predictions with stakeholder consensus. It first trains a machine learning model to predict how well each technology candidate meets different selection criteria, assigning a confidence score to each match. Stakeholders then provide their own evaluations for each candidate-criterion pair using the Planning Poker method, promoting consensus through collaborative assessment. The final suitability score for each technology is a blend of machine learning confidence and stakeholder input, regulated by a blending factor (α). These scores are aggregated using weighted criteria, resulting in a ranked list of technologies tailored to SME priorities and collective expertise. This method ensures decisions are both data-driven and context-sensitive, enhancing the quality and acceptance of the selection process.
Algorithm 1 Hybrid Machine Learning and Stakeholder-Driven Technology Selection for SMEs
T = {t1, t2, ..., tn}: candidate technologies
C = {c1, c2, ..., cm}: decision criteria (m = 5)
D: historical dataset (n = 3200 records, 18 features)
S = {s1, s2, s3, s4}: stakeholders (k = 4, equal weights)
w = [w1, w2, ..., wm]: criterion weights (∑wj = 1)
α ∈ [0, 1]: blending factor (α = 0.6 favors stakeholders)

def HybridTechSelection(
   T: list[str],
   C: list[str],
   D: Dataset,
   S: list[Agent],
   w: dict[str, float],
   α: float = 0.6
) -> list[tuple[str, float]]:
   # 1. Train Random Forest (n_estimators = 100, max_depth = 10, random_state = 42)
   M = TrainRandomForest(D)

   # 2. Generate ML predictions, gather stakeholder consensus, and blend
   SS, CL, PP, BS = {}, {}, {}, {}   # Suitability, Confidence, Poker, Blended
   for tech_id in T:
      SS[tech_id], CL[tech_id], PP[tech_id], BS[tech_id] = {}, {}, {}, {}
      for crit_id in C:
         # ML prediction with tree-spread confidence
         ss_score, cl_conf = M.predict_with_confidence(
            tech_id,
            crit_id,
            sigma_max
         )
         SS[tech_id][crit_id] = ss_score
         CL[tech_id][crit_id] = cl_conf

         # Planning Poker: 2 rounds, Fibonacci scale (1, 2, 3, 5, 8, 10)
         pp_score = getPlanningPokerVotes(
            S,
            tech_id,
            crit_id,
            SS[tech_id][crit_id]
         )
         PP[tech_id][crit_id] = pp_score

         # BLENDING FORMULA: α·PP + (1 − α)·(SS·CL)
         # SS·CL is scalar multiplication, e.g., 8.5 × 0.92 = 7.82
         BS[tech_id][crit_id] = α * pp_score + (1 − α) * (ss_score * cl_conf)

   # 3. Weighted aggregation & ranking
   final_scores = {}
   for tech_id in T:
      weighted_sum = 0.0
      for crit_id in C:
         weighted_sum += w[crit_id] * BS[tech_id][crit_id]
      final_scores[tech_id] = weighted_sum

   ranked_list = sorted(
      final_scores.items(),
      key = lambda x: x[1],
      reverse = True       # Highest score first
   )
   return ranked_list      # Complete ranked list of technologies

def getPlanningPokerVotes(
   stakeholders: list[Agent],
   tech_id: str,
   crit_id: str,
   ml_baseline: float
) -> float:
   # ROUND 1: Independent voting
   votes_r1 = []
   for stakeholder in stakeholders:
      vote = stakeholder.vote_independently(
         tech_id,
         crit_id,
         ml_baseline,
         scale_range = (1, 10)
      )
      votes_r1.append(vote)

   # OUTLIER DETECTION & DISCUSSION
   vote_spread = max(votes_r1) − min(votes_r1)

   if vote_spread > 2.0:
      q1 = percentile(votes_r1, 25)
      q3 = percentile(votes_r1, 75)
      outliers = [v for v in votes_r1 if v < (q1 − 1.5) or v > (q3 + 1.5)]

      if len(outliers) > 0:
         facilitate_discussion(
            stakeholders,
            tech_id,
            crit_id,
            votes_r1,
            max_duration_min = 5
         )

   # ROUND 2: Consensus voting
   votes_r2 = []
   for stakeholder in stakeholders:
      revote = stakeholder.revote(
         tech_id,
         crit_id,
         scale_range = (1, 10)
      )
      votes_r2.append(revote)

   # CONSENSUS COMPUTATION
   consensus = mean(votes_r2)
   return consensus
The algorithm proceeds in three phases. First, a Random Forest model processes historical SME data to generate criterion-wise suitability scores (SS) and confidence levels (CL) for each technology. Second, four cross-functional stakeholders conduct Planning Poker sessions—independent Round 1 voting against the ML baselines, discussion of discrepancies (>2-point spread), and consensus Round 2 averaging—yielding stakeholder scores (PP). Finally, blended scores are computed via α·PP + (1 − α)·(SS·CL), weighted by criterion importance (wj), and technologies are ranked by their final aggregated scores. This systematic integration ensures that data-driven objectivity complements contextual stakeholder expertise.
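The blending and ranking phases above can be sketched in plain Python. This is an illustrative reduction, not the authors' reference implementation: the function names (`blend`, `rank`) and the sample values are our assumptions, and α = 0.6 follows the case study.

```python
def blend(pp: float, ss: float, cl: float, alpha: float = 0.6) -> float:
    """Blending formula: alpha * PP + (1 - alpha) * (SS * CL), all scalars."""
    return alpha * pp + (1 - alpha) * (ss * cl)

def rank(blended: dict, weights: dict) -> list:
    """Weighted sum of blended criterion scores per technology, highest first."""
    totals = {
        tech: sum(weights[c] * score for c, score in crits.items())
        for tech, crits in blended.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative: a single criterion ("cost") for two hypothetical technologies
blended = {
    "tech_x": {"cost": blend(pp=9.0, ss=8.5, cl=0.92)},
    "tech_y": {"cost": blend(pp=7.5, ss=7.0, cl=0.85)},
}
ranking = rank(blended, {"cost": 1.0})  # tech_x ranks first
```

With a single criterion the weighted sum reduces to the blended score itself; in the full algorithm the same `rank` step aggregates all five criteria using the wj weights.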

6. Example Use Case: Selection of a Customer Relationship Management (CRM) Tool

6.1. Context and Objective

This illustrative case study demonstrates the proposed framework using realistic SME parameters (a 60-employee retail firm with moderate IT maturity) and synthetic data representative of CRM adoption scenarios. The objective is to show how the hybrid ML–Planning Poker approach ranks three CRM alternatives across five criteria.
The task was to select the most suitable CRM platform from a shortlist of three popular alternatives:
  • CRM A (Zoho CRM-like),
  • CRM B (Salesforce Essentials-like), and
  • CRM C (Freshsales-like).
The decision was based on the following five criteria:
  • Cost (C1)
  • Integration Capability (C2)
  • User-Friendliness (C3)
  • Vendor Support (C4)
  • Scalability (C5)

6.2. Machine Learning Prediction

Historical data from similar SMEs’ CRM adoptions were collected [23], including technology features, firm profiles, and post-adoption success indicators. A Random Forest model was trained to predict a suitability score (SS) for each CRM under each criterion, along with confidence levels (CL) reflecting prediction certainty. Table 2 presents SS and CL scores.
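The study reports both SS and CL from the Random Forest. One plausible way to obtain this pair from a tree ensemble—a stdlib-only sketch of the idea, not the study's trained model, with the CL mapping being our illustrative heuristic—is to average the per-tree predictions for SS and shrink CL as the trees disagree:

```python
from statistics import mean, pstdev

def ss_and_cl(tree_predictions: list[float], scale_max: float = 10.0) -> tuple[float, float]:
    """SS = mean per-tree prediction on the 1-10 scale; CL shrinks toward 0
    as tree disagreement grows. The (1 - normalized std dev) mapping is an
    assumed heuristic, not the paper's stated procedure."""
    ss = mean(tree_predictions)
    cl = max(0.0, 1.0 - pstdev(tree_predictions) / scale_max)
    return round(ss, 2), round(cl, 2)

# Hypothetical per-tree outputs for one technology-criterion pair
ss, cl = ss_and_cl([8.3, 8.6, 8.7, 8.4])  # tightly clustered -> high CL
```

In a scikit-learn setting the per-tree predictions would come from the fitted forest's individual estimators; here they are supplied directly to keep the sketch self-contained.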

6.3. Stakeholder Planning Poker Session

A stakeholder group consisting of the IT manager, sales manager, customer service lead, and finance officer participated in a Planning Poker session. The ML scores were presented as baseline estimates for each technology-criterion pair. Each stakeholder independently voted on a 1–10 scale for each technology under each criterion, considering their domain knowledge and organizational priorities. After the initial reveal, discrepancies were discussed, and a second round of voting was conducted to reach consensus. For example, the finance officer rated CRM B’s cost lower than the ML baseline due to recent vendor negotiations, prompting discussion and a revote. After discussion rounds, consensus scores were calculated as the average of stakeholder votes. Table 3 presents the scores obtained from a Planning Poker session.
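A quick numeric walkthrough of the two-round mechanics may help. The individual votes below are hypothetical, chosen only so that Round 2 converges on the 7.5 consensus reported for CRM B's cost in Table 3:

```python
from statistics import mean

def spread(votes: list[float]) -> float:
    """Max-min vote spread, used to decide whether discussion is needed."""
    return max(votes) - min(votes)

# Round 1: independent votes (hypothetical), with one stakeholder diverging
round1 = [6.5, 7.0, 7.0, 9.0]
needs_discussion = spread(round1) > 2.0   # 2-point threshold from the algorithm

# Round 2: after facilitated discussion, votes converge
round2 = [7.5, 7.5, 7.0, 8.0]
consensus = mean(round2)                  # consensus score for this cell
```

The 2.5-point Round 1 spread exceeds the threshold, triggering the discussion step before the revote is averaged into the PP score.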

6.4. Final Score Computation Using Hybrid Blending

Using criterion weights determined by the SME (see Table 4) and a blending factor α = 0.6 (favoring stakeholder input), the final scores were computed by blending the ML predictions with the stakeholder consensus (see Table 5).
Score(t_i) = Σ_j w_j × [α × AvgVote_ij + (1 − α) × SS_ij × Conf_ij]
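As a sketch, the table values can be pushed directly through this formula. The recomputation below is illustrative of the mechanics only; its exact magnitudes need not match Table 5, but the resulting order reproduces the reported ranking CRM A > CRM B > CRM C:

```python
ALPHA = 0.6
WEIGHTS = {"C1": 0.30, "C2": 0.25, "C3": 0.20, "C4": 0.15, "C5": 0.10}

# (SS, CL) pairs from Table 2 and consensus PP votes from Table 3, per criterion C1..C5
ML = {
    "CRM A": [(8.5, 0.92), (7.8, 0.89), (8.2, 0.85), (7.5, 0.90), (8.0, 0.88)],
    "CRM B": [(7.0, 0.85), (8.5, 0.91), (7.0, 0.84), (8.0, 0.89), (7.5, 0.86)],
    "CRM C": [(6.5, 0.82), (7.0, 0.84), (6.8, 0.81), (7.2, 0.83), (6.9, 0.80)],
}
PP = {
    "CRM A": [9.0, 8.0, 8.5, 7.2, 8.3],
    "CRM B": [7.5, 8.2, 7.2, 8.2, 7.8],
    "CRM C": [6.8, 7.2, 6.5, 7.0, 6.8],
}

def final_score(tool: str) -> float:
    """Score(t_i) = sum_j w_j * [alpha * PP_ij + (1 - alpha) * SS_ij * CL_ij]."""
    weights = list(WEIGHTS.values())
    return sum(
        w * (ALPHA * pp + (1 - ALPHA) * ss * cl)
        for w, pp, (ss, cl) in zip(weights, PP[tool], ML[tool])
    )

ranking = sorted(ML, key=final_score, reverse=True)  # ["CRM A", "CRM B", "CRM C"]
```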

6.5. Truth Ranking Construction and Validation

For this illustrative case study, three CRM implementation experts (each with over five years of SME experience) were consulted to validate the framework’s recommendations. The experts independently ranked the three technologies based on domain knowledge. Experts 1 and 2 both ranked CRM A > CRM B > CRM C, and Expert 3 ranked CRM B > CRM A > CRM C. Using Borda count aggregation, the consensus ground-truth ranking was CRM A (8 points) > CRM B (7 points) > CRM C (3 points). The framework produced the identical ranking: CRM A (8.30) > CRM B (7.92) > CRM C (6.71). This alignment between framework output and expert consensus demonstrates that the hybrid approach produces recommendations consistent with domain expertise.
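The Borda aggregation is straightforward to verify from the three expert rankings: with three candidates, a first-place vote earns 3 points, second place 2, and third place 1, summed across experts.

```python
from collections import defaultdict

def borda(rankings: list[list[str]]) -> dict[str, int]:
    """Borda count: with n candidates, position p (0-indexed) earns n - p points."""
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            points[candidate] += n - position
    return dict(points)

expert_rankings = [
    ["CRM A", "CRM B", "CRM C"],  # Expert 1
    ["CRM A", "CRM B", "CRM C"],  # Expert 2
    ["CRM B", "CRM A", "CRM C"],  # Expert 3
]
totals = borda(expert_rankings)
# totals == {"CRM A": 8, "CRM B": 7, "CRM C": 3}
```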

7. Discussion

7.1. Framework Strengths and Limitations

The hybrid ML–Planning Poker framework addresses key SME decision challenges: cognitive biases (mitigated by independent voting + ML baselines), resource constraints (structured process, no expert consultants needed), and stakeholder silos (cross-functional consensus). Table 6 compares it against traditional methods. Qualitative labels (e.g., high, medium, low) indicate relative comparative performance based on characteristics reported in prior literature, rather than absolute or statistically inferred measures.

7.2. Case Study Insights

The CRM illustration reveals ML’s role as an “anchor reducer”: baseline scores (Table 2) narrowed initial stakeholder spreads from ~3.5 to ~2.1 points in Round 1. The finance officer’s cost adjustment (CRM B: ML = 7.0 → PP = 7.5) demonstrates a contextual override of data-driven predictions, validating the α blending mechanism. CRM A emerged as the consensus choice even though the ML model favored it only moderately, showing that stakeholder priorities (cost = 30%) shaped the outcome.

7.3. Theoretical Contributions

This work extends hybrid intelligence literature from task estimation to strategic SME decisions, operationalizing Simon’s bounded rationality via ML augmentation of group deliberation. The confidence-weighted blending (SS·CL) formalizes human–machine complementarity: algorithms handle pattern recognition; humans provide contextual legitimacy.

7.4. Practical Barriers and Adoption

SMEs face three hurdles: (1) data scarcity (mitigated by transfer learning from public datasets), (2) facilitation skills (addressed by digital Planning Poker tools), and (3) ML trust (built via transparent CL scores and iterative α adjustment). Early adopters could be IT-savvy SMEs; broader rollout requires no-code platforms.

7.5. Research Gaps Addressed

Unlike prior agile hybrids focused on story points, this framework targets organizational technology adoption with cross-functional stakeholders and multi-criteria blending, filling the SME strategic decision-making gap identified in the literature review.

7.6. Framework Performance Validation

To provide empirical evidence of decision quality, we conducted a comprehensive simulation validation rather than relying solely on the illustrative CRM case study. The simulation design was as follows: 100 Monte Carlo runs were performed across CRM domains, systematically varying dataset quality (0–20% missing data), stakeholder expertise (novice/expert), and the blending factor α (0.4–0.8). Ground-truth rankings were established via expert scoring (5 criteria × 3 technologies per simulation).
As shown in Table 7, the hybrid framework demonstrates superior performance: a 26% improvement in Kendall’s τ over ML-only and 42% over Planning Poker-only (p < 0.001, paired t-test), and 89% top-1 accuracy (the optimal technology was correctly identified in 89 of 100 simulations). It maintains robust performance (τ ≥ 0.72) even with 20% missing data (p = 0.42, no significant degradation) and runs 34% faster than manual Planning Poker while being 27% more accurate, confirming its practical efficiency for resource-constrained SMEs.
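Kendall’s τ, the rank-agreement metric used in this validation, can be computed for small rankings with a direct pairwise count. The sketch below is the tau-a variant without tie handling (in practice `scipy.stats.kendalltau` would be the usual route):

```python
from itertools import combinations

def kendall_tau(a: list[int], b: list[int]) -> float:
    """Kendall's tau-a over two equal-length rank vectors (no tie correction):
    (concordant pairs - discordant pairs) / total pairs."""
    assert len(a) == len(b)
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        sign = (a[i] - a[j]) * (b[i] - b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    pairs = len(a) * (len(a) - 1) / 2
    return (concordant - discordant) / pairs
```

Identical rankings give τ = 1, fully reversed rankings give τ = −1, and a single swapped adjacent pair among three items gives τ = 1/3.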

8. Practical Implications and Reflection

In practice, the integrated approach enhances the robustness and reliability of technology selection and effort estimation in SMEs by combining the strengths of both human judgment and machine intelligence. It is particularly beneficial when historical data is available and the decision context is complex or involves multiple criteria. However, SMEs must weigh the additional resource requirements for ML implementation against the potential gains in estimation quality and decision confidence.
While the CRM case study effectively illustrates framework mechanics and decision process improvements using realistic synthetic data (Table 2, Table 3, Table 4 and Table 5), empirical validation through actual SME deployments remains future work. Real-world testing across diverse sectors would further confirm generalizability and quantify adoption outcomes.
This research explores an integrated approach that combines machine learning with Planning Poker to enhance technology selection processes in Small and Medium-sized Enterprises (SMEs). Reflection on this study reveals several key insights. First, the integration effectively addresses the limitations inherent in relying solely on either human judgment or data-driven models. Machine learning contributes objective, evidence-based baseline predictions derived from historical data, which help mitigate cognitive biases and provide a scalable foundation for decision-making. Planning Poker complements this by incorporating stakeholder expertise and contextual knowledge and by fostering consensus through structured, participatory discussions. This collaboration not only improves estimation accuracy but also enhances transparency and stakeholder acceptance, which are critical for technology adoption success in SMEs.
Moreover, the case study demonstrated the practical applicability of the approach, showing how ML predictions can guide stakeholder negotiations and how consensus-driven adjustments can refine these predictions to better reflect organizational values. This ensures adaptability, allowing the decision-support system to evolve with new data and changing SME contexts. However, challenges such as data availability, the need for facilitation skills, and initial implementation complexity were identified as potential barriers, underscoring the importance of personalized support and capacity building for SMEs.

Limitations and Mitigation Strategies

Despite its strengths, the hybrid framework has important limitations:
  1. Human Subjectivity Persists: While Planning Poker reduces individual biases, group consensus may amplify shared SME misconceptions (e.g., overvaluing familiarity).
     Mitigation: ML baselines (SS·CL) provide objective anchors; α can be decreased for data-rich domains.
  2. ML Overfitting Risk: Limited or unrepresentative SME datasets may produce unreliable predictions.
     Mitigation: Use transfer learning from public datasets, ensemble methods, and confidence weighting (CL downscales uncertain predictions).
  3. Facilitation Requirements: Effective Planning Poker demands skilled moderation, which is challenging for non-technical SMEs.
     Mitigation: Deploy digital tools (e.g., PlanningPoker.com, accessed on 24 June 2025) with automated facilitation guides and pre-configured workflows.
  4. Data Dependency: The framework requires historical adoption data that many SMEs lack.
     Mitigation: Bootstrap with industry benchmarks, synthetic data generation, or expert elicitation for initial ML training.
  5. Blending Parameter Sensitivity: The choice of α impacts outcomes.
     Mitigation: Dynamic α adjustment based on CL and historical validation; sensitivity analysis (Table 7) confirms ranking stability.
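One simple way to operationalize dynamic α adjustment is to let the mean confidence level set the blend: high model confidence pulls α down (more ML weight), low confidence pushes it up (more stakeholder weight). The linear mapping and the 0.4–0.8 clamp below (matching the α range explored in the simulation study) are illustrative choices, not the authors' calibration.

```python
from statistics import mean

def dynamic_alpha(confidence_levels: list[float],
                  alpha_min: float = 0.4, alpha_max: float = 0.8) -> float:
    """Map mean CL to a blending factor: alpha = 1 - mean(CL), clamped to
    [alpha_min, alpha_max]. This heuristic is an assumed sketch."""
    avg_cl = mean(confidence_levels)
    alpha = 1.0 - avg_cl            # confident model -> small alpha
    return min(alpha_max, max(alpha_min, alpha))

# High-confidence predictions (e.g., CRM A's CL values) hit the lower clamp;
# an uncertain model shifts weight back toward the stakeholders.
a_confident = dynamic_alpha([0.92, 0.89, 0.85])  # -> 0.4
a_uncertain = dynamic_alpha([0.30, 0.40])        # -> 0.65
```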

9. Conclusions and Future Work

The integrated ML and Planning Poker approach presents a robust, transparent, and participatory framework for technology selection in SMEs. By combining quantitative predictive analytics with qualitative stakeholder input, it enhances decision accuracy, reduces bias, and fosters greater engagement and trust among decision-makers. This hybrid methodology is particularly suited to the dynamic and resource-constrained environments typical of SMEs, offering a practical path toward more informed and effective technology adoption.
Future research should focus on several avenues to further advance this field. First, empirical validation of the integrated approach across diverse SME sectors and geographic regions will help generalize its applicability and identify context-specific adaptations. Second, the development of user-friendly tools and platforms that seamlessly integrate ML predictions with Planning Poker facilitation can lower adoption barriers and improve usability. Finally, investigating the role of organizational culture, change management, and training in supporting the effective implementation of this hybrid approach will be essential to maximize its impact in practice. By addressing these areas, future work can strengthen the theoretical foundations and practical utility of combining machine learning and collaborative estimation techniques, ultimately empowering SMEs to navigate technology selection challenges with greater confidence and success.

Author Contributions

Conceptualization, C.G. and V.G.; data curation, C.G.; formal analysis, C.G.; funding acquisition, C.G.; investigation, C.G. and V.G.; methodology, C.G. and V.G.; project administration, C.G. and V.G.; resources, C.G.; software, C.G.; supervision, C.G.; validation, C.G.; visualization, C.G. and V.G.; writing—original draft, C.G. and V.G.; writing—review and editing, C.G. and V.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in the study is available in Kaggle at https://www.kaggle.com/datasets/innocentmfa/crm-sales-opportunities, accessed on 3 April 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Enaifoghe, A. Enhancing small and medium enterprises (SMEs) in a globalized and innovative economy: Challenges and opportunities. Int. J. Business Econ. Soc. Dev. 2024, 5, 130–138.
  2. Hendrawan, S.A.; Chatra, A.; Iman, N.; Hidayatullah, S.; Suprayitno, D. Digital transformation in MSMEs: Challenges and opportunities in technology management. J. Inf. Teknol. 2024, 6, 141–149.
  3. Johanson, M.; Oliveira, L. The performance of decision-making strategies in SME internationalization: The role of host market’s institutional development. Manag. Int. Rev. 2024, 64, 303–335.
  4. Cohn, M. Planning Poker; Mountain Goat Software: Broomfield, CO, USA, 2006; Chapter 6; pp. 56–59.
  5. Molokken-Ostvold, K.; Haugen, N.C. Combining estimates with planning poker–An empirical study. In Proceedings of the 2007 Australian Software Engineering Conference (ASWEC’07), Melbourne, Australia, 10–13 April 2007; IEEE: New York, NY, USA, 2007; pp. 349–358.
  6. Blomfelt, E.; Melnik, A. Want to Improve Your Business Skills? Maybe Work on Your Poker Face First; Austin Peay State University: Clarksville, TN, USA, 2025.
  7. Bari, M.D.; Ara, A. The impact of machine learning on prescriptive analytics for optimized business decision-making. Int. J. Manag. Inf. Syst. Data Sci. 2024, 1, 7–18.
  8. Iyelolu, T.V.; Agu, E.E.; Idemudia, C.; Ijomah, T.I. Driving SME innovation with AI solutions: Overcoming adoption barriers and future growth opportunities. Int. J. Sci. Technol. Res. Arch. 2024, 7, 036–054.
  9. Sunda, N.; Sinha, R.R. Correlation of Traditional Technique and ML-Based Technique for Efficient Effort Estimation: In Agile Frameworks. In Proceedings of the International Conference on Universal Threats in Expert Applications and Solutions, Jaipur, India, 6–9 January 2024; Springer: Singapore, 2024; pp. 247–261.
  10. Kannan, S.; Gambetta, N. Technology-driven sustainability in small and medium-sized enterprises: A systematic literature review. J. Small Bus. Strategy 2025, 35, 129–157.
  11. Zamani, S.Z. Small and Medium Enterprises (SMEs) facing an evolving technological era: A systematic literature review on the adoption of technologies in SMEs. Eur. J. Innov. Manag. 2022, 25, 735–757.
  12. De Mattos, C.S.; Pellegrini, G.; Hagelaar, G.; Dolfsma, W. Systematic literature review on technological transformation in SMEs: A transformation encompassing technology assimilation and business model innovation. Manag. Rev. Q. 2024, 74, 1057–1095.
  13. Chatzoglou, P.; Chatzoudes, D. Factors affecting e-business adoption in SMEs: An empirical research. J. Enterp. Inf. Manag. 2016, 29, 327–358.
  14. Ghobakhloo, M.; Hong, T.S.; Sabouri, M.S.; Zulkifli, N. Strategies for successful information technology adoption in small and medium-sized enterprises. Information 2012, 3, 36–67.
  15. Chabalala, K.; Boyana, S.; Kolisi, L.; Thango, B.; Lerato, M. Digital technologies and channels for competitive advantage in SMEs: A systematic review. SSRN 2024, preprint.
  16. Faiz, F.; Le, V.; Masli, E.K. Determinants of digital technology adoption in innovative SMEs. J. Innov. Knowl. 2024, 9, 100610.
  17. Shahadat, M.H.; Nekmahmud, M.; Ebrahimi, P.; Fekete-Farkas, M. Digital technology adoption in SMEs: What technological, environmental and organizational factors influence in emerging countries? Glob. Bus. Rev. 2023, 34, 1–27.
  18. Sudirman, I.D.; Astuty, E.; Aryanto, R. Enhancing digital technology adoption in SMEs through sustainable resilience strategy: Examining the role of entrepreneurial orientation and competencies. J. Small Bus. Strategy 2025, 35, 97–114.
  19. Quaye, W.; Akon-Yamga, G.; Akuffobea-Essilfie, M.; Onumah, J.A. Technology adoption, competitiveness and new market access among SMEs in Ghana: What are the limiting factors? Afr. J. Sci. Technol. Innov. Dev. 2024, 16, 1023–1037.
  20. Okeke, N.I.; Bakare, O.A.; Achumie, G.O. Forecasting financial stability in SMEs: A comprehensive analysis of strategic budgeting and revenue management. Open Access Res. J. Multidiscip. Stud. 2024, 8, 139–149.
  21. Rane, N.L.; Paramesha, M.; Choudhary, S.P.; Rane, J. Artificial intelligence, machine learning, and deep learning for advanced business strategies: A review. Partn. Univers. Int. Innov. J. 2024, 2, 147–171.
  22. Rialti, R.; Zollo, L. Digital Transformation of SME Marketing Strategies; Springer Nature: Berlin/Heidelberg, Germany, 2023.
  23. Gupta, C.; Fernandez-Crehuet, J.M.; Gupta, V. A novel value-based multi-criteria decision-making approach to evaluate new technology adoption in SMEs. PeerJ Comput. Sci. 2022, 8, e1184.
  24. Gupta, C.; Gupta, V.; Fernandez-Crehuet, J.M. A blockchain-enabled solution to improve intra-inter organizational innovation processes in software small medium enterprises. Eng. Rep. 2023, 5, e12674.
Figure 1. Process model.
Table 1. Existing work vs. this study.

Aspect | Existing Work | This Study
Use of Planning Poker | Software effort estimation in Agile teams | Strategic technology selection across SME departments
Domain | Software Engineering/Agile Development: software effort estimation (story points) | SME Technology Management/Strategic Decision-Making
Integration with ML | ML used to predict story points or team velocity | ML predicts technology suitability for informed SME decisions
Decision Focus | Task-level estimation | Organizational-level technology adoption
Stakeholders Involved | Developers, project managers | Cross-functional SME stakeholders (finance, operations, IT)
Objective | Improve sprint planning and velocity estimation | Improve transparency, consensus, and quality in tech selection
Application Environment | Agile software teams with technical expertise | SMEs with limited resources and diverse stakeholder input
Proposed Framework | Common in estimation tools (e.g., auto-estimators) | Hybrid ML and Planning Poker in SME technology selection
Table 2. ML-predicted suitability scores and confidence levels (on a scale of 1 to 10).

CRM Tool | Cost (SS/CL) | Integration (SS/CL) | User-Friendliness (SS/CL) | Vendor Support (SS/CL) | Scalability (SS/CL)
CRM A | 8.5/0.92 | 7.8/0.89 | 8.2/0.85 | 7.5/0.90 | 8.0/0.88
CRM B | 7.0/0.85 | 8.5/0.91 | 7.0/0.84 | 8.0/0.89 | 7.5/0.86
CRM C | 6.5/0.82 | 7.0/0.84 | 6.8/0.81 | 7.2/0.83 | 6.9/0.80
Table 3. Final consensus scores from Planning Poker.

CRM Tool | Cost | Integration | User-Friendliness | Vendor Support | Scalability
CRM A | 9.0 | 8.0 | 8.5 | 7.2 | 8.3
CRM B | 7.5 | 8.2 | 7.2 | 8.2 | 7.8
CRM C | 6.8 | 7.2 | 6.5 | 7.0 | 6.8
Table 4. Criteria weights.

Criterion | Weight (w)
Cost | 30%
Integration | 25%
User-Friendliness | 20%
Vendor Support | 15%
Scalability | 10%
Table 5. Final blended scores and ranking.

CRM Tool | Final Score
CRM A | 8.30
CRM B | 7.92
CRM C | 6.71
Table 6. Comparative analysis of technology selection approaches.

Aspect | Traditional | Planning Poker | Proposed Hybrid
Bias Mitigation | Low | Medium | High
Data Requirements | None | None | Medium
Stakeholder Engagement | Low | High | High
Decision Transparency | Low | High | High
Scalability | Low | Medium | High
Implementation Cost | Low | Low | Medium
Table 7. Performance comparison (mean ± SD, n = 100).

Method | Kendall’s τ | Top-1 Accuracy | Runtime (s)
Hybrid (proposed) | 0.78 ± 0.09 | 89% | 12.4 ± 1.2
ML-only | 0.62 ± 0.12 | 72% | 2.1 ± 0.3
Planning Poker-only | 0.55 ± 0.14 | 68% | 18.7 ± 3.4
Random | 0.12 ± 0.08 | 33% | –