Article

How Doctors’ Proactive Crafting Behaviors Influence Performance Outcomes: Evidence from an Online Healthcare Platform

by Wenlong Liu *, Yashuo Yuan, Zifan Bai and Shenghui Sang
College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 226; https://doi.org/10.3390/jtaer20030226
Submission received: 27 June 2025 / Revised: 15 August 2025 / Accepted: 19 August 2025 / Published: 1 September 2025
(This article belongs to the Topic Data Science and Intelligent Management)

Abstract

With the steady global progress in integrating technology into healthcare delivery, doctors’ behavioral patterns on online healthcare platforms have increasingly become a focal point in digital health and healthcare service management. Grounded in Job Crafting Theory, this study constructs a proactive crafting index, which captures doctors’ proactive behaviors on the platform across three dimensions: consultation rate, number of consultations, and response speed. We systematically examine the multidimensional impacts of such behaviors on performance outcomes, including online consultation volume, offline service volume, and user evaluation performance. This study collects publicly available records from a major online healthcare platform in China and conducts empirical analysis using the entropy weight method and econometric techniques. The results reveal an optimal level of proactive engagement: moderate proactivity maximizes online consultation volume, while both insufficient and excessive proactivity reduce it. Offline service volume, in contrast, follows a U-shaped relationship: moderate proactive engagement minimizes offline visits, while too little or too much engagement leads to more offline service needs. These nonlinear patterns highlight the importance of calibrating doctors’ proactive behavior to optimize both online engagement and offline service. The findings enrich Job Crafting Theory by identifying its boundary conditions in platform-based service environments and provide actionable insights for platform operators to design behavior management and incentive systems tailored to doctors’ professional rank, patient condition, and regional context.

1. Introduction

With the rapid advancement of digital health initiatives worldwide, online healthcare platforms have increasingly become an essential component of modern healthcare systems across countries. Integrating telemedicine, remote consultations, and digital patient management, these platforms are transforming traditional models of care delivery, enhancing healthcare accessibility, efficiency, and patient engagement on a global scale [1,2]. Through these platforms, doctors can provide patients with services such as text-based consultations, voice inquiries, and video appointments, enabling efficient remote interactions. In this process, doctors not only perform their traditional clinical duties but also take on roles as service providers, communicators, and representatives of their professional image on the platform. Their behavioral patterns have become increasingly influential in determining service performance outcomes [2,3]. Compared with offline healthcare, the evaluation of doctors’ performance on digital platforms is more transparent, dynamic, and multidimensional, encompassing metrics such as user feedback, repeat visits, and referrals [4]. How to stimulate doctors’ proactivity and enhance their overall service performance within platform-based contexts has thus become a focal issue in the operation and management of online healthcare platforms [4]. Recent empirical studies have shown that doctors’ response speed, consultation frequency, and communication style significantly impact patient satisfaction, repeat visits, and even platform recommendation exposure. For example, Liu et al. find that better interaction performance attracts more chronic patients [5], while Wang et al. demonstrate that physicians’ online activities can lead to a higher service quantity in offline channels, whereas the more offline patients physicians serve, the more articles they are likely to share online [6].
These findings provide strong support for the growing importance of behavioral engagement in platform-based healthcare services.
In recent years, Job Crafting Theory has emerged as a prominent theoretical framework in the fields of organizational behavior and human resource management. The theory posits that individuals are not passive recipients of job assignments; instead, they actively modify the content of their tasks, interpersonal relationships, and role perceptions. Through such proactive adjustments, individuals can acquire more job resources, reduce stressors, and achieve an optimal alignment between job demands and available resources [7,8]. This framework provides a novel perspective for understanding how doctors may self-regulate to influence their performance outcomes on online healthcare platforms [9].
Within these platforms, doctors can actively shape their “work presence” by taking on more consultations, responding promptly, and increasing service frequency, thereby attracting patients to choose their services [10]. To comprehensively understand the synergistic effects among doctors’ behavioral indicators, this study develops a novel composite measure—Proactive Crafting Index (PCI)—specifically designed for online healthcare platforms. While grounded in Job Crafting Theory, the PCI is conceptually new and operationally tailored to capture doctors’ proactive behaviors in digital service environments. It integrates three key behavioral metrics—consultation rate, number of consultations, and response speed—that reflect task and relational crafting dimensions: specifically, consultation rate and number of consultations correspond to task crafting, while response speed reflects relational crafting, synthesized using the entropy weight method [11,12]. This design enables a multidimensional understanding of doctors’ platform engagement, which is often oversimplified in existing research. This index is used to assess doctors’ proactive behaviors on online healthcare platforms and to analyze their impact on three dimensions of performance: online consultation volume (OCV), offline service volume (OSV), and user evaluation performance (UEP).
In addition, doctors’ performance outcomes may also be influenced by individual attributes and external contextual factors. Variables such as doctors’ professional level, the area in which they practice, and the disease urgency level of their patients may moderate the relationship between proactive behaviors and performance outcomes [1,10]. Therefore, this study further investigates the heterogeneity of these variables to construct a more comprehensive research framework. By integrating theory with empirical analysis, this study aims to uncover how doctors’ proactive crafting behaviors influence their performance outcomes, thereby offering both theoretical insight and practical guidance for doctor management and innovation in healthcare services on digital platforms.

2. Literature Review

2.1. Studies on Job Crafting Theory

Job Crafting theory was first introduced by Wrzesniewski and Dutton, emphasizing that individuals are not passive recipients of job tasks and role expectations [7]. Instead, they proactively adjust task boundaries, interpersonal relationships, and perceptions of work meaning to align with personal needs and motivations, thereby enhancing job satisfaction and performance. Job crafting behaviors are typically categorized into three forms: task crafting, relational crafting, and cognitive crafting. Later, Tims and Bakker extended the theory within the framework of the Job Demands–Resources (JD-R) Model, suggesting that individuals can enhance challenging work resources and reduce hindering demands through crafting behaviors, which in turn improves performance and alleviates burnout [7,13].
Job crafting theory has been widely applied in research on service industries, knowledge workers, and virtual work environments. Studies have shown that proactive job crafting significantly enhances employee engagement and organizational commitment in public service institutions [1,14]. Among teachers, task and relational crafting behaviors have been linked to increased job satisfaction and innovative behaviors [14]. Chen et al. found that cognitive crafting mitigates feelings of isolation and productivity loss among remote workers, especially in digital platform settings [15]. However, these studies are mostly situated in general organizational or educational contexts and rarely engage with highly institutionalized, regulated environments like healthcare. Furthermore, they often rely on subjective reports rather than directly observing behavior in real-time digital interactions.
In the healthcare context, although work is highly standardized and task-driven, doctors still retain a degree of autonomy during service delivery. They may adjust the pace of service, optimize consultation procedures, and strengthen doctor–patient interactions, thus reshaping the boundaries of their work. Existing studies have shown that task and cognitive crafting behaviors among healthcare professionals contribute to improved work adaptation and well-being [16,17,18]. Furthermore, individual motivations, departmental characteristics, and service pressures influence whether doctors choose to engage in self-regulation. However, existing research on doctors’ crafting behaviors is largely confined to offline or traditional hospital settings. In increasingly digitalized work environments where professionals interact with others through structured systems, algorithmic performance evaluation, and publicly visible profiles, new forms of job crafting may emerge. These settings introduce both novel affordances—such as broader reach and flexible interaction tools—and constraints—such as system rules and transparent performance metrics—that existing job crafting frameworks have yet to fully theorize [19,20]. Therefore, job crafting theory offers a suitable foundation for understanding doctors’ behaviors on online healthcare platforms. Investigating job crafting in this context can reveal how doctors actively seek resources, enhance influence, and improve performance outcomes within a highly structured yet dynamic digital framework. Such an investigation deepens our understanding of proactive work behaviors in digital healthcare environments and further explores the boundaries of job crafting theory’s applicability beyond traditional settings.

2.2. Studies on Doctors’ Behavior

With the rapid development of online healthcare platforms, doctors’ behavior on these platforms has become a prominent research topic in healthcare management and service innovation [1,21]. As core service providers, doctors’ attitudes, behavioral characteristics, and interaction styles significantly affect patient experience and platform service quality [22,23]. Existing research has explored the relationship between doctors’ behaviors and patient responses from various perspectives.
In terms of responsiveness, prompt replies to patient inquiries are seen as key to enhancing trust and satisfaction on the platform [24]. Empirical research has found a significant negative correlation between doctors’ average response time and the likelihood of patient revisit, indicating that response speed plays a critical role in user retention [25]. Regarding service frequency and consultation behavior, doctors with higher service frequency are more likely to receive platform recommendations and attract more patients, forming a positive feedback loop of “high exposure–high traffic–high performance” [26,27]. In addition, doctors’ enthusiasm and communication quality have been confirmed to significantly impact service evaluations and reputation building [28]. Other researchers have found that the level of detail in doctors’ initial consultations is significantly associated with patients’ willingness to continue follow-up consultations [29].
While previous studies have examined the impact of individual behavioral traits on performance, several important limitations remain. First, the majority of existing research tends to focus on single behavioral indicators—such as response speed, frequency, or consultation detail—without capturing the interactive, temporal, and strategic nature of doctors’ behavioral patterns. This approach overlooks the possibility that doctors may intentionally and proactively orchestrate multiple behaviors to shape their work environment and outcomes over time [30,31]. Second, most studies are descriptive or correlational in nature, lacking a unifying theoretical framework that can explain why and how certain behavioral configurations emerge and influence performance [31,32]. This approach limits our understanding of proactive behavior as a multidimensional construct—an issue this study aims to address through an integrated analytical model. Given doctors’ service autonomy on platforms, their series of continuous, interactive, and adaptive behaviors should be conceptualized as an integrated form of Proactive Crafting Behavior, rather than isolated service actions. Therefore, integrating doctors’ behaviors into a job crafting theoretical framework holds substantial theoretical and practical significance.

2.3. Studies on Doctors’ Performance

As one of the core quality indicators in healthcare service settings, doctors’ performance outcomes have traditionally been assessed based on clinical competence, diagnostic accuracy, and patients’ physiological treatment outcomes [33,34]. However, in online healthcare contexts, performance is defined more broadly, encompassing immediacy of service delivery, interaction quality, and patients’ perceived outcomes [35,36]. Particularly on internet-based platforms, performance metrics often derive from behavioral logs and user feedback, such as consultation volume, patient ratings, and repeat visits, providing a more holistic picture of doctors’ service capabilities and attractiveness to users [23,37]. Among these, user ratings are frequently used to evaluate service quality and platform reputation. Research finds that the positive review rate is a critical factor in platform recommendation algorithms, indirectly influencing a doctor’s patient flow and income [22]. Other work has found a significant positive correlation between doctors’ service ratings and patients’ willingness to pay [38]. In addition to subjective evaluations, objective behavioral indicators such as online consultation volume, revisit rates, and offline service volume are widely adopted to assess doctors’ service stickiness and long-term impact on the platform [39,40,41].
Despite these advancements, current research on online doctors’ performance suffers from several conceptual and methodological limitations. First, performance is often operationalized using fragmented or surface-level indicators, such as star ratings or consultation counts, which fail to capture the multidimensional and dynamic nature of service provision on digital platforms [42,43]. Second, most studies treat performance as an outcome influenced primarily by external factors (e.g., platform algorithms, patient preferences), without systematically investigating the proactive role of doctors themselves in shaping these outcomes [44,45]. Third, theoretical guidance remains weak, as few studies draw on established behavioral or organizational theories to explain how doctors navigate institutional constraints, leverage platform affordances, or align their service strategies with personal or professional goals [43,44,45]. These gaps highlight the need for a more comprehensive, theory-driven approach to understanding platform-based medical performance. Moreover, few studies have simultaneously examined user evaluation performance, online consultation volume, and offline service volume within a unified analytical framework [46,47]. Explanatory mechanisms for doctors’ performance often focus on platform rules and patient characteristics, with limited attention to doctors’ own behavioral motivations and adjustment strategies. Therefore, it is necessary to investigate how doctors achieve multidimensional performance outcomes through proactive behavioral crafting.

2.4. Review Summary

Despite increasing scholarly interest in the relationship between doctors’ behavior and performance, several gaps remain: (1) Existing research tends to conceptualize doctors’ behaviors in fragmented and static ways—focusing on discrete metrics such as response speed, consultation frequency, or communication quality—without considering their proactive, strategic, and adaptive dimensions over time. This limits our understanding of behavior as a dynamic process shaped by doctors’ own agency, especially under platform constraints. (2) Performance outcomes are typically operationalized using single-dimensional indicators such as review ratings or consultation counts, overlooking the multidimensional and interrelated nature of digital healthcare performance. As shown in previous sections, objective (e.g., consultation volume) and subjective (e.g., patient ratings) performance dimensions often co-exist, yet are rarely modeled together. (3) Few studies systematically apply theoretical frameworks to explain the mechanisms linking doctor behavior and performance. While many studies adopt a correlational or descriptive approach, they often lack conceptual tools to interpret why certain behaviors emerge and how they function within the digital institutional context of online platforms.
To address these gaps, this study introduces the concept of a proactive crafting index based on Job Crafting Theory, which comprehensively measures doctors’ proactive service behaviors on online healthcare platforms. It constructs a three-dimensional model of performance outcomes encompassing online consultation volume, offline service volume, and user evaluation performance, and explores their interrelationships and moderating mechanisms. This approach aims to enrich the theoretical foundation and empirical evidence for research on doctors’ behavior in online healthcare platforms.

3. Research Hypotheses

Based on Job Crafting Theory, individuals in dynamic work environments proactively adjust their work approaches to adapt to external demands, enhance access to resources, and ultimately improve performance outcomes [7,48]. In the context of online healthcare platforms, doctors enjoy relatively high service autonomy, allowing them to reshape their work boundaries through consultation selection, response management, and task coordination [8,49]. These behaviors not only influence how doctors are ranked and presented by platform algorithms but also play a critical role in shaping patients’ perceptions, service experiences, and subsequent choices [50,51,52]. Therefore, this study conceptualizes doctors’ proactive behaviors as a Proactive Crafting Index and proposes the following hypotheses from a theoretical perspective.

3.1. Proactive Crafting and Online Consultation Volume

The number of online consultations reflects a doctor’s service engagement and is shaped by both platform recommendation algorithms and patient choice. According to Job Crafting Theory, proactive behaviors enhance a doctor’s exposure, reach, and trust score on the platform, increasing the likelihood of being recommended [52]. More frequent consultations and quicker responses are logged by the platform as indicators of high-quality, active engagement, leading to more exposure and patient visits [1]. Furthermore, trust between doctor and patient is a key driver of continued use. Proactive service behavior serves as a trust signal that reduces patients’ uncertainty and increases their likelihood of returning to the same doctor [53]. Research has found that doctors’ proactive behavior significantly improves patient retention on digital platforms [54]. From the perspective of Job Crafting Theory, however, excessive job crafting may lead to resource depletion and overload—for instance, if doctors take on too many consultations or excessively shorten response times, it may reduce the quality of each consultation and, in turn, undermine patient trust [14]. Empirical studies have also shown that in online services, users exhibit an implicit threshold for “behavioral frequency”: when the number of consultations becomes too high, the likelihood that patients perceive the service as perfunctory also increases [55]. Thus, we propose the following hypothesis:
H1: 
Doctors’ Proactive Crafting Index shows an inverted U-shaped relationship with their online consultation volume in the subsequent month.
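An inverted-U relationship of this kind is conventionally tested by adding a squared term of the explanatory variable to the regression and checking that its coefficient is negative with the implied turning point inside the observed data range. The sketch below illustrates this check on synthetic data, not the study’s dataset; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: consultation volume responds to PCI in an inverted-U shape
pci = rng.uniform(0.0, 1.0, 500)
volume = 10 + 8 * pci - 7 * pci**2 + rng.normal(0.0, 0.5, 500)

# OLS with a quadratic term: volume = b0 + b1*PCI + b2*PCI^2 + error
X = np.column_stack([np.ones_like(pci), pci, pci**2])
b, *_ = np.linalg.lstsq(X, volume, rcond=None)

# An inverted U requires b2 < 0, with the turning point -b1/(2*b2)
# falling inside the observed PCI range
turning_point = -b[1] / (2 * b[2])
```

In the paper’s setting, one would additionally report the significance of the quadratic term and, following standard U-shape tests, verify that the slope is positive below and negative above the turning point.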

3.2. Proactive Crafting and Offline Service Volume

Offline service volume captures the conversion of online consultation into in-person visits, reflecting patients’ transfer of trust across channels—a key performance metric [56,57]. In the online healthcare setting, proactive crafting enhances patients’ perception of the doctors’ professional capabilities, service quality, and communication attitude, thereby increasing their likelihood to visit offline [57,58]. This aligns with Job Crafting Theory, which emphasizes resource acquisition through task boundary expansion and improved interpersonal interactions.
However, the relationship may not be strictly linear. Low to moderate levels of proactive behavior may be perceived by patients as driven more by a desire to boost platform traffic than by a genuine concern for diagnostic quality [59,60]. This perception can undermine patients’ trust in a doctor’s offline professional competence, thereby reducing their willingness to seek offline consultations [61]. This could deter patients from transitioning to offline visits, especially when these involve additional time and financial costs [62]. In contrast, when proactive behavior reaches a high and consistent level—such as sustained consultation activity accompanied by a stable response rhythm—patients are more likely to interpret it as a signal of professional dedication. They may perceive the doctor as having sufficient time, energy, and expertise to manage both online services and offline care effectively, which can enhance trust in the doctor’s offline services and increase the likelihood of offline conversion [63,64]. Therefore, we expect a U-shaped relationship:
H2: 
Doctors’ Proactive Crafting Index and their offline service volume in the subsequent month exhibit a U-shaped relationship.

3.3. Proactive Crafting and User Evaluation Performance

User evaluation performance reflects patients’ subjective assessment of the doctor’s service and serves as a direct representation of the doctor’s reputation on the platform [65]. According to Job Crafting Theory, doctors who proactively adjust their service pace and interaction style can improve timeliness and professionalism, thus enhancing interaction quality. Tims et al. found that job crafting behaviors improve perceived value and satisfaction among customers [34].
On digital platforms, fast response times, efficient consultation scheduling, and frequent service provision reduce patient waiting time and enhance perceived professionalism. Service enthusiasm and visible dedication further strengthen perceived value, leading to higher satisfaction and better reviews. Studies have shown that proactive behaviors in service-oriented organizations help shape overall customer impressions. Unlike online consultation volume and offline service volume, user evaluation performance (UEP) reflects patients’ subjective feedback on service quality. In service scenarios, user evaluations are subject to a “floor effect,” meaning they rely more on whether basic expectations are met rather than the rationality or frequency of the doctor’s behavior [66,67]. Consequently, the relationship between UEP and doctors’ proactive behaviors tends to be more linear. Accordingly, we propose:
H3: 
Doctors’ Proactive Crafting Index is positively associated with user evaluation performance.

3.4. Heterogeneity by Professional Rank, Disease Type, and Regional Context

Although proactive crafting generally enhances performance, this relationship may vary across different subgroups. That is, the effectiveness of proactive behaviors may exhibit structural heterogeneity. From the extended perspective of the Job Demands–Resources (JD-R) model, differences in resource endowment and job context significantly shape the alignment between behavior and performance outcomes [12,68]. Therefore, we examine the heterogeneity across three dimensions: doctor rank, disease urgency, and regional development level.
Doctors at different professional ranks enjoy varied levels of authority and credibility [68]. High-rank doctors are more likely to have their proactive behavior interpreted as dedication and responsibility, while lower-rank doctors might be perceived as promotional or commercially driven. Thus, rank moderates the behavior-performance link.
H4a: 
The effect of Proactive Crafting on performance differs significantly by doctors’ professional rank: it is stronger in doctors with higher professional ranks than those with lower ranks.
Patients’ sensitivity to responsiveness and initiative differs depending on disease urgency. For acute conditions, prompt responses and proactive communication quickly alleviate patient anxiety, driving repeat visits [69,70]. For chronic diseases, long-term consistency matters more, and proactive behavior may yield diminishing marginal returns.
H4b: 
The effect of Proactive Crafting on performance differs significantly across disease types: it is stronger in acute conditions than in non-acute conditions.
Variations in platform penetration, healthcare infrastructure, and digital literacy exist across regions [71]. In more developed areas, patients are more familiar with online healthcare processes and can better perceive proactive behaviors, leading to positive performance feedback. In underdeveloped regions, limited platform engagement and technological barriers may hinder recognition and effectiveness of proactive behavior.
H4c: 
The effect of Proactive Crafting on performance differs significantly across regions: it is stronger in more developed regions compared to less developed regions.

4. Research Design

4.1. Data Source and Processing

This study is based on publicly accessible secondary data collected from a major Chinese online healthcare platform (www.wedoctor.com) (accessed on 7 June 2023). The platform openly provides non-sensitive information about doctors’ online service activities, including consultation frequency, response time, pricing, and patient reviews. We collected data through automated scripts consistent with the platform’s robots.txt policy. The dataset comprises more than 100,000 panel records covering over 2800 doctors from December 2022 to May 2023. All doctor- and patient-level data were recorded in the form of platform-generated encrypted IDs, without any personal identifiers such as names, photos, or contact information. Therefore, no personally identifiable information was accessed or stored during data collection. The original behavioral data are generated by the platform as part of standard system logging and were not manipulated by the researchers. To ensure data quality, we performed consistency checks, outlier screening, and cross-validation across time periods. The data are considered reliable as they are systematically recorded by the platform based on real-time user behavior. Because the data are publicly available and do not involve private or sensitive content, this study does not require ethics board approval or informed consent, in line with standard practices for research using non-identifiable secondary data.
During the six-month observation period (December 2022 to May 2023), there were no reported major platform updates, systemic changes, or significant health policy shifts that might systematically bias doctors’ behaviors or patient demand. Moreover, seasonal effects—such as those associated with public holidays or flu season—were controlled using time-fixed effects in the regression models to mitigate their potential impact.
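Time-fixed effects of the kind mentioned above can be implemented by adding a dummy variable for each month to the outcome regression, which absorbs any demand shock common to all doctors in a given month (e.g., flu-season surges). A minimal sketch on synthetic panel data follows; the variable names and coefficient values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_doctors, n_months = 200, 6

# Hypothetical panel: each doctor observed for six months; the outcome depends
# on a regressor x plus a month-specific demand shock shared by all doctors
month = np.tile(np.arange(n_months), n_doctors)
x = rng.normal(size=n_doctors * n_months)
season = np.array([0.0, 1.5, 3.0, 1.0, -0.5, 0.5])  # month-level shocks
y = 2.0 * x + season[month] + rng.normal(0.0, 0.3, n_doctors * n_months)

# Time-fixed effects: one dummy per month, with the first month as baseline
dummies = (month[:, None] == np.arange(1, n_months)[None, :]).astype(float)
X = np.column_stack([np.ones_like(x), x, dummies])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
# b[1] recovers the effect of x net of the month-level shocks
```

The coefficient on x is then identified from within-month variation across doctors, so common seasonal shocks no longer bias the estimate.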

4.2. Variable Selection and Measurement

Variable selection is grounded in the core logic of Job Crafting Theory, which posits that employees proactively shape their work to acquire resources, tackle challenges, and improve performance. Drawing on the platform’s data structure and doctors’ work characteristics, we identify key indicators to construct independent, dependent, and control variables, ensuring theoretical consistency and operational feasibility.
(1) Independent Variable
In the context of online healthcare platforms, doctors enjoy a high degree of service autonomy. Their proactive behaviors reflect a combination of task crafting and relational crafting within the platform environment. According to Job Crafting Theory, employees can enhance resource acquisition and improve performance outcomes by actively expanding work boundaries and optimizing interaction processes.
Given the digital characteristics of doctors’ work tasks and Job Crafting Theory, this study selects three behavioral indicators—online consultation rate (t), number of consultations (t), and response speed (t)—as the foundational variables for constructing the proactive crafting index (t). The online consultation rate (t) reflects a doctor’s willingness to accept appointment requests, indicating openness in task boundaries and capturing the willingness to proactively expand service scope [72,73]. The number of consultations (t) measures the supply of services provided by the doctor, representing proactive efforts to acquire resources by increasing service volume [74,75]. Response speed (t), defined as the average time a doctor takes to reply to patient inquiries, serves as a key indicator of service agility and interactive proactivity, denoting proactive efforts to optimize doctor–patient interactions [64,74,75]. Together, these three indicators—whether to accept tasks, how much to invest, and how to optimize interactions—comprehensively capture the key dimensions of doctors’ proactive behaviors in online healthcare settings and are theoretically coherent. They are chosen because they are objectively observable, theoretically grounded, and feasible to extract from platform data. Specifically, consultation rate and number of consultations correspond to task crafting, while response speed reflects relational crafting. In contrast, cognitive crafting, which refers to individuals changing how they perceive the meaning or identity of their work, involves internal psychological processes that are difficult to detect from behavioral traces in a digital setting. While important in theory, cognitive crafting cannot be reliably inferred from platform-level data without self-reported or qualitative inputs.
Therefore, our PCI focuses on externally observable proactive behaviors that reflect how doctors shape their task boundaries and interaction processes on the platform.
This study employs the entropy weight method to synthesize the Proactive Crafting Index (PCI), which better fits the research needs compared to equal weighting or principal component analysis (PCA). The equal weighting method assumes all indicators contribute equally, whereas actual data show varying levels of dispersion across the three indicators (e.g., response speed exhibits greater individual variation than consultation rate). The entropy method automatically assigns weights based on information entropy, giving more weight to indicators with higher variability and reducing subjective bias. PCA, by contrast, focuses on dimensionality reduction, which may obscure the unique theoretical meaning of each indicator [76]. In comparison, the entropy method preserves the conceptual integrity of all three indicators—each representing task boundaries, job investment, and interaction quality—making it more appropriate for this measurement task. The resulting composite index reflects the overall tendency of doctors to proactively manage their platform work.
The process involves: standardizing the indicators (using Equation (1) for positive indicators and Equation (2) for negative indicators), calculating the weight of the indicator j for the doctor i (Equation (3)), computing the information entropy of the indicator j (Equation (4)), determining the degree of divergence (Equation (5)), and finally calculating the weight of each indicator (Equation (6)) and the composite score (Equation (7)).
$$y_{ij} = \frac{x_{ij} - \min x_j}{\max x_j - \min x_j} \quad (1)$$
$$y_{ij} = \frac{\max x_j - x_{ij}}{\max x_j - \min x_j} \quad (2)$$
$$p_{ij} = \frac{y_{ij}}{\sum_{i=1}^{n} y_{ij}} \quad (3)$$
$$e_j = -k \sum_{i=1}^{n} p_{ij} \ln p_{ij}, \quad k = \frac{1}{\ln n} \quad (4)$$
$$d_j = 1 - e_j \quad (5)$$
$$w_j = \frac{d_j}{\sum_{j=1}^{m} d_j} \quad (6)$$
$$S_i = \sum_{j=1}^{m} w_j y_{ij} \quad (7)$$
In Formulas (1)–(7) above, $x_{ij}$ denotes the original (raw) value of indicator j for doctor i (e.g., consultation rate, number of consultations, or response speed). $y_{ij}$ denotes the normalized value of indicator j for doctor i, obtained through min–max normalization; Equation (1) is used for positive indicators (higher is better), and Equation (2) for negative indicators (lower is better, e.g., response time). $p_{ij}$ is the proportion of doctor i’s normalized score on indicator j relative to the total across all doctors. $e_j$ is the information entropy of indicator j, reflecting the dispersion of this indicator across all individuals; the constant $k = 1/\ln n$ normalizes entropy to the [0, 1] interval, and a higher entropy means less variability. $d_j$ is the degree of divergence of indicator j, calculated as $1 - e_j$; it reflects the indicator’s contribution to distinguishing between individuals. $w_j$ is the entropy-based weight assigned to indicator j, computed by normalizing the divergence across all indicators. $S_i$ is the final composite score (PCI) for doctor i, calculated as the weighted sum of the normalized indicators; a higher score reflects stronger proactive crafting behavior.
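The steps in Equations (1)–(7) can be sketched in a few lines of Python. This is a minimal NumPy illustration, not the authors’ code; function and variable names are our own:

```python
import numpy as np

def entropy_weight_pci(X, negative=None):
    """Entropy weight method following Equations (1)-(7).

    X        : (n_doctors, m_indicators) array of raw indicator values.
    negative : column indices of negative indicators (lower is better,
               e.g., response time); the rest are treated as positive.
    Returns the indicator weights w and the composite scores S (the PCI).
    """
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    negative = set(negative or [])
    # Equations (1)-(2): direction-aware min-max normalization.
    Y = np.empty_like(X)
    for j in range(m):
        rng = X[:, j].max() - X[:, j].min()
        if j in negative:
            Y[:, j] = (X[:, j].max() - X[:, j]) / rng
        else:
            Y[:, j] = (X[:, j] - X[:, j].min()) / rng
    # Equation (3): proportion of each doctor's normalized score.
    P = Y / Y.sum(axis=0)
    # Equation (4): information entropy with k = 1 / ln(n).
    k = 1.0 / np.log(n)
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # convention: 0 * ln 0 = 0
    e = -k * (P * logs).sum(axis=0)
    # Equations (5)-(6): divergence and normalized weights.
    d = 1.0 - e
    w = d / d.sum()
    # Equation (7): weighted composite score per doctor.
    S = Y @ w
    return w, S
```

Because the weights come from the data’s own dispersion, indicators with more cross-doctor variation (such as response speed in our sample) automatically receive larger weights.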
(2) Dependent Variable
Performance outcomes serve as the core dimension for evaluating the effectiveness of doctors’ behaviors on the platform. Traditional performance research often focuses on either subjective satisfaction or a single objective output indicator. However, in online environments, a multidimensional performance structure more accurately captures the comprehensive value of doctors’ actions.
Drawing on literature from service marketing and digital platform behavior, this study constructs three performance indicators: user evaluation performance (t+1), online consultation volume (t+1), and offline service volume (t+1). User evaluation performance (t+1) reflects patients’ subjective perceptions of and satisfaction with the quality of service, measured by the proportion of positive reviews in the current month. Online consultation volume (t+1) indicates patients’ behavioral choices to engage the doctor’s services in the following month, reflecting both service attractiveness and platform recommendation effects. Offline service volume (t+1) assesses whether online interactions effectively translate into offline medical services, serving as a key outcome for measuring trust transfer and cross-channel integration. This performance framework aligns with the logical chain of service quality → usage behavior → trust conversion, allowing for a comprehensive assessment of the impact of doctors’ proactive crafting behaviors on performance outcomes.
(3) Moderating Variables
This study introduces three moderating variables for heterogeneity analysis: doctors’ professional level (t), the area where a doctor is located (t) (regional development level), and disease urgency level (t).
Doctors’ professional level (t) (DPLt) represents their professional qualifications and credibility on the platform, which may enhance patients’ perceptions and acceptance of their behavior. Disease urgency level (t) (DULt) captures patients’ sensitivity to response speed and service urgency; patients with acute conditions may prefer faster and more efficient services, making the effect of a doctor’s proactive behavior more pronounced. The area where a doctor is located (t) (Areat), or the level of regional development, affects the technological penetration of the platform and patients’ adaptability to digital services; differences in the visibility and conversion efficiency of doctor behaviors may emerge across regions. These moderating variables are grounded in the logic of behavioral perception and service response matching, contributing to a deeper understanding of the main effect relationships.
(4) Control Variables
To enhance the explanatory power and identification validity of the model examining the relationship between doctors’ proactive crafting behaviors and performance outcomes, this study controls for other potential confounding factors that might systematically affect performance outcomes. Four categories of control variables are included: latest review (t) (LRt), online average price (t) (OAPt), offline service prices (t) (OSPt), and the total number of online and offline services (t) (NOOSt).
Latest review (t) reflects the recent activity level and frequency of evaluations for a doctor’s services, which may directly influence platform exposure and patient selection preferences. Online average price (t) and offline service prices (t) represent the pricing levels of the doctor’s services; price is an important reference factor in patient decision-making. The total number of online and offline services (t) measures the overall service intensity provided by the doctor, which may create scale effects or experience advantages affecting performance outcomes. Incorporating these control variables helps to isolate the effect of proactive crafting behavior on performance outcomes, thereby improving the robustness and credibility of the empirical analysis.
The definitions and measurement methods of variables are shown in Table 1.

4.3. Research Model

The research model of this study is shown in Figure 1.

5. Results Analysis

5.1. Descriptive Statistics

The dataset includes over 2800 doctors from various regions, specialties, and professional levels across the platform. These doctors span both high-level and non-high-level titles and cover a broad range of clinical departments and geographic locations, which reflects the diversity of the platform’s overall population. This study conducted descriptive statistics using Stata 17 (see Table 2). The results indicate that the distribution of the sample is highly representative. Both OCVt+1 and OSVt+1 exhibit considerable variation, reflecting differences in doctors’ activity levels regarding patient consultations on the platform. The mean of UEPt+1 reaches 98.392, with a maximum value of 100, indicating that the majority of doctors received favorable patient evaluations, which reflects a notable “positive review bias” in the rating mechanism. Additionally, the maximum value of the NOOSt is as high as 98,797, far exceeding the median of 379, suggesting a highly skewed distribution in consultation volume. Overall, the data characteristics support the feasibility of conducting subsequent analyses.

5.2. Correlation Analysis

Correlation analysis was conducted using Pearson correlation test in Stata 17 (see Table 3). The results show a significant positive correlation between doctors’ proactive crafting behavior and online consultation volume, supporting the theoretical expectation that proactive crafting can enhance platform exposure and patients’ willingness to consult. Although the correlation with OSVt+1 is negative, its absolute value is small, suggesting a potential nonlinear relationship. The PCIt also exhibits a weak positive correlation with UEPt+1; despite the limited correlation strength, the direction is consistent. Overall, all correlation coefficients are below 0.8, indicating no serious multicollinearity issues among variables and providing a solid foundation for subsequent regression model estimation.

5.3. Multicollinearity Test

Variance Inflation Factor (VIF) analysis was performed using Stata 17 to test for multicollinearity, and the results show that all explanatory variables have VIF values within a reasonable range (see Table 4), with a maximum value of 2.11 (OSPt) and most variables below 1.5, indicating no serious multicollinearity problem in the regression model. The PCIt has a VIF of 1.08, demonstrating good identification and independence as the main explanatory variable. The reciprocal of VIF values (1/VIF) all exceed 0.47, further confirming variable independence. These test results ensure the stability of parameter estimates in the subsequent multiple regression analyses.
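The VIF diagnostic reported above can be reproduced outside Stata. The sketch below (plain NumPy; function and variable names are ours) regresses each explanatory variable on the remaining ones and computes $VIF_j = 1/(1 - R_j^2)$:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X (n_obs, k_vars).

    For each column j, regress it on the other columns plus an intercept;
    VIF_j = 1 / (1 - R^2_j). Values near 1 indicate independence; values
    above ~10 are a common red flag for multicollinearity.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out
```

The reciprocal reported in the paper (1/VIF, the tolerance) is simply `1.0 / vif(X)`.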

5.4. Two-Way Fixed Effects Analysis

Based on the Hausman test conducted using Stata 17 (p < 0.05), this study adopts fixed effects modeling. The White test (also performed in Stata 17) indicates heteroscedasticity in the data; thus, robust standard errors are employed. After controlling for individual doctor and time fixed effects, the two-way fixed effects results presented in Table 5 show that the impact of proactive crafting behavior on online and offline performance exhibits significant nonlinear characteristics. Models (1) and (2) support the hypothesized inverted U-shaped relationship in H1 and the U-shaped relationship in H2, respectively.
This study further applies the Utest (a formal test for U-shaped and inverted U-shaped nonlinear relationships) to validate these functional forms, and prediction graphs were generated in Stata 17, as shown in Figure 2. The results indicate that the online consultation model satisfies the unimodal property of an inverted U-shape, while the offline consultation model satisfies the unimodal property of a U-shape, both passing the Utest significance test. These findings not only confirm the theoretical nonlinear specifications but also statistically strengthen the credibility and identification of Hypotheses 1 and 2. The relatively low R2 of Model (3) in predicting UEPt+1 may be attributed to the characteristics and measurement properties of the UEP data. Our results show that the mean UEPt+1 score is 98.39 (out of 100), with a median of 99.7, and over 90% of the evaluations fall within the 95–100 range. This strong clustering of high scores likely limits the model’s ability to detect meaningful associations. Additionally, UEPt+1 may be influenced by unobserved factors such as medical outcomes or individual user preferences. Nonetheless, despite the model’s limited explanatory power, the results indicate that PCIt has no significant effect on UEPt+1 (coefficient = 0.011, p > 0.1). This finding is consistent with the data characteristics: given the heavy concentration of high scores, variations in proactive behavior may not translate into observable differences in user ratings, although the result still provides descriptive insights.
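The core logic of the Utest can be illustrated with a small sketch (ours, not the Stata routine): fit the quadratic specification, locate the turning point at $-\beta_1 / (2\beta_2)$, and verify that the marginal slopes at the two ends of the observed data range have opposite signs, with the turning point falling inside the range:

```python
import numpy as np

def quadratic_shape_check(x, y):
    """Fit y = b0 + b1*x + b2*x^2 and apply the point-estimate logic of a
    U-shape test: an inverted U requires b2 < 0, a positive slope at
    min(x), a negative slope at max(x), and an interior turning point.
    (The full Utest also attaches significance tests to both end slopes.)
    """
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    turning = -b1 / (2.0 * b2)
    slope_lo = b1 + 2.0 * b2 * x.min()   # marginal effect at the low end
    slope_hi = b1 + 2.0 * b2 * x.max()   # marginal effect at the high end
    interior = x.min() < turning < x.max()
    if b2 < 0 and slope_lo > 0 and slope_hi < 0 and interior:
        shape = "inverted-U"
    elif b2 > 0 and slope_lo < 0 and slope_hi > 0 and interior:
        shape = "U"
    else:
        shape = "none"
    return shape, turning
```

In the paper’s setting, the turning point of the OCVt+1 model corresponds to the “optimal level” of proactivity discussed in the abstract.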

5.5. Robustness Tests

5.5.1. Removal of Outliers

To reduce the influence of extreme values on the estimation results, a robustness test was conducted by winsorizing the main variables at the 1% level using Stata 17, as shown in Table 6. After this treatment, the study re-estimated the two-way fixed effects model with robust standard errors, and the regression results remained highly consistent with the original models: OCVt+1 still exhibited a significant inverted U-shaped pattern, while OSVt+1 maintained a typical U-shaped trend. This indicates that extreme sample points have minimal interference with the overall trend, confirming the robustness of the regression findings.
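For reference, 1% winsorization replaces values beyond the 1st and 99th percentiles with those percentile values. A minimal NumPy sketch of the transformation (our own helper, not the Stata routine used in the paper):

```python
import numpy as np

def winsorize_1pct(v):
    """Clip a series at its 1st and 99th percentiles (1% winsorization).

    Unlike trimming, no observations are dropped; extreme values are
    pulled in to the percentile cutoffs, which limits the leverage of
    outliers such as the maximum NOOS value of 98,797 noted in Table 2.
    """
    v = np.asarray(v, dtype=float)
    lo, hi = np.percentile(v, [1, 99])
    return np.clip(v, lo, hi)
```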

5.5.2. Control Variable Adjustment: Replacing LR with NNR

Building upon the original model, this study replaces the original control variable, LRt, with the Number of Negative Reviews (t) (NNRt), using Stata 17 for analysis, considering that negative feedback on online platforms may have a more substantial impact on a doctor’s reputation and performance. As explicit signals of service dissatisfaction, negative reviews may more effectively influence patients’ decisions regarding future consultations or referrals. Incorporating this variable therefore helps control more comprehensively for the effect of patients’ subjective perceptions on performance outcomes, thereby improving the precision in identifying the net effect of proactive behaviors.
The results after inclusion (see Table 7) indicate that the nonlinear relationship between proactive crafting behavior and performance remains robust. The NNRt variable itself shows a significant negative impact on User Evaluation Performance (UEPt+1) but a significant positive impact on Offline Service Volume (OSVt+1), confirming its explanatory role in the model. This suggests that while negative reviews may harm perceived service quality, they may also reflect higher patient engagement or exposure, thereby increasing the likelihood of subsequent offline visits.

5.5.3. Recalculation of the Proactive Crafting Index

To further validate the rationality of the construction of the main explanatory variable, this study recalculated the PCIt using the improved entropy weight method. Unlike the traditional entropy method, which relies solely on information entropy for indicator weighting, the improved entropy weight method incorporates standard deviation and range as additional metrics to adjust the weights. This enhancement increases sensitivity to differences in indicator distributions.
This optimization mechanism prevents disproportionate weighting caused by concentrated values or extreme outliers in any single indicator, thereby improving both the stability and theoretical expressiveness of the index. After introducing this new variable, the study conducted regression analysis using the two-way fixed effects model with robust standard errors via Stata 17. The re-estimated results (see Table 8) continue to confirm the inverted U-shaped relationship for Online Consultation Volume (OCVt+1) and the U-shaped relationship for Offline Service Volume (OSVt+1). These findings indicate that the construction method of the PCIt does not materially affect the core conclusions, and demonstrates the robustness of the methodology in empirical application.
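The paper describes the improved entropy weight method qualitatively rather than formulaically. One plausible way to incorporate standard deviation and range into the weighting, shown purely as an illustrative assumption and not as the authors’ exact formula, is to average the normalized entropy divergence with the normalized dispersion measures:

```python
import numpy as np

def improved_weights(Y, d):
    """Illustrative blending rule (an assumption, not the paper's formula).

    Y : (n, m) matrix of min-max normalized indicators.
    d : length-m vector of entropy divergences (1 - e_j).
    Each criterion (divergence, std, range) is normalized to sum to one,
    then the three are averaged and renormalized into final weights.
    """
    Y = np.asarray(Y, dtype=float)
    d = np.asarray(d, dtype=float)
    std = Y.std(axis=0)
    rng_ = Y.max(axis=0) - Y.min(axis=0)
    parts = np.vstack([d / d.sum(), std / std.sum(), rng_ / rng_.sum()])
    w = parts.mean(axis=0)
    return w / w.sum()
```

Blending in dispersion measures dampens the influence of any single concentrated or outlier-driven indicator on the final weights, which is the stability property the paper attributes to the improved method.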

5.6. Endogeneity Test

Considering that doctors’ proactive behaviors may be simultaneously influenced by feedback from performance outcomes (e.g., better performance leading to increased proactivity), this study conducted an endogeneity test to mitigate potential bias. Lagged values of PCIt and its squared term (PCI2t) were used as instrumental variables. The choice of lagged variables is based on the assumption that past behaviors are correlated with current proactive tendencies but are less likely to be directly affected by contemporaneous performance shocks, thus satisfying the relevance and exogeneity conditions for valid instruments [77,78,79,80]. In this study, the lagged variable (PCIt−1) affects performance at time t+1 (e.g., OCVt+1) indirectly by influencing current proactive behavior (PCIt), rather than exerting a direct effect on future performance. This approach is grounded in the notion that patients’ decisions to initiate online consultations or seek offline visits are predominantly driven by the doctor’s current behavior. As a result, the lagged variable serves as a valid instrument: it is strongly correlated with the endogenous regressor (current proactive behavior) and, crucially, uncorrelated with the random disturbance term of the outcome equation, thereby satisfying the key conditions for an effective instrument. Moreover, the use of lagged independent variables has been validated in prior studies as an effective strategy to mitigate contemporaneous endogeneity [79,80].
The lagged variables were generated using Stata 17, followed by a test using the instrumental variable method. The results are shown in Table 9. The first-stage F-statistic was significant, indicating a strong correlation between the instruments and endogenous regressors, ruling out weak instrument problems. The CD Wald F statistic further verifies the validity of the instruments, showing that they can effectively explain the variation in the endogenous regressors. The SW S statistic test results demonstrate that the instruments satisfy the exogeneity condition, ensuring their validity in the model. Since the model is exactly identified, with the number of instruments equal to the number of endogenous regressors, no overidentification test (such as the Sargan test) is required. Therefore, based on these statistical test results, we can confirm that the selected instruments are valid and reliable.
Second-stage results showed that the lagged index maintains the inverted U-shaped relationship with OCVt+1, consistent with the main model. The relationship with OSVt+1 remains U-shaped, also consistent with theoretical expectations. These findings suggest the instrument design is appropriate, effectively controlling for reverse causality and omitted variable bias, thereby further validating the reliability of the model results.
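The two-stage logic behind this instrumental variable estimation can be sketched compactly (a manual NumPy illustration of 2SLS under our own simulated setting; it is not the paper’s Stata specification and omits fixed effects and robust standard errors):

```python
import numpy as np

def two_stage_ls(y, endog, instruments, exog=None):
    """Manual two-stage least squares.

    Stage 1: regress the endogenous regressors (e.g., current PCI) on the
    instruments (e.g., lagged PCI) plus exogenous controls.
    Stage 2: regress the outcome on the controls and the stage-1 fitted
    values. Exactly identified when #instruments == #endogenous regressors,
    so no overidentification test applies (as in the paper).
    Returns coefficients ordered as [constant, exog..., endogenous...].
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    const = np.ones((n, 1))
    exog = const if exog is None else np.column_stack([const, exog])
    Z = np.column_stack([exog, instruments])          # first-stage design
    gamma = np.linalg.lstsq(Z, endog, rcond=None)[0]  # first stage
    endog_hat = Z @ gamma                             # fitted values
    X2 = np.column_stack([exog, endog_hat])           # second-stage design
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]      # second stage
    return beta
```

In a simulation where the regressor and the error share an unobserved component, this estimator recovers the true coefficient while plain OLS would be biased upward, which is exactly the reverse-causality concern the lagged instruments are meant to address.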

5.7. Heterogeneity Analysis

5.7.1. Heterogeneity Analysis by Doctors’ Professional Level

Doctors were divided into high-level and non-high-level groups for analysis using Stata 17 (see Table A1 in Appendix A). The prediction charts are shown in Figure 3. For high-level doctors, the relationship between the proactive crafting index and OCVt+1 exhibits a more significant inverted U-shaped pattern, while the relationship with OSVt+1 shows a U-shaped trend. In contrast, for non-high-level doctors, the relationship with OCVt+1 remains significant, but the relationship with OSVt+1 does not. This suggests that high-level doctors, due to their greater authority and trustworthiness, are more capable of converting proactive behaviors on the platform into performance outcomes, supporting Hypothesis H4a.

5.7.2. Heterogeneity Analysis by Disease Urgency Level

Disease types were categorized into non-acute and acute for analysis using Stata 17, with results shown in Table A2 in Appendix A. The prediction charts are shown in Figure 4. Among acute patients, the inverted U-shaped relationship between doctors’ proactive crafting index and OCVt+1 is significant. For non-acute patients, the U-shaped relationship with OSVt+1 is significant. Additionally, this study found that among non-acute patients, the proactive crafting index has a significant positive correlation with user evaluation performance. This may be because acute patients are more sensitive to response speed and service proactivity, enabling them to more quickly perceive doctors’ enthusiasm, thus fostering trust and conversion behavior, which validates Hypothesis H4b.

5.7.3. Heterogeneity Analysis by Area

Regions were classified into four categories according to the National Bureau of Statistics standards: Eastern, Western, Central, and Northeast areas, and analyzed using Stata 17 accordingly (see Table A3, Table A4 and Table A5 in Appendix A). The prediction charts are shown in Figure 5. The results indicate noticeable differences in the impact of doctors’ proactive behavior on performance across regions. In the Eastern area, the inverted U-shaped relationship between proactive crafting and OCVt+1 is most pronounced, possibly due to higher platform usage frequency and greater patient digital literacy in this area. While certain trends exist in the Central and Western areas, the effects are relatively weaker. Regarding OSVt+1, significant effects appear in both the Eastern and Northeast areas, whereas the Western and Central areas show no significant effects, which may relate to economic level differences or sample size disparities. UEPt+1 shows little variation across regions and remains statistically insignificant, likely due to the “high-score clustering” characteristic of rating data. Overall, these findings support Hypothesis H4c.

6. Discussion and Conclusions

6.1. Implications for Research

This study systematically investigates the impact of doctors’ proactive crafting behaviors on multidimensional performance outcomes within online healthcare platforms. By incorporating nonlinear modeling and heterogeneity analysis, it extends the theoretical boundaries of healthcare operations management and platform behavior research. Compared to existing literature, this study contributes novel insights in terms of theoretical framing, methodological design, and empirical findings.
First, this study enriches the understanding of the relationship between doctors’ proactive behavior and performance. Prior studies generally suggest that doctors’ participation, activity level, and engagement on online platforms positively influence their performance [21,22,23,24,25], but most assume a linear relationship, implying that more effort yields better outcomes. In contrast, by constructing a Proactive Crafting Index (PCI), our empirical results reveal that proactive behavior exhibits an inverted U-shaped relationship with online performance and a U-shaped relationship with offline performance, highlighting the inherently nonlinear nature of these behavioral effects. These findings suggest that there is an optimal range of proactivity: moderate engagement enhances visibility and patient conversion, while excessively low or high levels may diminish performance returns. This aligns with the “boundary” in Job Crafting Theory, which posits that while proactive efforts can improve resource acquisition and job fit, they are subject to psychological and contextual constraints. As proactive behavior intensifies beyond a certain point, it may trigger fatigue, social strain, or resistance from others, thereby reducing its effectiveness [81,82,83]. In online healthcare settings, the boundary may be even more salient, as platform mechanisms and patient expectations jointly shape the ‘acceptable’ range of doctor engagement [12]. In our analysis, H3, which hypothesized a positive association between proactive crafting and user evaluation performance (UEP), yielded a statistically non-significant result. We attribute this mainly to the measurement limitations of the UEP variable, particularly its ceiling effect, with over 90% of observations clustered between 95 and 100. This rating inflation likely reduces score variation, limiting the model’s ability to detect meaningful differences in proactive behavior. 
Therefore, the nonsignificant effect in H3 may stem from the limited variance and evaluative saturation of the dependent variable, rather than an absence of substantive impact.
Second, our results emphasize the potential adverse effects of excessive proactivity—an area often overlooked in the literature. In highly digitized, algorithm-driven online healthcare environments, doctors’ behavioral frequency and response speed are tracked and factored into platform recommendation systems [38,39,40]. While proactive behaviors can enhance algorithmic ranking and increase doctors’ exposure on online platforms, overly frequent acceptance of consultations may backfire—being perceived by patients as formulaic, commercially driven, or even manipulative. Such patterns may signal a lack of individualized attention, undermining the perceived authenticity of the doctor’s communication. As patients in online healthcare environments often rely on subtle cues—such as message tone, response timing, and personalization—to infer service motivation, excessive behavioral frequency may be interpreted as strategic self-promotion rather than genuine care. Prior studies have shown that when users detect impression management or self-serving intent in digital service contexts, it can erode trust and reduce willingness to engage [84,85,86]. Studies have also highlighted that patients are highly sensitive to doctors’ service motivations, especially in the absence of face-to-face interaction; in such contexts, trust is built more through perceived sincerity, communication tone, and platform cues than through traditional doctor–patient rapport [22,24]. Our findings support this “too much of a good thing” logic and suggest that platform managers should help doctors find a balance between visibility and authenticity.
Third, through heterogeneity analysis, this study identifies the boundary conditions under which proactive behavior influences performance, thereby enriching the external validity of doctor behavior research. The results show that high-level doctors are more likely to translate proactive behavior into performance benefits, possibly due to the alignment between their professional authority, platform reputation, and patient expectations. For acute disease patients, responsiveness and enthusiasm are more highly valued, thus the marginal return on proactive behavior is greater. Regional analysis shows that in eastern regions, where digital adoption and platform usage are higher, proactive behavior has a stronger impact on performance, while in central and western regions, limited resources and information asymmetry weaken the effect. These findings underscore that the “conversion efficiency” of proactive behavior is not determined by behavior alone but is jointly shaped by individual attributes, patient types, and the surrounding platform environment. This represents a valuable supplement to prior studies that have not fully considered contextual differences [49,69].
In addition, this study advances the methodology of behavioral research on digital platforms. We construct the PCI using an Improved Entropy Weight Method, which, compared to the traditional entropy-based approach, incorporates standard deviation and range as auxiliary weighting factors. This increases the sensitivity of the index to distributional differences, enhancing its stability and theoretical clarity. Furthermore, to address potential endogeneity issues arising from reverse causality between behavior and performance, we adopt lagged values of the PCI and its squared term as instrumental variables. The results validate both the relevance and exogeneity of the instruments, confirming the robustness of our model.
In conclusion, this study supports the general notion in existing literature that proactive participation by doctors is valuable, but further reveals that this impact is nonlinear and context-dependent. By developing a multidimensional performance framework, identifying nonlinear patterns, and introducing multilevel moderators, this research presents a more nuanced and explanatory model of the behavior–performance relationship on online healthcare platforms. It not only extends the theoretical application of Job Crafting Theory in digital healthcare settings but also offers practical implications for guiding and incentivizing doctor behavior on such platforms.

6.2. Implications for Practice

This study also offers several important managerial implications for doctors, patients, and operators of online healthcare platforms. To begin with, our findings demonstrate that doctors’ proactive crafting behaviors can significantly influence their online and offline performance outcomes, but the effect is nonlinear. This suggests that platforms should encourage doctors to maintain a moderate level of proactive engagement, where active service and interaction improve visibility and patient attraction, while avoiding excessive activity that may appear mechanical or profit-driven and negatively affect patient trust. Second, the identification of inverted U-shaped and U-shaped relationships between proactive behavior and performance provides actionable insights for platform algorithm design and doctor performance management. Platform operators could integrate the findings of this study into their recommendation systems by setting thresholds for proactive behaviors. For example, doctors displaying excessively high proactive behavior may be flagged as needing rebalancing to prevent burnout or patient fatigue. Similarly, platforms could adjust incentive structures to reward moderate proactivity, as our research shows that this level enhances patient trust and engagement. Third, the heterogeneity findings suggest that platform operators should adopt differentiated operational strategies based on doctors’ characteristics and service contexts. For example, high-level doctors are more effective in converting proactive behavior into performance, particularly in offline services. Therefore, platforms can prioritize promoting these doctors for high-urgency or complex cases to improve patient outcomes and platform credibility. Conversely, lower-level doctors may benefit more from support programs that guide the quality rather than quantity of engagement to build trust and improve long-term performance.
Moreover, the results show that patients with acute conditions respond more positively to proactive behaviors, particularly in online consultations. Platforms could design patient–doctor matching systems that consider disease urgency, thereby assigning more responsive doctors to patients requiring fast and effective interaction. This targeted matching can enhance both service satisfaction and medical outcomes. Additionally, the regional heterogeneity observed in our study implies that platform operators in less-developed areas (e.g., central and western regions) may need to invest more in patient education and digital literacy to help users understand and value doctors’ proactive services. Meanwhile, platforms in digitally mature areas like eastern China can leverage behavioral data more effectively for real-time performance feedback and service personalization. Finally, the findings underscore the need for dynamic, multidimensional performance evaluation systems. Platforms should move beyond static metrics such as consultation volume and incorporate behavioral quality indicators (e.g., response speed, communication tone), patient feedback, and conversion rates. Such systems can provide more holistic and fair assessments of doctors’ contributions and foster sustainable service motivation.

6.3. Limitations and Future Research

While this study provides meaningful theoretical and empirical insights, several limitations should be noted, offering valuable directions for future research. The analysis is based on data from a single online healthcare platform in China. Although the platform is representative in terms of user base and service structure, the observed relationships—especially those derived from the Proactive Crafting Index (PCI)—may be shaped by platform-specific incentive mechanisms, sociocultural expectations, or regulatory environments. Future research could extend the scope by incorporating data from multiple platforms or conducting cross-country comparative studies to examine whether the observed nonlinear and heterogeneous effects are consistent across different healthcare systems. Another limitation lies in the focus on doctor-side behaviors, without incorporating the dynamic interactions between doctors and patients. In real-world platform environments, patient responses—such as feedback, follow-up behavior, and trust formation—can significantly influence doctors’ behavioral decisions, forming a recursive feedback loop. Future studies could explore these bilateral dynamics by analyzing interactional data and examining how mutual adaptation between doctors and patients affects performance outcomes. While this study adopts a multidimensional framework to measure performance—including online consultation volume, offline service volume, and user evaluation performance—some critical dimensions may still be overlooked. For instance, clinical effectiveness, patient retention, or long-term health improvements are also essential outcomes in healthcare services. Future research could incorporate clinical indicators or third-party data to provide a more comprehensive assessment of performance. Additionally, this study does not explicitly address potential biases arising from platform algorithm opacity or patient self-selection. 
The recommendation and exposure mechanisms on digital health platforms are often governed by proprietary algorithms that are not fully disclosed to users or researchers. Such opacity may introduce systematic bias in how doctors’ behaviors are rewarded or penalized. Furthermore, patient choices—such as selecting doctors based on perceived popularity or availability—may lead to selection effects that confound the relationship between doctor behavior and performance. Future studies should account for these potential biases, possibly through platform collaboration or experimental designs. Although methodological efforts were made to enhance robustness—including the application of the Improved Entropy Weight Method and the use of lagged instrumental variables to address endogeneity—there remains the possibility of unobserved confounding factors or biases introduced by opaque platform algorithms. Future studies could strengthen causal inference by employing experimental or quasi-experimental designs, such as leveraging natural experiments or platform-level policy changes. The UEP variable used in this study exhibits a ceiling effect, as most ratings cluster in the higher range, limiting its ability to capture subtle differences in service quality. Future research could incorporate more granular patient feedback, such as satisfaction surveys, to mitigate this measurement limitation and capture finer variations in service quality. In addition, our current modeling approach aggregates user behavior and feedback into composite measures through the entropy weight method, which, while aligned with data availability and the study’s scope, does not fully capture the complexity and heterogeneity of patient decision-making. Future research may explore this dimension more deeply by incorporating user-level behavioral modeling, segmentation strategies, or qualitative methods to uncover how different user types perceive and respond to doctors’ proactive behaviors in platform-based healthcare environments.
In sum, future research should aim to expand the empirical scope, incorporate dynamic and bilateral interactions, and explore system-level governance factors in order to construct a more comprehensive understanding of proactive behavior and performance in online healthcare contexts.
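To make the lagged-instrument strategy discussed above concrete, the following is a minimal two-stage least squares (2SLS) sketch in Python on synthetic data. All function and variable names are ours, and the code illustrates the general technique only, not the study’s actual estimation pipeline.

```python
import numpy as np

def iv_2sls(y, x_endog, z_instr):
    """Two-stage least squares with one endogenous regressor and one
    instrument (illustrative helper, not the paper's estimation code).
    Returns (intercept, slope) of the second stage."""
    n = len(y)
    ones = np.ones((n, 1))
    # Stage 1: project the endogenous regressor on the instrument.
    Z = np.hstack([ones, z_instr.reshape(-1, 1)])
    gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: regress the outcome on the fitted values.
    X = np.hstack([ones, x_hat.reshape(-1, 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic check: x is endogenous (correlated with the structural
# error u), but its lag-style instrument z is not.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # instrument (e.g., lagged behavior)
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = 1.0 + 2.0 * x + u                        # true slope is 2
beta_ols, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)
beta_iv = iv_2sls(y, x, z)
print(round(beta_ols[1], 2), round(beta_iv[1], 2))  # OLS biased upward; IV close to 2
```

Because the instrument is correlated with the regressor but not with the error term, the second-stage slope is consistent even though naive OLS is not; this is the logic behind using lagged values of PCI as instruments.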

Author Contributions

Conceptualization, W.L. and Y.Y.; methodology, Y.Y. and Z.B.; formal analysis, Y.Y. and S.S.; data curation, W.L.; writing—original draft preparation, W.L. and Y.Y.; writing—review and editing, W.L. and Y.Y.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fundamental Research Funds for the Central Universities, grant No. ND2024001.

Institutional Review Board Statement

Not applicable. Due to the nature of the study, Ethics Committee approval was not required.

Informed Consent Statement

Not applicable. We would like to clarify that our study is based on publicly available online consultation data obtained from a widely used Chinese online healthcare platform, with all data processing conducted in compliance with the platform’s terms of service and relevant ethical guidelines. Therefore, informed consent was not required.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OCVt+1: Online consultation volume (t+1)
OSVt+1: Offline service volume (t+1)
UEPt+1: User evaluation performance (t+1)
PCIt: Proactive crafting index (t)
LRt: Latest Review (t)
OAPt: Online average price (t)
OSPt: Offline service prices (t)
NOOSt: Number of online and offline services (t)
DULt: Disease urgency level (t)
DPLt: Doctors’ professional level (t)
Areat: Area (t)
NNRt: Number of Negative Reviews (t)

Appendix A

Table A1. Heterogeneity Analysis (DPL).

| | OCVt+1 | | OSVt+1 | | UEPt+1 | |
| | DPL-High | DPL-Low | DPL-High | DPL-Low | DPL-High | DPL-Low |
| PCIt | 1.198 ** | 2.541 *** | −2.020 ** | −2.412 | −0.007 | −0.011 |
| | (0.409) | (0.763) | (0.853) | (2.496) | (0.078) | (0.189) |
| PCI2t | −2.890 ** | −5.528 ** | 5.359 ** | 5.312 | | |
| | (1.049) | (1.950) | (2.362) | (7.026) | | |
| LRt | 0.000 | 0.000 | 0.001 | −0.001 | 0.000 ** | 0.000 * |
| | (0.000) | (0.000) | (0.000) | (0.001) | (0.000) | (0.000) |
| OAPt | 0.002 * | 0.005 * | 0.001 | 0.006 | 0.000 | 0.000 |
| | (0.001) | (0.002) | (0.001) | (0.006) | (0.000) | (0.001) |
| OSPt | −0.013 | 0.068 | −0.044 | 0.000 | 0.018 | −0.002 |
| | (0.021) | (0.089) | (0.050) | (0.001) | (0.022) | (0.002) |
| NOOSt | 0.000 ** | 0.000 ** | 0.000 *** | 0.001 *** | 0.000 | −0.000 ** |
| | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) |
| _cons | 1.111 | 0.325 | 2.724 | −0.186 | 98.024 *** | 99.306 *** |
| | (0.770) | (1.424) | (1.780) | (0.467) | (0.707) | (0.102) |
| ID Fixed | Yes | Yes | Yes | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes | Yes | Yes | Yes |
| N | 13,000 | 4965 | 7405 | 1422 | 22,000 | 8302 |
| R2 | 0.067 | 0.115 | 0.497 | 0.321 | 0.006 | 0.013 |
| R2_a | 0.067 | −0.379 | 0.496 | 0.317 | 0.006 | 0.012 |
| F | 74.176 | 45.982 | 440.212 | 21.308 | 5.545 | 2.818 |

Notes: Standard errors in parentheses. * p < 0.1, ** p < 0.05, *** p < 0.001.
Table A2. Heterogeneity Analysis (DUL).

| | OCVt+1 | | OSVt+1 | | UEPt+1 | |
| | DUL-non acute | DUL-acute | DUL-non acute | DUL-acute | DUL-non acute | DUL-acute |
| PCIt | 0.304 | 1.545 *** | −4.911 ** | −1.314 | 0.376 * | −0.062 |
| | (1.250) | (0.408) | (1.831) | (0.800) | (0.225) | (0.081) |
| PCI2t | −0.055 | −3.554 *** | 11.574 ** | 3.531 | | |
| | (3.334) | (1.040) | (5.016) | (2.208) | | |
| LRt | 0.000 | 0.000 | 0.002 | 0.000 | 0.000 | 0.000 ** |
| | (0.000) | (0.000) | (0.002) | (0.000) | (0.000) | (0.000) |
| OAPt | 0.001 | 0.003 ** | 0.003 ** | 0.000 | 0.000 | 0.000 |
| | (0.002) | (0.001) | (0.002) | (0.002) | (0.000) | (0.000) |
| OSPt | 0.035 | −0.017 | 0.146 *** | −0.015 | −0.009 | 0.007 |
| | (0.058) | (0.012) | (0.010) | (0.054) | (0.011) | (0.009) |
| NOOSt | 0.000 * | 0.000 ** | −0.001 ** | 0.001 *** | 0.000 | −0.000 ** |
| | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) | (0.000) |
| _cons | −0.248 | 1.348 *** | 0.200 | 0.471 | 99.118 *** | 98.644 *** |
| | (1.692) | (0.402) | (1.232) | (1.758) | (0.308) | (0.247) |
| ID Fixed | Yes | Yes | Yes | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes | Yes | Yes | Yes |
| N | 2357 | 16,000 | 1094 | 7733 | 3915 | 26,000 |
| R2 | 0.063 | 0.065 | 0.513 | 0.450 | 0.006 | 0.009 |
| R2_a | 0.060 | 0.064 | 0.509 | 0.449 | 0.003 | 0.009 |
| F | 11.21 | 90.52 | 6.36 | 5.699 | 2.735 | 6.211 |

Notes: Standard errors in parentheses. * p < 0.1, ** p < 0.05, *** p < 0.001.
Table A3. OCV Heterogeneity Analysis (Area).

| | OCVt+1 | | | |
| | Area-Eastern | Area-Western | Area-Central | Area-Northeast |
| PCIt | 0.906 ** | 3.753 ** | 2.646 * | 0.998 |
| | (0.413) | (1.594) | (1.401) | (2.325) |
| PCI2t | −2.201 ** | −8.266 ** | −8.368 ** | −3.273 |
| | (1.052) | (4.003) | (3.792) | (6.125) |
| LRt | 0.000 | 0.000 | −0.001 | 0.001 * |
| | (0.000) | (0.000) | (0.001) | (0.000) |
| OAPt | 0.002 ** | −0.012 *** | 0.001 | 0.013 |
| | (0.001) | (0.003) | (0.002) | (0.010) |
| OSPt | −0.009 | −0.057 | −0.025 | 0.153 *** |
| | (0.018) | (0.064) | (0.018) | (0.020) |
| NOOSt | 0.000 ** | 0.000 * | 0.001 ** | 0.001 |
| | (0.000) | (0.000) | (0.000) | (0.000) |
| _cons | 1.267 * | 3.027 *** | −0.051 | −2.042 ** |
| | (0.651) | (0.861) | (0.692) | (1.006) |
| ID Fixed | Yes | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes | Yes |
| N | 12,000 | 1159 | 2101 | 602 |
| R2 | 0.047 | 0.097 | 0.065 | 0.097 |
| R2_a | 0.046 | 0.090 | 0.061 | 0.084 |
| F | 55.985 | 9.681 | 0.343 | . |

Notes: Standard errors in parentheses. * p < 0.1, ** p < 0.05, *** p < 0.001.
Table A4. OSV Heterogeneity Analysis (Area).

| | OSVt+1 | | | |
| | Area-Eastern | Area-Western | Area-Central | Area-Northeast |
| PCIt | −2.070 ** | 2.171 | 1.395 | −18.281 *** |
| | (0.766) | (4.044) | (1.690) | (4.167) |
| PCI2t | 4.732 ** | −4.266 | −2.222 | 48.097 *** |
| | (2.071) | (9.601) | (4.917) | (11.991) |
| LRt | 0.001 | 0.000 | 0.003 ** | −0.001 |
| | (0.001) | (0.001) | (0.001) | (0.009) |
| OAPt | −0.001 | 0.010 ** | 0.008 | −22.033 ** |
| | (0.001) | (0.004) | (0.007) | (10.031) |
| OSPt | 0.038 | 0.000 | −0.108 ** | 0.000 |
| | (0.067) | (0.000) | (0.041) | (0.000) |
| NOOSt | 0.001 *** | 0.001 *** | −0.002 *** | 0.014 *** |
| | (0.000) | (0.000) | (0.000) | (0.004) |
| _cons | −1.431 | −0.569 | 8.282 *** | 1975.360 ** |
| | (2.590) | (0.545) | (1.357) | (900.741) |
| ID Fixed | Yes | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes | Yes |
| N | 6053 | 370 | 1207 | 157 |
| R2 | 0.474 | 0.438 | 0.579 | 0.469 |
| R2_a | 0.474 | 0.426 | 0.575 | 0.440 |
| F | 326.709 | 23.779 | 107.349 | . |

Notes: Standard errors in parentheses. ** p < 0.05, *** p < 0.001.
Table A5. UEP Heterogeneity Analysis (Area).

| | UEPt+1 | | | |
| | Area-Eastern | Area-Western | Area-Central | Area-Northeast |
| PCIt | −0.013 | 0.071 | 0.025 | 0.293 |
| | (0.091) | (0.072) | (0.143) | (0.223) |
| LRt | 0.000 | 0.000 ** | 0.000 ** | 0.000 |
| | (0.001) | (0.000) | (0.000) | (0.001) |
| OAPt | 0.000 | 0.000 | −0.002 | 0.000 |
| | (0.000) | (0.000) | (0.002) | (0.002) |
| OSPt | 0.013 | −0.032 | −0.006 ** | 0.012 |
| | (0.012) | (0.023) | (0.003) | (0.009) |
| NOOSt | 0.000 | −0.000 ** | −0.001 ** | −0.000 * |
| | (0.000) | (0.001) | (0.001) | (0.001) |
| _cons | 98.383 *** | 99.499 *** | 98.844 *** | 99.058 *** |
| | (0.417) | (0.286) | (0.232) | (0.196) |
| ID Fixed | Yes | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes | Yes |
| N | 20,000 | 1937 | 3476 | 982 |
| R2 | 0.006 | 0.004 | 0.015 | 0.022 |
| R2_a | 0.005 | −0.001 | 0.012 | 0.012 |
| F | 3.02 | 1.16 | 2.65 | 12.515 |

Notes: Standard errors in parentheses. * p < 0.1, ** p < 0.05, *** p < 0.001.

Figure 1. Research Model.
Figure 2. Prediction curve: (a) OCV prediction curve; (b) OSV prediction curve.
Figure 3. Heterogeneity Analysis by Doctors’ professional level: (a) OCV prediction curve; (b) OSV prediction curve; (c) UEP prediction curve.
Figure 4. Heterogeneity Analysis by Disease urgency level: (a) OCV prediction curve; (b) OSV prediction curve; (c) UEP prediction curve.
Figure 5. Heterogeneity Analysis by Area: (a) OCV prediction curve; (b) OSV prediction curve; (c) UEP prediction curve.
Table 1. Summary of Variables.

| Variable | Abbreviation | Variable Definition | Data Type |
| Online consultation volume (t+1) | OCVt+1 | The number of online consultations completed by doctors in the following month. A higher value indicates a greater volume of online consultations. Apply the natural log (ln) transformation to the value. | Continuous |
| Offline service volume (t+1) | OSVt+1 | The number of offline outpatient visits completed by doctors in the following month. A higher value indicates a greater volume of offline services. Apply the natural log (ln) transformation to the value. | Continuous |
| User evaluation performance (t+1) | UEPt+1 | The subjective rating given by platform patients to doctor services. The value ranges from 0 to 100; a score closer to 100 indicates higher patient satisfaction. | Continuous |
| Proactive crafting index (t) | PCIt | The overall degree to which doctors actively shape their work behavior on the platform, calculated using the entropy weight method. The value ranges from 0 to 1; a value closer to 1 indicates a higher level of proactive crafting behavior. | Continuous |
| Latest Review (t) | LRt | The latest number of comments from patients on the platform about the doctor. A higher value reflects a greater number of recent evaluations. | Continuous |
| Online average price (t) | OAPt | The average price for doctors’ online consultations. A higher value indicates a higher online service fee. | Continuous |
| Offline service price (t) | OSPt | Doctors’ offline consultation prices. A higher value indicates a higher fee. | Continuous |
| Number of online and offline services (t) | NOOSt | Total number of online and offline consultations by doctors. A higher value indicates a greater total number of services provided. | Continuous |
| Disease urgency level (t) | DULt | The urgency level of the disease consulted by the patient. Determined by whether the consultation was with an emergency department; acute = 1, non-acute = 0. | Binary (Nominal) |
| Doctors’ professional level (t) | DPLt | Professional title hierarchy of doctors. Categorized by doctor rank; chief physicians are coded as high-level (1), all others as non-high-level (0). | Binary (Ordinal) |
| Area (t) | Areat | Doctors’ location. Categorized by region based on National Bureau of Statistics classifications: Eastern = 1, Western = 2, Central = 3, Northeastern = 4. | Categorical |
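Table 1 states that PCIt is an entropy-weighted composite of doctors’ proactive behaviors. The standard entropy weight calculation can be sketched as follows in Python; the indicator matrix and its values are hypothetical, and the three columns merely mimic the index’s stated dimensions (consultation rate, number of consultations, response speed).

```python
import numpy as np

def minmax(X):
    """Column-wise min-max normalization to [0, 1]."""
    X = np.asarray(X, dtype=float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def entropy_weights(X_norm, eps=1e-12):
    """Entropy weights for an (n_doctors, n_indicators) normalized matrix:
    indicators with more cross-doctor dispersion receive larger weights."""
    n = X_norm.shape[0]
    P = (X_norm + eps) / (X_norm + eps).sum(axis=0)   # proportions per indicator
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)
    divergence = 1.0 - entropy                        # degree of differentiation
    return divergence / divergence.sum()

# Hypothetical indicators for four doctors: consultation rate,
# number of consultations, response speed (all positively oriented).
X = np.array([[0.9, 120, 0.8],
              [0.4,  35, 0.5],
              [0.7,  60, 0.9],
              [0.2,  10, 0.1]])
X_norm = minmax(X)
w = entropy_weights(X_norm)   # weights sum to 1
pci = X_norm @ w              # composite index, bounded in [0, 1]
print(np.round(w, 3), np.round(pci, 3))
```

Because the weights are nonnegative and sum to one, the resulting composite stays in [0, 1], matching the PCI range described in Table 1.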
Table 2. Descriptive Statistics.

| VarName | Obs | Mean | SD | Min | Median | Max |
| OCVt+1 | 23,455 | 1.602 | 1.346 | 0 | 1.386 | 5.659 |
| OSVt+1 | 10,969 | 1.985 | 1.499 | 0 | 1.792 | 5.591 |
| UEPt+1 | 42,500 | 98.392 | 3.311 | 75 | 99.700 | 100 |
| PCIt | 30,102 | 0.243 | 0.120 | 0 | 0.287 | 1 |
| LRt | 42,211 | 129.092 | 645.516 | 0 | 27 | 29,132 |
| OAPt | 42,504 | 104.191 | 103.566 | 0 | 75 | 1333.333 |
| OSPt | 42,504 | 27.240 | 16.337 | 4 | 25 | 100 |
| NOOSt | 42,504 | 1778.830 | 4458.605 | 0 | 379 | 98,797 |
| DULt | 42,504 | 0.133 | 0.339 | 0 | 0 | 1 |
| DPLt | 42,504 | 0.715 | 0.451 | 0 | 1 | 1 |
| Areat | 42,504 | 13.294 | 12.027 | 1 | 11 | 29 |

Notes: This table is based on original panel data collected from www.wedoctor.com (accessed on 7 June 2023).
Table 3. Correlation Analysis.

| | OCVt+1 | OSVt+1 | UEPt+1 | PCIt | LRt | OAPt | OSPt | NOOSt | DPLt | DULt | Areat |
| OCVt+1 | 1 |
| OSVt+1 | 0.286 *** | 1 |
| UEPt+1 | 0.076 *** | 0.018 * | 1 |
| PCIt | 0.160 *** | −0.011 *** | 0.054 *** | 1 |
| LRt | 0.375 *** | 0.215 *** | 0.042 *** | 0.236 *** | 1 |
| OAPt | 0.051 *** | 0.280 *** | 0.024 *** | −0.062 *** | −0.001 | 1 |
| OSPt | −0.043 *** | 0.281 *** | 0.010 ** | −0.081 *** | −0.043 *** | 0.485 *** | 1 |
| NOOSt | 0.348 *** | 0.464 *** | 0.016 *** | 0.114 *** | 0.461 *** | 0.258 *** | 0.156 *** | 1 |
| DPLt | −0.063 *** | 0.188 *** | 0.029 *** | −0.085 *** | −0.042 *** | 0.287 *** | 0.451 *** | 0.125 *** | 1 |
| DULt | −0.015 ** | −0.013 | 0.021 *** | 0.014 ** | 0.006 | 0.030 *** | −0.012 ** | −0.032 *** | −0.006 | 1 |
| Areat | 0.002 | −0.254 *** | −0.020 *** | −0.031 *** | −0.008 * | −0.179 *** | −0.419 *** | −0.106 *** | −0.023 *** | −0.035 *** | 1 |

Notes: * p < 0.1, ** p < 0.05, *** p < 0.001.
Table 4. Multicollinearity Test.

| Var | OCVt+1 | OSVt+1 | UEPt+1 | PCIt | LRt | OAPt | OSPt | NOOSt | DPLt | DULt | Areat |
| VIF | 1.32 | 1.36 | 1.06 | 1.08 | 1.25 | 1.37 | 2.11 | 1.56 | 1.32 | 1.01 | 1.52 |
| 1/VIF | 0.757 | 0.735 | 0.947 | 0.929 | 0.803 | 0.728 | 0.474 | 0.641 | 0.757 | 0.992 | 0.658 |
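For reference, variance inflation factors like those in Table 4 are obtained by regressing each variable on the remaining ones; a self-contained Python sketch on synthetic data (not the study’s variables) is shown below.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X, obtained by
    regressing that column on all the others plus an intercept:
    VIF_j = 1 / (1 - R2_j)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Synthetic check: x2 is nearly collinear with x1, x3 is independent,
# so the first two VIFs are large and the third is close to 1.
rng = np.random.default_rng(1)
x1 = rng.normal(size=1000)
x2 = x1 + 0.1 * rng.normal(size=1000)
x3 = rng.normal(size=1000)
v = vif(np.column_stack([x1, x2, x3]))
print(np.round(v, 1))
```

The values in Table 4 are all well below the common threshold of 10, which is why multicollinearity is not considered a concern in the reported models.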
Table 5. Fixed effects analysis results.

| | (1) OCVt+1 | (2) OSVt+1 | (3) UEPt+1 |
| PCIt | 1.279 ** | −3.413 ** | 0.011 |
| | (0.460) | (1.351) | (0.167) |
| PCI2t | −2.809 ** | 9.341 ** | |
| | (1.213) | (3.831) | |
| LRt | 0.001 | 0.000 | 0.000 ** |
| | (0.000) | (0.001) | (0.000) |
| OAPt | 0.003 ** | 0.001 | −0.001 |
| | (0.001) | (0.001) | (0.000) |
| OSPt | −0.017 | 0.009 | 0.003 |
| | (0.012) | (0.050) | (0.008) |
| NOOSt | 0.001 *** | 0.001 *** | −0.001 ** |
| | (0.000) | (0.000) | (0.001) |
| _cons | 1.218 ** | −0.077 | 98.754 *** |
| | (0.411) | (1.647) | (0.243) |
| ID Fixed | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes |
| N | 18,000 | 8827 | 30,000 |
| R2 | 0.064 | 0.443 | 0.004 |
| R2_a | 0.064 | 0.442 | 0.004 |
| F | 97.673 | 404.094 | 4.843 |

Notes: Standard errors in parentheses. ** p < 0.05, *** p < 0.001.
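Taking the Table 5 point estimates at face value, the implied turning points of the two quadratic specifications can be checked with simple arithmetic on the reported coefficients (estimation uncertainty is ignored here):

```python
# A quadratic specification y = b1 * PCI + b2 * PCI^2 + controls
# has its extremum where dy/dPCI = b1 + 2 * b2 * PCI = 0.
def turning_point(b1, b2):
    return -b1 / (2.0 * b2)

pci_star_ocv = turning_point(1.279, -2.809)   # peak of online consultation volume
pci_star_osv = turning_point(-3.413, 9.341)   # minimum of offline service volume
print(round(pci_star_ocv, 3), round(pci_star_osv, 3))  # 0.228 and 0.183
```

Both values fall inside the observed PCI range and close to its sample mean of 0.243 (Table 2), consistent with interior turning points rather than boundary effects.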
Table 6. Robustness Test (1% Tail Shrinkage).
Table 6. Robustness Test (1% Tail Shrinkage).
| | OCVt+1 | OSVt+1 | UEPt+1 |
|---|---|---|---|
| PCIt | 1.466 *** | −1.719 ** | −0.006 |
| | (0.385) | (0.739) | (0.076) |
| PCI2t | −3.348 *** | 4.456 ** | |
| | (0.986) | (2.039) | |
| LRt | 0.001 | 0.000 | 0.000 ** |
| | (0.000) | (0.001) | (0.000) |
| OAPt | 0.002 ** | 0.001 | 0.000 |
| | (0.001) | (0.001) | (0.000) |
| OSPt | −0.016 | 0.008 | 0.007 |
| | (0.012) | (0.05) | (0.008) |
| NOOSt | 0.000 ** | 0.001 *** | −0.000 ** |
| | (0.001) | (0.001) | (0.001) |
| _cons | 1.375 *** | −0.122 | 98.657 *** |
| | (0.389) | (1.633) | (0.232) |
| ID Fixed | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes |
| N | 18,000 | 8827 | 30,000 |
| R2 | 0.063 | 0.451 | 0.008 |
| R2_a | 0.062 | 0.451 | 0.008 |
| F | 100.639 | 420.567 | 6.728 |

Notes: Standard errors in parentheses; ** p < 0.05, *** p < 0.001.
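The "1% tail shrinkage" in Table 6 corresponds to winsorizing the continuous variables at their 1st and 99th percentiles, so extreme observations are clipped rather than dropped. A minimal sketch of that transformation (illustrative data, not the study's variables):

```python
import numpy as np

def winsorize(x, lower=0.01, upper=0.99):
    """Clip values below the `lower` quantile and above the `upper` quantile."""
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

# Toy example: 99 ordinary values plus one extreme outlier.
x = np.concatenate([np.arange(1, 100, dtype=float), [1e6]])
w = winsorize(x)
print(x.max(), w.max())  # the outlier is clipped to the 99th-percentile value
```

Because only the tails move, coefficient signs and the inverted-U/U-shape pattern surviving this check (as in Table 6) suggests the baseline results are not driven by outliers.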
Table 7. Robustness test (Adjusting control variables).
| | OCVt+1 | OSVt+1 | UEPt+1 |
|---|---|---|---|
| PCIt | 1.473 *** | −1.292 * | −0.002 |
| | (0.385) | (0.676) | (0.076) |
| PCI2t | −3.375 *** | 3.223 * | |
| | (0.985) | (1.855) | |
| NNRt | 0.015 | 0.037 ** | −0.062 ** |
| | (0.01) | (0.014) | (0.021) |
| OAPt | 0.002 ** | 0.001 | 0.000 |
| | (0.001) | (0.001) | (0.000) |
| OSPt | −0.016 | 0.008 | 0.006 |
| | (0.012) | (0.05) | (0.008) |
| NOOSt | 0.000 ** | 0.001 *** | 0.001 |
| | (0.000) | (0.000) | (0.000) |
| _cons | 1.378 *** | 0.034 | 98.666 *** |
| | (0.387) | (1.617) | (0.238) |
| ID Fixed | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes |
| N | 18,000 | 8827 | 30,000 |
| R2 | 0.063 | 0.452 | 0.014 |
| R2_a | 0.063 | 0.452 | 0.013 |
| F | 90.615 | 389.142 | 6.792 |

Notes: Standard errors in parentheses; * p < 0.1, ** p < 0.05, *** p < 0.001.
Table 8. Robustness Test (Improved Calculation Method).
| | OCVt+1 | OSVt+1 | UEPt+1 |
|---|---|---|---|
| NewPCIt | 1.635 *** | −2.404 *** | −0.036 |
| | (0.462) | (0.44) | (0.123) |
| NewPCI2t | −6.797 ** | 12.568 *** | |
| | (2.316) | (1.924) | |
| LRt | 0.001 | 0.000 | 0.000 ** |
| | (0.000) | (0.001) | (0.000) |
| OAPt | 0.002 ** | 0.001 | 0.000 |
| | (0.001) | (0.001) | (0.000) |
| OSPt | −0.016 | 0.008 | 0.007 |
| | (0.012) | (0.05) | (0.008) |
| NOOSt | 0.000 *** | 0.001 *** | −0.000 ** |
| | (0.000) | (0.001) | (0.000) |
| _cons | 1.331 *** | 0.175 | 98.660 *** |
| | (0.39) | (1.633) | (0.231) |
| ID Fixed | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes |
| N | 18,000 | 8826 | 30,000 |
| R2 | 0.063 | 0.454 | 0.008 |
| R2_a | 0.063 | 0.453 | 0.008 |
| F | 100.359 | 440.954 | 6.724 |

Notes: Standard errors in parentheses; ** p < 0.05, *** p < 0.001.
Table 9. Instrumental Variable Method.
| | PCIt | PCI2t | OCVt+1 | PCIt | PCI2t | OSVt+1 | PCIt | UEPt+1 |
|---|---|---|---|---|---|---|---|---|
| L.PCIt | −0.05 ** | −0.088 *** | | −0.111 ** | −0.074 *** | | 0.222 *** | |
| | (0.033) | (0.013) | | (0.055) | (0.021) | | (0.009) | |
| L.PCI2t | 0.629 *** | 0.437 *** | | 0.737 *** | 0.379 *** | | | |
| | (0.083) | (0.032) | | (0.143) | (0.055) | | | |
| PCIt | | | 5.443 ** | | | −37.303 *** | | −0.194 |
| | | | (2.598) | | | (13.344) | | (0.232) |
| PCI2t | | | −13.955 ** | | | 93.726 *** | | |
| | | | (5.696) | | | (31.165) | | |
| LRt | 0.001 | 0.000 | 0.000 ** | 0.001 | 0.000 | 0.000 ** | 0.000 | 0.000 ** |
| | (0.000) | (0.001) | (0.000) | (0.000) | (0.001) | (0.000) | (0.001) | (0.000) |
| OAPt | 0.000 | −0.000 ** | 0.002 ** | 0.001 | −0.000 ** | 0.003 * | 0.001 | 0.001 |
| | (0.001) | (0.000) | (0.001) | (0.000) | (0.001) | (0.002) | (0.001) | (0.002) |
| OSPt | 0.002 ** | 0.001 | −0.015 | 0.004 | 0.002 | 0.019 | −0.001 | 0.005 |
| | (0.001) | (0.000) | (0.013) | (0.003) | (0.001) | (0.048) | (0.001) | (0.007) |
| NOOSt | 0.000 | 0.000 *** | 0.000 *** | 0.001 | 0.000 *** | 0.000 | 0.000 ** | −0.000 ** |
| | (0.001) | (0.001) | (0.001) | (0.000) | (0.001) | (0.000) | (0.000) | (0.000) |
| ID Fixed | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Month Fixed | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| N | 14,827 | 14,827 | 14,827 | 6000 | 6000 | 6000 | 16,154 | 16,154 |
| R2 | | | 0.006 | | | 0.635 | | 0.001 |
| F | | | 11.455 | | | 20.08 | | 1.644 |
| CD Wald F | | | 93.058 | | | 17.14 | | 662.02 |
| SW S stat. | | | 11.987 | | | 35.434 | | 0.981 |

Notes: Standard errors in parentheses; * p < 0.1, ** p < 0.05, *** p < 0.001.
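Table 9 instruments PCIt and PCI2t with their one-period lags and re-estimates each model by two-stage least squares. A bare-bones 2SLS sketch on simulated data (no fixed effects, a single endogenous regressor, and a hypothetical instrument — a sketch of the mechanics, not the authors' exact estimator) illustrates why the approach corrects endogeneity bias:

```python
import numpy as np

def tsls(y, X_endog, Z):
    """Two-stage least squares with intercept.

    Stage 1: regress the endogenous regressors on the instruments Z
             and keep the fitted values.
    Stage 2: regress y on the fitted values.
    Returns [intercept, endogenous coefficients...].
    """
    n = len(y)
    ones = np.ones((n, 1))
    W = np.hstack([ones, Z])
    # Stage 1: project the endogenous regressors onto the instrument space.
    X_hat = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]
    # Stage 2: use the projected regressors in place of the originals.
    V = np.hstack([ones, X_hat])
    return np.linalg.lstsq(V, y, rcond=None)[0]

# Simulated check: x is endogenous (shares the unobserved shock u with y);
# z shifts x but is independent of u, so it is a valid instrument.
rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=(n, 1))
u = rng.normal(size=n)
x = (z[:, 0] + u + rng.normal(size=n)).reshape(-1, 1)
y = 2.0 * x[:, 0] + u + rng.normal(size=n)
beta = tsls(y, x, z)
print(beta[1])  # close to the true slope of 2.0; plain OLS would be biased upward
```

The first-stage F and Cragg–Donald Wald F statistics reported in Table 9 check that the lagged instruments are strong, i.e., that Stage 1 explains enough variation in PCIt and PCI2t for the second-stage estimates to be reliable.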