Article

The Digital Shock: Administrative Burden and the Governance–Service Trade-Off in Indonesia’s Public Service Reform

Fakultas Ilmu Komunikasi, Universitas Padjadjaran, Bandung 40135, Indonesia
*
Author to whom correspondence should be addressed.
Adm. Sci. 2026, 16(3), 159; https://doi.org/10.3390/admsci16030159
Submission received: 2 February 2026 / Revised: 15 March 2026 / Accepted: 18 March 2026 / Published: 23 March 2026

Abstract

This study explores the impact of implementing a mandatory e-government reform within Indonesia’s national ISBN service (Regulation No. 5/2022). It examines the effects of this policy shift on public service quality and the resulting administrative burden placed on stakeholders, specifically publishers. The study employs an explanatory sequential mixed-methods design (QUAN → qual). The first phase analyzes longitudinal quantitative data from annual Public Satisfaction Surveys (2021–2024). The subsequent qualitative phase analyzes thousands of archival records, including complaint logs and policy memos, to contextually explain the quantitative findings. The results indicate that the reform induced a severe digital shock, causing the Public Satisfaction Index (IKM) to plummet from Good in 2021 to Poor (75.03) in 2022. The most significant declines were observed in the Procedures (2.79/4) and Service Time (2.30/4) indicators. Qualitative analysis reveals that this collapse was driven by specific policy-induced frictions: the mandatory implementation of a Single Account system and the intentional tightening of governance and validation parameters. While limited in statistical generalizability due to its single-case archival design, this study clearly demonstrates that public managers must recognize the inherent trade-off between tightening institutional governance (control) and maintaining public service quality (satisfaction). Proactive friction management and user-centric change management are essential to mitigating such digital shocks. Ultimately, this study offers a unique longitudinal analysis that forensically links quantitative satisfaction metrics with qualitative policy frictions.

1. Introduction

The global public service landscape has undergone a seismic shift over the past two decades. Fueled by new technology and increased public demand for efficiency, the public sector is under constant pressure to modernize. The drive for digital transformation is no longer merely a policy choice but a strategic imperative (Nosova et al., 2021). The hopes of e-government have triggered a series of reforms all over the world (Zou et al., 2023), ranging from improved operational efficiency and radical transparency to comprehensive governance improvements. On a grand scale, governments are aggressively replacing decades-old paper-based service delivery models with integrated digital platforms that will significantly reduce bureaucracy (Bhatia & Bhatia, 2025; Haeruddin et al., 2025).
However, this should not be confused with a simple technological replacement. It is not as simple as substituting typewriters with computers. Instead, it is a revolutionary transition in the social contract between the state and its participants, whether individual citizens or private enterprises. Digitalization fundamentally changes assumptions, transforms processes, and reimagines every social encounter with the state. In short, the field’s academic and practical value lies not so much in the technology itself as in the way we resolve this difficult transition. Ultimately, the effectiveness of digital reform will be gauged—beyond the back-end efficiency of institutions—by how well external users use, accept, and are satisfied by the system (Bokhari et al., 2025).
In fact, the current state of knowledge shows a divided picture of the actual impact of e-government. The initial academic literature was characterized by a wave of optimism, with reports of the successful launch of new portal projects and evidence of beneficial outcomes regarding decreased operating costs and faster service delivery (Siemiatycki, 2008). However, as e-government takes shape and the complexity of digitized delivery becomes more evident, a more critical and introspective body of research has emerged. This newer literature invariably offers a much more critical reflection: many digital changes, especially those mandated top-down, do not live up to their promises.
These interventions paradoxically lead to new and more complex forms of administrative burden (Moynihan et al., 2015; Carey et al., 2020; Herd et al., 2023). These burdens are not exclusively financial, but rather cognitive and procedural (Moynihan et al., 2015). Digital reforms frequently shift the burden of compliance—such as the overhead of learning new rules, gathering new documents, and navigating confusing systems—from bureaucracies to users. This leads to what has been termed “service friction,” an experience filled with confusion and frustration. Furthermore, currently published work in this domain is often static. Numerous studies have examined only a small component of user satisfaction at a single moment in time (Gluck, 1996; Harrati et al., 2016), neglecting the dynamic mechanisms of policy implementation, user resistance, and institutional persistence.
To understand this complex and contradictory landscape, simply adopting one methodological lens is insufficient. A purely quantitative approach, such as analyzing satisfaction survey data, may excel at diagnosing symptoms—it can tell us with precision that a specific service metric has plummeted. However, it cannot sufficiently explain the root causes of the collapse. It identifies the symptom, but not the underlying cause. On the other hand, a qualitative approach, such as analyzing complaint archives, yields rich narrative evidence of user friction and confusion. However, without a measurable macro-view, it is impossible to determine whether these complaints are simply anecdotal cases or representative of a systemic shock. Therefore, to fully capture both the macro-effects and the micro-mechanisms of these processes, a methodological bridge is required (Kincaid, 2012).
Based on this rationale, this study adopts an explanatory sequential mixed-methods design. The justification for this sequence is inherently investigative and forensic. This study emerges from a well-documented quantitative anomaly: a pronounced drop in service metrics that demands a clear causal explanation. The initial quantitative phase provides a longitudinal statistical analysis to measure the impact and identify specific problem areas. The subsequent qualitative phase is purposefully designed to unpack these discrepancies and offer a rich contextual account. The rationale for this case study stems from its unique approach to triangulating authentic archival data (Schlunegger et al., 2024). Longitudinal survey data (QUAN) establish a broader framework, while thousands of unfiltered user complaint logs and internal policy memos (QUAL) provide explanatory material that cannot be obtained by other means.
Therefore, this paper’s primary aim is to comprehensively assess both the implementation and impact of the national ISBN service reform (Perka No. 5/2022) in Indonesia over a four-year period (2021–2024). Specifically, this study has three objectives. First, to quantitatively trace the longitudinal impacts of this policy intervention on stakeholder (publisher) satisfaction by measuring service metrics over time—namely, the pre-policy baseline, the implementation shock, and the recovery phase. Second, to qualitatively assess the sources of service friction, administrative burden, and procedural confusion experienced by stakeholders during the transition period. Third, to integrate these findings and create a complete explanatory model of the “digital shock” process, analyzing a fundamental trade-off that is commonly faced but rarely discussed: the conflict between strengthening institutional governance and enhancing the delivery of public services.
In this study, we explicitly define digital shock not merely as a synonym for administrative burden, but as an acute, temporal phenomenon. While administrative burden often refers to a chronic state of friction, digital shock is an abrupt, disruptive spike in learning and compliance costs triggered immediately following a top-down, mandatory technological intervention. Furthermore, we posit the governance–service trade-off as a central theoretical proposition: interventions designed to maximize institutional control and compliance often inadvertently destroy user-relevant value during the transition phase. As recent scholarship indicates, digital transformation is only sustainable when it produces recognizable user value, and complex public-sector reforms ultimately depend on coordination, communication, and robust stakeholder engagement to navigate these trade-offs (Meuleman, 2021).

2. Theoretical Framework

2.1. E-Government Transformation: The Promise of Efficiency and the Reality of Service

The digital age has revolutionized established public administration, and e-government has become an essential aspect of bureaucratic modernization. New Public Management (NPM) emphasizes productivity, accountability, and customer orientation (Kim, 2021; Alkaabi et al., 2024). These principles underpin the current approach of governments worldwide that adopt digital technologies to provide a comprehensive array of services. The goals of this adoption include enhancing internal operational efficiency, achieving cost savings, improving transparency and accountability (Nurfadila, 2024; Shulzhyk et al., 2024), and ultimately improving service quality and citizen satisfaction. The early e-government literature has often been regarded as overly optimistic, focusing heavily on feasibility surveys and adoption models (e.g., the Technology Acceptance Model), along with success stories highlighting how technology can reduce bureaucracy and render government more accessible to all people (Balaskas et al., 2022; Li et al., 2024; Jeong, 2025).
As e-government has developed, the research environment has matured and grown increasingly complex and critical. Currently, practical implementations often produce ambiguous results. On the ground, the promises made by technocrats frequently do not match the true situation. Many e-government platforms do not eliminate bureaucracy but essentially digitalize old processes without any fundamental changes, a phenomenon often referred to as “paving the cow path” (Ramadhani, 2025). Moreover, various studies show that these digital projects are often so complicated for end users that they tend to be rejected or resisted, thereby exacerbating the digital divide among those in need (Norris, 2003; Pirhonen et al., 2020). This issue is addressed in recent research, which conceptualizes e-government as a sophisticated socio-technical relationship (Park & Park, 2025) rather than a cure-all. This implies that the success of a project relies not only on technological developments but also on organizational willingness, user digital literacy, and the appropriateness of the policy framework.

2.2. Public Service Quality in the Digital Era: From SERVQUAL to Administrative Burden

For decades, the performance analysis of public services has been circumscribed by a rubric largely borrowed from the private sector. The benchmark instrument is service quality, often represented by seminal models such as SERVQUAL (AlOmari, 2020; MM & Jasim, 2020). This model attempts to measure the perceived gap between users’ expectations and outcomes across a set of predefined domains: reliability, responsiveness, assurance, empathy, and tangibles. In many instances—such as in Indonesia, where the structured Public Satisfaction Survey (SKM) is commonly used to measure bureaucratic performance—process measures such as procedures, time, and cost are given numerical scores (Akbar et al., 2012, 2015).
In the context of the new millennium and the increasingly complex, technology-enabled e-government environment, however, this traditional satisfaction approach has proven to be reductive. It has begun to reveal fundamental limitations. In practice, this methodology is restricted to capturing symptoms. It can describe what users experience—such as extreme dissatisfaction reflected in a “2.30 score” for waiting time—but the methodology is blind to why this phenomenon occurs. In the digital environment, service quality is no longer solely about good platforms or faithful implementers. It has essentially pivoted to the costs users incur simply by using the platform. This is where academic discourse shifts toward the notion of administrative burden, a far more accurate and pertinent theoretical lens through which to analyze failures and frictions in today’s digital spaces.
This represents a dramatic conceptual shift. First introduced by Burden et al. (2012) and further elaborated upon by Moynihan et al. (2015), administrative burden is defined as the costs borne by individuals who interact with the government. It turns our attention to costs beyond mere financial expenses, particularly since the majority of public services are ostensibly free. This framework emphasizes three commonly neglected forms of hidden costs that governments, intentionally or unintentionally, impose on their users.
The first is the learning cost. This involves the cognitive effort required by users to learn new rules, grasp suddenly transformed processes, and navigate confusing digital gatekeepers. For instance, when a new policy is implemented—such as shifting from a distributed multi-account registration mechanism to a single, centralized account—the transition is highly complex, and for thousands of established users, the learning cost can be exorbitant. Their existing knowledge becomes obsolete, leaving them disoriented. Confusion, uncertainty, and a wave of procedural errors are the empirical results of this high learning cost.
The second is the compliance cost, which encompasses material and temporal dimensions. This expense includes the time, effort, and material resources users expend simply to follow the bureaucracy’s new rules. Specific to this case study, temporal costs stem from the additional effort dedicated to gathering new legal documents, creating and maintaining a new website as a prerequisite, or enduring the tedium of failed uploads associated with stringent technical requirements. As Buffat (2015) insists, e-government reforms often simply shift the burden of compliance from the bureaucrat’s desk to the end user’s screen rather than removing it altogether.
The third, most corrosive, and hardest to measure is the psychological cost. This represents the emotional and cognitive toll of dealing with a rigid bureaucracy, including severe frustration, stress, anxiety, and feelings of helplessness. Repeated rejections of documents without explanation, confusing system error messages, and a deep-seated sense of confinement within the “black box” of digital bureaucracy are crucial contributors to this psychological cost. The administrative burden framework thus offers a particularly effective lens to explain why quantitative satisfaction metrics (QSMs) tend to plummet even when the service remains technically functional.
The acceptance of a new mandatory policy, which compels people to adopt a specific system, is fundamentally different from voluntary technology adoption (Brown et al., 2002). What can be termed “digital shock” occurs when a top-down policy completely revolutionizes work processes that have been established and understood for years. Existing users experience this as an acute and immediate disruption, where their procedural knowledge, fine-tuned workflows, and expectations suddenly become obsolete.
This shock is not abstract; it surfaces in the real world as service friction. This line of thought, aptly addressed by Heggertveit and Rydén (2024) in their account of the barriers that digital bureaucratic systems impose on users, encapsulates the lived experience of administrative burden. This constitutes much more than a poor user experience: friction manifests as a series of new, exhausting barriers that users face, whether intentionally designed or not. Viewed in this light, the enormous volume of post-implementation complaint data in this study takes on new meaning. It is not merely a symptom of systemic failure or technical bugs; it is rich, real-world evidence of continual friction. The classical literature on policy implementation (Stoker & John, 2009) has documented the inevitable gap between policymakers' intentions (policy design) and practical execution (policy practice). What makes this gap unique in the digital realm is that it is immediate, measurable, and permanently stored in data logs.
This arguably represents the most profound and neglected gap in technocratic e-government evaluations, providing insight into the underlying trade-off between two often orthogonal goals: governance and service delivery. Most of the evaluation literature implicitly assumes that the primary focus of any digital reform is strictly to enhance users' access to services. However, in many public sector settings, especially those that are regulatory or allocative in nature, the main objective may be the exact opposite. Typically, the goal is to increase control, rigidly enforce compliance, and limit abuse or moral hazard.
This case study of the ISBN service perfectly illustrates this conflict. When internal archival data revealed the suspected misuse or improper application of ISBNs, the institutional response was not to ease the service, but to make it more restrictive. Governance-driven actions included tightening book criteria (for instance, rejecting previously accepted book chapters or monographs), implementing a Single Account system to consolidate oversight, and adding new legal mandates. Hence, the sources of friction users confronted—lengthy wait times, puzzling rejections, and complex procedures—were not random system bugs. Rather, they represented deliberate “features” meant to screen applicants, enforce compliance standards, and re-empower institutional control.
This research links the e-government literature to Evans’ (2016) classical concept of street-level bureaucracy. In Evans’ view, frontline bureaucrats (such as police or social workers) are the actual decision-makers, as they possess the discretion to manage limited resources. However, in e-government, we are witnessing the evolution of this phenomenon into a “digital street-level bureaucracy” (Buffat, 2015). Rules, criteria, and discretion are now hardcoded directly into the workflows of the platforms themselves. System validators who reject a manuscript may apply stricter rules not because they are uncooperative, but because new policies require enhanced oversight, and the digital platforms rigidly constrain their human discretion.
The recovery of the IKM scores observed in 2023 and 2024 warrants further theoretical attention. This rebound should not be viewed merely as the natural endpoint of technological assimilation, but rather as the outcome of dynamic adaptation mechanisms. The recovery phase highlights the vital role of collaborative governance, where stakeholder coordination and iterative communication mechanisms eventually smoothed the friction of the new policy environment (Valentina et al., 2025). Furthermore, navigating this recovery likely depended on more than just procedural adjustments. It required organizational integrity, enhanced responsiveness, and the extra-role behavior of frontline staff (digital street-level bureaucrats) who guided confused users through the new compliance maze, factors that are critical enabling conditions for stronger public management performance (Saputra et al., 2026).

2.3. Synthesis and Research Gaps

This overview of the literature indicates that academic discourse has a robust foundation across several important domains. The potential benefits of e-government are well-documented (Weerakkody et al., 2015), instruments for measuring service satisfaction are established (Alawneh et al., 2013), and the theoretical underpinnings for understanding administrative burden are robust (Moynihan et al., 2015). However, an evident and persistent gap remains regarding in-depth, longitudinal empirical studies utilizing mixed-methods approaches. More specifically, the literature is sorely missing data-rich case studies capable of forensically capturing the complete cycle of a mandatory e-government reform.
Addressing this gap requires studies that simultaneously achieve five objectives: first, quantitatively establishing a pre-policy baseline; second, quantitatively assessing the effects of digital shocks immediately following digitalization; third, qualitatively identifying sources of friction and administrative burden (learning, compliance, and psychological costs) based solely on user records; fourth, theoretically examining the inevitable trade-offs between governance goals and the service experience; and fifth, tracking long-term recovery or adaptation in subsequent years. Therefore, this research explicitly aims to fill these gaps through a comprehensive analysis of this entire process.

3. Materials and Methods

3.1. Research Design

This research method was specifically chosen because it deals with an intrinsically dualistic phenomenon. The two aspects of the policy implementation phenomenon under study are inseparable: the objectively measurable impact and the subjectively experienced phenomenon. This means that adopting only a single methodological lens would leave gaps in our knowledge and potentially result in superficial answers. A purely quantitative methodology, based solely on the statistical analysis of Public Satisfaction Survey data, may excel at detecting symptoms. It might identify exactly what is happening by providing the raw facts of plummeting satisfaction scores in 2022. However, it would be unable to explain why it occurred; it could diagnose the symptom but not the underlying cause.
Conversely, a purely qualitative approach, such as a close reading of complaint logs, would generate narratively rich data detailing the friction and confusion experienced by publishers. But without a macro-perspective or a measurable baseline, it would be impossible to determine whether these complaints represent isolated anecdotes or a systemic shock. Thus, integrating these two approaches is not merely a matter of methodological convenience, but an epistemological necessity. Indeed, according to Tang (2025), analytical integrity in this type of research is only achievable when mixed methods work in tandem. This methodology enables researchers to construct a robust bridge, leveraging the strength of quantitative data to map broad trends and qualitative data to illuminate deep context. Therefore, this synergy overcomes the drawbacks of a mono-method approach (Saunders & Darabi, 2024). Only in this manner can we construct a comprehensive and in-depth explanatory narrative.

3.2. Explanatory Sequential Design

This study adopts an explanatory sequential design as its methodological architecture. The empirical framework for this entire inquiry stems from a straightforward quantitative anomaly: unexpected results emerged from the 2022 Public Satisfaction Survey, where the Public Satisfaction Index (IKM) fell into the “Poor” category. More specifically, these quantitative data directed researchers to two primary problem areas: Procedures and Service Time. This necessitated a deeper explanation of these unexpected quantitative results, challenging the linear perspective often associated with digitally enhanced services.
An explanatory sequential design is built precisely for investigative scenarios such as this. Qualitative findings can assist research efforts in illustrating, untangling, and interpreting unexpected, unclear, or surprising quantitative outcomes (Morrison et al., 2014; Lim, 2025). Consequently, the qualitative (QUAL) phase of the study—a thematically oriented analysis of thousands of complaint and stakeholder query records from 2022 to 2024—serves as a deliberately designed, targeted follow-up. Researchers did not merely explore the qualitative data in a general sense; they interrogated it directly to resolve the enigmas uncovered during the quantitative (QUAN) phase. This methodological flow enabled the researchers to precisely associate the drop in Procedures scores with the implementation of the Single Account policy, and the decrease in Service Time scores with the tightening of manuscript validation standards.

3.3. Using Archival Data in the Case Study Setting

This research takes the form of an in-depth single case study. It analyzes the enactment of the new regulations in 2022—considered a major piece of e-government reform—and the subsequent impact, friction, and adaptation dynamics from the perspective of e-government transformation. The strength of this methodological architecture lies primarily in its reliance on rich internal archival data. This is a crucial point of distinction. Unlike primary data collection methods, such as retrospective interviews or new surveys—which are often prone to researcher bias and respondent recall bias—the data in this study offer an objective historical record.
The data used in this study consist of documents such as daily complaint logs, institutional annual SKM reports, and the minutes of socialization meetings, all of which serve as real-time digital traces of the policy implementation process as it unfolded. Two major methodological advantages arise from using archival data as a primary source. First, it is an unobtrusive technique that eliminates respondent reactivity (Kazdin, 1979). The complaint data analyzed are raw, concurrent, and unfiltered manifestations of the friction publishers experienced at that exact moment. This confers a substantial level of ecological validity to the findings. Second, the analysis of archival documents provides a rare methodological privilege: the opportunity to obtain longitudinal data (Neale & Bishop, 2012). Establishing a strong quantitative baseline from the 2021 satisfaction reports prior to any policy intervention would be highly difficult within the framework of other primary data collection methods, yet it is critical for elucidating the central argument of this study.

3.4. Ensuring Rigor and Transparency

Methodological rigor in qualitative and mixed-methods research is distinct from rigor in purely quantitative research. It is achieved not through statistical generalizability, but rather through trustworthiness (Harrison et al., 2020). To ensure this, the researcher applied systematic, multi-layered data triangulation strategies.
Instead of relying on a single, potentially biased source, the study’s findings are founded on the convergence and cross-validation of three separate but complementary data sources. These three components consist of: (1) quantitative satisfaction survey data (annual IKM reports); (2) qualitative complaint data (helpdesk logs); and (3) internal policy and implementation documents (derivative policy memos and monitoring logs). Trustworthiness is maintained when these three independent data sources collectively provide a well-rounded view of events. For instance, a conclusion is considered sound if quantitative data (plummeting Procedures scores) is corroborated by qualitative data (thousands of complaints regarding Single Accounts) and further supported by archival data (internal administrative evidence of hundreds of account deactivations).
To ensure methodological transparency, the quantitative phase utilized data from the annual Public Satisfaction Survey (SKM) with the following verified sample sizes: 318 respondents in 2021, 345 respondents in 2022, 351 respondents in 2023, and 324 respondents in 2024. Quantitative statistical analysis was performed using IBM SPSS Statistics software version 26.0 (IBM Corp., Armonk, NY, USA). For the qualitative phase, the thematic analysis systematically reviewed a total of 2148 individual complaint records and stakeholder queries submitted to the helpdesk between 2022 and 2024. The qualitative data management and thematic coding were facilitated using Microsoft Excel (Microsoft Corporation, Redmond, WA, USA).
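The longitudinal category comparison that drives the qualitative findings is, at its core, a per-year tally of coded complaint records. The following sketch is an illustrative reconstruction only, not the authors' actual tooling (the study used Microsoft Excel for coding); the field names “date” and “category” are hypothetical stand-ins for the helpdesk log structure:

```python
from collections import Counter
from typing import Iterable, Mapping

def tally_complaints(rows: Iterable[Mapping[str, str]]) -> Counter:
    """Count complaint records per (year, category) pair.

    Each row is assumed to carry hypothetical fields:
    'date' in YYYY-MM-DD form and 'category' (the coded theme).
    """
    counts: Counter = Counter()
    for row in rows:
        year = row["date"][:4]  # extract the year for longitudinal comparison
        counts[(year, row["category"])] += 1
    return counts
```

Ranking the resulting counts within each year reproduces the kind of year-over-year category comparison reported in the complaint summary tables.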
Naturally, the investigator is explicit about the inherent limitations of secondary data. Institutional data is originally collected for internal administrative purposes. This implies that the researcher is constrained by the institution’s pre-established taxonomies, categories, and definitions (Hodgson, 2019). Consequently, the investigator cannot actively interject or query respondents to gain deeper insights. Nevertheless, the researcher establishes a transparent audit trail by explicitly outlining the logical steps from the quantitative findings to the qualitative interpretations, ensuring that all results correlate with the archived evidence. This transparency enables readers to independently judge the veracity and strength of the researcher’s conclusions.

4. Results

The research findings are presented in two phases, mirroring the sequential design. The first, quantitative phase measures the “digital shock” to stakeholder satisfaction following policy implementation. The second, qualitative phase explains why this quantitative phenomenon occurred by scrutinizing in detail the sources of friction and the administrative burden on service users.

4.1. Phase 1 (QUAN): Measuring Policy Impact Through Stakeholder Satisfaction

A longitudinal quantitative analysis of annual Public Satisfaction Survey (SKM) data tells a clear story of “baseline, shock, recovery.” The initial 2021 SKM data (the pre-policy baseline) indicated a “Good” level of service, meeting the 80% satisfaction target. This is evidence that stakeholders (publishers) were largely satisfied with ISBN services under the prior regulatory arrangements. The policy intervention, embodied in Regulation No. 5/2022, was fully implemented in mid-2022. The response was immediately and dramatically quantifiable. This change is visible in Figure 1.
The “digital shock” of 2022 is shown graphically in Figure 1; the Public Satisfaction Index (IKM) dropped to 75.03 in 2022. This score placed service quality in the “Poor” category for the first time, confirming the substantial inconvenience experienced by service users. Data from 2023 and 2024 displayed a recovery and stabilization trend, with the IKM returning to the “Good” category as users and institutions adapted. The decomposition of the 2022 IKM scores elucidates the root causes precisely. Table 1 shows a breakdown of the scores for the nine service elements measured in 2022, compared with the ideal values and their corresponding categories.
The “shock” was not evenly distributed (Table 1). Six of the nine elements still received an overall score of “Good.” Three elements collapsed, however, led by “Procedures” (score 2.79) and “Service Time” (score 2.30). These anomalous quantitative findings formed the starting point for the qualitative phase.
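The relationship between the per-element scores (on a 1–4 scale) and the composite index (on a 0–100 scale) follows the conversion commonly applied in Indonesian SKM practice, in which element means are multiplied by a factor of 25. A minimal sketch, assuming equal element weights and that standard conversion factor (both simplifications of the official survey methodology):

```python
def skm_index(element_scores, conversion=25.0):
    """Convert per-element SKM means (1-4 scale) to a 0-100 index.

    Assumes equal element weights and the commonly applied
    conversion factor of 25; the official survey methodology
    may weight elements differently.
    """
    if not element_scores:
        raise ValueError("at least one element score is required")
    mean = sum(element_scores) / len(element_scores)
    return mean * conversion
```

Under this conversion, an element mean of 2.79 corresponds to roughly 69.75 index points, and a composite index of 75.03 implies an average element mean of about 3.00 across the nine elements.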

4.2. Phase 2 (QUAL): Explaining “Digital Shock” Through Service Friction Analysis

The qualitative phase was explicitly conducted to explain the dramatic reduction in the “Procedures” and “Service Time” scores. Thousands of complaint log files, helpdesk inquiries, internal policy memos, and institutional monitoring records from 2022 to 2024 were analyzed thematically in depth. The weakening of the “Procedures” element (score 2.79) was attributed to the introduction of the most disruptive new policy: the consolidation of “Single Accounts” for government institutions and universities. Before this policy, each faculty or work unit could register as a separate publisher. Under the new policy, however, thousands of these accounts were merged into a verified master account. Triangulation with archival data confirmed this as a major source of friction. Table 2 presents internal implementation data evidencing this substantial administrative burden.
The documented administrative actions contributed to friction, as the complaint summary archive demonstrates. Table 3 shows that the most frequent complaint category in 2023 was "Registration of Existing Publishers," a direct effect of the "Single Account" policy. The complaint log also conveys users' cognitive load, illustrated by questions such as, "Regarding researcher submissions, what is a single account?" (February 2024). The collapse of the "Service Time" element (score 2.30) is best explained by the policy shift toward tighter governance. Internal monitoring records show that this new policy was driven by findings of "presumption" and "misconduct" among publishers; consequently, the new measures imposed much stricter validation standards. Table 3 makes this friction visible. The main complaint in 2022 (the "shock" year) was "Late ISBN Validation" (32 incidents), indicating the queues that formed once the new rules took effect. By 2023, the pattern had shifted: "Late Validation" complaints fell drastically, while complaints about "ISBN Registration" (12 incidents) and "Registration of Existing Publishers" (15 incidents) rose.
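The complaint-ranking step described above can be sketched in a few lines. The records below are hypothetical, flattened reconstructions from the Table 3 frequencies; the real archive holds free-text entries that were coded thematically before counting. The helper name `top_categories` is ours.

```python
from collections import Counter

# Hypothetical coded complaint records, reconstructed from Table 3 counts.
complaints_2022 = (["Late ISBN Validation"] * 32
                   + ["Late Publisher Validation"] * 8
                   + ["Missing Publisher Requirements"] * 3)
complaints_2023 = (["Registration of Existing Publishers"] * 15
                   + ["ISBN Registration"] * 12
                   + ["Upload Failures"] * 3)

def top_categories(log, n=3):
    """Rank coded complaint categories by frequency, as in Table 3."""
    return Counter(log).most_common(n)

print(top_categories(complaints_2022)[0])  # ('Late ISBN Validation', 32)
print(top_categories(complaints_2023)[0])  # ('Registration of Existing Publishers', 15)
```

The shift in the top-ranked category from a delay complaint (2022) to a registration complaint (2023) is the quantitative trace of the "slow" to "rejected" transition discussed in the text.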
This shift implies that the issue was no longer merely "slow" service but "rejected" or "procedurally obstructed" service. Complaint disposition logs corroborate this, documenting systematic rejections of previously accepted publication types ("Book Chapter including publications not assigned an ISBN…" (December 2023)) and complaints about rejected "monographs" (August 2024). Friction worsened in 2024 with new requirements, such as a publisher website on its own domain, about which startup publishers complained (March 2024). The culmination of this governance tightening is evidenced by an official apology letter issued by a publisher whose account was blocked for alleged violations (see Figure 2).
In this document the publisher declares, "…I commit and promise not to repeat the same mistake in the future…", powerful qualitative evidence of the new governance regime. The qualitative data thus show clearly that the low "Service Time" scores reflect not mere bureaucratic delays but a deliberate byproduct of stricter governance, which produces greater friction and longer wait times for publishers.
English Translation: LETTER OF APOLOGY. I, the undersigned [Redacted Name], Director of [Redacted Publisher], hereby submit an apology to the Head of the Center for Bibliography and Library Materials Processing for the actions of our editorial admin who failed to verify the authenticity of the author’s work, which violates the standard rules regarding forgery. I would like to thank the Head of the Center for the reprimand, which serves as an evaluation for optimizing our future editorial work. I commit and promise not to repeat similar mistakes in the future.

5. Discussion

5.1. Summary of Findings and Answers to the Research Questions

The goal of this study was to examine the influence of mandatory e-government reform on service quality and stakeholders' administrative burden, compared with prior practice. The main findings speak directly to this research question: the intervention embodied in Regulation No. 5/2022, which mandated digitalization and tightened governance, correlates with a significant "digital shock" to user satisfaction. The central quantitative evidence is the dramatic fall in the Public Satisfaction Index (IKM) from "Good" (2021 baseline) to "Poor" (75.03) in 2022, suggesting that the reform, rather than delivering immediate improvements in satisfaction, produced widespread disruption. The study also identified the specific factors behind this fall. The qualitative findings convincingly explain the quantitative anomaly: the decline in the IKM was not general but concentrated in two service elements, "Procedures" (score 2.79) and "Service Time" (score 2.30). The collapse of the "Procedures" score was explained by the introduction of the 'Single Account' policy, which created new procedural and cognitive burdens, as evidenced by mass account deactivations and user confusion ("Registration of Existing Publishers" was the top complaint in 2023). Likewise, the reduced "Service Time" score reflects a growing compliance burden under stricter governance, manifested in validation queues (2022), rejected book criteria (2023), and new requirements such as publisher websites (2024).

5.2. Research Landscape Analysis and Theoretical Contributions

The results of this research make a noteworthy contribution to theoretical debates in public administration. Research on e-government is typically polarized between favorable narratives of technocratic effectiveness and critical narratives of administrative burden (Moynihan et al., 2015). This paper bridges these two streams by providing rare empirical evidence that directly links specific policy instruments to measured burden outcomes. The measured "digital shock" (IKM 75.03) can be read as an empirical instance of cognitive, procedural, and compliance demands placed on stakeholders virtually overnight. The key conceptual insight of this investigation is its explicit recognition of the core governance–service trade-off. In the literature these two concerns are often treated in isolation, but our results imply that they can run in direct conflict: the institutional drive to tighten governance (prompted by the "misconduct" findings) is strongly linked to the decline in service metrics (the collapsing "Service Time" and "Procedures" scores). Publisher friction is not a bug in the system's design but a feature of the new governance regime, a dynamic rarely captured in simpler e-government evaluation models.

5.3. Comparison with Current Literature

The findings of this study both confirm and extend existing understanding. The observation that top-down reform triggers user resistance and temporary satisfaction declines echoes numerous previous findings on technology policy implementation in the public sector. The findings are particularly reminiscent of the broad literature on street-level bureaucracy, which is now taking digital forms (Evans, 2016). The rejection of certain book types (i.e., book chapters or monographs) is a quintessential instance of bureaucratic "discretion," now exercised online to control demand and enforce rules. The novelty of this work, however, lies in its evidentiary approach. Most policy evaluation studies rely on a single method, whether satisfaction surveys alone or interviews (which are prone to recall bias). This study instead employs a rigorous explanatory sequential mixed-methods design with a genuine archival data approach. We not only show that satisfaction fell but also forensically link the declining quantitative metrics to explanatory qualitative detail extracted from thousands of complaint logs and internal policy archives. Moreover, the longitudinal scope of the study (2021–2024) enables mapping of the "shock-and-recovery" arc, transcending the static "success" or "failure" dichotomies prevalent in the evaluation literature.

5.4. Implications for Policy and Practice

The results carry important implications for public managers undertaking digital transformation. The quantitative evidence of "digital shock" offers an important lesson: mandatory e-government reforms are not merely technical achievements but socio-technical interventions with real transition costs. A core policy implication is that public institutions must deliberately anticipate and manage friction. Rather than simply shifting compliance and administrative burdens onto users, institutions need to invest substantially in user-centric change management. In practical terms, the concrete prescriptions center on communication: because the 2022 "shock" was associated with procedural confusion, socialization is critical. Public institutions cannot simply "roll out" new regulations; they must deploy a multi-channel communication plan well in advance, with interactive FAQs, video tutorials and, crucially, strengthened helpdesks. The 2023–2024 IKM recovery shows that the "shock" is manageable, but early investment in user support (a feature often sacrificed in the name of efficiency) plays a critical role in shortening the period of service disruption.

5.5. Research Limitations

The main limitation is the heavy reliance on secondary archival data. While the "naturalistic" character of these data confers a high degree of authenticity, the researchers were confined to variables and categories defined by the institution for administrative purposes. In particular, the researchers could not conduct in-depth interviews that might have probed specific complaints or satisfaction scores and revealed how users experience these frictions emotionally and psychologically. This is also an intensive, single-case study of one public service in one country. The quantitative results, such as the drop in the IKM to 75.03 and the finding that the 'Single Account' policy was a principal friction point, are highly context-bound and cannot be statistically generalized to all e-government reforms. Generalization from this study must therefore be analytical: the transferability of theoretical insights about the "digital shock" process and the governance–service trade-off, rather than empirical generalization of the findings.

5.6. Future Research

The "shock-and-recovery" cycle documented here opens an important new research agenda. While this study identified and explained the shock (the 2022 IKM drop), the "black box" of recovery (the 2023–2024 IKM rise) remains unopened. A key gap is how this reciprocal process of adjustment unfolds. Future work should employ primary qualitative methods, such as interviews with publishers, to understand how they build new policy literacy and devise adaptive strategies in response to service frictions. A crucial research gap also exists at the institutional level: while this study examines external user experiences, the internal consequences of these reforms for the implementing staff remain unclear. Organizational ethnography could investigate how validators (digital street-level bureaucrats) make sense of, negotiate, and implement the new, stricter rules. Finally, a cross-cutting priority is to assess the policy's impact at the governance level. This research has captured the costs of the reforms (decreased service quality), but the claimed benefits, such as diminished ISBN misuse inferred from the "misuse" monitoring archive, have yet to be quantified.

6. Conclusions

This research concludes that the mandatory e-government reform (Regulation No. 5/2022) is associated with a direct "digital shock," contributing to a severe disruption in public service delivery. Explicitly, the quantitative results demonstrated that the Public Satisfaction Index (IKM) plummeted from a "Good" rating in 2021 to a "Poor" rating (75.03) in 2022, with the most significant deteriorations recorded in the Procedures (2.79/4) and Service Time (2.30/4) dimensions. Our qualitative analysis of the archival complaint logs explicitly linked this statistical collapse not to random technical failures, but to targeted policy-induced frictions. Specifically, the mandatory transition to a 'Single Account' system and the deliberate tightening of validation parameters, prompted by the discovery of internal policy misuse, drastically increased users' learning and compliance costs. This demonstrates a clear theoretical proposition: interventions designed to maximize institutional governance and control can inadvertently disrupt the user experience, highlighting a critical governance–service trade-off.
The practical implications derived from these results indicate that public agencies must anticipate “digital shock” as an inherent component of complex digital reforms. Therefore, emphasis should shift from solely technical deployment to proactive friction management and the continuous generation of user-relevant value. Specifically, to alleviate the acute procedural burdens observed in our 2022 data, agencies must prioritize user-centric onboarding and strengthen helpdesk support prior to policy implementation.
Furthermore, the subsequent quantitative recovery of the IKM scores observed in the 2023 and 2024 surveys highlights the vital role of collaborative governance, organizational integrity, and stakeholder coordination in navigating public-sector reforms. While the single-case archival design limits statistical generalizability, the analytical insights derived from this case offer a valuable model for understanding the governance–service trade-off in broader digital transformation contexts. Looking ahead, future research should shift from assessing policy impact to evaluating these adaptation processes through primary qualitative explorations, investigating how stakeholders adapt to new digital environments and how frontline staff interpret and enforce evolving governance rules.

Author Contributions

Conceptualization, I.H.N. and A.B.; methodology, I.H.N. and W.E.; software, I.H.N.; validation, A.B., W.E. and U.L.S.K.; formal analysis, I.H.N.; investigation, I.H.N.; resources, I.H.N.; data curation, I.H.N.; writing—original draft preparation, I.H.N.; writing—review and editing, A.B., W.E. and U.L.S.K.; visualization, I.H.N.; supervision, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee of Universitas Padjadjaran (protocol code 88/UN6.KEP/EC/2026, approved on 26 January 2026).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to institutional privacy policies regarding internal complaint logs.

Acknowledgments

The authors would like to thank the Center for Bibliography and Library Materials Processing, National Library of the Republic of Indonesia, for granting permission to access the archival data and internal records used in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ISBN: International Standard Book Number
Perpusnas: National Library of the Republic of Indonesia
SPBE: Sistem Pemerintahan Berbasis Elektronik (E-Government)
IKM: Indeks Kepuasan Masyarakat (Community Satisfaction Index)

References

1. Akbar, R., Pilcher, R., & Perrin, B. (2012). Performance measurement in Indonesia: The case of local government. Pacific Accounting Review, 24(3), 262–291.
2. Akbar, R., Pilcher, R. A., & Perrin, B. (2015). Implementing performance measurement systems: Indonesian local government under pressure. Qualitative Research in Accounting & Management, 12(1), 3–33.
3. Alawneh, A., Al-Refai, H., & Batiha, K. (2013). Measuring user satisfaction from e-Government services: Lessons from Jordan. Government Information Quarterly, 30(3), 277–288.
4. Alkaabi, S., Hazzam, J., Wilkins, S., & Dan, S. (2024). The influences of ambidexterity, new public management and innovation on the public service quality of government organizations. Public Performance & Management Review, 47(5), 1110–1137.
5. AlOmari, F. (2020). Measuring gaps in healthcare quality using SERVQUAL model: Challenges and opportunities in developing countries. Measuring Business Excellence, 25(4), 407–420.
6. Balaskas, S., Panagiotarou, A., & Rigou, M. (2022). The influence of trustworthiness and technology acceptance factors on the usage of e-Government services during COVID-19: A case study of post COVID-19 Greece. Administrative Sciences, 12(4), 129.
7. Bhatia, V., & Bhatia, S. (2025). Evaluating e-governance: A comparative analysis and way forward. Digital Policy, Regulation and Governance, 27(6), 724–745.
8. Bokhari, S. A. A., Park, S. Y., & Manzoor, S. (2025). Digital government transformation through artificial intelligence: The mediating role of stakeholder trust and participation. Digital, 5(3), 43.
9. Brown, S. A., Massey, A. P., Montoya-Weiss, M. M., & Burkman, J. R. (2002). Do I really have to? User acceptance of mandated technology. European Journal of Information Systems, 11(4), 283–295.
10. Buffat, A. (2015). Street-level bureaucracy and e-government. Public Management Review, 17(1), 149–161.
11. Burden, B. C., Canon, D. T., Mayer, K. R., & Moynihan, D. P. (2012). The effect of administrative burden on bureaucratic perception of policies: Evidence from election administration. Public Administration Review, 72(5), 741–751.
12. Carey, G., Dickinson, H., Malbon, E., Weier, M., & Duff, G. (2020). Burdensome administration and its risks: Competing logics in policy implementation. Administration & Society, 52(9), 1362–1381.
13. Evans, T. (2016). Street-level bureaucracy, management and the corrupted world of service. European Journal of Social Work, 19(5), 602–615.
14. Gluck, M. (1996). Exploring the relationship between user satisfaction and relevance in information systems. Information Processing & Management, 32(1), 89–104.
15. Haeruddin, Toding, S., & Nashar, A. (2025). Indonesian government bureaucracy in the perspective of reinventing government: "How the entrepreneurial spirit is transforming the public sector". Arus Jurnal Sosial dan Humaniora, 5(2), 2188–2196.
16. Harrati, N., Bouchrika, I., Tari, A., & Ladjailia, A. (2016). Exploring user satisfaction for e-learning systems via usage-based metrics and system usability scale analysis. Computers in Human Behavior, 61, 463–471.
17. Harrison, R. L., Reilly, T. M., & Creswell, J. W. (2020). Methodological rigor in mixed methods: An application in management studies. Journal of Mixed Methods Research, 14(4), 473–495.
18. Heggertveit, I., & Rydén, H. H. (2024). Narratives of reality: Administrative burdens and workarounds in digital self-services [preprint]. AMCIS 2024 proceedings. Available online: https://aisel.aisnet.org/amcis2024/soc_inclusion/social_inclusion/13 (accessed on 12 August 2025).
19. Herd, P., Hoynes, H., Michener, J., & Moynihan, D. (2023). Introduction: Administrative burden as a mechanism of inequality in policy implementation. RSF: The Russell Sage Foundation Journal of the Social Sciences, 9(4), 1–30.
20. Hodgson, G. M. (2019). Taxonomic definitions in social science, with firms, markets and institutions as case studies. Journal of Institutional Economics, 15(2), 207–233.
21. Jeong, J. (2025). The effects of quality of bureaucrats, regulations, and e-government on the efficiency of economic regulatory policy: Focusing on the effect on time. Journal of Asian Public Policy, 18(2), 529–561.
22. Kazdin, A. E. (1979). Unobtrusive measures in behavioral assessment. Journal of Applied Behavior Analysis, 12(4), 713–724.
23. Kim, Y. (2021). Searching for newness in management paradigms: An analysis of intellectual history in U.S. public administration. The American Review of Public Administration, 51(2), 79–106.
24. Kincaid, H. (2012). The Oxford handbook of philosophy of social science. Oxford University Press.
25. Li, L., Lin, X., Yang, X., Luo, Z., & Wang, M. (2024). Digital governance and urban government service spaces: Understanding resident interaction and perception in Chinese cities. Land, 13(9), 1403.
26. Lim, W. M. (2025). What is qualitative research? An overview and guidelines. Australasian Marketing Journal, 33(2), 199–229.
27. Meuleman, L. (2021). Public administration and governance for the SDGs: Navigating between change and stability. Sustainability, 13(11), 5914.
28. MM, S., & Jasim, K. M. (2020). Ascertaining service quality and medical practitioners' sensitivity towards surgical instruments using SERVQUAL. Benchmarking: An International Journal, 28(1), 370–405.
29. Morrison, Z., Fernando, B., Kalra, D., Cresswell, K., & Sheikh, A. (2014). National evaluation of the benefits and risks of greater structuring and coding of the electronic health record: Exploratory qualitative investigation. Journal of the American Medical Informatics Association, 21(3), 492–500.
30. Moynihan, D., Herd, P., & Harvey, H. (2015). Administrative burden: Learning, psychological, and compliance costs in citizen-state interactions. Journal of Public Administration Research and Theory, 25(1), 43–69.
31. Neale, B., & Bishop, L. (2012). The Timescapes Archive: A stakeholder approach to archiving qualitative longitudinal data. Qualitative Research, 12(1), 53–65.
32. Norris, P. (2003). Digital divide: Civic engagement, information poverty, and the Internet worldwide. Canadian Journal of Communication, 28(1), 9–120.
33. Nosova, S., Norkina, A., Makar, S., & Fadeicheva, G. (2021). Digital transformation as a new paradigm of economic policy. Procedia Computer Science, 190, 657–665.
34. Nurfadila, N. (2024). Enhancing public financial management through performance evaluation and cost systems. Advances in Management & Financial Reporting, 2(1), 24–35.
35. Park, S., & Park, J. (2025). How do e-government system configurations differ between high and low EGDI countries? Fuzzy set qualitative comparative analysis approach. Journal of Internet Electronic Commerce Research, 25(4), 51–77.
36. Pirhonen, J., Lolich, L., Tuominen, K., Jolanki, O., & Timonen, V. (2020). "These devices have not been made for older people's needs"–Older adults' perceptions of digital technologies in Finland and Ireland. Technology in Society, 62, 101287.
37. Ramadhani, S. N. (2025). Implementasi inovasi e-government dalam pelayanan publik studi kasus aplikasi sampah online banyumas (salinmas) [Implementation of e-government innovation in public services: A case study of the Banyumas online waste application (Salinmas)]. Journal of Politic and Government Studies, 13(1), 73–85.
38. Saputra, N., Putera, R. E., Zetra, A., Azwar, A., Valentina, T. R., & Mulia, R. A. (2026). Enhancing public management through integrity and organizational citizenship behavior: A systematic review. International Journal of Public Administration, 1–16.
39. Saunders, M. N. K., & Darabi, F. (2024). Chapter 4: Using multi- and mixed methods research designs. In Field guide to researching employment and industrial relations. Edward Elgar Publishing.
40. Schlunegger, M. C., Zumstein-Shaha, M., & Palm, R. (2024). Methodologic and data-analysis triangulation in case studies: A scoping review. Western Journal of Nursing Research, 46(8), 611–622.
41. Shulzhyk, Y., Suray, I., Parkhomenko-Kutsevil, O., Zakharchenko, V., & Slobozhan, O. (2024). Addressing contemporary issues in public administration: Effective strategies for improving efficiency and transparency in public services. Multidisciplinary Reviews, 8, 2024spe080.
42. Siemiatycki, M. (2008). Managing optimism biases in the delivery of large-infrastructure projects: A corporate performance benchmarking approach. In 2008 first international conference on infrastructure systems and services: Building networks for a brighter future (INFRA) (pp. 1–6). IEEE.
43. Stoker, G., & John, P. (2009). Design experiments: Engaging policy makers in the search for evidence about what works. Political Studies, 57(2), 356–373.
44. Tang, G. (2025). Using mixed methods research to study research integrity: Current status, issues, and guidelines. Accountability in Research, 32(5), 807–828.
45. Valentina, T. R., Putera, R. E., & Salsabila, L. (2025). Collaborative governance in handling the waste crisis: A systematic literature review. International Journal of Sustainable Development and Planning, 20(2), 761–770.
46. Weerakkody, V., Irani, Z., Lee, H., Osman, I., & Hindi, N. (2015). E-government implementation: A bird's eye view of issues relating to costs, opportunities, benefits and risks. Information Systems Frontiers, 17(4), 889–915.
47. Zou, Q., Mao, Z., Yan, R., Liu, S., & Duan, Z. (2023). Vision and reality of e-government for governance improvement: Evidence from global cross-country panel data. Technological Forecasting and Social Change, 194, 122667.
Figure 1. Trend of Public Satisfaction Index (IKM) for ISBN Services (2021–2024).
Figure 2. Statement and apology letter from the publisher (January 2024).
Table 1. Breakdown of Public Satisfaction Index (IKM) Scores in 2022 (the ‘shock’ year).
No. | Service Element | Index Value (out of 4.00) | Service Quality Category
1 | Service Fees/Rates | 3.48 | Good
2 | Complaint Handling | 3.21 | Good
3 | Staff Behavior | 3.19 | Good
4 | Staff Competence | 3.09 | Good
5 | Service Conditions | 3.07 | Good
6 | Service Products | 3.06 | Good
7 | Facilities and Infrastructure | 2.84 | Not good
8 | Procedures | 2.79 | Not good
9 | Service Time | 2.30 | Not good
Total IKM | 75.03 | Not good
Table 2. Implementation of the ‘Single Account’ policy based on internal archival data.
Parent Institution | Main Account (Single Account) | Number of Deactivated Sub-Accounts
Ministry of Education, Culture, Research, and Technology | Ministry of Education, Culture, Research and Technology of the Republic of Indonesia | 151
Halu Oleo University | Halu Oleo University Press | 17
Indonesian General Election Commission (KPU) | KPU (Kendari City) | 12
Table 3. Shift in the most frequent complaint categories (2022 vs. 2023).
Rank | 2022 Complaint Category | Frequency | 2023 Complaint Category | Frequency
1 | Late ISBN Validation | 32 | Registration of Existing Publishers | 15
2 | Late Publisher Validation | 8 | ISBN Registration | 12
3 | Missing Publisher Requirements | 3 | Upload Failures | 3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Nabawi, I.H.; Bajari, A.; Erwina, W.; Khadijah, U.L.S. The Digital Shock: Administrative Burden and the Governance–Service Trade-Off in Indonesia’s Public Service Reform. Adm. Sci. 2026, 16, 159. https://doi.org/10.3390/admsci16030159


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
