Article

Comparison of Online Probability Panels in Europe: New Trends and Old Challenges in the Era of Open Science

1 Institute for Research on Population and Social Policies, Italian National Research Council (CNR-IRPPS), 00185 Rome, Italy
2 Italian National Institute of Statistics (Istat), 00198 Rome, Italy
* Author to whom correspondence should be addressed.
Societies 2025, 15(8), 210; https://doi.org/10.3390/soc15080210
Submission received: 3 July 2025 / Revised: 26 July 2025 / Accepted: 28 July 2025 / Published: 29 July 2025

Abstract

Online Probability Panels (OPPs) have emerged as essential research infrastructures for social sciences, offering robust tools for longitudinal analysis and evidence-based policy-making. However, the growing role of the Open Science movement demands a systematic evaluation of their compliance with its principles. This study compares major European OPPs—including LISS, GESIS, the GIP, ELIPSS, and the Swedish and Norwegian Citizen Panels—focusing on their practices of openness, recruitment, sampling, and maintenance. Through a qualitative analysis of public documentation and methodological reports, the study examines how their diverse approaches influence data accessibility, inclusivity, and long-term usability. Our findings highlight substantial variability across panels, reflecting the interplay between national contexts, governance models, technological infrastructures, and methodological choices related to recruitment, sampling, and panel maintenance. Some panels demonstrate stronger alignment with Open Science values—promoting transparency, interoperability, and inclusive engagement—while others operate within more constrained frameworks shaped by institutional or structural limitations. This comparative analysis contributes to the understanding of OPPs as evolving knowledge infrastructures and provides a reference framework for future panel development. In doing so, it offers valuable insights for enhancing the role of OPPs in advancing Open and socially engaged research practices.

1. Introduction

Over the past two decades, Online Probability Panels (OPPs) have emerged as a crucial tool for social research, enabling high-quality longitudinal data collection and supporting evidence-based policy-making across diverse national contexts. Initially introduced to overcome the limitations of traditional survey methods [1]—such as declining response rates in telephone or face-to-face interviews—OPPs have benefited from growing internet access and technological innovation, allowing for faster, more cost-effective, and methodologically robust data collection [2,3]. A key strength of OPPs lies in their use of probability-based sampling [4], essential for ensuring external validity and minimizing measurement error compared to non-probability panels [5,6]. However, significant challenges persist, including the risk of underrepresentation due to the digital divide [7,8], panel attrition over time, and item nonresponse, which may affect data quality [9].
The origins of Online Probability Panels (OPPs) trace back to the late 1980s, with the rise of computerized, self-administered data collection methods [2]. A foundational milestone was the first Dutch Telepanel (1986) [10], followed by a second phase of development that began in the mid-1990s, initially in the U.S. and in parts of Europe such as the Netherlands [11]. Since 2007, OPP growth has intensified, often through collaborations between academic institutions and private research centers. This acceleration has also been evident in Europe: 2007 was the year in which the widely recognized and well-established LISS Panel (Longitudinal Internet Studies for the Social Sciences) was launched by Centerdata at Tilburg University in the Netherlands. This was followed by the creation of the Novus Panel Sweden (2008), the Icelandic Social Science Research Institute—SSRI Panel (2010), the Swedish Citizen Panel (2010), the German Internet Panel (GIP) at the University of Mannheim (2012), the GESIS Panel, also based in Germany (2013), the Norwegian Citizen Panel at the University of Bergen (2013), and the ELIPSS Panel (Étude Longitudinale par Internet Pour les Sciences Sociales) managed by Sciences Po in Paris in collaboration with the CDSP—Centre de Données Socio-Politiques (2012), as well as UK-based panels, such as the NatCen Opinion Panel (2015) or Taking Part (2016). Their presence and development have progressively demonstrated the scientific and institutional value of OPP infrastructures.
In this context, Blom et al. [12] provided one of the first comparative studies of four major European OPPs (LISS, GIP, GESIS, and ELIPSS), focusing primarily on sampling, data collection strategies, and panel maintenance. Since then, the landscape of social research has evolved considerably, with the rise of Open Science as a normative framework promoting transparency, data sharing, and participatory research practices [13,14,15]. Central to this paradigm are the FAIR principles—Findable, Accessible, Interoperable, and Reusable data—which have become a reference standard for research data management and stewardship [16], though not sufficient on their own to fully achieve Open Science. We acknowledge that Open Science represents a much broader and more complex paradigm than the FAIR principles alone. While the FAIR principles have been increasingly operationalized—most notably through tools such as the FAIR Data Maturity Model—Open Science as a whole remains challenging to translate into concrete, measurable indicators. Precisely for this reason, our analysis adopts a qualitative approach and recognizes the greater difficulty in fully operationalizing openness in a consistent and comparative way. OPPs have thus shifted from being mere data collection tools to complex knowledge infrastructures whose function moves from archiving to empowering social research [17,18,19], embedded within Open Science ecosystems [20,21] and enabling participatory governance models [22,23].
Despite the increasing institutionalization of Open Science practices and the adherence of OPPs to FAIR principles, little is known about how existing OPPs implement Open Science in practice, and how new generations of panels are designed to meet these evolving standards. Addressing this gap, our study provides a systematic comparison of established European OPPs, including the four analyzed by Blom et al. [12] (LISS, the GESIS Panel, the GIP, and ELIPSS), integrated with more recent examples such as the well-established Swedish and Norwegian Citizen Panels. Building on a systematic document analysis, we assess these panels across key dimensions of data collection, maintenance, and openness, analyzing their compliance with Open Science principles and highlighting differences both across countries and between panel generations.
Our results show substantial variability in the degree and mechanisms of openness adopted by different OPPs, shaped by national research policies, governance models, and technological capacities. While some panels prioritize controlled access, other initiatives tend to incorporate stronger Open Science features. Our contribution is twofold: first, we provide a comparative framework for the implementation of Open Science principles within OPPs; second, we offer practical insights and lessons learned for the development of future panels, particularly in countries where OPPs are still emerging. In doing so, we aim to contribute to the advancement of Open Science practices in OPP social research and the design of resilient, transparent, and inclusive research infrastructures.

2. Materials and Methods

This study is based on the secondary analysis of publicly available documentation and institutional materials concerning established European Online Probability Panels. The materials analyzed include methodological reports, technical documentation, wave reports, and datasets available through the official websites and archives of the main OPPs in Europe, including the GESIS Panel, LISS, ELIPSS, the GIP, the Swedish Citizen Panel, and the Norwegian Citizen Panel (see Note 1).
The analysis was conducted by a research team composed of seven scholars with expertise in research infrastructures, Open Science practices, and specifically in the design and management of an emerging Online Probability Panel. Following established qualitative research practices [24], the analysis involved the categorization and systematic comparison of materials across common analytical dimensions [25,26]. A double-blind coding phase was initially performed. In this phase, all researchers independently analyzed the selected materials, focusing on a predefined set of dimensions related to data collection, panel maintenance, and openness strategies [25,26].
The structured matrix used for coding material includes the following analytical dimensions: Recruitment Dates, Initial Sample Sizes, Length and Frequency, Target Population, Sampling Frame, Sampling Procedure, Inclusion of the Offline Population, Advance Letter Material, Mode of Offline Recruitment, Invitation to Join the Panel, Recruitment Incentives, Response Rates, Invitations, Incentives, Communication, Further Measures, Documentation Available, Open in Output (data accessibility: who can access and at what level), and Open in Input (openness of participation of other institutions or researchers).
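To make the structure of this instrument concrete, the sketch below (a hypothetical illustration in Python, not the team's actual coding tool) represents the matrix as one record per panel with an entry for each analytical dimension; coder entries can then be compared cell by cell during the collective discussion phase.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the coding matrix described above: one record per panel,
# with one entry per analytical dimension. Not the research team's actual instrument.
DIMENSIONS = [
    "Recruitment Dates", "Initial Sample Sizes", "Length and Frequency",
    "Target Population", "Sampling Frame", "Sampling Procedure",
    "Inclusion of the Offline Population", "Advance Letter Material",
    "Mode of Offline Recruitment", "Invitation to Join the Panel",
    "Recruitment Incentives", "Response Rates", "Invitations", "Incentives",
    "Communication", "Further Measures", "Documentation Available",
    "Open in Output", "Open in Input",
]

@dataclass
class PanelRecord:
    panel: str
    codes: dict = field(default_factory=lambda: {d: None for d in DIMENSIONS})

# Each coder fills an independent copy of the matrix; entries are later compared
# cell by cell during the collective discussion phase.
liss = PanelRecord("LISS")
liss.codes["Open in Output"] = "Yes (open access for non-commercial research use)"
liss.codes["Sampling Procedure"] = "Simple random sample of addresses (Statistics Netherlands)"
print(liss.codes["Open in Output"])
```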
Given the limited number of categories and themes considered, coding was carried out simultaneously by all team members, allowing for immediate cross-verification and ensuring coding accuracy [27]. This initial blind phase was followed by a collective discussion aimed at reaching a shared interpretation of key concepts, thus ensuring inter-rater reliability and consistency of results across the different cases analyzed [24]. Particular attention was devoted to the interpretation of Open Science practices, whose conceptual boundaries are often open to multiple interpretations and context-dependent applications.
The level of consistency among the observations provided by the coders was ensured by the iterative process of collective discussion and reinforced by the expertise of the research team, composed of scholars with extensive experience in the field of digital research infrastructures and data governance [28].
Finally, the replicability of the study is guaranteed by the full transparency of the research process: all materials used for this study are publicly accessible online through the official platforms of each panel, allowing independent verification and reuse.

3. Results

3.1. The Development of Openness in European Online Probability Panels

In recent years, we have witnessed growing momentum towards the OPP model across Europe. New national OPPs are currently under construction, in pilot phases, or recently launched, for example, in Belgium (The Social Study) or Italy (Italian Online Probability Panel—IOPP), as well as in other countries where such infrastructures are emerging [29]. This dynamic development signals a wider institutional commitment to strengthening evidence-based social research through comparable and interoperable data infrastructures, echoing recent policy directions towards Open Science [14,23].
The European spread of OPPs has not only marked a methodological shift in social research but has also mirrored the progressive embedding of Open Science principles into national research infrastructures. Panels such as LISS, ELIPSS, GESIS, the GIP, and the Swedish and Norwegian Citizen Panels illustrate how openness, transparency, and researcher engagement are negotiated differently, shaped by institutional missions, governance structures, and national data policies.
The Dutch LISS Panel stands out for its strong Open Science orientation: data are openly accessible for non-commercial use, comprehensively documented, and updated regularly. Researchers and policy institutions can propose new modules, and despite a cost-based entry for commissioned surveys, the architecture of the panel supports broad engagement and methodological transparency. ELIPSS similarly embodies FAIR principles, offering open datasets via Réseau Quetelet with DDI-compliant documentation—an international metadata standard used to describe survey and social science data—and welcoming external proposals aligned with inclusivity and population relevance. These panels illustrate a high level of maturity in integrating openness with representativity and scientific rigor, as we will see below.
Germany’s dual infrastructure—the GESIS Panel and the GIP—adopts a differentiated approach. The GESIS Panel promotes openness through structured submission procedures (free and open year-round), offering both peer-reviewed and short-track options for module proposals. In contrast, the GIP opens its bi-monthly waves to contributions from researchers affiliated with or collaborating with the University of Mannheim, while external users must navigate a fee structure. Data access is mediated through Paneldata.org or on-site at the Data Cube at the Schloss Schneckenhof Library, maintaining transparency but moderating openness through financial and logistical thresholds.
Scandinavian panels offer additional contrasts. The Swedish Citizen Panel encourages research collaboration through invitation and project-based engagement, but full datasets are not publicly released. Though adhering to Open Science, access is mediated by institutional constraints. The Norwegian Citizen Panel demonstrates high data availability for research and education through the Sikt portal and DIGSSCORE, yet it does not permit external contributions to data collection—underscoring a model of transparency without participatory governance.
These differences reflect broader national research traditions and institutional capacities. While LISS, ELIPSS, and GESIS are more fully embedded within Open Science ecosystems—facilitating reuse, participation, and interoperability—others like the GIP and the Swedish and Norwegian Panels operate within more controlled frameworks, balancing data protection, cost recovery, and institutional sustainability. Such variability is not a weakness, but a reflection of context-sensitive governance, though it poses challenges for harmonizing standards across borders.
The Netherlands, France, and Germany have used their institutional stability to develop mature, high-functioning OPPs, while the Scandinavian models offer different solutions. Countries whose panels are still in their first stages, such as Italy and Belgium, face both challenges and opportunities: they can draw from established models to design infrastructures natively aligned with FAIR and Open Science principles, rather than adapting legacy systems.
Ultimately, this comparative overview (see also Table A1 in the Appendix A.1) reveals that European OPPs are not only data collection tools but also strategic actors within evolving knowledge infrastructures. Their varying implementations of openness and fairness offer key insights for shaping next-generation research ecosystems that are resilient, participatory, and ethically grounded.
In the following sections, we compare the methodological strategies adopted by these diverse European OPPs, with particular attention to how they operationalize principles of fairness and openness in their recruitment designs, data access models, and governance structures.

3.2. Sampling

The sampling strategies adopted in online probability panels are crucial for assessing the representativeness and reliability of survey-based research. A well-defined sampling frame is essential to ensure the panel’s representativeness and the accuracy of its findings, and strategies to include traditionally underrepresented groups are commonly employed to improve the generalizability of the results. The inclusion of offline populations, often achieved through mixed-mode data collection or alternative sampling strategies, is also essential for maintaining the integrity of longitudinal studies and minimizing coverage bias. Moreover, the sampling strategies adopted by the various European OPPs analyzed in this study reveal different levels of alignment with the FAIR principles and Open Science, since strategies that enhance the representativeness of samples support the creation of high-quality and reusable data.
Following the comparative approach proposed by Blom et al. [12], this section provides an overview of the sampling strategies employed by the European OPPs under study. It examines their key methodological elements, including target populations, sampling frames, data collection procedures, and the inclusion of offline populations (see Table A2 in the Appendix A.2). The goal is to provide a comprehensive understanding of these practices across panels and evaluate their interrelation with FAIR and Open Science principles.
The comparison of the sampling strategies adopted by the OPPs under study reveals both convergences and divergences. An important difference concerns the target population. Although almost all these panels adopt probability-based sampling strategies targeting individuals, the LISS Panel focuses on private households, i.e., it interviews all eligible household members, including those who become age-eligible over time. Similarly, the GIP interviews all individuals in a household, but it does not incorporate new household members after the initial recruitment phase. In contrast, the remaining panels (ELIPSS, GESIS, and the Swedish and Norwegian Citizen Panels) focus on individuals, typically one person per household. Differences also emerge regarding age eligibility criteria. All panels cover the resident population of the respective country from roughly age 16 or 18 upwards, but with some specific variations. The GIP focuses on individuals aged 16–75, GESIS targets individuals aged 18–70 at recruitment, and ELIPSS includes residents aged 18–79. For the Swedish Citizen Panel, the age range is 16–90, the Norwegian Citizen Panel relies on individuals aged 18–95, while LISS covers individuals aged 16 and older living in a household, without specifying an upper age bound for participation. Moreover, in addition to the age range of the target population, some panels apply additional selection criteria, such as language spoken and/or place of residence. This is the case for the German GESIS Panel, which includes the German-speaking adult population; the French ELIPSS Panel, which includes only French-speaking residents of mainland France, excluding Corsica; the LISS Panel, which includes the Dutch-speaking population permanently residing in the Netherlands; and the Norwegian Citizen Panel, which excludes individuals without a current home address in Norway.
Another important difference lies in how the sample is selected: names are not always drawn from national or municipal population registers. Although all panels are built on a probability sample representative of the country’s resident population, the sampling design, the source of extraction, and the names included in the sampling list vary from one country to another. For the LISS Panel, the addresses of the private households consist of a simple random sample drawn from the Dutch National Statistical Office (Statistics Netherlands). ELIPSS is a random panel of individuals from ordinary households, based on a two-stage probability sample drawn from the French rotating census of the National Institute for Statistics and Economic Studies (INSEE), with one eligible person randomly selected per household. Similarly, the GIP adopts an area probability sampling strategy. Since 2018, in the first stage, a random probability sample of 180 municipalities has been drawn from all municipalities in Germany, stratified by federal state and population density. In the second stage, each selected municipality was asked to draw a random sample of individuals aged 16–75 from its local population register, proportionate to the number of residents with a primary residence. Still in Germany, GESIS panelists are selected through a two-stage sampling procedure. A random sample of local municipalities—serving as primary sampling units (PSUs)—is first drawn using a stratified random sampling approach, with strata defined by federal state, administrative district, and settlement structure. The sample points are allocated proportionally to population size, and sample points within each stratum are selected with probability proportional to size. In the second stage, these municipalities are asked to draw a sample of individuals from their local population registers using systematic random sampling. In the Swedish Citizen Panel, most participants are self-recruited, and each year in September a probability-based sample of individuals is also added, drawn randomly from the National Tax Agency register. For the Norwegian Citizen Panel as well, eligible individuals are randomly drawn from the National Population Register.
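To clarify the logic of such two-stage designs, the following minimal sketch draws primary sampling units with probability proportional to size and then applies systematic random sampling within each selected register. It is illustrative Python only, using synthetic data, and is not the official GESIS or GIP procedure; selection is done with replacement purely for brevity.

```python
import random

# Minimal sketch of a two-stage design like the one described above: municipalities
# drawn with probability proportional to size (PPS, with replacement for brevity),
# then individuals drawn from each selected register by systematic random sampling.
# Illustrative only, with synthetic data; not the official GESIS or GIP procedure.

def pps_sample(units, sizes, n_psu, rng):
    """Draw n_psu primary sampling units with probability proportional to size."""
    return rng.choices(units, weights=sizes, k=n_psu)

def systematic_sample(register, n, rng):
    """Systematic random sample of n individuals from a local population register."""
    step = len(register) / n
    start = rng.uniform(0, step)
    return [register[int(start + i * step)] for i in range(n)]

rng = random.Random(42)
municipalities = [f"muni_{i}" for i in range(10)]
registers = {m: [f"{m}_person_{j}" for j in range(rng.randint(500, 5000))] for m in municipalities}
sizes = [len(registers[m]) for m in municipalities]

selected = pps_sample(municipalities, sizes, n_psu=3, rng=rng)
sample = [p for m in selected for p in systematic_sample(registers[m], n=20, rng=rng)]
print(len(sample), "individuals drawn from", selected)
```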
Finally, to mitigate digital exclusion biases, the panels adopt different strategies to include those who lack Internet access or are unable to use digital devices. This aspect is crucial not only for the representativeness of the panel, but also for the panel’s compliance with Open Science values. Even though these are online panels, special attention is given to addressing the digital divide. Among the strategies adopted to include offline populations, some panels go beyond the simple mailing of paper-and-pencil questionnaires, as performed by GESIS, and implement more systematic approaches. For example, the ELIPSS Panel provides each participant with an A5-sized tablet and free 3G Internet access, while the LISS Panel equips offline households with a specially designed device (the “simPC”), developed for elderly individuals with no prior computer experience. Similarly to LISS, the GIP, which is administered exclusively online, adopted additional measures in 2012 and 2014 to allow the inclusion of offliners, providing each sample household with a user-friendly computer/tablet and an Internet connection (the “BenPC”).
The differences in sampling strategies highlight how methodological choices in sample construction and population access, such as, for example, the distribution of digital devices to offliners to support their participation in panel studies, influence not only the quality and generalizability of the data, but also the extent to which a panel succeeds in embodying values such as inclusion, transparency, equity, and accessibility.

3.3. Recruitment

The comparative examination of recruitment procedures across several major Online Probability Panels (OPPs) in Europe reveals both similarities and differences shaped by institutional, cultural, and operational factors. Key areas of comparison include advance materials, mode of offline recruitment, panel invitation procedures, incentive schemes, and response rates (see Table A3 in the Appendix A.3).
All six panels use advance communications to introduce the study to potential participants. Prior contact via post serves both to legitimize the study and to enhance trust, which is known to improve response rates in survey recruitment [30]. The LISS Panel sends pre-interview letters to sampled households, a method also adopted by GESIS. ELIPSS uses a phased approach, initiating contact with an information letter followed by in-person engagement [7]. The GIP involves municipalities in the initial stages, adding a layer of administrative endorsement that likely enhances legitimacy [31]. The Swedish Citizen Panel (SCP) relies on postal invitations alone for the probability-recruited participants, capitalizing on Sweden’s high trust in public institutions [32]. The Norwegian Citizen Panel (NCP) uses a two-stage approach, sending an invitation letter to the selected individuals and, after two weeks, a reminder postcard to those who have not logged into the survey or have neither completed it nor provided their email address. Despite these differences, all panels demonstrate a commitment to transparency and informed consent, which are core principles in ethical survey design [33].
Offline recruitment modes vary considerably. CAPI is the preferred method for the NCP, ELIPSS, the GIP, and GESIS, allowing interviewers to build rapport and clarify expectations, which has been shown to increase participation [34]. The LISS Panel uses a mixed-mode strategy, while the SCP stands apart due to its composition. The SCP initially relied primarily on online self-recruitment. However, recognizing the limitations of this approach in achieving a representative sample, the panel introduced offline recruitment methods in 2012 [35]. These methods include telephone interviews and postal invitations. On the one hand, potential participants are contacted via telephone to invite them to join the panel, allowing for direct engagement and the opportunity to address any questions or concerns prospective participants may have. On the other hand, individuals are randomly selected from the Swedish population registry and sent invitations by mail [35]. These invitations typically include information about the panel and instructions on how to join. These offline recruitment strategies are designed to complement the online self-recruitment process, aiming to include individuals who might be underrepresented in online-only recruitment, such as older adults or those less active on digital platforms. This approach aligns with Sweden’s broader administrative and digital infrastructure, where citizens are accustomed to receiving official communication and services via mail and online platforms [36], and is supported by high trust in public institutions [32].
Invitations to join the panel seem generally successful upon contact. LISS, ELIPSS, and GESIS extend the invitation during the CAPI session, whereas the NCP does so at its end, capitalizing on the participant’s engagement. The GIP uses a structured wave-based approach, integrating panel invitations into ongoing data collection [7]. The SCP requires respondents to act upon the invitation by registering online, a step that can introduce selection bias if not well supported [37,38]. Effective recruitment hinges not only on initial contact but on how seamlessly individuals can transition from respondent to panel member.
Incentive strategies across panels vary in structure and timing. Unconditional incentives, like the EUR 10 offered by LISS, tend to increase response rates but at higher financial costs [39]. Mixed models, such as those used by the GIP and ELIPSS, aim to balance initial motivation and long-term commitment. Conditional incentives, as seen with GESIS (EUR 5 per completed wave), are cost-effective and promote retention [40]. The SCP’s varied incentives reflect an adaptive approach based on the type of panelist. Individuals who voluntarily join the SCP through self-registration do not receive any form of incentive for registering. This approach relies on the participants’ intrinsic motivation to contribute to research [37,41]. For participants recruited through probability-based methods, the SCP has experimented with non-monetary incentives to enhance recruitment rates. Specifically, during certain recruitment phases, lottery incentives were included in postal invitations and reminder letters. These lotteries offered participants a chance to win prizes as encouragement to join the panel, potentially positively affecting recruitment rates [42]. It is important to note that while these non-monetary incentives are utilized during the recruitment process for probability-based panelists, the SCP does not provide incentives for participation in the regular survey waves for any panelists, regardless of their recruitment method.
Response rates differ significantly across panels. The GIP reported improvements from 18.5% in 2012 to 30.9% in 2016, indicative of methodological refinement and possible learning effects [7]. ELIPSS provides wave-level data, suggesting iterative adjustments. LISS and GESIS report high initial cooperation, attributed to their combined face-to-face and postal strategies. The SCP reports response rates of around 70% over the 2024 waves, but each wave targets only a panel sub-sample. The NCP reports response rates of around 15% in the last 10 waves. These disparities likely reflect the complex interplay of contact mode, incentives, sampling design, and national survey cultures [43].
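As a reading aid for such figures, the cumulative recruitment rate of a panel can be approximated as the product of conditional stage-level rates (response to the recruitment interview, stated willingness, actual registration). The short sketch below uses hypothetical stage rates for illustration only; they are not figures reported by any of the panels.

```python
# Illustrative only: the cumulative recruitment rate of a panel approximated as the
# product of conditional stage-level rates. The stage rates below are hypothetical
# placeholders, not figures reported by any of the panels discussed here.
def cumulative_recruitment_rate(stage_rates):
    rate = 1.0
    for r in stage_rates:
        rate *= r
    return rate

# e.g., 75% respond to the recruitment interview, 84% of respondents are willing
# to join, and 76% of willing persons complete the online registration:
print(f"{cumulative_recruitment_rate([0.75, 0.84, 0.76]):.1%}")  # -> about 48%
```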
Crucially, recruitment procedures also serve as a practical entry point for operationalizing the values of Open Science and FAIR data stewardship. How participants are invited, incentivized, and engaged—primarily through mixed-mode strategies and offline outreach—directly shapes not only who is represented in a panel but also the usability, transparency, and equity of the resulting data. Panels like LISS and ELIPSS, which combine robust pre-contact, multimodal engagement and carefully designed incentive models, embody a recruitment logic that actively supports fairness in participation and inclusivity of underrepresented groups—both prerequisites for data that are broadly representative.
Mixed-mode and offline strategies play a central role in minimizing exclusion due to the digital divide, one of the primary ethical and methodological challenges identified in longitudinal online research. For instance, the SCP’s integration of postal and telephone invitations reflects an adaptive design, responsive to gaps in digital access. Similarly, the face-to-face recruitment used by GESIS and the GIP ensures the inclusion of individuals who are less digitally connected. These strategies, although resource-intensive, enhance external validity and reinforce the philosophy of science that underpins Open Science.
Incentive schemes highlight how recruitment design intersects with the principles of openness and sustainability in research. While monetary incentives can effectively boost response rates, they also raise concerns about equity and long-term feasibility. This is particularly relevant in the context of Open Science, where reliance on financial rewards may pose a significant barrier for low- and middle-income countries that cannot afford such measures, potentially limiting their participation and undermining global inclusiveness. Alternative models—such as the SCP’s lottery-based incentives or GESIS’s per-wave payments—reflect efforts to balance participant engagement with cost-effectiveness and fairness. In this regard, transparency in the rationale behind incentive structures and in their documentation is essential to uphold the accountability norms that underpin Open Science.
Ultimately, the recruitment phase is not merely procedural: it is foundational to building trustworthy, transparent, and equitable research infrastructures. The comparability and openness of European OPPs depend not only on post hoc data sharing but also on how fairly and inclusively respondents are brought into the panels in the first place. Recruitment, therefore, is where Open Science begins.

3.4. Panel Maintenance

There are numerous factors that influence the decision to respond to a survey, as well as various external elements that may affect the choice to continue participating in future waves of a longitudinal study. One of the key challenges is the ability to track and engage panel members during the period between waves.
In the LISS panel, data are collected annually in two fieldwork periods lasting three and four weeks. A reminder is sent twice to non-responders, while a second fieldwork period is directed at those who did not respond in the first one, again followed by two reminders. Questionnaires can be filled out only online. In the GESIS Panel, independently of the survey mode, all participants are invited by mail and receive an unconditional incentive of EUR 5. In addition to the mail invitation, online participants receive an invitation and two reminders by email, whereas no reminders are sent out in the mail mode. In the GIP, interviews lasting about 20–25 min are carried out bi-monthly. For each interview, respondents receive EUR 4, with an annual bonus of EUR 5 if panel members participated in all but one interview, and a bonus of EUR 10 if they participated in all interviews of a year [7]. In ELIPSS, panelists are asked to respond to monthly questionnaires taking up to 30 min. At back-to-school time and at the end of the year, seasonal greetings are sent to the panelists by postal mail together with a small voucher (EUR 10). Finally, the Nordic panels are less demanding in terms of required participation, but they differ in the use of incentives. The SCP distributes online surveys three to four times per year to different sub-samples of panel respondents, who do not receive any incentives for their participation. In the NCP, panelists are asked to complete a 15-minute online questionnaire three times a year. A lottery for travel gift cards is included in each survey round as an incentive.
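As a back-of-the-envelope illustration of how such incentive schemes translate into annual costs, the sketch below applies the GIP figures quoted above (EUR 4 per bi-monthly interview plus a EUR 10 bonus for full participation); the result is illustrative arithmetic, not an official budget figure.

```python
# Back-of-the-envelope arithmetic based on the GIP figures quoted above
# (EUR 4 per bi-monthly interview, EUR 10 yearly bonus for full participation).
# Illustrative only; not an official budget figure.
interviews_per_year = 6          # bi-monthly waves
per_interview_eur = 4.0
full_participation_bonus_eur = 10.0

annual_cost_eur = interviews_per_year * per_interview_eur + full_participation_bonus_eur
print(f"EUR {annual_cost_eur:.2f} per fully participating panelist per year")  # EUR 34.00
```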
The literature on attrition typically distinguishes three main components: failure to locate the sample unit, failure to make contact, and failure to gain cooperation. While the risk of the first two is lower in online panels, failure to gain cooperation may be the greatest concern for longitudinal online surveys [12]. Interestingly, the GESIS Panel employs two distinct methods for collecting participant feedback: dedicated comment fields in surveys—to provide feedback directly related to the questionnaire content—and direct contact with the panel management—for feedback on the survey process [44]. In the GIP, the hotline can be reached via email and telephone (a toll-free number). Furthermore, panel members are asked for feedback at the end of each questionnaire, both through closed rating questions and through an open question for more general feedback. This possibility is also available in ELIPSS, where special open-ended items are included in each questionnaire to let respondents give feedback about their impressions and difficulties.
Panel rules define whether any data collection should be attempted with nonrespondents from the previous waves and, if so, which nonrespondents should be targeted. For example, they may establish the ineligibility of participants if they were hard refusals or untraceable during the previous wave or if they failed to respond to two previous waves [45]. In LISS, panel members who have not filled out a questionnaire for three months are considered “sleepers”—without being formally excluded. Effective measures include a follow-up call in the event of inactivity (non-response) for longer than two months and an offer of a EUR 10 conditional incentive after three months [46]. Similarly, in the GIP, respondents who failed to respond to two consecutive waves are additionally reminded over the phone during the last week of fieldwork in each month, but they can always return to the survey [47]. In the GESIS Panel, individuals who do not respond to or cannot be reached for three consecutive waves are excluded from the panel (involuntary attrition). In ELIPSS, all the non-respondents to one or more preceding studies are notified by postal mail about their initial commitment to the panel and invited to respond to oncoming survey invitations. If they remain nonresponsive to the following waves, they are thanked for their participation and removed from the panel [48].
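A simplified sketch of these re-contact and exclusion rules is given below; the thresholds are paraphrased from the descriptions above, and each panel applies its own, more nuanced variant.

```python
# Simplified sketch of the re-contact and exclusion rules described above; the
# thresholds are paraphrased from public documentation and each panel applies its
# own, more nuanced variant.
def panel_status(consecutive_missed_waves: int, policy: str) -> str:
    if policy == "GESIS":
        # Excluded after three consecutive waves without response or contact.
        return "excluded" if consecutive_missed_waves >= 3 else "active"
    if policy == "LISS":
        # Inactive members become "sleepers" (follow-up call, conditional incentive)
        # without being formally excluded.
        return "sleeper" if consecutive_missed_waves >= 3 else "active"
    if policy == "ELIPSS":
        # Non-respondents are first reminded by postal mail, then removed if they
        # remain unresponsive to the following waves (threshold assumed here).
        if consecutive_missed_waves >= 3:
            return "removed"
        return "postal reminder" if consecutive_missed_waves >= 1 else "active"
    return "active"

print(panel_status(3, "GESIS"), panel_status(3, "LISS"), panel_status(1, "ELIPSS"))
```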
One key design choice for all panels is how to maintain the representativeness of the sample over time. With the aging of the panel, there may be increasing differences between the panel and the population of interest. For longitudinal research, this issue is less of a concern, since the primary aim is typically to observe changes over time. However, if the research questions are about the extent and nature of change at the population level, it is essential to find a way of adding new members of the cross-sectional population to the initial sample. Generally, in all panels, refreshment sampling is carried out every two to three years, but there are some differences in how this is performed. In the GESIS Panel, the sampling design of all refreshment samples is based on a “piggybacking” approach, i.e., respondents are recruited after completing other GESIS surveys. The first two refreshment cohorts (2016, 2018) were based on the General Social Survey (ALLBUS). In 2021, the refreshment was drawn from the German Study of the International Social Survey Programme (ISSP), whereas in 2023 and 2025, it followed the German part of the European Social Survey (ESS) [49]. In the LISS Panel, in addition to the periodic refreshment that takes place every two to three years, a new recruitment round is also launched when the panel becomes too small (less than 5000 respondents) or when there arises a threat of significant deviation from population figures on the main sociodemographic characteristics (gender, age, educational level, net income per household, urbanity, province, household size, and residential form) [50]. The GIP and ELIPSS have had a similar timeline over the years, but in the latter, there was one more round of refreshment. In detail, the GIP started in 2012 and was supplemented with two refreshments in 2014 and 2018. ELIPSS was established in 2012 and then refreshed three times: in 2016, 2020, and 2023. In the NCP, supplementary recruiting is performed on a yearly basis.
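The LISS refreshment rule described above can be summarized schematically as a simple trigger: a new recruitment round starts when the active panel shrinks below a size threshold or when its composition drifts too far from population benchmarks. The sketch below is only an illustration; the 5-percentage-point tolerance is a hypothetical value, not the panel's actual criterion.

```python
# Schematic illustration of the LISS-style refreshment trigger described above:
# a new recruitment round starts when the active panel shrinks below a size
# threshold or drifts too far from population benchmarks. The 5-percentage-point
# tolerance is a hypothetical value, not the panel's actual criterion.
def needs_refreshment(active_panel_size, panel_shares, population_shares,
                      min_size=5000, max_deviation=0.05):
    if active_panel_size < min_size:
        return True
    # Compare panel composition with population figures on key socio-demographics
    # (e.g., gender, age, education, income, urbanity, province, household size).
    for key, pop_share in population_shares.items():
        if abs(panel_shares.get(key, 0.0) - pop_share) > max_deviation:
            return True
    return False

# Example: the panel is large enough, but one category deviates by more than 5 points.
print(needs_refreshment(
    active_panel_size=5600,
    panel_shares={"age_65_plus": 0.12, "female": 0.51},
    population_shares={"age_65_plus": 0.20, "female": 0.50},
))  # -> True
```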
The strategies adopted by the European OPPs to maintain panel engagement and counteract attrition (for a summary overview, see Table A4 in the Appendix A.4) directly affect their openness in terms of data continuity, usability, and transparency. Panels like LISS and ELIPSS, which implement structured re-engagement procedures (e.g., multi-step reminders, personal communication, and conditional incentives) and integrate participant feedback mechanisms, demonstrate a deliberate effort to preserve the integrity of longitudinal data over time—an essential requirement for reusable and reliable datasets. These strategies enhance transparency and make the dataset more suitable for secondary analysis, thus aligning closely with Open Science principles. In contrast, panels like the Swedish and Norwegian Citizen Panels, with less frequent engagement demands and limited or absent incentives, adopt a lower-maintenance model that may reduce respondent fatigue but could also limit data richness and long-term comparability. Moreover, differences in recontact policies, retention thresholds, and refreshment sampling timelines influence the panel’s ability to remain representative and openly usable across time. The GESIS Panel’s strict exclusion criteria for non-respondents, for example, promote internal consistency but reduce inclusivity. By contrast, the GIP and LISS allow re-entry or long-term “sleeper” status, ensuring participant autonomy and longer-term data availability. These design choices reveal that panel maintenance is not only a methodological concern but a cornerstone of sustainable and open research infrastructures, where responsiveness to participants and adaptive recruitment are key to delivering FAIR-compliant, longitudinal data assets.

4. Discussion

The comparative analysis of sampling, recruitment, and maintenance strategies across European Online Probability Panels (OPPs) provides a clear example of how data infrastructures are deeply embedded in specific social, institutional, and technological contexts. Drawing on the literature on knowledge infrastructures [17,19] and situated data practices [15,20], our findings show that OPPs are far from being standardized or universally replicable tools for data collection. On the contrary, they operate as situated infrastructures [15], whose design and management reflect national policies, local survey cultures, and resource availability.
Practices of recruitment, sampling, and maintenance—such as the provision of devices to offline participants [7], the choice of offline contact modes, or the criteria for panel refreshment—are not neutral technical procedures, but socio-technical interventions that mediate access to participation and define who is included in the data ecosystem [17]. Similarly, panel maintenance strategies—including incentives, communication channels, and panel care—highlight the infrastructural work required to sustain participation over time [19], ensuring data quality while managing issues of attrition and nonresponse [9].
Furthermore, the varying degrees of openness [27] observed across OPPs reflect different governance models and resource constraints [21], showing that data openness is never absolute but always negotiated and conditioned by infrastructural choices. From this perspective, OPPs emerge as knowledge infrastructures in transition [17], evolving from mere data archives to participatory and FAIR-oriented ecosystems [20,23], yet still deeply shaped by their situated histories and institutional configurations.
Future research should develop more refined tools and indicators to assess openness in OPP research infrastructures, moving beyond the technical dimensions captured by the FAIR principles and addressing social, governance, and ethical aspects of data openness.
As Di Donato [51] argued, while many datasets produced by OPPs are increasingly findable, accessible, and even interoperable thanks to technical developments, their actual reuse remains limited, and their openness is still under-examined. A systematic review of data reuse practices and impact across different user communities would be essential to understand the real value and circulation of open data within and beyond academia. Furthermore, the development of a FAIR-oriented evaluation framework enriched with indicators addressing both ongoing and incoming openness would be highly valuable in enabling a more comprehensive assessment of openness in research infrastructures.

5. Conclusions

The ever-increasing need for robust, reliable, and timely information underscores the importance of data production infrastructures that are methodologically sound and grounded in transparent, advanced communication models. The design of such systems must draw from previous experiences to extract functional practices and adapt them to the specificities of national and institutional contexts. The comparative analysis of European OPPs highlights how openness, FAIR compliance, and participatory governance are not fixed standards, but evolving principles negotiated within situated socio-technical environments.
The heterogeneity among the cases analyzed does not constitute a limitation; rather, it reflects a plurality of viable models that can inform the development of new panels in countries currently undergoing this transition. Different European countries, including Italy, stand at a strategic crossroads where they can design infrastructures natively aligned with Open Science values, instead of retrofitting legacy systems. In this regard, the diversity of approaches—ranging from the openness and modularity of the Dutch and French models to the selective accessibility of the Scandinavian and German experiences—offers a rich repository of institutional solutions.
Building on the comparative insights from our analysis, we propose a preliminary typology of OPPs structured around two key dimensions: the degree of openness (in terms of data access, transparency, and participatory models) and the level of attention to sample quality and ongoing engagement with panelists. Rather than fixed categories, these dimensions define a two-axis space, allowing panels to be located along a continuum in both respects. This results in four emerging profiles:
1. Open and Engaged Panels: These panels combine high levels of openness with strong investments in sample quality and panelist relations, including transparent recruitment, accessible data, and long-term participant engagement strategies.
2. Open but Lightly Managed Panels: Panels that provide relatively open data access and transparent documentation but place limited emphasis on long-term panel maintenance or respondent experience.
3. Closed but Carefully Maintained Panels: Panels with restricted data access or limited transparency, but which are nonetheless highly curated in terms of sampling procedures, retention strategies, and respondent care.
4. Closed and Minimally Managed Panels: These represent the lower end of both dimensions, with limited openness and minimal investment in panelist relationships or methodological quality.
This typology is intended as a flexible analytical tool to map differences among OPPs, while also highlighting trade-offs and potential tensions between openness, methodological rigor, and sustainability.
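For readers who prefer an operational view, the typology can be read as a simple decision rule over the two dimensions. The sketch below is purely illustrative: the numeric scores and the 0.5 cut-point are hypothetical devices, whereas the typology itself treats both dimensions as continua rather than fixed categories.

```python
# Purely illustrative reading of the two-axis typology proposed above. The numeric
# scores and the 0.5 cut-point are hypothetical devices; the paper treats both
# dimensions as continua rather than fixed categories.
def classify(openness: float, engagement: float, cut: float = 0.5) -> str:
    if openness >= cut and engagement >= cut:
        return "Open and Engaged"
    if openness >= cut:
        return "Open but Lightly Managed"
    if engagement >= cut:
        return "Closed but Carefully Maintained"
    return "Closed and Minimally Managed"

# Example: broadly accessible data but limited investment in panel care.
print(classify(openness=0.8, engagement=0.3))  # -> "Open but Lightly Managed"
```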
This updated comparative work, therefore, serves as a valuable resource not only for methodological reflection but also for strategic planning by institutional, political, and social actors. Strengthening the interoperability, reuse, and social impact of public interest data depends on our ability to learn from these experiences and to foster infrastructures that are both scientifically rigorous and socially responsive. The transition of OPPs into participatory knowledge ecosystems marks a crucial evolution in digital social research—one that combines methodological excellence with ethical accountability and institutional sustainability.
In this context, aligning future panel development with emerging European frameworks becomes increasingly relevant. Initiatives such as the European Open Science Cloud (EOSC) and the Horizon Europe program actively promote open data mandates, cross-border interoperability, and shared standards for data governance. For countries currently in a design phase—such as Italy and Belgium—these initiatives offer both infrastructural models and funding mechanisms to support the development of panels that are natively embedded in the broader European Open Science ecosystem. Explicitly integrating these frameworks into the design process can help ensure not only compliance with evolving policy standards, but also long-term sustainability and international compatibility of national infrastructures.

Author Contributions

Conceptualization, L.T. and R.P.; methodology, L.T.; validation, L.C.; formal analysis, N.M., I.P. and M.S.; investigation, N.M., I.P. and M.S.; resources, D.G., R.P. and C.P.; data curation, D.G., R.P. and C.P.; writing—original draft preparation, L.T., R.P., L.C., N.M., I.P. and M.S.; writing—review and editing, L.T. and R.P.; visualization, L.C., L.T., R.P., N.M., I.P., M.S., D.G. and C.P.; supervision, L.C.; project administration, L.C.; funding acquisition, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the National Recovery and Resilience Plan (PNRR)—NextGenerationEU, grant number MUR IR00008, within the FOSSR project “Fostering Open Science for Social Science Research”.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Italian National Research Council (protocol code 0501489 of 18 December 2024).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in the study are openly available in documentation accessible via the websites of the online panels investigated. List of consulted online panel websites: ELIPSS Panel, https://www.sciencespo.fr/cdsp/en/projects/ongoing-projects/the-elipss-panel/ and https://quanti.dime-shs.sciences-po.fr/en/; German Internet Panel (GIP), https://www.uni-mannheim.de/en/gip/; Swedish Citizen Panel (SCP), https://www.gu.se/en/som-institute/the-swedish-citizen-panel/about-the-swedish-citizen-panel and https://www.gu.se/som-institutet/resultat-och-publikationer/vetenskapliga-publikationer; LISS Panel, https://www.lissdata.nl/; GESIS Panel, https://www.gesis.org/; and Norwegian Citizen Panel, https://surveybanken.sikt.no/en/series/ed271b1c-2595-47e4-8c97-3fcc00f02368 (all links accessed on 2 July 2025).

Acknowledgments

We would like to thank Federica Mattei for her valuable administrative support throughout the funding of this work. We also acknowledge Mario Paolucci, Director of the Institute, for his guidance and support during the project.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The authors declare that they have no direct involvement in the development, management, or operation of any of the Online Probability Panels (OPPs) analyzed in this study. This ensures an independent and impartial comparative analysis.

Appendix A

Appendix A.1

Table A1. Summary overview and openness.

| Panel | Start | Initial Sample Sizes | Length and Frequency | Outgoing Openness | Incoming Openness |
|---|---|---|---|---|---|
| LISS | 2007 | 5,259 households / 8,849 persons | Every month / 30 min | Yes | Yes |
| SCP | 2010 | 9,000 | 4 rounds per year | N/A | Yes (under request) |
| ELIPSS | 2012 | 1,039 | Every month / 30 min | Yes | Yes |
| GIP | 2012 | 1,483 | Bi-monthly / 20–25 min | Yes (restricted) | Yes (restricted) |
| GESIS | 2014 | 4,900 | Bi-monthly / 20–25 min | Yes | Yes |
| NCP | 2013 | 4,905 | 3 times per year / 15 min | Yes (under request) | No |

Appendix A.2

Table A2. Summary sampling.

| Panel | Target Population | Sampling Frame | Sampling Procedure | Including Offliners |
|---|---|---|---|---|
| LISS | 16+ y/o Dutch-speaking individuals in private households | Nationwide address frame of Statistics Netherlands | Simple random sample of addresses | Yes |
| SCP | Self-selected + random sample | Mixing self-recruitment and probability sampling based on population registers | N/A | N/A |
| ELIPSS | 18–79 y/o French-speaking individuals in private households | Listing of housing units from the rolling census | Multi-stage, stratified, clustered random sampling from a census-based sampling frame | Yes |
| GIP | 16–75 y/o individuals from 180 municipalities | Residents’ registration offices | Regionally proportional stratified sampling | Yes |
| GESIS | 18–70 y/o German-speaking individuals in private households | Municipal population registers | Two-stage stratified random sampling from population registers | Yes |
| NCP | 18+ y/o | National population registry | Probability sample | No |

Appendix A.3

Table A3. Summary recruitment.

| Panel | Materials | Offline | Invitation | Incentives | Response Rate |
|---|---|---|---|---|---|
| LISS | Letter and brochure; CATI/CAPI follow-up | Yes | After brief interview | Unconditional EUR 10 / EUR 10 upon registration | 75% of respondents / 84% willing / 48% registered |
| SCP | Online messages / letter | No | Part self-recruited online / part invited by letter | Voluntary / lottery incentives | Per wave: between 52% and 70% |
| ELIPSS | Letter; CATI/CAPI follow-up | Yes (CATI/CAPI) | CAPI (CATI alternative) | Experiment: unconditional half of EUR 10 | Per wave: between 19% and 33% |
| GIP | Letter | Yes (given PC) | Six-condition incentives and recruitment modality | Unconditional EUR 5 / conditional EUR 10 | 47% |
| GESIS | Letter; CAPI follow-up | Yes (CAPI) | After brief interview | Unconditional EUR 5 / conditional EUR 5 | 35.5% of respondents / 81.7% willing / 29% registered |
| NCP | Letter; reminder postcard | Yes (CATI) | After brief interview | Travel gift card | Per wave: between 14% and 23% |

Appendix A.4

Table A4. Summary maintaining.

| Panel | Invitations | Incentives | Communication | Further Measures |
|---|---|---|---|---|
| LISS | Email, push message; 2 email reminders | EUR 15 per hour of interview time / EUR 10 for sleepers | Phone, email, website | Results, newsletter, feedback options |
| SCP | Website / postal | N/A | N/A | N/A |
| ELIPSS | Email, push message; 2 email reminders + personalized actions | Tablet and connection | Email, mail, website, researcher contact | Results, feedback options |
| GIP | Cover letter and informative material | Conditional EUR 5 + yearly bonus (EUR 5/10) | Email | - |
| GESIS | Email / letter | Unconditional EUR 5 for each wave | Phone, email, contact person | - |
| NCP | Email; 2 email reminders (or SMS) | EUR 5 lottery travel gift card for each round | Email, SMS | - |

Notes

1. The selection includes all panels for which detailed methodological information was publicly available or, when not directly accessible, was provided in response to our request for documentation and clarification.

References

  1. Baker, R.; Blumberg, S.J.; Brick, M.; Couper, M.P.; Courtright, M.; Dennis, J.M.; Dillman, D.; Frankel, M.R.; Garland, P.; Groves, R.M.; et al. Research Synthesis: AAPOR Report on Online Panels. Public Opin. Q. 2010, 74, 711–781. [Google Scholar] [CrossRef]
  2. Callegaro, M.; Baker, R.P.; Bethlehem, J.; Göritz, A.S.; Krosnick, J.A.; Lavrakas, P.J. Online Panel Research: A Data Quality Perspective; Wiley: Chichester, UK, 2014; ISBN 9781119941774. [Google Scholar]
  3. Couper, M.P. Designing Effective Web Surveys; Cambridge University Press: New York, NY, USA, 2008. [Google Scholar]
  4. Couper, M.P.; Bosnjak, M. Internet Surveys. In Handbook of Survey Research; Wright, J.D., Marsden, P.V., Eds.; Elsevier: San Diego, CA, USA, 2010; pp. 527–550. [Google Scholar]
  5. Lavrakas, P.J.; Pennay, D.; Neiger, D.; Phillips, B. Comparing Probability-Based Surveys and Nonprobability Online Panel Surveys in Australia: A Total Survey Error Perspective. Surv. Res. Methods 2022, 16, 241–266. [Google Scholar] [CrossRef]
  6. Cornesse, C.; Blom, A.G.; Dutwin, D.; Krosnick, J.A.; De Leeuw, E.D.; Legleye, S.; Pasek, J.; Pennay, D.; Phillips, B.; Sakshaug, J.W.; et al. A Review of Conceptual Approaches and Empirical Evidence on Probability and Nonprobability Sample Survey Research. J. Surv. Stat. Methodol. 2020, 8, 4–36. [Google Scholar] [CrossRef]
  7. Blom, A.G.; Gathmann, C.; Krieger, U. Setting Up an Online Panel Representative of the General Population: The German Internet Panel. Field Methods 2015, 27, 391–408. [Google Scholar] [CrossRef]
  8. Maslovskaya, O.; Lugtig, P. Representativeness in Six Waves of Cross-National Online Survey (CRONOS) Panel. J. R. Stat. Soc. Ser. A Stat. Soc. 2022, 185, 851–871. [Google Scholar] [CrossRef]
  9. Cornesse, C.; Blom, A.G. Response Quality in Nonprobability and Probability-Based Online Panels. Sociol. Methods Res. 2023, 52, 879–908. [Google Scholar] [CrossRef]
  10. Saris, W. Ten Years of Interviewing without Interviewers: The Telepanel. In Computer Assisted Survey Information Collection; Couper, M.P., Baker, R.P., Bethlehem, J., Clark, C.Z.F., Nicholls, W.L., II, O’Reilly, J.M., Eds.; Wiley & Sons, Inc.: New York, NY, USA, 1998; pp. 409–429. [Google Scholar]
  11. Postoaca, A. The Anonymous Elect: Market Research Through Online Access Panels; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  12. Blom, A.G.; Bosnjak, M.; Cornilleau, A.; Cousteaux, A.S.; Das, M.; Douhou, S.; Krieger, U. A Comparison of Four Probability-Based Online and Mixed-Mode Panels in Europe. Soc. Sci. Comput. Rev. 2016, 34, 8–25. [Google Scholar] [CrossRef]
  13. David, P.A. The Historical Origins of “Open Science”: An Essay on Patronage, Reputation and Common Agency Contracting in the Scientific Revolution. Capital. Soc. 2008, 3, 1–103. [Google Scholar] [CrossRef]
  14. UNESCO. UNESCO Recommendation on Open Science; UNESCO: Paris, France, 2021. [Google Scholar]
  15. Leonelli, S. Philosophy of Open Science; Cambridge University Press: Cambridge, UK, 2023. [Google Scholar]
  16. Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.W.; da Silva Santos, L.B.; Bourne, P.E.; et al. The FAIR Guiding Principles for Scientific Data Management and Stewardship. Sci. Data 2016, 3, sdata201618. [Google Scholar] [CrossRef]
  17. Edwards, P.N.; Jackson, S.J.; Chalmers, M.K.; Bowker, G.C.; Borgman, C.L.; Ribes, D.; Burton, M.; Calvert, S. Knowledge Infrastructures: Intellectual Frameworks and Research Challenges; Deep Blue: Ann Arbor, MI, USA, 2013.
  18. Schumann, N.; Mauer, R. The GESIS Data Archive for the Social Sciences: A Widely Recognised Data Archive on Its Way. Int. J. Digit. Curation 2013, 8, 215–222.
  19. Borgman, C.L.; Scharnhorst, A.; Golshan, M.S. Digital Data Archives as Knowledge Infrastructures: Mediating Data Sharing and Reuse. J. Assoc. Inf. Sci. Technol. 2019, 70, 888–904.
  20. Bezuidenhout, L.M.; Leonelli, S.; Kelly, A.H.; Rappert, B. Beyond the Digital Divide: Towards a Situated Approach to Open Data. Sci. Public Policy 2017, 44, 464–475.
  21. Beck, S.; Bergenholtz, C.; Bogers, M.; Brasseur, T.-M.; Conradsen, M.L.; Di Marco, D.; Distel, A.P.; Dobusch, L.; Dörler, D.; Effert, A.; et al. The Open Innovation in Science Research Field: A Collaborative Conceptualisation Approach. Ind. Innov. 2022, 29, 136–185.
  22. Van den Eynden, V.; Corti, L. Advancing Research Data Publishing Practices for the Social Sciences: From Archive Activity to Empowering Researchers. Int. J. Digit. Libr. 2017, 18, 113–121.
  23. Wehn, U.; Ajates, R.; Mandeville, C.; Somerwill, L.; Kragh, G.; Haklay, M. Opening Science to Society: How to Progress Societal Engagement into (Open) Science Policies. R. Soc. Open Sci. 2024, 11, 231309.
  24. Schreier, M. Qualitative Content Analysis in Practice; Sage Publications: London, UK, 2012.
  25. Krippendorff, K. Content Analysis: An Introduction to Its Methodology, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2004.
  26. Neuendorf, K.A. The Content Analysis Guidebook; Sage: Thousand Oaks, CA, USA, 2002.
  27. MacQueen, K.M.; McLellan, E.; Kay, K.; Milstein, B. Codebook Development for Team-Based Qualitative Analysis. CAM J. 1998, 10, 31–36.
  28. Früh, W. Inhaltsanalyse: Theorie und Praxis; UVK Verlagsgesellschaft: Konstanz, Germany, 2007.
  29. Kocar, S.; Kaczmirek, L. A Meta-Analysis of Worldwide Recruitment Rates in 23 Probability-Based Online Panels, between 2007 and 2019. Int. J. Soc. Res. Methodol. 2024, 27, 589–604.
  30. Dillman, D.A.; Smyth, J.D.; Christian, L.M. Internet, Phone, Mail, and Mixed Mode Surveys: The Tailored Design Method, 4th ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2014.
  31. Scherpenzeel, A.C. “True” Longitudinal and Probability-Based Internet Panels: Evidence from the Netherlands. In Social and Behavioral Research and the Internet; Routledge: New York, NY, USA, 2011.
  32. OECD. OECD Survey on Drivers of Trust in Public Institutions 2024 Results—Country Notes: Sweden. Available online: https://www.oecd.org/en/publications/oecd-survey-on-drivers-of-trust-in-public-institutions-2024-results-country-notes_a8004759-en/sweden_11ca1946-en.html (accessed on 20 May 2025).
  33. AAPOR. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 10th ed. Available online: https://aapor.org/wp-content/uploads/2024/03/Standards-Definitions-10th-edition.pdf (accessed on 20 May 2025).
  34. Koch, A.; Blom, A.G.; Stoop, I.; Kappelhof, J. Data Collection Quality Assurance in Cross-National Surveys: The Example of the ESS. Methoden Daten Anal. 2009, 3, 219–247.
  35. Arnesen, S. A Guide to the 2017 European Internet Panel Study. Available online: https://bookdown.org/sveinungarnesen78/eips2017-guide (accessed on 20 May 2025).
  36. European Commission. The Digital Economy and Society Index (DESI). Available online: https://digital-strategy.ec.europa.eu/en/policies/desi (accessed on 20 May 2025).
  37. SOM Institute. About the Swedish Citizen Panel. Available online: https://www.gu.se/en/som-institute/the-swedish-citizen-panel/about-the-swedish-citizen-panel (accessed on 20 May 2025).
  38. Nilsson, A.; Bonander, C.; Strömberg, U.; Canivet, C.; Östergren, P.-O.; Björk, J. Reweighting a Swedish Health Questionnaire Survey Using Extensive Population Register and Self-Reported Data for Assessing and Improving the Validity of Longitudinal Associations. PLoS ONE 2021, 16, e0253969.
  39. Singer, E.; Ye, C. The Use and Effects of Incentives in Surveys. Ann. Am. Acad. Political Soc. Sci. 2013, 645, 112–141.
  40. Göritz, A.S. Incentives in Web Studies: Methodological Issues and a Review. Int. J. Internet Sci. 2006, 1, 58–70.
  41. Brüggen, E.; Wetzels, M.; De Ruyter, K.; Schillewaert, N. Individual Differences in Motivation to Participate in Online Panels: The Effect on Response Rate and Response Quality Perceptions. Int. J. Mark. Res. 2011, 53, 369–390.
  42. Martinsson, J.; Riedel, K. Postal Recruitment to a Probability Based Web Panel. Long Term Consequences for Response Rates, Representativeness and Costs. LORE Work. Pap. 2015, 1, 1–18.
  43. de Leeuw, E.D.; Hox, J.; Dillman, D. International Handbook of Survey Methodology, 1st ed.; Routledge: New York, NY, USA, 2012; ISBN 9781136910630.
  44. Wahlig, G.; Dannwolf, T.; Züll, C.; Tanner, A. Panelmanagement: Probleme, Anmerkungen und Kommentare der Befragten; Kategorienschema für die Codierung von Befragtenrückmeldungen im GESIS Panel. GESIS Pap. 2018, 8, 1–40.
  45. Kalton, G. Some Issues in the Design and Analysis of Longitudinal Surveys. In Proceedings of the 59th World Statistics Congress of the International Statistical Institute, Hong Kong, 25–30 August 2013; International Statistical Institute: The Hague, The Netherlands, 2013; pp. 2611–2616.
  46. Scherpenzeel, A. Survey Participation in a Probability-Based Internet Panel in the Netherlands. In Improving Survey Methods: Lessons from Recent Research; Routledge: New York, NY, USA, 2015; pp. 223–235.
  47. Lugtig, P.; Blom, A. Using Paradata to Explain Attrition in the German Internet Panel. In Proceedings of the Panel Survey Methods Workshop, Berlin, Germany, 20 June 2016.
  48. Palat, B.; Elie, M.; Bendjaballah, S.; Garcia, G.; Sauger, N. Give Them a Call! About the Importance of Call-Back Strategies in Panel Surveys. Surv. Pract. 2023, 16, 1–11.
  49. Schwerdtfeger, M.; Hadler, P.; Weyandt, K. GESIS Panel Wave Report: Wave Lb. Available online: https://access.gesis.org/dbk/78914 (accessed on 20 May 2025).
  50. Meijeren, M.; Bekkers, R.; Scheepers, P. Hop in and Drop Out: How Are Changes in the Life Course Related to Changes in Volunteering for Humanitarian Organizations? Nonprofit Volunt. Sect. Q. 2025.
  51. Di Donato, F.; Provost, L. Why Isn’t FAIR Enough? Bringing Together Methods and Values for Open Science Uptake. Um. Digit. 2025, 9, 17–46.