Article

Playful Learning with a Location-Based Digital Card Environment: A Promising Tool for Informal, Non-Formal, and Formal Learning

Research group WISE, Department Computer Science, Vrije Universiteit Brussel, 1050 Brussel, Belgium
* Author to whom correspondence should be addressed.
Information 2020, 11(3), 157; https://doi.org/10.3390/info11030157
Submission received: 29 January 2020 / Revised: 12 March 2020 / Accepted: 13 March 2020 / Published: 15 March 2020
(This article belongs to the Special Issue Advances in Mobile Gaming and Games-based Learning)

Abstract

Most people use their smartphones daily and carry them all the time. Therefore, mobile learning applications can be integrated into daily routines to make learning a part of daily life. While numerous mobile learning applications exist, each with their own goal, our aim was to explore the possibility of creating an engaging mobile environment that could be useful for informal learning, as well as for other forms of learning, i.e., non-formal and formal learning. The result is TICKLE, a playful learning environment for youngsters. It is a mobile, location-based smartphone application that offers youngsters an interactive environment for exploring their surroundings. It offers cards related to physical locations, which can be collected by performing small challenges (based on the principles of micro learning). A design science research approach was used to develop this software environment. Persuasive techniques and gamification are used to stimulate usage. Furthermore, a personalized approach is applied. The environment was evaluated by means of formative evaluations in different contexts. We obtained positive results and received useful feedback to improve and extend the application. We can conclude that, in the context of these evaluations, the app was usable for youngsters and able to engage them, and we see indications that it may be able to increase the intrinsic motivation and learning capacity of youngsters. In addition, our demonstrations show that the app is usable in different contexts and for different purposes. In this way, the environment can be used to offer youngsters appealing learning-related experiences.

1. Introduction

Learning is usually associated with going to school or following an educational program. This is so-called formal learning. Formal learning is typically institutionally organized, often classroom-based, and highly structured [1]. However, there are plenty of other opportunities to learn. Learning can happen as a byproduct of some other activity, i.e., so-called incidental learning [1], or while searching for information or exploring an environment. In general, this kind of learning is called informal learning [2,3], but different terms are also used, such as learning “en passant” [4] or self-regulated learning [5]. Informal learning is the result of an unplanned or unexpected event. Note that in the educational domain the term non-formal learning (also called semi-formal learning [6]) is also used. As opposed to informal learning, non-formal learning is a planned, but very adaptable, activity set up by an institution or organization [7]. It consists of learning embedded in planned activities that are not explicitly designed as learning but contain an important learning element [8] (p. 839). Examples of non-formal learning are visits to museums and city tours organized as part of educational curriculum activities. Thus, informal learning is distinguished from formal and non-formal learning by having no authority figure or mediator [7].
According to Marsick and Watkins [1], informal learning takes place wherever people have the need, motivation, and opportunity for learning. Informal learning is characterized as being integrated with daily routines, triggered by an internal or external event, not highly conscious, and an inductive process of reflection and action. Given these characteristics, current digital technologies, such as mobile applications and social media, can foster informal learning [9,10,11]. Most people use their smartphones daily and carry them all the time. Therefore, mobile applications can be integrated into daily routines. Furthermore, smartphones can be used to trigger the user at any moment. The use of social media, such as Flickr, YouTube, and Facebook, has steadily increased [12] and has also become part of people’s daily lives. The wealth of information available on the Web and accessible at any moment through smartphones provides plenty of opportunities to learn. Therefore, mobile applications and social media are nowadays also used for non-formal learning [8]. Two examples are the MTL Urban Museum App [13] and the Digital Literacy 2.0 project [14]. However, the aforementioned technologies can also be used to support formal learning [15]. Therefore, from a technological point of view, it should, in principle, be possible to develop a learning environment that can be used for all three forms of learning. Our goal was to investigate this possibility. Hence, our goal was not to explicitly support one of the three forms of learning, but, as formulated by Lonsdale, Vavoula, and Sharples, “to use mobile technologies to transform learning into a seamless part of daily life to the point where it is not recognized as learning at all” [16] (p. 5). We aimed to achieve this by creating an engaging software environment that would be used voluntarily for informal learning, but could also be used for non-formal learning and in the context of formal learning. The type of learning environment we aimed for is a so-called Playful Learning Environment (PLE). A playful learning environment refers to a technology-enriched play and learning environment that blends indoor and outdoor spaces to create a playground for exploration, narration, and imagination of information and knowledge [17,18].
Using the Design Science Research Methodology (DSRM) [19], we developed a mobile digital environment for stimulating youngsters to explore their environment in a meaningful and playful way. The digital environment uses a card-based interface and the principles of micro learning [20] to offer small chunks of learning content and challenges. Persuasive techniques [21] and gamification [22] are used to stimulate usage. Furthermore, a personalized approach is applied, meaning that what is offered, how and when, is adapted to the needs and behavior of the individual user. We performed several formative evaluations in different contexts, which provided useful feedback to improve and extend the application. We could conclude that, in the context of those evaluations, the app was usable for youngsters and able to engage them, and we see indications that it may be able to increase the intrinsic motivation and learning capacity of youngsters. By means of different use cases, we showed that the app is usable in different contexts and for different purposes, including informal, non-formal, and formal learning, thus achieving our main goal.
The paper starts by discussing related work (Section 2). Next, it elaborates on the research steps taken to come to the design of the system and justifies the decisions made (Section 3). In Section 4, we provide an overview of the system and its functionalities. Next, we discuss the evaluations performed and the demonstrators developed (Section 5). Section 6 provides a discussion, limitations, and future work. Section 7 concludes the paper.

2. Related Work

This section discusses research work and applications related to our approach and the developed environment, and compares them with our work.
The Personalized Adaptive CARD-based interface (PACARD) [23] combines a card-based interface, personalized adaptation, push notifications, and badges. It aims to enhance mobile learning engagement. The cards used in the interface simulate physical flashcards, which are used in education as an aid to memorization. In this way, the applicability of PACARD is limited to repetition-based learning. To be effective, flashcards should be reviewed at periodic intervals, such as daily. The TICKLE interface is also card-based, but the cards are not flashcards and the goal is not the retention of information. Similarly to PACARD, we used the principle of micro learning, but for a broader goal; in PACARD, the concept is used to help with memorization. Similarly to TICKLE, PACARD aims at providing a personalized adaptive learning experience on mobile devices. Its notification system is used to send reminders to the learners.
Similarly, MemReflex [24] uses flashcards in a mobile environment. It also applies the principle of micro learning, and the content delivery is adapted to the user’s performance.
Talentcards [25] is a commercial micro-learning-based platform using a card interface. Similarly to TICKLE, the cards consist of micro-learning objects. The objects can have several types of content (text, presentations, video, audio, etc.). The cards are available through a mobile application. In Talentcards, the cards are structured using learning paths. This solution is more tailored towards formal learning.
S_U+G is a commercial platform for creating serious urban games [26]. It allows the development of location-based urban games in which the user has to perform challenges. Similarly to TICKLE, by performing these challenges the user can learn more about a topic. Mayor@Your-town is an example of such a game. The aim of this serious game is to challenge youngsters to look at the city with new eyes and to send in policy suggestions on all matters that concern young people. The user can walk through the city with a tablet to find challenges to perform. The game also uses a card-based interface and the user can collect points. In this system, the challenge itself is presented as a card, while in TICKLE the challenge is accessible through the card. Urban games or street games are only one type of so-called mobile location-based game. In [27], a review of mobile location-based games and their possible contribution toward learning is provided. This review divides the games into three groups: the first group contains games that are developed for entertainment but could be used in the context of learning, such as geocaching games. The second group consists of games explicitly developed for learning, and the third group consists of hybrid games, which are designed for both entertainment and learning. Examples of this last type are mobile museum interaction games and role-playing games. Although TICKLE is not designed for the development of mobile games, it can be used, to a certain extent, to develop geocaching, urban, or museum games. Examples are given in Section 5.
Furthermore, there are several mobile phone applications that use persuasive technology to induce behavior changes. Some examples are:
  • Chick Clique [28] is a mobile phone application designed to motivate teenage girls to exercise more. Competition, self-monitoring, and social comparison are used as persuasive techniques.
  • iDetective [29] is a persuasive smartphone game that tries to persuade the users to walk more. The user has to find the location where a picture was taken. The application shows the picture and some clues. The main persuasive principle used is social comparison.
  • RightOnTime [30] is a mobile application intended to help people improve their punctuality and time-keeping skills. Self-monitoring and task reduction are the main principles used to achieve this goal. The app also uses notifications to give reminders and tips.

3. Research Methodology and Design Decisions

Our research is situated in the domain of Information System (IS) design science [31], which deals with the creation of innovative artifacts. Therefore, we follow the typical design science research process, which is an iterative process including a problem identification phase, a solution design phase, and an evaluation phase. The evaluation drives the iterative process. The purpose of the evaluation is to investigate how well the artifact supports a solution to the problem. Based on the results of the evaluation, the solution design may be improved. Different types of evaluation are possible, depending on the nature of the problem and the state of the artifact during the process.
Slightly different design science research processes exist [32]. We have followed the six steps of the Design Science Research Methodology (DSRM) [19]: (1) problem identification and motivation, (2) definition of the objectives for a solution, (3) design and development, (4) demonstration, (5) evaluation, and (6) communication. In this section, we discuss the first and second step, as well as the design decisions resulting from these steps. Step 3 is discussed in Section 4; steps 4 and 5 in Section 5. For communication (step 6), the regular academic channels are used, as well as presentations to relevant organizations and at relevant professional events.

3.1. Problem Identification, Motivation, and Definition of the Objectives

The original context of our research was the need to tackle school burnout. School burnout is a term used to refer to exhaustion at school, a cynical and detached attitude towards the school, and feelings of inadequacy as a student [33]. School burnout often precedes school dropout, also named Early School Leave (ESL) [34], which results in young people leaving education with only lower secondary education or less. Early school leavers have lower job opportunities and only qualify for jobs with lower earnings, which has a great impact on their further lives [35]. Therefore, the issue is high on the political agenda. Europe 2020 aims for a reduction of ESL to less than 10% [36]. Although different programs exist to prevent school burnout and ESL—ranging from offering customized training projects and individual coaching to time-out trajectories aiming to bring the student back into the classroom—these projects and programs are very labor-intensive. To arrive at a less labor-intensive solution, in particular for dealing with school burnout, our objective was to complement the existing programs with an ICT solution. Although we started the development of the tool in the context of supporting informal learning to tackle school burnout, it was found that the tool can also be used for non-formal as well as formal learning. Therefore, the scope was broadened to non-formal and formal learning, but we kept the focus on youngsters, as well as our main objective: a digital environment that could transform learning into a pleasant activity, a seamless part of daily life.

3.2. Design Decisions

To come to the design of our solution, we started with a number of literature studies. To be able to make a grounded decision on the technology to be used, we analyzed the results of the major studies on computer and media use among youngsters. Our focus was on Flanders and Brussels, as youngsters in Brussels were initially our target users. We provide a summary of the findings that are relevant for the design of our solution. The complete report is available online [37]. Youngsters appear to have good general computer skills and experience with the Internet. They also regularly share their own material online. They have a preference for smartphones and use them daily; tablets are used less. Smartphones seem to fit best with their lifestyle, i.e., they often have limited financial resources and spend a large part of their time outside. Therefore, we decided to adopt smartphones for our solution. Internet access is widespread and most youngsters have access to it. Moreover, the availability of the Internet is only increasing: free Wi-Fi is becoming available in public spaces and many youngsters have mobile broadband on their smartphones. Therefore, we opted for an Internet-based application. To keep all options open, we decided to go for a browser application rather than a native app. In addition, this makes the application immediately available on different types of smartphones. Furthermore, the majority of mobile operating systems allow a Web application to be added as an “app” on the start screen. Later on, it is still possible to turn the application into a true (native) app.
The initial problem that we wanted to address was school burnout in order to avoid ESL, so we investigated studies related to these topics. The complete report is available online [38]. In summary, we found that a large variety of factors can play a role: factors from the youngsters’ environment, as well as individual characteristics, but none of these factors seems to be conclusive. Therefore, it is recommended in the literature that prevention programs should rely on a wide body of information related to multiple influencing factors, to provide a more complete picture of the youngster. For this reason, we decided to include an elaborated user profile that should be used to personalize the environment and the presented content towards the situation and characteristics of the youngsters.
Re-activating youngsters with school burnout implies inducing a behavior change. Therefore, we studied theories dealing with behavior change. In this context, we concluded that the Behavioral Model of Fogg [39] is an interesting model for our solution. It identifies three factors that determine whether a person will perform a certain behavior or not: motivation, ability, and trigger. Other studies have also shown that motivation and ability are crucial requirements for behavioral change (e.g., [40]). According to Fogg, motivation can be distilled into three pairs of core motivators: pleasure and pain; hope and fear; and social acceptance and social rejection. These are aspects that could be taken into consideration in the development of the tool. For instance, our objective of making learning a pleasant activity uses pleasure as a motivator. However, social acceptance and social rejection are also usable in our solution.
Ability relates to available resources. Fogg [39] uses the term ability in a broad sense, i.e., available time and/or money; required physical effort and/or cognitive effort; social deviance caused by the behavior; and the familiarity with the behavior. To take this into account, we should carefully adapt the activities to the abilities of a youngster.
The trigger in Fogg’s model is the element that sparks, facilitates, or signals the target behavior. Triggers are most effective when they are provided at the right place and time [39]. This is an argument in favor of keeping track of the youngsters’ performance and activities within the tool in order to be able to give the trigger at the right place and time. However, the type of trigger used, as well as the content of the trigger, also seems to be important. If an app keeps sending notifications that are not considered useful by the receiver, this might annoy the user, and (s)he will start to ignore them [41]. Furthermore, what will trigger one person to perform an action may not trigger another person. This is because different users have different preferences and characteristics [42]. This is an additional argument for using an elaborated user profile in order to also personalize the triggers.
Next, the Hook Model of Eyal [43] is a practical approach for creating new habits or behavior. According to Eyal, a new behavior becomes a habit when the behavior becomes an automatic response to a situational cue or trigger. Unfortunately, turning a new behavior into a habit is hard since, according to Eyal, old habits die hard, while new habits quickly dissipate. Therefore, the Hook Model proposes a cycle through which the user must repeatedly move to gradually create these new habits. A single cycle is composed of four consecutive steps. It starts with a trigger that should be followed by an action from the user. In accordance with Fogg, Eyal also argues that an action will only take place if the user possesses sufficient motivation and ability to perform the action. Therefore, Eyal suggests making the actions as easy as possible, e.g., clicking on a link. In this way, the behavior is very likely to be performed. The next phase in the Hook cycle is the reward phase. Rewards are used to increase the likelihood of repeating the action in the next cycle. Variable rewards (both in time and in size) are recommended to avoid the effect of the reward fading away after a while. The last phase of the cycle is the investment. This phase is typical for the Hook Model. An investment is everything that a user puts into the system (like time and effort) or supplies to the system (like preferences and content), but concepts like “reputation”, “followers”, or “likes” are also considered investments. The more a user invests in a system, the less likely it is that (s)he will stop using the system, as then the investment would be lost. Therefore, when applying the Hook Model, it is necessary to give due consideration to these investments. The goal of repeatedly going through the cycle is to eventually remove the need for an external trigger, which is used in the first step of the cycle, and to replace it with an internal trigger, such as the feeling of boredom or loneliness, or simply curiosity.
The triggers in both the Behavior Model of Fogg and the Hook Model of Eyal aim to persuade the user to perform a certain behavior (Fogg) or an action (Eyal). Persuasion is used to influence decisions or actions taken by human beings by applying certain psychological principles [44]. These principles are based on the fact that most of the time people do not take decisions or actions based on rational arguments, but use shortcuts to save time and energy. Earlier, persuasion was mainly done via human interaction. However, nowadays, ICT is also used to persuade people to do what somebody wants them to do, i.e., by means of so-called persuasive technology [21]. Therefore, we performed a literature review on persuasive technology [45] and decided to use principles of persuasion, such as the principles formulated by Cialdini [46], in our solution.
Furthermore, we opted for using the principles of micro learning to keep learning pleasant. According to this principle, learning takes place from interaction with small chunks of learning content and flexible technologies enabling easy and “on the move” access from anywhere [20]. Such an approach better fits the characteristics of our target audience and will make it easier to integrate it into the daily life of a youngster.

4. Design and Development

We designed and developed a mobile digital environment, called TICKLE, to stimulate youngsters to explore their environment in a meaningful and playful way. It allows youngsters to collect digital cards by performing associated challenges. Following the principle of micro learning, the challenges are small learning activities related to the physical environment, and they can range from non-formal learning activities to formal learning activities.
TICKLE is composed of a front end and a back end (see Figure 1). The front end is the actual learning environment and is intended to be used by the youngsters. The back end contains the authoring environment to create the actual content of the environment, as well as a supervisor module to create and maintain the profiles of the youngsters and to link cards to youngsters, which is needed for the personalized approach. This module also provides learner analytics. This back end is intended for the content creators, teachers, and supervisors of youngsters with learning issues, or anybody that wants to use the application in another context (see also Section 6). All information created and collected by the back end and the front end is stored in the TICKLE database. In this way, the database allows the front end and the back end to communicate.

4.1. Front End

The main component of the front end of TICKLE is the Card Interface. This is the interface for the youngster. In addition, the front end contains a Notifier module, responsible for sending notifications to the users. In this paper, we focus on the Card Interface. The Notifier module, which runs in the background, will be described in detail elsewhere. However, the purpose and the principles of the notifications are described in Section 4.1.2, and a short description of its implementation is given in Section 4.3.

4.1.1. Card Interface

The Card Interface is a location-based mobile application. It displays a (geographical) map on which cards are marked. The available cards are also shown as thumbnails at the top or the bottom of the map. See Figure 2 for an illustration of the map view of this Card Interface. The cards can be collected by the youngster when (s)he is near the physical location associated with a card. To collect a card, the youngster has to open the card (see Figure 3a for an example card) and perform the associated challenge (see Figure 3b for an example challenge). Note that these challenges can be very simple (e.g., taking a picture), but more advanced activities are possible, such as completing a quiz (as in Figure 3b), playing a small game, or doing a traditional learning assignment. According to Fogg’s model, these challenges should be adapted to the ability of the youngster. This implies that cards should be carefully matched with the abilities of the youngster. For this matching, the profile of the youngster is used. This matching is done in the back end of TICKLE.
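How such a proximity check could work can be sketched with a small, hypothetical example (the field names, the 200 m threshold, and the coordinates below are illustrative assumptions, not TICKLE’s actual code): the distance between the device position reported by the browser’s Geolocation API and the card’s location is computed with the haversine formula and compared with the card’s range.

```javascript
// Minimal sketch (not TICKLE's actual implementation): decide whether a card
// is collectable based on the distance between the device and the card.
const EARTH_RADIUS_M = 6371000;

function distanceInMeters(a, b) {
  // Haversine formula for the great-circle distance between two lat/lng points.
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Hypothetical card object with a location and a visibility range in meters.
function isCollectable(card, devicePosition) {
  return distanceInMeters(devicePosition, card.location) <= card.rangeInMeters;
}

// Example usage with the browser Geolocation API (illustrative coordinates).
navigator.geolocation.getCurrentPosition((pos) => {
  const device = { lat: pos.coords.latitude, lng: pos.coords.longitude };
  const card = { location: { lat: 50.8467, lng: 4.3525 }, rangeInMeters: 200 };
  console.log('Card collectable here?', isCollectable(card, device));
});
```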
A card has two sides. The front side of the card provides information about the subject of the card, and/or information needed to perform the challenge, or just an incentive to stimulate the youngster to take up the challenge. Note that the actual challenge remains hidden until the youngster decides to take up the challenge (i.e., clicks on the challenge button). This is done for two reasons. Firstly, we want to trigger the curiosity of the youngster, and secondly, we want to prevent the youngsters from “shopping” for easy challenges and dropping others.
The front side of the card may contain a title, a description, links, and media (pictures and videos). The cards can also mention a time period if the card is only applicable for a specific time (e.g., a card for an event or an exhibition). The cards are labeled with tag(s). A tag is used to indicate the topic or subject of the card, e.g., culture or sport. The tags also indicate the points that can be collected with the card for the related topic. In Figure 3a, the tag is “SPORTS”, and 20 points can be earned with the card. The points are only collected when the challenge is performed correctly.
The points collected by a youngster are maintained in a wallet and can be used by the youngster to obtain (“buy”) rewards. As recommended in the Hook model, the points, as well as the rewards, are variable. The actual rewards depend on the use case, but examples are vouchers and cinema tickets.
The card can be flipped to see information such as the author of the card, related cards, and comments given by other users.
Cards are grouped in so-called card environments and a youngster can be assigned to one or more card environments (this is done in the back end by the supervisor of the youngster). The youngster can inspect which cards (s)he has already collected for an environment, which cards are started (i.e., the challenge is started but not yet finished), which cards are still open, i.e., collectable, and which cards (s)he failed to collect (failed cards) (see Figure 4). (S)he can also inspect his/her wallet and “buy” rewards with the collected points (via the “XP” button in Figure 4). In the future, it will also be possible to connect with other users of the card environment, ask for help, or collaborate to collect a card (via the “FRIENDS” button).

4.1.2. TICKLE’s Hook Cycle

Collecting cards follows the Hook cycle. The Hook cycle starts with the triggering phase. In TICKLE, the triggering is done by means of notifications. Two types of notification are supported: internal notifications and external notifications. Internal notifications are given when the youngster is using the TICKLE app, e.g., to indicate that the youngster is near a location for which one or more suitable cards are available, or that new cards are available. External notifications are used to trigger the youngster when (s)he is not using the app, for example, when, after a while, there are still cards to collect or to finish, or when the youngster has been inactive for a longer period of time. A default schedule for sending these notifications is provided, but it can be adapted for a youngster by the supervisor. The notifications use persuasive messages, which are tailored towards the personality of the youngster. For the moment, the Big Five trait taxonomy [47] is used for this purpose (see Section 4.2.1). A notification message contains a button to quickly go to the card mentioned in the notification or to go to the TICKLE app (depending on the message given in the notification and the type of notification). This implements the action phase of the Hook cycle. Figure 5a shows an example of an internal notification message.
When a youngster performs a challenge, (s)he can succeed or fail to collect the card. Collecting a card is rewarded with points, but the youngster will also receive a (variable) performance feedback message by means of a notification. The type of feedback is tailored towards the personality of the youngster and the number of attempts needed (see Figure 5b for an example). If the youngster fails a challenge, a supportive message is given, also tailored towards the personality of the youngster and depending on the number of attempts already used. Note that the possibility to retry a challenge can be enabled or disabled in the authoring environment.
The investments from the youngsters are currently the time spent, the collected cards, and the collected points. Later on (future work), when connecting to other users is supported, other types of investment will be the friends made and the reputation of the youngster in the community.

4.2. Back End

The back end consists of an Authoring Environment for creating cards and challenges, and a Supervisor Module for creating and maintaining the profiles of the youngsters, linking the card environments and cards to youngsters, managing the sending of notifications, and inspecting the progress of the youngsters. The users of the back end are, on the one hand, (learning) content creators and, on the other hand, professionals who want to use TICKLE in their institute or organization. These can be teachers, professionals supervising youngsters with school burnout, or members of an organization or institute that wants to use a TICKLE environment for some purpose, e.g., a team-building event, a city game, or training.

4.2.1. Authoring Environment

Cards are created by means of the Card Editor. A card consists of a number of fields, such as Image, Title, Description, Links, Videos, Topics, and Time-period. The author can choose which ones to include in the card and then provides the content. See Figure 6a for an illustration. Providing the location of the card on the (geographical) map is mandatory, as is the information on when and where the card should be visible. The visibility of a card can be limited to a certain range, i.e., 50, 200, or 500 m, meaning that the card will only become visible when the user is physically within this range of the location of the card. The alternative is that the card is visible wherever the user is located. The duration of the visibility can also be limited by providing a start date and time and an end date and time, for instance when the card is about an event or a temporary exhibition. If such a time period is not given, the card stays visible (until explicitly removed). It is also possible to indicate that a card should no longer be visible on the map after the card has been collected (or the user failed to collect it). To speed up the creation of similar cards, templates can be created and used.
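To make these card fields concrete, the following is a minimal sketch of what a card created with the Card Editor might look like as a plain JavaScript object; the field names and values are illustrative assumptions and do not reflect TICKLE’s actual data model.

```javascript
// Hypothetical card object reflecting the fields described above
// (illustrative field names; not TICKLE's actual schema).
const exampleCard = {
  title: 'Brussels City Hall',
  description: 'A Gothic landmark on the Grand Place.',
  image: 'https://example.org/city-hall.jpg',
  links: ['https://example.org/more-info'],
  videos: [],
  tags: [{ topic: 'CULTURE', points: 20 }],
  location: { lat: 50.8466, lng: 4.3528 },   // mandatory position on the map
  visibilityRange: 200,                      // 50, 200, or 500 m; null = visible everywhere
  visibleFrom: '2020-03-01T09:00:00Z',       // optional time window
  visibleUntil: '2020-03-31T18:00:00Z',
  hideAfterCollection: true,                 // hide once collected or failed
  challenge: { type: 'quiz', retryAllowed: true } // quiz, open question, hangman, or external widget
};
```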
Currently, TICKLE supports a limited number of challenge types, such as a quiz, an open question, and a hangman game. However, external authoring tools, e.g., BookWidgets [48], can be used to create other types of challenges, such as timeline exercises, riddles, and educational games. Figure 6b illustrates the creation of a quiz challenge within TICKLE.
For some types of applications, it may be useful to help the user find the cards. For this purpose, so-called waypoints can be created, which guide the user in the direction of the cards. They are especially useful when cards are not visible upfront and need to be discovered by the user. Helpful comments can be attached to the waypoints to specify a region of interest for the user. Figure 7 shows the creation of these waypoints in the authoring environment.
In the Card Environment Editor, cards are grouped into a so-called card environment. A card environment is given a name, a description, and an image. It is possible to make the environment public, which means that all TICKLE users will be able to use it. Otherwise, the card environment is private and needs to be assigned to a user to allow that user to see the environment. This is done in the Supervisor Module. A user can have access to multiple card environments. Figure 8 shows the start screen of a user who has access to multiple card environments.

4.2.2. Supervisor Module

In the Supervisor Module, accounts for users can be created and users can be given access to card environments. The information about a user (youngster) is maintained in a user profile. The user profile includes personal information (such as the name, email address, and interests of the youngster), information to steer the sending of notifications (such as the block-off time, i.e., time period(s) in which no notifications should be sent to the user), and the persuasion profile of the user [49], which contains personality information (e.g., the values for the Big Five) that can be used to select the appropriate persuasion techniques for the user. User profiles are created and maintained by means of the Profile Editor. The supervisor can also inspect the performance of his/her users, i.e., the points collected, the cards collected, failed, or started, and their activities in TICKLE, by means of the Learner Analytics module. Note that a supervisor can only manage his/her own users.
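As an illustration of the information maintained in such a user profile, the object below is a hypothetical sketch; the structure, field names, and values are assumptions based on the description above, not TICKLE’s actual schema.

```javascript
// Hypothetical user profile as maintained via the Profile Editor
// (illustrative structure; not TICKLE's actual schema).
const exampleProfile = {
  name: 'Alex',
  email: 'alex@example.org',
  interests: ['music', 'sports'],
  notificationPreferences: {
    // Block-off times: no notifications are sent within these windows.
    blockOffTimes: [{ from: '22:00', to: '08:00' }]
  },
  persuasionProfile: {
    // Big Five trait scores (illustrative 0-1 values), used to select persuasion techniques.
    openness: 0.7, conscientiousness: 0.4, extraversion: 0.6,
    agreeableness: 0.8, neuroticism: 0.3
  }
};
```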
In the Cards Manager Module, a supervisor can manage his or her card environments. For a card environment, (s)he can add and remove cards, and add and remove users. To ensure that the challenges are adapted to the abilities and the interests of a user, (s)he can add or remove individual cards for that user. (S)he can also inspect, per card, who was able to collect the card and who was not (see Figure 9a for an illustration of this last functionality). For the challenges that cannot be assessed automatically (like open questions), the supervisor should inspect the answers given and assess them (see Figure 9b). Furthermore, the Cards Manager allows adding rewards to a card environment.

4.3. Development

The TICKLE application is developed as a responsive browser application. In this way, cross-platform support could be achieved without major effort. Later on, when a stable version is obtained, it is still possible to turn the application into a true (native) app. Furthermore, the majority of mobile operating systems allow a Web application to be added as an “app” on the start screen. The user interface of the front end is optimized for smartphones. For the back end, it is recommended to use a laptop or desktop.
The implementation of TICKLE follows the Progressive Web App (PWA) paradigm, which mimics the user experience of a native mobile application on the Web platform [50]. A PWA is required to be reliable, i.e., to load instantly, to provide limited offline functionality, and also to be fast and engaging.
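As a minimal, hypothetical sketch of these PWA ingredients (the file names and cached resources are assumptions, not TICKLE’s actual code), a service worker can cache the application shell so that the app loads quickly and retains limited functionality offline:

```javascript
// index.js – register the service worker (hypothetical file names).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js');
}

// service-worker.js – cache the application shell for offline use.
const CACHE = 'app-shell-v1';
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/index.html', '/main.js']))
  );
});
self.addEventListener('fetch', (event) => {
  // Serve cached resources first and fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```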
To separate business logic from implementation internals, we chose to follow the Redux architecture [51] as a model. It centralizes the state of the application in one place and provides a one-directional data flow that makes it easy to test complex user interaction procedures. Whenever the user interacts with the user interface, an action is triggered that updates the state of the application and the interface.
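A minimal sketch of this one-directional data flow is given below; the action name and state shape are illustrative assumptions, not TICKLE’s actual code. A user interaction dispatches an action, a reducer computes the new application state, and the interface is re-rendered from that state.

```javascript
import { createStore } from 'redux';

// Illustrative state and action for card collection (not TICKLE's actual code).
const initialState = { openCards: [], collectedCards: [], points: 0 };

function cardsReducer(state = initialState, action) {
  switch (action.type) {
    case 'CARD_COLLECTED':
      // A successfully performed challenge moves the card and adds its points.
      return {
        ...state,
        openCards: state.openCards.filter((id) => id !== action.cardId),
        collectedCards: [...state.collectedCards, action.cardId],
        points: state.points + action.points
      };
    default:
      return state;
  }
}

const store = createStore(cardsReducer);
// A user interaction triggers an action; the state (and hence the UI) updates.
store.dispatch({ type: 'CARD_COLLECTED', cardId: 'card-42', points: 20 });
console.log(store.getState().points); // 20
```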
For the implementation of the user interfaces, we chose React.js [52] as the framework, which provides a declarative way to author HTML components in JavaScript. Moreover, it has its own notion of state, which further helps to separate business logic from pure user interaction.
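The following is a small, hypothetical example of such a declarative component (not TICKLE’s actual code): the flipped/unflipped presentation of a card is pure user-interaction state and is therefore kept in local React state rather than in the central store.

```javascript
import React, { useState } from 'react';

// Illustrative card component (not TICKLE's actual code): local React state
// handles a pure interaction concern, namely flipping the card.
function Card({ title, description, author }) {
  const [flipped, setFlipped] = useState(false);
  return (
    <div className="card" onClick={() => setFlipped(!flipped)}>
      {flipped ? (
        <p>Created by {author}</p>
      ) : (
        <>
          <h3>{title}</h3>
          <p>{description}</p>
        </>
      )}
    </div>
  );
}

export default Card;
```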
Additionally, we used the API layer provided by Firebase [53], which includes a schema-less document database, i.e., Firestore.
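A minimal sketch of how cards could be read from such a schema-less Firestore collection is shown below; the collection and field names are illustrative assumptions, not TICKLE’s actual code, and the snippet uses the namespaced (v8-style) Firebase API.

```javascript
import firebase from 'firebase/app';
import 'firebase/firestore';

// Illustrative Firestore access (hypothetical collection and field names).
firebase.initializeApp({ /* project configuration */ });
const db = firebase.firestore();

// Fetch all cards belonging to a given card environment.
async function loadCards(environmentId) {
  const snapshot = await db
    .collection('cards')
    .where('environmentId', '==', environmentId)
    .get();
  return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}
```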
To implement the Notifier module, a rule engine is used. This was done to make the notification system flexible and to allow us to experiment with different notification strategies. The rules allow for the specification of when, how, and which notification should be given to a user (i.e., they define a notification strategy). This is done by means of a set of conditions, which are evaluated by the rule engine against the activities performed by the user (e.g., login, challenge submitted, location change), the user’s persuasion profile, and the user’s notification preferences. Based on the satisfied rules, the rule engine schedules (when, what, how) the user notifications. We opted for the JSON Rules Engine library [54], which is a lightweight rules engine written in JavaScript. The Firebase Cloud Messaging service is used to send the resulting notifications to the clients.
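The rule format of the JSON Rules Engine can be illustrated with a small, hypothetical notification rule (the facts, thresholds, and helper function below are assumptions, not TICKLE’s actual rules): a reminder is scheduled when the user has been inactive for a while, still has open cards, and is not within a block-off time.

```javascript
import { Engine } from 'json-rules-engine';

// Illustrative notification rule (hypothetical facts and parameters).
const engine = new Engine();
engine.addRule({
  conditions: {
    all: [
      { fact: 'daysSinceLastLogin', operator: 'greaterThanInclusive', value: 7 },
      { fact: 'openCards', operator: 'greaterThan', value: 0 },
      { fact: 'withinBlockOffTime', operator: 'equal', value: false }
    ]
  },
  event: {
    type: 'external-notification',
    params: { template: 'inactivity-reminder' }
  }
});

// Facts would be derived from the user's activities and profile.
// scheduleNotification is a hypothetical helper that would hand the event
// over to the messaging service (e.g., Firebase Cloud Messaging).
engine
  .run({ daysSinceLastLogin: 9, openCards: 3, withinBlockOffTime: false })
  .then(({ events }) => events.forEach((e) => scheduleNotification(e.params)));
```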

5. Evaluations and Demonstrations

During the research and development process, we performed several evaluations and provided different demonstrations. The evaluations were formative evaluations with the aim of improving the application as its development progressed. For that purpose, qualitative research methods are more useful than solely quantitative ones [55]. According to Kaplan and Maxwell [55], qualitative methods can be used throughout the entire development process, as they can help to identify potential problems as they are forming, thereby providing opportunities to improve the system as it develops.
A phased approach was used for this formative evaluation. After each evaluation phase, the app was improved based on the feedback received. The main questions for this formative evaluation were: Is the TICKLE environment, as an adaptive mobile tool with persuasive strategies, (1) usable for youngsters, (2) able to engage youngsters, and (3) able to increase the intrinsic motivation and learning capacity of youngsters?

5.1. Evaluation Phase 1

In the first phase, which was situated in the early design phase, we wanted to receive suggestions and recommendations from supervisors and organizations concerned with school burnout and dropout to ensure that our environment would be usable for this purpose. Next to that, we also wanted to create awareness about the project within the field (step 6 in DSRM).
Individual sessions were held with 11 organizations, all of which work with youngsters. In the sessions, we used open interviews to gather feedback and/or input on specific topics concerning the TICKLE environment (i.e., on the attractiveness, usability, and feasibility of the approach). After a short introduction, the researchers explained the aim, design, and features of the TICKLE environment, after which the current prototype was presented. Next, the TICKLE environment was discussed using some questions as a guide for the interview conversations. The interviews were audio-recorded, transcribed verbatim, and read through repeatedly. The interviews were coded and analyzed in the MAXQDA software package through an iterative process that combined elements of both content and thematic analyses [56].
Findings & actions taken:
  • Potential value of TICKLE for exploring the youngster’s environment.
The organizations we consulted pointed out that a lot of youngsters, among them those who (eventually may) drop out, hold on strongly to the boundaries of their own quarters, thereby missing opportunities to broaden their interests. Thanks to its location-based service and on-the-go approach, the organizations saw merit in TICKLE for encouraging young people to go out and step outside their own direct neighborhoods, enabling them to explore new parts of their neighborhood and the city in general. By offering youngsters different challenges and activities, the application could bring them to locations and places they have not been before and stimulate them to explore activities they did not participate in before.
  • Potential value of TICKLE for engaging youngsters.
The coaches and supervisors from the consulted organizations recommended that the offer in terms of cards, activities, and challenges should be very diverse, so that all youngsters could find something of interest to them. Themes mentioned were sports (e.g., dance and boxing) and music, but also media. In addition to our intention to start from the youngsters’ own interests, the gamification element within TICKLE was considered a positive and appealing way to motivate youngsters to explore more. Based on this feedback, and in order to allow the youngsters to broaden their interests, we decided to provide links to “related” cards on (the back side of) a card.
  • Potential value of TICKLE for informal learning.
Within the environment, the youngster is able to track the cards already opened and collected, the themes discovered, and his/her own growth. It was indicated that this could offer a means of self-reflection. Furthermore, it also provides ownership over one’s own learning process. Another idea that was put forward, and added to the system, was to include soft skills next to topics of interest, and to allow the labeling of cards with soft skill labels too, e.g., responsibility or team spirit. In this way, the youngsters can (possibly unconsciously) practice these soft skills and also collect points for them.
Furthermore, it was indicated that it would be valuable to guide the youngsters through the educational and socio-cultural support and service landscape. This has been taken up by providing a specific card environment dedicated to this purpose. In this card environment, each relevant organization is described by a card, which is positioned on the map by means of a dedicated icon (see also Section 5.3).
  • Other suggestions
It was suggested that the app could support geocaching [57]. Although TICKLE is not explicitly tailored towards geocaching, it is possible to support it by means of the open challenges. In the future, we will investigate how it can be supported in a more explicit way.
Another aspect that was mentioned was the importance of allowing youngsters to connect with each other within the TICKLE environment. The organizations gave several reasons why this would be good to have: to communicate and connect, to inspire and trigger each other, to collaborate and meet in real life, and to help and learn from each other. This valuable suggestion will be implemented later (future work).
It was also suggested that the app should provide a help button that the youngster could use when (s)he gets stuck on a challenge. There are different possibilities for implementing such a help functionality. It will be considered in future work.
Another suggestion was to introduce leaderboards. We did not take up this suggestion as such, because leaderboards are in general only motivating for those at the top of the leaderboard; for those at the bottom, they may harm self-esteem and motivation. However, we will reconsider it later as part of the personalization of the persuasive strategy.

5.2. Evaluation Phase 2

Within phase two, the developed tool was piloted and evaluated in real-life settings. The main aim of phase two was to receive feedback and suggestions from the target group, i.e., youngsters, in order to adapt or redesign the environment accordingly. More specifically, we wanted to know (1) if the app was attractive and usable, (2) if its use was motivating, and (3) whether the youngsters would be willing to use the app on a longer-term basis. On purpose, we did not focus on youngsters with school burnout but on youngsters in general. This decision was taken in order to ensure that the app would be usable by youngsters in general and for a broader goal than school burnout.
In this phase, two evaluations took place, both with the youth organization the “Vlaamse Dienst Speelpleinwerking” (VDS) [58], translated as the Flemish Service for Playground Work. VDS organizes so-called playgrounds (i.e., play days) for children during school holidays. They also organize courses for youngsters who want to become animators for the playgrounds.
In both evaluations, the participants were animators of the organization. We informed them that they were participating in an evaluation, and they were informed about their rights and agreed to participate.

5.2.1. Evaluation 1 of Phase 2

VDS was looking for a game to improve the cooperation between the animators of the playgrounds and to stimulate their creativity. In order to do so, we proposed they try out the TICKLE environment. For this evaluation, cards were created with challenges related to the operation of a playground. An example challenge was to build a spaceship. The cards and challenges were created by experienced instructors from the organization. Challenges could be carried out individually or collaboratively with other animators. The goal for the participants was to carry out the challenge/activity to the best possible standard and to collect as many cards as possible.
For this evaluation, the cards were not placed on a physical map but on a fantasy map, i.e., a treasure map (see Figure 10), as all challenges were located at the playground’s location. All cards were visible and labeled with a topic, as well as with a difficulty degree: easy, medium, or difficult.
The evaluation was done at two different playground locations. In principle, all animators at those playground locations could participate. The animators were introduced to the TICKLE environment in small groups by means of an oral presentation and a hands-on demo. They also received a short manual on paper. They could use the environment for three weeks. The youngsters had to use their own smartphones. At that time, only recent Android smartphones were well supported. Youngsters who did not have such a device could use the application on a laptop or desktop computer through a Web browser. Afterwards, feedback was invited through an online questionnaire. In addition to some questions related to the participant (age, background), this questionnaire contained questions from the short version of the User Experience Questionnaire (UEQ) [59,60], as well as questions for testing whether the participants understood specific features of TICKLE, questions about the look and feel of the cards, questions about the challenges, and questions about the original goal (i.e., stimulating the cooperation and creativity of the animators). These questions used a Likert scale. The participants could also leave comments and suggestions for improvement.
Results: In total, 20 animators filled out the questionnaire: nine participants were 16 years old; the others were between 17 and 25 years old. Nine participants had no, or only one year of, experience as an animator. Concerning the questions from the UEQ, the hedonic quality (stimulation and novelty) scored higher (1.5) than the pragmatic quality (attractiveness, efficiency, perspicuity, dependability) (1.00). According to the UEQ handbook, these scores represent a positive evaluation. The results on the questions testing the understanding of specific features, as well as those about the look and feel, were mixed, indicating that some improvement would be needed on these aspects: eight of the 20 participants (40%) gave a score higher than 4 for attractiveness (where 1 was attractive and 7 not attractive); five participants (25%) gave a score of 4, while the scores of the other seven participants were between 1 and 3. In general, we received positive results about the challenges. For the fun aspect, all scores were between 1 and 4 (where 1 was fun and 7 boring), with 20% (four participants) giving a score of 1 and 40% (eight participants) a score of 2. All scores for being doable (where 1 was not doable and 7 very doable) were 4 or higher, with 45% (nine participants) giving a score of 5 or 6. The results on the questions related to the original goal were positive: 85% (17 participants) indicated that the challenges were inspiring, while the other 15% (three participants) replied “maybe”; 70% (14 participants) indicated that the app could contribute to a better collaboration, while the other 30% (six participants) answered “maybe”; everybody agreed that the challenges could contribute to a higher quality of the playground activities.
Suggestions and comments were provided. Comments about the challenges provided useful feedback about the type of challenges youngsters are interested in. The other comments and suggestions were about improving the interface, the info presented on the cards, and some aspects of the functionalities. Additionally, usability issues with specific smartphones and browsers were mentioned.

5.2.2. Evaluation 2 of Phase 2

For the second evaluation, the app was used for a kind of city game, restricted to one long street, in the context of a “start of the year” event of the VDS. On the TICKLE map, cards with challenges were spread along the street (see Figure 11 for an illustration). Participants had to find the cards, which only became visible on the map when the participant (i.e., the smartphone) physically came in the vicinity of the location of a card. Each challenge that was well executed yielded points. The aim was to collect as many points as possible. The cards and challenges were created by the organizers of the event.
For this evaluation the youngsters had to use their own smartphones. Just as for the previous evaluation, only recent Android smartphones were well supported. However, as the street game was done in small groups and only one smartphone was needed per group, enough suitable smartphones were available. Afterwards, feedback from the animators was invited through an online questionnaire. This questionnaire included the same UEQ questions as the first evaluation, as well as specific questions about the way the street game was set up in TICKLE, about the look and feel of the cards, and about the challenges. These questions also used a Likert scale (1 to 7). The participants could again leave comments and suggestions for improvement.
Results: In total, 18 animators filled out the questionnaire. In this evaluation, the participants were young adults: 18 years old or older; one person was older than 26. Concerning the questions from the UEQ, the results were in line with the previous evaluation: the hedonic quality (stimulation and novelty) scored higher (1.34) than the pragmatic quality (attractiveness, efficiency, perspicuity, dependability) (1.04). The results on the questions about the setup of the game confirmed our setup: 88% (16 participants) agreed that keeping the cards hidden until close to the location made the game exciting; however, 39% (seven participants) would have preferred an alternative in which all cards are visible right from the start, with the challenges kept hidden, or in which the functionality to submit them is only provided when near the location. In this evaluation, the results about the look and feel were mixed (11 of the 18 participants (61.1%) gave a score higher than 4, where 1 was attractive and 7 not attractive). The ease of entering answers also received mixed scores (nine of the 18 participants (50%) gave a score higher than 4, where 1 was easy and 7 cumbersome). We received positive results about the challenges. For the fun aspect, all scores were between 1 and 4, with 38.9% giving a score of 2 (where 1 was fun and 7 boring). All scores for being doable (where 1 was not doable and 7 very doable) were 3 or higher, with 50% giving a score of 5.
Comments concerned the available time for the game (which was found to be too short), the data consumption and battery consumption (which were both considered too high), and small usability problems and bugs.

5.3. Evaluation Phase 3

In this phase, evaluations in real-life settings were also carried out, but this time the focus was on youngsters in some way related to the issue of school burnout and school dropout. Two evaluations took place, both with an organization dealing with youngsters who are in a problematic situation, i.e., Try-out [61] and CAD Limburg [62]. Try-out offers activities that allow youngsters with school issues to discover their talents and interests, and in this way tries to reconnect them with regular school or work; CAD Limburg offered a Reboot Camp [63] for young gamers at risk, who are often also at risk of school dropout.
In both evaluations, the participants were informed that they were participating in an evaluation, they were informed about their rights, and agreed to participate.
Due to the problems experienced with the broad range of smartphones used by youngsters in evaluation phase two, we decided to provide the participants with a smartphone to avoid usability problems due to incompatibility issues. The smartphones were Android devices. Sufficient mobile data was provided, as this was reported as an issue in the previous evaluation phase. We realize that those issues should be resolved at a later stage, but we wanted to avoid issues with smartphones influencing the results of the evaluations.

5.3.1. Evaluation 1 of Phase 3

This evaluation was done in the context of a day activity organized by the organization Try-out. For this evaluation, a city game was created with TICKLE. The location was the center of Brussels, and the cards and associated challenges had the aim of letting participants explore interesting places in the city and find out more about these places (see Figure 12 for a screenshot of the card interface). Variable amounts of points could be collected with the cards. The goal was to collect as many points as possible. There was no predefined route; the participants had to develop their own strategy to collect as many points as possible in the given time (2 h). They played the game in groups of two to three, and each group was accompanied by a supervisor from the organization. Each participant received a smartphone with mobile data and a short manual on paper (three pages). We had six participants in total.
Afterwards, the youngsters were asked to fill out an online questionnaire. The questionnaire included questions about their age and interests, as well as questions related to the user experience. This time, we did not follow the UEQ completely, because the way the questions in that questionnaire are formulated was not suitable for the youngsters who would participate in the evaluation (see also Section 6, Discussion). Furthermore, questions about the challenges, the look and feel, and the information on the cards were asked. In this evaluation, we also measured whether the app was able to engage the youngsters, and whether it was able to increase their motivation for learning more about Brussels. All these questions used a Likert scale. In this evaluation, the questions were formulated as statements, and a scale from 1 to 5 was used to indicate the level of agreement with each statement: 1 being “strongly disagree” and 5 being “strongly agree”. The participants could again leave comments and suggestions for improvement.
Results: All six participants filled out the questionnaire. They were 14 and 15 years old. Given the small number of participants, we did not use statistics to process the results. The results on the user experience were rather neutral. In this evaluation, the participants found the look and feel more attractive (note that after the previous evaluation phase the interface was improved considerably): two participants (33.3%) agreed with the statement that the cards were attractive with a score of 3, three participants (50%) with a score of 4, and one participant (16.7%) with a score of 5; for the statement that the cards look nice, the scores 2, 4, and 5 were each given by two participants (33.3%). They liked the challenges (four participants (66.7%) confirmed this with a score of 4 and two (33.3%) with a score of 5), did not find them difficult to understand (three participants (50%) strongly disagreed (score 1) with the statement that the challenges were difficult to understand, while the other three participants gave a score of 2, 3, and 4, respectively), found them very doable (three participants agreed with this statement, two with a score of 4 and one with a score of 5; two participants were neutral (score 3), and one gave a score of 2), and found them varied (two participants (33.3%) agreed with a score of 4, and four (66.7%) gave a score of 5). They appreciated that the challenges addressed a range of areas of interest (50% agreed with a score of 4, and 50% gave a score of 5). They all found the city game with TICKLE a nice way to get to know Brussels (four participants (66.7%) agreed with a score of 5, while the two other participants gave a score of 3 and 5, respectively); four of the six participants recognized that they learned new things; and 50% indicated that they would use the app again (with a score of 4), while the other 50% gave a score of 3 for this statement. However, the results were mixed concerning the questions measuring a change in their motivation for learning more about Brussels or other domains. Regarding the statement of whether they would like to learn more about Brussels, the distribution of the scores was as follows: one participant gave a 1, two participants gave a 2, two participants gave a 3, and one participant gave a 4. For the statement of whether they would like to learn more about other domains, two participants gave a score of 4, and one participant each gave a score of 1, 2, 3, and 5. Few comments were given, and they mainly concerned small usability issues.

5.3.2. Evaluation 2 of Phase 3

This evaluation was done in the context of the Reboot Camp organized by the organization CAD Limburg. The camp lasted one week (5 days). For this evaluation, cards were created for the different activities offered during the camp, so that the youngsters could use TICKLE as a kind of agenda. Each day, they could see the activities of that day by means of cards. A card only became visible on the day of its activity and contained information about the activity. To collect a card, the youngsters had to complete a small challenge related to the activity, ranging from doing a quiz to writing a short reflection about the activity. In this way, points could be collected. There were also cards with general information, such as a card with a short manual, a card with the rules of the camp, a card about the camp’s location, and a card with a link to the questionnaire. See Figure 13 for a screenshot of the card interface used for this evaluation.
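To make the idea of date-gated cards concrete, the following sketch shows how such a visibility rule could be expressed with json-rules-engine [54]. It is only an illustration: the fact name, the date, and the card identifier are assumptions made for this example and do not reproduce the actual rule definitions used in TICKLE.

```typescript
// Illustrative sketch (not TICKLE's actual rules): a card becomes visible
// only on the day of its activity. The fact name 'today', the event type
// 'show-card', and the card id are assumptions for this example.
import { Engine } from 'json-rules-engine';

const engine = new Engine();

engine.addRule({
  conditions: {
    all: [
      // The card is shown only when today's date equals the activity date.
      { fact: 'today', operator: 'equal', value: '2019-07-03' }
    ]
  },
  event: { type: 'show-card', params: { cardId: 'kayak-activity' } }
});

// Run the engine with the current date (YYYY-MM-DD) as a fact.
engine
  .run({ today: new Date().toISOString().slice(0, 10) })
  .then(({ events }) => {
    events.forEach(e => console.log('Visible card:', e.params?.cardId));
  });
```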
The seven youngsters received an introduction with a hands-on demo. They each received a smartphone with mobile data. At the request of the organization, we restricted the use of the smartphones to TICKLE, consulting the Web, and taking pictures. Unfortunately, the supervisors of the camp decided that the youngsters could only use the smartphones at certain moments during the day. On the last day of the camp, the participants were supposed to fill out an online questionnaire, an activity that was also offered through a card. The questionnaire was similar to the one used for the city game: the questions were formulated as statements, and a scale from 1 to 5 was used to indicate the level of agreement with each statement: 1 being “strongly disagree” and 5 being “strongly agree”. In this evaluation, we also asked questions about the notifications provided in TICKLE. The participants could again leave comments and suggestions for improvement.
Results: Although we explicitly asked the organization to encourage the youngsters to fill out the online questionnaire on the last day of the camp, only three of the seven youngsters did so. They were 14, 15, and 18 years old. These participants were positive about the app (measured by means of different questions), found it easy to use (two participants agreed with a score of 4, one with a score of 5), and considered it a nice way to discover new things (one score of 3, one of 4, and one of 5). They were positive about the use of notifications for letting them know which activities would take place (two scores of 4 and one of 3), but they were divided about the usefulness of notifications informing them about the points collected (one score of 1, one of 3, and one of 4). The information on the cards and their look and feel were evaluated positively (scores of 3 (one participant) and 4 (two participants) for the information, and a score of 4 (three participants) for the look and feel). These participants also liked the challenges (one score of 4 and two of 5), found them doable (three scores of 4) and varied (one score of 3 and two of 4), but, on average, found them somewhat difficult to understand (scores of 2, 3, and 4). They found TICKLE a nice way to get to know the activities (scores of 3, 4, and 5), recognized that they learned new things (two scores of 4 and one of 5), and indicated that they would use it again (with varying degrees of certainty: scores of 3, 4, and 5). For the questions used to measure a change in their motivation for learning more about new areas of interest or activities, the results were mixed: two participants showed a clearly increased motivation (score of 5), while one did not (score of 2). No comments or suggestions for improvement were given.
Because of the low number of responses, we also checked the activity logged in the card environment. We observed a slight decrease in activity over the week, and the participants performed the challenges more often for the cards about outdoor activities than for those about indoor activities.
Furthermore, we received feedback from the organizers of the camp. They mentioned that the youngsters indicated that they enjoyed working with the app; the youngsters often asked in the morning how many challenges they had to perform. However, they also noticed that, as the camp progressed, some youngsters were less motivated to complete all the challenges. They reported that the app motivated some of the youngsters to "do it right", e.g., they searched for the right information online. However, the organizers also mentioned that the circumstances were not ideal for the evaluation: given the multitude of activities throughout the camp, it was not easy to follow up on all the challenges, and because of the strict policy on the use of cell phones during the camp, the time available for using TICKLE was limited.

5.4. Demonstrations

In the context of the evaluations, different card environments were created that demonstrate the possibilities of the application. We used TICKLE to create a street game and a city game, to create a playful environment that stimulates the collaboration and creativity of animators, and to inform and support reflection during a camp for youngsters at risk of game addiction. These use cases cover informal and non-formal learning.
Next, we used TICKLE in the context of formal learning, i.e., to stimulate the processing of the course material during the semester for one of our university courses. Cards were created about topics in the course, and the associated challenges had the aim of letting students test their knowledge about the topic. By collecting cards, the students could collect points. The cards became available over the course of the semester, and the students were notified by email when new cards became available. This demonstration was used to test the personalized notification system. We asked the students (on a voluntary basis) to fill out an existing online questionnaire in order to determine their personality in terms of the Big Five taxonomy [64] and to send us the results. Based on this information, the students received notification messages tailored to their personality.
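As an illustration of what such tailoring could look like, the sketch below selects a notification text based on a student’s dominant Big Five trait. The trait scores, the selection rule, and the message texts are assumptions made for this example; they do not reproduce the actual persuasion profiles or messages used in TICKLE.

```typescript
// Illustrative sketch only: pick a notification style based on the dominant
// Big Five trait. Thresholds, wording, and function names are assumptions.
type BigFive = {
  openness: number;
  conscientiousness: number;
  extraversion: number;
  agreeableness: number;
  neuroticism: number; // scores normalized to the range 0..1
};

function notificationFor(cardTitle: string, profile: BigFive): string {
  // Determine the trait with the highest score.
  const [dominant] = Object.entries(profile).sort(([, a], [, b]) => b - a)[0];

  switch (dominant) {
    case 'conscientiousness':
      return `A new card "${cardTitle}" is available. Complete it to stay on schedule.`;
    case 'extraversion':
      return `"${cardTitle}" is out! See how many fellow students already collected it.`;
    case 'openness':
      return `Curious about something new? The card "${cardTitle}" just appeared.`;
    default:
      return `A new card "${cardTitle}" is waiting for you. Give it a try at your own pace.`;
  }
}

// Example usage with an (invented) profile of a highly extraverted student.
console.log(notificationFor('Design Patterns Quiz', {
  openness: 0.6, conscientiousness: 0.5, extraversion: 0.9,
  agreeableness: 0.4, neuroticism: 0.3,
}));
```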
In addition to these card environments, we also created an environment that provides an inventory of all organizations related to school burnout or ESL located in Brussels. This card environment contains 41 cards: one card for each location of an organization (some organizations have more than one office in Brussels), positioned on the map of Brussels (this is the card environment shown in Figure 2). The card of an organization contains the following information: the name of the organization, a short description, the address, a link to the website and Facebook page, and contact information. These cards can be collected without the need to perform a challenge.
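To illustrate, such an organization card could be represented by a record like the one sketched below. The field names and types are assumptions derived from the information listed above; they are not the exact schema used in TICKLE’s back end.

```typescript
// Illustrative sketch of an organization card record; field names are
// assumptions, not TICKLE's actual data model.
interface OrganizationCard {
  id: string;
  name: string;                             // name of the organization
  description: string;                      // short description
  address: string;
  website?: string;
  facebook?: string;
  contact?: string;                         // e-mail or phone number
  location: { lat: number; lng: number };   // position on the map of Brussels
  challenge: null;                          // collectable without a challenge
}

// Hypothetical example entry.
const exampleCard: OrganizationCard = {
  id: 'org-001',
  name: 'Example organization',
  description: 'Supports youngsters at risk of early school leaving.',
  address: 'Example street 1, 1000 Brussels',
  website: 'https://example.org',
  location: { lat: 50.8503, lng: 4.3517 },
  challenge: null,
};
```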

6. Discussion, Limitations and Further Work

In general, in the context of the formative evaluations, we obtained positive results and received useful feedback to improve and extend the application. Based on the results, we can conclude that, in the context of these formative evaluations, the app is usable for youngsters and able to engage them, and we see indications that it may be able to increase the intrinsic motivation and learning capacity of youngsters. However, the evaluations were limited in the number of participants and the contexts in which they were performed, and they had a limited goal, i.e., checking the usability of the app for youngsters and its ability to engage them. To confirm the results and to verify whether the app can increase the intrinsic motivation and learning capacity of youngsters, summative and longitudinal evaluations are needed. Such evaluations are under preparation. For these evaluations, questionnaires other than UEQ will be considered. While UEQ aims to measure the user experience in general, GAMEX is a new instrument (i.e., questionnaire) for measuring gameful experience, i.e., the user experience of engaging with gamified applications [65]. Although developed in the context of customer behavior, it may also be usable for education. PLEXQ [66] is another questionnaire, focusing on measuring playfulness; it targets a wide range of products, including mobile apps. To measure the effect on motivation for learning, we will collaborate with our research partners from the educational domain.
In addition to the positive results obtained from the formative evaluations performed, we encountered several issues that are worth mentioning:
  • Performing formative evaluations in real-life settings is challenging.
First of all, we found that our target audience, i.e., youngsters, is very demanding, especially concerning look, feel, and usability. Although we always explained clearly that the app under evaluation was research work and still required improvements, most of the critique concerned the look and feel and small usability issues. Being able to quickly start and resume the app was also very important to them.
Next, there is a broad range of smartphones with different screen sizes and browser versions. It turned out to be impossible, in the context of a research project, to ensure that the application ran smoothly and without issues on all possible devices used by youngsters. For that reason, we decided to provide smartphones for the evaluations in the third phase. However, for the longitudinal evaluation this may cause some bias: when the youngsters have to use an additional device next to their own smartphone, they may find this annoying, and it will counteract efforts to make the app “a seamless part of daily life”.
Lastly, when performing evaluations in real-life settings, it is not always possible to have full control over the setup. Even after careful preparation, unexpected issues may show up during the evaluation. For instance, this happened during the Reboot Camp evaluation: although we limited the use of the mobile phones to the TICKLE app (so the youngsters could not use them to make phone calls, download or play games, or use other apps) and discussed this with the organizers in advance, it turned out that the strict policy for using mobile devices was also applied to the mobile phones given to the youngsters for the evaluation. During the camp, it was probably easier to simply ban the use of any mobile phone during the day, and because we were not allowed to be present during the camp week, we could not intervene.
  • Questionnaires for youngsters.
Most available questionnaires on usability and user experience are designed for adults with good literacy. For the evaluations in phase two, we used UEQ in the native language of the participants, but simplified the language somewhat, because a pilot with youngsters of the same age indicated that some terms were still too difficult to understand. When the youngsters were filling out the questionnaire, we also noticed that they had problems using the Likert scale, especially when the lowest score represented a good result and the highest a bad result. For instance, they had no problem scoring a statement like “the system was (1) easy to learn … (7) difficult to learn”, but a statement like “the system was (1) motivating … (7) demotivating” caused misunderstanding. Apparently, in their education, they were used to associating a high score with a good result. The participants in phase three were even younger, and it was known that their literacy could be an issue, so we simplified the language in the questionnaire even more and used statements that could all be answered with the same scale: “strongly disagree” to “strongly agree”. As much as possible, we tried to avoid negatively formulated statements. In the first evaluation of phase three, the youngsters could ask for an explanation while filling out the questionnaire, but in general they did not use this opportunity. For the summative longitudinal evaluations, we will pay even more attention to the questionnaire, pilot it several times, and, if possible, use an existing validated questionnaire tailored towards children/youngsters and suitable for our purpose.
The evaluations and demonstrations showed that, although we started with the aim of tackling school burnout, the application is usable in a broader context, e.g., for team building activities, for providing information, and for non-formal learning activities, as well as for formal learning activities. Although we have not yet tested this, we see more possible application domains, e.g., tourism, museums, shopping opportunities in a city, event announcements, social engagement, etc. In addition, it is our opinion that the app could be used by a broader audience than youngsters. For example, we are currently setting up an environment with TICKLE for civic engagement focused on assisting the elderly [67].
Most of the feedback and suggestions received during the formative evaluations have already been taken into consideration. Still to be considered, and planned as future work, are:
  • Allowing youngsters to connect with each other within the TICKLE environment and to collaborate on the collection of cards.
  • Providing a help function that a youngster can use when (s)he is stuck on a challenge.
  • Providing the possibility to use leaderboards as a persuasive technique where appropriate.
  • Allowing youngsters to create cards themselves. This will increase the youngsters’ investment in the environment and caters to their preference for sharing their own material online. The functionality is already available, but a review procedure needs to be added to prevent youngsters from creating cards that are not acceptable.
  • Adding an intelligent matching algorithm to suggest cards to youngsters automatically; currently, this needs to be done manually by the supervisor of a youngster (see the sketch after this list for one possible direction).
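As an indication of the direction we have in mind for the last item, the sketch below ranks uncollected cards by the overlap between their tags and a youngster’s interests. The data model and the scoring heuristic are assumptions made for this illustration; they are not the matching algorithm that will necessarily be implemented.

```typescript
// Illustrative sketch of an interest-based matching heuristic; the tag-based
// data model and the scoring are assumptions for this example.
interface CardMeta { id: string; tags: string[]; }
interface YoungsterProfile { interests: string[]; collected: string[]; }

function suggestCards(cards: CardMeta[], profile: YoungsterProfile, max = 3): CardMeta[] {
  return cards
    .filter(c => !profile.collected.includes(c.id))        // skip already collected cards
    .map(c => ({
      card: c,
      // Score = number of tags that overlap with the youngster's interests.
      score: c.tags.filter(t => profile.interests.includes(t)).length,
    }))
    .filter(s => s.score > 0)                              // keep only relevant cards
    .sort((a, b) => b.score - a.score)                     // most relevant first
    .slice(0, max)
    .map(s => s.card);
}

// Example: a youngster interested in music and sports who already collected c3.
const suggestions = suggestCards(
  [
    { id: 'c1', tags: ['history', 'architecture'] },
    { id: 'c2', tags: ['music', 'festival'] },
    { id: 'c3', tags: ['sports', 'outdoor'] },
  ],
  { interests: ['music', 'sports'], collected: ['c3'] }
);
console.log(suggestions.map(c => c.id)); // ['c2']
```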

7. Conclusions

In this paper, we presented TICKLE, a playful learning environment for youngsters. The environment is a mobile location-based smartphone application that offers youngsters an interactive environment with which they can explore their surroundings based on their interests or needs. The environment offers cards that the youngsters can collect by performing small challenges. Typically, the cards are associated with physical locations, and the challenges are related to those locations. In this way, TICKLE promotes the playful exploration and discovery of information in a physical environment. However, the environment can also be used with a fictitious environment, such as a treasure map.
We explained the research steps we followed to establish the design of the system and justified the decisions made. We presented an overview of the system and its functionalities. The system consists of a front end, which is the actual playful environment, and a back end, which offers an authoring environment for creating the content and a supervisor module for managing and monitoring the performance of the users and the card environments. Next, we discussed the evaluations performed. We opted for an elaborate set of formative evaluations to ensure good usability before performing summative and longitudinal evaluations. Such evaluations, which will also measure the learning impact, are under preparation. Although the results of the different formative evaluations were positive, the feedback received was used to improve the system considerably, and further suggestions have been noted as future work.
Although the environment was developed for dealing with school burnout, it is also usable in other contexts and for different purposes. We have demonstrated that TICKLE can be used for a large range of use cases: for team building activities, for providing information, and for non-formal learning activities, as well as in the context of regular education (formal learning). This shows that we achieved our main goal, i.e., to create an engaging mobile environment that could be useful for informal learning, as well as for other forms of learning, i.e., non-formal and formal learning. Although this still needs to be verified, we see more possible application domains, e.g., tourism, museums, indicating shopping opportunities in a city, event announcements, social engagement, etc.

Author Contributions

Conceptualization, O.D.T. and J.M.; methodology, O.D.T. and J.M.; software, J.M. and D.B.; validation, O.D.T., J.M., R.L. and D.B.; investigation, J.M., R.L. and D.B.; writing—original draft preparation, O.D.T., J.M., R.L. and D.B.; writing—review and editing, O.D.T., J.M., R.L. and D.B.; visualization, D.B. and J.M.; supervision, O.D.T.; project administration, O.D.T.; funding acquisition, O.D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the European Regional Development Fund (ERDF) and the Brussels-Capital Region within the framework of the Operational Program 2014-2020 through the ERDF-2020 project ICITY-RDI.BRU.

Acknowledgments

The authors would like to acknowledge the support received from the company Telenet bvba through the donation of material and services needed for the evaluations. We would also like to thank all the members of the organizations involved in our evaluations for their cooperation and contributions, and all participants of the evaluations for their time and feedback.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the application; in the setup of the evaluations; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Marsick, V.J.; Watkins, E.K. Informal and Incidental Learning. New Dir. Adult Contin. Educ. 2001, 89, 25–34. [Google Scholar] [CrossRef]
  2. Maarschalk, J. Scientific literacy and informal science teaching. J. Res. Sci. Teach. 1998, 25, 135–146. [Google Scholar] [CrossRef]
  3. Tamir, P. Factors associated with the relationship between formal, informal, and nonformal science learning. J. Environ. Educ. 1991, 22, 34–42. [Google Scholar] [CrossRef]
  4. Reischmann, J. Learning ‘en passant’: The Forgotten Dimension Andragogy. In Proceedings of the Conference of the American Association of Adult and Continuing Education, Hollywood, FL, USA, 23 October 1986. [Google Scholar]
  5. Pintrich, P.R. Understanding self-regulated learning. New Dir. Teach. Learn. 1995, 199, 3–12. [Google Scholar] [CrossRef]
  6. Grant, M.M. Using mobile devices to support formal, informal & semi-formal learning: Uses and implications for teaching & learning. In Emerging Technologies for STEAM Education; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  7. Eshach, H. Bridging In-school and Out-of-school Learning: Formal, Non-Formal, and Informal Education. J. Sci. Educ. Technol. 2007, 16, 171–190. [Google Scholar] [CrossRef]
  8. Lee, B. Social Media as a Non-formal Learning Platform. Procedia Soc. Behav. Sci. 2013, 103, 837–843. [Google Scholar] [CrossRef] [Green Version]
  9. Clough, G. Geolearners: Location-Based Informal Learning with Mobile and Social Technologies. IEEE Trans. Learn. Technol. 2010, 3, 33–44. [Google Scholar] [CrossRef]
  10. Dabbagh, N.; Kitsantas, A. Internet and Higher Education Personal Learning Environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. Internet High. Educ. 2012, 15, 3–8. [Google Scholar] [CrossRef] [Green Version]
  11. Jeng, Y.L.; Wu, T.T.; Huang, Y.M.; Tan, Q.; Yang, S.J. The add-on impact of mobile applications in learning strategies: A review study. Educ. Technol. Soc. 2010, 13, 3–11. [Google Scholar]
  12. Clement, J. Social Media—STATISTICS & Facts. Available online: https://www.statista.com/topics/1164/social-networks/ (accessed on 4 December 2019).
  13. McCord Museum. MTL Urban Museum. Available online: https://www.musee-mccord.qc.ca/en/mtl-urban-museum/ (accessed on 16 December 2019).
  14. Bernsmann, S.; Croll, J. Lowering the threshold to libraries with social media: The approach of ‘Digital Literacy 2.0’, a project funded in the EU Lifelong Learning Programme. Libr. Rev. 2013, 62, 53–58. [Google Scholar] [CrossRef]
  15. Gikas, J.; Grant, M.M. Mobile computing devices in higher education: Student perspectives on learning with cellphones, smartphones & social media. Internet High. Educ. 2013, 19, 18–26. [Google Scholar]
  16. Naismith, L.; Lonsdale, P.; Vavoula, G.; Sharples, M. Literature Review in Mobile Technologies and Learning. Available online: https://telearn.archives-ouvertes.fr/hal-00190143 (accessed on 20 August 2018).
  17. Kangas, M. Creative and playful learning: Learning through game co-creation and games in a playful learning environment. Think. Ski. Creat. 2010, 5, 1–15. [Google Scholar] [CrossRef]
  18. Kangas, M.; Ruokamo, H. Playful Learning Environments: Effects on Children’s Learning. In Encyclopedia of the Sciences of Learning; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  19. Peffers, K.; Tuunanen, T.; Rothenberger, M.A.; Chatterjee, S. A Design Science Research Methodology for Information Systems Research. Source J. Manag. Inf. Syst. 2007, 24, 45–77. [Google Scholar] [CrossRef]
  20. Hug, T. Didactics of Microlearning; Waxmann Verlag: GmbH, Germany, 2007. [Google Scholar]
  21. Fogg, B.J. Persuasive technology: Using computers to change what we think and do. Ubiquity 2002, 2002, 5. [Google Scholar] [CrossRef] [Green Version]
  22. Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From Game Design Elements to Gamefulness. In Proceedings of the 15th International Academic MindTrek Conference on Envisioning Future Media Environments—MindTrek, Tampere, Finland, 28–30 September 2011. [Google Scholar]
  23. Pham, X.L.; Chen, G.D. PACARD: A New Interface to Increase Mobile Learning App Engagement, Distributed Through App Stores. J. Educ. Comput. Res. 2018, 57, 1–28. [Google Scholar] [CrossRef]
  24. Edge, D.; Whitney, M.; Charlotte, C. MemReflex: Adaptive Flashcards for Mobile Microlearning. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, San Francisco, CA, USA, 21–24 September 2012. [Google Scholar]
  25. Epignosis. Talentcards. Available online: https://www.talentcards.com/ (accessed on 6 December 2019).
  26. TELMA. S_U+G Serious Urban Game. Available online: http://sug-platform.be/ (accessed on 16 December 2019).
  27. Avouris, N.; Yiannoutsou, N. A Review of Mobile Location-based Games for Learning across Physical and Virtual Spaces. J. Univers. Comput. Sci. 2012, 18, 2120–2142. [Google Scholar]
  28. Toscos, T.; Faber, A. Chick Clique: Persuasive Technology to Motivate Teenage Girls to Exercise. In Proceedings of the CHI’06 Extended Abstracts on Human Factors in Computing Systems, Montréal, QC, Canada, 22–27 April 2006. [Google Scholar]
  29. Kimura, H.; Ebisui, J.; Funabashi, Y.; Yoshii, A.; Nakajima, T. iDetective: A Persuasive Application to Motivate Healthier Behavior Using Smart Phone. In Proceedings of the 2011 ACM Symposium on Applied Computing, TaiChung, Taiwan, 21–25 March 2011. [Google Scholar]
  30. Tikka, P.; Woldemicael, B.; Oinas-kukkonen, H. Building an App for Behavior Change: Case RightOnTime. In Proceedings of the Fouth International Workshop on Behavior Change Support Systems (BCSS 2016), Salzburg, Austria, 5 April 2016. [Google Scholar]
  31. Hevner, A.R.; March, S.T.; Park, J.; Ram, S. Design Science in Information Systems Research. MIS Q. 2004, 28, 75–105. [Google Scholar] [CrossRef] [Green Version]
  32. Offermann, P.; Levina, O.; Schönherr, M.; Bub, U. Outline of a Design Science Research Process. In Proceedings of the 4th International Conference on Design Science Research in Information Systems and Technology, Philadelphia, PA, USA, May 2009. [Google Scholar]
  33. Salmela-Aro, K.; Kiuru, N.; Leskinen, E.; Nurmi, J.E. School Burnout Inventory (SBI) Reliability and Validity. Eur. J. Psychol. Assess. 2009, 25, 48–57. [Google Scholar] [CrossRef]
  34. Bask, M.; Salmela-Aro, K. Burned Out to Drop Out: Exploring the Relationship Between School Burnout and School Dropout. Eur. J. Psychol. Educ. 2013, 28, 511–528. [Google Scholar] [CrossRef]
  35. European Commission. Reducing Early School Leaving: Key Messages and Policy Support. Available online: https://ec.europa.eu/education/content/reducing-early-school-leaving-key-messages-and-policy-support_en (accessed on 20 August 2018).
  36. European Commission. Early School Leaving. Available online: http://ec.europa.eu/education/policy/school/early-school-leavers_en (accessed on 20 August 2018).
  37. Vlieghe, J.; de Troyer, O. Tickle Report D1: State-of-the-art on Media Use in Belgium, Flanders & Belgium. Available online: https://wise.vub.ac.be/tickle/index.php/reformation/sota-mediause/ (accessed on 20 August 2018).
  38. Vlieghe, J.; de Troyer, O. Tickle Report D2: State-of-the-Art on Early School Leaving and Dropouts. Available online: https://wise.vub.ac.be/tickle/index.php/reformation/report-d2-state-of-the-art-on-early-school-leaving-and-dropouts/ (accessed on 20 August 2018).
  39. Fogg, B. A Behavior Model for Persuasive Design. In Proceedings of the 4th International Conference on Persuasive Technology—Persuasive, Claremont, CA, USA, 26–29 April 2009. [Google Scholar]
  40. Lo, J.L. Playful Tray: Adopting Ubicomp and Persuasive Techniques into Play-Based Occupational Therapy for Reducing Poor Eating Behavior in Young Children. In Proceedings of the International Conference on Ubiquitous Computing, Innsbruck, Austria, 16–19 September 2007. [Google Scholar]
  41. Pham, X.-L.; Nguyen, T.-H.; Hwang, W.-Y.; Chen, G.-D. Effects of push notifications on learner engagement in a mobile learning app. In Proceedings of the 2016 IEEE 16th International Conference on Advanced Learning Technologies (ICALT), Austin, TX, USA, 25–28 July 2016. [Google Scholar]
  42. Kaptein, M.; van Halteren, A. Adaptive persuasive messaging to increase service retention: Using persuasion profiles to increase the effectiveness of email reminders. Pers. Ubiquitous Comput. 2013, 17, 1173–1185. [Google Scholar] [CrossRef] [Green Version]
  43. Eyal, N. Hooked: How to Build Habit-Forming Products; Penguin Books Ltd: London, UK, 2014. [Google Scholar]
  44. Simons, H.W.; Jones, J.G. Persuasion in Society; Routledge: New York, NY, USA, 2017. [Google Scholar]
  45. Vlieghe, J.; de Troyer, O. Tickle Report D3: Literature Study on Persuasive Techniques and Technology. Available online: https://wise.vub.ac.be/tickle/wp-content/uploads/2015/12/Report-D3_final.pdf (accessed on 20 August 2018).
  46. Goldstein, N.J.; Steve, M.J.; Cialdini, R.B. Yes! 50 Scientifically Proven Ways to be Persuasive; Simon & Schuster Inc: New York, NY, USA, 2008. [Google Scholar]
  47. John, O.P.; Srivastava, S. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In Handbook of Personality: Theory and Research; Lawrence, A.P., John, O.P., Eds.; Elsevier: Amsterdam, The Netherlands, 1999. [Google Scholar]
  48. Kidimedia. BookWidgets Interactive Learning. Available online: https://www.bookwidgets.com/ (accessed on 23 January 2020).
  49. Markopoulos, P.; Kaptein, M.; de Ruyter, B.; Aarts, E. Personalizing persuasive technologies: Explicit and implicit personalization using persuasion profiles. Int. J. Hum. Comput. Stud. 2015, 77, 38–51. [Google Scholar]
  50. Progressive Web Apps. Available online: https://developers.google.com/web/progressive-web-apps/ (accessed on 23 January 2020).
  51. Redux—A Predictable State Container for JS Apps. Available online: https://redux.js.org/ (accessed on 23 January 2020).
  52. Facebook. React—A Javascript Library for Building User Interfaces. Available online: https://reactjs.org/ (accessed on 23 January 2020).
  53. Firebase Helps Mobile and Web App Teams Succeed. Available online: https://firebase.google.com/ (accessed on 23 January 2020).
  54. Json-Rules-Engine. Available online: https://www.npmjs.com/package/json-rules-engine (accessed on 23 January 2020).
  55. Kaplan, B.; Maxwell, J.A. Qualitative research methods for evaluating computer information systems. In Evaluating the Organizational Impact of Healthcare Information Systems; Anderson, J.G., Aydin, C.E., Eds.; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  56. Bowen, G. Document Analysis as a Qualitative Research Method. Qual. Res. J. 2009, 9, 27–40. [Google Scholar] [CrossRef] [Green Version]
  57. Geocaching. Available online: https://www.geocaching.com/ (accessed on 19 December 2019).
  58. VDS. Vlaamse Dienst Speelpleinwerking. Available online: https://www.speelplein.net/ (accessed on 16 December 2019).
  59. Rauschenberger, M.; Schrepp, M.; Cota, M.P.; Olschner, S.; Thomaschewski, J. Efficient Measurement of the User Experience of Interactive Products. How to use the User Experience Questionnaire (UEQ). Example: Spanish Language Version. Int. J. Artif. Intell. Interact. Multimed. 2013, 2, 39–45. [Google Scholar] [CrossRef]
  60. Team UEQ, User Experience Questionnaire. Available online: https://www.ueq-online.org/ (accessed on 16 December 2019).
  61. Alba. alba—Try-Out Brussel. Available online: http://alba.be/project/try-out-brussel/ (accessed on 16 December 2019).
  62. CADLimburg. Centra Voor Alcohol-en Andere Drugproblemen Limburg. Available online: https://www.cadlimburg.be/ (accessed on 16 December 2019).
  63. Reboot. Available online: https://www.rebootkamp.be/ (accessed on 16 December 2019).
  64. Koerth-Baker, M.; Wolfe, J. Most Personality Quizzes Are Junk Science. Take One That Isn’t. Available online: https://projects.fivethirtyeight.com/personality-quiz/ (accessed on 16 December 2019).
  65. Eppmann, R.; Bekk, M.; Klein, K. Gameful Experience in Gamification: Construction and Validation of a Gameful Experience Scale. J. Interact. Mark. 2018, 43, 98–115. [Google Scholar] [CrossRef]
  66. Boberg, M.; Karapanos, E.; Holopainen, J.; Lucero, A. PLEXQ: Towards a Playful Experiences Questionnaire. In Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY 2015), London, UK, 3–7 October 2015. [Google Scholar]
  67. Lindberg, R.; Maushagen, J.; de Troyer, O. Combining a Gamified Civic Engagement Platform with a Digital Game in a Loosely Way to Increase Retention. In Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, Munich, Germany, 2–4 December 2019. [Google Scholar]
Figure 1. Architecture of TICKLE.
Figure 2. Map view of the Card Interface.
Figure 3. (a) An example card (front side); (b) An example challenge.
Figure 4. Overview of the cards of a youngster for a card environment.
Figure 5. (a) An example of an internal notification—clicking on the image will allow the user to jump to the card directly; (b) Example feedback message in case of a successfully performed challenge.
Figure 6. (a) Card Editor—Adding an image field; (b) Card Editor—Creating a quiz challenge.
Figure 7. Creation of waypoints.
Figure 8. Start screen for a user who has access to multiple card environments.
Figure 9. (a) Interface in the supervisor module for inspecting who could collect the card and who could not; (b) Interface for rating an open question.
Figure 10. Treasure map used for the first evaluation with VDS.
Figure 11. TICKLE’s card interface for the second evaluation with VDS.
Figure 12. TICKLE’s card interface for the city game in Brussels.
Figure 13. TICKLE’s card interface for the Reboot Camp.
