Article

Making Progress Monitoring Easier and More Motivating: Developing a Client Data Collection App Incorporating User-Centered Design and Behavioral Economics Insights

1 Penn Center for Mental Health, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
2 Sandra Rosenbaum School of Social Work, University of Wisconsin, Madison, WI 53706, USA
3 Hall Mercer Community Mental Health, University of Pennsylvania Health System, Philadelphia, PA 19104, USA
4 Department of Psychiatry, Penn Medicine Nudge Unit, University of Pennsylvania Health System, Philadelphia, PA 19104, USA
* Author to whom correspondence should be addressed.
Academic Editors: Eduardo Bunge, Blanca Pineda, Taylor N. Stephens, Naira Topooco and Nigel Parton
Soc. Sci. 2022, 11(3), 106; https://doi.org/10.3390/socsci11030106
Received: 1 September 2021 / Revised: 15 February 2022 / Accepted: 17 February 2022 / Published: 3 March 2022
(This article belongs to the Special Issue Technological Approaches for the Treatment of Mental Health in Youth)

Abstract

Data collection is an important component of evidence-based behavioral interventions for children with autism, but many one-to-one aides (i.e., behavioral support staff) do not systematically collect the quantitative data that are necessary for best-practice client progress monitoring. Data collection of clients’ behaviors often involves labor-intensive pen-and-paper practices. In addition, the solitary nature of one-to-one work limits opportunities for timely supervisor feedback, potentially reducing motivation to collect data. We incorporated principles from behavioral economics and user-centered design to develop a phone-based application, Footsteps, to address these challenges. We interviewed nine one-to-one aides working with children with autism and seven supervisors to ask for their app development ideas. We then developed the Footsteps app prototype and tested the prototype with 10 one-to-one aides and supervisors through three testing cycles. At each cycle, one-to-one aides rated app usability. Participants provided 76 discrete suggestions for improvement, including 29 new app features (e.g., behavior timer), 20 feature modifications (e.g., numeric type-in option for behavior frequency), four flow modifications (e.g., deleting a redundant form), and 23 out-of-scope suggestions. Of the participants who tested the app, 90% rated usability as good or excellent. Results support continuing to develop Footsteps and testing its impact in a clinical trial.
Keywords: digital mental health; m-health applications; behavioral data collection; autism spectrum disorder; behavioral economics; user-centered design; participatory design

1. Introduction

The use of technology in the delivery, coordination, and monitoring of therapeutic or behavioral health interventions continues to grow (Raney et al. 2017). However, there are few mobile applications available for one-to-one aides (i.e., behavioral support staff) or therapists to track clients’ progress. Two main barriers limit client data tracking. First, data collection often relies on pen-and-paper methods, which are subject to error and impede the delivery of services in fast-paced and busy therapeutic environments (Dale and Hagen 2007; Le Jeannic et al. 2014; Riggleman 2021). This reliance on pen-and-paper methods also complicates behavior tracking over time and data sharing among team members (Riggleman 2021). Managing children’s challenging behaviors while keeping up with data collection requirements can be extremely difficult and stressful for one-to-one aides and therapists (Riggleman 2021). Indeed, data show that challenging behaviors in children with autism contribute to burnout among staff (Hastings and Brown 2002). Quantitative data collection (e.g., tracking the frequency of a child’s challenging behaviors) in autism treatment programs is necessary for high-quality treatment planning but is challenging to achieve. Data collection is consistent with best-practice guidelines for autism as it ensures progress is tracked and that goals and strategies are updated accordingly (Steinbrenner et al. 2020). However, many agencies still use pen-and-paper data collection systems (Marcu et al. 2013) or require only qualitative session summary notes, both of which impede accuracy and efficiency in progress monitoring.
Second, the solitary nature of therapy and behavioral health work, which is often delivered in a one-to-one format, often limits opportunities for timely supervisor feedback regarding data collection (Melamed et al. 2001). The complex nature of challenging behavior in children with autism (Cohen et al. 2011), along with the complex nature of autism behavioral interventions (Steinbrenner et al. 2020), necessitates timely supervisor feedback (Rispoli et al. 2011), but large caseloads and often inadequate resources mean that agencies cannot provide appropriate supervision. Specific to the current study, supervisor oversight on data collection of children’s behaviors and progress over time is a critical component of effectively understanding children’s changing support needs.
Digital technology could not only replace pen-and-paper data collection methods but also address these timely data collection feedback needs at a lower cost than traditional supervision. Although many technologies have been applied to therapy for individuals with autism, from augmentative and alternative communication devices to robots designed to improve social interaction (Goldsmith and LeBlanc 2004; Kientz et al. 2013; Sutherland et al. 2018), few technologies are designed to support their one-to-one aides or therapists (Nuske and Mandell 2021). Digital technology has supported the implementation of evidence-based practices in other sectors such as healthcare. For example, data show that use of computerized clinical decision support systems with individualized real-time reminders is associated with higher fidelity to evidence-based practices and better patient outcomes among providers who work in chaotic environments, such as ambulatory clinics (Hunt et al. 1998; Saleem et al. 2005, 2007; Vashitz et al. 2007). Therefore, digitalizing data collection in one-to-one programs and incorporating timely digital feedback on data collection has the potential to form an elegant solution to the barriers identified above.
To address these barriers, user-centered design practices are critical to ensure a genuine problem-solution fit for the digital technology that is endorsed by key stakeholders whilst ensuring its feasibility and usability. User-centered design practices include engaging with key community stakeholders early and often in every step of the technology development process, from conception to prototype development to refinement to testing and dissemination (Abras et al. 2004). Several user-centered design methods are available to gather stakeholders’ needs and design ideas, including interviews, field observations, field tests, surveys, focus groups, and community advisory boards (Dopp et al. 2019).
Another approach to addressing implementation barriers is to use principles from behavioral economics. Behavioral economics incorporates findings from social and cognitive psychology into factors associated with decision making, with a particular focus on irrational heuristics and biases (Mullainathan and Thaler 2000; Samson 2014). Behavioral economics principles have previously been applied to health technology development to improve physical health outcomes using smartphone mobile applications and wearable devices (Case et al. 2015; Cotton and Patel 2019; Kim and Patel 2018; Patel et al. 2017, 2020). However, there is limited work on integrating behavioral economics with mental health (Beidas et al. 2019), and interest in integrating behavioral economics with education is still growing (Jabbar 2011; Koch et al. 2015; Lavecchia et al. 2016; Levitt et al. 2016; List et al. 2018). We used the behavioral economics framework developed by the international Behavioural Insights Team, EAST (Easy, Attractive, Social and Timely; The Behavioural Insights Team n.d.), to enact behavioral change.

Current Study

We employed several user-centered design approaches and incorporated behavioral economics principles (EAST, see above) to adapt an existing mobile application and associated web portal for one-to-one aides who support children with autism in reaching their educational and behavioral goals. First, we conducted interviews with one-to-one aides and supervisors to understand how to improve data collection procedures. This feedback directly informed the development of the app. Second, once we had a working prototype of the app, we iteratively improved the app by gathering detailed stakeholder feedback from newly recruited one-to-one aides and supervisors on three iterative app testing cycles.

2. Materials and Methods

2.1. Setting and Context

We partnered with three behavioral health agencies in Philadelphia, Pennsylvania, USA that employ one-to-one aides (usually bachelors-prepared individuals) and clinical supervisors (usually board-certified behavior analysts) who work with children with autism in community settings. Children with autism in Pennsylvania often qualify for a one-to-one aide because they present with challenging behaviors that require additional support. Although one-to-one aides often work in schools, they are hired from an outside agency and do not report to the classroom teacher. Instead, they have a clinical supervisor who provides periodic (often monthly) supervision at their agency rather than at the school.

2.2. Participants

Interview participants. The research team visited the partnering community behavioral health agencies in the Philadelphia region to recruit one-to-one aides and their supervisors. The team presented the study to behavioral health support staff and their supervisors and invited them to participate. Our inclusion criterion was that staff and their supervisors currently worked with at least one student diagnosed with or educationally classified as having autism on their caseload who attended a Philadelphia public school (Pre-K through grade 12). Behavioral health support staff who worked only in home settings or daycares were ineligible. Research staff conducted 16 interviews in total, nine with one-to-one aides and seven with supervisors. All participants gave informed consent before participating in the study.
App testing cycles participants. Research staff recruited 10 behavioral support workers to participate in three app testing cycles. All participants from the first two cycles were invited to participate in later cycles; however, due to the COVID-19 pandemic there was a substantial delay between cycles 2 and 3, resulting in only one participant carrying over to cycle 3 (none participated in all cycles, five participated in two cycles, five participated in one cycle). Inclusion criteria for the one-to-one aides were as follows: (1) to have supported children with autism in a school, daycare, home, or community setting in February or March of 2020, before the shutdown due to the COVID-19 pandemic, or to be currently working with clients with autism in any capacity, including remotely over web conferencing software (for cycles 1 and 2; see Procedure section); and (2) to be currently working in a school, daycare, home, or community setting (for cycle 3 only). See Table 1 for all participant demographic characteristics. All participants gave informed consent before participating in the study.

2.3. Materials

Interview guides. The research team developed two semi-structured interview guides adapted from the ‘Theory Informed Topic Guide’ (Potthoff et al. 2019), one to interview behavioral health support staff and one to interview their clinical supervisors. Both interview guides included questions about current assessment practices and how to make data collection easier, and the one-to-one aide version also included questions about their beliefs and attitudes around data collection.
App development. The study team researched autism data collection applications available for download in the United States and identified six for consideration. The research team discussed the strengths and weaknesses of each application, and created a list of basic requirements, including HIPAA compliance, offline availability, and the option to download data as a .csv file. This narrowed down the list to three applications. The team scheduled meetings with the developers of these three applications to gauge their interest in the study and learn more about each application’s capabilities. Based on a variety of factors (e.g., cost, ease of use, user interface, data export options, device compatibility [i.e., iPhone and Android, iPad and tablet], interest and availability to build in new features as per the aims of the study), the team partnered with a digital health technology company that focuses on customized digital platforms to support complex healthcare needs.
Our partnering behavioral health agency leaders advised on app design and facilitated recruitment across studies. Prior to the current study we also hosted an innovation tournament, a mechanism to crowdsource ideas on a topic (Terwiesch and Ulrich 2009), through the University of Pennsylvania’s Your Big Idea platform (Penn Medicine Center for Health Care Innovation n.d.). Our group has found this method to be ideal for gathering ideas as a starting place for solving a specific problem in behavioral health (Beidas et al. 2019; Stewart et al. 2019). In this innovation tournament, we asked one-to-one aides and their supervisors for ideas on how to make data collection for behavioral support workers easier and more motivating, so as to identify the most innovative solutions (Terwiesch and Ulrich 2009). Seventy-one percent of the ideas submitted in our innovation tournament suggested some form of technology or could be implemented via a data collection application to aid one-to-one aides in sessions with children, which supported the app development idea. After completing the innovation tournament, research staff conducted the interviews with one-to-one aides and their supervisors. Following this, we then completed three app testing cycles (see above) with one-to-one aides. Feedback gathered through each of these user-centered design methods informed the final development of the Footsteps client data collection app. See Figure 1 for a schematic of the app development process.
Footsteps: Client data collection app. Our application was designed to address barriers inherent to data collection by making it easier and more attractive to take data on therapy programs, applying the EAST behavioral economics principles described above. The app was intended to be Easy: featuring basic digital data collection features; Attractive: including client and provider data graphs; Social: showing comparisons to agency expectations, including a supervisor-provider messaging platform; and Timely: giving start-of-session feedback messages on the previous session’s data collection performance (percentage of intervals in which data were collected, either synchronously or asynchronously), and giving in-session and end-of-week pop-up reminders. The basic data collection forms are personalized per client to include the behavior and/or skills being tracked in their therapy program. The basic data collection features and behavioral economics features are described in more detail in Table 2. See Figure 2 for images of the app’s main features.

2.4. Procedure

Interviews. Screening calls were conducted to determine eligibility. All interviews were audio recorded and transcribed once completed. Any app update suggestions were flagged and compiled for consideration with the research team and mobile developer for future app iterations.
App testing. After interviews were completed, we incorporated participant feedback in the app design (see Footsteps: Client data collection app section above) and prepared the app for testing. In each app testing cycle, we showed one-to-one aides a beta version and had them complete exercises to give us feedback on how to improve the app. In each cycle, we conducted screening calls with participants to determine eligibility, gather details to create a customized participant account, and familiarize the participant with the application. While we planned for app testing to take place in schools, due to COVID-19 restrictions, app testing in the first two of three testing cycles took place in the one-to-one aides’ home offices, where they used the app as they would usually do in schools with their clients.
In cycle 1, four participants viewed screenshots of the app and participated in two “think aloud” exercises on how they would take data in real time (Jaspers et al. 2004). Each think aloud exercise required the participant to think about a common behavior they often observe and report how they would take data on it using the basic data entry features of the app, highlighting anything about the app that was confusing or that they would change.
In cycle 2, six participants downloaded the app onto their phones. They then completed two think aloud exercises, this time based on two videos of therapy sessions with children with autism. After each video, the participant took data on the child’s behaviors using the application while telling the research staff how they were navigating the basic data entry features of the app in real time. Participants then provided feedback on the first draft of the behavioral economics features, including usability and layout of the child and data collection graphs, and impressions on receiving reminders to take data via push notifications, again highlighting anything about the app that was confusing or that they would change.
In cycle 3, five participants tested the live app prototype. Research staff met in a video conferencing platform with each participant to complete a brief training on how to use the application and the web portal to download child data entered into the app. Staff asked for details of the participant’s client with autism to create an individualized app account. Once the account was set up, participants used the application with their client for at least two sessions. After this was completed, research staff conducted post-test interviews with participants to gather feedback on the basic data collection features and the behavioral economics features. This included the layout, features, and functionality of each form and feature in the application: (1) the start session form with feedback on the previous session’s data collection performance (percentage of intervals in which data were collected, either synchronously or asynchronously); (2) behavior forms; (3) child data graphs; (4) data collection performance graphs; (5) in-session reminders to take data; (6) the data collection feedback message at the end of the week; and (7) the messaging platform. The interview guide asked semi-structured questions about feedback and improvement ideas on the basic data collection and the behavioral economics features. The research team also included a set of questions on the general experience using the app in each cycle that assessed: usability, acceptability, feasibility, appropriateness, burden, and comparison to other systems or applications. As with the interviews, any app update suggestions were flagged and compiled for consideration with the research team and mobile developer for future app iterations.
To measure the usability of the app on each testing cycle, participants also completed the System Usability Scale (SUS; Brooke 1996). The SUS is a 10-item questionnaire on the usability of a system or product with five Likert scale response options ranging from strongly disagree (1) to strongly agree (5), with higher total scores (calculated as a proportion score/100) indicating higher usability: ≥85 = Excellent; ≥71 = Good; ≥51 = Okay (Bangor et al. 2008). We adapted the measure for this study by replacing the/this “system” with “app” across all items.
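The SUS scoring and adjective categories described above can be sketched in code. The snippet below follows the standard SUS scoring convention (odd-numbered items contribute score − 1, even-numbered items contribute 5 − score, and the sum is scaled by 2.5 to a 0–100 range) together with the Bangor et al. (2008) cut-offs reported in this section; the function names are illustrative, not part of the study’s materials.

```python
def sus_score(responses):
    """Compute a 0-100 SUS score from 10 Likert responses (each 1-5).

    Standard SUS scoring: odd-numbered (positively worded) items
    contribute (score - 1); even-numbered (negatively worded) items
    contribute (5 - score). The summed contributions (0-40) are
    multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


def sus_rating(score):
    """Map a SUS score to the adjective categories used in this study
    (cut-offs as reported above from Bangor et al. 2008)."""
    if score >= 85:
        return "Excellent"
    if score >= 71:
        return "Good"
    if score >= 51:
        return "Okay"
    return "Not acceptable"
```

For example, a respondent who strongly agrees (5) with every positively worded item and strongly disagrees (1) with every negatively worded item scores 100 ("Excellent"), while uniformly neutral responses (all 3s) score 50, falling below the "Okay" threshold.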
Supervisors of one-to-one aides were asked for feedback during cycle 1 and cycle 3 during agency advisory meetings. As supervisors do not work directly with clients, we gathered their feedback during these meetings by showing them the app prototype using Figma, an interactive web-based design software that allowed supervisors to view the app as it would appear on their phone (e.g., with scroll capability). We did not ask them to rate the app on the SUS as they were not directly using the app with clients, so the 10 one-to-one aides who gave usability ratings all had experience with the live prototype. We did, however, ask supervisors for their app improvement ideas given their wealth of experience in the field so we could further improve the app.
Reliability on categorizing app improvement ideas. Two coders categorized app improvement ideas from interview participants as either (1) a feature to consider adding or (2) out of scope for the goals of the project (100% overlap on coding, with M = 95% agreement on categories, range 92–100%). App improvement ideas from the app testing cycles were categorized as either (1) a new feature or (2) a feature modification. App improvement suggestions were discussed with the entire research team and with the mobile developer for consideration of adding them to the app.
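Percent agreement of the kind reported above is the proportion of items on which the two coders assigned the same category. A minimal sketch (the category labels below are hypothetical, for illustration only):

```python
def percent_agreement(codes_a, codes_b):
    """Inter-rater percent agreement: the share of items that two
    coders, rating the same set of app improvement ideas, assigned
    to the same category, expressed as a percentage."""
    if len(codes_a) != len(codes_b) or not codes_a:
        raise ValueError("Coders must rate the same non-empty set of ideas")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)
```

For instance, if two coders agree on 19 of 20 ideas, `percent_agreement` returns 95.0, matching the mean agreement reported in this study. (Percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are an alternative when categories are imbalanced.)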

3. Results

Below we present the app improvement ideas results and app usability results as gathered via interviews and app testing.

3.1. App Improvement Idea Results

One-to-one aides and supervisors provided 76 discrete suggestions for improvement; 53 were actionable suggestions, including 29 new app features (e.g., interval data collection form), 20 feature modifications (e.g., numeric type-in option for behavior frequency), and four flow modifications (e.g., deleting a redundant behavior form submission confirmation). Twenty-three other suggestions were not actionable (e.g., they conflicted with the project’s core aim to incorporate motivational messages grounded in behavioral economic theory, or were outside the project’s scope of creating an app for one-to-one aides rather than for youth). As shown in Table 3 and Table 4, one-to-one aides and supervisors helped to design and refine the application, and largely had consistent ideas for improving the application.

3.2. App Usability Results

As shown in Figure 3, most participants (70%) rated the Footsteps client data collection app as “Excellent” on usability (total SUS score) by their final testing cycle. Two participants rated the app as “Good” and one as not acceptable (<51) on usability by their final testing cycle.

4. Discussion

We developed an app to track client behaviors and therapy progress in partnership with behavioral health agencies, incorporating feedback from supervisors and one-to-one aides at every development stage, from conception to prototyping to field testing. In each of these stages, we relied on principles and methods from user-centered design and behavioral economics. We conducted a preliminary test of the app with one-to-one aides who work with clients with autism. Most one-to-one aides rated the app as highly usable, suggesting the app is ready for more definitive testing in a randomized controlled trial.
The community partnerships strengthened as part of this project were vital to the success of the app development. New technologies or programs are often challenging to implement in community settings due to many barriers including lack of leadership buy-in and limited resources (Iadarola et al. 2015; Langley et al. 2010). One way to address these barriers is to develop meaningful partnerships with key community stakeholders, including those who will be responsible for supporting, implementing and consuming the technology or program (Pellecchia et al. 2018). One-to-one aides and supervisors from our partnering behavioral health agencies gave us a multitude of app improvement ideas. These allowed us to design and refine the application with the knowledge that we were fulfilling the needs of the community for data collection and progress monitoring on their therapy programs.

Limitations and Future Directions

This study is not without limitations. First, the sample sizes for each part of the project were relatively small. A larger scale study is needed to ensure that the app’s design and features are palatable to the broader community behavioral health workforce. However, involving supervisors in the interviews and app testing cycles allowed us to learn from the wealth of expertise they have gained working across many settings and clients.
Second, there were several app improvement ideas that were out of scope for the current project, including client-facing features, tracking triggers/antecedents and interventions/consequences of behavior, and integration into existing electronic health records. These form excellent future directions for the educational and therapy app design field.
Third, no data were collected on whether the behavioral economics features of the application help to increase the quantity or quality of data collection by one-to-one aides. The app has the potential to improve data collection practices and therefore clinical care. We are currently running a pilot randomized controlled trial comparing the Footsteps app with a basic data collection app to examine this question, and plan to follow up with a fully powered randomized controlled trial.

Author Contributions

Conceptualization, H.J.N., E.M.B.-H., K.Z. and D.S.M.; Methodology, H.J.N., J.E.B., B.R., E.M.B.-H., K.Z. and D.S.M.; Software, H.J.N., J.E.B., B.R., E.M.B.-H., K.Z. and D.S.M.; Validation, H.J.N., J.E.B. and B.R.; Formal Analysis, H.J.N., J.E.B., B.R., E.M.B.-H. and K.Z.; Investigation, H.J.N., J.E.B., B.R., E.M.B.-H., K.Z. and D.S.M.; Resources, H.J.N., E.M.B.-H. and D.S.M.; Data Curation, H.J.N., J.E.B., B.R., E.M.B.-H. and K.Z.; Writing—Original Draft Preparation, H.J.N. and J.E.B.; Writing—Review & Editing, H.J.N., J.E.B., B.R., E.M.B.-H., K.Z. and D.S.M.; Visualization, H.J.N.; Supervision, H.J.N., E.M.B.-H., K.Z. and D.S.M.; Project Administration, H.J.N., J.E.B., B.R., E.M.B.-H., K.Z. and D.S.M.; Funding Acquisition, E.M.B.-H., K.Z. and D.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by two National Institute of Mental Health grants, number P50MH113840 (PIs: Rinad Beidas, David Mandell, and Kevin Volpp) and K01MH120509 (PI: Heather Nuske).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the City of Philadelphia (2019-32).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank our wonderful community partners, including our three partnering agencies (Gemma Services, Children’s Crisis Treatment Center, and NET Centers) and their agency directors and supervisors (Amy Wasersztein, Tristan Dahl, Michelle Ruppert-Daly, Emmeline Williamson, Bridget Donohue, Patrick Bevenour) and all the other supervisors and one-to-one aides working at these agencies who were involved in the project. Without them we could not have completed this project and developed the application to such a high standard. We would also like to thank our industry partner for their commitment to incorporating community feedback and seeing through the development of the application to its final state.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Penn Medicine Center for Health Care Innovation. About|Your Big Idea. n.d. Available online: https://bigidea.pennmedicine.org/about (accessed on 28 July 2021).
  2. Abras, Chadia, Diane Maloney-Krichmar, and Jenny Preece. 2004. User-centered design. In Encyclopedia of Human-Computer Interaction. Edited by W. Bainbridge. Thousand Oaks: Sage Publications, vol. 37, pp. 445–56.
  3. Bangor, Aaron, Phillip T. Kortum, and James T. Miller. 2008. An Empirical Evaluation of the System Usability Scale. International Journal of Human–Computer Interaction 24: 574–94.
  4. Beidas, Rinad S., Kevin G. Volpp, Alison N. Buttenheim, Steven C. Marcus, Mark Olfson, Melanie Pellecchia, Rebecca E. Stewart, Nathaniel J. Williams, Emily M. Becker-Haimes, Molly Candon, and et al. 2019. Transforming Mental Health Delivery through Behavioral Economics and Implementation Science: Protocol for Three Exploratory Projects. JMIR Research Protocols 8: e12121.
  5. Brooke, John. 1996. SUS: A “Quick and Dirty” Usability Scale. In Usability Evaluation in Industry. Boca Raton: CRC Press.
  6. Case, Meredith A., Holland A. Burwick, Kevin G. Volpp, and Mitesh S. Patel. 2015. Accuracy of Smartphone Applications and Wearable Devices for Tracking Physical Activity Data. JAMA 313: 625.
  7. Cohen, Ira L., Helen J. Yoo, Matthew S. Goodwin, and Lauren Moskowitz. 2011. Assessing challenging behaviors in Autism Spectrum Disorders: Prevalence, rating scales, and autonomic indicators. In International Handbook of Autism and Pervasive Developmental Disorders. Berlin/Heidelberg: Springer, pp. 247–70. Available online: http://link.springer.com/chapter/10.1007/978-1-4419-8065-6_15 (accessed on 1 September 2021).
  8. Cotton, Victor, and Mitesh S. Patel. 2019. Gamification Use and Design in Popular Health and Fitness Mobile Applications. American Journal of Health Promotion 33: 448–51.
  9. Dale, Oystein, and Kaare Birger Hagen. 2007. Despite technical problems personal digital assistants outperform pen and paper when collecting patient diary data. Journal of Clinical Epidemiology 60: 8–17.
  10. Dopp, Alex R., Kathryn E. Parisi, Sean A. Munson, and Aaron R. Lyon. 2019. A glossary of user-centered design strategies for implementation experts. Translational Behavioral Medicine 9: 1057–64.
  11. The Behavioural Insights Team. EAST: Four Simple Ways to Apply Behavioural Insights. n.d. Available online: https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/ (accessed on 24 May 2021).
  12. Goldsmith, Tina R., and Linda A. LeBlanc. 2004. Use of technology in interventions for children with autism. Journal of Early and Intensive Behavior Intervention 1: 166.
  13. Hastings, Richard P., and Tony Brown. 2002. Coping strategies and the impact of challenging behaviors on special educators’ burnout. Mental Retardation 40: 148–56.
  14. Hunt, Dereck L., R. Brian Haynes, Steven E. Hanna, and Kristina Smith. 1998. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: A systematic review. JAMA 280: 1339–46.
  15. Iadarola, Suzannah, Susan Hetherington, Christopher Clinton, Michelle Dean, Erica Reisinger, Linh Huynh, Jill Locke, Kelly Conn, Sara Heinert, Sheryl Kataoka, and et al. 2015. Services for children with autism spectrum disorder in three, large urban school districts: Perspectives of parents and educators. Autism 19: 694–703.
  16. Jabbar, Huriya. 2011. The Behavioral Economics of Education: New Directions for Research. Educational Researcher 40: 446–53.
  17. Jaspers, Monique W. M., Thiemo Steen, Cor Van Den Bos, and Maud Geenen. 2004. The think aloud method: A guide to user interface design. International Journal of Medical Informatics 73: 781–95.
  18. Kientz, Julie A., Matthew S. Goodwin, Gillian R. Hayes, and Gregory D. Abowd. 2013. Interactive Technologies for Autism. Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving Technologies 2: 1–177.
  19. Kim, Rebecca H., and Mitesh S. Patel. 2018. Barriers and Opportunities for Using Wearable Devices to Increase Physical Activity among Veterans: Pilot Study. JMIR Formative Research 2: e10945.
  20. Koch, Alexander, Julia Nafziger, and Helena Skyt Nielsen. 2015. Behavioral economics of education. Journal of Economic Behavior & Organization 115: 3–17.
  21. Langley, Audra K., Erum Nadeem, Sheryl H. Kataoka, Bradley D. Stein, and Lisa H. Jaycox. 2010. Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health 2: 105–13.
  22. Lavecchia, Adam M., Heidi Liu, and Philip Oreopoulos. 2016. Chapter 1—Behavioral Economics of Education: Progress and Possibilities. In Handbook of the Economics of Education. Edited by Erik A. Hanushek, Stephen J. Machin and Ludger Woessmann. Amsterdam: Elsevier, vol. 5, pp. 1–74.
  23. Le Jeannic, Anais, Celine Quelen, Corinne Alberti, and Isabelle Durand-Zaleski. 2014. Comparison of two data collection processes in clinical studies: Electronic and paper case report forms. BMC Medical Research Methodology 14: 7.
  24. Levitt, Steven D., John A. List, Susanne Neckermann, and Sally Sadoff. 2016. The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance. American Economic Journal: Economic Policy 8: 183–219.
  25. List, John A., Anya Samek, and Dana L. Suskind. 2018. Combining behavioral economics and field experiments to reimagine early childhood education. Behavioural Public Policy 2: 1–21. [Google Scholar] [CrossRef]
  26. Marcu, Gabriela, Kevin Tassini, Quintin Carlson, Jillian Goodwyn, Gabrielle Rivkin, Kevin J. Schaefer, Anind K. Dey, and Sara Kiesler. 2013. Why do they still use paper? Understanding data collection and use in Autism education. Paper presented at CHI’13, SIGCHI Conference on Human Factors in Computing Systems, Paris, France, April 27–May 2; pp. 3177–86. [Google Scholar] [CrossRef]
  27. Melamed, Yuval, Henry Szor, and Elizur Bernstein. 2001. The loneliness of the therapist in the public outpatient clinic. Journal of Contemporary Psychotherapy 31: 103–12. [Google Scholar] [CrossRef]
  28. Mullainathan, Sendhil, and Richard H. Thaler. 2000. Behavioral Economics. (No. w7948). Cambridge: National Bureau of Economic Research. [Google Scholar] [CrossRef]
  29. Nuske, Heather J., and David S. Mandell. 2021. Digital health should augment (not replace) autism treatment providers. Autism 25: 1825–1847. [Google Scholar] [CrossRef] [PubMed]
  30. Patel, Mitesh S., Daniel Polsky, Edward H. Kennedy, Dylan S. Small, Chalanda N. Evans, Charles A. L. Rareshide, and Kevin G. Volpp. 2020. Smartphones vs. Wearable Devices for Remotely Monitoring Physical Activity After Hospital Discharge: A Secondary Analysis of a Randomized Clinical Trial. JAMA Network Open 3: e1920677. [Google Scholar] [CrossRef] [PubMed]
  31. Patel, Mitesh S., Luca Foschini, Gregory W. Kurtzman, Jingsan Zhu, Wenli Wang, Charles A. L. Rareshide, and Susan M. Zbikowski. 2017. Using Wearable Devices and Smartphones to Track Physical Activity: Initial Activation, Sustained Use, and Step Counts Across Sociodemographic Characteristics in a National Sample. Annals of Internal Medicine 167: 755–57. [Google Scholar] [CrossRef] [PubMed]
  32. Pellecchia, Melanie, David S. Mandell, Heather J. Nuske, Gazi Azad, Courtney Benjamin Wolk, Brenna B. Maddox, Erica M. Reisinger, Laura C. Skriner, Danielle R. Adams, Rebecca Stewart, and et al. 2018. Community–academic partnerships in implementation research. Journal of Community Psychology 46: 941–52. [Google Scholar] [CrossRef]
  33. Potthoff, Sebastian, Justin Presseau, Falko F. Sniehotta, Matthew Breckons, Amy Rylance, and Leah Avery. 2019. Exploring the role of competing demands and routines during the implementation of a self-management tool for type 2 diabetes: A theory-based qualitative interview study. BMC Medical Informatics and Decision Making 19: 23. [Google Scholar] [CrossRef]
  34. Raney, Lori, David Bergman, John Torous, and Michael Hasselberg. 2017. Digitally Driven Integrated Primary Care and Behavioral Health: How Technology Can Expand Access to Effective Treatment. Current Psychiatry Reports 19: 86. [Google Scholar] [CrossRef]
  35. Riggleman, Samantha. 2021. Using Data Collection Applications in Early Childhood Settings to Support Behavior Change. Journal of Special Education Technology 36: 175–82. [Google Scholar] [CrossRef]
  36. Rispoli, Mandy, Leslie Neely, Russell Lang, and Jennifer Ganz. 2011. Training paraprofessionals to implement interventions for people with autism spectrum disorders: A systematic review. Developmental Neurorehabilitation 14: 378–88. [Google Scholar] [CrossRef]
  37. Saleem, Jason J., Emily S. Patterson, Laura Militello, Marta L. Render, Greg Orshansky, and Steven M. Asch. 2005. Exploring Barriers and Facilitators to the Use of Computerized Clinical Reminders. Journal of the American Medical Informatics Association 12: 438–47. [Google Scholar] [CrossRef]
  38. Saleem, Jason J., Emily S. Patterson, Laura Militello, Shilo Anders, Mercedes Falciglia, Jennifer A. Wissman, Emilie M. Roth, and Steven M. Asch. 2007. Impact of Clinical Reminder Redesign on Learnability, Efficiency, Usability, and Workload for Ambulatory Clinic Nurses. Journal of the American Medical Informatics Association 14: 632–40. [Google Scholar] [CrossRef]
  39. Samson, Alain. 2014. The Behavioral Economics Guide 2014. London: Behavioral Economics Group. [Google Scholar]
  40. Steinbrenner, Jessica R., Kara Hume, Samuel L. Odom, Kristi L. Morin, Sallie W. Nowell, Brianne Tomaszewski, Susan Szendrey, Nancy S. McIntyre, Serife Yücesoy-Özkan, and Melissa N. Savage. 2020. Evidence-Based Practices for Children, Youth, and Young Adults with Autism. Chapel Hill: The University of North Carolina, Frank Porter Graham Child Development Institute, National Clearinghouse on Autism Evidence and Practice Review Team. [Google Scholar]
  41. Stewart, Rebecca E., Nathaniel Williams, Y. Vivian Byeon, Alison Buttenheim, Sriram Sridharan, Kelly Zentgraf, David T. Jones, Katelin Hoskins, Molly Candon, and Rinad S. Beidas. 2019. The clinician crowdsourcing challenge: Using participatory design to seed implementation strategies. Implementation Science 14: 1–8. [Google Scholar] [CrossRef] [PubMed]
  42. Sutherland, Rebecca, David Trembath, and Jacqueline Roberts. 2018. Telehealth and autism: A systematic search and review of the literature. International Journal of Speech-Language Pathology 20: 324–36. [Google Scholar] [CrossRef] [PubMed]
  43. Terwiesch, Christian, and Karl T. Ulrich. 2009. Innovation Tournaments: Creating and Selecting Exceptional Opportunities. Boston: Harvard Business Press. [Google Scholar]
  44. Vashitz, Geva, Joachim Meyer, and Harel Gilutz. 2007. General Practitioners’ Adherence with Clinical Reminders for Secondary Prevention of Dyslipidemia. AMIA Annual Symposium Proceedings 2007: 766–70. [Google Scholar]
Figure 1. App development process.
Figure 2. App Design Incorporating Behavioral Economics EAST * (Easy, Attractive, Social and Timely) Principles. * https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/ (accessed on 1 September 2021).
Figure 3. System Usability Scale (SUS) Scores Across App Testing Cycles. Each dot color represents one participant. Higher total scores (calculated as a proportion score/100) indicate higher usability: ≥85 = Excellent; ≥71 = Good; ≥51 = Okay (Bangor et al. 2008).
Table 1. Participant demographic characteristics.
                                        Interviews   App Testing Cycles
                                        (n = 16)     (n = 10)
Race
  Black or African American                 7             7
  White                                     5             1
  Asian                                     1             0
  Native Hawaiian or Pacific Islander       1             0
  Prefer not to disclose                    0             2
  Other                                     1             0
  Missing                                   1             0
Hispanic or Latino/a/x
  No                                       11             7
  Yes                                       3             1
  Prefer not to disclose                    0             2
  Missing                                   2             0
Gender
  Female                                   10             8
  Male                                      5             0
  Prefer not to disclose                    0             2
  Missing                                   1             0
Table 2. Footsteps app: basic data collection and behavioral economics features.
Start Session *
  Basic data collection features:
  - Start session, edit session length, or end session early.
  - Choose location (e.g., School, Home, Community, Daycare, Other).
  Behavioral economics features:
  - Encouraging or celebratory feedback message upon starting a session, depending on whether the agency's threshold for the percentage of intervals in which data should be taken was reached (e.g., "Well done for logging data for 100% of intervals in your last session. Let's keep it up today!").
Track Behavior/Skills Form
  Basic data collection features:
  - Behavior name, definition, and associated goal.
  - Time the behavior occurred (in case data are collected asynchronously).
  - Behavior metrics (chosen upon account set-up), including frequency, duration, intensity (three levels with editable descriptors), % opportunities, and context.
  - Additional notes.
  Behavioral economics features:
  - "Behavior did not occur" quick button (as a nudge to record the absence of a behavior).
  - In-session push notification reminders if data have not been collected during a time interval (e.g., at the end of the hour; interval set on account set-up): "Hello! Just reminding you to take data for this hour".
See Data Graphs
  Basic data collection features:
  - Data graph of the client's behavioral data based on data entry, available on the associated web platform.
  Behavioral economics features:
  - Data graph of the client's behavioral data based on data entry, available in the app and on the associated web platform.
  - Two data graphs of the one-to-one aide's data entry (% of intervals with data collected): (1) comparing the current week with the previous week, with an encouraging or celebratory feedback message based on the current week's data collection performance, and (2) comparing the current week with the agency expectation threshold (% of intervals in which data should be taken), with encouraging ("Room for Improvement") or celebratory feedback messages ("Good", "Great", "You are a top performer!") and associated graphics (e.g., smiley faces, fireworks GIF).
  - End-of-week (Friday, 4 p.m.) push notification reminders to check the week-to-week comparison graphs: "Let's review your data this week compared to last week".
Summary Note
  Basic data collection features:
  - Free-form text field, available once at least one quantitative behavior form has been entered during a session.
Timer
  Basic data collection features:
  - Stopwatch timer to make it easier to collect behavior duration data.
Messaging Platform
  Basic data collection features:
  - Basic messaging platform available via the app and/or associated web platform, which can be used for supervisor interaction.
Note. * As per the Start Page shown in Figure 2.
Table 3. App improvement ideas: interviews.
Feature
  Supervisors:
  - Taking frequency and duration data by pushing a button in the app every time a behavior occurs.
  - Take duration of behavior using an app.
  - Positive messaging when collecting data.
  - Include a visual of the data.
  - A recorder button to record a behavior when it occurs and take duration data.
  - Take data on replacement behavior(s).
  - Make the app work with and without Wifi or cellular data.
  - Add a timer in the app.
  - Sync the app to determine reliability when a support worker and supervisor are taking data.
  One-to-one aides:
  - Customizing the app with client behaviors.
  - Make it simple to take data, such as clicking a + or − sign to indicate if a client exhibited a behavior.
  - Add a note section.
  - Customize the behavior, including the time interval per behavior.
  - A form or device to take data by pushing a button.
  - Include a list of behaviors to take data on in an app.
  - Include duration and intensity to take data on in an app.
  - Show visual data with a graph.
  - Add percentage option for frequency.
  - Take data in an app by including a button per behavior that you can click to take frequency.
  - Take data on a handheld device.
  - Take data in an app by including a button per behavior that you can click to take frequency and duration.
  - Add a note section.
  - User interface should be easy to use.
  - Take data on a phone app.
  - Add a timer in an app.
  - Take data on a device that does not require internet.
  - Have the option to record behaviors in real time.
Out of scope
  Supervisors:
  - Be able to record interventions and outcomes.
  - Be able to copy the data into the electronic health record (EHR).
  - Be able to customize the app to match the format of the agency's EHR.
  - Provide Wifi.
  - An interactive component to teach how to use the app (i.e., Clippy from Microsoft Word).
  - Be able to choose between frequency and partial interval data.
  One-to-one aides:
  - Record the data by using Bluetooth technology.
  - Record the data by using a microphone.
  - Provide multiple different ways to take data through technology.
  - A clicker to record data for behaviors that happen frequently.
  - Be able to customize the app in real time (add behaviors).
Flow modification
  Supervisors:
  - Add a way to log multiple behaviors at once (for behaviors that often happen concurrently).
  One-to-one aides:
  - Add a way to log multiple behaviors at once (for behaviors that often happen concurrently).
  - Remove the need to press the "close" button after submitting a behavior.
Out of scope
  Supervisors:
  - Tally button for each behavior on the start page.
  One-to-one aides:
  - Tally button for each behavior on the start page.
  - Include the agency's progress note form within the app.
  - Incorporate client-facing features in the app (e.g., show the data graphs to show progress and show positive messages on behalf of the client).
Table 4. App improvement ideas: app testing cycles.
Cycle 1
  New feature
    Supervisors:
    - Line graph to visually display patterns in behavior.
    - Interval data collection form.
    One-to-one aides:
    - A function to compile individual notes at the end of the session.
  Feature modification
    Supervisors:
    - Percentage of opportunities button in 10% increments.
    - Numeric type-in option for frequency (type in number).
    One-to-one aides:
    - Numeric type-in option for frequency (type in number).
  Flow modification
    One-to-one aides:
    - Remove "Behavior occurred" button.
  Out of scope
    Supervisors:
    - Link the app to the agency's scheduler to make the app align with billing requirements.
    One-to-one aides:
    - Intervention data collection tab.
    - E-sign the data submitted after each session.
    - Add a behavior form section that captures the outcome of the behavior (mood/redirection/outcome).
Cycle 2
  New feature
    One-to-one aides: N/A
  Feature modification
    One-to-one aides:
    - Reviewing data in the graphs by month would be more useful than by week.
    - Wants a more detailed description of the behaviors along with the goal description.
    - Only view one day's worth of intervals in the child data graph instead of multiple days at one time.
    - Have additional details fields (frequency, duration, etc.) pop up automatically after selecting "yes, behavior occurred".
    - Include an option in the app to note that the client "left early" or was a "no show".
    - BHTs have very flexible schedules that change frequently, so "start" and "end" session buttons would be useful instead of having a set schedule.
  Flow modification
    One-to-one aides: N/A
  Out of scope
    One-to-one aides:
    - Include a component in the app that the child can engage with.
    - Include a way to track antecedents and interventions.
    - Have the app match the one-to-one aides' data sheet identically.
Cycle 3
  New feature
    Supervisors:
    - Add a method to easily take note of the absence of a behavior.
    One-to-one aides: N/A
  Feature modification
    Supervisors:
    - Timer button on the start page.
    - Add the option to specify location (home, daycare, school, community), since some children receive services in multiple locations.
    - Change "significant" to "severe" when describing levels of severity.
    - Make the severity definitions appear before a BHT selects a severity level option (e.g., an "expand all definitions" or "view definitions" option).
    - Change % of opportunities to a drop-down where the user can input the number of successful opportunities and the total number of opportunities.
    - Add an option on the behavior form to add context (e.g., whole-class instruction, 1:1; option to edit the list).
    - Change the client data graph to a line graph instead of a bar graph.
    One-to-one aides:
    - On the Start Page, have the option to manually enter the exact start time.
    - Opportunity to record more behaviors, specifically positive behaviors.
    - Make the app compile the notes from the behavior forms and summary note(s) into one compilation of notes at the end of a session.
    - Add a button you can push that says the behavior did not occur.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.