Explanation of Student Attendance AI Prediction with the Isabelle Infrastructure Framework
Abstract
1. Introduction
“Class attendance is a puzzle, buildings are built, rooms reserved, teaching schedules are set, and students enroll with the assumption that faculty-student encounters will occur. Yet quite often many students do not show up.”[1]
“Instead of a reactive state of mind through evaluating students’ grades, a proactive state of mind can be created by increasing class attendance.”[3]
1.1. Contribution
1.2. Overview of the Paper
2. Related Work
2.1. Students’ Attendance
2.2. Inductive Logic Programming and Induction in Automated Reasoning
3. eXplainable Artificial Intelligence (XAI)
“Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models.” (Linardatos et al. (2021) [6])
3.1. Explainability Need
- Explainability: Explainability aims to describe the (i) reason, (ii) model, and (iii) decision of AI systems so that humans can understand them [22].
- Transparency: If the algorithmic behavior, processes, and decision output of an AI model can be understood by a human mechanistically then the model is said to be transparent [23].
Explainability
“Explainability is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms. Explainability is being able to explain quite literally what is happening.” (Koleñák (2020) [24])
3.2. Isabelle Insider and Infrastructure Framework (IIIf)
Attack Trees, CTL and Kripke Structures
- Kripke structures and state transitions: A generic state transition relation is →; Kripke structures over a set of states t reachable by →* from an initial state set I can be constructed by the Kripke constructor as Kripke {t. ∃ i ∈ I. i →* t} I.
- CTL statements: Computation Tree Logic (CTL) can be used to specify dependability properties as K ⊢ EF s, which means that in Kripke structure K there is a path (E) upon which the property s (given as the set of states in which the property is true) will eventually (F) hold.
- Attack trees: In Isabelle, attack trees are defined as a recursive datatype having three constructors: ⊕∨ creates or-trees and ⊕∧ creates and-trees. And-attack trees and or-attack trees consist of a list of sub-attacks which are again recursively given as attack trees. The third constructor, 𝒩, inputs a pair of state sets and constructs a base attack step between two state sets. As an illustration, for the sets I and s this is written as 𝒩(I,s). To give another example, a two-step and-attack leading from state set I via si to s is expressed as ⊢ [𝒩(I,si), 𝒩(si,s)] ⊕∧(I,s).
- Attack tree refinement, validity and adequacy: Refinement can also be used to construct attack trees, but this process is different from the system refinement described in Kammüller (2021) [36]. A high-level attack tree is refined iteratively by adding more detailed attack steps until a valid attack is reached: ⊢ A :: ('s :: state) attree. The definition of validity is constructive so that code can be automatically extracted from it. A formal semantics for attack trees is provided in CTL; adequacy is proved, which enables the verification of actual applications.
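To make the CTL operator EF concrete, the following is a minimal Python sketch (not the Isabelle mechanisation): a Kripke structure is given as an explicit state set with a transition map, and EF is computed as backward reachability. The state names mirror the two-step attack shape I → C → CC used later in the case study; all identifiers here are our own illustrative choices.

```python
def ef(states, trans, goal):
    """States satisfying EF goal: some path (E) eventually (F) reaches a goal state.
    Computed as a backward fixpoint over the transition map."""
    reached = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            # s satisfies EF goal if some successor already does
            if s not in reached and any(t in reached for t in trans.get(s, ())):
                reached.add(s)
                changed = True
    return reached

# Toy instance: I -> C -> CC.
states = {"I", "C", "CC"}
trans = {"I": {"C"}, "C": {"CC"}}
# K |- EF {CC} holds from the initial state I:
print("I" in ef(states, trans, {"CC"}))  # True
```

By adequacy, a positive EF check like this corresponds to an attack tree whose base attack steps follow the discovered path.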
4. Case Study of Pupils’ Attendance in Schools
4.1. Problem Statement
4.2. Data Collection
4.3. Model in IIIf
5. Results
5.1. Definition of PCR Cycle
PCR Cycle Algorithm
- Using attack tree analysis in the CTL logic of the IIIf, we find the initial starting condition of the PCR. The variable B is an element of a datatype for which we seek an explanation (in the example it is actors): M ⊢ EF { s ∈ states M. ¬ DO(B, s) }. This formula states that there exists a path (E) on which eventually (F) a state s will be reached in which the desirable outcome is not true for B. The path corresponds to an attack tree (by adequacy [33]) designating failure states s.
- Find the (initial or refined) precondition using a counterfactual. That is, for a state s in the set of failure states identified in the previous step:
- (a) Find states s′ and s″ such that closest s′ s s″, that is, s′ is a closest state to s and s′ →* s″. In addition, DO(B, s″) must hold.
- (b) Identify the precondition pc leading to the state s″ where DO holds, that is, find an additional predicate pc′ with pc′(B, s′) and use it to extend the previous predicate pc to pc := pc ∧ pc′.
- Generalisation. Use again attack tree analysis in the CTL logic of the IIIf to check whether the following formula is true on the entire datatype of B: it is globally true (AG) that if the precondition pc holds, there is a path on which eventually (EF) the desirable outcome DO holds: ∀ A. M ⊢ AG {s ∈ states M. pc(A, s) ⟶ EF {s. DO(A, s)}}. (Note that the interleaving of the CTL operators AG and EF with logical operators, like implication ⟶, is only possible since we use a Higher Order Logic embedding of CTL.)
- (a) If the check is negative, we get an attack tree, that is, the IIIf provides an explanation tree for M ⊢ EF { s ∈ states M. pc(A, s) ∧ ¬ DO(A, s) } and a set of failure states s with pc(A, s) in which the desirable outcome is not true: ¬DO(A, s). In this case, go to Step 2 and repeat with the new set of failure states in order to find new counterfactuals and refine the predicate: pc := pc ∨ pc′, where pc′ is an additional precondition.
- (b) If the check is positive, we have reached the termination condition, yielding a precondition pc such that for all A: M ⊢ AG { s ∈ states M. pc(A, s) ⟶ EF {s. DO(A, s)} }.
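The generalisation check of Step 3 can be sketched in Python over an explicit-state model. This is only an illustration under our own simplifications: the helpers `reachable`, `ef`, and `generalises` approximate the formal IIIf definitions, and the toy model at the end is hypothetical.

```python
def reachable(trans, s):
    """All states reachable from s (reflexive-transitive closure of the transition map)."""
    seen, todo = {s}, [s]
    while todo:
        for u in trans.get(todo.pop(), ()):
            if u not in seen:
                seen.add(u)
                todo.append(u)
    return seen

def ef(trans, s, prop):
    """EF prop holds at s: some reachable state satisfies prop."""
    return any(prop(t) for t in reachable(trans, s))

def generalises(states, trans, actors, pc, do):
    """Step 3 check: AG {s. pc(A,s) --> EF {s. DO(A,s)}} for all actors A.
    Returns True on the termination condition of the PCR cycle."""
    return all(not pc(a, s) or ef(trans, s, lambda t: do(a, t))
               for a in actors for s in states)

# Toy model: from "s0" the actor can reach "ok", where DO holds.
trans = {"s0": {"ok"}}
do = lambda a, s: s == "ok"     # desirable outcome
pc = lambda a, s: s == "s0"     # candidate precondition
print(generalises({"s0", "ok"}, trans, {"A"}, pc, do))  # True
```

If `generalises` returns False, the witnessing actor/state pair plays the role of the failure states from which the next PCR iteration derives a new counterfactual and a refined disjunct pc′.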
5.2. PCR Cycle Application to School Attendance Case Study
- For actor Bob, we use CTL model checking in the IIIf to verify the formula Attendance_Kripke ⊢ EF { s ∈ states Attendance_Kripke. ¬ DO(“Bob”, s) }. From this proof, the IIIf allows applying the Completeness and Correctness results of CTL [33] to derive the following attack tree: ⊢ [𝒩(I,C), 𝒩(C,CC)] ⊕∧(I,CC). The attack tree corresponds to a path leading from the initial state I to the failure state CC, where Bob’s approval field in attendance CC gets evaluated by the Education Department ED as negative (“False”). The evaluation steps are:
- I→C: Bob puts in an attendance request; this is represented by a put action. So, the state C has (“Bob”, None) ∈ attendance C.
- C→CC: the Education Department ED evaluates the attendance request, represented as an eval action with the result of the evaluation left in attendance CC. So, the state CC has (“Bob”, Some(False)) ∈ attendance CC.
To derive the final failure state CC, the Education Department has applied the bb function as Some((bb C) d), which evaluates Bob’s request as Some(False) (rule eval). - Next, the PCR algorithm finds an initial precondition that yields the desirable outcome in a closest state using counterfactuals. The closest state is given as Ca, which differs from C in that Bob has a smaller disadvantage set (0 elements) as opposed to a 1-element set as in C. The precondition derived is pc ≡ card(disadvantage_set A s) = 1 ∧ A @ Non-Coastal. The state Ca is reachable: Bob first applies for a disadvantage removal via the action delete. From the state Ca, Bob puts in the attendance application leading to CCa, before the Education Department ED evaluates, leading to CCCa. We see that now, with the reduced disadvantage set, Bob receives a “present” attendance prediction.
- The next step of the PCR algorithm is generalisation. We want to investigate whether the disadvantage set reduction is a sufficient precondition in general (for all actors) to explain why the bb algorithm approves the attendance. When we try to prove according to Step 3 that this is the case, the attack tree analysis proves the opposite: M ⊢ EF { s ∈ states M. pc(“Alice”, s) ∧ ¬ DO(“Alice”, s) }. It turns out that Alice, who has a disadvantage set of size 2, does not get the “present” attendance scoring either. She lives, however, in the coastal area, unlike Bob who lives in the non-coastal area. Following Step 3(a), we need to go into another iteration and go back to Step 2 to refine the precondition.
- In this second run, we now have the state s where Alice does not get the approval. According to Step 2(a), we find a counterfactual state as the one in which Alice reduces her disadvantage set to one, leading to a new alternative precondition added to the previous one as an alternative (with ∨): pc′(A, s) := card(disadvantage_set A s) ≤ 1 ∧ A @ Non-Coastal.
- However, there are more alternatives in the counterfactual set: Alice can also first move to London. The new precondition is now created by adding the following pc″ as an additional alternative with ∨ to the overall precondition: pc″(A, s) := card(disadvantage_set A s) ≤ 2 ∧ A @ London.
- Going to Step 3 again in this second run, the proof of the generalisation now succeeds: ∀ A. M ⊢ AG {s ∈ states M. pc(A, s) ∨ pc′(A, s) ∨ pc″(A, s) ⟶ EF {s. DO(A, s)}}.
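The final explanation from the case study can be rendered as a small hedged sketch: the overall precondition is the disjunction of the three refined disjuncts. The encoding of a state as a disadvantage-set size and a location string is our own simplification of the IIIf model.

```python
def pc_all(card_disadvantage, location):
    """Disjunction of the three preconditions derived by the PCR runs
    (state simplified to disadvantage-set cardinality and location)."""
    return ((card_disadvantage == 1 and location == "Non-Coastal")    # pc
            or (card_disadvantage <= 1 and location == "Non-Coastal")  # pc'
            or (card_disadvantage <= 2 and location == "London"))      # pc''

# Bob after the disadvantage removal (set size 0, non-coastal) is covered by pc';
# Alice after moving to London (set size 2) is covered by pc''.
print(pc_all(0, "Non-Coastal"), pc_all(2, "London"))  # True True
# Alice's original situation (set size 2, coastal) is not covered:
print(pc_all(2, "Coastal"))  # False
```

The uncovered case is exactly the failure state found by the attack tree analysis in the second run, which is what forced the refinement with pc″.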
5.3. Discussion
6. Conclusions
Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Friedman, P.; Rodriguez, F.; McComb, J. Why Students Do and Do Not Attend Classes. Coll. Teach. 2014, 49, 124–133.
- Moodley, R.; Chiclana, F.; Carter, J.; Caraffini, F. Using Data Mining in Educational Administration: A Case Study on Improving School Attendance. Appl. Sci. 2020, 10, 3116.
- Vissers, M. Predicting Students’ Class Attendance. Master’s Thesis, Tilburg University School of Humanities, Tilburg, The Netherlands, 2018. Available online: http://arno.uvt.nl/show.cgi?fid=147795 (accessed on 17 July 2023).
- Myers, A.C.; Liskov, B. Complete, Safe Information Flow with Decentralized Labels. In Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, USA, 6 May 1998; IEEE: Piscataway, NJ, USA, 1999.
- Kammüller, F. Explanation of Black Box AI for GDPR related Privacy using Isabelle. In Proceedings of the Data Privacy Management DPM ’22, Co-Located with ESORICS 22, Copenhagen, Denmark, 26–30 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; Volume 13619.
- Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2021, 23, 18.
- Balfanz, R.; Byrnes, V. The Importance of Being There: A Report on Absenteeism in the Nation’s Public Schools. 2012. Available online: https://www.attendanceworks.org/wp-content/uploads/2017/06/FINALChronicAbsenteeismReport_May16.pdf (accessed on 13 July 2023).
- Cook, P.; Dodge, K.; Gifford, E.; Schulting, A. A new program to prevent primary school absenteeism: Results of a pilot study in five schools. Child. Youth Serv. Rev. 2017, 82, 262–270.
- Havik, T.; Ingul, J.M. How to Understand School Refusal. Front. Educ. 2021, 6.
- Chen, J.; Lin, T.F. Class Attendance and Exam Performance: A Randomized Experiment. J. Econ. Educ. 2008, 39, 213–227.
- Nyamapfene, A. Does class attendance still matter? Eng. Educ. 2010, 5, 67–74.
- Westerman, J.W.; Perez-Batres, L.A.; Coffey, B.S.; Pouder, R.W. The relationship between undergraduate attendance and performance revisited: Alignment of student and instructor goals. Decis. Sci. J. Innov. Educ. 2011, 9, 49–67.
- The Department for Education. Can You Take Kids on Term-Time Holidays without Being Fined? 2023. Available online: https://www.moneysavingexpert.com/family/school-holiday-fines/ (accessed on 17 July 2023).
- GOV.UK. Department for Education. 2023. Available online: https://www.gov.uk/government/organisations/department-for-education (accessed on 17 July 2023).
- Saranti, A.; Taraghi, B.; Ebner, M.; Holzinger, A. Insights into Learning Competence through Probabilistic Graphical Models. In Machine Learning and Knowledge Extraction, Proceedings of the 2019 International Cross-Domain Conference, CD-MAKE 2019, Canterbury, UK, 26–29 August 2019; Lecture Notes in Computer Science; Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E., Eds.; Springer: Cham, Switzerland, 2019; pp. 250–271.
- Muggleton, S.H. Inductive logic programming. New Gener. Comput. 1991, 8, 295–318.
- Finzel, B.; Saranti, A.; Angerschmid, A.; Tafler, D.; Pfeifer, B.; Holzinger, A. Generating Explanations for Conceptual Validation of Graph Neural Networks. KI—Künstliche Intelligenz 2022, 36, 271–285.
- Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38.
- Schwalbe, G.; Finzel, B. XAI Method Properties: A (Meta-)Study. Available online: http://arxiv.org/abs/2105.07190 (accessed on 2 August 2023).
- van Lent, M.; Fisher, W.; Mancuso, M. An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior. In Proceedings of the IAAI’04—16th Conference on Innovative Applications of Artificial Intelligence, San Jose, CA, USA, 27–29 July 2004; pp. 900–907.
- Gunning, D.; Aha, D. DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Mag. 2019, 40, 44–58.
- Bruckert, S.; Finzel, B.; Schmid, U. The Next Generation of Medical Decision Support: A Roadmap toward Transparent Expert Companions. Front. Artif. Intell. 2020, 3, 507973.
- Páez, A. The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Minds Mach. 2019, 29, 441–459.
- Koleñák, F. Explainable Artificial Intelligence. Master’s Thesis, Department of Computer Science and Engineering, University of West Bohemia, Plzeň, Czech Republic, 2020.
- Chakraborti, T.; Sreedharan, S.; Grover, S.; Kambhampati, S. Plan Explanations as Model Reconciliation: An Empirical Study. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 258–266.
- Kulkarni, A.; Zha, Y.; Chakraborti, T.; Vadlamudi, S.G.; Zhang, Y.; Kambhampati, S. Explicable Planning as Minimizing Distance from Expected Behavior. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, Montreal, QC, Canada, 13–17 May 2019; International Foundation for Autonomous Agents and Multiagent Systems: Pullman, WA, USA, 2019; pp. 2075–2077. Available online: https://www.ifaamas.org (accessed on 2 August 2023).
- Pearl, J. Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution. arXiv 2018, arXiv:1801.04016.
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; Chatila, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
- Gilpin, L.H.; Bau, D.; Yuan, B.Z.; Bajwa, A.; Specter, M.A.; Kagal, L. Explaining explanations: An approach to evaluating interpretability of machine learning. arXiv 2018, arXiv:1806.00069.
- Belle, V.; Papantonis, I. Principles and practice of explainable machine learning. arXiv 2020, arXiv:2009.11698.
- Pieters, W. Explanation and Trust: What to Tell the User in Security and AI? Ethics Inf. Technol. 2011, 13, 53–64.
- Schneier, B. Secrets and Lies: Digital Security in a Networked World; John Wiley & Sons: Hoboken, NJ, USA, 2004.
- Kammüller, F. Attack Trees in Isabelle. In Proceedings of the 20th International Conference on Information and Communications Security, ICICS 2018, Lille, France, 29–31 October 2018; LNCS; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11149.
- Kammüller, F. Combining Secure System Design with Risk Assessment for IoT Healthcare Systems. In Proceedings of the Workshop on Security, Privacy, and Trust in the IoT, SPTIoT’19, Kyoto, Japan, 11–15 March 2019.
- Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Müller, H. Causability and explainability of artificial intelligence in medicine. WIREs Data Mining Knowl. Discov. 2019, 9, e1312.
- Kammüller, F. Dependability engineering in Isabelle. arXiv 2021, arXiv:2112.04374.
- CHIST-ERA. SUCCESS: SecUre aCCESSibility for the Internet of Things. 2016. Available online: http://www.chistera.eu/projects/success (accessed on 2 August 2023).
- Kammüller, F. Isabelle Insider and Infrastructure Framework with Explainability Applied to Attendance Monitoring. 2023. Available online: https://github.com/flokam/Dimpy (accessed on 2 August 2023).
- Kammüller, F.; Wenzel, M.; Paulson, L.C. Locales—A Sectioning Concept for Isabelle. In Theorem Proving in Higher Order Logics, Proceedings of the 12th International Conference, TPHOLs’99, Nice, France, 14–17 September 1999; Bertot, Y., Dowek, G., Hirchowitz, A., Paulin, C., Thery, L., Eds.; LNCS; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1690.
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. J. Law Technol. 2018, 31, 841.
Share and Cite
Kammüller, F.; Satija, D. Explanation of Student Attendance AI Prediction with the Isabelle Infrastructure Framework. Information 2023, 14, 453. https://doi.org/10.3390/info14080453