Computation
  • Article
  • Open Access

17 November 2022

Drawing Interpretation Using Neural Networks and Accessibility Implementation in Mobile Application

Computer Science Department, University Politehnica of Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.

Abstract

This paper continues our previous work on the PandaSays mobile application, whose main purpose is to detect the affective state of a child from his or her drawings using the MobileNet neural network. Children diagnosed with autism spectrum disorder have difficulties expressing their feelings and communicating with others. The purpose of the PandaSays mobile application is to help parents and tutors of children diagnosed with autism communicate better with them and understand their feelings. The main goal was to improve the accuracy of the model trained with the MobileNet neural network, which reached 84.583%. The model was trained using the Python programming language. The study further focuses on accessibility and its importance for children diagnosed with autism. Relevant screenshots of the mobile application are presented to show that the application follows accessibility guidelines and rules. Finally, the interaction with the Marty robot and the efficiency of the application's drawing prediction are presented.

1. Introduction

The present paper continues the work on the PandaSays [1] mobile application (version 1.11, Bucharest, Romania), which uses the MobileNet neural network to predict the affective state of a child from his or her drawings, in order to help parents and tutors of children diagnosed with autism improve communication with them and understand their affective state. Moreover, a Sign Language module was introduced to assist children with hearing impairments.
Autism spectrum disorder (ASD) refers to a broad range of conditions characterized by difficulties with speech, nonverbal communication and social skills, repetitive behaviors, and difficulties in expressing emotions [2]. Humanoid robots have been used in therapy for children diagnosed with autism, helping to improve their communication and social skills and serving as a teaching tool, as the following articles demonstrate.
Article [3] presents the usefulness of the humanoid robot NAO for teaching children diagnosed with autism to recognize emotions. The NAO robot executes body gestures expressing emotions such as sadness and happiness, and the child has to recognize the emotion performed by the robot. The study concluded that the NAO robot has the potential to improve communication skills for children diagnosed with autism.
Papers [4,5] describe how humanoid robots can improve verbal and non-verbal communication for children diagnosed with autism spectrum disorder and with hearing impairments. In study [4], four children diagnosed with autism were selected from the Society for the Welfare of Autistic Children (SWAC). The interaction between the NAO robot and the children was implemented using the robot's Choregraphe software. The session begins with simple questions such as: "How are you?", "What is your name?", "What is your mother's name?". The next session includes physical activities such as dancing and hand exercises. The study concluded that three of the four children responded well to the interaction with the NAO robot and improved their communication skills.
Article [6] describes how a game involving a humanoid robot helps children with hearing impairments. The research is based on the Robovie R3 robotic platform. There were two different setups. The first aimed to test the sign language effectiveness of the virtual robot; the tests were conducted in the classroom, with the children watching the robot's signs from two to three meters away. The second phase was performed with ten hearing-impaired children aged between ten and sixteen. The training set contained the following signs: "mother", "spring", "baby", "mountain", "big", "to come", "black", "to throw". By the end of the research, the children had started to play more with the robot and had learned sign language by recognizing the robot's gestures.
The present article is structured as follows:
  • The Related Work section, which presents relevant studies that informed the research.
  • Section 3: PandaSays application—the updated machine learning model, which gives an update on the dataset and on the improvement of the machine learning algorithms used.
  • Section 4: PandaSays mobile application and accessibility integration, which presents the level of accessibility of the application, an important subject for children diagnosed with autism.
  • Section 5: Case study using the PandaSays application and the Marty robot, which shows the usage of the PandaSays application and the Marty robot.
  • Conclusions.

3. PandaSays Application—The Updated Machine Learning Model

In articles [21,22], we presented the PandaSays mobile application, which incorporates a machine learning model to predict the affective state of a child from his or her drawings.
The best model chosen for the PandaSays application was the one trained with the MobileNet [23] neural network.
The paper "Machine Learning based Solution for Predicting the Affective State of Children with Autism" compares the MobileNet, VGG16 and feedforward neural networks. The purpose of that article was to find the best model for the PandaSays mobile application to predict the affective state of children diagnosed with autism spectrum disorder. The dataset consisted of only 597 drawings, fewer than the current dataset of 1453 drawings. The dataset was split into 75% for training and 25% for testing, the same as in the current paper. The batch size was 16 and the number of epochs was 30. The model trained with VGG16 was underfit and reached an accuracy of 35%. The accuracy obtained with MobileNet was 58%, lower than in the current paper (84.583%). The feedforward neural network performed poorly, with an accuracy of 28.3333%. The paper concluded that the best model was the one created with the MobileNet neural network. All models were initialized with "imagenet" [24] weights and transfer learning was applied.
In article [25], the accuracy obtained was 56.25%, which is why both the training algorithm and the way the MobileNet model is created were changed, as explained in the current study.
MobileNet was selected because of its smaller model size, as it contains fewer parameters (13 million) compared with VGG16 (Visual Geometry Group, Oxford), which has 138 million parameters, and ResNet-50 (deep residual network), which has over 23 million parameters. Moreover, MobileNet has lower computational complexity, as it performs fewer multiplications and additions, making it more suitable for incorporation into a mobile application.
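As a rough illustration of this size difference, the parameter counts of the stock Keras implementations can be listed as in the sketch below; the exact totals depend on include_top, the input shape and (for MobileNet) the width multiplier, so they need not match the figures quoted above.

    # Sketch: compare parameter counts of the stock Keras architectures.
    # Exact totals depend on include_top, input shape and width multiplier.
    from tensorflow.keras.applications import MobileNet, VGG16, ResNet50

    for build in (MobileNet, VGG16, ResNet50):
        model = build(weights=None, include_top=True)
        print(f"{build.__name__}: {model.count_params():,} parameters")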
The dataset contains 1453 drawings, and their distribution across classes is shown in Figure 1.
Figure 1. Number of trained images per class.
The dataset was split as follows: 25% test and 75% train. The input shape of the model is (224, 224, 3): 224 represents the width and height, and 3 represents the three color channels (red, green, blue). To avoid overfitting, data augmentation was applied to the drawings using the Keras ImageDataGenerator, with the following parameters (a minimal code sketch follows the list):
  • rotation_range = 280
  • zoom_range = 0.30
  • width_shift_range = 0.10
  • height_shift_range = 0.10
  • shear_range = 0.30
  • horizontal_flip = True
  • vertical_flip = True
  • fill_mode = “nearest”
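A minimal sketch of this augmentation setup, using the Keras ImageDataGenerator API; the directory name "drawings/train" is a hypothetical placeholder for wherever the class-labeled drawings are stored:

    # Sketch of the augmentation settings listed above.
    # "drawings/train" is a hypothetical directory name.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rotation_range=280,
        zoom_range=0.30,
        width_shift_range=0.10,
        height_shift_range=0.10,
        shear_range=0.30,
        horizontal_flip=True,
        vertical_flip=True,
        fill_mode="nearest",
    )

    # Iterator yielding augmented batches, resized to the 224 x 224 model input.
    train_flow = datagen.flow_from_directory(
        "drawings/train",
        target_size=(224, 224),
        batch_size=32,
        class_mode="categorical",
    )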
The results are displayed in Figure 2.
Figure 2. Image data augmentation.
The drawings are rotated randomly within a range of 280 degrees and zoomed within a range of 0.30. Each drawing is shifted horizontally by up to 10% of the total image width (width_shift_range) and vertically by up to 10% of the total image height (height_shift_range). The shear_range parameter set to 0.30 means that the drawing is distorted along an axis within a range of 0.30, so that the image is perceived from different angles. The drawings are randomly flipped horizontally (horizontal_flip) and vertically (vertical_flip), and the newly created pixels resulting from the rotation are filled in (fill_mode). The code for the model creation is presented below in Equation (1):
    image_input = Input(shape=(224, 224, 3))
    mobilenet_init_model = MobileNet(input_tensor=image_input, include_top=False, weights="imagenet")
The "include_top" parameter is set to False, which means that the fully connected layers are excluded so that a new output layer can be added and trained. The model received the "imagenet" weights. A new model was created and bootstrapped onto the pretrained layers. Two Dense layers were then added to the new model, followed by a Dropout layer with a rate of 0.35 for regularization, to prevent overfitting. The output layer has 5 neurons (representing the 5 affective states) with a "softmax" activation function. SGD (stochastic gradient descent) was used as the optimizer, with a learning rate of 0.001 and a momentum of 0.9 (a sketch of this head construction is given after Table 1). The model summaries of the MobileNet, VGG16 and ResNet-50 neural networks are presented in Table 1. The MobileNet model has 261,302,565 trainable parameters and 2,257,984 non-trainable parameters. VGG16 has 107,161,893 trainable parameters and ResNet-50 has 415,443,237 trainable parameters, the largest number of parameters.
Table 1. MobileNet, VGG16 and ResNet-50 models summary.
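A minimal sketch of the head construction described above, continuing from the base model of Equation (1); the Dense layer widths (1024, 512) and the pooling step are assumptions, since the paper only specifies two Dense layers, a Dropout of 0.35, a 5-neuron softmax output and the SGD optimizer, and the cross-entropy loss is consistent with Figure 3:

    # Sketch: attach a new classification head to the pretrained MobileNet base.
    # The Dense widths and the pooling layer are illustrative assumptions.
    from tensorflow.keras.applications import MobileNet
    from tensorflow.keras.layers import Input, Dense, Dropout, GlobalAveragePooling2D
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import SGD

    image_input = Input(shape=(224, 224, 3))
    mobilenet_init_model = MobileNet(input_tensor=image_input,
                                     include_top=False, weights="imagenet")

    x = GlobalAveragePooling2D()(mobilenet_init_model.output)
    x = Dense(1024, activation="relu")(x)
    x = Dense(512, activation="relu")(x)
    x = Dropout(0.35)(x)
    output = Dense(5, activation="softmax")(x)      # the 5 affective states

    model = Model(inputs=image_input, outputs=output)
    model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])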
To evaluate the model, K-fold cross-validation with 10 folds was used. The model was trained for 50 epochs with a batch size of 32. The metrics of interest were loss and accuracy. The loss evolution across the 10 folds is presented in Figure 3, and the method used to calculate it is presented in Equation (1). The accuracy is given in Figure 4. For each of the 10 folds, the train loss and train accuracy are represented in blue and the test loss and test accuracy in magenta. It can be noticed that after 20 epochs the test loss starts to increase, which means that the model would overfit if trained further. The figures were produced with the "Matplotlib" module from Python 3.
Figure 3. MobileNet Cross entropy loss.
Figure 4. MobileNet model accuracy.
The ResNet-50 model's mean accuracy was 28.463% and its standard deviation was 0.030, lower (and thus better) than MobileNet's 0.14. The accuracy, however, is very low compared with MobileNet's 84.583%.
ResNet-50's mean loss was 1.555, larger than MobileNet's 0.3756. The values for the 10 folds are presented in Figure 5a.
Figure 5. (a) ResNet-50 cross entropy loss for training set and test set; (b) VGG16 cross entropy loss for training set and test set.
VGG16 had a mean accuracy of 59.867% and a standard deviation of 0.085. Although its standard deviation is lower than MobileNet's, its accuracy is still below MobileNet's.
VGG16's mean loss was 1.006, lower than ResNet-50's but larger than MobileNet's. The loss values are presented in Figure 5b.
Due to these results and to its low complexity, the MobileNet model was selected for incorporation into the PandaSays mobile application.
The model was evaluated using the sklearn library, and the code was written in Python 3. The evaluation took into account the number of folds (10), the MobileNet model, the batch size of 32 and the number of epochs (50).
The mean and the standard deviation were computed with the "numpy" module from Python, using its predefined std (standard deviation) and mean methods. A minimal sketch of the evaluation loop is given below.
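The sketch assumes the drawings and their one-hot encoded labels are already loaded into NumPy arrays X and y, and that build_model() rebuilds the MobileNet model of Equation (1); all three names are hypothetical placeholders.

    # Sketch: 10-fold cross-validation with 50 epochs and a batch size of 32.
    # X, y and build_model() are hypothetical placeholders (see above).
    import numpy as np
    from sklearn.model_selection import KFold

    scores, losses = [], []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True).split(X):
        model = build_model()                          # fresh model per fold
        model.fit(X[train_idx], y[train_idx],
                  epochs=50, batch_size=32, verbose=0)
        loss, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        losses.append(loss)
        scores.append(acc)

    print("Accuracy: mean = %.3f, std = %.3f" % (np.mean(scores) * 100, np.std(scores)))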
The model performance is presented in Figure 6. The mean accuracy is 84.583% and the standard deviation is 0.14 over the 10 folds, which means that the majority of the values lie within 0.14 of the mean value of 84.583%. The mean accuracy was calculated by summing the accuracy results obtained from training with 10 folds and dividing the sum by 10. In our previous work [26], the accuracy obtained was 56.25%, which emphasizes the improvement achieved here.
Figure 6. MobileNet Model performance.
For all neural networks (MobileNet, ResNet-50 and VGG16), a method named summarize_performance(), presented in Equation (2), was used to calculate the performance of the corresponding model. Over the 10 folds, 10 loss values and 10 accuracy values were obtained. The average loss for MobileNet, calculated by summing all loss values and dividing by 10, was 0.3756, and the lowest value was 0.0506. ResNet-50's mean loss was 1.555 and its lowest value was 1.5296. VGG16's mean loss was 1.006 and its lowest value was 0.7483. As can be noticed, these loss values are larger than MobileNet's.
    from numpy import mean, std
    import matplotlib.pyplot as plt

    # scores holds the 10 fold accuracies; std(scores) calculates the standard deviation
    def summarize_performance(scores):
        print('Accuracy: mean = %.3f std = %.3f, n = %d'
              % (mean(scores) * 100, std(scores), len(scores)))
        plt.boxplot(scores)
        plt.show()

    summarize_performance(scores)
As can be noticed in Table 2, the highest precision is obtained for the "sad" class and the lowest for the "happy" class. Precision is calculated as the ratio between the true positives and the sum of the true positives and false positives, and recall as the ratio between the true positives and the sum of the true positives and false negatives [27]. The F1-score is given by Formula (3):
F1-score = 2 × (Precision × Recall)/(Precision + Recall)
Table 2. MobileNet Classification Report.
Precision and recall are close to 1, which indicates a good classifier. The highest f1-score was achieved by the "fear" class (0.88) and the lowest by the "happy" class (0.73).
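The per-class precision, recall and f1-score of Table 2 can be obtained with the sklearn classification report; a minimal sketch, assuming y_true and y_pred hold the true and predicted class indices of the test drawings (both names are hypothetical):

    # Sketch: per-class precision, recall and f1-score for the five affective
    # states, as in Table 2. y_true and y_pred are hypothetical arrays.
    from sklearn.metrics import classification_report

    print(classification_report(y_true, y_pred))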
The trained model is converted to a TensorFlow Lite file (".tflite") and deployed in the mobile application; a minimal sketch of this conversion step is shown after Figure 7. Figure 7 illustrates the prediction of a drawing, with the final output.
Figure 7. PandaSays drawing prediction screen.
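A minimal sketch of the conversion step, assuming the trained Keras model object is named model and using a hypothetical output file name:

    # Sketch: convert the trained Keras model to a TensorFlow Lite file that can
    # be bundled with the mobile application. The file name is a placeholder.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("drawing_model.tflite", "wb") as f:
        f.write(tflite_model)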
The output of the machine learning model is then sent to the Marty [28] robot or the Alpha 1P robot. The next sections present the robot screen and the interaction with the Marty robot.

4. PandaSays Mobile Application and Accessibility Integration

Another important feature introduced in the PandaSays application is accessibility. According to the World Health Organization, people with disabilities represent 15% of the world population [29]. In this context, six important areas of disability can be distinguished [30]:
  • Vestibular balance disorder and seizures
  • Auditory impairments: conductive hearing loss, which happens when the natural movement of sound does not reach the inner ear, and sensorineural hearing loss, which occurs after inner ear damage.
  • Visual impairments: color blindness, eyesight loss (Diabetic retinopathy [31], Glaucoma)
  • Motor impairments: amelia [32], paralysis, broken arm, broken leg
  • Cognitive impairments—dyslexia, autism, memory problems, distractions.
Countries have their own accessibility standards. For example, in the United States of America there is a law called "Section 508", which requires electronic devices to be accessible to people with disabilities. In Europe, there is the "European Accessibility Act", a directive aimed at creating a legislative environment for accessibility in line with Article 9 of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) [33].
An accessible application is crucial for eliminating barriers for people with disabilities and helping them. Screen readers, which make use of the accessibility service, can read the content of the application aloud: "TalkBack" is a screen reader for Android devices and "VoiceOver" is a screen reader for iOS devices.
According to the "Web Content Accessibility Guidelines (WCAG) 2.0" [34], navigation and layout should be implemented consistently across screens. The grammar in the application or website should be correct and simple for people diagnosed with autism, so that they do not get confused. Error messages should be provided, and refreshing the application or website should be avoided if possible. The Web Content Accessibility principles are centered around a human approach to web design and have the acronym POUR (Perceivable, Operable, Understandable, Robust). Perceivable means that the content should be presented to users in a way they can perceive; Operable means that navigation and the user interface should be operable; Understandable means that the information offered by the user interface must be easy to understand; and Robust means that the content should be interpretable by assistive technology [30].
In Figure 8, the "TalkBack" application is started and highlights the title "How to use PandaSays App". The title has the role of "heading" and the "Next" button has the role of "button".
Figure 8. PandaSays UserManual screen with TalkBack on.
The menu screen is shown in Figure 9 and is implemented as a circle menu. By clicking the menu button, all screen options are displayed. On the menu button, the accessibility label "Menu" can be observed. Each element of the menu has its own label, which is announced when it is focused. The elements announced by the accessibility screen reader are: "Draw" (the drawing module, where the child can draw), "Augmented Reality" (the augmented reality screen), "Text-to-Speech" (where the child can write and can use the Sign Language screen), "Choose Image to upload" (for selecting an image to upload), "Upload Image" (for uploading an image or drawing) and "Interpret your Drawing" (the machine learning module).
Figure 9. PandaSays menu with TalkBack application on.
Figure 10 presents the "Text-to-Speech" screen, where the text "Enter text" can be observed and, to its right, an editable text field with the hint "Hello!". The editable text has the "textbox" role.
Figure 10. PandaSays Text-To-Speech screen with TalkBack on.
In Figure 11, the "Sign language" screen is shown, with the letter "A" easily identified. Each letter has the role of a "button", with a padding of 20 dp.
Figure 11. PandaSays Sign language screen with TalkBack on.
The font used is "Arial", which is considered more appropriate for helping people with disabilities [34]. The font size is 16 sp (scalable pixels) for regular text, 20 sp for subtitles and 24 sp for titles; these dimensions comply with those recommended for people with disabilities [30].
Error messages are also important for making the application accessible, helping the user understand what is wrong and what to do next [34]. Figure 12 shows the error message displayed when the user clicks the "Connect to your robot" button without having entered an IP address.
Figure 12. Error null IP address.
It can be noticed that the black text is readable against the white background, which is another important requirement for making the app accessible.
The images have specific content descriptions that are read by the screen reader, helping the user better understand the application's functionality.
The PandaSays application uses Google Play Services for AR (ARCore), provided by Google LLC and governed by the Google Privacy Policy. Figure 13 presents the augmented reality module. The screen reader focuses on the carousel icon, whose content description, "carousel", is read by the accessibility service. The other icons have the content description "dinosaur", and they all have the accessibility role of "button".
Figure 13. Augmented reality module.

5. Case Study Using PandaSays Application and Marty Robot

Mihnea is an eight-year-old child who was diagnosed with autism at the age of three. In Figure 14, the child is starting to draw. The PandaSays application was tested on numerous drawings made by the child. It is worth mentioning that not every child diagnosed with autism spectrum disorder can draw, because of the motor difficulties specific to ASD.
Figure 14. Child drawing.
In Figure 15, the PandaSays application is used to interpret the child's drawing. The output for the first drawing is "insecure", with a confidence of 88%. In Figure 16, the output for another of the child's drawings is "happy", with a confidence of 98%, followed by the state "sad" (2%).
Figure 15. PandaSays state prediction when child was drawing.
Figure 16. PandaSays drawing interpretation.
The next screen after the drawing prediction is the Robot Connection screen, where the user can choose between the Alpha 1P and Marty robots, depending on which robot they possess. The only field required for the robot connection is the robot's IP (Internet Protocol) address. The Marty robot is presented in Figure 17 and Alpha 1P in Figure 18.
Figure 17. Marty robot.
Figure 18. Alpha 1P robot.
In Figure 19, the child interacts with the Marty robot. He responded very well to the robot's actions, played with the robot from the start and was not scared by it.
Figure 19. Child’s interaction with Marty robot.

6. Conclusions

Emotion recognition remains a difficult matter for children diagnosed with autism. The PandaSays application adds value to other studies focused on detecting emotions in children diagnosed with autism spectrum disorder, in order to help them, their parents and their tutors. Robots can play a significant role in helping children diagnosed with autism; as noticed in this study, the child interacted with the robot from the beginning, although some children diagnosed with autism are afraid of robots.
The accuracy of the machine learning model improved, reaching 84.583%, compared with our previous work [26], where the accuracy obtained was 56.25%. The dataset was also increased by 174 drawings.
For further research, the robot will be used to play music, in addition to performing the same actions as before when the output is received from the MobileNet model after the drawing interpretation. The PandaSays mobile application will also include a Music Player module. Studies [35,36,37] describe the use of robot-assisted music therapy for helping people with autism. Moreover, as a next step in our research, the database of drawings will continue to grow and the application will be tested in centers for children diagnosed with autism.

Author Contributions

Conceptualization, A.-L.P.; methodology, A.-L.P.; software, A.-L.P.; validation, N.P.; formal analysis, N.P.; investigation, A.-L.P.; resources, A.-L.P.; data curation, A.-L.P.; writing—original draft preparation, A.-L.P.; writing—review and editing, A.-L.P. and N.P.; visualization, A.-L.P.; supervision, N.P.; project administration, N.P.; funding acquisition, N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by University Politehnica of Bucharest, Pub Art Project.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the fact that the involved experiments did not have any ethical implications.

Data Availability Statement

The dataset can be found here: https://www.kaggle.com/datasets/popescuaura/newdataset30082022 (accessed on 16 November 2022).

Acknowledgments

The authors gratefully acknowledge the help of Georgiana Soricica, a psychologist specialized in drawing interpretation, for the professional evaluation of the children's drawings, and of Bunu Cristina, a psychologist, who trusted us to interact with her child, diagnosed with autism spectrum disorder, and to use the PandaSays mobile application.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Popescu, A.-L. PandaSays Mobile Application. Available online: https://play.google.com/store/apps/details?id=com.popesc.aura_loredana.pandasaysnew (accessed on 11 October 2022).
  2. What is Autism? Available online: https://www.autismspeaks.org/what-autism#:~:text=Autism%2C%20or%20autism%20spectrum%20disorder,in%20the%20United%20States%20today (accessed on 30 August 2022).
  3. Miskam, M.A.; Shamsuddin, S.; Samat, M.R.A.; Yussof, H.; Ainudin, H.A.; Omar, A.R. Humanoid robot NAO as a teaching tool of emotion recognition for children with autism using the Android app. In Proceedings of the 2014 International Symposium on Micro-NanoMechatronics and Human Science (MHS), Nagoya, Japan, 10–12 November 2014; pp. 1–5. [Google Scholar] [CrossRef]
  4. Farhan, S.A.; Khan, N.R.; Swaron, M.R.; Shukhon, R.N.S.; Islam, M.; Razzak, A. Improvement of Verbal and Non-Verbal Communication Skills of Children with Autism Spectrum Disorder using Human Robot Interaction. In Proceedings of the 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 10–13 May 2021; pp. 356–359. [Google Scholar] [CrossRef]
  5. Akalin, N.; Uluer, P.; Kose, H. Non-verbal communication with a social robot peer: Towards robot assisted interactive sign language tutoring. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 1122–1127. [Google Scholar] [CrossRef]
  6. El-Seoud, M.S.A.; Karkar, A.; Al Ja’Am, J.M.; Karam, O.H. A pictorial mobile-based communication application for non-verbal people with autism. In Proceedings of the 2014 International Conference on Interactive Collaborative Learning (ICL), Dubai, United Arab Emirates, 3–6 December 2014; pp. 529–534. [Google Scholar] [CrossRef]
  7. Documentation Regarding KASPAR Robot. Available online: https://www.herts.ac.uk/kaspar/impact-of-kaspar (accessed on 10 August 2022).
  8. Wainer, J.; Robins, B.; Amirabdollahian, F.; Dautenhahn, K. Using the Humanoid Robot KASPAR to Autonomously Play Triadic Games and Facilitate Collaborative Play Among Children with Autism. IEEE Trans. Auton. Ment. Dev. 2014, 6, 183–199. [Google Scholar] [CrossRef]
  9. Yussof, H.; Shamsuddin, S.; Hanapiah, F.A.; Ismail, L.I.; Miskam, M.A. IQ level assessment methodology in robotic intervention with children with autism. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–6. [Google Scholar] [CrossRef]
  10. Shamsuddin, S.; Yussof, H.; Ismail, L.; Hanapiah, F.A.; Mohamed, S.; Piah, H.A.; Zahari, N.I. Initial response of autistic children in human-robot interaction therapy with humanoid robot NAO. In Proceedings of the 2012 IEEE 8th International Colloquium on Signal Processing and its Applications, Malacca, Malaysia, 23–25 March 2012; pp. 188–193. [Google Scholar] [CrossRef]
  11. NAO Robot Specifications. Available online: https://www.softbankrobotics.com/emea/en/nao (accessed on 2 August 2022).
  12. Aniketh, M.; Majumdar, J. Humanoid Robotic Head Teaching a Child with Autism. In Proceedings of the 2018 3rd International Conference on Circuits, Control, Communication and Computing (I4C), Bangalore, India, 3–5 October 2018; pp. 1–7. [Google Scholar] [CrossRef]
  13. Robins, B.; Dautenhahn, K.; Dickerson, P. From Isolation to Communication: A Case Study Evaluation of Robot Assisted Play for Children with Autism with a Minimally Expressive Humanoid Robot. In Proceedings of the 2009 Second International Conferences on Advances in Computer-Human Interactions, Cancun, Mexico, 1–6 February 2009; pp. 205–211. [Google Scholar] [CrossRef]
  14. Villano, M.; Crowell, C.R.; Wier, K.; Tang, K.; Thomas, B.; Shea, N.; Schmitt, L.M.; Diehl, J.J. DOMER: A Wizard of Oz interface for using interactive robots to scaffold social skills for children with Autism Spectrum Disorders. In Proceedings of the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 8–11 March 2011; pp. 279–280. [Google Scholar] [CrossRef]
  15. Arent, K.; Brown, D.J.; Kruk-Lasocka, J.; Niemiec, T.L.; Pasieczna, A.H.; Standen, P.J.; Szczepanowski, R. The Use of Social Robots in the Diagnosis of Autism in Preschool Children. Appl. Sci. 2022, 12, 8399. [Google Scholar] [CrossRef]
  16. Feng, H.; Gutierrez, A.; Zhang, J.; Mahoor, M.H. Can NAO Robot Improve Eye-Gaze Attention of Children with High Functioning Autism? In Proceedings of the 2013 IEEE International Conference on Healthcare Informatics, Philadelphia, PA, USA, 9–11 September 2013; p. 484. [Google Scholar] [CrossRef]
  17. Goodrich, M.A.; Colton, M.A.; Brinton, B.; Fujiki, M. A case for low-dose robotics in autism therapy. In Proceedings of the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 8–11 March 2011; pp. 143–144. [Google Scholar] [CrossRef]
  18. She, T.; Kang, X.; Nishide, S.; Ren, F. Improving LEO Robot Conversational Ability via Deep Learning Algorithms for Children with Autism. In Proceedings of the 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), Nanjing, China, 23–25 November 2018; pp. 416–420. [Google Scholar] [CrossRef]
  19. Philippsen, A.; Tsuji, S.; Nagai, Y. Picture completion reveals developmental change in representational drawing ability: An analysis using a convolutional neural network. In Proceedings of the 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Valparaiso, Chile, 26–30 October 2020; pp. 1–8. [Google Scholar] [CrossRef]
  20. Literat, I. “A Pencil for your Thoughts”: Participatory Drawing as a Visual Research Method with Children and Youth. Int. J. Qual. Methods 2013, 12, 84–98. [Google Scholar] [CrossRef]
  21. Popescu, A.L.; Popescu, N. Machine Learning based Solution for Predicting the Affective State of Children with Autism. In Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 29–30 October 2020; pp. 1–4. [Google Scholar] [CrossRef]
  22. Popescu, A.-L.; Popescu, N. Neural networks-based solutions for predicting the affective state of children with autism. In Proceedings of the 2021 23rd International Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 26–28 May 2021; pp. 93–97. [Google Scholar]
  23. MobileNet, MobileNetV2, and MobileNetV3. Available online: https://keras.io/api/applications/mobilenet/ (accessed on 12 August 2022).
  24. ImageNet. Available online: http://www.image-net.org/ (accessed on 17 March 2021).
  25. Popescu, A.-L.; Popescu, N.; Dobre, C.; Apostol, E.-S.; Popescu, D. IoT and AI-Based Application for Automatic Interpretation of the Affective State of Children Diagnosed with Autism. Sensors 2022, 22, 2528. [Google Scholar] [CrossRef] [PubMed]
  26. Sklearn Metrics. Available online: https://scikitlearn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html (accessed on 5 September 2022).
  27. Marty Robot Documentation and Images. Available online: https://robotical.io/ (accessed on 14 July 2022).
  28. Disability Information. Available online: https://www.who.int/health-topics/disability#tab=tab_1 (accessed on 16 July 2022).
  29. Kalbag, L. Accessibility for Everyone; A Book Apart: New York, NY, USA, 2017; pp. 8–117. [Google Scholar]
  30. Diabetic Retinopathy. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/diabetic-retinopathy (accessed on 3 September 2022).
  31. Tetra-Amelia Syndrome. Available online: https://rarediseases.info.nih.gov/diseases/5148/tetra-amelia-syndrome (accessed on 5 September 2022).
  32. WCAG 2.0 A/AA Principles and Checkpoints. Available online: https://www.boia.org/wcag-2.0-a/aa-principles-and-checkpoints (accessed on 20 September 2022).
  33. What Are Accessible Fonts? Available online: https://www.accessibility.com/blog/what-are-accessible-fonts (accessed on 12 August 2022).
  34. Provide Helpful Error Messages. Available online: https://accessibility.huit.harvard.edu/provide-helpful-error-messages (accessed on 1 September 2022).
  35. Divya, D.; Nithyashree, V.C.; Chayadevi, M.L. Music Therapy for Stress Control and Autism. In Proceedings of the 2019 1st International Conference on Advances in Information Technology (ICAIT), Chikmagalur, India, 25–27 July 2019; pp. 516–521. [Google Scholar] [CrossRef]
  36. Beer, J.M.; Boren, M.; Liles, K.R. Robot assisted music therapy a case study with children diagnosed with autism. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 419–420. [Google Scholar] [CrossRef]
  37. Suzuki, R.; Lee, J. Robot-play therapy for improving prosocial behaviours in children with Autism Spectrum Disorders. In Proceedings of the 2016 International Symposium on Micro-NanoMechatronics and Human Science (MHS), Nagoya, Japan, 28–30 November 2016; pp. 1–5. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
