Machine-Environment Interaction, Volume II

A special issue of Journal of Sensor and Actuator Networks (ISSN 2224-2708). This special issue belongs to the section "Big Data, Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 January 2024) | Viewed by 7013

Special Issue Editors


Guest Editor
Department of Computer Science, ECE Paris School of Engineering (ECE Paris Ecole d’Ingénieurs), Paris, France
Interests: knowledge representation; machine learning; computational intelligence; artificial intelligence; formal methods; multimodal computing

Guest Editor
Département des Réseaux et des Télécommunications (RT), University Paris-Saclay, UVSQ - 12 Avenue de l’Europe, 78114 Vélizy, France
Interests: software architecture; dynamic architecture; knowledge representation and reasoning; machine–environment interaction; ambient intelligence; robotic interaction; data fusion and fission; embedded systems

Special Issue Information

Dear Colleagues,

Machine–environment interaction is a vast and complex domain located at the crossroads of several artificial intelligence disciplines: knowledge representation and reasoning, ambient intelligence, sensor networks, and more.

An intelligent machine (robot, vehicle, drone, etc.) can evolve in a so-called “smart” environment (smart home, smart city, etc.). Such a connected environment has a network of sensors that an intelligent machine can use, in addition to its own sensors, to better understand the environment and interact with the objects present there. Humans are part of this environment; hence, human–machine interaction is a special case of machine–environment interaction.

The interaction loop between machine and environment is made up of the following processes: perception, comprehension, decision, and action. Thus, an intelligent machine has the capacity to perceive the environment, understand its current state, make decisions, and act on the environment to execute those decisions.
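The four-stage loop above can be sketched in a few lines. This is a minimal, purely illustrative sketch; the stage functions, sensor stubs, and the light-switching rule are invented for the example and do not come from any system described in this issue.

```python
# Hypothetical sketch of the perception-comprehension-decision-action loop.
# All names and the threshold values are illustrative placeholders.

def perceive(sensors):
    """Collect raw readings from the machine's own and ambient sensors."""
    return {name: read() for name, read in sensors.items()}

def comprehend(readings):
    """Derive a symbolic state of the environment from raw readings."""
    return {"occupied": readings["motion"] > 0.5,
            "dark": readings["light"] < 0.3}

def decide(state):
    """Choose an action given the current symbolic state."""
    if state["occupied"] and state["dark"]:
        return "switch_light_on"
    return "idle"

def act(action, actuators):
    """Execute the decision on the environment."""
    actuators.get(action, lambda: None)()

# Stubbed sensors and actuators standing in for a real smart-home deployment.
sensors = {"motion": lambda: 0.9, "light": lambda: 0.1}
log = []
actuators = {"switch_light_on": lambda: log.append("light on")}

state = comprehend(perceive(sensors))
act(decide(state), actuators)
print(log)  # ['light on']
```

In a real system each stage would be a substantial subsystem (sensor fusion, reasoning, planning, actuation), but the control flow keeps this shape.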

Research and development in interactive systems entails many challenges and opportunities, not only in hardware and software but also in the various sensors that serve as these systems' primary means of perceiving the environment.

We would like to draw your attention to the JSAN Special Issue on “Machine–Environment Interaction”. Extended versions of selected papers from the 3rd EAI International Conference on Computational Intelligence and Communications (CICom 2022), held in Australia on 28–29 December 2022 (https://cicom-conference.eai-conferences.org/2022/), will be considered for this Special Issue. In addition, we invite researchers beyond CICom 2022 to submit papers on topics within the field of machine–environment interaction and its related areas, which include but are not limited to the following:

  • Representation of the environment of an interactive system: models, formalisms;
  • The perception of the environment of interactive systems;
  • Data fusion and fission in interactive systems;
  • Techniques and methods for optimization and learning to facilitate interaction;
  • Reasoning in decision-making components of interactive systems;
  • Systems for the evaluation and analysis of interactive systems;
  • Networks of sensors for interactive systems.

Dr. Manolo Dulva Hina
Prof. Dr. Amar Ramdane-Cherif
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Sensor and Actuator Networks is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

21 pages, 2546 KiB  
Article
Assessing the Acceptance of a Mid-Air Gesture Syntax for Smart Space Interaction: An Empirical Study
by Ana M. Bernardos, Xian Wang, Luca Bergesio, Juan A. Besada and José R. Casar
J. Sens. Actuator Netw. 2024, 13(2), 25; https://doi.org/10.3390/jsan13020025 - 09 Apr 2024
Viewed by 241
Abstract
Mid-air gesture interfaces have become popular for specific scenarios, such as interactions with augmented reality via head-mounted displays, specific controls over smartphones, or gaming platforms. This article explores the use of a location-aware mid-air gesture-based command triplet syntax to interact with a smart space. The syntax, inspired by human language, is built as a vocative case with an imperative structure. In a sentence like “Light, please switch on!”, the object being activated is invoked by making a gesture that mimics its initial letter/acronym (vocative, coincident with the sentence’s elliptical subject). A geometrical or directional gesture then identifies the action (imperative verb) and may include an object feature or a second object with which to network (complement), which is also represented by its initial letter or acronym. Technically, an interpreter relying on a trainable multidevice gesture recognition layer and a specific compiler makes the pair/triplet syntax decoding possible. The recognition layer works on acceleration and position input signals from graspable (smartphone) and free-hand devices (smartwatch and external depth cameras). In a specific deployment at a Living Lab facility, the syntax has been instantiated via a lexicon derived from English (with respect to the initial letters and acronyms). A within-subject analysis with twelve users has enabled the analysis of the syntax acceptance (in terms of usability, gesture agreement for actions over objects, and social acceptance) and technology preference of the gesture syntax within its three device implementations (graspable, wearable, and device-free ones). Participants express consensus regarding the simplicity of learning the syntax and its potential effectiveness in managing smart resources.
Socially, participants favoured the Watch for outdoor activities and the Phone for home and work settings, underscoring the importance of social context in technology design. The Phone emerged as the preferred option for gesture recognition due to its efficiency and familiarity. The system, which can be adapted to different sensing technologies, addresses scalability concerns (as it can be easily extended to new objects and actions) and allows for personalised interaction. Full article
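The vocative-imperative(-complement) decoding step the abstract describes can be illustrated with a toy lexicon. The letters, gesture labels, and object names below are hypothetical stand-ins, not the paper's actual vocabulary or recognizer.

```python
# Illustrative decoder for a vocative-imperative command pair/triplet.
# LEXICON and ACTIONS are invented for this sketch.

LEXICON = {"L": "light", "T": "thermostat"}   # vocative: objects by initial letter
ACTIONS = {"up_swipe": "switch_on",           # imperative: action gestures
           "down_swipe": "switch_off"}

def decode(gestures):
    """Map a recognized gesture pair/triplet to (object, action, complement)."""
    obj = LEXICON.get(gestures[0])
    action = ACTIONS.get(gestures[1])
    complement = LEXICON.get(gestures[2]) if len(gestures) > 2 else None
    if obj is None or action is None:
        raise ValueError("unrecognized gesture sequence")
    return obj, action, complement

print(decode(["L", "up_swipe"]))  # ('light', 'switch_on', None)
```

In the paper's system, the inputs to such a decoder come from a trainable multidevice recognition layer rather than symbolic labels; the sketch only shows the syntax-decoding stage.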
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)

18 pages, 618 KiB  
Article
Electric Vehicles Energy Management for Vehicle-to-Grid 6G-Based Smart Grid Networks
by Rola Naja, Aakash Soni and Circe Carletti
J. Sens. Actuator Netw. 2023, 12(6), 79; https://doi.org/10.3390/jsan12060079 - 27 Nov 2023
Viewed by 1542
Abstract
This research proposes a unique platform for energy management optimization in smart grids based on 6G technologies. The proposed platform, applied to a virtual power plant, includes algorithms that take into account different load profiles, fairly schedule energy according to load priorities, and compensate for the intermittent nature of renewable energy sources. Moreover, we develop a bidirectional energy transition mechanism towards a fleet of intelligent vehicles by adopting vehicle-to-grid technology and peak clipping. Performance analysis shows that the proposed energy management provides fairness to electric vehicles, satisfies urgent loads, and optimizes smart grid energy. Full article
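Priority-aware scheduling with peak clipping, as described in the abstract, can be sketched with a simple greedy rule. This is an illustrative stand-in, not the paper's algorithm; the load names, demands, and priority values are fabricated.

```python
# Hedged sketch: serve loads in priority order until capacity runs out;
# anything beyond capacity is deferred (the "peak clipping" step).

def schedule(loads, capacity_kw):
    """loads: (name, demand_kw, priority) tuples; lower priority = more urgent.
    Returns (served, deferred) name lists under the capacity cap."""
    served, deferred, remaining = [], [], capacity_kw
    for name, demand, priority in sorted(loads, key=lambda l: l[2]):
        if demand <= remaining:
            served.append(name)
            remaining -= demand
        else:
            deferred.append(name)  # clipped to keep the peak under capacity
    return served, deferred

loads = [("heater", 3.0, 2), ("ev_charger", 7.0, 3), ("medical", 1.0, 1)]
print(schedule(loads, capacity_kw=5.0))  # (['medical', 'heater'], ['ev_charger'])
```

A real virtual power plant would also fold in renewable forecasts and vehicle-to-grid discharge, which this greedy sketch omits.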
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)

19 pages, 4711 KiB  
Article
Enhancing Mental Fatigue Detection through Physiological Signals and Machine Learning Using Contextual Insights and Efficient Modelling
by Carole-Anne Cos, Alexandre Lambert, Aakash Soni, Haifa Jeridi, Coralie Thieulin and Amine Jaouadi
J. Sens. Actuator Netw. 2023, 12(6), 77; https://doi.org/10.3390/jsan12060077 - 03 Nov 2023
Cited by 1 | Viewed by 1373
Abstract
This research presents a machine learning modeling process for detecting mental fatigue using three physiological signals: electrodermal activity, electrocardiogram, and respiration. It follows the conventional machine learning modeling pipeline, while emphasizing the significant contribution of the feature selection process, resulting in not only a high-performance model but also a relevant one. The employed feature selection process considers both statistical and contextual aspects of feature relevance. Statistical relevance was assessed through variance and correlation analyses between independent features and the dependent variable (fatigue state). Contextual analysis was based on insights derived from the experimental design and feature characteristics. Additionally, feature sequencing and set conversion techniques were employed to incorporate the temporal aspects of physiological signals into the training of machine learning models based on random forest, decision tree, support vector machine, k-nearest neighbors, and gradient boosting. An evaluation was conducted using a dataset acquired from a wearable electronic system (in third-party research) with physiological data from three subjects undergoing a series of tests and fatigue stages. A total of 18 tests were performed by the 3 subjects in 3 mental fatigue states. Fatigue assessment was based on subjective measures and reaction time tests, and fatigue induction was performed through mental arithmetic operations. The results showed the highest performance when using random forest, achieving an average accuracy and F1-score of 96% in classifying three levels of mental fatigue. Full article
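The statistical half of the described feature selection, a variance filter followed by a label-correlation filter, can be sketched as below. The thresholds and the synthetic data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of variance + correlation feature filtering on synthetic data.
import numpy as np

def select_features(X, y, var_min=1e-3, corr_min=0.3):
    """Return column indices that pass both filters."""
    keep = []
    for j in range(X.shape[1]):
        col = X[:, j]
        if col.var() < var_min:
            continue                      # near-constant -> uninformative
        r = np.corrcoef(col, y)[0, 1]     # Pearson correlation with the label
        if abs(r) >= corr_min:
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=200).astype(float)    # 3 fatigue levels
X = np.column_stack([
    y + rng.normal(0, 0.1, 200),   # informative feature (tracks the label)
    rng.normal(0, 1, 200),         # pure-noise feature
    np.full(200, 5.0),             # constant feature (zero variance)
])
print(select_features(X, y))
```

The paper complements this statistical pass with a contextual analysis of the experimental design, which no purely numerical filter captures.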
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)

29 pages, 6070 KiB  
Article
A Multi-Agent Intrusion Detection System Optimized by a Deep Reinforcement Learning Approach with a Dataset Enlarged Using a Generative Model to Reduce the Bias Effect
by Matthieu Mouyart, Guilherme Medeiros Machado and Jae-Yun Jun
J. Sens. Actuator Netw. 2023, 12(5), 68; https://doi.org/10.3390/jsan12050068 - 18 Sep 2023
Cited by 1 | Viewed by 1516
Abstract
Intrusion detection systems can perform defectively when they are tuned with datasets that are unbalanced in terms of attack data and non-attack data. Most datasets contain more non-attack data than attack data, and this circumstance can introduce biases in intrusion detection systems, making them vulnerable to cyberattacks. As an approach to remedy this issue, we considered the Conditional Tabular Generative Adversarial Network (CTGAN), with its hyperparameters optimized using the tree-structured Parzen estimator (TPE), to balance an insider threat tabular dataset called CMU-CERT, which is formed of discrete-value and continuous-value columns. We showed through this method that the mean absolute errors between the probability mass functions (PMFs) of the actual data and the PMFs of the data generated using the CTGAN can be relatively small. Then, from the optimized CTGAN, we generated synthetic insider threat data and combined them with the actual data to balance the original dataset. We used the resulting dataset for an intrusion detection system implemented with the Adversarial Environment Reinforcement Learning (AE-RL) algorithm in a multi-agent framework formed by an attacker and a defender. We showed that the performance of detecting intrusions using the framework of the CTGAN and the AE-RL is significantly improved with respect to the case where the dataset is not balanced, giving an F1-score of 0.7617. Full article
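The paper balances the data with a TPE-tuned CTGAN; as a far simpler stand-in, the sketch below shows the balancing step itself via random oversampling of the minority (attack) class. The field name and toy rows are invented for the example.

```python
# Illustrative class balancing by random oversampling (NOT the paper's CTGAN).
import random

def balance(rows, label_key="is_attack", seed=0):
    """Duplicate random minority-class rows until both classes are equal."""
    rng = random.Random(seed)
    pos = [r for r in rows if r[label_key]]
    neg = [r for r in rows if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return rows + extra

data = [{"is_attack": True}] * 2 + [{"is_attack": False}] * 8
balanced = balance(data)
print(sum(r["is_attack"] for r in balanced), len(balanced))  # 8 16
```

Unlike this duplication, a generative model such as CTGAN synthesizes new, statistically similar rows, which is why the paper compares PMFs of real and generated data.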
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)

20 pages, 585 KiB  
Article
Safe Data-Driven Lane Change Decision Using Machine Learning in Vehicular Networks
by Rola Naja
J. Sens. Actuator Netw. 2023, 12(4), 59; https://doi.org/10.3390/jsan12040059 - 01 Aug 2023
Viewed by 1604
Abstract
This research proposes a unique lane change assistance platform for generating data-driven lane change (LC) decisions in vehicular networks. The goal is to reduce the frequency of emergency braking, the rate of vehicle collisions, and the amount of time spent in risky lanes. In order to analyze and mine the massive amounts of data, our platform uses effective Machine Learning (ML) techniques to forecast collisions and advise the driver to safely change lanes. From the unprocessed big data generated by the car sensors, kinematic information is retrieved, cleaned, and evaluated. Machine learning algorithms analyze this kinematic data and provide an action: either stay in lane or change lanes to the left or right. The model is trained on a set of training data using the ML techniques K-Nearest Neighbor, Artificial Neural Network, and Deep Reinforcement Learning, and focuses on predicting driver actions. The proposed solution is validated via extensive simulations using a microscopic car-following mobility model, coupled with accurate mathematical modelling. Performance analysis shows that KNN yields the best performance parameters. Finally, we draw conclusions for road safety stakeholders to adopt the safest technique for lane change maneuvers. Full article
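A KNN decision over kinematic features, the best-performing technique per the abstract, can be illustrated in miniature. The two features (gap to the vehicle ahead, relative speed) and the training points below are fabricated for the example; the paper trains on large simulated sensor traces.

```python
# Toy KNN lane-change decision on (gap_ahead_m, relative_speed_mps) features.
import math

def knn_decide(train, query, k=3):
    """Majority vote among the k nearest training samples."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = {}
    for _, action in nearest:
        votes[action] = votes.get(action, 0) + 1
    return max(votes, key=votes.get)

train = [
    ((5.0, -3.0), "change_left"),   # small gap, closing fast
    ((6.0, -2.5), "change_left"),
    ((30.0, 0.0), "keep_lane"),     # large gap, matched speed
    ((28.0, 1.0), "keep_lane"),
]
print(knn_decide(train, (7.0, -2.0)))  # change_left
```

A deployed assistant would use many more kinematic features and vastly more training data, but the nearest-neighbor vote is the same mechanism.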
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)
