1. Introduction
For many years, and even decades, both scientific literature and popular media have addressed the challenges posed by the forthcoming population shift. A central question arising from this development is how modern societies can ensure adequate well-being and support for an aging population.
The population shift constitutes a multidimensional challenge. One important dimension is the continuously increasing shortage of qualified personnel, driven by the changing ratio of the working population to retirees. Another relevant aspect is that, due to factors such as urbanization, younger adults often live at a considerable distance from their elderly parents. Combined with increasing life expectancy, these developments lead to a growing demand for care services [1], which is likely to increase drastically [2], for example, due to age-related increased risks of diseases [3] and physical (falls [4]) or psychological (dementia, mild cognitive impairment) problems [5,6,7]. According to [8], the WHO estimates the current prevalence of various types of disabilities at 15%, with a continuous increase in morbidities and functional limitations expected in the coming years. Due to the increasing shortage of qualified support personnel and the distance to potential informal caregivers, the respective challenges are growing rapidly. As a consequence, healthcare systems face rising pressure because they are insufficiently prepared to cope with increasing demand and diversification [2].
The prevailing strategy of relocating elderly people to nursing homes and other care facilities is the least preferred option for the majority of the target group. The related literature indicates that older adults strongly prefer to remain in their familiar living environment rather than move to a nursing facility [1,2,4,6,7,9]. Avoiding institutional care for as long as possible is therefore of core interest to the elderly, because institutionalization [9] can be a traumatic experience with several negative aspects, for example, social disconnection [6].
Against this background, alternative ways of ensuring support have to be found, combined with new strategies and policies [10]. In addition to psychological benefits, supporting older adults in remaining in their homes also offers economic advantages [1]; for example, ref. [11] reports that the costs of home-based care are about 40% lower than those of institutional support.
The concept of Active and Assisted Living (AAL), originally referred to as Ambient Assisted Living, was introduced in the early 2000s [12] and continued in the AAL Initiative of the European Commission [13] (ref. [9] refers to similar activities even earlier, around 1993). The basic motivation for AAL was to consider technology a promising means of addressing demographic change, at least in part. In the definition of [3], AAL is understood as “state-of-the-art ICT-based solutions that build on the principles of ambient intelligence to create intelligent environments that provide all-encompassing, non-invasive, and proactive support to older adults and have the ultimate goal to maintain their independence, enhance their overall quality of life, and support their caregivers”. The general assumption in the related literature is that appropriately equipped living environments can support aging in place, reduce the need for human support, and, to some extent, delay institutional care. Nevertheless, despite more than two decades of research and development, the basic technology has not yet become part of typical living environments. Existing solutions often fail to meet the practical needs of the target group [7] at the physical, social, and psychological levels, thereby failing to promote, for example, independence [10] and dignity.
One reason for this shortcoming is that standardized or one-size-fits-all smart solutions, which have proven successful in industrial contexts (and are also applicable in governmental/municipal housing and care facilities), are not suitable for private living conditions because, as [13] describes it, the understanding of living is “an individual’s perceptions of a position in life in the context of the culture and value system in which they live and in relation to their goals, expectations, (individual) standards, and concern”. Consequently, systems addressing private households must be adaptable, flexible, and scalable.
Although the technological foundations for such systems are meanwhile widely available in the form of smart home technology and the Internet of Things (IoT), a considerable gap remains between technical availability and actual adoption among private homeowners in general and the elderly in particular. One of the reasons is that the technologies are difficult to integrate into the diversity of built environments (e.g., family houses or flats in cities, suburbs, and rural areas) and adapt to the different lifestyle aspects pointed out by [13], e.g., differences in income, level of education, and other factors. Although the related literature highlights the elderly’s wish to remain in their familiar living environments for as long as possible, these environments are typically not prepared to accommodate the ICT that would support this wish.
This paper presents a longitudinal case study constituting the most recent step in a series of AAL-related projects addressing the described problem area and its dimensions. The study is based on accompanying one household for several years, with the aim of initially implementing, improving, extending, and evaluating exemplary AAL functionality tailored to the household’s characteristics and inhabitants. The work focuses on a group that, to the best of our knowledge, is not appropriately addressed in the related research but could specifically benefit from the capabilities of state-of-the-art technology.
This group, as [13] describes it, consists of “adults with no critical pathological condition”. More precisely, our approach focuses on elderly persons who are no longer fully independent, are typically supported infrequently by informal caregivers (family and friends), but do not need professional or close-knit support. This share of the population, however, does not represent a homogeneous group, neither in terms of age nor in terms of health status and support needs. The group represents different stages of aging [10], where many individuals are still healthy and independent, while others become (highly) dependent on care [2]. Potential support for this target group would need to be as flexible and adaptive as possible, as the following considerations emphasize: with age, the likelihood of health-related problems increases, and when problems first arise, the environment is likely to lack the necessary preparation. One example is the (first) occurrence of falls, which typically happen unexpectedly. The probability of such events does not, in general, justify contracting professional support organizations. However, given the state-of-the-art functional capabilities of smart home systems, which are likely to become standard equipment in built environments in the future, combined with stronger involvement and empowerment of the elderly themselves and their informal caregivers, solutions are likely in sight.
The main contributions of this work are threefold. First, we demonstrate both the potential and the practical challenges of a long-term (multi-year) deployment of Active and Assisted Living (AAL) technology in a real private home, thereby moving beyond short-term pilot studies. In comparison to field studies with shorter durations (e.g., ref. [14], 6–9 months), the presented study spans more than 10 years and thus accounts for technical changes and advancements over time, which constitute an additional real-world aspect of long-term field deployments. Another difference to other field-based approaches (e.g., refs. [14,15]), which is likewise influenced by technical progress (e.g., the spread and increasing usability of tablets and smartphones), is that the deployed functionality does not provide customized explicit and active user interaction. Instead, it focuses on unobtrusive, background-only sensing, enabling the observation of behavioral routines and long-term drifts, in combination with the observation and analysis of technological/infrastructural aspects that are typically not addressed in shorter-term evaluations.
Second, the proposed solution is based on a fully on-site, local architecture that operates without cloud services or other external computation resources, thereby providing a high level of security and privacy—a key aspect for sensitive application domains such as AAL. This architectural choice supports a high degree of external validity regarding the potential and barriers of AAL solutions, including availability, stability and robustness, maintainability, security, and privacy.
Third, the paper documents a deployment characterized by minimal invasiveness, the use of low-cost, off-the-shelf hardware, and open-source software, thereby providing a feasible and affordable basis for larger-scale AAL deployments.
The remainder of this paper is structured as follows: after providing an overview of related work, we describe the recent steps of our aging-in-place approach, the developed variant of basic AAL functionality, and the achieved results. The paper concludes with a reflection on the pros and cons of the approach followed, its respective possibilities and limitations, and an outlook on future potentials (up to state-of-the-art machine learning methods, such as LSTMs).
3. Method
The main conceptual foundations derived from the related work can be summarized as follows. Older adults generally wish to remain in their familiar living environments for as long as possible. At the same time, the increasing scarcity of human support resources suggests that technology may assume selected support tasks, provided certain requirements are met. A key prerequisite is that the respective technology can be integrated into existing living environments, requires only minimal technical prerequisites, operates as unobtrusively as possible, and adapts to users’ diverse needs regarding acceptance, security, usability, and overall lifestyle. In comparison to approaches described in the related literature (cf. e.g., ref. [14]), and based on experiences gained in preceding projects (see, e.g., ref. [15]), the focus of the approach presented in this paper lies on the recognition of activity and of deviations from established routines. Usability aspects discussed in the literature become particularly relevant when systems provide explicit or customized interfaces to the users. In contrast to earlier projects conducted by the authors and to approaches such as those described by [14], the present study deliberately avoids requiring participants to actively interact with the system. In the preceding project, the software platform was operated on a high-performance computer. To motivate installation and provide added value, we decided on an embedded PC that met the performance requirements and offered an integrated screen and the ability to use basic internet features (e.g., accessing weather and news services, writing basic emails), rather than a customized interface. However, experiences from this project indicated that most participants (obviously fulfilling the characteristic of “technology openness” referred to above) already employed standard devices such as desktop computers, laptops, smartphones, or tablets for communication and information purposes. Some participants even expressed feelings of stigmatization (“Am I too dumb for standard devices?”) when asked about their interest in devices designed specifically for older people. Based on these observations, the present approach deliberately refrains from providing a dedicated display or user interface. Instead, the focus is on unobtrusive, automatic AAL functionality that operates entirely in the background. As the employed platform is open and flexible, the integration of additional features or subsystems enabling explicit interaction remains possible for future extensions.
A central challenge arises from the combination of the limited penetration of smart home technology and the restrictive framing conditions for adopting such systems. Starting with a full-fledged integration would therefore not be feasible. Consequently, it was necessary to identify a basic set of initial functionality and carefully assess its applicability using a SWOT-oriented perspective, while simultaneously considering potential extensions that might become relevant in later stages.
When our related research activities began around 2010, the technical possibilities for approaches comparable to the one presented in this paper were limited. At that time, no readily available off-the-shelf solutions existed. Therefore, a custom platform was developed to meet the project’s hardware and software requirements. For example, ref. [29] developed an improvised version of a smart home gateway. On the software level, a middleware platform based on OSGi had to be custom-programmed (see, e.g., refs. [30,31]), enabling the integration of smart home components from different manufacturers and supporting investigations of their potentials and limitations in various application domains, including AAL [15].
In the meantime, platforms offering the features required for basic AAL functionality are now widely available. However, their installation, configuration, and maintenance still require a reasonable level of technical knowledge and expertise. The current status of the platform used in this study is described in the following section.
3.1. Installation and Evaluation Site
The case study presented in this paper is based on an on-site installation in a household inhabited by an elderly woman who had already participated in the project described in [15] and expressed the willingness to continue cooperating, facilitated by a personal relationship with the research team. Due to this personal connection, the study can be considered a form of participant observation, a method with a long tradition in psychology (cf. e.g., ref. [32]) and with demonstrated relevance also in AAL and user experience research [33].
3.2. Participant Characteristics and Ethical Aspects
The participant is an elderly woman aged 87 (reference year 2025) who lives alone in her own family home located in a rural village with approximately 100 inhabitants. The house is surrounded by a small garden. The participant generally performs her Activities of Daily Living (ADLs) independently. Her three children, aged 58 to 67 years, act as informal caregivers and live at distances of 10 to 30 kilometres. Due to the spatial distance between the elderly and the descendant generation (which is typical for the region), the potentials and limitations of a technology-based support system were of particular interest in the context of this study. The participant shows a generally positive attitude toward technology. She regularly uses a tablet computer to communicate with family members, friends, and public authorities via email, and to browse the World Wide Web in order to stay informed about the latest news. About five years prior to the reference period, she cancelled her printed newspaper subscription due to recurring delivery problems. Since then, she primarily obtains information online with the tablet or via teletext services, which she accesses using her smart TV.
The participant had already participated in the preceding project and had provided informed consent at that time. The consent documentation included a detailed description of the project idea, the data collected and processed (in anonymized form), and the expected involvement of participants, including their willingness to participate in meetings and interviews related to the project. As informal caregivers were an essential group of stakeholders in the preceding project, they were also asked to review and co-sign the informed consent together with the main participant. All participants were provided with comprehensive, state-of-the-art opt-out options (available at any time, without justification, with no disadvantages). Some participants used these options and terminated their participation prematurely. When the original project officially ended, participants were asked to indicate their preferences with regard to two aspects: (1) whether they wished to retain the equipment installed during the project and assume ownership as well as responsibility for its operation and maintenance, and (2) whether they were willing to participate in follow-up projects. A small number of participants expressed interest in both options. Those who agreed to continue their involvement were provided with an updated informed consent form that outlined the technical changes, including modifications in hardware and a reduction in functionality, the latter limited to background activity observation. Due to the technical challenges discussed in the paper, the focus of this follow-up study was subsequently shifted toward a case-study design, with the option to increase its scale again in later stages. The original project was evaluated and approved by the ethics commission of the federal province of Carinthia. As the principal scientific approach did not change, no updated approval was requested. 
During the project’s runtime, specifically in 2017, the University of Klagenfurt’s ethics board was established. During the conceptualization of the present manuscript, the board approved the ethical validity of the approach on the basis of the original approval issued by the Carinthian ethics commission.
3.3. Hardware Basis
Continuous advances in hardware development have led to the widespread availability of miniature microcomputers capable of providing sufficient computational power for a range of smart home functionalities. Owing to strong community support and positive prior experience, the Raspberry Pi architecture was selected as the hardware basis of the presented work. Over the past years, several installations based on this platform have been successfully implemented. Initially, the Asus Tinker Board (Asustek, Taiwan) [34] was employed in various smart home projects, including the early stages of the presented case study. However, due to ongoing technological progress and increasing requirements at both the hardware and software levels, the platform underwent several successive updates.
The case study, representing a follow-up to the larger-scale field study mentioned above and detailed in [15], started in 2015, based on an updated variant of the embedded PC architecture (Asus EEE Top) from that study. Once the Raspberry Pi (Raspberry Pi Foundation, United Kingdom) platform had reached sufficient performance levels and the necessary software support became available—such as operating systems, smart home platforms, and tools for remote access—the system was migrated to the Tinker Board in 2017. Subsequently, continuously declining support from both the manufacturer and the community led to the transition to the Raspberry Pi 4 around 2020. Finally, the upgrade to the Raspberry Pi 5 followed in 2023, although it still used microSD cards to host the operating system. It was not until 2024 that support for solid-state disks (SSDs) on the Raspberry Pi platform reached sufficient maturity and market prices decreased to an affordable range, making a switch technologically feasible and economically reasonable.
Across the different generations, the Raspberry Pi was configured as a server hosting the smart home middleware platform OpenHAB as the central component, which is described in detail in the software section. Although the hardware architecture would allow for direct control of smart components—for example, via the General-Purpose Input/Output (GPIO) bus [35]—integration with dedicated smart home subsystems proved more advantageous in terms of system stability and operational flexibility. In the concrete installation, the smart home subsystem used is Homematic [36], a platform targeting the end-consumer market and primarily addressing technology-oriented users and advanced do-it-yourself practitioners. The system focuses on the European market, as the supplier’s headquarters are in Germany. The selection of this platform was based on the research team’s long-standing, positive experience with Homematic and its predecessor, FS20, particularly regarding fault tolerance, maintenance effort, and programming and configuration capabilities.
Figure 1 shows the first version of the on-site installation.
3.4. Software Architecture
The core software component providing the required smart home functionality is the platform OpenHAB (Open Home Automation Bus, Openhab Foundation, Germany; Releases 3.x (start) to 5.x (current)) [37]. OpenHAB originated from OSGi and the Eclipse Foundation ecosystem [38], which parallels earlier developments conducted by the authors. These parallels and related experience with the underlying principles constituted one reason for selecting OpenHAB over alternatives such as X10 or Home Assistant. Another decisive factor was OpenHAB’s proven suitability for thin-client hardware platforms, including Raspberry Pi, combined with a minimal set of requirements.
With respect to installation and configuration, OpenHAB provides an integrated image-based solution (OpenHABian [39]) that enables rapid setup. However, this study adopted an alternative approach tailored to the specific project requirements. A central requirement was fully autonomous operation without on-site intervention by the supported individuals. This necessitated reliable remote maintenance capabilities. Integrated system images, such as OpenHABian, tightly couple software components, which can result in complete system failure if errors occur, thereby requiring physical on-site intervention. To mitigate this risk, the installation was based on independently operating core components.
Ubuntu Long-Term Support (LTS) was selected as the operating system for the Raspberry Pi, providing enhanced robustness, including automatic recovery following power outages. Remote maintenance and control were enabled using TeamViewer [40], which automatically starts with the operating system and allows remote access with high probability even under adverse conditions.
OpenHAB itself was installed as a service, started once all prerequisites—such as a Java runtime environment—were operational. A key architectural element of OpenHAB is the use of bindings, which act as software-based connectors that enable the integration of devices and services from different vendors. The variety of bindings is large, ranging from several hundred hardware components (such as the aforementioned Homematic) and online services (e.g., for weather) to cloud support and software-related bindings, for example, connectors to different databases.
In the described installation, hardware bindings included Homematic components, such as sensors, lighting actuators, and motion-detection devices. Additional bindings enabled the integration of a smart TV to observe switching events. To support post-processing of trigger events (e.g., switching of Homematic components or the TV), database bindings were employed to connect, at first, to a MariaDB database and later to InfluxDB. Other relevant OpenHAB bindings include services enabling so-called actions, such as email and SMS messaging. Finally, an important component of OpenHAB in the context of the project is rules, which enable the use of basic programming constructs, such as conditional statements (if, then, else), loops, and database calls, to react to observed events.
3.5. Algorithmic Approach
3.5.1. Initial Feasibility Evaluations
To assess the approach’s feasibility and basic usefulness, several preliminary evaluations were conducted using simple OpenHAB rules. An example rule triggered the sending of an email to a caregiving relative whenever the resident manually operated a smart home component (shown in Listing 1). The motivation for this functionality draws on several theoretical aspects discussed in Section 2. The most important one is unobtrusiveness, since the system operates entirely in the background.
Listing 1. A rule named “Grandma OK” which observes when a blind is operated. In this case an email is sent to the relative.
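Since Listing 1 is reproduced as an image, its logic can only be paraphrased here. The following Python sketch simulates a rule of this kind under stated assumptions: the item name, the recipient address, and the `send_email` stand-in are all hypothetical, and the actual installation implemented this logic in OpenHAB's rule DSL with its built-in email action.

```python
# Hedged sketch (not the original OpenHAB rule): the notification logic of
# Listing 1, simulated in plain Python. All names are illustrative.

def make_ok_notification(item_name: str, new_state: str) -> str:
    """Build the body of a "Grandma OK" email for a manual component operation."""
    return (f"Activity detected: {item_name} was operated "
            f"(new state: {new_state}). Everything seems OK.")

def on_item_changed(item_name: str, new_state: str, send_email) -> None:
    # In OpenHAB, this corresponds to a rule triggered by an item-changed
    # event; send_email stands in for the platform's mail action.
    send_email("relative@example.org", "Grandma OK",
               make_ok_notification(item_name, new_state))

# Usage: collect outgoing mails in a list instead of actually sending them.
outbox = []
on_item_changed("Blind_LivingRoom", "DOWN",
                lambda to, subject, body: outbox.append((to, subject, body)))
```

The design point illustrated here is the event-driven, background-only character of the rule: the resident does nothing beyond operating the blind as usual.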
The initial trials were at least technically successful, as the triggered events resulted in a corresponding email notification. However, this basic functionality was obviously not really smart and provided only limited informational value, as it did not assess whether the observed activity was typical or atypical with respect to the time of day and component usage. Subsequent improvements, therefore, focused on enhancing the informational quality of notifications. SQL (Structured Query Language) queries were integrated into OpenHAB rules to retrieve historical data from the MariaDB database. The queries were based on the comparison between the current activity level and the historical averages for the corresponding hours of the day, as shown in Listing 2.
Listing 2. A rule named “Grandma Status” which periodically fetches data from a connected database and forwards the result to the relative per email.
The rule uses a cron function [41] to automatically execute an SQL query [42] on the database hourly, counting how often the component (Item XXX) has been operated. Counting triggers rather than querying component states was necessary because different smart components transmit heterogeneous data to the database. Light switches, for example, typically generate binary values such as on and off. Roller blinds or light dimmers provide discrete numeric values, for instance, ranging from 0% (open) to 100% (closed), or from 1 (minimal brightness) to 16 (maximal brightness). A television similarly reports on and off states but may additionally transmit information about the active channel or the current volume level. Other components may provide status values such as open, closed, or active/inactive. While these values are relevant for different functional purposes, they are of limited interest for observing activity patterns. From this perspective, the decisive factor is whether a component was manually operated and how frequently such operations occurred. This information can be reliably derived using the SQL COUNT function, independent of the specific values transmitted by individual devices. For readability, the examples shown in this paper represent a simplified query applied to a single component. In the actual installation, however, the query included seven components, and the resulting values were aggregated into a derived average, which was forwarded to the caregiving relative. The transmitted message contained the result of comparing the number of triggered events over the past hour with the average number of triggered events for the same hour of the day across the entire observation period represented in the database. This approach significantly improved the information quality of the generated notifications. However, it also pushed the performance and stability limits of the OpenHAB runtime environment. Consequently, it was decided to transition to alternative, more specialized mechanisms in order to further enhance both system robustness and the quality of the extracted information.
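The counting logic described above can be illustrated with a small, self-contained sketch. It uses SQLite instead of MariaDB, a single component instead of the seven in the actual installation, and an invented table layout; item names and timestamps are purely illustrative.

```python
# Hedged sketch of the COUNT-based comparison: triggers in the past hour
# versus the historical average for the same hour of day.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (item TEXT, ts TEXT)")

# Synthetic data: two operations of a blind in the current hour, plus one
# operation in the same hour on each of the three preceding days.
now = datetime(2025, 3, 1, 9, 30)
rows = [("Blind", (now - timedelta(minutes=m)).isoformat()) for m in (5, 20)]
rows += [("Blind", (now - timedelta(days=d, minutes=10)).isoformat())
         for d in (1, 2, 3)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

def triggers_last_hour(conn, item, now):
    """COUNT of manual operations in the 60 minutes before `now`."""
    start = (now - timedelta(hours=1)).isoformat()
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM events WHERE item = ? AND ts >= ? AND ts <= ?",
        (item, start, now.isoformat())).fetchone()
    return n

def avg_triggers_same_hour(conn, item, hour):
    """Average triggers per observed day for the given hour of day."""
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM events WHERE item = ? "
        "AND CAST(strftime('%H', ts) AS INTEGER) = ?", (item, hour)).fetchone()
    (days,) = conn.execute(
        "SELECT COUNT(DISTINCT date(ts)) FROM events WHERE item = ?",
        (item,)).fetchone()
    return n / days if days else 0.0
```

With this data, the past hour contains two operations, while the historical average for 9 o'clock is 1.25 operations per day, so the notification would report slightly elevated activity.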
3.5.2. MariaDB and PhpMyAdmin
The next development step involved using database-level processing in PhpMyAdmin, a database management tool that provides advanced data manipulation capabilities. This approach enabled shifting complex, resource-intensive computations away from the limitations of the OpenHAB runtime environment. Instead of relying on the OpenHAB rule engine, data processing was implemented using stored procedures that operate independently of OpenHAB. In addition to relocating basic functionality from OpenHAB to PhpMyAdmin, further considerations were made regarding algorithmic advancement. As already discussed, the data generated by the attached components is heterogeneous and exhibits varying triggering frequencies. Furthermore, the temporal distribution of trigger events can be influenced by seasonality, day–night cycles, and individual activities and routines. It was therefore necessary to process the stored data and transform it into a representation capable of accommodating these variations. Given the requirements discussed in Section 2, the primary objective at this stage was to avoid reliance on external computing resources, such as cloud services. This decision was motivated by technical constraints—including connection stability and bandwidth limitations—as well as security and privacy concerns. Consequently, all data processing was designed to be performed locally. Due to the computational limitations of the Raspberry Pi, particularly in the early versions, the approach was restricted to statistical analysis methods. Specifically, data normalization and manipulation based on the z-distribution (standard normal distribution; see Equation (1)) were employed.
In the first step, the data in MariaDB was recalculated and normalized using the z-transformation, and the results were subsequently written to separate data tables. The resulting data is characterized by constant features (mean = 0, standard deviation = 1). Again, the goal was to determine the difference between the number of manual triggers observed in the most recent hour and the typical trigger frequency in the same hour of the day in the past. The z-distribution enables the identification of deviations based on calculated z-values, which express the statistical distance from expected behaviour. Deviations are commonly interpreted in terms of standard deviations (σ). Depending on the desired sensitivity of the algorithm, absolute z-values exceeding a threshold of 2 (i.e., z > 2 or z < −2) can be regarded as statistically significant deviations. The computation of the z-values and the assessment of deviation significance were performed using the aforementioned stored procedures. These procedures can be understood as programs providing enhanced functionality directly within the database system. They can be executed locally, require no external computational resources, and run automatically at predefined intervals. Compared to the earlier cron/rule combination, this mechanism offers substantially greater stability (because it runs decoupled from the smart home runtime) and flexibility, and supports more complex processing logic.
The described step, illustrated in Listing 3, shows an example stored procedure that calculates a z-score and writes the result to a separate database table in order to preserve the integrity of the raw sensor data. To meet the requirement of informing the caregiving relative of the current status, the email action is used again within a rule. However, instead of executing complex SQL queries directly in the rule engine, the rule only fetches the most recent z-value, corresponding to the past hour and aggregated across all components, from the database. Compared to an SQL query directly embedded in a rule, this approach significantly reduced the computational resources required.
Listing 3. Stored procedure for calculating a z-value based on the comparison between the number of triggers in the past hour and the average trigger frequency for the corresponding hour of the day in the whole observation period.
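The core computation of the stored procedure can be expressed compactly in Python for illustration; this is a sketch of the described logic, not the deployed SQL. A z-value relates the most recent hour's trigger count to the historical mean and standard deviation for the same hour of day, and values with |z| > 2 are flagged.

```python
# Hedged sketch of the z-score logic (function names are illustrative).
import statistics

def z_score(current_count: int, historical_counts: list) -> float:
    """z = (x - mean) / stddev, over counts observed for this hour of day."""
    mean = statistics.mean(historical_counts)
    std = statistics.pstdev(historical_counts)  # population standard deviation
    if std == 0:
        return 0.0  # no variability observed; treat as "no deviation"
    return (current_count - mean) / std

def is_significant(z: float, threshold: float = 2.0) -> bool:
    # Flag deviations beyond |z| > 2, matching the sensitivity
    # discussed in the text.
    return abs(z) > threshold

# Usage: 8 triggers this hour against a history of 2-4 triggers for that hour
# clearly exceeds the threshold, so a deviation would be reported.
z = z_score(8, [2, 3, 4, 3])
```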
3.6. Time Series/InfluxDB
The approach based on conventional relational databases again proved useful to some extent, improving information quality. However, over time, several problems emerged that required changes at both the hardware and software levels. Regarding the hardware, the Tinker Board’s architecture and peripherals exhibited several weaknesses. One issue was the limited robustness of the 32-bit architecture, combined with decreasing vendor and community support. On the hardware level, the Tinker Board was therefore replaced by state-of-the-art Raspberry Pis—initially generation 4 and later generation 5—the latter based on a 64-bit ARM architecture equipped with 8 GB of RAM. This transition also opened opportunities for upgrading the software infrastructure. The operating system was upgraded to a 64-bit version of Ubuntu, and more recent versions of OpenHAB and the associated runtime environments could be deployed (e.g., Java 21 instead of Java 11). At first, the new platform worked better than the previous one. However, a critical bottleneck remained: the SD card. For short-term or simple prototyping projects, the Raspberry Pi platform provides convenient, SD card-based runtime images. These storage media are multipurpose and have proved useful in various application domains. However, they are not optimized for scenarios with high-frequency or continuous read and write operations, which is typically the case when operating systems handle multiple runtime services in parallel. The high demands placed on the operating system and concurrently running services led to instability, disconnections, and, in some cases, damage to the boot image stored on the SD cards. These issues ultimately led to the decision to switch to a solid-state disk (SSD)-based solution. SSDs are significantly better suited for operating system execution and high-frequency read and write operations. In addition, support for SSDs on the Raspberry Pi has reached a high level of quality, and their prices are comparable to those of SD cards.
The basic installation on the new platform—consisting of the latest-generation Raspberry Pi with SSD storage, Ubuntu (Canonical Group Ltd., United Kingdom), TeamViewer (TeamViewer GmbH, Germany), and OpenHAB—was largely comparable to the initial Tinker Board-based platform described above. The essential difference was that the new architecture enabled a transition to a more modern approach to data processing and manipulation, while still fulfilling the core requirements of unobtrusive operation, fully local execution of the required functionality, and remote control. The central component of the revised software collection is the time-series database InfluxDB (version 2), a NoSQL database management system. InfluxDB uses proprietary concepts, such as organizations and buckets, and a specific query language, Flux, to manage and manipulate data. Compared to the MariaDB/PhpMyAdmin-based approach, InfluxDB offers a broader range of integrated data manipulation capabilities specifically designed for time-related data. This is particularly relevant in the present context, where trigger events occur at specific points in time and at frequencies influenced by temporal factors such as the hour of the day, day–night cycles, and seasonal variations. InfluxDB is directly supported by OpenHAB via a dedicated binding that treats the database as a persistence layer, enabling the storage and manipulation of data generated by smart home or IoT components.
The starting point when introducing InfluxDB was not as low-level as in the previous development stages. Since the z-distribution-based approach had already proven useful, the primary goal was to port the existing functionality to the new platform and adapt it to its specific characteristics. The connection between OpenHAB and InfluxDB is conceptually similar to that of MariaDB; the major differences lie in how data is organized and manipulated. InfluxDB is structured around the concepts of organizations and buckets, which, in a simplified sense, correspond to databases and tables in relational systems. Unlike MariaDB, InfluxDB provides an integrated management environment and therefore does not require external add-ons such as PhpMyAdmin. The platform offers a built-in graphical web interface that supports a variety of data visualization and manipulation tasks. For more sophisticated programming tasks, InfluxDB provides an integrated query builder with a code view. Both the graphical interface and the code-based processing support various connections to external systems. However, the version used in this work (InfluxDB 2), which is currently the only version natively supported by OpenHAB, relies on the proprietary data manipulation language Flux and other proprietary concepts.
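To give a rough impression of how such a Flux query is organized, the following sketch assembles a query that counts motion-sensor triggers per hour. The bucket, measurement, and field names here are hypothetical examples of our own, not the identifiers used on the deployed platform:

```python
# Sketch of a Flux query counting per-hour sensor triggers.
# "openhab_raw" and "MotionSensor_Basement" are hypothetical names.
bucket = "openhab_raw"
flux_query = f'''
from(bucket: "{bucket}")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "MotionSensor_Basement")
  |> aggregateWindow(every: 1h, fn: count)
'''
print(flux_query)
```

The pipe-forward operator (`|>`) chains the stages: `from` selects the bucket, `range` restricts the time window, `filter` narrows the records, and `aggregateWindow` groups them into hourly counts.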
Due to the different paradigms on which InfluxDB is based—some of which deviate considerably from conventional database concepts and SQL—the transfer of the existing functionality required substantial adaptation and proved to be somewhat cumbersome.
The current status of the implementation is illustrated in Listing 4, which builds on the same concepts as the previous approaches (z-value-based deviation recognition with hours as the unit of observation).
| Listing 4. InfluxDB query language (Flux)-based query for calculating a z-value based on the comparison between the trigger count of the past hour and the average trigger frequency for the corresponding hour over the observation period. |
![Applsci 16 02251 i004]()
Accordingly, the query also needed to be executed automatically, which required a mechanism equivalent to stored procedures. InfluxDB provides so-called tasks for this purpose. These tasks offer a broad range of functionality and can, to a certain extent, be regarded as counterparts to stored procedures. As in the previous approach, data is retrieved from the data bucket, transformed into z-values, and written to a separate bucket in order to preserve the original raw data. The OpenHAB rule engine subsequently searches for entries corresponding to the past hour and sends a notification to the relative’s email address if the observed data deviates significantly from the expected value.
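The statistical step performed by the task can be sketched in Python as follows. This is a simplified illustration with hypothetical counts and function names of our own, not the deployed Flux implementation:

```python
from statistics import mean, stdev

def hourly_z_value(history, current_count):
    """Compare the trigger count of the hour just elapsed against the
    counts observed for the same hour of day over the observation period.

    history: list of trigger counts for this hour over past days
    current_count: trigger count of the past hour
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0  # no variation in the history: treat as no deviation
    return (current_count - mu) / sigma

# Hypothetical counts for, e.g., the 07:00-08:00 slot over two weeks:
history = [12, 10, 14, 11, 13, 12, 9, 12, 11, 13, 12, 10, 14, 11]
z = hourly_z_value(history, 25)  # an unusually active hour
```

In the deployed system, a task writes such z-values to a separate bucket, and the OpenHAB rule engine triggers a notification once the absolute z-value exceeds a significance threshold.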
5. Discussion
This paper represents an attempt to bring Active and Assisted Living (AAL) to a broader share of the elderly population. Although the results may appear limited and weak at first glance, they reflect real-world conditions and the practical difficulties and obstacles associated with integrating technology into environments that are typically not optimally prepared for such interventions. A cautious and considered approach was therefore necessary. In this sense, the work aimed to balance potential benefits with the drawbacks of the applied technologies. Although several requirements for appropriate AAL solutions were identified in the related work section, only a subset could be addressed in the concrete implementation examples. The reason is that feasibility—particularly under real-life conditions, as opposed to the laboratory settings or artificial living environments that constitute the majority of related work—must be approached with sensitivity. Consequently, the focus of the illustrated AAL functionality was placed on activity observation, analysis, and reaction mechanisms that fulfill the requirements of unobtrusiveness and, in the sense of the technology acceptance model (TAM), usefulness and ease of use, as no explicit interaction by the supported person is required.
As discussed in Section 3 and Section 4, several obstacles must still be overcome before the respective technologies can be integrated more broadly to support a larger share of the elderly population in their own homes. One important obstacle is the range of technical skills still required to install and maintain the necessary infrastructure and functionality. Although instructions, guidelines, and other types of documentation are available, they are typically distributed across multiple sources and require a certain level of technical literacy to interpret and apply.
From a data-processing perspective, the z-distribution-based approach proved useful, though with limitations that must be considered. The calculated z-values provide a basic indication of potential behavioural deviations. As shown in Figure 4, the participant in this case study exhibits relatively stable routine behaviour; therefore, averaging over the entire observation period provides acceptable information value. At the same time, the approach is sensitive to deviations. As discussed in Section 4, the z-values correctly indicated increased activity during renovation work. However, these deviations were also influenced by the presence of additional persons, as the participant could not carry out the renovation alone.
An in-depth investigation and detailed analysis of correctly and incorrectly identified deviations were conducted and are shown in Table 1. An observation period of two recent months was analyzed in detail for two reasons. First, the selected observation period is based on the most recent version of the platform, which is characterized by greater stability and more complete data. Second, due to the recency of the observation period, additional sources of information for comparison with the observation data were more easily accessible. We focused on true and false positives for activity deviations, using data from a single motion sensor located in the household’s basement. The significantly deviant z-values were matched to data from other sources (e.g., calendar entries of the visiting relatives or their Google Timeline data). The results show that on weekdays, the percentage of true positives is very high (around 87%), while the percentage of false positives is approximately 14%. On weekends, however, the proportion of true and false positives changes significantly: the majority of identified deviations are false positives (about 61%), compared to roughly 39% true positives. These findings provide important insights and will—besides other aspects—guide the direction of our future work (discussed in Section 6).
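The reported percentages are shares of flagged deviations, i.e., the true-positive share corresponds to the precision of the detector. A minimal sketch, with hypothetical counts chosen only to mirror the reported weekend ratio:

```python
def positive_rates(true_pos, false_pos):
    """Share of true vs. false positives among all flagged deviations.
    The true-positive share equals precision: TP / (TP + FP)."""
    total = true_pos + false_pos
    return true_pos / total, false_pos / total

# Hypothetical counts of matched weekend deviations:
tp_rate, fp_rate = positive_rates(true_pos=39, false_pos=61)
print(f"true positives: {tp_rate:.0%}, false positives: {fp_rate:.0%}")
```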
From an interpretative perspective—supported by findings from the preceding project described in [15]—the attribution of activity to a specific individual is of secondary relevance. As long as activity is detected, it can be assumed that at least one person is present who could provide support if required. This reasoning also applies to multi-person households and simplifies observation, since multiple users do not have to be differentiated. If no activity is recognized, the situation is effectively equivalent to that of a single-person household.
The new system architecture, in particular the replacement of SD cards with SSD-based storage, significantly improved stability. Power outages, such as those visible in Figure 3, can be handled more robustly, as the system restarts automatically once power is restored. Nevertheless, the platform is still not entirely error-free. Contrary to expectations, situations still occur in which the system neither stores nor transmits data and is no longer accessible remotely. Although this occurs far less frequently than with the SD cards, it remains problematic. In such cases, simply toggling the corresponding circuit breaker in the household’s fuse box—an action that can be performed by the participant—restores functionality. However, diagnostic information regarding the error cannot be recovered, and a certain amount of data loss must be accepted. Remote maintenance is further limited by the low bandwidth of rural internet connections, which does not affect data transmission itself but makes modification of algorithms or InfluxDB tasks cumbersome.
Another important learning concerns the influence of the observation system on participant behaviour. In the preceding project, noticeable behavioural changes were observed among both elderly participants and their relatives. Some participants explicitly reported modifying their behaviour—for example, deliberately triggering sensors (“We always check that the connecting door is closed”) to simulate “typical” activity. In one case, a relative expressed concern when the system reported prolonged inactivity while the participant was performing repair work in the cellar. Although such deviations had existed prior to the study, the changed quality and granularity of information significantly altered the sensitivity and interpretation of the involved persons.
A contrasting effect was observed during the early phases of the study regarding the high number of notification emails. The large volume of messages overwhelmed the recipient, increasing the likelihood that relevant information might be overlooked—an effect comparable to losing the overall perspective due to information overload, where information noise reduces the effective information content; in other words, not seeing the wood for the trees.
Insights from the preceding project further showed that behavioural changes tend to decline over time. In the long-term setting of the presented study, no Hawthorne-like effect could be observed. The participant is aware of the system operating in the background and actively uses the monitored components, but her behaviour does not appear to be influenced by the observation itself. However, because of the personal proximity to the research team, such an effect cannot be ruled out in other settings. The absence of observable behavioural changes may also be attributable to the substantially longer deployment duration compared to typical field studies reported in the literature, which often span only a few months. Nevertheless, further investigations of these aspects remain necessary.
Privacy and security represent additional critical concerns in such sensitive application domains. We addressed these concerns by running the Raspberry Pi in a headless configuration, meaning that no keyboard, mouse, or screen is attached. While these peripherals could theoretically be connected, the operating system is protected by standard authentication mechanisms. Remote access via TeamViewer is end-to-end encrypted and additionally secured by login credentials. The data processed locally is highly abstract and not interpretable without contextual metadata. For example, a recorded event such as HM-LC-Sw2-FM-OEQ1663598:1, “On”, 20 May 2015 06:30:13 does not convey any critical personal information. Additional semantic labelling (e.g., couch-light) is stored only within the system for better readability. Overall, the privacy and security risks do not exceed those typically associated with standard internet-connected or IoT-based systems.
6. Conclusions and Future Work
The presented work can be considered a small contribution within the broad field of AAL, highlighting the challenges and constraints encountered in real-world deployment. As emphasized throughout the paper, the underlying technology is largely available; however, its meaningful and sustainable application continues to resist straightforward implementation, particularly when evaluated from a comprehensive user experience perspective.
At the same time, noticeable progress can be observed compared to earlier field studies (specifically, ours). One important development is the availability of open and extensible smart home platforms such as OpenHAB, which served as one of the central technical foundations of this work. Long before major industrial stakeholders introduced the Matter standard [43], open-source communities had already established practical solutions for integrating heterogeneous components from different manufacturers via platforms such as Home Assistant or OpenHAB [18]. In this context, community-driven developments such as Zigbee2MQTT [44] also deserve particular attention. Zigbee is, in principle, an open standard that would allow for the direct integration of devices from different suppliers. However, as pointed out by [45], differences in device specification and communication behaviour between vendors (e.g., Ikea or Philips Hue; Miele@home even relies on a proprietary, closed Zigbee variant) significantly reduce practical interoperability. The goal of Zigbee2MQTT is to overcome such vendor-specific and proprietary deviations and thereby increase interoperability between devices from different manufacturers. From a functional perspective, the smart home platforms themselves are largely equivalent, offering broad protocol support and extensibility. Zigbee2MQTT provides an important complementary advantage to platforms such as OpenHAB or Home Assistant, as Zigbee devices can communicate directly and fully locally, without necessarily relying on vendor-specific gateways, which in turn often introduce additional cloud dependencies. Matter plays a specific and somewhat ambivalent role in the context of smart home platforms. Despite its significantly stronger market backing (including GAFA companies), Matter currently offers capabilities that are largely comparable to those of open platforms. At the same time, Matter has been the subject of critical discussion, particularly with respect to privacy and security implications arising from design choices, such as the exchange of credentials and data across different vendor-specific subsystems within the Matter ecosystem [46].
With regard to usability—an aspect emphasized as crucial for the broader adoption of smart home technologies in general and AAL in particular—Home Assistant provides more accessible add-ons and plugins for non-technical users, whereas OpenHAB is less supportive, e.g., in terms of visualization tools for defining rules for users not familiar with programming. One of the original concepts underlying such end-user programming approaches can be found in Google Blockly [47], which is based on the idea of composing functionality from connected building blocks representing programming concepts (such as loops, conditions, and actions) and has—besides other fields of application—also been researched in the context of IoT [48,49] and smart home programming [50]. Although these concepts reduce the entry barrier for laypersons, they still constitute “programming” and therefore require a basic understanding of the underlying concepts and fundamentals of logic. A promising alternative is likely found in artificial intelligence, specifically in Large Language Models (LLMs), as they are based on natural language interaction, which is presumably more accessible for laypersons and may eventually support them in designing, configuring, and maintaining smart home systems [51], probably also in the context of AAL. Currently, however, LLM-based approaches still require manual transfer of prompt results into target IoT environments (e.g., OpenHAB or Home Assistant). Agentic AI paradigms could further enhance this context, as agents could autonomously perform the transfer and integration steps, thereby closing the gap between conversational interaction and operational system programming [52].
Given demographic change and shortages in professional personnel, such developments appear increasingly necessary. Already in 2006, ref. [53] argued that human–computer interaction would shift from the era of “easy to use” to an era of “easy to develop”. From today’s perspective, neither of these goals has yet been fully achieved, though notable exceptions exist. One illustrative example is video production, which—due to its technical complexity—was formerly reserved for technology enthusiasts. Today, however, almost any smartphone user can record, edit, and publish videos, including titles, background music, and scene transitions. This development demonstrates that, with sufficient industry motivation and commitment, highly complex functionality can indeed be transformed into user-friendly solutions. The aforementioned approaches based on agentic AI can be considered a means of supporting diverse user groups in managing their ICT-related problems independently, representing the technological state of the art. However, societal concerns accompanying such developments (e.g., ethics, security, privacy) should be critically examined [54].
Nevertheless, the system described in this paper cannot currently be configured or maintained by non-technical users. Substantial technical knowledge and sustained interest remain necessary. A potential service or commercialization model could therefore involve “AAL as a Service”, in which specialized providers centrally manage installation, configuration, and maintenance, while caregiving relatives assume responsibility for interpretation and response. As the primary target group consists of older adults who are not yet dependent on professional care or close-knit support structures, questions arise regarding affordability as well as legal and organizational aspects—particularly with respect to liability. Regarding scalability, the presented approach shows promising potential. Because it relies solely on time-based observation of sensor trigger frequencies rather than explicit activity recognition, it can likely be transferred to other environments in an unsupervised manner. This assumption is supported by results from previous work [15], which deployed diverse sensor combinations across approximately 20 real-world households. Despite participants often stating that they did not follow strict routines, the observation periods revealed that even a small number of smart components (as in the presented study, which required manual triggering) was sufficient to identify routine behaviour and deviations. Following an initial learning phase, the system was able to derive typical activity patterns and detect anomalies. Effective AAL support thus does not require explicit interaction or wearable devices; environmental infrastructure and passive observation can already increase safety compared to conventional support, such as periodic visits or phone calls. The presented concept may even operate using devices not traditionally associated with smart homes, including televisions, internet routers, hi-fi equipment, or household appliances. The presented approach can automatically and unobtrusively react to deviations and trigger appropriate actions, as exemplified in this work by email notifications.
The approach nevertheless exhibits limitations while offering multiple opportunities for further improvement, particularly at the algorithmic level. Deviation detection based on the z-distribution has proven suitable as a baseline method but has only been partially evaluated using standard key metrics such as precision and recall. The time-series data managed within InfluxDB provides a promising basis for more advanced analysis, for instance by accounting for seasonality and higher-order temporal patterns. Moreover, forecasting and modelling approaches are directly supported by InfluxDB. Initial experiments with the MAD (median absolute deviation) algorithm [55], which promises improved outlier detection in the InfluxDB ecosystem, eventually proved successful and enabled preliminary analyses. However, due to the comparatively small volume of available data, no generalizable findings can yet be derived. Nevertheless, first insights were obtained from analyzing the ratio of false to true positives, as shown in Table 1. We applied both the z-score calculation and MAD to the data and observed the following initial differences: the proportion of identified significant deviations was comparatively low (in the range of 20–30%) for both methods. However, the majority of significantly deviating data points identified by the z-score method represent false positives, whereas a slight majority of the deviations identified by MAD represent true positives. It should be noted that the initial successful application of MAD—achieved after several unsuccessful trials due to the specific characteristics of the data—was not optimized with respect to parameter tuning, observation window sizes, or other algorithmic refinements. These enhancements, together with the integration of external sources (e.g., calendar information, Google Timeline as referred to in Section 5, or geofencing), will be addressed in future work. Despite the limitations, the initial results are promising in terms of validity. As MAD relies on median-based statistics, it is expected to be more robust to outliers than z-scores, which are based on means and standard deviations and are therefore more sensitive to rare extreme events. In the context of in-home activity recognition, MAD is therefore expected to capture deviations from habitual behaviour more reliably.
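The difference in robustness between the two detectors can be illustrated with a minimal sketch (our own simplified Python illustration, not the Flux implementation running on the platform; the thresholds and counts are assumptions). With two extreme hours in the data, the inflated standard deviation masks both events for the z-score, while MAD still flags them:

```python
from statistics import mean, stdev, median

def z_outliers(counts, threshold=2.0):
    """Flag indices whose z-score (mean/stdev-based) exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if sigma and abs(c - mu) / sigma > threshold]

def mad_outliers(counts, threshold=3.5):
    """Flag indices via the median absolute deviation (MAD).
    0.6745 scales the MAD to be comparable to a standard deviation
    under normality; 3.5 is a commonly used cut-off."""
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(0.6745 * (c - med) / mad) > threshold]

# Hourly trigger counts with two extreme events (indices 5 and 8):
counts = [11, 12, 10, 13, 11, 60, 12, 11, 58, 10]
print(z_outliers(counts))    # the outliers inflate the stdev and mask each other
print(mad_outliers(counts))  # the median-based statistic remains unaffected
```

This masking effect is exactly why median-based statistics are considered the more reliable choice for rare but pronounced behavioural deviations.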
A further direction for future work involves the application of artificial intelligence-based methods such as LSTM and related machine learning approaches, which are frequently discussed in the related literature (cf. e.g., refs. [
16,
17,
18]). Several challenges must be considered, including whether such methods can be executed locally without reliance on cloud-based infrastructures, given privacy, security, and connectivity constraints. In many application domains, comparable problems are addressed using (cloud-based) GPU hardware, which introduces additional complexity. Platforms such as NVIDIA Jetson [
56] offer potential solutions by providing local GPU resources on edge-based architectures. Their applicability, compatibility with open-source platforms and sustainability will be explored in future work.