
Assessing Compliance in Child-Facing High-Risk AI IoT Devices: Legal Obligations Under the EU’s AI Act and GDPR

Mohammed Rashed 1 and Yasser Essa 2
1 Telematic Engineering Department, Universidad Carlos III de Madrid, 28911 Leganés, Spain
2 Sustainable Hitecnova S.L., 13500 Ciudad Real, Spain
* Author to whom correspondence should be addressed.
Telecom 2025, 6(4), 79; https://doi.org/10.3390/telecom6040079
Submission received: 9 August 2025 / Revised: 20 September 2025 / Accepted: 23 September 2025 / Published: 17 October 2025

Abstract

The rapid and ongoing adoption of smart home products, coupled with the increasing integration of artificial intelligence (AI) into these products, is an undeniable reality. However, as the two technologies converge, they give rise to a range of significant concerns. The EU’s recent AI Act specifically addresses the challenges associated with the use of AI technology. In this study, we examine three AI-integrated products with toy capabilities sold in Spain, serving as a case study for the EU market of smart home devices that incorporate AI. Our research aims to identify potential compliance issues with both the AI Act and the General Data Protection Regulation (GDPR). Our results reveal a clear and worrying gap between the existing legislation and the functionalities of these devices. Taking the perspective of an ordinary user, we find that the privacy policies for these products, whose features make them high-risk AI systems, AI systems with systemic risk, or both under the AI Act, fail to provide any information about AI usage, particularly of ChatGPT, which they all integrate. This raises significant concerns, especially as the market for such products continues to grow. Without rigorous enforcement of existing legislation, the risk of misuse of sensitive personal information becomes even greater, making strict regulatory oversight essential to ensure user protection.

1. Introduction

The Internet of Things (IoT) is a well-diversified technology that has become part of our daily lives. It is fully integrated into manufacturing, mobility, agriculture, and smart homes, among other areas. The IoT market size reflects this widespread adoption: the number of users of smart home devices skyrocketed from 191 million in 2019 to 422 million in 2024 and is expected to reach 785 million by 2028 [1]. As part of IoT technology, smart home technology follows this growing trend, mainly because of the unmatched comfort and efficiency it brings to the home. The technology encompasses not only appliances with basic functionality, such as bulbs and power plugs, but also computationally more powerful appliances, such as smart washing machines, cookers, voice assistants, and refrigerators, which involve far more user interaction than basic appliances do. The number of smart home devices is expected to reach 29 billion by 2030 [2].
Given this trend in the smart home market, the integration of Artificial Intelligence (AI), which has expanded into many areas, into the smart home ecosystem is rapidly reshaping it. LG recently presented its home robot, the LG Smart Home AI Agent, to the public during the Consumer Electronics Show (CES) 2024. The robot, promoted as “an all-around home manager and companion rolled into one” device, can infer the user’s emotions by analyzing their voice and facial expressions [3] and can select music that matches their feelings. It also functions as “a moving smart home hub” and can carry out additional tasks, such as monitoring pets and communicating with other smart home devices [4]. Furthermore, it features a GenAI conversational service based on an LLM.
While home robots are recent arrivals, AI-driven IoT devices such as smart doorbells with facial recognition and voice-activated assistants have already become integral parts of this ecosystem, given their ability to provide security, comfort, and efficiency. These devices operate autonomously, gathering and processing data to deliver customized experiences. The market for AI in the smart home is expected to grow from USD 11.24 billion in 2022 to a staggering USD 52.1 billion by 2030 [5]. Such growth is viewed positively, with expectations of even higher efficiency and comfort. However, this advancement also raises concerns regarding personal data processing and the transparency and trustworthiness of the AI algorithms that form part of the AI-driven smart home ecosystem.
To address these issues, and with the goal of regulating both smart products that include AI features (such as home robots) and systems that work entirely through AI (such as CV-filtering tools used in hiring), the EU has worked on developing a law specifically for AI since April 2021. This law, now known as the AI Act [6], finally entered into force in August 2024. While this legislative action is a step forward in protecting end-users of AI systems, there remains vagueness regarding how the Act will be enforced in practice. An EU Commission report [7] indicated that there was no need to reinvent the wheel, as several IEEE standards are already in place on which European Standards Organizations (ESOs) may rely. However, the report warned that some standards do not fully align with the EU’s legislative obligations, thus requiring either the extension of these standards or the filling of the gaps with additional standards.
With all the above in mind, the AI Act’s introduction adds to the complexity of the legislative landscape for stakeholders, including smart home end-users. It thus becomes crucial to identify possible compliance challenges, particularly for smart home devices that the AI Act classifies as either high-risk or involving systemic risk. Smart home products that use biometrics (classified as high-risk) and Generative AI (classified as involving systemic risk) are already sold in the EU market. Hence, besides studying compliance with the General Data Protection Regulation (GDPR) [8], this work aims to shed light on the challenges emerging from this new legislation, particularly for devices falling under the high-risk and systemic-risk labels of the AI Act. In this paper, we explore the Spanish market, as an example EU market, for devices that fit these characteristics, given our knowledge of this particular market. Our search reveals several devices that qualify as high-risk or systemic-risk systems. We analyze these devices’ functions and their documentation. Also, adopting an ordinary user’s perspective, we study their privacy policies, as the only documentation such a user is likely to consult, for GDPR compliance and for any AI-related details. Our results show non-compliance with various transparency aspects of both the GDPR and the AI Act.
Our paper is organized as follows. Section 2 provides the background, Section 3 covers the related work, Section 4 explains our methodology, Section 5 provides the results for our analysis, Section 6 discusses the results and, finally, Section 7 concludes the paper and highlights future work.

2. Background

With the GDPR and the AI Act being the legal basis of this study, it is critical to provide sufficient background on them, focusing in particular on the articles on which this study relies.

2.1. GDPR

This legislation [8], in force since May 2018, is the EU’s framework for personal data protection. It applies to all processing of personal data of individuals in the EU, regardless of where the data controller or processor is established. Data controllers (controllers hereafter) are the entities that decide on the purposes and means of the processing of personal information, while data processors are the entities that process such data on behalf of the controllers. While the GDPR has broad coverage, we highlight the parts most relevant to our study. First, Article 12.1 indicates that controllers shall ensure that users are informed, especially when minors are involved, so that any data processing is carried out transparently (using concise, intelligible, accessible, and clear language). Also, Article 13.1 requires that controllers inform users of their identity and contact details, the purposes of data processing, and whether users’ data will be transferred to a non-EU country. Furthermore, as per Article 13.2, controllers need to inform users of the existence of automated decision making, including profiling. Finally, Recital 38 stresses that children merit specific protection with regard to their personal data, particularly as they lack the ability to fully grasp the risks, consequences, and safeguards involved.

2.2. AI Act

In April 2021, the EU Commission proposed a legal framework to regulate AI. This framework was approved by the EU Council and the EU Parliament under the name of the AI Act [6] in December 2023, finally coming into force in August 2024. While the Act itself is an extensive document that covers different aspects, such as the definition of AI systems, the role(s) of each stakeholder within the ecosystem, the types of AI systems, exclusions, and sanctions, this study takes a more generic approach, particularly given our limited legal background.
As per the AI Act, an AI system involves several aspects, including (1) the ability to function with varying degrees of autonomy, (2) having implicit or explicit objectives, (3) the ability to generate an output based on inferences from the input, and (4) the capability of influencing the virtual or physical environment. AI systems are classified as prohibited, high-risk, or limited-risk. In addition, a special class covers General-Purpose AI (GPAI) and Generative AI (GenAI).
High-risk AI systems are further classified into two categories: (1) those that are a product or a safety component within another product, e.g., elevators, vehicles, aviation systems, and toys [6]; these adhere to the EU’s Harmonization Legislation and must undergo a third-party conformity assessment; and (2) those with a high risk of causing harm to the fundamental rights, safety, or health of natural persons, e.g., remote biometric identification systems, AI-driven job recruiting software, and AI-driven safety components within critical infrastructure.
GPAI and GenAI systems are considered to involve systemic risk only when the cumulative amount of training compute, measured in floating point operations (FLOPs), is greater than 10^25 (this number may change in the future as technology advances). According to Sastry et al. [9], GPT-4 and Gemini Ultra are already above the 10^25 FLOPs threshold, thus involving systemic risk. As per the AI Act, GenAI is a subset of GPAI. Additionally, the Act defines GenAI systems as those built to generate content such as images, video, audio, or text with different levels of autonomy. The Act requires that GenAI outputs be machine-readable and flagged as artificially created. This applies to deepfake multimedia as well as to text that informs people on matters of public interest. The AI Act also includes transparency obligations. Article 13.1 requires that sufficient transparency form part of the design and development of a high-risk AI system. Also, as per Article 50.1, for AI systems designed for direct interaction with humans, providers shall inform users that they are interacting with an AI system. A summary of the main articles of both the GDPR and the AI Act is provided in Table 1.
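To make the systemic-risk rule concrete, the following minimal Python sketch classifies a model by its reported cumulative training compute. It is an illustration only: the threshold is the one named in the Act, while the per-model compute figures are hypothetical placeholders rather than published estimates for any real model.

```python
# Illustrative sketch: applying the AI Act's systemic-risk compute threshold.
# The per-model figures below are hypothetical placeholders, not published
# estimates for any real model.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the Act; may be revised


def involves_systemic_risk(training_flops: float) -> bool:
    """A GPAI/GenAI model is presumed to involve systemic risk when its
    cumulative training compute exceeds the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical order-of-magnitude figures, for illustration only.
estimated_training_compute = {
    "frontier model A": 2e25,
    "frontier model B": 5e25,
    "smaller open model": 1e23,
}

for model, flops in estimated_training_compute.items():
    status = "systemic risk" if involves_systemic_risk(flops) else "below threshold"
    print(f"{model}: {flops:.0e} FLOPs -> {status}")
```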

3. Related Work

Research on the privacy and security of AI-driven smart toys and home devices has accelerated in recent years, reflecting growing concerns about technology entering children’s lives. Early studies identified various vulnerabilities in internet-connected toys and highlighted how they could impact children’s privacy. For example, a recent multi-device analysis by Feldbusch et al. [10] examined a dozen smart toys available in the EU market and revealed insufficient privacy protections and significant transparency issues. They found widespread behavioral profiling of children, a vulnerable group deserving specific protection under the GDPR, through the collection of extensive data analytics combined with unique identifiers. Companion mobile apps for toys often requested unnecessary and sensitive permissions (e.g., access to location, contacts, or the microphone), and local network communications initiated by the toys were frequently unencrypted. In some cases, toys even transmitted user Wi-Fi credentials in plaintext over unencrypted HTTP during setup, creating serious security risks. Manufacturers also failed to declare hardware or software support lifecycles, which raises long-term security concerns if devices become unsupported. Data from these devices were often sent to third-party services or stored in various regions (including countries, such as China, that the EU deems not to offer an adequate level of data protection) without clear disclosure. Privacy policies tended to be vague, unspecific, or even entirely missing for some toys. Consequently, compliance with the GDPR was found to be only partial across many products; key shortcomings included the lack of a privacy policy in the user’s language and a lack of comprehensiveness in what was provided.
Not all emerging work is focused purely on risks; some explore positive applications of AI in child-oriented devices. For instance, Udayagiri et al. [11] developed an AI-driven soft toy aimed at helping identify infants at risk of developmental delays. The toy in their study collected interaction data (e.g., touches, grasps) via soft sensors and used a machine learning model to automatically assess motor development, achieving over 95% accuracy in detecting possible motor delay indicators. This example, though in the healthcare context, shows that AI-driven smart toys could offer substantial benefits when used responsibly and with proper safeguards. It underscores the importance of balancing innovation with ethics and compliance.
On the other hand, some studies have focused on smart toys and their impact on children. McStay and Rosner [12] surveyed 1000 UK parents to gauge the social acceptability of emotional AI in toys (“emotoys”) and how such toys should be governed. Emotional AI allows devices to detect, learn from, and interact with human emotional life through data such as words, images, facial expressions, and biosignals. The survey revealed mixed feelings; 43% of parents saw emotion-sensing wearables as potentially helpful in parenting, whereas 59% found them intrusive. In particular, 80% of parents expressed concern about who would access their child’s data. Parents expressed a preference for clear information about data collection on toy packaging or via in-app notices before purchase, rather than complex terms and conditions. The authors also interviewed experts (industry practitioners, regulators, NGO representatives, academics); while some saw potential benefits for child well-being, there was strong consensus on the critical need for adequate governance of such technologies. The growing significance of smart toys is underlined by market trends—global smart toy revenues tripled between 2018 and 2023, from about USD 6 billion to USD 18 billion.
Recent literature also calls attention to broader AI trends that could influence smart home devices. One such trend is the rise of Large Model-Based Agents (LMAs), which are general-purpose AI agents powered by large language models or other large-scale models. Wang et al. [13] provide a comprehensive survey of LMA technologies. They discuss how future autonomous LMAs may communicate and collaborate with minimal human intervention, and they enumerate new security and privacy challenges that arise in multi-agent settings. For example, LMAs operating in a home environment could coordinate tasks or share data between devices, potentially magnifying privacy risks if not properly controlled. While our work examines single-device cases, the issues we identify (like lack of transparency and data safeguards) would be even more critical in ecosystems of connected AI agents. The LMA perspective suggests that ensuring accountability and secure cooperation in multi-agent IoT scenarios will be an important area of future research and regulation.
Another emerging area is the use of blockchain technology to enhance IoT privacy and security. Researchers have proposed blockchain-based solutions to provide decentralized, tamper-evident data management for IoT devices. The inherent properties of blockchain (immutability, distributed consensus, and cryptographic verification) can address certain IoT challenges by ensuring data integrity and enabling transparent auditing of data access. For instance, recent studies have integrated blockchain with smart toys or wearable devices to give users more control over their data. Blockchain can allow parents and users to track how data flow from a smart toy and to enforce consent for data sharing through smart contracts, thereby enhancing trust. However, these approaches also face usability and scalability hurdles (e.g., the complexity of user understanding and the overhead of blockchain operations). Nonetheless, the exploration of blockchain–IoT convergence indicates a possible path for improving compliance; by design, such systems could fulfill transparency and consent requirements in a verifiable way.
All in all, the literature highlights both persistent shortcomings in smart toy privacy and compliance and the emergence of potential remedies. Our study extends this work by analyzing specific child-facing AI toys currently available in the EU market through the lens of the GDPR and the AI Act, identifying where obligations are unmet and where concrete improvements are required.

4. Methodology

To determine the current state of privacy and transparency in the functionality of AI-driven consumer robots available to European consumers, we conduct a case study analysis of AI-integrated smart home devices that are sold in Spain via official websites or major e-commerce platforms. A preliminary search of devices with these capabilities yields the following products: Loona [14], RUX AI Desktop [15], and Enabot Ebo X [16]. The selected devices stand out because of their market relevance and child-facing design: Loona is explicitly marketed as a smart toy for children, RUX AI is an interactive “pet” robot with entertainment features, and Ebo X is a family companion robot that also advertises child-friendly capabilities. Each device incorporates advanced AI functionalities (notably, all integrate generative AI services like ChatGPT for interactive dialogue) and thus represents the type of high-risk AI IoT product that falls under new regulatory scrutiny. By selecting these three cases, we aimed to capture a representative snapshot of the compliance challenges in this emerging product category, rather than being exhaustive.
Our methodology consists of the following steps for each product:
  • Capability Analysis: We analyze the capabilities and features of the product according to the documentation, such as the product description or manuals on the official website.
  • Risk Enumeration based on Capabilities: We study the risks that could emerge from the capabilities described in Step 1.
  • Mobile Application (App) Download: We obtain the official app for the device from the Google Play Store. Each of these robots is operated or configured through an Android app provided by the manufacturer.
  • Policy Parsing: We access and download the privacy policy available within the app (or linked through the app) and perform an initial parsing for any information related to AI usage. In particular, we search the text for keywords such as ai, artificial intelligence, chatgpt, machine learning, and algorithm to see whether the policy explicitly addresses the AI functionalities of the device (a sketch of this step follows this list).
  • Manual Policy Review: We then read each privacy policy in full, from start to finish, to identify disclosures regarding the device’s data practices, with special attention to provisions that the GDPR and AI Act would require, as highlighted in Section 2. This includes noting whether the policy explained the purpose of data collection, listed data recipients (especially any transfers outside the EU), referenced automated decision making or profiling, provided contact information for data protection inquiries (such as a Data Protection Officer), and addressed the handling of minors’ data (age limits, parental consent mechanisms).
  • Feature and Documentation Cross-Check: We compare the privacy policy contents against the device’s known features (as in Step 1) to see whether all AI-related features are covered; the sketch below also illustrates this step. For instance, if a robot is known to include face recognition or a voice assistant like Alexa, we check whether the policy mentions those aspects. We also note the policy’s last update date (if provided) as an indicator of how recently the manufacturer may have considered new regulations.
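To illustrate the policy parsing (Step 4) and the feature cross-check (Step 6), the following minimal Python sketch counts keyword hits in a policy text and lists advertised AI features that the policy never mentions. It assumes each policy has been saved as a plain-text file; the file name and feature set shown are hypothetical placeholders, not artifacts from our study.

```python
# A minimal sketch of the policy-parsing and cross-check steps. The file name
# and feature list are hypothetical placeholders.
import re

# Keyword list from Section 4.
AI_KEYWORDS = ["ai", "artificial intelligence", "chatgpt",
               "machine learning", "algorithm"]


def scan_policy(policy_text: str, keywords=AI_KEYWORDS) -> dict:
    """Count whole-word, case-insensitive occurrences of each keyword."""
    return {
        kw: len(re.findall(r"\b" + re.escape(kw) + r"\b",
                           policy_text, flags=re.IGNORECASE))
        for kw in keywords
    }


def missing_disclosures(advertised_features: set, policy_text: str) -> set:
    """Return advertised AI features that the policy never mentions."""
    text = policy_text.lower()
    return {f for f in advertised_features if f.lower() not in text}


if __name__ == "__main__":
    with open("loona_privacy_policy.txt", encoding="utf-8") as fh:
        policy = fh.read()
    print(scan_policy(policy))  # keyword hit counts
    print(missing_disclosures({"ChatGPT", "Amazon Lex", "face recognition"},
                              policy))
```

Whole-word matching keeps a short keyword such as “ai” from firing inside unrelated words (e.g., “maintain”).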

5. Results

5.1. Capability Analysis

5.1.1. Loona Smart Toy

A product of the Chinese company Beijing Ke Yi Technology Co., Ltd., Loona is a pet robot (petbot) with many capabilities, including ChatGPT-4 integration, as well as toy capabilities such as bullfighting games. Its promotional videos feature children playing and interacting with these toy capabilities. As per the documentation, the product is named “Loona smart toy” and has the model number “KY004LN01”. The official webpage [14] states the following:
  • Loona can recognize the whole family to make sure nobody is left out and everyone feels special.
  • Keep the fun going with Loona’s app-enabled games that engage and entertain children for hours.
  • Also, Loona integrates Amazon Lex, a GenAI tool and “a fully-managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy AI chatbots and voice bots in applications. (Businesses) can integrate it with foundation and large language models to answer complex questions using data from (their) enterprise knowledge repositories.” [17]. When in camera mode, Loona uses Amazon Kinesis Video Streams.
Additionally, with the goal of protecting user data at the hardware level, Loona uses a data encryption chip to guarantee data security locally. The documentation indicates that, through its use of Amazon Web Services (AWS), the product complies with the regulatory requirements of “virtually every regulatory agency around the globe”, including COPPA and GDPR [18]. Loona is available to European users on its official page [14] and Amazon Spain.

5.1.2. RUX AI Desktop

RUX AI Desktop is a similar desktop robot that integrates ChatGPT. According to the official page, it also has emotional interaction features. Besides being sold on the company’s website, the robot is available to European users on Amazon Spain and AliExpress [15,19,20]. On the official website, it is listed in the Entertainment Robot Toys section. The website also states: “Rux Robot is attentive to your movements. It can read your gestures and respond accordingly, adding an interactive element to your workspace”.

5.1.3. Enabot Ebo X

As described on its official webpage, Ebo X is an AI family companion robot [16]. The robot integrates GPT-4o mini and Alexa, among other services, and offers safety and health monitoring features such as detecting falls by elderly people, strangers, and cries for help. The product is available on Amazon Spain as well as to European users on the official website [16,21].

5.2. Risk Enumeration Based on Capabilities

From the features of the devices described above, we identify several risks that, we believe, trigger compliance obligations under the AI Act. The relevant features include:
  • GenAI: The devices discussed include GenAI capability, given that they all integrate ChatGPT. In the case of Ebo X and Loona, their documentation states explicitly that they use GPT-4o technology (Ebo X uses the mini version). As explained in Section 2, GPT-4 is considered to involve systemic risk. Given that GPT-4o is the successor of GPT-4, we assume that its training compute is at least the 10^25 FLOPs attributed to GPT-4, thus also involving systemic risk.
  • Toys: While Loona is explicitly considered a toy by the manufacturer, RUX AI Desktop features playing capabilities, making it very likely to involve child interaction. AI-driven toys are considered high-risk AI systems by default in the AI Act (Preamble, 50) [6].
  • Emotion detection: RUX AI features emotion-based interactions. With this capability, RUX AI is considered a high-risk AI system.

5.3. Privacy Policy Analysis

5.3.1. Characteristics of Privacy Policies

We were able to access Loona’s privacy policy through its app (https://play.google.com/store/apps/details?id=com.keyitech.loona (accessed on 24 July 2025)). We found that the policy was only available in English, even though the Android phone’s language was set to Spanish and the app was accessed in Spain. As for RUX AI, its companion app is called the LeTianPai app (https://play.google.com/store/apps/details?id=com.letianpai.robot (accessed on 24 July 2025)). The app asks the user to select either a Global or a China-based setting. After selecting the Global setting, we were able to access the privacy policy, which was only in English. With respect to EBO X, we obtained the privacy policy from the app (https://play.google.com/store/apps/details?id=com.enabot.ebox.intl (accessed on 24 July 2025)), which, like the other policies, was only available in English.

5.3.2. Keyword Parsing

We process each privacy policy by searching for the list of relevant keywords mentioned in Section 4. Only the text of Loona’s privacy policy includes a match, for the keyword “ChatGPT”. However, the policy does not present ChatGPT as an integrated service; it merely refers to it as a third party.

5.3.3. Manual Review

Our manual analysis of Loona’s privacy policy indicates that it was last updated in September 2023, almost two years before the writing of this paper. The policy indicates that, for users located within the EU, data will be processed within the EU. Additionally, it states that the information relevant to facial recognition is stored locally on the device. Moreover, the policy defines minors aged 16 or younger as children and requires additional measures when they sign up, such as a guardian’s consent and email address. The policy has links that identify:
  • requested permissions within the app;
  • third-party entities and links to these entities’ privacy policies (including ChatGPT);
  • data collected by Keyi Robot—the manufacturer of Loona.
RUX AI’s policy states, with regard to minors, that “our policy does not require access to personal information of minors or the sending of any promotional materials to such groups” and that “Renhejia does not seek or attempt to seek to receive any personal information from minors”. It also mentions that users’ personal data are stored on secure servers and protected in controlled facilities. We highlight that, to the best of our knowledge, the company is based in China and does not have a European branch. The policy lacks a publication date.
Regarding Enabot EBO X’s policy, similar to RUX AI’s, it lacks an issuance date. Also, the privacy policy makes no mention of Alexa, which is integrated into the services of the robot. Alexa typically involves what is known as Natural Language Understanding and may involve queries that require remote processing, such as asking questions and getting answers, e.g., about the weather. Interestingly, the policy only mentions third-party platforms in the context of sharing photos and videos. We highlight that the policy does not include the name of a DPO or any contact address, although a section titled “Contact Us” refers the user to “the page of privacy questions”, for which no URL is given within the policy. The manufacturer, Enabot, indicates that “personal information collected and generated … will be saved on the system server”. However, to the best of our knowledge, this company is also based in China. Finally, as with Loona, the information relevant to facial recognition is stored locally on the device. Table 2 summarizes the findings from the privacy policies.

5.3.4. Feature and Documentation Cross-Check

As indicated in Section 4, we compare each device’s advertised features and technical documentation with the disclosures provided in its privacy policy as required by GDPR and the AI Act.
Regarding Loona, the documentation states integration of ChatGPT, Amazon Lex, and Amazon Kinesis Video. The privacy policy, however, does not describe these AI functions; ChatGPT only appears in a list of third-party services, not as a core feature. The policy neither mentions AI decision making or profiling nor highlights the toy capabilities as advertised.
For RUX AI, its documentation describes ChatGPT integration and emotional interaction features. Yet, the privacy policy makes no mention of AI or ChatGPT. It includes a generic denial of collecting minors’ data but no parental consent workflow.
Regarding Ebo X, the documentation describes GPT-4o mini, Alexa integration, facial recognition, and safety/health monitoring. The privacy policy makes no mention of GPT-4 or Alexa. Although the documentation promotes child/family use, the policy does not address minors or parental consent.
This comparison reveals a systematic gap: while the documentation markets advanced AI and child-facing features, the privacy policies omit them, leaving users uninformed about how personal data and AI interactions are handled. This directly affects compliance with GDPR transparency provisions (Articles 12–13, Recital 38) and AI Act requirements (Articles 13.1 and 50.1). We summarize each potential compliance gap in Table 3.

6. Discussion

Our analysis reveals consistent gaps between the features advertised for Loona, RUX AI, and Ebo X and the disclosures in their corresponding privacy policies. While documentation highlights advanced AI functionalities such as ChatGPT or GPT-4o mini, Alexa, emotion recognition, and facial recognition, these elements are either downplayed or absent from privacy policies. This omission raises concerns about alignment with transparency and accountability requirements central to both the GDPR and the AI Act. In particular, the lack of disclosure about automated decision making and profiling is at odds with GDPR Articles 12–13 and Recital 38, and with AI Act Articles 13.1 and 50.1, which stress the need to inform users when interacting with AI systems.

6.1. Regulatory Implications

Compliance in this domain involves both company responsibility and oversight by supervisory bodies. Under the GDPR, national Data Protection Authorities (DPAs) oversee data protection practices, while under the AI Act national AI supervisory authorities oversee compliance with the AI Act. The devices we analyzed are available in the EU, yet the policy gaps we observed suggest that child-facing AI IoT products may not currently be a central focus of enforcement. A stronger emphasis on this category could help ensure that children’s rights are consistently safeguarded.
It is also important to recognize that the AI Act has been under development since 2021 and entered into force more than a year before this study. Its transparency requirements (Articles 13.1 and 50.1) have therefore been visible to industry for some time. The fact that the policies we reviewed do not yet reflect these provisions indicates that adoption is still in progress. As regulatory expectations become clearer and supervisory bodies provide additional guidance, closer alignment between product documentation, privacy policies, and legal requirements will be essential.

6.2. Children and AI as a “Knowledgeable Friend”

The risks are particularly significant for children, who are not only data subjects but also interact continuously with AI systems. Devices like Loona and RUX AI encourage conversational interactions with ChatGPT, positioning it as a companion or even a “knowledgeable friend.” This dynamic raises both privacy and psychological concerns. Children may disclose highly sensitive personal or family information without recognizing that they are interacting with an AI system whose responses are generated, logged, and potentially processed externally. Moreover, reliance on such systems for guidance or emotional support could influence children’s social development and trust formation in ways that parents and regulators may not anticipate. These issues underline the importance of Recital 38 of the GDPR, which emphasizes that children merit special protection, and reinforce the AI Act’s demand for transparency in interactions with AI systems.

6.3. Connection to Literature and Emerging Trends

Our findings confirm and extend earlier work. Feldbusch et al. [10] documented inadequate privacy protections in EU-market toys, while McStay and Rosner [12] demonstrated parental concerns about emotional AI and the governance of “emotoys.” The persistence of similar patterns in newer devices indicates that compliance challenges remain unresolved despite the maturing regulatory landscape.
Emerging trends further magnify these risks. Large Model-Based Agents (LMAs), as surveyed by Wang et al. [13], represent the next stage of AI integration into IoT, where multiple devices coordinate autonomously. If single-device policies already fail to disclose AI features, multi-agent ecosystems could make transparency and accountability even harder. This shows the importance of embedding agent-based governance and transparency frameworks into regulatory and industry practice. In parallel, blockchain–IoT integration has been explored as a technical mechanism to improve auditability and consent management, offering tamper-evident records of data flows and consent agreements. While our focus remains on legal compliance, such technical approaches highlight complementary directions for improving trust in AI-enabled consumer ecosystems.

6.4. Recommendations for Practice

Based on our analysis, we recommend that manufacturers of child-facing AI IoT devices:
  • Publish dedicated AI transparency sections in privacy policies, explicitly identifying integrated AI models, their purposes, and their data use.
  • Provide localized, child-friendly disclosures in the languages of the markets where the devices are sold.
  • Implement verifiable parental consent mechanisms consistent with GDPR Article 8.
  • Establish independent audits and conformity assessments of AI systems, with results made available to supervisory authorities.
  • Develop standardized disclosure templates across the industry to facilitate comparison by regulators and consumers.
These steps would help bridge the current gap between product marketing and regulatory obligations, while also building trust among parents and caregivers.

6.5. Limitations

This study has several limitations. First, our analysis is based solely on documentation and privacy policies, without testing the devices or capturing real data flows. Future research should incorporate technical audits of network traffic and on-device behavior to validate policy claims. Second, the scope was limited to three products marketed in the EU; although representative, they cannot capture the full diversity of child-facing AI IoT devices. Expanding the sample size would strengthen the generalizability of the findings. Third, our analysis focused on the GDPR and AI Act; additional frameworks such as COPPA in the United States or other national laws may raise further requirements. Finally, we acknowledge that our expertise lies primarily in technical domains. The legal analysis presented here draws on interdisciplinary study and engagement with privacy-related research rather than formal legal training. As such, our interpretations aim to highlight probable compliance risks and stimulate discussion, but they should not be read as definitive legal judgments.

7. Conclusions and Future Work

This study examined three AI-enabled, child-facing devices available in the EU and found consistent gaps between their advertised features and their privacy policy disclosures. Core AI functionalities were omitted or only superficially acknowledged, minors’ protections were limited, and policies were not localized, indicating a misalignment with GDPR and AI Act obligations.
The results point to an urgent need for stronger oversight of child-facing AI IoT products by supervisory authorities, as well as more proactive adaptation by industry. Beyond minimum compliance, manufacturers should adopt practical safeguards that make AI use transparent, accessible, and child-appropriate.
Future work should expand the scope of analysis to more devices, validate policy claims through technical audits, and explore interdisciplinary perspectives on the psychosocial effects of children interacting with AI systems such as ChatGPT.

Author Contributions

Conceptualization, M.R. and Y.E.; methodology, M.R.; validation, M.R.; formal analysis, M.R.; investigation, M.R.; resources, M.R.; writing—original draft preparation, M.R.; writing—review and editing, M.R. and Y.E.; supervision, Y.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Yasser Essa was employed by Sustainable Hitecnova S.L. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Number of Users of Smart Homes Worldwide from 2019 to 2028. Available online: https://www.statista.com/forecasts/887613/number-of-smart-homes-in-the-smart-home-market-in-the-world (accessed on 29 July 2025).
  2. Smart Home Market by Communication Protocol (Wi-Fi, Zigbee, Z-Wave, Bluetooth, and Thread), by Smart Home Hubs (Standalone Hubs and Built-in Hubs), by Voice Assistants Integration (Amazon Alexa, Google Assistant, Apple Siri, and Others), by Product Type (Smart Lighting, Smart Home Security & Surveillance, Smart Entertainment, and Smart Appliances), by Smart Home Compatibility with Smartphones (iOS and Android) and Others—Global Opportunity Analysis and Industry Forecast, 2024–2030. Available online: https://www.nextmsc.com/report/smart-home-market (accessed on 30 July 2025).
  3. CX Lab. LG Smart Home AI Agent. YouTube video, 2024. Available online: https://www.youtube.com/watch?v=fQVEFCJRWcc (accessed on 1 August 2025).
  4. LG Ushers in ‘Zero Labour Home’ with Its Smart Home AI Agent at CES 2024. Available online: https://www.lg.com/sg/about-lg/press-and-media/lg-ushers-in-zero-labour-home-with-its-smart-home-ai-agent-at-ces-2024/ (accessed on 30 July 2025).
  5. Artificial Intelligence in Smart Home Technology Market Size, Share, Trends & Competitive Analysis by Type: AI-Powered Smart Speakers, AI-Enabled Security Systems, AI-Based Home Automation Hubs, AI-Driven Smart Appliances, AI-Integrated Lighting Systems by Technology: By Application: By Connectivity: By End-User: By Deployment Mode: By Regions, and Industry Forecast, Global Report 2025–2033. Available online: https://www.futuredatastats.com/artificial-intelligence-in-smart-home-technology-market?srsltid=AfmBOor_htAEDclmrZB5uZkVQ9IHyI8teZ_dk_dcykmorL1_JGYk9Ec_ (accessed on 29 July 2025).
  6. European Parliament. EU AI Act: First Regulation on Artificial Intelligence. 2023. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 5 July 2025).
  7. Soler, G.J.; Tolan, S.; Hupont, T.I.; Fernandez, L.D.; Charisi, V.; Gomez, G.E.; Junklewitz, H.; Hamon, R.; Fano, Y.; Panigutti, C.; et al. AI Watch: Artificial Intelligence Standardisation Landscape Update; Publications Office of the European Union: Luxembourg, 2023. [Google Scholar]
  8. European Parliament. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 5 July 2025).
  9. Sastry, G.; Heim, L.; Belfield, H.; Anderljung, M.; Brundage, M.; Hazell, J.; O’Keefe, C.; Hadfield, G.K.; Ngo, R.; Pilz, K.; et al. Computing power and the governance of artificial intelligence. arXiv 2024, arXiv:2402.08797. [Google Scholar] [CrossRef]
  10. Feldbusch, J.; Pavliv, V.; Akbari, N.; Wagner, I. No Transparency for Smart Toys. In Annual Privacy Forum; Springer: Cham, Switzerland, 2024; pp. 203–227. [Google Scholar]
  11. Udayagiri, R.; Yin, J.; Cai, X.; Townsend, W.; Trivedi, V.; Shende, R.; Sowande, O.F.; Prosser, L.A.; Pikul, J.H.; Johnson, M.J. Towards an AI-driven soft toy for automatically detecting and classifying infant-toy interactions using optical force sensors. Front. Robot. AI 2024, 11, 1325296. [Google Scholar] [CrossRef] [PubMed]
  12. McStay, A.; Rosner, G. Emotional artificial intelligence in children’s toys and devices: Ethics, governance and practical remedies. Big Data Soc. 2021, 8. [Google Scholar] [CrossRef]
  13. Wang, Y.; Pan, Y.; Su, Z.; Deng, Y.; Zhao, Q.; Du, L.; Luan, T.H.; Kang, J.; Niyato, D. Large model based agents: State-of-the-art, cooperation paradigms, security and privacy, and future trends. IEEE Commun. Surv. Tutor. 2025. [Google Scholar] [CrossRef]
  14. Meet Your Petbot Loona! Available online: https://keyirobot.com/pages/loonadetail (accessed on 29 July 2025).
  15. Robot Compañero IA Rux Blanco. Available online: https://eu.robotshop.com/es/products/robot-companero-ia-rux-blanco (accessed on 1 August 2025).
  16. EBO X-AI Family Companion Robot. Available online: https://www.enabot.com/pages/ebo-x-family-robot-companion (accessed on 1 August 2025).
  17. Amazon Lex—AI Chat Builder. Available online: https://aws.amazon.com/lex/ (accessed on 29 July 2025).
  18. User Manual for Loona. Available online: https://keyitech.zendesk.com/hc/en-us/article_attachments/10800256513565 (accessed on 29 July 2025).
  19. Desktop AI Robot Multilingual AI Personal Assistant, Gift. Available online: https://www.amazon.es/-/en/Desktop-AI-Multilingual-Robot-Assistant/dp/B0CYPLBYM3/ref=sr_1_2?crid=1Y3LM10T98L33&dib=eyJ2IjoiMSJ9.M1BRCmJISyGT7U0NaacCtp_QWlkA75vyBpJnMwCTYawQGFT0PXCWts1LZ4Ve0kEIDzYmLTJkGw9FuRqEDpVLqS_oBr0oeeQ-kX8vwVLILKn1eIjoL1e5zlME3Oc4avWs1UD-6DSXoN3zsvU1eun8n9uQi0ALj14XiMRKW-njX8wh9D2pDsLKOpqmTYkzjQu19UjdXOloP5c8_4WcYqLyqlcE9MyXDz3iaXpBYVM-kEM08SaF_nst4NMJCpSvTFYFL5jmdwa04NLIvwrI3_wo7LRRcR34nX65iUOqZIa2QIw.5b08-FNZ7z4VS59QugWtBYy_efZ1d8qrGI-vW78v0_A&dib_tag=se&keywords=Robot+Compa%C3%B1ero+IA+Rux&qid=1754009862&sprefix=robot+compa%C3%B1ero+ia+rux+%2Caps%2C79&sr=8-2&language=es_ES (accessed on 1 August 2025).
  20. AI Rux RobotLetianpai Robot Inteligencia Artificial Acompañar Juguete Programación Monitoreo Remoto de Escritorio. Available online: https://es.aliexpress.com/item/1005008691830097.html?spm=a2g0o.productlist.main.1.15d8b99ekyiRfT&algo_pvid=ae1204a1-1123-4d96-a0b5-1c5a0e72fafc&pdp_ext_f=%7B%22order%22%3A%222%22%2C%22eval%22%3A%221%22%2C%22fromPage%22%3A%22search%22%7D&utparam-url=scene%3Asearch%7Cquery_from%3A%7Cx_object_id%3A1005008691830097%7C_p_origin_prod%3A (accessed on 1 August 2025).
  21. Enabot EBO X, Robot Móvil Inteligente para Vigilancia en el Hogar con Mapas y Navegación, Cámara de Vigilancia 4K Estabilizada con Visión Nocturna, Altavoz Premium con Alexa Integrada. Available online: https://www.amazon.es/Enabot-EBO-Inteligente-Vigilancia-estabilizada/dp/B0CJBCRDKV?ref_=ast_sto_dp (accessed on 1 August 2025).
Table 1. Summary of relevant GDPR and AI Act provisions.
Law | Article/Recital | Summary
GDPR [8] | Article 8 | Processing of children’s data is lawful from age 16 for information society services; below that age, guardian consent is required.
GDPR [8] | Article 12.1 | Controllers must provide processing information in a concise, transparent, and accessible form, using clear language, especially for children.
GDPR [8] | Article 13.1 | Controllers collecting personal data must disclose their identity, contact details, processing purposes, and any intention to transfer data outside the EU.
GDPR [8] | Article 13.2 | Controllers must inform data subjects of automated decision making, including profiling.
GDPR [8] | Recital 38 | Children require special protection for their personal data, given their limited ability to assess risks, especially in profiling contexts.
AI Act [6] | Article 13.1 | High-risk AI systems must be designed and developed to ensure sufficient transparency in their operation.
AI Act [6] | Article 50.1 | To ensure transparency and prevent misleading interactions, providers must inform people when they interact with an AI system intended for natural persons, unless this is obvious.
Table 2. Comparison of privacy policy features across case study devices. ✓ = provided; – = absent. Compared fields: storage in the EU; policy for users under 16; mention of automated decision making/AI; policy date provided; local-language version; name/contact of the DPO.
Table 3. Compliance of Loona, RUX AI, and Ebo X privacy policies with selected GDPR and AI Act provisions. ✓ = compliant; △ = partial compliance; – = absent. Provisions assessed: GDPR Articles 8, 12.1, 13.1, and 13.2 and Recital 38; AI Act Articles 13.1 and 50.1.
