Article

Optimizing Internet of Things Services Placement in Fog Computing Using Hybrid Recommendation System

1
Miracl Lab, Higher Institute of Computer Science and Communication Technologies of Sousse, University of Sousse, Sousse RJQ4+5WW, Tunisia
2
Efrei Research Lab, Paris Panthéon Assas University, 94800 Villejuif, France
3
Higher Institute of Computer Science and Multimedia of Sfax, University of Sfax, Sfax 3021, Tunisia
4
ESEN, University of Manouba, Manouba CP 2010, Tunisia
*
Authors to whom correspondence should be addressed.
Future Internet 2025, 17(5), 201; https://doi.org/10.3390/fi17050201
Submission received: 23 March 2025 / Revised: 22 April 2025 / Accepted: 24 April 2025 / Published: 30 April 2025

Abstract:
Fog Computing extends Cloud computing capabilities by providing computational resources closer to end users. Fog Computing has gained considerable popularity in various domains such as drones, autonomous vehicles, and smart cities. In this context, the careful selection of suitable Fog resources and the optimal assignment of services to these resources (the service placement problem (SPP)) is essential. Numerous studies have attempted to tackle this issue. However, to the best of our knowledge, none of the previously proposed works took into consideration the dynamic context awareness and the user preferences for IoT service placement. To deal with this issue, we propose a hybrid recommendation system for service placement that combines two techniques: collaborative filtering and content-based recommendation. By considering user and service context, user preferences, service needs, and resource availability, the proposed recommendation system provides optimal placement suggestions for each IoT service. To assess the efficiency of the proposed system, a validation scenario based on Internet of Drones (IoD) was simulated and tested. The results show that the proposed approach leads to a considerable reduction in waiting time and a substantial improvement in resource utilization and the number of executed services.

1. Introduction

The increasing use of Internet of Things (IoT) devices has significantly improved the quality of human life in various sectors such as intelligent transport, smart healthcare, automated industrial processes, and more [1]. Among these devices, drones have become extensively used across multiple applications, including surveillance, object tracking, disaster investigation, and environmental monitoring. The term “Internet of Things” was coined by Kevin Ashton [2]. It encompasses the general idea of objects, especially everyday objects, that are made readable, recognizable, localizable, addressable, and controllable via the Internet [3]. The Internet of Drones (IoD) is a revolutionary concept that aims to integrate drones into IoT networks, where they function as IoT devices. The rapid development of IoT has led to an era in which billions of connected devices interact independently, providing a large number of services [4]. However, this quantitative explosion of connected objects and associated services has created a major challenge: how can this exponential mass of services be placed effectively while respecting crucial imperatives such as minimal latency, information confidentiality, and efficient bandwidth management?
When applied to the specific needs of IoT, traditional Cloud computing infrastructures, which have historically played a central role in implementing IoT services, face a range of challenges. These challenges include geographical constraints, latency delays, and increasing concerns regarding data confidentiality. This raises questions about the effectiveness of traditional Cloud models in the IoT environment. To address these challenges, Fog Computing (FC) has emerged as an innovative response, offering a decentralized approach that brings data processing closer to the place where it is generated, as close as possible to the connected objects. Although FC presents numerous benefits like enhancing the implementation of IoT applications for time-sensitive tasks by deploying Fog devices close to drones in IoD–Fog networks, it also poses complex challenges when it comes to its application in the IoD context. One of these challenges is determining the strategic placement of IoD services within the distributed and very dynamic environment. The placement of IoD services in the Fog network is critical, as it directly impacts the performance, security, latency, and efficiency of IoD applications. Therefore, it is crucial to determine the optimal placement of IoD services in FC. This helps minimize latency, optimize available resources, and improve application responsiveness. However, numerous challenges must be addressed, including network dynamics, workload variability, and the security of sensitive data exchanged between drones and processing centers.
In this work, the main objective is to perform an in-depth analysis of the challenges related to the placement of IoD services in FC. This analysis should consider the heterogeneous nature of drones, the specific requirements of applications, and the resource constraints of the Fog network. It is crucial to understand how these factors interact and how they impact the overall performance of the system. We therefore propose a dynamic services placement approach that considers real-time workload variations. This approach also explores intelligent resource management mechanisms to ensure efficient use of the computing and storage capacities available in the Fog.
In summary, we propose an approach that provides practical, informed recommendations for optimizing the performance and latency of IoD applications deployed in FC infrastructure. By delving into these challenges, the proposed approach aims to maximize the benefits of FC in the specific context of IoD services. The main contributions of this article are as follows. Firstly, it introduces a conceptual computational framework based on the MADE-k autonomous model. Secondly, it leverages an extended version of the Imperialist Competitive Algorithm (ICA) as a meta-heuristic approach to tackle the SPP, offering promising avenues for optimization. Additionally, a novel hybrid recommendation system is introduced to address diverse user criteria for SPP resolution, aiming at the optimal utilization of Fog resources and latency minimization. Finally, the effectiveness of the proposed scheme is illustrated through comprehensive experiments.
The rest of this paper is organized as follows: Section 2 highlights the FC features and the service placement particularities, and provides an overview of recent works proposed to solve the SPP. The proposed conceptual framework is described in Section 3. In Section 4, we illustrate the process and advantages of the proposed solution. Section 5 presents the results and discussion. Finally, Section 6 concludes the article with a summary of the work and future scope.

2. Related Work

2.1. Fog Computing Architecture

To overcome the limitations of Cloud computing, various paradigms such as Edge computing and FC have been proposed [5]. The primary concept of Edge computing is to establish a hybrid architecture that integrates peer-to-peer and Cloud servers with mobile terminals [6]. On the other hand, FC is often seen as an extension and evolution of the Cloud toward a more distributed model that moves closer to end users and leverages network nodes to perform computation near where it is needed. The FC paradigm was proposed by Cisco in 2012 [7]. As such, it is defined as “an intermediate computing architecture between IoT devices and the Cloud, providing computing and storage resources at the Edge of the network” [8]. The comparison between Cloud, FC, and Edge computing is summarized in Table 1.
Various architectures of FC have been proposed, and most of them are based on three tiers. The first tier is the Edge tier, which consists of IoT devices equipped with sensors, actuators, and computing capabilities [9]. Edge refers to the processing of data close to the device where it is generated, often at the device itself or in proximity. Additionally, Edge devices can perform initial processing tasks using their onboard processing capabilities, such as data filtering, aggregation, or simple analysis directly on the device itself. Moreover, Edge computing resources like gateways or more powerful IoT devices may have more advanced processing capabilities. These capabilities enable them to run more complex algorithms or perform extensive data processing tasks directly at the Edge of the network. This allows Edge devices to filter, aggregate, or analyze the collected data before transmission to higher tiers for further processing or storage. These data can include environmental parameters (such as humidity, temperature, or motion), device statuses, or user interactions. The second tier is the Fog tier, which acts as an intermediary between the Edge and Cloud tiers. The Fog tier encompasses colonies or domains that contain various powerful node types, including Fog servers managed by an orchestrator. Here, more complex processing tasks can be performed compared to the Edge tier, allowing for deeper analysis of the collected data. Additionally, the Fog tier provides services closer to the end devices than the Cloud tier, reducing latency and bandwidth usage for certain applications. Unlike Edge computing, Cloud computing services like SaaS, IaaS, and PaaS can be extended by FC to the network Edge. Due to these stated features, FC is believed to be an efficient and promising computing paradigm for IoT compared to other associated computing models [6]. Finally, the Cloud tier is the top tier within the FC architecture. 
This level typically consists of powerful servers and storage systems located in remote data centers. The Cloud tier is in charge of storing and analyzing large volumes of data collected from the Edge and Fog tiers.
Figure 1 shows an overview of the FC architecture.

Fog Computing Features

The NIST [10] and the OpenFog consortium have identified several features that define Fog infrastructures, which are summarized below:
  • Proximity to end users and low latency: FC offers the advantage of being close to end users and having low latency. This is because Fog nodes are located close to end devices or data sources, allowing them to collect and process service requests, store the results at the Edge of the network, and send the responses to the service requester. As a result, Fog nodes (FNs) can reduce the amount of data transmitted across the network, minimizing the risk of data loss, thus minimizing the average response time. This feature renders FC well suited for delay-sensitive applications, such as live gaming traffic monitoring, among others.
  • Greater data control and privacy: Users have greater control over their data, thanks to the proximity of end devices with Fog nodes, rather than outsourcing them to distant Cloud data centers. In fact, FC enables data to be processed locally before transferring delay-tolerant data to remote servers [11].
  • Heterogeneity: Fog nodes are available in a variety of form factors and will be deployed in a wide variety of environments [12]. In fact, the heterogeneity of FNs refers to the diversity of devices and configurations in which these nodes are deployed. That is, they exist in a multitude of sizes, shapes, and hardware capabilities. These nodes can be complete servers dedicated to gateways, specialized IoT devices, etc. What is more, these nodes are deployed in a wide variety of environments, ranging from dense urban to remote rural areas, from industrial plants to transportation infrastructures, etc.
  • Support for real-time applications: FC can support real-time applications thanks to its ability to have close proximity to end devices, enabling faster data analysis and response times than conventional centralized data centers. These include applications such as virtual reality, augmented reality, traffic monitoring, telesurgery, etc. [11].
  • Decentralization and geo-distribution: Unlike the centralized architecture of Cloud computing, FC is a distributed architecture comprising huge, geographically distributed heterogeneous nodes covering numerous domains. This property enables location-based services and guarantees the provision of very low-latency services [11].
  • Autonomy and programmability: Fog infrastructures are characterized by autonomous decision making for the management of the applications deployed on it. In fact, the simplicity of programming and reconfiguration, made possible by virtualization, simplifies management and ensures the infrastructure’s ability to adapt to the changing dynamics of the environment.
  • Energy efficiency: It remains a challenge in the IoT environment, particularly regarding the energy consumption of large IoT devices. FC addresses this issue by empowering these devices to make intelligent decisions, such as toggling between on/off/hibernate states, ultimately leading to a reduction in overall energy consumption [13].
  • Cost: In the context of Cloud computing, where resources are generally billed according to usage (pay-per-use model), some applications may find it more advantageous to invest once in the acquisition of private Fog resources (a form of decentralized processing closer to users) rather than pay regularly for instances in the Cloud. This highlights an alternative economic consideration for specific applications.
  • Management of services: FC operates as an intermediary layer to furnish computational capabilities. This not only facilitates device control but also permits the customization of services tailored to the specific environment [14].
In Table 2, we compare some studied works based on their objectives among the above Fog characteristics. According to this table, we notice that most works consider the reduction in latency, energy, and cost, and only the works [15,16] consider data privacy and security.

2.2. Overview of Existing Solutions

The SPP in FC involves determining the nodes to which a specific task should be assigned and what parameters should be taken into account to evaluate the performance of the deployment. Indeed, the choice of a particular node among others can significantly impact various criteria such as overall system latency, response time, Quality of Service (QoS), etc. For example, consider a set of mobile IoT devices communicating with each other and their environment. The goal is to efficiently place the services offered by these objects on the Fog infrastructure with minimal latency. However, numerous complex constraints may influence the quality of the proposed placement. These limitations are related to the dynamic nature of the environment, generated mainly by the mobility of objects and network fluctuations. So, the wrong choice of node can affect system performance and thus user satisfaction.
To address the SPP for IoT applications on Fog resources, several solutions have been developed. Most approaches consider it as an optimization problem, as classified in Table 3 based on the used method and compared in Table 4 based on their achievements.
For example, the work [18] proposed an autonomous IoT service placement methodology named MADE (Monitoring, Analysis, Decision Making, and Execution), comprising four sequential steps. The process begins with real-time monitoring to evaluate the status of available resources and application services. Subsequently, requested services are prioritized according to application service deadlines. Following this, the Strength Pareto Evolutionary Algorithm II is utilized to make decisions about the placement of application services, treating it as a multi-objective optimization problem. Finally, the decisions crafted in the earlier phases are executed within a Fog environment.
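The deadline-driven prioritization step of such a MADE-style loop can be sketched as follows; the `ServiceRequest` fields, names, and units are illustrative assumptions rather than part of the MADE specification in [18]:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ServiceRequest:
    deadline: float                    # seconds until the service must finish
    name: str = field(compare=False)   # excluded from ordering
    cpu: int = field(compare=False)    # requested CPU (illustrative unit)

def prioritize_by_deadline(requests):
    """Analysis phase: return pending requests ordered by earliest deadline."""
    heap = list(requests)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

reqs = [ServiceRequest(5.0, "video-feed", 400),
        ServiceRequest(0.5, "collision-avoid", 200),
        ServiceRequest(2.0, "telemetry", 100)]
ordered = prioritize_by_deadline(reqs)  # tightest deadline first
```

The decision-making phase would then attempt to place requests in this order, so that deadline-critical services get first pick of Fog resources.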
In the work [17], to maximize the utilization of Fog resources and improve the Quality of Service (QoS), the authors developed a conceptual computing framework based on a Cloud–Fog combination. This integration introduces a Cloud–Fog control middleware designed to oversee and manage service requests while adhering to specific constraints. To tackle the SPP, an evolutionary algorithm based on the cuckoo search algorithm was proposed; cuckoo search was developed by Yang and Deb in 2009 as a population-based meta-heuristic [32]. Based on the MADE-k automatic control loop, which consists of four stages, Monitoring (M), Analysis (A), Decision Making (D), and Execution (E), services were placed in the current colony if resources were available; otherwise, they were allocated to the nearest available colony or, failing that, forwarded to the Cloud.
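The colony fallback policy described in [17] (current colony, then the nearest available neighbor colony, then the Cloud) can be sketched as follows; the `Colony` class and its single abstract capacity unit are simplifying assumptions for illustration, not the middleware of that work:

```python
from dataclasses import dataclass

@dataclass
class Colony:
    name: str
    capacity: int      # free capacity in abstract resource units (assumption)
    distance: int = 0  # hops from the requesting colony

    def can_host(self, demand: int) -> bool:
        return self.capacity >= demand

    def place(self, demand: int) -> str:
        self.capacity -= demand
        return self.name

def dispatch(demand, current, neighbors, cloud):
    """Place in the current colony, else the nearest neighbor colony with
    room, else fall back to the Cloud."""
    if current.can_host(demand):
        return current.place(demand)
    for colony in sorted(neighbors, key=lambda c: c.distance):
        if colony.can_host(demand):
            return colony.place(demand)
    return cloud.place(demand)

current = Colony("local", 0)
neighbors = [Colony("far", 5, distance=2), Colony("near", 5, distance=1)]
cloud = Colony("cloud", 10**6)
chosen = dispatch(3, current, neighbors, cloud)  # local is full -> "near"
```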
In the paper [19], the authors addressed the lack of resource provisioning approaches to utilize Fog-based computational resources. They introduced a conceptual FC framework to tackle the issue. They modeled the SPP for IoT applications over Fog resources as an optimization problem, considering the heterogeneity of applications and resources in terms of QoS attributes. Finally, they suggested a genetic algorithm as a heuristic to solve this problem.
The paper [20] presents two main contributions. Firstly, it introduces an optimization model that efficiently maps data streams into Fog nodes. The model takes into consideration the current load of the Fog nodes, as well as the communication latency between sensors and Fog nodes. Second, to handle the complexity of the problem, a scalable heuristic based on genetic algorithms was proposed.
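A minimal sketch of such a genetic algorithm is given below, for a toy instance with two data streams and three Fog nodes; the fitness function (total latency plus a load-imbalance penalty) and all numeric values are illustrative assumptions rather than the model of [20]:

```python
import random

# Toy instance (assumed values): latency[s][n] in ms, current node load in [0, 1].
latency = [[5, 20, 9], [12, 4, 7]]
load = [0.3, 0.6, 0.1]
N_STREAMS, N_NODES = 2, 3

def fitness(mapping):
    """Lower is better: total sensor-to-node latency plus a penalty on the
    most loaded node after placement."""
    total_latency = sum(latency[s][n] for s, n in enumerate(mapping))
    new_load = load[:]
    for n in mapping:
        new_load[n] += 0.1                    # illustrative per-stream load
    return total_latency + 50 * max(new_load)

def evolve(pop_size=30, generations=50, mutation=0.2):
    pop = [[random.randrange(N_NODES) for _ in range(N_STREAMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STREAMS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:         # random-reset mutation
                child[random.randrange(N_STREAMS)] = random.randrange(N_NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()  # a stream-to-node mapping, e.g. [stream0_node, stream1_node]
```

Each chromosome is a stream-to-node mapping, so crossover and mutation explore alternative placements while selection keeps the low-latency, well-balanced ones.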
The paper [21] introduces a dynamic congestion management brokerage system that aims to meet the QoS requirements of the IoT, as outlined in the service-level agreement (SLA). This system effectively handles a large influx of Cloud requests that originate from the Fog broker layer while also presenting a forwarding policy that allows the Cloud service broker to selectively forward high-priority requests to appropriate Cloud resources from both Fog brokers and Cloud users. The underlying concept behind this system is based on the weighted fair queuing (WFQ) Cisco queuing mechanism, which simplifies the management and control of potential congestion issues on the Cloud service broker side.
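The virtual-finish-time idea behind WFQ can be sketched as follows; this deliberately simplified version tags each packet with a per-flow finish time and serves in tag order, omitting the system virtual clock of a complete WFQ implementation:

```python
def wfq_order(packets, weights):
    """Simplified WFQ: tag each packet with a per-flow virtual finish time
    (previous finish + size / weight) and transmit in tag order.

    packets: list of (flow_id, size) in arrival order.
    weights: flow_id -> relative weight (higher weight = more bandwidth).
    """
    finish = {flow: 0.0 for flow in weights}
    tagged = []
    for seq, (flow, size) in enumerate(packets):
        finish[flow] += size / weights[flow]
        tagged.append((finish[flow], seq, flow, size))
    tagged.sort()                       # smallest finish tag served first
    return [(flow, size) for _, _, flow, size in tagged]

# The high-priority flow (weight 3) gets both of its packets out before the
# low-priority flow (weight 1) sends one.
order = wfq_order([("hi", 100), ("lo", 100), ("hi", 100)],
                  {"hi": 3.0, "lo": 1.0})
```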
Considering the mobility of nodes and the diverse nature of computing devices, the need for an efficient load-balancing algorithm becomes evident. The work [22] introduces a load-balancing approach specifically designed to minimize service latency. The primary goal is to distribute the latency uniformly among all nodes, ensuring a fully decentralized process. This approach ensures that no user encounters a lower QoS than others.
Optimizing the placement of tasks on Edge devices to achieve optimal performance is a significant challenge in Fog-assisted architecture. This process of mapping tasks to services is known as the SPP. In the recent work [23], a heuristic algorithm named “Clustering of Fog Devices and Requirement-Sensitive Service First” (SCATTER) was introduced to effectively solve the SPP.
To address the NP-hard optimization problem of application placement strategies in a hierarchical Fog–Cloud environment, specifically based on directed acyclic graphs (DAGs), a heuristic approach has been commonly used. This approach generates sub-optimal solutions by employing a non-preemptive placement policy. Moreover, the current practice of merging multiple DAG-based applications into one is ineffective in minimizing the overall makespan. This is due to the lack of fairness considerations among multiple DAG-based applications. In response to this limitation, a novel application placement strategy based on dynamic scheduling is proposed in the work [24]. This strategy leverages the schedule gaps in virtual machines across different layers, aiming to minimize the makespan while ensuring adherence to deadlines.
A major challenge in Fog networks is how to distribute computational tasks efficiently between Fog and Cloud nodes, which have varying computing capacities at different distances from end users. To address this challenge, in the work [25], a universal non-convex Mixed-Integer Nonlinear Programming (MINLP) problem was formulated. The objective of this problem is to minimize both task transmission and processing-related energy while considering delay constraints. The formulated problem is transformed using Successive Convex Approximation (SCA) and decomposed through primal and dual decomposition techniques. To tackle this optimization problem, two practical algorithms have been proposed: Energy-Efficient Resource Allocation (EEFFRA) and Low-Complexity (LC)-EEFFRA.
The paper [26] proposes strategies to integrate FC into the Cloud-based Industrial Internet of Things (IIoT) and establish a Cloud–Fog-integrated IIoT (CF-IIoT) network seamlessly. The primary goal is to achieve ultra-low service response latency, which requires incorporating distributed computing within the CF-IIoT network. To optimize load balancing in this distributed Cloud–Fog environment, the paper proposes a Real-Coded Genetic Algorithm for Constrained Optimization Problems (RCGA-CO). In addition, it presents a task reallocation and retransmission mechanism to address inherent unreliability issues within the CF-IIoT, such as potential Fog node damage and wireless link outages. This mechanism reallocates unfinished subtasks from failed nodes using the RCGA-CO algorithm and retransmits new subtasks to normal nodes to ensure timely task completion.
The authors of [27] proposed a strategy for service provisioning and the placement of services in Fog-based computing resources. Their approach aims to optimize the placement of IoT services by prioritizing resources for applications with the earliest deadlines. To achieve this, the services are allocated within the constraints of CPU, RAM, and storage capacities of the respective Fog resources.
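An earliest-deadline-first allocation under CPU, RAM, and storage constraints, in the spirit of [27], can be sketched as follows; the dictionary-based service and node descriptions and their values are illustrative assumptions:

```python
def place_services(services, nodes):
    """Greedy sketch: earliest-deadline service first, onto the first node
    with enough CPU, RAM, and storage; returns {service name: node name}.
    Services that fit nowhere are simply left unplaced."""
    placement = {}
    for svc in sorted(services, key=lambda s: s["deadline"]):
        for node in nodes:
            if (node["cpu"] >= svc["cpu"] and node["ram"] >= svc["ram"]
                    and node["storage"] >= svc["storage"]):
                node["cpu"] -= svc["cpu"]        # reserve the capacity
                node["ram"] -= svc["ram"]
                node["storage"] -= svc["storage"]
                placement[svc["name"]] = node["name"]
                break
    return placement

services = [{"name": "s1", "deadline": 2, "cpu": 2, "ram": 1, "storage": 1},
            {"name": "s2", "deadline": 1, "cpu": 2, "ram": 2, "storage": 1}]
nodes = [{"name": "f1", "cpu": 3, "ram": 2, "storage": 2},
         {"name": "f2", "cpu": 4, "ram": 4, "storage": 4}]
placement = place_services(services, nodes)
# s2 (tighter deadline) claims f1 first; s1 then falls back to f2
```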
To describe the interactions between system components and the Fog service placement problem-solving policy, the authors of the paper [7] propose a three-layer conceptual computing framework (Cloud, Fog, and IoT). They also adopt the MADE-k approach, which is commonly used in similar studies, to solve the problem.
The study [8] focuses on minimizing application delay and network usage in Fog–Cloud environments. To achieve this, the authors propose a genetic-based service placement algorithm that introduces a penalty-based approach. This approach aims to address both the delay and the reduction in time-consuming Cloud placements.
In [28], two hybrid meta-heuristic algorithms were formulated and developed: MGAPSO, which combines the genetic algorithm (GA) with Particle Swarm Optimization (PSO), and EGAPSO, which combines an elitism-based GA with PSO.
To address the complexity of the SPP in FC, an efficient and autonomous scheme named SPP-AOA is proposed in the study of [29]. The solution leverages meta-heuristic approaches and features a shared parallel architecture to navigate the intricate nature of the problem. SPP-AOA adopts the Archimedes Optimization Algorithm (AOA), inspired by Archimedes’s Principle in physics, to formulate the SPP as a multi-objective problem. The main objective of SPP-AOA is to orchestrate the placement of autonomous services across distributed Fog domains, focusing on considerations such as resource utilization, service cost, energy consumption, delay cost, and throughput. By analyzing the distribution of resources over time, SPP-AOA optimizes placement to ensure effective resource allocation for handling future requests.
Recognizing the limitations of current approaches, ref. [30] employs the Asynchronous Advantage Actor-Critic (A3C) algorithm as an innovative deep reinforcement learning (DRL) solution for the SPP. The proposed strategy focuses on placing IoT services in a way that minimizes cost and latency within deadline and resource constraints. Aligned with these goals, A3C aims to maximize the long-term cumulative reward to enhance QoS. Placement activities occur within local Fog domains, and neighboring Fog domains are utilized when necessary to enhance Fog resource utilization. Additionally, a technique is integrated to extract and distribute resources over time, conserving resources for efficiently managing future requests.
In the context of autonomous planning in FC, the work [31] uses the Imperialist Competitive Algorithm (ICA) as a meta-heuristic approach to address the complexity of this problem. During the allocation process, the algorithm prioritizes Fog nodes with ample resources capable of hosting multiple IoT services, taking advantage of resource distribution. The algorithm aims to optimize IoT services to minimize delays, formulating the SPP as a multi-objective problem. In addition, a conceptual framework is employed to articulate the proposed autonomous planning model, delineating the interactions within the Cloud–Fog–IoT ecosystem components.
The authors in [15] propose a secure and efficient service placement strategy in FC for IoT applications. They suggest an evolutionary method to optimize the mapping between Fog–Cloud nodes and IoT services based on specific service types. The research introduces an Adaptive Neuro-Fuzzy Inference System (ANFIS) to classify requests and identify processing layers. Additionally, a refined Honey Badger Algorithm (CHBA) is implemented for scheduling, enhanced by an Opposition-based Learning (OBL) approach to improve convergence rates.
The paper [16] presents a hierarchical framework that combines Federated Learning (FL) with deep reinforcement learning (DRL) to create a privacy-preserving offloading strategy for low-altitude Vehicular Fog Computing (VFC) systems. The proposed method introduces decentralized training and execution (DTDE) at the local level within Roadside Unit (RSU) regions, enhancing scalability and privacy. Additionally, an auto-encoder-based contextual FL mechanism is developed to account for diverse operational scenarios and regional context awareness.
As demonstrated, several studies have investigated the SPP in FC. However, the SPP remains a complex optimization challenge influenced by factors such as latency, resource utilization, the dynamic nature of systems, and end user mobility. Table 3 shows different methods to solve this problem, such as heuristic and meta-heuristic methods, machine learning, and hybrid approaches. Table 4 reveals that mobility was considered in only three studies, while some research efforts have overlooked the dynamic nature of the system. In addition, while progress has been made in reducing latency, this reduction is still insufficient for applications that require immediate responses, such as online gaming and remote surgery, where even minor delays can critically affect user satisfaction.
Indeed, contextual factors like user proximity (neighboring), service usage history, and behavioral patterns are often overlooked. This highlights the significance of considering not only resource capacity but also geographical and behavioral factors of the users in the service placement process. Therefore, incorporating these elements into the decision-making process for service placement can optimize the user experience, improve customer satisfaction, and provide a more comprehensive response to the various constraints and needs of the system. This need for context-aware, intelligent placement strategies has also been recognized in other IoT domains. Recent interdisciplinary surveys, such as the one by Wang and Su [33], emphasize the importance of hybrid AI models and intelligent decision making in dynamic, data-intensive IoT environments. Although focused on agriculture, their findings are relevant to Fog Computing, where similar challenges such as resource constraints, real-time responsiveness, and user context awareness also arise.
Another critical observation is the limited use of simulation tools in some studies, where optimization solvers have been employed instead, such as in the work [20]. However, simulation is a crucial step in research for several reasons. Firstly, it provides essential insights by enabling performance evaluation across various scenarios and metrics, such as latency and response time. Secondly, it provides flexibility to explore various scenarios and approaches. Additionally, simulation is a cost-effective and practical alternative to assess the performance of a system in a virtual environment.
Future research efforts should focus on methods that integrate user mobility, historical patterns, and contextual surroundings into the service placement process. These factors can significantly influence the quality and efficiency of service placement on available resources, providing a comprehensive framework to meet the diverse constraints and needs of Fog Computing environments. Additionally, the lack of focus on meeting user-requested deadlines remains a persistent challenge, highlighting the need to improve latency to meet the stringent demands of real-time applications.

3. Hybrid Service Placement Based on a Recommender System

FC has become a popular paradigm for service placement. However, efficient placement in such an infrastructure is challenging, due to the constraints imposed by this infrastructure as well as those of IoT objects. The choice to place services in FC instead of the Cloud is based on several criteria, such as customer satisfaction and meeting deadlines. It is therefore important to improve existing solutions and ensure that the best choices are made. Existing solutions prioritize factors like latency, cost, and QoS while disregarding other valuable information such as the user’s preferences, neighborhood, and service usage history. We suggest that a more efficient approach is to implement recommendation systems.
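A weighted hybrid recommender of this kind can be sketched as follows; the profile vectors, the rating structure, and the mixing weight `alpha` are illustrative assumptions for exposition, not the system detailed in the following sections:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_score(service_profile, node_profile, ratings, similar_users,
                 node, alpha=0.6):
    """Weighted hybrid: alpha * content-based fit + (1 - alpha) * collaborative.

    Content-based part: cosine similarity between the service's requirement
    vector and the node's capability vector. Collaborative part: mean rating
    that similar users gave this node (0.0 if none rated it).
    """
    content = cosine(service_profile, node_profile)
    peer = [ratings[u][node] for u in similar_users if node in ratings[u]]
    collab = sum(peer) / len(peer) if peer else 0.0
    return alpha * content + (1 - alpha) * collab

# A service whose requirements match the node perfectly, endorsed by one peer:
score = hybrid_score([1.0, 0.0], [1.0, 0.0],
                     {"u1": {"f1": 1.0}}, ["u1"], "f1")
```

The node with the highest score would be recommended as the placement target; the collaborative term lets usage history compensate when content features alone cannot discriminate between candidate nodes.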

3.1. Proposed Architecture (System Model)

In this section, we propose a three-tier architecture (Cloud, Fog, and IoT), as illustrated in Figure 2, to describe the interactions between system components and address the SPP. This framework is based on the autonomous MADE-k model and a recommendation system. The SPP determines which IoT services are deployed on which Fog nodes for execution. Consider that there are m applications, represented by A = {a_1, a_2, …, a_i, …, a_m}, where a_i refers to the i-th application. Each IoT application comprises multiple tasks, with each task acting as an IoT service request generated by IoT devices. The whole set of services from the m IoT applications includes r services, denoted as S = {s_1, s_2, …, s_j, …, s_r}, where s_j corresponds to the j-th service.
The FC layer consists of n decentralized Fog nodes, expressed as F = {f_1, f_2, …, f_k, …, f_n}, with f_k representing the k-th Fog node. The service s_j ∈ a_i can be divided into subtasks, each assigned to a specific Fog node for processing. As a result, the node f_k can execute a subset of parsed tasks from multiple services. Each node includes a set of virtual machines (VMs) capable of hosting these task subsets.
For simplicity, we assume that services are indivisible and that each Fog node executes all tasks associated with a given service independently. Additionally, each service remains hosted by a single Fog node until the completion of the execution process.
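Under these assumptions (indivisible services, one hosting node per service until completion), the model can be expressed as a small data-structure sketch; the CPU and RAM fields and their units are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    sid: int   # index j in S = {s_1, ..., s_r}
    app: int   # index i of the owning application a_i
    cpu: int   # resource demands (illustrative units)
    ram: int

@dataclass
class FogNode:
    fid: int   # index k in F = {f_1, ..., f_n}
    cpu: int   # remaining free capacity
    ram: int
    hosted: list = field(default_factory=list)

    def can_host(self, s: Service) -> bool:
        # services are indivisible: the full demand must fit on this one node
        return self.cpu >= s.cpu and self.ram >= s.ram

    def host(self, s: Service) -> None:
        assert self.can_host(s)
        self.cpu -= s.cpu
        self.ram -= s.ram
        self.hosted.append(s.sid)   # hosted until execution completes

node = FogNode(fid=1, cpu=4, ram=2)
svc = Service(sid=7, app=1, cpu=2, ram=1)
if node.can_host(svc):
    node.host(svc)
```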
As shown in Figure 2, the proposed architecture consists of four main layers, each with specific responsibilities: the Cloud, the Fog, the Edge, and the IoT layer. The Cloud layer centrally hosts services that are not delay-sensitive, do not require an ultra-fast response, or cannot be placed in the Fog or Edge layers. The Fog layer manages the decentralized processing of services; positioned closer to IoT devices than the Cloud, it improves response times for time-sensitive applications and, through filtering and aggregation, reduces the volume of traffic that must be transmitted to the Cloud. The Edge layer is located closest to end user devices and acts as an intermediary between the Fog and IoT devices, enabling faster responses for time-sensitive applications and facilitating dynamic interaction between connected objects and distributed computing resources. Finally, the IoT layer comprises connected objects, sensors, and actuators and is tasked with collecting data from the environment.
Our contribution is made within the Fog layer, so we will focus particularly on this layer. Figure 3 shows the hierarchical structure of the proposed architecture, which consists of a group of colonies and a hybrid recommendation system (RS).

3.1.1. Colony

The term “colony” or “cluster” in the context of FC is a metaphor used to describe a group of interconnected Fog nodes working together. This terminology is inspired by the concept of a colony of organisms, such as a colony of bees, a colony of ants, and so on. To ensure distributed computing, FC is composed of several colonies. Each colony comprises a set of nodes and an orchestrator that manages them.

3.1.2. Fog Nodes

Any device with enough computing, storage, and networking capacity to run advanced services can act as a Fog node. We can differentiate between two types of nodes (devices): fat devices that can process and compute, and small or lightweight devices that are less powerful. In addition, ref. [34] classifies the Fog nodes into static and mobile nodes, as illustrated in the following list:
Static nodes: Strategically positioned in specific locations, static nodes include entities such as base stations, small-scale data centers, personal laptops, switches, and routers. These devices are designed to remain fixed and are not mobile [34].
Mobile nodes: In contrast, mobile nodes exhibit mobility, are physically small, less resourceful, and are characterized by their flexibility in terms of installation and configuration. Examples of mobile nodes include single-board machines (such as RPIs and Pine A64+), drones, and vehicles [34].

3.1.3. Fog Orchestrator Control Node

The role of the FOCN is to orchestrate Fog resources (Fog nodes) and services. In other words, to coordinate, organize, and manage them together to meet specific functional and non-functional requirements. Thus, in the proposed architecture, the FOCN’s main responsibility is to resolve the SPP and ensure the efficient placement of services through an admission control unit (ACU) and a control loop called MADE-K.
According to our proposal, within a Fog colony, the ACU filters requests at the local level and helps maintain optimal performance, as illustrated in Figure 4. For non-real-time applications, the ACU sends the request to the Cloud. Otherwise, it receives requests to place services in the current colony, filters them, classifies them according to criteria such as the completion deadline, and then sends them (indicating whether they have priority) to the MADE-K control loop, which takes charge of the placement decision. If the colony is not available, the ACU instead sends the request to the recommendation system.
The “MADE-K”, or Monitoring, Analysis, Decision Making, and Execution with a Knowledge base, is a control loop model that can manage and plan placement requests and solve the SPP based on four main phases. The monitoring phase consists of receiving the placement requests sent by the ACU, following up on these requests, and monitoring node availability by periodically updating the local knowledge base. The second phase consists of analyzing the placement requests received and prioritizing them according to the lead and waiting times. The decision phase mainly executes a multi-objective optimization algorithm that solves the SPP. Finally, the last phase of MADE-K places the service based on the decision made in the previous phase. There are two possible outcomes: either the service is placed in an available node of the colony (selected in the decision phase), or, if no node is available, the placement request is transferred to the RS.

3.1.4. Recommendation System

Robin Burke [2] defined recommendation systems as systems with the ability to offer personalized recommendations, guiding the user to interesting and useful resources within a large dataset. Thus, ref. [35] describes recommendation systems as programs that attempt to recommend the most appropriate items (products or services) to specific users (individuals or companies), anticipating a user’s interest in an item based on information related to items, users, and interactions.
The two fundamental entities present in all recommendation systems are the user and the item. The user represents the person who uses the RS, expresses his or her opinion on various items, and receives new recommendations from the system. The item, on the other hand, is the general term used to describe what the system recommends to users.
The input data for an RS depends on the type of filtering algorithm used. In general, they fall into one of the following categories:
Ratings (Feedback): Allowing the user to express a positive or negative opinion on the item consulted. These ratings, generally numerical and restricted to a scale of values such as [1–5], reflect the user’s interest in the item. A high score indicates strong interest and a match with preferences, while a low score suggests a lack of interest.
Content data: Based on the textual analysis of the documents associated with the items rated by the user. Features extracted from this analysis are then integrated as inputs into the filtering algorithm to deduce the user’s profile.
Demographic attributes: Refer to information explicitly or implicitly collected about the user, such as age, personal status, level of education, location, and others. Although they do not provide direct evaluations, these attributes enrich the user profile, making it easier to tailor recommendations based on these characteristics.
There is a wide variety of recommendation techniques. We present in the following the main recommendation techniques used in our approach.
Content-based recommendation (CB): CB filtering consists of recommending to the active user new items whose descriptions are similar to those previously enjoyed. This technique compares the content of items (a set of attributes) with the user’s profile, itself made up of attributes [36,37]. In our proposed solution, the content-based system follows five main steps. The first involves retrieving the data required for the recommendation. The second performs the necessary filtering, such as the removal of redundant information. The third computes a similarity between the collected data, so that the node with maximum similarity can be selected in the fourth step. Finally, the system returns the most suitable node for the requested placement. Figure 5 illustrates the content-based filtering process.
Recommendation based on collaborative filtering (CF): CF is based on the sharing of opinions between users, inspired by the “word-of-mouth” principle that humans have always used to make judgments about a product or service they do not know. Recommendation systems based on collaborative filtering recommend to the user items appreciated by users sharing similar preferences, i.e., similar users [38]. As illustrated in Figure 6, the proposed collaborative RS considers the preferences and requirements of the user, his neighbors, and the availability of nodes to recommend the location of a given service. It collects these data first, filters them second, searches for the most similar k-neighbor users by applying the nearest-neighbor algorithm, and then recommends a location based on the similarity result, its history, and node availability.
Hybrid recommendation: A hybrid RS is a system that merges two or more distinct recommendation techniques. These combinations leverage the advantages of the different techniques used, overcome the specific limitations of each, and deliver more relevant recommendations. In our solution, we chose to use a hybrid RS to benefit from the advantages of both content-based and collaborative systems as illustrated in Figure 7. Indeed, there are various approaches to combining recommendation techniques, and there is no consensus defined by the research community.
However, Robin Burke in [2,39] listed seven different hybridization methods, including weighted hybridization, switching, cascade hybridization, feature augmentation, meta-level hybridization, feature combination, and mixed hybridization. Mixed hybridization involves the simultaneous provision of recommendations from several recommendation techniques. In other words, recommendations generated by different techniques are combined into a single list, which is then presented to the user. In our solution, we chose to use this hybridization method. In fact, the hybrid RS retrieves the results of both collaborative and content-based RS, calculates the distances between the recommended nodes and the target user, and finally selects the final location based on the minimum distance.

4. Proposed Methodology

The proposed solution revolves around two parts: the first part handles placement when a node is available in the current colony based on the FOCN, while the second part uses a recommendation system when no node is available. The general idea of the proposed solution is illustrated in the activity diagram in Figure 8.

4.1. Solution Based on the FOCN

According to our proposal, the ACU in a Fog colony filters and categorizes service placement requests based on their completion deadlines. The filtered requests are then sent to the MADE-K control loop for placement decisions. During the monitoring phase, placement requests are received and tracked, and node availability is periodically updated in the local knowledge base. Then, in the analysis step, based on the information sent by the ACU about each request (whether it has priority and its degree of priority), MADE-K sorts the placement requests and passes them to the decision stage in order of priority.
In the decision phase, we propose to use the meta-heuristic “Imperialist Competitive Algorithm” (ICA) [31]. The ICA belongs to the category of optimization methods and has proven to perform well compared with other algorithms (such as ODMA and Cuckoo Search) in terms of speed and convergence. In addition, several approaches have demonstrated the performance of this algorithm in solving high-dimensional problems. The ICA is inspired by imperial competition, where imperialists and colonies compete to find the best solution through a process of competition and migration. Note that the term “colony” in the ICA differs from the notion of colony used in the proposed architecture: in the ICA, colonies are candidate solutions that are less efficient than the imperialists.
So, the objective of this algorithm is, from a set of solutions and using the “Fitness” objective function, to select the best solution, which in our case is the best node in which to place the service while minimizing system latency and respecting most of the constraints imposed by FC.
In fact, the first step of the ICA involves randomly creating an initial population, then calculating the Fitness objective function, and finally sorting the solutions based on their Fitness values, where the N_imp solutions (imperialists) are those with the smallest Fitness values. The fitness value of each solution is calculated according to several objectives: service latency (SL), energy consumption (EC), and resource utilization (RU), as described in Equation (1). The goal is to minimize the fitness function F, which incorporates the following optimization objectives:
F = ξ_SL · SL + ξ_EC · EC − ξ_RU · RU
where SL, EC, and RU represent service latency, energy consumption, and resource utilization, respectively, each multiplied by its associated impact coefficient. These coefficients represent the relative importance of each objective and are determined based on the system’s requirements.
Given the importance of both service latency (SL) and resource utilization (RU) in the optimization process, we assign the following coefficients to reflect their priorities:
ξ_SL = 0.5 (latency is the top priority)
ξ_EC = 0.2 (energy consumption is less important)
ξ_RU = 0.3 (resource utilization is important but secondary to latency)
These values were selected to strike a balance between minimizing latency, optimizing resource utilization, and managing energy consumption in the Fog Computing environment.
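As a minimal sketch, Equation (1) with these coefficients can be expressed as follows. The function name and the assumption that all three objectives are normalized to [0, 1] are ours, not part of the paper's implementation:

```python
# Weighted fitness from Equation (1): lower is better. We assume SL, EC,
# and RU are normalized to [0, 1]; the coefficients follow the paper.
XI_SL, XI_EC, XI_RU = 0.5, 0.2, 0.3

def fitness(sl: float, ec: float, ru: float) -> float:
    """Penalize latency and energy consumption, reward resource utilization."""
    return XI_SL * sl + XI_EC * ec - XI_RU * ru

# A node with low latency and high utilization scores lower (better):
good = fitness(sl=0.1, ec=0.3, ru=0.9)  # 0.05 + 0.06 - 0.27
bad = fitness(sl=0.8, ec=0.7, ru=0.2)   # 0.40 + 0.14 - 0.06
```

The subtraction of the RU term is what lets a minimizing search such as the ICA favor nodes with higher utilization.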
The key difference in our approach is that, in the event of a service placement failure in the current colony, the RS will take over the SPP resolution and search for an available node not only within a single colony but across all Fog colonies. In the worst-case scenario, if the RS cannot find an available node, it forwards the placement request to the Cloud layer.
In the following section, we focus on recommender systems and explain in detail how they solve the SPP in FC. In addition, given that a weakness of most existing work is that it does not take mobility into account, we propose that the RS also tracks user mobility (by collecting data on user positions from connected objects) and node availability for each colony through a database shared by all colonies. So, if the RS detects that the corresponding end user is moving and approaching another available neighboring node or colony, it recommends that the current node transfer part of its load to this neighbor.

4.2. Solution Based on the Recommendation System

Our approach is based primarily on the idea of adding a hybrid RS as illustrated in Figure 2. The RS is activated when a colony receives a service placement request but fails to allocate the service to any of its local Fog nodes due to resource constraints or policy limitations. In such cases, the request is forwarded to the RS, which analyzes alternative placement options in neighboring colonies. Figure 9 illustrates the detailed architecture of the proposed solution, which is based on the following key steps: data collection, data processing, collaborative filtering, content-based filtering, and the decision phase (combining the results of the previous recommendations and finally the implementation). These steps are presented in detail below.

4.2.1. Data Collection

The data collection phase is the first stage of our proposed system. Each of the three RS components gathers specific data as described below:
The collaborative RS: This component collects data related to the user who requested the placement, including service preferences such as latency, deadline, CPU, memory, and storage requirements. It also retrieves the service history and preferences of neighboring users located in the same geographical area. The history reflects the Fog node(s) where similar services were previously placed, while availability refers to the current capacity of each node to accommodate new services, based on metrics such as CPU, memory, and storage.
The content-based RS: This module gathers data on the specific requirements of the target user’s requested service (e.g., CPU, memory, and storage), his history, and the characteristics of candidate Fog nodes. These characteristics indicate the resources that each node can provide, enabling a match between service needs and node capabilities.
The hybrid RS: This component retrieves the results of the two previous recommendation systems, as well as the locations of users and Fog nodes.

4.2.2. Data Processing

At this stage, as illustrated in the pseudo-code of Algorithm 1, after retrieving the necessary data, the RS performs a filtering process to improve efficiency and facilitate subsequent searches. This includes removing duplicate entries, handling missing values, and removing outliers. Custom scripts were written to identify and eliminate redundant data points. Additionally, outlier detection was performed to ensure that anomalies in the mobility data were filtered out. This step is crucial for maintaining the accuracy and reliability of the data before using them for further analysis in the recommendation system.
Algorithm 1 Data Processing
Input: Raw data retrieved from the dataset
Output: Cleaned and processed data
function DataProcessing(data)
    for each user u in data do
        if duplicate entries are detected for u then
            Remove duplicates
        end if
        if missing or incomplete data are found for u then
            Remove the incomplete entry
        end if
        if outliers are detected in u’s data then
            Remove outliers
        end if
    end for
    return Cleaned data
end function
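The cleaning pass of Algorithm 1 can be sketched in Python as follows. This is an illustrative fragment, not the paper's actual scripts: the record fields (`user`, `service`, `timestamp`, `cpu`) and the simple range check used for outlier detection are hypothetical choices.

```python
def clean(records):
    """Remove duplicates, incomplete entries, and outliers from raw records.
    Each record is a dict; the field names below are illustrative."""
    seen, out = set(), []
    for r in records:
        key = (r.get("user"), r.get("service"), r.get("timestamp"))
        if key in seen:                       # drop duplicate entries
            continue
        seen.add(key)
        if any(r.get(f) is None for f in ("user", "service", "cpu")):
            continue                          # drop incomplete entries
        if not (0 <= r["cpu"] <= 100):        # drop outliers (simple range check)
            continue
        out.append(r)
    return out
```

A real pipeline would replace the range check with a statistical test (e.g., a z-score threshold) suited to the mobility data.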

4.2.3. Collaborative Filtering

In this step, once the data preprocessing phase is completed and all users’ requirement vectors have been constructed, the CF system evaluates the similarity between the target user u_t and every other user u_j in the system using cosine similarity.
Let
  • R_t = [CPU_t, RAM_t, Storage_t, Delay_t] be the requirement vector of the target user u_t.
  • R_j = [CPU_j, RAM_j, Storage_j, Delay_j] be the requirement vector of another user u_j.
  • n be the number of considered features.
The cosine similarity between users u_t and u_j is computed as
Similarity(u_t, u_j) = ( Σ_{i=1}^{n} R_{t,i} · R_{j,i} ) / ( √(Σ_{i=1}^{n} R_{t,i}²) · √(Σ_{i=1}^{n} R_{j,i}²) )
After computing the similarity scores, the system ranks all users and selects the top K most similar ones. The parameter K is set to 5; this value was chosen empirically to preserve the responsiveness of the system.
The node selection phase is executed as follows:
  • For each of the top K users, the system retrieves the node they previously used to deploy the same or a similar service.
  • If the node is still available (i.e., it satisfies the current resource constraints), it is immediately selected.
  • If not, the system proceeds to the next most similar user, continuing this process until a suitable node is found or the K-list is exhausted.
The pseudo-code Algorithm 2 illustrates this.
Algorithm 2 Collaborative Filtering
1: Input: User set U, requirement vectors R_u, target user u_t, service history H
2: Output: Recommended node n_r
3: function CollaborativeFiltering(U, u_t, H)
4:     Initialize similarity list S ← [ ]
5:     for each user u_j in U where u_j ≠ u_t do
6:         Compute cosine similarity between R_{u_t} and R_{u_j}
7:         Append (u_j, similarity) to S
8:     end for
9:     Sort S in descending order of similarity
10:    Select top K users U_topK
11:    for each u_k in U_topK do
12:        Retrieve nodes N_k used for similar services from H
13:        for each node n in N_k do
14:            if n satisfies resource constraints then
15:                return n
16:            end if
17:        end for
18:    end for
19:    return NULL                ▹ No suitable node found
20: end function
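Algorithm 2 can be condensed into a short Python sketch. The data representations are our assumptions: requirement vectors are plain numeric lists, the history maps each user to the nodes they used, and availability is a set of nodes that still satisfy the resource constraints.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def collaborative_filtering(users, target, history, available, k=5):
    """users: {user id: requirement vector}; history: {user id: [node ids]};
    available: set of node ids that still satisfy the resource constraints."""
    ranked = sorted((u for u in users if u != target),
                    key=lambda u: cosine(users[target], users[u]),
                    reverse=True)
    for u in ranked[:k]:                 # top-K most similar users
        for node in history.get(u, []):  # nodes they used for similar services
            if node in available:
                return node
    return None                          # no suitable node found
```

Returning `None` mirrors the NULL branch of the pseudo-code, after which the request would fall through to the next stage.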

4.2.4. Content-Based Filtering

As illustrated in the pseudo-code of Algorithm 3, in this step, after retrieving data on the user (the history), the requirements of the requested service, and the details of the Fog nodes (i.e., memory, storage, and CPU), the system calculates the cosine similarity between the target service’s characteristics and the characteristics of all colony nodes.
The cosine similarity is computed as follows:
Similarity(s_t, n_j) = ( Σ_{i=1}^{n} R_{t,i} · N_{j,i} ) / ( √(Σ_{i=1}^{n} R_{t,i}²) · √(Σ_{i=1}^{n} N_{j,i}²) )
where
  • R_{t,i} is the value of feature i (e.g., CPU, RAM, storage, and delay) in the requirement vector of the target service s_t.
  • N_{j,i} is the value of feature i (e.g., CPU, RAM, storage, and delay) in the feature vector of colony node n_j.
  • n is the number of features considered (e.g., 4 features: CPU, RAM, storage, and delay).
In the content-based filtering stage, the system considers the top K = 3 colony nodes ranked by cosine similarity with the target service request. The value of K was chosen based on its effectiveness in reducing deployment delay.
After ranking the nodes based on similarity, the system checks if the target user has previously used the most similar node. That node is immediately recommended if it exists in the user’s history. If not, the system checks the next most similar node in the top K list. If no suitable node is found in the history, the node with the highest similarity is recommended as the optimal location for the service.
Algorithm 3 Content-Based Filtering
1: Input: Requirement vector R_t, node set N, user history H
2: Output: Recommended node n_r
3: function ContentBasedFiltering(R_t, N, H)
4:     Initialize similarity list L ← [ ]
5:     for each node n_j in N do
6:         Let N_j be the feature vector of node n_j
7:         Compute cosine similarity between R_t and N_j
8:         Append (n_j, similarity) to L
9:     end for
10:    Sort L in descending order of similarity
11:    Select top K nodes: N_topK ← first K nodes from L
12:    for each node n in N_topK do
13:        if n ∈ H[u_t] and n is available then
14:            return n
15:        end if
16:    end for
17:    for each node n in N_topK do
18:        if n is available then
19:            return n
20:        end if
21:    end for
22:    return NULL                ▹ No suitable node found
23: end function
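A compact sketch of Algorithm 3, under the same representational assumptions as before (feature vectors as numeric lists, the user's history and node availability as sets):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def content_based_filtering(req, nodes, user_history, available, k=3):
    """req: service requirement vector; nodes: {node id: feature vector};
    user_history: set of nodes the target user has already used."""
    top = sorted(nodes, key=lambda n: cosine(req, nodes[n]), reverse=True)[:k]
    for n in top:            # prefer a node already present in the user's history
        if n in user_history and n in available:
            return n
    for n in top:            # otherwise fall back to the most similar available node
        if n in available:
            return n
    return None              # no suitable node found
```

The two passes over the top-K list reproduce the history-first preference of lines 12–21 of the pseudo-code.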

4.2.5. Combining Results

In this step, we adopt a mixed hybridization strategy that simultaneously leverages collaborative and content-based recommendations. Each method returns a recommended node, and the hybrid recommender system selects the final node by comparing its spatial proximity to the target user.
To determine the best location, the system calculates the 3D Euclidean distance between the current position of the user and each recommended node.
Let
  • u = (x_u, y_u, z_u) be the 3D coordinates of the user.
  • n_cf = (x_cf, y_cf, z_cf) be the coordinates of the node recommended by collaborative filtering.
  • n_cb = (x_cb, y_cb, z_cb) be the coordinates of the node recommended by content-based filtering.
The Euclidean distance is calculated as
Distance(u, n) = √( (x_u − x_n)² + (y_u − y_n)² + (z_u − z_n)² )
The node with the shortest distance to the user is selected as the final placement decision.
The pseudo-code Algorithm 4 explains this step.
Algorithm 4 Combining Results
1: Input: Node from collaborative filtering n_cf, node from content-based filtering n_cb, user position u
2: Output: Final recommended node n_f
3: function CombineResults(n_cf, n_cb, u)
4:     Compute d_cf ← EuclideanDistance(u, n_cf)
5:     Compute d_cb ← EuclideanDistance(u, n_cb)
6:     if d_cf < d_cb then
7:         return n_cf
8:     else
9:         return n_cb
10:    end if
11: end function
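Algorithm 4 reduces to a distance comparison; a minimal sketch follows. The `positions` lookup table is our assumption, and the None guards are an addition of ours for the case where one recommender returned no node, which the pseudo-code leaves implicit.

```python
import math

def euclidean(u, n):
    """3D Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, n)))

def combine_results(n_cf, n_cb, user_pos, positions):
    """Mixed hybridization: keep the recommended node closest to the user.
    positions: {node id: (x, y, z)}."""
    if n_cf is None:               # added guard: only one recommender answered
        return n_cb
    if n_cb is None:
        return n_cf
    d_cf = euclidean(user_pos, positions[n_cf])
    d_cb = euclidean(user_pos, positions[n_cb])
    return n_cf if d_cf < d_cb else n_cb
```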

4.2.6. Implementation

Once the hybrid recommendation system determines the final node for service placement, the system proceeds with execution. If the selected Fog node has sufficient resources to accommodate the service, it is deployed immediately. Otherwise, if no suitable Fog node is found or all are overloaded, the request is escalated to the Cloud layer, ensuring service continuity. This step is illustrated with the pseudo-code Algorithm 5.  
Algorithm 5 Implementation
1: Input: Final recommended node n_f
2: procedure PlaceExecution(n_f)
3:     if n_f is not NULL and satisfies resource constraints then
4:         Deploy the service on node n_f
5:     else
6:         Forward the request to the Cloud layer
7:     end if
8: end procedure
To conclude, Figure 10 provides a global overview of the proposed solution, integrating both the initial placement process handled by the FOCN and the fallback mechanism based on the hybrid RS. It summarizes the entire workflow, from receiving a request to its final placement, whether locally, in another colony, or in the Cloud.
The proposed solution solves the SPP in FC while offering several advantages over existing solutions. We present these advantages below:
  • Use of Fog: Our solution maximizes the use of the Fog. We aim to place the maximum number of services in the Fog layer instead of transferring them to the Cloud: if the current colony is unavailable, instead of sending the request to the Cloud layer, the RS looks for a location in the other Fog colonies to increase the chance of placement in the Fog.
  • Dynamism: Our solution takes dynamism into account. If the RS detects the unavailability of a Fog node, it looks for another available node, then retransmits and reallocates the service to the most suitable node.
  • Mobility: Our proposal takes mobility into account. If the RS detects the mobility of the end user, it looks for another node closer to this user, then retransmits and re-routes the service to the most suitable node.
  • Latency: Placing the service in the Fog layer reduces latency, since it is closer to the user. Moreover, if no node is available in the current colony, the RS can search across colonies faster than moving from one colony to another.
  • Response time: Fog placement is faster than Cloud placement.
  • SLA: Service placement considers service delay and prioritizes the service with the closest deadline.
  • QoS: SLA criteria, latency, and similar measures are QoS criteria that help satisfy the customer.

5. Experimental Results and Discussion

To evaluate the performance of our proposed hybrid recommendation-based approach, we selected two baseline methods: the ICA [31] and ODMA [7]. These algorithms were chosen because they address similar Fog SPPs, focusing on similar objectives, such as minimizing service delay and maximizing Fog resource utilization. Furthermore, they represent two widely used classes of optimization techniques, meta-heuristics (ICA) and heuristics (ODMA), allowing for a well-rounded comparison. Although other methods, such as genetic algorithms and deep reinforcement learning, have been explored in more recent studies, we focus on algorithms that align closely with our problem formulation, objective functions, and simulator settings.

5.1. Simulation and Technical Development Framework

Based on the comparison presented in Table 5, which summarizes the most widely used simulators for addressing the SPP, we observe that iFogSim is a more recent tool, designed specifically for simulating Fog environments. It fulfills our requirements by enabling the modeling and simulation of Fog systems. In addition, it allows for the performance evaluation of Fog architectures, considering various aspects such as latency, power consumption, and resource management. The work [40] defines iFogSim as “an open-source Java simulator developed by the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne”. iFogSim is an extension of the popular CloudSim simulator, so, to implement the features of the iFogSim architecture, we rely on CloudSim’s core event simulation functionality [41]. The physical topology of this simulation tool is presented in the work [42] in the form of Java classes. It consists of a set of sensors, a set of actuators, and a set of Fog devices. Instances of the “Sensor” class are entities that act as IoT sensors. The “Actuator” class models an actuator and defines a method for acting on the arrival of a tuple or task from an application module. The last class specifies the hardware characteristics of Fog devices and their connections to other Fog devices, sensors, and actuators.
To overcome the current limitations of the iFogSim simulator in supporting application migration, the logical grouping of Fog nodes, and the orchestration of loosely coupled application services, three new components, mobility, clustering, and microservices, have been implemented into iFogSim2 [43]. This simulator is well equipped with APIs and built-in policies to illustrate use cases related to mobility, microservices, and node clustering in Edge/Fog Computing environments. Additionally, the functions of its various components can be adjusted according to specific case studies to create variations in the simulations [43].
In Table 6, iFogSim2 is compared with other existing simulators designed to model Edge/Fog environments. As shown in the table, iFogSim2 supports the integration of real-world datasets to evaluate the performance of different service management policies in Edge/Fog Computing environments. It provides default techniques for mobility management, node clustering, and microservice orchestration, which can be adopted as benchmarks when comparing performance. Finally, based on the analyses summarized in Table 5 and Table 6, we decided to conduct our simulations using iFogSim2, as it best meets the requirements of our study.

5.2. Case Study: Drones

To demonstrate the usefulness of our proposed system, we applied it in a usage scenario based on a monitoring system for intelligent drones. A drone, which is an unmanned aerial vehicle, is controlled remotely by a human operator or by an integrated autonomous system. Drones are often used in various fields to perform tasks that may be difficult or dangerous for humans. These tasks include aerial photography, 3D mapping, delivery, environmental monitoring, and more. Figure 11 illustrates the conceptual integration of drones with Fog nodes and Cloud data centers.
At the first level, we have the Low-Altitude Platforms (LAPs). They serve as the primary data collection and sensing devices. Equipped with various sensors such as cameras, Lidar, thermal sensors, and environmental sensors, they capture environmental data and perform tasks such as aerial photography, surveillance, and environmental monitoring. These platforms typically operate at altitudes of up to a few hundred meters above ground level.
Moving to the second level, we encounter High-Altitude Platforms (HAPs). HAPs operate at much higher altitudes compared to LAPs, typically in the stratosphere at altitudes ranging from 20 km to 50 km above ground level. These platforms can be stationary, such as tethered balloons or airships, or mobile, such as solar-powered drones or aircrafts. HAPs provide broader coverage and connectivity beyond the capabilities of LAPs, covering large geographical areas. In addition to their role in extending network coverage, HAPs can also function as Fog nodes, reducing latency and bandwidth requirements for transmitting data to centralized data centers.
At the third level, we have the Proxy server and Cloud data center, which represent centralized elements responsible for further data refinement, storage, and management. The Proxy server acts as an intermediary between the Fog nodes and the Cloud data center, facilitating data transfer and providing additional services such as caching and security. These elements handle the processing and storage of data collected by LAPs and HAPs and facilitate more complex data analysis and decision-making processes.
To simulate the mobility patterns of LAPs, the EPFL/mobility dataset [49], which includes timestamped GPS coordinates of taxi movements in the San Francisco Bay Area, was adapted for use in our Fog-based architecture. Although the dataset is in 2D, each taxi’s GPS coordinate was mapped to a drone’s position, with a synthetic altitude assigned based on predefined patterns typical of LAP operations (ranging from 80 to 150 m). This enabled the generation of service requests at the mapped locations, which were then processed by nearby Fog nodes or HAPs. This setup closely approximates real-world drone mobility in dynamic environments, enabling the evaluation of the proposed system’s performance. The dataset was preprocessed by filtering out redundant and noisy data points and converting timestamp values into consistent simulation intervals.
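The 2D-to-3D mapping described above can be sketched as follows. This is a hypothetical preprocessing fragment: the layout of the trace tuples and the use of a fixed random seed are our assumptions, not details of the published setup.

```python
import random

def to_drone_positions(trace, lo=80.0, hi=150.0, seed=42):
    """Map 2D taxi fixes (lat, lon, timestamp) to LAP drone positions by
    attaching a synthetic altitude drawn from the 80-150 m band described
    in the text. A fixed seed keeps simulation runs repeatable."""
    rng = random.Random(seed)
    return [(lat, lon, rng.uniform(lo, hi), ts) for lat, lon, ts in trace]
```

Each resulting (lat, lon, altitude, timestamp) tuple can then drive the generation of service requests at the mapped location.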
We implemented our proposed approach in the iFogSim2 simulator based on the conceptual architecture illustrated in Figure 11. In the sensor class of iFogSim, we define IoT components, including devices, sensors, actuators, and their respective properties and connections. The Fog device class specifies the hardware characteristics, architecture, and connectivity details of computational resources like Fog devices and the Cloud. Additionally, a Fog device operates as an FOCN, containing both a database and a local host for managing processing tasks. The MyController class is responsible for the management of resources, enabling the calculation of Quality of Service (QoS) metrics such as network usage, service latency, run time, and energy consumption of devices, during the simulation.
Application Model:
As illustrated in Figure 12, the application of drones consists of the following modules:
  • Functional modules: The drone is equipped with functional modules, including sensor and actuator modules. These modules enable the drone to collect sensor data from its environment and perform actions based on the processed data.
  • Client module: typically located on the IoT device itself; it preprocesses the data and sends it to the processing module.
  • Processing/main module: this module is deployed on Fog devices and Cloud data centers.
  • Storage module: once processing and decision-making are complete, the results are stored in this module, which can reside in the Fog or the Cloud.
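The module chain above can be sketched as a simple data pipeline (illustrative Python with hypothetical function names and thresholds, not the actual iFogSim2 application model):

```python
def client_module(raw_reading):
    """Preprocess raw sensor data on the device (e.g., normalize)."""
    return {"value": raw_reading / 100.0}

def processing_module(sample):
    """Main module on a Fog device or the Cloud: derive a decision."""
    return {"decision": "alert" if sample["value"] > 0.8 else "ok", **sample}

def storage_module(result, store):
    """Persist the processed result; `store` may live in Fog or Cloud."""
    store.append(result)
    return result

# Sensor reading flows through client -> processing -> storage.
store = []
result = storage_module(processing_module(client_module(90)), store)
```

The actuator would then act on `result["decision"]`, closing the sense-process-act loop shown in Figure 12.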

5.3. Performance Metrics, Results, and Discussion

We simulated our proposed solution using iFogSim2. The simulated Fog environment comprises five Fog colonies, each containing 12 Fog nodes. Fog nodes exhibit heterogeneity in both their topological positions and the dynamic workloads they process. Their spatial distribution across distinct colonies, combined with the mobility of LAP drones, leads to varying drone-to-node distances and service request intensities. This results in non-uniform resource utilization and variable processing delays.
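The simulated hierarchy described above can be sketched as plain data structures (an illustrative Python sketch, not the iFogSim2 API; the capacities follow the device configuration in Table 7):

```python
def build_fog_topology(num_colonies=5, nodes_per_colony=12):
    """Build the simulated hierarchy as plain dictionaries:
    one Cloud data center, one Proxy server, and `num_colonies`
    Fog colonies of `nodes_per_colony` nodes each. The `load`
    field models the dynamic workload that makes nodes heterogeneous
    at run time."""
    return {
        "cloud": {"mips": 44800, "ram_mb": 40000},
        "proxy": {"mips": 2800, "ram_mb": 4000},
        "colonies": [
            {"id": c,
             "nodes": [{"id": f"c{c}-n{n}", "mips": 2800,
                        "ram_mb": 4000, "load": 0.0}
                       for n in range(nodes_per_colony)]}
            for c in range(num_colonies)
        ],
    }
```

In the simulation, drone mobility changes which colony is nearest to each request, so per-node `load` values diverge over time.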
Table 7 shows the configuration of the devices.
The latency between devices is shown in Table 8.
Table 9 shows the details of services for five time periods.
To evaluate our service placement strategy, we used the following metrics, selected for their relevance to our core objectives:
  • Number of services performed: This metric counts the services executed successfully within the defined time periods, which directly reflects the system’s effectiveness in meeting time constraints and user expectations [7].
  • Number of failed services: This measures how many services were not executed before their deadlines. A lower number indicates better placement decisions that respect time-critical requirements.
  • Number of remaining services: Some services cannot be allocated in a given time period because the required resources are unavailable. Services whose deadlines have not yet expired remain eligible for placement in the next time period.
  • Fog resource utilization: This reflects how many services are deployed on Fog nodes. A higher Fog utilization means more services are executed closer to end users, reducing reliance on remote Cloud resources. Since Cloud offloading often introduces additional delay, improving Fog usage directly contributes to reducing overall service latency.
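These metrics can be computed from per-service records along the following lines (a Python sketch with a hypothetical record format, not the actual iFogSim2 implementation):

```python
def placement_metrics(services, now):
    """Classify services and compute Fog utilization.

    `services` is a list of dicts with keys `status`
    ('done' | 'failed' | 'pending'), `deadline`, and `placed_on`
    ('fog' | 'cloud' | None) -- an assumed record format.
    """
    performed = [s for s in services if s["status"] == "done"]
    failed = [s for s in services if s["status"] == "failed"]
    # Pending services whose deadline has not passed roll over
    # to the next time period ("remaining services").
    remaining = [s for s in services
                 if s["status"] == "pending" and s["deadline"] > now]
    fog_exec = sum(1 for s in performed if s["placed_on"] == "fog")
    fog_util = fog_exec / len(performed) if performed else 0.0
    return len(performed), len(failed), len(remaining), fog_util
```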
A total of 310 services from three defined applications are introduced into the ecosystem across five distinct time periods. The goal of any placement algorithm is to allocate these services to Fog nodes or the Cloud in a way that ensures the application deadlines are met. Services that are executed within the specified time constraints are referred to as “performed services”. In contrast, services that fail to execute because they violate deadlines are considered “failed services”. Additionally, some services may require resources that are unavailable during a given time period, leading to their placement being deferred to the next period. These services are categorized as “remaining services”. Figure 13 presents the results of various methods concerning metrics such as the number of performed services, failed services, and remaining services.
As illustrated, HRS outperforms ODMA and ICA, achieving 278 performed services, 26 failed services, and six remaining services, an improvement of 6.33% over ODMA and 2.75% over ICA.
The superiority of HRS can be explained by its priority handling: Fog nodes select the requests with the tightest deadlines for execution first.
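This priority rule amounts to an earliest-deadline-first selection, which can be sketched as follows (illustrative Python; the request format and the `capacity` parameter are assumptions):

```python
import heapq

def select_requests(requests, capacity):
    """Pick up to `capacity` requests for execution, tightest
    deadline first (earliest-deadline-first ordering)."""
    return heapq.nsmallest(capacity, requests,
                           key=lambda r: r["deadline"])
```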
Figure 14 presents the Fog resource utilization metric for the different methods, calculated and reported over five time periods (1 to 25 s). The results show that HRS achieves a higher utilization rate of Fog resources than the other methods, improving this metric by 0.46% compared to ICA and by 17.47% compared to ODMA.
Each incoming service must wait until the following time period for placement. Additionally, placement planning and various delays contribute to this waiting time. Figure 15 presents the waiting time for different algorithms at the end of each time period. In this analysis, the total waiting time is calculated for each service, and then the average waiting time across all executed services is reported for each algorithm. As illustrated, HRS was able to reduce service waiting times over all time periods.
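The reported metric can be computed along the following lines (a Python sketch with hypothetical field names; only executed services contribute to the average):

```python
def average_waiting_time(services):
    """Average waiting time over executed services: the gap between
    arrival and start of execution, plus any placement-planning
    delay recorded for the service."""
    waits = [s["start"] - s["arrival"] + s.get("planning_delay", 0.0)
             for s in services if s["status"] == "done"]
    return sum(waits) / len(waits) if waits else 0.0
```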
Although some improvements, such as a 6.33% increase in the number of successfully executed services, may appear modest at first glance, their impact becomes more meaningful in the broader context of system performance. For instance, the 17.47% increase in Fog resource utilization over ODMA achieved by our approach not only leads to more services being processed locally but also contributes to a significant reduction in execution delays. This is particularly important in IoD scenarios, where timely execution is critical.
Moreover, improved Fog utilization reduces reliance on Cloud resources, which are often associated with higher latency. Therefore, even when the number of additionally executed services is relatively small, the accompanying reduction in average waiting time, as shown in Figure 15, reflects a tangible enhancement in Quality of Service (QoS).
Our work can be improved and extended in several ways:
  • Improving the evaluation results of our system, particularly the waiting time and Fog utilization.
  • Addressing other SPP-related objectives in Fog, such as service cost and energy consumption.
  • Testing our work in real-world deployments with large amounts of data.
  • Ensuring QoS in the presence of faulty Fog nodes. A faulty node is, for example, one that reports false information about its state or that has not processed the services it received within a certain time; if the number of services it received is small, it will still be considered available and may receive further services. It is therefore necessary to address fault tolerance in Fog service placement.
  • Protecting user data. For solutions based on user history and information (like ours), it is crucial to secure these data.

6. Conclusions and Future Work

In this paper, a dynamic placement approach based on a hybrid recommendation system was presented that adapts to real-time workload fluctuations. This method integrates intelligent resource management mechanisms to ensure the efficient use of the computing and storage capacities available in Fog environments. The hybrid approach combines two recommendation techniques, content-based filtering and collaborative filtering. The proposed system was tested using an IoD scenario. The results show that the hybrid approach achieves better outcomes than solutions based on ODMA or ICA. Future directions for improvement include enhancing the system evaluation results, particularly in terms of response time and overall latency. Additionally, other Fog service placement objectives such as service cost and energy consumption will be explored. Potential limitations related to scalability will also be addressed, especially the communication overhead and responsiveness challenges that may arise in large-scale IoT environments. Ensuring QoS in the presence of faulty Fog nodes is critical and necessitates attention to fault tolerance in service placement. Finally, due to the reliance on user history and preferences, particular attention will be paid to safeguarding sensitive data and ensuring system security.

Author Contributions

Conceptualization, H.B.R., L.S. and A.D.; Methodology, L.S., H.Z. and A.D.; Software, H.B.R.; Validation, L.S., R.B.D. and A.D.; Formal analysis, H.Z. and R.B.D.; Investigation, R.B.D.; Data curation, H.B.R.; Writing—original draft, H.B.R.; Writing—review & editing, L.S. and A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original dataset used in this study, the San Francisco Taxi Dataset, is publicly available at “https://doi.org/10.15783/C7J010”. The adapted dataset generated during the study is not publicly available due to project restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FC: Fog Computing
SPP: Service placement problem
IoT: Internet of Things
IoD: Internet of Drones
HRS: Hybrid recommendation system
RS: Recommendation system
s: Second
QOS: Quality of Service
SLA: Service-level agreement
GA: Genetic algorithm
ICA: Imperialist Competitive Algorithm
HAPs: High-Altitude Platforms
LAPs: Low-Altitude Platforms
CSA: Cuckoo search algorithm
ACU: Admission control unit

References

  1. Zorgati, H.; Djemaa, R.; Amor, I. Finding Internet of Things resources: A state-of-the-art study. Data Knowl. Eng. 2022, 140, 102025. [Google Scholar] [CrossRef]
  2. Burke, R. Hybrid recommender systems: Survey and experiments. User Model. User Adapt. Interact. 2002, 12, 331–370. [Google Scholar] [CrossRef]
  3. Miorandi, D.; Sicari, S.; De Pellegrini, F.; Chlamtac, I. Internet of things: Vision, applications and research challenges. Ad Hoc Netw. 2012, 10, 1497–1516. [Google Scholar] [CrossRef]
  4. Tan, L.; Wang, N. Future internet: The internet of things. In Proceedings of the 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), Chengdu, China, 20–22 August 2010; Volume 5, p. V5-376. [Google Scholar] [CrossRef]
  5. Lopez, P.; Montresor, A.; Epema, D.; Datta, A.; Higashino, T.; Iamnitchi, A.; Barcellos, M.; Felber, P.; Riviere, E. Edge-centric computing: Vision and challenges. ACM SIGCOMM Comput. Commun. Rev. 2015, 45, 37–42. [Google Scholar] [CrossRef]
  6. Sabireen, H.; Neelanarayanan, V. A review on fog computing: Architecture, fog with IoT, algorithms and research challenges. ICT Express 2021, 7, 162–176. [Google Scholar] [CrossRef]
  7. Zhao, D.; Zou, Q.; Boshkani Zadeh, M. A QoS-aware IoT service placement mechanism in fog computing based on open-source development model. J. Grid Comput. 2022, 20, 12. [Google Scholar] [CrossRef]
  8. Sarrafzade, N.; Entezari-Maleki, R.; Sousa, L. A genetic-based approach for service placement in fog computing. J. Supercomput. 2022, 78, 10854–10875. [Google Scholar] [CrossRef]
  9. Gasmi, K.; Dilek, S.; Tosun, S.; Ozdemir, S. A survey on computation offloading and service placement in fog computing-based IoT. J. Supercomput. 2022, 78, 1983–2014. [Google Scholar] [CrossRef]
  10. Mell, P.; Grance, T. The NIST Definition of Cloud Computing; Technical report; Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011. [Google Scholar]
  11. Raghavendra, M.; Chawla, P.; Rana, A. A survey of optimization algorithms for fog computing service placement. In Proceedings of the 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 4–5 June 2020; pp. 259–262. [Google Scholar]
  12. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Helsinki, Finland, 17 August 2012; pp. 13–16. [Google Scholar]
  13. Jalali, F.; Vishwanath, A.; De Hoog, J.; Suits, F. Interconnecting Fog computing and microgrids for greening IoT. In Proceedings of the 2016 IEEE Innovative Smart Grid Technologies-Asia (ISGT-Asia), Melbourne, Australia, 28 November–1 December 2016; pp. 693–698. [Google Scholar]
  14. Taneja, M.; Davy, A. Resource aware placement of IoT application modules in Fog-Cloud Computing Paradigm. In Proceedings of the 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), Lisbon, Portugal, 8–12 May 2017; pp. 1222–1228. [Google Scholar] [CrossRef]
  15. Srichan, S.; Majhi, S.; Jena, S.; Mishra, K.; Bhat, R. A Secure and Distributed Placement for Quality of Service-Aware IoT Requests in Fog-Cloud of Things: A Novel Joint Algorithmic Approach. IEEE Access 2024, 12, 56730–56748. [Google Scholar] [CrossRef]
  16. Wei, Z.; Mao, J.; Li, B.; Zhang, R. Privacy-Preserving Hierarchical Reinforcement Learning Framework for Task Offloading in Low-Altitude Vehicular Fog Computing. IEEE Open J. Commun. Soc. 2024, 6, 3389–3403. [Google Scholar] [CrossRef]
  17. Liu, C.; Wang, J.; Zhou, L.; Rezaeipanah, A. Solving the multi-objective problem of IoT service placement in fog computing using cuckoo search algorithm. Neural Process. Lett. 2022, 54, 1823–1854. [Google Scholar] [CrossRef]
  18. Ayoubi, M.; Ramezanpour, M.; Khorsand, R. An autonomous IoT service placement methodology in fog computing. Softw. Pract. Exp. 2021, 51, 1097–1120. [Google Scholar] [CrossRef]
  19. Skarlat, O.; Nardelli, M.; Schulte, S.; Borkowski, M.; Leitner, P. Optimized IoT service placement in the fog. Serv. Oriented Comput. Appl. 2017, 11, 427–443. [Google Scholar] [CrossRef]
  20. Canali, C.; Lancellotti, R. Gasp: Genetic algorithms for service placement in fog computing systems. Algorithms 2019, 12, 201. [Google Scholar] [CrossRef]
  21. Al Masarweh, M.; Alwada’n, T.; Afandi, W. Fog computing, cloud computing and IoT environment: Advanced broker management system. J. Sens. Actuator Netw. 2022, 11, 84. [Google Scholar] [CrossRef]
  22. Proietti, M.G.; Magnani, M.; Beraldi, R. A Latency-levelling Load Balancing Algorithm for Fog and Edge Computing. In Proceedings of the 25th International ACM Conference on Modeling Analysis and Simulation of Wireless and Mobile Systems, Montreal, QC, Canada, 24–28 October 2022; pp. 5–14. [Google Scholar]
  23. Khosroabadi, F.; Fotouhi-Ghazvini, F.; Fotouhi, H. Scatter: Service placement in real-time fog-assisted IoT networks. J. Sens. Actuator Netw. 2021, 10, 26. [Google Scholar] [CrossRef]
  24. Maiti, P.; Sahoo, B.; Turuk, A.K.; Kumar, A.; Choi, B.J. Internet of Things applications placement to minimize latency in multi-tier fog computing framework. ICT Express 2022, 8, 166–173. [Google Scholar] [CrossRef]
  25. Kopras, B.; Bossy, B.; Idzikowski, F.; Kryszkiewicz, P.; Bogucka, H. Task allocation for energy optimization in fog computing networks with latency constraints. IEEE Trans. Commun. 2022, 70, 8229–8243. [Google Scholar] [CrossRef]
  26. Shi, C.; Ren, Z.; Yang, K.; Chen, C.; Zhang, H.; Xiao, Y.; Hou, X. Ultra-low latency cloud-fog computing for industrial internet of things. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018; pp. 1–6. [Google Scholar]
  27. Skarlat, O.; Nardelli, M.; Schulte, S.; Dustdar, S. Towards QoS-aware Fog Service Placement. In Proceedings of the 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), Madrid, Spain, 14–15 May 2017; pp. 89–96. [Google Scholar]
  28. Natesha, B.; Guddeti, R. Meta-heuristic based hybrid service placement strategies for two-level fog computing architecture. J. Netw. Syst. Manag. 2022, 30, 47. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Sun, H.; Abutuqayqah, H. An efficient and autonomous scheme for solving IoT service placement problem using the improved Archimedes optimization algorithm. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 157–175. [Google Scholar] [CrossRef]
  30. Zare, M.; Sola, Y.; Hasanpour, H. Towards distributed and autonomous IoT service placement in fog computing using asynchronous advantage actor-critic algorithm. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 368–381. [Google Scholar] [CrossRef]
  31. Zare, M.; Sola, Y.; Hasanpour, H. Imperialist competitive based approach for efficient deployment of IoT services in fog computing. Clust. Comput. 2023, 7, 1–4. [Google Scholar] [CrossRef]
  32. Yang, X.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  33. Wang, R.F.; Su, W.H. The Application of Deep Learning in the Whole Potato Production Chain: A Comprehensive Review. Agriculture 2024, 14, 1225. [Google Scholar] [CrossRef]
  34. Ghobaei-Arani, M.; Souri, A.; Rahmanian, A. Resource management approaches in fog computing: A comprehensive review. J. Grid Comput. 2020, 18, 1–42. [Google Scholar] [CrossRef]
  35. Bobadilla, J.; Ortega, F.; Hernando, A.; Gutiérrez, A. Recommender systems survey. Knowl. Based Syst. 2013, 46, 109–132. [Google Scholar] [CrossRef]
  36. Adomavicius, G.; Tuzhilin, A. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 2005, 17, 734–749. [Google Scholar] [CrossRef]
  37. Lops, P.; De Gemmis, M.; Semeraro, G. Content-based recommender systems: State of the art and trends. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2011; pp. 73–105. [Google Scholar] [CrossRef]
  38. Schafer, J.; Frankowski, D.; Herlocker, J.; Sen, S. Collaborative filtering recommender systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 291–324. [Google Scholar] [CrossRef]
  39. Burke, R. Hybrid web recommender systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 377–408. [Google Scholar] [CrossRef]
  40. Mahmud, R.; Buyya, R. Modelling and simulation of fog and edge computing environments using iFogSim toolkit. In Fog and Edge Computing: Principles and Paradigms; Wiley: Hoboken, NJ, USA, 2019; Volume 1. [Google Scholar] [CrossRef]
  41. Calheiros, R.; Ranjan, R.; Beloglazov, A.; De Rose, C.; Buyya, R. CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exp. 2011, 41, 23–50. [Google Scholar] [CrossRef]
  42. Gupta, H.; Dastjerdi, A.; Ghosh, S.; Buyya, R. iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, Edge and Fog computing environments. Softw. Pract. Exp. 2017, 47, 1275–1296. [Google Scholar] [CrossRef]
  43. Mahmud, R.; Pallewatta, S.; Goudarzi, M.; Buyya, R. Ifogsim2: An extended ifogsim simulator for mobility, clustering, and microservice management in edge and fog computing environments. J. Syst. Softw. 2022, 190, 111351. [Google Scholar] [CrossRef]
  44. Qayyum, T.; Malik, A.; Khattak, M.; Khalid, O.; Khan, S. FogNetSim++: A toolkit for modeling and simulation of distributed fog environment. IEEE Access 2018, 6, 63570–63583. [Google Scholar] [CrossRef]
  45. Puliafito, C.; Goncalves, D.; Lopes, M.; Martins, L.; Madeira, E.; Mingozzi, E.; Rana, O.; Bittencourt, L. MobFogSim: Simulation of mobility and migration for fog computing. Simul. Model. Pract. Theory 2020, 101, 102062. [Google Scholar] [CrossRef]
  46. Lera, I.; Guerrero, C.; Juiz, C. YAFS: A simulator for IoT scenarios in fog computing. IEEE Access 2019, 7, 91745–91758. [Google Scholar] [CrossRef]
  47. Salama, M.; Elkhatib, Y.; Blair, G. IoTNetSim: A modelling and simulation platform for end-to-end IoT services and networking. In Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing (UCC’19), Auckland, New Zealand, 2–5 December 2019; pp. 251–261. [Google Scholar]
  48. Mass, J.; Srirama, S.; Chang, C. STEP-ONE: Simulated testbed for edge-fog processes based on the opportunistic network environment simulator. J. Syst. Softw. 2020, 166, 110587. [Google Scholar] [CrossRef]
  49. Piorkowski, M.; Sarafijanovic-Djukic, N.; Grossglauser, M. CRAWDAD epfl/mobility. IEEE Dataport 2022. [Google Scholar] [CrossRef]
Figure 1. Fog Computing architecture illustrating the three tiers (Edge, Fog, and Cloud), where arrows represent the direction of data flow and interactions between the layers.
Figure 2. Detailed architecture to solve the SPP based on the FOCN and the RS, where the numbers represent the steps of the service placement process.
Figure 3. The hierarchical structure of the proposed Fog layer architecture.
Figure 4. The proposed ACU function where the arrows represent the flow of IoT requests through different processing paths depending on their delay sensitivity and fog node availability.
Figure 5. The content-based filtering process.
Figure 6. The collaborative filtering process.
Figure 7. The proposed hybrid recommendation system process.
Figure 8. Activity diagram of the proposed solution based on the FOCN, using standard UML notations (rectangles for activities, diamonds for decision points, and arrows for control flow).
Figure 9. Detailed architecture of the proposed hybrid recommendation system, where arrows represent the direction of data flow and interactions between the layers.
Figure 10. Activity diagram of the proposed solution.
Figure 11. Three-tier architecture for Fog-enabled drone systems, where arrows represent the interactions between the layers.
Figure 12. Application model for IoT-enabled drones case study.
Figure 13. Comparison of services performed, failed, and remaining in different methods after 5 time periods.
Figure 14. Comparison of the Fog resource utilization in different methods.
Figure 15. Results for the average waiting time for services executed in each time period.
Table 1. Comparison between Cloud, Fog, and Edge.
Aspect | Cloud | Fog | Edge
Operator | Cloud Providers | Cloud and Network Access Providers | Edge Providers
Architecture | Centralized | Distributed | Distributed
Basic equipment | Powerful servers | Network equipment, dedicated computing nodes | Network equipment, Edge devices
Energy consumption | High | Moderate | Low
Latency | High | Moderate | Low
User distance | Large | Relatively small | Very small
Number of nodes | Small | Large | Very large
Mobility support | Limited | Yes | Yes
Security | Less (non-local) | High (local) | High (local)
Response time | High | Low | Very low
Computing power | High | Less power | Moderate
Table 2. Comparison of the studied works based on their objectives.
Fog Feature | References
Reduce latency | [7,8,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]
Data privacy and security | [15,16]
Energy efficiency | [7,17,23,28,29,31]
Cost | [7,17,21,23,28,29,30,31]
Table 3. Classification of related works.
Method | Ref | Techniques | Performance Metrics | Domain | Evaluation Tool
Heuristic | [19] | GA | Latency, throughput | Smart city | iFogSim
Heuristic | [20] | GA | Latency | Smart city | Testbed
Heuristic | [22] | Load balancing | Latency | General | Simpy
Heuristic | [23] | SCATTER | Cost, EC, RT, latency, and FU | Smart Home | iFogSim
Heuristic | [7] | ODMA | Cost, EC, RT, latency, and FU | Healthcare | iFogSim
Heuristic | [8] | GA | Application delay and network usage | General | iFogSim
Exact solvers | [25] | CPLEX solver | Latency, cost | General | Matlab
Exact solvers | [27] | ILP solver | Latency, cost | General | iFogSim
Meta-Heuristic | [29] | SPP-AOA | FU, service cost, EC, delay cost, and throughput | General | Matlab
Meta-Heuristic | [17] | CSA | FU, delay, RT, SLA, EC, and cost | Industry | Matlab
Meta-Heuristic | [31] | ICA | FU, service cost, EC, delay, and throughput | General | Matlab
Evolutionary | [18] | SPEA-II | FU, service latency, and cost | General | iFogSim
Scheduling | [21] | Round robin + Weighted fair queuing | QOS (SLA) | General | iFogSim + CloudSim
Scheduling | [24] | Scheduling algorithms | Latency | General | Matlab
Machine learning | [30] | DRL + A3C-SPP | Latency, cost | General | Matlab
Machine learning | [16] | AFedPPO and CCP FL | Privacy preservation, latency | Vehicular Fog Computing | AirFogSim
Hybrid | [15] | ANFIS, CHBA, and OBL | Makespan, delay violation, cost, FU, and EC | General | Matlab
Hybrid | [28] | MGAPSO + EGAPSO | QOS, service cost, EC, and the service time | Industry | Testbed
Hybrid | HRS | Content + collaborative filtering | FU, delay, and RT | IoD | iFogSim2
EC: energy consumption, FU: Fog utilization, RT: response time.
Table 4. Comparison of studied works according to their achievements.
Ref. | Use of Fog | Dynamic | Mobility | Latency | Recommendation System | History and Preferences | Simulation
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[7]
[8]
[28]
[29]
[30]
[31]
[15]
[16]
HRS
✓: yes; ✗: no.
Table 5. Comparison between Matlab and iFogSim.
Matlab | iFogSim
Created in 1984 | Created in 2012
Used in various fields of science | Specifically designed for engineering fields
Uses its own Matlab language for scripting and Simulink for graphical modeling | Used in conjunction with Java
Table 6. Comparison between iFogSim2 and other simulation tools.
Simulators | Real Dataset | Mobility | Customized Cluster Formation
FogNetSim++ [44]
MobFogSim [45]
YAFS [46]
IoTNetSim [47]
STEP-ONE [48]
iFogSim2 [43]
✗: Not Supported; ✓: Supported.
Table 7. Configuration of devices.
Device Type | CPU (MIPS) | RAM (MB) | Uplink Bandwidth (Mbps) | Downlink Bandwidth (Mbps)
Cloud data center | 44,800 | 40,000 | 100 | 10,000
Proxy server | 2800 | 4000 | 10,000 | 10,000
Fog device | 2800 | 4000 | 10,000 | 10,000
End device | 1000 | 1000 | 10,000 | 270
Table 8. The latency between devices.
Source | Destination | Latency
Sensor | Final device | 0.6
Actuator | Final device | 0.1
Final device | Fog device | 2.0
Fog device | Proxy server | 4.0
Proxy server | Cloud datacenter | 100.0
Table 9. Details for services in 5 time periods.
Time Period (T) | 1 | 2 | 3 | 4 | 5
Time (s) | 8 | 16 | 24 | 32 | 40
Number of services | 71 | 48 | 48 | 46 | 97

Share and Cite

MDPI and ACS Style

Ben Rjeb, H.; Sliman, L.; Zorgati, H.; Ben Djemaa, R.; Dhraief, A. Optimizing Internet of Things Services Placement in Fog Computing Using Hybrid Recommendation System. Future Internet 2025, 17, 201. https://doi.org/10.3390/fi17050201
