Cloud and Edge Computing-Based Computer Forensics: Challenges and Open Problems

In recent years, there has been a dramatic change in attitudes towards computers and the use of computing resources in general. Cloud and Edge computing, together with related paradigms such as fog computing and the Internet of Things (IoT), have emerged as the most widely used technologies. Exploiting the Cloud and Edge computing paradigms brings several benefits, such as lower costs and higher efficiency: computation and storage are provided close to where data are produced, enabling better data control, faster insight and action, and continuous operation. However, appealing as these benefits are, their effects on computer forensics are somewhat undesirable. The complexity of Cloud and Edge environments and their key features present many technical challenges for multiple stakeholders. This paper seeks to establish an in-depth understanding of the impact of Cloud and Edge computing on computer forensics, covering the software and hardware tools used in the digital forensic process and the forensic methods for handling tampered sound files, hidden files, and image files, including images with steganography. The technical and legal challenges and the open design problems (such as distributed maintenance, multitasking and practicality) are examined to highlight the various challenges for the digital forensics process.


Introduction
Cloud computing has emerged as a ubiquitous technology with new capabilities for the smooth and fast delivery of computer resources such as servers, networks, storage, applications, and services on demand with minimal management requirements [1]. It is also one of the most critical milestones in developing computer systems and technologies that can be integrated and utilized efficiently [2,3]. Cloud computing services are very economical and expandable [4][5][6][7][8].
Recently, Edge computing has created a new construct in the computing environment. It brings Cloud computing services next to the end-users and is characterized by faster processing and quick responses to requests. Internet-enabled applications such as real-time traffic monitoring, surveillance, and virtual reality require prompt response times and faster processing [9]. Edge computing's promising features include mobility support, location awareness, very low delays, and proximity to users. These features make Edge computing suitable for various future applications in a plethora of domains such as transportation, entertainment, wellness, health, industry, etc. [10][11][12][13][14].
The Cloud computing design is not flawless, and its benefits are open to criminal exploitation [15]. Additionally, the authors of [16][17][18] argued that this conflict with the traditional digital investigation method creates various law-enforcement challenges for digital forensic practitioners. Criminals use the Cloud as a safe place to store and hide possible digital evidence [19]. Indeed, the authors of [20,21] discussed how Cloud computing has added new digital forensic investigation requirements. Its design introduces unique risks related to the confidentiality and integrity of information and to the discovery process of computer forensics [22]. Cloud and Edge computing's impact on computer forensics is divided into two separate but related categories of challenges: technical and legal [23][24][25][26][27].
Data in the Cloud/Edge are inherently elastic and distributed, resulting in various technical challenges. The most common barriers to Cloud/Edge forensics, however, concern legal issues. From the legal viewpoint, these challenges primarily relate to evidence: authentication, hearsay, chain of custody and preservation [24,28]. In most cases, the Cloud/Edge Service Provider (SP) can be compelled to hand over data to various law enforcement authorities; thus, the data (as evidence) are subject to multiple legal regimes simultaneously. This is especially difficult when a third-party SP is involved, i.e., when the SP uses the services of another SP located elsewhere. If data storage and other services are not operated by the SP itself, uncertainty about the actual location and ownership of the data can confuse the investigation process. Rules of evidence differ across jurisdictions, and inconsistent definitions of crimes and criminal offences add further legal complications.

Motivation
The purpose of this review paper is to establish a more in-depth understanding of the challenges of Cloud and Edge computing in computer forensics. From the technical viewpoint, these include the difficulty of dealing with data stored in multiple locations (under multiple authorities), the limited access to Cloud/Edge resources needed to build strong evidence, and the need to maintain the evidence's integrity [29,30]. Further, the large amounts of data in the Cloud/Edge cannot be easily handled and processed without the risk of contamination or damage to the evidence's integrity [31]. Digital evidence is more complex in nature than physical evidence, requiring new criminal procedures to control its collection [32]. Dealing with the massive amount of data distributed across multiple domains continues to compound the problem [33,34].

In particular, this review covers the following:
1. The hardware and software tools available for the digital forensic process in Cloud and Edge computing;
2. The effect of encryption methods on Cloud/Edge forensic analysis;
3. The basics of forensic methods for handling tampered sound files, hidden files, image files, or images with steganography (e.g., to protect copyrights).
The rest of the paper is organized as follows. Section 2 describes the literature review. Section 3 presents Edge forensics challenges with Cloud computing and existing solutions. Section 4 discusses the forensic analysis tools, encryption methods and methods for handling different types of files. Section 5 discusses the open problems existing in the related study. Section 6 discusses the nature and scope of the technical and legal challenges, followed by Section 7, which concludes the paper and discusses future work directions.

Literature Review
This review discusses the most widely acknowledged challenges of Cloud/Edge computing in computer forensics. In some ways, the review is critical in highlighting the limitations and shortcomings of the different solutions proposed. Research gaps are also identified in other proposals for further review and implementation. For this study, the most common steps of a digital investigation model have been adopted, as presented by the authors of [28,35]. Where necessary, two or more related stages from other models are put together for emphasis and clarity, as shown in Figure 1. The outcome of this review is discussed in the following subsections.

Survey/Identification Stage
This stage involves examining the criminal aspects of an incident and is based on the active involvement of the criminal element(s). It recognizes that an incident has occurred from key indicators present at the scene and determines the level and nature of the participation [36]. However, the distributed nature of Cloud/Edge computing makes the identification process very difficult. Investigators rarely know the exact location of evidence. Additionally, the forensic investigator has limited access to evidence in the Cloud/Edge, so performing any identification encounters significant challenges. Elastic data retention, which translates into a higher risk of data loss than in a physical environment, is also a concern [37][38][39]. A suspect in the Cloud/Edge may choose to restart or power off VMs and the related hardware to destroy any available evidence [40]. Therefore, without continuous record-keeping, there will be no opportunity to identify or even collect evidence. In a public Cloud, the identification process becomes much more complicated than in a private Cloud [37]. The following are the solutions that exist for the problem.

Log-Based Approach Model
The authors of [41][42][43] emphasize logging for investigative purposes. In particular, the authors of [42] proposed keeping a separate log on the client side and then aligning it with the CSP log using unique IDs and timestamps that provide relevant event details. Furthermore, they argued that this approach was designed to simplify digital forensics in Cloud environments. However, a limitation of Sang's proposal [42] is that the CSP controls what information reaches the client side; on the other hand, it is argued that this arrangement ensures comparable data on both sides of the transaction, i.e., the client side and the CSP side. The authors of [43] identify specific security and privacy threats to SaaS-based CRM solutions and the measures to overcome them to ensure the smooth functioning of business processes.
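The alignment Sang [42] describes, matching client-side and CSP-side records by a shared request ID and near-equal timestamps, can be sketched in a few lines of Python. All record layouts, field names and tolerance values below are hypothetical illustrations, not part of the cited scheme:

```python
from datetime import datetime, timedelta

# Hypothetical client-side and CSP-side log records, each carrying a
# shared request ID and a timestamp.
client_log = [
    {"req_id": "r-101", "ts": "2023-05-01T10:00:02", "event": "upload started"},
    {"req_id": "r-102", "ts": "2023-05-01T10:05:40", "event": "file deleted"},
]
csp_log = [
    {"req_id": "r-101", "ts": "2023-05-01T10:00:03", "event": "object PUT"},
    {"req_id": "r-102", "ts": "2023-05-01T10:05:41", "event": "object DELETE"},
]

def correlate(client_log, csp_log, max_skew_seconds=5):
    """Pair client and CSP records that share a request ID and whose
    timestamps agree within a tolerated clock skew."""
    csp_by_id = {rec["req_id"]: rec for rec in csp_log}
    matches = []
    for rec in client_log:
        other = csp_by_id.get(rec["req_id"])
        if other is None:
            continue
        t1 = datetime.fromisoformat(rec["ts"])
        t2 = datetime.fromisoformat(other["ts"])
        if abs(t1 - t2) <= timedelta(seconds=max_skew_seconds):
            matches.append((rec, other))
    return matches

pairs = correlate(client_log, csp_log)
print(len(pairs))  # both sample requests correlate
```

In practice, the clock-skew tolerance matters because the client and the CSP rarely share a perfectly synchronized clock, which is one of the time-synchronization challenges noted later in this review.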

Access of Evidence in Logs Using a Prepared API
The authors of [5] proposed another solution, which partly overlaps with Sang's [42], for finding potential evidence in the client-side log. Their solution is presented as a "Client-Side Evidence Identification Process" that uses a pre-configured API to extract system-appropriate data for both SaaS and PaaS applications. The process examines the raw logs and the usage state of the client side. Like the IaaS model solution, it is therefore limited to information available on the client side only.
There are no usage indicators for the SaaS and PaaS models because of the restricted customer access to APIs in these models. Further, the authors of [44] have surveyed all aspects of log-based Cloud forensic techniques in the digital investigation process.

Using Eucalyptus Software-Syslog or Snort Logs
The authors of [22] recommended using a standard logging method that ensures log retention. Using a log management system, the process collects and correlates logs containing important information such as the suspect machine's IP address, the type of browser used, the number of HTTP requests, and the requested content. This solution has been implemented and validated in the Eucalyptus cloud environment, using analytical tools such as Snort to monitor the Eucalyptus cloud environment's performance.
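The kind of information such a log management system correlates (source IP, user agent, request counts) can be illustrated with a minimal Python sketch that summarizes web-server access logs in the standard combined log format. The sample log lines and all names below are hypothetical, not taken from the Eucalyptus deployment of [22]:

```python
import re
from collections import Counter

# A few hypothetical access-log lines in combined log format.
RAW_LOG = """\
203.0.113.7 - - [01/May/2023:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 10.0)"
203.0.113.7 - - [01/May/2023:10:00:05 +0000] "GET /admin HTTP/1.1" 403 199 "-" "Mozilla/5.0 (Windows NT 10.0)"
198.51.100.9 - - [01/May/2023:10:01:00 +0000] "POST /login HTTP/1.1" 200 310 "-" "curl/7.88.1"
"""

LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

def summarize(raw):
    """Count HTTP requests per source IP and record the user agents
    observed for each address."""
    requests_per_ip = Counter()
    agents_per_ip = {}
    for line in raw.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        requests_per_ip[m["ip"]] += 1
        agents_per_ip.setdefault(m["ip"], set()).add(m["agent"])
    return requests_per_ip, agents_per_ip

counts, agents = summarize(RAW_LOG)
print(counts["203.0.113.7"])  # 2 requests from the first address
```

A production system would, of course, ship such summaries to centralized storage rather than compute them ad hoc, so they survive if the suspect machine is wiped.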

The Persistent Storage Device for the Client's Data
The authors of [5] have proposed a solution that uses persistent storage for the client's volatile data. While this comes with additional costs, the solution has two benefits: it improves data security and access to data for customers, and it provides easy access to evidence if a forensic investigation is required. This proposal's success depends mostly on implementing a global SLA-like policy between the CSP and the client. Another downside of this solution, as its proponents state, is the risk to the confidentiality of the client's data. The SLA stipulates that customer data should be erased entirely at the end of the contract, and to ensure confidentiality, all user data should be encrypted to prevent unauthorized reading. The authors of [45] proposed a model to obtain residual data from all activities performed by GDrive users on Android smartphones. This approach may assist investigators in obtaining residual data from a GDrive client and may provide information to law enforcement personnel.

An Integrated Conceptual Digital Forensic Framework
For volatile data, the authors of [21] proposed an integrated conceptual digital forensic framework as a method for analyzing VMs in the absence of persistent storage. This proposal is based on two widely used digital forensics frameworks, [46] and NIST [47]. Complementing this approach, the authors of [48] suggested a continuous data synchronization model that synchronizes volatile data between the VM and persistent storage for forensic use. The authors of [49] proposed a model for data collection and forensic Cloud construction, using feature-based machine learning and prioritization on VMs. They proposed a forensic approach for the PaaS and SaaS models to design and develop new digital techniques, improve investigation efficiency, and provide information to law enforcement personnel.

Collection and Preservation Stages
These stages include the collection and retention of evidence while maintaining its integrity and usefulness. Once identified, the evidence has potential value and should, therefore, be collected and protected from unwanted damage that would make it unreliable, incomplete, or inadmissible [50]. This process is challenged in investigations involving the Cloud/Edge, partly because of the shared features of the Cloud/Edge environment [5,9] and the reliance on the CSP to store evidence and make it available to digital investigators. CSPs, in turn, face a series of technical challenges in providing accurate data quickly [51]. Issues at the collection and preservation stages also include Cloud/Edge data classification (which might fail due to high-frequency problems), data reliability, time synchronization, data availability, and the timeline of events in the Cloud, all of which directly affect evidence storage [23][24][25][51]. The following are the available solutions to the problem.

The Trust Model and Trust Cloud
Trust relationships between the CSP and the client/investigator are required at the highest level so that CSPs can provide specific information on request [51]. The authors of [52][53][54][55][56] have discussed using multiple layers of trust to allow customers and investigators to gather evidence, including VM images, network logs, processes and information. Both of these solutions seek to address the lack of transparency between CSPs and clients, which is why they build a relationship of trust. However, as the authors of [51] point out, the degree of dependence on the control plane is a weakness of the trust model [53,54]. These solutions are also targeted at the IaaS and PaaS models and cannot be successfully implemented in the SaaS model.

Isolation Techniques
The authors of [57] suggested methods and procedures for isolating Cloud instances to prevent contamination and distortion of evidence during collection, thus maintaining authenticity. The solution seeks to address the problem of large-scale expansion in the Cloud by separating VM instances. Their proposed strategies include instance relocation, server farming, failover, address relocation, sandboxing, and man-in-the-middle techniques. In Cloud forensics, the approach is used to collect and analyze the data exchanged between the instance and the Cloud hardware, which effectively allows forensic analysis to be performed on all data used by the instance without interfering with other instances. However, the authors of [51] pointed out that these methods are doctrinal and have not been tested or supported by any kind of experiment.

Secure Provenance Schemes
With the growth of data in the Cloud, the authors of [58] proposed a "Secure Provenance Scheme" for Cloud computing that records the identity and processing history of data objects in the Cloud. The solution depends on a group-signature and attribute-based signature strategy, which aims to provide, among other things, confidentiality of sensitive documents and verifiable Cloud-based location records. The scheme addresses two key issues, security and privacy, emphasising who receives what and when. In this way, issues related to the chain of custody [59] and anonymous access are addressed through the offered "Trusted Evidence Mechanism". However, it is not clear whether this scheme works on all three Cloud models or has limitations; its security has been demonstrated only under certain standard assumptions. The authors of [60] proposed a model that, besides the consumer of the service and the service provider, includes the additional role of a Cloud auditor, since the services used by consumers are accessible to all users even if the provider enables login features for each system to which the customer has limited access. If a forensic analyst reports an issue, the investigator conducts a preliminary investigation by collecting provenance records from the recent history and finally prepares a report to be submitted to the court authorities.
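The core idea of a provenance record whose history cannot be silently rewritten can be sketched with a hash chain, a deliberate simplification of the signature-based scheme of [58] (which uses cryptographic signatures rather than bare hashes). All record fields below are illustrative:

```python
import hashlib, json

def add_record(chain, actor, action):
    """Append a provenance record whose hash covers the previous record,
    so any later tampering breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"actor": actor, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every link; returns True only if no record was altered."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"actor": rec["actor"], "action": rec["action"], "prev": rec["prev"]}
        if rec["prev"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

chain = []
add_record(chain, "alice", "create document")
add_record(chain, "bob", "edit document")
print(verify(chain))           # True: chain intact
chain[0]["action"] = "delete"  # tamper with the recorded history
print(verify(chain))           # False: tampering detected
```

The real scheme additionally binds each record to an identity via signatures, so the auditor can attribute each action while the actor remains anonymous to other users.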

The Trusted Platform Module (TPM)
Data reliability is an essential requirement for the acceptance of evidence in the courts. The authors of [5,48,53] proposed the use of the Trusted Platform Module (TPM), which aims to maintain the confidentiality and integrity of data in the Cloud. The TPM provides additional security for authentication, encryption and signing, so that evidence can be retained and cannot be maliciously altered or removed without notice. However, as the authors of [53] point out, one problem with the TPM is that its security is still in doubt, as running procedures can be modified without detection. Additionally, implementing the TPM in the PaaS and SaaS models is uncertain because of the limited access to the hypervisor. Moreover, compatibility with many devices currently in the Cloud is problematic, and as a result, CSPs are unlikely to support its implementation. The use of the TPM has also not been rigorously tested, and there is no evidence supporting its claims.
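The tamper-evidence property that motivates the TPM proposals can be sketched with a keyed digest (HMAC); in a real deployment, the key would be sealed inside the TPM rather than held in software, and the TPM would also perform the signing. All names and values below are illustrative:

```python
import hmac, hashlib, secrets

# Stand-in for a key that would normally be protected by the TPM.
key = secrets.token_bytes(32)

def seal(evidence: bytes, key: bytes) -> str:
    """Produce a keyed digest over the evidence; without the key, an
    attacker cannot forge a matching tag after altering the data."""
    return hmac.new(key, evidence, hashlib.sha256).hexdigest()

def check(evidence: bytes, tag: str, key: bytes) -> bool:
    """Constant-time comparison of the stored tag against a fresh one."""
    return hmac.compare_digest(seal(evidence, key), tag)

evidence = b"network capture 2023-05-01"
tag = seal(evidence, key)
print(check(evidence, tag, key))         # True: evidence unmodified
print(check(evidence + b"!", tag, key))  # False: modification detected
```

The weakness the authors of [53] raise maps directly onto this sketch: if an attacker can run code with access to the key (or to the sealing procedure itself), the tag can be regenerated after tampering and detection fails.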

VM Snapshots
Collecting evidence sometimes requires making a forensic copy (a bit-by-bit image) of Cloud storage systems. The authors of [18] suggested using VM snapshots to help capture and investigate instances of the IaaS model. However, this solution is limited to the private deployment model and does not extend to the public Cloud; it also does not apply to the SaaS and PaaS models. In the IaaS model, things are a little different, as the client (and the investigators) control the Cloud infrastructure, including storage. The authors of [61] improved the secure live migration framework, which enables a trusted platform module for many virtual machines operating on the same hardware platform while maintaining the integrity of the Cloud system.
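The forensic-copy step itself, a bit-for-bit read of a disk image with a verification digest, is standard practice and can be sketched as follows (a throwaway file stands in for a VM disk; file names are illustrative):

```python
import hashlib, os, tempfile

def forensic_copy(src_path, dst_path, chunk_size=1 << 20):
    """Copy an image bit-for-bit while computing SHA-256 digests of the
    source stream and of the written copy, so the investigator can show
    the acquired image matches the original."""
    src_sha = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(chunk_size):
            src_sha.update(chunk)
            dst.write(chunk)
    # Re-read the copy independently to confirm the write succeeded.
    dst_sha = hashlib.sha256()
    with open(dst_path, "rb") as dst:
        while chunk := dst.read(chunk_size):
            dst_sha.update(chunk)
    return src_sha.hexdigest(), dst_sha.hexdigest()

# Demonstration with a temporary file standing in for a VM disk image.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "vm-disk.img")
    dst = os.path.join(d, "vm-disk.copy.img")
    with open(src, "wb") as f:
        f.write(os.urandom(4096))
    h1, h2 = forensic_copy(src, dst)
    print(h1 == h2)  # True: copy verified
```

In the Cloud, the difficulty is not the copy itself but obtaining block-level access to the underlying storage at all, which is precisely what the SaaS and PaaS models deny the investigator.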

Examination and Analysis Stages
The examination and analysis stages are undoubtedly the most challenging even in traditional digital investigations, particularly the analytical process of establishing the value of evidence and its significance for the case [36]. These stages become a major challenge for cloud forensics because of the scale and increased volume of resources and materials to be examined, combined with the limitations of the processing and testing tools [51]. Additionally, few tools are available for performing forensic examination and analysis because of the highly non-standard cloud data formats. It is widely understood that traditional forensic tools have various limitations brought about by the diffuse and elastic features of the cloud paradigm [39,59]. For example, EnCase and FTK (the most widely used forensic tools) cannot be used to retrieve, analyze or examine cloud data because of the high level of trust required.
For this reason, it may be necessary to develop applications that translate cloud data formats into something readable and tangible. Another option is to develop tools that can examine and analyze data in the native cloud data format. The following are the available solutions to the problems.

The Offline Windows Analysis and Data Extraction (OWADE) Tool
The OWADE tool was developed by the Open-Source Software Foundation. It can retrieve information about the websites a user has visited, extract data stored in the cloud, disclose the user's identity, and reconstruct online session activities. This tool is important because the internet is the gateway to many cloud services and resources, and recovering such data is a crucial step in an investigation. However, the authors of [51] have shown that this application is still in development and only works with Windows XP drives. Meanwhile, Microsoft's operating systems have been continuously upgraded; Windows 10 is the latest version, which makes a tool restricted to Windows XP far less useful.

The Management Plane
The "Management Plane" proposed by the authors of [53] is another recommendation that balances speed and trust when obtaining and analyzing Cloud-based evidence. This solution is part of the authors' set of proposals, which includes the "Platform Modules", "Forensics-as-a-Service", and "legal solutions", all of which require less trust but more co-operation from the CSP.

The Forensic Open-Stack Tools (FROST)
The authors of [53] developed the FROST, a management plane forensic toolkit for acquiring forensics data from virtual disks, API logs and guest firewall logs.

Reconstruction Stage
The reconstruction phase in a digital investigation is closely tied to the analysis stage: the data reconstruction process draws on the outcomes of the various analyses [5]. As in the other stages, Cloud computing complicates the reconstruction phase in several ways. The distributed Cloud-based environment hinders the execution of any concise analysis with a consistent timeline that can be used to re-trace the logical sequence of crime events [37]. The following are the solutions that exist for the problem.

LVM2-Based System Snapshots
To rebuild the crime scene, the authors of [62] proposed a solution that replicates the attack and restores the system to its original state. Their proposed method uses LVM2 system snapshots to replay the attack event each time an attack on the Cloud system occurs.

Presentation/Reporting Stage
The presentation/reporting stage entails reporting investigation findings in a legally acceptable manner [35]. It is the final stage of an investigation, in which evidence is formally presented to a court or another judicial body [63]. The presentation stage contains a summary of the entire investigation process and a description of the conclusions [36]. Presenting or reporting the outcome of digital investigations involving cloud services and applications faces challenges similar to those of traditional digital investigations, owing to strict legal requirements.
Cloud computing and its complexity have added to these challenges through several issues that must be overcome in the course of a court investigation involving the Cloud. In most cases, investigators are required, in their presentations, to explain the procedures used to obtain evidence and how it relates to the case. However, Cloud technologies that involve multiple applications, virtualization and simultaneous access (to name a few) make such an explanation very difficult to follow for those with limited technical knowledge. In presenting the investigation findings, the forensic investigator, as an expert witness, has to prove that the evidence was collected using legally permitted methods, which is very difficult in the Cloud space. The Cloud's shared and multi-tenant features make it challenging to determine where and against whom the crime was committed, let alone in which country the case should be brought and where it can be heard [38]. The following are the solutions that exist for the problem.

Seminars and Associate Conferences
Conferences should be held to raise awareness among judges and other legal stakeholders about the complexity of digital investigations involving the Cloud. In addition, court administrators (e.g., judges and magistrates) will need corresponding qualifications to demonstrate their understanding of the basics of Cloud computing and related evidence available in the Cloud.

Repeatable and Reproducible Conclusions
When presenting the findings of an investigation, the investigator should ensure that the results of the analysis can be repeated and reproduced by the same or another forensic investigator. This increases the credibility of the report and goes beyond simply presenting a piece of uncorroborated information.
The challenges that exist at the various earlier stages of the investigation process also impact the presentation phase, because the processes and outcomes of the previous stages are discussed and reported in the presentation. For this study, the most common steps of a digital investigation model have been adopted, as discussed by the authors of [35]. Where necessary, two or more related steps from other models are put together for emphasis and clarity. Table 1 summarizes the recommendations already presented by various authors to address the multiple challenges encountered at different stages of the digital forensics process.
The disadvantages of the methods discussed for the challenges/issues in the digital forensic investigation process have already been highlighted under the "Limitations/Drawbacks" attribute of Table 1. A brief discussion on the advantages and disadvantages of each approach is presented in Table 2.

An integrated conceptual digital forensic framework
• There is no proof of testing and evaluation [21,48,49].

The continuous data synchronization model
• Continuous synchronization via an API.
• No proof of implementation or evaluation [48,49].

Access to Evidence: Log-based Model
• Uses transaction and event logs.
• It does not resolve issues related to dynamic data [41,43].

Log-based Model
• Uses a separate log on the client side, aligned with the CSP log using unique IDs and timestamps that provide relevant event details.
• The CSP determines what information is included on the client side [42,43].

Log-based Model (Client-Side Evidence Identification Process)

• Built-in application logs.
• No proof of implementation or evaluation [5,42,44].

• Separates the primary and secondary crime scenes.
• Investigation phases are considered iterative instead of linear.
• It is very time-consuming due to the iterative phases of the digital investigation process.

Cybercrime Forensic Framework [40]
• Considers the collection and preservation stages.
• Makes multiple images of the relevant records and files to preserve the originality and integrity of the evidence.
• Highly dependent on the FTK and EnCase tools for the forensic analysis.
• Performance is not up to the Cloud scale.
Log-based approach [41]
• Log consumers can process, analyze, and correlate the emitted log records effectively and efficiently.
• Not able to handle forensic timeline analysis, log review, log correlation, security visualization and policy monitoring.
Log-based model [42]
• Checks the activities on the SaaS cloud without the support of the CSPs.
• Creates a customized log for both CSCs and the CSP.
• Data and logging for multiple users may be co-located across spread hosts and data centres.
Log-based approach for Cloud CRM [43]
• Saves time for uploading, backup, and maintenance purposes.
• Focuses on log centralization, log-record retention and maintenance, and integrity preservation at the same time.
• Not able to handle log management for big data and large-scale organizations.
• Results analysis is possible only after performing live attacks, and security requirements grow with large numbers of datasets.
Mobile forensic investigation for remnant data [45]
• Identifies remnant data from GDrive data and applications for all CSCs.
• Restricted to GDrive on Android smartphones only.
• Requires authenticated credentials from users to decrypt the file on the server.
• Restricted to their private cloud only.

Collection and Preservation stages
Cloud and Edge computing security challenges [51]
• Considers Cloud and Edge as revolutionary technologies in terms of security, trust and client management.
• Cloud and Edge computing still face several issues with customer-centred privacy, customer authentication systems, and customer service-level agreements.
TrustCloud [52]
• Considers security, privacy, accountability and suitability as the main trust parameters for the cloud.
• Detective rather than preventive approach to increasing accountability.
• Restricted to a logging mechanism for the data layer of accountability, leaving out the workflow, system, and policies/laws-and-regulations layers.
Cloud Forensic Readiness Framework [54]
• Time, money and effort are reduced by thorough digital investigation and collection of data before a crime occurs.
• Privacy and forensic techniques were not considered in the initial proposed framework.
• The readiness framework has been validated through a single organization only.
Secure provenance scheme [58]
• Provides provenance tracking of disputed documents, unforgeability of the provenance record, and fine-grained access control on documents.
• Sensitive documents stored on the Cloud remain confidential, and authenticity towards Cloud servers is handled anonymously.
• Security of the proposed scheme has been validated through assumptions instead of real-life experimentation.

VM snapshots [18]
• Eucalyptus and OpenStack software are used to automatically capture a snapshot of a running VM for evidence collection.
• Only a single VM has been considered at the moment.
Secure Live Job Migration Framework for multiple VMs [61]
• Load balancing, fault management, recovery from host failure, system maintenance and resource sharing are effectively considered during the VM migration process.
• A virtual trusted platform module is used to provide trusted computing for multiple VMs in a single snapshot.
• The integrity of the proposed approach has been verified using various assumptions about the attacker's physical access, the VM communication mechanism and the virtual trusted platform module.

Examination and Analysis Stages
Pros and cons of digital forensics in Clouds [39]
• Availability of massive storage for evidence in clouds.
• Inbuilt mechanism for hash authentication of disk drives.
• Data acquisition is the most significant issue.
• Temporary files, registry entries, and memory may be very difficult to access in cloud data centers.

Reconstruction Stage
LVM2-based system snapshots [62]
• Provides read-write functionality by default, compared to the read-only snapshots of LVM1.
• The snapshot volume can be mounted by experimental programs to change files on that volume.
• Threshold values have not been detected correctly.
Presentation/Reporting Stage
Digital Forensic Readiness [63]
• Uses remote and centralized logging to improve the authenticity of archived data in order to overcome jurisdictional issues in Clouds.
• The proposed model works for Windows only.
• Requires protecting the operating system or hypervisor and the central log server.

Edge Forensics: Challenges with Cloud Forensics and Existing Solutions
Edge computing extends Cloud computing by providing computing services closer to end-users at the Edge of the network. The Edge paradigm addresses the problem of high latency for delay-sensitive services and of malicious programs within the Cloud computing paradigm [65]. Cloud and Edge computing have established their power and usefulness through features such as low cost, reliability, location sensitivity, wireless communication, and local access. These same features, however, pose new problems for privacy, security, and legal practitioners. There is no doubt that Cloud computing brings many challenges to digital forensic investigations conducted in the Cloud. Given these challenges, it is vital to analyze and understand the impact of Cloud computing on forensic investigation and to identify the various methods, tools and techniques used to address them. We decided to focus on both paradigms (Cloud and Edge), as the challenges of these technologies are almost the same.
The authors of [26] proposed a log-assuring secrecy scheme for the Edge-Cloud environment. In this approach, the investigator attempts to recover the log files, as shown in Figure 2. Initially, all user logs are stored on the Edge nodes; after segmentation, the logs are stored in a distributed storage system and on a centrally located Cloud node. In the first phase, the attacker tries to destroy or steal data on an Edge node rather than in the central or distributed storage system. In the second phase, the attacker attacks the Cloud node to steal the data. Meanwhile, upon suspicion, the Edge-Cloud service provider asks the investigator to examine the data and preserve it. The investigator can recover the data from the central or distributed storage system, but not from the Edge node destroyed or compromised by the attacker; with the help of the MIC network and index files, the data can finally be recovered from the distributed storage clusters. An essential advantage of coupling the Edge with the Cloud is achieving great processing power under highly concurrent access, real-time guarantees, mobility support, and data persistence. Cloud computing's elastic storage and extension capabilities make it possible to address the demands of scalability, persistence, and trust. Edge computing deals with services hosted on nodes held by Internet Service Providers (ISPs) and other smaller servers available near users in the network. The Nano Datacenters (NaDa) model [66] is one example of the current trend of pushing complex network operations, traditionally performed in a network-centric fashion, out to the Edge.
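The segment-and-replicate idea behind the recovery in [26] can be sketched as follows. This is a deliberately simplified model: plain dicts stand in for the edge node, the distributed storage cluster and the central cloud node, and a sorted list of segment IDs stands in for the index files; the actual scheme's MIC network and secrecy mechanisms are not modeled:

```python
def segment(log_lines, n_segments):
    """Split a user log into fixed segments (segment ID -> lines)."""
    segs = {i: [] for i in range(n_segments)}
    for i, line in enumerate(log_lines):
        segs[i % n_segments].append(line)
    return segs

def replicate(segs):
    """Store every segment on three tiers: the edge node, the
    distributed storage cluster, and the central cloud node."""
    return dict(segs), dict(segs), dict(segs)

def recover(index, *stores):
    """Walk the segment index and take each segment from the first
    store that still holds it, mimicking recovery after an attack."""
    recovered = {}
    for seg_id in index:
        for store in stores:
            if seg_id in store:
                recovered[seg_id] = store[seg_id]
                break
    return recovered

log = [f"event-{i}" for i in range(6)]
segs = segment(log, 3)
edge, distributed, cloud = replicate(segs)
edge.clear()              # attacker destroys the edge-node copies
index = sorted(segs)      # index file listing all segment IDs
restored = recover(index, edge, distributed, cloud)
print(restored == segs)   # True: log rebuilt from the replicas
```

The sketch makes the forensic point of the scheme concrete: destroying the edge copy alone does not destroy the evidence, because the index still resolves every segment against a surviving replica tier.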
Some of the suggested Cloud computing solutions, such as data integrity schemes, searchable homomorphic encryption, and test-data retrieval, find only limited application on a Cloud/Edge platform due to the high volume of data and work migration between the Edge nodes and the Cloud [67][68][69]. Another key security feature is access control, which protects the Cloud infrastructure from external attacks. However, as the Edge nodes are not in a single administrative domain, these nodes might not necessarily fall under one management authority. Most of the challenges associated with Cloud/Edge forensics have already been summarized in Tables 1 and 2. In addition to having conflicting access-control policies, the owners of Cloud nodes may need to delegate their access-management policies to other trusted third parties. Such a delegation model may be open to abuse by an adversary [70].

Forensic Analysis: Tools, Encryption and Files Handling Methods
The term "forensics" refers to the methods that investigators use to solve a crime. Each innovation has its benefits as well as drawbacks. The art of investigating a crime performed with, or involving, a computer is called computer forensics. Computers and electronic devices are developing very fast and are being used in modern crime. Strictly speaking, computer forensic technology can be used to collect and preserve evidence from devices and later introduce it in court. These devices can perform all types of operations, from basic to advanced levels. The forensic investigator is mainly responsible for the examination and preservation of data evidence and must be aware of the techniques, procedures, and processes for handling crime evidence.

Computer Forensic Software Tools
Computer forensic tools are designed to guarantee that the data collected from the computer are precise, accurate, and trustworthy. Due to the different types of computer-based evidence, there is a diverse range of computer forensics tools [71]. The well-known open-source and commercial software tools for digital forensics are tabulated in Table 3. The list consists of their detailed descriptions, characteristics, compatibility with operating systems, and website links.

Hardware Tools for Forensic Analysis in Cloud and Edge Computing
In this section, the various hardware tools/devices used for forensic analysis in Cloud and Edge computing are discussed in detail [70].

Data Recovery Stick
A data recovery stick can quickly recover deleted files from any location, even files that have been permanently deleted from the recycle bin. The investigator can still retrieve deleted files as long as the files' locations have not been overwritten with new data. No installation or additional software is needed.

Phone Recovery and iRecovery Stick
The phone recovery stick includes special recovery software on a USB drive that can recover data from Android-based smartphones and tablets, which can be examined even with a locked screen. It can recover deleted data unless the data have been overwritten. Similarly, the iRecovery Stick contains special examination and recovery software on a USB drive that allows anyone to check data on iPhone and iPod Touch devices; it, too, can recover deleted data unless they have been overwritten.

Project-A-Phone Flex
Project-A-Phone Flex can take high-quality screenshots of any device. The eight-megapixel camera allows the investigator to take clear pictures of each screen on the device so that nothing is missed. It is an easy way to examine a cell phone when the investigator does not have extraction software. The camera connects directly to the investigator's computer to acquire clear images of the phone's screens, and the investigator can also record HD-quality video of the entire examination procedure.

Chatstick
Chatstick scans the entire computer for chat logs from Yahoo, MSN 6.1, 6.2, 7.0, and 7.5, ICQ 1999-2003b, Trillian, Skype, Hello, Miranda, etc. It searches all of these chat clients and creates a report in a simple format so that investigators can see what their relatives, friends, or employees are saying to people online.

Forensic UltraDock Write Blocker v5.5
Digital forensic investigators, technicians, and lawyers who want to safely view, evaluate, or image a disk drive based on CRU, WiebeTech, etc., can opt for the Forensic UltraDock v5.5. It is an easy-to-use, professional-class drive dock that provides a range of host and drive connections to recover and analyze deleted or malfunctioning data.

Dp2c
It is a powerful and compatible tool with built-in data-target triage and a complete disk-imaging capability. Investigators can collect specific data objects such as graphics, documents, and e-mails. The tool is bootable, easy to use, and provides all the functionality an investigator requires to collect the data.

Mobile Field Kit
It includes the Project-a-Phone in a licensed, rugged, portable case, with the DS embedded on a Windows laptop, along with cable and power options. The main advantage of the MFK is that it uses an open system, which allows investigators to install other mobile or computer forensic software on the system for use in the field. The investigator can choose between a standard Windows laptop and a ruggedized laptop. This allows manual driver updates to support most new phones without waiting for a software update. Further, the MFK is a handheld cellular exploitation device with a device-capture driver embedded in the hardware. The hardware comes in a rugged case and has a touchscreen for the mobile extraction unit. The MFK allows logical and physical acquisition of handheld/mobile devices with built-in capabilities.

Strong Holdbags and Tabletop
Strong Holdbags are used to block signals from wireless networks, cell towers, and other signal sources that can threaten digital evidence. They do not permit incoming communications to alter the evidence or deliver a wiping order. Similarly, Tabletop Stronghold tents block wireless signals from wireless networks, cell towers, and other signal sources.

Forensic Duplicator
The Forensic Duplicator integrates multiple devices into one simple unit for better performance. Field units combine a forensic imaging application (Linux) with virtual drive emulation and S.M.A.R.T. diagnostic tests. It is used as a platform to perform cellphone/tablet data extraction and analysis, complete computer forensic analysis, and forensic triage data collection. It is particularly fast with E01/Ex01 formats, runs at full compression, and uses multiple threads across multiple sessions. Forensic duplicator units are available with a dual-boot option.

Effects of Encryption Methods on Cloud Forensic Analysis
Applications running on the Cloud must balance various factors, including data size, bandwidth, load balancing, and security. The main obstacles to Cloud adoption are privacy and data security, because data owners and CSPs are not within the same trusted domain [72]. A range of technologies and cybersecurity algorithms is used to protect and maintain privacy in Clouds. These techniques mainly include encryption, limited service and strict access control, data backup, and facilitated data recovery [73]. To ensure security and data privacy from a Cloud service provider, an encryption mechanism must encrypt the data before they are uploaded to the server.
The key technique is public-key encryption, based on public-key cryptographic algorithms. In traditional public-key schemes, the data owner encrypts the data with the user's public key before uploading them to the Cloud. When a data user wants to access the data in the Cloud, the Cloud returns the ciphertext, which the user decrypts with the corresponding private key. Attribute-based encryption (ABE) [74] is a newer approach that works on one-to-many relations and is also called fuzzy encryption.
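As a minimal illustration of the public-key flow described above, the following toy textbook RSA example uses tiny fixed primes and no padding. It is for illustration only; real deployments rely on vetted cryptographic libraries with padding schemes such as OAEP.

```python
# Toy textbook RSA: tiny fixed primes, no padding. Never use in practice.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse)

m = 42                         # message encoded as an integer < n
c = pow(m, e, n)               # data owner encrypts with the public key
assert pow(c, d, n) == m       # data user decrypts with the private key
```

The owner needs only the public pair (n, e) to encrypt; only the holder of d can recover the plaintext, which is exactly the property the Cloud upload scenario relies on.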
The authors of [75] discussed the encrypted search mechanism in Cloud storage, shown in Figure 3. Data owners and data users follow different procedures. A data owner encrypts the data files and builds an index file corresponding to the encryption algorithm; the encrypted file and the index file are then stored in the Cloud storage system. Only a person or data user holding the decryption key can retrieve and access the original file shared by the data owner. If an attacker tries to access the encrypted file, he/she must have the relevant decryption key, shared either by the data owner or by the CSP. Android is one of the leading smartphone operating systems globally, so it is essential to know about Android forensics. In addition, chat messaging is becoming a popular medium of communication among consumers, especially young people. The authors of [76] extensively analyzed encrypted instant messaging (IM) applications such as WhatsApp, Viber, Telegram, and WeChat. The primary purpose of the forensic analysis of these applications is their encrypted database and media files. Data availability is of key interest to law enforcement agencies for acquisition and analysis. Sometimes the database of these applications can be provided to forensic investigators only in encrypted form; in some cases, however, the encryption key may be available in the application itself. These popular instant messaging applications make extensive use of encryption to secure their chat database files.
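The index-based encrypted search idea can be sketched with a keyword-token index: the data owner derives deterministic HMAC tokens from keywords under a secret key, and the Cloud server matches a user-supplied trapdoor against the index without ever seeing plaintext keywords. This is a generic sketch of searchable symmetric encryption, not the exact scheme of [75]; the key and file names are hypothetical.

```python
import hmac, hashlib

KEY = b"owner-secret-key"  # shared only with authorized data users

def token(keyword):
    """Deterministic keyword trapdoor; the server never sees plaintext."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(files):
    """Data owner: map each keyword token to the encrypted-file IDs."""
    index = {}
    for file_id, words in files.items():
        for w in words:
            index.setdefault(token(w), set()).add(file_id)
    return index

def search(index, keyword):
    """Cloud server: match the trapdoor without learning the keyword."""
    return index.get(token(keyword), set())

index = build_index({"doc1.enc": ["invoice", "cloud"],
                     "doc2.enc": ["forensics", "cloud"]})
assert search(index, "cloud") == {"doc1.enc", "doc2.enc"}
assert search(index, "missing") == set()
```

An attacker without KEY cannot form valid trapdoors, which mirrors the point made above: without the key shared by the owner or CSP, the stored material is inaccessible.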
The encrypted archived files for Viber and WeChat can be downloaded from a rooted handset. The WeChat database can be decrypted with an encryption key derived from the IMEI number and the phone's unique identifier. Further, WhatsApp messages can be retrieved from unrooted devices, and archive files for WeChat can also be collected from an unrooted device by downgrading the application to a lower version. Telegram is the most secure of these IM applications: even if the database files are retrieved, the encrypted chat database files cannot be decrypted without the encryption key [76].
Forensic investigators are primarily concerned with the security and privacy of sensitive e-mail data. Encrypted e-mail seems to be a feasible option for providing security but is limited in its operational functionality. Public-key Encryption with Keyword Search (PEKS) is a popular scheme [77] combining security with user-friendly operation, and it plays an essential role in searching encrypted e-mail on the Cloud server.
The author of [78] proposed "public-key multikeyword searchable encryption with hidden structures" (PMSEHS). It enables e-mail receivers to conduct multikeyword and Boolean searches on massive encrypted e-mail databases without disclosing any additional data to the Cloud server. The author of [79] explores the potential for retrieving remnant data residues, based on a study of volatile memory, as preliminary research for criminal investigations involving the pCloud software. In pCloud, an investigator can download, import, and open files in the Cloud in three different ways. pCloud is a private Cloud service designed specifically for file storage; it presents itself as a user-friendly app and is available for smartphones and computer systems, running on operating systems such as Mac OS X, iOS, Android, Windows, and Linux. It is installed on top of the local disk space as a stable virtual drive called pCloud.
The author of [80] proposed "forward-secure puncturable identity-based encryption" (fs-PIBE), which allows an e-mail user to control decryption capability precisely: the user retains the ability to decrypt e-mails not yet received while revoking it for those already received. It therefore offers more practical forward secrecy than schemes in which the decryption capability for received and unreceived encrypted e-mails is withdrawn simultaneously. With an fs-PIBE construct that has constant-size ciphertext and provable security in the standard model, the authors build and accelerate an encrypted Cloud e-mail system. Furthermore, they extended the proposed fs-PIBE scheme to support end-to-end encryption and outsourced decryption, improving the protection and reliability of the proposed mechanism.

Forensic Methods for Handling Tampered Sound Files
Audio forensics refers to assessing and investigating audio recordings to confirm the genuineness of audio evidence in court. Audio forensic algorithms are used to prove the authenticity of audio evidence, improve speech intelligibility and the audibility of low-level sounds, and enhance audio recording and documentation. In addition, audio forensics can be used to identify the speaker, clarify the conversation, and understand audio evidence, such as crime or accident scenes [81]. The lack of widely accessible forensic datasets for performance assessment and benchmarking of modern multimedia forensic algorithms is one of the challenges in multimedia forensics.
A high-quality audio playback system, a waveform display system, and a spectrographic display system are the primary instruments in modern audio forensic research; standard desktop or notebook computers are typically used to provide these functions [82]. External non-audio knowledge about a problem, the suspects, the circumstances, and the investigator's suspicions are often sources of bias in audio forensic examinations. An individual requesting an audio forensic investigation may wish to discuss the accused's arrest history, explain the physical evidence found at the crime scene, and refer to the requested finding as a "case". Typically, an audio forensic investigation begins with a request from a law enforcement agent or an attorney, and the applicant may not be acquainted with audio forensics techniques.
It is highly recommended to start with the original recording medium and, if possible, make certified digital working copies before any enhancement, commentary, or interpretation of the recording begins. The original recording system may allow investigators to retrieve original proprietary data, device settings, metadata, timestamps, and other recording settings. If the device has special connectors, cables, or a power supply, these should also be requested. Some recording devices have volatile memories: the recorded signal is lost if power is lost. This should be taken into account to ensure that the memory is protected from potential power loss. The sender should also be asked to back up the evidence with "write protection" mode and other settings that mechanically prevent overwriting.
Recordings are always subject to accidental changes or intentional tampering, and these changes may not be noticeable. The court must have confidence in the audio evidence's accuracy and integrity. Audio forensic examiners should observe chain-of-custody protocols, avoid allowing for unanticipated improvements in the actual evidence, and be aware of signs of alteration.
The authors of [83] developed a digital audio forensic dataset to help analyze audio forensic algorithms. They documented the data-gathering environments, microphones, speakers, languages, and notation in detail. Established tamper-detection methods use artefacts caused by recording instruments, codecs, and acoustic settings. To create the manipulated audio dataset, a portion of the reference audio is replaced with an audio recording captured with a related microphone in the same recording session. The collected dataset is used to detect tampering in the audio data files. The study also examines the inter-range spread of microphone artefacts; recordings from five different microphone models are considered, and each microphone category has at least two similar microphones and versions [83].
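As a crude, hypothetical illustration of splice detection (not the microphone-artefact method of [83]), one can flag abrupt jumps in per-frame signal energy, which sometimes betray an inserted segment. The frame size and ratio threshold below are arbitrary choices for the sketch.

```python
import math

def frame_rms(samples, frame=256):
    """Per-frame RMS energy of a mono signal."""
    return [math.sqrt(sum(s * s for s in samples[i:i + frame]) / frame)
            for i in range(0, len(samples) - frame + 1, frame)]

def energy_jumps(samples, frame=256, ratio=4.0):
    """Flag frame boundaries where RMS jumps by more than `ratio`,
    a crude hint of a possible splice point."""
    rms = frame_rms(samples, frame)
    eps = 1e-9
    return [i for i in range(1, len(rms))
            if max(rms[i], rms[i - 1]) / (min(rms[i], rms[i - 1]) + eps) > ratio]

# A quiet recording with a loud inserted segment spanning frames 4-5:
quiet = [0.01] * 1024
loud = [0.5] * 512
signal = quiet + loud + quiet
jumps = energy_jumps(signal, frame=256)
assert jumps == [4, 6]   # boundaries where the inserted segment starts/ends
```

Real tamper detection relies on far subtler cues (codec traces, microphone response, acoustic environment), since a careful forger can match loudness across the splice.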
The authors of [84] proposed a new paradigm of emerging multimedia data processing approaches for multiple multimedia sources, focusing on speech, video, pictures, handwriting, and text. The toolkit assembles methods and techniques for gathering information from multiple multimedia sources that can be used in various contexts in court. Additionally, social media such as Twitter and Facebook supply multimedia data in new formats beyond those described above. The referenced data can further be provided to investigators to check for coincidences and variations, including more symbolic features based on individuals, biometric features or objects, events, or data features.
Transient techniques such as spectral analysis, delta modulation, wavelet scattering, and zero-crossing information are used to develop speech and audio representation schemes. For example, traditional and structural-similarity-based methods of example-based voice, speaker, and audio recognition can compare aspect and overall similarity using query-based, keyword, and phrase-based retrieval schemes. The resulting scheme is computationally simpler and compresses data effectively.
The author of [85] considers a method to verify a speech recording against a reference in two steps: align the two recordings and then classify each question frame as matching or mismatched. The proposed sub-sequence alignment method is based on the Needleman-Wunsch algorithm and is shown to improve on dynamic time warping when simple manipulation operations are performed. In addition, the authors explore several binary classification models based on LSTM and transformer architectures to verify content at the frame level. Through extensive experiments on recordings of Donald Trump's speeches, they reliably detected various types and durations of audio tampering.
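The Needleman-Wunsch dynamic program underlying the alignment step can be sketched in a few lines. The scores below are the standard textbook match/mismatch/gap values, not those of [85], and the example operates on character strings rather than audio frames.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic dynamic program."""
    n, m = len(a), len(b)
    # F[i][j] = best score aligning a[:i] with b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            F[i][j] = max(diag,            # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,   # gap in b
                          F[i][j - 1] + gap)   # gap in a
    return F[n][m]

assert needleman_wunsch("GATTACA", "GATTACA") == 7
assert needleman_wunsch("GATTACA", "GCATGCU") == 0
```

For recording verification, the same recurrence would be applied to sequences of frame features, with the substitution score replaced by a frame-similarity measure.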

Forensic Methods for Handling Image Files
Multimedia files play a crucial role in supporting evidence analysis by serving as digital guides or evidence of a crime. Traditional file recovery software detects and reconstructs parts of a file using markers such as headers and footers. The Joint Photographic Experts Group (JPEG) file format is widely used in computers, mobile phones, the Internet, multimedia applications, and digital cameras due to its advanced features [86]. File recovery is the process of recovering deleted or damaged files from digital storage when file metadata are available [87].
Digital forensics applies file retrieval methods to bit-copy images of disk drives. File carving emphasizes the retrieval and preservation of data fragments without relying on file-system metadata, which distinguishes it from conventional recovery. The authors of [88] adopt carving techniques as their base work because of their comprehensive approach to retrieving JPEG files. The accuracy and authenticity of a digital image are often challenging to establish due to the advent of advanced image processing and manipulation techniques, and various digital image forensics approaches have been proposed over the last decade to detect digital image forgery.
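The header/footer carving idea for JPEG files can be sketched by scanning raw bytes for the SOI (FF D8 FF) and EOI (FF D9) markers. Real carvers must additionally handle embedded thumbnails, fragmentation, and marker-like bytes inside entropy-coded data, all of which this simplified sketch ignores.

```python
def carve_jpegs(blob):
    """Scan a raw byte image for JPEG start (FF D8 FF) and end (FF D9)
    markers and return the carved candidate files."""
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"
    carved, pos = [], 0
    while True:
        start = blob.find(SOI, pos)
        if start == -1:
            break
        end = blob.find(EOI, start + len(SOI))
        if end == -1:
            break
        carved.append(blob[start:end + len(EOI)])
        pos = end + len(EOI)
    return carved

# Two fake JPEG fragments embedded in unallocated-space filler:
disk = (b"\x00" * 8 + b"\xff\xd8\xff\xe0AAAA\xff\xd9" +
        b"\x00" * 4 + b"\xff\xd8\xff\xe1BB\xff\xd9" + b"\x00" * 8)
files = carve_jpegs(disk)
assert len(files) == 2
assert all(f.startswith(b"\xff\xd8\xff") and f.endswith(b"\xff\xd9")
           for f in files)
```

Because carving needs only the marker bytes, it works on unallocated space where the file-system metadata for the deleted images is already gone.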
The authors of [89] provide a brief overview of passive digital image forensics methods to provide a broad innovation to recent digital image forensic security advances. Various traces can be used in digital image forensics to differentiate manipulated images from natural ones. Furthermore, the authors divided these traces into three categories: traces leftover from image acquisition, traces leftover from image storage, and traces leftover from image editing. For each trace, the relevant digital image forensics approach is briefly reviewed by clarifying the relevant issues.
The authors of [90] proposed a model for efficiently and accurately supporting the forensic process by classifying bulk JPEG images generated by data carving or other means. JPEG images with errors or corrupted data fall into the first class, JPEG images with lattice fields into the second, and JPEG images without corruption or forgery into the third. The proposed model allows investigators to label JPEG images automatically, minimizing the time spent on the whole digital forensic procedure.
Locating the source of multimedia data, i.e., identifying the model, make, or individual device that captured the media material, is a common problem in forensic analysis. As opposed to sensor-noise-based approaches, source linkage based on media-element header detail allows for more direct automation. The header detail consists of metadata such as Exchangeable image file format (EXIF) tags and JPEG algorithm parameterization.
The authors of [91] researched the factors affecting the metadata and JPEG encoder settings of images taken by Apple iPhone cameras. The diversity of image metadata is greatly increased compared to the ecosystem of conventional cameras, making source-device identification even more difficult than for traditional digital cameras. The authors demonstrated the whole process in four stages. First, a dataset of more than fifty thousand images from Apple devices was collected from Flickr. Second, the EXIF metadata of Apple devices change over time. Third, the changes are less tied to the original Apple hardware and more associated with version changes in the iPhone operating system, iOS. Finally, automatic classifier training on iOS versions outperforms training on hardware platforms.
The author of [92] used a Convolutional Neural Network (CNN) to decide which images were tampered with and to look for image irregularities such as identical patches of pixels. This dataset, in particular, includes duplicate images created using the copy-move technique. The authors mainly focus on different types of image-manipulation analysis methods, such as:
1. Splicing detection based on inconsistencies in geometric perspective; 2. Copy-move forgery detection.
Copy-move forgery is a form of image tampering in which a portion of an image is copied and pasted elsewhere in the same image, usually to mask unwanted content. The aim of detecting each such transfer is to locate picture regions that are identical or very similar. Because the copied bits come from the same image, their characteristics (e.g., noise and colour) are consistent with the rest of the image, which means that forensic approaches that look only for statistical inconsistencies will not detect such an attack.
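A naive version of copy-move detection hashes every overlapping block of the image and reports blocks that occur more than once. Practical detectors use robust features (e.g., DCT coefficients) rather than exact pixel matches, so this is only an illustrative sketch on a tiny grayscale image.

```python
def duplicate_blocks(img, bs=2):
    """Group every bs x bs block of a grayscale image (list of rows)
    by pixel content and return groups that occur more than once."""
    seen = {}
    h, w = len(img), len(img[0])
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            block = tuple(tuple(img[y + dy][x + dx] for dx in range(bs))
                          for dy in range(bs))
            seen.setdefault(block, []).append((y, x))
    return [coords for coords in seen.values() if len(coords) > 1]

# A distinctive 2x2 patch pasted twice into an otherwise flat image:
img = [[0, 0, 0, 0, 0, 0],
       [0, 9, 8, 0, 9, 8],
       [0, 7, 6, 0, 7, 6],
       [0, 0, 0, 0, 0, 0]]
dups = duplicate_blocks(img)
assert [(1, 1), (1, 4)] in dups   # the pasted patch is found in two places
```

Exact matching breaks down as soon as the pasted region is recompressed or slightly scaled, which is why production detectors quantize block features before matching.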
The authors of [93] suggested a part-level middle-out learning technique for structurally efficient classification to detect double compression. The authors first show that single- and double-compressed data with different JPEG coder settings form a limited number of coherent clusters in a feature-space representation. They also visualized the behavior of several well-known Benford-based features. Finally, the proposed model was applied, within the feature-engineering family, to the double JPEG compression detection problem. Compared to similar strategies in this family, the proposed approach has lower complexity but comparable efficiency.
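The intuition behind Benford-based features is that the first-digit frequencies of DCT-like coefficients in singly compressed images approximately follow Benford's law, while recompression perturbs them. A minimal first-digit feature extractor looks like this; the input is synthetic exponentially spread data standing in for real DCT coefficients.

```python
import math

def first_digit_freqs(values):
    """Empirical frequency of leading digits 1-9 (zeros are skipped)."""
    counts = {d: 0 for d in range(1, 10)}
    total = 0
    for v in values:
        v = abs(v)
        if v == 0:
            continue
        counts[int(str(v)[0])] += 1
        total += 1
    return {d: c / total for d, c in counts.items()}

def benford(d):
    """Benford's law: P(d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

# Exponentially spread synthetic "coefficients" behave Benford-like:
values = [int(1.7 ** k) for k in range(2, 40)]
freqs = first_digit_freqs(values)
assert max(freqs, key=freqs.get) == 1   # digit 1 dominates, as Benford predicts
```

A detector would compare such empirical frequencies against the Benford curve (or feed them to a classifier); a marked deviation is a hint of double compression.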

Forensic Methods for Handling Image Files with Steganography
Steganalysis and steganography are two distinct techniques of cybersecurity and digital investigation. Steganography seeks to hide messages from the naked eye, while steganalysis aims to confirm their existence or retrieve the embedded data. Both have recently received considerable attention, mainly because they have been used by law enforcement. Steganography is an old technique whose name derives from the Greek terms steganos (covered) and graphia (writing). Since cryptography is prohibited by law in many countries, hiding messages is very popular these days [94]. Steganography is the process of hiding a message, audio, image, or video by embedding it in another picture, audio file, news item, or video. It is therefore essential to understand how messages can be embedded in a digital medium such as a digital image. Over the years, many robust and powerful methods of steganography and steganalysis have been demonstrated in the literature.
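The simplest embedding technique, least-significant-bit (LSB) substitution, illustrates how a message can be hidden in pixel values with visually negligible change. This is a generic textbook sketch, not a specific scheme from the literature surveyed here.

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the payload")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite only the lowest bit
    return stego

def extract(pixels, length):
    """Recover `length` bytes from the least-significant bits."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = list(range(100, 228))          # 128 grayscale pixel values
stego = embed(cover, b"hidden")
assert extract(stego, 6) == b"hidden"
# Each pixel changes by at most 1, so the alteration is imperceptible:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Steganalysis exploits exactly the statistical footprint such embedding leaves in the LSB plane, which is why the more advanced JPEG-domain schemes discussed below were developed.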
The author of [94] used various steganography techniques to identify confidential data in digital images. Steganography is commonly used for watermarking and fingerprinting to protect Intellectual Property Rights (IPR). The watermark is embedded in the host media so that it cannot be deleted without damaging the host media. An invisible data monogram on a watermarked object documents the actual owner of the data to validate copyright protection [95]. In practice, all digital file formats can carry a steganographic mark in their redundant data.
Many practical steganography applications enhance the power of image search engines and smart identity cards, embedding information about people in their photographs and in content under copyright control. Other important applications include television transmission, video-audio synchronization, the secure movement of corporate confidential information, and TCP/IP packets, where specific data are embedded in the image to persistently track the network traffic of specific handlers. JSteg [96], F5 [97], Outguess [98], MB [99], and YASS [100] are the major JPEG steganography approaches. In addition, the authors of [95] classify steganographic systems, which embed invisible data into image documents, into the following categories. Without understanding the steganographic algorithm, the stego content can be extracted in an anti-forensic way using essential image processing functions. The authors of [105] proposed the Persistent Stego Incident Response System (PSIRS) anti-stego algorithm, which removes stego content from images and videos without affecting critical visual quality.

1. Military personnel use steganography as a general means of confidential communication;
2. National Security Agencies (NSAs) use steganography to transmit confidential messages inside and outside the agency;
3. Hiding people's details in their photos on smart ID cards;
4. Corporate and industry communication is monitored for protection and authenticity, since unreliable communication may often lead to serious data loss;
5. Watermarking is used for copyright notation;
6. Radar systems and remote sensing;
10. Multimodal biometric data, etc.;
11. Medical science uses image steganography in medical images used for diagnosis, such as CT and MRI.
There are many examples of criminals and terrorists using steganography for communication. Here are some real-world examples:
1. U.S. officials and several articles claim that al-Qaeda used steganography to plan the "9/11 attacks" [109]. Later, in 2012, an al-Qaeda member was arrested in Berlin with a chip containing video files hiding data via steganography.
2. "Operation Twins" broke up an international paedophile ring known as the "Shadow Brotherhood" [110].
3. Purdue University reported that several computers had been found with information relating to financial fraud and a device that hid data within child pornography [111].
4. The Federal Bureau of Investigation (FBI) stated that 10 Russian spies in the United States used steganography and that confidential data were leaked from the United States to Moscow [112].
A good steganography technique or tool must meet the basic requirements of a steganography system, including capacity, efficiency, imperceptibility, and security. In addition, reversibility, encryption, computational complexity, and other requirements make for better technology. Multimedia, archives, networks, Skype, medical videos, and DNA are some of the latest steganography carriers [108]. The author of [113] demonstrates the classification of steganography into technical and non-technical steganography.
Different types of linguistic and technical steganography are discussed, and the output of different image steganography methods is then evaluated by PSNR, MSE, robustness, and the domain-insertion technique. Many of the current algorithms have both benefits and drawbacks. The latest steganography methods are also covered in detail, with their embedding algorithms and extraction processes. In some steganography algorithms, data hiding and payload capabilities distort both the embedded data and the original image, and different algorithms have different payload capacities. It is now essential to develop an efficient and effective infrastructure that provides high payload capacity, data embedding and restoration without distortion, improved protection and accuracy, and resistance to attackers.

Open Problems
The literature review above covers in detail the challenges that various authors have identified and the solutions proposed to address them. However, some challenges remain open problems, as they still need to be addressed or require real-life solutions. These issues, as discussed by the authors, are:
1. "Reliance on CSPs" to obtain information as evidence in the Cloud; registry entries, temporary files, and memory may be difficult to obtain when Cloud data centers cannot be accessed due to virtualization [38,39];
9. Cybercrime is carried out without regard to time or place. Information services or customer data may be spread across many sites or even continents. This creates procedural ambiguity and controversy in governmental information-security oversight, lengthens the forensic timeline, and complicates the process. Furthermore, jurisdictional questions relating to users' physical location are hazy due to virtualization technology and must not be overlooked [40];
10. Depending on the Cloud network, crime can span various realms, from electronic communities, pornography websites, phishing, and copyright piracy to e-commerce billing. Privacy problems arise when sensitive information is destroyed or disrupted, legally or by some other means [40];
11. Electronic data include network evidence. Most documentation, such as Cloud storage log sheets, e-mail logs of e-commerce bills, or digital signatures, means nothing by itself until evaluated and tested by specific authentication. Since electronic evidence is easily tampered with, it is critical to guard against the factors that compromise the legal impact of these types of forensic evidence [41];
12. How to keep logs secure, synchronized, and effective as evidence [42];
13. Improper activities at the network, client, and server end [43];
14. Distributed Denial of Service (DDoS) and malware attacks [43];
15. Dependencies on service providers [43];
16. In the public Cloud deployment model, consumers do not have physical access to the infrastructure, and their data privacy is much lower than in the private Cloud [54];
17. Client computers in Cloud environments can provide only minimal evidence because the real data are stored on the CSP side [54];
18. Timestamp matching of different file-system activities during the investigation can be complex, as clients and Cloud storage servers may reside in different time zones [54].
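The timestamp-matching issue across time zones is commonly mitigated by normalizing all recorded times to UTC once each party's clock offset is known. A minimal sketch follows; the timestamps and offsets are hypothetical examples.

```python
from datetime import datetime, timezone, timedelta

def to_utc(local_str, utc_offset_hours):
    """Normalize a local file-system timestamp to UTC, given the
    known UTC offset of the client or server clock."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    local = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
    return local.replace(tzinfo=tz).astimezone(timezone.utc)

# The same event logged by a client at UTC-5 and a server at UTC+1
# matches once both timestamps are normalized:
client = to_utc("2021-03-01 09:00:00", -5)
server = to_utc("2021-03-01 15:00:00", 1)
assert client == server
```

In practice clock skew, not just time-zone offsets, must also be estimated and corrected before file-system activities from different machines can be placed on one timeline.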
In addition to these challenges, the encryption and other security measures applied by Cloud users to protect their data and communications in the Cloud [114] make it difficult for investigators to clarify details and reconstruct the crime. This study's main aim is to highlight the different technical and legal challenges along with the open problems. Further, the study highlights the various software and hardware tools for digital forensics in the Cloud/Edge environment and the different file-handling techniques related to forensic analysis.

Nature and Scope of Challenges: Technical and Legal
Traditional methods of digital forensic investigation are not compatible with Cloud/Edge computing features. They differ in various ways and often appear to contradict each other; the challenges are apparent and can be stated concisely. These challenges are broadly classified as technical and legal [23][24][25][26][27]: (1) difficulty in dealing with a variety of data stored in various locations (and under various authorities); (2) limitations in accessing the Cloud/Edge resources needed to build strong evidence; (3) maintaining the integrity of evidence; (4) the volume of data in the Cloud/Edge, which cannot easily be controlled and processed without risking contamination of, or damage to, the authenticity of the evidence. By its nature, digital evidence is more complex than physical evidence, which calls for a new criminal procedure to regulate the collection of digital evidence [26,32].

Cloud Computing Operational Characteristics
Cloud computing features such as multi-tenancy and sharing, virtualization, accessibility, and network distribution have created new forensic investigation challenges. For example, the Cloud's shared environment and the widespread use of virtualization technology have introduced new difficulties at various stages of the digital forensic investigation process, which has become tedious and in some cases infeasible. Cloud computing's reliance on remote storage technology, multidisciplinary technology and virtualization makes forensic research with traditional tools and methods difficult [115]. The scale of Cloud computing resources enabled by virtualization technology [116,117] has introduced many complexities and requirements into the Cloud response space, including new requirements for inspecting virtual machines and tracking multiple operating-system domains [52]. Dealing with the enormous amount of data distributed across multiple domains regularly complicates the problem. Additionally, data in the Cloud are highly elastic, resulting in a variety of technical challenges.

Nature of Cloud-and Edge-Based Evidence
By its nature, digital evidence is fragile and can be easily altered or destroyed if mishandled [118]. The dynamic nature of digital evidence in the Cloud/Edge compounds these problems by bringing new challenges. Cloud/Edge systems are, by design, often live and active, which raises issues of data volatility and real-time reporting [52].
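Because digital evidence is so easily altered, standard practice is to fix its state with a cryptographic digest at acquisition time and re-verify it before every analysis step; any change to the acquired image breaks the match. A minimal sketch of this routine (function names are illustrative, not taken from the surveyed tools):

```python
import hashlib

def evidence_digest(path: str, algorithm: str = "sha256") -> str:
    """Compute a cryptographic digest of an evidence file in fixed-size chunks,
    so arbitrarily large images can be hashed without loading them into memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded_digest: str) -> bool:
    """Re-hash the file and compare against the digest recorded at acquisition.
    A mismatch indicates the evidence was modified after acquisition."""
    return evidence_digest(path) == recorded_digest
```

The digest recorded at acquisition, together with who computed it and when, is what documents the chain of custody for the item; in live Cloud/Edge systems the difficulty is precisely that the underlying data may keep changing, so there may be no single stable state to fix.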

Identifying Cloud Suspects
The complexity associated with the Cloud's features makes identifying the real suspects within the Cloud space a daunting task; the absence of seizable physical evidence (such as the suspect's computer) makes it difficult to bind the suspect to testimony.

Encryption and Other Security Issues
As customer data can reside in unspecified locations, many organizations use strong encryption to protect their data within the Cloud [75,114]. However, this approach also poses significant challenges to forensic investigators when such data become part of a broader investigation or are considered evidence. The advancement of encryption techniques makes it difficult for technical investigators to clarify details and reconstruct crime incidents.
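When such data cannot be decrypted, a common triage heuristic is to test whether a blob is statistically indistinguishable from random bytes: well-encrypted data approach the maximum Shannon entropy of 8 bits per byte, while ordinary text sits far lower. A hedged Python sketch of this heuristic (the 7.5 threshold is illustrative, and compressed archives also score high, so entropy alone is not proof of encryption):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 for empty input)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Plain text rarely exceeds ~5 bits/byte; ciphertext and compressed
    # archives approach 8, so a high value flags the blob for closer review.
    return shannon_entropy(data) > threshold
```

Such a test only tells the investigator where decryption effort (or a legal request for keys) should be focused; it recovers nothing by itself.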
Cloud Service Models: Challenges

Different types of Cloud provisioning pose additional challenges to crime investigations. These challenges vary depending on the service infrastructure and the environment. PaaS and SaaS are quite challenging due to their unique architecture, which is more complex in many ways than the traditional computing model. On the other hand, the IaaS model offers some forensic capabilities, as its operating environment is logical and similar to the conventional computing environment [37,119]. Therefore, traditional investigation methods are not feasible within the PaaS and SaaS models, whereas there is an opportunity with the IaaS model [120]. The technical challenges and their implications are summarized in Table 4.
The above challenges mainly concern the general impact of Cloud/Edge computing features on computer forensics. The different types of Cloud services also present distinct challenges and opportunities for digital forensic investigation, as summarized in Table 5.
From Table 5, it is clear that there are many technical and physical barriers to digital forensics in the Cloud/Edge. Some of these barriers can be remedied by applying existing forensic tools and processes in new ways; others will require constructing and implementing new frameworks. In addition, Cloud/Edge forensics faces legal challenges, particularly in cross-border investigations.

Design Parameter Challenges
Distributed Nature and Remote Storage [26,32]: establishing the location and identity of data is part of the investigation process; moreover, identifying Cloud suspects and seizing evidence as part of the preservation process is often not possible.
Elastic Storage and Volume of Data [118]: the scale and size of Cloud deployments affect data collection and identification; moreover, time constraints on identifying data of interest carry a danger of tampering with, or undermining, the evidence's authenticity.
Volatile Storage [5,45]: there is no guarantee that relevant data can be obtained when required.
Volatile Data [5,45]: data changes are permanent, so evidence can easily be destroyed; this raises issues related to data emergence and real-time accountability.
Security Obstacles, Encryption [75,114]: strong encryption and high levels of security are difficult to understand and/or bypass.

Jurisdictional Issues

One of the most common legal barriers in Cloud/Edge forensics concerns jurisdiction. Other legal challenges primarily concern evidence, such as authentication, hearsay, chain of custody, and preservation. In many cases, the CSP operates in several legal jurisdictions, so the data (as evidence) are subject to many legal regimes simultaneously. This is especially difficult when a third-party CSP is involved, i.e., when the CSP uses other CSPs' services in various locations [121]. If data storage and other services are not handled by the CSP itself, issues regarding the actual location and ownership of the data may interfere with the investigation. For this reason, the author of [122] argued that the nature of the Cloud universe presents unique legal challenges for Cloud investigators. Evidence spread across many private domains, breaches of various laws, and inconsistent criminal statutes and regulations are among the many barriers to a lawful investigation.

Lack of International Collaboration
The challenges posed by legal issues are compounded mainly by the lack of international cooperation between the various authorities involved in an investigation. In some cases, there is a significant lack of legal mechanisms for "access to and exchange of data within the country" and a "lack of legislation/regulation and legal advice", both of which affect SLAs and the forensic data-collection process [121]. The absence of a coherent legal framework across all authorities makes it impossible to clearly define CSP obligations in cross-border investigations involving the use of their services. The circumstances under which CSPs can comply with applicable laws and industry standards may vary. Moreover, the lack of harmonized law in some jurisdictions creates opportunities for legal disputes and makes it difficult for investigators to pursue evidence hidden in the Cloud without a valid legal basis. The forensic data-collection process can thus be blocked where no legal framework exists to enforce or assist forensic investigations.

The Requirement for Seizing Evidence (Admissibility of the Evidence)
Admissibility of evidence is a critical requirement in a court of law; evidence obtained illegally or without the required consent may be ruled inadmissible [35]. The process of searching for or seizing evidence usually requires a search warrant. To obtain one, investigators must show that a crime has been committed, that evidence exists, and that there is probable cause that it will be found in the place to be searched. When working with Cloud evidence, meeting these requirements is a challenge, mainly due to restrictions created by jurisdictional issues. Investigators face several difficulties in understanding how digital evidence can change depending on the Cloud service used [121]. Additionally, a forensic investigator may face greater legal challenges when collecting Cloud evidence for civil or other proceedings than for a criminal prosecution [122]. The investigator can even be charged with a criminal offence if the evidence is obtained illegally and without proper authorization.

Paid Service That Requires Protecting Customer Privacy
Cloud computing services mainly come as paid, metered offerings [123], which separates them from free programs and services such as Gmail, Yahoo and LinkedIn. As a result, confidentiality and other legal issues are often discussed and agreed upon in the SLA between the CSP and the client (Choo, 2014). Buyers who pay for such services will not subscribe to any agreement that grants third parties access to their information in the Cloud, or allows an investigation to disclose information that may affect them or damage their reputation. Some of the most common challenges are summarized in Table 6. Certain legal requirements regarding the seizure of evidence and the maintenance of its integrity are very difficult to meet due to Cloud characteristics.

Conclusions and Future Scope
Cloud and Edge computing are of paramount importance in the landscape of computing paradigms. While Cloud and Edge computing are finding great success in medium-sized businesses and large organizations, these paradigms pose particular problems for computer forensics. Traditional standalone computing is steadily being displaced, and the advent of Cloud/Edge computing has brought new technical and legal challenges that were not common in the traditional computing environment. This study examined Cloud/Edge computing systems and their impact on conventional computer forensics by highlighting the related challenges and proposed solutions together. The primary objective has been to establish a deeper understanding of Cloud/Edge computing's impact on computer forensics. A key finding is that this impact differs across the various stages of the investigation process. Further, this study gives a detailed description of the hardware and software tools available for the digital forensic process in Cloud and Edge computing, and examines the effect of encryption methods on Cloud/Edge forensic analysis along with a brief introduction to cryptography. Basic details of forensic methods for handling tampered sound files, hidden files, image files, or images with steganography have also been incorporated to enhance the technical strength of the study. Central to this is an understanding of the effectiveness and efficiency of traditional digital forensic techniques at the various stages of the investigation process.
The following points are indicated for further and future research, aimed at identifying unforeseen challenges and recognizing solutions where possible:
• Identification of social/cultural barriers to achieving a new framework/model, and assessment of the implications for its conduct and implementation;
• Development of new Cloud/Edge forensics tools based on a new understanding of the challenges and the different hardware and software tools discussed in this study;
• Establishment of a standard, internationally recognized toolset with guaranteed reliability and performance, focused on reducing or preventing opportunities for investigators to produce results and evidence that fit a preconceived plan rather than the truth;
• Creation of a conducive environment for building trust between service providers and clients, especially regarding what is being recorded for future use (i.e., ongoing maintenance) as investigations arise;
• Assessment of the feasibility of a "Location Register" from ISPs or data-network providers.
Author Contributions: V.P. worked on the ideas, preparation, creation and/or presentation of the published work, specifically the critical review, commentary or revision, including pre- or post-publication stages. The second author, A.W., worked on the ideas, formulation or evolution of the overarching research goals and aims of the manuscript. The other authors, C.S., L.G. and S.B., played the role of supervisors with oversight and leadership responsibility for the planning and execution of the research activity, including mentorship external to the core team. Further, they provided valuable suggestions, specifically on the critical review, commentary or revision, including pre- or post-publication stages. All authors have read and agreed to the published version of the manuscript.