Malware Detection Issues, Challenges, and Future Directions: A Survey

Abstract: The evolution of recent malicious software, together with the rising use of digital services, has increased the probability of data corruption, information theft, and other cybercrimes caused by malware attacks. Therefore, malicious software must be detected before it impacts a large number of computers. Recently, many malware detection solutions have been proposed by researchers. However, many challenges limit these solutions' ability to effectively detect several types of malware, especially zero-day attacks, due to obfuscation and evasion techniques, as well as the diversity of malicious behavior caused by the rapid rate at which new malware and malware variants are produced every day. Several review papers have explored the issues and challenges of malware detection from various viewpoints. However, there is a lack of an in-depth review article that associates each analysis and detection approach with the data types it uses. Such an association is imperative for the research community, as it helps to determine the suitable mitigation approach. In addition, current survey articles stop at a generic detection-approach taxonomy. Moreover, some review papers present the feature extraction methods as static, dynamic, and hybrid based on the analysis approach used and neglect the taxonomy of feature representation methods, which is essential in developing a malware detection model. This survey bridges the gap by providing a comprehensive state-of-the-art review of malware detection model research. It introduces a feature representation taxonomy in addition to a deeper taxonomy of malware analysis and detection approaches, and links each approach with the most commonly used data types. The feature extraction methods are introduced according to the techniques used rather than the analysis approach. The survey ends with a discussion of the challenges and future research directions.


Introduction
The constant growth in Internet users and the provision of online services, such as banking and shopping, provide hacking criminals with a suitable environment to perform their cybercrimes, which leads to a rise in the expenses paid to protect systems [1]. The international damage cost caused by cyber maliciousness has attracted researchers' attention due to its rapid growth. In 2021, this cost was predicted to be around USD 6 trillion, according to the Cybersecurity Ventures Official Annual Cybercrime Report [2]. Malware is considered the biggest threat to cybersecurity and falls under several types, such as viruses, worms, trojan horses, rootkits, and ransomware, since malware causes direct harm to systems or steals their sensitive information [3,4]. In addition, malware represents the most frequent sort of computer, network, or user attack aiming to cause damage or steal sensitive information [5,6]. Over recent years, the number of malicious software samples has increased by 22.9%, which reflects an alarming rise in threats to computer users [7]. The authors of [8] stated that there were around one billion infected files in January 2021. Currently, malware is implemented to escape the detection process by exploiting obfuscation and evasion techniques, thereby producing more sophisticated dynamic malware [9]. A malware analysis process must be conducted to understand the characteristics and major functions of the malware. There are three main analysis approaches: static, dynamic, and hybrid. Static analysis means assessing a program without running it. Unlike static analysis, dynamic analysis means checking the executable's behavior by running it. The strengths and restrictions of both kinds of analysis are mutually complementary. Static analyses are faster, but malware can avoid detection if it is well disguised using obfuscation techniques or encryption strategies. Obfuscated code and dynamic malware can hardly avoid dynamic analysis, which observes and examines the program during its runtime, but dynamic analysis is sensitive to evasion techniques. Additionally, because it cannot analyze all the distinct operation paths, dynamic analysis does not reveal all risky behaviors [10].
Several researchers collect data using both static and dynamic analysis and merge it into a single set of features, aiming to increase malware detection accuracy by utilizing the mixed data generated through a hybrid analysis approach. The hybrid analysis approach thereby inherits both the benefits and the drawbacks of static and dynamic analysis [11]. Moreover, to protect systems and users' data, detection and classification models have been built utilizing three detection approaches: signature-based, behavioral-based, and heuristic-based. With a signature-based approach, a unique signature pattern has to be extracted in advance so that a given testing file's signature can be compared against an updated database of signatures, with the final decision based on the matching state [12]. However, only known malware can be detected using this approach, because the signature of unknown malware has never been extracted; obfuscation techniques are therefore considered the biggest weakness of this approach [13]. In contrast, novel malware can be recognized and thus detected by the behavioral-based approach, which relies on the behaviors observed while the malware runs in a controlled environment. Further, detecting malware based on its behavior is more robust against obfuscation techniques [14]. However, malware with the ability to distinguish between a real machine environment and an analysis environment can circumvent and evade the behavioral-based approach [1]. To improve malware detection accuracy, several authors utilize manual or automated rules to develop heuristic-based malware classification and detection models. Heuristic-based models, on the other hand, are restricted to only the malicious behaviors that are represented in the general rules.
Furthermore, machine learning (ML) is commonly used to effectively support business objectives and has been very successful because it is able to handle massive amounts of data, such as Application Programming Interface (API) calls, assembly code (opcodes), and byte code, which is infeasible for humans [15]. ML techniques offer a great deal of generality and have become an active domain in the field of cybersecurity. For modelling purposes, ML techniques such as Support Vector Machine (SVM), Naive Bayes (NB), Decision Trees (DT), etc., have been used to detect harmful programs based on numerous forms of data that influence the overall detection and classification performance.
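To make the idea concrete, the sketch below (our own illustration, not code from the survey; the API vocabulary and samples are invented) represents each sample as a frequency vector over a small API-call vocabulary and classifies a test sample by cosine similarity to the malware and benign class centroids — a toy stand-in for the SVM/NB/DT classifiers mentioned above:

```python
from collections import Counter

# Hypothetical API-call vocabulary; real models use thousands of features.
VOCAB = ["CreateFile", "WriteFile", "RegSetValue", "CreateProcess", "VirtualAlloc"]

def to_vector(api_calls):
    """Map a list of observed API calls to a fixed-length frequency vector."""
    counts = Counter(api_calls)
    return [counts[a] for a in VOCAB]

def centroid(vectors):
    """Per-dimension mean of a class's training vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(VOCAB))]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def classify(sample, mal_centroid, ben_centroid):
    """Nearest-centroid decision: toy analogue of an ML detection model."""
    return "malware" if cosine(sample, mal_centroid) > cosine(sample, ben_centroid) else "benign"

malware_vecs = [to_vector(["VirtualAlloc", "CreateProcess", "RegSetValue"]),
                to_vector(["VirtualAlloc", "VirtualAlloc", "CreateProcess"])]
benign_vecs = [to_vector(["CreateFile", "WriteFile"]),
               to_vector(["CreateFile", "CreateFile", "WriteFile"])]

sample_vec = to_vector(["VirtualAlloc", "CreateProcess"])
print(classify(sample_vec, centroid(malware_vecs), centroid(benign_vecs)))  # malware
```

The same vectorization step feeds real classifiers unchanged; only the decision function differs between SVM, NB, and DT.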
Even though several studies have introduced malware analysis approaches, there is a lack of reviews that address the relationship between each analysis approach and the data types used. Existing reviews [11,16,17] focus on the methods without relating them to the data types used. Furthermore, taxonomies in existing reviews [1,9,18,19] stop at generic detection approaches such as signature-based and behavioral-based. These taxonomies also overlook the data representation methods used in malware analysis and detection research. Additionally, previous review papers relate feature extraction to the analysis phase. This does not hold, as the outcome of that phase is the raw data from which the features are extracted; that is, feature extraction comes after the data collection stage. Therefore, our review bridges these gaps by providing a more granular taxonomy that explores in detail the subcategories under each approach and associates each subcategory with the most used data types. This enables the research community to go deeper with more specific categories and provide solutions for the root causes behind the unsatisfactory performance of existing malware analysis and detection solutions. Our paper introduces the data extraction process based on the extraction techniques used and considers the extraction process separately from the analysis process to highlight the broader picture between the data collection and extraction phases. Additionally, a novel taxonomy for data representation methods is introduced in this paper. In contrast, our survey does not focus on feature selection methods, because these have been reviewed extensively in the literature. Figure 1 shows the previous studies that are considered in our survey and their distribution over the period from 2003 to 2022.


Paper Contribution
(i) The association between the analysis approaches and the data used is highlighted.
(ii) A detailed taxonomy that discusses the different types of malware signatures and behaviors, and distinguishes between manual and automated rules for malware detection, is introduced, along with the association of each subcategory detection approach with the data types used.
(iii) This survey provides a taxonomy of feature extraction and representation methods based on the techniques used to extract and represent the features.
(iv) A comprehensive understanding of the concepts for both data collection (analysis approaches) and feature extraction processes is presented in this review.
(v) The open issues and future directions for the research community are introduced in this review.

Paper Organization
This paper is organized as follows. In Section 2, the research methodology that is followed is presented. Section 3 discusses the recent review papers. Section 4 presents the taxonomy of malware analysis and detection approaches. Section 5 presents the taxonomy of feature extraction and representation methods. Section 6 outlines the challenges and open issues. Future directions are described in Section 7.

Research Methodology
We followed the methodology shown in Figure 2 to produce this paper. Firstly, we focused on recently written review papers to identify the limitations of the existing reviews and thus show the need for a new literature review. Secondly, in addition to the review papers, we used specific keywords to collect the relevant experimental papers. Thirdly, the literature was classified according to the analysis and detection approaches, along with the extraction and representation methods utilized in each study. Four processes, namely reading, understanding, comparing, and criticizing, were conducted in the last phase to obtain the final results of this survey and highlight the future directions and open issues in the malware detection and classification area.

Related Work
Several review papers have been produced in the malware detection and classification community to identify specific definitions for malware types, the development phases that malware passes through to become more sophisticated, and malware obfuscation techniques. The authors of [19,20] presented static, dynamic, and hybrid methods as analysis approaches and divided the detection approaches into signature-based, heuristic-based, specification-based, machine learning-based, deep learning-based, and multimodal-based. The challenges faced by malware detection researchers were identified as class imbalance, open and public benchmarks, concept drift, adversarial learning, and interpretability of models. Furthermore, ref. [21], in their survey of malware detection techniques, added a sandbox detection approach to those mentioned in [19,20].
In addition, static, dynamic, and hybrid were highlighted as feature extraction methods in [22], along with connecting each extraction method to its most used data types. Furthermore, the feature extraction and feature selection processes were explained from different perspectives in [16,17,23,24]. The authors of [23] reviewed machine learning-based malware detection models by outlining their phases; the feature extraction phase was presented according to the extracted data types. The authors of [16] classified the feature extraction process based on the employed techniques and methods and listed N-gram, graph-based, and dataset-based feature extraction methods. Furthermore, ref. [17] surveyed the literature and classified the feature extraction methods based on the analysis approaches used, namely static, dynamic, and hybrid feature extraction. In their review, the analysis phase was explored from two aspects: steps and strategies. The analysis steps were identified as assembly, representation, and classification.
Furthermore, family analysis, similarity analysis, and variant analysis were identified as analysis strategies. Different selection and detection techniques were explored to explain how they affect the malware detection model's performance. Subsequently, the impact of the feature types and the classification techniques was investigated to show that no single approach is capable of detecting all malware types. The authors of [24] presented a complete overview of modern methods and strategies for the feature selection phase from a data perspective. Accordingly, they divided the feature selection methods into similarity-based, information-theoretical, sparse learning, and statistical approaches.
Some researchers focused on evaluating the evolution of, and the complications introduced by, modern sophisticated malware. The authors of [1,9] addressed the second generation of malware in detail, along with its development steps, to evaluate the evolution of malware together with the associated detection approaches, which were specified as signature-based, behavior-based, heuristic-based, specification-based, energy-based, bio-inspired-based, and machine/deep learning-based, to show the reliance between malware development and detection techniques. As more complicated malware, evasive malware was discussed by [25] through investigating the FFRI dataset to identify the ratio of evasive malware and explain the trend of employed evasion techniques, as well as the effect of anti-analysis operations on the analysis and detection processes; however, their results cover only the anti-analysis operations observable through API calls, so other anti-analysis operations that must be understood using other features were not covered by their study. Some researchers surveyed state-of-the-art malware detection with respect to a particular attack or a specific technique. The authors of [26] investigated several characteristics of specific malware attacks on smart home networks. A taxonomy of those attacks was presented based on smart home architecture, the smart home central hub, and smart home physical security. Furthermore, VPNFilter malware was addressed in terms of its vulnerability, impact on router vendors, and effect on smart home networks. Likewise, ref. [27] studied advanced persistent threats (APTs) in detail to explore their characteristics, models, payload delivery methods, and advanced evasion techniques. Furthermore, the analysis approaches and the existing application hardening techniques used to mitigate such malware were taxonomized, and the solutions adopted to design a secure region against APTs were stated.
On the other hand, ref. [28] introduced a survey of data mining-based malware detection approaches. Signature-based and behavior-based approaches were introduced along with their frameworks to explain how both use machine learning algorithms. Furthermore, the rate of machine learning algorithm use in the literature was illustrated and the challenges were summarized. A summary of data mining-based malware detection approaches was provided by [29]. In addition to the signature-based approach, they covered heuristic-based and specification-based malware detection approaches, as well as the advantages and disadvantages of each.
In contrast, this study introduces a comprehensive review covering a sufficient number of studies to provide a taxonomy of malware analysis and detection approaches, along with highlighting the most frequently used data types for each approach. Additionally, unlike the existing taxonomies that stop at generic detection approaches such as signature-based and behavioral-based, our review provides a deeper taxonomy that introduces the known detection approaches in detail by presenting novel subcategories under each approach and associating each subcategory with the most used data types. This enables the research community to go deeper with a better understanding of the existing detection approaches. In contrast to the existing reviews, which focused on the data types used and the analysis approach (static, dynamic, and hybrid) when identifying the extraction methods, this survey presents the feature extraction phase from the perspective of which technique is used to achieve the extraction, introducing a clearer concept that highlights the differences between the data collection process conducted during the analysis phase and the feature extraction process performed after it. Thus, our survey marks the border between those two phases. Furthermore, modern feature extraction methods are added to those existing in the literature, and these methods are discussed and compared in this review.
Moreover, a novel taxonomy of feature representation methods is presented in this paper to fill the gap left by existing reviews, which have never included such a taxonomy. Further, tackling those representation methods, identifying distinct definitions for each one, and explaining their weaknesses leads to a more precise understanding among the research community. Unlike earlier reviews, which treat ML/deep learning techniques as a distinct detection approach, this study defines ML/deep learning techniques as algorithms that have been widely utilized to enhance the performance of whatever detection approach is used, such as signature-based, behavioral-based, or heuristic-based detection. Table 1 shows the comparison between this survey and the previous review papers based on the list of contributions.

Table 1. A list of the contributions of this survey with a comparison to the previous review papers.

[Table 1 content not reproduced; among the compared contributions is "Show the Border between Data Collection and Feature Extraction Phases", marked for this survey only.]

The Taxonomy of Malware Analysis and Detection Approaches
This section describes the taxonomy of malware analysis and detection approaches. While the malware analysis taxonomy is linked to the data types used with each analysis approach, malware detection is introduced with a deep taxonomy in which each known detection approach is presented in subcategories, and the relationship between each detection subcategory and the data types utilized is determined. Figure 3 shows the malware analysis and detection taxonomy, where the analysis approaches are presented as static, dynamic, and hybrid, together with the data types frequently used with each analysis approach. Regarding malware detection approaches, sub-detection approaches that go deeper than the well-known signature-based, behavioral-based, and heuristic-based approaches are presented. Static and dynamic signatures; continuous, sequential, and common behaviors; and automated and manual rules are displayed as deep categories of the major detection approaches, along with the association of each sub-detection approach with the most used data types.


Malware Analysis Approaches and Data Types
Malware analysis and the type of data have a considerable and growing impact on the detection process, which determines the classification of the investigated file and therefore influences the overall accuracy of the detection models. Several types of data have been extracted using static, dynamic, and hybrid analysis, such as byte code, opcodes, API calls, file data, registry data, and so on, to understand the major purpose and function of the examined files and therefore classify them as malware or benign. Accordingly, we discuss below the analysis approaches employed in recent studies alongside the types of data extracted in those studies, to identify the impact and the trend of the data types. Table 2 summarizes each reviewed paper in terms of the analysis approach and the extracted data.

Static Analysis
The static analysis approach has been widely utilized by exploring the source code, without running the executable files, to extract a unique signature that represents the file under investigation. Several types of static data can be collected via static analysis, including PE-header data [30][31][32][33] and derived data such as string-based entropy and compression ratio [34][35][36][37]. Additionally, static analysis tools, such as the IDA Pro disassembler and Python-developed modules, are used to collect static opcodes and API calls [38][39][40][41][42]. Even though static analysis can track all possible execution paths, it is hindered by packing and encryption techniques.
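As an illustration of one derived static data type mentioned above, the following sketch (our own example, not code from any cited study) computes the Shannon byte entropy of a buffer; packed or encrypted sections tend toward the maximum of 8 bits per byte, which is why entropy is a useful static feature:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte over the byte-value distribution."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(round(byte_entropy(b"\x00" * 64), 2))       # a uniform run scores 0.0
print(round(byte_entropy(bytes(range(256)), ), 2))  # every byte value once scores 8.0
```

In practice, entropy is typically computed per PE section rather than over the whole file, so that a single packed section stands out.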

Dynamic Analysis
Several researchers have performed dynamic analysis to collect various data types for differentiating between malware and benign files by running the executable files in isolated environments, virtual machines (VMs), or emulators, monitoring the executable file's behavior during run-time, and then collecting the desired dynamic data [43]. Various kinds of data have been collected using the dynamic analysis approach. Malicious activities can be dynamically represented using both executable file behavior and memory images retained during run-time. The executable files' behaviors are identified by collecting the invoked API calls [44][45][46][47][48], machine activities [47,49,50], file-related data [51][52][53], and registry and network data [45,54]. An opcode-based memory image can be taken to represent the malicious activities dynamically [15]. Even though obfuscated malware cannot hide how it behaves when dynamically analyzed, dynamic analysis is unable to satisfy all malicious conditions and thus cannot explore all execution paths.
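The post-run collection step can be sketched as follows. The snippet below flattens a simplified, Cuckoo-style behavior report into an ordered API-call sequence; the report schema here is an assumption for illustration, and real sandbox report formats differ in detail:

```python
import json

# Hypothetical, heavily simplified sandbox report (schema is an assumption).
report_json = json.dumps({
    "behavior": {
        "processes": [
            {"calls": [{"api": "CreateFileW"},
                       {"api": "WriteFile"},
                       {"api": "RegSetValueExW"}]}
        ]
    }
})

def api_sequence(report: dict) -> list:
    """Collect the invoked API names from all monitored processes, in order."""
    seq = []
    for proc in report.get("behavior", {}).get("processes", []):
        seq.extend(call["api"] for call in proc.get("calls", []))
    return seq

print(api_sequence(json.loads(report_json)))
```

The resulting sequence is the raw dynamic data from which sequential features (e.g., API n-grams) are later extracted.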

Hybrid Analysis
Some previous studies combined data extracted through static and dynamic analysis to reduce the drawbacks of both analysis approaches and achieve a higher detection rate. Different tools, including the Cuckoo sandbox, the IDA Pro disassembler, and OllyDbg, are employed to collect dynamic and static data, and hybrid feature sets are then created based on several types of data, such as strings, opcodes, API calls, and others [55][56][57][58][59]. Even though the hybrid analysis approach benefits from the advantages of both static and dynamic analysis, it also suffers from their disadvantages.
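A minimal sketch of the fusion step, assuming features from each analysis arrive as key-value dictionaries (all names and values below are illustrative): namespacing the keys keeps static and dynamic features distinct within the hybrid set:

```python
def fuse(static_feats: dict, dynamic_feats: dict) -> dict:
    """Merge static and dynamic features into one hybrid feature dictionary,
    prefixing keys so the two sources cannot collide."""
    hybrid = {f"static:{k}": v for k, v in static_feats.items()}
    hybrid.update({f"dynamic:{k}": v for k, v in dynamic_feats.items()})
    return hybrid

# Hypothetical feature values from the two analysis passes.
static_feats = {"entropy": 7.2, "opcode_mov": 120}
dynamic_feats = {"api_CreateProcess": 3, "net_dns_queries": 2}

hybrid = fuse(static_feats, dynamic_feats)
print(sorted(hybrid))
```

From here, a fixed key ordering turns the hybrid dictionary into the single feature vector used for training.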

Malware Analysis and Data Types Discussion
The most used data types for each analysis approach are shown in Figure 4. On the x-axis, the data types are depicted as string (St), PE-header (P-h), Opcode (Op), API calls (API), Dynamic link library (DLL), machine activities (MA), process data (PD), file data (FD), registry data (RD), network data (ND), and derived data (DD). Similarly, static, dynamic, and hybrid are mapped on the y-axis as analysis approaches. The horizontal and vertical dotted lines illustrate the relationship between each data type and each analysis approach by showing how frequently each data type has been used with each particular analysis approach in the reviewed literature.
Focusing on static analysis, opcodes and PE-header data represent the first and second most frequently utilized static data types, respectively. This survey found that statically collected API calls were the least used static data type, less preferred in the literature than byte-code data and derived data. In contrast to static analysis, API call data is the most significant data type extracted using dynamic analysis. While machine activity data is the second most used, data related to registry values, files, and the network shows an average usage ratio among studies that extracted their required data dynamically. Furthermore, byte-code, PE-header, and opcode data are rarely extracted using dynamic analysis. In the studies that utilized both static and dynamic analysis as hybrid analysis, the same ranges recur for the data associated with static and dynamic analysis, such as opcodes, byte code, and PE-header as static data and API calls as dynamic data. On the other hand, the usage ratio of some dynamic data, such as data related to files, the registry, and machine activities, decreases in the studies that chose hybrid analysis to extract their features.
Even though static analysis is safe, as there is no need to run the files, malicious software frequently employs packers and encryptors such as UPX and ASPack to prevent analysis. As a result, unpacking and decompression are required before analysis, which is accomplished with disassemblers such as IDA Pro and OllyDbg. Contrary to the static analysis approach, the dynamic analysis approach is more effective because there is no need to unpack the investigated file before the analysis process. Dynamic analysis can enable detection models to detect novel malware behaviors in addition to known malware. Further, the general behavior of malware is not affected by obfuscation techniques, so it is difficult for obfuscated malware to evade the dynamic analysis approach, because obfuscation changes the malware's structure but not its behavior. However, dynamic malware analysis is more sensitive to evasive malware, which can detect whether it is being executed in a real or controlled environment.

Malware Detection Approaches
The malware detection process is the mechanism that must be implemented to discover and identify the malicious activities of the files under investigation. Several approaches to detecting malware have been improved year after year, yet no single approach provides 100% success with all malware types and families in every situation. Malicious software has therefore been detected based on two main characteristics, signatures and behaviors, using three malware detection approaches: signature-based, behavioral-based, and heuristic-based. The following sections discuss these malware detection approaches. Table 3 summarizes each reviewed paper in terms of the detection approach together with the extraction and representation methods.

Signature-Based
Several studies have been undertaken to improve malware detection and classification models by relying on a unique signature that has been previously extracted, statically or dynamically, and stored, so that it can be compared with the signature collected from the investigated file. Such signatures include, but are not limited to, sets of API calls, opcode or byte-code series, and entropy quantities. Static string-based signatures were generated by [35,60] to detect VBasic malicious software by representing the obtained strings using frequency vectors, while [61] generated static signatures based on n-grams and binary vectors. Additionally, [38] formed malicious static signatures using the statistical values of opcodes. On the other hand, behavioral signatures have been constructed based on dynamically collected data. The authors of [48,62,63] created behavioral signatures using API calls invoked by malware at run-time. Specific sets of API calls are identified as reflecting malicious activities, and the behavioral malicious signatures are thus constructed using those API calls. Static and behavioral signature-based malware detection models suffer from low detection rates when classifying unknown signatures that may belong to unknown malware or to different variants of known malware.
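The n-gram signatures described above can be sketched as sets of opcode n-grams compared against a signature database with a similarity threshold; the family name, opcodes, and threshold below are all hypothetical:

```python
def ngrams(seq, n=3):
    """Set of overlapping n-grams from a token sequence."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

# Hypothetical signature database: opcode 3-grams previously extracted
# from a known malware sample and stored for later matching.
signatures = {
    "Trojan.A": ngrams(["push", "mov", "call", "xor", "jmp"]),
}

def match(sample_opcodes, signatures, threshold=0.5):
    """Return the first family whose stored n-grams overlap the sample
    above the threshold, or None if no signature matches."""
    sample = ngrams(sample_opcodes)
    for family, sig in signatures.items():
        overlap = len(sample & sig) / len(sig)
        if overlap >= threshold:
            return family
    return None

print(match(["push", "mov", "call", "xor", "ret"], signatures))  # Trojan.A
```

The sketch also makes the stated weakness visible: a variant whose opcode sequence is reordered by obfuscation shares few n-grams with the stored signature and falls below the threshold.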

Behavioral-Based
After monitoring the executable files in an isolated environment and collecting the exhibited behaviors, feature extraction techniques have been developed to extract the sensitive features by which the developed model can classify the known malicious behaviors, as well as any behavior that appears similar to them, while keeping false positives in check. The ability to identify novel malware behaviors, in addition to the known ones, from behaviors collected at run time has made this approach more valuable than the signature-based approach. As a result, the majority of the studies in the literature review focused on behavioral-based approaches, in the form of continuous, sequential, and common behaviors, to increase malware detection ratios.
Some studies have been conducted based on extracted continuous behaviors, which are represented by machine activities. The authors of [41] focused on the Windows platform and used the Cuckoo sandbox to extract machine activity data (CPU, memory, and received and sent packets). After that, the observations were transformed into vectors, which were used to train and assess classification algorithms. Most of the previous studies were concerned with extracting API calls, system calls, opcodes, and other data and forming them sequentially (sequential behaviors) or into ordered patterns to understand the malicious functionalities. The sequential or ordered patterns can be API calls, registry data, and network data [13,40,45] or opcode sequences [76]. Moreover, the common behaviors performed by malware and benign samples can be used as an indicator to classify the investigated file into the malware or benign class in binary classification models. In addition, those common behaviors can be observed in each malware family in the case of multi-classification models. The matching time is reduced because the developed models classify the test files based only on the common behaviors. Common-behavior graph-based malware detection and classification models were proposed by [10,77] by observing the most frequent behavior graphs in each malware family. Additionally, ref. [78] presented binary and multi-classification models using long short-term memory (LSTM) networks based on the common API call sequences exhibited by each malware family.

Heuristic-Based
A heuristic-based approach has been used in various research by generating generic rules that investigate the extracted data, obtained through dynamic or static analysis, to support the proposed model in detecting malicious intent. The generated rules can be developed automatically, using machine learning techniques, the YARA tool, and other tools, or manually, based on the experience and knowledge of expert analysts.
Several studies have developed malware detection models in which decisions are taken based on automated behavioral rules created using machine learning techniques and the YARA tool [45,47,54]. On the other hand, based on statically extracted string data, ref. [34] was concerned with manually generating general rules to recognize the existence of malicious activities that might be carried out by malware using HTML elements and JavaScript functions. Moreover, domain name system (DNS)-based rules were developed by [76] to build a botnet attack detection model. The proposed model took the final decision based on manually developed general rules that can detect abnormalities in DNS queries and responses.
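The rule-evaluation idea can be sketched as follows. This is not YARA itself but a minimal stand-in: each hypothetical rule fires when all of its string indicators appear in the data extracted from a sample (the rule names and indicators are invented for illustration):

```python
# Illustrative heuristic rules, loosely inspired by string-based detection of
# malicious HTML/JavaScript: a rule fires when all of its indicators are present.
RULES = {
    "suspicious-script": {"eval(", "unescape(", "document.write("},
    "dns-anomaly": {"nslookup", "base64"},
}

def apply_rules(extracted_strings):
    """Return the names of every rule whose full indicator set is satisfied
    by the statically extracted strings of the sample."""
    strings = set(extracted_strings)
    return [name for name, indicators in RULES.items() if indicators <= strings]
```

Real systems express such rules in a dedicated language (e.g., YARA's strings/condition syntax) and combine them with metadata, but the matching principle is the same.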

Malware Detection Discussion
According to the literature, researchers applied string, opcode, and derived features to construct static signatures, while only API calls and opcodes were used to create dynamic signatures. In general, detecting malware using previously derived signatures, whether static or dynamic, is insufficient to keep pace with malicious software, because the obfuscation techniques popularly used by malware writers to create malware variants and new malware give each variant and each new malware a different fingerprint, which must be obtained before the detection process can begin.
Furthermore, malware detection models based on static signature patterns have been defeated when tackling encrypted or packed malware. Regarding the behavioral-based approach and the preferred data types, machine activity data is the feature most used in the literature to depict malware functionality through continuous behaviors, whereas API call data is the feature most frequently utilized to build malware detection models using sequential behaviors. Furthermore, the data types most used when authors try to capture the common behaviors among malware groups are API calls and opcodes. The behavioral-based approach is a promising solution to overcome the weaknesses of the signature-based approach, but relying on behaviors leads the suggested models to misclassify malware that performs functions similar to benign functions or mimics legitimate behaviors, so those models suffer from a high false-positive rate.
Moreover, malware is capable of recognizing the nature of its execution environment using evasion techniques and then changing its behavior to resemble benign behavior, or terminating its execution, with the result that it is represented through unrepresentative behaviors.
In addition, extracting a sufficient feature set is a tough process that has a massive effect on malware detection and classification models. Furthermore, representing malware behaviors based on the names, sequences, or frequencies of the extracted characteristics yields malware detection and classification models that are more vulnerable to obfuscation techniques, which are employed to alter those names, sequences, and frequencies. Several researchers trained their models on malicious behaviors extracted from the most recent malware to give the classifiers the capacity to recognize trends in malicious activity. On the other hand, the developed malware detection models thereby become vulnerable to older malicious behaviors.
Focusing on the heuristic-based approach, there is no single type of data that is commonly used with this approach in the literature; rather, researchers have used practically all the data types at a similar rate, including API calls, network data, registry data, imported DLLs, and others. However, the creation of general rules that play a significant role in the final decision is required when building a malware detection and classification model based on the heuristic approach. Generating the investigation rules manually consumes time and effort and requires malware behavior experts with sufficient experience. Even though the required rules can be generated automatically, the suggested rule-based model is limited to detecting only the malicious activities that are represented in the critical general rules.

The Taxonomy of Feature Extraction and Representation Methods
This section covers a taxonomy of feature extraction and representation methods. In contrast to previous feature extraction taxonomies, which were introduced based on the analysis approach, this paper presents a feature extraction taxonomy according to the techniques employed to extract the features. Furthermore, a novel feature representation taxonomy is identified to clear up the borders between the data collection, extraction, and representation phases. The feature extraction and representation taxonomy is shown in Figure 5.

Feature Extraction Methods
Generally, feature engineering refers to the extraction, selection, and representation of features. This is an important phase in the malware detection and classification process because it has a significant impact on the classification model's performance [70], since the feature engineering procedure arranges the features into a more machine-understandable subset of data [22]. Additionally, reduced computational overhead is achieved by shrinking the dataset to be processed [79]. The feature extraction methods, organized by extraction technique, are presented in the following subsections. Table 3 shows the feature extraction method used in each reviewed paper.

N-Gram
Using a single feature, such as an API call or an opcode, in isolation causes a low detection rate in some text classification models because doing so ignores the information that can be obtained by combining multiple features into one feature during the extraction phase. Malicious behaviors can be carried out using a group of API calls or opcodes; to the best of our knowledge, malware rarely relies on a single feature to perform its malicious activities. Therefore, existing malware detection and classification solutions have widely used feature extraction methods based on n-gram techniques in order to combine more than one feature during the extraction phase. With the n-gram technique, each single extracted feature is constructed from N features exhibited by the malware during the analysis phase.

Several studies have used the n-gram technique to extract features [50,65,72,80,81]. Constructing substrings of length n from an original string is called an n-gram. Depending on the application, the string may contain various kinds of items, such as letters or words. N-grams are made by dividing a string into fixed-length substrings. The accuracy of similarity measures across terms is enhanced by the use of n-grams [82]. However, the n-gram technique causes a high-dimensional feature space, due to the large number of features it generates, which leads to more time being consumed [10].
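The sliding-window construction is simple enough to show directly. A minimal sketch over an opcode sequence (the opcode names are illustrative):

```python
def ngrams(tokens, n):
    """Slide a window of length n over the token sequence; each window
    becomes one n-gram feature."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Bigrams over a short (hypothetical) opcode trace:
opcodes = ["push", "mov", "call", "mov", "ret"]
bigrams = ngrams(opcodes, 2)
```

A sequence of length L yields L - n + 1 n-grams, and the feature *vocabulary* grows roughly with |alphabet|^n, which is the dimensionality problem noted above.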

Text Mining
Text mining techniques are borrowed from the information retrieval field. Term indexing and term weighting represent the two main categories of text mining techniques: in term indexing, a unique index is assigned to each term, while in term weighting, a specific weight is calculated and assigned to each term. In the malware detection and classification fields, term weighting methods have been widely used to extract significant features. Identifying a specific importance for each word or term appearing in the analysis reports is critical to improving the proposed model's performance. Various techniques, such as Term Frequency-Inverse Document Frequency (TF-IDF), Information Gain (IG), and others, mine the texts and define their importance by assigning weights by which the analysts can identify the words most useful for classification purposes.
The authors of [43] assigned weights to each text in the analysis report using the information gain technique. Those weights were assigned to the texts that represent the performed operations and their locations, based on the occurrences of those texts in the malware and benign classes. In addition, TF-IDF was used by [54] as a weighting method to extract the API calls and network traffic by evaluating the appearance of each feature in both the malware and benign classes, to identify the uncommon features, which are more significant than the common ones.
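The TF-IDF weighting described above can be sketched from its definition: a term's weight in a document is its in-document frequency scaled down by how many documents contain it, so terms common to all classes receive weight zero. This is a minimal stdlib sketch, not the surveyed papers' pipelines:

```python
import math
from collections import Counter

def tf_idf(documents):
    """Compute TF-IDF weights per term per tokenised document.
    documents: list of token lists. Returns one {term: weight} dict per document."""
    n_docs = len(documents)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in documents for term in set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights
```

With one "report" per sample (e.g., API call tokens), a feature occurring in every sample gets weight 0, matching the intuition that uncommon features are the discriminative ones.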

Graph-Based Extractor
To extract only the most important features, raw data such as API calls and opcodes are formed into graphs that represent the appearance of each feature in a sample as well as its relationships with the others. The frequent sub-graphs among all the samples of a specific class or family are therefore considered significant features to be extracted. Those sub-graphs consist of nodes (API calls, opcodes) and their relationships with other features (dependency or control flow). Some studies extract the features using algorithms that construct graphs whose nodes are groups of basic blocks in the program; these basic blocks are connected by edges representing the control-flow paths between the nodes and blocks [95]. Some of the previous studies [10,80,96] have used graph-based extraction methods to extract the optimal feature set. Control-flow graphs (CFGs) were created based on opcodes or API calls. Some studies used matching algorithms directly to distinguish between malicious and benign constructed graphs, while others represented the constructed graphs as binary vectors, weight vectors, or weighted dependency graphs. However, it is difficult to build a unique graph for each individual malware, and relying on common graph-based behaviors might increase the detection time.

Frequency-Based Extractor
Each feature that appears in the raw collected data can be a discriminative or a redundant feature. The importance of a feature depends on how it occurs in each class: the more frequently a feature appears in one of the classes without occurring in the others, the higher the possibility that it is a significant feature. Following this concept, some studies rely on frequency when developing feature extraction techniques. The occurrence of the features is measured and analyzed as an indicator of the prominent features, to avoid creating an extremely high-dimensional feature space that leads to an ineffective model. The frequencies of API calls and opcodes were used by [75,87] to extract features and construct the dataset used to train and assess their suggested models, while [45] considered the frequency of each unique API, DLL, and registry key in each document as a feature. However, obfuscation techniques such as instruction replacement and dead-code insertion, which alter the frequency of the derived features, have had an impact on frequency-based models [10,62].
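The frequency-based filtering step can be sketched in a few lines: count each feature's occurrences across a class's samples and keep only those that clear a threshold. The threshold value and the API-call tokens here are illustrative:

```python
from collections import Counter

def frequent_features(samples, min_count):
    """Keep only features whose total occurrence count across the class's
    samples reaches min_count; rare features are treated as redundant."""
    totals = Counter()
    for calls in samples:          # each sample is a list of observed features
        totals.update(calls)
    return {feat for feat, count in totals.items() if count >= min_count}
```

A real system would compare per-class frequencies (frequent in malware, rare in benign) rather than a single class's counts, but the dimensionality-reduction effect is the same.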

Word Embedding
The word embedding method can predict the distribution of each word or feature. The well-known word embedding Word2Vec can be trained in two different ways: Continuous Bag-of-Words (CBOW) and Skip-Gram. CBOW uses the (context) words before and after the target word as input to predict the target word as output, while Skip-Gram uses the target word as input to predict the (context) words before and after it as output. Word2Vec transforms a corpus of text into a vector space in which each text feature in the corpus is represented by a vector. Textual similarities are preserved in the vector space: two words that are similar in context will be placed close to each other. Therefore, based on the fact that the order of API calls or opcodes in malware sequences carries textual meaning and is not created randomly, some of the previous studies have utilized Word2Vec techniques to extract a feature set in which the textual relationships are represented [85].
The authors of [85,97,98] used API sequences, while [88] used opcodes, as inputs to produce their extracted features as vectors, representing each word with assigned weights that place words with similar contexts close to each other. Even though the Word2Vec technique captures the contextual relationship between the features (words), the many characteristics of the words might increase the computational complexity and thereby reduce the overall performance of the proposed models.
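The "similar context, similar vector" idea can be illustrated without a trained neural model. The toy sketch below builds simple co-occurrence count vectors over a fixed window; it is *not* Word2Vec (which learns dense vectors via CBOW or Skip-Gram training, e.g., with the gensim library), but it shows the distributional principle those models exploit:

```python
from collections import defaultdict

def cooccurrence_vectors(sequences, window=2):
    """Toy distributional representation: each feature is described by the
    counts of features co-occurring with it inside a +/- window.
    Returns ({feature: count_vector}, vocabulary) with vocabulary sorted."""
    vocab = sorted({t for seq in sequences for t in seq})
    index = {t: i for i, t in enumerate(vocab)}
    vectors = defaultdict(lambda: [0] * len(vocab))
    for seq in sequences:
        for i, target in enumerate(seq):
            lo, hi = max(0, i - window), min(len(seq), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[target][index[seq[j]]] += 1
    return dict(vectors), vocab
```

Features (e.g., API calls) appearing in similar surroundings end up with similar count vectors; Word2Vec compresses exactly this signal into low-dimensional embeddings.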

Iterative-Based Extractor
Since the final objective of every developed malware detection and classification model is to achieve satisfactory accuracy when classifying the test data, this method focuses on the features by which the detection accuracy is most improved. The challenge when using an iterative-based extractor is how to identify the initial feature set. The obvious solution is to include all the features that appear in almost all the samples of one class, and then to perform extensive experiments, analyzing the results while including and excluding parts of the initial feature set, in order to identify the optimal feature set that achieves the highest performance. This method entails conducting a large number of experiments, followed by analyzing the obtained results to highlight the features associated with the highest detection accuracy. The authors of [84] extracted the typical feature set from the analysis reports based on a series of experiments conducted to identify the effective features, relying on the improvements achieved by each additional feature in every experiment. Even though this method ensures the efficiency of the extracted feature set, because the extracted features are associated with the highest achieved detection accuracy, it is time- and effort-consuming.
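One common concrete form of this include/exclude loop is greedy forward selection, sketched below. The `evaluate` callback stands in for "train the model and measure detection accuracy on held-out data"; its implementation is up to the caller and is the expensive part that makes the method time-consuming:

```python
def forward_select(candidates, evaluate):
    """Greedy iterative extractor: repeatedly add the candidate feature that
    most improves the caller-supplied evaluation score, stopping when no
    remaining candidate improves it. Returns (selected_features, best_score)."""
    selected, best = [], evaluate([])
    remaining = list(candidates)
    improved = True
    while improved and remaining:
        improved = False
        score, feat = max((evaluate(selected + [f]), f) for f in remaining)
        if score > best:
            best = score
            selected.append(feat)
            remaining.remove(feat)
            improved = True
    return selected, best
```

Each outer iteration costs one model evaluation per remaining candidate, which is why iterative extraction scales poorly with the size of the initial feature set.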

Gray Scale Image-Based Extractor
The malware binaries are divided into 8-bit units, and each 8-bit unit can be utilized as a single pixel in an image, because any 8-bit unit of a malware binary provides a numeric value between 0 and 255, where 0 represents black, 255 represents white, and the values in between move gradually between the two colors. Each 8-bit unit, i.e., a single pixel of the malware binary's image, can thus be painted according to the degree of the color expressed in that pixel. Therefore, gray-scale images can be generated for each malware binary, and several features related to the generated images can then be extracted, such as texture, intensity, and wavelet features. Malware variants that belong to the same family produce similar visualizations as a result of reused code [99]. Therefore, some research is concerned with distinguishing between malware and benign files based on the extracted visualization features.
The authors of [86] read the file as 8-bit unsigned integers between 0 (black) and 255 (white), and then used a machine learning technique to transfer those files and save them as images. However, relying on visualization-based features requires an adequate number of malware families and types during the training phase to build an effective model. The authors of [100] utilized transfer learning (TL)-based CNN models to obtain the main characteristics of malware from their converted images. The gray-scale images were created by converting the malware files into 1D, 8-bit vectors and then into 2D images. Moreover, an image resizing process was applied to produce appropriate inputs for the TL-based CNN models. The created dataset was divided into training data (80%) and test data (20%). The VGG16 model performed best among all the utilized TL-based CNN models.
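The byte-to-pixel conversion itself is mechanical and can be sketched directly; the fixed row width of 16 below is an arbitrary choice for illustration (real systems pick the width from the file size):

```python
def bytes_to_grayscale(data: bytes, width: int = 16):
    """Map each byte (0-255) of a binary to one grayscale pixel intensity and
    reshape the flat pixel stream into rows of the given width.
    Any trailing partial row is dropped for simplicity."""
    pixels = list(data)  # each byte already is a 0-255 intensity value
    rows = len(pixels) // width
    return [pixels[r * width:(r + 1) * width] for r in range(rows)]
```

The resulting 2D intensity grid is what texture descriptors (or, as in [100], a CNN) consume; saving it as an actual image file would typically use a library such as Pillow.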

Discussion of Feature Extraction Methods
Even though using the n-gram method to extract features is quite popular among authors, this technique produces a large number of features, making n-gram-based models suffer from high dimensionality. The frequency-based feature extraction method is employed to decrease the number of extracted features by extracting only the most frequently occurring ones, which helps to overcome the problem of the high-dimensional feature space. However, obfuscation techniques that can adjust the frequencies of certain features in each variant invalidate the frequency-based method.
Though it is impossible to develop a unique graph for each piece of malware, some studies have employed a graph-based feature extraction method to construct generic graphs using common characteristics. The matching procedure, on the other hand, gives the graph-based models a significant level of time complexity. The time-matching problem has been examined, and a solution has been proposed in the representation phase, where some studies represented the constructed graphs as vectors.
On the other hand, text features have been given weights to indicate which features should be extracted. Text mining-based models are particularly vulnerable to obfuscation techniques because those weights are derived using frequency-based techniques such as TF-IDF. Another direction is the gray-scale image-based feature extractor, used when the produced data can be visualized. The limitation of the visualization extractor is that the extracted features are stored as images, which need more storage space. Regardless of how well the word embedding method captures the contextual relationships between the characteristics (words), the various aspects of the words may raise the computational cost and lower the overall performance of the model.

Feature Representation Methods
Next to the feature extraction step, a significant step must be performed: representing the characteristics of malicious and legitimate activities, in other words, transforming the extracted characteristics into forms understandable by algorithms and machines, using several vector types from which the proposed models can learn the behaviors of the different classes. The proposed models thus become capable of distinguishing between malware and benign files based on the representation methods those models use to recognize the characteristics of both classes [79]. The feature representation methods are identified and discussed in the next subsections. Table 3 shows the feature representation used in each reviewed paper.

Binary-Based Vector
This representation method examines the extracted features from a perspective that has only two aspects: true if the examined feature exists in the document, and false if it does not. Binary vector representation is a widely used method in which each extracted feature is represented by 1 or 0 according to its existence in a sample, creating a binary vector that represents the characteristics of that sample. The authors of [72] integrated the binary vectors generated from static printable strings and dynamic API n-gram features; as a result, hybrid binary vectors were constructed to represent the characteristics of the malware and benign samples. The authors of [63] created a local binary vector for each file by converting each API call into 1 or 0 depending on its presence in a global list.
However, using binary vectors to represent the features is vulnerable to obfuscation techniques that produce irrelevant features, such as irrelevant API calls or opcodes, without affecting the overall functionality [101]. Such a representation method is also sensitive to the employed feature extraction method: the legitimate features that malware authors often inject into the produced malware to defeat analysis efforts can end up represented in the malware binary vectors as malware characteristics whenever the developed extraction method includes them as malware features.
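The global-list construction of [63] described above amounts to a presence/absence encoding over a fixed vocabulary, which can be sketched minimally (the API names are illustrative):

```python
def binary_vector(sample_calls, global_api_list):
    """Represent a sample as 1/0 flags over a fixed, ordered global vocabulary
    of API calls: 1 if the call was observed in the sample, 0 otherwise."""
    present = set(sample_calls)
    return [1 if api in present else 0 for api in global_api_list]
```

Because only presence matters, an obfuscated variant that inserts extra benign-looking calls flips additional bits to 1, which is exactly the vulnerability noted above.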

Frequency-Based Vector
The production of malware variants nowadays is an easy task because there are various obtainable malicious code libraries and online tools with which to reuse and alter existing malicious code and thereby introduce new variants. As a result, the frequency (number of occurrences) of each extracted feature in each malware sample, such as API calls and opcodes, can be used to identify the similarity between malicious behaviors that belong to multiple variants of the same malware family [44].
To construct the frequency-based vectors, the occurrence counts of the extracted features must be considered. Based on the assumption that malware and benign programs differ in how often they perform their functions, frequency-based vectors have been used to represent the differences between malicious and legitimate activities. Therefore, the most frequent features have to be determined for both the malware and benign classes [102]. In their study, ref. [70] focused on VBasic-based malware, so the VBScript samples were searched for specific functions, methods, and keywords, whose numbers of occurrences were used to construct the feature vector. The authors of [44] extracted API calls using n-gram techniques, and frequency-based vectors consisting of the occurrences of each n-gram were then constructed to represent the file characteristics.
Despite the fact that using a frequency-based approach to represent malware as graphs yields discriminating patterns [103], the performance of frequency-based malware detection models is hampered by obfuscation techniques such as dead-code insertion, which is capable of altering the distribution of the features [104].
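Frequency-based vectors differ from binary vectors only in replacing the presence flag with an occurrence count; a minimal sketch over an illustrative API vocabulary:

```python
from collections import Counter

def frequency_vector(sample_calls, global_api_list):
    """Represent a sample by how many times each feature of a fixed, ordered
    global vocabulary occurs in it (0 when absent)."""
    counts = Counter(sample_calls)
    return [counts[api] for api in global_api_list]
```

Dead-code insertion defeats this encoding directly: repeating a harmless call inflates its count without changing the malware's actual behavior.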

Weight-Based Vector
The distribution of the extracted features is one of the biggest differences between the legitimate and malware classes, and even between malware families. The features related to performing a malicious behavior should be heavily distributed in malware samples and absent, or rarely present, in legitimate samples. Furthermore, malware families are determined based on how the malware acts and which techniques it uses to carry out its own malicious actions [83]. Therefore, among the whole set of extracted features, there are specific features that can be related to particular families and are rarely distributed in others. Based on this concept, some of the previous studies have represented the extracted text features as numerical weights using a weight-based vector, with the help of statistical methods such as Term Frequency-Inverse Document Frequency (TF-IDF), Information Gain (IG), and entropy values.
A weight-based vector consists of the extracted features and their weights. Specific weights are assigned to each feature as its data value using statistical methods such as information gain and TF-IDF, so the proposed models can distinguish between malware and benign files based on the assigned weights. The authors of [43] calculated the frequency of each extracted word and normalized the frequency values; the Information Gain method was then employed to assign the calculated weights to each feature. Those weights were used to determine whether the testing files should be placed in the malware or benign class. Another study [81] computed weights for each extracted feature using TF-IDF to select only the most significant features, and then represented the samples using the selected features along with their weights in an Attribute-Relation File Format (ARFF) file, with each row representing a weight vector. However, the statistical methods that convert text data into numerical data to be fed into ML techniques do not capture any contextual relationship between the features [85].
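The information gain weighting mentioned for binary features can be sketched from its textbook definition: the class entropy minus the entropy remaining after splitting the samples on whether the feature is present. This is a generic sketch, not the exact computation of [43]:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_present, labels):
    """IG of a binary feature: H(labels) minus the weighted entropy of the
    label subsets obtained by splitting on feature presence."""
    n = len(labels)
    gain = entropy(labels)
    for value in (True, False):
        subset = [lab for f, lab in zip(feature_present, labels) if f is value]
        if subset:
            gain -= (len(subset) / n) * entropy(subset)
    return gain
```

A feature that perfectly separates malware from benign samples scores 1 bit; a feature distributed identically in both classes scores 0, so IG naturally ranks discriminative features highest.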

Image Characterization-Based Vector
In gray-scale-based malware detection and classification models, the gray images of malware and benign samples are generated by dividing the PE files into 8-bit vectors consisting of hexadecimal values. Those values are represented as pixels ranging from 0 to 255, simulating the gradient between black and white for each pixel, in order to represent the malware files as images. The width of the images is predetermined, while the length varies with the file size [105]. Those created images have been utilized in many studies during the representation phase, where the generated vectors comprise the numerical data that characterize the created image.
To generate image-characterization-based vectors and represent the malware files as vectors that describe the created malware images from several aspects, there are various algorithms for obtaining the vector from the generated images, such as the Color Layout Descriptor (CLD), the Homogeneous Texture Descriptor (HTD), and GIST. The vectors generated to describe the constructed images of malware and benign files have to contain distinguishable values, since those values represent several aspects of the images, such as texture and intensity, which are expressed differently in malware and benign files.
In their study [75], the authors converted each byte of the binary malware file into one pixel of the image that represents the file. The authors of [58] integrated the static and dynamic features after visualizing them as two images; both images of the original binary file were encoded into a single image, and the image characterization vector method was then applied to generate a vector consisting of numerical data representing the texture and color of the hybrid image. However, generating the images takes additional time, and representing the files as images requires a large amount of storage space. Additionally, visualization-based models depend on a matching process to classify the testing files; therefore, samples with few variants in the training data find no match and are misclassified.

Weighted Dependency Graph
Because malware detection and classification models developed on individual features or sequences of features are vulnerable to dead-code insertion and instruction-reordering techniques, more complex information, such as feature dependencies (the probability degree of the dependency between each feature and the others), has to be included and learned by the developed models [80]. Therefore, the features' dependencies have been calculated and introduced as weights that describe the dependency between each pair of features in the generated graph to represent the malicious behaviors. Several similarity measures, such as the Maximum Weight Subgraph Algorithm (MWSA), NP-similarity, and Same-similarity, have been used to distinguish between the weighted-dependency-graph-based behaviors. The results of the similarity measures are then compared to thresholds to determine whether the examined behavior is similar to the learned behaviors.
Calculating weights that express the probability degree of the dependency between every two features is an alternative representation method that makes the extracted feature graphs more representative of the corresponding behaviors. The authors of [80] assigned weights denoting the probability that a dependency relation appears in a malware family to represent the behaviors belonging to that family. A weight-based threshold determines which dependency relations in each family's graph are used as common behaviors for that family.
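The family-level weighting and thresholding idea can be illustrated with a small sketch. This is not the exact method of [80]: the API-call names are hypothetical, and the edge weight here is simply the fraction of family samples in which a feature pair co-occurs.

```python
# Illustrative sketch: edge weights encode the probability that a dependency
# between two features appears across the samples of one malware family;
# only edges above a weight threshold are kept as "common" family behavior.
from collections import Counter
from itertools import combinations

def family_dependency_graph(samples, threshold=0.5):
    """samples: list of feature sets, one per malware sample in the family.
    Returns edges whose appearance probability meets the threshold."""
    pair_counts = Counter()
    for feats in samples:
        for pair in combinations(sorted(feats), 2):
            pair_counts[pair] += 1
    n = len(samples)
    return {pair: c / n for pair, c in pair_counts.items() if c / n >= threshold}

family = [{"CreateFile", "WriteFile", "RegSetValue"},
          {"CreateFile", "WriteFile"},
          {"CreateFile", "RegSetValue"}]
common = family_dependency_graph(family, threshold=0.6)
print(common)  # only pairs seen in >= 60% of the family's samples survive
```

A similarity measure between a test file's graph and such family graphs would then drive the classification decision.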

Rule-Based Representation
Some studies in the literature focused on the rule-based representation method for representing malicious software behavior. There are two main mechanisms for generating rule-based behaviors: manual and automatic. Manual rules require significant effort from malware analysts to observe the features by which malicious behaviors are represented, whereas tools such as YARA have been used to generate rules automatically. The constructed rule-based behaviors can be utilized in the detection phase by matching them against the generated rule-based behavior of the test file.
The majority-voting decision-making mechanism has been widely used in previous studies to decide whether a test file's rule-based behavior is more in line with benign or malware rule-based behavior. However, rule-based malware detection and classification models are limited to recognizing only the malicious activities that are represented in their generated rules. The authors of [54] represented malware and benign behaviors using dynamic API calls, API sequences, and network traffic features by building rules, stored in a database, that describe malware and benign behaviors, and then leveraged a majority-voting method when their model matched the extracted behaviors of the testing files against those rules.
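The majority-voting step described above can be sketched in a few lines. This is a generic illustration, not the specific implementation of [54]; each independent rule matcher (e.g., API calls, API sequences, network traffic) is assumed to cast one vote per test file.

```python
# Minimal majority-voting sketch: each rule set casts one label vote,
# and the file is classified by the majority of the votes.
def majority_vote(votes):
    """votes: list of 'malware' / 'benign' labels from independent rule matchers."""
    malware_votes = sum(1 for v in votes if v == "malware")
    return "malware" if malware_votes > len(votes) / 2 else "benign"

print(majority_vote(["malware", "benign", "malware"]))  # malware
print(majority_vote(["benign", "benign", "malware"]))   # benign
```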

Discussion of Feature Representation Methods
According to the existence of each feature in the investigated file, a value of 0 or 1 can be assigned to that feature to represent the examined file using a binary vector. Similarly, instead of the 0/1 values used in the binary vector representation method, the number of occurrences of each feature can be computed to describe the sample using a frequency-based vector. Unfortunately, the set of present features can change from one variant to another that achieves the same malicious activities, and irrelevant features may also be present. Moreover, the frequency-based vector representation method is defeated when malware developers obfuscate their samples using obfuscation techniques.
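The contrast between the two vector types can be made concrete. The feature vocabulary and the API-call trace below are hypothetical, chosen only for illustration.

```python
# Sketch of binary vs. frequency-based vectors over a fixed feature vocabulary
# (here, hypothetical API-call names).
VOCAB = ["CreateFile", "WriteFile", "RegSetValue", "Connect"]

def binary_vector(observed):
    """1 if the feature occurs at all in the sample, else 0."""
    return [1 if f in observed else 0 for f in VOCAB]

def frequency_vector(observed):
    """Number of times each feature occurs in the sample."""
    return [observed.count(f) for f in VOCAB]

trace = ["CreateFile", "WriteFile", "WriteFile", "CreateFile"]
print(binary_vector(trace))     # [1, 1, 0, 0]
print(frequency_vector(trace))  # [2, 2, 0, 0]
```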
To mitigate the weaknesses of binary/frequency-based representation methods, some researchers attempted to assign a weight value to each feature using statistical methods, representing the investigated file as a weight vector. However, converting the text features into numerical features in this way does not capture the contextual relationship among the features.
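One commonly used statistical weighting scheme is TF-IDF; the surveyed works may use other statistics, so this sketch is only illustrative, and the feature names are hypothetical.

```python
# Illustrative weight-vector construction via TF-IDF: features that appear
# in every sample receive zero weight, rarer features are weighted higher.
import math

def tfidf_weights(sample, corpus):
    """sample: list of features; corpus: list of feature lists (all samples)."""
    n = len(corpus)
    weights = {}
    for feat in set(sample):
        tf = sample.count(feat) / len(sample)          # term frequency
        df = sum(1 for doc in corpus if feat in doc)   # document frequency
        idf = math.log(n / df)                         # inverse document frequency
        weights[feat] = tf * idf
    return weights

corpus = [["CreateFile", "WriteFile"], ["CreateFile"], ["Connect", "CreateFile"]]
w = tfidf_weights(["Connect", "CreateFile"], corpus)
print(w["CreateFile"])  # 0.0: present in every sample, so it carries no weight
```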
Moreover, binary files have been represented using vectors of numerical information extracted from the malware and benign images that are generated by converting the bytes into grayscale pixels whose intensities range from 0 to 255. However, generating the images takes a significant amount of time, and storing the behaviors as images requires a large storage capacity. Some researchers exploited rules generated by particular tools, such as YARA, to characterize the features produced during the extraction phase. The rule-based representation method identifies only the harmful behaviors specified in the created rules, which limits the effectiveness of the matching process.

Open Issues
Based on our survey of recently proposed malware detection and classification models, together with a review of the approaches and techniques they employ, the shortcomings and strengths of those approaches and techniques have been specified. Accordingly, this review identifies the open issues and challenges. The following are the principal open issues and suggested research directions.

Obfuscation Techniques
These techniques were first used to protect the intellectual property of software contractors, but malware writers later leveraged them to transform their malware into different forms that are harder to analyze and detect [106,107]. Techniques such as dead-code insertion, register reassignment, instruction reordering, and instruction substitution have been used to alter the characteristics of malware while preserving the same functions [108,109]. According to [87], almost 50% of unknown malicious programs are different variants of old ones, while [75] reported that only 20% of new malware is genuinely unseen, with the remaining 80% consisting of the same malware in different variants. Therefore, producing malware in several variants, each with its own characteristics, causes malware detection and classification models built on signatures, sequences, or frequencies to suffer from poor detection accuracy.
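Why such variants defeat signature-based models can be shown with a toy example: inserting dead code leaves the behavior unchanged but alters the byte-level fingerprint. The instruction strings below are illustrative stand-ins for real machine code.

```python
# Toy demonstration: dead-code (NOP) insertion preserves behavior but changes
# the byte-level signature a hash-based detector would rely on.
import hashlib

original = b"mov eax, 1; ret"
obfuscated = b"mov eax, 1; nop; nop; ret"  # NOPs inserted: same effect

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(obfuscated).hexdigest()
print(h1 == h2)  # False: the stored signature no longer matches the variant
```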

Evasion Techniques
By executing particular operations, evasive malware is capable of recognizing whether it is running in a controlled or a real environment. When the malware discovers characteristics that indicate a controlled environment, it immediately alters its behavior to resemble benign behavior or halts its execution [25]. In addition, evasive malware can identify the nature of the execution environment by using specific information related to sandboxes, debuggers, virtual machines, or monitors, or by waiting for a user action, such as a mouse move or click, as a condition for starting execution [110]. Furthermore, according to [111], due to the usage of anti-analysis techniques, around 1% of scanned malware remains undetectable by 64% of anti-virus scanners after one year. Therefore, malware detection and classification models that obtained their features by running samples in controlled environments experience difficulties detecting evasive malware.

Zero-Day Malware
Previously unseen malware usually harms systems through malicious activities that achieve new attacks. After each new attack carried out by zero-day malware, the malware may remain undiscovered for some time, during which it continues to act as a zero-day threat [112]. Unknown malware exhibits new characteristics to fulfill its purposes, so malware detection models designed on past information are not efficient at detecting zero-day malware [113]. Furthermore, the ratio of malware that employs new strategies to fulfill its goals, i.e., zero-day malware, has increased [32,114]. According to [85], around 350,000 new malware samples are produced daily. Therefore, zero-day malware that presents new characteristics during its attacks is harder to detect using models that recognize malware behaviors based on characteristics obtained from the training data.

Redundancy and Irrelevant Behaviours
Since machine learning techniques have trouble coping with data containing redundant and irrelevant features [115], the presence of such features in datasets is one of the issues facing the malware detection community [50,116]. Indeed, redundant and irrelevant behaviors can significantly raise the operational cost and reduce the accuracy of most learners. Therefore, the task of generating datasets, including the feature extraction and selection phases, is challenging and must be continuously improved in order to improve the overall performance of the developed models.
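A minimal filter-style feature selection step illustrates the idea; real pipelines would use richer criteria (e.g., mutual information), so this zero-variance filter is only a sketch with a made-up feature matrix.

```python
# Minimal filter-style feature selection sketch: drop constant (zero-variance)
# columns, which carry no information for distinguishing classes.
def selected_columns(matrix):
    """matrix: list of rows (samples x features). Keep non-constant columns."""
    cols = list(zip(*matrix))
    return [i for i, col in enumerate(cols) if len(set(col)) > 1]

X = [[1, 0, 3],
     [1, 1, 5],
     [1, 0, 3]]
print(selected_columns(X))  # [1, 2]: column 0 is constant and gets dropped
```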

False Positive/Negative Rate
Malware authors attempt to make their malware accomplish its functions in a way that is consistent with legitimate behavior [117]. Therefore, some characteristics and fingerprints in malicious files and benign samples can be quite similar, leaving several malware detection approaches vulnerable to false positives and false negatives. Although an increase in either rate reduces detection accuracy, false positives are considered more critical than false negatives in effective malware detection models: if a legitimate file is mistakenly identified as malicious on a user's computer, the operating system may become unbootable and other applications may stop working [118].
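The two rates are computed from the confusion matrix as follows; the labels below are a toy evaluation, not results from any surveyed model.

```python
# Computing false positive/negative rates from predicted vs. true labels
# (1 = malware, 0 = benign).
def fp_fn_rates(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / (fp + tn), fn / (fn + tp)  # FPR, FNR

fpr, fnr = fp_fn_rates([1, 1, 0, 0, 0], [1, 0, 1, 0, 0])
print(fpr, fnr)  # one benign file flagged (FPR = 1/3), one miss (FNR = 1/2)
```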

Incremental Learning
Because malware writers develop malicious software daily, malware analysis and detection developers refine their classifiers to include the most modern malicious-activity techniques. During the collection phase, the training data are chosen to represent the behaviors of modern malware by considering the most recent malware files. As a result, older malware behaviors are not captured in the developed models, leaving such malware undetected. Incorporating adequate historical trends of harmful activity during the sample collection phase, so that the models can recognize both recent and older malicious behavior, is a sensitive issue [119]. For example, the model proposed in [58] provided a low detection rate when the testing data contained older malware, because it was trained on the behaviors of the most recent malware samples and thus did not represent the characteristics of older malware behaviors.
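The incremental-learning idea can be sketched conceptually: the model is updated sample by sample rather than retrained from scratch, so behaviors learned from older batches remain encoded in the weights. A toy online perceptron stands in for a real classifier here; the features and labels are made up.

```python
# Conceptual online-learning sketch: one update per incoming sample,
# so both older and newer batches shape the same set of weights.
def update(weights, x, y, lr=0.1):
    """One perceptron update step; y in {-1, +1}."""
    pred = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else -1
    if pred != y:  # only adjust on mistakes
        weights = [w + lr * y * xi for w, xi in zip(weights, x)]
    return weights

w = [0.0, 0.0]
older_batch = [([1, 0], 1), ([0, 1], -1)]
newer_batch = [([1, 1], 1)]
for x, y in older_batch + newer_batch:  # stream old samples, then new ones
    w = update(w, x, y)
print(w)
```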

Future Directions
Even though the existing solutions in the literature have paved the road toward trustworthy malware detection and classification models, evasive malware detection is still challenging. Several approaches have been taken to detect evasive malware, such as generating API-based evasive malware signatures, discovering evasion behaviors using multiple execution environments, and using known evasion techniques to detect evasive malware. However, each of these solutions has its own weaknesses. For example, distinguishing between evasion techniques used in legitimate behavior and malicious-related evasion techniques is still a challenge. Additionally, it is quite difficult for models trained on known evasion techniques to detect and recognize unknown ones. Moreover, using several execution environments without high complexity in terms of time and resources is another challenge.
Although several studies have been conducted to enhance the evasive malware detection rate, there is no available dataset in which evasive behaviors are represented. Therefore, creating an evasive-behavior dataset would support researchers' efforts to produce robust solutions. For evasive malware detection, efficient feature extraction and representation techniques are required to extract and represent a feature set that captures the evasion techniques related only to malicious behaviors. On the other hand, the daily production of zero-day malware and unknown malware variants has greatly increased since the availability of online tools for creating new malware or reformatting existing malware with obfuscation techniques to introduce new variants. Therefore, efficient incremental learning mechanisms are required to enable the developed models to adaptively learn newly arriving behaviors. To this end, deep learning techniques in conjunction with unsupervised machine learning techniques can be designed and implemented to update the learning process and to develop models that adaptively learn new malicious behaviors.

Conclusions
In this survey, we have introduced a comprehensive review of the evolution and trends of malware analysis and detection approaches. In particular, this survey addresses perspectives that are often ignored or only partially studied by previous surveys, such as exploring the usefulness of each data type according to the utilized analysis approach and offering a deep taxonomy of malware detection approaches in which the detection approaches are presented in more detail than the usual signature-based, behavioral-based, and heuristic-based categories. This provides the research community with an opportunity to improve existing malware detection solutions. Additionally, this survey has associated the feature extraction methods with the employed extraction techniques instead of the analysis approaches, in order to highlight the boundary between the data collection and data extraction phases. A novel taxonomy for feature representation methods has also been presented. Finally, the root-cause problems from which each analysis, detection, extraction, and representation approach or method suffers have been investigated to derive the open issues and suggestions for future research directions.

Figure 1. The distribution of the considered papers in the period of time between 2003 and 2022.


Figure 5. Taxonomy of feature extraction and representation methods.


Table 2. The relation between analysis approaches and data types.