Deep Learning Model and Correlation Analysis by User Object Layering of a Social Network Service

Abstract: This paper focuses on preventing forms of social dysfunction, such as invasions of privacy and stalking, by understanding the diversified situations of the rapidly increasing number of users of social media, which comprises various types of social networking services. To prevent these problems, we aim to identify mutual relationships by layering the relationships between social media users. In other words, in social media, a subject user is also an object to the users related to them, so the appearance of the object as viewed by the subject user, and the correlation between subjects and objects, must be visualized. Because the subject is an object that changes over time, a symmetrical and mutual correlation analysis must be performed based on these relationships, through objective layering as viewed by a computer. In this paper, the mutual relationship between subject users and object users was defined and visualized so that it could be applied to a deep learning model through a software program. Among the various types of social media in common use, user information data was gathered from the popular social media site Instagram and our target community platforms. The data was then processed again to represent interactions among users. Finally, three stages of mutual relationship visualization were represented through simulations and tests, and 120,000 data sets were processed, classified, and validated through the simulation results.


Introduction
Social networking services (SNS), in conjunction with a variety of new types of services, have become an indispensable tool for people to communicate with one another. As a result, the number of SNS users is increasing exponentially, and their personal data is becoming a research subject [1]. One characteristic of SNS is that every user is able to participate and provide feedback to each other. In addition, various communities based on SNS are rapidly formed through individual interactions, sharing common interests and gradually developing into various groups [2]. At the same time, SNS creates a vulnerable space that can be easily exposed to the public. This often causes problems such as stalking and malicious posts, which then create further problems within society [3]. For example, users can purposefully provide distorted information, create conflicts between other users, or invade other people's privacy. These negative byproducts are becoming serious public issues [4]. In order to find a solution to this social dysfunction, understanding the dynamic relationships between SNS users is pivotal. It is also meaningful, as a field of study, to deal with the imbalance of data among users in a symmetrical and balanced way. Once a correlation is discovered, a new data processing methodology must be applied to analyze the correlation between users and provide object layering. To collect data for this purpose, all SNS platforms would have to be assessed. However, it has been reported that 2.5 TB of data is generated on social media around the world every second [5]. Accordingly, it is very difficult to collect and analyze all relevant data at once. Therefore, we conducted experiments by adjusting the data collection process through a platform of choice. Once the platform was selected, we needed to understand whether its data is managed internally or externally.
Further, the data from each platform needed to be evaluated for traffic, transmission volume, and transmission speed in order to apply a new data processing methodology for fast and effective results. Based on the data type and criteria, each data type needed to be assessed to prepare the appropriate collection technology and the security systems to protect it. If the data is processed internally, ongoing communication must occur with the data coordinator to specify which types of data will be collected and how often. If it is processed externally, it must be understood whether the data is offered through an open API and what type and quantity of data can be collected [6]. In addition, if data is being collected through crawling, its life cycle and copyright must be considered when creating a collection plan [7]. Depending on the data category, data will be collected either from a database or from drivers provided by the vendor, or collected as files through crawling via an API or HTTP. After the data category and type are understood, the corresponding collection technology can be determined.
In this paper, the popular SNS 'Instagram' and the community platform 'Bobaedream' were selected as the target platforms, and we collected data from their users and processed it to understand the correlations through object layering. The resulting mutual relationship diagram can predict the individual relationships between users in a symmetrically balanced way, and ultimately help prevent social dysfunction on SNS platforms.
The remainder of this paper is organized as follows. Section 2 explains the background of this research, and Section 3 explains object layering among users. Section 4 presents a new data processing methodology, and Section 5 explains the correlation analysis involving object layering of users. Finally, Section 6 concludes the paper.

Background
In general, the emergence of social media has become increasingly important over time, and there are many successful services around us that have recently emerged through commercialization. Alongside them, detailed research and a number of social media software services have appeared together. For example, Facebook, which has a large number of users worldwide, focuses on producing chat, Messenger, advertising, and content services.
Previous research that studied how the SNS experience on smartphones could be improved conducted a test to understand the initial challenges involved in using an SNS platform (i.e., user interfaces including icons, design, and functions) [8]. Based on the results of this study, the authors offered suggestions to improve the Facebook and Instagram experiences [9]. In addition, among the characteristics of social media, the method of expressing and satisfying the desire to show off individual content has evolved into customized advertisements. Global companies such as Facebook and Twitter are interested in social media and analyze its data through deep learning to develop better services, and much of the population is focusing its efforts on communication and the redistribution of content. One study looked into the causes of motor vehicle accidents in metropolitan areas of the United States of America [10]. The data was collected through hashtags (#) on Twitter, and using deep learning, the research successfully analyzed the causes of the accidents. This was one of the few positive results that proved how SNS can deliver information to people, as well as how data can be categorized for such studies [11].
In addition to these positive aspects, there have also been phenomena that have negative effects through social media. For example, personal lives can be affected by random exposure to an unspecified number of people through social media, and financial damage in the community can result from this. These are all issues that can be solved if the entity that develops social media services analyzes data through optimized layering between users.

User Object Layering
A SNS service shares two distinct characteristics of participation and exposure [12]. Every user can participate to share feedback and communicate, while an individual's interests, posts, and comments may be shared and exposed to others. Based on these characteristics, various forms of content created by the users is constantly being shared, and the platforms are becoming an innovative space to promote participation and networking.
A 'Subject user' is someone who initially posts on SNS, and 'Object user' is a respondent to the initial post. This mutual relationship enables all users to communicate, and the role of subject and object users may switch constantly. Thus, these interchanging roles must be understood prior to analyzing the social dysfunction between SNS users and potentially prevent it from occurring.
In Figure 1, a schematic diagram of the relationship between subject and object users is presented, followed by our proposed object layering and deep learning model throughout this article. For data analysis and service application purposes, data collection is an important process that determines the quality of the analysis as well as the service. Thus, the source data must be carefully evaluated in the planning phase, and the difficulty, cost, and safety of the data must be considered as well. External data is another factor to consider, as data analysis often does not involve any internal data processing and instead relies on external data for both collection and analysis. Across these data processing methodologies, one of our development goals is to maintain the symmetry and interaction of each user. In this study, Level 1 (L1) is defined as basic data, Level 2 (L2) as additional data, and Level 3 (L3) as cognitive data [13,14].
Structured data is data in a schematic form consisting of rows and columns; examples are tables from an RDBMS and spreadsheets from Microsoft Excel. Unstructured data, on the other hand, is analyzable data in object form, such as text, multimedia (e.g., images and video), and HTML; text mining is the most well-known analysis involving unstructured data. Semi-structured data is data with structure where each unit of data may have a different structure, so its meta-data must be analyzed to study the data patterns; examples are HTML, XML, and JSON. Figure 2 is an example of Layer 1 (L1) being used for object layering, representing JSON data from unstructured data.
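As a minimal illustration of how a single post might be held as L1 (basic) data in JSON form, the sketch below parses one such record with Python's standard library; the field names here are hypothetical, not the exact schema used in the study.

```python
import json

# A hypothetical L1 (basic data) record for one post, stored as JSON text.
# Field names are illustrative only, not the study's actual schema.
post_json = """
{
  "user": "example_user",
  "text": "Weekend trip!",
  "hashtags": ["#travel", "#weekend"],
  "likes": 120,
  "comments": [
    {"user": "friend_a", "text": "Looks great"}
  ]
}
"""

post = json.loads(post_json)  # parse the unstructured JSON text into a dict
print(post["user"], len(post["comments"]))
```

Because each JSON object is self-describing, records with different structures can sit in the same file, which is what makes this semi-structured form convenient for a NoSQL store.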

Data Gathering Process
When collecting data, different technologies apply to each type and shape of data; examples include crawling, ETL, log collection, FTP, HTTP, and RDB [15]. In general, data collection and storage are performed through a DBMS, utilizing SQL to bring in structured data. If data originates from external sources, it comes as unstructured data in script form, often requiring separate script development. In this case, the HTTP protocol can be used to scrape the text portion of the file to process its meta-data.
There are two main ways to collect big data, which are through open API and web crawling. This is due to the fact that data from social media is mostly collected from external sources.
As shown in Figure 3, the basic data, Layer 1 (L1), collection process occurred through data selection, detailed collection planning, and pilot testing. Data collection ordinarily occurs in sequence: if a problem occurs in the middle of the process, applying the result may be challenging, and the process might need to restart from the beginning. In order to prevent such interruptions, a detailed planning process is pivotal. Thus, our three-step process included data selection, detailed collection planning, and pilot testing [16].
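Scraping the text portion of an externally fetched HTML document can be sketched with Python's standard-library parser; the markup below is a stand-in, not real SNS page structure.

```python
from html.parser import HTMLParser

class TextScraper(HTMLParser):
    """Collect visible text from HTML, skipping script/style content."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True  # ignore non-visible content

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# Stand-in page content; a real crawler would fetch this over HTTP.
html = "<html><body><p>First post</p><script>var x=1;</script><p>#hashtag</p></body></html>"
scraper = TextScraper()
scraper.feed(html)
print(scraper.chunks)  # ['First post', '#hashtag']
```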

Data Selection Process
In data collection, data must be considered in various perspectives, including whether the data can be easily collected, its security, accuracy, challenges, and price [17]. The first consideration is whether data can be collected, regardless of the quality of the data.
Whether targeted data has personal information or copyright should be reviewed as well, otherwise the release of the result as well as application of the findings may be jeopardized. In addition, any potential security issue must be discussed with the owner or manager of each dataset. Accuracy of data depends on the purpose of the collection, and the process should clarify that applicable data is being collected. Even if the original data does not include applicable data, post-processing must be discussed, as it can potentially yield the desired data. Lastly, difficulty and price of the data collection and process are important factors to be considered.
During the data selection process, web crawling with a short collection interval was interrupted by blocking errors or request-calling errors, so the timing and method had to be reassessed to find a sustainable collection interval. The format in which several object users reply to content first posted by a user in the subject position on SNS may result in a data imbalance. At the same time, from the object user's point of view, the SNS role changes back to the subject position, so a complementary and symmetrical position must be considered. The web browser was controlled through Selenium, which introduced delayed calls and responses as well as other errors [18]. Furthermore, unexpected errors can occur because the HTML contents vary with the size of the web browser; thus, a standardized collection process was required. This particular problem was due to the fact that the button for moving to the following page may disappear depending on the size of the web browser, so the collection process was modified to approach each post directly to collect its data. Ultimately, the process had to be maximally simplified and categorized to control the potential errors.
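One common way to survive blocking and request-calling errors is to wrap each fetch in a retry loop with a widening delay; a minimal sketch, with a simulated flaky endpoint standing in for the real crawler call:

```python
import time

def collect_with_retry(fetch, retries=3, base_delay=1.0):
    """Call fetch(); on failure, wait with exponential backoff and retry.

    Mirrors the idea of widening the collection interval after a
    blocking or request-calling error during crawling.
    """
    for attempt in range(retries):
        try:
            return fetch()
        except RuntimeError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# Simulated flaky endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("request blocked")
    return "post data"

result = collect_with_retry(flaky_fetch, base_delay=0.01)
print(result)  # post data
```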

Pilot Test
As previously mentioned, data selection and detailed collection planning had to be followed by a pilot test when we created each user's layering. First, basic data (L1) collection was performed on Instagram, the social networking service with the second most active users in the world. On this platform, users share images, text, and hashtags (#) to create social relationships on the Internet. Through Selenium, an automated data gathering tool, we acted through a web browser as a user on Instagram would, and displayed data from recent posts as well as behavioral data.
The displayed information had multiple sections, including images, the posting user, posted text information, hashtags (#), comments, emoticons, posting information, and post dates, including how often they were posted.
To collect data from each section, the find method was used to collect data based on tag and class information. The class name '_ezgzd' was utilized to collect the user name and posted information with hashtags (#). Figure 4 demonstrates the data collection plan, which describes how we planned to extract information from the collected usernames, posted contents, and hashtags, as well as the main text, image, and video data. Figure 5 demonstrates the data collection tag locations on the actual screen, and Figure 6 lists the collectable information by HTML tags.

Name of Class Functions
class 'sxolz': Accesses the image and video areas and extracts the saved links for images and videos.
class '56pd5': A button that makes an asynchronous call to load hidden comment information; it must be clicked repeatedly until the button disappears to bring up all comments.
class 'evcx9': The number of users who clicked "Like" on a post is stored in text format, so processing into a number format is necessary.
class 'didmk': Text data in which the date the post was registered is stored in a PostingDate format can be extracted.
class '3a693': Goes to the next post to gather information by finding and clicking the next-post button.

At the same time, the collected information was large in scale, and each attribute had multiple values. As we planned to save the collected data in a database such as MongoDB, it was saved as unstructured data, which has more structure and flexibility than structured data. As shown in Figure 6, unstructured data has data types such as XML and JSON; this research used JSON, which allowed an easier approach to creating a structure through Python. The collected data had multiple JSON objects, and each JSON object included the information from an individual post.
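Since the like count in the 'evcx9' area arrives as display text rather than a number, it has to be converted before analysis; one possible helper, with its format assumptions noted in the comments:

```python
def parse_like_count(text):
    """Convert a display string such as '1,234 likes' to an int.

    Assumes the count is the first whitespace-separated token and may
    contain thousands separators; the exact on-screen format may differ.
    """
    token = text.strip().split()[0]      # e.g. '1,234'
    return int(token.replace(",", ""))   # drop the thousands separators

print(parse_like_count("1,234 likes"))  # 1234
print(parse_like_count("87 likes"))     # 87
```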

Figure 7 shows how the unstructured JSON data is saved in a NoSQL DB; one file includes multiple JSON objects, while each JSON object consists of the data from an individual post.

As a result, the basic collection process was divided into three steps, which enabled a simplified and categorized approach with a decreased risk of errors. To collect information from users on Instagram, the HTTP protocol was used through URLs. The primary collection method was a repetitive process moving from each post to the next, while the secondary process collected the ID values of each post before the other processes, as the primary collection presented multiple challenges. The secondary process worked by operating a script in Selenium, scrolling down to the end of the web browser, and collecting all the ID values first. Once the ID values were collected, each post was recalled, similar to the primary collection process.
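The two-phase secondary collection can be sketched as a pair of functions: gather every post ID from the scrolled pages first, then revisit each ID. The page lists and the fetcher below are stand-ins for the Selenium-driven browser.

```python
def collect_ids(scroll_pages):
    """Phase 1: walk the scrolled result pages and gather every post ID."""
    ids = []
    for page in scroll_pages:
        ids.extend(page)
    return ids

def collect_posts(ids, fetch_post):
    """Phase 2: revisit each post by ID and collect its data."""
    return {post_id: fetch_post(post_id) for post_id in ids}

# Simulated scroll results and a stand-in fetcher (a real run would
# drive Selenium here instead).
pages = [["p1", "p2"], ["p3"]]
posts = collect_posts(collect_ids(pages), lambda pid: {"id": pid, "text": "..."})
print(sorted(posts))  # ['p1', 'p2', 'p3']
```

Separating the phases means a failure while fetching one post loses only that post, not the scroll position, which is what made this approach more robust than the primary method.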

Result and Discussion
To analyze the social network, modeling was performed to create relationships between nodes and links, which helps to analyze the structure, expansion, and evolution of the findings. Nodes represent the objects to be analyzed, such as individuals or objects, whereas links demonstrate the relationships between the nodes, which may or may not have directionality [19].
In SNS, subject users and object users can constantly interchange roles; thus, we wanted the artificial intelligence to learn this volatile relationship and provide values that keep a symmetrical balance. First, objects must be layered in order to analyze their correlation [20][21][22][23]. As the collection process requires Selenium and the WebDriver API to open a web browser, we monitored the process through the web browser. However, because the web browser was involved in data collection, the overall speed was slow, and closing the web browser stopped the collection process, which resulted in the loss of the entire working memory. In addition, an unstable internet connection also forced the process to stop.
No other technical difficulties were discovered. Thus, we proceeded to select public influencers, as their accounts clearly represent subject and object user information. This is an important factor, as the theoretical background applied a multi-layer perceptron, which predicts results after the deep learning process [24]. When this occurs, it is pivotal to provide maximal space for exception handling so that the entire process does not require a manual reset, even when problems occur.
For our first pilot study, we randomly selected two public entertainers or influencers with a high number of followers. As a result, the SNS influencers named 'Jae Woo Kim' and 'Chan Woo Cheong' were selected, and their Instagram posts and comments were collected. The two individuals are known to have an actual relationship as well as a relationship on social networking services; thus, we expected to show this through our analysis. When nodes and links are used to construct a correlation diagram for object layering purposes, the following factors must be considered: degree centrality (Cd), closeness centrality (Cc), betweenness centrality (Cb), and eigenvector centrality (Ce). Degree centrality shows how many links are connected to a single node, quantifying the centrality of the node [25]. Closeness centrality presents the proximity between nodes, calculating centrality through both direct and indirect relationships with other nodes; by contrast, degree centrality is calculated based only on direct relationships. Closeness centrality represents the sum of the minimal steps needed to reach from one node to another [26]. Betweenness centrality measures the number of roles a node has as a mediator, and a high value is observed when the node is located between many different nodes [27]. Lastly, eigenvector centrality places a focus on connections to 'important nodes'; thus, a high value is measured when the node is connected to multiple important nodes [23].
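Degree centrality, for instance, can be computed directly from an adjacency list; a pure-Python sketch on a toy graph (the study itself used R's igraph, so this is only an illustration of the measure):

```python
def degree_centrality(adjacency):
    """Normalized degree centrality: degree / (n - 1) for each node."""
    n = len(adjacency)
    return {node: len(neighbors) / (n - 1)
            for node, neighbors in adjacency.items()}

# Toy undirected graph: A is linked to everyone; D is linked only to A.
graph = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B"],
    "D": ["A"],
}
centrality = degree_centrality(graph)
print(centrality["A"])  # 1.0 -> A is directly linked to every other node
```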
For the final product of social network analysis and visualization, the following analytical tools can be used: R, NetDraw, Gephi, NodeXL, NetMiner, Pajek, and more. In this research, the igraph package from R was utilized, and the resulting correlation diagram can be used for modeling an actual or artificial network.
The data for this diagram were created through HIVE, and this process requires two tables representing the relationships between the nodes as well as information on the nodes. Of these two tables, the link table should include from_ID, to_ID, weight, and score, while the node table should include ID and group information, without the redundant use of from_ID and to_ID. Figure 8 shows an example of the data used to construct the relationship diagram, and shows how the subject and object users can be differentiated. This information was further used to generate a directional relationship calculated from the amount and positivity of the comments. Figures 9 and 10 represent the two tables of nodes and links, showing the relationships between nodes. In order to analyze the correlation in user object layering, the nodes and links were visualized. This visualization assisted the deep learning process, analyzing the relationship between subject and object users, as well as the relationship between a new object user (i.e., previously a subject user) and other object users. Therefore, the process of correlation analysis of user object layering can be described in three steps. Figure 11 presents the process of object layering expressing the user relationships, where the node relationships are shown with the node's color, size, and arrow size.
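The link and node tables can be sketched as follows, deriving the node table from the distinct from_ID/to_ID values so that no ID is stored redundantly; the IDs, weights, scores, and groups here are made up for illustration.

```python
# Hypothetical link table: one row per directed comment relationship.
links = [
    {"from_ID": "u1", "to_ID": "u2", "weight": 3, "score": 0.8},
    {"from_ID": "u2", "to_ID": "u1", "weight": 1, "score": 0.5},
    {"from_ID": "u3", "to_ID": "u1", "weight": 2, "score": 0.9},
]

# Node table: every distinct ID, without repeating the from_ID/to_ID pairs.
ids = sorted({row["from_ID"] for row in links} | {row["to_ID"] for row in links})
nodes = [{"ID": i, "group": "subject" if i == "u1" else "object"} for i in ids]
print([n["ID"] for n in nodes])  # ['u1', 'u2', 'u3']
```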
In order to analyze the correlation between user object layering, nodes and links were visualized. This visualization assisted the deep learning process, analyzing the relationship between subject and object users, as well as the relationship between a new object user (i.e., previously, a subject user) and other object users. The data for this diagram were created through HIVE, and this process requires two tables: one representing the relationships between the nodes and one containing information on the nodes. Of these two tables, the link table should include from_ID, to_ID, weight, and score, while the node table should include ID and group information, without the redundant use of from_ID and to_ID. Figure 8 shows an example of the data used to construct the relationship diagram and shows how the subject and object users can be differentiated. This information was further used to generate a directional relationship calculated from the amount and positivity of the comments. Figures 9 and 10 represent the two tables of nodes and links, showing the relationship between nodes.
Therefore, the process of correlation analysis of user object layering can be described in three steps. Figure 11 presents the process of object layering expressing the user relationships; the relationships between nodes were shown with the node's color, the node's size, and the size of the arrow. The width of the link represents the weight, whereas the preference based on the score was visualized with link colors.
Figure 11. Sequential process and diagram of the object layering step for expressing each user's relationship.
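As a concrete illustration of the two-table layout, the following Python sketch builds a hypothetical link table and derives the node table from it. The paper's actual tables live in HIVE; the IDs, values, and grouping rule below are assumptions for illustration only.

```python
# Minimal sketch of the two tables the relationship diagram is built from:
# a link table (from_ID, to_ID, weight, score) and a node table (ID, group).
# All sample values are hypothetical.
links = [
    # from_ID, to_ID, weight (amount of comments), score (positivity)
    {"from_ID": "user_a", "to_ID": "user_b", "weight": 12, "score": 0.8},
    {"from_ID": "user_b", "to_ID": "user_c", "weight": 3,  "score": -0.2},
    {"from_ID": "user_a", "to_ID": "user_c", "weight": 7,  "score": 0.5},
]

def build_node_table(links):
    """Derive the node table from the link table: every ID that appears as
    from_ID or to_ID becomes exactly one node row (no redundant entries)."""
    seen = {}
    for link in links:
        for key, group in (("from_ID", "subject"), ("to_ID", "object")):
            node_id = link[key]
            # The subject/object grouping rule here is an assumption;
            # the first role a user is seen in determines its group.
            seen.setdefault(node_id, {"ID": node_id, "group": group})
    return list(seen.values())

nodes = build_node_table(links)
print([n["ID"] for n in nodes])
```

Deriving the node table from the link table in this way guarantees the "no redundant use of from_ID and to_ID" property the text requires: each user ID appears exactly once regardless of how many links reference it.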
The first step starts with displaying the relationship between different nodes; the accompanying source uses the node's color, the node's size, and the size of the arrow to convey the correlation of the nodes. Secondly, interest was presented through the width of the link, adding preference to the second source by designating the weight to represent the link's size. The third step was to show the preference through the score by designating the link's color, which was added on top of the second source. Figure 12 shows the source used to describe a relationship between the nodes for visualizing the correlation. As dynamic relationships between subject and object users in social media cannot be established with a fixed value, user object layering must be carefully performed to complete a deep learning model.
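The three styling steps can be sketched as plain attribute mappings. The actual source in Figure 12 is R code that is not reproduced here; the scaling constants and color thresholds in this Python sketch are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of the three styling steps: node color/size (step 1),
# link width from weight (step 2), link color from score (step 3).
def style_node(degree, group):
    """Step 1: node size grows with its number of relationships;
    node color distinguishes the subject/object group."""
    size = 5 + degree                          # base size 5 is assumed
    color = {"subject": "steelblue", "object": "orange"}.get(group, "gray")
    return {"size": size, "color": color}

def style_link(weight, score):
    """Steps 2 and 3: wider link = stronger interest (weight);
    link color encodes preference (score)."""
    width = 1 + 2 * weight                     # linear scaling is assumed
    if score >= 0.5:
        color = "red"                          # strong positive preference
    elif score >= 0:
        color = "yellow"                       # mild preference
    else:
        color = "black"                        # neutral or negative
    return {"width": width, "color": color}

print(style_node(4, "subject"))
print(style_link(3, 0.8))
```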
In the following Figure 13, all data from the two previously mentioned public influencers went through object layering, and the correlation analysis is shown. Using the sources from Figures 9-11 results in Figure 13, the first visualization step showing the correlation. The posting data of each user form user-centered nodes, and the relationship between nodes is shown in the form of a solid line; the more relationships there are, the thicker the solid line becomes.
After that, Figure 14 shows the visualized data focusing on the interest obtained during the object layering correlation analysis. This interest, together with a specific algorithm, was registered as copyrighted intellectual property [22-24].
Similarly, Figure 15 shows the visualized data focusing on the attraction. The interest may allow both subject and object users to have their own relationships, so the color of the link was added. This addition resulted in the modification of nodes and links, such as the width of the link. As links become more complex, the color of the solid lines connecting them becomes darker: the progression begins with black, turns yellow and then red, and darkens toward a deep red.
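That darkening progression can be sketched as a simple color ramp. The bucket size and color names below are illustrative assumptions; the paper's actual mapping from link complexity to color is not specified numerically.

```python
def link_color(connections):
    """Map a link's number of connections to the described progression:
    black -> yellow -> red -> deep red as links become more complex.
    The bucket size of 5 is an assumption for illustration."""
    ramp = ["black", "yellow", "red", "darkred"]
    index = min(connections // 5, len(ramp) - 1)
    return ramp[index]

print(link_color(12))
```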
In addition, both the interest and attraction from Figures 13-15 were assigned with specific algorithms and registered as copyrighted intellectual property [13,14]. Since the correlation from object layering changes irregularly based on the node, the deep learning model experiences challenges in visualizing a user's data. Thus, this research focused on utilizing width and color for nodes and links. However, relationship diagrams created through igraph did not clearly show the interest and attraction. To resolve this problem, another R package, networkD3, was used as well. The networkD3 package is based on the htmlwidgets package, and it allows visualization of D3 networks [28]. This package also offers an igraph_to_networkD3 function, which uses igraph objects to create a network.
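In the D3 format that networkD3 consumes, links reference nodes by zero-based index into the node table rather than by name. The following Python sketch mimics the kind of conversion igraph_to_networkD3 performs; the real R function's internals differ, and this only illustrates the index mapping.

```python
# Convert an edge list with string IDs into the zero-indexed
# nodes/links structure a D3 force layout expects.
def to_d3(edges):
    """edges: list of (from_id, to_id) pairs with string IDs."""
    ids = []
    for a, b in edges:
        for node in (a, b):
            if node not in ids:               # preserve first-seen order
                ids.append(node)
    index = {node: i for i, node in enumerate(ids)}
    links = [{"source": index[a], "target": index[b]} for a, b in edges]
    nodes = [{"name": node} for node in ids]
    return nodes, links

nodes, links = to_d3([("user_a", "user_b"), ("user_b", "user_c")])
print(links)  # [{'source': 0, 'target': 1}, {'source': 1, 'target': 2}]
```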
In addition, the igraph_to_networkD3 function may not create any image, but it is capable of extracting the parameters used in the forceNetwork function, which is used for plotting the network. This function creates a D3.js force-directed network graph from two data frames: one containing information on the network nodes and the other on the network links. Figure 16 explains the forceNetwork function's options, and Figure 17 shows the result of the forceNetwork function from networkD3. Additionally, Figure 17 shows an actual screen capture of real data collection on Instagram. It is the result of using data collected from social media, constructing relationship diagrams through the networkD3 and igraph packages. Based on the amount of data and the speed of irregular changes within subject and object users, it was difficult to analyze their correlation. Therefore, Figure 17 depicts how actual data from social media may be collected and processed; this is an example of this correlation being studied through the deep learning process. Figure 18 shows the classification of elements that can be extracted from raw data after data had been collected from one of the public entertainers, or influencers, 'Chan Woo Cheong', who is famous on both TV and SNS. This is actual data from his Instagram, and each string was classified by the tab separator ('\t'), as explained in the deep learning methodology [29]. The data consist of the actual post, hashtag, tag IDs, and posting IDs as one entity.
In addition, other data that can be extracted are also presented, such as the posting ID, posting date, posting time, uploading ID, post texts, public influencer hashtag, commenter IDs, and comments.
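A small Python sketch of this classification step splits one raw record on the tab separator. The field order and the sample line are assumptions for illustration; real records were gathered by crawling Instagram.

```python
# Classify one tab-delimited raw record into named elements,
# mirroring the element classification shown in Figure 18.
FIELDS = [
    "posting_ID", "posting_date", "posting_time", "uploading_ID",
    "post_text", "hashtag", "commenter_ID", "comment",
]

def classify_record(raw_line):
    """Split one '\t'-delimited line into a dict of named elements."""
    values = raw_line.rstrip("\n").split("\t")
    return dict(zip(FIELDS, values))

# Hypothetical sample record (not actual collected data).
sample = "p001\t2021-05-01\t13:45\tinfluencer01\tHello!\t#travel\tfan02\tNice!"
record = classify_record(sample)
print(record["posting_date"])  # 2021-05-01
```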
Symmetry 2021, 13, 965 13 of 15
Figure 18. Classification of elements that can be extracted from raw data.

Conclusions
As social network services currently play a very important role in our society, collecting and analyzing vast amounts of data is an important step for the Deep Learning of Artificial Intelligence. An insightful solution is needed, because collecting data from social networking service platforms without first planning how to analyze it can be very ineffective. New platforms and services can be built through Deep Learning by applying mesh-up algorithms through different layering of subjects and objects between users, so that the analyzed data can at the same time be safely used in novel social network services.
In this research, we provided a method of performing user object layering, showed how to analyze the relationship between subject and object users, and showed how to establish the relationship between new object users (i.e., previous subject users) and other object users. This is because the subject in the novel network service is a group of objects surrounding the subject user, and when another layer is added, a special relationship between users with different viewpoints and purposes is formed, as shown in the various presented figures.
The impact of novel social network services around the world arises as part of symmetrical relationships with the emotions of many people, so it is necessary to analyze them with Artificial Intelligence. The most commonly used 'Instagram' and 'Bobaedream' were selected as target SNS platforms, and the results were visualized by collecting, processing, and analyzing data from the accounts of public influencers. Novel social network services that can be reinterpreted from the perspective of subjects with different viewpoints (that is, users who were former objects and become subjects again) were combined to cover all events and the object user layering that can occur in countless, symmetrical, relationship-oriented networks. This process was implemented with algorithms and Deep Learning. In addition, one data collection tool, Selenium, was used to automate programs and mimic the manual use of web browsers, taking advantage of multiple data crawling methods.
In conclusion, based on the data collected from social media, we showed the correlation between interest and attraction in the newly formed network of subjects that become objects again, i.e., users who are both objects and subjects at the same time. Through this research, a foundation was created for building a social network service that can be used with confidence in a safe cyberspace. Future research will focus on preventing social dysfunction by recognizing and blocking potentially problematic user relationships with a mesh-up algorithm.