SAO-Based Semantic Mining of Patents for Semi-Automatic Construction of a Customer Job Map

The Outcome-Driven Innovation (ODI) method, based on the ‘Jobs-to-be-done’ concept, is very useful for identifying unmet customer needs and has been widely adopted in industry. The Job Map, a tool of the ODI method, is used to understand customers by defining their behavioral process. Several complications must be overcome before the Job Map can be applied to a specific problem: the process is time-consuming, it requires handling a large amount of data, and it depends on experts' subjective judgment. To solve these problems, this study develops a patent mining method based on the subject-action-object (SAO) structure that supports the creation of a Job Map by semi-automating data collection and analysis. This effort to better utilize computers in customer analysis for product design will contribute to expanding computerized methods for solving design and engineering problems in practice.


Introduction
A critical step in new product design and development is the identification and analysis of unmet customer needs [1]. The Outcome-Driven Innovation (ODI) method [2], based on the 'Jobs-to-be-done' concept, has been used to identify unmet customer needs. The method attempts to discover the fundamental problem that customers want to solve. This approach is becoming an effective way to guide the innovation of services and products [3,4,5]. In particular, the Customer-Centered Innovation Map (Job Map), a tool of the ODI method, is widely used as a way to understand customers by defining their behavior [6,7,8].
In ODI, the 'Job' is defined as the fundamental goal customers are trying to accomplish or problems they are trying to solve in a given situation [4,9]. The Job Map visualizes the entire process by which a customer uses a product or service to accomplish these Jobs. The Job Map consists of the action elements that are performed for each process. This detailed analysis provides opportunities to use the Job Map to help users to find a new opportunity for innovation with a new perspective by considering the whole fundamental process rather than just focusing on existing functions of the product or service [6].
Three complications must be overcome before the Job Map can be applied in the field: (1) to apply the Job Map method, the user must expend considerable effort to build it before it can be used to understand customers. Existing ODI studies [3,4,6,9,10,11] provide general guidelines for the process of creating a Job Map, but the process is very complicated because it entails collecting and analyzing a large amount of raw data on customers; moreover, such data are often qualitative. (2) The Job Map depends greatly on the tool user's capability and background, so its use can differ significantly across users even if the goal is the same. (3) The Job Map involves numerous variables, so the user often has difficulty responding appropriately to each case. To solve these problems, this research proposes a patent data mining method based on the subject-action-object (SAO) structure to support the creation of a Job Map by semi-automating the data collection and analysis.
A Job Map abstracts the customer's detailed Jobs and behavioral process. This approach is closely related to functional analysis, one of the methods used in product innovation. A Job Map represents the steps a customer takes to use a product or service; each behavior is expressed in the form [action-object] (AO). This representation is similar to the [subject-action-object] (SAO) structure used in the theory of inventive problem solving (TRIZ, from its transliterated Russian acronym) and in function analysis research for the abstraction of key concepts [12,13]. SAO analysis abstracts content into SAO-type structures to organize and express large and varied amounts of document data effectively [14,15]. The SAO structure is also appropriate for representing the contents of a document from a functional view [13,16,17]. Recently, many studies have used text mining techniques for automated content analysis and product design [13,18,19,20]. The SAO structure is advantageous in text mining for analyzing massive amounts of technological document data or web data, and has been adopted in studies such as trend analysis, technology analytics, and patent analysis. By using the SAO structure, a user can automate the collection, analysis, and organization of a large amount of data when defining the customer's action steps and detailed Jobs in the Job Map. This efficient approach can solve the three aforementioned problems of Job Map research: the effort of collecting and analyzing a large amount of data is reduced, and the variability arising from users' different capabilities and backgrounds can be controlled.
This paper presents a semi-automated method to create a Job Map by using SAO analysis of patent documents related to products. Using this computerized method, users can create the Job Map effectively and can use the ODI method efficiently to support product and service innovation. While there have been several studies on facilitating the identification and use of customer needs [21,22], our review of the design and engineering literature revealed a surprising lack of work directed at providing methods to help researchers and practitioners identify customer needs in an automatic way, despite its significance in this data-rich economy. This study contributes to both literature and practice by addressing this gap.
The structure of this paper is as follows. Section 2 introduces related work in ODI, Job Map, and SAO-based patent text mining research. Section 3 proposes the process of the semi-automated method to create the Job Map. Section 4 presents a case study of a cleaning job to explain and verify the details of the method. Finally, Section 5 discusses the implications of this study and suggests future work.

Outcome-Driven Innovation and Job Map
ODI [4,9] assumes that customers use a product or service for the purpose of performing a specific Job. The Job is defined as "the fundamental problem that a customer needs to resolve in a given situation" [4]. The Outcome is a measure used as a basis to evaluate how well the customer has accomplished that Job. The customer does not consume the product or service for its own sake; rather, the customer uses it to complete a Job, and thereby obtains value. The Job itself remains stable over a long time while it creates value for the customer. ODI attempts to innovate based on the Outcomes that indicate how well these Jobs are performed. This trait allows companies to increase the likelihood of innovation by helping customers get their Jobs done.
A customer performs several actions to complete the Job. For example, "to clean the dust on the floor", the entire process that the customer undertakes to achieve the job must be addressed. Defining the overall execution process of the Job enables an effective understanding of what products and services a customer is using. Clarifying the execution process of the Job at each step can suggest a possible development direction for the effective execution of the Job, and thereby help users discover an opportunity for the innovation of products or services [6,10,23].
However, defining the Job and its execution process requires a large amount of information analysis and experience related to the customer. A Job Map represents the process that is required to complete the Job; the map is developed by analyzing various products and services [2,11]. The Job Map generally defines the execution process of the Job as eight steps ('Define', 'Locate', 'Prepare', 'Confirm', 'Execute', 'Monitor', 'Modify', and 'Conclude') or nine steps (a 'Resolve' step is added after 'Monitor') (Figure 1). Previous studies provided a basic framework (Job Mapping) to define the steps of a Job's execution process when creating a Job Map [6,9,10]. However, when applying this method to innovate products or services, the Job Mapping task may be difficult: it is based on the subjective judgement of experts and is very time-consuming. To construct a Job Map for a product or service, the relevant customer behavior must be derived, and the relationships among actions must be defined; to accomplish these tasks, a large amount of data must be collected and analyzed. Furthermore, the process must be conducted by an expert who has a strong understanding of ODI.
Until now, studies have applied the Job Map directly or indirectly to the innovation of products or services [3,8,23], but research to solve the problems of Job Mapping (insufficient guidelines, dependence on expert competency, and time-consuming work) has been insufficient. Existing research assumes that the development process of a Job Map depends on the knowledge of experts and does not consider automating the extraction of Jobs from data. To solve these problems, this research defines each execution process of the Job Map, uses patent data related to products to reduce time-consuming tasks, and helps experts increase the reliability of Job Mapping.

SAO-Based Patent Text Mining
'Function' has been defined in several ways [24,25,26]. In this research, a function is an expression that abstracts the aims and methods of technology and business information. Altshuller invented TRIZ, a method that uses this functional approach to abstract and generalize patents [27,28]. Other research fields, including Value Engineering, Engineering Design, and Product Design, also apply the functional approach [26,29,30]. A subject-action-object (SAO) grammatical structure is used to express functional information effectively. This structure uses the action to express the relationship between the subject and the object. The SAO structure encodes a key concept and shows a means-objective relationship [12,15,16].

The natural language processing (NLP) method is a technology that supports the development of intelligent services by automatically analyzing data recorded as text [12,31]. NLP converts human language to morphemes, phrases, and sentences, and then analyzes these components to extract meaning. NLP can efficiently extract SAO structures from a large amount of document data. NLP based on the SAO structure has been used for patent analysis [12,14,19]; the technique has been used to write a technology roadmap automatically, to discover an innovative technology opportunity, and to understand technology trends by text mining of many patent documents. Other studies have applied NLP and the SAO structure to analyze technical documents and web documents [18].
An example of SAO extraction using NLP is shown in Figure 2. The SAO extraction method has been extensively studied in the conventional NLP domain. In particular, rule-based and stochastic approaches are used for Part-Of-Speech (POS) analysis [32,33,34,35]. In recent years, with the application of deep learning approaches to NLP research, summarization of document contents by using the Subject-Verb-Object structure, which is the same as the Subject-Action-Object structure, has become an important research topic in natural language processing [36,37].

Original Sentence:
The MG-Si powder is melted by the high-temperature plasma and sprayed onto the base substrate 11.

Extracted SAO Model:
Subject: high-temperature plasma; Action: melt; Object: MG-Si powder

However, text mining has not yet used the SAO structure to analyze and understand customers. Because the expression structure used for Job Mapping is the AO structure, the SAO technique can be used to automate Job Mapping by analyzing many documents on customer behaviors. For example, Figure 3 compares a Job Map representation created by an expert with SAO structures extracted from patent documents that describe a cleaning process; the two are equivalent in expressive form and semantics (e.g., 'use the cleaner machine'; 'operate cleaning device'). Thus, the latter can be used in place of the former. Our study starts from this point and automates the Job Map creation process through SAO-based patent mining.
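For illustration, the extraction of an SAO triple from the passive-voice sentence above can be sketched as follows. This is a toy pattern-based extractor, not the parser used in this study; a real system would rely on full dependency parsing, and the regular expression here covers only one simple sentence shape.

```python
import re

# Toy SAO extractor for simple passive-voice patent sentences of the form
# "The <object> is/are <verb>ed by <subject> ...". Illustrative only.
PASSIVE = re.compile(
    r"^The\s+(?P<obj>.+?)\s+(?:is|are)\s+(?P<verb>\w+?)(?:ed|en)\s+by\s+"
    r"(?P<subj>.+?)(?:\s+and\b.*)?[.]?$"
)

def extract_sao(sentence):
    """Return a (subject, action, object) triple, or None if no match."""
    m = PASSIVE.match(sentence.strip())
    if not m:
        return None
    return (m.group("subj"), m.group("verb"), m.group("obj"))

sao = extract_sao(
    "The MG-Si powder is melted by the high-temperature plasma "
    "and sprayed onto the base substrate 11."
)
# → ('the high-temperature plasma', 'melt', 'MG-Si powder')
```

Note how the passive construction inverts the grammatical roles: the sentence subject "MG-Si powder" becomes the SAO object, and the by-agent becomes the SAO subject.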

Method
For Job Mapping, the overall process of the semi-automated method proposed in this study consists of three phases (Figure 4). The first phase is to use International Patent Classification (IPC) codes to guide the collection of patent sets of the product group that is related to the Job, then to use NLP to extract the SAO structures from each patent document. The second phase is to apply rules to the candidate SAO structures to first select valid SAO structures for Job Mapping, then to apply action-selection rules, object-selection rules, and domain-knowledge rules to generate AO lists that can represent the detailed customer behaviors of the Job Map. The third phase is to apply verb-mapping rules and object-mapping rules to the selected AO lists to develop the Job Map. Finally, the user who performs the Job Mapping completes the Map by reviewing and checking it.
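The three phases above can be sketched as a minimal end-to-end skeleton. All function names, data, and rules below are illustrative stand-ins (not the software used in this study): Phase 1 is reduced to reading pre-split triples, and the rule sets are toy examples.

```python
# Minimal skeleton of the three-phase method; each phase is a stand-in.

def extract_saos(patent_text):
    # Phase 1 stand-in: pretend each line is already an "S | A | O" triple.
    return [tuple(line.split(" | ")) for line in patent_text.splitlines()]

def select_candidate_aos(saos, customer_nouns):
    # Phase 2 stand-in: keep (action, object) pairs with customer-related subjects.
    return [(a, o) for s, a, o in saos if s in customer_nouns]

def map_aos_to_steps(aos, step_verbs):
    # Phase 3 stand-in: assign each AO to a Job Map step by its verb.
    return {step: [ao for ao in aos if ao[0] in verbs]
            for step, verbs in step_verbs.items()}

patent = "user | plan | cleaning route\nuser | collect | dust\nmotor | drive | fan"
steps = map_aos_to_steps(
    select_candidate_aos(extract_saos(patent), {"user"}),
    {"Define": {"plan"}, "Execute": {"collect"}},
)
# → {'Define': [('plan', 'cleaning route')], 'Execute': [('collect', 'dust')]}
```

The final expert review of the resulting map is a manual step and is therefore not represented in the sketch.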


Data Collection and SAO Structure Extraction
In this research, the Job Map presents the detailed actions of the customer, which are derived from patent data. Patent data are highly efficient to search and analyze because the patent specification format is standardized [38,39]. In addition, patent data include bibliographic information alongside technical information, so various analyses are possible. The relevant fields in patent data for this study include "field of invention", "description of related art", "summary of the invention", and "detailed description". Information about the customers who use the product can also be found in such data. To collect patent data, we use the bulk patent dataset from the Google Repository (Google, Menlo Park, CA, USA).
The first phase of Job Mapping is to define the Job and the group of products needed to perform it, and to select IPC codes related to the product group. Because the Job Map considers the entire process of resolving a Job, data are collected over a wide range of product groups rather than for a specific product. For example, beyond the patents associated with vacuum cleaners, patents related to the other required tools should also be examined. The IPC codes can be found in any IPC code browser or IPC code dictionary. In addition, missing patents should be added to the category defined by the IPC codes; a user can check other IPC codes referenced by the retrieved patents, or can use query searching to add patents to the set.
When the IPC codes associated with the product group have been selected, the set of patents that belong to those codes is collected. Because the goal of the proposed method is to use as much data as possible from the content of patents, the scope is set to include all published patents, regardless of term and rights status. This large amount of data increases the reliability of the Job Mapping results and enables further analyses. When the entire patent set is selected, the contents of each patent document, such as the "abstract", "field", "introduction", "summary", and "description of invention", and the basic bibliographic information are stored. The patent data can be collected using patent search sites provided by the United States Patent and Trademark Office (USPTO) or similar organizations, or by using a commercial patent search site.
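The collection step can be sketched as a prefix filter over IPC codes. The patent records and code format below are illustrative, not the actual schema of the USPTO or Google bulk data.

```python
# Toy filter selecting patents whose IPC codes fall under chosen subclasses.
SELECTED_IPC = ("A47L-005", "A47L-009")   # illustrative cleaner-related codes

patents = [
    {"id": "US1", "ipc": ["A47L-005/24"], "abstract": "vacuum cleaner ..."},
    {"id": "US2", "ipc": ["A47L-009/02", "H02K-007/14"], "abstract": "suction unit ..."},
    {"id": "US3", "ipc": ["B25J-011/00"], "abstract": "service robot ..."},
]

def in_scope(patent, selected=SELECTED_IPC):
    # A patent is collected if any of its IPC codes starts with a selected prefix.
    return any(code.startswith(prefix)
               for code in patent["ipc"] for prefix in selected)

collected = [p["id"] for p in patents if in_scope(p)]
# → ['US1', 'US2']
```

Prefix matching reflects the hierarchical structure of the IPC scheme: selecting a subclass group implicitly includes all of its subgroups.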

Next, NLP is used to extract SAO structures automatically from the contents of the collected patent documents. This step can use any NLP analysis software, such as the Stanford Parser, AlchemyAPI, Antelope, or Knowledgist. To determine the exact meaning of each SAO structure, it must remain linked to its source patent document.

Rule-Based Selection of Candidates of Job Execution Steps
The next phase is to derive the specific tasks related to customer behavior, which are components of the Job Map. In this study, we defined rules to derive candidates that are likely to be detailed Jobs. The first classification step is to identify the set of SAO structures that are related to the customer's behavior, purpose, and other requirements. SAO structures are selected first if the subject (noun) is a word related to the customer. Because patent documents are clearly and concisely written, confirming that the subject part is related to the customer is an easy task.
The first step in this process is to extract nouns that can be related to the customers and their behaviors (Table 1). These nouns or noun phrases are related to 'using', 'people', and 'body parts'. In addition, the user who performs the Job Mapping task can add nouns that he or she considers valid and can provide domain-specific terms according to the product group. The next step is to identify AO structures that are likely to describe tasks that are components of the Job Map. The SAO set selected first has a subject associated with the customer. In the second classification step, the rules are defined and classified by verb and object elements (Table 2).
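The first classification step can be sketched as a subject filter. The noun list below is a small illustrative subset standing in for the 'using', 'people', and 'body parts' categories of Table 1, not the actual rule set.

```python
# Keep SAO structures whose subject contains a customer-related noun.
CUSTOMER_NOUNS = {"user", "person", "people", "operator", "hand", "customer"}

def is_customer_sao(sao):
    """True if any word of the subject is a customer-related noun."""
    subject = sao[0].lower()
    return any(noun in subject.split() for noun in CUSTOMER_NOUNS)

saos = [
    ("the user", "grip", "the handle"),
    ("the motor", "rotate", "the brush"),
]
customer_saos = [s for s in saos if is_customer_sao(s)]
# → [('the user', 'grip', 'the handle')]
```

Domain-specific terms would simply be added to the noun set by the person performing the Job Mapping.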
To define AO structures, we select verbs from three categories. First, AO structures that include a verb associated with "technology view" are highly related to technical solutions or technical outcomes; these structures should be retained. Second, verbs related to 'components' are strongly related to the product itself, and, third, verbs related to "emotion" are likely to refer to customer responses. We refer to the study of Choi et al. [40], one of the existing SAO analysis studies, to define the verbs for technology and product components. In addition, we add an emotion category because the Job Map requires analysis of the user's experience. Lastly, verbs that are not likely to refer to Jobs should be eliminated.
When selecting AO structures, removing meaningless words (called stopwords) is a very important task. Unfortunately, in NLP research, there is still no perfect way to remove stopwords automatically. Generally, through a trial-and-error process, the analyst chooses stopwords to improve the performance of the target NLP application. In this paper, we recommend using data from the Google dictionary [41], which is the most general stopword dictionary. One approach is to exclude AOs whose objects contain pronouns or rhetorical words that appear in the dictionary. The aforementioned SAO parsers can extract a considerable number of AO structures from patent documents, so filtering out some AO structures with stopwords is not a problem given the variety and volume of big patent data. After this removal, the analyst can eliminate additional meaningless AOs as needed.
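The stopword-based exclusion can be sketched as follows. The stopword list is a tiny illustrative subset, not the Google stopword dictionary cited above; the rule drops an AO only when its object consists entirely of stopwords or pronouns.

```python
# Drop AO pairs whose object carries no content (only stopwords/pronouns).
STOPWORDS = {"it", "them", "thereof", "the", "a", "an", "this", "that"}

def object_is_meaningless(ao):
    """True if every word of the object part is a stopword."""
    words = ao[1].lower().split()
    return all(w in STOPWORDS for w in words)

aos = [("remove", "it"), ("remove", "the dust"), ("clean", "this")]
kept = [ao for ao in aos if not object_is_meaningless(ao)]
# → [('remove', 'the dust')]
```

Note that "the dust" survives: the rule requires the whole object to be meaningless, so articles attached to a content noun do not cause removal.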
In the second classification process, the user merges semantically similar AO structures into a single AO structure. For example, a plural verb form is treated as its singular form. Furthermore, if nouns or noun phrases in the object part are synonymous, they can be integrated into one AO. NLP technology is applied to handle the integration of synonyms. For example, WordNet, a dictionary that defines the relationships of words and phrases [42,43,44], can be used to obtain semantic similarity information for words. In addition, the recently developed Word2Vec technology [45,46], which expresses the relationships among words in a vector space, can be used to measure the distance between similar words by using the cosine distance. Finally, the user applies rules to define verbs and objects that are not relevant to the detailed Jobs and removes AO structures that use them.

The third step is to closely examine the remaining AO structures to select candidates that are likely to describe detailed Jobs. This selection process uses the verbs in the general guidelines to define the Jobs that belong to each step of the Job Map. In this step, candidate AOs are identified based on verbs that are related to each step of the Job Map (Table 3). This study uses the eight-step process (Figure 1); the ninth 'Resolve' step can be added as necessary, and the user can add or remove steps as appropriate. As previously described, synonyms and antonyms of the verbs in the guidelines are defined as rules for this step. A lexical database such as WordNet (Figure A1 in Appendix A) can be used effectively for this purpose [42,43,44]. Some AO structures may be domain-specific; in this case, the creator of the Job Map and the domain experts add rules for the verb and object parts. These semi-automatic procedures identify the AO candidates that are likely to become detailed Jobs of the Job Map.
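The merging of semantically similar AO structures can be sketched with the cosine-distance idea mentioned above. In practice the vectors would come from a trained Word2Vec model (or the similarity from WordNet); the two-dimensional vectors below are made up purely to illustrate the test.

```python
import math

# Illustrative word vectors; real use would load a trained Word2Vec model.
VECTORS = {
    "clean": (0.9, 0.1), "cleanse": (0.88, 0.15), "rotate": (0.1, 0.95),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def merge_verbs(aos, threshold=0.95):
    """Fold AOs with the same object and near-synonymous verbs into one entry."""
    merged = []   # entries: (verb, object, count)
    for verb, obj in aos:
        for i, (v2, o2, count) in enumerate(merged):
            if obj == o2 and cosine(VECTORS[verb], VECTORS[v2]) >= threshold:
                merged[i] = (v2, o2, count + 1)
                break
        else:
            merged.append((verb, obj, 1))
    return merged

result = merge_verbs([("clean", "floor"), ("cleanse", "floor"), ("rotate", "brush")])
# → [('clean', 'floor', 2), ('rotate', 'brush', 1)]
```

Keeping a count of merged occurrences is a design choice: frequently repeated AOs are stronger candidates for detailed Jobs in the later mapping step.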

Job Mapping
The next phase is to map the candidate AO sets to each step of the Job Map. At the beginning of this process, the candidate AO set need not be mapped perfectly, because the Job Map is reviewed and revised at the end of the procedure. However, if a well-prepared set of candidate AOs is provided at this stage, the workload of the final procedure is reduced and the reliability of the result is increased.
Candidate AOs can be mapped automatically to the Job Map by two sets of rules: verb-phrase rules and object-phrase rules. The first method utilizes the verbs related to the 'steps' (Table 3) and semantically related verbs. During construction of the Job Map, candidate AOs that are related to more than one step are not assigned to any of them; this assignment is finalized during the review process. Additional verbs defined according to the product group are mapped by the same method. The second method is to define object phrases that are relevant to each step, and then to map candidate AOs to the Job Map accordingly. For example, the dust tank is generally emptied after the cleaning job is finished, so this timing defines a rule that maps it to the eighth 'Conclude' step. Through this process, a set of candidate Jobs is derived, and the candidate Jobs are then mapped step-by-step to create an initial Job Map.
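The verb-rule mapping can be sketched as follows. The verb lists are illustrative stand-ins for the Table 3 rules (only five of the eight steps are shown), and, as in the text, an AO matching zero or multiple steps is left unassigned for the expert review pass.

```python
# Illustrative verb rules for mapping candidate AOs to Job Map steps.
STEP_VERBS = {
    "Define": {"plan", "select"}, "Locate": {"find", "gather"},
    "Prepare": {"set", "arrange"}, "Execute": {"clean", "remove"},
    "Conclude": {"store", "empty"},
}

def map_ao(ao):
    """Return the single matching step, or None (ambiguous/unknown → review)."""
    steps = [s for s, verbs in STEP_VERBS.items() if ao[0] in verbs]
    return steps[0] if len(steps) == 1 else None

assignments = {ao: map_ao(ao) for ao in
               [("plan", "route"), ("empty", "dust tank"), ("shake", "filter")]}
# → {('plan', 'route'): 'Define', ('empty', 'dust tank'): 'Conclude',
#    ('shake', 'filter'): None}
```

Object-phrase rules would work analogously, keyed on the object part instead of the verb.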
The final phase of the overall method is to finalize the Job Map. In this process, the researchers who use the Job Map check the results obtained to this point, select the AOs that become the detailed Jobs, and map them to each step of the Job Map. Domain experts can also participate in the review to obtain reliable results. For detailed Jobs that the automatic mapping related to more than one step, or to none, the researcher loads the original patent document linked to the AO structure and checks the passage in which the AO structure appears. Finally, the Job Map is completed by reviewing it according to the guidelines (Table 4).

Case Study: Analysis of the Cleaning Job
A case study of the Cleaning Job is used to explain and verify the semi-automated Job Mapping method proposed in this study. This Job is suitable as a case study because it includes many human processes and involves various types of products. In addition, it offers clear opportunities for innovation through application of the Job Map. Furthermore, because the process is relatively simple, it is easy to understand.

Data Collection and SAO Structure Extraction
To create a Job Map for the Cleaning Job, the research team used patents from 2005 to 2014 provided by the United States Patent and Trademark Office (USPTO). A typical product that solves a Cleaning Job is a cleaner. The IPC codes associated with the cleaner product group are A47L-005, A47L-007, A47L-009, A47L-011, and A47L-013 (Table 5). Additional patents could have been added to the search query, but in this study the IPC codes retrieved almost all patents related to the cleaner product group, providing sufficient data. The search recovered 5299 US patents for these IPC codes. The full contents of these patents (abstract, field, introduction, summary, and description of the invention) were stored in a database, and the SAO structure set was extracted from it. Initially, 768,649 SAO structures were extracted using Knowledgist, a commercial NLP software package for SAO extraction.
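Knowledgist is a commercial tool, but the shape of its output can be illustrated with a toy extractor. The sketch below is only a stand-in, under strong simplifying assumptions: it handles single-clause, active-voice sentences and matches against a small hand-picked verb list; real patent text requires a full dependency parser.

```python
import re

# Verbs recognized by this toy extractor (an assumed, illustrative list).
VERBS = {"opens", "removes", "collects", "cleans"}

def extract_sao(sentence):
    """Naively split a simple active-voice sentence into an SAO triple:
    everything before the first known verb is the subject, everything
    after it is the object. Returns None if no known verb is found."""
    tokens = re.findall(r"[A-Za-z]+", sentence.lower())
    for i, tok in enumerate(tokens):
        if tok in VERBS and 0 < i < len(tokens) - 1:
            return (" ".join(tokens[:i]), tok, " ".join(tokens[i + 1:]))
    return None

print(extract_sao("The user opens the window"))
# → ('the user', 'opens', 'the window')
```

The triples produced in this form (subject, action, object) are what the subsequent classification and Job Mapping steps operate on, after the subject part is used for filtering and dropped to leave AO pairs.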

Rule-Based Candidates of Job Selection
The initial SAO structure set was divided into three periods: 2005 to 2007, 2008 to 2010, and 2011 to 2014. Creating a Job Map for each period lets the researcher compare the three outcomes to identify common points and trends, which also increases the chance of identifying new opportunities. Dividing the data this way also reduced information bias because the periods contain roughly equal amounts of data (Table 6). Candidate Jobs were then selected by applying the AO classification rules (Table 1) and the verb rules of the Job Map (Table 2).
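The three-period split is a straightforward bucketing by patent year. A minimal sketch, assuming each SAO record carries the filing year of its source patent (the record format here is hypothetical):

```python
# The three analysis periods used in the case study.
PERIODS = [(2005, 2007), (2008, 2010), (2011, 2014)]

def partition_by_period(records):
    """records: iterable of (year, sao) pairs; returns one bucket of SAO
    structures per period, keyed by the (start, end) year tuple."""
    buckets = {p: [] for p in PERIODS}
    for year, sao in records:
        for lo, hi in PERIODS:
            if lo <= year <= hi:
                buckets[(lo, hi)].append(sao)
                break
    return buckets

records = [(2006, "remove dust"), (2009, "open door"), (2013, "read display")]
buckets = partition_by_period(records)
print({p: len(v) for p, v in buckets.items()})
```

Each bucket then goes through the same rule-based candidate selection, yielding one Job Map per period for comparison.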

Job Mapping
The initial Job Mapping is based on the verbs of the Job Map (Table 3) and the semantic verbs. Based on the initial Job Map created using these rules, the research team worked with expert groups related to the cleaner product. Two groups of experts participated in this case study; both professionally conduct research on cleaners in South Korea. One group is a new-product development team of a home appliances company (the LG Electronics (Seoul, South Korea) New Product Introduction team); this team develops new products in various categories besides cleaners, is interested in the ODI approach and our semi-automated method, and had a high degree of interest in and understanding of our study. The other is a government research institute (the Korea Testing Laboratory Home Appliance Testing Team) that analyzes and tests the effectiveness of cleaners manufactured and sold in South Korea.
The research team and the expert teams closely examined the initial Job Map and all candidate AOs derived in Section 4.2. In continuous reviews during this phase, semantically redundant AOs and AOs not associated with the Cleaning Job were removed, and the remaining candidate AOs were assigned to the steps of the Job Map. To understand the meaning of each candidate AO accurately, we consulted the actual description of the patent in the patent database (Section 4.1). For example, 'open door', assigned to the 'Prepare' step, reflects "a user conventionally opens the doors or windows to get fresh air into the space being cleaned", the description in patent US20090260178A1. 'See content of dust' was expected to belong to the 'Define' step, but was assigned to the 'Monitor' step after consulting the patent description: "the users to see the content of the dust, so that the users can find something other than dust with ease." Similarly, 'select suction power' was automatically assigned to the 'Define' step, but once its correct meaning was established, it was moved to the 'Confirm' step.
The final Job Map is presented in Table 7. The case study demonstrates and verifies the proposed semi-automatic method for constructing a Job Map. The proposed method reduced the effort involved in data collection and analysis for constructing a Job Map for cleaning, and it enhanced the human capability to handle large amounts of text data. Moreover, the variability that originates from the different people involved in the construction could be controlled through the systematic process and standardized result. Note that our work is not intended to replace the creativity and imagination of designers and engineers. New product development is a creative job, and our work aims to support designers and engineers in performing it more easily and effectively. With the proposed method, designers and engineers may complete a large portion (e.g., 70%) of their customer-understanding and ideation tasks. Nonetheless, new product development is by nature a "soft" task that requires human activity; as in any development project, project-management effort and qualitative discussion are required after the automated process to ensure a successful implementation. We believe a semi-automatic approach should be used to achieve both efficiency and effectiveness.
In addition, dividing the data into three periods yielded the following implications. First, the contents of the Job Map did not change significantly over the 10 years; this demonstrates that Job analysis, the core of the ODI method, is stable over time. Second, at the detailed level, newly added Jobs were found in the analysis of the last period. For example, 'read display' appeared in the 'Confirm' step, indicating that detailed Jobs related to robot cleaners were newly derived. In addition, 'store cleaner' appeared in the 'Conclude' step within the last seven years of the data. This reflects the fact that separate storage of cleaning components, such as dust filters and dust containers, is becoming an important issue for modern electric cleaners.

Conclusions
This study proposes a semi-automated method to create a Job Map, a key tool of Outcome-Driven Innovation. One of the most common problems when creating a Job Map is the difficulty of collecting and analyzing data. To solve this problem, a large amount of documented data is collected and analyzed by an automated method that uses text mining. The content of patents is a data source that can be used for analyzing and designing many products besides the cleaner, which was the subject of this study's case. By analyzing a large amount of data in a semi-automated process, the method reduces the time required to complete the task. The traditional manual process also has the limitation that assembly of the Job Map depends entirely on expert intuition, so different experts can obtain different results, and the results may be unreliable if an expert's understanding of the Job Map is insufficient. The proposed method addresses this problem by using computers to process a large amount of actual data before the Job Map is created; it can also supply information that an expert missed while creating the Job Map.
The two expert groups that participated in this research assessed that the final Job Map identified almost all customer acts and detailed Jobs related to cleaning. This positive evaluation indicates that the semi-automated method proposed in this study can create a Job Map effectively. They also judged that the final Job Map could be a useful tool for understanding the customer's cleaner-use process, the customer's Jobs, and the customer's needs. In particular, specific Jobs such as 'move chair', 'remove large particle', and 'position cleaning device' were unexpected by the cleaner-development teams, who expected to be able to draw innovative ideas from the Job Map. The research team created the Job Map by dividing the data into three periods. By comparing the Job Maps from these periods (Figures A2-A4), several new detailed Jobs were derived. However, the composition and contents of the three Job Maps were similar; this confirms the basic idea that 'what customers want' and the detailed Jobs do not change, and the similarity of the three Job Maps also confirms the consistency of the proposed semi-automated method.
To increase the effectiveness and expand the applicability of the proposed method, we plan to conduct further research. First, its applicability to other products should be assessed, and the method modified if necessary. Based on the results, software that enables users to create a Job Map easily could be developed. The ODI approach is also claimed to be useful for innovation in the service field beyond product innovation; however, this study did not address service-oriented businesses because it used product-oriented patents as the analysis data. To extend the method to the service field, we plan to evaluate the value of using web-based mass data related to service businesses.

Appendix A. An Example of WordNet Representation
Sustainability 2017, 9, 1386. Figure A1. Example of semantic vocabulary related to 'adjust' obtained using WordNet.