Spatio-Temporal Information Extraction and Geoparsing for Public Chinese Resumes

As an important carrier of individual information, the resume is a valuable data source for studying the spatio-temporal evolutionary characteristics of individual and group behaviors. This study focuses on spatio-temporal information extraction and geoparsing from resumes to provide basic technical support for spatio-temporal research based on resume text. Most current studies on resume information extraction are oriented toward recruitment work, such as the automated extraction, classification, and recommendation of resumes; these studies ignore the spatio-temporal information on individual and group behaviors implied in resumes. Therefore, this study takes the public resumes of teachers at key universities in China as the research data, proposes a set of spatio-temporal information extraction solutions for the electronic resumes of public figures, and designs a spatial entity geoparsing method, which can effectively extract and spatially locate the spatio-temporal information in resumes. To verify the effectiveness of the proposed method, text information extraction models such as BiLSTM-CRF, BERT-CRF, and BERT-BiLSTM-CRF were selected for comparative experiments, and the spatial entity geoparsing method was verified. The experimental results show that the precision of the selected models on the named entity recognition task is 96.23% and the precision of the designed spatial entity geoparsing method is 97.91%.


Introduction
With the development of the Internet and information technology, almost all the interactive information in people's daily lives is transmitted over the Internet, and the amount of text information on the Internet is growing geometrically. Resumes, which carry key information about individuals in textual form, are also increasing in number on the web. Statistical analyses show that well-known third-party e-job portals upload more than 300 million resumes per year [1]. A resume is a standardized, logical written expression that contains a brief description of an individual's basic information, experience, strengths, hobbies, and other relevant information.
Information extraction (IE) is the process of converting unstructured text into structured data containing information of interest [2]. Current information extraction tasks mainly follow three approaches: rule-based methods [3], machine learning methods [4], and deep learning methods [5]. The development of IE makes the automated extraction, classification, and recommendation of resumes possible [6,7], which greatly improves the efficiency of recruiters in selecting suitable job applicants. Meanwhile, resumes contain abundant temporal and spatial information, which is important for studying the spatial mobility of individuals as well as the characteristics of the spatio-temporal distribution of a particular group of people. However, current studies on resume information extraction have not paid enough attention to temporal and spatial information and have ignored in-depth information such as personal growth paths and common characteristics in resumes [8], which wastes much of the value of resume information.
Geoparsing is a cornerstone of many geographic information applications and a difficult natural language processing task [9]. It comprises two important processes: geotagging and geocoding [10]. Geotagging is a special case of named entity recognition that aims to identify place names containing geographical information in unstructured texts. Geocoding is the process of assigning geographic coordinates to a given address. Considering the needs of this study, in the geocoding phase we only obtain the province and city of an entity containing location information, and use the geographic coordinates corresponding to its city as the geographic coordinates of the entity.
In this paper, taking the resumes of teachers at key universities in China as an example, we explore a process and method for spatio-temporal information extraction from teachers' resumes, and design a spatial entity geoparsing method to extract and locate spatio-temporal information in resumes. This work provides technical support for spatio-temporal analysis based on resume texts and helps to mine the deep information contained in resumes.

Related Works

Information Extraction
Resume information extraction is an important application of textual information extraction techniques, aiming to automatically extract information of interest from resume text. In studies of English resume information extraction, Ciravegna et al. [11] extracted name, street, city, province, email, phone, fax number, and zip code information from English resumes using a rule-based approach. Kopparapu et al. [12] presented a system that can handle multiple types and formats of resumes and created an electronic database. Bodhvi et al. [13] used a semi-supervised deep learning method to parse the education section of resumes. Rakhi et al. [6] designed a resume analysis and recommendation system using NLP techniques with the objective of simplifying the employment process. Early studies on Chinese resume information extraction mainly used rule-based methods: Qiao et al. [14] developed a character information extraction system based on a rule-based approach to achieve the automatic extraction of semi-structured character attributes; Li et al. [15] studied encyclopedic character attribute extraction algorithms; and Yu et al. [16] proposed an attribute extraction method based on distant supervision and pattern matching to extract a specified person's title attributes. Since the acquisition of rules usually requires specialized domain knowledge, the generalization ability of rule-based methods is low. Subsequently, machine learning methods for entity extraction emerged. Dong et al. [17] proposed a method for extracting key information from teachers' homepages based on conditional random fields (CRFs); Chen et al. [18] proposed a "two-step" resume information extraction algorithm by combining the syntactic information of resumes with the design of "Writing Style", achieving accurate extraction of resume information without defining rules or annotating data. In recent years, with the development of artificial neural networks, deep learning methods have also been widely used in resume information extraction [19]. Some scholars have built named entity recognition models for Chinese e-resumes based on the BERT language model, which show good performance [8,20].
The task of spatio-temporal information extraction focuses on identifying and extracting temporal and spatial information from text data and constructing relationships between them, in order to describe the changes in the spatial location of a research object within a certain time period and thereby explore the intrinsic patterns and characteristics of the object's behaviors. Some scholars have explored the extraction of temporal information in Chinese texts from the perspective of linguistics, mainly by analyzing the constituent elements of time and the composition of time words in Chinese and adopting the concept of temporal expressions for identification [21,22]. Building on the extraction of time elements, the normalized expression of time phrases has been achieved by defining the types of time relations in Chinese descriptions and parsing the internal rules of time expressions [23]. By summarizing the characteristics of time information descriptions in Chinese texts, Zhang et al. [24] constructed a time lexicon and a time description pattern library, and designed algorithms for the normalized expression of time information and semantic inference. Qiu et al. [25] extracted temporal information from geological reports by constructing a temporal gazetteer. For the extraction of spatial entities, the extraction of place names (toponyms) is the main focus. Existing methods for recognizing toponyms can be divided into three types: rule-based methods, machine learning methods, and deep learning methods. Rule-based methods build gazetteers, combine the word composition and lexical features of toponyms, and generalize the general rules of place name expressions to recognize toponym entities. This approach has the advantages of simplicity and precision, but owing to the limitations of the constructed gazetteer it cannot handle situations such as new place names and complex syntax. The machine learning approach does not require specialized linguistic knowledge and is more robust and flexible than the rule-based approach. In recent years, machine learning models such as the hidden Markov model (HMM), the support vector machine (SVM), the maximum entropy Markov model (MEMM), and CRFs [26,27] have been used for toponym recognition. With the rise and development of deep learning, toponym entity recognition methods based on deep learning models have also been widely used; classical models include BiLSTM-CRF [28] and BERT-CRF [29].

Geoparsing
Gritta et al. [10] conducted a detailed geoparsing survey, which evaluates and analyzes the performance of a number of leading geoparsers on several corpora and highlights the challenges in detail. To obtain the geographic content of social media messages, Gelernter et al. [30] presented a method to geoparse the short, informal messages known as microtext.
For Chinese geotagging, neural network methods are mostly used, and the mainstream method is based on a pre-trained model [31,32]. Chinese geocoding mostly relies on map service platforms; the mainstream Chinese map service platforms include Baidu Map (https://lbsyun.baidu.com, accessed on 12 June 2023), Gaode Map (https://lbs.amap.com, accessed on 12 June 2023), and Tianditu (http://lbs.tianditu.gov.cn, accessed on 12 June 2023). Because different map service providers use different geocoding rules and data sources, their geocoding results differ. He et al. [33] fused and optimized multi-source online coding services to reduce the result bias caused by geocoding differences and improve the efficiency of geocoding work. Zhu [34] used four mapping platforms to geocode some address data, comparing the geocoding errors for community addresses and road addresses. In this paper, for toponym geoparsing, we use not only map service platforms but also web encyclopedic knowledge, expanding the information sources for geoparsing.

Data
The public resumes of university teachers are usually posted on the official websites of universities and are generally easy to find. In this study, we obtained teacher resumes from 35 "Project 985" universities in China; the 35 universities are listed in Appendix A. We collected 51,438 resumes from the official websites of these universities, of which 28,306 remained after cleaning.
University teacher resumes are semi-structured texts whose writing style has a certain regularity. Figure 1 shows an example of a crawled resume of a key university teacher, from which it can be seen that the content of a teacher resume can be divided into modules, such as educational experience, work experience, academic positions, and teaching courses. Browsing through a large number of teacher resumes shows that they also contain modules on teaching and research, awards and honors, papers and achievements, etc. These modules are usually identified by keywords such as "basic information", "educational experience", "work experience", and "research interests"; that is, each module appears in the form of "caption keyword + module content" [35]. Therefore, it is particularly important to make good use of these caption keywords in order to effectively chunk the content of the resumes.


Methodology
Figure 2 illustrates the framework of the proposed method, which is divided into four parts: data acquisition, caption lexicon construction, information extraction, and geocoding. In the data acquisition part, 28,306 valid teacher resumes were collected from the official websites of 35 key universities in China using web crawler technology and stored in JSON format. In the caption lexicon construction part, a caption lexicon of teacher resumes from each key university was obtained based on statistical and text similarity calculation methods. In the information extraction part, the target entities in the university teacher resumes were identified by the constructed information extraction scheme. In the geocoding part, a spatial entity geocoding method was designed to obtain the geographical coordinates of toponyms.

Resume Caption Lexicons Construction
As mentioned in Section 3.1, the modules in a teacher resume usually take the form of "caption keyword + module content". Using these caption words to chunk the content of a resume is a prerequisite for subsequent information extraction. In this study, a statistical-based approach is first used to obtain a high-frequency caption lexicon, and a text similarity-based approach is then used to expand it into a comprehensive caption lexicon for each university, since the resume styles of different universities are not consistent.

Statistical-Based Caption Lexicon Construction
Browsing through a large number of teacher resumes shows that the caption keywords generally stand alone, are generally between four and seven characters in length, and may end with a colon. Based on these characteristics, rules were established to count the eligible caption words and their frequencies, and the caption words ranked in the top 15 by frequency for each university were retained.
By summarizing and classifying the results of the preliminary statistics of caption words from all universities, we divided the caption words of teacher resumes into 11 categories, as shown in Table 1.

Text Similarity-Based Caption Lexicon Expansion
Text similarity is the degree of similarity between texts and measures their commonality and difference. There are many ways to calculate text similarity; this paper focuses on two: word vectors and edit distance. The essence of a word vector is to embed a word into a low-dimensional vector representation, which places semantically similar words nearer to each other in space and facilitates the calculation of similarities between words. In addition, this representation is a good solution to the problems of semantic deficit and the curse of dimensionality caused by the bag-of-words model's assumption of word independence [36]. The edit distance, also known as the Levenshtein distance, is a string-level measure of text similarity, measured by the number of operations (insertion, substitution, and deletion) required to transform one string into another.
In this paper, we use a combination of word vectors and edit distance to measure text similarity and discover new caption words. The text similarity between two short texts a and b is calculated as

Sim(a, b) = α · cos(S_a, S_b) + β · (1 − ED_{a,b} / max(L_a, L_b)),    (1)

where S_a and S_b are the weighted averages of the word vectors of the short texts a and b after word segmentation and stopword removal, respectively; L_a and L_b are the lengths of a and b after stopword removal, respectively; ED_{a,b} is the edit distance between a and b; and α and β are weighting factors used to balance the two terms. Comparative experiments showed that the optimal weighting factors are α = 0.8 and β = 0.2.
Based on the constructed text similarity measure formula, the teacher resume caption lexicon constructed by the statistical-based method was expanded to obtain teacher resume caption lexicons for each university.
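A minimal sketch of one way to combine the two factors, assuming the score is a weighted sum of the cosine similarity of the averaged word vectors and a length-normalized edit-distance term (the exact combination formula is an assumption for illustration); the word vectors are supplied by the caller:

```python
import math

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance, one row at a time.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def cosine(u, v) -> float:
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def caption_similarity(vec_a, vec_b, text_a, text_b, alpha=0.8, beta=0.2) -> float:
    # Weighted combination: word-vector cosine similarity plus a
    # length-normalized edit-distance term (alpha = 0.8, beta = 0.2
    # as reported in the paper's comparative experiments).
    ed = edit_distance(text_a, text_b)
    ed_sim = 1.0 - ed / max(len(text_a), len(text_b), 1)
    return alpha * cosine(vec_a, vec_b) + beta * ed_sim
```

Identical caption strings with identical vectors score 1.0, while unrelated strings with orthogonal vectors score near 0, so a threshold on this score can flag candidate new caption words.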

Resume Information Extraction Scheme
The resume information extraction scheme for university teachers constructed in this study consists of three main steps: resume text chunk segmentation, chunked content normalization, and entity recognition. The purpose of resume chunk segmentation is to divide a resume into different subject chunks by resume caption words, with each chunk describing a single topic, such as "educational experience" or "work experience". The chunked resume text has a messy format and cannot be used directly as input for the named entity recognition model; the purpose of the normalization step is to standardize the chunked content into the input form required by the model. For entity recognition, we focus on temporal and spatial entities in resumes: temporal entities are recognized with a rule-based pattern matching approach, while spatial and other entities are recognized with a deep learning approach.

Resume Text Chunking
We adopt a rule-based "pincer" cutting method to segment the resume text into chunks. The steps are as follows:
Step 1: Caption trigger word targeting. According to the constructed caption lexicon of teacher resumes for each university, rules are established to match the caption words in the resume text. The matched caption words are assigned to the corresponding categories, and the position indexes of the caption words are recorded for chunking.
Step 2: Caption trigger word sorting. The caption trigger words are sorted by position index from smallest to largest, producing an ordered sequence of caption trigger words.
Step 3: Resume text cutting. First, according to the ordered sequence of caption trigger words, and based on the rule that the content between two trigger words belongs to the previous trigger word, the start and end position indexes of the resume text under each caption category are obtained. The resume texts are then chunked according to these position indexes, and the resulting chunks are classified. As we focus on spatial and temporal information, only four categories of resume chunks are retained: "basic information", "personal resume", "educational experience", and "work experience".
The above resume text chunking algorithm relies heavily on the caption lexicons and is implemented with the rule that the content between two trigger words belongs to the previous trigger word. Because a relatively complete caption lexicon of teacher resumes was obtained for each university, the chunking algorithm performs well in this study.
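The three-step chunking procedure above can be sketched in a few lines of Python; the caption lexicon and its categories here are toy values assumed for illustration:

```python
import re

# A toy caption lexicon: caption keyword -> category (assumed for illustration).
CAPTION_LEXICON = {
    "基本信息": "basic information",
    "教育经历": "educational experience",
    "工作经历": "work experience",
}

def chunk_resume(text: str) -> dict:
    # Step 1: locate caption trigger words and record their position indexes.
    hits = []
    for caption, category in CAPTION_LEXICON.items():
        for m in re.finditer(re.escape(caption), text):
            hits.append((m.start(), m.end(), category))
    # Step 2: sort the trigger words by position index.
    hits.sort()
    # Step 3: the content between two trigger words belongs to the previous one.
    chunks = {}
    for k, (start, end, category) in enumerate(hits):
        stop = hits[k + 1][0] if k + 1 < len(hits) else len(text)
        chunks[category] = text[end:stop].strip()
    return chunks
```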

Resume Text Chunk Content Normalization
Resume text chunking divides teacher resumes into different topics according to caption words, which prepares them for subsequent information extraction. However, some of the chunked resumes are not suitable as input for the named entity recognition model and need to be normalized into line-by-line resume descriptions. For example, the education section of a chunked resume may look like this: "'2014-2018', 'Doctor', 'Astrophysics', 'Southern Methodist University', '2011-2013', 'Master', 'Photogrammetry and Remote Sensing', 'Wuhan University', '2007-2011', 'Bachelor', 'Geographic Information System', 'China University of Geosciences, Wuhan'". It has to be normalized as: "'2014-2018 Doctor Astrophysics Southern Methodist University', '2011-2013 Master Photogrammetry and Remote Sensing Wuhan University', '2007-2011 Bachelor Geographic Information System China University of Geosciences, Wuhan'".
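This kind of merging can be sketched as follows; treating any fragment that begins with a four-digit year as the start of a new line is an assumed stand-in for the actual merging rules, used here only for illustration:

```python
import re

TIME_HEAD = re.compile(r'^\d{4}')  # a fragment opening with a 4-digit year

def normalize_chunk(fragments):
    # Merge list-style fragments into line-by-line descriptions: a fragment
    # that starts with a year opens a new line; other fragments are appended
    # to the current line.
    lines = []
    for frag in fragments:
        if TIME_HEAD.match(frag) or not lines:
            lines.append(frag)
        else:
            lines[-1] += " " + frag
    return lines
```

Running it on the education-section example above yields one description line per degree.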
To automate the normalization of chunked resume text into line-by-line resume descriptions, we define rules for merging resume text data, as shown in Table 2. The time information in resumes has a distinct writing pattern, so a rule-based approach is used: regular expressions are written for pattern matching recognition. Browsing through the time descriptions in the teacher resumes of various universities, their writing types were summarized as shown in Table 3. Considering the needs of this study, we only extract year information; experiences without year information are omitted. The regular expressions for temporal entity extraction, written in Python using the re module (https://docs.python.org/3/library/re.html, accessed on 12 June 2023), are as follows: • P1: re.compile(r'\d{4}'), • P2: re.compile(r'(\d{2})/'), • P3: re.compile(r'/(\d{2})'), and • P4: re.compile(r'\d{2}').
P1 directly identifies the year in a full time description, e.g., "2012" and "2017" in "June 2012-November 2017". P2, P3, and P4 are used to extract the year from omitted time descriptions. Additionally, for an omitted time description such as "98/09-01/06", a judgement and normalization process is required to ensure that correct and standardized year entities are extracted.
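A sketch of year extraction with the P1 and P2 patterns above; the two-digit-year pivot (values above 30 mapped to 19xx, the rest to 20xx) is an assumption for illustration, not a rule stated in this study:

```python
import re

P1 = re.compile(r'\d{4}')      # full 4-digit year, e.g., "2012"
P2 = re.compile(r'(\d{2})/')   # 2-digit year before "/", e.g., "98" in "98/09"

def normalize_year(two_digit: str) -> str:
    # Assumed pivot: two-digit values above 30 are treated as 19xx,
    # the rest as 20xx.
    return ("19" if int(two_digit) > 30 else "20") + two_digit

def extract_years(time_text: str):
    # Prefer full 4-digit years; fall back to the omitted "yy/mm" style.
    years = P1.findall(time_text)
    if years:
        return years
    return [normalize_year(y) for y in P2.findall(time_text)]
```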

Deep Learning-Based Entity Recognition
Non-temporal entities such as places, institutions and positions are represented in various forms and with little regularity, thus we use deep learning methods to extract such entities.In this paper, BERT-BiLSTM-CRF is used as the resume entity recognition model.
BERT uses a bi-directional Transformer [37] encoder and a masked language model to implement a bi-directional language model, which is pre-trained to obtain a representation of each word that considers contextual information. Compared to traditional language models, BERT has stronger representational power and is able to solve problems such as polysemy. The self-attention mechanism is the main module of the Transformer encoder used by BERT; it uses the Query-Key-Value (QKV) model to map each input word into three different spaces to obtain the query vector Q, the key vector K, and the value vector V, respectively. A score is computed from Q and K and scaled by the dimension of the input vector d_K, and the score is then combined with V to obtain a new representation of each word that considers inter-word relationships, as shown in Equation (2):

Attention(Q, K, V) = softmax(Q K^T / √d_K) V.    (2)
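The scaled dot-product attention of Equation (2) can be sketched in a few lines of NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_K)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

With near-one-hot attention weights (e.g., Q = K with large magnitudes), each output row approaches the corresponding row of V, which illustrates how the mechanism selects the most relevant value vectors.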
LSTM (long short-term memory) is a variant of the recurrent neural network (RNN) that can effectively alleviate the gradient explosion or vanishing problems of simple recurrent neural networks [38]. The LSTM network introduces a new internal state dedicated to linear recurrent information transfer, while the input gate i_t, the forget gate f_t, and the output gate o_t are introduced through a gating mechanism to control the paths of information transfer, i.e., the updating, transferring, and forgetting of information in the memory unit. The computation of the LSTM network can be succinctly described as follows:

i_t = σ(W_i [x_t; h_{t−1}] + b_i),
f_t = σ(W_f [x_t; h_{t−1}] + b_f),
o_t = σ(W_o [x_t; h_{t−1}] + b_o),
c̃_t = tanh(W_c [x_t; h_{t−1}] + b_c),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t,
h_t = o_t ⊙ tanh(c_t),

where σ is the sigmoid activation function; o_t, i_t, and f_t are the output, input, and forget gates, respectively; W and b are the network parameters; ⊙ denotes element-wise multiplication; and h_t represents the output of the hidden-layer memory unit at time t. BiLSTM (bi-directional long short-term memory) obtains preceding and following context through LSTMs running in both directions, thus compensating for the inability of a unidirectional LSTM to learn the following context. However, the individual states of the BiLSTM output sequence are independent of each other, so incoherent entity labels may arise in sequence annotation, which requires the introduction of label-to-label constraints in the prediction process.
CRFs are conditional probability distribution models of output random variables given a set of input random variables; they take into account the interdependencies between labels and ensure that the output label sequence is reasonable. Therefore, adding a CRF layer after the output of a BiLSTM network allows the prediction results to be constrained. For labelling problems, linear-chain conditional random fields are often used. Given an input sequence X and the corresponding output label sequence Y obtained from training, the conditional probability is

P(Y | X) = (1/Z(X)) exp(ω · f(Y, X)),

where ω denotes the weight vector, f denotes the feature function, and Z(X) is the normalization factor.

Spatial Entities Decoding
A spatial entity, i.e., an entity containing spatial location information, mainly includes two types: places and organizations. The decoding of spatial entities in this study determines the province and city of the administrative division in which a given spatial entity is located, and then obtains the geographical coordinates corresponding to that administrative division to be used as the spatial coordinates of the entity. Foreign spatial entities are located at the national scale; e.g., Oxford University is located to the UK. The OSM (OpenStreetMap) latitude and longitude coordinates corresponding to the country are then obtained through the Nominatim API (https://nominatim.org/release-docs/latest, accessed on 12 June 2023) to geocode foreign spatial entities. We use a combination of Baidu Baike (https://Baike.baidu.com, accessed on 12 June 2023) search, the Baidu Map Application Programming Interface (API) (https://lbsyun.baidu.com/faq/api?title=webapi/guide/webservice-placeapi, accessed on 12 June 2023), and the cpca (https://github.com/DQinYuan/chinese_province_city_area_mapper, accessed on 12 June 2023) library to geocode domestic spatial entities.
For an included entry, Baidu Baike returns a normative entry page. The entry page can be broadly divided into four parts: title, overview, basic information column, and body content, as shown in Figure 3. By parsing these contents, the location of the spatial entity can be obtained. For entries that have not yet been included, Baidu Baike returns content-related entries for users' reference. The Baidu Map Location Retrieval Service API can be queried to obtain the province and city of an input keyword. The cpca library in Python can extract the province, city, and district from strings and perform mapping, which quickly identifies the province and city expressed in a spatial entity string.

Spatial Entity Geocoding Based on Baidu Baike
The method based on Baidu Baike is the key research content. For an entry included in Baidu Baike, the location of the spatial entity is obtained by web page parsing; for an entry that has not yet been included, the method determines whether the returned related entry and the retrieved entity are the same entity, and if they are, the related entry's page is parsed to obtain the location. The algorithm flow of this method is shown in Figure 4. The "web page parsing" in the algorithm flow refers to parsing the title, overview, basic information column, and body content. Algorithm 1 (the page parsing algorithm) is as follows:
Step 1: Recognize the current webpage title with the cpca library and judge the recognition result. If the province and city are complete, execute Step 5; otherwise, execute Step 2.
Step 2: Iterate through the basic information column of the webpage and identify the location content. Execute Step 5 if the province and city (the city may be missing) are obtained; otherwise, execute Step 3.
Step 3: Use pattern matching to identify the location in the web page paragraphs. Execute Step 5 if the province and city (the city may be missing) are obtained; otherwise, execute Step 4.
Step 4: Segment the webpage overview and body content into words, and build a dictionary of the form {key = city name, value = number of occurrences}. If the dictionary is not empty and the maximum number of occurrences is not less than 5, the city with the highest number of occurrences is taken as the final result and the corresponding province is obtained; otherwise, the province and city are left empty. Execute Step 5.
Step 5: Finish and return the province and city.
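The frequency-based fallback of Step 4 can be sketched as follows; the city list and city-to-province mapping here are toy values assumed for illustration:

```python
from collections import Counter

CITY_LIST = ["北京", "上海", "武汉", "广州"]            # toy city gazetteer (assumed)
CITY_TO_PROVINCE = {"武汉": "湖北省", "广州": "广东省"}  # toy mapping (assumed)

def locate_by_frequency(text: str, min_count: int = 5):
    # Count city-name occurrences in the overview and body text; the most
    # frequent city wins, provided it appears at least min_count (5) times.
    counts = Counter()
    for city in CITY_LIST:
        n = text.count(city)
        if n:
            counts[city] = n
    if counts:
        city, n = counts.most_common(1)[0]
        if n >= min_count:
            return CITY_TO_PROVINCE.get(city, ""), city
    return "", ""
```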

Threshold Setting
In the spatial entity geocoding method based on Baidu Baike and the Baidu Map API, the returned related entry titles and POI names may not refer to the same entity as the search keyword, so thresholds are required to improve the accuracy of spatial entity geocoding. Based on the actual extraction situation, the final threshold settings are described in Table 4.
Table 4. Threshold settings for the spatial entity geocoding method.

Method
Description of the Threshold Settings

Baidu Baike Search
After removing stopwords, if the edit distance between the retrieved entity and the title of a related entry is within 5, or the character-level cosine similarity [39] is at least 0.9, the retrieved entity is considered the same entity as the related entry, provided that the entity also appears in the overview paragraph of the entry or the cosine similarity is at least 0.95.

Baidu Map API Query
After removing stopwords from the POI names returned by the Baidu Map Location Retrieval Service API, if the edit distance to the retrieved entity is within 2 or the cosine similarity is at least 0.95, the returned POI is considered the same entity as the retrieved entity.
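Both threshold rules in Table 4 rest on the same two primitives: edit (Levenshtein) distance and character-level cosine similarity. A self-contained sketch of the Baidu Map API rule follows; the `same_poi` helper and the stopword handling are illustrative assumptions, not the authors' actual implementation.

```python
import math
from collections import Counter

def edit_distance(a, b):
    """Classic Levenshtein distance with a single rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev = dp[0]
        dp[0] = i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # deletion
                        dp[j - 1] + 1,        # insertion
                        prev + (ca != cb))    # substitution
            prev = cur
    return dp[-1]

def char_cosine(a, b):
    """Cosine similarity between character-frequency vectors."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[c] * vb[c] for c in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def same_poi(entity, poi_name, stopwords=frozenset()):
    """Baidu Map API rule from Table 4: after removing stopwords, accept
    the POI when the edit distance is within 2 or the character-level
    cosine similarity is at least 0.95."""
    a = "".join(c for c in entity if c not in stopwords)
    b = "".join(c for c in poi_name if c not in stopwords)
    return edit_distance(a, b) <= 2 or char_cosine(a, b) >= 0.95
```

The Baidu Baike rule uses the same primitives with looser bounds (edit distance within 5, cosine at least 0.9), combined with the extra check against the entry's overview paragraph.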

Experimental Evaluation

Dataset
In this paper, we use the Chinese resume dataset (Resume NER) presented by Zhang et al. [40] in 2018 to train the resume entity recognition model BERT-BiLSTM-CRF. The dataset was crawled from the Sina Finance website and consists of 1027 randomly selected resume summaries of executives of listed companies in the Chinese stock market, annotated with the YEDDA system into 8 types of named entities: nationality, education, native place/location, name, organization, profession, ethnicity, and position. The dataset contains over 4700 sentences, split into training, validation, and test sets in an 8:1:1 ratio.
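An 8:1:1 split of this kind can be sketched as follows; the shuffling, the fixed seed, and the sentence representation are assumptions for illustration, since Resume NER ships with a fixed split defined by its authors.

```python
import random

def split_8_1_1(sentences, seed=42):
    """Shuffle a sentence list and split it into train/dev/test in 8:1:1."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    data = list(sentences)
    rng.shuffle(data)
    n = len(data)
    n_train, n_dev = int(n * 0.8), int(n * 0.1)
    return (data[:n_train],
            data[n_train:n_train + n_dev],
            data[n_train + n_dev:])

train, dev, test = split_8_1_1(range(4700))
print(len(train), len(dev), len(test))  # 3760 470 470
```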

Evaluation Indexes
In this study, the precision P, recall R, and F1 value are selected as the evaluation indexes for the models and the geocoding method:

P = TP / (TP + FP), R = TP / (TP + FN), F1 = 2 × P × R / (P + R),

where TP, FP, and FN are the numbers of true positive, false positive, and false negative samples, respectively.
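These definitions can be checked numerically. Using the geocoding test counts reported in the geocoding results section (631 test entities, 621 assigned a location, 608 of them correct), a small helper reproduces the reported 97.91% precision; the helper itself is only an illustrative sketch.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and false
    negative counts, as defined above."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# Geocoding test set: 631 spatial entities, 621 assigned a location,
# 608 of those locations correct.
p, r, f1 = prf1(tp=608, fp=621 - 608, fn=631 - 608)
print(f"P={p:.2%} R={r:.2%} F1={f1:.2%}")  # P=97.91% R=96.35% F1=97.12%
```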

Recognition Results
The recognition results of the BERT-BiLSTM-CRF model on the test set are shown in Table 5. To demonstrate the superiority of the model, three baseline models were selected; the evaluation metrics of each model on the test set are shown in Table 6. The BERT-CRF and BERT-BiLSTM-CRF models are significantly more effective than the BiLSTM and BiLSTM-CRF models, indicating that pre-training on a large-scale text corpus can effectively improve model performance on downstream tasks. The sequence modelling capability of the CRF constrains the model output, which also improves the predictions to a large extent. The BERT-BiLSTM-CRF model combines BERT with BiLSTM to perform multi-layer feature extraction on the input and then constrains the output with a CRF, ultimately achieving the best results for all entity types and outperforming the other models in precision and F1 value.

Geocoding Results
From the resume entities, 631 spatial entities were randomly selected and their geographical locations were manually labeled as test data. The test entities were then geocoded by the spatial entity geocoding method. Of the 631 entities, 621 were assigned a location, of which 608 were correct; the specific index results are shown in Table 7. The geocoding method performs well overall and can meet the needs of the study. However, for some abolished or renamed spatial entities, such as the "Twenty-ninth Research Institute of the State Ministry of Machinery and Electronics Industry", the "Fifth Institute of China Aerospace Industry Corporation", and "Factory 230 of the Aerospace Corporation", the locations are difficult to obtain and prone to misrecognition.

Conclusions
Resumes contain rich spatio-temporal information about people's behaviors, which is of great value for studying the spatio-temporal evolutionary characteristics of individual and group behaviors. However, current research on resume information extraction pays little attention to spatio-temporal information and fails to fully exploit the analytical value of resumes. To make full use of this information and provide technical support for spatio-temporal analysis based on resumes, this study proposes a spatio-temporal information extraction and geoparsing method for public resumes by combining NLP and geoparsing techniques, which effectively realizes the extraction and geoparsing of spatio-temporal information in resumes. The method consists of three major parts: combining statistical methods and text similarity calculation to construct a title thesaurus of teachers' resumes in colleges and universities; recognizing target entities in teachers' resumes through the designed resume information extraction solution; and locating spatial entities in teachers' resumes through the constructed spatial entity geocoding method. Experiments show that the named entity recognition model selected in this paper is significantly better than the other models, and the constructed spatial entity geocoding method has high accuracy, which can support spatio-temporal analysis research based on resume data.
At the same time, this study has some shortcomings. On the one hand, the spatial entity location method relies mainly on Baidu Baike, a single source of information that is task-specific to a certain extent, so its generalization ability needs further verification. Future research can explore additional entity information sources to improve the accuracy and generalization of the spatial entity location method. On the other hand, in extracting the resume information of college teachers, the named entity recognition method used in this paper takes the normalized resume information item by item as input, so a certain amount of manual effort is needed for text chunk normalization. Subsequent research can consider event extraction methods to achieve fully automatic extraction of teacher resume information and reduce the manual cost of text normalization.

Figure 1. Example of crawled resumes of key university teachers. The resume has been shortened for display.

Figure 2. Framework of spatio-temporal information extraction and geocoding based on public resumes.

Figure 3. Example of Baidu Baike page content division.

Figure 4. Method of spatial entity geocoding based on Baidu Baike.

Table 1. The construction of a university teacher resume caption lexicon based on statistics.

Table 2. The rules and operations for university teacher resume normalization.

Table 3. Summary of time writing types in university teachers' resumes.

Table 5. Evaluation of the entity recognition results of the BERT-BiLSTM-CRF model by entity type.

Table 6. Evaluation indexes of each model on the test set.

Table 7. Results of the test experiment.