Article

Information Usage Behavior and Importance: Korean Scientist and Engineer Users of a Personalized Recommendation Service

Korea Institute of Science and Technology Information, Daejeon 305-806, Korea
*
Author to whom correspondence should be addressed.
Information 2019, 10(5), 181; https://doi.org/10.3390/info10050181
Submission received: 3 April 2019 / Revised: 10 May 2019 / Accepted: 14 May 2019 / Published: 24 May 2019
(This article belongs to the Section Information and Communications Technology)

Abstract

Background: We conducted research on the newly developed personalized recommendation service (PRS) of the global network of Korean scientists and engineers (KOSEN) in order to explore the information usage behavior and importance of the PRS used by Korean scientists and engineers. Methods: In order to understand information usage behavior, we gathered data from 513 survey responses and analyzed them in terms of information usage behavior and the corresponding importance in each of the service quality areas. Results: We analyzed the 321 responses that indicated non-use of the PRS in order to understand the underlying reason(s); we used the 192 responses that reported use of the service to examine information usage behavior and importance. We found that the predominant reason for non-use of the service was that respondents did not know how to use it. In terms of demographic characteristics, PRS usage behavior differed by gender and major field of study with respect to the purpose of using the service. Furthermore, users were concerned with various components of the PRS, such as ease of use, design, relevance of content, user support, and interactivity. Conclusions: We suggest reinforcing user education and promotion to enhance the PRS. Since users were concerned with ease of use, design, relevance, user support, and interactivity, we recommend these as the major points for improvement.

1. Introduction

1.1. Background and Purpose of the Research

As information is produced, reproduced, and expanded exponentially, users are able to obtain it through a variety of routes, but their exposure to an excessive amount of material poses a challenge to acquiring the knowledge they desire, and they waste a lot of time accessing content that does not meet their needs. Korean online providers of academic and scientific information offer only limited functions: they categorize research results in various forms and aggregate them from different fields of study, but sort them only by date of publication. This technological hindrance demands time and effort from users when acquiring the information they want, since they have to use approaches (such as filtering) to extract the appropriate material from the vast amount of data available. This problem also causes past research that a user in a specific field might find relevant to be buried under more recent knowledge. The authors of this study researched a personalized recommendation service (PRS), which overcomes the abovementioned obstacles and provides users with sufficient information. Since March of 2017, the Global Network of Korean Scientists and Engineers (KOSEN, www.kosen21.org), an academic science information platform that is the focus of this study, has operated a PRS based on big data; it automatically provides users with appropriate service menus and content based on their individual characteristics. The authors investigated the information usage behavior of people who use KOSEN’s PRS and analyzed the outcomes based on quality of service, thereby proposing a method to improve the system.
This study allows the system to be enhanced in a way that boosts the quality of service and reduces the time users need to acquire pertinent knowledge. Furthermore, it helps to prolong the lifecycle of valuable information by identifying older data that could apply to users in their respective fields, but which do not appear at the top of the search results. Section 2 examines the theoretical background of the PRS and the existing literature. Section 3 introduces the research on and analysis of user information behavior, while Section 4 deduces the implications and points of improvement, based on the outcomes of Section 3. The conclusion, Section 5, reviews the significance of this study and suggests directions for future research.

1.2. Objective of the Research

This study analyzes the current state of use, as well as the usage trends, of Korean scientists and engineers who access the PRS, the goal being to understand their information usage behavior and satisfaction rate. In terms of usage trends, the authors examined the effects of users’ demographic traits (age, gender, education level, major, and occupation) on such trends in addition to awareness, satisfaction rate, expectations, and the correlations among these components.
RQ1—Do the usage behaviors of PRS users show variance according to their demographic characteristics?
RQ2—Does the satisfaction rate for the service’s three areas of quality (system, content, and service support) show variance according to users’ demographic features?
RQ3—What are some priorities and key concerns of users regarding the service, and is there a significant correlation among the three areas of quality pertaining to them?

2. Theoretical Framework

2.1. Existing Literature

Research on the PRS for academic information in the field of library and information science is scarce, both in Korea and abroad. Existing literature relevant to the present study includes work on the information usage behavior of science and technology researchers, on the PRS itself, and on users’ satisfaction with it.

2.1.1. Information Behavior of Scientists and Engineers

The research on the information behavior of scientists and engineers can be classified broadly into that of scientists and engineers, undergraduate students, and graduate students.
Brown (1999) [1] used a survey to analyze the information search behavior of astronomers, chemists, mathematicians, and physicists at the University of Oklahoma. The results revealed that scientists and engineers rely heavily on academic journals to support their investigations and creative activities. An interesting observation was that despite the demand for more information services, most of the researchers preferred to access them through print rather than electronic media. Majid et al. (2000) [2] asserted that the information knowledge and seeking behavior of scientists and engineers plays an important role in efficiently satisfying information requests, and suggested that libraries could use this knowledge to rearrange their collections and facilities according to the requests of the scientific community. Furthermore, their findings illustrate that scientists and engineers prefer major information sources, especially journals and review articles.
To develop and assess a medical information system that reflects the information seeking behavior of doctors, Kim (2016) [3] carried out in-depth interviews with doctors on their information usage behavior in order to develop a knowledgeable source that could adequately fulfill their information requests; the results were reflected in a search system called MediSearching. The interview outcomes showed a difference in information usage behavior according to type of hospital or area of specialization. Whereas doctors at university hospitals had a large amount of information requests for research and used academic journals as their predominant means of obtaining data, doctors at private and special hospitals mostly received requests in terms of medical treatment, and satisfied their needs by having conversations with their fellow physicians.
In his research on the information usage behavior of science and engineering students at the undergraduate and graduate levels, Fidzani (1998) [4] studied the information seeking behavior and usage of information resources by graduate students at the University of Botswana. His overall objective was to identify requests and understand students’ degree of awareness regarding library services. He collected data on the graduate students’ information requests, and observed that a guideline for utilizing library resources and services was necessary in order to meet some of the requests.
Hemminger (2007) [5], in his research on the information seeking behavior of academic scientists, found that the information seeking behavior of academic scientists has been transformed by the availability of electronic resources for searching and retrieving scholarly materials. Significant changes in information seeking behavior were found, including increased reliance on web-based resources, fewer visits to the library, and communication of information that was almost entirely electronic. The evidence for this change was observed by analyzing the library records and the records from other information service organizations, and simple descriptive statistics were reported. Additionally, the analysis of results was broken out into basic science and medical science departments.
Lee (2016) [6] studied the entire processes of an information service with the actual users as the subjects in order to acquire the basic data to develop a service based on the users’ traits and requests, the goal being to revitalize university library information services. The outcomes indicated that critical components of an effective service include the user’s information and an analysis of the themes, successful consultation, communication techniques, and the information provider’s awareness and ability.

2.1.2. The PRS and Relevant Studies

Yoo (2017) [7] defined a recommendation system as a service that utilizes a user’s information or the content itself to select and present content that may be relevant to the user, thereby reducing the user’s effort in looking for pertinent data among the vast amount of material available. The R&D of the PRS began in the late 1990s, starting with the filtering technique, which was used for news and webpages. Recommendation techniques have become increasingly refined and sophisticated since the Netflix Prize competition hosted by the American company Netflix in 2006. Research on increasing the relevance of recommendations and the efficacy of services is ongoing.
Research on recommendation techniques has a relatively long history. The most prominent methods include demographic recommendations, content-based filtering, collaborative filtering, and hybrid filtering (which combines two approaches so that each complements the other’s limitations) (Jung and Lee, 2005) [8]. Content recommendation systems primarily use content-based recommendation algorithms, as well as collaborative filtering algorithms. Since recommendation techniques that employ collaborative filtering showed significantly superior outcomes in terms of accuracy compared to those with content-based filtering in the Netflix competition, the most recent studies have focused on the former.
Firstly, demographic recommendation is the most basic approach that predicts whether a user will be associated with content based on demographic traits (such as age and location) input by the user.
Secondly, content-based filtering calculates content’s relevance to the user based on its similarity to content in which the user has previously expressed interest. This method recommends content with a high degree of similarity to the user, and can be employed when the profiles of both user and content are known. One study—which proposed a technique that harnesses the user’s content usage behavior, as well as the content’s location information for the PRS (for content distribution networks such as the Internet and Internet Protocol television, or IPTV)—used collaborative and content-based filtering techniques (Kim, 2006) [9].
Thirdly, collaborative filtering [10,11] identifies users with similar preferences and recommends information that one user has to another user that does not have it. This is the most predominant filtering method used today. This technique generates results using big-data technology by analyzing various and abundant log data (such as real-time content reviews by users, content score, purchased content, number of uses, time spent on content, number of views, and user pathway to the content). It has been actively examined in various fields including broadcasting, social networking services (SNS), and the Internet of Things (IoT).
Lastly, hybrid filtering combines multiple recommendations in order to overcome the limitations of content-based and collaborative filtering. One of the most successful examples is the item-to-item collaborative filtering recently put into place by Amazon. This technique predicts the score a user may give to an item based on the information about items a user has previously purchased. It calculates the similarity between items using the correlation coefficient and cosine similarity in order to find ones similar to those the user has shown preference for, or those predicted to receive a high score [12,13,14,15,16].
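To make the item-to-item approach concrete, the following Python sketch computes item-item cosine similarities from a small, hypothetical user-item rating matrix and predicts an unrated item's score as a similarity-weighted average. It illustrates the general technique described above and is not the implementation used by Amazon or KOSEN.

```python
# Item-to-item collaborative filtering sketch using cosine similarity.
# The rating matrix is hypothetical (0 = not rated); this illustrates the general
# technique only, not any specific production system.
import numpy as np

R = np.array([[5, 3, 0, 1],        # rows = users, columns = items
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

n_items = R.shape[1]
S = np.array([[cosine(R[:, i], R[:, j]) for j in range(n_items)] for i in range(n_items)])

def predict(user, item):
    """Predicted score = similarity-weighted average of the user's ratings on other items."""
    rated = np.where(R[user] > 0)[0]
    weights = S[item, rated]
    return float(weights @ R[user, rated] / weights.sum()) if weights.sum() else 0.0

print(round(predict(user=1, item=2), 2))   # user 1's predicted score for unrated item 2
```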
Personalization is a broad concept used to signify providing data that corresponds to information requested by the user. Personalization can be classified as passive and active. The former offers the user relevant information or products based on his/her profile during the process of the information search that he/she has designated. It is primarily explored in information searches on word ambiguity and similar topics. Regarding the latter type, the service provider offers the user pertinent information based on his/her profile, which reflects his/her preferences as new data becomes available, before the user makes a request. It can be categorized as a PRS and is mainly studied in data mining (Kim, 2002) [17,18,19,20,21,22,23].
In his investigation of computer science, Kim, B.M. et al. (2004) [24] stated that information filtering can be classified into content-based and collaborate filtering techniques, and that the former uses information from content, whereas the latter employs the opinions of other users. He focused on the organic integration of both approaches in order to overcome the constraints of collaborative filtering, and proposed a method that uses its framework and enforces it with an improved usage of user profiles. Koh et al. (2017) [25] designed a smart mirror that uses IoT for a user personalization service. The smart mirror harnesses information on the Internet to provide users with real-time traffic data, news, schedules, and weather updates. It can also offer recommendation services by employing a user’s usage history [26,27].
In the field of library and information science, Kim (2006) [9,28,29,30,31,32,33,34,35,36,37,38] investigated a hybrid recommendation system for multimedia content based on frequency of use, and proposed a system that provides users with personalized multimedia content (as an active information service in libraries and information centers). He analyzed the pros and cons of conventional recommendation techniques. To solve their shortcomings, he put forward a hybrid recommendation system that employs the user’s frequency of use of content in a high-volume content environment. Park (2016) [39] suggested book curation as an information service that a school library webpage could offer, and derived the criteria for the curation, which are needed to design the system. In order to derive 12 criteria for recommendations, he examined a list of suggested books in a pre-existing system, and analyzed properties in the user information, as well as book information that could be used for the recommendation service. A survey that looked at users’ preferences for each of the criteria showed that most students believed that libraries need book curation services [40,41,42,43,44,45,46,47].

2.1.3. Satisfaction Rate of Information Service Users

User satisfaction is an active area of academic research addressed in various fields of study. The following is a survey of studies relevant to this paper’s theme, pertaining to user satisfaction with library information provision or academic information services.
Martensen and Gronholdt (2003) [28] described the development and application of a model that enables librarians to quantitatively measure quality, satisfaction rate, and user loyalty, and examined the degree to which collection, service, and environment affect the aforementioned elements.
Khan and Shafique (2011) [29] examined the role and importance of departmental libraries in satisfying students’ information requests. They assessed the satisfaction rate of users of departmental library services; the outcomes showed that most of the surveyees were satisfied with virtually all of the services provided by the departmental libraries. Furthermore, most of the surveyees requested more computers, digital collections, magazines, and newspapers from departmental libraries.
Seeholzer and Salem (2011) [30] examined expectations and satisfaction rates at Kent State University’s library website by focusing on both groups and individuals. The participants interacted with the library’s resources and services using a mobile website to a degree that surpassed expectations. The results revealed that the participants were employing the mobile website to use the research-related database, library catalogues, and reference system; they were also interested in contacting the library using text messages.
An analysis of the relevant research on the information behavior of scientists and engineers, the PRS, and user satisfaction with information services indicates a need to periodically investigate users’ information requests, as well as to evaluate the collection and service usability of science and technology information services. This study examines the information usage behavior of users of KOSEN’s PRS and proposes improvements to the system based on the findings.

2.2. Overview of KOSEN and PRS

2.2.1. Overview of KOSEN

KOSEN, an academic science information platform that is the subject of this study, was created as part of the Internationalization Foundation Construction Project by the Ministry of Science and Technology in order to connect Korean scientists and engineers dispersed throughout the globe in cyberspace. Korean scientists and engineers abroad can obtain news from science and engineering circles in Korea through KOSEN, and can contribute to their development by providing advanced information from overseas. Moreover, scientists and engineers in Korea can acquire such data promptly and establish a network that enables multi-dimensional exchanges, such as meeting prospective partners for international joint research. Individual scientists and engineers can use the information they find in KOSEN to enhance their own competence, the accumulation of which contributes to the capacity of Korea’s national science and engineering society. With these goals in mind, KOSEN launched its website in July 1999. As of December 2017, it has 130,000 active members. Since March of 2017, KOSEN has operated a PRS in-house based on big data that automatically provides users with relevant service menus and content based on their individual characteristics.

2.2.2. PRS of KOSEN

Since March of 2017, KOSEN has developed and operated a PRS based on big data that automatically provides users with relevant service menus and content based on their individual characteristics.
KOSEN’s PRS was developed using the service log data of about 140,000 members, together with service usage information and individual profiles for approximately 70,000 members.
We constructed the dataset from web log information, which contains activity information together with the accumulated user information stored in an Oracle database, and which records all details of the activity of members and non-members on the service pages accessed via PC or mobile.
In addition, the users of the system are classified into members and non-members. Members are further classified into normal, dormant, and new member categories. Users were clustered based on general member profile information and activity logs. For new members who have no activity logs, or for members who have not used the services for a long time, the system recommends services by analyzing the service utilization of other members in the same cluster. When recommending a service to a member who has a history of using the service, the service that member uses most frequently is recommended first. In addition, we intend to increase the overall utilization of services by recommending services that managers want to present to new members, dormant members, and members with a usage history.
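The paper does not specify the clustering algorithm or the feature encoding used by KOSEN. The following sketch only illustrates the cluster-based fallback described above, assuming (purely for illustration) a k-means clustering of numerically encoded member profiles and a "most used service in the cluster" rule.

```python
# Cluster-based fallback recommendation sketch. The clustering algorithm (k-means),
# the profile encoding, and the toy data below are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

profiles = np.array([              # hypothetical encoded member profiles,
    [0, 1, 3], [0, 1, 2],          # e.g., [major_code, degree_code, years_active]
    [1, 0, 7], [1, 0, 6],
    [0, 1, 1],
])
service_used = ["reports", "reports", "career", "career", "forum"]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)

def recommend_for_new_member(profile):
    """For a member with no activity log, return the service most used in their cluster."""
    cluster = kmeans.predict(np.array([profile]))[0]
    peers = [s for s, c in zip(service_used, kmeans.labels_) if c == cluster]
    return max(set(peers), key=peers.count)

print(recommend_for_new_member([0, 1, 4]))   # falls into the "reports" cluster
```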
Content recommended for users in each service is determined by considering user profile information together with activity history. We present recent content related to the services that the user has employed, and recommend content to members who have no activity history by using their profile information and user cluster information. The “major” field in the user profile information was considered the most important factor, followed by “keyword” and “information of interest”.
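The exact weights used to combine the "major", "keyword", and "information of interest" fields are not reported; the short sketch below only illustrates the stated ordering (major weighted most, then keyword, then interest) with hypothetical weights, items, and a simple matching rule.

```python
# Profile-weighted content ranking sketch. The ordering major > keyword > interest
# follows the text above; the numeric weights, fields, and items are assumptions.
WEIGHTS = {"major": 3.0, "keyword": 2.0, "interest": 1.0}

user = {"major": "life science", "keyword": {"genomics"}, "interest": {"ai"}}
items = [
    {"title": "Genomics trend report", "major": "life science", "tags": {"genomics", "sequencing"}},
    {"title": "Actuator control seminar", "major": "mechanical engineering", "tags": {"control"}},
]

def score(item):
    s = WEIGHTS["major"] * (item["major"] == user["major"])       # strongest signal
    s += WEIGHTS["keyword"] * len(item["tags"] & user["keyword"])  # keyword overlap
    s += WEIGHTS["interest"] * len(item["tags"] & user["interest"])
    return s

for item in sorted(items, key=score, reverse=True):
    print(round(score(item), 1), item["title"])
```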
After the initial login, the personalized main page is displayed immediately. On this main page, we tracked how much each user utilized the recommended services and content over the course of about one year. Satisfaction with the personalized service was also measured by conducting individual satisfaction surveys on the recommended services.

2.2.3. PRS Item Introduction

We configured the main page as shown in Figure 1.
1. Widget Service: A total of 14 service menus including messages, document folder, “My articles”, and “My café”. The information pertaining to each service is provided at the time of login, and the user is notified of new articles and related information at the time of posting.
2. Recommendation List Service: Provided after personalization is applied and after logging in. Information from nine menus (laboratory information, academy information, business announcements, KOSEN reports, trend reports, knowledge and sentiments forum, What is?, career opportunities, analysis requests) is provided based on the most frequently accessed services in the user group to which the user belongs.
3. Scheduler: Associates users with similar characteristics to derive preferred services of user groups to announce academy information, business announcements, career opportunities, and KOSEN timetables and events.
4. Card Service: Provided after personalization is applied and after logging in. Information from 11 menus (global news, knowledge curator, videos, overseas business trip support, overseas enterprises, KOSEN webzine, current affairs and discussions, infographics, Korean community events, activities, press releases), selected based on the services accessed most frequently in the user group to which the user belongs, is provided in card format.
5. Image Service: This service comprises 17 service menus that the administrator recommends in order to encourage users to utilize the services. The services recommended to logged-in and non-logged-in members are differentiated by the administrator settings.
6. Personalization Service Settings: User login activates the function to add a service in which the user is interested to the PRS tab. Users can add or change services in which they are interested by adjusting the settings.

3. Research and Analysis of User’s Information Usage Behavior

3.1. Design of the Research Instrument

The survey instrument is provided in Appendix A.
As shown in Table 1, the survey questions, which were designed to assess the information usage behavior of KOSEN users, consist of 20 questions in the three areas of (1) user characteristics, (2) PRS quality, and (3) PRS importance. The first dimension consists of 10 questions under two categories designed to assess users’ traits and PRS usage behavior. The second area comprises nine questions under three categories designed to assess the system (homepage), content, and service support. Third, PRS importance consists of one question designed to identify the elements of the PRS that the user deems important. The questions evaluating PRS quality and PRS importance used a five-point rating scale.

3.2. Research Subjects and Methodology

KOSEN users were selected as the research subjects. Although some KOSEN users do not use the PRS, the reasons for non-use are accounted for in Section 4, which discusses methods for improving the service.
To investigate the information usage behavior of KOSEN’s PRS, the authors sent online surveys to KOSEN users who consented to receiving e-mail communication. The authors gathered data for two weeks from March 6 to March 14, 2018 utilizing Google Docs. A total of 513 users responded; the authors analyzed the data from the 321 users who do not use the service as well as the 192 users who do.
The authors employed SPSS 23.0 to analyze the data. The steps of the analysis were as follows: First, the authors performed factor analysis and reliability tests on the survey questions, which served as the instrument of measurement. Second, the authors analyzed user behavior, the overall satisfaction rate, the basic statistical data from the assessment of the quality of service, and basic statistical data on importance. Third, the authors conducted a t-test and ANOVA to examine the overall satisfaction rate across categories such as gender, education level, and major field of study.
Lastly, the authors conducted correlation analysis to investigate the correlations among system, content, service support, and the overall satisfaction rate. They also performed multiple regression analysis to explore the degree of influence.

3.2.1. Factorial and Reliability Analysis of the Measurement Instrument

In order to assess the reliability of the collected data, the authors used Cronbach’s alpha coefficient to analyze the internal consistency reliability of the survey questions. Table 2 shows the results of the reliability assessment.
Using factor analysis, the authors grouped the three system components, the three content components, and the two service support components in order to simplify the variables. They conducted factor rotation using Varimax rotation (an orthogonal rotation), as well as the Kaiser–Meyer–Olkin (KMO) test and Bartlett’s test of sphericity. A KMO value above 0.7 indicates that the data are adequate for factor analysis; since the KMO value was 0.839, the authors deemed the data adequate. Next, Bartlett’s test of sphericity, which verifies whether factor analysis is appropriate, returned a significance value well below 0.05, indicating that the use of factor analysis was appropriate. The results of the factor analysis showed that there were two service support components: user support and interactivity.
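The study used SPSS 23.0 for these checks; as a reference only, the same adequacy tests and Varimax-rotated factor solution can be obtained in Python with the third-party factor_analyzer package (the package choice and the simulated item responses below are assumptions of this sketch, not part of the study).

```python
# Illustrative KMO / Bartlett / Varimax workflow (the study used SPSS 23.0;
# the factor_analyzer package and the simulated responses are assumptions).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(0)
latent = rng.normal(size=(192, 3))                       # three underlying factors
items = latent.repeat(3, axis=1) + rng.normal(0, 0.5, size=(192, 9))
df = pd.DataFrame(items, columns=[f"q{i + 1}" for i in range(9)])

kmo_per_item, kmo_model = calculate_kmo(df)              # sampling adequacy
chi_square, p_value = calculate_bartlett_sphericity(df)  # sphericity test
print(f"KMO = {kmo_model:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

fa = FactorAnalyzer(n_factors=3, rotation="varimax")     # orthogonal Varimax rotation
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2))
```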
Cronbach’s alpha assesses reliability as the likelihood of obtaining the same values if the measurement were repeated for the same concept. In academic papers, a value greater than 0.6 is conventionally taken to guarantee reliability; thus, internal consistency reliability was assured. In terms of the system components, ease of use has a reliability value of 0.750, an acceptable level of reliability. Design has a reliability value of 0.863, indicating a high level of reliability. Lastly, accessibility has a reliability value of 0.765, an acceptable level. In terms of the content components, sufficiency has a reliability value of 0.794, an acceptable level. Adequacy has a reliability value of 0.744, also acceptable. Lastly, utility has a reliability value of 1.000, a very high level of reliability. Third, for the service support components, user support has a reliability value of 0.861, indicating a high level of reliability, and interactivity has a reliability value of 0.810, which also indicates a high level of reliability.
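The figures above are Cronbach's alpha coefficients. As a reference for how such a coefficient is obtained, the short sketch below computes alpha directly from a hypothetical matrix of 5-point responses using the standard formula; it does not use the study's raw data.

```python
# Cronbach's alpha sketch on hypothetical 5-point responses
# (rows = respondents, columns = the items of one scale, e.g. the "ease of use" questions).
import numpy as np

X = np.array([[4, 5, 4],
              [3, 3, 4],
              [5, 5, 5],
              [2, 3, 2],
              [4, 4, 3]], dtype=float)

k = X.shape[1]                                  # number of items in the scale
item_var = X.var(axis=0, ddof=1).sum()          # sum of item variances
total_var = X.sum(axis=1).var(ddof=1)           # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(round(alpha, 3))                          # values above roughly 0.6-0.7 are usually deemed acceptable
```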
Furthermore, the authors conducted descriptive statistical analysis to obtain the descriptive statistical values of the variables.
As Table 3 shows, ease of use has an average of 3.72 and a standard deviation (SD) of 0.786. Design has an average of 3.58 and an SD of 0.828. Accessibility has an average of 3.54 and an SD of 0.734. Sufficiency has an average of 3.71 and an SD of 0.768. Adequacy has an average of 3.85 and an SD of 0.739. Utility has an average of 3.51 and an SD of 1.008. User support has an average of 3.41 and an SD of 0.818. Interactivity has an average of 3.60 and an SD of 0.766. Importance has an average of 3.92 and an SD of 0.712.

3.2.2. Basic Statistical Analysis

The authors employed basic statistical analysis to investigate the characteristics of the surveyees, their usage behavior of the PRS, and reason(s) for non-use in order to understand the overall usage trends of the service users. The investigation of the characteristics included demographic components such as age, gender, education level, major field of study, and occupation. The outcomes are shown in Table 4.
According to Table 4, in terms of age, 7.3% (14) of users were in their 20s, 31.3% (60) were in their 30s, 36.5% (70) were in their 40s, and 17.7% (14) were in their 60s. In terms of gender, 87.5% (168) were male and 12.5% (24) were female. Regarding education level, holders of bachelor’s degrees comprised 18.2% (35), master’s degrees 27.6% (53), and doctoral degrees 54.2% (104). In terms of major field of study, 2.6% (5) of users majored in construction/transportation; 5.2% (10) in science and technology and humanities and the social sciences; 7.8% (15) in mechanical engineering; 2.1% (4) in food, agriculture, forestry and fisheries sciences; 0.0% (0) in brain science; 2.1% (4) in physics; 8.3% (16) in healthcare; 22.4% (43) in life science; 0.5% (1) in mathematics; 2.1% (4) in energy and resources; 0.0% (0) in atomic energy; 1.6% (3) in cognitive-emotional science; 9.4% (18) in materials engineering; 6.3% (12) in electrical and electronic engineering; 7.8% (15) in information and communication; 0.5% (1) in earth science; and 6.8% (13) in chemical engineering. In terms of occupation, 41.7% (80) of users were researchers; 8.3% (16) were students; 9.9% (19) were professors; 30.7% (59) were company employees; and 9.4% (18) marked other.
The demographic characteristics (age, gender, highest level of education, major, and occupation) can be summarized as follows: the majority of the survey participants were in their 30s and 40s, and the respondents were overwhelmingly male. In addition, most participants’ highest degree was a master’s or doctoral degree. The largest group majored in life sciences, although majors were otherwise distributed fairly evenly, and more than 70% of the participants were researchers or company employees.

3.2.3. Analysis of PRS Usage Behavior According to Demographic Characteristics

The authors studied the following aspects of the surveyees’ PRS usage behavior: use or non-use of the service, reason(s) for non-use, primary purpose of using the service, number of visits, and service use time. The results are shown in Table 5.
As for the question on whether the user is accessing KOSEN’s PRS, 192 users answered “Yes” (37.4%) and 321 users said “No” (62.6%).
Regarding reason(s) for non-use, 80.6% (258) of users said that they “Do not know how to use it”, 2.5% (8) marked “Difficult to use”, 10.9% (35) stated “Unnecessary”, 1.6% (5) claimed that the “Recommendation results are not adequate”, and 4.4% (14) chose “Other”. The fact that the predominant reason for non-use is “Do not know how to use it” suggests that management needs to make stronger efforts at user education and promotion.
Regarding the primary purpose of using the service, 51.0% (98) of users answered “Information accessibility”; 24.0% (46) chose “Adequate information”; 9.4% (18) stated “Useful information”; 14.6% (28) selected “Variety of information provided”; and 1.0% (2) chose “Other”.
In terms of the number of visits, 64.1% (123) answered “1–7 times a week”; 26.0% (50) chose “1–7 times a month”; and 9.9% (19) selected “1–10 times a year”. For service use time, 82.3% (158) marked “Less than 1 h”; 16.1% (31) stated “1–2 h”; and 1.6% (3) chose “3 h or more”. The primary purpose of using the service was information accessibility, which enables users to access the information they desire rapidly. The second most chosen reason was to obtain relevant knowledge.
The authors conducted a cross-tabulation (chi-square) test to examine the variance in service usage behavior according to demographic characteristics. They investigated the differences in the primary purpose of using the service, the number of visits, and service use time.
First, regarding whether the purpose of using the service varies according to demographic traits, gender and major field of study showed statistically significant differences. Looking at gender, 54.2% (91) of males selected “Information accessibility”; 20.2% (34) chose “Adequate information”; 9.5% (16) marked “Useful information”; 15.5% (26) answered “Variety of information provided”; and 0.6% (1) stated “Other”. As for females, 29.2% (7) chose “Information accessibility”; 50.0% (12) selected “Adequate information”; 8.3% (2) chose “Useful information”; 8.3% (2) answered “Variety of information provided”; and 4.2% (1) marked “Other”. The verification statistics show χ² = 11.700 with a significance probability of 0.020, which indicates a statistically significant difference. Regarding major field of study, the verification statistics show χ² = 86.887 with a significance probability of 0.030, which also demonstrates a statistically significant difference. Age, education level, and occupation did not show statistically significant differences.
Second, as for whether the number of visits varied according to demographic features, education level showed a statistically significant difference. Among users with bachelor’s degrees, 48.6% (17) marked “1–7 times a week”; 40.0% (14) chose “1–7 times a month”; and 11.4% (4) selected “1–10 times a year”. Among users with master’s degrees, 54.7% (29) marked “1–7 times a week”; 24.5% (13) stated “1–7 times a month”; and 20.8% (11) chose “1–10 times a year”. Among holders of doctoral degrees, 74.0% (77) selected “1–7 times a week”; 22.1% (23) chose “1–7 times a month”; and 3.8% (4) answered “1–10 times a year”. The verification statistics show χ² = 17.175 with a significance probability of 0.002, which indicates a statistically significant difference. Age, gender, major field of study, and occupation did not show statistically significant differences.
Third, regarding whether service use time varied according to demographic elements, gender, age, education level, major field of study, and occupation all showed no statistically significant differences.
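For reference, the gender-by-purpose comparison above corresponds to a standard chi-square test of independence on the cross-tabulated counts. The sketch below uses the counts quoted in the preceding paragraphs; the statistic obtained this way may differ somewhat from the reported value of 11.700 if the original analysis grouped sparse categories differently.

```python
# Chi-square test of independence on the gender-by-purpose cross-tabulation
# (counts taken from Section 3.2.3; category handling may differ from the original analysis).
from scipy.stats import chi2_contingency

#          accessibility, adequate, useful, variety, other
table = [[91, 34, 16, 26, 1],     # male respondents
         [ 7, 12,  2,  2, 1]]     # female respondents
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```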

3.2.4. Demographic Analysis of the Satisfaction Rate for System, Content, and Service Support

The three areas of PRS quality can be divided into system, content, and service support. System can be classified under ease of use, design, and accessibility. Content can be categorized as sufficiency, adequacy, and utility. Service support can be broken down into user support and interactivity.
Table 6 shows the variance in the overall satisfaction rate for the areas of quality according to gender. The authors used a t-test to compare the genders, with the results showing that variance in the overall satisfaction rate according to gender was marginal and therefore statistically insignificant.
As with gender, the variance in the overall satisfaction rate according to age, education level, and major field of study was also marginal and thus statistically insignificant. However, the ANOVA used to measure variance according to occupation produced a statistically significant variance, as shown in Table 7 (see the sketch following the list below).
1. Regarding ease of use, the average for researchers (a) was 3.59; for students (b), 3.38; for professors (c), 3.79; for company employees (d), 3.97; and for others (e), 3.78. The verification statistics show the F value to be 2.966 and the significance probability to be 0.021. Hence, there is a statistically significant variance in terms of ease of use.
2. Regarding design, the average for researchers (a) was 3.41; for students (b), 3.16; for professors (c), 3.82; for company employees (d), 3.79; and for others (e), 3.75. The verification statistics show the F value to be 3.557 and the significance probability to be 0.008. Therefore, there is a statistically significant variance in terms of design.
3. Regarding accessibility, the average for researchers (a) was 3.44; for students (b), 3.06; for professors (c), 3.58; for company employees (d), 3.81; and for others (e), 3.53. The verification statistics show the F value to be 4.245 and the significance probability to be 0.003. Therefore, there is a statistically significant variance in terms of accessibility.
4. Regarding sufficiency, the average for researchers (a) was 3.50; for students (b), 3.41; for professors (c), 3.71; for company employees (d), 4.02; and for others (e), 3.86. The verification statistics show the significance probability to be 0.001. Therefore, there is a statistically significant variance in terms of sufficiency.
5. Regarding adequacy, the average for researchers (a) was 3.71; for students (b), 3.53; for professors (c), 3.87; for company employees (d), 4.08; and for others (e), 3.94. The verification statistics show the F value to be 3.209 and the significance probability to be 0.014. Hence, there is a statistically significant variance in terms of adequacy.
6. Regarding utility, the average for researchers (a) was 3.35; for students (b), 3.50; for professors (c), 3.21; for company employees (d), 3.76; and for others (e), 3.67. The verification statistics show the F value to be 2.000 and the significance probability to be 0.096. Therefore, there is no statistically significant variance in terms of utility.
7. Regarding user support, the average for researchers (a) was 3.23; for students (b), 3.28; for professors (c), 3.55; for company employees (d), 3.64; and for others (e), 3.44. The verification statistics show the F value to be 2.473 and the significance probability to be 0.046. Thus, there is a statistically significant variance in terms of user support.
8. Regarding interactivity, the average for researchers (a) was 3.44; for students (b), 3.47; for professors (c), 3.63; for company employees (d), 3.83; and for others (e), 3.61. The verification statistics show the F value to be 2.434 and the significance probability to be 0.049. Hence, there is a statistically significant variance in terms of interactivity.
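The occupation comparisons above are one-way ANOVAs on 5-point ratings. As a minimal, self-contained reference (with hypothetical ratings rather than the study's raw responses), such a test can be run as follows.

```python
# One-way ANOVA sketch across occupation groups (hypothetical 5-point ratings;
# the study's raw survey responses are not published here).
from scipy.stats import f_oneway

researchers = [4, 3, 4, 3, 4, 3]
students    = [3, 3, 4, 3, 3]
professors  = [4, 4, 3, 4]
employees   = [4, 5, 4, 4, 5]
others      = [4, 3, 4]

F, p = f_oneway(researchers, students, professors, employees, others)
print(f"F = {F:.3f}, p = {p:.3f}")   # p < 0.05 would indicate a significant difference between groups
```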

3.2.5. Analysis of the Importance in Determining Factors in the Three Quality Areas of the PRS and the Interactivity of its Components

The authors conducted multiple regression analysis to determine the effect of system, one of the three quality areas, on the importance of the recommendation service. Table 8 shows the results.
Model F has a value of 28.725 and can be considered a statistically significant regression model. The R-squared of the regression analysis is the equivalent of the coefficient of determination, and signifies the proportion of variance in the dependent variable, which can be explained by the variable element. The R-squared, at 31.4%, indicates a high degree of explanation. The variance inflation factor (VIF) value can range from 1 to infinity, and the values between 1 and 10 indicate no problem of multi-collinearity. Since the VIF is below 10, there is no problem of multi-collinearity. Since the outcome of the Durbin–Watson statistic is close to 2, there is no autocorrelation, and the residuals are independent of each other. Therefore, there is no problem with the variables. The standard significance level is 0.05 (95%). Results lower than 0.05 are statistically significant, while those higher than 0.05 are not.
Looking at ease of use in the regression analysis, the B value is 0.322. In terms of verification statistics, the t value is 4.249, and the probability of significance is 0.000, indicating a statistically significant effect. Since the value of the standardized beta is 0.356, increasing ease of use by 1 unit increases importance by 0.356 (35.6%). In terms of design, the B value is 0.247. Regarding verification statistics, the t value is 3.341 and the probability of significance is 0.001, demonstrating a statistically significant effect. As the value of the standardized beta is 0.288, increasing design by 1 unit increases importance by 0.288 (28.8%). Looking at accessibility, the B value is −0.036. For verification statistics, the t value is −0.451 and the probability of significance is 0.653, suggesting no statistically significant effect.
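The regressions summarized in Tables 8–10 can be reproduced with ordinary least squares together with the VIF and Durbin–Watson diagnostics described above. The sketch below uses simulated 5-point ratings (the study's raw responses are not published here), so its coefficients will not match the reported values; it only shows how B, t, p, R-squared, VIF, and the Durbin–Watson statistic are obtained.

```python
# Multiple regression sketch of the kind reported in Tables 8-10
# (simulated 5-point ratings; coefficients will not match the study's values).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 192
df = pd.DataFrame({
    "ease_of_use":   rng.integers(1, 6, n),
    "design":        rng.integers(1, 6, n),
    "accessibility": rng.integers(1, 6, n),
})
df["importance"] = (0.4 * df["ease_of_use"] + 0.3 * df["design"]
                    + rng.normal(0, 0.7, n)).clip(1, 5)

X = sm.add_constant(df[["ease_of_use", "design", "accessibility"]])
model = sm.OLS(df["importance"], X).fit()

print(model.summary())                                    # B, t, p, R-squared
print("Durbin-Watson:", round(durbin_watson(model.resid), 2))
print("VIF:", [round(variance_inflation_factor(X.values, i), 2) for i in range(1, X.shape[1])])
```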
Next, the authors conducted multiple regression analysis to determine the effect of recommended content on the importance of the recommendation service. Table 9 displays the outcomes.
Model F has a value of 23.886 and can be considered a statistically significant regression model. The R-squared of regression analysis is the equivalent of the coefficient of determination and signifies the proportion of the variance in the dependent variable, which can be explained by the variable element. The R-squared, at 27.6%, indicates a high degree of explanation. The VIF value can range from 1 to infinity, and the values between 1 and 10 indicate no problem of multi-collinearity. As the VIF is below 10, there is no problem of multi-collinearity. Since the result of Durbin–Watson is close to 2, there is no autocorrelation, and the residuals are independent of each other. Hence, there is no problem with the variables. The standard significance level is 0.05 (95%). Results higher than 0.05 are not statistically significant, while those lower than 0.05 are.
Looking at sufficiency in the regression analysis, the B value is 0.137. In terms of verification statistics, the t value is 1.714 and the probability of significance is 0.088, indicating no statistically significant effect. Looking at adequacy, the B value is 0.357. In terms of verification statistics, the t value is 4.257 and the probability of significance is 0.000, suggesting a statistically significant effect. Since the value of the standardized beta is 0.371, increasing adequacy by 1 unit increases importance by 0.371 (37.1%). Looking at utility, the B value is 0.053. In terms of verification statistics, the t value is 1.037 and the probability of significance is 0.302, signaling no statistically significant effect.
Lastly, the authors conducted multiple regression analysis to determine the effect of service support on the importance of the recommendation service. Table 10 shows the outcomes.
Model F has a value of 27.809 and can be considered a statistically significant regression model. The R-squared of regression analysis is the equivalent of the coefficient of determination, and signifies the proportion of variance in the dependent variable, which can be explained by the variable element. The R-squared, at 22.7%, signals a high degree of explanation. The VIF value can range from 1 to infinity, and values between 1 and 10 indicate no problem of multi-collinearity. Since the VIF is below 10, there is no problem of multi-collinearity. Given that the outcome of the Durbin–Watson statistic is close to 2, there is no autocorrelation, and the residuals are independent of each other. Hence, there is no issue with the variables. The standard significance level is 0.05 (95%). Results lower than 0.05 are statistically significant, while those higher than 0.05 are not.
In terms of user support in the regression analysis, the B value is 0.197. Regarding verification statistics, the t value is 2.184 and the probability of significance is 0.030, suggesting a statistically significant effect. Since the value of the standardized beta is 0.227, increasing user support by 1 unit increases importance by 0.227 (22.7%). Looking at interactivity, the B value is 0.258. For verification statistics, the t value is 2.672 and the probability of significance is 0.008, implying a statistically significant effect. Since the value of the standardized beta for interactivity is 0.227, increasing interactivity by 1 unit increases importance by 0.227 (22.7%).

3.2.6. Pearson’s Correlation Analysis for Examining the Correlation among the PRS Components

In order to examine whether there is a significant correlation among the components of the service’s three areas of quality, the authors employed a Pearson correlation analysis, as shown in Table 11.
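Table 11 is a pairwise Pearson correlation matrix over the eight quality components and importance. As a reference, such a matrix can be computed directly from a response data frame; the data below are simulated purely for illustration.

```python
# Pearson correlation matrix sketch for a table like Table 11
# (simulated ratings; pandas computes Pearson's r pairwise by default).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
base = rng.normal(0, 1, 192)                   # shared component induces positive correlation
df = pd.DataFrame({
    "ease_of_use": base + rng.normal(0, 1, 192),
    "design":      base + rng.normal(0, 1, 192),
    "importance":  base + rng.normal(0, 1, 192),
})
print(df.corr().round(3))
```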

4. Implications

The authors conducted a user survey on KOSEN’s PRS to examine the information usage behavior of Korean scientists and engineers who access the PRS. In order to investigate the usage status of the service, the authors analyzed usage behavior by employing statistically significant data. They also explored the importance of the components of service quality, as well as the correlations among them.
The analysis yielded the following outcomes in regard to the three research questions. The results enabled an understanding of the information usage behavior of Korean scientists and engineers in relation to the PRS, as well as of the effects of the three areas of quality (system, content, and service support) on the perceived importance of the service. A method for improving the system is proposed in the following section.
First, regarding the answer to whether PRS usage behavior varies based on users’ demographic traits, in terms of gender, the primary purpose of using the service for males was information accessibility, whereas for females, it was the adequacy of the suggested information. The differences according to gender as well as major field of study were statistically significant. There was no statistically significant difference based on age, education level, or occupation.
Secondly, the answer to whether the satisfaction rate of the service’s three areas of quality (system, content, and service support) shows variance according to users’ demographic attributes was that the variance in the overall satisfaction rate according to gender was marginal. As with gender, the variance in the overall satisfaction rate based on age, education level, and major field of study was also marginal and therefore statistically insignificant. However, the ANOVA used to measure the variance according to occupation revealed a statistically significant variance in seven out of eight components (ease of use, design, accessibility, sufficiency, adequacy, user support, and interactivity) of the three areas of quality, excluding utility. Regarding the number of visits, there was only a statistically significant difference in terms of education level. For the difference in service use time according to demographic traits, neither age, gender, education level, major field of study, nor occupation showed a statistically significant difference in information usage behavior.
Third, the answer to what some of users’ key concerns are, and if there is a significant correlation among the three areas of quality that pertain to them, was as follows:
The authors conducted multiple regression analysis in order to determine the effect of system (one of the three areas of quality) on the importance of the recommendation service. Looking at ease of use, the B value is 0.322 and the probability of significance is 0.000, indicating a statistically significant effect. For design, the B value is 0.247 and the probability of significance is 0.001, suggesting a statistically significant effect. However, in terms of accessibility, the B value is −0.036 and the probability of significance is 0.653, demonstrating no statistically significant effect.
In the analysis of the effect of content on the importance of the recommendation service, sufficiency has a B value of 0.137 and a probability of significance of 0.088, implying no statistically significant effect. Looking at adequacy, the B value is 0.357 and the probability of significance is 0.000, pointing to a statistically significant effect. For utility, the B value is 0.053 and the probability of significance is 0.302, reflecting no statistically significant effect.
Lastly, in the analysis of the effect of service support on the importance of the recommendation service, user support has a B value of 0.197 and a probability of significance of 0.030, signaling a statistically significant effect. Interactivity has a B value of 0.258 and a probability of significance of 0.008, revealing a statistically significant effect.
To summarize the outcomes, of the eight components of the three areas of quality, ease of use, design, adequacy, user support, and interactivity had a statistically significant amount of effect on the importance of the PRS. This means that users of the recommendation service valued ease of use and efficient design (the system components), as well as adequacy of information (the content component). Users considered user support and interactivity (the service components) to be as important as the recommendation service itself.
The answer as to whether there was a significant correlation among the components of the three areas of quality was that all eight components showed a statistically significant correlation with importance. For system quality, ease of use and importance had a correlation coefficient of 0.520, design and importance had a correlation coefficient of 0.495, and accessibility and importance had a correlation coefficient of 0.356, all demonstrating a statistically significant correlation. For content quality, sufficiency and importance had a correlation coefficient of 0.434, adequacy and importance had a correlation coefficient of 0.507, and utility and importance had a correlation coefficient of 0.321, all suggesting a statistically significant correlation. For service support quality, user support and importance had a correlation coefficient of 0.445, while interactivity and importance had a correlation coefficient of 0.456.
Moreover, a significant correlation with a correlation coefficient of 0.6 and above was present between ease of use and design, ease of use and sufficiency, ease of use and adequacy, design and accessibility, design and sufficiency, design and adequacy, design and interactivity, sufficiency and adequacy, sufficiency and interactivity, adequacy and user support, and user support and interactivity.

5. Conclusions

The authors derived the following conclusions based on an examination of pre-existing studies and the analysis of the user survey.
1. The predominant reason for non-use of the PRS was that users did not know how to use it; 80.6% of surveyees marked this reason for their non-use.
2. In terms of the difference in the information usage behavior of PRS users according to demographic characteristics, gender and major field of study revealed a statistically significant difference in the purpose of using the service, and education level presented a statistically significant difference in the number of visits. No other significant differences were observed.
3. In the three areas of quality (system, content, and service support), the variance in the satisfaction rate for each according to demographic traits (gender, age, education level, and major field of study) was marginal. However, regarding the service satisfaction rate per occupation, there was a statistically significant variance in seven out of the eight components of the three areas of quality (ease of use, design, accessibility, sufficiency, adequacy, user support, and interactivity), excluding utility.
4. Of the eight components, five (ease of use, design, adequacy, user support, and interactivity) demonstrated a statistically significant effect on the importance of the PRS, meaning that these are the components users were most concerned with and deemed most important in the service.

5.1. Proposals to Improve the PRS

Based on the above conclusion, the authors propose the following to enhance the efficiency of the service and promote its use.
1. The fact that the main reason for non-use was users lacking the necessary knowledge suggests a need for more active and in-depth user education and promotion. In terms of user education, the advantages of using the service should be highlighted, and the method of using the service should be taught in detail. The promotion of the service should not be limited to public announcements on the webpage, but rather, awareness should be increased using various webzines and other media.
2. The usage behavior of PRS users was not significantly affected by their demographic features. However, some of the usage behaviors that did show statistically significant differences—such as that the primary purpose of using the service was information accessibility for males and information adequacy for females—could be used for the promotion and user education of the PRS.
3. In order to respond to information requests from users of various occupations (whose satisfaction rates with each of the three areas of quality were investigated), the detailed components of the recommendation service must be improved.
4. The fact that users were concerned with the ease of use, design, adequacy, user support, and interactivity of the PRS suggests that improvement efforts should focus on ease of use and efficient design (the system components), adequacy of information (the content component), and user support and interactivity (the service components), which the users considered to be as important as the recommendation service itself.

5.2. Limitations of the Present Study and Future Research

The present study offers an analysis of a user survey on a single service, KOSEN’s PRS, so the results cannot be generalized. Future research should broaden the scope of analysis to include multiple PRSs.

Author Contributions

Formal analysis, S.E.P.; Supervision, H.-S.C.; Writing – original draft, J.-H.P.; Writing – review & editing, Y.-Y.H.

Funding

This research was funded by Development of S&T Knowledge Infrastructure Convergence Service, grant number K-19-L01-C05-S01.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A—Survey

I. This question is about using KOSEN’s PRS.
1. Are you using KOSEN’s PRS?
  (1) Yes
   * If you have used or are currently using PRS (go to Question 3)
  (2) No
   * If you have not used PRS or have used it but are not using it at present (answer Q2 only)
2. If you are not using PRS, why?
  (1) I do not know how to use it
  (2) It is inconvenient to use
  (3) Not necessary
  (4) Suggested service results are not appropriate
  (5) Other (        )
3. What is the primary purpose of using PRS?
  (1) I can get quick access to the information I want
  (2) It provides information appropriate to my needs
  (3) Useful for my learning and research
  (4) It provides a variety of information
  (5) Other (        )
4. How often do you use PRS?
  (1) 1–7 times a week  (2) 1–7 times a month  (3) 1–10 times a year
5. How much time do you spend on PRS on average, per visit?
  (1) Less than 1 h  (2) 1–2 h  (3) More than 3 h
II. Here are questions about using PRS. Please respond based on your experience in the last 6 months.
6. PRS system (homepage) related
6-1. Questions about the design of PRS
Item | Answer: Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree
KOSEN’s PRS uses easy-to-view design (color, icon, image, layout, etc.)
PRS information is displayed for easy understanding
Finding the information I’m looking for and related information (navigation) is easy
6-2. This is a question about the usability of PRS.
Item | Answer: Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree
KOSEN site connection and recommendation speed is fast
I can download the information I’m looking for
6-3. Questions about the accessibility of PRS
  (Rate each item: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
  - KOSEN PRS is available on PC and mobile
  - Dead links are checked, fixed, and maintained by the site administrators
  - PRS is available without using a separate device
7. Questions about the content recommended by PRS
7-1. Questions about the adequacy of the content that PRS recommends
  (Rate each item: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
  - Information related to me or my field is provided
  - The information provided by the recommendation service is reliable
  - PRS provides up-to-date information
7-2. Questions about the sufficiency of the content that PRS recommends
  (Rate each item: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
  - The information provided by PRS is comprehensive
  - PRS provides information in various formats (PDF, MS Office, etc.)
7-3. Questions about the usefulness of the content that PRS recommends
  (Rate each item: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
  - The information provided by the recommendation service is useful for my learning/research
  - I have used the information provided by the recommendation service in my thesis, assignments, experiments, etc.
8. Questions about service support for PRS
8-1. Questions about user support for PRS
  (Rate each item: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
  - Support for using the recommendation service is provided (tips, FAQs, Q&A, etc.)
  - Staff are available to provide professional support for the recommendation service
8-2. Questions about the interactivity of the personalized recommendation service
  (Rate each item: Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
  - Instructions and instructional materials on the recommendation service are provided
  - Information on the recommendation service is provided via email or online inquiries
9. Please describe any inconveniences you experienced or improvements you would suggest regarding PRS.
                                            
                                            
                                            
III. Questions about the importance of PRS by variable.
10. Please rate how important each of the following is for PRS.
  (Rate each item: Not very important / Not important / Neutral / Important / Very important)
  - System (homepage): design, usability, accessibility
  - Recommended content: adequacy, sufficiency, and usefulness
  - Service support: user support and interactivity
IV. General questions.
11. What is your age?
12. What is your gender?
  (1) Male
  (2) Female
13. What is your highest level of education?
  (1) Bachelor’s degree
  (2) Master’s degree
  (3) Doctor’s degree
14. What is your major?
  (1) Construction/Transportation
  (2) Science and technology, humanities, and social science
  (3) Mechanical engineering
  (4) Food, agriculture, forestry and fisheries
  (5) Brain science
  (6) Physics
  (7) Health care
  (8) Life science
  (9) Mathematics
  (10) Energy/Resources
  (11) Atomic energy
  (12) Cognitive/Emotional science
  (13) Material engineering
  (14) Electrical/Electronic engineering
  (15) Information/Communication
  (16) Earth science
  (17) Chemical engineering
  (18) Chemistry
  (19) Environmental engineering
15. What is your occupation?
  (1) Researcher
  (2) Student
  (3) Professor
  (4) Company employee
  (5) Others

References

  1. Brown, C.M. Information-seeking behaviour of scientists in the electronic information age: Astronomers, chemists, mathematicians, and physicists. J. Am. Soc. Inf. Sci. 1999, 50, 929–943. [Google Scholar] [CrossRef]
  2. Majid, S.; Anwar, M.A.; Eisenschitz, T.S. Information needs and information-seeking behavior of agricultural scientists in Malaysia. Libr. Inf. Sci. Res. 2000, 22, 145–163. [Google Scholar] [CrossRef]
  3. Kim, N.-w. Construction and Feedback of an Information System by Analyzing Physicians’ Information-Seeking Behavior. J. Korean Soc. Inf. Manag. 2016, 33, 161–180. [Google Scholar]
  4. Fidzani, B.T. Information needs and information-seeking behaviour of graduate students at the University of Botswana. Libr. Rev. 1998, 47, 329–340. [Google Scholar] [CrossRef]
  5. Hemminger, B.M.; Lu, D.; Vaughan, K.T.L.; Adams, S.J. Information seeking behavior of academic scientists. J. Am. Soc. Inf. Sci. Technol. 2007, 58, 2205–2225. [Google Scholar] [CrossRef]
  6. Lee, L.-J. An Exploratory Study of Information Services Based on User’s Characteristics and Needs. J. Korean Biblia Soc. Libr. Inf. Sci. 2016, 27, 291–312. [Google Scholar]
  7. Yoo, Y. Evaluation of Collaborative Filtering Methods for Developing Online Music Contents Recommendation System. Trans. Korean Inst. Electr. Eng. 2017, 66, 1083–1091. [Google Scholar]
  8. Jung, K.-Y.; Lee, J.-H. Comparative Evaluation of User Similarity Weight for Improving Prediction Accuracy in Personalized Recommender System. J. Inst. Electron. Eng. Korea Comput. Inf. 2005, 42, 63–74. [Google Scholar]
  9. Kim, Y.; Moon, S.-B. A Study on Hybrid Recommendation System Based on Usage Frequency for Multimedia Contents. J. Korean Soc. Inf. Manag. 2006, 23, 91–125. [Google Scholar]
  10. Lee, Y.-J.; Lee, S.-H.; Wang, C.-J. Improving Sparsity Problem of Collaborative Filtering in Educational Contents Recommendation System. In Proceedings of the 30th KISS Spring Conference, Jeju University, Jeju, Korea, 23–24 April 2003; Volume 30, pp. 830–832. [Google Scholar]
  11. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International World Wide Web Conference, Hong Kong, China, 1–5 May 2001; pp. 285–295. [Google Scholar]
  12. Kim, J.-S.; Kim, T.-Y.; Choi, J.-H. Dynamic Recommendation System Using Web Document Type and Document Similarity in Cluster. J. KIISE Softw. Appl. 2004, 31, 586–594. [Google Scholar]
  13. Balabanovic, M.; Shoham, Y. Fab: Content-based, collaborative recommendation. Commun. ACM 1997, 40, 66–72. [Google Scholar] [CrossRef]
  14. Billsus, D.; Pazzani, M. Learning Collaborative information filters. In Proceedings of the International Conference on Machine Learning, San Francisco, CA, USA, 24–27 July 1998; pp. 46–54. [Google Scholar]
  15. Burke, R. Hybrid Recommender Systems: Survey and Experiments. User Model. User Adapt. Interact. 2002, 12, 331–370. [Google Scholar] [CrossRef]
  16. Linden, G.; Smith, B.; York, J. Amazon.com recommendations: Item-to-item collaborative filtering. Internet Comput. IEEE 2003, 7, 76–80. [Google Scholar] [CrossRef]
  17. Kim, H.-H.; Khu, N.-Y. A Study on the Design and Evaluation of the Model of MyCyber Library for a Customized Information Service. J. Korean Soc. Inf. Manag. 2002, 19, 132–157. [Google Scholar]
  18. Bae, K.-J. The Analysis of the Differences of Information Needs and Usages among Academic Uses in the Field of Science and Technology. J. Korean Soc. Libr. Inf. Sci. 2010, 44, 157–176. [Google Scholar]
  19. Yoo, S.R. User-oriented Evaluation of NDSL Information Service. J. Korean Soc. Libr. Inf. Sci. 2002, 36, 25–40. [Google Scholar]
  20. Kwak, B.-H. A Study on Information Seeking Behavior of University Libraries Users. J. Korean Libr. Inf. Sci. Soc. 2004, 35, 257–281. [Google Scholar]
  21. Yoo, J.O. A Study on Academic Library User’s Information Literacy. J. Korean Biblia Soc. Libr. Inf. Sci. 2004, 15, 241–254. [Google Scholar]
  22. Lee, J.-Y.; Han, S.-H.; Joo, S.-H. The Analysis of the Information Users’ Needs and Information Seeking Behavior in the Field of Science and Technology. J. Korea Soc. Inf. Manag. 2008, 25, 127–141. [Google Scholar]
  23. Fescemyer, K. Information-seeking behavior of undergraduate geography students. Res. Strateg. 2000, 17, 307–317. [Google Scholar] [CrossRef]
  24. Kim, B.-M.; Li, Q.; Kim, S.-G. A New Approach Combining Content-Based Filtering and Collaborative Filtering for Recommender Systems. J. KIISE Softw. Appl. 2004, 31, 332–342. [Google Scholar]
  25. Ko, H.-M.; Kim, S.-m. Design and Implementation of Smart-Mirror Supporting Recommendation Service based on Personal Usage Data. Kiise Trans. Comput. Pract. 2017, 23, 65–73. [Google Scholar] [CrossRef]
  26. Chen, Y.; Liu, Y.; Zhou, C. Web service success factors from users’ behavioral perspective. In Computer Supported Cooperative Work in Design III; Shen, W., Luo, J., Lin, Z., Barthès, J.P.A., Hao, Q., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 540–548. [Google Scholar]
  27. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  28. Martensen, A.L.G. Improving Library Users’ Perceived Quality, Satisfaction and Loyalty: An Integrated Measurement and Management System. J. Acad. Librariansh. 2003, 29, 140–147. [Google Scholar] [CrossRef]
  29. Khan, S.A.; Shafique, F. Role of Departmental Library in Satisfying the Information Needs of Students: A Survey of Two Departments of the Islamia University of Bahawalpur. Pak. J. Libr. Inf. Sci. 2011, 12, I1–I6. [Google Scholar]
  30. Seeholzer, J.; Salem, J.A. Library on the go: A focus group study of the Mobile Web and the academic. Coll. Res. Libr. 2011, 72, 9–20. [Google Scholar] [CrossRef]
  31. Chang, Y.-K. A study of e-service quality and user satisfaction in public libraries. J. Korean Soc. Libr. Inf. Sci. 2007, 41, 315–329. [Google Scholar]
  32. Hwang, J.-Y. Influences of Digital Library E-Service Quality on Customer Satisfaction. Master’s Thesis, Ajou University, Suwon, Korea, 2007. [Google Scholar]
  33. Park, J.-S.; Son, J.-H. An Effect of Service Value of Korea Search Engine on Customer Satisfaction. Korea Inst. Enterp. Archit. 2008, 5, 79–90. [Google Scholar]
  34. Nam, Y.-J.; Choi, S.-E. A Study on User Satisfaction with e-Book Services in University Libraries. J. Korean Soc. Libr. Inf. Sci. 2011, 45, 287–310. [Google Scholar]
  35. Oliver, R.L. Cognitive, Affective, and Attribute Base of the Satisfaction Responses. J. Consum. Res. 1993, 20, 418–430. [Google Scholar] [CrossRef]
  36. Bridges, L.; Rempel, H.G.; Griggs, K. Making the case for a mobile library Web site. Ref. Serv. Rev. 2010, 38, 309–320. [Google Scholar] [CrossRef]
  37. Byun, D.H.; Finnie, G. Evaluating usability, user satisfaction and intention to revisit for successful e-government websites. Electron. Gov. Int. J. 2011, 8, 1–19. [Google Scholar] [CrossRef]
  38. Khan, A.; Ahmed, S. The impact of digital library resources on scholarly communication: Challenges and opportunities for university libraries in Pakistan. Libr. Hi Tech News 2013, 30, 12–29. [Google Scholar] [CrossRef]
  39. Park, Y.-h. A Study on the Book Recommendation Standards of Book-Curation Service for School Library. J. Korean Libr. Inf. Sci. Soc. 2016, 47, 279–303. [Google Scholar]
  40. Osborne, R. Open access publishing, academic research and scholarly communication. Online Inf. Rev. 2015, 39, 637–648. [Google Scholar] [CrossRef]
  41. Suarez-Torrente, M.D.C.; Conde-Clemente, P.; Martínez, A.B.; Juan, A.A. Improving web user satisfaction by ensuring usability criteria compliance: The case of an economically depressed region of Europe. Online Inf. Rev. 2016, 40, 187–203. [Google Scholar] [CrossRef]
  42. Tahira, M. Information Needs and Seeking Behaviour of Science and Technology Teachers of the University of the Punjab, Lahore. Unpublished Master’s Thesis, University of the Punjab, Lahore, Pakistan, 2008. [Google Scholar]
  43. Tosuntas, S.B.; Karadag, E.; Orhan, S. The factors affecting acceptance and use of interactive whiteboard within the scope of FATIH project: A structural equation model based on the unified theory of acceptance and use of technology. Comput. Educ. 2015, 81, 169–178. [Google Scholar] [CrossRef]
  44. Warriach, N.F.; Ameen, K. Perceptions of library and information science professionals about a national digital library program. Libr. Hi Tech News 2008, 25, 15–19. [Google Scholar] [CrossRef]
  45. Warriach, N.F.; Ameen, K.; Tahira, M. Usability study of a federated search product at Punjab University. Libr. Hi Tech News 2009, 26, 14–15. [Google Scholar] [CrossRef]
  46. Zazelenchuk, T.W.; Boling, E. Considering User Satisfaction in Designing Web-Based Portals. Educause Q. 2003, 26, 35–40. [Google Scholar]
  47. Awwad, M.S.; Al-Majali, S.M. Electronic library services acceptance and use: An empirical validation of unified theory of acceptance and use of technology. Electron. Libr. 2015, 33, 1100–1120. [Google Scholar] [CrossRef]
Figure 1. Item of KOSEN web page.
Table 1. Composition of the questionnaire.
Measured Areas | Measured Indicators | No. of Questions | Remarks
User characteristics | 1. Demographic characteristics: (1) age, (2) sex, (3) educational attainment, (4) major, (5) occupation | 5 | Kang (2008), Nam (2010)
 | 2. Usage behavior of the personalized recommendation service: (1) whether the service is used, (2) reason for non-use, (3) primary purpose of use, (4) number of visits, (5) service usage time | 5 | Kang (2008), Nam (2010)
Personalized recommendation service quality | 1. System (website): (1) design, (2) ease of use, (3) accessibility | 3 | Kang (2008)
 | 2. Content: (1) adequacy, (2) sufficiency, (3) utility | 3 | Nam (2010), Lee (2017)
 | 3. Service support: (1) user support, (2) interactivity | 2 | Kang (2008), Nam (2010)
Personalized recommendation service importance | 1. Importance for each personalized recommendation service area | 1 | Nam (2010)
Table 2. Internal consistency reliability analysis of the independent variables.
Area (No. of Items) | Factor | Cronbach’s Alpha
System (3) | Ease of use | 0.750
 | Design | 0.863
 | Accessibility | 0.765
Content (3) | Sufficiency | 0.794
 | Adequacy | 0.744
 | Usefulness | 1.000
Service support (2) | User support | 0.861
 | Interactivity | 0.810
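
For readers who want to reproduce this kind of reliability check, Cronbach’s alpha for a factor with k items follows α = (k/(k−1))·(1 − Σs_i²/s_T²), where s_i² are the item variances and s_T² is the variance of the summed scale. The following is a minimal sketch in Python (pandas); the DataFrame and column names are hypothetical placeholders, not the study’s actual data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one factor (rows = respondents, columns = Likert items)."""
    k = items.shape[1]                               # number of items in the factor
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: three "design" items rated 1-5 by five respondents
design_items = pd.DataFrame({
    "q6_1_1": [4, 3, 5, 4, 2],
    "q6_1_2": [4, 4, 5, 3, 2],
    "q6_1_3": [5, 3, 4, 4, 3],
})
print(round(cronbach_alpha(design_items), 3))
```
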
Table 3. Result of the descriptive statistics.
Variable | N | Min | Max | Mean | Standard Deviation | Skewness | Kurtosis
Ease of use | 192 | 1 | 5 | 3.72 | 0.786 | −0.378 | 0.675
Design | 192 | 1 | 5 | 3.58 | 0.828 | −0.252 | 0.155
Accessibility | 192 | 1 | 5 | 3.54 | 0.734 | 0.157 | 0.319
Sufficiency | 192 | 1 | 5 | 3.71 | 0.768 | −0.366 | 0.705
Adequacy | 192 | 1 | 5 | 3.85 | 0.739 | −0.471 | 0.518
Utility | 192 | 1 | 5 | 3.51 | 1.008 | −0.309 | −0.313
User support | 192 | 1 | 5 | 3.41 | 0.818 | 0.028 | −0.190
Interactivity | 192 | 1 | 5 | 3.60 | 0.766 | −0.196 | 0.244
Importance | 192 | 1 | 5 | 3.92 | 0.712 | −0.203 | −0.627
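
Descriptive statistics of this kind (N, minimum, maximum, mean, standard deviation, skewness, kurtosis) can be computed from raw factor scores along the lines of the sketch below; the data frame and its values are hypothetical placeholders only.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical placeholder: averaged Likert scores (1-5) per respondent for two factors
scores = pd.DataFrame({
    "ease_of_use": [3.7, 4.0, 3.3, 4.3, 3.0, 3.7],
    "design":      [3.3, 3.7, 3.0, 4.0, 3.7, 3.3],
})

summary = pd.DataFrame({
    "N": scores.count(),
    "Min": scores.min(),
    "Max": scores.max(),
    "Mean": scores.mean(),
    "SD": scores.std(ddof=1),                             # sample standard deviation
    "Skewness": scores.apply(lambda col: skew(col)),      # biased estimator; SPSS applies a correction
    "Kurtosis": scores.apply(lambda col: kurtosis(col)),  # excess (Fisher) kurtosis
})
print(summary.round(3))
```
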
Table 4. Demographic characteristics.
Variable | Item | Frequency | %
Age | 20s | 14 | 7.3
 | 30s | 60 | 31.3
 | 40s | 70 | 36.5
 | 50s | 34 | 17.7
 | Over 60 | 14 | 7.3
Gender | Male | 168 | 87.5
 | Female | 24 | 12.5
Highest level of education | Bachelor’s degree | 35 | 18.2
 | Master’s degree | 53 | 27.6
 | Doctor’s degree | 104 | 54.2
Major | Construction/Transportation | 5 | 2.6
 | Science and technology, humanities, and social science | 10 | 5.2
 | Mechanical engineering | 15 | 7.8
 | Food, agriculture, forestry, and fisheries | 4 | 2.1
 | Brain science | 0 | 0.0
 | Physics | 4 | 2.1
 | Health care | 16 | 8.3
 | Life science | 43 | 22.4
 | Mathematics | 1 | 0.5
 | Energy/Resources | 4 | 2.1
 | Atomic energy | 0 | 0.0
 | Cognitive/Emotional science | 3 | 1.6
 | Material engineering | 18 | 9.4
 | Electrical/Electronic engineering | 12 | 6.3
 | Information/Communication | 15 | 7.8
 | Earth science | 1 | 0.5
 | Chemical engineering | 13 | 6.8
 | Chemistry | 12 | 6.3
 | Environmental engineering | 16 | 8.3
Occupation | Researcher | 80 | 41.7
 | Student | 16 | 8.3
 | Professor | 19 | 9.9
 | Company employee | 59 | 30.7
 | Others | 18 | 9.4
Table 5. Usage behavior of the PRS.
Variable | Item | Frequency | %
Whether the user uses KOSEN’s personalized recommendation service | Yes | 192 | 37.4
 | No | 321 | 62.6
Reason for non-use of the personalized recommendation service | Do not know how to use it | 258 | 80.6
 | Difficult to use | 8 | 2.5
 | Unnecessary | 35 | 10.9
 | Recommendation result is not adequate | 5 | 1.6
 | Others | 14 | 4.4
Primary reason for using the personalized recommendation service | Information accessibility | 98 | 51.0
 | Adequate information | 46 | 24.0
 | Useful information | 18 | 9.4
 | Variety of information provided | 28 | 14.6
 | Others | 2 | 1.0
Number of visits to the KOSEN service | 1–7 times a week | 123 | 64.1
 | 1–7 times a month | 50 | 26.0
 | 1–10 times a year | 19 | 9.9
Service usage time | Less than 1 h | 158 | 82.3
 | 1–2 h | 31 | 16.1
 | 3 h or longer | 3 | 1.6
Table 6. Comparison of satisfaction by gender (t-test).
Variable | Male Average | Male SD | Female Average | Female SD | t | p
Ease of use | 3.76 | 0.781 | 3.44 | 0.771 | 1.923 | 0.056
Design | 3.61 | 0.851 | 3.33 | 0.602 | 1.555 | 0.122
Accessibility | 3.58 | 0.755 | 3.29 | 0.509 | 1.813 | 0.071
Sufficiency | 3.74 | 0.787 | 3.50 | 0.590 | 1.407 | 0.161
Adequacy | 3.88 | 0.751 | 3.63 | 0.612 | 1.575 | 0.117
Utility | 3.54 | 0.996 | 3.29 | 1.083 | 1.110 | 0.268
User support | 3.45 | 0.832 | 3.15 | 0.667 | 1.693 | 0.092
Interactivity | 3.62 | 0.780 | 3.46 | 0.658 | 0.944 | 0.346
Importance | 3.95 | 0.697 | 3.75 | 0.800 | 1.267 | 0.207
* p < 0.05, ** p < 0.01.
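
As an illustration only, a gender comparison of this kind can be run as an independent-samples t-test, for example with scipy as sketched below; the score lists are hypothetical placeholders, not the survey data.

```python
from scipy.stats import ttest_ind

# Hypothetical placeholder scores: mean "ease of use" ratings by gender
male_scores = [3.7, 4.0, 3.3, 4.3, 3.0, 4.0, 3.7]
female_scores = [3.3, 3.7, 3.0, 3.7, 3.3]

# Independent-samples t-test (set equal_var=False for Welch's version)
t_stat, p_value = ttest_ind(male_scores, female_scores, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```
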
Table 7. Comparison of satisfaction by occupation (ANOVA).
Variable | Item | Average | Standard Deviation | F | p | Post Hoc
Ease of use | Researcher (a) | 3.59 | 0.856 | 2.966 | 0.021 * | d > b
 | Student (b) | 3.38 | 0.866
 | Professor (c) | 3.79 | 0.652
 | Company employee (d) | 3.97 | 0.662
 | Others (e) | 3.78 | 0.712
Design | Researcher (a) | 3.41 | 0.860 | 3.557 | 0.008 ** | c > b
 | Student (b) | 3.16 | 0.908
 | Professor (c) | 3.82 | 0.671
 | Company employee (d) | 3.79 | 0.794
 | Others (e) | 3.75 | 0.600
Accessibility | Researcher (a) | 3.44 | 0.694 | 4.245 | 0.003 ** | d > b
 | Student (b) | 3.06 | 0.727
 | Professor (c) | 3.58 | 0.534
 | Company employee (d) | 3.81 | 0.799
 | Others (e) | 3.53 | 0.606
Sufficiency | Researcher (a) | 3.50 | 0.868 | 5.045 | 0.001 ** | d > b
 | Student (b) | 3.41 | 0.664
 | Professor (c) | 3.71 | 0.384
 | Company employee (d) | 4.02 | 0.707
 | Others (e) | 3.86 | 0.479
Adequacy | Researcher (a) | 3.71 | 0.814 | 3.209 | 0.014 * | d > b
 | Student (b) | 3.53 | 0.826
 | Professor (c) | 3.87 | 0.574
 | Company employee (d) | 4.08 | 0.651
 | Others (e) | 3.94 | 0.511
Utility | Researcher (a) | 3.35 | 0.995 | 2.000 | 0.096 | n/a
 | Student (b) | 3.50 | 1.033
 | Professor (c) | 3.21 | 0.918
 | Company employee (d) | 3.76 | 1.088
 | Others (e) | 3.67 | 0.686
User support | Researcher (a) | 3.23 | 0.811 | 2.473 | 0.046 * | d > b
 | Student (b) | 3.28 | 0.547
 | Professor (c) | 3.55 | 0.911
 | Company employee (d) | 3.64 | 0.824
 | Others (e) | 3.44 | 0.784
Interactivity | Researcher (a) | 3.44 | 0.813 | 2.434 | 0.049 * | d > b
 | Student (b) | 3.47 | 0.694
 | Professor (c) | 3.63 | 0.742
 | Company employee (d) | 3.83 | 0.717
 | Others (e) | 3.61 | 0.654
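
A comparison across the five occupation groups of this kind corresponds to a one-way ANOVA followed by post-hoc pairwise tests. The sketch below, with hypothetical group scores, only illustrates the general procedure and is not the study’s analysis script.

```python
from scipy.stats import f_oneway

# Hypothetical placeholder scores: "design" ratings by occupation group
researchers = [3.3, 3.0, 4.0, 3.7, 3.0]
students = [3.0, 3.3, 2.7, 3.3]
professors = [4.0, 3.7, 3.7]
employees = [4.0, 4.3, 3.7, 3.7]
others = [3.7, 4.0, 3.7]

# One-way ANOVA across the five occupation groups
f_stat, p_value = f_oneway(researchers, students, professors, employees, others)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A post-hoc test (e.g., Tukey's HSD via statsmodels' pairwise_tukeyhsd)
# would show which specific pairs of groups differ.
```
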
Table 8. Effect of system on the importance of the recommendation service.
Dependent variable: Importance.
Independent Variable | B | SE | Beta | t | p | VIF | DW | R² | F
(Constant) | 1.963 | 0.236 | | 8.313 | 0.000 | | 2.232 | 0.314 | 28.725 ** (0.000)
Ease of use | 0.322 | 0.076 | 0.356 | 4.249 | 0.000 ** | 1.924
Design | 0.247 | 0.074 | 0.288 | 3.341 | 0.001 ** | 2.031
Accessibility | −0.036 | 0.079 | −0.037 | −0.451 | 0.653 | 1.837
* p < 0.05, ** p < 0.01; adjusted R² = 0.303.
Table 9. Effect of content on the importance of the recommendation service.
Dependent variable: Importance.
Independent Variable | B | SE | Beta | t | p | VIF | DW | R² | F
(Constant) | 1.855 | 0.249 | | 7.446 | 0.000 | | 2.296 | 0.276 | 23.886 ** (0.000)
Sufficiency | 0.137 | 0.080 | 0.148 | 1.714 | 0.088 | 1.936
Adequacy | 0.357 | 0.084 | 0.371 | 4.257 | 0.000 ** | 1.969
Utility | 0.053 | 0.051 | 0.075 | 1.036 | 0.302 | 1.360
Table 10. Effect of service support among three quality areas on the importance of the recommendation service.
Dependent variable: Importance.
Independent Variable | B | SE | Beta | t | p | VIF | DW | R² | F
(Constant) | 2.322 | 0.221 | | 10.498 | 0.000 | | 2.046 | 0.227 | 27.809 ** (0.000)
User support | 0.197 | 0.090 | 0.227 | 2.184 | 0.030 * | 2.636
Interactivity | 0.258 | 0.096 | 0.277 | 2.672 | 0.008 ** | 2.636
* p < 0.05, ** p < 0.01.
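
Regression results of the kind reported in Tables 8–10 (coefficients with VIF and Durbin–Watson statistics) can be obtained with an ordinary least squares model. The following is a minimal statsmodels sketch under the assumption that the factor scores sit in a pandas DataFrame; the data frame and variable names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Hypothetical placeholder data: service-support factors and the importance rating
df = pd.DataFrame({
    "user_support":  [3.0, 3.5, 4.0, 2.5, 3.5, 4.5, 3.0, 4.0],
    "interactivity": [3.5, 3.0, 4.5, 2.5, 4.0, 4.5, 3.0, 3.5],
    "importance":    [3.5, 3.5, 4.5, 3.0, 4.0, 5.0, 3.5, 4.0],
})

X = sm.add_constant(df[["user_support", "interactivity"]])  # add the intercept term
model = sm.OLS(df["importance"], X).fit()

print(model.summary())                        # B, SE, t, p, R^2, F
print("Durbin-Watson:", durbin_watson(model.resid))
# VIF per predictor (the constant column is skipped); standardized betas
# could be obtained by z-scoring all variables before fitting.
vifs = {name: variance_inflation_factor(X.values, i)
        for i, name in enumerate(X.columns) if name != "const"}
print("VIF:", vifs)
```
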
Table 11. Pearson correlation analysis among the three major variables of quality.
Section | Ease of Use | Design | Accessibility | Sufficiency | Adequacy | Utility | User Support | Interactivity | Importance
Ease of use | 1
Design | 0.647 ** | 1
Accessibility | 0.598 ** | 0.626 ** | 1
Sufficiency | 0.615 ** | 0.648 ** | 0.588 ** | 1
Adequacy | 0.673 ** | 0.641 ** | 0.587 ** | 0.677 ** | 1
Utility | 0.452 ** | 0.489 ** | 0.416 ** | 0.464 ** | 0.478 ** | 1
User support | 0.519 ** | 0.574 ** | 0.478 ** | 0.580 ** | 0.633 ** | 0.549 ** | 1
Interactivity | 0.554 ** | 0.633 ** | 0.549 ** | 0.623 ** | 0.704 ** | 0.578 ** | 0.788 ** | 1
Importance | 0.520 ** | 0.495 ** | 0.356 ** | 0.434 ** | 0.507 ** | 0.321 ** | 0.445 ** | 0.456 ** | 1
* p < 0.05, ** p < 0.01.
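
For illustration, a Pearson correlation matrix of this kind, together with the significance of an individual coefficient, can be computed as sketched below; the data frame and its values are hypothetical placeholders.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical placeholder: averaged factor scores per respondent
df = pd.DataFrame({
    "ease_of_use": [3.7, 4.0, 3.3, 4.3, 3.0, 3.7],
    "adequacy":    [3.7, 4.3, 3.3, 4.0, 3.3, 4.0],
    "importance":  [4.0, 4.3, 3.7, 4.3, 3.3, 4.0],
})

# Pairwise Pearson correlation coefficients among the factors
print(df.corr(method="pearson").round(3))

# Significance test for one pair (e.g., ease of use vs. importance)
r, p = pearsonr(df["ease_of_use"], df["importance"])
print(f"r = {r:.3f}, p = {p:.3f}")
```
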
