
Support for Writing to Reference Cooperative Databases Using Case Study Searches

Akine Suzuki (AY 2021)

Reference service is the foundation of modern library services. The Reference Cooperative Database (commonly known as the "Reference Cooperative DB") is a system that makes records of reference services and research methods provided by libraries nationwide available as a source of information. The stated purpose of the database is "to support reference services in libraries and the research activities of general users by accumulating reference cases in public libraries (omitted) and providing the data through the Internet." However, as of November 2021, of the 263,191 registered cases, 139,582 are "open to the public" (visible to the entire Internet), 14,862 are "open to participating libraries" (visible only to libraries registered in the Reference Cooperative DB), and 108,747 are "own library only" (visible only to the library that registered the case). Nearly half of the registered cases are thus not open to the public. Previous studies and questionnaires sent to libraries suggest two causes: there is no standard for describing the response process, which is the core of a reference case, and librarians have no way to judge, or no confidence in judging, whether a case should be made public.

The purpose of this study was therefore to increase confidence in writing reference cases by moving writers out of the state of "not knowing what to write, or in what order to write it," drawing on the knowledge accumulated in the Reference Cooperative DB. We hypothesized that by referring to existing cases, users would come to understand how to write the response process and gain confidence in their writing. To verify this hypothesis, we built a system in which, when keywords are entered into a chatbot, it searches the Reference Cooperative DB for similar cases whose questions contain those keywords and presents their response processes.
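The core retrieval step described above can be sketched in a few lines. This is a minimal illustration, not the study's actual implementation: the record fields (`question`, `process`), the sample cases, and the keyword-count scoring are all assumptions standing in for however the real system queries the Reference Cooperative DB.

```python
# Hypothetical sketch of keyword-based similar-case search: each case
# stores a question and its response process, and cases are ranked by
# how many of the query keywords appear in the question text.

def search_similar_cases(keywords, cases, top_n=3):
    """Return up to top_n cases whose questions match the most keywords."""
    scored = []
    for case in cases:
        hits = sum(1 for kw in keywords if kw in case["question"])
        if hits > 0:
            scored.append((hits, case))
    # Highest keyword overlap first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [case for _, case in scored[:top_n]]

# Invented sample records, in the spirit of Reference Cooperative DB entries.
cases = [
    {"question": "Books on the history of ukiyo-e prints",
     "process": "Searched the OPAC for 'ukiyo-e history', then checked an art encyclopedia."},
    {"question": "Local history of Tsukuba city",
     "process": "Consulted the local-history collection and a municipal yearbook."},
]

results = search_similar_cases(["ukiyo-e", "history"], cases)
for case in results:
    # The presented response process is what the user consults as a model.
    print(case["process"])
```

In the actual system, the presented field is the response process of each matching case, which the user can consult as a model while writing their own.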

To evaluate the system, we conducted an experiment with 10 students enrolled in "Knowledge Information Exercise II". To avoid coronavirus infection, the experiment was conducted entirely online. First, the participants carried out a mock reference task on a question we had prepared, using a search engine, while we observed their work via screen sharing; they then wrote their own answer and response process for the question. Next, we presented a second question and asked them to search in the same way, this time using the proposed system to write the answer and response process. Both conditions were evaluated with a questionnaire administered after the experiment.

The results showed that using the proposed system increased participants' understanding of how to write the response process and their confidence in what they had written. Opinions on the usability of the system, however, were mixed. From these results, we found that introducing a search function over existing cases when writing the response process improves both "understanding how to write the response process" and "confidence in the written response process". Future directions include improving the presentation method, for example by color-coding existing cases.

(Translated by DeepL)
