Evaluation – Competency N

Evaluate programs and services using measurable criteria.

Introduction

It is critical for librarians and information science professionals to understand the principles that guide the effective evaluation of programs and services using measurable criteria. From an outsider’s perspective, public libraries and their kin have been a fixture in society, but those who work in the field know not to take their continued existence for granted. Libraries will always have to demonstrate their value to their stakeholders to justify their presence. When advocating for funding and other resources, evaluations of programs and services can attest to the benefits that institutions confer on their client population. When these benefits are properly framed, assessments of past programs and services serve as a useful tool of advocacy for future funding and resources.

Evaluation of the effectiveness of services and programs is critical. Allocating limited resources to programs and services based on assumptions can be costly when those assumptions are false. At worst, the funds are completely wasted, and the institution reaps no benefit from the expenditure. At best, funds are allocated inefficiently and fail to maximize the advancement of the institution’s strategic objectives. This is why evaluations are so important to the field of information science.

As a library assistant with the Fresno County Public Library, I have reported attendance figures for my Infant and Toddler Storytime to my library branch’s children’s librarian. For movie screenings at the Gillis Branch Library, I entered usage data into the Evanced software’s programming tracker. I also recorded usage statistics at the reference desk and am knowledgeable about the methods by which the administration collects data to assess the performance of programs and services.

As an information science professional, I have assembled a solid foundation in the principles of providing and evaluating programs and services for a target audience through my coursework in LIBR 265 Materials for Young Adults, INFO 204 Information Professionals, INFO 230 Issues in Academic Libraries, and INFO 282 Grant Writing and Alternative Funding Resources. I summarize that foundation in the text below.

Explication

Developing the Assessment

Libraries should be responsive to changes in their clientele. Assessments help them to do so. Evaluation results can shape how libraries shift resources to appropriate programs and services to match the changing needs of their client population. For public libraries, this might entail evaluating the effectiveness of language acquisition programs; tracking scholarly database usage; or assessing the technological needs and preferences of their patrons.

Assessment is “an iterative and continuous cycle of experimentation and evaluation that begins with developing objectives” (Gilman, 2017, p. 214). The steps of the assessment process are:

  1. Developing assessment questions or objectives.
  2. Identifying assessment methods and data gathering.
  3. Analyzing and interpreting data in terms of assessment objective(s).
  4. Integrating results into operations and strategic planning.
  5. Disseminating the results.

(Gilman, 2017, p. 215)

Libraries engage in assessment for all sorts of reasons. Before conducting an evaluation, libraries should establish the precise purpose of the assessment. When developing an assessment, all stakeholders should be consulted to determine what figures should be measured. In particular, it may be prudent to consult the funding authorities to determine what data should be collected to discern whether the institution’s accountability requirements have been satisfied. In addition, part of this consultation includes gathering information about user preferences, perceptions, needs, and expectations. Increasingly, user expectations have evolved to prioritize convenience and immediacy in obtaining information (Gilman, 2017, p. 213). Additionally, librarians should be aware of their own cultural identity and biases when developing an evaluation (Association of College and Research Libraries [ACRL], 2012). Self-awareness is the first step in achieving a culturally competent assessment.

Evaluation provides essential feedback to the strategic planning process. While the immediate assessment objectives must be addressed, it is important, whenever possible, to link performance measures to the “operational and strategic goals of the library to ensure that we gather relevant and actionable data that can be used to support organizational development” (Gilman, 2017, p. 212). This allows institutions to demonstrate their value to their users and stakeholders.

When developing an assessment of quantitative data, the results should be benchmarked against industry standards; cost analyses in particular should be compared against an external benchmark. For public libraries, external data can be found in the Institute of Museum and Library Services’ (IMLS) Public Libraries Survey. For academic libraries, comparative statistics can be found in the Association of Research Libraries’ ARL Statistics publication.
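
To make this concrete, below is a minimal, hypothetical sketch of how such benchmarking might be carried out. The figures, measure names, and peer medians are invented for illustration and are not drawn from IMLS or ARL data.

```python
# Hypothetical benchmarking sketch: compares a library's per-capita outputs
# against assumed peer-group medians. All figures below are invented.

local_stats = {
    "circulation": 412_000,        # annual checkouts (assumed)
    "program_attendance": 18_500,  # annual program attendance (assumed)
    "service_population": 96_000,  # legal service area population (assumed)
}

# Assumed peer-group medians, expressed per capita.
peer_benchmarks = {
    "circulation": 5.1,
    "program_attendance": 0.25,
}

for measure, peer_value in peer_benchmarks.items():
    per_capita = local_stats[measure] / local_stats["service_population"]
    gap = per_capita - peer_value
    status = "above" if gap >= 0 else "below"
    print(f"{measure}: {per_capita:.2f} per capita, "
          f"{abs(gap):.2f} {status} the assumed peer median of {peer_value:.2f}")
```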

The feedback from assessments can be used to improve the quality of services, and evaluations of programs and services can aid in decision-making. Ultimately, whether an institution meets measurable criteria, as determined by assessment, is the one objective standard that funding authorities can use to judge whether libraries are fulfilling their responsibilities and meeting strategic objectives.

Evaluation Methods

Data-gathering methods can be sorted into two broad categories based on whether they generate quantitative data or qualitative data. Quantitative data is numeric data whose purpose is “to allow generalizations of the results from a sample to an entire population of interest” (Gilman, 2017, p. 216). Quantitative evaluation methods collect, process, and interpret this numerical data, which is often gathered from surveys or harvested from systems-generated transactional data. Integrated library systems, interlibrary loan systems, and (website) content management systems can all generate quantitative statistics for the purpose of library assessment. Qualitative data is “suitable for gaining an in-depth understanding of underlying reasons and motivations in user studies” (Gilman, 2017, p. 217).

An assortment of data-gathering instruments can be employed in an assessment, and each method yields specific types of data. Depending on the evaluative question, some data-gathering tools may be more appropriate than others. Some of these methods include surveys, focus groups, interviews, observation studies, and usability testing. Services can also undergo process evaluation to assess the efficiency of workflows. Surveys are a common evaluation tool for libraries: they can measure internal affairs, as with organizational climate surveys, or they can measure the clientele of an institution through user satisfaction and service quality surveys. Open-ended survey questions are one way to gather qualitative data. If a library is seeking to improve a service such as readers’ advisory, the institution may first seek to identify the aspects of the activity before devising the structure and composition of the formal assessment. It may ask staff to first log details such as the date, time, and length of each interaction. This preliminary picture may help determine what questions to ask and what data collection methods should be employed in a more thorough evaluation.
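
As a minimal illustration, the sketch below tallies a hypothetical log of readers’ advisory interactions by hour and computes the average length of each interaction; the records and field names are invented for the example and do not represent actual service data.

```python
from collections import defaultdict

# Hypothetical readers' advisory interaction log (date, hour, length in minutes).
# These records are invented for illustration only.
interaction_log = [
    {"date": "2023-03-01", "hour": 10, "minutes": 6},
    {"date": "2023-03-01", "hour": 15, "minutes": 12},
    {"date": "2023-03-02", "hour": 10, "minutes": 4},
    {"date": "2023-03-02", "hour": 16, "minutes": 9},
    {"date": "2023-03-03", "hour": 15, "minutes": 7},
]

# Tally interactions and total minutes by hour of day.
totals = defaultdict(lambda: {"count": 0, "minutes": 0})
for entry in interaction_log:
    totals[entry["hour"]]["count"] += 1
    totals[entry["hour"]]["minutes"] += entry["minutes"]

# A simple preliminary picture: when interactions occur and how long they run.
for hour in sorted(totals):
    count = totals[hour]["count"]
    average = totals[hour]["minutes"] / count
    print(f"{hour:02d}:00  {count} interaction(s), average length {average:.1f} minutes")
```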

When selecting the appropriate assessment mechanism, one should consider the limitations of each method. For instance, usage statistics do not capture client satisfaction; to gauge satisfaction, librarians may employ a service quality survey to capture a snapshot of patron sentiment. Whenever possible, it is prudent to use a mixture of assessment instruments. Incorporating multiple data-gathering tools allows for data triangulation, which Gilman (2017) describes as “the practice of using two (or more) data sources or methods” to confirm a premise (p. 218). The Public Library Association offers a multitude of free resources for collecting and analyzing information and evaluating results through its Project Outcome (2016).

After determining the appropriate data-gathering instrument, librarians must then establish who will gather the data. This process should also account for any additional training required to ensure that the appointed staffers fulfill their evaluation duties, including instructions specifying exactly which measurements to take. Again, it must be stressed that librarians should be aware of their own cultural preconceptions. The “Diversity Standards: Cultural Competency for Academic Libraries” (ACRL, 2012) calls on librarians “to develop collections and provide programs and services that are inclusive of the needs of all persons in the community the library serves.” This can only be done when the assessment methods and their implementation are free from cultural bias.

Applying the Assessment

Evaluation may take place during the program or activity (formative assessment) or after the program or activity has concluded (summative assessment). Gathering data is only the start of evaluation; librarians must judiciously and thoughtfully analyze the data to interpret it and draw conclusions. This analysis and interpretation should be conducted with domain knowledge or expertise. Domain knowledge is “library function-specific knowledge such as e-resource management, collection development, preservation, and so on, and it is a key ingredient to success in giving meaning to data” (Gilman, 2017, p. 213). When gaps in services are identified, decision-makers should take action to close those gaps; there is little point in conducting an evaluation of programs or services if no action is taken to implement meaningful changes. Ideally, any evaluation should have a transparent communication plan to convey the assessment results to stakeholders.

Work in evaluation and assessment nearly always involves obtaining data from or about people. To protect the “right to privacy and confidentiality in their library use” stipulated by the American Library Association’s Library Bill of Rights (2019), all stages of the research life cycle must mitigate privacy risks. These risks arise at the initial collection of information, the analysis of the data to address research questions, the dissemination of findings, the retention and storage of information, and the disposal of devices or records that contain sensitive information. Academic libraries attached to universities that conduct research are uniquely equipped to address such considerations through the institutional review boards (IRBs) of their parent institutions; these IRBs examine and approve research that involves human subjects. A combination of regulating data access, reporting in aggregate, anonymizing information, and obtaining informed consent can manage the ethical risk to privacy.

Evidence

Evidence 1: Analysis of Reference Interaction [Phone Interview]

As evidence of my insight into the evaluation of programs and services within a library setting, I proffer this assignment submission where I detail a phone reference interview with the Henry Madden Library of California State University, Fresno. I posed the question “What resources does the library have to assist in developing a display highlighting Anne Frank’s story?” to the institution. I evaluated the reference service in accordance with the Reference and User Services Association’s (RUSA) “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

Unfortunately, my qualitative analysis indicated that none of the standards of approachability, interest, listening and inquiry, and follow-up were met. This was reflected in the outcome of my reference interaction: I was not directed to any quality resources or even one artifact, or at least a digital representation of an artifact, to enhance my future display on the lives of the Franks and the van Pels during the Holocaust. The reference staff also failed to provide any instruction on how to further my research. Clearly, reference services at the Henry Madden Library need to be overhauled. The impression from my reference interaction is that librarians and student staffers were merely going through the motions of answering questions while ignoring established principles of behavioral performance in reference services. This assignment serves as crucial evidence in my portfolio by demonstrating the importance of regular evaluations of services; such a testing regime would have drawn attention to this service deficit and signaled to the Henry Madden Library’s administration the need to allocate additional resources to shore up this core library service.

Evidence 2: Book Clubs and Digital Environments

As evidence of my knowledge of programs within a public library setting, I offer this discussion submission. I discuss how book clubs can operate independently or through facilitated librarian discussions. Book clubs are not just for adults. After I wrote this discussion post, I observed a librarian start separate book clubs for both children and teenagers. The librarian supplied snacks to encourage an informal atmosphere and paired these meetings with either a craft or a short game. Book clubs do not always require meetings; they can also operate through asynchronous communication, such as a chain of emails or a discussion board. Whatever form a book club takes, consistent communication and marketing are crucial cornerstones of successful programming.

This discussion post ends by posing questions on the future of book clubs and how technology should be incorporated to match the changing preferences of the public. These questions can be answered with information drawn from evaluations of existing and future programs and their target demographic.

This document demonstrates my familiarity with adapting programs to appeal to the local demographics of a branch library and with incorporating analyses of the community, programs, and services to craft offerings that match the preferences and needs of the community that a library branch serves.

Evidence 3: Analysis of Reference Interaction [Instant Messaging]

For Competency N, I attest to my ability to evaluate programs and services using measurable criteria through the submission of this analysis of my interaction with the Los Angeles Public Library’s (LAPL) instant messaging reference service. I evaluated the response to my reference query in accordance with the Reference and User Services Association’s (RUSA) “Guidelines for Behavioral Performance of Reference and Information Service Providers.” Unfortunately, the librarian did not adhere to these guidelines, which resulted in a subpar experience. My largest complaint was the extended wait for an initial response: it took four minutes before I received an acknowledgment that the librarian had even received my reference query.

I had no way of evaluating why it took four minutes for an initial response. Was there another person ahead of me? Did it take the librarian a couple of minutes to notice that a question had arrived through the messaging service? Did the librarian actually spend most of that time looking for the answer to my question? I simply do not know, and I found the uncertainty unsettling. My contact should have greeted me immediately after noticing the request and then rephrased my reference query, acknowledging that it had been received and assuring me that a staffer was working to provide an answer. Unfortunately, my interaction with LAPL personnel did not include a greeting. My staff contact did not even provide their name.

Because it describes experiences and events in rich detail, the RUSA-based evaluation is emblematic of qualitative research methods. Qualitative descriptions focus on why something happened or how something occurred.

This analysis of the LAPL’s instant messaging reference service also demonstrates the difficulty of adapting in-person services to virtual analogs. In response to the COVID-19 pandemic, the field of librarianship accelerated the adoption and implementation of virtual programs and expanded online library collections and resources. This trend is expected to continue. As it does, the evaluation of programs and services will play a crucial role in fine-tuning online and virtual offerings that have been adapted from traditional library programs and services.

When rated against the guidelines provided by the Reference and User Services Association (RUSA), the Los Angeles Public Library’s reference chat service performed poorly. My qualitative analysis indicated that none of the standards of interest, listening and inquiry, and follow-up were met. This rating reflected my dissatisfaction with the quality of the service, but instant messaging reference has the potential to equal longstanding in-person reference should the identified shortcomings be addressed.

Conclusion

A professional within the field of information science must be able to evaluate programs and services utilizing measurable criteria. Institutions in the field of library science should conduct regular assessments of programs and services. A well-designed evaluation is feasible and produces actionable results. Evaluations of services and programs can be qualitative or quantitative in nature. Quantitative figures are numeric and are often harvested from systems-generated transactional data. Qualitative data is non-numeric and is often used to deepen an understanding of relationships or the underlying motivations of the subject in question.

After the data has been gathered, the institution needs to analyze the facts and figures. Information science professionals analyze and interpret the data using domain knowledge or expertise. Assessments of services and programs nearly always capture data from or about people. Information professionals should mitigate the risks to privacy when engaging in an evaluation of programs or an assessment of services.

Making decisions based on assumptions can be costly when those assumptions are false. This is why the evaluation of programs and services is critical to the operations of libraries. The results allow administrators to make evidence-based decisions on where to increase and where to decrease support. The evaluative process also helps gauge how programs and services help the institution achieve its mission and strategic objectives.

Positive evaluations are a useful tool when advocating for funding and other resources. By establishing relevance, personnel can secure the longevity of their respective institutions. The proffered evidence in this portfolio, my coursework, and my work experience with the Fresno County Public Library attest to my breadth and depth of knowledge on applying measurable criteria to the evaluation of programs and services.

References

American Library Association. (2019, January 29). Library bill of rights. https://www.ala.org/advocacy/intfreedom/librarybill/

Association of College and Research Libraries. (2012, May 4). Diversity standards: Cultural competency for academic libraries (2012). http://www.ala.org/acrl/standards/diversity

Association of Research Libraries. (2022, July). ARL statistics. https://www.arl.org/arl-terms/arl-statistics/

Gilman, T. (Ed.). (2017). Academic librarianship today. Rowman & Littlefield.

Institute of Museum and Library Services. (2021, May). Public libraries survey. https://www.imls.gov/research-evaluation/data-collection/public-libraries-survey

Public Library Association. (2016, August 2). Project outcome. American Library Association. https://www.ala.org/pla/data/performancemeasurement

RSS Management of Reference Committee. (2013, May 28). Guidelines for behavioral performance of reference and information service providers. American Library Association. https://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral