Evaluate programs and services using measurable criteria
Introduction
Library and information science professionals (LISPs) live and breathe services and programs. To keep creating, using, and maintaining successful ones, though, LISPs must judge them against precise criteria. That way, with each iteration, it is clear which parts of a service or program worked and which failed. These criteria can come from established organizations, from LIS experts, or from guidelines LISPs form for personal use.
Established organizations and the criteria they create
Organizations like the Nielsen Norman Group (NNG) and the European Commission expert group on FAIR data have legitimized some criteria LISPs use to evaluate programs and services.
The NNG first published "10 Usability Heuristics for User Interface Design" in 1994 and updated it in 2020 (NNG, 2020). These principles were labeled "heuristics" because they are informed suggestions rather than specific rules. To me, #2, match between system and the real world, was the most significant. For many people, a user interface represents the digital edge, where the physical begins to interact with the virtual (Bishop, 2017). If its words and concepts are too unfamiliar to the target user, the design has failed.
The FAIR guiding principles for scientific data management and stewardship, developed by the international research community, were published in 2016 (Australian Research Data Commons [ARDC], n.d.). Two years later, the European Commission (2018) published a framework for using them to create the European Open Science Cloud. FAIR stands for Findable, Accessible, Interoperable, and Reusable. The last principle, that data and metadata should be reusable across different settings, is my favorite because it speaks to keeping information highly retrievable over time.
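As a loose illustration of what Reusable can mean in practice, the sketch below checks a dataset's metadata record for fields that support reuse. The field names, the threshold for "reusable," and the record itself are invented for this example; real FAIR assessments are far more thorough.

```python
# Toy reusability check. The field names and the record below are my own
# illustrative choices, not a standard schema or an official FAIR test.
REUSABLE_FIELDS = ["license", "provenance", "creator", "date_created", "format"]

def reusability_gaps(record: dict) -> list:
    """Return the reuse-supporting metadata fields a record is missing."""
    return [field for field in REUSABLE_FIELDS if not record.get(field)]

dataset = {
    "title": "Library program attendance, 2023",       # hypothetical dataset
    "creator": "Example Public Library",                # hypothetical creator
    "license": "CC-BY-4.0",                             # clear terms for reuse
    "provenance": "Exported from the events calendar",  # where the data came from
    "date_created": "2024-01-15",
    "format": "text/csv",
}

print(reusability_gaps(dataset))  # [] -> no gaps under these invented criteria
```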
LIS expert-created criteria
Heather Hedden, author of The Accidental Taxonomist and Data & Knowledge Engineer at Semantic Web Company, gave a "Taxonomy Workshop" at Taxonomy Boot Camp 2018 where she discussed the differences between taxonomies and thesauri (Wells, 2018), differences that can be used to form criteria for creating either. For example, a taxonomy has top-down navigation, while users can start from and travel to anywhere in a thesaurus that the terms' relationships take them. This means every term included in a taxonomy must be part of the classification, and a thesaurus can have no orphan, unrelated terms.
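That structural difference suggests two checkable criteria: a taxonomy term needs exactly one broader term, while a thesaurus term needs at least one relationship to another term. The sketch below models both with invented terms; it illustrates the criteria, and is not anything from Hedden's workshop.

```python
# Taxonomy: strict hierarchy -- every term except the root has exactly one parent.
taxonomy = {
    "Media": None,  # root
    "Books": "Media",
    "Fiction": "Books",
    "Nonfiction": "Books",
}

# Thesaurus: a graph of broader/narrower/related (BT/NT/RT) relationships,
# navigable from any entry point rather than only top-down.
thesaurus = {
    "Books": {"NT": ["Fiction", "Nonfiction"], "RT": ["Reading"]},
    "Fiction": {"BT": ["Books"], "RT": ["Novels"]},
    "Nonfiction": {"BT": ["Books"]},
    "Reading": {"RT": ["Books"]},
    "Novels": {"RT": ["Fiction"]},
}

def orphans(t: dict) -> list:
    """Thesaurus quality check: every term must relate to at least one other."""
    return [term for term, rels in t.items()
            if not any(rels.get(k) for k in ("BT", "NT", "RT"))]

print(orphans(thesaurus))  # [] -> no orphan terms, per the criterion above
```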
Another expert is Abby Covert, an information architect (IA) and writer who authored How to Make Sense of Any Mess and has led many IA organizations. Her 10 Information Architecture Heuristics were developed to give designers another perspective on their projects during any part of the creative process (Covert, 2020). The one I found hardest to grasp was #6, Credible: worthy of confidence, reliable. What can I add, or how can I word something, that would make a user feel my product can answer their questions? Get their job done?
Guidelines I have formed for myself
I take language very seriously. Having studied seven languages, I am convinced human understanding will increase if people write as plainly as possible, which helps them think clearly and so become more aware. My criteria for writing plainly are below, followed by a short script sketching how two of them could be checked automatically:
- Complete sentences
- No unnecessary words: can the point be made without them? By using a simpler word? By merging a phrase into one word?
- No run-on sentences: even in our thoughts we need to breathe, so write two or more sentences instead of cramming everything into one.
- No clichés or turns of phrase: language is for expressing ourselves, so do not use someone else's expressions.
- No sentence can be perfect. If you have reread it seven times in editing already, move on.
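Here is that sketch. The filler-word list and the 30-word threshold are my own illustrative stand-ins for "unnecessary words" and "run-on sentences," not a standard.

```python
# A rough, personal checker for two of the criteria above.
import re

FILLER_WORDS = {"very", "really", "basically", "actually", "quite"}
MAX_WORDS = 30  # beyond this, consider splitting into two sentences

def review(text: str) -> list:
    """Flag possible run-ons and filler words, sentence by sentence."""
    notes = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            notes.append(f"Possible run-on ({len(words)} words): {sentence[:40]}...")
        for w in words:
            if w.lower().strip(".,;:!?") in FILLER_WORDS:
                notes.append(f"Unnecessary word '{w}' in: {sentence[:40]}...")
    return notes

print(review("This is basically a very long sentence that keeps going."))
```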
Also, I enjoy expanding discussions on established LIS criteria with comments and suggestions.
Evidence
INFO 210 – Reference and Information Services – Reference services by phone per RUSA guidelines
I wrote this discussion on reference services for INFO 210, Reference and Information Services. The Reference and User Services Association (RUSA) has "Guidelines for Behavioral Performance of Reference and Information Service Providers" to aid librarians holding reference interviews (American Library Association [ALA], 2008).
The RUSA guideline I tested with a remote inquiry to a public library in Northern California was Listening/Inquiring. I found the remote inquiry phone number and reached the front of the phone queue in two minutes. After we exchanged introductions, I explained to the library worker that I was an MLIS student who wanted to interview a collections specialist. They advised me to email my request to the library's general information department, saying an interested librarian should reply. An awkward silence later, they wished me well and ended the call.
The library worker had not performed well according to the Listening/Inquiring guideline.
Admittedly, I picked a question no general reference librarian could have been prepared to answer. They did not have to sound so put off, though, and could have asked why I wanted that interview; the reason could have been something they knew how to answer. Also, considering SJSU's King Library posts its librarian profiles online, the library worker could have brought up the profiles for their own library (3.2.1, uses current technology to gather information to serve patron's need) and told me which librarians or departments to name in the email's subject line.
This discussion is evidence I can evaluate library reference services using criteria established by RUSA.
INFO 246 – Information Architecture (IA) – Critiquing a website using IA heuristics
I critiqued an online secondhand bookstore using Abby Covert's IA heuristics for INFO 246, Information Architecture. Doing this let me see the bookseller from more perspectives than I had as a shopper. Discussing shopping sites as consumers, my friends and I would mention what is on sale, what recently became available, and how many payment options were offered. I now frown at sites where the text is too small for vision-impaired users to read, or where links and webpages are labeled with jargon that puts people who do not speak English as their primary language at a disadvantage. Covert's IA heuristics have peeled away layers for me, letting me identify what makes a website usable.
This critique is evidence I understand how LISPs can use these IA heuristics as criteria to evaluate websites and, in turn, design better IA.
INFO 240 – Information Technology Tools and Applications – Website on writing for the web
I built this website to discuss writing for the web for INFO 240, Information Technology Tools and Applications.
Its HTML/CSS structure had been refined over six previous assignments, but my criteria for writing began forming a decade ago. Of the guidelines, advice, and examples of writing best practices I listed, the most impactful was The Elements of Style by Strunk and White. That brief manual has taught generations of writers how to keep prose short, clear, and direct.
These criteria are just as crucial when writing for the web. Usability.gov (2022) says users scan websites more than they read them, so long blocks of text are not welcoming to the eye. Jimdo's "11 golden rules" likewise endorse writing clearly online (Jimdo, 2019) so target users can learn what they need and finish their tasks as quickly as possible. Seattle University's Web Team: Marketing Communications blog also champions brevity, using fewer words, in web content (Seattle University, n.d.).
This website's content is evidence I have studied writing enough to formulate my own criteria and to understand how they match up with the online community's shared criteria for writing for the web.
*Please stay tuned for when I find a way to host this website.*
Conclusion
Evaluating programs and services without measurable criteria wastes everyone's time and resources. LISPs would not gather any meaningful data with which to create or upgrade their tools. Stakeholders would watch their money drain into developing nothing of value. Users would not have their needs met. Avoiding this, though, will not be as simple as staying current on other organizations' or experts' established criteria relevant to my profession. It will be more important that I pay attention to what makes recent programs and services successful, and to how the organizations behind them are structured and maintained. That way, I can decide on my own criteria and supplement them with someone else's, case by case, for LIS projects.
References
American Library Association. (2008). Guidelines for behavioral performance of reference and information service providers. http://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
Australian Research Data Commons. (n.d.). FAIR data. https://ardc.edu.au/resource/fair-data/
Bishop, T. (2017, February 10). The rise of the digital edge. DatacenterDynamics. https://www.datacenterdynamics.com/en/opinions/the-rise-of-the-digital-edge/
Covert, A. (2020, July 30). Information architecture heuristics. https://abbycovert.com/ia-tools/ia-heuristics/
European Commission, Directorate-General for Research and Innovation. (2018). Turning FAIR into reality: Final report and action plan from the European Commission expert group on FAIR data. Publications Office. https://data.europa.eu/doi/10.2777/1524
Jimdo. (2019, February 6). The 11 golden rules of writing content for your website. https://www.jimdo.com/blog/11-golden-rules-of-writing-website-content/
Nielsen Norman Group. (2020, November 15). 10 usability heuristics for user interface design. https://www.nngroup.com/articles/ten-usability-heuristics/
Seattle University. (n.d.). Writing for the web. https://www.seattleu.edu/web/content/writing/
Strunk, W., & White, E. B. (1959). The elements of style. Macmillan.
Usability.gov. (2022). Writing for the web. https://www.usability.gov/how-to-and-tools/methods/writing-for-the-web.html
Wells, J. (2018, November 5). Taxonomy 101: Key terms and concepts. KMWorld. https://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=128356