LibParlor Contributor Series

Visualizing Research

LibParlor Contributor, Rachel Miles, shares her research experience during graduate school, which ultimately shaped her current research agenda.

Rachel Miles is a Digital Scholarship Librarian at the Center for the Advancement of Digital Scholarship (CADS) at Kansas State University and is K-State Libraries’ Partner on the Open/Alternative Textbook Initiative. She holds a BA in Psychology from Wichita State University and an MLS from Emporia State University. In her free time, she loves to hang out and play music with her husband. She also loves to write fiction, sew, and attend dance classes.


Introduction

When I was in graduate school, I had the opportunity to become a Graduate Research Assistant (GRA) for Dr. Sarah Sutton at the School of Library and Information Management (SLIM) at Emporia State University (ESU) for the last semester of my Master’s program. In May of 2015, Dr. Sutton became involved in online survey research regarding the awareness and usage of research impact metrics, such as bibliometrics and altmetrics, among academic librarians at Carnegie-classified R1 institutions. I felt honored and excited to join Dr. Sutton in her research endeavors, though at the time, I did not know what it would mean for my career and professional advancement. My experience and exposure to this research helped launch my career, propel me into the complex field of scholarly communication, and entice me to continue research in what I consider one of the most interesting topics in academic librarianship: research impact and evaluation.

Research Impact & Evaluation – An Overview

First, I’d like to offer a brief explanation of research impact metrics and why this research began in the first place. Research impact metrics, or indicators (a more accurate term), encompass a range of measures that tell us something about the research we attempt to evaluate. In our research publications and presentations, we categorized them as follows:

Research impact indicators and their definitions:

- Journal Impact Factor (JIF): the average number of citations an article in a given journal receives within a given time frame, usually two or five years.
- Journal Usage Factor (JUF): the average number of usage events (e.g., downloads, page views) an article in a given journal receives.
- Article/book citation counts: the total number of citations a scholarly article or book receives in other scholarly articles or books.
- Usage statistics: the total number of article or book downloads and page views.
- Altmetrics: online attention to academic research (e.g., Mendeley bookmarks, Twitter mentions, peer reviews on Publons, citations in public policy documents, citations in news media outlets).
- Author h-index: the largest number h such that an author has published h articles that have each received at least h citations. For example, a researcher with an h-index of 25 has authored at least 25 journal articles that have each received at least 25 citations.
- Expert peer reviews: post-publication peer reviews (e.g., F1000, Publons, resource reviews published in library journals); a type of altmetric.
- Qualitative measures of impact: “Who’s saying what about research?” (e.g., the context of a citation in a scholarly article or a public policy document).
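Of these indicators, the h-index is the one most easily misread, so a small sketch may help. The function below computes an author's h-index from a list of citation counts; the counts themselves are made-up example data, not figures from the study.

```python
# Minimal sketch of the h-index: the largest h such that the author has
# h papers with at least h citations each. The sample counts are hypothetical.

def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4: four papers have 4+ citations each
```

Note that the h-index rewards sustained citation across many papers: one paper with a thousand citations still yields an h-index of only 1.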

Stacy Konkiel from Altmetric.com launched the research project in May of 2015. Until then, she and others had largely relied on assumptions about how academic librarians thought about and used research impact indicators. When she searched the scholarly literature for studies on the topic, she came up nearly empty-handed, apart from a couple of small institution-based studies. As a result, she decided to launch a study of her own.

GRAing it up at ESU

I began working for Dr. Sutton in late August of 2015, after the research project had already received IRB approval.

After completing CITI training, Dr. Sutton directed my attention to analyzing a dataset from the survey responses. First, we reviewed the survey questions, and I learned some of the basics behind the purposes for launching the survey project as well as the principles of research impact indicators. At the time, I had a minimal level of understanding of research impact and evaluation, and yet I had to analyze data regarding academic librarians’ own self-assessed understanding of research impact indicators. I was intimidated, and, without doubt, I can say I had Imposter Syndrome. Fortunately, Dr. Sutton encouraged me and guided me on my journey through library research land.

Next, Dr. Sutton was interested in parsing out the data concerning respondents’ liaison areas. About a third of our respondents indicated that they were liaisons or subject specialists, meaning that they liaised to specific departments and/or units at their institutions. Most of these respondents also stated that they liaised to multiple areas, so I had to dig into these text-based responses to determine how the liaison areas related to one another. Dr. Sutton said I could use creativity and imagination to visualize these liaison areas as a hierarchy or network.

Eventually, Dr. Sutton and I came up with a physical visualization that connected liaison duties to one another on a bulletin board with push pins and yarn. It took much longer to complete than anticipated, but it had enormous value in my development as a researcher.

[Image: a tackboard with push pins and yarn forming a physical visualization]
The physical visualization of liaison duties created by the author and Dr. Sutton (image by the author)

Would I take this approach now, as a researcher? No, probably not. My first attempt at creating a meaningful data visualization with the push pins and yarn was an exercise for my mind. I believe that this exercise is analogous to toddlers playing with blocks before they pick up a toolbox or solve a math problem. The hands-on nature of this exercise, as I reflect on it, helped me take a baby step into the world of library research.

As a GRA and graduate student, I also read relevant scholarly journal articles and books. I immersed myself in the literature concerning research impact, altmetrics, scholarly communication, and digital scholarship. I also relearned key statistical methods and data visualization concepts that helped me while I worked on this project.

For instance, I began copying datasets into Excel for individual survey questions, such as, “How familiar are you with the concept of journal impact factors and the following measures of article-level impact?” Participants could answer on a Likert-scale between “1 – I know nothing” and “5 – I’m an expert.” In Excel, I converted the raw numbers into percentages and created graphs, such as this one:

[Image: a graph of levels of familiarity with the JIF and article-level metrics]
A sample of the graphs Rachel created from the collected data (graph from the author)
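The counts-to-percentages step described above is simple but easy to get wrong at scale. The sketch below shows the same conversion in Python rather than Excel; the labels mirror the survey's Likert scale, while the response counts are hypothetical, not the study's data.

```python
# Sketch of converting raw Likert-scale response counts into percentages
# for graphing. The counts below are made-up example data.

labels = ["1 - I know nothing", "2", "3", "4", "5 - I'm an expert"]
raw_counts = [40, 95, 210, 180, 60]  # hypothetical responses to one question

total = sum(raw_counts)
percentages = [round(100 * count / total, 1) for count in raw_counts]

for label, pct in zip(labels, percentages):
    print(f"{label:>20}: {pct}%")
```

From here, a bar chart of `percentages` against `labels` reproduces the kind of graph shown above, and keeping the conversion in code (rather than pasted formulas) makes it repeatable across the survey's many questions.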

Finding Meaning in Data

After creating graphs, my data started to have more meaning, especially since I am a visual person. However, a good researcher never draws conclusions based purely on data visualizations. I ran statistical analyses on subsets of data that appeared to have statistically significant relationships. For example, there was a statistically significant relationship between the levels of familiarity with the JIF and altmetrics (χ²(4, n = 1085) = 89.201, p < .01). The graph suggests this, but by conducting the chi-square test of independence on these subsets of data, our research team was able to confirm that the relationships between certain variables were statistically significant.
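The chi-square test of independence mentioned above can be sketched in a few lines. The function computes the test statistic from a contingency table of observed counts; the 3×3 table here is made-up example data (not the survey's actual responses), and in practice a library routine would also return the p-value.

```python
# Minimal sketch of a chi-square test of independence on a contingency
# table. The table below is hypothetical, not the survey's actual counts.

def chi_square_statistic(table):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = familiarity with the JIF, columns = familiarity
# with altmetrics, binned into three levels each
table = [
    [30, 10, 5],
    [12, 25, 8],
    [6, 11, 28],
]
stat = chi_square_statistic(table)
df = (len(table) - 1) * (len(table[0]) - 1)
# The critical value for df = 4 at alpha = .01 is about 13.28; a larger
# statistic means the variables are unlikely to be independent.
print(f"chi2({df}) = {stat:.2f}")
```

A statistic like the study's 89.201 on 4 degrees of freedom sits far beyond that critical value, which is why the reported p-value is below .01.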

What I’ve just described was a repeated process for me, but before it became ingrained in my mind, I had to take the time and effort to learn how to do it. Sometimes, I spent an unnecessarily long time with datasets, especially in the beginning of this research project. In addition, certain datasets and data visualizations were never used in presentations or publications, either because they did not provide our team with enough statistically significant results, or the results were unrelated to a project’s focus.

Communication & Organization

“There are things we could have done better in our research, mainly with survey design and data management. However, this will help me with managing the next big research project.”

I think that what every researcher needs to know is this: you will keep learning as you progress through each and every project. There are things we could have done better in our research, mainly with survey design and data management. However, those lessons will help me manage the next big research project. There are still clear takeaways from this past research project, both for me as a researcher and professional librarian and for the library profession as a whole. I would even go so far as to argue that our research may have a small impact on all of academia one day.

Measuring impact is tricky, and reflecting on your experience is much less measurable, but at the end of the day, I feel I’ve gained insight and wisdom into my research and into my own field of scholarly communication. However, I have more questions than when I started; isn’t that always the conundrum?


Featured image: Networks via Pixabay


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License


The views expressed are the writer’s own and do not reflect those of anyone else.
