LibParlor Contributor Series

How Do You Even Do Research? A Librarian Parlor Series, Part II

In the second post in this series, LibParlor contributors write about their experiences actually doing the research and analysis.

Brittany Paloma Fiedler is a Teaching and Learning Librarian at the University of Nevada Las Vegas. She previously worked as a middle school librarian and high school English teacher. She was a 2015-2016 ALA Spectrum Scholar and is a 2018 ALA Emerging Leader.

Chelsea Heinbach is a Teaching and Learning Librarian at the University of Nevada, Las Vegas. Prior to UNLV, she taught at the University of California, Santa Barbara and Auraria Library in Denver, CO. She is a co-founder and editor of The Librarian Parlor and she tweets here.


This post is the second in a series called “How Do You Even…” wherein we reflect on the entire research process as new-to-research librarians. For our first post on getting started, click here.

Submitting to the IRB

As we discussed in our first post, we were incredibly lucky to have a dedicated assessment librarian at our library. While we discussed our survey questions, we also talked about de-identification and data security to protect student information, which was helpful for filling out our IRB paperwork. Rosan, the most experienced researcher among us, drafted all of the paperwork. Then we had a three (!!!) hour Friday afternoon meeting reviewing the details over and over. Despite our best efforts, our initial submission to the IRB was denied. There were a few reasons for the rejection, some in our control and some not, but the office was helpful in working with us to get approval. We’ve heard that rejection can be pretty common when it comes to IRB offices, so try not to get discouraged! Also, build in some time in case there are things you need to clarify before being approved.

The Survey

We used Qualtrics survey software to send our IRB-approved email to all transfer students. We sent multiple emails requesting survey responses, as Qualtrics has a nifty feature that ensures students don’t get repeat emails once they’ve completed the survey. To maximize our chances of a response, we varied the days and times of the solicitation emails. Ultimately, over 10% of transfer students responded to our survey, and more students volunteered to be interviewed than we could actually talk to.

The survey had a combination of question types: open-ended questions, short-answer questions with comment boxes, Likert scales, and check-all-that-apply. The open-ended questions and comment boxes posed a challenge for us as new researchers because we had no idea how to analyze that information. Fortunately, our library had just hired a new cataloging librarian (shout out to Kelsey George) who mentioned a free and easy data cleaning tool, OpenRefine, and who was willing to sit down with Brittany and teach her some tricks. The most important things we learned from Kelsey were clustering and column editing.

Clustering was important for questions that were open-ended but had a limited set of likely answers. For example, “What’s your major?” might generate responses like bio, BIOLOGY, Biology, and BS-Bio. Clustering recognizes that all of these answers represent the same thing and lets you change them all to “Biology” so you can easily count your bio majors.
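OpenRefine’s clustering is all point-and-click, but if you’re curious what’s happening under the hood, here’s a minimal Python sketch of the idea behind its default “fingerprint” keying method (the major names are made up for illustration). An exact-key method like this catches case and punctuation variants; merging abbreviations like “bio” into “Biology” relies on OpenRefine’s fuzzier nearest-neighbor methods or a manual merge.

```python
import re
import unicodedata
from collections import defaultdict

def fingerprint(value: str) -> str:
    """Reduce a messy free-text answer to a comparison key,
    roughly following OpenRefine's "fingerprint" method."""
    value = unicodedata.normalize("NFKD", value)    # normalize accented characters
    value = re.sub(r"[^\w\s]", " ", value.lower())  # lowercase, drop punctuation
    tokens = sorted(set(value.split()))             # dedupe and sort the tokens
    return " ".join(tokens)

def cluster(responses):
    """Group responses whose fingerprints collide."""
    groups = defaultdict(list)
    for response in responses:
        groups[fingerprint(response)].append(response)
    return [group for group in groups.values() if len(group) > 1]

# Made-up survey answers to "What's your major?"
majors = ["bio", "BIOLOGY", "Biology", "biology.", "BS-Bio"]
print(cluster(majors))  # [['BIOLOGY', 'Biology', 'biology.']]
```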

Column editing was important for questions that were totally open-ended, like “What additional spaces, services, or resources would you like to see in our libraries?” This question had over 200 responses! Brittany read a little bit about emergent thematic coding (information available here and here) and then read through the responses, taking notes on potential categories.

OpenRefine lets users designate a character as a signal to split cells, so Brittany used the pipe character (|), since it definitely didn’t appear in any of the responses. This turned a response that looked like “comfortable seating for reading” into “FURNITURE | | | Comfortable seating for reading,” which split one cell into four cells: FURNITURE in the first cell, the comment in the last cell, and empty cells in between to make space for comments that needed more than one code.
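If you’d rather script this step than click through it, here’s a rough equivalent in Python with pandas, using a hypothetical column of coded responses like the ones described above. Pandas is a stand-in for OpenRefine’s split-into-columns operation here, not what we actually used.

```python
import pandas as pd

# Hypothetical coded answers: up to three category codes plus the original
# comment, separated by a pipe because it never appears in the answers.
coded = pd.DataFrame({
    "response": [
        "FURNITURE | | | Comfortable seating for reading",
        "FURNITURE |TECHNOLOGY | | More outlets near the couches",
    ]
})

# Split each cell on "|" into four columns, mirroring OpenRefine's
# split-into-several-columns operation with a custom separator.
parts = coded["response"].str.split("|", n=3, expand=True)
parts.columns = ["code_1", "code_2", "code_3", "comment"]
parts = parts.apply(lambda col: col.str.strip())  # trim the padding spaces

print(parts)
#       code_1      code_2 code_3                          comment
# 0  FURNITURE                      Comfortable seating for reading
# 1  FURNITURE  TECHNOLOGY            More outlets near the couches
```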

Screenshot of spreadsheet demonstrating answers to a survey question grouped by category

The data was now a lot easier to make sense of, and eventually, we were able to turn the hundreds of open-ended responses about everything from napping pods to high-level STEM tutors into a spreadsheet that looked like this:

Screenshot of spreadsheet that keeps track of how many students mentioned certain features throughout the interview
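For anyone who wants to reproduce that tally step in code, here’s a minimal sketch in Python; the category names and counts are invented for illustration, not our actual results.

```python
from collections import Counter

# Invented example: one list of category codes per coded response.
coded_rows = [
    ["FURNITURE"],
    ["FURNITURE", "TECHNOLOGY"],
    ["TUTORING"],
    ["FURNITURE", "QUIET SPACE"],
]

# Count how many responses mentioned each category.
tally = Counter(code for row in coded_rows for code in row)
for category, count in tally.most_common():
    print(f"{category}: {count}")
# FURNITURE: 3
# TECHNOLOGY: 1
# TUTORING: 1
# QUIET SPACE: 1
```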

Interviewing the Students

We decided it was best to have two of us in each interview: one person to take preliminary notes and one person to ask questions. We had prepared by doing practice interviews with our student workers, but we still made some mistakes in the actual interviews. For example, once or twice we forgot to ask students at the end of the interview to fill out their demographic information. If you’re going to do interviews, we recommend putting everything on your interview question sheet, including reminders to do things like collect signed informed consent forms, hand out demographic surveys, and give gift bags. While these things seem impossible to miss, things get hazy when you have four back-to-back interviews and are having genuinely interesting, dynamic conversations with students.

We interviewed students using a semi-structured format, which allowed us to ask follow-up questions based on interviewees’ answers. This gave us the freedom to really dive deep into responses and gave our interviews a conversational feel. We believe it helped us have an open dialogue with the students we talked to and foster a comfortable space for discussion. One thing we noticed is that interviewees will sometimes talk about what they want to talk about, not necessarily what we asked them about. This is fine and can lead to some great conversation, but remember that you ultimately want answers to your questions, or your interview won’t contribute as much to your research as it could have.

Something we learned a bit about during this process was data saturation, an important concept in qualitative research: it is the point at which additional data collection stops yielding new information, signaling that you can finish the data collection stage of your study. For us, reaching data saturation meant reaching a point where we felt that additional interviews wouldn’t add necessary information to our data. It was hard to pinpoint just how many interviews we would need because, as the link above notes, the number is different for every study. Because transfer students are such a diverse group with exceptionally varied experiences, we felt unsure of when we could consider ourselves finished. In the end, we conducted 21 interviews, a few more than is typical, before we felt comfortable with our data.
Even so, we found that the demographics of our interviewees did not align with the demographics of our student body, and we will ultimately discuss this as a limitation in any writing we do about this project. We had an overrepresentation of white students, particularly white veterans, an underrepresentation of Hispanic students, and, unfortunately, no black students at all. We believe this imbalance is likely due to which populations have the time to talk to librarians, in the middle of the day, for a 1-in-21 chance of winning an Amazon gift card. Because of this gap in our research, we are determined to learn more about the experiences of transfer students of color, and we will make additional efforts in future studies to ensure we hear and learn from their perspectives.

Presenting

Image of the two authors and their other collaborator after their presentation at LOEX 2018
L to R: Chelsea, Brittany, and Rosan after our LOEX presentation

We presented our preliminary findings at LOEX 2018 in Houston, TX. This was super nerve-wracking for us, as it was our first time presenting nationally. To get the presentation together, we divided and conquered, then came together to work through everything and make sure we were on the same page. Chelsea led the research and presentation on the asset-oriented lens we took with our research, Brittany took the reins on wrangling the survey data, and Rosan considered how we might use this information for outreach. We recommend this approach in some situations, but if time allows, we also really like the idea of working through all of the sections together. Ultimately, we felt pretty good about our presentation. If you’re nervous, it can help to remember that most other folks are nervous too. Try to breathe through your nerves; it’s going to be okay!

Keep an eye out for our third post, after we get to work submitting our study for publication. In the meantime, keep the conversation going…

  1. Any advice to newbies on how to tackle survey questions?
  2. Have you done interviews before? What advice do you have?
  3. What was your first presentation like and how did you deal with nerves?

Featured image [CC0], via Pexels


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.


The expressions of the writer do not reflect anyone’s views but their own.
