Does Investment in Public Libraries Increase Usage?

I recently mentioned that I had been exploring some data on public libraries.  Here’s the reason why.  A recent local newspaper article chronicled the role libraries are playing today, highlighting some local libraries that have undergone major renovations recently.  The article claims:

The surge in popularity mirrors what other communities have seen. When they invest in libraries, the number of people using them goes up.

The claim seemed to rely on anecdotal evidence, so I decided to examine it using data.

Disclaimer/Confessions

I want to preface this by admitting that I am a big fan of libraries.  I have fond memories of summer reading programs in my childhood.  My very first exposure to the Internet happened in a public library.  I used to rollerblade to the local public library as a teenager to do my homework (even though I had my own desk at home). When my parents moved and I visited them, one of the local attractions I wanted to see was their public libraries.  I love them.  However, I love claims backed by data more than anecdotes, especially when the subject is something close to me.

Methodology

I used data from the Annual Report for Public and Association Libraries to evaluate the claim, looking at the data from 1991 to 2014.  As always, for those who care to replicate my analysis, you can check out the GitHub repository.

I examined the change in library “usage” in terms of circulation and visits.  I also wanted to see whether investment spurred an increase in usage that died out over time, so I looked at before-and-after windows around the investment ranging from one year up to ten years.

There were just under 500 libraries that had a renovation over the time period, and about 200 libraries in New York State that didn’t have major renovations.  I was able to use the latter as a control group.  If there were a statistically significant difference between these two groups, there would be data to back up the newspaper article’s claim.
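
To give a rough sense of how the comparison works, here is a sketch in Python using pandas and SciPy.  The file names, column names, and the control group’s reference year are placeholders for illustration only (the actual analysis is in the GitHub repository), and the Welch’s t-test here simply stands in for whatever significance test you prefer.  The idea is to compute each library’s change in circulation over a given window and test whether that change differs between the renovated and control groups.

import pandas as pd
from scipy import stats

# Placeholder inputs (illustrative only): one row per library per year with a
# circulation count, plus the year of any major renovation
usage = pd.read_csv('library_usage.csv')        # columns: library_id, year, circulation
renovations = pd.read_csv('renovations.csv')    # columns: library_id, renovation_year

def change_over_window(library_years, event_year, window):
    """Circulation `window` years after the event minus `window` years before."""
    by_year = library_years.set_index('year')['circulation']
    before, after = event_year - window, event_year + window
    if before in by_year.index and after in by_year.index:
        return by_year[after] - by_year[before]
    return None

window = 5  # repeated for windows of 1 through 10 years

renovated_changes, control_changes = [], []
for library_id, library_years in usage.groupby('library_id'):
    reno = renovations.loc[renovations['library_id'] == library_id, 'renovation_year'].dropna()
    if not reno.empty:
        change = change_over_window(library_years, int(reno.iloc[0]), window)
        if change is not None:
            renovated_changes.append(change)
    else:
        # Control group: no renovation, so use an arbitrary reference year (placeholder)
        change = change_over_window(library_years, 2002, window)
        if change is not None:
            control_changes.append(change)

# Welch's t-test for a difference in mean change between the two groups
t_stat, p_value = stats.ttest_ind(renovated_changes, control_changes, equal_var=False)
print('{}-year window: t = {:.2f}, p = {:.3f}'.format(window, t_stat, p_value))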

Findings

After looking at circulation and visits over the various time frames, there was no statistically significant difference between the libraries that were renovated and those that were not, over either the short term or the long term.  So the bottom line is that the claim that investment increases library usage is not supported by the data.


New York State Public Libraries Circulation Visualization

I have recently been exploring data on the public libraries of New York State for a side project (more on that in a later post, hopefully).  I have also started a Data Visualization course on Coursera and have decided to feature some visualizations of this data set.

About the Data

The data used in this analysis comes from the Annual Report for Public and Association Libraries produced for the New York State Education Department (NYSED). You can access the data at http://collectconnect.baker-taylor.com/ using “new york” as the username and “pals” as the password.  Load the saved list named “All Libraries as of 15 March 2016” and select the “Total Circulation” data element.

Visualization Decisions

For this visualization I decided to use all the data from 2000 to 2014 (the latest available).  I aggregated the library-level circulation data to generate the aggregate circulation for New York State public libraries.  I used colorblind-safe colors from the Color Brewer palette, and I adjusted the scale on the Y-axis to be in millions.  I used R to generate the following visualization:

[Figure: Book Circulation in NYS Public Libraries, 2000-2014]

What It Tells Us

Book circulation generally increased until 2010, when the decade-long trend reversed.  There is an exceptionally precipitous drop from 2013 to 2014.

This raises the question: why is circulation changing?  Is it because of a change in the population?  Is it due to a change in the number of libraries reporting (which might explain the 2013-2014 drop)?  Is it due to the rise of digital media as a substitute for books?  Is it due to a lack of public support for and investment in libraries?  I plan to look at that last question in a future post.

Source Code

library(dplyr)
library(tidyr)
library(ggplot2)
library(ggthemes)

# Read the library-level circulation data, reshape from wide to long,
# and compute statewide totals (in millions) for 2000-2014
book_circulation <- read.csv('https://goo.gl/fyybwi', na.strings = 'N/A', stringsAsFactors = FALSE) %>%
  gather(Year, measurement, X1991:X2014) %>%
  mutate(Year = as.numeric(substr(Year, 2, 5))) %>%
  mutate(measurement = as.numeric(gsub(',', '', measurement))) %>%
  filter(Year > 1999) %>%
  filter(!is.na(measurement)) %>%
  group_by(Year) %>%
  summarise(Circulation = sum(measurement)) %>%
  mutate(Circulation = Circulation / 1000000)

# Plot aggregate circulation by year using colorblind-safe Color Brewer blues
ggplot(book_circulation, aes(Year, Circulation)) +
  geom_bar(stat = 'identity', fill = "#9ecae1", colour = "#3182bd") +
  ylab('Book Circulation (in millions)') +
  ggtitle('Book Circulation in NYS Public Libraries, 2000-2014') +
  theme_hc()

Searching for a Needle in a Haystack with Python

I was recently working with New York State libraries data (hopefully more to come on this) and trying to match up two data sets on the library name.  Due to minor variations in the names (e.g. abbreviations or punctuation marks), this is a messy process.  I found fuzzywuzzy, a Python library that can help out.  Here’s how I constructed my search:

from fuzzywuzzy import fuzz
import pandas as pd

haystack = pd.read_csv('web_list.csv')
haystack = haystack['Libraries'].values.tolist()

needles = pd.read_csv('excel_list.csv')
needles = needles['Library'].values.tolist()

best_matches = []

# For each needle, find the haystack entry with the highest partial ratio,
# stopping early if a perfect (100) match is found
for needle in needles:
    print('Searching for: ' + needle)
    best_match_hay = ''
    best_match_ratio = 0
    for hay in haystack:
        match_score = fuzz.partial_ratio(needle, hay)
        if match_score > best_match_ratio:
            best_match_ratio = match_score
            best_match_hay = hay
        if match_score == 100:
            # Perfect match -- no need to keep searching the haystack
            break
    # Record the best match for this needle
    row = {'Searched': needle, 'Found': best_match_hay, 'Ratio': best_match_ratio}
    best_matches.append(row)

df = pd.DataFrame(best_matches)
writer = pd.ExcelWriter('Best Matches.xlsx', engine='xlsxwriter')
df.to_excel(writer,'Sheet1', index=False)
writer.save()

The results are pretty good.  I tried the fuzzywuzzy process module, but I was getting a lot of goofy results.  My only caution is that the ratio in my implementation should not be strongly trusted: I got a 100 for “Babylon Public Library” as a match for “North Babylon Public Library.”
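
That happens because partial_ratio scores the best matching substring, so a name that is fully contained in another scores 100.  A quick comparison with the plain ratio function (illustrative only; exact scores can vary slightly by fuzzywuzzy version) shows the difference:

from fuzzywuzzy import fuzz

short_name = 'Babylon Public Library'
long_name = 'North Babylon Public Library'

# partial_ratio matches the best substring, so full containment scores 100
print(fuzz.partial_ratio(short_name, long_name))  # 100

# ratio compares the whole strings and penalizes the extra "North "
print(fuzz.ratio(short_name, long_name))          # a lower score (high 80s)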