Does Investment in Public Libraries Increase Usage?

I recently mentioned that I had been exploring some data on public libraries.  Here's the reason why.  A recent local newspaper article chronicled the role libraries are playing today, highlighting some local libraries that have undergone major renovations recently.  In the article they claim:

The surge in popularity mirrors what other communities have seen. When they invest in libraries, the number of people using them goes up.

The claim seemed to rely on anecdotal evidence, so I decided to examine it using data.


I want to preface this by admitting that I am a big fan of libraries.  I have fond memories of summer reading programs in my childhood.  My very first exposure to the Internet happened in a public library.  I used to rollerblade to the local public library as a teenager to do my homework (even though I had my own desk at home).  When my parents moved and I visited them, one of the local attractions I wanted to see was their public libraries.  I love them.  However, I love claims backed with data more than anecdotes, especially when the subject is close to me.


I used data from the Annual Report for Public and Association Libraries to evaluate the claim.  I looked at the data from 1991-2014.  As always, for those who care to replicate my analysis, you can check out the GitHub repository.

I examined the change in library "usage" in terms of circulation and visits.  I wanted to see if investment in libraries spurred an increase in usage that died out over time, so I compared before-and-after windows ranging from one year up to ten years around the investment.

Just under 500 libraries had a renovation over the time period.  There were also about 200 libraries in New York State that didn't have major renovations, which I was able to use as a control group.  If there was a statistically significant difference between these two groups, there would be data to back up the newspaper article's claim.
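The comparison between the two groups can be sketched as a two-sample test on the change in usage. This is a minimal illustration with made-up numbers, not the actual library data; the effect sizes, spreads, and group sizes below are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical fractional changes in circulation over a post-investment window
renovated = rng.normal(loc=0.02, scale=0.10, size=500)  # ~500 renovated libraries
control = rng.normal(loc=0.02, scale=0.10, size=200)    # ~200 control libraries

# Welch's t-test: is the mean change different between the two groups?
t_stat, p_value = stats.ttest_ind(renovated, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A large p-value here would mean no detectable difference between the renovated and control libraries.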


After looking at circulation and visitation over the various time frames, there was no difference between the libraries that were renovated and those that were not, neither over the short term nor the long term.  So the bottom line is that the claim that investment increases library usage is not supported by the data.


Visualizing U.S. Commuters

I recently analyzed the patterns of U.S. commuters and created a visualization that summarizes these patterns at the state level.

About the Data

This data is from the Census Transportation Planning Products (CTPP). For those who don't know, the CTPP is derived from the 2006–2010 5-year American Community Survey (ACS) data. You can learn more about this data set by visiting its home page. To make this analysis easily reproducible, you can download the csv used in it. I chose this data source because I have used it in the past and am familiar with it.

Steps Taken

I created a directed network graph from this data using Python's networkx and pandas packages; the complete source code is provided below. I excluded Puerto Rico from the data set because I wanted to analyze state-level patterns. I did leave the District of Columbia in, so there are 51 "states" in the analysis.

I wanted to use Gephi to generate the visualization because I have done so in the past. After creating the directed network graph using Python, I imported the data into Gephi and used the community detection algorithm using the default settings (Randomized checked, Use weights checked, Resolution = 1.0). This resulted in 7 communities being detected.
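Gephi's modularity tool implements a Louvain-style algorithm, so a comparable community detection step can be sketched directly in networkx (2.8 or later). The states and weights in this toy graph are made up for illustration, not taken from the CTPP data:

```python
import networkx as nx

# Toy directed, weighted graph standing in for the state commuter network
G = nx.DiGraph()
G.add_weighted_edges_from([
    ('NJ', 'NY', 400000), ('NY', 'NJ', 120000), ('CT', 'NY', 70000),
    ('MD', 'DC', 250000), ('VA', 'DC', 300000), ('MD', 'VA', 80000),
])

# Louvain community detection, weighted, with Gephi's default resolution of 1.0
communities = nx.community.louvain_communities(G, weight='weight',
                                               resolution=1.0, seed=42)
print(communities)
```

Each element of `communities` is a set of states, so the result is a partition of the nodes just like Gephi's community column.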

I grouped the states by these communities, colored them accordingly, and placed them in a circular layout with straight edges between the nodes. I varied the edge width based on the edge weight.


Commuters Network


There are a couple of things of interest here. The first thing to notice is the proliferation of edges in this network.  Almost every state is connected to every other state, which produces the "spirograph"-like effect in the visualization.  However, I don't find that to be the most interesting aspect.

The sub-networks highlighted in this visualization are particularly interesting.  For example, one readily sees that a lot of people in New Jersey work in New York.  As a former New Yorker, there are no surprises there.  You also see the Capital Beltway connections in the visualization: residents of Maryland and Virginia find work in D.C.  The community detection algorithm highlights these findings.  Since the states are arranged by "community," the cross-community connections are also interesting, for example the connection from Connecticut to New York.

Source Code

The following is how I generated the directed network graph file for Gephi from the CSV.

import pandas as pd
import networkx as nx

df = pd.read_csv('Table 1 Commuting Flows Available at State to POW State and County to POW County only.csv', skiprows=[0,1], thousands=',')
# Only pull the Estimates
df = df[df['Output'] == 'Estimate']
# Only pull the first 4 columns
df = df[df.columns[:4]]
# Drop the N/A's
df = df.dropna()
# Drop the Output column
df = df.drop(columns='Output')
# Rename the columns
df.columns = ['from', 'to', 'weight']
# Drop the Puerto Rico records
df = df[df['from'] != 'Puerto Rico']
df = df[df['to'] != 'Puerto Rico']
# Remove the people who work where they live
commuters = df[df['from'] != df['to']]
# Build the directed network graph
# (from_pandas_edgelist replaced from_pandas_dataframe in networkx 2.0)
G = nx.from_pandas_edgelist(commuters, 'from', 'to', edge_attr='weight', create_using=nx.DiGraph())
# Write the graph so I can use Gephi
nx.write_gexf(G, 'commuters.gexf')

New York State Public Libraries Circulation Visualization

I have recently been exploring data on the public libraries of New York State for a side project (more on that in a later post, hopefully).  I have also started a Data Visualization course on Coursera and have decided to feature some visualizations of this data set.

About the Data

The data used in this analysis comes from the Annual Report for Public and Association Libraries produced for the New York State Education Department (NYSED). You can access the data at using "new york" as the username and "pals" as the password.  Load the saved list named "All Libraries as of 15 March 2016" and select the "Total Circulation" data element.

Visualization Decisions

For this visualization I decided to use all data from 2000 to 2014 (the latest available).  I aggregated the library-level circulation data to generate the aggregate circulation for New York State public libraries.  I used colorblind-safe colors from the Color Brewer palette.  I adjusted the scale on the Y-axis to be in millions.  I used R to generate the following visualization:


What It Tells Us

Book circulation generally increased until 2010, when one observes a reversal of the decade-long trend.  There is an exceptionally precipitous drop from 2013 to 2014.

This raises the question: why is this changing?  Is it because of a change in the population?  Is it due to a change in the number of libraries reporting (which might explain the 2013-2014 drop)?  Is it due to a rise in digital media as a substitute for books?  Is it due to a lack of public support/investment in libraries?  I plan to look at that last question in a future post.

Source Code


library(dplyr)
library(tidyr)
library(ggplot2)
library(ggthemes)  # for theme_hc()

book_circulation <- read.csv('', na.strings = 'N/A', stringsAsFactors = FALSE) %>%
  gather(., Year, measurement, X1991:X2014) %>%
  mutate(Year = as.numeric(substr(Year, 2, 5))) %>%
  mutate(measurement = as.numeric(gsub(',', '', measurement))) %>%
  filter(Year > 1999) %>%
  # Drop records with missing circulation counts
  filter(!is.na(measurement)) %>%
  group_by(Year) %>%
  summarise(Circulation = sum(measurement)) %>%
  mutate(Circulation = Circulation / 1000000)

ggplot(book_circulation, aes(Year, Circulation)) +
  geom_bar(stat = 'identity', fill = "#9ecae1", colour = "#3182bd") +
  ylab('Book Circulation (in millions)') +
  ggtitle('Book Circulation in NYS Public Libraries, 2000-2014') +
  theme_hc()

Searching for a Needle in a Haystack with Python

I was recently working with New York State libraries data (hopefully more to come on this) and was trying to match up two data sets on the library name.  Due to minor variations in the names (e.g., abbreviations or punctuation marks), this is a messy process.  I found fuzzywuzzy, a Python library that can help out.  Here's how I constructed my search:

from fuzzywuzzy import fuzz
import pandas as pd

haystack = pd.read_csv('web_list.csv')
haystack = haystack['Libraries'].values.tolist()

needles = pd.read_csv('excel_list.csv')
needles = needles['Library'].values.tolist()

best_matches = []

# Look for the best match for each needle
for needle in needles:
    print('Searching for: ' + needle)
    best_match_hay = ''
    best_match_ratio = 0
    for hay in haystack:
        match_score = fuzz.partial_ratio(needle, hay)
        if match_score > best_match_ratio:
            best_match_ratio = match_score
            best_match_hay = hay
        if match_score == 100:
            # Perfect match, so stop searching the haystack
            break
    # Append the best match found for this needle
    best_matches.append({'Searched': needle, 'Found': best_match_hay, 'Ratio': best_match_ratio})

df = pd.DataFrame(best_matches)
with pd.ExcelWriter('Best Matches.xlsx', engine='xlsxwriter') as writer:
    df.to_excel(writer, sheet_name='Sheet1', index=False)

The results are pretty good.  I tried fuzzywuzzy's built-in process module but I was getting a lot of goofy results.  My only caution is that the ratio in my implementation should not be strongly trusted: I got a 100 for "Babylon Public Library" being a match with "North Babylon Public Library."

blsAPI Updated to Deliver QCEW Data

I have previously posted that I developed an R package to facilitate pulling data from the BLS API.  David Hiles asked that I incorporate pulling QCEW data, which is not available through the standard API.  It was a great idea, so I did it.  It is now posted to CRAN and the GitHub repository.

So if you install/update this R package, you will have a blsQCEW() function.  You pass in what type of data you are looking for; valid options are Area, Industry, and Size.  Other parameters are needed, depending on what type of request you are making.

Area Data Request

Area requests require year, quarter, and area parameters.  The area codes are defined by the BLS and available here:  Here's a code example for an area request:

# Request QCEW data for the first quarter of 2013 for the state of Michigan
MichiganData <- blsQCEW('Area', year='2013', quarter='1', area='26000')

Industry Data Request

Industry requests require year, quarter, and industry parameters.  Some industry (NAICS) codes contain hyphens, but the open data access uses underscores instead of hyphens, so 31-33 becomes 31_33.  For all industry codes and titles see:  Here's a code example for making a construction industry request:

# Request Construction data for the first quarter of 2013
Construction <- blsQCEW('Industry', year='2013', quarter='1', industry='1012')

Size Data Request

Data by size is only available for the first quarter of each year. To make this type of request, you only need to provide the size and the year parameters. The size codes are available here:  Here’s a code example:

# Request data for the first quarter of 2013 for establishments with 100 to 249 employees
SizeData <- blsQCEW('Size', year='2013', size='6')

I also want to mention that the blsAPI() function has been changed to return data either as a JSON string or as a data frame. I hope others will find these improvements helpful.