
Visualizing EMS Service Delivery Options

I put together a neat little visualization that lets the user do a back-of-the-envelope calculation for an EMS service delivery option.  You can try it out at https://msilva-cgr.shinyapps.io/essex-county-ems-options/.  It is a Shiny app.  Because of the time crunch, it is slow: it does a lot of calculations on the fly.  If I were to do it again, I would preprocess the data and save the results, which would cut out the costly computations.  But in any event, it is a really neat tool.
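The preprocess-and-save idea can be sketched as follows. Everything here is a made-up placeholder (the real app's cost model, inputs, and parameter ranges are not shown in this post); the point is only that the expensive calculation runs once per input combination, and the app then does a fast lookup.

```python
# Sketch of precompute-and-cache: run the expensive calculation once
# per input combination, save the results, and have the app look them
# up instead of computing on the fly. The cost model is a placeholder.
import itertools

def cost_estimate(stations, staff_per_station, hourly_rate):
    """Placeholder back-of-the-envelope cost model (not the app's real one)."""
    hours_per_year = 24 * 365
    return stations * staff_per_station * hourly_rate * hours_per_year

# Precompute every option the app's controls could request.
grid = itertools.product(range(1, 4), range(2, 5), (15.0, 20.0, 25.0))
lookup = {f"{s}-{p}-{r}": cost_estimate(s, p, r) for s, p, r in grid}

# At request time the app only does a dictionary lookup.
print(lookup["2-3-20.0"])
```

The `lookup` table could be saved to disk (e.g. with `json` or `saveRDS` on the R side) and loaded at app startup.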

Essex County EMS options app screenshot


Visualizing U.S. Commuters

I recently analyzed the commuting patterns of U.S. workers and created a visualization that summarizes them at the state level.

About the Data

This data is from the Census Transportation Planning Products (CTPP). For those who don’t know, the CTPP is derived from the 2006–2010 5-year American Community Survey (ACS) data. You can learn more about this data set by visiting its home page. In an effort to make this easily reproducible, you can download the csv used in this analysis. I chose this data source because I have used it in the past and am familiar with it.

Steps Taken

I created a directed network graph from this data using Python’s networkx and pandas packages; the complete source code is provided below. I excluded Puerto Rico from the data set because I wanted to analyze state-level patterns, but I left the District of Columbia in, so there are 51 “states” in the analysis.

I wanted to use Gephi to generate the visualization because I have done so in the past. After creating the directed network graph in Python, I imported the data into Gephi and ran its community detection algorithm with the default settings (Randomized checked, Use weights checked, Resolution = 1.0). This detected 7 communities.
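Gephi's default community detection is a modularity-based (Louvain) method. As an illustration of what "communities" means here, below is a minimal label-propagation sketch on a toy commuter graph. This is a different, simpler algorithm than the one Gephi runs, and the edges are invented examples, not CTPP figures; it is shown only to make the concept concrete.

```python
# Toy community detection by label propagation: every node starts in
# its own community, then repeatedly adopts the most common label
# among its neighbors (ties broken alphabetically) until stable.
from collections import Counter

edges = [
    ("New Jersey", "New York"),
    ("Maryland", "District of Columbia"),
    ("Virginia", "District of Columbia"),
]

neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

labels = {node: node for node in neighbors}
changed = True
while changed:
    changed = False
    for node in sorted(neighbors):
        counts = Counter(labels[n] for n in neighbors[node])
        top = counts.most_common(1)[0][1]
        best = min(l for l, c in counts.items() if c == top)
        if labels[node] != best:
            labels[node] = best
            changed = True

communities = sorted(set(labels.values()))
print(len(communities))
```

On this toy graph the New Jersey/New York pair and the Beltway trio settle into two separate communities, mirroring the kind of grouping Gephi surfaces on the full data.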

I grouped the states by these communities, colored each community, and placed the nodes in a circular layout with straight edges between them. Edge width varies with edge weight.
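Varying width by weight is just a linear rescale of the edge weights into a readable range of stroke widths. A small sketch of that mapping (the 0.5–8.0 width range is an arbitrary choice, not the one used in Gephi):

```python
# Linearly rescale edge weights into stroke widths so the heaviest
# flow gets max_width and the lightest gets min_width.
def edge_widths(weights, min_width=0.5, max_width=8.0):
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return [min_width] * len(weights)
    span = max_width - min_width
    return [min_width + span * (w - lo) / (hi - lo) for w in weights]

print(edge_widths([100, 5050, 10000]))
```

A log rescale is often preferable when a few flows (like NJ–NY) dwarf the rest, but the linear version matches what a straight width-by-weight setting does.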

Visualization

Commuters Network

Observations

There are a couple of things of interest here. The first is the sheer proliferation of edges in this network: almost every state is connected to every other state, which produces the “spirograph”-like effect in the visualization.  However, I don’t find that to be the most interesting aspect.

The sub-networks highlighted in this visualization are particularly interesting.  For example, one readily sees that a lot of people in New Jersey work in New York; as a former New Yorker, there are no surprises there.  You can also see the Capital Beltway connections: residents of Maryland and Virginia find work in D.C., and the community detection algorithm highlights this.  Since the states are arranged by community, the cross-community connections are also interesting, for example the connection between Connecticut and New York.

Source Code

The following is how I generated the directed network graph file for Gephi from the CSV.

import pandas as pd
import networkx as nx

df = pd.read_csv('Table 1 Commuting Flows Available at State to POW State and County to POW County only.csv', skiprows=[0,1], thousands=',')
# Only pull the Estimates
df = df[df['Output']=='Estimate']
# Only pull the first 4 columns
df = df[df.columns[:4]]
# Drop the N/A's
df = df.dropna()
# Drop the Output column
df = df.drop('Output', axis=1)
# Rename the columns
df.columns = ['from','to','weight']
# Drop the Puerto Rico records
df = df[df['from'] != 'Puerto Rico']
df = df[df['to'] != 'Puerto Rico']
# Remove the people who work where they live
commuters = df[df['from'] != df['to']]
# Build the network graph (from_pandas_dataframe in networkx < 2.0)
G = nx.from_pandas_edgelist(commuters, 'from', 'to', edge_attr='weight', create_using=nx.DiGraph())
# Write the graph so I can use Gephi
nx.write_gexf(G,'Commuters.gexf')

New York State Public Libraries Circulation Visualization

I have recently been exploring data on the public libraries of New York State for a side project (more on that in a later post, hopefully).  I have also started a Data Visualization course on Coursera and have decided to feature some visualizations of this data set.

About the Data

The data used in this analysis comes from the Annual Report for Public and Association Libraries produced for the New York State Education Department (NYSED). You can access the data at http://collectconnect.baker-taylor.com/ using “new york” as the username and “pals” as the password.  Load the saved list named “All Libraries as of 15 March 2016” and select the “Total Circulation” data element.

Visualization Decisions

For this visualization I used all data from 2000 to 2014 (the latest available).  I aggregated the library-level circulation data to get the total circulation for New York State public libraries.  I used colorblind-safe colors from the ColorBrewer palette and scaled the Y-axis to millions.  I used R to generate the following visualization:
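The aggregation step the dplyr pipeline performs can be mirrored in plain Python to make it explicit: sum library-level circulation within each year, then rescale to millions. The figures below are invented for illustration; the real numbers come from the NYSED annual report.

```python
# Sum circulation per year across libraries, then convert to millions,
# mirroring the group_by/summarise/mutate steps in the R code.
from collections import defaultdict

rows = [
    (2013, "Library A", 1_200_000),
    (2013, "Library B", 800_000),
    (2014, "Library A", 1_000_000),
    (2014, "Library B", 500_000),
]

totals = defaultdict(int)
for year, _library, circulation in rows:
    totals[year] += circulation

in_millions = {year: total / 1_000_000 for year, total in sorted(totals.items())}
print(in_millions)
```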

Book Circulation in NYS Public Libraries, 2000-2014 (chart)

What It Tells Us

Book circulation generally increased until 2010, when the decade-long trend reversed.  There is an exceptionally precipitous drop from 2013 to 2014.

This raises the question: why is this changing?  Is it a change in the population?  A change in the number of libraries reporting (which might explain the 2013–2014 drop)?  A rise in digital media as a substitute for books?  A lack of public support and investment in libraries? I plan to look at that last question in a future post.
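The reporting-count hypothesis is the easiest to check: count the distinct libraries reporting in each year and see whether that count drops alongside circulation. A sketch with hypothetical stand-in records (the real check would run against the NYSED file):

```python
# Count distinct reporting libraries per year; a drop in the count
# would support the "fewer libraries reporting" explanation.
records = [
    (2013, "Library A"), (2013, "Library B"), (2013, "Library C"),
    (2014, "Library A"), (2014, "Library B"),
]

reporting = {}
for year, library in records:
    reporting.setdefault(year, set()).add(library)

counts = {year: len(libs) for year, libs in sorted(reporting.items())}
print(counts)
```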

Source Code

library(dplyr)
library(tidyr)
library(ggplot2)
library(ggthemes)

book_circulation <- read.csv('https://goo.gl/fyybwi', na.strings = 'N/A', stringsAsFactors = FALSE) %>%
  gather(., Year, measurement, X1991:X2014) %>%
  mutate(Year = as.numeric(substr(Year,2,5))) %>%
  mutate(measurement = as.numeric(gsub(',', '', measurement))) %>%
  filter(Year > 1999)%>%
  filter(ifelse(is.na(measurement),0,1)==1) %>%
  group_by(Year) %>%
  summarise(Circulation = sum(measurement)) %>%
  mutate(Circulation = Circulation/1000000)

ggplot(book_circulation, aes(Year, Circulation)) +
  geom_bar(stat = 'identity', fill = "#9ecae1", colour = "#3182bd") +
  ylab('Book Circulation (in millions)') +
  ggtitle('Book Circulation in NYS Public Libraries, 2000-2014') +
  theme_hc()

Visualizing Calls for Service Data

At work I have been working with some EMS calls for service data. After geocoding the data, I put together the following animation in R showing the calls for service by month:
Due to confidentiality reasons I cannot disclose the data; however, this is the code that created the animation:

library(ggplot2)
library(ggmap)
library(animation)
# Pull in the data
df <- read.csv("Greene Complete.csv", stringsAsFactors=F)
# Change the types
df$X <- as.numeric(df$X)
df$Y <- as.numeric(df$Y)
# Subset the data
df <- df[c('Month','X','Y')]
# Remove NA's
df <- df[!is.na(df$X),]
# Build the month vector
months <- df$Month
months <- months[!duplicated(months)]

# Fetch the base map once; it is the same for every month
basemap <- get_map(location = c(lon = -74.123996, lat = 42.295732), color = 'bw', maptype = "roadmap", zoom = 10)

saveGIF({
  for(month in months){
    data <- df[df$Month == month,]
    # Build the map for this month's frame
    map <- ggmap(basemap, extent = "device", ylab = "Latitude", xlab = "Longitude")
    #map <- map + geom_point(data=data, aes(x=X, y=Y), size=2, color='red')

    overlay <- stat_density2d(data = data,
                              aes(x = X, y = Y, fill = ..level.., alpha = ..level..),
                              size = 2, bins = 10, geom = "polygon")
    map <- map + overlay
    map <- map + scale_fill_gradient("Calls For\nService")
    map <- map + scale_alpha(range = c(0.4, 0.75), guide = FALSE)
    map <- map + guides(fill = guide_colorbar(barwidth = 1.5, barheight = 10))
    print(map)
  }
})

I would like to fix the scale to make it consistent month to month.
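Fixing the scale amounts to computing the color-scale limits once over all months, before the animation loop, and reusing them in every frame (in ggplot2 terms, passing `limits =` to `scale_fill_gradient()` instead of letting each frame autoscale). The idea, sketched in Python with placeholder density values:

```python
# Compute global scale limits once over all frames so every month of
# the animation shares the same color scale. Values are placeholders.
frames = {
    "Jan": [0.1, 0.4, 0.9],
    "Feb": [0.2, 0.3, 0.5],
    "Mar": [0.05, 0.7, 1.2],
}

all_values = [v for values in frames.values() for v in values]
limits = (min(all_values), max(all_values))
print(limits)
```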