Health Insurance Calculations

The costs involved with a health insurance plan can be confusing, so I perform an analysis of different options to find which plan is most cost effective.

My wife and I recently brought a new R programmer into our family so we had to update our health insurance. Becky is a researcher in neuroscience and psychology at NYU so we decided to choose an NYU insurance plan.

Our son sporting a New York R Conference shirt.

For families there are two main plans: Value and Advantage. The primary differences between the plans are the following:

Item | Explanation | Value Plan Amount | Advantage Plan Amount
Bi-Weekly Premiums | The amount we pay every other week in order to have insurance | $160 ($4,160 annually) | $240 ($6,240 annually)
Deductible | Amount we pay directly to health providers before the insurance starts covering costs | $1,000 | $800
Coinsurance | After the deductible is met, we pay this percentage of medical bills | 20% | 10%
Out-of-Pocket Maximum | This is the most we will have to pay to health providers in a year (premiums do not count toward this max) | $6,000 | $5,000

We put them into a tibble for use later.

# use tribble() to make a quick and dirty tibble
parameters <- tibble::tribble(
    ~Plan, ~Premiums, ~Deductible, ~Coinsurance, ~OOP_Maximum,
    'Value', 160*26, 1000, 0.2, 6000,
    'Advantage', 240*26, 800, 0.1, 5000
)

Other than these cost differences, there is no particular benefit of one plan over the other. That means whichever plan is cheaper is the better choice.

This blog post walks through the steps of evaluating the plans to figure out which to select. Code is included so anyone can repeat, and improve on, the analysis for their given situation.

Cost

In order to figure out which plan to select we need to figure out the all-in cost, which is a function of how much we spend on healthcare in a year (we have to estimate our annual spending) and the aforementioned premiums, deductible, coinsurance and out-of-pocket maximum.

\[ \text{cost} = f(\text{spend}; \text{premiums}, \text{deductible}, \text{coinsurance}, \text{oop_maximum}) = \\ \text{min}\left(\text{oop_maximum}, \text{min}(\text{spend}, \text{deductible}) + \text{coinsurance}*\text{max}(\text{spend}-\text{deductible}, 0)\right)+\text{premiums} \]

This can be written as an R function like this.

#' @title cost
#' @description Given healthcare spend and other parameters, calculate the actual cost to the user
#' @details Uses the formula above to calculate total costs given a certain level of spending. This is the premiums plus either the out-of-pocket maximum, the actual spend level if the deductible has not been met, or the amount of the deductible plus the coinsurance for spend above the deductible but below the out-of-pocket maximum.
#' @author Jared P. Lander
#' @param spend A given amount of healthcare spending; can be a vector of multiple amounts
#' @param premiums The annual premiums for a given plan
#' @param deductible The deductible for a given plan
#' @param coinsurance The coinsurance percentage for spend beyond the deductible but below the out-of-pocket maximum
#' @param oop_maximum The maximum amount of money (not including premiums) that the insured will pay under a given plan
#' @return The total cost to the insured
#' @examples
#' cost(3000, 4160, 1000, .20, 6000)
#' cost(3000, 6240, 800, .10, 5000)
#'
cost <- function(spend, premiums, deductible, coinsurance, oop_maximum)
{
    # spend is vectorized so we use pmin to get the min between oop_maximum and (deductible + coinsurance*(spend - deductible)) for each value of spend provided
    pmin(
      # we can never pay more than oop_maximum so that is one side
      oop_maximum, 
      # if we are under oop_maximum for a given amount of spend,
      # this is the cost
      pmin(spend, deductible) + coinsurance*pmax(spend - deductible, 0)
    ) + 
    # we add the premiums since that factors into our cost
    premiums
}

With this function we can see if one plan is always, or mostly, cheaper than the other; that is the plan we would choose.
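As a quick sanity check (a small example using the plan parameters from the tribble above), we can compare the all-in cost of each plan at a single hypothetical spend level of $10,000. These match the $10,000 row of the table further below.

# Value plan cost at $10,000 of healthcare spending
cost(spend=10000, premiums=160*26, deductible=1000, coinsurance=0.2, oop_maximum=6000)
## [1] 6960
# Advantage plan cost at $10,000 of healthcare spending
cost(spend=10000, premiums=240*26, deductible=800, coinsurance=0.1, oop_maximum=5000)
## [1] 7960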

R Packages

For the rest of the code we need these R packages.

library(dplyr)
library(ggplot2)
library(tidyr)
library(formattable)
library(readr)
library(readxl)

Spending

To see our out-of-pocket cost at varying levels of healthcare spend we build a grid in $1,000 increments from $1,000 to $70,000.

spending <- tibble::tibble(Spend=seq(1000, 70000, by=1000))

We call our cost function on each amount of spend for the Value and Advantage plans.

spending <- spending %>% 
    # use our function to calculate the cost for the Value plan
    mutate(Value=cost(
        spend=Spend, 
        premiums=parameters$Premiums[1], 
        deductible=parameters$Deductible[1], 
        coinsurance=parameters$Coinsurance[1], 
        oop_maximum=parameters$OOP_Maximum[1]
    )
    ) %>% 
    # use our function to calculate the cost for the Advantage plan
    mutate(Advantage=cost(
        spend=Spend, 
        premiums=parameters$Premiums[2], 
        deductible=parameters$Deductible[2], 
        coinsurance=parameters$Coinsurance[2], 
        oop_maximum=parameters$OOP_Maximum[2]
    )
    ) %>% 
  # compute the difference in costs for each plan
  mutate(Difference=Advantage-Value) %>% 
  # the winner for a given amount of spend is the cheaper plan
  mutate(Winner=if_else(Advantage < Value, 'Advantage', 'Value'))

The results are in the following table, showing every other row to save space. The Spend column is a theoretical amount of spending with a red bar giving a visual sense for the increasing amounts. The Value and Advantage columns are the corresponding overall costs of the plans for the given amount of Spend. The Difference column is the result of Advantage - Value, where positive numbers in blue mean that the Value plan is cheaper while negative numbers in red mean that the Advantage plan is cheaper. This is further indicated in the Winner column, which has the corresponding colors.

Spend Value Advantage Difference Winner
$2,000 $5,360 $7,160 1800 Value
$4,000 $5,760 $7,360 1600 Value
$6,000 $6,160 $7,560 1400 Value
$8,000 $6,560 $7,760 1200 Value
$10,000 $6,960 $7,960 1000 Value
$12,000 $7,360 $8,160 800 Value
$14,000 $7,760 $8,360 600 Value
$16,000 $8,160 $8,560 400 Value
$18,000 $8,560 $8,760 200 Value
$20,000 $8,960 $8,960 0 Value
$22,000 $9,360 $9,160 -200 Advantage
$24,000 $9,760 $9,360 -400 Advantage
$26,000 $10,160 $9,560 -600 Advantage
$28,000 $10,160 $9,760 -400 Advantage
$30,000 $10,160 $9,960 -200 Advantage
$32,000 $10,160 $10,160 0 Value
$34,000 $10,160 $10,360 200 Value
$36,000 $10,160 $10,560 400 Value
$38,000 $10,160 $10,760 600 Value
$40,000 $10,160 $10,960 800 Value
$42,000 $10,160 $11,160 1000 Value
$44,000 $10,160 $11,240 1080 Value
$46,000 $10,160 $11,240 1080 Value
$48,000 $10,160 $11,240 1080 Value
$50,000 $10,160 $11,240 1080 Value
$52,000 $10,160 $11,240 1080 Value
$54,000 $10,160 $11,240 1080 Value
$56,000 $10,160 $11,240 1080 Value
$58,000 $10,160 $11,240 1080 Value
$60,000 $10,160 $11,240 1080 Value
$62,000 $10,160 $11,240 1080 Value
$64,000 $10,160 $11,240 1080 Value
$66,000 $10,160 $11,240 1080 Value
$68,000 $10,160 $11,240 1080 Value
$70,000 $10,160 $11,240 1080 Value

Of course, plotting often makes it easier to see what is happening.

spending %>% 
    select(Spend, Value, Advantage) %>% 
    # put the plot in longer format so ggplot can set the colors
    gather(key=Plan, value=Cost, -Spend) %>% 
    ggplot(aes(x=Spend, y=Cost, color=Plan)) + 
        geom_line(size=1) + 
        scale_x_continuous(labels=scales::dollar) + 
        scale_y_continuous(labels=scales::dollar) + 
        scale_color_brewer(type='qual', palette='Set1') + 
        labs(x='Healthcare Spending', y='Out-of-Pocket Costs') + 
        theme(
          legend.position='top',
          axis.title=element_text(face='bold')
        )
Plot of out-of-pocket costs as a function of actual healthcare spending. Lower is better.

It looks like there is only a small window where the Advantage plan is cheaper than the Value plan. This will be more obvious if we draw a plot of the difference in cost.

spending %>% 
    ggplot(aes(x=Spend, y=Difference, color=Winner, group=1)) + 
        geom_hline(yintercept=0, linetype=2, color='grey50') + 
        geom_line(size=1) + 
        scale_x_continuous(labels=scales::dollar) + 
        scale_y_continuous(labels=scales::dollar) + 
        labs(
          x='Healthcare Spending', 
          y='Difference in Out-of-Pocket Costs Between the Two Plans'
        ) + 
        scale_color_brewer(type='qual', palette='Set1') + 
        theme(
          legend.position='top',
          axis.title=element_text(face='bold')
        )
Plot of the difference in overall cost between the Value and Advantage plans. A difference greater than zero means the Value plan is cheaper and a difference below zero means the Advantage plan is cheaper.

To calculate the exact cutoff points where one plan becomes cheaper than the other plan we have to solve for where the two curves intersect. Due to the out-of-pocket maximums the curves are non-linear so we need to consider four cases.

  1. The spending exceeds the point of maximum out-of-pocket spend for both plans
  2. The spending does not exceed the point of maximum out-of-pocket spend for either plan
  3. The spending exceeds the point of maximum out-of-pocket spend for the Value plan but not the Advantage plan
  4. The spending exceeds the point of maximum out-of-pocket spend for the Advantage plan but not the Value plan

When the spending exceeds the point of maximum out-of-pocket spend for both plans the curves are parallel, so there is no crossover point.

When the spending does not exceed the point of maximum out-of-pocket spend for either plan we set the cost calculations (not including the out-of-pocket maximum) for each plan equal to each other and solve for the amount of spend that creates the equality.

To keep the equations smaller we use variables such as \(d_v\) for the Value plan deductible, \(c_a\) for the Advantage plan coinsurance and \(oop_v\) for the out-of-pocket maximum for the Value plan.

\[ d_v + c_v(S - d_v) + p_v = d_a + c_a(S - d_a) + p_a \\ c_v(S - d_v) - c_a(S - d_a) = d_a - d_v + p_a - p_v \\ c_vS - c_vd_v - c_aS + c_ad_a = d_a - d_v + p_a - p_v \\ S(c_v - c_a) = d_a - c_ad_a - d_v + c_vd_v + p_a - p_v \\ S(c_v - c_a) = d_a(1 - c_a) - d_v(1 - c_v) + p_a - p_v \\ S = \frac{d_a(1 - c_a) - d_v(1 - c_v) + p_a - p_v}{(c_v - c_a)} \]

When the spending exceeds the point of maximum out-of-pocket spend for the Value plan but not the Advantage plan, we set the out-of-pocket maximum plus premiums for the Value plan equal to the cost calculation of the Advantage plan.

\[ oop_v + p_v = d_a + c_a(S - d_a) + p_a \\ d_a + c_a(S - d_a) + p_a = oop_v + p_v \\ c_aS - c_ad_a = oop_v + p_v - p_a - d_a \\ c_aS = oop_v + p_v - p_a + c_ad_a - d_a \\ S = \frac{oop_v + p_v - p_a + c_ad_a - d_a}{c_a} \]

When the spending exceeds the point of maximum out-of-pocket spend for the Advantage plan but not the Value plan, the solution is just the opposite of the previous equation.

\[ oop_a + p_a = d_v + c_v(S - d_v) + p_v \\ d_v + c_v(S - d_v) + p_v = oop_a + p_a \\ c_vS - c_vd_v = oop_a + p_a - p_v - d_v \\ c_vS = oop_a + p_a - p_v + c_vd_v - d_v \\ S = \frac{oop_a + p_a - p_v + c_vd_v - d_v}{c_v} \]

As an R function it looks like this.

#' @title calculate_crossover_points
#' @description Given healthcare parameters for two plans, calculate when one plan becomes more expensive than the other.
#' @details Calculates the potential crossover points for different scenarios and returns the ones that are true crossovers.
#' @author Jared P. Lander
#' @param premiums_1 The annual premiums for plan 1
#' @param deductible_1 The deductible for plan 1
#' @param coinsurance_1 The coinsurance percentage for spend beyond the deductible for plan 1
#' @param oop_maximum_1 The maximum amount of money (not including premiums) that the insured will pay under plan 1
#' @param premiums_2 The annual premiums for plan 2
#' @param deductible_2 The deductible for plan 2
#' @param coinsurance_2 The coinsurance percentage for spend beyond the deductible for plan 2
#' @param oop_maximum_2 The maximum amount of money (not including premiums) that the insured will pay under plan 2
#' @return The amount of spend at which point one plan becomes more expensive than the other
#' @examples
#' calculate_crossover_points(
#' 4160, 1000, 0.2, 6000,
#' 6240, 800, 0.1, 5000
#' )
#'
calculate_crossover_points <- function(
    premiums_1, deductible_1, coinsurance_1, oop_maximum_1,
    premiums_2, deductible_2, coinsurance_2, oop_maximum_2
)
{
    # calculate the crossover before either has maxed out
    neither_maxed_out <- (premiums_2 - premiums_1 + 
                              deductible_2*(1 - coinsurance_2) - 
                              deductible_1*(1 - coinsurance_1)) /
        (coinsurance_1 - coinsurance_2)
    
    # calculate the crossover when one plan has maxed out but the other has not
    one_maxed_out <- (oop_maximum_1 + 
                          premiums_1 - premiums_2 +
                          coinsurance_2*deductible_2 -
                          deductible_2) /
        coinsurance_2
    
    # calculate the crossover for the reverse
    other_maxed_out <- (oop_maximum_2 + 
                            premiums_2 - premiums_1 +
                            coinsurance_1*deductible_1 -
                            deductible_1) /
        coinsurance_1
    
    # these are all possible points where the curves cross
    all_roots <- c(neither_maxed_out, one_maxed_out, other_maxed_out)
    
    # now calculate the difference between the two plans to ensure that these are true crossover points
    all_differences <- cost(all_roots, premiums_1, deductible_1, coinsurance_1, oop_maximum_1) - 
        cost(all_roots, premiums_2, deductible_2, coinsurance_2, oop_maximum_2)
    
    # only when the difference between plans is 0 are the curves truly crossing
    all_roots[all_differences == 0]
}

We then call the function with the parameters for both plans we are considering.

crossovers <- calculate_crossover_points(
    parameters$Premiums[1], parameters$Deductible[1], parameters$Coinsurance[1], parameters$OOP_Maximum[1],
    parameters$Premiums[2], parameters$Deductible[2], parameters$Coinsurance[2], parameters$OOP_Maximum[2]
)

crossovers
## [1] 20000 32000

We see that the Advantage plan is only cheaper than the Value plan when spending between $20,000 and $32,000.
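As a sanity check (a small sketch reusing the cost() function and the parameters tibble defined above), we can confirm that both plans cost exactly the same at each crossover point. Both come to $8,960 at $20,000 of spend and $10,160 at $32,000, matching the corresponding rows of the table above.

# cost of the Value plan at each crossover point
cost(crossovers, parameters$Premiums[1], parameters$Deductible[1], parameters$Coinsurance[1], parameters$OOP_Maximum[1])
## [1]  8960 10160
# cost of the Advantage plan at each crossover point
cost(crossovers, parameters$Premiums[2], parameters$Deductible[2], parameters$Coinsurance[2], parameters$OOP_Maximum[2])
## [1]  8960 10160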

The next question is will our healthcare spending fall in that narrow band between $20,000 and $32,000 where the Advantage plan is the cheaper option?

Probability of Spending

This part gets tricky. I’d like to figure out the probability of spending between $20,000 and $32,000. Unfortunately, it is not easy to find healthcare spending data due to the opaque healthcare system, so I am going to make a number of assumptions. These assumptions will likely violate a few statistical principles, but they are better than nothing.

Assumptions and calculations:

  • Healthcare spending follows a log-normal distribution
  • We will work with New York State data which is possibly different than New York City data
  • We know the mean for New York spending in 2014
  • We will use the accompanying annual growth rate to estimate mean spending in 2019
  • We have the national standard deviation for spending in 2009
  • In order to figure out the standard deviation for New York, we calculate how different the New York mean is from the national mean as a multiple, then multiply the national standard deviation by that number to approximate the New York standard deviation in 2009
  • We use the growth rate from before to estimate the New York standard deviation in 2019

First, we calculate the mean. The Centers for Medicare & Medicaid Services has data on total and per capita medical expenditures by state from 1991 to 2014 and includes the average annual percentage growth. Since the data are bundled in a zip with other files, I posted them on my site for easy access.

spend_data_url <- 'https://jaredlander.com/data/healthcare_spending_per_capita_1991_2014.csv'
health_spend <- read_csv(spend_data_url)

We then take just New York spending for 2014 and multiply it by the corresponding growth rate.

ny_spend <- health_spend %>% 
  # get just New York
  filter(State_Name == 'New York') %>% 
  # this row holds overall spending information
  filter(Item == 'Personal Health Care ($)') %>% 
  # we only need a few columns
  select(Y2014, Growth=Average_Annual_Percent_Growth) %>% 
  # we have to calculate the spending for 2019 by accounting for growth
  # after converting it to a percentage
  mutate(Y2019=Y2014*(1 + (Growth/100))^5)

ny_spend
Y2014 Growth Y2019
9778 5 12479.48

The standard deviation is trickier. The best I could find was the standard deviation on the national level in 2009. In 2013 the Centers for Medicare & Medicaid Services wrote in Volume 3, Number 4 of Medicare & Medicaid Research Review an article titled Modeling Per Capita State Health Expenditure Variation: State-Level Characteristics Matter. Exhibit 2 shows that the standard deviation of healthcare spending was $1,241 for the entire country in 2009. We need to estimate the New York standard deviation from this and then account for growth into 2019.

Next, we figure out the difference between the New York State spending mean and the national mean as a multiple.

nation_spend <- health_spend %>% 
  filter(Item == 'Personal Health Care ($)') %>% 
  filter(Region_Name == 'United States') %>% 
  pull(Y2009)

ny_multiple <- ny_spend$Y2014/nation_spend

ny_multiple
## [1] 1.418746

We see that the New York average is 1.4187464 times the national average. So we multiply the national standard deviation from 2009 by this amount to estimate the New York State standard deviation and assume the same annual growth rate as the mean. Recall that scaling a random variable by a constant scales its standard deviation by the same constant.

\[ \begin{align} \text{var}(x*c) &= c^2*\text{var}(x) \\ \text{sd}(x*c) &= c*\text{sd}(x) \end{align} \]

ny_spend <- ny_spend %>% 
  mutate(SD2019=1241*ny_multiple*(1 + (Growth/100))^10)

ny_spend
Y2014 Growth Y2019 SD2019
9778 5 12479.48 2867.937

My original assumption was that spending would follow a normal distribution, but New York’s resident agricultural economist, JD Long, suggested that the spending distribution would have a floor at zero (a person cannot spend a negative amount) and a long right tail (there will be many people with lower levels of spending and a few people with very high levels of spending), so a log-normal distribution seems more appropriate.

\[ \text{spending} \sim \text{lognormal}(\text{log}(12479), \text{log}(2868)^2) \]

Visualized it looks like this.

draws <- tibble(
  Value=rlnorm(
    n=1200, 
    meanlog=log(ny_spend$Y2019), 
    sdlog=log(ny_spend$SD2019)
  )
)

ggplot(draws, aes(x=Value)) + geom_density() + xlim(0, 75000)
The log-normal distribution has a long right tail.

We can see that there is a very long right tail which means there are many low values and few high values.

Then the probability of spending between $20,000 and $32,000 can be calculated with plnorm().

plnorm(crossovers[2], meanlog=log(ny_spend$Y2019), sdlog=log(ny_spend$SD2019)) - 
  plnorm(crossovers[1], meanlog=log(ny_spend$Y2019), sdlog=log(ny_spend$SD2019))
## [1] 0.02345586

So we only have a 2.35% probability of our spending falling in the band where the Advantage plan is more cost effective, meaning there is a 97.65% probability that the Value plan will cost less over the course of a year.

We can also calculate the expected cost under each plan. We do this by first calculating the probability of spending each (thousand) dollar amount (since the log-normal is a continuous distribution this is an estimated probability). We multiply each of those probabilities by the corresponding cost under each plan. Since the distribution is log-normal we need to exponentiate the resulting number. The spending grid is in increments of $1,000, so we multiply by 1,000 to put the result back on the dollar scale. Mathematically it looks like this.

\[ \mathbb{E}_{\text{Value}} \left[ \text{cost} \right] = 1000*\text{exp} \left\{ \sum p(\text{spend})*\text{cost}_{\text{Value}} \right\} \\ \mathbb{E}_{\text{Advantage}} \left[ \text{cost} \right] = 1000*\text{exp} \left\{ \sum p(\text{spend})*\text{cost}_{\text{Advantage}} \right\} \]

The following code calculates the expected cost for each plan.

spending %>% 
  # calculate the point-wise estimated probabilities of the healthcare spending
  # based on a log-normal distribution with the appropriate mean and standard deviation
  mutate(
    SpendProbability=dlnorm(
      Spend, 
      meanlog=log(ny_spend$Y2019), 
      sdlog=log(ny_spend$SD2019)
    )
  ) %>% 
  # compute the expected cost for each plan
  # and the difference between them
  summarize(
    ValueExpectedCost=sum(Value*SpendProbability),
    AdvantageExpectedCost=sum(Advantage*SpendProbability),
    ExpectedDifference=sum(Difference*SpendProbability)
  ) %>% 
  # exponentiate the numbers so they are on the original scale
  mutate_all(exp) %>% 
  # the spending grid is in increments of 1000
  # so multiply by 1000 to get the results on the dollar scale
  mutate_all(function(x) x * 1000)
ValueExpectedCost AdvantageExpectedCost ExpectedDifference
5422.768 7179.485 1323.952

This shows that overall the Value plan is cheaper by about $1,324 on average.

Conclusion

We see that there is a very small window of healthcare spending where the Advantage plan would be cheaper, and at most it would be about $600 cheaper than the Value plan. Further, the probability of falling in that small window of savings is just 2.35%.

So unless our spending will be between $20,000 and $32,000, which it likely will not be, it is a better idea to choose the Value plan.

Since the Value plan is so likely to be cheaper than the Advantage plan I wondered who would pick the Advantage plan. Economist Jon Hersh invokes behavioral economics to explain why people may select the Advantage plan. Some costs in the Advantage plan, such as the deductible, coinsurance and out-of-pocket maximum, are lower than in the Value plan. People see that under certain circumstances the Advantage plan would save them money and are enticed by that, not realizing how unlikely that is. So they are hedging against a low-probability situation. (A consideration I have not accounted for is family size. The number of members in a family can have a big impact on the overall spend and whether or not it falls into the narrow band where the Advantage plan is cheaper.)

In the end, the Value plan is very likely going to be cheaper than the Advantage plan.

Try it at Home

I created a Shiny app to allow users to plug in the numbers for their own plans. It is rudimentary, but it gives a sense for the relative costs of different plans.

Thanks

A big thanks to Jon Hersh, JD Long, Kaz Sakamoto, Rebecca Martin and Adam Hogan for reviewing this post.


Data scientists and R enthusiasts gathered for the 5th annual New York R Conference held on May 9th-11th. In front of a crowd of more than 300 attendees, 24 speakers gave presentations on topics ranging from deep learning and building packages in R to football and hockey analytics.

Speakers included: Andrew Gelman, Emily Robinson, Max Kuhn, Dan Chen, Jared Lander, Namita Nandakumar, Mike Band, Soumya Kalra, Brooke Watson, David Madigan, Jacqueline Nolis, Heather Nolis, Gabriela Hempfling, Wes McKinney, Noam Ross, Jim Savage, Ludmila Janda, Emily Dodwell, Michelle Gill, Krista Watts, Elizabeth Sweeney, Adam Chekroud, Amanda Dobbyn and Letisha Smith.

This year marked the ten-year anniversary of the New York Open Statistical Programming Meetup. It has been incredible to see the growth of the meetup over the years. We now have over 10,000 members around the world!

Members Over Time

Let’s take a look at some of the highlights from the conference:

Jonah Gabry Kicked Off “R” Week at the New York Open Statistical Programming Meetup with a Talk on Using Stan in R

Jonah Gabry kicking off R Week with a talk about the Stan ecosystem

Jonah Gabry from the Stan Development Team kicked off “R” week with a talk on making Bayes easier in the R ecosystem. Jonah went over the packages (rstanarm, rstantools, bayesplot and loo) which emulate other R model-fitting functions, unify function naming across Stan-based R packages, and provide plotting functions built on ggplot objects.
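As a rough illustration of what that emulation looks like (this is not code from the talk, just a minimal sketch using rstanarm and the built-in mtcars data), fitting a Bayesian regression mirrors the familiar glm() interface.

library(rstanarm)
# a Bayesian linear regression with the same formula/data interface as glm()
fit <- stan_glm(mpg ~ wt + hp, data=mtcars, family=gaussian())
# posterior summaries for the fitted model
summary(fit)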

50 Conference Attendees Participated in Pre-Conference Workshops on Thursday before the Conference

People learning about machine learning from Max Kuhn during the pre-conference workshops

On the Thursday before the two-day conference, more than 50 conference attendees arrived at Work-Bench a day early for a full day of workshops. This was the first year of the R Conference Workshop Series. Max Kuhn, Dan Chen, Elizabeth Sweeney and Kaz Sakamoto each led a workshop which covered the following topics:

  • Machine Learning with Caret (Max Kuhn)
  • Git for Data Science (Dan Chen)
  • Introduction to Survival Analysis (Elizabeth Sweeney)
  • Geospatial Statistics and Mapping in R (Kaz Sakamoto)

The Growth of R-Ladies Summed Up in Three Pictures…


We are so excited to see the growth of the R-Ladies community and we appreciate their support for the NY R Conference over the years. Congratulations ladies!

Dr. Andrew Gelman Delivers Keynote Speech on the Fallacy of P-Values and Thinking like a Statistician—All Without Slides

Andrew Gelman wowing the crowd as usual

There wasn’t a soul in the crowd who wasn’t hanging on every word from Columbia professor Dr. Andrew Gelman. The only speaker with a 40-minute time slot, and the only speaker to not use slides, Dr. Gelman talked about life as a statistician, warned of the perils of p-values and stressed the importance of simulation—before data collection—to improve our understanding of possible real-life scenarios. “Only through simulating fake data, can you really have statistical confidence about whatever performance metric you’re aiming for,” Gelman noted.

While we try not to pick a favorite speaker, Dr. Gelman runs away with that title every time he comes to speak at the New York R Conference.

Jacqueline and Heather Nolis Taught Us to Not to Be Afraid of Deep Learning and Model Deployment in Production

Jacqueline Nolis showing how fun and easy deep learning can be

The final talk on day one was perhaps the most entertaining and insightful of the weekend. Jacqueline Nolis taught us how developing a deep learning model is easier than we thought and how humor can help us understand a complex idea in a simple form. Our top five favorite neural network-generated pet names: Dia, Spok, Jori, Lule, and Timuse!

On Saturday morning, Heather Nolis showed us how we can deploy the model into production. Heather walked through the steps involved in preparing an R model for production using containers (Docker) and container orchestration (Kubernetes) to share models throughout an organization or with the public. How can you put a model into production without your laptop running 24/7? By running the code safely on a server in the cloud!

Emily Robinson and Honey Berk Win Headphones for Most Tweets During the Conference

Emily Robinson winning the tweeting grand prize

If you’re not following Emily Robinson (@robinson_es) and Honey Berk (@honeyberk), you’re missing out! Emily and Honey led all conference attendees in Twitter mentions according to our Twitter scorekeeper Malorie Hughes (@data_all_day). Because of Emily and Honey’s presence on Twitter, those who were unable to attend the conference were able to follow along with all of our incredible speakers throughout the two-day event.

Tweet stats dashboard created by Malorie Hughes

Jared Lander Debuts New-Born R Package Hex Sticker T-Shirts: Congratulations to Jared and Rebecca on the Birth of their Son, Lev

Baby themed hex sticker shirts designed by Vivian Peng

During my talk I debuted a custom R package hex sticker t-shirt with my wife Rebecca and son Lev. We R a very nerdy family.

Looking Forward to 2020

If you attended the 2019 New York R Conference, we hope you had an incredible experience. If you did not attend the conference, we hope to see you next year!


Baby Hex Stickers

Members Over Time

Ten years ago Josh Reich held the first ever New York Open Statistical Programming Meetup at Union Square Ventures. Back then it was called the New York R Meetup and 21 people RSVP’d. I discovered the Meetup three months later thanks to Andrew Gelman’s blog, and by then the RSVP count had doubled and Drew Conway had become a co-organizer. The experience was so much fun and I learned a lot of good stuff. I remember learning about the head() and tail() functions and wishing I had learned them in grad school.

Early Meetup
The crowd at an early meetup fit in a small room at Columbia with room to spare.

I started attending regularly and pretty soon Drew decided to serve pizza which later led to years of pizza data. He also designed a logo for the NYC Data Mafia, which made for a great t-shirt that we still sell. One time, a number of us were talking and realized we were all answering each other’s questions on StackOverflow. Our community was growing both in person and online. I fell in love with the group because it was a great place to learn and hang out with smart, welcoming people.

During the first two years our hosts included NYU, Columbia, AOL and a handful of others. At this time there were about 1,800 members and Drew, as the sole organizer, was ready to focus on other parts of his life, so he asked Wes McKinney and me to take over as organizers. This was after Drew renamed the group the Open Statistical Programming Meetup so as to include other open source languages like Python, Julia, Go and SQL. I was incredibly thrilled to organize this group which meant so much to me.

Over the next eight years our numbers swelled to almost 11,000 with members and speakers coming from all over the world. We have held the Meetup at places such as eBay, AT&T, iHeartRadio, Work-Bench, Knewton, Twitter, New York Presbyterian, Rise New York and Google. The most popular event welcomed nearly 400 people when Hadley Wickham spoke in September of 2015, the only time the Meetup met on a Friday.

Hadley, Becky and Jared
The Meetup’s most popular night featured Hadley Wickham and coincided with my fifth date with Rebecca Martin, my future wife.

That night was also my fifth date with Rebecca Martin. We originally met during Michael Kane’s talk about PubMed then reconnected about a year later. We went on to get married and have a kid together. The New York Times used the nerdiest closing line ever for our wedding announcement: “The couple met in New York in May 2014 at a meet-up about statistical programming organized by the groom.”

Baby Hex Stickers
Rebecca and I recently added a new R programmer to our family and celebrated with baby-themed hex sticker shirts.

The Meetup has grown not only in numbers but in reach as well. There’s a website hosting all of the presentations, we livestream the Meetups and people from all over the world chat in our Slack team. Our live events include an ongoing workshop series and conferences in New York and Washington DC, which just hit their fifth anniversary, all for building and supporting the community and open source software.

NY R Conference
The crowd at the New York R Conference.

These past ten years have been a collection of amazing experiences for me where I got to learn from some of the world’s best experts and develop lasting relationships with great people. This community means so much to me and I very much look forward to its continued growth over the next decade.


Pi Cake 2019

On Pi Day this year I was giving a keynote talk at DataFest in Scotland, so we celebrated Pi Day a week later, on the 21st. While it wasn’t the exact date, there’s never a bad time to eat pizza and Pi Cake.

This was the tenth Pi Cake, and it’s pretty hard to beat the Einstein design on last year’s Pi Cake, so Empire Cakes created a cake for us with the actual definition of pi: the ratio of a circle’s circumference to its diameter.

For pizza we went to the new Lombardi’s in Chelsea. They use an amazing electric oven instead of coal, so if you look closely you can tell the difference, but the pizza was still great and the decor was fantastic. 

And now Pi Cake throughout the years:


After four sold-out years in New York City, the R Conference made its debut in Washington DC to a sold-out crowd of data scientists at the Ronald Reagan Building on November 8th & 9th. Our speakers shared presentations on a variety of R-related topics.

A big thank you to our speakers Max Kuhn, Emily Robinson, Mike Powell, Mara Averick, Max Richman, Stephanie Hicks, Michael Garris, Kelly O’Briant, David Smith, Anna Kirchner, Roger Peng, Marck Vaisman, Soumya Kalra, Jonathan Hersh, Vivian Peng, Dan Chen, Catherine Zhou, Jim Klucar, Lizzy Huang, Refael Lav, Ami Gates, Abhijit Dasgupta, Angela Li  and Tommy Jones.

Some of the amazing speakers

Some highlights from the conference:

R Superstars Mara Averick, Roger Peng and Emily Robinson

Mara Averick, Roger Peng and Emily Robinson

A hallmark of our R conferences is that the speakers hang out with all the attendees and these three were crowd favorites.

Michael Powell Brings R to the aRmy

Major Michael Powell

Major Michael Powell describes how R has brought efficiency to the Army Intelligence and Security Command by getting analysts out of Excel and into the Tidyverse. “Let me turn those 8 hours into 8 seconds for you,” says Powell.

Max Kuhn Explains the Applications of Equivocals to Apply Levels of Certainty to Predictions

Max Kuhn

After autographing his book, Applied Predictive Modeling, for a lucky attendee, Max Kuhn explains how Equivocals can be applied to individual predictions in order to avoid reporting predictions when there is significant uncertainty.

NYR and DCR Speaker Emily Robinson Getting an NYR Hoodie for her Awesome Tweeting

Emily Robinson

Emily Robinson tweeted the most at the 2018 NYR conference, winning her a WASD mechanical keyboard and at DCR she came in second so we gave her a limited edition NYR hoodie.

Max Richman Shows How SQL and R can Co-Exist

Max Richman

Max Richman, wearing the same shirt he wore when he spoke at the first NYR, shows parallels between dplyr and SQL.

Michael Garris Tells the Story of the MNIST Dataset

Michael Garris

Michael Garris was a member of the team that built the original MNIST dataset, which set the standard for handwriting image classification in the early 1990s. This talk may have been the first time the origin story was ever told.

R Stats Luminary Roger Peng Explains Relationship Between Air Pollution and Public Health

Roger Peng

Roger Peng shows us how air pollution levels have fallen over the past 50 years, resulting in dramatic improvements in air quality and health (with help from R).

Kelly O’Briant Combining R with Serverless Computing

Kelly O'Briant

Kelly O’Briant demonstrates how to easily deploy R projects on Google Compute Engine and promotes the new #radmins hashtag.

Hot Dog vs Not Hot Dog by David Smith (Inspired by Jian-Yang from HBO’s Silicon Valley)

David Smith

David Smith, one of the original R users, shows how to recreate HBO’s Silicon Valley’s Not Hot Dog app using R and Azure.

Jon Hersh Describes How to Push for Data Science Within Your Organization

Jon Hersh

Jon Hersh discusses the challenges, and solutions, of getting organizations to embrace data science.

Vivian Peng and the Importance of Data Storytelling

Vivian Peng

Vivian Peng asks the question, how do we protect the integrity of our data analysis when it’s published for the world to see?

Dan Chen Signs His Book for David Smith

Dan Chen and David Smith

Dan Chen autographing a copy of his book, Pandas for Everyone, for David Smith. Now David Smith has to sign his book, An Introduction to R, for Dan.

Malorie Hughes Analyzing Tweets

Malorie Hughes

On the first day I challenged the audience to analyze the tweets from the conference and Malorie Hughes, a data scientist with NPR, designed a Twitter analytics dashboard to track the attendee with the most tweets using the hashtag #rstatsdc. Seth Wenchel won a WASD keyboard for the best tweeting, and we presented Malorie with a DCR speaker mug.

Strong Showing from the #RLadies!

The R-Ladies

The #rladies group is growing year after year and it is great seeing them in force at NYR and DCR!

Packages

Matthew Hendrickson, a DCR attendee, posted on twitter every package mentioned during the two-day conference: tidyverse, tidycensus, leaflet, leaflet.extras, funneljoin, glmnet, xgboost, rstan, rstanarm, LowRankQP, dplyr, coefplot, bayesplot, keras, tensorflow, lars, magrittr, purrr, rsample, useful, knitr, rmarkdown, ggplot2, ggiraph, ggrepel, ggraph, ggthemes, gganimate, ggmap, plotROC, ggridges, gtrendsr, tlnise, tm, Bioconductor, plyranges, sf, tmap, textmineR, tidytext, gmailr, rtweet, shiny, httr, parsnip, probably, plumber, reprex, crosstalk, arules and arulesviz.

Data Community DC

Data Community DC

A special thanks to the Data Community DC for helping us make the DC R Conference an incredible experience.

Videos

The videos for the conference will be posted in the coming weeks to dc.rstats.ai.

See You Next Year

Looking forward to more great conferences at next year’s NYR and DCR!


Getting People Started

A large part of my work is teaching R: for private clients, at Columbia Business School, at conferences and in public workshops I facilitate for others.

A common theme is that getting everyone set up on their individual computers is very difficult. No matter how many instructions I provide, there are always a good number of people without a proper environment. This can mean not using RStudio projects, not having the right packages installed, not downloading the data and sometimes not even installing R.

Solution

After many experiments I finally came upon a solution. For every class I teach I now create a skeleton project hosted on GitHub with instructions for setup.

The instructions (in the README) consist of three blocks of code.

  1. Package installation
  2. Copying the project structure from the repo (no git required)
  3. Downloading data

All the user has to do is copy and paste these three blocks of code into the R console and they have the exact same environment as the instructor and other students.

# 1. install the packages needed for the class
packages <- c( 
    'coefplot', 
    'rprojroot', 
    'tidyverse', 
    'usethis'
)
install.packages(packages)

# 2. copy the project structure from the repo (no git required)
newProject <- usethis::use_course('https://github.com/jaredlander/WorkshopExampleRepo/archive/master.zip')

# 3. download the data used in class
source('prep/DownloadData.r')

Using this process, 95% of my students are prepared for class.

The inspiration for this idea came from a fun coffee with Hadley Wickham and Jenny Bryan during a conference in New Zealand and the implementation is made possible thanks to the usethis package.

Automating the Setup

Now that I found a good way to get students started, I wanted to make it easier for me to set up the repo. So I created an R package called RepoGenerator and put it on CRAN.

The first step to using the package is to create a GitHub Personal Access Token (instructions are in the README). Then you build a data.frame listing datasets you want the students to download. The data.frame needs at least the following three columns.

  • Local: The name, not path, the file should have on disk
  • Remote: The URL where the data files are stored online
  • Mode: The mode needed to write the file to disk, ‘w’ for regular text files, ‘wb’ for binary files such as Excel or rds files

An example data.frame is available in the RepoGenerator package.

data(datafiles, package='RepoGenerator')
datafiles[1:6, c('Local', 'Remote', 'Mode')]
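If you are building the data.frame yourself, a minimal sketch looks like the following (the file names and URLs here are placeholders, not real datasets).

# a hypothetical data.frame of files for students to download
# the Local, Remote and Mode columns are the ones described above
datafiles <- tibble::tribble(
    ~Local,         ~Remote,                                  ~Mode,
    'example.csv',  'https://example.com/data/example.csv',   'w',
    'example.xlsx', 'https://example.com/data/example.xlsx',  'wb'
)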

After that you define the packages you want your students to use. There can be as few or as many as you want. In addition to any packages you list, rprojroot and usethis are added so that the instructions in the new repo will be certain to work.

packages <- c('caret', 'coefplot','DBI', 'dbplyr', 'doParallel', 'dygraphs', 
              'foreach', 'ggthemes', 'glmnet', 'jsonlite', 'leaflet', 'odbc', 
              'recipes', 'rmarkdown', 'rprojroot', 'RSQLite', 'rvest', 
              'tidyverse', 'threejs', 'usethis', 'UsingR', 'xgboost', 'XML', 
              'xml2')

Now all you need to do is call the createRepo() function.

createRepo(
    # the name to use for the repo and project
    name='WorkshopExampleRepo', 
    # the location on disk to build the project
    path='~/WorkshopExampleRepo',
    # the data.frame listing data files for the user to download
    data=datafiles,
    # vector of packages the user should install
    packages=packages,
    # the GitHub username to create the repo for
    user='jaredlander',
    # the new repo's README has the name of who is organizing the class
    organizer='Lander Analytics',
    # the name of the environment variable storing the GitHub Personal Access Token
    token='MyGitHubPATEnvVar'
)

After this you will have a new repo set up for your users to copy, including instructions.
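One note on the token argument: per the comment in the call above, it is the name of an environment variable that stores the GitHub Personal Access Token, not the token itself, so that variable needs to exist before calling createRepo(). A minimal way to set it for the current session (the token value shown is just a placeholder) looks like this.

# store the GitHub Personal Access Token in the environment variable
# referenced by createRepo(); the value here is a placeholder
Sys.setenv(MyGitHubPATEnvVar='ghp_xxxxxxxxxxxxxxxx')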

That’s All

Reducing setup issues at the start of a training can really improve the experience for everyone and allow you to get straight into teaching.

Please check it out and let me know how it works for you.

 


2018 New York R Conference

The 2018 New York R Conference was the biggest and best yet, both in terms of crowd size and content. The speakers included some of the R community’s best such as Hadley Wickham, David Robinson, Jennifer Hill, Max Kuhn, Andreas Mueller (ok, a little Python), Evelina Gabasova, Sean Taylor and Jeff Ryan. I am proud to say we were almost at gender parity for both attendees and speakers, which is amazing for a tech conference. Brooke Watson even excitedly noted that we had a line for the women’s room.

Particularly gratifying for me was seeing so many of my students speak. Eurry Kim, Dan Chen and Alex Boghosian all gave excellent talks.

Some highlights that stuck out to me are:

Emily Robinson Shows There is More to the Tidyverse than Hadley

The Expanded Tidyverse

Emily Robinson, otherwise known as ERob, gave an excellent talk showing how the Tidyverse is so much more than just Hadley and that there are many people inspired by him to contribute in the Tidy way.

Sean Taylor Forecasted the Future with Prophet

Sean Taylor

Sean Taylor, former New Yorker and unrepentant Eagles fan, demonstrated his powerful R and Python package, Prophet, for forecasting time series data. Facebook open sourced his work so we could all benefit.

OG Data Mafia Founder Drew Conway Popped In

Giving away a data mafia shirt

A lucky fan got an autographed NYC Data Mafia t-shirt from Drew Conway.

David Smith Playing Minecraft Through R

Minecraft in R

David Smith played Minecraft through R, including building objects and moving through the world.

Evelina Gabasova Used Social Network Analysis to Break Down Star Wars

It's a Trap

Evelina Gabasova wowed the audience with her fun talk and detailed analysis of character interaction in Star Wars.

Dusty Turner Represented West Point

Dusty Talking Army Sports

Dusty Turner taught us how the United States Military Academy uses R for both student instruction and evaluation.

Hadley Wickham Delved into the Nitty Gritty of R

Hadley shows off how objects are stored in memory

Hadley Wickham showed us how to get into the internals of R and figure out how to examine objects from a memory perspective.

Jennifer Hill Demonstrated Awesome Machine Learning Techniques for Causal Inference

Jennifer Hill Explaining Causal Inference

Following her sold-out meetup appearance in March, Jennifer continued to push the boundaries of causal inference.

I Made the Authors of Caret and scikit-learn Show That R and Python Can Get Along

Caret and Scikit-learn in one place

While both Andreas and Max gave great individual talks, I made them pose for this peace-making photo.

David Robinson Got the Upper Hand in a Sibling Twitter Duel

DRob Teaching

Given only about 30 minutes notice, David put together an entire slideshow on how to livetweet and how to compete with your sibling.

In the End Emily Robinson Beat Her Brother For Best Tweeting

Emily won the prize for best tweeting

Despite David’s head start, Emily was the best tweeter (as calculated by Max Kuhn and Mara Averick) so she won the WASD Code mechanical keyboard with MX Cherry Clear switches.

Silent Auction of Data Paintings

The Robinson Family bought the Pizza Data painting for me

Thomas Levine made paintings of famous datasets that we auctioned off with the proceeds supporting the R Foundation and the Free Software Foundation. The Robinson family very graciously chipped in and bought the painting of the Pizza Poll data for me! I’m still floored by this and in love with the painting.

Ice Cream Sandwiches

Ice Cream Sandwiches

In addition to bagel and egg sandwiches from Murray’s Bagels, Israeli food from Hummus and Pita Company, avocado toast and coffee from Bluestone Lane Coffee and pizza from Fiore’s, we also had ice cream sandwiches from World’s Best Cookie Dough.

All the Material

To catch up on all the presentations check out Mara Averick’s excellent notes:

Or check out all of Brooke’s drawings, collated by Dan Chen.

Videos and Upcoming Events

The videos will be posted at rstats.nyc in a few weeks for all to enjoy.

There are a number of other events coming up including:

We are already beginning plans for next year’s conference and are working on bringing it to DC as well! Stay tuned for all that and more.

Dan loves his mug


Pi Day 2018

It’s Pi Day, when we celebrate all things round by eating pizza and Pi Cake. This is the ninth year we have celebrated Pi Day and the fourth year in a row we got the Pi Cake from Empire Cakes. This year’s pizza place was Arturo’s on Thompson and Houston. Arturo’s is a great example of old New York pizza with an oven dating to the 1920’s.

In addition to the traditional Pi Symbol atop the cake we added Albert Einstein since today is also his birthday. It seems fitting that we lost one of the world’s other greatest physicists, Stephen Hawking, on the same math holiday.

Pi Cake 2018

The crew has grown quite large from the five of us who celebrated our first pie day almost a decade ago.

Some more pictures from this fun night.

And now Pi Cake throughout the years:


In my last post I discussed using coefplot on glmnet models and in particular discussed a brand new function, coefpath, that uses dygraphs to make an interactive visualization of the coefficient path.

Another new capability for version 1.2.5 of coefplot is the ability to show coefficient plots from xgboost models. Beyond fitting boosted trees and boosted forests, xgboost can also fit a boosted Elastic Net. This makes it a nice alternative to glmnet even though it might not have some of the same user niceties.

To illustrate, we use the same data as our previous post.

First, we load the packages we need and note the version numbers.

# list the packages that we load
# alphabetically for reproducibility
packages <- c('caret', 'coefplot', 'DT', 'xgboost')
# call library on each package
purrr::walk(packages, library, character.only=TRUE)

# some packages we will reference without actually loading
# they are listed here for complete documentation
packagesColon <- c('dplyr', 'dygraphs', 'knitr', 'magrittr', 'purrr', 'tibble', 'useful')
versions <- c(packages, packagesColon) %>% 
    purrr::map(packageVersion) %>% 
    purrr::map_chr(as.character)
packageDF <- tibble::data_frame(Package=c(packages, packagesColon), Version=versions) %>% 
    dplyr::arrange(Package)
knitr::kable(packageDF)
Package Version
caret 6.0.78
coefplot 1.2.6
dplyr 0.7.4
DT 0.2
dygraphs 1.1.1.4
knitr 1.18
magrittr 1.5
purrr 0.2.4
tibble 1.4.2
useful 1.2.3
xgboost 0.6.4

Then, we read the data. The data are available at https://www.jaredlander.com/data/manhattan_Train.rds with the CSV version at data.world. We also get validation data, which is helpful when fitting xgboost models.

manTrain <- readRDS(url('https://www.jaredlander.com/data/manhattan_Train.rds'))
manVal <- readRDS(url('https://www.jaredlander.com/data/manhattan_Validate.rds'))

The data are about New York City land value and have many columns. A sample of the data follows. There’s an odd bug where you have to click on one of the column names for the table to display the actual data.

datatable(manTrain %>% dplyr::sample_n(size=1000), elementId='TrainingSampled',
              rownames=FALSE,
              extensions=c('FixedHeader', 'Scroller'),
              options=list(
                  scroller=TRUE
              ))

While glmnet automatically standardizes the input data, xgboost does not, so we calculate that manually. We use preProcess from caret to compute the mean and standard deviation of each numeric column and then use these later.

preProc <- preProcess(manTrain, method=c('center', 'scale'))

Just like with glmnet, we need to convert our tbl into an X (predictor) matrix and a Y (response) vector. Since we don’t have to worry about multicollinearity with xgboost we do not want to drop the baselines of factors. We also take advantage of sparse matrices since that reduces memory usage and compute, even though this dataset is not that large.

In order to build the matrix and vector we need a formula. This could be built programmatically, but we can just build it ourselves. The response is TotalValue.

valueFormula <- TotalValue ~ FireService + ZoneDist1 + ZoneDist2 +
    Class + LandUse + OwnerType + LotArea + BldgArea + ComArea + ResArea +
    OfficeArea + RetailArea + NumBldgs + NumFloors + UnitsRes + UnitsTotal + 
    LotDepth + LotFront + BldgFront + LotType + HistoricDistrict + Built + 
    Landmark
manX <- useful::build.x(valueFormula, data=predict(preProc, manTrain),
                        # do not drop the baselines of factors
                        contrasts=FALSE,
                        # use a sparse matrix
                        sparse=TRUE)

manY <- useful::build.y(valueFormula, data=manTrain)

manX_val <- useful::build.x(valueFormula, data=predict(preProc, manVal),
                        # do not drop the baselines of factors
                        contrasts=FALSE,
                        # use a sparse matrix
                        sparse=TRUE)

manY_val <- useful::build.y(valueFormula, data=manVal)

There are two functions we can use to fit xgboost models, the eponymous xgboost and xgb.train. When using xgb.train we first store our X and Y matrices in a special xgb.DMatrix object. This is not a necessary step, but makes things a bit cleaner.

manXG <- xgb.DMatrix(data=manX, label=manY)
manXG_val <- xgb.DMatrix(data=manX_val, label=manY_val)

We are now ready to fit a model. All we need to do to fit a linear model instead of a tree is set booster='gblinear' and objective='reg:linear'.

mod1 <- xgb.train(
    # the X and Y training data
    data=manXG,
    # use a linear model
    booster='gblinear',
    # minimize a regression criterion 
    objective='reg:linear',
    # use MAE as a measure of quality
    eval_metric=c('mae'),
    # boost for up to 500 rounds
    nrounds=500,
    # print out the eval_metric for both the train and validation data
    watchlist=list(train=manXG, validate=manXG_val),
    # print eval_metric every 10 rounds
    print_every_n=10,
    # if the validate eval_metric hasn't improved by this many rounds, stop early
    early_stopping_rounds=25,
    # penalty terms for the L2 portion of the Elastic Net
    lambda=10, lambda_bias=10,
    # penalty term for the L1 portion of the Elastic Net
    alpha=900000000,
    # randomly sample rows
    subsample=0.8,
    # randomly sample columns
    colsample_bytree=0.7,
    # set the learning rate for gradient descent
    eta=0.1
)
## [1]  train-mae:1190145.875000    validate-mae:1433464.750000 
## Multiple eval metrics are present. Will use validate_mae for early stopping.
## Will train until validate_mae hasn't improved in 25 rounds.
## 
## [11] train-mae:938069.937500 validate-mae:1257632.000000 
## [21] train-mae:932016.625000 validate-mae:1113554.625000 
## [31] train-mae:931483.500000 validate-mae:1062618.250000 
## [41] train-mae:931146.750000 validate-mae:1054833.625000 
## [51] train-mae:930707.312500 validate-mae:1062881.375000 
## [61] train-mae:930137.375000 validate-mae:1077038.875000 
## Stopping. Best iteration:
## [41] train-mae:931146.750000 validate-mae:1054833.625000

The best fit was arrived at after 41 rounds. We can see how the model did on the train and validate sets using dygraphs.

dygraphs::dygraph(mod1$evaluation_log)

We can now plot the coefficients using coefplot. Since xgboost does not save column names, we specify it with feature_names=colnames(manX). Unlike with glmnet models, there is only one penalty so we do not need to specify a specific penalty to plot.

coefplot(mod1, feature_names=colnames(manX), sort='magnitude')
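Since we already built a validation xgb.DMatrix, we can also score the held-out data (a minimal sketch; predict() on an xgb.Booster accepts the DMatrix directly).

# predictions of TotalValue for the validation set
predsVal <- predict(mod1, newdata=manXG_val)
head(predsVal)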

This is another nice addition to coefplot utilizing the power of xgboost.

Related Posts



Jared Lander is the Chief Data Scientist of Lander Analytics a New York data science firm, Adjunct Professor at Columbia University, Organizer of the New York Open Statistical Programming meetup and the New York and Washington DC R Conferences and author of R for Everyone.

I’m a big fan of the Elastic Net for variable selection and shrinkage and have given numerous talks about it and its implementation, glmnet. In fact, I will even have a DataCamp course about glmnet coming out soon.

As a side note, I used to pronounce it g-l-m-net but after having lunch with one of its creators, Trevor Hastie, I learned it is pronounced glimnet.

coefplot has long supported glmnet via a standard coefficient plot but I recently added some functionality, so let’s take a look. As we go through this, please pardon the htmlwidgets in iframes.

First, we load packages. I am now fond of using the following syntax for loading the packages we will be using.

# list the packages that we load
# alphabetically for reproducibility
packages <- c('coefplot', 'DT', 'glmnet')
# call library on each package
purrr::walk(packages, library, character.only=TRUE)

# some packages we will reference without actually loading
# they are listed here for complete documentation
packagesColon <- c('dplyr', 'knitr', 'magrittr', 'purrr', 'tibble', 'useful')

The versions can then be displayed in a table.

versions <- c(packages, packagesColon) %>% 
    purrr::map(packageVersion) %>% 
    purrr::map_chr(as.character)
packageDF <- tibble::data_frame(Package=c(packages, packagesColon), Version=versions) %>% 
    dplyr::arrange(Package)
knitr::kable(packageDF)
Package Version
coefplot 1.2.5.1
dplyr 0.7.4
DT 0.2
glmnet 2.0.13
knitr 1.18
magrittr 1.5
purrr 0.2.4
tibble 1.4.1
useful 1.2.3

First, we read some data. The data are available at https://www.jaredlander.com/data/manhattan_Train.rds with the CSV version at data.world.

manTrain <- readRDS(url('https://www.jaredlander.com/data/manhattan_Train.rds'))

The data are about New York City land value and have many columns. A sample of the data follows.

datatable(manTrain %>% dplyr::sample_n(size=100), elementId='DataSampled',
              rownames=FALSE,
              extensions=c('FixedHeader', 'Scroller'),
              options=list(
                  scroller=TRUE,
                  scrollY=300
              ))

In order to use glmnet we need to convert our tbl into an X (predictor) matrix and a Y (response) vector. Since we don’t have to worry about multicollinearity with glmnet we do not want to drop the baselines of factors. We also take advantage of sparse matrices since that reduces memory usage and compute, even though this dataset is not that large.

In order to build the matrix and vector we need a formula. This could be built programmatically, but we can just build it ourselves. The response is TotalValue.

valueFormula <- TotalValue ~ FireService + ZoneDist1 + ZoneDist2 +
    Class + LandUse + OwnerType + LotArea + BldgArea + ComArea + ResArea +
    OfficeArea + RetailArea + NumBldgs + NumFloors + UnitsRes + UnitsTotal + 
    LotDepth + LotFront + BldgFront + LotType + HistoricDistrict + Built + 
    Landmark - 1

Notice the - 1 means do not include an intercept since glmnet will do that for us.

manX <- useful::build.x(valueFormula, data=manTrain,
                        # do not drop the baselines of factors
                        contrasts=FALSE,
                        # use a sparse matrix
                        sparse=TRUE)

manY <- useful::build.y(valueFormula, data=manTrain)

We are now ready to fit a model.

mod1 <- glmnet(x=manX, y=manY, family='gaussian')

We can view a coefficient plot for a given value of lambda like this.

coefplot(mod1, lambda=330500, sort='magnitude')

A common plot that is built into the glmnet package is the coefficient path.

plot(mod1, xvar='lambda', label=TRUE)

This plot shows the path the coefficients take as lambda increases. The greater lambda is, the more the coefficients get shrunk toward zero. The problem is, it is hard to disambiguate the lines and the labels are not informative.

Fortunately, coefplot has a new function in Version 1.2.5 called coefpath for making this into an interactive plot using dygraphs.

coefpath(mod1)

While still busy, this plot provides much more functionality: we can hover over lines, zoom in and pan around.

These functions also work with any value for alpha and for cross-validated models fit with cv.glmnet.

mod2 <- cv.glmnet(x=manX, y=manY, family='gaussian', alpha=0.7, nfolds=5)

We plot coefficient plots for both optimal lambdas.

# coefplot for the 1se error lambda
coefplot(mod2, lambda='lambda.1se', sort='magnitude')

# coefplot for the min error lambda
coefplot(mod2, lambda='lambda.min', sort='magnitude')
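Beyond plotting, the underlying coefficient values can be pulled straight from the cross-validated fit (a quick sketch using glmnet's coef() method, where s selects which penalty to use).

# sparse matrix of coefficients at the lambda with minimum cross-validated error
coef(mod2, s='lambda.min')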

The coefficient path is the same as before though the optimal lambdas are noted as dashed vertical lines.

coefpath(mod2)

While coefplot has long been able to plot coefficients from glmnet models, the new coefpath function goes a long way in helping visualize the paths the coefficients take as lambda changes.

Jared Lander is the Chief Data Scientist of Lander Analytics, a New York data science firm, Adjunct Professor at Columbia University, Organizer of the New York Open Statistical Programming Meetup and the New York and Washington DC R Conferences, and author of R for Everyone.