Regression with Categorical Predictors
This set of notes explores linear regression with a single predictor attribute that is categorical rather than continuous. To get started, let's look at some data.
library(tidyverse)
library(Lahman)
library(ggformula)
theme_set(theme_bw(base_size = 18))
career <- Batting %>%
filter(AB > 100) %>%
anti_join(Pitching, by = "playerID") %>%
filter(yearID > 1990) %>%
group_by(playerID, lgID) %>%
summarise(H = sum(H), AB = sum(AB)) %>%
mutate(average = H / AB)
## `summarise()` has grouped output by 'playerID'. You can override using the
## `.groups` argument.
career <- People %>%
as_tibble() %>%
dplyr::select(playerID, nameFirst, nameLast) %>%
unite(name, nameFirst, nameLast, sep = " ") %>%
inner_join(career, by = "playerID") %>%
dplyr::select(-playerID)
head(career)
## # A tibble: 6 × 5
## name lgID H AB average
## <chr> <fct> <int> <int> <dbl>
## 1 Jeff Abbott AL 127 459 0.277
## 2 Kurt Abbott AL 33 123 0.268
## 3 Kurt Abbott NL 455 1780 0.256
## 4 Reggie Abercrombie NL 54 255 0.212
## 5 Brent Abernathy AL 194 767 0.253
## 6 Shawn Abner AL 81 309 0.262
Question
Suppose we are interested in the batting average of baseball players since 1990, that is, the average is:
$$ \text{average} = \frac{\text{number of hits}}{\text{number of at bats}} $$
Let’s first visualize this.
gf_density(~ average, data = career) %>%
gf_labs(x = "Batting Average")
What if we hypothesized that the batting average differs based on the league the players played in?
gf_violin(lgID ~ average, data = career, fill = 'gray80', draw_quantiles = c(0.1, 0.5, 0.9)) %>%
gf_labs(x = "Batting Average",
y = "League")
The distributions seem similar, but what if we wanted to go a step further and estimate a model to explore whether there really are differences? For example, suppose we were interested in:
$$ H_{0}: \mu_{NL} = \mu_{AL} $$
What type of model could we use? What about linear regression?
Linear Regression with Categorical Attributes
Since these notes are happening, you can assume it is possible. But how can a categorical attribute, with categories rather than numbers, be included in a linear regression model?
The answer is that it can't, at least not directly. We need a new numeric representation of the categorical attribute: enter dummy (also called indicator) coding.
Dummy/Indicator Coding
Suppose we use the following logic:
If NL, then give a value of 1, else give a value of 0.
Does this give the same information as before?
| League ID | Dummy League ID |
|---|---|
| AL | 0 |
| NL | 1 |
What would this look like for the actual data?
career <- career %>%
mutate(league_dummy = ifelse(lgID == 'NL', 1, 0))
head(career, n = 10)
## # A tibble: 10 × 6
## name lgID H AB average league_dummy
## <chr> <fct> <int> <int> <dbl> <dbl>
## 1 Jeff Abbott AL 127 459 0.277 0
## 2 Kurt Abbott AL 33 123 0.268 0
## 3 Kurt Abbott NL 455 1780 0.256 1
## 4 Reggie Abercrombie NL 54 255 0.212 1
## 5 Brent Abernathy AL 194 767 0.253 0
## 6 Shawn Abner AL 81 309 0.262 0
## 7 Shawn Abner NL 19 115 0.165 1
## 8 Bobby Abreu AL 858 3061 0.280 0
## 9 Bobby Abreu NL 1602 5373 0.298 1
## 10 Jose Abreu AL 1262 4353 0.290 0
Now that there is a numeric attribute, these can be added into the linear regression model.
average_lm <- lm(average ~ league_dummy, data = career)
broom::tidy(average_lm)
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.253 0.000761 332. 0
## 2 league_dummy 0.00102 0.00107 0.949 0.343
How are these terms interpreted now? The intercept is the mean batting average for the group coded 0 (the AL) and the league_dummy coefficient is the difference in means, NL minus AL. The descriptive statistics confirm this:
df_stats(average ~ league_dummy, data = career, mean, sd, length)
## response league_dummy mean sd length
## 1 average 0 0.2525899 0.02876352 1431
## 2 average 1 0.2536090 0.02879563 1440
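The identity between the regression coefficients and the group means holds exactly, not just approximately. A minimal sketch with made-up numbers (not the Lahman data) shows it:

```r
# Toy two-group data; values are invented for illustration
avg   <- c(0.250, 0.260, 0.270, 0.240, 0.255, 0.265)
dummy <- c(0, 0, 0, 1, 1, 1)   # 1 = one league, 0 = the other

fit <- lm(avg ~ dummy)

coef(fit)[["(Intercept)"]]   # equals mean(avg[dummy == 0])
coef(fit)[["dummy"]]         # equals mean(avg[dummy == 1]) - mean(avg[dummy == 0])
```

With only a 0/1 predictor, least squares has no choice but to run the line through the two group means, which is why the coefficients recover them exactly.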
average_lm2 <- lm(average ~ lgID, data = career)
broom::tidy(average_lm2)
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.253 0.000761 332. 0
## 2 lgIDNL 0.00102 0.00107 0.949 0.343
t.test(average ~ lgID, data = career, var.equal = TRUE)
##
## Two Sample t-test
##
## data: average by lgID
## t = -0.9487, df = 2869, p-value = 0.3429
## alternative hypothesis: true difference in means between group AL and group NL is not equal to 0
## 95 percent confidence interval:
## -0.003125489 0.001087227
## sample estimates:
## mean in group AL mean in group NL
## 0.2525899 0.2536090
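Notice the t statistic and p-value from t.test() match the league_dummy row from the regression: the equal-variance two-sample t-test and the dummy-coded regression are the same model. A sketch with invented numbers checks the equivalence directly:

```r
# The equal-variance t-test and the dummy-coded regression give the same
# t statistic (up to sign) and the same p-value
avg <- c(0.250, 0.260, 0.240, 0.255, 0.245, 0.265)
lg  <- factor(c("AL", "AL", "AL", "NL", "NL", "NL"))

tt  <- t.test(avg ~ lg, var.equal = TRUE)
fit <- summary(lm(avg ~ lg))

abs(tt$statistic)            # same magnitude ...
abs(fit$coefficients[2, 3])  # ... as the slope's t statistic
```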
Values other than 0/1
First, I want to build off of the first part of the notes on regression with categorical predictors. Before generalizing to more than two groups, let’s first explore what happens when values other than 0/1 are used for the categorical attribute. The following three dummy/indicator attributes will be used:
- 1 = NL, 0 = AL
- 1 = NL, 2 = AL
- 100 = NL, 0 = AL
What do you predict will happen in the three separate regressions?
career <- career %>%
mutate(league_dummy = ifelse(lgID == 'NL', 1, 0),
league_dummy_12 = ifelse(lgID == 'NL', 1, 2),
league_dummy_100 = ifelse(lgID == 'NL', 100, 0))
head(career, n = 10)
## # A tibble: 10 × 8
## name lgID H AB average league_dummy league_du…¹ leagu…²
## <chr> <fct> <int> <int> <dbl> <dbl> <dbl> <dbl>
## 1 Jeff Abbott AL 127 459 0.277 0 2 0
## 2 Kurt Abbott AL 33 123 0.268 0 2 0
## 3 Kurt Abbott NL 455 1780 0.256 1 1 100
## 4 Reggie Abercrombie NL 54 255 0.212 1 1 100
## 5 Brent Abernathy AL 194 767 0.253 0 2 0
## 6 Shawn Abner AL 81 309 0.262 0 2 0
## 7 Shawn Abner NL 19 115 0.165 1 1 100
## 8 Bobby Abreu AL 858 3061 0.280 0 2 0
## 9 Bobby Abreu NL 1602 5373 0.298 1 1 100
## 10 Jose Abreu AL 1262 4353 0.290 0 2 0
## # … with abbreviated variable names ¹league_dummy_12, ²league_dummy_100
average_lm <- lm(average ~ league_dummy, data = career)
broom::tidy(average_lm)
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.253 0.000761 332. 0
## 2 league_dummy 0.00102 0.00107 0.949 0.343
average_lm_12 <- lm(average ~ league_dummy_12, data = career)
broom::tidy(average_lm_12)
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.255 0.00170 150. 0
## 2 league_dummy_12 -0.00102 0.00107 -0.949 0.343
average_lm_100 <- lm(average ~ league_dummy_100, data = career)
broom::tidy(average_lm_100)
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.253 0.000761 332. 0
## 2 league_dummy_100 0.0000102 0.0000107 0.949 0.343
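All three codings describe the same two group means, so the model's predictions are identical; only the intercept and slope get shifted or rescaled. A sketch with invented numbers:

```r
# Three recodings of the same two-group attribute
avg  <- c(0.250, 0.260, 0.240, 0.255, 0.245, 0.265)
g01  <- c(0, 0, 0, 1, 1, 1)        # 1 = NL, 0 = AL
g12  <- c(2, 2, 2, 1, 1, 1)        # 1 = NL, 2 = AL
g100 <- c(0, 0, 0, 100, 100, 100)  # 100 = NL, 0 = AL

f01  <- lm(avg ~ g01)
f12  <- lm(avg ~ g12)
f100 <- lm(avg ~ g100)

# Every coding predicts the same two group means
all.equal(fitted(f01), fitted(f12))   # TRUE
all.equal(fitted(f01), fitted(f100))  # TRUE
```

Swapping which group is larger flips the slope's sign, and spreading the codes 100 apart divides the slope by 100, which is exactly the pattern in the three coefficient tables above.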
Before moving to more than 2 groups, any thoughts on how we could run a one-sample t-test using a linear regression? For example, suppose we wanted to explore this null hypothesis:
$$ H_{0}: \mu_{BA} = .2 $$
$$ H_{A}: \mu_{BA} \neq .2 $$
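One approach (a sketch, assuming we shift the outcome ourselves): fit an intercept-only model to average minus .2. The default test of the intercept against 0 is then the one-sample t-test of the mean against .2. With invented numbers:

```r
# Intercept-only regression on the shifted outcome; the intercept's t-test
# against 0 is the one-sample t-test of H0: mu = .2
avg <- c(0.250, 0.260, 0.240, 0.255, 0.270)
fit <- lm(I(avg - .2) ~ 1)

summary(fit)$coefficients   # same t and p as t.test(avg, mu = .2)
```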
Generalize to more than 2 groups
Using regression with a categorical attribute that has more than 2 groups is possible and is an extension of the 2-group model shown above. First, let's think about how we could represent the categories as numeric attributes. Suppose we had the following 4 categories of baseball players.
| Position |
|---|
| Outfield |
| Infield |
| Catcher |
| Designated Hitter |
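With four categories, three 0/1 columns are enough; the left-out category becomes the reference. R's model.matrix() shows the coding it would build automatically for such a factor (a sketch with just the four position labels):

```r
# model.matrix() expands a factor into dummy columns, using the first level
# (alphabetically, "Catcher" here) as the reference (all-zero) category
position <- factor(c("Outfield", "Infield", "Catcher", "Designated Hitter"))
model.matrix(~ position)
```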
library(GeomMLBStadiums)
ggplot() +
geom_mlb_stadium(stadium_segments = "all") +
facet_wrap(~team) +
coord_fixed() +
theme_void()
library(tidyverse)
library(Lahman)
library(ggformula)
theme_set(theme_bw(base_size = 18))
career <- Batting %>%
filter(AB > 100) %>%
anti_join(Pitching, by = "playerID") %>%
filter(yearID > 1990) %>%
group_by(playerID, lgID) %>%
summarise(H = sum(H), AB = sum(AB)) %>%
mutate(average = H / AB)
## `summarise()` has grouped output by 'playerID'. You can override using the
## `.groups` argument.
career <- Appearances %>%
filter(yearID > 1990) %>%
select(-GS, -G_ph, -G_pr, -G_batting, -G_defense, -G_p, -G_lf, -G_cf, -G_rf) %>%
rowwise() %>%
mutate(g_inf = sum(c_across(G_1b:G_ss))) %>%
select(-G_1b, -G_2b, -G_3b, -G_ss) %>%
group_by(playerID, lgID) %>%
summarise(catcher = sum(G_c),
outfield = sum(G_of),
dh = sum(G_dh),
infield = sum(g_inf),
total_games = sum(G_all)) %>%
pivot_longer(catcher:infield,
names_to = "position") %>%
filter(value > 0) %>%
group_by(playerID, lgID) %>%
slice_max(value) %>%
select(playerID, lgID, position) %>%
inner_join(career)
## `summarise()` has grouped output by 'playerID'. You can override using the
## `.groups` argument.
## Joining, by = c("playerID", "lgID")
career <- People %>%
as_tibble() %>%
dplyr::select(playerID, nameFirst, nameLast) %>%
unite(name, nameFirst, nameLast, sep = " ") %>%
inner_join(career, by = "playerID")
career <- career %>%
mutate(league_dummy = ifelse(lgID == 'NL', 1, 0))
count(career, position)
## # A tibble: 4 × 2
## position n
## <chr> <int>
## 1 catcher 410
## 2 dh 81
## 3 infield 1248
## 4 outfield 1136
gf_violin(position ~ average, data = career, fill = 'gray85', draw_quantiles = c(0.1, 0.5, 0.9)) %>%
gf_labs(x = "Batting Average",
y = "")
career <- career %>%
mutate(outfield = ifelse(position == 'outfield', 1, 0),
infield = ifelse(position == 'infield', 1, 0),
catcher = ifelse(position == 'catcher', 1, 0))
head(career)
## # A tibble: 6 × 11
## playerID name lgID posit…¹ H AB average leagu…² outfi…³ infield
## <chr> <chr> <fct> <chr> <int> <int> <dbl> <dbl> <dbl> <dbl>
## 1 abbotje01 Jeff Abbo… AL outfie… 127 459 0.277 0 1 0
## 2 abbotku01 Kurt Abbo… AL infield 33 123 0.268 0 0 1
## 3 abbotku01 Kurt Abbo… NL infield 455 1780 0.256 1 0 1
## 4 abercre01 Reggie Ab… NL outfie… 54 255 0.212 1 1 0
## 5 abernbr01 Brent Abe… AL infield 194 767 0.253 0 0 1
## 6 abnersh01 Shawn Abn… AL outfie… 81 309 0.262 0 1 0
## # … with 1 more variable: catcher <dbl>, and abbreviated variable names
## # ¹position, ²league_dummy, ³outfield
position_lm <- lm(average ~ 1 + outfield + infield + catcher, data = career)
broom::tidy(position_lm)
## # A tibble: 4 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 0.257 0.00315 81.7 0
## 2 outfield -0.00182 0.00326 -0.557 0.578
## 3 infield -0.00289 0.00325 -0.888 0.375
## 4 catcher -0.0165 0.00345 -4.79 0.00000175
df_stats(average ~ position, data = career, mean)
## response position mean
## 1 average catcher 0.2408859
## 2 average dh 0.2574041
## 3 average infield 0.2545163
## 4 average outfield 0.2555881
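The coefficients line up with these group means: the intercept is the mean for designated hitters (the left-out group) and each dummy coefficient is that position's difference from it. A minimal sketch with invented numbers:

```r
# Toy three-group version of the same idea
avg <- c(0.24, 0.25, 0.26, 0.27, 0.23, 0.22)
pos <- c("dh", "dh", "outfield", "outfield", "catcher", "catcher")

outfield <- as.numeric(pos == "outfield")
catcher  <- as.numeric(pos == "catcher")

fit <- lm(avg ~ outfield + catcher)

coef(fit)[["(Intercept)"]]                           # mean for dh (reference)
coef(fit)[["(Intercept)"]] + coef(fit)[["catcher"]]  # mean for catcher
```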