| Variable | Definition |
|----------|------------|
| id | A unique sample identifier. |
| Gender | Gender (sex) of study participant, coded as male or female. |
| Age | Age in years at screening of study participant. Note: subjects 80 years or older were recorded as 80. |
| Race | Reported race of study participant, including non-Hispanic Asian category: Mexican, Hispanic, White, Black, Asian, or Other. Not available for 2009-2010. |
| Education | Educational level of study participant. Reported for participants aged 20 years or older. One of 8thGrade, 9-11thGrade, HighSchool, SomeCollege, or CollegeGrad. |
| MaritalStatus | Marital status of study participant. Reported for participants aged 20 years or older. One of Married, Widowed, Divorced, Separated, NeverMarried, or LivePartner (living with partner). |
| RelationshipStatus | Simplification of MaritalStatus, coded as Committed if MaritalStatus is Married or LivePartner, and Single otherwise. |
| Insured | Indicates whether the individual is covered by health insurance. |
| Income | Numerical version of HHIncome derived from the middle income in each category. |
| Poverty | A ratio of family income to poverty guidelines. Smaller numbers indicate more poverty. |
| HomeRooms | How many rooms are in the home of the study participant (counting kitchen but not bathroom). 13 rooms = 13 or more rooms. |
| HomeOwn | One of Home, Rent, or Other indicating whether the home of the study participant or someone in their family is owned, rented, or occupied by some other arrangement. |
| Work | Indicates whether the individual is currently working or not. |
| Weight | Weight in kg. |
| Height | Standing height in cm. Reported for participants aged 2 years or older. |
| BMI | Body mass index (weight/height² in kg/m²). Reported for participants aged 2 years or older. |
| Pulse | 60-second pulse rate. |
| BPSys | Combined systolic blood pressure reading, following the procedure outlined for BPXSAR. |
| BPDia | Combined diastolic blood pressure reading, following the procedure outlined for BPXDAR. |
| Testosterone | Testosterone total (ng/dL). Reported for participants aged 6 years or older. Not available for 2009-2010. |
| HDLChol | Direct HDL cholesterol in mmol/L. Reported for participants aged 6 years or older. |
| TotChol | Total HDL cholesterol in mmol/L. Reported for participants aged 6 years or older. |
| Diabetes | Study participant told by a doctor or health professional that they have diabetes. Reported for participants aged 1 year or older as Yes or No. |
| DiabetesAge | Age of study participant when first told they had diabetes. Reported for participants aged 1 year or older. |
| nPregnancies | How many times the participant has been pregnant. Reported for female participants aged 20 years or older. |
| nBabies | How many of the participant's deliveries resulted in live births. Reported for female participants aged 20 years or older. |
| SleepHrsNight | Self-reported number of hours of sleep the study participant usually gets at night on weekdays or workdays. Reported for participants aged 16 years and older. |
| PhysActive | Participant does moderate- or vigorous-intensity sports, fitness, or recreational activities (Yes or No). Reported for participants 12 years or older. |
| PhysActiveDays | Number of days in a typical week that the participant does moderate- or vigorous-intensity activity. Reported for participants 12 years or older. |
| AlcoholDay | Average number of drinks consumed on days that the participant drank alcoholic beverages. Reported for participants aged 18 years or older. |
| AlcoholYear | Estimated number of days over the past year that the participant drank alcoholic beverages. Reported for participants aged 18 years or older. |
| SmokingStatus | Smoking status: Current, Former, or Never. |
8 Essential statistics
This chapter provides hands-on instruction and exercises covering basic statistical analysis in R. This will cover descriptive statistics, t-tests, linear models, chi-square, clustering, dimensionality reduction, and resampling strategies. We will also cover methods for “tidying” model results for downstream visualization and summarization.
Handouts: Download and print out these handouts and bring them to class:
8.1 Our data: NHANES
8.1.1 About NHANES
The data we’re going to work with comes from the National Health and Nutrition Examination Survey (NHANES) program at the CDC. You can read a lot more about NHANES on the CDC’s website or Wikipedia. NHANES is a research program designed to assess the health and nutritional status of adults and children in the United States. It is one of the only surveys to combine both survey questions and physical examinations. It began in the 1960s and since 1999 examines a nationally representative sample of about 5,000 people each year. The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The physical exam includes medical, dental, and physiological measurements, as well as several standard laboratory tests. NHANES is used to determine the prevalence of major diseases and risk factors for those diseases. NHANES data are also the basis for national standards for measurements like height, weight, and blood pressure. Data from this survey is used in epidemiology studies and health sciences research, which help develop public health policy, direct and design health programs and services, and expand the health knowledge for the Nation.
We are using a small slice of this data. We’re only using a handful of variables from the 2011-2012 survey years on about 5,000 individuals. The CDC uses a sampling strategy to purposefully oversample certain subpopulations like racial minorities. Naive analysis of the original NHANES data can lead to mistaken conclusions because the percentages of people from each racial group in the data are different from the general population. The 5,000 individuals here are resampled from the larger NHANES study population to undo these oversampling effects, so you can treat this as if it were a simple random sample from the American population.
You can download the data here: nhanes.csv. There’s also a data dictionary here: nhanes_dd.csv that lists and describes each variable in our NHANES dataset. This table is copied below.
8.1.2 Import & inspect
First, let’s load the dplyr and readr libraries.
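Assuming both packages are installed, loading them looks like this:

```r
# Load packages for data manipulation (dplyr) and data import (readr)
library(dplyr)
library(readr)
```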
If you see an error that looks like this: Error in library(dplyr) : there is no package called 'dplyr'
(or similar with readr), then you don’t have the package installed correctly. See Appendix A.
Now, let’s actually load the data. When we load data we assign it to a variable just like any other, and we can choose a name for that data. Since we’re going to be referring to this data a lot, let’s give it a short easy name to type. I’m going to call it nh. Once we’ve loaded it we can type the name of the object itself (nh) to see it printed to the screen.
nh <- read_csv(file="data/nhanes.csv")
nh
# A tibble: 5,000 × 32
id Gender Age Race Education MaritalStatus RelationshipStatus Insured
<dbl> <chr> <dbl> <chr> <chr> <chr> <chr> <chr>
1 62163 male 14 Asian <NA> <NA> <NA> Yes
2 62172 female 43 Black High Sch… NeverMarried Single Yes
3 62174 male 80 White College … Married Committed Yes
4 62174 male 80 White College … Married Committed Yes
5 62175 male 5 White <NA> <NA> <NA> Yes
6 62176 female 34 White College … Married Committed Yes
7 62178 male 80 White High Sch… Widowed Single Yes
8 62180 male 35 White College … Married Committed Yes
9 62186 female 17 Black <NA> <NA> <NA> Yes
10 62190 female 15 Mexican <NA> <NA> <NA> Yes
# ℹ 4,990 more rows
# ℹ 24 more variables: Income <dbl>, Poverty <dbl>, HomeRooms <dbl>,
# HomeOwn <chr>, Work <chr>, Weight <dbl>, Height <dbl>, BMI <dbl>,
# Pulse <dbl>, BPSys <dbl>, BPDia <dbl>, Testosterone <dbl>, HDLChol <dbl>,
# TotChol <dbl>, Diabetes <chr>, DiabetesAge <dbl>, nPregnancies <dbl>,
# nBabies <dbl>, SleepHrsNight <dbl>, PhysActive <chr>, PhysActiveDays <dbl>,
# AlcoholDay <dbl>, AlcoholYear <dbl>, SmokingStatus <chr>
Take a look at that output. The nice thing about loading dplyr and reading data with readr functions is that data are displayed in a much more friendly way. This dataset has 5,000 rows and 32 columns. When you import/convert data this way and try to display the object in the console, instead of trying to display all 5,000 rows, you’ll only see about 10 by default. Also, if you have so many columns that the data would wrap off the edge of your screen, those columns will not be displayed, but you’ll see at the bottom of the output which, if any, columns were hidden from view.
A note on characters versus factors: One thing that you immediately notice is that all the categorical variables are read in as character data types. This data type is used for storing strings of text, for example, IDs, names, descriptive text, etc. There’s another related data type called factors. Factor variables are used to represent categorical variables with two or more levels, e.g., “male” or “female” for Gender, or “Single” versus “Committed” for RelationshipStatus. For the most part, statistical analysis treats these two data types the same. It’s often easier to leave categorical variables as characters. However, in some cases you may get a warning message alerting you that a character variable was converted into a factor variable during analysis. Generally, these warnings are nothing to worry about. You can, if you like, convert individual variables to factor variables, or simply use dplyr’s mutate_if to convert all character vectors to factor variables:
nh <- nh |> mutate_if(is.character, as.factor)
nh
Now take a look at just a few columns that are now factors. Remember, you can look at individual variables with the mydataframe$specificVariable syntax.
nh$RelationshipStatus
nh$Race
levels(nh$Race)
If you want to see the whole dataset, there are two ways to do this. First, you can click on the name of the data.frame in the Environment panel in RStudio. Or you could use the View() function (with a capital V).
View(nh)
Recall several built-in functions that are useful for inspecting the content, size, and summary of data frames:

- colnames() (or just names()): returns the column names.
- glimpse() (from dplyr): returns a glimpse of your data, telling you the structure of the dataset and information about the class, length, and content of each column.
8.2 Descriptive statistics
We can access individual variables within a data frame using the $ operator, e.g., mydataframe$specificVariable. Let’s print out all the Race values in the data. Let’s then see what the unique values of each are. Then let’s calculate the mean, median, and range of the Age variable.
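A sketch of those steps (output omitted):

```r
# Print all Race values, then just the unique values
nh$Race
unique(nh$Race)

# Descriptive statistics on the Age variable
mean(nh$Age)
median(nh$Age)
range(nh$Age)
```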
You could also do the last few operations using dplyr, but remember, this returns a single-row, single-column tibble, not a single scalar value like the above. This is only really useful in the context of grouping and summarizing.
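For example, the dplyr version of the mean calculation might look like this:

```r
# Returns a 1x1 tibble rather than a scalar
nh |>
  summarize(mean(Age))
```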
The summary() function (note, this is different from dplyr’s summarize()) works differently depending on which kind of object you pass to it. If you run summary() on a data frame, you get some very basic summary statistics on each variable in the data.
summary(nh)
8.2.1 Missing data
Let’s try taking the mean of a different variable, either the dplyr way or the simpler $ way.
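For instance, trying the Income variable (any variable with missing values behaves the same way):

```r
# Take the mean of a variable that contains missing values
mean(nh$Income)    # returns NA

# Or the dplyr way (also returns NA)
nh |>
  summarize(mean(Income))
```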
What happened there? NA indicates missing data. Take a look at the Income variable.
# Look at just the Income variable
nh$Income
# Or view the dataset
# View(nh)
Notice that there are lots of missing values for Income. Trying to get the mean of a bunch of observations with some missing data returns a missing value by default. This is almost universally the case with all summary statistics – a single NA will cause the summary to return NA. Now look at the help for ?mean. Notice the na.rm argument. This is a logical (i.e., TRUE or FALSE) value indicating whether or not missing values should be removed prior to computing the mean. By default, it’s set to FALSE. Now try it again.
mean(nh$Income, na.rm=TRUE)
[1] 57077.66
The is.na() function tells you if a value is missing. Get the sum() of that vector, which adds up all the TRUEs to tell you how many of the values are missing.
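Putting those together to count the missing Income values:

```r
# is.na() returns a logical vector; sum() counts the TRUEs
sum(is.na(nh$Income))
```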
Now, let’s talk about exploratory data analysis (EDA).
8.2.2 EDA
It’s always worth examining your data visually before you start any statistical analysis or hypothesis testing. We could spend an entire day on exploratory data analysis. The data visualization section (Chapter 5) covers this in much broader detail. Here we’ll just mention a few of the big ones: histograms and scatterplots.
8.2.2.1 Histograms
We can learn a lot from the data just by looking at the value distributions of particular variables. Let’s make some histograms with ggplot2. Looking at BMI shows a few extreme outliers. Looking at weight initially shows us that the units are probably in kg. Replotting that in lbs with more bins shows a clear bimodal distribution. Are there kids in this data? The age distribution shows us the answer is yes.
library(ggplot2)
ggplot(nh, aes(BMI)) + geom_histogram(bins=30)
ggplot(nh, aes(Weight)) + geom_histogram(bins=30)
# In pounds, more bins
ggplot(nh, aes(Weight*2.2)) + geom_histogram(bins=80)
ggplot(nh, aes(Age)) + geom_histogram(bins=30)
8.2.2.2 Scatterplots
Let’s look at how a few different variables relate to each other. E.g., height and weight:
ggplot(nh, aes(Height, Weight, col=Gender)) + geom_point()
Let’s filter out all the kids, draw trend lines using a linear model:
nh |>
  filter(Age>=18) |>
  ggplot(aes(Height, Weight, col=Gender)) +
  geom_point() +
  geom_smooth(method="lm")
Check out the data visualization section (Chapter 5) for much more on this topic.
8.3 Continuous variables
8.3.1 T-tests
First let’s create a new dataset from nh called nha that only has adults. To prevent us from making any mistakes downstream, let’s remove the nh object.
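One way to do this, assuming dplyr is loaded:

```r
# Keep only adults, then remove the original object
nha <- filter(nh, Age >= 18)
rm(nh)
```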
Let’s do a few two-sample t-tests to test for differences in means between two groups. The function for a t-test is t.test(). See the help for ?t.test. We’ll be using the formula method. The usage is t.test(response~group, data=myDataFrame).
- Are there differences in age for males versus females in this dataset?
- Does BMI differ between diabetics and non-diabetics?
- Do single or married/cohabitating people drink more alcohol? Is this relationship significant?
t.test(Age~Gender, data=nha)
Welch Two Sample t-test
data: Age by Gender
t = 1.9122, df = 3697.2, p-value = 0.05593
alternative hypothesis: true difference in means between group female and group male is not equal to 0
95 percent confidence interval:
 -0.02776814  2.22191122
sample estimates:
mean in group female mean in group male
47.06412 45.96704
t.test(BMI~Diabetes, data=nha)
Welch Two Sample t-test
data: BMI by Diabetes
t = -11.379, df = 407.31, p-value < 2.2e-16
alternative hypothesis: true difference in means between group No and group Yes is not equal to 0
95 percent confidence interval:
 -5.563596 -3.924435
sample estimates:
mean in group No mean in group Yes
28.08753 32.83155
t.test(AlcoholYear~RelationshipStatus, data=nha)
Welch Two Sample t-test
data: AlcoholYear by RelationshipStatus
t = 5.4315, df = 2674.8, p-value = 6.09e-08
alternative hypothesis: true difference in means between group Committed and group Single is not equal to 0
95 percent confidence interval:
13.05949 27.81603
sample estimates:
mean in group Committed mean in group Single
83.93416 63.49640
See the heading, Welch Two Sample t-test, and notice that the degrees of freedom might not be what we expected based on our sample size. Now look at the help for ?t.test again, and look at the var.equal argument, which is by default set to FALSE. One of the assumptions of the t-test is homoscedasticity, or homogeneity of variance. This assumes that the variance in the outcome (e.g., BMI) is identical across both levels of the predictor (diabetic vs. non-diabetic). Since this is rarely the case, the t-test defaults to using the Welch correction, which is a more reliable version of the t-test when the homoscedasticity assumption is violated.
A note on one-tailed versus two-tailed tests: A two-tailed test is almost always more appropriate. The hypothesis you’re testing here is spelled out in the results (“alternative hypothesis: true difference in means is not equal to 0”). If the p-value is very low, you can reject the null hypothesis that there’s no difference in means. Because you typically don’t know a priori whether the difference in means will be positive or negative (e.g., we don’t know a priori whether Single people would be expected to drink more or less than those in a committed relationship), we want to do the two-tailed test. However, if we only wanted to test a very specific directionality of effect, we could use a one-tailed test and specify which direction we expect. This is more powerful if we “get it right”, but much less powerful for the opposite effect. Notice how the p-value changes depending on how we specify the hypothesis. Again, the two-tailed test is almost always more appropriate.
# Two-tailed
t.test(AlcoholYear~RelationshipStatus, data=nha)
# Difference in means is >0 (committed drink more)
t.test(AlcoholYear~RelationshipStatus, data=nha, alternative="greater")
# Difference in means is <0 (committed drink less)
t.test(AlcoholYear~RelationshipStatus, data=nha, alternative="less")
A note on paired versus unpaired t-tests: The t-test we performed here was an unpaired test. Here the males and females are different people. The diabetics and non-diabetics are different samples. The single and committed individuals are completely independent, separate observations. In this case, an unpaired test is appropriate. An alternative design might be when data is derived from samples who have been measured at two different time points or locations, e.g., before versus after treatment, left versus right hand, etc. In this case, a paired t-test would be more appropriate. A paired test takes into consideration the intra- and inter-subject variability, and is more powerful than the unpaired test. See the help for ?t.test for more information on how to do this.
8.3.2 Wilcoxon test
Another assumption of the t-test is that the data are normally distributed. Looking at the histogram for AlcoholYear shows that this data clearly isn’t.
ggplot(nha, aes(AlcoholYear)) + geom_histogram()
The Wilcoxon rank-sum test (a.k.a. Mann-Whitney U test) is a nonparametric test of differences in means that does not require normally distributed data. When data are perfectly normal, the t-test is uniformly more powerful. But when this assumption is violated, the t-test is unreliable. This test is called in a similar way as the t-test.
wilcox.test(AlcoholYear~RelationshipStatus, data=nha)
Wilcoxon rank sum test with continuity correction
data: AlcoholYear by RelationshipStatus
W = 1067954, p-value = 0.0001659
alternative hypothesis: true location shift is not equal to 0
The results are still significant, but the p-value is larger (less significant) than the one reported for the (incorrect) t-test above. Also note in the help for ?wilcox.test that there’s a paired option here too.
8.3.3 Linear models
Analysis of variance and linear modeling are complex topics that deserve an entire semester dedicated to theory, design, and interpretation. A very good resource is An Introduction to Statistical Learning: with Applications in R by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. The PDF of the book and all the R code used throughout are available free on the authors’ website. What follows is a necessary oversimplification with more focus on implementation, and less on theory and design.
Where t-tests and their nonparametric substitutes are used for assessing the differences in means between two groups, ANOVA is used to assess the significance of differences in means between multiple groups. In fact, a t-test is just a specific case of ANOVA when you only have two groups. And both t-tests and ANOVA are just specific cases of linear regression, where you’re trying to fit a model describing how a continuous outcome (e.g., BMI) changes with some predictor variable (e.g., diabetic status, race, age, etc.). The distinction is largely semantic – with a linear model you’re asking, “do levels of a categorical variable affect the response?” where with ANOVA or t-tests you’re asking, “does the mean response differ between levels of a categorical variable?”
Let’s examine the relationship between BMI and relationship status (RelationshipStatus was derived from MaritalStatus, coded as Committed if MaritalStatus is Married or LivePartner, and Single otherwise). Let’s first do this with a t-test, and for now, let’s assume that the variances between groups are equal.
t.test(BMI~RelationshipStatus, data=nha, var.equal=TRUE)
Two Sample t-test
data: BMI by RelationshipStatus
t = -1.5319, df = 3552, p-value = 0.1256
alternative hypothesis: true difference in means between group Committed and group Single is not equal to 0
95 percent confidence interval:
 -0.77817842  0.09552936
sample estimates:
mean in group Committed mean in group Single
28.51343 28.85475
It looks like single people have a very slightly higher BMI than those in a committed relationship, but the magnitude of the difference is trivial, and the difference is not significant. Now, let’s do the same test in a linear modeling framework. First, let’s create the fitted model and store it in an object called fit.
fit <- lm(BMI~RelationshipStatus, data=nha)
You can display the object itself, but that isn’t too interesting. You can get the more familiar ANOVA table by calling the anova() function on the fit object. More generally, the summary() function on a linear model object will tell you much more. (Note this is different from dplyr’s summarize function.)
fit
Call:
lm(formula = BMI ~ RelationshipStatus, data = nha)
Coefficients:
(Intercept) RelationshipStatusSingle
28.5134 0.3413
anova(fit)
Analysis of Variance Table
Response: BMI
Df Sum Sq Mean Sq F value Pr(>F)
RelationshipStatus 1 98 98.320 2.3467 0.1256
Residuals 3552 148819 41.897
summary(fit)
Call:
lm(formula = BMI ~ RelationshipStatus, data = nha)
Residuals:
Min 1Q Median 3Q Max
-12.813 -4.613 -0.955 3.287 52.087
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 28.5134 0.1388 205.440 <2e-16
RelationshipStatusSingle 0.3413 0.2228 1.532 0.126
Residual standard error: 6.473 on 3552 degrees of freedom
(153 observations deleted due to missingness)
Multiple R-squared: 0.0006602, Adjusted R-squared: 0.0003789
F-statistic: 2.347 on 1 and 3552 DF, p-value: 0.1256
Go back and re-run the t-test assuming equal variances as we did before. Now notice a few things:
t.test(BMI~RelationshipStatus, data=nha, var.equal=TRUE)
- The p-values from all three tests (t-test, ANOVA, and linear regression) are all identical (p=0.1256). This is because they’re all identical: a t-test is a specific case of ANOVA, which is a specific case of linear regression. There may be some rounding error, but we’ll talk about extracting the exact values from a model object later on.
- The test statistics are all related. The t statistic from the t-test is 1.532 in magnitude, the same as the t-statistic from the linear regression. If you square that, you get 2.347, the F statistic from the ANOVA.
- The t.test() output shows you the means for the two groups, Committed and Single. Just displaying the fit object itself or running summary(fit) shows you the coefficients for a linear model. Here, the model assumes the “baseline” RelationshipStatus level is Committed, and that the intercept in a regression model (e.g., \(\beta_{0}\) in the model \(Y = \beta_{0} + \beta_{1}X\)) is the mean of the baseline group. Being Single results in an increase in BMI of 0.3413. This is the \(\beta_{1}\) coefficient in the model. You can easily change the ordering of the levels. See the help for ?factor, and check out the new forcats package, which provides tools for manipulating categorical variables.
# P-value computed on a t-statistic with 3552 degrees of freedom
# (multiply by 2 because the t-test is two-tailed)
2*(1-pt(1.532, df=3552))
[1] 0.1256115
# P-value computed on an F-test with 1 and 3552 degrees of freedom
1-pf(2.347, df1=1, df2=3552)
[1] 0.1256134
A note on dummy coding: If you have a \(k\)-level factor, R creates \(k-1\) dummy variables, or indicator variables, by default, using the alphabetically first level as baseline. For example, the levels of RelationshipStatus are “Committed” and “Single”. R creates a dummy variable called “RelationshipStatusSingle” that’s 0 if you’re Committed, and 1 if you’re Single. The linear model is saying that every unit increase in RelationshipStatusSingle, i.e., going from committed to single, results in a 0.3413-unit increase in BMI. You can change the ordering of the factors to change the interpretation of the model (e.g., treating Single as baseline and going from Single to Committed). We’ll do this in the next section.
8.3.4 ANOVA
Recap: t-tests are for assessing the differences in means between two groups. A t-test is a specific case of ANOVA, which is a specific case of a linear model. Let’s run ANOVA, but this time looking for differences in means between more than two groups.
Let’s look at the relationship between smoking status (Never, Former, or Current) and BMI.
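The ANOVA table and model summary below presumably come from a fit like this one:

```r
# Fit the linear model, then show the ANOVA table
fit <- lm(BMI ~ SmokingStatus, data = nha)
anova(fit)
```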
Analysis of Variance Table
Response: BMI
Df Sum Sq Mean Sq F value Pr(>F)
SmokingStatus 2 1411 705.50 16.988 4.54e-08
Residuals 3553 147551 41.53
summary(fit)
Call:
lm(formula = BMI ~ SmokingStatus, data = nha)
Residuals:
Min 1Q Median 3Q Max
-12.565 -4.556 -1.056 3.315 51.744
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 27.3914 0.2446 111.975 < 2e-16
SmokingStatusFormer 1.7736 0.3293 5.387 7.65e-08
SmokingStatusNever 1.4645 0.2838 5.161 2.60e-07
Residual standard error: 6.444 on 3553 degrees of freedom
(151 observations deleted due to missingness)
Multiple R-squared: 0.009472, Adjusted R-squared: 0.008915
F-statistic: 16.99 on 2 and 3553 DF, p-value: 4.54e-08
The F-test on the ANOVA table tells us that there is a significant difference in means between current, former, and never smokers (p=\(4.54 \times 10^{-8}\)). However, the linear model output might not have been what we wanted. Because the default handling of categorical variables is to treat the alphabetically first level as the baseline, “Current” smokers are treated as baseline, their mean becomes the intercept, and the coefficients on “Former” and “Never” describe how those groups’ means differ from current smokers.
Back to dummy coding / indicator variables: SmokingStatus is “Current”, “Former”, and “Never.” By default, R will create two indicator variables here that in tandem will explain this variable.
| Original SmokingStatus | Indicator: SmokingStatusFormer | Indicator: SmokingStatusNever |
|------------------------|--------------------------------|-------------------------------|
| Current | 0 | 0 |
| Former | 1 | 0 |
| Never | 0 | 1 |
What if we wanted “Never” smokers to be the baseline, followed by Former, then Current? Have a look at ?factor to relevel the factor levels.
# Look at nha$SmokingStatus
nha$SmokingStatus
# What happens if we relevel it? Let's see what that looks like.
relevel(nha$SmokingStatus, ref="Never")
# If we're happy with that, let's change the value of nha$SmokingStatus in place
nha$SmokingStatus <- relevel(nha$SmokingStatus, ref="Never")
# Or we could do this the dplyr way
nha <- nha |>
  mutate(SmokingStatus=relevel(SmokingStatus, ref="Never"))
# Refit the model
fit <- lm(BMI~SmokingStatus, data=nha)
# Optionally, show the ANOVA table
# anova(fit)
# Print the full model statistics
summary(fit)
Call:
lm(formula = BMI ~ SmokingStatus, data = nha)
Residuals:
Min 1Q Median 3Q Max
-12.565 -4.556 -1.056 3.315 51.744
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 28.8558 0.1438 200.601 < 2e-16
SmokingStatusCurrent -1.4645 0.2838 -5.161 2.6e-07
SmokingStatusFormer 0.3091 0.2632 1.175 0.24
Residual standard error: 6.444 on 3553 degrees of freedom
(151 observations deleted due to missingness)
Multiple R-squared: 0.009472, Adjusted R-squared: 0.008915
F-statistic: 16.99 on 2 and 3553 DF, p-value: 4.54e-08
Notice that the p-value on the ANOVA/regression didn’t change, but the coefficients did. Never smokers are now treated as baseline. The intercept coefficient (28.856) is now the mean for Never smokers. The SmokingStatusFormer coefficient of 0.309 shows the apparent increase in BMI that former smokers have when compared to never smokers, but that difference is not significant (p=0.24). The SmokingStatusCurrent coefficient of -1.464 shows that current smokers actually have a lower BMI than never smokers, and that this decrease is highly significant.
Finally, you can do the typical post-hoc ANOVA procedures on the fit object. For example, the TukeyHSD() function will run Tukey’s test (also known as Tukey’s range test, the Tukey method, Tukey’s honest significance test, Tukey’s HSD (honest significant difference) test, or the Tukey-Kramer method). Tukey’s test computes all pairwise mean difference calculations, comparing each group to each other group, identifying any difference between two groups that’s greater than the standard error, while controlling the type I error for all multiple comparisons. First run aov() (not anova()) on the fitted linear model object, then run TukeyHSD() on the resulting analysis of variance fit.
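That is:

```r
# Convert the lm fit to an aov fit, then run Tukey's HSD on it
TukeyHSD(aov(fit))
```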
Tukey multiple comparisons of means
95% familywise confidence level
Fit: aov(formula = fit)
$SmokingStatus
diff lwr upr p adj
Current-Never -1.4644502 -2.1298249 -0.7990756 0.0000008
Former-Never 0.3091076 -0.3079639 0.9261790 0.4685044
Former-Current 1.7735578 1.0015525 2.5455631 0.0000002
This shows that there isn’t much of a difference between former and never smokers, but that both of these differ significantly from current smokers, who have significantly lower BMI.
Finally, let’s visualize the differences in means between these groups. The NA category, which is omitted from the ANOVA, contains all the observations with missing or non-recorded SmokingStatus.
ggplot(nha, aes(SmokingStatus, BMI)) + geom_boxplot() + theme_classic()
8.3.5 Linear regression
Linear models are mathematical representations of the process that (we think) gave rise to our data. The model seeks to explain the relationship between a variable of interest, our Y, outcome, response, or dependent variable, and one or more X, predictor, or independent variables. Previously we talked about t-tests or ANOVA in the context of a simple linear regression model with only a single predictor variable, \(X\):
\[Y = \beta_{0} + \beta_{1}X\]
But you can have multiple predictors in a linear model that are all additive, accounting for the effects of the others:
\[Y = \beta_{0} + \beta_{1}X_{1} + \beta_{2}X_{2} + \epsilon\]
- \(Y\) is the response
- \(X_{1}\) and \(X_{2}\) are the predictors
- \(\beta_{0}\) is the intercept, and \(\beta_{1}\), \(\beta_{2}\), etc. are coefficients that describe what 1-unit changes in \(X_{1}\) and \(X_{2}\) do to the outcome variable \(Y\).
- \(\epsilon\) is random error. Our model will not perfectly predict \(Y\). It will be off by some random amount. We assume this amount is a random draw from a Normal distribution with mean 0 and standard deviation \(\sigma\).
Building a linear model means we propose a linear model and then estimate the coefficients and the variance of the error term. Above, this means estimating \(\beta_{0}, \beta_{1}, \beta_{2}\) and \(\sigma\). This is what we do in R.
Let’s look at the relationship between height and weight.
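The summary output below presumably comes from a fit like:

```r
# Regress weight on height and print the full model summary
fit <- lm(Weight ~ Height, data = nha)
summary(fit)
```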
Call:
lm(formula = Weight ~ Height, data = nha)
Residuals:
Min 1Q Median 3Q Max
-40.339 -13.109 -2.658 9.309 127.972
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -73.70590 5.08110 -14.51 <2e-16
Height 0.91996 0.03003 30.63 <2e-16
Residual standard error: 18.57 on 3674 degrees of freedom
(31 observations deleted due to missingness)
Multiple R-squared: 0.2034, Adjusted R-squared: 0.2032
F-statistic: 938.4 on 1 and 3674 DF, p-value: < 2.2e-16
The relationship is highly significant (P<\(2.2 \times 10^{-16}\)). The intercept term is not very useful most of the time. Here it shows us what the value of Weight would be when Height=0, which could never happen. The Height coefficient is meaningful – each one-unit increase in height results in a 0.92-unit increase in the corresponding unit of weight. Let’s visualize that relationship:
ggplot(nha, aes(x=Height, y=Weight)) + geom_point() + geom_smooth(method="lm")
By default, this is only going to show the prediction over the range of the data. This is important! You never want to try to extrapolate response variables outside of the range of your predictor(s). For example, the linear model tells us that weight is -73.7 kg when height is zero. We can extend the predicted model / regression line past the lowest value of the data down to height=0. The bands on the confidence interval tell us that the model is apparently confident within the regions defined by the gray boundary. But this is silly: we would never see a height of zero, let alone a negative weight, and predicting past the range of the available training data is never a good idea.
ggplot(nha, aes(x=Height, y=Weight)) +
geom_point() +
geom_smooth(method="lm", fullrange=TRUE) +
xlim(0, NA) +
ggtitle("Friends don't let friends extrapolate.")
8.3.6 Multiple regression
Finally, let’s do a multiple linear regression analysis, where we attempt to model the effect of multiple predictor variables at once on some outcome. First, let’s look at the effect of physical activity on testosterone levels. Let’s do this with a t-test and linear regression, showing that you get the same results.
t.test(Testosterone~PhysActive, data=nha, var.equal=TRUE)
        Two Sample t-test

data:  Testosterone by PhysActive
t = -2.4298, df = 3436, p-value = 0.01516
alternative hypothesis: true difference in means between group No and group Yes is not equal to 0
95 percent confidence interval:
 -34.813866  -3.720171
sample estimates:
 mean in group No mean in group Yes
         207.5645          226.8315
summary(lm(Testosterone ~ PhysActive, data = nha))

Call:
lm(formula = Testosterone ~ PhysActive, data = nha)
Residuals:
Min 1Q Median 3Q Max
-224.5 -196.5 -115.9 167.0 1588.0
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)    207.565      5.873   35.34   <2e-16
PhysActiveYes   19.267      7.929    2.43   0.0152
Residual standard error: 231.4 on 3436 degrees of freedom
(269 observations deleted due to missingness)
Multiple R-squared: 0.001715, Adjusted R-squared: 0.001425
F-statistic: 5.904 on 1 and 3436 DF, p-value: 0.01516
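The equivalence claimed above can be checked on simulated data (hypothetical groups, not the NHANES variables): an equal-variance two-sample t-test and a one-predictor lm produce the identical p-value.

```r
set.seed(1)
g <- rep(c("No", "Yes"), each = 50)
y <- rnorm(100, mean = ifelse(g == "Yes", 10.5, 10))
d <- data.frame(g, y)

p_ttest <- t.test(y ~ g, data = d, var.equal = TRUE)$p.value
p_lm    <- summary(lm(y ~ g, data = d))$coefficients["gYes", "Pr(>|t|)"]
all.equal(p_ttest, p_lm)  # identical p-values: same test, two framings
```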
In both cases, the p-value is significant (p=0.01516), and the result suggests that increased physical activity is associated with increased testosterone levels. Does increasing your physical activity increase your testosterone levels? Or is it the other way around: does increased testosterone encourage more physical activity? Or is it neither: is the apparent relationship between physical activity and testosterone levels only apparent because both are correlated with yet a third, unaccounted-for variable? Let’s throw Age into the model as well.
summary(lm(Testosterone ~ PhysActive + Age, data = nha))

Call:
lm(formula = Testosterone ~ PhysActive + Age, data = nha)
Residuals:
Min 1Q Median 3Q Max
-238.6 -196.8 -112.3 167.4 1598.1
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   247.8828    13.0853  18.944  < 2e-16
PhysActiveYes  13.6740     8.0815   1.692 0.090735
Age            -0.8003     0.2322  -3.447 0.000574
Residual standard error: 231 on 3435 degrees of freedom
(269 observations deleted due to missingness)
Multiple R-squared: 0.005156, Adjusted R-squared: 0.004577
F-statistic: 8.901 on 2 and 3435 DF, p-value: 0.0001394
This shows us that after accounting for age, the testosterone / physical activity link is no longer significant. Every 1-year increase in age is associated with a highly significant decrease in testosterone, and since increasing age is also likely associated with decreased physical activity, perhaps age is the confounder that makes this relationship apparent.
Adding other predictors can also swing things the other way. We know that males have much higher testosterone levels than females. Sex is probably the single best predictor of testosterone levels in our dataset. By not accounting for this effect, our unaccounted-for variation remains very high. By accounting for Gender, we reduce the residual error in the model, and the physical activity effect once again becomes significant. Also notice that our model fits much better (higher R-squared), and is much more significant overall.
summary(lm(Testosterone ~ PhysActive + Age + Gender, data = nha))

Call:
lm(formula = Testosterone ~ PhysActive + Age + Gender, data = nha)
Residuals:
Min 1Q Median 3Q Max
-397.91 -31.01 -4.42 20.50 1400.90
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)    46.6931     7.5729   6.166 7.83e-10
PhysActiveYes   9.2749     4.4617   2.079   0.0377
Age            -0.5904     0.1282  -4.605 4.28e-06
Gendermale    385.1989     4.3512  88.526  < 2e-16
Residual standard error: 127.5 on 3434 degrees of freedom
(269 observations deleted due to missingness)
Multiple R-squared: 0.6969, Adjusted R-squared: 0.6966
F-statistic: 2632 on 3 and 3434 DF, p-value: < 2.2e-16
We’ve only looked at the summary() and anova() functions for extracting information from an lm class object. There are several other accessor functions that can be used on a linear model object. Check out the help page for each one of these to learn more.
8.4 Discrete variables
Until now we’ve only discussed analyzing continuous outcomes / dependent variables. We’ve tested for differences in means between two groups with t-tests, differences among means between n groups with ANOVA, and more general relationships using linear regression. In all of these cases, the dependent variable, i.e., the outcome, or \(Y\) variable, was continuous, and usually normally distributed. What if our outcome variable is discrete, e.g., “Yes/No”, “Mutant/WT”, “Case/Control”, etc.? Here we use a different set of procedures for assessing significant associations.
8.4.1 Contingency tables
The xtabs() function is useful for creating contingency tables from categorical variables. Let’s create a gender by diabetes status contingency table, and assign it to an object called xt. After making the assignment, type the name of the object to view it.
xt <- xtabs(~Gender+Diabetes, data=nha)
xt
Diabetes
Gender No Yes
female 1692 164
male 1653 198
There are two useful functions, addmargins() and prop.table(), that add more information or manipulate how the data is displayed. By default, prop.table() will divide the number of observations in each cell by the total. But you may want to specify which margin you want to get proportions over. Let’s do this for the first (row) margin.
# Add marginal totals
addmargins(xt)
Diabetes
Gender No Yes Sum
female 1692 164 1856
male 1653 198 1851
Sum 3345 362 3707
# Get the proportional table
prop.table(xt)
Diabetes
Gender No Yes
female 0.45643377 0.04424063
male 0.44591314 0.05341246
# That wasn't really what we wanted.
# Do this over the first (row) margin only.
prop.table(xt, margin=1)
Diabetes
Gender No Yes
female 0.91163793 0.08836207
male 0.89303079 0.10696921
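To see exactly what margin=1 does, here is a toy sketch (made-up counts): each cell is divided by its row total, so every row of the result sums to 1.

```r
m <- matrix(c(2, 1, 8, 9), nrow = 2,
            dimnames = list(group = c("a", "b"), result = c("no", "yes")))
prop.table(m, margin = 1)           # row-wise proportions
rowSums(prop.table(m, margin = 1))  # both rows sum to 1
```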
Looks like men have slightly higher rates of diabetes than women. But is this significant?
The chi-square test is used to assess the independence of these two factors. That is, if the null hypothesis that gender and diabetes are independent is true, then we would expect a proportionally equal number of diabetics across each sex. Males seem to be at slightly higher risk than females, but the difference is just short of statistically significant.
chisq.test(xt)
        Pearson's Chi-squared test with Yates' continuity correction
data: xt
X-squared = 3.4332, df = 1, p-value = 0.0639
An alternative to the chi-square test is Fisher’s exact test. Rather than relying on a critical value from a theoretical chi-square distribution, Fisher’s exact test calculates the exact probability of observing the contingency table as is. It’s especially useful when there are very small n’s in one or more of the contingency table cells. Both the chi-square and Fisher’s exact tests give us p-values of approximately 0.06.
fisher.test(xt)
Fisher's Exact Test for Count Data
data: xt
p-value = 0.05992
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.9883143 1.5466373
sample estimates:
odds ratio
1.235728
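As a sanity check on that estimate, the sample odds ratio can be computed directly from the table above (fisher.test() reports a conditional maximum-likelihood estimate, so it differs slightly from this simple cross-product version):

```r
# Rebuild the Gender-by-Diabetes table shown earlier
xt <- matrix(c(1692, 1653, 164, 198), nrow = 2,
             dimnames = list(Gender = c("female", "male"),
                             Diabetes = c("No", "Yes")))
# Odds of diabetes among males divided by odds among females
or <- (xt["male", "Yes"] / xt["male", "No"]) /
      (xt["female", "Yes"] / xt["female", "No"])
or  # about 1.24, close to the fisher.test estimate
```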
There’s a useful plot for visualizing contingency table data called a mosaic plot. Call the mosaicplot() function on the contingency table object. Note this is a built-in base graphics plot, not a ggplot2-style plot.
mosaicplot(xt, main=NA)
Let’s create a different contingency table, this time looking at the relationship between race and whether the person had health insurance. Display the table with marginal totals.
xt <- xtabs(~Race+Insured, data=nha)
addmargins(xt)
Insured
Race No Yes Sum
Asian 46 169 215
Black 86 330 416
Hispanic 89 151 240
Mexican 147 141 288
Other 33 65 98
White 307 2141 2448
Sum 708 2997 3705
Let’s do the same thing as above, this time showing the proportion of people in each race category having health insurance.
prop.table(xt, margin=1)
Insured
Race No Yes
Asian 0.2139535 0.7860465
Black 0.2067308 0.7932692
Hispanic 0.3708333 0.6291667
Mexican 0.5104167 0.4895833
Other 0.3367347 0.6632653
White 0.1254085 0.8745915
Now, let’s run a chi-square test for independence.
chisq.test(xt)
        Pearson's Chi-squared test

data:  xt
X-squared = 323.3, df = 5, p-value < 2.2e-16
The result is highly significant. In fact, it is so significant that the display rounds off the p-value to something like \(< 2.2 \times 10^{-16}\). If you look at the help for ?chisq.test you’ll see that printing the test only shows you summary information, but other components can be accessed. For example, we can easily get the actual p-value, or the expected counts under the null hypothesis of independence.
chisq.test(xt)$p.value
[1] 9.754238e-68
chisq.test(xt)$expected
Insured
Race No Yes
Asian 41.08502 173.91498
Black 79.49474 336.50526
Hispanic 45.86235 194.13765
Mexican 55.03482 232.96518
Other 18.72713 79.27287
White 467.79595 1980.20405
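Those expected counts come from the independence formula: expected cell count = (row total × column total) / grand total. A toy sketch (made-up counts) showing the by-hand calculation matches chisq.test():

```r
m <- matrix(c(30, 10, 20, 40), nrow = 2)
expected <- outer(rowSums(m), colSums(m)) / sum(m)
expected                # by-hand expected counts under independence
chisq.test(m)$expected  # the same values, computed by R
```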
We can also make a mosaic plot similar to above:
mosaicplot(xt, main=NA)
8.4.2 Logistic regression
What if we wanted to model a discrete outcome, e.g., whether someone is insured, against several other variables, similar to how we did with multiple linear regression? We can’t use linear regression because the outcome isn’t continuous: it’s binary, either Yes or No. For this we’ll use logistic regression to model the log odds of a binary response. That is, instead of modeling the outcome variable, \(Y\), directly against the inputs, we’ll model the log odds of the outcome variable.
If \(p\) is the probability that the individual is insured, then \(\frac{p}{1-p}\) is the odds that person is insured. Then it follows that the linear model is expressed as:
\[\log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k\]
Where \(\beta_0\) is the intercept, \(\beta_1\) is the increase in the log odds of the outcome for every unit increase in \(x_1\), and so on.
Logistic regression is a type of generalized linear model (GLM). We fit GLM models in R using the glm() function. It works like the lm() function except we specify which GLM to fit using the family argument. Logistic regression requires family=binomial.
The typical use looks like this:
fit <- glm(y ~ x, data = mydata, family = "binomial")
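A self-contained sketch with simulated binary data and a known effect (hypothetical x and y, not the NHANES variables): glm() recovers the log-odds coefficients, and exponentiating a coefficient turns it back into an odds ratio.

```r
set.seed(7)
n <- 5000
x <- rbinom(n, 1, 0.5)
# True log odds: -1 + 0.8*x, so the odds ratio for x is exp(0.8), about 2.23
p <- plogis(-1 + 0.8 * x)
y <- rbinom(n, 1, p)

fit <- glm(y ~ x, family = binomial)
coef(fit)       # close to (-1, 0.8) on the log-odds scale
exp(coef(fit))  # baseline odds and the odds ratio for x
```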
Before we fit a logistic regression model, let’s relevel the Race variable so that “White” is the baseline. We saw above that people who identify as “White” have the highest rates of being insured. When we run the logistic regression, we’ll get a separate coefficient (effect) for each level of the factor variable(s) in the model, telling you the change in odds that that level has compared to the baseline group.
# Look at Race. The default ordering is alphabetical
nha$Race
# Let's relevel that so the group with the highest rate of insurance is "baseline"
relevel(nha$Race, ref="White")
# If we're happy with that result, permanently change it
nha$Race <- relevel(nha$Race, ref="White")
# Or do it the dplyr way
nha <- nha |>
  mutate(Race=relevel(Race, ref="White"))
Now, let’s fit a logistic regression model assessing how the odds of being insured change with different levels of race.
fit <- glm(Insured ~ Race, data = nha, family = "binomial")
summary(fit)

Call:
glm(formula = Insured ~ Race, family = "binomial", data = nha)
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   1.94218    0.06103  31.825  < 2e-16
RaceAsian    -0.64092    0.17715  -3.618 0.000297
RaceBlack    -0.59744    0.13558  -4.406 1.05e-05
RaceHispanic -1.41354    0.14691  -9.622  < 2e-16
RaceMexican  -1.98385    0.13274 -14.946  < 2e-16
RaceOther    -1.26430    0.22229  -5.688 1.29e-08
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 3614.6 on 3704 degrees of freedom
Residual deviance: 3336.6 on 3699 degrees of freedom
(2 observations deleted due to missingness)
AIC: 3348.6
Number of Fisher Scoring iterations: 4
The Estimate column shows the log odds ratio: how the log odds of having health insurance changes at each level of race compared to White. The p-value for each coefficient is on the far right. This shows that every other race has significantly lower rates of health insurance coverage than White. But, as in our multiple linear regression analysis above, are there other important variables that we’re leaving out that could alter our conclusions? Let’s add a few more variables into the model to see if something else can explain the apparent Race-Insured association. Let’s add a few things likely to be involved (Age and Income), and something that’s probably irrelevant (hours slept at night).
summary(glm(Insured ~ Age + Income + SleepHrsNight + Race, data = nha, family = "binomial"))

Call:
glm(formula = Insured ~ Age + Income + SleepHrsNight + Race,
family = "binomial", data = nha)
Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)   -3.501e-01  2.919e-01  -1.199    0.230
Age            3.371e-02  2.949e-03  11.431  < 2e-16
Income         1.534e-05  1.537e-06   9.982  < 2e-16
SleepHrsNight  1.763e-02  3.517e-02   0.501    0.616
RaceAsian     -4.550e-01  2.031e-01  -2.241    0.025
RaceBlack     -2.387e-01  1.536e-01  -1.554    0.120
RaceHispanic  -1.010e+00  1.635e-01  -6.175 6.61e-10
RaceMexican   -1.404e+00  1.483e-01  -9.468  < 2e-16
RaceOther     -9.888e-01  2.422e-01  -4.082 4.46e-05
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 3284.3 on 3395 degrees of freedom
Residual deviance: 2815.0 on 3387 degrees of freedom
(311 observations deleted due to missingness)
AIC: 2833
Number of Fisher Scoring iterations: 5
A few things become apparent:
- Age and income are both highly associated with whether someone is insured. Both of these variables are highly significant (\(P < 2.2 \times 10^{-16}\)), and the coefficient (the Estimate column) is positive, meaning that for each unit increase in one of these variables, the log odds of being insured increases by the corresponding amount.
- Hours slept per night is not meaningful at all.
- After accounting for age and income, several of the race-specific differences are no longer statistically significant, but others remain so.
- The absolute value of the test statistic (the z value column) can roughly be taken as an estimate of the “importance” of that variable to the overall model. So, age and income are the most important influences in this model; self-identifying as Hispanic or Mexican is also very important; hours slept per night isn’t important at all; and the other race categories fall somewhere in between.
There is much more to go into with logistic regression. This chapter only scratches the surface. Missing from this chapter are things like regression diagnostics, model comparison approaches, penalization, interpretation of model coefficients, fitting interaction effects, and much more. Alan Agresti’s Categorical Data Analysis has long been considered the definitive text on this topic. I also recommend Agresti’s Introduction to Categorical Data Analysis (a.k.a. “Agresti lite”) for a gentler introduction.
8.5 Power & sample size
This is a necessarily short introduction to the concept of power and sample size calculations. Statistical power, also sometimes called sensitivity, is defined as the probability that your test correctly rejects the null hypothesis when the alternative hypothesis is true. That is, if there really is an effect (difference in means, association between categorical variables, etc.), how likely are you to be able to detect that effect at a given statistical significance level, given certain assumptions. Generally there are a few moving pieces, and if you know all but one of them, you can calculate what that last one is.
- Power: How likely are you to detect the effect? (We usually like to see 80% or greater.)
- N: What is the sample size you have (or require)?
- Effect size: How big is the difference in means, odds ratio, etc.?
If we know we want 80% power to detect a certain magnitude of difference between groups, we can calculate our required sample size. Or, if we know we can only collect 5 samples, we can calculate how likely we are to detect a particular effect. Or, we can solve for the last piece: if we want 80% power and we have 5 samples, what’s the smallest effect we can hope to detect?
All of these questions require certain assumptions about the data and the testing procedure. Which kind of test is being performed? What’s the true effect size (often unknown, or estimated from preliminary data)? What’s the standard deviation of the samples that will be collected (often unknown, or estimated from preliminary data)? What level of statistical significance is needed (traditionally p<0.05, but multiple testing corrections must be considered)?
8.5.1 T-test power/N
The power.t.test() function calculates the power or sample size of a t-test for differences in means. If we have 20 samples in each of two groups (e.g., control versus treatment), and the standard deviation for whatever we’re measuring is 2.3, and we’re expecting a true difference in means between the groups of 2, what’s the power to detect this effect?
power.t.test(n=20, delta=2, sd=2.3)
        Two-sample t test power calculation
n = 20
delta = 2
sd = 2.3
sig.level = 0.05
power = 0.7641668
alternative = two.sided
NOTE: n is number in *each* group
What’s the sample size we’d need to detect a difference of 0.8 given a standard deviation of 1.5, assuming we want 80% power?
power.t.test(power=.80, delta=.8, sd=1.5)
        Two-sample t test power calculation
n = 56.16413
delta = 0.8
sd = 1.5
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number in *each* group
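The remaining case from above, solving for the smallest detectable effect given n and power, works the same way: omit delta and power.t.test() solves for it (the sd here reuses the illustrative 2.3 from earlier).

```r
# With 5 samples per group, 80% power, and sd = 2.3,
# what's the smallest difference in means we can hope to detect?
power.t.test(n = 5, power = 0.80, sd = 2.3)
```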
8.5.2 Proportions power/N
What about a two-sample proportion test (e.g., a chi-square test)? If we have two groups (control and treatment), and we’re measuring some outcome (e.g., infected yes/no), and we know that the proportion of infected controls is 80% but only 20% in treated, what’s the power to detect this effect with 5 samples per group?
power.prop.test(n=5, p1=0.8, p2=0.2)
        Two-sample comparison of proportions power calculation
n = 5
p1 = 0.8
p2 = 0.2
sig.level = 0.05
power = 0.4688159
alternative = two.sided
NOTE: n is number in *each* group
How many samples would we need for 90% power?
power.prop.test(power=0.9, p1=0.8, p2=0.2)
        Two-sample comparison of proportions power calculation
n = 12.37701
p1 = 0.8
p2 = 0.2
sig.level = 0.05
power = 0.9
alternative = two.sided
NOTE: n is number in *each* group
Also check out the pwr package which has power calculation functions for other statistical tests.
Function | Power calculations for
---|---
pwr.2p.test() | Two proportions (equal n)
pwr.2p2n.test() | Two proportions (unequal n)
pwr.anova.test() | Balanced one-way ANOVA
pwr.chisq.test() | Chi-square test
pwr.f2.test() | General linear model
pwr.p.test() | Proportion (one sample)
pwr.r.test() | Correlation
pwr.t.test() | T-tests (one sample, 2 sample, paired)
pwr.t2n.test() | T-test (two samples with unequal n)
8.6 Tidying models
We spent a lot of time in previous chapters on tidy data, where each column is a variable and each row is an observation. Tidy data makes it easy to filter observations based on values in a column (e.g., we could get just adult males with filter(nha, Gender=="male" & Age>=18)), and easy to select particular variables/features of interest by their column name.
Even when we start with tidy data, we don’t end up with tidy models. The output from tests like t.test or lm are not data.frames, and it’s difficult to get the information we want out of the model object. The broom package bridges this gap.
Depending on the type of model object you’re using, broom provides three methods that do different kinds of tidying:
- tidy: constructs a data frame that summarizes the model’s statistical findings, like coefficients and p-values.
- augment: adds columns to the original data that was modeled, like predictions and residuals.
- glance: constructs a concise one-row summary of the model with information like \(R^2\) that is computed once for the entire model.
Let’s go back to our linear model example.
# Try modeling Testosterone against Physical Activity, Age, and Gender.
fit <- lm(Testosterone~PhysActive+Age+Gender, data=nha)
# See what that model looks like:
summary(fit)
Call:
lm(formula = Testosterone ~ PhysActive + Age + Gender, data = nha)
Residuals:
Min 1Q Median 3Q Max
-397.91 -31.01 -4.42 20.50 1400.90
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)    46.6931     7.5729   6.166 7.83e-10
PhysActiveYes   9.2749     4.4617   2.079   0.0377
Age            -0.5904     0.1282  -4.605 4.28e-06
Gendermale    385.1989     4.3512  88.526  < 2e-16
Residual standard error: 127.5 on 3434 degrees of freedom
(269 observations deleted due to missingness)
Multiple R-squared: 0.6969, Adjusted R-squared: 0.6966
F-statistic: 2632 on 3 and 3434 DF, p-value: < 2.2e-16
What if we wanted to pull out the coefficient for Age, or the p-value for PhysActive? It gets pretty gross. We first have to run coef(summary(fit)) to get a matrix of coefficients; the terms are still stored in row names, and the column names are inconsistent with other packages (e.g., Pr(>|t|) compared to p.value). Yuck!

coef(summary(fit))["Age", "Estimate"]
[1] -0.5903616
coef(summary(fit))["PhysActiveYes", "Pr(>|t|)"]
[1] 0.03771185
Instead, you can use the tidy() function from the broom package on the fit:

tidy(fit)
# A tibble: 4 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept)    46.7      7.57      6.17 7.83e-10
2 PhysActiveYes   9.27     4.46      2.08 3.77e- 2
3 Age            -0.590    0.128    -4.60 4.28e- 6
4 Gendermale    385.       4.35     88.5  0
This gives you a data.frame with all your model results. The row names have been moved into a column called term, and the column names are simple and consistent (and can be accessed using $). These can be manipulated with dplyr just like any other data frame.
tidy(fit) |>
  filter(term != "(Intercept)") |>
  select(term, p.value) |>
  arrange(p.value)
# A tibble: 3 × 2
term p.value
<chr> <dbl>
1 Gendermale 0
2 Age 0.00000428
3 PhysActiveYes 0.0377
Instead of viewing the coefficients, you might be interested in the fitted values and residuals for each of the original points in the regression. For this, use augment, which augments the original data with information from the model. New columns begin with a . (to avoid overwriting any of the original columns).
# Augment the original data
# IF you get a warning about deprecated... purrr..., ignore. It's a bug that'll be fixed soon.
augment(fit) |> head()
# A tibble: 6 × 11
.rownames Testosterone PhysActive Age Gender .fitted .resid .hat .sigma
<chr> <dbl> <fct> <dbl> <fct> <dbl> <dbl> <dbl> <dbl>
1 1 47.5 No 43 female 21.3 26.2 0.000989 128.
2 2 643. No 80 male 385. 258. 0.00185 127.
3 3 643. No 80 male 385. 258. 0.00185 127.
4 4                 21.1 Yes           34 female    35.9  -14.8 0.000928   128.
5 5 563. No 80 male 385. 178. 0.00185 128.
6 6                402.  No            35 male     411.   -9.45 0.00117    128.
# ℹ 2 more variables: .cooksd <dbl>, .std.resid <dbl>
# Plot residuals vs fitted values for males,
# colored by Physical Activity, size scaled by age
augment(fit) |>
  filter(Gender=="male") |>
  ggplot(aes(.fitted, .resid, col=PhysActive, size=Age)) + geom_point()
Finally, several summary statistics are computed for the entire regression, such as \(R^2\) and the F-statistic. These can be accessed with glance:
glance(fit)
# A tibble: 1 × 12
r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1     0.697         0.697  128.     2632.       0     3 -21545. 43100. 43130.
# ℹ 3 more variables: deviance <dbl>, df.residual <int>, nobs <int>
The broom functions work with the pipe, so you can |> your model directly to any of the functions like tidy(). Let’s tidy up our t-test:
t.test(AlcoholYear~RelationshipStatus, data=nha)
        Welch Two Sample t-test

data:  AlcoholYear by RelationshipStatus
t = 5.4315, df = 2674.8, p-value = 6.09e-08
alternative hypothesis: true difference in means between group Committed and group Single is not equal to 0
95 percent confidence interval:
13.05949 27.81603
sample estimates:
mean in group Committed mean in group Single
83.93416 63.49640
t.test(AlcoholYear~RelationshipStatus, data=nha) |> tidy()
# A tibble: 1 × 10
estimate estimate1 estimate2 statistic p.value parameter conf.low conf.high
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1     20.4      83.9      63.5      5.43 6.09e- 8     2675.     13.1      27.8
# ℹ 2 more variables: method <chr>, alternative <chr>
…and our Mann-Whitney U test / Wilcoxon rank-sum test:
wilcox.test(AlcoholYear~RelationshipStatus, data=nha)
Wilcoxon rank sum test with continuity correction
data: AlcoholYear by RelationshipStatus
W = 1067954, p-value = 0.0001659
alternative hypothesis: true location shift is not equal to 0
wilcox.test(AlcoholYear~RelationshipStatus, data=nha) |> tidy()
# A tibble: 1 × 4
statistic p.value method alternative
<dbl> <dbl> <chr> <chr>
1 1067954. 0.000166 Wilcoxon rank sum test with continuity correct… two.sided
…and our Fisher’s exact test on the cross-tabulated data:
xtabs(~Gender+Diabetes, data=nha) |> fisher.test()
Fisher's Exact Test for Count Data
data: xtabs(~Gender + Diabetes, data = nha)
p-value = 0.05992
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.9883143 1.5466373
sample estimates:
odds ratio
1.235728
xtabs(~Gender+Diabetes, data=nha) |> fisher.test() |> tidy()
# A tibble: 1 × 6
estimate p.value conf.low conf.high method alternative
<dbl> <dbl> <dbl> <dbl> <chr> <chr>
1 1.24 0.0599 0.988 1.55 Fisher's Exact Test for Count… two.sided
…and finally, a logistic regression model:
# fit the model and summarize it the usual way
glmfit <- glm(Insured~Race, data=nha, family=binomial)
summary(glmfit)
Call:
glm(formula = Insured ~ Race, family = binomial, data = nha)
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   1.94218    0.06103  31.825  < 2e-16
RaceAsian    -0.64092    0.17715  -3.618 0.000297
RaceBlack    -0.59744    0.13558  -4.406 1.05e-05
RaceHispanic -1.41354    0.14691  -9.622  < 2e-16
RaceMexican  -1.98385    0.13274 -14.946  < 2e-16
RaceOther    -1.26430    0.22229  -5.688 1.29e-08
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 3614.6 on 3704 degrees of freedom
Residual deviance: 3336.6 on 3699 degrees of freedom
(2 observations deleted due to missingness)
AIC: 3348.6
Number of Fisher Scoring iterations: 4
# tidy it up!
tidy(glmfit)
# A tibble: 6 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept)    1.94     0.0610     31.8  2.96e-222
2 RaceAsian     -0.641    0.177      -3.62 2.97e-  4
3 RaceBlack     -0.597    0.136      -4.41 1.05e-  5
4 RaceHispanic  -1.41     0.147      -9.62 6.47e- 22
5 RaceMexican   -1.98     0.133     -14.9  1.66e- 50
6 RaceOther     -1.26     0.222      -5.69 1.29e-  8
# do whatever you want now
tidy(glmfit) |>
  filter(term!="(Intercept)") |>
  mutate(logp=-1*log10(p.value)) |>
  ggplot(aes(term, logp)) + geom_bar(stat="identity") + coord_flip()
Check out some of the other broom vignettes on CRAN, and also check out the biobroom package on bioconductor for turning bioconductor objects and analytical results into tidy data frames.
8.7 Additional topics & recommended reading
8.7.1 1. Batch effects
Batch effects are sources of technical variation introduced during an experiment, such as processing with different reagents, handling by a different technician, sequencing on a different flow cell, or processing samples in groups on different days. If these batch effects are strongly confounded with the study variable of interest, they can call into question the validity of your results, and in some cases, render collected data completely useless. The papers below discuss batch effects and how they can be mitigated.
 Chapter 5 of Scherer, Andreas. Batch effects and noise in microarray experiments: sources and solutions. Vol. 868. John Wiley & Sons, 2009.
 Leek, Jeffrey T., et al. “Tackling the widespread and critical impact of batch effects in highthroughput data.” Nature Reviews Genetics 11.10 (2010): 733739. Available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3880143/.
8.7.2 2. What’s my n?
“What’s my n” isn’t always a straightforward question to answer, especially when it comes to cell culture experiments. The post and article below go into some of these details.
 Statistics for Experimental Biologists: “What is ‘n’ in cell culture experiments?” Available at http://labstats.net/articles/cell_culture_n.html.
 Vaux, David L., Fiona Fidler, and Geoff Cumming. “Replicates and repeats—what is the difference and is it significant?.” EMBO reports 13.4 (2012): 291296. Available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3321166/.
8.7.3 3. Technical versus biological replicates
Technical replicates involve taking multiple measurements on the same sample. Biological replicates are different samples each with separate measurements/assays. While technical replicates can help calibrate the precision of an instrument or assay, biological replicates are necessary for statistical analysis to make inferences about a condition or treatment. Read the paper and note below for more information on technical vs biological replication.
 Blainey, Paul, Martin Krzywinski, and Naomi Altman. “Points of significance: replication.” Nature methods 11.9 (2014): 879880. Available at http://rdcu.be/yguA.
 Illumina Technical Note: “The Power of Replicates.” Available at https://www.illumina.com/Documents/products/technotes/technote_power_replicates.pdf.