A Solution Manual and Notes for: An Introduction to Statistical Learning: with Applications in R: Machine Learning by John Weatherwax


Author: John Weatherwax
Language: eng
Format: epub
Published: 2014-04-13T00:00:00+00:00


Exercise 4

Note that this is the “same” optimization problem as Problem 3 but with an L2 penalty (rather than an L1 penalty) applied to the βj values.

Part (a): When λ is very large the problem is highly constrained (the coefficients βj are all forced toward zero). The fit therefore has high bias (it can be far from the assumed true linear model) and low variance (the estimated coefficients are nearly independent of the particular training set). When λ is zero the problem reduces to ordinary least squares, which has lower bias but higher variance. Thus as λ increases from 0, the training RSS starts at its least-squares minimum and steadily increases.
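The claim in Part (a) can be checked numerically. The sketch below (my own illustration, not part of the original text) uses one-predictor ridge regression without an intercept, where the closed-form estimate is beta(λ) = Σxᵢyᵢ / (Σxᵢ² + λ); the data values are made up for the example. As λ grows, beta shrinks toward zero and the training RSS can only rise.

```python
# Illustrative sketch: training RSS for one-predictor ridge regression
# (no intercept) as a function of lambda. Data below are invented.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]

def ridge_beta(lam):
    # Closed form for a single predictor with no intercept:
    # beta = sum(x*y) / (sum(x^2) + lambda)
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

def train_rss(lam):
    b = ridge_beta(lam)
    return sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))

lambdas = [0.0, 1.0, 10.0, 100.0]
rss = [train_rss(lam) for lam in lambdas]

# Training RSS is smallest at lambda = 0 (least squares) and
# increases monotonically as lambda grows.
assert all(rss[i] <= rss[i + 1] for i in range(len(rss) - 1))
```

Since the least-squares coefficient minimizes the training RSS exactly, any shrinkage away from it can only increase the in-sample error, which is what the assertion verifies.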

Part (b): The test RSS should decrease initially and then increase, tracing a "U" shape. This behavior is seen in the examples presented in this chapter (for example, Figure 6.5).

Part (c): As λ increases the variance will decrease as we are further constraining our model (see Figure 6.5).

Part (d): As λ increases the (squared) bias will increase (again see Figure 6.5).

Part (e): The irreducible error cannot be captured by our fitting procedure (or predictors) and thus remains constant regardless of the value of λ.
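The variance and bias trends claimed in Parts (c) and (d) can be demonstrated with a small simulation (again my own sketch, not from the text). Repeatedly drawing training sets from a hypothetical true model y = 2x + noise and fitting the one-predictor ridge estimate for several λ values, the empirical variance of the estimate falls with λ while its squared bias rises.

```python
import random

# Illustrative simulation of the bias-variance tradeoff for ridge.
# True model (invented for this sketch): y = 2x + Gaussian noise.
random.seed(0)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
true_beta = 2.0
sxx = sum(xi * xi for xi in x)

def simulate(lam, reps=2000):
    """Estimate Var(beta-hat) and Bias(beta-hat)^2 over many training sets."""
    betas = []
    for _ in range(reps):
        y = [true_beta * xi + random.gauss(0.0, 1.0) for xi in x]
        betas.append(sum(xi * yi for xi, yi in zip(x, y)) / (sxx + lam))
    mean = sum(betas) / reps
    var = sum((b - mean) ** 2 for b in betas) / reps
    bias_sq = (mean - true_beta) ** 2
    return var, bias_sq

results = [simulate(lam) for lam in (0.0, 10.0, 100.0)]
variances = [v for v, _ in results]
biases = [b for _, b in results]

assert variances[0] > variances[1] > variances[2]  # variance falls with lambda
assert biases[0] < biases[1] < biases[2]           # squared bias rises with lambda
```

This matches the analytic behavior: with fixed x, Var(β̂) = σ²Σx²/(Σx² + λ)², which decreases in λ, while the shrinkage bias β·λ/(Σx² + λ) grows in magnitude.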

