Machine Learning for Hackers by Drew Conway & John Myles White

Author: Drew Conway & John Myles White
Language: eng
Tags: COMPUTERS / Machine Theory
ISBN: 9781449303785
Publisher: O'Reilly Media
Published: 2012-02-09


Figure 6-4. Nonlinear data with smooth linear fit

By adding two more inputs, we went from an R² of 60% to an R² of 97%. That’s a huge increase. And, in principle, there’s no reason why we can’t follow this logic out as long as we want and keep adding more powers of X to our data set. But as we add more powers, we’ll eventually start to have more inputs than data points. That’s usually worrisome, because it means that we could, in principle, fit our data perfectly. But a more subtle problem with this strategy will present itself before then: the new columns we add to our data are so similar in value to the original columns that lm will simply stop working. In the output from summary shown next, you’ll see this problem referred to as a “singularity.”
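For reference, here is a minimal sketch of how a data set like this is built up to this point, following the noisy sine-wave setup used earlier in the chapter; the exact seed and noise level are assumptions, so your R² values may differ slightly:

set.seed(1)

# Noisy sine wave: strongly nonlinear in X, as in Figure 6-4.
x <- seq(0, 1, by = 0.01)
y <- sin(2 * pi * x) + rnorm(length(x), 0, 0.1)

df <- data.frame(X = x, Y = y)

# Add the second and third powers of X as two new inputs.
df <- transform(df, X2 = X ^ 2)
df <- transform(df, X3 = X ^ 3)

# The cubic fit captures most of the wave's structure.
summary(lm(Y ~ X + X2 + X3, data = df))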

df <- transform(df, X4 = X ^ 4)
df <- transform(df, X5 = X ^ 5)
df <- transform(df, X6 = X ^ 6)
df <- transform(df, X7 = X ^ 7)
df <- transform(df, X8 = X ^ 8)
df <- transform(df, X9 = X ^ 9)
df <- transform(df, X10 = X ^ 10)
df <- transform(df, X11 = X ^ 11)
df <- transform(df, X12 = X ^ 12)
df <- transform(df, X13 = X ^ 13)
df <- transform(df, X14 = X ^ 14)
df <- transform(df, X15 = X ^ 15)

summary(lm(Y ~ X + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10 +
             X11 + X12 + X13 + X14,
           data = df))

#Call:
#lm(formula = Y ~ X + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 +
#    X10 + X11 + X12 + X13 + X14, data = df)
#
#Residuals:
#      Min        1Q    Median        3Q       Max
#-0.242662 -0.038179  0.002771  0.052484  0.210917
#
#Coefficients: (1 not defined because of singularities)
#              Estimate Std. Error t value Pr(>|t|)
#(Intercept) -6.909e-02  8.413e-02  -0.821    0.414
#X            1.494e+01  1.056e+01   1.415    0.161
#X2          -2.609e+02  4.275e+02  -0.610    0.543
#X3           3.764e+03  7.863e+03   0.479    0.633
#X4          -3.203e+04  8.020e+04  -0.399    0.691
#X5           1.717e+05  5.050e+05   0.340    0.735
#X6          -6.225e+05  2.089e+06  -0.298    0.766
#X7           1.587e+06  5.881e+06   0.270    0.788
#X8          -2.889e+06  1.146e+07  -0.252    0.801
#X9           3.752e+06  1.544e+07   0.243    0.809
#X10         -3.398e+06  1.414e+07  -0.240    0.811
#X11          2.039e+06  8.384e+06   0.243    0.808
#X12         -7.276e+05  2.906e+06  -0.250    0.803
#X13          1.166e+05  4.467e+05   0.261    0.795
#X14                 NA         NA      NA       NA
#
#Residual standard error: 0.09079 on 87 degrees of freedom
#Multiple R-squared: 0.9858, Adjusted R-squared: 0.9837
#F-statistic: 465.2 on 13 and 87 DF,  p-value: < 2.2e-16

The problem here is that the new columns we’re adding with larger and larger powers of X are so correlated with the old columns that the linear regression algorithm breaks down and can’t find coefficients for all of the columns separately. Thankfully, there is a solution to this problem that can be found in the mathematical literature: instead of naively adding simple powers of x, we add more complicated variants of x that behave like successive powers of x but are orthogonal to one another. That gives us all of the expressive power of polynomial regression without the extreme correlation that makes lm break down.
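To make the correlation problem concrete, and to sketch that fix, here is a short example; poly is base R’s tool for generating orthogonal polynomial terms, and degree = 14 is chosen here simply to match the fourteen powers in the failing fit above:

# Adjacent raw powers of X are almost perfectly correlated,
# which is what produces the singularity shown above.
with(df, cor(X13, X14))

# poly() replaces raw powers with orthogonal polynomial terms,
# so lm can estimate a coefficient for every one of them.
summary(lm(Y ~ poly(X, degree = 14), data = df))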


