Outliers in the wild
So, I was at Barnes and Noble today with a few hours to kill. I sat down and started reading Malcolm Gladwell’s new book, Outliers: The Story of Success. The further I read, the clearer it became that he wasn’t really talking about outliers. Also, since I have my qualifying exam on January 19th, it can’t hurt to do a review on detecting outliers. (I started this post before my qualifier, which has since come and gone. I passed, by the way.) I go on to review what outliers are and then conclude by explaining how Gladwell’s book isn’t really about outliers. If the middle part bores you, just skip to the conclusion. (Note: I love Gladwell’s work. I have read all of his books and all of his New Yorker articles.)
Let’s try to answer this question: What is an outlier? One good answer can be found here at wolfram.com and another good explanation here. If you are interested in the Wikipedia answer, that can be found here.
So how do we look for outliers? Say we only have one variable. A common way of defining outliers (suggested at Wolfram and in most intro stat classes) is to look for observations above Q3+1.5*IQR or below Q1-1.5*IQR. Here Q1 is the first quartile of the data (the point below which 25% of the data falls) and Q3 is the third quartile (the point below which 75% of the data falls). IQR is the interquartile range, defined to be Q3-Q1. Below is a box-and-whisker plot of 99 random observations from a standard normal distribution plus one observation with value 10. The box represents the IQR, while points outside of the whiskers are considered outliers.
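If you want to play along at home, here is a minimal Python sketch of that rule (note that different packages use slightly different quartile conventions, so the fences can vary a bit; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
# 99 draws from a standard normal plus one observation with value 10
data = np.append(rng.standard_normal(99), 10.0)

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(f"Fences: ({lower:.2f}, {upper:.2f}); flagged outliers: {outliers}")
```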
Now suppose we wish to relate two variables, X and Y, using simple linear regression with the model Y = B0 + B1*X + epsilon, where epsilon ~ N(0, sigma^2) and sigma^2 is fixed but unknown. It is very important to look for outliers in the X direction, as they may heavily impact the final estimates of B0 and B1. A measure called leverage is used to check for outliers in the X direction. The leverage of the i-th point is defined as the i-th diagonal element of the projection matrix P = X*g-inv(X’X)*X’, where g-inv denotes a generalized inverse. In the data below, the first column is the response variable Y, the second column is a column of ones for the intercept, and the third column is the predictor variable. So in my formula for P, the X matrix is made up of the second and third columns of the data.
| Y | Intercept | X |
| -2 | 1 | 1 |
| -1 | 1 | 1 |
| 0 | 1 | 10 |
| 2 | 1 | 1 |
| 2 | 1 | 1 |
The corresponding leverages are: 0.25, 0.25, 1.00, 0.25, 0.25.
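Here is a short NumPy sketch that reproduces those leverages, using the pseudoinverse to play the role of the generalized inverse:

```python
import numpy as np

# Design matrix: the intercept column and the predictor (columns 2 and 3 above)
X = np.array([[1, 1],
              [1, 1],
              [1, 10],
              [1, 1],
              [1, 1]], dtype=float)

# Projection ("hat") matrix P = X * g-inv(X'X) * X'
P = X @ np.linalg.pinv(X.T @ X) @ X.T
print(np.round(np.diag(P), 2))  # [0.25 0.25 1.   0.25 0.25]
```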
We see that observation 3 has a leverage of 1. This is the maximum leverage a data point can achieve, and it occurs when the regression line is forced to pass through the observation. We consider an observation to be an outlier in X if its leverage is large, so, clearly, this value is an outlier in X. Such a large leverage should concern a good statistician, as the point may also have large influence. (Here, however, even though observation 3 has the maximum leverage, the point has no influence: if we removed it from the analysis, the ordinary least squares regression line would not change at all.) A very good (and more thorough) explanation of leverage and influence (Cook’s Distance) can be found here. (I am partial to DFFITS for measuring influence, myself.)

Anyway, back to outliers. Once I fit my model, I get predicted values of my response variable; call these y_hat. Using these, we can define a residual as the quantity y - y_hat. If a residual is large relative to the estimated value of sigma^2 (MSE is used to estimate sigma^2), then we consider that observation as a whole to be an outlier.

The Conclusion

If you have read this whole post, cheers. If you skipped the middle part and came straight from the intro, welcome to the conclusion. I hope you enjoy your stay. My big point is this: nothing that Gladwell talks about in his book is really an outlier. Consider this example. You go and collect data on 7 children’s heights: 48, 48, 49, 51, 52, 45, 67. When you do a boxplot, you see that the observation 67 is an outlier.
However, when you consider age as a predictor of height, you can see that the child who was 67 inches tall was older than the rest of the children. Surely none of these observations can be considered outliers once age is factored in. A child would be an outlier only if they were significantly taller or shorter than their age would predict, as in the sketch below.
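I haven’t given the children’s ages, so the ages in this little Python sketch are made up purely for illustration; the point is that the residual check replaces the one-variable boxplot check:

```python
import numpy as np

heights = np.array([48, 48, 49, 51, 52, 45, 67], dtype=float)
ages = np.array([8, 8, 8, 9, 9, 7, 14], dtype=float)  # hypothetical ages

# Fit height = b0 + b1*age by ordinary least squares
X = np.column_stack([np.ones_like(ages), ages])
b, *_ = np.linalg.lstsq(X, heights, rcond=None)
resid = heights - X @ b
mse = resid @ resid / (len(heights) - 2)  # MSE estimates sigma^2

# With these ages, no standardized residual comes anywhere near +/- 2,
# so the 67-inch child is not an outlier once age is in the model
print(np.round(resid / np.sqrt(mse), 2))
```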
In his book, Gladwell talks about Bill Gates and the Beatles as outliers in terms of success. Considered by themselves, yes, Bill Gates and the Beatles are outliers on many scales, including success and income. But he then goes on to look at what makes these “outliers,” and he concludes that in order to become an expert you need 10,000 hours of training. Well, if success is a function of training and it takes 10,000 hours to become an expert, then the likes of the Beatles and Bill Gates aren’t outliers at all. They are exactly as successful as they are predicted to be, since clearly both this band and this software engineer have put in well over 10,000 hours. An outlier in this case would be someone who practiced for 10,000 hours but was very unsuccessful or, on the other hand, someone who never practiced at all but is wildly successful. Gladwell admits himself that he couldn’t find any examples of wild success without the training. So his book isn’t really about outliers at all; he is just looking at the top one percent of the top one percent. All this being said, I still think Outliers: The Story of Success is a very entertaining and interesting book. Also, be sure to check out his other books Blink and The Tipping Point (my favorite). And definitely be sure to read the Malcolm Gladwell archive of his old New Yorker articles.
Cheers.
Distorted Statistics in the Wild
Here is an article titled “Girls Gone Bad: Statistics Distort the Truth.” I think this is a misleading headline. Statistics don’t distort the truth; the misinterpretation of statistics distorts the truth.
Here are the two comments on the article from the site:
The first from xpan09: “That’s true. Statistics should wait before announcing a rise in violence until policies concerning them have leveled off. Also, what I’ve seen of girls being violent has not changed from my perspective and has been constant throughout my years.”
and the second from Ival: “Accurate statistics are just that and nothing more. Only liars and spinners distort truth.”
Ival really sums up my point. Thanks Ival.
Cheers.
Stats in the wild: Census 2010
Who’s excited for the 2010 Census? What? You’re not? Well, let me ask you this, then: how else are we supposed to know that there were 6,424 people living in Pondera County, Montana in 2000? How? I don’t know about you, but I can’t wait to go door to door in 2010.
Anyway, here is a good article about prospective problems with the 2010 United States Census, and here is a little bio on the guy who ran the 2000 U.S. Census.
Census talk anecdote:
I went (read: was forced to go) to a talk last year given by someone who had worked at the Census Bureau for many years. As far as I was concerned, the census is about as boring as it gets. (By the way, the Census Bureau is located in a place called Suitland. I can’t imagine Suitland being a very fun place, unless you like counting to very large numbers.) So the talk was about as boring as it gets. I think the statistical issues the census deals with are kind of dry (sampling!), but the political issues are fascinating. Basically the only item I remember from this woman’s talk was the controversy over how to count prisoners. She gave an example of a town in upstate New York that had a massive prison located within the town limits, but most of the prisoners were from near New York City. So where do these people get counted? Do they count for the purposes of representation? Should the town with the prison get more representation because it has so many residents? It’s an interesting discussion.
Cheers.
Being wrong in the wild.
Well, it looks like I was wrong. In one of my posts from the beginning of the NFL playoffs, I spent quite some time complaining about how the Chargers and the Cardinals made the playoffs at 8-8 and 9-7, respectively. Looks like the Cardinals proved me wrong by making it all the way to the Super Bowl, making them the first 4 seed to reach the Super Bowl since the league switched to four divisions per conference in 2002.
Why had a 4 seed never made it to the Super Bowl (until this year) in the era of four divisions? Because the 4 seed is usually the worst team in the playoffs. Since the switch to four divisions, at least one wild card team has had a better record than one of the 4 seeds in 5 of the 7 seasons, and no 4 seed has ever won a Super Bowl in the four-division era.
(Note: This is the first Super Bowl in the 4 division era not to feature a number 1 seed.)
And what happens when we look back to when the NFL had only three divisions? We see that the 3 seed never made it to a Super Bowl, but the 4 seed (the highest wild card) made it to, and won, the Super Bowl twice in the six seasons between 1996 and 2001. (I’m going to eventually get around to going back as far as I can, but for now I only have data going back to 1996.)
So here is my suggestion: seed teams based on record. If you win your division at 8-8 (San Diego) or 9-7 (Arizona), you should be seeded lower than a wild card team that went 11-5 (Baltimore) or 12-4 (Indianapolis). If you want the fairest possible playoff, seed based on wins, not on the current system. Tennessee got royally screwed in the playoffs this year because of this system. Their reward for being the number one seed? A first-round bye and then a showdown with the 11-5 5 seed, which they lost, while the number 2 seed had a relatively easy ride to the conference championship by beating the lowly 8-8 4-seeded San Diego Chargers. It just doesn’t make sense to seed San Diego ahead of Baltimore this season. Baltimore was the better team, and they had more wins. Make things fair and seed based on record.
Final note: I still think Arizona is terrible. But I have been wrong three weeks in a row. However, let me remind you that a 4 seed has never won a Super Bowl in the 4 division era. And I still think they are terrible. Thusly, the statsinthewild blog’s official pick for Super Bowl XLIII is the Pittsburgh Steelers 24-13.
Cheers.
Losing money in the wild
This link has a good graphical representation of just how much value has been lost by some major banks.
Cheers.
Factor Analysis in the stock market (in the wild)
Well, I’m done with my qualifying exam. I’ll know if I passed by late this week/early next week.
Anyway, here is a short project that I did on factor analysis in November.
Cheers.
Introduction
A major market index in the United States is the Dow Jones Industrial Average. The stock prices of thirty large companies contribute to the calculation of the Dow Jones Industrial Average. These companies are Boeing, Caterpillar, Chevron, Citigroup, Coca-Cola, DuPont, Exxon Mobil, General Electric, General Motors, Hewlett-Packard, Home Depot, IBM, Intel, Johnson and Johnson, JP Morgan Chase, Kraft Foods, McDonalds, Merck, Microsoft, Pfizer, Procter and Gamble, United Technologies, Verizon, Wal-Mart, Walt Disney, Bank of America, AT and T, American Express, Alcoa, and 3M.
The changes in the prices of these stocks will be highly correlated, as they are all part of the larger market. Factor analysis will be used to reduce the dimensionality of the 30 stocks in the Dow Jones average, because I am interested in seeing which stocks’ prices move together.
Data
Data was collected from the website finance.yahoo.com. The data consist of the high, low, opening, and closing price of each of the thirty stocks, as well as the volume of each stock, for each day. Stocks vary in the length of their historical record, as some companies have been public longer than others. As such, only the last 1000 trading days are considered in the analysis, which includes all data dating back to November 19, 2004. Rather than the actual price of each stock (since some stock prices are much higher or lower than others), the change in stock price from one closing bell to the next is considered for all thirty stocks.
Analysis
Using SAS 9.2, a factor analysis was implemented for the differences in closing prices of the 30 Dow Jones stocks over the last 1000 trading days. A sufficient number of factors was chosen using a scree plot and by analyzing the eigenvalues of the correlation matrix. Upon finding the principal components, the varimax method was used to find a final rotated factor solution.
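For readers without SAS, here is a rough Python sketch of the same pipeline. Everything in it is an assumption rather than a reproduction of the original analysis: yfinance stands in for the finance.yahoo.com download, only a handful of still-trading tickers are used (several 2008 components no longer trade as they did then), and scikit-learn’s FactorAnalysis extracts maximum-likelihood factors rather than principal components, so its loadings will not match the table below exactly:

```python
import pandas as pd
import yfinance as yf  # assumption: yfinance as a stand-in for the finance.yahoo.com pull
from sklearn.decomposition import FactorAnalysis

# A few of the 2008 Dow components; illustrative only (C, GM, KFT, etc. omitted)
tickers = ["AA", "AXP", "BA", "CAT", "CVX", "HPQ", "IBM", "INTC", "JNJ", "JPM"]
close = yf.download(tickers, start="2004-11-19", end="2008-11-19")["Close"]

# Day-over-day changes in closing price, standardized so the analysis
# runs off the correlation matrix
diffs = close.diff().dropna()
z = (diffs - diffs.mean()) / diffs.std()

fa = FactorAnalysis(n_components=5, rotation="varimax").fit(z)
loadings = pd.DataFrame(fa.components_.T, index=z.columns,
                        columns=[f"Factor {i+1}" for i in range(5)])
print(loadings.round(3))
```

The rotated loadings from the original SAS run are in the table below.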
| Stock | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 |
| AA | 0.23127 | 0.09893 | 0.23084 | 0.73216 | 0.04921 |
| AXP | 0.67590 | 0.27178 | 0.30486 | 0.21506 | 0.10536 |
| BA | 0.22446 | 0.24467 | 0.32912 | 0.32123 | 0.33817 |
| BAC | 0.82352 | 0.25896 | 0.14310 | 0.14884 | 0.13932 |
| C | 0.79975 | 0.24980 | 0.14993 | 0.15357 | 0.09632 |
| CAT | 0.19406 | -0.06364 | 0.12838 | 0.45805 | 0.38608 |
| CVX | 0.13066 | 0.40635 | 0.20625 | 0.76029 | 0.08143 |
| DD | 0.39500 | 0.30897 | 0.27833 | 0.43173 | 0.30426 |
| DIS | 0.37520 | 0.43313 | 0.42601 | 0.27117 | 0.13823 |
| GE | 0.61010 | 0.27886 | 0.31333 | 0.21191 | 0.14694 |
| GM | 0.50475 | 0.06398 | 0.15834 | 0.19736 | 0.00832 |
| HD | 0.53510 | 0.24265 | 0.37274 | 0.02793 | 0.25287 |
| HPQ | 0.24719 | 0.19816 | 0.70506 | 0.24809 | 0.02544 |
| IBM | 0.33248 | 0.19050 | 0.68158 | 0.21392 | 0.10399 |
| INTC | 0.32603 | 0.20892 | 0.61192 | 0.21961 | 0.11821 |
| JNJ | 0.15935 | 0.71031 | 0.23464 | 0.10390 | 0.17792 |
| JPM | 0.80200 | 0.25962 | 0.19675 | 0.08128 | 0.15698 |
| KFT | 0.28979 | 0.45453 | 0.18234 | 0.20696 | 0.14068 |
| KO | 0.10981 | 0.60928 | 0.39598 | 0.08037 | 0.18114 |
| MCD | 0.26140 | 0.40935 | 0.36831 | 0.15274 | 0.36240 |
| MMM | 0.35237 | 0.31052 | 0.31527 | 0.35182 | 0.22999 |
| MRK | 0.18019 | 0.67967 | 0.06238 | 0.17023 | -0.06161 |
| MSFT | 0.14925 | 0.35621 | 0.65338 | 0.20696 | 0.12790 |
| PFE | 0.37601 | 0.57298 | 0.08865 | 0.15824 | -0.00815 |
| PG | 0.20504 | 0.69431 | 0.20156 | 0.17265 | 0.25142 |
| T | 0.36670 | 0.53525 | 0.32948 | 0.27273 | 0.00919 |
| UTX | 0.13055 | 0.15316 | 0.07721 | 0.11207 | 0.79017 |
| VZ | 0.37186 | 0.52181 | 0.37188 | 0.19101 | 0.01283 |
| WMT | 0.37782 | 0.46428 | 0.35919 | 0.06918 | 0.23774 |
| XOM | 0.13787 | 0.44943 | 0.21108 | 0.74470 | 0.09553 |
Results
Keeping five factors, we can see which stocks load heavily onto which factors by looking at the table. The variables that load heavily onto the first factor include American Express (AXP), Bank of America (BAC), Citigroup (C), General Electric (GE), General Motors (GM), Home Depot (HD), and JP Morgan (JPM). With the exception of Home Depot and General Motors, all of these companies are financial institutions, and General Motors and Home Depot are heavily affected by the availability of credit from those institutions: GM sells big-ticket items (cars), and HD is heavily tied to people buying houses and is thus affected by the mortgage market. It appears that this first factor explains variation related to the financial sector.
The companies heavily loaded onto the second factor include Chevron (CVX), Disney (DIS), Johnson and Johnson (JNJ), Kraft Foods (KFT), Coca-Cola (KO), McDonalds (MCD), Merck (MRK), Pfizer (PFE), Procter and Gamble (PG), AT and T (T), Verizon (VZ), Wal-Mart (WMT), and Exxon-Mobil (XOM). All of these companies sell items directly to consumers, and the cost involved in each of these consumer transactions is relatively small. So it appears this second factor explains the variation due to the individual consumer.
The third factor includes Disney, Hewlett-Packard, IBM, Intel, and Microsoft. These companies, with the glaring exception of Disney, are all tied to computers, so it appears that the third factor explains variation due to the computer industry. Factor four includes companies such as Alcoa, Caterpillar, Chevron, DuPont, and Exxon-Mobil; this factor appears to explain variation in the manufacturing market. Both Chevron and Exxon-Mobil are heavily loaded on both factor 2 and factor 4, which makes sense since both companies can essentially break their earnings into two components: individual consumer sales and sales to other businesses.
Factor five includes United Technologies by itself, which is interesting because UTX holds such a large variety of companies, including Carrier, Hamilton Sundstrand, Otis Elevator, Pratt and Whitney, and Sikorsky Helicopter.
Conclusions
The movements in the prices of the 30 stocks that comprise the Dow Jones Industrial Average are highly correlated, making them a prime candidate for factor analysis and dimensionality reduction. Using five factors, we can group the variability in the stock market into categories. Roughly speaking, the three categories that explain the most variation are financial, consumer goods, and technology. The fourth and fifth factors seem to represent approximately the same dimension, namely manufacturing and industry.
Using this factor analysis, we can now view fluctuations in the stock market in terms of groups rather than individual stocks. We have reduced the dimensionality of the stocks in the Dow Jones from 30 down to 5 while still explaining 60 percent of the variability, greatly simplifying analysis of this stock data.
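For what it’s worth, that 60 percent figure is just the share of total standardized variance captured by the first five factors, which you can read off the eigenvalues of the correlation matrix. A tiny sketch of the computation, reusing the hypothetical `diffs` price-change matrix from the earlier Python sketch:

```python
import numpy as np

def variance_explained(diffs, k=5):
    """Share of total standardized variance captured by the top k factors."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(diffs, rowvar=False))  # ascending order
    eigvals = eigvals[::-1]                       # sort descending
    return eigvals[:k].sum() / eigvals.sum()      # first k eigenvalues' share
```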
Future work in this direction could include using more than the past 1000 days of data and possibly including more than 30 stocks in the factor analysis.
Boycotts in the Wild
Today on Slate, there is an article about Hal Stern’s call for all quantitative analysts to boycott the BCS. I would like to let everyone know that, in addition to Hal Stern and Bill James, the Stats in the Wild blog is officially joining the boycott. That ought to bring the BCS to its knees.
A few notes about the article:
1. Statistical analysis is fantastic for ranking teams, but I don’t think it should have any place in deciding who wins a national championship. It is always preferable to have a deterministic set of criteria for deciding who goes to the postseason, like all professional sports have. That way no one can complain when someone gets into the playoffs. (Except in the NFL, when the criteria let in 9-7 Arizona and 8-8 San Diego and leave out 11-5 New England…) If you want to get in, meet the criteria and stop whining (like me).
In NCAA basketball there are both deterministic criteria (win your league) and at-large bids for getting into the tournament. Every year teams are left out of the tournament, but, by including 65 teams, I doubt that any team with a legitimate chance to win a national title has ever been left out. And the tournament as it is allows teams to make great, memorable postseason runs. (Remember in 1997 when Arizona was a four seed and beat THREE number one seeds on their way to a national championship? If NCAA basketball were run like NCAA football, Arizona would have won some horseshit bowl game and no one would remember.)
If football went to an 8 (or 16) team playoff, sure, some good teams still wouldn’t get in, but 8 is probably enough teams that you aren’t leaving out anyone who has a real shot to win it.
(Note: These last two notes have nothing to do with statistics and ramble on for much longer than they should. Enjoy.)
2. I hate the NCAA. I hate that young athletes are risking serious injuries and making no money (except scholarships), while the schools, the administrators, the head coaches, and the television stations are collectively making billions of dollars. The kids see none of this money. And I hate it even more when these scumbag college coaches try to get a kid to stay for a 3rd or 4th year of eligibility instead of entering the draft. Easy for a coach to say when he is making 3 million dollars a year. If someone had offered me a few million a year to do anything when I was a sophomore or junior in college, I would have left in an instant.
3. This is the last paragraph of the article:
“When it doesn’t, you can put the blame on the greedy small schools that wanted to milk money from the big football factories, on the greedy big schools that wanted to keep as much money as possible in the fewest possible hands, on the lunk-head football coaches who can’t program a computer to play tic-tac-toe but want to make all the rules, or on the Congress that sits idly by and watches it happen. You guys want to make a mess of this, you can make a mess of it without our help.”
Listen. I am all for a playoff in college football, but Congress doesn’t need to get involved. If I were to make a list of things that are (or should be) more important to Congress than the BCS and college football, I would never stop writing. You realize we are in TWO wars AND the worst economic crisis since the Great Depression AND facing a 1.2 trillion dollar deficit? And you want to fix the BCS? Have some perspective.
Examples of Congress and the Senate being ridiculous about football: Arlen Specter and T.O., Arlen Specter trying to punish the Patriots, Orrin Hatch whining about Utah’s team, and Cliff Stearns asking Congress to postpone votes so he can attend the BCS championship game.
A short open letter:
Dear Orrin Hatch, Arlen Specter, and Cliff Stearns,
Grow up.
Sincerely,
The Statsinthewild Blog.
Unhappiness in the wild
Happy New Year! Here is an article from the Freakonomics blog tracking Americans’ (un)happiness.
Cheers.
Best Statistical Graph ever drawn in the Wild
Finals are over and I hope to post more regularly again. Here is a quick picture.
[Image: Minard’s map of Napoleon’s 1812 Russian campaign]
This picture, by French engineer Charles Joseph Minard, graphically depicts Napoleon’s fateful march to Russia. The width of the line represents how many troops Napoleon had at each point on his way to Russia, and what makes this graphic so great is just how many different variables are displayed at once.
Edward Tufte says in his book, The Visual Display of Quantitative Information, “Minard’s graphic tells a rich, coherent story with its multivariate data, far more enlightening than just a single number bouncing along over time. Six variables are plotted: the size of the army, its location on a two-dimensional surface, direction of the army’s movement, and temperature on various dates during the retreat from Moscow”.
In the last line of the description below the graph, Tufte says, “It may well be the best statistical graphic ever drawn,” which, in my opinion, may be the best claim ever made about a statistical graphic.
Cheers.
For Stanley:
Complete caption of the graphic:
“This classic of Charles Joseph Minard (1781-1870), the French engineer, shows the terrible fate of Napoleon’s army in Russia. Described by E. J. Marey as seeming to defy the pen of the historian by its brutal eloquence, this combination of data map and time-series, drawn in 1861, portrays the devastating losses suffered in Napoleon’s Russian campaign of 1812. Beginning at the left on the Polish-Russian border near the Niemen River, the thick band shows the size of the army (422,000 men) as it invaded Russia in June 1812. The width of the band indicates the size of the army at each place on the map. In September, the army reached Moscow, which was by then sacked and deserted, with 100,000 men. The path of Napoleon’s retreat from Moscow is depicted by the darker, lower band, which is linked to a temperature scale and dates at the bottom of the chart. It was a bitterly cold winter, and many froze on the march out of Russia. As the graphic shows, the crossing of the Berezina River was a disaster, and the army finally struggled back into Poland with only 10,000 men remaining. Also shown are the movements of auxiliary troops, as they sought to protect the right flank of the advancing army. Minard’s graphic tells a rich, coherent story with its multivariate data, far more enlightening than just a single number bouncing along over time. Six variables are plotted: the size of the army, its location on a two-dimensional surface, direction of the army’s movement, and temperature on various dates during the retreat from Moscow. It may well be the best statistical graphic ever drawn.”
Presidential Voter Turnout Trends (in the wild)
Voter turnout for this last election was as high as it has been in the last 60 years. Click here for a good graphical display of voter turnout since 1948 on Andrew Gelman’s blog.
Also check out the United States Election Project, where they have data about past elections going back about 50 years.