Stats in the wild: Census 2010
Who’s excited for the 2010 Census? What? You’re not? Well, let me ask you this then: how else are we supposed to know that there were 6,424 people living in Pondera County, Montana in 2000? How? I don’t know about you, but I can’t wait to go door to door in 2010.
Anyway, here is a good article about prospective problems with the 2010 United States Census, and here is a little bio on the guy who ran the 2000 U.S. Census.
Census talk anecdote:
I went/was forced to go to a talk last year given by someone who had worked at the Census Bureau for many years. As far as I’m concerned, the census is about as boring as it gets. (By the way, the Census Bureau is located in a place called Suitland. I can’t imagine Suitland being a very fun place. Unless you like counting to very large numbers.) So the talk was about as boring as it gets. I think the statistical issues the census deals with are kind of dry (sampling!), but the political issues are fascinating. Basically the only item I remember from this woman’s talk was the controversy over how to count prisoners. She gave an example of a town in upstate New York that had a massive prison located within the town limits, even though most of the prisoners were from near New York City. So where do these people get counted? Do they count for the purposes of representation? Should the town with the prison get more representation because it has so many residents? It’s an interesting discussion.
Cheers.
Being wrong in the wild.
Well, it looks like I was wrong. In one of my posts from the beginning of the NFL playoffs, I spent quite some time complaining about how the Chargers and the Cardinals made the play-offs at 8-8 and 9-7, respectively. The Cardinals proved me wrong by making it all the way to the Super Bowl, making them the first 4 seed to reach the Super Bowl since the league realigned to four divisions per conference in 2002.
Why had a 4 seed never made it to the Super Bowl (until this year) in the era of four divisions? Because the 4 seed is usually the worst team in the play-offs. Since the switch to four divisions, at least one wild card team has had a better record than one of the 4 seeds in five of seven seasons. Arizona is the only 4 seed ever to reach the Super Bowl in the four-division era, and no 4 seed has ever won it.
(Note: This is the first Super Bowl in the 4 division era not to feature a number 1 seed.)
And what happens when we look back to when the NFL had only three divisions? We see that the 3 seed never made it to a Super Bowl, but the 4 seed (the highest wild card team) made it to, and won, the Super Bowl twice in the six seasons from 1996 to 2001. (I’m going to eventually get around to going back as far as I can, but for now I only have data going back to 1996.)
So here is my suggestion: seed teams based on record. If you win your division at 8-8 (San Diego) or 9-7 (Arizona), you should be a lower seed than a wild card team that went 11-5 (Baltimore) or 12-4 (Indianapolis). If you want the fairest possible play-off, seed based on wins, not on the current division-winners-first system. Tennessee got royally screwed this year because of it. Their reward for being the number one seed? A first-round bye and then a showdown with 11-5, sixth-seeded Baltimore, which they lost, while the number 2 seed had a relatively easy ride to the conference championship by beating the lowly 8-8, 4th-seeded San Diego Chargers. It just doesn’t make sense to seed San Diego ahead of Baltimore this season. Baltimore was the better team and had more wins. Make things fair and seed based on record. (A toy sketch of the difference is below.)
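Here is a toy sketch of the difference. The teams and records are the 2008 AFC play-off field, and real NFL tiebreakers are ignored:

```python
# Toy comparison: current NFL seeding (division winners first) versus
# seeding purely by record. 2008 AFC play-off field; tiebreakers ignored.
afc_2008 = [
    # (team, wins, division winner?)
    ("Tennessee",    13, True),
    ("Pittsburgh",   12, True),
    ("Miami",        11, True),
    ("San Diego",     8, True),
    ("Indianapolis", 12, False),  # wild card
    ("Baltimore",    11, False),  # wild card
]

# Current rule: every division winner is seeded ahead of every wild card.
current = sorted(afc_2008, key=lambda t: (not t[2], -t[1]))

# Proposed rule: seed everyone by wins alone.
by_record = sorted(afc_2008, key=lambda t: -t[1])

for label, seeding in [("Current", current), ("By record", by_record)]:
    print(label + ":", ", ".join(
        f"{i}. {team} ({wins}-{16 - wins})"
        for i, (team, wins, _) in enumerate(seeding, 1)))
```

Under the proposed rule, 8-8 San Diego drops to the sixth seed and 12-4 Indianapolis jumps to third.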
Final note: I still think Arizona is terrible. But I have been wrong three weeks in a row. However, let me remind you that a 4 seed has never won a Super Bowl in the 4 division era. And I still think they are terrible. Thusly, the statsinthewild blog’s official pick for Super Bowl XLIII is the Pittsburgh Steelers 24-13.
Cheers.
Losing money in the wild
This link has a good graphical representation of just how much value has been lost by some major banks.
Cheers.
Factor Analysis in the stock market (in the wild)
Well, I’m done with my qualifying exam. I’ll know if I passed by late this week/early next week.
Anyway, here is a short project that I did on factor analysis in November.
Cheers.
Introduction
A major market index in the United States is the Dow Jones Industrial Average. The stock prices of thirty large companies contribute to the calculation of the Dow Jones Industrial Average. These companies are Boeing, Caterpillar, Chevron, Citigroup, Coca-Cola, DuPont, Exxon Mobil, General Electric, General Motors, Hewlett-Packard, Home Depot, IBM, Intel, Johnson and Johnson, JP Morgan Chase, Kraft Foods, McDonald’s, Merck, Microsoft, Pfizer, Procter and Gamble, United Technologies, Verizon, Wal-Mart, Walt Disney, Bank of America, AT&T, American Express, Alcoa, and 3M.
The changes in the prices of these stocks should be highly correlated, as they are all part of the larger market. Factor analysis will be used to reduce the dimensionality of the 30 stocks in the Dow Jones average, because I am interested in seeing which stocks’ prices move together.
Data
Data was collected from finance.yahoo.com and consists of the high, low, opening, and closing price of each of the thirty stocks, as well as each stock’s daily volume. Stocks vary in how much historical data they have, as some companies have been public longer than others, so only the last 1000 trading days are considered in the analysis. This covers all data back to November 19, 2004. Rather than the actual price of each stock (since some prices are much higher or lower than others), the change in price from one closing bell to the next is considered for all thirty stocks.
Analysis
Using SAS 9.2, a factor analysis was run on the differences in closing prices for the 30 Dow Jones stocks over the last 1000 trading days. A sufficient number of factors was chosen using a scree plot and by examining the eigenvalues of the correlation matrix. After finding the principal components, the varimax method was used to produce a final rotated factor solution.
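For anyone who wants to play along at home, here is a minimal sketch of how the same analysis might look in Python rather than SAS. It assumes the yfinance package for the download and scikit-learn (0.24 or later, for varimax); some 2008-era tickers (GM, C, KFT) no longer trade as the same entities, so treat this as illustrative rather than a reproduction:

```python
# A rough sketch of the analysis in Python (the original used SAS 9.2).
import pandas as pd
import yfinance as yf
from sklearn.decomposition import FactorAnalysis

tickers = ["AA", "AXP", "BA", "BAC", "C", "CAT", "CVX", "DD", "DIS", "GE",
           "GM", "HD", "HPQ", "IBM", "INTC", "JNJ", "JPM", "KFT", "KO",
           "MCD", "MMM", "MRK", "MSFT", "PFE", "PG", "T", "UTX", "VZ",
           "WMT", "XOM"]

# Daily closing prices over roughly the 1000 trading days used in the post.
close = yf.download(tickers, start="2004-11-19", end="2008-11-14")["Close"]

# Day-to-day changes in closing price, standardized so the factors are
# driven by the correlation structure rather than by raw price scales.
diffs = close.diff().dropna()
diffs = (diffs - diffs.mean()) / diffs.std()

# Five factors with a varimax rotation, as in the post.
fa = FactorAnalysis(n_components=5, rotation="varimax").fit(diffs)

# Loadings (rows are stocks, columns are factors), comparable to the table
# below, plus the share of total variance the five factors capture
# (the post reports about 60 percent).
loadings = pd.DataFrame(fa.components_.T, index=diffs.columns,
                        columns=[f"Factor {k + 1}" for k in range(5)])
print(loadings.round(3))
print("Variance explained:", round((fa.components_ ** 2).sum() / len(tickers), 2))
```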
| Stock | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 |
|-------|----------|----------|----------|----------|----------|
| AA | 0.23127 | 0.09893 | 0.23084 | 0.73216 | 0.04921 |
| AXP | 0.67590 | 0.27178 | 0.30486 | 0.21506 | 0.10536 |
| BA | 0.22446 | 0.24467 | 0.32912 | 0.32123 | 0.33817 |
| BAC | 0.82352 | 0.25896 | 0.14310 | 0.14884 | 0.13932 |
| C | 0.79975 | 0.24980 | 0.14993 | 0.15357 | 0.09632 |
| CAT | 0.19406 | -0.06364 | 0.12838 | 0.45805 | 0.38608 |
| CVX | 0.13066 | 0.40635 | 0.20625 | 0.76029 | 0.08143 |
| DD | 0.39500 | 0.30897 | 0.27833 | 0.43173 | 0.30426 |
| DIS | 0.37520 | 0.43313 | 0.42601 | 0.27117 | 0.13823 |
| GE | 0.61010 | 0.27886 | 0.31333 | 0.21191 | 0.14694 |
| GM | 0.50475 | 0.06398 | 0.15834 | 0.19736 | 0.00832 |
| HD | 0.53510 | 0.24265 | 0.37274 | 0.02793 | 0.25287 |
| HPQ | 0.24719 | 0.19816 | 0.70506 | 0.24809 | 0.02544 |
| IBM | 0.33248 | 0.19050 | 0.68158 | 0.21392 | 0.10399 |
| INTC | 0.32603 | 0.20892 | 0.61192 | 0.21961 | 0.11821 |
| JNJ | 0.15935 | 0.71031 | 0.23464 | 0.10390 | 0.17792 |
| JPM | 0.80200 | 0.25962 | 0.19675 | 0.08128 | 0.15698 |
| KFT | 0.28979 | 0.45453 | 0.18234 | 0.20696 | 0.14068 |
| KO | 0.10981 | 0.60928 | 0.39598 | 0.08037 | 0.18114 |
| MCD | 0.26140 | 0.40935 | 0.36831 | 0.15274 | 0.36240 |
| MMM | 0.35237 | 0.31052 | 0.31527 | 0.35182 | 0.22999 |
| MRK | 0.18019 | 0.67967 | 0.06238 | 0.17023 | -0.06161 |
| MSFT | 0.14925 | 0.35621 | 0.65338 | 0.20696 | 0.12790 |
| PFE | 0.37601 | 0.57298 | 0.08865 | 0.15824 | -0.00815 |
| PG | 0.20504 | 0.69431 | 0.20156 | 0.17265 | 0.25142 |
| T | 0.36670 | 0.53525 | 0.32948 | 0.27273 | 0.00919 |
| UTX | 0.13055 | 0.15316 | 0.07721 | 0.11207 | 0.79017 |
| VZ | 0.37186 | 0.52181 | 0.37188 | 0.19101 | 0.01283 |
| WMT | 0.37782 | 0.46428 | 0.35919 | 0.06918 | 0.23774 |
| XOM | 0.13787 | 0.44943 | 0.21108 | 0.74470 | 0.09553 |
Results
Keeping five factors, we can see which stocks load heavily onto which factors by looking at the table. The stocks that load heavily onto the first factor are American Express (AXP), Bank of America (BAC), Citigroup (C), General Electric (GE), General Motors (GM), Home Depot (HD), and JP Morgan (JPM). With the exception of Home Depot and General Motors, all of these companies are financial institutions, and GM and HD are heavily affected by the availability of credit from those institutions: GM sells big-ticket items (cars), and HD is tied to people buying houses and thus to the mortgage market. It appears that this first factor explains variation related to the financial sector.
The companies heavily loaded onto the second factor are Chevron (CVX), Disney (DIS), Johnson and Johnson (JNJ), Kraft Foods (KFT), Coca-Cola (KO), McDonald’s (MCD), Merck (MRK), Pfizer (PFE), Procter and Gamble (PG), AT&T (T), Verizon (VZ), Wal-Mart (WMT), and Exxon Mobil (XOM). All of these companies sell items directly to consumers, and the cost of each individual transaction is relatively small. So it appears this second factor explains variation due to the individual consumer.
The third factor includes Disney, Hewlett-Packard, IBM, Intel, and Microsoft. With the glaring exception of Disney, these are all companies tied to computers, so the third factor appears to explain variation due to the computer industry. Factor four includes companies such as Alcoa, Caterpillar, Chevron, DuPont, and Exxon Mobil, and appears to explain variation in the manufacturing market. Both Chevron and Exxon Mobil load heavily on factors 2 and 4, which makes sense since both companies can essentially break their earnings into two components: individual consumer sales and sales to other businesses.
Factor five includes United Technologies by itself, which is interesting because UTX holds such a large variety of companies, including Carrier, Hamilton Sundstrand, Otis (elevators), Pratt and Whitney, and Sikorsky (helicopters).
Conclusions
The movements in the prices of the 30 stocks that comprise the Dow Jones Industrial Average are highly correlated, which makes them a prime candidate for factor analysis and dimensionality reduction. Using five factors, we can group the variability in the stock market into categories. Roughly speaking, the three categories that explain the most variation are financials, consumer goods, and technology. The fourth and fifth factors seem to represent approximately the same dimension, namely manufacturing and industry.
Using this factor analysis, we can now view fluctuations in the stock market in terms of groups rather than individual stocks. We have reduced the dimensionality of the Dow Jones stocks from 30 down to 5 while still explaining 60 percent of the variability, greatly simplifying analysis of this data.
Future work in this direction could include using more than the past 1000 days of data and possibly including more than 30 stocks in the factor analysis.
Boycotts in the Wild
Today on Slate, there is an article about Hal Stern’s call for all quantitative analysts to boycott the BCS. I would like to let everyone know that, in addition to Hal Stern and Bill James, the Stats in the Wild blog is officially joining the boycott. That ought to bring the BCS to its knees.
A few notes about the article:
1. Statistical analysis is fantastic for ranking teams, but I don’t think it should have any place in deciding who wins a national championship. It is always preferable to have a deterministic set of criteria for deciding who goes to the postseason, like all professional sports have. That way no one can complain when someone gets into the play-offs. (Except in the NFL, when the criteria let in 9-7 Arizona and 8-8 San Diego and leave out 11-5 New England…) If you want to get in, meet the criteria and stop whining (like me).
In NCAA basketball there are both deterministic criteria (win your league) and at-large bids for getting into the tournament. Every year teams are left out, but by including 65 teams, I doubt any team with a legitimate chance to win a national title has ever been excluded. And the tournament format allows teams to make great, memorable post-season runs. (Remember 1997, when Arizona was a four seed and beat THREE number one seeds on their way to a national championship? If NCAA basketball were run like NCAA football, Arizona would have won some horseshit bowl game and no one would remember.)
If football went to an 8 (or 16) team play-off, sure, some good teams still wouldn’t get in, but 8 is probably enough teams that you aren’t leaving out anyone who has a real shot to win it.
(Note: The next two notes have nothing to do with statistics and ramble on for much longer than they should. Enjoy.)
2. I hate the NCAA. I hate that young athletes are risking serious injuries and making no money (except scholarships) while the schools, the administrators, the head coaches, and the television stations are collectively making billions of dollars. The kids see none of this money. And I hate it even more when these scumbag college coaches try to get a kid to stay for a 3rd or 4th year of eligibility instead of entering the draft. Easy for a coach to say when he is making 3 million dollars a year. If someone had offered me a few million a year to do anything when I was a sophomore or junior in college, I would have left in an instant.
3. This is the last paragraph of the article:
“When it doesn’t, you can put the blame on the greedy small schools that wanted to milk money from the big football factories, on the greedy big schools that wanted to keep as much money as possible in the fewest possible hands, on the lunk-head football coaches who can’t program a computer to play tic-tac-toe but want to make all the rules, or on the Congress that sits idly by and watches it happen. You guys want to make a mess of this, you can make a mess of it without our help.”
Listen. I am all for a play-off in college football, but Congress doesn’t need to get involved. If I were to make a list of things that are (or should be) more important to Congress than the BCS and college football, I would never stop writing. You realize we are in TWO wars AND the worst economic crisis since the Great Depression AND facing a 1.2 trillion dollar deficit. And you want to fix the BCS? Have some perspective.
Examples of members of Congress being ridiculous about football: Arlen Specter and T.O., Arlen Specter trying to punish the Patriots, Orrin Hatch whining about Utah’s team, and Cliff Stearns asking Congress to postpone votes so he could attend the BCS championship game.
A short open letter:
Dear Orrin Hatch, Arlen Specter, and Cliff Stearns,
Grow up.
Sincerely,
The Statsinthewild Blog.
Unhappiness in the wild
Happy New Year! Here is an article from the Freakonomics blog tracking Americans’ (un)happiness.
Cheers.
Best Statistical Graph ever drawn in the Wild
Finals are over and I hope to post more regularly again. Here is a quick picture.

This picture, by French engineer Charles Joseph Minard, graphically depicts Napoleon’s fateful march to Russia. The width of the line represents how many troops Napoleon had at each point on his way to Russia, and what makes this graphic so great is just how many different variables are displayed at once.
Edward Tufte says in his book, The Visual Display of Quantitative Information, “Minard’s graphic tells a rich, coherent story with its multivariate data, far more enlightening than just a single number bouncing along over time. Six variables are plotted: the size of the army, its location on a two-dimensional surface, direction of the army’s movement, and temperature on various dates during the retreat from Moscow”.
In the last line of the description below the graph, Edward Tufte says, “It may well be the best statistical graphic ever drawn,” which, in my opinion, may be the best claim ever made about a statistical graphic.
Cheers.
For Stanley:
Complete caption of the graphic:
“This classic of Charles Joseph Minard (1781-1870), the French engineer, shows the terrible fate of Napoleon’s army in Russia. Described by E. J. Marey as seeming to defy the pen of the historian by its brutal eloquence, this combination of data map and time-series, drawn in 1861, portrays the devastating losses suffered in Napoleon’s Russian campaign of 1812. Beginning at the left on the Polish-Russian border near the Niemen River, the thick band shows the size of the army (422,000 men) as it invaded Russia in June 1812. The width of the band indicates the size of the army at each place on the map. In September, the army reached Moscow, which was by then sacked and deserted, with 100,000 men. The path of Napoleon’s retreat from Moscow is depicted by the darker, lower band, which is linked to a temperature scale and dates at the bottom of the chart. It was a bitterly cold winter, and many froze on the march out of Russia. As the graphic shows, the crossing of the Berezina River was a disaster, and the army finally struggled back into Poland with only 10,000 men remaining. Also shown are the movements of auxiliary troops, as they sought to protect the right flank of the advancing army. Minard’s graphic tells a rich, coherent story with its multivariate data, far more enlightening than just a single number bouncing along over time. Six variables are plotted: the size of the army, its location on a two-dimensional surface, direction of the army’s movement, and temperature on various dates during the retreat from Moscow. It may well be the best statistical graphic ever drawn.”
Presidential Voter Turnout Trends (in the wild)
Voter turnout for this last election was as high as it has been in the last 60 years. Click here for a good graphical display of voter turnout since 1948 on Andrew Gelman’s blog.
Also check out the United States Election Project, where they have data about past elections going back about 50 years.
Stats in World War II (in the wild)
A friend of mine told me about this problem, so I went and looked it up. This is stats in the wildest of the wild.
So, during World War II, the Allies were trying to estimate the number of a certain kind of German tank. They needed this information to better plan their attacks and invasions. There were two sets of estimates made, one by intelligence and another by a group using statistical methods.
The estimates made using statistical methods in June 1940, June 1941, and August 1942 of the number of a certain type of German tank were, respectively, 169, 244, and 327. The intelligence estimates for those same three dates were, respectively, 1000, 1550, and 1550. (From: Number of German tanks.)
These estimates are drastically different, and depending on which estimate was believed, battle plans may have been significantly affected. So who made the better estimates?
In most situations, when we estimate something we can never actually know the true value. However, as it turns out, after the war was over, German records became available, revealing the actual number of tanks at each of those three points in time (June 1940, June 1941, and August 1942): 122, 271, and 342, respectively. (Recall that the statistical estimates were 169, 244, and 327.) The statistical estimates are astonishingly close, and the intelligence estimates alarmingly inaccurate. So how did they do it?
The statistical group looked at the serial numbers of tanks that had been captured or destroyed by Allied troops, and they assumed that the serial numbers were ordered from 1 to T, where T is the number of tanks the Germans had. So if the Allies found a tank with serial number 200, the Germans had at least (and almost surely more than) 200 tanks.
So if we assume that each serial number has an equal probability of being observed, our maximum likelihood estimate (our best guess) of T is simply the maximum serial number that we encounter on a captured or destroyed tank. However, the maximum encountered serial number turns out to be a biased estimator of T. (If we always used the largest observed serial number as our estimate, we would systematically underestimate T, because the largest observation is usually not the actual largest value.) So what we need is an unbiased estimator of T.
As it turns out, the expected value of this estimator (the maximum of n observed serial numbers, where n is the number of tanks we observe) is n/(n+1)*T, hence the bias. So on average the largest observed value will be smaller than the actual T. To correct for this, we simply multiply the largest observed value by (n+1)/n. This gives us an (approximately) unbiased estimate of the number of tanks the Germans had, and this is how the statisticians reached their estimates.
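Where does that n/(n+1) come from? A quick sketch (my addition, not from the original analysis), treating the serial numbers as approximately uniform on the interval from 0 to T: the maximum M of n independent observations satisfies

$$P(M \le m) = \left(\frac{m}{T}\right)^n, \qquad E[M] = \int_0^T m \cdot \frac{n\,m^{n-1}}{T^n}\,dm = \frac{n}{n+1}\,T,$$

so multiplying M by (n+1)/n yields an estimator whose expected value is T. (The exact discrete version differs by a small correction.)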
Example:
Say we observe 50 tank serial numbers and the largest observed serial number is 245. With all of the above assumptions, our unbiased estimate as to the number of tanks is 51/50*245=249.9.
If we observe 25 serial numbers and the largest is 110, our best guess is 26/25*110=114.4.
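And if you don’t quite believe the correction, here is a quick simulation sketch (again mine, not part of the original analysis) that draws serial numbers at random and compares the raw maximum to the corrected estimate:

```python
# Simulate the German tank problem: sample n serial numbers without
# replacement from 1..T, then compare the raw maximum with the
# corrected estimate (n+1)/n * max.
import random

T, n, reps = 1000, 50, 100_000
raw_total = corrected_total = 0.0
for _ in range(reps):
    m = max(random.sample(range(1, T + 1), n))
    raw_total += m
    corrected_total += (n + 1) / n * m

print(f"True T:             {T}")
print(f"Average raw max:    {raw_total / reps:.1f}")        # about 981: too low
print(f"Average corrected:  {corrected_total / reps:.1f}")  # about 1001: close
```

(The corrected average settles near 1001 rather than exactly 1000 because serial numbers are discrete; the exactly unbiased version for discrete draws is m + m/n - 1, so the continuous approximation is off by only about one tank.)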
Here is a link to another blog post about the German tank problem.
Modern note: I saw online that someone was using this approach to try to estimate the number of servers that Google has. (More to come on that)
References:
Ruggles, R., and Brodie, H. (1947), “An Empirical Approach to Economic Intelligence in World War II,” Journal of the American Statistical Association, 42, 72-91.
Goodman, L. A. (1954), “Some Practical Techniques in Serial Number Analysis,” Journal of the American Statistical Association, 49, 97-112.
Speeling erroes kil blogers kredibilty (in the wild)
According to this article from www.readwriteweb.com, “Errors By Bloggers Kill Credibility & Traffic, Study Finds”. Interesting. So how did they reach this conclusion?
From the article:
“The company [goosegrade.com] asked a demographically diverse group of respondents on Amazon’s Mechanical Turk website to fill out the survey and published the results today on the goosegrade.com company blog. The bulk of respondents spent some time reading blogs but were people who remained dependent on ‘mainstream sources’ for most of their news.”
(For an explanation of Mechanical Turk, the Wikipedia article is here.)
Comment: How does goosegrade know these people were demographically diverse? The only people they asked were Mechanical Turk workers, which seems like a very specific group, so you should only be able to make inferences about that group. They hardly speak for internet users in general, but goosegrade.com uses them to make inferences about “internet users” when it should only be making inferences about “Mechanical Turk workers who are being paid by gooseGrade.com”. Those two groups are drastically different.
gooseGrade.com says on their site (http://www.goosegrade.com/reader-perception-survey-results):
“Readers want gooseGrade. Here’s proof.
175 People polled.
ABSTRACT: It appears that grammar, spelling, factual, and other errors do affect reader opinion as well as how likely they are to share or link to an article. These errors also seem to dictate the readers opinion of the author’s skills as a writer. 65.86% of internet users say that a tool like gooseGrade would increase their confidence in the content they are reading. Filtering further shows that 9 out of 10 newspaper readers say that a tool like gooseGrade would increase their confidence in author’s content. This merrits further investigation of newspaper readers and could show a path for new media to take more market share.”
As I said before, I’m not sure the opinions of 175 (more on this below) Mechanical Turk workers are sufficient to make inferences about all internet users. Furthermore, remember that all of these respondents were paid by goosegrade.com (although it was probably only a few cents each).
A note on their sample size: They claim a sample size of 175 internet users, but an examination of the raw data shows only 161 unique IP addresses. Nine IP addresses appear twice and one IP address appears 5 times. The repeats should be thrown out of the sample because they are likely the same person responding multiple times. (Deduplicating is trivial; see the sketch below.)
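Here is a sketch of that cleaning step in Python with pandas; the file and column names are hypothetical, of course:

```python
# Illustrative: keep only the first survey response from each IP address.
# The CSV file name and "ip_address" column are hypothetical.
import pandas as pd

responses = pd.read_csv("goosegrade_survey.csv")
deduped = responses.drop_duplicates(subset="ip_address", keep="first")
print(len(responses), "responses;", len(deduped), "after dropping repeat IPs")
```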
The readwriteweb.com article concludes with:
“Below are a few of the charts, you can see the rest on the GooseGrade blog. The lesson here? It seems pretty clear. We bloggers are harming our own credibility and traffic with our inattention to details, not just in the facts, but in the basics of our writing. Let’s do better!”
Here is a promise I am willing to make. I’ll write better and make less grammatical errors if you apply statistics more fairer. (LOL)
Cheers.