Some useful data here. The presentation is a bit misleading in some respects, however, so I'll just repost here the comment I posted there:
You have put together some very interesting data, for which my thanks and no doubt the thanks of others. However, you need to reframe your presentation of this data in several respects to avoid grossly misleading students:
First, you should immediately remove the UK and Australasian schools from the lists, since you have grossly undercounted their placements. It's just irresponsible and unfair to those schools to list them given that, as you acknowledge, you didn't really know how to interpret their placements. [The author has since removed these schools.]
Second, many students interested in placement are interested in the quality of the jobs, as well as the tenure-stream status. Your way of 'ranking' the schools erases the distinction between a placement at a 4/4 school with mediocre students and a placement at Princeton (not to mention all the gradations in between). You should make clear in each chart that the "Placement Rank" is really only "Percentage of PhDs Who Secured a Tenure-Track Job Somewhere." (If you were so inclined, you might examine the placements in terms of the kinds of schools, perhaps using the Carnegie system for categorizing institutions of higher education.)
Third, of course there is no correlation between placement in tenure-track jobs over the last 15 years, roughly, and the quality of faculties in 2011! Why would the quality of the faculty in 2011 predict placement circa 2000, for a coterie of students who started 5-8 years before that? As I note, including in the material you link to, placement records are backwards-looking measures, so only where faculties have not changed in the interim could past performance be thought a likely indicator of future success. So you should eliminate entirely the meaningless comparison between 2011 faculty quality rankings and tenure-track placement (without regard to kind of institution) going back some 15 years.
Fourth, the chart about the percentage of graduates currently in academic philosophy needs a caveat for the benefit of students. Schools vary quite a bit in how many of the students who enroll end up getting the PhD. MIT, for example, has historically had very low attrition. Other schools have attrition on the order of 50% or more. So while the data you've collected tells us the percentage of *students who managed to complete the PhD* who got some tenure-track job, it doesn't give an incoming student a realistic picture of his or her prospects of getting the PhD. Schools that have high attrition may, in fact, be losing or forcing out precisely the students less likely to be competitive on the job market--we just don't know. It's why I've encouraged schools to post their attrition and completion rates, which many have now done.
UPDATE: There are various errors in the data (besides the egregious one of undercounting placement from British and Australasian schools), including for Northwestern and CUNY. So students should approach this with caution. The errors identified so far raise questions about the reliability of what is reported, quite apart from its correct interpretation.