Misperceptions in the public culture about universities are legion, but one of the most peculiar is the perception of universities as hotbeds of "postmodernism," i.e., generalized skepticism about truth, meaning, and knowledge. As best I can tell, there are just two fields where postmodernism has a significant presence and influence: English/Literature and History. Because these fields are popular undergraduate majors, and perhaps, most significantly, because journalists appear to disproportionately have majored in one of these two subjects, the result is that the postmodern fashions of these two disciplines have led to an "official" media vision of universities as hotbeds of postmodernism. But what are the facts?
My knowledge of what is going on in some fields is undoubtedly superficial or limited, but I feel reasonably confident in noting that postmodernism plays little or no role in most major academic disciplines. Consider:
Anthropology: there are postmodernist strains in cultural and social anthropology, none at all in physical anthropology. This, of course, has resulted in the bifurcation of Anthropology Departments at some universities.
Biology: postmodernism has had no impact.
Chemistry: postmodernism has had no impact.
Classics: postmodernism has had little or no impact.
Economics: postmodernism has had no impact.
Law: postmodernism has had little impact, outside certain areas like Critical Race Theory whose impact on most other branches of law has been minimal. Postmodernism has had no impact on the mainstream of jurisprudence.
Linguistics: postmodernism has had no impact.
Mathematics: postmodernism has had no impact.
Philosophy: postmodernism has had no impact in the mainstream, some influence at the margins.
Physics: postmodernism has had no impact.
Political Science: postmodernism has had some impact on political theory as practiced in Poli Sci departments, almost no impact at all on the rest of the discipline (international and comparative politics, public law, American politics, formal and rational choice theory).
Psychology: postmodernism has had little or no impact.
Sociology: postmodernism has had some impact in social theory, and little or no impact in the quantitative and empirical branches of the discipline, which dominate the field.
Unsurprisingly, the natural sciences have been untouched by postmodernism--but they are, after all, a substantial portion of any research university. The social sciences have been little affected: Anthropology the most, and Political Science and Sociology only in their abstract theoretical branches; the rest of those fields, like Economics, go about "business as usual" (and often in an anti-postmodern, scientistic way). But the Social and Natural Sciences account for a vast amount of what goes on these days at research universities. The Humanities have been most afflicted by postmodernism, though even there we find two central fields--Classics and Philosophy--largely unscathed by postmodern fashions.
So there we have it: Little or no postmodernist influences in Physical Anthropology, Biology, Chemistry, Classics, Economics, Linguistics, Mathematics, Philosophy, Physics, and Psychology. Modest, but not insignificant, postmodernist influences in Law, Political Science, and Sociology. Significant postmodernist influences in Anthropology, English and the various literature disciplines, and History. It's really time for journalists to stop writing about universities and intellectual life as though postmodernism were its dominant motif; happily, it is not.
Speaking at an off-the-record panel discussion at the World Economic Forum in Davos, Switzerland on Jan. 27, CNN chief executive Eason Jordan apparently suggested that U.S. forces in Iraq had intentionally targeted journalists. When challenged by members of the audience, including Rep. Barney Frank (D-Mass.), Jordan backtracked, saying that he did not believe that such attacks were deliberate U.S. government policy.
By most accounts Jordan's assertions were wavering and ambiguous. Not so for the resulting howl from conservative bloggers. Twenty-two days later, under pressure from critics across the political spectrum, Jordan resigned.
Major news outlets jumped at the tried-and-true storyline: public figure makes outrageous statements and is taken down by persistent bloggers. The pajamahadeen, still triumphant from having toppled Dan Rather, had claimed another scalp.
Even Jordan's defenders went out of their way to attack his statements. Syndicated columnist Kathleen Parker, who voiced concerns that a "cyber-mob" mentality might chill journalistic free speech, made it a point to call Jordan's comments "manna to Islamist recruiters" and "stupid, even indefensible."
But what's truly indefensible is the American media's failure to examine the substance of Jordan's claims, however clumsily articulated. If they did, they might be surprised.
To be sure, many made superficial attempts. A typical example appears in conservative columnist Cathy Young's Feb. 14 Boston Globe op-ed, "Sliming American Troops." Young, pointing out that Jordan also cited an "uncorroborated tale" of an Al-Jazeera journalist tortured at Abu Ghraib prison, dismisses Jordan's accusations between parentheses:
"All this suggests that intentional targeting of journalists as journalists was likely a part of Jordan's claim. (No such charge has ever been made by any journalists' organization, though disturbing questions have been raised about negligence in the U.S. military's shelling of the Palestine Hotel in Baghdad in April 2003 in which two journalists were killed.)" (emphasis added)
No such charge? On April 8, Reporters Without Borders issued a press release headlined "Reporters Without Borders accuses U.S. military of deliberately firing at journalists." That Young missed this item from a global media rights group, released the very same day as the Palestine Hotel attack that she goes on to mention, shows just how blind she is to facts that lie outside the orthodox narrative.
Young is one of the few columnists to make any connection between Jordan's comments and the shelling of the Palestine Hotel, which killed two journalists, José Couso of the Spanish network Telecinco and Reuters cameraman Taras Protsiuk, and wounded three others.
A film of the incident, shot by the French TV station France 3, shows that U.S. troops were not under any fire at the time, and that the tank crew took a few minutes to adjust its gun before opening fire on the hotel, which was home to more than 200 journalists and media assistants at the time.
Following a Pentagon investigation in which the U.S. military completely exonerated itself, the International Federation of Journalists denounced the "cynical whitewashing of a horrifying and avoidable tragedy...."
As for the "uncorroborated tale" of torture? According to those who witnessed the remarks at Davos, Jordan discussed the case of an Al-Jazeera reporter who had been taken to Abu Ghraib prison, where he was forced to eat his own shoes and mocked as "Al-Jazeera boy."
The only reporter from a major paper to look into this tale was the New York Sun's Roderick Boyd, who did so in a display of astonishingly lazy reporting. Boyd writes:
"A man who said he was a producer with Al-Jazeera at the network's headquarters in Doha, Qatar, said he was unaware of any such incident, 'although we have had problems with American troops in and out of Iraq.' The Al-Jazeera producer refused to give his name."
Had Boyd bothered to enter the words "Al-Jazeera boy" into Google or Lexis-Nexis, he would have found a March 11, 2004, story in the Nation, a May 2, 2004, story in the London Observer, and an account from Reporters Without Borders, each of which details the seven-week detention and torture at Abu Ghraib of 33-year-old Al-Jazeera cameraman Salah Hassan.
But Boyd didn't bother. He was happy enough to settle for the anonymous source who told him what he wanted to hear. His "evidence" that the incident never happened has been reprinted throughout the blogosphere, including the influential conservative blog, Instapundit.
The reporter forced by U.S. troops to eat his own shoe was actually a different person: veteran Reuters cameraman Salem Ureibi. On Jan. 2, 2004, U.S. soldiers arrested Ureibi, Falluja stringer Ahmed Mohammed Hussein al-Badrani and driver Sattar Jabar al-Badrani in Falluja as they were filming the site of a helicopter crash. The three men were held for 72 hours, during which they were beaten, stripped and threatened with rape.
It was not the tales of abuse, however, that most appalled Jordan's critics—such stories are no longer shocking even to casual followers of the news—but the notion that the U.S. military would deliberately target media outlets in Iraq in order to silence them. In the early days of the invasion, they did just that.
In the hours before dawn on March 26, 2003, precision guided bombs struck the headquarters of Iraq's state-run television station, which had days earlier broadcast images of dead and captured American soldiers, in violation of the Geneva Conventions.
But the Geneva Conventions also prohibit targeting civilian buildings, unless they offer "definite military advantage." Media organizations and human rights groups were outraged.
"The bombing of a television station, simply because it is being used for the purposes of propaganda, cannot be condoned," read a press release from Amnesty International, which called the strike a "possible war crime."
"Once again," said Aidan White, general secretary of the International Federation of Journalists, "we see military and political commanders from the democratic world targeting a television network simply because they don't like the message it gives out...."
The next time some self-congratulatory, right-wing empty suit with a blog starts blathering about how the blogosphere challenges traditional media, send them this link. As usual, the right-wing blogosphere missed the real story.
In the Loving case, a mixed-race married couple was charged with violating Virginia's Racial Integrity Act. The judge who sentenced the couple wrote: "Almighty God created the races white, black, yellow, malay and red, and he placed them on separate continents. And but for the interference with his arrangements there would be no cause for [interracial] marriages. The fact that he separated the races shows that he did not intend for the races to mix."
Now we're told that he doesn't want gays to marry. That there is something unnatural about the whole idea of men marrying men and women marrying women. That it's abhorrent to much of the population, just as interracial marriages were (and to many, still are) abhorrent.
This does get to the heart of the matter. There is no doubt that the idea of two men or two women being married seems strange and unfamiliar (well, it is unfamiliar, after all), that it is upsetting to many, that it provokes hard-to-articulate feelings of unease and, in some, revulsion. It is, in that regard, no different from the feelings very common fifty years ago, and still sometimes found today, regarding interracial marriage. In both cases, it is impossible for anyone to give a rational explanation for their opposition. (A good illustration is the postings at this conservative site.)
It seems to me there have been three general kinds of attempts to offer a rational basis for opposition to gay marriage: appeals to religion, tradition, and the "essential" nature of marriage. Assuming that religious faith can be rationally defended--I will assume, arguendo, that it can be--it's not at all clear that those defenses suffice to underwrite the rationality of claims about God's intentions on matters like gay marriage. Belief in God is one thing; a claim to authoritative epistemic access to God's intentions is another. The rationality of claims of the latter sort has never been adequately defended.
Reliance on "tradition" is not rational in the absence of (a) a defense of the rationality of the tradition, or (b) a defense of the rationality of deference to tradition. Obviously if the rationality of the tradition could be defended there would be no need to appeal to the tradition in the first place. And the only defenses of the rationality of deference to tradition--assuming they're successful--establish, at best, that tradition is a defeasible guide to what we should do today, and thus cannot themselves fully dodge the question of why tradition should not be defeated in this instance. (Again, I'm assuming, arguendo, that the "tradition" supports the claims of the opponents of gay marriage: for some pertinent doubts, see the interview with Sanford Levinson linked at the end of this posting.)
Finally, arguments based on claims about the essential nature of marriage--like those by John Finnis and Robert George--are, it is fair to say, generally recognized as reductios: the arguments are so tortured and so fraught with bizarre premises as to lead one agnostic on the subject to be highly suspicious. (A thinner version of these arguments from Doug Kmiec is here. Larry Solum [San Diego Law] comments on some of the peculiarities of the Kmiec argument here.)
ORIGINALLY POSTED ON OCTOBER 1, 2003. (And for more on why the "Nobel Prize" in Economics is not really a Nobel Prize, see here. And for a different view on economics as "science," see here.)
Next week, the Swedish Academy starts handing out Nobel Prizes, which guarantees for the recipients a wave of international publicity and attention. Since the late 1960s, economics has been included among the fields for which the Prize is given. And without a doubt, the field of economics has gotten a lot of mileage out of the fact that it is the only social science field in which there is a Nobel Prize, which puts economics right alongside real sciences like physics and chemistry.
Economists tend to take the scientific status of their field for granted, while those outside the field are more skeptical. There's a large popular literature, of course, on the preposterously bad track record of predictions by economists (the December 2, 1996 essay by John Cassidy in The New Yorker may be the best-known, though it's not, as far as I can tell, on-line), but philosophers (notably Daniel Hausman at Wisconsin and Alexander Rosenberg at Duke) have developed substantial theoretical accounts of the problems that afflict economics and that account for the poor predictive track record.
A good sampling is found in Hausman's contribution to SEP on philosophy of economics. Hausman there writes, for example, regarding Milton Friedman's famous defense of unrealistic assumptions in economic theories:
In other words, Friedman believes that economic theories should be appraised in terms of their predictions concerning prices and quantities exchanged on markets. In his view, what matters is 'narrow predictive success' (Hausman 1994b), not overall predictive adequacy. So economists can simply ignore the disquieting findings of surveys. They can ignore the fact that people do not always prefer larger bundles of commodities to smaller bundles of commodities. They need not be troubled that some of their models suppose that all agents know the prices of all present and future commodities in all markets. All that matters is whether the predictions concerning market phenomena turn out to be correct or not. And since anomalous market outcomes could be due to any number of uncontrolled causal factors, while experiments are difficult or impossible to carry out, it turns out that economists need not worry about ever encountering evidence that would disconfirm fundamental theory. Detailed models may be confirmed or disconfirmed, but fundamental theory is safe. In this way one can understand how Friedman's methodology, which appears to justify the eclectic and pragmatic view that economists should use any model that appears to "work" regardless of how absurd or unreasonable its assumptions might appear, has been put in service of a rigid theoretical orthodoxy.
But this is fairly gentle (as befits an encyclopedia article), by comparison to some of the other philosophical verdicts; for example: John Dupre, writing in the Philosophical Review 104 (1995): 151: "[e]conomic theory [is] one of the more dismal empirical failures in the history of science." Daniel Hausman, writing in his important book The Inexact and Separate Science of Economics (CUP, 1992), 139:
The justification for a particular paradigm or research program, like the justification for the commitment to economics as a separate science, is success and progress, including especially empirical success and progress. Since economics has not been very successful and has not made much empirical progress, economists should be exploring alternatives....
No one has been more damning, though, than Alexander Rosenberg, in both his widely anthologized "If Economics Isn't Science, What Is It?" and his book Economics--Mathematical Politics or Science of Diminishing Returns? (University of Chicago Press, 1992). A sampling from the article:
microeconomic theory has made no advances in the management of economic processes since its current formalism was first elaborated in the nineteenth century.
the twentieth-century history of economic theory certainly does not appear to be that of an empirical science.
Economists would indeed be well-advised not to surrender their...research program, if only they could boast even a small part of the startling successes that other [similarly structured] research programs have achieved. But two hundred years of work in the same direction have produced nothing comparable to the physicists' discovery of new planets, or of new technologies by which to control the mechanical phenomena that Newton's laws synthesized. Economists have attained no independently substantiated insight into their domain to rival the biologists' understanding of macroevolution and its underlying mechanism of adaptation and heredity.
There is, of course, a vast literature on why economics has so little in the way of predictive content, and on how a theory so dependent on idealizations and factually false assumptions as microeconomics can nevertheless constitute a respectable scientific enterprise....The only two things clear about this literature are that economists have found it almost universally satisfying and legitimating, and noneconomists have consistently been left unsatisfied, insisting that methodological excuses are no substitute for attempting to do what economics has hitherto not done: improve its predictive content.
Economists, of course, take the existence of a Nobel Prize in their field as an indication of the closeness of their field to real science. But perhaps the best rationale for a Nobel Prize in economics is its proximity to literature?
Mr. Buck, needless to say, remains quite attached to his "insight" that there are two different senses of "a priori," one of which he denominates the "Kantian" sense. He explains:
Scientists often say as follows: 'Other scientists have seen that methodological naturalism has worked in the past; therefore I will approach any new problem with a strict insistence that only naturalistic solutions will be considered, because I have decided that only naturalistic solutions count as 'science.' Leiter focuses on the first part of that sentence [note: the sentence is Buck's, not Leiter's], and accordingly insists that methodological naturalism was not collectively chosen 'a priori' in the Kantian sense. That's all fine and well, but it says nothing about whether an individual scientist today approaches new problems having ruled out a particular type of solution without regard for its truth. In that sense, the commitment to methodological naturalism is 'a priori,' because it comes prior to an individual scientist's investigation of any actual new problem or question.
Where to begin? Let's take "a priori" in the new, "Buckian sense." Scientists believe something "a priori in the Buckian sense" if "it comes prior to an individual scientist's investigation of any actual new problem or question." So, e.g., since most scientists accept the truth of Newtonian mechanics for mid-size physical objects, despite the fact that most of them have never conducted any investigations or experiments to confirm Newtonian mechanics, it follows that they accept it, then, "a priori in the Buckian sense." Needless to say, natural scientists quite generally accept methodological naturalism "a priori in the Buckian sense."
Indeed, it goes farther than that: most of us who are educated accept evolutionary biology "a priori in the Buckian sense" (after all, I'm no biologist, what do I know beyond what I've read and been told about it?). Indeed, I accept that FDR was President from 1933 to 1945, and that Hitler was a genocidal maniac in Germany during roughly those same years, and that Nietzsche was born in 1844 and died in 1900, and that Americans fought for independence from the British in the late 18th-century--I accept all of that "a priori in the Buckian sense," since I've done no empirical investigation to confirm any of it.
What that means, of course, is that for a belief to be "a priori in the Buckian sense" is utterly trivial: huge portions of what laymen and scientists alike accept are "a priori in the Buckian sense." Of course, it is quite rational to continue to accept as true our beliefs that are "a priori in the Buckian sense" as long as (a) the processes by which we acquired the beliefs are epistemically reliable, and (b) we don't encounter conflicting evidence in the course of subsequent experience and investigation. The latter, I take it, is Dr. Myers's point when he says to Buck:
If Intelligent Design creationists (or ghost-hunters, or crystal-healers, or zero-point energy advocates) were to actually produce any evidence of the phenomena they claim to study, we'd jump up and take notice. While Buck tries to claim it is irrelevant, it is actually the heart of the matter, and that's the whole point of what I was saying: gathering evidence is the foundation of what scientists do. Rolling your eyes and dismissing the point [as Buck did] while claiming, in essence, that 'it doesn't matter, scientists won't accept evidence of intelligent design even if we had any, so we won't bother' doesn't get us anywhere. Especially since it is false.
In other words, in the absence of any evidence incompatible with methodological naturalism, we have no reason to give it up, despite its being, for most people, "a priori in the Buckian sense." (Let me note, though Mr. Buck does not, that this same issue came up in my earlier posting on VanDyke, in connection with the misrepresentation of Larry Laudan's views. As I wrote then: "Beckwith quotes Laudan [at 25] noting that ID 'is inconsistent with methodological naturalism and ontological materialism...[b]ut that fact has no bearing whatsoever on the plausibility of the arguments for ID.' Why does Laudan say that? Because methodological naturalism is an a posteriori doctrine, which means if ID generated any empirical results incompatible with it—it has not, of course—then so much the worse for MN. The problem is purely a posteriori: ID has no research program and no empirical support, so it presents no challenge at all to the reliance on naturalistic explanatory mechanisms.")
"My name is SupaDubya and I'm really fucking insane." The music isn't to my taste, but the intellectual and political content of this song is far more sophisticated than that of most of the blogosphere, at least in the U.S. Click on "read the lyrics" for a set of good links as well. But, warning: Ann Althouse will be offended by the naughty words. The song is by 4o. del Tren, a Mexican band. Never heard of them before, but I'm a fan now! See also their commentary on the song:
We're living in dangerous times. Dangerous because 'the most powerful man on earth', as Americans like to call their president, has unleashed a protracted war against an abstract enemy upon this world. Justifying his regime's actions by claiming to rid the world of tyranny and a supposed fight against terrorism, Bush & Co. found in the events of September 11, 2001, the launching pad for their agenda of imperial expansion.
Almost immediately, the government set the propaganda machine in motion, wrapping itself in the flag and other nationalist symbols and taking refuge in the fear, confusion and anger that followed the terrorist attacks. They repeated over and over those buzzwords that made it so easy for most Americans to believe that what was beginning was in fact a righteous war against an evil enemy and not the beginning of what they had planned to do all along... long before the election was stolen in 2000, and certainly LONG before the twin towers collapsed. Mass media - in particular, Rupert Murdoch's Fox News Channel - were a vital instrument in the carrying out of this task of propaganda. All those who opposed an invasion of Afghanistan (and later, of course, Iraq) were deemed anti-American, unpatriotic, or even worse... terrorist sympathizers.
Since the events of that day, the United States has inched ever closer to an Orwellian police state, in which citizens have been truly brainwashed into believing that this invasion serves a noble, even a holy purpose. What's even scarier, many of them, through the so-called Patriot Act and other measures adopted by the Bush regime, have been persuaded to surrender some of their civil liberties in the name of the so-called 'war on terror'. It wasn't until just over a year or so ago that serious questions about the intent and agenda of the Iraq occupation were raised in the mainstream media, and the anti-Bush crowd began to become less a fringe movement of radical nuts and more the rational, objective and serious questioning of the decisions made and policies adopted by the current Administration. It had become impossible to remain quiet in the face of such blatant violations of international law and diplomacy, witnessing the unilateral actions of a small group acting not on behalf of freedom, morality and democracy, but in the best interests of the corporate and military elite of the right-wing powers around the world.
With this track we have tried to put into perspective a glimpse of what the Bush/Cheney regime truly represents, and the means with which they have achieved their goals, as well as what we believe to be the truth in the alleged war on terror, its origins, its true motives and ends. Perhaps to some the lyrics of the song will seem extremist or radical. It might be especially difficult for some Americans to stomach; however, we believe that we truly have not exaggerated in our statements. We must be conscious and aware of the many different ways in which our perceptions and convictions are shaped. We must know our sources of information, whose interests they truly serve; and most of all, we must be aware of our own history and past as a species, so we may better understand what is currently going on and so we may have the capacity to put it into a perspective and not fall into the fatal error of limiting our memory to the here and now.
Yes indeed, we're living in dangerous times. And it could get worse before it gets better if Bush/Cheney are not re-defeated this coming November. This is only the beginning in the expansionist agenda of the American right-wing groups in power. It is vital that this veiled dictatorship be removed from office; not only for the sake of Americans, but for the sake of the entire world.
This book, published by French scholars in 1997, documents the death tolls attributable to communist regimes in different countries. The book is obviously premised on the thought that the fact that these were communist regimes is explanatory: clearly, one could produce similar books like "The Black Book of White Men" or "The Black Book of Short Dictators" or "The Black Book of Blondes," listing the atrocities committed by regimes with the designated characteristics. But the undertaking would seem peculiar, since there is no reason to think that the highlighted attributes are explanatory.
The Black Book of Communism doesn't actually argue that allegiance to communism is explanatory; it takes it for granted. Someone writing The Black Book of Capitalism, in turn, might start counting corpses attributable to 19th-century European imperialism, Pinochet, Suharto, Marcos, Somoza, etc. (Should we add Hitler, who on some accounts owes much to the support of the capitalist class, and whose rise to power was, uncontroversially, facilitated by the worldwide capitalist crisis of the late 1920s and 1930s?)
Would such an exercise illuminate anything about capitalism? It is an interesting question, and I do not know the answer. Still, in viewing the Black Book of Communism it is not unreasonable to raise the kinds of questions put--with characteristic Maoist hyperbole and apologetics--by this Maoist organization:
[I]t is an 856 page book and there are no statistical comparisons of premature deaths between capitalist and socialist countries anywhere in the book, just as MIM [Maoist International Movement] charged all along. The reason is simple: the Communists doubled the life expectancies of the people of the Soviet Union and China. That is the overall picture. It does not mean there were not civil wars or executions, including some unjust ones, but overall, the violence of communism is less than that of capitalism, by far.
The simple scientific link missing in the minds of our critics is the link between poverty under a system of private property and death. Poverty under capitalism causes death from lack of food, a decent environment and adequate health care.
Assuming one is a utilitarian about these matters--as the economists usually profess to be (though theirs is typically a sophomoric utilitarianism which equates well-being with preference-satisfaction)--then the only relevant question really is the causal nexus between different forms of socio-economic ordering and human well-being. And thus the Maoists are plainly correct to raise the question: how many lives were cut short by the catastrophe of the Great Depression? how many lives were lost to capitalist exploitation and terror in third world countries? and so on.
It will not do, to refute the Maoists, to follow the lead of the silly Arnold Kling, and compare per capita life spans today to 100 or 200 years ago. No one was a more vigorous cheerleader for the productive power of capitalism, of course, than Karl Marx. The relevant question is whether that productive power might have been harnessed in ways that would have resulted in greater maximization of human well-being than actually resulted under existing socio-economic arrangements. And then, if we were to be serious about the question, we would have to line up the corpses and human misery (causally) attributable to the imperfections of capitalist forms of socio-economic organization next to those (causally) attributable to the communist societies, to see whose "Black Book" should be the fattest.
Now doctrinaire Marxists will presumably conclude, before the exercise even begins, that of course the ledger sheet of human misery will be longer on the side of the Chinese and the Soviets, since those nations were prematurely communist, and so deprived themselves of decades (or centuries) of development of their productive power which capitalism would have made possible. Such productive power, in turn, would have made possible, even within the pathologies of a capitalist system, improvements in human well-being, as those such as Dr. Kling like to point out.
Fortunately, doctrinaire Marxists are now in short supply! (Too bad the same is not true of doctrinaire libertarians, whose reasoning is not so different.) And so there is a genuine empirical issue here, for which The Black Book of Communism is only a partial contribution. (It focuses disproportionately on intentional killings, though it factors in some non-intentional ones as well, such as the Chinese famine. But it ignores precisely the critique of the Soviets and the Chinese that would be raised by the doctrinaire Marxist, noted above.) I will not be betting, to be sure, on Harvard University Press publishing installments looking at the other side of the ledger sheets, but perhaps other scholars will find other fora in which to address the issues.
At the same time, what is needed is not simply correlations, but some account of causal mechanisms, so that we understand the precise sense in which, e.g., "capitalism" (whatever that is) produces mass murderers like Suharto, or "communism" (whatever that is) produces mass murderers like Stalin.
These are fascinating issues, of enormous human importance. Is there a pertinent scholarly (as distinct from a popular [e.g., the silly Arnold Kling] or polemical [e.g., the MIM, above]) literature?
UPDATE: My colleague Frank Cross calls my attention to an interesting, small-scale study of the effect of privatization of water rights on child mortality in Argentina. ANOTHER UPDATE: A different perspective on water privatization here. AND ANOTHER: Reader Tom Barker alerts me to the fact that there is, in French, a Black Book of Capitalism, though it may already be out of print.
I'm reposting this, in light of the latest salvo by the right claiming a "bias" against conservatives in the academy. As usual, the possibility that conservatives are underrepresented because of intellectual or scholarly deficiencies isn't broached (how could that topic be broached by a journalist, after all?). Surely it is relevant to an assessment of why Straussians who work on Plato have difficulty getting hired (except in departments already infested) that they are viewed by all other Plato scholars as sloppy and philosophically inept.
Conservatives are usually keen to deny that the absence of, say, Blacks in academia signals bias; why are they so ready to infer bias from the absence of conservatives?
Only 7% of the members of the National Academy of Sciences believe in God, compared to 90% of the U.S. population. What should we infer?
Readers may also find the discussion here of interest.
The fury of the ex-smoker towards smokers is matched only by the fury of the ex-New Leftist towards anyone to the left of Tom DeLay, which brings us to David Horowitz, and his newest initiative, which ought to scare every scholar in the United States to death. He calls it an "Academic Freedom Bill of Rights," and its text seems almost benign and uncontroversial:
"All faculty shall be hired, fired, promoted and granted tenure on the basis of their competence and appropriate knowledge in the field of their expertise and, in the humanities, the social sciences, and the arts, with a view toward fostering a plurality of methodologies and perspectives. No faculty shall be hired or fired or denied promotion or tenure on the basis of his or her political or religious beliefs."
Who could quarrel with the proposition that "No faculty shall be hired or fired or denied promotion or tenure on the basis of his or her political or religious beliefs"?
The idea that a "plurality of methodologies and perspectives" be mandated is a bit more worrisome: should someone take the UCLA Philosophy Department to court for not having hired a postmodernist philosopher? At the University of Texas School of Law, we've also passed on a lot of postmodernists and Critical Race Theorists lately. Should we be liable as a result?
Of course, the pathological Horowitz has no interest in these issues. If you want an idea of what "academic freedom" means to him, you need only read a bit of what he has written, for example, this gem on the very same web page where the Academic Freedom Bill of Rights appears, in which he denounces "sociological flat-earthists -- Marxists, socialists, post-modernists and other intellectual radicals -- whose ideas of how societies work have been discredited by historical events" but who "can still dominate their academic fields."
I guess academic freedom doesn't encompass them.
McCarthyism was never so clever. Horowitz says:
"We call on state legislatures in particular to begin these inquiries at the institutions they are responsible for and to enact practical remedies as soon as possible."
Practical remedies? Over in libertarian fantasy-land, Eugene Volokh expresses relief that Horowitz isn't suggesting quotas for conservatives. But what in the world does Eugene think litigation over violations of the so-called "Academic Freedom Bill of Rights" would look like? If people have legal rights, they have remedies, and those remedies are going to be crafted by courts or legislatures. In either scenario, you might as well kiss universities good-bye.
The next critical race theorist denied tenure at a law school will have her day in court, as will the next Straussian booted from a classics department (I'm assuming, counterfactually of course, that one would ever be hired). And perhaps to avoid endless litigation, the legislature will simply proclaim the appropriate balance of libertarians, social conservatives, Marxists, social democrats, liberals, centrist Democrats, and Republicans for each department at the university.
The Colorado branch of Horowitz's front group, "Students for Academic Freedom [sic]," has issued the following declaration:
"These vastly lopsided figures (94% Democratic faculty at the University of Colorado at Boulder, 98% at the University of Denver) provide objective standards of bias, since party affiliation is a choice of the individuals themselves. Even accounting for the fact that more liberals go into academia than do conservatives, can anyone truly claim with a straight face that liberals are so overwhelmingly more competent as teachers and scholars that they deserve to win such an astoundingly high percentage of professorships? This outcome is incomprehensible based on merit alone."
What percent of University of Colorado at Boulder faculty are white, one wonders? 90%, 95%? And since there is no reason to think Blacks are less interested in academic careers than whites (as opposed to conservatives, who would rather make money on Wall Street, or so the argument goes), and since no one could "claim with a straight face that" whites are "overwhelmingly more competent as teachers and scholars," this outcome "is incomprehensible based on merit alone." Ergo, it must be bias, ergo, bring in the courts.
In the race and gender discrimination contexts, of course, the courts have largely rejected these kinds of arguments--and certainly right-wingers like David Horowitz reject them out of hand. Too bad they've not noticed the parallel in this context.
And, of course, the argument begs the real question. There are no alchemists in the best chemistry departments; no creation scientists or intelligent design proponents in the best biology departments; no geocentric theorists of the solar system in the best astronomy departments; no supply-side economists in the best economics departments (according to Paul Krugman); no postmodernists or Straussians in the best philosophy departments; and so on.
Do all these underrepresented intellectual perspectives have a claim under the Academic Freedom Bill of Rights? Couldn't it be that fundamental intellectual deficiencies of these views have something to do with their underrepresentation in the academy?
After all, take your favorite "objective" measure of intellectual ability--IQ, standardized test scores, etc.--and there's no disputing that the level of intellectual ability is much, much higher in the academy than in the population at large. Why don't great believers in "merit" like Horowitz draw the obvious inference from this fact when conjoined with the absence of their preferred political ideology? (Well, the answer to that one is obvious.)
Alas, I'm sure it's not quite that simple. And conservative views aren't quite as underrepresented. Take libertarians in the legal academy: they are well-represented at most of the top law schools (Yale, Harvard, Chicago, Virginia, Texas, UCLA, Northwestern, etc.). (Indeed, they may be better-represented than Critical Race Theorists--most top schools have only one, if that.) That is as it should be since the libertarians, from Richard Posner to Alan Schwartz to Eugene Volokh, are intellectually and argumentatively engaged with the major scholarly issues in law. But if you start looking for conservatives in the Bush/DeLay model, you'll find hardly any. Is this attributable to bias or to the fact that as educational attainment and intellectual sophistication rises, it is harder to sign on to this world view?
UPDATE: I'm moving this post (from 9/17/03) to the top, given the latest brouhaha about the judicial confirmation process. The Democrats have approved 168 of Bush's 172 nominees; that's to their lasting shame. An honest, i.e. legal realist, appreciation of what appellate courts actually do, suggests that they ought to be using the filibuster a lot more.
So I am asked by a reader of the preceding post. I don't find that blogs work well as scholarly fora, so I refer interested readers to my essay in the forthcoming Blackwell Guide to Philosophy of Law on "American Legal Realism" for a scholarly answer. (There is a "Scandinavian" branch of "legal realism" which is a very remote cousin of the American branch--indeed, the label "legal realism" is probably more misleading than illuminating when applied to both--for reasons I'll explain when I finally finish the essay on both kinds for the Stanford Encyclopedia of Philosophy.)
A non-scholarly, and more superficial, gloss on "legal realism" will have to suffice for purposes here. Those who are realists about law, and more particularly about courts, think that the kinds of "legal reasons"--appeals to doctrine, precedent, statutory text, and the reasoning by analogy by which courts bring the doctrine etc. into contact with the facts of a case--that judges offer in their opinions largely obscure the actual grounds of decision. Legal reasons don't really explain the decisions; legal reasons are often indeterminate, and equally good legal arguments can be given for very different outcomes. What really explains the decision is the judge's commitment to non-legal norms (moral, political, economic).
My colleague Lucas A. (Scot) Powe, Jr. has a pithy way of expressing the idea. He likes to say: "Anyone teaching constitutional law who discusses only the doctrine is guilty of educational malpractice." (For a detailed case in support of this view, see his book on The Warren Court and American Politics [Harvard University Press, 2000].) Why malpractice? Because such a teacher will not equip his or her students to advise clients intelligently about constitutional law issues, since what courts do with these issues, on the realist view, has far more to do with extra-legal political and related considerations than with doctrine.
Suitably qualified (for the full scale of the qualifications, see my Blackwell essay), I find it hard to fathom that anyone disputes the truth of legal realism (lawyers don't; it's strictly some academics). It should hardly be surprising that legal realism is the correct descriptive account of appellate decision-making, if only for the simple reason that the cases selected for appellate review are disproportionately the ones where the legal reasons are indeterminate, and so the necessity for political and moral judgment is inescapable. (Yes, yes, I realize that Dworkin and his 3 [or is it 4?] followers have a theory according to which these judgments are themselves legal, but the theory is so bad, I'm discounting it here, sans argument.)
It's hard not to feel that our public culture, and our public discourse about law, would be a lot healthier if the truth of legal realism were more widely acknowledged. Consider the battle over federal court nominations: if we're realists, then we can say plainly that these are battles over life-time appointments of government agents who will be called on to make moral and political judgments, by which the force of the state will be brought to bear against the parties so judged. Ergo, it is perfectly reasonable, indeed, appropriate, for Senators to oppose nominations on moral and political grounds. The surprising thing, then, about the defeat of the Estrada nomination is not that he was defeated, but that so many others were given a free pass. These aren't battles over appointing the best "legal technicians"; these are battles over the appointment of mini-legislators. If the President doesn't nominate reasonably bipartisan legislators to the federal appeals courts, a Senate with a differing political cast of mind shouldn't approve any of them. Isn't it that simple? Well, it is if you're a legal realist.
UPDATE: Stuart Buck has "responded" in a fashion to this posting, but, it appears, without reading the linked essay, and so without noting any of the "suitable qualifications." In particular, it is not the thesis of legal realism that "personal ideology" (as Mr. Buck puts it) determines the decision; it is the thesis that non-legal norms determine the decisions. This is obviously compatible with lots of unanimous decisions.
For understandable, if philosophically frivolous, reasons the collapse of the Soviet Union has been taken—especially in popular culture (including popular "intellectual" culture, such as the blogosphere)—as signalling the defeat of Marx qua philosopher. (The mock interview with Marx that was making the rounds of the blogosphere awhile back gave expression to this kind of view.) Marxism had, after all, been so long associated with the Soviet Union, that that system’s collapse was taken to coincide with the collapse of its putative intellectual foundations. Of course, this association is, from a philosophical point of view, a non-sequitur. (Indeed, the Soviet Union arguably collapsed for Marxian reasons: bureaucratic central planning clearly fettered the development of the forces of production, and thus was eventually supplanted by nascent market forms of production and distribution. But the truth of that hypothesis doesn't matter in this context.)
Even putting aside silly objections, though, there is still a pressing question about what really is living and what dead in Marx's philosophy. (In this regard, I once again strongly recommend Jonathan Wolff's Why Read Marx Today? [OUP, 2002].) In recent years, there has been what I call a "moralizing" tendency in Marx scholarship--a tendency to abandon the manifest causal-explanatory ambitions of Marx's actual philosophical practice (Marx was a good naturalist!), in favor of developing the implicit normative theory in Marx's writings. Marx never engaged explicitly in normative theory, and for a simple reason: he concluded, correctly I think, that it would have no impact on practice. (Richard Posner, a closet Marxist on this issue, has made the same argument more recently, for example in The Problematics of Moral and Legal Theory [Harvard, 1999].) My view is that Marx stands or falls on the success or failure of his causal/explanatory project, and so the question of what is living and what is dead in Marx is equivalent to the question of what is living and what is dead in the causal/explanatory project (and the predictive claims flowing from it) that was Marx's central work.
Part of my answer is here, which is excerpted from my contribution to The Future for Philosophy volume (which will be out not later than next fall). A few suggestive highlights:
We must start from the now commonplace observation among scholars that many familiar Marxian predictions--e.g., the labor theory of value or the theory of the falling rate of profit--are simply false, while certain philosophical assumptions--e.g., about the teleological structure of history, or his conception of human nature in terms of species-being--seem hard to square with his naturalistic scruples, since no successful scientific methods or empirical results have given them robust support.
But what is equally striking is the accuracy of many of Marx’s best-known qualitative predictions about the tendencies of capitalist development: capitalism continues to conquer the globe; its effect is the gradual erasure of cultural and regional identities; growing economic inequality is the norm in the advanced capitalist societies; where capitalism triumphs, market norms gradually dominate all spheres of life, public and private; class position continues to be the defining determinant of political outlook; the dominant class dominates the political process which, in turn, does its bidding; and so on. (The article, above, includes citations to supporting evidence.)
Particularly important, in my view, remains the Marxian theory of ideology, which predicts that the ruling ideas in any well-functioning society will be ideas that promote the interests of the ruling class in that society, i.e., the class that is economically dominant.
By the “ruling ideas” we should understand Marx to mean the central moral, political and economic ideas that dominate discussion in the mass media and in the corridors of power in that society. The theory is not peculiar to Marx, since the “classical realists” of antiquity like the Sophists and Thucydides advanced essentially the same theory: the powerful clothe their pursuit of self-interest in the garb of morality and justice. When Marx says that “The ideas of the ruling class are in every epoch the ruling ideas” (The German Ideology) and that “Law, morality, religion are to [the proletariat] so many bourgeois prejudices, behind which lurk in ambush just as many bourgeois interests” (The Communist Manifesto), he is simply translating into Marxian terms the Sophistic view “that the more powerful will always take advantage of the weaker, and will give the name of law and justice to whatever they lay down in their own interests” (that's W.K.C. Guthrie's gloss on the Sophistic view).
In the United States, for example, a majority of the population favors abolition of the estate tax—what the ideologues of the ruling class now call a “death tax”—believing that it affects them, and that it results in the loss of family businesses and farms. In fact, only 2% of the population pays the estate tax, and there is no documented case of families losing their farms or businesses as a result of the tax’s operation. Examples like this--in which the majority have factually inaccurate beliefs, that are in the interests of those with money and power--could, of course, be multiplied. Does this just happen by accident?
We still might demand, of course, an explanation for why the ruling class is so good at identifying and promoting its interests, while the majority is not. But, again, there is an obvious answer: for isn’t it generally quite easy to identify your short-term interests when the status quo is to your benefit? In such circumstances, you favor the status quo! In other words, if the status quo provides tangible benefits to the few—lots of money, prestige, and power—is it any surprise that the few are well-disposed to the status quo, and are particularly good at thinking of ways to tinker with the status quo (e.g., repeal the already minimal estate tax) to increase their money, prestige, and power? (The few can then promote their interests for exactly the reasons Marx identifies: they own the means of mental production.)
By contrast, it is far trickier for the many to assess what is in their interest, precisely because it requires a counterfactual thought experiment, in addition to evaluating complex questions of socio-economic causation. More precisely, the many have to ascertain that (1) the status quo--the whole complex socio-economic order in which they find themselves--is not in their interests (this may be the easiest part); (2) there are alternatives to the status quo which would be more in their interest; and (3) it is worth the costs to make the transition to the alternatives—to give up on the bad situation one knows in order to make the leap into a (theoretically) better unknown. Obstacles to the already difficult task of making determinations (1) and (2)—let alone (3)—will be especially plentiful, precisely because the few are strongly, and effectively (given their control of the means of mental production), committed to the denial of (1) and (2).
There is some discussion in the blogosphere about Paul Krugman's tendency to make claims about the "motives" of Bush & co. One reasonable defense of the practice comes from Crooked Timber. The discussion, however, brought to mind an issue that arises with respect to what Paul Ricoeur dubbed "the hermeneutics of suspicion," the practice of critics like Marx, Nietzsche, and Freud of attacking certain views by exposing their real motives or origins. In general, I don't find the blogosphere conducive to substantial philosophical argument, but in this case, and since it bears on the Krugman discussion, let me just post part of a forthcoming paper on "The Hermeneutics of Suspicion" (in The Future for Philosophy volume) that constitutes a defense, as it were, of the epistemological foundations of Krugman's practice (not to mention the more sophisticated forms in Marx, Nietzsche, and Freud):
On the reading urged here, we should understand Marx, Nietzsche, and Freud as seeking a naturalistically respectable account of how we arrived at our current, conscious self-understandings. But arriving at such an understanding of the causal genesis of our conscious self-understandings is not--obviously, at least--equivalent to a “hermeneutics of suspicion,” to an understanding that should make us regard them as “suspect.” Why, to put it simply, is the correct naturalistic account of the genesis of our beliefs a reason to make us suspicious of those beliefs? It is useful, I believe, to take a short, though perhaps surprising, detour through Anglo-American epistemology of the past forty years to see how and why the “naturalistic” turn of Marx, Nietzsche, and Freud remains philosophically important.
In 1963, in a remarkably brief paper, Edmund Gettier convinced most philosophers that the received wisdom of millennia about the concept of knowledge was mistaken: “knowledge” was not simply a matter of having a justified, true belief, since one could adduce examples of beliefs that were both “justified” and “true” but which didn’t seem to be cases of “knowledge.” The Gettier counter-examples to the “justified true belief” analysis of “knowledge” all had the form of the following example:
Suppose Smith and Jones apply for the same job. And suppose Smith is justified in believing both that (1) Jones will get the job, and (2) Jones has 10 coins in his pocket. Smith would then also be justified in believing that (3) the person who gets the job will have 10 coins in his pocket. In fact, Smith (not Jones) gets the job and, as it happens, he has 10 coins in his pocket. (3) turns out to be a justified true belief, but it doesn’t seem that Smith “knows” (3). Of course, Smith should believe (3), but not for the reasons that he does. He has a true belief, but not knowledge.
The legacy of the Gettier counter-examples was a powerful one: a justified true belief isn’t “knowledge” when the justification for the true belief isn’t the cause of why the agent holds the belief. As Philip Kitcher put the point, in explaining the stimulus Gettier provided to the “naturalistic” turn in epistemology: “the epistemic status of a belief state depends on the etiology of the state.” Beliefs caused the “wrong” way suffer epistemically.
We can understand, now, the logic of the hermeneutics of suspicion as exploiting precisely this point about the epistemic status of belief: we should be suspicious of the epistemic status of beliefs that have the wrong causal etiology. That’s the lesson of the Gettier counter-examples, and it is the lesson which underwrites the suspicion that Marx, Nietzsche, and Freud recommend by way of providing alternative causal trajectories to explain our beliefs. To be sure, beliefs with the wrong causal etiology might be true; but since they are no longer cases of knowledge, we have no reason to presume that to be the case. To the contrary, we now have reason to be suspicious--nothing more--of their veritistic properties.
So asks a colleague in philosophy, who was recently a tenure referee for a law professor at a very reputable law school who got tenure, notwithstanding what struck the colleague as quite weak work. The answer is somewhat complex, and perhaps a bit speculative, but it goes something like this:
(1) Up until roughly the 1970s, law schools were essentially "trade" schools, tightly linked to the profession and to practice: law professors digested what judges were doing, and explained it to lawyers and judges, and to their students.
(2) When law schools were essentially "trade" schools, the traditional qualifications for law teaching--excellent grades in law school classes, a published student note in the Law Review (which digested what judges were doing, and explained it), clerkship(s) with excellent judges and/or practice experience with excellent lawyers--worked very well: unsurprisingly, someone with all those qualifications was usually damn good at digesting what courts had been doing and explaining it to judges, lawyers, and students.
(3) Folks with those qualifications also could earn a lot of money in the practice of law, and since the practice really wasn't that different from the scholarly side--a bit less relaxed, to be sure, a bit less reflective--law schools, to recruit these super-smart lawyers, had to both pay reasonably well (better than the liberal arts, to be sure!) and offer something that practice couldn't necessarily offer, namely, job security (i.e., tenure, or life-time employment, barring gross misconduct). And since the traditional qualifications were actually good predictors of success as a legal scholar on the "trade school" model, it was hardly problematic to start with a powerful presumption that everyone you hired with the traditional qualifications was someone you had just hired for life.
(4) Starting in the 1970s, this whole paradigm fell apart, at least on the scholarly side. Starting in the 1970s, law schools started to move away from the trade school model, and become more closely integrated with the rest of the university. The rise of interdisciplinary scholarship began in earnest. The University of Chicago Law School, which in the 1940s was considered clearly the inferior of Northwestern, was the pioneer in law & economics in the 1970s, and moved, accordingly, into the very top ranks of law schools. Michigan, which in the 1970s was a top five school on a par with Stanford and Columbia, began hiring PhDs in history, in sociology, in philosophy. A quarter-century later, the vast majority of faculty hired at top law schools have not only JDs, but PhDs in some cognate discipline; and even those without PhDs typically do interdisciplinary work, though often not of high quality.
(5) The interdisciplinary turn in legal scholarship, and the move away from "trade school" scholarship, meant that law schools, especially "top" law schools, became increasingly remote from the practice of law. This development was immortalized by Judge Harry Edwards of the U.S. Court of Appeals for the D.C. Circuit, who wrote a 1992 article on "The Growing Disjunction" between the world of practice and the world of scholarship. By then, however, it was far too late to change things. Yale Law School, the leading per capita producer of new law teachers (by a dangerous margin), and Harvard the leading gross producer of new law teachers, were squarely in the interdisciplinary, non-trade school camps--which meant that the law teachers they were training who would take up posts at Podunk U. had a conception of what was worth doing in legal scholarship that would simply reinforce the growing disjunction.
(6) Yet the change from a trade school to a PhD-program model for legal scholarship did not result in a change in tenure standards. (For awhile it didn't even result in a change in qualifications for new hires, which partly accounts for the huge amount of incompetent philosophy, history, economics, social science, etc. that appeared and still appears, alas, in law reviews. That's now changed in economics and history and some of the social sciences; it's starting to change in philosophy, though more erratically.)
In these slow summer months, when I finally have a chance to make real progress on writing projects (now that all grading is done as well), I'm going to be posting a bit less new material, but I thought I'd also repost some items from early on that, based on feedback, seem to be of particular interest. And since readership is 5-10 times higher now than in the first three months, some new readers may also find these early items of interest. I'll flag such items with "Originally Posted on _____."
Continental Philosophy: Farhang Erfani, a philosopher at American University, provides a useful set of links to news, events, interviews, reviews, videos, etc. related to "Continental philosophy" (broadly construed).