I want to get to the department-level stuff today instead of just looking at the raters, but I promised yesterday that I'd say something about the relationship between the field position of raters and their voting patterns. As with specialty areas, where you stand might depend on where you sit. If we slice raters into groups based on the PGR rating of their employer, we can calculate overall PGR scores based just on the votes from within each group, as we did with the specialty areas. For example, we can divide them into quintiles, plus one extra group for raters who participated in the survey but whose own departments were not rated. (There were a few of those in 2006.) The story is the same as yesterday, only more so: the rank order produced by different quintiles is very similar, there's hardly any variation in the top eight or nine departments, and the heterogeneity that does exist is seen around the middle to lower-middle of the ranking table. So at least within the pool of raters, the people at lower-ranked schools produce more or less the same ranking as the people at higher-ranked schools.
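For the curious, here is a minimal sketch of that calculation in pandas. The column names are purely illustrative (they are not the actual survey variables): I'm assuming a long-format table of votes with columns `rater_id`, `rater_dept_score` (the PGR score of the rater's own department, missing if unrated), `dept` (the department being rated), and `score` (the rating awarded).

```python
import pandas as pd

def within_group_rankings(votes: pd.DataFrame) -> pd.DataFrame:
    """Rank departments separately within each quintile of raters."""
    # One row per rater, so the quintile cut points aren't weighted by how
    # many departments each rater happened to score.
    raters = votes.drop_duplicates("rater_id").set_index("rater_id")
    quintile = pd.qcut(
        raters["rater_dept_score"], q=5,
        labels=["Q1 (bottom)", "Q2", "Q3", "Q4", "Q5 (top)"],
    )
    # Raters at unrated departments get their own group.
    quintile = quintile.cat.add_categories("Unrated").fillna("Unrated")
    votes = votes.assign(rater_group=votes["rater_id"].map(quintile))

    # Mean score each group awards each department (one column per group),
    # then the rank order implied by each group's scores.
    group_means = (
        votes.groupby(["rater_group", "dept"], observed=True)["score"]
        .mean()
        .unstack("rater_group")
    )
    return group_means.rank(ascending=False)
```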

While the rank order produced by different rater quintiles is very similar, the average scores awarded by each group do differ a bit. Here is a plot showing differences in the average scores awarded by raters employed by departments in the top twenty percent and those working at departments in the bottom twenty percent of the PGR ranking.

(PNG, PDF.)

As you can see, raters working at departments in the bottom quintile are consistently more generous in the scores they award than raters working at departments in the top quintile. Again, the actual rank order produced by these two groups would only really differ in a few places mid-table.
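I don't know exactly how the figure above was drawn, but the quantity it describes is easy to reproduce from the hypothetical `group_means` table in the earlier snippet: the gap, department by department, between the mean score awarded by bottom-quintile raters and the mean score awarded by top-quintile raters.

```python
import matplotlib.pyplot as plt

# Difference in mean score per department: bottom-quintile raters minus
# top-quintile raters. Mostly positive values, i.e. the bottom quintile is
# consistently more generous.
diff = (group_means["Q1 (bottom)"] - group_means["Q5 (top)"]).sort_values()

fig, ax = plt.subplots(figsize=(6, 10))
ax.barh(diff.index, diff.values)
ax.set_xlabel("Bottom-quintile mean minus top-quintile mean")
ax.set_ylabel("Department")
plt.tight_layout()
plt.show()
```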

Interestingly, we also see a bit of national variation on this dimension. Conventional wisdom may say that American universities hand out As like candy while the U.K. is stingy with its First Class Honours. Proud traditions of Empire and all that. But while it's true that the most generous individual raters in the 2006 data are employed at U.S. schools, the median U.K.-based rater awarded an average score of 3.35, while the median U.S.-based rater awarded an average of only 2.98. Raters in Canada were essentially identical to the Americans (sorry, Canadians, but it's true). Meanwhile, raters based in the southern hemisphere were the most generous of all, with the median Australia/NZ-based rater awarding an average of some kind of disgusting salty yeast extract. I'm sorry, I mean an average of 3.62. There are far fewer ANZ-, Canada-, and U.K.-based raters in the data than there are U.S.-based raters, so bear that in mind when freely generalizing about philosophers in these countries. (In 2006 the distribution of raters across countries was 178 US, 49 UK, 22 Canada, 14 ANZ, and 6 Other.)

Notably, the median ANZ-based rater evaluated far fewer departments than the median U.S.-based rater (47 vs. 81). Most likely, morning tea rolled around and they knocked off for the day. Rating fewer departments makes them more likely to have rated only the top-ranked ones, which in turn would boost the average score they awarded. The median U.K. rater, meanwhile, evaluated 72 departments. The median Canadian rated 88, possibly because they are a responsible lot, possibly because the winter nights are long and dark up there. National stereotyping is very robust to small sample sizes.
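The by-country figures are the same kind of two-step summary, sketched below under the additional assumption that the votes table carries a (hypothetical) `rater_country` column.

```python
# Per-rater summaries: each rater's mean awarded score and how many
# departments they rated.
per_rater = (
    votes.groupby(["rater_country", "rater_id"])["score"]
    .agg(mean_score="mean", n_rated="count")
)

# Median of those per-rater figures within each country -- the 3.35 vs. 2.98
# and 47 vs. 81 comparisons quoted above are numbers of this kind.
print(per_rater.groupby(level="rater_country").median())
```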

That’s enough about raters. Next up, some pictures of departments.