In the past year or so I have been doing quite a bit of reading and research on undergraduate programs, student life, admissions procedures, and financial aid policies at different colleges (mainly in the Northeast), as my son, now a high school senior, has prepared to apply and then actually applied to a number of them. In truth, my son has only spent a small fraction of the time that I have in such preparatory research, since he has a pretty good idea where he wants to go and where he can get in. Happily, the school he most wants to attend is also one he can almost certainly get into. Even more fortunately, it is our State university, so it is also the most affordable of those he’s considered, and, depending on what offers of financial aid he should receive, possibly the only affordable one.
Some of my son’s classmates applied to our State university for early action, got accepted by early December, and didn’t have to worry about applying elsewhere. That procedure has a lot to be said for it. Nonetheless, based on his school counselor’s recommendation and our feeling that it’s a good idea to have more options, our son has applied to a number of colleges in our area. I should add that these are schools both he and his parents agree might conceivably become the top choice once an acceptance letter (and a sufficient financial aid package) makes it a definite option. He was not shy about rejecting outright several of our suggestions, and not always for any obvious reason. My wife has sometimes gotten the feeling that her suggesting a school gives it the kiss of death. It’s definitely true that his knowing that a favorite high school teacher attended a certain school raises that college’s acceptability quotient considerably, which is actually not bad reasoning.
In our effort to learn more about colleges we already knew were of interest and to discover new ones that might be worth considering, we utilized a few of the numerous college guides that are available. I thought sharing my observations on some of the leading college guides and how they compare might be useful for others about to embark on the same journey of college choice. Note that I said “observations”: these are not thorough reviews.
The first college guide we bought was the Barron’s Profiles of American Colleges: Northeast (also available for other sections of the country). Its listing is quite complete and contains basic facts about all the schools, grouped by State, including the editors’ evaluation of how “selective” each one is. The Barron’s guide also has a section where schools are grouped by selectivity, which can be useful both in identifying schools to investigate further and in sorting them into “reach”, “match”, and “safety” categories. Among the facts this guide includes, and which any guide will have, are the application deadlines, SAT/ACT test submission requirements, percentage of applicants admitted, and percentage of those admitted actually enrolling. Of course, all of the guides have some kind of summary of each school’s most popular majors and what its students are like, including the percentage of those enrolling who were in the top 10% of their high school class. Breakdowns along ethnic and public vs. private school lines are usually provided. Barron’s had all of this standard data, plus more information about the programs offered, student housing, campus security, etc.
College guides generally include some report of the SAT (and/or ACT) scores of enrolling students. Typically this is given as the range in which the middle 50% of the students lie. The Barron’s guide had SAT scores broken down further, by 100-point range, so if you’d like to know what percentage of students at a given school scored, say, over 700 on the Math SAT, you can find it there. This finer-grained reporting can be useful in seeing how your student stacks up, which should help in evaluating not only the chances of admission but also the prospects for “merit-based” financial aid.
One thing to note in gauging the competition for spots in a school is that the test scores of applicants will almost surely differ from those of enrolling students, for two reasons. First, the students who were not accepted will presumably have lower scores, on average, than those accepted. Second, and perhaps less obviously, the scores of those admitted will, for all but the most competitive schools, average higher than the scores of those actually enrolling: some high scorers will have applied to the school as a “safety school,” and will enroll elsewhere unless their more favored options all reject them. Still, being able to compare one’s SAT scores to those of a given school’s students seems useful.
All of the information in the Barron’s guide was actually quite helpful for someone just starting the college search and selection process. One important fact it revealed to us was just how expensive all of the private schools are, from top to bottom. We experienced substantial “sticker shock” in our college search: the total cost of attending a State university outside of one’s own State turns out to be about 70% of what one would pay for a private school (roughly $35,000 as opposed to $50,000!).
The next book we bought was The Best 366 Colleges (latest edition has 368) by the Princeton Review. There were probably at least 250 schools in this book that I wouldn’t have been able to place on such a list with any confidence. A surprisingly large number of these were small schools I had never even heard of. I’m afraid there was also some blind prejudice involved in my failure to know what schools would be found in the group of the “best.” I would never have guessed that the University of Tulsa was a really good school, for example, my hazy impression of Tulsa having been formed by some ancient movie poster featuring oil wells. I noted with some gratification that a number of the small private schools included were ones I had first encountered when a physics professor had purchased my modern physics teaching software OnScreen Particle Physics.
The Princeton Review guide is one of several books that attempt to give a more in-depth picture of what a student’s life is like at different colleges. It uses student questionnaires to gather data on many aspects of college life, both academic and social. Obviously these evaluations are subjective, but that’s not a bad thing to the extent that measuring student satisfaction and perception is the goal. The reliance on student surveys could leave the process vulnerable to a concerted effort to slant certain evaluations in a particular direction, however. The Princeton surveys are done on a three-year cycle with about a third of the whole batch resurveyed each year. An amusing feature of the Princeton guide is its compilation of the top twenty schools in many categories—relating to academics, bureaucracy, social life, and campus environment—based on the student questionnaire responses. Looking for a party school? The Princeton Review may be able to help.
The third guide that we utilized was the Fiske Guide to Colleges 2009. While the Barron’s guide was all-inclusive, the Princeton Review and Fiske guides restricted themselves to the “best” three-hundred-something schools. Finally, I couldn’t resist buying the US News (USN) America’s Best Colleges, which is actually very complete, at least in its listings of four-year colleges. The USN guide is the one that gives an overall rank to schools in various institutional categories: National Universities, Liberal Arts Colleges, etc. Its most subjective feature is the peer rating of faculties. Student opinion is not consulted, but the USN rankings take into account some hard numbers, such as the percentage of freshmen returning for their second year and, of course, the SAT scores of enrolled students. Naturally, USN doesn’t reveal the secret formula it uses to turn these factors into a single number that determines a school’s national ranking. USN also has useful summary information on all the four-year colleges in the country.
We’ve been using last year’s Princeton Review guide, but it seems the text of the college descriptions has not changed for any of the schools I scanned for differences. There are now 368 schools in all in the new edition, which I’ve only seen online. What has been revised are the top-twenty lists, which can change even when only some of the schools have been resurveyed. How consistent are they from year to year? Not as consistent as Princeton’s claims of remarkably stable ratings would lead one to believe.
Northeastern University, located in Boston, is an interesting case to consider, both for highlighting the very subjective nature of the ratings and the lack of consensus among guides, and for hints that some ratings (supposedly based on student surveys) may be manipulable. Fiske, or whoever wrote the Northeastern review for the Fiske guide, clearly does not think very highly of Northeastern. In fact, he rates its academics so low (a 2 out of 5) that one wonders how it made it into the book. I only saw ten other included schools (out of the more than 300) with a rating that low. But over in the Princeton Review (based on student surveys), Northeastern has a respectable academic rating of 79 (out of 100), somewhat better than the University of Illinois, which Princeton pegs at 74 but Fiske gives its highest rating of 5. Who’s right? The question really doesn’t have much meaning, since there is no actual number for a college’s academic rating, but such differences show that relying on a single source may not be a reliable way to gauge even a school’s reputation.
When we look at the US News ratings, we find Northeastern ranked number 96 among “national universities” in the USA, with a peer-based faculty rating of 2.9 (out of 5; Harvard is 4.9), while Illinois comes in nationally at number 40, with a faculty rating of 4.0. So in addition to wondering what Fiske has against Northeastern, we have to consider that Princeton may be underestimating Illinois. But USN doesn’t incorporate student opinion of professors into its rating. Fiske says he does. Princeton definitely does. I just noticed that Caltech was tied with Northeastern in the academic rating last year in the Princeton guide, due to Caltech students’ putting their professors at number 3 on the “Professors Get Low Marks” list. Caltech’s academic rating jumped up 10 points to 88 this year, despite its holding on to third place in that bad-professors category. Perhaps Princeton changed the weighting to de-emphasize student opinion, perhaps after complaints from Caltech.
Princeton acknowledges that they allow schools to contest rankings they perceive to be unfair. There is no way to know how those disputes are settled; nor, of course, is there any reporting of when a change has resulted from a college administration complaint. Princeton notes that some schools refuse to participate in the student surveys. Though there don’t seem to be many colleges currently opting out, Princeton wouldn’t want that number to start rising, especially among elite schools. Fiske gives Caltech a 5 academic rating, and USN has them with a 4.6 peer rating of faculty (and number 6 overall in the National University category). Of course there is a difference between being a highly regarded scientist in one’s field and being a good teacher. Are Caltech professors really that bad as teachers? Is there a language problem? Could Caltech students be blaming their professors for the students’ own shortcomings in handling a very difficult curriculum? I don’t know, but the academic ratings definitely seem to be a case where the Princeton Review differs from Fiske and USN, due to its giving more weight to professors’ teaching ability and availability.
One of the ill-defined but definitely interesting factors that both the Fiske and Princeton Review guides attempt to rate is the “quality of life” (QL) at the various colleges, and there are clear differences of opinion. Not only does Fiske rank Northeastern low in academics, he also gives it a 2 for QL. Princeton, however, gives Northeastern a well-above-average QL rating of 83. I think this must be partly due to a bias against urban schools on Fiske’s part. Boston College and Boston University get a 93 and an 81, respectively, from Princeton, but Fiske gives them just-OK ratings of 3. Fiske loves the University of Vermont, and he certainly makes it sound appealing: “The size is manageable, Burlington is a fabulous college town, and Lake Champlain and the Green Mountains are on your doorstep.” He gives it a 5 for QL, while Princeton says Vermont is right there with Northeastern at 83. UNH, a small-town school, gets a 4 for QL from Fiske, but only 68 from Princeton. NYU gets a Princeton 87 versus a Fiske 3, so the pattern of Princeton favoring urban schools and Fiske favoring non-urban ones seems to hold. Of course, the Princeton results are supposedly based mainly on student surveys and the Fiske ones less so, which seems to me to be saying that students like urban campuses better than Fiske does. It seems clear that it is the locale that causes Northeastern students to rate their experience so highly. Their responses put Northeastern at number 11 for “Great College Towns,” up from 15 a year ago.
A really big jump occurred for Northeastern University in the category of Best Career/Job Placement Services, where it came out of nowhere to take the number 1 spot in the latest edition. Has there really been that big a turnaround in those services in the past three years? Possibly, but mightn’t there have been a little encouragement from the school administration for students to highlight an area that Northeastern should shine in, given its co-op plan? I know nothing about it, but I can’t help wondering. Or perhaps some students took the initiative to boost their school’s standing by organizing a campaign to get people to praise the Job Placement Office in their surveys. There’s no way to know, but also no way to prevent it. I note that Northeastern has been touting its first-place Princeton Review finish in its recruiting letters this year.
There is also a self-consistency problem in the Princeton Review ratings. For example, the College of the Holy Cross gets a high rating (93) for Financial Aid, yet ranks number 16 (up from number 17) in student dissatisfaction with their aid. This isn’t, strictly speaking, a logical contradiction, except that the 93 rating is supposed to be based on student reports as well as school reports. Is there something misleading in the school report, or is there something different about Holy Cross students? Can it be that only a relatively small minority are so dissatisfied with their aid as to make a big point of it, but that this group still outnumbers those at most other schools who want to raise it as a major issue? We’d need to know more of the details on how the ratings and rankings are arrived at to answer that question. It’s just another indication that it would be a mistake to make too much out of any one rating.
My wife and I have been rather dismayed that all of the schools our son has applied to have the notation “Lots of beer drinking” in the margin of the Princeton Review. All but one also add “Hard liquor is popular.” It’s almost as if we were deliberately looking for hard-drinking schools, which is far from the case. I think we may have been a bit negligent on this issue, as I now see that schools without the alcohol notations do exist, though not all that many in the Northeast.
An interesting case of an overall negative trend in the Princeton top-twenty rankings from one year to the next is the University of New Hampshire (UNH). In the 2008-9 edition they were number 7 in the “Party School” category, so it’s good (from a parent’s standpoint) that they’ve dropped a bit to number 11. They have, however, moved up from 4 to 3 on the “Lots of Beer” list, and they now appear at number 20 for “Reefer Madness,” after being unranked the previous year. Is there some sort of rivalry building with the University of Vermont in this cannabis-loving category? UVM had already staked out a place near the top (number 4). UNH is now number 5 for “Homogeneous Student Population” (up from 9) and has maintained its position at number 4 in “Little Race/Class Interaction.” To make matters worse, they check in at number 6 in “Town-Gown Relations are Strained,” which wasn’t mentioned the previous year.
Those changes could be due simply to changed perceptions or, in the case of the strained relations, perhaps some particular incident. New this year at UNH is high dissatisfaction with professors: number 18 for “Professors Get Low Marks” and number 14 for “Least Accessible Professors.” Maybe this is just a statistical fluctuation from positions that were only somewhat lower last year but invisible because they fell outside the top twenty. Or could a few students at UNH be trying to put heat on the school by organizing a campaign to give the faculty bad grades in the Princeton survey? How many would it take, really? We aren’t given figures on how many students participate in the surveys at each school. Since the Princeton surveys are done on a three-year cycle, with about a third of the whole batch resurveyed each year, fluctuations could be due to new answers from a school’s own students or to changes by students at other schools. Considering the fairly big changes at UNH, I’d say they must have been resurveyed. Otherwise, how could the jump from off the list to number 6 in strained community relations have occurred?
The University of Massachusetts at Amherst has seen one major improvement (from a parent’s standpoint). It went from number 4 in “Students Never Study” to not being in the top 20. The list still exists (Florida took the top spot), so there must have been a decided change at UMass over the last three years, assuming the disappearance from the top twenty is due to resurveying. For all I know, the campus newspaper published an editorial urging students not to make UMass look bad by saying they hardly ever studied. How would I know? How would the Princeton Review know? On another positive note, UMass has dropped out of the top twenty for “Long Lines and Red Tape,” where they held down the 17th position a year ago. The students at UMass are nonetheless (perhaps because of increased study time) even more unhappy than before—up to number 18 (from 20) in the “Least Happy Students” category. The campus isn’t looking any better, at least in the students’ eyes: UMass has moved up from 12 to 8 on the “Least Beautiful Campus” list. The students (now that they’ve had to start studying?) are showing their dissatisfaction with professor accessibility, coming in at number 17 for “Least Accessible Professors.”
That should be enough to show everyone that college guide evaluations are not gospel, don’t always agree with one another, and are possibly subject to gaming by schools or organized groups of students. While I wasted way too much time reading them, I know I learned a good deal too. I started writing this several weeks ago, before much had happened in the way of college admission decisions and financial aid offers, and I’ve left everything about those matters in the future tense. I may have more to say later about the whole process of applying for admission and financial aid and deciding what to do.