Charging U
Why is college so expensive? Charging U explores the causes of high college tuition. If you want to know where all your money is going and why college costs so much more now than it did in the past, join host Larry Bernstein as he looks at how individual pricing, government policy, rankings, endowments, loans, luxurious amenities, administrative bloat, athletics, research, and other factors affect the price we pay for college.
5. What Role Do Rankings Play in the High Cost of College?
Influential rankings are based on the wealth of an institution, not how much students learn. This incentivizes colleges to maximize income in order to remain competitive.
Theme music credit: Sunshine by lemonstudiomusic via Pixabay
Episode 5
What Role Do Rankings Play In the High Cost of College?
Why are some colleges ranked higher than others and how does that affect the price of tuition? We will answer this question on today’s episode. I am Larry Bernstein and welcome to Charging U.
“Reputation is an idle and most false imposition; oft got without merit, and lost without deserving.” From Othello, by William Shakespeare
Billy Beane revolutionized the century-old American pastime of baseball. The book Moneyball details how he had his own ideas about how to win games and what constituted value in a baseball player, often disagreeing with the accepted but untested school of thought of the time. When Beane came on the scene, most of the sport’s decision-makers, including scouts, general managers, and managers, judged potential by ossified notions of reputation and physical appearance. Beane instead evaluated players objectively, used new statistics to judge them, and trained them in aspects of the game that others didn’t consider important. The result was that his team, the Oakland A’s, won the second highest number of games (102) in all of Major League Baseball in 2001 and tied for the highest (103) in 2002, including an American League record 20 in a row. The important point is that Billy Beane figured out an inexpensive way to win baseball games. He looked for a method to identify and develop successful players and had an objective way to test whether he was right. His team competed against other teams, and he won with players not judged by other teams’ scouts to be top tier. He did this without spending a lot of money, thereby proving the superiority of his system. The problem for Beane was that when his players’ contracts were up and the opportunity arose, they signed lucrative contracts and moved elsewhere.
So if the unfounded opinion of so-called experts isn't the best way to decide which is the best baseball player or team, then why is it OK to accept the present unsubstantiated college rankings?
Let's take a look at the most well known and influential of the rankings, that of U. S. News and World Report, and see how it decides that one school is better than another.
Though there were previous attempts at ranking colleges, U. S. News and World Report began its ranking in 1983. Other rankings have been created since then but U. S. News and World Report’s ranking remains the most influential. Initially, its rankings were based purely on reputation. In 1988, it started using data such as selectivity in addition to reputation. In 1999, it standardized data for school size. Since then it has tweaked the criteria which it uses to rate colleges and universities. Up until 2023, the following categories were weighted:
Six-year graduation rate and retention of first-year students into the second year: 22%
Graduation rate of Pell Grant recipients: 5%
Overall graduation rate performance: 8%
Peer assessment of reputation: 20%
Faculty resources (compensation, terminal degrees among full-time, not adjunct, professors, class size, student-faculty ratio, percent of faculty that is full-time): 20%
Selectivity by SAT/ACT scores and high school class rank: 7%
Financial resources available per student: 10%
Graduate indebtedness: 5%
Alumni giving: 3%
Every measure except one can be improved by money. One of the most common reasons students leave college is lack of funds. A college that can provide more financial aid can retain students, lower their debt, and support them academically through graduation. It is also easy to prop up graduation rates by reducing academic rigor and allowing grade inflation. A wealthy institution can afford more full-time faculty and pay them more. It can provide smaller classes and generally burnish its reputation. Financial resources per student and alumni giving are direct measures of an institution’s wealth. So, 93% of the rating was directly or indirectly related to how much money an institution has or spends. The other 7%, the selectivity of those enrolled, is correlated with the family wealth of the students.
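The 93% figure follows directly from the weights listed above. Here is a minimal sketch of the tally; the category names are my own shorthand, not official U. S. News labels:

```python
# Pre-2023 U.S. News category weights, as listed above (percent of total score).
weights = {
    "graduation_and_retention": 22,
    "pell_grant_graduation": 5,
    "graduation_rate_performance": 8,
    "peer_reputation": 20,
    "faculty_resources": 20,
    "selectivity": 7,
    "financial_resources_per_student": 10,
    "graduate_indebtedness": 5,
    "alumni_giving": 3,
}

assert sum(weights.values()) == 100  # the weights cover the whole score

# Every category except selectivity can be improved by spending money.
money_linked = sum(w for cat, w in weights.items() if cat != "selectivity")
print(money_linked)  # 93
```

Remove the money-linked categories and only selectivity's 7% remains, and even that, as noted above, correlates with family wealth.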
Notice the other tremendous flaws in the system:
1. It does not measure student learning, knowledge, critical thinking, ability to communicate, or skills acquired.
2. Faculty numbers are measured without taking into consideration their availability, quality, interest, or effectiveness in teaching. There is no incentive for good teaching, since it doesn’t improve a school’s reputation and carries little weight in determining faculty promotion or tenure. There is no reward for doing it well.
3. Reputation itself is directly related to selectivity and research productivity as well as the success of the sports teams. None of those factors has been shown to be a surrogate measure of teaching quality or student learning.
4. The people answering the survey cannot possibly know what is going on at hundreds of other colleges. They must depend on the previous years’ rankings to influence their opinion of a school’s current reputation. This is a major reason there is almost no significant variation in ranking from year to year.
5. Most important for our purposes is that the system judges a school by expenditure per student. This provides a disincentive to cutting costs and keeping tuition down. If two colleges provide the same quality by U. S. News and World Report standards, the one that spends more money to achieve that quality gets the higher ranking. This is illogical and runs counter to all principles of economic efficiency. This measure also rewards an institution which spends large sums of money on programs which have little or no objective benefit.
U. S. News and World Report tweaked its 2023 rankings by adding categories for the graduation rates of first-generation students and for faculty publications, and by no longer measuring class size, students’ high school standing, and alumni giving. This does not significantly reduce the flaws in the system.
Forbes magazine also ranks colleges. It puts more weight on the salaries of graduates and looks at debt and salary relative to college cost. While this gives some insight into whether a degree from the college is worth paying for, salary is a function of the field entered after graduation. It is also dependent on the reputation of the college attended, since those who attend more prestigious colleges receive more lucrative job offers. So we are back to basing a college’s ranking on its reputation. There is also a lag between graduation and the measurement of peak salary. A person who continues on to an advanced degree does not reach her maximum income potential until at least age 30, so an accurate assessment of graduates’ incomes must take place at least 8-10 years after college graduation. Even under ideal circumstances, then, the current ranking is based on events that occurred a decade earlier. Other measures heavily weighted by Forbes, such as the number of graduates on the Forbes American Leaders List and the number of Rhodes or Fulbright scholars, are also just correlates of the institution’s prestige.
The rankings are more important to selective private colleges that attract applicants who are looking for schools with the highest absolute level of achievement which, in turn, allows them to charge high tuition. A higher ranking leads to more accepted students at the top of their high school class. Those students have higher standardized test scores. The number of acceptances offered by the admissions department goes down. When a college becomes more selective, its perceived value rises and the tuition discounts it has to offer to fill its class go down. This results in more revenue for the school. It can then spend a lot of money on programs which marginally increase its reputation. A positive feedback loop is created which multiplies the financial benefit. If a school’s ranking goes down, it does not lower its sticker price, but instead, it is forced to offer bigger discounts to fill its seats.
While few schools are competing directly with the Ivy League, as we mentioned in the discussion on Price Discrimination, there are schools that are competing with schools that are competing with very selective schools. There are only a few degrees of separation between any two colleges. If a college decides to compete in the rankings game, its rivals, who may have been sitting out of the game, now feel compelled to participate. And then the peers of those new contestants feel forced to enter the match. There is a ripple effect escalating the race.
While colleges get to judge high school students based on standardized test scores and grades, students have no objective measure for judging the colleges in which they are about to invest a small fortune. At this time, rank is one of the only measures potential students have for evaluating a college, or that employers have for judging a graduate, even though there is no evidence-based foundation for it.
This phenomenon is known as credentialing. Credentialing or signaling is the perceived status of a graduate based on the prestige of the college that the student attended. It is based on the biases and whims of the “sorting hats” that are the different admissions committees. It does not take into account whether specific financial circumstances forced an applicant to choose an option based on cost or if a social situation caused him to confine himself to a certain geographic area. Individuals who do not possess the wide-ranging skill set and personality traits prized by those admissions offices may not gain admission to prestigious universities, despite being quite outstanding in one or more areas. Credentialing ignores the knowledge gained, skills acquired, effort, individual growth, and maturity which occur between the ages of 18 and 22 or even later. Do we really want to be forever judged by our actions as a 16 year old? Because there is no information updated after high school about an individual’s ability, those assessing a graduate must revert to the last set of data available, namely, high school grades, standardized test scores, and high school teacher evaluations. Since this has already been done by the college application process, it is easiest to defer to those results by looking at the college at which the student matriculated.
Widespread grade inflation has also added to the confusion of objectively assessing the knowledge and skills of college students. The average GPA (grade point average) of all college students has risen a little more than one tenth of a point, 0.10, per decade over the last 50 years. The increase has been variable across institutions. If you go to the Kennedy Presidential Library, you can see John Kennedy’s Harvard report card with its Bs, Cs, and Ds, which it turns out was probably not far from average at the time. In 1963, the average grade point average at Harvard was 2.7. In 2022, it had risen to 3.8, with 30% of students at 3.9 and 16% at 4.0. The result is that grades are compressed at the high end, making it difficult to differentiate the truly talented from the average. Grades at private institutions are generally higher than those on public campuses, further complicating comparisons.
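The arithmetic behind these figures can be checked directly. This sketch simply restates the numbers quoted above (a 0.10-per-decade national rise; Harvard's 2.7 in 1963 and 3.8 in 2022):

```python
# National grade-inflation figures quoted above.
avg_rise_per_decade = 0.10           # average GPA rise per decade, all students
decades = 5                          # roughly the last 50 years
national_rise = avg_rise_per_decade * decades  # about half a grade point overall

# Harvard's rise from 1963 (2.7) to 2022 (3.8), for comparison.
harvard_rise = 3.8 - 2.7
harvard_decades = (2022 - 1963) / 10
harvard_rate = round(harvard_rise / harvard_decades, 2)

print(national_rise)   # 0.5
print(harvard_rate)    # 0.19 per decade, well above the national average
```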
With these factors in mind, in 2008, Boeing tried to come up with a ranking of colleges based on the work performance evaluations of its employees and the schools they attended. The problem with that, besides cost, is that even for a large corporation like Boeing, the number of employees who attended a specific college may be too low to provide statistically reliable comparisons. If there were one graduate of a specific university and she were outstanding, is that sufficient to say that all alumni from that program are outstanding? To get a statistically significant number, there would need to be numerous graduates from many different universities.
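To see why small per-college samples are unreliable, consider a rough standard-error calculation. The numbers here, a performance-rating scale with a standard deviation of 1.0 across employees, are assumptions for illustration, not Boeing's data:

```python
import math

# Assumed spread of performance ratings across employees (illustrative only).
SD = 1.0

def margin_of_error(n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence half-width for a mean rating from n employees."""
    return z * SD / math.sqrt(n)

# With only a handful of graduates from a given college, the uncertainty in
# the average rating swamps any plausible difference between colleges.
for n in (5, 30, 100):
    print(n, round(margin_of_error(n), 2))
```

With five graduates the uncertainty in the mean rating is nearly a full point on the scale; it takes dozens of graduates per college before comparisons start to become meaningful.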
In response to the lack of objective data on colleges, in 2015 the Obama administration released its College Scorecard. It was a compilation of data on race and ethnicity, graduation rates, average annual cost, loan repayment data, and future earnings, but it was not a report card; there was no information on what students learned or what skills they acquired.
Ideally, there should be objective evaluations which quantify the amount of improvement that occurs while in college. They should be standardized across the nation and their results should be easy to understand and accessible to the public. There could be more than one kind of evaluation, that is to say, the system could measure different aspects of knowledge. Some could be fact-based in specific disciplines and others could measure analytic thinking and communication skills.
At this time there are two evaluations in use but neither is widespread.
The NSSE, or National Survey of Student Engagement, is a survey taken by undergraduates in their first and fourth years. As its name implies, it is a survey, not an examination. While the NSSE does not directly measure learning, it asks students to quantify various experiences inside and outside the classroom which have been shown to correlate with learning. It measures indicators such as academic challenge, learning with peers, experiences with faculty, and campus environment. It also measures high-impact practices, including service learning, research with faculty, internships, study abroad, and a capstone senior experience in the student’s field. It can then compare first-year and fourth-year results overall or in individual areas. It also gives the school feedback on which to base action, if it so desires. It is unclear, though, whether the survey correlates with real-life outcomes. Another drawback is that participation in the measured activities can vary widely among the students at a specific institution, so even if the university’s scores look good overall, that doesn’t mean a specific individual had high engagement with faculty or took part in high-impact experiences. In addition, what students consider academically challenging is subjective and may differ between institutions. That limits its usefulness in comparing colleges.
Gary Pike, Director of Institutional Research at Mississippi State University, compared U. S. News and World Report measures with those of the NSSE and found that “academic reputation… was not correlated with the promotion of active learning, student-faculty interaction, or a supportive campus environment as measured by NSSE.”
The College Learning Assessment Plus, or CLA+, is an examination aimed at assessing the improvement in analytic thinking during the college years. It is produced and administered by the Council for Aid to Education. About 200 colleges use it. Many recruit a limited number of student volunteers to take the test and provide them with modest remuneration; not all students take it. It may be given to one group of first-years in the fall and a different group of seniors in the spring, so it is unclear whether this design accurately measures how much a student improves. It takes about 90 minutes to complete and assesses a student’s foundational skills in critical thinking, problem-solving, and written communication. The test-taker is presented with a scenario and associated documents and asked to evaluate the information and present a supported opinion. The responses are open-ended. The CLA+ does not assess a specific fund of knowledge, and it is not supposed to require specific preparation or studying. It can be used to get an objective measure of the improvement in critical problem-solving and communication which occurred during the college years. Results are compiled for an institution. Though individual scores are noted, they do not become associated with a specific person, so there may be a reduced incentive for the student to do his best. It is NOT meant to be a post-collegiate SAT/ACT or to be used as an individual assessment, though recently the individual scores have been converted to a 1600 scale, which sounds ominous for its future use as an individual assessment.
CLA+ results demonstrate that, for the most part, there is a high correlation with high school SAT results. However, there appears to be a large subset of test-takers at most colleges that shows a higher than expected improvement in scores after four years of higher education. If publicizing the results improved the standing of their institution, forward-thinking university administrators and faculty would be incentivized to initiate research into discovering teaching techniques and systems which are effective at improving the analytical and communication skills prized by prospective students, employers, and society.
An exit exam at the time of graduation from college would be one way to objectively evaluate a student. Just as colleges judge high school students on their own merits and not those of the secondary school they attend, an exit exam would allow the graduate to be judged independently, unlinking the individual from the reputation of the school. The individual could stand on his own.
At this point even public disclosure of MCAT, LSAT, GRE, and GMAT scores which show the ability of a limited number of students would be a step in the right direction. These scores may reflect the student’s inherent ability and not reflect how much she has learned in college, but at least they could indicate where the student stands at a point in time. It’s not ideal but it would be a start.
If it turns out that one’s status at the time of college graduation is correlated with one’s status in high school, then why not cut out the middleman, that is, college? Why not save tens if not hundreds of thousands of dollars, or let students attend the cheapest college, and let employers and graduate or professional schools choose on the basis of high school transcripts and SAT/ACT scores?
Unfortunately the current trend is away from testing. The problem associated with that trend is that when no objective, even if imperfect, data is available, it is easier to discriminate on nonobjective factors like reputation, financial status, or family background. The other problem is that at this time, there is no incentive for those who are winning under the current system of reputation without supporting evidence to adopt objective measures of learning and outcome.
As another baseball season unfolds, I am reminded that the team with the highest payroll does not usually win the World Series. The LA Dodgers, though, have certainly spent a lot of money to amass a very talented team this offseason, and I’m getting a little worried that they may prove me wrong. Those with the best reputations do not always perform at the expected level. Sometimes, those who work hard achieve more than those who rest on their laurels. Teams and players that do not adopt new ways of doing things, such as new approaches to using relief pitchers or, in basketball, greater use of the 3-point shot, lose. Colleges should be forced to compete as well. Just as baseball teams have to win games to prove that the system they employ and the money they spend are worth it, so too should colleges have to demonstrate objective value to substantiate the reputation they have and the tuition they charge. Only then will applicants to undergraduate colleges have the objective information needed to make informed choices about what is a fair amount to spend for a specific education. Graduate schools, professional schools, employers, and the public at large would have the information to judge quality accurately. It’s time for the colleges to “Play ball.”
Thank you for listening to Charging U. In this episode, we saw the oversized role rankings play. We showed how the current system is purely a function of the wealth of a university and in no way measures how much students learn, what skills they acquire, or what other tangible value the university adds to make them so prestigious, desirable, and expensive.
In the next episode, we will examine the role intercollegiate athletics plays in making college so expensive.
If you found Charging U informative, please leave a rating and review. Please subscribe so you don’t miss an episode. Encourage everyone you know who previously paid, is currently paying, or who is anticipating paying college tuition to listen. Feel free to email comments to larry@chargingupodcast.com. Until next time, be well and be safe.