Discussion about this post

Timothy Burke

Since you use the analogy, is there good data in the sense that you mention in this essay on:

The quality of doctors.

The quality of mental health therapists.

The quality of lawyers.

The quality of judges.

The quality of accountants.

The quality of plumbers.

The quality of electricians.

The quality of investment managers.

The quality of academic administrators.

The quality of assessment professionals.

The quality of consultants.

The quality of chefs.

You get the point: service economies frustrate attempts to provide fixed quantitative measures of their quality for multiple reasons, but first and foremost because the question of what quality even is remains by its nature unsettled, and because outcomes are by their nature ontologically, for real, difficult to measure and always will be.

You can build metrics for some of these professions that try to make the satisfaction of clients and customers into data, but even then there are very big issues. Ask a patient about doctor satisfaction while the patient is still dependent on that doctor and you will get one kind of data. Ask a patient after they've moved to another professional and you will get another. Ask a patient to assess a doctor who is treating an intrinsically difficult condition with low rates of reported success no matter what and you will hear one thing (in part depending on whether the patient understands the material reality of their condition) and ask a patient to assess a doctor who has treated an easily resolved problem and it will sound as if the doctor has removed a thorn from the lion's paw.

It's just a hard problem and that's all there is to it. It's not a dirty secret, it's life. Whether someone providing a service satisfies someone needing the service is not something we'll ever be able to measure in a way that banishes doubt, ambiguity and judgment.

With education, rather like medicine, we have the extra problem that the experience a student is having might be the only time they ever have that experience. I might in a long life have used many plumbers and begin to have a basis for evaluating the difference between good plumbers and bad ones. But with professors, doctors, and a number of other professionals, it might be that the only people who have a deep basis for comparison are the professionals themselves. And there, yes, you do have a problem, if not quite a "dirty secret", which is that professionals are generally inclined to give each other the benefit of the doubt, and to preserve the integrity of professional relationships with one another more than to provide critical insight into less-than-sufficient practice by another. That's the real hard problem to crack, not the creation of better metrics to break people on the managerial wheel more effectively.

In the end, I really feel that anything that starts from the perspective that most professionals are bad at their jobs is a non-starter. I think for the most part people who aren't doing that well in the estimation of some of their clients are laboring under horrible systemic constraints. In the case of professors, that's teaching 500-1000 people in introductory surveys, with a 4-4 load, unresponsive administrations, no sense of deep values or mission in the institution, and poor compensation. In those circumstances, I don't really look to the professional as the problem, any more than I think to myself that a battlefield doctor in a war zone has maybe had to make compromises in the service they provide.

Rob Nelson

Like your post about silos, you've written things here that have been simmering away in my brain for years. Thank you for putting these words out into the world.

The failure to address poor teaching is endemic and, as you say, exacerbated by using satisfaction surveys to assess teaching. Essentially, they measure how likeable an instructor is, and at best they function as quality assurance...assuming there is a department chair or dean willing to do the thankless work of removing the teachers who actively alienate their students from learning.

Measuring quality in human performance is always a fraught and complicated task. Better measures only get us so far, and I'm skeptical that legislators are in a position to make positive change.

My hope is that all the pressures on institutions of higher learning, AI among them, force a significant rebalancing of resource allocation in favor of teaching. We know what quality improvement looks like in research...it's called peer review.

If a college or university wants to improve teaching it will invite that 25% of teachers "who provide mentorship, complex critique, and human accountability" and have them run a program aimed at elevating teaching quality and assessing it using peer review methods.
