Six Times Psychiatry was Accurately Represented in TV or Film

This post was brought to you by my psych rotation. I won’t be telling you anything, really, about my experiences on the psych unit, because these are some of the most vulnerable patients in all of medicine and it doesn’t feel appropriate. Media representations of psychiatry, on the other hand, I will talk about all day. It’s been on my mind ever since the morning I got to sit in on some ECT sessions. ECT, if you’re not familiar, is Electroconvulsive Therapy, colloquially called shock treatments, and if you are familiar it’s probably because you’ve had some. Otherwise chances are you’ve been exposed to some wildly inaccurate conceptions of this medical procedure.

I haven’t experienced ECT as a patient, and wouldn’t presume to speak for those who have. As a rule, however, modern ECT is not represented in media from the patient’s perspective, and for good reason: that would be hard to film, and boring, because patients go through this procedure under anesthesia. In fact it’s kind of boring to watch IRL, in the best possible sense. ECT is performed with the patient 1) asleep and 2) medically prevented from having muscle spasms associated with some types of naturally occurring seizure. The patient points their foot, and makes a face (caused by involuntary muscle contractions, not pain, see above asleepness), and that’s the whole show. Well I guess the machine also makes an inoffensive beep to make sure everyone knows the shock is being administered. But there’s just…not much to see. Do these important details come through in the way ECT is represented in, say, Homeland?


Haha, not likely! No, Homeland wants you to know that mental health treatment not only ruins careers, it looks and sounds like a living nightmare. At least that’s how it seems by the end of Season 1; I stopped watching after that because of this scene. Also because of the more than slightly exploitative approach to its protagonist (as this blogger put it, “It says a lot that for the most part the obsequious wannabe terrorist was a more sympathetic character than the mentally ill woman he was conning”), because of its casual Islamophobia, and because it is a major pet peeve of mine when supposedly hardened, CIA-employed characters say nonsarcastic lines like, “My god. You’re in love with him!”

Look, ECT isn’t magic, though TBH it can feel that way when a really sick patient who isn’t responding to medications or therapies starts to get better after having this treatment. This isn’t going to be a summary of the evidence base surrounding its use, though please feel free to post one on your own blog. I raise this issue because I think it typifies the representations of psychiatry and mental health care in popular culture. In contrast to the way medical doctor characters are so often written as relatable heroes (Grey’s Anatomy, E.R.), or at worst as lovable scamps even when their behavior is sociopathic (Scrubs, House), our baseline cultural understanding of psychiatry is pretty different. I object to the double standard. There are deep historical reasons for the mistrust between the public and psychiatrists, but yo, there are deep historical reasons to distrust anyone remotely connected to medicine (paging Dr. J. Marion Sims). I don’t believe the double standard is fair or accurate. The stigma attached to mental illness is bad enough–do we have to stigmatize the treatment of those illnesses too?

And so, by way of counteracting the trope of the sadistic power-mad and also just vanilla-mad shrink, I have assembled a collection of representations that I believe give a more realistic picture of psychiatrists. They’re not hero-healers, they’re just folks, and sometimes they help their patients live with incurable and potentially life-ruining diseases. This list is by no means comprehensive–hello, I’m in medical school, I don’t have time to watch good TV, much less shows I hate like The Sopranos–but let me know if you’d like to do an updated content analysis some day. Here is a link to an out-of-date scholarly analysis if that’s your bag.

Below the jump the entire post is spoilers.


What I Learned on my Radiology Rotation

1. Radiologists are, on average, pretty chill, happy people. They also, on average, swear a lot, which relaxes me and frees up the 25% of my mental effort that usually goes toward not dropping F-bombs, for learning.

2. Most kinds of images are not taken by radiologists themselves; they’re done by radiology technicians. I already knew that, but I’d never thought about it before. An experienced and knowledgeable tech makes all the difference in the world. Incidentally it takes them a buttload of time to train, and they’re highly specialized to the kind of images they take. Most of the techs I asked said that good communication with the doctors was everything. Also they would like the doctors to appreciate that some pictures are just really hard to take, and that they are doing their best.

3. Imaging is a consultation, not an order. The x-ray doesn’t spit out an answer; what you get is another doctor’s assessment of the patient’s condition.

4. Therefore, radiologists really, really, really, really, really, want clinicians to provide a clinical history when they order imaging. What they are looking for and how they interpret what they find are both influenced by the patient’s story. You know, like, everything else in medicine. And no, they can’t look it up in the patient’s chart. Another med student on this rotation with me ran the numbers and figured out that if the radiologists at UW took two minutes for each patient to look into their charts, it would add 7 hours to their day.

5. MRA can stand for Magnetic Resonance Angiography. I now plan to imagine the uglier corners of the internet as arteries.


6. The experience that trained me the best for reading images is taking Art Humanities in college. In case you were looking for another reason why premeds should get a liberal arts education.

7. ALARA, as you may know, stands for As Low as Reasonably Achievable, and it is the principle that guides exposure to radiation from medical imaging (and other things). I checked, and the number of US babies named Alara is on the rise. How many of their parents are radiologists, and how many are teenagers that are into Magic the Gathering? We’ll never know.


One Scholarly Article and One Comic about Talking Pills

First, the serious. One of the papers that came out of my dissertation work has just been published in Annals of Epidemiology (wide grin) and is available here. The paper takes advantage of a historical event, which is the halting of one arm of the Women’s Health Initiative Trial in 2002, after the trial found that the use of estrogen and progesterone in midlife women modestly increased risk of coronary heart disease. After that announcement a lot of women quit their hormonal medications cold turkey, and if you happen to be hooked up with a study that was following a cohort of midlife women before and after that date (which I am), that abrupt shift in prescribing and use of medications creates a natural experiment. My adviser said I can’t call it that but I’m doing it anyway cause I already have my PhD and they can’t take it back. Anyway, point is, we used these conditions to look at an outcome that has never been examined well in a large randomized trial of hormonal medications: sleep apnea. We found that up until the Women’s Health Initiative made its announcement, hormone use was associated with less sleep apnea. After that date, though, the association disappeared. The biology of the medications didn’t change, but their social context did. We argue that this is evidence for what we epidemiologists call a Healthy User Bias; in the early period, hormonal medications were a marker for healthiness, and created a spurious association between the meds and lower risk of sleep apnea. So if this is your kind of thing, feel free to check it out at the link above.
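Healthy user bias is easy to see in a toy example. Here is a minimal Python sketch with completely invented numbers (this is not the actual study data or analysis): hormone use has zero true effect on sleep apnea, but before 2002 it tracks underlying health, and afterward it doesn’t.

```python
# All numbers invented for illustration. Suppose hormone use has zero
# true effect on sleep apnea, but healthier women are both more likely
# to use hormones and less likely to have apnea.
# P(apnea) depends only on underlying health, never on the drug:
p_apnea = {"healthy": 0.05, "unhealthy": 0.20}

def apnea_risk(group):
    """Expected apnea risk in a group given its mix of health statuses."""
    n = sum(group.values())
    cases = sum(count * p_apnea[health] for health, count in group.items())
    return cases / n

# "Pre-2002" world: hormone users are mostly the healthy ones.
users_pre    = {"healthy": 800, "unhealthy": 200}
nonusers_pre = {"healthy": 200, "unhealthy": 800}
rr_pre = apnea_risk(users_pre) / apnea_risk(nonusers_pre)
print(round(rr_pre, 2))  # ~0.47: hormones look protective, spuriously

# "Post-2002" world: use no longer tracks health; same mix in both groups.
users_post    = {"healthy": 500, "unhealthy": 500}
nonusers_post = {"healthy": 500, "unhealthy": 500}
rr_post = apnea_risk(users_post) / apnea_risk(nonusers_post)
print(rr_post)  # 1.0: the "association" disappears
```

The drug’s biology never changes in this toy world; the only thing that changes is who takes it, and that alone moves the risk ratio from “protective” to null.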

And now, the silly.


[click on comic to see it larger]

I’ve been doing a lot of book learnin’ lately, and I keep getting hung up on this phrase that I see a lot. “This medication/procedure/practice has no role in the treatment of this disease,” is how it goes. It’s code for, “I don’t care how they taught it when you were in med school, stop doing this now before you hurt someone.” It’s always struck me as a sort of odd euphemism. So I made a comic about it.

Why I Don’t Need a Mirror

Like a lot of things that have made my life better, this one started by accident. When we moved into our current apartment, we decided to take the doors off the closets, and the closet doors happened to be where the full length mirrors were installed. I fully intended to put them back up, but in the time it took us to unpack, I began to notice that not having mirrors was changing my behavior. And it was good.

I’m not sure I was fully aware of the Socially Acceptable Outfit Vortex until I was well out of it. But it would go something like this. I would get dressed. I would stop to check myself in the mirror. Something about what I saw made me unhappy–the look I thought was classic turned out to be dowdy, the color combination was too hard to pull off, the length of the hem made my knees look wide. So I would change my top. Back to the mirror. This combination looks weird. Go change into different pants. Back to the mirror. Pretty soon I was just looping between the dresser and the mirror, rejected clothes piling up on the bed. I have been late to work because of this behavior. I have lost so many hours I could have spent doing literally anything else. The cycle never ended in my leaving the house feeling like I had nailed the right outfit, and was ready to take on the world. In fact it almost never left me feeling okay.


When we moved my mirrors to the basement, this behavior essentially ceased. The frankly pretty nutbars routine I’d been performing since early adolescence just fell out of my life. And I did not miss it. In place of the “how do I look” ritual, I was checking in with how the clothes felt. Over time I proved to myself that I could trust my own judgment. It turns out I am sufficiently competent at getting dressed that it’s not usually necessary to check my work.

Life without a full length mirror requires some changes, but some of them I had already made. For example I had gotten rid of the clothes that didn’t fit me. The range of possible sartorial disasters is actually pretty limited when all your clothes fit. On two or three occasions, I got to the office and found that my bike shorts were a tad too long for my skirt. And one time I wore my shirt inside out until 2 in the afternoon. But nothing bad happened because of those mistakes. I turned my shirt right side out and moved on with my life. Eventually I stopped wearing that skirt, and I didn’t miss that either. I began to gravitate to really reliable, low-maintenance garments that required no thought because I knew I liked how they looked on me. Then I went further.

One day I was complaining to my husband about the unfair double standard in professional dress for women and men. I pointed out that his entire process for getting dressed in the morning was 1) Grab the shirt on top of the shirt stack 2) Grab the pants on top of the pants stack. And he has never once tried something on and then come to me for an opinion on whether it looks too masculine, or not masculine enough. I told him I just wanted what he had. “Well,” he asked me, “What’s stopping you?”

I took that question seriously. The double standard is real, but it’s up to me how much I choose to bend to it. I started asking myself what, actually, was the point of getting dressed. I’m not using clothes to attract a mate or make a best-dressed list. If I want to intimidate my enemies, I have better weapons.

My work clothes in particular only have one job, which is to perform professionalism. I resent that I am graded on my ability to dress preppy (see also this important piece by Jacob Tobia), but that’s a post for another day. Point is, I do not work at Vogue. Nobody cares if I curate a tasteful capsule wardrobe in a variety of neutrals, or wear a giraffe-print jumpsuit to clinic every day, as long as my cleavage is covered and I don’t wear jeans. If there is a professional advantage to looking trendy, or having a varied and creative wardrobe, the payoff is pretty small proportionate to the amount of time, money, and stress that it requires. I think it’s awesome when other people express themselves creatively through their clothing, but when I looked at it hard I had to admit that most of the time I wasn’t expressing myself, I was just trying to pass for acceptable. So I opted out.

I now wear a black sweater and a black pencil skirt pretty much every day (sub in black jeans on the weekend). Every now and then I have the urge to change things up, but I usually regret it. I can now get ready for work in under ten minutes, and usually don’t have to think about my clothes for the rest of the day unless a baby barfs on me. I don’t wonder how I look cause I know my clothes really well, and I also know my own body.

I always thought of people who didn’t have full-length mirrors as people who couldn’t stand to look at themselves. But I’m pretty sure there are a lot of people with mirrors who also can’t stand to look. I can’t speak for anyone else, but I find I treat my body with more respect when I skip the daily appraisal. I don’t need a mirror to tell me how I look if I know how I see myself.

What I Learned on my Primary Care Rotation

The astute readers among you will have noticed a little change to the header of this blog a few months back. As many of you know I indeed recently finished my PhD, and have returned to medical school, where I have been thrown in with a group of people, some of them 10 years my junior, who have not taken a five-year hiatus from their clinical studies. This afternoon I finished my first of the third-year rotations, the clinical courses in which we are sent out to clinics and hospitals around the state to learn from practicing doctors and try not to get in anyone’s way. I was lucky to be assigned to begin with primary care. Lucky because it is a broad overview which I sorely needed, and lucky because it’s the part of med school I had been waiting for, ever since I started back when Bush II was in office. I got to split my time this summer between a rural family practice clinic, a pediatric clinic here in Madison, and a super-cool nonprofit, and frankly, I loved the whole thing. I started my third year wanting to go into primary care, and nothing that has happened in the past eight weeks has changed my mind. In fact I have quaffed deeply of the primary care kool-aid.

Now, mind you, none of this means I expect a good grade in the course. My performance on my first practical exam of the year can’t really be summarized by one gif alone, but perhaps in combination you’ll get some of the feel of it:




Whereas the national board exam was more like:


But in the clinic I was really content. I’m not saying I put my best foot forward with every patient or enjoyed every interaction, cause it’s med school and not The Nexus. Like any other time of my life, the rotation had its highs–like watching a patient and their parent go from “I don’t want to see a med student” to “thank you, that was really helpful.” And it had its lows, like when the earpiece of my stethoscope caught on the hem of my skirt and I accidentally flashed my (male) preceptor–a situation mitigated only by my loyalty to the world’s comfiest and most conservative undies, albeit in flamingo pink.

What I am saying is that I feel more strongly than ever that this is the work I want to do. And I’ve been lucky enough to spend the summer learning from people I really respect, who seem to think I could be good at it some day. I’ve learned a lot in a short time.

So here, in summary, is a list (not exhaustive, thank you very much) of lessons I have learned, and in many cases re-learned, this summer. Some I learned right away, and some I had to mess up repeatedly. Some I didn’t really put together until the rotation was over, and my poor beleaguered preceptors were probably thinking, “How is she not getting this yet?”  Anyway…

  • At this point in my career, my job is to learn how to form an assessment. Even though I’ll pretty much always be wrong.
  • A lot of the job is communication. As much as certain representatives of the medical school have treated my humanities background as an unfortunate handicap, it’s what’s taught me to listen analytically, write, teach, and make an argument. Which is kind of what I do all day now.
  • Before you talk to the patient about anything else, establish the identities of the people they brought with them.
  • My teenage hijinks, though bad decisions at the time, are coming in handy in peds clinic. Apparently, as med students go, I’m hard to shock.
  • I really suck, however, at using tongue depressors. I’ve seen so few oropharynxes that for all I know 50% of children are born without them.
  • I like working with seniors. The demographic with which I have had the best luck establishing rapport is women over fifty, especially if they are “non-compliant,” and/or believe they are psychic.
  • It is possible for a moth to get stuck inside a human ear canal.
  • Rural medicine is for badasses.
  • With respect to rural populations, my cultural competence has a long way to go. I literally do not understand one sentence on this Outdoor Life magazine cover.
  • It’s on me to recognize the limits of my Spanish. I’m most likely to get in trouble when I’m feeling awkward about making someone repeat themselves.
  • That being said, a lot of patients are pretty stoked to find someone who speaks Spanish at all.
  • People who see the world very differently can be very much in sync when it comes to what they value in medicine.
  • A lot of medical students are really excellent people. I have always held my colleagues to a pretty high standard, and sometimes my disappointments have dominated my feelings to the point where I almost forgot just how many fantastic people I had the privilege of knowing in med school. I’ve now met about 20 members of my new class, and I’ve liked all of them. When was the last time you met 20 people in a row in any context and liked them all? I’ve met young people with a lot of wisdom, men who care about women, people who respect their patients not because of some higher calling but just because they basically like people. They’re going to be great doctors.
  • Some doctors are really excellent people, too. My colleagues and I are in danger of having the compassion ground out of us by a tough and often irrational medical education system, before we ever get out and practice independently. But I’m beginning to believe most of us will be ok.

Justice in Data Analysis

Today we celebrate US Independence Day with some more Epidemiology 101. If the word “statistics” gives you yucky feelings, nausea, chills, phantom electric shocks, etc, stay right where you are. This is the post for you. I promise that I will not use this post to make you feel worse about your quantitative skills.


Still with me? Good. Let’s move on to this report released out of Brigham & Women’s Hospital in March of 2014 (I’m not gonna lie, I started writing this post like a year ago but was distracted by a spurt of dissertation productivity). The report is titled “Sex-Specific Medical Research: Why Women’s Health Can’t Wait.” There’s a lot in the report, and if health policy is your bag I recommend you go read the original, or at least the executive summary. What I want to focus on here is a recurring theme in the report pertaining to the statistical analysis of data: even studies that collect data on women may not use it. And you’ll be shocked to hear that the same problem is true for data collected on race and ethnicity. So women of color are even more understudied.

At first blush, this is a kind of puzzling finding. Why would you bother to collect data on women and/or people of color if you’re not going to use it? It’s a choice, but it’s often an unconscious choice. Like many areas of discrimination, implicit bias can influence the scientific process. And because it’s unconscious, people will resist naming it as an injustice.

In the bad old days, medical research was often carried out on men alone–often male undergraduates. It was assumed that conclusions drawn from research in men applied equally well to women. That turned out to be a bad assumption, and the repercussions for women were serious. To name the most famous example, the so-called classic symptoms of a heart attack like chest pain are really only classic in men, and are often absent when women have heart attacks (see also this PSA starring Elizabeth Banks).

The history of racism in research is more complex–a whole field of study. White scientists have used black people’s bodies as models to test medical treatment intended for white patients, and the repercussions are still being felt. Yet the opposite problem was also going on at the same time; it was fully acceptable to conduct studies on all-white populations. No surprise, the assumption that conclusions reached from data on white people applied equally to people of color also proved faulty. For example until a few years ago the treatment of choice for Hepatitis C, a disease which disproportionately affects African-Americans, was five times less likely to cure the disease in black men than white men.

A study population of white men alone is no longer acceptable in the world of medical research. A lot of credit for this change goes to the NIH Revitalization Act, which was passed in 1993, and which requires the inclusion of women and people of color in federally funded studies. So far, so good (mostly). But inclusion of these subpopulations doesn’t actually benefit anyone if the effect of gender or race isn’t examined in the data–and surprisingly often it isn’t.


Let’s focus on race for a moment. When you read about a study that proudly announces the racial diversity of its study population, I encourage you to ask the following question: Was race examined as a factor that could change the effect being studied, or was it treated as a nuisance variable? In epidemiologic lingo I’m talking about the difference between treating race as an effect modifier and treating it as a confounder. Here’s what I mean by that.

Let’s imagine you are a doctor at a teaching hospital with a large and diverse patient population. There is a new drug on the market that is used to treat bad breath (brand name Mintifreshimab). You have noticed that several of your patients that use the anti-halitosis drug have been coming to see you with a strange cough. You are afraid this new drug has a heretofore undiscovered side effect. Coughing was not studied in the large trials that led to the approval of this drug, so you decide to study it yourself. You receive permission to review the medical records of patients at your hospital for research purposes, and you use them to find out how many of the patients who have been prescribed the halitosis drug have returned complaining of cough. For comparison you choose a control group of patients who have been prescribed a new drug to prevent flatulence (brand name Tootnomor), and find out how many of them have returned with coughs as well. This is definitely not the optimal study design for this question, but you do what you can.

You find that patients taking the anti-halitosis drug have a similar number of coughing complaints as patients taking the anti-flatulence drug. Pretty reassuring that the drug doesn’t cause the coughs. But, you astutely recognize that 75% of patients being prescribed the anti-halitosis drug are white, but only 25% of patients on the anti-flatulence drug are white. You also recognize that since asthma is less common in whites than in African-Americans, you would expect to find less coughing in a population with proportionately more white people–including the population of anti-halitosis drug users–regardless of their medications. You don’t have good data on respiratory disease for some reason (let’s say someone just released some really weird malware). So you must account for race in your analysis. What do you do?

If you said “control for race” or “adjust for race” (same thing), then you’re thinking like most people in this situation. You choose a model that essentially takes a complicated average of the drug’s effect in whites and its effect in blacks. This model assumes that even if whites have less coughing overall than blacks, it has nothing to do with the anti-halitosis drug–taking the halitosis drug wouldn’t give any more black people coughs than white people, or vice versa. Proceeding under this assumption, the model adjusts for race and calculates once again that there is no more cough in users than in nonusers. This estimated lack of effect applies to the whole population, “independent” of race.

In many contexts the assumption that a drug or exposure affects the health condition you are studying the same way in people of all races is an excellent assumption. But I want to point out that this kind of model implicitly frames the difference between white patients and black patients as a distortion of the “true” effect of the halitosis drug. What if those differences are important? Not a distortion of the effect you’re studying, but an intrinsic part of it?

The whole reason for studying a racially diverse sample is to investigate whether the drug acts differently on different populations. If the effect of the drug is the same for people of all races, then there would be no need for a diverse sample. You could study the effect of the drug just in African-Americans, or just in whites, and arrive at an estimate that was correct for any population. We know already that that is a bad assumption. Yet by “controlling” for race, you have actually removed the effect of race from your analysis instead of studying it. Henceforth I will be referring to this approach as the Misguided Approach, mostly because I put my real name on my blog, and as an aspiring medical professional it wouldn’t be smart for me to fill my blog with bathroom words.


A better plan–I’m going to call it the Astute Approach–would be to analyze the data in a way that allows the effect of the drug to vary between the two populations. If you do it that way, you might just find out that the drug does have an effect after all–two different effects, to be precise. In our scenario, it turns out that race is a strong predictor of what kind of relationship the drug has to coughing, but you have to be looking for it. When you examine white people alone, you find that halitosis drug users have more cough. Important finding–maybe some people should change their medications. But when you examine black people alone, you find that in this population halitosis users have less cough–whoa! So among African-Americans, this drug could actually help with cough? I mean, this is just one retrospective observational study, and also made up, so let’s not get carried away. But my point is that whether or not a clinician might want to prescribe this drug for a given patient might depend a lot on that patient’s race. This is sometimes called statistical interaction, or effect modification. The Misguided Approach fails to look for evidence of this kind of effect modification by race, and just assumes that race is unimportant. It averages the increased cough in whites and the decreased cough in blacks out to no effect at all.
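Here is the same contrast as a minimal Python sketch, using completely invented counts (Mintifreshimab is fictional, and so are these numbers). The stratified analysis recovers the two opposite effects; a Mantel-Haenszel summary, which assumes one common effect across strata, averages them away to nothing:

```python
# Invented counts of cough by drug use, stratified by race.
# Each stratum: (cases_exposed, n_exposed, cases_unexposed, n_unexposed)
strata = {
    "white": (40, 200, 20, 200),  # risk 20% vs 10% -> RR 2.0
    "black": (20, 200, 40, 200),  # risk 10% vs 20% -> RR 0.5
}

def risk_ratio(a, n1, c, n0):
    """Risk ratio: (a / n1) / (c / n0)."""
    return (a / n1) / (c / n0)

# Astute Approach: estimate the effect within each stratum separately.
stratum_rr = {race: risk_ratio(*counts) for race, counts in strata.items()}
print(stratum_rr)  # {'white': 2.0, 'black': 0.5}

# Misguided Approach: a Mantel-Haenszel summary RR, which assumes the
# effect is the same in every stratum and pools them into one number.
num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata.values())
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata.values())
mh_rr = num / den
print(mh_rr)  # 1.0 -- two real, opposite effects cancel to "no effect"
```

If the stratum-specific estimates had come out similar, the pooled summary would have been perfectly fine. It’s skipping the check that makes the approach Misguided.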

If the assumptions underlying the model that controls/adjusts for race are wrong, and those effects really are different in the two populations, then the estimated average will be correct for neither population. It’s actually worse than studying an all-white sample, because it arrives at an estimated effect that is incorrect for white people, too. An estimate whose generalizability is unknown is better than an estimate that is universally wrong.

So now we’ve got a plan for a more fair application of statistics in medical science. And really, as fruit goes, this is very low hanging. You only have to use the data you already have! But…this analysis plan will only work if you have a large enough sample of people of color to investigate your research question separately for each population. This is what is meant by statistical power: the larger the sample, the better your chances of detecting a real effect if one exists, and the less likely it is that your finding arose as a matter of chance. In the scenario I laid out above, the study data comes from preexisting medical records in a health care system serving a diverse population, so that’s not especially difficult. But for a study that’s gathering new data, study volunteers have to be recruited with particular attention to recruiting enough people from the relevant subpopulation.

Time to turn our attention to some fine print. The NIH continues to be the driver of most research in the U.S., and their policies are incredibly important. To receive NIH funding, a human subjects study has to include women and people of color unless there’s some specific reason not to (if you’re studying prostates, for example, you don’t need to recruit cis women). If the study is a Phase III drug trial, AND if there is a preexisting body of research suggesting that the effect being studied is different in men and women, then you also have to recruit a study population large enough to allow you to analyze the effect of gender. Ditto for the effect of race. If no one has looked for evidence of a gender or race effect, or if you are conducting some other kind of trial, then studies are not required to recruit a large enough sample size to look for differences by gender or race.

The default is to recruit a population that mirrors the 2010 census. So let’s say you take that approach, and recruit a sample of volunteers, 13% of whom identify as African-American. You can have a nice healthy sample of 200 subjects, but you’ll still only have 26 black subjects. So when you study the effect on black people alone, you will have low statistical power, and limited ability to draw any conclusions about the effect of the drug in black people.
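To put numbers on “low statistical power,” here is a rough Python calculation using the normal approximation for comparing two proportions (the 20% vs. 50% effect sizes are invented for illustration):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_group):
    """Approximate power to detect a difference between two proportions
    (two-sided test at alpha = 0.05, normal approximation)."""
    z_crit = 1.96  # critical value for two-sided alpha = 0.05
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return normal_cdf(abs(p1 - p2) / se - z_crit)

# Invented effect: 20% cough risk in one arm vs 50% in the other.
# 26 black subjects split into two arms of 13:
small = power_two_proportions(0.20, 0.50, 13)
print(round(small, 2))  # ~0.39: a real effect this big is usually missed

# The same comparison with 100 subjects per arm:
large = power_two_proportions(0.20, 0.50, 100)
print(round(large, 2))  # ~1.0: nearly always detected
```

Even for a big, clinically obvious difference, 13 subjects per arm detects it well under half the time. That’s what it means to include a subpopulation without being powered to study it.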

Hey it turns out women are underrepresented as research participants. And you know I wouldn’t leave out the intersectional issues here. If the study is powered to investigate the effect of race, and it’s powered to investigate the effect of gender, is it powered to investigate whether the effect of gender is different in different racial/ethnic groups and vice versa? If in the example above you have half men and half women, you’ve got at most 13 black women from which to draw your conclusions. Will women of color really know if these research findings apply to them?

The first time my PhD adviser pointed this fine print out to me, it blew my mind in a way I had never imagined fine print could. You’re required to have women and people of color in your study, but you’re not required to recruit enough of them to look for evidence of gender and race effects? Why even bother then? It practically mandates the Misguided Approach.


Every additional subject makes the study more expensive, and since people of color are less likely to agree to participate in research (whole other blog post there someday I think), there is an associated cost to making greater recruiting efforts. Where I live, researchers find themselves competing for study volunteers, to the point where one of the local clinics serving primarily people of color will only help with recruitment if the investigators can convince them that this research will actually benefit underserved populations–and good on them. Hey, science isn’t easy. Unless you’re doing it really badly.

This is a matter of putting our money where our mouths are. If we are going to use our money, as a nation, to produce research that will benefit more populations, then we have to spend more money on research, or we have to conduct fewer studies. You’ve heard this before with respect to health care, but it applies to research too. When there’s a shortage–and just in case you haven’t heard, there has been a dismal shortage of funds for research for a while now–there is rationing. Conducting fewer studies would mean rationing on the basis of research agendas, and people who could benefit from more kinds of research questions being answered will lose out. People with a particular disease will not get that one other clinical trial that could help treat them. People in a particular job will not get the study that demonstrates that their work is unsafe, and they’ll keep doing the job. That stinks. 

Right now, research findings need only be true in white people to become the new paradigm that applies to all people. A drug only has to work in white people in order to get approved, and then prescribed to people of all races. That’s rationing on the basis of race, and in my opinion it stinks worse, because it is not just frustrating and sad, it is also unjust.

The pervasive failure to examine race effects in research, and the failure to prioritize the investigation of race effects in health, is a research-specific manifestation of colorblind racism. Researchers who take the Misguided Approach aren’t intentionally setting out to commit discrimination. They’d probably be the first to tell you that using an all-white population is wrong–after all, they’re working with data on studies that actually recruited a racially diverse sample. But choosing an analytic approach that fails to “see” race produces research that still leaves people of color underserved.

So, here is your action plan when evaluating human subjects research:

  1. What is the proportion of men and women, or whites and non-whites? Did they classify race and/or ethnicity in a useful way?
  2. Did the investigators look for evidence that the effect they are studying is different in men and women, or different in whites and non-whites? If you see a sentence that says something like “there was no evidence of effect modification by race,” that means that they looked, and it turned out the effect really was the same regardless of race. In which case controlling/adjusting for race is Astute. If they forgot to check that assumption, it is Misguided.
  3. Are women of color being lumped in with men of color or with white women?

Go forth and critique. This is one area of social justice with a comparatively simple solution. Let’s demand it.