Foreword for the lazy: I'm not saying that there's no such thing as sexism. I'm saying this ONE paper has serious methodological issues. Please read my argument first before commenting.
TL;DR: MAYBE THERE REALLY IS A DIFFERENCE IN HOW DANGEROUS PEOPLE PERCEIVE HURRICANES BASED ON GENDER OF THE NAME BUT THIS PAPER SHOULD NOT BE USED TO SUPPORT THAT ARGUMENT. [h/t Dr. Mrs. Schwartz for this lovely summary of my argument]
So hurricanes named after women aren't taken as seriously, right? Sexism kills people in the form of weak responses to hurricanes, right? Well... sometimes it pays to actually look at the study before accepting whatever the WaPo says. This is one of those times. [Note that I have cribbed much of this argument from an acquaintance who I won't name, simply because he had access to the full paper and I do not. I could've just as easily concluded the same things, however. Just a heads up.]
Firstly, note that the paper's comparison only makes sense for data post-1979, because prior to that all hurricanes had feminine names. So trivially, every hurricane death before 1979 came from a female-named storm. For this paper, that means deaths prior to 1979 can't be attributed either way (there are no male-named controls to compare against), which leaves only about 30 years of data. That's not much real data to work with.
More importantly, the authors didn't interview anyone who had actually lived through a named hurricane and could describe whether the femininity of the name changed how they acted. Instead, one experiment asked people to predict how dangerous a storm would be from its name alone: college students rated the "intensity" (in quotes just because that was the word used) of hypothetical storms on a scale from 1 to 7. For example, "Arthur" had a mean "intensity" of 4.246, while "Dolly" had 4.014. That's a gap of about 0.23 on a 7-point scale — close enough that it's hard to tell whether it reflects a real difference in opinion or just chance. And the sample was 346 people, which... is not much.
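To make the "could just be chance" point concrete, here's a rough simulation. The paper's standard deviations aren't available to me, so I'm assuming an SD of about 1.5 on the 1-7 scale (plausible for Likert-style ratings) and, for simplicity, that each name was rated by an independent half of the 346-person sample — both assumptions, not facts from the paper. It draws two groups from the SAME distribution and counts how often a gap of 0.23 or more shows up by luck alone:

```python
import random

random.seed(0)

def simulate_gap(n_per_group=173, sd=1.5, trials=10_000):
    """Draw two groups of ratings from the SAME normal distribution
    (mean 4.0, assumed SD) and count how often the observed
    Arthur-vs-Dolly gap (4.246 - 4.014 = 0.232) appears by chance."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(4.0, sd) for _ in range(n_per_group)]
        b = [random.gauss(4.0, sd) for _ in range(n_per_group)]
        gap = abs(sum(a) / n_per_group - sum(b) / n_per_group)
        if gap >= 0.232:
            hits += 1
    return hits / trials

print(f"chance alone produces a gap that big in "
      f"{simulate_gap():.0%} of runs")
```

Under those assumptions, a gap that size pops up in a decent fraction of pure-noise runs — which is exactly why a 0.23 difference on its own tells you very little.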
Another experiment took 142 volunteers from the Internet and asked them to rate their "evacuation intentions" and the "risk" of fake hurricanes, again from 1 to 7. Hurricane Christina had a mean "evacuation score" of 2.343, while Christopher had 2.939. That gap (about 0.6) is larger, but the sample is such a small n that I wouldn't be surprised if a different run yielded different results. Never mind the problems inherent in that kind of data collection.
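The "different run, different result" worry can also be sketched out. Again I have to assume an SD (1.5) and a split of the 142 volunteers into ~71 per name, neither of which comes from the paper. Even granting the paper its observed ~0.6 gap as the true effect, re-running the experiment at this sample size gives wildly varying estimates:

```python
import random

random.seed(1)

def replicate_gap(true_gap=0.596, n_per_group=71, sd=1.5, trials=20):
    """If the true Christopher-vs-Christina evacuation gap really were
    ~0.6 (2.939 - 2.343), how much would the measured gap bounce around
    across re-runs with only ~71 raters per name? (SD is my guess.)"""
    gaps = []
    for _ in range(trials):
        christina = [random.gauss(2.343, sd) for _ in range(n_per_group)]
        christopher = [random.gauss(2.343 + true_gap, sd)
                       for _ in range(n_per_group)]
        gaps.append(sum(christopher) / n_per_group
                    - sum(christina) / n_per_group)
    return min(gaps), max(gaps)

lo, hi = replicate_gap()
print(f"measured gap ranged from {lo:.2f} to {hi:.2f} over 20 re-runs")
```

With 71 people per condition, the standard error on that gap is around a quarter of a point, so replications can land anywhere from "barely any difference" to "huge difference" — hardly a stable finding.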
This paper was, frankly, bad. The stats were bad, the methods were bad, and they should feel bad. There are plenty of ways to demonstrate sexism in American society, but this paper just isn't it.