I’m really looking forward to tonight’s panel on mega-foundations and democracy, organized by the Big Apple Coffee Party. I will be moderating the event, which takes place at 6:30 tonight at All Souls Church at 1157 Lexington Avenue and is open to the public.
In preparation, I’m reading the article that inspired tonight’s discussion, written by one of the three panelists, Joanne Barkan. The essay, entitled Plutocrats at Work: How Big Philanthropy Undermines Democracy, was published in Dissent Magazine.
Also on the panel will be Gara LaMarche, President and CEO of The Atlantic Philanthropies and a Senior Fellow at NYU’s Robert F. Wagner School of Public Service, who can be seen putting philanthropy on trial here. Joining us as well is Stanley Katz, Director of the Center for Arts and Cultural Policy Studies at Princeton University’s Woodrow Wilson School of Public and International Affairs, who wrote this piece called Beware Of Big Donors.
I hope I see you all there tonight!
Also, this Friday I’ll be giving a talk at Cornell Tech called Making The Case For Data Journalism. Send me an email if you want to come to that; I’ll need to get you on the list. It’s at 12:30 in the Google building.
Finally, I had a lot of fun over the weekend at the Rebellious Lawyering Conference at Yale, an annual conference organized by law students. I was on a panel about Occupy and the legal profession, organized by Zorka Milin, an amazing tax lawyer and activist. The other panelists were the inspirational Akshat Tewary of Occupy the SEC and Rebecca Wilkins, also an activist tax lawyer, who is writing a book about tax law for the rest of us that I can’t wait to read.
This is a guest post by Michael Carey. In his own words: I am a decidedly “alternative” math nerd, working in an environment where I’m on the wrong end of power and status relationships with a handful of conservatives. (Like, “willing to say nasty things about the gays in public” conservatives.) I have been an occasional pseudonymous contributor to the LGBT blog at Slate, Outward, on topics ranging from the poly closet to queer culture.
We need to make ethical porn cheaper and more convenient than its alternatives.
Lastly, because ethical production costs more — even compared to the best-behaved of today’s studios in the San Fernando Valley, let alone free stuff from Bulgaria or Thailand — you need a subsidy scheme that allows at least infrequent users to get their content for free. Citizens would be entitled to a modest annual allotment of voucher codes, cashed in through the exchange.
Producers would compete to sell what the public wants to buy, and would face continuous scrutiny from regulators. Consumers could rest assured that their objects of lust are safe, healthy, and fairly compensated.
It’s still worth thinking about the problem of how we keep sexual entertainment widely available, while ensuring that casual, occasional consumers don’t have to worry about risking “gain[ing] pleasure from an act of rape.” That risk is something that ought to bother you.
Perhaps the “exchange” idea — porno.gov! — isn’t entirely crazy. Sexual release is one of the most basic human drives. Perhaps we should consider at least a modicum of sexual gratification to be a basic need, like shelter, food, and clean water. If so, government subsidy shouldn’t be out of the question. Germany allows those with disabilities to use part of their support stipend to hire a sexual surrogate, and this practice may soon spread to other parts of Europe. We can imagine a society where sex work — even prostitution — is viewed as a valid choice of career, employing competent professionals who are treated with at least the same level of respect accorded to a gerontological nurse, a sewer technician, or anyone else who takes a demanding, sometimes-dirty job, and does it well.
In order to make that happen, though, we need high standards for health, safety, and compensation. I don’t think that’s impossible to achieve, and I’m fairly certain it would be better than the situation we live with now, where workers can’t count on the state to protect them from criminals, and sometimes face public crackdowns on their livelihood that simultaneously infantilize and vilify them. To make something like that happen, we have to first agree that we want it.
This is a guest post by Courtney Gibbons, an assistant professor of mathematics at Hamilton College. You can see her teaching evaluations on ratemyprofessor.com. She would like you to note that she’s been tagged as “hilarious.” Twice.
Lately, my social media has been blowing up with stories about gender bias in higher ed, especially course evaluations. As a 30-something, female math professor, I’m personally invested in this kind of issue. So I’m gratified when I read about well-designed studies that highlight the “vagina tax” in teaching (I didn’t coin this phrase, but I wish I had).
These kinds of studies bring the conversation about bias to the table in a way that academics can understand. We can geek out over the experimental design and over the fact that the research is peer-reviewed and therefore passes some basic legitimacy tests.
Indeed, the conversation finally moves out of the realm of folklore, where we have “known” for some time that students expect women to be nurturing in addition to managing the class, while men just need to keep class on track.
Let me reiterate: as a young woman in academia, I want deans and chairs and presidents to take these observed phenomena seriously when evaluating their professors. I want to talk to my colleagues and my students about these issues. Eventually, I’d like to “fix” them, or at least game them to my advantage. (Just kidding. I’d rather fix them.)
However, let me speak as a mathematician for a minute here: bad interpretations of data don’t advance the cause. There’s beautiful link-bait out there that justifies its conclusions on the flimsy “hey, look at this chart” understanding of big data. Benjamin M. Schmidt created a really beautiful tool to visualize data he scraped from the website ratemyprofessor.com through a process that he sketches on his blog. The best criticisms and caveats come from Schmidt himself.
What I want to examine is the response to the tool, both in the media and among my colleagues. USA Today, HuffPo, and other sites have linked to it, citing it as yet more evidence to support the folklore: students see men as “geniuses” and women as “bossy.” It looks like they found some screenshots (or took a few) and decided to interpret them as provocatively as possible. After playing with the tool for a few minutes, which wasn’t even hard enough to qualify as sleuthing, I came to a very different conclusion.
If you look at the ratings for “genius” and break them down into positive and negative reviews, the word shows up predominantly in negative reviews. I found a few specific reviews, and they read “you have to be a genius to pass” or something along those lines.
[Don’t take my word for it — search Google for:
rate my professors “you have to be a genius”
and you’ll see how students use the word “genius” in reviews of professors. The first page of hits is pretty much all men.]
Here’s the breakdown for “genius”:
Similar results occur with “brilliant”:
Now check out “bossy” and negative reviews:
I thought that the phrase “terrible teacher” was more illuminating, because it’s more likely in reference to the subject of the review, and we’ve got some meaningful occurrences:
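To make the point concrete, here is a minimal sketch of the kind of breakdown described above: count how often a word like “genius” appears in positive versus negative reviews, split by gender. The field names, the sample rows, and the 3.5-star cutoff for a “positive” review are my assumptions for illustration, not part of Schmidt’s actual scraping pipeline.

```python
# Tally occurrences of a term by (gender, review polarity).
# Field names and the 3.5 positivity cutoff are illustrative assumptions.
from collections import Counter

def term_breakdown(rows, term):
    """Count reviews containing `term`, keyed by (gender, polarity)."""
    counts = Counter()
    for row in rows:
        if term in row["review_text"].lower():
            polarity = "positive" if float(row["overall_rating"]) >= 3.5 else "negative"
            counts[(row["gender"], polarity)] += 1
    return counts

# Hypothetical scraped reviews, in the spirit of the ones quoted above.
rows = [
    {"gender": "male", "overall_rating": "2.0",
     "review_text": "You have to be a genius to pass this class."},
    {"gender": "male", "overall_rating": "5.0",
     "review_text": "An absolute genius, and a great teacher."},
    {"gender": "female", "overall_rating": "2.5",
     "review_text": "Terrible teacher, avoid."},
]

print(term_breakdown(rows, "genius"))
```

The point of the split is exactly the one above: the raw frequency of “genius” tells you nothing until you know whether it appears in praise or in complaints like “you have to be a genius to pass.”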
Who’s doing this reporting, and why aren’t we reading these reports more critically? Journalists, get your shit together and report data responsibly. Academics, be a little more skeptical of stories that simply post screenshots of a chart coupled with inciting prose based on conclusions drawn, badly, from hastily scanned data.
Is this tool useless? No. Is it fun to futz around with? Yes.
Is it being reported and understood well? Resounding no!
I think even our students would agree with me: that’s just f*cked up.
This article from the New York Times really interests me. It’s entitled Unlikely Cause Unites the Left and the Right: Justice Reform, and although it doesn’t specifically mention “data driven” approaches in justice reform, it describes “emerging proposals to reduce prison populations, overhaul sentencing, reduce recidivism and take on similar initiatives.”
I think this sentence, especially the reference to reducing recidivism, is code for the evidence-based sentencing that my friend Luis Daniel recently posted about. I recently finished a draft chapter of my book about such “big data” models, and after much research I can assure you that this stuff runs the gamut from putting poor people away for longer because they’re poor to actually focusing resources where they’re needed.
The idea that there’s a coalition that’s taking this on that includes both Koch Industries and the ACLU is fascinating and bizarre and – if I may exhibit a rare moment of optimism – hopeful. In particular I’m desperately hoping they have involved people who understand enough about modeling not to assume that the results of models are “objective”.
There are, in fact, lots of ways to set up data-gathering and usage in the justice system to actively fight against unfairness and unreasonably long incarcerations, rather than to simply codify such practices. I hope some of that conversation happens soon.
I was recently part of a task force for understanding the practices of “big data” from the perspective of the American Association for Public Opinion Research (AAPOR), which is an organization that promotes good standards for studying public opinion.
So for example, AAPOR has a code of ethics for how to track public opinion, and a set of accepted methodologies for correctly using surveys. They involved themselves last year when they criticized the New York Times and CBS for releasing the results of a nationwide poll on Senate races whose opt-in survey method had “little grounding in theory,” and for a lack of transparency.
But here’s the thing: the biggest problem facing the world of public opinion research isn’t online opt-in polls, but rather the temptation to troll Twitter to “see what people are thinking.” And that’s exactly what’s happening, in large part because it’s cheaper. Thus the AAPOR Big Data Report that I helped with.
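To see why “more data” from an opt-in platform doesn’t fix the problem, here is a toy simulation of the selection bias involved. Every number in it (the true support rate, the posting rates) is made up purely for illustration; the point is that when posting behavior correlates with opinion, the platform-based estimate stays biased no matter how large the sample gets, while a modest probability sample does fine.

```python
# Toy simulation: opt-in platform data vs. a probability sample.
# All rates below are invented for illustration.
import random

random.seed(42)

# Population of 100,000: 30% support some policy. Assume supporters are
# three times as likely to post about it -- this assumption drives the bias.
population = []
for _ in range(100_000):
    supports = random.random() < 0.30
    posts = random.random() < (0.15 if supports else 0.05)
    population.append((supports, posts))

true_rate = sum(s for s, _ in population) / len(population)

# "Trawl the platform": we only observe people who post.
posters = [s for s, p in population if p]
platform_estimate = sum(posters) / len(posters)

# Traditional survey: a random sample of 1,000 from the whole population.
sample = random.sample(population, 1000)
survey_estimate = sum(s for s, _ in sample) / len(sample)

print(f"true rate:          {true_rate:.2f}")
print(f"platform estimate:  {platform_estimate:.2f}")  # badly biased upward
print(f"survey estimate:    {survey_estimate:.2f}")    # close to true rate
```

Under these made-up numbers the platform estimate lands around 0.56 against a true rate of 0.30, which is the quality-control problem the report keeps circling back to.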
I think we did a decent job of describing some of the intrinsic difficulties with using big data, specifically around quality control issues, and for that reason I recommend this report to anyone entering the field, or even people already in the field who haven’t thought through this stuff. If you don’t have time to read the full report, here are our recommendations:
1. Surveys and Big Data are complementary data sources, not competing data sources.
There are differences between the approaches, but this should be seen as an advantage rather than a disadvantage. Research is about answering questions, and one way to answer questions is to start utilizing all information available. The availability of Big Data to support research provides a new way to approach old questions as well as an ability to address some new questions that in the past were out of reach. However, the findings that are generated based on Big Data inevitably generate more questions, and some of those questions tend to be best addressed by traditional survey research methods.
2. AAPOR should develop standards for the use of Big Data in survey research when more knowledge has been accumulated.
Using Big Data in statistically valid ways is a challenge. One common misconception is the belief that volume of data can compensate for any other deficiency in the data. AAPOR should develop standards of disclosure and transparency when using Big Data in survey research. AAPOR’s transparency initiative is a good role model that should be extended to other data sources besides surveys.
3. AAPOR should start working with the private sector and other professional organizations to educate its members on Big Data.
The current pace of the Big Data development in itself is a challenge. It is very difficult to keep up with the research and development in the Big Data area. Research on new technology tends to become outdated very fast. There is currently insufficient capacity in the AAPOR community. AAPOR should tap other professional associations, such as the American Statistical Association and the Association for Computing Machinery, to help understand these issues and provide training for other AAPOR members and non-members.
4. AAPOR should inform the public of the risks and benefits of Big Data.
Most users of digital services are unaware of the fact that data formed out of their digital behavior may be reused for other purposes, for both public and private good. AAPOR should be active in public debates and provide training for journalists to improve data-driven journalism. AAPOR should also update its Code of Professional Ethics and Practice to include the collection of digital data outside of surveys. It should work with Institutional Review Boards to facilitate the research use of such data in an ethical fashion.
5. AAPOR should help remove the barrier associated with different uses of terminology.
Effective use of Big Data usually requires a multidisciplinary team consisting of e.g., a domain expert, a researcher, a computer scientist, and a system administrator. Because of the interdisciplinary nature of Big Data, there are many concepts and terms that are defined differently by people with different backgrounds. AAPOR should help remove this barrier by informing its community about the different uses of terminology. Short courses and webinars are successful instruments that AAPOR can use to accomplish this task.
6. AAPOR should take a leading role in working with federal agencies in developing a necessary infrastructure for the use of Big Data in survey research.
Data ownership is not well defined and there is no clear legal framework for the collection and subsequent use of Big Data. There is a need for public-private partnerships to ensure data access and reproducibility. The Office of Management and Budget (OMB) is heavily involved in federal surveys, since it develops guidelines for them, and government-funded research should follow those guidelines. It is important that AAPOR work together with federal statistical agencies on Big Data issues and build capacity in this field. AAPOR’s involvement could include the creation or propagation of shared cloud computing resources.
I was going to blog about some serious stuff this morning but then someone (specifically, my cousin Anne Hall) sent me this socialist and feminist redo of 50 Shades, which made me forget everything else. Favorite line:
“You need to go away and sit and think about commodity fetishism and the compensation of emotional labour. Also your obvious issues with women. By the way, how did you get this number?”
Oh wait, I guess I still have 18 minutes to say something.
So yesterday my Facebook page lit up with mathematicians discussing this USA Today list of top 10 colleges for math majors:
- HARVEY MUDD COLLEGE: CLAREMONT, CALIF.
- COLORADO SCHOOL OF MINES: GOLDEN, COLO.
- MASSACHUSETTS INSTITUTE OF TECHNOLOGY: CAMBRIDGE, MASS.
- UNIVERSITY OF CALIFORNIA – LOS ANGELES
- CARNEGIE MELLON UNIVERSITY: PITTSBURGH, PA.
- UNIVERSITY OF CHICAGO: CHICAGO
- CORNELL UNIVERSITY: ITHACA, N.Y.
- WASHINGTON UNIVERSITY IN ST. LOUIS: SAINT LOUIS, MO.
- UNIVERSITY OF PENNSYLVANIA: PHILADELPHIA
- CALIFORNIA INSTITUTE OF TECHNOLOGY: PASADENA, CALIF.
These are some fine schools, but not at all the typical list one would consider. That makes you wonder: how does one decide where the best place is for a math major? Scanning this list, I have no idea, and it certainly doesn’t correspond to the places that produce the most research mathematicians. But maybe that’s not what USA Today cares about. Here’s the methodology behind the ranking:
| Factor | Weight | Description |
| --- | --- | --- |
| Early-Career Salary | High | Average salary of bachelor’s degree graduates from the college in that major with 0–5 years of experience. |
| Mid-Career Salary | Med | Average salary of bachelor’s degree graduates from the college in that major with 10+ years of experience. |
| Major Focus % | Med | Percentage of students at the college studying that major. |
| Bachelor’s Degree Market Share | Med | Percentage of all U.S. bachelor’s degree graduates in that major represented at that college. |
| Master’s Degree Market Share | Low | Percentage of all U.S. master’s degree graduates in that major represented at that college. |
| Doctoral Degree Market Share | Low | Percentage of all U.S. doctoral degree graduates in that major represented at that college. |
| **Related Major Concentration** | | |
| Related Major Focus (mPower Index) | Med | Measure of how much all the other majors at the college are related to the major. |
| Related Major Breadth | Low | Number of closely related majors offered at the college. |
| Relevant Program-Specific Accreditation | Med | Whether or not the major is accredited by a relevant accrediting body (e.g., ABET for engineering). If there is no obvious accrediting body for a major, this factor is ignored for that major’s rankings. |
| **Overall College Quality** | | |
| Best Colleges Ranking | High | The College Factual Best Colleges ranking, a measure of overall college quality. |
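A ranking like this is essentially a weighted average of normalized factors. Here is a minimal sketch of that idea; the numeric values I assign to “High/Med/Low,” the factor subset, and the example college values are my guesses for illustration, not College Factual’s actual formula.

```python
# Sketch of a weighted-factor ranking score. The High/Med/Low numeric
# weights and all example values are illustrative guesses.
WEIGHTS = {"High": 3.0, "Med": 2.0, "Low": 1.0}

def score(college, factors):
    """Weighted average of a college's normalized factor values in [0, 1].
    Missing factors are skipped, the way the table above says accreditation
    is skipped when no accrediting body exists for a major."""
    total = 0.0
    weight_sum = 0.0
    for name, label in factors:
        if name in college:
            w = WEIGHTS[label]
            total += w * college[name]
            weight_sum += w
    return total / weight_sum

factors = [
    ("early_career_salary", "High"),
    ("mid_career_salary", "Med"),
    ("major_focus", "Med"),
    ("doctoral_market_share", "Low"),
]

# Hypothetical normalized values for one school.
example_college = {"early_career_salary": 0.95, "mid_career_salary": 0.9,
                   "major_focus": 0.8, "doctoral_market_share": 0.1}

print(f"{score(example_college, factors):.3f}")
```

Notice how the weighting does the work: with salary weighted High and doctoral market share weighted Low, a school can score near the top while sending almost nobody on to a PhD, which is exactly my complaint below.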
So here’s the thing. We mathematicians think that money doesn’t buy happiness, and we don’t care so much about early career salaries. That’s the first thing that sticks out.
But also, and I’d say just as importantly, we do care about the extent to which the average undergraduate math major is exposed to research mathematics, which is why we care much more deeply about the “doctoral degree market share” than this ranking does. I mean, I guess to the non-expert, it makes sense to care way more about the undergraduate focus of a college for judging the quality of an undergrad math major, but it all depends on what you want to have happen next.
I’m enjoying how different this list is from the typical inside-baseball list. I don’t even know who it’s for, if anyone, but as a thought experiment it’s interesting to imagine what would happen if all the math departments everywhere suddenly tried to game this particular system, like colleges do with the US News ranking.
So, for example, we’d throw out pure math majors altogether in order to focus our attention on applied math majors that will make loads of money out of college. We’d also compete for students against other majors, something very few math departments actually do. It would be interesting, but I’m not holding my breath.
Really into writing right now but I’d still like to share my reading list with y’all.
- The review of 50 Shades I wish I’d written (hat tip Chris Wiggins). I still can’t decide whether the net effect of the film is bad, because the characters are so terribly stereotypical and stalkerish, or good, because it at least forces people to ask: what do I desire, and how is that different from other people’s desires?
- Alexis Goldstein’s newest piece in Medium.com about the newest pawn in the financial lobbyist’s chess set, community banks.
- Remember SketchFactor? Well now there’s PlaceToLive (hat tip Jordan Ellenberg).
- I’m researching the toxic industry that is for-profit colleges. On the other hand, the Washington Post seems to be shilling for that same industry: here, here, and here (hat tip Auros Harman).
- If you’re a listener, try this interview with Edward Baptist, author of The Half Has Never Been Told: Slavery and the Making of American Capitalism. I’ve heard great things about this interview and its explanation of the earlier versions of “financial innovations” that directly involved slaves. This piece and many many more can be found on the Alt Banking website.