An article in yesterday’s Science Times explained that limiting the salt in your diet doesn’t actually improve health, and could in fact be bad for you. That’s a huge turnaround for a public health rule that has run very deep.
How can this kind of thing happen?
Well, first of all, epidemiologists use complex models to make predictions. In this case, they saw a correlation between high salt intake and high blood pressure, and a separate correlation between high blood pressure and death, so they linked the two.
Trouble is, while very low salt intake might lower blood pressure a little bit, it also, for whatever reason, makes people die a wee bit more often.
As this Scientific American article explains, that “little bit” is actually really small:
Over the long-term, low-salt diets, compared to normal diets, decreased systolic blood pressure (the top number in the blood pressure ratio) in healthy people by 1.1 millimeters of mercury (mmHg) and diastolic blood pressure (the bottom number) by 0.6 mmHg. That is like going from 120/80 to 119/79. The review concluded that “intensive interventions, unsuited to primary care or population prevention programs, provide only minimal reductions in blood pressure during long-term trials.” A 2003 Cochrane review of 57 shorter-term trials similarly concluded that “there is little evidence for long-term benefit from reducing salt intake.”
Moreover, some people react to changing their salt intake with higher, and some with lower blood pressure. Turns out it’s complicated.
I’m a skeptic, especially when it comes to epidemiology. None of this surprises me, and I don’t think it’s the last bombshell we’ll be hearing. But this meta-analysis also might have flaws, so hold your breath for the next pronouncement.
One last thing – they keep saying that it’s too expensive to do this kind of study right, but I’m thinking that by now they might realize the real cost of not doing it right is a loss of the public’s trust in medical research.
I’ve discussed the broken business model that is the credit rating agency system in this country on a few occasions. It directly contributed to the opacity and fraud in the MBS market and to the ensuing financial crisis, for example. And in this post and then this one, I suggest that someone should start an open source version of credit rating agencies. Here’s my explanation:
The system of credit ratings undermines the trust of even the most fervently pro-business entrepreneur out there. The models are knowingly gamed by both sides, and it’s clearly both corrupt and important. It’s also a bipartisan issue: Republicans and Democrats alike should want transparency when it comes to modeling downgrades – at the very least so they can argue against the results in a factual way. There’s no reason I can see why there shouldn’t be broad support for a rule to force the ratings agencies to make their models publicly available. In other words, this isn’t a political game that would score points for one side or the other.
Well, it wasn’t long before Marc Joffe, who had started an open source credit rating agency, contacted me and came to my Occupy group to explain his plan, which I blogged about here. That was almost a year ago.
Today the SEC is going to have something they’re calling a Credit Ratings Roundtable. This is in response to an amendment that Senator Al Franken added to Dodd-Frank which requires the SEC to examine the credit rating industry. From their webpage description of the event:
The roundtable will consist of three panels:
- The first panel will discuss the potential creation of a credit rating assignment system for asset-backed securities.
- The second panel will discuss the effectiveness of the SEC’s current system to encourage unsolicited ratings of asset-backed securities.
- The third panel will discuss other alternatives to the current issuer-pay business model in which the issuer selects and pays the firm it wants to provide credit ratings for its securities.
Marc is going to be one of something like nine people on the third panel. He wrote this op-ed piece about his goal for the panel, a key excerpt being the following:
Section 939A of the Dodd-Frank Act requires regulatory agencies to replace references to NRSRO ratings in their regulations with alternative standards of credit-worthiness. I suggest that the output of a certified, open source credit model be included in regulations as a standard of credit-worthiness.
Just to be clear: the current problem is not only that there’s widespread gaming, but also that there’s a near monopoly by the “big three” credit rating agencies, and for whatever reason that monopoly status has been incredibly well protected by the SEC. They don’t grant “NRSRO” status to credit rating agencies unless the given agency can produce something like 10 letters from clients who will vouch for them providing credit ratings for at least 3 years. You can see why this is a hard business to break into.
The Roundtable was covered yesterday in the Wall Street Journal as well: Ratings Firms Steer Clear of an Overhaul – an unfortunate title if you are trying to be optimistic about the event today. From the WSJ article:
Mr. Franken’s amendment requires the SEC to create a board that would assign a rating firm to evaluate structured-finance deals or come up with another option to eliminate conflicts.
While lawsuits filed against S&P in February by the U.S. government and more than a dozen states refocused unflattering attention on the bond-rating industry, efforts to upend its reliance on issuers have languished, partly because of a lack of consensus on what to do.
I’m just kind of amazed that, given how dirty and obviously broken this industry is, we can’t do better than this. SEC, please start doing your job. How could allowing an open-source credit rating agency hurt our country? How could it make things worse?
Going along with the theme of shaming which I took up yesterday, there was a recent Wall Street Journal article called “When Your Boss Makes You Pay for Being Fat” about new ways employers are trying to “encourage healthy living,” or, put less charitably, “save money on benefits.” From the article:
Until recently, Michelin awarded workers automatic $600 credits toward deductibles, along with extra money for completing health-assessment surveys or participating in a nonbinding “action plan” for wellness. It adopted its stricter policy after its health costs spiked in 2012.
Now, the company will reward only those workers who meet healthy standards for blood pressure, glucose, cholesterol, triglycerides and waist size—under 35 inches for women and 40 inches for men. Employees who hit baseline requirements in three or more categories will receive up to $1,000 to reduce their annual deductibles. Those who don’t qualify must sign up for a health-coaching program in order to earn a smaller credit.
A few comments:
- This policy combines the critical characteristics of shaming, namely 1) a complete lack of empathy and 2) the shifting of blame for a problem entirely onto one segment of the population even though the “obesity epidemic” is a poorly understood cultural phenomenon.
- To the extent that there may be push-back against this or similar policies inside the workplace, there will be very little to stop employers from not hiring fat people in the first place.
- Or for that matter, what’s going to stop employers from using people’s full medical profiles (note: by this I mean the unregulated online profile that Acxiom and other companies collect about you and then sell to employers or advertisers for medical stuff – not the official medical records which are regulated) against them in the hiring process? Who owns the new-fangled health analytics models anyway?
- We already do that to poor people by basing hiring decisions on their credit scores.
As a fat person, I’ve dealt with a lot of public shaming in my life. I’ve gotten so used to it, I’m more an observer than a victim most of the time. That’s kind of cool because it allows me to think about it abstractly.
I’ve come up with three dimensions for thinking about this issue.
- When is shame useful?
- When is it appropriate?
- When does it help solve a problem?
Note it can be useful even if it doesn’t help solve a problem – one of the characteristics of shame is that the person doing the shaming has broken off all sense of responsibility for whatever the issue is, and sometimes that’s really the only goal. If the shaming campaign is effective, the shamed person or group is exhibited as solely responsible, and the shamer does not display any empathy. It hasn’t solved a problem but at least it’s clear who’s holding the bag.
The lack of empathy which characterizes shaming behavior makes it very easy to spot. And extremely nasty.
Let’s look at some examples of shaming through this lens:
Useful but not appropriate, doesn’t solve a problem
Example 1) it’s both fat kids and their parents who are to blame for childhood obesity:
Example 2) It’s poor mothers that are to blame for poverty:
These campaigns are not going to solve any problems, but they do seem politically useful – a way of doubling down on the people suffering from problems in our society. Not only will they suffer from them, but they will also be blamed for them.
Inappropriate, not useful, possibly solving a short-term discipline problem
Hey parents: shaming your kids might solve your short-term problem of having independent-minded kids, but it doesn’t lead to long-term confidence and fulfillment.
Appropriate, useful, solves a problem
Here’s when shaming is possibly appropriate and useful and solves a problem: when there have been crimes committed that affect other people needlessly or carelessly, and where we don’t want to let it happen again.
For example, the owner of the Bangladeshi factory which collapsed, killing more than 1,000 people, got arrested and publicly shamed. This is appropriate, since he knowingly put people at risk in a shoddy building, adding three extra floors to improve his profits.
Note shaming that guy isn’t going to bring back those dead people, but it might prevent other people from doing what he did. In that sense it solves the problem of seemingly nonexistent safety codes in Bangladesh, and to some extent the question of how much we Americans care about cheap clothes versus conditions in factories which make our clothes. Not completely, of course. Update: Major Retailers Join Plan for Greater Safety in Bangladesh
Another example of appropriate shame would be some of the villains of the financial crisis. We in Alt Banking did our best in this regard when we made the 52 Shades of Greed card deck. Here’s Robert Rubin:
I’m no expert on this stuff, but I do have a way of looking at it.
One thing about shame is that the people who actually deserve shame are not particularly susceptible to feeling it (I saw that first hand when I saw Ina Drew in person last month, which I wrote about here). Some people are shameless.
That means that shame, whatever its purpose, is not really about making an individual change their behavior. Shame is really more about setting the rules of society straight: notifying people in general about what’s acceptable and what’s not.
From my perspective, we’ve shown ourselves much more willing to shame poor people, fat people, and our own children than to shame the actual villains who walk among us who deserve such treatment.
Shame on us.
Aunt Pythia’s advice: online dating, probabilistic programming, children, and sex in the teachers’ lounge
Aunt Pythia is yet again gratified to find a few new questions in her inbox this morning, but as usual, she’s running quite low. After reading and enjoying the column below, please consider making up some fabricated, melodramatic dilemma out of whole cloth, preferably combining sex with something nerdy (see below for example) and, more importantly:
Please submit your fake sex question for Aunt Pythia at the bottom of this page!
Dear Aunt Pythia,
I met this guy online and we met for three dates. I pinged him to meet up again, but he pleads busyness (he’s an academic, he has grading to do). Thing is, when I go on the dating website, I see that he’s been active–NOT communicating with me. I haven’t heard from him for a week. I sent him a quick, friendly email yesterday in which I did, yes, indicate that I was on the dating site and saw that he was active there. Is this guy a player, blowing me off, or genuinely busy with grading at the end of the semester?
Bewildered in Boston
I’m afraid that the evidence is pretty good that he’s blowing you off. To prevent this from happening in the future, I have a few suggestions.
Namely, you can’t prevent this kind of thing from happening in the future – not the part where some guy who seems nice blows you off. But you can prevent yourself from caring quite so much and stalking him online (honestly I don’t know why those dating sites allow you to check on other people’s activities. It seems like a recipe for disaster to me).
And the best way to do that is to have a rotation of at least 3 guys that you’re dating at a time, which means being in communication with even more than 3, until one gets serious and sticks. That way you won’t care if one of them is lying to you, and you probably won’t even notice, and it will be more about what you have time to deal with and less about fretting.
By the way, this guy could be genuinely busy and just using a few minutes online to procrastinate between grading papers. But you’ll never find that out if you stress out and send him accusing emails.
Dear Aunt Pythia,
I’m an algebraic topologist trying to learn a bit of data science on the side. Around MIT I’ve heard a tremendous amount of buzz about “probabilistic programming,” mostly focused around its abilities to abstract away fancy mathematics and lower the barrier to entry faced by modelers. I am wondering if you, as a person who often gets her hands dirty with real data, have opinions on the QUERY formalism as espoused here? Are probabilistic programming languages the future of applied machine learning?
I’ve never heard of this stuff before you just sent me the link. And I think I probably know why.
You see, the authors have a goal in mind, which is to claim that their work simulates human intelligence. For that they need some kind of sense of randomness, in order to claim they’re simulating creativity or at least some kind of prerequisite for creativity – something in the world of the unexpected.
But in my world, where we use algorithms to help see patterns and make business decisions, it’s kind of the opposite. If anything, we want interpretable algorithms, which we can explain in words. It wouldn’t make sense for us to explain what we’ve implemented and at some point in our explanation say, “… and then we added an element of randomness to the whole thing!”
Actually, that’s not quite true – I can think of one example. Namely, I’ve often thought that as a way of pushing back against the “filter bubble” effect, which I wrote about here, one should get a tailored list of search items plus something totally random. Of course there are plenty of ways to accomplish a random pick. I can only imagine using this for marketing purposes.
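That filter-bubble countermeasure is simple enough to sketch. Here’s a minimal toy version, assuming a hypothetical `search_with_serendipity` function (my name, not anyone’s real API) that takes an already-tailored result list and tacks one uniformly random document from the rest of the corpus onto the end:

```python
import random

def search_with_serendipity(tailored_results, corpus, n_tailored=5, seed=None):
    """Return the top tailored hits plus one uniformly random document
    drawn from the rest of the corpus, as a crude counterweight to the
    filter-bubble effect."""
    rng = random.Random(seed)
    picks = list(tailored_results[:n_tailored])
    # Draw the wildcard from everything NOT already being shown.
    remaining = [doc for doc in corpus if doc not in picks]
    if remaining:
        picks.append(rng.choice(remaining))
    return picks

# Toy example: three tailored hits, plus one surprise from the corpus.
corpus = ["a", "b", "c", "d", "e", "f"]
tailored = ["a", "b", "c"]
results = search_with_serendipity(tailored, corpus, n_tailored=3, seed=42)
```

The tailored hits come back unchanged; only the final slot is a surprise, so the user’s expected results aren’t degraded.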
Thanks for the link!
Dear Aunt Pythia,
I heard that some of the “real” reasons couples choose to have children are peer pressure and boredom. Is that true? I never understood the appeal of children, since they seem to suck the life (and money) out of people for one reason or another.
Tony’s Tentatively-tied Tubes
I give the same piece of advice to everyone I meet, namely: don’t have children!
I think there should be a test you have to take, where it’s really hard but it’s not graded, and also really expensive, and then the test itself shits all over your shirt, and then afterwards the test proctor tells you in no uncertain terms that you’ve failed the test, and that means you shouldn’t have children. And if you still want children after all of that, then maybe you should go ahead and have them, but only after talking to me or someone else with lots of kids about how much work they are.
Don’t get me wrong, I freaking LOVE my kids. But I’m basically insane. In any case I definitely don’t feel the right kind of insanity emanating from you, so please don’t have any kids.
Dear Aunt Pythia,
I feel like a math fraud. I teach algebra and geometry but don’t have a math degree (I just took the math exam for the single subject credential). I love math but fear I do every problem by brute force, taking twice as long as my fellow faculty members who show wicked fast cleverness in our meetings. Should I just sleep with everyone in the department to feel more like part of the gang? I am not finicky when it comes to orientation.
Faking under circumstances, keen math enthusiast
I really appreciate how you mixed the math question with the sex question. Right on right on!
I infer from documents like this that you are a high school math teacher. If you don’t mind I’ll address the sex question first, then the math question.
Honestly, and it may just be me, but I’m pretty sure it’s not, I’m hoping that all high school teachers have sex with each other at all times in the teachers’ lounge. Isn’t that what it’s for? Besides smoking up and complaining about annoying kids, of course. So yes, I totally approve of the plan to sleep with everyone in the department. Please report back.
Now on to the math: one thing that’s awesome about having a teacher who both loves math and is slow is that it’s incredibly relatable for the students. In other words, if you’re a student, what would you rather have for a teacher, someone who loves math and works through each problem diligently, or someone who is neutral or bored with math, and speeds through everything like a hot knife through warm butter?
Considering this, I’d say your best bet is to project your love for math to your students, by explaining your thinking at all times, and never forgetting how you thought about stuff when you were just learning it, and always telling them how cool math is. If you do all this you could easily be the best math teacher in that school.
Good luck with both projects!
Please submit your question to Aunt Pythia!
I was recently interviewed by Caroline Chen, a graduate student at Columbia’s Journalism School, about the status of Mochizuki’s proof of the ABC Conjecture. I think she found me through my previous post on the subject.
Anyway, her article just came out, and I like it and wanted to share it, even though I don’t like the title (“The Paradox of the Proof”) because I don’t like the word paradox (when someone calls something a paradox, it means they are making an assumption that they don’t want to examine). But that’s just a pet peeve – the article is nice, and it features my buddies Moon and Jordan and my husband Johan.
Read the article here.
Yesterday I wrote this short post about my concerns about the emerging field of e-discovery. As usual the comments were amazing and informative. By the end of the day yesterday I realized I needed to make a much more nuanced point here.
Namely, I see a tacit choice being made, probably by judges or court-appointed “experts”, on how machine learning is used in discovery, and I think that the field could get better or worse. I think we need to urgently discuss this matter, before we wander into a crazy place.
And to be sure, the current discovery process is fraught with opacity and human judgment, so complaining about those features being present in a machine learning version of discovery is unreasonable – the question is whether it’s better or worse than the current system.
Making it worse: private code, opacity
The way I see it, if we allow private companies to build black box machines that we can’t peer into, nor keep track of as they change versions, then we’ll never know why a given set of documents was deemed “relevant” in a given case. We can’t, for example, check to see if the code was modified to be more friendly to a given side.
Setting aside the healthy competition for clients that this new revenue source will attract, the resulting feedback loop will likely be a negative one, whereby private companies use the cheapest version they can get away with to achieve the best results (for their clients) that they can argue for.
Making it better: open source code, reproducibility
What we should be striving for is to use only open source software, saved in a repository, so we can document exactly what happened with a given corpus and a given version of the tools. There will still be an industry around cleaning the data, feeding in the documents, training the algorithm (while documenting how that works), and interpreting the results. Data scientists will still get paid.
In other words, instead of asking for interpretability, which is a huge ask considering the massive scale of the work being done, we should, at the very least, be able to ask for reproducibility of the e-discovery, as well as transparency in the code itself.
Why reproducibility? Then we can go back in time, or rather scholars can, and test how things might have changed if a different version of the code were used, for example. This could create a feedback loop crucial to improve the code itself over time, and to improve best practices for using that code.
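To give a sense of what “reproducibility” could mean concretely, here’s a minimal sketch of a run manifest. Everything here is hypothetical (the function name, the field names): the idea is just that recording the code version, a cryptographic fingerprint of every input document, and the set of documents deemed relevant is enough for a scholar to later re-run the same versioned open source code on the same corpus and check the result:

```python
import hashlib
import json

def ediscovery_manifest(code_version, corpus, relevant_ids):
    """Record enough to reproduce an e-discovery run later: the exact
    code version, a SHA-256 fingerprint of each input document, and
    the IDs of the documents the model deemed relevant."""
    doc_hashes = {
        doc_id: hashlib.sha256(text.encode("utf-8")).hexdigest()
        for doc_id, text in corpus.items()
    }
    return json.dumps(
        {
            "code_version": code_version,
            "documents": doc_hashes,
            "relevant": sorted(relevant_ids),
        },
        sort_keys=True,
        indent=2,
    )

# Toy run: two documents, one flagged relevant.
corpus = {"doc1": "email about the merger", "doc2": "lunch order"}
manifest = ediscovery_manifest("v1.4.2", corpus, {"doc1"})
```

Because the manifest is deterministic (sorted keys, content hashes rather than raw text), two runs over the same corpus with the same code version produce byte-identical manifests, which is exactly the property you’d want an auditor or scholar to be able to check.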