March for Science Knitwear #Resist

My buddy Brian Conrad alerted me yesterday to a very welcome three-way intersection:

 

[Image: graphics courtesy of meta-chart.com]

So, anyone surprised? I’m not. But I am excited. Here’s what we’ve got on Ravelry:

 

[Image: Ravelry screenshot]

You can get the pattern for free here, and you can read an article about the concept here.

But, there’s more! Because knitting nerd activists are endlessly creative, we have the following generalizations on the above idea:

 

[Image: combination pussy hat and resistor hat]

 

 

 

[Image: chemistry hat]

 

 

[Image: DNA helix hat]

 

 

[Image: water molecule hat]

 

[Image: March for Trees hat]

 

Amazing!

All patterns available here.


Guest post: Make Your Browsing Noiszy

This is a guest post by Angela Grammatas, a digital analytics consultant specializing in worldwide implementations of online analytics tools. She loves powerful data, but doesn’t love having artificial intelligence use it in creepy ways. She also has synesthesia and paints numbers (Instagram: @angelagrammatas).

 

This week, the US Congress voted to allow ISPs (Internet Service Providers) to collect and sell your internet data without your consent.  Erasing your web data – or not allowing any to be collected in the first place – is getting more difficult, and less effective.  Hiding from data collection isn’t working.

It’s time for a completely different approach.  Instead of restricting our data, it’s time to create more – a lot more.  A flood of meaningless data could create a noisy cover that makes our true behavior hard to understand.  This could be a path to bursting the filter bubble, one person at a time.  And if enough people participate, we could collectively render some datasets completely meaningless.

Why should we care about where our data goes?

Organizations rely on data to “target” users online and serve them relevant (read, “more likely to be clicked”) advertisements.  Plenty of targeting is innocuous and can be genuinely helpful.  For example, getting a sale offer on a recently-viewed product can be a win-win; the company makes a sale, and the customer is happy about the discount.  Targeting (and re-targeting) makes that possible.

But when the pool of data gets larger and more integrated, the implications change.  For example: let’s imagine that “Jane Internet” loves cats, and visits cats.com daily.  One day, she’s considering how to vote on a local proposition, and she does some research by visiting two political news sites at opposite ends of the spectrum.  She reads a relevant article on each site, getting a balanced view of the issue.  Let’s imagine that the “Yes on Prop A” campaign has access to retargeting capabilities that utilize that large, blended dataset.  Soon, Jane starts to see “Vote Yes on Prop A” advertisements on many unrelated websites, with the message that Prop A will be great for local wildlife.

Jane has no way of knowing this, but that pro-wildlife message has been chosen specifically for her, because of her past visits to cats.com.  The ads are everywhere online (for Jane), so Jane believes that this message is a primary “Yes on A” talking point, and she’s encouraged to vote in agreement.  The “No on A” campaign never has any opportunity to discuss or debate the point.  They may not even know that the cats-related topic has been raised, because they’ve never even been exposed to it – that message is reserved for retargeting campaigns directed at people like Jane.  Jane’s attempt to be a well-informed voter has been usurped by retargeting.  And, perhaps most importantly, Jane doesn’t even know this has happened.

How could meaningless data help?

Jane was targeted because of her visits to cats.com, and the (reasonable) assumption that cats.com visitors are animal lovers.  What if she’d spent just as much time visiting sites related to other topics – desserts.com, running.com, and supportthelibrary.com?  Many organizations want to access potential customers who are interested in desserts, running, and libraries.  If Jane was visiting all of those sites, she’d be seeing a variety of targeted messages, exposing her to different points of view while also decreasing the impact of any single message.  Jane would start to break out of the “filter bubble” created by targeted ads.  In that case, Jane may not see any ads related to Prop A – or she might see ads that address the issue from a variety of perspectives.  For Jane, the playing field would be leveled again.

But if Janes all over the country also began to visit a much wider variety of sites, they could level the playing field for everyone.  Targeting algorithms that identify “people like Jane” look for similarities in web browsing behavior, and assume that these people will have similar ad-clicking behavior.  If the dataset becomes more randomized, those correlations will be weaker, and even when similar groups are identified, they won’t result in as many clicks – driving the cost of the ads up, and reducing the incentive to retarget.

The reality, of course, is that Jane doesn’t have the time or inclination to spend hours clicking random links online just to create her own personal meaningless dataset. That’s why I created Noiszy (http://noiszy.com), a free browser plugin that runs in the background on Jane’s computer (or yours!) and creates real-but-meaningless web data – digital “noise.” It visits and navigates around websites from within the user’s browser, leaving misleading digital footprints wherever it goes.
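
For the curious, here is a rough Python sketch of the underlying idea (not Noiszy’s actual implementation, which runs inside the browser and follows links on real pages): repeatedly visit sites drawn at random from a list of unrelated topics, at random intervals, so the resulting clickstream says little about the person behind the machine. The site list, timing, and the make_noise helper are invented for illustration.

    import random
    import time

    import requests  # third-party HTTP library: pip install requests

    # Hypothetical list of harmless, unrelated sites -- stand-ins for the varied
    # destinations a noise generator might visit.
    NOISE_SITES = [
        "https://example.com",
        "https://example.org",
        "https://example.net",
    ]

    def make_noise(visits=20, min_pause=5.0, max_pause=60.0):
        """Fetch randomly chosen sites at random intervals, generating
        real-but-meaningless traffic from this machine."""
        for _ in range(visits):
            url = random.choice(NOISE_SITES)
            try:
                requests.get(url, timeout=10)
                print("visited", url)
            except requests.RequestException as err:
                print("skipped", url, err)
            time.sleep(random.uniform(min_pause, max_pause))

    if __name__ == "__main__":
        make_noise()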

When organizations lose the ability to “figure us out” from our browsing data, they’ll have to work harder to build products and content that people willingly engage with and share data with, rather than simply chasing clicks and impressions. Could “fake data” lead to the end of “fake news”? Targeting algorithms are happily churning away on our data, pushing whatever messages the highest bidder wants us to see, and we have no obvious way to feed back into the cycle. Meaningless data can help us hack this system, and bring about a conversation we deeply need to have: how should algorithms be (re)built for the greatest good?


TomTown Ramblers playing this Sunday!

Please join us if you can!



How to Manage Our Algorithmic Overlords

My latest piece in Bloomberg View just came out:

How to Manage Our Algorithmic Overlords


Guest Post: In Praise of Globes

This is a guest post by Ernie Davis, Professor of Computer Science at NYU. Ernie has a BS in Math from MIT (1977) and a PhD in Computer Science from Yale (1984). He does research in artificial intelligence, knowledge representation, and automated commonsense reasoning. He and his father, Philip Davis, are editors of Mathematics, Substance and Surmise: Views on the Ontology and Meaning of Mathematics, published by Springer.

Any government which genuinely cared about education would see to it that a globe map, at present an expensive rarity, was accessible to every school child.
— George Orwell, “As I Please” February 11, 1944

 

The decision by the Boston school system to replace maps of the world using the Mercator projection with maps using the Gall-Peters projection has garnered a lot of favorable press from outlets such as NPR, The Guardian, Newsweek, and many others.

[Image: Mercator map of the world]

[Image: Gall-Peters map of the world]

 

The pros and cons of these two maps have been debated extensively for many years; there was even an episode of The West Wing that dealt with the subject. I would give the current collection of news articles a B for clarity and accuracy. If you read the Guardian article carefully from beginning to end, you can get a clear idea of the issues. But if you only skim the beginning, then phrases like “more accurate map” and “amend[ing] 500 years of distortion” are likely to leave you with the impressions, first, that the Gall-Peters map is indisputably more accurate, and, second, that the Mercator map was devised as an expression of Eurocentrism, neither of which is correct.

I hope that that is not what the students in Boston are being taught about the two maps. Still more, I hope that they are not being taught that these two maps are two competing theories about the geography of the world and that choosing between them is all a matter of your point of view and your political preferences, and that there is no actual truth of the matter. I wish I felt more confident of that.

The well-known truth is this: The Gall-Peters map accurately displays relative area, whereas the Mercator map grossly distorts relative area. However, the Mercator map accurately displays shapes and direction, whereas the Gall-Peters map substantially distorts those. Each has its strengths and weaknesses.

As an illustration, consider Suriname and Iceland. Both are roughly squarish countries. Suriname’s area is 161,470 square km; it has an east-west span of about 460 km and an almost equal north-south span. Iceland is smaller; it has an area of 102,775 square km; its east-west span is about 390 km and its north-south span is about 300 km (these spans were hand-measured from a map and are not precise).

[Image: Iceland and Suriname]

If you take the Gall-Peters map of the world and cut out the maps of Suriname and Iceland, then the relative areas will be correct; Suriname will be about 1.6 times as large as Iceland. However, both will have bad distortions in aspect ratio, in opposite directions: the north-south span of Suriname will appear substantially larger than its east-west span, and the north-south span of Iceland will appear very much smaller than its east-west span. Both of these are terrible maps of their individual countries. If you are used to looking at maps of Iceland and then look at Iceland on the Gall-Peters map, it will look seriously wrong, for good reason.

On the other hand, if you take a Mercator map of the world and cut out the maps of Suriname and Iceland, then each one by itself is exactly the right shape; each is a fine map of its individual country. However, Iceland will be 3 times the area of Suriname instead of 2/3 the area.

[Image: comparison of the two maps]

The explanation of both of these distortions is simple. At the equator, the 460 km east-west span of Suriname corresponds to 4 degrees of longitude. At 64 degrees latitude, the 390 km east-west span of Iceland corresponds to 9 degrees of longitude. However, both the Gall-Peters map and the Mercator map draw lines of longitude as vertical lines, so Iceland ends up measuring 2-1/2 times as wide as Suriname on both maps; and, generally, an east-west mile in Iceland is displayed about 2-1/2 times as long as one in Suriname. The two maps adjust the north-south scale in opposite ways, depending on their different purposes. The Gall-Peters map, to preserve the area relation, must make a north-south mile in Suriname 2-1/2 times as long as one in Iceland; it does this partly by stretching Suriname north-south and partly by shrinking Iceland north-south. The Mercator map, to preserve shape, must keep the ratio of the north-south mile to the east-west mile always equal to 1; therefore, a north-south mile in Iceland is also 2-1/2 times as long as a north-south mile in Suriname.
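
For anyone who wants to check the arithmetic, here is a small Python sketch using the post’s approximate, hand-measured spans. The only geographic fact it relies on is that the length of a degree of longitude shrinks in proportion to the cosine of the latitude; with these rough inputs the east-west stretch factor comes out around 2.3, consistent with the roughly 2-1/2 used above.

    import math

    KM_PER_DEG_AT_EQUATOR = 111.32  # approximate length of one degree of longitude at the equator

    def degrees_of_longitude(span_km, latitude_deg):
        """Degrees of longitude covered by an east-west distance at a given latitude."""
        return span_km / (KM_PER_DEG_AT_EQUATOR * math.cos(math.radians(latitude_deg)))

    # Rough spans from the post (hand-measured, so only approximate).
    suriname_deg = degrees_of_longitude(460, 4)   # about 4 degrees
    iceland_deg = degrees_of_longitude(390, 64)   # about 8 degrees

    # On any map that draws lines of longitude as vertical lines, an east-west
    # kilometre in Iceland is drawn this many times longer than one in Suriname.
    stretch = (iceland_deg / 390) / (suriname_deg / 460)
    print(round(suriname_deg, 1), round(iceland_deg, 1), round(stretch, 1))

    # Gall-Peters compensates by scaling north-south distances by about 1/stretch
    # (preserving area); Mercator scales them by about stretch (preserving shape).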

There are other kinds of area-preserving maps besides Gall-Peters, and there are other kinds of shape-preserving maps besides Mercator. And there are many other kinds of maps; this article by Max Galka surveys the Miller, the Winkel Tripel, and the Authagraph, which gets his vote. (The Authagraph is quite accurate as regards the land masses; the distortion gets pushed off onto the oceans, so the relative positions of the continents are bizarre.) However, there is absolutely no planar map of the world that can succeed in being both shape-preserving and area-preserving. There is just no way to perfectly flatten out the surface of a sphere.

§

Most educational debates do not have any final, ideal answer. What should be taught in literature, in history, in science, even in math, are matters of eternal debate, with no possible final resolution that is uncontentious, or apolitical, or value-free. However, this question of the proper way to display the geography of the earth is an exception. The obvious, perfect solution is — drumroll — a globe. A globe is an (essentially) perfect scale model of the geography of the earth, with no distortion of any kind. A child or adult who gets used to consulting a globe on questions of large-scale geography can get an exactly right idea of relative sizes, shapes, and positions. (They should also have a good atlas, for small-scale geography.)

As quoted above, George Orwell said in 1944 that globes were an “expensive rarity”. Presumably in 1944, getting globes to schoolchildren was not the top priority of the British government. But now they are really not expensive. You can get an inexpensive 6-inch globe for $10. You can get a good 11-inch globe for $30. I do find it surprising that none of the articles I’ve seen about the choice of maps even mentions this best option.

Of course if you have a globe for reference then it becomes enormously easier to explain how the Mercator, Gall-Peters, and other flat maps work, and what they get right and wrong.

In addition to its huge value in teaching geography, there are all kinds of cool things you can do and teach with a globe, particularly if you take it out of its stand:

  • Great circle. You can easily illustrate the great circle path from any point to any other point by stretching a string between them and pulling it tight. No possible planar map correctly represents large scale geodesics. (A short numerical sketch of great-circle distance appears after this list.)
  • Seasons. Apparently a surprisingly large fraction even of college-educated people think that the earth is closer to the sun in summer and farther in winter. A globe makes it easier to illustrate both that the days are longer and that the light is more direct in the summer than in the winter. You can also easily explain the significance of the poles, the equator, and the polar and tropical circles.
  • Astronomy of the earth and the moon. With a second, smaller ball, you can illustrate the interaction of the earth and the moon, and explain things like eclipses. (In fact, you can make a scale model; if you have an 11-inch globe for the earth, then the moon is a 3-inch ball, about 28 feet away.)
  • Other celestial astronomy. With some additional props, you can show how the appearance of the night sky changes with latitude and with the time of year. You can explain why latitude has always been easy to determine, whereas the determination of longitude on board ship was one of the major problems for eighteenth century science. You can explain the significance of the ecliptic and the zodiac and the precession of the equinoxes.
  • Geometry. Purely from the standpoint of teaching geometry, a globe has the amazing property of being a sphere on which there are thousands of easily identifiable points with memorable names. So an enterprising high school math teacher with good students can use it as a source of examples for an introduction to spherical geometry and thus non-Euclidean geometry. You can also illustrate three-dimensional rotations and the fact that they don’t commute. The original meaning of “geometry”, after all, is “measuring the earth.”
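
Here, as promised in the great-circle item above, is a small Python sketch of what the stretched string measures: the haversine formula gives the great-circle distance between two points from their latitudes and longitudes. The coordinates (roughly Reykjavik and Paramaribo) are approximate and chosen only to connect the two countries discussed earlier.

    import math

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Great-circle distance between two points given in degrees (haversine formula)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = p2 - p1
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * radius_km * math.asin(math.sqrt(a))

    # Roughly Reykjavik to Paramaribo: the path a string pulled tight on a globe
    # would follow between Iceland and Suriname, about 7,000 km.
    print(round(great_circle_km(64.1, -21.9, 5.9, -55.2)))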

At the minimum, hopefully, early and extensive exposure to globes will deter students from growing up to believe that the earth is flat.


Guest post: the age of algorithms

Artie has kindly allowed me to post his thoughtful email to me regarding my NYU conversation with Julia Angwin last month.

This is a guest post by Arthur Doskow, who is currently retired but remains interested in the application and overapplication of mathematical and data-oriented techniques in business and society. Artie has a BS in Math and an MS that is technically in Urban Engineering, but the coursework was mostly in Operations Research. He spent the largest part of his professional life working for a large telco (that need not be named) on protocols, interconnection testing and network security. He is a co-inventor on several patents. He also volunteers as a tutor.

Dear Dr. O’Neil and Ms. Angwin,

I had the pleasure of watching the livestream of your discussion at NYU on February 15. I wanted to offer a few thoughts. I’ll try to be brief.

  1. Algorithms are difficult, and the ones that were discussed were being asked to make difficult decisions. Although it was not discussed, it would be a mistake to assume a priori that there is an effective mechanized and quantitative process by which good decisions can be made with regard to any particular matter. If someone cannot describe in detail how they would evaluate a teacher, or make a credit decision or a hiring decision or a parole decision, then it’s hard to imagine how they would devise an algorithm that would reliably perform the function in their stead. While it seems intuitively obvious that there are better teachers and worse teachers, reformed convicts and likely recidivist criminals and other similar distinctions, it is not (or should not be) equally obvious that the location of an individual on these continua can be reliably determined by quantitative methods. Reliance on a quantitative decision methodology essentially replaces a (perhaps arbitrary) individual bias with what may be a reliable and consistent algorithmic bias. Whether or not that represents an improvement must be assessed on a situation by situation basis.
  2. Beyond this stark “solvability” issue, of course, are the issues of how to set objectives for how an algorithm should perform (this was discussed with respect to the possible performance objectives of a parole evaluation system) and the devising, validating and implementing of a prospective system. This is a significant and demanding set of activities for any organization, but the alternative of procuring an outsourced “black box” solution requires, at the least, an understanding and an assessment of how these issues were addressed.
  3. If an organization is considering outsourcing an algorithmic decision system, the RFP process offers them an invaluable opportunity to learn and assess how a proposed system is designed and how it will work – What inputs does it use? How does its decision engine operate? How has it been validated? How will it cover certain test cases? Where has it been used? To what effect? Etc. Organizations that do not take advantage of an RFP process to ask these detailed questions and demand thorough and responsive answers have only themselves to blame.
  4. While a developers’ code of ethics is certainly a good thing, the development, marketing and support of a proposed solution is a shared task for which all members of the team must share responsibility – coders, system designers and specifiers, testers, marketers, trainers, support staff, executives. There is no single point of responsibility that can guarantee either a correct or an ethical implementation. Perhaps, in the same way that a CEO must personally sign off on all financial filings, the CEO of a company offering an evaluative system should be required to sign off on the legality, effectiveness and accuracy of claims made regarding the system.
  5. Software contracts are notoriously developer-friendly, basically absolving the developer of all possible consequences arising out of the use of their product. This needs to change, particularly in the case of systems sold as “black box” solutions to a purchaser’s needs, and contracts should be negotiated in which the developer retains significant responsibility and liability.
  6. As I think was pointed out, there is a broad range of analysis and modeling techniques, ranging from expert systems that seek to encode human knowledge, to heuristic learning systems such as neural nets. While heuristic systems have the potential to ferret out non-intuitive relationships, their results obviously require a much higher degree of scrutiny. Part of me wonders how IBM and Watson would do at developing decision systems.
  7. Extensive testing and analysis should be required before any system “goes live”. It is disappointing to hear that “algorithm auditing” does not seem to be a thriving business, and, depending on the definition of “algorithm auditing”, I may be suggesting even more. Perhaps “algorithm testing” would be a more attractive-sounding service name. Beyond requiring an analytical assessment of underlying data requirements and assessment algorithms, systems should be tested using an extensive set of test cases. Test cases should be assessed in advance by other (e.g., human expert) means, and system results should be examined for plausibility and for sanity. Another set of test cases should assess performance with extreme (e.g., best case, worst case) scenarios to check for system sanity. Another possibility is “side by side” testing, in which the system will “shadow” the current implementation, either concurrently or in retrospect, and the results will be compared.
  8. Psychological and other pre-employment tests, described in Weapons of Math Destruction, are problematic in two ways. First is whether it is appropriate to conduct them at all, and second is whether they are effective in their stated purpose (i.e., to select the best prospective employees, or those best matched to the position in question). Certainly, competency testing is an appropriate part of candidate selection, but whether psychological characteristics are a component of competency is arguable, at best. At the very least, however, such testing should be assessed as to whether it predicts what it claims to predict, and whether that characteristic is emblematic of work effectiveness. How to conduct such testing would require some creativity. Testing could be conducted on an “incoming class” of employees, whether prior to hiring, or after hiring with the test results being sequestered (neither reported to company management nor used in any evaluation process). After some period (1 – 2 years), the qualitative measures of employee performance and effectiveness could be compared to the sequestered test results and examined for correlation. Another possibility would be to identify a disinterested company with employees performing similar work. (By disinterested, I mean disinterested in using the evaluative test in question.) Employees of that company could be asked to undergo “risk free” testing, with results again being sequestered from their employer. The quantitative test results could then be compared to the qualitative measures of employee performance and effectiveness used by that employer. Whatever one thinks of such testing, as Weapons of Math Destruction correctly points out, to the extent to which it is used, efforts should be made to test and improve its efficacy. To the extent that such testing is promoted by an outside party, that party should be ready, willing and able to demonstrate observed effectiveness. (A toy sketch of this sequester-and-correlate check appears after this list.)
  9. An interesting alternative to a proprietary black box system would be what might be called a meta-system, a configurable engine which would allow its procurer to specify the inputs, weightings and the manner in which they are used to formulate a decision, perhaps offering a drag and drop software interface to specify the decision algorithm. Such a system would leave the fundamentals of the decision algorithm design to the purchasing company, but simply facilitate its implementation.
  10. One must always be cautious about the possibility of inherent bias in data. As a simple example, recidivism is most easily estimated by the proportion of released convicts who are re-arrested. But if recidivism is actually defined by the percentage of released convicts who return to criminal life, then the estimate is likely skewed in several ways. Some recidivists will be caught; others will not. For example, some types of crime are more heavily investigated than others, leading to higher re-arrest rates. Further, even among perpetrators of the same crime, investigation and enforcement may well be targeted more to some areas than to others.
  11. As was pointed out during the discussion, being fair and being humane may cost money. And this is the real issue with many algorithms. In economists’ terms, the inhumanity associated with an algorithm could be referred to as an externality. Optimization has its origins in solutions to problems in the inanimate world: how to inspect mass-produced parts for flaws, how to cut a board to obtain the most salable pieces of lumber, how to minimize the lengths of circuit traces on a PC board. There were problems that touched on human behavior, such as scheduling and traveling salesman problems, but not to the extent that they ignored humane considerations. We are now at the point where we have human beings being compared to poisonous Skittles, and where life-altering decisions of great import (hiring, firing, parole, assessment, scheduling, etc.) are being subjected to optimization processes, often of questionable validity, which objectify people, view them as resources or threats, and give little or no consideration to the very human consequences of their deployment. Assuming that your good work can drive us to this consensus, there is a fork in the road as to how it can be addressed. One way would be to attempt to build humane costs, benefits and constraints into the models being deployed and optimize on that basis. The other is to stand back, monitor applications for their human costs, and attempt to address them iteratively. Or, as Yogi said, you can come to the fork and take it.
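
As referenced in the pre-employment testing item (8) above, here is a toy Python sketch of the sequester-and-correlate idea: test scores are recorded at hiring but hidden from management, and only later compared with ordinary performance ratings. The numbers are invented purely for illustration; the point is the workflow, not the data.

    from statistics import correlation  # available in Python 3.10+

    # Hypothetical sequestered test scores, recorded at hiring but never shown
    # to managers or used in evaluations.
    sequestered_scores = [62, 74, 81, 55, 90, 68, 77]

    # Qualitative performance ratings for the same employees, gathered 1-2 years
    # later through the company's ordinary review process (1 = poor, 5 = excellent).
    later_ratings = [3, 4, 4, 2, 3, 4, 5]

    # If the test predicts performance, the correlation should be clearly positive;
    # a value near zero suggests the test is not measuring what it claims to.
    print(round(correlation(sequestered_scores, later_ratings), 2))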

Dystopian Bloomberg Posts: Price Discrimination and Snap

This is just out on Bloomberg:

The Dystopian Future of Price Discrimination

And this came out Monday:

Snap Needs to Get Inside Your Head
