
More creepy models

September 13, 2016

One of the best things about having my book out (finally!) is that once people read it, or hear interviews or read blog posts about it, they sometimes send me more examples of creepy models. I have two examples today.

The first is a company called Joberate, which scores current employees (with a “J-score”) on their risk of looking for another job (hat tip Marc Sobel). It’s like an overbearing, nosy boss, but in automated and scaled digital form. They claim they’re not creepy because they use only publicly available data.

Next, we’ve got Litify (hat tip Nikolai Avteniev), a legal analytics firm that’s attempting to bring automated scoring to litigation finance. Litify advertises itself thus:

Litify is led by an experienced executive team, including one of the world’s most influential and successful lawyers, well known VC’s and software visionaries. Litify will transform the way legal services are delivered connecting the firm and the client with new apps and will use artificial intelligence to predict the outcome of legal matters, changing the economy of law. Litify.com will become a household consumer name for getting legal assistance and make legal advice dramatically more accessible to the people…

What could possibly go wrong?

  1. September 13, 2016 at 9:28 am

I wrote about this one a few years back: Argus Analytics. It’s a big one, as they buy up all the credit card data, score you in many ways, and sell the scores and data to banks and so on. Even the CFPB is a customer; they say it’s in order to perform testing, but I’m not quite sure that’s their “only” interest here. Your data gets sold from Argus to health insurers and more.


As I said a few years ago, we need to index and license all data sellers: who are they, and what are they selling? Once our data gets repackaged (put in another database and denormalized again), we can’t find the origins either, so we can’t go back and request that the company fix or repair the flawed data they report about us, and we’ll all end up on the Data Exchange some day. I have had a campaign on this topic but have not done a lot to promote it other than publishing updates. People don’t seem to get this, so there has not been a lot of opportunity to really run with it, but I said it about 4-5 years ago: we need to license these companies, and of course the companies would all fight such a law too:) It’s better to keep the public in dark rooms, like mushrooms, so they can profit.



  2. September 13, 2016 at 11:28 am

FWIW, I’ll pass this along second-hand (I’ve never had direct contact with the company), but in the category of controlling lower-level staff with strict, burdensome ‘performance’ algorithms, I used to hear quite ill things about Sodexo, a European-based company whose American division contracts out employees to a number of large organizations and companies.


  3. September 13, 2016 at 11:35 am

Joberate could complete the cycle and offer prospective employees, for a price, a list of companies that use them, so those companies can be avoided. Then, they could offer their corporate customers the option to keep their names off the list for a premium fee. AND THEN, for an even higher fee, prospective employees could get access to the list of companies who paid to stay off the first list. AND THEN …


  4. Klondike Jack
    September 13, 2016 at 12:07 pm

(Snark Warning) I think J. K. Rowling has to share a small part of the blame for the increased acceptance of magical thinking in the algorithm gold rush. All together now, with a wave and a swish, Algoria Bankacountia!


  5. September 13, 2016 at 1:04 pm

What worries me most about these scoring models is that they may become causal and not just predictive. If someone has to pay higher interest rates, is charged more expensive insurance, is paid less, etc. because of scoring systems, then it is likely that their diminished opportunities and increased costs will cause future problems that then perpetuate a downward cycle. It would be quite easy for a process of cumulative causation to build up and do real harm to families.


  6. September 13, 2016 at 6:52 pm

    I’ve only read a review of the book, but I immediately thought of the Google search ranking algorithm. It’s totally opaque, totally unaccountable, has no definition of correctness beyond advertiser satisfaction, and for day-to-day purposes has become the embodiment of truth. It’s hard to think of a model that has a higher impact in daily life.


  7. September 14, 2016 at 12:07 pm

    hmmm, ICYMI… figure I should get in the line of all those folks probably sending you links to this story: http://tinyurl.com/hrc66fo


  8. Michael O'Neil
    September 14, 2016 at 3:19 pm

    Congrats on your National Book award long list! This is great!


  9. September 14, 2016 at 4:02 pm

Thanks for the insightful book, excerpts of which I managed to glean laboriously from amazon.com, the book reviews, and from this website of yours. Little as it is, the gleaning influenced me enough to write a column in the daily National Courier, across the world here in Karachi, Pakistan. The write-up can be read at: http://nationalcourier.pk/e-paper/zoom.php?url=http://nationalcourier.pk/e-paper/wp-content/uploads/2016/09/2-Opinion-Sep_13-2000×2785.jpg

    We have our own distinct sneaky models in this part of the world, a few of which I have mentioned in the write-up.


  10. September 16, 2016 at 8:39 am

    Topics on WNYC this morning: clopenings and CompStat. Thank you for unveiling the stealthiness of big data.


    • Leon
      September 17, 2016 at 2:09 am

      Who knew that topology would be topic of interest on public radio?


  11. Jacob Bassett
    September 16, 2016 at 12:03 pm

Cathy…. this might fit with your theme. I know I enjoyed it.

    Apparently I cannot paste the link. Please google “The Moth Data Mining for Dates”.

    Love your book.


Comments are closed.