AI Skeptics: How to Think about Writing in the Age of AI (with John Warner)

Hello friends! Jake and I had an amazing time reading a wonderful book by John Warner called More than Words and chatting with the author. Take a listen if you have time!

Apple

Spotify

YouTube

Categories: Uncategorized

AI Skeptics: Sex Ed for Ed Tech (with Kiri Soares)

February 17, 2026

Hello friends! Please find a moment to listen to this week’s AI Skeptics episode with my very good friend Kiri Soares, a (retiring!) principal of a 6-12 school in Brooklyn. She’s talking here about Ed Tech and AI, and how she frames it as a risk akin to sex or drugs. Fascinating!

Apple

Spotify

YouTube


AI Skeptics: The Future of Work (With Kevin Frazier)

Hello, friends! And good morning! If we’re all done wiping away our tears of joy at last night’s Bad Bunny performance (if not the football score), take a moment to listen to the newest AI Skeptics podcast. This week we talked with legal scholar and AI optimist Kevin Frazier about the future of work.

Apple

Spotify

YouTube


AI Skeptics Podcast: The AI Bubble Episode (with Tom Adams)

Guys! This is a timely subject, we think! Listen to us predict the bursting of the AI bubble:

Spotify

Apple

YouTube


AI Skeptics Podcast: All Buzzwords Mean Surveillance (with Sherry Wong)

Hello friends! I’m psyched (and somewhat late due to the snowstorm) to share this week’s podcast with our good friend Sherry Wong:

Spotify

Apple

YouTube


AI Skeptics podcast: The Braverman Episode

I’m very psyched to announce the newest AI Skeptics podcast episode, the Braverman Episode, in which Jake and I interview my good friend Nicole Aschoff (I learned about her a long long time ago) about her and our favorite book.

Spotify

Apple

YouTube


New AI Skeptics Podcast: We are all AI Skeptics now (except for our AI Overlords)

It’s Monday, so there’s a new AI Skeptics podcast, called We are all AI Skeptics now (except for our AI Overlords):

YouTube

Spotify

Apple


New AI Skeptics Podcast: AI versus Artists and Educators

Here’s the newest AI Skeptics podcast, which comes out every Monday morning!

It’s with Becky Jaffe, and we talk about AI in her fields of art and education.

Spotify

Apple

YouTube

I hope you enjoy it! Please comment below, or ask for new topics for the podcast!


New podcast!! AI Skeptics

I’m very psyched to tell you all that I finally belong to the 2020s, because I have started a podcast with my friend and colleague Jake Appel. It’s called the AI Skeptics, and it’s a new project of my non-profit OCEAN.

Each episode is between 30 and 35 minutes long, so it’s a pretty short dive into each topic, but we hope it’s also fun and thought-provoking.

Here’s the art:

So far we have five episodes, with new ones coming early Monday mornings. Here are some links:

Episode 1: Hello world (with Celeste Kidd)

Spotify YouTube Apple

Episode 2: Does anybody actually care about AI ethics? (with Aaron Abrams)

Spotify YouTube Apple

Episode 3: Don’t trust AI!

Spotify YouTube Apple

Episode 4: Wait, can Trump actually make AI regulation illegal? (with Tom Adams)

Spotify YouTube Apple

Episode 5: Recession for the young and creepy AI toys

Spotify YouTube Apple

I hope you all enjoy it! Please give comments and tell me what you think! I’m especially excited for this coming Monday’s episode.


The impending crypto market crash

I have deeply disliked crypto ever since, way back in Occupy, we at Alternative Banking had a visit from some Bitcoin evangelists who were claiming that Bitcoin and the blockchain would somehow make banking more fair and democratic.

They couldn’t explain their reasoning, though, and just tried to get us all to sign up and start mining coins, which at the time, probably 2012, was relatively easy. But I wouldn’t do it, on the principle that it was a useless thing of no intrinsic value, and a huge waste of energy to boot. Come back when you can get rewarded for saving energy instead of wasting it, I remember saying.

I looked at the news this morning and realized that what I was interested in back then, and why I joined Occupy in the first place, has come full circle with crypto.

Namely, the idea that the government (at the time, the Obama administration) was propping up a market (the mortgage securities specifically and the stock market more generally) for political reasons, which meant that the folks who should have been out of their jobs and possibly charged with crimes were getting off with their multimillion dollar year-end bonuses.

It seemed outrageous that nobody was losing their shirt except for the victims of the whole scam, because it was political suicide to allow the markets to collapse. And this was especially true because so many people had been convinced to invest their retirement in the stock market. As pensions were replaced by stock portfolios, it became a requirement to never allow stocks to dip too low for too long. And the result was truly unnatural and perverse.

I wasn’t the only one to be outraged, of course. The Tea Party was founded on the notion that somehow the borrowers for these loans – especially folks of color, for some mysterious reason – were somehow to blame.

Well here we are, back at the apex of a bubble, this time in cryptocurrency instead of weird hyper-inflated mortgage securities. But this time it’s not just that “Americans won’t be happy” if the bubble bursts, because they were convinced to put their retirement into that bubble. This time it’s Trump and his actual family that is so heavily invested in crypto that they personally stand to lose billions of dollars if and when the bubble bursts.

Which leads me to wonder, what will they end up doing to prevent this particular bubble from bursting? I don’t know, but I’m guessing it will be really gross.

What’s the right way to talk about AI?

Yesterday I came across this article in the Atlantic, written by Matteo Wong, entitled The AI Industry is Radicalizing.

It makes a strong case that, while the hype men are over-hyping the new technology, the critics are too dismissive. Wong quotes Emily Bender and Alex Hanna’s new book The AI Con as describing AI as “a racist pile of linear algebra”.

Full disclosure: about a week before their title was announced (which was about a year and a half ago), I was thinking of writing a book similar in theme, and I even had the same title in mind: “The AI Con”! So I get it. And to be clear, I haven’t read Bender and Hanna’s entire book, so it’s possible they do not actually dismiss the technology.

And yet, I think Wong has a point. AI is not going away; it’s real, it’s replacing people at their jobs, and we have to grapple with it seriously.

Wong goes on to describe the escalating war of words, sometimes between Gary Marcus and the true believers. The point, Wong argues, is that they are arguing about the wrong thing.

Critical line here: Who cares if AI “thinks” like a person if it’s better than you at your job?

What’s a better way to think about this? Wong has two important lines towards answering this question.

Ignoring the chatbot era or insisting that the technology is useless distracts from more nuanced discussions about its effects on employment, the environment, education, personal relationships, and more. 

Automation is responsible for at least half of the nation’s growing wage gap over the past 40 years, according to one economist.

I’m with Wong here. Let’s take it seriously, but not pretend it’s the answer to anyone’s dreams, except the people for whom it’s making billions of dollars. Like any technological tool, it’s going to make our lives different but not necessarily better, depending on the context. And given how many contexts AI is creeping into, there are a ton of ways to think about it. Let’s focus our critical minds on those contexts.

We should not describe LRMs as “thinking”

Yesterday I read a paper that’s seemingly being taken very seriously by some folks in the LLM/LRM developer community (maybe because it was put out by Apple). It’s called

The Illusion of Thinking:
Understanding the Strengths and Limitations of Reasoning Models
via the Lens of Problem Complexity

In it, the authors pit Large Language Models (LLMs) against Large Reasoning Models (LRMs), which are essentially LLMs that have been fine-tuned to produce their reasoning in steps. They notice that for simple problems the LLMs do better, for moderately complex problems the LRMs do better, and once things get sufficiently complex, both fail.

This seems pretty obvious from a pure thought-experiment perspective: why would we think that LRMs stay better at every level of complexity? It stands to reason that, at some point, the questions get too hard and they cannot answer them, especially if the solutions are not somewhere on the internet.

But the example they used – or at least one of them – made me consider the possibility that their experiments were showing something even more interesting, and disappointing, than they realized.

Basically, they asked lots of versions of LLMs and LRMs to solve the Tower of Hanoi puzzle for n discs, where n got bigger. They noticed that all of them failed when n got to be 10 or larger.

They also did other experiments with other games, but I’m going to focus on the Tower of Hanoi.

Why? Because it happens to be the first puzzle I ever “got” as a young mathematician. I must have been given one of these puzzles as a present when I was about 8 years old, and I remember figuring out how to solve it, and proving that it takes 2^n - 1 moves in general, for n discs.

It’s not just me! This is one of the most famous and easiest math puzzles of all time! There must be thousands of math nerds who have blogged at one time or another about this very topic. Moreover, the way to solve it for n+1 discs is insanely easy if you know how to solve it for n discs, which is to say the solution is recursive.

Another way of saying this is that, it’s actually not harder, or more complex, to solve this for 10 discs than it is for 9 discs.
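To make that concrete, here’s a minimal sketch of the classic solution in JavaScript (my own illustration, not code from the Apple paper), including a check that n discs take 2^n - 1 moves:

```javascript
// Tower of Hanoi: to move n discs from peg `from` to peg `to`,
// move n-1 discs to the spare peg, move the biggest disc, then
// move the n-1 discs back on top. Knowing the solution for n
// immediately gives the solution for n+1.
function hanoi(n, from, to, spare, moves = []) {
  if (n === 0) return moves;
  hanoi(n - 1, from, spare, to, moves);
  moves.push([from, to]); // the one move of the largest disc
  hanoi(n - 1, spare, to, from, moves);
  return moves;
}

// The move count obeys T(n) = 2*T(n-1) + 1, so T(n) = 2^n - 1:
// solving for 10 discs is no harder in kind than solving for 9.
for (let n = 1; n <= 10; n++) {
  console.assert(hanoi(n, 'A', 'C', 'B').length === 2 ** n - 1);
}
```

The whole thing is three lines of logic, which is exactly why the models’ failure at n = 10 is so damning.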

Which is another way of saying that the LRMs really do not understand all of those blog posts they’ve been trained on, and thus have not been shown to “think” at all.

And yet this paper, even though it’s a critique of the status quo thinking around LRMs and LLMs and how they get trained and tested, still falls prey to the most embarrassing mistake: adopting the pseudo-scientific marketing language of Silicon Valley, in which the models are described as “thinking”.

There’s no real mathematical thinking going on here, because there’s no “aha” moment when the model actually understands the thousands of explanations of proofs of how to solve the Tower of Hanoi that it’s been trained on. To test that, I talked to my 16-year-old son this morning before school. It took him about a minute to get the lay of the land and another two minutes to figure out the recursive solution. After that he knew exactly how to solve the puzzle for any n. That’s what an “aha” moment looks like.

And by the way, the paper also describes the fact that one reason LRMs are not as good at simple problems as LLMs is that they tend to locate the correct answer, and then keep working and finally output a more complicated, wrong answer. That’s another indication that they do not actually understand anything.

In conclusion, let’s not call these things thinking. They are not. They are, as always, predicting the next word of someone’s blog post about the Tower of Hanoi.

One last point, which is more of a political positioning issue. Sam Altman has been known to say he doesn’t worry about global climate change because, once the AI becomes superhumanly intelligent, we will just ask it how to solve climate change. I hope this kind of rhetoric is exposed, once and for all, as a money and power grab and nothing else. If AI cannot understand the simplest, most mathematical, most sanitized of problems, such as the Tower of Hanoi for n discs, it definitely cannot help us out of an enormously messy human quagmire that will pit different stakeholder groups against each other and cause unavoidable harm.

LLMs in the VA: it just got dangerous

May 27, 2025

I’ve been waiting to see how people are employing chatbots and LLMs before worrying too much about it. Because so far it’s mostly been a weird and wasteful (and hugely overhyped) product from Silicon Valley that hasn’t been picked up much in reality, because it makes tons of mistakes.

That’s not to say there’s nothing to be concerned about. I worry about kids and other vulnerable people spending too much time with bots, and there have been quite a few alarming ideas put forth by healthcare insurers to deploy AI as a way of saving money. But even there I feel like there will be cautious uptake on some of this, and mistakes will be noted, and will lead to lawsuits.

But this morning I read this WaPO article about DOGE’s planned 83,000 job cuts to the VA and in particular this line:

“Doctor administration work is important and not replaceable by AI,” the provider said in response to concerns that this administration has encouraged the use of artificial intelligence to replace work done by humans.

So, I feel like we now see the plan for what it is: remove critically important VA staff and replace them with AI, presumably as a way of forcing a long-term federal client for Silicon Valley’s latest product. And presumably with no oversight, judging from the 10-year AI regulation moratorium that these same guys are pushing.

Yikes!


Um… how about we don’t cede control to AI?

May 19, 2025

Just in one morning I read three articles about AI. First, that big companies are excited about the idea that we can let AI agents shop for us, buy us airplane tickets, arrange things for us, and generally act as autonomous helpers. Second, that entry-level jobs are drying up because first- and second-year law, office, and coding jobs are being done by AI, so we should figure out how to get people to start working at the level of a third-year employee, because that’s the inevitable future.

And third, that the world might actually end, and all humanity might actually die by 2027 (or, if we’re lucky, 2028!) because autonomous AI agents will take things over and kill us.

So, putting this all together, how about we don’t?

Note that I don’t buy any of these narratives. AI isn’t that good at stuff (because it just isn’t), it should definitely *not* be given control over things like our checkbooks and credit cards (because duh) and AI is definitely not conscious, will not be conscious, and will not work to kill humanity any more than our smart toasters that sense when our toast is done.

This is all propaganda, pointing in one direction, which is to make us feel like AI is inevitable, we will not have a future without it, and we might as well work with it rather than against it. Otherwise nobody graduating from college will ever find employment! It’s scare tactics.

I have another plan: let’s not cede control to problematic, error-ridden AI in the first place. Then it can’t destroy our lives by being taken over by hackers or just buying stuff we absolutely don’t want. It’s also just better to be mindful and deliberate when we shop anyway. And yes, let’s check the details of those law briefs being written up by AI, I’m guessing they aren’t good. And let’s not assume AI can take over things like accounting, because again, that’s too much damn power. Wake up, people! This is not a good idea.

South Park is genius

May 2, 2025

On a recent airplane trip I had the pleasure of seeing a South Park special episode called The End of Obesity, which brilliantly satirizes the culture around weight loss, fat shaming, and weight loss drugs. It manages to do all of these things simultaneously:

  • make fun of “MILFs” who use weight-loss drugs to obsessively lose small amounts of weight,
  • extend sympathy to actually fat people who could really benefit from the weight-loss drugs but cannot afford them,
  • poke fun at the “body positivity” movement,
  • truly villainize the health care industry, in particular the way it tortures people who try to navigate the system,
  • villainize the companies producing weight-loss drugs, which now make ginormous profits off of stuff that’s actually easy to manufacture, and finally
  • address the tired notion of “will power” and the idea that it should be sufficient as a replacement for real solutions to the problem of obesity.

I recommend it! In particular, I’ve read tons of coverage of weight-loss drugs in the past few years by journalists, and it’s mostly garbage that rarely gets at the underlying issues.


Silicon Valley drinks its own Kool aid on AI

April 21, 2025

There is growing evidence that we are experiencing a huge bubble when it comes to AI. But what’s also weird, bordering on cultish, is how bought in the researchers are in the world of AI.

There’s something called the AI Futures Project. It’s a series of blog posts that try to predict various aspects of how soon AI is going to be just incredible. For example, here’s a graph of different models of how long it will take until AI can code like a superhuman:

Doesn’t this remind you of the models of COVID deaths that people felt compelled to build and draw? They were all entirely wrong and misleading. I think we did them to have a sense of control in a panicky situation.

Here’s another blogpost of the same project, published earlier this month, this time imagining a hypothetical LLM called OpenBrain, and what it’s doing by the end of this year, 2025:

… OpenBrain’s alignment team is careful enough to wonder whether these victories are deep or shallow. Does the fully-trained model have some kind of robust commitment to always being honest? Or will this fall apart in some future situation, e.g. because it’s learned honesty as an instrumental goal instead of a terminal goal? Or has it just learned to be honest about the sorts of things the evaluation process can check? Could it be lying to itself sometimes, as humans do? A conclusive answer to these questions would require mechanistic interpretability—essentially the ability to look at an AI’s internals and read its mind. Alas, interpretability techniques are not yet advanced enough for this.

The wording above makes me roll my eyes, for three reasons.

First, there is no notion of truth in an LLM; it’s just predicting the next word based on patterns in the training data (think: Reddit). So it definitely doesn’t have a sense of honesty or dishonesty, and that’s a nonsensical question; they should know better. I mean, look at their credentials!

Second, the words they use to talk about how it’s hard to know if it’s lying or telling the truth betray the belief that there is a consciousness in there somehow but we don’t have the technology yet to read its mind: “interpretability techniques are not yet advanced enough for this.” Um, what? Like we should try harder to summon up fake evidence of consciousness (more on that in further posts)?

Third, we have the actual philosophical problem that *we* don’t even know when we are lying, even when we are conscious! I mean, people! Can you even imagine having taken an actual philosophy class? Or were you too busy studying STEM?

To summarize:

Can it be lying to itself? No, because it has no consciousness.

But if it did, then for sure it could be lying to itself or to us, because we could be lying to ourselves or to each other at any moment! Like, right now, when we project consciousness onto the algorithm we just built with Reddit training data!

Why is Bernie Sanders touring Red States?

April 17, 2025

I’m into Bernie Sanders’ and AOC’s Fighting Oligarchy Tour. It is the one thing that genuinely gets me excited in a good way about modern politics.

Because, and I know I’m not alone in saying this, enough is enough. It’s time for the next version of Occupy. Things have gotten way WAY worse, and not better at all, in terms of the power of the very rich dictating to the rest of society since the original Occupy.

And when I see media coverage of them speaking in places like Montana and Utah and Idaho, I’m thinking to myself, about fucking TIME some folks on the left talk to the people of these states about the way things are actually working against the working person.

So it came as a big surprise when I heard some folks on MSNBC talking about this. One of them was a TV journalist who just interviews people all the time about politics, and the other represented the Democratic party. I don’t remember their names and so I can’t find the clip, but it went something like this:

Journalist: why are Bernie and AOC in Utah? Isn’t that a super red state? They have no chance to help elect a Dem! There’s not even a viable candidate nor an election!

DNC rep: Well, my theory is that they are there to get media coverage, and after all we are talking about them.

Journo: Oh, that makes sense.

This, to me, is a great illustration of one of the many things that the Democrats have really really wrong. At some point in the distant past, they stopped thinking about voters. Instead they started looking at numbers, and polls, and focusing very narrowly on incremental elections and surgical strikes into purple states.

In other words, I don’t think it occurred to either of these people that Bernie and AOC are actually there to talk to actual people with actual problems, trying to persuade those folks by paying actual attention to them and their problems, even though they are not going to be living in a blue state tomorrow.

This blindness to how people actually are makes the poll-watchers miss what really matters in terms of changing people’s minds. And it’s a widespread illness for so many folks you see on TV. They literally don’t see the point of talking to people who live in red states. And that’s why the red states get deeper red, and will continue to if those folks are in charge. Yeesh.

The AI Bubble

April 16, 2025

I recently traveled to Oakland, California to see my buddy Becky Jaffe, who just opened a photographic exhibit at Preston Castle, a former reformatory for boys in Ione, CA. It’s called Reformation: Transforming the Reformatory, and you should check it out if you can.

Anyhoo, I got an early flight home, which meant I was in an Uber at around 5:15am on the Bay Bridge on my way to SFO.

And do you know what I saw? Approximately 35 lit billboards, and every single one of them was advertising some narrowly defined, or ludicrously broadly defined, or sometimes downright undefined AI.

I then noticed that every single ad *AT* the airport was also advertising AI. And then I noticed the same thing at Boston Logan Airport when I arrived home.

It’s almost like VC money has been poured into all of these startups with a single directive: go build some AI and then figure out if you can sell it. And now there’s a shitton of useless (or, as Tressie McMillan Cottom describes it, deeply “mid”) AI products that were incredibly expensive to build and that nobody fucking cares about at all.

Then again, maybe I’m wrong!? Maybe this stuff works great and I’m just missing something?

Take a look at these numbers from the American Prospect (hat tip Sherry Wong):

  • In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft.
  • Microsoft, Alphabet, Amazon, and Meta combined for $246 billion of capital expenditure in 2024 to support the AI build-out.
  • Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years.
  • OpenAI, the current market leader, expects to lose $5 billion this year, and expects its annual losses to swell to $11 billion by 2026.
  • OpenAI loses $2 for every $1 it makes.

So, um, I think this is going to be bad. And it might be blamed on Trump’s tariffs! Ironic.

Clock

August 12, 2024

Guest post by Sander O’Neil

https://sanderoneilclock.tiiny.site/

This is a follow-up to this 2015 post: https://mathbabe.org/2015/03/12/earths-aphelion-and-perihelion/

Also try other experiments on my website:

https://mroneilportfolio.weebly.com/analemma.html

This JavaScript program is a clock/map/single-star star chart in the style of a Geochron.

Let’s look at the elements of the site.

The info here is your location and the time. These should be pulled directly from your computer and your IP address; the location isn’t exact, for security reasons, but it should point to the nearest city, or wherever your WiFi is routed through. The day and time of day are represented as numbers, and rewritten into something understandable in the box below.

This is a star chart, but just for the sun.

This is a polar graph where distance from the center is the angular distance from straight up. The light blue circle represents points in the sky above the horizon; dark blue represents below the horizon. Keen thinkers will already realize that this means the outside circle on this graph is actually the point in the sky which represents straight down (or straight up for people at your antipode).

The yellow circle is the sun’s current position in the sky.

The white squares represent where the sun will be in the sky over the next 24 hours, one square per hour.

The white line is where the sun will be every other day at this time for the next 365 days (on my chart, 12:47 pm).

On this equirectangular projection of the earth, each pixel moved up/down or left/right represents the same change in latitude or longitude, respectively.

The yellow circle represents the point on earth closest to, and pointing most directly at, the sun.

The red/black line represents the sunset/sunrise line: on the black side the sun cannot be seen, and on the red side it can. As can be seen from this line, the sun has just risen in Hawaii and is soon to set over Greece.

The red horizontal line is your latitude.

The white vertical line is your longitude.

The yellow squares represent where the sun will be in the sky over the next 24 hours, one square per hour.

The white line is where the sun will be every other day at this time for the next 365 days.

The yellow and orange curves give some sense of where the sun is currently warming most directly.
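If you want a feel for how that subsolar point (the yellow circle) can be computed, here is a deliberately simplified sketch in JavaScript. It uses the common cosine approximation for solar declination and ignores the equation of time, so it is only a rough stand-in for the ellipse-based calculation the program actually uses below:

```javascript
// Approximate subsolar point from day-of-year d (1..365) and UTC hour h.
function subsolarPoint(d, h) {
  // Solar declination: about -23.44 degrees at the December solstice,
  // modeled as a pure cosine over the year (crude but close).
  const latitude = -23.44 * Math.cos((2 * Math.PI * (d + 10)) / 365);
  // The sun sits over the meridian where local solar time is noon;
  // it moves 15 degrees of longitude west per hour after 12:00 UTC.
  const longitude = -15 * (h - 12);
  return { latitude, longitude };
}

// Around the June solstice (day ~172) at 12:00 UTC, the subsolar point
// is near the Tropic of Cancer on the Greenwich meridian.
console.log(subsolarPoint(172, 12));
```

The real code below does better by solving for the earth’s actual position on its elliptical orbit.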

Principles

The way I have actually gotten these positions is basically the most complicated possible method.

// Ellipse geometry: f1 and f2 are the foci, a is the semi-major axis,
// and b follows from b^2 = a^2 - c^2 (vec() is a helper defined elsewhere).
const f1 = vec(-783.79 / 2, 0);
const f2 = vec(783.79 / 2, 0);
let a = 23455;
let c = Math.abs(f1[0] - f2[0]) / 2;
let b = Math.sqrt(a * a - c * c);

// Find the position on the orbit whose mean anomaly equals goal_angle,
// by repeatedly guessing an angle and correcting by the remaining error.
const goal_angle_to_orbital_pos = (goal_angle) => {
    let angle = goal_angle; // initial guess
    let M = goal_angle - 0.0167086 * Math.sin(goal_angle - Math.PI);
    let goal_dif = goal_angle - M;

    for (let n = 0; n < 10; n += 1) {
        angle += goal_dif; // nudge the guess by the current error
        M = angle - 0.0167086 * Math.sin(angle - Math.PI);
        goal_dif = goal_angle - M;
    }

    const p = vec(Math.cos(angle) * a, Math.sin(angle) * b);
    return math.subtract(f1, p);
}

const rev_transform_planet = (p, a) => {
    // a is the fraction of a year elapsed; the earth completes ~366.25
    // sidereal rotations per year (the two 365.25 factors cancel).
    const angle = a * 365.25 * 366.25 / 365.25;
    const day_matrix = rotationMatrix(2, angle); // Z-axis rotation

    const earth_tilt = math.unit(-23.5, 'deg').toNumber('rad');
    const tilt_matrix = rotationMatrix(1, earth_tilt); // Y-axis rotation

    const angle_tilt_to_elipse = -0.22363;
    const day_tilt_to_elipse = rotationMatrix(2, angle_tilt_to_elipse); // Z-axis rotation

    p = vec3(p[0], p[1], 0);
    let rotated_point = math.multiply(day_matrix, math.multiply(tilt_matrix, math.multiply(day_tilt_to_elipse, p)));
    rotated_point = normalize(rotated_point._data);
    const angle_rev = a * 365.25;

    let longitude = Math.atan2(rotated_point[1], rotated_point[0])-0.24385;
    let latitude = Math.atan2(rotated_point[2], Math.sqrt(rotated_point[1] * rotated_point[1] + rotated_point[0] * rotated_point[0]))

    return [vec(longitude, latitude), Math.abs(angle_rev + 0.22363) % (2 * Math.PI), rotated_point];
};

const year_to_angle = (t) => { return t * 2.0 * Math.PI - (182.0) / 365.25 }
const day_hour_to_angle = (d, h) => { return ((d - 182.0) / 365.25 + h / 365.25 / 24) * 2.0 * Math.PI }
const day_hour_to_year = (d, h) => { return ((d) / 365.25 + h / 365.25 / 24) }


Basically, I’ve taken the dynamics of the earth’s orbit off of Wikipedia and created an ellipse out of it. There is a concept in orbital mechanics of an imaginary angle called the mean anomaly, which changes at a constant rate through an orbit. It is not hard to calculate the current mean anomaly, but it is only possible to estimate the real position of a point from its mean anomaly. That is why my goal_angle_to_orbital_pos function performs 10 guesses and corrections. This is technically not an exact solution for the earth’s location in orbit, but it gets about five times more accurate with each correction, so it’s indiscernible after 10 corrections.
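Stripped of the site-specific helpers, that guess-and-correct loop is a fixed-point iteration on Kepler's equation. Here is a standalone sketch of the same idea (my own simplification, using the textbook form of the equation rather than the shifted version in the code above):

```javascript
// Kepler's equation relates the mean anomaly M (which advances at a
// constant rate) to the eccentric anomaly E (the angle that gives the
// actual position on the ellipse): M = E - e * sin(E). There is no
// closed-form inverse, so we iterate: compute the M implied by the
// current guess and nudge the guess by the error. The error shrinks by
// roughly a factor of e (~0.0167 for Earth) per pass, so 10 is plenty.
function eccentricAnomaly(M, e = 0.0167086, passes = 10) {
  let E = M; // for small e, E starts close to M
  for (let i = 0; i < passes; i++) {
    E += M - (E - e * Math.sin(E)); // correct by the current error
  }
  return E;
}

// Sanity check: plugging the result back in recovers M almost exactly.
const M0 = 1.0;
const E0 = eccentricAnomaly(M0);
console.assert(Math.abs((E0 - 0.0167086 * Math.sin(E0)) - M0) < 1e-12);
```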

You will see odd corrections here, some constants thrown in. This is because the orbit and rotation of the earth are not all aligned. For instance, the apoapsis is 182 days off from the start of the year.

Future work

Tell me what you think of the NESW arrangement of the star chart. It is a little strange, but you would use the chart in its current form by setting it on the ground. If it were instead going to represent looking up, then it should be NORTH WEST SOUTH EAST.

Tell me if the location and time are correct for you, and anything else you want to see.

https://mroneilportfolio.weebly.com/clock.html

Also, you can go to my GitHub and just download the clock.html file and run it locally for the best experience: https://github.com/Sander-ONeil/sun-timing/


Guest post: Who gets to live in techno-utopia? Disability rights, eugenics, and effective altruism

This is a guest post by Victor Zhenyi Wang. Victor worked as a data scientist in global health and development at IDinsight India. He is currently a master’s student at the Blum Center for Developing Economies at UC Berkeley. He is interested in technology policy, participatory approaches to AI, and digital public infrastructure. He studied mathematics at the Australian National University, where he developed a passion for long distance running. Victor also writes a blog about technology, ethics, and disabilities.

In September, I attended Manifest, a forecasting and prediction markets conference. The conference attracted a unique combination of speakers: Nate Silver, Robin Hanson, Destiny, and Aella, to name a few. Other prominent guests included Richard Hanania, author of the recent “The Origins of Woke,” and Jonathan Anomaly, an academic who writes on the ethics of eugenics, among other things. The conference, held in Berkeley, also attracted a typical rationalist and effective altruism crowd.

Before the conference, there was a fair bit of contention around Richard Hanania’s attendance. A HuffPost article had just come out revealing that, under a different name, he had written a number of hateful diatribes on race, gender, and other topics. Since this was a prediction markets conference, a market was formed around whether he deserved to attend as a speaker. Austin Chen, the founder of Manifold (the forecasting app behind the conference), published a statement on why he chose to invite Hanania; Hanania’s talk was withdrawn, although he did attend the conference to promote his new book. There was also a related post on how many people protested by not coming.

Meanwhile, another speaker, the philosopher Jonathan Anomaly, spoke on “liberal eugenics”: genetic enhancement at the group level, mediated by technology. This is “liberal” in that no existing people are harmed and there is no obvious coercion; instead, genetic selection and enhancement¹ are voluntary, enabled by technological advancements. This is analogous to existing practices in the genetic screening of embryos. For instance, parents today have the choice to abort a fetus if certain genetic conditions, such as Down syndrome, are detected; this is legal in some countries, such as Australia.

If we believe that this is permissible, then why not allow parents to do the same for other traits? As genome sequencing and predictive medicine evolve, we will likely have the technology to predict the traits of embryos in vitro. It may even become possible to estimate probabilities for various genetic illnesses, mental illnesses, or even addiction. One trait Anomaly stressed at the conference was intelligence, and, more broadly, whether society at large should select for children who are more intelligent or otherwise gifted.

At the end of his talk, the audience did not challenge his premises or core arguments. A common justification for inviting, or even platforming, someone with opinions different from, and sometimes radically opposed to, your own is that you can challenge them publicly. If so, does this mean everyone at the talk agreed with the speaker?

Walking out of the talk, I overheard a young attendee casually remark, “wow, I had no idea technology was so advanced now.” As someone who lives with a disability (and a likely candidate for embryonic annihilation), it is a surreal experience to be in a room full of people who believe they would like to do good in the world, while being a person they would prefer not to exist at all. I wish I had had the capacity in that moment to say something, anything, but I didn’t. I only hoped that others in the audience felt the discomfort that I did, and that they did not de facto agree with the speaker.

But I think that if you find utilitarianism somewhat plausible, it is actually consistent to believe that a world in which liberal eugenics is widespread and permissible results in better outcomes for its denizens. After all, if people with disabilities live “worse off” lives than those without, all else being equal, then surely a world that eliminates disabilities would be a much better one? And if we maximize the propensity for beneficial traits in future persons, surely they would all live better lives?

In this short essay, I want to challenge the idea that disabilities result in lives that are worse, and to discuss why cost-benefit analysis (and therefore cost-effectiveness analysis) is not a reasonable framework for thinking about disabilities.

I think what many utilitarians get wrong about disabilities is that most disabled people believe our lives are not only worth living but also valuable, and that they ought to be valued. In fact, the claim that disabled lives are “worse” is itself contested. In a New York Times Magazine article from 2003, Harriet McBryde Johnson, a prominent disability rights activist, detailed her meeting with Peter Singer:

“Are we ‘worse off’? I don’t think so. Not in any meaningful sense. There are too many variables. For those of us with congenital conditions, disability shapes all we are. Those disabled later in life adapt. We take constraints that no one would choose and build rich and satisfying lives within them. We enjoy pleasures other people enjoy, and pleasures peculiarly our own. We have something the world needs.”

We fall in love. Our capabilities are different but not necessarily diminished. Advocacy by those much braver and much more determined than myself has changed history so that today, we are able to attain the capabilities that we wish to.

Obviously, people living with disabilities face a range of accessibility challenges. Yet these challenges are not inherent to disabilities. For example, shortsightedness in the Neolithic period would have caused serious difficulties, yet in today’s society, with both technology (glasses) and changes in the structure of society and the configuration of people’s lives, shortsightedness does not really represent a diminishment of someone’s capabilities. What Johnson is claiming is that it is really a property of society that disabilities constrain one’s ability to attain the capabilities one wishes for.

This becomes a problem when making policy decisions. Common metrics of wellbeing, like the Quality-Adjusted Life Year (QALY) or its counterpart the Disability-Adjusted Life Year (DALY), may feel neutral and objective, but they are based on surveys in which participants (typically people without disabilities) subjectively rate quality of life with and without hypothetical disabilities or illnesses. So these metrics inherit all the biases society already has against disabled people.
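To make the mechanics concrete, here is a minimal sketch in Python of how QALY arithmetic turns survey-derived utility weights into prioritization decisions. The numbers are entirely made up for illustration; real QALY studies derive weights through elicitation methods such as time trade-off surveys.

```python
# Minimal sketch of QALY arithmetic, with made-up utility weights, to show
# how survey-derived weights feed directly into prioritization decisions.
# A QALY is (years of life) x (utility weight in [0, 1]); the weight comes
# from surveys of mostly non-disabled respondents.

def qalys(years: float, utility_weight: float) -> float:
    """Quality-adjusted life years for a span of life at a given weight."""
    return years * utility_weight

# Hypothetical intervention that extends two people's lives by 10 years.
# Suppose survey respondents rated life with a given disability at 0.6.
gain_nondisabled = qalys(10, 1.0)  # 10.0 QALYs gained
gain_disabled = qalys(10, 0.6)     # 6.0 QALYs gained

# Under cost-effectiveness rationing, the identical intervention "buys"
# fewer QALYs for the disabled person, so they are de-prioritized --
# a bias inherited straight from the survey weights.
assert gain_nondisabled > gain_disabled
```

Nothing about the disabled person’s own experience appears in this calculation; the de-prioritization follows mechanically from a weight chosen by other people.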

If you imagine a utopia with this kind of metric at its core, what kind of society would it create? Since people with disabilities are a minority, and since we are assigned both lower baseline utility and less utility to gain, we become a group that should be de-prioritized in order to maximize the total amount of good we can do, given that resources are limited and need rationing. As Nick Beckstead and Toby Ord have previously written, if we try to avoid disability discrimination in healthcare rationing, other, worse outcomes might arise: in trying to smooth out some air bubbles under the wallpaper, we might create others. Better, then, not to have to make this decision at all, and to eliminate the possibility that such individuals might exist in the first place. Liberal eugenics fits in perfectly with this idea.

This is, however, not a new idea. Historically, cost-effectiveness and cost-benefit analysis have long been used to deny accommodations and basic dignities to disabled people. For example, in 1981 President Reagan signed an executive order that put Section 504 of the Rehabilitation Act of 1973 at risk. The order mandated that regulation should “maximize the net benefits to society” and that, among alternative approaches, agencies should choose the one that incurs the “least net cost to society.” In other words, disability accommodations only help the disabled, do not benefit most of society, and tend to be terribly expensive (retrofitting elevators into a subway system tends to be far more expensive than designing the subway with elevators in mind), so these concerns should be weighed against the interests of broader society. Accessibility and equity need to be squared against cost justifications.

Since then, the language of cost-benefit analysis has become a ubiquitous excuse for denying all sorts of basic rights in American politics. But this is really quite absurd. In 2012, the Department of Justice conducted a cost-benefit analysis on whether prisons should prevent rape.

In the words of the disability rights activist Judith Heumann, that this is even being discussed “is so intolerable, I can’t quite put it into words.” The history of disability rights in this country was fought with disabled bodies thrown against callous institutions. This reveals the true nature of the threat: that it is tempting, and beyond that terribly easy, to use the language of cost-effectiveness as an instrument of dispossession.

Techno-optimists will tell us that technology, through liberal eugenics, will create a world in which everyone is better off by design. But there will always be people with disabilities, even if we select ‘optimal’ embryos with predictive genetic selection. In that world, which seems not so far off, we will be marked as lesser, our dignities and rights denied to us. It would be a return to a world in which the expendable members of society are systematically institutionalized, confined, and recast as abject figures. The air bubbles under the wallpaper, vanished at last.

Footnotes

  1. Some draw a distinction between enhancement and treatment. I lean toward the view that this line is likely blurry, and in any case I do not consider it an interesting question in this context.
Categories: Uncategorized