What’s the right way to talk about AI?

Yesterday I came across this article in The Atlantic, written by Matteo Wong, entitled The AI Industry Is Radicalizing.

It makes a strong case that, while the hype men are over-hyping the new technology, the critics are too dismissive. Wong quotes Emily Bender and Alex Hanna’s new book The AI Con as describing the technology as “a racist pile of linear algebra”.

Full disclosure: about a week before their title was announced, which was about a year and a half ago, I was thinking of writing a book similar in theme, and I even had a title in mind: “The AI Con”! So I get it. And to be clear, I haven’t read Bender and Hanna’s entire book, so it’s possible they do not actually dismiss AI outright.

And yet, I think Wong has a point. AI is not going away, it’s real, it’s replacing people at their jobs, and we have to grapple with it seriously.

Wong goes on to describe the escalating war of words, sometimes between Gary Marcus and the true believers. The point, Wong argues, is that they are arguing about the wrong thing.

Critical line here: Who cares if AI “thinks” like a person if it’s better than you at your job?

What’s a better way to think about this? Wong offers two important lines toward answering that question.

Ignoring the chatbot era or insisting that the technology is useless distracts from more nuanced discussions about its effects on employment, the environment, education, personal relationships, and more. 

Automation is responsible for at least half of the nation’s growing wage gap over the past 40 years, according to one economist.

I’m with Wong here. Let’s take it seriously, but not pretend it’s the answer to anyone’s dreams, except the people for whom it’s making billions of dollars. Like any technological tool, it’s going to make our lives different but not necessarily better, depending on the context. And given how many contexts AI is creeping into, there are a ton of ways to think about it. Let’s focus our critical minds on those contexts.

Um… how about we don’t cede control to AI?

May 19, 2025

In just one morning I read three articles about AI. First, that big companies are excited about the idea that we can allow AI agents to shop for us, buy us airplane tickets, arrange things for us, and generally act as autonomous helpers. Second, that entry-level jobs are drying up because first- and second-year law jobs, office jobs, and coding jobs are being done by AI, so let’s figure out how to get people to start working at the level of a third-year employee, because that’s the inevitable future.

And third, that the world might actually end, and all humanity might actually die by 2027 (or, if we’re lucky, 2028!) because autonomous AI agents will take things over and kill us.

So, putting this all together, how about we don’t?

Note that I don’t buy any of these narratives. AI isn’t that good at stuff (because it just isn’t), it should definitely *not* be given control over things like our checkbooks and credit cards (because duh), and AI is definitely not conscious, will not be conscious, and will not work to kill humanity any more than our smart toasters that sense when our toast is done.

This is all propaganda pointing in one direction: to make us feel that AI is inevitable, that we will not have a future without it, and that we might as well work with it rather than against it. Otherwise nobody graduating from college will ever find employment! It’s scare tactics.

I have another plan: let’s not cede control to problematic, error-ridden AI in the first place. Then it can’t destroy our lives by being taken over by hackers or by buying stuff we absolutely don’t want. It’s also just better to be mindful and deliberate when we shop anyway. And yes, let’s check the details of those law briefs being written up by AI; I’m guessing they aren’t good. And let’s not assume AI can take over things like accounting, because again, that’s too much damn power. Wake up, people! This is not a good idea.