
ChatGPT: neither wise nor threatening

January 11, 2023

There’s been a huge amount of hubbub in the tech press lately around the newest generation of chatbots, with ChatGPT being the version that is most celebrated and/or feared.

I don’t think the celebration or the fear is warranted.

First of all, we don’t need to fear that ChatGPT is going to replace humans. It doesn’t have a model of truth; it’s simply writing words and phrases that match the patterns it has observed in the text it was fed. In other words, it’s not actually answering a question thoughtfully or relevantly. Any thoughtfulness we observe in its output is a combination of the human wisdom embedded in its training data and the credit we tend to project onto others when we hear them attempting to formulate thoughts.
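
To make the “patterns, not truth” point concrete, here is a minimal sketch in Python of how a purely pattern-based text generator works. It uses a toy bigram model; the training text, function names, and sampling scheme are illustrative assumptions of mine, not a description of how ChatGPT is actually built (ChatGPT uses a vastly larger neural network), but the core move is the same: choose the next word based on patterns seen in past text, with no notion of truth anywhere.

```python
import random
from collections import defaultdict

# Toy bigram "language model": for each word in the training text, record
# which words were observed to follow it, then generate new text by sampling
# from those observed patterns. (Hypothetical example text and function names;
# a drastic simplification, not ChatGPT's actual neural-network architecture.)

TRAINING_TEXT = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog and the dog chased the cat"
)

def build_bigram_table(text):
    """Map each word to the list of words seen to follow it."""
    words = text.split()
    table = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        table[current_word].append(next_word)
    return table

def generate(table, start_word, length=12):
    """Produce text purely by imitating observed word-to-word patterns."""
    word = start_word
    output = [word]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)  # pick the next word from past patterns
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    table = build_bigram_table(TRAINING_TEXT)
    print(generate(table, "the"))
    # Possible output: "the dog sat on the mat the cat chased the dog and"
    # Locally fluent, but nothing here models whether any of it is true.
```

The fluency of the output comes entirely from recombining the training text; whether any generated sentence is true is simply not a property the table represents.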

But what about students cheating by using ChatGPT instead of doing their own writing? The thing about this technology is that it interferes with the very weak proxies we have for measuring student learning, namely homework and tests. The truth is, the internet has allowed students to cheat on homework for a long, long time, and for that matter on many tests, and now they have a slightly better (albeit still imperfect) way to cheat. It’s just another reminder that it’s actually really hard to know how well someone has learned something, especially if we’re not talking to them directly but relying on some scaled-up, automated or nearly automated system to measure it for us. I’m guessing there will be many more one-on-one oral exams in the future, at least when it comes to high-stakes tests of knowledge. And for that matter, there will be fewer such tests, because many sorts of “knowledge” that humans once needed to memorize might not be necessary if the internet is always available.

Next, I do not think we have cause to celebrate either. There is no wisdom in ChatGPT or the others like it. To illustrate this point, consider Galactica, an AI that was introduced and then pulled by Meta, the parent company of Facebook. Galactica was trained on scientific papers to churn out scientific-ese paragraphs, and it was pretty good at it. Actually, I like the characterization of it as a bullshit generator.

Galactica was supposed to help scientists write their papers and help everybody look up scientific knowledge. It sometimes worked but often just made shit up. Deep fakes, but for science.

Anyway, here’s one thing it will never do: create new science. And that’s why there’s nothing here to be all that excited about.

Another way to say it is that Galactica shows us what is “easy” about writing science papers (the language, the jargon, the third-person authoritative style) versus what is hard (the new ideas). Galactica can do all of the easy stuff and none of the hard stuff, so why should we be impressed?

  1. Raphael Solomon
    January 11, 2023 at 9:53 am

    An increased prevalence of open-book exams would not be bad either. The focus shifts from memorization to the ability to apply the knowledge. This should be the core of university learning anyhow.


  2. January 11, 2023 at 10:04 am

    This is the way Democracy ends.
    This is the way Education ends.
    This is the way Humanity ends.
    Not with a Bang but Ka-Ching.

    I think many people are missing the supreme motivation of greed driving this latest form of sham intelligence and every other proxy for human agency. It is not a question of using technology or not, but of who will be in control, and here it is the corporate capital sector that is gaining one more increment of control.


  3. January 11, 2023 at 10:39 am

    Yes. We are the ones being imitated in all cases. AI datasets are limited. Reality is not so limited. The human and other genomes are geared to respond sensitively to the external and social environments in order to survive, which includes reproduction and learning. AI is a great tool. It is just a tool. We have worshipped cars, also a tool. We have worshipped a lot of things, including stone idols we made. Worship might be a human tendency. Is it helpful, or is our survival over the long haul of time, and I’m talking hundreds of millions of years here, what we are after?


  4. rob
    January 12, 2023 at 6:19 am

    Having no semantics, it’s just verbiage, which might be adequate for a system that is relatively stable, like language, but not for learning anything new about a world that we ourselves don’t completely understand. AI, though not mere chatbots, will soon enough interact with the world, model it, correct its models in further interactions, and, given some goal, will be able to teach us about the world, after plenty of initial failures, no doubt. And it will teach us about ourselves too.

    Technology is always the game-changer in an economy, so predictions are worth little more than chance, and that includes predictions about structures of ownership. Overall, technologies increase productivity and real wealth: think of electricity and the lightbulb that allowed the night to be productive, or indoor plumbing, or the smartphone that nearly everyone has except maybe the street homeless. And they often create a lot of unintended consequences, like negative externalities and altered social expectations. And weapons.

    Isn’t the problem of who controls only meaningful in a world of “scarce” goods? The promise of technology undermines scarcity and so control. In point of fact, the tiny Netherlands is the second largest exporter of agricultural goods after the US. Imagine that efficiency spread across the globe to places with even greater rainfall. Who needs control when there’s such overabundance?

    But even a universal basic income for a future of robot labor will have unintended consequences we won’t know about until we see them, too late. Social problems are always harder to solve than technological ones. “Solving” is probably not even the right concept for social problems; “improvement” would be a more modest goal.

    Oral or in-class writing exams are not ideal for many students, including good learners. I’ve had bright, dedicated students reduced to tears or even vomiting at an in-class exam, even when the exam questions were provided weeks ahead. And since many forms of work don’t require in-person performance, there’s no necessity that every good learner be a performer as well, any more than every brilliant mind should also be a public speaker. Sadly, ChatGPT means brilliant introverts will have to suffer and be penalized for their nature.


    • January 12, 2023 at 8:44 am

      We tend to hope technology and its progress through time will save us. We will see if technology, which we make and have made, can save us from us. Our technology has produced species destruction, climate change that doesn’t work for humans or the rest of their technology, and an addiction to the infinite-growth-curve model of success not seen anywhere in Nature. Things live and die. They stop and start. We ignore that at our own risk. Our addiction includes the addiction to money, attention, fame, and loads of stuff. We sacrifice everything else for this addiction. My feeling is that a sea change in all this will be required to save humanity from itself.

      As for AI, it is technology, and it is not in touch with the world or with us, for many technological reasons, right now. Are we waiting for AI to teach us how the world works? Are we waiting for AI to teach us about ourselves when we can’t seem to do that for ourselves after 300,000 years? Are we saying the actions and words of people in all this time have left us clueless? You can sure go ahead and ask what ‘us’ consists of. We ask this all the time and have in every generation. Is the question itself a sign that we are progressing and have much to learn now, as we always will? Is our technology there only because we exist, to feed and care for us, not to tell us who we or the world are, except as one more thing we have made, and in that making we can find some of who we are?


      • rob
        January 17, 2023 at 6:41 am

        Yes, all of those good questions. But we shouldn’t wait for AI to answer them, which it will, one way or another. We should be preparing for the answers now. How do we prepare for the unprecedented? Even our addictions are culturally relative; the West is particularly narrow on this. We’ve been riding a heady wave of wealth creation that has obscured alternatives. It seems likely that the hard problems of the future will be the moral ones, not the material ones.


  5. DANIEL E BRITTS
    January 14, 2023 at 5:19 pm

    I’ve found the answers to questions posed to ChatGPT to be meh at best. The concept of AGI is miles away. Conversely, expert systems like those used to front-run the stock market have been mucking up the system for the last decade. AI systems are very good at tasks like contracts and lawyers’ work. The jobs being displaced are upper middle class, and that’s the cause for consternation. Everything was fine when it was the person screwing in headlights at GM, but now lawyers are under threat. If that’s a true concern, bring it.


  6. February 9, 2023 at 8:06 am

    It’s really a wonderful tool if you use it well.


  7. February 9, 2023 at 9:14 am

    IPSMO = Intellectual Property Strip Mine Operator

