
Can we embed dignity into social media?

January 16, 2023

I’m working on a philosophy paper with an ethicist named William Cochran. I’ll post a link to it once it’s written, but in the meantime I have decided to use this neglected space to think through parts of my work on the paper.

Namely, I’m trying to work out whether it’s possible or practical to embed dignity into social media. That sounds like a hard question to make precise, and my approach is to make use of Donna Hicks’s amazing work, which came out of peace treaty negotiations; you can learn about it here or read her book Dignity.

Specifically, for our purposes, Donna has a list of essential elements of dignity, which can be found here.

There are ten of them, and I was thinking of taking them on in bite-sized chunks: working with one or two at a time and thinking through how social media’s algorithms, or spaces on the platforms themselves, could be redesigned (if necessary) to confer that particular element of dignity.

The thing I’ll say before beginning is that, as of now, I don’t think this is being done well, and as a result I consider the human experience on social media to be mostly bad, if not toxic. And yes, I do understand that people get a lot out of it too, which is why we should try to make it better rather than abandon it.

Also, even if we do embed dignity into social media through an upheaval of design, which is hard enough to imagine, I do not think that means it will always be a great place to be. We should know by now that it’s a tool, and depending on how that tool is used, it could be wielded as a weapon, as the Rohingya in Myanmar learned in 2017.

Finally, I fully expect this to be hard, maybe impossible. But I want to try anyway, and I’d love comments from my always thoughtful readers if you think I’ve missed something or I’m being too blithe or optimistic. Thank you in advance.

So, let’s start with the first essential element of dignity:

Acceptance of Identity
Approach people as neither inferior nor superior to you; give others the freedom to express their authentic selves without fear of being negatively judged; interact without prejudice or bias, accepting how race, religion, gender, class, sexual orientation, age, disability, etc. are at the core of their identities. Assume they have integrity.

https://www.ikedacenter.org/thinkers-themes/thinkers/interviews/hicks/elements

The first part of this, the self-expression part, looks pretty straightforward. On a social media platform, we should be able to self-identify in various ways, and we should be able to control how we are identified. All of that is easy to program. The second part is where it gets tricky, though: how do we express ourselves without fear of being judged? Fully half of the evil shit going on now on social media is related to ridiculous, bigoted attacks on the basis of identity. How do we protect a given person from that? Automated bots looking for hate speech do not and will not work, and having an army of underpaid workers scanning for such speech is expensive and deeply awful for them.

It’s possible my experiment is already over, before it’s begun. But I have a couple of ideas nonetheless:

First, make it much harder to broadcast bigoted views. This could be done iteratively: first by hiding identity-related information from people who have not been invited into a particular space on the platform, and next by holding general broadcasts (of anything) to much higher scrutiny.

There’s always been a balance struck in social media between making it easy to connect people, for the sake of building enough of a network to keep somebody interested in spending time there, and making sure unwanted people aren’t invading spaces and making them toxic for the group that’s happy to be there. Facebook group moderators (and their counterparts on other platforms) do a lot of this work, for example.

So, here’s a model that might do the trick (one of many). Imagine a social media platform built as a series of hotel rooms set off of a main hallway, where you really don’t know who is inside yet, you have to apply to go in, and there’s a moderation system that will kick you out if you don’t conform to the rules. That might be too much of a burden to be instant fun, but it also might lead to better conversations and far less dissemination of hate speech. Does a platform like this already exist? (A minimal sketch of the membership logic follows below.)
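
To make that concrete, here’s a minimal sketch, in Python, of what the apply/admit/eject logic and the identity-visibility rule might look like. Everything in it (the Room class, its method names, the toy profiles) is invented for illustration; it’s a sketch of the idea, not a description of any existing platform.

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """A hypothetical 'hotel room': invisible from the hallway,
    join-by-application, with moderators who enforce house rules."""
    name: str
    rules: str
    moderators: set = field(default_factory=set)
    members: set = field(default_factory=set)
    pending: set = field(default_factory=set)

    def apply_to_join(self, user: str) -> None:
        # From the hallway you can't see inside; you can only knock.
        if user not in self.members:
            self.pending.add(user)

    def admit(self, moderator: str, user: str) -> bool:
        # Only a moderator can open the door.
        if moderator in self.moderators and user in self.pending:
            self.pending.discard(user)
            self.members.add(user)
            return True
        return False

    def eject(self, moderator: str, user: str) -> bool:
        # Breaking the room's rules gets you kicked back into the hallway.
        if moderator in self.moderators and user in self.members:
            self.members.discard(user)
            return True
        return False

    def visible_profile(self, viewer: str, profile: dict) -> dict:
        # Identity details are shown only to fellow members, so
        # outsiders never see anything they could target.
        return profile if viewer in self.members else {}

if __name__ == "__main__":
    room = Room(name="knitters", rules="be kind", moderators={"alice"})
    room.apply_to_join("bob")
    room.admit("alice", "bob")
    profile = {"pronouns": "he/him", "interests": ["cables", "lace"]}
    print(room.visible_profile("bob", profile))      # members see identity info
    print(room.visible_profile("mallory", profile))  # outsiders see {}
```

The design choice doing the work is the last method: identity information never leaves the room, which is one way to let people self-identify without handing ammunition to strangers in the hallway.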

On the other hand, there are going to be plenty of folks who actually want to engage in bigotry. They would clearly set up their rooms to be friendly to hate speech and bigots. Would that be okay, or would there also need to be super-moderators who shut down rooms for violating rules?

Next, is there a third model somewhere in between the one that exists now, where you can pay to broadcast your views practically anywhere, and the much more zipped-up model I outlined above? The critical use case is that someone should be able to identify themselves in all sorts of ways without fear of being yelled at or judged.

I’m also kind of prepared to be told that this is just what we humans do, and that there’s no way to build a policy that bypasses it. I say “kind of” because on Ravelry, my knitting and crocheting community website, I don’t see a lot of this. I really don’t. And I think that’s because we’ve already got something to talk about, so we don’t have to name-call; we’re busy.

  1. January 16, 2023 at 10:02 am

    Is it possible we want to find a way to give individuals the kinds of ethics we want them to have as they engage in social media? I am all for giving humans ethics. I expect a lot of human discourse has centered on this issue for as long as we have records and folks have had access to sending their thoughts around to other people in the form of letters, journals, newsletters, screeds, etc. Now it’s the internet and social media. However, we have always seemed to be concerned with free speech only so long as it conforms to our speech. One of the patterns AI coders can be concerned with is making sure AI conforms in its expressions to what a human in its place would express. We code for AI, so it partakes of all our benefits and limitations, one being ethics and another being expression. Sure, we are still fighting with each other about the ethics of what and how to express ourselves in language or images. We want to moderate that fighting because we don’t have a choice; we need to do that, even though humans do fight about all and sundry, including ethics and self-expression. Should you and others jump into this arena and fight over social media contents? Yes. On the other hand, while I do usually agree with you, do I agree with what is being said otherwise? Not always. Is that good? I don’t know yet. I do know most people don’t want to be disagreed with, ethics or no ethics. What we do about that remains to be seen in general, and on social media in particular. Jump in, though. Go for it, hard as that will be.


    • January 16, 2023 at 10:03 am

      You’ve pointed out that these rules will turn off some people who don’t like rules. And that’s true, and that’s ok. Not everyone likes dignity either!


      • January 16, 2023 at 10:09 am

        Your phrase, “that’s okay,” is how I see this too. Do.It.


  2. John Doe
    January 16, 2023 at 10:23 am

    Your model of rooms in a hotel where you have to apply to go in sounds a lot like a social media platform called Tribe. It was based in San Francisco, I believe, and had a strong following, but it never got a revenue model working well enough to grow and keep up with demand. Then Facebook pretty much destroyed them.

    But I wonder if Twitter under its new management couldn’t be moved to such a model, especially when coupled with paying to reach more people. Twitter needs a working revenue model, and that may be interesting enough for them to want to make the change.

    But I also agree with you. The project may be doomed before it even starts.


  3. ursulawhitcher
    January 16, 2023 at 10:39 am

    Your “hotel room” model sounds a lot like Discord, where you have to be invited to an individual server to participate in conversations there.


  4. Jack Tingle
    January 16, 2023 at 10:41 am

    I’d almost take the room idea & abstract it a step up to language. You could only express certain tokens to generate a sentence about another user. You could apply to rooms with different token sets. Good behavior gets you a broader, less restrictive set. Gamify it to make it “fun.”

    Think of it as Babel-17 in reverse. Even if Sapir-Worf isn’t true, you could make it true in your room.


    • KH
      January 16, 2023 at 5:25 pm

      Sapir-Whorf (had to look it up 🙂)


    • January 17, 2023 at 9:33 am

      Ethan Zuckerman has a model where the communities are centered around solving specific community problems (think bike lanes vs. parking spots in a neighborhood). So the moderators are in charge of asking the critical questions and the rest of the community gets to answer and make comments, but only about the issue at hand. It seems to make conversations really high quality, albeit narrowly focused.


  5. Angela Tarbet
    January 16, 2023 at 10:57 am

    Yes, do it, try it, keep posting for us about it. I will have more thoughtful ideas for you. I would venture to say that ALL humans have baseline ethics. Yes, they get warped by family, friends, church, school, work, etc., but I think all people would rather have a healthy child than a sick kid, and so on…ART


  6. Raphael Solomon
    January 16, 2023 at 11:56 am

    One place where the rule of mutual acceptance and dignity does seem to be enforced is within Facebook groups. There is content moderation, and moderators can delete comments or kick people out of groups. The worst moderation occurs on sponsored posts, which is where Facebook makes its money. I spend most of my time in Facebook groups because they are far less toxic than Facebook itself. Also, commenting on posts or sharing posts within circles of friends is much safer.


  7. Roger Witte
    January 16, 2023 at 12:38 pm

    Not allowing third-party bots to post would be helpful, if we could detect them. 🙂

    We need to consider who is paying and the resulting incentives on the host companies. In particular, should these platforms be privately or publicly owned, and how, and by whom, should they be regulated? If their primary motive is to increase time spent on site, they have an incentive to enrage people, since enraged people find disengagement more difficult than others. So changing the ethics involves changing the incentives on the actors in the system (or possibly changing some of the actors altogether).

    Perhaps we should also be thinking about retention of posts. On one hand, relatives of the dead sometimes take comfort in eternal retention; on the other, a right to be forgotten (cf. European Union law on search results), so that one is not eternally burdened with a misstep, might be handy, but might also be open to abuse.

    Also, I’ve made posts that, in retrospect, I regret, because the unseen context in which I made a comment was very different from the unseen context in which the comment was received. It was only when a reply or third-party comment went awry of my expectations that I realised I had pressed ‘send’ too soon. I don’t see how this sort of misstep (akin to social clumsiness in real-life networking) can be legislated against.


  8. Chucka
    January 16, 2023 at 2:58 pm

    Part of me almost wonders if this could be partially solved by changing the “put everything online” culture we’ve fallen into. It’s harder to do a coordinated attack on an individual if you don’t know their diagnoses, sexuality, location, and other things that have become common to put in a social media bio. Keeping that information mostly off the public, easy-to-find internet has brought me a lot of peace, and it hasn’t impeded my ability to seek community with others who share the same experiences/identities.


    • January 17, 2023 at 9:35 am

      Yes, it does seem as if we have already been conditioned to behave terribly. I wonder whether this would have worked better if we’d started out designing for dignity, and whether it’s now a lost cause.


  9. KH
    January 16, 2023 at 6:45 pm

    Possibly approach this from both directions: Consider (1) qualities/categories from Donna Hicks’ list, call it the abstractions list; and also (2) a list of particular social media communities that we think do treat each other with dignity, like Ravelry, or Facebook groups with histories of treating each other well, call it the trusted-nominees list (nominees, because history is an ongoing thing). Agree, it matters who the ‘we’ are in ‘we think.’ It matters that there is a history, though getting stuck with one’s history forever does seem potentially unfair.


    • January 17, 2023 at 9:36 am

      I very much like the idea of a collection of “use cases” which are shown to work (or, in a separate list, fail!). Thanks for that idea.


  10. rob
    January 17, 2023 at 7:30 am

    I don’t get it. As you point out, Cathy, if I want to say bigoted things, I’ll find a place where I can say them. If there is such a place, I’m sure it will be filled with the like-minded and I’d be welcomed. But if I already believe in the dignity of all people, how useful will it be to have a nudge to keep my discourse respectful?

    Maybe it would be more useful to all if there were a site that guaranteed accurate information, or at least information supported by the best evidence available. My experience with most humans I interact with (I insist I am human too, but I’m beginning to wonder) is that they are woefully ignorant and are nevertheless all too ready to espouse views despite their ignorance. Worse than ignorance: we make up “facts” and theories out of sheer preference.

    A site guaranteeing accuracy would also discredit the sites that the biased seek out. It probably won’t change the biased, but it will expose them, maybe even to themselves, and at least discredit them.

    Granted, truth and dignity are not always aligned. But where they are not, and dignity is publicly chosen at the expense of truth, loss of credibility is guaranteed, and polarization and resentment are likely. But if truth is presented first, dignity can still follow.

    We have a production right to free speech, but no reception right to truth. Maybe AI can help us with that.


    • January 17, 2023 at 9:38 am

      Rob, I’m kind of in the middle. I think lots of people intend to treat others with dignity but get easily nudged into behaving badly – they probably even think they are completely justified in their behavior! So how about starting a social media platform where the nudging happens deliberately toward dignity rather than against it, as it does now?

      And I agree that lots of people wouldn’t like that, and would *want* to behave disrespectfully. They can go somewhere else.

      Finally, I want to distinguish between truth and dignity. I can treat you with dignity even if you’re wrong, and I can violate your dignity whether or not you’re wrong.


      • rob
        January 18, 2023 at 10:18 am

        I’m thinking of the difficult cases in which truth clashes with dignity, where there’s no good solution. For example, suppose that women are genetically smarter than men on average (this is likely true, at least on some measures of intelligence, and maybe on all, but I’m just using this as a hypothetical for argument’s sake). If someone feels this truth compromises the dignity of men, should it be suppressed?

        Censorship and self-censorship will undermine the credibility of the discussion. For my money, I think suppressing such a truth would also be insulting to men, as if we’re incapable of handling the truth.

        Maybe this is a somewhat anodyne example and not the best, but if there are such clashes, they might be common. So it may be that dignity can only be prioritized at the expense of credibility somewhere down the line. This is a familiar problem, yet I don’t see any solution.


  11. Bill Frahm
    January 26, 2023 at 8:53 am

    I agree with you that dignity should be present in social media. Unfortunately, we’re fighting against media personalities who learned that there’s money in tapping people’s rawest emotions.

    I visited southern Louisiana recently and saw how misinformation can be overcome in a rational environment. I sat in a hotel bar talking to a group of strangers who were offshore roughnecks, farmers, and firefighters. They had very different opinions than I did on a number of topics. We had a very civil conversation. We listened to each other and came out better understanding each other’s concerns. I was most impressed when the conversation turned to the evils of those liberal electric vehicles. After discussing the benefits and problems of EVs, one guy actually said he might look at an EV for his wife.

    Yes, civil discussion is possible with many people. The habits must be reinforced to get to that point.


  12. John Boucher
    January 28, 2023 at 5:31 pm

    I read The Chaos Machine by Max Fisher. Scary stuff. The more salacious the material, the more eyeballs it will garner; I’m afraid there is no vaccine for this. The algorithms are set up to maximize revenue. They are just dead machines and always will be; let’s stop fooling ourselves. You have given yourself a Sisyphean task, but thanks for all your efforts. We could make a start by marooning the Zuck. He’s a damned Jonah on this ship.
    Good luck

