Among the many interesting moments of 2022, a year now drawing to a close, we have had the chance to watch the Twitter/Musk experiment unfold. I won’t spend time rehashing the many twists of this experiment, just a moment to offer my best wishes to all who work or worked at Twitter, in particular those being offered up as scapegoats for the many sins of the C-suite, both past and present.
What I’m going to spend time on is moderation. If we listen to the official discourse, this entire Twitter/Musk experiment is about moderation. Also, as a direct consequence of said experiment, the number of users on Federated Networks such as Matrix and Mastodon has never been so high. And Twitter, both past and present, has been a permanent demonstration that moderation is hard. So… what changes when you need to moderate a Federated Network?
Spoiler alert: it’s harder, but not impossible.
About Federation
First, a word about Federated Networks. A Federated Network is a form of distributed system in which none of the computers is privileged above the others. In theory, the web is mostly a Federated Network (let’s ignore DNS and CAs), with every webmaster the ruler of their own small kingdom. Matrix, XMPP and Mastodon are Federated Networks, with every server managed by one admin, but all servers being equal to each other. Peer-to-peer networks (including peer-to-peer Matrix) are at the extreme end of this spectrum, with every node in the network being a server managed by its single user. On the other hand, Twitter, Facebook, Instagram and TikTok are the absolute opposite: they are centralized services, each controlled by a central entity, and nobody who doesn’t work for (or own) said service has any hand or say in how the service works.
Centralized services have benefits. They’re easier to design, easier to maintain, easier to upgrade, and they have consistent policies and tools. They also have downsides: if the owner of one of them were to suddenly decide to use it as their own personal playground, inviting one group while ostracizing others, users would have no way to migrate away while keeping their network of contacts, their data and their identity.
But that’s science-fiction, right? So let’s make this a tad more concrete. Here’s an imaginary Far Far Away Network, composed of many servers. We have three main subnetworks, called the Galactic Empire, the Rebel Alliance and the Hutt Syndicate, plus a host of smaller subnetworks, including the Bounty Hunter’s Guild, the Federation of Free Traders or the Mandalorian Horde. Each of these subnetworks hosts a number of communities. The Galactic Empire is home to the Imperial Administration, the Circle of Survivors from the Explosion of the Death Star, Tatooine’s Young Farmer Flying Association, … The Rebel Alliance is home to the Rogue Squadron, the Wraith Squadron, the Jedi Cosplayer Association, the Yoda Spotters Group, …
Some users own their personal server (perhaps they’re even communicating in peer-to-peer mode). Others (well, most) connect from their device to a server. Each user can participate in any number of communities and create communities at will. Critically, a conversation or a community may involve users from any number of servers. So, for instance, the Disney Princess Appreciation Community has users from the Galactic Empire, the Rebel Alliance, the Hutt Syndicate and more. Oh, and a real person may very well have several identities on the network.
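To make this a little more concrete, here is a minimal Python sketch of the idea. The names and structures are mine, not those of any real protocol: the point is simply that a message reaches the author’s own server first, and that server then replicates it to every other server hosting at least one participant of the conversation.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """One server in the federation; every server is a peer of every other."""
    name: str
    timeline: list = field(default_factory=list)   # this server's local copy

    def receive(self, message: dict) -> None:
        self.timeline.append(message)

@dataclass
class Community:
    """A conversation whose participants may live on any number of servers."""
    name: str
    participating_servers: list

    def post(self, author_server: Server, body: str) -> None:
        message = {"origin": author_server.name, "body": body}
        author_server.receive(message)                 # the home server first...
        for server in self.participating_servers:      # ...then every other peer
            if server is not author_server:
                server.receive(message)

# Toy usage: three peer servers sharing one community.
empire, alliance, hutt = Server("empire"), Server("alliance"), Server("hutt")
artifacts = Community("artifact-enthusiasts", [empire, alliance, hutt])
artifacts.post(alliance, "Look at this holocron I dug up on Jedha!")
assert all(len(s.timeline) == 1 for s in (empire, alliance, hutt))
```

Every participating server ends up with its own copy of the conversation, which is exactly what makes moderation interesting later on.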
Thanks to Federation, when the Galactic Empire blew up Alderaan, many users migrated away to other servers in a more or less orderly fashion.
Note that I’m most familiar with the Matrix Federated Network, so bits and pieces of what I’m going to discuss below may accidentally be inspired by Matrix specificities, but the general ideas should apply to all Federated Networks.
About Moderation
So what is Moderation? On a communication network, Moderation is the task of getting rid of content that is considered unacceptable, either because users won’t stand for it or because local law says that the server’s owner will be punished if they don’t do anything against such content.
As an example, on our network:
- very few people, regardless of their political opinions, want to see spam;
- very few people, again regardless of their political opinions, want to be impersonated;
- very few people enjoy being victims of phishing.
Among their tasks, moderators should therefore fight spam, impersonations and phishing.
Let me reiterate: moderation is a (thankless but) critical task. A community without moderation will eventually fall prey to trolls and spammers and will be deserted by its users. Also, for services that happen to be ad-sponsored, advertisers typically don’t like to have their name associated with phishing, spam or trolling.
So, what exactly do moderators need to filter out?
Problem 1: What rules?
Most servers have rules. Most communities have rules. Rules that say, for instance, that spam, impersonation attempts or phishing are not acceptable. However, not all servers have the same rules and not all communities have the same rules. In fact, not all communities on the same server have the same rules. The rules of servers are often called Terms of Service and the rules of communities are often called Codes of Conduct.
For instance, what about politics? Pro-Empire propaganda is omnipresent on Galactic Empire servers but removed on sight from Alliance servers. Pro-Rebellion propaganda is omnipresent on Alliance servers but will grant you a one-way ticket to the caves of the Imperial Security Bureau on Empire servers.
What about something more universal, say pornography? The ever-family-friendly Galactic Empire rules “no, on penalty of 5 years of forced labour”, while the Hutt Syndicate responds emphatically “yes, do you want a sex slave with it?” Alliance servers tend to allow consensual pornography, but not in all communities. Also, some communities have stricter or more exotic rules. For instance, on Yavin IV and Kashyyyk, unboxing teddy bear toys is both considered pornography and punishable by tribal law.
Oh, and the Hutt Syndicate has a very hands-off approach to moderation. In fact, they will allow pretty much anything on their servers. Individual communities hosted on these servers can moderate themselves but should not expect any help from the servers or their administrators.
All this marks a considerable difference between Federated and Centralized networks. Where Centralized networks can enforce global policies, Federated networks need to cope with policies that differ between servers and between communities. And by “differ”, we sometimes mean “outright contradict each other”. Which would still be fairly simple to solve if not for the fact that users from distinct servers often participate in shared communities/conversations.
Let’s consider the community of Archaeological Artifacts Enthusiasts, opened by Luthen Rael on the Coruscant Server, a member of the Galactic Empire Network. In this community, people from all over the Federated Network meet and chat about archaeological artifacts. Which means that this community actually exists simultaneously on Galactic Empire servers, Rebel Alliance servers (because that’s how Rebel Alliance users connect to this community), Hutt Syndicate servers (because that’s how Hutt Syndicate users connect to this community) and even on peer-to-peer Bounty Hunter devices.
To summarize:
- The copy hosted by the Galactic Empire Network allows neither pornography, spam nor talk of sedition. Users can say anything else, even if it contradicts the rules of other servers.
- The copy hosted by the Rebel Alliance allows neither slave trading, spam nor Empire propaganda. Users can say anything else, even if it contradicts the rules of other servers.
- The copy hosted by Bobba Fett, the Bounty Hunter, on his device, has no rules. Bobba Fett can say anything.
- Likewise, the copy hosted by the Hutt Syndicate has no rules.
- The moderators of this community have set up additional rules. Nobody should speak of anything other than archaeology. In particular, no politics. Some of the moderators connect from Galactic Empire servers and some from Hutt Syndicate servers.
Surely, that cannot hold?
Well, in practice, it does. Let’s look at a few scenarios.
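To keep the scenarios easier to follow, here is a rough Python sketch of how the setup above could be modeled. The rule sets and predicate names are purely illustrative, and the detectors deliberately do nothing, since detecting unwanted content is a topic for a later entry.

```python
# Placeholder detectors: real content detection is out of scope here.
def not_implemented(message: str) -> bool:
    """Stand-in detector: always answers 'no'."""
    return False

is_spam = is_porn = is_sedition = not_implemented
is_slave_trading = is_empire_propaganda = is_off_topic = not_implemented

# Each server enforces its own Terms of Service on its local copy...
SERVER_TOS = {
    "empire":     [is_spam, is_porn, is_sedition],
    "alliance":   [is_spam, is_slave_trading, is_empire_propaganda],
    "hutt":       [],        # hands-off: anything goes
    "bobba-fett": [],        # personal server, no ToS at all
}

# ...and the community's moderators enforce a Code of Conduct on top of that.
COMMUNITY_COC = [is_off_topic]      # "archaeology only, no politics"

def violates(message: str, rules: list) -> bool:
    """A message is unwanted under a rule set if any rule in it matches."""
    return any(rule(message) for rule in rules)

print(violates("Buy spice now, best prices on Kessel!", SERVER_TOS["empire"]))
# False here, only because the stand-in detectors never match anything.
```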
What if… an Empire user starts sending spam?
For a first, simple case, let’s assume that everybody agrees that what the user is sending constitutes spam.
What do the Terms of Service and Code of Conduct say?
- Empire Terms of Service say “no spam”;
- Alliance Terms of Service say “no spam”;
- Bobba Fett doesn’t have Terms of Service;
- Hutt Syndicate Terms of Service don’t care about spam;
- The Community’s Code of Conduct says “no spam”.
In this case, we can probably agree that spam should be eliminated if and whenever possible.
Recall that the Empire user is connecting from their device to Empire servers. If the Empire manages to eliminate the spam before it is replicated from their servers to other servers, the spam stops there. Otherwise, the Community Manager may request that the spam be eliminated from all servers, and individual servers that detect the spam can also remove it from their own copy.
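Here is a minimal sketch of “stop it at the source”, with an invented spam heuristic standing in for real detection: the home server checks a message before fanning it out to its peers, and anything it flags never leaves the server.

```python
def looks_like_spam(body: str) -> bool:
    # Stand-in heuristic only; real detection is a topic for a later entry.
    return "BUY CORUSCANT REAL ESTATE NOW" in body

def on_local_message(body: str, peer_servers: list) -> list:
    """Called by the author's home server before federation fan-out.
    Returns the list of servers the message will actually be sent to."""
    if looks_like_spam(body):
        return []                  # dropped at origin: no other server ever sees it
    return list(peer_servers)      # otherwise, replicate to every peer

print(on_local_message("BUY CORUSCANT REAL ESTATE NOW", ["alliance", "hutt"]))  # []
print(on_local_message("Nice holocron!", ["alliance", "hutt"]))                 # both peers
```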
We’ll address detecting spam in another entry of this series, because that’s a complicated topic in itself.
What if… a Hutt user is complaining about the Empire’s latest policies?
Well, technically, that’s almost the same thing as above. Except not everybody agrees that it’s bad content.
Let’s look at the Terms of Service (henceforth ToS) and Codes of Conduct (henceforth CoC):
- Empire ToS say “no seditious talk”;
- Alliance ToS say “seditious talk welcome”;
- Bobba Fett doesn’t have ToS;
- Hutt Syndicate ToS don’t care;
- The Community’s CoC says “no politics”, which implies “no seditious talk”.
Well, the Empire won’t like that. If they detect the message, they may scrub the content from their own server, but after that, they can only politely request that clients and other servers scrub that content from their own copies. In the real world, clients connected to a server tend to accept these requests blindly, as do well-behaved servers. However, there is no guarantee that a server is well-behaved. Here, chances are that the Alliance servers are programmed to ignore scrub requests coming from the Empire.
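As a small sketch of that triage, assuming each server keeps its own list of parties whose scrub requests it honors (the names and structure are invented for the example):

```python
# Per-server policy: whose scrub requests does this server honor?
TRUSTED_SCRUB_SOURCES = {
    "alliance": {"alliance", "community-moderator"},   # Empire requests are ignored
    "empire":   {"empire", "community-moderator"},
    "hutt":     set(),                                 # honors nobody's requests
}

def handle_scrub_request(receiving_server: str, requester: str,
                         message_id: str, local_store: dict) -> bool:
    """Remove the message from this server's copy only if the requester is trusted."""
    if requester not in TRUSTED_SCRUB_SOURCES.get(receiving_server, set()):
        return False                      # request (politely or not) ignored
    local_store.pop(message_id, None)     # scrub the local copy
    return True

alliance_copy = {"msg-42": "The Emperor's new tax policy is a disgrace."}
print(handle_scrub_request("alliance", "empire", "msg-42", alliance_copy))  # False: ignored
print("msg-42" in alliance_copy)                                            # True: still there
```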
By the way, since there is no technical difference between a seditious-talk removal request and a spam removal request, that probably means that, in the previous example, the Alliance needs spam tooling that operates entirely separately from removal requests coming from the Empire.
Of course, Community Moderators may place the same kind of request. A Community Moderator using a Galactic Empire server can place the exact same request as the Empire itself, presumably with the same results. A Community Moderator using a Hutt Syndicate server can also place the exact same request and hope that the responses will be more favorable. In practice, this should be the case, but in theory, there is no guarantee.
What if… an Empire user is complaining about the Empire’s latest policies?
- Empire ToS say “no seditious talk”;
- Alliance ToS say “seditious talk welcome”;
- Bobba Fett doesn’t have ToS;
- Hutt Syndicate ToS don’t care;
- The Community’s CoC says “no politics”, which implies “no seditious talk”.
What happens here depends a lot on when the Empire detects the seditious talk. If it does so before said talk leaves the server, it can eliminate it and stop it immediately. Nobody will ever see it.
If the Empire detects the unwanted content after said content has been sent to users connected to the server but before it has been replicated to other servers, it can still stop the replication and request that all clients scrub the message. As with centralized networks, well-behaved clients will typically accept these requests, but there is no guarantee.
And if the Empire detects the unwanted content after said content has been replicated to at least one other server, it’s pretty much too late. Even if the Empire scrubs the content, the other servers will in turn replicate it to the rest of the servers. We’re back in the previous scenario, with the Empire having to politely request that the Alliance (and others) remove any trace of the seditious talk, a request which will quite possibly be ignored.
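Summed up as a tiny, purely illustrative decision helper, seen from the origin server’s point of view:

```python
def containment_options(sent_to_local_clients: bool,
                        replicated_to_other_servers: bool) -> str:
    """Rough summary of the three timing cases above, from the origin server's view."""
    if not sent_to_local_clients and not replicated_to_other_servers:
        return "delete locally: nobody will ever see it"
    if not replicated_to_other_servers:
        return "stop replication and ask local clients to scrub (usually honored)"
    return "too late: politely ask other servers to scrub (may well be ignored)"

print(containment_options(sent_to_local_clients=True, replicated_to_other_servers=True))
```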
If the Empire wishes to avoid this scenario at all costs, they may decide to de-federate, blocking any other server from receiving their messages or sending them messages. That’s a rather extreme measure, but it may be necessary in some cases, either temporarily or permanently.
What if… an Empire user is looking for a pornographic artifact?
- Empire ToS say “no porn”;
- Alliance ToS say “consensual porn welcome”;
- Bobba Fett doesn’t have ToS;
- Hutt Syndicate ToS say “all porn welcome”;
- The Community’s CoC says “any talk about artifacts welcome”.
Again, from the point of view of the Empire, this is unwanted content. From the point of view of everyone else, including the Community, this is legitimate content.
If the Empire can detect the content, they can scrub it (or fail to scrub it), exactly as in the previous scenarios, regardless of the wishes of the Community.
Without entering details about detection, let’s just mention briefly that the Community can make it easier or harder for the Empire to detect such content, some of the keywords being “End-to-end encryption”.
What if… a Rebel user is looking for a pornographic artifact?
- Empire ToS say “no porn”;
- Alliance ToS say “consensual porn welcome”;
- Bobba Fett doesn’t have ToS;
- Hutt Syndicate ToS say “all porn welcome”;
- The Community’s CoC says “any talk about artifacts welcome”.
Again, from the point of view of the Empire, this is unwanted content. From the point of view of everyone else, including the Community, this is legitimate content.
Again, if the Empire can detect the content, they can scrub it (or fail to scrub it), exactly as in the previous scenarios, regardless of the wishes of the Community. Even though the message did not originate from the Empire, the Empire can remove it from its own servers, presumably from its own clients, and, since at least one Community Moderator is hosted on an Empire server, the Empire can request that other servers remove it, too. Again, other servers can ignore this request.
Again, end-to-end encryption can influence whether the Empire can detect this unwanted content at all.
If such instances happen regularly, the Empire may decide to apply some form of content firewall, to block messages from Alliance users – or from specific Alliance users – from reaching their server. This will not affect the ability of the Alliance to send these messages to Hutt Syndicate users and Communities. This is, in practice, a much more lightweight form of de-federation.
Similarly, a Community Moderator may apply some form of content firewall to their Community, to the same effect.
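Here is a rough sketch of such a firewall, with both a server-wide and a community-level block list. The structure and names are invented for the example; the important part is that the filtering happens on incoming messages, without cutting off federation entirely.

```python
# Server-level firewall: block whole peer servers or individual remote users.
SERVER_FIREWALL = {
    "empire": {
        "blocked_servers": {"alliance"},   # block an entire peer server...
        "blocked_users": set(),            # ...or only specific remote users
    },
}

# Community-level firewall: moderators can do the same for a single community.
COMMUNITY_FIREWALL = {
    "artifact-enthusiasts": {"blocked_users": {"@spammer:hutt"}},
}

def accept_incoming(local_server: str, community: str,
                    origin_server: str, sender: str) -> bool:
    """Decide whether an incoming federated message is accepted locally."""
    srv = SERVER_FIREWALL.get(local_server, {})
    if origin_server in srv.get("blocked_servers", set()):
        return False
    if sender in srv.get("blocked_users", set()):
        return False
    com = COMMUNITY_FIREWALL.get(community, {})
    return sender not in com.get("blocked_users", set())

# The Empire's copy rejects Alliance traffic; the Hutt copy still receives it.
print(accept_incoming("empire", "artifact-enthusiasts", "alliance", "@luke:alliance"))  # False
print(accept_incoming("hutt", "artifact-enthusiasts", "alliance", "@luke:alliance"))    # True
```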
What if… an Alliance user is trying to get some pro-Empire artifact banned from sale?
- Empire ToS say “pro-Empire is good”;
- Alliance ToS say “pro-Empire is bad”;
- Bobba Fett doesn’t have ToS;
- Hutt Syndicate ToS say “pretty much everything is good”;
- The Community’s CoC says “artifacts are good”.
Well, as an end user without moderation rights, our Alliance user cannot remove stuff they didn’t provide themself. In particular, they cannot send scrub requests to servers or other clients.
The only thing they can do is ask for intervention from a Moderator. As it turns out, the CoC explicitly allows these artifacts, so said Alliance user will not be able to ban said artifact.
What if… Bobba Fett is trying to get some piece of armor banned from sale?
- Empire ToS say nothing about armor;
- Alliance ToS say nothing about armor;
- Bobba Fett doesn’t have ToS;
- Hutt Syndicate ToS say “pretty much everything is good”;
- The Community’s CoC says “pieces of armor are artifacts and artifacts are good”.
Well, the moderators are not going to help Bobba Fett either, and Bobba Fett is not a moderator himself. However, Bobba Fett does have his own server. Perhaps he can send scrub requests to other servers?
All servers that have at least one member in the conversation know the capabilities of all users in that conversation, including Bobba Fett. So, while Bobba Fett could try to send scrub requests, by specification, all servers will ignore these requests because Bobba Fett doesn’t have the right to perform such operations.
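As a sketch of that check, loosely analogous to Matrix power levels (the exact representation here is made up): every server consults the same view of who holds moderation rights before honoring a scrub request.

```python
# Moderation rights as known to every server in the conversation.
MODERATION_RIGHTS = {
    "@moderator:empire": {"scrub"},
    "@moderator:hutt":   {"scrub"},
    "@bobba-fett:bobba-fett": set(),   # owns his own server, but no community rights
}

def authorize_scrub_request(requester: str) -> bool:
    """Every server applies the same check before honoring a scrub request."""
    return "scrub" in MODERATION_RIGHTS.get(requester, set())

print(authorize_scrub_request("@moderator:hutt"))        # True
print(authorize_scrub_request("@bobba-fett:bobba-fett")) # False: rejected everywhere
```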
So, is it broken?
Actually, I’d argue that it works pretty well.
If you look at the above examples, not all users get what they wish, but (assuming that unwanted content can be detected) the Community can remain free of content that the Community Managers don’t want (because it’s off-topic) and the Servers can remain free of content that the Server Managers don’t want (because it would land them in jail).
Could everyone get what they want? Probably not, just as in real life. When users want things that contradict the rules (ToS or CoC), well, there is a mismatch between what the users want and what the Community and/or Server allows. And there is a very easy way to solve this: the user can find or create a new Community, with their own rules, on another Server or on their own.
I would even go further and venture that the experiment of Centralized Moderation has proven to be a failure. Real-world services such as Twitter, Facebook or TikTok need to apply the same rules to the entire system or be labeled unfair. But such universal rules don’t exist. Outside of extremes, there is no monolithic definition of “good” and “bad”. Even worse: attempting to privatize these choices puts us one step closer to the Cyberpunk dystopia that’s been lurking as a possible future for quite a few years. Decentralizing rules towards Servers and Communities brings Moderation several steps closer to real life: messy, gray, with lots of room for error, and with the ability to step away and try something different without bringing down the entire building.
Now, we haven’t covered the critical case of trolling/bullying/harassment. Generally speaking, this follows the same patterns as the other examples above: either the behavior of the abuser contradicts the rules of the Server or Community, in which case said abuser can be spoken to sternly, kicked or banned, or it doesn’t, in which case the victim’s only options are to fight to get the rules changed or to slam the door, possibly starting another Community and/or Server.
Trolling/bullying/harassment is (as everything else in this article) a human problem and the solutions are mostly human. The fact that the Network is Federated doesn’t get in the way of solving this problem, and the Twitter/Musk experiment suggests that it is actually helpful, insofar as on a Federated Network, victims can fairly easily move away and create new Communities and/or Servers with rules better suited to their situation, de-federating or firewalling abusers away if necessary.
Wait, are we stopping here?
Yes, we are.
Moderation is a very complex topic and we have only covered one angle of it: how a Federated Network can handle contradictory rules.
This is meant to be the first entry in a series about Federated Moderation. As of this writing, I’m not certain of the topic of the second entry, but I expect that I’ll be discussing at some point things such as:
- detecting unwanted content;
- asking for help;
- pooling resources;
- what end-to-end encryption is good (or bad) for.