How would moderation work here? Like, what prevents me from being literally inundated with people trying to sell me scamcoins, or viagra, or nudes, or whatever the latest thing is.
You say "Self-moderating: a user has enough control over what they receive to reject spam content", but it's unclear what that means. Are you telling me I'm going to have to keep saying "I don't want to hear from ViagraSupplier1493956" every time ViagraSupplier1493957 makes a new account? Won't they just automate creating new accounts so I continuously have to reject them?
Or is this an opt-in system? That's not what the word "reject" means to me, but it's the other "simple" alternative I could imagine this describing. If so, is there literally any way for a new user to onboard to this network except to convince people to start "listening" to them via side channels (like asking friends on other social media to follow them)? That's basically how things like Substack work... I'm not sure I see it scaling.
I think it was one of the reddit founders who recently made the point that content moderation is fundamentally about increasing signal to noise ratio, and frankly I can't think of a harder problem. When you're trying to pitch a decentralized social network to me, it's literally the first question I have.
PS. The readme links to "https://chatternet.github.io/", but that doesn't exist.
Signal to noise in the context of discovery of public content is just hard. If you remove discovery, or limit it to curated content you can get around it. Haven (my project) skips discovery and makes every source for your feed opt-in, but I'd love to see more attempts at curation!
Thanks for the thoughtful question. Signal to noise, spam, Sybil attacks, etc. are probably the hardest problems to solve for this project, so I suppose it only makes sense that they're not really solved yet. But I hope that Chatter Net offers tools which can be used to, if not solve, then at least make progress on the issue.
The short answer is that Chatter Net routes messages to you only from people you follow. If those were just messages authored by people you follow, it wouldn't be a very interesting platform. But in fact, whenever a message lands on your UI, you emit a View (https://www.w3.org/TR/activitystreams-vocabulary/#dfn-view) event on the message you just viewed. People who follow you will see that View, and that's how content makes its way around.
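To make the gossip step concrete, here is a hedged sketch of what such a View event could look like as an Activity Streams object. The DID-style actor id and the message URL are made up for illustration; the project's actual schema isn't documented here.

```python
import json

def make_view_activity(actor_did: str, message_id: str) -> dict:
    """Build an Activity Streams 'View' activity announcing that the
    actor has viewed the message with the given id."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "View",
        "actor": actor_did,    # hypothetical DID, not a real account
        "object": message_id,  # hypothetical message id
    }

view = make_view_activity(
    "did:key:z6MkExampleUser",
    "https://example.net/messages/42",
)
print(json.dumps(view, indent=2))
```

Followers who receive this activity learn about the object it points at, which is how content spreads beyond its author's direct followers.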
The long answer is... well, long. And it's not an answer so much as a conversation. I see this as all relating to trust. And it's not really possible to discuss trust without getting into the debate of anonymity vs. privacy.
In real-world communities, you trust some people, and so you are willing to spend time listening to them. These could be family, friends, actors, politicians, etc.
On the internet, the mechanism for trusting someone is actually a bit odd. You believe that when you connect to a known domain, there is some trustworthy entity behind it serving you content of interest to you. And if that domain hosts a social media platform, and that platform trusts a user by allowing the user to create an account, then you sort of extend the trust you have in the platform to that user. But this is all very anonymous and leads to all sorts of issues such as bot farms and domain parking.
When things are framed that way, it almost seems the current efforts around moderation are really trying to treat the symptom, not the cause. Does it make sense to trust an anonymous user on some platform? The place this seems to work best is Wikipedia. But even there the implicit trust in user accounts is sometimes abused (https://www.theatlantic.com/business/archive/2015/08/wikiped...).
I think somewhere along the line the concepts of anonymity and privacy got confused. In the real world, if you walk around hiding your face and otherwise concealing your identity, people don't trust you. The things you want to keep private, you discuss in a private setting (e.g. in the comfort of your home). But when you are in a public forum, your identity is what you use to get others to listen to you and interact with you.
In this sense, Chatter Net is a public forum, while something like Signal or Matrix Chat would be a private forum.
Thanks for the explanation - it's good to hear a little about this. I have a couple of questions.
> But in fact, whenever a message lands on your UI, you emit a View (https://www.w3.org/TR/activitystreams-vocabulary/#dfn-view) event on the message you just viewed. And so people who follow you will see that view, and that's how content makes it way around.
Do you have to push a button to share it in some way? How do you stop all content traversing the entire graph of people if it's shared as soon as it's automatically displayed on your UI?
> In real world communities, you trust some people, and so you are wiling to spend time listening to them. These could be family, friends, actors, politicians etc.
How does this relate to echo chambers? It sounds as though it would be hard to avoid creating one.
Out of curiosity, have you seen Aether, and specifically its approach to moderation?
I don't know how well this would scale in practice, but the idea of people delegating moderation to others is interesting IMO.
Which API does the client use to communicate with the server? Is it possible to use existing Mastodon web clients like [3,4]?
It certainly sounds interesting, especially the lack of server blacklists.
>It closely follows (but is not fully compliant with) the Activity Pub protocol.
Are you aiming for posts from ChatterNet to be visible to the rest of the fediverse (mastodon et al)? How will that look from the POV of the mastodon server?
Hi, thanks for the question! I realize I pushed this with some important details missing.
As a first objective I'd like to get Chatter Net nodes to be able to consume from the Fediverse. The (small) challenge is that posts in the Fediverse are unsigned, whereas what makes Chatter Net tick is really just adding signatures to Activity Streams objects. How this could work, then, is that a Chatter Net user (or server) could pull posts from the Fediverse. That user (or server) could then emit a "View" message on that post and sign that message. Now the message has an origin within Chatter Net, allowing others to trust it and share it. Many accounts can share the same post. In fact they just sign the CID of the contents of the post, so the post itself could be retrieved from an external server.
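A rough sketch of that signing flow, under two loudly labeled simplifications: a plain sha256 digest stands in for a real CID (which would be a multibase-encoded multihash), and an HMAC stands in for the asymmetric DID-keypair signature a real node would use.

```python
import hashlib
import hmac
import json

def content_cid(post: dict) -> str:
    """Content id for a post: a sha256 of its canonical JSON.
    (Stand-in for a real IPLD-style CID.)"""
    canonical = json.dumps(post, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

def sign_view(actor_key: bytes, actor_did: str, cid: str) -> dict:
    """Emit a signed View on a post's CID. The HMAC 'proof' stands in
    for a real public-key signature."""
    activity = {"type": "View", "actor": actor_did, "object": cid}
    payload = json.dumps(activity, sort_keys=True).encode()
    activity["proof"] = hmac.new(actor_key, payload, "sha256").hexdigest()
    return activity

# An unsigned post pulled from the Fediverse gains an origin within
# the network once a user signs a View on its content id.
post = {"type": "Note", "content": "hello from the fediverse"}
signed = sign_view(b"secret-key", "did:key:z6MkExampleUser", content_cid(post))
print(signed["object"])
```

Because only the CID is signed, many accounts can sign the same post independently, and the post body itself can live on any server.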
The more challenging objective is to get the Fediverse to hear about what's going on in Chatter Net. Chatter Net servers could implement the Activity Pub server-to-server protocol. But other Fediverse servers would need to trust the contents coming from a Chatter Net server, and without moderation this would be complicated.
If this project picks up some steam I am hopeful others will have good ideas on how to address these issues.
> Are you aiming for posts from ChatterNet to be visible to the rest of the fediverse (mastodon et al)? How will that look from the POV of the mastodon server?
For that matter, how do other Fediverse apps handle this when bridging between concepts, like a general social network to a microblog, or chat to a microblog? My impression was that federation is like-to-like rather than over a general information schema.
Good idea. If your selling proposition against other protocols like farcaster.xyz is that the protocol is compatible with ActivityPub, could you explain where it is not fully compliant, please?
>It closely follows (but is not fully compliant with) the Activity Pub protocol
Could you imagine running a bridge server to the Fediverse? That would probably require a white-list to gain the trust of the peering nodes, though.
Naive question, what is this solving?
How does this compare with Bluesky, which I understand also uses DIDs?
I'm trying to make sense of what this is. It seems the answer may be on the project's GitHub page referenced in the readme, but that just 404s.
There used to be a page there, but that's been taken down until more work can be done on it.
I pushed this all a bit early, there are a lot of details missing. I'll be spending time documenting things a bit better in the coming days. In the meantime if there's anything I can answer I'll be happy to!
Not implementing server-level blocklists is almost a guaranteed method of getting your instances added to federated blocklists. I see that this is not ActivityPub compliant, so presumably this would only federate with other Chatternet instances? If that's the case, I recommend hurrying to implement a solid product vision, because otherwise "no deplatforming" is your primary sell, and the people who are buying are perceived as folks who have been deplatformed already. It's a hard problem right now to get some people to separate "free speech" and "hate speech" in social media, and until that's got a solution I'm not sure a slightly-different protocol will move the needle.
You bring up some good points. Indeed, Chatter Net as-is isn't able to fully integrate into the Fediverse. I think it'll be possible to get there, and I'd like to see the project get there, but in the meantime it's more of a standalone network.
The only real difference between the Fediverse and Chatter Net is that in Chatter Net all messages are signed. To know the origin of a message you simply verify the signature. Whereas in the Fediverse you have to trust that the server authenticated the user and stored the data correctly.
The consequence is that in Chatter Net federation happens user-to-server, not server-to-server. That's because a server can trust messages coming from any arbitrary user, so long as it has a valid signature. The server can choose to filter messages from unknown users of course.
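As an illustration, a server's acceptance check could look something like the sketch below. As before, the HMAC "proof" is only a stand-in for real signature verification against a DID's public key, and the known-actors filter models the optional local policy mentioned above; none of the names here come from the actual codebase.

```python
import hmac
import json

def verify(message: dict, key_for_actor) -> bool:
    """Check a message's proof against its actor's key. The HMAC scheme
    stands in for real DID-based signature verification."""
    key = key_for_actor(message.get("actor"))
    if key is None:
        return False
    unsigned = {k: v for k, v in message.items() if k != "proof"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, message.get("proof", ""))

def server_accepts(message, key_for_actor, known_actors=None) -> bool:
    """Accept any message with a valid signature; restricting to known
    actors is an optional local policy, not a protocol requirement."""
    if not verify(message, key_for_actor):
        return False
    return known_actors is None or message["actor"] in known_actors

# Demo: a user signs a message, then any server can check it without
# trusting the server the message arrived from.
keys = {"did:key:alice": b"alice-key"}
msg = {"type": "Note", "actor": "did:key:alice", "content": "hi"}
payload = json.dumps(msg, sort_keys=True).encode()
msg["proof"] = hmac.new(keys["did:key:alice"], payload, "sha256").hexdigest()
print(server_accepts(msg, keys.get))  # True
```

The point of the design is visible in the last line: trust attaches to the signature, so the message can arrive via any path.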
I think maybe a more important selling point, from that perspective, is that as a user of Chatter Net, you consume messages only from identities you trust (follow). So you can completely filter out unwanted communities. This will take a lot more work to correctly describe in the docs. And the implementation isn't fully sorted out yet either.
A hard problem is selling it short, it is impossible to separate them, because they are one and the same.
So, is this kind of decentralized wiki based on DID?
Pretty much, good observation. The main idea really is just to push identity to the client. Activity Stream is a convenient way to package information, especially if you want to self-sign it and have it be self-descriptive. And Activity Pub is a sensible way to share that information.
There are problems tuning out unwanted traffic, but cool idea. Apparently RDF, etc. are not used?
Chatter Net doesn't offer a solution to spam / unwanted traffic, but maybe it can give tools for people to create their own solutions. The idea is to have a network of servers, and each can enforce their own rules (the simplest of which would be to only accept messages from known users).
To the user, the servers are rather transparent. You can connect to multiple servers at once, and you can connect to different servers on different sessions. So each user can act as a bridge between various communities, allowing information to spread, while each community can still enforce its own rules around what information it stores.
The main difference to the current Fediverse is that federation happens user-to-server, not server-to-server. Ultimately, each server has full control over what it stores and shares. And each user has full control over which servers they want to listen to.
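A toy model of that user-to-server fan-out, with made-up server names and policies: the user pushes the same message to every connected server, and each server applies its own acceptance rules.

```python
class Server:
    """A toy server with a local acceptance policy over actor ids."""
    def __init__(self, name, allowed=None):
        self.name = name
        self.allowed = allowed  # None means accept everyone
        self.stored = []

    def accept(self, message) -> bool:
        if self.allowed is not None and message["actor"] not in self.allowed:
            return False
        self.stored.append(message)
        return True

def broadcast(message, servers):
    """A user pushes the same message to every server they are connected
    to; each server decides independently whether to store it."""
    return {s.name: s.accept(message) for s in servers}

open_server = Server("open.example")
strict = Server("strict.example", allowed={"did:key:alice"})
msg = {"actor": "did:key:bob", "content": "hello"}
print(broadcast(msg, [open_server, strict]))
# {'open.example': True, 'strict.example': False}
```

The same message lands on the open server and bounces off the strict one, so communities with different rules coexist while users carry information between them.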
Is an API with C++ bindings available?
Is there a license for the server? Sorry if I'm overlooking it.
That's the client.
Hi, thanks for the interest! An oversight on my part; I was pushing quite a few things tonight. I just added the MIT license to the server.
It's a really interesting project - thank you! I'm looking forward to exploring a few possibilities.
I don’t think it’s a good idea to build any kind of social networking application without moderation and filtering tools. Both for server admins and users.
Agree! Chatter Net is built to be self-moderating. The current implementation (conversely.social) isn't fully using this yet.
From the user's point of view, the idea is simple: you receive messages only from people you follow. As people in your network emit "View" messages (about other posts), those posts will make it to your inbox. It's a sort of gossip mechanism.
There's another side to this coin: as a server administrator you might not want to host arbitrary content. So a server is free to filter content using any rules. The simplest of which would be to allow content only from trusted accounts.
A user is free to connect to any server, and if that server accepts their content, that is how they can send messages. A server is free to serve only the information it wants to share. A user should be connected to multiple servers, and these can change session-to-session.
This is meant to be transparent to the user: you just go to any site that delivers the front-end, which can run a Chatter Net node, and that node will listen for what servers others are connecting to, and connect to those servers.
The hope is that communities will form around certain servers / moderation styles. But ultimately, the user chooses what they consume.
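Sketching that user-side routing (the field names and DID-style ids are my assumptions, not the project's schema): only followed identities get through at all, and their View messages are what pull other people's posts into the feed.

```python
def inbox(messages, follows):
    """Keep posts authored by followed accounts, plus post ids gossiped
    via View messages from followed accounts; drop everything else."""
    feed = []
    for m in messages:
        if m["actor"] not in follows:
            continue                  # unknown sender: filtered out
        if m["type"] == "View":
            feed.append(m["object"])  # a gossiped post id to fetch
        else:
            feed.append(m["id"])
    return feed

messages = [
    {"type": "Note", "actor": "did:key:alice", "id": "post-1"},
    {"type": "View", "actor": "did:key:alice", "object": "post-2"},
    {"type": "Note", "actor": "did:key:spammer", "id": "post-3"},
]
print(inbox(messages, follows={"did:key:alice"}))  # ['post-1', 'post-2']
```

The spammer's post never reaches the feed, while a post by a stranger can still arrive if someone the user follows Viewed it.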
yeah, no moderation is just spam bots talking to other spam bots.
edit: the author specified below that moderation is on the to-do list, they’re hoping someone with moderation ideas will help with implementation later—it just hasn’t been added yet.