Tagr Bot

Tagr moderates unwanted content in your feed.

User-driven content flag example

Centralized moderation is broken. Nostr gives control back to users

Corporate social media has struggled for years to moderate content in a centralized system. Instead of protecting the most vulnerable, the big platforms have optimized for engagement, often at the expense of human lives.

Nostr is an open protocol where people can post what they want. The minimal use of algorithms to generate feeds makes it harder for toxic content to go viral the way it does on centralized platforms. Even so, some users may prefer not to see certain types of content.

Nostr puts users in control of whom they follow and, by extension, what they see. Nostr clients also offer manual content moderation tools for labeling posts. However, manual content moderation can be time-consuming and mentally taxing; people come to social media to connect, not to serve as moderators of their own feeds.

Minimize exposure and work with Tagr Bot

Following the Tagr bot lets users offload manual moderation of text-based content to AI.

For users who follow Tagr, the bot reviews notes posted by accounts that Nos app users follow and evaluates them for NSFW, offensive, harassing, and illegal content using OpenAI's content moderation service.

If the bot finds a match, it labels the content. Users who follow Tagr and use a client that supports content labels, such as Nos, will see a warning over the note.
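
For the technically curious, here is a minimal sketch of what that flow could look like, assuming the bot calls OpenAI's moderation endpoint and publishes NIP-32 label events (kind 1985). The label namespace and values are illustrative, not necessarily the ones Tagr uses, and signing and publishing the event with the bot's key is left out.

```typescript
// Sketch (assumptions, not Tagr's actual implementation): check a note's text
// with OpenAI's moderation endpoint and, if it is flagged, build a NIP-32
// label event (kind 1985) pointing at the offending note.

interface NoteRef {
  id: string;      // hex event id of the note being checked
  pubkey: string;  // hex pubkey of the note's author
  content: string; // the note text to moderate
}

async function labelIfFlagged(note: NoteRef, openaiKey: string) {
  // Call OpenAI's moderation API.
  const res = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${openaiKey}`,
    },
    body: JSON.stringify({ input: note.content }),
  });
  const moderation = await res.json();
  const result = moderation.results?.[0];
  if (!result?.flagged) return null; // nothing to label

  // Categories that tripped the filter, e.g. ["harassment", "violence"].
  const categories = Object.entries(result.categories as Record<string, boolean>)
    .filter(([, hit]) => hit)
    .map(([name]) => name);

  // Unsigned NIP-32 label event template; a Nostr library would sign it with
  // the bot's key and publish it to relay.nos.social. The "content-warning"
  // namespace here is illustrative only.
  return {
    kind: 1985,
    created_at: Math.floor(Date.now() / 1000),
    content: "",
    tags: [
      ["L", "content-warning"],
      ...categories.map((c) => ["l", c, "content-warning"]),
      ["e", note.id],     // the labeled note
      ["p", note.pubkey], // its author
    ],
  };
}
```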

Steps for using Tagr

Follow the steps below to start using Tagr.

1. Follow Tagr in Nostr

Open a client and start following Tagr.
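
Under the hood, "following" on Nostr means your client publishes a kind 3 contact list (NIP-02) that includes the followed account's public key in a `p` tag. A rough sketch, with a placeholder pubkey since Tagr's key isn't listed here:

```typescript
// Placeholder, not Tagr's real key; your client fills this in when you tap Follow.
const TAGR_PUBKEY_HEX = "<tagr-bot-hex-pubkey>";

const contactList = {
  kind: 3, // NIP-02 contact list
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    // ...your existing follows stay here as ["p", <pubkey>] tags...
    ["p", TAGR_PUBKEY_HEX],
  ],
};
```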

2. Add relay.nos.social to your relay list

Tagr uses relay.nos.social to manage and label notes, so your client needs this relay in its list to receive the labels.
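
Most clients handle this from their relay settings. For clients that manage relays with a NIP-65 relay list (kind 10002), the result is an event roughly like this sketch:

```typescript
const relayList = {
  kind: 10002, // NIP-65 relay list
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    // ...your existing relays as ["r", <url>] tags...
    ["r", "wss://relay.nos.social"], // no read/write marker means both
  ],
};
```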

3. Use a client that displays content warnings

Only some Nostr clients support content labels. Below is a short list, followed by a sketch of how a client can look up Tagr's labels for a note:
- Nos (iOS) 
- Amethyst (Android)
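
As a rough sketch of what label support involves, a client can ask a relay for kind 1985 label events that reference a note using a standard NIP-01 REQ; the note id and subscription id below are placeholders.

```typescript
// Ask relay.nos.social for NIP-32 label events (kind 1985) that reference a note.
const noteId = "<hex-note-id>"; // placeholder

const ws = new WebSocket("wss://relay.nos.social");
ws.onopen = () => {
  ws.send(JSON.stringify([
    "REQ",
    "tagr-labels",                     // arbitrary subscription id
    { kinds: [1985], "#e": [noteId] }, // labels pointing at this note
  ]));
};
ws.onmessage = (msg) => {
  const [type, , event] = JSON.parse(msg.data);
  if (type === "EVENT") {
    // event.tags contains the ["l", ...] labels a client can render
    // as a content warning over the note.
    console.log("label:", event);
  }
};
```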