What Does Tisane Do When Problematic Content Is Detected?
- We are not moderators or investigators. We merely process textual content.
- We do not set policy.
The role of Tisane ends with detection: it receives input text, processes it, and responds with a JSON structure detailing what was detected.
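As an illustration only, a call to the hosted API might look like the sketch below. The endpoint, header name, and request fields follow the pattern used by the hosted Tisane service, but treat them as assumptions and verify them against the API reference for your deployment:

```python
import requests

# Illustrative sketch only: the endpoint, header name, and request fields are
# assumptions about the hosted Tisane service; check the API reference.
TISANE_ENDPOINT = "https://api.tisane.ai/parse"   # assumed endpoint
API_KEY = "YOUR_SUBSCRIPTION_KEY"                 # placeholder

def analyze(text: str) -> dict:
    """Send text to Tisane and return the raw JSON response as a dict."""
    response = requests.post(
        TISANE_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},   # assumed header name
        json={"language": "en", "content": text, "settings": {}},
    )
    response.raise_for_status()
    # The returned JSON describes what was detected, e.g. an "abuse" section
    # listing the type and severity of each finding.
    return response.json()
```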
The calling application can then act on these findings however it sees fit: it may ignore specific sections according to its policy, request a human review, act immediately, and so on.
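For example, a calling application might route Tisane's findings as in the sketch below. The `abuse` section and its `severity` field are assumptions about the response shape, and `reject_message` / `queue_for_review` are hypothetical hooks in your own application, shown here only to illustrate one possible policy:

```python
def reject_message(message_id: str) -> None:
    """Hypothetical hook: block or delete the message in your own system."""
    print(f"rejected {message_id}")

def queue_for_review(message_id: str, detections: list) -> None:
    """Hypothetical hook: send the message to a human moderation queue."""
    print(f"queued {message_id} for review: {len(detections)} finding(s)")

def handle(analysis: dict, message_id: str) -> None:
    """Example policy layer: decide what to do with Tisane's findings."""
    detections = analysis.get("abuse", [])  # assumed name of the section listing detected issues
    if not detections:
        return  # nothing flagged; let the message through

    severities = {d.get("severity") for d in detections}
    if severities & {"extreme", "high"}:
        reject_message(message_id)                  # act immediately
    elif "medium" in severities:
        queue_for_review(message_id, detections)    # request a human review
    # findings below that threshold are ignored under this example policy
```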
Tisane does not notify anyone or ban users. It is there for analysis (or translation).
What If I Need a Complete Solution?
We have built several plugins for popular platforms that allow you to use Tisane without writing code. These include complete moderation platforms with integrated human moderation and legal compliance workflows, as well as instant messaging applications and games.
See: Integrations.
If you need source code to kick-start your front end, our partners at PubNub have built a seamless Tisane integration and a demo. Source code: PubNub GitHub repository
If your platform is not listed or you are building a solution from scratch, contact us to discuss the details of your project.
What If I Need Social Media Content for My Solution?
We do not provide content.
However, we work with partners who do, covering sources ranging from mainstream social media to the dark web.
Contact us and we will point you in the right direction.