WLI Weekly Gokhan #5
Status
Date
Summary
I’ve shifted from a toxicity score to measuring how members align with community values
- Very specific projects
TogetherCrew
- Changing existing ways of doing things
- Online space, I left the Telegram group because
Consumer pays for a solution - both sides
- What have you lost out on?
- Would pay for something that would help them manage what gets out of hand
- Grant management - the whole process took a lot of time.
Hate speech monitoring for individuals, differences:
- Would not pay as much
- Might expect the tool to start working straight away
- Would not need as much training data
- Would adopt as fast as they can
- Is there a way for people with a similar profile to pool their data for fine-tuning?
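One way the pooling idea could work: users with a similar moderation profile each contribute their own labelled (text, label) examples, and only texts that enough contributors agree on end up in the shared fine-tuning set. A minimal sketch; `pool_training_data` and `min_users` are illustrative names, not an existing tool:

```python
from collections import defaultdict

def pool_training_data(user_datasets, min_users=2):
    """Merge labelled (text, label) examples from several users into
    one fine-tuning set, keeping only texts seen by at least
    `min_users` contributors and taking the majority label."""
    votes = defaultdict(list)
    for dataset in user_datasets:
        for text, label in dataset:
            votes[text].append(label)
    pooled = []
    for text, labels in votes.items():
        if len(labels) >= min_users:
            # majority label across contributors
            majority = max(set(labels), key=labels.count)
            pooled.append((text, majority))
    return pooled
```

The `min_users` cutoff is one simple way to keep a single user's idiosyncratic labels from dominating the shared model.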
At what point do we know an individual is willing to pay for this?
- People who just use social media might be less willing to pay for something connected to their work
- They would pay for mental health protection - Perspective API
- For taking part in improving AI, reducing biases
- Would be used for reporting and research - ideal value exchange
Datasets - Customize the sensitivity
- There was a competition: given two sentences, which one is more toxic?
- You can then train a model to score hate speech
- Easier to ask people to add more data
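Those pairwise "which is more toxic?" judgements can be turned into a per-sentence score with a Bradley-Terry model, which is one standard way to convert comparisons into a ranking. A minimal sketch; the function name and iteration count are illustrative, not from the competition:

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=100):
    """Estimate a toxicity score per sentence from pairwise
    judgements. `comparisons` is a list of
    (more_toxic, less_toxic) pairs (Bradley-Terry MLE)."""
    wins = defaultdict(int)          # times judged more toxic
    pair_counts = defaultdict(int)   # times a pair was compared
    items = set()
    for winner, loser in comparisons:
        items.update((winner, loser))
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
    scores = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            # sum over every item j that i was compared against
            denom = sum(
                count / (scores[i] + scores[j])
                for pair, count in pair_counts.items()
                if i in pair
                for j in pair - {i}
            )
            new[i] = wins[i] / denom if denom else scores[i]
        total = sum(new.values()) or 1.0
        scores = {i: len(items) * v / total for i, v in new.items()}
    return scores
```

With continuous scores in hand, "customize the sensitivity" becomes a per-user threshold on the score rather than a retrained model.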
Alternative
- Models that can detect hate speech
- RoBERTa-based LLMs
Comments
- Can I personally be a force for good for the community?
Greedy investor
- Components that are tradeable - models and data trained for the people who need to work with them
- Can be used in a scoring and scanning model
- Virtue signaling / how to remove the language that will be seen
- There was a previous project focused more on toxicity and moderation
- Move away from consulting toward something people can go and use
- As an innovation, they would still pay for this
- Moderation where you have it and where you don't - not central but personal
- We are not censoring; people can say whatever they want