🧠

WLI User Research

Action items
Revise the Com Manager value prop(s)
Check on monetization of live events
Recruit and talk to individuals
3 social media users
3 community joiners
Connect with TC - their data pipeline, but also what they learned
Tuning out
  • individual who receives a constant level of toxicity
    • tunes out and keeps going -
    • leaves - it’s really hard to catch someone at one point in their journey
  • Community manager
    • a constant level that doesn’t bother them
Research Strategy
  • Who would buy and use our product and why?
  • What communities have the worst problem? Do they know it?
    • Do the data analysis to find which communities have more of a problem
Who:
Choosers
  • a leader who would buy the product
    Users - both dominant and marginalized people
    What kind of organizations
     
    Ask - can you combine all of these together to make 3-4 problem statements, one for each of the different ways to use the tool?
    Next March 27th -
    Today’s Problem Statement v1
    I am an avid project contributor who posts a lot online.
    [a unique person with my context]
    I am trying to post opinions online which sometimes receive backlash.
    [accomplish an important goal]
    But I don’t always have the mental capacity to respond to the backlash.
    [I have this problem]
    Because it’s not the only part of my job although it is important.
    [root cause]
    Which makes me feel frustrated and drained.
    [emotional effect]
    • Filter bad stuff out of my feed (Web 2 and Social Media)
    Value Proposition v1
    Our product welivedit.ai
    helps social media users
    who want to post their opinions on controversial topics online.
    by (verb) (pain) decreasing their vulnerability to hate and toxic content
    and (verb) (benefit) increasing control over when and how they can deal with this content.
    because we (reason to believe) understand the effect that hate speech and toxicity can have, and the empowerment that comes from the ability to control it on one’s own terms
    unlike _______
     
    We help people whose work keeps them visible online, by enabling them to a) filter out toxicity in their online spaces on days they don’t want to deal with it and b) capture data needed for reporting online hate.
     
    Version for Community Selection by Individuals
    Today’s Problem Statement v2
    I am responsible for managing an online community space.
    [a unique person with my context]
    I am trying to keep the space conflict-free.
    [accomplish an important goal]
    But sometimes conflicts erupt and things turn toxic. People send insulting and hate-filled messages.
    [I have this problem]
    Because conflict is inevitable and I can’t monitor the space 24/7.
    [root cause]
    Which makes me feel frustrated.
    [emotional effect]
    Value Proposition v2
    Our product welivedit.ai
    helps organisations or projects with dynamic digital communities
    who want to attract and retain more diverse talent.
    by (verb) (pain) diminishing the risk of losing a portion of valuable existing or potential community members who don’t feel welcome in the community
    and (verb) (benefit) setting themselves apart from more toxic online projects and showing how they tackle online hate and toxicity through a dashboard (connected to the welivedit.ai scanner)
    because we (reason to believe) understand the necessity to tailor moderation tools to each community and their values
    unlike _______ hate speech filters that are inflexible and don't adjust to our needs
     
    Value Proposition & Concept Tests
    see
    Our (kind of) _________
    helps _________
    who wants to __________
    by (verb) (pain)
    and (verb) (benefit)
    because we (reason to believe)
    unlike _______
    Interviews & Notes
    How to find people to talk to
    • opt in
    Tracking, Analysis and Synthesis tools
    💪
    WLI Outreach initiatives
     