Feature Requests & Problems

This document condenses the findings from the discussion after the test run.
 
Problems derived from the following feature requests (sorted by severity)
  • I can’t stop funds from being sent to contributors who ghosted
  • Unfair distribution of funds: Team members get rewarded without contribution
  • How do I know if commitment was fulfilled?
  • I don’t know how to rate as I have no context on some team members
  • System is too rigid
  • Too complex
 
Feature requests:
Feature requests should be considered in terms of the underlying problem the customer is trying to solve. This is usually different from what they formulate as a feature.
So what is the underlying problem behind each request?
 
UI
  • Show commitment & Market rate on assessment page - Solves problem of context
  • More granular values for rating - split test
  • Less granular values for rating - split test
  • Flag ghosting - Creates awareness of no contribution at all
 
UX
  • Set evaluation to weekly, biweekly or monthly
 
Assessment
  • Only assess people you interacted with - Solves the problem of not having context on others
  • Delegate votes to those who worked together? - Problem is context
  • Automatic delegation from non-voters to voters - Problem is complexity
  • Assess Commitment in addition to Cultural Impact and Performance - Isn’t Culture + Performance = Commitment?
  • Weights on cultural vs. contribution - Solves the problem of valuing contribution more than culture
  • Adjust voting weights with TogetherCrew's social graph
 
Commitment
  • Challenge commitment (dynamic) - Problem of unfair distribution of funds
  • Flag is anonymous and halts issuance for 48h
  • Review if commitment was fulfilled
  • Trigger a “wisdom of the crowd” vote
  • Adjust commitment %
 
Accountability
  • Additional mechanics that make sure the contributor agreements are being followed
 
Simplification
  • Flag just under- and over-performance
  • Everyone gets paid the agreed amount monthly
  • Remove rating | Only flag and accept? + comment
  • Reduce complexity in Teams/Organisation
 
Additional use case
  • Mint feedback on-chain as credentials
 
Metrics
  • Additional metric which calculates someone’s actual commitment
  • % of weekly priorities completed
  • Response rate (within 24h?)
  • Meeting attendance rate (100%)
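To make the metrics above concrete, here is a minimal sketch of how the candidate metrics could combine into a single "actual commitment" score. The weights, function name, and the 0–100 percentage scale are illustrative assumptions, not part of any agreed design:

```python
# Hedged sketch: combine the three candidate metrics into one commitment
# score. Weights are hypothetical placeholders for a split test.

def commitment_score(priorities_completed_pct: float,
                     response_rate_pct: float,
                     attendance_rate_pct: float,
                     weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted average of the three metrics; each metric is a 0-100 %."""
    metrics = (priorities_completed_pct, response_rate_pct, attendance_rate_pct)
    if not all(0.0 <= m <= 100.0 for m in metrics):
        raise ValueError("metrics must be percentages in [0, 100]")
    return sum(w * m for w, m in zip(weights, metrics))

# Example: 80% priorities completed, 90% responses within 24h, full attendance
# -> 0.5*80 + 0.3*90 + 0.2*100 = 87.0
score = commitment_score(80.0, 90.0, 100.0)
```

A weighted average keeps the score on the same 0–100 scale as its inputs, which would make it easy to display next to the existing ratings; whether attendance should count at all is exactly the kind of question the split-test requests above raise.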
 
Comments:
  • We need a system for fluid teams
  • How do we define commitment?
  • How do we implement a review period that checks whether work was delivered? (unit governance level)
  • Collabberry is a tool for compensation and feedback, not agreements review
  • Not punishment focused, rather reward those who deliver
  • The system doesn't seek to measure quality; it seeks to measure deliverables and cultural fit.
  • Newcomers should get different treatment than established contributors
 
Issues:
  • How to police % of commitment?
  • One member got a rating of 3 despite not being active at all
  • Members working in isolation might give each other high ratings despite mediocre performance
  • Team members have to rate people they have no context on
  • What do I do when I have no insight on what others do?
I’m just
