📖

Documentation Norm

Problem defined
Running peer-to-peer assessments for allocating salaries in RnDAO has shown some inefficiencies.
  1. It is difficult for contributors to objectively allocate GIVE to their peers on Coordinape, because it is unclear and unintuitive how many points to give each peer to arrive at a fair salary. The connection between GIVE and salary is not comprehensible. Tried solution: Daniel came up with the allocation, but copy-pasting those values now feels like admin work, and it remains unclear whether to give more or less based on a peer's performance, and by how much.
  2. Contributors lack context about the people they don't work with.
  3. There's a lack of feedback received after assessments.
Requirements
  • Holidays management and sick leave: how to treat people taking time off?
  • Low-risk hiring: enable easy “testing” of contributors as well as reduce friction/admin when contributors deliver significantly less than expected
  • Address members quitting/ghosting the team
  • Address bad-leavers
Solution
The solution aims to simplify the evaluation process by introducing an assessment based on ratings and feedback. Each peer will assess the others on two factors:
  1. Culture contribution - everyone will evaluate all their peers from 1 to 5 on this factor.
  2. Performance contribution - only contributors who work together will evaluate each other on this factor from 1 to 5. Each score will also have a verbal anchor for a more objective scale (e.g. 3 - met expectations, 4 - performed above expectations). The algorithm will default to the mid score of 3 for peers who don't work together.
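The defaulting rule above can be sketched as follows (the function and variable names are illustrative, not an existing API):

```python
DEFAULT_SCORE = 3  # neutral score used when two peers don't work together

def performance_score(raw_score):
    """Return the submitted 1-5 performance score, or the neutral
    default of 3 when no score was given (peers who don't collaborate)."""
    return raw_score if raw_score is not None else DEFAULT_SCORE

# Example: one reviewer rated the contributor, another didn't work with them.
scores = [performance_score(5), performance_score(None)]  # [5, 3]
```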
Each score also comes with a mandatory feedback field for contributors to fill in; this way the process fosters a more collaborative environment in which people give each other feedback and improve from it.
Rating contributors from 1 to 5 adds a layer of abstraction and lets peers evaluate each other without doing mental math with points and market rates.
For the work done data, we propose to link the weekly priorities and tasks pages where people can see who did what instead of making people write them every month additionally somewhere (as they do now in Coordinape).
Peer-To-Peer assessment process
The process works with Google Forms: each contributor selects their name from a dropdown and evaluates all their team members. They rate a team member's work-done aspect only if they worked together during the past month.
The Google Form is divided into sections - one per contributor. There are 4 questions within each section:
  1. Assessment of culture contribution to the team
  2. Assessment of performance contribution to the team (work done)
  3. Feedback questions:
      • What did I do well, or what made you feel good, that you would like me to continue doing?
      • What did I not do as well as I could, or what made you feel bad, that I can change or improve?
Algorithm
Inputs
  1. Total Monthly Budget
  2. Market Rates + Commitment for each contributor
  3. Assessment scores for all contributors
  4. Performance Adjustment Rate: this rate indicates how much a person's salary can be adjusted per point of deviation from the neutral score of 3
Algorithm Design
Step 1: Calculate Average Score
Average Score is, for each team member, the mean of all the scores they received from their team.
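A minimal sketch of Step 1, using illustrative score values:

```python
from statistics import mean

# Scores one contributor received from teammates (1-5 scale; example values).
received_scores = [4, 5, 3, 4]
average_score = mean(received_scores)  # 4.0
```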
Step 2: Calculate the Salary Adjustment Multiplier for each contributor
Salary Adjustment Multiplier = (AverageScore - 3)*Performance Adjustment Rate
Example: Performance Adjustment Rate = 10%;
This rate indicates how much a person's salary can be adjusted per point of deviation from the neutral score of 3. For scores from 1 to 5, the multiplier would range from -0.2 to 0.2.
  1. Average Score = 4
    1. Salary Adjustment Multiplier = (4 - 3)*10% = 0.1
  2. Average Score = 1
    1. Salary Adjustment Multiplier = (1 - 3)*10% = -0.2
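Step 2 can be sketched directly from the formula (constants mirror the example above):

```python
PERFORMANCE_ADJUSTMENT_RATE = 0.10  # 10%, as in the example above
NEUTRAL_SCORE = 3

def salary_adjustment_multiplier(average_score):
    """Step 2: deviation from the neutral score, scaled by the adjustment rate."""
    return (average_score - NEUTRAL_SCORE) * PERFORMANCE_ADJUSTMENT_RATE

# Reproducing the two examples above:
salary_adjustment_multiplier(4)  # 0.1
salary_adjustment_multiplier(1)  # -0.2
```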
Step 3: Calculate Individual Adjusted Salary
Calculate the adjusted salary for each contributor based on Salary Adjustment Multiplier.
Adjusted Salary = Expected Salary*(1+SAM)
Example:
  • Person 1: Expected Salary = $1200, Score = 1; Adjustment Multiplier = (1 - 3) * 10% = -0.2; Adjusted Salary = 1200*(1 - 0.2) = $960
  • Person 2: Expected Salary = $9000, Score = 5; Adjustment Multiplier = (5 - 3) * 10% = 0.2; Adjusted Salary = 9000*(1 + 0.2) = $10,800
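Step 3 as a sketch, reproducing the two examples above:

```python
def adjusted_salary(expected_salary, average_score, rate=0.10, neutral=3):
    """Step 3: scale the expected salary by (1 + salary adjustment multiplier)."""
    multiplier = (average_score - neutral) * rate
    return expected_salary * (1 + multiplier)

adjusted_salary(1200, 1)  # 960.0  (Person 1)
adjusted_salary(9000, 5)  # 10800.0 (Person 2)
```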
Step 4: Normalize to Budget
It's necessary to scale the salaries so that their total sum matches the available budget. This is done by scaling the adjusted salaries proportionally:
  1. Calculate the sum of all initially adjusted salaries.
  2. Compute a scaling factor: the ratio of the total budget to the sum of adjusted salaries.
  3. Apply this scaling factor to each adjusted salary so that the total matches the budget.
Formula:
  1. Total Adjusted Salaries = Sum(Individual Adjusted Salary) for all contributors
  2. Scaling Factor = Budget / Total Adjusted Salaries
  3. Final Salary = Adjusted Salary * Scaling Factor
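Step 4 as a sketch; the $10,000 budget is an assumed value for illustration:

```python
def final_salaries(adjusted, budget):
    """Step 4: scale adjusted salaries proportionally so they sum to the budget."""
    total = sum(adjusted)
    scaling_factor = budget / total
    return [salary * scaling_factor for salary in adjusted]

# Using the Step 3 example salaries with an assumed budget of $10,000.
# The $11,760 total is scaled down so the final salaries sum to exactly $10,000.
result = final_salaries([960, 10800], 10_000)
```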
Questions and Considerations
  1. Normalizing to the monthly budget creates friction and misaligns incentives: the best strategy for each contributor is to give low scores to their teammates, so that their own relative share of the budget increases. Considering that team points are created on demand, we could adjust the budget according to the scores received, instead of capping it at the budget.
  2. Another option is to again have a limited amount of points to distribute. However, this design is also problematic: in that case the individual is incentivised to give more points to contributors with lower salaries than to those with higher salaries.
  3. Questions:
    1. Would there be an issue for the tokenomics if we don't normalize to budget and instead give the points as extra - meaning if everyone performed great, everyone gets extra points (instead of using zero-sum-game dynamics with a fixed points budget)?
    2. What % do we want the scores to influence the final salaries people receive (the Performance Adjustment Rate)? At the moment in the RnDAO model it's 100% - meaning if people don't receive any GIVE, they receive $0. I think it's better to have a much smaller impact - say between 20% and 50% max.
Resources:
Typeform for feedback: