Camille (Apiary)
Task
Status
Date
Feb 28, 2024
Completion Time
Email
Interviewer
Segment
User Details
- …
Interviewer Observations
The conversation revolved around the intersections of ???, technology, and governance. Camille shared her passion for nature and how it influences her work, while Artem discussed his involvement in a community focused on web3 governance. Camille emphasized the need to rethink governance systems in light of the attention economy, while Artem offered alternative solutions such as sortition and AI. The speakers debated the potential of these approaches and the importance of ensuring that any new system provides sufficient information and fidelity for good decision-making.
Insights
Need
Pain points
Outline
Governance in web3.
Using AI to facilitate group decision-making.
Facilitating group conversations and sharing perspectives.
Group decision-making and technology.
Designing governance systems for intervention.
Governance and decision-making in the context of blockchain technology.
Leveraging technology for democratic decision-making.
Decision-making in digital spaces.
Using AI to improve group decision-making.
Notes
- web3 as a petri dish
- tooling is ???
- success is not about information
- …
- before Apiary I started Purpose (now merged with VC fund) — legal frameworks for ??? (ownership transition)
- ripple effect of org change that occurs as a result
- became a good facilitator inadvertently because things I wanted to achieve…
- you can’t get people to arrive at conclusions faster unless you create that space for facilitation
- investigate: what exactly does facilitation mean?
- sensemaking = information sharing — people are hearing other people say things they didn’t hear or understand before (e.g. direction for org to go)
- sensemaking = individual — “I’ve been carrying around this weight because of information flows, etc. I didn’t have a place to share this” (psychological ??? of relief), probably the most important part of the whole thing
- I can’t share my opinion (defensive) → collaborative ???, deeply human thing, probably not possible with a bot?
- they say the same thing: “thank you so much for listening to me” RELIEF
- the problem with trust: we're really clear about where the communication boundary lines are, all of it is anonymous, the rules are upfront; that's what creates the feeling of psychological safety
- can you feed the info into an LLM to create sensemaking, and simultaneously make people feel trusted?
- psychological relationship of individual speaking to the bot might be… something I would PAUSE on
- how do you go from individual safety, trust, and feeling heard to the group?
- individual can hear and share info…
- then you go into the group convo, two things:
- everyone participating needs to feel safe and secure
- you need to navigate…
- …
- … LLMs think they can help with group sensemaking
- one of the big mistakes is thinking there's a clean conversion from individual contributions into a summary of group thinking
- really counterintuitive: if everyone says the same thing, then we can… that nets out as a reflection of… but it doesn't work like this!
- with LLMs, it's often not what's in the middle of the group that is the point of most interest and can help it move forward… it's often the outliers! (see the sketch after these notes)
- we ironically get further and further away from the actual truth of what the group as a whole can see
- there's a fundamental tension there…
- one of the things that's important to hold, and this has been the evolution in web3 (Vitalik posting "I'm no longer a child"), is the shift from treating people as inputs to people making decisions in our bodies, rather than just being inputs of information
- the other point is around attention — how much of it people want to spend on a given topic, people ???, not sure it’s true in practice, sometimes the desire to move faster leads to delayed costs in the system (6 months later they ask why TF we’re doing it this way)
- people were going too fast without explaining enough to make the decisions in the right way with context
- if I were a Governing God, I would design a system based not on participation, but on intervention… how do we allow people to intervene when something is not going well?
- the question depends on the context… if intervention is the primary metric, two things are needed:
- information flow: how do you structure the org or teams so they can make localized decisions with localized info, and have that feed into the system, so token holders can understand how resources are being used, whether teams are on track, etc.
- localized power, enabling people to act autonomously
- doing sensemaking well, and then ensuring that the information that's necessary for the objectives of the whole is fed into the whole
- I think you need to figure out what the field of what needs to be governed actually is: what decisions have to be made for the system to function well
- and then, when decisions need to be made, how ??? it should be; looking at the macro level, what would danger and success look like, what do we want the system to achieve
- …
- you can't hold people accountable for the decisions made (in DAOs)
- why are we collaborating, what does success look like, and what are we trying to achieve
- …
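The "outliers, not the middle" note above points at a mechanism that is easy to prototype. A minimal sketch, not anything Camille described: it assumes each contribution already has a vector embedding (the toy vectors and the surface_outliers helper below are hypothetical), and instead of summarizing toward the group centroid, it surfaces the contributions farthest from it.

```python
import numpy as np

def surface_outliers(embeddings, contributions, k=2):
    """Return the k contributions farthest from the group centroid.

    A plain LLM summary pulls toward the centroid of what was said;
    the note argues the outliers often carry the point of most
    interest, so we rank by distance from the centroid instead.
    """
    centroid = embeddings.mean(axis=0)
    distances = np.linalg.norm(embeddings - centroid, axis=1)
    farthest_first = np.argsort(distances)[::-1]
    return [contributions[i] for i in farthest_first[:k]]

# Toy data: four contributions with made-up 3-d embeddings.
contributions = [
    "The current process is basically fine",
    "Minor tweaks, but keep the process",
    "Works for me, maybe document it better",
    "The process hides a conflict of interest nobody has named",
]
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.7, 0.3, 0.0],
    [0.1, 0.1, 0.9],  # semantically far from the others
])

print(surface_outliers(embeddings, contributions, k=1))
# -> ['The process hides a conflict of interest nobody has named']
```

In practice the embeddings would come from an embedding model, and "outlier" might be cluster-aware rather than raw distance from the centroid; the sketch only makes the centroid-versus-outlier contrast concrete.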
Action Items