Case Study: The Accra Action Learning Journey
The Accra Action Learning Journey (ALJ) serves as a powerful, real-world example of our evaluation methodology in action. This deep dive illustrates how we move from co-creating a value framework to tracking emergent dynamics through a rich, multi-source data strategy, and finally, to enabling resource allocation based on transparent, evidence-based insights. It showcases our ability to work with complexity, adapt in real-time, and make the invisible flows of value visible.
Phase 1 in Action: The “Valuing Workshop” & Co-Creation
The evaluation process began not with a predefined set of metrics, but with a live, participatory “Valuing Workshop” during the first week. This foundational session was designed to shift power to the participants, allowing them to define what constituted “value” for their journey.
- Brainstorming Value: Participants engaged in an open brainstorming session to list activities, behaviors, and states they personally and collectively valued. This generated a rich and diverse list, including human connections, collaboration, mutual benefit, creativity, upskilling, and role-taking.
- Thematic Grouping: Facilitators then guided the cohort through a collective sense-making process to cluster these dozens of items into four core value themes. These themes became the pillars of the evaluation framework:
  - Social Relationality: Fostering trust, openness, and collaborative spirit.
  - Learning & Application: Enabling personal and collective growth and skill development.
  - Creativity: Activating imaginative potential for new ideas, insights, and approaches.
  - Productivity: Encouraging meaningful participation, decision-making, and tangible contribution.
- Co-Creating Weights: The most critical step was determining the relative importance of these themes. Through a facilitated dialogue and voting process, the cohort decided on a weighting system that reflected their collective priorities. This was a direct expression of their shared values. The final agreed-upon weights were:
  - Social Relationality: 4x (explicitly recognized as the foundational element for all other forms of value)
  - Learning & Application: 3x
  - Creativity: 2x
  - Productivity: 1x
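In effect, a participant’s composite score becomes a weighted sum across the four themes (a minimal sketch; the exact aggregation is described under Phase 3):

$$\text{score}_i = 4\,S_i + 3\,L_i + 2\,C_i + P_i$$

where $S_i$, $L_i$, $C_i$, and $P_i$ are participant $i$’s verified totals for Social Relationality, Learning & Application, Creativity, and Productivity respectively.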
![[video_2025-06-30_20-19-20.mp4]] A live glimpse of the brainstorming session where participants put up sticky notes and commented on each other’s suggestions of what constitutes value.
This co-creative process ensured the evaluation framework was not an external imposition but a genuine reflection of the community’s values.
Phase 2 in Action: A Multi-Source Approach to Tracking Catalysts
Tracking value flow in Accra was a dynamic process that integrated multiple data streams. A key moment of live adaptation occurred when it became clear that asking participants to self-tag all timeline entries was “wildly unrealistic.” The facilitation team pivoted, adapting the methodology to focus on capturing the most significant value signals through a combination of specialized schemas and structured forms.
1. High-Gravity Schemas for Timelining
To reduce the burden on participants while still capturing crucial data, we designed four distinct Telegram bot schemas for logging the most significant, “high-gravity” contributions.
- Social Relationality Schema: Designed to feel conversational.
  Example: ‘When I felt stuck, @Zara sat with me.’ #support #gratitude
  Verified by an emoji from the tagged person.
  ![[evaluation day4 1.jpg]]
- Learning & Application Schema: Formatted as a progress log.
  Example: @Jay #learning before: ‘mumbled ideas’ after: ‘pitched clearly’ method: ‘feedback from @Anu’
  Verified by a peer comment.
  ![[evaluation day1 1.jpg]]
- Creativity Schema: A playful, “mad-lib” format.
  Example: @Leah dropped a #creativity bomb: ‘What if we could track energy by measuring voice memo tempo?’ #idea
  Verified by 3+ ‘🚀’ reactions.
  ![[evaluation day3.jpg]]
- Productivity Schema: An operational, checklist style.
  Example: @Ravi [#productivity #organizer]
    - [x] Finalized venue
    - [x] Coordinated speakers
  Verified by a ‘✅’ from a facilitator.
  ![[evaluation day2 1.jpg]]
While adoption was inconsistent, these schemas provided a crucial channel for capturing high-quality value signals that went beyond general chat messages.
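To make the mechanics concrete, here is a minimal Python sketch of how such schema messages could be parsed and verified. It is an illustration, not the bot’s actual implementation: `parse_entry`, `verify`, and the simplified verification rules are hypothetical stand-ins for the behavior described above.

```python
import re
from dataclasses import dataclass, field

@dataclass
class SchemaEntry:
    """A structured record extracted from a timelining message."""
    author: str
    text: str
    mentions: list[str] = field(default_factory=list)
    hashtags: list[str] = field(default_factory=list)
    verified: bool = False

def parse_entry(author: str, message: str) -> SchemaEntry:
    """Pull @mentions and #hashtags out of a raw Telegram message."""
    return SchemaEntry(
        author=author,
        text=message,
        mentions=re.findall(r"@(\w+)", message),
        hashtags=re.findall(r"#(\w+)", message),
    )

def verify(entry: SchemaEntry, reactions: dict[str, int]) -> SchemaEntry:
    """Apply simplified versions of the verification rules listed above."""
    if "creativity" in entry.hashtags:
        entry.verified = reactions.get("🚀", 0) >= 3   # 3+ rocket reactions
    elif "productivity" in entry.hashtags:
        entry.verified = reactions.get("✅", 0) >= 1   # facilitator check-mark
    else:
        # Social and learning entries: any reaction stands in for the
        # tagged-person emoji or peer comment described above.
        entry.verified = sum(reactions.values()) >= 1
    return entry

entry = parse_entry("Leah", "@Leah dropped a #creativity bomb #idea")
print(verify(entry, {"🚀": 4}).verified)  # True
```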
2. Structured Forms for Multi-Perspectival Ratings
A critical component of the evaluation was the use of structured forms (built on Tally) to capture both quantitative ratings and qualitative reflections. This ensured that every participant’s perspective was included, not just those most active on Telegram.
- Daily Self & Peer Evaluation Forms: At the start and end of each programming day, participants rated themselves and their peers on a scale of 1-5 across the four value areas. Crucially, each numerical rating had to be backed by a minimum one-sentence qualitative example, providing rich context to the scores.
- Facilitation Feedback Forms: Anonymous questionnaires were deployed at the start and end of the intensive hacking week to gather feedback on facilitation patterns, topics (e.g., Leadership, Root Issue Process, Pitching), and the effectiveness of different session formats (e.g., Small Group Breakouts, Sharing Circles).
- ALJ “Pulse” Forms: Short, repeated questionnaires tracked shifts in the cohort’s overall desire, energy, and expectations at three key intervals (retrospective, midpoint, and final).
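As an illustration of the records these forms produce, the sketch below models a single peer rating and the rule that every score needs a qualitative example. The field names and the word-count proxy for “minimum one sentence” are assumptions, not the actual Tally schema.

```python
from dataclasses import dataclass

VALUE_AREAS = ("social_relationality", "learning_application",
               "creativity", "productivity")

@dataclass
class PeerRating:
    rater: str
    ratee: str
    area: str      # one of VALUE_AREAS
    score: int     # 1-5, as on the daily forms
    example: str   # mandatory qualitative justification

    def validate(self) -> None:
        """Enforce the form rules described above."""
        if self.area not in VALUE_AREAS:
            raise ValueError(f"unknown value area: {self.area}")
        if not 1 <= self.score <= 5:
            raise ValueError("scores use the 1-5 scale")
        if len(self.example.split()) < 4:
            # Rough proxy for the minimum one-sentence example rule.
            raise ValueError("every score needs a qualitative example")
```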
3. Timelining Entries: The Unstructured Narrative Layer
The Telegram timelining channel served as the living, breathing narrative of the journey. While the schemas captured specific contributions, the general timeline captured the “in-between” moments: informal reflections, photos, spontaneous expressions of gratitude, and observations of emergent patterns. This unstructured data, though disparate, was vital for contextualizing the structured data from forms and schemas.
Phase 3 in Action: Synthesis, Analysis & Enabling Stewardship
The final phase was a comprehensive synthesis process designed to produce a fair, transparent, and holistic assessment that could be used to enable resource allocation (the prize distribution).
1. AI-Powered Synthesis and Statistical Analysis
All data streams—verified schema entries, form-based scores and comments, and timeline messages—were aggregated into our relational graph database.
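The document does not name the specific database, so the following networkx sketch is purely illustrative of the underlying graph model: participants as nodes, verified value signals as typed, weighted edges. The names, attributes, and evidence strings are hypothetical.

```python
import networkx as nx

# Each verified value signal becomes a typed, weighted edge
# from the giver of value to its receiver.
G = nx.MultiDiGraph()
G.add_edge("Zara", "Anu", kind="social_relationality", weight=4,
           evidence="sat with me when I felt stuck")
G.add_edge("Anu", "Jay", kind="learning_application", weight=3,
           evidence="feedback that sharpened the pitch")

# A simple cross-cutting query: social-relationality value received.
received: dict[str, int] = {}
for _, receiver, data in G.edges(data=True):
    if data["kind"] == "social_relationality":
        received[receiver] = received.get(receiver, 0) + data["weight"]
print(received)  # {'Anu': 4}
```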
- GraphRAG for Deep Inquiry: We used GraphRAG to perform complex, cross-cutting queries using natural language. This allowed us to ask questions impossible for standard analytics, such as: “Show me all instances where a participant’s peer-rated ‘Social Relationality’ score was high, and cross-reference this with their voice notes mentioning ‘support’ or ‘trust’.”
- Statistical Analysis of Form Data: The quantitative data from the forms was statistically analyzed (see the sketch after this list) to identify:
  - Score Distributions: Visualizing the spread of ratings for individuals and teams.
  - Trend Analysis: Tracking how personal growth ratings evolved over time.
  - Outlier Identification: Flagging participants who were consistently rated very high or low by their peers.
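The statistical pass can be sketched in a few lines of pandas. The flat file and column names are illustrative assumptions about how the form exports might be structured, not the project’s actual schema.

```python
import pandas as pd

# Illustrative export of the daily forms:
# columns = day, rater, ratee, area, score
ratings = pd.read_csv("peer_ratings.csv")

# Score distributions: spread of ratings per participant and value area.
distributions = ratings.groupby(["ratee", "area"])["score"].describe()

# Trend analysis: mean daily rating per participant over the journey.
trends = ratings.pivot_table(index="day", columns="ratee",
                             values="score", aggfunc="mean")

# Outlier identification: mean ratings more than two standard
# deviations from the cohort mean.
means = ratings.groupby("ratee")["score"].mean()
z = (means - means.mean()) / means.std()
print(z[z.abs() > 2])
```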
2. Visualization of Value Flows
To make sense of the complexity, we generated several types of visualizations:
- Capital Flow Maps: Network graphs showing who was giving and receiving the most value, and of what type.
- Contribution Heatmaps: Visual grids indicating which participants were most active in specific value areas (sketched below).
- Team Performance Dashboards: Side-by-side comparisons of team trajectories across the four value themes.
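As one example, a contribution heatmap of the kind described above can be rendered with matplotlib. The participants and counts here are invented purely to show the shape of the visualization.

```python
import matplotlib.pyplot as plt
import numpy as np

# Rows: participants; columns: the four co-created value areas;
# cells: verified contribution counts (illustrative data only).
participants = ["Zara", "Jay", "Leah", "Ravi"]
areas = ["Social", "Learning", "Creativity", "Productivity"]
counts = np.array([[5, 2, 1, 0],
                   [1, 6, 2, 1],
                   [2, 1, 7, 0],
                   [0, 1, 1, 8]])

fig, ax = plt.subplots()
im = ax.imshow(counts, cmap="YlGn")
ax.set_xticks(range(len(areas)), labels=areas)
ax.set_yticks(range(len(participants)), labels=participants)
fig.colorbar(im, ax=ax, label="verified contributions")
ax.set_title("Contribution heatmap (illustrative)")
plt.show()
```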
3. The Final Calculation: Multi-Perspectival & Weighted
The final prize distribution was a direct outcome of this rigorous, multi-perspectival process.
- Data Aggregation: The synthesis integrated all available data points for each participant.
- Applying the Co-Created Weights: Each verified contribution and rating was multiplied by its corresponding value weight (Social x4, Learning x3, etc.), as in the sketch below.
- Final Ranking: The weighted scores were tallied for each individual and team to create a final, evidence-backed ranking.
- Transparent Distribution: The prize money was distributed according to these final scores, making the allocation of resources a transparent reflection of the value the community itself had defined and tracked.
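The weighting and tallying steps reduce to a short calculation. The sketch below assumes a pro-rata split of the prize pool by weighted score; the document confirms the weights but not the exact distribution formula, so `distribute_prize` is one plausible reading.

```python
# Co-created weights from the Phase 1 Valuing Workshop.
WEIGHTS = {
    "social_relationality": 4,
    "learning_application": 3,
    "creativity": 2,
    "productivity": 1,
}

def weighted_score(contributions: dict[str, float]) -> float:
    """Tally verified contributions/ratings under the co-created weights."""
    return sum(WEIGHTS[area] * value for area, value in contributions.items())

def distribute_prize(pool: float, scores: dict[str, float]) -> dict[str, float]:
    """Split the pool pro rata by weighted score (assumed rule)."""
    total = sum(scores.values())
    return {name: pool * s / total for name, s in scores.items()}

scores = {
    "Zara": weighted_score({"social_relationality": 5, "creativity": 1}),    # 22
    "Ravi": weighted_score({"productivity": 8, "learning_application": 2}),  # 14
}
print(distribute_prize(1000.0, scores))  # Zara ≈ 611.11, Ravi ≈ 388.89
```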
![[Pasted image 20250630204615.png]]
Conclusion: The Accra ALJ was a testament to the power of our methodology to navigate real-world complexity. By integrating diverse data sources, leveraging advanced AI tools for synthesis, and adapting our facilitation techniques in real-time, we created a robust, transparent, and participatory evaluation process. This transformed evaluation from a top-down assessment into a bottom-up practice of valuing, tracking, and stewarding the flows of energy that catalyze regenerative change.