Problem we are solving:
- How might we reset expectations about the interactivity of our events and enable people to actually participate in the design of the workshops?
Why now? We’ve attended a few events where folks do not expect a workshop to be interactive. They may arrive unprepared, or they may not understand how the interactive exercise is meant to work, so they don’t engage in discussion related to the topic at hand. At times, participants drop out, leaving their peers without anyone to interact with. In addition, because there is little description of the exercise and the required preparation, people don’t have time to think about the problem space beforehand; some people do not think well on the spot, so they may not contribute much during the workshop.
Key Success Measures
- Across all our events, co-facilitators who observed the groups give us an average score of 4 out of 5 on the “participation and ‘aha!’ moment” rating
- We co-design at least 1 solution to showcase (from any event) per newsletter
- Our overall average event rating moves from x to y
Key Milestones
- Assign 1 PBA to be a “peer-reviewer” of all interactive exercises during conf-labs
- 100% of all future conf-labs (interactive events) have a video sent in advance summarizing the problem space and what’s expected of participants.
- 100% of events have pre-selected co-facilitators for interactive groups, whether from the PBA team, Oyster, or another group (at a ratio of 1 facilitator to a maximum of 5 attendees).
  - Groups should have at least 3 participants.
- 100% of all future events have alignment messages sent via Slack to all PBAs and/or pre-selected facilitators, preparing co-facilitators to support the interactive exercise.
- Update calendar invites to make the interactive nature of the event clearer:
  - Update the Luma description to define Conf-Labs at the beginning and make it explicit that interaction is required; this is not a typical webinar.
- Introduce gamification by having folks build robust solutions to challenging problems and:
  - spotlighting solutions in the upcoming newsletter and/or
  - dot voting during the webinar on the solutions that folks found most helpful
- Post-event, we check in with all co-facilitators and ask them, on a scale of 1 to 5, how much participation and how many “aha” moments they saw amongst the teams doing the interactive exercise:
  - 5: Teams fully understood the exercise and participated; they found the outputs helpful (lots of “aha” moments)
  - 4: Teams fully understood the exercise and participated, but didn’t walk away with many insights or “aha” moments
  - 3: Teams understood the exercise, but didn’t participate much; they didn’t have many insights to bring (either distracted or not fully prepared for the exercise)
  - 2: Teams did not understand the exercise
  - 1: Teams didn’t understand the exercise, and expressed eagerness to work on something completely different