The Arctic science community undergoes a planning exercise every ten years. This process is known as the International Conference on Arctic Research Planning (ICARP), and we’re currently on its fourth iteration. As part of this research planning process, there are a number of “research priority teams” (RPTs), and I co-chair one of them alongside three others. My team, RPT-2, has around forty members, and our remit is “Observing, Reconstructing, and Predicting Future Climate Dynamics and Ecosystem Responses”.
Since our first meeting in Edinburgh a year ago (blog post here), we’ve been working to produce a draft set of recommendations for what science (and scientific approaches) should be prioritised. It’s involved some interesting reflection on who Arctic science is done for, who it’s done by, and what the job of an Arctic scientist is in the first place. We produced a draft list of recommendations (focussed on “research gaps” and “research needs”) and presented them at the ICARP IV summit, which was just held in Boulder, Colorado.
One highlight of the summit was hosting a “townhall meeting” (I’ve found that Americans love a townhall meeting) where we presented our draft recommendations to the broader Arctic science community. This worked because a lot of non-ICARP people were at the venue for the annual Arctic Science Summit Week, which this year ran concurrently with the summit. After giving a short presentation, we ran an interactive feedback session where participants rated and ranked our draft priorities, as well as suggesting things we might have overlooked.



Above: (1) Me cajoling the audience to contribute to a word-cloud of possible stakeholders to consult (photo: Wilson Cheung). (2) The audience using their phones to rate the recommendations on the screen (photo: Wilson Cheung). (3) RPT co-chairs and members at the end of the townhall (photo: Margaret Rudolf).
Overall scoring of our draft recommendations by the townhall audience.
We had 99 live (as opposed to asynchronous, not deceased) participants in the interactive activity - far beyond what I expected. I won’t chart or describe the recommendation-specific feedback here, but we also asked some meta-questions about our performance. Specifically, we asked participants to agree or disagree with some statements on a scale of 1-5. You can see some results below. Near the end of the meeting we also asked a more general question: to what extent did the audience agree with our recommendations? Our mean score was 4.2/5, which I’m pleased with. As well as providing valuable feedback to us, these scores can, I hope, help show the ICARP steering committee that our recommendations have at least some endorsement from the wider community.