There’s an interesting Twitter chat going on, stemming from Titus Brown’s recent blog post asking how to find a postdoctoral appointment where a student can do open science.
What could a student ask a potential employer (and mentor) to help shed light on the culture of the lab?
The post brought to mind a discussion we had in the summer of 2013 at a meeting convened at SESYNC in Annapolis, MD about what to teach biologists about computing. We were discussing how best to assess the skill set of graduate students applying to join a lab, with a tilt towards the best practices associated with open science (good data and software management skills, a willingness to post work so it can be shared and communicated, etc.).
In the breakout, Titus, Nirav Merchant (University of Arizona), Marian Petre (Open University), and I brainstormed an activity-based assessment so we could better gauge a candidate's baseline. Here's the rough sketch of what we came up with:
0. Here’s a data sample. What would you need to fix in order to make it usable by you and by others?
1. Name and organize three data files (e.g., .csv, .dat, .txt).
2. Run this program on one of these files.
3. How would you capture that process for someone else to use?
4a. Suppose you change the program. How do you convey that information?
4b. Suppose someone sends you a changed version of the file/program. How do you interact with it?
5. How would you know that your program is doing what you want it to do?
6. How would you make your files available to others?
7. What additional data would you want to include (re: Ethan White paper)?
Does this suit our needs? How do these tasks map to various disciplines? What are we missing?
Thoughts, comments, and suggestions are welcome – I’d love to hear from you.
(Also, for more on that meeting – which feels like forever ago – read Titus’ summary post. Lots of good stuff in there.)
Last week, with the help of two colleagues – Titus Brown (UC Davis) and Brian Nosek (Center for Open Science) – I gathered a small group of stakeholders in the open science community for a meeting in Charlottesville. In the last few days, we published a collaborative post summarizing the discussion, which you can read here. I highly encourage you to give it a look. It covers some of the topics from a 30,000-foot vantage point and exposes some of the threads that followed the meeting.
Titus, who kicked off the first version of that post, also encouraged it to be a collective effort, signed by all who contributed and cross-posted as the participants saw fit. This was in the spirit of the meeting, which, after all, was about furthering open dialog and open science.
But some community members who were not at the meeting have raised a contradiction with me: despite the name of the meeting – “Growing Open Source, Open Science” – the meeting itself was … closed. It wasn’t advertised or open for public registration, and some felt left out. The summary post describes it as a meeting of influencers to discuss Open Science more generally, and that’s all true – but I wanted to expand on some of the mechanics of the meeting, explain the perceived contradiction, and, lastly, invite the community’s thoughts on how we can do more and/or better next time.
This wasn’t your average community meeting, featuring talks, networking, hackathons, and public registration. When Brian first approached Titus and me with an idea for a small gathering of thinkers and organizations in this space, it was originally pitched as something of that ilk. But after a few iterations, we chose to take a different path and pull together a small (as in, fits-in-one-conference-room) meeting, making rather deliberate decisions about attendees to ensure there was enough representation of voices to hit a cross-section of the community, while also ensuring there was some familiarity within the group already.
And we put most of the meeting under the Chatham House Rule, or “FrieNDA” as Tim O’Reilly calls it. The aim: to create a safe space to put affiliations and funding pitches aside and be really honest about what *wasn’t* working. This was designed to be the Damascus moment (or the first in a series – time will tell) for the community to be brutally honest about the state of “open” in science, to identify rifts and challenges, and, by surfacing all that in a trusted, off-the-Twitters fashion, to really hunker down and craft a strategy for doing better.
From our day-to-day positions, or from coverage online, a rosy picture of progress gets painted – of shifts at the funder level towards mandates furthering open research, of training programs growing like wildfire, of researchers getting funded in part because of their track record as open practitioners.
The reality, though, is that we’re not nearly there yet. Or, to quote Gibson, “the future is already here … it’s just not evenly distributed yet.”
What happened at the Charlottesville meeting, but wasn’t directly reported, was a number of really frank, necessary, and in some cases overdue conversations. We surfaced times when “collaboration” was what we preached but not what we practiced, and we discussed how that presents less of a united front and more of a walled-garden approach. We talked about where technology, as well as technical awareness and appreciation, broke down, and the repercussions – both for those around the table trying to keep the lights on at their organizations, and for the users we’re not reaching as effectively as we should be. And we flagged the fact that, for the first time since perhaps 2007–2008, it feels to me like elbows are sharp and pointed outward – part personality clash, part rooted in funding challenges – leaving the community more fractured than united. We even had an honest discussion of unfair portrayals of organizations – groups pegged as the kid who shoved the others on the playground to get first in line.
I think this sort of discussion was necessary, as uncomfortable as it feels to admit when things *aren’t* working, and even more, where you may have made a mistake or failed. As a colleague in the online education space once said, “accelerate the surfacing of vulnerabilities … it’s where the learning happens.”
I got to witness that first hand last week, and I thank my fellow participants and organizers for dipping their toes in the water and having a very frank, “open” – but not public – conversation. Our hope is not only to build on that work with concrete actions to (re)build, learn, and move forward together, but to really model the ethos we espouse. Beyond that, we want to explore with the community the right venue for more of these discussions, with a different cast of characters, to ensure we’re properly assessing and being honest about where additional work is needed – and where we need to work better, together.
Shifting practice, understanding and the reward structures in science to be more open is not something any one of our organizations can do on our own, let alone as individuals. I hope that we can continue this work, and find ways of doing so together, and be as open about what’s not working as we are about the success stories.
We know there were limitations to the size and makeup of this first meeting, and we are actively working on ways not only to continue this work but also to make the conversation ongoing and more inclusive. I’d invite suggestions on whether this style of conversation is useful in a broader context, and on how we might convene more opportunities for the community to come together and brainstorm ways of furthering open together.
Many thanks to those who offered comments and suggestions for this post, especially Brian Nosek, Shauna Gordon-McKeon, Titus Brown and David Riordan.