On 5/18/2016 2:41 PM, Devananda van der Veen wrote:
On 05/18/2016 10:29 AM, Nick Chase wrote:
We wrote about another suggestion here: https://www.mirantis.com/blog/fixing-openstack-summit-submission-process/ (Look after "... and here's mine".) I know there were some pluses and minuses in the interviews we did about it, but here it is:
Borodaenko suggests a more radical change. “I think we should go more in the direction used by the scientific community and more mature open source communities such as the Linux kernel.”
Thanks for the summary and reposting here, Nick. I think this direction is good.
Not that you're suggesting it, but just to be clear, I don't think we should simply copy what works for other communities. After all, those communities function differently from OpenStack in certain ways, and for good reasons, so what works for them may be different from what works for us.
The process, he explained, works like this:
1. All submissions are made privately; they cannot be disclosed until after the selection process is over, so there’s no campaigning, and no biasing of the judges.
+1
2. The Peer Review panel is made up of a much larger number of people, and it’s known who they are, but not who reviewed what. So instead of 3 people reviewing all 300 submissions for a single track, you might have 20 people for each track, each of whom reviews a set of randomly selected submissions. So in this case, if each of those submissions was reviewed by 3 judges, that’s only 45 per person, rather than 300.
-1
As a former track chair, I value the collaboration with the other track chairs during the review process.
Also, I would continue the practice of grouping track chairs with subject matter relevant to them. Don't ask me to review presentations on storage systems, for example, because I won't know what's relevant to that audience -- I'll probably find it all equally interesting and evaluate only based on how well the proposal is written.
Actually, since we published that, I have been a track chair as well, and I agree with you on this point.
3. Judges are randomly assigned proposals, which have all identifying information stripped out.
-1 (see above)
Also, -1 because I believe that speaker quality is important, as is the proximity of the speaker to the subject matter they're presenting. It is not possible to assess either of these from an anonymized abstract.
Rather, I trust the Foundation to select track chairs who are conscientious, who are aware of the known (good and bad) actors within a given subject area, and who will perform this duty respectfully and impartially.
Agreed.
The system will know not to give a judge a proposal from his/her own company.
4. Judges score each proposal on content (is it an interesting topic?), fit for the conference (should we cover this topic at the Summit?), and presentation (does it look like it’s been well thought out and will be presented well?). These scores are used to determine which presentations get in.
Those first two criteria are fine, but the third requires that the judges see the talk material (not just the abstract) ahead of time, and I do not want to require that. It would mean that I would never be able to give a talk at the summit -- there's no way I'm writing the slides 3 months in advance, and if I did, you wouldn't want to attend my talk 'cause it would already be on slideshare/youtube/whatever. This applies to many of the speakers whose talks I want to attend, too.
While I do think there is value in normalizing the criteria that track chairs use, and making those criteria visible to presenters, I feel like the Foundation has done a good job with this so far.
5. Proposal authors get back the scores and the explanations. In an ideal world, authors have a chance to appeal and resubmit with improvements based on the comments, to be rescored as in steps 3 and 4; but even if there’s not enough time for that, authors will have a better idea of why they did or didn’t get in, and can use that feedback to create better submissions for next time.
A very big +1 to this
6. Scores determine which proposals get in, potentially with a final step where a set of publicly known individuals reviews the top scorers to make sure that we don’t wind up with 100 sessions on the same topic; but still, the scores should be the final arbiters of whether one proposal or another is accepted.
“So in the end,” he explained, “it’s about the content of the proposal, and not who you work for or who knows you or doesn’t know you. Scientific conferences have been doing it this way for many years, and it works very well.”
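For concreteness, here's a rough sketch of how the randomized, conflict-aware assignment in steps 2-4 could work. This is purely illustrative Python with made-up data structures -- it isn't any existing Foundation tool -- but it shows the two constraints described above: every proposal gets three reviewers, and none of them come from the submitting company.

import random
from collections import defaultdict

REVIEWS_PER_PROPOSAL = 3  # each submission is read by three judges

def assign(proposals, reviewers, seed=0):
    """Randomly assign reviewers to proposals.

    proposals: list of dicts like {"id": "p1", "company": "Acme"}
    reviewers: list of dicts like {"id": "r7", "company": "Acme"}
    Returns {proposal id: [reviewer ids]}.
    """
    rng = random.Random(seed)
    load = defaultdict(int)          # reviews assigned so far, per reviewer
    assignments = defaultdict(list)

    for prop in proposals:
        # A judge never sees a proposal from his/her own company.
        eligible = [r for r in reviewers if r["company"] != prop["company"]]
        rng.shuffle(eligible)                        # random tie-breaking
        eligible.sort(key=lambda r: load[r["id"]])   # keep loads balanced
        for r in eligible[:REVIEWS_PER_PROPOSAL]:
            assignments[prop["id"]].append(r["id"])
            load[r["id"]] += 1
    return assignments

def total_score(scores):
    # Step 4: each judge scores content, fit, and presentation; sum them.
    return sum(scores[k] for k in ("content", "fit", "presentation"))

With 300 submissions, 20 reviewers per track, and three reviews per submission, that's 900 reviews spread across 20 people, which is where the 45-per-person figure in step 2 comes from.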