On 05/18/2016 10:29 AM, Nick Chase wrote:
On 5/18/2016 1:11 PM, Doug Hellmann wrote:
Given the amount of spam we deal with in our other tools, I think you're underestimating the cost. See the -dev and -infra mailing lists for the discussions of locking down (or shutting down) the wiki, for example.
Remember, though, that the wiki has been around for a long time; this is a tool that will exist for just a week or so, and it takes spammers time to find it. And a CAPTCHA can help there as well.
A web form with a text box is only slightly more complicated to game than a voting link.
Not necessarily; the number of comments is irrelevant. The only comments that will really affect anything are those that are substantive with regard to the content of the proposal, and that's not something you can automate. In other words, we can specify that we're looking for more than just "This is great" or "I would go to this".
Other conferences I'm involved with have a small program committee for each track. We did that for the Upstream Development track this last time around (maybe other tracks also have multiple chairs, I don't know). Having a group of informed people selecting talks based on the quality of the proposal and the subject matter included produced a track with good feedback from attendees. It seems like that should be able to work for other tracks, too, as long as we have a good balance in the chairs.
They were all like that.
We wrote about another suggestion here: https://www.mirantis.com/blog/fixing-openstack-summit-submission-process/ (Look after "... and here's mine".) I know there were some pluses and minuses in the interviews we did about it, but here it is:
Borodaenko suggests a more radical change. “I think we should go more in the direction used by the scientific community and more mature open source communities such as the Linux kernel.”
Thanks for the summary and reposting here, Nick. I think this direction is good. Not that you're suggesting it, but just to be clear, I don't think we should simply copy what works for other communities. After all, those communities function differently from OpenStack in certain ways, and for good reasons, so what works for them may be different from what works for us.
The process, he explained, works like this:
1. All submissions are made privately; they cannot be disclosed until after the selection process is over, so there’s no campaigning, and no biasing of the judges.
+1
2. The Peer Review panel is made up of a much larger number of people, and it’s known who they are, but not who reviewed what. So instead of 3 people reviewing all 300 submissions for a single track, you might have 20 people for each track, each of whom reviews a set of randomly selected submissions. So in this case, if each of those submissions was reviewed by 3 judges, that’s only 45 per person, rather than 300.
-1 As a former track chair, I value the collaboration with the other track chairs during the review process. Also, I would continue the practice of grouping track chairs with subject matter relevant to them. Don't ask me to review presentations on storage systems, for example, because I won't know what's relevant to that audience -- I'll probably find it all equally interesting and evaluate only based on how well the proposal is written.
3. Judges are randomly assigned proposals, which have all identifying information stripped out.
-1 (see above) Also, -1 because I believe that speaker quality is important, as is the proximity of the speaker to the subject matter they're presenting. It is not possible to assess either of these from an anonymized abstract. Rather, I trust the Foundation to be selecting track chairs who are conscientious, aware of who the known (good and bad) actors are within a given subject matter, and will perform this duty respectfully and impartially.
The system will know not to give a judge a proposal from his/her own company.
4. Judges score each proposal on content (is it an interesting topic?), fit for the conference (should we cover this topic at the Summit?), and presentation (does it look like it’s been well thought out and will be presented well?). These scores are used to determine which presentations get in.
Those first two criteria are fine, but the third requires that the judges see the talk material (not just the abstract) ahead of time, and I do not want to require that. It would mean that I would never be able to give a talk at the summit -- there's no way I'm writing the slides 3 months in advance, and if I did, you wouldn't want to attend my talk 'cause it would already be on slideshare/youtube/whatever. This applies to many of the speakers whose talks I want to attend, too. While I do think there is value in normalizing the criteria that track chairs use, and making those criteria visible to presenters, I feel like the Foundation has done a good job with this so far. (For concreteness, a rough sketch of how the assignment and scoring could be automated follows the quoted process below.)
5. Proposal authors get back the scores, and the explanations. In an ideal world, authors have a chance to appeal and resubmit with improvements based on the comments to be rescored as in steps 3 and 4, but even if there’s not enough time for that, authors will have a better idea of why they did or didn’t get in, and can use that feedback to create better submissions for next time.
A very big +1 to this
6. Scores determine which proposals get in, potentially with a final step where a set of publicly known individuals reviews the top scorers to make sure that we don’t wind up with 100 sessions on the same topic, but still, the scores should be the final arbiters between whether one proposal or the other is accepted.
“So in the end,” he explained, “it’s about the content of the proposal, and not who you work for or who knows you or doesn’t know you. Scientific conferences have been doing it this way for many years, and it works very well.”
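For what it's worth, the assignment and scoring mechanics in steps 2-4 and 6 would be simple to automate. Here's a rough Python sketch -- purely hypothetical, the data shapes and function names are mine, not anything the Foundation actually runs -- of randomly handing each proposal to three judges while skipping the submitter's own company, then ranking proposals by average score:

import random
from collections import defaultdict

REVIEWS_PER_PROPOSAL = 3   # every submission gets scored by 3 judges

def assign_reviews(proposals, judges):
    # proposals: [{"id": ..., "company": ...}, ...]
    # judges:    [{"name": ..., "company": ...}, ...]
    # Randomly pick judges for each proposal, skipping anyone from
    # the submitter's company (the conflict-of-interest rule in step 3).
    assignments = defaultdict(list)   # judge name -> list of proposal ids
    for proposal in proposals:
        eligible = [j for j in judges if j["company"] != proposal["company"]]
        for judge in random.sample(eligible, REVIEWS_PER_PROPOSAL):
            assignments[judge["name"]].append(proposal["id"])
    return assignments

def rank(scores):
    # scores: [(proposal_id, numeric_score), ...] where each score is a
    # judge's combined content/fit/presentation rating (step 4).
    # Returns proposal ids sorted by average score, best first; the top
    # of this list is what the final review in step 6 would look at.
    totals = defaultdict(list)
    for proposal_id, score in scores:
        totals[proposal_id].append(score)
    return sorted(totals, key=lambda p: sum(totals[p]) / len(totals[p]), reverse=True)

With 20 judges per track and 300 submissions at 3 reviews each, that works out to the ~45 proposals per judge mentioned above.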
Regards, Devananda