On 5/18/2016 1:11 PM, Doug Hellmann wrote:
> Given the amount of spam we deal with in our other tools, I think you're
> underestimating the cost. See the -dev and -infra mailing lists for the
> discussions of locking down (or shutting down) the wiki, for example.

Remember, though, that the wiki has been around for a long time; this is a tool that will exist for just a week or so, and it takes spammers time to find it. And a CAPTCHA can help there as well.

> A web form with a text box is only slightly more complicated to game
> than a voting link.

Not necessarily; the number of comments is irrelevant. The only comments that will really affect anything are those that are substantive with regard to the content of the proposal, and that's not something you can automate. In other words, we can specify that we're looking for more than just "This is great" or "I would go to this."

> Other conferences I'm involved with have a small program committee
> for each track. We did that for the Upstream Development track this
> last time around (maybe other tracks also have multiple chairs, I
> don't know). Having a group of informed people selecting talks based
> on the quality of the proposal and the subject matter included
> produced a track with good feedback from attendees. It seems like
> that should be able to work for other tracks, too, as long as we
> have a good balance in the chairs.

They were all like that.

We wrote about another suggestion here: https://www.mirantis.com/blog/fixing-openstack-summit-submission-process/  (Look after "... and here's mine".) I know there were some pluses and minuses raised in the interviews we did about it, but here it is:

Borodaenko suggests a more radical change.  “I think we should go more in the direction used by the scientific community and more mature open source communities such as the Linux kernel.”  The process, he explained, works like this:

  1. All submissions are made privately; they cannot be disclosed until after the selection process is over, so there’s no campaigning, and no biasing of the judges.
  2. The Peer Review panel is made up of a much larger number of people, and it’s known who they are, but not who reviewed what.  So instead of 3 people reviewing all 300 submissions for a single track, you might have 20 people for each track, each of whom reviews a set of randomly selected submissions.  So in this case, if each of those submissions was reviewed by 3 judges, that’s only 45 reviews per person, rather than 300.
  3. Judges are randomly assigned proposals, which have all identifying information stripped out.  The system will know not to give a judge a proposal from his/her own company.
  4. Judges score each proposal on content (is it an interesting topic?), fit for the conference (should we cover this topic at the Summit?), and presentation (does it look like it’s been well thought out and will be presented well?).  These scores are used to determine which presentations get in.
  5. Proposal authors get back the scores, and the explanations. In an ideal world, authors have a chance to appeal and resubmit with improvements based on the comments, to be rescored as in steps 3 and 4; but even if there’s not enough time for that, authors will have a better idea of why they did or didn’t get in, and can use that feedback to create better submissions for next time.
  6. Scores determine which proposals get in, potentially with a final step where a set of publicly known individuals reviews the top scorers to make sure that we don’t wind up with 100 sessions on the same topic, but still, the scores should be the final arbiters of whether one proposal or another is accepted.

“So in the end,” he explained, “it’s about the content of the proposal, and not who you work for or who knows you or doesn’t know you.  Scientific conferences have been doing it this way for many years, and it works very well.”
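
For what it's worth, here's a rough sketch (in Python) of how the assignment and scoring mechanics in steps 2 through 4 and 6 could work. To be clear, this is just an illustration under my own assumptions: the function names, the per-company tags, and the data shapes are all made up; only the 300-submission / 20-judge / 3-reviews-per-proposal numbers come from the description above.

import random
from collections import defaultdict
from statistics import mean

# Hypothetical data: every proposal and judge carries a company tag so the
# assignment step can avoid giving judges submissions from their own employer.
proposals = [{"id": i, "company": "company%d" % (i % 10)} for i in range(300)]
judges = [{"id": j, "company": "company%d" % (j % 10)} for j in range(20)]

REVIEWS_PER_PROPOSAL = 3  # each submission is scored by three judges

def assign_reviews(proposals, judges, per_proposal=REVIEWS_PER_PROPOSAL):
    """Randomly assign each proposal to `per_proposal` judges, skipping
    judges who work for the submitting company (step 3)."""
    assignments = defaultdict(list)  # judge id -> list of proposal ids
    for prop in proposals:
        eligible = [j for j in judges if j["company"] != prop["company"]]
        # Favor the least-loaded eligible judges so the workload stays close
        # to the 900 / 20 = 45 reviews per person mentioned in step 2.
        eligible.sort(key=lambda j: len(assignments[j["id"]]))
        pool = eligible[:per_proposal * 2]
        for judge in random.sample(pool, per_proposal):
            assignments[judge["id"]].append(prop["id"])
    return assignments

def rank_proposals(scores):
    """`scores` maps proposal id -> list of (content, fit, presentation)
    triples, one per judge (step 4).  The overall average orders the talks."""
    averages = {
        pid: mean(sum(triple) / 3.0 for triple in triples)
        for pid, triples in scores.items()
    }
    return sorted(averages, key=averages.get, reverse=True)

if __name__ == "__main__":
    assignments = assign_reviews(proposals, judges)
    loads = [len(props) for props in assignments.values()]
    print("reviews per judge: min=%d max=%d" % (min(loads), max(loads)))

Running it, the per-judge load lands right around the 45-reviews-per-person figure, and the final human pass in step 6 would simply walk the ordered list that rank_proposals() returns.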

----  Nick