[openstack-community] Release of Tokyo Summit Voting Results
Will an appropriately anonymized dump of the session vote totals be made available to the community? If so, when will that be available?
I'm sure you know this already, but just reiterating, the voting is only ONE of the factors in session selection. There are 3-4 more hurdles after the voting is done. On Thu, Aug 27, 2015 at 3:13 PM, Richard Raseley <richard@raseley.com> wrote:
[...]
_______________________________________________ Community mailing list Community@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community
On 2015-08-27 15:19:37 -0600 (-0600), David Medberry wrote:
I'm sure you know this already, but just reiterating, the voting is only ONE of the factors in session selection. There are 3-4 more hurdles after the voting is done.
Yes, as the voting is somewhat easy to game and totals are meant to be purely advisory in nature, it's even possible that some track chairs may disregard them entirely (especially if they're running counter to common logic). -- Jeremy Stanley
On 08/27/2015 02:30 PM, Jeremy Stanley wrote:
Yes, as the voting is somewhat easy to game and totals are meant to be purely advisory in nature, it's even possible that some track chairs may disregard them entirely (especially if they're running counter to common logic). Yes, I totally understand.
I am just hoping to get a look at the anonymized data set.
+1. It would be valuable to see what the wider community is voting for. I was a track chair once upon a time, and I completely ignored all the votes and voted for everything from Asia, because I knew many of the other track chairs would wheel in the same old windbags who seem to get two, three or more slots every single summit.
-----Original Message----- From: Richard Raseley [mailto:richard@raseley.com] Sent: Friday, 28 August 2015 9:20 AM To: community@lists.openstack.org Subject: Re: [openstack-community] Release of Tokyo Summit Voting Results
[...]
On 08/28/2015 08:42 AM, Tristan Goode wrote:
It would be valuable to see what the wider community is voting for.
There is an assumption in this sentence that I personally consider untrue, and it's dangerous to assume it is true before it can be proven. I don't think the 'wider community' votes; only a very vocal minority does. My impression, which I wish could be (dis)proven, is that votes come from Twitter-active people and from loyalists in large corporations. If we had the data, we might be able to prove this assumption by checking, for example, whether the highest numbers of votes went to the proposals pushed by corporations with an organized marketing machine.
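The check Stefano proposes could be run mechanically once an anonymized dump exists. A minimal sketch, assuming a hypothetical CSV with `affiliation` and `votes` columns (no such dump has been published; the schema is invented purely for illustration):

```python
import csv
from collections import defaultdict

def mean_votes_by_affiliation(path):
    """Average votes per proposal, grouped by submitter affiliation.
    The column names ("affiliation", "votes") are hypothetical; any
    real dump would need mapping onto this shape first."""
    votes = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            votes[row["affiliation"]].append(int(row["votes"]))
    # Mean votes per proposal for each affiliation.
    return {org: sum(v) / len(v) for org, v in votes.items()}
```

Comparing the means for companies with large marketing reach against everyone else would give a first, rough answer to the question above.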
I was a track chair once upon a time and I completely ignored all the votes
That's what I've always done too: I ignore votes as a track chair. I think the voting process is a celebration of our community, a party, a ritual to get into 'summit season'; it's not a useful tool to evaluate proposals. It's not broken and doesn't need fixing, because its celebratory purpose is well accomplished, IMHO. /stef
On 08/28/2015 10:24 AM, Stefano Maffulli wrote:
If we had the data, we might be able to prove this assumption by checking, for example, whether the highest numbers of votes went to the proposals pushed by corporations with an organized marketing machine.
That would be an interesting use for the data.
That's what I've always done too. I ignore votes as a track chair.
I think the voting process is a celebration of our community, a party, a ritual to get into 'summit season'; it's not a useful tool to evaluate proposals.
I understand that the track chairs have wide discretion in the selection of sessions, which seems appropriate. That being said, I am a little surprised at the casual way in which current and former track chairs have talked about how they outright 'ignore votes'. As a Foundation member (I assume voting is restricted to Foundation members), I was under the impression that my vote would always count at least a little (e.g. as a small part of some weighted score). If that is not the case, I think it would be appropriate to set those expectations, as I am guessing many others might be under the same misapprehension. Regards, Richard
----- Original Message -----
From: "Richard Raseley" <richard@raseley.com> To: community@lists.openstack.org
[...]
+1. While I've known this for a number of cycles, I regularly encounter members of the Foundation who don't. This is never broadcast anywhere except casually in email threads such as this one (usually with the implication that the person asking should somehow have known their vote wouldn't count). Have I missed it, and is there in fact a public page documenting the talk selection process? Thanks, Steve
I was a first-year track chair this year, and I think "your vote doesn't count" is not an accurate description of what I did. I can only speak for my own thinking and our track, but votes certainly were part of the process. Some talks had no votes or primarily negative votes; they were not considered much. But then you end up with lots of talks, well more than the 11 we could pick, with a good number of positive votes. We get to see counts and averages. Is a talk with 65 votes and an average of 2.5 better than a talk with 72 votes and an average of 2.4? You're splitting hairs there, so we can only use them as a rough guide for interest in the topic and speaker. Also, if we had simply picked the top 11 by average, you'd have ended up with an unbalanced track: too many talks on the same topics, for example, or by the same people. Our goals were many, but included considering:
- How were the votes? High counts? High score? Etc.
- Does the talk fit into this track? Is it too advanced, too broad, too narrow?
- Is it probably a sales pitch?
- Are we covering the right things here? Does it fit the goals of this track?
- Is the topic interesting to attendees? We try to think about who the audience for the track is and go from there.
- Is this a repeat from a previous year? Some talks are submitted with very similar-sounding titles (although sometimes "updates on xxx" talks are good).
- Does anyone know the speaker? Are they active in the community? An engaging speaker? A fresh face who would bring a different perspective?
- Is this a duplicate talk? For example, out of the 11 talks we can pick, we don't have space for 4 talks on Chef, so let's pick one that's good and broad enough and fits this track.
- Do any of these talks include locals who would not normally get a chance to speak or travel if this were in NA or Europe?
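The 65-votes-at-2.5 versus 72-votes-at-2.4 comparison above is a classic small-sample ranking problem. One way to see why such hair-splitting differences shouldn't decide anything (offered purely as an illustration, not as part of the actual selection process; the prior values are invented) is a Bayesian-style shrunk average that pulls each talk's mean toward a global prior:

```python
def shrunk_average(mean, count, prior_mean=2.0, prior_weight=30):
    """Pull a talk's raw vote average toward a global prior mean.
    Talks with few votes move the most, so near-tied averages on
    modest sample sizes stop looking meaningfully different.
    prior_mean and prior_weight are invented tuning values."""
    return (mean * count + prior_mean * prior_weight) / (count + prior_weight)

# The two talks Matt mentions remain essentially tied:
talk_a = shrunk_average(2.5, 65)  # about 2.34
talk_b = shrunk_average(2.4, 72)  # about 2.28
```

Even after shrinkage the gap is well under a tenth of a point, which supports treating the votes as a rough guide rather than a ranking.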
This is how I considered and weighed the talks, and the bottom line is that I assure you there is no secret cabal ignoring everyone's wishes and jamming the schedule onto you. (If there is, I don't yet have my robes and secret book; please send.) In fact, I know of at least one talk that included many "luminaries" that we did not pick, which I'm sure upset people. I had a talk in another track that I thought was a shoo-in that wasn't picked, and I think many of us are in that boat. This process took about 6-8 hours of my time, and we had a smaller track with about 90 talks to look through; many of the other chairs had far more work. So I hope that sheds some light on how the process worked, at least for my track. On Sun, Aug 30, 2015 at 2:22 PM, Steve Gordon <sgordon@redhat.com> wrote:
[...]
On Sun, 2015-08-30 at 14:46 -0600, Matt Fischer wrote:
[...]
FWIW, I think that approach is a great way to do it and I'm grateful to the track chairs for the large and complex task they're undertaking.
+1 On August 30, 2015 4:17:05 PM Xav Paice <xavpaice@gmail.com> wrote:
[...]
----- Original Message -----
From: "Matt Fischer" <matt@mattfischer.com> To: "Steve Gordon" <sgordon@redhat.com>
I was a first-year track chair this year, and I think "your vote doesn't count" is not an accurate description of what I did. I can only speak for my own thinking and our track, but votes certainly were part of the process. Some talks had no votes or primarily negative votes; they were not considered much. But then you end up with lots of talks, well more than the 11 we could pick, with a good number of positive votes. We get to see counts and averages. Is a talk with 65 votes and an average of 2.5 better than a talk with 72 votes and an average of 2.4? You're splitting hairs there, so we can only use them as a rough guide for interest in the topic and speaker. Also, if we had simply picked the top 11 by average, you'd have ended up with an unbalanced track: too many talks on the same topics, for example, or by the same people.
The assertion that "your vote doesn't count" came from not one but two former track chairs in this thread, so it's certainly the case for at least some tracks. I myself am not actually questioning the process as it exists today (putting those comments aside, at least), but rather asking whether the process is documented.
Our goals were many, but included considering:
- How were the votes? High counts? High score? Etc.
- Does the talk fit into this track? Is it too advanced, too broad, too narrow?
- Is it probably a sales pitch?
- Are we covering the right things here? Does it fit the goals of this track?
- Is the topic interesting to attendees? We try to think about who the audience for the track is and go from there.
- Is this a repeat from a previous year? Some talks are submitted with very similar-sounding titles (although sometimes "updates on xxx" talks are good).
- Does anyone know the speaker? Are they active in the community? An engaging speaker? A fresh face who would bring a different perspective?
- Is this a duplicate talk? For example, out of the 11 talks we can pick, we don't have space for 4 talks on Chef, so let's pick one that's good and broad enough and fits this track.
- Do any of these talks include locals who would not normally get a chance to speak or travel if this were in NA or Europe?
This list is similar to what I have heard from other former track chairs (notwithstanding the comments from those who have responded in this thread), but it appears to be communal knowledge shared among a subset of the community. My question, again, is whether it is written down somewhere and, if not, whether it should be.
This is how I considered and weighed the talks, and the bottom line is that I assure you there is no secret cabal ignoring everyone's wishes and jamming the schedule onto you. (If there is, I don't yet have my robes and secret book; please send.) In fact, I know of at least one talk that included many "luminaries" that we did not pick, which I'm sure upset people. I had a talk in another track that I thought was a shoo-in that wasn't picked, and I think many of us are in that boat. This process took about 6-8 hours of my time, and we had a smaller track with about 90 talks to look through; many of the other chairs had far more work.
So I hope that sheds some light on how the process worked at least for my track.
The concern I raised was not that there is a secret cabal, but that the process as it exists is not well known by those voting, nor in many cases by those making submissions, because it isn't formally documented anywhere, nor is there necessarily consistency across tracks. From three chairs in this thread so far we have your well-thought-out list, "votes don't count at all", and "votes don't count at all and I only accepted submissions from Asia" - quite a diverse range of approaches. I *personally* understand the process, but only via word of mouth and emails like this that I happen to catch. So, not to belabor the point: is the process documented somewhere, and is this something we should be highlighting to voters and submitters? Clearly I think the answer is yes, but I'm interested in arguments as to why it shouldn't be documented. Thanks, Steve
On 2015-08-30 18:52:22 -0400 (-0400), Steve Gordon wrote:
The assertion that "your vote doesn't count" came from emails from not one but two former track chairs in this thread, so it's certainly the case for at least some tracks.
If you're referring to my E-mail[1] (wherein I referred to the community votes being "purely advisory") as indicating that they do not count at all, then either you're mischaracterizing what I said or I did a poor job of saying it. I certainly took votes on abstracts under advisement, but also considered the fact that they're easy to game and popularity contests are not the best way to curate talks for a conference track.
I myself am not actually questioning the process as it exists today, putting those comments aside at least, but rather whether the process is documented. [...]
The process is basically:

1. Use your best judgement.
2. When in doubt, refer to #1.

We have track chairs for a reason. It's their responsibility to decide what talks will end up in their tracks. Heaping rules and process on them is only likely to hamper their efforts to make the conference the best it can be. If you're dissatisfied with the outcome, then volunteer to be a track chair next time.

[1] http://lists.openstack.org/pipermail/community/2015-August/001263.html
-- Jeremy Stanley
Hi, On 08/31/2015 12:32 PM, Jeremy Stanley wrote:
If you're referring to my E-mail[1] (wherein I referred to the community votes being "purely advisory")
Tristan Goode: "I was a track chair once upon a time and I completely ignored all the votes and voted for everything that was from Asia because I knew many of the other track chairs would wheel in the same old windbags that seem to get two, three or more slots every single summit."

Stefano Maffulli: "That's what I've always done too. I ignore votes as a track chair."

My personal experience (track chair 2 or 3 times) was to take votes for guidance, and to look for voting red flags worth digging into: deeply polarized votes, proposals with a very high number of votes, proposals that fare poorly because of a small number of votes. Generally, talks that were popular in voting got in, but occasionally we picked talks that fared very poorly, for other reasons (topic/speaker/company diversity, or under-treated topics we thought were important).

One year I did it, I know that one track just chose the X highest-scoring talks, while another track decided to completely ignore voting. I do not believe there is consistent guidance or a standard for this (but that may have changed). Personally, I'd be happy doing away with voting altogether and leaving it completely to the track selection groups.

Regards, Dave.
-- Dave Neary - NFV/SDN Community Strategy
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +1-978-399-2182 / Cell: +1-978-799-3338
----- Original Message -----
From: "Jeremy Stanley" <fungi@yuggoth.org> To: community@lists.openstack.org
On 2015-08-30 18:52:22 -0400 (-0400), Steve Gordon wrote:
The assertion that "your vote doesn't count" came from emails from not one but two former track chairs in this thread, so it's certainly the case for at least some tracks.
If you're referring to my E-mail[1] (wherein I referred to the community votes being "purely advisory") as indicating that they do not count at all, then either you're mischaracterizing what I said or I did a poor job of saying it. I certainly took votes on abstracts under advisement, but also considered the fact that they're easy to game and popularity contests are not the best way to curate talks for a conference track.
I'm not mischaracterizing anyone; as Dave pointed out, two separate respondents in this thread explicitly noted both that they were track chairs and that they completely ignored the public vote. I'm also not saying that's wrong in and of itself, because I also happen to believe that a popularity contest is a terrible way to curate talks for a conference track.
I myself am not actually questioning the process as it exists today, putting those comments aside at least, but rather whether the process is documented. [...]
The process is basically:
1. Use your best judgement.
2. When in doubt, refer to #1.
We have track chairs for a reason. It's their responsibility to decide what talks will end up in their tracks. Heaping rules and process on them is only likely to hamper their efforts to make the conference the best it can be. If you're dissatisfied with the outcome, then volunteer to be a track chair next time.
I'm not even talking about documenting the process at that level of detail (though I don't think writing down some of the tribal knowledge on this topic as common guidelines for new track chairs would hurt anyone). I'm talking about making it clear to both people submitting talks and those voting on them that:

a) Track chairs exist at all.
b) Popularity in the vote does not in and of itself ensure success.

More transparency on these two items would, I believe, clear up quite a bit of confusion, but since you brought up volunteering, perhaps we could even document who determines the track chairs... ;). Nobody seems to have issues clarifying these facts via semi-regular email threads like this one, so I'm not sure why setting these expectations up front (on the submission/voting websites and in the submission acknowledgement email) would be an issue? Thanks, Steve
[1] http://lists.openstack.org/pipermail/community/2015-August/001263.html -- Jeremy Stanley
On 2015-08-31 15:20:54 -0400 (-0400), Steve Gordon wrote: [...]
More transparency on these two items would I believe clear up quite a bit of confusion but since you brought up volunteering perhaps we could even document who determines the track chairs... ;). Nobody seems to have issues clarifying these facts via semi-regular email threads like this one, so I'm not sure why setting these expectations up front (on the submission/voting websites and in the submission acknowledgement email) would be an issue? [...]
The likely disconnect is that the people organizing the conference, calling for and selecting volunteer track chairs, designing the chairing and abstract voting interfaces, et cetera need to see these suggestions and weigh in on the discussion. Usually they're extremely busy, and so instead people who know what goes on with chairing a track for the conference (mostly by way of having been through it) answer with details but have no power to get this information added to the Web site(s) in question. -- Jeremy Stanley
Hi everyone,

Sorry, I was out of the office at the end of last week and am catching up on this thread today. I’ll jump in with a few comments and updates from a Summit organizer perspective.

From my perspective, the opportunity to vote on Summit sessions provides a strong community feedback mechanism, so it’s not just a small group of people making decisions. It also provides a level of transparency, because all submitted sessions are published and available to review, analyze, etc. (such as the keyword analysis several community members perform each Summit, or the way other community organizers mine the information to recruit speakers for their own regional events). The results give track chairs a starting point (or sometimes a tie-breaker when needed), and they help them rule out sessions that have been consistently poorly reviewed.

Second, to the initial question from Richard Raseley that started the thread: we have not historically published voting results by session, but we are looking into generating a report (probably a quick and dirty CSV) with the session title, track, vote average, and number of votes cast to share with the community for analysis, as well as the aggregate number of votes cast, of course. This is the information that has been available to track chairs in their selection tool, and I think it makes sense to publish it more broadly, especially for speakers who might be interested in feedback on their session. In the future, I would love to be able to support some kind of comment feature in the voting tool, because I think that feedback could be valuable to the track chairs and speakers.

Finally, you can read more about the track chair and voting process at this link: https://www.openstack.org/summit/tokyo-2015/selection-process/ (that’s the unique URL, but it was also published on the Summit speaking submission page and the Summit FAQ).
To Steve’s point, it sounds like we need to do a better job of making that information more visible. To start, we are planning to link to it from the schedule page as “How were these sessions selected?” The Summit team is always open to feedback and to iterating on the process each cycle as the community continues to grow and change. Thanks for all the comments and input! Thanks, Lauren
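If the report Lauren describes ships as a CSV with session title, track, vote average, and number of votes, a per-track summary takes only a few lines. A sketch under assumed column names ("track", "vote_average", "num_votes"); the published file may well use a different schema:

```python
import csv
from collections import defaultdict

def track_summary(path):
    """Aggregate the proposed per-session vote report by track:
    session count, total votes cast, and mean of the vote averages.
    The column names here are guesses, not a published schema."""
    acc = defaultdict(lambda: {"sessions": 0, "votes": 0, "avg_sum": 0.0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = acc[row["track"]]
            t["sessions"] += 1
            t["votes"] += int(row["num_votes"])
            t["avg_sum"] += float(row["vote_average"])
    return {
        track: {
            "sessions": t["sessions"],
            "votes_cast": t["votes"],
            "mean_average": t["avg_sum"] / t["sessions"],
        }
        for track, t in acc.items()
    }
```

Summaries like this would let the community run the kinds of analyses discussed earlier in the thread (per-track participation, vote concentration) without deanonymizing anything.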
On Aug 31, 2015, at 12:20 PM, Steve Gordon <sgordon@redhat.com> wrote:
----- Original Message -----
From: "Jeremy Stanley" <fungi@yuggoth.org> To: community@lists.openstack.org
On 2015-08-30 18:52:22 -0400 (-0400), Steve Gordon wrote:
The assertion that "your vote doesn't count" came from emails from not one but two former track chairs in this thread, so it's certainly the case for at least some tracks.
If you're referring to my E-mail[1] (wherein I referred to the community votes being "purely advisory") as indicating that they do not count at all, then either you're mischaracterizing what I said or I did a poor job of saying it. I certainly took votes on abstracts under advisement, but also considered the fact that they're easy to game and popularity contests are not the best way to curate talks for a conference track.
I'm not mischaracterizing anyone; as Dave pointed out, two separate respondents in this thread explicitly noted both that they were track chairs and that they completely ignored the public vote. I'm also not saying that's wrong in and of itself, because I happen to believe that a popularity contest is a terrible way to curate talks for a conference track.
I myself am not actually questioning the process as it exists today, putting those comments aside at least, but rather whether the process is documented. [...]
The process is basically:
1. Use your best judgement.
2. When in doubt, refer to #1.
We have track chairs for a reason. It's their responsibility to decide what talks will end up in their tracks. Heaping rules and process on them is only likely to hamper their efforts to make the conference the best it can be. If you're dissatisfied with the outcome, then volunteer to be a track chair next time.
I'm not even talking about documenting the process at that level of detail (though I don't think some of the tribal knowledge on this topic as common guidelines for new track chairs would hurt anyone), I'm talking about making it clear to both people submitting talks and those voting on them that:
a) Track chairs exist at all.
b) Popularity in the vote does not in and of itself ensure success.
More transparency on these two items would, I believe, clear up quite a bit of confusion. But since you brought up volunteering, perhaps we could even document who determines the track chairs... ;). Nobody seems to have issues clarifying these facts via semi-regular email threads like this one, so I'm not sure why setting these expectations up front (on the submission/voting websites and in the submission acknowledgement email) would be an issue?
Thanks,
Steve
[1] http://lists.openstack.org/pipermail/community/2015-August/001263.html -- Jeremy Stanley
_______________________________________________ Community mailing list Community@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community
Thanks Lauren, On 08/31/2015 07:57 PM, Lauren Sell wrote:
From my perspective, the opportunity to vote on Summit sessions provides a strong community feedback mechanism so it’s not just a small group of people making decisions. It also provides a level of transparency because all submitted sessions are published and available to review, analyze, etc. (such as the keyword analysis several community members perform each Summit, or how other community organizers mine the information to recruit speakers for their own regional events). The results give track chairs a starting point (or sometimes a tie breaker when needed) and it helps them rule out sessions that have been consistently poorly reviewed.
Back in February, during the voting process last time, I sent some feedback on the voting process to the community list - the main reasons I don't like the process are:

* Having to hawk & promote proposal(s) is kind of unseemly, and makes us look small, I think. Hundreds of people going "vote for me!" doesn't make us look good.
* Some people don't want to pitch themselves, and others don't have access to as big a platform to promote on.
* The same issues exist with this system as with board voting - there is a possibility that people will vote for their colleagues, not out of any corruption, but just because no-one has time to rate all the proposals, and they're more likely to rate those submitted by people they know more highly.
* Also, it's a self-selecting group of people who rate proposals - I don't think the voters will be representative of summit attendees.
* After all is said and done, the proposals which are chosen by the voters are only guidelines to the people who actually choose the talks for the tracks, the track chairs.

One more to add: this process encourages the kind of corporate divisiveness we should be trying to remove from OpenStack - every time, there are the "vote for the following proposals from your colleagues" emails, the blog post encouraging people to "vote for these 13 great proposals" which just happen to be the 13 from that company, etc. It's the worst of corporate jingoism, and (as I said) it doesn't make us look good.

I'd much prefer that we just trust the track chairs to make good choices (which is, after all, what we do now).

<snip>
Finally, you can read more about the track chair and voting process at this link: https://www.openstack.org/summit/tokyo-2015/selection-process/ (that’s the unique URL, but it was also published on the Summit speaking submission page and the Summit FAQ). To Steve’s point, it sounds like we need to do a better job making that information more visible. To start, we are planning to link to it from the schedule page as “How were these sessions selected?”
I was not aware of the link above, and the Etherpad linked from there is great, but it's a little ephemeral - it would be great to have track chairs be more visible during the call for papers process. Thanks again, Dave. -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338
I've been reading/tracking this thread and want to say, thanks for working through the dialog with such diplomacy. I've had a lot of the same questions and it is enlightening to hear feedback from track chairs. /adam On Aug 31, 2015 7:42 PM, "Dave Neary" <dneary@redhat.com> wrote:
Thanks Lauren,

That link is exactly what I was after! My main two outstanding comments would be:

- The info in the first half of the https://etherpad.openstack.org/p/Tokyo_Summit_Track_Chairs etherpad is also quite relevant/informative, and if that had a permanent home it would likely be more discoverable (keeping in mind that the first hit for "openstack summit presentation selection process" in Google is still Mark Baker's presentation on how the process is broken - though the FAQ entry does at least make the front page).

- The text of the presentation submission acknowledgment email currently reads: "Your speaking submissions have been received and are now complete. No further action is required at this time. Stay tuned and you’ll hear from us again once community voting begins." I think it would be useful if this were more accurate, either linking to the above information or at least mentioning that voting is not the only aspect of session selection. It's also inaccurate to say, as it currently does, that you'll hear more when voting begins; realistically, you hear back after voting has ended and some number of weeks have passed for the track chairs to do their thing.

If this content is in a git repository somewhere that we can propose specific changes to, I am happy to do so, but my understanding was that these pages are still handled via a separate process.

Thanks,
Steve
-- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform
On 2015-09-01 09:41:07 -0400 (-0400), Steve Gordon wrote: [...]
If this content is in a git repository somewhere that we can propose specific changes to it I am happy to do so but my understanding was these pages are still handled via a separate process.
Try a git grep in https://github.com/OpenStackweb/openstack-org and if it's in there see whether a pull request gets you anywhere. I'd love for this to eventually be under community management like other OpenStack repositories so that we can all contribute to improving it, but for now the foundation's site content is mostly maintained by a third-party vendor under a contract arrangement. -- Jeremy Stanley
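[Editor's note: for anyone unfamiliar with the workflow Jeremy suggests, here is a minimal sketch. It uses a throwaway local repository rather than a real checkout of OpenStackweb/openstack-org, and the file name and template text are invented for illustration. One subtlety: `git grep` searches only tracked (or staged) files, which is why the `git add` step matters.]

```shell
set -eu

# Stand up a throwaway repo mimicking site content (illustration only).
rm -rf /tmp/site-demo && mkdir /tmp/site-demo && cd /tmp/site-demo
git init -q .
mkdir -p templates
cat > templates/speaker-ack.txt <<'EOF'
Your speaking submissions have been received and are now complete.
EOF

# Track the files so git grep can see them.
git add -A

# Search tracked files for the acknowledgement-email text;
# prints matching file, line number, and line.
git grep -n "speaking submissions" -- templates/
```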
Hi Steve,

Those sound like great suggestions to me. The original format for the selection process text was the Summit FAQ, so it’s a bit brief. I like the idea of adding more detail from the etherpad and the current list of track chairs to the selection process landing page (https://www.openstack.org/summit/tokyo-2015/selection-process/), so we can link to that from the FAQ, the schedule and all relevant speaker communications.

Separately, if you have any other edits or comments on the website, you can always submit them to the launchpad bug tracker: https://launchpad.net/openstack-org

Thanks for your help,
Lauren
Lauren, Thank you for your thoughtful reply. Publishing the results in the way you described above would be excellent. I'll stay tuned for that information. Regards, Richard
Yes - very aware. I am just interested in taking a look at the vote totals.
On Aug 27, 2015, at 2:19 PM, David Medberry <openstack@medberry.net> wrote:
participants (12)

- Adam Lawson
- Dave Neary
- David Medberry
- Jeremy Stanley
- Lauren Sell
- mark@openstack.org
- Matt Fischer
- Richard Raseley
- Stefano Maffulli
- Steve Gordon
- Tristan Goode
- Xav Paice