Re: Any topics for the October 3rd meeting of the board?
Greetings Eoghan!

I wholeheartedly agree, this is worthwhile to get on everyone's radar.

As far as I'm aware, this topic has not come up before in the OpenInfra community, but we are a wide and diverse community. I do concur that this would fall into the scope of the board, in large part because we maintain authority over licenses and the acceptable copyrights. And as you've pointed out, this can quickly result in needing to dive into the license agreements and assignments which members commit to.

Realistically, I think if we opted to permit AI-generated code content, the content would still need to be denoted in the code itself under current USPTO guidelines, because the code in a single commit would then have two different sets of rules applying to it: one for the AI-generated portion, and separately a human's influence over it. That being said, that is only my mildly caffeinated impression after doing some additional reading.

I think a working group session may be a good starting place to collect thoughts from the board members. Meanwhile, I've added Thierry to the discussion because this might be something already on the staff's radar.

-Julia

On Fri, Sep 22, 2023 at 5:16 AM Eoghan Glynn <eglynn@redhat.com> wrote:
Thanks Julia,
Not necessarily a reason on its own to justify an October meeting, but one that we might consider getting on our radar before the end of the year if it's not already being discussed in other Open Infra forums ...
After reading a recent ASF blog <https://news.apache.org/foundation/entry/asf-legal-committee-issues-generative-ai-guidance-to-contributors> announcement, I asked the Red Hat team yesterday if anyone knew if there's an Open Infra equivalent for the Apache guidance <https://www.apache.org/legal/generative-tooling.html> on using generative AI for code contributions.
I also asked if this topic would naturally fall into the domain of the [OpenStack] TC to discuss and craft a policy.
The response I got was it felt more like a topic for discussion at the Foundation Board level, since it likely touches on the CLA and may require legal insight.
So a future Board discussion could be warranted on what type of generated content we're comfortable with (if any), and whether it would make sense to encourage or even require disclosure of any such tool usage in the commit message.
Cheers, Eoghan
On Tue, Sep 19, 2023 at 3:58 PM Julia Kreger <juliaashleykreger@gmail.com> wrote:
Greetings Directors,
This morning Allison Randall and I met with the foundation executive staff to identify core topics from the staff to turn into an agenda for our upcoming meeting. Unfortunately, we determined that we would not have any items ready for the board for this upcoming meeting. The plus is that we can cancel the October 3rd meeting if we have no other business to attend to. The downside is that our November meeting is likely going to have a number of topics to cover.
As such, if anyone has any topics which need to be discussed and addressed during the scheduled meeting on October 3rd, please let me know before 9 AM Friday morning US-Pacific (4 PM UTC). If I receive no topics by that time, I will cancel the October meeting of the board, and we will meet next during our scheduled November meeting.
Thanks,
-Julia

_______________________________________________
Foundation-board mailing list
Foundation-board@lists.openinfra.dev
Thanks for looping me in, Julia.

It's definitely on our radar. It is a fast-changing legal landscape, and while the ASF can be praised for issuing early guidelines, most of their guidance is pretty open-ended right now, and is meant more as a living document that is going to evolve.

I am actually participating in the Open Source Initiative's new webinar series on defining open source AI, and will be exploring the effects of the introduction of AI on openly-developed open source together with Davanum Srinivas (AWS, long-time contributor to Apache projects, OpenStack and Kubernetes) and Diane Mueller (now at Bitergia). The effects of generative AI on code contributions will be a key part of the discussion. It will happen on Wednesday at 16:00 UTC / 11am CT:

https://deepdiveai.sessionize.com/session/526792

Thierry

Julia Kreger wrote:
Hi Thierry,

On "Open Source AI": The Nextcloud project (under the leadership of Frank Karlitschek) has come up with the term "ethical AI". They rate AI solutions based on three criteria:

(1) The code that runs is fully open source.
(2) The model data is available under an open source license.
(3) The training data used to generate/train the model is available under an open source license.

While I would have called this "open AI" or "sovereign AI" or "the 3 opens of AI" ;-) and not "ethical AI", it's probably still a useful starting point for evaluating the openness. (With "ethical", I would expect that models are checked for discriminating biases ... and possibly checks added that avoid abusive output or other output that is seriously destructive to humans' physical or psychological safety.)

HTH,
--
Kurt Garloff <kurt@garloff.de>
Cologne, Germany

On 22/09/2023 15:06, Thierry Carrez wrote:
Kurt,

Out of curiosity, in Europe are there concerns over copyright of AI "generated" work products? This may be an interesting data point to start discussions from as well.

Thanks,
-Julia

On Wed, Oct 4, 2023 at 8:48 AM Kurt Garloff <kurt@garloff.de> wrote:
Hi Julia,

I am not a lawyer, and I don't have an overview of the legal discussions in Europe. The legal situation in my home country (and probably similar in many neighboring countries), to my understanding, is:

* AI-generated content is not copyrightable. However, someone could base her own work on top of AI-generated content and, if the increment is significant, then own a copyright on the outcome.
* AI that is trained on copyrighted material is difficult to judge -- if it reproduces such material more or less literally, it would infringe. If it gets "creative" and composes something new, things are not very clear. (If I did this as a human being, I would typically be allowed to do so, own the copyright, and not infringe -- most NDAs that I have seen even have a clause for "residuals" to clarify this. These were US NDAs, actually.)

Not sure those bits are helpful ...
-- Kurt

--
Kurt Garloff <kurt@garloff.de>
Cologne, Germany

On 04/10/2023 18:00, Julia Kreger wrote:
Greetings Directors,

Last week a group of directors met and reached the conclusion that we needed to solicit some community leader feedback and see if perceptions are similar, and if independent voices understood our general motivation to put in place some sort of guidance. We kept some informal notes from the discussion at the PTG[0]; please feel free to review.

A few takeaways I have:

* Generally there was understanding and consensus that *something* was needed from a forward-looking point of view.
* It was noted with agreement that some sort of context setting is needed upfront to have these discussions. This revolved around the reality that we have predictive and generative modeling in existence today, and tools are constantly evolving.
* The phrase "attractive nuisance" was used quite a bit with the modifier "at this time", while fully acknowledging that forward evolution will occur and we need to be ready for it.
* Community leaders also expressed a great deal of concern on the code review side, mirroring the informal discussions amongst board members. Consensus revolved around the need to understand how much of a contribution was computer generated, which also mirrors our prior week's discussions.
* There is concern over fully-automatic contributions. We, the board, may want to assert any expectations we have around who is making contributions as well.

With that having been said, I believe we should meet again this coming week and see if we have consensus on some sort of high-level policy we could write and hopefully adopt before the end of the year.

Would this coming Thursday, November 2nd be viable to discuss? Perhaps around 1500 UTC? Would another day be better?

Thanks,
-Julia

[0]: https://etherpad.opendev.org/p/oct2023-ptg-openinfra-board
participants (3):
- Julia Kreger
- Kurt Garloff
- Thierry Carrez