From: "Chuck Wolber" <chuck@wolber.net>
To: "Christian Couder" <christian.couder@gmail.com>, <git@vger.kernel.org>
Cc: "Junio C Hamano" <gitster@pobox.com>,
"Taylor Blau" <me@ttaylorr.com>,
"Rick Sanders" <rick@sfconservancy.org>,
"Git at SFC" <git@sfconservancy.org>,
"Johannes Schindelin" <Johannes.Schindelin@gmx.de>,
"Patrick Steinhardt" <ps@pks.im>,
"Christian Couder" <chriscool@tuxfamily.org>
Subject: Re: [PATCH v2] SubmittingPatches: add section about AI
Date: Wed, 01 Oct 2025 18:59:31 +0000
Message-ID: <DD77TA1H1OOO.351R9WDH93UZ5@wolber.net>
In-Reply-To: <20251001140310.527097-1-christian.couder@gmail.com>
On Wed Oct 1, 2025 at 2:03 PM UTC, Christian Couder wrote:
> To mitigate both risks, let's add an "Use of Artificial Intelligence"
> section to "Documentation/SubmittingPatches" with the goal of
> discouraging its blind use to generate content that is submitted to
> the project, while still allowing us to benefit from its help in some
> innovative, useful and less risky ways.
I love the intent here, but it does not seem to have come through in the
proposed patch.
I think this patch opens the door to some concerning issues, including the
potential for false accusations and inconsistent treatment of human (non-AI)
generated contributions.
Sticking to a message of self-reliance (e.g. responsible AI use) and making
some technical changes to mark AI content might be a better approach.
> +The Developer's Certificate of Origin requires contributors to certify
> +that they know the origin of their contributions to the project and
> +that they have the right to submit it under the project's license.
> +It's not yet clear that this can be legally satisfied when submitting
> +significant amount of content that has been generated by AI tools.
The legal issues around AI will be resolved in time, but the future will not
stop bringing us a steady stream of things that create legal ambiguity.
Creating one-off sections that cover _multiple_ topics _including_ legal
ambiguity seems like it risks reducing clarity. To get the full picture, this
patch (and patches like it in the future) requires me to navigate multiple
sections to understand all of the project's relevant legal concerns.
I also have two specific concerns with the wording:
1. It repeats what is said just a few paragraphs earlier in the document. I
understand _why_ it does this, but moving the essence of this topic up to the
DCO section avoids the repetition and avoids diluting the project's legal
guidance.
2. What am I supposed to do with "It's not yet clear"? That is worse than
telling me nothing: it raises a vague question without offering any guidance.
It is _true_ that the legal situation is unsettled, but what are the
consequences once it _is_ settled? The worst case scenario is that we have to
go back and rework/remove AI generated patches. So why not just require a
declaration of AI content like the one proposed at declare-ai.org?
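To make that suggestion concrete, here is a sketch of what such a declaration
could look like as a commit message trailer. The trailer name and wording
below are purely hypothetical; the actual form would need to be agreed on by
the project (declare-ai.org proposes one possible format):

```
foo: fix frobnication edge case

Rework the frobnication loop to handle empty input.

Assisted-by: <AI tool name> (generated initial test cases, reviewed by me)
Signed-off-by: A U Thor <author@example.com>
```

A machine-readable trailer like this keeps the declaration greppable and
checkable by tooling, in the same way Signed-off-by is today.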
> +To avoid these issues, we will reject anything that looks AI
> +generated, that sounds overly formal or bloated, that looks like AI
> +slop, that looks good on the surface but makes no sense, or that
> +senders don’t understand or cannot explain.
That reads like a full-stop rejection of all AI generated patch content.
What if AI were to generate a great patch whose technical quality is exemplary
in every way? How is that any different from a great patch of exemplary
technical quality submitted by a person who is unambiguously evil?
But perhaps you intended it to mean a full-stop rejection of content that
_looks_ like it was generated by the primitive AI we have _today_? Even the
interpretation you likely intended opens up a concerning double standard.
What if a patch "looks" AI generated, but in reality was wholly generated by a
human? Does this mean that patches generated by humans that fit the declared
criteria would be treated as if they were AI generated?
What about a non-native speaker who uses AI in an attempt to bridge a language
barrier? By definition they would lack the ability to judge the degree to which
their patch suddenly meets your criteria.
How is any of that fair, and how could you even tell the difference?
And on a personal note, the subjective wording gives me a "walking on
eggshells" feeling. It opens the door for false accusations, and gets us away
from judging things _purely_ on their technical merit.
Would it not be more _consistent_ to continue saying what is already true? That
your patches _must_ be remarkably high quality regardless of how they were
created?
With the addition of a required AI declaration (again, check out declare-ai.org
for an example of what that might look like), I think you cover all of the
necessary bases. And sure, someone could lie. But they can lie about meeting
the DCO as well. The consequences are the same - remove/rework.
> +We strongly recommend using AI tools carefully and responsibly.
Agreed, but I think you lost me here.
Taking your words at face value, the prior paragraph reads as if the Git
project is declaring an outright ban on _all_ AI generated content (and I am
nearly certain that is _not_ what you intended to say). If so, why bother
continuing on with a PSA (Public Service Announcement)? It reads like a
non-alcoholic drink with the words "Drink Responsibly" printed on the side of
the can.
> +Contributors would often benefit more from AI by using it to guide and
> +help them step by step towards producing a solution by themselves
> +rather than by asking for a full solution that they would then mostly
> +copy-paste. They can also use AI to help with debugging, or with
> +checking for obvious mistakes, things that can be improved, things
> +that don’t match our style, guidelines or our feedback, before sending
> +it to us.
I think this is very useful guidance. And although it is timely, I think it
stands a good chance of being timeless, even when AI becomes far more competent
than it is today.
AI is not going away, and we need to find a way to use it productively
_without_ losing our sense of self-reliance. If we fail to develop this ability
when AI is hardly more skilled than an above-average intern, full of hubris and
with zero real-world experience, imagine how unqualified we will be when it
becomes competent enough to manipulate and mislead us.
Overall, I feel like an addition to the documentation is warranted, but this
version makes me uncomfortable, if not a little unwelcome. Making a technical
change to the required declarations and expanding on the theme of self-reliance
and responsible use feels like a more productive way to address this issue.
Putting my money where my mouth is, I am more than happy to suggest a revision
to this patch if you would like. I wanted to avoid doing so right now because
it seemed like a dialog was warranted first.
..Ch:W..