From 562dfb7b4f06c026b071300ecd81aa4c48753103 Mon Sep 17 00:00:00 2001
From: Till Schneidereit
Date: Tue, 21 Apr 2026 17:47:31 +0200
Subject: [PATCH 1/5] Add AI Tool Use Policy

This PR adds a policy document describing the Bytecode Alliance's policy on
the use of AI tools in the context of the Bytecode Alliance. It's largely
based on the [LLVM project's policy](https://github.com/llvm/llvm-project/blob/347dc1321ed50578bb09da6fa10ccec581d8a2b6/llvm/docs/AIToolPolicy.md),
with light modifications as needed.

Signed-off-by: Till Schneidereit
---
 AI_TOOL_POLICY.md | 65 +++++++++++++++++++++++++++++++++++++++++++++++
 README.md         |  1 +
 2 files changed, 66 insertions(+)
 create mode 100644 AI_TOOL_POLICY.md

diff --git a/AI_TOOL_POLICY.md b/AI_TOOL_POLICY.md
new file mode 100644
index 0000000..5224073
--- /dev/null
+++ b/AI_TOOL_POLICY.md
@@ -0,0 +1,65 @@
+# Bytecode Alliance AI Tool Use Policy
+
+## Policy
+
+The Alliance's policy is that contributors can use whatever tools they would like to craft their contributions, but there must be a human in the loop. Contributors must read and review all LLM-generated code or text before they ask other project members to review it. The contributor is always the author and is fully accountable for their contributions. Contributors should be sufficiently confident that the contribution is high enough quality that asking for a review is a good use of scarce maintainer time, and they should be able to answer questions about their work during review.
+
+We expect that new contributors will be less confident in their contributions, and our guidance to them is to start with small contributions that they can fully understand to build confidence. We aspire to be a welcoming community that helps new contributors grow their expertise, but learning involves taking small steps, getting feedback, and iterating. Passing maintainer feedback to an LLM doesn’t help anyone grow, and does not sustain our community.
+
+Contributors are expected to be transparent and label contributions that contain substantial amounts of tool-generated content. Our policy on labelling is intended to facilitate reviews, and not to track which parts of LLVM are generated. Contributors should note tool usage in their pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like `Assisted-by: :`. This transparency helps the community develop best practices and understand the role of these new tools.
+
+This policy includes, but is not limited to, the following kinds of contributions:
+ - Code, usually in the form of a pull request
+ - RFCs or design proposals
+ - Issues or security vulnerabilities
+ - Comments and feedback on pull requests
+
+## Details
+
+To ensure sufficient self review and understanding of the work, it is strongly recommended that contributors write PR descriptions themselves (if needed, using tools for translation or copy-editing). The description should explain the motivation, implementation approach, expected impact, and any open questions or uncertainties to the same extent as a contribution made without tool assistance.
+
+An important implication of this policy is that it bans agents that take action in our digital spaces without human approval, such as the GitHub [@claude agent](https://github.com/claude/). Similarly, automated review tools that publish comments without human review are not allowed. However, an opt-in review tool that **keeps a human in the loop** is acceptable under this policy. As another example, using an LLM to generate documentation, which a contributor manually reviews for correctness, edits, and then posts as a PR, is an approved use of tools under this policy.
+
+AI tools must not be used to fix GitHub issues labelled good first issue. These issues are generally not urgent, and are intended to be learning opportunities for new contributors to get familiar with the codebase. Whether you are a newcomer or not, fully automating the process of fixing these issues squanders the learning opportunity and doesn’t add much value to the project. **Using AI tools to fix issues labelled as “good first issues” is forbidden**.
+
+## Extractive Contributions
+
+The reason for our “human-in-the-loop” contribution policy is that processing patches, PRs, RFCs, and comments to LLVM is not free – it takes a lot of maintainer time and energy to review those contributions! Sending the unreviewed output of an LLM to open source project maintainers extracts work from them in the form of design and code review, so we call this kind of contribution an “extractive contribution”.
+
+Our **golden rule** is that a contribution should be worth more to the project than the time it takes to review it. These ideas are captured by this quote from the book [Working in Public](https://press.stripe.com/working-in-public) by Nadia Eghbal:
+
+> “When attention is being appropriated, producers need to weigh the costs and benefits of the transaction. To assess whether the appropriation of attention is net-positive, it’s useful to distinguish between extractive and non-extractive contributions. Extractive contributions are those where the marginal cost of reviewing and merging that contribution is greater than the marginal benefit to the project’s producers. In the case of a code contribution, it might be a pull request that’s too complex or unwieldy to review, given the potential upside.”
+> – Nadia Eghbal
+
+Prior to the advent of LLMs, open source project maintainers would often review any and all changes sent to the project simply because posting a change for review was a sign of interest from a potential long-term contributor. While new tools enable more development, they shift effort from the implementor to the reviewer, and our policy exists to ensure that we value and do not squander maintainer time.
+
+## Handling Violations
+
+If a maintainer judges that a contribution doesn’t comply with this policy, they should paste the following response to request changes:
+
+> This PR doesn't appear to comply with our policy on tool-generated content, and requires
+> additional justification for why it is valuable enough to the project for us to review it.
+> Please see our developer policy on AI-generated contributions:
+> https://github.com/bytecodealliance/governance/blob/main/AI_TOOL_POLICY.md
+
+The best ways to make a change less extractive and more valuable are to reduce its size or complexity or to increase its usefulness to the community. These factors are impossible to weigh objectively, so the determination is left up to the maintainers of the project, i.e. those who are doing the work of sustaining the project.
+
+If or when it becomes clear that a GitHub issue or PR is off-track and not moving in the right direction, maintainers should close it with an appropriate explanation.
+
+If a contributor repeatedly fails to make their changes meaningfully less extractive, maintainers should escalate to the TSC.
+
+## Copyright
+
+Artificial intelligence systems raise many questions around copyright that have yet to be answered. The Bytecode Alliance's policy on AI tools is that contributors are responsible for ensuring that they have the right to contribute code under the terms of our license, typically meaning that either they, their employer, or their collaborators hold the copyright. Using AI tools to regenerate copyrighted material does not remove the copyright, and contributors are responsible for ensuring that such material does not appear in their contributions. Contributions found to violate this policy will be removed just like any other offending contribution.
+
+## Exceptions
+
+Project maintainers might identify use cases for AI tools that don't fit this policy, such as fully automated reviews run as part of a CI pipeline. In such a case, maintainers should reach out to the TSC, e.g. using a Pull Request updating this section to list their use as an exception. The TSC will consider the use case and either approve it or provide feedback.
+
+## Examples
+
+A highly successful use of AI tools for finding security and correctness issues in the Wasmtime project is described in [this blog post](https://bytecodealliance.org/articles/wasmtime-security-advisories).
+
+## References
+
+This policy is a lightly adapted copy of the [LLVM Project's AI Tool Use Policy](https://github.com/llvm/llvm-project/blob/347dc1321ed50578bb09da6fa10ccec581d8a2b6/llvm/docs/AIToolPolicy.md) (fetched 2026-04-21), licensed under the [Apache 2.0-WITH-LLVM-Exception] license, itself based on the [Fedora Council's Policy on AI-Assisted Contributions](https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/) (fetched 2025-10-01), licensed under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). These links serve as attribution.
diff --git a/README.md b/README.md
index 8474f0e..52074cb 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,7 @@ The Bytecode Alliance's governance and processes are defined in a variety of gov
 - The [Code of Conduct](./CODE_OF_CONDUCT.md) describes the expected conduct in our community
 - The [Project License](./PROJECT_LICENSE) describes the default license the Bytecode Alliance uses, with exceptions listed [here](./projects/license-exceptions/README.md)
 - The [communication](./communication.md) document lays out tequirements and guidelines for how to communicate as a Bytecode Alliance participant
+- The [AI tools policy](./AI_TOOL_POLICY.md) document lays out the Alliance's policy on use of AI tools
 
 ## Recognized Contributors

From 8e4aca6646bdd880a5993f1d49eb42384dac7edd Mon Sep 17 00:00:00 2001
From: Till Schneidereit
Date: Fri, 24 Apr 2026 14:26:17 +0200
Subject: [PATCH 2/5] Fix mentions of LLVM to say Bytecode Alliance instead

Thanks @fitzgen!
Co-authored-by: Nick Fitzgerald
---
 AI_TOOL_POLICY.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/AI_TOOL_POLICY.md b/AI_TOOL_POLICY.md
index 5224073..14da332 100644
--- a/AI_TOOL_POLICY.md
+++ b/AI_TOOL_POLICY.md
@@ -6,7 +6,7 @@ The Alliance's policy is that contributors can use whatever tools they would lik
 
 We expect that new contributors will be less confident in their contributions, and our guidance to them is to start with small contributions that they can fully understand to build confidence. We aspire to be a welcoming community that helps new contributors grow their expertise, but learning involves taking small steps, getting feedback, and iterating. Passing maintainer feedback to an LLM doesn’t help anyone grow, and does not sustain our community.
 
-Contributors are expected to be transparent and label contributions that contain substantial amounts of tool-generated content. Our policy on labelling is intended to facilitate reviews, and not to track which parts of LLVM are generated. Contributors should note tool usage in their pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like `Assisted-by: :`. This transparency helps the community develop best practices and understand the role of these new tools.
+Contributors are expected to be transparent and label contributions that contain substantial amounts of tool-generated content. Our policy on labelling is intended to facilitate reviews, and not to track which parts of Bytecode Alliance projects are generated. Contributors should note tool usage in their pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like `Assisted-by: :`. This transparency helps the community develop best practices and understand the role of these new tools.
 
 This policy includes, but is not limited to, the following kinds of contributions:
  - Code, usually in the form of a pull request
@@ -24,7 +24,7 @@ AI tools must not be used to fix GitHub issues labelled good first issue. These
 
 ## Extractive Contributions
 
-The reason for our “human-in-the-loop” contribution policy is that processing patches, PRs, RFCs, and comments to LLVM is not free – it takes a lot of maintainer time and energy to review those contributions! Sending the unreviewed output of an LLM to open source project maintainers extracts work from them in the form of design and code review, so we call this kind of contribution an “extractive contribution”.
+The reason for our “human-in-the-loop” contribution policy is that processing patches, PRs, RFCs, and comments to Bytecode Alliance projects is not free – it takes a lot of maintainer time and energy to review those contributions! Sending the unreviewed output of an LLM to open source project maintainers extracts work from them in the form of design and code review, so we call this kind of contribution an “extractive contribution”.
 
 Our **golden rule** is that a contribution should be worth more to the project than the time it takes to review it. These ideas are captured by this quote from the book [Working in Public](https://press.stripe.com/working-in-public) by Nadia Eghbal:
 

From 58bd06fb40a5e467ffb3819ff80ed944b528d976 Mon Sep 17 00:00:00 2001
From: Till Schneidereit
Date: Fri, 24 Apr 2026 14:28:13 +0200
Subject: [PATCH 3/5] Change AI Tool Use guidance on disclosure

This is a change we discussed at the last TSC meeting.
Signed-off-by: Till Schneidereit
---
 AI_TOOL_POLICY.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/AI_TOOL_POLICY.md b/AI_TOOL_POLICY.md
index 14da332..0e031a0 100644
--- a/AI_TOOL_POLICY.md
+++ b/AI_TOOL_POLICY.md
@@ -6,7 +6,7 @@ The Alliance's policy is that contributors can use whatever tools they would lik
 
 We expect that new contributors will be less confident in their contributions, and our guidance to them is to start with small contributions that they can fully understand to build confidence. We aspire to be a welcoming community that helps new contributors grow their expertise, but learning involves taking small steps, getting feedback, and iterating. Passing maintainer feedback to an LLM doesn’t help anyone grow, and does not sustain our community.
 
-Contributors are expected to be transparent and label contributions that contain substantial amounts of tool-generated content. Our policy on labelling is intended to facilitate reviews, and not to track which parts of Bytecode Alliance projects are generated. Contributors should note tool usage in their pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like `Assisted-by: :`. This transparency helps the community develop best practices and understand the role of these new tools.
+As a consequence of our position that the contributor proposing changes is fully accountable for their contributions, contributors aren't required to explicitly mention the use of any tools, AI or otherwise. If a contributor thinks there is value in highlighting tool use, they *may* do so in their pull request description, commit message, or using a commit message trailer like `Assisted-by: :`. They **MUST NOT**, however, list any tool, AI or otherwise, as a co-author: only human contributors can be authors and take on the accountability that comes with it.
 
 This policy includes, but is not limited to, the following kinds of contributions:
  - Code, usually in the form of a pull request

From 9a36bfa859981cd05db34b2068c0f9fb9b5b1e6d Mon Sep 17 00:00:00 2001
From: Till Schneidereit
Date: Tue, 28 Apr 2026 19:37:10 +0200
Subject: [PATCH 4/5] Simplify some language

Co-authored-by: Oscar Spencer
---
 AI_TOOL_POLICY.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/AI_TOOL_POLICY.md b/AI_TOOL_POLICY.md
index 0e031a0..b467aa6 100644
--- a/AI_TOOL_POLICY.md
+++ b/AI_TOOL_POLICY.md
@@ -2,7 +2,7 @@
 
 ## Policy
 
-The Alliance's policy is that contributors can use whatever tools they would like to craft their contributions, but there must be a human in the loop. Contributors must read and review all LLM-generated code or text before they ask other project members to review it. The contributor is always the author and is fully accountable for their contributions. Contributors should be sufficiently confident that the contribution is high enough quality that asking for a review is a good use of scarce maintainer time, and they should be able to answer questions about their work during review.
+Contributors may use whatever tools they would like to craft their contributions, but there must be a human in the loop. Contributors must read and review all LLM-generated code or text before they ask other project members to review it. The contributor is always the author and is fully accountable for their contributions. Contributors should be sufficiently confident that the contribution is high enough quality that asking for a review is a good use of scarce maintainer time, and they should be able to answer questions about their work during review.
 
 We expect that new contributors will be less confident in their contributions, and our guidance to them is to start with small contributions that they can fully understand to build confidence. We aspire to be a welcoming community that helps new contributors grow their expertise, but learning involves taking small steps, getting feedback, and iterating. Passing maintainer feedback to an LLM doesn’t help anyone grow, and does not sustain our community.
 

From fb5a9b266f469b97029e82ea359954dea6ea77ab Mon Sep 17 00:00:00 2001
From: Till Schneidereit
Date: Tue, 28 Apr 2026 20:02:08 +0200
Subject: [PATCH 5/5] Add language about potential implicit acceptance of practices not covered by the policy

Based on discussions in today's TSC call.
---
 AI_TOOL_POLICY.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/AI_TOOL_POLICY.md b/AI_TOOL_POLICY.md
index b467aa6..35222c2 100644
--- a/AI_TOOL_POLICY.md
+++ b/AI_TOOL_POLICY.md
@@ -54,7 +54,9 @@ Artificial intelligence systems raise many questions around copyright that have
 
 ## Exceptions
 
-Project maintainers might identify use cases for AI tools that don't fit this policy, such as fully automated reviews run as part of a CI pipeline. In such a case, maintainers should reach out to the TSC, e.g. using a Pull Request updating this section to list their use as an exception. The TSC will consider the use case and either approve it or provide feedback.
+While this policy captures the general rules on using AI tools in the Bytecode Alliance, there are circumstances in which a more nuanced approach is needed. The TSC, under whose purview this policy falls, might grant exceptions explicitly, or implicitly allow practices to continue that don't fully align with the requirements described here.
+
+Project maintainers who identify use cases for AI tools that don't fit this policy, such as fully automated reviews run as part of a CI pipeline, are encouraged to reach out to the TSC to seek clarification. The TSC will provide feedback and might suggest changes to the intended practices. In approving a practice, the TSC might choose to explicitly document an exception granted to a project, but might also communicate to the project that it'll allow the project's intended practices without further documentation.
 
 ## Examples
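
For illustration, here is a sketch of the kind of optional disclosure the policy above describes. Everything in it (the subject, body, names, addresses, and tool name) is invented; only the use of an `Assisted-by:` trailer, and the rule that tools are never listed as co-authors, comes from the policy text itself.

```text
docs: clarify component model terminology in the overview

Reword the overview section so that "component" and "module" are used
consistently. A first draft of this change was generated with an LLM;
I reviewed, edited, and tested the result before asking for review.

Assisted-by: ExampleCodeAssistant
Signed-off-by: Jane Example <jane@example.com>

# (All names above are invented for illustration; git strips '#' lines
#  from commit messages by default.)
```

Note that under the disclosure guidance as revised in patch 3, the trailer is optional, and a tool must never appear in a `Co-authored-by:` trailer, since only human contributors can take on authorship and the accountability that comes with it.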