[3.12] gh-144125: email: verify headers are sound in BytesGenerator #144188
base: 3.12
Conversation
(cherry picked from commit 052e55e)
Co-authored-by: Seth Michael Larson <seth@python.org>
Co-authored-by: Denis Ledoux <dle@odoo.com>
Co-authored-by: Denis Ledoux <5822488+beledouxdenis@users.noreply.github.com>
Co-authored-by: Petr Viktorin <302922+encukou@users.noreply.github.com>
Co-authored-by: Bas Bloemsaat <1586868+basbloemsaat@users.noreply.github.com>
pablogsal left a comment:
LGTM
@webknjaz Hmm. Is this failure familiar?
But the downloadable logs contain:
@encukou not sure, but it sounds like a problem with their runners. I've restarted them in debug mode to see if that exposes anything else. The cancellation behavior is rather weird: something in the jobs failed, then GHA went ahead and cancelled them, citing those same jobs as the reason for cancellation. Usually, jobs in matrices are marked as failed when they fail, and if there's
Oh... I didn't notice this is an old branch. It's probably a good idea to diff the CI infra with
So merging #144036 30 minutes ago failed. It's the last commit on the branch. Yesterday, everything was fine, though (when Pablo merged the target branch into that PR). So this is new.
Looking at https://github.com/actions/runner-images?tab=readme-ov-file#available-images, the last
Trying to restart jobs one at a time with debugging on.
Looks like
Weird... I've seen each of those jobs crash flakily, but mostly when they ran in parallel. Now that I've restarted each individually, they seem to be running fine. Definitely a platform problem...
@encukou it's probably a good idea to complain to GH support unless it self-heals. I've downloaded the logs from https://github.com/python/cpython/actions/runs/21365488062/job/61510137519 (DEBUG RUN!), and they've got something similar:

$ cat 1_Windows\ _\ build\ and\ test\ \(Win32\).txt
2026-01-26T18:39:28.6558234Z ##[debug]Starting: Windows / build and test (Win32)
2026-01-26T18:39:28.6630888Z ##[error]Could not find a part of the path 'D:\a'.
2026-01-26T18:39:28.6637136Z ##[debug]System.IO.DirectoryNotFoundException: Could not find a part of the path 'D:\a'.
2026-01-26T18:39:28.6638073Z ##[debug] at System.IO.FileSystem.CreateDirectory(String fullPath, Byte[] securityDescriptor)
2026-01-26T18:39:28.6638844Z ##[debug] at System.IO.Directory.CreateDirectory(String path)
2026-01-26T18:39:28.6639798Z ##[debug] at GitHub.Runner.Worker.JobRunner.RunAsync(AgentJobRequestMessage message, CancellationToken jobRequestCancellationToken)
2026-01-26T18:39:28.6655833Z ##[debug]Finishing: Windows / build and test (Win32)
$ cat Windows\ _\ build\ and\ test\ \(Win32\)/system.txt
2026-01-26T18:39:25.2470000Z Requested labels: windows-2022
2026-01-26T18:39:25.2470000Z Job defined at: python/cpython/.github/workflows/reusable-windows.yml@refs/pull/144188/merge
2026-01-26T18:39:25.2470000Z Reusable workflow chain:
2026-01-26T18:39:25.2470000Z python/cpython/.github/workflows/build.yml@refs/pull/144188/merge (615364956b0ea0a8353c2c5623362d88b20b1f48)
2026-01-26T18:39:25.2470000Z -> python/cpython/.github/workflows/reusable-windows.yml@refs/pull/144188/merge (615364956b0ea0a8353c2c5623362d88b20b1f48)
2026-01-26T18:39:25.2470000Z Waiting for a runner to pick up this job...
2026-01-26T18:39:25.2450000Z Evaluating build-windows.if
2026-01-26T18:39:25.2450000Z Evaluating: (success() && (fromJSON(needs.build-context.outputs.run-windows-tests)))
2026-01-26T18:39:25.2450000Z Expanded: (true && true)
2026-01-26T18:39:25.2450000Z Result: true
2026-01-26T18:39:25.2450000Z Evaluating build-windows.Win32_false.build.if
2026-01-26T18:39:25.2450000Z Evaluating: success()
2026-01-26T18:39:25.2450000Z Result: true
2026-01-26T18:39:25.7450000Z Job is waiting for a hosted runner to come online.
2026-01-26T18:39:25.7450000Z Job is about to start running on the hosted runner: GitHub Actions 1000510807

This one has a trace in debug ^. The runner diagnostic logs don't have anything interesting and also seem to only have the

Attaching the logs for history: logs_55453059899.zip.
Thanks for the investigation!
I'll keep an eye out for this happening again.
I've just restarted the failing jobs on that commit and they didn't crash immediately: https://github.com/python/cpython/actions/runs/21366922074. So I think it's safe to say the flaky behavior is gone for now.
