
perf: buffer UploadZstd archive in memory instead of temp file#161

Merged
rgarcia merged 1 commit into main from rgarcia/uploadzstd-memory-buffer
Feb 24, 2026

Conversation


@rgarcia rgarcia commented Feb 24, 2026

Summary

  • Replaces the temp file in UploadZstd with a bytes.Buffer, eliminating double disk I/O (write to temp file then reopen and read back) during archive upload.
  • The archive is buffered in memory and passed directly to UntarZstd via bytes.NewReader. No changes to the API contract or multipart field ordering.
  • Net -27/+3 lines.

Test plan

  • go build passes
  • Existing unit tests pass (go test ./cmd/api/api/)
  • e2e zstd upload/download tests pass against a running instance

Made with Cursor


Note

Medium Risk
Changes archive upload/extraction plumbing and can increase memory usage significantly for large uploads, potentially impacting server stability under load.

Overview
UploadZstd now buffers the uploaded tar.zst multipart payload in memory (bytes.Buffer) and streams it directly into zstdutil.UntarZstd, removing the temp-file write/close/reopen/read path.

Behavior and API remain the same (same required fields and error mapping for path traversal), but uploads will trade disk I/O for increased memory usage proportional to archive size.

Written by Cursor Bugbot for commit c76d31c. This will update automatically on new commits.

Replace the temp file with a bytes.Buffer to avoid double disk I/O
(write to temp file, then reopen and read) during archive upload.

Co-authored-by: Cursor <cursoragent@cursor.com>

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is ON. A Cloud Agent has been kicked off to fix the reported issue.


cursor bot commented Feb 24, 2026

Bugbot Autofix prepared fixes for 1 of the 1 bugs found in the latest run.

  • ✅ Fixed: Unbounded in-memory buffering risks OOM on large archives
    • Added io.LimitReader with a 512 MB cap around the archive part reader, rejecting uploads that exceed the limit with a 400 response before unbounded memory allocation can occur.


Or push these changes by commenting:

@cursor push 70d1608f61
Preview (70d1608f61)
diff --git a/server/cmd/api/api/fs.go b/server/cmd/api/api/fs.go
--- a/server/cmd/api/api/fs.go
+++ b/server/cmd/api/api/fs.go
@@ -907,6 +907,7 @@
 		return oapi.UploadZstd400JSONResponse{BadRequestErrorJSONResponse: oapi.BadRequestErrorJSONResponse{Message: "request body required"}}, nil
 	}
 
+	const maxArchiveSize = 512 << 20 // 512 MB
 	var archiveBuf bytes.Buffer
 	var destPath string
 	var stripComponents int
@@ -925,10 +926,14 @@
 		switch part.FormName() {
 		case "archive":
 			archiveReceived = true
-			if _, err := io.Copy(&archiveBuf, part); err != nil {
+			limited := io.LimitReader(part, maxArchiveSize+1)
+			if _, err := io.Copy(&archiveBuf, limited); err != nil {
 				log.Error("failed to read archive data", "err", err)
 				return oapi.UploadZstd400JSONResponse{BadRequestErrorJSONResponse: oapi.BadRequestErrorJSONResponse{Message: "failed to read archive"}}, nil
 			}
+			if archiveBuf.Len() > maxArchiveSize {
+				return oapi.UploadZstd400JSONResponse{BadRequestErrorJSONResponse: oapi.BadRequestErrorJSONResponse{Message: "archive exceeds maximum size of 512 MB"}}, nil
+			}
 		case "dest_path":
 			data, err := io.ReadAll(part)
 			if err != nil {

@rgarcia rgarcia requested a review from hiroTamada February 24, 2026 18:22
@rgarcia rgarcia merged commit 1afc87f into main Feb 24, 2026
6 checks passed
@rgarcia rgarcia deleted the rgarcia/uploadzstd-memory-buffer branch February 24, 2026 18:24
