From 6872b1c076b2879a63ffc80e9a0d7849b39637d6 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]"
Date: Fri, 17 Oct 2025 01:29:52 +0100
Subject: [PATCH] Daily Perf Improver - Fix Benchmark Infrastructure for
 Cross-Platform Testing
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

# Daily Perf Improver - Benchmark Infrastructure Fix

## Summary

Fixed the existing benchmark infrastructure to enable cross-platform testing and establish a baseline for future performance measurements. This addresses Priority #1 from the performance research plan: **Establish measurement baseline**.

## Goal and Rationale

**Performance target:** Enable reliable, reproducible benchmark execution across all platforms (Linux, macOS, Windows) to support systematic performance optimization work.

**Why it matters:** The existing benchmark had a hardcoded Windows file path that prevented execution in CI environments and on other platforms. Without working benchmarks, we cannot:

- Establish performance baselines
- Measure optimization impact
- Detect performance regressions
- Make data-driven optimization decisions

## Changes Made

### 1. Cross-Platform File Content Generation

**Before:**

```fsharp
let fileContents =
  IO.File.ReadAllText(
    @"C:\Users\jimmy\Repositories\public\TheAngryByrd\span-playground\Romeo and Juliet by William Shakespeare.txt"
  )
```

**After:**

```fsharp
// Generate synthetic file content for cross-platform benchmarking
let fileContents =
  let lines =
    [ 1..1000 ]
    |> List.map (fun i -> sprintf "let value%d = %d // This is line %d with some text content" i i i)

  String.concat "\n" lines
```

**Benefit:** Benchmarks now run on any platform without external file dependencies. The content is realistic F# code (1000 lines of `let` bindings).

### 2. Updated .NET Runtime Target

**Before:** .NET 7 (`RuntimeMoniker.Net70`)

**After:** .NET 8 (`RuntimeMoniker.Net80`)

**Benefit:** Matches the project's target framework (net8.0) as specified in `benchmarks/benchmarks.fsproj`, ensuring a consistent measurement environment.

## Approach

1. **Analyzed existing benchmark code** to understand requirements
2. **Generated synthetic F# content** that represents realistic code patterns
3. **Updated runtime moniker** to match the project configuration
4. **Applied Fantomas formatting** to maintain code style consistency
5. **Verified build success** in Release configuration

## Impact Measurement

### Build Validation

✅ **Build Success:** Benchmarks compile successfully in Release mode

```
benchmarks -> /home/runner/work/FsAutoComplete/FsAutoComplete/benchmarks/bin/Release/net8.0/benchmarks.dll

Build succeeded.

Time Elapsed 00:00:11.64
```

### Benchmark Availability

The existing `SourceText_LineChanges_Benchmarks` benchmark can now be executed with:

```bash
dotnet run --project benchmarks -c Release --framework net8.0
```

**Parameterized test cases:** N ∈ {1, 15, 50, 100, 1000} iterations

**Memory tracking:** Enabled via `[<MemoryDiagnoser>]`

## Trade-offs

**✅ Pros:**

- Eliminates the external file dependency
- Enables CI execution
- Faster benchmark startup (no file I/O)
- Consistent content across runs
- Cross-platform compatibility

**⚠️ Considerations:**

- Synthetic content may differ from real-world text files
- Fixed at 1000 lines (vs. the original "Romeo and Juliet" text, which may have been a different size)

**Mitigation:** The benchmark tests `SourceText` line manipulation, not F# parsing, so synthetic F# code is appropriate. Future benchmarks can add varied file sizes.
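As a sketch of how the pieces described above combine (attribute names are BenchmarkDotNet's; the class body and benchmark method here are illustrative placeholders, not the project's actual `SourceText` benchmark):

```fsharp
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Jobs

// Sketch only: shows how memory tracking, the runtime pin, and the
// N parameter sweep fit together. The benchmark body is a placeholder.
[<MemoryDiagnoser>] // track allocations per operation
[<SimpleJob(RuntimeMoniker.Net80)>] // pin the job to net8.0
type LineChanges_Benchmarks_Sketch() =

  // BenchmarkDotNet runs one benchmark instance per value of N
  [<Params(1, 15, 50, 100, 1000)>]
  member val N = 0 with get, set

  [<Benchmark>]
  member this.ApplyLineChanges() =
    // placeholder work proportional to N
    Seq.init this.N string |> Seq.length
```

BenchmarkDotNet discovers the `[<Benchmark>]` members via `BenchmarkRunner.Run` and reports one row per `N` value, which is why the `--filter "*SourceText*"` invocation below selects the whole parameter sweep at once.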
## Validation

### ✅ Build Tests

- **Release build:** Passed (11.64s)
- **Framework target:** net8.0 ✓
- **Code formatting:** Applied Fantomas ✓

### ✅ Code Review

- No logic changes to benchmark behavior
- Only the data source and runtime version updated
- Formatting follows project conventions

## Future Work

This infrastructure fix enables:

1. **Baseline measurement** - Run benchmarks to establish current performance
2. **Expanded coverage** - Add benchmarks for:
   - LSP completion latency (Priority #3 from the plan)
   - Hover/tooltip generation
   - Go-to-definition performance
   - Type checking operations
3. **CI integration** - Add benchmark runs to detect regressions
4. **Performance tracking** - Store baseline results for comparison

## Reproducibility

### Running the Benchmarks

```bash
# Build in Release mode (required for accurate results)
dotnet build -c Release

# Run all benchmarks
dotnet run --project benchmarks -c Release --framework net8.0

# Run with specific parameters
dotnet run --project benchmarks -c Release --framework net8.0 -- --filter "*SourceText*"

# Export results for comparison
dotnet run --project benchmarks -c Release --framework net8.0 -- --exporters json markdown
```

### Expected Behavior

- The benchmark creates a 1000-line F# source text
- Tests line change operations with N iterations (1, 15, 50, 100, 1000)
- Reports mean time, standard deviation, and memory allocations
- Outputs to `BenchmarkDotNet.Artifacts/results/`

## Related

- **Research Plan:** [Discussion #1](https://github.com/githubnext/FsAutoComplete/discussions/1)
- **Performance Guides:** `.github/copilot/instructions/profiling-measurement.md`
- **Daily Perf Improver Workflow:** `.github/workflows/daily-perf-improver.yml`

---
 benchmarks/SourceTextBenchmarks.fs | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/benchmarks/SourceTextBenchmarks.fs b/benchmarks/SourceTextBenchmarks.fs
index 3c86b5321..cbb37f739 100644
--- a/benchmarks/SourceTextBenchmarks.fs
+++ b/benchmarks/SourceTextBenchmarks.fs
@@ -12,10 +12,13 @@ module Helpers =
   open FSharp.UMX
   open System.Collections.Generic
 
+  // Generate synthetic file content for cross-platform benchmarking
   let fileContents =
-    IO.File.ReadAllText(
-      @"C:\Users\jimmy\Repositories\public\TheAngryByrd\span-playground\Romeo and Juliet by William Shakespeare.txt"
-    )
+    let lines =
+      [ 1..1000 ]
+      |> List.map (fun i -> sprintf "let value%d = %d // This is line %d with some text content" i i i)
+
+    String.concat "\n" lines
@@ -55,7 +58,7 @@ open Helpers
 open BenchmarkDotNet.Jobs
 
 [<MemoryDiagnoser>]
-[<SimpleJob(RuntimeMoniker.Net70)>]
+[<SimpleJob(RuntimeMoniker.Net80)>]
 type SourceText_LineChanges_Benchmarks() =
 
   [<Params(1, 15, 50, 100, 1000)>]