# ActorManager

A production-grade Parallel Luau actor pool for Roblox.

Offload CPU-intensive work to real OS threads — with a Promise-based API, priority scheduling, and battle-tested lifecycle management.
Roblox's Parallel Luau lets scripts run on real OS threads — but wiring up Actors, handling result routing, managing worker lifecycles, and queuing tasks is error-prone boilerplate that breaks in subtle ways.
ActorManager handles all of that:
- No frame drops — heavy computation runs outside the main game loop
- Promise-based — `andThen`, `catch`, `Batch`, `Broadcast`; fits any existing async code
- Priority queue — `CRITICAL` tasks are never stuck behind `LOW` ones
- SharedTable result bus — results flow back without copying the full payload through `BindableEvent:Fire`
- Two-phase worker init — tasks can't dispatch before the worker is actually ready
- Poisoned-worker detection — a timed-out worker can't corrupt the next task
## Installation

Via Wally:

```toml
[dependencies]
ActorManager = "iamthebestts/actor-manager@^0.2.1"
```

```sh
wally install
```

Manual: copy the `src/` folder into your project and require `init.luau`.

## Documentation
- Intro: docs/intro.md
- Installation: docs/installation.md
- Why Use ActorManager: docs/why-use-actor-manager.md
- API Reference: docs/API.md
## Quick Start

1. Create your worker (`EchoWorker.server.luau`):

```luau
if not script:GetActor() then return end

local defineWorker = require(game.ReplicatedStorage.Packages.ActorManager).defineWorker

defineWorker(script, {
    add = function(payload: { a: number, b: number })
        return payload.a + payload.b
    end,
    echo = function(payload)
        return payload
    end,
})
```

2. Spin up the pool and dispatch:
```luau
local ActorManager = require(game.ReplicatedStorage.Packages.ActorManager)

local manager = ActorManager.new({
    workerModule = script.EchoWorker,
    workerCount = 4,
    taskTimeout = 10,
})

manager:Dispatch("add", { a = 10, b = 32 })
    :andThen(function(result)
        print(result) -- 42
    end)
    :catch(function(err)
        warn(err)
    end)
```

## Configuration

| Field | Type | Default | Description |
|---|---|---|---|
| `workerModule` | `Script` | required | Script cloned into each Actor |
| `workerCount` | `number?` | `8` | Pool size. Roblox caps parallel threads at ~3 on live servers and ~3 on desktop clients — extra Actors above this add memory with no parallelism gain |
| `taskTimeout` | `number?` | — | Seconds before a task is rejected as timed out |
| `workerRecycleTimeout` | `number?` | — | Seconds to wait for a poisoned worker's late result before destroying and respawning it |
| `maxQueueSize` | `number?` | — | Reject new tasks immediately when the queue is full |
| `onWorkerError` | `function?` | — | `(err: string, taskId: string) -> ()`, called on every worker error |
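Putting the optional fields together, a minimal sketch (the worker module name and the limit values are illustrative, not defaults):

```luau
local ActorManager = require(game.ReplicatedStorage.Packages.ActorManager)

local manager = ActorManager.new({
    workerModule = script.EchoWorker, -- your worker Script
    workerCount = 3,                  -- match the platform's parallel thread cap
    taskTimeout = 10,                 -- reject any task still running after 10 s
    maxQueueSize = 256,               -- reject new tasks immediately once 256 are queued
    onWorkerError = function(err, taskId)
        warn(("task %s failed: %s"):format(taskId, err))
    end,
})
```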
## API

### Dispatch

Sends a task to the next free worker. Returns a Promise.

```luau
-- fire and forget
manager:Dispatch("echo", { value = 1 })

-- with priority
manager:Dispatch("echo", { value = 1 }, { priority = "CRITICAL" })

-- await result
manager:Dispatch("add", { a = 5, b = 5 })
    :andThen(function(result) print(result) end) -- 10
```

Priority levels (highest → lowest): `CRITICAL` · `HIGH` · `NORMAL` · `LOW`
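Priority only matters once the pool is saturated. A sketch of the ordering guarantee, reusing the `echo` handler from the quick start:

```luau
-- Saturate the pool with LOW work, then dispatch a CRITICAL task.
-- The CRITICAL task jumps the queue instead of waiting behind the backlog.
for i = 1, 50 do
    manager:Dispatch("echo", { i = i }, { priority = "LOW" })
end

manager:Dispatch("echo", { urgent = true }, { priority = "CRITICAL" })
    :andThen(function()
        print("critical task ran ahead of the queued LOW tasks")
    end)
```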
### Batch

Dispatches multiple tasks and resolves with an ordered array of results. Rejects entirely if any task fails.

```luau
manager:Batch({
    { task = "add", payload = { a = 1, b = 1 } },
    { task = "add", payload = { a = 2, b = 2 } },
    { task = "add", payload = { a = 3, b = 3 } },
}):andThen(function(results)
    -- { 2, 4, 6 }
end)
```

### Broadcast

Sends the same task to every worker at once. Resolves with an array of N results (one per worker). Useful for syncing shared state.
```luau
manager:Broadcast("reload_config", newConfig)
    :andThen(function(results)
        print(#results .. " workers updated")
    end)
```

### GetStats

Returns a snapshot of the pool. Each call returns a fresh table.

```luau
local stats = manager:GetStats()
-- {
--     freeWorkers = 3,
--     busyWorkers = 1,
--     totalWorkers = 4,
--     queued = 0,
--     processed = 142,
--     errors = 1,
-- }
```

### Pause / Resume

`Pause` stops dispatching without dropping the queue. Tasks accumulate and are picked up immediately on `Resume()`.
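A sketch of that flow:

```luau
manager:Pause()

-- dispatches while paused are accepted but only queued
manager:Dispatch("echo", { value = 1 })
manager:Dispatch("echo", { value = 2 })

print(manager:GetStats().queued) -- both tasks are waiting in the queue

manager:Resume() -- queued tasks are dispatched immediately
```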
### Destroy

Rejects all pending and queued tasks, destroys all Actors, and cleans up signals and the container Folder. Safe to call multiple times.
### Signals

```luau
manager.onError:Connect(function(err: string, taskId: string)
    warn("[Worker error]", err, taskId)
end)

manager.onDrained:Connect(function()
    print("queue empty, all workers free")
end)
```

## Writing Workers

Workers are plain Scripts cloned into Actors. Use `defineWorker` from ActorManager:
```luau
if not script:GetActor() then return end

local defineWorker = require(game.ReplicatedStorage.Packages.ActorManager).defineWorker

defineWorker(script, {
    myTask = function(payload)
        -- heavy work here — this runs on a real OS thread
        return result
    end,
})
```

`callerScript` must be `script` — the Script inside the Actor, not a ModuleScript. WorkerBase needs it to find the correct Actor ancestor.
All handlers run inside `BindToMessageParallel`:

| Operation | Allowed? |
|---|---|
| ✅ Reading from the DataModel | allowed |
| ❌ Writing to the DataModel | not allowed — call `task.synchronize()` first |
| ✅ `task.wait()` | allowed |
| ❌ Busy-wait loops | not allowed — blocks the thread, prevents `taskTimeout` from firing |

```luau
-- handler that needs to write to the DataModel
defineWorker(script, {
    movePart = function(payload)
        task.synchronize() -- move back to serial thread
        workspace.Part.Position = payload.position
        return true
    end,
})
```

## Architecture

```
ActorManager.new()
│
├── Folder (ServerScriptService or PlayerScripts)
│   ├── Actor "Worker_1"
│   │   ├── BindableEvent "_ResultBus"
│   │   └── Script           ← clone of workerModule
│   └── Actor "Worker_N"
│
├── PriorityQueue        max-heap; tasks wait here when all workers are busy
├── _pendingTasks        taskId → { resolve, reject, timer }
└── _poisonedWorkers     workers that timed out and haven't sent their late result yet
```

Result flow:
```
Dispatch()
└─► PriorityQueue:Push()
    └─► worker:SendMessage("AM_Task", msg)
        └─► handler runs in parallel
            └─► resultStore[taskId] = data    (SharedTable write, no copy)
                └─► resultBus:Fire(taskId)    (just the ID string)
                    └─► manager reads resultStore[taskId]
                        └─► Promise resolves / rejects
```

Two-phase worker init prevents tasks from arriving before the worker is ready:
```
Worker fires "__WORKER_READY__"
└─► Manager sends SharedTable via AM_Init
    └─► Worker fires "__WORKER_INIT_DONE__"
        └─► Worker added to free pool ✓
```

Poisoned worker handling: on timeout, the worker is not returned to the free pool. It's tracked in `_poisonedWorkers`. If the late result eventually arrives, the worker is recycled normally. If `workerRecycleTimeout` is set and nothing arrives, the worker is destroyed and a fresh one spawns in its place.
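A minimal sketch of opting into that recycling behavior (timeout values are illustrative):

```luau
local manager = ActorManager.new({
    workerModule = script.EchoWorker,
    taskTimeout = 5,           -- a task is rejected after 5 s; its worker is marked poisoned
    workerRecycleTimeout = 30, -- if no late result arrives within 30 s, destroy and respawn the worker
    onWorkerError = function(err, taskId)
        warn("worker error on task", taskId, err)
    end,
})
```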
## Caveats

- Workers must be `Script`s (not ModuleScripts) and live inside an Actor. `defineWorker` relies on `script:GetActor()` to resolve the correct ancestor.
- All payloads must be SharedTable-serializable. Unsupported types will error when sent across the result bus.
- Parallel Luau restrictions apply. Handlers run in parallel; call `task.synchronize()` before writing to the DataModel.
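A quick illustration of the payload rule, reusing the `echo` handler from the quick start (`workspace.SomePart` is a placeholder):

```luau
-- OK: plain data — numbers, strings, booleans, nested tables
manager:Dispatch("echo", { name = "orb", hp = 100, tags = { "a", "b" } })

-- Not OK: Instances can't be stored in a SharedTable, so this errors
manager:Dispatch("echo", { part = workspace.SomePart })
```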
## Testing

Tests are in `tests/` and use TestEZ. Run the place in Roblox Studio — results print to the Output window.

```
tests/
├── fixtures/
│   └── EchoWorker.server.luau              echo · add · fail · slow handlers
├── ActorManager.spec.luau                  unit tests (sync, no real Actors)
├── ActorManager.integration.spec.luau      integration tests (real Actors)
├── PriorityQueue.spec.luau                 heap correctness
└── runner.server.luau                      bootstrap
```

54 tests, 0 skipped.
## Project Structure

```
src/
├── init.luau             package entry point
├── ActorManager.luau     pool orchestrator
├── WorkerBase.luau       worker-side defineWorker helper
├── PriorityQueue.luau    max-heap priority queue
└── Types.luau            shared type definitions
```

All promises are internally caught to avoid unhandled rejections.
## License

MIT — see LICENSE.

Made by iamthebestts