TL;DM (too long; didn't monitor) is a fast, extensible progress bar for Python, forked from tqdm. This fork was created to provide continued maintenance and development as the original tqdm project has become unmaintained.
Instantly make your loops show a smart progress meter - just wrap any iterable with tldm(iterable), and you're done!
from tldm import tldm
for i in tldm(range(10000)):
...76%|████████████████████████ | 7568/10000 [00:33<00:10, 229.00it/s]
trange(N) can also be used as a convenient shortcut for tldm(range(N)).
A progress bar in Python with a focus on simplicity and ease of use. This library makes your loops display with a smart progress meter, offering predictive statistics and minimal overhead.
Works across all major platforms (Linux, Windows, macOS) and in all major environments (terminal, Jupyter notebooks, IPython, etc.).
Beyond drop-in loop wrapping, TL;DM also exposes rolling metrics, named throughput, phase timing, final summaries, training-oriented nested bars, asyncio/concurrent helpers, pandas and rich integrations, logging redirection, and notebook-aware convenience aliases.
- Installation
- Usage
- Feature Highlights
- Examples
- Parameters
- Methods
- Convenience Functions
- Extensions
- Advanced Usage
- FAQ and Known Issues
- Contributing
- License
TL;DM currently supports Python 3.11 and newer.
You can install tldm via pip:
pip install tldm
To enable widget-based notebook rendering in Jupyter or IPython, install the optional notebook dependency group:
pip install "tldm[notebook]"
Some integrations rely on third-party packages that are installed separately:
pip install pandas rich
Latest development release:
pip install "git+https://github.com/eliotwrobson/tldm.git@main#egg=tldm"
tldm is very versatile and can be used in a number of ways. The two main ones are given below.
Wrap tldm() around any iterable:
from tldm import tldm
from time import sleep
text = ""
for char in tldm(["a", "b", "c", "d"]):
sleep(0.25)
text = text + char
trange(i) is a special optimised instance of tldm(range(i)):
from tldm import trange
for i in trange(100):
sleep(0.01)
Instantiation outside of the loop allows for manual control over tldm():
pbar = tldm(["a", "b", "c", "d"])
for char in pbar:
sleep(0.25)
pbar.set_description("Processing %s" % char)
Manual control of tldm() updates using a with statement:
from tldm import tldm
from time import sleep
with tldm(total=100) as pbar:
for i in range(10):
sleep(0.1)
pbar.update(10)
If the optional variable total (or an iterable with len()) is provided, predictive stats are displayed.
with is also optional (you can just assign tldm() to a variable, but in this case don't forget to del or close() at the end):
pbar = tldm(total=100)
for i in range(10):
sleep(0.1)
pbar.update(10)
pbar.close()set_metrics(...),set_throughput(...),mark(...), andsection(...)turn a progress bar into a lightweight training or pipeline dashboard.summary=Trueandsummary_dict()keep the live bar concise while still giving you structured end-of-run metrics and timing data.training_tldm(...)manages nested epoch and step bars with shared metrics, throughput, and phase timing helpers.tldm_asyncio.gather(...),tldm_asyncio.as_completed(...),thread_map(...), andprocess_map(...)cover common async and concurrent execution patterns.auto_tldm,trange, the iterator helpers, andtldm.pandas()automatically select notebook widgets whenipywidgetsis available.
from tldm import trange
from time import sleep
for i in trange(16, leave=True):
sleep(0.1)
from tldm import trange
from time import sleep
for i in trange(10, desc='1st loop'):
for j in trange(5, desc='2nd loop'):
for k in trange(50, desc='3rd loop', leave=False):
sleep(0.01)
from tldm.extensions.concurrent import process_map, thread_map
thread_results = thread_map(fetch_row, rows, desc="threaded", max_workers=8)
process_results = process_map(score_row, rows, desc="scored", max_workers=4, chunksize=32)
Use thread_map(...) for I/O-heavy work and process_map(...) for CPU-bound tasks. For large iterables, explicitly setting chunksize on process_map(...) avoids excessive dispatch overhead.
from tldm import tldm
from time import sleep
# Custom unit and description
for i in tldm(range(100), desc="Processing", unit="files"):
sleep(0.01)
# Custom unit with scaling
for i in tldm(range(1000000), unit="B", unit_scale=True, unit_divisor=1024):
pass # This will show KB, MB, etc.
import asyncio
from tldm.extensions.asyncio import tldm_asyncio
async def fetch(i):
await asyncio.sleep(0.05)
return i * 2
async def main():
results = await tldm_asyncio.gather(*(fetch(i) for i in range(8)), desc="fetch")
print(results)
asyncio.run(main())
gather(...) preserves the original ordering of the awaitables, while tldm_asyncio.as_completed(...) is available when you want to consume results as soon as they finish.
from tldm import tbatched
for batch in tbatched(records, 32, desc="embed batches"):
vectors = embed_batch(batch)
store_vectors(vectors)
This is useful for workloads that already run in batches, such as inference, ETL, or rate-limited API calls. tbatched(...) requires Python 3.12+ because it builds on itertools.batched.
from tldm import training_tldm
with training_tldm(epochs=3, steps_per_epoch=len(loader), desc="train") as trainer:
for epoch in trainer.epochs():
for batch in trainer.steps(loader):
with trainer.section("forward"):
loss = run_forward(batch)
with trainer.section("backward"):
run_backward(loss)
trainer.set_metrics(loss=loss.item(), lr=optimizer.param_groups[0]["lr"])
This produces an outer epoch bar, an inner step bar, a live phase=... indicator while sections are running, and rolling metric/timing summaries on the step bar.
from tldm import tldm
with tldm(files, desc="pipeline") as pbar:
for path in pbar:
pbar.mark("load")
with pbar.section("parse"):
doc = parse_file(path)
with pbar.section("transform"):
result = transform(doc)
with pbar.section("write"):
save_result(result)
This is useful when you want the bar itself to show where time is going without opening a profiler first.
from tldm import tldm
with tldm(
range(100),
metric_window=10,
cpu_time=True,
bar_format="{l_bar}{bar}{r_bar} | phase {active_phase} | loss {metrics[loss]:.4f} | cpu {cpu_elapsed}",
) as pbar:
for step in pbar:
with pbar.section("train_step"):
loss = train_step(step)
pbar.set_metrics(loss=loss)
This pattern works well with AI-generated scripts because the interesting state is exposed directly through bar_format.
- iterable : iterable, optional. Iterable to decorate with a progressbar. Leave blank to manually manage the updates.
- desc : str, optional. Prefix for the progressbar.
- total : int or float, optional. The number of expected iterations. If unspecified, len(iterable) is used if possible. If float("inf") or as a last resort, only basic progress statistics are displayed (no ETA, no progressbar).
- leave : bool, optional. If True [default], keeps all traces of the progressbar upon termination of iteration. If None, will leave only if position is 0.
- file : io.TextIOWrapper or io.StringIO, optional. Specifies where to output the progress messages (default: sys.stderr).
- ncols : int, optional. The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound.
- mininterval : float, optional. Minimum progress display update interval [default: 0.1] seconds.
- maxinterval : float, optional. Maximum progress display update interval [default: 10] seconds.
- miniters : int or float, optional. Minimum progress display update interval, in iterations. If 0 and dynamic_miniters, will automatically adjust to equal mininterval.
- ascii : bool or str, optional. If unspecified or False, use unicode (smooth blocks) to fill the meter; the fallback is ASCII characters.
- disable : bool, optional. Whether to disable the entire progressbar wrapper [default: False]. If set to None, disable on non-TTY.
- unit : str, optional. String that will be used to define the unit of each iteration [default: it].
- unit_scale : bool or int or float, optional. If 1 or True, the number of iterations will be reduced/scaled automatically and a metric prefix following the International System of Units standard will be added (kilo, mega, etc.) [default: False].
- dynamic_ncols : bool, optional. If set, constantly alters ncols to the environment (allowing for window resizes) [default: False].
- smoothing : float, optional. Exponential moving average smoothing factor for speed estimates (ignored in GUI mode). Ranges from 0 (average speed) to 1 (current/instantaneous speed) [default: 0.3].
- bar_format : str, optional. Specify a custom bar string format. May impact performance.
- initial : int or float, optional. The initial counter value. Useful when restarting a progress bar [default: 0].
- complete_bar_on_early_finish : bool, optional. If True, complete the bar when closing early without errors and a total is known [default: False].
- position : int, optional. Specify the line offset to print this bar (starting from 0). Useful to manage multiple bars at once (e.g., from threads).
- postfix : dict or *, optional. Specify additional stats to display at the end of the bar.
- unit_divisor : float, optional. [default: 1000]; ignored unless unit_scale is True.
- write_bytes : bool, optional. Whether to write bytes rather than str to file [default: False].
- lock_args : tuple, optional. Passed to refresh for intermediate output (initialisation, iterating, and updating).
- nrows : int, optional. The screen height. If specified, hides nested bars outside this bound.
- colour : str, optional. Bar colour (e.g. 'green', '#00ff00').
- delay : float, optional. Don't display until [default: 0] seconds have elapsed.
- cpu_time : bool, optional. If True, track process CPU time and expose cpu_elapsed and cpu_elapsed_s for custom bar_format strings. Wall-clock timing still drives the default elapsed, ETA, and rate display [default: False].
- metric_window : int, optional. If set, numeric values passed to set_metrics(...) are smoothed over the most recent metric_window updates before being displayed.
- summary : bool, optional. If True, print a one-line final summary after the bar finishes using any active metric and timing summaries.
Manually update the progress bar, useful for streams such as reading files.
with tldm(total=100) as pbar:
for i in range(10):
# do something
pbar.update(10)
Cleanup and (if leave=False) remove the progressbar.
Clear current bar display.
Force refresh the display of this bar.
Set/modify description of the progress bar.
pbar = tldm(range(10))
for i in pbar:
pbar.set_description(f"Processing {i}")
Set/modify postfix (additional stats) with automatic formatting based on datatype.
from tldm import trange
from random import random
from time import sleep
with trange(10) as t:
for i in t:
t.set_postfix(loss=random(), accuracy=random())
sleep(0.1)
set_postfix(...) is still useful for arbitrary display-only values, but named per-second counters now have a dedicated helper.
Set named throughput values measured in units per second. If elapsed_s is omitted, tldm uses the wall-clock time since the previous throughput update. If metric_window is set on the bar, throughput values are displayed as a rolling average over the most recent updates while throughput_raw keeps the latest instantaneous rate.
from tldm import tldm
with tldm(range(100)) as pbar:
for item in pbar:
read_bytes, write_bytes = process_item(item)
pbar.set_throughput(read_mb=read_bytes / 1e6, write_mb=write_bytes / 1e6)
Throughput data is also exposed through bar_format via throughput, throughput_raw, and throughput_fmt.
from tldm import tldm
with tldm(
range(10),
metric_window=5,
bar_format="{l_bar}{bar}{r_bar} | samples {throughput[samples]:.1f}/s raw {throughput_raw[samples]:.1f}/s",
) as pbar:
for batch in pbar:
pbar.set_throughput(samples=len(batch))
Set training or debugging metrics with stable formatting. If metric_window is set on the bar, numeric metrics are displayed as a rolling average over the most recent updates.
from tldm import tldm
with tldm(range(100), metric_window=20, desc="train") as pbar:
for batch in pbar:
loss, acc, lr = train_step(batch)
pbar.set_metrics(loss=loss, acc=acc, lr=lr)
Metrics are also exposed through bar_format via metrics, metrics_raw, and metrics_fmt.
from tldm import tldm
with tldm(
range(10),
metric_window=5,
bar_format="{l_bar}{bar}{r_bar} | loss {metrics[loss]:.4f} acc {metrics[acc]:.3f}",
) as pbar:
for batch in pbar:
loss, acc = train_step(batch)
pbar.set_metrics(loss=loss, acc=acc)
Record timing checkpoints or measure named code sections without leaving the progress-bar workflow.
from tldm import tldm
with tldm(range(100), desc="debug") as pbar:
for item in pbar:
pbar.mark("fetch")
with pbar.section("transform"):
transform(item)
with pbar.section("write"):
write(item)
By default, named timings are merged into the bar's postfix using average wall-clock time per name.
While a section(...) block is active, the default bar also shows phase=<name>, and the live section name is exposed through active_phase for custom bar_format strings.
Timing data is also exposed through bar_format via timings and timings_fmt.
from tldm import tldm
with tldm(
range(10),
bar_format="{l_bar}{bar}{r_bar} | phase {active_phase} | load {timings[load][avg]:.3f}s forward {timings[forward][avg]:.3f}s",
) as pbar:
for item in pbar:
pbar.mark("load")
with pbar.section("forward"):
run_forward(item)
Create a lightweight training-oriented wrapper around nested epoch and step bars.
from tldm import training_tldm
with training_tldm(epochs=3, steps_per_epoch=len(loader), desc="train") as trainer:
for epoch in trainer.epochs():
for batch in trainer.steps(loader):
trainer.set_metrics(loss=compute_loss(batch), lr=1e-4)
with trainer.section("forward"):
run_forward(batch)
This keeps the epoch bar on the outer line, creates a step bar on the next line, and forwards set_metrics(...), set_throughput(...), summary_dict(), mark(...), and section(...) to the active step bar.
If you want a compact summary after the bar completes, enable summary=True.
from tldm import tldm
with tldm(range(100), metric_window=10, summary=True, desc="train") as pbar:
for batch in pbar:
with pbar.section("forward"):
loss = train_step(batch)
pbar.set_metrics(loss=loss)
This leaves the normal final bar in place and then prints a summary line with elapsed wall time, displayed throughput and metrics, raw smoothed values when they differ, and per-phase average/total/count values. For example: train summary: elapsed=12.4s, samples/s=812, samples/s_raw=940, loss=0.4213, loss_raw=0.4389, forward_avg=18.2ms, forward_total=9.1s, forward_count=500.
If you want the same summary state as structured data instead of a printed line, call summary_dict().
from tldm import tldm
with tldm(range(100), metric_window=10, desc="train") as pbar:
for batch in pbar:
with pbar.section("forward"):
loss = train_step(batch)
pbar.set_metrics(loss=loss)
pbar.set_throughput(samples=len(batch))
summary = pbar.summary_dict()
print(summary["metrics"]["loss"], summary["timings"]["forward"]["avg_s"])
The returned dictionary includes elapsed wall time, optional CPU time, displayed and raw metrics, displayed and raw throughput, active phase, and per-section timing stats with both numeric seconds and display-ready strings.
If you want to show process CPU time alongside the usual wall-clock stats, enable cpu_time=True and reference the injected fields from bar_format.
from tldm import tldm
for _ in tldm(
range(100),
cpu_time=True,
bar_format="{l_bar}{bar}{r_bar} [cpu {cpu_elapsed}]",
):
pass
This is most useful for CPU-bound work. ETA and rate still use wall-clock time.
Print messages via tldm without overlapping with the progress bar. This works like the builtin print() function and is the recommended way to print messages. Supports all standard print arguments including flush.
from tldm import tldm
from time import sleep
for i in tldm(range(10)):
if i == 5:
tldm.print("Half way there!")
tldm.print("Progress:", i, "out of", 10)
sleep(0.1)
Print a single string via tldm without overlapping with the progress bar. For most use cases, tldm.print() is more convenient.
from tldm import tldm
from time import sleep
for i in tldm(range(10)):
if i == 5:
tldm.write("Half way there!")
sleep(0.1)
Reset the progress bar to 0 iterations for repeated use.
All convenience functions use automatic environment detection by default, displaying notebook widgets in Jupyter/IPython environments and standard terminal output otherwise.
An alias that automatically selects between tldm.notebook.tldm (for Jupyter/IPython) and the standard tldm.std.tldm (for terminals). This is used internally by all convenience functions.
from tldm import auto_tldm
# Works seamlessly in both notebooks and terminals
for i in auto_tldm(range(100)):
pass
Shortcut for auto_tldm(range(*args), **kwargs).
from tldm import trange
for i in trange(100):
pass
Equivalent of builtin enumerate with a progress bar.
Note: By default, tldm_class is automatically detected (auto_tldm) and will use notebook widgets in Jupyter/IPython or standard terminal output otherwise.
from tldm import tenumerate
for i, item in tenumerate(['a', 'b', 'c']):
print(f"{i}: {item}")
Equivalent of builtin zip with a progress bar. Accepts optional tldm_class in kwargs (defaults to auto_tldm).
from tldm import tzip
for a, b in tzip(range(100), range(100, 200)):
pass
Equivalent of builtin map with a progress bar. Accepts optional tldm_class in kwargs (defaults to auto_tldm).
from tldm import tmap
results = list(tmap(lambda x: x**2, range(100)))
Equivalent of itertools.product with a progress bar. Accepts optional tldm_class in kwargs (defaults to auto_tldm).
from tldm import tproduct
for combo in tproduct(range(10), range(10)):
pass
Equivalent of itertools.batched (Python 3.12+) with a progress bar. Yields successive batches of size n from the iterable.
Note: Requires Python 3.12 or later (when itertools.batched was added).
from tldm import tbatched
# Process items in batches of 3 with progress bar
for batch in tbatched(range(10), 3):
print(batch) # [0, 1, 2], [3, 4, 5], [6, 7, 8], [9]
TL;DM ships with several environment-aware helpers and optional integrations. Some live in tldm.extensions, while others are exposed directly from the top-level package for convenience.
Install tldm[notebook] to enable widget-based rendering in Jupyter or IPython.
auto_tldm, trange, tenumerate, tzip, tmap, tproduct, tbatched, and tldm.pandas() automatically select notebook widgets when ipywidgets is available. To force the widget backend explicitly, import the notebook class directly:
from tldm.notebook import tldm as notebook_tldm
for row in notebook_tldm(records, desc="EDA"):
inspect(row)
Asynchronous-friendly version of tldm for use with async/await:
from tldm.extensions.asyncio import tldm_asyncio
import asyncio
async def main():
async for i in tldm_asyncio(range(100)):
await asyncio.sleep(0.01)
asyncio.run(main())
Note: When using break with async iterators, either call pbar.close() manually or use the context manager syntax to ensure proper cleanup:
from tldm.extensions.asyncio import tldm_asyncio
with tldm_asyncio(range(100)) as pbar:
async for i in pbar:
if i == 50:
break
There are also wrappers for the two most common task-orchestration patterns in asyncio:
import asyncio
from tldm.extensions.asyncio import tldm_asyncio
async def fetch(i):
await asyncio.sleep(0.05)
return i
async def main():
ordered = await tldm_asyncio.gather(*(fetch(i) for i in range(8)), desc="gather")
print(ordered)
for task in tldm_asyncio.as_completed([fetch(i) for i in range(8)], desc="ready"):
print(await task)
asyncio.run(main())
Apply tldm to pandas operations. There are multiple ways to register the pandas integration:
Using the syntactic sugar (recommended):
import numpy as np
import pandas as pd
import tldm
# Register pandas integration - simple and clean!
tldm.pandas(desc="Processing")
df = pd.DataFrame(np.random.randint(0, 100, (1000, 6)))
# Now you can use progress_apply instead of apply
df.progress_apply(lambda x: x**2)
# Also works with groupby
df.groupby(0).progress_apply(lambda x: x**2)
Alternative import style:
from tldm import pandas
# Register with default settings
pandas()
# Or with custom parameters
pandas(desc="Processing", ncols=80)
Traditional import (also supported):
from tldm.extensions.pandas import tldm_pandas
# Register pandas integration
tldm_pandas(desc="Processing")
The pandas integration automatically uses the appropriate progress bar for your environment (terminal or Jupyter notebook).
Registration adds progress_apply, progress_map, progress_applymap, progress_aggregate, and progress_transform, along with rolling/expanding progress_apply helpers.
from tldm import pandas
pandas(desc="features")
df["score_pct"] = df.groupby("team")["score"].progress_transform(
lambda group: group.rank(pct=True)
)
df["rolling_mean"] = df["score"].rolling(128).progress_apply(lambda window: window.mean())
Integration with the rich library for enhanced terminal output:
from tldm.extensions.rich import tldm
for i in tldm(range(100)):
pass
Install rich separately if you want this backend. The rich renderer is a good fit when you want Rich-native terminal output, but it intentionally does not support position or custom bar_format, so keep the standard backend for those cases.
Convenient wrappers for concurrent futures:
from tldm.extensions.concurrent import thread_map, process_map
# Thread-based parallel processing with progress bar
results = thread_map(lambda x: x**2, range(100), max_workers=4)
# Process-based parallel processing with progress bar
results = process_map(lambda x: x**2, range(100), max_workers=4)
Both helpers accept the usual tldm keyword arguments such as desc, total, and disable, plus executor-specific arguments like max_workers, timeout, and chunksize.
Redirect console logging output to work seamlessly with tldm progress bars. This prevents log messages from interfering with progress bar display:
import logging
from tldm import trange
from tldm.logging import logging_redirect_tldm
LOG = logging.getLogger(__name__)
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
with logging_redirect_tldm():
for i in trange(9):
if i == 4:
LOG.info("console logging redirected to `tldm.write()`")
# logging restored
The logging_redirect_tldm() context manager redirects console logging to tldm.write(), leaving other logging handlers (e.g., log files) unaffected. It automatically:
- Removes console handlers (stdout/stderr) from loggers
- Adds a TldmLoggingHandler that writes via tldm.write()
- Preserves formatters and log levels from the original console handlers
- Preserves filters on the redirected console handler
- Restores original handlers when exiting the context
You can also combine progress bars with logging redirection using tldm_logging_redirect():
import logging
from tldm.logging import tldm_logging_redirect
LOG = logging.getLogger(__name__)
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
with tldm_logging_redirect(total=10) as pbar:
for i in range(10):
LOG.info(f"Processing item {i}")
pbar.update(1)
Parameters:
- loggers: List of loggers to redirect (default: [logging.root])
- tldm_class: Progress bar class to use (default: tldm.std.tldm)
Custom information can be displayed and updated dynamically on tldm bars:
from tldm import tldm, trange
from random import random, randint
from time import sleep
with trange(10) as t:
for i in t:
# Description will be displayed on the left
t.set_description(f'GEN {i}')
# Postfix will be displayed on the right,
# formatted automatically based on argument's datatype
t.set_postfix(loss=random(), gen=randint(1, 999), str='h', lst=[1, 2])
sleep(0.1)
The higher-level metric and timing helpers build on the same display surface:
from tldm import tldm
with tldm(range(5), metric_window=3, desc="debug") as pbar:
for step in pbar:
with pbar.section("work"):
value = run_step(step)
pbar.set_metrics(loss=value, refresh=False)
You can also use a custom bar_format:
from tldm import tldm
with tldm(total=10, bar_format="{postfix[0]} {postfix[1][value]:>8.2g}",
postfix=["Batch", {"value": 0}]) as t:
for i in range(10):
t.postfix[1]["value"] = i / 2
t.update()
tldm supports nested progress bars. Here's an example:
from tldm import trange
from time import sleep
for i in trange(4, desc='1st loop'):
for j in trange(5, desc='2nd loop'):
for k in trange(50, desc='3rd loop', leave=False):
sleep(0.01)
For manual control over positioning (e.g., for multi-processing), you may specify position=n where n=0 for the outermost bar, n=1 for the next, and so on:
from time import sleep
from tldm import trange, tldm
from multiprocessing import Pool, RLock, freeze_support
L = list(range(9))
def progresser(n):
interval = 0.001 / (n + 2)
total = 5000
text = f"#{n}, est. {interval * total:<04.2}s"
for _ in trange(total, desc=text, position=n):
sleep(interval)
if __name__ == '__main__':
freeze_support() # for Windows support
tldm.set_lock(RLock()) # for managing output contention
p = Pool(initializer=tldm.set_lock, initargs=(tldm.get_lock(),))
p.map(progresser, L)
Note that tldm.write is thread-safe:
from time import sleep
from tldm import tldm, trange
from concurrent.futures import ThreadPoolExecutor
L = list(range(9))
def progresser(n):
interval = 0.001 / (n + 2)
total = 5000
text = f"#{n}, est. {interval * total:<04.2}s"
for _ in trange(total, desc=text):
sleep(interval)
if n == 6:
tldm.write("n == 6 completed.")
tldm.write("`tldm.write()` is thread-safe!")
if __name__ == '__main__':
with ThreadPoolExecutor() as p:
p.map(progresser, L)
tldm can easily support callbacks/hooks and manual updates. Here's an example with urllib:
import urllib.request
import os
from tldm import tldm
class TldmUpTo(tldm):
"""Provides `update_to(n)` which uses `tldm.update(delta_n)`."""
def update_to(self, b=1, bsize=1, tsize=None):
"""
b : int, optional
Number of blocks transferred so far [default: 1].
bsize : int, optional
Size of each block (in tldm units) [default: 1].
tsize : int, optional
Total size (in tldm units). If [default: None] remains unchanged.
"""
if tsize is not None:
self.total = tsize
return self.update(b * bsize - self.n)
eg_link = "https://example.com/file.zip"
with TldmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
desc=eg_link.split('/')[-1]) as t:
urllib.request.urlretrieve(eg_link, filename=os.devnull,
reporthook=t.update_to, data=None)
t.total = t.n
Alternatively, use the wrapattr convenience function:
import urllib.request
import os
from tldm import tldm
eg_link = "https://example.com/file.zip"
response = urllib.request.urlopen(eg_link)
with tldm.wrapattr(open(os.devnull, "wb"), "write",
miniters=1, desc=eg_link.split('/')[-1],
total=getattr(response, 'length', None)) as fout:
for chunk in response:
fout.write(chunk)
The requests equivalent is nearly identical:
import requests
import os
from tldm import tldm
eg_link = "https://example.com/file.zip"
response = requests.get(eg_link, stream=True)
with tldm.wrapattr(open(os.devnull, "wb"), "write",
miniters=1, desc=eg_link.split('/')[-1],
total=int(response.headers.get('content-length', 0))) as fout:
for chunk in response.iter_content(chunk_size=4096):
fout.write(chunk)
Working with Zip Files
You can also wrap file operations within zipfiles to show progress during compression/decompression:
import zipfile
from tldm import tldm
class ZipFile(zipfile.ZipFile):
"""ZipFile subclass with progress bars for read/write operations."""
def open(self, name, mode="r", pwd=None, *, force_zip64=False):
f = super().open(name, mode, pwd=pwd, force_zip64=force_zip64)
if mode == "r":
if not isinstance(name, zipfile.ZipInfo):
name = self.getinfo(name)
return tldm.wrapattr(
f, "read",
total=name.compress_size,
desc=f"Decompressing {name.filename}"
)
elif mode == "w":
if isinstance(name, zipfile.ZipInfo):
return tldm.wrapattr(
f, "write",
total=name.file_size,
desc=f"Compressing {name.filename}"
)
return f
else:
raise ValueError('open() requires mode "r" or "w"')
# Usage example
with ZipFile('archive.zip', 'r') as zf:
# Reading with progress bar
data = zf.open('largefile.txt').read()
# Extracting with progress bar
zf.extract('largefile.txt', 'output_dir/')
Since tldm uses a simple printing mechanism to display progress bars, you should not write any message in the terminal using the builtin print() function while a progressbar is open.
To write messages in the terminal without any collision with tldm bar display, use the tldm.print() method (recommended) or the tldm.write() method:
Using tldm.print() (recommended):
from tldm import tldm, trange
from time import sleep
bar = trange(10)
for i in bar:
sleep(0.1)
if not (i % 3):
tldm.print(f"Done task {i}")
The tldm.print() function works just like the builtin print(), accepting multiple values and standard keyword arguments like sep, end, file, and flush. This makes it a drop-in replacement for Python's builtin print().
Using tldm.write() (alternative):
from tldm import tldm, trange
from time import sleep
bar = trange(10)
for i in bar:
sleep(0.1)
if not (i % 3):
tldm.write(f"Done task {i}")
Both methods print to sys.stdout by default, but you can specify any file-like object using the file argument. Both also support the flush argument to force flushing of the output buffer.
The most common issues relate to excessive output on multiple lines, instead of a neat one-line progress bar.
- Consoles in general require support for carriage return (CR, \r).
- Some cloud logging consoles which don't support \r properly (CloudWatch, K8s) may benefit from export TLDM_POSITION=-1.
- Consoles in general require support for moving cursors up to the previous line. IDLE, ConEmu, and PyCharm lack full support.
- Windows may require the colorama module to ensure nested bars stay within their respective lines.
- Environments which report that they support unicode will have solid smooth progressbars. The fallback is an ASCII-only bar.
- Windows consoles often only partially support unicode and may require explicit ascii=True.
Generator wrapper functions tend to hide the length of iterables; tldm does not.
- Replace tldm(enumerate(...)) with enumerate(tldm(...)) or tldm(enumerate(x), total=len(x), ...). The same applies to numpy.ndenumerate.
- Replace tldm(zip(a, b)) with zip(tldm(a), b) or even zip(tldm(a), tldm(b)).
- The same applies to itertools.
- Useful convenience functions tenumerate, tzip, tmap, and tproduct are available in this package.
When running inside Docker, use docker-compose run instead of docker-compose up, and set tty: true.
tldm implements a few tricks to increase efficiency and reduce overhead:
- Avoid unnecessary frequent bar refreshing: mininterval defines how long to wait between each refresh.
- Reduce the number of calls to check the system clock/time.
- mininterval is more intuitive to configure than miniters. A clever adjustment system, dynamic_miniters, will automatically adjust miniters to the number of iterations that fit into the time mininterval.
However, consider a case with a combination of fast and slow iterations. After a few fast iterations, dynamic_miniters will set miniters to a large number. When the iteration rate subsequently slows, miniters will remain large and thus reduce display update frequency. To address this:
- maxinterval defines the maximum time between display refreshes. A concurrent monitoring thread checks for overdue updates and forces one where necessary.
The monitoring thread should not have a noticeable overhead, and guarantees updates at least every 10 seconds by default. This value can be directly changed by setting the monitor_interval of any tldm instance (i.e., t = tldm(...); t.monitor_interval = 2). The monitor thread may be disabled application-wide by setting tldm.monitor_interval = 0 before instantiation of any tldm bar.
All source code is hosted on GitHub. Contributions are welcome.
See the CONTRIBUTING file for more information.
TL;DM is forked from tqdm, created by Noam Yorav-Raphael. We gratefully acknowledge the contributions of all tqdm contributors, especially (in no particular order):
- Casper da Costa-Luis
- Stephen Larroque
- Kyle Altendorf
- Hadrien Mary
- Richard Sheridan
- Ivan Ivanov
- Mikhail Korobov
And all other contributors to the original tqdm project.
This project is licensed under the MPL-2.0 license. See the LICENCE file for details.