Code accompanying the Diploma thesis
"Exploratory Analysis of Latent Topology and Geometry in some Riemannian Autoencoders"
The project investigates how different autoencoder architectures learn latent representations that reflect the topology and geometry of the underlying data manifold.
In particular, the thesis studies whether autoencoders can
- learn latent representations that are topologically and geometrically aligned with the data manifold
- learn a meaningful parametrization of the manifold through the decoder, revealing information about the geometry of the underlying data manifold
It further investigates the impact of topological regularization on these properties.
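A decoder parametrization reveals geometry through the pullback metric g(z) = J(z)ᵀJ(z), where J is the decoder's Jacobian. The sketch below is a minimal, repository-independent illustration of this idea (the `pullback_metric` helper and the toy decoder are hypothetical, not the project's API):

```python
import numpy as np

def pullback_metric(decoder, z, h=1e-5):
    """Riemannian metric induced on latent space by a decoder f: R^d -> R^D,
    computed as g(z) = J(z)^T J(z) with a finite-difference Jacobian."""
    z = np.asarray(z, dtype=float)
    d = z.shape[0]
    f0 = decoder(z)
    # columns of J are directional derivatives along each latent axis
    J = np.stack(
        [(decoder(z + h * np.eye(d)[i]) - f0) / h for i in range(d)],
        axis=1,
    )  # shape (D, d)
    return J.T @ J  # shape (d, d)

# toy decoder: maps a 1-D latent angle onto a circle of radius 2 in R^2
decoder = lambda z: np.array([2 * np.cos(z[0]), 2 * np.sin(z[0])])
g = pullback_metric(decoder, np.array([0.3]))
# g is approximately [[4.0]], the squared radius: latent distances are
# stretched by a factor of 2 on the decoded circle
```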
The repository contains implementations and experimental tools for analyzing the latent topology and geometry of autoencoders.
The following model classes are implemented:
- Euclidean Autoencoders (AE)
- Euclidean Variational Autoencoders (VAE)
- Manifold Autoencoders (Manifold-AE) with non-Euclidean latent spaces
- Manifold Variational Autoencoders (Manifold-VAE) with non-Euclidean latent spaces
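As an illustration of the idea behind non-Euclidean latent spaces (a sketch of one common construction, not this repository's implementation), a spherical latent space can be enforced by projecting encoder outputs onto the unit sphere:

```python
import numpy as np

def project_to_sphere(z, eps=1e-8):
    """Map Euclidean encoder outputs onto the unit sphere S^{d-1}."""
    norms = np.linalg.norm(z, axis=-1, keepdims=True)
    return z / np.maximum(norms, eps)  # eps guards against division by zero

# two raw encoder outputs in R^2
z = np.array([[3.0, 4.0], [0.0, 2.0]])
z_sphere = project_to_sphere(z)
# each row now lies on the unit circle, e.g. [3, 4] -> [0.6, 0.8]
```

Decoding then starts from points on the sphere, so the latent space inherits the sphere's topology rather than that of flat Euclidean space.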
The code allows experiments on synthetic datasets with known topology, enabling quantitative evaluation of how well the learned latent space reflects the true structure of the data.
The lib/ module provides utilities for
- training the different autoencoder architectures
- generating synthetic datasets with known topology
- estimating geometric quantities such as curvature
- analyzing the topology of latent representations
- visualizing latent embeddings and decoder maps
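For instance, a synthetic dataset with known topology can be produced by sampling a torus embedded in R³. The numpy-only sketch below is illustrative (the `sample_torus` helper is hypothetical, not the lib/ API); the resulting point cloud has the topology of T² = S¹ × S¹:

```python
import numpy as np

def sample_torus(n, R=2.0, r=1.0, seed=0):
    """Sample n points from a torus embedded in R^3.

    R: distance from the torus center to the tube center; r: tube radius.
    Angles are sampled uniformly (not uniform in surface area, but the
    topology of the support is the same).
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)  # angle around the central axis
    phi = rng.uniform(0, 2 * np.pi, n)    # angle around the tube
    x = (R + r * np.cos(phi)) * np.cos(theta)
    y = (R + r * np.cos(phi)) * np.sin(theta)
    z = r * np.sin(phi)
    return np.stack([x, y, z], axis=1)

X = sample_torus(1000)  # (1000, 3) point cloud with the topology of T^2
```

Because the true topology is known, a learned latent space can be checked quantitatively, e.g. by comparing its persistent homology against that of the torus.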
The experiments/notebooks/ directory contains the experiments and visualizations used in the thesis.
The project uses a conda environment.
conda env create -f conda-env.yml
conda activate TopoGeoAEs

Experiments are provided as Jupyter notebooks in:
experiments/notebooks/
This project builds on the following implementations:
- Spherical VAEs: https://github.com/nicola-decao/s-vae-pytorch
- Toroidal VAEs and geometric analysis tools: https://github.com/geometric-intelligence/neurometry
- Topological Autoencoders with persistence regularization: https://github.com/BorgwardtLab/topological-autoencoders
The full thesis is available here:
latent_geometry_and_topology_in_autoencoders_thesis
Parts of this work were accepted to the extended abstract and poster tracks of the NeurIPS 2025 "NeurReps" and "UniReps" workshops.
Extended abstracts:
paper/unireps_extended_abstract.pdf
paper/neurreps_extended_abstract.pdf
Poster:
paper/neurreps_poster.pdf
If you use this code, please cite the Diploma thesis:
@mastersthesis{samuelgraepler2025TopoGeoAEs,
  title={Exploratory Analysis of Latent Topology and Geometry in some Riemannian Autoencoders},
  author={Samuel Graepler},
  year={2025}
}
To cite the extended abstracts, please use:
@inproceedings{graepler2025onUniReps,
  title={On the Impact of Topological Regularization on Geometrical and Topological Alignment in Autoencoders: An Empirical Study},
  author={Samuel Graepler and Nico Scherf and Anna Wienhard and Diaaeldin Taha},
  booktitle={UniReps: 3rd Edition of the Workshop on Unifying Representations in Neural Models},
  year={2025},
  url={https://openreview.net/forum?id=5i6cA8XS3T}
}
@inproceedings{graepler2025onNeurReps,
  title={On the Impact of Topological Regularization on Geometrical and Topological Alignment in Autoencoders: An Empirical Study},
  author={Samuel Graepler and Nico Scherf and Anna Wienhard and Diaaeldin Taha},
  booktitle={NeurIPS 2025 Workshop on Symmetry and Geometry in Neural Representations},
  year={2025},
  url={https://openreview.net/forum?id=d5MaJiYmUB}
}
