Home
Welcome to the Zcells wiki! Zcells is Matlab-based software for classifying transmitted-light microscopy acquisitions. In short, different objects in a microscopy image produce different light patterns along the focal, or Z, direction depending on their shape and physical composition. Our approach is to classify those light patterns in a so-called Z-stack, a series of microscopy images of a sample acquired at different focal planes above and below the in-focus point(s) of the objects of interest, in order to identify parts of the image (such as cellular membranes, microfluidic features, or different organisms) for greatly simplified segmentation.
The software is presented in detail in Scientific Reports:
Identification of individual cells from z-stacks of bright-field microscopy images
You can download example datasets, classifiers and results from the related Zenodo archive:
https://zenodo.org/record/1307781
❗ Please use GitHub's Issues system to report bugs or ask questions about the code. Unfortunately, there appear to be problems with the email address that was used on the publication.
A microscope with an automated Z-stage can be used to acquire Z-stacks: by adjusting the position of the microscope objective along the Z axis, images at different focal points, inside the sample of interest but also above and below it, can be captured and stored as a series of 2D images. Objects of varying sizes and compositions are known to generate different diffraction patterns, and we found that those out-of-focus planes contain a wealth of information about the sample being observed.
We call z-pixels (also referred to, somewhat interchangeably, as focal signatures) the vectors formed by single pixels acquired at the same (x,y) coordinates throughout a stack. Our approach is to apply classification algorithms, namely SVM-based and random forest techniques, to such z-pixels in order to classify images into labels of interest for simplified downstream segmentation.
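To illustrate the idea (this is a minimal NumPy sketch, not Zcells' actual Matlab code, and the stack here is synthetic), a Z-stack can be viewed as a 3D array, and each z-pixel is simply the vector of intensities at one (x,y) position across all focal planes:

```python
import numpy as np

# Hypothetical z-stack: 5 focal planes of 4x6-pixel images, shape (Z, H, W).
rng = np.random.default_rng(0)
stack = rng.random((5, 4, 6))

# Each z-pixel is the intensity vector at one (x, y) position across all
# focal planes: reshape (Z, H, W) -> (H*W, Z), one feature vector per pixel.
z_pixels = stack.reshape(stack.shape[0], -1).T

print(z_pixels.shape)  # (24, 5): 24 image pixels, each a Z-length vector
```

Each row of `z_pixels` is then a focal signature that can be handed to a classifier.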

The whole process is divided into two parts: training and prediction. Training consists of, first, constructing training sets from z-stacks that are labelled by hand or semi-automatically via a custom-built GUI. The user can then train against those labelled training sets, choosing from a wide array of preprocessing operations, training parameters, class hierarchies and training algorithms.
Once they are satisfied with their choices, they can move on to prediction, where newly acquired, completely unlabelled stacks are fed to the trained classifier, labelled automatically, and then further analyzed or passed to downstream segmentation algorithms.
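The train-then-predict workflow can be sketched as follows (a hedged illustration using scikit-learn's random forest in place of the Matlab implementation; the z-pixel data is synthetic, with Z = 5 focal planes and two classes given different mean focal signatures):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic labelled training set: rows are z-pixels (Z = 5 focal planes),
# the two classes differ in their mean intensity profile.
class0 = rng.normal(0.2, 0.05, size=(200, 5))
class1 = rng.normal(0.8, 0.05, size=(200, 5))
X_train = np.vstack([class0, class1])
y_train = np.array([0] * 200 + [1] * 200)

# Training phase: fit a random forest on the labelled z-pixels.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Prediction phase: a "newly acquired, unlabelled" batch of z-pixels
# is labelled automatically by the trained classifier.
X_new = rng.normal(0.8, 0.05, size=(10, 5))
labels = clf.predict(X_new)
```

Reshaping the predicted labels back to the image's (H, W) grid then yields a label map ready for downstream segmentation.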
We obtained varying degrees of success with different organisms and microscopy setups, but Zcells typically performs much better than comparable software.
We get especially good results with microfluidics. Even though our approach is inherently robust to focusing problems, it is possible that the better-defined 3D structure of microfluidic samples helps us achieve higher accuracy: [IMG]
But we also get very good results on agar pads or glass slides: [IMG]
Another point is that the parameter space of our software is fairly large, and we could only explore it in detail for a handful of problems for which we could easily acquire new stacks, namely bacteria in mother machines and, to a lesser extent, bacteria on agar pads. The other results were obtained with essentially default parameters.
- First of all, install Zcells.
- Then, acquire z-stacks for training.
- Create training sets.
- Train and evaluate.
- Predict!
- Towards segmentation.
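The last step above, moving from a per-pixel label map to individual segmented objects, can be sketched with connected-component labelling (using scipy here for illustration; this is not the Zcells implementation):

```python
import numpy as np
from scipy import ndimage

# Hypothetical predicted label map: 1 where the classifier called "cell".
predicted = np.array([
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
])

# Group touching "cell" pixels into individual objects
# (default 4-connectivity).
objects, n_objects = ndimage.label(predicted)
print(n_objects)  # 2 separate objects in this toy map
```

Each object then gets a distinct integer in `objects`, from which per-cell measurements or masks can be extracted.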
For reference, stacks, training sets, trained classifiers and classification results can be found in the Zenodo archive: https://zenodo.org/record/1307781
If you want to report issues, find answers to your questions or request new features, please use GitHub's Issues system.