Beyond End-to-End VLMs: Leveraging Intermediate Text Representations for Superior Flowchart Understanding
- [2025/01] 🔥 Our TextFlow paper has been accepted to NAACL 2025.
- [2025/01] The data and code for TextFlow have been released.
- [2024/12] Excited to announce that our TextFlow paper is now available on arXiv!
TextFlow is a framework that converts flowchart images into text representations to improve explainability and control in flowchart understanding tasks.
Flowcharts are typically presented as images, driving the trend of using vision-language models (VLMs) for end-to-end flowchart understanding. However, two key challenges arise: (i) Limited controllability—users have minimal influence over the downstream task, as they can only modify input images, while the training of VLMs is often out of reach for most researchers. (ii) Lack of explainability—it is difficult to trace VLM errors to specific causes, such as failures in visual encoding or reasoning.
We propose TextFlow, addressing the aforementioned issues with two stages: (i) Vision Textualizer—which generates textual representations from flowchart images; and (ii) Textual Reasoner—which performs question-answering based on the text representations. TextFlow offers three key advantages: (i) users can select the type of text representations (e.g., Graphviz, Mermaid, PlantUML), or further convert them into executable graph objects to call tools, enhancing performance and controllability; (ii) it improves explainability by helping to attribute errors more clearly to visual or textual processing components; and (iii) it promotes the modularization of the solution, such as allowing advanced LLMs to be used in the reasoner stage when VLMs underperform in an end-to-end fashion. Experiments on the FlowVQA and FlowLearn benchmarks demonstrate TextFlow's state-of-the-art performance as well as its robustness. All code and data are publicly available.
Follow these steps to get started with TextFlow:
Run the following commands to install all required dependencies:

```bash
cd TextFlow
pip install -r requirements.txt
```

Set up your OpenAI or Anthropic API keys in the config.json file.
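The exact schema of config.json is defined by the repository; the key names below are hypothetical placeholders, shown only to illustrate the expected shape:

```json
{
  "openai_api_key": "sk-...",
  "anthropic_api_key": "sk-ant-..."
}
```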
Perform baseline VQA on the FlowVQA dataset:
```bash
python src/vqa.py --dataset flowvqa --model_name gpt-4o
```

Evaluate the experimental results:
```bash
python src/evaluation.py --model_name gpt-4o --data_path output/flowvqa/vqa/gpt-4o.json
```

Convert flowchart images into text representations (Mermaid, Graphviz, or PlantUML). For example, to generate Mermaid text representations:
```bash
python src/textualizer.py --dataset flowvqa --textualizer gpt-4o --output_type mermaid
```
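For reference, a Mermaid flowchart representation encodes nodes and edges as plain text. A hand-written illustrative example for a small decision flow (not actual textualizer output):

```mermaid
%% Illustrative example only; actual textualizer output varies by flowchart.
flowchart TD
    A([Start]) --> B{Is the input valid?}
    B -- Yes --> C[Process the input]
    B -- No --> D[Report an error]
    C --> E([End])
    D --> E
```

Graphviz and PlantUML representations encode the same node and edge structure in their respective syntaxes.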
Perform question answering based on the text representations:

```bash
python src/reasoner.py --dataset flowvqa --reasoner gpt-4o --textualizer gpt-4o --input_type mermaid
```

For enhanced capabilities, enable tool use (currently supported for the Mermaid text representation with gpt-4o):
```bash
python src/reasoner.py --dataset flowvqa --reasoner gpt-4o --textualizer gpt-4o --input_type mermaid --tool_use
```
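Tool use builds on the fact that a textual representation can be parsed into an executable graph object, so structural questions (successors, path existence, node counts) are computed exactly rather than inferred by the model. The repository contains the actual implementation; the snippet below is only a minimal sketch of the idea, assuming networkx and a simplified subset of Mermaid edge syntax:

```python
import re
import networkx as nx

# Minimal illustrative sketch -- not TextFlow's actual tool code.
# Parses simple Mermaid edges such as "A --> B" or "B -->|Yes| C"
# into a directed graph that can be queried exactly.
EDGE_RE = re.compile(
    r"(\w+)(?:[\[\{\(][^\]\}\)]*[\]\}\)])?\s*-->\s*(?:\|[^|]*\|\s*)?(\w+)"
)

def mermaid_to_graph(mermaid_text: str) -> nx.DiGraph:
    graph = nx.DiGraph()
    for line in mermaid_text.splitlines():
        match = EDGE_RE.search(line)
        if match:
            graph.add_edge(match.group(1), match.group(2))
    return graph

mermaid = """flowchart TD
    A[Start] --> B{Valid input?}
    B -->|Yes| C[Process]
    B -->|No| D[Error]
"""
g = mermaid_to_graph(mermaid)
print(list(g.successors("B")))   # ['C', 'D']
print(nx.has_path(g, "A", "C"))  # True
```

A real tool layer would need to handle the full Mermaid grammar (subgraphs, multi-edge lines, styling), which this sketch deliberately omits.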
Evaluate the results of the TextFlow pipeline:

- Without tool use:
```bash
python src/evaluation.py --model_name gpt-4o --data_path output/flowvqa/textflow/mermaid_reasoner_gpt-4o_textualizer_gpt-4o.json
```
- With tool use:
```bash
python src/evaluation.py --model_name gpt-4o --data_path output/flowvqa/textflow/mermaid_reasoner_tool_use_gpt-4o_textualizer_gpt-4o.json
```
If you find this project helpful to your research, please consider citing our paper:
```bibtex
@inproceedings{ye-etal-2025-beyond,
title = "Beyond End-to-End {VLM}s: Leveraging Intermediate Text Representations for Superior Flowchart Understanding",
author = "Ye, Junyi and
Dash, Ankan and
Yin, Wenpeng and
Wang, Guiling",
editor = "Chiruzzo, Luis and
Ritter, Alan and
Wang, Lu",
booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = apr,
year = "2025",
address = "Albuquerque, New Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.naacl-long.180/",
doi = "10.18653/v1/2025.naacl-long.180",
pages = "3534--3548",
ISBN = "979-8-89176-189-6",
abstract = "Flowcharts are typically presented as images, driving the trend of using vision-language models (VLMs) for end-to-end flowchart understanding. However, two key challenges arise: (i) Limited controllability{---}users have minimal influence over the downstream task, as they can only modify input images, while the training of VLMs is often out of reach for most researchers. (ii) Lack of explainability{---}it is difficult to trace VLM errors to specific causes, such as failures in visual encoding or reasoning. We propose TextFlow, addressing aforementioned issues with two stages: (i) Vision Textualizer{---}which generates textual representations from flowchart images; and (ii) Textual Reasoner{---}which performs question-answering based on the text representations. TextFlow offers three key advantages: (i) users can select the type of text representations (e.g., Graphviz, Mermaid, PlantUML), or further convert them into executable graph object to call tools, enhancing performance and controllability; (ii) it improves explainability by helping to attribute errors more clearly to visual or textual processing components; and (iii) it promotes the modularization of the solution, such as allowing advanced LLMs to be used in the reasoner stage when VLMs underperform in end-to-end fashion. Experiments on the FlowVQA and FlowLearn benchmarks demonstrate TextFlow{'}s state-of-the-art performance as well as its robustness. All code and data are publicly available."
}
```

