VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models

1University of Science and Technology of China, 2Xi'an Jiaotong University,
3Shanghai Artificial Intelligence Laboratory, 4SenseTime Research, 5Tsinghua University

*Indicates equal contribution; †Interns at OpenGVLab, Shanghai AI Laboratory; ✉️ Corresponding author

Composition of the VisuLogic benchmark and accuracies of representative MLLMs. The left panel shows the distribution of the six categories and their subcategories in our benchmark; the right panel shows the accuracy (%) of representative MLLMs across these categories.

Abstract

Visual reasoning is a core component of human intelligence and a critical capability for advanced multimodal models. Yet current reasoning evaluations of multimodal large language models (MLLMs) often rely on text descriptions and allow language-based reasoning shortcuts, failing to measure genuine vision-centric reasoning. To address this, we introduce VisuLogic: a benchmark of 1,000 human-verified problems across six categories (e.g., quantitative shifts, spatial relations, attribute comparisons). These question types allow the visual reasoning capabilities of MLLMs to be assessed from multiple perspectives. We evaluate leading MLLMs on this benchmark and analyze their results to identify common failure modes. Most models score below 30% accuracy, only slightly above the 25% random baseline and far below the 51.4% achieved by humans, revealing significant gaps in visual reasoning. Furthermore, we provide a supplementary training dataset and a reinforcement-learning baseline to support further progress.
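To make the scoring protocol concrete, the sketch below shows how per-category accuracy could be computed for four-option multiple-choice questions. It is a minimal illustration under stated assumptions, not the official evaluation code: the field names ("category", "answer") and the answer-extraction rule are placeholders.

    # Minimal sketch of per-category accuracy scoring for a four-option
    # multiple-choice benchmark. Field names and the answer-extraction
    # rule are assumptions, not the official VisuLogic evaluation code.
    import re
    from collections import defaultdict

    def extract_choice(response: str) -> str | None:
        """Pull the last standalone option letter (A-D) from a model response."""
        matches = re.findall(r"\b([A-D])\b", response.upper())
        return matches[-1] if matches else None

    def score(samples: list[dict], responses: list[str]) -> dict[str, float]:
        """Each sample is assumed to carry 'category' and 'answer' keys."""
        correct, total = defaultdict(int), defaultdict(int)
        for sample, response in zip(samples, responses):
            cat = sample["category"]
            total[cat] += 1
            if extract_choice(response) == sample["answer"]:
                correct[cat] += 1
        per_cat = {c: correct[c] / total[c] for c in total}
        per_cat["Overall"] = sum(correct.values()) / max(sum(total.values()), 1)
        return per_cat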


Comparison of the MLLM description→LLM pipeline on two benchmarks. In MMMU, detailed descriptions lead to correct solutions, while in VisuLogic, critical visual cues (e.g., symmetry, rotation) can be easily lost, causing the LLM to misinterpret the image. This highlights that textual reasoning alone is insufficient, underscoring the benchmark’s demand for robust and in-depth visual reasoning.
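The sketch below illustrates the describe-then-reason pipeline contrasted in the figure: an MLLM first converts the image into text, then a second, text-only call answers from that description alone. The client setup, model names, and prompts are placeholders, not the exact configuration used in the paper.

    # Sketch of the two-stage "MLLM describes, LLM reasons" pipeline.
    # Model names and prompts are placeholders; any OpenAI-compatible
    # endpoint with vision support would work the same way.
    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes an API key / compatible endpoint is configured

    def describe_then_reason(image_path: str, question: str) -> str:
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()

        # Stage 1: an MLLM turns the image into a text description.
        description = client.chat.completions.create(
            model="gpt-4o",  # placeholder vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in detail."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        ).choices[0].message.content

        # Stage 2: a text-only call answers from the description alone.
        # Visual cues the description omits (symmetry, rotation, ...) are lost here.
        answer = client.chat.completions.create(
            model="gpt-4o",  # placeholder; a text-only LLM works identically
            messages=[{
                "role": "user",
                "content": f"Image description:\n{description}\n\n"
                           f"Question: {question}\n"
                           "Answer with the option letter only.",
            }],
        ).choices[0].message.content
        return answer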

Leaderboard

Rank Model Overall Qua. Spa. Pos. Att. Sty. Oth.
All values are accuracy (%). Qua. = Quantitative, Spa. = Spatial, Pos. = Positional, Att. = Attribute, Sty. = Stylistic, Oth. = Other.
- Human 51.4 45.3 52.7 71.1 50.0 47.5 44.2
1 InternVL2.5-38B-RL 31.1 31.2 31.2 26.5 30.5 30.0 38.9
2 doubao-1-5-vision-pro-32k-250115 28.1 28.1 23.8 29.1 25.1 32.1 35.0
3 gemini-2.0-pro-exp-02-05 28.0 29.7 24.2 27.9 30.5 22.2 33.3
4 Qwen2.5-VL-7B-Instruct-RL 28.0 26.6 33.8 29.4 23.2 18.9 29.6
5 InternVL3-78B 27.7 27.7 26.1 31.6 26.3 21.3 32.3
6 InternVL2.5-78B 27.3 26.6 26.0 26.5 26.8 31.1 30.6
7 InternVL3-38B 27.1 28.7 27.6 26.1 21.4 23.9 28.5
8 GPT-4o 26.3 28.6 24.7 27.2 26.8 20.0 25.9
9 Qwen2.5VL-72B-Instruct 26.2 25.2 23.8 27.2 25.6 25.6 34.3
10 Qwen2.5-VL-7B-Instruct 26.0 27.6 20.9 25.2 23.2 37.8 25.0
11 kimi-latest 25.9 24.9 29.4 26.5 28.0 16.7 26.9
12 Ovis2-8B 25.6 26.1 23.8 27.2 28.0 25.6 24.1
13 InternVL2.5-38B 25.5 24.4 26.4 27.2 23.2 25.6 26.9
14 LLaVA-OneVision-7B (SI) 25.3 22.4 27.3 33.1 23.2 25.6 22.2
15 MiniCPM-o-2.6 25.3 25.6 23.0 27.3 21.9 24.5 29.9


BibTeX

@article{xu2025visulogic,
  title={VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models},
  author={Xu, Weiye and Wang, Jiahao and Wang, Weiyun and Chen, Zhe and Zhou, Wengang and Yang, Aijun and Lu, Lewei and Li, Houqiang and Wang, Xiaohua and Zhu, Xizhou and Wang, Wenhai and Dai, Jifeng and Zhu, Jinguo},
  journal={arXiv preprint arXiv:2504.15279},
  year={2025},
  url={https://arxiv.org/abs/2504.15279}
}