GLM-V is an open-source vision-language model (VLM) series from ZhipuAI that extends the GLM foundation models into multimodal reasoning and perception. The repository provides both the GLM-4.5V and GLM-4.1V models, designed to advance beyond basic perception toward higher-level reasoning, long-context understanding, and agent-based applications.

GLM-4.5V builds on the flagship GLM-4.5-Air foundation (106B parameters, 12B active), achieving state-of-the-art results on 42 benchmarks across image, video, document, GUI, and grounding tasks. It introduces hybrid training for broad-spectrum reasoning and a Thinking Mode switch to balance response speed against depth of reasoning.

GLM-4.1V-9B-Thinking incorporates reinforcement learning with curriculum sampling (RLCS) and chain-of-thought reasoning, outperforming much larger models (e.g., Qwen-2.5-VL-72B) on many benchmarks.

Features

  • Bilingual (Chinese/English) multimodal reasoning and perception
  • GLM-4.5V: hybrid-trained flagship with state-of-the-art benchmark scores
  • GLM-4.1V-9B-Thinking: reasoning-focused model with RLCS and CoT mechanisms
  • Long-context support (up to 64K tokens) and flexible input (images, video, documents)
  • GUI agent capabilities with platform-aware prompts and precise grounding
  • Thinking Mode switch to toggle between fast and deep reasoning outputs
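As a concrete illustration of the feature set above, the sketch below assembles an OpenAI-style multimodal chat payload of the kind commonly used to query VLMs served behind an OpenAI-compatible endpoint (e.g., via vLLM). The model identifier and the `thinking` toggle are assumptions for illustration only; the actual parameter names exposed by a GLM-V deployment may differ.

```python
def build_request(prompt: str, image_url: str, deep_thinking: bool = True) -> dict:
    """Assemble a chat-completion payload with one image part and one text part.

    The "thinking" field is a hypothetical stand-in for the Thinking Mode
    switch described above; check your serving stack for the real name.
    """
    return {
        "model": "GLM-4.5V",  # assumed model identifier, not confirmed by the repo
        "messages": [
            {
                "role": "user",
                "content": [
                    # Image and text parts in the OpenAI-compatible content format
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        # Hypothetical toggle between fast and deep reasoning outputs
        "thinking": {"type": "enabled" if deep_thinking else "disabled"},
    }


payload = build_request("Describe this chart.", "https://example.com/chart.png")
```

The payload is a plain dict, so it can be passed as the JSON body of a chat-completion request once the endpoint and parameter names are confirmed for your deployment.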

License

Apache License 2.0


Additional Project Details

Operating Systems: Linux

Programming Language: Python

Related Categories: Python Large Language Models (LLM), Python AI Models

Registered: 2025-10-04