
GLM Tray πŸ”‘


Keep your Z.ai / BigModel API keys alive and monitored β€” silently, from your system tray.

GLM Tray is a lightweight, native desktop app that sits in your system tray and watches your API keys so you never hit an unexpected quota wall or stale key. Manage up to 4 keys, visualise live quota usage, and automate keep-alive pings β€” all without leaving your workflow.


Why You'll Love It

  • πŸ”‘ Multi-key dashboard β€” Monitor up to 4 Z.ai / BigModel API keys side by side
  • πŸ“Š Live quota tracking β€” Token limits, request counts, and model-level breakdowns at a glance
  • πŸ’“ Keep-alive scheduler β€” Three flexible modes keep keys warm automatically: Interval, Specific Times, or After Reset
  • βœ… Smart wake confirmation β€” Validates success via quota delta, retries silently on failure
  • 🌐 Dual platform support β€” Works with both api.z.ai and open.bigmodel.cn endpoints
  • πŸ”” Auto-update notifications β€” Stay current with in-app update prompts
  • πŸ—“ JSONL audit logging β€” Optional, filterable logs with flow_id and phase fields
  • βš™οΈ Global app settings β€” Configure shared quota URL, LLM URL, log path, and retention from one place

Screenshots

GLM Tray Dashboard


Installation

Grab the latest release for your platform from the Releases page β†’

| Platform | Installer |
| --- | --- |
| 🪟 Windows | glm-tray_X.X.X_x64-setup.exe |
| 🍎 macOS (Apple Silicon) | glm-tray_X.X.X_aarch64.dmg |
| 🍎 macOS (Intel) | glm-tray_X.X.X_x64.dmg |
| 🐧 Linux | glm-tray_X.X.X_amd64.AppImage |

Windows

  1. Download and run the .exe installer
  2. Follow the installation wizard

macOS

  1. Download the .dmg file
  2. Drag GLM Tray to Applications
  3. On first launch: right-click → Open (or allow it under System Settings → Privacy & Security)

Linux

chmod +x glm-tray_*.AppImage
./glm-tray_*.AppImage

Quick Start

  1. Launch the app β€” it appears silently in your system tray
  2. Click the tray icon β€” opens the main window
  3. Add your API key β€” paste your Z.ai or BigModel key into Slot 1
  4. Enable polling β€” toggle on Enable polling to begin monitoring
  5. Check your usage β€” live stats appear instantly on the dashboard

Features

πŸ“Š Quota Monitoring

Stay ahead of your limits with real-time usage data:

  • Token quota consumption and limits
  • Request counts and model-level breakdowns (24-hour window)
  • Tool usage statistics
  • Visual indicators directly in the tray icon

πŸ’“ Keep-Alive Scheduling

Three scheduling modes prevent stale keys β€” mix and match as needed:

| Mode | Description |
| --- | --- |
| Interval | Send a keep-alive request every X minutes |
| Specific Times | Fire at fixed times (e.g. 09:00, 12:00, 18:00) |
| After Reset | Trigger X minutes after your quota resets |
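The three modes boil down to different next-run calculations. A minimal sketch in Python (a hypothetical helper for illustration, not the app's actual Rust scheduler code):

```python
from datetime import datetime, timedelta

def next_wake(mode, now, *, interval_min=None, times=None, reset_at=None, delay_min=None):
    """Compute the next keep-alive time for each scheduling mode (illustrative)."""
    if mode == "interval":
        # Interval: fire every `interval_min` minutes from now.
        return now + timedelta(minutes=interval_min)
    if mode == "specific_times":
        # Specific Times: fire at the next listed HH:MM today, else the first one tomorrow.
        for hh, mm in sorted(times):
            candidate = now.replace(hour=hh, minute=mm, second=0, microsecond=0)
            if candidate > now:
                return candidate
        hh, mm = sorted(times)[0]
        return (now + timedelta(days=1)).replace(hour=hh, minute=mm, second=0, microsecond=0)
    if mode == "after_reset":
        # After Reset: fire `delay_min` minutes after the quota reset timestamp.
        return reset_at + timedelta(minutes=delay_min)
    raise ValueError(f"unknown mode: {mode}")
```

For example, with times 09:00/12:00/18:00 and a current time of 10:30, the Specific Times mode would next fire at 12:00.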

Wake Confirmation & Retry Logic

Wake requests are verified β€” not just sent:

  • After a wake, the slot is marked wake_pending and an immediate quota poll fires
  • If quota shows a valid nextResetTime advance β†’ βœ… confirmed, wake_pending clears
  • If quota doesn't confirm β†’ retry every minute for the configured wake_quota_retry_window_minutes
  • After the window, a forced retry is attempted
  • Persistent failures increment wake_consecutive_errors; once the threshold is reached, the slot is temporarily auto-disabled for wake
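The flow above can be sketched as a per-poll-tick decision. Field names like wake_pending and wake_consecutive_errors come from this README; the function shape and the error_threshold parameter are illustrative assumptions, not the app's actual Rust code:

```python
def confirm_wake(slot, minutes_since_wake, quota_advanced, retry_window_minutes, error_threshold):
    """Return the action the scheduler would take on one quota-poll tick (sketch)."""
    if not slot["wake_pending"]:
        return "idle"
    if quota_advanced:                       # nextResetTime moved forward
        slot["wake_pending"] = False
        slot["wake_consecutive_errors"] = 0
        return "confirmed"
    if minutes_since_wake < retry_window_minutes:
        return "retry"                       # poll again in one minute
    # Retry window exhausted: count the failure, then force one more retry
    # unless the error threshold temporarily disables wake for this slot.
    slot["wake_consecutive_errors"] += 1
    if slot["wake_consecutive_errors"] >= error_threshold:
        slot["wake_enabled"] = False         # temporary auto-disable
        return "disabled"
    return "forced_retry"
```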

πŸ“ JSONL Logging (Optional)

Enable structured logging to debug API interactions:

  • Daily .jsonl log files with full request/response data
  • flow_id ties each request/response pair together
  • phase field: request, response, error, event
  • Scheduler events logged: wake pending, retry windows, task start/stop
  • Default path: {app_data}/logs/ β€” override with log_directory in Global Settings
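With flow_id and phase in every entry, a day's log can be filtered in a few lines. The field names come from this README; the one-JSON-object-per-line layout is what JSONL implies:

```python
import json

def filter_log(lines, flow_id=None, phase=None):
    """Yield parsed JSONL entries matching an optional flow_id and/or phase."""
    for line in lines:
        entry = json.loads(line)
        if flow_id is not None and entry.get("flow_id") != flow_id:
            continue
        if phase is not None and entry.get("phase") != phase:
            continue
        yield entry

# Example: pair up the request/response entries for one flow in a day's file:
# with open("2024-01-01.jsonl") as f:
#     for entry in filter_log(f, flow_id="abc123"):
#         print(entry["phase"])
```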

βš™οΈ Global App Settings

Access via the gear icon on the home page:

| Setting | Description |
| --- | --- |
| global_quota_url | Default quota endpoint for all keys |
| global_request_url | Default LLM endpoint for keep-alive requests |
| log_directory | Override the log file output path |
| max_log_days | How many days of logs to retain |

Configuration

Settings are stored in the platform-standard application data folder:

| Platform | Path |
| --- | --- |
| 🪟 Windows | %APPDATA%\glm-tray\settings.json |
| 🍎 macOS | ~/Library/Application Support/glm-tray/settings.json |
| 🐧 Linux | ~/.config/glm-tray/settings.json |
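Assuming the four global settings are stored as top-level keys, a settings.json might look roughly like this (the exact file shape is an assumption; key names come from the Global App Settings table, values are illustrative):

```json
{
  "global_quota_url": "https://api.z.ai/api/monitor/usage/quota/limit",
  "global_request_url": "https://api.z.ai/api/coding/paas/v4/chat/completions",
  "log_directory": "/home/user/glm-tray-logs",
  "max_log_days": 7
}
```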

API Endpoints

| Purpose | URL |
| --- | --- |
| Quota Limits | https://api.z.ai/api/monitor/usage/quota/limit |
| Model Usage | https://api.z.ai/api/monitor/usage/model-usage |
| Tool Usage | https://api.z.ai/api/monitor/usage/tool-usage |
| Chat Completions | https://api.z.ai/api/coding/paas/v4/chat/completions |

For BigModel, replace api.z.ai with open.bigmodel.cn.
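Since only the host differs between the two platforms, full endpoint URLs can be derived with a trivial helper (an illustrative sketch, not code from the app):

```python
def endpoint(path, platform="zai"):
    """Build a full API URL for either supported platform by swapping the host."""
    hosts = {"zai": "api.z.ai", "bigmodel": "open.bigmodel.cn"}
    return f"https://{hosts[platform]}{path}"
```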


For Developers

Prerequisites

Linux System Dependencies

sudo apt-get install -y \
  libwebkit2gtk-4.1-dev build-essential curl wget file \
  libssl-dev libgtk-3-dev libayatana-appindicator3-dev \
  librsvg2-dev patchelf pkg-config libsoup-3.0-dev \
  javascriptcoregtk-4.1 libjavascriptcoregtk-4.1-dev

macOS

xcode-select --install

Windows

  • Visual Studio Build Tools (Desktop development with C++)
  • WebView2 Runtime (usually pre-installed on Windows 11)

Development

git clone https://github.com/kiwina/glm-tray.git
cd glm-tray
npm install
npm run tauri dev

Production Build

npm run tauri build

Built installers land in src-tauri/target/release/bundle/.

Debug Mode (Mock Server)

Test wake functionality without hitting production APIs:

node docs/mock-server.cjs   # Start mock server

Then enable Debug Mode in Global Settings β†’ Developer section. All API calls route to the mock server.

See docs/DEBUGGING.md for full documentation.

Project Structure

src/
  main.ts              # Frontend entry (Vue 3 + Pinia)
  styles.css           # DaisyUI + Tailwind CSS 4

src-tauri/src/
  lib.rs               # Tauri setup, commands, state
  config.rs            # Config load/save with migration
  api_client.rs        # HTTP client for API calls
  scheduler.rs         # Background polling scheduler
  tray.rs              # System tray management
  models.rs            # Shared data structures
  update_checker.rs    # Auto-update checker
  file_logger.rs       # JSONL logging module

Release History

  • v0.0.4 β€” Vue 3 rewrite, multi-key dashboard, global settings, JSONL logging, auto-updater
  • v0.0.3 β€” Keep-alive scheduler with wake confirmation and retry logic
  • v0.0.2 β€” Dual platform support (Z.ai + BigModel)
  • v0.0.1 β€” Initial release

License

MIT


Disclaimer

This software is not affiliated with, endorsed by, or sponsored by Z.ai, BigModel, or any of their subsidiaries.

"Z.ai" and "BigModel" are trademarks of their respective owners. This is an independent, community-developed tool for personal API key management. Use at your own risk.

The software is provided "as is", without warranty of any kind, express or implied. The authors are not liable for any damages arising from the use of this software.
