MemCast is a novel framework that reformulates Time Series Forecasting (TSF) as an experience-conditioned reasoning task. It explicitly accumulates forecasting experience into a hierarchical memory and leverages it to guide the inference process of Large Language Models (LLMs).
📝 “MemCast: Memory-Driven Time Series Forecasting with Experience-Conditioned Reasoning”
Preprint | 📄 Paper
Existing LLM-based forecasting methods often treat instances as isolated reasoning tasks, lacking explicit experience accumulation and continual evolution. MemCast introduces a new learning-to-memory paradigm:
- Experience Accumulation: Organizes training data into a Hierarchical Memory consisting of historical patterns, reasoning wisdom, and general laws.
- Experience-Conditioned Reasoning: Retrieves relevant patterns to guide reasoning, utilizes wisdom for trajectory selection, and applies general laws for reflective iteration.
- Continual Evolution: Dynamically adapts the confidence of memory entries during inference, without test-data leakage.
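To make the learning-to-memory loop concrete, here is a minimal, illustrative Python sketch of a hierarchical memory with confidence-weighted retrieval and a post-hoc confidence update. All class and method names (`HierarchicalMemory`, `retrieve`, `update_confidence`) are hypothetical and not taken from the MemCast codebase.

```python
import numpy as np

class HierarchicalMemory:
    """Toy three-level memory: patterns, wisdom, laws (illustrative only)."""

    def __init__(self):
        self.patterns = []  # each entry: embedding, text summary, confidence
        self.wisdom = []    # stored reasoning trajectories
        self.laws = []      # domain constraints, e.g. "forecast >= 0"

    def add_pattern(self, embedding, summary, confidence=1.0):
        self.patterns.append({"emb": np.asarray(embedding, dtype=float),
                              "summary": summary,
                              "conf": confidence})

    def retrieve(self, query_emb, k=2):
        """Rank stored patterns by cosine similarity weighted by confidence."""
        q = np.asarray(query_emb, dtype=float)

        def score(entry):
            sim = entry["emb"] @ q / (
                np.linalg.norm(entry["emb"]) * np.linalg.norm(q) + 1e-9)
            return sim * entry["conf"]

        return sorted(self.patterns, key=score, reverse=True)[:k]

    def update_confidence(self, entry, error, lr=0.1):
        """Shrink an entry's confidence when it led to a high forecast error."""
        entry["conf"] = max(0.0, entry["conf"] - lr * error)

mem = HierarchicalMemory()
mem.add_pattern([1.0, 0.0], "weekday morning ramp-up")
mem.add_pattern([0.0, 1.0], "weekend flat load")
top = mem.retrieve([0.9, 0.1], k=1)
print(top[0]["summary"])  # -> weekday morning ramp-up
```

The key idea is that retrieval scores mix similarity with accumulated confidence, so entries that repeatedly mislead the forecaster fade out of the top-k over time.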
- ✅ Hierarchical Memory: Structured storage of trend summaries, reasoning trajectories, and physical constraints.
- 🔗 Wisdom-Driven Exploration: Samples multiple reasoning paths and selects the optimal trajectory based on semantic consistency.
- 🔄 Rule-Based Reflection: Iterative refinement mechanism that enforces domain-specific constraints (e.g., non-negativity).
- 📊 Dynamic Adaptation: A confidence adaptation strategy that allows the model to evolve continuously in non-stationary environments.
- 🏆 SOTA Performance: Consistently outperforms deep learning and LLM-based baselines on Energy, Electricity, and Weather benchmarks.
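The rule-based reflection step above can be sketched as a small check-and-repair loop: the forecast is tested against domain rules (here only non-negativity, the example the feature list gives) and repaired until all rules pass or an iteration budget is exhausted. The `reflect` function and the rule format are illustrative assumptions, not MemCast's actual API.

```python
def reflect(forecast, rules, max_iters=3):
    """Iteratively repair a forecast until every rule's check passes.

    rules maps a rule name to a (check, fix) pair:
      check(forecast) -> bool, fix(forecast) -> repaired forecast.
    """
    fc = list(forecast)
    for _ in range(max_iters):
        violations = [name for name, (check, _) in rules.items()
                      if not check(fc)]
        if not violations:
            break
        for name in violations:
            fc = rules[name][1](fc)  # apply the rule's repair
    return fc

# Non-negativity: clip any negative values up to zero.
rules = {
    "non_negative": (lambda f: all(v >= 0 for v in f),
                     lambda f: [max(v, 0.0) for v in f]),
}
print(reflect([3.2, -0.5, 1.1], rules))  # -> [3.2, 0.0, 1.1]
```

In MemCast these constraints come from the "general laws" level of the memory; the same loop structure accommodates other physical rules (e.g. capacity upper bounds) by adding more `(check, fix)` pairs.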
git clone https://github.com/Xiaoyu-Tao/MemCast-TS
cd MemCast-TS
conda create -n memcast python=3.10
conda activate memcast
pip install -r requirements.txt

MemCast is evaluated on diverse real-world datasets with rich contextual features:

- Energy: NP, PJM, BE, FR, DE
- Electricity: ETTh1, ETTm1
- Renewable Power: Windy Power, Sunny Power
- Hydrology: MOPEX

The training and evaluation datasets used in our experiments can be downloaded from Google Drive. Please place them in the `dataset` directory.
mkdir dataset
# Download datasets to ./dataset/
sh scripts/accumulation/build_memory.sh
sh scripts/short_term/NP.sh
sh scripts/long_term/ETTh.sh
Full Results:
Overall Performance: MemCast consistently achieves the best or second-best results across datasets, surpassing Time-LLM and TimeReasoner.