Performance Forecasting Models

Summary

Performance forecasting models are tools and methods used to predict how systems or businesses will perform in the future, using historical data and specialized algorithms. Recent advances in these models—including those based on deep learning and large language models—are making forecasts more accurate and accessible, even for complex or non-traditional data formats.

  • Choose your approach: Consider whether top-down, bottom-up, or driver-based forecasting best matches your business needs and data availability.
  • Explore new technologies: Investigate advanced models like time series foundation models or language-based regression tools to improve accuracy and reduce manual data effort.
  • Adapt and update: Regularly refine your forecasting models and incorporate relevant operational detail so your predictions stay accurate as conditions change.
Summarized by AI based on LinkedIn member posts
  • View profile for Mandar Karhade, MD PhD

    AI ML GenAI + MD + Epidemiology Statistics + Strategy 🅲🅰🅻🅻 🅼🅴

❓ What is the biggest headache in existing business applications? The data -- data in a format that can't be used as-is. "Predicting System Performance with Text-to-Text Regression," from Google, Cornell University, and North Carolina State University, might have chipped away at that barrier. Key takeaways:

    ➡️ Performance prediction for large systems is challenging due to complex, non-tabular data; traditional tabular methods struggle with feature engineering.
    ➡️ Text-to-text regression using a Regression Language Model (RLM) processes raw system data (like YAML strings) directly, bypassing manual feature engineering (sketched below).
    ➡️ The RLM, even at moderate parameter counts (60M), achieves high accuracy (up to 0.99 rank correlation and 100x lower MSE on Borg) on complex system performance prediction.
    ➡️ The pretrained RLM demonstrates strong few-shot adaptation to new tasks and system configurations with minimal fine-tuning data.
    ➡️ The RLM can model outcome distribution densities and quantify prediction uncertainty.
    ➡️ Ablations highlight the importance of the encoder-decoder architecture for complex inputs and show that performance gains plateau around O(100M) parameters for this regression task.

    Impact score: 🟩🟩🟩🟩🟩🟩🟩🟩🟩 (9/10)

    This work provides a scalable, general framework for predicting performance in complex systems by leveraging text-to-text models to handle non-tabular data. It reduces the need for labor-intensive feature engineering and enables more accurate simulation and optimization of large-scale industrial systems. #MachineLearning #SystemsEngineering #PerformancePrediction #AI
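    A minimal sketch of the text-to-text regression idea described in this post, using an off-the-shelf T5 checkpoint as a stand-in for the paper's RLM. The checkpoint, prompt prefix, and YAML fields are illustrative assumptions, not the authors' actual setup:

    ```python
    # Sketch only: "t5-small" stands in for a pretrained RLM, and the
    # YAML schema below is invented for illustration.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Raw system state serialized as YAML -- no manual feature engineering.
    system_state = """
    job:
      cpu_request: 4.0
      memory_gb: 16
      priority: 200
      replicas: 12
    """

    # The target metric is emitted by the decoder as a string.
    inputs = tokenizer("predict efficiency: " + system_state, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(prediction)  # a trained RLM would emit a numeric string like "0.73"
    ```

    Because the decoder emits the target as text, the same model can also sample multiple outputs per input to approximate a predictive distribution, which is how the uncertainty quantification described above becomes possible.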

  • View profile for Carl Seidman, CSP, CPA

    Premier FP&A and Excel education you can use immediately | 250,000+ LinkedIn Learning Students | Adjunct Professor in Data Analytics @ Rice | Microsoft MVP | Join my newsletter for Excel, FP&A + financial modeling tips👇

Most small businesses default to two forecasting methods: top-down or bottom-up. But both share the same problem: the "why" behind performance isn't explained.

    These approaches are easy to model and are used all the time. But they can easily fail as companies grow larger and more driver-dependent.

    (1) Top-down forecasting

    Many companies favor top-down because it's simple and aligned with strategic goals. But its biggest drawback is that it's often completely disconnected from operational reality. I use it for high-level financial forecasting and hardly ever for operational planning.

    • Leadership sets growth or margin targets
    • The P&L is segmented into business units
    • These targets cascade down the statements
    • Line items are forecast on high-level assumptions

    (2) Bottom-up forecasting

    Bottom-up forecasting is based on detailed inputs such as sales to customers, sales by SKU, hiring plans by individual versus job category or department, expense budgets, etc. The benefit of bottom-up is that it's detailed and grounded in operations. But it's usually time-consuming, fragmented, and hard to roll up consistently.

    • Individual contributors come up with their numbers
    • They share them with an accountant or financial analyst
    • The accounting/finance person puts them into a model
    • The model is updated constantly with new details

    (3) Driver-based forecasting

    Rather than high-level assumptions that don't tie into operations, or granular detail that doesn't separate signal from noise, driver-based forecasting combines the best of both. In this example for a professional staffing company, we can tie future revenue to placements per recruiter, contract duration, markup percentage, bill rates, and recruiter headcount. This gives FP&A the ability to flex operating assumptions, test them, and quickly see what can be done on the ground to influence results (see the sketch after this post).

    The differences between the three methods matter:

    Top-down may set revenue at $50 million based on an 8% growth rate. We can ask, "How do we increase growth?"

    Bottom-up may set revenue at $50 million based on a monthly forecast of 200 customers. We can ask, "What do we expect from each customer?"

    Driver-based planning may arrive at the same $50 million but ask, "What operational levers can we press to truly move revenue and margin?"

    The result is forecasts that are faster, more explainable, and easier to update.

    💡 If you want to explore next-level modeling techniques, join live with 200+ people for Advanced FP&A: Financial Modeling with Dynamic Excel Session 2. https://lnkd.in/emi2xFdZ
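    A toy sketch of the driver-based logic in this post, in Python rather than a spreadsheet. All driver values, and the convention that markup is applied to the pay rate, are made-up assumptions for illustration:

    ```python
    # Toy driver-based revenue model for the staffing example above.
    # Every input is an operational lever FP&A can flex and test.

    def staffing_revenue(recruiters: int,
                         placements_per_recruiter: float,  # per month
                         contract_months: float,
                         bill_rate: float,                 # $ billed per hour
                         markup_pct: float,                # markup on pay rate
                         hours_per_month: float = 160.0) -> dict:
        placements = recruiters * placements_per_recruiter * 12  # annualized
        revenue = placements * contract_months * hours_per_month * bill_rate
        # If bill = pay * (1 + markup), margin is markup / (1 + markup) of revenue.
        gross_margin = revenue * markup_pct / (1 + markup_pct)
        return {"placements": placements, "revenue": revenue,
                "gross_margin": gross_margin}

    # Flex a single lever and see the impact immediately:
    base = staffing_revenue(recruiters=25, placements_per_recruiter=2.0,
                            contract_months=6, bill_rate=85, markup_pct=0.5)
    more = staffing_revenue(recruiters=25, placements_per_recruiter=2.2,
                            contract_months=6, bill_rate=85, markup_pct=0.5)
    print(f"revenue lift: ${more['revenue'] - base['revenue']:,.0f}")
    ```

    Because every input is an operational lever, flexing one driver (here, placements per recruiter) answers the "what can we do on the ground" question directly instead of hiding it behind a growth-rate assumption.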

  • View profile for David Sauerwein

    AI/ML at AWS | PhD in Quantum Physics

Many forecasting use cases lack the data volumes to benefit from deep learning. The rise of time series foundation models (TSFMs) is changing this landscape. Here's a summary of the latest approaches and their potential impact.

    𝐓𝐡𝐞 𝐃𝐚𝐭𝐚 𝐒𝐢𝐳𝐞 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞

    Companies like Amazon, Google, and Zalando rely heavily on deep learning models for forecasting. Yet practitioners are often disappointed when they try deep learning on their own use cases: use cases with fewer than 10,000 time series rarely benefit from deep learning scaling laws.

    This situation mirrors natural language processing (NLP). Typical NLP use cases can't train effective transformer models because of limited document availability. Instead, they use foundation models trained on massive datasets by a handful of companies. These models, having learned the general structure of language, can be adapted to specific tasks through few-shot prompting, retrieval-augmented generation (RAG), or fine-tuning.

    𝐈𝐧𝐭𝐫𝐨𝐝𝐮𝐜𝐢𝐧𝐠 𝐓𝐢𝐦𝐞 𝐒𝐞𝐫𝐢𝐞𝐬 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥𝐬 (𝐓𝐒𝐅𝐌𝐬)

    TSFMs follow a similar concept. Companies with extensive compute and data resources develop foundation models with robust generalization capabilities, which can then be customized for specific use cases. The benefits of TSFMs are:

    1) Even small-scale use cases can benefit from advanced deep learning methods, with the prospect of multi-modality in the future (see below; comments)

    2) TSFMs can deliver impressive accuracy even in scenarios where standard methods (like XGBoost) struggle. An example is cold-start problems.

    𝐓𝐲𝐩𝐞𝐬 𝐨𝐟 𝐓𝐒𝐅𝐌𝐬

    There are two primary approaches to TSFMs:

    1. Pre-trained from scratch: built on vast sets of curated time series data. Examples: TimesFM (Google), TimeGPT (Nixtla), ForecastPFN, and Lag-Llama.

    2. Bootstrapped from LLMs: reuse the sequential structure LLMs learn from text, treating token sequences as a kind of time series. Examples: Chronos (Amazon) and Time-LLM.

    The choice of approach depends on the specific use case, including the methods for tailoring and explaining the models. (A zero-shot sketch with Chronos follows below.)

    𝐋𝐨𝐨𝐤𝐢𝐧𝐠 𝐀𝐡𝐞𝐚𝐝

    TSFMs hold promise for widespread deep learning adoption in forecasting. Many challenges remain, e.g., the integration of custom covariates, but these will be addressed over time. Meanwhile, the opportunity is vast: forecasting is crucial across industries, and the room to improve these models is huge too. For example, TSFMs could become multi-modal, integrating news articles for more comprehensive demand forecasting.

    I'm excited to see TSFMs grow and revolutionize forecasting, making advanced deep learning accessible and effective across a broad range of applications. Check out the overview article in the comments (image from there) and additional resources! #forecasting #deeplearning #machinelearning
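    A minimal zero-shot sketch with one of the TSFMs named above, Amazon's Chronos, based on the chronos-forecasting package's published interface (pip install chronos-forecasting); the checkpoint choice and history values are illustrative, and the exact API may have evolved:

    ```python
    # Zero-shot forecasting with a TSFM: no training on your own data,
    # which is exactly the small-data / cold-start scenario described above.
    import torch
    from chronos import ChronosPipeline

    pipeline = ChronosPipeline.from_pretrained(
        "amazon/chronos-t5-small",
        device_map="cpu",
        torch_dtype=torch.float32,
    )

    # Any short 1-D history works, e.g. twelve months of demand.
    history = torch.tensor([112., 118., 132., 129., 121., 135.,
                            148., 148., 136., 119., 104., 118.])

    # Returns samples shaped [num_series, num_samples, prediction_length],
    # i.e. a full predictive distribution, not just a point forecast.
    samples = pipeline.predict(history, prediction_length=6)
    point_forecast = samples[0].median(dim=0).values
    print(point_forecast)
    ```

    The sample-based output is what makes TSFMs attractive where XGBoost-style point forecasts struggle: quantiles for safety stock or service levels fall out of the same call.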

  • View profile for Sohrab Rahimi

    Partner at McKinsey & Company | Head of Data Science Guild in North America

Forecasting stands as a formidable challenge within statistics and machine learning, requiring complex data preparation and specialized models that demand deep expertise -- often encompassing entire careers. It merges mathematical precision with domain-specific insight, a fusion of art and science. We delve into some of the practical challenges of forecasting, especially in the business context, here: https://lnkd.in/ey8mXYBc

    The advent of large language models prompts the question: could these models simplify forecasting, reducing its complexity and the need for specialized knowledge? Recent advances in time series forecasting highlight the power of LLMs, marking a transformative shift in the field.

    Salesforce's MOIRAI model (https://lnkd.in/ex8h2Vt5), with its robust zero-shot forecasting capabilities and adept handling of multivariate data, has shown superior performance over traditional models by offering cross-domain versatility and advanced probabilistic forecasts.

    Similarly, the LAMP framework (https://lnkd.in/gSiDi9VQ) integrates LLMs into event prediction, significantly reducing errors and demonstrating LLMs' potential to refine operational processes through a thoughtful analysis of the "why" behind events.

    Moreover, TEMPO, a GPT-based model (https://lnkd.in/ebjT3EvM), has made strides in numerical modeling by decomposing time series into core components and guiding forecasting with prompts, showing remarkable accuracy improvements and adaptability to non-stationary data.

    As the latest innovation in time series forecasting, Chronos by Amazon (https://lnkd.in/eqgFruwU) stands at the forefront. Through tokenization, scaling, and quantization, Chronos transforms numerical sequences into a format readily processed by language models, akin to converting numbers into words (a sketch of this step follows below). Combined with data augmentation methods like TSMix and KernelSynth, Chronos represents a significant leap forward.

    Chronos excelled across two benchmarks. On familiar scenarios (Benchmark I), it outperformed both traditional and specialized models, especially on seasonal data. On completely new datasets (Benchmark II), it exceeded traditional methods and matched or surpassed deep learning models, and simple fine-tuning boosted its performance significantly, demonstrating Chronos's adaptability as a versatile tool for accurate forecasting across diverse scenarios.

    Collectively, these studies underscore LLMs' role in setting new benchmarks for forecasting -- scalable, adaptable, and more precise -- revolutionizing traditional, labor-intensive forecasting methods.
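    A rough sketch of the scaling-and-quantization step described above, under the assumption of mean scaling and uniform binning; the bin count and value range here are illustrative, not the paper's exact settings:

    ```python
    # Chronos-style "numbers into words": scale a real-valued series,
    # then bin it into a fixed vocabulary of token ids an LM can consume.
    import numpy as np

    def tokenize_series(values: np.ndarray, n_bins: int = 4094,
                        low: float = -15.0, high: float = 15.0) -> np.ndarray:
        scale = np.abs(values).mean() or 1.0          # mean scaling
        scaled = values / scale
        edges = np.linspace(low, high, n_bins + 1)    # uniform quantization
        return np.clip(np.digitize(scaled, edges), 1, n_bins)  # ids 1..n_bins

    series = np.array([10.0, 12.5, 11.0, 14.0, 13.5])
    print(tokenize_series(series))  # discrete tokens, ready for an LM
    ```

    Once the series is a token sequence, forecasting reduces to next-token prediction, and decoding the sampled bins back through the scale factor yields probabilistic forecasts.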
