Techniques for Better Decision Making

Explore top LinkedIn content from expert professionals.

  • View profile for Rakesh Gohel

    Scaling with AI Agents | Expert in Agentic AI & Cloud Native Solutions | Builder | Author of Agentic AI: Reinventing Business & Work with AI Agents | Driving Innovation, Leadership, and Growth | Let’s Make It Happen! 🤝

    139,156 followers

    A 160+ page guide covers the top questions about multi-AI agents, from ideation and design to deployment; here's everything they share.

    One of my favorite things to read about is the production and deployment of agentic systems, especially from those building the tools that make it possible to observe and improve these systems. And this report is just that.

    📌 It addresses a critical industry problem: single, powerful agents often fail at complex, interconnected tasks, but multi-agent systems are expensive. So what to do? The report provides the technical blueprint and strategies necessary to make harder decisions easier for most enterprises.

    After reading the report, these 5 points stood out to me the most:

    1. Start simple: Begin with 2 agents (e.g., Generator + Validator). Only add complexity if single-agent prompt engineering fails. (A minimal sketch of this pattern follows below.)
    2. Match architecture to your problem: Use centralized for consistency, decentralized for resilience, hierarchical for complex workflows, or hybrid for enterprise-scale systems.
    3. Engineer context deliberately: Apply strategies like offloading, retrieval, compaction, and caching to avoid context failure modes (poisoning, distraction, confusion, clash).
    4. Isolate business logic from orchestration: Make your agent boundaries “collapsible” so you can merge them later if newer models handle the task alone.
    5. Instrument for observability from Day 1: Track Action Completion, Tool Selection Quality, and latency breakdowns to debug and improve systematically.

    📌 5 tips on how to build them responsibly:

    - Validate necessity first: Ask, can prompt engineering or better context management solve this? Are the subtasks truly independent?
    - Measure economics: Multi-agent systems often cost 2–5× more; ensure the ROI justifies it.
    - Design for model evolution: Assume today’s limitations (e.g., small context windows) may disappear; keep orchestration modular and removable.
    - Implement guardrails: Use validation gates, fallback agents, and human-in-the-loop escalation for low-confidence decisions.
    - Monitor continuously: Use tools like Galileo to detect context loss, inefficient tool use, and routing errors, then close the loop with data-driven fixes.

    Bottom line: Multi-agent systems are powerful when applied to the right problems, but they’re not a universal upgrade and should be used with caution because of cost and complexity.

    Full report link in comments 👇

    Save 💾 ➞ React 👍 ➞ Share ♻️ & follow for everything related to AI Agents
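    To make the "start simple" point concrete, here is a minimal, hypothetical sketch of a Generator + Validator pair. It assumes a generic `call_llm` helper standing in for whatever model client you use; the prompts, retry loop, and function names are illustrative and not taken from the report.

    ```python
    # Minimal Generator + Validator sketch (illustrative; not from the report).
    # `call_llm` is a placeholder for your model client of choice.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    def generate(task: str) -> str:
        # Agent 1: produce a draft answer for the task.
        return call_llm(f"Solve the following task:\n{task}")

    def validate(task: str, draft: str) -> tuple[bool, str]:
        # Agent 2: independently check the draft; return (passed, feedback).
        verdict = call_llm(
            "You are a strict validator. Reply 'PASS' if the draft fully solves "
            f"the task, otherwise 'FAIL: <reason>'.\nTask: {task}\nDraft: {draft}"
        )
        return verdict.strip().upper().startswith("PASS"), verdict

    def solve(task: str, max_rounds: int = 3) -> str:
        draft = generate(task)
        for _ in range(max_rounds):
            ok, feedback = validate(task, draft)
            if ok:
                return draft
            # Feed the validator's critique back to the generator and retry.
            draft = call_llm(
                f"Task: {task}\nPrevious draft: {draft}\n"
                f"Validator feedback: {feedback}\nProduce an improved draft."
            )
        return draft  # last draft; escalate to a human reviewer if still failing
    ```

    The two roles live in one process here on purpose: keeping that boundary "collapsible" (point 4) means both calls could later be replaced by a single stronger model without touching the surrounding code.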

  • View profile for Karan Chopra

    Co-Founder, Chairman and Co-CEO @ Table Space | Transforming Commercial Real Estate | Wellness and Fitness | BW 40 U 40 | Entrepreneur of the year 2025

    25,403 followers

    Being futuristic means giving tomorrow’s vision a shape today, down to the last detail.

    In enterprise workspace decisions, imagination often meets hesitation. It’s hard to commit to what you can’t fully see. That’s exactly where immersive tech is changing the game.

    With AR and VR, clients can now step into their future workspace before a single wall is built. They can walk the layout, explore their meeting rooms, feel the flow of teams across departments, and even visualise how their brand will live and breathe in the space. It’s a full-sensory preview.

    And when that clarity comes early:
    – Feedback becomes precise and productive
    – Approvals happen faster and with more conviction
    – Conversations shift from “what if” to “what’s next”

    It’s a mindset shift in how we design, collaborate, and deliver. When people can experience what they’re building, they align faster, take bolder decisions, and set a higher bar for execution. That’s the kind of clarity that excites us.

    #vrforbusiness #futureofworkspace #spatialtech #clientexperience #creinnovation #arvrinrealestate #designclarity #enterprisegrowth

  • View profile for Max Buckley

    Senior Software Engineer at Google

    28,363 followers

    The Inmates are Running the Asylum: The Unforeseen Consequences of LLMs Judging LLMs

    LLM evaluation is hard, and the most scalable tool we have for many applications is using LLMs as judges. Yet this practice of using LLMs as judges creates a potentially problematic self-referential loop. Could our heavy reliance on LLMs across content creation, retrieval, reranking, and evaluation inadvertently introduce or amplify biases within our information retrieval systems?

    A recent DeepMind paper considers the effects of combining multiple LLMs as we now often do in information retrieval — LLMs powering ranking, evaluation, and assisting in content creation — and then using yet another LLM to evaluate said system. Specifically, the authors highlight several interesting considerations, including providing the first empirical evidence of LLM judges exhibiting significant bias in favor of LLM-based rerankers.

    There is already an excellent paper showing that "Neural Retrievers are Biased Towards LLM-Generated Content"; however, this new research expands on that work. The DeepMind researchers used BM25 as their first-stage retriever and then compared a selection of LLM-based reranking methods. Their results reveal several key findings:

    (1) LLM judges are more lenient than human judges in their relevance assessments, in line with previous observations.
    (2) LLM judges exhibit a significant bias towards LLM-based rerankers, a phenomenon previously only hypothesized.
    (3) LLM judges demonstrate limited ability to discern subtle, yet statistically significant, performance differences between systems, potentially limiting their use in identifying small changes in system performance.

    The authors also provide a range of best practices for the use of LLMs as judges. Well worth a read if you are using LLM judges or even considering such a practice.

    DeepMind paper: https://lnkd.in/eipNy-9s
    The other excellent paper "Neural Retrievers are Biased Towards LLM-Generated Content": https://lnkd.in/eSVad-zr
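    For readers who have not set up an LLM judge before, here is a small, hypothetical sketch of the pairwise-judging setup the paper studies, with the common order-swap check added. The `call_llm` helper and prompts are placeholders; note that swapping presentation order only guards against position bias, not the reranker-favoring bias the paper measures.

    ```python
    # Hypothetical pairwise LLM-as-judge sketch (not the paper's implementation).
    # `call_llm` is a placeholder for whatever judge model you use.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your judge model here")

    def judge_once(query: str, passage_a: str, passage_b: str) -> str:
        prompt = (
            "Which passage better answers the query? Reply with only 'A' or 'B'.\n"
            f"Query: {query}\nPassage A: {passage_a}\nPassage B: {passage_b}"
        )
        return call_llm(prompt).strip().upper()[:1]

    def judge(query: str, passage_a: str, passage_b: str) -> str:
        # Ask twice with the passages swapped and require a consistent verdict.
        # This mitigates position bias; it does nothing about a judge's
        # preference for LLM-generated or LLM-reranked content.
        first = judge_once(query, passage_a, passage_b)
        second = judge_once(query, passage_b, passage_a)
        return first if {"A": "B", "B": "A"}.get(second) == first else "tie"
    ```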

  • View profile for Ian Koniak
    Ian Koniak is an Influencer

    I help tech sales AEs perform to their full potential in sales and life by mastering their mindset, habits, and selling skills | Sales Coach | Former #1 Enterprise AE at Salesforce | $100M+ in career sales

    98,671 followers

    Most sellers think their problem is not enough pipeline.

    But the real reason you’re missing quota? Your pipeline is full of ghosts. And it’s killing your close rate.

    Here's how top AEs are closing 62% of their deals:

    Every quarter, I review reps’ forecasts and see the same thing: 20 “opportunities.” But 10 have been sitting untouched for weeks. 4 have no executive sponsor. 3 are “waiting for next quarter.” And 2 are ghosts.

    That’s not a pipeline. That’s a list of wishes.

    And the reason most AEs never break through $300K–$500K is that they waste 80% of their energy trying to rescue bad deals instead of requalifying early.

    Here’s what the best 20% do differently 👇

    1. They sell to power. If you can’t get to a decision maker, you don’t have a deal. You can’t sell to someone who can’t say yes. No amount of “follow-up” will fix that.

    2. They confirm the proposal path early. If there’s no scheduled date to review pricing or an exec readout, the deal is not real. Top sellers always work toward a specific decision event. Everything else is motion without momentum.

    3. They anchor to pain and priority. If the buyer isn’t in pain, they’re not changing. Your job is to make them see that pain — the bottleneck, inefficiency, or risk blocking their goals. If they can’t name it, quantify it, or explain it… walk away.

    Because here’s the truth: The best sellers don’t chase deals. They qualify and requalify until they’re certain it’s worth their time.

    When I was at Salesforce, I’d cut my pipeline by 50% mid-quarter. My close rate doubled overnight.

    It’s not luck. It’s discipline.

    Stop bragging about how big your pipeline is. Start bragging about how clean it is.

  • View profile for Marcus Chan
    Marcus Chan is an Influencer

    Turn your pipeline into revenue by raising Disco→Close win rates 5-9 pts & cutting sales cycles up to 50% without adding headcount | B2B sales training & revenue consultant for CROs/Sales VPs | Ex‑Fortune 500 sales exec

    99,548 followers

    "Do you have budget? Who makes decisions? What's your timeline?" Your prospect just mentally checked out. They've heard these same three questions from every vendor who's called them this week. You're losing deals because you're QUALIFYING instead of DISCOVERING. After coaching hundreds of reps who crush small deals but lose the big ones, I've identified the #1 mistake in enterprise sales: → Treating discovery like a vendor interrogation instead of a trusted advisor conversation. Here's the reality: 10% of prospects will never buy, 10% will always buy, and 80% can be swayed either way. That middle 80%? They're won or lost in discovery. Most reps ask surface level questions and move on: "We're losing customers." "Got it. Next question." But top performers go DEEP: "How many customers exactly? What's the revenue per customer? What's your current churn rate? How does losing customers impact your ability to hit growth targets?" Suddenly you're not solving a "customer retention issue." You're solving a $300K annual revenue leak that's preventing them from hitting their board commitments. This is why I developed the POWERFUL framework: P - Pain  O - Opportunity cost  W - Wants and desires  E - Executive influence  R - Resources  F - Fear of failure  U - Unequivocal trust L- Little stuff" When prospects believe at a level 10 in all eight areas, deals roll fast. The hardest territory to manage is the one between your ears. When you change your mindset from "Do they qualify?" to "How can I understand their world?", you'll start winning those 6 and 7-figure deals you've been losing. One of my clients, Samantha, went from struggling with mid-market to closing 10 Fortune 500 logos in 5 months using this framework. Cold to close. Remember: Prospects don't buy from vendors who qualify them. They buy from advisors who understand them. Sales leaders: Stop training your reps to run through checklists. Train them to pull threads and go deep. Discovery isn't a step in your process - it's embedded in every conversation until close. — Reps: Book your call now to get the EXACT blueprint elite reps use to crush their quotas. https://lnkd.in/gr9u5Vgd Sales leaders: If you're serious about building a sales machine that consistently doubles results in 90 days, visit https://lnkd.in/ghh8VCaf

  • View profile for José Manuel de la Chica
    José Manuel de la Chica is an Influencer

    Global Head of Santander AI Lab | Leading frontier AI with responsibility. Shaping the future with clarity and purpose.

    15,304 followers

    AI meets Consensus? A New Consensus Framework that Makes Models More Reliable and Collaborative.

    This paper addresses the challenge of ensuring the reliability of LLMs in high-stakes domains such as healthcare, law, and finance. Traditional methods often depend on external knowledge bases or human oversight, which can limit scalability. To overcome this, the author proposes a novel framework that repurposes ensemble methods for content validation through model consensus.

    Key findings:
    - Improved precision: In tests involving 78 complex cases requiring factual accuracy and causal consistency, the framework increased precision from 73.1% to 93.9% with two models (95% CI: 83.5%–97.9%) and to 95.6% with three models (95% CI: 85.2%–98.8%).
    - Inter-model agreement: Statistical analysis showed strong inter-model agreement (κ > 0.76), indicating that while models often concurred, their independent errors could be identified through disagreements.
    - Scalability: The framework offers a clear pathway to further enhance precision with additional validators and refinements, suggesting its potential for scalable deployment.

    Relevance to multi-agent and collaborative AI architectures: This framework is particularly pertinent to multi-agent systems and collaborative AI architectures for several reasons:
    - Enhanced reliability: By leveraging consensus among multiple models, the system can achieve higher reliability, which is crucial in collaborative environments where decisions are based on aggregated outputs.
    - Error detection: The ability to detect errors through model disagreement allows for more robust systems where agents can cross-verify information, reducing the likelihood of propagating incorrect data.
    - Scalability without human oversight: The framework's design minimizes the need for human intervention, enabling scalable multi-agent systems capable of operating autonomously in complex, high-stakes domains.

    In summary, the proposed ensemble validation framework offers a promising approach to improving the reliability of LLMs, with significant implications for the development of dependable multi-agent AI systems.

    https://lnkd.in/d8is44jk
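    As a rough, hypothetical sketch of the consensus idea (not the paper's code): several independent models validate the same content, and it is accepted only when they agree. The `call_model` helper, the prompt, and the unanimity rule are assumptions made for illustration.

    ```python
    # Hypothetical ensemble-validation sketch: accept content only on consensus.
    from collections import Counter

    def call_model(model: str, prompt: str) -> str:
        raise NotImplementedError("plug in a client for each validator model")

    def validate_with(model: str, content: str) -> str:
        verdict = call_model(
            model,
            "Assess the following content for factual accuracy and causal "
            f"consistency. Reply with only 'VALID' or 'INVALID'.\n\n{content}",
        )
        return "VALID" if verdict.strip().upper().startswith("VALID") else "INVALID"

    def consensus_validate(content: str, validators: list[str]) -> str:
        votes = Counter(validate_with(m, content) for m in validators)
        # Unanimity required: disagreement flags the content for review rather
        # than silently trusting a single model, since independent errors tend
        # to surface as disagreement between validators.
        return next(iter(votes)) if len(votes) == 1 else "NEEDS_REVIEW"

    # Usage: consensus_validate(draft_answer, ["model-a", "model-b", "model-c"])
    ```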

  • View profile for Jay Gengelbach

    Software Engineer at Vercel

    18,816 followers

    Terms every engineer should know: Bike-shedding

    What it is: It's been observed that you can bring a committee a design for a nuclear reactor and get zero feedback, whereas if you bring a proposal for what color to paint the bike shed, you'll get dozens of conflicting opinions. This tendency to generate more discussion on trivial matters than complex ones has been popularly coined "bike-shedding."

    Why it's important: Both sides of this principle have dangerous implications. When a design is highly complex and highly consequential (the "nuclear reactor"), reviewers can be lax in giving it appropriate oversight, for a variety of reasons:
    - They assume the person bringing the design must know how to design a nuclear reactor properly. If they didn't, surely they wouldn't have tried!
    - Fully comprehending the design in order to review it is a heavy task, and reviewers may feel lazy.
    - Reviewers may assume another reviewer on the panel is more qualified to give a detailed review, and under-review as a consequence.
    - Reviewers may feel embarrassed to admit they do not understand the proposal.

    At the same time, trivial aspects of a design--perhaps the location and color of buttons--can be a lightning rod for discussion. They're widely understood, so nearly everybody feels qualified to offer an opinion on them. The impact of any choice is relatively small, so it's also hard to decisively claim one proposal is better than the others. The result can be that discussions spend far too much time on issues of little consequence and far too little on issues of grave importance.

    What to do about it: Designs can be greatly improved with good feedback. It's critical to ensure your review processes generate good feedback, not noisy feedback. When moderating design discussions, you should understand how to corral the feedback towards matters of consequence. Find tactful ways to terminate unproductive threads. One of my favorite quotes here is, "If you have data to support that proposal, let me see it. If we're just using one person's opinion, let's use mine." Likewise, ensure that critical concerns get appropriate consideration. If nobody has feedback, confirm whether they actually read and understood the design, rather than assuming silence is assent.

  • View profile for Cian Mcloughlin

    Founder & CEO Trinity | Amazon Bestselling Author | LinkedIn Top Voice | Top 50 Sales Keynote Speaker | Global Win Loss Expert On Complex, High Value Enterprise Sales

    12,726 followers

    "I haven't had an uninterrupted Christmas with my family for 15 years" That's what a senior sales leader shared, when we caught up recently. She's worked for the same global software company that entire time and their end of financial year, means it's all hands to the pump in December. I wasn't shocked to hear this...this is the reality for most Enterprise AE's. For most of my early corporate career, I was sweating on deals on Christmas Eve, New Years Eve and everyday in between. This type of pressure and stress is part and parcel of the Enterprise Sales World. It comes with the territory and in all honesty, many businesses and sales reps will make or break their year, in the next 2 weeks. So if you're an AE, Sales Leader or Quota carrier, holding out for that must-win deal to drop, this side of Christmas, here's a few strategies plucked from our Win/Loss customer insights that might just help: Tailor Every Single Proposal: So it reads like your client has written it. Validate your win-themes with your internal sponsor...generic, vanilla, cookie-cooker responses won't get the job done. Zero Pricing Ambiguity: Pricing confusion and Sticker shock are killing tons of deals at the moment. Make your pricing is easy to understand, to eliminate hesitation and smooth the decision process. Be The Low Risk Option: Risk has jumped to the top of your customers decision criteria. Right now, whoever does the best job of understanding, managing and mitigating Risk, is in the box seat for the deal. Be Super Responsive, But Not Pushy: Show respect for your customers buying process, first match where they are at, before moving forwards together. I guarantee some of you reading this, will lose deals in the next 2 weeks because your client felt undue pressure. Don't Believe The ROI Hype: You've got to provide clear, quantifiable ROI, not outlandish numbers, based on ill-informed assumptions. Don't just focus on hard $ ROI, what are the soft benefits and what's the cost of doing nothing? Get Two Bites Of The Cherry: We all know assumptions can kill us in sales. Base your proposals and discussions on detailed discovery, not guesswork, to demonstrate your understanding...but ideally, then get your internal sponsor to review a draft copy, before you submit (probity permitting of course) Simplify Complexity (Be Easy To Buy From): Deals fall over, due to confused and skittish senior leaders, uncomfortable making a buying decision. Remove the invisible friction, reduce any barriers to entry, be creative with your commercials and pricing models. Keep Your Team Consistent: Keep the same team involved throughout the process to build trust and ensure continuity. Make no mistake, they are buying your people first. Do Your Dry Runs: Final pitches can make (or too often break) a key deal, do your dry runs every time... I hope these quick tips help a few folks hit quota and earn some downtime over Christmas, until it all starts again next year!

  • View profile for Colin S. Levy
    Colin S. Levy is an Influencer

    General Counsel at Malbek | Educator Translating Legal Tech And AI Into Practice | Adjunct Professor | Author, The Legal Tech Ecosystem

    46,789 followers

    I've worked in-house for nearly my entire career. Some observations for those who want to be effective in-house lawyers:

    1) Stop leading with disclaimers. When executives seek guidance, they're looking for pathways, not barriers. Quantify impacts, propose alternatives, and frame discussions around business outcomes. Your credibility grows when you speak the language of metrics rather than maybe.

    2) Legal judgment divorced from business context is inherently flawed. Witness your company's customer interactions firsthand. Observe how products evolve from concept to market. Understand the competitive pressures your colleagues navigate daily. These experiences will reshape your counsel more profoundly than any legal treatise.

    3) Business moves at the speed of incomplete information. Develop the courage to make calculated recommendations without perfect clarity. Document your reasoning, advance the objective, and stand behind your judgment. Curiosity matters—but not when it becomes an excuse for inaction.

    4) True value comes from integration, not isolation. The most impactful legal professionals don't wait for invitations—they actively engage, anticipate strategic needs, and become indispensable to business outcomes.

    #legaltech #innovation #law #business #learning

  • View profile for Dan Harper
    Dan Harper is an Influencer

    Chief Technology Officer at AskYourTeam

    11,839 followers

    When coding with agentic AI, it's context that's king. If the LLM makes a misstep, the best next step is to:

    1. Take a moment to think about what may be missing: is it gaps in context, not enough detail in the prompt, or something that was misinterpreted?

    2. Ask your AI agent why it took the approach it did. This is not always accurate, but sometimes it can reveal what signals it took from your code to arrive at the conclusion.

    3. Correct the context gap. This may be additional detail needed for the task, further prompting, or pointing to an existing example in the codebase it can follow. If the misstep can be corrected via an automated test, make sure you ask your AI tool to create tests for it (a small example of such a test follows below).

    4. Improve context for future sessions. It's likely that the same gap will cause future missteps, so whatever additional information you provided should be kept somewhere for future reference. That could be added directly to all future contexts (e.g. a CLAUDE/AGENT md file or rules files) or to a separate markdown file that can be referenced for similar work in the future.

    Some codebases can be more difficult for an LLM to understand, and additional context or different techniques will be needed to get a good output. It can take some hours to map out comprehensive context and automated tests. You'll notice the difference though. Once you reach a point where there's enough guidance to an LLM, its decisions will improve dramatically.
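    As an illustration of point 3, here is a minimal, hypothetical regression test of the kind you might ask the agent to write once a misstep has been corrected. The `parse_invoice_date` function and the day-first date bug are invented for the example and inlined so it runs standalone; in a real codebase the fixed function would be imported from its module instead.

    ```python
    # Hypothetical regression test pinning down a corrected misstep (pytest style).
    from datetime import date, datetime

    import pytest

    def parse_invoice_date(text: str) -> date:
        # Corrected behavior: invoices are day-first (DD/MM/YYYY), not MM/DD/YYYY.
        return datetime.strptime(text, "%d/%m/%Y").date()

    def test_invoice_dates_are_day_first():
        # 03/04/2025 must be read as 3 April, not 4 March.
        assert parse_invoice_date("03/04/2025") == date(2025, 4, 3)

    def test_rejects_unsupported_formats():
        # The parser should fail loudly on formats it was not asked to support.
        with pytest.raises(ValueError):
            parse_invoice_date("2025-04-03T00:00:00")
    ```

    Once a test like this exists, the correction also belongs in the agent's persistent context (point 4), so the next session doesn't repeat the same misstep.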
