Novel Optimization – Evolutionary Algorithms
Indrakshi Dey
Head of Division - Programmable Autonomous Systems
Walton Institute for Information and Communication
Systems Science
PAS innovates in creating autonomous digital systems that enhance our
lives by merging the physical and digital worlds through advanced
computing and communication technologies, with minimal human input.
Research Focus
TA1 : Data-Space Formulation, Analysis and Prediction
TA2 : Network Management
TA3 : Learning and Intelligence
TA4 : Adaptive and Autonomous Algorithms
Overview of PAS Division
Programmable Autonomous Systems
Summary
PhD Students: 4 + 4 (External) | Staff: 13 | Ongoing Projects: 14
TA1 : Data-Space Formulation, Analysis and Prediction
TA2 : Network Management and Optimization
TA3 : Learning & Intelligence
TA4 : Adaptive & Autonomous Algorithms
The Problem – Life Is Uncertain
Every day, we make decisions under uncertainty.
Imagine your daily commute:
 🚙 Traffic jams
 🚇 Subway delays
 🌧️ Sudden rain
What’s the best choice when the future is unclear?
What Is Robust Optimization?
It’s not just choosing what’s usually best.
It’s choosing what works even when things go wrong.
✅ Plans that handle surprises
✅ Safer, more reliable decisions
Mini Example:
Driving is fastest on average...
But if there's traffic? You're stuck.
Subway is slower, but predictable = more robust.
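The mini example can be written out in a few lines of Python (the travel times below are made-up numbers, purely for illustration):

```python
# Hypothetical data: minutes observed for each commute option on different days.
times = {
    "drive":  [20, 22, 21, 75],   # usually fast, but one terrible traffic day
    "subway": [35, 36, 34, 37],   # slower on average, but predictable
}

def best_on_average(options):
    # Pick the option with the lowest mean travel time.
    return min(options, key=lambda k: sum(options[k]) / len(options[k]))

def most_robust(options):
    # Robust choice: minimize the worst case observed.
    return min(options, key=lambda k: max(options[k]))

print(best_on_average(times))   # driving wins on the mean...
print(most_robust(times))       # ...but the subway wins in the worst case
```

Same data, two different answers: the robust criterion trades a little average speed for protection against the bad days.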
Over Time – Learning & Adapting
Robust optimization over time means you:
 Observe what happens daily
 Adapt your choices
 Get smarter over time
Like adjusting your commute based on yesterday's mess.
The Big Idea
Robust Optimization Over Time =
Smart, flexible decisions that prepare for uncertainty
and improve with experience.
It’s how you win — not just today, but every day after.
Landscape Analysis
Imagine the search space of an optimization
problem as a landscape:
 Peaks = Good solutions
 Valleys = Bad solutions
 The algorithm is a hiker searching for the
highest peak.
Landscape analysis helps us understand:
 Is the terrain smooth or rugged?
 Are there many peaks (local optima)?
 How hard is it to reach the top?
Real-life Example:
Like hiking blindfolded in the mountains — you
want to understand the terrain before you step.
Different landscapes need different algorithms:
 Smooth slopes → Gradient-based methods
 Rugged, tricky terrains → Evolutionary algorithms or metaheuristics
Landscape analysis helps match the right
strategy to the right terrain.
Real-life Example:
Like choosing shoes for a journey — sneakers
for city streets, boots for mountains, snowshoes
for tundra.
What Do We Analyze? – Features of the Landscape
We measure:
 Modality: How many peaks?
 Ruggedness: How jagged is the surface?
 Neutrality: Are there flat areas?
 Global structure: Is there one dominant
peak or many?
These help predict algorithm difficulty.
Real-life Example:
Like a pilot checking terrain before flying:
bumpy? mountains? flat fields?
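These features can be estimated numerically. A toy sketch on an assumed 1D landscape, using grid-sampled proxies for modality (peak count) and ruggedness (mean neighbour-to-neighbour change):

```python
import math

# A toy 1D fitness landscape (an assumption, chosen only for illustration):
# a smooth slow wave plus a fast ripple that creates many local optima.
def f(x):
    return math.sin(3 * x) + 0.5 * math.sin(17 * x)

xs = [i / 500 * 10 for i in range(501)]   # grid over [0, 10]
ys = [f(x) for x in xs]

# Modality proxy: count strict local maxima on the grid.
peaks = sum(1 for i in range(1, len(ys) - 1) if ys[i - 1] < ys[i] > ys[i + 1])

# Ruggedness proxy: mean absolute change between neighbouring samples.
ruggedness = sum(abs(ys[i + 1] - ys[i]) for i in range(len(ys) - 1)) / (len(ys) - 1)

print(peaks, round(ruggedness, 4))
```

The fast ripple makes the peak count large even though the global shape is simple, which is exactly the kind of mismatch landscape analysis is meant to reveal.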
Once we know the landscape:
 We can pick or design better algorithms
 Reduce trial-and-error
 Benchmark and compare performance meaningfully
Landscape analysis helps us become strategic explorers, not random
wanderers.
Real-life Example:
Like using Google Maps to plan a hike — knowing elevation, danger zones,
and shortcuts.
What Is Theory in Evolutionary Algorithms Really About?
Theory is not about proving math for math’s sake.
It’s about understanding why and how evolutionary
algorithms (EAs) work.
Real-life Example:
Think of it like nutrition labels for your algorithm:
You want to know what's inside, how it behaves,
and if it’s good for your goal.
Theorists use models, simplified problems, and mathematical thinking to ask
things like:
• How fast does the algorithm find a solution?
• What happens if we change one part of it?
• Why does it get stuck sometimes?
Real-life Example:
Like testing a new recipe in a lab kitchen:
Control the ingredients, vary the steps, see what works best.
What Is the Community Trying to Understand Now?
Hot topics today:
 Complex real-world problems
 Dynamic environments
 Interactions in multi-objective or
coevolutionary settings
 Theory for modern EAs like neural-based
methods
Real-life Example:
Like scientists trying to understand a living
ecosystem:
More variables, interactions, and change over
time.
The Challenge – Expensive Black Boxes
Sometimes, evaluating a solution is slow and
expensive:
 Training a deep learning model
 Running a physical simulation
 Running a real-world experiment
How do we optimize when we can't afford to test
everything?
Real-life Example:
Imagine taste-testing cakes made by a slow,
expensive robot chef — you only want to try the
most promising recipes.
What Is Bayesian Optimization (BO)?
BO builds a probabilistic model (like a map) of
the solution space, guessing where the best
solutions might be — without needing to test
them all.
It combines:
 A Gaussian Process (surrogate model)
 An Acquisition Function (decision guide)
Real-life Example:
Like a treasure hunter using a metal detector:
It tells you where gold is likely AND where you
haven’t searched yet.
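A minimal sketch of the loop, assuming a 1D toy objective, an RBF-kernel Gaussian process, and the expected-improvement acquisition (all of these choices are illustrative, not a reference implementation):

```python
import numpy as np
from math import erf, sqrt, pi

def objective(x):                       # expensive black box (here: cheap stand-in)
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.2):                  # RBF kernel matrix between two point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and variance at query points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum("ij,ij->j", Ks, Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI balances "high predicted mean" against "high uncertainty".
    std = np.sqrt(var)
    z = (mu - best) / std
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + std * pdf

X = np.array([0.0, 0.5, 1.0])           # points evaluated so far
y = objective(X)
grid = np.linspace(0, 1, 101)           # candidate next guesses
mu, var = gp_posterior(X, y, grid)
nxt = grid[np.argmax(expected_improvement(mu, var, y.max()))]
print(round(float(nxt), 2))             # acquisition picks inside the promising gap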
The Magic of BO – Explore vs Exploit
BO asks:
👉 “Should I try a new idea (explore)?”
👉 “Or go deeper where I’ve seen success
(exploit)?”
This trade-off is handled by the acquisition
function, which finds the next best guess.
Real-life Example:
Like choosing a restaurant:
 Try a new, risky place you know little about?
 Or return to the favourite you trust?
BO & EAs – Not So Different After All
Both BO and EAs are:
 Black-box optimizers
 Derivative-free
 Good at handling noise and multiple
objectives
In fact:
 BO can use EAs to optimize its acquisition
function
 EAs can use surrogate models (like Gaussian
processes)
Real-life Example:
Like two teammates with different strengths —
one is fast and intuitive (EA), the other is smart
and cautious (BO).
Why BO Matters – Real-World Impact
BO shines in problems where every trial is
expensive:
 Tuning neural networks
 Engineering design
 Simulating supply chains
You get smarter decisions from fewer
evaluations.
Real-life Example:
Like a skilled archer with limited arrows — each
shot needs to count!
The Real World Has Rules
Evolutionary algorithms are like free-spirited
explorers…
But real-world problems often come with rules —
➤ “Don't exceed the budget”
➤ “Keep the weight under 10kg”
➤ “Stay within legal limits”
These rules are called constraints, and EAs need
help to follow them.
Real-life Example:
Imagine designing a backpack — it should carry a
lot, but not break your back or the airline's size
limit.
Smarter Tricks – Who Wins the Evolution Game?
The classic trick: add a penalty when a rule is broken.
But there’s a problem:
 Too soft? EA ignores the rules
 Too harsh? EA avoids the edges (where good solutions often
hide)
Tuning penalties is like tuning a car’s brakes — too loose or too
tight can both cause failure.
Instead of just penalties, EAs can:
• Prefer feasible solutions in tournaments
• Use multiple objectives: “fit” AND “rule-following”
• Learn from good solutions (cultural algorithms, immune
systems)
These ideas guide the search without hand-slapping.
Real-life Example:
Like a school competition where students are ranked by both
grades and behavior — being smart AND rule-abiding wins.
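The contrast between penalties and feasibility-first selection can be sketched on a toy packing problem (the selection rule follows Deb's feasibility ordering; the data is made up):

```python
# Toy problem: maximize value(x) subject to weight(x) <= 10.
def value(x):
    return x["value"]

def violation(x):
    return max(0.0, x["weight"] - 10.0)

def penalized_fitness(x, penalty=1.0):
    # Classic trick: subtract a penalty proportional to the rule violation.
    return value(x) - penalty * violation(x)

def tournament_feasibility_first(a, b):
    # Deb-style rules: feasible beats infeasible; among feasible, higher value
    # wins; among infeasible, smaller violation wins.
    fa, fb = violation(a) == 0, violation(b) == 0
    if fa and not fb:
        return a
    if fb and not fa:
        return b
    if fa and fb:
        return a if value(a) >= value(b) else b
    return a if violation(a) <= violation(b) else b

light = {"value": 8.0, "weight": 9.0}    # feasible
heavy = {"value": 12.0, "weight": 14.0}  # infeasible but high-value

# A penalty that is too soft still lets the rule-breaker win...
print(penalized_fitness(heavy, penalty=0.5) > penalized_fitness(light, penalty=0.5))
# ...while feasibility-first selection always prefers the legal design.
print(tournament_feasibility_first(light, heavy) is light)
```

Note that the feasibility-first rule needs no penalty coefficient at all, which is precisely why it avoids the "too soft / too harsh" tuning trap.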
Hybrid Techniques – Bringing In Experts
Some EAs team up with mathematical experts, like:
• Lagrange multipliers
• Gradient-based methods
• External solvers
These hybrids mix EA flexibility with precision tools to
handle tight constraints.
Real-life Example:
Like an adventurer calling a GPS expert when maps get
complicated — best of both worlds.
Why It Matters – Creativity With Boundaries
Constraint handling makes EAs useful in real-world
tasks:
 Engineering design
 Finance and logistics
 Scheduling and planning
It's about finding creative solutions within real
limitations.
Real-life Example:
Like designing a tiny home — you’re constrained by
space, but still want it to be functional and beautiful.
What Makes a Problem Stochastic?
In stochastic problems, the same solution can give
different results each time.
Why? Because there's randomness in the environment
or how outcomes are measured.
EAs must adapt to this uncertainty.
Real-life Example:
Like flipping a coin to decide a prize — sometimes you
win big, sometimes nothing, even with the same guess.
Why Are Stochastic Problems So Hard?
Randomness makes it tough to know if a solution is
truly good, or just lucky.
 False positives: a bad idea looks good once
 Missed gems: a good idea looks bad due to
randomness
EAs risk being misled by noise.
Real-life Example:
Like judging a restaurant after one meal — a great chef
could just be having a bad day.
How EAs Deal with Uncertainty
Evolutionary algorithms use smart strategies to stay
on track:
 Resampling: test the same solution more than once
 Statistical selection: compare averages, not one-offs
 Diversity: avoid putting all bets on one noisy winner
Real-life Example:
Like testing a new product: don’t just ask one person,
ask many and look for the pattern.
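Resampling is simple to sketch; here a hypothetical noisy evaluator (true quality plus uniform noise, both assumptions) shows why averages beat one-offs:

```python
import random

random.seed(1)

def noisy_eval(true_quality):
    # One measurement: the true quality blurred by +-3 of uniform noise.
    return true_quality + random.uniform(-3, 3)

def resampled_fitness(true_quality, n=50):
    # Average n repeated evaluations to wash the noise out.
    return sum(noisy_eval(true_quality) for _ in range(n)) / n

good, mediocre = 5.0, 4.0
# A single sample can rank these either way; the averages almost never do.
print(resampled_fitness(good) > resampled_fitness(mediocre))
```

The cost is extra evaluations per candidate, which is why resampling is usually balanced against population size and budget.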
Real-World Examples of Stochastic Optimization
EAs shine in stochastic settings like:
 Simulations with randomness (e.g., traffic or
factory workflows)
 Game AI, where opponents behave
unpredictably
 Financial modelling, where markets fluctuate
Real-life Example:
Like training a soccer robot to play in rain, wind,
and against random teams — adaptability is key.
Key Takeaway – Evolution Embraces Uncertainty
Evolutionary computation isn’t scared of
randomness — it uses it to learn and evolve better
strategies.
It's about:
 Testing over time
 Learning from patterns
 Staying robust in a noisy world
Real-life Example:
Like a musician improving by playing in noisy cafes
— learning to perform under uncertainty.
The Real-World Puzzle – Combinatorial Optimization
Combinatorial optimisation is everywhere:
 📦 Packing delivery trucks
 ✈️ Scheduling airline crews
 🖥️ Allocating cloud resources
 🏭 Managing factory workflows
But these problems are huge puzzles with too
many options to test them all.
Real-life Example:
Like trying to find the best route for 20 deliveries
with traffic, time windows, and fuel limits —
millions of possibilities.
Evolutionary Algorithms – Nature’s Problem Solvers
EAs search through these tough problems using:
 Mutation (small changes)
 Crossover (mixing solutions)
 Selection (survival of the fittest)
They’re powerful, but still need carefully
designed rules for each new problem.
Real-life Example:
Like breeding better plants — but you need to
know what traits to select and how to cross-breed
effectively.
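A minimal sketch of the three operators on the classic OneMax toy problem (maximize the number of 1-bits); all parameter values are illustrative choices, not tuned settings:

```python
import random

random.seed(0)

N, POP, GENS = 20, 30, 60            # bits per solution, population, generations

def fitness(bits):
    return sum(bits)                 # OneMax: count the 1-bits

def tournament(pop):
    a, b = random.sample(pop, 2)     # selection: the fitter of two random picks
    return a if fitness(a) >= fitness(b) else b

def crossover(p, q):
    cut = random.randrange(1, N)     # mixing solutions at a random cut point
    return p[:cut] + q[cut:]

def mutate(bits, rate=1.0 / N):
    return [b ^ (random.random() < rate) for b in bits]   # small changes

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))                 # climbs close to the optimum of 20
```

On a real combinatorial problem, the encoding and the operators would need the careful, problem-specific design the slide mentions; OneMax just makes the mechanics visible.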
Enter Machine Learning – Learning to Optimize
Machine learning (ML) steps in to:
✅ Learn good heuristics automatically
✅ Adapt to new versions of problems
✅ Create smarter decision-making policies
Instead of solving one problem, ML helps design
solvers that work across problems.
Real-life Example:
Like teaching a robot how to drive any kind of
vehicle — not just one car on one road.
Evolution + Learning = Next-Level Optimization
ML and EC together create a self-improving
optimiser:
 ML learns patterns, rules, and smart decisions
 EA evolves the best combinations
 Together, they explore AND learn over time
This combo reduces manual trial-and-error and
speeds up discovery.
Real-life Example:
Like an AI chef that learns new cuisines and
evolves better recipes — faster than any human
team could.
Why This Matters – Smarter Decisions at Scale
This powerful duo is already transforming
industries:
• 📦 Logistics & delivery planning
• 🌐 Cloud infrastructure management
• 🔧 Manufacturing & robotics
• 📅 Smart scheduling for hospitals &
airports
It's not about replacing humans — it's about
amplifying intelligence at scale.
Real-life Example:
Like giving a manager a super-assistant that learns
faster and tests thousands of strategies overnight.
Why Multiobjective? – Because Life Rarely Has One Goal
Real-world decisions involve trade-offs, not just
one “best” goal.
EMO handles problems with conflicting objectives,
like:
• Maximize speed and minimize cost
• Maximize quality and minimize waste
• Maximize profit and minimize risk
Real-life Example:
Buying a laptop: You want power, battery life, and
low price — but improving one often worsens the
others.
Why Evolutionary? – Because One Size Doesn’t Fit All
EMO uses evolutionary algorithms because they
explore many trade-offs at once, using populations
of solutions.
It’s not just finding one best solution — it’s mapping
the entire Pareto front of trade-offs.
Real-life Example:
Like a tailor showing you multiple suit styles to
choose from — not one perfect suit, but options
that fit different needs.
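Mapping the trade-offs starts with filtering out dominated options. A minimal sketch, using made-up laptop (price, weight) pairs where both objectives are minimized:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (minimization in all objectives).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated points: the set of defensible trade-offs.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

laptops = [(900, 2.0), (1200, 1.1), (1000, 1.5), (1100, 1.8), (950, 2.5)]
print(sorted(pareto_front(laptops)))
```

Only the cheap-but-heavy, balanced, and light-but-pricey options survive; the other two are strictly worse deals, which is exactly the "menu of suits" the front represents.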
Elitism, Archiving, and the Wisdom of the Past
EMO tracks the best solutions found so far — even if
they're not the “best” in all objectives.
This elitism helps preserve good ideas and build from
them, like memory.
Real-life Example:
Like a chess player keeping notes of great past games
— not all wins, but all insightful.
NSGA-II – A Legend Evolving
NSGA-II is one of the most widely used EMO
algorithms — the “Swiss Army knife” of EMO.
It’s known for:
• Fast sorting of trade-offs
• Diversity maintenance
• Simplicity and power
And now, we’re seeing exciting theoretical
upgrades and alternatives.
Real-life Example:
Like the iPhone of EMO — widely adopted, still
evolving, and now facing worthy challengers.
Where Is EMO Going? – Decision Makers, Visualization, and Machines
The future of EMO includes:
• Visual tools for understanding trade-offs
• Integrating the human decision maker
• Replacing DMs with intelligent agents
• Handling asynchronous objectives (like slow
simulations vs. fast models)
Real-life Example:
Like a smart travel planner showing you fastest,
cheapest, and greenest routes — and letting you or AI
decide.
From Random Guessing to Informed Choices
Traditional EAs use random crossover and mutation
to explore the search space.
MBEAs replace that randomness with learned models
that guide variation.
This means:
➤ Less guessing, more smart combining
➤ Better chance of improvement with each step
Real-life Example:
Like cooking with a recipe instead of throwing
random ingredients into a pot.
What Do These Models Learn?
MBEAs build models to learn:
🧬 Which variables depend on each other (linkages)
🎯 What patterns lead to good solutions
📊 Probabilities of good variable combinations
The algorithm samples new solutions based on
this learned structure.
Real-life Example:
Like a detective solving a puzzle by discovering
which clues are connected — not just guessing
randomly.
Estimation of Distribution Algorithms (EDAs)
One key MBEA family: EDAs
• Build a probability model from top solutions
• Sample new candidates from that model
• Avoids standard mutation/crossover
Ideal for black-box optimization: no gradients, just
smart sampling.
Real-life Example:
Like a fisherman learning where the fish tend to
be — and casting nets only in those areas.
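A univariate EDA in the UMDA style is easy to sketch on OneMax (the clamping and all parameter values below are illustrative assumptions):

```python
import random

random.seed(2)

N, POP, TOP, GENS = 20, 60, 20, 40   # bits, samples per generation, elite size, generations

def fitness(bits):
    return sum(bits)                 # OneMax: count the 1-bits

probs = [0.5] * N                    # the model: one probability per bit
for _ in range(GENS):
    # Sample a population from the current probability model.
    pop = [[int(random.random() < p) for p in probs] for _ in range(POP)]
    elite = sorted(pop, key=fitness, reverse=True)[:TOP]
    # Re-estimate each bit's probability from the elite, with clamping so
    # the model never fixes a bit completely (keeps some exploration alive).
    probs = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / TOP))
             for i in range(N)]

best = max(pop, key=fitness)
print(fitness(best))                 # the learned model concentrates on all-ones
```

No mutation or crossover appears anywhere: variation comes entirely from sampling the learned distribution, which is the defining move of the EDA family.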
GOMEA and Linkage Tree GA – Mixing with Purpose
Recent MBEAs like GOMEA and Linkage Tree GA
go a step further:
 Build hierarchical models of interacting
variables
 Use optimal mixing instead of random
recombination
 Preserve good parts of solutions while
improving others
Real-life Example:
Like selectively upgrading rooms in a house —
only renovate what needs improving, without
tearing down the whole structure.
Why Model-Based EAs Matter – Smarter, Faster Evolution
MBEAs shine in:
 Black-box and grey-box problems
 Complex structures with variable dependencies
 Problems where traditional EAs struggle to find
structure
Result:
✅ Fewer evaluations
✅ Higher-quality solutions
✅ Less trial-and-error in algorithm design
Real-life Example:
Like using Google Maps instead of wandering the
streets — the model helps you navigate
efficiently.
Thank you
for your time.
Any questions?
Indrakshi Dey
Indrakshi.dey@waltoninstitute.ie
