utk7arsh/ParticleFilterVisual
Particle Filter — Interactive Guide

A live, browser-based visualization of probabilistic robot localization. Watch hundreds of hypotheses form, collapse, and recover in real time — no robotics background required.

 

What you are looking at

A particle filter maintains a cloud of possible robot poses. Each sensor tick, it runs three steps:

| Step | What happens |
| --- | --- |
| Predict | Every particle moves with the same control input as the robot, plus random noise — the cloud spreads. |
| Update | Each particle's predicted range readings are compared against the true robot's sensor data. Particles that agree with the measurements get higher weight; the rest fade. |
| Resample | When the weight distribution becomes too skewed (low ESS), the cloud is redrawn proportionally — good hypotheses survive and multiply, bad ones die. |

The weighted mean of the surviving cloud is the filter's pose estimate. When the cloud tightens around the white dot, the filter is locked in.
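Under illustrative assumptions (a 1-D corridor, two landmarks, made-up noise values; none of this is the app's actual code), one full predict → update → resample tick can be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D world: two landmarks and range-only sensing.
LANDMARKS = np.array([20.0, 80.0])
MOTION_SIGMA, OBS_SIGMA = 1.0, 3.0

def predict(particles, control):
    # Every particle moves with the same control input, plus random noise.
    return particles + control + rng.normal(0.0, MOTION_SIGMA, len(particles))

def update(particles, true_pose):
    # Weight each particle by how well its predicted ranges match the
    # robot's true measurements (Gaussian likelihood, done in log space).
    z_true = np.abs(LANDMARKS - true_pose)          # robot's readings
    z_hat = np.abs(LANDMARKS - particles[:, None])  # each particle's prediction
    log_w = -np.sum((z_true - z_hat) ** 2, axis=1) / (2.0 * OBS_SIGMA**2)
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

def resample(particles, weights):
    # Redraw the cloud proportionally to weight (multinomial for brevity).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# One tick: the uniform cloud should contract toward the true pose.
true_pose, control = 50.0, 2.0
particles = rng.uniform(0.0, 100.0, 500)
true_pose += control
particles = predict(particles, control)
weights = update(particles, true_pose)
particles = resample(particles, weights)
estimate = particles.mean()  # the filter's pose estimate
```

After a single tick the mean of the surviving cloud should already land within a few units of the true pose; repeated ticks tighten it further.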


The interface

The app is split into two tabs accessible from the top navigation bar.

Learn

A linear narrative walkthrough — a hero section, six concept sections, and four embedded mini-canvases that each isolate one step of the filter cycle. No backend needed. Read it top to bottom before touching the Playground.

Playground

A three-column live simulation environment:

```
┌─────────────────┬──────────────────────────┬──────────────────┐
│  Controls &     │                          │   Telemetry &    │
│  Parameters     │       Canvas             │   Diagnostics    │
└─────────────────┴──────────────────────────┴──────────────────┘
                          Status bar
```

Controls

Simulation buttons

| Button | Effect |
| --- | --- |
| Pause / Resume | Freeze incoming frames. Useful when studying a single snapshot. |
| Autopilot | Toggle between autonomous sinusoidal wandering and manual driving. |
| Reset Particles | Scatter the cloud uniformly over the entire map. The robot stays in place. Watch reconvergence from scratch. |
| Kidnap | Teleport the true robot to a random position. Particles do not move. A classic stress test for global localization. |
| Step Mode | Disable continuous streaming. Advance one logical frame at a time with the Step button to inspect predict → update → resample beat by beat. |

Parameter sliders

Four sliders appear in the left panel. Adjusting them sends `set_params` to the simulation in real time.

| Slider | Range | What it controls |
| --- | --- | --- |
| N (particles) | 50 – 2000 | Cloud size. More particles = better coverage of the state space, higher compute cost. Try 50 to see degeneracy; try 2000 to see a dense, smooth cloud. |
| Motion noise σ | 0.5 – 30 | Uncertainty injected each predict step. High values spread the cloud quickly — the filter stays flexible but localizes less precisely. Required for kidnap recovery. |
| Observation noise σ | 1 – 80 | Width of the Gaussian likelihood over range measurements. High values make all particles seem equally plausible — convergence stalls. Low values snap the cloud tight but become brittle to sensor drift. |
| Speed | 1 – 10 | Robot velocity. Faster motion accumulates error faster; slow motion gives the filter more time to converge per unit distance. |

Reading the telemetry panel

The right panel exposes the filter's internal state live.

Estimated state

x̂  ŷ  θ̂   ← weighted mean of all particles

True state

x  y  θ    ← ground-truth robot pose (white dot on canvas)

Error

Euclidean distance between estimate and true pose, in canvas units. Below ~20 is good. Above ~80 means the filter has lost the robot.

Confidence gauge

Derived from the particle spread (σ_r of the cloud). A full bar means the cloud is tightly clustered. An empty bar means high uncertainty — the filter does not know where the robot is.
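One plausible way to derive such a gauge: compute the cloud's radial standard deviation about its centroid and map it linearly onto [0, 1]. The sketch below is illustrative; the two σ thresholds are made-up values, not the app's actual calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(particles, tight_sigma=5.0, lost_sigma=60.0):
    # sigma_r: radial std dev of the 2-D cloud about its centroid.
    centroid = particles.mean(axis=0)
    sigma_r = np.sqrt(np.mean(np.sum((particles - centroid) ** 2, axis=1)))
    # Linear map: full bar at tight_sigma or below, empty at lost_sigma or above.
    t = (lost_sigma - sigma_r) / (lost_sigma - tight_sigma)
    return float(np.clip(t, 0.0, 1.0))

tight_cloud = rng.normal(50.0, 1.0, (500, 2))   # converged filter
lost_cloud = rng.uniform(0.0, 100.0, (500, 2))  # just after Reset Particles
```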

N_eff (Effective Sample Size)

N_eff = 1 / Σ(wᵢ²)

Ranges from 1 (one particle holds all weight — degenerate) to N (all particles equally weighted — maximum diversity). Resampling triggers when N_eff < N/2. Watch this number drop immediately after kidnapping.
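With weights normalized to sum to 1, the formula above is a one-liner; this sketch normalizes defensively first:

```python
import numpy as np

def effective_sample_size(weights):
    # N_eff = 1 / sum(w_i^2), with weights normalized to sum to 1.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w**2)

uniform = np.full(200, 1.0 / 200)  # maximum diversity -> N_eff = N
degenerate = np.zeros(200)
degenerate[0] = 1.0                # one particle holds all weight -> N_eff = 1
```

For N = 200, resampling would trigger as soon as this value drops below N/2 = 100.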

Sparkline

A rolling 80-frame history of N_eff. Flat at the top = healthy, converged filter. A sudden valley = weight collapse, usually triggered by kidnapping or a noise spike.

Phase indicator (Predict / Observe / Resample)

Three labeled cards below the sparkline highlight the current step in the filter cycle. In Step Mode, only one lights up at a time so you can inspect each phase independently.


The canvas

| Visual element | Meaning |
| --- | --- |
| White dot (solid) | True robot position |
| Green cross | Filter's pose estimate — where the cloud thinks the robot is |
| Blue dots | Particles — each is one hypothesis about the robot's pose |
| Amber squares (α – ζ) | Landmarks. Six fixed beacons the robot uses as reference points. |
| Amber rings | Sensor range. Only landmarks within this radius contribute measurements. |
| Blue lines (beams) | Range measurements from robot to visible landmarks, drawn at observe time. |

What to try

**Watch cold-start convergence.** Press Reset Particles, then watch the cloud collapse from uniform noise to a tight cluster over 10–20 frames. ESS will dip during the first few updates as bad particles are culled, then recover as the cloud locks in.

**Stress-test with kidnapping.** Once localized (confidence gauge near full), press Kidnap. All particles are suddenly wrong. Watch ESS drop to near zero, the narrative banner turn red, and the estimate lag behind the true position. Increase motion noise to help the filter spread and recover faster.

**Tune the noise trade-off.** Set motion noise to 1, observation noise to 5, and let autopilot run. The cloud will be very tight — precise but fragile. Now kidnap the robot: it will struggle to recover. Raise motion noise to 15 and repeat. Recovery is faster but the cloud is noisier during normal operation.

**Observe degeneracy at N = 50.** Set N to 50 and observation noise to 5. Kidnap the robot repeatedly. You will often see the cloud collapse to a single wrong cluster (a few particles get all the weight) — the filter fails to recover. This is particle deprivation, a fundamental limitation of low-N filters.

**Step through a single cycle.** Enable Step Mode, then click Step one frame at a time. Watch the phase indicator cycle through Predict (blue) → Observe (green) → Resample (amber). Study how the beam drawing, weight updates, and cloud reshaping happen in sequence.


How the math works

The filter implements SIR (Sequential Importance Resampling):

Likelihood — each particle's weight is updated as:

wᵢ ∝ ∏ₖ exp( −(zₖ − ẑₖ)² / (2σ_obs²) )

where zₖ is the true range to landmark k and ẑₖ is the predicted range from particle i's pose.

Systematic resampling — O(N), low variance. One random offset is drawn; N evenly-spaced positions march through the cumulative weight distribution. High-weight particles are selected multiple times; zero-weight particles are dropped.

Resampling threshold — triggered when N_eff = 1 / Σ(wᵢ²) drops below N/2. This avoids wasting resamples when the cloud is already healthy and well-spread.
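Both pieces (systematic resampling plus the N_eff < N/2 trigger) fit in a short sketch. This is illustrative code, not the app's source:

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(weights):
    # One random offset, then N evenly spaced pointers marching through
    # the cumulative weight distribution: O(N), low variance.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

def maybe_resample(particles, weights):
    # Resample only when N_eff drops below N/2; otherwise leave the
    # healthy, well-spread cloud alone.
    n_eff = 1.0 / np.sum(weights**2)
    if n_eff < len(weights) / 2:
        particles = particles[systematic_resample(weights)]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# High-weight particles are selected multiple times; zero-weight ones die.
skewed = np.array([0.5, 0.5, 0.0, 0.0])
picked = systematic_resample(skewed)
# Uniform weights leave the cloud untouched (N_eff = N, above the threshold).
parts, ws = maybe_resample(np.arange(4.0), np.full(4, 0.25))
```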

About

Learning particle filters through visualization
