Measuring and improving Overall Equipment Effectiveness (OEE) starts with a working dashboard that turns machine signals, operator data, and ERP context into timely actions on the shop floor. This guide explains how to build an OEE dashboard that small CNC and contract shops can deploy quickly: identify the exact signals to capture, set reliable calculation rules, normalize noisy feeds, design visuals that trigger actions, and configure alerts that reduce manual work. Readers will get a practical pilot plan and troubleshooting checks so the first rollout yields validated OEE numbers and useful alerts rather than noise.
TL;DR:
Start with a minimal data checklist (cycle start/stop, spindle-on, part count, program name, operator ID) and capture >95% of events during a 2–4 week pilot
Define Availability, Performance, Quality with ISO-aligned formulas, set standard cycle times from CNC program extraction, and treat micro-stops with debounce logic
Configure alerts with duration-based debounce and escalation tiers (tablet prompts → SMS → maintenance ticket) and measure false positives during the pilot.
Any attempt to build an OEE dashboard must begin with a concrete data inventory. For small shops the goal is to capture the minimum signals that produce accurate Availability, Performance, and Quality metrics.
Record these signals where possible:
Cycle start/stop timestamps (ideal)
Spindle-on time
Part counts or good/bad part increments
Program name and line numbers
Axis motion or tool-change events (helpful for long cycles)
Controller alarms and state codes
Common interfaces that expose those signals include MTConnect and OPC UA, and discrete I/O taps when controller integration is not available. Reference implementations and examples are available on GitHub for OEE dashboards that consume controller data: a Performance Insight OEE demo on GitHub. For technical notes on reading cycle estimates from programs, see G-code cycle extraction methods below.
Capture operator ID, setup start/finish, and load/unload times. Even simple touch-buttons on a tablet for “setup complete” and “part inspected” reduce guesswork in Quality and Availability. If ERP or MES provides operator assignments, map those to controller program names.
Map program names and part numbers to ERP job IDs so OEE aggregates by job and route. For mapping patterns and best practices, see the shop floor data ERP integration guide: shop floor data ERP integration.
Capture downtime start/stop with reason codes. Even short categorical reasons (setup, tooling, program, material, break) will make downtime Pareto analysis actionable.
What-you-need checklist for a first pilot (1–3 machines)
Cycle start/stop or spindle-on timestamps
Part counts (good/bad)
Program name → job mapping
Operator ID (manual if no ERP)
Shift schedule and planned production time
When controllers expose limited telemetry, program outputs can be parsed for estimated cycle times; see internal guidance on how to extract cycle times from G-code or the workflow article on extract cycle time from CNC programs. Compare direct machine signals (more precise) with inferred cycle times (lower hardware cost) and choose based on available budget and desired precision.
Before dashboards appear, rules must be explicit. Misaligned definitions produce debates, not improvements.
Use clear formulas:
Availability = run time / planned production time
(run time excludes planned breaks and scheduled maintenance)
Performance = (ideal cycle time × total parts) / run time
(ideal cycle time is the standard time per part)
Quality = good parts / total parts
These align with ISO 22400 KPI definitions; shops can consult ISO guidance for formal definitions and terms.
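The three formulas above can be expressed as a small helper. This is a minimal sketch; the function name `oee` and the minute-based units are illustrative assumptions, not part of any standard API:

```python
def oee(run_time_min, planned_time_min, ideal_cycle_min, total_parts, good_parts):
    """Compute the three OEE pillars and their product.

    run_time_min must already exclude planned breaks and scheduled
    maintenance, matching the Availability definition above.
    """
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_min * total_parts) / run_time_min
    quality = good_parts / total_parts
    return availability * performance * quality, (availability, performance, quality)

# Example shift: 480 min planned, 400 min run, 1.5 min ideal cycle,
# 240 parts produced, 228 of them good.
score, (a, p, q) = oee(400, 480, 1.5, 240, 228)
# a = 0.833, p = 0.90, q = 0.95, OEE = 0.7125
```

Keeping the three pillars separate (rather than storing only the product) is what makes the drilldown tiles later in this guide possible.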
Define micro-stops and their treatment. Example rule set:
Treat stops shorter than X seconds (e.g., 20–30s) as micro-stops and optionally exclude them from Availability but flag them for performance analysis
Apply debounce: require a stopped state to persist for Y seconds before marking downtime (reduce false positives)
For planned stops (shift change, approved maintenance), remove from planned production time
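The micro-stop and debounce rules above can be sketched as a single classification pass over stop intervals. The thresholds and function name are illustrative assumptions; tune both values during the pilot:

```python
MICRO_STOP_MAX_S = 25   # stops shorter than this are micro-stops (tune 20-30s)
DEBOUNCE_S = 10         # a stop must persist this long before it counts at all

def classify_stops(stop_events):
    """Classify (start_s, end_s) stop intervals.

    Returns (downtime_s, micro_stop_count). Intervals shorter than the
    debounce window are discarded as sensor noise; micro-stops are counted
    but excluded from Availability, per the rule set above.
    """
    downtime_s = 0
    micro_stops = 0
    for start, end in stop_events:
        duration = end - start
        if duration < DEBOUNCE_S:
            continue                      # noise: ignore entirely
        if duration < MICRO_STOP_MAX_S:
            micro_stops += 1              # flag for performance analysis
        else:
            downtime_s += duration        # counts against Availability
    return downtime_s, micro_stops

# Stops of 5s (noise), 15s (micro-stop), 120s (downtime):
# classify_stops([(0, 5), (100, 115), (200, 320)]) returns (120, 1)
```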
Use normalized standard cycle times per program or part number. Establish these by:
Extracting cycle times from CNC programs (code-based estimates) — see the cycle time extraction workflow
Validating with several stopwatch or controller-derived samples
Using weighted averages for mixed models on the same program
For mixed-model runs calculate Performance using job-level aggregation and weighted ideal times (ideal cycle × actual count per model).
Choose windows that match operational decisions. Shift-level OEE helps supervisors; job-level OEE helps planners. Store raw events so dashboards can re-aggregate by shift, day, or work order.
For a deeper primer on KPI definitions and baseline calculations see the complete OEE guide. Industry coverage of OEE definition and use cases is also available from trade publishers like IndustryWeek for reference: What is OEE.
A dashboard is only as good as the feed behind it. This step is about plumbing: collecting, timestamping, normalizing, and validating.
Options, with trade-offs:
Direct controller connection (MTConnect, OPC UA): highest fidelity, higher initial setup
Edge devices with discrete I/O or current-sensing: low cost, good for spindle-on detection
G-code/program parsing: software-only option when controllers don't expose states; accuracy depends on program structure
For hands-on workflows, see the practical guide to implement cycle time monitoring.
Normalize these fields before aggregation:
Timezones and clock sync (use UTC for storage)
Part number formatting (strip leading zeros, standardize suffixes)
Map CNC program names to ERP job IDs and route steps; use the shop floor data ERP integration article for mapping patterns
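A minimal sketch of that normalization step, assuming JSON-like raw events; the field names and the program-to-job table are illustrative, not a real schema:

```python
from datetime import datetime, timezone

def normalize_event(raw):
    """Normalize one raw event before aggregation.

    - convert timestamps to UTC for storage
    - strip leading zeros and standardize case on part numbers
    - map the CNC program name to an ERP job ID via a lookup table
    """
    program_to_job = {"O1234": "JOB-7781"}   # hypothetical mapping table
    ts = datetime.fromisoformat(raw["ts"]).astimezone(timezone.utc)
    part = raw["part"].strip().upper().lstrip("0")
    return {
        "ts_utc": ts.isoformat(),
        "part": part,
        "job_id": program_to_job.get(raw["program"], "UNMAPPED"),
    }

evt = normalize_event({"ts": "2024-05-01T08:00:00-05:00",
                       "part": "00ax-12", "program": "O1234"})
# evt["part"] == "AX-12", evt["job_id"] == "JOB-7781",
# evt["ts_utc"] == "2024-05-01T13:00:00+00:00"
```

Unmapped program names surfacing as "UNMAPPED" is itself a useful pilot signal: it tells you where the ERP mapping table has gaps.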
Run lightweight checks during pilot:
Data completeness: aim for >95% event coverage for each monitored shift
Zero-count alerts: flag when run time has no part counts
Outlier detection: cycle times outside ±30% of the expected value should be highlighted
Gap detection: gaps > X minutes in expected stream should generate an ingest alert
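The four checks above can run as one lightweight pass per shift. This is a sketch under assumed inputs (event timestamps in minutes, a known expected event count); the function and field names are illustrative:

```python
def quality_checks(events, expected_events, run_time_min, part_count,
                   cycle_samples, expected_cycle_min, max_gap_min=15):
    """Run the pilot data-quality checks; returns a list of raised flags."""
    flags = []
    # Completeness: aim for >95% of expected events captured.
    if len(events) / expected_events < 0.95:
        flags.append("completeness")
    # Zero-count: run time recorded but no parts counted.
    if run_time_min > 0 and part_count == 0:
        flags.append("zero_count")
    # Outliers: any cycle sample outside +/-30% of the expected value.
    lo, hi = 0.7 * expected_cycle_min, 1.3 * expected_cycle_min
    if any(not (lo <= c <= hi) for c in cycle_samples):
        flags.append("cycle_outlier")
    # Gaps: consecutive event timestamps too far apart.
    ts = sorted(e["ts_min"] for e in events)
    if any(b - a > max_gap_min for a, b in zip(ts, ts[1:])):
        flags.append("stream_gap")
    return flags

# Three events with a 35-minute gap raise only the gap flag:
events = [{"ts_min": 0}, {"ts_min": 5}, {"ts_min": 40}]
quality_checks(events, 3, 40, 10, [2.0, 2.1], 2.0)  # ["stream_gap"]
```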
Start with methods that minimize downtime and operator changes. For example, tap spindle-on or use a small edge device to sense coolant or spindle current. If controllers provide program timestamps, prefer that for accuracy. Practical options and hardware setups are discussed in the article on implement cycle time monitoring and the G-code extraction guide extract cycle times from G-code.
Finally, log raw events and keep a transformation layer: if normalization rules change, you can re-process raw data without recollecting.
Design a dashboard that encourages action. Every chart should answer one operational question.
A recommended layout:
Top row: shop-level OEE % and each OEE pillar (Availability, Performance, Quality) as large tiles
Middle row: shift drilldowns and machine uptime gauges
Bottom row: per-job cycle-time distributions and downtime Pareto chart
A dense "operations view" is useful for production planners; a simplified "shop-floor view" (big tiles, one-line recommended actions) works better for supervisors and tablets.
Use these components and map exact data fields:
Top-line OEE (%): calculated from normalized events (Availability/Performance/Quality)
Sparkline trends: hourly OEE, last 24 hours
Machine uptime gauge: run time vs planned time
Cycle-time boxplots: per-job or per-program using raw cycle samples
Operator workload bars: operator ID vs active minutes
Downtime Pareto: total downtime minutes per reason code
Real-time dashboards should refresh at intervals that match decision cadence—typically 30–60 seconds for tablets, faster for control-room monitors. The benefits of real-time views and refresh choices are covered in real-time monitoring benefits.
For each tile, define a single action:
Availability < threshold → ping maintenance, open ticket
Performance drop > 10% → check tool wear or program change
Quality rate below X% → pause job and request quality inspection
Avoid non-actionable charts. If a chart doesn't map to "what someone will do next", remove it.
Prototype with sample data and iterate. Use tools like Power BI, Tableau, or Grafana for quick prototypes; see surveys of monitoring tools in best machine monitoring software. Integrate lightweight scheduling overlays with tools listed in free production scheduling tools and consider the scheduling features outlined in essential scheduling features.
A short demo helps stakeholders visualize dashboard layout choices, real-time widgets, and alert mapping before committing to a full build.
Accessibility tips: use colorblind-friendly palettes (avoid red/green pairs alone), provide numeric labels on tiles, and include mobile-friendly summary screens for tablets on the floor.
Alerts must prompt the right human or automated response. Too many false alarms and operators will ignore them.
Recommended alert types and settings:
Machine stopped > threshold (e.g., 2 minutes) → send tablet prompt to operator
OEE drop > X points versus rolling baseline (e.g., 10 points vs last 4 hours) → send email to supervisor
Cycle-time deviation > Y% (e.g., 25%) → create tool-check task
Operator idle > Z minutes (e.g., 10 min) → investigate assignment or changeover
Debounce: require condition to persist for a minimum duration (30–120 seconds depending on signal). Use suppression rules during known planned work.
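One way to sketch that debounce rule is a small stateful checker polled on each data refresh. The class name and polling model are assumptions, not a prescribed design:

```python
class DebouncedAlert:
    """Fire only after a condition persists for hold_s seconds."""

    def __init__(self, hold_s, suppressed=False):
        self.hold_s = hold_s
        self.suppressed = suppressed   # set True during planned work windows
        self._since = None             # when the condition first became true

    def update(self, condition, now_s):
        """Call on each poll; returns True once the condition has
        persisted past the hold window (and keeps returning True while
        it persists)."""
        if self.suppressed or not condition:
            self._since = None
            return False
        if self._since is None:
            self._since = now_s
            return False
        return now_s - self._since >= self.hold_s

alert = DebouncedAlert(hold_s=60)
alert.update(True, 0)    # False: condition just started
alert.update(True, 30)   # False: held only 30s
alert.update(True, 65)   # True: persisted 65s >= 60s
```

The suppression flag implements the "known planned work" rule: flip it on during scheduled maintenance so the hold timer never starts.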
Match channel to urgency:
Tablet prompts for operator-facing actions
SMS for urgent maintenance escalations outside normal hours
MES/ERP work orders to formalize maintenance or rework tasks
For playbook-level integration to trigger ERP/MES tasks, consult the integration guide: integrate shop-floor monitoring with ERP/MES.
Decide which alerts should trigger automation:
Non-critical: nudge operator on tablet, no escalation
Critical: auto-create maintenance ticket and page on-call technician
Safety/quality: stop scheduling further runs for the job and flag for inspection
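The three tiers above map naturally to a severity-to-actions table; the tier names and action strings here are hypothetical placeholders for whatever your notification and MES hooks are called:

```python
# Hypothetical tier map mirroring the three response levels above.
ESCALATION = {
    "non_critical":   ["tablet_prompt"],
    "critical":       ["tablet_prompt", "create_maintenance_ticket", "page_oncall"],
    "safety_quality": ["halt_job_scheduling", "flag_for_inspection"],
}

def actions_for(severity):
    """Return the ordered response steps for an alert's severity tier;
    unknown tiers default to the gentlest response."""
    return ESCALATION.get(severity, ["tablet_prompt"])

actions_for("critical")  # prompt, then ticket, then page on-call
```

Keeping escalation in a data table rather than scattered conditionals makes it easy to tune tiers during the pilot without code changes.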
Automating operator workload or dispatch after alerts is covered in the checklist to automate operator workload tracking and can feed into flexible schedules as in flexible schedule that adapts to downtime.
During pilot, log all alerts and tag them as true/false positives. Aim to reduce false positives below 10% before wider rollout. Use escalation tiers to limit alarm fatigue.
A structured pilot proves the system and creates buy-in.
Pilot scope:
2–4 representative machines, 2–4 week duration
Success metrics: data completeness >95%, alert precision >90%, reduced manual OEE reporting time by X hours/week
Validation checklist:
Reconcile part counts with QC records or order fulfillment
Compare extracted cycle times to stopwatch samples across 10–20 cycles
Audit downtime codes against operator reports for several shifts
Verify mapping from program name to ERP job ID by checking job numbers on actual work orders
Cycle-time standards should be updated after validation using the cycle time extraction workflow.
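For the stopwatch comparison in the validation checklist, a mean absolute percentage deviation is a simple acceptance statistic. This is a sketch; the function name and the 5% acceptance threshold mentioned in the comment are assumptions to adjust per shop:

```python
def cycle_time_deviation(extracted, stopwatch):
    """Mean absolute % deviation between extracted and stopwatch cycle
    samples (paired by index)."""
    devs = [abs(e - s) / s for e, s in zip(extracted, stopwatch)]
    return sum(devs) / len(devs)

dev = cycle_time_deviation([2.0, 2.1, 1.9], [2.05, 2.0, 2.0])
# about 0.041 (4.1% mean deviation); accept the extracted standard
# if this falls under your tolerance, e.g. 5%
```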
Rollout items:
Short training sessions (15–30 minutes) for operators and supervisors
Cheat-sheet with steps for responding to top 3 alerts
Feedback channel (digital form or quick huddles) to collect operators' observations
Use dashboard outputs to drive regular reviews:
Weekly downtime Pareto meetings to address top 2 reasons
Monthly review to refine standard cycle times and alert thresholds
Quarterly reassessment of monitored machines and expansion to next cohort
For planners, link the OEE outcome to scheduling decisions using scheduling concepts in manufacturing scheduling overview.
Troubleshooting checklist during pilot:
Missing events: check clock sync, edge device uptime
Mapping errors: confirm program-to-job rules and dedupe part IDs
Duplicate counts: verify that both controller and operator increments are not counted twice.
To build an OEE dashboard that drives throughput improvements, start small, instrument the right signals, use ISO-aligned calculation rules, and link every visual or alert to a single operational action. Run a short pilot, validate cycle times and counts, tune alerts to avoid noise, then scale methodically.
Calculate OEE at the job or lot level rather than the controller program if multiple part types run under the same program. Use weighted ideal cycle times: sum the product of each part's standard cycle time and its produced count, then divide by total run time for Performance. For Availability and Quality, keep the same definitions but tag events with part IDs so you can re-aggregate by model later. A simple rule is to treat mixed runs as separate virtual jobs in the ERP mapping layer so counts and ideals remain consistent.
When controllers lack native telemetry, use software parsing of G-code to estimate cycle times or install a minimal edge device to detect spindle-on, cycle start/stop, or coolant flow. The step-by-step guide to extract cycle times from G-code outlines program parsing workflows. For minimal hardware options, see the implement cycle time monitoring article for pragmatic setups that avoid costly controller retrofits.
Use duration-based debounce, tiered escalations, and only alert when the condition requires human action. Start with conservative thresholds and track false positives during the pilot period; reduce alert volume until operators treat notifications as reliable. Escalate in stages: a tablet prompt for operator attention, then email to supervisor, and only create maintenance tickets for persistent or safety/quality conditions. Document response steps for each alert so operators know what to do immediately.