

Buyer's Guide Kinetech · 2026

What to actually look for in a production monitoring & scheduling system

A practical framework for evaluating IoT platforms — the questions to ask, the red flags to catch, and the demos that separate real capability from slideware.

Introduction

The noise
is real.

Every manufacturing operation faces the same questions. Are we running efficiently? Are we meeting the schedule? Why did that job take twice as long as quoted? The market is full of answers: enterprise MES platforms that cost millions, and simple dashboards that look great in a demo but fall apart on a real floor. This guide is here to help you sort through the noise, ask better questions, and avoid writing a check for something that collects dust.

01 — Monitoring & KPIs

OEE gets the
attention.
It shouldn't
get all the
focus.

OEE is useful, but most plants actually live and die by metrics specific to what they make. A steel service center cares about tons per hour and linear feet per hour. A CNC shop watches spindle utilization and parts per shift. A packaging line tracks cases per minute and changeover frequency.

The monitoring system you buy needs to support whatever KPI you can define, not just OEE. That means throughput rates calculated from real production counts and run time. It means OEE however you calculate it, because every plant does it differently. It means time utilization, production-vs-target tracking, live machine-specific telemetry, and rates with context. "Linear feet per run hour" tells you machine capability. "Linear feet per shift hour" tells you operational efficiency. You need both, and the formulas should be yours to define.

What you need
Configurable from the ground up

Configurable KPIs. If you can express it as a calculation using your production data, the system should let you build it. "Footage produced per run hour," "run time as a percentage of total shift time," "tons produced per shift hour." You know your business better than the vendor does.
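The idea behind "if you can express it as a calculation, the system should let you build it" can be sketched in a few lines. This is an illustration only: the field names (`feet_produced`, `run_hours`, `shift_hours`) are hypothetical, and a production system would use a safe expression parser rather than raw `eval`.

```python
# Sketch: evaluating a user-defined KPI formula against production data.
# Field names are illustrative; a real platform would use a hardened
# expression parser instead of eval.

def evaluate_kpi(formula: str, data: dict) -> float:
    """Evaluate a KPI expression using only the supplied production fields."""
    allowed = {k: float(v) for k, v in data.items()}
    # Empty builtins so only the whitelisted data fields are visible.
    return eval(formula, {"__builtins__": {}}, allowed)

data = {"feet_produced": 12400, "run_hours": 6.2, "shift_hours": 8.0}

# "Linear feet per run hour" vs. "linear feet per shift hour":
# the same data, two different questions.
rate_per_run_hour = evaluate_kpi("feet_produced / run_hours", data)      # machine capability
rate_per_shift_hour = evaluate_kpi("feet_produced / shift_hours", data)  # operational efficiency
print(round(rate_per_run_hour), round(rate_per_shift_hour))  # 2000 1550
```

The same counters yield 2,000 ft/run-hour but only 1,550 ft/shift-hour, which is exactly the capability-vs-efficiency gap the section above describes.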

Downtime tracking that tells you why. Knowing a machine was down for two hours is useful. Knowing it was down 45 minutes waiting for material, 30 minutes for a blade change, and 45 minutes for an unexplained stoppage is actionable. The system should let operators classify downtime in seconds from a kiosk. That data feeds the Pareto chart that shows where you're actually losing time.
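The step from classified downtime events to a Pareto view is simple aggregation. A minimal sketch, using the sample reasons from the paragraph above as illustrative data:

```python
from collections import Counter

# Sketch: rolling operator-classified downtime events into a Pareto view.
# The (reason, minutes) events are illustrative sample data.

events = [
    ("waiting for material", 45),
    ("blade change", 30),
    ("unexplained", 45),
    ("waiting for material", 25),
    ("blade change", 10),
]

totals = Counter()
for reason, minutes in events:
    totals[reason] += minutes

# Sorted descending: the top categories are where the time is going.
for reason, minutes in totals.most_common():
    print(f"{reason}: {minutes} min")
```

With even a week of kiosk classifications, this ranking usually surprises someone: the biggest bar is rarely the reason people assumed.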

A chart builder, not just pre-built screens. Real-time KPIs show what's happening now, but supervisors spend most of their time looking back: daily production trends, weekly throughput, downtime patterns over the last month, shift-over-shift comparisons. Your team needs to build the views they need, when they need them.

An operator interface that operators will actually use. Large touch targets. Dark mode for shop floor visibility. Minimal clicks for common actions. If operators don't use the kiosk, your data is garbage.

Things that also matter

Configurable state detection, so you can define what "running" means per machine type using expressions with debounce timers to smooth out signal flicker. Configurable production units, so a saw counts cuts, a slitter counts linear feet, and a scale counts weight, all in the same plant. Flexible shift models that don't assume everyone works hardcoded 8-hour blocks. Per-machine overrides for when two machines of the same type have different PLC programs. Email alerts and scheduled reports for plant managers who won't log in every day.
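Debounced state detection is worth seeing concretely. A minimal sketch, assuming timestamped boolean samples and a hold window (the 1-second window and sample data are illustrative):

```python
# Sketch: debouncing a flickering run signal so brief dips (e.g. 500 ms
# during direction changes) don't register as stoppages. Timestamps are
# in seconds; the 1.0 s hold window is an illustrative setting.

def debounce(samples, hold=1.0):
    """samples: list of (timestamp, raw_running). A state change is only
    accepted after the raw signal has held the new value for `hold` seconds."""
    if not samples:
        return []
    state = samples[0][1]
    pending_since = None
    out = []
    for t, raw in samples:
        if raw == state:
            pending_since = None          # signal returned: cancel the change
        elif pending_since is None:
            pending_since = t             # start timing the candidate change
        elif t - pending_since >= hold:
            state = raw                   # held long enough: accept it
            pending_since = None
        out.append((t, state))
    return out

# The 0.5 s dip at t=2.0-2.5 is smoothed out; the real stop at t=5.0 sticks.
samples = [(0, 1), (1, 1), (2.0, 0), (2.5, 1), (3, 1), (4, 1), (5, 0), (6, 0), (7, 0)]
print(debounce(samples))
```

The dip never appears in the debounced output, so no false stoppage is logged, while the genuine stop is accepted once it has held for the full window.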

Red flags

Requires proprietary sensors or gateways.

KPIs are limited to OEE with no custom formulas.

One machine type definition for all machines of that model, with no per-machine overrides.

Dashboards are pre-built with no chart builder.

Requires an inbound network connection to your PLCs or machines.

Requires a dedicated server or PC on your site to run the platform.

What to ask in the demo
"Show me adding a machine type you haven't pre-configured."
Tests whether the system is truly configurable. Have them map fields, set up state rules, create production units, and define a KPI — live.

"I want tons per hour on the operator kiosk and linear feet per hour on the supervisor dashboard. Show me setting up both."
Tests whether KPIs are flexible or limited to OEE. If the only metric they can show is OEE, the system is too rigid.

"The run signal on this machine flickers for 500ms during direction changes. How do I prevent that from registering as a stoppage?"
Tests state rule debounce. If they can't demo it, you'll be cleaning up false stoppages forever.

"Two machines of the same type have different PLC tag names. How do I handle that without duplicating the entire machine type?"
Tests per-machine field mapping overrides.

"Show me a combo chart with daily production as bars and a rate KPI as an overlay line on a separate scale."
Tests dashboard flexibility. If they can only show one metric per chart, you'll end up with 20 charts where 5 would do.
A schedule that doesn't know what's happening on the floor is fiction by 10am.
02 — Scheduling

The closed
loop is
everything.

A monitoring system that doesn't know the plan can't tell you whether you're ahead or behind. The most valuable thing in a manufacturing IoT platform is the closed loop: the schedule says what should happen, the monitoring system says what is happening, and the gap between the two drives improvement.

"We thought we were running to plan. Now we can see exactly where and why we're not."
Core capabilities to require
Live schedule projection
The schedule should show not just the plan but where things actually are based on live production data: "Order 1234 is 60% complete, projected to finish at 2:15pm, 30 minutes behind the plan." This only works if scheduling and monitoring are integrated.
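The projection in that example is a straight extrapolation from live progress. A minimal sketch (the linear assumption and the order data are illustrative; real systems would account for planned breaks and rate changes):

```python
from datetime import datetime, timedelta

# Sketch: projecting an order's finish time from live production progress.
# The linear extrapolation and timestamps are illustrative.

def project_finish(started_at, now, pct_complete):
    """Extrapolate total duration from elapsed time and percent complete."""
    elapsed = now - started_at
    total = elapsed / (pct_complete / 100.0)
    return started_at + total

started = datetime(2026, 1, 15, 11, 0)
now = datetime(2026, 1, 15, 12, 57)
finish = project_finish(started, now, 60)       # order is 60% complete

planned = datetime(2026, 1, 15, 13, 45)
behind = finish - planned
print(finish.strftime("%I:%M %p"), f"({int(behind.total_seconds() // 60)} min behind plan)")
```

Plugging in the numbers from the example above, a 60%-complete order projects to a 2:15 PM finish, 30 minutes behind a 1:45 PM plan. None of this is possible if the scheduler never sees live production counts.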
Capability-aware scheduling
Planners can't schedule a 48-inch coil on a machine with a 36-inch capacity. Machine capabilities — max width, max weight, material types — should be defined once and enforced everywhere.
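"Defined once, enforced everywhere" reduces to a constraint check at assignment time. A minimal sketch with hypothetical machines and capability fields:

```python
# Sketch: enforcing machine capability constraints at schedule time.
# Machine names and capability fields are illustrative.

MACHINES = {
    "slitter-1": {"max_width_in": 36, "max_weight_lb": 20000},
    "slitter-2": {"max_width_in": 60, "max_weight_lb": 40000},
}

def can_run(machine_id, job):
    """Return True only if the job fits within the machine's capabilities."""
    caps = MACHINES[machine_id]
    return (job["width_in"] <= caps["max_width_in"]
            and job["weight_lb"] <= caps["max_weight_lb"])

job = {"width_in": 48, "weight_lb": 15000}
print(can_run("slitter-1", job))  # False: a 48-inch coil exceeds 36-inch capacity
print(can_run("slitter-2", job))  # True
```

In a real scheduler this check runs on every drag-and-drop, which is exactly what the demo question below probes.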
Sequence-dependent changeover optimization
Changeover from steel round to steel round might be 10 minutes. Changeover from steel round to aluminum flat bar might be 45. The scheduler should know this and group similar jobs to minimize total setup time.
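The payoff of sequence-dependent changeovers is easy to quantify. A minimal sketch, using a changeover matrix with the illustrative times from the paragraph above and a naive like-with-like grouping (real schedulers also weigh due dates and priorities):

```python
# Sketch: sequence-dependent changeover times. The matrix values mirror the
# illustrative 10-minute / 45-minute examples above.

CHANGEOVER_MIN = {
    ("steel-round", "steel-round"): 10,
    ("steel-round", "alum-flat"): 45,
    ("alum-flat", "steel-round"): 45,
    ("alum-flat", "alum-flat"): 10,
}

def total_setup(order):
    """Sum changeover minutes across consecutive job pairs."""
    return sum(CHANGEOVER_MIN[(a, b)] for a, b in zip(order, order[1:]))

jobs = ["steel-round", "alum-flat", "steel-round", "alum-flat"]  # worst case: alternating
grouped = sorted(jobs)                                           # like-with-like

print(total_setup(jobs), "vs", total_setup(grouped), "minutes of setup")  # 135 vs 65
```

Alternating families costs 135 minutes of setup; grouping the same four jobs costs 65. A scheduler with a fixed per-machine changeover number can never see that difference.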
Multi-operation routing
A coil that gets slit, then cut to length, then formed on a press brake is three operations on three machines with dependencies. The scheduler should handle this, showing the dependency chain and preventing Operation 2 from starting before Operation 1 finishes.
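A routing with dependencies is a small directed graph, and the ordering constraint is a topological sort. A minimal sketch using Python's standard `graphlib` (the operation names are illustrative):

```python
from graphlib import TopologicalSorter

# Sketch: a three-operation routing as a dependency graph.
# Each key maps an operation to the set of operations it depends on.

routing = {
    "slit": set(),                          # op 1: no prerequisites
    "cut-to-length": {"slit"},              # op 2 cannot start before op 1
    "press-brake-form": {"cut-to-length"},  # op 3 cannot start before op 2
}

order = list(TopologicalSorter(routing).static_order())
print(order)  # ['slit', 'cut-to-length', 'press-brake-form']
```

The same structure generalizes to routings where some operations can run in parallel; the sorter then yields any valid ordering that respects the dependency chain.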
Predicted durations from actual data
The best estimate for how long a job will take is "how long did similar jobs take?" — weighted by material, dimensions, machine, and recency. A scheduler connected to a monitoring system already has this data. It should use it.
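One common way to weight "similar jobs, favoring recent ones" is an exponential decay on job age. A minimal sketch; the half-life value, the job history, and the matching-on-attributes step are all illustrative assumptions:

```python
# Sketch: predicting a job's duration as a recency-weighted average of
# similar historical jobs (already matched on material, dimensions, and
# machine). The 90-day half-life is an illustrative tuning choice.

def predict_minutes(history, days_half_life=90):
    """history: list of (days_ago, minutes). Recent runs get more weight."""
    num = den = 0.0
    for days_ago, minutes in history:
        w = 0.5 ** (days_ago / days_half_life)  # weight halves every 90 days
        num += w * minutes
        den += w
    return num / den

# A 180-day-old 90-minute run counts far less than last week's 62 minutes.
history = [(10, 62), (45, 58), (180, 90)]
print(round(predict_minutes(history), 1))
```

The stale 90-minute outlier barely moves the estimate, which lands near the recent runs. The key point from the section stands: a scheduler wired to monitoring already has this history for free.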
Time-weighted schedule attainment
"We completed 8 out of 10 scheduled jobs" sounds great until you realize the eight were small 1-hour jobs and the two that got bumped were your biggest orders of the day. Count-based attainment hides the real story.
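The gap between the two attainment numbers is worth computing once. A minimal sketch using the scenario from the paragraph above (job hours are illustrative):

```python
# Sketch: count-based vs. time-weighted schedule attainment.
# Eight 1-hour jobs completed; an 8-hour and a 6-hour job bumped.

jobs = [(1, True)] * 8 + [(8, False), (6, False)]  # (scheduled_hours, completed)

count_attainment = sum(done for _, done in jobs) / len(jobs)
time_attainment = (sum(h for h, done in jobs if done)
                   / sum(h for h, _ in jobs))

print(f"{count_attainment:.0%} by count, {time_attainment:.0%} by hours")
```

Count-based attainment reads 80%; time-weighted attainment reads 36%. Same day, same schedule, and only one of those numbers tells you your two biggest orders slipped.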
Red flags

Schedule is disconnected from production monitoring. No live progress.

Single-operation jobs only, with no routing.

No capability constraints — any job can go on any machine.

Changeover time is a fixed number per machine, not based on what ran before vs. what runs next.

Duration estimates are manual entry with no historical basis.

Attainment is count-only. A bumped 1-hour order weighs the same as a bumped 8-hour order.

What to ask in the demo
"Show me the schedule with live production progress overlaid."
If the schedule only shows the plan with no actual progress, it's a static document, not a management tool.

"This order requires a machine with at least 24-inch jaw width. This machine only has 18 inches. What happens when I drag the order onto it?"
Tests capability constraints.

"Two jobs were bumped yesterday — a 1-hour order and an 8-hour production run. Show me the attainment."
Tests time-weighted attainment.

"How long will this new order take? Where does that estimate come from?"
Tests whether predictions come from actual production history or someone's guess.
03 — Integration

The biggest
ROI is in the
connections.

The biggest ROI isn't in any single module. It's in the connections between them. A monitoring system that detects a breakdown on the floor and auto-creates a maintenance work order — pre-filled with the machine, timestamp, and the operator's description — eliminates the "walk to a computer and file a ticket" gap that causes most breakdowns to go unreported.

When the maintenance system knows a PM is due, that PM should appear on the production schedule as an immovable block. When a breakdown happens mid-job, the schedule should automatically adjust: the current order block splits visually, a maintenance block inserts, and all downstream orders shift. The planner sees the impact immediately.

Each module should work on its own, because nobody wants to buy everything on day one. But when connected, data flows automatically between them. A breakdown on the floor creates a work order, adjusts the schedule, and eventually shows up as a cost variance on the delayed job. No manual re-entry. No spreadsheet gymnastics.

[Integration map]
Production monitoring (real-time floor visibility): machine state, OEE/KPIs, downtime log, order progress.
Scheduling (plan, optimize, track): live projection, attainment, duration estimates, sequencing.
Maintenance/CMMS (work orders, PMs, reliability): auto work orders, PM scheduling.
Executive dashboard: live state, next orders.
Flows between modules: completion data feeds duration predictions; a detected breakdown creates a work order; PM blocks appear on the schedule.
04 — Hardware

Simpler and
cheaper than
vendors make
it sound.

There are two practical approaches, and the right one depends on what's already on your machine. Both use off-the-shelf hardware — no proprietary anything.

Option A — Edge gateway + PLC

An off-the-shelf industrial edge gateway reads tags directly from the machine's PLC (Modbus, EtherNet/IP, PROFINET, OPC UA) and publishes data to the monitoring platform. The gateway pushes to the platform — nothing reaches into your PLC from the internet. One gateway can connect to multiple PLCs in the same area.

~$1,000 per machine · Best for machines built in the last 20 years

Option B — Bolt-on sensors

Add simple sensors (current transformers, proximity sensors, pulse counters) to the machine's existing electrical signals, wired to a compact off-the-shelf IoT gateway. No PLC access required. Works on any machine age — a 1990s hydraulic press with no PLC works just as well as a brand-new CNC.

~$500 per machine · Best for older legacy equipment

Many plants use a mix. PLC integration on newer machines, bolt-on sensors on legacy equipment, all feeding the same system.

On cybersecurity: Regardless of which approach you choose, data flow should be outbound-only. The gateway initiates an outgoing encrypted connection to a cloud broker. No inbound connections, no open ports, no VPN, no tunnel, no remote access to the gateway or anything behind it. Nothing is listening. There's no path from the internet to the machines. For sites where IT won't grant network access, edge gateways also support cellular connectivity — plug in power and sensors, and the gateway connects over its own modem. No site infrastructure required.
What to avoid

Some vendors sell subscription-priced sensor kits with proprietary gateways. They're expensive — $3,000–10,000+ per machine — for data you can get from a $500–1,000 setup. If you switch platforms, the sensors are paperweights. You're often paying a recurring hardware subscription on top of the software subscription. And the data they produce (run/idle/fault, cycle count) is identical to what standard industrial sensors provide.

05 — Infrastructure

Who controls
where your
data lives?

Many IoT platforms route your production data through the vendor's cloud. Your machine telemetry, order data, and production history sit in someone else's infrastructure — sometimes in a region or country you'd rather they didn't. It's worth asking about this early, because most vendors won't bring it up.

The platform is a managed service either way — the vendor handles software updates and support regardless. The real question is whether your IT and compliance teams need control over the underlying infrastructure.

A fully managed cloud option gets you started in days with no infrastructure to worry about. A deployment into your own Azure tenant means the broker, database, and application run in your tenant while the vendor still manages the software. Your data stays in your infrastructure. Both options deliver the same platform and the same support. The difference is who controls the infrastructure underneath.

06 — The Vendor

Cost is part
of it. What
they build
next
is
the rest.

A good vendor relationship starts with implementation, not a login and a knowledge base. On day one, a specialist should configure everything — machine types, field mappings, state rules, KPI formulas, dashboards, operator kiosk — with your team watching and learning the admin interface. You should walk away with a working system, not a project plan for building one.

After go-live, you should be able to run it yourself. Adding a machine, changing a KPI formula, building a dashboard, modifying a downtime code — all through the admin UI without calling anyone. If the implementation specialist could configure it, your team should be able to change it later.

That's the baseline. The bigger question is whether the vendor is building a platform that keeps up with you. If the system can't provide scheduling capability, CMMS, and cost-to-serve analytics, you'll end up stitching together point solutions or starting over when your needs get more complex. Look for a vendor that's focused on manufacturing and investing in the capabilities you'll need next year, not just the ones you need today.

07 — Getting Started

Crawl,
walk,
run.

Don't try to boil the ocean. The plants that get the most value from these systems start with a focused pilot and expand based on what the data reveals — not based on what the vendor's onboarding checklist says.

Phase 1 · Weeks 1–2
First machines live
Connect your first 3–5 critical machines. Configure machine types, field mappings, state rules, and KPIs. Deploy the operator kiosk with downtime classification. Build your first dashboards and validate the data. Within the first week, the downtime Pareto reveals where you're actually losing time. First data-driven conversation happens.
Phase 2 · Weeks 2–8
Expand and refine
Add machines in waves. Refine KPIs and dashboards based on what Phase 1 revealed — because you'll learn which metrics actually drive action vs. which just looked good in theory. Build views for different audiences: executives want a plant-level view, supervisors want to drill into their workcenter. As your team gets comfortable with the data, they'll ask better questions and want new views.
Phase 3 · When ready
Schedule and optimize
Connect monitoring to scheduling for a closed-loop operation. Begin tracking schedule attainment. Enable duration predictions from accumulated production history. Connect maintenance events to the schedule. Become the best-in-class operation in your industry.

The question isn't whether you need better visibility into your operation. The question is whether the system you're evaluating will actually give it to you — or just look great in the demo.

Ask harder questions. Demand live demos. Trust the data, not the deck.
Ready to evaluate?

See MACH run on
a real floor.

Not a polished demo environment. Not sample data. We'll show you exactly what the platform does — configured for your machine types, your KPIs, your operation.

Request a Demo