Quantifying lost tonnes due to re-handling from your TUM data
David Anderson
Mar 24
A worked example: integrating location metadata to quantify primary draw variance in underground mining
Time Utilisation Models (TUMs) are the backbone of underground mine reporting. They provide a structured, shift-by-shift picture of where time went, how much was productive, how much was lost to downtime, standby, and external delays, and whether each category tracked against its benchmark.
That is valuable. But there is more in that data that I think most operations should be extracting.
Cross-referencing TUM data with location-level metadata creates a layer of visibility that is genuinely useful at management level. It yields a single auditable view of production variance, expressed in tonnes with a clear owner on every category, and a further breakdown of primary and tertiary movements that surfaces something traditional TUM reporting cannot: the exact cost of re-handling ore, in lost primary draw tonnes.
What TUM data tells you
A standard TUM report gives you time category splits and benchmark comparisons. It tells you whether planned services ran over, whether blast re-entry ate into the shift, or whether utilisation was healthy. Experienced operators and supervisors know how to read it and what it means.
Where it stops is the translation into tonnes. Time variances reported in minutes don't immediately convey business impact: 40 minutes of lost face time at a planned rate of, say, 300 t/h is 200 tonnes. That conversion happens informally, drawing on the experience and intuition of the people in the room rather than a calculated number.
There is also one category where TUM data is often blind: what was happening inside productive time. A loader pulling ore from a stope face and a loader re-handling material from a stockpile both register identically as productive time. The time looks the same. The business impact is not.
The additional data you already have
The good news is that you likely don't need to capture any more data to obtain these extra insights.
To extract them, you need three additional data sources alongside your call-in log and TUM targets. All are standard outputs of the shift planning and reporting process:
Planned tonnes by location and shift
Actual tonnes by location and shift
Location metadata designating stope vs dev vs stockpile
No new data capture. No changes to existing systems. The methodology sits entirely on data that almost every operation already produces.
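As a rough sketch of what that join can look like, here is a minimal pandas example. The table and column names (shift_id, location_id, source_type, planned_tonnes, actual_tonnes) are hypothetical; every site's schema will differ.

```python
import pandas as pd

# Hypothetical schemas -- real column names will vary by site:
#   planned:   shift_id, location_id, planned_tonnes
#   actuals:   shift_id, location_id, actual_tonnes
#   locations: location_id, source_type ("stope" | "dev" | "stockpile")

def build_tonnes_view(planned: pd.DataFrame,
                      actuals: pd.DataFrame,
                      locations: pd.DataFrame) -> pd.DataFrame:
    """Join planned and actual tonnes per location per shift, then
    attach the metadata that tags each row as stope, dev, or stockpile."""
    tonnes = (
        planned.merge(actuals, on=["shift_id", "location_id"], how="outer")
               .merge(locations[["location_id", "source_type"]],
                      on="location_id", how="left")
               .fillna({"planned_tonnes": 0.0, "actual_tonnes": 0.0})
    )
    # Primary draw is ore pulled directly from the stope face;
    # anything sourced from a stockpile is tertiary re-handle.
    tonnes["is_primary"] = tonnes["source_type"].eq("stope")
    return tonnes
```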
What the integration unlocks
By joining these tables you can build a variance waterfall that walks from planned primary draw tonnes to actual primary draw tonnes through every time category, with each step expressed in tonnes and mapped to a clear operational owner.
[Chart: variance waterfall stepping from planned to actual primary draw tonnes, one bar per time category]
Each bar represents a specific cause:
External delay covers force majeure events entirely outside mine control.
Downtime covers actual downtime above or below the maintenance budget. Owned by Maintenance.
Idle time covers time lost to blast re-entry, ground conditions, truck availability, congestion, and scheduled standby above benchmark. Owned by operational teams depending on the specific event code.
Ancillary operating time covers non-face productive time above target, including tramming and tertiary loading. Owned by the Supervisor and Planning.
Primary draw rate measures whether the fleet loaded faster or slower than the planned rate, given the face loading time that was available. Owned by the operator and their Supervisor.
The manager looks at this chart and immediately knows what happened, who to call, and what number to put on the impact. That is a different quality of conversation to what happens when someone puts a TUM percentage on the table.
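To make the bridge arithmetic concrete, here is a minimal sketch. It assumes each time-category variance has already been valued at the planned primary draw rate; the category keys and sign convention (negative tonnes = tonnes lost) are illustrative, not a prescribed standard.

```python
# Categories in the order they appear on the waterfall; names are illustrative.
ORDERED_STEPS = [
    "external_delay", "downtime", "idle_time",
    "ancillary_operating_time", "primary_draw_rate",
]

def minutes_to_tonnes(variance_minutes: float, planned_rate_tph: float) -> float:
    """Value a time variance at the planned primary draw rate.
    Minutes lost above benchmark become negative tonnes on the bridge."""
    return -(variance_minutes / 60.0) * planned_rate_tph

def variance_bridge(planned_tonnes: float,
                    steps: dict[str, float]) -> list[tuple[str, float]]:
    """Walk from planned to actual primary draw tonnes.
    steps maps each category to its tonnes impact."""
    bridge = [("planned", planned_tonnes)]
    running = planned_tonnes
    for category in ORDERED_STEPS:
        running += steps.get(category, 0.0)
        bridge.append((category, running))
    bridge.append(("actual", running))
    return bridge
```

The final "actual" entry is the hook for the self-audit discussed below: it has to land on the tonnes that were actually hauled.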
The re-handle insight
The ancillary operating time bar is where the integration unlocks something TUM data alone cannot surface.
Re-handle registers in a standard TUM report as productive time. Utilisation looks healthy. But those loading minutes are being spent on material that has already been moved, at the direct expense of pulling primary ore from the stope face. The productivity loss is invisible in the TUM report because the time is classified correctly. The loader is working. The problem is what it was working on.
By splitting productive time using movement type from the location actuals data, you can isolate the loading minutes spent on tertiary re-handle and convert them to a tonnes figure using the planned primary draw rate.
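A minimal sketch of that conversion, assuming a loading log with per-event minutes tagged by source type (hypothetical column names, as before):

```python
import pandas as pd

def rehandle_lost_tonnes(loading_log: pd.DataFrame,
                         planned_rate_tph: float) -> float:
    """Loading minutes spent on stockpile material are tertiary
    re-handle. Valuing them at the planned primary draw rate
    expresses the cost as displaced primary tonnes."""
    rehandle_minutes = loading_log.loc[
        loading_log["source_type"].eq("stockpile"), "minutes"
    ].sum()
    return (rehandle_minutes / 60.0) * planned_rate_tph
```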
The question shifts from "did re-handle happen?" to "re-handle displaced X thousand tonnes of primary draw capacity this period. Was that an intentional scheduling decision, and if not, what changes?"

The framework is self-auditing
One property of this approach worth noting: because every minute of a shift must be accounted for across the time categories and the total always sums to the fixed shift length, the variance bridge is self-auditing.
Adjusting call-in behaviour in one category does not make a variance disappear; it surfaces somewhere else instead. An operator who calls in end of shift late does not hide the problem: they compress the apparent loading time, and the loss shows up immediately as a primary draw rate variance. Every tonne of variance has to land somewhere, and every category has an owner.
This gives the framework a credibility that makes it useful beyond the shift team. It audits back to a fixed number of tonnes that either came out of the ground or did not.
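Those two constraints are also easy to enforce mechanically. A minimal sketch of the reconciliation checks, reusing the bridge structure from the earlier sketch (the tolerance is illustrative):

```python
def audit_shift(tum_minutes: dict[str, float], shift_minutes: float,
                bridge: list[tuple[str, float]], actual_tonnes: float,
                tol: float = 1e-6) -> None:
    """Fail loudly if a shift does not reconcile."""
    # Every minute of the shift must land in exactly one time category.
    assert abs(sum(tum_minutes.values()) - shift_minutes) < tol, \
        "time categories do not sum to the shift length"
    # The variance bridge must walk exactly from planned to actual tonnes.
    assert abs(bridge[-1][1] - actual_tonnes) < tol, \
        "variance bridge does not reconcile to actual tonnes"
```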
In summary
If your operation already runs a TUM system and produces shift-level location tonnes, you have everything you need to build this. The methodology extracts a layer of insight that is already latent in your data. It just has not been joined together in this way yet.
The result is a single view that tells a manager not just where the variance was largest, but who owns it and exactly what it cost the operation in tonnes.
If you are dealing with operational visibility challenges, let's chat.


