Analysing hypervariate real-time data at high granularity

Aliens, physicists and industrial productivity – how analysing hypervariate real-time data at high granularity binds them all together.

Analysing the hypervariate data generated by industrial processes in real time and at very high granularity provides deep insight into productivity improvement opportunities that are otherwise hidden. But are the potential improvements too big to believe?

Consider a typical industrial processing plant that reports performance and sets revised operating tactics on a shift-by-shift basis. With two shifts a day, the plant is effectively adjusting its tactics based on ~730 data points per year. How much better could we perform if we could confidently adjust our plant-wide tactics at a resolution, say, 50 times higher? How might this impact the decisions we make and the resultant productivity of our operations?
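To make the arithmetic concrete (a minimal sketch, assuming a two-shift-per-day roster, which is where the ~730 figure comes from):

```python
# Shift-by-shift reporting: assuming two shifts per day, every day of the year.
shifts_per_day = 2
days_per_year = 365

shift_points = shifts_per_day * days_per_year   # data points per year at shift resolution
high_res_points = shift_points * 50             # the same year at 50x the resolution

minutes_per_year = days_per_year * 24 * 60
minutes_per_point = minutes_per_year / high_res_points

print(shift_points)        # 730 points/year at shift resolution
print(high_res_points)     # 36500 points/year at 50x resolution
print(minutes_per_point)   # roughly one reading every 14.4 minutes
```

At 50x shift resolution, "real time" still only means a fresh tactical data point every quarter of an hour or so; modern plant historians sample far faster than that.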

I find it easier to answer this in reverse – for example, what does Stephen Hawking’s speech “Questioning the Universe” (February 2008) tell us at 1/50th of its full granularity? The result below might prompt us to take up arms against the government, but bears very little resemblance to what was actually said.

At 1/50th resolution, Stephen Hawking said “Alien life that created the universe should patent Earth. Fossils of algae suggest that a government conspiracy seems a pretty safe bet”.
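One simple way to produce a 1/50th-resolution "quote" like the one above is to keep only every 50th word of the transcript and discard everything in between (a minimal sketch; the stand-in text below is hypothetical, not the actual transcript):

```python
def subsample_words(text: str, step: int = 50) -> str:
    """Keep only every `step`-th word of a text, discarding the rest.

    This mimics reading a speech at 1/step of its full granularity:
    the surviving words can still form sentences, but the meaning
    bears little resemblance to the original.
    """
    words = text.split()
    return " ".join(words[::step])


# Hypothetical stand-in text, not the real transcript.
sample = " ".join(f"word{i}" for i in range(200))
print(subsample_words(sample, 50))  # only 4 of the 200 words survive
```

The point of the exercise: each surviving word is perfectly accurate on its own, just as each shift-average is, yet the reconstructed story can be wildly wrong.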


Returning to our industrial process, it is easy to see how making decisions based on shift-by-shift data could lead to poor decisions, or at least decisions that are only good ‘on average’. With this reference point, is it still hard to believe that there might be a 10-20% productivity improvement on the table?

Matt Magee
Director of Operations, Interlate