Heat maps are hard to use well, and we are still trying to get them 100% right. The biggest difficulty is getting enough data to ensure that your heat map is providing insight and not just reflecting random chance. We have found them invaluable for simulated data and when exploring the response of multiple variables against a KPI.

 

 

This is article 7/7 in our series focussing on the bread-and-butter visualisations our analytics team use. To receive a copy of the whole series now, go to www.clarofy.ai/download

 

 

Critical in exploring the data and significantly useful in evaluation, the insight gained from visualisations informs the whole analytical workflow. Some types of visualisation lend themselves better to one use than the other, and deciding which to use in each application is a learning process. At Interlate, our experience with analytics in minerals processing plants has given us an appreciation for how to make these decisions, and we will outline some of our insights in the following series.

 

Good For:  

  • Evaluating performance over three or more dimensions 
  • Using the averages in each bin, you can evaluate changes in KPIs over the surface of the solution space 
  • Visualising simulated data and seeing the response of variables over multiple dimensions.
     

Difficult With: 

  • Low density data 
  • Noisy data 

 

Tips & Tricks: 

  • Experimenting with bin sizes is a must. 
  • Great for displaying simulated data to allow users to explore scenarios 
  • Ensure the colour scale chosen covers a suitable range; the minimums, the maximums, and (if using a double colour scale) the divergence point can all give a false impression of the responsiveness of the data. A sketch illustrating the binning and colour-range tips follows below. 
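As a concrete illustration, here is a minimal Python sketch (the tag names, units, and simulated response below are purely hypothetical stand-ins for real plant data) that bins two process variables, colours each bin by the average KPI, and clips the colour range to percentiles so a handful of extreme bins does not distort the picture:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical tags and a toy simulated response -- replace with real data.
rng = np.random.default_rng(0)
x = rng.uniform(500, 1500, 5000)                 # e.g. throughput (t/h)
y = rng.uniform(75, 250, 5000)                   # e.g. P80 (um)
z = 88 - 0.02 * y + rng.normal(0, 1.5, 5000)     # e.g. recovery (%) KPI

bins = 20  # experiment with this: too fine and each bin becomes noisy
# Weighted histogram gives the KPI sum per bin; divide by counts for means.
kpi_sum, xe, ye = np.histogram2d(x, y, bins=bins, weights=z)
counts, _, _ = np.histogram2d(x, y, bins=[xe, ye])
with np.errstate(invalid="ignore"):
    kpi_mean = kpi_sum / counts                  # NaN where a bin is empty

# Clip the colour range to percentiles so a few extreme bins
# do not give a false impression of the data's responsiveness.
plt.pcolormesh(xe, ye, kpi_mean.T, cmap="viridis",
               vmin=np.nanpercentile(kpi_mean, 5),
               vmax=np.nanpercentile(kpi_mean, 95))
plt.colorbar(label="Mean recovery (%)")
plt.xlabel("Throughput (t/h)")
plt.ylabel("P80 (um)")
plt.show()
```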

 

That’s the end of our series on visuals for minerals processing. If you have any questions about this series, or would like a copy of the whole series as a single article, please visit www.clarofy.ai/download

 

This article series has focussed on our key visuals. Interlate hope to share our experience with others and provide a robust understanding of their place in Clarofy, our visualisation and data analytics application. No software installation is required; it runs straight from your browser: www.clarofy.ai

 

 


Exploration Visuals – Time Series

 

 

The time series: something you see a lot more of once you get into industry, and then never get away from. Great for telling a story and investigating causation. Below we share what we think it’s good for, how to get more out of it, and where we often don’t find value in using it.

This is article 4/7 in our series focussing on the bread-and-butter visualisations our analytics team use.

To receive a copy of the whole series now, go to www.clarofy.ai/download

 

Critical in exploring the data and significantly useful in evaluation, the insight gained from visualisations informs the whole analytical workflow. Some types of visualisation lend themselves better to one use than the other, and deciding which to use in each application is a learning process. At Interlate, our experience with analytics in minerals processing plants has given us an appreciation for how to make these decisions, and we will outline some of our insights in the following series.

 

Good For:

  • Looking for changes in variables over time, which is its natural use.
  • Examining both long-term and short-term trends: changes over months or years, as well as control-related analysis over short periods such as 24 or 48 hours.
  • When stacked or related measures are used, you can see how relationships between variables have changed over time (although sometimes this is more easily seen in scatter plots coloured by time).
  • Identifying how much lag you might need to apply to certain variables before relationships show up on scatter plots (see the sketch after this list).
  • Identifying trials, modes or events in plant operation.
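A minimal sketch of the lag-scanning idea above, on synthetic data (real plant tags would replace the toy series): shift one variable across candidate lags and keep the lag with the strongest correlation before building the scatter plot.

```python
import numpy as np
import pandas as pd

# Synthetic example: "downstream" lags "upstream" by 30 minutes.
idx = pd.date_range("2021-01-01", periods=2880, freq="min")
rng = np.random.default_rng(1)
upstream = pd.Series(rng.normal(size=len(idx)).cumsum(), index=idx)
downstream = upstream.shift(30) + rng.normal(0, 0.5, len(idx))

# Scan candidate lags and keep the one with the strongest correlation;
# apply that shift before building the scatter plot.
corr = {lag: upstream.shift(lag).corr(downstream)
        for lag in range(0, 121, 5)}
best = max(corr, key=corr.get)
print(f"best lag: {best} min (r = {corr[best]:.2f})")
```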

Difficult With:

  • Finding relationships between variables
  • Noisy data that fluctuates heavily within a day or hour, which can obscure valuable relationships.
  • High-density data, as it exacerbates the issues caused by those fluctuations.

Tips & Tricks:

  • Aggregate the data when dealing with variables that fluctuate heavily; however, be careful of introducing or removing relationships when aggregating (see the sketch after these tips).
    • Try different methods of aggregation: min, max, percentiles, average, moving averages.
  • Be open to having a look at any available categorical variables over time, in addition to the numeric variables. Being critical of an operation mode may be unfair if 90% of its data occurred before a particular upgrade or with a certain ore type.
  • Spend some time looking at time series if you are unfamiliar with the plant. They give important contextual information about the stability of the plant, how often shutdowns occur, and when different operating modes are in use.
  • After transforming data (such as integrating feed grade out of recovery, or modelling a dependent variable), compare it again over time to check for errors or correlations that change with time.
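The following minimal pandas sketch (with a toy one-minute tag standing in for historian data) shows the aggregation tip in practice: compare several aggregations of the same window before settling on one, since each can introduce or hide a relationship.

```python
import numpy as np
import pandas as pd

# Toy one-minute tag; in practice this comes from the plant historian.
idx = pd.date_range("2021-01-01", periods=7 * 24 * 60, freq="min")
rng = np.random.default_rng(2)
tag = pd.Series(1.2 + rng.normal(0, 0.15, len(idx)), index=idx)

# Compare several aggregations of the same hourly window side by side.
hourly = tag.resample("1h").agg(["mean", "min", "max",
                                 lambda s: s.quantile(0.9)])
hourly.columns = ["mean", "min", "max", "p90"]

# A moving average keeps the original resolution while smoothing noise.
smoothed = tag.rolling("4h").mean()
print(hourly.head())
```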

 

 

This article series focusses on our key visuals. Interlate hope to share our experience with others and provide a robust understanding of their place in Clarofy, our visualisation and data analytics application. No software installation is required; it runs straight from your browser: www.clarofy.ai

 

 


Exploration Visuals – Scatter Plots

 

Everyone knows the scatter plot; it is the default. But it shouldn’t be the default for everything. Great for discovering and exploring relationships, but only sometimes the best for getting your point across. Below we share the best applications for scatter plots and our tips and tricks for getting the most out of them.

 

This is article 3/7 in our series focussing on the main visualisations our analytics team use.

To receive a copy of the whole series now, go to www.clarofy.ai/download

 

Good For:

  • Discovering and exploring relationships
  • KPI vs. process variables, e.g. throughput vs. P80; recovery vs. concentrate grade; feed metal tons vs. collector dosage.
  • Identifying the dominant relationship in the plant, such as feed grade vs. recovery.
  • Visualising relationships with only two or three relational factors, such as throughput, P80, and the power draw of major equipment.

Difficult With:

  • Drawing actionable conclusions, calculating valuations, and finding best operating ranges.
  • Very high-density data, normally found when looking at minute- or second-resolution data or a few years of data
  • Variables vs. setpoints, as points can often be stacked on top of each other.
  • Discovering more subtle relationships in the plant (often over-shadowed by the strongest relationships).
  • Multi-dimensional factors such as float recovery vs a combination of level, air, feed grade, con-grade etc.

Tips & Tricks:

  • Using colour as a third dimension to get information on multiple interactions.
    • Colour by the timestamp to see if there are general trends over time
    • Colour by Ore body to see if there are trends within the ore bodies
    • Colour by operating mode, or flowsheet configuration
  • Often the presence of outliers can make a trend seem more prominent than it is, or may even suggest an incorrect trend, as shown in the graphs below from Clarofy.

Before filtering, the gradient of the trend line is slightly negative, with the cluster of points at zero reagent dosage influencing the relationship.

After filtering out the zeros and the outliers above 12 gpt, the gradient of the trend line is positive with recovery.
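To make that before/after concrete, here is a minimal Python sketch on synthetic data (the dosage and recovery numbers are invented stand-ins for the Clarofy example above), showing how a zero-dosage cluster and a few high outliers can flip the sign of a fitted trend line:

```python
import numpy as np

# Toy reagent-dosage vs. recovery data with the two artefacts described
# above: a cluster of zero dosages and a few extreme outliers (> 12 gpt).
rng = np.random.default_rng(3)
dosage = rng.uniform(4, 10, 300)
recovery = 80 + 0.8 * dosage + rng.normal(0, 2, 300)
dosage = np.concatenate([dosage, np.zeros(40), [14, 15, 16]])
recovery = np.concatenate([recovery, rng.normal(88, 2, 40), [70, 69, 68]])

# Trend-line gradient with all points included.
slope_before = np.polyfit(dosage, recovery, 1)[0]

# Drop the zeros and the extreme outliers, then refit.
mask = (dosage > 0) & (dosage <= 12)
slope_after = np.polyfit(dosage[mask], recovery[mask], 1)[0]

print(f"slope before filtering: {slope_before:+.2f}")
print(f"slope after filtering:  {slope_after:+.2f}")
```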

 

  • When analysing continuous plant data, an understanding of the process is important. The practical meaning of the outliers (in the example above: zero reagent dosages) is not available in the data, and regular conversations with operators can shed light on the following types of events:
    • Start up and shutdown times (filter by throughput to remove this error)
    • Instrument error (check PV vs. SV visualisations, met accounting vs. plant data, and any notes about instruments)
    • Plant events such as trials, mill re-lines, and equipment commissioning, as well as natural variation in operator behaviour, control philosophies, and equipment limitations.
  • Break the scatters into groups if a large amount of data is available: it is often practical to break into yearly or 6-monthly sections, as it is unlikely that a plant would operate with consistent tactics for longer than that. Another way of categorising could be by the different ore bodies being treated, if the data is available, or by different flowsheet configurations. Data points can be coloured by these modes, or they may be examined separately to discover trends. Be aware of trends across groups masquerading as trends within the group!
  • With high density data the following techniques work well to get more out of your data at this early exploration stage:
    • Aggregate data over a period that makes practical processing sense for the equipment you are examining. e.g. In the case of a thickener; 1 hour, or even 6 hour aggregations would be appropriate, but on a single float bank, 2-3 times the residence time (usually minutes) is more fitting.
    • Examine scatters for each month, quarter or year if a time trend hypothesis exists. It might not yet be clear why that relationship exists, but this kind of view can offer more information on the nature of the association.
    • Use a density scatter plot (a variation of a standard scatter) that colours based on the density of the data points in the area (see the sketch after this list).
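As a sketch of the density scatter tip (synthetic data; the grid size and styling are arbitrary choices), hexbin is one common variant: it colours each cell by how many points fall inside it, recovering structure that a saturated plain scatter hides.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic high-density data standing in for a year of minutely tags.
rng = np.random.default_rng(4)
n = 200_000
x = rng.normal(100, 15, n)            # e.g. feed rate
y = 0.3 * x + rng.normal(0, 8, n)     # e.g. a correlated KPI

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Plain scatter: the cloud saturates and hides where the data really sits.
ax1.scatter(x, y, s=1, alpha=0.05)
ax1.set_title("Plain scatter")

# Density scatter: colour reflects the number of points in each cell.
hb = ax2.hexbin(x, y, gridsize=50, cmap="viridis", mincnt=1)
fig.colorbar(hb, ax=ax2, label="points per cell")
ax2.set_title("Density scatter (hexbin)")

plt.tight_layout()
plt.show()
```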

 

This article series focusses on our key visuals. Interlate hope to share our experience with others and provide a robust understanding of their place in Clarofy, our visualisation and data analytics application. No software installation is required; it runs straight from your browser: www.clarofy.ai

 


Common Visualisations For Minerals Processing Analytics

 

This is the start of our series of articles on visualisations useful in minerals processing analytics.

To receive the whole article series as a pdf now, please follow this link: www.clarofy.ai/download

 

Analytics and analytical workflows are an iterative process: making a hypothesis, forming it into a driving question, then building visuals, analytical workflows, and models to answer that question. Interlate have found that, in general, our workflows follow these steps, often jumping back to step 2 as we learn more about the data and the transformations that are required.

  1. Connection with/extraction of the data
  2. Preparation and filtering of the data
  3. Exploration
  4. Modelling (if required)
  5. Evaluation
  6. Deployment

One of the most important tools for doing this successfully is visualisation, and the role it plays in exploration and evaluation. Visualisations may be simple, like scatter plots and histograms, through to more complex constructions such as heat maps. They are the primary means of getting a conceptual understanding of the data and our plant, as well as delivering that understanding to others.

 

Critical in exploring the data and significantly useful in evaluation, the insight gained from visualisations informs the whole analytical workflow. Some types of visualisation lend themselves better to one use than the other, and deciding which to use in each application is a learning process. At Interlate, our experience with analytics in minerals processing plants has given us an appreciation for how to make these decisions, and we will outline some of our insights in the following sections.

 

This is article 1/7 in our series focussing on our key visualisations. Interlate hope to share our experience with others and provide a robust understanding of their place in Clarofy, our visualisation and data analytics application. No software installation is required; it runs straight from your browser: www.clarofy.ai

 

Keep your eyes on this space for the next articles in the series.