AI model FAQs

Frequently asked questions about our AI model and image recognition.

What does accuracy mean?

Put simply: how closely our predicted object count matches the true object count.

We use two metrics to track this:

  • Weighted Mean Absolute Percentage Error (MAPEw) measures how well the model predicts the object count for each class, weighted by how frequently that class appears. This tells us how accurate our throughput data is.

  • Maximum Composition Error (MCE) starts from composition: the number of objects in one material class divided by the total item count during a set time period.

    This lets us measure the difference between true composition and our prediction. MCE quantifies the highest deviation across all classes when we analyse the waste stream. In other words, it ensures we're delivering accurate composition analysis data.
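As a rough illustration, the two metrics above can be sketched in a few lines of Python. The class names and counts below are made-up example data, not figures from our system.

```python
# Illustrative sketch of MAPEw and MCE. All counts are hypothetical.

def weighted_mape(true_counts: dict, pred_counts: dict) -> float:
    """MAPEw: per-class absolute percentage error, weighted by
    how often each class appears in the true counts."""
    total = sum(true_counts.values())
    error = 0.0
    for cls, true_n in true_counts.items():
        if true_n == 0:
            continue
        weight = true_n / total                          # class frequency
        ape = abs(pred_counts.get(cls, 0) - true_n) / true_n
        error += weight * ape
    return error

def max_composition_error(true_counts: dict, pred_counts: dict) -> float:
    """MCE: largest gap between true and predicted composition shares."""
    true_total = sum(true_counts.values())
    pred_total = sum(pred_counts.values())
    return max(
        abs(true_counts[c] / true_total - pred_counts.get(c, 0) / pred_total)
        for c in true_counts
    )

observed  = {"PET": 480, "HDPE": 310, "film": 210}
predicted = {"PET": 465, "HDPE": 320, "film": 200}
print(f"MAPEw: {weighted_mape(observed, predicted):.1%}")           # 3.5%
print(f"MCE:   {max_composition_error(observed, predicted):.1%}")   # 1.5%
```

With these example numbers the model would be inside both targets (MAPEw under 5%, MCE under 2%).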

 

How accurate is your system?

We aim to keep our error rates low: under 5% for MAPEw, and under 2% for MCE.

We’re currently meeting those aims on most waste streams – excluding the baler, which exceeds our baselines by just a few percent. It’s a more challenging use case, but we’re improving our performance all the time.

 

How do you calculate the accuracy of your model (what do you compare it to)? 

We assess the performance of our AI model against humans. Both receive the same large sample of waste stream images to analyse, and we compare the results.

We also assess our performance against manual sampling done by customers. We’ve established a process for that purpose, and can provide step-by-step guidelines. Our evaluation pipeline ensures we always deploy our most accurate models to each unit, and continuously track their success rate.

It is worth noting that the AI model often performs better than reports show. We’ve increasingly noticed that human error – difficulty distinguishing objects, distraction, tiredness – can impact the assessment process. To mitigate this, we manually review test sets on a regular basis to reduce the influence of human error.

 

Can you handle multiple layers of waste?

The system directly analyses what the camera can see. There’s no real limit to the number of visible objects the system can detect and recognise at the same time – so far, we’ve been able to process 50+ objects at once.

We can then use reference data points to account for material that’s piled in layers beneath. Testing has proven that we can accurately predict waste stream composition (<2% MCE) even if objects are piled on top of each other. 

That’s possible thanks to the relative homogeneity of many waste streams. By adjusting our parameters to account for hidden layers based on reference data points, we minimise any impact on accurate composition analysis.
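One simple way to picture this kind of adjustment is to blend the composition of the visible surface with reference composition data (for example from periodic manual sampling). This is a hypothetical sketch, not our actual correction logic; the function and parameter names are invented for illustration.

```python
# Hypothetical layer correction: blend visible composition with
# reference composition data. `alpha` is the weight given to the
# reference data; all names and numbers are illustrative only.

def corrected_shares(visible_counts: dict, reference_shares: dict,
                     alpha: float = 0.25) -> dict:
    """Return composition shares adjusted toward reference data."""
    total = sum(visible_counts.values())
    visible_shares = {c: n / total for c, n in visible_counts.items()}
    blended = {
        c: (1 - alpha) * visible_shares.get(c, 0.0)
           + alpha * reference_shares.get(c, 0.0)
        for c in set(visible_shares) | set(reference_shares)
    }
    norm = sum(blended.values())                 # renormalise to 100%
    return {c: v / norm for c, v in blended.items()}

visible   = {"cardboard": 70, "film": 30}        # what the camera sees
reference = {"cardboard": 0.6, "film": 0.4}      # from manual sampling
adjusted = corrected_shares(visible, reference, alpha=0.25)
```

The more homogeneous the stream, the closer the visible surface is to the reference data, and the smaller the correction needs to be.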

 

Is the model optimised for each specific waste stream? 

We use a master model built using data from monitoring units around the world. It depends on the use case, but this model often meets our performance benchmarks (< 5% error) without optimisation.

With that said, we do optimise the model for each individual waste stream. Adjustments take place in the “onboarding” process, during which we collect data from the waste stream in question and use it to fine-tune our model. This ensures that our model takes context into account to maintain high performance, no matter the use case. 

 

How often do you update the AI model?

We constantly improve our AI model based on data from our 40+ active monitoring units. Relevant updates are pushed to units as soon as they’re ready. These include frequent performance improvements (every 6 weeks or so) as well as major updates like an expansion of the materials our AI can recognise (once or twice a year).

 

Are you able to provide calorific value? 

We are not currently able to provide the calorific value of waste material. We're planning to tackle this use case by including additional sensors in our hardware and developing our software accordingly. 

 

How can I reduce the density of my streams to improve accuracy? 

Typically, the best way to de-densify the waste stream is to increase the speed of the conveyor belt.

Our system is well adapted to higher speeds and can cope with belts running at 3 m/s, while typical production line belts run at 1 m/s maximum.

On optical (NIR) sorter lines, density is already very low (usually at speeds of 2.7 m/s). On infeed and baler lines, the density is higher, and speed increase is not always possible.


One strategy is to install several monitoring units after the infeed belt splits into smaller, lower-density belts – or on the output lines before the baler. By merging the data from these locations, our live dashboard can provide total coverage.
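Merging counts from several units is conceptually simple: sum the per-class counts across locations for the same time window, then recompute composition from the totals. This sketch uses invented unit names and counts to show the idea.

```python
from collections import Counter

# Hypothetical example: per-class counts from two monitoring units
# placed on parallel output lines, covering the same time window.
unit_a = Counter({"cardboard": 120, "film": 40})
unit_b = Counter({"cardboard": 95, "film": 55, "PET": 10})

combined = unit_a + unit_b                       # element-wise sum
total = sum(combined.values())
composition = {cls: n / total for cls, n in combined.items()}
```

Because composition is computed after summing, each line contributes in proportion to its actual throughput rather than being averaged naively.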