Obtaining accurate probabilities using an Ensemble of Linear Trend Estimation

By Mahdi Pakdaman

This Friday, I had actually intended to attend Yun Huang's talk, "Towards reliable and useful student modeling for real-world adaptive tutoring systems", hosted by the Intelligent Systems Program at the University of Pittsburgh. However, because of a schedule change I missed it and caught the next talk instead, given by Mahdi Pakdaman on a data-mining topic. It was a good talk, and it covered something I have been interested in before.


The talk was mainly about post-processing the outputs produced by binary classifiers. The motivation he presented for this post-processing step was that raw probabilistic predictions tend to be miscalibrated (often overestimated), and calibrated probability outputs are critical in decision making. To solve this problem, they proposed the ELITE (Ensemble of Linear Trend Estimation) method. At its core is a function called the calibration mapper, which takes the classification score of a classifier and yields a calibrated probability.
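To make the calibration-mapper idea concrete, here is a minimal sketch of one classic mapper, Platt-style sigmoid scaling, fit by plain gradient descent on log loss over a held-out calibration set. This is my own toy illustration of the score-in, probability-out interface, not the method from the talk; the actual Platt procedure uses regularized targets and a Newton-style optimizer.

```python
import math

def fit_platt(scores, labels, lr=0.1, epochs=2000):
    """Fit p = sigmoid(a*s + b) to (score, label) pairs by gradient
    descent on log loss. A toy sketch of a calibration mapper; the real
    Platt method uses smoothed targets and second-order optimization."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n  # gradient of log loss w.r.t. a
            gb += (p - y) / n      # gradient of log loss w.r.t. b
        a -= lr * ga
        b -= lr * gb
    return a, b

def calibrate(score, a, b):
    """The calibration mapper: raw classifier score -> probability."""
    return 1.0 / (1.0 + math.exp(-(a * score + b)))
```

In use, you would fit `a` and `b` on scores from a held-out set, then pass every test-time score through `calibrate` before making decisions.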

Then, he talked about the most commonly used post-processing methods: Platt's calibration method (which fits a sigmoid-like function), equal-frequency histogram binning, and isotonic regression. Histogram-binning methods have two drawbacks. First, the calibration function mapping classifier scores to calibrated ones is piecewise constant, so the estimated probabilities change abruptly at the boundaries of the binning intervals. Second, predictions in different bins are assumed to be independent. What they propose instead is a calibration function that is piecewise linear. In the problem-formulation part, he explained how to cast the search for this calibration function as an optimization problem. In its original form, the problem is computationally intractable, so they applied a constraint relaxation to it and obtained a tractable algorithm with O(n log n) complexity.
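The piecewise-constant drawback is easy to see in code. Below is a sketch of equal-frequency histogram binning together with a naive piecewise-linear variant that interpolates between bin representatives; the linear variant is only meant to illustrate the spirit of the proposed fix, not the actual ELITE optimization.

```python
from bisect import bisect_left

def fit_equal_freq_bins(scores, labels, n_bins=4):
    """Equal-frequency binning: sort (score, label) pairs, split them
    into bins of equal size, and record each bin's mean score (a
    representative point) and empirical positive rate."""
    pairs = sorted(zip(scores, labels))
    size = len(pairs) // n_bins
    centers, rates = [], []
    for i in range(n_bins):
        chunk = pairs[i * size:(i + 1) * size] if i < n_bins - 1 else pairs[i * size:]
        centers.append(sum(s for s, _ in chunk) / len(chunk))
        rates.append(sum(y for _, y in chunk) / len(chunk))
    return centers, rates

def calibrate_constant(score, centers, rates):
    """Piecewise-constant mapping: every score gets its nearest bin's
    rate, so the output jumps abruptly at bin boundaries."""
    i = min(range(len(centers)), key=lambda j: abs(centers[j] - score))
    return rates[i]

def calibrate_linear(score, centers, rates):
    """Piecewise-linear mapping: interpolate between neighboring bin
    representatives, removing the abrupt jumps."""
    if score <= centers[0]:
        return rates[0]
    if score >= centers[-1]:
        return rates[-1]
    i = bisect_left(centers, score)
    t = (score - centers[i - 1]) / (centers[i] - centers[i - 1])
    return rates[i - 1] + t * (rates[i] - rates[i - 1])
```

A score halfway between two bin centers gets the average of their rates under the linear mapper, whereas the constant mapper snaps it to one bin or the other.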

In order to evaluate their method, they picked 35 datasets randomly from the UCI Machine Learning Repository and the LibSVM Repository. They used SVM, Naive Bayes, and Logistic Regression models as classifiers, and compared their calibration method to the other methods I noted above (i.e. Platt's calibration method, equal-frequency histogram binning, and isotonic regression). The evaluation measures fell into two categories: discrimination measures and calibration measures. The discrimination measures were AUC and ACC (I think they stand for area under curve and accuracy). The calibration measures were basically error measures on the predicted probabilities: RMSE (root mean squared error), expected calibration error (ECE), and maximum calibration error (MCE). From the averaged results over the base classifiers (Naive Bayes, Logistic Regression, and SVM), they reported 95% confidence intervals for the evaluation measures (ACC, AUC, etc.). The results showed that their method, ELITE, did not degrade the accuracy of the classifiers, but decreased the error measures significantly.
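For reference, the calibration measures mentioned above can be sketched as follows. ECE and MCE come in several variants; this sketch uses the common equal-width-bin formulation, which may differ in detail from the exact definitions used in the talk.

```python
def reliability_bins(probs, labels, n_bins=10):
    """Group predictions into equal-width probability bins and, for each
    non-empty bin, return (count, mean confidence, empirical accuracy)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    out = []
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)
            acc = sum(y for _, y in b) / len(b)
            out.append((len(b), conf, acc))
    return out

def ece(probs, labels, n_bins=10):
    """Expected calibration error: bin-size-weighted mean |conf - acc|."""
    n = len(probs)
    return sum(k / n * abs(conf - acc)
               for k, conf, acc in reliability_bins(probs, labels, n_bins))

def mce(probs, labels, n_bins=10):
    """Maximum calibration error: worst |conf - acc| over the bins."""
    return max(abs(conf - acc)
               for _, conf, acc in reliability_bins(probs, labels, n_bins))

def rmse(probs, labels):
    """Root mean squared error of probabilities against 0/1 labels."""
    return (sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)) ** 0.5
```

A perfectly calibrated model would have each bin's mean confidence match its empirical accuracy, driving both ECE and MCE to zero.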


The speaker was a recent PhD graduate. I think his talk was very professional compared to the ones I have attended before. He was quite clear and explained all the steps of his study in a systematic way. He also used the time efficiently without going over the allocated slot, which I think is very hard for most people to achieve. Finally, he handled questions very well: he politely thanked each questioner and addressed the question directly without circumventing it.





