Learning accurate probabilistic models from data is crucial in many practical data mining tasks. In this talk I will present a new non-parametric calibration method called Ensemble of Linear Trend Estimation (ELiTE). The method uses the recently proposed $\ell_1$ trend filtering signal approximation method to find the mapping from uncalibrated classification scores to calibrated probability estimates. ELiTE is designed to address two key limitations of histogram-binning calibration methods: (1) the piecewise-constant form of the calibration mapping induced by binning, and (2) the assumption that the predicted probabilities of instances in different bins are independent. The method post-processes the output of a binary classifier to obtain calibrated probabilities, so it can be applied to many existing classification models. I will demonstrate the performance of ELiTE on real datasets with commonly used binary classification models. Experimental results show that the method outperforms common binary-classifier calibration methods; in particular, ELiTE commonly performs statistically significantly better than the other methods, and never worse. Moreover, it improves the calibration of classifiers while retaining their discrimination power. The method is also computationally tractable for large-scale datasets, running in practically $O(N \log N)$ time, where $N$ is the number of samples.
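For reference, the $\ell_1$ trend filtering problem mentioned above is standardly formulated as the following optimization (this is the generic formulation with regularization parameter $\lambda$, not necessarily ELiTE's exact objective):
$$\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\sum_{t=1}^{n} (y_t - x_t)^2 \; + \; \lambda \sum_{t=2}^{n-1} \left| x_{t-1} - 2x_t + x_{t+1} \right|,$$
where $y$ are the observed values and $x$ is the fitted signal. Because the penalty is on second differences, the solution is piecewise linear in $t$, which is what lets the calibration mapping avoid the piecewise-constant form of histogram binning.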
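To make limitation (1) concrete, below is a minimal sketch of histogram-binning calibration, the baseline that ELiTE improves upon. All function and parameter names here are hypothetical illustrations, not from the ELiTE paper; the point is that the learned mapping is piecewise constant over bins.

```python
def fit_histogram_binning(scores, labels, n_bins=10):
    """Fit a histogram-binning calibrator on scores in [0, 1].

    Each equal-width bin stores its empirical positive rate; the
    resulting score-to-probability mapping is piecewise constant,
    which is limitation (1) of binning-based calibration.
    """
    counts = [0] * n_bins
    positives = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)  # clamp s == 1.0 into last bin
        counts[b] += 1
        positives[b] += y
    # Empty bins fall back to 0.5 (an arbitrary choice for this sketch).
    return [positives[b] / counts[b] if counts[b] else 0.5
            for b in range(n_bins)]


def calibrate(score, bin_probs):
    """Map an uncalibrated score to the probability stored in its bin."""
    n_bins = len(bin_probs)
    return bin_probs[min(int(score * n_bins), n_bins - 1)]
```

For example, with scores `[0.05, 0.15, 0.8, 0.9]`, labels `[0, 0, 1, 1]`, and two bins, every score in a bin receives the same calibrated probability regardless of where it falls inside the bin; a piecewise-linear fit such as $\ell_1$ trend filtering removes exactly this discontinuous, flat-within-bin behavior.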