Latest Posts

  • Normalized Discounted Cumulative Gain (NDCG)

Overview Ranking models underpin many aspects of modern digital life, from search results to music recommendations. Anyone who has built a recommendation system understands the many challenges of developing and evaluating ranking models to serve their customers. While these challenges begin with data preparation and model training and continue through model development…

  • R Squared: Understanding the Coefficient of Determination

Introduction The R-squared metric (R², also known as the coefficient of determination) is an important tool in the world of machine learning. It is used to measure how well a model fits data, and how well it can predict future outcomes. Simply put, it tells you how much of the variation in your data can…
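    The definition in the excerpt can be sketched in a few lines of plain Python (a minimal illustration, not the post's own code): R² is one minus the ratio of the residual sum of squares to the total sum of squares around the mean.

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)          # variation around the mean
    ss_res = sum((y, p)[0] ** 0 * (y - p) ** 2 for y, p in zip(y_true, y_pred)) if False else \
             sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # unexplained variation
    return 1 - ss_res / ss_tot

# A perfect fit explains all variation, so R² = 1.
r_squared([1, 2, 3, 4], [1, 2, 3, 4])
```

    An R² of 0 means the model does no better than predicting the mean; negative values are possible for models that fit worse than that baseline.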

  • Mean Absolute Percentage Error (MAPE): What You Need To Know

    What Is Mean Absolute Percentage Error? One of the most common metrics of model prediction accuracy, mean absolute percentage error (MAPE) is the percentage equivalent of mean absolute error (MAE). Mean absolute percentage error measures the average magnitude of error produced by a model, or how far off predictions are on average. While understanding this…
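    The definition above (percentage equivalent of MAE) can be sketched in plain Python (a minimal illustration, not the post's own code): average the absolute errors relative to the true values, then scale to a percentage.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent.
    Assumes no true value is zero (the ratio would be undefined)."""
    errors = (abs((t - p) / t) for t, p in zip(y_true, y_pred))
    return 100.0 * sum(errors) / len(y_true)

# Each prediction is off by 10% of its true value, so MAPE = 10%.
mape([100, 200], [110, 180])
```

    Note the caveat in the comment: because each error is divided by the true value, MAPE breaks down when true values are zero or near zero.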

  • What Is AUC?

    Introduction: What Is the AUC ROC Curve In Machine Learning? AUC, short for area under the ROC (receiver operating characteristic) curve, is a relatively straightforward metric that is useful across a range of use cases. In this blog, we present an intuitive way of understanding how AUC is calculated. How Do You Calculate…
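    One intuitive way to compute AUC, sketched here as a minimal illustration (not necessarily the construction the post uses): AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative example, with ties counted as half.

```python
def auc_pairwise(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs where the
    positive example outscores the negative; ties count 0.5."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Every positive outscores every negative: perfect ranking, AUC = 1.
auc_pairwise([0.9, 0.8], [0.2, 0.1])
```

    This pairwise view makes clear that AUC measures ranking quality, not calibrated probabilities: any monotonic transformation of the scores leaves it unchanged.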

  • What Is PR AUC?

    PR AUC, short for area under the precision-recall (PR) curve, is a common way to summarize a model’s overall performance. In a perfect classifier, PR AUC = 1 because your model always correctly predicts the positive and negative classes. Since precision-recall curves do not consider true negatives, PR AUC is commonly used for heavily…

  • Calibration Curves: What You Need To Know

    In machine learning, calibration is used to better estimate the confidence intervals and prediction probabilities of a given model. Calibration is particularly useful for models like decision trees or random forests, where certain classifiers only output the label of the event and don’t support native probabilities or confidence intervals. When modelers want to be confident in…

  • Understanding and Applying F1 Score: A Deep Dive

    F1 score is the harmonic mean of precision and recall. Commonly used as an evaluation metric in binary and multi-class classification, the F1 score combines precision and recall into a single metric to give a better understanding of model performance. The F-score can be modified into F0.5, F1, and F2 based on the…

  • Recall: What Is It and How Does It Differ From Precision?

    In machine learning, recall is a performance metric that corresponds to the fraction of values correctly predicted as positive out of all the values that truly belong to the positive class (including false negatives). It differs from precision, which is the fraction of values that actually belong to a positive class…
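    The contrast between the two metrics can be sketched in plain Python from the confusion-matrix counts (a minimal illustration, not the post's own code): recall divides true positives by all actual positives, while precision divides them by all predicted positives.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).
    tp, fp, fn are counts from a binary confusion matrix."""
    precision = tp / (tp + fp)  # of everything predicted positive, how much was right
    recall = tp / (tp + fn)     # of everything actually positive, how much was found
    return precision, recall

# 8 true positives, 2 false positives, 4 false negatives:
# precision = 8/10 = 0.8, recall = 8/12 ≈ 0.667.
precision_recall(8, 2, 4)
```

    The same true-positive count appears in both numerators; only the denominator changes, which is why tuning a threshold typically trades one metric against the other.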

  • Precision: Understanding This Foundational Performance Metric

    What Is Precision? In machine learning, precision is a model performance metric that corresponds to the fraction of values that actually belong to a positive class out of all the values predicted to belong to that class. Precision is also known as the positive predictive value (PPV). Equation: Precision = true positives…