Applications of Different Parts of an ROC Curve
Published:
Understanding the importance of different parts of an ROC curve and exploring variants of AUC for ML applications
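One AUC variant the teaser alludes to is partial AUC, which scores a classifier only over a low false-positive-rate region. As a minimal sketch (my own illustrative code, not from the post; the function name and the trapezoidal integration are assumptions):

```python
import numpy as np

def partial_auc(y_true, scores, max_fpr=0.2):
    """Area under the ROC curve restricted to FPR in [0, max_fpr].

    Builds the ROC curve by sweeping thresholds in descending score
    order, then integrates TPR over the low-FPR segment only.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = np.argsort(-scores)            # descending score order
    y = y_true[order]
    tps = np.cumsum(y)                     # true positives at each cut
    fps = np.cumsum(1 - y)                 # false positives at each cut
    tpr = np.concatenate([[0.0], tps / tps[-1]])
    fpr = np.concatenate([[0.0], fps / fps[-1]])
    # keep the segment with FPR <= max_fpr, interpolating the endpoint
    keep = fpr <= max_fpr
    fpr_cut = np.append(fpr[keep], max_fpr)
    tpr_cut = np.append(tpr[keep], np.interp(max_fpr, fpr, tpr))
    widths = np.diff(fpr_cut)              # trapezoidal rule
    return float(np.sum(widths * (tpr_cut[1:] + tpr_cut[:-1]) / 2.0))
```

A perfect ranker reaches the maximum possible value for any `max_fpr` (e.g. 0.5 over FPR ∈ [0, 0.5]).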
Published:
Simplifying a complex algorithm.
Published:
Understanding the recent evolution of object detection and localization, with intuitive explanations of the underlying concepts. Object detection is one of the fastest-maturing areas of computer vision, thanks to deep learning: every year, new algorithms and models outperform the previous ones.
Published:
Jumping from simple algorithms to complex ones does not always boost performance if the feature engineering is not done right. One type of feature that does not easily give away the information it contains is the categorical feature.
Published:
We see recommendation systems all around us. These systems personalize our web experience, telling us what to buy (Amazon), which movies to watch (Netflix), whom to be friends with (Facebook), which songs to listen to (Spotify), and so on.
Published:
Optimize what matters. Authors: Prince Grover and Sourav Dey. This post is our attempt to summarize the importance of custom loss functions in many real-world problems, and how to implement them with the LightGBM gradient boosting package.
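LightGBM accepts a custom objective as a callable returning the gradient and Hessian of the loss with respect to the predictions. As a hedged sketch (my own example, not the post's code): an asymmetric squared error that penalizes under-prediction more than over-prediction.

```python
import numpy as np

def asymmetric_mse_objective(preds, labels, penalty=2.0):
    """Gradient and Hessian of an asymmetric squared error.

    Under-prediction (residual > 0) is penalized `penalty` times
    more than over-prediction. Returned in the (grad, hess) form
    that LightGBM's custom-objective hook expects.
    """
    residual = labels - preds
    weight = np.where(residual > 0, penalty, 1.0)
    grad = -2.0 * weight * residual        # d(loss)/d(pred)
    hess = 2.0 * weight                    # d2(loss)/d(pred)2
    return grad, hess
```

With LightGBM installed, a wrapper such as `lambda preds, dtrain: asymmetric_mse_objective(preds, dtrain.get_label())` can be passed as the custom objective; the exact parameter name (`fobj` vs. a callable `objective`) varies across library versions, so check the LightGBM docs.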
Published:
In this post, we’ll learn how we can solve many ML problems using our old math friend: matrix decompositions.
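A canonical example of this idea is low-rank approximation via the singular value decomposition, which underlies techniques such as latent-factor recommenders and PCA. A minimal NumPy sketch (the helper name is my own):

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # keep only the k largest singular values/vectors
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

By the Eckart–Young theorem, no rank-k matrix is closer to `A` in Frobenius norm than this truncation.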
Published:
Choosing the right loss function for fitting a model. All algorithms in machine learning rely on minimizing or maximizing a function, which we call the “objective function”.
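To make “minimizing an objective function” concrete, here is a minimal sketch (my own illustration): fitting a line by gradient descent on mean squared error.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # partial derivatives of (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Swapping the squared error for another loss (absolute error, Huber, etc.) changes only the gradient expressions, which is exactly why the choice of objective matters.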
Published:
This post provides an easy-to-follow tutorial on how to “train a base neural net” on one dataset and use that pre-trained network to “transfer learn” on a different dataset using the MXNet/Gluon framework.
Published:
For someone who thinks that random forest is a black-box algorithm, this post offers a differing opinion. I cover four interpretation methods that can help us extract meaning from a random forest model, with intuitive explanations.
Published:
Short Summaries of AI Research.
In 2023, I plan to increase my capacity for consuming research papers. For better retention, I realized it is important to summarize the papers in an organized way and to compile the summaries in a central location. After writing a few such paper summaries, I thought it might be a good idea to share them publicly, hoping that in the long run they will be useful to me and to others.