The PyTorch sigmoid function is an element-wise operation that squishes any real number into the range (0, 1). It is a very common activation function for the last layer of binary classifiers (including logistic regression) because it lets you treat model predictions as probabilities.
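As a minimal sketch of the sigmoid call (the input values here are just illustrative):

```python
import torch

# torch.sigmoid applies 1 / (1 + exp(-x)) to each element independently.
logits = torch.tensor([-2.0, 0.0, 2.0])
probs = torch.sigmoid(logits)

# Every output lands strictly between 0 and 1, and sigmoid(0) is exactly 0.5,
# which is why the outputs read naturally as probabilities.
```

The same operation is also available as `torch.nn.functional.sigmoid()` and as the `torch.nn.Sigmoid` module for use inside `nn.Sequential`.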
TorchVision, a PyTorch computer vision package, has a great API for image pre-processing in its torchvision.transforms module. This post gives some basic usage examples, describes the API and shows you how to create and use custom image transforms.
PyTorch has a one_hot() function for converting class indices to one-hot encoded targets.
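In brief, `one_hot()` lives in `torch.nn.functional` and expects integer class indices (the labels below are illustrative):

```python
import torch
import torch.nn.functional as F

# Class indices for three samples, drawn from three classes.
labels = torch.tensor([0, 2, 1])
encoded = F.one_hot(labels, num_classes=3)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])
```

If `num_classes` is omitted, it is inferred as one more than the largest index in the input.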
You can use the top-level torch.softmax() function from PyTorch for your softmax activation needs.
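A quick sketch of the top-level call (the scores are made up; the `dim` argument says which axis should sum to 1):

```python
import torch

# One sample with three class scores (logits).
scores = torch.tensor([[1.0, 2.0, 3.0]])
probs = torch.softmax(scores, dim=1)

# Each row of probs is non-negative and sums to 1,
# with larger logits getting larger probabilities.
```

Note that `dim` is required: softmax over the wrong axis is a classic silent bug in batched code.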
Object tracking is pretty easy conceptually. And if you have a good detector, simple methods can be pretty effective.
Cross entropy loss in PyTorch can be a little confusing. Here is a simple explanation of how it works for people who get stuck.
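The main source of confusion is that `cross_entropy()` takes raw logits, not probabilities: it fuses `log_softmax` and `nll_loss` into one call. A minimal sketch with made-up numbers:

```python
import torch
import torch.nn.functional as F

# One sample, three classes: raw logits, NOT softmax outputs.
logits = torch.tensor([[2.0, 0.5, 0.1]])
target = torch.tensor([0])  # the true class index, not one-hot

loss = F.cross_entropy(logits, target)

# Equivalent two-step version, useful for seeing what is fused together:
manual = F.nll_loss(F.log_softmax(logits, dim=1), target)
```

Applying your own softmax before calling `cross_entropy()` double-applies it and quietly degrades training, which is the mistake that trips most people up.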
Adding a dimension to a tensor can be important when you’re building deep learning models.
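The usual tool is `unsqueeze()`, most often to add a batch dimension before feeding a single sample to a model (the tensor here is illustrative):

```python
import torch

x = torch.tensor([1, 2, 3])   # shape: (3,)
batched = x.unsqueeze(0)      # shape: (1, 3) -- new axis at position 0

# Indexing with None does the same thing:
also_batched = x[None, :]
```

`unsqueeze()` returns a view, so no data is copied; `squeeze()` is the inverse operation.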
In this post we’ll classify an image with PyTorch. If you prefer to skip the prose, you can check out the Jupyter notebook. Two interesting features of PyTorch are pythonic tensor manipulation that’s similar to NumPy and dynamic computational graphs, which handle recurrent neural networks in a more natural way than static graphs.