Transformers have changed the application of Machine Learning in Natural Language Processing. They have replaced LSTMs as state-of-the-art (SOTA) approaches...
In a different blog post, we studied the concept of a Variational Autoencoder (or VAE) in detail. The models, which...
Autoencoders are special types of neural networks that learn to convert inputs into a lower-dimensional form, after which they convert it...
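The compress-then-reconstruct idea can be sketched without any deep learning framework: a purely *linear* autoencoder is known to be equivalent to PCA, so an SVD projection (an illustrative stand-in, not the post's own code) shows what "convert to a lower-dimensional form and back" means:

```python
import numpy as np

# Sketch: a linear autoencoder is equivalent to PCA, so an SVD
# illustrates the encode/decode idea on toy data.
rng = np.random.default_rng(0)

# Toy data that is intrinsically 2-dimensional, embedded in 5 dimensions.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing                      # shape (100, 5)

# "Encoder": project onto the top-2 principal directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:2].T                             # shape (5, 2)
codes = X @ W                            # lower-dimensional form, (100, 2)

# "Decoder": map the codes back to the original space.
X_hat = codes @ W.T                      # reconstruction, (100, 5)

print(codes.shape)                       # (100, 2)
print(np.allclose(X, X_hat))             # True: the data is truly rank-2
```

A real autoencoder replaces the two linear maps with learned nonlinear networks, which is what lets it capture structure PCA cannot.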
Suppose that you have an image of a man with a moustache and one of a man without one. You...
When you train a supervised machine learning model, you feed data forward, generating predictions on the fly. The comparison of...
Images can be noisy, and you likely want to have this noise removed. Traditional noise removal filters can be used...
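As a concrete example of such a traditional filter (an illustration assumed here, not code from the post itself), a 3x3 median filter removes salt-and-pepper outliers by replacing each pixel with the median of its neighborhood:

```python
import numpy as np

# A "traditional" noise-removal filter: a 3x3 median filter,
# which is effective against salt-and-pepper noise.
def median_filter(image):
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# A flat gray image with a single salt-and-pepper outlier.
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0                 # the noise pixel
clean = median_filter(img)
print(clean[2, 2])                # 100.0: the outlier is gone
```

Learned approaches such as denoising autoencoders aim to go beyond such fixed filters by learning what noise looks like from data.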
Pure signals only exist in theory. That is, when you’re doing signal processing-related activities, it’s very likely that you’ll...
The Convolutional layers section of the Keras API contains the so-called UpSampling2D layer. But what does it do? And how...
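In short, `UpSampling2D` with its default nearest-neighbor interpolation simply repeats each value along the row and column axes. A minimal numpy sketch of that behavior (the helper name is ours, not Keras code):

```python
import numpy as np

# What Keras' UpSampling2D does by default (nearest-neighbor):
# repeat every value `size` times along the height and width axes.
def upsample2d(x, size=(2, 2)):
    return np.repeat(np.repeat(x, size[0], axis=0), size[1], axis=1)

x = np.array([[1, 2],
              [3, 4]])
print(upsample2d(x))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

Unlike a transposed convolution, this operation has no trainable parameters, which is why it is often paired with a regular `Conv2D` layer in upsampling paths.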
Transposed convolutions – we looked at them in theory in a previous blog post, but how can they be applied?...
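Before reaching for Keras' `Conv2DTranspose`, the mechanics can be sketched in plain numpy (stride 1, no padding; the function below is an illustrative implementation, not the Keras one): each input value "stamps" a scaled copy of the kernel into the output, and overlapping stamps are summed.

```python
import numpy as np

# Minimal transposed convolution: every input element scatters a scaled
# copy of the kernel into the output; overlaps are accumulated.
def conv2d_transpose(x, kernel, stride=1):
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

x = np.array([[1., 2.],
              [3., 4.]])
k = np.array([[1., 1.],
              [1., 1.]])
print(conv2d_transpose(x, k))
# [[ 1.  3.  2.]
#  [ 4. 10.  6.]
#  [ 3.  7.  4.]]
```

Note how a 2x2 input grows into a 3x3 output: this is the upsampling effect that makes transposed convolutions useful in decoder architectures.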