
TensorFlow model optimization: an introduction to Quantization

September 16, 2020 by Chris

Since the 2012 breakthrough in machine learning, which spawned the hype around deep learning - hype that should have mostly passed by now in favor of more productive applications - people around the world have worked on creating machine learning models for pretty much everything. To give an example: for my master's thesis, I personally spent time creating a machine learning model for recognizing the material type of underground utilities using ConvNets. It's really interesting to see how TensorFlow and other frameworks, such as Keras in my case, can be leveraged to create powerful AI models. Really fascinating!

Despite this positivity, critical remarks cannot be left out. While the research explosion around deep learning has focused on finding alternatives to common loss functions, on the effectiveness of Batch Normalization and Dropout, and so on, huge practical problems remain. One class of such practical problems is related to deploying your model in the real world. During training, and especially if you use one of the more state-of-the-art model architectures, you'll create a very big model.

Let's repeat this, but then in bold: **today's deep learning models are often very big**. A negative consequence of model size is that very powerful machines are required for inference (i.e. generating predictions for new data), or even to get the models running at all. Until now, those machines have mostly been deployed in the cloud. In situations where you want to respond immediately in the field, however, relying on a cloud connection is not the way to go. That's why a trend is visible today where machine learning models are moving to the edge. But nobody runs very big GPUs in the field, say at a traffic sign, just to run models. Problematic!

Unless it isn't. Today, fortunately, many deep learning tools have built-in means to optimize machine learning models. TensorFlow, and especially the TensorFlow Lite set of tools, provides many of them. In this blog, we'll cover quantization: effectively, a means to reduce the size of your machine learning model by representing its float32 numbers in lower-precision, smaller-bit formats.

AI at the edge: the need for model optimization

Let's go back to the core of my master's thesis that I mentioned above - the world of underground utilities. Perhaps you have experienced outages yourself sometimes, but in my country - the Netherlands - things go wrong once every three minutes. With 'wrong', I mean the occurrence of a utility strike. The consequences are big: annually, direct costs amount to approximately 25 million Euros, with indirect costs maybe ten to fifteen times higher.

Often, utility strikes happen because information about the utilities present in the underground is outdated or plainly incorrect. For this reason, there are companies today which specialize in scanning and subsequently mapping those utilities. For this purpose, among others, they use a device called a ground penetrating radar (GPR). A GPR emits radio waves into the ground and stores the reflections; using it, geophysicists scan and subsequently generate maps of what's below the surface.

Performing such scans and generating those maps is a tedious task. First of all, the engineers have to walk hundreds of meters to perform the scanning activities. Subsequently, they must scrutinize all those hundreds of meters of imagery - often in a repetitive way. Clearly, this presents opportunities for automation. And that's what I attempted to do in my master's thesis: amplify the analyst's knowledge by using machine learning - and specifically today's ConvNets - to automatically classify objects in GPR imagery with respect to radar size.

https://www.youtube.com/watch?v=oQaRfA7yJ0g

While very interesting from a machine learning point of view, that should not be the end goal commercially. The holy grail would be to equip a GPR device with a machine learning model that is very accurate and generalizes well. When that happens, the engineers who dig in the underground can perform those scans themselves, and subsequently analyze for themselves where they have to be cautious. What an optimization that would be compared to current market conditions, which are often unfavorable for all parties involved.

Now, if that were the goal, we'd have to literally run the machine learning model on the GPR device as well. That's where we repeat what we discussed at the beginning of this blog: given the sheer size of today's deep learning models, that's practically impossible. Nobody will equip a hardware device used in the field with a very powerful GPU. And if they would, where would they get the electricity from? It's unlikely that it could be powered by a simple solar panel.

Here emerges the need for creating machine learning models that run in the field. In business terms, we call this Edge AI - indeed, AI is moving from centralized orchestrations in clouds to the edge, where it can be applied instantly and where insights can be passed to actuators immediately. But doing so requires that models become efficient - much more efficient. Fortunately, many frameworks - TensorFlow included - provide means for doing so. Next, we'll cover TensorFlow Lite's methods for optimization related to quantization. Other optimization methods, such as pruning, will be discussed in future blogs.

Introducing Quantization

Optimizing a machine learning model can be beneficial in multiple ways (TensorFlow, n.d.). Primarily, size reduction, latency reduction and accelerator compatibility can be reasons to optimize one's machine learning model. With respect to reducing model size, benefits are as follows (TensorFlow, n.d.):

- Smaller storage size: Smaller models occupy less storage space on your users' devices. For example, an Android app using a smaller model will take up less storage space on a user's mobile device.
- Smaller download size: Smaller models require less time and bandwidth to download to users' devices.
- Less memory usage: Smaller models use less RAM when they are run, which frees up memory for other parts of your application to use, and can translate to better performance and stability.

That's great from a cost perspective, as well as a user perspective. The benefits of latency reduction compound this effect: because the model is smaller and more efficient, it takes less time to let a new sample pass through it - reducing the time between generating a prediction and receiving that prediction. Finally, with respect to accelerator compatibility, it's possible to achieve extremely good results when combining optimization with TPUs, which are specifically designed to run TensorFlow models (TensorFlow, n.d.). Altogether, optimization can greatly increase machine learning cost performance while keeping model performance at similar levels.

Float32 in your ML model: why it's great

By default, TensorFlow (and Keras) use float32 number representation while training machine learning models:

>>> import tensorflow as tf
>>> tf.keras.backend.floatx()
'float32'

Floats, or floating-point numbers, are "arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times" (Wikipedia, 2001). Put plainly, it's a way of representing real numbers (i.e., numbers like 1.348348348399...) that ensures processing speed while making only a minor trade-off between range and precision. This is contrary to integers, which can only represent whole numbers (say, 10, or 3, or 52).

Floats always store a fixed number of bits, or 0/1 combinations. The same is true for integers. The number after "float" in float32 represents the number of bits with which Keras works by default: 32. It therefore works with 32-bit floating-point numbers. As you can imagine, a float32 can store significantly more precise values than an int32 - it can represent 2.12, for example, while an int32 can only represent 2 or 3. That's the first benefit of using a floating-point number system in your machine learning model.

This directly translates into another benefit of using floats in your deep learning model. Training a machine learning model is a continuous process (Stack Overflow, n.d.). This means that weight initialization, backpropagation and subsequent model optimization - a.k.a. the high-level training process - benefit from very precise numbers. Integers can only represent whole numbers, such as 2 and 3; floats can represent any real number in between the two. Because of this, computations during training can be much more precise, benefiting the performance of your model. Using floating-point numbers is therefore great during training.
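To make the difference tangible, here's a tiny illustration of my own (not part of any training pipeline), using NumPy to show what happens when the fractional part of a number is dropped:

import numpy as np

# float32 keeps the fractional part; casting to int8 truncates it away.
weight = np.float32(2.12)
quantized = np.int8(weight)

print(weight)     # 2.12
print(quantized)  # 2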

Float32 in your ML model: why it's not so great

However, if you want to deploy your model, the fact that it was trained using float32 is not so great. The precision that benefits the training process comes at a cost: the cost of storing that precision. For example, compared to the integer 3000, the number 3000.1298289 requires a much larger number representation. This, in turn, makes your model bigger and less efficient during inference.

What model quantization involves

Quantization helps solve this problem. Following TensorFlow (n.d.), "[it] works by reducing the precision of the numbers used to represent a model's parameters". Hence, we simply cut off precision - say, from 321.36669 to 321 - hoping that the difference doesn't impact the model in a major way, while cutting model size significantly. In the blog post "Here’s why quantization matters for AI.", Qualcomm (2020) demonstrates nicely, by means of an example, why quantization helps reduce the size of your model.
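To make the idea concrete, here is a minimal sketch of my own (not taken from the Qualcomm post) that maps a handful of float32 weights onto the int8 range [-127, 127] using a single scale factor - the essence of symmetric quantization:

import numpy as np

# A few made-up float32 'weights' for illustration.
weights = np.array([0.12, -0.98, 321.36669, -54.3], dtype=np.float32)

# One scale factor maps the largest absolute value onto 127.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)   # 8-bit storage: 4x smaller than float32
dequantized = q_weights.astype(np.float32) * scale      # approximate reconstruction

print(q_weights)    # e.g. [  0   0 127 -21]
print(dequantized)  # values close to, but not exactly, the originals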

Now that we know what quantization is and how it benefits model efficiency, it's time to take a look at the quantization approaches supported by TensorFlow Lite. TF Lite is a collection of tools used for model optimization (TensorFlow lite guide, n.d.). It can be used after regular TensorFlow training to reduce the size, and hence increase the efficiency, of your trained TF models; it can also be installed on edge devices to run the optimized models. TF Lite supports the following methods of quantization:

Post-training float16 quantization

One of the simplest quantization approaches is to convert the model's float32-based weights into float16 format (TensorFlow, n.d.). This effectively means that the size of your model is reduced by 50%. While the reduction in size is smaller compared to other quantization methods (especially the int-based ones, as we will see), its benefit is that your models will still run on GPUs - and will most likely run faster. They can still run on CPUs as well (TensorFlow, n.d.).
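As a rough sketch of what this looks like in code - the two-layer `model` below is just a stand-in for whatever trained tf.keras model you want to optimize - post-training float16 quantization can be applied through the TF Lite converter:

import tensorflow as tf

# Stand-in for your own trained tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax')
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # quantize weights to float16

tflite_fp16_model = converter.convert()
with open('model_fp16.tflite', 'wb') as f:
    f.write(tflite_fp16_model)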

Post-training dynamic range quantization

It's also possible to quantize dynamically - meaning that model weights get quantized from float32 into int8 format (TensorFlow, n.d.). This means that your model will become 4 times smaller, i.e. 25% of the original size - twice the size reduction of the post-training float16 quantization discussed above. What's more, model activations are quantized as well, but only dynamically, at inference time.

While models get smaller with dynamic range quantization, you lose the possibility of running your model for inference on a GPU or TPU. Instead, you'll have to use a CPU for this purpose.
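A minimal sketch, reusing the stand-in `model` from the float16 example above: dynamic range quantization is what the converter applies by default when optimizations are enabled, so no further settings are needed:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` as in the float16 sketch
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weights -> int8; activations quantized dynamically at inference

tflite_dynamic_model = converter.convert()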

Post-training integer quantization (full integer quantization)

Another, more thorough approach to quantization is to convert "all model math" into int format. More precisely, everything in your model is converted from float32 into int8 format (TensorFlow, n.d.). This also means that the activations of your model are converted into int format - whereas dynamic range quantization does so at inference time only. This method is also called "full integer quantization".

Integer quantization helps if you want to run your model on a CPU or even an Edge TPU, which requires integer operations in order to accelerate model performance. What's more, it's also likely that you'll have to perform integer quantization when you want to run your model on a microcontroller. Still, despite the model getting smaller (4 times smaller - from 32-bit to 8-bit) and faster, you'll have to think carefully: changing floats into ints removes precision, as we discussed earlier. Do you accept the possibility that model performance is affected? You'll have to test this thoroughly if you use integer quantization.
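A sketch of full integer quantization, again reusing the stand-in `model` from above. Note that the converter additionally needs a representative dataset to calibrate the ranges of the activations; the random generator below is a hypothetical placeholder for a few samples of your real input data:

import numpy as np
import tensorflow as tf

# Hypothetical calibration data: a few samples shaped like the model's input.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Enforce int8 for all ops, inputs and outputs (required for e.g. the Edge TPU).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8_model = converter.convert()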

Post-training integer quantization with int16 activations (16x8 integer quantization)

Another approach, which actually extends the former, is post-training integer quantization with int16 activations (TensorFlow, n.d.). Here, weights are converted into int8, but activations are converted into int16 format. This method is therefore often called "16x8 integer quantization" (TensorFlow, n.d.). It has the benefit that model size is still reduced substantially - because the weights are still in int8 format - while, for inference, greater accuracy is achieved compared to full integer quantization, thanks to the activations being quantized into int16 format.
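In code, the difference with full integer quantization is mainly the ops set you target. A sketch, reusing `model` and `representative_dataset` from the previous examples (the ops-set name below is the one documented by TensorFlow and may still be marked experimental in your TF version):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Weights in int8, activations in int16.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]

tflite_16x8_model = converter.convert()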

Quantization-aware training

All previous approaches to quantization require you to train a full float32 model first, after which you apply one of the forms of quantization to optimize that model. While this is easier, model accuracy can benefit when the model is already aware during training that it will eventually be quantized using one of the quantization approaches discussed above. Quantization-aware training allows you to do this (TensorFlow, n.d.) by emulating inference-time quantization during the fitting process. Doing so allows your model to learn parameters that are robust against the loss of precision induced by quantization (Tfmot.quantization.keras.quantize_model, n.d.).

Generally, quantization-aware training is a three-step process:

  1. Train a regular model through tf.keras
  2. Make it quantization-aware by applying the related API, allowing it to learn those loss-robust parameters.
  3. Quantize the model using one of the approaches mentioned above (a sketch of the full flow follows below).

More information can be found in TensorFlow's Quantization aware training guide.
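A minimal sketch of this three-step flow with the TensorFlow Model Optimization Toolkit (tfmot); the tiny model and the random training data are stand-ins for your own, and the relevant call is tfmot.quantization.keras.quantize_model:

import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Step 1: a regular tf.keras model (stand-in for your own, trained on real data).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax')
])

# Step 2: make it quantization-aware and (re)train so it learns loss-robust parameters.
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])

x_train = np.random.rand(32, 4).astype(np.float32)   # placeholder training data
y_train = np.random.randint(0, 3, size=(32,))
q_aware_model.fit(x_train, y_train, epochs=1, verbose=0)

# Step 3: quantize, e.g. with the converter's default optimization.
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_qat_model = converter.convert()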

Which quantization method to choose for your ML model

It can be difficult to choose a quantization method for your machine learning model. The table below suggests which quantization method could be best for your use case: primarily, it is a trade-off between the benefits you seek and the hardware you want to run the model on. Here are some general heuristics:

| Technique | Benefits | Hardware |
| --- | --- | --- |
| Post-training float16 quantization | Model ~2x smaller; GPU acceleration still possible | CPU, GPU |
| Post-training dynamic range quantization | Model ~4x smaller; faster inference | CPU |
| Post-training integer quantization | Model ~4x smaller; faster; works with integer-only accelerators | CPU, Edge TPU, microcontrollers |
| Post-training 16x8 integer quantization | Weights still ~4x smaller; better accuracy than full int8 | CPU |
| Quantization-aware training | Best accuracy retention after quantization | Depends on the quantization method applied afterwards |
