Normalization (machine learning)


In machine learning, normalization is a statistical technique with various applications. There are two main forms of normalization: data normalization and activation normalization. Data normalization (or feature scaling) is a general technique in statistics, and it includes methods that rescale input data so that they have well-behaved range, mean, variance, and other statistical properties. Activation normalization is specific to deep learning, and it includes methods that rescale the activations of hidden neurons inside a neural network.

Normalization is often used to obtain faster training convergence, less sensitivity to variations in input data, less overfitting, and better generalization to unseen data. Normalization techniques are often theoretically justified as reducing internal covariate shift, smoothing the optimization landscape, and increasing regularization, though they are mainly justified by empirical success.[1]

Batch normalization


Batch normalization (BatchNorm)[2] operates on the activations of a layer for each mini-batch.

Consider a simple feedforward network, defined by chaining together modules:

$x^{(0)} \mapsto x^{(1)} \mapsto x^{(2)} \mapsto \cdots$

where each network module can be a linear transform, a nonlinear activation function, a convolution, etc. $x^{(0)}$ is the input vector, $x^{(1)}$ is the output vector from the first module, etc.

BatchNorm is a module that can be inserted at any point in the feedforward network. For example, suppose it is inserted just after $x^{(l)}$, then the network would operate accordingly:

$\cdots \mapsto x^{(l)} \mapsto \mathrm{BN}(x^{(l)}) \mapsto \cdots$

The BatchNorm module does not operate over individual inputs. Instead, it must operate over one batch of inputs at a time.

Concretely, suppose we have a batch of inputs $x^{(0)}_{(1)}, x^{(0)}_{(2)}, \dots, x^{(0)}_{(B)}$, fed all at once into the network. We would obtain in the middle of the network some vectors $x^{(l)}_{(1)}, x^{(l)}_{(2)}, \dots, x^{(l)}_{(B)}$. The BatchNorm module computes the coordinate-wise mean and variance of these vectors:

$\mu^{(l)}_i = \frac{1}{B} \sum_{b=1}^{B} x^{(l)}_{(b), i}, \qquad (\sigma^{(l)}_i)^2 = \frac{1}{B} \sum_{b=1}^{B} \left( x^{(l)}_{(b), i} - \mu^{(l)}_i \right)^2$

where $i$ indexes the coordinates of the vectors, and $b$ indexes the elements of the batch. In other words, we are considering the $i$-th coordinate of each vector in the batch, and computing the mean and variance of this collection of numbers.

It then normalizes each coordinate to have zero mean and unit variance:

$\hat{x}^{(l)}_{(b), i} = \frac{x^{(l)}_{(b), i} - \mu^{(l)}_i}{\sqrt{(\sigma^{(l)}_i)^2 + \epsilon}}$

The $\epsilon$ is a small positive constant such as $10^{-8}$ added to the variance for numerical stability, to avoid division by zero.

Finally, it applies a linear transform:

$y^{(l)}_{(b), i} = \gamma_i \hat{x}^{(l)}_{(b), i} + \beta_i$

Here, $\gamma$ and $\beta$ are parameters inside the BatchNorm module. They are learnable parameters, typically trained by gradient descent.

The following code illustrates BatchNorm.

import numpy as np

def batchnorm(x, gamma, beta, epsilon=1e-8):
    # x has shape (B, N): a batch of B activation vectors of dimension N.
    # Mean and variance of each feature, computed over the batch dimension
    mu = np.mean(x, axis=0)  # shape (N,)
    var = np.var(x, axis=0)  # shape (N,)

    # Normalize the activations
    x_hat = (x - mu) / np.sqrt(var + epsilon)  # shape (B, N)

    # Apply the linear transform
    y = gamma * x_hat + beta  # shape (B, N)

    return y

Interpretation


$\gamma$ and $\beta$ allow the network to learn to undo the normalization, if this is beneficial.[3] BatchNorm can be interpreted as removing the purely linear transformations, so that its layers focus purely on modelling the nonlinear aspects of data, which may be beneficial, as a neural network can always have a linear transform layer placed on top.[4][3]

It is claimed in the original publication that BatchNorm works by reducing "internal covariate shift", though the claim has both supporters[5][6] and detractors.[7][8]

Special cases


The original paper[2] recommended using BatchNorm only after a linear transform, not after a nonlinear activation. That is, $\phi(\mathrm{BN}(Wx + b))$, not $\mathrm{BN}(\phi(Wx + b))$. Also, the bias $b$ does not matter, since it will be canceled by the subsequent mean subtraction, so the form used is $\phi(\mathrm{BN}(Wx))$. That is, if a BatchNorm is preceded by a linear transform, then that linear transform's bias term is set to constant zero.[2]

For convolutional neural networks (CNN), BatchNorm must preserve the translation-invariance of CNN, which means that it must treat all outputs of the same kernel as if they are different data points within a batch.[2] This is sometimes called Spatial BatchNorm, or BatchNorm2D, or per-channel BatchNorm.[9][10]

Concretely, suppose we have a 2-dimensional convolutional layer defined by

$x^{(l)}_{h, w, c} = \sum_{h', w', c'} K^{(l)}_{c, h', w', c'} \, x^{(l-1)}_{h + h', w + w', c'} + b^{(l)}_c$

where

  • $x^{(l)}_{h, w, c}$ is the activation of the neuron at position $(h, w)$ in the $c$-th channel of the $l$-th layer.
  • $K^{(l)}$ is a kernel tensor. Each channel $c$ corresponds to a kernel $K^{(l)}_{c, \cdot, \cdot, \cdot}$, with indices $(h', w', c')$.
  • $b^{(l)}_c$ is the bias term for the $c$-th channel of the $l$-th layer.

In order to preserve translational invariance, BatchNorm treats all outputs from the same kernel in the same batch as additional data points within the batch. That is, it is applied once per kernel $K^{(l)}_{c, \cdot, \cdot, \cdot}$ (equivalently, once per channel $c$), not per activation $x^{(l)}_{h, w, c}$:

$\mu^{(l)}_c = \frac{1}{BHW} \sum_{b=1}^{B} \sum_{h=1}^{H} \sum_{w=1}^{W} x^{(l)}_{(b), h, w, c}, \qquad (\sigma^{(l)}_c)^2 = \frac{1}{BHW} \sum_{b=1}^{B} \sum_{h=1}^{H} \sum_{w=1}^{W} \left( x^{(l)}_{(b), h, w, c} - \mu^{(l)}_c \right)^2$

where $B$ is the batch size, $H$ is the height of the feature map, and $W$ is the width of the feature map.

That is, even though there are only $B$ data points in a batch, all $BHW$ outputs from the kernel in this batch are treated equally.[2]

Subsequently, normalization and the linear transform are also done per kernel:

$\hat{x}^{(l)}_{(b), h, w, c} = \frac{x^{(l)}_{(b), h, w, c} - \mu^{(l)}_c}{\sqrt{(\sigma^{(l)}_c)^2 + \epsilon}}, \qquad y^{(l)}_{(b), h, w, c} = \gamma_c \hat{x}^{(l)}_{(b), h, w, c} + \beta_c$

Similar considerations apply for BatchNorm for n-dimensional convolutions.

The following code illustrates BatchNorm for 2D convolutions:

import numpy as np

def batchnorm_cnn(x, gamma, beta, epsilon=1e-8):
    # x has shape (B, H, W, C): a batch of feature maps in channels-last layout.
    # Calculate the mean and variance for each channel, over the batch
    # and spatial dimensions.
    mean = np.mean(x, axis=(0, 1, 2), keepdims=True)
    var = np.var(x, axis=(0, 1, 2), keepdims=True)

    # Normalize the input tensor.
    x_hat = (x - mean) / np.sqrt(var + epsilon)

    # Scale and shift the normalized tensor.
    y = gamma * x_hat + beta

    return y

Improvements


BatchNorm has been very popular, and many improvements to it have been attempted. Some examples include:[11]

  • Ghost batch: Randomly partition a batch into sub-batches and perform BatchNorm separately on each.
  • Weight decay on $\gamma$ and $\beta$.
  • Combine BatchNorm with GroupNorm.

A particular problem with BatchNorm is that during training, the mean and variance are calculated on the fly for each batch (and accumulated into an exponential moving average), but during inference, the mean and variance are frozen at the values accumulated during training. This train-test disparity degrades performance. The disparity can be decreased by simulating the moving average during inference,[11]: Eq. 3  controlled by a hyperparameter that is optimized on a validation set.
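
The following code sketches the train/inference behavior described above, with per-batch statistics during training and frozen running averages during inference; it illustrates the disparity itself rather than the specific remedy of,[11] and the momentum value is only illustrative.

import numpy as np

class BatchNormWithRunningStats:
    # Minimal sketch: per-batch statistics during training,
    # frozen running averages during inference.

    def __init__(self, num_features, momentum=0.9, epsilon=1e-8):
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.momentum = momentum
        self.epsilon = epsilon

    def __call__(self, x, training):
        if training:
            # Use the current batch's statistics and update the moving averages.
            mu = np.mean(x, axis=0)
            var = np.var(x, axis=0)
            self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mu
            self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        else:
            # Use the frozen statistics accumulated during training.
            mu, var = self.running_mean, self.running_var
        x_hat = (x - mu) / np.sqrt(var + self.epsilon)
        return self.gamma * x_hat + self.beta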

Other works attempt to eliminate BatchNorm, such as the Normalizer-Free ResNet.[12]

Layer normalization


Layer normalization (LayerNorm)[13] is a common competitor to BatchNorm. Unlike BatchNorm, which normalizes activations across the batch dimension for a given feature, LayerNorm normalizes across all the features within a single data sample. Compared to BatchNorm, LayerNorm's performance is not affected by batch size. It is a key component of Transformers.

For a given data input and layer, LayerNorm computes the mean $\mu$ and variance $\sigma^2$ over all the neurons in the layer. Similar to BatchNorm, learnable parameters $\gamma$ (scale) and $\beta$ (shift) are applied. It is defined by:

$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}} \gamma_i + \beta_i$

where $\mu = \frac{1}{D} \sum_{i=1}^{D} x_i$ and $\sigma^2 = \frac{1}{D} \sum_{i=1}^{D} (x_i - \mu)^2$, $D$ is the number of neurons in the layer, and $i$ ranges over the neurons in that layer.
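
The following code sketches LayerNorm, under the assumption that the neurons of the layer lie along the last axis of the input array:

import numpy as np

def layernorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance over the features of each sample (last axis),
    # independent of any other samples in the batch.
    mu = np.mean(x, axis=-1, keepdims=True)
    var = np.var(x, axis=-1, keepdims=True)

    x_hat = (x - mu) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta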

Examples


For example, in CNNs, a LayerNorm applies to all activations in a layer. In the previous notation, we have

$\mu^{(l)} = \frac{1}{HWC} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} x^{(l)}_{h, w, c}, \qquad (\sigma^{(l)})^2 = \frac{1}{HWC} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C} \left( x^{(l)}_{h, w, c} - \mu^{(l)} \right)^2$

Notice that the batch index $b$ is removed, while the channel index $c$ is added.

In recurrent neural networks[13] and Transformers,[14] LayerNorm is applied individually to each timestep.

For example, if the hidden vector in an RNN at timestep $t$ is $x^{(t)} \in \mathbb{R}^{D}$, where $D$ is the dimension of the hidden vector, then LayerNorm will be applied with

$\hat{x}^{(t)}_i = \frac{x^{(t)}_i - \mu^{(t)}}{\sqrt{(\sigma^{(t)})^2 + \epsilon}} \gamma_i + \beta_i$

where $\mu^{(t)} = \frac{1}{D} \sum_{i=1}^{D} x^{(t)}_i$ and $(\sigma^{(t)})^2 = \frac{1}{D} \sum_{i=1}^{D} \left( x^{(t)}_i - \mu^{(t)} \right)^2$.

Root mean square layer normalization


Root mean square layer normalization (RMSNorm)[15] changes LayerNorm by

$\hat{x}_i = \frac{x_i}{\sqrt{\frac{1}{D} \sum_{j=1}^{D} x_j^2}} \gamma_i$

Essentially, it is LayerNorm where we enforce $\mu = 0$, $\epsilon = 0$, and $\beta = 0$.
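
The following code sketches RMSNorm under the same last-axis convention as the LayerNorm example; a small epsilon is kept inside the square root for numerical stability, as is common in implementations:

import numpy as np

def rmsnorm(x, gamma, epsilon=1e-8):
    # Root mean square over the features; no mean subtraction, no shift.
    rms = np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + epsilon)
    return gamma * x / rms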

Adaptive


Adaptive layer norm (adaLN) computes the $\gamma, \beta$ in a LayerNorm not from the layer activation itself, but from other data. It was first proposed for CNNs,[16] and has been used effectively in diffusion Transformers (DiT).[17] For example, in a DiT, the conditioning information (such as a text encoding vector) is processed by an MLP into $\gamma, \beta$, which is then applied in the LayerNorm modules of the Transformer.
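
The following code sketches the idea of adaptive layer norm: a small conditioning network produces the scale and shift, which are then applied to the normalized activations. The two-layer MLP and its parameter names are illustrative assumptions, not the exact DiT architecture.

import numpy as np

def adaptive_layernorm(x, cond, W1, b1, W2, b2, epsilon=1e-8):
    # gamma and beta are computed from the conditioning vector `cond`
    # by a small MLP, rather than being fixed learned parameters.
    h = np.maximum(W1 @ cond + b1, 0.0)  # hidden layer with ReLU
    gamma_beta = W2 @ h + b2             # shape (2 * D,)
    gamma, beta = np.split(gamma_beta, 2)

    # Standard LayerNorm on x, followed by the conditioned scale and shift.
    mu = np.mean(x, axis=-1, keepdims=True)
    var = np.var(x, axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta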

Weight normalization


Weight normalization (WeightNorm)[18] is a technique inspired by BatchNorm. It normalizes weight matrices in a neural network, rather than its neural activations.

One example is spectral normalization, which divides weight matrices by their spectral norm. The spectral normalization is used in generative adversarial networks (GANs) such as the Wasserstein GAN.[19] The spectral radius can be efficiently computed by the following algorithm:

INPUT matrix $W$ and initial guess $x$

Iterate $x \mapsto \frac{1}{\|Wx\|_2} Wx$ to convergence $x^*$. This is the eigenvector of $W$ with eigenvalue $\|W\|_s$.

RETURN $x^*, \|W x^*\|_2$

By reassigning $W_i \leftarrow \frac{W_i}{\|W_i\|_s}$ after each update of the discriminator, we can upper-bound $\|W_i\|_s \leq 1$, and thus upper-bound the Lipschitz norm of the discriminator.

The algorithm can be further accelerated by memoization: at step $t$, store $x^*_t$. Then, at step $t+1$, use $x^*_t$ as the initial guess for the algorithm. Since $W_{t+1}$ is close to $W_t$, so is $x^*_{t+1}$ close to $x^*_t$, so this allows rapid convergence.
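
The following code sketches spectral normalization by power iteration with memoization. For a general, possibly non-square, weight matrix, the iteration is usually carried out on a pair of singular-vector estimates, as sketched here; the function name and the number of iterations are illustrative.

import numpy as np

def spectral_norm_power_iteration(W, u=None, num_iters=1):
    # Estimate the spectral norm (largest singular value) of W by power
    # iteration. Passing in the `u` from the previous call implements the
    # memoization described above.
    m, n = W.shape
    if u is None:
        u = np.random.randn(m)
        u /= np.linalg.norm(u)
    for _ in range(num_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # approximate largest singular value
    return sigma, u

# Spectral normalization: divide the weight matrix by its spectral norm.
W = np.random.randn(64, 128)
sigma, u = spectral_norm_power_iteration(W, num_iters=5)
W_sn = W / sigma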

CNN-specific normalization


There are some activation normalization techniques that are only used for CNNs.

Response normalization


Local response normalization[20] was used in AlexNet. It was applied in a convolutional layer, just after a nonlinear activation function. It was defined by

$b^i_{x, y} = \frac{a^i_{x, y}}{\left( k + \alpha \sum_{j = \max(0, i - n/2)}^{\min(N - 1, i + n/2)} (a^j_{x, y})^2 \right)^{\beta}}$

where $a^i_{x, y}$ is the activation of the neuron at location $(x, y)$ and channel $i$, and $N$ is the number of channels. In words, each pixel in a channel is suppressed by the activations of the same pixel in its adjacent channels.

The numbers $k, n, \alpha, \beta$ are hyperparameters picked by using a validation set.
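
The following code sketches local response normalization for an activation tensor of shape (H, W, C); the default hyperparameter values shown are only illustrative.

import numpy as np

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    # a has shape (H, W, C): each position is normalized by the sum of
    # squared activations over a window of n adjacent channels.
    H, W, C = a.shape
    b = np.empty_like(a)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C, i + n // 2 + 1)
        denom = (k + alpha * np.sum(a[:, :, lo:hi] ** 2, axis=-1)) ** beta
        b[:, :, i] = a[:, :, i] / denom
    return b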

It was a variant of the earlier local contrast normalization,[21] defined by

$b^i_{x, y} = \frac{a^i_{x, y}}{\left( k + \alpha \sum_{j = \max(0, i - n/2)}^{\min(N - 1, i + n/2)} (a^j_{x, y} - \bar{a}^j_{x, y})^2 \right)^{\beta}}$

where $\bar{a}^j_{x, y}$ is the average activation in a small window centered on location $(x, y)$ and channel $j$. The numbers $k, n, \alpha, \beta$, and the size of the small window, are hyperparameters picked by using a validation set.

Similar methods were called divisive normalization, as they divide activations by a number depending on the activations. They were originally inspired by biology, where it was used to explain nonlinear responses of cortical neurons and nonlinear masking in visual perception.[22]

Both kinds of local normalization were obsoleted by batch normalization, which is a more global form of normalization.[23]

Response normalization reappeared in ConvNeXT-2 as global response normalization.[24]

Group normalization


Group normalization (GroupNorm)[25] is a technique only used for CNNs. It can be understood as the LayerNorm for CNN applied once per channel-group.

Suppose at a layer $l$, there are channels $1, 2, \dots, C$, then we partition them into groups $g_1, \dots, g_G$. Then, we apply LayerNorm to each group.
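
The following code sketches GroupNorm for activations of shape (B, H, W, C), assuming the number of channels is divisible by the number of groups:

import numpy as np

def groupnorm(x, gamma, beta, num_groups, epsilon=1e-8):
    # x has shape (B, H, W, C); channels are split into groups and each
    # group is normalized separately, per sample.
    B, H, W, C = x.shape
    x = x.reshape(B, H, W, num_groups, C // num_groups)
    mean = np.mean(x, axis=(1, 2, 4), keepdims=True)
    var = np.var(x, axis=(1, 2, 4), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    x_hat = x_hat.reshape(B, H, W, C)
    return gamma * x_hat + beta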

Instance normalization


Instance normalization (InstanceNorm), or contrast normalization, is a technique first developed for neural style transfer, and is only used for CNNs.[26] It can be understood as the LayerNorm for CNN applied once per channel, or equivalently, as group normalization where each group consists of a single channel:

$\mu^{(l)}_c = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} x^{(l)}_{h, w, c}, \qquad (\sigma^{(l)}_c)^2 = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \left( x^{(l)}_{h, w, c} - \mu^{(l)}_c \right)^2$
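
The following code sketches InstanceNorm under the same (B, H, W, C) layout; it is the GroupNorm above with one channel per group:

import numpy as np

def instancenorm(x, gamma, beta, epsilon=1e-8):
    # x has shape (B, H, W, C); each channel of each sample is normalized
    # over its spatial dimensions only.
    mean = np.mean(x, axis=(1, 2), keepdims=True)
    var = np.var(x, axis=(1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta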

Adaptive instance normalization


Adaptive instance normalization (AdaIN) is a variant of instance normalization, designed specifically for neural style transfer with CNN, not for CNN in general.[27]

In the AdaIN method of style transfer, we take a CNN and two input images, one content and one style. Each image is processed through the same CNN, and at a certain layer $l$, AdaIN is applied.

Let $x^{(l)}$ be the activation in the content image, and $y^{(l)}$ be the activation in the style image. Then, AdaIN first computes the mean and variance of the activations of the style image $y^{(l)}$, then uses those as the $\gamma, \beta$ for InstanceNorm on the content activation $x^{(l)}$. Note that $y^{(l)}$ itself remains unchanged. Explicitly, we have

$\mathrm{AdaIN}(x^{(l)}, y^{(l)}) = \sigma(y^{(l)}) \left( \frac{x^{(l)} - \mu(x^{(l)})}{\sigma(x^{(l)})} \right) + \mu(y^{(l)})$
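
The following code sketches AdaIN for content and style activations of shape (H, W, C), with per-channel statistics taken over the spatial dimensions; the small epsilon guarding against division by zero is an implementation detail, not part of the definition:

import numpy as np

def adain(x_content, y_style, epsilon=1e-8):
    # Per-channel mean and standard deviation over the spatial dimensions.
    mu_c = np.mean(x_content, axis=(0, 1), keepdims=True)
    sigma_c = np.std(x_content, axis=(0, 1), keepdims=True)
    mu_s = np.mean(y_style, axis=(0, 1), keepdims=True)
    sigma_s = np.std(y_style, axis=(0, 1), keepdims=True)

    # Normalize the content features, then rescale and shift them so that
    # they take on the style features' per-channel statistics.
    return sigma_s * (x_content - mu_c) / (sigma_c + epsilon) + mu_s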

Transformers


Some normalization methods were designed for use in Transformers.

The original 2017 Transformer used the "post-LN" configuration for its LayerNorms. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018,[28] was found to be easier to train, requiring no warm-up, leading to faster convergence.[29]

FixNorm[30] and ScaleNorm[31] both normalize activation vectors in a Transformer. The FixNorm method divides the output vectors from a Transformer by their L2 norms, then multiplies by a learned scalar parameter $g$. The ScaleNorm method replaces all LayerNorms inside a Transformer by division with the L2 norm, then multiplication by a learned scalar parameter $g$ (shared by all ScaleNorm modules of a Transformer). Query-Key normalization (QKNorm)[32] normalizes query and key vectors to have unit L2 norm.
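
The following code sketches ScaleNorm and QKNorm as described above; the shapes and the scalar parameter name g are illustrative:

import numpy as np

def scalenorm(x, g, epsilon=1e-8):
    # Divide by the L2 norm of the activation vector, then scale by the
    # single learned scalar g.
    return g * x / (np.linalg.norm(x, axis=-1, keepdims=True) + epsilon)

def qknorm(q, k, epsilon=1e-8):
    # Normalize query and key vectors to unit L2 norm before the
    # attention dot product.
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + epsilon)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + epsilon)
    return q, k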

In nGPT, many vectors are normalized to have unit L2 norm:[33] hidden state vectors, input and output embedding vectors, weight matrix columns, query and key vectors.

Miscellaneous


Gradient normalization (GradNorm)[34] normalizes gradient vectors during backpropagation.

See also


References

  1. ^ Huang, Lei (2022). Normalization Techniques in Deep Learning. Synthesis Lectures on Computer Vision. Cham: Springer International Publishing. doi:10.1007/978-3-031-14595-7. ISBN 978-3-031-14594-0.
  2. ^ a b c d e Ioffe, Sergey; Szegedy, Christian (2015-06-01). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". Proceedings of the 32nd International Conference on Machine Learning. PMLR: 448–456. arXiv:1502.03167.
  3. ^ a b Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). "8.7.1. Batch Normalization". Deep learning. Adaptive computation and machine learning. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03561-3.
  4. ^ Desjardins, Guillaume; Simonyan, Karen; Pascanu, Razvan; kavukcuoglu, koray (2015). "Natural Neural Networks". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.
  5. ^ Xu, Jingjing; Sun, Xu; Zhang, Zhiyuan; Zhao, Guangxiang; Lin, Junyang (2019). "Understanding and Improving Layer Normalization". Advances in Neural Information Processing Systems. 32. Curran Associates, Inc. arXiv:1911.07013.
  6. ^ Awais, Muhammad; Bin Iqbal, Md. Tauhid; Bae, Sung-Ho (November 2021). "Revisiting Internal Covariate Shift for Batch Normalization". IEEE Transactions on Neural Networks and Learning Systems. 32 (11): 5082–5092. doi:10.1109/TNNLS.2020.3026784. ISSN 2162-237X. PMID 33095717.
  7. ^ Bjorck, Nils; Gomes, Carla P; Selman, Bart; Weinberger, Kilian Q (2018). "Understanding Batch Normalization". Advances in Neural Information Processing Systems. 31. Curran Associates, Inc. arXiv:1806.02375.
  8. ^ Santurkar, Shibani; Tsipras, Dimitris; Ilyas, Andrew; Madry, Aleksander (2018). "How Does Batch Normalization Help Optimization?". Advances in Neural Information Processing Systems. 31. Curran Associates, Inc.
  9. ^ "BatchNorm2d — PyTorch 2.4 documentation". pytorch.org. Retrieved 2024-09-26.
  10. ^ Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "8.5. Batch Normalization". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
  11. ^ a b Summers, Cecilia; Dinneen, Michael J. (2019). "Four Things Everyone Should Know to Improve Batch Normalization". arXiv:1906.03548 [cs.LG].
  12. ^ Brock, Andrew; De, Soham; Smith, Samuel L.; Simonyan, Karen (2021). "High-Performance Large-Scale Image Recognition Without Normalization". arXiv:2102.06171 [cs.CV].
  13. ^ a b Ba, Jimmy Lei; Kiros, Jamie Ryan; Hinton, Geoffrey E. (2016). "Layer Normalization". arXiv:1607.06450 [stat.ML].
  14. ^ Phuong, Mary; Hutter, Marcus (2022-07-19). "Formal Algorithms for Transformers". arXiv:2207.09238 [cs.LG].
  15. ^ Zhang, Biao; Sennrich, Rico (2019-10-16). "Root Mean Square Layer Normalization". arXiv:1910.07467 [cs.LG].
  16. ^ Perez, Ethan; Strub, Florian; De Vries, Harm; Dumoulin, Vincent; Courville, Aaron (2018-04-29). "FiLM: Visual Reasoning with a General Conditioning Layer". Proceedings of the AAAI Conference on Artificial Intelligence. 32 (1). doi:10.1609/aaai.v32i1.11671. ISSN 2374-3468.
  17. ^ Peebles, William; Xie, Saining (2023). "Scalable Diffusion Models with Transformers": 4195–4205. arXiv:2212.09748. {{cite journal}}: Cite journal requires |journal= (help)
  18. ^ Salimans, Tim; Kingma, Diederik P. (2016-06-03). "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks". arXiv:1602.07868 [cs.LG].
  19. ^ Miyato, Takeru; Kataoka, Toshiki; Koyama, Masanori; Yoshida, Yuichi (2018-02-16). "Spectral Normalization for Generative Adversarial Networks". arXiv:1802.05957 [cs.LG].
  20. ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E (2012). "ImageNet Classification with Deep Convolutional Neural Networks". Advances in Neural Information Processing Systems. 25. Curran Associates, Inc.
  21. ^ Jarrett, Kevin; Kavukcuoglu, Koray; Ranzato, Marc' Aurelio; LeCun, Yann (September 2009). "What is the best multi-stage architecture for object recognition?". 2009 IEEE 12th International Conference on Computer Vision. IEEE. pp. 2146–2153. doi:10.1109/iccv.2009.5459469. ISBN 978-1-4244-4420-5.
  22. ^ Lyu, Siwei; Simoncelli, Eero P. (2008). "Nonlinear image representation using divisive normalization". 2008 IEEE Conference on Computer Vision and Pattern Recognition. Vol. 2008. pp. 1–8. doi:10.1109/CVPR.2008.4587821. ISBN 978-1-4244-2242-5. ISSN 1063-6919. PMC 4207373. PMID 25346590.
  23. ^ Ortiz, Anthony; Robinson, Caleb; Morris, Dan; Fuentes, Olac; Kiekintveld, Christopher; Hassan, Md Mahmudulla; Jojic, Nebojsa (2020). "Local Context Normalization: Revisiting Local Normalization": 11276–11285. arXiv:1912.05845. {{cite journal}}: Cite journal requires |journal= (help)
  24. ^ Woo, Sanghyun; Debnath, Shoubhik; Hu, Ronghang; Chen, Xinlei; Liu, Zhuang; Kweon, In So; Xie, Saining (2023). "ConvNeXt V2: Co-Designing and Scaling ConvNets With Masked Autoencoders": 16133–16142. arXiv:2301.00808. {{cite journal}}: Cite journal requires |journal= (help)
  25. ^ Wu, Yuxin; He, Kaiming (2018). "Group Normalization": 3–19. {{cite journal}}: Cite journal requires |journal= (help)
  26. ^ Ulyanov, Dmitry; Vedaldi, Andrea; Lempitsky, Victor (2017-11-06). "Instance Normalization: The Missing Ingredient for Fast Stylization". arXiv:1607.08022 [cs.CV].
  27. ^ Huang, Xun; Belongie, Serge (2017). "Arbitrary Style Transfer in Real-Time With Adaptive Instance Normalization": 1501–1510. arXiv:1703.06868. {{cite journal}}: Cite journal requires |journal= (help)
  28. ^ Wang, Qiang; Li, Bei; Xiao, Tong; Zhu, Jingbo; Li, Changliang; Wong, Derek F.; Chao, Lidia S. (2019-06-04), Learning Deep Transformer Models for Machine Translation, arXiv:1906.01787, retrieved 2024-10-18
  29. ^ Xiong, Ruibin; Yang, Yunchang; He, Di; Zheng, Kai; Zheng, Shuxin; Xing, Chen; Zhang, Huishuai; Lan, Yanyan; Wang, Liwei; Liu, Tie-Yan (2020-06-29). "On Layer Normalization in the Transformer Architecture". arXiv:2002.04745 [cs.LG].
  30. ^ Nguyen, Toan Q.; Chiang, David (2018-04-17), Improving Lexical Choice in Neural Machine Translation, arXiv:1710.01329, retrieved 2024-10-18
  31. ^ Nguyen, Toan Q.; Salazar, Julian (2019-11-02). "Transformers without Tears: Improving the Normalization of Self-Attention". arXiv:1910.05895. doi:10.5281/zenodo.3525484. {{cite journal}}: Cite journal requires |journal= (help)
  32. ^ Henry, Alex; Dachapally, Prudhvi Raj; Pawar, Shubham Shantaram; Chen, Yuxuan (November 2020). Cohn, Trevor; He, Yulan; Liu, Yang (eds.). "Query-Key Normalization for Transformers". Findings of the Association for Computational Linguistics: EMNLP 2020. Online: Association for Computational Linguistics: 4246–4253. doi:10.18653/v1/2020.findings-emnlp.379.
  33. ^ Loshchilov, Ilya; Hsieh, Cheng-Ping; Sun, Simeng; Ginsburg, Boris (2024-10-01), nGPT: Normalized Transformer with Representation Learning on the Hypersphere, arXiv:2410.01131, retrieved 2024-10-18
  34. ^ Chen, Zhao; Badrinarayanan, Vijay; Lee, Chen-Yu; Rabinovich, Andrew (2018-07-03). "GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks". Proceedings of the 35th International Conference on Machine Learning. PMLR: 794–803. arXiv:1711.02257.

Further reading
