This image could be re-created using vector graphics as an SVG file. This has several advantages; see Commons:Media for cleanup for more information. If an SVG form of this image is available, please upload it and afterwards replace this template with {{vector version available|new image name}}.
It is recommended to name the SVG file “Double descent in a two-layer neural network (Figure 3a from Rocks et al. 2022).svg”; then the template Vector version available (or Vva) does not need the new image name parameter.
Summary
Description: Double descent in a two-layer neural network (Figure 3a from Rocks et al. 2022).png
English: A plot illustrating the double descent phenomenon in deep learning [1] ("test error falls, rises, then falls again as the ratio of parameters to data increases").
"We plot the training error, test error, bias, and variance as a function of […] for fixed […] (more data points than input features) [… for a] random nonlinear features model (two-layer neural network). Analytic solutions for the ensemble-averaged […] training error (blue squares) and test error (black circles) […] are plotted as a function of […] for fixed […]. Analytic solutions are indicated as dashed lines with numerical results shown as points. […] A black dashed line marks the boundary between the under- and overparameterized regimes at […]."
Date
Source
Jason W. Rocks and Pankaj Mehta: Memorizing without overfitting: Bias, variance, and interpolation in overparameterized models. Phys. Rev. Research 4, 013201 – Published 15 March 2022. https://doi.org/10.1103/PhysRevResearch.4.013201
You are free:
to share – to copy, distribute and transmit the work
to remix – to adapt the work
Under the following conditions:
attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
CC BY 4.0 – Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0)