Tegra is a system on a chip (SoC) series developed by Nvidia for mobile devices such as smartphones, personal digital assistants, and mobile Internet devices. The Tegra integrates an ARM architecture central processing unit (CPU), graphics processing unit (GPU), northbridge, southbridge, and memory controller onto one package. Early Tegra SoCs were designed as efficient multimedia processors. The line later emphasized performance for gaming and machine learning applications without sacrificing power efficiency, before shifting sharply toward platforms for vehicular automation, sold under the "Nvidia Drive" brand for reference boards and their SoCs, and under the "Nvidia Jetson" brand for boards aimed at AI applications in robots, drones, and various high-level automation tasks.

Nvidia Tegra T20 (Tegra 2) and T30 (Tegra 3) chips
A Tegra X1 inside a Shield TV

History


The Tegra APX 2500 was announced on February 12, 2008. The Tegra 6xx product line was revealed on June 2, 2008,[1] and the APX 2600 was announced in February 2009. The APX chips were designed for smartphones, while the Tegra 600 and 650 chips were intended for smartbooks and mobile Internet devices (MID).[2]

The first product to use the Tegra was Microsoft's Zune HD media player in September 2009, followed by the Samsung M1.[3] Microsoft's Kin was the first cellular phone to use the Tegra;[4] however, the phone did not have an app store, so the Tegra's power did not provide much advantage. In September 2008, Nvidia and Opera Software announced that they would produce a version of the Opera 9.5 browser optimized for the Tegra on Windows Mobile and Windows CE.[5][6] At Mobile World Congress 2009, Nvidia introduced its port of Google's Android to the Tegra.

On January 7, 2010, Nvidia officially announced and demonstrated its next generation Tegra system-on-a-chip, the Nvidia Tegra 250, at Consumer Electronics Show 2010.[7] Nvidia primarily supports Android on Tegra 2, but booting other ARM-supporting operating systems is possible on devices where the bootloader is accessible. Tegra 2 support for the Ubuntu Linux distribution was also announced on the Nvidia developer forum.[8]

Nvidia announced the first quad-core SoC at the February 2011 Mobile World Congress event in Barcelona. Though the chip was codenamed Kal-El, it is now branded as Tegra 3. Early benchmark results showed impressive gains over Tegra 2,[9][10] and the chip was used in many of the tablets released in the second half of 2011.

In January 2012, Nvidia announced that Audi had selected the Tegra 3 processor for its in-vehicle infotainment systems and digital instrument displays.[11] The processor was to be integrated into Audi's entire line of vehicles worldwide, beginning in 2013. The process is ISO 26262-certified.[12]

In the summer of 2012, Tesla Motors began shipping the Model S, an all-electric, high-performance sedan that contains two Nvidia Tegra 3D Visual Computing Modules (VCMs). One VCM powers the 17-inch touchscreen infotainment system, and the other drives the 12.3-inch all-digital instrument cluster.[13]

In March 2015, Nvidia announced the Tegra X1, the first SoC to have a graphics performance of 1 teraflop. At the announcement event, Nvidia showed off Epic Games' Unreal Engine 4 "Elemental" demo, running on a Tegra X1.

On October 20, 2016, Nvidia announced that the Nintendo Switch hybrid video game console would be powered by Tegra hardware.[14] On March 15, 2017, TechInsights revealed that the Nintendo Switch is powered by a custom Tegra X1 (model T210) running at lower clock speeds.[15]

Models


Tegra APX

Tegra APX 2500
Tegra APX 2600
  • Enhanced NAND flash
  • Video codecs:[16]
    • 720p H.264 Baseline Profile encode or decode
    • 720p VC-1/WMV9 Advanced Profile decode
    • D1 MPEG-4 Simple Profile encode or decode

Tegra 6xx

Tegra 600
  • Targeted at the GPS segment and automotive
  • Processor: ARM11 700 MHz MPCore
  • Memory: low-power DDR (DDR-333, 166 MHz)
  • SXGA, HDMI, USB, stereo jack
  • HD camera 720p
Tegra 650
  • Targeted for GTX of handheld and notebook
  • Processor: ARM11 800 MHz MPCore
  • Low power DDR (DDR-400, 200 MHz)
  • Less than 1 watt envelope
  • HD image processing for advanced digital still camera and HD camcorder functions
  • Display supports 1080p at 24 frame/s, HDMI v1.3, WSXGA+ LCD and CRT, and NTSC/PAL TV output
  • Direct support for Wi-Fi, disk drives, keyboard, mouse, and other peripherals
  • A complete board support package (BSP) to enable fast time to market for Windows Mobile-based designs

Tegra 2

 
Nvidia Tegra 2 T20
 
Nvidia Tegra 2 T20 die shot

The second generation Tegra SoC has a dual-core ARM Cortex-A9 CPU, an ultra low power (ULP) GeForce GPU,[17] a 32-bit memory controller with either LPDDR2-600 or DDR2-667 memory, a 32 KB/32 KB L1 cache per core and a shared 1 MB L2 cache.[18] Tegra 2's Cortex A9 implementation does not include ARM's SIMD extension, NEON. There is a version of the Tegra 2 SoC supporting 3D displays; this SoC uses a higher clocked CPU and GPU.

The Tegra 2 video decoder is largely unchanged from the original Tegra and has limited support for HD formats.[19] The lack of support for high-profile H.264 is particularly troublesome when using online video streaming services.

Common features:

  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 1 MB
  • 40 nm semiconductor technology
Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration¹ / frequency) | Memory (type / amount / bus width / bandwidth) | Availability
AP20H (Ventana/Unknown) | Cortex-A9 / 2 / 1.0 GHz | VLIW-based VEC4 units[20] / 4:4:4:4[21] / 300 MHz | LPDDR2 (300 MHz) or DDR2 (333 MHz) / ? / 32-bit single-channel / 2.4 or 2.7 GB/s | Q1 2010
T20 (Harmony/Ventana) | Cortex-A9 / 2 / 1.0 GHz | VLIW-based VEC4 units[20] / 4:4:4:4[21] / 333 MHz | memory as above | Q1 2010
AP25 | Cortex-A9 / 2 / 1.2 GHz | VLIW-based VEC4 units[20] / 4:4:4:4[21] / 400 MHz | memory as above | Q1 2011
T25 | Cortex-A9 / 2 / 1.2 GHz | VLIW-based VEC4 units[20] / 4:4:4:4[21] / 400 MHz | memory as above | Q1 2011

1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
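The bandwidth column follows from the bus width and the memory transfer rate; as a quick sanity check (a sketch assuming the usual peak-bandwidth convention of bytes per transfer times transfers per second), the two figures correspond to the LPDDR2 and DDR2 options:

```latex
% Peak theoretical bandwidth of a 32-bit (4-byte) single-channel interface.
% LPDDR2 at a 300 MHz I/O clock transfers 600 MT/s; DDR2 at 333 MHz transfers 667 MT/s.
\begin{aligned}
\text{LPDDR2-600:} \quad & 4\,\text{B} \times 600\,\text{MT/s} = 2.4\,\text{GB/s} \\
\text{DDR2-667:}   \quad & 4\,\text{B} \times 667\,\text{MT/s} \approx 2.7\,\text{GB/s}
\end{aligned}
```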

Devices

Model Devices
AP20H Motorola Atrix 4G, Motorola Droid X2, Motorola Photon, LG Optimus 2X / LG Optimus Dual P990 / Optimus 2x SU660 (?), Samsung Galaxy R, Samsung Captivate Glide,
T-Mobile G2X P999, Acer Iconia Tab A200 and A500, LG Optimus Pad, Motorola Xoom,[22] Sony Tablet S, Dell Streak Pro,[23] Toshiba Thrive[24] tablet, T-Mobile G-Slate
AP25 Fusion Garage Grid 10[citation needed]
T20 Avionic Design Tamonten Processor Board,[25] Notion Ink Adam tablet, Olivetti OliPad 100, ViewSonic G Tablet, ASUS Eee Pad Transformer, Samsung Galaxy Tab 10.1, Toshiba AC100, CompuLab Trim-Slice nettop, Velocity Micro Cruz Tablet L510, Acer Iconia Tab A100
Unknown Tesla Motors Model S (2012–2017) and Model X (2015–2017) instrument cluster (IC)[26][27]

Tegra 3

 
The Ouya uses a Tegra 3 T33-P-A3.
 
Nvidia Tegra 3 T30L

NVIDIA's Tegra 3 (codenamed "Kal-El")[28] is functionally a SoC with a quad-core ARM Cortex-A9 MPCore CPU, but includes a fifth "companion" core in what Nvidia refers to as a "variable SMP architecture".[29] While all cores are Cortex-A9s, the companion core is manufactured with a low-power silicon process. This core operates transparently to applications and is used to reduce power consumption when processing load is minimal. The main quad-core portion of the CPU powers off in these situations.

Tegra 3 is the first Tegra release to support ARM's SIMD extension, NEON.

The GPU in Tegra 3 is an evolution of the Tegra 2 GPU, with 4 additional pixel shader units and higher clock frequency. It can also output video up to 2560×1600 resolution and supports 1080p MPEG-4 AVC/h.264 40 Mbit/s High-Profile, VC1-AP, and simpler forms of MPEG-4 such as DivX and Xvid.[30]

The Tegra 3 was released on November 9, 2011.[31]

Common features:

  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 1 MB
  • 40 nm LPG semiconductor technology by TSMC
Model number | CPU (processor / cores / frequency, multi-core / single-core mode) | GPU (microarchitecture / core configuration¹ / frequency) | Memory (type / amount / bus width / bandwidth) | Availability
T30L | Cortex-A9 / 4+1 / 1.2 GHz, up to 1.3 GHz single-core | VLIW-based VEC4 units[20] / 8:4:8:8[32] / 416 MHz | DDR3-1333 / ? / 32-bit single-channel / 5.3 GB/s[33] | Q1 2012
T30 and AP33 | Cortex-A9 / 4+1 / 1.4 GHz, up to 1.5 GHz single-core | VLIW-based VEC4 units[20] / 8:4:8:8[32] / 520 MHz | LPDDR2-1066 or DDR3L-1500 / ? / 32-bit single-channel / 4.3 or 6.0 GB/s[34] | Q4 2011
T33 | Cortex-A9 / 4+1 / 1.6 GHz, up to 1.7 GHz single-core[33] | VLIW-based VEC4 units[20] / 8:4:8:8[32] / 520 MHz | DDR3-1600 / ? / 32-bit single-channel / 6.4 GB/s[33] | Q2 2012

1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units

Devices

Model Devices
AP33 LG Optimus 4X HD, HTC One X, XOLO Play T1000,[35] Coolpad 8735
T30 Asus Eee Pad Transformer Prime (TF201),[36] IdeaTab K2 / LePad K2,[37] Acer Iconia Tab A510, Fuhu Inc. nabi 2 Tablet,[38] Microsoft Surface RT,[39] Lenovo IdeaPad Yoga 11,[40][41]
T30I Tesla Model S (2012–2017) and Model X (2015–2017) media control unit (MCU)[27][42]
T30L Asus Transformer Pad TF300T, Microsoft Surface, Nexus 7 (2012),[43] Sony Xperia Tablet S, Acer Iconia Tab A210, Toshiba AT300 (Excite 10),[44][unreliable source?] BLU Quattro 4.5,[45] Coolpad 9070
T33 Asus Transformer Pad Infinity (TF700T), Fujitsu ARROWS X F-02E, Ouya, HTC One X+

Tegra 4


The Tegra 4 (codenamed "Wayne") was announced on January 6, 2013. It is a SoC with a quad-core Cortex-A15 CPU plus a fifth low-power Cortex-A15 companion core, which is invisible to the OS and performs background tasks to save power. This power-saving configuration is referred to as "variable SMP architecture" and operates like the similar configuration in Tegra 3.[46]

The GeForce GPU in Tegra 4 is again an evolution of its predecessors. However, numerous feature additions and efficiency improvements were implemented. The number of processing resources was dramatically increased, and clock rate increased as well. In 3D tests, the Tegra 4 GPU is typically several times faster than that of Tegra 3.[47] Additionally, the Tegra 4 video processor has full support for hardware decoding and encoding of WebM video (up to 1080p 60 Mbit/s @ 60fps).[48]

Along with Tegra 4, Nvidia also introduced the i500, an optional software modem based on technology from Nvidia's acquisition of Icera, which can be reprogrammed to support new network standards. It supports LTE Category 3 (100 Mbit/s), with a planned update to Category 4 (150 Mbit/s).

Common features:

  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 2 MB
  • 28 nm HPL semiconductor technology
Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration¹ / frequency) | Memory (type / amount / bus width / bandwidth) | Availability
T114[49][unreliable source?] | Cortex-A15 / 4+1 / up to 1.9 GHz | VLIW-based VEC4 units[50] / 72 (48:24:4)[20][50] / 672 MHz[51] | DDR3L or LPDDR3 / ? / 32-bit dual-channel / up to 14.9 GB/s (1866 MT/s data rate)[52][53] | Q2 2013[54]

1 Pixel shaders : Vertex shaders : Pixel pipelines (pairs 1x TMU and 1x ROP)

Devices

Model Devices
T114 Nvidia Shield Portable, Tegra Note 7, Microsoft Surface 2, HP Slate 7 Extreme,[55] HP Slate 7 Beats Special Edition,[56] HP Slate 8 Pro,[57] HP SlateBook x2,[58] HP SlateBook 14,[59] HP Slate 21,[60] ZTE N988S, nabi Big Tab, Nuvola NP-1, Project Mojo, Asus Transformer Pad TF701T, Toshiba AT10-LE-A (Excite Pro), Vizio 10" tablet, Wexler.Terra 7, Wexler.Terra 10, Acer TA272HUL AIO, Xiaomi Mi 3 (TD-LTE version),[61] Coolpad 8970L (大观 4),[62] Audi Tablet,[63] Le Pan TC1020 10.1",[64] Matrimax iPLAY 7,[65] Kobo Arc 10HD[66]

Tegra 4i


The Tegra 4i (codenamed "Grey") was announced on February 19, 2013. With hardware support for the same audio and video formats,[48] but using Cortex-A9 cores instead of Cortex-A15, the Tegra 4i is a low-power variant of the Tegra 4 and is designed for phones and tablets. Unlike its Tegra 4 counterpart, the Tegra 4i also integrates the Icera i500 LTE/HSPA+ baseband processor onto the same die.

Common features:

  • 28 nm HPM semiconductor technology
  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 1 MB
Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration¹ / frequency) | Memory (type / bus width / bandwidth) | Availability
T148?[67] | Cortex-A9 "R4" / 4+1 / up to 2.0 GHz | VLIW-based VEC4 units[50] / 60 (48:12:2)[50] / 660 MHz[51] | LPDDR3 / 32-bit single-channel / 6.4–7.5 GB/s (800–933 MHz)[53] | Q1 2014

1 Pixel shaders : Vertex shaders : Pixel pipelines (pairs 1x TMU and 1x ROP)

Devices
Model Devices
T148? Blackphone, LG G2 mini LTE, Wiko Highway 4G,[68] Explay 4Game,[69] Wiko Wax[70][71] QMobile Noir LT-250[72]

Tegra K1


Nvidia's Tegra K1 (codenamed "Logan") features ARM Cortex-A15 cores in a 4+1 configuration similar to Tegra 4, or Nvidia's 64-bit Project Denver dual-core processor as well as a Kepler graphics processing unit with support for Direct3D 12, OpenGL ES 3.1, CUDA 6.5, OpenGL 4.4/OpenGL 4.5, and Vulkan.[73][74] Nvidia claims that it outperforms both the Xbox 360 and the PS3, whilst consuming significantly less power.[75]

It supports Adaptive Scalable Texture Compression.[76]

In late April 2014, Nvidia shipped the "Jetson TK1" development board containing a Tegra K1 SoC and running Ubuntu Linux.[77][unreliable source?]

Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration¹ / frequency / FP32 GFLOPS) | Memory (type / amount / bus width / bandwidth) | Availability
T124[80] | Cortex-A15 R3 (32-bit) / 4+1 / up to 2.3 GHz[81] | GK20A (Kepler) / 192:8:4[82] / 756–951 MHz / 290–365[83] | DDR3L or LPDDR3[82] / max 8 GB with 40-bit address extension² / 64-bit / 17 GB/s[82] | Q2 2014
T132 | Denver (64-bit) / 2[82] / up to 2.5 GHz[81] | GK20A (Kepler) / 192:8:4[82] / 756–951 MHz / 290–365 | DDR3L or LPDDR3 / max 8 GB / ? / ? | Q3 2014

1 Unified Shaders : Texture mapping units : Render output units

2 ARM Large Physical Page Extension (LPAE) supports 1 TiB (240 bytes). The 8 GiB limitation is part-specific.

Devices

Model Devices
T124 Jetson TK1 development board,[84] Nvidia Shield Tablet,[85] Acer Chromebook 13,[86] HP Chromebook 14 G3,[87] Xiaomi MiPad,[88] Snail Games
OBox, UTStarcom MC8718, Google Project Tango tablet,[89] Apalis TK1 System on Module,[90] Fuze Tomahawk F1,[91] JXD Singularity S192[92]
T132 HTC Nexus 9[93][94]

In December 2015, wccftech.com published an article stating that Tesla would use a Tegra K1-based design, derived from the Nvidia Visual Computing Module (VCM) template, to drive the infotainment systems and provide visual driving aids in its vehicle models of that time.[95] No other source has since clearly confirmed such a combination of a multimedia system and an autopilot system in these vehicle models.

Tegra X1

 
The X1 is the basis for the Nintendo Switch video game console.
 
Die shot of the Tegra X1
 
Tegra X1 in Nvidia Shield TV

Released in 2015, Nvidia's Tegra X1 (codenamed "Erista") features two CPU clusters, one with four ARM Cortex-A57 cores and the other with four ARM Cortex-A53 cores, as well as a Maxwell-based graphics processing unit.[96][97] It supports Adaptive Scalable Texture Compression.[76] Only one cluster can be active at a time, with cluster switching handled by software on the BPMP-L. Devices using the Tegra X1 have only been observed running on the cluster with the more powerful Cortex-A57 cores; the cluster with the four Cortex-A53 cores cannot be accessed without first powering down the Cortex-A57 cores (both clusters must be in the CC6 off state).[98] Nvidia has removed the Cortex-A53 cores from later versions of its technical documentation, implying that they have been removed from the die.[99][100] The Tegra X1 was found to be vulnerable to a fault injection (FI) voltage-glitching attack, which allowed arbitrary code execution and homebrew software on the devices in which it was implemented.[101]

A revision (codenamed "Mariko") with greater power efficiency, officially known as the Tegra X1+, was released in 2019,[102] fixing the Fusée Gelée exploit. It is also known as T214 and T210B01.

Model number | SoC variants | Process | CPU (processor / cores / frequency¹) | GPU (microarchitecture / core configuration² / frequency / FP32 GFLOPS / FP16 GFLOPS) | Memory (type / amount³ / bus width / bandwidth⁴) | Availability
T210 | ODNX02-A2, TM670D-A1, TM670M-A2, TM671D-A2 | TSMC 20 nm | Cortex-A57 + Cortex-A53[106] / 4× A57 + 4× A53[106] / A57: 2.2 GHz,[107] A53: 1.3 GHz | GM20B (Maxwell)[106] / 256:16:16[106] / 1000 MHz / 512 / 1024 | LPDDR3 or LPDDR4 / 8 GB[106] / 64-bit[106] / 25.6 GB/s | Q2 2015
T210 | TM660M-A2 | TSMC 20 nm | Cortex-A57 + Cortex-A53 / 4× A57 + 4× A53 / A57: 1.428 GHz, A53: ? | GM20B (Maxwell) / 128:16:16 / 921 MHz / 236 / 472 | LPDDR3? or LPDDR4[106] / 4 GB / 64-bit / 25.6 GB/s | March 2019
T214 / T210b01 | ODNX10-A1, TM675M-A1 | TSMC 16 nm | Cortex-A57 / 4× A57 / A57: 2.1 GHz[108] | GM21B (Maxwell)[109] / 256:16:16 / 1267 MHz[110] / 649 / 1298 | LPDDR4 or LPDDR4X / 8 GB / 64-bit / 34.1 GB/s | Q2 2019

1 CPU frequency may be clocked differently than the maximum validated by Nvidia at the OEM's discretion

2 Unified Shaders : Texture mapping units : Render output units

3 Maximum validated amount of memory, implementation is board specific

4 Maximum validated memory bandwidth, implementation is board specific
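The GFLOPS columns are consistent with the usual peak-throughput estimate (a sketch, assuming one fused multiply-add, i.e. two floating-point operations, per CUDA core per clock, with the FP16 figure doubled because the Maxwell GPU can pack two FP16 operations per core):

```latex
% Peak throughput for the T210 at its 1000 MHz GPU clock with 256 CUDA cores.
\begin{aligned}
\text{FP32:} \quad & 256 \times 2\,\tfrac{\text{FLOP}}{\text{clock}} \times 1.0\,\text{GHz} = 512\ \text{GFLOPS} \\
\text{FP16:} \quad & 2 \times 512\ \text{GFLOPS} = 1024\ \text{GFLOPS}
\end{aligned}
```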

Devices

Model SOC Variant Devices
T210 ODNX02-A2 Nintendo Switch (2017, HAC-001) [111][15]
TM670D-A1 Nvidia Shield Android TV (2015)
TM670M-A2 Nvidia Shield Android TV (2017)
TM660M-A2 Jetson Nano 4 GB, Jetson Nano 2 GB
TM671D-A2 Google Pixel C
Unknown Nvidia Jetson TX1 development board,[112] Nvidia Drive CX & PX
T210b01 ODNX10-A1 Nintendo Switch (2019, HAC-001(-01)), Nintendo Switch: OLED Model (HEG-001), Nintendo Switch Lite (HDH-001)
TM675M-A1 Nvidia Shield Android TV (2019)

Tegra X2


Nvidia's Tegra X2[113][114] (codenamed "Parker") features Nvidia's own custom general-purpose ARMv8-compatible Denver 2 cores, as well as a Pascal graphics processing core with GPGPU support.[115] The chips are made using TSMC's 16 nm FinFET+ manufacturing process.[116][117][118]

  • CPU: Nvidia Denver2 ARMv8 (64-bit) dual-core + ARMv8 ARM Cortex-A57 quad-core (64-bit)
  • RAM: up to 8 GB LPDDR4[119]
  • GPU: Pascal-based, 256 CUDA cores; type: GP10B[120]
  • TSMC 16 nm, FinFET process
  • TDP: 7.5–15 W[121]
Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration¹ / frequency / FP32 GFLOPS / FP16 GFLOPS) | Memory (type / amount / bus width / bandwidth)
T186 | Denver2 + Cortex-A57 / 2 + 4 / Denver2: 1.4–2.0 GHz, A57: 1.2–2.0 GHz | GP10B (Pascal)[122][unreliable source?] / 256:16:16 (2)[123] / 854–1465 MHz / 437–750 / 874–1500 | LPDDR4 / 8 GB / 128-bit / 59.7 GB/s

1 Unified Shaders : Texture mapping units : Render output units (SM count)

Devices

Model Devices
T186 Nvidia Drive PX2 (variants),
ZF ProAI 1.1[124]
T186 Nvidia Jetson TX2[121]
Unknown Mercedes-Benz MBUX (infotainment system)[125]
Unknown One unit, together with one GPU chip, forms part of the ECU providing "Tesla Vision" functionality in all Tesla vehicles built since October 2016[126][127]
T186 Magic Leap One[128][129] (mixed environment glasses)
Unknown Skydio 2 (drone)[130]

Xavier


The Xavier Tegra SoC, named after the comic book character Professor X, was announced on September 28, 2016, and had been released by March 2019.[131] It contains 7 billion transistors, 8 custom ARMv8 cores, a Volta GPU with 512 CUDA cores, and an open-source TPU (tensor processing unit) called the DLA (Deep Learning Accelerator).[132][133] It is able to encode and decode 8K Ultra HD (7680×4320). Users can configure operating modes at 10 W, 15 W, and 30 W TDP as needed, and the die size is 350 mm².[134][135][136] Nvidia confirmed the fabrication process to be 12 nm FinFET at CES 2018.[137]

  • CPU: Nvidia custom Carmel ARMv8.2-A (64-bit), 8 cores 10-wide superscalar[138]
  • GPU: Volta-based, 512 CUDA cores with 1.4 TFLOPS;[139] type: GV11B[140][120]
  • TSMC 12 nm, FinFET process[137]
  • 20 TOPS DL and 160 SPECint @ 20 W;[134] 30 TOPS DL @ 30 W[136] (TOPS DL = Deep Learning Tera-Ops)
    • 20 TOPS DL via the GPU based tensor cores
    • 10 TOPS DL (INT8) via the DLA unit, which is intended to achieve 5 TFLOPS (FP16)[139]
  • 1.6 TOPS in the PVA unit (Programmable Vision Accelerator,[141] for StereoDisparity/OpticalFlow/ImageProcessing)
  • 1.5 GPix/s in the ISP unit (Image Signal Processor, with native full-range HDR and tile processing support)
  • Video processor for 1.2 GPix/s encoding and 1.8 GPix/s decode[139] including 8k video support[135]
  • MIPI-CSI-3 with 16 lanes[142][143]
  • 1 Gbit/s Ethernet
  • 10 Gbit/s Ethernet
Module (model) | CPU (processor / cores / max frequency in GHz) | GPU (microarchitecture / core configuration¹ / max frequency in MHz / FP32 TFLOPS / FP16 TFLOPS) | Deep learning (INT8 TOPS) | Memory (type / amount / bus width / bandwidth) | TDP in watts
AGX Xavier 64 GB | Carmel (12 MB cache) / 8 / up to 2.2 | Volta / 512:64 (8, 4, 1) / up to 1377 / 1.41 / 2.82 | up to 32 | LPDDR4X / 64 GB / 256-bit / 136.5 GB/s | 10–30
AGX Xavier 32 GB | Carmel (12 MB cache) / 8 / up to 2.2 | Volta / 512:64 (8, 4, 1) / up to 1377 / 1.41 / 2.82 | up to 32 | LPDDR4X / 32 GB / 256-bit / 136.5 GB/s | 10–30
AGX Xavier Industrial | Carmel (12 MB cache) / 8 / up to 2.0 | Volta / 512:64 (8, 4, 1) / up to 1221 / 1.24 / 2.48 | up to 30 | LPDDR4X / 32 GB / 256-bit / 136.5 GB/s | 20–40
Xavier NX 16 GB | Carmel (10 MB cache) / 6 / up to 1.9 | Volta / 384:48 (6, 3, 1) / up to 1100 / 0.84 / 1.69 | up to 21 | LPDDR4X / 16 GB / 128-bit / 59.7 GB/s | 10–20
Xavier NX 8 GB | Carmel (10 MB cache) / 6 / up to 1.9 | Volta / 384:48 (6, 3, 1) / up to 1100 / 0.84 / 1.69 | up to 21 | LPDDR4X / 8 GB / 128-bit / 59.7 GB/s | 10–20

1 CUDA cores : Tensor cores (SMs, TPCs, GPCs)

Devices

Model SOC Variant Devices
T194 Unknown Nvidia Drive Xavier (Drive PX-series)[144]
(formerly named Xavier AI Car Supercomputer)
Unknown Nvidia Drive Pegasus (Drive PX-series)[144]
Unknown Nvidia Drive AGX Xavier Developer Kit[145]
Unknown Nvidia Jetson AGX Xavier Developer Kit[146]
Unknown Nvidia Jetson Xavier[146]
TE860M-A2 Nvidia Jetson Xavier NX[147]
Unknown Nvidia Clara AGX[148] "Clara AGX is based on NVIDIA Xavier and NVIDIA Turing GPUs."[149][unreliable source?]
Unknown Bosch and Nvidia designed Self Driving System[150]
Unknown ZF ProAI[151][152]

On the Linux Kernel Mailing List, a Tegra194-based development board with type ID "P2972-0000" was reported: the board consists of the P2888 compute module and the P2822 baseboard.[153]

Orin


Nvidia announced the next-generation SoC, codenamed Orin, on March 27, 2018, at the GPU Technology Conference 2018.[154] It contains 17 billion transistors and 12 ARM Hercules cores and is capable of 200 INT8 TOPS at 65 W.[155]

The Drive AGX Orin board family was announced on December 18, 2019, at GTC China 2019. Nvidia provided press material documenting that, through the clock and voltage scaling already known from the Xavier series and by pairing multiple such chips, the resulting board designs can cover a wider range of applications.[156] In early 2021, Nvidia announced that the Chinese vehicle company NIO would use an Orin-based chip in its cars.[157]

The so far published specifications for Orin are:

  • CPU: 12× Arm Cortex-A78AE (Hercules) ARMv8.2-A (64-bit)[158][159]
  • GPU: Ampere-based, 2048[160] CUDA cores and 64 tensor cores1; "with up to 131 Sparse TOPs of INT8 Tensor compute, and up to 5.32 FP32 TFLOPs of CUDA compute."[161]
    • 5.3 CUDA TFLOPs (FP32)[162]
    • 10.6 CUDA TFLOPs (FP16)[162]
  • Samsung 8 nm process[162]
  • 275 TOPS (INT8) DL[162]
    • 170 TOPS DL (INT8) via the GPU
    • 105 TOPS DL (INT8) via the 2x NVDLA 2.0 units (DLA, Deep Learning Accelerator)
  • 85 TOPS DL (FP16)[162]
  • 5 TOPS in the PVA v2.0 unit (Programmable Vision Accelerator for Feature Tracking)
  • 1.85 GPix/s in the ISP unit (Image Signal Processor, with native full-range HDR and tile processing support)
  • Video processor for ? GPix/s encoding and ? GPix/s decode
  • 4× 10 Gbit/s Ethernet, 1× 1 Gbit/s Ethernet

1 Orin uses the double-rate tensor cores in the A100, not the standard tensor cores in consumer Ampere GPUs.

Nvidia announced the latest member of the family, the Orin Nano, in September 2022 at the GPU Technology Conference 2022.[163] The Orin product line now features SoCs and SoMs (system-on-module) based on the core Orin design and scaled for different uses, from 60 W all the way down to 5 W. While less is known about the exact SoCs being manufactured, Nvidia has publicly shared detailed technical specifications for the entire Jetson Orin SoM product line. These module specifications illustrate how Orin scales, providing insight into future devices that contain an Orin-derived SoC.

Module (model, SoC variant) | CPU (processor / cores / max frequency in GHz) | GPU (microarchitecture / core configuration¹ / max frequency in MHz / FP32 TFLOPS / FP16 TFLOPS) | Deep learning (INT8 TOPS) | Memory (type / amount / bus width / bandwidth) | Availability | TDP in watts
AGX Orin 64 GB[164][165] | Cortex-A78AE (9 MB cache)[161] / 12 / up to 2.2[161] | Ampere / 2048:64:8 (16, 8, 2)[161] / up to 1300[161] / 5.32[161] / 10.649 | up to 275[161] | LPDDR5 / 64 GB / 256-bit / 204.8 GB/s[161] | Sample 2021, kit Q1 2022, production Dec 2022[166] | 15–60[161]
AGX Orin 32 GB[166] | Cortex-A78AE (6 MB cache)[166] / 8 / up to 2.2[166] | Ampere / 1792:56:7 (14, 7, 2)[166] / up to 930[166] / 3.365[161] / 6.73 | up to 200[166] | LPDDR5 / 32 GB[166] / 256-bit[166] / 204.8 GB/s[166] | Oct 2022[166] | 15–40[166]
Orin NX 16 GB,[167] TE980-M[168] | Cortex-A78AE (6 MB cache)[167] / 8 / up to 2.0[167] | Ampere / 1024:32:4 (8, 4, 1)[167] / up to 918[167] / 1.88 / 3.76 | up to 100[167] | LPDDR5 / 16 GB[167] / 128-bit[167] / 102.4 GB/s[167] | Dec 2022[167] | 10–25[167]
Orin NX 8 GB,[166] TE980-M[168] | Cortex-A78AE (5.5 MB cache)[166] / 6 / up to 2.0[166] | Ampere / 1024:32:4 (8, 4, 1)[166] / up to 765[166] / 1.57 / 3.13 | up to 70[166] | LPDDR5 / 8 GB[166] / 128-bit[166] / 102.4 GB/s[166] | Jan 2023[166] | 10–20[166]
Orin Nano 8 GB[166] | Cortex-A78AE (5.5 MB cache)[166] / 6 / up to 1.5[166] | Ampere / 1024:32:4 (8, 4, 1)[166] / up to 625[166] / 1.28 / 2.56 | up to 40[166] | LPDDR5 / 8 GB[166] / 128-bit[166] / 68 GB/s[166] | Jan 2023[166] | 7–15[166]
Orin Nano 4 GB[166] | Cortex-A78AE (5.5 MB cache)[166] / 6 / up to 1.5[166] | Ampere / 512:16:2 (4, 2, 1)[166] / up to 625[166] / 0.64 / 1.28 | up to 20[166] | LPDDR5 / 4 GB[166] / 64-bit[166] / 34 GB/s[166] | Jan 2023[166] | 5–10[166]

1 CUDA cores : Tensor cores : RT cores (SMs, TPCs, GPCs)

Devices

Model Devices Comments
T234[169] Nvidia Jetson AGX Orin[170][161] comes in 32 GB and 64 GB RAM configurations, available as standalone module or devkit;

intended for industrial robotics and/or embedded HPC applications

Unknown Nvidia Jetson Orin NX[167] mid-power SODIMM-form factor Orin-series module, available only as standalone module;

pin-compatible with Xavier NX carrier

Unknown Nvidia Jetson Orin Nano[171] low-power, cost-effective SODIMM-form factor Orin-series module, available as standalone module or devkit;

intended for entry-level usage

Unknown Nio Adam[172][173] built from 4× Nvidia Drive Orin, totaling 48 CPU cores and 8,192 CUDA cores;
for use in the ET7 vehicle (March 2022) and the ET5 (September 2022)

Grace


The Grace CPU is an NVIDIA-developed ARM Neoverse CPU platform, targeted at large-scale AI and HPC applications and available within several NVIDIA products. The NVIDIA OVX platform combines the Grace Superchip (two Grace dies on one board) with desktop NVIDIA GPUs in a server form factor, while the NVIDIA HGX platform is available with either the Grace Superchip or the Grace Hopper Superchip.[174] The latter is an HPC platform in and of itself, combining a Grace CPU with a Hopper-based GPU, and was announced by NVIDIA on March 22, 2022.[175] Kernel patchsets indicate that a single Grace CPU is also known as T241, placing it under the Tegra SoC branding despite the chip itself not including a GPU (a referenced T241 patchset cites impact on "NVIDIA server platforms that use more than two T241 chips...interconnected," pointing to the Grace Superchip design).[176]

Model number | CPU (processor / cores / frequency / cache / FP64 TFLOPS) | Memory (type / amount / bus width / bandwidth) | Availability
T241[177] | Grace / 72 Arm Neoverse V2 cores (ARMv9)[178] / ? / L1: 64 KB instruction + 64 KB data per core, L2: 1 MB per core, L3: 117 MB shared[178] / 3.551[178] | LPDDR5X ECC[178] / up to 480 GB¹[178] / ? / 500 GB/s[178] | H2 2023[179]

1 Figures are half of the full Grace Superchip specification
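Since footnote 1 states that the table lists half of the full Grace Superchip specification, doubling the per-chip figures recovers the Superchip-level values (a sketch derived purely from the table above):

```latex
% Grace Superchip = two T241 dies, so each per-die figure doubles.
\begin{aligned}
\text{Cores:}     \quad & 2 \times 72 = 144 \\
\text{Memory:}    \quad & 2 \times 480\,\text{GB} = 960\,\text{GB (LPDDR5X ECC)} \\
\text{Bandwidth:} \quad & 2 \times 500\,\text{GB/s} = 1\,\text{TB/s} \\
\text{FP64:}      \quad & 2 \times 3.551 \approx 7.1\ \text{TFLOPS}
\end{aligned}
```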

Atlan


Nvidia announced the next-generation SoC, codenamed Atlan, on April 12, 2021, at the GPU Technology Conference 2021.[180][181]

Nvidia announced the cancellation of Atlan on September 20, 2022, stating that its next SoC would be Thor.[182]

Functional units known so far are:

  • Grace Next CPU[183]
  • Ada Lovelace GPU[184]
  • Bluefield DPU (Data Processing Unit)
  • other Accelerators
  • Security Engine
  • Functional Safety Island
  • On-Chip-Memory
  • External Memory Interface(s)
  • High-Speed-IO Interfaces
Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration / frequency / FP32 GFLOPS / FP16 GFLOPS) | Deep learning (INT8 TOPS) | Memory (type / amount / bus width / bandwidth) | Availability
T254? | Grace-Next[183] / ? / ? | Ada Lovelace[185] / ? / ? / ? / ? | >1000[186] | ? / ? / ? / ? | Cancelled[187]

Thor


Nvidia announced the next-generation SoC, codenamed Thor, on September 20, 2022, at the GPU Technology Conference 2022, replacing the cancelled Atlan.[182] A patchset adding support for Tegra264 to mainline Linux was submitted on May 5, 2023, likely indicating initial support for Thor.[188]

Devices

Model number | CPU (processor / cores / frequency) | GPU (microarchitecture / core configuration / frequency / FP32 GFLOPS / FP16 GFLOPS) | Deep learning (FP8 TOPS) | Memory (type / amount / bus width / bandwidth) | Availability
T264? | Arm Neoverse V3AE[191] / ? / ? | Blackwell / ? / ? / ? / ? | 2000[182] | ? / 128 GB / ? / ? | 2025[182]

Comparison

Generation Tegra 2 Tegra 3 Tegra 4 Tegra 4i Tegra K1 Tegra X1 Tegra X1+ Tegra X2 Xavier Orin Thor
CPU Instruction set ARMv7‑A (32‑bit) ARMv8‑A (64‑bit) ARMv8.2‑A (64‑bit) ARMv9.2‑A (64‑bit)
Cores 2 A9 4+1 A9 4+1 A15 4+1 A9 4+1 A15 2 Denver 4 A53 (disabled) +
4 A57
2 Denver2 + 4 A57 8 Carmel 12 A78AE Neoverse V3AE
L1 cache (I/D) 32/32 KB 128/64 KB 32/32 KB + 64/32 KB 128/64 KB + 48/32 KB 128/64 KB 64/64 KB
L2 cache 1 MB 2 MB 128 KB + 2 MB 2 MB + 2 MB 8 MB 3 MB ?
L3 cache N/A 4 MB 6 MB ?
GPU Architecture Vec4 Kepler Maxwell Pascal Volta Ampere Blackwell
CUDA cores 4+4* 8+4* 48+24* 48+12* 192 256 512 2048 ?
Tensor cores N/A 64 ?
RT cores N/A 8 ?
RAM Protocol DDR2/LPDDR2 DDR3/LPDDR2 DDR3/LPDDR3 LPDDR3/LPDDR4 LPDDR4/LPDDR4X LPDDR5 ?
Max. size 1 GB 2 GB 4 GB 8 GB 64 GB 128 GB
Bandwidth 2.7 GB/s 6.4 GB/s 7.5 GB/s 14.88 GB/s 25.6 GB/s 34.1 GB/s 59.7 GB/s 136.5 GB/s 204.8 GB/s ?
Process 40 nm 28 nm 20 nm 16 nm 12 nm 8 nm 4 nm

* VLIW-based Vec4: Pixel shaders + Vertex shaders. Since Kepler, Unified shaders are used.

Software support


FreeBSD


FreeBSD supports a number of different Tegra models and generations, ranging from the Tegra K1[192] to the Tegra 210.[193]

Linux


Nvidia distributes proprietary device drivers for Tegra through OEMs and as part of its "Linux for Tegra" (formerly "L4T") development kit; it also provides the JetPack SDK, which bundles "Linux for Tegra" with other tools. The newer and more powerful devices of the Tegra family are also supported by Nvidia's own Vibrante Linux distribution. Vibrante comes with a larger set of Linux tools plus several Nvidia-provided libraries for accelerating data processing, and especially image processing for driving safety and automated driving, up to deep learning and neural networks that make heavy use of the CUDA-capable accelerator blocks and, via OpenCV, of the NEON vector extensions of the ARM cores.
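As a minimal illustration of the CUDA support mentioned above, the following sketch queries the properties of the integrated GPU through the standard CUDA runtime API; it assumes a JetPack/Linux for Tegra installation with the CUDA toolkit and is not specific to any particular Tegra model:

```cuda
// query_tegra_gpu.cu
// Minimal sketch: enumerate CUDA devices on a Jetson/Tegra board and print
// the properties of the integrated GPU. Assumes the CUDA toolkit shipped with
// JetPack/L4T; build with: nvcc query_tegra_gpu.cu -o query_tegra_gpu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On Tegra the GPU is integrated and shares system memory with the CPU.
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Compute capability : %d.%d\n", prop.major, prop.minor);
        std::printf("  Multiprocessors    : %d\n", prop.multiProcessorCount);
        std::printf("  Integrated GPU     : %s\n", prop.integrated ? "yes" : "no");
        std::printf("  Total global memory: %.1f GiB\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```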

As of April 2012, due to different "business needs" from those of their GeForce line of graphics cards, Nvidia and one of their embedded partners, Avionic Design GmbH from Germany, are also working on submitting open-source drivers for Tegra upstream to the mainline Linux kernel.[194][195] Nvidia's co-founder and CEO laid out the Tegra processor roadmap using Ubuntu Unity at the GPU Technology Conference 2013.[196][unreliable source?]

By the end of 2018, Nvidia employees had contributed substantial code to make HDMI display and audio work on the T186 and T194 models with the then-upcoming official Linux kernel 4.21, due in about Q1 2019. The affected software modules are the open-source Nouveau and the closed-source Nvidia graphics drivers, along with the proprietary Nvidia CUDA interface.[197][unreliable source?]

As of May 2022, NVIDIA has open-sourced its GPU kernel modules for both Jetson and desktop platforms, allowing all but the proprietary userspace libraries to be open source on Tegra platforms with official NVIDIA drivers, starting with the T234 (Orin).[198]

The Drive PX2 board was announced with QNX RTOS support at the April 2016 GPU Technology Conference.[199]

Similar platforms


SoCs and platforms with comparable specifications (e.g. audio/video input, output and processing capability, connectivity, programmability, entertainment/embedded/automotive capabilities & certifications, power consumption) are:


References

  1. ^ "Techtree.com India > News > Hardware > Nvidia Rolls out "Tegra" Chips". June 4, 2008. Archived from the original on June 4, 2008.
  2. ^ "NVIDIA Tegra FAQ" (PDF). Nvidia.com. Archived (PDF) from the original on March 20, 2012. Retrieved June 4, 2008.
  3. ^ "Nvidia prepara Tegra 3 a 1,5 GHz". TugaTech. January 27, 2011. Archived from the original on October 16, 2017. Retrieved July 10, 2016.
  4. ^ "Microsoft's Kin are the first Tegra smartphones – PC World Australia". Pcworld.idg.com.au. April 13, 2010. Archived from the original on October 16, 2017. Retrieved July 10, 2016.
  5. ^ "Nvidia and Opera team to accelerate the full Web on mobile devices" (Press release). Opera Software. September 9, 2008. Archived from the original on March 30, 2012. Retrieved January 9, 2009.
  6. ^ "Nvidia And Opera Team To Accelerate The Full Web On Mobile Devices" (Press release). NVIDIA. September 9, 2008. Archived from the original on December 24, 2011. Retrieved April 17, 2009.
  7. ^ "New Nvidia Tegra Processor Powers The Tablet Revolution". Nvidia. January 7, 2010. Archived from the original on December 24, 2018. Retrieved March 19, 2010.
  8. ^ "What operating systems does Tegra support?" (Press release). NVIDIA. August 17, 2011. Archived from the original on September 3, 2011. Retrieved September 14, 2011.
  9. ^ "Why nVidia's Tegra 3 is faster than a Core 2 Duo T7200". Brightsideofnews.com. February 21, 2011. Archived from the original on August 23, 2011. Retrieved August 12, 2011.
  10. ^ Hruska, Joel (February 22, 2011). "Nvidia's Kal-El Demonstration Marred By Benchmark Confusion". HotHardware. Archived from the original on February 26, 2012. Retrieved July 15, 2016.
  11. ^ "Audi selects Tegra processor for infotainment and dashboard". EE Times. January 18, 2012. Archived from the original on January 20, 2012. Retrieved July 15, 2016.
  12. ^ "What Is Automotive Grade? Here's What It Means". The Official NVIDIA Blog. July 15, 2016. Archived from the original on October 11, 2016. Retrieved October 11, 2016.
  13. ^ "Tegra Automotive Infotainment and Navigation". NVIDIA. Archived from the original on January 23, 2013. Retrieved January 4, 2013.
  14. ^ "NVIDIA Gaming Technology Powers Nintendo Switch | NVIDIA Blog". The Official NVIDIA Blog. October 20, 2016. Archived from the original on January 26, 2017. Retrieved October 20, 2016.
  15. ^ a b techinsights.com. "Nintendo Switch Teardown". www.techinsights.com. Archived from the original on March 13, 2017. Retrieved March 15, 2017.
  16. ^ "Nvidia Tegra APX Specifications". Archived from the original on January 27, 2011. Retrieved February 17, 2011.
  17. ^ "LG Optimus 2X & Nvidia Tegra 2 Review: The First Dual-Core Smartphone". AnandTech. Archived from the original on April 26, 2014. Retrieved August 12, 2011.
  18. ^ "NVidia Tegra 2 Product Information". NVidia. Archived from the original on May 4, 2012. Retrieved September 5, 2011.
  19. ^ "NVidia Tegra 2 Product Information". NVidia. Archived from the original on May 8, 2012. Retrieved November 1, 2015.
  20. ^ a b c Shimpi, Anand Lal. "The Tegra 4 GPU, NVIDIA Claims Better Performance Than iPad 4". AnandTech. Archived from the original on January 21, 2019. Retrieved November 5, 2015.
  21. ^ "NVIDIA Tegra 2 GPU Specs". July 25, 2023.
  22. ^ "Motorola Xoom Specifications Table". Motorola Mobility, Inc. February 16, 2011. Archived from the original on February 20, 2011. Retrieved February 16, 2011.
  23. ^ Savov, Vlad (May 19, 2011). "Dell Streak Pro Honeycomb tablet pictured, likely to be with us in June". Engadget. Archived from the original on October 24, 2017. Retrieved February 5, 2016.
  24. ^ "Toshiba Thrive Review". TabletPCReview. TechTarget, Inc. August 3, 2011. Archived from the original on November 6, 2013. Retrieved November 21, 2013.
  25. ^ "Avionic Design Tegra 2 (T290) Tamonten Processor Module — Product Brief" (PDF). Avionic Design. Archived from the original (PDF) on May 21, 2014. Retrieved May 25, 2012.
  26. ^ Nvidia inside: Hands on with Audi, Lamborghini, and Tesla Archived March 15, 2018, at the Wayback Machine by Megan Geuss in May 2014
  27. ^ a b Processors Analysis and Count Archived March 15, 2018, at the Wayback Machine in May 2013
  28. ^ "Nvidia announces the Tegra 3 – Kal-El brings PC class performance to Android". Android Central. November 9, 2011. Archived from the original on July 16, 2012. Retrieved July 10, 2016.
  29. ^ "Tegra 3 Multi-Core Processors". NVIDIA. Archived from the original on April 28, 2012. Retrieved July 15, 2016.
  30. ^ "ASUS Transformer Prime introduced and examined". HEXUS.net. November 9, 2011. Archived from the original on November 11, 2011. Retrieved November 11, 2011.
  31. ^ "NVIDIA Quad-Core Tegra 3 Chip Sets New Standards of Mobile Computing Performance, Energy Efficiency – NVIDIA Newsroom". January 11, 2012. Archived from the original on January 11, 2012.
  32. ^ "NVIDIA Tegra 3 GPU Specs". July 25, 2023.
  33. ^ a b c "A Faster Tegra 3, More Memory Bandwidth – ASUS Transformer Pad Infinity (TF700T) Review". Anandtech.com. Archived from the original on June 27, 2012. Retrieved July 10, 2016.
  34. ^ "Tegra 3 Multi-Core Processors". NVIDIA. Archived from the original on April 28, 2012. Retrieved July 10, 2016.
  35. ^ "XOLO – The Next Level". July 21, 2013. Archived from the original on July 21, 2013.
  36. ^ "Asus Eee Pad Transformer Prime (Nvidia Tegra 3 Processor; 10.1-inch display) Review". December 30, 2011. Archived from the original on April 2, 2013.
  37. ^ "GFXBench – unified graphics benchmark based on DXBenchmark (DirectX) and GLBenchmark (OpenGL ES)". Glbenchmark.com. Archived from the original on January 22, 2012. Retrieved July 15, 2016.
  38. ^ Summerson, Cameron (June 19, 2012). "Fuhu Nabi 2 Review: A Quad-Core Android 4.0 Tablet Designed Just For Your Kids – And It's Surprisingly Awesome". Androidpolice.com. Archived from the original on June 22, 2012. Retrieved July 15, 2016.
  39. ^ "Microsoft Announces New Surface Details | News Center". Microsoft.com. October 16, 2012. Archived from the original on July 12, 2014. Retrieved July 15, 2016.
  40. ^ "Lenovo Introduces The IdeaPad Yoga 11 and 13, The First Tablet & Laptop Ultrabook Hybrid". TechCrunch. October 9, 2012. Archived from the original on December 22, 2017. Retrieved July 15, 2016.
  41. ^ Jackson, Jerry (October 9, 2012). "Lenovo Launches IdeaPad Yoga 11, Yoga 13". Notebookreview.com. Archived from the original on October 18, 2012. Retrieved July 15, 2016.
  42. ^ Hacking a Tesla Model S: What we found and what we learned Archived December 20, 2017, at the Wayback Machine by Kevin Mahaffey on August 7, 2015
  43. ^ "Nexus 7 tablet hands-on". Engadget. June 27, 2012. Archived from the original on June 29, 2012. Retrieved June 27, 2012.
  44. ^ "Toshiba Excite 10 Benchmark Test". YouTube. May 29, 2012. Archived from the original on July 27, 2013. Retrieved November 25, 2012.
  45. ^ "Blu Products: Quattro45". April 20, 2013. Archived from the original on April 20, 2013.
  46. ^ "Tegra 4 Processors". NVIDIA. Archived from the original on January 27, 2013. Retrieved July 15, 2016.
  47. ^ Parrish, Kevin (November 12, 2013). "Results: GPU Benchmarks – EVGA Tegra Note 7 Review: Nvidia's Tegra 4 For $200". Tomshardware.com. Retrieved July 15, 2016.
  48. ^ a b "NVIDIA Tegra Multi-processor Architecture" (PDF). Archived (PDF) from the original on March 20, 2013. Retrieved July 10, 2013.
  49. ^ Larabel, Michael (December 20, 2012). "NVIDIA Publishes Their Next-Gen Tegra 4 Code". phoronix.com. Archived from the original on May 14, 2013. Retrieved August 2, 2013.
  50. ^ a b c d Walrath, Josh (February 26, 2013). "NVIDIA Details Tegra 4 and Tegra 4i Graphics". PC Perspective. Archived from the original on December 23, 2014. Retrieved September 2, 2013.
  51. ^ a b Angelini, Chris (February 24, 2013). "Nvidia's Tegra 4 GPU: Doubling Down On Efficiency". Tom's Hardware. Retrieved September 2, 2013.
  52. ^ "Tegra 4 Processors". NVIDIA. Archived from the original on January 27, 2013. Retrieved July 10, 2013.
  53. ^ a b "NVIDIA Tegra 4 Architecture Deep Dive, Plus Tegra 4i, Icera i500 & Phoenix Hands On". AnandTech. Archived from the original on February 27, 2013. Retrieved July 10, 2013.
  54. ^ "Tegra 4 Shipment Date: Still Q2 2013". AnandTech. Archived from the original on February 17, 2013. Retrieved July 10, 2013.
  55. ^ "HP Slate 7 Extreme 4400CA Tablet Product Specifications". .hp.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  56. ^ "HP Slate7 Beats Special Edition 4501 Tablet Product Specifications". .hp.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  57. ^ "HP Slate 8 Pro 7600us Tablet Product Specifications". hp.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  58. ^ "HP SlateBook x2 Overview – Android Tablet Notebook | HP Official Site". .hp.com. Archived from the original on July 12, 2013. Retrieved July 10, 2013.
  59. ^ "HP SlateBook 14-p010nr Product Specifications". hp.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  60. ^ "HP Slate 21-s100 All-in-One Desktop PC – Product Specifications". hp.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  61. ^ "Cintiq Companion Hybrid – Wacom". August 23, 2013. Archived from the original on August 23, 2013.
  62. ^ "用户太多,系统繁忙". Shop.coolpad.cn. Archived from the original on December 31, 2013. Retrieved July 15, 2016.
  63. ^ Shapiro, Danny. "Audi Offers Taste of Tegra-Powered Future at Geneva Motor Show | NVIDIA Blog". Blogs.nvidia.com. Archived from the original on April 2, 2015. Retrieved July 10, 2016.
  64. ^ "Le Pan – TC1020". Lepantab.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  65. ^ "[Test] Matrimax iPlay". Open-consoles-news.com. Archived from the original on September 23, 2016. Retrieved September 22, 2016.
  66. ^ "Kobo Arc 10 HD Specs". C-Net. Archived from the original on March 15, 2018. Retrieved July 8, 2017.
  67. ^ Cunningham, Andrew (February 19, 2013). "Project Grey becomes Tegra 4i, Nvidia's latest play for smartphones". Ars Technica. Archived from the original on December 2, 2017. Retrieved July 10, 2013.
  68. ^ "Wiko Mobile – HIGHWAY 4G". September 17, 2014. Archived from the original on September 17, 2014.
  69. ^ "Explay 4Game | Четырехъядерный смартфон на базе Tegra 4i | NVIDIA". Blogs.nvidia.com. Archived from the original on December 5, 2014. Retrieved July 10, 2016.
  70. ^ Han, Mike (February 24, 2014). "NVIDIA LTE Modem Makes Landfall in Europe, with Launch of Wiko Tegra 4i LTE Smartphone | The Official NVIDIA Blog". Blogs.nvidia.com. Archived from the original on February 28, 2014. Retrieved July 10, 2016.
  71. ^ "Wiko WAX". DeviceSpecifications. Archived from the original on May 21, 2014. Retrieved May 21, 2014.
  72. ^ "QMobile Noir LT-250". DeviceSpecifications. Archived from the original on February 10, 2015. Retrieved February 10, 2014.
  73. ^ Park, Will (May 15, 2014). "NVIDIA's Tegra K1 Powers Xiaomi's First Tablet | The Official NVIDIA Blog". Blogs.nvidia.com. Archived from the original on July 12, 2014. Retrieved July 15, 2016.
  74. ^ "NVIDIA Shield Tablet K1 gets Vulkan support with Android 6.0.1 update". Archived from the original on May 9, 2016. Retrieved May 3, 2016.
  75. ^ a b Kelion, Leo (January 6, 2014). "CES 2014: Nvidia Tegra K1 offers leap in graphics power". BBC. Archived from the original on January 11, 2014. Retrieved January 11, 2014.
  76. ^ a b "Vulkan API" (PDF). Archived (PDF) from the original on December 22, 2015. Retrieved December 11, 2015.
  77. ^ Larabel, Michael (April 29, 2014). "NVIDIA's Tegra TK1 Jetson Board Is Now Shipping". Phoronix. Archived from the original on April 25, 2016. Retrieved September 14, 2016.
  78. ^ Anthony, Sebastian (January 6, 2014). "Tegra K1 64-bit Denver core analysis: Are Nvidia's x86 efforts hidden within?". ExtremeTech. Archived from the original on January 7, 2014. Retrieved January 7, 2014.
  79. ^ NVIDIA CEO confirms Tegra roadmap, building all now: Kal-El, Wayne, Logan, Stark Archived March 16, 2017, at the Wayback Machine, October 21, 2011: Finally, he confirmed that the inner workings we've heard about in Project Denver will first be present in the Tegra line with the introduction of Stark(...)
  80. ^ "Tegra K1 Next-Gen Mobile Processor | NVIDIA Tegra". NVIDIA. Archived from the original on January 9, 2014. Retrieved July 15, 2016.
  81. ^ a b Stam, Nick. "Mile High Milestone: Tegra K1 "Denver"? Will Be First 64-bit ARM Processor for Android | The Official NVIDIA Blog". Blogs.nvidia.com. Archived from the original on August 12, 2014. Retrieved July 15, 2016.
  82. ^ a b c d Klug, Brian; Shimpi, Anand Lal (January 6, 2014). "NVIDIA Tegra K1 Preview & Architecture Analysis". AnandTech. p. 3. Archived from the original on April 19, 2014. Retrieved May 2, 2014.
  83. ^ Ho, Joshua (January 5, 2015). "NVIDIA Tegra X1 Preview & Architecture Analysis". Anandtech. Archived from the original on December 4, 2018. Retrieved December 3, 2018.
  84. ^ "Jetson TK1 development board". Archived from the original on September 5, 2015. Retrieved May 1, 2014.
  85. ^ "SHIELD Tablet, The Ultimate Tablet For Gamers". GeForce. July 22, 2014. Archived from the original on July 25, 2014. Retrieved July 15, 2016.
  86. ^ "Tegra K1 Lands in Acer's Newest Chromebook". Anandtech. August 11, 2014. Archived from the original on July 20, 2018. Retrieved August 11, 2014.
  87. ^ "HP Chromebook 14 G3 – Specifications". HP. August 30, 2018. Archived from the original on August 30, 2018. Retrieved August 30, 2018.
  88. ^ "Xiaomi MiPad 7.9". Techindeep. Retrieved May 18, 2018.
  89. ^ "Google". Archived from the original on March 16, 2014. Retrieved July 15, 2016.
  90. ^ "NVIDIA Tegra K1 System/Computer on Module – Apalis TK1 SOM". Toradex.com. Archived from the original on March 4, 2016. Retrieved July 15, 2016.
  91. ^ Rothman, Chelsea. "Fuze Tomahawk F1: The Chinese Android XStation 4". Comics Gaming Magazine. Archived from the original on June 10, 2016. Retrieved June 1, 2016.
  92. ^ "JXD S192 "retro" gaming tablet is powered by Nvidia's Tegra K1 chipset". GSMArena.com. Archived from the original on March 25, 2019. Retrieved March 25, 2019.
  93. ^ "Nexus 9". Archived from the original on October 21, 2014. Retrieved July 15, 2016.
  94. ^ "Google Nexus 9 Specs and Reviews | HTC United States". Htc.com. Archived from the original on November 2, 2014. Retrieved July 15, 2016.
  95. ^ Exclusive: The Tesla AutoPilot – An In-Depth Look At The Technology Behind the Engineering Marvel Archived March 16, 2018, at the Wayback Machine by Usman Pirzada on Dec 3, 2015
  96. ^ "Tegra X1 Super Chip | NVIDIA Tegra". NVIDIA. Archived from the original on January 5, 2015. Retrieved July 10, 2016.
  97. ^ "NVIDIA Tegra X1 Preview & Architecture Analysis". Anandtech.com. Archived from the original on January 5, 2015. Retrieved July 10, 2016.
  98. ^ Tegra_X1_TRM_DP07225001_v1.0.pdf
  99. ^ "Tegra X1 advertised as four core to developers". NVIDIA. December 19, 2015. Archived from the original on October 25, 2019. Retrieved April 4, 2017.
  100. ^ "Tegra X1's A53 cores are disabled on the Pixel C". Anandtech. Archived from the original on April 4, 2017. Retrieved April 4, 2017.
  101. ^ Bittner, Otto; Krachenfels, Thilo; Galauner, Andreas; Seifert, Jean-Pierre (August 16, 2021). "The Forgotten Threat of Voltage Glitching: A Case Study on Nvidia Tegra X2 SoCs". 2021 Workshop on Fault Detection and Tolerance in Cryptography (FDTC). pp. 86–97. arXiv:2108.06131v2. doi:10.1109/FDTC53659.2021.00021. ISBN 978-1-6654-3673-1. S2CID 237048483.
  102. ^ "NVIDIA Shield Android TV 2019 review". Guru3D.com. Archived from the original on October 31, 2020. Retrieved March 25, 2020.
  103. ^ a b Crider, Michael (January 5, 2015). "NVIDIA Announces The New Tegra X1 Mobile Chipset With 256-Core Maxwell GPU". Androidpolice.com. Archived from the original on January 5, 2015. Retrieved July 10, 2016.
  104. ^ "NVIDIA Jetson TX1 Supercomputer-on-Module Drives Next Wave of Autonomous Machines | Parallel Forall". Devblogs.nvidia.com. November 11, 2015. Archived from the original on May 3, 2016. Retrieved July 15, 2016.
  105. ^ "Slide set from Jetson Nano webinar" (PDF). Archived (PDF) from the original on May 3, 2019. Retrieved May 3, 2019.
  106. ^ a b c d e f "Tegra X1 (SoC) Technical Reference Manual". developer.nvidia.com (v1.2p ed.). Retrieved February 20, 2018. (registration required)
  107. ^ [1]Tegra T210 dfll table
  108. ^ Tegra T210b01 dfll table
  109. ^ Strings found in libnvrm_gpu.so and in glxinfo when driver is loaded in linux
  110. ^ Leadbetter, Richard (June 27, 2019). "Switch's next Tegra X1 looks set to deliver more performance and longer battery life". Eurogamer. Archived from the original on July 25, 2019. Retrieved July 19, 2019.
  111. ^ "3.3 Hardware Specifications". dystify.com. Archived from the original on February 13, 2017. Retrieved February 27, 2017.
  112. ^ "Embedded Systems Development Solutions from NVIDIA Jetson". NVIDIA. March 18, 2015. Archived from the original on June 25, 2016. Retrieved July 10, 2016.
  113. ^ "DATA SHEET - NVIDIA Jetson TX2 System-on-Module.pdf" (PDF).
  114. ^ NVIDIA Jetson TX2 Delivers Twice the Intelligence to the Edge Archived February 27, 2018, at the Wayback Machine by Dustin Franklin on March 7, 2017 at Nvidia Developer Blogs
  115. ^ https://developer.nvidia.com/embedded/dlc/jetson-tx2-module-data-sheet (registration required)
  116. ^ "NVIDIA Discloses Next-Generation Tegra SoC; Parker Inbound?". Anandtech.com. January 5, 2016. Archived from the original on June 29, 2016. Retrieved July 10, 2016.
  117. ^ Ho, Joshua. "Hot Chips 2016: NVIDIA Discloses Tegra Parker Details". www.anandtech.com. Archived from the original on March 25, 2019. Retrieved March 25, 2019.
  118. ^ Ho, Joshua (August 25, 2016). "Hot Chips 2016: NVIDIA Discloses Tegra Parker Details". Anandtech. Archived from the original on December 16, 2017. Retrieved August 25, 2016.
  119. ^ "NVIDIA Jetson TX2: High Performance AI at the Edge". NVIDIA. Archived from the original on April 7, 2019. Retrieved April 9, 2019.
  120. ^ a b "NVIDIA Bringing up Open-Source Volta GPU Support for Their Xavier SoC".
  121. ^ a b NVIDIA Announces Jetson TX2: Parker Comes To NVIDIA's Embedded System Kit Archived January 8, 2018, at the Wayback Machine, March 7, 2017
  122. ^ NVIDIA Rolls Out Tegra X2 GPU Support In Nouveau Archived August 9, 2017, at the Wayback Machine by Michael Larabel at phoronix.com on March 29, 2017
  123. ^ "NVIDIA Jetson TX2 GPU Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  124. ^ Shapiro, Danny (January 4, 2017). "ZF Launches ProAI, DRIVE PX 2 Self-Driving System for Cars, Trucks, Factories – NVIDIA Blog". The Official NVIDIA Blog. Archived from the original on December 14, 2017. Retrieved December 13, 2017.
  125. ^ NVIDIA Powers Mercedes-Benz MBUX, Its Next-Gen AI Cockpit Archived March 16, 2018, at the Wayback Machine by Danny Shapiro on January 9, 2018 via Nvidia company blogs
  126. ^ Look inside Tesla's onboard Nvidia supercomputer for self-driving Archived March 28, 2018, at the Wayback Machine by Fred Lambert on May 22, 2017
  127. ^ Tesla Working With AMD on Self-Driving Car Processor Archived March 15, 2018, at the Wayback Machine by Joel Hruska on September 21, 2017
  128. ^ "Magic Leap One will ship this summer with Nvidia Tegra X2 processor". VentureBeat. July 11, 2018. Archived from the original on July 12, 2018. Retrieved July 11, 2018.
  129. ^ Magic Leap One teardown Archived August 24, 2018, at the Wayback Machine at ifixit.com
  130. ^ Skydio's second-gen drone, a $1,000 self-flying action cam, sells out for 2019 Archived April 12, 2020, at the Wayback Machine by Stephen Shankland on October 2, 2019
  131. ^ Franklin, Dustin (December 12, 2018). "NVIDIA Jetson AGX Xavier Delivers 32 TeraOps for New Era of AI in Robotics". devblogs.nvidia.com. Archived from the original on March 30, 2019. Retrieved March 30, 2019.
  132. ^ Smith, Ryan. "The NVIDIA GPU Tech Conference 2017 Keynote Live Blog". www.anandtech.com. Archived from the original on March 25, 2019. Retrieved March 25, 2019.
  133. ^ Huang, Jensen (May 24, 2017). "The AI Revolution Is Eating Software: NVIDIA Is Powering It | NVIDIA Blog". The Official NVIDIA Blog. Archived from the original on August 22, 2017. Retrieved August 22, 2017.
  134. ^ a b Smith, Ryan. "NVIDIA Teases Xavier, a High-Performance ARM SoC for Drive PX & AI". Archived from the original on September 29, 2016. Retrieved September 28, 2016.
  135. ^ a b Shapiro, Danny (September 28, 2016). "Introducing NVIDIA Xavier – NVIDIA Blog". The Official NVIDIA Blog. Archived from the original on October 2, 2016. Retrieved September 28, 2016.
  136. ^ a b Cutress, Ian; Tallis, Billy (January 4, 2016). "CES 2017: Nvidia Keynote Liveblog". Anandtech.com. Archived from the original on January 10, 2017. Retrieved January 9, 2017.
  137. ^ a b Baldwin, Roberto (January 8, 2018). "NVIDIA unveils its powerful Xavier SOC for self-driving cars". Engadget. Archived from the original on January 8, 2018. Retrieved January 8, 2018.
  138. ^ NVIDIA Drive Xavier SOC Detailed Archived February 24, 2018, at the Wayback Machine by Hassan Mujtaba on Jan 8, 2018 via WccfTech
  139. ^ a b c Abazovic, Fuad. "Nvidia Xavier sampling in Q1 18". www.fudzilla.com. Archived from the original on February 7, 2018. Retrieved February 6, 2018.
  140. ^ "Welcome — Jetson LinuxDeveloper Guide 34.1 documentation".
  141. ^ "Programmable Vision Accelerator". Archived from the original on February 27, 2021. Retrieved March 3, 2021.
  142. ^ "Understanding MIPI Alliance Interface Specifications". Electronic Design. April 1, 2014. Archived from the original on March 25, 2019. Retrieved March 25, 2019.
  143. ^ Mujtaba, Hassan (January 8, 2018). "NVIDIA Xavier SOC Is The Most Biggest and Complex SOC To Date". Archived from the original on February 24, 2018. Retrieved February 7, 2018.
  144. ^ a b Schilling, Andreas (March 27, 2018). "Auf Pegasus folgt Orin: Drive-PX-Plattform mit Turing- oder Ampere-Architektur". Hardwareluxx. Archived from the original on May 27, 2018. Retrieved May 26, 2018.
  145. ^ Sundaram, Shri (September 12, 2018). "Introducing NVIDIA DRIVE AGX Xavier Developer Kit – NVIDIA Blog". The Official NVIDIA Blog. Archived from the original on December 24, 2018. Retrieved December 11, 2018.
  146. ^ a b "Jetson AGX Xavier Developer Kit". NVIDIA Developer. July 9, 2018. Archived from the original on March 25, 2019. Retrieved March 25, 2019.
  147. ^ "Jetson Xavier NX Developer Kit". NVIDIA Developer. November 6, 2019. Archived from the original on November 6, 2019. Retrieved November 6, 2019.
  148. ^ Powell, Kimberly (September 12, 2018). "NVIDIA Clara Platform to Usher in Next Generation of Medical Instruments – NVIDIA Blog". The Official NVIDIA Blog. Archived from the original on December 15, 2018. Retrieved December 11, 2018.
  149. ^ "NVIDIA Rolls Out Tesla T4 GPUs, DRIVE AGX Xavier & Clara Platform – Phoronix". www.phoronix.com. Archived from the original on December 15, 2018. Retrieved December 11, 2018.
  150. ^ Shilov, Anton (March 18, 2017). "Bosch and Nvidia Team Up for Xavier based Self-Driving Systems for Mass Market Cars". Anandtech.com. Archived from the original on June 5, 2017. Retrieved June 22, 2017.
  151. ^ "Dream safety: 'Dream Car' learns to drive autonomously". vision.zf.com.
  152. ^ "Baidu, NVIDIA and ZF team to drive autonomous vehicles in China". Tech Wire Asia. January 8, 2018. Archived from the original on March 25, 2019. Retrieved March 25, 2019.
  153. ^ Linux Kernel Mailing List: (PATCH v3 7/7) arm64: tegra: Add device tree for the Tegra194 P2972-0000 board Archived March 15, 2018, at the Wayback Machine by Mikko Perttunen on Feb 15 2018
  154. ^ Smith, Ryan. "NVIDIA ARM SoC Roadmap Updated: After Xavier Comes Orin". www.anandtech.com. Archived from the original on April 19, 2018. Retrieved April 18, 2018.
  155. ^ Smith, Ryan. "NVIDIA Details DRIVE AGX Orin: A Herculean Arm Automotive SoC For 2022". www.anandtech.com. Archived from the original on December 19, 2019. Retrieved December 21, 2019.
  156. ^ online, heise (December 18, 2019). "Nvidia Orin: Next-Gen-Prozessor für autonome Fahrzeuge mit hoher Rechenleistung". heise online. Archived from the original on January 31, 2021. Retrieved January 26, 2021.
  157. ^ Shapiro, Danny (January 9, 2021). "Chinese Automaker NIO Selects NVIDIA for Electric Vehicles | NVIDIA Blog". The Official NVIDIA Blog. Archived from the original on January 26, 2021. Retrieved January 26, 2021.
  158. ^ Williams, Chris. "Arm hasn't given up on self-driving car brains – its new Cortex-A78AE is going into Nvidia's Orin chip, for a start". www.theregister.com. Archived from the original on October 1, 2020. Retrieved September 29, 2020.
  159. ^ Ltd, Arm. "Cortex-A78AE – Arm". Arm | The Architecture for the Digital World. Archived from the original on October 5, 2020. Retrieved October 3, 2020.
  160. ^ https://blogs.nvidia.com/blog/2021/01/09/nio-selects-nvidia-intelligent-electric-vehicles/ Archived January 26, 2021, at the Wayback Machine 8192 cores / 4 SoCs = 2048 cores / SoC
  161. ^ a b c d e f g h i j k "NVIDIA Jetson AGX Orin Technical Brief.pdf" (PDF).
  162. ^ a b c d e "NVIDIA Orin Brings Arm and Ampere to the Edge at Hot Chips 34". August 23, 2022.
  163. ^ Nvidia. "NVIDIA Jetson Orin Nano Sets New Standard for Entry-Level Edge AI and Robotics With 80x Performance Leap". nvidianews.nvidia.com. Archived from the original on September 23, 2022. Retrieved September 23, 2022.
  164. ^ "kernel/git/next/linux-next.git - The linux-next integration testing tree". git.kernel.org. Retrieved September 22, 2020.
  165. ^ "Linux 5.10 Has Initial Support For NVIDIA Orin, DeviceTree For Purism's Librem 5 - Phoronix". www.phoronix.com. Archived from the original on January 31, 2021. Retrieved January 26, 2021.
  166. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as Nvidia. "Jetson Orin for Next-Gen Robotics NVIDIA". nvidia. Archived from the original on September 23, 2022. Retrieved September 23, 2022.
  167. ^ a b c d e f g h i j k l "Embedded Robotics Modules - Jetson Orin NX". nvidia. Archived from the original on March 8, 2022. Retrieved March 8, 2022.
  168. ^ a b "Jetson Orin NX Series - Thermal Design Guide" (PDF). September 28, 2022. Retrieved September 29, 2022.[dead link]
  169. ^ "Linux 5.18 Adding Audio Support for NVIDIA's Orin SoC".
  170. ^ "NVIDIA Jetson AGX Orin".
  171. ^ "Jetson Orin for Next-Gen Robotics". nvidia.com. NVIDIA Corporation. Retrieved May 8, 2023.
  172. ^ "NIO ET5 Designed for Autonomous Era with DRIVE Orin". December 20, 2021.
  173. ^ "Chinese Automaker NIO Selects NVIDIA for Electric Vehicles". January 9, 2021.
  174. ^ "Introducing Grace". NVIDIA. Retrieved May 8, 2023.
  175. ^ "NVIDIA Introduces Grace CPU Superchip". NVIDIA Newsroom. Retrieved May 8, 2023.
  176. ^ "LKML: Marc Zyngier: Re: [PATCH] irqchip/gicv3: Workaround for NVIDIA erratum T241-FABRIC-4". lkml.org. Retrieved May 8, 2023.
  177. ^ "[PATCH 0/2] gpio: Tegra186: Add support for Tegra241 - Thierry Reding".
  178. ^ a b c d e f "NVIDIA Grace CPU Superchip Architecture In Depth". NVIDIA Technical Blog. January 20, 2023. Retrieved May 8, 2023.
  179. ^ Paul Alcorn (March 22, 2023). "Nvidia CEO Comments on Grace CPU Delay, Teases Sampling Silicon". Tom's Hardware. Retrieved May 8, 2023.
  180. ^ "NVIDIA Unveils NVIDIA DRIVE Atlan, an AI Data Center on Wheels for Next-Gen Autonomous Vehicles".
  181. ^ "NVIDIA Unveils DRIVE Atlan Autonomous Vehicle Platform". April 12, 2021.
  182. ^ a b c d e "NVIDIA Unveils DRIVE Thor — Centralized Car Computer Unifying Cluster, Infotainment, Automated Driving, and Parking in a Single, Cost-Saving System".
  183. ^ a b Labrie, Marie. "NVIDIA Unveils NVIDIA DRIVE Atlan, an AI Data Center on Wheels for Next-Gen Autonomous Vehicles". nvidianews.nvidia.com. NVIDIA. Retrieved January 6, 2023.
  184. ^ Smith, Ryan. "NVIDIA Drops DRIVE Atlan SoC, Introduces 2 PFLOPS DRIVE Thor for 2025 Autos". Anandtech. Retrieved January 6, 2023.
  185. ^ Smith, Ryan. "NVIDIA Drops DRIVE Atlan SoC, Introduces 2 PFLOPS DRIVE Thor for 2025 Autos". Anandtech. Retrieved January 6, 2023.
  186. ^ Smith, Ryan. "NVIDIA Drops DRIVE Atlan SoC, Introduces 2 PFLOPS DRIVE Thor for 2025 Autos". Anandtech. Retrieved January 6, 2023.
  187. ^ Smith, Ryan. "NVIDIA Drops DRIVE Atlan SoC, Introduces 2 PFLOPS DRIVE Thor for 2025 Autos". Anandtech. Retrieved January 6, 2023.
  188. ^ "'[PATCH 1/5] dt-bindings: mailbox: tegra: Document Tegra264 HSP' - MARC". marc.info. Retrieved May 8, 2023.
  189. ^ "NVIDIA DRIVE Powers Next Generation of Transportation — from Cars and Trucks to Robotaxis and Autonomous Delivery Vehicles".
  190. ^ "NVIDIA Announces Project GR00T Foundation Model for Humanoid Robots and Major Isaac Robotics Platform Update".
  191. ^ "NVIDIA DRIVE Thor Strikes AI Performance Balance, Uniting AV and Cockpit on a Single Computer". September 20, 2022.
  192. ^ "FreeBSD on Jetson TK1 | FreeBSD developer's notebook". kernelnomicon.org. Archived from the original on September 28, 2020. Retrieved December 26, 2020.
  193. ^ "src - FreeBSD source tree". cgit.freebsd.org.
  194. ^ Mayo, Jon (April 20, 2012). "[RFC 0/4] Add NVIDIA Tegra DRM support". dri-devel (Mailing list). Archived from the original on December 25, 2014. Retrieved August 21, 2012.
  195. ^ Larabel, Michael (April 11, 2012). "A NVIDIA Tegra 2 DRM/KMS Driver Tips Up". Phoronix Media. Archived from the original on October 7, 2016. Retrieved August 21, 2012.
  196. ^ "GTC 2013: NVIDIA's Tegra Roadmap (6 of 11)". YouTube. March 19, 2013. Archived from the original on December 24, 2018. Retrieved July 10, 2013.
  197. ^ "NVIDIA Tegra X2 & Xavier Get HDMI Audio With Linux 4.21 – Phoronix". www.phoronix.com. Archived from the original on December 23, 2018. Retrieved December 11, 2018.
  198. ^ "NVIDIA Releases Open-Source GPU Kernel Modules". NVIDIA Technical Blog. May 19, 2022. Retrieved May 8, 2023.
  199. ^ "DRIVE PX 2 Shows Next-Gen Nvidia Tegra, Pascal Processors". April 5, 2016. Archived from the original on March 8, 2017. Retrieved March 8, 2017.