About NVIDIA

NVIDIA Corporation (NVIDIA) provides graphics, compute, and networking solutions in the United States, Taiwan, China, and internationally. The company has leveraged its GPU architecture to create platforms for scientific computing, artificial intelligence, or AI, data science, autonomous vehicles, or AV, robotics, and metaverse and 3D internet applications. NVIDIA has a platform strategy, bringing together hardware, systems, software, algorithms, libraries, and services to create unique value for the markets the company serves.

Businesses

The company operates through two segments: Compute & Networking and Graphics. The Compute & Networking segment includes the company’s Data Center accelerated computing platform; networking; automotive AI Cockpit, autonomous driving development agreements, and autonomous vehicle solutions; electric vehicle computing platforms; Jetson for robotics and other embedded platforms; NVIDIA AI Enterprise and other software; and cryptocurrency mining processors, or CMP. The Graphics segment includes GeForce GPUs for gaming and PCs, the GeForce NOW game streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise workstation graphics; virtual GPU, or vGPU, software for cloud-based visual and virtual computing; automotive platforms for infotainment systems; and Omniverse Enterprise software for building and operating metaverse and 3D internet applications.

Markets

The company specializes in markets in which its computing platforms can provide tremendous acceleration for applications. These platforms incorporate processors, interconnects, software, algorithms, systems, and services to deliver unique value. The company’s platforms address four large markets where its expertise is critical: Data Center, Gaming, Professional Visualization, and Automotive.

Data Center

The NVIDIA computing platform focuses on accelerating the most compute-intensive workloads, such as AI, data analytics, graphics, and scientific computing, across hyperscale, cloud, enterprise, public sector, and edge data centers. The platform consists of the company’s energy-efficient GPUs, data processing units, or DPUs, interconnects and systems, its CUDA programming model, and a growing body of software libraries, software development kits, or SDKs, application frameworks, and services, which are either available as part of the platform or packaged and sold separately. For both AI and HPC applications, the NVIDIA accelerated computing platform greatly increases computing and data center performance and power efficiency relative to conventional CPU-only approaches. In the field of AI, NVIDIA’s platform accelerates both deep learning and machine learning workloads. Deep learning is a computer science approach in which neural networks are trained to recognize patterns from massive amounts of data in the form of images, sounds, and text, in some instances better than humans, and in turn provide predictions in production use cases. Machine learning is a related approach that leverages algorithms and data to learn how to make determinations or predictions. HPC, which includes scientific computing, uses numerical computational approaches to solve large and complex problems.
The company engages with thousands of organizations working on AI in a multitude of industries, from automating tasks such as consumer product and service recommendations, to chatbots for the automation of or assistance with live customer interactions, to enabling fraud detection in financial services, to optimizing oil exploration and drilling. These organizations include the world’s leading consumer internet and cloud services companies, enterprises, and startups seeking to implement AI in transformative ways across multiple industries. The company partners with industry leaders to help transform their applications or their computing platforms. It also has partnerships in transportation, retail, healthcare, and manufacturing, among others, to accelerate the adoption of AI.

At the foundation of the NVIDIA accelerated computing platform are the company’s GPUs, which excel at parallel workloads such as the training and inferencing of neural networks. They are available in industry-standard servers from every major computer maker and CSP, as well as in the company’s DGX AI supercomputer, a purpose-built system for deep learning and GPU-accelerated applications. To facilitate customer adoption, the company has built other ready-to-use system reference designs around its GPUs, including HGX for hyperscale and supercomputing data centers, EGX for enterprise and edge computing, IGX for high-precision edge AI, and AGX for autonomous machines. In fiscal year 2023, the company introduced the Hopper architecture of data center GPUs and started shipping the first Hopper-based GPU, the flagship H100. Hopper includes a Transformer Engine, designed to accelerate the training of AI transformer models by an order of magnitude over the prior generation. H100 is ideal for accelerating applications such as large language models, deep recommender systems, genomics, and complex digital twins. NVIDIA will offer enterprise customers NVIDIA AI cloud services directly and through its network of partners. Examples of these services include NVIDIA DGX Cloud, which is cloud-based infrastructure and software for training AI models, and customizable pretrained AI models. NVIDIA has partnered with leading cloud service providers to host these services in their data centers.

The company’s networking solutions include InfiniBand and Ethernet network adapters and switches, related software, and cables. These have enabled the company to architect end-to-end data center-scale computing platforms that can interconnect thousands of compute nodes with high-performance networking. Beyond GPUs, NVIDIA has expanded its data center processor portfolio to include DPUs, which are shipping in the market, and CPUs, with samples planned to ship in the first half of fiscal year 2024. While its approach starts with powerful chips, what makes NVIDIA a full-stack computing platform is the company’s large body of software, including the CUDA parallel programming model, the CUDA-X collection of application acceleration libraries, Application Programming Interfaces, or APIs, SDKs and tools, and domain-specific application frameworks. The company also offers the NVIDIA GPU Cloud registry, or NGC, a comprehensive catalog of easy-to-use, optimized software stacks across a range of domains, including scientific computing, deep learning, and machine learning.
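To give a sense of the CUDA programming model referenced above, the following minimal sketch (an illustrative example written for this profile, not NVIDIA source code) shows how a data-parallel operation is expressed as a kernel launched across many GPU threads; this is the same pattern that the platform's libraries apply to far larger workloads such as neural network training and inferencing.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread computes one element of the output vector in parallel.
    __global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                 // one million elements
        size_t bytes = n * sizeof(float);

        // Allocate unified memory accessible from both the CPU and the GPU.
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough thread blocks to cover all n elements.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);           // expected: 3.0

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with nvcc, this runs the same addition across roughly a million threads; the CUDA-X libraries, SDKs, and application frameworks mentioned above package much more sophisticated versions of this pattern for domains such as deep learning and data analytics.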
With NGC, AI developers, researchers, and data scientists can get started with the development of AI and HPC applications and deploy them on DGX systems, NVIDIA-Certified systems from the company’s partners, or with NVIDIA’s cloud partners. In addition to software that is delivered to customers as an integral part of its data center computing platform, the company offers paid licenses to NVIDIA AI Enterprise, a comprehensive suite of enterprise-grade AI software, and NVIDIA vGPU software for graphics-rich virtual desktops and workstations.

Gaming

The company’s gaming platforms leverage its GPUs and sophisticated software to enhance the gaming experience with smoother, higher quality graphics. The company developed NVIDIA RTX to bring next-generation graphics and AI to games. NVIDIA RTX features ray tracing technology for real-time, cinematic-quality rendering. Ray tracing, which has long been used for special effects in the movie industry, is a computationally intensive technique that simulates the physical behavior of light to achieve greater realism in computer-generated scenes (an illustrative sketch of this computation follows this section). NVIDIA RTX also features deep learning super sampling, or NVIDIA DLSS, the company’s AI technology that boosts frame rates while generating beautiful, sharp images for games. The company’s products for the gaming market include GeForce RTX and GeForce GTX GPUs for gaming desktop and laptop PCs, GeForce NOW cloud gaming for playing PC games on underpowered devices, SHIELD for high-quality streaming on TV, as well as system-on-chips, or SOCs, and development services for game consoles. In fiscal year 2023, the company introduced the GeForce RTX 40 Series of gaming GPUs, based on the Ada Lovelace architecture. The 40 Series features the company’s third-generation RTX technology, third-generation NVIDIA DLSS, and fourth-generation Tensor Cores to deliver up to 4X the performance of the previous generation.
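As a rough illustration of the computation ray tracing performs (a simplified, assumed example rather than NVIDIA RTX code), the kernel below tests one ray per GPU thread against a single sphere. Production renderers repeat intersection tests like this, together with shading and secondary rays for shadows, reflections, and refractions, enormous numbers of times per frame, which is why the workload is so computationally intensive.

    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>

    struct Vec3 { float x, y, z; };

    __device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    __device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns the distance along the ray to the nearest hit, or -1 if the ray misses.
    __device__ float hitSphere(Vec3 center, float radius, Vec3 origin, Vec3 dir) {
        Vec3 oc = sub(origin, center);
        float a = dot(dir, dir);
        float b = 2.0f * dot(oc, dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - 4.0f * a * c;
        return disc < 0.0f ? -1.0f : (-b - sqrtf(disc)) / (2.0f * a);
    }

    // One thread per pixel: cast a ray through an image-plane grid toward a sphere.
    __global__ void trace(float* depth, int width, int height) {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        int py = blockIdx.y * blockDim.y + threadIdx.y;
        if (px >= width || py >= height) return;

        Vec3 origin = {0.0f, 0.0f, 0.0f};
        Vec3 dir = {(px - width / 2.0f) / width,
                    (py - height / 2.0f) / height,
                    1.0f};
        depth[py * width + px] = hitSphere({0.0f, 0.0f, 3.0f}, 1.0f, origin, dir);
    }

    int main() {
        const int width = 512, height = 512;
        float* depth;
        cudaMallocManaged(&depth, width * height * sizeof(float));

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
        trace<<<grid, block>>>(depth, width, height);
        cudaDeviceSynchronize();

        // The center pixel's ray points straight at the sphere and should hit it at depth 2.
        printf("depth at center = %f\n", depth[(height / 2) * width + width / 2]);
        cudaFree(depth);
        return 0;
    }

The sketch records only hit distances; a full renderer would additionally shade each hit and trace further rays, which is where techniques such as NVIDIA DLSS, described above, help recover frame rates.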
Professional Visualization

The company serves the Professional Visualization market by working closely with independent software vendors, or ISVs, to optimize their offerings for NVIDIA GPUs. The company’s GPU computing platform enhances productivity and introduces new capabilities for critical workflows in many fields, such as design and manufacturing and digital content creation. Design and manufacturing encompass computer-aided design, architectural design, consumer-products manufacturing, medical instrumentation, and aerospace. Digital content creation includes professional video editing and post-production, special effects for films, and broadcast-television graphics. The NVIDIA RTX platform makes it possible to render film-quality, photorealistic objects and environments with physically accurate shadows, reflections, and refractions using ray tracing in real time. Many leading 3D design and content creation applications developed by the company’s ecosystem partners support RTX, allowing professionals to accelerate and transform their workflows with NVIDIA RTX GPUs and software.

Automotive

NVIDIA’s Automotive market consists of AV, AI cockpit, electric vehicle computing platforms, and infotainment platform solutions. Leveraging the company’s technology leadership in AI and building on its long-standing automotive relationships, NVIDIA is delivering a complete end-to-end solution for the AV market under the DRIVE Hyperion brand. NVIDIA has demonstrated multiple applications of AI within the car: AI can drive the car itself as a pilot in fully autonomous mode, or it can be a co-pilot, assisting the human driver while creating a safer driving experience.

NVIDIA is working with several hundred partners in the automotive ecosystem, including automakers, truck makers, tier-one suppliers, sensor manufacturers, automotive research institutions, HD mapping companies, and startups, to develop and deploy AI systems for self-driving vehicles. The company’s unified AI computing architecture starts with training deep neural networks using its GPUs and then running a full perception, fusion, planning, and control stack within the vehicle on the NVIDIA DRIVE Hyperion platform. The DRIVE Hyperion platform consists of the high-performance, energy-efficient DRIVE AGX computing hardware, a reference sensor set that supports full self-driving capability, and an open, modular DRIVE Software platform. The DRIVE Software platform includes DRIVE Chauffeur for autonomous driving, mapping, and parking services; DRIVE Concierge for intelligent in-vehicle experiences; and real-time conversational AI capability based on NVIDIA Omniverse Avatar software. In addition, the company offers a scalable data center-based simulation solution, NVIDIA DRIVE Sim, based on NVIDIA Omniverse software, for digital cockpit development, as well as for testing and validating a self-driving platform. NVIDIA’s unique end-to-end, software-defined approach is designed for continuous innovation and continuous development, enabling cars to receive over-the-air updates to add new features and capabilities throughout the life of a vehicle.

Strategies

The key elements of the company’s strategy include advancing the NVIDIA accelerated computing platform; extending its technology and platform leadership in AI; extending its technology and platform leadership in computer graphics; advancing the leading autonomous vehicle platform; and leveraging its intellectual property, or IP.

Sales and Marketing

The company’s worldwide sales and marketing strategy is key to achieving its objective of providing markets with its high-performance and efficient computing platforms and software. The company’s sales and marketing teams, located across its global markets, work closely with end customers and various industry ecosystems through its partner network. The company’s partner network incorporates each industry’s respective OEMs, original device manufacturers, or ODMs, system builders, add-in board manufacturers, or AIBs, retailers/distributors, ISVs, internet and cloud service providers (CSPs), automotive manufacturers and tier-1 automotive suppliers, mapping companies, startups, and other ecosystem participants. The company’s developer program makes its products available to developers prior to launch in order to encourage the development of AI frameworks, SDKs, and APIs for software applications and game titles that are optimized for its platforms. The company’s Deep Learning Institute provides in-person and online training for developers in industries and organizations around the world to build AI and accelerated computing applications that leverage its platforms. As NVIDIA’s business has evolved from a focus primarily on gaming products to broader markets, and from chips to platforms, systems, and software, so, too, have the company’s avenues to market.
Thus, in addition to sales to customers in the company’s partner network, certain of its products are also sold directly to CSPs, enterprise customers, retail channels, and consumers.

Seasonality

The company’s computing platforms serve a diverse set of markets, such as consumer gaming, enterprise and cloud data centers, professional workstations, and automotive. The company’s consumer products typically see stronger revenue in the second half of its fiscal year. In addition, based on the production schedules of key customers, some of the company’s products for notebooks and game consoles typically generate stronger revenue in the second and third quarters and weaker revenue in the fourth and first quarters (fiscal year ended January 29, 2023).

Manufacturing

The company utilizes suppliers, such as Taiwan Semiconductor Manufacturing Company Limited and Samsung Electronics Co. Ltd., to produce its semiconductor wafers. The company then utilizes independent subcontractors and contract manufacturers, such as Amkor Technology, BYD Auto Co. Ltd., or BYD Auto, Hon Hai Precision Industry Co., or Hon Hai, King Yuan Electronics Co., Ltd., Omni Logistics, LLC, Siliconware Precision Industries Company Ltd., and Wistron Corporation, to perform assembly, testing, and packaging of most of its products and platforms. The company uses contract manufacturers, such as Flex Ltd., Jabil Inc., and Universal Scientific Industrial Co., Ltd., to manufacture its standard and custom adapter card products and switch systems, and Fabrinet to manufacture its networking cables. The company purchases substrates from Ibiden Co. Ltd., Kinsus Interconnect Technology Corporation, and Unimicron Technology Corporation, and memory from Micron Technology, Samsung Semiconductor, Inc., or Samsung, and SK Hynix. The company often consigns key components or materials, such as the GPU, SoC, memory, and integrated circuits, to the contract manufacturers. The company typically receives semiconductor products from its subcontractors, performs incoming quality assurance and configuration using test equipment purchased from industry-leading suppliers, such as Advantest America Inc. and Chroma ATE Inc., and then ships the semiconductors from its third-party warehouses in Hong Kong, Israel, and the United States to contract manufacturers, such as BYD Auto and Hon Hai, as well as to distributors and motherboard and add-in card, or AIC, customers.
Competition

The company’s competitors include suppliers and licensors of hardware and software for discrete and integrated GPUs, custom chips, and other accelerated computing solutions, including solutions offered for AI, such as Advanced Micro Devices, Inc., or AMD, and Intel Corporation, or Intel; large cloud services companies with internal teams designing chips and software that incorporate accelerated or AI computing functionality as part of their internal solutions or platforms, such as Alibaba Group, Alphabet Inc., Amazon, Inc., and Baidu, Inc.; suppliers of Arm-based CPUs and companies that incorporate CPUs as part of their internal solutions or platforms; suppliers of SoC products that are used in servers or embedded into automobiles, autonomous machines, and gaming devices, such as Ambarella, Inc., AMD, Broadcom Inc., or Broadcom, Intel, Qualcomm Incorporated, Renesas Electronics Corporation, and Samsung, or companies with internal teams designing SoC products for internal use, such as Tesla, Inc.; and suppliers of interconnect, switch, and cable solutions, and DPUs, such as AMD, Applied Optoelectronics, Inc., Arista Networks, Broadcom, Cisco Systems, Inc., or Cisco, Hewlett Packard Enterprise Company, Intel, Juniper Networks, Inc., Lumentum Holdings, and Marvell Technology Group, as well as internal teams of system vendors and large cloud services companies.

Patents and Proprietary Rights

The company’s issued patents have expiration dates from March 2023 to June 2045. The company has numerous patents issued, allowed, and pending in the United States and in foreign jurisdictions. The company’s patents and pending patent applications primarily relate to its products and the technology used in connection with its products.

History

NVIDIA Corporation was founded in 1993. The company was incorporated in California in 1993 and reincorporated in Delaware in 1998.

Country:
United States
Industry:
Semiconductors and related devices
Founded:
1993
IPO Date:
01/22/1999
ISIN Number:
US67066G1040
Address:
2788 San Tomas Expressway, Santa Clara, California, 95051, United States
Phone Number:
408 486 2000

Key Executives

CEO:
Huang, Jen-Hsun
CFO:
Kress, Colette
COO:
Shoquist, Debora