Tuesday, May 14, 2024

Nvidia CEO: Every Big Company Wants to Be in Israel Today


Edited by: TJVNews.com 

US computer graphics and chip giant NVIDIA is building one of the world’s fastest AI supercomputers in Israel, as was reported on May 29th by the NoCamels.com website.

The supercomputer, named Israel-1, has been developed in the country over the last 18 months at a cost of hundreds of millions of dollars.

NoCamels.com has reported that, according to NVIDIA, the supercomputer will be used as a blueprint and testbed for Spectrum-X, a new networking platform designed to improve the performance and efficiency of Ethernet-based AI clouds. It aims to let developers build applications while reducing the run times of massive generative AI models.

The rise of generative AI applications like ChatGPT presents new challenges for networks inside data centers, as was reported by NoCamels.com. As a result of these major changes, AI cloud systems need to be trained using huge amounts of data.

Spectrum-X will be built for generative AI workloads and tailored to data centers around the world, helping them transition to AI, NoCamels.com reported.

Israel-1 will run at a performance of eight exaflops. NoCamels.com noted that one exaflop equals one quintillion (1,000,000,000,000,000,000) calculations per second.
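To put that figure in perspective, the quoted rate can be worked out in a few lines; the workload size below is purely illustrative, not a number from the report:

```python
# Scale of Israel-1's quoted performance.
# One exaflop = 10**18 calculations per second, as the article notes.
EXAFLOP = 10**18

israel1_flops = 8 * EXAFLOP  # eight exaflops, per the report

# Illustrative example: time to run a hypothetical workload of 10**21
# operations at that sustained rate.
ops = 10**21
seconds = ops / israel1_flops
print(israel1_flops)  # 8000000000000000000
print(seconds)        # 125.0
```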

“Transformative technologies such as generative AI are forcing every enterprise to push the boundaries of data center performance in pursuit of competitive advantage,” said Gilad Shainer, senior vice president of networking at NVIDIA. “NVIDIA Spectrum-X is a new class of Ethernet networking that removes barriers for next-generation AI workloads that have the potential to transform entire industries.”

On March 28, 2019, NoCamels.com reported that Nvidia founder and CEO Jensen Huang, during a visit to the country, said that every big company wants to have a presence in Israel.

“One of the things I’ve learned about Israeli companies is that everyone wants you,” he said at a 2019 event in Tel Aviv, alluding to the strong foreign investment and interest drawn by the local high-tech scene. “I don’t know of any big company that doesn’t want to be in Israel today.”

Huang’s visit came on the heels of his company’s announcement in early 2019 that it was acquiring Israel’s Mellanox Technologies, a leading supplier of end-to-end Ethernet and InfiniBand smart interconnect solutions for data servers and storage systems, for $6.9 billion, as was reported by NoCamels.com.

Huang spent much of his visit in Israel with Mellanox founder and CEO Eyal Waldman, “sprinting through Israel’s high-tech area and speaking with Mellanox’s 2,000 Israel-based employees and others,” according to a summary of the events published on an Nvidia blog and reported by NoCamels.com.

“I can’t tell you how excited and proud I am that we’ll be a large company in Israel,” Huang said during his visit. With the acquisition, pending regulatory approvals, Israel will become the second-largest employee base for Nvidia, which has about 14,000 employees, nearly half of them in the US.

“The acquisition will unite two of the world’s leading companies in high-performance computing (HPC),” Nvidia said in a 2019 statement. “Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker.”

On May 23 of this year, NVIDIA announced that it is integrating its NVIDIA AI Enterprise software into Microsoft’s Azure Machine Learning to help enterprises accelerate their AI initiatives.

The integration will create a secure, enterprise-ready platform that enables Azure customers worldwide to quickly build, deploy and manage customized applications using the more than 100 NVIDIA AI frameworks and tools that come fully supported in NVIDIA AI Enterprise, the software layer of NVIDIA’s AI platform.

“With the coming wave of generative AI applications, enterprises are seeking secure accelerated tools and services that drive innovation,” said Manuvir Das, vice president of enterprise computing at NVIDIA. “The combination of NVIDIA AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production.”

NVIDIA AI Enterprise on Azure Machine Learning will also provide access to the highest-performance NVIDIA accelerated computing resources to speed the training and inference of AI models.

“Microsoft Azure Machine Learning users come to the platform expecting the highest performing, most secure development platform available,” said John Montgomery, corporate vice president of AI platform at Microsoft. “Our integration with NVIDIA AI Enterprise software allows us to meet that expectation, enabling enterprises and developers to easily access everything they need to train and deploy custom, secure large language models.”

With Azure Machine Learning, developers can easily scale applications from testing to massive deployments, while using Azure Machine Learning data encryption, access control and compliance certifications to meet the security and compliance requirements of their organizational policies. NVIDIA AI Enterprise complements Azure Machine Learning with secure, production-ready AI capabilities and includes access to NVIDIA experts and support.

NVIDIA AI Enterprise includes over 100 frameworks, pretrained models and development tools, such as NVIDIA RAPIDS™ for accelerating data science workloads. NVIDIA Metropolis accelerates vision AI model development, and NVIDIA Triton Inference Server™ supports enterprises in standardizing model deployment and execution.

The NVIDIA AI Enterprise integration with Azure Machine Learning is available in a limited technical preview.

NVIDIA AI Enterprise is also available on Azure Marketplace, providing businesses worldwide with expanded options for fully secure and supported AI development and deployment.

Additionally, the NVIDIA Omniverse Cloud™ platform-as-a-service is now available on Microsoft Azure as a private offer for enterprises. Omniverse Cloud provides developers and enterprises with a full-stack cloud environment to design, develop, deploy and manage industrial metaverse applications at scale.

To meet the diverse accelerated computing needs of the world’s data centers, NVIDIA unveiled on May 28th the NVIDIA MGX™ server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high performance computing and Omniverse applications.

ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT and Supermicro will adopt MGX, which can slash development costs by up to three-quarters and reduce development time by two-thirds to just six months.
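Those two figures imply a concrete baseline; a quick check of the arithmetic, treating the six-month figure as the post-reduction time, as the article states:

```python
# The article says MGX cuts development time by two-thirds "to just six
# months", which implies an 18-month baseline; costs drop by up to 3/4.
baseline_months = 18
reduced_months = baseline_months / 3       # keeping one-third of the time
print(reduced_months)                      # 6.0

baseline_cost = 1.0                        # normalized cost; illustrative
reduced_cost = baseline_cost * (1 - 0.75)  # up to three-quarters saved
print(reduced_cost)                        # 0.25
```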

“Enterprises are seeking more accelerated computing options when architecting data centers that meet their specific business and application needs,” said Kaustubh Sanghani, vice president of GPU products at NVIDIA. “We created MGX to help organizations bootstrap enterprise AI, while saving them significant amounts of time and money.”

With MGX, manufacturers start with a basic system architecture optimized for accelerated computing for their server chassis, and then select their GPU, DPU and CPU. Design variations can address unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. Multiple tasks like AI training and 5G can be handled on a single machine, while upgrades to future hardware generations can be frictionless. MGX can also be easily integrated into cloud and enterprise data centers.
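The pick-a-component workflow described above can be pictured as a simple configuration object. This is an illustrative sketch only; the class, field names and the specific GPU/DPU choices below are hypothetical examples, not an NVIDIA API or a published MGX design:

```python
from dataclasses import dataclass

# Hypothetical model of an MGX-style server design: start from a base
# chassis architecture, then select CPU, GPU and DPU for the workload.
@dataclass
class ServerDesign:
    chassis: str   # base system architecture for accelerated computing
    cpu: str
    gpu: str
    dpu: str
    workload: str  # e.g. "LLM training", "HPC", "edge/5G", "graphics"

# Two variations built from the same modular base, as the spec intends:
llm_node = ServerDesign(chassis="2U base", cpu="Grace CPU Superchip",
                        gpu="GH200 Grace Hopper", dpu="BlueField-3",
                        workload="LLM training")
edge_node = ServerDesign(chassis="2U base", cpu="Grace CPU Superchip",
                         gpu="L4", dpu="BlueField-3",
                         workload="edge/5G")
print(llm_node.gpu)       # GH200 Grace Hopper
print(edge_node.workload) # edge/5G
```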

Collaboration With Industry Leaders

QCT and Supermicro will be the first to market, with MGX designs appearing in August. Supermicro’s ARS-221GL-NR system, announced today, will include the NVIDIA Grace™ CPU Superchip, while QCT’s S74G-2U system, also announced today, will use the NVIDIA GH200 Grace Hopper Superchip.

Additionally, SoftBank Corp. plans to roll out multiple hyperscale data centers across Japan and use MGX to dynamically allocate GPU resources between generative AI and 5G applications.

“As generative AI permeates across business and consumer lifestyles, building the right infrastructure for the right cost is one of network operators’ greatest challenges,” said Junichi Miyakawa, president and CEO at SoftBank Corp. “We expect that NVIDIA MGX can tackle such challenges and allow for multi-use AI, 5G and more depending on real-time workload requirements.”

