Showing posts with label NVIDIA. Show all posts

Cisco and NVIDIA Introduce AI‑Native 6G Wireless Stack, Redefining Cloud and Enterprise Infrastructure


Cisco and NVIDIA have announced a broad set of AI infrastructure innovations designed to accelerate adoption of artificial intelligence across cloud, enterprise, and telecom sectors. The collaboration brings together Cisco’s networking and security expertise with NVIDIA’s AI computing leadership, marking what executives described as the beginning of the “largest data center build‑out in history.”

Spectrum‑X Powered Switches

At the center of the announcement is the Cisco N9100 Series data center switch, the first NVIDIA partner‑developed switch built on NVIDIA Spectrum‑X Ethernet technology. The switch is designed to deliver high‑performance, low‑latency networking for AI workloads and will be available with both NX‑OS and SONiC operating models. Cisco said the platform will serve as a Cloud Partner‑compliant reference architecture, enabling neocloud and sovereign cloud providers to deploy AI infrastructure at scale.

Enterprise AI Security and Observability

Cisco also expanded its Secure AI Factory with NVIDIA, a framework that integrates compute, networking, security, and observability into enterprise AI deployments. The initiative aims to give organizations end‑to‑end visibility and protection as they scale AI workloads, particularly in regulated industries. New ecosystem partnerships were announced to strengthen monitoring and compliance capabilities.

Telecom and 6G Readiness

In a move aimed at telecom operators, Cisco and NVIDIA unveiled the industry’s first AI‑native wireless stack for 6G networks. The stack is designed to handle ultra‑low latency and massive device connectivity, preparing carriers for the surge in AI‑driven traffic expected over the next decade. Analysts said the development could redefine mobile networks by enabling real‑time AI services at the edge.

Strategic Context

Executives from both companies emphasized that the innovations are not standalone products but part of a joint reference architecture for next‑generation AI deployments. “We are entering a new era where AI workloads will reshape every industry,” said a Cisco spokesperson. “Our partnership with NVIDIA ensures customers have the flexibility, interoperability, and scalability to build AI infrastructure securely and globally.”

Why It Matters

  • For Cloud Providers: A unified, NVIDIA‑compliant architecture accelerates AI adoption in sovereign and neocloud environments.
  • For Enterprises: Enhanced security and observability ensure safer AI deployments.
  • For Telecoms: The AI‑native 6G stack positions operators to deliver next‑generation services.
With these announcements, Cisco and NVIDIA are positioning themselves at the heart of the global AI infrastructure race, targeting the needs of hyperscalers, enterprises, and telecom operators alike.

Nvidia Commits $100 Billion to OpenAI in Historic AI Infrastructure Partnership


In a move set to redefine the global artificial intelligence landscape, Nvidia has announced a strategic partnership with OpenAI that includes a staggering investment of up to $100 billion. The deal aims to deploy 10 gigawatts of Nvidia-powered AI data centers, marking one of the largest infrastructure commitments in tech history.

The partnership was formalized through a letter of intent signed by both companies, with the first gigawatt of computing power scheduled to go live in the second half of 2026 on Nvidia’s upcoming Vera Rubin platform. This deployment will involve millions of GPUs and is expected to support OpenAI’s next-generation models, including its push toward artificial general intelligence (AGI).

“Everything starts with compute,” said OpenAI CEO Sam Altman. “Compute infrastructure will be the basis for the economy of the future, and we will utilize what we’re building with Nvidia to both create new AI breakthroughs and empower people and businesses with them at scale.”

The deal is structured as two intertwined transactions: Nvidia will invest in OpenAI for non-voting shares, while OpenAI will use the capital to purchase Nvidia’s chips. The first tranche of $10 billion will be deployed once OpenAI finalizes its purchase agreement for Nvidia systems.

Nvidia CEO Jensen Huang described the partnership as “monumental in size,” noting that the 10 GW deployment is equivalent to 4–5 million GPUs—roughly double the company’s annual output. “This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence,” Huang said.
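Huang's 4–5 million GPU figure can be sanity-checked with quick arithmetic. The implied per-GPU power budget below is an illustrative back-of-the-envelope calculation (covering servers, networking, and cooling), not an official figure:

```python
# Rough sanity check of the "10 GW ≈ 4-5 million GPUs" figure.
# Assumption: total facility power divided evenly across deployed GPUs.

total_power_watts = 10e9          # 10 gigawatts
gpus_low, gpus_high = 4e6, 5e6    # Huang's 4-5 million GPU estimate

watts_per_gpu_low = total_power_watts / gpus_high   # at 5M GPUs
watts_per_gpu_high = total_power_watts / gpus_low   # at 4M GPUs

print(f"{watts_per_gpu_low:.0f}-{watts_per_gpu_high:.0f} W per GPU")
```

That works out to roughly 2,000–2,500 W of facility power per GPU, a plausible envelope for rack-scale AI systems once cooling and host infrastructure are included.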

The announcement sent Nvidia’s stock soaring by 4.4%, adding nearly $170 billion to its market cap. Oracle, which is collaborating with OpenAI, Microsoft, and SoftBank on the $500 billion Stargate AI data center initiative, also saw a 6% bump.

Despite the excitement, analysts have raised concerns about the circular nature of the deal, with Nvidia’s investment potentially returning to the company via chip purchases. However, both firms emphasized the strategic value of co-optimizing hardware and software roadmaps to accelerate AI development.

OpenAI, which now boasts over 700 million weekly active users, will treat Nvidia as its preferred compute and networking partner. The collaboration complements existing alliances with Microsoft, Oracle, and SoftBank, and does not alter OpenAI’s ongoing efforts to develop its own custom AI chips.

As competition intensifies among tech giants like Google, Amazon, Meta, and xAI, this Nvidia–OpenAI alliance signals a new phase in the race to build scalable, high-performance AI infrastructure.

NVIDIA Launches £2B UK AI Investment Drive

NVIDIA has officially announced a landmark £2 billion investment to supercharge the United Kingdom’s AI startup ecosystem — a move hailed as a “major vote of confidence” in the UK’s tech future.

Key Highlights of the Investment

  • Purpose: Catalyze globally transformative AI businesses via capital, compute, and talent.
  • Target Regions: London, Oxford, Cambridge, and Manchester as AI growth zones.
  • Strategic Partners: Accel, Air Street Capital, Balderton Capital, Hoxton Ventures, Phoenix Court.
  • Beneficiary Startups: Wayve, Nscale, Revolut, Synthesia, PolyAI, Latent Labs, Basecamp Research.

Major Investments

Startup                           | Sector                                 | Investment
Wayve                             | Autonomous Driving                     | £500 million
Nscale                            | AI Data Centers                        | £500 million
Others (Revolut, Synthesia, etc.) | Fintech, Generative AI, Bioinformatics | Undisclosed

Why It Matters

  • UK’s AI talent pool is strong but scaling has been hindered by infrastructure and funding gaps.
  • NVIDIA aims to democratize access to supercomputing and connect VCs with academic hubs.
  • Energy costs and regional VC concentration have slowed growth — this investment addresses both.

Leadership Voices

“This is the age of AI — the big bang of a new industrial revolution.” — Jensen Huang, CEO of NVIDIA
“This partnership will create jobs, spark new industries and ensure the UK remains at the forefront of global AI leadership.” — UK Prime Minister Sir Keir Starmer

Global Context

  • Microsoft: £22 billion commitment to UK AI infrastructure.
  • OpenAI: Launch of Stargate UK supercomputing initiative.
  • Blackstone: £100 billion pledge over 10 years for UK tech and infrastructure.

Nvidia Near $4 Trillion Market Valuation


Nvidia’s market cap peaked at $3.92 trillion on Thursday, momentarily surpassing Apple’s record of $3.915 trillion set in December 2024. It closed slightly lower at $3.89 trillion, still ahead of Microsoft ($3.7T) and Apple ($3.19T). Nvidia’s growth trajectory has been exponential: its valuation has increased nearly 8x since 2021, when the company was worth around $500 billion.

🧠 Why the Surge?

  • AI Dominance: Nvidia’s chips, especially the H100 and successors, are the backbone of AI infrastructure—powering everything from large language models to autonomous systems.
  • Client Base: Major tech firms like Microsoft, Amazon, Meta, Alphabet, and Tesla are racing to build AI data centers, fueling demand for Nvidia’s processors.
  • Investor Confidence: Despite global trade tensions and competition from cheaper AI models (like China’s DeepSeek), Nvidia’s earnings and demand have remained robust.

📊 Market Impact

  • S&P 500 Influence: Nvidia now makes up 7.4% of the S&P 500, and together with Microsoft, Apple, Amazon, and Alphabet, these five account for 28% of the index.
  • Valuation Metrics: Nvidia trades at 32x forward earnings, below its 5-year average of 41, suggesting earnings are keeping pace with stock growth.

🧭 What’s Next?

  • $4 Trillion Club: Analysts believe Nvidia and Microsoft could both officially cross the $4 trillion mark this summer, with eyes already on the $5 trillion milestone in the next 18 months.
  • Risks: Heavy reliance on Taiwan’s TSMC for manufacturing and potential shifts in AI spending remain key vulnerabilities. 

📈 Nvidia Market Cap Timeline (1999–2025)

Year | Market Cap | Growth Highlights
1999 | $0.96B     | IPO era, early GPU development
2005 | $6.25B     | Rise of PC gaming and CUDA architecture
2015 | $17.73B    | Entry into deep learning and AI
2020 | $323.24B   | Pandemic-driven demand for GPUs and data centers
2021 | $735.27B   | AI boom begins, data center revenue surges
2023 | $1.22T     | Generative AI explosion (ChatGPT, LLMs)
2024 | $3.35T     | Surpasses Amazon and Alphabet
2025 | $3.83T     | Briefly becomes world’s most valuable company

Schneider Electric and NVIDIA Team Up to Power €200 Bn AI Revolution in Europe

  • R&D initiatives underscore companies’ commitment to co-developing new cooling, power, building management and control systems for digital and physical AI data centers
  • Schneider Electric announces launch of new NVIDIA-enabled rack solution
Schneider Electric, the leader in the digital transformation of energy management and automation, today announced it is collaborating with NVIDIA to serve the growing demand for sustainable, AI-ready infrastructure. Together, Schneider Electric and NVIDIA are advancing research and development (R&D) initiatives for power, cooling, controls, and high-density rack systems to enable the next generation of AI factories across Europe and beyond.

This unique global partnership, announced during NVIDIA GTC Paris, brings together the world leaders in sustainability and accelerated computing to support the European Union’s AI infrastructure ambitions and its “InvestAI” initiative, which plans to mobilize a €200 billion investment in AI.

Leveraging its expertise in AI-ready infrastructure, sustainability, and grid coordination, Schneider Electric and NVIDIA are together responding to the European Commission’s “AI Continent Action Plan,” which outlines a shared mission to set up at least 13 AI factories across Europe, while establishing up to five AI gigafactories.

“Schneider Electric and NVIDIA are not just partners — our teams are driving advanced R&D, co-developing the infrastructure needed to power the next wave of AI factories globally,” said Olivier Blum, CEO of Schneider Electric. “Together, we’ve seen tremendous success in deploying next-generation power and liquid cooling solutions, purpose-built for AI data centers. This strategic partnership — bringing together the world leaders in sustainability and accelerated computing — allows us to further accelerate this momentum, pushing the boundaries of what’s possible for the AI workloads of tomorrow.”

“AI is the defining technology of our time—the most transformative force reshaping our world,” said Jensen Huang, founder and CEO, NVIDIA. “Together with Schneider Electric, we are building AI factories: the essential infrastructure that brings AI to every company, industry, and society.”

New NVIDIA-Enabled Infrastructure Solutions

In support of today’s announcement, Schneider Electric has also unveiled a suite of AI-ready data center solutions, including new EcoStruxure™ Pod and Rack Infrastructure. Designed to accelerate AI developments globally, the Prefabricated Modular EcoStruxure Pod Data Center is a scalable, pod-based architecture, enabling rapid AI data center deployment.

As part of this, a new Schneider Electric Open Compute Project (OCP) inspired rack system has also been developed to support the NVIDIA GB200 NVL72 platform that uses the NVIDIA MGX modular architecture, integrating Schneider Electric into NVIDIA HGX and MGX ecosystems for the first time.

These announcements build on a series of milestones shared by the two global leaders earlier this year, including Schneider Electric and ETAP unveiling the world’s first digital twin for electrical and large-scale power systems in AI factories using the NVIDIA Omniverse Blueprint.

Together, Schneider Electric and NVIDIA have also co-developed a series of full electrical and liquid cooling-based reference designs as an approved CDU vendor for NVIDIA — many of which also include solutions from Motivair’s liquid cooling portfolio, following its acquisition by Schneider Electric in March 2025.

Through this expanded and deepened strategic partnership, Schneider Electric and NVIDIA will continue to accelerate their infrastructure initiatives, fast-tracking new product rollouts and reference designs to build the AI factories of the future.

Alphabet Spinout SandboxAQ, Backed by Nvidia, Unveils Synthetic Molecule Megaset to Revolutionize Drug Discovery


SandboxAQ, an AI startup spun out of Google parent Alphabet and backed by Nvidia, has announced that it has released a massive dataset of 5.2 million synthetic 3D molecules to accelerate drug discovery. These molecules don’t exist in nature—they were generated using Nvidia’s chips and grounded in real-world experimental data to simulate how drugs bind to proteins, a critical step in developing effective treatments.

This synthetic dataset, called the Structurally Augmented IC50 Repository (SAIR), is publicly available and designed to train AI models that can predict drug-protein interactions far faster than traditional lab methods. The goal? To virtually replicate lab results with high accuracy, potentially compressing months of research into a single AI-driven prediction.
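To make the idea concrete, here is a toy sketch of the kind of model such a dataset could train: predicting a binding-affinity value (IC50) for a new molecule from labelled examples. All feature names and numbers below are invented for illustration; real pipelines learn from 3D structures with far larger models.

```python
# Toy illustration: predict an IC50 value for a "new" molecule from
# labelled examples, using 1-nearest-neighbour over invented numeric
# descriptors. Real models (as in SAIR's intended use) learn from 3D
# structural data, not hand-made feature vectors.

# (molecular weight, logP) -> measured IC50 in nM  -- all values invented
training_data = [
    ((320.0, 2.1), 45.0),
    ((410.5, 3.4), 12.0),
    ((275.2, 1.2), 230.0),
]

def predict_ic50(features):
    """Return the IC50 of the closest known molecule (toy model)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, ic50 = min(training_data, key=lambda item: dist(item[0], features))
    return ic50

print(predict_ic50((400.0, 3.0)))  # nearest to the second molecule -> 12.0
```

The appeal of the real system is the same shape at vastly larger scale: a model trained on millions of simulated binding examples can return a prediction in milliseconds instead of weeks of lab work.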

It’s a bold move that blends physics-based modeling with machine learning, an approach that could reshape how we think about early-stage pharmaceutical R&D.

While the dataset is public, SandboxAQ plans to monetize its proprietary AI models trained on it—essentially offering virtual labs as a service.

Headquartered in Palo Alto, California, right in the heart of Silicon Valley, SandboxAQ is a fascinating fusion of quantum tech and AI muscle. It emerged from Alphabet’s moonshot factory and is backed by Nvidia, with nearly $1 billion in venture capital fueling its ambitions.

SandboxAQ's signature technology is Large Quantitative Models (LQMs), AI systems grounded in the laws of physics, chemistry, and biology. These aren’t just data-driven models; they simulate the real world with scientific rigor. Think of them as tireless digital researchers, autonomously exploring millions of chemical pathways to help discover novel molecules far beyond human capacity.

Beyond drug discovery, SandboxAQ is tackling:
  • Cybersecurity: Detecting cryptographic vulnerabilities
  • Navigation: Enhancing precision in GPS-denied environments
  • Medical Diagnostics: Using AI to analyze cardiac signals
  • Materials Science: Predicting atomic-level properties for breakthrough materials

SandboxAQ was founded by Jack Hidary, a tech entrepreneur with a deep background in quantum computing and neuroscience. He previously led a quantum initiative at Alphabet before spinning it out as SandboxAQ in 2022. Hidary studied philosophy and neuroscience at Columbia University and has authored a well-regarded book on quantum computing.

While Hidary is the primary founder and CEO, the company’s early development was also shaped by influential figures like Eric Schmidt, former CEO of Google, who serves as Chairman of the Board.

Nvidia Faces $5.5 Billion Hit as U.S. Tightens AI Chip Export Rules to China


Nvidia is facing a $5.5 billion charge after the U.S. government restricted exports of its H20 AI chips to China. Nvidia's shares dropped about 6% following the announcement.

The H20 was designed to comply with earlier export limits, but officials now fear it could be used in Chinese supercomputers, prompting indefinite licensing requirements.

China previously accounted for 20% of Nvidia's revenue, but this has now shrunk to about 10%, with expectations that it could drop to near zero.

This move is part of Washington's broader strategy to limit China's access to advanced AI hardware, escalating tensions in the global tech race.

The H20 was Nvidia's most advanced chip available in China, widely used by companies like Tencent, Alibaba, and ByteDance. These firms had ramped up orders due to growing demand for AI models.

While the H20 has lower computing capabilities than Nvidia's top-tier chips, it is the chip's high-speed memory and connectivity that raised concerns about potential use in Chinese supercomputers.

Meanwhile, Nvidia is pivoting towards its Blackwell-series AI chips, which are expected to be the next major product line. The company has also recently announced plans to build AI servers worth up to $500 billion in the U.S. over the next four years, aligning with efforts to boost domestic tech infrastructure.

Nvidia is bracing for additional U.S. export controls under proposed "AI diffusion rules," which could further limit its ability to sell advanced AI hardware globally, while Huawei emerges as a key competitor in China.

Analysts predict that Chinese firms may pivot to Huawei or other domestic alternatives, accelerating China’s push for semiconductor independence.


HCLTech integrates NVIDIA AI Enterprise and Omniverse with HCLTech’s GenAI solutions


HCLTech, a leading global technology company, announced that it has integrated NVIDIA AI Enterprise with its GenAI-led service transformation platform, AI Force, and NVIDIA Omniverse with its physical AI solution, SmartTwin™. These integrations aim to drive faster AI adoption for enterprises by streamlining software development cycles and enhancing engineering efficiency.

The NVIDIA AI Enterprise software, including NVIDIA NIM and NeMo Retriever microservices, along with the NVIDIA Llama Nemotron model family and NVIDIA AI Blueprints, will enable HCLTech’s AI Force enterprise users to achieve accelerated release timelines, improved code quality and enhanced operational efficiency across coding, testing, legacy modernization, and process optimization.

Simultaneously, HCLTech’s SmartTwin™ platform will harness NVIDIA Omniverse™, enabling enterprises to build interoperable data pipelines on OpenUSD, integrate third-party engineering tools and run high-fidelity virtual simulations. The result: faster, more successful product launches and significant cost savings through optimized processes and reduced reliance on physical prototypes.

"AI is empowering enterprises to achieve operational excellence and business growth across industries. Our work with NVIDIA will bring a wide range of capabilities and benefits to businesses across industries as they adopt AI products and services across their operations," said Vijay Guntur, CTO and head of Ecosystems at HCLTech.

"Agentic and physical AI are transforming every industry from customer service and healthcare to manufacturing and logistics by automating complex workflows, optimizing operations and fostering sustainability and growth,” said John Fanelli, Vice President, Enterprise Software at NVIDIA. "With the integration of the NVIDIA AI Enterprise and NVIDIA Omniverse technologies, HCLTech's AI Force and SmartTwin platforms can help businesses rapidly integrate AI and simulation technology into their operations."

IBM Teams Up with NVIDIA to Supercharge AI Development on the Cloud


IBM has announced a collaboration with NVIDIA to enhance AI capabilities at scale. This partnership focuses on integrating NVIDIA's AI Data Platform technologies with IBM's offerings, such as IBM Fusion and watsonx.

Key highlights of this collaboration include:
  • Content-Aware Storage (CAS): IBM plans to introduce CAS in its hybrid cloud infrastructure, enabling enterprises to process unstructured data more effectively for AI applications like retrieval-augmented generation (RAG) and reasoning.
  • Enhanced AI Accessibility: IBM aims to integrate its watsonx platform with NVIDIA's technologies, allowing organizations to develop and deploy AI models across various cloud environments.
  • Support for Compute-Intensive Workloads: IBM Cloud will expand its NVIDIA accelerated computing portfolio, including the availability of NVIDIA H200 Tensor Core GPU instances, designed for high-performance AI workloads.
This collaboration is expected to drive innovation in generative AI and agentic AI applications.
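Retrieval-augmented generation, mentioned above, can be sketched in a few lines: first retrieve the stored documents most relevant to a query, then hand them to a model as context. This toy version uses word-overlap scoring and a placeholder generation step; production systems use vector embeddings and an LLM, and the documents below are invented for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval: score documents by word overlap with the query.
# Generation: a placeholder that just cites the retrieved context.

documents = [
    "IBM Fusion provides hybrid cloud storage for enterprises.",
    "watsonx helps organizations deploy AI models across clouds.",
    "NVIDIA H200 GPUs target high-performance AI workloads.",
]

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def generate(query, context):
    """Stand-in for an LLM call: real systems send query + context to a model."""
    return f"Q: {query}\nContext: {context[0]}"

ctx = retrieve("Which GPUs suit AI workloads?", documents)
print(generate("Which GPUs suit AI workloads?", ctx))
```

Content-aware storage targets exactly the retrieval half of this loop: indexing unstructured data so the relevant context can be found without a separate extraction pipeline.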

A 2024 IBM report found that more than three in four executives surveyed (77 percent) say generative AI is market-ready, up from just 36 percent in 2023. With this push to put AI into production comes an increased need for compute and data-intensive technologies. The collaboration between IBM and NVIDIA will enable IBM to provide hybrid AI solutions that take advantage of open technologies and platforms while also supporting data management, performance, security, and governance.

"IBM is focused on helping enterprises build and deploy effective AI models and scale with speed," said Hillery Hunter, CTO and General Manager of Innovation, IBM Infrastructure. "Together, IBM and NVIDIA are collaborating to create and offer the solutions, services and technology to unlock, accelerate, and protect data – ultimately helping clients overcome AI's hidden costs and technical hurdles to monetize AI and drive real business outcomes."

"AI agents need to rapidly access, fetch and process data at scale, and today, these steps occur in separate silos," said Rob Davis, vice president, Storage Networking Technology, NVIDIA. "The integration of IBM's content-aware storage with NVIDIA AI orchestrates data and compute across an optimized network fabric to overcome silos with an intelligent, scalable system that drives near real-time inference for responsive AI reasoning."

To learn more about IBM's presence at GTC, please visit https://www.nvidia.com/gtc/session-catalog/?search.suggestedaudiencelevel=1732117107498003nOoA&search=ibm#/

OpenAI Signs $12 Bn Deal with NVIDIA-backed CoreWeave


OpenAI has reportedly signed a significant five-year deal worth $11.9 billion with CoreWeave, a cloud services provider specializing in AI-focused GPU infrastructure and backed by NVIDIA.

As part of this agreement, OpenAI will receive $350 million worth of equity in CoreWeave. This partnership comes ahead of CoreWeave's planned Initial Public Offering (IPO) and represents a major development for the company, which has seen rapid growth in recent years.

CoreWeave, which began as a cryptocurrency mining operation, has evolved into a leading player in the AI cloud service industry, operating an AI-specific cloud infrastructure across 32 data centers with over 250,000 NVIDIA GPUs. The deal with OpenAI is expected to reduce CoreWeave's dependence on Microsoft, which accounted for 62% of its revenue in 2024.


This strategic move highlights the changing dynamics between Microsoft and OpenAI, as the latter seeks more computing resources amid claims of being "out of GPUs".

According to a report by The Information, it’s unclear whether the OpenAI contract is a net new deal for CoreWeave. Microsoft has previously signed deals with CoreWeave to get extra capacity for OpenAI. Notably, Microsoft is CoreWeave’s largest customer and has signed deals to spend more than $10 billion renting AI servers through 2030.


CoreWeave has raised more than $14.5 billion in debt and equity across 12 financing rounds. Last year, CoreWeave raised over $7 billion in one of the largest private debt financing rounds in history, led by asset managers Blackstone and Magnetar.

Reliance Buys Advanced NVIDIA AI Chips for World's Largest Data Centre - Report

According to a Bloomberg report, Mukesh Ambani's Reliance Industries is planning to build what could become the world's largest data centre in Jamnagar, Gujarat. The facility is expected to have a capacity of 3 GW, significantly surpassing the current largest data centre, Microsoft's 600-megawatt site in Virginia. The project could cost between $20 billion and $30 billion.

Reliance Industries Limited is procuring advanced AI semiconductors from Nvidia to support this ambitious project. These high-performance chips are essential for complex computations required by AI-driven tools.

The data centre will be powered primarily by renewable energy sources, including solar, wind, and hydrogen power. This initiative aligns with Reliance's strategy to boost India's AI capabilities and make AI more affordable and accessible.



By floor area, the largest data centre in the world today is the China Telecom-Inner Mongolia Information Park in Hohhot, Inner Mongolia, China. This massive facility spans 10.7 million square feet and consumes 150 megawatts of power. It includes not only data storage and processing infrastructure but also contact centers, warehouses, offices, and housing quarters.

Cost and Investment

Building such a massive facility could require an investment of $20 to $30 billion. Reliance Industries has about $26 billion in cash reserves, but funding such a project would still be a challenge.

Ambani aims to lower the cost of AI inferencing, making AI more affordable and accessible to businesses and startups in India.

"AI inferencing" is the process by which a trained machine learning model applies its learned patterns to new, unseen data to make predictions or decisions. For example, an AI model trained to recognize spam emails can infer whether a new email is spam or not based on the patterns it learned during training.

AI inferencing requires significant computational power, often involving GPUs or specialized hardware like TPUs. The cost of using these resources can be substantial, especially for large-scale models.
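The spam example above is a single inference step: a model trained earlier applies its learned parameters to an email it has never seen. A minimal sketch, with invented keyword weights standing in for trained parameters:

```python
# Toy inference step: apply "learned" weights to unseen input.
# The weights stand in for a trained model's parameters (invented values);
# no training happens here -- that is the point of inference.

learned_weights = {"free": 2.0, "winner": 1.5, "invoice": -1.0, "meeting": -1.5}
threshold = 1.0

def infer_is_spam(email_text):
    """Score an unseen email with the trained weights (inference, not training)."""
    score = sum(learned_weights.get(w, 0.0) for w in email_text.lower().split())
    return score > threshold

print(infer_is_spam("You are a winner claim your free prize"))  # True
print(infer_is_spam("Agenda for the team meeting"))             # False
```

Real inference replaces this keyword lookup with billions of multiply-accumulate operations per token, which is why it demands GPUs or TPUs and why lowering its cost is the commercial target of a facility like this one.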

For example, companies using NVIDIA's AI inference platform can achieve significant cost savings through optimized performance and energy efficiency. Similarly, startups like Pipeshift offer modular inference engines that can reduce GPU usage by up to 75%, leading to substantial cost savings.

This initiative aligns with Reliance's strategy to disrupt the market, similar to how Reliance Jio revolutionized the telecom sector by offering affordable services.

This announcement comes just days after that of the Stargate Project, a massive artificial intelligence initiative from OpenAI, SoftBank, Oracle, and MGX. The project aims to invest $500 billion over the next four years to build AI infrastructure in the United States. Masayoshi Son, the founder of SoftBank, will chair the project, while Sam Altman from OpenAI will manage operations.

Nvidia Unveils Desktop Sized Personal AI Supercomputer Called Digits


Nvidia recently unveiled Project DIGITS, a personal AI supercomputer designed to bring cutting-edge AI computing to researchers, data scientists, and students.

Project DIGITS features the new NVIDIA GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.

With Project DIGITS, users can develop and run inference on models using their own desktop system, then seamlessly deploy the models on accelerated cloud or data center infrastructure.

Here are some key details:

Key Features:
  • GB10 Grace Blackwell Superchip: At the heart of Project DIGITS is the GB10 Superchip, delivering up to 1 petaflop of AI performance.
  • Unified Memory: 128GB of unified memory and up to 4TB of NVMe storage for handling large datasets.
  • Power Efficiency: Developed in collaboration with MediaTek, the superchip offers industry-leading efficiency.
  • Scalability: Two DIGITS systems can be connected to handle models up to 405 billion parameters.
  • Software Stack: Preloaded with NVIDIA's AI software stack, including tools like NVIDIA NeMo for fine-tuning models and NVIDIA RAPIDS for accelerating data science tasks.
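The scalability claim is consistent with the memory figures if one assumes roughly 4-bit weights (half a byte per parameter), a common setup for large-model inference; the exact precision is an assumption here, not an NVIDIA specification:

```python
# Check that two linked 128 GB DIGITS systems can hold a 405B-parameter
# model, assuming ~4-bit (0.5 byte) quantized weights -- an assumption
# about the quantization level, not an official figure.

params = 405e9                   # 405 billion parameters
bytes_per_param = 0.5            # 4-bit quantized weights
unified_memory_gb = 128          # per DIGITS system
systems = 2                      # two systems connected together

model_gb = params * bytes_per_param / 1e9
print(f"model: {model_gb:.1f} GB, available: {unified_memory_gb * systems} GB")
```

A 405B-parameter model needs about 202.5 GB at that precision, which fits in the 256 GB of combined unified memory but not in a single 128 GB system, matching the two-system claim.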

Availability:

Price: Starting at $3,000, Project DIGITS will be available from Nvidia and its partners starting in May 2025.

Usage: Designed for prototyping, fine-tuning, and running AI models locally, with seamless deployment to cloud or data center environments.

This innovation aims to make powerful AI computing accessible to a broader audience, empowering developers to engage and shape the age of AI.

Nvidia Project DIGITS

NVIDIA Launches Generative AI Computer at Affordable Price


NVIDIA recently unveiled its most affordable generative AI supercomputer, the Jetson Orin Nano Super Developer Kit. It is priced at $249 (approximately ₹21,146), making it accessible for students, enthusiasts, and developers.

The Jetson Orin Nano Super Developer Kit by NVIDIA is a game-changer for those looking to dive into generative AI on a budget. It's designed to support popular generative AI models, which means you can experiment with cutting-edge AI right out of the box.

Jetson Orin Nano Super is an ideal solution for creating LLM chatbots based on retrieval-augmented generation, building a visual AI agent, or deploying AI-based robots.

It offers up to 67 trillion operations per second (TOPS) of AI performance, a significant improvement over its predecessor. It supports popular generative AI models and is ideal for advanced robotics applications, vision AI, and various Internet of Things (IoT) devices.
 
Jetson Orin Nano Super

The Jetson Orin Nano Super is compact enough to fit in the palm of your hand. A new software update boosted its performance from 40 TOPS to 67 TOPS.


The predecessor to the Jetson Orin Nano Super Developer Kit is the Jetson Orin Nano Developer Kit, launched in 2022. It offered 40 trillion operations per second (TOPS) of AI performance. The new version offers 67 TOPS, which is a 1.7x improvement over its predecessor.

The new version, Jetson Orin Nano Super Developer Kit, also benefits from software updates that enhance generative AI performance, making it a more powerful and cost-effective option for developers, students, and hobbyists.

NVIDIA CEO Jensen Huang said the kit can run everything NVIDIA's HGX data center platform can, including large language models.

The developer kit consists of a Jetson Orin Nano 8GB system-on-module (SoM) and a reference carrier board, providing an ideal platform for prototyping edge AI applications.

Jetson runs NVIDIA AI software including NVIDIA Isaac for robotics, NVIDIA Metropolis for vision AI and NVIDIA Holoscan for sensor processing. Development time can be reduced with NVIDIA Omniverse Replicator for synthetic data generation and NVIDIA TAO Toolkit for fine-tuning pretrained AI models from the NGC catalog.


NVIDIA’s focus on affordability and performance with the Jetson Orin Nano Super opens up generative AI development to a broader audience, from hobbyists to students and professionals. This democratization of AI technology can lead to rapid innovation and development in various fields.

NVIDIA Develops New AI Model 'FUGATTO', Calls It A Swiss Army Knife for Sound


NVIDIA recently unveiled Fugatto, a generative AI model designed to transform text prompts into audio. Officially named the Foundational Generative Audio Transformer Opus 1, Fugatto can create music, modify existing sounds, and generate speech with specific emotions and accents.

NVIDIA touts Fugatto as the world's most flexible sound machine: a text-to-audio (TTA) model that can create and transform any combination of music, voices, and sounds from text prompts.

NVIDIA has not yet disclosed plans to make Fugatto publicly available due to concerns about potential misuse, such as deepfake audio and copyright infringement.

Key Features:

  • Versatility: Fugatto can generate or transform any mix of music, voices, and sounds described with text prompts.
  • Applications: It has potential uses in music production, language education, and game development.
  • Advanced Capabilities: The model can create speech that conveys specific emotions, such as anger, in a chosen accent, craft soundscapes that evolve over time, and produce sounds never heard before.


Fugatto was built by researchers from around the world, including India, Brazil, China, Jordan, and South Korea; this collaboration strengthened its multi-accent and multilingual capabilities.

For comparison, OpenAI's text-to-speech (TTS) models are part of its broader suite of AI tools, offering high-quality speech synthesis for different applications, while Microsoft Azure's text-to-speech service provides natural, lifelike voices in many languages across various applications.

Notably, several other AI companies have developed comparable audio models. ElevenLabs, for example, known for its natural-sounding voices in multiple languages, offers a range of AI audio solutions, including text-to-speech, voice cloning, and dubbing.

Deepgram's Aura model is designed for real-time conversations with less than 200ms latency, making it ideal for applications like IVR systems and AI agents.

WellSaid Labs is a company that provides flexible voiceover tools that convert plain text into emotion-filled speech, suitable for various use cases like presentations and educational content.

How Fugatto TTA is different from Text-to-Speech (TTS) AI Models

NVIDIA's Fugatto stands out from other text-to-audio (TTA) models due to its versatility and flexibility. Fugatto can combine, interpolate, or negate instructions using both text and audio inputs, allowing for highly customizable audio outputs. This means it can create entirely new sounds, modify existing tracks by adding or removing instruments, and change accents or emotions in voices.
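NVIDIA has not published Fugatto's code, but combining and interpolating instructions in conditioned generative models is commonly implemented as weighted blending of conditioning embeddings. The sketch below illustrates that general idea only; the function names and vectors are assumptions for illustration, not Fugatto's actual API.

```python
# Conceptual sketch (an assumption, not NVIDIA's published code): blending
# two instruction embeddings, the general mechanism behind "interpolating"
# or "negating" instructions in conditioned audio generators.

def interpolate(cond_a, cond_b, alpha):
    """Linear blend: alpha=0 gives cond_a, alpha=1 gives cond_b."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(cond_a, cond_b)]

def negate(cond, scale=1.0):
    """Steer generation away from an instruction by flipping its direction."""
    return [-scale * c for c in cond]

# Hypothetical embeddings for a "rainstorm" prompt and a "cello" prompt;
# the midpoint would condition the model on a mix of both.
rain, cello = [1.0, 0.0, 0.5], [0.0, 1.0, 0.5]
mix = interpolate(rain, cello, 0.5)  # [0.5, 0.5, 0.5]
```

In a real system the vectors would come from a text encoder and the blended result would be fed to the audio decoder; the arithmetic above is only the skeleton of that idea.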

Unlike models trained solely on audio data, Fugatto can follow free-form text instructions, making it easier to control and fine-tune the audio output.

Fugatto is designed for unsupervised multitask learning in audio synthesis and transformation, which means it can handle a wide range of tasks without needing separate models for each.

The Fugatto model is particularly useful for music producers, ad agencies, language learning tools, and video game developers, offering a new tool for creating and modifying audio content.

These features make Fugatto a powerful and unique tool in the realm of AI-driven audio generation and transformation.

Infosys Launches Small Language Models Built Using NVIDIA AI


Infosys Unveils Small Language Models – Infosys Topaz BankingSLM and Infosys Topaz ITOpsSLM – Built on NVIDIA AI Stack

The small language models will be integrated into products and services as part of Infosys Topaz offerings to provide enterprises with a foundation to build their specialized models.

Infosys today announced the launch of its small language models – Infosys Topaz BankingSLM and Infosys Topaz ITOpsSLM – built using the powerful NVIDIA AI Stack. The collaboration leverages NVIDIA AI and Infosys Topaz offerings to provide a robust foundation for implementing and scaling enterprise AI. These models are developed as part of the Infosys center of excellence dedicated to NVIDIA technologies and built to help businesses quickly adopt and scale AI.

The small language models utilize general and industry-specific data, enhanced by NVIDIA’s AI Enterprise and NVIDIA AI Foundry in collaboration with Sarvam AI. The models are fine-tuned with Infosys data and integrated into existing offerings, like Infosys Finacle and Infosys Topaz for business and IT operations, creating robust foundational models for industry-specific applications. Infosys also provides these models as services that include pretraining-as-a-service and fine-tuning-as-a-service, to help businesses build their own custom AI models securely, in compliance with industry standards.

As part of the center of excellence, Infosys is working with NVIDIA on NIM™ Agent Blueprints to streamline AI application development and integrate innovations such as the new Digital Human blueprint for customer service, multimodal PDF data extraction and various other use cases for Infosys Topaz offerings. Beyond these, the collaboration extends to digitalization efforts, addressing areas like 3D workflows and digital twins with NVIDIA Omniverse Enterprise, and Infosys Responsible AI suite, using NVIDIA NeMo Guardrails. The center of excellence also unveiled an exclusive AI Experience Zone, featuring the latest capabilities from NVIDIA AI and Infosys Topaz. The zone is designed to foster co-innovation in AI solutions, such as agentic and physical AI use cases, across sectors such as telecommunications, retail, and financial services.

Balakrishna D. R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, said, “As we further our enterprise AI journey with NVIDIA, our focus is now on delivering foundational small language models as services for businesses to build on. By integrating the NVIDIA AI stack with Infosys Topaz, we are taking advantage of very advanced enterprise AI capabilities to tackle unique business challenges, enhance operational efficiency, and deliver bespoke solutions that drive business value for our clients. Our dedicated center of excellence ensures continuous innovation and establishes Infosys as a preferred partner for our clients’ AI-powered transformation.”

Jay Puri, Executive Vice President, Worldwide Field Operations, NVIDIA, said, “Generative AI and the recent advancements in agentic and physical AI are ushering in a new era of innovation and productivity for enterprises worldwide. NVIDIA's full-stack AI platform combined with Infosys Topaz empowers businesses to build and deploy custom AI applications that will transform industries, helping businesses unlock their full potential.”

India Should Manufacture Its Own AI – NVIDIA CEO, Huang


NVIDIA CEO Jensen Huang emphasized the importance of India manufacturing its own AI during the NVIDIA AI Summit in Mumbai on October 24, 2024. He stated, "It makes complete sense that India should manufacture its own AI".

Huang highlighted India's potential to become a global leader in AI, leveraging its vast pool of technical talent and immense data resources. He also mentioned that India could export AI in the future, similar to how it has exported software.

This vision aligns with NVIDIA's collaborations with Indian companies like Reliance Industries and Tech Mahindra to develop advanced AI infrastructure and solutions.

To capitalize on India's talent and immense data resources, the country's leading cloud infrastructure providers are rapidly expanding their data center capacity. NVIDIA is playing a key role, with NVIDIA GPU deployments expected to grow nearly 10x by year's end, creating the backbone for an AI-driven economy.

Together with NVIDIA, these companies are at the cutting edge of a shift Huang compared to the seismic change in computing introduced by IBM’s System 360 in 1964, calling it the most profound platform shift since then.

“This industry, the computing industry, is going to become the intelligence industry,” Huang said, pointing to India’s enormous amounts of data and large population as unique strengths for leading it.

Huang identified three areas where AI will transform industries: sovereign AI, where nations use their own data to drive innovation; agentic AI, which automates knowledge-based work; and physical AI, which applies AI to industrial tasks through robotics and autonomous systems. India, Huang noted, is uniquely positioned to lead in all three areas.

India’s startups are already harnessing NVIDIA technology to drive innovation across industries and are positioning themselves as global players, bringing the country’s AI solutions to the world.

Nvidia founder and CEO Jensen Huang speaking with Reliance Industries Chairman Mukesh Ambani at NVIDIA’s AI Summit in Mumbai.

During the NVIDIA AI Summit in Mumbai, Huang also praised Mukesh Ambani and Reliance Industries for their significant contributions to India's tech sector. He highlighted the NVIDIA-Reliance partnership to build AI infrastructure in India, citing the country's large population of users as a key advantage.


Huang also shared a light-hearted moment with Ambani, joking about the size of their respective homes and acknowledging Nita Ambani's role in building the Jio World Centre. He noted that India is central to NVIDIA's global AI strategy and expressed confidence in India's potential to become a leader in AI.

Notably, Reliance, in partnership with NVIDIA, is building AI factories to automate industrial tasks and transform processes in sectors like energy and manufacturing.

Wipro Launches New Initiatives that Leverage the Full NVIDIA AI Stack


Wipro has announced new initiatives leveraging the full NVIDIA AI stack to help clients across multiple industries, including healthcare, communications, and financial services, quickly develop and implement new business strategies for the era of AI.

Wipro is providing ready-to-use templates for building agentic AI applications in areas such as intelligent document processing, drug discovery, customer service, and claims processing.

Built on NVIDIA AI Enterprise, Wipro's WeGA Studio leverages NVIDIA NIM Agent Blueprints to accelerate the deployment of AI virtual assistants, enhancing user experiences and streamlining operations.

In addition, for healthcare innovations, Wipro is using NVIDIA AI to improve member experiences, increase enrollment, and boost productivity in claims adjudication across its healthcare offerings.

Moreover, Wipro plans to expand into areas such as digital manufacturing and digital twins with NVIDIA Omniverse, delivering exceptional value to its global enterprise customers.

This collaboration aims to drive innovation and help enterprises harness the power of AI to transform their operations and achieve measurable business benefits.

TCS Launches NVIDIA Business Unit to Accelerate AI Adoption for Customers Across Industries


Tata Consultancy Services (TCS) has launched a new business unit in collaboration with NVIDIA to accelerate AI adoption for customers across various industries. This new unit is part of TCS' AI.Cloud business unit and builds on a collaboration with NVIDIA that spans over five years.

The new unit will develop tailored AI solutions for industries such as manufacturing, banking and financial services (BFSI), retail, telecom, and automotive.

The collaboration leverages NVIDIA's AI Enterprise and Omniverse platforms, including AI microservices and AI Agent Blueprints, to deliver these solutions. TCS will utilize its global centers of excellence to design and deliver curated AI adoption strategies.

TCS' deep domain expertise combined with NVIDIA's AI technology will help build and deploy agentic AI solutions.

This initiative aims to drive enterprise-wide AI transformation and foster innovation across industries.

Early this month, Accenture also announced the formation of a new NVIDIA Business Group to accelerate AI adoption in enterprises. This group will train 30,000 professionals globally to help clients scale AI solutions and reinvent business processes.

TCS' new NVIDIA unit also offers a proprietary framework that brings together TCS' deep domain expertise and enterprise contextual knowledge with NVIDIA AI technology, including NVIDIA NIM microservices and NVIDIA NIM Agent Blueprints (part of the NVIDIA AI Enterprise software platform and NVIDIA AI Foundry), to build and deploy agentic AI solutions at scale. TCS and NVIDIA have collaborated to build value chain-centric solutions and offerings for industry verticals on the NVIDIA AI platform. They include:
  • TCS Manufacturing AI for Industrials: This offering leverages the power of AI and large language models (LLMs) to transform raw data into actionable insights for manufacturing enterprises. While general-purpose LLMs lack the capabilities to understand specific industry nuances, TCS’ Manufacturing AI for Industrials LLMs leverage the company’s contextual knowledge, technical prowess and the power of NVIDIA’s application frameworks to help accurately address industry challenges.
  • TCS AI Spectrum for BFSI: This offering delivers innovative and secure ways of infusing the power of LLMs and AI into BFSI lines at enterprise scale. Built on the NVIDIA AI Enterprise platform, it enables faster decision-making, improved regulatory compliance and enhanced risk management for financial institutions.
  • TCS Cognitive Visual Receiving: This is a holistic composite AI offering built on NVIDIA AI Enterprise and Omniverse that revolutionizes retail warehousing with greater accuracy, efficiency and speed by automating quality check, product identification, measurement and attribute extraction.
  • TCS AI-Native Telco Offerings: These offerings, built on NVIDIA AI and the NVIDIA Aerial Omniverse Digital Twin, enable telcos to rapidly create custom telco domain-specific models for business needs such as autonomous network anomaly management, billing and revenue assurance, 3D network visualization, and customer experience.
  • TCS AI-based Autonomous Vehicle Platform: TCS’ IoT and Digital Engineering unit is working with NVIDIA to leverage generative AI and deep learning technologies, such as Omniverse for simulation and NVIDIA AI Enterprise for synthetic data generation, to accelerate the development of end-to-end autonomous features and capabilities for automotive OEMs and tier 1 suppliers.
Additionally, TCS is also working on a new suite of digital twin solutions built on the NVIDIA Omniverse development platform, enabling clients to design, simulate, operate, and optimize products and production facilities across heavy industries:
  • Factory of the Future: Real-time factory planning, monitoring, and predictive maintenance in a virtual environment, reducing downtime and speeding up time to market.
  • In-Car Digital Twin: Autonomous vehicle simulation using Omniverse’s physics-based simulations, reducing the need for physical testing.
  • Aero Care Efficiency: Digital twins of aircraft components, enabling immersive training, enhanced problem-solving, and early failure detection to improve safety and reduce operational risk.
  • Smart Farming Digital Twin: Farming scenario simulations with real-world physics, including soil interactions, terrain analysis, and weather conditions to improve equipment performance, process optimization and sustainability in modern agriculture.
The collaboration with NVIDIA is part of TCS' broader efforts to strengthen its AI-readiness and build end-to-end capabilities powered by NVIDIA technology to foster enterprise-wide AI transformation for its key customers.  

Accenture Forms New NVIDIA Business Group, With 30,000 Professionals Receiving Training Globally


  • New Accenture NVIDIA Business Group launched with 30,000 professionals receiving training globally to help clients reinvent processes and scale enterprise AI adoption with AI agents
  • Accenture AI Refinery platform helping companies jump-start their custom agentic AI journeys using the full NVIDIA AI stack
  • Network of Accenture AI Refinery Engineering Hubs serving 57,000 Accenture AI practitioners to open in Europe, Asia and North America, supporting large-scale operations, agentic architecture and foundation model development with NVIDIA AI
  • Deployment of autonomous agents built in AI Refinery achieves early outcomes in Accenture’s marketing function
Accenture and NVIDIA have announced an expanded partnership to drive Artificial Intelligence (AI) adoption in enterprises. With this, Accenture has formed a new NVIDIA Business Group to accelerate AI adoption in enterprises. This group will train 30,000 professionals globally to help clients scale AI solutions and reinvent business processes.

The newly formed business group will focus on training Accenture’s workforce to leverage NVIDIA’s AI technology, ensuring they can effectively implement AI solutions for clients. The group will utilize Accenture’s AI Refinery platform, which integrates NVIDIA’s AI stack to develop custom AI agents and scale AI across various industries.

The partnership includes the launch of Accenture's AI Refinery platform, which uses NVIDIA's AI stack to help companies develop custom AI agents and integrate AI across their operations. This collaboration is expected to accelerate the deployment of AI in various industries, enhancing productivity and innovation.

Accenture is establishing AI Refinery Engineering Hubs in Europe, Asia, and North America. These hubs will support large-scale operations, agentic architecture, and foundation model development. The AI Refinery focuses on developing agentic AI systems, which can act on user intent, create new workflows, and take appropriate actions based on their environment. This represents a significant advancement in generative AI.

These hubs will focus on the selection, fine-tuning and large-scale inferencing of foundation models, all of which pose significant accuracy, cost, latency and compliance challenges when development is scaled. Building on existing hubs in Mountain View, Calif., and Bangalore, Accenture is adding AI Refinery Engineering Hubs in Singapore, Tokyo, Malaga and London.

With generative AI demand driving $3 billion in Accenture bookings in its recently closed fiscal year, the new group will help clients lay the foundation for agentic AI functionality using Accenture’s AI Refinery™, which uses the full NVIDIA AI stack—including NVIDIA AI Foundry, NVIDIA AI Enterprise and NVIDIA Omniverse—to advance areas such as process reinvention, AI-powered simulation and sovereign AI.

The AI Refinery platform, developed by Accenture in collaboration with NVIDIA, is designed to help enterprises accelerate their AI adoption and innovation. The AI Refinery enables companies to create custom AI agents tailored to their specific business needs. This includes developing large language models (LLMs) and other AI solutions using NVIDIA’s AI Stack.

Accenture will also debut a new NVIDIA NIM Agent Blueprint for virtual facility robot fleet simulation, which integrates NVIDIA Omniverse, Isaac and Metropolis software, to enable industrial companies to build autonomous, robot-operated software-defined factories and facilities.

Accenture will use these new capabilities at Eclipse Automation, an Accenture-owned manufacturing automation company, to deliver up to 50% faster designs and a 30% reduction in cycle time on behalf of its clients.

This partnership aims to drive significant productivity and growth by enabling enterprises to harness the full potential of AI.

Dell Collaborates with NVIDIA to Transform Telecom Networks with AI Solutions


Dell Technologies has extended its collaboration with NVIDIA to revolutionize telecom networks using AI solutions. This partnership aims to co-create and validate AI solutions for communications service providers (CSPs), leveraging Dell's AI Factory and NVIDIA's advanced GPUs and enterprise-grade AI software.

Key highlights of this collaboration include:
  • Enhanced Customer Care: Using AI to improve customer service and network maintenance with platforms like Amdocs amAIz. 
  • Automated Operations: Automating call center scripts and customer care operations with Internal. 
  • Network Analysis: Conducting network troubleshooting and analysis with Kinetica SQL-GPT. 
  • Predictive Maintenance: Developing digital twins for networks and performing predictive network maintenance with Synthefy. 
  • Edge Deployments: Facilitating AI deployments at the edge of the telecom network with Dell PowerEdge XR8000 servers, equipped with NVIDIA L4 Tensor Core GPUs. 
This initiative is part of Dell's broader "AI for Telecom" program, which aims to simplify and accelerate AI deployments for CSPs, enhancing network performance and creating new revenue opportunities at the enterprise edge.

This collaboration is poised to drive significant advancements in the telecom industry, making networks more efficient, customer-centric, and cost-effective.

AI-driven solutions can continuously monitor and optimize network performance, reducing downtime and improving overall efficiency. By using digital twins and predictive analytics, telecom operators can anticipate and address potential issues before they impact the network.
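As a minimal illustration of the predictive monitoring described above (a sketch under stated assumptions, not Dell or NVIDIA code), the snippet below flags network latency samples that deviate sharply from a rolling baseline, the simplest form of anomaly detection that a digital-twin or predictive-maintenance pipeline might build on. The window size, threshold, and data are illustrative assumptions.

```python
# Illustrative sketch: flag latency readings more than z standard deviations
# from the mean of the preceding `window` samples. Real telecom pipelines use
# far richer models; this shows only the core idea of baseline-vs-spike.
from statistics import mean, stdev

def flag_anomalies(samples, window=5, z=3.0):
    """Return indices whose value deviates more than z std-devs
    from the rolling baseline of the previous `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # Skip flat baselines (sigma == 0) to avoid division-free false hits.
        if sigma and abs(samples[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged

latency_ms = [10, 11, 10, 12, 11, 10, 11, 95, 10, 11]
spikes = flag_anomalies(latency_ms)  # index 7 (the 95 ms spike) is flagged
```

Catching such a spike before users notice it is the "anticipate and address potential issues" behavior the article describes, just at toy scale.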

Platforms like Amdocs amAIz can provide more responsive and personalized customer service, leading to higher customer satisfaction.

Automation of routine tasks in call centers can free up human agents to handle more complex issues, improving service quality.

Besides, Dell’s AI solutions are designed to be scalable and flexible, allowing CSPs to adapt quickly to changing market demands.
