
Cisco and NVIDIA Introduce AI‑Native 6G Wireless Stack, Redefining Cloud and Enterprise Infrastructure

Cisco and NVIDIA have announced a broad set of AI infrastructure innovations designed to accelerate adoption of artificial intelligence across cloud, enterprise, and telecom sectors. The collaboration brings together Cisco’s networking and security expertise with NVIDIA’s AI computing leadership, marking what executives described as the beginning of the “largest data center build‑out in history.”

Spectrum‑X Powered Switches

At the center of the announcement is the Cisco N9100 Series data center switch, the first NVIDIA partner‑developed switch built on NVIDIA Spectrum‑X Ethernet technology. The switch is designed to deliver high‑performance, low‑latency networking for AI workloads and will be available with both the NX‑OS and SONiC network operating systems. Cisco said the platform will serve as a Cloud Partner‑compliant reference architecture, enabling neocloud and sovereign cloud providers to deploy AI infrastructure at scale.

Enterprise AI Security and Observability

Cisco also expanded its Secure AI Factory with NVIDIA, a framework that integrates compute, networking, security, and observability into enterprise AI deployments. The initiative aims to give organizations end‑to‑end visibility and protection as they scale AI workloads, particularly in regulated industries. New ecosystem partnerships were announced to strengthen monitoring and compliance capabilities.

Telecom and 6G Readiness

In a move aimed at telecom operators, Cisco and NVIDIA unveiled the industry’s first AI‑native wireless stack for 6G networks. The stack is designed to handle ultra‑low latency and massive device connectivity, preparing carriers for the surge in AI‑driven traffic expected over the next decade. Analysts said the development could redefine mobile networks by enabling real‑time AI services at the edge.

Strategic Context

Executives from both companies emphasized that the innovations are not standalone products but part of a joint reference architecture for next‑generation AI deployments. “We are entering a new era where AI workloads will reshape every industry,” said a Cisco spokesperson. “Our partnership with NVIDIA ensures customers have the flexibility, interoperability, and scalability to build AI infrastructure securely and globally.”

Why It Matters

  • For Cloud Providers: A unified, NVIDIA‑compliant architecture accelerates AI adoption in sovereign and neocloud environments.
  • For Enterprises: Enhanced security and observability ensure safer AI deployments.
  • For Telecoms: The AI‑native 6G stack positions operators to deliver next‑generation services.
With these announcements, Cisco and NVIDIA are positioning themselves at the heart of the global AI infrastructure race, targeting the needs of hyperscalers, enterprises, and telecom operators alike.

Cassava Taps Accenture to Scale Sovereign AI Across Africa

Strive Masiyiwa, Cassava Founder & Executive Chairman

Cassava Technologies, a pan-African digital infrastructure powerhouse, has announced a strategic collaboration with global consulting giant Accenture to accelerate the rollout of sovereign AI capabilities across Africa. The partnership marks a pivotal moment in the continent’s digital evolution—one that blends cutting-edge technology with local relevance, regulatory alignment, and inclusive innovation.

Building Africa’s AI Backbone

At the heart of the collaboration is a shared vision: to enable African nations to harness artificial intelligence on their own terms. Accenture will deploy its AI Refinery™ platform alongside Cassava’s GPU-as-a-Service (GPUaaS), powered by NVIDIA’s high-performance AI infrastructure. This fusion will allow AI workloads to be processed within national borders, ensuring compliance with local data governance laws and reinforcing digital sovereignty.

The rollout begins in South Africa, with plans to expand into Egypt, Kenya, Morocco, and Nigeria—leveraging Cassava’s ultra-low-latency fibre broadband network and energy-efficient data centres. These “AI factories” will be equipped with thousands of GPUs, enabling scalable, secure, and context-aware AI development across sectors.

Local Context, Global Capability

Unlike generic AI deployments, Cassava and Accenture are prioritizing localized solutions that reflect Africa’s linguistic diversity, cultural nuances, and economic realities. From agriculture and healthcare to mining, telecom, and financial services, the initiative aims to deliver AI applications that are not only powerful but also deeply relevant.
  • Cassava CEO Ahmed El Beheiry described the initiative as a “nation-building story with inclusion at its centre.”
  • Accenture’s Mauro Macchi emphasized the opportunity to “reimagine operations” and “unlock new ways to create value” across the continent.

The Visionary Behind Cassava

This bold move is emblematic of the entrepreneurial ethos of Strive Masiyiwa, Cassava’s founder and executive chairman. A Zimbabwean-born billionaire and telecom pioneer, Masiyiwa is no stranger to building transformative infrastructure. He famously broke Zimbabwe’s telecom monopoly in the 1990s with Econet Wireless and has since become one of Africa’s most influential business leaders.
  • Masiyiwa is investing $720 million to build sovereign AI infrastructure across five African nations.
  • He serves on the boards of Netflix, the Gates Foundation, and National Geographic Society.
  • He is a signatory of the Giving Pledge, supporting education, public health, and youth empowerment.
His mantra, “Start small, think big,” is a call for Africa to become a creator, not just a consumer, of emerging technologies.

Trust, Compliance, and Inclusion

By keeping data within borders and tailoring AI to local realities, the Cassava–Accenture alliance aims to strengthen trust, foster compliance, and democratize access to advanced technologies. It’s a model that could inspire other regions grappling with the tension between global innovation and national sovereignty.

As Africa steps into the AI era, this partnership signals more than just technological progress—it’s a declaration of intent: to build, govern, and scale digital infrastructure that reflects the continent’s values, ambitions, and future.

Tech Mahindra Taps AMD to Power AI-Driven Infrastructure Across Hybrid and Multi-Cloud Ecosystems

Tech Mahindra (NSE: TECHM), a leading global provider of technology consulting and digital solutions to enterprises across industries, announced an agreement with AMD, the leader in high-performance and adaptive computing, to accelerate enterprise transformation through next-generation infrastructure, hybrid cloud, and AI adoption. The collaboration aims to empower enterprises across key sectors, including manufacturing, finance, telecommunications, and healthcare, to harness the full potential of AI-driven infrastructure.

Through this collaboration, Tech Mahindra will integrate AMD’s compute engines and infrastructure with its Cloud BlazeTech solution to drive AI adoption across enterprise workloads. It plans to develop new solutions to enable enterprises to optimize workloads across end-user devices, servers, and cloud infrastructure, including public, private, and hybrid environments. 

Mohit Joshi, CEO and Managing Director, Tech Mahindra, said, “Enterprises worldwide are scrambling to maximize ROI while navigating the complexity of hybrid and cloud-native ecosystems. Our strategic agreement with AMD is a step towards delivering next-generation, hyper-scalable solutions that seamlessly bridge on-site infrastructure with cloud-native capabilities. Through these solutions, we aim to enable customers to optimize performance across distributed environments without compromising speed, security, or control.”

Dr. Lisa Su, Chair and CEO, AMD said, “Together, AMD and Tech Mahindra will help enterprises accelerate their cloud transformation and AI adoption with the performance and efficiency they need to scale. By combining our EPYC processors and AMD Instinct accelerators with Tech Mahindra, we can create solutions that enable customers to deploy AI on compute infrastructure across hybrid and multi-cloud environments.”

Tech Mahindra and AMD are embarking on a multi-year collaboration with a comprehensive roadmap focused on infrastructure optimization and AI enablement. Leveraging leadership in compute and software capabilities from AMD, and Tech Mahindra's deep industry experience, this collaboration will empower customers to harness AI-driven innovation, delivering critical business value and operational outcomes.

About AMD

For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn, Facebook and X pages.

About Tech Mahindra 

Tech Mahindra (NSE: TECHM) offers technology consulting and digital solutions to global enterprises across industries, enabling transformative scale at unparalleled speed. With 148,000+ professionals across 90+ countries helping 1100+ clients, Tech Mahindra provides a full spectrum of services including consulting, information technology, enterprise applications, business process services, engineering services, network services, customer experience & design, AI & analytics, and cloud & infrastructure services. It is the first Indian company in the world to have been awarded the Sustainable Markets Initiative’s Terra Carta Seal, which recognizes global companies that are actively leading the charge to create a climate and nature-positive future. Tech Mahindra is part of the Mahindra Group, founded in 1945, one of the largest and most admired multinational federation of companies.

Wipro and CrowdStrike Expand Alliance to Launch AI-Powered CyberShield MDR


Organizations today face an overwhelming volume of alerts from siloed security tools that fail to stop adversaries. Fragmented security operations across endpoints, cloud workloads, identity, and data drive complexity, increase costs, and create operational blind spots. Wipro CyberShield MDR, powered by CrowdStrike Falcon® Next-Gen SIEM, addresses these challenges by enhancing threat visibility, simplifying operations, and strengthening resilience against evolving threats.

Falcon Next-Gen SIEM combines native Falcon platform and third-party data with real-time threat intelligence and AI-powered automation to supercharge threat detection and response across the enterprise. Leveraging Falcon Next-Gen SIEM and Wipro's global ecosystem – along with Wipro Ventures’ portfolio companies Simbian and Tuskira – CyberShield MDR delivers intelligent defense, proactive breach protection, continuous detection, and rapid response to keep organizations resilient and future-ready against AI-driven threats. Wipro’s cybersecurity experts will manage and host the services from eight Cyber Defense Centers (CDCs) strategically located around the globe.

“Wipro’s CyberShield platform, powered by CrowdStrike’s AI-native product suites and strengthened by our security ecosystem, will help enterprises contain threats swiftly and ensure continuity of digital operations,” said Tony Buffomante, Senior Vice President & Global Head – Cybersecurity & Risk Services, Wipro Limited. “This integrated platform approach enables AI automated workflows, prevents lateral threat movement, and eliminates potential security gaps that fragmented solutions often miss.”

“The Falcon platform supercharges Wipro’s CyberShield Managed Security Services to deliver real-time attack detection, faster response and outcomes that stop breaches,” said Daniel Bernard, Chief Business Officer, CrowdStrike. “Together, we’re simplifying operations across Wipro’s ecosystem of partners — reducing costs, accelerating time-to-value and giving customers the confidence to stay ahead of today’s adversaries.”

Wipro CyberShield℠ MDR, a unified managed security service, will be launched at CrowdStrike Fal.Con 2025.

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading AI-powered technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. Wipro Innovation Network, which brings together our clients, partners, academia, and tech communities, reflects our commitment to client-centric co-innovation. As a part of this, the Innovation Labs and Partner Labs, located across the globe, allow us to collaborate with clients to solve real-world challenges and showcase cutting-edge industry solutions that explore the future of technology. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

TCS and C-DAC Join Forces to Build India’s Own Cloud

In a major stride toward digital sovereignty, Tata Consultancy Services (TCS) has signed a Memorandum of Understanding (MoU) with the Centre for Development of Advanced Computing (C-DAC) to co-develop technologies that will form the backbone of India’s sovereign cloud infrastructure.

The collaboration aims to create a secure, scalable, and AI-enabled cloud ecosystem tailored to the needs of India’s public sector, including critical applications in healthcare, emergency response, and governance.

A Cloud Built for India, by India

The sovereign cloud initiative is designed to ensure that sensitive government data and citizen services remain within national borders, aligning with India’s growing emphasis on data localization and digital autonomy. The platform will be built on OpenStack architecture, enhanced by indigenous innovations from C-DAC and enterprise-grade deployment capabilities from TCS.
“This partnership marks a pivotal moment in India’s journey toward technological self-reliance,” said a senior official from C-DAC. “Together, we’re building a cloud that reflects India’s values, priorities, and security needs.”

Real-World Impact

The sovereign cloud will host mission-critical applications such as:
  • e-Sanjeevani: India’s national telemedicine service
  • Dial 112: Emergency response systems
  • Smart city platforms and defence-grade workloads
  • Banking and financial services requiring high compliance and data protection
By leveraging TCS’s global cloud expertise and C-DAC’s research capabilities, the partnership is expected to accelerate deployment timelines and ensure robust performance across sectors.

Strategic Significance

The move comes amid growing global concerns over data privacy and dependency on foreign hyperscalers. India’s sovereign cloud is seen as a cornerstone of the Digital India mission, reinforcing national cybersecurity and enabling interoperable, cost-effective cloud services for government agencies.

Industry analysts view this as a model for other nations seeking to balance innovation with sovereignty. With this MoU, India signals its intent to lead in ethical, secure, and inclusive cloud infrastructure development.

Google and Reliance Unveil Dedicated Cloud Region in Jamnagar to Power India’s AI Future

In a landmark announcement at Reliance Industries’ 48th Annual General Meeting, Google CEO Sundar Pichai revealed the launch of a dedicated Google Cloud region in Jamnagar, built exclusively for Reliance. The move marks a pivotal step in India’s digital transformation, aimed at accelerating AI adoption across industries and democratizing access to advanced computing infrastructure.

Purpose-Built for AI Innovation

The Jamnagar region will host Google Cloud’s latest-generation AI hypercomputer, offering full-stack environments for generative AI development, model training, and enterprise deployment. Designed and powered by Reliance, the facility will run entirely on green energy, aligning with the company’s sustainability goals.
“This region is purpose-built to support India’s AI ambitions — from large enterprises to kirana stores,” said Sundar Pichai.
“It’s a new chapter in India’s technology journey,” added Mukesh Ambani.

Infrastructure Highlights

  • Hypercomputer Deployment: Optimized for large-scale generative models and AI-powered applications
  • Green Energy Backbone: Powered by Reliance’s renewable energy assets
  • Jio Fiber Integration: High-capacity connectivity linking Jamnagar to metros like Mumbai and Delhi
  • Secure Data Environments: Designed for enterprise-grade governance and compliance

Strategic Impact

The Jamnagar region will serve as a launchpad for AI-first services across sectors including:
  • Retail, telecom, energy, and financial services
  • Startups, SMBs, and public sector organizations
  • Developers and researchers building India-centric AI solutions
This initiative complements Reliance’s newly launched Reliance Intelligence, a wholly owned subsidiary focused on building consumer and enterprise-grade AI products.

National Significance

The announcement aligns with India’s broader push for sovereign AI infrastructure under the ₹10,370 crore IndiaAI Mission. By localizing compute power and enabling scalable AI deployment, the Jamnagar region positions India as a serious contender in the global AI race.

What’s Next

The cloud region is expected to go live in early 2026, with pilot deployments already underway in Reliance’s retail and telecom verticals. Analysts view this as a strategic convergence of infrastructure, innovation, and national ambition — one that could redefine India’s digital economy.

Tata Communications, AWS Unveil One of India’s Largest AI-Optimized Network Deployments

Tata Communications, a leading global communications technology player, in collaboration with Amazon Web Services (AWS), an Amazon.com, Inc. company, announced that the companies will enable an advanced AI-ready network in India. The strategic collaboration will establish a high-capacity, resilient long-distance network connecting three major AWS infrastructure locations to bolster generative AI adoption and cloud innovation in India.

The collaboration marks one of India’s largest-ever network deployments by Tata Communications in terms of size, scale and bandwidth. AWS has two data centre Regions in India, located in Mumbai and Hyderabad, and AWS Direct Connect and AWS Edge Network infrastructure in Chennai. The network will connect AWS infrastructure in Mumbai, Hyderabad, and Chennai through a comprehensive, national long-haul network, creating a powerful infrastructure backbone for AI and machine learning (ML) workloads across India.

Key highlights of the partnership:
  • Next-Generation Network Connectivity: Leverage Tata Communications’ state-of-the-art network to provide the high-bandwidth, low-latency connections essential for AI workloads. AWS will continue to deploy its custom network technologies on this network, enabling industry-leading security, availability, and performance between AWS locations.
  • Enablement of AI-Powered Applications: Further enable businesses across India to build, train, and deploy scalable AI applications, fostering innovation in sectors like healthcare, finance, and education.
  • Commitment to Security and Compliance: Ensure robust security measures and adhere to regulatory standards to protect data integrity and privacy.
The new network will help provide the leading network performance and scalability that are critical for next-generation AI applications. By leveraging Tata Communications’ state-of-the-art network, AWS will further empower Indian businesses to develop generative AI applications and train AI models with unprecedented speed and efficiency. The network will feature express routes with ultra-low latency, helping ensure the seamless data transfer and processing capabilities essential for compute-intensive AI and ML workloads.

“This association marks our largest-ever National Long-Distance program and showcases Tata Communications’ unparalleled capability to support large-capacity, complex projects requiring scaled network solutions,” said Genius Wong, Executive Vice President, Core and Next-Gen Connectivity Services and Chief Technology Officer, Tata Communications. “AI is transforming industries globally, and our collaboration with AWS positions us at the forefront of this revolution in India. Together, we’re enabling a network that not only meets current demands but anticipates the needs of tomorrow. By building a tailored network solution, we’re ushering in an AI era in India, reinforcing our position as the long-term partner of choice for global technology leaders.”

“We are excited to work with Tata Communications to establish an advanced in-country network in India,” said Jesse Dougherty, Vice President for Network Edge Services at Amazon Web Services. “The infrastructure is designed to support the most data-intensive workloads, like 5G, generative AI, and high-performance computing. This collaboration with Tata Communications will further enable our customers in India to innovate at scale with cloud and generative AI, and drive growth in India’s rapidly expanding digital economy.”

L&T-Cloudfiniti Forms Strategic Partnerships with 3 Leading AI Startups

To drive innovation in healthcare, life sciences, and vertical AI solutions in India and across the globe

L&T-Cloudfiniti, a leading technology solutions provider, is proud to announce new strategic partnerships with three leading AI startups, including one based in Europe.

The collaborations will focus on groundbreaking developments in healthcare, life sciences, vertical AI, and conversational technologies in India and across the globe by harnessing cutting-edge AI models to transform key industries and drive digital innovation in multiple sectors.

The three partnerships L&T-Cloudfiniti has entered into are:
  • Hanooman AI (Healthcare & Life Sciences): L&T-Cloudfiniti has partnered with Hanooman AI, a pioneering AI startup in the healthcare and life sciences space. This partnership will leverage Hanooman’s advanced AI-powered tools to accelerate healthcare transformation in India. By integrating AI-driven insights into healthcare practices, Hanooman is poised to improve patient outcomes, optimise treatment pathways, and advance medical research in life sciences.
  • CoRover (Conversational & Attentive AI): L&T-Cloudfiniti has also teamed up with CoRover, an AI-driven startup focused on creating conversational AI and foundational models (like BharatGPT). CoRover’s innovative solutions offer the ability to enhance user experiences with more natural, human-like conversations across various sectors, including customer service, education, and more. With this collaboration, L&T-Cloudfiniti aims to bring real-time, personalised communication and AI-enabled attentiveness to the forefront of businesses in India.
  • Pidima AI (Agentic AI for Regulated Industries): The third partnership is with Pidima, a UK-based startup revolutionising mission-critical industries with its Agentic AI platform. By automating test specification and compliance documentation, Pidima delivers 10x faster outcomes, reduces costs by millions, and elevates efficiency to extraordinary heights. Pidima’s solutions are designed for regulated sectors such as healthcare, Medtech, automotive, and aerospace, where precision and compliance are non-negotiable. The collaboration will significantly enhance L&T-Cloudfiniti’s AI offerings in these critical domains, paving the way for smarter, more efficient, and highly compliant operations.
Commenting on the development, Ms Seema Ambastha, Chief Executive, L&T-Cloudfiniti, said: “These collaborations reflect our commitment to driving AI adoption across industries, from healthcare to aerospace, by partnering with the brightest minds and the most innovative companies in the AI landscape. The collective expertise and disruptive technologies from these startups will play a crucial role in shaping the future of AI and will enable L&T-Cloudfiniti to provide cutting-edge solutions that deliver tangible business outcomes for clients globally.”

Vishnuvardhan Pogunulu Srinivasulu, CEO & Founder, Hanooman AI, commented: “Partnering with L&T-Cloudfiniti, Hanooman AI pioneers generative healthcare solutions - scalable, secure, and globally compliant. With Cipher AI, we're reimagining care for Bharat, making it accessible while advancing precision medicine for the world, sparking a revolution in global health outcomes. From reversing diabetes to discovering new drugs to deciphering genomics, the future of healthcare is intelligent, inclusive, and is here.”

Ankush Sabharwal, Founder & CEO, CoRover AI, added: “Our collaboration with L&T-Cloudfiniti allows us to rapidly scale our conversational AI solutions on secure, high-performance GPU infrastructure, reaching global enterprises effectively. Together, we aim to redefine customer interactions, drive operational excellence, and deliver exceptional business value.”

John Marcus, Founder & CEO of Pidima AI, shared: “We are thrilled to partner with L&T-Cloudfiniti, a company that shares our vision of transforming enterprise efficiency through AI. This collaboration not only strengthens our presence in India but also accelerates our mission to empower mission-critical enterprises with smarter, faster, and more precise solutions.”

IBM Teams Up with NVIDIA to Supercharge AI Development on the Cloud

IBM has announced a collaboration with NVIDIA to enhance AI capabilities at scale. This partnership focuses on integrating NVIDIA's AI Data Platform technologies with IBM's offerings, such as IBM Fusion and watsonx.

Key highlights of this collaboration include:
  • Content-Aware Storage (CAS): IBM plans to introduce CAS in its hybrid cloud infrastructure, enabling enterprises to process unstructured data more effectively for AI applications like retrieval-augmented generation (RAG) and reasoning.
  • Enhanced AI Accessibility: IBM aims to integrate its watsonx platform with NVIDIA's technologies, allowing organizations to develop and deploy AI models across various cloud environments.
  • Support for Compute-Intensive Workloads: IBM Cloud will expand its NVIDIA accelerated computing portfolio, including the availability of NVIDIA H200 Tensor Core GPU instances, designed for high-performance AI workloads.
This collaboration is expected to drive innovation in generative AI and agentic AI applications.
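To make the retrieval-augmented generation (RAG) pattern mentioned above concrete, here is a minimal, self-contained sketch of the retrieve-then-generate flow. The toy corpus, bag-of-words "embeddings", and templated "generation" step are illustrative stand-ins only, not IBM or NVIDIA APIs; a production system would use a real embedding model, a vector store over content-aware storage, and an LLM for the final answer.

```python
import math

# Toy document store. In a real RAG pipeline these would live in a
# vector database fed by content-aware storage over unstructured data.
CORPUS = {
    "doc1": "content aware storage indexes unstructured data for AI",
    "doc2": "hybrid cloud lets workloads span public and private clouds",
    "doc3": "GPUs accelerate model training and inference",
}

def embed(text):
    """Bag-of-words vector (word -> count), a stand-in for a real embedding."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query and return the top k ids."""
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(CORPUS[d])), reverse=True)
    return ranked[:k]

def answer(query):
    # "Generation" here just templates the retrieved context into the
    # response; a real system would feed it to an LLM as grounding.
    context = " ".join(CORPUS[d] for d in retrieve(query))
    return f"Q: {query}\nContext: {context}"

print(answer("how is unstructured data prepared for AI?"))
```

The key property RAG provides is visible even at this scale: the answer is grounded in retrieved documents rather than generated from the model's parameters alone.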

A 2024 IBM report found that more than three in four executives surveyed (77 percent) say generative AI is market-ready, up from just 36 percent in 2023. With this push to put AI into production comes an increased need for compute and data-intensive technologies. The collaboration between IBM and NVIDIA will enable IBM to provide hybrid AI solutions that take advantage of open technologies and platforms while also supporting data management, performance, security, and governance.

"IBM is focused on helping enterprises build and deploy effective AI models and scale with speed," said Hillery Hunter, CTO and General Manager of Innovation, IBM Infrastructure. "Together, IBM and NVIDIA are collaborating to create and offer the solutions, services and technology to unlock, accelerate, and protect data – ultimately helping clients overcome AI's hidden costs and technical hurdles to monetize AI and drive real business outcomes."

"AI agents need to rapidly access, fetch and process data at scale, and today, these steps occur in separate silos," said Rob Davis, vice president, Storage Networking Technology, NVIDIA. "The integration of IBM's content-aware storage with NVIDIA AI orchestrates data and compute across an optimized network fabric to overcome silos with an intelligent, scalable system that drives near real-time inference for responsive AI reasoning."

To learn more about IBM's presence at GTC, please visit https://www.nvidia.com/gtc/session-catalog/?search.suggestedaudiencelevel=1732117107498003nOoA&search=ibm#/

Red Hat Enhances Security and Virtualization Experience with Latest Version of Red Hat OpenShift

Red Hat, Inc., the world's leading provider of open source solutions, today announced the general availability of Red Hat OpenShift 4.18, the latest version of the industry’s leading hybrid cloud application platform powered by Kubernetes. Red Hat OpenShift 4.18 introduces new features and capabilities designed to streamline operations and security across IT environments and deliver greater consistency to all applications, from cloud-native and AI-enabled to virtualized and traditional.

According to the Gartner® press release Top Trends Impacting Infrastructure and Operations for 2025, revirtualization/devirtualization is one of the top trends facing organizations for 2025. As shifts in the virtualization market require organizations to reevaluate their virtualized infrastructure and strategies, for many it is an opportunity to implement technologies that will both deliver on their current IT requirements as well as help them meet the needs of tomorrow. The latest enhancements to Red Hat OpenShift are designed to simplify the management of virtual machines and containers while providing organizations with a common infrastructure to bring their generative AI (gen AI) plans to life.

Enhanced virtualization experience

Red Hat OpenShift 4.18 introduces new virtualization enhancements that improve networking, simplify storage migration, and streamline VM management. These updates reduce operational complexity, enhance flexibility and improve resource efficiency, making it easier to manage and adapt virtualized environments as needs evolve.

VM-friendly networking provides support for common VM networking use cases with the general availability of user-defined networks, making it easier for users to get their virtualization platform up and running. Also available with OpenShift on AWS and Red Hat OpenShift Service on AWS, this allows users to have similar networking capabilities for secondary networks on AWS as they do on-premises, allowing for more hybrid cloud flexibility.

VM storage migration, available as a technology preview, now includes additional enhancements that allow for non-disruptive movement of data between storage devices and storage classes while a VM is running, enabling users to be more agile as storage needs change.

Tree-view navigation, available as a technology preview, lets users logically group VMs into folders for more granular organization. Logical grouping, also available as a technology preview, gives users a quicker, single-click way to navigate between VMs.

Red Hat OpenShift 4.18 also enhances user-defined networks with Border Gateway Protocol (BGP), which improves segmentation and supports advanced use cases like VM static IP assignment, live migration and stronger multi-tenancy.

Extending choice for greater hybrid cloud innovation

Red Hat OpenShift 4.18 expands support to additional public cloud providers, providing users with increased flexibility for how and where they choose to run their workloads. Red Hat OpenShift now supports bare-metal deployments on Google Cloud and Oracle Cloud Infrastructure. Additionally, for users looking for virtualization in the public cloud, Red Hat OpenShift Virtualization is now available on Oracle Cloud Infrastructure as a technology preview.

Simplified operations for security

Red Hat OpenShift 4.18 introduces new security features designed to help drive more resilient operations while decreasing potential risks. The Secret Store Container Storage Interface (CSI) driver is now generally available and provides users with a vendor-agnostic solution for managing credentials and sensitive information for applications. Workloads on Red Hat OpenShift can access external secrets managers without storing secrets on the cluster, enhancing overall security hygiene and simplifying credential management; clusters remain unaware of secrets, further reducing risk. Additionally, the Secret Store CSI driver enhances complementary solutions, such as OpenShift GitOps and OpenShift Pipelines, by enabling them to consume secrets from an external secrets manager in a more secure way.
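As a sketch of how this pattern is typically wired up with the upstream Kubernetes Secrets Store CSI driver (the Vault provider, names, image, and paths below are hypothetical; see Red Hat's documentation for the OpenShift-specific steps):

```yaml
# A SecretProviderClass points the CSI driver at an external manager.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-credentials
  namespace: demo
spec:
  provider: vault                 # external secrets manager plugin
  parameters:
    roleName: demo-app
    objects: |
      - objectName: db-password
        secretPath: secret/data/demo/db
        secretKey: password
---
# The workload mounts the secret as a volume; nothing lands in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: demo
spec:
  containers:
    - name: app
      image: registry.example.com/demo-app:latest   # illustrative image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-credentials
```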

Availability

Red Hat OpenShift 4.18 is now generally available. More information, including how to upgrade to the latest version, is available here.

Mike Barrett, vice president and general manager, Hybrid Cloud Platforms, Red Hat
Many organizations have reached an inflection point with their virtualized infrastructure, needing to make decisions quickly on their future direction. Red Hat OpenShift meets today’s virtualization needs and offers a simplified pathway to migration, but also enables organizations to keep an eye on the future via application modernization. With Red Hat OpenShift, organizations are able to protect their traditional investments while adopting a platform that enables them to seamlessly transition to an AI future.

Additional Resources


Learn more about Red Hat OpenShift 4.18

Read the blog: What's new for developers in Red Hat OpenShift 4.18

About Red Hat, Inc.

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

L&T-Cloudfiniti Onboards Its First Major Customer

L&T-Cloudfiniti, the Data Centre business initiative of Larsen & Toubro, has onboarded its first major customer at the state-of-the-art hyperscale data centre located in Sriperumbudur near Chennai.

Cloudfiniti’s Sriperumbudur data centre has a built-in capacity of 30 MW, of which 12 MW of colocation-ready capacity is live across two floors. The client, a leading cloud service provider, has rented as much as 6 MW of IT load capacity, consisting of high-density racks spread over an entire floor, along with bulk bandwidth, marking a major customer win for Cloudfiniti at the very start.

The contract tenure of 10 years underscores the trust the client has reposed in Cloudfiniti’s high-tech capabilities, cutting-edge infrastructure, and the strategic location of the data centre.

“This deal marks the beginning of many such collaborations and acts as a testament to our commitment to delivering world-class colocation and cloud services to businesses across the spectrum. In the days to come, we are confident of redefining India’s data centre landscape with our fast, scalable and reliable solutions,” said Seema Ambastha, Chief Executive – L&T-Cloudfiniti.

KPMG Invests $100 Mn in Its Alliance With Google Cloud

KPMG has announced a $100 million investment in its alliance with Google Cloud to accelerate the adoption of generative AI, data analytics, and cybersecurity among Fortune 500 companies and global enterprises.

This expanded partnership is expected to drive $1 billion in incremental growth for KPMG.

The investment will focus on developing new solutions to help clients solve complex business challenges, with an initial emphasis on data modernization and responsible AI adoption in sectors like consumer and retail, healthcare, and financial services.

KPMG and Google Cloud announced their alliance in April 2024 when KPMG established a Google Cloud Center of Excellence (CoE) to align its product development, industry expertise, and technical resources for enterprises. 

KPMG said that bookings for KPMG's Google Cloud practice have increased tenfold over the past two years, reflecting the growing demand for cloud and AI solutions.

The alliance will bring Vertex AI and Gemini models to financial services clients, helping automate processes like fraud detection, financial crime detection, and commercial lending.

Besides this alliance with Google, KPMG has also formed alliances with other major cloud service providers to enhance its AI and digital solutions offerings. For instance, in July 2023, KPMG announced a $2 billion commitment over five years to expand its AI and cloud services through a partnership with Microsoft. This collaboration aims to integrate Microsoft's AI tools, such as Azure OpenAI Service and Azure AI Search, into KPMG's proprietary generative AI tool.

Additionally, other Big Four accounting firms like PwC, Deloitte, and EY have also built partnerships with various tech giants, including Google Cloud, Microsoft, and Amazon Web Services (AWS), to leverage AI and other digital solutions for their clients.

L&T To Acquire 21% Stake in E2E Networks for ₹1,407 Crore

Larsen & Toubro (L&T) has announced it will acquire a 21% stake in E2E Networks for ₹1,407 crore. This strategic move aims to bolster L&T's presence in the rapidly expanding cloud and AI sectors.

The acquisition will be completed in two parts: L&T will invest ₹1,079.27 crore for a 15% stake through a preferential allotment and ₹327.75 crore for an additional 6% stake through a secondary acquisition.

Post-acquisition, L&T will have the right to nominate up to two directors on E2E Networks' board, ensuring they have a say in the company's strategic direction.

Alongside the acquisition, L&T will enter into a software license agreement, reseller agreement, and colocation agreement with E2E Networks.

This partnership is expected to accelerate digital transformation across various industries in India by integrating E2E Networks' cloud and AI cloud platform with L&T's expertise in data center management and cloud solutions.

E2E Networks' shares surged by 5% following the announcement, reflecting positive market sentiment about the deal.

Post-acquisition, E2E Networks' promoters, Tarun Dua and Srishti Baweja, will still hold a significant stake in the company.

This collaboration is expected to foster a technology-driven, sustainable future for India by promoting the adoption of GenAI solutions and enhancing cloud services.

Infosys To Rake In $100 Mn from Coca-Cola's $1.1 Bn Cloud Migration Deal with Microsoft

Infosys is set to rake in over $100 million as a key supporting partner in Coca-Cola's $1.1 billion cloud migration deal with Microsoft. This partnership, announced in April, involves Coca-Cola migrating its operations to Microsoft's Azure cloud platform, with Infosys playing a significant role in the process.

Coca-Cola’s $1.1 billion cloud migration deal with Microsoft is a five-year strategic partnership aimed at accelerating Coca-Cola’s adoption of cloud and generative AI technologies. The collaboration will leverage Microsoft’s Azure cloud platform and its generative AI capabilities.

This deal highlights the growing importance of cloud and AI technologies for global enterprises and underscores the strategic role Indian IT service providers like Infosys play in these major technology transformations.

Coca-Cola plans to migrate its applications and workloads to Microsoft Azure. This includes exploring innovative AI use cases across various business functions, such as marketing, manufacturing, and supply chain management. The partnership will focus on developing and testing new AI-powered solutions, including the Azure OpenAI Service and Copilot for Microsoft 365. These technologies are expected to enhance workplace productivity, streamline operations, and foster innovation.

Coca-Cola's European bottling partner, Coca-Cola Europacific Partners PLC, disclosed in a recent filing with the US Securities and Exchange Commission (SEC) that it has committed €25 million (approximately $27 million) to Infosys for cloud migration services in the Europacific region.

Given Infosys' involvement in this and potentially other geographies, industry experts estimate the company could surpass the $100 million mark from the broader Coca-Cola–Microsoft deal.

Coca-Cola’s migration to the Azure cloud will involve its core operations and major independent bottling partners worldwide. This move is part of a broader effort to align Coca-Cola’s technology strategy with cutting-edge innovations.

Infosys is a key supporting partner in this deal, earning over $100 million for its role in the migration process. Infosys will assist with cloud strategy, migration execution, application modernization, security, and ongoing support.

Post-migration, Infosys will provide ongoing support and optimization services to ensure that Coca-Cola’s cloud environment remains efficient, secure, and cost-effective.

AMD Acquires ZT Systems for $4.9 Billion To Expand Its Data Center AI Capabilities

On Monday, AMD announced that it has signed a definitive agreement to acquire ZT Systems, a leading provider of AI infrastructure for the world’s largest hyperscale computing companies, in a cash and stock transaction valued at $4.9 billion, inclusive of a contingent payment of up to $400 million based on certain post-closing milestones.

AMD expects the transaction to be accretive on a non-GAAP basis by the end of 2025.

This acquisition is expected to significantly enhance AMD's capabilities in the data center and AI markets. By integrating ZT Systems' expertise in custom server solutions, AMD aims to provide more comprehensive and innovative solutions to meet the growing demands of cloud and AI infrastructure.

This acquisition also positions AMD to better compete with other major players in the industry, such as Intel and NVIDIA, by expanding its market reach and strengthening its product offerings.

Upon completion of the acquisition, ZT Systems will join the AMD Data Center Solutions Business Group. ZT CEO Frank Zhang will lead the manufacturing business and ZT President Doug Huang will lead the design and customer enablement teams, both reporting to AMD Executive Vice President and General Manager Forrest Norrod.

ZT Systems is a prominent provider of server solutions, specializing in creating cloud-enabling server infrastructure for leading cloud and telecom providers. They design, manufacture, and deploy custom solutions that balance cost, capability, and creativity to meet complex server needs.

Founded in 1994 by Frank Zhang and based in New Jersey, ZT Systems focuses on hyperscale cloud computing, cloud storage, artificial intelligence, and machine-to-machine transactions.

Initially, ZT Systems focused on providing custom server solutions for various industries. Their emphasis on quality and customization helped them build a strong reputation. Over the years, ZT Systems expanded its capabilities to address the needs of hyperscale cloud computing, AI, and telecom providers. The company leveraged its engineering expertise and global manufacturing capabilities to deliver high-performance, cost-effective solutions. Headquartered in Secaucus, New Jersey, the company continuously innovated, adapting to the rapidly changing technology landscape. It focused on developing solutions for complex compute, storage, and accelerator needs, ensuring they stayed ahead of industry trends.

Besides, ZT Systems formed strong partnerships with leading technology suppliers like NVIDIA and Intel, enhancing their ability to provide cutting-edge server solutions.

Last year in October, ZT Systems announced the acquisition of a new manufacturing site in the Greater Austin, Texas area. This facility is expected to bolster their production capabilities and support the growing demand for their advanced server solutions.

Accenture Estimates That AWS Can Help Indian Organisations Reduce Associated Carbon Emissions by Up to 99% Compared to On-Premises

Accenture Estimates That AWS’s Global Infrastructure is Up to 4.1 Times More Efficient Than On-Premises

AWS can help Indian organisations reduce carbon emissions of AI workloads

New study estimates workloads optimised on Amazon Web Services (AWS) can help organisations in India reduce associated carbon emissions by up to 99% compared to on-premises

A new study commissioned by Amazon Web Services (AWS) and completed by Accenture shows that an effective way to minimise the environmental footprint of leveraging Artificial Intelligence (AI) is by moving IT workloads from on-premises infrastructure to AWS cloud data centres in India and around the globe. Accenture estimates that AWS’s global infrastructure is up to 4.1 times more efficient than on-premises. For Indian organisations, the total potential carbon reduction opportunity for AI workloads optimised on AWS is up to 99% compared to on-premises data centres.

The research states that simply utilising AWS data centres for compute-heavy, or AI, workloads in India yields a 98% reduction in carbon emissions compared to on-premises data centres. This is credited to AWS’s utilisation of more efficient hardware (32%), improvements in power and cooling efficiency (35%), and additional carbon-free energy procurement (31%). Further optimising on AWS by leveraging purpose-built silicon can increase the total carbon reduction potential of AI workloads to up to 99% for Indian organisations that migrate to and optimise on AWS.
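As a quick arithmetic sketch of what those figures imply, using only the percentages quoted above, note that moving from a 98% to a 99% total reduction halves the residual emissions:

```python
# Contributions the study credits for the 98% reduction on migration
# (compute-heavy AI workloads in India), in percentage points.
migration_factors = {
    "more efficient hardware": 32,
    "power and cooling efficiency": 35,
    "carbon-free energy procurement": 31,
}
assert sum(migration_factors.values()) == 98  # the headline migration figure

# Index on-premises emissions at 100 units and apply the reductions.
baseline = 100.0
residual_migrated = baseline * (1 - 0.98)    # emissions left after migration
residual_optimised = baseline * (1 - 0.99)   # left after silicon optimisation

# The extra percentage point from purpose-built silicon halves what remains.
print(residual_migrated / residual_optimised)  # 2.0
```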

“Considering 85% of global IT spend by organisations remains on-premises, a carbon reduction of up to 99% for AI workloads optimised on AWS in India is a meaningful sustainability opportunity for Indian organisations,” said Jenna Leiner, Head of Environment Social Governance (ESG) and External Engagement, AWS Global. “As India accelerates towards its US$1 trillion digital opportunity and encourages investments into digital infrastructure, sustainability innovations and minimising IT-related carbon emissions will be critical in helping India meet its goal of net-zero emissions by 2070. This is particularly important given the rising adoption of AI. AWS is constantly innovating for sustainability across our data centres, optimising our data centre design, investing in purpose-built chips, and innovating with new cooling technologies, so that we continuously increase energy efficiency to serve customer compute demands.”

“This research shows that AWS's focus on hardware and cooling efficiency, carbon-free energy, purpose-built silicon, and optimized storage can help organizations reduce the carbon footprint of AI and machine learning workloads,” said Sanjay Podder, global lead for Technology Sustainability Innovation at Accenture. “As the demand for AI continues to grow, sustainability through technology can play a crucial role in helping businesses meet environmental goals while driving innovation.”

Sustainable chip technology innovation – purpose-built silicon

One of the most visible ways AWS is innovating for energy efficiency is through the company’s investment in AWS chips. Launched in 2018, the custom AWS-engineered general-purpose processor, Graviton, was the first of its kind to be deployed at scale by a major cloud provider. The latest Graviton4 offers four times the performance of the original Graviton, and while Graviton3 uses up to 60% less energy for the same performance as comparable Amazon EC2 instances (where the compute happens in a data centre), Graviton4 is even more energy efficient.

AWS customers are also benefiting from the carbon reduction potential of Graviton. Paytm, India’s leading payments and financial services distribution platform, witnessed a reduction in workload carbon intensity by adopting Graviton processors, reporting up to 47% estimated decrease in carbon emissions per transaction. Similarly, IBS Software, a leading SaaS solutions provider to the global travel industry, reported that other than improving performance and reducing cost by adopting Graviton processors, the company saw a 40% reduction in carbon emissions per instance hour.

Running generative AI applications in a more sustainable way requires innovation at the silicon level with energy-efficient hardware. To optimise performance and energy consumption, AWS developed purpose-built silicon like the AWS Trainium and AWS Inferentia chips to achieve significantly higher throughput than comparable accelerated compute instances. AWS Trainium cuts the time taken to train generative AI models, in some cases from months to hours. This means building new models requires less money and power, with energy-consumption reductions of up to 29%, nearly one third. AWS Inferentia is AWS’s most power-efficient machine learning inference chip. The AWS Inferentia2 machine learning accelerator delivers up to 50% higher performance per watt and can reduce costs by up to 40% compared to similar instances. These purpose-built accelerators enable AWS to efficiently execute AI models at scale, translating to a reduced infrastructure footprint for similar workloads and enhanced performance per watt of power consumption.

Improving energy efficiency across AWS infrastructure

Through innovations in engineering, from electrical distribution to cooling techniques, AWS’s infrastructure is able to operate closer to peak energy efficiency. AWS optimises resource utilisation to minimise idle capacity and continuously improves the efficiency of its infrastructure. For example, AWS removed the large central Uninterruptible Power Supply (UPS) from its data centre design and instead uses small battery packs and custom power supplies integrated into every rack, which has improved power efficiency and further increased availability. Every time power is converted from one voltage to another, or from AC to DC and vice versa, some power is lost in the process. By eliminating the central UPS, AWS is able to reduce these conversions. Additionally, AWS has optimised rack power supplies to reduce energy loss in that final conversion. Combined, these changes reduce energy conversion loss by about 35%.
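To illustrate why fewer conversion stages matter, here is a hedged sketch with assumed numbers; the 97% per-stage efficiency and the stage counts are illustrative, not AWS's published figures:

```python
def delivered_power(watts, stage_efficiencies):
    """Power remaining after a chain of AC/DC or voltage conversion stages."""
    for eff in stage_efficiencies:
        watts *= eff  # each stage passes on only a fraction of its input
    return watts

# Hypothetical designs: a central-UPS path with four conversion stages
# versus an in-rack battery path with three, each stage 97% efficient.
supply = 1000.0  # watts entering the power chain
loss_central_ups = supply - delivered_power(supply, [0.97] * 4)
loss_in_rack = supply - delivered_power(supply, [0.97] * 3)

# Dropping a stage cuts the conversion loss; the exact saving depends on
# the real stage count and efficiencies of each design.
print(f"{loss_central_ups:.1f} W vs {loss_in_rack:.1f} W lost")
```

Because losses compound multiplicatively through the chain, removing even one conversion stage recovers a meaningful share of the wasted power.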

Beyond the power consumed by AWS’s server equipment itself, cooling is one of the largest sources of energy use in AWS data centres. To increase efficiency, AWS uses different cooling techniques, including free-air cooling depending on the location and time of year, as well as real-time data to adapt to weather conditions. Implementing these innovative cooling strategies is more challenging on a smaller scale at a typical on-premises data centre. AWS’s latest data centre design seamlessly integrates optimised air-cooling solutions alongside liquid cooling capabilities for the most powerful AI chipsets, like the NVIDIA Grace Blackwell Superchips. This flexible, multimodal cooling design allows AWS to extract maximum performance and efficiency whether running traditional workloads or AI models.

According to the study, AWS’s additional carbon-free energy procurement in India contributes 31% in carbon emissions reduction for compute-heavy workloads and 44% for storage-heavy workloads. Aligning with Amazon's commitment to achieving net-zero carbon emissions across all operations by 2040, AWS is rapidly transitioning its global infrastructure to match electricity use with 100% carbon-free energy. Amazon met its 100% renewable energy goal seven years ahead of schedule. In India, 100% of the electricity consumed by AWS data centres was matched with renewable energy sources procured in-country in 2022 and 2023. This is due to Amazon’s investment in 50 renewable energy projects in India with an estimated 1.1 gigawatts of renewable energy capacity, enough to power more than 1.1 million homes in New Delhi each year.

Sirius Digitech, a JV of Adani and Sirius, Acquires Noida-based Coredge.io

Sirius Digitech, a joint venture between the Adani Group and Sirius International Holding, has acquired Noida-based Coredge.io Private Limited, Adani Enterprises announced on Wednesday. Coredge.io is a cutting-edge sovereign AI and cloud platform company that offers secure and compliant cloud services for AI applications, ensuring data sovereignty. Coredge.io's services are available across Japan, Singapore, and India.

The acquisition will enable Sirius Digitech to provide cloud services that empower organizations to leverage sovereign cloud innovations while retaining sensitive data within national borders. Coredge.io's expertise positions it as a leader in the field of sovereign cloud technology, and this move aligns with the growing demand for computation and sovereign data stack driven by artificial intelligence.

Founded as a bootstrapped company in 2020 by Arif Khan, a JNTU graduate, Coredge.io has quickly expanded its client base across geographies like Japan, Singapore and India. Coredge aims to capitalize on the trillion-dollar global opportunity for sovereign cloud. Its expertise in accelerating hyperlocal cloud service providers with stringent data sovereignty and compliance measures has positioned it as a leader in the field.

“Partnering with Sirius marks an exciting new chapter for our sovereign AI and cloud platform business, both in India and globally,” said Arif Khan, CEO of Coredge.io. “Together, we can accelerate the development and delivery of advanced AI services while upholding security, privacy and digital sovereignty principles, helping customers across the globe drive technological transformation while complying with their data ethics principles.”

Arif Khan, Founder & CEO of Coredge.io, is also the founder of ParserLab and co-founded VoerEir. Previously, he served as Chief Enterprise Architect at Ericsson.

Coredge aims to build a complete solution stack for sovereign data centers, spanning everything from bare-metal servers to services such as Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) built on open-source technologies. This will enable Sirius Digitech to provide Machine Learning as a Service (MLaaS) as applications are built on its infrastructure.

Tata Communications Advances Collab with NVIDIA for AI Infrastructure Benefiting Startups, Researchers

Tata Communications is advancing its collaboration with NVIDIA, focusing on building an AI supercomputer as part of their deal. They are increasing their AI compute spending to support this initiative. The AI Cloud, which is expected to launch later this year, will provide infrastructure-as-a-service and platform services for AI applications, benefiting startups and researchers, reported Economic Times.

Tata Communications' managing director and chief executive, A.S. Lakshminarayanan, told Economic Times that the Tatas will partake in the Centre's AI Mission to offer AI services to startups and researchers. "We will offer infrastructure-as-a-service and the platform on top for customers to do the data pipeline, manage that properly, do the version management of the models, so that our customers can consume cloud capacity very easily at one of the best price-performance points, and deliver AI at national scale."

The partnership aims to democratize access to AI infrastructure and accelerate the development of AI solutions. It's a significant move that will likely have a profound impact on AI-led transformations across various sectors. The collaboration is part of Tata Group's broader engagement in the Centre's AI Mission to offer AI services at a national scale.

This partnership aims to build a state-of-the-art AI supercomputer, which will be powered by NVIDIA's next-generation GH200 Grace Hopper Superchip. The collaboration is set to provide Infrastructure-as-a-Service and a platform for AI services, catering to a wide range of organizations, businesses, AI researchers, and startups in India.

The AI supercomputer is expected to deliver best-in-class performance and will play a crucial role in supporting the exponential demand for generative AI, especially from startups and those processing large language models. Tata Communications' robust global network, combined with the AI cloud, will enable high-speed data transfer, effectively bringing the AI cloud to the doorstep of every enterprise.

Moreover, Tata Consultancy Services (TCS) will utilize this AI infrastructure to build and process generative AI applications and upskill its massive workforce, leveraging the partnership to drive an AI-first approach across various sectors. This strategic move is anticipated to catalyze AI-led transformation across the Tata Group's range of companies, from manufacturing to consumer businesses.

This collaboration is a significant step towards democratizing access to AI infrastructure and accelerating the build-out of AI solutions, as well as upgrading AI talent at scale in India.

Cisco Introduces Hypershield, A Groundbreaking AI-native Security for Data Centers and Cloud

Cisco has introduced what it describes as the most consequential security product in its history: Cisco Hypershield. It’s a cloud-native, AI-powered approach to highly distributed security for AI-scale data centers that’s built into the fabric of the network.

Cisco Hypershield combines hyperscaler technology with an AI-first approach, tipping the scales in favor of defenders.

By partnering with NVIDIA, Cisco co-creates security-specific AI models, optimizing its security products for NVIDIA's technology.

The Hypershield security architecture reimagines security with an AI-native approach, bringing the speed and power of hyperscaler technology to the enterprise.

Hypershield is built on open source eBPF, the default mechanism for connecting and protecting cloud-native workloads in the hyperscale cloud. Cisco acquired the leading provider of eBPF for enterprises, Isovalent, earlier this month. Here are some key points about Cisco Hypershield:

1. AI-Native Solution: For the first time, Cisco combines hyperscaler technology with an AI-native solution to address today's security challenges. Hypershield is designed to defend modern, AI-scale data centers.

2. Distributed Architecture: Hypershield integrates network and workload enforcement points under a unified management system. It extends protection seamlessly from traditional infrastructures to the cloud, ensuring robust and scalable security.

3. Key Benefits
  • Hyper-Distributed Security: Hypershield reaches all areas of your network, tapping into previously unreachable workload and network enforcement points.
  • Rapid Exploit Protection: It blocks application exploits within minutes, employing surgical compensating controls evaluated against live production traffic.
  • Adaptive Segmentation: Hypershield continuously adapts and learns, applying highly specific controls tailored to your security needs.
  • Self-Managed Updates: Unified management across the network and workloads allows safe testing on live traffic without risking operations.
4. Distributed Exploit Protection Module: Hypershield's module automates the entire process of detecting, prioritizing, evaluating, and deploying controls against vulnerabilities. It drastically reduces the time to protect against new vulnerabilities, ensuring smooth application operation.

In summary, Cisco Hypershield represents a significant milestone in cybersecurity, enhancing the security of AI-scale data centers and providing a new level of protection for modern applications and dynamic compute environments. If you're interested in staying informed about Hypershield, you can sign up for updates on its capabilities, demos, and more.

Everything About NVIDIA-Yotta Deal – India's Biggest AI Deal Ever

In December 2022, Yotta Data Services, an Indian data center startup, announced a multi-million-dollar partnership with Nvidia to provide high-performance computing capabilities from data centers in India. The partnership covers an order of 16,000 Nvidia chips, and Yotta plans to double its Nvidia AI chip order to $1 billion.

The partnership is considered India's biggest AI bet yet. The existing plans are worth $500 million and will include the purchase of Nvidia GPUs for Yotta's data centers.

Mumbai-headquartered Yotta Data Services designs, builds, and operates data center parks in India. It offers services such as colocation, connectivity, cloud services, security, and managed IT. Yotta has data centers in Mumbai and Noida, and plans to expand to more locations in India and overseas. 

Yotta is aiming to offer high-performance computing capabilities from data centers in India. The company's AI chips will enable computer vision, speech, natural language processing (NLP), generative AI, recommender systems, and more.

In January this year, the company's CEO, Sunil Gupta, told Reuters that its latest AI purchase will be worth $500 million, and consist of 16,000 H100 and GH200 GPUs by March 2025. By the end of 2025, Yotta aims to expand its GPU inventory to 32,768 units. 

This collaboration involves Yotta Data Services investing in NVIDIA's GPUs, specifically the H100 Tensor Core GPUs, to bolster high-performance computing from their data centers in India. The deal is part of a broader initiative to support the development of AI services for corporations, startups, and researchers within the country.

The partnership is noteworthy for its scale, with Yotta Data Services reportedly planning to increase its NVIDIA GPU inventory to about 20,000 by June 2024. These GPUs, known as Hoppers, are valued at between $30,000 and $40,000 each and are integral to training large language models and building applications akin to OpenAI's ChatGPT and Microsoft GitHub Copilot.

This move by Yotta Data Services is seen as one of the biggest bets on AI in India, reflecting the company's ambition to be at the forefront of AI technology and infrastructure development. The total order book with NVIDIA is expected to reach $1 billion, marking a significant investment in AI cloud services.
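The figures above can be sanity-checked with some back-of-the-envelope arithmetic. The GPU counts and the $30,000-$40,000 per-unit price range come from this article; the midpoint pricing below is an illustrative assumption, not a disclosed contract figure:

```python
# Rough estimate of Yotta's NVIDIA order value from publicly reported
# GPU counts and the $30k-$40k per-unit price range cited above.

def estimate_order_value(gpu_count: int, low_price: float, high_price: float):
    """Return (low, high, midpoint) total cost estimates in US dollars."""
    low = gpu_count * low_price
    high = gpu_count * high_price
    return low, high, (low + high) / 2

# Current plans: 16,000 H100/GH200 GPUs
low, high, mid = estimate_order_value(16_000, 30_000, 40_000)
print(f"16,000 GPUs: ${low / 1e9:.2f}B-${high / 1e9:.2f}B (midpoint ${mid / 1e9:.2f}B)")

# Planned expansion to 32,768 GPUs by end-2025
low, high, mid = estimate_order_value(32_768, 30_000, 40_000)
print(f"32,768 GPUs: ${low / 1e9:.2f}B-${high / 1e9:.2f}B (midpoint ${mid / 1e9:.2f}B)")
```

The 16,000-GPU range ($0.48B-$0.64B) lines up with the reported $500 million purchase, and the 32,768-GPU range ($0.98B-$1.31B) is consistent with the expected $1 billion total order book.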

Yotta's deal with NVIDIA is part of its strategy to support India's AI transformation, providing the necessary infrastructure for corporations, startups, and researchers to develop AI services. The GPUs will be used to train large language models (LLMs) and build applications similar to OpenAI's ChatGPT and Microsoft's GitHub Copilot. It's a significant step towards making advanced AI capabilities more accessible within the country.

What is the H100 Tensor Core GPU?

The NVIDIA H100 Tensor Core GPU is a cutting-edge processor designed for accelerated computing, part of NVIDIA's Hopper architecture. It's built to deliver exceptional performance, scalability, and security for data center workloads, particularly in AI and high-performance computing (HPC). Here are some of its standout features:
  • Performance: The H100 offers a significant performance boost for accelerated computing, capable of handling exascale workloads and large language models efficiently.
  • Scalability: It can connect up to 256 H100 GPUs using the NVIDIA NVLink® Switch System, enabling the acceleration of massive workloads.
  • Transformer Engine: This specialized engine is optimized for processing trillion-parameter language models, making it ideal for advanced AI tasks.
  • Memory: Equipped with 80 GB of HBM2e memory and a 50 MB L2 cache, it supports high-bandwidth and low-latency data access.
  • Tensor Cores: The fourth-generation Tensor Cores in the H100 provide up to 4X faster training for AI models compared to the previous generation.
  • Connectivity: It supports PCIe Gen5 and NDR Quantum-2 InfiniBand networking, ensuring high-speed connections.
These capabilities make the H100 a powerhouse for AI research and enterprise AI solutions.

Last December, Yotta launched "Shakti Cloud", India's largest AI supercomputer, delivering 16 exaflops of AI computing power on NVIDIA H100 Tensor Core GPUs to drive mass-scale AI innovation in India.
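The 16-exaflop figure can be roughly reconciled with per-GPU specs. The assumptions here are not from the article: roughly 1 petaflop (~989 TFLOPS) of dense FP16 Tensor Core throughput per H100 per NVIDIA's datasheet, and a 16,384-GPU cluster size chosen to match Yotta's stated ~16,000-GPU scale:

```python
# Sanity check of Shakti Cloud's quoted ~16-exaflop AI capacity.
# Both constants are assumptions for illustration, not disclosed figures.

H100_FP16_TFLOPS = 989   # dense FP16 Tensor Core throughput per GPU (datasheet)
GPU_COUNT = 16_384       # assumed cluster size (~16,000-GPU scale)

# Convert aggregate TFLOPS to exaflops (1 EFLOPS = 1e6 TFLOPS)
total_exaflops = GPU_COUNT * H100_FP16_TFLOPS / 1e6
print(f"Aggregate AI throughput: ~{total_exaflops:.1f} exaflops")
```

Under these assumptions the aggregate comes out near 16 exaflops, consistent with the headline claim; quoting lower-precision (FP8) or sparse throughput would push the number higher still.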

Yotta has partnered with Indian business groups, including Tata and Reliance, to set up data center facilities in India. The partnership is intended to help NVIDIA develop more country-specific artificial intelligence innovations and products.

Yotta has made the GPUs available to customers on a cost-effective, usage-based model.

Yotta's Founders & Backers

Yotta Data Services was co-founded by Sunil Gupta, who serves as the Chief Executive Officer. The company was established in 2019 with the backing of real-estate billionaire Niranjan Hiranandani, and Darshan Hiranandani serves as Co-founder & Chairman. This team has been instrumental in Yotta's significant partnership with NVIDIA, aiming to bolster India's AI capabilities and infrastructure.

Yotta is part of the Hiranandani Group and operates its cloud regions at its two hyperscale data center parks in Panvel (Navi Mumbai) and Greater Noida (Delhi). Yotta has more than 800 enterprise and government customers, including multinational hyperscale cloud operators, OTTs, governments at all levels, and enterprises across various tech-enabled sectors.

Yotta Data Services has invested in R&D on new-age technologies and the onboarding of industry veterans, and has a vast pool of IT resources. It also maintains deep partnerships with OEMs and startups.
