
Cisco and NVIDIA Introduce AI‑Native 6G Wireless Stack, Redefining Cloud and Enterprise Infrastructure


Cisco and NVIDIA have announced a broad set of AI infrastructure innovations designed to accelerate adoption of artificial intelligence across cloud, enterprise, and telecom sectors. The collaboration brings together Cisco’s networking and security expertise with NVIDIA’s AI computing leadership, marking what executives described as the beginning of the “largest data center build‑out in history.”

Spectrum‑X Powered Switches

At the center of the announcement is the Cisco N9100 Series data center switch, the first NVIDIA partner‑developed switch built on NVIDIA Spectrum‑X Ethernet technology. The switch is designed to deliver high‑performance, low‑latency networking for AI workloads and will be available with both NX‑OS and SONiC operating models. Cisco said the platform will serve as a Cloud Partner‑compliant reference architecture, enabling neocloud and sovereign cloud providers to deploy AI infrastructure at scale.

Enterprise AI Security and Observability

Cisco also expanded its Secure AI Factory with NVIDIA, a framework that integrates compute, networking, security, and observability into enterprise AI deployments. The initiative aims to give organizations end‑to‑end visibility and protection as they scale AI workloads, particularly in regulated industries. New ecosystem partnerships were announced to strengthen monitoring and compliance capabilities.

Telecom and 6G Readiness

In a move aimed at telecom operators, Cisco and NVIDIA unveiled the industry’s first AI‑native wireless stack for 6G networks. The stack is designed to handle ultra‑low latency and massive device connectivity, preparing carriers for the surge in AI‑driven traffic expected over the next decade. Analysts said the development could redefine mobile networks by enabling real‑time AI services at the edge.

Strategic Context

Executives from both companies emphasized that the innovations are not standalone products but part of a joint reference architecture for next‑generation AI deployments. “We are entering a new era where AI workloads will reshape every industry,” said a Cisco spokesperson. “Our partnership with NVIDIA ensures customers have the flexibility, interoperability, and scalability to build AI infrastructure securely and globally.”

Why It Matters

  • For Cloud Providers: A unified, NVIDIA‑compliant architecture accelerates AI adoption in sovereign and neocloud environments.
  • For Enterprises: Enhanced security and observability ensure safer AI deployments.
  • For Telecoms: The AI‑native 6G stack positions operators to deliver next‑generation services.

With these announcements, Cisco and NVIDIA are positioning themselves at the heart of the global AI infrastructure race, targeting the needs of hyperscalers, enterprises, and telecom operators alike.

Adobe Unveils Firefly Foundry to Help Brands Build Custom Generative AI Models on Their IP


At Adobe MAX, the world’s largest creativity conference, Adobe (Nasdaq: ADBE) announced Adobe Firefly Foundry, which enables businesses to work directly with Adobe to create tailored generative AI models that are unique to their brand. Trained on entire catalogs of existing IP, these proprietary Adobe Firefly Foundry models are deeply tuned and can be built on top of commercially safe Adobe Firefly models. This unlocks the value of AI, helping teams scale on-brand content production, create new customer experiences and extend their IP. With Firefly as the anchor, Adobe Firefly Foundry models can support all major asset types, including image, video, audio, vector and 3D, accelerating content delivery for brand campaigns, performance marketing, media production workflows and more.

Adobe continues to be the partner of choice for businesses to confidently move from AI experimentation to value realization, delivering a unique approach to AI that is anchored in transparency, safety and creative precision. Adobe Firefly Foundry takes this a step further, providing businesses with a team of Adobe experts to collaborate on practical AI solutions and impactful use cases.

“Adobe Firefly Foundry builds on years of Adobe innovation and expertise, spanning generative AI models for image, video, audio, vector and 3D, to help businesses solve today’s most complex content and media production challenges,” said Hannah Elsakr, vice president, GenAI New Business Ventures at Adobe. “Businesses can access Adobe’s robust AI training infrastructure, research and expertise to define bespoke AI models, surfaced through Adobe solutions such as GenStudio and Creative Cloud, to help teams scale on-brand content experiences. Adobe has been working with tech-forward innovators like Walt Disney Imagineering to drive new levels of customer engagement with Adobe Firefly Foundry.”

Businesses see the potential in using generative AI to increase production of impactful content experiences and meet rising demands across digital channels. In an Adobe study, marketers anticipate content demands will grow by more than 5x over the next two years—making it a challenge to keep their brands top-of-mind with consumers. In many industries, decades of brand, product and franchise building have added complexity to quality and on-brand content production work. Teams need to ensure every new asset preserves the look and feel of their product portfolio, creative direction and design aesthetic. Adobe Firefly Foundry takes on the heavy lifting for businesses, providing a team of Adobe experts that handle AI model training, along with tools for managing and deploying their customized Adobe Firefly Foundry models.

Adobe Firefly Foundry allows businesses to quickly see value through capabilities that include:
  • Adobe Firefly Foundry models: Adobe will work with businesses to create unique generative AI models that are safely trained on their existing IP. Adobe’s comprehensive AI approach—which includes commercially safe Firefly generative AI models across image, video, audio, vector and 3D—can enable teams to generate multimodal outputs that are pixel-perfect, brand-protected and ready for external use.
  • Seamless implementation: Adobe also provides a single destination for businesses to easily manage and deploy their Adobe Firefly Foundry model. Teams will have an application to orchestrate the implementation process, including testing generated outputs and managing model access throughout their organization. Adobe Firefly Foundry models are also grounded in Adobe’s responsible AI principles, ensuring ethical deployment across business workflows.
  • Co-innovation: Embedded Adobe experts—including applied AI/ML scientists and forward-deployed engineers—will co-innovate with businesses to design and deploy high-impact use cases that drive growth. This includes creating tailored solutions for the unique needs of the organization, as well as strategic guidance on reimagining creative workflows. This will enable businesses to jointly develop impactful AI solutions with Adobe, where teams can accelerate time-to-value and deliver measurable ROI.

As part of this announcement, the team from Invoke—a generative media solution for creative production—has joined the Adobe Firefly Foundry team to help build the future of AI-powered creative workflows for businesses.

Mphasis Launches NeoIP™, a Unified AI Platform for Continuous Enterprise Transformation


  • Platform utilizes Ontology-Powered Knowledge Graph, Ontosphere, to deliver compounding value and continuous transformation.
  • Early client deployments of the platform are demonstrating up to 60% efficiency gains, 50% reduction in incident resolution time, and measurable margin improvements.

Mphasis (BSE: 526299; NSE: MPHASIS), an Information Technology (IT) solutions provider specializing in cloud and cognitive services, announced the launch of Mphasis NeoIP™, a breakthrough Artificial Intelligence (AI) platform that integrates multiple innovative Mphasis.ai solutions and is designed for continuous enterprise transformation and differentiated competitive advantage. The platform perpetually rewires core systems, turning enterprise knowledge across legacy systems, data and operations into intelligent engineering. At the core of NeoIP™ is a living, breathing layer of connected enterprise understanding that unifies data, systems, and processes to proactively optimize, modernize, and transform business and IT operations.

NeoIP™ enables organizations to continuously evolve, rather than relying on one-time transformation programs, by making enterprise knowledge machine-understandable. NeoIP™ automates complex decisions, predicts and prevents issues before they occur and drives sustained innovation. It empowers CIOs and business leaders to shift left, embedding intelligence early in the software and operations lifecycle to create self-healing, resource-efficient systems that learn and improve over time. In addition, it integrates evergreen business intelligence with AI-assisted implementation, fostering continuous learning and evolution with every subsequent initiative. The platform creates a connected, data-centric environment where AI and human teams collaborate to plan, build, and manage transformation.

"Traditional enterprise transformation often fails to deliver value and requires ongoing reinvestment to keep pace with AI's rapid evolution. NeoIP™ redefines the model. NeoIP™ brings together Mphasis-built AI solutions, partner technologies, and client assets into a single platform that supports multiple data sources, large and small language models, and computing environments. It represents the future of enterprise tech, which is intelligent by design, secure by default, and scalable by nature," said Nitin Rakesh, Chief Executive Officer and Managing Director, Mphasis.

A key component of NeoIP™ is Ontosphere, which, in collaboration with various AI Agents, constructs and sustains the platform's intelligence through dynamic knowledge graphs built on enterprise domain context. Together, these capabilities form a foundation for contextual insights, autonomous actions and intelligent orchestration. Ontosphere ensures AI-driven transformation is fast, accurate, and strategically aligned with long-term business goals.

NeoIP™ includes solutions grouped under four categories: Modernization, Application Development, ITOps, and BusinessOps. These capabilities are delivered via specialized AI Agents and frameworks:

Agent & Frameworks  | Category                                            | Function
Mphasis NeoCrux™    | Modernization, App Development                      | Orchestrates AI-driven code generation and quality automation.
Mphasis NeoZeta™    | Modernization                                       | Enables organizations to overcome modernization challenges through a comprehensive knowledge graph.
Mphasis NeoSaba™    | Application Development                             | Provides agile AI agents for quality-driven product definition and user story elaboration.
Mphasis NeoRigal™   | Governance                                          | AI Agent for planning, building and orchestrating various AI components; provides AI governance.
Mphasis AIOps       | ITOps                                               | Integrated operations capabilities, including proactive incident prediction, root-cause analysis, and self-healing automation based on deep system observability.
Mphasis NeoOrko™    | Application Development                             | Agent, model and knowledge operations management system.
Mphasis NextOps™    | Business Ops                                        | A collection of domain-specific, autonomous business operations agents.
Mphasis NeoARCHE™   | Application Development                             | Analyzes the code stream to route workloads between CPU and GPU (Autonomous Routing for Compute Heterogeneity).
Ontosphere          | Modernization, App Development, ITOps, Business Ops | Enterprise intelligence encoded with contextual knowledge based on domain ontologies.

NeoIP™ natively connects with third-party AI agents through the Model Context Protocol (MCP) and Agent2Agent (A2A) standards, expanding the agent fabric for unified, cross-enterprise orchestration.
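Concretely, MCP messages are JSON-RPC 2.0 objects. The sketch below shows the rough shape of a tool-call request that an orchestrator might send to an external agent; the tool name and arguments are hypothetical placeholders, not actual NeoIP™ identifiers.

```python
import json

# Sketch of an MCP (Model Context Protocol) tool-call request.
# MCP transports JSON-RPC 2.0 messages; "tools/call" invokes a tool
# that the remote agent has advertised. The tool name and arguments
# below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_incident",               # hypothetical tool name
        "arguments": {"incident_id": "INC-1024"},  # hypothetical arguments
    },
}

payload = json.dumps(request)  # serialized form sent over the wire
```

A conforming server would reply with a JSON-RPC response carrying the tool's result, keyed to the same `id`.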

Early client engagements have delivered measurable impact across multiple dimensions:

  • Up to 60% improvement in development and modernization efficiency through AI-led code generation, testing, and optimization
  • 50% reduction in Mean Time to Detect (MTTD) and resolve IT incidents through improved observability and automation
  • 3–5 hours of predictive early warning for major outages using AI-based anomaly detection
  • Sustained cost and margin improvements through a Savings-led Transformation approach that builds continuous value
  • A cloud-ready, composable, business capability-driven platform with real-time event-driven architecture

Mphasis has also transformed its AppStore into an AI-powered marketplace, enabling employees to easily discover, download, and use NeoIP™ solutions. With intelligent search and curated categories, the platform streamlines access to tools that drive innovation and efficiency across the organization.

NeoIP™ further sets the stage for Mphasis’ next phase of innovation under the Mphasis Quantum initiative, which will introduce new solutions using quantum computing and AI to further enhance enterprise decision-making and transformation.

About Mphasis 
At Mphasis, engineering has been in our DNA since inception.

Mphasis is an AI-led, platform-driven company with human-in-the-loop intelligence, helping global enterprises modernize, infuse AI, and scale with agility. The Mphasis.ai unit and Mphasis AI-powered ‘Tribes’ are focused on client outcomes and embed artificial intelligence and autonomy into every layer of the enterprise technology and process stack. 

HCLTech Joins MIT Media Lab to Advance Human-Centered AI and Quantum Innovation


HCLTech, a leading global technology company, has joined the MIT Media Lab, a world-renowned research and innovation ecosystem at the Massachusetts Institute of Technology (MIT) that brings together pioneering research and forward-thinking enterprises. This new engagement reflects HCLTech’s ongoing commitment to shaping the future of AI and accelerating breakthroughs in emerging technology areas, such as quantum computing, through collaborative innovation.

HCLTech will have access to MIT Media Lab’s research and networks, enabling it to deepen engagement with faculty, researchers and innovators in next-generation technologies, particularly AI. This will also enable HCLTech to co-develop projects that could translate meaningful AI innovation into impactful and scalable solutions.

“We welcome HCLTech to the MIT Media Lab at a pivotal moment in the evolution of artificial intelligence,” said Jessica Rosenworcel, Executive Director of the MIT Media Lab. “Their commitment to exploring applied AI aligns with our mission to design technologies that empower humanity. We look forward to a dynamic collaboration that may advance responsible, human-centered innovation in AI and beyond.”

“We are thrilled to collaborate with the MIT Media Lab at the forefront of applied AI research. By engaging with MIT Media Lab’s world-class faculty and researchers, we aim to explore co-development of AI innovations that create real-world impact,” said Vijay Guntur, Chief Technology Officer and Head of Ecosystems at HCLTech.

Rifa AI Secures $1.1 Mn to Scale Human-Like Voice Solutions for Regulated Industries


Rifa AI, a conversational AI platform building human-like voice AI solutions for enterprises, has raised USD 1.1 million in funding, led by Seaborne Capital. The company, backed by industry leaders NASSCOM and FalconX, will use this capital infusion to scale its operations in North America, while advancing its conversational AI platform for call centers. This strategic funding will support the development of highly compliant, modular and scalable voice solutions designed to meet the stringent requirements of regulated industries, such as healthcare, insurance, and financial services.

With this new round of funding, Rifa AI will continue to enhance its technology and expand its market presence, ultimately enabling businesses to manage voice workflows efficiently, intelligently, and in full regulatory compliance. The company has already surpassed its early revenue milestones and is on track to scale its operations fivefold over the next two quarters. Its current client base includes U.S.-based health insurers, underwriters, and debt collection agencies.

“Voice AI in regulated industries isn’t just about understanding speech; it’s about ensuring decisions align with system and policy constraints,” said Sameer Fulzele, Co-Founder of Rifa AI. “We’ve built for complexity from day one, and this funding will help extend our platform to more enterprise customers. Our focus is on scaling voice workflows in sectors where human involvement has traditionally been essential due to compliance challenges. As AI agents become mainstream, the real question is whether they work where it matters most: in high-stakes environments.”

Rifa AI has processed over 3 million minutes of customer interactions in sectors like insurance and financial services, reducing the volume of calls handled by human agents by up to 70% while enabling up to 60% of queries to be resolved end-to-end through AI voice agents, all while ensuring compliance and scalability. The platform automates key workflows, including appointment scheduling, FNOL, payment reminders, and order processing, integrating seamlessly with CRMs, ERPs, and telephony systems.

“Expanding into North America has always been part of our vision, not just as a market, but as a proving ground for truly enterprise-grade voice automation. This funding allows us to double down on building locally, supporting our clients more closely, and delivering solutions that meet the nuanced demands of regulated industries. We’re here to show that voice AI can be both deeply compliant and remarkably scalable,” said Shubham Khoker, Co-Founder, Rifa AI.

Additionally, Rifa AI participated in the FalconX Global Immersion Program in June 2023, connecting start-ups with enterprise customers and leaders in Silicon Valley, further enhancing its credibility in the tech ecosystem. The company is also backed by NASSCOM’s GenAI Startup Foundry and will be participating in the IIT Build Accelerator, supported by Foundation Capital, a prominent venture capital firm in the U.S.

About Rifa AI

Rifa AI is an innovative conversational AI platform designed to transform customer interactions within regulated industries. Founded by Sameer Fulzele, a seasoned entrepreneur and IIT Bombay alumnus, alongside Shubham Khoker, the former growth head at Topmate and also an IIT Bombay graduate, Rifa AI brings together deep expertise in AI, business operations, and technology integration.

GenAI Meets SDLC: Tata Elxsi Joins Forces with KAVIA AI to Automate Millions of Lines of Code


Tata Elxsi, a global design and technology services company, today announced a strategic partnership with KAVIA AI, a San Francisco-based Software 3.0, AI-powered platform redefining software development with enterprise-grade AI. Built to handle millions of lines of code and complex backend systems, KAVIA AI automates the entire development lifecycle, from planning and architecture to development, quality assurance, deployment and maintenance.

This collaboration will deploy GenAI-assisted automation across Tata Elxsi’s internal platforms and customer-facing programs, aiming to transform software quality and time-to-market. This joint go-to-market will deliver the power of GenAI for Software Development Lifecycle (SDLC) to enterprises across transportation, media, communications, and healthcare where engineering reliability is paramount.

By combining Tata Elxsi’s deep expertise in domain-led engineering from concept to deployment, with KAVIA AI’s cloud-native Workflow Manager Platform, the partnership will enable intelligent automation across every key phase of the SDLC—from requirement planning and architecture design to code creation, testing and deployment.

Nitin Pai, Chief Strategy Officer at Tata Elxsi, said, “GenAI adoption demands more than tools—it requires a trusted partner to pilot, productise and scale development and deployment workflows. Tata Elxsi brings that trust, backed by 25+ years of proprietary expertise in software engineering across complex, regulated industries, and a deep understanding of deploying GenAI with the appropriate industry-specific guardrails. We deliver not just AI automation, but real outcomes that go beyond just efficiency to effectiveness and the ‘shift left’ paradigm that enterprises need to scale GenAI in mission-critical environments.”

Labeeb Ismail, CEO of KAVIA AI, said, “We’re excited to partner with Tata Elxsi, a company that brings the scale, credibility and delivery discipline needed to realise real-world AI adoption in large-scale delivery environments. Our platform is built to be enterprise-ready, and Tata Elxsi’s proven delivery record ensures this technology delivers real outcomes to customers.”

Early deployments are already underway across multiple programs, including SaaS platforms, middleware, embedded systems and device development. These early outcomes validate the potential of GenAI-powered SDLC automation—accelerating software delivery without compromising on quality or compliance.

IBM Unveils Power11 in India: AI-Optimized Server Built for Mission-Critical Workloads and Hybrid Cloud


IBM (NYSE: IBM) has officially launched its most advanced Power server to date, IBM Power11, in India, marking a significant milestone in enterprise AI infrastructure. Purpose-built for the AI-first era, Power11 is engineered to deliver real-time inferencing, seamless hybrid cloud deployment, and unmatched reliability, all while reducing energy consumption and IT costs.

Designed for Mission-Critical Workloads

Developed with major contributions from the IBM India Systems Development Lab (ISDL), Power11 is tailored for enterprises operating complex, data-intensive workloads across sectors like banking, telecom, healthcare, retail, and public services. According to IDC, over one billion new logical applications are expected by 2028, and Power11 is positioned to manage this scale with AI-driven performance, security, and control.

“Power11 is built for enterprises that demand high performance, security, and always-on operations while preparing for an AI-first future,” said Subhathra Srinivasaraghavan, Vice President, ISDL, IBM.

Key Features and Innovations

AI Where Data Lives
  • On-chip inferencing enables AI models to run directly where data resides—whether in private data centers or hybrid cloud environments.
  • Supports data sovereignty and compliance mandates critical to India’s evolving digital landscape.
Security and Resilience
  • Offers 99.9999% uptime, the highest in IBM Power history.
  • Features <1-minute ransomware threat detection via IBM Power Cyber Vault.
  • Built-in quantum-safe cryptography protects against future threats like harvest-now, decrypt-later attacks.
Energy Efficiency
  • Delivers 2x performance per watt compared to x86 servers.
  • Introduces an Energy Efficient Mode with up to 28% better server efficiency than Maximum Performance Mode.
Future-Ready Architecture
  • First IBM Power server to support the upcoming IBM Spyre™ Accelerator, arriving in Q4 2025, designed for AI-intensive inference workloads.
  • Fully compatible with IBM’s AI stack including watsonx and Red Hat OpenShift AI.

Availability and Strategic Impact

IBM Power11 will be generally available from July 25, 2025, with the Spyre Accelerator expected later this year. The launch underscores India’s growing role not just as a consumer of cutting-edge infrastructure, but as a co-developer of global enterprise technology.

With Power11, IBM is not just delivering a server—it’s offering a platform for transformation. From UPI and 5G to smart manufacturing and e-governance, Indian enterprises now have a resilient, AI-optimized infrastructure to scale innovation responsibly and securely.


HCLTech Joins Forces with OpenAI to Accelerate Enterprise-Scale GenAI Transformation


HCLTech, a leading global technology company, today announced a multi-year strategic collaboration with OpenAI, a leading AI research and deployment company, to drive large-scale enterprise AI transformation as one of the first strategic services partners to OpenAI.

HCLTech’s deep industry knowledge and AI Engineering expertise lay the foundation for scalable AI innovation with OpenAI. This collaboration will enable HCLTech’s clients to leverage OpenAI’s industry-leading AI products portfolio alongside HCLTech’s foundational and applied AI offerings for rapid and scaled GenAI deployment.

Additionally, HCLTech will embed OpenAI’s industry-leading models and solutions across its industry-focused offerings, capabilities and proprietary platforms, including AI Force, AI Foundry, AI Engineering and industry-specific AI accelerators. This deep integration will help its clients modernize business processes, enhance customer and employee experiences and unlock growth opportunities, covering the full AI lifecycle, from AI readiness assessments and integration to enterprise-scale adoption, governance and change management.

HCLTech will roll out ChatGPT Enterprise and OpenAI APIs internally, empowering its employees with secure, enterprise-grade generative AI tools.

Vijay Guntur, Global Chief Technology Officer (CTO) and Head of Ecosystems at HCLTech, said, “We are honored to work with OpenAI, the global leader in generative AI foundation models. This collaboration underscores our commitment to empowering Global 2000 enterprises with transformative AI solutions. It reaffirms HCLTech's robust engineering heritage and aligns with OpenAI's spirit of innovation. Together, we are driving a new era of AI-powered transformation across our offerings and operations at a global scale.”

Giancarlo ‘GC’ Lionetti, Chief Commercial Officer at OpenAI, said, “HCLTech’s deep industry knowledge and AI engineering expertise set the stage for scalable AI innovation. As one of the first system integration companies to integrate OpenAI to improve efficiency and enhance customer experiences, they’re accelerating productivity and setting a new standard for how industries can transform using generative AI.”

Why LLMs Work Better with RAG—and What That Means for Enterprises


LLMs have transformed how we interact with information and technology. From chatbots and content creation tools to coding assistants and research aids, these models have shown impressive capabilities across domains. However, they are not without limitations. One of the most promising solutions to these limitations is Retrieval-Augmented Generation, or RAG. When combined, LLMs and RAG offer a powerful, more accurate, and enterprise-ready AI experience.

In the article below, Soham Dutta, Principal Technologist & Founding Member at DaveAI, explains why LLMs work better with Retrieval-Augmented Generation, or RAG. 


The Limitations of Standalone LLMs

LLMs are trained on large amounts of data from the internet, books, academic papers, and more. During training, they learn to predict words and generate human-like text based on statistical patterns. But despite their language skills, these models do not truly understand facts. They cannot browse the internet, access live databases, or pull in real-time updates. Their knowledge is frozen at the time of training.

This can lead to a problem called hallucination, where the model generates incorrect or fictional information. Even when it sounds confident, it might be wrong. For example, if a user asks a financial LLM about the latest stock prices, the model cannot give an accurate answer unless it is connected to current data.

Another issue is that LLMs do not know anything specific about your organization unless that information was included in the training data. If you are a business leader hoping to use an LLM to answer questions about internal documents, customer data, or product catalogs, a standard LLM simply cannot help unless that information is added through other means.

What is Retrieval-Augmented Generation (RAG)?

RAG is a method that helps LLMs provide better, more reliable answers by adding a retrieval step before generating a response. When a user asks a question, the system first searches a connected knowledge base, like internal company documents or a web database. It then retrieves the most relevant pieces of information and feeds them to the LLM, along with the original query.

This combination allows the LLM to generate a response that is both fluent and accurate. Instead of guessing, the model uses real, retrieved content as its base. This method greatly reduces hallucination and helps the model stay grounded in the latest available facts.

For example, if a company uses RAG to connect its LLM to a database of technical manuals, the AI assistant can provide accurate support based on those manuals. If the company updates a policy document, the LLM can reflect those updates immediately because it fetches the content at the time of the query, not from a static memory.
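The retrieve-then-generate loop described above can be sketched in a few lines of Python. Everything here is illustrative: the keyword-overlap scoring stands in for embedding similarity over a vector database, the knowledge base is a toy list, and in a real system the final prompt would be sent to an LLM API rather than printed.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count query words that appear in the document.
    Production RAG systems use embedding similarity instead."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    ranked = sorted(knowledge_base, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

# Hypothetical internal documents standing in for company manuals.
knowledge_base = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
    "The warranty covers manufacturing defects for 2 years.",
]

query = "How long do refunds take?"
docs = retrieve(query, knowledge_base, k=1)
prompt = build_prompt(query, docs)  # this string would go to the LLM
```

Because the context is fetched at query time, updating a document in the knowledge base changes the model's answers immediately, with no retraining.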

How RAG Enhances LLMs for Business Use

Enterprises are quickly realizing that the combination of RAG and LLMs creates smarter, more practical solutions for real-world use cases. With this pairing, businesses can offer AI assistants that understand natural language and also access company-specific knowledge.

In customer service, a RAG-enabled chatbot can answer questions by searching up-to-date FAQs, support tickets, or policy documents. This allows the company to offer detailed responses without training the model on every possible question. In marketing, a content generation tool can pull from brand guidelines or campaign briefs to generate on-brand content every time.

Sales teams can benefit as well. Instead of digging through scattered CRM records or pricing sheets, they can ask a smart assistant to retrieve the latest client data and generate a tailored email. Legal teams can scan contracts or compliance documents through natural queries. Engineers can find product specs or configuration settings without reading long manuals.

Enterprise-focused platforms like DaveAI are already demonstrating how LLMs paired with real-time data retrieval can transform product discovery and guided selling across digital channels.

By making enterprise data accessible through natural language, LLMs with RAG reduce the time spent searching for information and increase the accuracy of business decisions.

Benefits for Enterprise Adoption

The biggest benefit of RAG is that it makes AI systems more trustworthy. Enterprises cannot rely on hallucinated or out-of-date information. With RAG, they can control the source of truth. This improves user trust and opens the door for adoption across departments.

RAG also supports real-time updates. If an organization adds new documents or changes an internal process, the system reflects those changes immediately. There is no need to retrain the LLM or wait for future versions. This creates a dynamic, living knowledge environment.

Scalability is another key advantage. RAG allows companies to use one central model while connecting it to different data sources for various use cases. Whether it is HR, finance, or operations, each department can maintain its own knowledge base, while the model serves as a unified language interface.

In terms of security, RAG systems can be designed to respect internal access controls. Only authorized users can query sensitive information, and audit logs can track who accessed what. This level of control is important for industries like finance, healthcare, and law, where compliance matters.

Finally, RAG improves personalization. A model can retrieve user-specific documents, emails, or records to tailor responses. This leads to more helpful interactions and a smoother user experience.

Implementation Challenges and Future Outlook

While the benefits are significant, setting up a RAG system is not without effort. First, businesses need to prepare their data. This includes converting documents into machine-readable formats and splitting them into smaller chunks that the model can process. Organizing this data into a searchable vector database is essential.

Next comes integration. The retrieval engine, LLM, and user interface must be connected in a seamless pipeline. Tools like LangChain and Haystack, and commercial platforms like OpenAI's API or Google's Vertex AI, are making this easier, but it still requires technical planning.
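The chunk-embed-retrieve flow described above can be sketched end to end. This is a minimal, self-contained illustration: a toy word-count "embedding" and an in-memory list stand in for a real embedding model and vector database, and all names (`chunk`, `embed`, `retrieve`) are illustrative rather than part of any specific framework.

```python
import math
from collections import Counter

def chunk(text, size=50):
    """Split a document into word chunks the model can process."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy embedding: lowercase word counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Prepare the "knowledge base": chunk every document once, up front.
docs = [
    "Refunds are processed within 14 days of the return request.",
    "The warranty covers manufacturing defects for two years.",
]
chunks = [c for d in docs for c in chunk(d)]

# At query time, the most relevant chunk is fetched and would be
# passed to the LLM as context alongside the user's question.
print(retrieve("How long do refunds take?", chunks)[0])
```

In a real pipeline, `embed` would call an embedding model and the chunks would be indexed in a vector store, but the retrieve-then-generate flow is the same.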

Performance is another consideration. Retrieving documents and generating a response takes time, so systems need to be optimized for low latency. Techniques like caching frequent queries and indexing relevant documents can help improve speed.

Despite these challenges, the trend is clear: more and more companies are investing in RAG-based solutions because the payoff is strong. As generative AI continues to grow, RAG will be a key part of making it usable, safe, and valuable in enterprise environments.
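The query-caching technique mentioned above can be sketched with Python's standard library. Here `retrieve_context` is a hypothetical stand-in for an expensive vector-store lookup; `functools.lru_cache` serves repeat queries from memory, so the slow path runs only on a cache miss.

```python
import functools

CALLS = {"n": 0}  # counts how often the slow path actually runs

@functools.lru_cache(maxsize=256)
def retrieve_context(query: str) -> str:
    """Hypothetical expensive retrieval step (e.g. a vector-store query)."""
    CALLS["n"] += 1  # only incremented on a cache miss
    return f"retrieved context for: {query}"

retrieve_context("refund policy")
retrieve_context("refund policy")  # repeat query: served from the cache
print(CALLS["n"])  # prints 1
```

One caveat of this simple approach: the cache keys on the exact query string, so near-duplicate phrasings miss. Production systems often normalize queries or cache at the embedding level instead.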

Conclusion

LLMs are a powerful step forward in language technology, but they reach their full potential when paired with Retrieval-Augmented Generation. RAG gives LLMs the ability to access live, reliable, and domain-specific information. For enterprises, this means better accuracy, real-time relevance, and smarter decision-making across functions. While implementation takes planning, the combination of LLM and RAG is quickly becoming a cornerstone of modern AI strategy. Businesses that adopt this approach early will be better positioned to lead in the AI-driven future.

HCLTech with Intel Launches FlexSpace for AI PCs to Transform Enterprise Efficiency

HCLTech, a leading global technology company, announced the launch of HCLTech FlexSpace for AI PCs in collaboration with Intel®. This innovative solution enhances AI-powered enterprise computers, offering businesses the computing power and flexibility needed for AI-driven environments.

By integrating HCLTech FlexSpace, an Experience-as-a-Service digital workplace solution, with Intel® Core™ Ultra processors, enterprises can perform AI tasks locally on devices, ensuring faster and more secure processing. This reduces the need for data transfers to remote servers, minimizing data breach risks.

FlexSpace significantly improves the performance of advanced AI platforms, enabling faster, more responsive interactions and superior data processing for applications like Microsoft Copilot. With HCLTech AI Force and Edge AI, enterprises benefit from rapid data processing and real-time analytics, providing actionable insights. Additionally, AI of Things (AIoT) applications experience reduced latency and improved performance.

"At Intel, we are committed to delivering transformative solutions that address the evolving needs of modern workplaces. Our collaboration with HCLTech on their FlexSpace solution combines the power of Intel's AI PCs with HCLTech's industry-leading IT services. This collaboration not only meets the critical need for advanced workplace solutions but also enhances customer experiences by delivering unmatched performance, scalability, and security. Together, we are shaping the future of workplace transformation," said Santhosh Vishwanathan, Vice President and Managing Director, India Region, Intel.

"At HCLTech, we aim to revolutionize enterprise AI interaction with advanced, scalable solutions that enhance efficiency and innovation. Our collaboration with Intel on FlexSpace for AI PCs is a key step in helping clients fully leverage AI while ensuring top-tier security and performance," said Anand Swamy, Head of Tech and ISV Ecosystems, HCLTech.

HCLTech continues to deliver intelligent workplace solutions through its HCLTech Fluid Workplace framework, leveraging Intel's Core Ultra processors to empower enterprises to streamline workflows, make data-driven decisions and accelerate innovation across healthcare, finance and manufacturing.

For more information about HCLTech's collaboration with Intel, please visit: hcltech.com/cloud/intel

Singulr AI Raises $10 Mn in Seed Funding Led by Nexus Venture Partners and Dell Technologies Capital

Singulr AI announced $10 million in seed financing led by Nexus Venture Partners and Dell Technologies Capital, with participation from leading industry executives. It launched today with the general availability of its enterprise AI governance and security platform, already deployed across companies in the technology, finance, and healthcare sectors.

The enterprise AI surge is real: in live customer environments, Singulr consistently discovers 500+ unique AI services and models in use, many of them redundant. Over three out of four employees use unapproved “Shadow AI” tools, often linked to personal accounts that expose enterprise IP. This dynamic will force organizations to balance rapid AI deployment against security risks and rising costs. Further, the rise of agentic AI increases complexity, making enterprise-grade governance essential for scaling AI operations.

Using the Singulr AI platform, CIO and IT operations teams can rationalize their AI service inventory and reduce unnecessary spend, while CISO and risk teams can streamline the onboarding of safe AI use cases while implementing granular policies to prevent data leakage and shadow AI.

Founders Shiv Agarwal and Abhijit Sharma, who previously built Arkin Net (acquired by VMware), reunited the team to build Singulr. In just one year, they have gone from a concept to a generally available solution that integrates seamlessly into enterprise environments without requiring infrastructure changes. Singulr is launching with SOC 2 and ISO 27001 compliance, ensuring enterprise-grade security and trust.

Shiv Agarwal, CEO and Co-founder of Singulr, said, “Companies are experiencing explosive growth in AI adoption across employees, partners, and vendors, but this surge comes with mounting security risks and costs. Through our extensive market research, we recognized that the broad use of generative AI technology is a complex problem needing an enterprise-grade solution to scale and secure. We started Singulr to help enterprises rapidly adopt and operationalize powerful new AI technologies while managing costs and minimizing risk.”

Jishnu Bhattacharjee, Managing Director at Nexus Venture Partners, added, “AI is transforming business with a new technology stack, evolving usage patterns, and unprecedented risks—forcing companies to rethink how they do security and governance. The Singulr team has a proven track record of solving complex, enterprise-scale problems and is launching a proven solution that enables businesses to efficiently and safely harness AI’s potential. I’m excited to partner with Shiv and the team again as they establish Singulr as the category leader.”

“This is the year the enterprise moves from experimenting with AI to leveraging its potential for pivotal business challenges. However, this adoption brings complex operational and security challenges requiring a systems approach—from developer experience to endpoint defense. The Singulr team has a proven track record of taking on enterprise challenges at Arkin and VMware and is well-equipped to do so again,” said Raman Khanna, Managing Director at Dell Technologies Capital.

Singulr helps customers get ahead and stay ahead of AI adoption with:
  • Continuous discovery of all AI-in-use, including homegrown LLM applications and agents, public AI services, and embedded AI in SaaS applications, along with deep contextual insights into application settings, user activity, and sensitive data exposure in prompts or uploaded files.
  • Rapid AI onboarding with automated risk scoring of AI services and models so that teams can quickly vet and approve new requests, unsanctioned use, or changes in models, datasets, or applications.
  • Continuous AI-powered protection using context-aware policies and enforcement that allows organizations to permit, restrict, warn, redact, and trigger security tickets and workflows.
Companies interested in safe and cost-effective AI use can request a sample Instant AI Audit report or an AI Audit assessment at https://singulr.ai/sample-report

About Singulr AI

Singulr AI is an enterprise AI governance and security platform that helps streamline and secure enterprise AI use at scale. Security, IT, and AI teams can now accelerate AI-driven innovation while reducing business risks and unnecessary costs from data leakage, shadow AI, and AI sprawl.

Cognida Secures $15 Mn Series A from Nexus Venture Partners

With proven success across 30+ enterprises, Cognida.ai’s practical approach reduces AI implementation time from 8 months to just 12 weeks, making AI adoption predictable and profitable

Cognida.ai, a leader in practical AI solutions for enterprises, today announced the successful close of its $15 million Series A funding round, led by Nexus Venture Partners. This investment validates Cognida.ai's unique approach to making AI implementation practical and financially impactful for enterprises, demonstrated through successful deployments at over 30 leading organizations.

"While 87% of enterprises invest in AI, only 20% successfully deploy solutions into production. We are uniquely positioned to close the divide between ambition and achievement in integrating AI into everyday business processes." - Feroze Mohammed, Cognida CEO.

In a market where 80% of AI projects fail and implementation typically takes 6-8 months, Cognida.ai has cracked the code on enterprise AI adoption through its innovative Zunō accelerator platform and industry-specific solutions, consistently delivering implementations in 10-12 weeks with measurable ROI.

Transforming Enterprise AI Implementation

Cognida.ai’s practical AI approach has delivered measurable results across industries. Examples include:
  • 70% faster invoice processing with 99% accuracy, streamlining operational efficiency for a leading manufacturing client.
  • 45% improvement in inventory forecasting, optimizing SKU level accuracy enabled by advanced ML models for a video surveillance equipment manufacturer.
  • 5x faster quote generation in response to RFQs, generating $10M+ in new revenue with Generative AI.
  • 1% customer churn reduction, saving millions in annual revenue with ML-powered predictive customer insights.
“Enterprise AI adoption has reached its tipping point,” said Feroze Mohammed, Founder and CEO of Cognida.ai. "While 87% of enterprises are investing in AI, only 20% successfully deploy solutions into production. We are uniquely positioned to close the divide between ambition and achievement in integrating AI into everyday business processes. We’ve honed the expertise, tools, and delivery model needed to navigate the complexities of AI adoption. This investment validates our approach of delivering measurable ROI through practical AI solutions, leading the next wave of AI services companies."

“AI’s mainstream adoption requires specialized service providers who can bridge the gap between cutting-edge capabilities and practical applications," said Anup Gupta, Managing Director at Nexus Venture Partners. "We are impressed with Cognida.ai’s strong traction and innovative approach with a clear focus on practical AI solutions. Their success with enterprise clients showcases an ability to deliver real impact, and we’re excited to partner with Feroze and the team as they scale in this dynamic space.”

Successful enterprise AI needs a hybrid model combining specialized services, software, and a co-creation approach to tackle real-world integration and compliance challenges. “When we considered our ambitious slate of projects, we needed a partner with a deep commitment to excellence, strong engineering expertise, and the ability to bring top-tier talent to our AI, digital engineering, and data initiatives,” said Joe Montalto, COO and CIO of The Phia Group. “Partnering with Cognida.ai has been a smart and productive decision. Their Zunō platform and co-creation approach have enabled us to innovate and deliver transformative AI solutions, driving measurable success across multiple initiatives."

“Clopay's digital transformation journey has significantly enhanced customer experience in our industry. As we continue to harness AI solutions to transform our products and services, Cognida.ai has proven to be our trusted partner with their practical approach to AI implementation,” said A. Vinod, CIO of Clopay. "They bring not only deep AI expertise but also a strong understanding of manufacturing operations, data integration, and business processes."

Strategic Investment to Scale Cognida.ai’s Proven Model of Practical AI for the Enterprise

The $15 million investment from Nexus Venture Partners, a globally recognized venture capital firm with a track record of backing transformative startups, reflects strong confidence in Cognida.ai’s vision and execution. The funding will accelerate Cognida.ai's proven implementation model through:
  • Expanded AI solution library across key industries.
  • Advanced development of its innovative Zunō agentic AI platform, delivering greater efficiency and scalability.
  • Growth of AI implementation teams to meet increasing demand from enterprises worldwide.
  • Accelerated go-to-market strategies in target enterprise segments to broaden its customer base.

About Cognida.ai

Cognida.ai specializes in practical AI solutions that deliver measurable business outcomes. Through its innovative Zunō platform and industry-specific foundational solutions, Cognida.ai empowers enterprises to adopt AI seamlessly and achieve real results without disruptive overhauls. Headquartered in Chicago with offices in Silicon Valley and Hyderabad, Cognida.ai serves clients across the manufacturing, healthcare, finance, and technology sectors. For more information, visit www.cognida.ai.

About Nexus Venture Partners

Nexus Venture Partners is a leading early-stage venture capital firm partnering with extraordinary entrepreneurs building product-first companies. With $2.6 billion under management, Nexus operates as a unified team across the US and India. Nexus’s portfolio includes Apollo.io, Delhivery, Fingerprint, H2O.ai, Iambic, Infra Market, MinIO, Neuron7.ai, Observe.ai, Postman, Pubmatic, Rapido, Rancher, Turtlemint, Uniqus, and Zepto.

Accenture Launches 'AI Refinery for Industry', A platform for organizations to Rapidly Build and Deploy AI Agents

Accenture has just launched AI Refinery for Industry, a platform designed to help organizations rapidly build and deploy AI agents to enhance their workforce, address industry-specific challenges, and drive business value faster.

“The launch of AI Refinery for Industry represents an expansion of the platform and our collaboration with NVIDIA—which helps clients convert raw AI technologies and tools into scaled enterprise AI systems—providing a foundation to accelerate agentic functionality and reimagine functions and industry processes,” said Lan Guan, chief AI officer, Accenture. “Our new industry agent solutions will empower organizations to conceptualize agents and quickly prove their value, hitting the ground running on day one as a network of digital teammates.”

Key Features:

Industry Agent Solutions: The platform includes 12 industry agent solutions, with plans to expand to over 100 solutions by the end of the year.

Built on NVIDIA AI: Powered by NVIDIA AI Enterprise software, including NVIDIA NeMo, NVIDIA NIM microservices, and NVIDIA AI Blueprints such as Video Search and Summarization and Digital Human.

Reduced Development Time: These solutions can reduce the time to build and derive value from agents from months or weeks to days.

Customization: Organizations can customize these multi-agent networks with their data.

Use Cases:

Revenue Growth Management: Automate key decision processes related to promotional activities to maximize revenue.

Clinical Trial Companion: Personalize trial plans and provide guidance throughout a patient’s clinical trial journey.

Asset Troubleshooting: Swiftly troubleshoot or resolve industrial equipment issues using advanced multi-agent systems.

Availability:

Cloud Platforms: Available on all public and private cloud platforms.

Integration: Seamlessly integrates with other Accenture Business Groups to accelerate AI adoption.

This initiative represents a significant step towards democratizing advanced AI capabilities and fostering innovation across various industries.


Infosys and Microsoft Expand Partnership to Accelerate Adoption of GenAI and Microsoft Azure, Globally

Infosys and Microsoft have expanded their strategic collaboration to accelerate customer adoption of Microsoft Cloud and generative AI. This partnership aims to help joint customers realize the value of their technology investments and achieve transformative outcomes.

Infosys was an early adopter of GitHub Copilot, Microsoft's AI-powered coding tool, which has significantly boosted coding efficiency. In June this year, Infosys launched a GitHub Center of Excellence (CoE) to offer enterprise AI innovation to customers worldwide.

With this expanded collaboration, Infosys has been chosen as a strategic supplier to support cloud and AI workloads for Microsoft's enterprise customers.

The collaboration's focus areas span key sectors such as finance, healthcare, supply chain, and telecommunications.

The collaboration will combine Microsoft’s technology with Infosys' industry-leading AI and cloud offerings, Infosys Topaz and Infosys Cobalt, as well as its AI-powered marketing suite, Infosys Aster, to enhance customer experiences and drive the global adoption of enterprise AI.

The scope of this expanded collaboration will include:
  • Financial Services – Infosys' domain expertise with Finacle, alongside Microsoft's advanced capabilities will enable financial institutions to engage, innovate, operate, and transform more efficiently.
  • Healthcare – Infosys Helix, a next-gen healthcare payer platform built on Microsoft Azure, will use AI/ML automation to optimize patient outcomes, improve access to care, and enhance constituent experiences, while streamlining processes and reducing costs.
  • Supply Chain – This sector will see optimized processes and increased agility through the combined strengths of TradeEdge and Azure OpenAI service.
  • Telecommunications – Microsoft's generative AI and Infosys Live Operations platforms will deliver enhanced connectivity and customer experiences.
  • Infosys Energy Management Solution, coupled with Microsoft's commitment to sustainability, will accelerate the NetZero journey for customers.
  • Customer service - Infosys Cortex, an AI-driven customer engagement platform, integrates Microsoft GenAI and Copilot to deliver specialized and individualized copilot assistance to every member of a customer service organization.
Many of these solutions will be available on Azure Marketplace, allowing customers to utilize their Microsoft Azure Consumption Commitment (MACC), creating a mutually beneficial market proposition.

This expanded collaboration is expected to drive global adoption of enterprise AI and enhance customer experiences.

To recall, in July, Tech Mahindra also collaborated with Microsoft to modernize workplace experiences using Copilot for Microsoft 365.

Infosys and Microsoft are also focusing on sharing best practices for Responsible AI. Infosys is a key partner in The Microsoft Responsible AI Partner Initiative, contributing to the development of ethical AI guidelines through Infosys’ Responsible AI (RAI) Office. Skilling efforts are also part of the collaboration, ensuring that the workforce is equipped with the necessary expertise to support these initiatives.

Microsoft-owned Inflection AI and Intel Launch Enterprise AI System

Inflection AI, acqui-hired by Microsoft in June this year, and Intel have recently launched a new enterprise AI system called Inflection for Enterprise. It removes development barriers to accelerate hardware testing and model building.

This system is designed to provide businesses with powerful AI capabilities, including large language models (LLMs), to help them build custom, secure, and employee-friendly AI applications.

Essentially, Inflection for Enterprise is an AI system built around a multi-billion-parameter LLM that allows enterprises to own their intelligence in its entirety. Its foundational model is fine-tuned to each business and offers an empathetic, human-centric approach to enterprise AI.

The system is powered by Intel's Gaudi 3 AI accelerators, which are designed to deliver high performance and efficiency.

The service is available on Intel's Tiber AI Cloud, providing a managed cloud infrastructure for developing, accelerating, and deploying AI applications at scale.

Inflection AI's platform, Inflection 3.0, focuses on fine-tuning models using proprietary datasets to build enterprise-specific AI applications.

The system will be available as an industry-first AI appliance powered by Gaudi 3, expected to ship to customers in Q1 2025.

This collaboration aims to set a new standard for AI solutions that deliver immediate, high-impact results for enterprises.

Inflection AI and Intel will also enable developers to build enterprise applications for Inflection for Enterprise, leveraging the robust and human-centric Inflection 3.0 system, to generate critical software tools.

Inflection AI COO, Ted Shelton, said, "Every CEO and CTO we speak to is frustrated that existing AI tools on the market aren’t truly enterprise-grade. Enterprise organizations need more than generic off-the-shelf AI, but they don’t have the expertise to fine-tune a model themselves. We’re proud to offer an AI system that solves these problems, and with the performance gains we see from running on Intel Gaudi, we know it can scale to meet the needs of any enterprise.”

Inflection AI was founded in 2022 by entrepreneurs Reid Hoffman (co-founder and executive chairman of LinkedIn), Mustafa Suleyman (CEO of Microsoft AI, and co-founder and former head of applied AI at DeepMind), and Karén Simonyan.

In June this year, Inflection AI was acquired by Microsoft for $650 million. Inflection AI co-founders Suleyman and Simonyan announced their departure from the company to start Microsoft AI, with Microsoft acqui-hiring nearly the entirety of Inflection AI's 70-person workforce.

Inflection AI has also collaborated with NVIDIA to develop hardware for generative artificial intelligence.

IndianWeb2.com © all rights reserved