Showing posts with label Open Source.

Analog Devices Launches CodeFusion Studio™ 2.0 to Simplify AI-Enabled Embedded Development

  • New end-to-end AI workflow support with bring-your-own-model capability, model checks, and performance profiling enables rapid deployment across ADI’s full hardware portfolio.
  • New unified configuration tools, multi-core support, and integrated debugging streamline development across heterogeneous systems.
  • A new Zephyr-based modular framework for runtime AI/ML profiling and layer-by-layer analysis enhances the open-source foundation, eliminating toolchain fragmentation and reducing complexity.
Analog Devices, Inc. (Nasdaq: ADI), a global leader in semiconductor innovation, today launched CodeFusion Studio™ 2.0, a significant upgrade to its open source embedded development platform. Designed to simplify and accelerate the development of AI-enabled embedded systems, CodeFusion Studio 2.0 introduces advanced hardware abstraction, seamless AI integration and powerful automation tools to streamline the journey from concept to deployment across ADI’s diverse processors and microcontrollers.

“The next era of embedded intelligence requires removing friction from AI development,” said Rob Oshana, Senior Vice President of the Software and Digital Platforms group, ADI. “CodeFusion Studio 2.0 transforms the developer experience by unifying fragmented AI workflows into a seamless process, empowering developers to leverage the full potential of ADI's cutting-edge products with ease so they can focus on innovating and accelerating time to market.”

Empowering Developers with End-to-End AI Workflows

CodeFusion Studio 2.0 now supports complete AI workflows, enabling developers to bring their own models and deploy them efficiently across ADI’s processors and microcontrollers—from low-power edge devices to high-performance DSPs (digital signal processors). The latest platform, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools and optimization capabilities that are designed to ensure robust deployment and an accelerated time-to-market.

A new Zephyr-based modular framework enables runtime performance profiling for AI/ML workloads, offering layer-by-layer analysis and seamless integration with ADI’s heterogeneous platforms. This encapsulation of toolchains simplifies machine learning deployment and enhances system-level performance insights.

Unified Development Experience

The updated CodeFusion Studio System Planner now supports multi-core applications and expanded device compatibility, while unified configuration tools reduce complexity across ADI’s hardware ecosystem. Developers benefit from integrated debugging capabilities, including Core Dump Analysis and GDB (GNU debugger) support, making troubleshooting faster and more intuitive.

ADI Future-Proofs Its Digital Roadmap

CodeFusion Studio 2.0 is the latest milestone in ADI’s open-source embedded development platform, embodying its commitment to delivering developer-first tools that simplify complexity and accelerate innovation. As ADI expands its digital roadmap, future releases will continue to push the boundaries of embedded intelligence, bringing deeper hardware-software integration, expanded runtime environments and new capabilities tailored to evolving developer needs as they experiment with physical AI.

“Companies that deliver physically aware AI solutions are poised to transform industries and create new, industry-leading opportunities. That’s why we’re creating an ecosystem that enables developers to optimize, deploy and evaluate AI models seamlessly on ADI hardware, even without physical access to a board,” said Paul Golding, Vice President of Edge AI and Robotics, ADI. “CodeFusion Studio 2.0 is just one step we’re taking to deliver Physical Intelligence to our customers, ultimately enabling them to create systems that perceive, reason and act locally—all within the constraints of real-world physics.”

Availability

CodeFusion Studio 2.0 is now available for download. Developers can access the platform, documentation and community support at https://developer.analog.com/solutions/codefusionstudio

Surya: NASA and IBM Release Largest Open-Source Heliophysics AI Model on Hugging Face



IBM and NASA have unveiled a groundbreaking open-source AI model called Surya, designed to predict solar weather and protect critical infrastructure from space-based disruptions. The powerful AI model is trained on 14 years of observations from NASA’s Solar Dynamics Observatory (SDO).

Here's a breakdown of what makes this initiative so impactful:

What Is Surya?

  • Name Origin: “Surya” is Sanskrit for “Sun,” reflecting its heliophysics focus.
  • Purpose: Predict solar flares, coronal mass ejections, and other solar phenomena that can disrupt satellites, GPS, power grids, and telecommunications.
  • Availability: Open-source and hosted on Hugging Face.

Technical Highlights

  • Foundation Model: Trained on 14 years of high-resolution solar data from NASA’s Solar Dynamics Observatory (SDO).
  • Data Types: Includes solar coronal EUV images, magnetic field maps, and solar surface velocity data.
  • Model Size: 366 million parameters—lightweight enough for broader deployment.

Capabilities & Performance

  • Forecasting Power:
    • Predicts solar flares up to 2 hours in advance.
    • Achieved 16% improvement in flare classification accuracy over previous models.
  • Use Cases:
    • Early warnings for satellite operators.
    • Infrastructure protection for energy grids and aviation.
    • Academic research in heliophysics and space weather.

Why It Matters

  • Economic Risk: A major solar storm could cost the global economy up to $2.4 trillion over five years.
  • Recent Events: Solar storms have already disrupted GPS, diverted flights, and damaged satellites.
  • Future-Proofing: As humanity ventures deeper into space, accurate solar forecasting becomes essential for safety and continuity.

Open Science Impact

  • SuryaBench Dataset: IBM and NASA also released the largest curated heliophysics dataset to support further research.
  • Community Collaboration: Encourages scientists and developers to build on Surya for new applications in space weather prediction.

Summary Table

Feature | Details
Model Name | Surya
Developed By | IBM & NASA
Training Data | 14 years of solar observations from NASA's SDO
Parameter Count | 366 million
Forecast Window | Up to 2 hours before solar flare events
Accuracy Improvement | 16% over prior models
Hosted On | Hugging Face

India Pioneers Open-Source 5G & 6G Platform, Ushering a New Era in Telecom Innovation


India has launched a major initiative to develop a production-grade open-source platform for 5G, 5G Advanced (5G-A), and 6G networks.

This effort is backed by seed funding from the Ministry of Electronics and Information Technology (MeitY) and is led by IISc Bengaluru's Foundation for Science Innovation and Development (FSID), IIT Delhi's Foundation for Innovation and Technology Transfer (FITT), and the Centre for Development of Advanced Computing (C-DAC).

The initiative takes a unique approach compared with other countries, which have largely relied on proprietary solutions from major telecom vendors.

India is focusing on an open-source mobile network stack, fostering indigenous innovation and reducing reliance on proprietary solutions.

The initiative aims to create a fully open-source mobile network stack aligned with global standards such as 3GPP and Open RAN (O-RAN).

The project builds on the Indian Open-Source Platform for End-to-End 5G Network (IOS-5GN) and lays the foundation for the Indian Open-Source Platform for Mobile Communication Networks (IOS-MCN), a consortium aimed at fostering indigenous innovation across academia, industry, and startups.

A key milestone was achieved on January 31, 2025, when the consortium released IOS-MCN Agartala v0.1.0, which includes a 5G core, Open RAN-compliant radio access network software, and a service orchestration framework.

The system was successfully tested using Made-in-India ORAN-compliant radio units developed by VVDN and Lekha Wireless, achieving downlink speeds of 600-700 Mbps and latency under 10 milliseconds on commercial mobile devices.

This initiative is a significant step toward self-reliance in next-generation communication technologies, reducing reliance on proprietary solutions and fostering open, scalable, and resilient mobile communication networks.

With 961,000 base stations and 356 cities covered, China leads in 5G deployment; the country is also investing heavily in 6G research.

India's open-source approach is distinct from the vendor-driven models seen in China, the US, and South Korea. While other nations focus on rapid deployment, India is prioritizing self-reliance, scalability, and innovation in mobile communication networks.

Alibaba Open-Sources Its AI Video Model Wan2.1 Series


Alibaba is making waves in the AI world by launching an open-source version of its video and image-generating AI model, Wan 2.1. This move is set to intensify competition in China's AI market, especially following DeepSeek's recent launch of its own open-source models.

Alibaba Cloud announced the open-source release of four models in its Wan2.1 series of large video-generation models. Released as open source, the models are available to global academia, researchers, and commercial organizations, further promoting innovation and inclusiveness in artificial intelligence (AI) technology.

Alibaba's AI models, particularly Qwen 2.5-Max and Wan 2.1, are making significant strides in the AI landscape. Alibaba Cloud was one of the first global technology companies to open-source its own large-scale AI models; as early as August 2023, it released its first open-source model, Qwen-7B.



Wan 2.1 is designed to generate highly realistic visuals and has already secured a top ranking on VBench, a leaderboard for video generative models. The four released Wan 2.1 models are each capable of generating images and videos from text and image input, and they are available globally on Alibaba Cloud's ModelScope and Hugging Face platforms.

In addition to Wan 2.1, Alibaba has also introduced a preview version of its reasoning model, QwQ-Max, which it plans to make open source upon its full release. This strategic move aligns with Alibaba's broader AI ambitions, as the company has announced plans to invest at least $52 billion over the next three years to bolster its cloud computing and AI infrastructure.

The Qwen 2.5-Max model is part of Alibaba's open-source Qwen series and is designed to process long, complex queries and engage in nuanced conversations. It has been benchmarked against models like OpenAI's GPT-4, DeepSeek-V3, and Meta's Llama-3.1-405B, and has shown superior performance in several areas.

This open-source initiative is expected to foster innovation, lower barriers to entry, and position Alibaba as a formidable player in the AI space.

In its announcement press release, Alibaba said: “Training video-based models requires huge computing resources and a large amount of high-quality training data. Open source helps lower the barrier to entry for more companies to use AI, enabling them to create high-quality visualization content that meets their needs in a cost-effective manner.”

Text prompt: A man is performing professional diving moves on a diving platform. In the panoramic shot, he is wearing red swimming trunks and his body is upside down, with his arms extended and his legs together. The camera moves down and he jumps into the water, splashing. In the background is a blue swimming pool.


Among them, the T2V-14B model is more suitable for generating high-quality visual effects with rich motion dynamics, while the T2V-1.3B model strikes a balance between generation quality and computing power, making it an ideal choice for developers for secondary development and academic research. For example, the T2V-1.3B model allows users to generate a 5-second, 480p resolution video in about 4 minutes using only an ordinary laptop.

Red Hat Enhances Security and Virtualization Experience with Latest Version of Red Hat OpenShift


Red Hat, Inc., the world's leading provider of open source solutions, today announced the general availability of Red Hat OpenShift 4.18, the latest version of the industry’s leading hybrid cloud application platform powered by Kubernetes. Red Hat OpenShift 4.18 introduces new features and capabilities designed to streamline operations and security across IT environments and deliver greater consistency to all applications, from cloud-native and AI-enabled to virtualized and traditional.

According to the Gartner® press release Top Trends Impacting Infrastructure and Operations for 2025, revirtualization/devirtualization is one of the top trends facing organizations in 2025. As shifts in the virtualization market require organizations to reevaluate their virtualized infrastructure and strategies, for many it is an opportunity to implement technologies that will both deliver on their current IT requirements and help them meet the needs of tomorrow. The latest enhancements to Red Hat OpenShift are designed to simplify the management of virtual machines and containers while providing organizations with a common infrastructure to bring their generative AI (gen AI) plans to life.

Enhanced virtualization experience

Red Hat OpenShift 4.18 introduces new virtualization enhancements that improve networking, simplify storage migration, and streamline VM management. These updates reduce operational complexity, enhance flexibility, and improve resource efficiency, making it easier to manage and adapt virtualized environments as needs evolve.

VM-friendly networking provides support for common VM networking use cases with the general availability of user-defined networks, making it easier for users to get their virtualization platform up and running. Also available with OpenShift on AWS and Red Hat OpenShift Service on AWS, this allows users to have similar networking capabilities for secondary networks on AWS as they do on-premises, allowing for more hybrid cloud flexibility.

VM storage migration, available as a technology preview, now includes additional enhancements that allow for non-disruptive movement of data between storage devices and storage classes while a VM is running, enabling users to be more agile as storage needs change.

Tree-view navigation, available as a technology preview, enables users to logically group VMs into folders for more granular organization. Logical grouping, also available as a technology preview, gives users a quicker and easier way to navigate between VMs with a single click.

Red Hat OpenShift 4.18 also enhances user-defined networks with Border Gateway Protocol (BGP), which improves segmentation and supports advanced use cases like VM static IP assignment, live migration and stronger multi-tenancy.

Extending choice for greater hybrid cloud innovation

Red Hat OpenShift 4.18 expands support to additional public cloud providers, providing users with increased flexibility for how and where they choose to run their workloads. Red Hat OpenShift now supports bare-metal deployments on Google Cloud and Oracle Cloud Infrastructure. Additionally, for users looking for virtualization in the public cloud, Red Hat OpenShift Virtualization is now available on Oracle Cloud Infrastructure as a technology preview.

Simplified operations for security

Red Hat OpenShift 4.18 introduces new security features designed to help drive more resilient operations while decreasing potential risks. Secret store container storage interface (CSI) driver is now generally available and provides users with a vendor-agnostic solution for managing credentials and sensitive information for applications. Workloads on Red Hat OpenShift can access external secrets managers without storing secrets on the cluster, enhancing overall security hygiene and simplifying credential management. This allows for clusters to remain unaware of secrets, thereby further reducing risk. Additionally, Secret Store CSI Driver enhances complementary solutions, such as OpenShift GitOps and OpenShift Pipelines, by enabling them to consume secrets from an external secrets manager in a more secure way.
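For readers unfamiliar with the pattern, the sketch below shows how a workload typically consumes external secrets through the upstream Secrets Store CSI Driver. The provider choice (Vault), the class name, the parameters, and the mount path are all illustrative assumptions, not configuration published by Red Hat.

```yaml
# Hypothetical sketch: project secrets from an external manager into a pod
# via the Secrets Store CSI Driver. Names like "app-secrets" and the
# parameters block are assumptions for illustration only.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: vault                # requires the matching provider plugin
  parameters:
    roleName: "app-role"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/app"
        secretKey: "password"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      volumeMounts:
        - name: secrets
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "app-secrets"
```

Because the secret material is projected into the pod's filesystem at mount time rather than stored as a Kubernetes Secret object, the cluster itself can remain unaware of the credentials.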

Availability

Red Hat OpenShift 4.18 is now generally available. More information, including how to upgrade to the latest version, is available here.

Mike Barrett, vice president and general manager, Hybrid Cloud Platforms, Red Hat:
“Many organizations have reached an inflection point with their virtualized infrastructure, needing to make decisions quickly on their future direction. Red Hat OpenShift meets today’s virtualization needs and offers a simplified pathway to migration, but also enables organizations to keep an eye on the future via application modernization. With Red Hat OpenShift, organizations are able to protect their traditional investments while adopting a platform that enables them to seamlessly transition to an AI future.”

Additional Resources


  • Learn more about Red Hat OpenShift 4.18
  • Read the blog: What’s new for developers in Red Hat OpenShift 4.18

About Red Hat, Inc.

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

AMD and Fujitsu Partner to Create Energy-Efficient Computing Platforms and Accelerate Open-Source AI Initiatives


On November 1, 2024, AMD and Fujitsu announced a strategic partnership to develop more sustainable computing infrastructure aimed at accelerating open-source AI initiatives.

The partnership aims for Sustainable Computing by focusing on creating energy-efficient computing platforms that combine Fujitsu's next-generation Arm-based FUJITSU-MONAKA processors with AMD's Instinct™ accelerators.

Further, to accelerate open-source artificial intelligence (AI), the collaboration aims to advance the development of open-source AI software optimized for these new computing platforms. This will help expand the ecosystem and promote the societal implementation of AI.

Additionally, the partnership will focus on three strategic areas – engineering innovative computing platforms, fostering an open-source ecosystem, and expanding business opportunities globally.

Fujitsu and AMD plan to jointly develop these computing platforms by 2027, offering more sustainable options in both hardware and software.

This collaboration is expected to bring together Fujitsu's advanced CPU technology and AMD's GPU technology to create a more sustainable and efficient AI/HPC platform ecosystem.

Fujitsu and AMD will also collaborate on marketing and co-creation with customers to offer these AI computing platforms globally. In addition, to expand AI use cases and promote the societal implementation of AI, based on the computing infrastructure of FUJITSU-MONAKA and AMD Instinct accelerators, both companies will collaborate to build an open and more sustainable AI/HPC platform ecosystem, including a joint customer center.

Through this collaboration, Fujitsu and AMD are bringing their respective world-leading technologies together and will promote open-source AI initiatives by offering more sustainable options in both hardware and software that can be utilized by many companies.

India Launches BharatGen, the World’s 1st Govt-Funded Multimodal LLM Project


India has just launched the BharatGen project, a pioneering initiative aimed at developing generative AI in Indian languages. This state-funded project is spearheaded by IIT Bombay under the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS).

BharatGen is notable for being the world's first government-funded multimodal large language model project. It aims to create high-quality text and multimodal content in various Indian languages, making AI more accessible and inclusive. The project will benefit government, private, educational, and research institutions, and is expected to be completed in two years.

AtmaNirbhar Bharat, Promoting Indian Languages & Social Equity

By leveraging generative AI, the BharatGen project can help preserve and promote the rich linguistic diversity of India. This initiative not only supports cultural heritage but also ensures that technological advancements are inclusive and accessible to a broader population.

BharatGen aligns with the vision of Atmanirbhar Bharat by creating foundational AI models specifically tailored for India. By developing AI technologies within India, BharatGen reduces reliance on foreign technologies and strengthens the domestic AI ecosystem for startups, industries, and government agencies.

By democratizing access to AI through foundational models and detailed technical recipes, BharatGen allows innovators, researchers, and startups to build AI applications quickly and affordably. A core feature of BharatGen is its focus on data-efficient learning, particularly for Indian languages with limited digital presence. Through fundamental research and collaboration with academic institutions, the initiative will develop models that are effective with minimal data, a critical need for languages underserved by global AI initiatives. BharatGen will also foster a vibrant AI research community through training programs, hackathons, and collaborations with global experts.

One of the primary goals of BharatGen is to deliver generative AI models and applications as a public good. This means prioritizing India’s socio-cultural and linguistic diversity while ensuring that the benefits of AI reach all segments of society.

This initiative also aligns with India's broader goals of promoting social equity, cultural preservation, and linguistic diversity through advanced AI technologies.

Technical Aspects of BharatGen

The BharatGen project is being developed by a consortium led by IIT Bombay under the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS). The project is managed by the TIH Foundation for IoT and IoE at IIT Bombay.


Several premier academic institutions are involved in this initiative, including IIIT Hyderabad, IIT Mandi, IIT Kanpur, IIT Hyderabad, IIM Indore and IIT Madras.

This collaborative effort aims to create generative AI systems that can produce high-quality text and multimodal content in various Indian languages.

BharatGen focuses on developing multimodal large language models that can handle text, speech, and computer vision tasks. This means the models will be capable of understanding and generating content across different types of media. BharatGen will be developed as an open-source platform. This approach encourages collaboration and innovation, allowing researchers and developers to contribute to and benefit from the project.

The models will be built and trained using datasets that are specifically curated to represent Indian languages and contexts. This ensures that the AI is culturally and contextually relevant.

BharatGen’s roadmap outlines key milestones up to July 2026. These include extensive AI model development, experimentation, and the establishment of AI benchmarks tailored to India’s needs. BharatGen will also focus on scaling AI adoption across industries and public initiatives.

Reliance Industries Testing 'Jio TV OS', the 1st Made-in-India Smart TV OS


Reliance Industries has taken a significant step by beta testing India's first homegrown smart television operating system (OS), known as Jio TV OS.

Jio TV OS is based on Google's Android platform. It will compete with established players such as Samsung's Tizen OS, LG's webOS, Skyworth's Coolita OS, and Hisense's Vidaa OS.

Reliance aims to bundle its other apps, including JioCinema, with the smart TVs running Jio TV OS. The integration of Jio broadband connections is also part of the strategy. Licensing deals and JioCinema integration are expected to drive revenue.

Jio TV OS is an open-source platform, encouraging developers to build optimized apps not only for smart TVs but also for other connected devices like smartphones.

Reliance is not charging any licensing fees, aiming to make the OS widely popular.

While large multinational companies like Samsung and LG may not adopt this OS due to their existing platforms, Reliance plans to collaborate with homegrown and smaller brands to increase adoption.

The smart TV market in India faced a 14% year-on-year decline in shipments during the January to March period. However, the segment of 55-inch and above screen size saw a 23% year-on-year growth in the same quarter. The market is expected to decline by 10% in calendar year 2024, primarily due to reduced demand for small-screen TVs.

Overall, Reliance Industries' Jio TV OS aims to disrupt India's smart TV market by offering an open-source alternative and leveraging its ecosystem of services. Keep an eye out for its official launch around Diwali.

SAP Announces Its New Open Source Manifesto


SAP recently published its open source manifesto, reinforcing its commitment to open source principles and community engagement. The manifesto aligns with SAP’s established presence as a key contributor to open source. In fact, SAP currently ranks among the top ten commercial open-source contributors globally on GitHub, according to the Open Source Contributor Index (OSCI).

The manifesto provides guidance to internal developers, as well as customers, partners, and the developer ecosystem. It emphasizes transparency, security, and shared success, aiming to accelerate innovation and collaboration on a significant scale. By embracing open source, SAP not only accelerates its own innovation but also empowers its customers and partners to build on a solid foundation.

SAP actively contributes to several open source projects and foundations. Some notable ones are — Gardener, Kyma, and OpenUI5.

SAP also actively supports and sponsors organizations in the open source community, including the Linux Foundation, Cloud Native Computing Foundation, and the Apache Software Foundation. If anyone is interested, one can explore more of SAP's open source projects on GitHub.

The manifesto coincides with SAP’s participation in the Open Reference Architecture project under the EU’s IPCEI-CIS initiative, exemplifying SAP’s support for digital sovereignty in cloud infrastructure and services. This project’s development, contributed as open source, showcases SAP’s transparent approach to fostering community-led growth and innovation.

Infosys Topaz and Intel to Democratize AI by Bringing Open Standards in AI Hardware and Software


Infosys Topaz and Intel Collaborate to Accelerate Enterprise Growth and Efficiency with Generative AI

Expanded collaboration will help democratize AI by bringing open standards in AI hardware and software stack across edge, core, and cloud computing

Infosys and Intel today announced that they have expanded their strategic collaboration to assist global enterprises in accelerating their AI journeys. The advanced artificial intelligence (AI) solutions offered as part of this collaboration aim to help businesses become cost-effective and performance-driven while being responsible by design.

Infosys Topaz, an AI-first set of services, solutions and platforms that help enterprises accelerate business value using generative AI technologies, will adopt Intel-based solutions, including Intel® Xeon® processors, Intel® Gaudi® accelerators, Intel® Core™ Ultra Processors, software, and future generation products, to enable customers to integrate Gen AI into their businesses and adhere to the emerging guardrails of AI.

Additionally, Infosys will leverage the Intel AI training assets to skill up its employees on Intel product portfolio to provide generative AI expertise to its wide network of global customers across industries.


Intel aims to bring AI everywhere by supporting an open AI software ecosystem. The company aims to accelerate the adoption of Intel Xeon processors and Gaudi accelerators for Gen AI use cases. The collaboration with Infosys and local ISVs provides an opportunity to develop software and tools that drive Intel-based technology adoption and reduce overall total cost of ownership (TCO) for customers.

In the latest MLPerf v4.0 benchmark results, the Intel Gaudi 2 AI accelerator remains the only benchmarked alternative to the Nvidia H100 for generative AI (GenAI) performance and provides strong performance-per-dollar.

Balakrishna D.R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, said, “Infosys has embraced an AI-first strategy to deliver advanced AI services to clients seeking to unlock significant business value across their operations. The Infosys Topaz offerings and solutions seamlessly complement Intel’s core stack and its 'AI Everywhere' strategy. By combining our strengths, we are helping enterprises on their journey to become AI-first and accelerate business value with our industry leading AI solutions.”

Christoph Schell, Executive Vice President and Chief Commercial Officer, Intel Corporation, said, “Customers and developers are looking for competitive TCO and time-to-value AI solutions to scale and win. Our approach in bringing AI everywhere is by supporting an open AI software ecosystem and accelerating the adoption of Intel Xeon and Gaudi accelerators for Gen AI use cases. We believe our collaboration with Infosys and local ISVs is a huge opportunity for us to develop software and tools which can help drive Intel-based technology adoption and reduce the overall TCO for our customers.”


Google Introduces Gemma, Open-Source Lightweight GenAI Models That Can Run Anywhere


Google has just introduced a new generation of open models to assist developers and researchers in building AI responsibly — Gemma, a family of lightweight, state-of-the-art open models built from the same research and technology used to create Google’s Gemini models.

Google says that Gemma models achieved exceptional benchmark results at its 2B and 7B sizes, even outperforming some larger open models.

Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.”

Google is also releasing tools to support developer innovation, foster collaboration, and guide responsible use of Gemma models.

Gemma is available worldwide starting today. Alongside the models, Google has released a new Responsible Generative AI Toolkit to help developers and researchers prioritize building safe and responsible AI applications. The toolkit includes:
  • Safety classification: a novel methodology for building robust safety classifiers with minimal examples.
  • Debugging: a model debugging tool that helps developers investigate Gemma's behavior and address potential issues.
  • Guidance: best practices for model builders, based on Google’s experience in developing and deploying large language models.

Free credits for research and development

Gemma is built for the open community of developers and researchers powering AI innovation. You can start working with Gemma today with free access on Kaggle, a free tier for Colab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for Google Cloud credits of up to $500,000 to accelerate their projects.

AMD Acquiring Indian-Founded Nod.ai, An Open-Source AI Software Startup

AMD Acquiring Indian-Founded Nod.ai, An Open-Source AI Software Startup

Nod.ai to bolster AMD open-source software strategy

Advanced Micro Devices, Inc. (AMD), a California, US-based multinational semiconductor company, today announced the signing of a definitive agreement to acquire Nod.ai to expand the company’s open AI software capabilities. The addition of Nod.ai will bring AMD an experienced team that has developed industry-leading software technology that accelerates the deployment of AI solutions optimized for AMD Instinct™ data center accelerators, Ryzen™ AI processors, EPYC™ processors, Versal™ SoCs and Radeon™ GPUs.

The acquisition strongly aligns with the AMD AI growth strategy centered on an open software ecosystem that lowers the barriers of entry for customers through developer tools, libraries and models.

Founded in 2013 by Anush Elangovan and Harsh Menon, Nod.ai focuses on machine intelligence and enables its customers to effectively and efficiently train and deploy machine learning models using advanced compiler and code-generation technologies.

Prior to establishing Nod.ai, Anush worked on the first ARM-based Google Chromebook. He holds a bachelor’s degree from Mepco Schlenk Engineering College, affiliated with Madurai Kamaraj University in Tamil Nadu, India, and an MS from Arizona State University. Harsh is a Stanford graduate who was an early employee at Zee.Aero/Kittyhawk, where he worked on flying cars.

nod had raised about $36.5 million in total funding from several investors including 8Square Capital, Atlantic Bridge, Pointguard Ventures and Walden International.

“The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware,” said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. “The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai’s technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today.”

“At Nod.ai, we are a team of engineers focused on solving problems quickly and moving at pace in an industry of constant change to develop solutions for the next set of problems,” said Anush Elangovan, co-founder and CEO, Nod.ai. “Our journey as a company has cemented our role as the primary maintainer and major contributor to some of the world's most important AI repositories, including SHARK, Torch-MLIR and OpenXLA/IREE code generation technology. By joining forces with AMD, we will bring this expertise to a broader range of customers on a global scale.”

Nod.ai delivers optimized AI solutions to top hyperscalers, enterprises and startups. The compiler-based automation software capabilities of Nod.ai’s SHARK software reduce the need for manual optimization and the time required to deploy highly performant AI models to run across a broad portfolio of data center, edge and client platforms powered by AMD CDNA™, XDNA™, RDNA™ and “Zen” architectures.

In February last year, AMD acquired Xilinx, Inc., a semiconductor company, for about $50 billion. Xilinx is known for inventing the first commercially viable field-programmable gate array (FPGA) and creating the first fabless manufacturing model.

Delhi-based SFLC.in Moves Kerala High Court Against Recent Government Ban on 14 Open Source Apps

Delhi-based SFLC.in Moves Kerala High Court Against Recent Government Ban on 14 Open Source Apps

Delhi-based legal not-for-profit organization SFLC.in has filed a writ petition before the Kerala High Court seeking production and quashing of the order passed by the Central Government under Section 69A of the Information Technology Act, 2000, banning free and open source software applications.

The Kerala High Court today issued notice through court order to the Deputy Solicitor General of India on a challenge to the blocking of Element and Briar applications. The applications, which are Free and Open Source in nature, were blocked by the Central Government under Section 69A, IT Act, 2000 and Blocking Rules, 2009.

The matter will come up for hearing next on 29th May, 2023. SFLC.in provided legal assistance to the petitioners in the matter.

On 1st May, 2023, several news media outlets reported that the Central Government had blocked 14 mobile messaging apps (under Section 69A of the Information Technology Act, 2000), on the grounds of these apps being allegedly used for communication between bad actors in the region of Jammu and Kashmir. The list of blocked applications includes two widely-used FOSS applications, namely ‘Element’ and ‘Briar’, which software developers, technologists, innovators and entrepreneurs utilize on a daily basis.

In its petition, SFLC.in noted that the use of such FOSS applications enabled several benefits like enhanced interoperability, reduced costs, vendor independence, greater localization and developing local growth of the IT sector in India.

The petition also notes that the order for blocking the applications places unreasonable restrictions on the exercise of Freedom of Speech and Expression, and Freedom to Carry Out Trade and Business, as guaranteed under the Constitution of India. The order has also not been made public, which amounts to a gross violation of the principles of natural justice: no notice has been served on the Petitioners, nor has any opportunity of hearing been granted to date, as mandated under the relevant rules, i.e., the Blocking Rules, 2009.

The petition has been filed to produce the order banning these applications and staying the operation and implementation of the order. It also challenges Rule 16 of the Blocking Rules.

About SFLC.in:

SFLC.in is a donor-supported legal services organization that has united lawyers, policy analysts, technologists, business professionals, students and citizens to protect freedom in the digital world since 2010. In discussions of technology where Industry and Governments have representations, we endeavor to present the rights of Indian citizens. We fight the good fight in keeping the Internet open, secure and safe for all.

Our objective is to help Indian policymakers make informed and just decisions in the use and adoption of technology for production of pro-humanity IT. We promote innovation and open access to knowledge by helping developers make great Free and Open Source Software (FOSS), and protect privacy and civil liberties of citizens in the digital world. This is achieved through education, original research, policy engagement and provision of free legal services. For press queries, please drop a mail to press@sflc.in.


Open-Source, Carbon-Neutral Layer-1 Blockchain the XDC Network Secures $50 Mn from LDA Capital

The XDC Network Secures $50M From LDA Capital to Drive Ecosystem Development

The founders of the XDC Network have leveraged a portion of their personal token allocations to secure a $50 million commitment from global alternative investment group LDA Capital Limited to accelerate the expansion and development of Layer 2 projects across the XDC Ecosystem and to facilitate network adoption and real-world utility. LDA support will help fund new ventures and entities laser-focused on increasing network adoption among retail and institutional participants, jumpstarting on-chain activity and Total Value Locked (TVL), and supporting technological innovation.

Launched in 2019, the XDC Network is an enterprise-grade, carbon-neutral, hybrid blockchain purpose-built from the ground up to meet the growing needs of global financial institutions, retail users, and entrepreneurs for fast, secure, decentralized network products. The number of smart-contract-based projects built on XDC has grown rapidly despite challenging macroeconomic conditions, with DEXs, metaverses, NFT marketplaces, oracles, decentralized email providers and cloud storage, payment dApps, legal document repositories, and tokenized real-world assets (to name but a few) all planting roots in the network in recent months. With the addition of LDA’s support, the pace of growth is expected to accelerate further.

"Our collaboration with LDA will usher in an exciting new period in the XDC Network’s history by enabling unprecedented growth of the Layer 2 ecosystem across various use-cases, with an emphasis on bringing ever more Total Value Locked (TVL) to the network via hyper-scalable dApps, DEXs, TradFi/DeFi and advanced products filling the gaps between traditional and decentralized finance." - Ritesh Kakkad, Co-founder XinFin (XDC) Network.

“Though there have been many institutional funds eager to participate in the XDC Network over the years, we’ve always looked for genuine strategic partners, not just funders, who can actively and strategically advance the ecosystem, while bringing utility to the network, and making XDC the preferred Layer 1 for institutions the world over–in LDA, we’ve found such a partner.” - Atul Khekade, Co-founder XinFin (XDC) Network.

"LDA Capital is pleased with the developments made in the XDC Network by the XDC ecosystem. In addition to its funding, LDA will offer strategic counsel and support to help XDC Blockchain Network assume its position as a market leader." - commented Anthony Romano, LDA Capital Ltd.

About LDA CAPITAL 

LDA Capital is a global alternative investment group with expertise in cross border transactions worldwide. Our team has dedicated their careers to international & cross border opportunities having collectively executed over 250 transactions in growth stage businesses across 43 countries with aggregate transaction values of over US$11 billion. For more information please visit: www.ldacap.com; For inquires please email: info@ldacap.com.

About XDC Network

The XDC Network is an open-source, carbon-neutral, enterprise-grade, EVM- (Ethereum virtual machine) compatible, Layer 1 blockchain that has been operationally successful since 2019. The network obtains consensus via its delegated proof-of-stake (XDPoS) mechanism, which allows for 2-second transaction times, near-zero gas expenses ($0.0001), over 2000 TPS, and interoperability with ISO 20022 financial messaging standards. The XDC Network powers a wide range of novel blockchain use cases that are secure, scalable, and highly efficient.

Website: www.xinfin.org

Intel, Accenture Released Free Open Source AI Reference Kits to Make Artificial Intelligence More Accessible

Intel Released Free Open Source AI Reference Kits to Make Artificial Intelligence More Accessible

Artificial Intelligence is finding applications across industries, and while big IT services providers have built capabilities for delivering AI solutions to their clients, it is not an easy task for small-scale companies or freelance developers. Companies and developers are looking to infuse AI into their solutions, and the reference kits contribute to that goal.

Intel, in partnership with Accenture, has released the first set of open source AI reference kits, specifically designed to make AI more accessible to organizations in on-prem, cloud and edge environments. These trained AI reference kits are offered to the open source community to help enterprises innovate and accelerate their digital transformation journeys.

These kits enable data scientists and developers to learn how to deploy AI faster and more easily across healthcare, manufacturing, retail and other industries with higher accuracy, better performance and lower total cost of implementation.

The reference kits are open source, pre-built AI with meaningful enterprise contexts for both greenfield AI introduction and strategic changes to existing AI solutions. 

Built on the foundation of the oneAPI open, standards-based, heterogeneous programming model, these kits are also available on GitHub. They include AI model code, end-to-end machine learning pipeline instructions, libraries and Intel oneAPI components for cross-architecture performance. Four kits are available for download, described below:
  1. Utility asset health:
    For — Energy consumption, Power Distribution

    This predictive analytics model was trained to help utilities deliver higher service reliability. It uses Intel-optimized XGBoost through the Intel® oneAPI Data Analytics Library to model the health of utility poles with 34 attributes and more than 10 million data points.

    Data includes asset age, mechanical properties, geospatial data, inspections, manufacturer, prior repair and maintenance history, and outage records. The predictive asset maintenance model continuously learns as new data, like new pole manufacturer, outages and other changes in condition, are provided.

  2. Visual quality control:
    For — Quality control (QC) in any manufacturing operation.

    The challenge with computer vision techniques is that they often require heavy graphics compute power during training and frequent retraining as new products are introduced.

    The AI Visual QC model was trained using Intel® AI Analytics Toolkit, including Intel® Optimization for PyTorch and Intel® Distribution of OpenVINO™ toolkit, both powered by oneAPI, to make training and inferencing 20% and 55% faster, respectively, compared to a stock implementation of the Accenture visual quality control kit without Intel optimizations, for computer vision workloads across CPU, GPU and other accelerator-based architectures. Using computer vision and SqueezeNet classification, the AI Visual QC model used hyperparameter tuning and optimization to detect pharmaceutical pill defects with 95% accuracy.

  3. Customer chatbot:
    For —  Conversational chatbots.

    AI models that support conversational chatbot interactions are massive and highly complex. This reference kit includes deep learning natural language processing models for intent classification and named-entity recognition using BERT and PyTorch. Intel® Extension for PyTorch and Intel Distribution of OpenVINO toolkit optimize the model for better performance (45% faster inferencing compared to a stock implementation of the Accenture customer chatbot kit without Intel optimizations) across heterogeneous architectures, and allow developers to reuse model development code with minimal code changes for training and inferencing.

  4. Intelligent document indexing:
    For — Analyzing mass volumes of documents

    AI can automate the processing and categorizing of these documents for faster routing and lower manual labor costs. Using a support vector classification (SVC) model, this kit was optimized with Intel® Distribution of Modin and Intel® Extension for Scikit-learn powered by oneAPI.

    These tools improve data pre-processing, training and inferencing times to be 46%, 96% and 60% faster, respectively, compared to a stock implementation of the Accenture intelligent document indexing kit without Intel optimizations, for reviewing and sorting the documents at 65% accuracy.
Intel has planned over 30 AI reference kits with trained machine learning and deep learning models for release to the open source community. Each kit includes model code, training data, instructions for the machine learning pipeline, libraries, and Intel® oneAPI components.
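As a concrete illustration of the document-indexing approach described above, the pipeline below trains a support vector classifier (SVC) on TF-IDF text features. This is a minimal sketch: the toy corpus and labels are invented for illustration, and plain scikit-learn stands in for the Intel® Extension for Scikit-learn and Intel® Distribution of Modin that the actual kit uses for acceleration.

```python
# Sketch of an SVC-based document classifier, the technique named in
# the intelligent document indexing kit. Corpus and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "invoice payment due amount billing",
    "billing statement invoice total due",
    "patient diagnosis treatment clinical notes",
    "clinical trial patient medical record",
]
labels = ["finance", "finance", "healthcare", "healthcare"]

# TF-IDF turns each document into a weighted term vector;
# LinearSVC learns a separating hyperplane between the categories.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)

print(clf.predict(["outstanding invoice billing due"])[0])   # finance
print(clf.predict(["patient medical treatment notes"])[0])   # healthcare
```

In production, the same pipeline shape applies; the Intel-optimized libraries are drop-in accelerators for the pre-processing and training stages rather than a different API.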

Tech Data, a TD SYNNEX Company, Seals Strategic Partnership with Yugabyte In A First Globally

Tech Data, a TD SYNNEX Company, Seals Strategic Partnership with Yugabyte In A First Globally

Partnership to support growth of cloud-native database market in Asia Pacific & Japan

Tech Data, a TD SYNNEX company, today announced a new partnership with Yugabyte, a leader in open source distributed SQL databases. This partnership in the Asia Pacific & Japan region is the first for Yugabyte across TD SYNNEX & Tech Data globally. It supports Tech Data’s strategy of driving digital transformation built on data for business partners by accelerating capabilities in cloud-native architectures, especially for born-in-the-cloud partner ecosystems.

Transformative trends such as mission-critical microservices (with their promise of improved scalability over monolithic systems [1]), born-in-the-cloud apps, and edge and IoT applications are driving shifts in today’s database market. IDC Futurescape predicts that “by 2024, nearly 60% of organizations’ new custom-developed applications will be built and managed using microservices as foundations for stronger and higher performing automation.” [2] These shifts create the need for modern databases that can support these modern applications.

“We have observed a trend of end-customers looking to move apps to both hybrid and multi-cloud, as well as new startups building applications in the cloud. With a distributed SQL database built to power global-scale cloud native applications, Yugabyte is well positioned to value-add and support their needs as their operations scale,” shared Bennett Wong, Vice-President for Advanced Solutions, Modern Data Center & Analytics, Tech Data Asia Pacific & Japan.

“As more organizations and ISVs in Asia embrace digital transformation and app modernization, the need for a cloud-native database is rising. YugabyteDB is built to simplify data geo-distribution as well as to enable speed, scale, resiliency, security and portability, all critical for modern workloads. Our partnership with Tech Data will help accelerate the adoption of YugabyteDB to support partners in modernizing the data layer for their customers. Tech Data’s alignment to open source technologies and presence across the APJ market will allow us to quickly connect and engage with the partner ecosystem effectively,” said Danny Zaidifard, VP, Business Development & Partners at Yugabyte.

The benefits of this new partnership include:
  • Expanded capabilities in cloud native architectures for born in the cloud partner ecosystems, allowing enterprises to focus on business growth instead of complex data infrastructure management.
  • Full alignment with Tech Data’s Go-To-Market (GTM) strategy for Next Generation Data Centers in Infrastructure Modernization, Data Center Automation, Application Modernization, Cloud Operating Model, and Workload Segmentations, towards supporting digital transformation.
  • Being the cornerstone of Tech Data’s Analytics GTM strategy for Data Modernization, Data Fabric, and Hybrid Motion to support modern applications, where microservices need a cloud native relational database that is resilient, scalable, and geo-distributed to complement our customers’ data maturity journey.
"We are excited to collaborate with Yugabyte to support the enormous growth of independent software vendors and startups across the Asia Pacific and Japan region, particularly in markets like India where Tech Data has deep roots in the enterprise partner ecosystem. This new partnership supports our goal of being a leader in Data Lifecycle and also enables us to expand into application and database modernization,” added Bennett Wong.

[1] Microservices: Migration of a Mission Critical System, IEEE Transactions on Services Computing, Volume 14, Issue 5, Sep – Oct 2021.

[2] IDC Futurescape: Worldwide Developer and DevOps 2021 Predictions, Doc # US46417220, October 2020.

About Tech Data

Tech Data, a TD SYNNEX (NYSE: SNX) company, is a leading global distributor and solutions aggregator for the IT ecosystem. We’re an innovative partner helping more than 150,000 customers in 100+ countries to maximize the value of technology investments, demonstrate business outcomes and unlock growth opportunities. Headquartered in Clearwater, Florida, and Fremont, California, TD SYNNEX’s 22,000 co-workers are dedicated to uniting compelling IT products, services and solutions from 1,500+ best-in-class technology vendors. Our edge-to-cloud portfolio is anchored in some of the highest-growth technology segments including cloud, cybersecurity, big data/analytics, IoT, mobility and everything as a service. TD SYNNEX is committed to serving customers and communities, and we believe we can have a positive impact on our people and our planet, intentionally acting as a respected corporate citizen. We aspire to be a diverse and inclusive employer of choice for talent across the IT ecosystem. For more information, visit www.TDSYNNEX.com or follow us on Twitter, LinkedIn, Facebook and Instagram.

About Yugabyte

Yugabyte is the company behind YugabyteDB, the open source, high-performance distributed SQL database for building global, cloud-native applications. YugabyteDB serves business-critical applications with SQL query flexibility, high performance and cloud-native agility, thus allowing enterprises to focus on business growth instead of complex data infrastructure management. It is trusted by companies in cybersecurity, financial markets, IoT, retail, e-commerce, and other verticals. Founded in 2016 by former Facebook and Oracle engineers, Yugabyte is backed by Lightspeed Venture Partners, 8VC, Dell Technologies Capital, Sapphire Ventures, and others. www.yugabyte.com

Google Cloud Compares Blockchain to Open Source Internet

Google Cloud Compares Blockchain to Open Source Internet

Blockchain technology is gaining more attention as interest in cryptocurrency reaches a fever pitch. Google Cloud has gotten involved with blockchain by creating a Digital Assets Team. The team will focus on blockchain technology, which Google thinks will eventually become as common as the internet and open source.

The Growth of Blockchain

Blockchain technology is relatively new. And some people think it’s too early to compare it to developments such as the internet and open source. Also, critics say that blockchain isn’t easy to understand and that most people don’t even know it exists.

However, Google believes that blockchain is here to stay. As long as there’s an interest in cryptocurrency, blockchain will play a role. The company also states that Microsoft Azure and Amazon Web Services have made use of blockchain technology.

Google and Cryptocurrency Capabilities

Google plans to eventually allow cloud customers to receive and make payments using cryptocurrencies. Several companies now accept digital currency as payment, including AT&T. Dealing with cryptocurrencies is currently the most popular use for blockchain.

However, Google’s interest in blockchain is a surprise to some. Especially since Microsoft ended its Azure Blockchain service. Microsoft abruptly ended the service without any notice.

It’s assumed that Microsoft ended the service because it wasn’t profitable. However, Google is betting on its service being a success. The company believes that blockchain technology will play a role in the future, and that it has tremendous potential to create valuable and innovative opportunities for businesses and consumers.

Google also states that as blockchain enters the mainstream, more companies will see its value. And those companies will need secure, reliable, and scalable infrastructure to handle the change. The company plans for Google Cloud to play a role in the mainstream adoption of blockchain technology.

Google’s Digital Asset Team will help customers deal with various aspects of blockchain-based platforms. The team plans to help with short and long-term goals, such as:
  1. Taking part in on-chain governance and node validation with selected Google Cloud partners.
  2. Creating a clean environment for blockchain. The plan is to help users and developers use blockchain in a way that supports Google’s social, environmental, and governance endeavors.
  3. Hosting BigQuery datasets publicly on Google’s Marketplace. Datasets may include full blockchain transaction records for major cryptocurrencies such as Bitcoin, Ethereum, Litecoin, Dogecoin, and Hedera.
Google believes that digital assets are already changing how the world handles information. Digital assets offer an entirely new way to store and move information, and they are changing what people value, as seen in the rise of digital art in the form of NFTs, which often sell for thousands of dollars.
Google Cloud wants to become a part of this current evolution with decentralized networks and blockchain technology. No one is sure what the future holds for cryptocurrency, blockchain technology, or decentralized networks. But companies such as Google, Amazon, and others are getting involved in what will possibly change the world.
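To make the public-dataset point above concrete, here is the kind of SQL a developer could run in BigQuery: a daily transaction count over the Bitcoin chain. The `bigquery-public-data.crypto_bitcoin` dataset and its `transactions` table are Google's publicly listed Bitcoin dataset, though exact column names should be checked against current documentation; the snippet only composes the query, since running it requires Google Cloud credentials.

```python
# Illustrative BigQuery SQL over Google's public Bitcoin dataset.
# Composing the query is local; executing it requires GCP credentials.
query = """
SELECT
  DATE(block_timestamp) AS day,
  COUNT(*) AS tx_count
FROM `bigquery-public-data.crypto_bitcoin.transactions`
WHERE block_timestamp >= TIMESTAMP('2024-01-01')
GROUP BY day
ORDER BY day
"""

# With credentials configured, it would be submitted via the
# google-cloud-bigquery client library, roughly:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
print(query.strip().splitlines()[0])  # SELECT
```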

Blockchain-based Open Source Software Maker Tea Raises $8 Mn in Funding Led by Binance Labs

Tea Raises $8 Million Led by Binance Labs to Create New Open Source Software on the Blockchain

Tea Raises $8 Million Led by Binance Labs to Create New Open Source Software on the Blockchain

Tea, a new company building an open source software platform on the blockchain, today announced an $8 million seed funding round led by Binance Labs, the venture capital arm and innovation incubator of Binance, the world's leading blockchain ecosystem and cryptocurrency infrastructure provider.

Open source has been foundational for the Internet, underpinning everything online for more than twenty-five years. Tea Co-Founder and CEO Max Howell created the open-source software package management system Homebrew, also known as “brew,” which grew into the most contributed-to open source software program in the world. Homebrew has been used by tens of millions of developers worldwide and has served as a backbone on which the largest technology corporations build their products, often without directly contributing to its development.

Tools like Homebrew are uniquely placed, underneath all developer tools, and thus can manage a fair and equitable transfer of value to the entire open source ecosystem. Tea represents a substantial evolution of traditional open source software, creating a new category in which these volunteer developers are compensated and involved in open governance. 

Tea controls the graph of open source that underpins all modern digital infrastructure. Packages will be released on-chain as NFTs with their dependency metadata included. As developers use Tea, governance tokens become available according to actual, real-life usage, in order to drive the entire open source ecosystem.

“Web 2.0 accrued fortunes on the backs of free labor by unpaid open source volunteers,” said co-founder Max Howell. “web3 has the power to change this. There is so much that can be done in this space that nobody has tried before. Tea is an approachable, intuitive collection of tools that will work with both Web 2.0's Internet of today along with web3.”

Tea is decentralized at the day-to-day development and installation layers, which increases reliability and provides a native, built-in “version management” for every tool in the stack. Tea's decentralization also offers tangible benefits to ecosystem security. Every layer of apps and dapps is signed and verified on-chain.

"We’re not changing the nature of open source. It’s still free,” explained co-founder Timothy Lewis. “Software wants to be free, but programmers need to be compensated. We’re bringing the creator economy to open source. Our vision is to fix how open source is funded and create the tools that will accelerate its creation for the benefit of all humanity."

Ken Li, Investment Director at Binance Labs, said, “We are excited to support Max, Tim and team, as part of our overall investment push into the infrastructure levers that shift Web 2.0 to Web 3.0. We believe the Tea team has the potential to introduce new paradigms that allow open source compensation without direct payment. The platform can solve a core problem for the open-source software development community by utilizing the key value proposition of decentralized token economies."

This funding will be used by Tea Inc. to hire additional resources to continue work on the protocol, software, and the development of the community. Additional funding participants include XBTO, Lattice Capital, Darma Capital, Coral Capital, Woodstock, Rocktree, SVK, and MAKE services. 

To learn more, visit https://tea.xyz.

About Tea Inc.

Tea Inc. is a Puerto Rico-based corporation that is building the open source toolkit that makes development possible and builds the internet: it’s “brew2 for web3”. Developers can work on any platform, CI/CD on any platform, deploy on any platform; Tea abstracts this detail away so developers can get on with the work that matters. Those interested in joining the team fixing how open source is funded and creating the tools that will accelerate its creation for the benefit of all humanity should reach out on Discord or GitHub. 

Nexus backed Hasura Becomes 1st Open-Source-based Unicorn from India; Raises $100 Mn Series C at $1 Bn Valuation

Hasura Becomes 1st Open-Source-based Unicorn from India; Raises $100 Mn
(L-R) Tanmai & Rajoshi

Hasura Announces $100M in Series C Funding at a $1B Valuation to Make GraphQL Available to Everyone

Greenoaks-led round will be used to expand R&D and go-to-market activities for GraphQL innovation leader

GraphQL innovation leader Hasura today announced that it has secured $100M in funding in a round led by Greenoaks with participation from existing investors Nexus Venture Partners, Lightspeed Venture Partners and Vertex Ventures. The Series C round brings the total capital raised by Hasura to $136.5 million and the company’s valuation to $1 billion. 

Hasura plans to use the funding to accelerate research and development and expand go-to-market activities globally for the company’s GraphQL Engine, which makes it fast and easy for even those with zero GraphQL expertise to compose a GraphQL API from existing APIs and databases. Hasura has been downloaded more than 400M times and has earned more than 25,000 GitHub stars since its introduction in 2018.

“There are few better signs of a powerful developer experience than enthusiastic adoption. And on this count, there aren’t many companies like Hasura,” said Neil Shah, partner at Greenoaks. “Since the launch of their GraphQL engine in 2018, Hasura has witnessed explosive uptake across countless organizations, from grassroots open source projects, to some of the largest companies in the world. The common thread is substantial improvements in developer productivity and decreased time to market for mission-critical applications. Now, Hasura Cloud has taken this a step further, truly democratizing GraphQL, and letting anyone access their data with speed and simplicity. We are thrilled to partner with Hasura as they become a core primitive for building cloud-native applications."

Hasura is designed to make web application development faster than ever before by eliminating bottlenecks to data access for frontend and fullstack developers. The platform cuts down the time and niche expertise required to build GraphQL APIs for data access by automating the repetitive work involved in mapping models to APIs with common access patterns like pagination, filtering, joining, setting up authorization rules, and optimizing performance.
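To make the automation concrete, here is a hedged sketch of the kind of query Hasura's GraphQL Engine generates from a database table. The pagination (`limit`/`offset`), filtering (`where`), and sorting (`order_by`) arguments follow Hasura's publicly documented conventions, but the `orders` table and its fields are a hypothetical example, not a schema from the article:

```javascript
// Sketch: a query against a Hasura-generated GraphQL API for a
// hypothetical "orders" table. The argument names (where, order_by,
// limit, offset) mirror Hasura's query conventions; the schema is
// illustrative only.
const query = `
  query RecentOrders($customerId: uuid!) {
    orders(
      where: { customer_id: { _eq: $customerId } }  # filtering
      order_by: { created_at: desc }                # sorting
      limit: 10                                     # pagination
      offset: 0
    ) {
      id
      total
      created_at
    }
  }
`;

// The query travels as a plain HTTP POST body; Hasura's authorization
// rules are applied server-side based on the caller's session/role.
const payload = JSON.stringify({
  query,
  variables: { customerId: "00000000-0000-0000-0000-000000000000" },
});

console.log("payload bytes:", payload.length);
```

The point of the press release's "zero GraphQL expertise" claim is that developers never write the resolver code behind such a query; the engine derives it from the database schema.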

With operational data increasingly distributed across multiple sources, and developers consuming that data from compute environments lacking built-in security and authorization, Hasura provides data APIs that can connect to multiple services and data sources, embed domain-specific authorization logic, and deliver the necessary security, performance, and concurrency.

“This funding enables Hasura to greatly increase our innovation velocity, which in turn allows our rapidly-expanding user base to deliver software even faster,” said Hasura CEO Tanmai Gopal. “Over the last few years, we’ve worked closely with our users and customers to address a massive gap in delivering and consuming data via an API standard that developers love – GraphQL. With this funding round, our investors and the Hasura team are doubling down on our vision to solve data access and unlock the next decade of developer productivity. We’re going to be addressing the needs of our users by adding support for their favorite data systems much faster. The Hasura GraphQL Engine lets developers of all types, regardless of their GraphQL experience, start building GraphQL APIs without delay. We look forward to seeing what amazing things they build next using Hasura!”

For more information about Hasura, and to connect with the community, join the Hasura monthly community call or register to attend HasuraCon’22, being held June 28-30, 2022.

About Hasura

Hasura is helping to build the modern world of globally relevant, data-driven applications and APIs. Hasura’s range of data access solutions helps organizations accelerate product delivery by instantly connecting data and services to applications with GraphQL APIs. For more information, go to: https://hasura.io or follow @HasuraHQ on Twitter.

Open Source Low-Code Software Tooljet Raises $1.5 Mn in Funding Led by Nexus Venture Partners


  • Open source low-code software for building internal tools
  • Adopted by hundreds of startups, scale-ups and unicorns
  • Targeting a worldwide low-code development tech market worth $13.8 billion

ToolJet, an open source low-code software for building internal tools, has raised $1.5 million in a seed funding round led by Nexus Venture Partners with participation from Ratio Ventures, Better Capital, Alan Rutledge, as well as enterprise founders, operators and investors, including Rohan Murty (founder of Soroco), Sony Joy (head of Enterprise at Truecaller), Vipul Amler (founder of Saeloun), Mohammed Hisamuddin (founder of Entri) and Abhi Kumar of M12 Ventures. The company intends to use the funding for team expansion to accelerate the development of ToolJet’s core platform.

ToolJet’s technology enables companies to build their internal tools with minimal engineering effort. ToolJet’s platform is still in its early days, but it has already been adopted by hundreds of startups, scale-ups and even unicorns. ToolJet currently has integrations with more than 15 data sources such as MongoDB, PostgreSQL, Google Sheets and AWS S3.

Navaneeth PK

ToolJet was founded in April 2021 by Navaneeth PK, who previously co-founded MobioPush, which was acqui-hired by Freshworks in 2015. The ToolJet team brings rich experience in building enterprise SaaS applications.

“Companies are under pressure to do more with what they have. Open-source low-code tools enable developers to deliver what the business needs quickly and securely with enough flexibility to customize and extend,” said ToolJet Founder and CEO Navaneeth PK.

Sameer Brij Verma, Director, Nexus Venture Partners, commented, “We are thrilled to partner with ToolJet’s team on their journey to build a revolutionary open-source low-code / no-code platform with built-in collaboration, which will enable their customers to radically increase their operational clock speed and ability to launch new products.”

Building internal tools requires access to the company’s data. With the self-hosted editions of ToolJet, data access can be restricted to the company’s private networks.

Being open source allows users of ToolJet to build custom plugins and connectors using JavaScript. This flexibility is rarely possible with proprietary software, where custom requirements are almost always ignored.
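As a hedged illustration of that extensibility, the sketch below shows the general shape of a JavaScript connector: a class the platform can instantiate and invoke with data-source configuration and per-query parameters. The class, method and option names here are hypothetical and do not reproduce ToolJet's official plugin API:

```javascript
// Hypothetical sketch of a minimal ToolJet-style connector in plain
// JavaScript. A connector exposes a run() method that receives the
// saved data-source config (sourceOptions) and the per-query inputs
// from the app builder (queryOptions), and returns the query result.
// Names are illustrative, not the official ToolJet plugin interface.
class GreetingConnector {
  async run(sourceOptions, queryOptions) {
    const name = queryOptions.name || "world";
    return {
      status: "ok",
      data: { message: `Hello, ${name}!` },
    };
  }
}

// Usage: the platform would instantiate the connector and call run()
// whenever a query bound to this data source executes.
new GreetingConnector()
  .run({}, { name: "ToolJet" })
  .then((result) => console.log(result.data.message));
```

Because the connector is ordinary JavaScript, teams can wrap internal services or niche databases that a proprietary low-code vendor would never support out of the box.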

Since it was made public in June 2021, ToolJet’s GitHub repository has received contributions from more than 100 contributors across more than a dozen countries, including developers working at Cisco, Freshworks, Razorpay, and Hasura. Open source products evolve around their community, and constant feedback from that community has helped improve the product.

The worldwide low-code development technologies market is projected to reach $13.8 billion in 2021, according to the latest forecast by Gartner, Inc. Low-code adoption has been boosted by cost-optimization efforts and the shift to remote development during the ongoing COVID-19 pandemic.

Forrester predicts that 75% of development shops will use low-code platforms by the end of the year.

ToolJet Cloud offers a fully managed SaaS service, starting with a free “basic” plan. The enterprise edition of ToolJet ships with additional features, installation support, monthly software updates and priority technical support.

The free and open-source software (FOSS) “community edition” of ToolJet can be self-hosted for free without any restrictions on features or the number of users.

IndianWeb2.com © all rights reserved