
Analog Devices Launches CodeFusion Studio™ 2.0 to Simplify AI-Enabled Embedded Development

  • End-to-end AI workflow support with bring-your-own-model capability, model checks and performance profiling enables rapid deployment across ADI’s full hardware portfolio.
  • Unified configuration tools, multi-core support and integrated debugging streamline development across heterogeneous systems.
  • A new Zephyr-based modular framework for runtime AI/ML profiling and layer-by-layer analysis enhances the open-source foundation, eliminating toolchain fragmentation and reducing complexity.
Analog Devices, Inc. (Nasdaq: ADI), a global leader in semiconductor innovation, today launched CodeFusion Studio™ 2.0, a significant upgrade to its open source embedded development platform. Designed to simplify and accelerate the development of AI-enabled embedded systems, CodeFusion Studio 2.0 introduces advanced hardware abstraction, seamless AI integration and powerful automation tools to streamline the journey from concept to deployment across ADI’s diverse processors and microcontrollers.

“The next era of embedded intelligence requires removing friction from AI development,” said Rob Oshana, Senior Vice President of the Software and Digital Platforms group, ADI. “CodeFusion Studio 2.0 transforms the developer experience by unifying fragmented AI workflows into a seamless process, empowering developers to leverage the full potential of ADI's cutting-edge products with ease so they can focus on innovating and accelerating time to market.”

Empowering Developers with End-to-End AI Workflows

CodeFusion Studio 2.0 now supports complete AI workflows, enabling developers to bring their own models and deploy them efficiently across ADI’s processors and microcontrollers—from low-power edge devices to high-performance DSPs (digital signal processors). The latest platform, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools and optimization capabilities that are designed to ensure robust deployment and an accelerated time-to-market.
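To illustrate what a model compatibility check does conceptually, the sketch below tests each operator in a model graph against the set a target device supports before deployment. This is not CodeFusion Studio's actual API; `SUPPORTED_OPS` and `check_model` are invented names for illustration.

```python
# Illustrative pre-deployment compatibility check. A real checker inspects
# the model file itself; here the model is just a list of operator names.

SUPPORTED_OPS = {"conv2d", "relu", "maxpool", "dense", "softmax"}

def check_model(ops):
    """Return the operators the target cannot run; an empty list means deployable."""
    return [op for op in ops if op not in SUPPORTED_OPS]

print(check_model(["conv2d", "relu", "dense", "softmax"]))  # []
print(check_model(["conv2d", "gru", "dense"]))              # ['gru']
```

Flagging unsupported operators before flashing hardware is what saves the trial-and-error cycles the platform aims to eliminate.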

A new Zephyr-based modular framework enables runtime performance profiling for AI/ML workloads, offering layer-by-layer analysis and seamless integration with ADI’s heterogeneous platforms. This encapsulation of toolchains simplifies machine learning deployment and enhances system-level performance insights.
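The general idea behind layer-by-layer runtime profiling can be sketched in a few lines: time each stage of a model as data flows through it and collect a per-layer report. The toy "layers" below are plain Python callables standing in for real inference kernels; the function name and report format are assumptions, not ADI's framework.

```python
import time

def profile_layers(layers, x):
    """Run input x through each (name, fn) layer, recording per-layer latency.

    Returns the final output and a list of (layer_name, seconds) pairs,
    mimicking the kind of layer-by-layer report a runtime profiler emits.
    """
    report = []
    for name, fn in layers:
        start = time.perf_counter()
        x = fn(x)
        report.append((name, time.perf_counter() - start))
    return x, report

# Toy three-stage "model": each stage is just a Python callable here.
model = [
    ("conv1", lambda v: [i * 2 for i in v]),
    ("relu1", lambda v: [max(0, i) for i in v]),
    ("dense", lambda v: sum(v)),
]

out, report = profile_layers(model, [1, -2, 3])
print(out)                            # 8
print([name for name, _ in report])   # ['conv1', 'relu1', 'dense']
```

A per-layer breakdown like this is what lets a developer spot which stage of a network dominates latency on a constrained target.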

Unified Development Experience

The updated CodeFusion Studio System Planner now supports multi-core applications and expanded device compatibility, while unified configuration tools reduce complexity across ADI’s hardware ecosystem. Developers benefit from integrated debugging capabilities, including Core Dump Analysis and GDB (GNU debugger) support, making troubleshooting faster and more intuitive.

ADI Future Proofs Its Digital Roadmap

CodeFusion Studio 2.0 is the latest milestone in ADI’s open-source embedded development platform, embodying its commitment to delivering developer-first tools that simplify complexity and accelerate innovation. As ADI expands its digital roadmap, future releases will continue to push the boundaries of embedded intelligence, bringing deeper hardware-software integration, expanded runtime environments and new capabilities tailored to evolving developer needs as they experiment with physical AI.

“Companies that deliver physically aware AI solutions are poised to transform industries and create new, industry-leading opportunities. That’s why we’re creating an ecosystem that enables developers to optimize, deploy and evaluate AI models seamlessly on ADI hardware, even without physical access to a board,” said Paul Golding, Vice President of Edge AI and Robotics, ADI. “CodeFusion Studio 2.0 is just one step we’re taking to deliver Physical Intelligence to our customers, ultimately enabling them to create systems that perceive, reason and act locally—all within the constraints of real-world physics.”

Availability

CodeFusion Studio 2.0 is now available for download. Developers can access the platform, documentation and community support at https://developer.analog.com/solutions/codefusionstudio

IBM Introduces New Power® S1012 Server to Run AI Inferencing Workloads in ROBO Locations Outside Mainstream Datacenter Facilities


IBM has recently announced the expansion of its server portfolio with the introduction of the IBM Power S1012. This new server is designed to enhance AI workloads from the core to the cloud and even to the edge, providing added business value across various industries. The IBM Power S1012 is a 1-socket, half-wide system based on the Power10 processor, and it delivers up to 3X more performance per core compared to the previous Power S812 model.

The server is available in both a 2U rack-mounted and a tower deskside form factor, optimized for edge-level computing. It also offers the lowest entry price point in the Power portfolio to run core workloads for small and medium-sized organizations. With the ability to run AI inferencing workloads in remote office/branch office (ROBO) locations, the IBM Power S1012 provides flexibility and a direct connection to cloud services such as IBM Power Virtual Server for backup and disaster recovery.

This advancement is particularly significant for industries such as retail, manufacturing, healthcare, and transportation, where deploying workloads at the edge can capitalize on data where it originates. Real-time insights gained through edge computing can offer a competitive advantage, with applications ranging from analyzing customer behavior to monitoring production processes.

IBM Power S1012 will be generally available from IBM and certified Business Partners on June 14, 2024.

Use Case

Let's consider a retail industry scenario as an example use case for edge computing with the IBM Power S1012:

Scenario: Real-Time Inventory Management in Retail

Background:

A large retail chain is looking to improve its inventory management and customer experience by implementing AI at the edge.

Challenge:

The retailer needs to process vast amounts of data from various sources, including point-of-sale systems, online transactions, and IoT sensors on shelves. They require real-time analytics to manage stock levels efficiently and predict future demand.

Solution with IBM Power S1012:

Local Processing: The IBM Power S1012 can be deployed in individual stores to process data locally. This reduces latency, as data doesn't need to be sent to a central data center or cloud for processing.

AI Workloads: The server's AI capabilities enable it to run sophisticated algorithms that analyze purchasing patterns and inventory levels.

Predictive Analytics: By leveraging machine learning models, the Power S1012 can forecast demand and suggest optimal stock replenishment schedules.

Integration with IoT: IoT sensors on shelves send data directly to the local Power S1012 server, which can trigger alerts when items are running low or when there's a mismatch in inventory.
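The local-processing and IoT-integration steps above can be sketched as a small monitor running on the in-store server: it compares shelf-sensor counts against the point-of-sale ledger and raises alerts without any cloud round trip. All names (`check_shelf`, `LOW_STOCK_THRESHOLD`) are hypothetical, not IBM APIs.

```python
# Hypothetical in-store inventory monitor; thresholds and function names
# are illustrative assumptions for this sketch.

LOW_STOCK_THRESHOLD = 5

def check_shelf(shelf_id, sensor_count, pos_count):
    """Compare the IoT shelf-sensor count with the point-of-sale ledger.

    Returns a list of alert strings; an empty list means no action needed.
    Running this locally avoids the latency of a cloud round trip.
    """
    alerts = []
    if sensor_count <= LOW_STOCK_THRESHOLD:
        alerts.append(f"{shelf_id}: restock ({sensor_count} left)")
    if sensor_count != pos_count:
        alerts.append(f"{shelf_id}: inventory mismatch "
                      f"(sensor={sensor_count}, ledger={pos_count})")
    return alerts

print(check_shelf("aisle-3", sensor_count=4, pos_count=6))
```

In a real deployment the sensor feed would arrive over MQTT or similar, and the mismatch branch is what catches shrinkage or mis-scans in near real time.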

Benefits:

Reduced Latency: Real-time processing at the edge ensures immediate insights into inventory levels, leading to quicker decision-making.

Increased Efficiency: Automated stock management reduces overstocking and stockouts, saving costs and improving customer satisfaction.

Enhanced Customer Experience: The system can also provide personalized recommendations to customers based on their shopping history and current in-store promotions.

By utilizing the IBM Power S1012 for edge computing, the retailer can achieve a more responsive and intelligent inventory management system that adapts to changing demands and enhances the overall shopping experience for customers.

UK’s Innovation Agency backed FrontM to Pave the Way for AI Enabled Edge Applications for Maritime



FrontM is on a mission to be the EDGE-intelligent app marketplace where the world goes to connect, inform and care for remote teams and customers. The innovation focuses on overcoming digital poverty in remote and isolated environments, such as the Blue Economy. The World Bank defines the blue economy as the “sustainable use of ocean resources for economic growth, improved livelihoods and jobs while preserving the health of the ocean ecosystem.”

FrontM’s initial use cases include the maritime commercial shipping market, particularly transforming shore-ship team collaboration, automation of workflows, crew safety and welfare.

FrontM is proud to be recognized by Innovate UK and to receive a grant to study the feasibility of integrating Edge AI enablement technology from Hammer Of The Gods (HOT-G). HOT-G is a Palo Alto, California-based deep-tech startup building infrastructure and developer-experience tools for observability, security and monitoring in edge computing. This integration accelerates FrontM’s distributed software infrastructure and paves the way for innovative AI-enabled edge applications in its app marketplace.

A market research report, written during 2021 by consultancy company Thetius and sponsored by the Inmarsat Research Programme, estimated that by 2030 the digital maritime industry will be worth $345 billion, up from a previous forecast of $279 billion. It reported an explosion in the use of IoT and digital tools in maritime due to massive growth in consumer demand for e-commerce and the use of online booking platforms for shipping freight.

Centralising data in the cloud is a big challenge in maritime due to a combination of factors: bandwidth constraints in transporting the entire data set from ship to shore, the proliferation of IoT, and each sensor generating more data than is actually required. While enabling inference from data at the edge can address some of these challenges, deploying AI on constrained devices is today prohibitively complex and expensive compared to developing and deploying AI models in the cloud.
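The bandwidth argument can be made concrete with a back-of-the-envelope comparison: shipping every raw sample to shore versus sending only the events an on-board model flags. The sensor counts, rates and payload sizes below are arbitrary illustrative values, not FrontM figures.

```python
# Rough bandwidth comparison: raw telemetry upload vs. edge inference.
# All parameter values are illustrative assumptions.

def daily_bytes_raw(sensors, sample_hz, bytes_per_sample):
    """Bytes/day if every raw sample is shipped from ship to shore."""
    return sensors * sample_hz * bytes_per_sample * 86_400  # seconds/day

def daily_bytes_edge(sensors, events_per_day, bytes_per_event):
    """Bytes/day if only on-board inference results (events) are sent."""
    return sensors * events_per_day * bytes_per_event

raw = daily_bytes_raw(sensors=50, sample_hz=10, bytes_per_sample=16)
edge = daily_bytes_edge(sensors=50, events_per_day=24, bytes_per_event=64)
print(raw, edge)   # raw is several orders of magnitude larger than edge
```

Over a metered satellite link, that gap between raw streaming and event-only reporting is exactly what makes edge inference economically attractive.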

HOT-G’s embedded machine learning orchestration tools, combined with FrontM’s programmable app framework, seek to address these technological challenges and accelerate maritime digitalisation across a broad range of applications, from safety and remote assistance to surveys, predictive analytics and crew welfare.

Enquiries: Anne-Marie Barclay, anne-marie@frontm.com

About Innovate UK

Innovate UK is the UK’s innovation agency. It works with people, companies and partner organisations to find and drive the science and technology innovations that will grow the UK economy. For further information visit innovateuk.gov.uk.

Edge AI Company AlphaICs Raises $8 Mn in Funding Led by Emerald Technology Ventures and Endiya Partners

Pradeep Vajram, CEO & Chairman, AlphaICs

AlphaICs, a leading edge AI technology company based in Milpitas, CA and Bangalore, India, announced today that it has secured $8 million in funding. The company designs and develops high-performance AI chips for edge computing. It will use the funds to tape out the Gluon AI chip, develop the software stack and build system solutions for its target markets.

The Series B round was led by Endiya Partners and Emerald Technology Ventures, with participation from existing Series A investors ReBright Partners and 3One4 Capital, along with Aaruha Technology Fund, IREON Ventures, Canal Ventures, JSR Corporation, CBC Co., Ltd. and Whiteboard Capital. Dr. Michal Natora, Investment Director, Industrial IT at Emerald, will join AlphaICs’ board of directors.

With the growing popularity of deep neural networks, there has been huge demand for running such networks on edge devices in real time. Omdia forecasts that global AI edge chipset revenue will grow from $7.7 billion in 2019 to $51.9 billion by 2025, a CAGR of 37.5%.

AlphaICs’ Real AI Processor (RAP™), based on a proprietary, highly modular and scalable architecture, enables AI acceleration for low-power edge applications as well as high-performance edge data centers. AlphaICs’ architecture provides best-in-class inference performance and is equally suited for edge learning. The rapidly developing field of edge learning promotes privacy, enables automated labelling and facilitates continuous learning of new scenarios.

Prashant Trivedi, VP Business Development & Co-Founder, AlphaICs


“We observed a big need in the industry for machine learning applications at the edge. AlphaICs’ technology offers significant performance advantages for edge inference as well as for edge learning solutions,” said Michal Natora, Investment Director at Emerald Technology Ventures. “The differentiated technology and the high-calibre team led by Pradeep Vajram were two key criteria driving Emerald’s investment.”

“Edge AI applications in consumer markets like high-end smartphones and wearables, as well as enterprise markets like robots, cameras and sensors, will be pervasive in the next few years. AlphaICs’ RAP accelerates inferencing as well as learning tasks on-device rather than in a remote data center, delivering benefits such as low latency, lower cost, data privacy and security,” said Sateesh Andra, Managing Director at Endiya Partners. He added, “While NVIDIA, Google and startups like Graphcore are poised to dominate data center AI, AlphaICs has the opportunity to be a market leader in enabling AI at the edge.”

“AlphaICs’ innovative architecture will empower system integrators to create AI solutions with a short time to market while staying within system cost and thermal constraints,” said Pradeep Vajram, Chairman & CEO of AlphaICs. “This funding will help us bring our first inference co-processor to market for vision applications with low-latency requirements. We are also working with strategic partners to bring innovative solutions to the industrial, automotive and surveillance markets.”


About AlphaICs:

AlphaICs, founded in 2016, is a leading AI technology company that develops edge inference and edge learning technologies to enable AI at the edge. Its next-generation AI architecture, the Real AI Processor (RAP™), delivers high performance, low power and minimal latency, enabling best-in-class edge AI inference. The RAP™ architecture also supports edge learning to reduce training-data requirements and enables auto-labelling as well as continuous learning at the edge. The company is led by a team of technology experts and successful serial entrepreneurs committed to realizing the true potential of AI at the edge. Learn more at https://www.alphaics.ai.




