Mphasis Secures NABCB-Accredited AI Governance Certification Amid Rising Demand for Responsible Tech

Mphasis (BSE: 526299; NSE: MPHASIS), an Information Technology (IT) solutions provider specializing in cloud and cognitive services, today announced that it has achieved ISO/IEC 42001:2023 certification from TÜV SÜD South Asia Private Limited, accredited by the National Accreditation Board for Certification Bodies (NABCB). Leading the industry in adopting global Artificial Intelligence (AI) standards, the company received certification for the international Artificial Intelligence Management System (AIMS) framework, ensuring responsible AI development and use while promoting trust and interoperability across the global AI ecosystem.
“The NABCB-accredited ISO/IEC 42001:2023 certification reinforces our commitment to responsible AI innovation, and we are pleased to have received it. This achievement assures our enterprise clients that every Mphasis AI solution is developed and governed to the highest global standards of integrity, reliability, and compliance,” said Nitin Rakesh, Chief Executive Officer and Managing Director, Mphasis.
The certification was issued by TÜV SÜD South Asia Pvt. Ltd., one of only two certification bodies accredited by NABCB under India’s Ministry of Commerce and Industry. This recognition affirms Mphasis’ adherence to the highest standards of ethical governance, transparency, and global best practices in AI.

“We congratulate Mphasis on achieving the ISO/IEC 42001:2023 certification with TÜV SÜD. This milestone reflects Mphasis’ strong commitment to responsible AI and excellence in designing and operating trustworthy AI applications. We are proud to have supported this achievement and look forward to deepening our collaboration with Mphasis in advancing ethical and future-ready AI,” said Mr. Panneer Selvan, Vice President, TÜV SÜD South Asia Private Limited.

ISO/IEC 42001:2023 provides a structured framework for developing, deploying, and governing AI systems. For clients and regulators, NABCB accreditation ensures that the recognition is based on independent verification under rigorous national and international oversight, strengthening trust and reducing risks in AI adoption. While AI offers immense opportunities, it also brings challenges such as data opacity, bias, and ethical concerns. With regulatory scrutiny intensifying and customer expectations rising, organizations need to demonstrate accountability and integrity in AI deployment. By aligning with ISO/IEC 42001:2023, Mphasis has adopted a globally recognized framework to address these risks and instill confidence in AI-driven systems.

As of Q1 FY26, Mphasis’ AI-led deals represent 68% of its total new contract wins, up from around 30% a year ago. This reflects growing customer demand for Mphasis’ AI-driven solutions across its cloud and cognitive services portfolio.

About Mphasis

At Mphasis, engineering has been in our DNA since inception.

Mphasis is an AI-led, platform-driven company with human-in-the-loop intelligence, helping global enterprises modernize, infuse AI, and scale with agility. The Mphasis.ai unit and Mphasis AI-powered ‘Tribes’ are focused on client outcomes and embed artificial intelligence and autonomy into every layer of the enterprise technology and process stack.

Mphasis built NeoIP™, a breakthrough AI platform which orchestrates a powerful pack of AI platforms and solutions to deliver impactful outcomes across the entire enterprise IT value chain, because we believe ‘AI Without Intelligence Is Artificial™’. NeoIP™ is powered by the Ontosphere, a dynamic and ever-evolving knowledge base, delivering continuous innovation through perpetual intelligent engineering, driving end-to-end enterprise transformation.

At the heart of our approach is customer-centricity—reflected in our proprietary Front2Back™ transformation framework, which uses the exponential power of cloud and cognitive to deliver hyper-personalized digital experiences (C=X2C2™ = 1) and build strong relationships with marquee clients. Our Service Transformation solutions enable enterprises to pivot from legacy systems and operations to secure, adaptive, cloud-first operating models with minimal disruption. Continuous investments in platforms, such as the Neo series, enable enterprises to stay efficient, relevant, and ahead in a dynamic AI-first world. Mphasis is a Hi-Tech, Hi-Touch, Hi-Trust company, rooted in a learning and growth culture.

HCLTech Signs EU's AI Pact to Drive Responsible AI

HCLTech, a leading global technology company, has signed the European Commission's AI Pact, reinforcing its commitment to the responsible development and deployment of AI technologies internally and with clients.

The European Commission's AI Pact is a voluntary initiative designed to help organizations prepare for compliance with the EU AI Act before its full implementation. It encourages companies to adopt responsible AI practices, focusing on safety, transparency, and human oversight.

The pact calls on signatory organizations to take proactive steps to align with the Act's emphasis on safety, transparency and human oversight. Over 100 companies have already pledged to align their AI strategies with the Act’s principles, committing to proactive measures such as mapping high-risk AI systems and promoting AI literacy among employees.

"As AI continues to reshape industries and societies, it's important that this technology is leveraged responsibly. By signing this pact, we're reinforcing our commitment to trustworthy development, deployment and use of AI systems and technologies, thus ensuring that they truly benefit society while minimizing risks and promoting transparency," said Heather Domin, Vice President, Responsible AI and Governance at HCLTech.

By joining this initiative, HCLTech has committed to:
  • Adopting an AI governance strategy to foster the uptake of AI in the organization and work towards future compliance with the AI Act
  • Identifying and mapping AI systems likely to be categorized as high-risk under the AI Act
  • Promoting AI awareness and literacy among employees, ensuring ethical and responsible AI development

HCLTech has long embedded responsible AI practices across its operations, providing guardrails for accountability, fairness, security, privacy and transparency. To enable responsible AI practices internally and with clients, HCLTech has established an Office of Responsible AI and Governance, which drives the implementation and innovation of responsible AI practices within HCLTech and builds responsible AI capabilities into its products and services.

For further information, please visit: https://www.hcltech.com/responsible-ai#responsible-ai

Infosys Incorporates New Subsidiary in Netherlands; Launches Open-source Responsible AI Toolkit



Infosys has recently made significant strides in both its global expansion and AI initiatives. The company has incorporated a new subsidiary in the Netherlands, named Infosys BPM Netherlands B.V. This move is part of Infosys BPM UK's broader strategy to enhance its presence in Europe and streamline its business process management services.

In addition, Infosys has launched an open-source Responsible AI Toolkit, a key component of the Infosys Topaz Responsible AI Suite. The toolkit aims to address ethical concerns and risks associated with AI adoption, providing enterprises with advanced defensive technical guardrails built on the AI3S framework (scan, shield, and steer).


The toolkit includes specialized AI models and shielding algorithms to detect and mitigate issues such as privacy breaches, security attacks, biased outputs, harmful content, and deepfakes.

The toolkit addresses ethical concerns and risks associated with AI adoption, promoting safety, security, privacy, and fairness. Its open-source nature fosters collaboration and innovation, allowing businesses to tailor the toolkit to their specific needs and implement it with ease as they innovate responsibly.

These developments highlight Infosys' commitment to expanding its global footprint and promoting ethical AI practices.

Google Lifts Ban on Using Its AI for Weapons and Surveillance

Google recently announced that it has lifted its self-imposed ban on using AI for weapons and surveillance. This is a significant shift from its 2018 AI principles, which explicitly prohibited the use of AI for technologies that could cause harm, be used as weapons, or violate human rights.

Google has removed language from its AI principles that previously barred AI applications likely to cause harm, be used as weapons, or violate human rights. The updated principles now emphasize bold innovation and responsible development.

According to US media reports, the updated principles also emphasize mitigating unintended or harmful outcomes and promoting privacy and security.

Google's senior executives, James Manyika and Demis Hassabis, stated that the company believes AI should be developed in collaboration with democratic governments to ensure security and stability. They emphasized that AI should be guided by values such as freedom, equality, and respect for human rights.

The decision has sparked a global debate, with critics arguing that it weakens Google's ethical stance on AI and could lead to unintended harm and ethical dilemmas. Advocacy groups, such as Stop Killer Robots, have warned about the dangers of AI-driven weapons.

The debate over AI’s role in military and surveillance operations continues to intensify, with experts divided on how to regulate the technology while balancing innovation and security.

Google Cloud employees reportedly worked directly with Israeli military officials to ease access to AI tools amid Israel’s ground invasion of the Gaza Strip in 2023. Critics argue that such agreements contradict the company’s ethical commitments.

Separately, Google has announced a $60 billion investment in AI infrastructure, research, and applications in 2025. This move follows financial results that fell short of market expectations, leading to a decline in Alphabet’s share price.

India's 1st Major Multi-Stakeholder Alliance for Responsible AI 'CoRE-AI' Launched

A groundbreaking initiative has emerged in India — the Coalition for Responsible Evolution of AI (CoRE-AI). This multi-stakeholder coalition brings together over 30 key players in the tech space, including Big Tech giants like Google, Microsoft, and Amazon Web Services (AWS), IT leaders like Infosys, and esteemed academic institutions such as Ashoka University and IIM Bangalore. Notably, it also includes leading Indian AI startups like CoRover.ai and Beatoven.ai.

Purpose: CoRE-AI focuses on responsible development and deployment of AI technology.

CoRE-AI is housed within The Dialogue, a tech think tank based in New Delhi.

Objectives:

  • Foster innovation among Indian AI startups.
  • Ensure industry, academia, and startups' voices are heard in AI regulation discussions.
  • Create public trust in AI through voluntary guidelines, robust regulatory frameworks, and transparency.
  • Address bias, fairness, privacy, and data protection.

Government Support: Mr. S. Krishnan, Secretary of the Ministry of Electronics and Information Technology (MeitY), welcomed CoRE-AI's contributions toward India's leadership in AI globally.

Principles-Based Approach: CoRE-AI will differentiate between regulating AI and regulating responsible AI practices, emphasizing ethical development and deployment.

This coalition represents a significant step toward responsible AI in India, bridging industry, academia, and startups for a trustworthy and innovative AI ecosystem.

The CoRE-AI coalition said in an interview and statement to The Hindu that it will focus on exploring a “principles-based approach,” utilising risk assessments to provide flexibility in addressing AI’s diverse challenges, and will develop guidelines and contribute to a “robust governance framework” to help create a trustworthy and innovative AI ecosystem in India.

Notably, CoRE-AI's guidelines are voluntary, and there are no specific penalties for non-compliance. However, organizations that fail to adhere to responsible AI practices may face reputational risks, loss of public trust, and potential legal consequences if their actions violate existing data protection or privacy laws. It's essential for companies to prioritize ethical AI development to avoid negative repercussions.

Notably, the central government earlier approved the IndiaAI Mission with a budget of ₹10,372 crore, aiming to make AI work for India.

The IndiaAI Mission is an ambitious initiative approved by the Indian government to strengthen the AI innovation ecosystem. Further, with IndiaAI Compute Capacity the government aims to build a high-end, scalable AI computing ecosystem. It will include over 10,000 Graphics Processing Units (GPUs) through a public-private partnership. Additionally, an AI marketplace will offer pre-trained models and AI-as-a-service resources.

In December last year, IBM and Meta launched a global AI alliance in collaboration with over 50 Founding Members and Collaborators globally. India's IIT Bombay and insurtech startup Roadzen are among the members.

OpenAI and DeepMind Employees Warn of AI Dangers, Including Human Extinction, That Companies Are Hiding

There has been a significant and serious development regarding AI safety concerns. A group of current and former employees from OpenAI and Google's DeepMind have come forward with an open letter, published at righttowarn.ai, warning about the potential dangers associated with advanced AI technologies, including human extinction. They allege that these companies are prioritizing financial gains over safety and are not being transparent about the risks involved.

The letter emphasizes the need for better oversight and regulation to prevent serious harms, such as the further entrenchment of existing inequalities, manipulation, misinformation, and even the loss of control over autonomous AI systems. The employees are advocating for a culture of open criticism and are calling for solid whistleblower protections to enable the discussion of these risks without fear of retaliation.
 
This is a developing story, and it highlights the importance of ethical considerations and transparency in the field of AI development. It's crucial for AI companies to engage with governments, civil society, and other stakeholders to ensure that AI technologies are developed responsibly and safely.

Specific Risks Raised by Employees

The employees from OpenAI and Google DeepMind have raised concerns about several specific risks associated with the development and deployment of advanced AI systems. These include:

Entrenchment of Existing Inequalities: Advanced AI could exacerbate social and economic disparities if its benefits are not distributed equitably.

Manipulation and Misinformation: AI systems could be used to create and spread false information, potentially influencing public opinion and undermining trust in institutions.

Loss of Control: There is a risk that autonomous AI systems could become uncontrollable, leading to unintended consequences.

Human Extinction: The letter mentions the extreme risk that unregulated AI poses, including scenarios that could lead to human extinction.

The group behind the open letter has urged AI firms to facilitate a process for current and former employees to raise risk-related concerns and not enforce confidentiality agreements that prohibit criticism. They emphasize the need for transparency and oversight to ensure that AI development does not compromise safety or ethical standards.

Accenture Appoints Arnab Chakraborty As Its 1st Chief Responsible AI Officer

To help clients scale generative AI responsibly, Accenture has appointed Arnab Chakraborty as its first chief responsible AI officer, effective immediately. With more than two decades of experience, Chakraborty holds 10 patents in machine learning solutions for business challenges. He has helped shape the WEF AI Governance Alliance and, as a member of the US Senate AI Insight Forum, advises on the practical considerations of balancing AI innovation with risk mitigation.

"Clients are eager to embrace the potential of generative AI, and we are ready to help them build responsible AI into every use. We do this for ourselves, and we can use that example to help our clients find success faster,” said Julie Sweet, chair and CEO, Accenture. “Our focus is to enable our clients to innovate AI safely and be ready to seize the opportunities that AI will bring in the decades ahead.”

The current rise of AI is unlike previous waves of innovation — the technology, regulation and business adoption are accelerating exponentially and simultaneously, creating a unique set of challenges and implications for organizations.

With the appointment of its first chief responsible AI officer, Accenture is taking steps to expand its responsible AI capabilities, solutions, platforms, ecosystem partnerships and thought leadership including:
  • Expanding advisory and technology services to help companies establish policies, principles and standards and implement them through risk assessments and testing frameworks, powered by technology, assets and platforms, as well as ongoing monitoring and compliance, including navigating evolving regulatory landscapes such as the EU AI Act.
  • Introducing managed services that monitor AI solutions, systems and controls to help companies comply with fast-changing regulations.
  • Investing in capabilities in gen AI testing, ongoing compliance, regulation management and security, and scaling these with its ecosystem partners.
  • Focusing on education and empowerment through responsible AI academies for Accenture people and for clients, including their boards of directors and top leadership.
With a rich history of leading with responsible AI, Accenture will also extend the focus of its research partnerships, including with Stanford, MIT and the World Economic Forum, as well as expand its roles as a leading voice on responsible AI standards and governance. Combined with its work with ecosystem partners and experience with more than 1,000 generative AI client projects, Accenture is bringing its clients the capabilities they need to implement AI rapidly and safely throughout the enterprise.

Accenture recognizes that responsible AI requires taking intentional actions to design, deploy and use AI to create value and build trust and fuel innovation, while protecting from potential AI risks. The company has led by example since 2017, when it first embedded commitments to use AI responsibly in its Code of Business Ethics.

In 2022, the company implemented an enterprise-wide responsible AI program, which focuses on tracking where AI is being used, understanding what it is being used for, assessing AI systems for levels of risk and implementing mitigation strategies to address those risks, and developing post-deployment monitoring programs to oversee AI systems on an ongoing basis. The program leverages technology to improve speed and user experience and also focuses on improving responsible AI literacy through required ethics and compliance training, deep technical training for AI practitioners and responsible AI training for the company’s more than 742,000 people as part of its Technology Quotient (TQ) program.

"Leaders acknowledge the importance of responsible AI principles, but there is a gap in their practical implementation—our research shows that only 2% of companies have fully operationalized responsible AI across their organizations,” said Chakraborty. “Accenture will pave the way to help our clients establish and embed responsible AI, closing the gap between principles and action."

Humanity May Need to Pause AI in the Next 5 Yrs, Said New CEO of Microsoft AI

The new CEO of Microsoft AI, Mustafa Suleyman, speaking at the global AI Safety Summit last year, mentioned the possibility of a pause in AI development towards the end of the decade.

According to a Guardian article, Mustafa stated, "I don’t rule it out. And I think that at some point over the next five years or so, we’re going to have to consider that question very seriously." As the new head of Microsoft AI, he will be balancing his caution with the drive to innovate and commercialize AI technologies.

Previously, Mustafa had said, "The world is still struggling to appreciate how big a deal [AI's] arrival really is."

"We are in the process of seeing a new species grow up around us," Mustafa had said. He also thinks this new species may be capable of becoming self-made millionaires in as little as two years.

Mustafa is not alone in voicing warnings about AI; Google DeepMind's Chief AGI Scientist Shane Legg has also said, "If I had a magic wand, I would slow down. Artificial General Intelligence is like the arrival of human intelligence in the world. This is another intelligence arriving in the world."

Notably, Mustafa Suleyman is a prominent figure in the field of artificial intelligence. He co-founded DeepMind Technologies, which became a leading AI company and was later acquired by Google. After his tenure at DeepMind, Suleyman went on to co-found Inflection AI, focusing on machine learning and generative AI.

Mustafa has recently been appointed as the CEO of Microsoft AI, where he is expected to lead the development of consumer AI products and research.

Mustafa's career has been marked by his contributions to AI and his advocacy for ethical AI practices. His leadership at Microsoft AI is anticipated to further the company's AI initiatives while navigating the complex landscape of AI ethics and societal impact.

Interestingly, Microsoft's chief scientific officer, Eric Horvitz, has expressed the opposite view, stating that an "acceleration" in AI development is necessary, rather than a pause. It's important to note that discussions about the pace of AI development are ongoing in the tech community, with various experts holding different opinions on the matter.

The debate around pausing AI development stems from various concerns raised by experts and the public. Here are some reasons why some think a pause might be necessary:

1. Rapid Advancement: AI is advancing at a pace that may outstrip our ability to understand its implications and establish adequate safeguards.

2. Safety and Ethics: There are fears that without proper oversight, AI could be used in ways that are harmful or unethical. This includes concerns about privacy, security, and the potential for AI to perpetuate biases.

3. Regulatory Catch-Up: A pause could provide time for policymakers to catch up with the technology and create regulations that ensure AI is developed and used responsibly.

4. Unintended Consequences: As AI systems become more complex, the risk of unintended consequences increases. This could include the misuse of AI by malicious actors or the AI acting in unpredictable ways.

5. Societal Impact: There's a concern about the impact of AI on jobs, social structures, and the economy. A pause could allow for a more thoughtful consideration of how to integrate AI into society in a way that benefits everyone.

These concerns highlight the need for a balanced approach to AI development, one that promotes innovation while also ensuring safety, ethical use, and societal well-being. It's a complex issue with no easy answers, but the conversation is crucial as we navigate the future of AI.

Samsung R&D Institute Bangalore Forays into its First Deep-Tech Start-up Showcase for R&D Teams

In January, Samsung R&D Institute India – Bangalore (SRI-B) organized ‘The Startup Collab: #FuelingDeepTech’, where 11 startups, working in the field of Generative AI, Responsible AI, Emotion AI, Visual AI, Quantum and Health, participated with great enthusiasm. The startups pitched their unique product offerings and showcased interactive live demos to a large audience consisting of SRI-B’s R&D teams, leaving them highly motivated and mesmerized.

Employees seized this opportunity to understand the startup offerings and explore potential areas of collaboration with them to influence Samsung’s products and services.

The daylong event featured a leadership address, startup pitches, an interactive demo showcase, and Samsung expert talks.

Mohan Rao, CVP & CTO at SRI-B, addressed a packed audience on how Samsung aligns its efforts with ecosystem partners. During the talk, Mohan Rao emphasized the company’s ‘build vs source’ philosophy and how it endeavors to create win-win collaboration opportunities.

“At Samsung, we are deeply committed to open innovation and ecosystem collaboration. Collaboration with startups is a symbiotic relationship where startups bring fresh ideas and agility, while Samsung can offer scale and stability. This also infuses the startup’s entrepreneurial spirit within the organization, fostering a culture of risk-taking and innovation. We look forward to collaborating with promising deep-tech startups and creating ground-breaking innovations”, he shared.

The Open Innovation team, operating under the aegis of the CTO Office, collaborates with both R&D teams and the startup ecosystem to identify and align potential collaboration opportunities. As part of their core responsibilities, this team facilitates incubation, partnerships and potential investments into aligned startups. So far, the team has successfully established 30+ partnerships and made around 12 strategic investments.

Traditionally, discussions around R&D, especially in the early exploration phase, have been limited to product owners, technology leaders, and key decision-makers. However, the Startup Collab: #FuelingDeepTech aims to create a platform that opens up startups to a broader audience within SRI-B. This approach enables the identification of high-impact, innovative, and potentially unexpected use cases across various ABC products, fostering collaboration and driving cutting-edge solutions.

“Open Innovation at SRI-B is at the heart of our New Valley Vision. We strongly believe and encourage building Samsung products and services utilizing the power of our partners and the vibrant startup ecosystem”, shared Dr. Balvinder Singh, Head of the Advanced Research Group & Open Innovation at SRI-B.
 
The open format of the event allows Samsung and the startups to explore beyond the boundaries of traditional meeting setups, and enables R&D teams to combine the offerings of multiple startups with Samsung products, creating novel concepts and immersive consumer experiences. This unique approach will lead to many fruitful collaborations in the future, ultimately delivering non-linear value to Samsung users.

Microsoft and Healthcare Leaders Create AI Network 'TRAIN' to Make AI in Health Safe and Trustworthy

New consortium of healthcare leaders announces formation of Trustworthy & Responsible AI Network (TRAIN), making safe and fair AI accessible to every healthcare organization

With the start of this week, the HIMSS 2024 Global Health Conference began, and a new consortium of healthcare leaders announced the creation of the Trustworthy & Responsible AI Network (TRAIN), which aims to operationalize responsible AI principles to improve the quality, safety and trustworthiness of AI in health.

The Trustworthy & Responsible AI Network (TRAIN) is one of the first health AI networks aimed at operationalizing responsible AI principles.

Members of the network include AdventHealth, Advocate Health, Boston Children’s Hospital, Cleveland Clinic, Duke Health, Johns Hopkins Medicine, Mass General Brigham, MedStar Health, Mercy, Mount Sinai Health System, Northwestern Medicine, Providence, Sharp HealthCare, University of Texas Southwestern Medical Center, University of Wisconsin School of Medicine and Public Health, Vanderbilt University Medical Center, and Microsoft as the technology enabling partner.

Through collaboration, TRAIN members will help improve the quality, safety, and trustworthiness of AI in health by sharing best practices, enabling registration of AI used for clinical care or clinical operations, providing tools to enable measurement of outcomes associated with the implementation of AI, and facilitating the development of a federated national AI outcomes registry for organizations to share amongst themselves.

Additionally, the network is collaborating with OCHIN, which serves a national network of community health organizations with solutions, expertise, clinical insights and tailored technologies, and TruBridge, a partner and conduit to community healthcare, to help ensure that every organization, regardless of resources, has access to TRAIN’s benefits.

New AI capabilities have the potential to transform the healthcare industry by enabling better care outcomes, improving efficiency and productivity, and reducing costs. From helping screen patients, to developing new treatments and drugs, to automating administrative tasks and enhancing public health, AI is creating new possibilities and opportunities for healthcare organizations and practitioners. As new uses of AI in healthcare continue to unfold and grow, the need for rigorous development and evaluation standards becomes even more important to ensure effective and responsible applications of AI.

Through collaboration, TRAIN members will help improve the quality and trustworthiness of AI by:

Sharing best practices related to the use of AI in healthcare settings, including the safety, reliability and monitoring of AI algorithms, and the skillsets required to manage AI responsibly. Data and AI algorithms will not be shared between member organizations or with third parties.

Enabling registration of AI used for clinical care or clinical operations through a secure online portal.

Providing tools to enable measurement of outcomes associated with the implementation of AI, including best practices for studying the efficacy and value of AI methods in healthcare settings and leveraging of privacy-preserving environments, with considerations in both pre- and post-deployment settings. Tools that allow analyses to be performed in subpopulations to assess bias will also be provided.

Facilitating the development of a federated national AI outcomes registry for organizations to share among themselves. The registry will capture real-world outcomes related to efficacy, safety and optimization of AI algorithms.
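The subpopulation analyses described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not TRAIN's actual tooling; all function names, field names, and data are assumptions) of computing disaggregated accuracy per subgroup and a simple gap metric as a crude bias signal:

```python
# Hypothetical sketch of subgroup outcome measurement, in the spirit of the
# disaggregated analyses TRAIN describes. Names and data are illustrative only.

def subgroup_accuracy(records, group_key):
    """Compute per-subgroup accuracy from records with a group field,
    a 'prediction' field, and a 'label' field."""
    totals, correct = {}, {}
    for rec in records:
        g = rec[group_key]
        totals[g] = totals.get(g, 0) + 1
        if rec["prediction"] == rec["label"]:
            correct[g] = correct.get(g, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def accuracy_gap(per_group):
    """Largest accuracy difference across subgroups; a crude bias signal."""
    vals = list(per_group.values())
    return max(vals) - min(vals)

# Illustrative records for two age bands.
records = [
    {"age_band": "18-40", "prediction": 1, "label": 1},
    {"age_band": "18-40", "prediction": 0, "label": 0},
    {"age_band": "65+", "prediction": 1, "label": 0},
    {"age_band": "65+", "prediction": 1, "label": 1},
]
per_group = subgroup_accuracy(records, "age_band")
print(per_group)             # {'18-40': 1.0, '65+': 0.5}
print(accuracy_gap(per_group))  # 0.5
```

A real registry would layer privacy-preserving computation and pre-/post-deployment comparisons on top of metrics like these; the point here is only the shape of a disaggregated evaluation.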

For more information on the collaboration and to hear from founding members, join us at our session at HIMSS on Tuesday, March 12, 2024, from 3 to 4 p.m. ET, “Operationalizing Responsible AI in Healthcare: Challenges and Opportunities.”


“When it comes to AI’s tremendous capabilities, there is no doubt the technology has the potential to transform healthcare. However, the processes for implementing the technology responsibly are just as vital,” said Dr. David Rhew, global chief medical officer and vice president of healthcare, Microsoft. “By working together, TRAIN members aim to establish best practices for operationalizing responsible AI, helping improve patient outcomes and safety while fostering trust in healthcare AI.”

“At Advocate Health, innovation is at the core of our drive to advance the science of medicine,” said Dr. Rasu Shrestha, executive vice president and chief innovation and commercialization officer for Advocate Health. “As we seek to make care more accessible and affordable for all, address the root causes of health inequities and provide the best health outcomes for our patients, we believe the responsible application of AI and leveraging key partnerships in this space will be essential as we reimagine how care delivery can be improved in the future.”

"OCHIN is proud to join this strategic collaboration to help fuel the future of safe and inclusive healthcare innovation,” said Kim Klupenger, chief experience officer, OCHIN. “By participating in the operationalization of responsible AI principles, we’ll help ensure the diverse experiences of patients and providers from underserved communities are represented in the creation and adoption of new solutions that can drive efficiency and make day-to-day care delivery easier and more accessible across our growing network.”

Dr. Nigam Shah, chief data scientist, Stanford Healthcare, said, "As a co-founder and board member of the Coalition for Health AI (CHAI), I am excited to see health systems coming together to operationalize CHAI’s principles for Responsible and Trustworthy AI."

Infosys Topaz Launches Responsible AI Suite of Offerings

Infosys Topaz unveils its Responsible AI Suite of offerings to help enterprises navigate the regulatory and ethical complexities of AI-powered transformation.

The Responsible AI (RAI) Office will serve as the custodian of the ethical use of AI and ensure solutions align with emerging AI guardrails across geographies.


Infosys (NSE, BSE, NYSE: INFY), a global leader in next-generation digital services and consulting, today announced the launch of its Responsible AI Suite, a part of Infosys Topaz, an AI-first set of services, solutions, and platforms using generative AI.

The rise of powerful generative AI systems in the past year has raised several concerns and conversations around the ethical dimensions of AI. According to the Infosys Generative AI Radar, by the Infosys Knowledge Institute, enterprises worldwide identify data privacy, security, ethics, and bias as the primary challenges in their pursuit of innovation with AI. The Responsible AI Suite is designed to help enterprises balance innovation with ethical considerations, such as bias prevention and privacy protection, and maximize their return on investments.

Infosys Topaz Responsible AI Suite is a set of 10+ offerings built around the Scan, Shield, and Steer framework. The framework aims to monitor and protect AI models and systems from risks and threats, while enabling businesses to apply AI responsibly. The offerings, across the framework, include a combination of accelerators and solutions designed to drive responsible AI adoption across enterprises.

Scan: Includes solutions to help identify the overall AI risk posture, legal obligations, and vulnerabilities, and to generate a single source of truth for the compliance status of all AI projects. For example, the Infosys Topaz RAI Watchtower is used to monitor upcoming threats, vulnerabilities, and legal obligations.

Shield: These solutions focus on building technical guardrails, checks, and accelerators that are responsible by design across the AI lifecycle. It also consists of specialized solutions for AI security. For example, the Infosys Topaz Gen AI Guardrails help enforce the safe use of Gen AI by moderating input prompts and output for multiple risks.

Steer: These advisory and consulting services support strong and efficient AI governance for innovation. Offerings include AI strategy formulation, legal consultation, and contract reviews.
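The input/output moderation that the Shield layer describes follows a general wrap-the-model pattern. The sketch below is a hypothetical illustration of that pattern only, not the Infosys Topaz Gen AI Guardrails product; the patterns, function names, and rejection messages are all assumptions:

```python
# Illustrative sketch of the generic guardrail pattern: moderate the prompt
# before it reaches the model, and moderate the output before it reaches the
# user. Everything here is an assumption for illustration.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # looks like a US SSN
    re.compile(r"ignore previous instructions", re.I),  # crude injection cue
]

def moderate(text):
    """Return (allowed, reason). Applied to both prompts and model outputs."""
    for pat in BLOCKED_PATTERNS:
        if pat.search(text):
            return False, f"blocked by pattern: {pat.pattern}"
    return True, "ok"

def guarded_generate(prompt, model):
    """Wrap any callable model with input and output moderation."""
    ok, reason = moderate(prompt)
    if not ok:
        return f"[input rejected: {reason}]"
    output = model(prompt)
    ok, reason = moderate(output)
    if not ok:
        return f"[output withheld: {reason}]"
    return output
```

Production guardrails replace the regex list with classifiers for many risk categories, but the control flow (check in, generate, check out) is the same.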

Infosys Topaz Responsible AI Suite will be amplified by an ecosystem of technology partners and think tanks via the Responsible AI Coalition. The Responsible AI Coalition will bring together startups, cloud service providers, and technology partners to further the common goal of advancing responsible AI. It will also lead a special working group of noted academicians, influencers, policymakers, and industry leaders to aid in shaping solutions that help set new industry standards in the responsible AI space.

Phil Fersht, CEO and Chief Analyst, HFS Research, said, “With the challenges of Responsible AI currently forcing many enterprises to slow their progress towards achieving scaled value with AI, smart offerings such as Infosys Topaz’s Responsible AI suite can clear the path to help them accelerate their critical AI initiatives.”

The suite of offerings is complemented by the Infosys Topaz RAI Office, which ensures that the offerings deliver solutions capable of navigating the shifting landscape of complex technical, policy, and governance challenges involved in adopting AI responsibly across business functions. The RAI Office also examines the ethical facets of AI, including transparency, fairness, privacy, security, and compliance. It constitutes a centralized body for streamlining AI governance, formulating AI risk strategy, and maintaining standards, policies, and guidelines. It will also ensure adherence to responsible-by-design principles and standards throughout the AI development lifecycle, facilitating the safe use of AI across organizations.

Commenting on the launch, Balakrishna D. R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, said, "Responsible and ethical AI deployment is a business imperative as technology evolves. At Infosys, we recognize the critical need for fostering responsible AI and not merely as a set of principles but as actionable steps. Infosys Topaz Responsible AI Suite is a significant stride in helping our clients better manage their AI-first journey. Together with the Infosys RAI office, we affirm our commitment to advancing the ethos of responsible AI, providing enterprises with the necessary tools and expertise for ethical AI implementation."

TCS Launches AWS Generative AI Practice Focussed on Responsible AI

  • Next step in its strategy to deepen AI expertise after having completed foundation training on GenAI for 100,000 employees across the world
  • TCS AWS GenAI practice will focus on using responsible AI frameworks and its PacePort™ innovation hubs to build a comprehensive portfolio of solutions and services for every industry sector
  • Wyndham Hotels & Resorts, the world’s largest hotel franchising company, extends its strategic partnership with TCS and AWS to manage its digital transformation journey leveraging AWS generative AI services.
Tata Consultancy Services (TCS) (BSE: 532540, NSE: TCS) has launched its AWS generative AI practice, to help customers harness the full potential of AI and AWS generative AI services to transform different parts of their value chain and achieve superior business outcomes.

Generative AI has captured the attention of enterprises globally for its potential to significantly reshape industries. TCS has been at the forefront of helping clients across different industries explore relevant use cases for generative AI through proofs of concept and pilots. Using its deep domain knowledge across different industry verticals, TCS has developed an extensive catalog of use-cases for generative AI.

To accelerate its customers’ journeys, TCS has invested in foundation training on generative AI for over 100,000 employees. It is now focused on deepening that expertise further, through the certification of over 25,000 employees on AWS generative AI services and through the launch of this new practice today.

TCS’ AWS generative AI practice will help enterprises choose and quickly scale the right solutions for their unique business needs and transform their organizations, using AWS’ services such as Amazon Bedrock. TCS’ consultants will help clients explore the most impactful use-cases in their business context, experiment collaboratively and co-innovate generative AI-powered solutions.

This co-innovation can take place at TCS Pace Ports™, the company’s innovation and research hubs located in major city hubs including New York, Pittsburgh, Toronto, Amsterdam, and Tokyo. At these sites, the teams can also leverage work done by academic researchers and start-up partners from TCS’ innovation ecosystem.

“To make generative AI deployment effective and trustworthy, we must approach the technology holistically across multiple dimensions including creativity, productivity, and business value. Drawing from all the investments we have made in building deep capabilities in generative AI, our strong partnership with AWS, and contextual knowledge of our customers’ businesses, we help them take a comprehensive approach to realize the true potential of generative AI to drive their growth and transformation,” said Krishna Mohan, Deputy Head, TCS AI.Cloud unit.

TCS offers a comprehensive portfolio of generative AI services and solutions including consulting and advisory, solution design and prototyping, large language model training and fine-tuning, guardrail agent design, project delivery and ongoing maintenance. TCS is building a responsible AI framework to help enterprises navigate the ethical and safe uses of AI.

TCS’ AWS generative AI practice will use generative AI to help customers uncover and classify organizational knowledge and abstract insights that optimize their business decision-making or create content. The resultant solutions significantly enhance customer experience and employee productivity.

Further, to drive up productivity of its clients’ IT organizations, TCS will help them deploy Amazon CodeWhisperer to provide generative AI-powered code recommendations to developers directly, saving them the effort and enhancing the quality of their code.

“Generative AI is one of the most transformational technologies of our generation, allowing organizations to reimagine their customers’ experience, increase employees’ productivity, and enhance overall business operations. AWS has been focused on making AI accessible to companies of all sizes and across industries, and by deepening the AWS and TCS relationship through the TCS generative AI practice, more customers can easily and quickly leverage and benefit from generative AI,” said Vasi Philomin, Vice President of Generative AI, AWS.

Wyndham Hotels & Resorts, the world’s largest hotel franchising company with approximately 9,100 hotels across over 95 countries on six continents and a portfolio of 24 global brands, has also enhanced its partnership with TCS as a strategic technology partner to manage the hotel group’s core systems and IT business and digital transformation journey on AWS.

"At Wyndham, we’re on a transformative digital journey as we pursue our mission of making hotel travel possible for all. Through our work with TCS and AWS, we’ve been able to migrate our systems to the cloud while further investing in data standardization. These investments have allowed us to build a foundation that will not only help to accelerate future innovation, but also realize the promise of generative AI powered by Amazon generative AI services,” said Scott Strickland, Chief Information Officer, Wyndham Hotels & Resorts.

TCS offers enterprise customers end-to-end services and solutions around cloud migration, application, and data modernization, managed services, and industry-specific innovation on AWS. TCS holds several AWS validated qualifications, including membership in the AWS Premier Tier Service Partner Program, AWS Managed Service Provider, AWS Public Sector Partner Program, AWS Solution Provider Program, AWS Well-Architected Partner Program, and over 35 AWS Competencies and Service Validations. TCS’ large pool of AWS cloud-ready professionals leverage their domain knowledge and AWS technology building blocks to create transformational solutions contextualized to specific industry sub-verticals. For more information, visit www.tcs.com/tcs-aws.

NITI Aayog Proposes Overseeing Body for Responsible Management of AI in India



For responsible management of Artificial Intelligence in India, Government think-tank Niti Aayog has proposed setting up an oversight body which will play an enabling role regarding technical, legal, policy and societal aspects of artificial intelligence (AI).


In its draft 'Working Document: Enforcement Mechanisms for Responsible #AIforAll', Niti Aayog said the oversight body must have industry representatives as well as experts from legal, humanities and social science fields.

"Use cases for Artificial Intelligence have emerged across sectors and the technology has shown rapid growth over recent years. Approach to manage risks cannot be isolated. Such approaches must be highly participatory and must keep pace with technology. Risk across use cases and contexts vary and also evolve over time. One-size-fits-all approach is not sustainable," said the document.

"A flexible risk-based approach must be adopted. In this regard, the National Strategy for Artificial Intelligence proposes an Oversight Body," said the draft.

The oversight body may identify design standards, guidelines, and acceptable benchmarks for priority use cases with sectoral regulators and experts. These may be made mandatory for public sector procurement.

The oversight body must play an enabling role in the following broad areas:
  1. Manage and update the Principles for responsible AI in India
  2. Research technical, legal, policy, and societal issues of AI
  3. Provide clarity on responsible behaviour through design structures, standards, guidelines, etc.
  4. Enable access to Responsible AI tools and techniques
  5. Education and awareness on Responsible AI
  6. Coordinate with various sectoral AI regulators, identify gaps, and harmonize policies across sectors
  7. Represent India (and other emerging economies) in international AI dialogue
Besides this, Niti Aayog has also proposed a highly participatory advisory body called the Council for Ethics and Technology, with a multi-disciplinary composition including Computer Science and AI experts, legal and relevant sectoral experts, and Humanities and Social Science experts, among others.

Ethical Committees are accountable for enforcement of principles in the AI system’s lifecycle and must ensure the AI system is developed, deployed, operated and maintained in accordance with the Principles.

The Aayog has invited stakeholders to comment on the draft document by December 15.

The Niti Aayog had in June 2020 released a draft paper titled 'Towards Responsible #AIforAll', which said there is potential for large-scale adoption of AI in a variety of social sectors.
