
Mphasis Secures NABCB-Accredited AI Governance Certification Amid Rising Demand for Responsible Tech


Mphasis (BSE: 526299; NSE: MPHASIS), an Information Technology (IT) solutions provider specializing in cloud and cognitive services, today announced that it has achieved ISO/IEC 42001:2023 certification from TÜV SÜD South Asia Private Limited, accredited by the National Accreditation Board for Certification Bodies (NABCB). Leading the industry in adopting global Artificial Intelligence (AI) standards, the company received certification for the international Artificial Intelligence Management Systems (AIMS) framework, ensuring responsible AI development and use while promoting trust and interoperability across the global AI ecosystem.
"The NABCB-accredited ISO/IEC 42001:2023 certification reinforces our commitment to responsible AI innovation, and we are pleased to have received it. This achievement assures our enterprise clients that every Mphasis AI solution is developed and governed to the highest global standards of integrity, reliability, and compliance," said Nitin Rakesh, Chief Executive Officer and Managing Director, Mphasis.
The certification was issued by TÜV SÜD South Asia Pvt. Ltd., one of only two certification bodies accredited by NABCB, which operates under India’s Ministry of Commerce and Industry. This recognition affirms Mphasis’ adherence to the highest standards of ethical governance, transparency, and global best practices in AI.

"We congratulate Mphasis on achieving the ISO/IEC 42001:2023 certification with TÜV SÜD. This milestone reflects Mphasis’ strong commitment to responsible AI and excellence in designing and operating trustworthy AI applications. We are proud to have supported this achievement and look forward to deepening our collaboration with Mphasis in advancing ethical and future-ready AI," said Mr. Panneer Selvan, Vice President, TÜV SÜD South Asia Private Limited.

ISO/IEC 42001:2023 provides a structured framework for developing, deploying, and governing AI systems. For clients and regulators, NABCB accreditation ensures that the recognition is based on independent verification under rigorous national and international oversight, strengthening trust and reducing risks in AI adoption. While AI offers immense opportunities, it also brings challenges such as data opacity, bias, and ethical concerns. With regulatory scrutiny intensifying and customer expectations rising, organizations need to demonstrate accountability and integrity in AI deployment. By aligning with ISO/IEC 42001:2023, Mphasis has adopted a globally recognized framework to address these risks and instill confidence in AI-driven systems.

As of Q1 FY26, Mphasis’ AI-led deals represent 68% of its total new contract wins, up from around 30% a year ago. This reflects growing customer demand for Mphasis’ AI-driven solutions across its cloud and cognitive services portfolio.

About Mphasis

At Mphasis, engineering has been in our DNA since inception.

Mphasis is an AI-led, platform-driven company with human-in-the-loop intelligence, helping global enterprises modernize, infuse AI, and scale with agility. The Mphasis.ai unit and Mphasis AI-powered ‘Tribes’ are focused on client outcomes and embed artificial intelligence and autonomy into every layer of the enterprise technology and process stack.

Mphasis built NeoIP™, a breakthrough AI platform which orchestrates a powerful pack of AI platforms and solutions to deliver impactful outcomes across the entire enterprise IT value chain, because we believe ‘AI Without Intelligence Is Artificial™’. NeoIP™ is powered by the Ontosphere, a dynamic and ever-evolving knowledge base, delivering continuous and constant innovation through perpetual intelligent engineering - driving end-to-end enterprise transformation.

At the heart of our approach is customer-centricity—reflected in our proprietary Front2Back™ transformation framework, which uses the exponential power of cloud and cognitive to deliver hyper-personalized digital experiences (C=X2C2™ = 1) and build strong relationships with marquee clients. Our Service Transformation solutions enable enterprises to pivot from legacy systems and operations to secure, adaptive, cloud-first operating models with minimal disruption. Continuous investments in platforms, such as the Neo series, enable enterprises to stay efficient, relevant, and ahead in a dynamic AI-first world. Mphasis is a Hi-Tech, Hi-Touch, Hi-Trust company, rooted in a learning and growth culture.

Reddit vs. AI: Secret Bot Experiment Sparks Legal Showdown


Reddit is considering legal action against researchers from the University of Zurich who secretly deployed AI bots on the r/changemyview subreddit. The experiment, which ran from November 2024 to March 2025, aimed to test AI's ability to persuade users but was conducted without Reddit's knowledge or user consent.

The bots impersonated real people, including trauma survivors and political activists, raising serious ethical concerns. Reddit's Chief Legal Officer, Ben Lee, condemned the experiment as a violation of academic and human rights norms, stating that the company is sending formal legal demands to the university and research team.

This case highlights growing tensions between AI research and digital platform governance.

The University of Zurich's AI experiment on Reddit's r/changemyview subreddit aimed to test how effectively AI could persuade users to change their opinions. Researchers deployed AI bots that impersonated real people—including trauma survivors and political activists—to engage in debates and influence discussions.

The bots analyzed users' posting histories to infer personal traits like gender, age, ethnicity, and political views, then tailored responses accordingly. The experiment sought to measure AI's ability to craft persuasive arguments and assess its impact on online discourse. However, the lack of informed consent and the use of fabricated identities sparked ethical concerns and legal scrutiny.

This raises big questions about AI's role in shaping opinions.

Infosys Incorporates New Subsidiary in Netherlands; Launches Open-source Responsible AI Toolkit



Infosys has recently made significant strides in both its global expansion and AI initiatives. The company has incorporated a new subsidiary in the Netherlands, named Infosys BPM Netherlands B.V. This move is part of Infosys BPM UK's broader strategy to enhance its presence in Europe and streamline its business process management services.

In addition, Infosys has launched an open-source Responsible AI Toolkit, a key component of the Infosys Topaz Responsible AI Suite. The toolkit aims to address the ethical concerns and risks associated with AI adoption, providing enterprises with advanced defensive technical guardrails built on the AI3S framework (scan, shield, and steer).


The toolkit includes specialized AI models and shielding algorithms to detect and mitigate issues such as privacy breaches, security attacks, biased outputs, harmful content, and deepfakes.

Being open-source, the toolkit fosters collaboration and innovation, allowing businesses to tailor it to their specific needs and implement it with ease, making it a valuable resource for enterprises looking to innovate responsibly.

These developments highlight Infosys' commitment to expanding its global footprint and promoting ethical AI practices.

Google Lifts Ban on Using Its AI for Weapons and Surveillance


Google recently announced that it has lifted its self-imposed ban on using AI for weapons and surveillance. This is a significant shift from their 2018 principles, which explicitly prohibited the use of AI for technologies that could cause harm, be used as weapons, or violate human rights.

According to US media reports, Google has removed language from its AI principles that previously barred AI applications likely to cause harm, be used as weapons, or violate human rights. The updated principles instead emphasize bold innovation and responsible development, with a focus on mitigating unintended or harmful outcomes and promoting privacy and security.

Google's senior executives, James Manyika and Demis Hassabis, stated that the company believes AI should be developed in collaboration with democratic governments to ensure security and stability. They emphasized that AI should be guided by values such as freedom, equality, and respect for human rights.

The decision has sparked a global debate, with critics arguing that it weakens Google's ethical stance on AI and could lead to unintended harm and ethical dilemmas. Advocacy groups, such as Stop Killer Robots, have warned about the dangers of AI-driven weapons.

The debate over AI’s role in military and surveillance operations continues to intensify, with experts divided on how to regulate the technology while balancing innovation and security.

Google Cloud employees reportedly worked directly with Israeli military officials to ease access to AI tools amid Israel’s ground invasion of the Gaza Strip in 2023. Critics argue that such agreements contradict the company’s ethical commitments.

Separately, Google has announced a $60 billion investment in AI infrastructure, research, and applications in 2025. The move follows financial results that fell short of market expectations, which led to a decline in Alphabet’s share price.

India's 1st Major Multi-Stakeholder Alliance for Responsible AI 'CoRE-AI' Launched


A groundbreaking initiative has emerged in India — the Coalition for Responsible Evolution of AI (CoRE-AI). This multi-stakeholder coalition brings together over 30 key players in the tech space, including Big Tech giants like Google, Microsoft, and Amazon Web Services (AWS), IT leaders like Infosys, and esteemed academic institutions such as Ashoka University and IIM Bangalore. Notably, it also includes leading Indian AI startups like CoRover.ai and Beatoven.ai.

Purpose: CoRE-AI focuses on responsible development and deployment of AI technology.

CoRE-AI is housed within The Dialogue, a tech think tank based in New Delhi.

Objectives:

  • Foster innovation among Indian AI startups.
  • Ensure industry, academia, and startups' voices are heard in AI regulation discussions.
  • Create public trust in AI through voluntary guidelines, robust regulatory frameworks, and transparency.
  • Address bias, fairness, privacy, and data protection.
Government Support: Mr. S. Krishnan, Secretary of the Ministry of Electronics and Information Technology (MeitY), has welcomed CoRE-AI's contributions toward India's global leadership in AI.

Principles-Based Approach: CoRE-AI will differentiate between regulating AI and regulating responsible AI practices, emphasizing ethical development and deployment.

This coalition represents a significant step toward responsible AI in India, bridging industry, academia, and startups for a trustworthy and innovative AI ecosystem.

The CoRE-AI coalition told The Hindu in an interview and statement that it will focus on exploring a “principles-based approach” that uses risk assessments to provide flexibility in addressing AI’s diverse challenges, and that it will develop guidelines and contribute to a “robust governance framework” to help create a trustworthy and innovative AI ecosystem in India.

Notably, CoRE-AI's guidelines are voluntary, and there are no specific penalties for non-compliance. However, organizations that fail to adhere to responsible AI practices may face reputational risks, loss of public trust, and potential legal consequences if their actions violate existing data protection or privacy laws. It's essential for companies to prioritize ethical AI development to avoid negative repercussions.

To recall, the central government approved the IndiaAI mission with a budget of ₹10,372 crore, aiming to make AI work for India.

The IndiaAI Mission is an ambitious initiative approved by the Indian government to strengthen the AI innovation ecosystem. Further, with IndiaAI Compute Capacity the government aims to build a high-end, scalable AI computing ecosystem. It will include over 10,000 Graphics Processing Units (GPUs) through a public-private partnership. Additionally, an AI marketplace will offer pre-trained models and AI-as-a-service resources.

Last year in December, IBM and Meta launched a global AI alliance in collaboration with over 50 Founding Members and Collaborators globally. India's IIT Bombay and Insurtech startup Roadzen are among the members.

Accenture Appoints Arnab Chakraborty As Its 1st Chief Responsible AI Officer


To help clients scale generative AI responsibly, Accenture has appointed Arnab Chakraborty as its first chief responsible AI officer, effective immediately. With more than two decades of expertise, Chakraborty holds 10 patents in machine learning solutions for business challenges. He has helped shape the WEF AI Governance Alliance and is a member of the US Senate AI Insight Forum, where he advises on the practical considerations of balancing AI innovation with mitigating risks.

"Clients are eager to embrace the potential of generative AI, and we are ready to help them build responsible AI into every use. We do this for ourselves, and we can use that example to help our clients find success faster,” said Julie Sweet, chair and CEO, Accenture. “Our focus is to enable our clients to innovate AI safely and be ready to seize the opportunities that AI will bring in the decades ahead.”

The current rise of AI is unlike previous waves of innovation — the technology, regulation and business adoption are accelerating exponentially and simultaneously, creating a unique set of challenges and implications for organizations.

With the appointment of its first chief responsible AI officer, Accenture is taking steps to expand its responsible AI capabilities, solutions, platforms, ecosystem partnerships and thought leadership including:
  • Expanding advisory and technology services to help companies establish policies, principles and standards and implement them through risk assessments and testing frameworks, powered by technology, assets and platforms, as well as ongoing monitoring and compliance, including helping clients navigate evolving regulatory landscapes such as the EU AI Act.
  • Introducing managed services that monitor AI solutions, systems and controls to help companies comply with fast-changing regulations.
  • Investing in capabilities in gen AI testing, ongoing compliance, regulation management and security, and scaling these with its ecosystem partners.
  • Focusing on education and empowerment through responsible AI academies for Accenture people and for clients, including their boards of directors and top leadership.
With a rich history of leading with responsible AI, Accenture will also extend the focus of its research partnerships, including with Stanford, MIT and the World Economic Forum, as well as expand its roles as a leading voice on responsible AI standards and governance. Combined with its work with ecosystem partners and experience with more than 1,000 generative AI client projects, Accenture is bringing its clients the capabilities they need to implement AI rapidly and safely throughout the enterprise.

Accenture recognizes that responsible AI requires taking intentional actions to design, deploy and use AI to create value, build trust and fuel innovation, while protecting against potential AI risks. The company has led by example since 2017, when it first embedded commitments to use AI responsibly in its Code of Business Ethics.

In 2022, the company implemented an enterprise-wide responsible AI program, which focuses on tracking where AI is being used, understanding what it is being used for, assessing AI systems for levels of risk and implementing mitigation strategies to address those risks, and developing post-deployment monitoring programs to oversee AI systems on an ongoing basis. The program leverages technology to improve speed and user experience and also focuses on improving responsible AI literacy through required ethics and compliance training, deep technical training for AI practitioners and responsible AI training for the company’s more than 742,000 people as part of its Technology Quotient (TQ) program.

"Leaders acknowledge the importance of responsible AI principles, but there is a gap in their practical implementation—our research shows that only 2% of companies have fully operationalized responsible AI across their organizations,” said Chakraborty. “Accenture will pave the way to help our clients establish and embed responsible AI, closing the gap between principles and action."

The Inside-Out History of Deepfake Technology


Today, deepfakes continue to be a topic of concern due to their potential to create convincing false representations (videos/images) of individuals, which can be used for malicious purposes. There is ongoing research both in creating more sophisticated deepfakes and in developing methods to detect and combat them.

Deepfake technology, which involves creating synthetic media that portrays events or images that never actually occurred, has a relatively recent history. The term "deepfake" is a combination of "deep learning" and "fake," and it refers to the use of artificial intelligence (AI) to generate convincing fake content.

The concept of deepfake became widely known in 2017 when a Reddit user created a subreddit dedicated to sharing videos that used face-swapping technology to insert celebrities' likenesses into existing videos, often for pornographic purposes. This use of AI for creating realistic-looking media quickly raised concerns about its potential for misuse, particularly in the creation of fake news, hoaxes, and other forms of disinformation.

Deepfakes are produced using two main AI algorithms: one that creates a synthetic image or video, and another that detects whether the replica is fake. The creation algorithm adjusts the synthetic media based on feedback from the detection algorithm until it becomes indistinguishable from real media.

The technology behind deepfakes has evolved from earlier forms of media manipulation, with photo manipulation dating back to the 19th century and applied to motion pictures as technology improved. However, the rapid advancement of AI in the late 20th and early 21st centuries has made deepfakes much more accessible and difficult to detect.

The history of deepfake technology is quite fascinating and involves a mix of academic research and community-driven development. Here's a brief overview:

Early Development: The foundations of deepfake technology can be traced back to the 1990s, with researchers at academic institutions exploring the potential of AI in media manipulation.

Generative Adversarial Networks (GANs):

A significant leap in the technology came with the invention of GANs in 2014 by computer scientist Ian Goodfellow. GANs are a class of AI algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. 
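To make the adversarial loop concrete, below is a minimal, illustrative PyTorch sketch of a GAN. It learns to mimic a simple 1-D Gaussian rather than faces, purely to show the two-network contest; every layer size and hyperparameter here is an assumption chosen for illustration, not taken from any deepfake system.

```python
# A toy GAN, following the description above: a generator forges samples
# and a discriminator judges them, each improving against the other.
# NOTE: this learns a 1-D Gaussian, not faces; all sizes and learning
# rates are illustrative assumptions, not from any real deepfake system.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: samples from N(4, 1.5)
    fake = generator(torch.randn(64, 8))    # generator's forgeries

    # Discriminator update: learn to score real samples as 1, forgeries as 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: adjust the forgeries until the discriminator
    # scores them as real -- the feedback loop described in the text.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator's output distribution should drift toward the real mean (~4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

In image deepfakes the same contest plays out with convolutional networks over pixels, which is why the forgeries eventually become hard to distinguish from real footage.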

Public Emergence: The term "deepfake" emerged in 2017 when a Reddit user, known by the pseudonym 'deepfakes', began sharing videos on a subreddit that used machine learning to swap celebrities' faces onto existing videos, often for pornographic content.

Widespread Attention: This use of AI caught public attention and raised concerns about its potential for creating convincing fake content that could be used for disinformation or other malicious purposes.

Creators involved 

The inventors/creators involved in the development of deepfake technology have ranged from academic researchers to anonymous online community members. The technology has since evolved, becoming more accessible and sophisticated, leading to a wide range of applications beyond the initial controversial uses. As deepfake technology continues to develop, there is an ongoing discussion about its ethical implications and the need for regulations to prevent misuse.

Deepfake technology has led to the creation of various notable projects and the emergence of skilled creators. Here are some of the most prominent examples:

EZ RyderX47: This creator gained fame for a deepfake video where Marty McFly and Emmett "Doc" Brown from "Back to the Future" are replaced by Tom Holland and Robert Downey Jr., respectively. The video showcases the creative possibilities of deepfake technology.
 

McAfee's Project Mockingbird: Announced at CES 2024, this project aims to empower users to identify deepfakes. It gained attention when it was used to debunk a deepfake scam involving a fake Taylor Swift promoting cookware.


In Event of Moon Disaster: This short film features an incredibly convincing deepfake of Richard Nixon delivering a speech that was prepared in case the Apollo 11 mission had failed. The film explores the implications of deepfakes and their historical context.


These projects and creators have contributed to both the advancement of deepfake technology and the ongoing conversation about its ethical use and societal impact.

Losses incurred due to deepfake tech

Deepfake technology has led to significant financial and social losses over the years. Some of the notable impacts are as follows:

Financial Losses: Deepfake scams have resulted in losses ranging from $243,000 to $35 million in individual cases. For instance, a bank manager was tricked into transferring $35 million to a fraudulent account due to a deepfake audio message.

Business Impact: A report from 2020 projected that deepfakes could cost businesses globally $250 billion by 2025. Financial institutions might face annual losses of up to $30 billion due to deepfake fraud by 2027.

Cybersecurity Threats: In 2022, 66% of cybersecurity professionals experienced deepfake attacks within their organizations. The banking sector is particularly concerned, with 92% of cyber practitioners worried about fraudulent misuse of the technology.

Social Engineering Attacks: Deepfakes have been used to create fake videos or audio messages, often impersonating CEOs or other high-ranking executives to deceive individuals into sending money or disclosing sensitive information.

Misinformation and Public Trust: Deepfakes have the potential to undermine election outcomes, social stability, and even national security, especially in the context of disinformation campaigns.

Donald Trump Case: In 2018, a deepfake video of Donald Trump was released by a Belgian political party, urging Belgium to withdraw from the Paris climate agreement. Although intended as satire, it highlighted the ease of manipulating a world leader's image.


Deepfake Voice Scam: In 2019, criminals used deepfake technology to mimic a CEO's voice in a fraudulent attempt to transfer funds, showcasing the potential for financial scams.

The rise of deepfake content and its misuse has prompted discussions on the need for better regulation and detection technologies to combat this issue and mitigate its harmful effects.

Criminalization of deepfake technology misuse

The issue of deepfake technology and its criminalization is a complex and evolving area of law globally, including in India. While deepfakes have potential benefits in various fields, they also pose significant risks such as privacy violations, defamation, and the spread of misinformation.

Globally, there is a growing concern about the malicious use of deepfakes, and countries are exploring ways to regulate this technology. The legal status of tackling crimes related to deepfakes varies from country to country, with some having specific regulations while others rely on existing laws to address the issue.

In India, as of the information available up to 2021, there was no specific statute that directly addressed deepfake cybercrime. However, various other laws could be applied to combat crimes involving deepfakes. For instance, Section 66E of the Information Technology (IT) Act of 2000 could be invoked in cases of deepfake offenses that infringe on a person's privacy by capturing, publishing, or transmitting their image without consent. 

Experts have pointed out that while India and other countries face challenges due to deepfakes, practical solutions are available, and provisions under several pieces of legislation could offer both civil and criminal relief. It's important to note that the development and use of deepfakes is a global issue, likely requiring international cooperation to effectively regulate their use and prevent associated crimes.

Infosys Topaz Launches Responsible AI Suite of Offerings


Infosys Topaz Unveils Responsible AI Suite of Offerings to Help Enterprises Navigate the Regulatory and Ethical Complexities of AI-powered Transformation.

The Responsible AI (RAI) Office will serve as the custodian of the ethical use of AI and ensure solutions align with emerging guardrails for AI across geographies.


Infosys (NSE, BSE, NYSE: INFY), a global leader in next-generation digital services and consulting, today announced the launch of its Responsible AI Suite, a part of Infosys Topaz, an AI-first set of services, solutions, and platforms using generative AI.

The rise of powerful generative AI systems in the past year has raised several concerns and conversations around the ethical dimensions of AI. According to the Infosys Generative AI Radar, by the Infosys Knowledge Institute, enterprises worldwide identify data privacy, security, ethics, and bias as the primary challenges in their pursuit of innovation with AI. The Responsible AI Suite is designed to help enterprises balance innovation with ethical considerations, such as bias prevention and privacy protection, and maximize their return on investments.

Infosys Topaz Responsible AI Suite is a set of 10+ offerings built around the Scan, Shield, and Steer framework. The framework aims to monitor and protect AI models and systems from risks and threats, while enabling businesses to apply AI responsibly. The offerings, across the framework, include a combination of accelerators and solutions designed to drive responsible AI adoption across enterprises.

Scan: Includes solutions that help identify the overall AI risk posture, legal obligations, and vulnerabilities, and that generate a single source of truth for the compliance status of all AI projects. For example, the Infosys Topaz RAI Watchtower monitors upcoming threats, vulnerabilities, and legal obligations.

Shield: These solutions focus on building technical guardrails, checks, and accelerators that are responsible by design across the AI lifecycle. The set also includes specialized solutions for AI security. For example, the Infosys Topaz Gen AI Guardrails help enforce the safe use of Gen AI by moderating input prompts and outputs for multiple risks (a minimal illustration of this pattern follows below).

Steer: These advisory and consulting services support strong and efficient AI governance for innovation. Offerings include AI strategy formulation, legal consultation, and contract reviews.
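As a rough illustration of the prompt-and-output moderation pattern that guardrail products of this kind implement, here is a minimal sketch. This is not Infosys' code; the risk patterns, blocked topics, and policy responses below are invented assumptions for the example.

```python
# A minimal sketch of the guardrail pattern: moderate the input prompt and
# the model output before either crosses a trust boundary. This is NOT
# Infosys' implementation; the risk patterns, blocked topics, and policy
# responses below are simplified assumptions made up for illustration.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
]
BLOCKED_TOPICS = ("build a weapon", "bypass authentication")

def moderate(text):
    """Return a list of risk flags found in the text."""
    flags = []
    if any(p.search(text) for p in PII_PATTERNS):
        flags.append("pii")
    if any(topic in text.lower() for topic in BLOCKED_TOPICS):
        flags.append("disallowed_topic")
    return flags

def guarded_completion(prompt, model_call):
    # Scan the prompt before it reaches the model (input moderation).
    if moderate(prompt):
        return "[blocked: prompt violates policy]"
    output = model_call(prompt)
    # Scan the output before it reaches the user; here we redact rather than block.
    for pattern in PII_PATTERNS:
        output = pattern.sub("[redacted]", output)
    return output

# Usage with a stand-in for a real LLM call:
print(guarded_completion("Summarize our Q3 report",
                         lambda p: "Contact bob@corp.com for details."))
```

Production guardrails add many more risk categories (toxicity, jailbreaks, bias) and typically use trained classifiers rather than regexes, but the scan-then-shield flow is the same.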

Infosys Topaz Responsible AI Suite will be amplified by an ecosystem of technology partners and think tanks via the Responsible AI Coalition. The Responsible AI Coalition will bring together startups, cloud service providers, and technology partners to further the common goal of advancing responsible AI. It will also lead a special working group of noted academicians, influencers, policymakers, and industry leaders to aid in shaping solutions that help set new industry standards in the responsible AI space.

Phil Fersht, CEO and Chief Analyst, HFS Research, said, “With the challenges of Responsible AI currently forcing many enterprises to slow their progress towards achieving scaled value with AI, smart offerings such as Infosys Topaz’s Responsible AI suite can clear the path to help them accelerate their critical AI initiatives.”

The suite of offerings is complemented by the Infosys Topaz RAI Office, which ensures that the offerings deliver solutions that ably navigate the shifting landscape of complex technical, policy, and governance challenges related to adopting AI responsibly across business functions. The RAI Office also looks into different facets of AI ethics, such as transparency, fairness, privacy, security, and compliance. It constitutes a centralized body for streamlining AI governance, formulating AI risk strategy, and maintaining standards, policies, and guidelines. It will also ensure adherence to responsible-by-design principles and standards throughout the AI development lifecycle, facilitating the safe use of AI across organizations.

Commenting on the launch, Balakrishna D. R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, said, "Responsible and ethical AI deployment is a business imperative as technology evolves. At Infosys, we recognize the critical need for fostering responsible AI and not merely as a set of principles but as actionable steps. Infosys Topaz Responsible AI Suite is a significant stride in helping our clients better manage their AI-first journey. Together with the Infosys RAI office, we affirm our commitment to advancing the ethos of responsible AI, providing enterprises with the necessary tools and expertise for ethical AI implementation."

UN Forms New AI Advisory Board; iSPIRT's Sharad Sharma and Hugging Face's Nazneen Rajani Among the Members


On Thursday, the UN Secretary-General announced at a press conference the creation of a new Artificial Intelligence Advisory Body on the risks, opportunities, and international governance of Artificial Intelligence (AI). The body will support the international community’s efforts to govern artificial intelligence.

Among the members of the AI Advisory Body, two are from India — Sharad Sharma (iSPIRT) and Nazneen Rajani (Hugging Face).

Sharad Sharma is a co-founder of iSPIRT, a non-profit think tank that wants India to be a product nation. He was previously the CEO of Yahoo India R&D and has been dubbed the architect of the Indian software products ecosystem.

Nazneen Rajani is a Research Lead at Hugging Face, which is building an open-source alternative to ChatGPT called H4, a powerful LLM aimed at "aligning language models to be helpful, honest, harmless, and huggy".

U.N. Secretary-General Antonio Guterres on Thursday launched this 39-member advisory body of tech company executives, government officials and academics from countries spanning 6 continents.

The panel aims to issue preliminary recommendations on AI governance by the end of the year and finalize them before the U.N. Summit of the Future next September.

The full list of members is as follows:
  • Anna Abramova, Director of the Moscow State Institute of International Relations-University AI Centre, Russian Federation
  • Omar Sultan al Olama, Minister of State for Artificial Intelligence of the United Arab Emirates, United Arab Emirates
  • Latifa al-Abdulkarim, Member of the Shura Council (Saudi Parliament), Assistant Professor of Computer Science at King Saud University, Saudi Arabia
  • Estela Aranha, Special Advisor to the Minister for Justice and Public Security, Federal Government of Brazil, Brazil
  • Carme Artigas, Secretary of State for Digitalization and Artificial Intelligence of Spain, Spain
  • Ran Balicer, Chief Innovation Officer and Deputy Director General at Clalit Health Services Israel, Israel
  • Paolo Benanti, Third Order Regular Franciscan, Lecturer at the Pontifical Gregorian University, Italy
  • Abeba Birhane, Senior Advisor in AI Accountability at Mozilla Foundation, Ethiopia
  • Ian Bremmer, President and Founder of Eurasia Group, United States
  • Anna Christmann, Aerospace Coordinator of the German Federal Government, Germany
  • Natasha Crampton, Chief Responsible AI Officer at Microsoft, New Zealand
  • Nighat Dad, Executive Director of the Digital Rights Foundation Pakistan, Pakistan
  • Vilas Dhar, President of the Patrick J. McGovern Foundation, United States
  • Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University, Portugal/Netherlands
  • Arisa Ema, Associate Professor at the University of Tokyo, Japan
  • Mohamed Farahat, Legal Consultant and Vice-Chair of MAG of North Africa IGF, Egypt
  • Amandeep Singh Gill, Secretary-General's Envoy on Technology
  • Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, United Kingdom
  • Rahaf Harfoush, Digital Anthropologist, France
  • Hiroaki Kitano, Chief Technology Officer of Sony Group Corporation, Japan
  • Haksoo Ko, Chair of Republic of Korea’s Personal Information Protection Commission, Republic of Korea
  • Andreas Krause, Professor at ETH Zurich, Switzerland
  • James Manyika, Senior Vice-President of Google-Alphabet, President for Research, Technology and Society, Zimbabwe
  • Maria Vanina Martinez Posse, Ramon y Cajal Fellow at the Artificial Intelligence Research Institute, Argentina
  • Seydina Moussa Ndiaye, Lecturer at Cheikh Hamidou Kane Digital University, Senegal
  • Mira Murati, Chief Technology Officer of OpenAI, Albania
  • Petri Myllymaki, Full Professor at the Department of Computer Science of University of Helsinki, Finland
  • Alondra Nelson, Harold F. Linder Professor at the Institute for Advanced Study, United States
  • Nazneen Rajani, Lead Researcher at Hugging Face, India
  • Craig Ramlal, Head of the Control Systems Group at the University of The West Indies at St. Augustine, Trinidad and Tobago
  • He Ruimin, Chief Artificial Intelligence Officer and Deputy Chief Digital Technology Officer, Government of Singapore, Singapore
  • Emma Ruttkamp-Bloem, Professor at the University of Pretoria, South Africa
  • Sharad Sharma, Co-founder iSPIRT Foundation, India
  • Marietje Schaake, International Policy Director at Stanford University Cyber Policy Center, Netherlands
  • Jaan Tallinn, Co-founder of the Cambridge Centre for the Study of Existential Risk, Estonia
  • Philip Thigo, Adviser at the Government of Kenya, Kenya
  • Jimena Sofia Viveros Alvarez, Chief of Staff and Head Legal Advisor to Justice Loretta Ortiz at the Mexican Supreme Court, Mexico
  • Yi Zeng, Professor and Director of Brain-inspired Cognitive AI Lab, Chinese Academy of Sciences, China
  • Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law, China

“The transformative potential of AI for good is difficult even to grasp,” Guterres said. He pointed to possible uses including predicting crises, improving public health and education, and tackling the climate crisis.

However, the UN Secretary-General cautioned, “It is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion and threaten democracy itself.”

Strict Rules Against AI Tools Like ChatGPT in Schools – UNESCO


UNESCO on Thursday published its first guidance on the use of Generative AI (GenAI) in education, urging government agencies to regulate the use of the technology, including protecting data privacy and setting an age limit for users.

In new guidance for governments, the UN’s education body UNESCO is calling on governments across the world to implement appropriate regulations and teacher training to ensure a human-centred approach to using Generative AI in education.

The UNESCO Guidance sets an age limit of 13 for the use of AI tools in the classroom and calls for teacher training on this subject.

In its guidance report, UNESCO said, "Publicly available generative AI (GenAI) tools are rapidly emerging, and the release of iterative versions is outpacing the adaptation of national regulatory frameworks. The absence of national regulations on GenAI in most countries leaves the data privacy of users unprotected and educational institutions largely unprepared to validate the tools."

“Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” said UNESCO Director-General Audrey Azoulay.

A recent UNESCO global survey of over 450 schools and universities found that fewer than 10% had institutional policies and/or formal guidance concerning the use of generative AI applications, largely due to the absence of national regulations.

Students have taken a liking to GenAI, which can generate anything from essays to mathematical calculations from just a few lines of prompts.

Among a series of guidelines in the 64-page guidance report, UNESCO stressed the need for government-sanctioned AI curricula for school education and for technical and vocational education and training.

Presently in India, there is no law to regulate the AI sector. In April this year, the Ministry of Electronics and IT said that it was not considering any law to regulate the AI sector, with Union IT minister Ashwini Vaishnaw admitting that though AI “had ethical concerns and associated risks”, it had proven to be an enabler of the digital and innovation ecosystem.

However, last month Prime Minister Narendra Modi called for a global framework on the expansion of “ethical” artificial intelligence (AI). In July, India's telecom regulator TRAI proposed immediately establishing an independent statutory authority to ensure the development of responsible AI and the regulation of its use cases in the country.

Generative AI entered public awareness last November with the launch of ChatGPT, which became the fastest-growing app in history. While ChatGPT reached 100 million monthly active users in January 2023, as of July only one country had released regulations on generative AI.

The UNESCO guidance report further states: "The most fundamental perspective of the long-term implications of GenAI for education and research is still about the complementary relationship between human agency and machines. One of the key questions is whether humans can possibly cede basic levels of thinking and skill-acquisition processes to AI and rather concentrate on higher-order thinking skills based on the outputs provided by AI."

Writing, for example, is often associated with the structuring of thinking. With GenAI, rather than starting from scratch to plan the aims, scope and outline of a set of ideas, humans can now start with a well-structured outline provided by GenAI.

"Some experts have characterized the use of GenAI to generate text in this way as ‘writing without thinking’ (Chayka, 2023). As these new GenAI-assisted practices become more widely adopted, established methods for the acquisition and assessment of writing skills will need to adapt." - the UNESCO Guidance report mentions.

One option in the future is that the learning of writing may focus on building skills in planning and composing prompts, critical evaluation of the GenAI outputs, higher-order thinking, as well as on co-writing based on GenAI’s outlines.

In its concluding remarks, the UNESCO guidance report states: "From the perspective of a human-centred approach, AI tools should be designed to extend or augment human intellectual abilities and social skills – and not undermine them, conflict with them or usurp them."

While GenAI should be used to serve education and research, we all need to be cognizant that GenAI might also change the established systems and their foundations in these domains. The transformation of education and research to be triggered by GenAI, if any, should be rigorously reviewed and steered by a human-centred approach.

TRAI Recommends Setting Up Artificial Intelligence and Data Authority of India (AIDAI)


The Telecom Regulatory Authority of India (TRAI) has today released its recommendations on ‘Leveraging Artificial Intelligence and Big Data in Telecommunication Sector’. The telecom regulatory body also proposed that the Ministry of Electronics and Information Technology (MeitY) be designated as the administrative ministry for AI.

TRAI recommends setting up a regulatory framework. "For ensuring development of responsible Artificial Intelligence (AI) in India, there is an urgent need to adopt a regulatory framework by the Government that should be applicable across sectors," says the TRAI recommendation. The broad principles of the suggested regulatory framework, TRAI says, should comprise —
  • An independent statutory authority.
  • A Multi Stakeholder Body (MSB) that will act as an advisory body to the proposed statutory authority.
  • Categorizations of the AI use cases based on their risk and regulating them according to broad principles of Responsible AI
TRAI recommends immediately establishing an independent statutory authority to ensure the development of responsible AI and the regulation of use cases in India.

TRAI names the proposed authority the “Artificial Intelligence and Data Authority of India” (AIDAI).

AIDAI functions –
  • Regulation Making Functions
  • Recommendatory Functions

Regulation Making Functions –

  • Framing regulations on various aspects of AI including its responsible use. 
  • Defining principles of responsible AI and their applicability on AI use cases based on risk assessment.
  • AIDAI should evolve the framework based on its assessment, advice of proposed MSB, global best practices, and public consultation.
  • Ensuring that principles of responsible AI are made applicable at each phase of AI framework lifecycle viz. design, development, validation, deployment, monitoring and refinement.
  • Developing model AI Governance Framework to guide organizations on deploying AI in a responsible manner.
  • Developing model Ethical Codes for adoption by public and private entities in different sectors.
  • Any other aspect of AI regulation for the orderly growth of the AI sector and the protection of consumers.

Recommendatory Functions —

  • Facilitating adoption of future technologies and innovative architectures related to AI models.
  • Monitoring and making recommendations on the enforcement framework on AI applications and its use cases.
  • Coordinating with technical standard-setting bodies of government, such as the Telecom Engineering Centre (TEC), for accreditation of various labs for testing and accreditation of AI products and solutions, and giving recommendations thereof.
  • Evaluating capacity-building and infrastructure requirements and giving recommendations to the Government.
  • Assessing the data digitization requirement in the country; reviewing and prioritizing the avenues requiring concentrated efforts for data digitization and fixing time frames accordingly.
  • Being the apex body to oversee all issues related to data digitization, data sharing and data monetization in the country, including framing policies and incentivization schemes for data digitalization, data sharing and data monetization.
  • Defining the process framework for the use of AI and related technology in data processing, data sharing and data monetization while ensuring the privacy and security of the data owner.
  • Putting in place an overarching framework for ethical use of data both by the Government as well as by corporates in India. The framework should address generic as well as vertical sector-specific requirements.
  • Studying the possible impact of upcoming technologies on data ethics and coming out with relevant rules/guidelines on the subject.
  • Creating a national-level mechanism to bring the State Governments, Local Bodies and other agencies on board to adopt the national policy on data governance.
  • Creating a uniform framework to onboard private entities for adoption of the national policy on data governance, and to enable them and public sector entities to digitalize, monetize and share their data within the privacy and other applicable laws and policies.
TRAI also recommends that DoT should, in collaboration with organizations such as IISc Bangalore, IIT Madras, IIT Kanpur and other research institutes, launch research in telecommunications to develop indigenous AI use cases.

In the recommendation, TRAI also advises setting up AI-specific infrastructure and experimental campuses.

"At least one Centre of Excellence for Artificial Intelligence (CoEAI) should be established in each State/UT for facilitating educational institutions, startups, innovators, researchers and other public/private entities to develop and demonstrate technological capabilities. These centres should have access to high bandwidth, computational facilities and data sets for training AI models. All such centres should also be linked with proposed 5G/6G labs for sharing of resources and knowledge. To galvanize an effective AI ecosystem and to nurture quality human resources these CoE-AIs should allow industry players as well as startups to partner with academia in conducting research, developing cutting-edge applications and scalable problem solutions in various fields such as agriculture, healthcare, education, smart cities, smart mobilities, etc.," the recommendation paper mentioned.

Meanwhile, the Cyberspace Administration of China (CAC) released its "Interim Measures for the Management of Generative Artificial Intelligence Services", in which the Chinese government lays out its rules to regulate those who provide generative AI capabilities to the public in China.

While the US is still working to regulate AI, the EU’s AI Act is being discussed as a possible global standard for AI regulation, much like the GDPR, and is likely to change how many machine learning engineers do their work. The proposal includes a ban on certain uses of AI, such as facial recognition in public spaces, as well as requirements for transparency and accountability in the use of AI.

To recall, TRAI had earlier floated a consultation paper to explore a framework for internet-based calls and messaging apps like Facebook, WhatsApp, Telegram, Apple's FaceTime etc. and selectively ban their services during emergency situations.


The Kavli Foundation Launches Two Kavli Centers for Ethics, Science, and the Public to Engage the Public in Exploring Ethical Implications Born from Scientific Discovery




Scientific discoveries deepen our understanding of nature and ourselves, with the potential to transform our everyday lives, yet can raise ethical concerns or risks for society.


Cutting-edge neuroscience, genetics, and artificial intelligence are a few examples that are driving the need to discuss: Who bears responsibility for broad ethical considerations of scientific discoveries? When is it optimal to consider implications and risks? How can the public be empowered to participate in these discussions?

Two Kavli Centers for Ethics, Science, and the Public – at the University of California, Berkeley, and the University of Cambridge – are launching to engage the public in identifying and exploring ethical considerations and impacts born from scientific discovery.

The Kavli Foundation’s vision for the centers is a paradigm shift to meet an as-yet unmet need within science: a proactive and sustained effort that is intentional in connecting the public, scientists, ethicists, social scientists, and science communicators early in the process of scientific discoveries to identify and discuss potential impacts on society.

“We’re embarking on a democratization of the way we think, collaborate, and communicate about scientific discoveries and their ethical aspects – and ensuring the public is included,” said The Kavli Foundation President Cynthia Friend. “It’s long past due for this to happen.”

Until now, there hasn’t been a sustained and proactive venture to address ethical implications born from scientific discovery that involves the public early and intentionally in the scientific process. And while there is increasing recognition within the scientific community that the public should be involved, mechanisms and infrastructure to do this are lacking. The public is too often left out of these important discussions, or they are brought in too late.

“With the Kavli Centers for Ethics, Science, and the Public, we are taking necessary action to create the infrastructure that enables early and intentional public engagement in the ethical considerations born from scientific discoveries,” remarked The Kavli Foundation Director of Public Engagement Brooke Smith.

Two centers were selected for this new venture based on their vision, approach, and experience. While both are multi-faceted and complementary in their approaches working across disciplines in the sciences and humanities, each will have an initial focus that is unique.

The Kavli Center for Ethics, Science, and the Public at UC Berkeley will reimagine how scientists are trained, beginning in the fields of neuroscience, genetics, and artificial intelligence. Leading the center is AI expert Stuart Russell, along with Nobel-Prize Laureate Saul Perlmutter, who provided some of the first evidence that the expansion of the universe is accelerating; Nobel and Kavli Prize Laureate Jennifer Doudna, known for her discovery of the gene-editing tool CRISPR; theoretical and moral philosopher Jay Wallace; bioethicist Jodi Halpern; neuroscientist Jack Gallant; and historian and writer Elena Conis.

“The impetus from The Kavli Foundation has helped to mobilize Berkeley’s unparalleled resources in the humanities, social sciences, natural sciences, and engineering to collaborate on addressing one of humanity's most pressing problems: how to ensure that our rapidly advancing scientific and technological capabilities are directed towards the interests of humanity,” said Stuart Russell, who serves as the inaugural Director of the Kavli Center for Ethics, Science, and the Public at UC Berkeley.
 

In a unique collaboration with Wellcome Connecting Science, the Kavli Center for Ethics, Science, and the Public at the University of Cambridge will be led by internationally recognized social scientist and genetic counsellor Anna Middleton; supported by sociologist and bioethicist Richard Milne; and journalist and broadcaster Catherine Galloway; with creative industry expertise from broadcaster Vivienne Parry, OBE; sociology of education expertise from Susan Robertson; and genomics and public engagement expertise from Julian Rayner. Drawing on a network of experts in ethics and public engagement from the UK, China, Russia, India, and Japan, the new center will explore how ethical implications raised by science are tackled in different cultural contexts within the domains of genomics, big data, health research, and emerging technologies.

“From the discovery of DNA’s structure to sequencing 20% of the world’s COVID virus and the development of the first artificial intelligence, Cambridge has been at the cutting edge of science for centuries,” remarked Anna Middleton, director for the Kavli Center for Ethics, Science, and the Public at the University of Cambridge. “Through collaboration with experts in popular culture we will find the evidence base to communicate complex ideas around the ethical issues raised by science so that all of us can share in decision making around the implications of science for society.”

The idea for the Centers was sparked by The Kavli Foundation’s work and observations in science and society, including research at the 20 Kavli Institutes globally, where inspiring and transformative science is being done—ranging from decoding brain activity to fabricating artificial cells.

“This is a long-overdue beginning of an important journey for the scientific community, and we look forward to the impact the Kavli Centers for Ethics, Science, and the Public will have on the future role of science within society,” said Friend.

The Kavli Foundation is dedicated to advancing science for the benefit of humanity. The foundation’s mission is implemented through Kavli research institutes globally and programs that support basic science in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics; initiatives that strengthen the relationship between science and society; and prizes and awards including the international Kavli Prizes and the AAAS Kavli Science Journalism Awards. Learn more at kavlifoundation.org and follow @kavlifoundation.

View source version on businesswire.com: https://www.businesswire.com/news/home/20211209005349/en/


Manipulated Video & Audio Made using 'Deepfakes' Poses Threat to Elections

A video on social media shows a high-ranking U.S. legislator declaring his support for an overwhelming tax increase. You react accordingly because the video looks like him and sounds like him, so certainly it has to be him.

The term "fake news" is taking a much more literal turn as new technology is making it easier to manipulate the faces and audio in videos. The videos, called deepfakes, can then be posted to any social media site with no indication they are not the real thing.

Edward Delp, director of the Video and Imaging Processing Laboratory at Purdue University, says deepfakes are a growing danger with the next presidential election fast approaching.

“It’s possible that people are going to use fake videos to make fake news and insert these into a political election,” said Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering. “There’s been some evidence of that in other elections throughout the world already.

“We’ve got our election coming up in 2020 and I suspect people will use these. People believe them and that will be the problem.”

The videos pose a danger of swaying the court of public opinion through social media, as almost 70 percent of adults indicate they use Facebook, usually daily. YouTube boasts even higher numbers, with more than 90 percent of 18- to 24-year-olds using it.

Delp and doctoral student David Güera have worked for two years on video tampering as part of larger research into media forensics. They have used sophisticated machine learning techniques based on artificial intelligence to create an algorithm that detects deepfakes.

A YouTube video is available at https://youtu.be/aWKBWoDtR8k.

Detecting Deepfake Videos through Media Forensics




Late last year, Delp and his team’s algorithm won a Defense Advanced Research Projects Agency (DARPA) contest. DARPA is an agency of the U.S. Department of Defense.

“By analyzing the video, the algorithm can see whether or not the face is consistent with the rest of the information in the video,” Delp said. “If it’s inconsistent, we detect these subtle inconsistencies. It can be as small as a few pixels, it can be coloring inconsistencies, it can be different types of distortion.”

“Our system is data driven, so it can look for everything – it can look into anomalies like blinking, it can look for anomalies in illumination,” Güera said, adding the system will continue to get better at detecting deepfakes as they give it more examples to learn from.
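To give a flavor of the kind of anomaly cue such a data-driven system can exploit, here is a toy sketch that flags frames whose illumination jumps abnormally between frames. This is not the Purdue algorithm; the brightness heuristic, threshold, and file name are invented assumptions, and real detectors learn such cues with neural networks rather than hand-coding them.

```python
# A toy illustration (not the Purdue system) of one anomaly cue a
# data-driven deepfake detector can exploit: frame-to-frame illumination
# consistency. The brightness heuristic and z-score threshold below are
# invented assumptions for illustration only.
import cv2
import numpy as np

def illumination_anomalies(video_path, z_thresh=3.0):
    """Return indices of frames whose mean brightness jumps abnormally."""
    cap = cv2.VideoCapture(video_path)
    brightness = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness.append(gray.mean())
    cap.release()

    if len(brightness) < 3:
        return []
    diffs = np.abs(np.diff(brightness))                # frame-to-frame changes
    z = (diffs - diffs.mean()) / (diffs.std() + 1e-9)  # standardized jump sizes
    return [int(i) + 1 for i in np.where(z > z_thresh)[0]]

# Usage (the file name is hypothetical):
# print(illumination_anomalies("suspect_clip.mp4"))
```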

The research was presented in November at the 2018 IEEE International Conference on Advanced Video and Signal Based Surveillance.

Deepfakes can also be used to create fake pornographic videos and images, using the faces of celebrities or even children.

Delp said early deepfakes were easier to spot. The techniques couldn’t recreate eye movement well, resulting in videos of a person that didn’t blink. But advances have made the technology better and more available to people.
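The blink cue lends itself to a simple check. Below is a small, self-contained numpy sketch using the eye aspect ratio (EAR), a standard facial-landmark measure that drops sharply when an eye closes; a subject whose EAR never dips over a long clip would be a red flag. Landmark extraction is assumed to come from a separate face-landmark detector and is not shown, and the threshold values are illustrative.

    # Illustrative blink check; thresholds are assumptions, not tuned values.
    import numpy as np

    def eye_aspect_ratio(eye):
        """eye: (6, 2) array of landmarks around one eye, ordered p1..p6."""
        vertical_1 = np.linalg.norm(eye[1] - eye[5])
        vertical_2 = np.linalg.norm(eye[2] - eye[4])
        horizontal = np.linalg.norm(eye[0] - eye[3])
        return (vertical_1 + vertical_2) / (2.0 * horizontal)

    def blink_count(ear_series, closed_thresh=0.2, min_frames=2):
        """Count blinks: runs of consecutive frames with EAR below threshold."""
        blinks, run = 0, 0
        for ear in ear_series:
            if ear < closed_thresh:
                run += 1
            else:
                if run >= min_frames:
                    blinks += 1
                run = 0
        return blinks + (1 if run >= min_frames else 0)

    # A flat EAR series (no blinks over a long clip) is a red flag.
    print(blink_count([0.30, 0.31, 0.15, 0.12, 0.30, 0.32]))  # 1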

News organizations and social media sites have concerns about the future of deepfakes. Delp foresees both having tools like his algorithm in the future to determine what video footage is real and what is a deepfake.

“It’s an arms race,” he said. “Their technology is getting better and better, but I like to think that we’ll be able to keep up.”

Top Image - Blog.avira.com

Businesses Should Stay on Top of Emerging Ethical Concerns Surrounding AI

Demands for more ethical use of Artificial Intelligence (AI) will only increase as the public becomes more aware of the damage that unintended consequences of automation can cause. Even the relatively simple automation found today at Facebook, Twitter, Google, Amazon and others can have unwanted effects on society.

Much has been written about Ethics and Artificial Intelligence (AI) and with many organizations looking to adopt some form of AI technology in 2018, business leaders are wise to stay on top of these emerging ethical concerns, according to GlobalData, a leading data and analytics company.

Job displacement is still a key consideration, as is safeguarding data. In a survey conducted by GlobalData, 23% of organizations indicated they had cut or not replaced employees because of AI and 57% indicated security as a top concern.

AI Ethics and Businesses

However, looking ahead, the ethics questions the AI community will need to tackle are even more controversial. For example, if a child runs into the road, how should a self-driving car react: hit the child, or swerve and risk injuring its passenger?

More relevant to business leaders is the concern that an AI-infused application may not perform up to the organization’s ethical standards. It may contain unintentional bias – say, a financial algorithm that is biased against a specific race, or an application that demonstrates a preference for one gender over another. This raises the question of what should be done when a phrase that is acceptable said by one demographic is completely unacceptable uttered by another, whether an algorithm can be trained to reliably make that distinction, and what happens when it makes a mistake.
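One way such bias concerns are made measurable in practice is the "80% rule" disparate-impact check, which compares favorable-outcome rates across demographic groups. The sketch below is a hedged illustration with made-up column names and data, not a reference implementation of any particular audit tool.

    # Illustrative fairness check; groups, columns and data are invented.
    import pandas as pd

    def disparate_impact_ratio(df, group_col, outcome_col, privileged):
        """Ratio of favorable-outcome rates: unprivileged vs. privileged."""
        priv_rate = df.loc[df[group_col] == privileged, outcome_col].mean()
        unpriv_rate = df.loc[df[group_col] != privileged, outcome_col].mean()
        return unpriv_rate / priv_rate

    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    ratio = disparate_impact_ratio(decisions, "group", "approved",
                                   privileged="A")
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.33
    if ratio < 0.8:  # the conventional "80% rule" cutoff
        print("flag: model output warrants a bias review")

A ratio well below 0.8, as in this toy example, is the kind of signal that would prompt an organization to re-examine its training data before customers or regulators do.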

To recall, in order to work on such issues surrounding AI and its impact on society, tech giants Facebook, Microsoft, Google (and Google’s DeepMind), IBM, and Amazon formed a partnership in October 2016 and launched a non-profit organization, the Partnership on Artificial Intelligence to Benefit People and Society (PAI). The organization was established to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influence on people and society.

Rena Bhattacharyya, Technology Analyst at GlobalData, comments: “On the one hand, unintentional results are not the fault of the organization using the AI solution. The responsibility may lie in the data used to train the underlying machine learning model. However, customers are quick to pass judgment. If and when these unintentional biases become public, customers will quickly assign blame to the company using them, potentially with enormous impact to a brand’s reputation.

“Just as CEOs may take the blame for customer data breaches, and as a result may lose their jobs, senior leaders are also at risk of taking the fall when an AI solution implemented by their organization crosses an ethical line. It’s in their best interest to ensure that does not happen as their reputation depends on it.”

Turning to India, the country’s IT ministry last month constituted four committees to study various aspects of Artificial Intelligence for citizen-centric use.

Mixed signals have been circulating globally about the adoption of AI and automation. An MIT Technology Review report said that Artificial Intelligence will not eliminate jobs in Asia; however, in January 2017, before that report, Indian IT giant Infosys was reported to have replaced 9,000 of its employees with automation and AI. Infosys CEO Vishal Sikka quit the company later that year, fueling speculation that his strategy of rapidly embracing automation and AI was one of the reasons for his exit.

Top-featured Image - www.slideshare.net/sparksandhoney/ai-ethics


