Showing posts with label Deepfakes.

India Tightens IT Rules to Combat Deepfakes and AI Misinformation

The Indian government has proposed significant amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at curbing the misuse of deepfakes and generative AI content. Here's a breakdown of the key changes:

New IT Rules Targeting Deepfakes (2025 Draft Amendments)

  • Definition Introduced
    Synthetically Generated Information: Defined as content created, altered, or modified using computer tools to appear real.
  • Mandatory Labelling
    Platforms with over 5 million users (like Facebook, YouTube, Instagram) must:
    • Ask users to declare if uploaded content is synthetic.
    • Take reasonable steps to verify these claims.
    • Clearly label synthetic content with visible or audible markers.
    • For videos: markers must cover at least 10% of the screen.
    • For audio: markers must be present in the first 10% of the clip.
    • These markers cannot be removed or altered (a simple check of the 10% thresholds is sketched after this list).
  • Legal Protections
    Platforms acting in good faith to remove or block synthetic content will receive legal protection.
  • Takedown Oversight
    Only senior officers (Joint Secretary or above) can issue takedown orders.
    All takedown actions will undergo monthly review by a Secretary-level officer to ensure legality and proportionality.
  • Timeline & Feedback
    Draft rules were released on October 22, 2025.
    Public feedback is open until November 6, 2025.
    Final rules are expected to take effect from November 1, 2025.
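
The 10% thresholds above lend themselves to a straightforward programmatic check. The sketch below is purely illustrative and uses hypothetical function names and inputs (frame dimensions, marker dimensions, clip duration); it is not part of the draft rules or any platform's actual implementation.

```python
# Illustrative sketch of the draft 10% labelling thresholds.
# All function names and inputs are hypothetical.

def video_label_ok(frame_w: int, frame_h: int, label_w: int, label_h: int) -> bool:
    """Visible marker must cover at least 10% of the video frame area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def audio_label_ok(clip_seconds: float, label_start: float, label_end: float) -> bool:
    """Audible marker must fall entirely within the first 10% of the clip."""
    return label_start >= 0 and label_end <= 0.10 * clip_seconds

if __name__ == "__main__":
    print(video_label_ok(1920, 1080, 640, 360))  # True: marker covers ~11% of the frame
    print(audio_label_ok(60.0, 0.0, 5.0))        # True: marker ends within the first 6 seconds
```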

Dehradun’s Big Leap—₹10 Crore AI Centre Aims to Blend Tech with Trust

  • Dr. Jitendra Singh Inaugurates AI Centre at Dehradun University, Calls for Integrity in Technology Use
  • New AI Centre to Train Future Innovators and Startups, Aligns with PM Modi’s Digital India and Skill India initiatives
Union Minister Dr. Jitendra Singh inaugurated the Centre of Excellence for Artificial Intelligence, Skill Development, and Innovation at Graphic Era University in Dehradun—a landmark moment for Uttarakhand and India’s tech ecosystem.

Key Highlights:

  • First-of-its-kind in Uttarakhand: Spanning 1.5 lakh sq. ft., the centre includes:
    • Apple iOS Development Centre (in collaboration with Apple and Infosys)
    • NVIDIA-powered AI and HPC hub featuring DGX B200 system with 8 GPUs and 1.74 TB GPU memory
  • Investment: ₹10+ crore to support research in healthcare, agriculture, smart cities, and more
  • Academic Prestige: India’s first Generative AI Ready Campus powered by AWS, ranked 48th in NIRF 2025, accredited A+ by NAAC

Ethical Tech Vision:

Dr. Singh emphasized that AI can create miracles if used judiciously but can also be misused without integrity, referencing deepfakes and misinformation.

He advocated a hybrid model—where AI complements human judgment, not replaces it.

Real-World Impact:

AI-enabled telemedicine vans are serving rural India, showcasing how hybrid AI-human models can deliver healthcare to underserved regions.

Strategic Outlook:

India, once a late adopter of technologies, is now leading in space exploration and quantum research.

Dr. Singh stressed the importance of linking academic research with industry and startups to sustain this momentum.

Infibeam Avenues and IISc Join Together to Develop Real-time Deepfake Detection Systems for Government, Corporates, Organisations to Combat AI-related threats; sign MoU

Infibeam Avenues Ltd, a leading fintech company, has announced the signing of a strategic MoU for Research and Development (R&D) with the Indian Institute of Science (IISc), Bengaluru, one of the world’s top-ranking universities known for its research excellence. This collaboration aims to research and develop advanced real-time deepfake detection systems designed to enhance digital security for government entities, corporations, and organizations, effectively combating the rising threat of AI-generated deception.

As deepfake technology continues to evolve, it poses significant risks to personal and corporate integrity. These sophisticated AI-generated media can create hyper-realistic but false representations of individuals and events, leading to misinformation, cyberbullying, harassment, financial fraud, and identity theft. Such threats have far-reaching implications for businesses, government institutions, and the general public.

A notable incident highlighting the urgency of this issue occurred in January 2024, when a Hong Kong-based multinational company lost approximately $25 million (around Rs 207 crore) due to a deepfake scam. Scammers executed a convincing live video call using deepfake technology to impersonate company executives, leading to a severe financial loss before the deception was uncovered. (Source: South China Morning Post and Business Insider).

“Digital communications and a digital India will thrive only as long as there is trust,” said Mr. Rajesh Kumar SA, CEO of Phronetic.AI, an AI unit of Infibeam Avenues Ltd. “This partnership is a pivotal step in restoring trust in digital communications. Together, we will equip users with the necessary tools to differentiate between truth and fabrication in an increasingly complex digital landscape, thereby mitigating fraud risks and enhancing digital trust.”

Under the terms of the Memorandum of Understanding (MoU), Infibeam Avenues Ltd’s AI business unit, Phronetic.AI, and the IISc team will collaboratively develop anti-deepfake technology specifically tailored for real-time video communication. The partnership will focus on selecting the most effective detection models for various scenarios, ensuring that real-time deepfake detection operates efficiently and cost-effectively at scale.

“We are committed to staying ahead of malicious actors by developing innovative AI solutions that ensure digital authenticity,” said Mr. Vishal Mehta, Chairman and Managing Director of Infibeam Avenues Ltd. “This partnership is a crucial step toward enhancing cybersecurity and preventing the misuse of deepfake technology for fraudulent activities.”

Despite the availability of various deepfake detection tools in the market, only a limited number possess the capability for real-time operation. In a pioneering effort, Infibeam’s Phronetic.AI team has developed an advanced video AI agent that can detect deepfakes in real-time through a novel interventional technique. This agent actively engages in live video conversations, alerting users when the other participant is identified as a deepfake. Infibeam has already filed a patent for its innovative real-time deepfake detection algorithm.

Recognizing the increasing sophistication of deepfakes and the necessity for detection algorithms to evolve continuously to address this growing threat, the company has collaborated with Vision and AI Lab (VAL) of the Indian Institute of Science (IISc), where it aims to improve these algorithms further, ensuring robust defenses against the challenges posed by increasingly realistic deepfake technology.

“As Generative AI continues to advance at an unprecedented pace, the rise of deepfakes poses a significant challenge. Without proactive measures, the spread of AI-generated misinformation could become a major concern. Addressing this requires ongoing efforts from AI researchers to monitor emerging generative models and develop robust techniques to detect deepfakes effectively,” said Prof. Venkatesh Babu, Professor and Chair of the Dept. of Computational and Data Sciences (CDS), IISc.

Additionally, the research will prioritize the development of a user-friendly interface, enabling easy access for non-experts to verify the authenticity of live visuals and audio. This scalable detection system will be adaptable across various sectors, including banking, healthcare, insurance, finance, fintech, HR recruitment, government organizations, police, armed forces, and personal communications, addressing the diverse needs of industries particularly vulnerable to deepfake technology.

This research initiative aims to offer a real-time deepfake detection AI agent that enhances public confidence and protects the reputations of its users, whether they are government institutions, organizations, or corporations.

About Infibeam Avenues Limited:

Infibeam Avenues Ltd. is one of the leading global financial technology (fintech) companies, offering comprehensive digital payment solutions and enterprise software platforms to businesses and governments across industry verticals. The company's payment infrastructure solution includes acquiring and issuing solutions as well as infrastructure offerings for banks. The core Payment Gateway (PG) business provides more than 200 payment options to merchants, allowing them to accept payments through websites and mobile devices in 27 international currencies. Infibeam Avenues' enterprise software platform hosts India's largest online marketplace for government procurement. The company processed transactions worth INR 7 trillion (US$ 86 billion) in FY24 and currently has more than 10 million clients across digital payments and enterprise software platforms. Its vast clientele includes merchants, enterprises, corporations, governments, and financial institutions in both domestic (India) and international markets. Infibeam Avenues' international operations are based in the United Arab Emirates, Australia, and the United States of America. The company also has a business presence in Oman, where it works with three of the largest banks in the country.

India-US Researchers Create Quantum-Safe Video Encryption Framework to Tackle Deepfake-like Threats

Researchers from India and the USA have created a quantum-safe video encryption framework to tackle modern cyber threats like deepfakes and data manipulation. This innovative framework combines quantum computing's inherent randomness with advanced SSL-encrypted HTTP transmission, providing unmatched security and efficiency.

The research, led by experts from Florida International University and the National Forensic Sciences University, has been featured in IEEE Transactions on Consumer Electronics.

This framework integrates quantum encryption with classical video transmission methods to enhance security against evolving cyber threats.

This breakthrough is expected to significantly enhance video communication security, especially for sensitive communications in defense, government, and military operations.

A collaboration between Dr. Naveen Kumar Chaudhary of the National Forensic Sciences University in India and Dr. S.S. Iyengar and Dr. Yashas Hariprasad of Florida International University led to the development of this quantum-safe encryption framework.

A promising step towards a more secure digital future, the framework is based on hybrid quantum video encryption, which uniquely combines the power of quantum encryption with classical video transmission techniques, ensuring robust protection against potential quantum computing threats.

The quantum encryption utilizes the principles of quantum mechanics to create encryption keys that are virtually impossible to crack using classical computing methods.

The framework incorporates advanced SSL-encrypted HTTP transmission to maintain high-quality video communication. It merges the strengths of both quantum and classical encryption, offering a dual layer of security.

It has varied cybersecurity applications and aims to protect sensitive video communications, particularly in sectors like defense, government, and the military.

Designed to withstand advances in quantum computing, the framework offers a long-term solution for secure video transmission and marks a significant leap forward in cybersecurity, addressing growing concerns over deepfakes and data manipulation.

It's a promising development that could reshape the landscape of secure digital communication. The research has been funded by U.S. Army DEVCOM Army Research Laboratory and U.S. National Science Foundation (NSF), an independent agency of the United States federal government. 

Tackling Deepfakes

The quantum-safe encryption framework tackles deepfake threats by leveraging the inherent randomness of quantum computing and advanced SSL-encrypted HTTP transmission. Here's how it works:

1. Pseudorandom Keys: The framework uses quantum-generated pseudorandom keys to encrypt video streams. These keys are extremely difficult to predict or replicate, making it challenging for deepfake creators to manipulate the video content.

2. Quantum-Safe Protocols: Individual frames of the video are secured using quantum-safe protocols, ensuring that each frame is protected against tampering.

3. Enhanced Security: By combining quantum encryption with classical methods, the framework provides a dual layer of security, significantly outperforming current methods.

4. Authenticity and Integrity: The encryption ensures the authenticity and integrity of video communications, making it difficult for malicious actors to create convincing deepfakes.

This approach is particularly effective in sensitive sectors like defense, government, and military operations, where the authenticity of video communications is crucial.
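
The published construction is not reproduced here, but the per-frame idea in points 1 and 2 above can be approximated with standard primitives. The sketch below derives a distinct key for each frame from a random seed (standing in for quantum-generated randomness) and encrypts every frame with authenticated encryption (AES-GCM from Python's `cryptography` package); it is a conceptual illustration only, not the researchers' implementation.

```python
# Conceptual sketch: per-frame authenticated encryption with keys derived
# from a shared random seed. os.urandom stands in for a quantum randomness
# source; this is not the framework described in the paper.
import os
from typing import List, Tuple
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def frame_key(seed: bytes, frame_index: int) -> bytes:
    """Derive a distinct 256-bit key for each frame from the shared seed."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=f"frame-{frame_index}".encode()).derive(seed)

def encrypt_frames(seed: bytes, frames: List[bytes]) -> List[Tuple[bytes, bytes]]:
    """Encrypt and authenticate each frame individually; returns (nonce, ciphertext) pairs."""
    sealed = []
    for i, frame in enumerate(frames):
        nonce = os.urandom(12)
        sealed.append((nonce, AESGCM(frame_key(seed, i)).encrypt(nonce, frame, None)))
    return sealed

if __name__ == "__main__":
    seed = os.urandom(32)                       # stand-in for a quantum-generated seed
    frames = [b"frame-0-pixels", b"frame-1-pixels"]
    for nonce, ciphertext in encrypt_frames(seed, frames):
        print(len(nonce), len(ciphertext))
```

Because each frame is individually authenticated, substituting or altering any frame fails verification on decryption, which is the property that makes frame-level tampering detectable in this kind of scheme.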

Indian Govt Issues Advisory Warning on AI Generated Deepfake Threats

India's national nodal agency for responding to computer security incidents, the Indian Computer Emergency Response Team (CERT-In), has recently issued an advisory warning about the rising threats posed by AI-generated deepfakes.

Deepfake technology, which involves the use of artificial intelligence (AI) to create highly realistic and convincing fake videos, images, and audio, is becoming increasingly sophisticated. This technology poses significant risks, including the potential for disinformation, fraud, and social engineering attacks.

The advisory highlights risks such as misinformation, financial fraud, and privacy violations, and provides guidance for individuals and organizations to detect and counter these threats.

Here are some key points from the advisory:

1. Verify Sources: Ensure digital content is from reliable sources before sharing or acting on it.

2. Look for Anomalies: Identify signs such as unnatural blinking, mismatched lip-sync, inconsistent lighting, or distorted visuals.

3. Cross-Reference Information: Confirm the accuracy of content through multiple trusted sources.

4. Limit Personal Data: Avoid sharing high-resolution images or videos online.

5. Use Multi-Factor Authentication (MFA): Secure accounts with MFA to reduce risks of hacking.

6. Monitor Public Channels: Keep track of potential deepfake content targeting your organization.

7. Adopt Secure Communication: Use encrypted channels for sensitive discussions to prevent interception.

The advisory also urges organizations to strengthen detection tools, monitor public channels, and enhance digital forensics capabilities.

The advisory, originally issued on 27 November 2024, serves as a critical resource for identifying, assessing, and mitigating the threats posed by synthetic media.

It's crucial to stay informed and vigilant about these threats.

Deepfake Videos of Narayana Murthy and Mukesh Ambani Scammed Two to Lose ~₹ 90 Lakh

Two residents of Bengaluru fell victim to deepfake videos featuring Infosys co-founder Narayana Murthy and Reliance Industries Chairman Mukesh Ambani, collectively losing around ₹95 lakh.

These deepfake videos promoted trading platforms and promised high returns, leading the victims to invest large sums of money. One victim lost ₹67 lakh, while another lost ₹19 lakh.

First Victim:

A woman from Banashankari came across a video on social media promoting a trading platform with high returns. She clicked on a suspicious link, shared her details, and was contacted by someone claiming to be an agent. Initially, she invested ₹1.4 lakh and received ₹8,000 in returns. Encouraged by this, she invested ₹6.7 lakh but didn't receive any returns. She also lost ₹67 lakh to another platform promising work-from-home opportunities.

Second Victim:

A retired employee saw a similar video on Facebook promoting a trading platform. He transferred ₹19 lakh to two different bank accounts provided by the fraudsters but didn't receive any response after the transfer.

Both victims didn't verify the authenticity of the videos and ended up clicking on links that led to fake websites created by fraudsters. Separate cases have been registered at the CEN (Cyber Economic and Narcotics) South police station, and investigations are ongoing to track down the culprits.

Deepfake technology played a crucial role in this scam by creating highly realistic videos of Narayana Murthy and Mukesh Ambani. These videos were used to promote fraudulent trading platforms, convincing victims that the endorsements were genuine. The deepfakes were so convincing that the victims didn't question their authenticity and ended up investing large sums of money.

It's a stark reminder to always verify the authenticity of online content, especially when it involves financial investments.

It must be recalled that last year, Narayana Murthy had alerted people that many trading platforms are using his identity for promotions and said that he does not endorse any of them.

In an X post, Murthy said, “In recent months, there have been several fake news items propagated via social media apps and on various web pages available on the Internet claiming that I have endorsed or invested in automated trading applications named BTC AI Evex, British Bitcoin Profit, Bit Lyte Sync, Immediate Momentum, Capitalix Ventures etc. The news items appear on fraudulent websites that masquerade as popular newspaper websites and some of them even publish fake interviews using deepfake pictures and videos. I categorically deny any endorsement, relation or association with these applications or websites.”

In response to the deepfake scam, the Bengaluru police have taken several actions. Separate cases have been registered at the CEN (Cyber Economic and Narcotics) South police station. The police are actively investigating to track down the culprits behind the scam.

Public Awareness: Authorities have urged the public to be cautious of deepfake videos and to verify the authenticity of any suspicious content before taking any action. They have also advised people to report any such incidents to the police immediately.

Helpline: The Bengaluru City Police have launched a helpline (1930) for victims of deepfake scams to register complaints and seek assistance.

It's crucial to stay vigilant and report any suspicious activity to the authorities to help prevent such scams in the future. If you or someone you know has been affected, don't hesitate to reach out to the police for help.

Accenture Invests in Deepfake Detection Startup Reality Defender

Accenture has made a strategic investment in Reality Defender, a cybersecurity company specializing in deepfake detection, through its venture arm, Accenture Ventures.

Notably, Accenture has invested in Reality Defender as part of the startup's $33 million Series A extension round of funding, which was led by Illuminate Financial, with participation from Accenture, Booz Allen Ventures, IBM Ventures, and the Jefferies Family Office.

Reality Defender, which won the RSA Innovation award, offers solutions to detect and prevent deepfake fraud across various industries, including financial services, media, and high-tech.

The partnership aims to equip clients with the ability to rapidly identify, detect, respond to, and prevent deepfake fraud, ensuring a more secure digital landscape. Reality Defender's technology includes real-time voice detection and audiovisual detection to catch even the most advanced AI-generated content.

Founded in 2021 by Ben Colman, Ali Shahriyari, and Gaurav Bharaj, and based in New York, Reality Defender provides solutions to detect and prevent deepfake fraud across industries including financial services, media, and high-tech. Ben Colman serves as Co-Founder and CEO, Ali Shahriyari as Co-Founder and CTO, and Gaurav Bharaj as Co-Founder and Head of AI.

Reality Defender founders: Ben Colman, Ali Shahriyari, and Gaurav Bharaj

Reality Defender has recently introduced a tool for real-time video deepfake detection, which is currently in private beta for select clients.

The cybersecurity startup has received recognition for its innovative solutions, including winning the RSA Innovation award and being named the Most Innovative Company at RSA's Innovation Sandbox competition.

Accenture's Cyber Intelligence researchers have documented a staggering 223% spike in deepfake-related tool trading on dark web forums in the first quarter of 2024, compared to the same period in 2023. This escalating issue requires immediate attention and education to reduce its potential damaging impacts.

Accenture intends to integrate Reality Defender’s capabilities into its existing deepfake detection and protection offering, including extending it to their call center AI automation solution.

“As deepfakes become more convincing and harder to identify, organizations urgently need scalable and effective detection solutions,” said Paolo Dal Cin, global lead, Accenture Security. “Reality Defender offers a unique approach to proactively detect AI-related threats across image, audio, text and video. Our investment in Reality Defender demonstrates our strong commitment to helping clients confidently navigate the gen AI driven threat landscape, mitigate financial fraud and maintain the integrity of their digital communications.”

Reality Defender is the latest company to join Accenture Ventures’ Project Spotlight, an engagement and investment program focused on working with companies that create or apply disruptive enterprise technologies.

Most recently, companies that have received investment from Accenture Ventures under Project Spotlight include Martian, Earli Inc., and AI startup Turbine, while cybersecurity and quantum-security companies include Aliro Quantum, Tenchi Security, SpiderOak, and Interos.

Indian Politicians Using Their Audio, Video Deepfakes to Reach Voters

The use of deepfake technology by Indian politicians to reach voters has been a significant development in the country's election campaigns and for the first time, it’s happening on a large scale. 

Companies like The Indian Deepfaker have been creating AI-generated avatars and deepfake videos for political campaigns, enabling personalized messaging on a large scale. These deepfakes have been used to present politicians positively, and even deceased leaders have been digitally resurrected to garner support.

Headquartered in Ajmer, Rajasthan, The Indian Deepfaker has created personalized messaging and holographic avatars for political campaigns. The company counts Netflix as one of its clients.

The 2021-founded company is reportedly handling more than a dozen election-related projects, including creating holographic avatars of politicians, using audio cloning and video deepfakes to enable personalised messaging in groups, and deploying a conversational AI agent that identifies itself as AI but speaks in the voice of a political candidate during calls with voters.

Like most other sectors and industries, Indian politics has seen an exponential rise in the use of technology; metaphorically, it is no less than an industry itself. In 2014, India’s current prime minister, Narendra Modi, used 3D hologram technology to broadcast prerecorded speeches at multiple campaign rallies around India. Years later, PM Modi's AI-generated avatar speaks to voters by name in WhatsApp videos, as the use of AI technology in Indian politics has ballooned.

However, the Election Commission of India has advised political parties to refrain from using deepfakes and other forms of misinformation during elections, highlighting the potential risks associated with such practices. The impact of deepfakes on voter decisions and the democratic process is still being assessed, with experts expressing concerns about the spread of misinformation and its influence on the electorate.

The Election Commission of India has issued guidelines directing political parties to take down deepfakes within three hours during the Model Code of Conduct (MCC) period in general elections. These guidelines emphasize responsible and ethical use of technology in campaigning.

It's important to note that while deepfakes can be a powerful tool for political communication, they also raise ethical questions and the need for responsible use to ensure the integrity of democratic processes.

Besides, several case studies have highlighted the impact of deepfakes on past and ongoing elections. India's ongoing 2024 general elections have seen many cases of deepfake misuse, with AI-generated deepfakes of politicians, including deceased ones, used extensively to reach voters. Concerns were raised about misinformation and its potential to influence voters and fuel protests.

During the Telangana state elections, a deepfake video circulated on social media showing a political leader endorsing the opposition party. The video was released on election day, leaving no time for the opposition to control the damage.

In Slovakia, during its last elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise alcohol prices and rig the election. These deepfakes circulated on social media platforms, creating confusion and eroding voter trust just before the polls.

These cases demonstrate the disruptive potential of deepfakes in the electoral process, highlighting the need for vigilance and regulatory measures to safeguard democratic integrity.

The Center for Countering Digital Hate, a nonprofit organization, has created examples of harmful AI-generated election disinformation content to warn about the technology's potential to undermine democracy. This raises questions about the usefulness of such technology weighed against the potential harm it may cause.

With elections this year in over 50 countries involving half the globe’s population, there are fears deepfakes could seriously undermine their integrity.

In One of the World’s Biggest Known Deepfake Scams, UK Engg. Group Lost $25 Mn

A UK-based engineering group, Arup, was targeted in a significant deepfake scam. The incident involved a hyper-realistic video created using artificial intelligence that featured a digitally cloned version of Arup's chief financial officer. This deepfake was used to deceive an employee into transferring a total of HK$200 million (about US$25 million) to various bank accounts during a video call.

Previously, Hong Kong police had revealed what is one of the world's biggest known deepfake scams but did not identify the company involved. However, the Financial Times reported that it had confirmed the company was the UK group Arup (officially Arup Group Limited). The engineering firm employs about 18,000 people globally and has annual revenues of more than £2bn.

The scam took place in the Hong Kong office of Arup and is considered one of the largest known deepfake scams to date.

Citing Hong Kong police acting senior superintendent Baron Chan, the FT report said that after a video conference joined by the company's digitally cloned CFO and other fake company employees, the staff member made a total of 15 transfers to five Hong Kong bank accounts before eventually discovering it was a scam upon following up with the group's headquarters.

Despite the substantial financial loss, Arup has confirmed that their financial stability and business operations were not affected, and none of their internal systems were compromised. The company, which is headquartered in London, has been working with the authorities, and investigations into the incident are ongoing.

Arup's east Asia chair Andy Lee stepped down in the weeks following the scam after just a year in the role. He was replaced by Michael Kwok, a former east Asia chair for the company. Lee said on his personal LinkedIn page that he had "decided to embark on a new opportunity".

This event highlights the increasing sophistication of cyber-attacks and the rising threat of deepfake technology being used for fraudulent purposes. It serves as a reminder of the importance of vigilance and the need for robust security measures to protect against such sophisticated scams.

Legal actions are being pursued in response to the deepfake scam that targeted Arup. The authorities have been notified, and investigations are ongoing. However, as of the latest updates, no arrests have been made yet. The case is classified as "obtaining property by deception," which indicates that law enforcement is treating it as a serious criminal matter.

Arup has been cooperative with the police, and while they have not disclosed details due to the ongoing nature of the investigation, they have confirmed that they are working with law enforcement to address the incident. The company has also expressed hope that their experience will help raise awareness about the increasing sophistication of cyber-attacks and the evolving techniques of malicious actors.

The incident underscores the importance of cybersecurity and the need for companies to stay vigilant against such advanced forms of fraud. It also highlights the challenges that law enforcement faces in tracking down and prosecuting perpetrators in the digital age, where the tools and methods used to commit crimes are constantly evolving. 

The Inside-Out History of Deepfake Technology

Today, deepfakes continue to be a topic of concern due to their potential to create convincing false representations (videos/images) of individuals, which can be used for malicious purposes. There is ongoing research both in creating more sophisticated deepfakes and in developing methods to detect and combat them.

Deepfake technology, which involves creating synthetic media that portrays events or images that never actually occurred, has a relatively recent history. The term "deepfake" is a combination of "deep learning" and "fake," and it refers to the use of artificial intelligence (AI) to generate convincing fake content.

The concept of deepfake became widely known in 2017 when a Reddit user created a subreddit dedicated to sharing videos that used face-swapping technology to insert celebrities' likenesses into existing videos, often for pornographic purposes. This use of AI for creating realistic-looking media quickly raised concerns about its potential for misuse, particularly in the creation of fake news, hoaxes, and other forms of disinformation.

Deepfakes are produced using two main AI algorithms: one that creates a synthetic image or video, and another that detects whether the replica is fake. The creation algorithm adjusts the synthetic media based on feedback from the detection algorithm until it becomes indistinguishable from real media.
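
This creator-versus-detector feedback loop is essentially a generative adversarial network (GAN), discussed further in the history below. As a rough, purely illustrative sketch, the toy example here trains a PyTorch generator to fool a discriminator on a one-dimensional Gaussian rather than on faces; real deepfake models are vastly larger, but the loop is the same.

```python
# Toy GAN sketch (PyTorch): a generator learns to fool a discriminator,
# mirroring the create-vs-detect feedback loop described above.
# Trains on a 1-D Gaussian instead of images; purely illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data: samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))         # synthetic samples made from noise

    # Detector step: learn to separate real from fake.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Creator step: adjust the generator until the detector is fooled.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real mean of 3.0.
print(generator(torch.randn(1000, 8)).mean().item())
```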

The technology behind deepfakes has evolved from earlier forms of media manipulation, with photo manipulation dating back to the 19th century and applied to motion pictures as technology improved. However, the rapid advancement of AI in the late 20th and early 21st centuries has made deepfakes much more accessible and difficult to detect.

The history of deepfake technology is quite fascinating and involves a mix of academic research and community-driven development. Here's a brief overview:

Early Development: The foundations of deepfake technology can be traced back to the 1990s, with researchers at academic institutions exploring the potential of AI in media manipulation.

Generative Adversarial Networks (GANs):

A significant leap in the technology came with the invention of GANs in 2014 by computer scientist Ian Goodfellow. GANs are a class of AI algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. 

Public Emergence: The term "deepfake" emerged in 2017 when a Reddit user, known by the pseudonym 'deepfakes', began sharing videos on a subreddit that used machine learning to swap celebrities' faces onto existing videos, often for pornographic content.

Widespread Attention: This use of AI caught public attention and raised concerns about its potential for creating convincing fake content that could be used for disinformation or other malicious purposes.

Creators involved 

The inventors/creators involved in the development of deepfake technology have ranged from academic researchers to anonymous online community members. The technology has since evolved, becoming more accessible and sophisticated, leading to a wide range of applications beyond the initial controversial uses. As deepfake technology continues to develop, there is an ongoing discussion about its ethical implications and the need for regulations to prevent misuse.

Deepfake technology has led to the creation of various notable projects and the emergence of skilled creators. Here are some of the most prominent examples:

EZ RyderX47: This creator gained fame for a deepfake video where Marty McFly and Emmett "Doc" Brown from "Back to the Future" are replaced by Tom Holland and Robert Downey Jr., respectively. The video showcases the creative possibilities of deepfake technology.
 

McAfee's Project Mockingbird: Announced at CES 2024, this project aims to empower users to identify deepfakes. It gained attention when it was used to debunk a deepfake scam involving a fake Taylor Swift promoting cookware.


In Event of Moon Disaster: This short film features an incredibly convincing deepfake of Richard Nixon delivering a speech that was prepared in case the Apollo 11 mission had failed. The film explores the implications of deepfakes and their historical context.


These projects and creators have contributed to both the advancement of deepfake technology and the ongoing conversation about its ethical use and societal impact.

Losses incurred due to deepfake tech

Deepfake technology has led to significant financial and social losses over the years. Some of the notable impacts are as follows:

Financial Losses: Deepfake scams have resulted in losses ranging from $243,000 to $35 million in individual cases. For instance, a bank manager was tricked into transferring $35 million to a fraudulent account due to a deepfake audio message.

Business Impact: A report from 2020 projected that deepfakes could cost businesses globally $250 billion by 2025. Financial institutions might face annual losses of up to $30 billion due to deepfake fraud by 2027.

Cybersecurity Threats: In 2022, 66% of cybersecurity professionals experienced deepfake attacks within their organizations. The banking sector is particularly concerned, with 92% of cyber practitioners worried about its fraudulent misuse.

Social Engineering Attacks: Deepfakes have been used to create fake videos or audio messages, often impersonating CEOs or other high-ranking executives to deceive individuals into sending money or disclosing sensitive information.

Misinformation and Public Trust: Deepfakes have the potential to undermine election outcomes, social stability, and even national security, especially in the context of disinformation campaigns.

Donald Trump Case: In 2018, a deepfake video of Donald Trump was released by a Belgian political party, urging Belgium to withdraw from the Paris climate agreement. Although intended as satire, it highlighted the ease of manipulating a world leader's image.


Deepfake Voice Scam: In 2019, criminals used deepfake technology to mimic a CEO's voice in a fraudulent attempt to transfer funds, showcasing the potential for financial scams.

The rise of deepfake content and its misuse has prompted discussions on the need for better regulation and detection technologies to combat this issue and mitigate its harmful effects.

Criminalization of deepfake technology misuse

The issue of deepfake technology and its criminalization is a complex and evolving area of law globally, including in India. While deepfakes have potential benefits in various fields, they also pose significant risks such as privacy violations, defamation, and the spread of misinformation.

Globally, there is a growing concern about the malicious use of deepfakes, and countries are exploring ways to regulate this technology. The legal status of tackling crimes related to deepfakes varies from country to country, with some having specific regulations while others rely on existing laws to address the issue.

In India, as of the information available up to 2021, there was no specific statute that directly addressed deepfake cybercrime. However, various other laws could be applied to combat crimes involving deepfakes. For instance, Section 66E of the Information Technology (IT) Act of 2000 could be invoked in cases of deepfake offenses that infringe on a person's privacy by capturing, publishing, or transmitting their image without consent. 

Experts have pointed out that while India and other countries face challenges due to deepfakes, practical solutions are available, and provisions under several pieces of legislation could offer both civil and criminal relief. It's important to note that the development and use of deepfakes is a global issue, likely requiring international cooperation to effectively regulate their use and prevent associated crimes.

NSE Warns Investors Against Deepfake Videos of Its CEO Recommending Stocks

The National Stock Exchange (NSE) has issued a warning to caution investors, advising them to be wary of deepfake videos featuring its chief executive offering stock recommendations. These videos appear to have been created using advanced technologies to mimic the voice and facial expressions of NSE CEO Ashishkumar Chauhan.

"We have observed the use of face / voice of Shri Ashishkumar Chauhan, PID & CEO NSE and NSE logo in a few investment and advisory audio and video clips falsely created using technology," said NSE in an official press release.

NSE officials are not authorized to endorse or engage in any stock-related activities. This alert comes amid a backdrop of thriving equity markets and a surge in retail investor participation. Regulators have expressed concerns about the potential misuse of social media platforms by financial influencers to attract investors. Remember to verify information from official sources and exercise caution when encountering investment advice online.

Investors are hereby cautioned not to believe in such audio and videos and not follow any such investment or other advice coming from such fake videos or other mediums. It may be noted that NSE’s employees are not authorised to recommend any stock or deal in those stocks.

Additionally, NSE is making efforts to request these platforms to take down the objectionable videos, wherever possible.

As per NSE’s process, any official communication is made only through its official website www.nseindia.com and the Exchange’s social media handles - Twitter: @NSEIndia, Facebook: @NSE India, Instagram: @nseindia, LinkedIn: @NSE India, YouTube: NSE India.

Everyone is requested to verify the source of communication and content which is sent out on behalf of NSE and to check the official social media handles.

All investors are requested to take note of the same and verify the information coming from NSE or its officials from its website www.nseindia.com as the official information.

Investors & the public at large are advised to take note of the above.

 

Meta and MCA Launching Fact-Checking WhatsApp Helpline in India to Curb AI-generated Misinformation

  • Meta is launching a dedicated fact-checking helpline on WhatsApp with the Misinformation Combat Alliance (MCA) to combat AI-generated misinformation.
  • The program will implement a four-pillar approach – detection, prevention, reporting and driving awareness around deepfakes.
  • The collaboration with the MCA represents Meta's continued effort to empower people with tools and resources to verify information on WhatsApp.
  • The helpline will be available for the public to use in March 2024.
Meta is collaborating with the Misinformation Combat Alliance (MCA) to launch a dedicated fact-checking helpline on WhatsApp in an effort to combat media generated using artificial intelligence which may deceive people on matters of public importance, commonly known as deepfakes, and help people connect with verified and credible information. The helpline will be available for the public to use in March 2024.

The industry-leading initiative will allow the MCA and its associated network of independent fact-checkers and research organizations to address viral misinformation – particularly deepfakes. People will be able to flag deepfakes by sending them to the WhatsApp chatbot, which will offer multilingual support in English and three regional languages (Hindi, Tamil, Telugu).

The MCA will set up a central ‘deepfake analysis unit’ to manage all inbound messages they receive on the WhatsApp helpline. They will work closely with member fact-checking organizations as well as industry partners and digital labs to assess and verify the content and respond to the messages accordingly, debunking false claims and misinformation.

The focus of the program is to implement a four-pillar approach – detection, prevention, reporting and driving awareness around the escalating spread of deepfakes – along with building a critical instrument that allows citizens to access reliable information to fight the spread of such misinformation. With millions of Indians using WhatsApp, Meta's collaboration with the MCA represents a continued effort to empower users with tools to verify information on the service.

“We recognize the concerns around AI-generated misinformation and believe combatting this requires concrete and cooperative measures across the industry. Our collaboration with MCA to launch a WhatsApp helpline dedicated to debunking deepfakes that can materially deceive people is consistent with our pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. As a company that has been at the cutting edge of AI development for more than a decade, we remain committed to work with industry stakeholders to introduce common technical standards for AI detection, transparency solutions and policies, along with empowering people on our platforms with resources and tools that make it simpler for them to identify content that has been generated using AI tools and curb the spread of misinformation.” – Shivnath Thukral, Director, Public Policy India, Meta.

“The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users in India. Its formation highlights the collaboration and whole-of-society approach to foster a healthy information ecosystem that the MCA was set up for. The initiative will see IFCN signatory fact-checkers, journalists, civic tech professionals, research labs and forensic experts come together, with Meta’s support. We hope the DAU will become a trusted resource for the public to discern between real and AI generated media and we invite more stakeholders to be a part of the initiative.” – Bharat Gupta, President, Misinformation Combat Alliance.

Meta's robust fact-checking program in India includes partnerships with 11 independent fact-checking organizations that help users identify, review, and verify information and help prevent the spread of misinformation on its platforms. On WhatsApp, Meta encourages people to double-check information that sounds suspicious or inaccurate by sending it to WhatsApp tiplines. People can also follow dedicated fact-checking organizations on WhatsApp Channels to receive verified, accurate and timely updates. In addition to the fact-checking program, WhatsApp addresses misinformation by limiting forwards and actively constraining virality on the platform.

Meta's approach to addressing deceptive synthetic media has several components: investigating deceptive behaviors like fake accounts and misleading manipulated media; its third-party fact-checking program, in which fact-checkers rate misinformation, including content that has been edited or synthesized in a way that could mislead people; and engaging with academia, government and industry. Meta has recently announced an AI labeling policy and, in the coming months, will label images that users post to Facebook, Instagram and Threads when it can detect industry-standard indicators that they are AI-generated.

Meta has also pledged to help prevent deceptive AI content from interfering with this year’s global elections. The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories, including Meta, pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps.

Deepfakes: Minister Held Digital India Dialogues with Digital Intermediaries on Deepfakes, IT Rules Compliance

Platforms and Intermediaries Commit to Tackling Deepfakes Under Existing Laws

“Platforms have agreed that within the next 7 days, they will ensure that all terms and contracts with users expressly forbid them from engaging in the 11 types of content laid out in the IT rules”: MoS Rajeev Chandrasekhar

“MEITY has confirmed the imminent appointment of a ‘Rule 7’ officer and a digital platform for users to report any violations by intermediaries”: MoS Rajeev Chandrasekhar 

In a Digital India Dialogue session held on Friday, Union Minister of State for Skill Development & Entrepreneurship and Electronics & IT, Shri Rajeev Chandrasekhar, reiterated the need for a safe and trusted internet and for social media intermediaries to be accountable to Digital Nagriks. Following Prime Minister Shri Narendra Modi’s concerns about deepfake threats, all platforms and intermediaries agreed to align their community guidelines with the IT rules, specifically targeting 11 types of content that cause user harm, including deepfakes.

Addressing the media after the session, the Minister affirmed the collective commitment of platforms and intermediaries to confront deepfake challenges within the bounds of current regulations.

The Minister said, “All platforms and intermediaries have agreed that the current laws and rules even as we discuss new laws and regulations, they provide for them to deal with deepfakes conclusively. They have agreed that in the next seven days they will ensure all the terms and views and contracts with users will expressly forbid users from 11 types of content laid out in IT rules. The Hon’ble Prime Minister Narendra Modi ji had already highlighted the issue of deepfakes and the threats and challenges that it represents to a safe and trusted internet last year. MEITY has conveyed that there will be an imminent appointment of a rule seven officer shortly and a digital platform for digital nagriks to report violations of law by intermediaries. Digital nagriks have rights to a safe and trusted internet and intermediaries are accountable for providing the same."

Acknowledging the progress in grievance redressal mechanisms, Minister Rajeev Chandrasekhar urged continued collaboration with intermediaries to address challenges such as deepfakes and misinformation.

“We have, in partnership, the government and platforms, done considerably well in terms of addressing grievances. I must congratulate intermediaries for proactively doing this. However, there is more to be done, especially in the areas of misinformation, deepfakes, advertising of illegal betting platforms and advertising of fraudulent loan apps. These continue to be threats to safety and trust online,” the Minister further added.

Manipulated Video & Audio Made using 'Deepfakes' Poses Threat to Elections

A video on social media shows a high-ranking U.S. legislator declaring his support for an overwhelming tax increase. You react accordingly because the video looks like him and sounds like him, so certainly it has to be him.

The term "fake news" is taking a much more literal turn as new technology is making it easier to manipulate the faces and audio in videos. The videos, called deepfakes, can then be posted to any social media site with no indication they are not the real thing.

Edward Delp, director of the Video and Imaging Processing Laboratory at Purdue University, says deepfakes are a growing danger with the next presidential election fast approaching.

“It’s possible that people are going to use fake videos to make fake news and insert these into a political election,” said Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering. “There’s been some evidence of that in other elections throughout the world already.

“We’ve got our election coming up in 2020 and I suspect people will use these. People believe them and that will be the problem.”

The videos pose a danger of swaying the court of public opinion through social media, as almost 70 percent of adults indicate they use Facebook, usually daily. YouTube boasts even higher numbers, with more than 90 percent of 18- to 24-year-olds using it.

Delp and doctoral student David Güera have worked for two years on video tampering as part of larger research into media forensics. They have used sophisticated machine learning techniques based on artificial intelligence to create an algorithm that detects deepfakes.

A YouTube video is available at https://youtu.be/aWKBWoDtR8k.

Video: Detecting Deepfake Videos through Media Forensics

Late last year, Delp and his team’s algorithm won a Defense Advanced Research Projects Agency (DARPA) contest. DARPA is an agency of the U.S. Department of Defense.

“By analyzing the video, the algorithm can see whether or not the face is consistent with the rest of the information in the video,” Delp said. “If it’s inconsistent, we detect these subtle inconsistencies. It can be as small as a few pixels, it can be coloring inconsistencies, it can be different types of distortion.”

“Our system is data driven, so it can look for everything – it can look into anomalies like blinking, it can look for anomalies in illumination,” Güera said, adding the system will continue to get better at detecting deepfakes as they give it more examples to learn from.
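
Purdue's detector is a trained deep network, so the toy sketch below is not their method; it only illustrates the general idea of scoring frames for statistical inconsistency. It flags frames whose mean colour deviates sharply from the rest of the clip, using hypothetical synthetic frames in place of a real video (which would normally be decoded with a library such as OpenCV).

```python
# Toy illustration of frame-level inconsistency scoring (not the Purdue
# algorithm): flag frames whose colour statistics stray far from the
# clip-wide median, one kind of "subtle inconsistency" mentioned above.
import numpy as np

def color_anomaly_scores(frames):
    """Frames are HxWx3 uint8 arrays; returns one anomaly score per frame."""
    means = np.array([f.reshape(-1, 3).mean(axis=0) for f in frames])  # per-frame mean colour
    median = np.median(means, axis=0)                                  # clip-wide reference
    return np.linalg.norm(means - median, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(100, 120, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
    frames[10] = rng.integers(160, 200, (64, 64, 3), dtype=np.uint8)   # tampered-looking frame
    scores = color_anomaly_scores(frames)
    print("most anomalous frame:", int(scores.argmax()))               # prints 10
```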

The research was presented in November at the 2018 IEEE International Conference on Advanced Video and Signal Based Surveillance.

Deepfakes can also be used to fake pornographic videos and images, using the faces of celebrities or even children.

Delp said early deepfakes were easier to spot. The techniques couldn’t recreate eye movement well, resulting in videos of a person that didn’t blink. But advances have made the technology better and more available to people.

News organizations and social media sites have concerns about the future of deepfakes. Delp foresees both having tools like his algorithm in the future to determine what video footage is real and what is a deepfake.

“It’s an arms race,” he said. “Their technology is getting better and better, but I like to think that we’ll be able to keep up."

Top Image - Blog.avira.com
