
India Tightens IT Rules to Combat Deepfakes and AI Misinformation

The Indian government has proposed significant amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at curbing the misuse of deepfakes and generative AI content. Here's a breakdown of the key changes:

New IT Rules Targeting Deepfakes (2025 Draft Amendments)

  • Definition Introduced
    Synthetically Generated Information: Defined as content created, altered, or modified using computer tools to appear real.
  • Mandatory Labelling
    Platforms with over 5 million users (like Facebook, YouTube, Instagram) must:
    • Ask users to declare if uploaded content is synthetic.
    • Take reasonable steps to verify these claims.
    • Clearly label synthetic content with visible or audible markers.
    • For videos: markers must cover at least 10% of the screen.
    • For audio: markers must be present in the first 10% of the clip.
    • These markers cannot be removed or altered.
  • Legal Protections
    Platforms acting in good faith to remove or block synthetic content will receive legal protection.
  • Takedown Oversight
    Only senior officers (Joint Secretary or above) can issue takedown orders.
    All takedown actions will undergo monthly review by a Secretary-level officer to ensure legality and proportionality.
  • Timeline & Feedback
    Draft rules were released on October 22, 2025.
    Public feedback is open until November 6, 2025.
    Final rules are expected to take effect from November 1, 2025.
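As a rough illustration of the draft thresholds above (a sketch with hypothetical function names, not an official implementation), the visible-marker and audio-marker requirements reduce to simple arithmetic on a clip's dimensions and duration:

```python
# Illustrative sketch of the draft labelling thresholds described above:
# visual markers must cover at least 10% of the screen, and audio markers
# must occupy the first 10% of a clip. Function names are ours.

def min_visual_marker_area(width_px: int, height_px: int) -> int:
    """Minimum marker area in pixels for a video frame (10% of screen)."""
    return (width_px * height_px) // 10

def audio_marker_window(duration_s: float) -> float:
    """Length of the opening window (seconds) that must carry the marker."""
    return duration_s * 0.10

# Example: a 1920x1080 video and a 60-second audio clip.
print(min_visual_marker_area(1920, 1080))  # 207360 pixels
print(audio_marker_window(60.0))           # 6.0 seconds
```

The actual rules speak of "visible or audible markers" rather than exact pixel counts, so treat these numbers as back-of-the-envelope guidance only.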

Meta and MCA Launching Fact-Checking WhatsApp Helpline in India to Curb AI-generated Misinformation

  • Meta is launching a dedicated fact-checking helpline on WhatsApp with the Misinformation Combat Alliance (MCA) to combat AI-generated misinformation.
  • The program will implement a four-pillar approach – detection, prevention, reporting and driving awareness around deepfakes.
  • Collaboration with MCA represents our continued effort to empower people with tools and resources to verify information on WhatsApp.
  • The helpline will be available for the public to use in March 2024.
Meta is collaborating with the Misinformation Combat Alliance (MCA) to launch a dedicated fact-checking helpline on WhatsApp to combat media generated using artificial intelligence that may deceive people on matters of public importance, commonly known as deepfakes, and to help people connect with verified and credible information. The helpline will be available for the public to use in March 2024.

The industry-leading initiative will allow MCA and its associated network of independent fact-checkers and research organizations to address viral misinformation, particularly deepfakes. People will be able to flag deepfakes by sending them to the WhatsApp chatbot, which will offer multilingual support in English and three regional languages (Hindi, Tamil, Telugu).

The MCA will set up a central ‘deepfake analysis unit’ to manage all inbound messages they receive on the WhatsApp helpline. They will work closely with member fact-checking organizations as well as industry partners and digital labs to assess and verify the content and respond to the messages accordingly, debunking false claims and misinformation.

The focus of the program is to implement a four-pillar approach – detection, prevention, reporting and driving awareness around the escalating spread of deepfakes along with building a critical instrument that allows citizens to access reliable information to fight the spread of such misinformation. With millions of Indians using WhatsApp, our collaboration with MCA represents a continued effort to empower users with tools to verify information on its service.

“We recognize the concerns around AI-generated misinformation and believe combatting this requires concrete and cooperative measures across the industry. Our collaboration with MCA to launch a WhatsApp helpline dedicated to debunking deepfakes that can materially deceive people is consistent with our pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. As a company that has been at the cutting edge of AI development for more than a decade, we remain committed to work with industry stakeholders to introduce common technical standards for AI detection, transparency solutions and policies, along with empowering people on our platforms with resources and tools that make it simpler for them to identify content that has been generated using AI tools and curb the spread of misinformation.” – Shivnath Thukral, Director, Public Policy India, Meta.

“The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users in India. Its formation highlights the collaboration and whole-of-society approach to foster a healthy information ecosystem that the MCA was set up for. The initiative will see IFCN signatory fact-checkers, journalists, civic tech professionals, research labs and forensic experts come together, with Meta’s support. We hope the DAU will become a trusted resource for the public to discern between real and AI generated media and we invite more stakeholders to be a part of the initiative.” – Bharat Gupta, President, Misinformation Combat Alliance.

Our robust fact-checking program in India includes partnerships with 11 independent fact-checking organizations that help users to identify, review, verify information and help prevent the spread of misinformation on its platforms. On WhatsApp, we encourage people to double-check information that sounds suspicious or inaccurate by sending it to WhatsApp tiplines. People can also follow dedicated fact-checking organizations on WhatsApp Channels to receive verified, accurate and timely updates. In addition to the fact-checking program, WhatsApp addresses misinformation by limiting forwards and actively constraining virality on the platform.

Our approach to addressing deceptive synthetic media has several components, including working to investigate deceptive behaviors like fake accounts and misleading manipulated media; our third-party fact-checking program, in which fact checkers rate misinformation, including content that has been edited or synthesized in a way that could mislead people; and engaging with academia, government and industry. We have recently announced an AI labeling policy. In the coming months, we will label images that users post to Facebook, Instagram and Threads when we can detect industry standard indicators that they are AI-generated.

We have also pledged to help prevent deceptive AI content from interfering with this year’s global elections. The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories, including Meta, pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps.

"Government Giving 3 Months Free Recharge for Online Classes" Is A Fake Message Circulating on WhatsApp



The lockdown was imposed to prevent the spread of coronavirus in the country, and today most people are working from home. With schools and colleges closed, children are being given online classes so that their education is not disrupted, and all of this has been possible only because of technology. Technology has kept people connected even in these troubled times. But there are many who are duping people in the name of online classes.

Recently, a message went viral on the instant messaging app WhatsApp claiming that the government is giving three months of recharge absolutely free for online classes. The whole truth of this message has now come to the fore. The message claims that the government is providing free internet service to 100 million users for three months for online classes.

Along with this, a link has been given in the message, with text saying that by clicking on it you too can get free internet service. Clicking the link opens a fake website that displays a photo of Prime Minister Narendra Modi and claims that users can choose their telecom operator and get free internet service for three months.

This message going viral on WhatsApp in the name of online classes is completely fake. The Press Information Bureau (PIB) has also shared a post about it on its official Twitter account, warning telecom users that the message offering three months of free recharge is fake and that they should not click on the link given in it. No such announcement has been made by the Government of India. Beware of such fake websites.

Twitter to Label Tweets Containing Harmful, Misleading Content on COVID-19

Twitter on Monday said it will label tweets that contain "potentially harmful, misleading information" related to COVID-19 and provide additional context to curb the spread of fake news around the pandemic, which has claimed thousands of lives globally.

Like other digital platforms, including Google and Facebook, Twitter is undertaking these measures to ensure that people have access to information from trusted health experts and organisations like the World Health Organization at a time when nations across the world are combating the coronavirus pandemic.

Earlier this year, Twitter had introduced a new label for tweets containing synthetic and manipulated media that aims to mislead people, and had said it would take steps including removal of tweets if such content has the potential to harm public safety.

The crackdown on such content comes amid widespread concerns globally over altered, forged content on social media, including deepfake videos, and its catastrophic implications.

Twitter, in a blogpost on Monday, said, "In serving the public conversation, our goal is to make it easy to find credible information on Twitter and to limit the spread of potentially harmful and misleading content".

"Starting today, we're introducing new labels and warning messages that will provide additional context and information on some tweets containing disputed or misleading information related to COVID-19," it said, adding that this will also apply to tweets sent before May 11.

Twitter said it will take action based on three broad categories -- misleading information, disputed claims, and unverified claims.

Tweets containing misleading information will either be removed or contain a label with links to trusted information sources, depending on the propensity for harm.

Tweets featuring disputed claims will either carry a label or warning depending on the severity of content. No action will be taken for tweets containing unverified claims, Twitter said.
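The category-and-severity scheme Twitter describes can be summarised as a small lookup table. The sketch below is our illustration of the reported policy; the key and action names are assumptions, not Twitter's API:

```python
# Hypothetical sketch of the moderation matrix Twitter described:
# the action depends on the claim's category and its propensity for harm.
# Category and action names are illustrative, not official.
POLICY = {
    ("misleading", "severe"):   "remove",
    ("misleading", "moderate"): "label",
    ("disputed",   "severe"):   "warning",
    ("disputed",   "moderate"): "label",
    ("unverified", "severe"):   "no_action",
    ("unverified", "moderate"): "no_action",
}

def action_for(category: str, severity: str) -> str:
    """Look up the moderation action for a tweet's category and harm level."""
    return POLICY[(category, severity)]

print(action_for("misleading", "severe"))   # remove
print(action_for("unverified", "moderate")) # no_action
```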

The company said these labels will link to a Twitter-curated page or external trusted source containing additional information on the claims made within the tweet.

These warnings, it said, will inform people that the information in the tweet is in contradiction to public health experts' guidance before a user views the content.

However, embedded tweets and tweets viewed by people not logged into Twitter may still appear without a label, the company said.

"Our teams are using and improving on internal systems to proactively monitor content related to COVID-19. These systems help ensure we're not amplifying tweets with these warnings or labels and detecting the high-visibility content quickly," Twitter said.

It added that the microblogging platform will also continue to rely on partners to identify content that is likely to result in offline harm.

"Given the dynamic situation, we will prioritise review and labeling of content that could lead to increased exposure or transmission. We'll learn a lot as we use these new labels, and are open to adjusting as we explore labeling different types of misleading information," it noted.

Facebook Sues Indian Techie for Running Deceptive Ads, Fake News on Coronavirus

Facebook has filed a lawsuit against an Indian man for running a software company that pushed deceptive advertisements and misinformation about the coronavirus outbreak on social media platforms by bypassing its advertising review process. The suit, filed in federal court in California, alleges that Basant Gajjar's company LeadCloak provided ad-cloaking software designed to sneak fake news and scams related to COVID-19, cryptocurrency, diet pills and more past Facebook and Instagram's automated advertising review process.

Using the name “LeadCloak,” Gajjar, said to be based in Thailand, violated Facebook Terms and Policies by providing cloaking software and services designed to circumvent automated ad review systems and ultimately run deceptive ads on Facebook and Instagram, Jessica Romero, Director of Platform Enforcement and Litigation at Facebook, said in a statement.

LeadCloak's software also targeted a number of other technology companies including Google, Oath, WordPress, Shopify, and others, Romero said.

Cloaking is a malicious technique that impairs ad review systems by concealing the nature of the website linked to an ad.

When ads are cloaked, a company's advertisement review system may see a website showing an innocuous product such as a sweater, but a user will see a different website, promoting deceptive products and services which, in many cases, are not allowed.

In this case, LeadCloak's software was used to conceal websites featuring scams related to the COVID-19 global health crisis, cryptocurrency, pharmaceuticals, diet pills, and fake news pages. Some of these cloaked websites also included images of celebrities, the social media giant said in the statement.

In addition to the filing, Facebook has taken technical enforcement measures against LeadCloak and accounts that the company has determined used its software, including disabling personal and ad accounts on Facebook and Instagram.

This suit will also further our efforts to identify LeadCloak's customers and take additional enforcement actions against them, the statement added.

Cyber Security Agency Cautions Against Fake PM-CARES UPI IDs

The national cyber security agency has alerted donors against fake 'UPI IDs' for a special fund launched by Prime Minister Narendra Modi to combat the COVID-19 pandemic.

In an advisory issued on Saturday, the Indian Computer Emergency Response Team (CERT-In) said it has "tracked several fake UPI IDs which are similar to the UPI ID used by the Prime Minister's Citizen Assistance and Relief in Emergency Situations (PM-CARES) Fund".

CERT-In is the country's nodal agency to guard cyber space.

The advisory identified some of the fake UPI IDs in circulation such as pmcares@pnb, pmcares@hdfcbank, pmcare@yesbank, pmcare@ybl, pmcare@upi, pmcare@sbi and pmcares@icici.

"It may be noted that the genuine UPI ID is "pmcares@sbi" and the registered account name is "PM CARES"," it said, asking people to verify the UPI ID and the registered name before making any donations.
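The verification step CERT-In recommends, matching both the exact UPI ID and the registered account name, can be sketched as a simple check (an illustrative snippet; the function name is ours):

```python
# Sketch of the check CERT-In advises: donate only when both the UPI ID
# and the registered account name match the genuine values exactly.
GENUINE_ID = "pmcares@sbi"
GENUINE_NAME = "PM CARES"

# Fake IDs identified in the CERT-In advisory.
KNOWN_FAKES = {
    "pmcares@pnb", "pmcares@hdfcbank", "pmcare@yesbank",
    "pmcare@ybl", "pmcare@upi", "pmcare@sbi", "pmcares@icici",
}

def is_genuine(upi_id: str, registered_name: str) -> bool:
    """True only for an exact match on both the ID and the account name."""
    return upi_id.strip().lower() == GENUINE_ID and registered_name == GENUINE_NAME

print(is_genuine("pmcares@sbi", "PM CARES"))  # True
print(is_genuine("pmcare@sbi", "PM CARES"))   # False (one letter off)
print(all(not is_genuine(f, "PM CARES") for f in KNOWN_FAKES))  # True
```

The point of the sketch is that every fake ID in the advisory differs from the genuine one by only a character or a bank handle, which is why an exact-match check (rather than an eyeball comparison) matters.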

"Citizens, donors and organisations are advised to visit the website  "pmindia.gov.in" for further details about the fund," the advisory said. 

The PM-CARES Fund was launched by Prime Minister Narendra Modi on March 28 and it has received donations worth crores. 

A message on the PM's official website says that, keeping in mind the need for a dedicated national fund with the primary objective of dealing with any kind of emergency or distress situation, such as the one posed by the COVID-19 pandemic, and to provide relief to the affected, a public charitable trust under the name 'Prime Minister's Citizen Assistance and Relief in Emergency Situations Fund' (PM CARES Fund) was set up.

Workshop on 'Combating Fraud Activities using Data Science' held at IIIT-Delhi


  • Problems such as fake news, hate speech and collusive activities on social media were discussed by subject matter experts


The Laboratory for Computational Social Systems (LCS2) at Indraprastha Institute of Information Technology Delhi (IIIT-Delhi) organized a workshop on Combating Fraud Activities using Data Science (Co-FAD) on Saturday, 11th January at its campus. It was a one-day colloquium studying the impact of fraudulent activities in social science, journalism, product reviews, forged imagery, cybercrime, and finance, among others. The speakers were some of the most influential researchers whose work is helping curb fraudulent activities around the web.

The workshop commenced with a welcome note by the Director of IIIT Delhi, Prof. Ranjan Bose. He left us with the thought that where there is money, there is fraud. All the talks that followed further emphasized this point. The director’s address was followed by an enlightening keynote by Dr. P.N. Vasanti, director general of Centre for Media Studies (CMS). She highlighted the work of CMS and Penn State University, supported by WhatsApp, and gave some valuable insights. She also discussed the intersection of technology with legal policies for combating fraud.


"The major theme we have been trying to address is collusion in online media -- a secret collaboration to deceive someone. These collusions can be propagated through a vast number of social media platforms and also happen on online review forums, where people hire colluders to promote or demote certain products. We are trying to address collusive activities on online platforms and came up with a method to identify such collusion in product reviews. We have also published a research paper on how to detect collusion on Twitter," says Prof. Tanmoy Chakraborty of IIIT-Delhi, convenor of the workshop.

"There are black markets where you pay money and get the desired number of followers. There are other black-market services where one can retweet for their customers to earn credits, and those credits can then be used to promote their agenda. It is very difficult to detect such colluders as they are common people. We are trying to collect datasets from premium black-market services that will help in detecting such collusions," Prof. Chakraborty added.

Another problem society is facing is fake news and hate speech, which are difficult to control or detect in a country like India, where we have numerous languages. Prof. Chakraborty was of the view that "the major problem with fake news is that it is not very well defined. We are trying to define fake news to address the problem. Then comes the hate speech problem on social media, where we are coming up with a tool that will be able to detect hate speech in Indian languages."

Yet another informative keynote was given by Anushree Bishnoi and Ankur Pandey, founders of unfound.ai, a startup tackling various forms of misinformation in online media. They illustrated their journey, the various challenges, and the unique ways they are approaching them. Other speakers came from prominent institutions such as Queen's University Belfast, IIT Patna, TCS Research, and Jadavpur University. Dr. Vikram Goyal, Head of the CSE Department, was also among the speakers. Researchers from all over India working in or exploring this area participated in the one-day event at IIIT-Delhi, alongside enthusiasts from academia and industry. Student members of LCS2 also showcased their work through an interactive poster presentation session.

About IIIT-Delhi

IIIT-Delhi was created as a State University by an act of the Delhi Government (The IIIT Delhi Act, 2007), empowering it to carry out research and development and to grant degrees. Established in 2008, the Institute has in a relatively short time earned an excellent reputation in India and abroad as a center of quality education and research in IT and interdisciplinary areas, and is recognized as one of the most promising young institutions for education and research in India.

The Institute is accredited ‘A’ grade by NAAC (National Assessment and Accreditation Council) and has been accorded 12-B status by the University Grants Commission (UGC). In recognition of its performance, the QS India University Rankings 2020 placed IIIT-Delhi at 41, and the QS BRICS Rankings 2019 placed it at 192. NIRF also ranked IIIT-Delhi at 55 this year.

Manipulated Video & Audio Made using 'Deepfakes' Poses Threat to Elections

A video on social media shows a high-ranking U.S. legislator declaring his support for an overwhelming tax increase. You react accordingly because the video looks like him and sounds like him, so certainly it has to be him.

The term "fake news" is taking a much more literal turn as new technology is making it easier to manipulate the faces and audio in videos. The videos, called deepfakes, can then be posted to any social media site with no indication they are not the real thing.

Edward Delp, director of the Video and Imaging Processing Laboratory at Purdue University, says deepfakes are a growing danger with the next presidential election fast approaching.

“It’s possible that people are going to use fake videos to make fake news and insert these into a political election,” said Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering. “There’s been some evidence of that in other elections throughout the world already.

“We’ve got our election coming up in 2020 and I suspect people will use these. People believe them and that will be the problem.”

The videos pose a danger of swaying the court of public opinion through social media, as almost 70 percent of adults indicate they use Facebook, usually daily. YouTube boasts even higher numbers, with more than 90 percent of 18- to 24-year-olds using it.

Delp and doctoral student David Güera have worked for two years on video tampering as part of a larger research effort into media forensics. They have used sophisticated machine learning techniques based on artificial intelligence to create an algorithm that detects deepfakes.

A YouTube video is available at https://youtu.be/aWKBWoDtR8k.

Detecting Deep Fakes Video through Media Forensics




Late last year, Delp and his team’s algorithm won a Defense Advanced Research Projects Agency (DARPA) contest. DARPA is an agency of the U.S. Department of Defense.

“By analyzing the video, the algorithm can see whether or not the face is consistent with the rest of the information in the video,” Delp said. “If it's inconsistent, we detect these subtle inconsistencies. It can be as small as a few pixels, it can be coloring inconsistencies, it can be different types of distortion.”

“Our system is data driven, so it can look for everything – it can look into anomalies like blinking, it can look for anomalies in illumination,” Güera said, adding the system will continue to get better at detecting deepfakes as they give it more examples to learn from.
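As a toy illustration of the kind of statistical inconsistency these detectors look for (this is not the Purdue algorithm, which uses learned models on far subtler signals), one can flag frames whose mean brightness is an outlier relative to the rest of the clip:

```python
import statistics

# Toy illustration of the idea described above, NOT the Purdue algorithm:
# flag frames whose mean brightness deviates sharply from the clip's
# overall distribution -- the crude analogue of the "coloring
# inconsistencies" a real detector learns to spot at far subtler scales.

def flag_anomalous_frames(frame_means, z_threshold=3.0):
    """Return indices of frames whose mean brightness is a statistical outlier."""
    mu = statistics.mean(frame_means)
    sigma = statistics.pstdev(frame_means)
    if sigma == 0:
        return []  # perfectly uniform clip: nothing to flag
    return [i for i, m in enumerate(frame_means)
            if abs(m - mu) / sigma > z_threshold]

# Mostly consistent frames, with one tampered-looking outlier at index 5.
means = [120.1, 119.8, 120.3, 120.0, 119.9, 180.0, 120.2, 119.7]
print(flag_anomalous_frames(means, z_threshold=2.0))  # [5]
```

Real deepfake detectors replace this hand-set threshold with models trained on many examples, which is why Güera notes the system improves as it sees more data.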

The research was presented in November at the 2018 IEEE International Conference on Advanced Video and Signal Based Surveillance.

Deepfakes can also be used to fake pornographic videos and images, using the faces of celebrities or even children.

Delp said early deepfakes were easier to spot. The techniques couldn’t recreate eye movement well, resulting in videos of a person that didn’t blink. But advances have made the technology better and more available to people.

News organizations and social media sites have concerns about the future of deepfakes. Delp foresees both having tools like his algorithm in the future to determine what video footage is real and what is a deepfake.

“It’s an arms race,” he said. “Their technology is getting better and better, but I like to think that we’ll be able to keep up."


WhatsApp Appoints Grievance Officer for India

Facebook-owned messaging service WhatsApp has appointed a grievance officer for India's 200 million users to flag concerns and complaints, including those around fake news. WhatsApp has appointed Menlo Park, California-based Komal Lahiri as the Grievance Officer for the country.

The move from WhatsApp comes after the Supreme Court of India reprimanded WhatsApp for not appointing a Grievance Officer and not complying with other Indian laws. Earlier in the year, a nationwide string of lynchings occurred, believed to have been incited by false information shared on WhatsApp. While companies like Facebook and Google had appointed Grievance Officers for users in India, WhatsApp had not.


Komal Lahiri has been Senior Director of Global Customer Operations and Localisation at WhatsApp Inc. since March 2018.

The Grievance Officer, Lahiri, can be contacted via email and general post by over 200 million users in the country.

"To contact the Grievance Officer, please send an email with your complaint or concern and sign with an electronic signature. If you're contacting us about a specific account, please include your phone number in full international format, including the country code," said the FAQ under the security and privacy settings.

If you want to contact WhatsApp, go to Settings, then Help and Contact Us section.

"You can contact the Grievance Officer with complaints or concerns, including the following: WhatsApp's Terms of Service and Questions about your account," read the information.

"If you're a law enforcement official, please read our information for law enforcement authorities and how you can contact us," it added.

Lahiri's appointment comes after WhatsApp CEO Chris Daniels held a meeting with IT minister Ravi Shankar Prasad last month in which he was asked by the Indian government to find a way out to track the origin of fake messages on its platform. The government also asked the company to appoint a grievance officer to deal with these cases and set up a corporate entity in the country.

An updated FAQ on WhatsApp’s site states that people can contact the grievance officer directly via email. But it’s unclear as to how WhatsApp will process complaints, and what measures it will take to address reports of fake news and hoax forwards. We’ve written to ask, and will update this post when we hear back.

Source - Gadgets.NDTV.com


This Startup Fought WhatsApp False News and Prevented Two Tragedies in Chennai

In the past two months, more than 20 people across India have been murdered by mobs provoked by fake viral videos of alleged child abductions spiralling on social media platforms like WhatsApp.

Though WhatsApp, in its first effort to combat fake messages, recently published advertisements in key Indian newspapers to tackle the spread of misinformation, false and fake news spreading like wildfire continue to cost human lives.

Recently, when Verify.Wiki LLC, a start-up that fights False News through a methodology called "reverse virality" that combines crowdsourcing with social networking, saw the recent lynching incidents in India caused by false WhatsApp messages, it immediately sprang into action. The company picked Chennai as a pilot city to test whether it could prevent another tragedy.

On July 13th, 2018, the company noticed two suspected False News stories propagating via WhatsApp: one asking people to punish a school teacher in Perambur, Chennai, for hitting and kicking little children, and another circulating a young female doctor's photo, claiming she was treating patients for free in Chennai. Both posts went viral on WhatsApp and Facebook in Chennai.



"The team immediately kicked off the pilot. They first learnt that the video being circulated was from a school in Egypt, recorded in 2014, not from Perambur, Chennai. They also quickly uncovered that the other story, about the doctor, was false, propagated by a person who steals profile photos of young women from Facebook," said Siva Nadarajah, an adviser and investor at Verify.Wiki who recently requested the company to help with the deadly False News crisis in India.

Once the stories were verified to be false through crowdsourced research, Verify.Wiki said its "reverse virality" approach ensured the propagation of the False News was stopped within hours. With reverse virality, the corrected version of the False News is propagated back along the same path the story originated from, via WhatsApp and Facebook, targeting those who might have consumed the False News.

"We were able to stop both False News stories within a few hours. We also noticed the propagation of those two false stories completely stopping within 24 hours," added Siva Nadarajah.

"Imagine if Wikipedia and Facebook had a baby. You combine crowdsourcing with social networking. It's so powerful when it comes to transparency and credibility in fighting False News. We stopped seven False News stories just within two weeks of our pilot in Chennai. Some are harmless and some are deadly. The nice thing is anyone can anonymously submit a suspected False News and everyone can participate in the verification activities. It's a democratic process to fight False News", added Siva Nadarajah.

Facebook, the parent company of WhatsApp recently took full page newspaper advertisements to warn people of False News propagating via WhatsApp, after lynching incidents killed dozens of people across India.

To Fight 'Fake News' in India, Facebook Ties-up with BOOM - An Independent Journalism Initiative

To combat the spread of "fake news" on its platform, Facebook is partnering with third-party fact-checking organizations in different parts of the world as one of the ways to better identify and reduce the reach of false news that people share on Facebook. In India, Facebook has partnered with BOOM, an independent digital journalism initiative certified through the International Fact-Checking Network, for a pilot in Karnataka.

BOOM, India's premier fact-checking website, which brings its readers verified facts rather than opinion, will review English-language news stories flagged on Facebook, check facts, and rate their accuracy.

BOOM will review flagged stories; thereafter, these stories will be placed lower in the News Feed and will be less visible.

For pages that frequently share false stories, their post distribution will be reduced and their ability to monetize and advertise will likely be removed.

People who try to share a fake story will receive a notification that the post has been determined by the fact-checker to be false.

As a global phenomenon, false news and lies have ripped societies apart, fomenting hate and anger. Last December, seven people lost their lives in two separate incidents in Jharkhand, in a fury that was born on social media and based on falsified information that the killers received over WhatsApp, a messenger owned by Facebook.

Facebook is running similar initiatives in France, Italy, the Netherlands, Germany, Mexico, Indonesia, the Philippines and the US.

In its blog post, Facebook said that it believes that "once a story is rated as false, we have been able to reduce its distribution by 80 per cent, and thereby, improve accuracy of information on Facebook and reduce misinformation".

Facebook has already stated that it will use reports from the community along with other signals to send stories to fact-checking organizations. In the Philippines, Facebook's fact-checking partner Rappler has documented the disinformation and misinformation online that shape public opinion and influence critical decisions.

“We are beginning small and know it is important to learn from this test and listen to our community as we continue to update ways for people to understand what might be false news in their News Feed,” it said.

Detailing the process, Facebook said that after a story is rated as false by the fact-checker, it will appear lower in the News Feed, "significantly reducing its distribution".

“This in turn stops the hoax from spreading and reduces the number of people who see it. Pages and domains that repeatedly share false news will also see their distribution reduced and their ability to monetise and advertise removed,” it said.

This, Facebook said, will help curb the spread of “financially motivated false news”.

“We also want to empower people to decide for themselves what to read, trust, and share by providing the community with more information and control… If third-party fact-checkers write articles debunking a false news story, we’ll show it in Related Articles immediately below the story in News Feed,” Facebook said.

Facebook will also send people and Page administrators notifications if they try to share a story or have shared one in the past that has been determined to be false.

“While third party fact checking is part of our ongoing efforts to combat spread of false news, we are working hard to improve the accuracy of information on Facebook in various ways,” it added.

The above news was first reported by The Times of India.

Facebook Launches ‘Context’ Button In a Bid To Fight Fake News

Social networking giant Facebook is not ready to drop its mission to tame fake news anytime soon. The Mark Zuckerberg-led company is currently testing a new ‘Context’ button that will allow its more than 2 billion users to get a little more context about the source of the news that they’re reading on the platform.

The context button is the latest step taken by Facebook in curbing misinformation on its leading social networking platform. Earlier in the year, we reported how Mark Zuckerberg's Facebook finally launched its much-awaited 'disputed' tag to fight 'fake news' on its famous social networking site.

With the context button, Facebook users reading news on the platform will be able to get context on the source of a news article with just a single click without having to leave Facebook and their news feed.

"We are testing a button that people can tap to easily access additional information without needing to go elsewhere," revealed product managers Andrew Anker, Sara Su and Jeff Smith in a company blogpost on Thursday.

The blogpost further revealed that additional contextual information about a news piece will be gathered from across Facebook and other sources, such as information from the publisher's Wikipedia entry. In cases where information is unavailable, the social networking company "will let people know, which can also be helpful context.”

Facebook is hoping that providing more contextual information will help users evaluate whether articles are from a publisher they can trust, and whether the story itself is credible.

Facebook finally decided to launch a crackdown on fake news stories this year in the light of the massive backlash it received last November over claims that so-called fake news on the platform had influenced the outcome of the US presidential election. In December, Facebook decided to do some damage control by announcing that it had joined hands with fact-checkers that are signatories of the journalism non-profit Poynter’s International Fact Checking Code of Principles, including ABC News, FactCheck.org, Snopes and Politifact.



Introducing 'WikiTribune' - A New Tool By Wikipedia Founder To Battle Fake News

The topic of fake news has caught everyone's attention like never before. Though the phenomenon has existed since the conception of journalism, it gained major traction with the advent of the internet. Seeing the issue ballooning, a number of tech giants decided to jump in and curtail the problem as much as they can. Earlier in the year, we reported how Mark Zuckerberg's Facebook finally launched its much-awaited 'disputed' tag to fight 'fake news' on its famous social networking site. And now, Wikipedia founder Jimmy Wales has announced Wikitribune, a news platform that will bring together journalists with a legion of fact-checkers.

According to the platform, its main aim is to ensure that people all around the world are freed of the curse of fake news and read only true, fact-based articles that can have a real impact on both local and global events.

Though Wikitribune will publish news stories written by professional journalists, it will give internet users the ability to propose factual corrections and additions, almost similar to the model followed by Wikipedia. All suggested changes and additions will be reviewed by volunteer fact-checkers.

In order to ensure transparency, which is often missing in today's journalism practices, Wikitribune will try to be as transparent about its sources as possible and post full transcripts of its interviews, as well as video and audio recordings.

Further, at Wikitribune, the language used in its stories will be neutral and factual and not favour one party over another. According to Wales, "It takes professional, standards-based journalism, and incorporates the radical idea from the world of wiki that a community of volunteers can and will reliably protect the integrity of information."

Wikitribune has been designed with the intent of counteracting fake news spreading on social media. Fake news can spread like wildfire on social media in a matter of seconds and cause major havoc and panic among people. Social media fundamentally filters the news, showing us what we want to read in order to confirm our biases and keep us clicking on those links at any cost.

Though the Wikipedia founder is now trying to build a better-informed society, it is interesting to note that prior to Wikitribune, he was on the other side of the road. Wales' internet encyclopedia, which sees edits at a rate of roughly 10 per second, has often run into trouble for hosting misleading or inaccurate information. Being a community platform, Wikipedia often finds it difficult to single out miscreants who purposefully plant false information.

It will be interesting to see what damage control Wikitribune is able to achieve in the world full of fake news.


Two Techies Working To Solve India's Urgent Social Media Problem- FAKE NEWS

The fake news trend on social media has been around for a while, but the world really took notice of the situation during America's recent presidential election, where it was alleged that fake stories on Facebook helped tip the balance in a particular candidate's favour. If you thought America is the only country suffering from the fake news fiasco, you're deeply mistaken. The trend has been plaguing the rest of the world as well and has even entered the largest democracy on Earth, India.

Two Indian techies have decided to control the fake news scenario before it gets out of hand. The Bengaluru duo, Bal Krishn Birla and Shammas Oliyath, have built a website called check4spam.com that will help detect fake messages that are being widely shared on WhatsApp and Facebook. The site checks a particular story/message on the basis of both the research and investigation done by the check4spam team along with some volunteer users.

A self-funded project for now, check4spam works with a vision of providing unconditional service to humanity. Its mission is to make life easy for the common man and difficult for the spammers. According to their official website, with check4spam, the duo have taken it upon themselves to educate people in India who fall prey to fake messages on social media and then end up circulating those messages.

With internet usage in the country increasing at a rapid pace, it is vital to curb the menace of fake news as soon as possible, as many internet users in the country find it difficult to differentiate authentic sources from fake or malicious ones. Further, there are several threats of hoaxes, clickbait or Trojan-horse-style software built to steal or phish information from a user’s device.

The website, check4spam, verifies a particular piece of news by:

1. Contacting the person/organization mentioned in the post
2. Doing an extensive search, both online and offline, to find any further information about the news

The company has even set up a WhatsApp number wherein people can send in their messages for fact-checks. According to them, they're currently getting as many as 100 messages a day for verification.

The check4spam.com website, which gets half a million page views a month, supports messages that are text-only, image-only, or contain both text and image. The platform is also crowdsourcing spam-message detection by asking social media users to report the spam messages that they find themselves.
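A service like this has to recognise the same hoax even when it arrives with slightly different formatting. The sketch below is purely illustrative; check4spam's real pipeline is not public, and the function names and verdict labels here are assumptions. It keys manually fact-checked verdicts by a normalised hash of the message text, so a re-forwarded copy with different casing or whitespace still matches:

```python
import hashlib

# In-memory verdict store, keyed by a normalised hash of the message text.
_verdicts = {}

def _key(message):
    """Lowercase and collapse whitespace before hashing, so trivially
    reformatted copies of the same hoax map to the same key."""
    normalised = " ".join(message.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def record_verdict(message, verdict):
    """Store the result of a manual fact-check ('spam' or 'genuine')."""
    _verdicts[_key(message)] = verdict

def check(message):
    """Return a stored verdict, or 'unverified' for unseen messages."""
    return _verdicts.get(_key(message), "unverified")

record_verdict("Forward this to 10 people or else!", "spam")
print(check("forward this to  10 people or else!"))  # spam
```

Anything that comes back "unverified" would then go into the manual queue described above: contacting the people mentioned and searching for corroborating information.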

So, if you come across a story or message which you think doesn't add up, you know where to check it now.

Facebook Launches ‘Disputed’ Tag To Fight ‘Fake News’

Mark Zuckerberg's Facebook has kept its word and finally launched the much-awaited 'disputed' tag to fight 'fake news' on its famous social networking site. Though currently only active in the United States, the initiative will see Facebook tagging stories that are found to be factually false with a disputed tag. Stories reported as fake by Facebook users will be checked for factual accuracy by non-partisan third-party organisations like Politifact and Snopes.

The social networking site has also added a question on its help centre page explaining in detail how a story will be marked as disputed on the platform. The section mentions that the feature isn't yet available to everyone, and it is still unclear how many users currently have access to the tool.

The new tool first came to light on Twitter, when excited users shared screenshots which identified links to websites that are infamous for producing misinformation and false stories.

Facebook finally decided to launch a crackdown on fake news stories, 13 years after its launch, in the light of the massive backlash it received last November over claims that so-called fake news on the platform had influenced the outcome of the US presidential election. In December, Facebook decided to do some damage control by announcing that it had joined hands with fact-checkers that are signatories of the journalism non-profit Poynter’s International Fact Checking Code of Principles, including ABC News, FactCheck.org, Snopes and Politifact.

News stories reported as fake by Facebook users will be forwarded to these fact-checkers for verification. If the fact-checkers agree that the story contains false or misleading facts, it will then appear in Facebook's News Feed with a "disputed" tag, along with a link to a corresponding article explaining why it has been given the tag. Such stories will also rank lower in the News Feed, and each time a user shares the story on their timeline, they will receive a warning making them aware of the disputed tag allotted to the story.
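The ranking side of this flow amounts to applying a demotion multiplier to flagged stories. The toy ranker below is a sketch under stated assumptions, not Facebook's actual algorithm: the data layout is invented, and the 0.2 multiplier is only inspired by Facebook's reported figure of roughly an 80 per cent distribution drop for stories rated false:

```python
def rank_feed(stories, demotion=0.2):
    """Order stories by engagement score, with disputed stories keeping
    only a fraction of their score so they sink down the feed."""
    def score(story):
        base = story["engagement"]
        return base * demotion if story.get("disputed") else base
    return sorted(stories, key=score, reverse=True)

feed = [
    {"id": "hoax", "engagement": 100, "disputed": True},
    {"id": "local-news", "engagement": 40},
]
print([s["id"] for s in rank_feed(feed)])  # ['local-news', 'hoax']
```

Even though the hoax has far more raw engagement, the demotion drops its effective score below the genuine story, which is the behaviour the disputed tag is designed to produce.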

While for now the tool has only been launched in the US, Facebook is making continuous efforts to make it available in every region where the social networking site is active. A special focus is being placed on Europe amid pressure from the European Union to reduce the spread of misinformation. Facebook recently announced fact-checking partnerships in France and Germany in the light of the coming elections in each country.
