
Google Unveils India-Focused Safety Charter, Shares How It Is Using AI to Combat Online Frauds and Scams




Google has unveiled its Safety Charter for India, highlighting how it is using artificial intelligence (AI) to identify and prevent cybercrime across its products. The Mountain View-based tech giant said that as India's digital economy grows, so does the need for trust-based systems. The company is now deploying AI in its consumer products and country-wide programmes, as well as to detect and remove vulnerabilities in enterprise software. Alongside this, Google also highlighted the need to build AI responsibly.

Google’s Safety Charter for India Highlights Key Milestones

In a blog post, the tech giant detailed its achievements in identifying and preventing online fraud and scams across its consumer products as well as its enterprise software. Explaining the focus on cybersecurity, Google cited a report stating that UPI-related fraud cost Indian users more than Rs. 1,087 crore in 2024, while total financial losses from unchecked cybercrime are reportedly projected to reach Rs. 20,000 crore in 2025.

Google also noted that bad actors are rapidly adopting AI to enhance their cybercrime techniques, including the use of AI-generated content, deepfakes, and voice cloning to pull off convincing frauds and scams.

The company is combining its policies and suite of security technologies with India's DigiKavach programme to better protect the country's digital landscape. Google has also partnered with the Indian Cyber Crime Coordination Centre (I4C) to "strengthen its efforts towards user awareness on cybercrimes, over the next couple of months in a phased approach."

Coming to the company's achievements in this space, the tech giant said it removed 247 million ads and suspended 2.9 million fraudulent accounts that violated its policies, which also include compliance with state- and country-specific regulations.

In Google Search, the company claimed its AI models now catch 20 times more scammy web pages before they appear in search results. The platform is also said to have reduced fraudulent websites impersonating customer service and government services by more than 80 percent and 70 percent, respectively.

Google Messages recently received the new AI-powered Scam Detection feature, and the company claims the security tool is flagging more than 500 million suspicious messages every month. The feature also warns users when they open URLs sent by senders not saved in their contacts; this warning is said to have been shown more than 2.5 billion times.

Google Play, the company's app marketplace for Android, is claimed to have blocked nearly six crore (60 million) attempts to install high-risk apps, covering more than 220,000 unique apps across more than 13 million devices. Google Pay, its UPI-based payments app, also displayed 41 million warnings for transactions its systems flagged as potential scams.

Google is also working to secure its enterprise-focused products from potential cybersecurity threats. Its Project Zero security team, in collaboration with Google DeepMind, used an AI agent to discover a previously unknown vulnerability in SQLite, a piece of software widely used in enterprise systems.

The company is also collaborating with IIT Madras to research Post-Quantum Cryptography (PQC), which refers to cryptographic algorithms designed to secure systems against the potential threats posed by quantum computers. These algorithms are used for encryption, digital signatures, and key exchange.
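To make the idea concrete, the sketch below shows what a post-quantum key exchange looks like using the open-source liboqs Python bindings (imported as oqs). It is purely illustrative and unrelated to Google's or IIT Madras's own work; the "ML-KEM-768" algorithm name is an assumption and depends on the liboqs version installed (older builds expose it as "Kyber768").

```python
# Minimal sketch of a post-quantum key encapsulation (KEM) exchange using the
# open-source liboqs Python bindings. Algorithm name is an assumption and may
# differ across liboqs versions.
import oqs

ALG = "ML-KEM-768"

# The receiver generates a keypair and publishes the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a fresh shared secret against that public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # The receiver recovers the same shared secret from the ciphertext.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
print("Shared secret established:", shared_secret_sender.hex()[:16], "...")
```

In practice, such post-quantum primitives are typically deployed alongside classical algorithms in hybrid schemes during the migration period.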

Finally, on the responsible AI front, Google claimed that its models and infrastructure are thoroughly tested against adversarial attacks via both internal systems and AI-assisted red-teaming efforts.

To label AI-generated content and help users judge its authenticity, the tech giant is using SynthID to embed an invisible watermark in text, audio, video, and images generated by its models. Google also requires YouTube creators to disclose AI-generated content. Additionally, the double-check feature in Gemini lets users ask the chatbot to verify its responses against a Google Search and flag potential inaccuracies.
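As an illustration of how text watermarking is applied in practice, here is a minimal sketch using the SynthID Text integration that Google open-sourced into the Hugging Face transformers library (v4.46 and later). The model name and the watermarking key values are placeholder assumptions, and this sketch is not the production pipeline Google describes.

```python
# Minimal sketch: generating watermarked text with the open-source SynthID Text
# integration in Hugging Face transformers (available from v4.46 onwards).
# The model name and the "keys" values below are placeholder assumptions.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b-it"  # placeholder; any causal LM can be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is parameterised by a private list of integer keys and an
# n-gram length; real deployments keep these keys secret.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29,
          590, 639, 13, 715, 468, 990, 966, 226, 324, 585],
    ngram_len=5,
)

inputs = tokenizer("Write a short note about online safety.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # embeds the imperceptible watermark
    do_sample=True,
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```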



