
In letter to FTC, Senators request information on how the agency is tracking the role of AI in scams targeted at older Americans

Senators: “We still have a lot to learn about how AI will be utilized in the future against American consumers and older adults”

Washington, D.C. - Today, U.S. Senator Bob Casey (D-PA), Chairman of the U.S. Senate Special Committee on Aging, led his colleagues Richard Blumenthal (D-CT), John Fetterman (D-PA), and Kirsten Gillibrand (D-NY) in sending a letter to the Federal Trade Commission (FTC) to request information about the agency’s work to track the use of artificial intelligence (AI) in scams targeted at older Americans. In the letter, the Senators pointed out the increasing role that AI is playing in frauds and scams, and asked the agency for information about how it is tracking AI-powered scams and the actions it is taking to prevent them.

The Senators wrote, “Many times, the AI-powered scams seem so realistic that the victims do not know the scammers have utilized AI in targeting them. In these situations, generative AI can exacerbate the false panic and sense of urgency victims often feel when targeted, compelling them to turn over the private or financial information the scammer requests. Unfortunately, it is evident that we still have a lot to learn about how AI will be utilized in the future against American consumers and older adults…efforts to educate the public, law enforcement, and policymakers should be informed by evidence-based data.”

Last month, Chairman Casey held a hearing entitled, Modern Scams: How Scammers Are Using Artificial Intelligence & How We Can Fight Back. The hearing examined how AI can be utilized by scammers to deploy scams and convince targets of their veracity, and how AI technology is being deployed to enhance the next generation of fraud detection systems. During the hearing, Chairman Casey unveiled the Aging Committee’s annual Fraud Book and released a brochure on AI-powered scams and a bookmark featuring tips to avoid scams.

You can read the letter below:

The Honorable Lina M. Khan

Chair

Federal Trade Commission

600 Pennsylvania Avenue, NW

Washington, D.C. 20580

Dear Chair Khan:

We write to request additional information about the Federal Trade Commission’s (FTC) work to track the increasing use of artificial intelligence (AI) to perpetrate frauds and scams against older Americans. Safeguarding older Americans from frauds, scams, and financial exploitation has been a long-standing bipartisan priority of the Committee. The White House, in an Executive Order issued on October 30, 2023, also joined in calling for stronger protections from AI-enabled fraud and deception, specifically encouraging FTC to consider rulemaking to protect consumers and workers from AI harms. In order to respond effectively, we must understand the extent of the threat before us; we ask that FTC share how it is working to gather data on the use of AI in scams and ensure it is accurately reflected in its Consumer Sentinel Network (Sentinel) database.

Sentinel is a secure online database where FTC stores reports regarding issues consumers have experienced in the marketplace, including scams perpetrated by individuals, businesses, or networks. Similar to the Aging Committee’s Fraud Hotline, consumers can report as much or as little as they wish when they file a report with FTC. Data within Sentinel and details of each report are available to law enforcement, and FTC provides the public with an analysis of all reports received over the year. In 2022, Sentinel received over 5.4 million consumer reports, which were sorted into 29 top categories.

Consumer reports are categorized based on the contact methods deployed by scammers and payment types, with fraud losses and reports broken down by age. Over the years, FTC has updated these categories to incorporate emerging technology, like Peer-to-Peer (P2P) payment apps or services and cryptocurrency. However, references to AI are noticeably lacking in FTC’s Sentinel.

AI is a broad term that refers to technology that can simulate human intelligence. Generative AI is a sub-field of AI that can be used to create original content—generative AI can power chatbots that copy writing styles, find personal information, create fake documents, create fake photos, and more. Voice cloning and deepfakes are both examples of generative AI in use: voice cloning allows a user to mimic or impersonate the voice of loved ones, authorities, or celebrities; similarly, deepfakes are AI-generated images that can be used to spread misinformation and commit fraud.

We know that AI is simply exacerbating the pervasiveness and effects of existing schemes. As FTC shared in its July 27, 2023 letter, frauds and scams involving AI typically fall into the following categories: (1) family emergency scams, where the scammer mimics the voice of a family member and asks for money; (2) romance scams, where the scammer fakes a love interest by using chatbots to send messages; (3) business-related scams, where the scammer acts as a high-ranking employee or other official to get businesses to complete wire transfers or send sensitive information; and (4) phishing scams, where scammers can easily and quickly deploy personalized email or text messages to convince targets to share their personal information.

As the agency explores additional policymaking routes to put an end to these scams, older Americans are continuing to fall victim to these schemes. Where consumers may once have been able to easily identify and dismiss a scam, they now face greater challenges in determining whether the email, text, or phone call they received is legitimate. Many times, the AI-powered scams seem so realistic that the victims do not know the scammers have utilized AI in targeting them. In these situations, generative AI can exacerbate the false panic and sense of urgency victims often feel when targeted, compelling them to turn over the private or financial information the scammer requests.

Unfortunately, it is evident that we still have a lot to learn about how AI will be utilized in the future against American consumers and older adults. While public reporting indicates that more families are being targeted by voice clones in family emergency scams, the number of Americans targeted by scammers using generative AI remains unknown. Efforts to educate the public, law enforcement, and policymakers should be informed by evidence-based data, similar to what is collected in Sentinel.

As FTC considers more strategies to safeguard older Americans and inform decision-making around generative AI, we request the following information by January 9, 2024:

  1. How are AI-powered frauds and scams tagged and reported within Sentinel? If they are not identified, how will FTC ensure that they are identified moving forward? If these frauds and scams have been tagged and reported within Sentinel, what subcategories do these scams fall into?
  2. Oftentimes, consumers are unaware that scams have been deployed with the use of generative AI. How does FTC determine whether AI was utilized by the scammer if this is not readily identified by the consumer?
  3. For AI-powered scams, based on available data, what is the breakdown between chatbots, voice cloning, deepfakes, phishing, spoofing, and other types of scams? How else has FTC observed the use of generative AI in scams?
  4. How does FTC utilize AI in gathering, categorizing, and studying data reported to the agency and stored in Sentinel?

Thank you for your attention to this important issue. We look forward to your response.

###