Leading tech companies are in a race to release and improve artificial intelligence (AI) products, leaving users in the United States to puzzle out how much of their personal data could be extracted to train AI tools.
Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn have all rolled out AI features with the capacity to draw on users’ public profiles or emails. Google and LinkedIn offer users ways to opt out of the AI features, while Meta’s AI tool provides its users no means to say “no, thanks.”
“Gmail just flipped a dangerous switch on October 10, 2025 and 99% of Gmail users have no idea,” a November 8 Instagram post said.
Posts warned that the platforms’ AI tool rollouts make most private information available for tech company harvesting. “Every conversation, every photo, every voice message, fed into AI and used for profit,” a November 9 X video about Meta said.
Technology companies are rarely fully transparent when it comes to the user data they collect and what they use it for, Krystyna Sikora, a research analyst for the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.
“Unsurprisingly, this lack of transparency can create significant confusion that in turn can lead to fear mongering and the spread of false information about what is and is not permissible,” Sikora said.
The best – if tedious – way for people to know and protect their privacy rights is to read the terms and conditions, since they often explicitly outline how the data will be used and whether it will be shared with third parties, Sikora said. The U.S. has no comprehensive federal law governing data privacy for technology companies.
Here’s what we learned about how each platform’s AI is handling your data: