Several major technology companies are using user data to enhance their artificial intelligence capabilities. OpenAI, for instance, has admitted to using copyrighted materials to develop ChatGPT. LinkedIn is another example, employing user resumes to refine its AI models. Even Snapchat has acknowledged using user selfies to personalize ads, CNN reported.
These platforms are leveraging a wealth of conversational data from social media posts, including slang and real-time events. However, many users may feel uncomfortable with their personal information being used to train these AI systems.
David Ogiste, founder of Nobody’s Cafe, expressed concerns about the lack of transparency regarding data usage.
“Right now, there is a lot of fear being created around AI, some of it well-founded and some based in science fiction, so it’s on these platforms to be very open about how they will and won’t use our data to help alleviate some of the reactions that this type of news brings – which for me, it doesn’t feel like that has been done yet,” Mr Ogiste told CNN. He emphasized the need for platforms to be more open about their practices and provide clear opt-out options.
While some platforms offer users the choice to opt out of data sharing for AI training, it’s important to note that publicly posted content can still be accessed by third parties.
Here’s a breakdown of how major social media platforms handle user data for AI:
- LinkedIn: Offers an opt-out option for AI training data, though data may already have been used in earlier training.
- X (formerly Twitter): Requires users to opt out if they don’t want their posts used to train Grok.
- Snapchat: Uses selfies for AI-generated ads but allows users to opt out.
- Reddit: Shares public user data with third parties for AI training, but private content is not shared.
- Meta: Uses public Facebook and Instagram data for AI training, but private messages are not used.