Meta’s ulterior motive behind the viral 2016-2026 trend
Longitudinal data, which shows how we change over time and across contexts, is extremely high-value for companies and a dream for machine learning.
A few weeks ago, I started noticing an unusual trend on my Instagram feed. Friends, celebrities, and influencers were all posting photos of themselves from 2016, with captions along the lines of “2026 is the new 2016” or “ten years ago I was a bit lost, and then I found my fitness routine.”
I had little desire to participate in the trend, mostly because in 2016, I lost my job to a con man CEO and Donald Trump was elected president. So even flicking through photos from that year was unpleasant; posting photos memorializing that time was the last thing I wanted to do.
I stopped thinking about it until one of my favorite novelists, Katie Kitamura, shared an Instagram post from Dr. Sarah Saska, an AI ethicist and researcher. The post said: “that 2016 post isn’t just a throwback; it’s data that helps train AI behind facial recognition, deepfakes, and tracking.”
It’s not just about aging, she continued in the post, but about how identity persists across time and change, and how “once AI learns how identity persists over time,” it gets better at “facial recognition, deepfakes, and tracking.”
This immediately piqued my interest, because I’ve long wondered how these trends gain momentum: whether they’re devised by the companies themselves, or started by influencers and then boosted by the companies for engagement once they catch on.
We all know that by using any of these platforms, we’re relinquishing some right to privacy over our data. But I’d never really considered that such a trend is a dream not just for engagement but for data collection: potentially millions of Instagram users happily and voluntarily sharing tagged posts about the changes in their lives over the last ten years.
So I reached out to Sarah Saska to chat with her about her research and learn a bit more.
Ariella Steinhorn: How did you get into this line of work?
Sarah Saska: I was doing my PhD research, bridging conversations across the social sciences, arts and humanities, and technology.
I downloaded various white papers from tech companies and noticed that they never mentioned human components like gender, race, or sexuality. I started digging around and saw that everything was framed as gender-blind or gender-neutral, as if there were no need to consider the human element at all. It was a clusterfuck.
At the time, my grad school didn’t know where to put me. Folks thought I was dabbling in conspiracy, that I had a doomsday vibe. So I started a company in the product inclusion space to look into different types of bias. Today, it’s hard to find a company that’s not in the AI space, so researching AI has been a natural evolution.
AS: Do you think this 2016 trend started with Meta, with some sort of orchestrated meeting to jumpstart it? Or did it start organically, and Meta saw the traction and jumped on it somehow?
SS: The origin is unclear. It could have come from an influencer being encouraged to launch something. I’m not saying Meta planned this, but the data it ultimately produces has unusually high value, and Meta has the right to use it.
The trend creates the perfect storm because of identity continuity. It is explicit in its labeling. It captures biological aging, and it also captures social confirmation: people are liking, commenting, and engaging in a way that confirms these images are all of the same person.
From a machine learning perspective, this creates really high confidence. And it tackles one of the biggest challenges in AI: how to recognize the same person across time, context, and change.
Longitudinal data is also difficult to collect ethically, so having people produce it voluntarily and at scale is ideal.
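To make the “high confidence” point concrete, here is a minimal sketch of what a face-verification check over a ten-year gap looks like. This is my own illustration, not Saska’s research or Meta’s actual pipeline; the embed() placeholder and the similarity threshold are assumptions standing in for a real face-recognition model.

```python
# Illustrative sketch only: how a pair of photos labeled with the same
# account, ten years apart, could be checked by a face-verification system.
# A real system would use a trained face-embedding model; embed() below is
# a deterministic stand-in so the script runs without any model or images.
import hashlib
import numpy as np

def embed(image_id: str) -> np.ndarray:
    """Placeholder for a face-recognition model mapping a photo to a
    unit-length embedding vector (here: a fake, seeded random vector)."""
    seed = int(hashlib.sha256(image_id.encode()).hexdigest(), 16) % (2**32)
    vec = np.random.default_rng(seed).normal(size=128)
    return vec / np.linalg.norm(vec)

def same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.5):
    """Cosine similarity of two unit vectors plus a verification decision."""
    score = float(a @ b)
    return score, score >= threshold

# The trend supplies exactly this kind of labeled pair: one account, two
# photos a decade apart, with likes and comments confirming the identity.
photo_2016 = embed("user123/2016_throwback.jpg")
photo_2026 = embed("user123/2026_update.jpg")

score, match = same_person(photo_2016, photo_2026)
print(f"similarity={score:.2f}, same person? {match}")
```

The point Saska is making is that the trend supplies both halves of what a system like this needs at scale: the photo pairs themselves and a socially confirmed “same person” label to train and evaluate against.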
AS: How do you think a company like Meta is using data differently from a company like Palantir? Because what you’re describing sounds a lot like Palantir.
SS: The different players in big tech each offer one component of what’s needed. It’s more about the combination: together, these big tech players have the whole picture.
This can be really valuable information for advertisers, who can sell us things we don’t need. Or it can be a form of surveillance for election interference, where you’re targeted for your political beliefs.
And it’s not just Meta; platforms like X, for example, downgrade your account based on what you choose to post and engage with. Grok actually went rogue and made X’s algorithm more transparent to users: people could see whether their account or posts had been censored or suppressed, as well as penalties incurred from past posts and subject matter.
AS: Speaking of X and Grok, I can’t help but think that this sort of 10-year data makes it easier to create deepfakes of women, like Grok’s sexualized images.
SS: The Grok controversy around sexualized images of women shows the social consequence of this dynamic: once women’s identities are easily learnable and reproducible, they are more vulnerable to being re-created in sexualized or degrading contexts without consent.
In that context, the 10-year trend doesn’t cause deepfakes, but it feeds into the same data conditions that make gendered, non-consensual image abuse easier and more normalized.
AS: Do you think users will start to care about this?
SS: The comments on the post are good anecdotal research into how the average consumer is thinking or reacting, and where the general knowledge gap is. People are saying: well, I already have pictures, engagement, and content out there from the last 10 years, so it’s no different if I put images together for this trend.
The average person doesn’t get that one of the primary uses of really high-quality longitudinal data is that it vastly improves surveillance and identity tracking. They don’t connect the dots between their daily online habits and how those ladder up to the government or even the military. Even the comments on my post about the 2016 trend were interesting; people were saying “well, they already have my data, so what can I do?”
It allows authorities to match people years into the future. If you’re caught on camera tomorrow, can they take a picture of you from 10 years ago and match it?
It helps with identifying people across hair changes and cosmetic surgery. It also allows law enforcement or immigration enforcement to match an outdated ID or driver’s license photo to someone today. And it can be used to identify people at protests and match them up with who they are.
That said, the shift towards a more analog lifestyle is really starting to take shape, and for good reason.
People said they’re getting rid of TikTok this week because of all the changes in ownership. We’re all going to have to figure out how we reconnect with one another.



If you’re paying less than it’s worth, you’re often the product. If you’re not paying for it at all, you’re always the product.