Well, that's a wrap…
Why designing for ongoing reflection matters

It’s that time of year again, when apps confidently tell us who we are. This year, Spotify informed me that I’m 78 years old in Spotify years (I’m 33 in human ones). I personally thought my affinity for Chappell Roan would have shaved off a decade or two, but I guess not! Strava had thoughts about my running habits. YouTube summarized my viewing habits and decided I was an Adventurer, whatever that means. Even LinkedIn had something to say about who I’d been professionally!

Honestly, I don’t want to be a hater; these year-end wrap-ups are fun! They feel personal. They often even feel accurate. But they also create an uneasiness and raise a question I keep coming back to: how do these systems actually decide who we are?
At a high level, most of the systems I mentioned learn about us through our behavior. Some of that behavior is subtle, like how long we linger, what we skip, when we hesitate. Some of it feels more intentional: likes, follows, comments, subscriptions. In most cases, we’re not carefully declaring who we want to be to our products.
Over time, we learn that these implicit behaviors function as levers. If we want less of something, we skip it. If we want more, we engage. We adapt ourselves to these janky systems. It’s a very narrow kind of control, because we don’t know the weight each action carries in our experience. Then, once a year, we’re handed a polished summary of who the system thinks we are. But there are better ways to explore this.
A while ago, I had the opportunity to work on a project for a company called Ground News. Their goal is to aggregate news sources in one place to give you a clearer picture of your news diet. Part of how they do that is through a feature called My Media Bias, which returns your media consumption data to you in a structured, legible way: what sources you read, how they’re distributed across the political spectrum, and where potential blind spots might exist.

The design intent wasn’t to tell people what their biases were, but to make their own consumption data visible in a way that could be inspected and revisited over time. That “over time” is a distinction that matters: Ground’s approach treats visibility as part of the ongoing experience, not a once-a-year summary. That kind of continuous feedback supports insight and intentional adjustment in a way that opaque behavioral feedback loops don’t.
In truth, there’s a reason most large platforms privilege behavioral signals. From a systems perspective, they work! Observed behavior (what we click, watch, skip, or linger on) is abundant, continuous, and highly predictive of what we’ll do next. Decades of recommender systems research show that these implicit signals consistently outperform self-reported preferences when it comes to predicting future interaction.
And I can’t deny that, but these signals only describe what happened, not what a person is trying to do. That’s where explicit preferences matter. They’re messier, slower, and harder to collect, but they’re the only place intent actually lives. When systems treat stated preferences as secondary or optional, they often sideline the user’s own goals.
What’s interesting about these year-end summaries is the kind of relationship they establish between the system and the person using it. You can see what the system thinks of you, but you can’t interrogate it, correct it, or guide it in any explicit way. The moment for reflection arrives only after the system has already learned everything it’s going to learn from you that year.
When assumptions are visible as they’re being formed, users have the opportunity to respond with intention rather than reaction. That’s a fundamentally different kind of agency. It treats people less like sources of signals and more like collaborators in the modeling process.
Systems that meaningfully shape what we see, hear, and engage with are making value-laden decisions on our behalf. When the only way to influence those decisions is through opaque behavioral feedback, users are left guessing how to express intent.
Wrapped features are interesting because they acknowledge that people want to understand how they’re being seen. The problem is that, so far, all of the examples I’ve seen stop short of giving users the tools that would actually support that understanding. There’s no invitation to adjust, or to say, “this doesn’t quite feel right.”
To be clear, I’m glad Wrapped exists. I hope more companies keep doing these kinds of reflections. They offer a rare glimpse into how these systems see us, what they prioritize, and where their interpretations fall short.
The more interesting question isn’t whether Wrapped gets us right or wrong. It’s why reflection shows up as a once-a-year artifact instead of an ongoing conversation. If systems are going to model us continuously, it’s worth asking why we’re only invited to reflect after the fact, rather than participate while that model is still being formed.

