The Many Problems with Future Predictions About UX

Updated February 2026 by Dr. Katharina Grimm

Dr. Katharina Grimm is a UX Writer, educator, and founder of The UX Writing School, with 8+ years of industry experience and a PhD in Technology Management and Communications.


In April 2025, I focused my content across YouTube, my newsletter, Instagram, and this blog on the broad topic of the future of UX. It was a theme I'd been exploring in preparation for a free talk about where our field might be headed.

As part of that preparation, I spent a significant amount of time digging into what others were writing, predicting, and saying about what comes next in UX.

Why the extra effort? Because a lot of what gets shared under the label "future of UX" doesn't appear to be grounded in research — or even direct, current experience. Instead, it often functions as a tool to grab attention, build reach, or sell something. And that contributes to a broader pattern worth naming: treating predictions as surface-level engagement material rather than the directional signals they really are.

Why Future Predictions Deserve Higher Standards

Speculating about the future is natural. Anyone can do it. But in a professional context, predictions do more than express opinions. They shape priorities. They influence what people choose to study, which skills they decide to develop, and which parts of the field they move away from.

When someone claims "AI will take over UX" or "Tool X will be irrelevant in five years," they are not simply reflecting on the landscape — they are implying what others should act on today.

These kinds of statements tend to get attention because they offer certainty in a field that is currently uncertain. But that's also exactly what makes them potentially misleading. According to the User Interviews State of User Research 2025 report, nearly 49% of researchers report feeling negatively about the future of UXR — a 26-point increase from 2024. That level of widespread anxiety in a professional field doesn't emerge from nowhere. It's partly the result of an information environment saturated with loud, definitive, poorly grounded claims about what's coming next.

If we don't treat future predictions with the weight they deserve, we risk contributing to that anxiety rather than helping people navigate it thoughtfully.

What's Going Wrong in the Discourse?

During my research, I noticed something interesting. On platforms like Reddit and Medium, discussions around the future of UX were often thoughtful and well-argued. People questioned others and themselves, added context, and acknowledged complexity. On LinkedIn, the tone was different: less careful, more confident, and in many cases, less grounded.

Here are the five patterns that stood out most.

1. Emotion Over Evidence

Many predictions are written in urgent, dramatic, or emotionally charged language. Lines like "adapt or get left behind" are common, along with more extreme variations of that framing. While these kinds of statements may generate engagement, they don't actually inform — or help.

What's typically missing is data. If a tool is described as "the next big thing," how often does it appear in actual job listings? Are there examples of teams implementing it successfully? What outcomes have been observed in practice?

Without that context, a confident-sounding prediction is just an opinion — and we don't even know if it's an informed one. According to Nielsen Norman Group's 2025 UX Reckoning report, 47% of UX professionals who used AI in their work found it had "some value," while 20% were "not impressed." That's a notably mixed picture — and yet the public discourse often presents AI's impact on UX in far more absolute terms.

2. Repetition Over Original Thinking

Something else I noticed: especially on LinkedIn, people tend to surface very similar lists of "emerging trends" in UX and UX Writing. Very similar, in a way that's difficult to explain by coincidence.

These lists typically include entries like "Accessibility & Inclusion," "Voice UI," "Emotional Content Design," and "Localization." But many of these so-called trends didn't align with what I see in my day-to-day work. And some aren't trends at all — they're foundational quality standards that have been essential to good UX practice for years.

That raised a question: where are these predictions actually coming from?

When I asked ChatGPT for emerging trends in UX Writing, I got back nearly identical lists to what I'd been seeing in those posts. And when I followed up with ChatGPT about its sources, it became clear that the most recent data it had drawn on was from 2023.

The point isn't that ChatGPT learned from those predictions. The point is the reverse: people were publishing AI-generated outputs as professional insights — without questioning the source, the date of the underlying data, or the absence of any original reasoning. AI can support idea generation, but when a prediction comes directly from a language model and is published as expert analysis, something has gone wrong in the process.

3. Authority Without Proximity to the Work

Many high-visibility predictions about the future of UX come from people who are no longer involved in day-to-day UX work — or who never were.

This isn't about excluding voices. Someone working at a strategic, organizational, or educational level can offer genuinely valuable perspective on where the field is heading. But that perspective should be framed accordingly — as a view from a particular vantage point, not as a practitioner's read on what's happening on the ground.

As someone who has actively worked as a UX Writer for close to eight years, I can say that most trends discussed as hot topics on LinkedIn and in the media have not yet shown up meaningfully in corporate or startup reality. The gap between what gets predicted and what's actually happening in product teams is often significant.

4. Visibility Over Responsibility

Predictions that are loud, confident, and definitive tend to perform well on social platforms. But the posts that perform well aren't always the posts that serve the field.

Once someone positions themselves as a thought leader and follows through on that positioning, their words carry real influence. Their posts shape the discourse. Other professionals take their opinions seriously. People who are newer to the field look up to them.

That's a meaningful responsibility. And when predictions are shared without nuance, context, or genuine reasoning behind them, they can cause more harm than good. Yet when a high-confidence prediction turns out to be wrong or gets challenged, the original author rarely revisits it publicly. The post stays up. The impressions keep accumulating.

5. A Narrow Focus on AI

Perhaps the most limiting pattern I encountered during my research was the narrowness of the conversation itself.

Almost all future-of-UX discussions are centered on AI. And while AI is undoubtedly a major factor in how the field is evolving, it is not the only one.

What about economic shifts? Political change? Education reforms? New hardware categories? The UX implications of aging populations? The needs of users in emerging markets? All of these forces can shape our work in significant ways — and they are largely absent from most predictions.

Even within the AI discourse, the conversation often stays shallow. Most posts offer the same single-line takeaway: "AI won't replace us, but our jobs will change." That's a reasonable starting point. But why not be more precise about how jobs will change — which tasks will shift, which skills will become more important, what new ethical questions will emerge, and what practitioners should be learning right now? According to Maze's Future of User Research Report 2025, AI adoption among UX practitioners rose 32% in a single year, with 58% now using AI tools in their work. That's a significant shift — and it deserves more analytical depth than a reassuring platitude about human irreplaceability.

How We Can Do Better

We don't need perfect answers about the future. No one has those. But we do need a better process for asking the right questions. A few approaches that have been useful in my own research:

  • Filter out the noise. With so many predictions circulating, not all of them will be thoughtful or useful. Posts that lead with dramatic claims — "brand voice is dead," "the UX role will disappear in two years" — are usually designed more to provoke than to inform. If something feels engineered for reaction rather than reflection, it's probably not worth your time.

  • Listen to practitioners who are close to the work. When someone makes strong claims about where UX is going, it's reasonable to ask whether they actively work in UX today — or whether they've moved into a different kind of role. Again, this isn't about excluding perspectives. It's about weighting them appropriately and being aware of what vantage point each voice is speaking from.

  • Build your own picture. If every prediction you encounter is about AI, pause and ask: what else might be shaping the field that isn't getting discussed? Look into hiring patterns, trends in job listings, tech policy, emerging markets, or hardware innovation. Talk to peers. Ask questions that the dominant narrative isn't asking. Don't just consume the story — actively contribute to building it.

  • Choose platforms thoughtfully. Where a prediction is published affects how it tends to be written. Medium posts and long-form articles generally include more context and supporting reasoning. Reddit threads often show a genuine range of perspectives. LinkedIn posts are frequently optimized for reach. None of these platforms are inherently unreliable — but it's worth bringing different expectations to each one.

  • Help shape the conversation. If you have a question, ask it out loud. If a claim doesn't hold up against your direct experience, say so. If you've observed something meaningful in your work that contradicts what's being widely predicted, share it. The future of our field is not something that happens to us. It's something all of us participate in making.

The future of our field is not an ominous wave about to crash onto the shore of our careers. Think of it more like a soup with different chefs adding ingredients. It will be served to all of us — but we also get to stand at the stove. The more we contribute thoughtfully to the discourse, the better that soup is going to taste.

A Note on the Current Mood in UX

The broader emotional context of this conversation matters. The User Interviews State of User Research 2025 report found that 67% of UX researchers gave a thumbs down to career opportunities in the field — a 21-point increase from 2024. That's a striking number, and it reflects a genuinely difficult period for many practitioners.

In that context, the quality of the discourse around the future of UX matters more, not less. Predictions that amplify anxiety without offering grounded analysis, or that present worst-case scenarios as certainties, add to a professional climate that is already under pressure. Predictions that are honest about uncertainty, specific about what's actually changing, and grounded in real evidence serve everyone in the field better.

The future of UX Writing — and UX more broadly — is genuinely uncertain right now. That uncertainty deserves to be taken seriously, engaged with carefully, and discussed with the kind of rigor that a professional field should bring to questions that affect people's livelihoods and careers.

Key Takeaways

  • Future predictions in UX shape real decisions — what people study, which skills they build, which parts of the field they move toward or away from. That makes their quality a professional responsibility, not just a matter of opinion.

  • The most common problems with future-of-UX predictions are: emotion over evidence, AI-generated repetition presented as expertise, authority without proximity to practice, visibility prioritized over responsibility, and a narrow focus on AI at the expense of other shaping forces.

  • Better contributions to this conversation involve sourcing predictions in data and direct experience, framing perspective appropriately, acknowledging uncertainty, and being specific rather than dramatic.

  • Practitioners who are currently close to the work — in active roles in UX Writing, design, or research — are valuable sources of grounded, current perspective. Their voices deserve more weight in this conversation.

  • The future of UX is shaped by everyone who participates in it. Contributing thoughtfully to the discourse is itself a form of active participation in where the field goes next.

Frequently Asked Questions

Why are so many future predictions about UX unreliable? 

Several factors contribute: the incentive structure of social platforms rewards confident, dramatic content over nuanced analysis; some predictions are generated with AI tools and published without critical evaluation; and many high-visibility voices in UX have moved away from day-to-day practice, meaning their perspective is valuable but may not reflect what's happening on the ground in product teams.

How can I tell if a UX prediction is worth taking seriously? 

Look for: a clearly identified source of evidence or experience, specific rather than sweeping claims, acknowledgment of uncertainty, and a clearly stated vantage point. Predictions grounded in observable patterns — job listing trends, published research, documented outcomes from real teams — are generally more trustworthy than those grounded in general impressions or urgency.

Is AI really going to change UX as dramatically as people predict? 

It is already changing aspects of UX work, particularly in research and productivity workflows. According to Maze's Future of User Research Report 2025, 58% of UX practitioners now use AI tools — a 32% increase in a single year. But the changes are uneven, context-dependent, and nowhere near as absolute as the most dramatic predictions suggest. The reality is more gradual and more nuanced than most social media discourse reflects.

What factors besides AI are shaping the future of UX? 

Economic conditions, political and regulatory change, demographic shifts (including aging populations and emerging markets), new hardware categories, education reforms, and the evolving relationship between UX disciplines and adjacent fields like content strategy and product management are all relevant. These factors are underrepresented in most future-of-UX discourse.

How should UX practitioners approach uncertainty about the field's future? 

With curiosity and groundedness rather than anxiety. Build knowledge that is durable — skills and expertise that will hold value across different scenarios. Follow the discourse critically, not passively. Seek out grounded, practice-based perspectives. And contribute your own observations from real work — they are more valuable to the field than they might seem.

How can the UX community improve the quality of future predictions? 

By holding itself to higher standards: sourcing claims in evidence, framing perspective appropriately, revisiting and updating predictions when they don't hold up, and creating space for nuanced, complex conversations rather than rewarding the loudest or most dramatic voices. The quality of the discourse reflects the maturity of the field.


Want to go deeper into UX Writing? Subscribe to The UX Writing Memo — a newsletter that explores one specific question from the world of UX, writing, and the tech industry per issue.

Learn more at writewithdrkat.com | The UX Writing School | YouTube
