What's left for urban scientists in the age of AI?

I’ve been thinking about this question a lot lately. As someone who works at the intersection of urban studies and AI, I keep hearing variations of the same concern: “Will AI replace urban scientists?” “What’s the point of doing all the research when AI can do it all?” “Should I even pursue a PhD in urban science anymore?”

These are valid questions. We’re living through a moment when AI models can generate city images, predict urban patterns, analyse satellite imagery, and even write research papers. Just last week, I asked a foundation model to produce a comprehensive urban analysis from maps, datasets, and images of a city, work that would have taken a researcher days to complete. It was both impressive and, honestly, a bit unsettling.

I know some top researchers are working hard to develop more sophisticated urban analysis models for different scenarios and applications. But it’s foreseeable that much of this work—the technical modelling, the pattern recognition, the data processing—will be handled by AI within the next decade. This raises an uncomfortable question: will traditional urban scientists be reduced to organising materials, giving lectures, and conducting field surveys while AI handles the analytical heavy lifting?

And let’s be honest about the elephant in the room: if AI can do much of what we do, faster and cheaper, why would anyone pay us? How do we justify our salaries when a foundation model can produce similar analyses at a fraction of the cost? This isn’t just an existential question—it’s an economic one.

But here’s what I’ve learned from working in this space: the question isn’t really about what’s left for us—it’s about what’s uniquely ours to contribute, and crucially, what value we can create that AI cannot.

The things AI can’t (yet) do

Let me start with a story. A few months ago, I was working on a project analysing thermal comfort in Singapore’s streetscapes. We had all the data we needed: street view images (SVIs) and pedestrian surveys. We trained sophisticated models that predicted comfort levels from the SVIs with impressive accuracy. The AI did its job beautifully.
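
For readers who haven’t built one of these pipelines, here is a minimal sketch of what “predicting comfort from street view images” can look like: a pretrained image encoder with its classification head swapped for a single regression output, trained against survey-derived comfort scores. Everything here is illustrative; the dataset class, file paths, and hyperparameters are hypothetical stand-ins, not the project’s actual code.

```python
# Illustrative only: predict a pedestrian comfort score from a street view image
# using a pretrained encoder plus a one-output regression head.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms

# Pretrained image encoder, with its ImageNet classification head replaced by a
# single regression output (the predicted comfort score).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

# Standard ImageNet preprocessing for the street view images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_one_epoch(loader):
    """One pass over (image, comfort_score) batches with a plain MSE objective."""
    model.train()
    for images, scores in loader:          # scores: survey-derived comfort ratings
        optimiser.zero_grad()
        preds = model(images).squeeze(1)   # shape: (batch,)
        loss = loss_fn(preds, scores.float())
        loss.backward()
        optimiser.step()

# Usage, assuming a hypothetical ComfortDataset that yields (preprocessed image,
# comfort score) pairs built from the SVIs and the pedestrian survey:
# loader = DataLoader(ComfortDataset("svi/", "survey.csv", transform=preprocess),
#                     batch_size=32, shuffle=True)
# for epoch in range(10):
#     train_one_epoch(loader)
```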

But here’s what struck me when thinking about the implications: technical accuracy is only part of the story. The questions that really matter are: “Why is this neighbourhood changing?” “What does this mean for the people who live there?” “How should we intervene, if at all?” “Whose interests are we serving with this analysis?”

The AI saw patterns. We saw people.

This is the first thing that remains uniquely ours: contextual understanding. Cities aren’t just optimisation problems or pattern recognition tasks. They’re living, breathing ecosystems of human needs, cultural practices, historical legacies, and social dynamics. AI can process information, but it can’t understand why a particular street corner matters to a community, or why a technically “optimal” solution might fail because it ignores local context.

I’m not romanticising human intuition here. I’m talking about something more fundamental: the ability to ask “why does this matter?” and “for whom?” These questions require judgment that comes from lived experience, cultural awareness, and ethical reasoning—things that AI, for all its power, doesn’t possess.

The second thing is problem formulation. AI is incredibly good at solving problems we give it. But who decides what problems are worth solving? Who determines that we should optimise for traffic flow versus pedestrian safety versus carbon emissions versus social equity? These aren’t technical questions—they’re value judgments that shape the entire direction of urban research and policy.

I’ve seen too many AI-driven urban projects that are technically brilliant but fundamentally misguided because they optimised for the wrong things. A model that maximises housing density without considering community cohesion. An algorithm that improves traffic flow by routing cars through low-income neighbourhoods. These aren’t AI failures—they’re human failures in problem formulation.

The third thing, and perhaps the most important: critical thinking and validation. AI models are often black boxes, and they can be confidently wrong. They can perpetuate biases, miss edge cases, and produce results that look plausible but are fundamentally flawed. Someone needs to interrogate these outputs, validate them against reality, and understand their limitations.

Recently, I came across a study that used an AI model to predict urban growth patterns. The predictions looked reasonable on the surface until you realised they completely ignored existing zoning laws, infrastructure constraints, and political realities. The model had learnt patterns from data but had no understanding of the institutional frameworks that actually govern urban development. Without human expertise to catch these issues, such research could be useless—or worse, misleading.
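
This is exactly the kind of gap where a cheap, human-designed sanity check earns its keep. The sketch below is purely illustrative (random arrays stand in for the model’s growth predictions and for real zoning and infrastructure layers); it simply asks a question the growth model never asked itself: how much of the predicted development falls on land that institutional or physical constraints rule out?

```python
# Hypothetical sanity check: how much of a model's predicted urban growth falls
# on land where development is institutionally or physically implausible?
# The arrays and threshold below are illustrative; in practice the masks would
# come from zoning maps and infrastructure layers, not random data.
import numpy as np

rng = np.random.default_rng(0)

# Model output: per-cell probability of new development on a 100 x 100 grid.
growth_prob = rng.random((100, 100))

# Constraint layers a pure pattern-learner knows nothing about.
zoning_prohibits = rng.random((100, 100)) < 0.30   # protected / non-developable zones
no_infrastructure = rng.random((100, 100)) < 0.20  # no road or utility access

predicted_growth = growth_prob > 0.8               # cells the model flags as growth
infeasible = zoning_prohibits | no_infrastructure

violation_rate = (predicted_growth & infeasible).sum() / max(predicted_growth.sum(), 1)
print(f"Share of predicted growth on infeasible land: {violation_rate:.1%}")

# A high share here doesn't prove the model wrong, but it tells a human reviewer
# exactly where to start asking questions.
```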

What we should be doing differently (and where the money is)

So what does this mean for urban scientists in the age of AI? I don’t think it means we should resist AI or pretend it’s not transforming our field. That ship has sailed. Instead, I think we need to evolve our role—and understand where we can create real, monetisable value.

We need to become better at asking questions, not just answering them. The bottleneck in urban research is increasingly not computational power or data availability—it’s knowing what questions matter and how to frame them properly. This requires deep domain knowledge, theoretical grounding, and the ability to connect technical capabilities with real urban challenges.

And here’s the thing: this skill is valuable. The organisations hiring urban scientists—whether government agencies, property developers, or research institutions—don’t just want data analysis. They want people who can tell them what to analyse in the first place. The market values those who can frame the right questions, not just run the models. I spend more time now thinking about problem formulation than I do about methodology, and I believe this is where the real value lies.

We need to become translators and integrators. Cities are complex systems that require insights from multiple disciplines—sociology, economics, environmental science, public health, design, policy. AI can process data from all these domains, but it can’t integrate them meaningfully without human guidance. Someone needs to bridge the gap between technical capabilities and practical applications, between model outputs and policy recommendations, between data patterns and human needs.

In research, I find myself constantly navigating between different disciplines and communities: thinking about how to communicate urban insights to computer scientists, how AI findings translate to urban planning contexts, how technical research connects to broader policy implications. This translation work is increasingly valuable and uniquely human—and well-compensated in the job market. Organisations pay premium rates for people who can speak multiple “languages”: technical, domain-specific, and policy-oriented. It’s not about being the best coder or the most experienced planner; it’s about being the bridge.

We need to focus on the questions AI can’t answer. There are entire domains of urban research that remain fundamentally human: understanding how people experience cities, exploring the cultural meaning of places, investigating power dynamics in urban development, examining the ethical implications of urban technologies, imagining alternative urban futures.

I believe the most impactful research increasingly pairs AI-driven analysis with exactly those human-centred approaches. These qualitative methods aren’t being replaced by AI—they’re becoming more important as counterbalances to purely data-driven work. And here’s the economic reality: organisations increasingly recognise that pure data analysis without human context leads to costly mistakes. They’re willing to pay for research that combines AI’s analytical power with human insight into what the data actually means for real people.

A different kind of expertise (and how it pays off)

So what does this mean for anyone considering a career in urban science in the age of AI? I think the field needs people more than ever, but it needs a different kind of expertise. And yes, there’s still money in it—but not where you might think.

You need technical literacy. You don’t necessarily need to be an AI expert, but you need to understand what these tools can do, how they work, and where they fail. You need to be able to critically evaluate AI-driven research and use these tools effectively in your own work. This baseline technical competency is becoming the entry ticket, not the differentiator.

But you also need deep domain expertise. The more AI can do, the more valuable human expertise becomes—not as a replacement for AI, but as a complement to it. You need to know cities deeply: their history, their politics, their social dynamics, their physical form, their ecological context. This knowledge is what allows you to ask good questions, interpret results critically, and connect technical analysis to real-world impact. And crucially, this is what commands premium rates in the market.

Here’s something I’ve observed from the job market and from conversations with peers: junior positions that mainly involve running models are increasingly commoditised. But roles that require contextualising AI outputs, spotting when they’re wrong, and translating them into strategic recommendations are in high demand. The salary gap between “can use AI tools” and “can critically evaluate and contextualise AI outputs” seems to be widening, not shrinking.

And you need ethical grounding. As AI becomes more powerful in shaping urban environments, someone needs to ask the hard questions about equity, justice, privacy, and power. Who benefits from these technologies? Who gets left behind? What values are we encoding in our algorithms? These aren’t technical questions—they’re fundamentally about what kind of cities we want to create. And increasingly, organisations are willing to pay for this expertise—whether it’s to avoid costly mistakes, manage reputational risks, or genuinely pursue equitable outcomes.

The future I see (and the economic reality)

I’m actually optimistic about the future of urban science in the age of AI. Not because AI won’t be transformative—it will be. But because the combination of human expertise and AI capabilities can achieve things neither could do alone. And critically, because there’s a sustainable economic model for urban scientists who can navigate this new landscape.

I envision urban scientists who can fluidly move between computational analysis and ethnographic research, between big data patterns and individual stories, between technical optimisation and ethical reasoning. Researchers who use AI as a powerful tool while maintaining critical distance from its outputs. Professionals who can bridge the gap between technical possibility and social desirability.

The economic landscape is shifting in interesting ways. Traditional “research assistant” roles that involve data processing and basic analysis will likely face downward pressure on salaries—AI can do much of that work. But roles that require judgment, contextualisation, and strategic thinking are becoming more valuable. The career ladder isn’t disappearing; it’s becoming steeper. The gap between entry-level and senior positions is widening, but the ceiling is also rising for those who can master the full stack: technical skills, domain expertise, and strategic thinking.

I see new revenue streams emerging too. Consulting on responsible AI implementation in cities. Validating and stress-testing AI-driven urban plans. Training urban professionals to work effectively with AI tools. Developing hybrid methodologies that combine AI analysis with human insight. These services command good rates because they address real needs that pure technology companies can’t meet.

The cities of the future will be shaped by AI, but they need to be guided by human wisdom. They need people who understand both the power and the limitations of these technologies, who can ask the right questions and interpret the answers critically, who can ensure that technical capabilities serve human needs rather than the other way around. And the organisations shaping those cities are willing to pay for that.

So what’s left for urban scientists in the age of AI? Everything that matters. The questions worth asking. The contexts worth understanding. The values worth defending. The futures worth imagining. And yes, the ability to make a good living doing meaningful work.

AI is changing our tools, but it’s not changing our purpose: to understand cities better so we can make them work better for the people who live in them. That mission remains as important as ever—perhaps more so in an age where technology is reshaping urban life at an unprecedented pace. And the market is increasingly recognising the value of professionals who can pursue that mission effectively in an AI-augmented world.

The real question isn’t whether there’s a place for urban scientists in the age of AI, or whether we can make a living. It’s whether we’re ready to evolve our practice to meet this moment, and position ourselves where we can create and capture value. I think we can be. I think we must be.


What’s your experience with AI in urban research? How do you see the role of urban scientists evolving? I’d love to hear your thoughts in the comments.