Skills · Nov 2025 - Feb 2026

Emerging Skills Radar: What Companies Are Actually Hiring For

Separating hiring reality from industry discourse

The gap between what the AI conversation emphasises and what hiring managers actually screen for is wider than most people assume. "LLM" appears in 16% of job descriptions, while "context engineering" barely registers.

At a glance:

- 16% of all job descriptions mention LLM
- 1 in 3 postings mention AI skills
- 8% of descriptions mention agentic
Everyone has a take on the skills that will define the next five years of tech hiring. Context engineering. Vibe coding. AI alignment. Agentic everything. The discourse is loud, confident, and largely disconnected from what companies are putting in job descriptions right now.

We analysed around 8,700 job postings from company career pages across product, data, and delivery roles in London, New York, Denver, San Francisco, and Singapore. When we search those raw descriptions for the terms dominating the AI conversation, the gap between discourse and hiring reality is striking.
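The term search described above can be sketched as a simple keyword scan over raw description text. This is an illustrative reconstruction, not the analysis pipeline itself: the `postings` list and the term list below are placeholder examples, and the real study covered roughly 8,700 descriptions and a larger taxonomy.

```python
import re

# Hypothetical sketch: count the share of job descriptions mentioning each term.
# `postings` stands in for the scraped descriptions; the terms are illustrative.
postings = [
    "Experience with LLM evaluation and RAG pipelines required.",
    "Own the roadmap for our data platform; dbt and Airflow a plus.",
    "Build agentic workflows on top of LLM APIs.",
]
terms = ["llm", "rag", "agentic", "context engineering", "vibe coding"]

def term_share(postings, terms):
    """Return the fraction of postings whose text matches each term."""
    counts = {t: 0 for t in terms}
    for text in postings:
        lower = text.lower()
        for t in terms:
            # Word-boundary match so "llm" doesn't match inside other words.
            if re.search(r"\b" + re.escape(t) + r"\b", lower):
                counts[t] += 1
    n = len(postings)
    return {t: counts[t] / n for t in terms}

shares = term_share(postings, terms)
```

A raw substring scan like this over-counts (e.g. "RAG" inside unrelated words), which is why the word-boundary regex matters even in a toy version.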

The terms that barely register

Some of the most discussed concepts in AI have almost no presence in job postings. Context engineering, a term Andrej Karpathy coined and Shopify's CEO made a company priority, barely appears. AI alignment, vibe coding, and AI ethics as named requirements are essentially absent.

These aren't skills companies are screening for. They may become important, and some already matter within organisations that don't formalise them in job specs. But if you're optimising your CV around them today, you're optimising for the social media conversation. The hiring pipeline hasn't caught up.

AI governance tells a more nuanced story. The AI-specific framing (AI governance, responsible AI, AI safety) barely registers in job postings. But data governance appears in nearly 3% of descriptions. Companies are hiring for governance under the data banner. If you're interested in AI governance as a career, the entry point is more likely a data governance role than a dedicated AI ethics position.

Discourse vs. hiring reality

[Chart: two-section bar chart contrasting actual hiring signals (top) with discourse-heavy terms that have modest hiring presence (bottom), with exact counts labelled on each bar. Terms with fewer than 30 mentions (context engineering, vibe coding, AI alignment, AI safety) are excluded.]

What companies are actually asking for

The terms that dominate job descriptions tell a clearer story about where the market is heading. LLM appears in 16% of descriptions. Agentic workflows in 8%. Generative AI in 7%. These aren't niche requirements any more; they're becoming baseline expectations for a growing share of technical roles.

Beneath the headline terms, a specific production stack is forming. RAG (retrieval-augmented generation) shows up in 4.5% of descriptions. Prompt engineering in 4%. Fine-tuning in 5%. Embeddings in 5%. Vector databases in around 2%. These skills tend to cluster together: a posting that mentions LLMs is likely to also mention RAG, and a posting that mentions RAG almost always mentions embeddings. The stack is cohering into a recognisable toolkit.
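The clustering claim is a conditional co-occurrence statement: among postings that mention one skill, how many also mention another. A minimal sketch, assuming postings have already been reduced to sets of skill tags (the example data is invented for illustration):

```python
# Hypothetical sketch of the co-occurrence claim, e.g. P(mentions RAG | mentions LLM).
# Each posting is a set of already-extracted skill tags; the data is illustrative.
postings = [
    {"llm", "rag", "embeddings"},
    {"llm", "rag", "embeddings", "vector database"},
    {"llm", "prompt engineering"},
    {"dbt", "airflow", "data quality"},
]

def cooccurrence(postings, a, b):
    """Share of postings containing skill `a` that also contain skill `b`."""
    with_a = [p for p in postings if a in p]
    if not with_a:
        return 0.0
    return sum(1 for p in with_a if b in p) / len(with_a)

rag_given_llm = cooccurrence(postings, "llm", "rag")
embeddings_given_rag = cooccurrence(postings, "rag", "embeddings")
```

Note the asymmetry: "a posting that mentions LLMs is likely to also mention RAG" and "a posting that mentions RAG almost always mentions embeddings" are two different conditional shares, which is what this function captures.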

Two distinct career archetypes are emerging within this stack. The first is research-oriented: PyTorch, fine-tuning, RLHF, distributed training. These roles are about building and training models. The second is application-oriented: RAG, prompt engineering, vector databases, LangChain. These roles are about deploying models into products. The application side is larger and growing faster.

The emerging LLM production stack

[Chart: bar chart of the emerging LLM production stack in three colour-coded groups: core LLM terms at top, the application stack in the middle, and the research stack at the bottom.]

The horizontal creep

The most consequential finding may be how broadly LLM-related skills are spreading across roles that weren't traditionally AI-focused. Among AI/ML product manager postings, 85% mention at least one LLM-related skill. For ML engineers it's 52%. But the penetration extends well beyond specialist roles: 22% of core product manager postings now mention AI or LLM skills, along with 17% of data scientist roles and 9% of data engineering postings.

This is a horizontal shift. Companies are embedding AI expectations into existing roles. A core PM who can evaluate LLM capabilities for their product, a data engineer who understands how to build pipelines that serve retrieval-augmented systems, a data scientist who can fine-tune a model as well as analyse its output. The job titles haven't changed, but the skill expectations within them are expanding.
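The penetration figures above reduce to a per-role ratio: for each role, the share of its postings that mention at least one LLM-related term. A minimal sketch under that definition, with invented role labels and skill sets:

```python
# Hypothetical sketch of "LLM skill penetration by role": for each role, the
# share of its postings mentioning at least one LLM-related term.
# The term set and postings below are illustrative, not the study's taxonomy.
LLM_TERMS = {"llm", "rag", "prompt engineering", "fine-tuning", "embeddings"}

postings = [
    ("product manager", {"roadmap", "llm"}),
    ("product manager", {"stakeholders", "analytics"}),
    ("data engineer", {"airflow", "dbt"}),
    ("data engineer", {"rag", "embeddings", "airflow"}),
]

def penetration_by_role(postings, llm_terms):
    """Map each role to the fraction of its postings with any LLM-related skill."""
    totals, hits = {}, {}
    for role, skills in postings:
        totals[role] = totals.get(role, 0) + 1
        if skills & llm_terms:  # set intersection: at least one LLM-related skill
            hits[role] = hits.get(role, 0) + 1
    return {role: hits.get(role, 0) / n for role, n in totals.items()}

penetration = penetration_by_role(postings, LLM_TERMS)
```

The "at least one" threshold is deliberately loose: it measures how far AI expectations have crept into a role, not how deep they go within any single posting.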

LLM skill penetration by role

[Chart: descending bar chart of LLM skill penetration by role. The story is the long tail: AI expectations spreading far beyond specialist roles.]

The data infrastructure layer nobody's talking about

While the AI discourse focuses on LLMs and agents, a parallel shift is happening in data infrastructure that gets far less attention. Data quality appears in 8.5% of descriptions, making it more common than generative AI as a named concern. dbt appears in 8.2%, just behind data quality in raw frequency. Airflow holds at 7.5%.

These aren't emerging in the sense that they're new, but they're emerging as explicit job requirements in a way they weren't before. Companies are increasingly naming their infrastructure expectations explicitly. Data products (5%), lakehouse architectures (2.5%), and data lineage (0.5%) are all showing up as specific terms in descriptions, reflecting a market that's maturing past "we need a data person" toward "we need someone who understands our specific stack and methodology."

Data mesh sits at 0.4% of descriptions. Present, but still a niche organisational concept that hasn't become a mainstream hiring criterion. Data contracts (0.5%) and data observability (0.7%) are in a similar band, early-stage signals from more sophisticated data organisations.

The watch list

A few signals are still too small for confident claims but worth tracking.

Agentic AI, as distinct from the broader "agentic" term, showed the steepest growth trajectory in our data: a small base but accelerating quickly. Industry analysts project the agentic AI market at $50 billion by 2030, and Gartner estimates 33% of enterprise software will incorporate agentic capabilities by 2028. The hiring data suggests this is beginning but hasn't yet translated into widespread job requirements.

Modern orchestration tools are gaining ground against Airflow's dominance. Dagster and Prefect are both appearing in descriptions, though Airflow still outnumbers them roughly 10-to-1. For data engineers evaluating what to learn next, Airflow remains the safer bet for employability while the newer tools represent where the field is heading.

LLM provider mentions are roughly evenly split between OpenAI and Anthropic in job descriptions, suggesting the market views these as interchangeable. Deep expertise in a specific provider's ecosystem doesn't appear to be a differentiator. Hugging Face appears at a similar frequency as an infrastructure layer.

What to make of this

The gap between what the AI conversation emphasises and what hiring managers actually screen for is wider than most people assume. The practical skills that land jobs today are concentrated in a small, cohering LLM production stack: RAG, prompt engineering, fine-tuning, embeddings, and vector databases. The theoretical and governance-oriented skills that dominate thought leadership have minimal presence in job descriptions.

For people navigating career decisions, the data points toward three practical conclusions. First, LLM literacy is becoming a baseline expectation well beyond AI-specialist roles, so getting conversant with these tools is worth prioritising regardless of your current title. Second, the infrastructure and data quality layer remains essential and under-hyped. The companies hiring for agentic AI systems still need someone to build reliable data pipelines underneath them. Third, the skills that sound impressive in conversation and the skills that show up in job descriptions are two different lists right now. Optimising for the wrong one is an easy trap.


Based on 8,672 job descriptions from company career pages, tracking product, data, and delivery roles across London, New York, Denver, San Francisco, and Singapore. Data covers November 2025 through February 2026. Full interactive dashboard at richjacobs.me/projects/hiring-market.
