Taste Still Matters: Why Judgment Is the Skill AI Cannot Replace
When AI can generate anything, the bottleneck shifts. It's no longer about who can produce - it's about who knows what's worth producing. Taste has never been more valuable.
April 13, 2026
An essay about taste and AI made the front page of Hacker News last week with 239 points. The argument was simple enough to be provocative: in an era when AI can generate code, images, writing, and music on demand, the thing that becomes scarce is not the ability to produce - it's the ability to judge what's worth producing.
Taste, in other words, is having a moment.
What taste actually means
Taste is not preference. Preference is liking one thing over another. Taste is knowing why one thing is better than another, and being right about it consistently enough that people trust your judgment.
A photographer with taste doesn't just know which of their photos looks best - they know why it works: the light, the timing, the framing, what the composition is saying that the alternatives aren't. That understanding is what lets them make better photos next time, not just pick winners from a batch.
A developer with taste doesn't just know when code is elegant - they know what makes it legible, maintainable, and correct in ways that matter over time. That judgment is what makes their code reviews valuable, not just their ability to write code.
AI generates prolifically. It does not have taste in this sense. It produces outputs that match statistical patterns in its training data, and those outputs are often good - sometimes excellent. But the model has no understanding of why an output is good, only that it resembles things that were good. It cannot tell you what the image is trying to say, only what images that got positive responses looked like.
Where this shows up in practice
In AI image generation - the clearest current example - the tools available today are remarkable. Midjourney, Ideogram, Leonardo AI, and their competitors can generate images that would have required days of skilled production work five years ago. The bottleneck is no longer the generation - it's the prompt.
Two people with the same tool and the same subject get very different results depending on how they prompt. The person with taste writes prompts that encode specific visual decisions: lighting direction, aspect ratio, the mood they want the color grading to suggest, what should be in focus and what should fall away. The person without taste describes the subject and hopes for the best.
The output reflects the prompting. The tool doesn't compensate for vague direction by making aesthetic choices - it makes choices, but they're arbitrary. The taste lives in the prompter, not the model.
In coding, the same pattern holds. AI coding tools like Cursor or Claude Code can write code from a description. What the description says determines everything. A developer with taste specifies the constraints that actually matter: this needs to be readable by a junior dev, this is hot-path code so performance justifies some extra complexity here, this is a prototype so get it working before making it clean. Those constraints are taste expressed as requirements.
A developer without taste says "write me a function that does X" and takes the first output. A developer with taste knows what "good" looks like for this specific context and pushes until they get it.
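As a concrete (and hypothetical) illustration of the difference: both functions below deduplicate a list while preserving order, and both are correct. The first is the kind of compact, clever one-liner an AI tool might hand back on the first try; the second is what you might push toward if the stated constraint is "readable by a junior dev." Neither is universally better - the judgment about which fits the context is the taste.

```python
def dedupe_clever(items):
    """Remove duplicates, preserving order - terse version.

    Relies on a side effect inside the comprehension: set.add()
    returns None, so `seen.add(x)` is falsy and the `or` both
    records the item and lets it through. Correct, but the trick
    takes a moment to decode.
    """
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]


def dedupe_readable(items):
    """Remove duplicates, preserving order - explicit version.

    Same behavior, spelled out step by step: track what we've
    seen, keep each item the first time it appears.
    """
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

Accepting the first version unexamined is taking the first output; knowing when the second is worth asking for is knowing what "good" looks like for this specific context.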
Taste is learnable but not shortcuttable
The uncomfortable part of this argument is that taste is not something AI can give you. You can use AI to generate more things faster, but you cannot use AI to develop the judgment to know which of those things are good.
Taste develops through exposure and feedback over time. You see many examples of something, you develop intuitions about what works and why, you make things, you get responses to them, your intuitions get refined. That process is slow. There are no shortcuts.
What AI does change is the stakes. In the past, developing taste in photography meant taking thousands of photos over years - the production cost created a natural forcing function for developing judgment (you couldn't afford to shoot carelessly). Now you can generate thousands of images in an afternoon. The forcing function is gone. The only way to develop judgment is to deliberately impose it: to decide, for each output, why it is or isn't working, and to keep learning.
Developers who use AI coding tools extensively without doing this will find that their code review instincts and architectural judgment atrophy in the same way. The tool produces more code faster, but the judgment about what the code should do, how it should be organized, and what makes it maintainable still has to come from somewhere.
The practical implication
If taste is the scarce resource in an AI-saturated environment, it's worth treating it as one.
That means being deliberate about what you consume and why. Reading criticism - of design, of writing, of code - to build a vocabulary for why things work. Occasionally making things yourself, without AI, to keep the judgment sharp. Reviewing AI output actively rather than accepting it passively.
It also means being honest about which tasks require taste and which don't. Generating boilerplate, summarizing documents, producing first drafts of structured content - these are good uses of AI where taste mostly isn't the bottleneck. The final edit of anything that will be seen or used by others - that still needs a person with developed judgment in the loop.
AI democratizes production. It does not democratize taste. The people who figured that out early are going to be in an unusually strong position as the rest of the market catches up.