Generative AI and the Future of Creativity: Trade-Offs, Risks, and Opportunities

Generative AI is reshaping what it means to create. New research co-authored by DK Lee, Kelli Questrom Associate Professor in Information Systems at the Boston University Questrom School of Business, shows that while these tools boost productivity and widen access to creative expression, they also introduce a trade-off: more output, but slightly less novelty per piece. The result isn’t dilution so much as expansion, akin to the move from film to digital photography, where higher volume leads to a bigger creative frontier. In this Q&A, DK explores how AI is changing the balance between “masterminds” and “hiveminds,” what skills matter most in an AI-assisted world, and the risks and opportunities for artists, industries, and policymakers navigating this new creative era.

Your research shows that generative AI tools boost productivity but also lead to declines in average novelty. How should we interpret this trade-off—are we gaining more creativity overall, or diluting it?

Think of it like switching from film to digital photography: with digital, you may take hundreds or thousands of photos where you once took only dozens, and each shot may be less deliberate. Yet because you try more ideas, the overall creative yield improves. We see a similar pattern in art with generative AI. Our data show a clear trade-off but an overall increase in creativity. After adoption, creators post roughly 50% more work, yet the per-artifact H-creativity rate falls by about 2.1 percentage points (the “dilution” effect); H-creativity here means historical creativity, work that is novel to the entire community rather than just to its creator. Overall, the absolute count of frontier-expanding ideas rises by ~4.3%, reaching ~6.6% by the end of 2023, because higher volume overwhelms the lower hit rate. In short: more exploration, lower selectivity per piece, and a bigger frontier overall.
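To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The baseline output and baseline hit rate are illustrative assumptions, not figures taken directly from the study; the ~7% baseline rate is simply chosen so that the reported +50% volume and −2.1 percentage-point effects reproduce the ~4.3% rise.

```python
# Back-of-the-envelope: does +50% volume outweigh a -2.1 pp hit rate?
# All baseline numbers below are illustrative assumptions, not study data.
baseline_output = 100        # pieces posted before adoption (assumed)
baseline_hit_rate = 0.069    # assumed share of pieces that are H-creative (~7%)

post_output = baseline_output * 1.5           # ~50% more work after adoption
post_hit_rate = baseline_hit_rate - 0.021     # hit rate falls ~2.1 percentage points

baseline_hits = baseline_output * baseline_hit_rate   # expected novel pieces before
post_hits = post_output * post_hit_rate               # expected novel pieces after

print(f"Frontier-expanding pieces before: {baseline_hits:.1f}")
print(f"Frontier-expanding pieces after:  {post_hits:.1f}")
print(f"Net change: {post_hits / baseline_hits - 1:+.1%}")   # ~ +4.3%
```

Under these assumed numbers, the lower hit rate cuts each piece’s chance of being novel, but the extra volume more than compensates, which is the mechanism behind the “bigger frontier” result.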


In your “Masterminds vs. Hiveminds” framing, early adopters drove breakthroughs while later adoption spread novelty more broadly. What does this say about how creative ecosystems evolve with new technologies?

Creative diffusion follows a predictable pattern: masterminds break the trail, then the hive follows and widens the path, a process accelerated by generative AI. Initially, the top 10% of AI adopters accounted for ~35% of novel contributions, concentrating power even more than before AI. By late 2023, their share had dropped to ~23% as thousands of “hive mind” creators began making smaller but meaningful contributions. This mirrors photography: early masters like Ansel Adams pioneered techniques that eventually became accessible as Instagram filters. Importantly, open-source models like Stable Diffusion accelerated this democratization, suggesting that tool accessibility and the ability for users to inject novelty into the system are critical in shaping creative evolution with generative AI.


You’ve argued that ideation and filtering remain the uniquely human parts of the creative process. What skills or mindsets will matter most for artists and creators who want to thrive with AI tools?

Creativity has two parts: novelty and value. Novelty is algorithmically easier to generate; value remains distinctly human because we decide what matters. In art, commercials, and songs, human taste is what we want to maximize. Taste evolves over time, and AI does not yet have enough data to model what will resonate with the population at any given moment. In creative advertising, for example, effective work blends the new and the familiar. That balance is hard to learn, even with massive datasets, because much of how humans feel about multimedia is not yet digitized. Whether those signals can be digitized and trained into generative AI is an empirical question that time will answer.


Your work suggests AI may redistribute creative “value capture” more evenly across artists. Do you see generative AI as leveling the playing field, or will it still reinforce advantages for certain creators?

AI is paradoxically both an equalizer and an amplifier. It democratizes technical execution (bringing those who could not closer to those who can), yet it disproportionately boosts a select few who pair domain knowledge with the skill to leverage AI. The tool raises the bottom and the middle overall, but it does not bring everyone to the very top. In the end, those who combine deep domain knowledge with the ability to exploit AI will pull ahead of everyone else, and that combination will dominate every field.


Your findings highlight declining visual novelty over time, with many artists converging on similar styles. How real is the risk of creative homogenization, and what can artists—or platforms—do to counter it?

The homogenization risk is real and accelerating—visual novelty declined ~10% more steeply than content novelty, suggesting AI pulls creators toward a visual mean even when concepts differ. This happens because models are trained on existing art, creating a gravitational pull toward “what worked before.” Artists converge not by choice but by tool limitation. To counter this, artists should actively fight the current: use custom models, combine multiple tools unpredictably, and deliberately prompt for “ugly” or “wrong” outputs that break aesthetic conventions. Platforms can help by rewarding novelty (not just engagement), creating spaces for experimental work, and supporting models trained on deliberately diverse datasets. Without intervention, we risk an algorithmic monoculture—professionally polished yet indistinguishable.
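For readers wondering how “visual novelty” can be quantified at all, one common approach is to represent each image as an embedding vector and score a new piece by its distance to everything that came before. The sketch below illustrates that general idea only; the study’s actual measurement pipeline may differ, and the function name, the choice of cosine distance, and the toy random embeddings are all assumptions for illustration.

```python
import numpy as np

def novelty_score(new_embedding: np.ndarray, prior_embeddings: np.ndarray) -> float:
    """Novelty as distance to the nearest prior work (1 - max cosine similarity).

    new_embedding: (d,) vector for the new piece (e.g., from an image encoder).
    prior_embeddings: (n, d) matrix of embeddings for all earlier works.
    """
    # Normalize so dot products are cosine similarities.
    new_unit = new_embedding / np.linalg.norm(new_embedding)
    prior_unit = prior_embeddings / np.linalg.norm(prior_embeddings, axis=1, keepdims=True)
    similarities = prior_unit @ new_unit      # (n,) cosine similarities
    return float(1.0 - similarities.max())   # 0 = identical to some prior work

# Toy usage with random vectors standing in for real image embeddings.
rng = np.random.default_rng(0)
prior = rng.normal(size=(1000, 512))
piece = rng.normal(size=512)
print(f"Novelty score: {novelty_score(piece, prior):.3f}")
```

Under a metric like this, convergence toward a visual mean shows up directly as novelty scores drifting downward over time, which is how homogenization becomes measurable rather than merely anecdotal.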


What lessons apply to creative industries—like music, writing, or even product design—where generative AI is spreading rapidly?

I expect similar patterns across modalities, with medium-specific differences. Mapping that heterogeneity—across music, writing, and product design—is on my research agenda.


Artists and institutions are wrestling with copyright, attribution, and fairness in AI-generated art. Based on your research, what policy or governance measures seem most urgent?

First, we still lack a clear picture of how people think about these tools; communities exist both for and against them, often with strong views. A better understanding would inform policy. Second, attributing specific ideas or novelty within generative systems is technically difficult. Third, because jurisdictions and policies differ across countries, governance alone will not solve the problem; platform-level levers are needed where regulation lags. In short, the issue of generative AI and copyright is at an early stage and will take time to resolve due to (i) incomplete mapping of practices, (ii) fragmented jurisdictions, and (iii) limited technology to assign attribution or audit large, black-box models. I also have several working papers on these issues that I hope to share in the future.


If we think five to ten years out, how do you imagine generative AI reshaping the balance between human originality and machine assistance? Will we see more “generative synesthesia” or more automation?

As we document in two studies, at least in art, generative AI primarily acts as an accelerator through automation that enables humans to iterate and exhaust ideas more efficiently; we do not see evidence of a human–AI complementarity that inherently elevates H-creativity rates. At the same time, we observe “generative synesthesia,” where artists interact with the world through these tools and bounce ideas off them to produce novel and valuable artifacts. As models improve and more human-taste data becomes available, these distinctions may blur. But because human taste keeps evolving, digitizing and modeling it will always lag. Put simply, humans will continue to breach the creative horizon—aided by these AI tools, not replaced by them.
