
YU News

Generative AI Makes Good Research Better, But Demands Human Discipline

Sy Syms Assistant Professor Travis Oh is a co-author of the report "New Tools, New Roles: A Manager's Guide to Harnessing Generative AI for Marketing Insight."

By Dave DeFusco

On Monday morning, a marketing team sketches out a new product idea. By Friday, they have concept tests, customer reactions and a polished insights deck in hand, all generated with the help of artificial intelligence. What once took months now unfolds in days.

That compressed timeline is no longer hypothetical. It is the reality described in "New Tools, New Roles: A Manager's Guide to Harnessing Generative AI for Marketing Insight," co-authored by Travis Oh, assistant professor of marketing at the Sy Syms School of Business, and his colleagues at Columbia Business School, Georgetown University, USC and the University of Tennessee.

The report explores how generative AI is transforming every stage of marketing research, while warning that speed alone does not guarantee better decisions.

"GenAI dramatically compresses the speed of insight generation," said Oh, "but it doesn't remove the need for rigor. It actually increases it."

At the core of these tools are large language models, or LLMs, trained on vast amounts of text to predict what comes next in a sequence. Their fluency makes them powerful for drafting surveys, summarizing reports and even writing code. That same fluency, however, can be misleading because LLMs are probabilistic. They generate what sounds right, not necessarily what is correct.

"It should fundamentally shift the mindset from acceptance to interrogation," said Oh. "These systems are predicting what sounds right, not verifying what is right."

That distinction is critical as organizations rush to integrate AI into their workflows. In desk research, for example, a single prompt can produce a synthesis of dozens of studies in seconds. The efficiency is undeniable, but so is the risk. AI can conflate findings or misattribute sources, creating a polished narrative built on shaky ground.

Oh advises managers to treat these outputs as starting points, not conclusions. "If it references multiple sources, open a few of them yourself before using the insight," he said. "GenAI is excellent at surfacing themes, but you should never rely on it blindly for source accuracy."

The same balance of speed and scrutiny applies to internal data. Many organizations sit on years of underused surveys, reports and qualitative research. Generative AI can unlock that value, synthesizing fragmented knowledge into actionable insights. But doing so safely requires careful infrastructure.

"There's enormous value in internal data, but you have to treat access to it as a systems problem," said Oh. "That means using enterprise-grade environments or local deployments and ensuring data never leaves controlled infrastructure."

Perhaps the most immediate impact of GenAI is in designing surveys and research instruments. Tasks that once required weeks of iteration can now be completed in seconds. Yet here, too, precision matters. Vague prompts can lead to subtle but important errors: what researchers call "construct drift."

Oh points to an example in which a retailer asked AI to generate survey items measuring the "visual attractiveness" of produce. The system instead produced statements about freshness and quality, which were related but fundamentally different concepts.

"The best prompts explicitly define what the concept includes and what it excludes," he said. "If you don't do that, the model will default to adjacent ideas that sound right but are theoretically different."
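The produce example above can be sketched as a prompt template. This is a minimal illustration, not taken from the report; the construct name, the include/exclude lists and the item count are all hypothetical, but the structure shows what "defining what the concept includes and excludes" looks like in practice.

```python
# Hypothetical construct definition for the produce example.
CONSTRUCT = "visual attractiveness"
INCLUDES = ["color vibrancy", "shape uniformity", "surface appearance"]
EXCLUDES = ["freshness", "taste", "perceived quality"]

def build_prompt(construct, includes, excludes, n_items=5):
    """Assemble a survey-item prompt that states what the construct
    covers and, just as important, what it deliberately leaves out."""
    return (
        f"Generate {n_items} Likert-scale survey items measuring {construct}.\n"
        f"The construct INCLUDES: {', '.join(includes)}.\n"
        f"The construct EXCLUDES (do not reference): {', '.join(excludes)}."
    )

prompt = build_prompt(CONSTRUCT, INCLUDES, EXCLUDES)
print(prompt)
```

Without the explicit EXCLUDES line, a model asked only about "visual attractiveness" can drift toward adjacent concepts like freshness, exactly the failure Oh describes.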

Beyond design, generative AI is reshaping how data is collected. Conversational AI tools now enable dynamic, adaptive interviews that respond to each participant in real time. The result is a new kind of research: qualitative depth at quantitative scale.

"What excites me most is that we're no longer forced to trade off depth for scale," said Oh. "You can now get rich, probing responses across large samples."

That flexibility, however, introduces new challenges. Slight variations in how AI poses questions can introduce inconsistencies, making it harder to compare responses across participants.

"The concern is subtle inconsistency," he said. "Small variations can introduce noise, and that noise can look like meaningful variation if you're not careful."
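One common safeguard, sketched here under assumed question wording (the questions are illustrative, not from the report), is to fix the core interview script so every participant sees identical wording, letting the AI adapt only in its follow-up probes.

```python
# Hypothetical fixed question bank for an AI-moderated interview.
CORE_QUESTIONS = (
    "How often do you buy fresh produce?",
    "What do you look for when choosing produce?",
)

def interview_script(participant_id):
    """Return the core questions for one participant. Every participant
    gets identical wording; only follow-up probes (handled separately by
    the AI) are allowed to adapt."""
    return list(CORE_QUESTIONS)  # fresh copy, so the canon can't be mutated

# Identical wording across participants: cross-participant comparisons
# then reflect real differences, not differences in how questions were asked.
assert interview_script(1) == interview_script(2)
```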

On the analysis side, GenAI can code vast amounts of unstructured data, including text, images and even video, in minutes. It can also generate and execute statistical code, lowering technical barriers for many teams. Still, Oh cautions against overreliance on outputs produced entirely within chat interfaces.

"If you can't rerun the code in a proper analytics environment and get the same result, you don't really have a reliable analysis," he said. "You have a convenient output."
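That check can be as simple as rerunning the analysis outside the chat window and comparing the numbers. A minimal sketch, with made-up survey scores and a hypothetical chat-reported value:

```python
# Sketch: verify a chat-reported figure by rerunning the analysis
# deterministically in a real environment. All values are illustrative.
import statistics

def analyze(scores):
    """The analysis step itself: plain, deterministic code that any
    teammate can rerun and get the same answer."""
    return round(statistics.mean(scores), 4)

scores = [4, 5, 3, 4, 5, 2, 4]   # raw survey responses (hypothetical)
reported = 3.86                   # figure pasted from a chat interface
recomputed = analyze(scores)      # the rerun, outside the chat

# Accept the result only if the rerun reproduces it within tolerance.
verified = abs(recomputed - reported) < 0.01
print(recomputed, verified)
```

If `verified` comes back false, the chat output was a convenient number, not a reliable analysis, which is exactly the distinction Oh draws.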

Across all these applications, a common theme emerges: generative AI is not a replacement for human judgment but a force multiplier for it. Used well, it expands what teams can explore and how quickly they can move. Used poorly, it accelerates errors just as efficiently.

"What I see most often is managers treating GenAI outputs as if they're already insight," said Oh. "The outputs are fluent and convincing, which creates a false sense of certainty."

The solution, he argues, is discipline. Treat outputs as hypotheses. Verify sources. Validate analyses. In short, pair new tools with new rules.

"In practice, it means you use GenAI to expand your thinking, not to finalize it," said Oh. "You move quickly when exploring, but you slow down deliberately when committing to decisions."

As generative AI continues to reshape marketing research, the organizations that succeed will not simply be the fastest. They will be the ones that balance speed with rigor by turning rapid insights into reliable ones.

"Others will move quickly as well," said Oh. "But they'll just be wrong faster."
