By Dave DeFusco
When Dr. Travis Oh walked into the Association for Consumer Research (ACR) Conference in Washington, D.C., this fall, he knew the topic on everyone’s mind would be generative artificial intelligence (GenAI). What he didn’t expect was how much curiosity, and confusion, surrounded it.
“Everybody’s excited for GenAI,” said Professor Oh, an assistant professor of marketing at the Sy Syms School of Business at Yeshiva University. “But to be honest, many behavioral researchers still don’t know exactly what’s happening under the hood or what the best practices are. There are no established rules yet. That’s what we’re trying to help create.”
Professor Oh co-led one of only two workshops selected for presentation at the ACR conference, a major recognition from one of the field’s top gatherings. His session, based on his Journal of Marketing paper, offered a hands-on introduction to how marketing academics can responsibly integrate AI tools into their work.
The paper, co-authored with colleagues from Columbia Business School and other institutions, offers a roadmap for researchers eager to use AI to design surveys, run experiments and analyze open-ended data. It provides both a caution and a guide: while AI can dramatically speed up research, it also raises new risks for transparency, reproducibility and ethics.
Generative AI systems, like ChatGPT, Claude and Gemini, have quickly become fixtures in everyday life. They can write, summarize, analyze and even simulate conversations. For researchers, this opens new doors: designing experiments, coding data or generating realistic chatbot interactions for study participants. But, warns Oh, ease of use can be deceptive.
“The issue is that what you see in the chat box isn’t the whole picture,” said Professor Oh. “Underneath, there can be many different prompts or models running, and you don’t always know exactly what’s happening. For behavioral scientists, what’s most important is transparency and reproducibility.”
In other words, researchers who rely on AI without understanding how it works may unknowingly introduce bias or lose control of their methods. That’s why Professor Oh and his co-authors strongly recommend using API-based access, which allows researchers to control the model’s parameters and document every step, instead of general web interfaces.
“It’s easy to use,” he said of the public chat tools, “but probably not the wisest to use for research.”
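To make the distinction concrete, here is a minimal sketch of what API-based access with pinned, documented parameters might look like, using the OpenAI Python client. The model name and settings are illustrative assumptions, not the authors’ choices; the paper’s companion materials use R and SPSS rather than Python.

```python
# Minimal sketch: API access with explicitly pinned, documented parameters.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable. Model and values are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

params = {
    "model": "gpt-4o-mini",  # pin an exact model rather than "whatever is latest"
    "temperature": 0,        # minimize randomness in the output
    "seed": 42,              # best-effort reproducibility across runs
}

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize this survey response: ..."}],
    **params,
)
print(response.choices[0].message.content)
```

Unlike a chat window, every setting here is explicit in the script itself, so the exact configuration can be reported in a methods section and rerun by others.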
According to Professor Oh, the most common mistake researchers make is uploading confidential data into public AI tools, which can violate both privacy laws and institutional review board (IRB) guidelines for research ethics. But there’s also a subtler danger: the temptation to use AI’s flexibility to overfit results.
“Because AI is so fast and cheap, you can run your data through it a hundred different ways until you get the result that supports your hypothesis,” he said. “Researchers might tell themselves, ‘Oh, maybe my prompt wasn’t good; let me just tweak it again.’ But at that point, you’re fooling yourself.”
His team’s paper provides practical “rules of engagement” for avoiding these pitfalls. Chief among them is to document everything.
“You should always record what you’re doing: what model, what parameters, everything,” said Professor Oh. “Models change over time. For example, GPT-5 may not even have some of the settings we use today. But if you’re transparent about your process, others can reproduce or verify your results later.”
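One lightweight way to follow that advice, sketched here in Python with hypothetical file and field names, is to append every call’s full configuration and output to a run log:

```python
# Hypothetical run log: everything needed to rerun or audit a model call
# goes into one append-only JSONL file. Field names are illustrative.
import json
from datetime import datetime, timezone

run_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model": "gpt-4o-mini",
    "parameters": {"temperature": 0, "seed": 42},
    "prompt": "Summarize this survey response: ...",
    "response": "<model output recorded here>",
}

with open("genai_run_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(run_record) + "\n")
```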
To make their recommendations as accessible as possible, Professor Oh and his co-authors created a companion website that hosts free templates, reproducible code and example workflows in R and SPSS, both software tools used for statistical analysis and data management. The playfully named site (“questionable research” being an inside joke about transparency) invites feedback from researchers experimenting with GenAI in their own work. A sister site provides additional examples and tools.
At his ACR workshop, Professor Oh demonstrated how to integrate interactive chatbots into marketing and behavioral studies. Instead of using pre-written scripts, researchers can now design experiments where participants engage with AI characters in real time.
“This is a new tool that lets us answer new questions,” he said. “For example, now that many companies use AI for customer service, we can test whether it’s better for a chatbot to sound formal or conversational and how that affects customer satisfaction.”
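As a sketch of that kind of design, one model can be given one of two system prompts, with each participant randomly assigned a tone condition. The prompts and model name below are invented for illustration, not the study’s actual materials.

```python
# Sketch of a tone manipulation: same model, two hypothetical system prompts,
# participants randomly assigned to a condition. All text is illustrative.
import random
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPTS = {
    "formal": "You are a customer service agent. Use formal, professional language.",
    "conversational": "You are a customer service agent. Use a casual, friendly tone.",
}

def chatbot_reply(condition: str, user_message: str) -> str:
    """Return one chatbot turn under the assigned tone condition."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.7,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

condition = random.choice(["formal", "conversational"])  # random assignment
print(condition, "->", chatbot_reply(condition, "My order never arrived."))
```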
The workshop also explored coding unstructured data, such as open-ended survey responses, using GPT APIs. The biggest “aha” moment for attendees, he said, was realizing how detailed their instructions to AI needed to be.
“To get good results, you have to treat AI the same way you would train a graduate student,” said Professor Oh. “You wouldn’t just say, ‘Go code this.’ You’d define exactly what you mean, what to look for and how to interpret it. The same applies to AI. It’s not as smart as people think unless you guide it carefully.”
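In practice, that means the prompt itself carries the codebook. The sketch below, with invented categories and an assumed model name, shows the level of detail he describes: the instructions define each label rather than simply asking the model to “code this.”

```python
# Sketch of coding open-ended responses with an explicit codebook in the
# prompt. Categories, wording and model name are hypothetical examples.
from openai import OpenAI

client = OpenAI()

CODEBOOK = """You are coding open-ended survey responses about a product.
Assign exactly one label:
- POSITIVE: expresses satisfaction or praise
- NEGATIVE: expresses dissatisfaction or complaint
- MIXED: contains both praise and complaint
- OFF_TOPIC: does not discuss the product
Reply with the label only."""

def code_response(text: str) -> str:
    """Classify one open-ended response according to the codebook."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # coding should be deterministic, not creative
        messages=[
            {"role": "system", "content": CODEBOOK},
            {"role": "user", "content": text},
        ],
    )
    return result.choices[0].message.content.strip()

print(code_response("Loved the camera, but the battery dies by noon."))  # MIXED
```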
For Professor Oh, the excitement around AI isn’t just about innovation; it’s about responsibility. His work urges researchers to slow down, think critically and document clearly in an era that rewards speed and novelty.
“GenAI is changing the landscape of research,” he said. “But that doesn’t mean we abandon rigor. In fact, it’s more important than ever. The future of AI in research won’t just depend on what these systems can do, but on how carefully and thoughtfully we choose to use them.”