Nitin Verma


About me

I am a Teaching Assistant Professor in the School of Information Sciences at the University of Illinois Urbana–Champaign. Prior to this, I was a Postdoctoral Research Scholar at Arizona State University (ASU) and the New York Academy of Sciences (NYAS).

My research program aims to understand how information technologies, particularly modern agentic AI systems, shape knowledge work. By knowledge work I mean the work of gathering and analyzing data and transforming it into useful, usable information, across a spectrum that ranges from on-the-street journalistic reporting to the publication of peer-reviewed research manuscripts. Ultimately, my lifelong research quest centers on understanding belief formation and the psychological, social, and technological factors that underpin it.


Research Interests

The following broad topics capture my research interests in the service of this program: Human–AI Interaction; Trust and Trustworthiness; AI Ethics and Policy; Human Values; and Science Communication. Together, these topics get to the heart of the knowledge work I described above.

You can get a better sense of my research interests by looking at my published research, visiting my Google Scholar page, or visiting my UIUC faculty page. But here is a summary of the projects I am currently working on:

Value (Mis)Alignment Between LLMs and Scientific Peer Review

Large language models (LLMs) and other generative technologies will increasingly play a role in scholars’ day-to-day tasks of analyzing data and writing manuscripts. This project, which I started during my postdoc at ASU–NYAS, is the most recent addition to my research program. In it, I examine the role of general-purpose, chat-oriented LLMs such as ChatGPT in peer review. This research is motivated by years of evidence suggesting that the manual, volunteer-driven peer review system cannot cope with the exponential growth in submission volume. Unfortunately, the impressive linguistic and reasoning capabilities displayed by ChatGPT and other models make it very tempting for early-career scholars (and, perhaps, even experienced ones) to use them to review manuscripts.

In the first paper from this project, All Accept, No Reject: Evaluating LLMs as “Peer Reviewers” (DOI Link), published recently in the Proceedings of the 2026 ACM Conference on Human Factors in Computing Systems (CHI ’26, Barcelona, Spain), we found that the six most recent OpenAI LLMs (as of November 2025) accepted nearly all of the papers we asked them to review. Further, we found that the human values encoded in these models (with an emphasis on self-transcendence and openness to change, both signifying acceptance of others’ views and an eschewing of tradition) are out of alignment with the values that underpin science (accountability, trustworthiness, autonomy, fairness, and national and global needs). You can watch my recorded video presentation of the paper at CHI ’26 on YouTube here.

To ground this study in existing literature, I draw on key works in the science of science and the history of science (e.g., Kuhn; Ezrahi; and Chubin & Hackett) that highlight long-standing issues and imperfections in the peer review system. In turn, these works help anticipate the impact of advanced information technology on science, the scientific method, and innovation.

Trust in Information, Information Sources, and the Media

The notion of trust is the thread that runs through my published and ongoing research. I have invested my intellectual efforts in understanding trust, from its etymological roots to the range of sociological concepts the word encodes in everyday language. I have channeled this motivation toward estimating whether generative AI will upend public trust in photographic images and in recorded (in the archival sense) information more broadly. In studying trust, I draw on literatures from multiple disciplines, including information science, communication, sociology, and psychology.