Nitin Verma


About me

I am a Postdoctoral Research Scholar in the AI & Society program jointly organized by the School for the Future of Innovation in Society (SFIS) at Arizona State University (ASU) and the New York Academy of Sciences (NYAS).

My doctoral dissertation (at the School of Information at The University of Texas at Austin) conceptualized public trust in video and assessed the impact of deepfake technology on public trust in video and in visual journalism.


Research Interests

My broader research program aims to understand how the perceived veracity of everyday facts (about science, politics, or conflict, to name just a few categories) is affected by and contested through information technologies. Ultimately, this program centers on belief formation and the biological, psychological, social, and technological factors that underpin that process.

You can get a better sense of my research interests by looking at my published research, visiting my Google Scholar page, or visiting my ASU profile.

Trust in Information, Information Sources, and the Media

The notion of trust forms the thread that runs through my published and ongoing research. Since choosing to study deepfake technology for my doctoral dissertation, I have invested my intellectual efforts in understanding trust, from its etymological roots to the range of sociological concepts the word encodes in everyday language. This motivation drives my effort to better estimate whether generative AI will upend public trust in photographic images and, more broadly, in recorded (in the archival sense) information. In studying trust, I draw substantially on Niklas Luhmann's work.

Concerns Regarding Generative AI

Diminishing Evidentiary Value of Photographs and Film/Video

Since the arrival of deepfake technology on the scene, researchers and members of the general public alike have raised concerns about its detrimental impact on the evidentiary value of photographically generated images, that is, both photographs and video. As the infosphere becomes increasingly inundated with artificially generated images, it will become harder to trust authentically created ones, especially those produced by photographic processes that capture an actual visual scene in front of a camera. The ability of the camera to bear witness to human events lies under threat, one that has been called an epistemic threat.

Disinformation

In addition to diminishing the evidentiary value of photos and videos, deepfake technology puts extremely capable tools in the hands of those who wish to create and spread disinformation. Disinformation accompanied by photorealistic imagery can pose a severe threat to public trust in the news media.

Reliability of the Historical Record

I am deeply concerned by those applications of generative AI that result in hard-to-trace manipulation of documents. In particular, I am concerned about both the idea and the discipline of history, in which scholarship depends critically on access to trustworthy archival records and secondary sources (reports and artifacts produced by researchers who accessed and analyzed primary sources) whose provenance and authorship can be established accurately. I believe that the 2010s, when generative AI technology truly came of age, mark a period that divides humanity's timeline into two epochs: Before AI (BAI) and After AI (AAI). Though meant as somewhat tongue-in-cheek descriptors, these labels bear significantly on how contemporary documents will be appraised in the future, and especially on how differently the trustworthiness of the photographic record from the BAI and AAI epochs will be perceived. In any case, the authenticity of records ostensibly from the BAI epoch could be called into question because of the threat of historical negationism.

Generative AI and Scientific Knowledge Production

Generative AI tools, especially LLMs and other types of transformer models, will increasingly play a role in helping scholars carry out their day-to-day tasks of analyzing data and writing manuscripts. I am currently working on a project exploring the impact of generative AI on the peer review process in scientific publishing. This project is an interdisciplinary effort led by Dr. Asheley Landrum in the Cronkite School of Journalism at ASU.