
Post on LinkedIn

Is more “trust” in AI assistants for software development always better? Can we equate “trust” with artifact acceptance?

I usually don’t share preprints before acceptance, but this topic is too timely not to share. It’s also a truly interdisciplinary effort, and, as many of you know, I’m a huge proponent of interdisciplinary research. You can find the abstract and a link to the preprint below (the paper is currently under review):

Trust is a fundamental concept in human decision-making and collaboration that has long been studied in philosophy and psychology. However, software engineering (SE) articles often use the term ‘trust’ informally; providing an explicit definition or embedding results in established trust models is rare. In SE research on AI assistants, this practice culminates in equating trust with the likelihood of accepting generated content, which does not capture the full complexity of the trust concept. Without a common definition, true secondary research on trust is impossible. The objectives of our research were: (1) to present the psychological and philosophical foundations of human trust, (2) to systematically study how trust is conceptualized in SE and the related disciplines of human-computer interaction and information systems, and (3) to discuss the limitations of equating trust with content acceptance, outlining how SE research can adopt existing trust models to overcome the widespread informal use of the term ‘trust’. We conducted a literature review across disciplines and a critical review of recent SE articles focusing on conceptualizations of trust. We found that trust is rarely defined or conceptualized in SE articles. Related disciplines commonly embed their methodology and results in established trust models, clearly distinguishing, for example, between initial trust and trust formation, and discussing whether and when the concept of trust can be applied to AI assistants. Our study reveals a significant maturity gap in trust research in SE compared to related disciplines. We provide concrete recommendations on how SE researchers can adopt established trust models and instruments to study trust in AI assistants beyond the acceptance of generated software artifacts.

Preprint: Rethinking Trust in AI Assistants for Software Development: A Critical Review
