Theorising the Impacts of Generative Artificial Intelligence

  • December 25, 2024
    Call for papers published


  • June 30, 2025
    Deadline for extended abstract submissions for early feedback


  • December 31, 2025
    Full paper submission deadline

Editors

  • Robert Davison, City University of Hong Kong
  • Antonio Díaz Andrade, University of Agder
  • Manuel Trenz, University of Göttingen

Description

Since 2022, we have witnessed an inexorable flood of information about Generative Artificial Intelligence (GAI) and its various components and manifestations. As hypes go, this one seems overdue for a cyclical correction; however, the enthusiasm for GAI across a wide range of contexts continues to rise. The euphoria permeates not just mainstream society and popular culture, but also business, politics, warfare and, of course, academia. The tone of inevitability about GAI and its benefits is fervent, and the passion of many (notably academics, tech pundits and businesspeople) is abundant. This is not to say that there are no critical voices: there are, but their concerns risk being submerged by the tide of optimism, and it may seem heretical to critique, let alone excoriate, the GAI phenomenon.

For example, at some institutions, even measured critiques of GAI are viewed as misaligned with their forward-looking visions, creating an environment where scholars may feel pressured to conform to a predominantly positive narrative. Almost as soon as GAI tools were released and became a topic of discussion, we saw a spate of opinions, accompanied by the wisdom of pundits and analyses of what the impact on a variety of societal and organizational functions might be. These early discussions lacked grounding in robust empirical or theoretical insight, reflecting instead speculative or overly generalized narratives. In many ways, these initial reactions paralleled the reactions to Artificial Intelligence almost forty years earlier (Sheil, 1987).

Some academic publishers quickly outlawed the use of GAI by submitting authors, and other notes of caution were expressed regarding its potential negative impacts. However, as the technology evolved, so did these restrictions, most of which were just as quickly retracted and replaced with more nuanced advisories. Academic institutions and journal publishers started to promulgate guidelines, essentially legitimizing the application of GAI, yet also unleashing peer and institutional pressure to take advantage of this new opportunity.

In parallel to these academic perspectives, business organizations have expressed both enthusiasm for the technology and concerns about such issues as data protection, truth, trust, and errors, as well as how to train people to use GAI correctly. Today we see that AI-themed programmes and research centres are being created in profusion. This proliferation is worrisome not because of the exploration of GAI itself, but because it often lacks critical engagement with the potential harms and adverse consequences.

At the ISJ, we have already seen both submissions and reviews where GAI appears to have played a role not far removed from that of a co-author or some form of assistant. In response, we have published cautionary editorials (Davison et al., 2023, 2024). When it comes to AI-themed research submissions, our sense is that we are at the stage of the trickle before the deluge. Given the usual timelines of empirical (qualitative or quantitative) research, it is not surprising that few AI-themed articles (e.g., Susarla et al., 2023; Grover, 2024) have been published. However, in a pre-GAI special issue (Mikalef et al., 2022), a more critical tone was adopted, with an explicit focus on the dark side of AI.

At the ISJ, the vast majority of GAI-themed submissions to date tend to emphasise the novelty of the application area, yet actually have little to do with IS and fail to engage with the discipline’s broader sociotechnical axis. This reflects an ongoing challenge for IS scholars: how to meaningfully situate GAI research within the discipline’s core concerns. Nevertheless, it also presents an opportunity for IS researchers to engage critically with GAI through theoretical and methodological innovation and empirical rigor. Also largely absent from the GAI-related work done so far is either a more critical stance or an attempt to engage in rigorous and novel theoretical development on the impacts of GAI.

This is troubling because of the risks that GAI poses to individuals, organizations, society more broadly, and the natural environment. In fact, poorly vetted applications of GAI run the risk of such negative outcomes as violating privacy, facilitating research misconduct, insulting customers, and discriminating unfairly against specific groups. Given this backdrop, we argue that the time is now ripe for researchers to critically investigate the diverse impacts of GAI. Analyzing these impacts through a sociotechnical lens is essential to bridge the gap between technological advancements and their broader implications.

We can expect that a wide variety of impacts may be experienced globally, some of which will be legal in one jurisdiction even as they are illegal elsewhere; this heightens concerns about widening digital and AI-related divides. To explain these phenomena and anticipate their future trajectories, researchers must employ appropriate theoretical frameworks and remain open to innovative forms of theorizing. GAI's emergence necessitates both the adaptation of existing theories and the development of new theoretical arguments that reflect its distinctive characteristics, such as its effects on human agency and the contextual nuances of its applications.

Because GAI is still an emerging phenomenon, we encourage grounded research. For example, given the undeniably disruptive nature of GAI, theories of disruption may offer fruitful opportunities for theoretical innovation, while critical design approaches rooted in theories of emancipation could illuminate issues of AI oppression. A robust theoretical foundation that explores the ‘dark side’ of GAI is indispensable for advancing sociotechnical research and addressing GAI’s complex, often adverse, societal impacts.

In a positive sense, the research that we aim to publish in this special issue should contribute to our understanding of the potential negative impacts of GAI, with the hope that we can also mitigate them. In-depth ethnographic fieldwork, critical case studies, and grounded theory can help reveal and address the otherwise hidden aspects of GAI in practice. Action researchers may go a step further and propose, or even enact, mitigation strategies in particular contexts, thus subjecting their theoretical ideas to empirical tests.

For this special issue, we specifically solicit research that draws on primary data. Primary data allows for a deeper understanding of GAI’s real-world impact and enables researchers to engage directly with stakeholders, such as individuals and organizations affected by GAI. We aim to foster research that is grounded in real-world experiences and produces theoretically innovative, methodologically rigorous work that can truly advance our understanding of GAI’s sociotechnical consequences. We expect authors to make a direct contribution to knowledge by developing new theory or theorisations that acknowledge the unique characteristics of GAI. We welcome interdisciplinary work. We also welcome action research and design science studies if they empirically demonstrate how the negative impacts of GAI can be mitigated and, in addition, develop theory.

For this special issue, we do not accept stand-alone reviews of the technology, opinion papers, or purely conceptual papers. Submissions to this special issue must align closely with the sociotechnical axis central to IS research. Specifically, we seek studies that explore the interplay between the human, technological, environmental and organizational dimensions of GAI in real-world contexts. The range of potentially suitable topics and contexts is clearly extensive, so we provide only a partial list below. As we indicate above, we are particularly interested in the dark side of GAI.

Potential topics

  • Human–GAI Relationships and Collaboration (e.g., Anthropomorphism in AI and human identity; Dehumanized interactions; Relationships with robots or chatbots; Power dynamics in human–AI collaboration and hegemonic interactions)
  • Human Agency, Creativity, and Cognition (e.g., Erosion of human agency and GAI dependency; Challenges to human creativity in the era of AI-driven pattern generation; Mediocrity as a new norm, and the struggle for excellence; Cognitive enhancement, deterioration, and dependence in GAI-mediated environments)
  • Social Inequalities and Marginalization (e.g., Impacts on marginalized populations and inequality by design; Educational equity and impact on learning; Global and cultural distortions and digital exclusion; Data colonialism and global inequalities)
  • Work and Organizational Shifts (e.g., Transformation, decline and disappearance of professions; Deskilling and disempowering of individuals; Disruptions to work, routines, and shifts in organizational dynamics)
  • Governance and Policy (e.g., Implications of strong or absent governance frameworks; Regulation and policy development by governments and organizations; National policies and their differences; Environmental and societal trade-offs)
  • Disinformation and Trust (e.g., Development and dissemination of disinformation, misinformation, and conspiracy theories; Erosion of trust in human and organizational communication)
  • Fundamental Risks (e.g., Systemic risk, resilience, and dependencies in AI-enhanced infrastructure; Vulnerabilities and data poisoning; Misuse of intellectual property; Warfare automation)

Associate editors

Angsana Techatassanasoontorn, AUT University
Carla Bonina, University of Surrey
Christoph Breidbach, University of Queensland
Dimitra Petrakaki, University of Sussex
Efpraxia Zamani, Durham University
Gilbert Fridgen, University of Luxembourg
Hameed Chughtai, Lancaster University
Han-fen Hu, University of Nevada Las Vegas
Harminder Singh, AUT University
Johan Sæbø, University of Oslo
Luiz Joia, Getulio Vargas Foundation
Marco Marabelli, Bentley University
Martin Adam, University of Göttingen
Maximilian Schreieck, University of Innsbruck
Mike Lee, University of Nevada Las Vegas
Mira Slavova, Warwick Business School
Mylene Struijk, University of Sydney
Nancy Deng, California State University Dominguez Hills
Petros Chamakiotis, ESCP
Petter Nielsen, University of Oslo
Phil Zhou, Tongji University
Pierre-Emmanuel Arduin, Paris Dauphine University
Sven Laumer, Friedrich-Alexander University
Tommy Chan, University of Manchester