Artificial Intelligence–Information Assurance Nexus: The Future of Information Systems Security, Privacy, and Quality

  • October 16, 2024
    Call for papers published


  • November 15, 2024
Optional introductory informational workshops (virtual).


  • January 20, 2025
Two-page abstract submission deadline.


  • March 31, 2025
    Feedback on abstracts.


  • July 11, 2025
    Paper development workshop (virtual).


  • October 31, 2025
    First-round paper submission.


  • January 31, 2026
    First-round decisions.


  • February 1, 2026
Workshop for authors with first-round revise-and-resubmit decisions.


  • May 31, 2026
    Second-round revisions due.


  • August 31, 2026
    Second-round decisions.


  • November 30, 2026
    Final revisions due.


  • February 28, 2027
Final decisions.

Editors

  • Rui Chen, Iowa State University
  • Juan Feng, Tsinghua University
  • Miguel Godinho de Matos, Católica Lisbon School of Business & Economics
  • Carol Hsu, University of Sydney
  • H. Raghav Rao, University of Texas at San Antonio

Description

Digital threats continue to impede information assurance. Many issues in information assurance have arisen over the past two decades, including risk management, information quality, intellectual property, privacy protection, regulatory compliance, and continuity of operations. As a result, protecting information has become a global priority, and collaborative efforts are underway to prevent, detect, and react to threats to information quality, authenticity, integrity, confidentiality, and availability. As society enters the age of generative AI (GenAI), new challenges and opportunities are emerging in information security, privacy, and quality. Questions have arisen regarding the role and the intended and unintended consequences of GenAI in information assurance. GenAI is believed to pose a paradox, serving as a double-edged sword for information assurance. GenAI creates new content, whereas traditional AI mostly makes predictions and classifications based on existing datasets. GenAI is designed to reason and operate independently across various domains, whereas traditional AI focuses on narrow tasks (e.g., playing chess or translating languages by following specific rules). In addition, GenAI works with multiple data modalities (e.g., text, images, and video), whereas traditional AI primarily operates on a single modality. These capabilities open new possibilities for applying GenAI across a wide range of areas.

The emergence of GenAI is poised to exert a profound impact on information assurance. On the one hand, GenAI has been recognized for its ability to bolster information assurance. On the other hand, it heightens the potency of existing threats, enables the fabrication of false information, fuels intellectual property theft, and complicates governance and compliance.

Another source of threats to information assurance stems from attacks designed to target how GenAI systems are trained and expected to be used. Many of these attacks can be mitigated by explicitly integrating information assurance considerations into the design of GenAI systems. Cisco found that 92% of organizations see GenAI as fundamentally different, requiring new techniques to manage data and risks. Numerous opportunities exist for information systems (IS) scholars to study information assurance issues in the context of GenAI, where traditional approaches may not suffice. This special issue seeks research that goes beyond simple applications of existing theories and methods from the cybersecurity literature in IS. We invite studies that explore the unique information assurance challenges of GenAI and that develop or apply new theories and methods. By focusing on important research questions, this special issue aims to generate answers that advance significant national and global research priorities.

Potential topics

  • What factors influence individuals’ security and privacy behavior in the presence of GenAI tools?
  • How can we predict, analyze, and counteract the emerging threats to GenAI models?
  • How can economic analysis contribute to combating information assurance threats in GenAI?
  • What managerial strategies address GenAI-induced data security and privacy issues, and how effective are they?
  • What are the key principles for attributing accountability and responsibility for risks in GenAI model outputs?

Studies may examine these questions through multiple lenses, including:

  • individual behaviors
  • organizational practices
  • societal impacts
  • risk management
  • investments in assurance
  • market effects
  • attacker analysis

Associate editors

  • Panagiotis Adamopoulos, Emory University
  • Rodrigo Belo, Nova School of Business and Economics
  • Indranil Bose, NEOMA
  • Lemuria Carter, University of Sydney
  • Christy Cheung, Hong Kong Baptist University
  • Rahul De’, Indian Institute of Management Bangalore
  • Amany Elbanna, University of Sussex
  • Uri Gal, University of Sydney
  • Weiyin Hong, Hong Kong University of Science and Technology
  • Nina Huang, University of Miami
  • Hartmut Höhle, University of Mannheim
  • Allen Johnston, University of Alabama
  • Arpan Kar, Indian Institute of Technology
  • Juhee Kwon, City University of Hong Kong
  • Atanu Lahiri, University of Texas at Dallas
  • Alvin Leung, City University of Hong Kong
  • Ting Li, Erasmus University
  • Jingjing Li, University of Virginia
  • Huigang Liang, University of Memphis
  • Alexander Maedche, Karlsruhe Institute of Technology
  • Ning Nan, University of British Columbia
  • Jella Pfeiffer, University of Stuttgart
  • Dandan Qiao, National University of Singapore
  • Sagar Samtani, Indiana University
  • Anastasia Sergeeva, Vrije Universiteit Amsterdam
  • Maha Shaikh, ESADE Business School
  • Paolo Spagnoletti, Luiss Business School
  • Rohit Valecha, University of Texas at San Antonio
  • Jing Wang, Hong Kong University of Science and Technology
  • Jingguo Wang, University of Texas at Arlington
  • Hong Xu, Hong Kong University of Science and Technology
  • Heng Xu, University of Florida
  • Niam Yaraghi, University of Miami
  • Cathy Liu Yang, HEC Paris
  • Yingjie Zhang, Peking University