Artificial Intelligence-Information Assurance Nexus: The Future of Information Systems Security, Privacy, and Quality

  • October 16, 2024
    Call for papers published


  • January 20, 2025
    Abstract Proposal Deadline


  • March 31, 2025
    Feedback on abstracts


  • July 11, 2025
    Paper development workshop (virtual)


  • October 31, 2025
    Stage 1 Submission Deadline


  • January 31, 2026
    First-round decisions


  • February 1, 2026
    Workshop for authors with first-round revise-and-resubmit


  • May 31, 2026
    Second-round revisions due


  • August 31, 2026
    Second-round decisions


  • November 30, 2026
    Final revisions due


  • February 28, 2027
    Final revisions

Editors

  • Rui Chen, Iowa State University
  • Juan Feng, Tsinghua University
  • Miguel Godinho de Matos, Católica Lisbon School of Business & Economics
  • Carol Hsu, University of Sydney
  • H. Raghav Rao, University of Texas at San Antonio

Description

Digital threats continue to impede information assurance. Many issues in information assurance have arisen in the last decade or two, including risk management, information quality, intellectual property, privacy protection, compliance with regulations, and continuity of operations (Mahmood et al., 2010; Forrest, 2023). As a result, protecting information has become a global priority, and collaborative efforts are being made to prevent, detect, and react to threats to information quality, authenticity, integrity, confidentiality, and availability (European Parliament, 2018; White House, 2023a). As society steps into the age of generative AI (GenAI) (Dennehy et. al., 2023), fresh challenges and opportunities are arising in the realms of information security, privacy, and quality. Questions have emerged regarding the role and intended/unintended consequences of GenAI in information assurance. GenAI is believed to pose a paradox, serving as a dual-edged sword in the realm of information assurance (Robidoux, 2024).

GenAI creates new content, whereas traditional AI mostly makes predictions and classifications based on existing datasets. GenAI is designed to reason and operate independently across various domains, whereas traditional AI focuses on narrow tasks (e.g., playing chess and translating languages by following specific rules). In addition, GenAI works with multiple data modalities (e.g., text, images, and videos), whereas traditional AI primarily functions in a single mode of data. These new capabilities of GenAI open new possibilities for its applications in a wide range of areas. GenAI models can range from generalized models to domain-specific models that automate tasks and generate content adhering to industry-specific terminologies, context-specialized knowledge, and tailored experiences. Its power has sparked discussions on ethics and societal questions regarding the potential impact on employment, bias, privacy, and human-AI relationships.

The emergence of GenAI is poised to exert a profound impact on assurance (Barrett et al., 2023; Sun et al., 2023). On the one hand, GenAI has been recognized for its ability to bolster information assurance. The IBM Institute for Business Value (2024) commented that GenAI has the potential to strengthen business defenses, accelerate security processes, and identify emerging threats as they arise. Studies have also noted that GenAI may be able to address information management challenges, including quality (Bhatti, 2024). On the other hand, GenAI heightens the potency of existing threats, allows the fabrication of false information, fuels intellectual property theft, and poses challenges to governance and compliance. The 2024 February deepfake fraud incident in Hong Kong is a case in point (Chen & Magramo, 2024). Even unexpected users can threaten the protection of data, which is manifested by examples of employees sharing confidential data with GenAI models. Industry reports from Forrester (2023), Cisco (2024), and the IBM Institute for Business Value (2024) have highlighted GenAI-induced risks to information assurance as a major threat to firms’ adoption and implementation of GenAI initiatives. Similar concerns have been acknowledged from a government perspective in the 2023 U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (White House, 2023b) and the 2023 congressional research report: Generative Artificial Intelligence and Data Privacy: A Primer (Congressional Research Service, 2023).

Examples of how GenAI may exacerbate information assurance issues include:

1. **Sophisticated human-deception attacks:** Cybercriminals can use GenAI to compromise businesses by writing targeted emails. In addition, GenAI can create deepfakes and voice clones, resulting in “vishing” that uses phone calls and voice messages to trick people into sharing sensitive information.

2. **Hallucination and confabulation:** GenAI is known to create incorrect information that is seemingly correct. In addition, attackers may trick GenAI into recommending unverified software code packages to unexpected users. When attackers embed malicious codes into packages endorsed by GenAI, users may unwittingly download and utilize these harmful codes, creating security vulnerabilities ripe for exploitation.

3. **Intellectual property theft:** GenAI can create materials that violate intellectual property rights. For example, GenAI can create content that resembles existing copyrighted materials, resulting in legal ramifications and conflicts.

4. **Challenges in regulation and compliance:** Keeping GenAI models in check is subject to considerable hurdles because of how intricate and swiftly they evolve. Ensuring compliance with data protection laws and standards will become increasingly difficult as GenAI becomes more autonomous and capable of making independent decisions.

Another source of threats to information assurance stems from attacks that are designed to target the way GenAI systems are trained and expected to be used. Many of these attacks can be mitigated by explicitly integrating information assurance considerations when designing GenAI systems. For example, GenAI tools may be subject to:

1. **Unreliable training data:** A substantial amount of training data is employed in constructing large language models (LLMs). Yet this data may be of low quality and is often unverified. The security of the models is influenced by the quality of the training data, paving the way for potential vulnerabilities, unauthorized access, and compromises regarding sensitive information.

2. **Data poisoning:** GenAI needs to be trained and tuned on inputs and outputs. Data poisoning occurs when inputs are manipulated to alter or corrupt the training data in LLMs, which consequently impacts the desired outputs of the overall system.

3. **Security leaks, inference attacks, and knowledge phishing:** Security leaks in the context of GenAI refer to the unintended disclosure of sensitive information embedded within a model itself or through its responses. This is also known as inference attacks or knowledge phishing.

4. **Prompt injections:** Prompt injections occur when malicious inputs are provided to an AI system to manipulate its output or to execute unintended actions.

Cisco (2024) found that 92% of organizations “see GenAI as fundamentally different, requiring new techniques to manage data and risks.” The 2023 U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence calls for actions on refining GenAI by mitigating information assurance issues (White House, 2023b). Worldwide efforts are being made on these fronts to protect LLMs against threats of information fabrication, system misuse, privacy breaches, etc. Gartner recommends mitigation strategies, which include “establishing a governance entity and workflow, monitoring and blocking access, communicating acceptable use policies, exploring prompt engineering and API integrations, and prioritizing private hosting options” (Robidoux, 2024). However, there are growing concerns that excessive focus and regulation on data security and privacy may stifle and slow the advancement of GenAI, especially in terms of the European Union’s AI Act (Timis, 2023).

The relationship between GenAI and information assurance is depicted below.

Potential topics

  • What factors influence individuals’ security and privacy behavior in the presence of GenAI tools?
  • How can we predict, analyze, and counteract the emerging threats to GenAI models?
  • How can economic analysis contribute to combating information assurance threats in GenAI?
  • What are the managerial strategies and their effectiveness in addressing GenAI-induced issues on data security and privacy?
  • What are the key principles in attributing accountability and responsibilities for the risks in GenAI model output?
  • Individual behaviors
  • Organizational practices
  • Societal impacts
  • Risk management
  • Investments in assurance
  • Market effects
  • Attacker analysis

Associate editors

Panagiotis Adamopoulos, Emory University
Jingjing Li, University of Virginia
Rodrigo Belo, Nova School of Business and Economics
Huigang Liang, University of Memphis
Indranil Bose, NEOMA
Alexander Maedche, Karlsruhe Institute of Technology
Lemuria Carter, University of Sydney
Ning Nan, University of British Columbia
Christy Cheung, Hong Kong Baptist University
Jella Pfeiffer, University of Stuttgart
Rahul De’, Indian Institute of Management Bangalore
Dandan Qiao, National University of Singapore
Amany Elbanna, University of Sussex
Sagar Samtani, Indiana University
Uri Gal, University of Sydney
Anastasia Sergeeva, Vrije Universiteit Amsterdam
Weiyin Hong, Hong Kong University of Science and Technology
Maha Shaikh, ESADE Business School
Nina Huang, University of Miami
Paolo Spagnoletti, Luiss Business School
Hartmut Höhle, University of Mannheim
Rohit Valecha, University of Texas at San Antonio
Allen Johnston, University of Alabama
Jing Wang, Hong Kong University of Science and Technology
Arpan Kar, Indian Institute of Technology
Jingguo Wang, University of Texas at Arlington
Juhee Kwon, City University of Hong Kong
Hong Xu, Hong Kong University of Science and Technology
Atanu Lahiri, University of Texas at Dallas
Heng Xu, University of Florida
Alvin Leung, City University of Hong Kong
Niam Yaraghi, University of Miami
Ting Li, Erasmus University
Cathy Liu Yang, HEC Paris
Yingjie Zhang, Peking University