This special issue examines the dual impact of Generative AI on organisations: driving innovation, efficiency, and societal value while raising ethical, governance, and workforce challenges.
Generative Artificial Intelligence (GAI) is at the forefront of technological innovation, with rapid growth and transformative potential. By 2032, the global GAI market is projected to reach $151.9 billion. This trajectory reflects the growing recognition of GAI's capabilities across multiple domains, fundamentally reshaping innovation paradigms and redefining organisational processes and structures. In particular, GAI is increasingly acknowledged as a transformative force in facilitating data-driven decision-making and optimising operational processes within and across organisations. Foundational GAI systems such as BERT, ChatGPT, DALL-E, DeepSeek, and Gemini illustrate the technology's adaptability and extensive applicability. These systems can execute a wide array of tasks, ranging from creative production and service delivery to organisational decision-making, offering novel opportunities for personalisation, operational efficiency, and interdisciplinary collaboration. As GAI continues to evolve, some organisations have begun developing and disseminating regulatory frameworks to legitimise its deployment in organisational processes. There is also growing interest in whether GAI can be repurposed as a strategic technology for advancing social justice and addressing grand challenges, from climate resilience to digital access and social welfare.
While prior studies have examined AI conceptually and broadly, a significant area remains unexplored: the multifaceted impacts of GAI on organisational decision-making within information systems research. Given the strategic implications, and potentially ethical challenges, associated with implementing GAI in organisations, filling this gap is paramount. This special issue therefore aims to stimulate research on the dual nature of GAI, namely its simultaneous potential for value co-creation and value co-destruction in organisations; GAI presents both promising opportunities and significant challenges. In particular, GAI used for automated content creation, internal reporting, and predictive analytics can improve operational efficiency by identifying workflow bottlenecks, reducing manual workload, and enabling data-driven insights into employee performance and strategic planning. With these capabilities, it can detect subtle signs of fatigue or burnout, allowing for early intervention through tailored support or workload adjustments. By synthesising vast amounts of information, GAI also supports value co-creation and strategic agility in decision-making, promotes innovation and digital inclusion, and allows employees to focus on higher-level tasks such as problem-solving and strategic thinking. GAI can foster creativity and knowledge sharing, support employee well-being by automating repetitive tasks, and improve cross-functional collaboration. However, these organisational benefits are tempered by critical concerns: increasing reliance on opaque and complex GAI systems introduces risks related to hallucinations, diminished human oversight, and the externalisation of ethical responsibility onto non-accountable technologies.
An estimated 75% of companies have restricted, or actively considered restricting, technologies such as ChatGPT because of fears of data breaches, intellectual property loss, and declining trust in AI-generated content. More fundamentally, GAI threatens the nature of creative labour, raising questions about job security, intellectual integrity, and the automation of cognitively demanding work. GAI implementation may also generate unintended consequences for employee well-being, including role displacement, cognitive dependency, and reduced transparency in decision-making, which in turn affects diversity, equality, and inclusion in the working environment. Concerning workplace surveillance, GAI can undermine employee autonomy by enabling constant monitoring that erodes trust and psychological safety. Employees may feel reduced to performance metrics, reflecting a shift toward Digital Taylorism, which prioritises algorithmic efficiency over human creativity. This dynamic can further intensify privacy concerns, given the opaque nature of many GAI systems, where data is often collected and used without clear consent. Together, these concerns highlight the potential for value co-destruction, wherein mismanaged or poorly governed GAI adoption undermines organisational integrity, ethical standards, and stakeholder trust.
In light of these tensions, this special issue invites scholars and practitioners to adopt a perspective that considers both the strategic, transformative potential of GAI and its disruptive consequences. We welcome contributions that examine how GAI simultaneously enhances and challenges existing organisational frameworks and processes, ultimately reshaping the landscape of technological governance, human agency, and organisational transformation. Submissions may offer theoretical or empirical insights into the promises, ethical dilemmas, and societal shifts driven by GAI at the organisational level, including its application to grand societal challenges. We will not consider papers that focus on users or individuals, or manuscripts that otherwise do not align with the central focus of the special issue. Papers may employ qualitative, quantitative, or pluralistic approaches.