This special issue examines Generative AI's dual impact on organisations, driving innovation, efficiency, and societal value while raising ethical, governance, and workforce challenges.
Generative Artificial Intelligence (GAI) is at the forefront of technological innovation, with rapid growth and transformative potential. By 2032, the global GAI market is projected to reach $151.9 billion. This trajectory reflects the growing recognition of GAI's capabilities across multiple domains, fundamentally reshaping innovation paradigms and redefining organisational processes and structures. In particular, GAI is increasingly acknowledged as a transformative force in facilitating data-driven decision-making and optimising operational processes within and across organisations. Foundational GAI systems such as BERT, ChatGPT, DALL-E, DeepSeek, and Gemini illustrate the technology's adaptability and extensive applicability. These systems can execute a wide array of tasks, ranging from creative production and service delivery to organisational decision-making, offering novel opportunities for personalisation, operational efficiency, and interdisciplinary collaboration. As GAI continues to evolve, some organisations have begun developing and disseminating regulatory frameworks to legitimise its deployment in organisational processes. There is growing interest in whether GAI can be repurposed as a strategic technology for advancing social justice and addressing grand challenges, from climate resilience to digital access and social welfare. At the same time, stakeholders, including organisational actors, are concerned with the risks posed by GAI, such as data governance, epistemic reliability, trust, error propagation, and bias. This special issue aims to unfold the strategic opportunities and challenges of GAI in organisations, networks, government bodies, and society at large.
While prior studies have examined AI conceptually and broadly, a significant unexplored area remains: the multifaceted impacts of GAI on organisational decision-making within information systems research. Given the strategic implications (and potential ethical challenges) of implementing GAI in organisations, filling this gap is paramount. With this special issue, we aim to stimulate research on the dual nature of GAI, namely its simultaneous potential for value co-creation and value co-destruction in organisations; GAI presents both promising opportunities and significant challenges. In particular, GAI used for automated content creation, internal reporting, and predictive analytics has the potential to improve operational efficiency by identifying workflow bottlenecks, reducing manual workload, and enabling data-driven insights into employee performance and strategic planning. With these capabilities, it can detect subtle signs of fatigue or burnout, such as slower response times, changes in communication tone, or altered work patterns, enabling early intervention through tailored support or workload adjustments. Moreover, by synthesising vast amounts of information, GAI supports value co-creation and strategic agility in decision-making, promotes innovation and digital inclusion, and allows employees to focus on higher-level tasks such as problem-solving and strategic thinking. GAI can also foster creativity and knowledge sharing, support employee well-being by automating repetitive tasks, and improve cross-functional collaboration.
However, these organisational benefits are tempered by critical concerns. The increasing reliance on opaque and complex GAI systems introduces risks related to hallucinations, diminished human oversight, and the externalisation of ethical responsibility onto non-accountable technologies. According to a 2023 global survey, 75% of companies have restricted or are actively considering restricting technologies such as ChatGPT due to fears of data breaches, intellectual property loss, and declining trust in AI-generated content. More fundamentally, GAI threatens the nature of creative labour, raising questions about job security, intellectual integrity, and the automation of cognitively demanding work. GAI implementation may also generate unintended consequences for employee well-being, including role displacement, cognitive dependency, and reduced transparency in decision-making, which can undermine diversity, equality, and inclusion in the working environment. Concerning workplace surveillance, GAI can erode employee autonomy by enabling constant monitoring that undermines trust and psychological safety. Employees may feel reduced to performance metrics, reflecting a shift toward Digital Taylorism, which prioritises algorithmic efficiency over human creativity. This dynamic can further intensify privacy concerns, given the opaque nature of many GAI systems, where data is often collected and used without clear consent, and heightens the risk of biased or discriminatory outcomes. All of these concerns highlight the potential for value co-destruction, wherein mismanaged or poorly governed GAI adoption undermines organisational integrity, ethical standards, and stakeholder trust. This resonates with recent developments in people analytics, where GAI-facilitated behavioural monitoring intersects with algorithmic control, potentially transforming managerial decision-making into a form of digital micromanagement.
Given these growing uncertainties and tensions surrounding GAI, there is an urgent need to move beyond celebratory narratives and engage in critical, multidisciplinary inquiry into its dual-edged nature. On one hand, GAI serves as a powerful cognitive enabler, helping organisations to synthesise complex information, generate novel insights, and expand the boundaries of individual and collective knowledge. On the other hand, increasing reliance on these systems may gradually erode essential cognitive functions, such as critical thinking, independent reasoning, and problem-solving skills. Furthermore, the large-scale integration of GAI risks reinforcing existing cognitive biases, curating algorithmic filter bubbles, and narrowing exposure to diverse perspectives, thereby constraining informed decision-making. In light of these tensions, this special issue invites scholars and practitioners to adopt a perspective that considers both the strategic, transformative potential and the disruptive consequences of GAI. We welcome contributions that examine how GAI simultaneously enhances and challenges existing organisational frameworks and processes, ultimately reshaping the landscape of technological governance, human agency, and organisational transformation. Submissions may offer theoretical or empirical insights into the promises, ethical dilemmas, and societal shifts driven by GAI at the organisational level, including its application to grand societal challenges. We will not consider papers that focus solely on individual users, nor manuscripts that do not align with the central focus of the special issue. Papers may use qualitative, quantitative, or pluralistic approaches.