Compassionate AI

Important dates

  • June 30, 2025: Call for Papers Published
  • January 20, 2026: Full Paper Submission
  • April 15, 2026: First Round of Editorial Decisions
  • August 31, 2026: Revisions Due
  • December 31, 2026: Second Round of Editorial Decisions
  • February 28, 2027: Final Revisions Due
  • May 31, 2027: Final Editorial Decisions

Editors

  • Rajiv Kohli, William & Mary
  • Meng Li, University of Houston
  • Ting Li, Erasmus University Rotterdam
  • Paul A. Pavlou, University of Miami

Description

Although Artificial Intelligence (AI), including its generative and agentic forms, continues to reshape industries and societies, it remains largely devoid of human qualities; today’s AI is often characterized as mechanistic, even “sterile.” Current AI design paradigms focus on maximizing efficiency, accuracy, and computational sophistication, often at the expense of the emotional, social, and ethical dimensions inherent in human interactions. While AI is effective at processing data and automating tasks, it is often impersonal and detached from the human experience, which ultimately undermines its adoption.

To address these shortcomings, AI development must be rethought around compassionate design, integrating ethical considerations, cultural sensitivity, and emotional intelligence into the core of AI systems. Embedding these principles would make AI not only technically proficient but also capable of understanding and responding to the diverse needs of its human users, ensuring that AI advances contribute meaningfully to humanity. The need for compassion-centered AI design has never been more pressing.

Compassionate AI refers to systems that not only recognize human emotions and suffering but also proactively seek to alleviate distress, promote well-being, and uphold human dignity. Compassionate AI systems are envisioned as tools that complement human decision-making by providing support that is empathetic, inclusive, and contextually aware. Unlike empathy, which involves understanding others’ emotional states, or sympathy, which evokes feelings of sorrow, compassion combines emotional awareness with a purposeful intention to help other human beings.

While AI has improved efficiency, personalization, and cost savings in many settings, it has also raised ethical, legal, and societal concerns, particularly in high-stakes domains such as healthcare, education, crisis response, and social services. These concerns are magnified when AI systems, lacking moral agency, make decisions that affect vulnerable populations. In many applications, from healthcare to finance and customer service, the absence of a humanistic perspective can result in interactions that feel mechanistic and unresponsive to the complexities of individual circumstances, and it can propagate (or even amplify) existing societal biases.

Compassionate AI addresses this challenge by embedding empathy, care, and contextual sensitivity into the design, deployment, and governance of AI systems. It envisions AI not as a mechanistic, utilitarian tool but as a partner that embodies humanity’s highest moral aspirations. Ultimately, compassionate AI is both an ethical imperative and a catalyst: it rehumanizes AI, ensuring that our AI-led future uplifts humanity, nurtures societal well-being, and reflects our collective commitment to a better and more empathetic world.

Potential topics

  • Healthcare: Compassionate AI can enhance patient care by predicting adverse events, assisting with end-of-life decision-making, and providing emotional support to patients and families.
  • Crisis Management: During natural disasters or emergencies, compassionate AI systems can analyze real-time data, such as social media posts, to identify distressed individuals and provide timely assistance.
  • Education: AI systems can transform education by personalizing learning experiences and adapting instruction to meet diverse student needs.
  • Social Services: Compassionate AI can support victims of trauma or abuse by offering non-judgmental, understanding virtual assistance.
  • Customer Service: AI-powered chatbots can enhance customer experiences by providing compassionate and empathetic responses to inquiries.
  • Human Resources: AI can monitor employee well-being by detecting signals of stress or burnout and proactively offering support resources.

Associate editors

  • Sutirtha Chatterjee, University of Nevada, Las Vegas
  • Monica Chiarini Tremblay, William & Mary
  • Jennifer Claggett, Wake Forest University
  • Yulin Fang, HKU Business School
  • Shu He, University of Florida
  • Nina Huang, University of Miami
  • Tina Blegind Jensen, Copenhagen Business School
  • Hyeokkoo Eric Kwon, Nanyang Technological University
  • Gwanhoo Lee, American University
  • Ilan Oshri, University of Auckland
  • Matti Rossi, Aalto University School of Business
  • Mochen Yang, University of Minnesota