In the spirit of substantive exchange between stakeholders from different fields and sectors, we announce a call for papers, panel discussions and workshops from all stakeholders in the responsible AI ecosystem. All applicants should clearly indicate how their proposed paper or session takes an applied research perspective or integrates interdisciplinary or multistakeholder approaches.
The Responsible AI Forum aims to bring together experts from diverse fields across academia, government, civil society and industry to discuss the development of AI-based technologies within an ethical, sustainable, inclusive and comprehensive framework that delivers world benefit.
Key topics of interest are outlined under our five tracks, with suggested subtopics within each. One track must be selected upon submission, though submissions may address topics beyond the track descriptions.
How is AI being used by militaries and governments to improve security, and how are dual-use applications being considered in AI ethics? How does the use of AI change dynamics around foreign policy engagement and deterrence? What ethical considerations do we need for the use of AI for human security and the protection of civilians in conflict and crisis settings?
What are the societal impacts of LLMs across domains such as education, (mental) health and public services? How do we design and evaluate these systems to ensure alignment with human values such as fairness and transparency? How do we meaningfully include marginalized populations and interdisciplinary perspectives in shaping the development and governance of LLMs?
How can AI be designed and deployed to serve the public good (social and environmental well-being), and what opportunities are currently being missed in this context? How might AI enhance public administration, services and sectors such as education, healthcare and environmental management? What does AI governance look like in this space? How can governments and policymakers better engage with ethical AI?
How does AI shape the way we think, behave, feel and interact, both online and in person? In what ways does it influence our emotions, decision-making and social connections? And how can we better understand, assess and guide its effects to ensure AI systems remain responsible and human-centred?
What methods or best practices exist for participatory, people-centered AI research and practice? How is AI ethics being employed in formal and informal educational settings? How can we enact public dialogues and reach different groups of stakeholders? What innovative (including arts-based) methods are facilitating participation and deliberation and contributing to AI ethics, while fostering AI awareness and literacy?
Please submit your abstract (500 words), including the title, authors, affiliations, contact information of the corresponding author, and five keywords. If you are submitting a proposal for a panel, please include a list of panel speakers (max. 5) and indicate whether their interest and availability have been confirmed.
We will accept submissions until January 7th, 2026.
Decisions will be communicated by February 16th, 2026.
Disclaimer: This page includes images that were generated or edited using Google Gemini.