To automate or not to automate crisis decisions?
Job Description
A major challenge in crises is the combination of complexity, time pressure and moral decisions. AI has the potential to support first responders in crisis decisions, yet its increasing use has sparked a debate about risks and AI safety when decisions are automated or supported by AI. In this PhD project, we go beyond the 'human in the loop' paradigm and investigate the implications of automation at scale in human-AI teams. This PhD is part of the NWO-funded AI-COMPASS project.
In human-AI teams, the distinction between tasks to be automated (or not) is often made via simplified guidelines ('humans are better at' / 'machines are better at') that do not take team dynamics and trust into account. A hallmark of control and oversight of automated systems is the idea of human agency. However, in complex and decentralized networks, the implications of automation emerge from the interaction of many humans with many AI systems, and this is currently not considered.
This PhD position aims to assess these aspects of AI risk by contextualising automation and integrating group and team performance, where many humans are supported by AI. Key questions are: which tasks or processes can and should be automated? How can we maintain meaningful human autonomy and control in decentralized networks?
To address these questions, you will start by developing a conceptual model of human-AI interaction in decentralized networks, based on the literature and on empirical data collected through workshops with our stakeholders and partners. From there, you will design and implement an agent-based model that integrates human decision-makers from different organisations (e.g., police, municipality) and artificial agents that support these humans by automating information acquisition, analysis, or decision-making. This model will then be used to analyse a range of crisis scenarios for our two most prominent use cases in The Hague and Rotterdam. The insights from the simulations will then be coupled to the concept of human moral autonomy, which defines the conditions that must be fulfilled for human decision-makers to maintain moral agency when interacting with an AI. The results will lead to coordination guidelines for human-AI teams in crisis response that detail which processes to automate, and under which circumstances.
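To give a flavour of the kind of agent-based model envisaged here, the sketch below is purely illustrative: the class names, the trust-update rule and all parameters are assumptions made for this example, not specifications from the project. It shows human decision-makers from different organisations whose choices are informed by AI support agents that automate information acquisition.

import random

# Illustrative sketch only: a minimal agent-based model in plain Python.
# Class names, the trust-update rule and all parameters are assumptions
# made for this example, not part of the AI-COMPASS project design.

class AISupportAgent:
    """Automates information acquisition for a human decision-maker."""
    def __init__(self, accuracy=0.8):
        self.accuracy = accuracy  # probability of a correct assessment

    def assess(self, ground_truth):
        # Return the true situation with probability `accuracy`, else the opposite.
        return ground_truth if random.random() < self.accuracy else not ground_truth

class HumanResponder:
    """Decision-maker from one organisation (e.g. police, municipality)."""
    def __init__(self, organisation, ai, trust=0.5):
        self.organisation = organisation
        self.ai = ai
        self.trust = trust  # weight given to the AI's assessment

    def decide(self, own_observation, ground_truth):
        ai_assessment = self.ai.assess(ground_truth)
        # Follow the AI with probability `trust`, otherwise rely on own observation.
        decision = ai_assessment if random.random() < self.trust else own_observation
        # Simple trust update: trust grows after correct AI advice, shrinks otherwise.
        self.trust += 0.05 if ai_assessment == ground_truth else -0.05
        self.trust = min(max(self.trust, 0.0), 1.0)
        return decision

def run(steps=100):
    responders = [HumanResponder(org, AISupportAgent()) for org in ("police", "municipality")]
    correct = 0
    for _ in range(steps):
        ground_truth = random.random() < 0.3          # e.g. "evacuation needed"
        for r in responders:
            own_observation = random.random() < 0.5   # noisy human observation
            correct += r.decide(own_observation, ground_truth) == ground_truth
    print(f"Correct decisions: {correct}/{steps * len(responders)}")

if __name__ == "__main__":
    run()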
The position is part of the AI-COMPASS team of three PhD researchers who will collectively research crowd crisis management at TU Delft. The position is embedded at the Faculty of Technology, Policy & Management (TPM) and will be jointly supervised with the Faculty of Civil Engineering. As such, you will join a vibrant and growing community. You will be supervised by Prof. Tina Comes, Dr. Srijith Balakrishnan and Prof. Serge Hoogendoorn, and you will closely collaborate with Dr. Sascha Hoogendoorn-Lanser. This embedding ensures access to a broad network of partners in research, policy and practice. You may also gain experience in supervising MSc students and engage in teaching and training. Via our network and tailored mentoring, we will create opportunities for you to develop your career through support for conferences, collaborations and training.
Job Requirements
- You are a highly motivated and enthusiastic researcher with the ambition to conduct high-quality interdisciplinary research that pushes the boundaries of human-centred AI research in crises.
- You have an MSc degree in a field with a strong emphasis on computer science and modelling, such as computational social science (essential).
- You have demonstrated expertise in agent-based modelling (essential).
- You have a keen interest in solid empirical foundations for your models. Experience working with stakeholders and project partners in workshops is a plus (desirable).
- You are interested in the concept of human autonomy and, more broadly, in AI ethics (essential).
- You have excellent study results and an excellent command of English (essential).
- You have knowledge in the field of crisis / disaster management or resilience (desirable).
- You are keen to collaborate with project partners and to translate your insights into practice and policy (desirable).
- You thrive in a collaborative and multi-disciplinary environment (essential).
Delft University of Technology is built on strong foundations. As creators of the world-famous Dutch waterworks and pioneers in biotech, TU Delft is a top international university combining science, engineering and design. It delivers world class results in education, research and innovation to address challenges in the areas of energy, climate, mobility, health and digital society. For generations, our engineers have proven to be entrepreneurial problem-solvers, both in business and in a social context.
At TU Delft we embrace diversity as one of our core values and we actively engage to be a university where you feel at home and can flourish. We value different perspectives and qualities. We believe this makes our work more innovative, the TU Delft community more vibrant and the world more just. Together, we imagine, invent and create solutions using technology to have a positive impact on a global scale. That is why we invite you to apply. Your application will receive fair consideration.
Faculty Technology, Policy & Management
The Faculty of TPM makes an important contribution to solving complex technical-social issues, such as the energy transition, mobility, digitalisation, water management and (cyber)security. TPM does this with its excellent education and research at the intersection of technology, society and policy. We combine insights from engineering and the social sciences as well as the humanities. TPM develops robust models and designs, is internationally oriented and has an extensive network of knowledge institutions, companies, social organisations and governments.
The salary ranges from €2,872 to €3,670.