Join the XAI - ERC project team at Pisa, Italy!

A two-year PostDoc position is open to join the ERC Advanced Grant XAI team in Pisa, Italy.

XAI: Science and technology for the eXplanation of AI decision making, ERC Advanced Grant n. 834756 (2019-2025)

Principal investigator: Fosca Giannotti (Scuola Normale Superiore, Pisa), in collaboration with Dino Pedreschi (University of Pisa), at KDD Lab in Pisa, Italy (https://kdd.isti.cnr.it/)

Deadline for online applications: Monday, 30 January 2023 at 13:00 CET

Online applications at https://pica.cineca.it/unipi/ass-inf2022-17/ (selection code: ass-inf2022-17)

Salary: approximately €3,100 net per month, a competitive salary by Italian standards

Duration: 2 years

________________________________________________

The successful candidate will work in an interdisciplinary team with top-ranked researchers at the KDD Lab, a joint lab of the University of Pisa, Scuola Normale Superiore and the Italian National Research Council (CNR): https://xai-project.eu/people.html

Join us to develop future innovative models and methods for Human-centered, Explainable Artificial Intelligence for Collaborative Decision Making:

  • advanced eXplainable AI (XAI) paradigms supporting synergistic human-machine interaction and collaboration. These should be aware of the characteristics and expertise of the human they interact with, and prevent pitfalls common in the XAI literature, such as confirmation bias and automation bias.
  • “human-in-the-loop” co-evolution of human decision making and machine learning models. Human decision makers and their AI assistant should learn from each other and jointly evolve to best exploit their respective strengths and overcome their respective weaknesses, optimizing the final outcome of the joint human-AI system.

The XAI ERC project https://xai-project.eu/ focuses on how to construct meaningful explanations of opaque AI/ML systems, following several research lines:

  • how to devise machine learning models that are transparent-by-design;
  • how to perform black-box model auditing and explanation;
  • how to reveal data and algorithmic bias;
  • how to learn causal relationships;
  • how to enable fruitful conversation between human decision makers and AI decision support tools;
  • how to make explainable AI work in concrete domains, such as healthcare, risk assessment and justice.

See our publications here: https://xai-project.eu/resources.html

Ideal candidates should hold, or be about to obtain, a PhD degree in Computer Science, Computer Engineering, Mathematics, Physics, Cognitive Sciences or a related discipline, and have a proven track record of excellent university grades and publications in relevant top-tier conferences and journals. Background in (some of) the following topics is appreciated: machine learning, deep learning, inductive relational learning, statistical learning, statistical physics of machine learning, knowledge graphs, causal reasoning and learning, counterfactual reasoning, cognitive models of learning and reasoning, human-computer interaction. Good written and spoken communication skills in English are required.

Applications must be submitted exclusively online at https://pica.cineca.it/unipi/ass-inf2022-17/ by the deadline of Monday, 30 January 2023 at 13:00 CET.

Interested candidates are also welcome to send us an expression of interest, containing their CV accompanied by a letter of motivation and key publications. Please send your expression of interest (not mandatory) to [email protected], [email protected] and [email protected] with subject: [XAI] Expression of interest.

Salary is approximately €48,000 per year (gross, before taxes). Under the current tax regulation, this corresponds to approximately €3,100 per month (net, after taxes), which is a competitive salary by Italian standards. Appointments are expected to start in Spring 2023 (the start date is flexible).

KDD Lab, the Knowledge Discovery & Data Mining Lab of ISTI-CNR and the University of Pisa (https://kdd.isti.cnr.it/), is a pioneering initiative in data science & AI, established in 1994. KDD Lab is a core node of the Humane-AI-Net H2020 network of excellent research centers in AI (https://www.humane-ai.eu), the coordinator of the SoBigData++ research infrastructure on Social Mining and Big Data Analytics (https://www.sobigdata.eu) and a partner of AI4EU, the EU on-demand AI platform (https://www.ai4eu.eu/).

Recent publications of KDD Lab on XAI:

  • R. Guidotti, A. Monreale, S. Ruggieri, F. Naretto, F. Giannotti, D. Pedreschi, F. Turini. Stable and actionable explanations of black-box models through factual and counterfactual rules. Data Mining & Knowledge Discovery (2022). https://doi.org/10.1007/s10618-022-00878-5
  • Cecilia Panigutti, Andrea Beretta, Fosca Giannotti, and Dino Pedreschi. 2022. Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 568, 1–9. https://doi.org/10.1145/3491102.3502104
  • M. Setzu, R. Guidotti, A. Monreale, F. Turini, D. Pedreschi, F. Giannotti. GLocalX - From Local to Global Explanations of Black Box AI Models. Artificial Intelligence, Volume 294, 2021, 103457 https://doi.org/10.1016/j.artint.2021.103457
  • R. Guidotti, A. Monreale, F. Spinnato, D. Pedreschi and F. Giannotti. Explaining Any Time Series Classifier. 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), Atlanta, GA, USA, 2020, pp. 167-176, doi: 10.1109/CogMI50398.2020.00029.
  • Guidotti, R., Monreale, A., Matwin, S., & Pedreschi, D. (2020). Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13665-13668. https://doi.org/10.1609/aaai.v34i09.7116
  • C. Panigutti, A. Perotti, and D. Pedreschi. 2020. Doctor XAI: an ontology-based approach to black-box sequential data classification explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 629–639. https://doi.org/10.1145/3351095.3372855
  • R. Guidotti, A. Monreale, F. Giannotti, D. Pedreschi, S. Ruggieri and F. Turini. Factual and Counterfactual Explanations for Black Box Decision Making. IEEE Intelligent Systems, vol. 34, no. 6, pp. 14-23, 1 Nov.-Dec. 2019, doi: 10.1109/MIS.2019.2957223.
  • Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., Ruggieri, S., & Turini, F. (2019). Meaningful Explanations of Black Box AI Decision Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9780-9784. https://doi.org/10.1609/aaai.v33i01.33019780
  • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys 51, 5, Article 93 (January 2019), 42 pages. https://doi.org/10.1145/3236009

________________________________________________
