Centrum Wiskunde & Informatica

Vacancies posted by Centrum Wiskunde & Informatica

Mimir provides automated management of vacancies on job boards for Centrum Wiskunde & Informatica.

Latest vacancies

Repository Manager (x/f/m) for 32-40 hours

As Repository Manager for the CWI library, your primary focus is the institutional repository, our digital archive of scientific output. You ensure that CWI research is openly available and findable worldwide, making as much use as possible of non-profit, community-driven, open-source solutions. You take the initiative to improve information flows and services, and you work together with the Coordinator Information & Documentation in the area of Open Science. In short, a challenging, independent position in a team of four colleagues who together support more than 150 scientists.

Concretely, your tasks include:

  • Managing the database and records, from data entry and categorisation to resolving issues and delivering management reports.
  • Realising integrations with external sources (via API calls and SQL queries) to retrieve information automatically.
  • Following (inter)national IT and policy developments in order to initiate improvements.
  • Collaborating with IT, the project administration and institutes outside CWI.
  • Collaborating with the coordinator in the area of Open Science.
  • Taking an active role in our small team: occasionally helping out in the library.

0 applications
0 views


21-02-2025 Centrum Wiskunde & Informatica
Tenure-track or tenured position (m/f/x) in the area of Responsible AI

The Human-Centered Data Analytics (HCDA) group of CWI aims to strengthen its research activities in the field of responsible AI. We are looking for talented scientists who are keen to develop their own innovative line of fundamental research related to, for example, bias, fairness, human oversight, or ethics in cutting-edge AI technology.

The HCDA group investigates human-centered, responsible AI in the culture and media sectors. Current research topics in the group include: measuring bias and diversity in recommender systems; detecting offensive colonial terminology in knowledge graphs; developing transparent techniques for misinformation detection; predicting political bias of crowd workers; and studying bias against the LGBTIQA+ community in large language models.

We maintain close collaborations with professionals from the culture and media sectors, as well as social scientists and humanities scholars, through the Cultural AI Lab and the AI, Media and Democracy Lab. These interdisciplinary labs provide us with opportunities to work with real data and real-world use cases.

In addition to developing an innovative research line, you will explore collaboration opportunities within the research group and/or within the two interdisciplinary labs, acquire funding to grow our research capacity for responsible AI, and help strengthen the role of CWI as the national institute for computer science and mathematics. Tenure-track and tenured positions at CWI carry only a 10% teaching requirement. In due time, you are expected to establish connections with one of the Dutch universities and contribute to their curriculum.

35 applications
0 views



27-01-2025 Centrum Wiskunde & Informatica
Postdoc on the subject of Bias and Fairness in Knowledge Graphs

Are you inspired by the idea of an inclusive Semantic Web? We are looking for a talented postdoctoral researcher to study various types of bias in knowledge graphs, Linked Open Data and/or metadata.

The position is part of the HAICu project. HAICu (digital Humanities - Artificial Intelligence - Cultural heritage) is a large-scale Dutch research project in which AI researchers and Digital Humanities scholars collaborate with cultural-heritage institutions such as libraries, archives and museums. Linked Open Data is widely used in this sector for metadata about collection objects, for data enrichment, and for cross-collection links.

Inclusivity is a key value in the cultural heritage domain, with organizations employing a range of strategies to deal with unwanted bias in their (often historic) collections. Inspired by these efforts, for this position we focus on bias and fairness on the Semantic Web. Bias in this context may come in various forms. For example, groups of people may be over- or under-represented among the entities in Linked Open Data. Or, the labels and descriptions used to represent people or their cultures may reinforce negative stereotypes, e.g., when outdated, colonial terminology is used. We will investigate one or more of the following topics:

  • To what extent and in what way is social bias reflected in LOD?
  • What are the strategies employed by the LOD community to reduce bias and promote inclusivity?
  • What is the impact of bias in LOD on applications (e.g. generative AI) and users?

Within the HAICu team, the postdoc researcher will participate in Work Package 5, titled “Construction of polyvocal, multimodal narratives.” In this Work Package, CWI will collaborate with UvA and VU and with the National Museum of World Cultures.

The researcher will be based at CWI in a dynamic research group called Human-Centered Data Analytics (HCDA). HCDA investigates human-centered, responsible AI in the culture and media sectors. How can we ensure that digital systems are inclusive, promote diversity, and can be used to combat misinformation? The HCDA group addresses these important questions. Our work includes a wide range of techniques, such as statistical AI (machine learning), symbolic AI (knowledge graphs, reasoning), and human computation (crowdsourcing). By analyzing empirical evidence of human interactions with data and systems, we derive insights into the impact of design and implementation choices on users. We maintain close collaborations with professionals from the culture and media sectors, as well as social scientists and humanities scholars, through the Cultural AI Lab and the AI, Media and Democracy Lab. These interdisciplinary labs provide us with opportunities to work with real data and real-world use cases.

19 applications
0 views



25-11-2024 Centrum Wiskunde & Informatica
Postdoctoral position in AI/multi-agent modelling of dynamics in social media (M/F/X)

The Amsterdam AI, Media and Democracy Lab

Artificial Intelligence (AI) is expected to play a crucial role in the future of social media. AI can contribute to new ways of informing and engaging with citizens, but to achieve this goal it must address the pressing problems presented by the spread of disinformation, polarisation and fake news.

The Netherlands AI, Media and Democracy Lab (AI4DEM) https://www.aim4dem.nl/ aims to create models of how rapid developments in AI will transform the media and democracy landscape. AI4DEM was set up as an interdisciplinary collaboration between three top academic institutions in the Amsterdam area (UvA, HvA and CWI), together with many companies, media organisations and societal partners.

The Intelligent and Autonomous Systems group at CWI
The proposed position will be based in the IAS group at Centrum Wiskunde & Informatica (CWI). Based in the Science Park, Amsterdam, CWI is the national research institute for Mathematics and Computer Science in the Netherlands. The Intelligent and Autonomous Systems research group at CWI (https://www.cwi.nl/en/groups/intelligent-and-autonomous-systems/) studies distributed intelligence and autonomy in complex cyber-physical systems, and applies them to concrete areas of societal relevance, including smart energy systems, distributed logistics, financial markets and online social networks. IAS researchers have extensive experience in areas like complex networks, multi-agent system design, automated markets, algorithmic game theory and automated negotiation.

Background problem for the postdoc position
The postdoctoral researcher will work closely with staff researchers in the IAS group and across the AI4DEM consortium, focusing on AI/multi-agent models for the dynamics and prevention of disinformation and polarisation. This involves studying complex social networks, formed of both humans and automated agents, and modelling how agents in the network can be influenced by the spread of fake news and, in turn, influence others. Such interactions can produce complex system dynamics, for example “cascade effects” in which a particular piece of disinformation spreads rapidly through a social network. It also involves studying how different parameters influence such dynamics, and how game-theoretic methods can be designed to prevent the spread of disinformation in social networks.

While disinformation has always been a problem in social media and online news, recent advances in large language models (LLMs) have brought increasing urgency to addressing these challenges. The ability to effortlessly generate vast amounts of content, both informative and persuasive, can transform media dynamics drastically. By posing as human users or content creators, AI agents can, for example, make users believe misleading or false news, or lead to the creation of filter bubbles in which users' own biases are reinforced. Moreover, they can corrupt online decision-making or voting systems by creating the impression that some biased point of view is more popular or accepted than it really is.

Relevant research topics:
Some specific potential directions relevant for this position include:

  • Multi-agent models for the spread of disinformation and polarisation on online media platforms. Such models can capture how individual agent behaviours lead to complex collective effects, such as cascade dynamics in the spread of disinformation in social networks, or polarisation (e.g. Schelling segregation models).
  • Game-theoretic methods for incentivising agents in social networks. Methods from algorithmic game theory and mechanism design can be employed to reward truthful information spreading, and to identify and penalise agents that aim to influence others by spreading disinformation.
  • Machine learning, network science and game theory to construct models that explain the dynamics of opinion formation and the spread of (dis)information in social networks, especially when populated by both humans and LLM agents impersonating humans.
  • Study of the dynamics of large online deliberation and decision-making platforms in the presence of strategic and potentially malicious agents. Here, we aim to develop links to the international multi-agent research community, such as the subcommunity working on the International Computational Social Choice Competition: https://compsoc.algocratic.org/
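To give a flavour of the cascade dynamics mentioned above, the following is a minimal, self-contained sketch of a linear-threshold cascade on a toy network. It is an illustration only, not part of the project's methodology: the network, the threshold value, and the function name are all hypothetical, and real models in this research area are far richer (probabilistic activation, heterogeneous thresholds, strategic agents).

```python
def threshold_cascade(graph, seeds, threshold=0.5):
    """Deterministic linear-threshold cascade: a node adopts a piece of
    (dis)information once the fraction of its neighbours that have
    already adopted reaches `threshold`. Returns the final adopter set.

    `graph` is a hypothetical adjacency-list dict {node: [neighbours]}.
    """
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbours in graph.items():
            if node in adopted or not neighbours:
                continue
            frac = sum(n in adopted for n in neighbours) / len(neighbours)
            if frac >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# Toy chain network: a single seed can tip every node one by one.
graph = {
    "a": ["b"],
    "b": ["a", "c"],
    "c": ["b", "d"],
    "d": ["c"],
}
print(threshold_cascade(graph, seeds={"a"}))  # all four nodes adopt
```

Even this toy example shows the parameter sensitivity the ad refers to: with the same seed, raising the threshold above 0.5 stops the cascade at the seed node, because each interior node has only half of its neighbours adopting.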

6 applications
0 views


20-11-2024 Centrum Wiskunde & Informatica