How to Cope with the Future AI Act

Intra-company training

Who is the training for?

  • Designers of AI technologies
  • Digital transformation officers
  • Data protection officers
  • Leaders and managers in companies

Duration

4 hours

Language(s) of service

EN

Goals

While AI-assisted tools are acknowledged to improve organisational processes, there are growing concerns about their embedded algorithms, which can remain biased.

As previously with the General Data Protection Regulation (GDPR), companies will have to comply with an upcoming EU regulation governing their use of AI-assisted systems, in order to prevent the perpetuation of historical patterns of discrimination (e.g. against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientations).

At the end of the training, participants will be:

  • Aware of the main paradigms of trustworthy AI
  • Aware of the challenges the upcoming regulation poses for their organisations
  • Able to identify "risky" AI-assisted tools used in their companies (see the sketch below)
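
As an illustration of the last objective, below is a minimal sketch, in Python, of how an organisation might pre-screen its AI-assisted tools against the AI Act's risk tiers (prohibited practices, high-risk, limited-risk, minimal-risk). The classify_ai_system helper and its keyword lists are simplified assumptions for illustration only; an actual assessment requires a legal review of each system against the Act's annexes.

    def classify_ai_system(use_case: str) -> str:
        """Return an indicative AI Act risk tier for a described use case.

        Highly simplified illustration: keyword matching is no substitute
        for a legal analysis of the system against the Act's annexes.
        """
        prohibited = ("social scoring", "subliminal manipulation")
        high_risk = ("recruitment", "credit scoring", "biometric identification",
                     "education admission", "recidivism assessment")
        limited_risk = ("chatbot", "deepfake")

        text = use_case.lower()
        if any(term in text for term in prohibited):
            return "prohibited practice"
        if any(term in text for term in high_risk):
            return "high-risk (strict obligations)"
        if any(term in text for term in limited_risk):
            return "limited risk (transparency obligations)"
        return "minimal risk"

    if __name__ == "__main__":
        for tool in ("CV screening for recruitment",
                     "customer support chatbot",
                     "spam filter"):
            print(tool, "->", classify_ai_system(tool))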

Contents

  • Trustworthy AI: what does it mean?
  • Sources and risks of biased AI-assisted tools for your organisation (illustrations: Amazon's recruitment engine, the COMPAS recidivism-assessment algorithm, facial recognition)
  • The AI Act
  • How to increase fairness in AI algorithms (metrics, bias mitigation methods, toolkits and programmes, data collection, explainable AI)? An example metric is sketched after this list.
  • Case study: an AI-assisted programme to match CVs with job offers while avoiding age bias (use of the LIST technological demonstrator AMANDA).
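
To give a flavour of the fairness metrics listed above, here is a minimal sketch, in plain Python, of the demographic parity difference applied to hypothetical CV-shortlisting decisions for two age groups. The demographic_parity_difference function and the sample data are assumptions made for illustration; the course itself works with the LIST demonstrator AMANDA rather than this code.

    def demographic_parity_difference(decisions, groups, group_a, group_b):
        """Difference in positive-decision rates between two groups.

        decisions: iterable of 0/1 outcomes (1 = CV shortlisted)
        groups:    iterable of group labels aligned with decisions
        A value close to 0 means similar selection rates; a large gap is
        one possible warning sign of bias (one metric among several).
        """
        def rate(group):
            selected = [d for d, g in zip(decisions, groups) if g == group]
            return sum(selected) / len(selected)

        return rate(group_a) - rate(group_b)

    if __name__ == "__main__":
        # Hypothetical shortlisting decisions for two age groups.
        decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
        groups = ["<40", "<40", "<40", "<40", "<40",
                  ">=40", ">=40", ">=40", ">=40", ">=40"]
        gap = demographic_parity_difference(decisions, groups, "<40", ">=40")
        print(f"Demographic parity difference: {gap:+.2f}")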

Certificate, diploma

At the end of the course, participants will receive a certificate of attendance issued by the House of Training and Digital Learning Hub.

Additional information

  • Theory/practice
  • Case studies/illustrations (e.g. recruitment, facial recognition, justice)
  • In situ learning (i.e. use of a technological demonstrator)
