How to Cope With the Future AI Act

In-company training

Who is this training for?

  • AI technology designers
  • Digital transformation officers
  • Data protection officers
  • Leaders and managers in companies

Duration

4 hours

Language(s) of instruction

EN

Next session

Objectives

While AI-assisted tools are acknowledged to improve organisations' processes, there are also growing concerns about the biases embedded in their algorithms. As was previously the case with the General Data Protection Regulation (GDPR), companies will have to comply with an upcoming EU regulation governing their use of AI-assisted systems in order to prevent the perpetuation of historical patterns of discrimination (e.g., against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation).

At the end of the training, participants will be:

  • Aware of the main paradigms of trustworthy AI
  • Aware of the challenges the future regulation poses for their organisations
  • Able to identify "risky" AI-assisted tools used in their companies

Content

  • Trustworthy AI: what does it mean?
  • Sources and risks of biased AI-assisted tools for your organisation (illustrations: the Amazon recruitment engine, the COMPAS recidivism-assessment algorithm, facial recognition)
  • The AI Act
  • How to increase fairness in AI algorithms (metrics, bias mitigation methods, toolkits and programmes, data collection, explainable AI)? A short illustration of one such metric follows this list.
  • Case study: an AI-assisted programme that matches CVs with job offers while avoiding age bias (using AMANDA, the LIST technological demonstrator).
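
To make the "metrics" item above concrete, here is a minimal, illustrative Python sketch that is not part of the course material: it computes the demographic parity difference, i.e., the gap in selection rates between two protected groups, for a hypothetical CV-screening classifier. All data and names (predictions, age_group, selection_rate) are invented for illustration.

  # Minimal sketch (illustrative only): demographic parity difference for a
  # hypothetical CV-screening model. 1 = CV shortlisted, 0 = rejected.

  def selection_rate(predictions, groups, value):
      """Share of positive predictions within one protected group."""
      scores = [p for p, g in zip(predictions, groups) if g == value]
      return sum(scores) / len(scores) if scores else 0.0

  # Toy classifier outputs and the corresponding age-group labels (invented data).
  predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
  age_group = ["under_40"] * 5 + ["40_plus"] * 5

  rate_young = selection_rate(predictions, age_group, "under_40")
  rate_older = selection_rate(predictions, age_group, "40_plus")

  # A difference of 0.0 means both age groups are shortlisted at the same rate.
  print(f"Selection rate (under 40): {rate_young:.2f}")
  print(f"Selection rate (40+):      {rate_older:.2f}")
  print(f"Demographic parity difference: {abs(rate_young - rate_older):.2f}")

In practice, open-source fairness toolkits provide this and many related metrics, together with bias-mitigation methods.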

Certificate, diploma

At the end of the course, participants will receive a certificate of attendance issued by the House of Training and DLH.

Additional information

  • Theory/Practice
  • Case studies/illustrations (e.g., recruitment, face recognition, justice)
  • In situ learning (i.e., use of a technological demonstrator)
