2  Requirements Engineering

Requirements engineering consists of understanding, specifying, and managing requirements to minimize the risk of delivering a system that does not meet the stakeholders’ desires and needs.

The result is a collection of functional and non-functional requirements that drives the design and implementation of the system.

How do we define requirements for ML systems? Two categories deserve particular attention:

Data Quantity and Quality Requirements

Performance Requirements

How is performance measured? With metrics such as accuracy, precision, recall, F1-score, and ROC-AUC.
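For reference, the standard definitions of the first three, in terms of true positives (TP), false positives (FP), and false negatives (FN):

```latex
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
```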

Performance requirements for ML systems demand a rigorous analysis of the problem to be solved: we need to balance precision and recall depending on the application domain. If we are building a spam filter, we want high precision, to avoid false positives (legitimate mail flagged as spam); if we are building a cancer detection system, we want high recall, to avoid false negatives (missed cancers).
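To make the trade-off concrete, the sketch below sweeps the decision threshold of a probabilistic classifier: raising the threshold predicts fewer positives, which typically raises precision and lowers recall. The scores here are synthetic, purely for illustration; in a real project they would come from your trained model on a held-out evaluation set.

```python
# Sketch of the precision/recall trade-off via the decision threshold.
# Scores are fabricated so the example is self-contained and runnable.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)  # ground-truth labels (0 or 1)
# Fake scores: positive examples tend to score higher than negatives.
y_score = np.clip(y_true * 0.35 + rng.normal(0.4, 0.25, size=1000), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_score >= threshold).astype(int)
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Running it shows precision climbing and recall dropping as the threshold rises: the spam filter would pick a high threshold, the cancer detector a low one.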

As for the non-functional requirements, ML systems introduce new types of quality attributes, summarized in the checklist below.

2.1 Model Requirements Checklist

  • Set minimum accuracy expectations (see the sketch after this list)
  • Identify runtime needs at inference time for the model
    • latency, inference throughput, cost of operation
  • Identify evolution needs for the model
    • Frequency of model updates, latency for those updates, cost of training and experimentation, ability to incrementally learn
  • Identify explainability needs for the model in the system
  • Identify safety and fairness concerns in the system
  • Identify how security and privacy concerns in the system relate to the model
    • including both legal and ethical concerns
  • Understand what data is available
    • quantity, quality, formats, provenance
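Several of these items can be pinned down as automated acceptance checks, so a model release fails fast when it misses its targets. A minimal sketch follows; `predict` is a placeholder for your model's real inference call, the evaluation data is a toy stand-in, and both thresholds are illustrative values to be replaced with the targets agreed with stakeholders.

```python
# Sketch: encoding model requirements as automated checks.
# `predict`, the evaluation data, and the thresholds are all placeholders.
import time

MIN_ACCURACY = 0.90          # minimum accuracy expectation
MAX_P95_LATENCY_MS = 50.0    # inference latency budget (95th percentile)

def predict(x):
    """Placeholder for the real model's inference call."""
    return 1

def check_requirements(eval_set):
    correct, latencies = 0, []
    for x, y in eval_set:
        start = time.perf_counter()
        y_hat = predict(x)
        latencies.append((time.perf_counter() - start) * 1000.0)
        correct += int(y_hat == y)
    accuracy = correct / len(eval_set)
    latencies.sort()
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    assert accuracy >= MIN_ACCURACY, f"accuracy {accuracy:.3f} below target"
    assert p95 <= MAX_P95_LATENCY_MS, f"p95 latency {p95:.1f} ms over budget"

check_requirements([(x, 1) for x in range(100)])
print("all model requirements satisfied")
```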

The ML Canvas is a useful tool to capture and document all the requirements for an AI-enabled system.
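One lightweight way to keep the canvas close to the code is to represent it as a structured record. The sketch below is one possible encoding; the field names paraphrase blocks commonly shown on the ML Canvas and should be adapted to the template you actually use.

```python
# Sketch: the ML Canvas as a structured record. Field names paraphrase
# blocks commonly found on the canvas; adapt them to your own template.
from dataclasses import dataclass

@dataclass
class MLCanvas:
    value_proposition: str   # who the system serves and why
    prediction_task: str     # what is predicted, from what input
    data_sources: str        # where training data comes from
    features: str            # representations fed to the model
    offline_evaluation: str  # metrics and targets before deployment
    decisions: str           # how predictions drive actions
    collecting_data: str     # how new data keeps arriving
    monitoring: str          # live evaluation once deployed

canvas = MLCanvas(
    value_proposition="Flag spam before it reaches the inbox",
    prediction_task="Binary classification: spam vs. ham",
    data_sources="User-reported spam, public corpora",
    features="Token counts, sender reputation",
    offline_evaluation="Precision >= 0.99 at recall >= 0.80",
    decisions="Move flagged mail to the spam folder",
    collecting_data="'Report spam' / 'Not spam' buttons",
    monitoring="Weekly precision audit on sampled mail",
)
print(canvas.value_proposition)
```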

2.2 EU Guidelines for Trustworthy AI

Trustworthy AI should be:

  • lawful: respecting all applicable laws and regulations
  • ethical: respecting ethical principles and values
  • robust: from a technical perspective, while also taking into account its social environment

The guidelines define seven key requirements that trustworthy AI systems should meet:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental wellbeing
  7. Accountability

The EU AI Act defines four levels of risk for AI systems:

  • Minimal risk: applications such as spam filters or video games
  • Limited risk: applications that require transparency obligations, such as chatbots
    • They need to inform users that they are interacting with an AI system
  • High risk: applications that have a significant impact on people’s lives, such as AI systems in medical devices
    • They have strict obligations before they can be put on the market
  • Unacceptable risk: applications considered a threat to people’s safety, livelihoods, and rights, such as social scoring
    • They are banned outright