Responsible AI

As AI technologies become more common, the need for trust in AI grows even faster. Before you can trust the output of an AI machine, you need to trust the AI model, the dataset used to train it, and the statements about governance and compliance made by the AI vendor.

Responsible AI includes an ethical and legal viewpoint to ensure that AI works for the good of society; fundamental to this are trust and transparency.

As consumers of the AI model:

  • We need to be certain that an AI machine is making decisions that are no worse than those that would be made by a trained and competent human.
  • We need to know that it has been trained on ‘good’ data, not ‘bad’ data.
  • We need to know that the system has been designed to be compliant with the correct standards and policies.
  • We need to know that it will not misuse our personal information.
  • We need to know that the system is being developed and improved to those same standards.

Above all, we don’t want to take the vendor’s word for it: they need to prove it!

DataTrails enables this by providing an immutable lineage record (the data trail) for all aspects of the AI machine. This underpins responsible and ethical governance, coupled with transparency and traceability of the training data and output analysis. Together these enhance the explainability and interpretability of the AI machine’s output, which builds trust and enables efficient decision making by the user, whether that user is a human or another AI machine.

Opportunities for Transparency

Techniques such as the following offer natural opportunities to capture transparent evidence of how the AI machine reached its output, as sketched below:

  • RAG: Retrieval Augmented Generation
  • SHAP: SHapley Additive exPlanations
  • LIME: Local Interpretable Model-agnostic Explanations
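
A minimal sketch of one such opportunity, assuming the open-source shap and scikit-learn libraries and using a placeholder model and dataset: per-prediction explanation values are generated so they can be attached to the provenance record of the corresponding output.

```python
# Illustrative sketch: generating per-prediction explanations with SHAP.
# The model and dataset are placeholders; any tree-based model works with
# TreeExplainer.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP values quantify how much each feature pushed a particular prediction
# up or down, giving a per-decision explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# These explanations can be recorded alongside the AI machine's output,
# making each decision traceable and interpretable.
print(shap_values)
```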

Considerations

Policy and Standards Compliance: A set of Asset attributes can be created to record the baseline compliance of the AI system. This can include internal policies such as Bias, Discrimination and Copyright statements, or external policies such as GDPR and other legal frameworks. Any policy changes or changes in compliance status can be recorded as Events to build an immutable record of compliance over time.
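
As an illustration of how that baseline and a later change might be recorded, the sketch below calls the DataTrails REST API directly. The endpoint paths follow the public v2 Assets and Events API; the attribute names, asset details and audit reference are purely illustrative, and DATATRAILS_TOKEN is assumed to be an access token you have already obtained.

```python
# Sketch only: attribute names and values are illustrative placeholders.
import os
import requests

BASE = "https://app.datatrails.ai/archivist/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DATATRAILS_TOKEN']}"}

# Register the AI system as an Asset, recording its baseline compliance
# position as Asset attributes.
asset = requests.post(
    f"{BASE}/assets",
    headers=HEADERS,
    json={
        "behaviours": ["RecordEvidence"],
        "attributes": {
            "arc_display_name": "Credit-scoring AI system",   # illustrative
            "arc_display_type": "AI System",
            "bias_policy": "Internal Bias Policy v1.2",       # illustrative
            "copyright_statement": "Training data licensed",  # illustrative
            "gdpr_compliant": "true",                         # illustrative
        },
    },
).json()

# Record a later change in compliance status as an Event against the Asset,
# building the immutable record of compliance over time.
requests.post(
    f"{BASE}/{asset['identity']}/events",  # identity is e.g. "assets/<uuid>"
    headers=HEADERS,
    json={
        "operation": "Record",
        "behaviour": "RecordEvidence",
        "event_attributes": {
            "arc_description": "Annual GDPR audit passed",    # illustrative
            "audit_reference": "AUDIT-2024-07",               # illustrative
        },
        "asset_attributes": {"gdpr_compliant": "true"},
    },
)
```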

The AI Model and the Training Data: The versions of the AI process model, the AI machine software and the training datasets could also be recorded as Asset attributes. Other things to include are changes to the training model and any manual training decisions that influence the output of the AI machine. Recording updates as Events transparently captures the version history of the working components of the AI system as it is developed and improved.
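
Following the same pattern, a retraining or version change could be captured as an Event that also updates the Asset's current version attributes. Again a sketch only: the asset identity, attribute names and version strings below are placeholders.

```python
# Sketch: recording a new model/training-data version as an Event.
# The asset identity and all attribute names are illustrative placeholders.
import os
import requests

BASE = "https://app.datatrails.ai/archivist/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DATATRAILS_TOKEN']}"}
ASSET_IDENTITY = "assets/00000000-0000-0000-0000-000000000000"  # placeholder

requests.post(
    f"{BASE}/{ASSET_IDENTITY}/events",
    headers=HEADERS,
    json={
        "operation": "Record",
        "behaviour": "RecordEvidence",
        "event_attributes": {
            "arc_description": "Retrained on dataset v7 after manual label review",
            "training_decision": "Removed records flagged as biased",  # illustrative
        },
        # Updating Asset attributes here keeps the Asset's current state in
        # step with the version history recorded by the Events.
        "asset_attributes": {
            "model_version": "2.4.0",
            "training_dataset_version": "v7",
        },
    },
)
```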

Access Policies: Use Access Policies to enable fine-grained control over access to the data. Access Policies give stakeholders transparent access to the untampered provenance record they need to make decisions and gain trust in the system.
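
A sketch of what such a policy might look like follows. The IAM endpoint path, filter syntax and permission fields are assumptions based on the public DataTrails API, and the subject identity is a placeholder for a real stakeholder such as an external auditor.

```python
# Sketch only: endpoint path, filter syntax and permission fields are
# assumptions; the subject identity is a placeholder.
import os
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['DATATRAILS_TOKEN']}"}

requests.post(
    "https://app.datatrails.ai/archivist/iam/v3/access_policies",
    headers=HEADERS,
    json={
        "display_name": "AI system audit access",  # illustrative
        "description": "Read access to the AI system's provenance for auditors",
        # Select which Assets the policy applies to.
        "filters": [{"or": ["attributes.arc_display_type=AI System"]}],
        # Grant the named stakeholder access to the selected Assets.
        "access_permissions": [
            {
                "subjects": ["subjects/00000000-0000-0000-0000-000000000000"],
                "behaviours": ["RecordEvidence"],
            }
        ],
    },
)
```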