XAIN's AI Privacy Engine
XAIN provides privacy-preserving technology dedicated to keeping the data used for training AI projects private. Our privacy engine for machine learning is compliant with data privacy regulations such as the GDPR and the CCPA. It offers a simple, scalable form of multi-party computation based on Federated Learning that reflects the technical and regulatory needs of commercial AI projects.
Find out how this works and why it makes complicated and costly anonymization obsolete.
Background - Why Privacy in AI matters
Effective AI, especially machine learning, requires large amounts of data. Very often, such data is personal or sensitive. AI therefore needs to use data in an ethical and compliant manner that the public will accept. Read our blog article about why privacy in AI is so critical, and use technology to protect your data.
Model Aggregation Service
Our privacy engine for AI is based on Federated Learning, which allows AI models to be trained on local data without that data having to be moved. This overcomes the technical and legal barriers posed by decentralized data silos that you wish to use for training an AI. On top of this, Federated Learning produces effective AI models.
1 | Local Training
Each local model is trained with local data within the environment of its data controller.
2 | Model Aggregation
Selected updated local models are communicated to a Coordinator, which aggregates these updates into a global model in a privacy-preserving manner.
3 | Global Model Distribution
The global model computed by the Coordinator has higher accuracy and is communicated back to the local training sites.
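The three steps above can be sketched as a federated averaging loop. This is a minimal illustration, not XAIN's actual implementation: the function names, the simple logistic-regression model, and the weighted-average aggregation are all assumptions introduced for the example.

```python
import numpy as np

def local_training(weights, X, y, lr=0.1, epochs=5):
    # Step 1: train a local model on local data only (hypothetical
    # logistic-regression example); the raw data never leaves this function.
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (preds - y)) / len(y)
    return w

def aggregate(local_weights, sizes):
    # Step 2: the Coordinator combines the updated local models into a
    # global model, here via a dataset-size-weighted average.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Simulated participants, each holding private local data.
rng = np.random.default_rng(0)
participants = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
    participants.append((X, y))

global_model = np.zeros(4)
for _ in range(10):
    # Steps 1-2: collect local updates and aggregate them.
    updates = [local_training(global_model, X, y) for X, y in participants]
    global_model = aggregate(updates, [len(y) for _, y in participants])
    # Step 3: the new global model is distributed back to all
    # participants at the start of the next round.
```

Only model weights cross the boundary between participants and Coordinator; the training data itself stays in place.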
Understanding Data Controlling vs. Data Processing
Among the many privacy notions in AI, it is crucial to understand the implications arising from the legal definitions of data processing and data controlling (Article 4 of the General Data Protection Regulation). Continue reading in our blog to learn how to keep your lawyers happy when applying machine learning.
How Federated Learning performs
We have benchmarked our technology on classification problems by comparing it to a siloed baseline in which machine learning takes place on each dataset separately, without any sharing of locally updated models.
The benefits of Federated Learning are already visible when the data is independent and identically distributed across partitions (IID_balanced). Federated Learning does better here simply because it has access to more data, even though that access is indirect, through the repeated aggregation of locally updated models.
For more information on benchmark results, please refer to our Whitepaper.
Still have questions?
If you wish to receive more information about using the XAIN platform or have other queries, we're more than happy to help.