There are two inevitable truths in machine learning: more data, and more computational power to train on that data. In a recent article I covered a basic compute and storage architecture for ML. One of the main assumptions in that architecture was that all training data is consolidated in a single location (storage system) and, more critically, that this data is readily available for ML training. What happens, then, if we need some data for training but, for a variety of reasons, that data cannot be stored in our environment? Enter
Decentralized ML training with Federated Learning
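Before diving in, here is the core idea in miniature: in federated learning, each client trains on data that never leaves its environment, and only model updates are sent to a central server, which averages them into a global model (the FedAvg scheme). The sketch below is illustrative, not a production implementation — the one-step local "training" on a toy 1-D linear model and all names are my own assumptions.

```python
# Minimal federated averaging (FedAvg) sketch: raw data stays on each
# client; only model weights travel to the server for aggregation.

def local_train(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: weights averaged, weighted by data size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients with private datasets, both drawn from y = 2*x
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
]

w = 0.0  # global model weight
for _ in range(50):  # communication rounds
    updates = [local_train(w, data) for data in clients]
    w = fed_avg(updates, [len(d) for d in clients])

print(round(w, 2))  # converges toward 2.0
```

Note that the server only ever sees the scalar updates, never the client datasets — that separation is what lets training proceed when the data cannot be centralized.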