Deep transfer learning
Transfer learning is the process of adapting a model trained on one set of data to a different but related set of data. It is much faster than training a model from scratch, and it requires far less training data.
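To make the idea concrete, here is a minimal transfer learning sketch in Keras. The choice of MobileNetV2 as the pretrained base, the input shape, and the number of target classes are assumptions for illustration, not details drawn from any particular service.

```python
# A minimal transfer learning sketch: reuse an ImageNet-trained
# convolutional base and train only a small new classification head.
import tensorflow as tf

num_classes = 5  # assumption: number of classes in the new dataset

# Load pretrained features, dropping the original ImageNet classifier.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train only the new head on the new dataset (train_ds is assumed):
# model.fit(train_ds, epochs=5)
```

Because only the small new head is trained while the pretrained base stays frozen, training converges quickly and needs far less labeled data than building the whole network from scratch.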
Google Cloud AutoML implements deep transfer learning for vision, translation, and natural language. Azure Machine Learning Service offers similar deep transfer learning services in the form of custom vision, customizable speech and translation, and custom search.
Distributed deep learning training
While TensorFlow has its own way of coordinating distributed training with parameter servers, a more general approach uses Open MPI (an open source implementation of the Message Passing Interface). Horovod, a distributed training framework for TensorFlow, Keras, and PyTorch that was created at Uber, uses Open MPI as well as Nvidia's NCCL communication library. Horovod achieves between 68 percent and 90 percent scaling efficiency, depending on the model being trained.
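As a rough illustration of what Horovod asks of a training script, here is a minimal sketch for Keras. The model is a placeholder, and hyperparameters such as the learning rate are assumptions; the Horovod-specific calls (initialization, GPU pinning, the wrapped optimizer, and the initial weight broadcast) are the point.

```python
# A minimal sketch of Horovod data-parallel training with Keras.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # initialize Horovod (MPI/NCCL under the hood)

# Pin each worker process to a single GPU, if GPUs are present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([  # placeholder model for illustration
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the worker count, then wrap the optimizer
# so gradients are averaged across workers on each step.
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # Ensure every worker starts from the same initial weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Train as usual (train_ds is assumed):
# model.fit(train_ds, callbacks=callbacks, epochs=5)
```

The script is launched once per worker, for example with `horovodrun -np 4 python train.py`; Horovod then averages gradients across workers using ring-allreduce over MPI or NCCL.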
Deep learning books and resources
You can learn a lot about deep learning simply by installing one of the deep learning packages, trying out its samples, and reading its tutorials. For more depth, consider one or more of the following resources.
- Neural Networks and Deep Learning by Michael Nielsen
- A Brief Introduction to Neural Networks by David Kriesel
- Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
- A Course in Machine Learning by Hal Daumé III
- TensorFlow Playground by Daniel Smilkov and Shan Carter
- CS231n: Convolutional Neural Networks for Visual Recognition from Stanford University