The Project Philly Team has built a turnkey GPU compute cluster dedicated to large-scale
Deep Neural Network (DNN) training with CNTK. The cluster supports seamless multi-tenancy
and fault-tolerant DNN training scale-out on an infrastructure built atop Linux, Docker,
InfiniBand, YARN, GPUDirect RDMA, and other open-source cluster management components.
Philly supports training with CNTK and TensorFlow, and offers limited support for custom Docker
images that leverage other deep learning toolkits. If you want to train on large volumes of data
quickly, AI+R has optimized CNTK, our open-source DNN toolkit, to scale out model training
to hundreds of GPUs. This enables rapid, no-hassle DNN experimentation and use of much larger
models/training sets and more advanced algorithms. We are also investing in improved support
for distributed TensorFlow, planned for FY18 H1.