Mocha is a Deep Learning framework for Julia, inspired by the C++ framework Caffe. Efficient implementations of general stochastic gradient solvers and common layers in Mocha can be used to train deep / shallow (convolutional) neural networks, with (optional) unsupervised pre-training via (stacked) auto-encoders.
Company size: Small (<50 employees), Medium (50 to 1000 employees), Enterprise (>1000 employees)
Company: Mocha
What is best? • High-level Interface • Portability and Speed • Open Source • Highly Efficient Computation
PAT Rating™
(Editor Rating / Aggregated User Rating)
Ease of use: 8.4 / 5.6
Features & Functionality: 8.6 / 6.6
Advanced Features: 8.4 / —
Integration: 8.6 / —
Performance: 8.5 / —
Customer Support: 7.6 / —
Implementation: —
Renew & Recommend: —
Bottom Line
Mocha has a clean architecture with isolated components such as network layers, activation functions, solvers, regularizers, and initializers. The built-in components are sufficient for typical deep (convolutional) neural network applications, and more are being added in each release.
Editor Rating: 8.4
Aggregated User Rating: 6.1 (2 ratings)
Mocha has a clean architecture with isolated components such as network layers, activation functions, solvers, regularizers, and initializers. The built-in components are sufficient for typical deep (convolutional) neural network applications, and more are being added in each release. All of them can be extended easily by adding custom sub-types. Mocha is written in Julia, a high-level dynamic programming language designed for scientific computing. Thanks to the expressive power of Julia and its package ecosystem, working with deep neural networks in Mocha is easy and intuitive.
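As a concrete illustration of how these isolated components fit together, the sketch below follows the general pattern of Mocha's MNIST tutorial: layers are declared individually, assembled into a Net on a backend, and trained with a stochastic gradient descent solver. It is an outline under assumptions, not copy-paste code: the file path "train.txt" is hypothetical, and solver-construction helpers such as make_solver_parameters have changed names between Mocha releases.

using Mocha

# Data layer: reads HDF5 files listed in the text file "train.txt"
# (hypothetical path; see the HDF5 example further down).
data_layer = HDF5DataLayer(name="train-data", source="train.txt", batch_size=64)

# A small convolutional network: conv -> pool -> fully-connected -> softmax loss.
conv_layer = ConvolutionLayer(name="conv1", n_filter=20, kernel=(5,5),
                              bottoms=[:data], tops=[:conv1])
pool_layer = PoolingLayer(name="pool1", kernel=(2,2), stride=(2,2),
                          bottoms=[:conv1], tops=[:pool1])
fc_layer   = InnerProductLayer(name="ip1", output_dim=10,
                               bottoms=[:pool1], tops=[:ip1])
loss_layer = SoftmaxLossLayer(name="loss", bottoms=[:ip1, :label])

# Assemble the isolated components into a network on a chosen backend.
backend = CPUBackend()
init(backend)
net = Net("example-net", backend,
          [data_layer, conv_layer, pool_layer, fc_layer, loss_layer])

# Train with the built-in stochastic gradient descent solver.
method = SGD()
params = make_solver_parameters(method, max_iter=10000, regu_coef=0.0005,
                                mom_policy=MomPolicy.Fixed(0.9),
                                lr_policy=LRPolicy.Inv(0.01, 0.0001, 0.75))
solver = Solver(method, params)
solve(solver, net)

destroy(net)
shutdown(backend)

Because every layer, solver policy and backend is a separate component, swapping an activation function or a learning-rate schedule only touches the corresponding declaration; the rest of the script stays unchanged.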
Mocha comes with multiple backends that can be switched transparently. The pure Julia backend is portable, as it runs on any platform that supports Julia. It is reasonably fast on small models thanks to Julia's LLVM-based just-in-time (JIT) compiler and performance annotations, and it is very useful for prototyping. The native extension backend can be turned on when a C++ compiler is available; it runs 2 to 3 times faster than the pure Julia backend. The GPU backend, on the other hand, uses NVIDIA cuDNN, cuBLAS and customized CUDA kernels to provide highly efficient computation; a speedup of 20 to 30 times or more can be observed on a modern GPU device, especially on larger models.
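A minimal sketch of what "switched transparently" looks like in practice is shown below. The environment flags follow the pattern described in Mocha's configuration documentation, but their exact names are an assumption and may vary by version; the network and solver code itself does not change with the backend.

# Select a backend before loading Mocha (flag names are an assumption):
# ENV["MOCHA_USE_NATIVE_EXT"] = "true"   # native C++ extension backend
# ENV["MOCHA_USE_CUDA"]       = "true"   # CUDA/cuDNN GPU backend

using Mocha

# Only this line changes between backends; the rest of the script is identical.
backend = CPUBackend()      # or GPUBackend() when the CUDA extension is enabled
init(backend)

# ... build the Net and run the solver exactly as before ...

shutdown(backend)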
Mocha uses the widely adopted HDF5 format to store both datasets and model snapshots, making it easy to interoperate with MATLAB, Python (numpy) and other existing computational tools. Mocha also provides tools to import trained model snapshots from Caffe. And since it is an open-source project, users are free to use and extend it.
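As a sketch of the HDF5 workflow, the snippet below writes a toy dataset with the HDF5.jl package in the layout that Mocha's HDF5DataLayer reads. The dataset key names ("data", "label") and the dimension ordering are taken from the Mocha documentation as best recalled here and should be treated as assumptions; the file names are hypothetical.

using HDF5

# Toy dataset: 100 samples of 28x28 single-channel images plus labels,
# in a width x height x channels x samples layout (assumption).
data  = rand(Float32, 28, 28, 1, 100)
label = Float32.(rand(0:9, 1, 100))

# One HDF5 file holding "data" and "label" datasets, the key names
# HDF5DataLayer is documented to look for (assumption).
h5open("train.hdf5", "w") do file
    write(file, "data", data)
    write(file, "label", label)
end

# HDF5DataLayer takes a plain text file listing the HDF5 files to use.
open("train.txt", "w") do f
    println(f, "train.hdf5")
end

The same "train.hdf5" file can be opened from MATLAB or from Python via h5py, which is what makes the format convenient for exchanging datasets and snapshots with other tools.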