
Caffe

Overview
Synopsis

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors.

Category

Deep Learning Software

Features

• Expressive architecture
• Extensible code
• Speed
• Community

License

BSD 2-Clause (open source)

Price

Free

Pricing

Open Source

Free Trial

Available

Users Size

Small (<50 employees), Medium (50 to 1000 employees), Enterprise (>1001 employees)

Company

Caffe

PAT Rating™ (Editor Rating / Aggregated User Rating)

Ease of use: 7.6 / 8.2
Features & Functionality: 7.6 / 8.3
Advanced Features: 7.6 / 8.6
Integration: 7.6 / 8.8
Performance: 7.6 / 6.3
Training: -
Customer Support: 7.6 / -
Implementation: -
Renew & Recommend: -
Bottom Line

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.

Editor Rating: 7.6
Aggregated User Rating: 8.0 (1 rating)

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license, and the BAIR/BVLC reference models are released for unrestricted use. Caffe can be installed and run on Ubuntu 16.04–12.04, OS X 10.11–10.8, and through Docker and AWS. Compiling Caffe’s GPU code requires the CUDA nvcc compiler, and running on a GPU requires the CUDA driver.

Caffe’s expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding, and users can switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.

Caffe’s extensible code fosters active development. In its first year, Caffe was forked by over 1,000 developers and had many significant changes contributed back; thanks to these contributors, the framework tracks the state of the art in both code and models.

Speed makes Caffe well suited to research experiments and industry deployment. Caffe can process over 60M images per day on a single NVIDIA K40 GPU, roughly 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. Caffe is among the fastest convnet implementations available.

Finally, Caffe has an active community. It already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Users can join Caffe’s community of brewers through the caffe-users group and GitHub.
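
To make the configuration-driven workflow and the single-flag CPU/GPU switch concrete, here is a minimal pycaffe sketch. It assumes a standard Caffe build with the Python bindings, and solver.prototxt is a placeholder path to a solver definition, not a file shipped with Caffe.

    import caffe

    # One call selects the compute mode; everything else is unchanged
    # between CPU and GPU runs.
    caffe.set_device(0)    # GPU id, only meaningful in GPU mode
    caffe.set_mode_gpu()   # swap for caffe.set_mode_cpu() on CPU-only machines

    # The network architecture and optimization schedule are defined
    # entirely in configuration files, so nothing is hard-coded here.
    # 'solver.prototxt' is a placeholder for your own solver definition.
    solver = caffe.SGDSolver('solver.prototxt')
    solver.solve()   # run training as configured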
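
Deployment follows the same pattern: the trained network described by its deploy configuration can be loaded for inference in a few lines. This sketch assumes hypothetical file names (deploy.prototxt, weights.caffemodel) and the conventional input blob name 'data'; the 227x227 RGB input shape is only an example.

    import numpy as np
    import caffe

    caffe.set_mode_cpu()   # e.g. deployment on a commodity machine without a GPU

    # Load the network structure and learned weights; both paths are placeholders.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Prepare a single input (random data standing in for a preprocessed
    # 227x227 RGB image) and run a forward pass.
    net.blobs['data'].reshape(1, 3, 227, 227)
    net.blobs['data'].data[...] = np.random.rand(1, 3, 227, 227)
    output = net.forward()
    print({name: blob.shape for name, blob in output.items()})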
