This is a personal Caffe implementation of the MobileNet depthwise convolution layer. For details, please read the original MobileNet paper.
- Merge the `caffe` folder in this repo into your own Caffe tree:

```shell
$ cp -r $REPO/caffe/* $YOURCAFFE/
```

- Then build:

```shell
$ cd $YOURCAFFE && make
```
Changing the type of each depthwise convolution layer to "DepthwiseConvolution" is all that is needed. Please refer to example/Withdw_MN_train_128_1_train.prototxt, which is adapted from a standard MobileNet training prototxt.
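For example, a depthwise layer needs only its `type` field changed (the layer name and parameter values below are illustrative, not taken from the example prototxt):

```protobuf
layer {
  name: "conv2_1/dw"              # illustrative name
  type: "DepthwiseConvolution"    # was: "Convolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  convolution_param {
    num_output: 32
    group: 32                     # group == num_output makes it depthwise
    kernel_size: 3
    pad: 1
    stride: 1
  }
}
```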
| GPU Performance | Origin[^1] | Mine |
|---|---|---|
| forward, batch 1 | 41 ms | 8 ms |
| backward, batch 1 | 51 ms | 11 ms |
| forward, batch 16 | 532 ms | 36 ms |
| backward, batch 16 | 695 ms | 96 ms |
I wrote a script, `transfer2Mobilenet.py`, to convert a normal network prototxt to MobileNet format. You may try it too. Usage:
```shell
python ./transfer2Mobilenet.py sourceprototxt targetprototxt [--midbn nobn --weight_filler msra --activation ReLU]
```

Passing `--origin_type` keeps the depthwise convolution layers' type as "Convolution" instead of "DepthwiseConvolution".
The script `transferTypeToDepthwiseConvolution.py` changes the depthwise convolution layers' type from "Convolution" to "DepthwiseConvolution".
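The conversion can be sketched as a simple text pass over the prototxt: treat any convolution layer whose `convolution_param` sets `group` greater than 1 as depthwise, and rewrite only its `type` field. This is a minimal illustration of the idea, not the repo script itself; the actual script may instead use Caffe's protobuf API:

```python
import re

def convert_depthwise_types(prototxt: str) -> str:
    """Rewrite type: "Convolution" to "DepthwiseConvolution" for layers
    whose convolution_param contains group > 1 (a depthwise heuristic)."""
    out_lines = []
    block = []      # lines of the layer block currently being collected
    depth = 0       # brace-nesting depth inside that block
    for line in prototxt.splitlines():
        if depth == 0 and line.strip().startswith("layer"):
            block = [line]
            depth = line.count("{") - line.count("}")
            continue
        if block:
            block.append(line)
            depth += line.count("{") - line.count("}")
            if depth == 0:  # layer block closed: decide and flush it
                blk = "\n".join(block)
                m = re.search(r'group:\s*(\d+)', blk)
                if m and int(m.group(1)) > 1:
                    blk = blk.replace('type: "Convolution"',
                                      'type: "DepthwiseConvolution"')
                out_lines.append(blk)
                block = []
        else:
            out_lines.append(line)
    return "\n".join(out_lines)
```

A plain pointwise convolution (no `group`, or `group: 1`) is left untouched, so the pass is safe to run over a whole MobileNet prototxt.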
[^1]: With cudnn enabled, MobileNet's memory consumption grows to an unreasonable level. You may try it yourself.