I0427 19:08:26.254542 23810 upgrade_proto.cpp:1082] Attempting to upgrade input file specified using deprecated 'solver_type' field (enum)': /mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190824-cae4/solver.prototxt
I0427 19:08:26.254765 23810 upgrade_proto.cpp:1089] Successfully upgraded file specified using deprecated 'solver_type' field (enum) to 'type' field (string).
W0427 19:08:26.254771 23810 upgrade_proto.cpp:1091] Note that future Caffe releases will only support 'type' field (string) for a solver's type.
I0427 19:08:26.254854 23810 caffe.cpp:218] Using GPUs 3
I0427 19:08:26.359385 23810 caffe.cpp:223] GPU 3: GeForce GTX 1080 Ti
I0427 19:08:27.103227 23810 solver.cpp:44] Initializing solver from parameters:
test_iter: 7
test_interval: 102
base_lr: 0.01
display: 12
max_iter: 3060
lr_policy: "exp"
gamma: 0.99934
momentum: 0.9
weight_decay: 0.0001
snapshot: 102
snapshot_prefix: "snapshot"
solver_mode: GPU
device_id: 3
net: "train_val.prototxt"
train_state { level: 0 stage: "" }
type: "SGD"
I0427 19:08:27.510030 23810 solver.cpp:87] Creating training net from net file: train_val.prototxt
I0427 19:08:27.575852 23810 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer val-data
I0427 19:08:27.575884 23810 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0427 19:08:27.576122 23810 net.cpp:51] Initializing net from parameters:
state { phase: TRAIN level: 0 stage: "" }
layer { name: "train-data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 227 mean_file: "/mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/mean.binaryproto" } data_param { source: "/mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/train_db" batch_size: 256 backend: LMDB } }
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } }
layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" }
layer { name: "norm1" type: "LRN" bottom: "conv1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool1" type: "Pooling" bottom: "norm1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" }
layer { name: "norm2" type: "LRN" bottom: "conv2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool2" type: "Pooling" bottom: "norm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "conv3" type: "Convolution" bottom: "pool2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } }
layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" }
layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" }
layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" }
layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" }
layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } }
layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" }
layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } }
layer { name: "fc8" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 196 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } }
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "label" top: "loss" }
I0427 19:08:27.576243 23810 layer_factory.hpp:77] Creating layer train-data
I0427 19:08:27.754865 23810 db_lmdb.cpp:35] Opened lmdb /mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/train_db
I0427 19:08:27.860886 23810 net.cpp:84] Creating Layer train-data
I0427 19:08:27.860922 23810 net.cpp:380] train-data -> data
I0427 19:08:27.860954 23810 net.cpp:380] train-data -> label
I0427 19:08:27.860973 23810 data_transformer.cpp:25] Loading mean file from: /mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/mean.binaryproto
I0427 19:08:27.868000 23810 data_layer.cpp:45] output data size: 256,3,227,227
I0427 19:08:28.284266 23810 net.cpp:122] Setting up train-data
I0427 19:08:28.284293 23810 net.cpp:129] Top shape: 256 3 227 227 (39574272)
I0427 19:08:28.284301 23810 net.cpp:129] Top shape: 256 (256)
I0427 19:08:28.284307 23810 net.cpp:137] Memory required for data: 158298112
I0427 19:08:28.284318 23810 layer_factory.hpp:77] Creating layer conv1
I0427 19:08:28.284344 23810 net.cpp:84] Creating Layer conv1
I0427 19:08:28.284353 23810 net.cpp:406] conv1 <- data
I0427 19:08:28.284370 23810 net.cpp:380] conv1 -> conv1
I0427 19:08:29.544167 23810 net.cpp:122] Setting up conv1
I0427 19:08:29.544196 23810 net.cpp:129] Top shape: 256 96 55 55 (74342400)
I0427 19:08:29.544203 23810 net.cpp:137] Memory required for data: 455667712
I0427 19:08:29.544231 23810 layer_factory.hpp:77] Creating layer relu1
I0427 19:08:29.544248 23810 net.cpp:84] Creating Layer relu1
I0427 19:08:29.544255 23810 net.cpp:406] relu1 <- conv1
I0427 19:08:29.544265 23810 net.cpp:367] relu1 -> conv1 (in-place)
I0427 19:08:29.544780 23810 net.cpp:122] Setting up relu1
I0427 19:08:29.544795 23810 net.cpp:129] Top shape: 256 96 55 55 (74342400)
I0427 19:08:29.544801 23810 net.cpp:137] Memory required for data: 753037312
I0427 19:08:29.544807 23810 layer_factory.hpp:77] Creating layer norm1
I0427 19:08:29.544821 23810 net.cpp:84] Creating Layer norm1
I0427 19:08:29.544827 23810 net.cpp:406] norm1 <- conv1
I0427 19:08:29.544862 23810 net.cpp:380] norm1 -> norm1
I0427 19:08:29.545575 23810 net.cpp:122] Setting up norm1
I0427 19:08:29.545590 23810 net.cpp:129] Top shape: 256 96 55 55 (74342400)
I0427 19:08:29.545596 23810 net.cpp:137] Memory required for data: 1050406912
I0427 19:08:29.545603 23810 layer_factory.hpp:77] Creating layer pool1
I0427 19:08:29.545616 23810 net.cpp:84] Creating Layer pool1
I0427 19:08:29.545624 23810 net.cpp:406] pool1 <- norm1
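The shapes and byte counts logged above follow directly from the layer parameters. A minimal Python sketch (the helper name `conv_out` is illustrative, not Caffe API) reproduces the data-layer and conv1 numbers, assuming Caffe's usual floor-division output formula and 4-byte float blobs:

```python
def conv_out(in_size, kernel, stride=1, pad=0):
    # Caffe convolution/pooling spatial output size (floor division)
    return (in_size + 2 * pad - kernel) // stride + 1

# Data layer: batch 256 of 3x227x227 crops, plus a 256-element label blob.
data_elems = 256 * 3 * 227 * 227            # 39574272, as logged
label_elems = 256
# "Memory required for data" counts 4 bytes per element over all top blobs so far.
mem_bytes = (data_elems + label_elems) * 4  # 158298112, as logged

# conv1: 96 filters, kernel 11, stride 4, no padding, on a 227x227 input.
side = conv_out(227, kernel=11, stride=4)   # 55
conv1_elems = 256 * 96 * side * side        # 74342400, as logged
```

The same arithmetic accounts for every later "Top shape" and "Memory required for data" entry in the log.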
I0427 19:08:29.545632 23810 net.cpp:380] pool1 -> pool1
I0427 19:08:29.545691 23810 net.cpp:122] Setting up pool1
I0427 19:08:29.545701 23810 net.cpp:129] Top shape: 256 96 27 27 (17915904)
I0427 19:08:29.545706 23810 net.cpp:137] Memory required for data: 1122070528
I0427 19:08:29.545713 23810 layer_factory.hpp:77] Creating layer conv2
I0427 19:08:29.545729 23810 net.cpp:84] Creating Layer conv2
I0427 19:08:29.545735 23810 net.cpp:406] conv2 <- pool1
I0427 19:08:29.545744 23810 net.cpp:380] conv2 -> conv2
I0427 19:08:29.555555 23810 net.cpp:122] Setting up conv2
I0427 19:08:29.555577 23810 net.cpp:129] Top shape: 256 256 27 27 (47775744)
I0427 19:08:29.555581 23810 net.cpp:137] Memory required for data: 1313173504
I0427 19:08:29.555594 23810 layer_factory.hpp:77] Creating layer relu2
I0427 19:08:29.555604 23810 net.cpp:84] Creating Layer relu2
I0427 19:08:29.555608 23810 net.cpp:406] relu2 <- conv2
I0427 19:08:29.555615 23810 net.cpp:367] relu2 -> conv2 (in-place)
I0427 19:08:29.556053 23810 net.cpp:122] Setting up relu2
I0427 19:08:29.556062 23810 net.cpp:129] Top shape: 256 256 27 27 (47775744)
I0427 19:08:29.556066 23810 net.cpp:137] Memory required for data: 1504276480
I0427 19:08:29.556069 23810 layer_factory.hpp:77] Creating layer norm2
I0427 19:08:29.556077 23810 net.cpp:84] Creating Layer norm2
I0427 19:08:29.556082 23810 net.cpp:406] norm2 <- conv2
I0427 19:08:29.556087 23810 net.cpp:380] norm2 -> norm2
I0427 19:08:29.556607 23810 net.cpp:122] Setting up norm2
I0427 19:08:29.556622 23810 net.cpp:129] Top shape: 256 256 27 27 (47775744)
I0427 19:08:29.556628 23810 net.cpp:137] Memory required for data: 1695379456
I0427 19:08:29.556633 23810 layer_factory.hpp:77] Creating layer pool2
I0427 19:08:29.556645 23810 net.cpp:84] Creating Layer pool2
I0427 19:08:29.556651 23810 net.cpp:406] pool2 <- norm2
I0427 19:08:29.556660 23810 net.cpp:380] pool2 -> pool2
I0427 19:08:29.556705 23810 net.cpp:122] Setting up pool2
I0427 19:08:29.556715 23810 net.cpp:129] Top shape: 256 256 13 13 (11075584)
I0427 19:08:29.556721 23810 net.cpp:137] Memory required for data: 1739681792
I0427 19:08:29.556726 23810 layer_factory.hpp:77] Creating layer conv3
I0427 19:08:29.556741 23810 net.cpp:84] Creating Layer conv3
I0427 19:08:29.556746 23810 net.cpp:406] conv3 <- pool2
I0427 19:08:29.556756 23810 net.cpp:380] conv3 -> conv3
I0427 19:08:29.573365 23810 net.cpp:122] Setting up conv3
I0427 19:08:29.573383 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:29.573387 23810 net.cpp:137] Memory required for data: 1806135296
I0427 19:08:29.573401 23810 layer_factory.hpp:77] Creating layer relu3
I0427 19:08:29.573410 23810 net.cpp:84] Creating Layer relu3
I0427 19:08:29.573415 23810 net.cpp:406] relu3 <- conv3
I0427 19:08:29.573421 23810 net.cpp:367] relu3 -> conv3 (in-place)
I0427 19:08:29.574128 23810 net.cpp:122] Setting up relu3
I0427 19:08:29.574142 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:29.574148 23810 net.cpp:137] Memory required for data: 1872588800
I0427 19:08:29.574153 23810 layer_factory.hpp:77] Creating layer conv4
I0427 19:08:29.574169 23810 net.cpp:84] Creating Layer conv4
I0427 19:08:29.574177 23810 net.cpp:406] conv4 <- conv3
I0427 19:08:29.574185 23810 net.cpp:380] conv4 -> conv4
I0427 19:08:29.594336 23810 net.cpp:122] Setting up conv4
I0427 19:08:29.594362 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:29.594367 23810 net.cpp:137] Memory required for data: 1939042304
I0427 19:08:29.594383 23810 layer_factory.hpp:77] Creating layer relu4
I0427 19:08:29.594395 23810 net.cpp:84] Creating Layer relu4
I0427 19:08:29.594422 23810 net.cpp:406] relu4 <- conv4
I0427 19:08:29.594435 23810 net.cpp:367] relu4 -> conv4 (in-place)
I0427 19:08:29.595006 23810 net.cpp:122] Setting up relu4
I0427 19:08:29.595019 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:29.595026 23810 net.cpp:137] Memory required for data: 2005495808
I0427 19:08:29.595031 23810 layer_factory.hpp:77] Creating layer conv5
I0427 19:08:29.595049 23810 net.cpp:84] Creating Layer conv5
I0427 19:08:29.595055 23810 net.cpp:406] conv5 <- conv4
I0427 19:08:29.595067 23810 net.cpp:380] conv5 -> conv5
I0427 19:08:29.671907 23810 net.cpp:122] Setting up conv5
I0427 19:08:29.671931 23810 net.cpp:129] Top shape: 256 256 13 13 (11075584)
I0427 19:08:29.671936 23810 net.cpp:137] Memory required for data: 2049798144
I0427 19:08:29.671955 23810 layer_factory.hpp:77] Creating layer relu5
I0427 19:08:29.671969 23810 net.cpp:84] Creating Layer relu5
I0427 19:08:29.671975 23810 net.cpp:406] relu5 <- conv5
I0427 19:08:29.671985 23810 net.cpp:367] relu5 -> conv5 (in-place)
I0427 19:08:29.672986 23810 net.cpp:122] Setting up relu5
I0427 19:08:29.673002 23810 net.cpp:129] Top shape: 256 256 13 13 (11075584)
I0427 19:08:29.673007 23810 net.cpp:137] Memory required for data: 2094100480
I0427 19:08:29.673012 23810 layer_factory.hpp:77] Creating layer pool5
I0427 19:08:29.673022 23810 net.cpp:84] Creating Layer pool5
I0427 19:08:29.673027 23810 net.cpp:406] pool5 <- conv5
I0427 19:08:29.673035 23810 net.cpp:380] pool5 -> pool5
I0427 19:08:29.673089 23810 net.cpp:122] Setting up pool5
I0427 19:08:29.673096 23810 net.cpp:129] Top shape: 256 256 6 6 (2359296)
I0427 19:08:29.673101 23810 net.cpp:137] Memory required for data: 2103537664
I0427 19:08:29.673105 23810 layer_factory.hpp:77] Creating layer fc6
I0427 19:08:29.673120 23810 net.cpp:84] Creating Layer fc6
I0427 19:08:29.673125 23810 net.cpp:406] fc6 <- pool5
I0427 19:08:29.673133 23810 net.cpp:380] fc6 -> fc6
I0427 19:08:30.200594 23810 net.cpp:122] Setting up fc6
I0427 19:08:30.200620 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:30.200626 23810 net.cpp:137] Memory required for data: 2107731968
I0427 19:08:30.200639 23810 layer_factory.hpp:77] Creating layer relu6
I0427 19:08:30.200652 23810 net.cpp:84] Creating Layer relu6
I0427 19:08:30.200660 23810 net.cpp:406] relu6 <- fc6
I0427 19:08:30.200670 23810 net.cpp:367] relu6 -> fc6 (in-place)
I0427 19:08:30.201550 23810 net.cpp:122] Setting up relu6
I0427 19:08:30.201562 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:30.201567 23810 net.cpp:137] Memory required for data: 2111926272
I0427 19:08:30.201572 23810 layer_factory.hpp:77] Creating layer drop6
I0427 19:08:30.201583 23810 net.cpp:84] Creating Layer drop6
I0427 19:08:30.201589 23810 net.cpp:406] drop6 <- fc6
I0427 19:08:30.201596 23810 net.cpp:367] drop6 -> fc6 (in-place)
I0427 19:08:30.201637 23810 net.cpp:122] Setting up drop6
I0427 19:08:30.201645 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:30.201650 23810 net.cpp:137] Memory required for data: 2116120576
I0427 19:08:30.201655 23810 layer_factory.hpp:77] Creating layer fc7
I0427 19:08:30.201665 23810 net.cpp:84] Creating Layer fc7
I0427 19:08:30.201669 23810 net.cpp:406] fc7 <- fc6
I0427 19:08:30.201679 23810 net.cpp:380] fc7 -> fc7
I0427 19:08:30.447713 23810 net.cpp:122] Setting up fc7
I0427 19:08:30.447732 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:30.447736 23810 net.cpp:137] Memory required for data: 2120314880
I0427 19:08:30.447746 23810 layer_factory.hpp:77] Creating layer relu7
I0427 19:08:30.447754 23810 net.cpp:84] Creating Layer relu7
I0427 19:08:30.447759 23810 net.cpp:406] relu7 <- fc7
I0427 19:08:30.447765 23810 net.cpp:367] relu7 -> fc7 (in-place)
I0427 19:08:30.457134 23810 net.cpp:122] Setting up relu7
I0427 19:08:30.457152 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:30.457156 23810 net.cpp:137] Memory required for data: 2124509184
I0427 19:08:30.457162 23810 layer_factory.hpp:77] Creating layer drop7
I0427 19:08:30.457171 23810 net.cpp:84] Creating Layer drop7
I0427 19:08:30.457195 23810 net.cpp:406] drop7 <- fc7
I0427 19:08:30.457204 23810 net.cpp:367] drop7 -> fc7 (in-place)
I0427 19:08:30.457247 23810 net.cpp:122] Setting up drop7
I0427 19:08:30.457252 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:30.457255 23810 net.cpp:137] Memory required for data: 2128703488
I0427 19:08:30.457259 23810 layer_factory.hpp:77] Creating layer fc8
I0427 19:08:30.457268 23810 net.cpp:84] Creating Layer fc8
I0427 19:08:30.457273 23810 net.cpp:406] fc8 <- fc7
I0427 19:08:30.457278 23810 net.cpp:380] fc8 -> fc8
I0427 19:08:30.480334 23810 net.cpp:122] Setting up fc8
I0427 19:08:30.480361 23810 net.cpp:129] Top shape: 256 196 (50176)
I0427 19:08:30.480367 23810 net.cpp:137] Memory required for data: 2128904192
I0427 19:08:30.480382 23810 layer_factory.hpp:77] Creating layer loss
I0427 19:08:30.480397 23810 net.cpp:84] Creating Layer loss
I0427 19:08:30.480406 23810 net.cpp:406] loss <- fc8
I0427 19:08:30.480414 23810 net.cpp:406] loss <- label
I0427 19:08:30.480425 23810 net.cpp:380] loss -> loss
I0427 19:08:30.480441 23810 layer_factory.hpp:77] Creating layer loss
I0427 19:08:30.487527 23810 net.cpp:122] Setting up loss
I0427 19:08:30.487555 23810 net.cpp:129] Top shape: (1)
I0427 19:08:30.487560 23810 net.cpp:132] with loss weight 1
I0427 19:08:30.487587 23810 net.cpp:137] Memory required for data: 2128904196
I0427 19:08:30.487596 23810 net.cpp:198] loss needs backward computation.
I0427 19:08:30.487608 23810 net.cpp:198] fc8 needs backward computation.
I0427 19:08:30.487614 23810 net.cpp:198] drop7 needs backward computation.
I0427 19:08:30.487620 23810 net.cpp:198] relu7 needs backward computation.
I0427 19:08:30.487625 23810 net.cpp:198] fc7 needs backward computation.
I0427 19:08:30.487632 23810 net.cpp:198] drop6 needs backward computation.
I0427 19:08:30.487637 23810 net.cpp:198] relu6 needs backward computation.
I0427 19:08:30.487643 23810 net.cpp:198] fc6 needs backward computation.
I0427 19:08:30.487649 23810 net.cpp:198] pool5 needs backward computation.
I0427 19:08:30.487655 23810 net.cpp:198] relu5 needs backward computation.
I0427 19:08:30.487661 23810 net.cpp:198] conv5 needs backward computation.
I0427 19:08:30.487666 23810 net.cpp:198] relu4 needs backward computation.
I0427 19:08:30.487673 23810 net.cpp:198] conv4 needs backward computation.
I0427 19:08:30.487679 23810 net.cpp:198] relu3 needs backward computation.
I0427 19:08:30.487684 23810 net.cpp:198] conv3 needs backward computation.
I0427 19:08:30.487690 23810 net.cpp:198] pool2 needs backward computation.
I0427 19:08:30.487696 23810 net.cpp:198] norm2 needs backward computation.
I0427 19:08:30.487702 23810 net.cpp:198] relu2 needs backward computation.
I0427 19:08:30.487709 23810 net.cpp:198] conv2 needs backward computation.
I0427 19:08:30.487715 23810 net.cpp:198] pool1 needs backward computation.
I0427 19:08:30.487720 23810 net.cpp:198] norm1 needs backward computation.
I0427 19:08:30.487728 23810 net.cpp:198] relu1 needs backward computation.
I0427 19:08:30.487735 23810 net.cpp:198] conv1 needs backward computation.
I0427 19:08:30.487741 23810 net.cpp:200] train-data does not need backward computation.
I0427 19:08:30.487746 23810 net.cpp:242] This network produces output loss
I0427 19:08:30.487772 23810 net.cpp:255] Network initialization done.
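With the training net initialized, the solver parameters logged at the top come into play. The "exp" lr_policy decays the learning rate as base_lr * gamma^iter; a quick sketch, using only the values from the solver dump above, shows how far the rate falls over the 3060 iterations of this job:

```python
# Solver parameters taken verbatim from the log above.
base_lr, gamma, max_iter = 0.01, 0.99934, 3060

def exp_lr(iteration):
    # Caffe "exp" learning-rate policy: base_lr * gamma^iter
    return base_lr * gamma ** iteration

start = exp_lr(0)         # 0.01
final = exp_lr(max_iter)  # roughly 0.0013, i.e. about an 8x decay overall
```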
I0427 19:08:30.488459 23810 solver.cpp:172] Creating test net (#0) specified by net file: train_val.prototxt
I0427 19:08:30.488549 23810 net.cpp:294] The NetState phase (1) differed from the phase (0) specified by a rule in layer train-data
I0427 19:08:30.488852 23810 net.cpp:51] Initializing net from parameters:
state { phase: TEST }
layer { name: "val-data" type: "Data" top: "data" top: "label" include { phase: TEST } transform_param { crop_size: 227 mean_file: "/mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/mean.binaryproto" } data_param { source: "/mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/val_db" batch_size: 256 backend: LMDB } }
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } }
layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" }
layer { name: "norm1" type: "LRN" bottom: "conv1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool1" type: "Pooling" bottom: "norm1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" }
layer { name: "norm2" type: "LRN" bottom: "conv2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool2" type: "Pooling" bottom: "norm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "conv3" type: "Convolution" bottom: "pool2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } }
layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" }
layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" }
layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" }
layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" }
layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } }
layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } }
layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" }
layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } }
layer { name: "fc8" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 196 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } }
layer { name: "accuracy" type: "Accuracy" bottom: "fc8" bottom: "label" top: "accuracy" include { phase: TEST } }
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "label" top: "loss" }
I0427 19:08:30.489028 23810 layer_factory.hpp:77] Creating layer val-data
I0427 19:08:30.491255 23810 db_lmdb.cpp:35] Opened lmdb /mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/val_db
I0427 19:08:30.491452 23810 net.cpp:84] Creating Layer val-data
I0427 19:08:30.491469 23810 net.cpp:380] val-data -> data
I0427 19:08:30.491485 23810 net.cpp:380] val-data -> label
I0427 19:08:30.491497 23810 data_transformer.cpp:25] Loading mean file from: /mnt/bigdisk/DIGITS-MAN-3/digits/jobs/20210427-190400-faa9/mean.binaryproto
I0427 19:08:30.495795 23810 data_layer.cpp:45] output data size: 256,3,227,227
I0427 19:08:30.834892 23810 net.cpp:122] Setting up val-data
I0427 19:08:30.834918 23810 net.cpp:129] Top shape: 256 3 227 227 (39574272)
I0427 19:08:30.834924 23810 net.cpp:129] Top shape: 256 (256)
I0427 19:08:30.834929 23810 net.cpp:137] Memory required for data: 158298112
I0427 19:08:30.834939 23810 layer_factory.hpp:77] Creating layer label_val-data_1_split
I0427 19:08:30.834954 23810 net.cpp:84] Creating Layer label_val-data_1_split
I0427 19:08:30.834960 23810 net.cpp:406] label_val-data_1_split <- label
I0427 19:08:30.834970 23810 net.cpp:380] label_val-data_1_split -> label_val-data_1_split_0
I0427 19:08:30.834983 23810 net.cpp:380] label_val-data_1_split -> label_val-data_1_split_1
I0427 19:08:30.835039 23810 net.cpp:122] Setting up label_val-data_1_split
I0427 19:08:30.835047 23810 net.cpp:129] Top shape: 256 (256)
I0427 19:08:30.835053 23810 net.cpp:129] Top shape: 256 (256)
I0427 19:08:30.835057 23810 net.cpp:137] Memory required for data: 158300160
I0427 19:08:30.835062 23810 layer_factory.hpp:77] Creating layer conv1
I0427 19:08:30.835078 23810 net.cpp:84] Creating Layer conv1
I0427 19:08:30.835083 23810 net.cpp:406] conv1 <- data
I0427 19:08:30.835090 23810 net.cpp:380] conv1 -> conv1
I0427 19:08:30.846004 23810 net.cpp:122] Setting up conv1
I0427 19:08:30.846029 23810 net.cpp:129] Top shape: 256 96 55 55 (74342400)
I0427 19:08:30.846035 23810 net.cpp:137] Memory required for data: 455669760
I0427 19:08:30.846052 23810 layer_factory.hpp:77] Creating layer relu1
I0427 19:08:30.846062 23810 net.cpp:84] Creating Layer relu1
I0427 19:08:30.846066 23810 net.cpp:406] relu1 <- conv1
I0427 19:08:30.846073 23810 net.cpp:367] relu1 -> conv1 (in-place)
I0427 19:08:30.846480 23810 net.cpp:122] Setting up relu1
I0427 19:08:30.846493 23810 net.cpp:129] Top shape: 256 96 55 55 (74342400)
I0427 19:08:30.846498 23810 net.cpp:137] Memory required for data: 753039360
I0427 19:08:30.846503 23810 layer_factory.hpp:77] Creating layer norm1
I0427 19:08:30.846514 23810 net.cpp:84] Creating Layer norm1
I0427 19:08:30.846518 23810 net.cpp:406] norm1 <- conv1
I0427 19:08:30.846526 23810 net.cpp:380] norm1 -> norm1
I0427 19:08:30.862814 23810 net.cpp:122] Setting up norm1
I0427 19:08:30.862835 23810 net.cpp:129] Top shape: 256 96 55 55 (74342400)
I0427 19:08:30.862839 23810 net.cpp:137] Memory required for data: 1050408960
I0427 19:08:30.862846 23810 layer_factory.hpp:77] Creating layer pool1
I0427 19:08:30.862857 23810 net.cpp:84] Creating Layer pool1
I0427 19:08:30.862862 23810 net.cpp:406] pool1 <- norm1
I0427 19:08:30.862870 23810 net.cpp:380] pool1 -> pool1
I0427 19:08:30.862905 23810 net.cpp:122] Setting up pool1
I0427 19:08:30.862910 23810 net.cpp:129] Top shape: 256 96 27 27 (17915904)
I0427 19:08:30.862912 23810 net.cpp:137] Memory required for data: 1122072576
I0427 19:08:30.862915 23810 layer_factory.hpp:77] Creating layer conv2
I0427 19:08:30.862927 23810 net.cpp:84] Creating Layer conv2
I0427 19:08:30.862951 23810 net.cpp:406] conv2 <- pool1
I0427 19:08:30.862957 23810 net.cpp:380] conv2 -> conv2
I0427 19:08:30.893818 23810 net.cpp:122] Setting up conv2
I0427 19:08:30.893846 23810 net.cpp:129] Top shape: 256 256 27 27 (47775744)
I0427 19:08:30.893851 23810 net.cpp:137] Memory required for data: 1313175552
I0427 19:08:30.893868 23810 layer_factory.hpp:77] Creating layer relu2
I0427 19:08:30.893882 23810 net.cpp:84] Creating Layer relu2
I0427 19:08:30.893888 23810 net.cpp:406] relu2 <- conv2
I0427 19:08:30.893898 23810 net.cpp:367] relu2 -> conv2 (in-place)
I0427 19:08:30.894645 23810 net.cpp:122] Setting up relu2
I0427 19:08:30.894659 23810 net.cpp:129] Top shape: 256 256 27 27 (47775744)
I0427 19:08:30.894663 23810 net.cpp:137] Memory required for data: 1504278528
I0427 19:08:30.894670 23810 layer_factory.hpp:77] Creating layer norm2
I0427 19:08:30.894683 23810 net.cpp:84] Creating Layer norm2
I0427 19:08:30.894688 23810 net.cpp:406] norm2 <- conv2
I0427 19:08:30.894696 23810 net.cpp:380] norm2 -> norm2
I0427 19:08:30.895474 23810 net.cpp:122] Setting up norm2
I0427 19:08:30.895489 23810 net.cpp:129] Top shape: 256 256 27 27 (47775744)
I0427 19:08:30.895495 23810 net.cpp:137] Memory required for data: 1695381504
I0427 19:08:30.895500 23810 layer_factory.hpp:77] Creating layer pool2
I0427 19:08:30.895509 23810 net.cpp:84] Creating Layer pool2
I0427 19:08:30.895515 23810 net.cpp:406] pool2 <- norm2
I0427 19:08:30.895522 23810 net.cpp:380] pool2 -> pool2
I0427 19:08:30.895571 23810 net.cpp:122] Setting up pool2
I0427 19:08:30.895581 23810 net.cpp:129] Top shape: 256 256 13 13 (11075584)
I0427 19:08:30.895584 23810 net.cpp:137] Memory required for data: 1739683840
I0427 19:08:30.895591 23810 layer_factory.hpp:77] Creating layer conv3
I0427 19:08:30.895606 23810 net.cpp:84] Creating Layer conv3
I0427 19:08:30.895612 23810 net.cpp:406] conv3 <- pool2
I0427 19:08:30.895622 23810 net.cpp:380] conv3 -> conv3
I0427 19:08:30.927903 23810 net.cpp:122] Setting up conv3
I0427 19:08:30.927927 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:30.927930 23810 net.cpp:137] Memory required for data: 1806137344
I0427 19:08:30.927944 23810 layer_factory.hpp:77] Creating layer relu3
I0427 19:08:30.927955 23810 net.cpp:84] Creating Layer relu3
I0427 19:08:30.927959 23810 net.cpp:406] relu3 <- conv3
I0427 19:08:30.927970 23810 net.cpp:367] relu3 -> conv3 (in-place)
I0427 19:08:30.928622 23810 net.cpp:122] Setting up relu3
I0427 19:08:30.928634 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:30.928637 23810 net.cpp:137] Memory required for data: 1872590848
I0427 19:08:30.928642 23810 layer_factory.hpp:77] Creating layer conv4
I0427 19:08:30.928653 23810 net.cpp:84] Creating Layer conv4
I0427 19:08:30.928658 23810 net.cpp:406] conv4 <- conv3
I0427 19:08:30.928663 23810 net.cpp:380] conv4 -> conv4
I0427 19:08:30.946349 23810 net.cpp:122] Setting up conv4
I0427 19:08:30.946379 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:30.946384 23810 net.cpp:137] Memory required for data: 1939044352
I0427 19:08:30.946398 23810 layer_factory.hpp:77] Creating layer relu4
I0427 19:08:30.946410 23810 net.cpp:84] Creating Layer relu4
I0427 19:08:30.946416 23810 net.cpp:406] relu4 <- conv4
I0427 19:08:30.946426 23810 net.cpp:367] relu4 -> conv4 (in-place)
I0427 19:08:30.947001 23810 net.cpp:122] Setting up relu4
I0427 19:08:30.947012 23810 net.cpp:129] Top shape: 256 384 13 13 (16613376)
I0427 19:08:30.947017 23810 net.cpp:137] Memory required for data: 2005497856
I0427 19:08:30.947022 23810 layer_factory.hpp:77] Creating layer conv5
I0427 19:08:30.947041 23810 net.cpp:84] Creating Layer conv5
I0427 19:08:30.947047 23810 net.cpp:406] conv5 <- conv4
I0427 19:08:30.947057 23810 net.cpp:380] conv5 -> conv5
I0427 19:08:30.965010 23810 net.cpp:122] Setting up conv5
I0427 19:08:30.965041 23810 net.cpp:129] Top shape: 256 256 13 13 (11075584)
I0427 19:08:30.965047 23810 net.cpp:137] Memory required for data: 2049800192
I0427 19:08:30.965067 23810 layer_factory.hpp:77] Creating layer relu5
I0427 19:08:30.965111 23810 net.cpp:84] Creating Layer relu5
I0427 19:08:30.965119 23810 net.cpp:406] relu5 <- conv5
I0427 19:08:30.965131 23810 net.cpp:367] relu5 -> conv5 (in-place)
I0427 19:08:30.965994 23810 net.cpp:122] Setting up relu5
I0427 19:08:30.966008 23810 net.cpp:129] Top shape: 256 256 13 13 (11075584)
I0427 19:08:30.966014 23810 net.cpp:137] Memory required for data: 2094102528
I0427 19:08:30.966019 23810 layer_factory.hpp:77] Creating layer pool5
I0427 19:08:30.966034 23810 net.cpp:84] Creating Layer pool5
I0427 19:08:30.966039 23810 net.cpp:406] pool5 <- conv5
I0427 19:08:30.966048 23810 net.cpp:380] pool5 -> pool5
I0427 19:08:30.966115 23810 net.cpp:122] Setting up pool5
I0427 19:08:30.966125 23810 net.cpp:129] Top shape: 256 256 6 6 (2359296)
I0427 19:08:30.966128 23810 net.cpp:137] Memory required for data: 2103539712
I0427 19:08:30.966133 23810 layer_factory.hpp:77] Creating layer fc6
I0427 19:08:30.966145 23810 net.cpp:84] Creating Layer fc6
I0427 19:08:30.966150 23810 net.cpp:406] fc6 <- pool5
I0427 19:08:30.966156 23810 net.cpp:380] fc6 -> fc6
I0427 19:08:31.603840 23810 net.cpp:122] Setting up fc6
I0427 19:08:31.603868 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:31.603873 23810 net.cpp:137] Memory required for data: 2107734016
I0427 19:08:31.603885 23810 layer_factory.hpp:77] Creating layer relu6
I0427 19:08:31.603897 23810 net.cpp:84] Creating Layer relu6
I0427 19:08:31.603904 23810 net.cpp:406] relu6 <- fc6
I0427 19:08:31.603914 23810 net.cpp:367] relu6 -> fc6 (in-place)
I0427 19:08:31.605135 23810 net.cpp:122] Setting up relu6
I0427 19:08:31.605159 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:31.605163 23810 net.cpp:137] Memory required for data: 2111928320
I0427 19:08:31.605170 23810 layer_factory.hpp:77] Creating layer drop6
I0427 19:08:31.605181 23810 net.cpp:84] Creating Layer drop6
I0427 19:08:31.605187 23810 net.cpp:406] drop6 <- fc6
I0427 19:08:31.605198 23810 net.cpp:367] drop6 -> fc6 (in-place)
I0427 19:08:31.605242 23810 net.cpp:122] Setting up drop6
I0427 19:08:31.605249 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:31.605253 23810 net.cpp:137] Memory required for data: 2116122624
I0427 19:08:31.605258 23810 layer_factory.hpp:77] Creating layer fc7
I0427 19:08:31.605273 23810 net.cpp:84] Creating Layer fc7
I0427 19:08:31.605278 23810 net.cpp:406] fc7 <- fc6
I0427 19:08:31.605286 23810 net.cpp:380] fc7 -> fc7
I0427 19:08:31.875675 23810 net.cpp:122] Setting up fc7
I0427 19:08:31.875699 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:31.875705 23810 net.cpp:137] Memory required for data: 2120316928
I0427 19:08:31.875717 23810 layer_factory.hpp:77] Creating layer relu7
I0427 19:08:31.875730 23810 net.cpp:84] Creating Layer relu7
I0427 19:08:31.875735 23810 net.cpp:406] relu7 <- fc7
I0427 19:08:31.875746 23810 net.cpp:367] relu7 -> fc7 (in-place)
I0427 19:08:31.876345 23810 net.cpp:122] Setting up relu7
I0427 19:08:31.876356 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:31.876361 23810 net.cpp:137] Memory required for data: 2124511232
I0427 19:08:31.876366 23810 layer_factory.hpp:77] Creating layer drop7
I0427 19:08:31.876375 23810 net.cpp:84] Creating Layer drop7
I0427 19:08:31.876380 23810 net.cpp:406] drop7 <- fc7
I0427 19:08:31.876389 23810 net.cpp:367] drop7 -> fc7 (in-place)
I0427 19:08:31.876425 23810 net.cpp:122] Setting up drop7
I0427 19:08:31.876431 23810 net.cpp:129] Top shape: 256 4096 (1048576)
I0427 19:08:31.876435 23810 net.cpp:137] Memory required for data: 2128705536
I0427 19:08:31.876441 23810 layer_factory.hpp:77] Creating layer fc8
I0427 19:08:31.876451 23810 net.cpp:84] Creating Layer fc8
I0427 19:08:31.876456 23810 net.cpp:406] fc8 <- fc7
I0427 19:08:31.876464 23810 net.cpp:380] fc8 -> fc8
I0427 19:08:31.935492 23810 net.cpp:122] Setting up fc8
I0427 19:08:31.935519 23810 net.cpp:129] Top shape: 256 196 
(50176) I0427 19:08:31.935525 23810 net.cpp:137] Memory required for data: 2128906240 I0427 19:08:31.935539 23810 layer_factory.hpp:77] Creating layer fc8_fc8_0_split I0427 19:08:31.935551 23810 net.cpp:84] Creating Layer fc8_fc8_0_split I0427 19:08:31.935581 23810 net.cpp:406] fc8_fc8_0_split <- fc8 I0427 19:08:31.935592 23810 net.cpp:380] fc8_fc8_0_split -> fc8_fc8_0_split_0 I0427 19:08:31.935606 23810 net.cpp:380] fc8_fc8_0_split -> fc8_fc8_0_split_1 I0427 19:08:31.935653 23810 net.cpp:122] Setting up fc8_fc8_0_split I0427 19:08:31.935662 23810 net.cpp:129] Top shape: 256 196 (50176) I0427 19:08:31.935667 23810 net.cpp:129] Top shape: 256 196 (50176) I0427 19:08:31.935672 23810 net.cpp:137] Memory required for data: 2129307648 I0427 19:08:31.935676 23810 layer_factory.hpp:77] Creating layer accuracy I0427 19:08:31.935685 23810 net.cpp:84] Creating Layer accuracy I0427 19:08:31.935691 23810 net.cpp:406] accuracy <- fc8_fc8_0_split_0 I0427 19:08:31.935698 23810 net.cpp:406] accuracy <- label_val-data_1_split_0 I0427 19:08:31.935706 23810 net.cpp:380] accuracy -> accuracy I0427 19:08:31.935719 23810 net.cpp:122] Setting up accuracy I0427 19:08:31.935725 23810 net.cpp:129] Top shape: (1) I0427 19:08:31.935729 23810 net.cpp:137] Memory required for data: 2129307652 I0427 19:08:31.935734 23810 layer_factory.hpp:77] Creating layer loss I0427 19:08:31.935742 23810 net.cpp:84] Creating Layer loss I0427 19:08:31.935747 23810 net.cpp:406] loss <- fc8_fc8_0_split_1 I0427 19:08:31.935752 23810 net.cpp:406] loss <- label_val-data_1_split_1 I0427 19:08:31.935763 23810 net.cpp:380] loss -> loss I0427 19:08:31.935773 23810 layer_factory.hpp:77] Creating layer loss I0427 19:08:31.947444 23810 net.cpp:122] Setting up loss I0427 19:08:31.947472 23810 net.cpp:129] Top shape: (1) I0427 19:08:31.947477 23810 net.cpp:132] with loss weight 1 I0427 19:08:31.947494 23810 net.cpp:137] Memory required for data: 2129307656 I0427 19:08:31.947501 23810 net.cpp:198] loss needs backward 
computation. I0427 19:08:31.947510 23810 net.cpp:200] accuracy does not need backward computation. I0427 19:08:31.947516 23810 net.cpp:198] fc8_fc8_0_split needs backward computation. I0427 19:08:31.947521 23810 net.cpp:198] fc8 needs backward computation. I0427 19:08:31.947526 23810 net.cpp:198] drop7 needs backward computation. I0427 19:08:31.947531 23810 net.cpp:198] relu7 needs backward computation. I0427 19:08:31.947536 23810 net.cpp:198] fc7 needs backward computation. I0427 19:08:31.947541 23810 net.cpp:198] drop6 needs backward computation. I0427 19:08:31.947546 23810 net.cpp:198] relu6 needs backward computation. I0427 19:08:31.947551 23810 net.cpp:198] fc6 needs backward computation. I0427 19:08:31.947556 23810 net.cpp:198] pool5 needs backward computation. I0427 19:08:31.947562 23810 net.cpp:198] relu5 needs backward computation. I0427 19:08:31.947568 23810 net.cpp:198] conv5 needs backward computation. I0427 19:08:31.947573 23810 net.cpp:198] relu4 needs backward computation. I0427 19:08:31.947578 23810 net.cpp:198] conv4 needs backward computation. I0427 19:08:31.947583 23810 net.cpp:198] relu3 needs backward computation. I0427 19:08:31.947590 23810 net.cpp:198] conv3 needs backward computation. I0427 19:08:31.947597 23810 net.cpp:198] pool2 needs backward computation. I0427 19:08:31.947602 23810 net.cpp:198] norm2 needs backward computation. I0427 19:08:31.947608 23810 net.cpp:198] relu2 needs backward computation. I0427 19:08:31.947613 23810 net.cpp:198] conv2 needs backward computation. I0427 19:08:31.947618 23810 net.cpp:198] pool1 needs backward computation. I0427 19:08:31.947624 23810 net.cpp:198] norm1 needs backward computation. I0427 19:08:31.947629 23810 net.cpp:198] relu1 needs backward computation. I0427 19:08:31.947635 23810 net.cpp:198] conv1 needs backward computation. I0427 19:08:31.947641 23810 net.cpp:200] label_val-data_1_split does not need backward computation. 
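The running "Memory required for data" counter in the setup log above is simply 4 bytes per float32 element, accumulated over every top blob (each blob's element count is printed in parentheses after its shape). A minimal sketch of that bookkeeping, assuming float32 blobs throughout:

```python
# Each "Top shape: ... (N)" line adds N float32 elements (4 bytes each)
# to the cumulative "Memory required for data" counter.
BYTES_PER_FLOAT = 4

def memory_after(prev_bytes, blob_count):
    """Counter value after registering one more top blob of blob_count elements."""
    return prev_bytes + blob_count * BYTES_PER_FLOAT

# fc6's top is 256 x 4096 = 1048576 elements; in the log the counter goes
# from 2103539712 (after pool5) to 2107734016 -- a 4 MiB step.
assert memory_after(2103539712, 256 * 4096) == 2107734016
```

The same arithmetic accounts for every step in the log, e.g. fc8's 256 x 196 = 50176-element top adds exactly 200704 bytes.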
I0427 19:08:31.947647 23810 net.cpp:200] val-data does not need backward computation. I0427 19:08:31.947652 23810 net.cpp:242] This network produces output accuracy I0427 19:08:31.947659 23810 net.cpp:242] This network produces output loss I0427 19:08:31.947685 23810 net.cpp:255] Network initialization done. I0427 19:08:31.947804 23810 solver.cpp:56] Solver scaffolding done. I0427 19:08:31.948482 23810 caffe.cpp:248] Starting Optimization I0427 19:08:31.948539 23810 solver.cpp:272] Solving I0427 19:08:31.948545 23810 solver.cpp:273] Learning Rate Policy: exp I0427 19:08:31.967983 23810 solver.cpp:330] Iteration 0, Testing net (#0) I0427 19:08:31.968005 23810 net.cpp:676] Ignoring source layer train-data I0427 19:08:32.183445 23810 blocking_queue.cpp:49] Waiting for data I0427 19:08:37.449852 23943 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:08:38.110247 23810 solver.cpp:397] Test net output #0: accuracy = 0.00390625 I0427 19:08:38.110285 23810 solver.cpp:397] Test net output #1: loss = 5.28248 (* 1 = 5.28248 loss) I0427 19:08:38.365494 23810 solver.cpp:218] Iteration 0 (9.94495e+36 iter/s, 6.41665s/12 iters), loss = 5.27818 I0427 19:08:38.365561 23810 solver.cpp:237] Train net output #0: loss = 5.27818 (* 1 = 5.27818 loss) I0427 19:08:38.365593 23810 sgd_solver.cpp:105] Iteration 0, lr = 0.01 I0427 19:08:51.942961 23810 solver.cpp:218] Iteration 12 (0.883856 iter/s, 13.5769s/12 iters), loss = 5.2622 I0427 19:08:51.955009 23810 solver.cpp:237] Train net output #0: loss = 5.2622 (* 1 = 5.2622 loss) I0427 19:08:51.955034 23810 sgd_solver.cpp:105] Iteration 12, lr = 0.00992109 I0427 19:09:08.870322 23810 solver.cpp:218] Iteration 24 (0.709443 iter/s, 16.9147s/12 iters), loss = 5.28887 I0427 19:09:08.870429 23810 solver.cpp:237] Train net output #0: loss = 5.28887 (* 1 = 5.28887 loss) I0427 19:09:08.870442 23810 sgd_solver.cpp:105] Iteration 24, lr = 0.0098428 I0427 19:09:27.241456 23810 solver.cpp:218] Iteration 36 (0.653228 iter/s, 18.3703s/12 
iters), loss = 5.27633 I0427 19:09:27.241511 23810 solver.cpp:237] Train net output #0: loss = 5.27633 (* 1 = 5.27633 loss) I0427 19:09:27.241523 23810 sgd_solver.cpp:105] Iteration 36, lr = 0.00976512 I0427 19:09:41.967593 23810 solver.cpp:218] Iteration 48 (0.814913 iter/s, 14.7255s/12 iters), loss = 5.28664 I0427 19:09:41.969899 23810 solver.cpp:237] Train net output #0: loss = 5.28664 (* 1 = 5.28664 loss) I0427 19:09:41.969913 23810 sgd_solver.cpp:105] Iteration 48, lr = 0.00968806 I0427 19:10:02.352556 23810 solver.cpp:218] Iteration 60 (0.588797 iter/s, 20.3805s/12 iters), loss = 5.29291 I0427 19:10:02.352618 23810 solver.cpp:237] Train net output #0: loss = 5.29291 (* 1 = 5.29291 loss) I0427 19:10:02.352632 23810 sgd_solver.cpp:105] Iteration 60, lr = 0.00961161 I0427 19:10:24.853227 23810 solver.cpp:218] Iteration 72 (0.533339 iter/s, 22.4997s/12 iters), loss = 5.2809 I0427 19:10:24.865293 23810 solver.cpp:237] Train net output #0: loss = 5.2809 (* 1 = 5.2809 loss) I0427 19:10:24.865320 23810 sgd_solver.cpp:105] Iteration 72, lr = 0.00953576 I0427 19:10:40.147543 23810 solver.cpp:218] Iteration 84 (0.785254 iter/s, 15.2817s/12 iters), loss = 5.28188 I0427 19:10:40.147652 23810 solver.cpp:237] Train net output #0: loss = 5.28188 (* 1 = 5.28188 loss) I0427 19:10:40.147666 23810 sgd_solver.cpp:105] Iteration 84, lr = 0.00946051 I0427 19:10:55.555341 23810 solver.cpp:218] Iteration 96 (0.77886 iter/s, 15.4071s/12 iters), loss = 5.29336 I0427 19:10:55.566395 23810 solver.cpp:237] Train net output #0: loss = 5.29336 (* 1 = 5.29336 loss) I0427 19:10:55.566423 23810 sgd_solver.cpp:105] Iteration 96, lr = 0.00938586 I0427 19:11:02.191859 23845 data_layer.cpp:73] Restarting data prefetching from start. 
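The "Top shape" spatial sizes in the setup log above (55, 27, 13, 6) follow Caffe's standard convolution/pooling output-size formulas. A sketch, assuming the layer parameters from the train_val.prototxt shown earlier (conv1 11/4, conv2 5 pad 2, conv3-5 3 pad 1, all pools 3/2) and a 227x227 crop:

```python
def conv_out(size, kernel, pad=0, stride=1):
    # Caffe convolution output size: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, pad=0, stride=1):
    # Caffe pooling rounds up (ceil) instead of down
    return -(-(size + 2 * pad - kernel) // stride) + 1

s = conv_out(227, 11, stride=4)   # conv1 -> 55
s = pool_out(s, 3, stride=2)      # pool1 -> 27
s = conv_out(s, 5, pad=2)         # conv2 -> 27
s = pool_out(s, 3, stride=2)      # pool2 -> 13
s = conv_out(s, 3, pad=1)         # conv3..conv5 stay at 13x13
s = pool_out(s, 3, stride=2)      # pool5 -> 6, matching "256 256 6 6"
```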
I0427 19:11:03.062417 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_102.caffemodel I0427 19:11:06.920866 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_102.solverstate I0427 19:11:09.586941 23810 solver.cpp:330] Iteration 102, Testing net (#0) I0427 19:11:09.586961 23810 net.cpp:676] Ignoring source layer train-data I0427 19:11:12.355242 23943 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:11:13.758445 23810 solver.cpp:397] Test net output #0: accuracy = 0.00446429 I0427 19:11:13.758483 23810 solver.cpp:397] Test net output #1: loss = 5.27586 (* 1 = 5.27586 loss) I0427 19:11:17.996634 23810 solver.cpp:218] Iteration 108 (0.535012 iter/s, 22.4294s/12 iters), loss = 5.27086 I0427 19:11:17.996697 23810 solver.cpp:237] Train net output #0: loss = 5.27086 (* 1 = 5.27086 loss) I0427 19:11:17.996711 23810 sgd_solver.cpp:105] Iteration 108, lr = 0.00931179 I0427 19:11:29.806394 23810 solver.cpp:218] Iteration 120 (1.01615 iter/s, 11.8092s/12 iters), loss = 5.23421 I0427 19:11:29.806588 23810 solver.cpp:237] Train net output #0: loss = 5.23421 (* 1 = 5.23421 loss) I0427 19:11:29.806598 23810 sgd_solver.cpp:105] Iteration 120, lr = 0.00923831 I0427 19:11:39.285578 23810 solver.cpp:218] Iteration 132 (1.26601 iter/s, 9.47861s/12 iters), loss = 5.22548 I0427 19:11:39.285624 23810 solver.cpp:237] Train net output #0: loss = 5.22548 (* 1 = 5.22548 loss) I0427 19:11:39.285632 23810 sgd_solver.cpp:105] Iteration 132, lr = 0.0091654 I0427 19:11:48.873174 23810 solver.cpp:218] Iteration 144 (1.25167 iter/s, 9.58717s/12 iters), loss = 5.19262 I0427 19:11:48.873217 23810 solver.cpp:237] Train net output #0: loss = 5.19262 (* 1 = 5.19262 loss) I0427 19:11:48.873226 23810 sgd_solver.cpp:105] Iteration 144, lr = 0.00909308 I0427 19:11:58.344630 23810 solver.cpp:218] Iteration 156 (1.26702 iter/s, 9.47103s/12 iters), loss = 5.16915 I0427 19:11:58.344673 23810 solver.cpp:237] Train net output #0: loss = 
5.16915 (* 1 = 5.16915 loss) I0427 19:11:58.344682 23810 sgd_solver.cpp:105] Iteration 156, lr = 0.00902132 I0427 19:12:07.875228 23810 solver.cpp:218] Iteration 168 (1.25916 iter/s, 9.53018s/12 iters), loss = 5.15933 I0427 19:12:07.875341 23810 solver.cpp:237] Train net output #0: loss = 5.15933 (* 1 = 5.15933 loss) I0427 19:12:07.875350 23810 sgd_solver.cpp:105] Iteration 168, lr = 0.00895013 I0427 19:12:17.257442 23810 solver.cpp:218] Iteration 180 (1.27908 iter/s, 9.38173s/12 iters), loss = 5.13112 I0427 19:12:17.257491 23810 solver.cpp:237] Train net output #0: loss = 5.13112 (* 1 = 5.13112 loss) I0427 19:12:17.257503 23810 sgd_solver.cpp:105] Iteration 180, lr = 0.0088795 I0427 19:12:27.184715 23810 solver.cpp:218] Iteration 192 (1.20885 iter/s, 9.92683s/12 iters), loss = 5.16174 I0427 19:12:27.184759 23810 solver.cpp:237] Train net output #0: loss = 5.16174 (* 1 = 5.16174 loss) I0427 19:12:27.184769 23810 sgd_solver.cpp:105] Iteration 192, lr = 0.00880943 I0427 19:12:34.551403 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:12:35.848412 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_204.caffemodel I0427 19:12:41.700443 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_204.solverstate I0427 19:12:46.120790 23810 solver.cpp:330] Iteration 204, Testing net (#0) I0427 19:12:46.120808 23810 net.cpp:676] Ignoring source layer train-data I0427 19:12:47.760217 23943 data_layer.cpp:73] Restarting data prefetching from start. 
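fc8's top shape of 256 x 196 means this net has 196 output classes, and the earliest test values in the log sit at chance level: a uniform softmax gives an expected loss of ln(196) ≈ 5.278 and an expected top-1 accuracy of 1/196 ≈ 0.005, matching the initial loss of ~5.28 and accuracies of 0.004-0.009. A quick check:

```python
import math

num_classes = 196  # from fc8's top shape: 256 x 196

chance_loss = math.log(num_classes)   # expected softmax loss of a uniform prediction
chance_accuracy = 1.0 / num_classes   # expected top-1 accuracy when guessing

# Matches the early log values: loss ~5.28, test accuracy ~0.004-0.009
assert abs(chance_loss - 5.278) < 0.001
```

The slow drift of the loss below 5.28 over the first few hundred iterations is the first sign the net is learning anything at all.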
I0427 19:12:49.332746 23810 solver.cpp:397] Test net output #0: accuracy = 0.00948661 I0427 19:12:49.332775 23810 solver.cpp:397] Test net output #1: loss = 5.14372 (* 1 = 5.14372 loss) I0427 19:12:49.494626 23810 solver.cpp:218] Iteration 204 (0.5379 iter/s, 22.309s/12 iters), loss = 5.15215 I0427 19:12:49.494683 23810 solver.cpp:237] Train net output #0: loss = 5.15215 (* 1 = 5.15215 loss) I0427 19:12:49.494696 23810 sgd_solver.cpp:105] Iteration 204, lr = 0.00873991 I0427 19:12:57.556529 23810 solver.cpp:218] Iteration 216 (1.48855 iter/s, 8.06152s/12 iters), loss = 5.1768 I0427 19:12:57.556576 23810 solver.cpp:237] Train net output #0: loss = 5.1768 (* 1 = 5.1768 loss) I0427 19:12:57.556584 23810 sgd_solver.cpp:105] Iteration 216, lr = 0.00867094 I0427 19:13:07.311255 23810 solver.cpp:218] Iteration 228 (1.23023 iter/s, 9.75428s/12 iters), loss = 5.06974 I0427 19:13:07.311313 23810 solver.cpp:237] Train net output #0: loss = 5.06974 (* 1 = 5.06974 loss) I0427 19:13:07.311326 23810 sgd_solver.cpp:105] Iteration 228, lr = 0.00860252 I0427 19:13:16.955027 23810 solver.cpp:218] Iteration 240 (1.24438 iter/s, 9.64333s/12 iters), loss = 5.13137 I0427 19:13:16.955168 23810 solver.cpp:237] Train net output #0: loss = 5.13137 (* 1 = 5.13137 loss) I0427 19:13:16.955178 23810 sgd_solver.cpp:105] Iteration 240, lr = 0.00853463 I0427 19:13:26.775128 23810 solver.cpp:218] Iteration 252 (1.22205 iter/s, 9.81956s/12 iters), loss = 5.14166 I0427 19:13:26.775173 23810 solver.cpp:237] Train net output #0: loss = 5.14166 (* 1 = 5.14166 loss) I0427 19:13:26.775183 23810 sgd_solver.cpp:105] Iteration 252, lr = 0.00846728 I0427 19:13:37.912567 23810 solver.cpp:218] Iteration 264 (1.07845 iter/s, 11.1271s/12 iters), loss = 5.11355 I0427 19:13:37.912628 23810 solver.cpp:237] Train net output #0: loss = 5.11355 (* 1 = 5.11355 loss) I0427 19:13:37.912642 23810 sgd_solver.cpp:105] Iteration 264, lr = 0.00840046 I0427 19:13:57.800839 23810 solver.cpp:218] Iteration 276 (0.603396 iter/s, 
19.8874s/12 iters), loss = 5.08452 I0427 19:13:57.813659 23810 solver.cpp:237] Train net output #0: loss = 5.08452 (* 1 = 5.08452 loss) I0427 19:13:57.813683 23810 sgd_solver.cpp:105] Iteration 276, lr = 0.00833417 I0427 19:14:11.250164 23810 solver.cpp:218] Iteration 288 (0.893123 iter/s, 13.436s/12 iters), loss = 5.10309 I0427 19:14:11.250206 23810 solver.cpp:237] Train net output #0: loss = 5.10309 (* 1 = 5.10309 loss) I0427 19:14:11.250214 23810 sgd_solver.cpp:105] Iteration 288, lr = 0.00826841 I0427 19:14:24.188681 23810 solver.cpp:218] Iteration 300 (0.927504 iter/s, 12.938s/12 iters), loss = 5.1315 I0427 19:14:24.188743 23810 solver.cpp:237] Train net output #0: loss = 5.1315 (* 1 = 5.1315 loss) I0427 19:14:24.188755 23810 sgd_solver.cpp:105] Iteration 300, lr = 0.00820316 I0427 19:14:26.691829 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:14:29.519140 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_306.caffemodel I0427 19:14:38.741139 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_306.solverstate I0427 19:14:42.314996 23810 solver.cpp:330] Iteration 306, Testing net (#0) I0427 19:14:42.315016 23810 net.cpp:676] Ignoring source layer train-data I0427 19:14:43.720014 23943 data_layer.cpp:73] Restarting data prefetching from start. 
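The learning rates printed by sgd_solver.cpp follow the "exp" policy declared in the solver: lr = base_lr * gamma^iter, with base_lr 0.01 and gamma 0.99934 from the solver parameters at the top of the log. A sketch that reproduces the logged values:

```python
base_lr, gamma = 0.01, 0.99934  # from the solver parameters in the log header

def exp_lr(iteration):
    # Caffe's "exp" lr_policy: lr = base_lr * gamma^iter
    return base_lr * gamma ** iteration

# Reproduces the logged rates:
#   Iteration 12, lr = 0.00992109
#   Iteration 24, lr = 0.0098428
assert abs(exp_lr(12) - 0.00992109) < 1e-7
assert abs(exp_lr(24) - 0.0098428) < 1e-7
```

Over the full run of 3060 iterations this decays the rate to roughly 0.01 * 0.99934^3060 ≈ 0.0013.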
I0427 19:14:46.527870 23810 solver.cpp:397] Test net output #0: accuracy = 0.0122768 I0427 19:14:46.527900 23810 solver.cpp:397] Test net output #1: loss = 5.09113 (* 1 = 5.09113 loss) I0427 19:14:50.928254 23810 solver.cpp:218] Iteration 312 (0.448792 iter/s, 26.7385s/12 iters), loss = 5.03665 I0427 19:14:50.928318 23810 solver.cpp:237] Train net output #0: loss = 5.03665 (* 1 = 5.03665 loss) I0427 19:14:50.928326 23810 sgd_solver.cpp:105] Iteration 312, lr = 0.00813842 I0427 19:15:03.627173 23810 solver.cpp:218] Iteration 324 (0.945005 iter/s, 12.6983s/12 iters), loss = 5.05474 I0427 19:15:03.629384 23810 solver.cpp:237] Train net output #0: loss = 5.05474 (* 1 = 5.05474 loss) I0427 19:15:03.629398 23810 sgd_solver.cpp:105] Iteration 324, lr = 0.0080742 I0427 19:15:16.504554 23810 solver.cpp:218] Iteration 336 (0.932286 iter/s, 12.8716s/12 iters), loss = 5.06623 I0427 19:15:16.504611 23810 solver.cpp:237] Train net output #0: loss = 5.06623 (* 1 = 5.06623 loss) I0427 19:15:16.504622 23810 sgd_solver.cpp:105] Iteration 336, lr = 0.00801048 I0427 19:15:29.449254 23810 solver.cpp:218] Iteration 348 (0.927061 iter/s, 12.9441s/12 iters), loss = 5.07216 I0427 19:15:29.449314 23810 solver.cpp:237] Train net output #0: loss = 5.07216 (* 1 = 5.07216 loss) I0427 19:15:29.449326 23810 sgd_solver.cpp:105] Iteration 348, lr = 0.00794727 I0427 19:15:42.497638 23810 solver.cpp:218] Iteration 360 (0.919695 iter/s, 13.0478s/12 iters), loss = 4.98448 I0427 19:15:42.508531 23810 solver.cpp:237] Train net output #0: loss = 4.98448 (* 1 = 4.98448 loss) I0427 19:15:42.508553 23810 sgd_solver.cpp:105] Iteration 360, lr = 0.00788456 I0427 19:15:55.293190 23810 solver.cpp:218] Iteration 372 (0.938658 iter/s, 12.7842s/12 iters), loss = 5.09387 I0427 19:15:55.293247 23810 solver.cpp:237] Train net output #0: loss = 5.09387 (* 1 = 5.09387 loss) I0427 19:15:55.293260 23810 sgd_solver.cpp:105] Iteration 372, lr = 0.00782234 I0427 19:16:08.788147 23810 solver.cpp:218] Iteration 384 (0.88926 
iter/s, 13.4944s/12 iters), loss = 5.09024 I0427 19:16:08.800568 23810 solver.cpp:237] Train net output #0: loss = 5.09024 (* 1 = 5.09024 loss) I0427 19:16:08.800593 23810 sgd_solver.cpp:105] Iteration 384, lr = 0.00776061 I0427 19:16:21.790889 23810 solver.cpp:218] Iteration 396 (0.923801 iter/s, 12.9898s/12 iters), loss = 5.07018 I0427 19:16:21.791075 23810 solver.cpp:237] Train net output #0: loss = 5.07018 (* 1 = 5.07018 loss) I0427 19:16:21.791088 23810 sgd_solver.cpp:105] Iteration 396, lr = 0.00769937 I0427 19:16:30.181129 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:16:33.891472 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_408.caffemodel I0427 19:16:38.413650 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_408.solverstate I0427 19:16:41.137689 23810 solver.cpp:330] Iteration 408, Testing net (#0) I0427 19:16:41.137715 23810 net.cpp:676] Ignoring source layer train-data I0427 19:16:41.795092 23943 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:16:45.131821 23810 solver.cpp:397] Test net output #0: accuracy = 0.016183 I0427 19:16:45.131855 23810 solver.cpp:397] Test net output #1: loss = 5.04728 (* 1 = 5.04728 loss) I0427 19:16:45.293864 23810 solver.cpp:218] Iteration 408 (0.510598 iter/s, 23.5019s/12 iters), loss = 5.0022 I0427 19:16:45.293907 23810 solver.cpp:237] Train net output #0: loss = 5.0022 (* 1 = 5.0022 loss) I0427 19:16:45.293916 23810 sgd_solver.cpp:105] Iteration 408, lr = 0.00763861 I0427 19:16:47.945899 23943 data_layer.cpp:73] Restarting data prefetching from start. 
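The throughput figure in each "Iteration N (X iter/s, Ys/12 iters)" line is just the display interval (12 iterations, from `display: 12` in the solver) divided by the wall time since the previous display; the absurd 9.94e+36 iter/s at iteration 0 is a known artifact of an uninitialized timer on the first display. A sketch:

```python
DISPLAY_INTERVAL = 12  # "display: 12" in the solver parameters

def iters_per_second(elapsed_s, interval=DISPLAY_INTERVAL):
    """Throughput as logged by solver.cpp:218."""
    return interval / elapsed_s

# "Iteration 12 (0.883856 iter/s, 13.5769s/12 iters)"
assert abs(iters_per_second(13.5769) - 0.883856) < 1e-4
```

The swings between ~0.4 and ~1.3 iter/s in this log line up with snapshotting and testing pauses, which are counted inside the elapsed interval.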
I0427 19:16:55.928529 23810 solver.cpp:218] Iteration 420 (1.12844 iter/s, 10.6341s/12 iters), loss = 4.96925 I0427 19:16:55.930389 23810 solver.cpp:237] Train net output #0: loss = 4.96925 (* 1 = 4.96925 loss) I0427 19:16:55.930403 23810 sgd_solver.cpp:105] Iteration 420, lr = 0.00757833 I0427 19:17:08.153527 23810 solver.cpp:218] Iteration 432 (0.981784 iter/s, 12.2227s/12 iters), loss = 5.0644 I0427 19:17:08.153574 23810 solver.cpp:237] Train net output #0: loss = 5.0644 (* 1 = 5.0644 loss) I0427 19:17:08.153584 23810 sgd_solver.cpp:105] Iteration 432, lr = 0.00751852 I0427 19:17:17.734647 23810 solver.cpp:218] Iteration 444 (1.25252 iter/s, 9.58068s/12 iters), loss = 5.00421 I0427 19:17:17.734697 23810 solver.cpp:237] Train net output #0: loss = 5.00421 (* 1 = 5.00421 loss) I0427 19:17:17.734709 23810 sgd_solver.cpp:105] Iteration 444, lr = 0.00745919 I0427 19:17:27.228205 23810 solver.cpp:218] Iteration 456 (1.26407 iter/s, 9.49312s/12 iters), loss = 4.9901 I0427 19:17:27.230047 23810 solver.cpp:237] Train net output #0: loss = 4.9901 (* 1 = 4.9901 loss) I0427 19:17:27.230060 23810 sgd_solver.cpp:105] Iteration 456, lr = 0.00740033 I0427 19:17:36.781705 23810 solver.cpp:218] Iteration 468 (1.25638 iter/s, 9.55128s/12 iters), loss = 4.98945 I0427 19:17:36.781749 23810 solver.cpp:237] Train net output #0: loss = 4.98945 (* 1 = 4.98945 loss) I0427 19:17:36.781759 23810 sgd_solver.cpp:105] Iteration 468, lr = 0.00734193 I0427 19:17:46.280876 23810 solver.cpp:218] Iteration 480 (1.26333 iter/s, 9.49874s/12 iters), loss = 4.96181 I0427 19:17:46.280920 23810 solver.cpp:237] Train net output #0: loss = 4.96181 (* 1 = 4.96181 loss) I0427 19:17:46.280928 23810 sgd_solver.cpp:105] Iteration 480, lr = 0.00728399 I0427 19:17:56.071491 23810 solver.cpp:218] Iteration 492 (1.22572 iter/s, 9.79016s/12 iters), loss = 4.92384 I0427 19:17:56.071544 23810 solver.cpp:237] Train net output #0: loss = 4.92384 (* 1 = 4.92384 loss) I0427 19:17:56.071554 23810 sgd_solver.cpp:105] 
Iteration 492, lr = 0.00722651 I0427 19:18:05.496701 23810 solver.cpp:218] Iteration 504 (1.27324 iter/s, 9.42477s/12 iters), loss = 4.92447 I0427 19:18:05.497606 23810 solver.cpp:237] Train net output #0: loss = 4.92447 (* 1 = 4.92447 loss) I0427 19:18:05.497617 23810 sgd_solver.cpp:105] Iteration 504, lr = 0.00716949 I0427 19:18:06.025575 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:18:09.298389 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_510.caffemodel I0427 19:18:15.374423 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_510.solverstate I0427 19:18:18.235628 23810 solver.cpp:330] Iteration 510, Testing net (#0) I0427 19:18:18.235652 23810 net.cpp:676] Ignoring source layer train-data I0427 19:18:21.235042 23810 solver.cpp:397] Test net output #0: accuracy = 0.0228795 I0427 19:18:21.235072 23810 solver.cpp:397] Test net output #1: loss = 4.95451 (* 1 = 4.95451 loss) I0427 19:18:22.787643 23943 data_layer.cpp:73] Restarting data prefetching from start. 
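With `snapshot: 102` and `test_interval: 102` in the solver, the run tests and snapshots every 102 iterations, which is why the checkpoint files are named snapshot_iter_102, snapshot_iter_204, snapshot_iter_306, and so on up to max_iter 3060. The schedule in full:

```python
snapshot_interval, max_iter = 102, 3060  # from the solver parameters

snapshot_iters = list(range(snapshot_interval, max_iter + 1, snapshot_interval))

# First few entries match the file names in the log
assert snapshot_iters[:5] == [102, 204, 306, 408, 510]
assert len(snapshot_iters) == 30  # 30 test/snapshot points over the whole run
```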
I0427 19:18:24.777573 23810 solver.cpp:218] Iteration 516 (0.622432 iter/s, 19.2792s/12 iters), loss = 4.83638 I0427 19:18:24.777617 23810 solver.cpp:237] Train net output #0: loss = 4.83638 (* 1 = 4.83638 loss) I0427 19:18:24.777626 23810 sgd_solver.cpp:105] Iteration 516, lr = 0.00711291 I0427 19:18:34.325294 23810 solver.cpp:218] Iteration 528 (1.2569 iter/s, 9.54728s/12 iters), loss = 4.8654 I0427 19:18:34.325356 23810 solver.cpp:237] Train net output #0: loss = 4.8654 (* 1 = 4.8654 loss) I0427 19:18:34.325369 23810 sgd_solver.cpp:105] Iteration 528, lr = 0.00705678 I0427 19:18:43.896265 23810 solver.cpp:218] Iteration 540 (1.25385 iter/s, 9.57053s/12 iters), loss = 4.88271 I0427 19:18:43.896369 23810 solver.cpp:237] Train net output #0: loss = 4.88271 (* 1 = 4.88271 loss) I0427 19:18:43.896379 23810 sgd_solver.cpp:105] Iteration 540, lr = 0.00700109 I0427 19:18:53.404134 23810 solver.cpp:218] Iteration 552 (1.26218 iter/s, 9.50738s/12 iters), loss = 4.92939 I0427 19:18:53.404177 23810 solver.cpp:237] Train net output #0: loss = 4.92939 (* 1 = 4.92939 loss) I0427 19:18:53.404186 23810 sgd_solver.cpp:105] Iteration 552, lr = 0.00694584 I0427 19:19:03.356667 23810 solver.cpp:218] Iteration 564 (1.20578 iter/s, 9.95207s/12 iters), loss = 4.90338 I0427 19:19:03.356729 23810 solver.cpp:237] Train net output #0: loss = 4.90338 (* 1 = 4.90338 loss) I0427 19:19:03.356743 23810 sgd_solver.cpp:105] Iteration 564, lr = 0.00689103 I0427 19:19:13.141902 23810 solver.cpp:218] Iteration 576 (1.22642 iter/s, 9.78458s/12 iters), loss = 4.82544 I0427 19:19:13.141968 23810 solver.cpp:237] Train net output #0: loss = 4.82544 (* 1 = 4.82544 loss) I0427 19:19:13.141976 23810 sgd_solver.cpp:105] Iteration 576, lr = 0.00683665 I0427 19:19:23.070026 23810 solver.cpp:218] Iteration 588 (1.20874 iter/s, 9.92765s/12 iters), loss = 4.84965 I0427 19:19:23.070143 23810 solver.cpp:237] Train net output #0: loss = 4.84965 (* 1 = 4.84965 loss) I0427 19:19:23.070153 23810 sgd_solver.cpp:105] 
Iteration 588, lr = 0.0067827 I0427 19:19:41.036849 23810 solver.cpp:218] Iteration 600 (0.667929 iter/s, 17.966s/12 iters), loss = 4.83285 I0427 19:19:41.036921 23810 solver.cpp:237] Train net output #0: loss = 4.83285 (* 1 = 4.83285 loss) I0427 19:19:41.036933 23810 sgd_solver.cpp:105] Iteration 600, lr = 0.00672918 I0427 19:19:49.757575 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:19:55.239158 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_612.caffemodel I0427 19:20:04.022778 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_612.solverstate I0427 19:20:07.615145 23810 solver.cpp:330] Iteration 612, Testing net (#0) I0427 19:20:07.615166 23810 net.cpp:676] Ignoring source layer train-data I0427 19:20:11.618229 23810 solver.cpp:397] Test net output #0: accuracy = 0.0329241 I0427 19:20:11.618265 23810 solver.cpp:397] Test net output #1: loss = 4.83938 (* 1 = 4.83938 loss) I0427 19:20:11.794260 23810 solver.cpp:218] Iteration 612 (0.390166 iter/s, 30.7561s/12 iters), loss = 4.92101 I0427 19:20:11.794323 23810 solver.cpp:237] Train net output #0: loss = 4.92101 (* 1 = 4.92101 loss) I0427 19:20:11.794334 23810 sgd_solver.cpp:105] Iteration 612, lr = 0.00667608 I0427 19:20:13.209606 23943 data_layer.cpp:73] Restarting data prefetching from start. 
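The "Restarting data prefetching from start" messages from the training-data thread recur roughly every 102 iterations, i.e. once per pass over the training LMDB. That spacing, together with the batch size of 256 from train-data's data_param, suggests a training set of roughly 102 x 256 ≈ 26k images, and makes max_iter 3060 about 30 epochs (the iters-per-epoch figure here is an estimate read off the log, not a logged value):

```python
batch_size = 256        # from train-data's data_param in the net definition
iters_per_epoch = 102   # estimated from the spacing of the prefetch-restart messages
max_iter = 3060         # from the solver parameters

approx_train_images = iters_per_epoch * batch_size  # rough training-set size
epochs = max_iter // iters_per_epoch                # ~30 epochs total

assert approx_train_images == 26112
assert epochs == 30
```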
I0427 19:20:22.333182 23810 solver.cpp:218] Iteration 624 (1.13869 iter/s, 10.5384s/12 iters), loss = 4.80611 I0427 19:20:22.333250 23810 solver.cpp:237] Train net output #0: loss = 4.80611 (* 1 = 4.80611 loss) I0427 19:20:22.333262 23810 sgd_solver.cpp:105] Iteration 624, lr = 0.00662339 I0427 19:20:35.073413 23810 solver.cpp:218] Iteration 636 (0.941941 iter/s, 12.7396s/12 iters), loss = 4.83985 I0427 19:20:35.073843 23810 solver.cpp:237] Train net output #0: loss = 4.83985 (* 1 = 4.83985 loss) I0427 19:20:35.073856 23810 sgd_solver.cpp:105] Iteration 636, lr = 0.00657113 I0427 19:20:47.762445 23810 solver.cpp:218] Iteration 648 (0.945769 iter/s, 12.6881s/12 iters), loss = 4.83271 I0427 19:20:47.762502 23810 solver.cpp:237] Train net output #0: loss = 4.83271 (* 1 = 4.83271 loss) I0427 19:20:47.762512 23810 sgd_solver.cpp:105] Iteration 648, lr = 0.00651927 I0427 19:21:00.467459 23810 solver.cpp:218] Iteration 660 (0.944552 iter/s, 12.7044s/12 iters), loss = 4.76191 I0427 19:21:00.467522 23810 solver.cpp:237] Train net output #0: loss = 4.76191 (* 1 = 4.76191 loss) I0427 19:21:00.467535 23810 sgd_solver.cpp:105] Iteration 660, lr = 0.00646782 I0427 19:21:13.178560 23810 solver.cpp:218] Iteration 672 (0.9441 iter/s, 12.7105s/12 iters), loss = 4.81114 I0427 19:21:13.178877 23810 solver.cpp:237] Train net output #0: loss = 4.81114 (* 1 = 4.81114 loss) I0427 19:21:13.178890 23810 sgd_solver.cpp:105] Iteration 672, lr = 0.00641678 I0427 19:21:25.544593 23810 solver.cpp:218] Iteration 684 (0.970464 iter/s, 12.3652s/12 iters), loss = 4.81177 I0427 19:21:25.544649 23810 solver.cpp:237] Train net output #0: loss = 4.81177 (* 1 = 4.81177 loss) I0427 19:21:25.544661 23810 sgd_solver.cpp:105] Iteration 684, lr = 0.00636615 I0427 19:21:38.160421 23810 solver.cpp:218] Iteration 696 (0.951229 iter/s, 12.6153s/12 iters), loss = 4.79758 I0427 19:21:38.160465 23810 solver.cpp:237] Train net output #0: loss = 4.79758 (* 1 = 4.79758 loss) I0427 19:21:38.160475 23810 
sgd_solver.cpp:105] Iteration 696, lr = 0.00631591 I0427 19:21:55.126952 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:21:57.432562 23810 solver.cpp:218] Iteration 708 (0.623699 iter/s, 19.2401s/12 iters), loss = 4.77363 I0427 19:21:57.432619 23810 solver.cpp:237] Train net output #0: loss = 4.77363 (* 1 = 4.77363 loss) I0427 19:21:57.432631 23810 sgd_solver.cpp:105] Iteration 708, lr = 0.00626607 I0427 19:22:08.363253 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_714.caffemodel I0427 19:22:14.423629 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_714.solverstate I0427 19:22:18.283361 23810 solver.cpp:330] Iteration 714, Testing net (#0) I0427 19:22:18.283387 23810 net.cpp:676] Ignoring source layer train-data I0427 19:22:23.198225 23810 solver.cpp:397] Test net output #0: accuracy = 0.0373884 I0427 19:22:23.198269 23810 solver.cpp:397] Test net output #1: loss = 4.7294 (* 1 = 4.7294 loss) I0427 19:22:24.333964 23943 data_layer.cpp:73] Restarting data prefetching from start. 
I0427 19:22:28.184556 23810 solver.cpp:218] Iteration 720 (0.390251 iter/s, 30.7495s/12 iters), loss = 4.65005 I0427 19:22:28.186131 23810 solver.cpp:237] Train net output #0: loss = 4.65005 (* 1 = 4.65005 loss) I0427 19:22:28.186148 23810 sgd_solver.cpp:105] Iteration 720, lr = 0.00621662 I0427 19:22:42.510990 23810 solver.cpp:218] Iteration 732 (0.837738 iter/s, 14.3243s/12 iters), loss = 4.61587 I0427 19:22:42.511046 23810 solver.cpp:237] Train net output #0: loss = 4.61587 (* 1 = 4.61587 loss) I0427 19:22:42.511057 23810 sgd_solver.cpp:105] Iteration 732, lr = 0.00616756 I0427 19:22:57.828651 23810 solver.cpp:218] Iteration 744 (0.783444 iter/s, 15.317s/12 iters), loss = 4.64141 I0427 19:22:57.841017 23810 solver.cpp:237] Train net output #0: loss = 4.64141 (* 1 = 4.64141 loss) I0427 19:22:57.841045 23810 sgd_solver.cpp:105] Iteration 744, lr = 0.00611889 I0427 19:23:12.512820 23810 solver.cpp:218] Iteration 756 (0.817928 iter/s, 14.6712s/12 iters), loss = 4.79265 I0427 19:23:12.526942 23810 solver.cpp:237] Train net output #0: loss = 4.79265 (* 1 = 4.79265 loss) I0427 19:23:12.526970 23810 sgd_solver.cpp:105] Iteration 756, lr = 0.00607061 I0427 19:23:27.300565 23810 solver.cpp:218] Iteration 768 (0.812345 iter/s, 14.7721s/12 iters), loss = 4.79474 I0427 19:23:27.300621 23810 solver.cpp:237] Train net output #0: loss = 4.79474 (* 1 = 4.79474 loss) I0427 19:23:27.300630 23810 sgd_solver.cpp:105] Iteration 768, lr = 0.0060227 I0427 19:23:40.606102 23810 solver.cpp:218] Iteration 780 (0.901921 iter/s, 13.3049s/12 iters), loss = 4.63127 I0427 19:23:40.606159 23810 solver.cpp:237] Train net output #0: loss = 4.63127 (* 1 = 4.63127 loss) I0427 19:23:40.606171 23810 sgd_solver.cpp:105] Iteration 780, lr = 0.00597517 I0427 19:23:52.786487 23810 solver.cpp:218] Iteration 792 (0.985236 iter/s, 12.1798s/12 iters), loss = 4.62466 I0427 19:23:52.789999 23810 solver.cpp:237] Train net output #0: loss = 4.62466 (* 1 = 4.62466 loss) I0427 19:23:52.790014 23810 
sgd_solver.cpp:105] Iteration 792, lr = 0.00592802 I0427 19:24:05.740311 23810 solver.cpp:218] Iteration 804 (0.926656 iter/s, 12.9498s/12 iters), loss = 4.65392 I0427 19:24:05.740355 23810 solver.cpp:237] Train net output #0: loss = 4.65392 (* 1 = 4.65392 loss) I0427 19:24:05.740365 23810 sgd_solver.cpp:105] Iteration 804, lr = 0.00588124 I0427 19:24:10.334211 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:24:17.561113 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_816.caffemodel I0427 19:24:21.650934 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_816.solverstate I0427 19:24:24.671726 23810 solver.cpp:330] Iteration 816, Testing net (#0) I0427 19:24:24.671816 23810 net.cpp:676] Ignoring source layer train-data I0427 19:24:28.880919 23810 solver.cpp:397] Test net output #0: accuracy = 0.047433 I0427 19:24:28.880959 23810 solver.cpp:397] Test net output #1: loss = 4.62475 (* 1 = 4.62475 loss) I0427 19:24:29.048189 23810 solver.cpp:218] Iteration 816 (0.514969 iter/s, 23.3024s/12 iters), loss = 4.55167 I0427 19:24:29.048249 23810 solver.cpp:237] Train net output #0: loss = 4.55167 (* 1 = 4.55167 loss) I0427 19:24:29.048261 23810 sgd_solver.cpp:105] Iteration 816, lr = 0.00583483 I0427 19:24:29.150295 23943 data_layer.cpp:73] Restarting data prefetching from start. 
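To turn a raw log like this into learning curves, the loss and accuracy lines can be scraped with regular expressions. A minimal sketch, with the patterns written against the solver.cpp line formats seen above (`parse_train_loss` is an illustrative helper name, not part of any tool):

```python
import re

TRAIN_RE = re.compile(r"Iteration (\d+) \([^)]*\), loss = ([\d.]+)")
TEST_ACC_RE = re.compile(r"Test net output #0: accuracy = ([\d.]+)")

def parse_train_loss(line):
    """Return (iteration, loss) for a solver.cpp:218 display line, else None."""
    m = TRAIN_RE.search(line)
    return (int(m.group(1)), float(m.group(2))) if m else None

line = ("I0427 19:08:51.942961 23810 solver.cpp:218] Iteration 12 "
        "(0.883856 iter/s, 13.5769s/12 iters), loss = 5.2622")
assert parse_train_loss(line) == (12, 5.2622)
```

Applying `parse_train_loss` line by line (and `TEST_ACC_RE` for the test points) recovers the full training and validation curves from this output.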
I0427 19:24:39.995539 23810 solver.cpp:218] Iteration 828 (1.09621 iter/s, 10.9468s/12 iters), loss = 4.45701
I0427 19:24:39.995585 23810 solver.cpp:237] Train net output #0: loss = 4.45701 (* 1 = 4.45701 loss)
I0427 19:24:39.995594 23810 sgd_solver.cpp:105] Iteration 828, lr = 0.00578879
I0427 19:24:53.106590 23810 solver.cpp:218] Iteration 840 (0.915299 iter/s, 13.1105s/12 iters), loss = 4.56345
I0427 19:24:53.106647 23810 solver.cpp:237] Train net output #0: loss = 4.56345 (* 1 = 4.56345 loss)
I0427 19:24:53.106658 23810 sgd_solver.cpp:105] Iteration 840, lr = 0.00574311
I0427 19:25:06.181860 23810 solver.cpp:218] Iteration 852 (0.917805 iter/s, 13.0747s/12 iters), loss = 4.56535
I0427 19:25:06.196594 23810 solver.cpp:237] Train net output #0: loss = 4.56535 (* 1 = 4.56535 loss)
I0427 19:25:06.196612 23810 sgd_solver.cpp:105] Iteration 852, lr = 0.00569778
I0427 19:25:17.362203 23810 solver.cpp:218] Iteration 864 (1.07477 iter/s, 11.1652s/12 iters), loss = 4.41324
I0427 19:25:17.362252 23810 solver.cpp:237] Train net output #0: loss = 4.41324 (* 1 = 4.41324 loss)
I0427 19:25:17.362260 23810 sgd_solver.cpp:105] Iteration 864, lr = 0.00565282
I0427 19:25:26.858595 23810 solver.cpp:218] Iteration 876 (1.2637 iter/s, 9.49595s/12 iters), loss = 4.31594
I0427 19:25:26.858640 23810 solver.cpp:237] Train net output #0: loss = 4.31594 (* 1 = 4.31594 loss)
I0427 19:25:26.858649 23810 sgd_solver.cpp:105] Iteration 876, lr = 0.00560821
I0427 19:25:36.449460 23810 solver.cpp:218] Iteration 888 (1.25125 iter/s, 9.59042s/12 iters), loss = 4.40052
I0427 19:25:36.450130 23810 solver.cpp:237] Train net output #0: loss = 4.40052 (* 1 = 4.40052 loss)
I0427 19:25:36.450140 23810 sgd_solver.cpp:105] Iteration 888, lr = 0.00556396
I0427 19:25:46.077728 23810 solver.cpp:218] Iteration 900 (1.24647 iter/s, 9.6272s/12 iters), loss = 4.35248
I0427 19:25:46.077769 23810 solver.cpp:237] Train net output #0: loss = 4.35248 (* 1 = 4.35248 loss)
I0427 19:25:46.077776 23810 sgd_solver.cpp:105] Iteration 900, lr = 0.00552005
I0427 19:25:53.749680 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:25:55.842551 23810 solver.cpp:218] Iteration 912 (1.22896 iter/s, 9.76439s/12 iters), loss = 4.40483
I0427 19:25:55.842588 23810 solver.cpp:237] Train net output #0: loss = 4.40483 (* 1 = 4.40483 loss)
I0427 19:25:55.842597 23810 sgd_solver.cpp:105] Iteration 912, lr = 0.00547649
I0427 19:25:59.861815 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_918.caffemodel
I0427 19:26:02.884793 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_918.solverstate
I0427 19:26:05.196338 23810 solver.cpp:330] Iteration 918, Testing net (#0)
I0427 19:26:05.196359 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:26:07.904117 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:26:08.305918 23810 solver.cpp:397] Test net output #0: accuracy = 0.0853795
I0427 19:26:08.305949 23810 solver.cpp:397] Test net output #1: loss = 4.40712 (* 1 = 4.40712 loss)
I0427 19:26:11.726522 23810 solver.cpp:218] Iteration 924 (0.755511 iter/s, 15.8833s/12 iters), loss = 4.33858
I0427 19:26:11.726569 23810 solver.cpp:237] Train net output #0: loss = 4.33858 (* 1 = 4.33858 loss)
I0427 19:26:11.726578 23810 sgd_solver.cpp:105] Iteration 924, lr = 0.00543327
I0427 19:26:21.343950 23810 solver.cpp:218] Iteration 936 (1.24779 iter/s, 9.61698s/12 iters), loss = 4.23896
I0427 19:26:21.343998 23810 solver.cpp:237] Train net output #0: loss = 4.23896 (* 1 = 4.23896 loss)
I0427 19:26:21.344008 23810 sgd_solver.cpp:105] Iteration 936, lr = 0.0053904
I0427 19:26:30.964148 23810 solver.cpp:218] Iteration 948 (1.24743 iter/s, 9.61976s/12 iters), loss = 4.17401
I0427 19:26:30.964195 23810 solver.cpp:237] Train net output #0: loss = 4.17401 (* 1 = 4.17401 loss)
I0427 19:26:30.964203 23810 sgd_solver.cpp:105] Iteration 948, lr = 0.00534786
I0427 19:26:40.504468 23810 solver.cpp:218] Iteration 960 (1.25788 iter/s, 9.53988s/12 iters), loss = 4.25876
I0427 19:26:40.504861 23810 solver.cpp:237] Train net output #0: loss = 4.25876 (* 1 = 4.25876 loss)
I0427 19:26:40.504874 23810 sgd_solver.cpp:105] Iteration 960, lr = 0.00530566
I0427 19:26:50.183631 23810 solver.cpp:218] Iteration 972 (1.23988 iter/s, 9.67836s/12 iters), loss = 4.21374
I0427 19:26:50.183698 23810 solver.cpp:237] Train net output #0: loss = 4.21374 (* 1 = 4.21374 loss)
I0427 19:26:50.183712 23810 sgd_solver.cpp:105] Iteration 972, lr = 0.00526379
I0427 19:26:59.742141 23810 solver.cpp:218] Iteration 984 (1.25549 iter/s, 9.55805s/12 iters), loss = 4.09107
I0427 19:26:59.742183 23810 solver.cpp:237] Train net output #0: loss = 4.09107 (* 1 = 4.09107 loss)
I0427 19:26:59.742192 23810 sgd_solver.cpp:105] Iteration 984, lr = 0.00522225
I0427 19:27:02.912415 23810 blocking_queue.cpp:49] Waiting for data
I0427 19:27:09.296279 23810 solver.cpp:218] Iteration 996 (1.25606 iter/s, 9.5537s/12 iters), loss = 4.23114
I0427 19:27:09.296324 23810 solver.cpp:237] Train net output #0: loss = 4.23114 (* 1 = 4.23114 loss)
I0427 19:27:09.296331 23810 sgd_solver.cpp:105] Iteration 996, lr = 0.00518104
I0427 19:27:19.233317 23810 solver.cpp:218] Iteration 1008 (1.20766 iter/s, 9.93659s/12 iters), loss = 4.30655
I0427 19:27:19.233711 23810 solver.cpp:237] Train net output #0: loss = 4.30655 (* 1 = 4.30655 loss)
I0427 19:27:19.233721 23810 sgd_solver.cpp:105] Iteration 1008, lr = 0.00514015
I0427 19:27:21.279713 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:27:28.082283 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1020.caffemodel
I0427 19:27:31.115478 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1020.solverstate
I0427 19:27:33.428221 23810 solver.cpp:330] Iteration 1020, Testing net (#0)
I0427 19:27:33.428241 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:27:35.533305 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:27:36.445698 23810 solver.cpp:397] Test net output #0: accuracy = 0.0954241
I0427 19:27:36.445726 23810 solver.cpp:397] Test net output #1: loss = 4.23627 (* 1 = 4.23627 loss)
I0427 19:27:36.607903 23810 solver.cpp:218] Iteration 1020 (0.690707 iter/s, 17.3735s/12 iters), loss = 4.10262
I0427 19:27:36.607944 23810 solver.cpp:237] Train net output #0: loss = 4.10262 (* 1 = 4.10262 loss)
I0427 19:27:36.607954 23810 sgd_solver.cpp:105] Iteration 1020, lr = 0.00509959
I0427 19:27:44.490015 23810 solver.cpp:218] Iteration 1032 (1.5225 iter/s, 7.88175s/12 iters), loss = 4.17548
I0427 19:27:44.490054 23810 solver.cpp:237] Train net output #0: loss = 4.17548 (* 1 = 4.17548 loss)
I0427 19:27:44.490062 23810 sgd_solver.cpp:105] Iteration 1032, lr = 0.00505935
I0427 19:27:54.161039 23810 solver.cpp:218] Iteration 1044 (1.24088 iter/s, 9.67059s/12 iters), loss = 4.02158
I0427 19:27:54.161171 23810 solver.cpp:237] Train net output #0: loss = 4.02158 (* 1 = 4.02158 loss)
I0427 19:27:54.161180 23810 sgd_solver.cpp:105] Iteration 1044, lr = 0.00501942
I0427 19:28:03.789826 23810 solver.cpp:218] Iteration 1056 (1.24633 iter/s, 9.62826s/12 iters), loss = 3.97395
I0427 19:28:03.789866 23810 solver.cpp:237] Train net output #0: loss = 3.97395 (* 1 = 3.97395 loss)
I0427 19:28:03.789875 23810 sgd_solver.cpp:105] Iteration 1056, lr = 0.00497981
I0427 19:28:13.308192 23810 solver.cpp:218] Iteration 1068 (1.26078 iter/s, 9.51793s/12 iters), loss = 3.92006
I0427 19:28:13.308234 23810 solver.cpp:237] Train net output #0: loss = 3.92006 (* 1 = 3.92006 loss)
I0427 19:28:13.308243 23810 sgd_solver.cpp:105] Iteration 1068, lr = 0.00494052
I0427 19:28:22.800523 23810 solver.cpp:218] Iteration 1080 (1.26424 iter/s, 9.49187s/12 iters), loss = 4.0803
I0427 19:28:22.800570 23810 solver.cpp:237] Train net output #0: loss = 4.0803 (* 1 = 4.0803 loss)
I0427 19:28:22.800578 23810 sgd_solver.cpp:105] Iteration 1080, lr = 0.00490153
I0427 19:28:32.338241 23810 solver.cpp:218] Iteration 1092 (1.25822 iter/s, 9.53728s/12 iters), loss = 4.10808
I0427 19:28:32.338348 23810 solver.cpp:237] Train net output #0: loss = 4.10808 (* 1 = 4.10808 loss)
I0427 19:28:32.338357 23810 sgd_solver.cpp:105] Iteration 1092, lr = 0.00486285
I0427 19:28:42.013208 23810 solver.cpp:218] Iteration 1104 (1.24038 iter/s, 9.67446s/12 iters), loss = 4.05093
I0427 19:28:42.013257 23810 solver.cpp:237] Train net output #0: loss = 4.05093 (* 1 = 4.05093 loss)
I0427 19:28:42.013265 23810 sgd_solver.cpp:105] Iteration 1104, lr = 0.00482448
I0427 19:28:48.060762 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:28:51.654923 23810 solver.cpp:218] Iteration 1116 (1.24465 iter/s, 9.64127s/12 iters), loss = 3.72264
I0427 19:28:51.654968 23810 solver.cpp:237] Train net output #0: loss = 3.72264 (* 1 = 3.72264 loss)
I0427 19:28:51.654976 23810 sgd_solver.cpp:105] Iteration 1116, lr = 0.0047864
I0427 19:28:55.530076 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1122.caffemodel
I0427 19:29:03.721940 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1122.solverstate
I0427 19:29:07.444954 23810 solver.cpp:330] Iteration 1122, Testing net (#0)
I0427 19:29:07.444983 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:29:09.191290 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:29:10.558537 23810 solver.cpp:397] Test net output #0: accuracy = 0.112165
I0427 19:29:10.558574 23810 solver.cpp:397] Test net output #1: loss = 4.08405 (* 1 = 4.08405 loss)
I0427 19:29:14.201396 23810 solver.cpp:218] Iteration 1128 (0.532257 iter/s, 22.5455s/12 iters), loss = 3.92602
I0427 19:29:14.201454 23810 solver.cpp:237] Train net output #0: loss = 3.92602 (* 1 = 3.92602 loss)
I0427 19:29:14.201465 23810 sgd_solver.cpp:105] Iteration 1128, lr = 0.00474863
I0427 19:29:23.770715 23810 solver.cpp:218] Iteration 1140 (1.25407 iter/s, 9.56887s/12 iters), loss = 3.98341
I0427 19:29:23.770766 23810 solver.cpp:237] Train net output #0: loss = 3.98341 (* 1 = 3.98341 loss)
I0427 19:29:23.770776 23810 sgd_solver.cpp:105] Iteration 1140, lr = 0.00471116
I0427 19:29:33.368619 23810 solver.cpp:218] Iteration 1152 (1.25033 iter/s, 9.59746s/12 iters), loss = 4.17852
I0427 19:29:33.368670 23810 solver.cpp:237] Train net output #0: loss = 4.17852 (* 1 = 4.17852 loss)
I0427 19:29:33.368681 23810 sgd_solver.cpp:105] Iteration 1152, lr = 0.00467398
I0427 19:29:43.070777 23810 solver.cpp:218] Iteration 1164 (1.2369 iter/s, 9.70171s/12 iters), loss = 4.05607
I0427 19:29:43.070930 23810 solver.cpp:237] Train net output #0: loss = 4.05607 (* 1 = 4.05607 loss)
I0427 19:29:43.070940 23810 sgd_solver.cpp:105] Iteration 1164, lr = 0.0046371
I0427 19:29:52.509899 23810 solver.cpp:218] Iteration 1176 (1.27138 iter/s, 9.43858s/12 iters), loss = 3.79628
I0427 19:29:52.509943 23810 solver.cpp:237] Train net output #0: loss = 3.79628 (* 1 = 3.79628 loss)
I0427 19:29:52.509950 23810 sgd_solver.cpp:105] Iteration 1176, lr = 0.00460051
I0427 19:30:02.426576 23810 solver.cpp:218] Iteration 1188 (1.21014 iter/s, 9.91623s/12 iters), loss = 3.81043
I0427 19:30:02.426622 23810 solver.cpp:237] Train net output #0: loss = 3.81043 (* 1 = 3.81043 loss)
I0427 19:30:02.426631 23810 sgd_solver.cpp:105] Iteration 1188, lr = 0.0045642
I0427 19:30:11.711247 23810 solver.cpp:218] Iteration 1200 (1.29251 iter/s, 9.28424s/12 iters), loss = 3.81588
I0427 19:30:11.711287 23810 solver.cpp:237] Train net output #0: loss = 3.81588 (* 1 = 3.81588 loss)
I0427 19:30:11.711294 23810 sgd_solver.cpp:105] Iteration 1200, lr = 0.00452818
I0427 19:30:21.390172 23810 solver.cpp:218] Iteration 1212 (1.23986 iter/s, 9.67849s/12 iters), loss = 3.68586
I0427 19:30:21.390270 23810 solver.cpp:237] Train net output #0: loss = 3.68586 (* 1 = 3.68586 loss)
I0427 19:30:21.390280 23810 sgd_solver.cpp:105] Iteration 1212, lr = 0.00449245
I0427 19:30:21.924535 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:30:30.121770 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1224.caffemodel
I0427 19:30:35.571188 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1224.solverstate
I0427 19:30:37.884375 23810 solver.cpp:330] Iteration 1224, Testing net (#0)
I0427 19:30:37.884400 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:30:39.069721 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:30:41.039160 23810 solver.cpp:397] Test net output #0: accuracy = 0.124442
I0427 19:30:41.039193 23810 solver.cpp:397] Test net output #1: loss = 3.99179 (* 1 = 3.99179 loss)
I0427 19:30:41.199111 23810 solver.cpp:218] Iteration 1224 (0.605815 iter/s, 19.808s/12 iters), loss = 3.62536
I0427 19:30:41.199156 23810 solver.cpp:237] Train net output #0: loss = 3.62536 (* 1 = 3.62536 loss)
I0427 19:30:41.199163 23810 sgd_solver.cpp:105] Iteration 1224, lr = 0.004457
I0427 19:30:49.163661 23810 solver.cpp:218] Iteration 1236 (1.50675 iter/s, 7.96418s/12 iters), loss = 3.69529
I0427 19:30:49.163707 23810 solver.cpp:237] Train net output #0: loss = 3.69529 (* 1 = 3.69529 loss)
I0427 19:30:49.163715 23810 sgd_solver.cpp:105] Iteration 1236, lr = 0.00442183
I0427 19:30:58.637065 23810 solver.cpp:218] Iteration 1248 (1.26676 iter/s, 9.47296s/12 iters), loss = 3.88463
I0427 19:30:58.637209 23810 solver.cpp:237] Train net output #0: loss = 3.88463 (* 1 = 3.88463 loss)
I0427 19:30:58.637225 23810 sgd_solver.cpp:105] Iteration 1248, lr = 0.00438693
I0427 19:31:08.198923 23810 solver.cpp:218] Iteration 1260 (1.25506 iter/s, 9.56133s/12 iters), loss = 3.71264
I0427 19:31:08.198964 23810 solver.cpp:237] Train net output #0: loss = 3.71264 (* 1 = 3.71264 loss)
I0427 19:31:08.198973 23810 sgd_solver.cpp:105] Iteration 1260, lr = 0.00435231
I0427 19:31:17.782263 23810 solver.cpp:218] Iteration 1272 (1.25223 iter/s, 9.5829s/12 iters), loss = 3.71487
I0427 19:31:17.782311 23810 solver.cpp:237] Train net output #0: loss = 3.71487 (* 1 = 3.71487 loss)
I0427 19:31:17.782320 23810 sgd_solver.cpp:105] Iteration 1272, lr = 0.00431797
I0427 19:31:27.383476 23810 solver.cpp:218] Iteration 1284 (1.2499 iter/s, 9.60077s/12 iters), loss = 3.69247
I0427 19:31:27.383522 23810 solver.cpp:237] Train net output #0: loss = 3.69247 (* 1 = 3.69247 loss)
I0427 19:31:27.383531 23810 sgd_solver.cpp:105] Iteration 1284, lr = 0.00428389
I0427 19:31:36.853140 23810 solver.cpp:218] Iteration 1296 (1.26726 iter/s, 9.46923s/12 iters), loss = 3.51774
I0427 19:31:36.853281 23810 solver.cpp:237] Train net output #0: loss = 3.51774 (* 1 = 3.51774 loss)
I0427 19:31:36.853291 23810 sgd_solver.cpp:105] Iteration 1296, lr = 0.00425009
I0427 19:31:46.344022 23810 solver.cpp:218] Iteration 1308 (1.26444 iter/s, 9.49036s/12 iters), loss = 3.51971
I0427 19:31:46.344060 23810 solver.cpp:237] Train net output #0: loss = 3.51971 (* 1 = 3.51971 loss)
I0427 19:31:46.344070 23810 sgd_solver.cpp:105] Iteration 1308, lr = 0.00421655
I0427 19:31:51.058030 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:31:55.896901 23810 solver.cpp:218] Iteration 1320 (1.25622 iter/s, 9.55245s/12 iters), loss = 3.50136
I0427 19:31:55.896947 23810 solver.cpp:237] Train net output #0: loss = 3.50136 (* 1 = 3.50136 loss)
I0427 19:31:55.896956 23810 sgd_solver.cpp:105] Iteration 1320, lr = 0.00418328
I0427 19:31:59.853188 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1326.caffemodel
I0427 19:32:08.435227 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1326.solverstate
I0427 19:32:14.492202 23810 solver.cpp:330] Iteration 1326, Testing net (#0)
I0427 19:32:14.492223 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:32:15.189463 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:32:17.562248 23810 solver.cpp:397] Test net output #0: accuracy = 0.13058
I0427 19:32:17.562290 23810 solver.cpp:397] Test net output #1: loss = 4.05188 (* 1 = 4.05188 loss)
I0427 19:32:21.011348 23810 solver.cpp:218] Iteration 1332 (0.477833 iter/s, 25.1134s/12 iters), loss = 3.71585
I0427 19:32:21.011389 23810 solver.cpp:237] Train net output #0: loss = 3.71585 (* 1 = 3.71585 loss)
I0427 19:32:21.011399 23810 sgd_solver.cpp:105] Iteration 1332, lr = 0.00415026
I0427 19:32:30.661098 23810 solver.cpp:218] Iteration 1344 (1.24361 iter/s, 9.64931s/12 iters), loss = 3.42146
I0427 19:32:30.661142 23810 solver.cpp:237] Train net output #0: loss = 3.42146 (* 1 = 3.42146 loss)
I0427 19:32:30.661151 23810 sgd_solver.cpp:105] Iteration 1344, lr = 0.00411751
I0427 19:32:40.071950 23810 solver.cpp:218] Iteration 1356 (1.27518 iter/s, 9.41042s/12 iters), loss = 3.52966
I0427 19:32:40.072048 23810 solver.cpp:237] Train net output #0: loss = 3.52966 (* 1 = 3.52966 loss)
I0427 19:32:40.072057 23810 sgd_solver.cpp:105] Iteration 1356, lr = 0.00408502
I0427 19:32:49.536437 23810 solver.cpp:218] Iteration 1368 (1.26796 iter/s, 9.46399s/12 iters), loss = 3.44604
I0427 19:32:49.536481 23810 solver.cpp:237] Train net output #0: loss = 3.44604 (* 1 = 3.44604 loss)
I0427 19:32:49.536514 23810 sgd_solver.cpp:105] Iteration 1368, lr = 0.00405278
I0427 19:32:58.982800 23810 solver.cpp:218] Iteration 1380 (1.27039 iter/s, 9.44592s/12 iters), loss = 3.34844
I0427 19:32:58.982838 23810 solver.cpp:237] Train net output #0: loss = 3.34844 (* 1 = 3.34844 loss)
I0427 19:32:58.982846 23810 sgd_solver.cpp:105] Iteration 1380, lr = 0.0040208
I0427 19:33:08.517009 23810 solver.cpp:218] Iteration 1392 (1.25868 iter/s, 9.53377s/12 iters), loss = 3.59791
I0427 19:33:08.517052 23810 solver.cpp:237] Train net output #0: loss = 3.59791 (* 1 = 3.59791 loss)
I0427 19:33:08.517061 23810 sgd_solver.cpp:105] Iteration 1392, lr = 0.00398907
I0427 19:33:18.037307 23810 solver.cpp:218] Iteration 1404 (1.26052 iter/s, 9.51986s/12 iters), loss = 3.40887
I0427 19:33:18.037439 23810 solver.cpp:237] Train net output #0: loss = 3.40887 (* 1 = 3.40887 loss)
I0427 19:33:18.037449 23810 sgd_solver.cpp:105] Iteration 1404, lr = 0.00395759
I0427 19:33:26.759753 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:33:27.431051 23810 solver.cpp:218] Iteration 1416 (1.27752 iter/s, 9.39322s/12 iters), loss = 3.48615
I0427 19:33:27.431097 23810 solver.cpp:237] Train net output #0: loss = 3.48615 (* 1 = 3.48615 loss)
I0427 19:33:27.431107 23810 sgd_solver.cpp:105] Iteration 1416, lr = 0.00392636
I0427 19:33:35.911401 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1428.caffemodel
I0427 19:33:39.011236 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1428.solverstate
I0427 19:33:49.197413 23810 solver.cpp:330] Iteration 1428, Testing net (#0)
I0427 19:33:49.197567 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:33:49.434558 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:33:52.419850 23810 solver.cpp:397] Test net output #0: accuracy = 0.16183
I0427 19:33:52.419879 23810 solver.cpp:397] Test net output #1: loss = 3.79475 (* 1 = 3.79475 loss)
I0427 19:33:52.580598 23810 solver.cpp:218] Iteration 1428 (0.477166 iter/s, 25.1485s/12 iters), loss = 3.44631
I0427 19:33:52.580646 23810 solver.cpp:237] Train net output #0: loss = 3.44631 (* 1 = 3.44631 loss)
I0427 19:33:52.580654 23810 sgd_solver.cpp:105] Iteration 1428, lr = 0.00389538
I0427 19:33:54.114938 23943 data_layer.cpp:73] Restarting data prefetching from start.
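The validation accuracy reported at each test interval above climbs steadily (0.047433 at iteration 816 up to 0.16183 at iteration 1428). A minimal sketch (not part of the log) of pulling those values out of lines in the solver.cpp:397 format, e.g. to plot a learning curve:

```python
import re

# Sketch: extract "Test net output #0: accuracy = ..." values from Caffe log
# lines (the solver.cpp:397 format seen above) to track validation accuracy.
ACC_RE = re.compile(r"Test net output #0: accuracy = ([0-9.]+)")

# Two sample lines copied from the log above:
sample = [
    "I0427 19:24:28.880919 23810 solver.cpp:397] Test net output #0: accuracy = 0.047433",
    "I0427 19:33:52.419850 23810 solver.cpp:397] Test net output #0: accuracy = 0.16183",
]

accuracies = [float(m.group(1)) for line in sample if (m := ACC_RE.search(line))]
assert accuracies == [0.047433, 0.16183]  # accuracy rising across test intervals
```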
I0427 19:34:00.556205 23810 solver.cpp:218] Iteration 1440 (1.50466 iter/s, 7.97522s/12 iters), loss = 3.16734
I0427 19:34:00.556247 23810 solver.cpp:237] Train net output #0: loss = 3.16734 (* 1 = 3.16734 loss)
I0427 19:34:00.556257 23810 sgd_solver.cpp:105] Iteration 1440, lr = 0.00386464
I0427 19:34:09.856087 23810 solver.cpp:218] Iteration 1452 (1.2904 iter/s, 9.29946s/12 iters), loss = 3.42633
I0427 19:34:09.856129 23810 solver.cpp:237] Train net output #0: loss = 3.42633 (* 1 = 3.42633 loss)
I0427 19:34:09.856137 23810 sgd_solver.cpp:105] Iteration 1452, lr = 0.00383414
I0427 19:34:19.812732 23810 solver.cpp:218] Iteration 1464 (1.20528 iter/s, 9.95619s/12 iters), loss = 3.2504
I0427 19:34:19.812834 23810 solver.cpp:237] Train net output #0: loss = 3.2504 (* 1 = 3.2504 loss)
I0427 19:34:19.812842 23810 sgd_solver.cpp:105] Iteration 1464, lr = 0.00380388
I0427 19:34:29.419848 23810 solver.cpp:218] Iteration 1476 (1.24914 iter/s, 9.60658s/12 iters), loss = 3.40183
I0427 19:34:29.419889 23810 solver.cpp:237] Train net output #0: loss = 3.40183 (* 1 = 3.40183 loss)
I0427 19:34:29.419898 23810 sgd_solver.cpp:105] Iteration 1476, lr = 0.00377387
I0427 19:34:38.985692 23810 solver.cpp:218] Iteration 1488 (1.25452 iter/s, 9.5654s/12 iters), loss = 3.37638
I0427 19:34:38.985760 23810 solver.cpp:237] Train net output #0: loss = 3.37638 (* 1 = 3.37638 loss)
I0427 19:34:38.985772 23810 sgd_solver.cpp:105] Iteration 1488, lr = 0.00374409
I0427 19:34:48.734813 23810 solver.cpp:218] Iteration 1500 (1.23094 iter/s, 9.74865s/12 iters), loss = 3.26443
I0427 19:34:48.734856 23810 solver.cpp:237] Train net output #0: loss = 3.26443 (* 1 = 3.26443 loss)
I0427 19:34:48.734865 23810 sgd_solver.cpp:105] Iteration 1500, lr = 0.00371454
I0427 19:34:58.325737 23810 solver.cpp:218] Iteration 1512 (1.25124 iter/s, 9.59049s/12 iters), loss = 3.21999
I0427 19:34:58.325835 23810 solver.cpp:237] Train net output #0: loss = 3.21999 (* 1 = 3.21999 loss)
I0427 19:34:58.325843 23810 sgd_solver.cpp:105] Iteration 1512, lr = 0.00368523
I0427 19:35:01.673496 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:35:07.693439 23810 solver.cpp:218] Iteration 1524 (1.28106 iter/s, 9.36722s/12 iters), loss = 3.20613
I0427 19:35:07.693480 23810 solver.cpp:237] Train net output #0: loss = 3.20613 (* 1 = 3.20613 loss)
I0427 19:35:07.693488 23810 sgd_solver.cpp:105] Iteration 1524, lr = 0.00365615
I0427 19:35:11.723994 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1530.caffemodel
I0427 19:35:19.910480 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1530.solverstate
I0427 19:35:24.615007 23810 solver.cpp:330] Iteration 1530, Testing net (#0)
I0427 19:35:24.615031 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:35:27.616437 23810 solver.cpp:397] Test net output #0: accuracy = 0.175223
I0427 19:35:27.616468 23810 solver.cpp:397] Test net output #1: loss = 3.72446 (* 1 = 3.72446 loss)
I0427 19:35:28.955528 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:35:31.125104 23810 solver.cpp:218] Iteration 1536 (0.512149 iter/s, 23.4307s/12 iters), loss = 3.09449
I0427 19:35:31.125151 23810 solver.cpp:237] Train net output #0: loss = 3.09449 (* 1 = 3.09449 loss)
I0427 19:35:31.125159 23810 sgd_solver.cpp:105] Iteration 1536, lr = 0.00362729
I0427 19:35:40.644585 23810 solver.cpp:218] Iteration 1548 (1.26063 iter/s, 9.51904s/12 iters), loss = 3.15404
I0427 19:35:40.644627 23810 solver.cpp:237] Train net output #0: loss = 3.15404 (* 1 = 3.15404 loss)
I0427 19:35:40.644636 23810 sgd_solver.cpp:105] Iteration 1548, lr = 0.00359867
I0427 19:35:50.193886 23810 solver.cpp:218] Iteration 1560 (1.25669 iter/s, 9.54886s/12 iters), loss = 3.33091
I0427 19:35:50.193933 23810 solver.cpp:237] Train net output #0: loss = 3.33091 (* 1 = 3.33091 loss)
I0427 19:35:50.193941 23810 sgd_solver.cpp:105] Iteration 1560, lr = 0.00357027
I0427 19:35:59.834007 23810 solver.cpp:218] Iteration 1572 (1.24486 iter/s, 9.63967s/12 iters), loss = 3.02418
I0427 19:35:59.834342 23810 solver.cpp:237] Train net output #0: loss = 3.02418 (* 1 = 3.02418 loss)
I0427 19:35:59.834352 23810 sgd_solver.cpp:105] Iteration 1572, lr = 0.0035421
I0427 19:36:09.658797 23810 solver.cpp:218] Iteration 1584 (1.22149 iter/s, 9.82403s/12 iters), loss = 2.82289
I0427 19:36:09.658843 23810 solver.cpp:237] Train net output #0: loss = 2.82289 (* 1 = 2.82289 loss)
I0427 19:36:09.658852 23810 sgd_solver.cpp:105] Iteration 1584, lr = 0.00351415
I0427 19:36:19.156857 23810 solver.cpp:218] Iteration 1596 (1.26348 iter/s, 9.49757s/12 iters), loss = 2.84923
I0427 19:36:19.156901 23810 solver.cpp:237] Train net output #0: loss = 2.84923 (* 1 = 2.84923 loss)
I0427 19:36:19.156910 23810 sgd_solver.cpp:105] Iteration 1596, lr = 0.00348641
I0427 19:36:28.673156 23810 solver.cpp:218] Iteration 1608 (1.26106 iter/s, 9.51581s/12 iters), loss = 2.89666
I0427 19:36:28.673197 23810 solver.cpp:237] Train net output #0: loss = 2.89666 (* 1 = 2.89666 loss)
I0427 19:36:28.673207 23810 sgd_solver.cpp:105] Iteration 1608, lr = 0.0034589
I0427 19:36:36.125336 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:36:38.189152 23810 solver.cpp:218] Iteration 1620 (1.2611 iter/s, 9.51551s/12 iters), loss = 3.10332
I0427 19:36:38.189199 23810 solver.cpp:237] Train net output #0: loss = 3.10332 (* 1 = 3.10332 loss)
I0427 19:36:38.189208 23810 sgd_solver.cpp:105] Iteration 1620, lr = 0.00343161
I0427 19:36:47.033330 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1632.caffemodel
I0427 19:36:50.979465 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1632.solverstate
I0427 19:36:53.784245 23810 solver.cpp:330] Iteration 1632, Testing net (#0)
I0427 19:36:53.784265 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:36:57.039942 23810 solver.cpp:397] Test net output #0: accuracy = 0.198103
I0427 19:36:57.039981 23810 solver.cpp:397] Test net output #1: loss = 3.56134 (* 1 = 3.56134 loss)
I0427 19:36:57.203553 23810 solver.cpp:218] Iteration 1632 (0.631131 iter/s, 19.0135s/12 iters), loss = 2.91174
I0427 19:36:57.203599 23810 solver.cpp:237] Train net output #0: loss = 2.91174 (* 1 = 2.91174 loss)
I0427 19:36:57.203608 23810 sgd_solver.cpp:105] Iteration 1632, lr = 0.00340453
I0427 19:36:57.862133 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:37:05.184305 23810 solver.cpp:218] Iteration 1644 (1.5037 iter/s, 7.98033s/12 iters), loss = 2.82919
I0427 19:37:05.184351 23810 solver.cpp:237] Train net output #0: loss = 2.82919 (* 1 = 2.82919 loss)
I0427 19:37:05.184360 23810 sgd_solver.cpp:105] Iteration 1644, lr = 0.00337766
I0427 19:37:14.486055 23810 solver.cpp:218] Iteration 1656 (1.29015 iter/s, 9.30127s/12 iters), loss = 2.88952
I0427 19:37:14.486200 23810 solver.cpp:237] Train net output #0: loss = 2.88952 (* 1 = 2.88952 loss)
I0427 19:37:14.486212 23810 sgd_solver.cpp:105] Iteration 1656, lr = 0.00335101
I0427 19:37:24.033682 23810 solver.cpp:218] Iteration 1668 (1.25693 iter/s, 9.54704s/12 iters), loss = 2.6262
I0427 19:37:24.033730 23810 solver.cpp:237] Train net output #0: loss = 2.6262 (* 1 = 2.6262 loss)
I0427 19:37:24.033738 23810 sgd_solver.cpp:105] Iteration 1668, lr = 0.00332456
I0427 19:37:33.781991 23810 solver.cpp:218] Iteration 1680 (1.23105 iter/s, 9.74781s/12 iters), loss = 2.8497
I0427 19:37:33.782033 23810 solver.cpp:237] Train net output #0: loss = 2.8497 (* 1 = 2.8497 loss)
I0427 19:37:33.782043 23810 sgd_solver.cpp:105] Iteration 1680, lr = 0.00329833
I0427 19:37:43.364387 23810 solver.cpp:218] Iteration 1692 (1.25236 iter/s, 9.58191s/12 iters), loss = 2.68238
I0427 19:37:43.364428 23810 solver.cpp:237] Train net output #0: loss = 2.68238 (* 1 = 2.68238 loss)
I0427 19:37:43.364436 23810 sgd_solver.cpp:105] Iteration 1692, lr = 0.0032723
I0427 19:37:52.946342 23810 solver.cpp:218] Iteration 1704 (1.25242 iter/s, 9.58147s/12 iters), loss = 2.76224
I0427 19:37:52.946445 23810 solver.cpp:237] Train net output #0: loss = 2.76224 (* 1 = 2.76224 loss)
I0427 19:37:52.946455 23810 sgd_solver.cpp:105] Iteration 1704, lr = 0.00324648
I0427 19:38:02.634852 23810 solver.cpp:218] Iteration 1716 (1.23865 iter/s, 9.68797s/12 iters), loss = 2.59546
I0427 19:38:02.634891 23810 solver.cpp:237] Train net output #0: loss = 2.59546 (* 1 = 2.59546 loss)
I0427 19:38:02.634898 23810 sgd_solver.cpp:105] Iteration 1716, lr = 0.00322086
I0427 19:38:04.610316 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:38:12.066814 23810 solver.cpp:218] Iteration 1728 (1.27233 iter/s, 9.43148s/12 iters), loss = 2.62841
I0427 19:38:12.066864 23810 solver.cpp:237] Train net output #0: loss = 2.62841 (* 1 = 2.62841 loss)
I0427 19:38:12.066874 23810 sgd_solver.cpp:105] Iteration 1728, lr = 0.00319544
I0427 19:38:16.079458 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1734.caffemodel
I0427 19:38:19.002750 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1734.solverstate
I0427 19:38:21.531606 23810 solver.cpp:330] Iteration 1734, Testing net (#0)
I0427 19:38:21.531630 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:38:24.604288 23810 solver.cpp:397] Test net output #0: accuracy = 0.209263
I0427 19:38:24.604395 23810 solver.cpp:397] Test net output #1: loss = 3.51084 (* 1 = 3.51084 loss)
I0427 19:38:24.947299 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:38:27.949204 23810 solver.cpp:218] Iteration 1740 (0.75559 iter/s, 15.8816s/12 iters), loss = 2.82124
I0427 19:38:27.949252 23810 solver.cpp:237] Train net output #0: loss = 2.82124 (* 1 = 2.82124 loss)
I0427 19:38:27.949261 23810 sgd_solver.cpp:105] Iteration 1740, lr = 0.00317022
I0427 19:38:37.460367 23810 solver.cpp:218] Iteration 1752 (1.26174 iter/s, 9.51068s/12 iters), loss = 2.68115
I0427 19:38:37.460412 23810 solver.cpp:237] Train net output #0: loss = 2.68115 (* 1 = 2.68115 loss)
I0427 19:38:37.460419 23810 sgd_solver.cpp:105] Iteration 1752, lr = 0.00314521
I0427 19:38:46.996109 23810 solver.cpp:218] Iteration 1764 (1.25849 iter/s, 9.53526s/12 iters), loss = 2.65628
I0427 19:38:46.996156 23810 solver.cpp:237] Train net output #0: loss = 2.65628 (* 1 = 2.65628 loss)
I0427 19:38:46.996165 23810 sgd_solver.cpp:105] Iteration 1764, lr = 0.00312039
I0427 19:38:56.602170 23810 solver.cpp:218] Iteration 1776 (1.24927 iter/s, 9.60558s/12 iters), loss = 2.6711
I0427 19:38:56.602291 23810 solver.cpp:237] Train net output #0: loss = 2.6711 (* 1 = 2.6711 loss)
I0427 19:38:56.602300 23810 sgd_solver.cpp:105] Iteration 1776, lr = 0.00309576
I0427 19:39:06.159684 23810 solver.cpp:218] Iteration 1788 (1.25563 iter/s, 9.55696s/12 iters), loss = 2.66871
I0427 19:39:06.159729 23810 solver.cpp:237] Train net output #0: loss = 2.66871 (* 1 = 2.66871 loss)
I0427 19:39:06.159739 23810 sgd_solver.cpp:105] Iteration 1788, lr = 0.00307133
I0427 19:39:15.812913 23810 solver.cpp:218] Iteration 1800 (1.24317 iter/s, 9.65274s/12 iters), loss = 2.63469
I0427 19:39:15.812976 23810 solver.cpp:237] Train net output #0: loss = 2.63469 (* 1 = 2.63469 loss)
I0427 19:39:15.812986 23810 sgd_solver.cpp:105] Iteration 1800, lr = 0.0030471
I0427 19:39:25.178270 23810 solver.cpp:218] Iteration 1812 (1.28138 iter/s, 9.36487s/12 iters), loss = 2.6025
I0427 19:39:25.178313 23810 solver.cpp:237] Train net output #0: loss = 2.6025 (* 1 = 2.6025 loss)
I0427 19:39:25.178321 23810 sgd_solver.cpp:105] Iteration 1812, lr = 0.00302305
I0427 19:39:31.254762 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:39:34.759810 23810 solver.cpp:218] Iteration 1824 (1.25247 iter/s, 9.58106s/12 iters), loss = 2.41994
I0427 19:39:34.759852 23810 solver.cpp:237] Train net output #0: loss = 2.41994 (* 1 = 2.41994 loss)
I0427 19:39:34.759862 23810 sgd_solver.cpp:105] Iteration 1824, lr = 0.00299919
I0427 19:39:43.373363 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1836.caffemodel
I0427 19:39:47.659066 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1836.solverstate
I0427 19:39:49.995548 23810 solver.cpp:330] Iteration 1836, Testing net (#0)
I0427 19:39:49.995568 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:39:52.764804 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:39:53.011533 23810 solver.cpp:397] Test net output #0: accuracy = 0.228237
I0427 19:39:53.011572 23810 solver.cpp:397] Test net output #1: loss = 3.50092 (* 1 = 3.50092 loss)
I0427 19:39:53.181243 23810 solver.cpp:218] Iteration 1836 (0.651446 iter/s, 18.4206s/12 iters), loss = 2.6302
I0427 19:39:53.181294 23810 solver.cpp:237] Train net output #0: loss = 2.6302 (* 1 = 2.6302 loss)
I0427 19:39:53.181304 23810 sgd_solver.cpp:105] Iteration 1836, lr = 0.00297553
I0427 19:40:01.185958 23810 solver.cpp:218] Iteration 1848 (1.49919 iter/s, 8.0043s/12 iters), loss = 2.51229
I0427 19:40:01.186007 23810 solver.cpp:237] Train net output #0: loss = 2.51229 (* 1 = 2.51229 loss)
I0427 19:40:01.186017 23810 sgd_solver.cpp:105] Iteration 1848, lr = 0.00295205
I0427 19:40:10.546694 23810 solver.cpp:218] Iteration 1860 (1.28201 iter/s, 9.36026s/12 iters), loss = 2.78321
I0427 19:40:10.546813 23810 solver.cpp:237] Train net output #0: loss = 2.78321 (* 1 = 2.78321 loss)
I0427 19:40:10.546825 23810 sgd_solver.cpp:105] Iteration 1860, lr = 0.00292875
I0427 19:40:19.901613 23810 solver.cpp:218] Iteration 1872 (1.28282 iter/s, 9.35438s/12 iters), loss = 2.60814
I0427 19:40:19.901656 23810 solver.cpp:237] Train net output #0: loss = 2.60814 (* 1 = 2.60814 loss)
I0427 19:40:19.901665 23810 sgd_solver.cpp:105] Iteration 1872, lr = 0.00290564
I0427 19:40:29.503151 23810 solver.cpp:218] Iteration 1884 (1.24986 iter/s, 9.60106s/12 iters), loss = 2.53749
I0427 19:40:29.503192 23810 solver.cpp:237] Train net output #0: loss = 2.53749 (* 1 = 2.53749 loss)
I0427 19:40:29.503202 23810 sgd_solver.cpp:105] Iteration 1884, lr = 0.00288271
I0427 19:40:39.041018 23810 solver.cpp:218] Iteration 1896 (1.25821 iter/s, 9.53739s/12 iters), loss = 2.47364
I0427 19:40:39.041060 23810 solver.cpp:237] Train net output #0: loss = 2.47364 (* 1 = 2.47364 loss)
I0427 19:40:39.041069 23810 sgd_solver.cpp:105] Iteration 1896, lr = 0.00285996
I0427 19:40:48.685503 23810 solver.cpp:218] Iteration 1908 (1.2443 iter/s, 9.64401s/12 iters), loss = 2.52361
I0427 19:40:48.687878 23810 solver.cpp:237] Train net output #0: loss = 2.52361 (* 1 = 2.52361 loss)
I0427 19:40:48.687887 23810 sgd_solver.cpp:105] Iteration 1908, lr = 0.00283739
I0427 19:40:58.271562 23810 solver.cpp:218] Iteration 1920 (1.25218 iter/s, 9.58325s/12 iters), loss = 2.30507
I0427 19:40:58.271613 23810 solver.cpp:237] Train net output #0: loss = 2.30507 (* 1 = 2.30507 loss)
I0427 19:40:58.271625 23810 sgd_solver.cpp:105] Iteration 1920, lr = 0.002815
I0427 19:40:58.879822 23845 data_layer.cpp:73] Restarting data prefetching from start.
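The lr values interleaved above follow the solver's "exp" policy from the header (base_lr: 0.01, gamma: 0.99934), where lr(iter) = base_lr * gamma^iter. A minimal sketch reproducing the logged values, assuming that standard Caffe formula:

```python
# Reproduce the lr values printed by sgd_solver.cpp under lr_policy "exp":
#   lr(iter) = base_lr * gamma^iter
# base_lr and gamma are taken from the solver.prototxt shown in the header.
base_lr, gamma = 0.01, 0.99934

def exp_lr(iteration):
    return base_lr * gamma ** iteration

for it in (1740, 1752, 2040):
    print(it, exp_lr(it))
```

For example, exp_lr(1740) comes out to ~0.00317022, matching the "Iteration 1740, lr = 0.00317022" line above to the printed precision.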
I0427 19:41:07.900849 23810 solver.cpp:218] Iteration 1932 (1.24626 iter/s, 9.62881s/12 iters), loss = 2.4762
I0427 19:41:07.900892 23810 solver.cpp:237] Train net output #0: loss = 2.4762 (* 1 = 2.4762 loss)
I0427 19:41:07.900902 23810 sgd_solver.cpp:105] Iteration 1932, lr = 0.00279279
I0427 19:41:11.924974 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1938.caffemodel
I0427 19:41:15.102705 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1938.solverstate
I0427 19:41:17.742512 23810 solver.cpp:330] Iteration 1938, Testing net (#0)
I0427 19:41:17.742532 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:41:20.062181 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:41:20.737622 23810 solver.cpp:397] Test net output #0: accuracy = 0.229911
I0427 19:41:20.737659 23810 solver.cpp:397] Test net output #1: loss = 3.48808 (* 1 = 3.48808 loss)
I0427 19:41:23.951153 23810 solver.cpp:218] Iteration 1944 (0.747685 iter/s, 16.0495s/12 iters), loss = 2.52405
I0427 19:41:23.951210 23810 solver.cpp:237] Train net output #0: loss = 2.52405 (* 1 = 2.52405 loss)
I0427 19:41:23.951221 23810 sgd_solver.cpp:105] Iteration 1944, lr = 0.00277075
I0427 19:41:33.472319 23810 solver.cpp:218] Iteration 1956 (1.26041 iter/s, 9.52069s/12 iters), loss = 2.42952
I0427 19:41:33.472358 23810 solver.cpp:237] Train net output #0: loss = 2.42952 (* 1 = 2.42952 loss)
I0427 19:41:33.472366 23810 sgd_solver.cpp:105] Iteration 1956, lr = 0.00274888
I0427 19:41:42.919838 23810 solver.cpp:218] Iteration 1968 (1.27024 iter/s, 9.44706s/12 iters), loss = 2.46212
I0427 19:41:42.919883 23810 solver.cpp:237] Train net output #0: loss = 2.46212 (* 1 = 2.46212 loss)
I0427 19:41:42.919916 23810 sgd_solver.cpp:105] Iteration 1968, lr = 0.00272719
I0427 19:41:50.766559 23810 blocking_queue.cpp:49] Waiting for data
I0427 19:41:52.518858 23810 solver.cpp:218] Iteration 1980 (1.25019 iter/s, 9.59855s/12 iters), loss = 2.28554
I0427 19:41:52.518904 23810 solver.cpp:237] Train net output #0: loss = 2.28554 (* 1 = 2.28554 loss)
I0427 19:41:52.518913 23810 sgd_solver.cpp:105] Iteration 1980, lr = 0.00270567
I0427 19:42:01.980129 23810 solver.cpp:218] Iteration 1992 (1.26839 iter/s, 9.4608s/12 iters), loss = 2.23898
I0427 19:42:01.980173 23810 solver.cpp:237] Train net output #0: loss = 2.23898 (* 1 = 2.23898 loss)
I0427 19:42:01.980181 23810 sgd_solver.cpp:105] Iteration 1992, lr = 0.00268432
I0427 19:42:11.376857 23810 solver.cpp:218] Iteration 2004 (1.2771 iter/s, 9.39627s/12 iters), loss = 1.99958
I0427 19:42:11.376901 23810 solver.cpp:237] Train net output #0: loss = 1.99958 (* 1 = 1.99958 loss)
I0427 19:42:11.376910 23810 sgd_solver.cpp:105] Iteration 2004, lr = 0.00266313
I0427 19:42:20.701670 23810 solver.cpp:218] Iteration 2016 (1.28695 iter/s, 9.32436s/12 iters), loss = 2.11228
I0427 19:42:20.701707 23810 solver.cpp:237] Train net output #0: loss = 2.11228 (* 1 = 2.11228 loss)
I0427 19:42:20.701716 23810 sgd_solver.cpp:105] Iteration 2016, lr = 0.00264212
I0427 19:42:25.626410 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:42:30.626078 23810 solver.cpp:218] Iteration 2028 (1.2092 iter/s, 9.92393s/12 iters), loss = 2.30387
I0427 19:42:30.626123 23810 solver.cpp:237] Train net output #0: loss = 2.30387 (* 1 = 2.30387 loss)
I0427 19:42:30.626132 23810 sgd_solver.cpp:105] Iteration 2028, lr = 0.00262127
I0427 19:42:39.234244 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2040.caffemodel
I0427 19:42:42.206322 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2040.solverstate
I0427 19:42:44.546710 23810 solver.cpp:330] Iteration 2040, Testing net (#0)
I0427 19:42:44.546737 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:42:46.392390 23943 data_layer.cpp:73] Restarting data prefetching from start.
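The snapshot/test blocks above (snapshot_iter_1836, _1938, _2040, ...) recur every 102 iterations, matching the solver settings in the header (snapshot: 102, test_interval: 102, max_iter: 3060). A small sketch enumerating the expected event iterations:

```python
# Predict the snapshot/test iterations from the solver.prototxt settings
# shown in the header: snapshot: 102, test_interval: 102, max_iter: 3060.
snapshot_interval, max_iter = 102, 3060
snapshot_iters = list(range(snapshot_interval, max_iter + 1, snapshot_interval))
print(snapshot_iters[17:22])  # the stretch covered by this part of the log
```

The slice printed here is [1836, 1938, 2040, 2142, 2244], which lines up with the snapshot filenames in the log.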
I0427 19:42:47.571627 23810 solver.cpp:397] Test net output #0: accuracy = 0.232701
I0427 19:42:47.571666 23810 solver.cpp:397] Test net output #1: loss = 3.50434 (* 1 = 3.50434 loss)
I0427 19:42:47.742101 23810 solver.cpp:218] Iteration 2040 (0.70113 iter/s, 17.1152s/12 iters), loss = 2.40799
I0427 19:42:47.742157 23810 solver.cpp:237] Train net output #0: loss = 2.40799 (* 1 = 2.40799 loss)
I0427 19:42:47.742168 23810 sgd_solver.cpp:105] Iteration 2040, lr = 0.00260058
I0427 19:42:55.802301 23810 solver.cpp:218] Iteration 2052 (1.48887 iter/s, 8.05978s/12 iters), loss = 2.03631
I0427 19:42:55.802456 23810 solver.cpp:237] Train net output #0: loss = 2.03631 (* 1 = 2.03631 loss)
I0427 19:42:55.802467 23810 sgd_solver.cpp:105] Iteration 2052, lr = 0.00258006
I0427 19:43:05.477584 23810 solver.cpp:218] Iteration 2064 (1.24035 iter/s, 9.6747s/12 iters), loss = 2.14673
I0427 19:43:05.477632 23810 solver.cpp:237] Train net output #0: loss = 2.14673 (* 1 = 2.14673 loss)
I0427 19:43:05.477640 23810 sgd_solver.cpp:105] Iteration 2064, lr = 0.0025597
I0427 19:43:14.907601 23810 solver.cpp:218] Iteration 2076 (1.27259 iter/s, 9.42955s/12 iters), loss = 2.15096
I0427 19:43:14.907647 23810 solver.cpp:237] Train net output #0: loss = 2.15096 (* 1 = 2.15096 loss)
I0427 19:43:14.907656 23810 sgd_solver.cpp:105] Iteration 2076, lr = 0.0025395
I0427 19:43:24.525085 23810 solver.cpp:218] Iteration 2088 (1.24779 iter/s, 9.61701s/12 iters), loss = 2.22039
I0427 19:43:24.525133 23810 solver.cpp:237] Train net output #0: loss = 2.22039 (* 1 = 2.22039 loss)
I0427 19:43:24.525141 23810 sgd_solver.cpp:105] Iteration 2088, lr = 0.00251946
I0427 19:43:34.317044 23810 solver.cpp:218] Iteration 2100 (1.22556 iter/s, 9.79148s/12 iters), loss = 2.02081
I0427 19:43:34.317162 23810 solver.cpp:237] Train net output #0: loss = 2.02081 (* 1 = 2.02081 loss)
I0427 19:43:34.317171 23810 sgd_solver.cpp:105] Iteration 2100, lr = 0.00249958
I0427 19:43:43.942276 23810 solver.cpp:218] Iteration 2112 (1.24679 iter/s, 9.62469s/12 iters), loss = 1.98064
I0427 19:43:43.942322 23810 solver.cpp:237] Train net output #0: loss = 1.98064 (* 1 = 1.98064 loss)
I0427 19:43:43.942329 23810 sgd_solver.cpp:105] Iteration 2112, lr = 0.00247986
I0427 19:43:52.775826 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:43:53.397321 23810 solver.cpp:218] Iteration 2124 (1.26923 iter/s, 9.45458s/12 iters), loss = 2.36049
I0427 19:43:53.397368 23810 solver.cpp:237] Train net output #0: loss = 2.36049 (* 1 = 2.36049 loss)
I0427 19:43:53.397377 23810 sgd_solver.cpp:105] Iteration 2124, lr = 0.00246029
I0427 19:44:03.059708 23810 solver.cpp:218] Iteration 2136 (1.24199 iter/s, 9.66191s/12 iters), loss = 2.18682
I0427 19:44:03.059765 23810 solver.cpp:237] Train net output #0: loss = 2.18682 (* 1 = 2.18682 loss)
I0427 19:44:03.059777 23810 sgd_solver.cpp:105] Iteration 2136, lr = 0.00244087
I0427 19:44:06.856206 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2142.caffemodel
I0427 19:44:17.130012 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2142.solverstate
I0427 19:44:20.186915 23810 solver.cpp:330] Iteration 2142, Testing net (#0)
I0427 19:44:20.186940 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:44:21.458315 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:44:23.187969 23810 solver.cpp:397] Test net output #0: accuracy = 0.242746
I0427 19:44:23.187999 23810 solver.cpp:397] Test net output #1: loss = 3.49084 (* 1 = 3.49084 loss)
I0427 19:44:26.417268 23810 solver.cpp:218] Iteration 2148 (0.513776 iter/s, 23.3565s/12 iters), loss = 2.0139
I0427 19:44:26.417312 23810 solver.cpp:237] Train net output #0: loss = 2.0139 (* 1 = 2.0139 loss)
I0427 19:44:26.417320 23810 sgd_solver.cpp:105] Iteration 2148, lr = 0.00242161
I0427 19:44:35.961891 23810 solver.cpp:218] Iteration 2160 (1.25731 iter/s, 9.54416s/12 iters), loss = 1.96675
I0427 19:44:35.961936 23810 solver.cpp:237] Train net output #0: loss = 1.96675 (* 1 = 1.96675 loss)
I0427 19:44:35.961946 23810 sgd_solver.cpp:105] Iteration 2160, lr = 0.0024025
I0427 19:44:45.580005 23810 solver.cpp:218] Iteration 2172 (1.24771 iter/s, 9.61764s/12 iters), loss = 1.93052
I0427 19:44:45.580132 23810 solver.cpp:237] Train net output #0: loss = 1.93052 (* 1 = 1.93052 loss)
I0427 19:44:45.580140 23810 sgd_solver.cpp:105] Iteration 2172, lr = 0.00238354
I0427 19:44:55.468959 23810 solver.cpp:218] Iteration 2184 (1.21354 iter/s, 9.8884s/12 iters), loss = 2.24207
I0427 19:44:55.469007 23810 solver.cpp:237] Train net output #0: loss = 2.24207 (* 1 = 2.24207 loss)
I0427 19:44:55.469015 23810 sgd_solver.cpp:105] Iteration 2184, lr = 0.00236473
I0427 19:45:04.791939 23810 solver.cpp:218] Iteration 2196 (1.28721 iter/s, 9.32252s/12 iters), loss = 1.98025
I0427 19:45:04.791986 23810 solver.cpp:237] Train net output #0: loss = 1.98025 (* 1 = 1.98025 loss)
I0427 19:45:04.791996 23810 sgd_solver.cpp:105] Iteration 2196, lr = 0.00234607
I0427 19:45:14.333346 23810 solver.cpp:218] Iteration 2208 (1.25774 iter/s, 9.54094s/12 iters), loss = 2.1379
I0427 19:45:14.333397 23810 solver.cpp:237] Train net output #0: loss = 2.1379 (* 1 = 2.1379 loss)
I0427 19:45:14.333406 23810 sgd_solver.cpp:105] Iteration 2208, lr = 0.00232756
I0427 19:45:23.876261 23810 solver.cpp:218] Iteration 2220 (1.25754 iter/s, 9.54244s/12 iters), loss = 1.85016
I0427 19:45:23.876363 23810 solver.cpp:237] Train net output #0: loss = 1.85016 (* 1 = 1.85016 loss)
I0427 19:45:23.876371 23810 sgd_solver.cpp:105] Iteration 2220, lr = 0.00230919
I0427 19:45:27.323117 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:45:33.429440 23810 solver.cpp:218] Iteration 2232 (1.25619 iter/s, 9.55267s/12 iters), loss = 2.15453
I0427 19:45:33.429481 23810 solver.cpp:237] Train net output #0: loss = 2.15453 (* 1 = 2.15453 loss)
I0427 19:45:33.429488 23810 sgd_solver.cpp:105] Iteration 2232, lr = 0.00229097
I0427 19:45:42.110940 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2244.caffemodel
I0427 19:45:45.654311 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2244.solverstate
I0427 19:45:47.966079 23810 solver.cpp:330] Iteration 2244, Testing net (#0)
I0427 19:45:47.966101 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:45:48.966387 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:45:51.247730 23810 solver.cpp:397] Test net output #0: accuracy = 0.265067
I0427 19:45:51.247771 23810 solver.cpp:397] Test net output #1: loss = 3.4071 (* 1 = 3.4071 loss)
I0427 19:45:51.410218 23810 solver.cpp:218] Iteration 2244 (0.66741 iter/s, 17.98s/12 iters), loss = 2.01389
I0427 19:45:51.410269 23810 solver.cpp:237] Train net output #0: loss = 2.01389 (* 1 = 2.01389 loss)
I0427 19:45:51.410277 23810 sgd_solver.cpp:105] Iteration 2244, lr = 0.00227289
I0427 19:45:59.245472 23810 solver.cpp:218] Iteration 2256 (1.53162 iter/s, 7.83486s/12 iters), loss = 1.93861
I0427 19:45:59.245568 23810 solver.cpp:237] Train net output #0: loss = 1.93861 (* 1 = 1.93861 loss)
I0427 19:45:59.245577 23810 sgd_solver.cpp:105] Iteration 2256, lr = 0.00225495
I0427 19:46:08.788539 23810 solver.cpp:218] Iteration 2268 (1.25752 iter/s, 9.54255s/12 iters), loss = 1.80058
I0427 19:46:08.788585 23810 solver.cpp:237] Train net output #0: loss = 1.80058 (* 1 = 1.80058 loss)
I0427 19:46:08.788594 23810 sgd_solver.cpp:105] Iteration 2268, lr = 0.00223716
I0427 19:46:18.467870 23810 solver.cpp:218] Iteration 2280 (1.23981 iter/s, 9.67886s/12 iters), loss = 1.7193
I0427 19:46:18.467909 23810 solver.cpp:237] Train net output #0: loss = 1.7193 (* 1 = 1.7193 loss)
I0427 19:46:18.467918 23810 sgd_solver.cpp:105] Iteration 2280, lr = 0.0022195
I0427 19:46:28.031179 23810 solver.cpp:218] Iteration 2292 (1.25486 iter/s, 9.56285s/12 iters), loss = 1.76452
I0427 19:46:28.031229 23810 solver.cpp:237] Train net output #0: loss = 1.76452 (* 1 = 1.76452 loss)
I0427 19:46:28.031237 23810 sgd_solver.cpp:105] Iteration 2292, lr = 0.00220199
I0427 19:46:37.639953 23810 solver.cpp:218] Iteration 2304 (1.24892 iter/s, 9.60831s/12 iters), loss = 1.37297
I0427 19:46:37.640085 23810 solver.cpp:237] Train net output #0: loss = 1.37297 (* 1 = 1.37297 loss)
I0427 19:46:37.640094 23810 sgd_solver.cpp:105] Iteration 2304, lr = 0.00218461
I0427 19:46:47.161695 23810 solver.cpp:218] Iteration 2316 (1.26035 iter/s, 9.52119s/12 iters), loss = 1.69398
I0427 19:46:47.161749 23810 solver.cpp:237] Train net output #0: loss = 1.69398 (* 1 = 1.69398 loss)
I0427 19:46:47.161762 23810 sgd_solver.cpp:105] Iteration 2316, lr = 0.00216737
I0427 19:46:54.590591 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:46:56.597769 23810 solver.cpp:218] Iteration 2328 (1.27178 iter/s, 9.43561s/12 iters), loss = 1.69098
I0427 19:46:56.597811 23810 solver.cpp:237] Train net output #0: loss = 1.69098 (* 1 = 1.69098 loss)
I0427 19:46:56.597820 23810 sgd_solver.cpp:105] Iteration 2328, lr = 0.00215027
I0427 19:47:06.232182 23810 solver.cpp:218] Iteration 2340 (1.24559 iter/s, 9.63395s/12 iters), loss = 1.67814
I0427 19:47:06.232220 23810 solver.cpp:237] Train net output #0: loss = 1.67814 (* 1 = 1.67814 loss)
I0427 19:47:06.232228 23810 sgd_solver.cpp:105] Iteration 2340, lr = 0.0021333
I0427 19:47:10.130981 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2346.caffemodel
I0427 19:47:14.477864 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2346.solverstate
I0427 19:47:16.869706 23810 solver.cpp:330] Iteration 2346, Testing net (#0)
I0427 19:47:16.869725 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:47:17.203023 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:47:19.906210 23810 solver.cpp:397] Test net output #0: accuracy = 0.260045
I0427 19:47:19.906239 23810 solver.cpp:397] Test net output #1: loss = 3.45756 (* 1 = 3.45756 loss)
I0427 19:47:21.949620 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:47:23.054261 23810 solver.cpp:218] Iteration 2352 (0.71338 iter/s, 16.8213s/12 iters), loss = 1.50073
I0427 19:47:23.054307 23810 solver.cpp:237] Train net output #0: loss = 1.50073 (* 1 = 1.50073 loss)
I0427 19:47:23.054316 23810 sgd_solver.cpp:105] Iteration 2352, lr = 0.00211647
I0427 19:47:32.660944 23810 solver.cpp:218] Iteration 2364 (1.24919 iter/s, 9.60621s/12 iters), loss = 1.70088
I0427 19:47:32.660990 23810 solver.cpp:237] Train net output #0: loss = 1.70088 (* 1 = 1.70088 loss)
I0427 19:47:32.660998 23810 sgd_solver.cpp:105] Iteration 2364, lr = 0.00209976
I0427 19:47:42.197798 23810 solver.cpp:218] Iteration 2376 (1.25834 iter/s, 9.53639s/12 iters), loss = 1.49299
I0427 19:47:42.197919 23810 solver.cpp:237] Train net output #0: loss = 1.49299 (* 1 = 1.49299 loss)
I0427 19:47:42.197928 23810 sgd_solver.cpp:105] Iteration 2376, lr = 0.00208319
I0427 19:47:51.656926 23810 solver.cpp:218] Iteration 2388 (1.26869 iter/s, 9.4586s/12 iters), loss = 1.58014
I0427 19:47:51.656975 23810 solver.cpp:237] Train net output #0: loss = 1.58014 (* 1 = 1.58014 loss)
I0427 19:47:51.656983 23810 sgd_solver.cpp:105] Iteration 2388, lr = 0.00206675
I0427 19:48:01.311673 23810 solver.cpp:218] Iteration 2400 (1.24297 iter/s, 9.65428s/12 iters), loss = 1.36561
I0427 19:48:01.311720 23810 solver.cpp:237] Train net output #0: loss = 1.36561 (* 1 = 1.36561 loss)
I0427 19:48:01.311729 23810 sgd_solver.cpp:105] Iteration 2400, lr = 0.00205044
I0427 19:48:10.856954 23810 solver.cpp:218] Iteration 2412 (1.25723 iter/s, 9.54482s/12 iters), loss = 1.70809
I0427 19:48:10.857009 23810 solver.cpp:237] Train net output #0: loss = 1.70809 (* 1 = 1.70809 loss)
I0427 19:48:10.857020 23810 sgd_solver.cpp:105] Iteration 2412, lr = 0.00203426
I0427 19:48:20.441952 23810 solver.cpp:218] Iteration 2424 (1.25202 iter/s, 9.58453s/12 iters), loss = 1.58908
I0427 19:48:20.443845 23810 solver.cpp:237] Train net output #0: loss = 1.58908 (* 1 = 1.58908 loss)
I0427 19:48:20.443859 23810 sgd_solver.cpp:105] Iteration 2424, lr = 0.00201821
I0427 19:48:22.489058 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:48:30.011471 23810 solver.cpp:218] Iteration 2436 (1.25428 iter/s, 9.56722s/12 iters), loss = 1.49896
I0427 19:48:30.011519 23810 solver.cpp:237] Train net output #0: loss = 1.49896 (* 1 = 1.49896 loss)
I0427 19:48:30.011529 23810 sgd_solver.cpp:105] Iteration 2436, lr = 0.00200228
I0427 19:48:38.843729 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2448.caffemodel
I0427 19:48:41.860239 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2448.solverstate
I0427 19:48:44.173274 23810 solver.cpp:330] Iteration 2448, Testing net (#0)
I0427 19:48:44.173295 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:48:47.149266 23810 solver.cpp:397] Test net output #0: accuracy = 0.284598
I0427 19:48:47.149307 23810 solver.cpp:397] Test net output #1: loss = 3.40782 (* 1 = 3.40782 loss)
I0427 19:48:47.313560 23810 solver.cpp:218] Iteration 2448 (0.693589 iter/s, 17.3013s/12 iters), loss = 1.64524
I0427 19:48:47.313601 23810 solver.cpp:237] Train net output #0: loss = 1.64524 (* 1 = 1.64524 loss)
I0427 19:48:47.313608 23810 sgd_solver.cpp:105] Iteration 2448, lr = 0.00198648
I0427 19:48:48.548789 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:48:55.352193 23810 solver.cpp:218] Iteration 2460 (1.49286 iter/s, 8.03824s/12 iters), loss = 1.44013
I0427 19:48:55.352854 23810 solver.cpp:237] Train net output #0: loss = 1.44013 (* 1 = 1.44013 loss)
I0427 19:48:55.352864 23810 sgd_solver.cpp:105] Iteration 2460, lr = 0.00197081
I0427 19:49:04.993657 23810 solver.cpp:218] Iteration 2472 (1.24476 iter/s, 9.64039s/12 iters), loss = 1.42761
I0427 19:49:04.993701 23810 solver.cpp:237] Train net output #0: loss = 1.42761 (* 1 = 1.42761 loss)
I0427 19:49:04.993710 23810 sgd_solver.cpp:105] Iteration 2472, lr = 0.00195526
I0427 19:49:14.462409 23810 solver.cpp:218] Iteration 2484 (1.26739 iter/s, 9.4683s/12 iters), loss = 1.52288
I0427 19:49:14.462452 23810 solver.cpp:237] Train net output #0: loss = 1.52288 (* 1 = 1.52288 loss)
I0427 19:49:14.462461 23810 sgd_solver.cpp:105] Iteration 2484, lr = 0.00193983
I0427 19:49:24.046797 23810 solver.cpp:218] Iteration 2496 (1.2521 iter/s, 9.58393s/12 iters), loss = 1.60031
I0427 19:49:24.046842 23810 solver.cpp:237] Train net output #0: loss = 1.60031 (* 1 = 1.60031 loss)
I0427 19:49:24.046851 23810 sgd_solver.cpp:105] Iteration 2496, lr = 0.00192452
I0427 19:49:33.504863 23810 solver.cpp:218] Iteration 2508 (1.26882 iter/s, 9.45761s/12 iters), loss = 1.41071
I0427 19:49:33.504973 23810 solver.cpp:237] Train net output #0: loss = 1.41071 (* 1 = 1.41071 loss)
I0427 19:49:33.504984 23810 sgd_solver.cpp:105] Iteration 2508, lr = 0.00190933
I0427 19:49:43.011123 23810 solver.cpp:218] Iteration 2520 (1.2624 iter/s, 9.50574s/12 iters), loss = 1.4034
I0427 19:49:43.011165 23810 solver.cpp:237] Train net output #0: loss = 1.4034 (* 1 = 1.4034 loss)
I0427 19:49:43.011174 23810 sgd_solver.cpp:105] Iteration 2520, lr = 0.00189426
I0427 19:49:49.066536 23845 data_layer.cpp:73] Restarting data prefetching from start.
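The throughput figure in each solver.cpp:218 line is simply the display interval (display: 12 in the solver settings) divided by the elapsed wall time for those iterations. Checking it against the "Iteration 2460 (1.49286 iter/s, 8.03824s/12 iters)" line above:

```python
# solver.cpp:218 reports throughput as display-interval iterations
# over elapsed wall time; numbers taken from the Iteration 2460 line.
display_iters, elapsed_s = 12, 8.03824
print(round(display_iters / elapsed_s, 5))  # -> 1.49286, as logged
```

The intervals immediately after a snapshot/test pass (e.g. the 0.69-0.75 iter/s lines) are slower only because the elapsed time includes snapshotting and the test pass, not because training itself slowed down.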
I0427 19:49:52.535389 23810 solver.cpp:218] Iteration 2532 (1.26 iter/s, 9.52381s/12 iters), loss = 1.33305
I0427 19:49:52.535436 23810 solver.cpp:237] Train net output #0: loss = 1.33305 (* 1 = 1.33305 loss)
I0427 19:49:52.535445 23810 sgd_solver.cpp:105] Iteration 2532, lr = 0.00187932
I0427 19:50:02.213272 23810 solver.cpp:218] Iteration 2544 (1.24 iter/s, 9.67742s/12 iters), loss = 1.4935
I0427 19:50:02.213315 23810 solver.cpp:237] Train net output #0: loss = 1.4935 (* 1 = 1.4935 loss)
I0427 19:50:02.213323 23810 sgd_solver.cpp:105] Iteration 2544, lr = 0.00186449
I0427 19:50:06.114696 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2550.caffemodel
I0427 19:50:09.148319 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2550.solverstate
I0427 19:50:11.461884 23810 solver.cpp:330] Iteration 2550, Testing net (#0)
I0427 19:50:11.461910 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:50:14.565202 23810 solver.cpp:397] Test net output #0: accuracy = 0.27846
I0427 19:50:14.565245 23810 solver.cpp:397] Test net output #1: loss = 3.38736 (* 1 = 3.38736 loss)
I0427 19:50:15.467234 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:50:17.908679 23810 solver.cpp:218] Iteration 2556 (0.764589 iter/s, 15.6947s/12 iters), loss = 1.3316
I0427 19:50:17.908720 23810 solver.cpp:237] Train net output #0: loss = 1.3316 (* 1 = 1.3316 loss)
I0427 19:50:17.908730 23810 sgd_solver.cpp:105] Iteration 2556, lr = 0.00184977
I0427 19:50:27.367208 23810 solver.cpp:218] Iteration 2568 (1.26876 iter/s, 9.45807s/12 iters), loss = 1.37675
I0427 19:50:27.367267 23810 solver.cpp:237] Train net output #0: loss = 1.37675 (* 1 = 1.37675 loss)
I0427 19:50:27.367278 23810 sgd_solver.cpp:105] Iteration 2568, lr = 0.00183517
I0427 19:50:36.937711 23810 solver.cpp:218] Iteration 2580 (1.25391 iter/s, 9.57003s/12 iters), loss = 1.25677
I0427 19:50:36.937851 23810 solver.cpp:237] Train net output #0: loss = 1.25677 (* 1 = 1.25677 loss)
I0427 19:50:36.937865 23810 sgd_solver.cpp:105] Iteration 2580, lr = 0.00182069
I0427 19:50:46.440277 23810 solver.cpp:218] Iteration 2592 (1.26289 iter/s, 9.50202s/12 iters), loss = 1.31031
I0427 19:50:46.440321 23810 solver.cpp:237] Train net output #0: loss = 1.31031 (* 1 = 1.31031 loss)
I0427 19:50:46.440330 23810 sgd_solver.cpp:105] Iteration 2592, lr = 0.00180633
I0427 19:50:56.353835 23810 solver.cpp:218] Iteration 2604 (1.21052 iter/s, 9.91308s/12 iters), loss = 1.16593
I0427 19:50:56.353885 23810 solver.cpp:237] Train net output #0: loss = 1.16593 (* 1 = 1.16593 loss)
I0427 19:50:56.353894 23810 sgd_solver.cpp:105] Iteration 2604, lr = 0.00179207
I0427 19:51:05.962234 23810 solver.cpp:218] Iteration 2616 (1.24897 iter/s, 9.60793s/12 iters), loss = 1.46679
I0427 19:51:05.962280 23810 solver.cpp:237] Train net output #0: loss = 1.46679 (* 1 = 1.46679 loss)
I0427 19:51:05.962288 23810 sgd_solver.cpp:105] Iteration 2616, lr = 0.00177793
I0427 19:51:15.544764 23810 solver.cpp:218] Iteration 2628 (1.25234 iter/s, 9.58207s/12 iters), loss = 1.36944
I0427 19:51:15.544878 23810 solver.cpp:237] Train net output #0: loss = 1.36944 (* 1 = 1.36944 loss)
I0427 19:51:15.544888 23810 sgd_solver.cpp:105] Iteration 2628, lr = 0.0017639
I0427 19:51:16.381842 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:51:25.178032 23810 solver.cpp:218] Iteration 2640 (1.24575 iter/s, 9.63274s/12 iters), loss = 1.15998
I0427 19:51:25.178076 23810 solver.cpp:237] Train net output #0: loss = 1.15998 (* 1 = 1.15998 loss)
I0427 19:51:25.178084 23810 sgd_solver.cpp:105] Iteration 2640, lr = 0.00174998
I0427 19:51:33.991179 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2652.caffemodel
I0427 19:51:37.141141 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2652.solverstate
I0427 19:51:39.473850 23810 solver.cpp:330] Iteration 2652, Testing net (#0)
I0427 19:51:39.473875 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:51:42.534845 23810 solver.cpp:397] Test net output #0: accuracy = 0.314732
I0427 19:51:42.534893 23810 solver.cpp:397] Test net output #1: loss = 3.34483 (* 1 = 3.34483 loss)
I0427 19:51:42.699429 23810 solver.cpp:218] Iteration 2652 (0.684907 iter/s, 17.5206s/12 iters), loss = 1.26812
I0427 19:51:42.699473 23810 solver.cpp:237] Train net output #0: loss = 1.26812 (* 1 = 1.26812 loss)
I0427 19:51:42.699483 23810 sgd_solver.cpp:105] Iteration 2652, lr = 0.00173617
I0427 19:51:43.004976 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:51:50.682291 23810 solver.cpp:218] Iteration 2664 (1.50329 iter/s, 7.98247s/12 iters), loss = 1.25844
I0427 19:51:50.682430 23810 solver.cpp:237] Train net output #0: loss = 1.25844 (* 1 = 1.25844 loss)
I0427 19:51:50.682440 23810 sgd_solver.cpp:105] Iteration 2664, lr = 0.00172247
I0427 19:52:00.542832 23810 solver.cpp:218] Iteration 2676 (1.21704 iter/s, 9.85998s/12 iters), loss = 1.34587
I0427 19:52:00.542874 23810 solver.cpp:237] Train net output #0: loss = 1.34587 (* 1 = 1.34587 loss)
I0427 19:52:00.542883 23810 sgd_solver.cpp:105] Iteration 2676, lr = 0.00170888
I0427 19:52:10.160816 23810 solver.cpp:218] Iteration 2688 (1.24772 iter/s, 9.61752s/12 iters), loss = 1.30297
I0427 19:52:10.160858 23810 solver.cpp:237] Train net output #0: loss = 1.30297 (* 1 = 1.30297 loss)
I0427 19:52:10.160866 23810 sgd_solver.cpp:105] Iteration 2688, lr = 0.00169539
I0427 19:52:19.721427 23810 solver.cpp:218] Iteration 2700 (1.25521 iter/s, 9.56016s/12 iters), loss = 1.05779
I0427 19:52:19.721468 23810 solver.cpp:237] Train net output #0: loss = 1.05779 (* 1 = 1.05779 loss)
I0427 19:52:19.721477 23810 sgd_solver.cpp:105] Iteration 2700, lr = 0.00168201
I0427 19:52:29.098497 23810 solver.cpp:218] Iteration 2712 (1.27978 iter/s, 9.37663s/12 iters), loss = 0.969754
I0427 19:52:29.100739 23810 solver.cpp:237] Train net output #0: loss = 0.969754 (* 1 = 0.969754 loss)
I0427 19:52:29.100749 23810 sgd_solver.cpp:105] Iteration 2712, lr = 0.00166874
I0427 19:52:38.775674 23810 solver.cpp:218] Iteration 2724 (1.24037 iter/s, 9.67452s/12 iters), loss = 1.27893
I0427 19:52:38.775717 23810 solver.cpp:237] Train net output #0: loss = 1.27893 (* 1 = 1.27893 loss)
I0427 19:52:38.775725 23810 sgd_solver.cpp:105] Iteration 2724, lr = 0.00165557
I0427 19:52:43.721036 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:52:48.391238 23810 solver.cpp:218] Iteration 2736 (1.24804 iter/s, 9.61511s/12 iters), loss = 1.13604
I0427 19:52:48.391285 23810 solver.cpp:237] Train net output #0: loss = 1.13604 (* 1 = 1.13604 loss)
I0427 19:52:48.391294 23810 sgd_solver.cpp:105] Iteration 2736, lr = 0.00164251
I0427 19:52:57.842851 23810 solver.cpp:218] Iteration 2748 (1.26969 iter/s, 9.45116s/12 iters), loss = 1.08543
I0427 19:52:57.842895 23810 solver.cpp:237] Train net output #0: loss = 1.08543 (* 1 = 1.08543 loss)
I0427 19:52:57.842907 23810 sgd_solver.cpp:105] Iteration 2748, lr = 0.00162954
I0427 19:53:01.741602 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2754.caffemodel
I0427 19:53:04.802616 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2754.solverstate
I0427 19:53:07.121760 23810 solver.cpp:330] Iteration 2754, Testing net (#0)
I0427 19:53:07.121783 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:53:10.021072 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:53:10.139443 23810 solver.cpp:397] Test net output #0: accuracy = 0.31808
I0427 19:53:10.139472 23810 solver.cpp:397] Test net output #1: loss = 3.39128 (* 1 = 3.39128 loss)
I0427 19:53:13.565083 23810 solver.cpp:218] Iteration 2760 (0.763285 iter/s, 15.7215s/12 iters), loss = 1.05316
I0427 19:53:13.565125 23810 solver.cpp:237] Train net output #0: loss = 1.05316 (* 1 = 1.05316 loss)
I0427 19:53:13.565135 23810 sgd_solver.cpp:105] Iteration 2760, lr = 0.00161668
I0427 19:53:23.311376 23810 solver.cpp:218] Iteration 2772 (1.2313 iter/s, 9.74583s/12 iters), loss = 1.30527
I0427 19:53:23.311419 23810 solver.cpp:237] Train net output #0: loss = 1.30527 (* 1 = 1.30527 loss)
I0427 19:53:23.311429 23810 sgd_solver.cpp:105] Iteration 2772, lr = 0.00160393
I0427 19:53:32.827708 23810 solver.cpp:218] Iteration 2784 (1.26105 iter/s, 9.51588s/12 iters), loss = 1.12753
I0427 19:53:32.827827 23810 solver.cpp:237] Train net output #0: loss = 1.12753 (* 1 = 1.12753 loss)
I0427 19:53:32.827837 23810 sgd_solver.cpp:105] Iteration 2784, lr = 0.00159127
I0427 19:53:42.436837 23810 solver.cpp:218] Iteration 2796 (1.24888 iter/s, 9.60859s/12 iters), loss = 1.00821
I0427 19:53:42.436890 23810 solver.cpp:237] Train net output #0: loss = 1.00821 (* 1 = 1.00821 loss)
I0427 19:53:42.436900 23810 sgd_solver.cpp:105] Iteration 2796, lr = 0.00157871
I0427 19:53:51.930862 23810 solver.cpp:218] Iteration 2808 (1.26401 iter/s, 9.49356s/12 iters), loss = 1.15592
I0427 19:53:51.930909 23810 solver.cpp:237] Train net output #0: loss = 1.15592 (* 1 = 1.15592 loss)
I0427 19:53:51.930917 23810 sgd_solver.cpp:105] Iteration 2808, lr = 0.00156625
I0427 19:54:01.631287 23810 solver.cpp:218] Iteration 2820 (1.23712 iter/s, 9.69997s/12 iters), loss = 0.991768
I0427 19:54:01.631330 23810 solver.cpp:237] Train net output #0: loss = 0.991768 (* 1 = 0.991768 loss)
I0427 19:54:01.631340 23810 sgd_solver.cpp:105] Iteration 2820, lr = 0.00155389
I0427 19:54:10.556316 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:54:11.109720 23810 solver.cpp:218] Iteration 2832 (1.26609 iter/s, 9.47798s/12 iters), loss = 1.01935
I0427 19:54:11.109763 23810 solver.cpp:237] Train net output #0: loss = 1.01935 (* 1 = 1.01935 loss)
I0427 19:54:11.109771 23810 sgd_solver.cpp:105] Iteration 2832, lr = 0.00154163
I0427 19:54:20.621572 23810 solver.cpp:218] Iteration 2844 (1.26164 iter/s, 9.5114s/12 iters), loss = 0.900237
I0427 19:54:20.621616 23810 solver.cpp:237] Train net output #0: loss = 0.900237 (* 1 = 0.900237 loss)
I0427 19:54:20.621625 23810 sgd_solver.cpp:105] Iteration 2844, lr = 0.00152947
I0427 19:54:29.435839 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2856.caffemodel
I0427 19:54:32.452203 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2856.solverstate
I0427 19:54:34.783140 23810 solver.cpp:330] Iteration 2856, Testing net (#0)
I0427 19:54:34.783159 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:54:37.213383 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:54:37.765722 23810 solver.cpp:397] Test net output #0: accuracy = 0.302455
I0427 19:54:37.765760 23810 solver.cpp:397] Test net output #1: loss = 3.43803 (* 1 = 3.43803 loss)
I0427 19:54:37.929782 23810 solver.cpp:218] Iteration 2856 (0.693344 iter/s, 17.3074s/12 iters), loss = 1.13196
I0427 19:54:37.929827 23810 solver.cpp:237] Train net output #0: loss = 1.13196 (* 1 = 1.13196 loss)
I0427 19:54:37.929836 23810 sgd_solver.cpp:105] Iteration 2856, lr = 0.0015174
I0427 19:54:45.757985 23810 solver.cpp:218] Iteration 2868 (1.53299 iter/s, 7.82782s/12 iters), loss = 1.03304
I0427 19:54:45.758917 23810 solver.cpp:237] Train net output #0: loss = 1.03304 (* 1 = 1.03304 loss)
I0427 19:54:45.758927 23810 sgd_solver.cpp:105] Iteration 2868, lr = 0.00150542
I0427 19:54:55.359338 23810 solver.cpp:218] Iteration 2880 (1.25 iter/s, 9.60001s/12 iters), loss = 0.903036
I0427 19:54:55.359386 23810 solver.cpp:237] Train net output #0: loss = 0.903036 (* 1 = 0.903036 loss)
I0427 19:54:55.359395 23810 sgd_solver.cpp:105] Iteration 2880, lr = 0.00149354
I0427 19:55:04.987401 23810 solver.cpp:218] Iteration 2892 (1.24642 iter/s, 9.6276s/12 iters), loss = 1.08024
I0427 19:55:04.987444 23810 solver.cpp:237] Train net output #0: loss = 1.08024 (* 1 = 1.08024 loss)
I0427 19:55:04.987453 23810 sgd_solver.cpp:105] Iteration 2892, lr = 0.00148176
I0427 19:55:14.547948 23810 solver.cpp:218] Iteration 2904 (1.25522 iter/s, 9.5601s/12 iters), loss = 1.00608
I0427 19:55:14.547987 23810 solver.cpp:237] Train net output #0: loss = 1.00608 (* 1 = 1.00608 loss)
I0427 19:55:14.547996 23810 sgd_solver.cpp:105] Iteration 2904, lr = 0.00147006
I0427 19:55:23.988548 23810 solver.cpp:218] Iteration 2916 (1.27117 iter/s, 9.44015s/12 iters), loss = 1.11201
I0427 19:55:23.989475 23810 solver.cpp:237] Train net output #0: loss = 1.11201 (* 1 = 1.11201 loss)
I0427 19:55:23.989485 23810 sgd_solver.cpp:105] Iteration 2916, lr = 0.00145846
I0427 19:55:33.248920 23810 solver.cpp:218] Iteration 2928 (1.29603 iter/s, 9.25905s/12 iters), loss = 0.945978
I0427 19:55:33.248963 23810 solver.cpp:237] Train net output #0: loss = 0.945978 (* 1 = 0.945978 loss)
I0427 19:55:33.248971 23810 sgd_solver.cpp:105] Iteration 2928, lr = 0.00144695
I0427 19:55:36.748215 23845 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:55:42.717571 23810 solver.cpp:218] Iteration 2940 (1.2674 iter/s, 9.4682s/12 iters), loss = 0.980025
I0427 19:55:42.717618 23810 solver.cpp:237] Train net output #0: loss = 0.980025 (* 1 = 0.980025 loss)
I0427 19:55:42.717628 23810 sgd_solver.cpp:105] Iteration 2940, lr = 0.00143554
I0427 19:55:52.285236 23810 solver.cpp:218] Iteration 2952 (1.25428 iter/s, 9.56721s/12 iters), loss = 0.847701
I0427 19:55:52.285282 23810 solver.cpp:237] Train net output #0: loss = 0.847701 (* 1 = 0.847701 loss)
I0427 19:55:52.285290 23810 sgd_solver.cpp:105] Iteration 2952, lr = 0.00142421
I0427 19:55:56.181511 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2958.caffemodel
I0427 19:55:59.294775 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2958.solverstate
I0427 19:56:03.108840 23810 solver.cpp:330] Iteration 2958, Testing net (#0)
I0427 19:56:03.108865 23810 net.cpp:676] Ignoring source layer train-data
I0427 19:56:05.189956 23943 data_layer.cpp:73] Restarting data prefetching from start.
I0427 19:56:06.372916 23810 solver.cpp:397] Test net output #0: accuracy = 0.301897 I0427 19:56:06.372953 23810 solver.cpp:397] Test net output #1: loss = 3.53822 (* 1 = 3.53822 loss) I0427 19:56:09.592065 23810 solver.cpp:218] Iteration 2964 (0.693399 iter/s, 17.3061s/12 iters), loss = 0.898277 I0427 19:56:09.592113 23810 solver.cpp:237] Train net output #0: loss = 0.898277 (* 1 = 0.898277 loss) I0427 19:56:09.592121 23810 sgd_solver.cpp:105] Iteration 2964, lr = 0.00141297 I0427 19:56:12.851022 23810 blocking_queue.cpp:49] Waiting for data I0427 19:56:19.332520 23810 solver.cpp:218] Iteration 2976 (1.23203 iter/s, 9.73999s/12 iters), loss = 0.861763 I0427 19:56:19.332562 23810 solver.cpp:237] Train net output #0: loss = 0.861763 (* 1 = 0.861763 loss) I0427 19:56:19.332571 23810 sgd_solver.cpp:105] Iteration 2976, lr = 0.00140182 I0427 19:56:29.299075 23810 solver.cpp:218] Iteration 2988 (1.20408 iter/s, 9.96609s/12 iters), loss = 0.797927 I0427 19:56:29.299175 23810 solver.cpp:237] Train net output #0: loss = 0.797927 (* 1 = 0.797927 loss) I0427 19:56:29.299183 23810 sgd_solver.cpp:105] Iteration 2988, lr = 0.00139076 I0427 19:56:38.750245 23810 solver.cpp:218] Iteration 3000 (1.26975 iter/s, 9.45067s/12 iters), loss = 0.776535 I0427 19:56:38.750293 23810 solver.cpp:237] Train net output #0: loss = 0.776535 (* 1 = 0.776535 loss) I0427 19:56:38.750303 23810 sgd_solver.cpp:105] Iteration 3000, lr = 0.00137978 I0427 19:56:48.263566 23810 solver.cpp:218] Iteration 3012 (1.26145 iter/s, 9.51287s/12 iters), loss = 0.738732 I0427 19:56:48.263603 23810 solver.cpp:237] Train net output #0: loss = 0.738732 (* 1 = 0.738732 loss) I0427 19:56:48.263612 23810 sgd_solver.cpp:105] Iteration 3012, lr = 0.00136889 I0427 19:56:57.707598 23810 solver.cpp:218] Iteration 3024 (1.2707 iter/s, 9.44359s/12 iters), loss = 0.793183 I0427 19:56:57.707643 23810 solver.cpp:237] Train net output #0: loss = 0.793183 (* 1 = 0.793183 loss) I0427 19:56:57.707650 23810 sgd_solver.cpp:105] Iteration 
3024, lr = 0.00135809 I0427 19:57:05.306511 23845 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:57:07.256675 23810 solver.cpp:218] Iteration 3036 (1.25673 iter/s, 9.54862s/12 iters), loss = 0.812391 I0427 19:57:07.256723 23810 solver.cpp:237] Train net output #0: loss = 0.812391 (* 1 = 0.812391 loss) I0427 19:57:07.256732 23810 sgd_solver.cpp:105] Iteration 3036, lr = 0.00134737 I0427 19:57:16.557035 23810 solver.cpp:218] Iteration 3048 (1.29033 iter/s, 9.29992s/12 iters), loss = 0.788986 I0427 19:57:16.557080 23810 solver.cpp:237] Train net output #0: loss = 0.788986 (* 1 = 0.788986 loss) I0427 19:57:16.557088 23810 sgd_solver.cpp:105] Iteration 3048, lr = 0.00133674 I0427 19:57:25.097097 23810 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_3060.caffemodel I0427 19:57:28.128262 23810 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_3060.solverstate I0427 19:57:31.580711 23810 solver.cpp:310] Iteration 3060, loss = 0.732433 I0427 19:57:31.580739 23810 solver.cpp:330] Iteration 3060, Testing net (#0) I0427 19:57:31.580744 23810 net.cpp:676] Ignoring source layer train-data I0427 19:57:33.019724 23943 data_layer.cpp:73] Restarting data prefetching from start. I0427 19:57:34.692806 23810 solver.cpp:397] Test net output #0: accuracy = 0.308594 I0427 19:57:34.692845 23810 solver.cpp:397] Test net output #1: loss = 3.47373 (* 1 = 3.47373 loss) I0427 19:57:34.692855 23810 solver.cpp:315] Optimization Done. I0427 19:57:34.692863 23810 caffe.cpp:259] Optimization Done.
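As a side note on the `lr = …` values printed by `sgd_solver.cpp` above: with the solver's `lr_policy: "exp"`, `base_lr: 0.01`, and `gamma: 0.99934`, Caffe computes the rate as `base_lr * gamma^iter`. A quick sketch (not part of the log) checking this against the logged numbers:

```python
import math

# Solver parameters from this job's solver.prototxt (see log above).
BASE_LR = 0.01
GAMMA = 0.99934  # lr_policy: "exp"

def exp_lr(iteration):
    """Caffe's "exp" learning-rate policy: lr = base_lr * gamma^iter."""
    return BASE_LR * GAMMA ** iteration

# Matches the log to printed precision, e.g. iteration 3000 reports lr = 0.00137978.
print(f"{exp_lr(3000):.8f}")
```

This is why the rate shrinks slightly every 12 iterations rather than in steps: the decay is applied per iteration.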