I0419 21:27:34.746908 27863 upgrade_proto.cpp:1082] Attempting to upgrade input file specified using deprecated 'solver_type' field (enum)': /mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212733-4c30/solver.prototxt I0419 21:27:34.747046 27863 upgrade_proto.cpp:1089] Successfully upgraded file specified using deprecated 'solver_type' field (enum) to 'type' field (string). W0419 21:27:34.747051 27863 upgrade_proto.cpp:1091] Note that future Caffe releases will only support 'type' field (string) for a solver's type. I0419 21:27:34.747113 27863 caffe.cpp:218] Using GPUs 0 I0419 21:27:34.792650 27863 caffe.cpp:223] GPU 0: GeForce RTX 2080 I0419 21:27:35.228098 27863 solver.cpp:44] Initializing solver from parameters: test_iter: 51 test_interval: 607 base_lr: 0.01 display: 40 max_iter: 18210 lr_policy: "exp" gamma: 0.99988908 momentum: 0.9 weight_decay: 0.0001 snapshot: 607 snapshot_prefix: "snapshot" solver_mode: GPU device_id: 0 net: "train_val.prototxt" train_state { level: 0 stage: "" } type: "SGD" I0419 21:27:35.229019 27863 solver.cpp:87] Creating training net from net file: train_val.prototxt I0419 21:27:35.229655 27863 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer val-data I0419 21:27:35.229669 27863 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy I0419 21:27:35.229809 27863 net.cpp:51] Initializing net from parameters: state { phase: TRAIN level: 0 stage: "" } layer { name: "train-data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 227 mean_file: "/mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/mean.binaryproto" } data_param { source: "/mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/train_db" batch_size: 128 backend: LMDB } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "norm1" type: "LRN" bottom: "conv1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "pool1" type: "Pooling" bottom: "norm1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "norm2" type: "LRN" bottom: "conv2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "pool2" type: "Pooling" bottom: "norm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "conv3" type: "Convolution" bottom: "pool2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } 
convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" } layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc8" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 196 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "label" top: "loss" } I0419 21:27:35.229902 27863 layer_factory.hpp:77] Creating layer train-data I0419 21:27:35.232308 27863 db_lmdb.cpp:35] Opened lmdb /mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/train_db I0419 21:27:35.233057 27863 net.cpp:84] Creating Layer train-data I0419 21:27:35.233068 27863 net.cpp:380] train-data -> data I0419 21:27:35.233088 27863 net.cpp:380] train-data -> label I0419 21:27:35.233103 27863 data_transformer.cpp:25] Loading mean file from: /mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/mean.binaryproto I0419 21:27:35.238873 27863 data_layer.cpp:45] output data size: 128,3,227,227 I0419 21:27:35.385043 27863 net.cpp:122] Setting up train-data I0419 21:27:35.385063 27863 net.cpp:129] Top shape: 128 3 227 227 (19787136) I0419 21:27:35.385068 27863 net.cpp:129] Top shape: 128 (128) I0419 21:27:35.385071 27863 net.cpp:137] Memory required for data: 79149056 I0419 21:27:35.385082 27863 layer_factory.hpp:77] Creating layer conv1 I0419 21:27:35.385102 27863 net.cpp:84] Creating Layer conv1 I0419 21:27:35.385108 27863 net.cpp:406] conv1 <- data I0419 21:27:35.385120 27863 net.cpp:380] conv1 -> conv1 I0419 21:27:36.267966 27863 net.cpp:122] Setting up conv1 I0419 21:27:36.267987 27863 net.cpp:129] Top shape: 128 96 55 55 (37171200) I0419 21:27:36.267989 27863 net.cpp:137] Memory required for data: 227833856 I0419 21:27:36.268009 27863 layer_factory.hpp:77] Creating layer relu1 I0419 21:27:36.268019 27863 net.cpp:84] Creating Layer relu1 I0419 21:27:36.268023 27863 net.cpp:406] relu1 <- conv1 I0419 21:27:36.268028 27863 net.cpp:367] relu1 -> conv1 (in-place) I0419 
21:27:36.268349 27863 net.cpp:122] Setting up relu1 I0419 21:27:36.268358 27863 net.cpp:129] Top shape: 128 96 55 55 (37171200) I0419 21:27:36.268362 27863 net.cpp:137] Memory required for data: 376518656 I0419 21:27:36.268364 27863 layer_factory.hpp:77] Creating layer norm1 I0419 21:27:36.268373 27863 net.cpp:84] Creating Layer norm1 I0419 21:27:36.268376 27863 net.cpp:406] norm1 <- conv1 I0419 21:27:36.268400 27863 net.cpp:380] norm1 -> norm1 I0419 21:27:36.268923 27863 net.cpp:122] Setting up norm1 I0419 21:27:36.268932 27863 net.cpp:129] Top shape: 128 96 55 55 (37171200) I0419 21:27:36.268935 27863 net.cpp:137] Memory required for data: 525203456 I0419 21:27:36.268939 27863 layer_factory.hpp:77] Creating layer pool1 I0419 21:27:36.268945 27863 net.cpp:84] Creating Layer pool1 I0419 21:27:36.268949 27863 net.cpp:406] pool1 <- norm1 I0419 21:27:36.268954 27863 net.cpp:380] pool1 -> pool1 I0419 21:27:36.268985 27863 net.cpp:122] Setting up pool1 I0419 21:27:36.268990 27863 net.cpp:129] Top shape: 128 96 27 27 (8957952) I0419 21:27:36.268992 27863 net.cpp:137] Memory required for data: 561035264 I0419 21:27:36.268994 27863 layer_factory.hpp:77] Creating layer conv2 I0419 21:27:36.269004 27863 net.cpp:84] Creating Layer conv2 I0419 21:27:36.269007 27863 net.cpp:406] conv2 <- pool1 I0419 21:27:36.269011 27863 net.cpp:380] conv2 -> conv2 I0419 21:27:36.276625 27863 net.cpp:122] Setting up conv2 I0419 21:27:36.276640 27863 net.cpp:129] Top shape: 128 256 27 27 (23887872) I0419 21:27:36.276643 27863 net.cpp:137] Memory required for data: 656586752 I0419 21:27:36.276652 27863 layer_factory.hpp:77] Creating layer relu2 I0419 21:27:36.276659 27863 net.cpp:84] Creating Layer relu2 I0419 21:27:36.276664 27863 net.cpp:406] relu2 <- conv2 I0419 21:27:36.276669 27863 net.cpp:367] relu2 -> conv2 (in-place) I0419 21:27:36.277227 27863 net.cpp:122] Setting up relu2 I0419 21:27:36.277237 27863 net.cpp:129] Top shape: 128 256 27 27 (23887872) I0419 21:27:36.277240 27863 net.cpp:137] Memory required for data: 752138240 I0419 21:27:36.277243 27863 layer_factory.hpp:77] Creating layer norm2 I0419 21:27:36.277251 27863 net.cpp:84] Creating Layer norm2 I0419 21:27:36.277254 27863 net.cpp:406] norm2 <- conv2 I0419 21:27:36.277259 27863 net.cpp:380] norm2 -> norm2 I0419 21:27:36.277590 27863 net.cpp:122] Setting up norm2 I0419 21:27:36.277598 27863 net.cpp:129] Top shape: 128 256 27 27 (23887872) I0419 21:27:36.277601 27863 net.cpp:137] Memory required for data: 847689728 I0419 21:27:36.277604 27863 layer_factory.hpp:77] Creating layer pool2 I0419 21:27:36.277611 27863 net.cpp:84] Creating Layer pool2 I0419 21:27:36.277616 27863 net.cpp:406] pool2 <- norm2 I0419 21:27:36.277619 27863 net.cpp:380] pool2 -> pool2 I0419 21:27:36.277644 27863 net.cpp:122] Setting up pool2 I0419 21:27:36.277648 27863 net.cpp:129] Top shape: 128 256 13 13 (5537792) I0419 21:27:36.277652 27863 net.cpp:137] Memory required for data: 869840896 I0419 21:27:36.277654 27863 layer_factory.hpp:77] Creating layer conv3 I0419 21:27:36.277662 27863 net.cpp:84] Creating Layer conv3 I0419 21:27:36.277665 27863 net.cpp:406] conv3 <- pool2 I0419 21:27:36.277670 27863 net.cpp:380] conv3 -> conv3 I0419 21:27:36.288041 27863 net.cpp:122] Setting up conv3 I0419 21:27:36.288053 27863 net.cpp:129] Top shape: 128 384 13 13 (8306688) I0419 21:27:36.288056 27863 net.cpp:137] Memory required for data: 903067648 I0419 21:27:36.288065 27863 layer_factory.hpp:77] Creating layer relu3 I0419 21:27:36.288072 27863 net.cpp:84] Creating Layer relu3 I0419 
21:27:36.288075 27863 net.cpp:406] relu3 <- conv3 I0419 21:27:36.288080 27863 net.cpp:367] relu3 -> conv3 (in-place) I0419 21:27:36.288599 27863 net.cpp:122] Setting up relu3 I0419 21:27:36.288609 27863 net.cpp:129] Top shape: 128 384 13 13 (8306688) I0419 21:27:36.288611 27863 net.cpp:137] Memory required for data: 936294400 I0419 21:27:36.288614 27863 layer_factory.hpp:77] Creating layer conv4 I0419 21:27:36.288625 27863 net.cpp:84] Creating Layer conv4 I0419 21:27:36.288627 27863 net.cpp:406] conv4 <- conv3 I0419 21:27:36.288632 27863 net.cpp:380] conv4 -> conv4 I0419 21:27:36.299872 27863 net.cpp:122] Setting up conv4 I0419 21:27:36.299885 27863 net.cpp:129] Top shape: 128 384 13 13 (8306688) I0419 21:27:36.299888 27863 net.cpp:137] Memory required for data: 969521152 I0419 21:27:36.299896 27863 layer_factory.hpp:77] Creating layer relu4 I0419 21:27:36.299904 27863 net.cpp:84] Creating Layer relu4 I0419 21:27:36.299928 27863 net.cpp:406] relu4 <- conv4 I0419 21:27:36.299934 27863 net.cpp:367] relu4 -> conv4 (in-place) I0419 21:27:36.300410 27863 net.cpp:122] Setting up relu4 I0419 21:27:36.300420 27863 net.cpp:129] Top shape: 128 384 13 13 (8306688) I0419 21:27:36.300422 27863 net.cpp:137] Memory required for data: 1002747904 I0419 21:27:36.300426 27863 layer_factory.hpp:77] Creating layer conv5 I0419 21:27:36.300436 27863 net.cpp:84] Creating Layer conv5 I0419 21:27:36.300439 27863 net.cpp:406] conv5 <- conv4 I0419 21:27:36.300444 27863 net.cpp:380] conv5 -> conv5 I0419 21:27:36.309607 27863 net.cpp:122] Setting up conv5 I0419 21:27:36.309621 27863 net.cpp:129] Top shape: 128 256 13 13 (5537792) I0419 21:27:36.309624 27863 net.cpp:137] Memory required for data: 1024899072 I0419 21:27:36.309636 27863 layer_factory.hpp:77] Creating layer relu5 I0419 21:27:36.309643 27863 net.cpp:84] Creating Layer relu5 I0419 21:27:36.309648 27863 net.cpp:406] relu5 <- conv5 I0419 21:27:36.309653 27863 net.cpp:367] relu5 -> conv5 (in-place) I0419 21:27:36.310137 27863 net.cpp:122] Setting up relu5 I0419 21:27:36.310145 27863 net.cpp:129] Top shape: 128 256 13 13 (5537792) I0419 21:27:36.310148 27863 net.cpp:137] Memory required for data: 1047050240 I0419 21:27:36.310153 27863 layer_factory.hpp:77] Creating layer pool5 I0419 21:27:36.310158 27863 net.cpp:84] Creating Layer pool5 I0419 21:27:36.310161 27863 net.cpp:406] pool5 <- conv5 I0419 21:27:36.310166 27863 net.cpp:380] pool5 -> pool5 I0419 21:27:36.310199 27863 net.cpp:122] Setting up pool5 I0419 21:27:36.310204 27863 net.cpp:129] Top shape: 128 256 6 6 (1179648) I0419 21:27:36.310206 27863 net.cpp:137] Memory required for data: 1051768832 I0419 21:27:36.310209 27863 layer_factory.hpp:77] Creating layer fc6 I0419 21:27:36.310217 27863 net.cpp:84] Creating Layer fc6 I0419 21:27:36.310220 27863 net.cpp:406] fc6 <- pool5 I0419 21:27:36.310225 27863 net.cpp:380] fc6 -> fc6 I0419 21:27:36.668668 27863 net.cpp:122] Setting up fc6 I0419 21:27:36.668687 27863 net.cpp:129] Top shape: 128 4096 (524288) I0419 21:27:36.668690 27863 net.cpp:137] Memory required for data: 1053865984 I0419 21:27:36.668700 27863 layer_factory.hpp:77] Creating layer relu6 I0419 21:27:36.668709 27863 net.cpp:84] Creating Layer relu6 I0419 21:27:36.668712 27863 net.cpp:406] relu6 <- fc6 I0419 21:27:36.668718 27863 net.cpp:367] relu6 -> fc6 (in-place) I0419 21:27:36.669409 27863 net.cpp:122] Setting up relu6 I0419 21:27:36.669418 27863 net.cpp:129] Top shape: 128 4096 (524288) I0419 21:27:36.669421 27863 net.cpp:137] Memory required for data: 1055963136 I0419 21:27:36.669425 27863 
layer_factory.hpp:77] Creating layer drop6 I0419 21:27:36.669431 27863 net.cpp:84] Creating Layer drop6 I0419 21:27:36.669435 27863 net.cpp:406] drop6 <- fc6 I0419 21:27:36.669438 27863 net.cpp:367] drop6 -> fc6 (in-place) I0419 21:27:36.669463 27863 net.cpp:122] Setting up drop6 I0419 21:27:36.669468 27863 net.cpp:129] Top shape: 128 4096 (524288) I0419 21:27:36.669471 27863 net.cpp:137] Memory required for data: 1058060288 I0419 21:27:36.669473 27863 layer_factory.hpp:77] Creating layer fc7 I0419 21:27:36.669479 27863 net.cpp:84] Creating Layer fc7 I0419 21:27:36.669482 27863 net.cpp:406] fc7 <- fc6 I0419 21:27:36.669486 27863 net.cpp:380] fc7 -> fc7 I0419 21:27:36.829452 27863 net.cpp:122] Setting up fc7 I0419 21:27:36.829473 27863 net.cpp:129] Top shape: 128 4096 (524288) I0419 21:27:36.829475 27863 net.cpp:137] Memory required for data: 1060157440 I0419 21:27:36.829485 27863 layer_factory.hpp:77] Creating layer relu7 I0419 21:27:36.829493 27863 net.cpp:84] Creating Layer relu7 I0419 21:27:36.829497 27863 net.cpp:406] relu7 <- fc7 I0419 21:27:36.829504 27863 net.cpp:367] relu7 -> fc7 (in-place) I0419 21:27:36.829926 27863 net.cpp:122] Setting up relu7 I0419 21:27:36.829934 27863 net.cpp:129] Top shape: 128 4096 (524288) I0419 21:27:36.829937 27863 net.cpp:137] Memory required for data: 1062254592 I0419 21:27:36.829941 27863 layer_factory.hpp:77] Creating layer drop7 I0419 21:27:36.829946 27863 net.cpp:84] Creating Layer drop7 I0419 21:27:36.829970 27863 net.cpp:406] drop7 <- fc7 I0419 21:27:36.829975 27863 net.cpp:367] drop7 -> fc7 (in-place) I0419 21:27:36.829998 27863 net.cpp:122] Setting up drop7 I0419 21:27:36.830003 27863 net.cpp:129] Top shape: 128 4096 (524288) I0419 21:27:36.830004 27863 net.cpp:137] Memory required for data: 1064351744 I0419 21:27:36.830008 27863 layer_factory.hpp:77] Creating layer fc8 I0419 21:27:36.830013 27863 net.cpp:84] Creating Layer fc8 I0419 21:27:36.830016 27863 net.cpp:406] fc8 <- fc7 I0419 21:27:36.830020 27863 net.cpp:380] fc8 -> fc8 I0419 21:27:36.837854 27863 net.cpp:122] Setting up fc8 I0419 21:27:36.837863 27863 net.cpp:129] Top shape: 128 196 (25088) I0419 21:27:36.837867 27863 net.cpp:137] Memory required for data: 1064452096 I0419 21:27:36.837872 27863 layer_factory.hpp:77] Creating layer loss I0419 21:27:36.837877 27863 net.cpp:84] Creating Layer loss I0419 21:27:36.837880 27863 net.cpp:406] loss <- fc8 I0419 21:27:36.837883 27863 net.cpp:406] loss <- label I0419 21:27:36.837890 27863 net.cpp:380] loss -> loss I0419 21:27:36.837899 27863 layer_factory.hpp:77] Creating layer loss I0419 21:27:36.838495 27863 net.cpp:122] Setting up loss I0419 21:27:36.838503 27863 net.cpp:129] Top shape: (1) I0419 21:27:36.838506 27863 net.cpp:132] with loss weight 1 I0419 21:27:36.838523 27863 net.cpp:137] Memory required for data: 1064452100 I0419 21:27:36.838527 27863 net.cpp:198] loss needs backward computation. I0419 21:27:36.838533 27863 net.cpp:198] fc8 needs backward computation. I0419 21:27:36.838536 27863 net.cpp:198] drop7 needs backward computation. I0419 21:27:36.838539 27863 net.cpp:198] relu7 needs backward computation. I0419 21:27:36.838541 27863 net.cpp:198] fc7 needs backward computation. I0419 21:27:36.838544 27863 net.cpp:198] drop6 needs backward computation. I0419 21:27:36.838546 27863 net.cpp:198] relu6 needs backward computation. I0419 21:27:36.838549 27863 net.cpp:198] fc6 needs backward computation. I0419 21:27:36.838552 27863 net.cpp:198] pool5 needs backward computation. 
I0419 21:27:36.838555 27863 net.cpp:198] relu5 needs backward computation. I0419 21:27:36.838557 27863 net.cpp:198] conv5 needs backward computation. I0419 21:27:36.838560 27863 net.cpp:198] relu4 needs backward computation. I0419 21:27:36.838563 27863 net.cpp:198] conv4 needs backward computation. I0419 21:27:36.838567 27863 net.cpp:198] relu3 needs backward computation. I0419 21:27:36.838569 27863 net.cpp:198] conv3 needs backward computation. I0419 21:27:36.838572 27863 net.cpp:198] pool2 needs backward computation. I0419 21:27:36.838574 27863 net.cpp:198] norm2 needs backward computation. I0419 21:27:36.838578 27863 net.cpp:198] relu2 needs backward computation. I0419 21:27:36.838580 27863 net.cpp:198] conv2 needs backward computation. I0419 21:27:36.838583 27863 net.cpp:198] pool1 needs backward computation. I0419 21:27:36.838587 27863 net.cpp:198] norm1 needs backward computation. I0419 21:27:36.838589 27863 net.cpp:198] relu1 needs backward computation. I0419 21:27:36.838591 27863 net.cpp:198] conv1 needs backward computation. I0419 21:27:36.838594 27863 net.cpp:200] train-data does not need backward computation. I0419 21:27:36.838598 27863 net.cpp:242] This network produces output loss I0419 21:27:36.838608 27863 net.cpp:255] Network initialization done. I0419 21:27:36.870055 27863 solver.cpp:172] Creating test net (#0) specified by net file: train_val.prototxt I0419 21:27:36.870121 27863 net.cpp:294] The NetState phase (1) differed from the phase (0) specified by a rule in layer train-data I0419 21:27:36.870442 27863 net.cpp:51] Initializing net from parameters: state { phase: TEST } layer { name: "val-data" type: "Data" top: "data" top: "label" include { phase: TEST } transform_param { crop_size: 227 mean_file: "/mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/mean.binaryproto" } data_param { source: "/mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/val_db" batch_size: 32 backend: LMDB } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "norm1" type: "LRN" bottom: "conv1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "pool1" type: "Pooling" bottom: "norm1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "norm2" type: "LRN" bottom: "conv2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "pool2" type: "Pooling" bottom: "norm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "conv3" type: "Convolution" bottom: "pool2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: 
"Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" } layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc8" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 196 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "accuracy" type: "Accuracy" bottom: "fc8" bottom: "label" top: "accuracy" include { phase: TEST } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "label" top: "loss" } I0419 21:27:36.870635 27863 layer_factory.hpp:77] Creating layer val-data I0419 21:27:36.877683 27863 db_lmdb.cpp:35] Opened lmdb /mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/val_db I0419 21:27:36.878434 27863 net.cpp:84] Creating Layer val-data I0419 21:27:36.878456 27863 net.cpp:380] val-data -> data I0419 21:27:36.878474 27863 net.cpp:380] val-data -> label I0419 21:27:36.878489 27863 data_transformer.cpp:25] Loading mean file from: /mnt/bigdisk/DIGITS-AMB-2/digits/jobs/20210419-212033-af57/mean.binaryproto I0419 21:27:36.884140 27863 data_layer.cpp:45] output data size: 32,3,227,227 I0419 21:27:36.921000 27863 net.cpp:122] Setting up val-data I0419 21:27:36.921023 27863 net.cpp:129] Top shape: 32 3 227 227 (4946784) I0419 21:27:36.921027 27863 net.cpp:129] Top shape: 32 (32) I0419 21:27:36.921030 27863 net.cpp:137] Memory required for data: 19787264 I0419 21:27:36.921036 27863 layer_factory.hpp:77] Creating layer label_val-data_1_split I0419 21:27:36.921048 27863 net.cpp:84] Creating Layer label_val-data_1_split I0419 21:27:36.921052 27863 net.cpp:406] label_val-data_1_split <- label I0419 21:27:36.921058 27863 net.cpp:380] label_val-data_1_split -> label_val-data_1_split_0 I0419 21:27:36.921067 27863 net.cpp:380] label_val-data_1_split -> label_val-data_1_split_1 I0419 21:27:36.921114 27863 net.cpp:122] Setting up label_val-data_1_split I0419 
21:27:36.921119 27863 net.cpp:129] Top shape: 32 (32) I0419 21:27:36.921123 27863 net.cpp:129] Top shape: 32 (32) I0419 21:27:36.921124 27863 net.cpp:137] Memory required for data: 19787520 I0419 21:27:36.921128 27863 layer_factory.hpp:77] Creating layer conv1 I0419 21:27:36.921137 27863 net.cpp:84] Creating Layer conv1 I0419 21:27:36.921140 27863 net.cpp:406] conv1 <- data I0419 21:27:36.921145 27863 net.cpp:380] conv1 -> conv1 I0419 21:27:36.924433 27863 net.cpp:122] Setting up conv1 I0419 21:27:36.924445 27863 net.cpp:129] Top shape: 32 96 55 55 (9292800) I0419 21:27:36.924448 27863 net.cpp:137] Memory required for data: 56958720 I0419 21:27:36.924458 27863 layer_factory.hpp:77] Creating layer relu1 I0419 21:27:36.924464 27863 net.cpp:84] Creating Layer relu1 I0419 21:27:36.924468 27863 net.cpp:406] relu1 <- conv1 I0419 21:27:36.924472 27863 net.cpp:367] relu1 -> conv1 (in-place) I0419 21:27:36.924805 27863 net.cpp:122] Setting up relu1 I0419 21:27:36.924815 27863 net.cpp:129] Top shape: 32 96 55 55 (9292800) I0419 21:27:36.924818 27863 net.cpp:137] Memory required for data: 94129920 I0419 21:27:36.924821 27863 layer_factory.hpp:77] Creating layer norm1 I0419 21:27:36.924829 27863 net.cpp:84] Creating Layer norm1 I0419 21:27:36.924832 27863 net.cpp:406] norm1 <- conv1 I0419 21:27:36.924836 27863 net.cpp:380] norm1 -> norm1 I0419 21:27:36.925348 27863 net.cpp:122] Setting up norm1 I0419 21:27:36.925356 27863 net.cpp:129] Top shape: 32 96 55 55 (9292800) I0419 21:27:36.925359 27863 net.cpp:137] Memory required for data: 131301120 I0419 21:27:36.925364 27863 layer_factory.hpp:77] Creating layer pool1 I0419 21:27:36.925369 27863 net.cpp:84] Creating Layer pool1 I0419 21:27:36.925372 27863 net.cpp:406] pool1 <- norm1 I0419 21:27:36.925377 27863 net.cpp:380] pool1 -> pool1 I0419 21:27:36.925402 27863 net.cpp:122] Setting up pool1 I0419 21:27:36.925407 27863 net.cpp:129] Top shape: 32 96 27 27 (2239488) I0419 21:27:36.925410 27863 net.cpp:137] Memory required for data: 140259072 I0419 21:27:36.925412 27863 layer_factory.hpp:77] Creating layer conv2 I0419 21:27:36.925420 27863 net.cpp:84] Creating Layer conv2 I0419 21:27:36.925422 27863 net.cpp:406] conv2 <- pool1 I0419 21:27:36.925449 27863 net.cpp:380] conv2 -> conv2 I0419 21:27:36.934937 27863 net.cpp:122] Setting up conv2 I0419 21:27:36.934952 27863 net.cpp:129] Top shape: 32 256 27 27 (5971968) I0419 21:27:36.934954 27863 net.cpp:137] Memory required for data: 164146944 I0419 21:27:36.934967 27863 layer_factory.hpp:77] Creating layer relu2 I0419 21:27:36.934974 27863 net.cpp:84] Creating Layer relu2 I0419 21:27:36.934978 27863 net.cpp:406] relu2 <- conv2 I0419 21:27:36.934983 27863 net.cpp:367] relu2 -> conv2 (in-place) I0419 21:27:36.935547 27863 net.cpp:122] Setting up relu2 I0419 21:27:36.935557 27863 net.cpp:129] Top shape: 32 256 27 27 (5971968) I0419 21:27:36.935560 27863 net.cpp:137] Memory required for data: 188034816 I0419 21:27:36.935564 27863 layer_factory.hpp:77] Creating layer norm2 I0419 21:27:36.935573 27863 net.cpp:84] Creating Layer norm2 I0419 21:27:36.935576 27863 net.cpp:406] norm2 <- conv2 I0419 21:27:36.935581 27863 net.cpp:380] norm2 -> norm2 I0419 21:27:36.936347 27863 net.cpp:122] Setting up norm2 I0419 21:27:36.936357 27863 net.cpp:129] Top shape: 32 256 27 27 (5971968) I0419 21:27:36.936360 27863 net.cpp:137] Memory required for data: 211922688 I0419 21:27:36.936364 27863 layer_factory.hpp:77] Creating layer pool2 I0419 21:27:36.936372 27863 net.cpp:84] Creating Layer pool2 I0419 21:27:36.936375 27863 
net.cpp:406] pool2 <- norm2 I0419 21:27:36.936383 27863 net.cpp:380] pool2 -> pool2 I0419 21:27:36.936409 27863 net.cpp:122] Setting up pool2 I0419 21:27:36.936415 27863 net.cpp:129] Top shape: 32 256 13 13 (1384448) I0419 21:27:36.936417 27863 net.cpp:137] Memory required for data: 217460480 I0419 21:27:36.936420 27863 layer_factory.hpp:77] Creating layer conv3 I0419 21:27:36.936431 27863 net.cpp:84] Creating Layer conv3 I0419 21:27:36.936434 27863 net.cpp:406] conv3 <- pool2 I0419 21:27:36.936439 27863 net.cpp:380] conv3 -> conv3 I0419 21:27:36.948204 27863 net.cpp:122] Setting up conv3 I0419 21:27:36.948220 27863 net.cpp:129] Top shape: 32 384 13 13 (2076672) I0419 21:27:36.948225 27863 net.cpp:137] Memory required for data: 225767168 I0419 21:27:36.948236 27863 layer_factory.hpp:77] Creating layer relu3 I0419 21:27:36.948246 27863 net.cpp:84] Creating Layer relu3 I0419 21:27:36.948251 27863 net.cpp:406] relu3 <- conv3 I0419 21:27:36.948256 27863 net.cpp:367] relu3 -> conv3 (in-place) I0419 21:27:36.948845 27863 net.cpp:122] Setting up relu3 I0419 21:27:36.948854 27863 net.cpp:129] Top shape: 32 384 13 13 (2076672) I0419 21:27:36.948858 27863 net.cpp:137] Memory required for data: 234073856 I0419 21:27:36.948860 27863 layer_factory.hpp:77] Creating layer conv4 I0419 21:27:36.948873 27863 net.cpp:84] Creating Layer conv4 I0419 21:27:36.948875 27863 net.cpp:406] conv4 <- conv3 I0419 21:27:36.948881 27863 net.cpp:380] conv4 -> conv4 I0419 21:27:36.959245 27863 net.cpp:122] Setting up conv4 I0419 21:27:36.959262 27863 net.cpp:129] Top shape: 32 384 13 13 (2076672) I0419 21:27:36.959265 27863 net.cpp:137] Memory required for data: 242380544 I0419 21:27:36.959275 27863 layer_factory.hpp:77] Creating layer relu4 I0419 21:27:36.959285 27863 net.cpp:84] Creating Layer relu4 I0419 21:27:36.959288 27863 net.cpp:406] relu4 <- conv4 I0419 21:27:36.959295 27863 net.cpp:367] relu4 -> conv4 (in-place) I0419 21:27:36.959677 27863 net.cpp:122] Setting up relu4 I0419 21:27:36.959686 27863 net.cpp:129] Top shape: 32 384 13 13 (2076672) I0419 21:27:36.959689 27863 net.cpp:137] Memory required for data: 250687232 I0419 21:27:36.959692 27863 layer_factory.hpp:77] Creating layer conv5 I0419 21:27:36.959702 27863 net.cpp:84] Creating Layer conv5 I0419 21:27:36.959705 27863 net.cpp:406] conv5 <- conv4 I0419 21:27:36.959712 27863 net.cpp:380] conv5 -> conv5 I0419 21:27:36.969378 27863 net.cpp:122] Setting up conv5 I0419 21:27:36.969398 27863 net.cpp:129] Top shape: 32 256 13 13 (1384448) I0419 21:27:36.969401 27863 net.cpp:137] Memory required for data: 256225024 I0419 21:27:36.969413 27863 layer_factory.hpp:77] Creating layer relu5 I0419 21:27:36.969421 27863 net.cpp:84] Creating Layer relu5 I0419 21:27:36.969424 27863 net.cpp:406] relu5 <- conv5 I0419 21:27:36.969449 27863 net.cpp:367] relu5 -> conv5 (in-place) I0419 21:27:36.970011 27863 net.cpp:122] Setting up relu5 I0419 21:27:36.970021 27863 net.cpp:129] Top shape: 32 256 13 13 (1384448) I0419 21:27:36.970023 27863 net.cpp:137] Memory required for data: 261762816 I0419 21:27:36.970027 27863 layer_factory.hpp:77] Creating layer pool5 I0419 21:27:36.970038 27863 net.cpp:84] Creating Layer pool5 I0419 21:27:36.970041 27863 net.cpp:406] pool5 <- conv5 I0419 21:27:36.970046 27863 net.cpp:380] pool5 -> pool5 I0419 21:27:36.970083 27863 net.cpp:122] Setting up pool5 I0419 21:27:36.970088 27863 net.cpp:129] Top shape: 32 256 6 6 (294912) I0419 21:27:36.970090 27863 net.cpp:137] Memory required for data: 262942464 I0419 21:27:36.970093 27863 layer_factory.hpp:77] 
Creating layer fc6 I0419 21:27:36.970101 27863 net.cpp:84] Creating Layer fc6 I0419 21:27:36.970103 27863 net.cpp:406] fc6 <- pool5 I0419 21:27:36.970109 27863 net.cpp:380] fc6 -> fc6 I0419 21:27:37.329417 27863 net.cpp:122] Setting up fc6 I0419 21:27:37.329437 27863 net.cpp:129] Top shape: 32 4096 (131072) I0419 21:27:37.329439 27863 net.cpp:137] Memory required for data: 263466752 I0419 21:27:37.329448 27863 layer_factory.hpp:77] Creating layer relu6 I0419 21:27:37.329457 27863 net.cpp:84] Creating Layer relu6 I0419 21:27:37.329460 27863 net.cpp:406] relu6 <- fc6 I0419 21:27:37.329468 27863 net.cpp:367] relu6 -> fc6 (in-place) I0419 21:27:37.330251 27863 net.cpp:122] Setting up relu6 I0419 21:27:37.330260 27863 net.cpp:129] Top shape: 32 4096 (131072) I0419 21:27:37.330263 27863 net.cpp:137] Memory required for data: 263991040 I0419 21:27:37.330266 27863 layer_factory.hpp:77] Creating layer drop6 I0419 21:27:37.330273 27863 net.cpp:84] Creating Layer drop6 I0419 21:27:37.330276 27863 net.cpp:406] drop6 <- fc6 I0419 21:27:37.330281 27863 net.cpp:367] drop6 -> fc6 (in-place) I0419 21:27:37.330305 27863 net.cpp:122] Setting up drop6 I0419 21:27:37.330310 27863 net.cpp:129] Top shape: 32 4096 (131072) I0419 21:27:37.330312 27863 net.cpp:137] Memory required for data: 264515328 I0419 21:27:37.330315 27863 layer_factory.hpp:77] Creating layer fc7 I0419 21:27:37.330322 27863 net.cpp:84] Creating Layer fc7 I0419 21:27:37.330324 27863 net.cpp:406] fc7 <- fc6 I0419 21:27:37.330330 27863 net.cpp:380] fc7 -> fc7 I0419 21:27:37.531955 27863 net.cpp:122] Setting up fc7 I0419 21:27:37.531976 27863 net.cpp:129] Top shape: 32 4096 (131072) I0419 21:27:37.531980 27863 net.cpp:137] Memory required for data: 265039616 I0419 21:27:37.531989 27863 layer_factory.hpp:77] Creating layer relu7 I0419 21:27:37.531998 27863 net.cpp:84] Creating Layer relu7 I0419 21:27:37.532002 27863 net.cpp:406] relu7 <- fc7 I0419 21:27:37.532011 27863 net.cpp:367] relu7 -> fc7 (in-place) I0419 21:27:37.532538 27863 net.cpp:122] Setting up relu7 I0419 21:27:37.532547 27863 net.cpp:129] Top shape: 32 4096 (131072) I0419 21:27:37.532550 27863 net.cpp:137] Memory required for data: 265563904 I0419 21:27:37.532554 27863 layer_factory.hpp:77] Creating layer drop7 I0419 21:27:37.532560 27863 net.cpp:84] Creating Layer drop7 I0419 21:27:37.532564 27863 net.cpp:406] drop7 <- fc7 I0419 21:27:37.532570 27863 net.cpp:367] drop7 -> fc7 (in-place) I0419 21:27:37.532594 27863 net.cpp:122] Setting up drop7 I0419 21:27:37.532601 27863 net.cpp:129] Top shape: 32 4096 (131072) I0419 21:27:37.532603 27863 net.cpp:137] Memory required for data: 266088192 I0419 21:27:37.532606 27863 layer_factory.hpp:77] Creating layer fc8 I0419 21:27:37.532613 27863 net.cpp:84] Creating Layer fc8 I0419 21:27:37.532616 27863 net.cpp:406] fc8 <- fc7 I0419 21:27:37.532622 27863 net.cpp:380] fc8 -> fc8 I0419 21:27:37.541198 27863 net.cpp:122] Setting up fc8 I0419 21:27:37.541208 27863 net.cpp:129] Top shape: 32 196 (6272) I0419 21:27:37.541210 27863 net.cpp:137] Memory required for data: 266113280 I0419 21:27:37.541218 27863 layer_factory.hpp:77] Creating layer fc8_fc8_0_split I0419 21:27:37.541224 27863 net.cpp:84] Creating Layer fc8_fc8_0_split I0419 21:27:37.541226 27863 net.cpp:406] fc8_fc8_0_split <- fc8 I0419 21:27:37.541255 27863 net.cpp:380] fc8_fc8_0_split -> fc8_fc8_0_split_0 I0419 21:27:37.541262 27863 net.cpp:380] fc8_fc8_0_split -> fc8_fc8_0_split_1 I0419 21:27:37.541294 27863 net.cpp:122] Setting up fc8_fc8_0_split I0419 21:27:37.541299 27863 net.cpp:129] 
Top shape: 32 196 (6272) I0419 21:27:37.541302 27863 net.cpp:129] Top shape: 32 196 (6272) I0419 21:27:37.541306 27863 net.cpp:137] Memory required for data: 266163456 I0419 21:27:37.541308 27863 layer_factory.hpp:77] Creating layer accuracy I0419 21:27:37.541314 27863 net.cpp:84] Creating Layer accuracy I0419 21:27:37.541317 27863 net.cpp:406] accuracy <- fc8_fc8_0_split_0 I0419 21:27:37.541321 27863 net.cpp:406] accuracy <- label_val-data_1_split_0 I0419 21:27:37.541326 27863 net.cpp:380] accuracy -> accuracy I0419 21:27:37.541333 27863 net.cpp:122] Setting up accuracy I0419 21:27:37.541337 27863 net.cpp:129] Top shape: (1) I0419 21:27:37.541339 27863 net.cpp:137] Memory required for data: 266163460 I0419 21:27:37.541342 27863 layer_factory.hpp:77] Creating layer loss I0419 21:27:37.541347 27863 net.cpp:84] Creating Layer loss I0419 21:27:37.541349 27863 net.cpp:406] loss <- fc8_fc8_0_split_1 I0419 21:27:37.541353 27863 net.cpp:406] loss <- label_val-data_1_split_1 I0419 21:27:37.541359 27863 net.cpp:380] loss -> loss I0419 21:27:37.541366 27863 layer_factory.hpp:77] Creating layer loss I0419 21:27:37.542065 27863 net.cpp:122] Setting up loss I0419 21:27:37.542074 27863 net.cpp:129] Top shape: (1) I0419 21:27:37.542078 27863 net.cpp:132] with loss weight 1 I0419 21:27:37.542086 27863 net.cpp:137] Memory required for data: 266163464 I0419 21:27:37.542090 27863 net.cpp:198] loss needs backward computation. I0419 21:27:37.542094 27863 net.cpp:200] accuracy does not need backward computation. I0419 21:27:37.542098 27863 net.cpp:198] fc8_fc8_0_split needs backward computation. I0419 21:27:37.542101 27863 net.cpp:198] fc8 needs backward computation. I0419 21:27:37.542104 27863 net.cpp:198] drop7 needs backward computation. I0419 21:27:37.542107 27863 net.cpp:198] relu7 needs backward computation. I0419 21:27:37.542109 27863 net.cpp:198] fc7 needs backward computation. I0419 21:27:37.542112 27863 net.cpp:198] drop6 needs backward computation. I0419 21:27:37.542115 27863 net.cpp:198] relu6 needs backward computation. I0419 21:27:37.542119 27863 net.cpp:198] fc6 needs backward computation. I0419 21:27:37.542122 27863 net.cpp:198] pool5 needs backward computation. I0419 21:27:37.542125 27863 net.cpp:198] relu5 needs backward computation. I0419 21:27:37.542129 27863 net.cpp:198] conv5 needs backward computation. I0419 21:27:37.542132 27863 net.cpp:198] relu4 needs backward computation. I0419 21:27:37.542135 27863 net.cpp:198] conv4 needs backward computation. I0419 21:27:37.542138 27863 net.cpp:198] relu3 needs backward computation. I0419 21:27:37.542141 27863 net.cpp:198] conv3 needs backward computation. I0419 21:27:37.542145 27863 net.cpp:198] pool2 needs backward computation. I0419 21:27:37.542148 27863 net.cpp:198] norm2 needs backward computation. I0419 21:27:37.542151 27863 net.cpp:198] relu2 needs backward computation. I0419 21:27:37.542155 27863 net.cpp:198] conv2 needs backward computation. I0419 21:27:37.542157 27863 net.cpp:198] pool1 needs backward computation. I0419 21:27:37.542160 27863 net.cpp:198] norm1 needs backward computation. I0419 21:27:37.542163 27863 net.cpp:198] relu1 needs backward computation. I0419 21:27:37.542166 27863 net.cpp:198] conv1 needs backward computation. I0419 21:27:37.542171 27863 net.cpp:200] label_val-data_1_split does not need backward computation. I0419 21:27:37.542174 27863 net.cpp:200] val-data does not need backward computation. 
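The "Top shape" and "Memory required for data" figures in the setup lines above follow from Caffe's spatial-size rule, out = (in + 2*pad - kernel) / stride + 1, plus a running total of 4 bytes per blob element (float32); the counter also grows for in-place layers such as relu1. A minimal Python sketch reproducing the first few training-net totals (the helpers conv_out and blob_bytes are illustrative names, not Caffe API):

    def conv_out(size, kernel, stride=1, pad=0):
        # Caffe's output-size rule; for these kernel/stride values pooling gives the same result
        return (size + 2 * pad - kernel) // stride + 1

    def blob_bytes(*shape):
        count = 1
        for d in shape:
            count *= d
        return 4 * count                  # float32: 4 bytes per element

    mem = blob_bytes(128, 3, 227, 227) + blob_bytes(128)   # train-data tops: data + label
    print(mem)                            # 79149056, as logged after "Setting up train-data"

    h = conv_out(227, 11, stride=4)       # conv1: (227 - 11)/4 + 1 = 55
    mem += blob_bytes(128, 96, h, h)
    print(h, mem)                         # 55 227833856, matching the conv1 lines

    mem += blob_bytes(128, 96, h, h)      # relu1 runs in-place but is still counted
    print(mem)                            # 376518656
    # norm1, pool1, ..., fc8, loss continue the same way; the training net ends at
    # 1064452100 bytes (~1 GB) and the test net at 266163464 bytes with batch size 32.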
I0419 21:27:37.542176 27863 net.cpp:242] This network produces output accuracy I0419 21:27:37.542181 27863 net.cpp:242] This network produces output loss I0419 21:27:37.542199 27863 net.cpp:255] Network initialization done. I0419 21:27:37.542268 27863 solver.cpp:56] Solver scaffolding done. I0419 21:27:37.542649 27863 caffe.cpp:248] Starting Optimization I0419 21:27:37.542659 27863 solver.cpp:272] Solving I0419 21:27:37.542672 27863 solver.cpp:273] Learning Rate Policy: exp I0419 21:27:37.544376 27863 solver.cpp:330] Iteration 0, Testing net (#0) I0419 21:27:37.544386 27863 net.cpp:676] Ignoring source layer train-data I0419 21:27:37.630053 27863 blocking_queue.cpp:49] Waiting for data I0419 21:27:41.891072 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:27:41.937431 27863 solver.cpp:397] Test net output #0: accuracy = 0.00428922 I0419 21:27:41.937479 27863 solver.cpp:397] Test net output #1: loss = 5.27921 (* 1 = 5.27921 loss) I0419 21:27:42.036509 27863 solver.cpp:218] Iteration 0 (0 iter/s, 4.49379s/40 iters), loss = 5.29297 I0419 21:27:42.038203 27863 solver.cpp:237] Train net output #0: loss = 5.29297 (* 1 = 5.29297 loss) I0419 21:27:42.038233 27863 sgd_solver.cpp:105] Iteration 0, lr = 0.01 I0419 21:27:57.008185 27863 solver.cpp:218] Iteration 40 (2.67201 iter/s, 14.97s/40 iters), loss = 5.30581 I0419 21:27:57.008235 27863 solver.cpp:237] Train net output #0: loss = 5.30581 (* 1 = 5.30581 loss) I0419 21:27:57.008250 27863 sgd_solver.cpp:105] Iteration 40, lr = 0.00995573 I0419 21:28:13.146764 27863 solver.cpp:218] Iteration 80 (2.47854 iter/s, 16.1385s/40 iters), loss = 5.29883 I0419 21:28:13.146837 27863 solver.cpp:237] Train net output #0: loss = 5.29883 (* 1 = 5.29883 loss) I0419 21:28:13.146847 27863 sgd_solver.cpp:105] Iteration 80, lr = 0.00991165 I0419 21:28:29.254734 27863 solver.cpp:218] Iteration 120 (2.48325 iter/s, 16.1079s/40 iters), loss = 5.2675 I0419 21:28:29.254768 27863 solver.cpp:237] Train net output #0: loss = 5.2675 (* 1 = 5.2675 loss) I0419 21:28:29.254776 27863 sgd_solver.cpp:105] Iteration 120, lr = 0.00986777 I0419 21:28:45.341791 27863 solver.cpp:218] Iteration 160 (2.48647 iter/s, 16.087s/40 iters), loss = 5.25376 I0419 21:28:45.341912 27863 solver.cpp:237] Train net output #0: loss = 5.25376 (* 1 = 5.25376 loss) I0419 21:28:45.341919 27863 sgd_solver.cpp:105] Iteration 160, lr = 0.00982408 I0419 21:29:01.420658 27863 solver.cpp:218] Iteration 200 (2.48775 iter/s, 16.0788s/40 iters), loss = 5.24475 I0419 21:29:01.420697 27863 solver.cpp:237] Train net output #0: loss = 5.24475 (* 1 = 5.24475 loss) I0419 21:29:01.420704 27863 sgd_solver.cpp:105] Iteration 200, lr = 0.00978058 I0419 21:29:17.525264 27863 solver.cpp:218] Iteration 240 (2.48377 iter/s, 16.1046s/40 iters), loss = 5.12882 I0419 21:29:17.525358 27863 solver.cpp:237] Train net output #0: loss = 5.12882 (* 1 = 5.12882 loss) I0419 21:29:17.525367 27863 sgd_solver.cpp:105] Iteration 240, lr = 0.00973728 I0419 21:29:33.617506 27863 solver.cpp:218] Iteration 280 (2.48568 iter/s, 16.0922s/40 iters), loss = 5.1609 I0419 21:29:33.617543 27863 solver.cpp:237] Train net output #0: loss = 5.1609 (* 1 = 5.1609 loss) I0419 21:29:33.617550 27863 sgd_solver.cpp:105] Iteration 280, lr = 0.00969417 I0419 21:29:49.717291 27863 solver.cpp:218] Iteration 320 (2.48451 iter/s, 16.0998s/40 iters), loss = 5.10461 I0419 21:29:49.717375 27863 solver.cpp:237] Train net output #0: loss = 5.10461 (* 1 = 5.10461 loss) I0419 21:29:49.717383 27863 sgd_solver.cpp:105] Iteration 320, lr = 0.00965125 I0419 
21:30:05.819911 27863 solver.cpp:218] Iteration 360 (2.48408 iter/s, 16.1026s/40 iters), loss = 5.04258 I0419 21:30:05.819948 27863 solver.cpp:237] Train net output #0: loss = 5.04258 (* 1 = 5.04258 loss) I0419 21:30:05.819957 27863 sgd_solver.cpp:105] Iteration 360, lr = 0.00960852 I0419 21:30:21.877710 27863 solver.cpp:218] Iteration 400 (2.491 iter/s, 16.0578s/40 iters), loss = 5.08354 I0419 21:30:21.877825 27863 solver.cpp:237] Train net output #0: loss = 5.08354 (* 1 = 5.08354 loss) I0419 21:30:21.877833 27863 sgd_solver.cpp:105] Iteration 400, lr = 0.00956598 I0419 21:30:37.838526 27863 solver.cpp:218] Iteration 440 (2.50615 iter/s, 15.9607s/40 iters), loss = 5.05903 I0419 21:30:37.838559 27863 solver.cpp:237] Train net output #0: loss = 5.05903 (* 1 = 5.05903 loss) I0419 21:30:37.838567 27863 sgd_solver.cpp:105] Iteration 440, lr = 0.00952363 I0419 21:30:53.941490 27863 solver.cpp:218] Iteration 480 (2.48402 iter/s, 16.1029s/40 iters), loss = 5.20552 I0419 21:30:53.941601 27863 solver.cpp:237] Train net output #0: loss = 5.20552 (* 1 = 5.20552 loss) I0419 21:30:53.941610 27863 sgd_solver.cpp:105] Iteration 480, lr = 0.00948146 I0419 21:31:09.861757 27863 solver.cpp:218] Iteration 520 (2.51254 iter/s, 15.9202s/40 iters), loss = 5.05087 I0419 21:31:09.861791 27863 solver.cpp:237] Train net output #0: loss = 5.05087 (* 1 = 5.05087 loss) I0419 21:31:09.861799 27863 sgd_solver.cpp:105] Iteration 520, lr = 0.00943948 I0419 21:31:25.491660 27863 solver.cpp:218] Iteration 560 (2.5592 iter/s, 15.6299s/40 iters), loss = 5.00735 I0419 21:31:25.491775 27863 solver.cpp:237] Train net output #0: loss = 5.00735 (* 1 = 5.00735 loss) I0419 21:31:25.491784 27863 sgd_solver.cpp:105] Iteration 560, lr = 0.00939769 I0419 21:31:41.233386 27863 solver.cpp:218] Iteration 600 (2.54103 iter/s, 15.7416s/40 iters), loss = 5.07284 I0419 21:31:41.233418 27863 solver.cpp:237] Train net output #0: loss = 5.07284 (* 1 = 5.07284 loss) I0419 21:31:41.233425 27863 sgd_solver.cpp:105] Iteration 600, lr = 0.00935608 I0419 21:31:43.510802 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:31:43.573357 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_607.caffemodel I0419 21:31:47.503228 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_607.solverstate I0419 21:31:50.046018 27863 solver.cpp:330] Iteration 607, Testing net (#0) I0419 21:31:50.046043 27863 net.cpp:676] Ignoring source layer train-data I0419 21:31:54.775455 27881 data_layer.cpp:73] Restarting data prefetching from start. 
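The snapshot/test interval of 607 iterations corresponds to one pass over the training set: 607 x 128 = 77,696 images per epoch, and max_iter 18210 = 30 x 607, i.e. 30 epochs with a snapshot and a test pass after each. Similarly, test_iter 51 at validation batch size 32 scores 1,632 images per test, which matches the logged accuracies being multiples of 1/1632 (e.g. 7/1632 = 0.00428922 at iteration 0). A quick check, with the dataset sizes inferred from these solver settings rather than stated in the log:

    train_batch, val_batch = 128, 32
    test_interval, test_iter, max_iter = 607, 51, 18210

    print(test_interval * train_batch)    # 77696 -> approximate training-set size (one epoch per interval)
    print(max_iter // test_interval)      # 30    -> epochs the solver is scheduled to run
    val_images = test_iter * val_batch
    print(val_images)                     # 1632  -> validation images per test pass
    print(7 / val_images)                 # 0.0042892..., the 0.00428922 accuracy reported at iteration 0
    print(round(0.0355392 * val_images))  # 58    -> correct predictions at the iteration-607 test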
I0419 21:31:54.861956 27863 solver.cpp:397] Test net output #0: accuracy = 0.0355392 I0419 21:31:54.862001 27863 solver.cpp:397] Test net output #1: loss = 4.96639 (* 1 = 4.96639 loss) I0419 21:32:07.154608 27863 solver.cpp:218] Iteration 640 (1.54314 iter/s, 25.9212s/40 iters), loss = 4.9642 I0419 21:32:07.154721 27863 solver.cpp:237] Train net output #0: loss = 4.9642 (* 1 = 4.9642 loss) I0419 21:32:07.154731 27863 sgd_solver.cpp:105] Iteration 640, lr = 0.00931466 I0419 21:32:22.801513 27863 solver.cpp:218] Iteration 680 (2.55643 iter/s, 15.6468s/40 iters), loss = 4.94701 I0419 21:32:22.801548 27863 solver.cpp:237] Train net output #0: loss = 4.94701 (* 1 = 4.94701 loss) I0419 21:32:22.801555 27863 sgd_solver.cpp:105] Iteration 680, lr = 0.00927342 I0419 21:32:38.384459 27863 solver.cpp:218] Iteration 720 (2.56691 iter/s, 15.5829s/40 iters), loss = 5.01544 I0419 21:32:38.384531 27863 solver.cpp:237] Train net output #0: loss = 5.01544 (* 1 = 5.01544 loss) I0419 21:32:38.384539 27863 sgd_solver.cpp:105] Iteration 720, lr = 0.00923236 I0419 21:32:53.986081 27863 solver.cpp:218] Iteration 760 (2.56385 iter/s, 15.6016s/40 iters), loss = 4.87573 I0419 21:32:53.986114 27863 solver.cpp:237] Train net output #0: loss = 4.87573 (* 1 = 4.87573 loss) I0419 21:32:53.986120 27863 sgd_solver.cpp:105] Iteration 760, lr = 0.00919149 I0419 21:33:09.553637 27863 solver.cpp:218] Iteration 800 (2.56945 iter/s, 15.5675s/40 iters), loss = 4.76269 I0419 21:33:09.553704 27863 solver.cpp:237] Train net output #0: loss = 4.76269 (* 1 = 4.76269 loss) I0419 21:33:09.553714 27863 sgd_solver.cpp:105] Iteration 800, lr = 0.00915079 I0419 21:33:25.185225 27863 solver.cpp:218] Iteration 840 (2.55893 iter/s, 15.6315s/40 iters), loss = 4.86078 I0419 21:33:25.185258 27863 solver.cpp:237] Train net output #0: loss = 4.86078 (* 1 = 4.86078 loss) I0419 21:33:25.185266 27863 sgd_solver.cpp:105] Iteration 840, lr = 0.00911028 I0419 21:33:40.867130 27863 solver.cpp:218] Iteration 880 (2.55071 iter/s, 15.6819s/40 iters), loss = 4.8163 I0419 21:33:40.867241 27863 solver.cpp:237] Train net output #0: loss = 4.8163 (* 1 = 4.8163 loss) I0419 21:33:40.867250 27863 sgd_solver.cpp:105] Iteration 880, lr = 0.00906995 I0419 21:33:52.899638 27863 blocking_queue.cpp:49] Waiting for data I0419 21:33:56.455799 27863 solver.cpp:218] Iteration 920 (2.56598 iter/s, 15.5886s/40 iters), loss = 4.81533 I0419 21:33:56.455832 27863 solver.cpp:237] Train net output #0: loss = 4.81533 (* 1 = 4.81533 loss) I0419 21:33:56.455839 27863 sgd_solver.cpp:105] Iteration 920, lr = 0.00902979 I0419 21:34:12.054451 27863 solver.cpp:218] Iteration 960 (2.56433 iter/s, 15.5986s/40 iters), loss = 4.51674 I0419 21:34:12.054531 27863 solver.cpp:237] Train net output #0: loss = 4.51674 (* 1 = 4.51674 loss) I0419 21:34:12.054539 27863 sgd_solver.cpp:105] Iteration 960, lr = 0.00898981 I0419 21:34:27.683354 27863 solver.cpp:218] Iteration 1000 (2.55937 iter/s, 15.6288s/40 iters), loss = 4.63599 I0419 21:34:27.683387 27863 solver.cpp:237] Train net output #0: loss = 4.63599 (* 1 = 4.63599 loss) I0419 21:34:27.683394 27863 sgd_solver.cpp:105] Iteration 1000, lr = 0.00895001 I0419 21:34:43.297852 27863 solver.cpp:218] Iteration 1040 (2.56173 iter/s, 15.6145s/40 iters), loss = 4.49456 I0419 21:34:43.297952 27863 solver.cpp:237] Train net output #0: loss = 4.49456 (* 1 = 4.49456 loss) I0419 21:34:43.297961 27863 sgd_solver.cpp:105] Iteration 1040, lr = 0.00891038 I0419 21:34:58.862126 27863 solver.cpp:218] Iteration 1080 (2.57 iter/s, 15.5642s/40 iters), loss = 4.52277 I0419 
21:34:58.862159 27863 solver.cpp:237] Train net output #0: loss = 4.52277 (* 1 = 4.52277 loss) I0419 21:34:58.862167 27863 sgd_solver.cpp:105] Iteration 1080, lr = 0.00887094 I0419 21:35:14.405555 27863 solver.cpp:218] Iteration 1120 (2.57344 iter/s, 15.5434s/40 iters), loss = 4.39579 I0419 21:35:14.405665 27863 solver.cpp:237] Train net output #0: loss = 4.39579 (* 1 = 4.39579 loss) I0419 21:35:14.405673 27863 sgd_solver.cpp:105] Iteration 1120, lr = 0.00883166 I0419 21:35:30.012408 27863 solver.cpp:218] Iteration 1160 (2.56299 iter/s, 15.6068s/40 iters), loss = 4.40955 I0419 21:35:30.012440 27863 solver.cpp:237] Train net output #0: loss = 4.40955 (* 1 = 4.40955 loss) I0419 21:35:30.012447 27863 sgd_solver.cpp:105] Iteration 1160, lr = 0.00879256 I0419 21:35:45.604666 27863 solver.cpp:218] Iteration 1200 (2.56538 iter/s, 15.5922s/40 iters), loss = 4.18958 I0419 21:35:45.604790 27863 solver.cpp:237] Train net output #0: loss = 4.18958 (* 1 = 4.18958 loss) I0419 21:35:45.604799 27863 sgd_solver.cpp:105] Iteration 1200, lr = 0.00875363 I0419 21:35:50.536141 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:35:50.612198 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1214.caffemodel I0419 21:35:53.657121 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1214.solverstate I0419 21:35:56.014509 27863 solver.cpp:330] Iteration 1214, Testing net (#0) I0419 21:35:56.014542 27863 net.cpp:676] Ignoring source layer train-data I0419 21:36:00.671998 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:36:00.808873 27863 solver.cpp:397] Test net output #0: accuracy = 0.0863971 I0419 21:36:00.808918 27863 solver.cpp:397] Test net output #1: loss = 4.33836 (* 1 = 4.33836 loss) I0419 21:36:10.332160 27863 solver.cpp:218] Iteration 1240 (1.61764 iter/s, 24.7274s/40 iters), loss = 4.20173 I0419 21:36:10.332195 27863 solver.cpp:237] Train net output #0: loss = 4.20173 (* 1 = 4.20173 loss) I0419 21:36:10.332201 27863 sgd_solver.cpp:105] Iteration 1240, lr = 0.00871488 I0419 21:36:25.921816 27863 solver.cpp:218] Iteration 1280 (2.56581 iter/s, 15.5896s/40 iters), loss = 4.06203 I0419 21:36:25.921926 27863 solver.cpp:237] Train net output #0: loss = 4.06203 (* 1 = 4.06203 loss) I0419 21:36:25.921936 27863 sgd_solver.cpp:105] Iteration 1280, lr = 0.00867629 I0419 21:36:41.412765 27863 solver.cpp:218] Iteration 1320 (2.58217 iter/s, 15.4909s/40 iters), loss = 4.10715 I0419 21:36:41.412799 27863 solver.cpp:237] Train net output #0: loss = 4.10715 (* 1 = 4.10715 loss) I0419 21:36:41.412807 27863 sgd_solver.cpp:105] Iteration 1320, lr = 0.00863788 I0419 21:36:56.990005 27863 solver.cpp:218] Iteration 1360 (2.56785 iter/s, 15.5772s/40 iters), loss = 4.17495 I0419 21:36:56.990157 27863 solver.cpp:237] Train net output #0: loss = 4.17495 (* 1 = 4.17495 loss) I0419 21:36:56.990167 27863 sgd_solver.cpp:105] Iteration 1360, lr = 0.00859964 I0419 21:37:12.542030 27863 solver.cpp:218] Iteration 1400 (2.57204 iter/s, 15.5519s/40 iters), loss = 4.10599 I0419 21:37:12.542062 27863 solver.cpp:237] Train net output #0: loss = 4.10599 (* 1 = 4.10599 loss) I0419 21:37:12.542069 27863 sgd_solver.cpp:105] Iteration 1400, lr = 0.00856156 I0419 21:37:28.135536 27863 solver.cpp:218] Iteration 1440 (2.56517 iter/s, 15.5935s/40 iters), loss = 3.89232 I0419 21:37:28.135651 27863 solver.cpp:237] Train net output #0: loss = 3.89232 (* 1 = 3.89232 loss) I0419 21:37:28.135660 27863 sgd_solver.cpp:105] Iteration 1440, lr = 0.00852366 I0419 
21:37:43.769011 27863 solver.cpp:218] Iteration 1480 (2.55863 iter/s, 15.6334s/40 iters), loss = 3.67734 I0419 21:37:43.769044 27863 solver.cpp:237] Train net output #0: loss = 3.67734 (* 1 = 3.67734 loss) I0419 21:37:43.769052 27863 sgd_solver.cpp:105] Iteration 1480, lr = 0.00848592 I0419 21:37:59.327702 27863 solver.cpp:218] Iteration 1520 (2.57091 iter/s, 15.5587s/40 iters), loss = 3.78883 I0419 21:37:59.327813 27863 solver.cpp:237] Train net output #0: loss = 3.78883 (* 1 = 3.78883 loss) I0419 21:37:59.327823 27863 sgd_solver.cpp:105] Iteration 1520, lr = 0.00844835 I0419 21:38:14.983827 27863 solver.cpp:218] Iteration 1560 (2.55493 iter/s, 15.656s/40 iters), loss = 3.58728 I0419 21:38:14.983861 27863 solver.cpp:237] Train net output #0: loss = 3.58728 (* 1 = 3.58728 loss) I0419 21:38:14.983868 27863 sgd_solver.cpp:105] Iteration 1560, lr = 0.00841094 I0419 21:38:30.577006 27863 solver.cpp:218] Iteration 1600 (2.56523 iter/s, 15.5932s/40 iters), loss = 3.46444 I0419 21:38:30.577126 27863 solver.cpp:237] Train net output #0: loss = 3.46444 (* 1 = 3.46444 loss) I0419 21:38:30.577134 27863 sgd_solver.cpp:105] Iteration 1600, lr = 0.0083737 I0419 21:38:46.162581 27863 solver.cpp:218] Iteration 1640 (2.56649 iter/s, 15.5855s/40 iters), loss = 3.38872 I0419 21:38:46.162612 27863 solver.cpp:237] Train net output #0: loss = 3.38872 (* 1 = 3.38872 loss) I0419 21:38:46.162621 27863 sgd_solver.cpp:105] Iteration 1640, lr = 0.00833663 I0419 21:39:01.667083 27863 solver.cpp:218] Iteration 1680 (2.5799 iter/s, 15.5045s/40 iters), loss = 3.40505 I0419 21:39:01.667193 27863 solver.cpp:237] Train net output #0: loss = 3.40505 (* 1 = 3.40505 loss) I0419 21:39:01.667202 27863 sgd_solver.cpp:105] Iteration 1680, lr = 0.00829972 I0419 21:39:17.188987 27863 solver.cpp:218] Iteration 1720 (2.57702 iter/s, 15.5218s/40 iters), loss = 3.35344 I0419 21:39:17.189019 27863 solver.cpp:237] Train net output #0: loss = 3.35344 (* 1 = 3.35344 loss) I0419 21:39:17.189028 27863 sgd_solver.cpp:105] Iteration 1720, lr = 0.00826298 I0419 21:39:32.760318 27863 solver.cpp:218] Iteration 1760 (2.56883 iter/s, 15.5713s/40 iters), loss = 3.70299 I0419 21:39:32.760430 27863 solver.cpp:237] Train net output #0: loss = 3.70299 (* 1 = 3.70299 loss) I0419 21:39:32.760438 27863 sgd_solver.cpp:105] Iteration 1760, lr = 0.00822639 I0419 21:39:48.321646 27863 solver.cpp:218] Iteration 1800 (2.57049 iter/s, 15.5612s/40 iters), loss = 3.07988 I0419 21:39:48.321679 27863 solver.cpp:237] Train net output #0: loss = 3.07988 (* 1 = 3.07988 loss) I0419 21:39:48.321686 27863 sgd_solver.cpp:105] Iteration 1800, lr = 0.00818997 I0419 21:39:56.014340 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:39:56.107697 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_1821.caffemodel I0419 21:39:59.545858 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1821.solverstate I0419 21:40:04.209985 27863 solver.cpp:330] Iteration 1821, Testing net (#0) I0419 21:40:04.210172 27863 net.cpp:676] Ignoring source layer train-data I0419 21:40:08.671203 27863 blocking_queue.cpp:49] Waiting for data I0419 21:40:08.692436 27881 data_layer.cpp:73] Restarting data prefetching from start. 
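The learning rates printed by sgd_solver.cpp follow the "exp" policy from the solver parameters: lr(iter) = base_lr * gamma^iter with base_lr = 0.01 and gamma = 0.99988908. A short check against the logged values (the max_iter projection is computed here, not taken from the log):

    base_lr, gamma = 0.01, 0.99988908

    def lr(it):
        # Caffe "exp" policy: base_lr * gamma^iter
        return base_lr * gamma ** it

    print(lr(40))      # ~0.00995573, as logged at iteration 40
    print(lr(600))     # ~0.00935608, as logged at iteration 600
    print(lr(1800))    # ~0.00818997, as logged at iteration 1800
    print(lr(18210))   # ~0.00133, the rate the schedule would reach at max_iter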
I0419 21:40:08.846632 27863 solver.cpp:397] Test net output #0: accuracy = 0.211397 I0419 21:40:08.846678 27863 solver.cpp:397] Test net output #1: loss = 3.43291 (* 1 = 3.43291 loss) I0419 21:40:15.711102 27863 solver.cpp:218] Iteration 1840 (1.46042 iter/s, 27.3894s/40 iters), loss = 3.29942 I0419 21:40:15.711133 27863 solver.cpp:237] Train net output #0: loss = 3.29942 (* 1 = 3.29942 loss) I0419 21:40:15.711141 27863 sgd_solver.cpp:105] Iteration 1840, lr = 0.00815371 I0419 21:40:31.283064 27863 solver.cpp:218] Iteration 1880 (2.56872 iter/s, 15.5719s/40 iters), loss = 3.17069 I0419 21:40:31.283097 27863 solver.cpp:237] Train net output #0: loss = 3.17069 (* 1 = 3.17069 loss) I0419 21:40:31.283105 27863 sgd_solver.cpp:105] Iteration 1880, lr = 0.00811761 I0419 21:40:46.815307 27863 solver.cpp:218] Iteration 1920 (2.57529 iter/s, 15.5322s/40 iters), loss = 3.09791 I0419 21:40:46.815426 27863 solver.cpp:237] Train net output #0: loss = 3.09791 (* 1 = 3.09791 loss) I0419 21:40:46.815435 27863 sgd_solver.cpp:105] Iteration 1920, lr = 0.00808167 I0419 21:41:02.375690 27863 solver.cpp:218] Iteration 1960 (2.57065 iter/s, 15.5603s/40 iters), loss = 2.94879 I0419 21:41:02.375720 27863 solver.cpp:237] Train net output #0: loss = 2.94879 (* 1 = 2.94879 loss) I0419 21:41:02.375727 27863 sgd_solver.cpp:105] Iteration 1960, lr = 0.00804589 I0419 21:41:17.891494 27863 solver.cpp:218] Iteration 2000 (2.57802 iter/s, 15.5158s/40 iters), loss = 2.71923 I0419 21:41:17.891605 27863 solver.cpp:237] Train net output #0: loss = 2.71923 (* 1 = 2.71923 loss) I0419 21:41:17.891613 27863 sgd_solver.cpp:105] Iteration 2000, lr = 0.00801027 I0419 21:41:33.521070 27863 solver.cpp:218] Iteration 2040 (2.55927 iter/s, 15.6295s/40 iters), loss = 2.94186 I0419 21:41:33.521101 27863 solver.cpp:237] Train net output #0: loss = 2.94186 (* 1 = 2.94186 loss) I0419 21:41:33.521108 27863 sgd_solver.cpp:105] Iteration 2040, lr = 0.0079748 I0419 21:41:49.090742 27863 solver.cpp:218] Iteration 2080 (2.5691 iter/s, 15.5696s/40 iters), loss = 2.45199 I0419 21:41:49.090855 27863 solver.cpp:237] Train net output #0: loss = 2.45199 (* 1 = 2.45199 loss) I0419 21:41:49.090863 27863 sgd_solver.cpp:105] Iteration 2080, lr = 0.0079395 I0419 21:42:04.622568 27863 solver.cpp:218] Iteration 2120 (2.57537 iter/s, 15.5317s/40 iters), loss = 2.83462 I0419 21:42:04.622601 27863 solver.cpp:237] Train net output #0: loss = 2.83462 (* 1 = 2.83462 loss) I0419 21:42:04.622608 27863 sgd_solver.cpp:105] Iteration 2120, lr = 0.00790435 I0419 21:42:20.191718 27863 solver.cpp:218] Iteration 2160 (2.56919 iter/s, 15.5691s/40 iters), loss = 2.68964 I0419 21:42:20.191821 27863 solver.cpp:237] Train net output #0: loss = 2.68964 (* 1 = 2.68964 loss) I0419 21:42:20.191830 27863 sgd_solver.cpp:105] Iteration 2160, lr = 0.00786935 I0419 21:42:35.720006 27863 solver.cpp:218] Iteration 2200 (2.57596 iter/s, 15.5282s/40 iters), loss = 2.25442 I0419 21:42:35.720039 27863 solver.cpp:237] Train net output #0: loss = 2.25442 (* 1 = 2.25442 loss) I0419 21:42:35.720047 27863 sgd_solver.cpp:105] Iteration 2200, lr = 0.00783451 I0419 21:42:51.286700 27863 solver.cpp:218] Iteration 2240 (2.56959 iter/s, 15.5667s/40 iters), loss = 2.39565 I0419 21:42:51.286813 27863 solver.cpp:237] Train net output #0: loss = 2.39565 (* 1 = 2.39565 loss) I0419 21:42:51.286823 27863 sgd_solver.cpp:105] Iteration 2240, lr = 0.00779982 I0419 21:43:06.811344 27863 solver.cpp:218] Iteration 2280 (2.57657 iter/s, 15.5245s/40 iters), loss = 2.39515 I0419 21:43:06.811376 27863 solver.cpp:237] Train 
net output #0: loss = 2.39515 (* 1 = 2.39515 loss) I0419 21:43:06.811383 27863 sgd_solver.cpp:105] Iteration 2280, lr = 0.00776529 I0419 21:43:22.311506 27863 solver.cpp:218] Iteration 2320 (2.58062 iter/s, 15.5001s/40 iters), loss = 2.60474 I0419 21:43:22.311651 27863 solver.cpp:237] Train net output #0: loss = 2.60474 (* 1 = 2.60474 loss) I0419 21:43:22.311661 27863 sgd_solver.cpp:105] Iteration 2320, lr = 0.00773091 I0419 21:43:37.897507 27863 solver.cpp:218] Iteration 2360 (2.56643 iter/s, 15.5859s/40 iters), loss = 2.29584 I0419 21:43:37.897539 27863 solver.cpp:237] Train net output #0: loss = 2.29584 (* 1 = 2.29584 loss) I0419 21:43:37.897547 27863 sgd_solver.cpp:105] Iteration 2360, lr = 0.00769668 I0419 21:43:53.429265 27863 solver.cpp:218] Iteration 2400 (2.57537 iter/s, 15.5317s/40 iters), loss = 2.27549 I0419 21:43:53.429380 27863 solver.cpp:237] Train net output #0: loss = 2.27549 (* 1 = 2.27549 loss) I0419 21:43:53.429389 27863 sgd_solver.cpp:105] Iteration 2400, lr = 0.00766261 I0419 21:44:03.802145 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:44:03.914577 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_2428.caffemodel I0419 21:44:06.970949 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2428.solverstate I0419 21:44:09.309139 27863 solver.cpp:330] Iteration 2428, Testing net (#0) I0419 21:44:09.309168 27863 net.cpp:676] Ignoring source layer train-data I0419 21:44:13.536413 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:44:13.737174 27863 solver.cpp:397] Test net output #0: accuracy = 0.324142 I0419 21:44:13.737217 27863 solver.cpp:397] Test net output #1: loss = 2.77914 (* 1 = 2.77914 loss) I0419 21:44:17.689139 27863 solver.cpp:218] Iteration 2440 (1.64882 iter/s, 24.2598s/40 iters), loss = 2.34226 I0419 21:44:17.689170 27863 solver.cpp:237] Train net output #0: loss = 2.34226 (* 1 = 2.34226 loss) I0419 21:44:17.689178 27863 sgd_solver.cpp:105] Iteration 2440, lr = 0.00762868 I0419 21:44:33.288305 27863 solver.cpp:218] Iteration 2480 (2.56424 iter/s, 15.5991s/40 iters), loss = 2.55555 I0419 21:44:33.288416 27863 solver.cpp:237] Train net output #0: loss = 2.55555 (* 1 = 2.55555 loss) I0419 21:44:33.288425 27863 sgd_solver.cpp:105] Iteration 2480, lr = 0.00759491 I0419 21:44:48.851080 27863 solver.cpp:218] Iteration 2520 (2.57025 iter/s, 15.5627s/40 iters), loss = 1.98296 I0419 21:44:48.851111 27863 solver.cpp:237] Train net output #0: loss = 1.98296 (* 1 = 1.98296 loss) I0419 21:44:48.851119 27863 sgd_solver.cpp:105] Iteration 2520, lr = 0.00756128 I0419 21:45:04.409093 27863 solver.cpp:218] Iteration 2560 (2.57103 iter/s, 15.558s/40 iters), loss = 2.30881 I0419 21:45:04.409205 27863 solver.cpp:237] Train net output #0: loss = 2.30881 (* 1 = 2.30881 loss) I0419 21:45:04.409214 27863 sgd_solver.cpp:105] Iteration 2560, lr = 0.0075278 I0419 21:45:19.971653 27863 solver.cpp:218] Iteration 2600 (2.57029 iter/s, 15.5625s/40 iters), loss = 2.09993 I0419 21:45:19.971686 27863 solver.cpp:237] Train net output #0: loss = 2.09993 (* 1 = 2.09993 loss) I0419 21:45:19.971693 27863 sgd_solver.cpp:105] Iteration 2600, lr = 0.00749447 I0419 21:45:35.512467 27863 solver.cpp:218] Iteration 2640 (2.57387 iter/s, 15.5408s/40 iters), loss = 2.16655 I0419 21:45:35.512579 27863 solver.cpp:237] Train net output #0: loss = 2.16655 (* 1 = 2.16655 loss) I0419 21:45:35.512588 27863 sgd_solver.cpp:105] Iteration 2640, lr = 0.00746129 I0419 21:45:51.094375 27863 solver.cpp:218] Iteration 
2680 (2.5671 iter/s, 15.5818s/40 iters), loss = 1.88205 I0419 21:45:51.094408 27863 solver.cpp:237] Train net output #0: loss = 1.88205 (* 1 = 1.88205 loss) I0419 21:45:51.094415 27863 sgd_solver.cpp:105] Iteration 2680, lr = 0.00742826 I0419 21:46:06.650921 27863 solver.cpp:218] Iteration 2720 (2.57127 iter/s, 15.5565s/40 iters), loss = 1.92081 I0419 21:46:06.651016 27863 solver.cpp:237] Train net output #0: loss = 1.92081 (* 1 = 1.92081 loss) I0419 21:46:06.651024 27863 sgd_solver.cpp:105] Iteration 2720, lr = 0.00739537 I0419 21:46:22.424535 27863 solver.cpp:218] Iteration 2760 (2.53589 iter/s, 15.7735s/40 iters), loss = 1.92357 I0419 21:46:22.424566 27863 solver.cpp:237] Train net output #0: loss = 1.92357 (* 1 = 1.92357 loss) I0419 21:46:22.424574 27863 sgd_solver.cpp:105] Iteration 2760, lr = 0.00736263 I0419 21:46:28.631026 27863 blocking_queue.cpp:49] Waiting for data I0419 21:46:38.015525 27863 solver.cpp:218] Iteration 2800 (2.56559 iter/s, 15.591s/40 iters), loss = 1.77646 I0419 21:46:38.015635 27863 solver.cpp:237] Train net output #0: loss = 1.77646 (* 1 = 1.77646 loss) I0419 21:46:38.015645 27863 sgd_solver.cpp:105] Iteration 2800, lr = 0.00733003 I0419 21:46:53.561439 27863 solver.cpp:218] Iteration 2840 (2.57304 iter/s, 15.5458s/40 iters), loss = 2.10096 I0419 21:46:53.561471 27863 solver.cpp:237] Train net output #0: loss = 2.10096 (* 1 = 2.10096 loss) I0419 21:46:53.561478 27863 sgd_solver.cpp:105] Iteration 2840, lr = 0.00729758 I0419 21:47:09.375849 27863 solver.cpp:218] Iteration 2880 (2.52934 iter/s, 15.8144s/40 iters), loss = 1.69692 I0419 21:47:09.375968 27863 solver.cpp:237] Train net output #0: loss = 1.69692 (* 1 = 1.69692 loss) I0419 21:47:09.375977 27863 sgd_solver.cpp:105] Iteration 2880, lr = 0.00726527 I0419 21:47:25.205413 27863 solver.cpp:218] Iteration 2920 (2.52693 iter/s, 15.8295s/40 iters), loss = 1.53851 I0419 21:47:25.205446 27863 solver.cpp:237] Train net output #0: loss = 1.53851 (* 1 = 1.53851 loss) I0419 21:47:25.205453 27863 sgd_solver.cpp:105] Iteration 2920, lr = 0.0072331 I0419 21:47:40.762516 27863 solver.cpp:218] Iteration 2960 (2.57118 iter/s, 15.5571s/40 iters), loss = 1.62683 I0419 21:47:40.762569 27863 solver.cpp:237] Train net output #0: loss = 1.62683 (* 1 = 1.62683 loss) I0419 21:47:40.762578 27863 sgd_solver.cpp:105] Iteration 2960, lr = 0.00720108 I0419 21:47:56.824373 27863 solver.cpp:218] Iteration 3000 (2.49038 iter/s, 16.0618s/40 iters), loss = 1.78571 I0419 21:47:56.824406 27863 solver.cpp:237] Train net output #0: loss = 1.78571 (* 1 = 1.78571 loss) I0419 21:47:56.824414 27863 sgd_solver.cpp:105] Iteration 3000, lr = 0.0071692 I0419 21:48:10.061076 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:48:10.192723 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_3035.caffemodel I0419 21:48:13.266474 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_3035.solverstate I0419 21:48:15.610862 27863 solver.cpp:330] Iteration 3035, Testing net (#0) I0419 21:48:15.610889 27863 net.cpp:676] Ignoring source layer train-data I0419 21:48:19.774066 27881 data_layer.cpp:73] Restarting data prefetching from start. 
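The learning rates that sgd_solver.cpp prints every 40 iterations shrink by a constant factor per iteration, i.e. they follow a geometric (exponential) decay lr(i) = lr(0) * gamma^i. A minimal Python sketch of that check, using only two values transcribed from the entries above (the decay law itself is inferred from the numbers, not stated by these lines):

    # Infer the per-iteration decay factor from two logged learning rates,
    # then check it against a later entry from the same log.
    lr_1840 = 0.00815371                       # "Iteration 1840, lr = 0.00815371"
    lr_1880 = 0.00811761                       # "Iteration 1880, lr = 0.00811761"
    gamma = (lr_1880 / lr_1840) ** (1.0 / 40)  # per-iteration factor
    print(f"gamma ~= {gamma:.8f}")             # ~= 0.99988907

    # Extrapolate 560 iterations ahead and compare with the logged value.
    lr_2400 = lr_1840 * gamma ** (2400 - 1840)
    print(f"predicted lr@2400 ~= {lr_2400:.8f}")   # log reports 0.00766261

The prediction matches the logged lr at iteration 2400 to about five significant digits, so the schedule is a plain exponential in the iteration count rather than a step or polynomial decay.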
I0419 21:48:20.006260 27863 solver.cpp:397] Test net output #0: accuracy = 0.377451 I0419 21:48:20.006306 27863 solver.cpp:397] Test net output #1: loss = 2.6632 (* 1 = 2.6632 loss) I0419 21:48:21.343061 27863 solver.cpp:218] Iteration 3040 (1.63141 iter/s, 24.5187s/40 iters), loss = 1.74572 I0419 21:48:21.343096 27863 solver.cpp:237] Train net output #0: loss = 1.74572 (* 1 = 1.74572 loss) I0419 21:48:21.343103 27863 sgd_solver.cpp:105] Iteration 3040, lr = 0.00713746 I0419 21:48:36.902568 27863 solver.cpp:218] Iteration 3080 (2.57078 iter/s, 15.5595s/40 iters), loss = 1.58475 I0419 21:48:36.902599 27863 solver.cpp:237] Train net output #0: loss = 1.58475 (* 1 = 1.58475 loss) I0419 21:48:36.902606 27863 sgd_solver.cpp:105] Iteration 3080, lr = 0.00710586 I0419 21:48:52.478936 27863 solver.cpp:218] Iteration 3120 (2.568 iter/s, 15.5763s/40 iters), loss = 1.56652 I0419 21:48:52.479056 27863 solver.cpp:237] Train net output #0: loss = 1.56652 (* 1 = 1.56652 loss) I0419 21:48:52.479065 27863 sgd_solver.cpp:105] Iteration 3120, lr = 0.0070744 I0419 21:49:08.046418 27863 solver.cpp:218] Iteration 3160 (2.56948 iter/s, 15.5674s/40 iters), loss = 1.56616 I0419 21:49:08.046449 27863 solver.cpp:237] Train net output #0: loss = 1.56616 (* 1 = 1.56616 loss) I0419 21:49:08.046456 27863 sgd_solver.cpp:105] Iteration 3160, lr = 0.00704308 I0419 21:49:23.586099 27863 solver.cpp:218] Iteration 3200 (2.57406 iter/s, 15.5397s/40 iters), loss = 1.59564 I0419 21:49:23.586210 27863 solver.cpp:237] Train net output #0: loss = 1.59564 (* 1 = 1.59564 loss) I0419 21:49:23.586217 27863 sgd_solver.cpp:105] Iteration 3200, lr = 0.00701189 I0419 21:49:39.159576 27863 solver.cpp:218] Iteration 3240 (2.56849 iter/s, 15.5734s/40 iters), loss = 1.37178 I0419 21:49:39.159608 27863 solver.cpp:237] Train net output #0: loss = 1.37178 (* 1 = 1.37178 loss) I0419 21:49:39.159616 27863 sgd_solver.cpp:105] Iteration 3240, lr = 0.00698085 I0419 21:49:54.732522 27863 solver.cpp:218] Iteration 3280 (2.56856 iter/s, 15.5729s/40 iters), loss = 1.41451 I0419 21:49:54.732661 27863 solver.cpp:237] Train net output #0: loss = 1.41451 (* 1 = 1.41451 loss) I0419 21:49:54.732671 27863 sgd_solver.cpp:105] Iteration 3280, lr = 0.00694994 I0419 21:50:10.292101 27863 solver.cpp:218] Iteration 3320 (2.57079 iter/s, 15.5594s/40 iters), loss = 1.15097 I0419 21:50:10.292135 27863 solver.cpp:237] Train net output #0: loss = 1.15097 (* 1 = 1.15097 loss) I0419 21:50:10.292142 27863 sgd_solver.cpp:105] Iteration 3320, lr = 0.00691917 I0419 21:50:25.811324 27863 solver.cpp:218] Iteration 3360 (2.57745 iter/s, 15.5192s/40 iters), loss = 1.26883 I0419 21:50:25.811441 27863 solver.cpp:237] Train net output #0: loss = 1.26883 (* 1 = 1.26883 loss) I0419 21:50:25.811450 27863 sgd_solver.cpp:105] Iteration 3360, lr = 0.00688854 I0419 21:50:41.386459 27863 solver.cpp:218] Iteration 3400 (2.56821 iter/s, 15.575s/40 iters), loss = 1.09192 I0419 21:50:41.386492 27863 solver.cpp:237] Train net output #0: loss = 1.09192 (* 1 = 1.09192 loss) I0419 21:50:41.386498 27863 sgd_solver.cpp:105] Iteration 3400, lr = 0.00685804 I0419 21:50:56.946899 27863 solver.cpp:218] Iteration 3440 (2.57062 iter/s, 15.5604s/40 iters), loss = 1.59878 I0419 21:50:56.947026 27863 solver.cpp:237] Train net output #0: loss = 1.59878 (* 1 = 1.59878 loss) I0419 21:50:56.947034 27863 sgd_solver.cpp:105] Iteration 3440, lr = 0.00682768 I0419 21:51:12.491482 27863 solver.cpp:218] Iteration 3480 (2.57326 iter/s, 15.5445s/40 iters), loss = 1.29139 I0419 21:51:12.491513 27863 solver.cpp:237] Train net 
output #0: loss = 1.29139 (* 1 = 1.29139 loss) I0419 21:51:12.491520 27863 sgd_solver.cpp:105] Iteration 3480, lr = 0.00679745 I0419 21:51:27.984218 27863 solver.cpp:218] Iteration 3520 (2.58186 iter/s, 15.4927s/40 iters), loss = 1.4153 I0419 21:51:27.984338 27863 solver.cpp:237] Train net output #0: loss = 1.4153 (* 1 = 1.4153 loss) I0419 21:51:27.984347 27863 sgd_solver.cpp:105] Iteration 3520, lr = 0.00676735 I0419 21:51:43.519636 27863 solver.cpp:218] Iteration 3560 (2.57478 iter/s, 15.5353s/40 iters), loss = 1.14856 I0419 21:51:43.519670 27863 solver.cpp:237] Train net output #0: loss = 1.14856 (* 1 = 1.14856 loss) I0419 21:51:43.519676 27863 sgd_solver.cpp:105] Iteration 3560, lr = 0.00673739 I0419 21:51:59.125255 27863 solver.cpp:218] Iteration 3600 (2.56318 iter/s, 15.6056s/40 iters), loss = 1.241 I0419 21:51:59.125377 27863 solver.cpp:237] Train net output #0: loss = 1.241 (* 1 = 1.241 loss) I0419 21:51:59.125386 27863 sgd_solver.cpp:105] Iteration 3600, lr = 0.00670756 I0419 21:52:14.688757 27863 solver.cpp:218] Iteration 3640 (2.57013 iter/s, 15.5634s/40 iters), loss = 0.888013 I0419 21:52:14.688791 27863 solver.cpp:237] Train net output #0: loss = 0.888013 (* 1 = 0.888013 loss) I0419 21:52:14.688799 27863 sgd_solver.cpp:105] Iteration 3640, lr = 0.00667787 I0419 21:52:14.871111 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:52:15.020851 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_3642.caffemodel I0419 21:52:18.978490 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_3642.solverstate I0419 21:52:22.331689 27863 solver.cpp:330] Iteration 3642, Testing net (#0) I0419 21:52:22.331714 27863 net.cpp:676] Ignoring source layer train-data I0419 21:52:26.828598 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 21:52:27.141785 27863 solver.cpp:397] Test net output #0: accuracy = 0.41299 I0419 21:52:27.141831 27863 solver.cpp:397] Test net output #1: loss = 2.66218 (* 1 = 2.66218 loss) I0419 21:52:41.317652 27863 solver.cpp:218] Iteration 3680 (1.50213 iter/s, 26.6289s/40 iters), loss = 0.89359 I0419 21:52:41.317806 27863 solver.cpp:237] Train net output #0: loss = 0.89359 (* 1 = 0.89359 loss) I0419 21:52:41.317816 27863 sgd_solver.cpp:105] Iteration 3680, lr = 0.0066483 I0419 21:52:43.593564 27863 blocking_queue.cpp:49] Waiting for data I0419 21:52:56.892151 27863 solver.cpp:218] Iteration 3720 (2.56832 iter/s, 15.5744s/40 iters), loss = 1.22896 I0419 21:52:56.892181 27863 solver.cpp:237] Train net output #0: loss = 1.22896 (* 1 = 1.22896 loss) I0419 21:52:56.892189 27863 sgd_solver.cpp:105] Iteration 3720, lr = 0.00661887 I0419 21:53:12.457813 27863 solver.cpp:218] Iteration 3760 (2.56976 iter/s, 15.5656s/40 iters), loss = 1.01277 I0419 21:53:12.457923 27863 solver.cpp:237] Train net output #0: loss = 1.01277 (* 1 = 1.01277 loss) I0419 21:53:12.457932 27863 sgd_solver.cpp:105] Iteration 3760, lr = 0.00658956 I0419 21:53:28.016108 27863 solver.cpp:218] Iteration 3800 (2.57099 iter/s, 15.5582s/40 iters), loss = 1.15221 I0419 21:53:28.016141 27863 solver.cpp:237] Train net output #0: loss = 1.15221 (* 1 = 1.15221 loss) I0419 21:53:28.016149 27863 sgd_solver.cpp:105] Iteration 3800, lr = 0.00656039 I0419 21:53:44.060520 27863 solver.cpp:218] Iteration 3840 (2.49309 iter/s, 16.0444s/40 iters), loss = 0.707244 I0419 21:53:44.060699 27863 solver.cpp:237] Train net output #0: loss = 0.707244 (* 1 = 0.707244 loss) I0419 21:53:44.060719 27863 sgd_solver.cpp:105] Iteration 3840, lr = 0.00653134 I0419 21:54:00.169194 27863 solver.cpp:218] Iteration 3880 (2.48316 iter/s, 16.1085s/40 iters), loss = 0.764524 I0419 21:54:00.169227 27863 solver.cpp:237] Train net output #0: loss = 0.764524 (* 1 = 0.764524 loss) I0419 21:54:00.169235 27863 sgd_solver.cpp:105] Iteration 3880, lr = 0.00650242 I0419 21:54:16.267930 27863 solver.cpp:218] Iteration 3920 (2.48467 iter/s, 16.0987s/40 iters), loss = 0.829397 I0419 21:54:16.268043 27863 solver.cpp:237] Train net output #0: loss = 0.829397 (* 1 = 0.829397 loss) I0419 21:54:16.268051 27863 sgd_solver.cpp:105] Iteration 3920, lr = 0.00647364 I0419 21:54:32.313203 27863 solver.cpp:218] Iteration 3960 (2.49296 iter/s, 16.0452s/40 iters), loss = 0.940687 I0419 21:54:32.313236 27863 solver.cpp:237] Train net output #0: loss = 0.940687 (* 1 = 0.940687 loss) I0419 21:54:32.313243 27863 sgd_solver.cpp:105] Iteration 3960, lr = 0.00644497 I0419 21:54:48.107347 27863 solver.cpp:218] Iteration 4000 (2.53259 iter/s, 15.7941s/40 iters), loss = 0.861863 I0419 21:54:48.107468 27863 solver.cpp:237] Train net output #0: loss = 0.861863 (* 1 = 0.861863 loss) I0419 21:54:48.107477 27863 sgd_solver.cpp:105] Iteration 4000, lr = 0.00641644 I0419 21:55:04.151381 27863 solver.cpp:218] Iteration 4040 (2.49316 iter/s, 16.0439s/40 iters), loss = 0.557392 I0419 21:55:04.151412 27863 solver.cpp:237] Train net output #0: loss = 0.557392 (* 1 = 0.557392 loss) I0419 21:55:04.151419 27863 sgd_solver.cpp:105] Iteration 4040, lr = 0.00638803 I0419 21:55:20.184135 27863 solver.cpp:218] Iteration 4080 (2.4949 iter/s, 16.0327s/40 iters), loss = 0.652555 I0419 21:55:20.184253 27863 solver.cpp:237] Train net output #0: loss = 0.652555 (* 1 = 0.652555 loss) I0419 21:55:20.184262 27863 sgd_solver.cpp:105] Iteration 4080, lr = 0.00635975 I0419 21:55:36.204761 27863 solver.cpp:218] Iteration 4120 (2.4968 
iter/s, 16.0205s/40 iters), loss = 1.02891 I0419 21:55:36.204798 27863 solver.cpp:237] Train net output #0: loss = 1.02891 (* 1 = 1.02891 loss) I0419 21:55:36.204807 27863 sgd_solver.cpp:105] Iteration 4120, lr = 0.00633159 I0419 21:55:52.181984 27863 solver.cpp:218] Iteration 4160 (2.50357 iter/s, 15.9772s/40 iters), loss = 0.939055 I0419 21:55:52.182101 27863 solver.cpp:237] Train net output #0: loss = 0.939055 (* 1 = 0.939055 loss) I0419 21:55:52.182109 27863 sgd_solver.cpp:105] Iteration 4160, lr = 0.00630356 I0419 21:56:08.253425 27863 solver.cpp:218] Iteration 4200 (2.4889 iter/s, 16.0713s/40 iters), loss = 0.782052 I0419 21:56:08.253458 27863 solver.cpp:237] Train net output #0: loss = 0.782052 (* 1 = 0.782052 loss) I0419 21:56:08.253465 27863 sgd_solver.cpp:105] Iteration 4200, lr = 0.00627565 I0419 21:56:24.278450 27863 solver.cpp:218] Iteration 4240 (2.4961 iter/s, 16.025s/40 iters), loss = 0.946295 I0419 21:56:24.278592 27863 solver.cpp:237] Train net output #0: loss = 0.946295 (* 1 = 0.946295 loss) I0419 21:56:24.278601 27863 sgd_solver.cpp:105] Iteration 4240, lr = 0.00624787 I0419 21:56:27.285362 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:56:27.460373 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_4249.caffemodel I0419 21:56:30.529305 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_4249.solverstate I0419 21:56:32.871845 27863 solver.cpp:330] Iteration 4249, Testing net (#0) I0419 21:56:32.871870 27863 net.cpp:676] Ignoring source layer train-data I0419 21:56:37.303491 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 21:56:37.627207 27863 solver.cpp:397] Test net output #0: accuracy = 0.439338 I0419 21:56:37.627250 27863 solver.cpp:397] Test net output #1: loss = 2.60161 (* 1 = 2.60161 loss) I0419 21:56:49.214677 27863 solver.cpp:218] Iteration 4280 (1.6041 iter/s, 24.9361s/40 iters), loss = 0.789526 I0419 21:56:49.214711 27863 solver.cpp:237] Train net output #0: loss = 0.789526 (* 1 = 0.789526 loss) I0419 21:56:49.214718 27863 sgd_solver.cpp:105] Iteration 4280, lr = 0.00622021 I0419 21:57:05.328603 27863 solver.cpp:218] Iteration 4320 (2.48233 iter/s, 16.1139s/40 iters), loss = 0.872897 I0419 21:57:05.328685 27863 solver.cpp:237] Train net output #0: loss = 0.872897 (* 1 = 0.872897 loss) I0419 21:57:05.328693 27863 sgd_solver.cpp:105] Iteration 4320, lr = 0.00619267 I0419 21:57:21.180755 27863 solver.cpp:218] Iteration 4360 (2.52333 iter/s, 15.8521s/40 iters), loss = 0.890195 I0419 21:57:21.180788 27863 solver.cpp:237] Train net output #0: loss = 0.890195 (* 1 = 0.890195 loss) I0419 21:57:21.180796 27863 sgd_solver.cpp:105] Iteration 4360, lr = 0.00616525 I0419 21:57:36.750039 27863 solver.cpp:218] Iteration 4400 (2.56917 iter/s, 15.5693s/40 iters), loss = 0.657194 I0419 21:57:36.750094 27863 solver.cpp:237] Train net output #0: loss = 0.657194 (* 1 = 0.657194 loss) I0419 21:57:36.750102 27863 sgd_solver.cpp:105] Iteration 4400, lr = 0.00613795 I0419 21:57:52.298890 27863 solver.cpp:218] Iteration 4440 (2.57254 iter/s, 15.5488s/40 iters), loss = 0.595525 I0419 21:57:52.298925 27863 solver.cpp:237] Train net output #0: loss = 0.595525 (* 1 = 0.595525 loss) I0419 21:57:52.298933 27863 sgd_solver.cpp:105] Iteration 4440, lr = 0.00611078 I0419 21:58:07.883263 27863 solver.cpp:218] Iteration 4480 (2.56668 iter/s, 15.5843s/40 iters), loss = 0.539223 I0419 21:58:07.883375 27863 solver.cpp:237] Train net output #0: loss = 0.539223 (* 1 = 0.539223 loss) I0419 
21:58:07.883384 27863 sgd_solver.cpp:105] Iteration 4480, lr = 0.00608372 I0419 21:58:23.447306 27863 solver.cpp:218] Iteration 4520 (2.57004 iter/s, 15.5639s/40 iters), loss = 0.738723 I0419 21:58:23.447340 27863 solver.cpp:237] Train net output #0: loss = 0.738723 (* 1 = 0.738723 loss) I0419 21:58:23.447346 27863 sgd_solver.cpp:105] Iteration 4520, lr = 0.00605679 I0419 21:58:38.973385 27863 solver.cpp:218] Iteration 4560 (2.57631 iter/s, 15.5261s/40 iters), loss = 0.408688 I0419 21:58:38.973496 27863 solver.cpp:237] Train net output #0: loss = 0.408688 (* 1 = 0.408688 loss) I0419 21:58:38.973505 27863 sgd_solver.cpp:105] Iteration 4560, lr = 0.00602997 I0419 21:58:54.506052 27863 solver.cpp:218] Iteration 4600 (2.57523 iter/s, 15.5326s/40 iters), loss = 0.414432 I0419 21:58:54.506085 27863 solver.cpp:237] Train net output #0: loss = 0.414432 (* 1 = 0.414432 loss) I0419 21:58:54.506093 27863 sgd_solver.cpp:105] Iteration 4600, lr = 0.00600328 I0419 21:59:10.072006 27863 solver.cpp:218] Iteration 4640 (2.56972 iter/s, 15.5659s/40 iters), loss = 0.679599 I0419 21:59:10.072116 27863 solver.cpp:237] Train net output #0: loss = 0.679599 (* 1 = 0.679599 loss) I0419 21:59:10.072125 27863 sgd_solver.cpp:105] Iteration 4640, lr = 0.0059767 I0419 21:59:10.404331 27863 blocking_queue.cpp:49] Waiting for data I0419 21:59:25.609280 27863 solver.cpp:218] Iteration 4680 (2.57447 iter/s, 15.5372s/40 iters), loss = 0.552662 I0419 21:59:25.609311 27863 solver.cpp:237] Train net output #0: loss = 0.552662 (* 1 = 0.552662 loss) I0419 21:59:25.609319 27863 sgd_solver.cpp:105] Iteration 4680, lr = 0.00595024 I0419 21:59:41.129945 27863 solver.cpp:218] Iteration 4720 (2.57721 iter/s, 15.5206s/40 iters), loss = 0.73903 I0419 21:59:41.130105 27863 solver.cpp:237] Train net output #0: loss = 0.73903 (* 1 = 0.73903 loss) I0419 21:59:41.130115 27863 sgd_solver.cpp:105] Iteration 4720, lr = 0.00592389 I0419 21:59:56.622779 27863 solver.cpp:218] Iteration 4760 (2.58186 iter/s, 15.4927s/40 iters), loss = 0.621681 I0419 21:59:56.622813 27863 solver.cpp:237] Train net output #0: loss = 0.621681 (* 1 = 0.621681 loss) I0419 21:59:56.622820 27863 sgd_solver.cpp:105] Iteration 4760, lr = 0.00589766 I0419 22:00:12.188314 27863 solver.cpp:218] Iteration 4800 (2.56978 iter/s, 15.5655s/40 iters), loss = 0.711084 I0419 22:00:12.188426 27863 solver.cpp:237] Train net output #0: loss = 0.711084 (* 1 = 0.711084 loss) I0419 22:00:12.188434 27863 sgd_solver.cpp:105] Iteration 4800, lr = 0.00587155 I0419 22:00:27.739034 27863 solver.cpp:218] Iteration 4840 (2.57224 iter/s, 15.5506s/40 iters), loss = 0.679621 I0419 22:00:27.739068 27863 solver.cpp:237] Train net output #0: loss = 0.679621 (* 1 = 0.679621 loss) I0419 22:00:27.739076 27863 sgd_solver.cpp:105] Iteration 4840, lr = 0.00584556 I0419 22:00:33.342725 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:00:33.527931 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_4856.caffemodel I0419 22:00:36.597499 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_4856.solverstate I0419 22:00:38.939735 27863 solver.cpp:330] Iteration 4856, Testing net (#0) I0419 22:00:38.939759 27863 net.cpp:676] Ignoring source layer train-data I0419 22:00:42.909759 27881 data_layer.cpp:73] Restarting data prefetching from start. 
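Every progress record above follows one of a few fixed formats: training loss from solver.cpp:218, test accuracy and loss from solver.cpp:397, and the learning rate from sgd_solver.cpp:105. The training curves can therefore be pulled out with a handful of regular expressions; a sketch, assuming the log has been saved to a file called caffe.log (both the file name and the use of Python are assumptions, not part of the job):

    import re

    text = open("caffe.log").read()

    # Training loss records: "solver.cpp:218] Iteration 2440 (... iter/s, ...), loss = 2.34226"
    train = [(int(i), float(l)) for i, l in re.findall(
        r"solver\.cpp:218\] Iteration (\d+) \(.*?\), loss = ([\d.]+)", text)]

    # Test-net outputs: "Test net output #0: accuracy = ..." and "#1: loss = ..."
    val_acc  = [float(a) for a in re.findall(r"Test net output #0: accuracy = ([\d.]+)", text)]
    val_loss = [float(l) for l in re.findall(r"Test net output #1: loss = ([\d.]+)", text)]

    print(train[:3])                  # first few (iteration, loss) pairs found in the file
    print(val_acc[:3], val_loss[:3])

Matching over the whole text instead of line by line keeps the sketch insensitive to how the records happen to be wrapped.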
I0419 22:00:43.252133 27863 solver.cpp:397] Test net output #0: accuracy = 0.455882 I0419 22:00:43.252178 27863 solver.cpp:397] Test net output #1: loss = 2.60976 (* 1 = 2.60976 loss) I0419 22:00:51.991053 27863 solver.cpp:218] Iteration 4880 (1.64935 iter/s, 24.252s/40 iters), loss = 0.578756 I0419 22:00:51.991086 27863 solver.cpp:237] Train net output #0: loss = 0.578756 (* 1 = 0.578756 loss) I0419 22:00:51.991093 27863 sgd_solver.cpp:105] Iteration 4880, lr = 0.00581968 I0419 22:01:07.527573 27863 solver.cpp:218] Iteration 4920 (2.57458 iter/s, 15.5365s/40 iters), loss = 0.694088 I0419 22:01:07.527606 27863 solver.cpp:237] Train net output #0: loss = 0.694088 (* 1 = 0.694088 loss) I0419 22:01:07.527614 27863 sgd_solver.cpp:105] Iteration 4920, lr = 0.00579391 I0419 22:01:23.052803 27863 solver.cpp:218] Iteration 4960 (2.57646 iter/s, 15.5252s/40 iters), loss = 0.527921 I0419 22:01:23.052922 27863 solver.cpp:237] Train net output #0: loss = 0.527921 (* 1 = 0.527921 loss) I0419 22:01:23.052932 27863 sgd_solver.cpp:105] Iteration 4960, lr = 0.00576826 I0419 22:01:38.614711 27863 solver.cpp:218] Iteration 5000 (2.5704 iter/s, 15.5618s/40 iters), loss = 0.395868 I0419 22:01:38.614744 27863 solver.cpp:237] Train net output #0: loss = 0.395868 (* 1 = 0.395868 loss) I0419 22:01:38.614753 27863 sgd_solver.cpp:105] Iteration 5000, lr = 0.00574272 I0419 22:01:54.145620 27863 solver.cpp:218] Iteration 5040 (2.57551 iter/s, 15.5309s/40 iters), loss = 0.647082 I0419 22:01:54.145730 27863 solver.cpp:237] Train net output #0: loss = 0.647082 (* 1 = 0.647082 loss) I0419 22:01:54.145740 27863 sgd_solver.cpp:105] Iteration 5040, lr = 0.0057173 I0419 22:02:09.711891 27863 solver.cpp:218] Iteration 5080 (2.56967 iter/s, 15.5662s/40 iters), loss = 0.492759 I0419 22:02:09.711925 27863 solver.cpp:237] Train net output #0: loss = 0.492759 (* 1 = 0.492759 loss) I0419 22:02:09.711931 27863 sgd_solver.cpp:105] Iteration 5080, lr = 0.00569198 I0419 22:02:25.267254 27863 solver.cpp:218] Iteration 5120 (2.57146 iter/s, 15.5553s/40 iters), loss = 0.586604 I0419 22:02:25.267405 27863 solver.cpp:237] Train net output #0: loss = 0.586604 (* 1 = 0.586604 loss) I0419 22:02:25.267414 27863 sgd_solver.cpp:105] Iteration 5120, lr = 0.00566678 I0419 22:02:40.804708 27863 solver.cpp:218] Iteration 5160 (2.57445 iter/s, 15.5373s/40 iters), loss = 0.486276 I0419 22:02:40.804740 27863 solver.cpp:237] Train net output #0: loss = 0.486276 (* 1 = 0.486276 loss) I0419 22:02:40.804749 27863 sgd_solver.cpp:105] Iteration 5160, lr = 0.00564169 I0419 22:02:56.355717 27863 solver.cpp:218] Iteration 5200 (2.57218 iter/s, 15.551s/40 iters), loss = 0.390128 I0419 22:02:56.355827 27863 solver.cpp:237] Train net output #0: loss = 0.390128 (* 1 = 0.390128 loss) I0419 22:02:56.355836 27863 sgd_solver.cpp:105] Iteration 5200, lr = 0.00561672 I0419 22:03:11.939083 27863 solver.cpp:218] Iteration 5240 (2.56686 iter/s, 15.5833s/40 iters), loss = 0.437796 I0419 22:03:11.939116 27863 solver.cpp:237] Train net output #0: loss = 0.437796 (* 1 = 0.437796 loss) I0419 22:03:11.939123 27863 sgd_solver.cpp:105] Iteration 5240, lr = 0.00559185 I0419 22:03:27.469487 27863 solver.cpp:218] Iteration 5280 (2.5756 iter/s, 15.5304s/40 iters), loss = 0.422743 I0419 22:03:27.469600 27863 solver.cpp:237] Train net output #0: loss = 0.422743 (* 1 = 0.422743 loss) I0419 22:03:27.469609 27863 sgd_solver.cpp:105] Iteration 5280, lr = 0.00556709 I0419 22:03:42.988703 27863 solver.cpp:218] Iteration 5320 (2.57747 iter/s, 15.5191s/40 iters), loss = 0.425805 I0419 
22:03:42.988737 27863 solver.cpp:237] Train net output #0: loss = 0.425805 (* 1 = 0.425805 loss) I0419 22:03:42.988745 27863 sgd_solver.cpp:105] Iteration 5320, lr = 0.00554244 I0419 22:03:58.503252 27863 solver.cpp:218] Iteration 5360 (2.57823 iter/s, 15.5145s/40 iters), loss = 0.385916 I0419 22:03:58.503358 27863 solver.cpp:237] Train net output #0: loss = 0.385916 (* 1 = 0.385916 loss) I0419 22:03:58.503366 27863 sgd_solver.cpp:105] Iteration 5360, lr = 0.0055179 I0419 22:04:14.097380 27863 solver.cpp:218] Iteration 5400 (2.56508 iter/s, 15.594s/40 iters), loss = 0.307777 I0419 22:04:14.097414 27863 solver.cpp:237] Train net output #0: loss = 0.307777 (* 1 = 0.307777 loss) I0419 22:04:14.097421 27863 sgd_solver.cpp:105] Iteration 5400, lr = 0.00549347 I0419 22:04:29.653829 27863 solver.cpp:218] Iteration 5440 (2.57129 iter/s, 15.5564s/40 iters), loss = 0.415371 I0419 22:04:29.653939 27863 solver.cpp:237] Train net output #0: loss = 0.415371 (* 1 = 0.415371 loss) I0419 22:04:29.653947 27863 sgd_solver.cpp:105] Iteration 5440, lr = 0.00546915 I0419 22:04:37.987426 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:04:38.191260 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_5463.caffemodel I0419 22:04:41.530737 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_5463.solverstate I0419 22:04:44.619215 27863 solver.cpp:330] Iteration 5463, Testing net (#0) I0419 22:04:44.619237 27863 net.cpp:676] Ignoring source layer train-data I0419 22:04:48.668694 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:04:49.055965 27863 solver.cpp:397] Test net output #0: accuracy = 0.460172 I0419 22:04:49.056001 27863 solver.cpp:397] Test net output #1: loss = 2.7086 (* 1 = 2.7086 loss) I0419 22:04:55.076622 27863 solver.cpp:218] Iteration 5480 (1.5734 iter/s, 25.4227s/40 iters), loss = 0.527137 I0419 22:04:55.076654 27863 solver.cpp:237] Train net output #0: loss = 0.527137 (* 1 = 0.527137 loss) I0419 22:04:55.076663 27863 sgd_solver.cpp:105] Iteration 5480, lr = 0.00544494 I0419 22:05:10.664893 27863 solver.cpp:218] Iteration 5520 (2.56604 iter/s, 15.5882s/40 iters), loss = 0.386329 I0419 22:05:10.665007 27863 solver.cpp:237] Train net output #0: loss = 0.386329 (* 1 = 0.386329 loss) I0419 22:05:10.665016 27863 sgd_solver.cpp:105] Iteration 5520, lr = 0.00542083 I0419 22:05:22.675182 27863 blocking_queue.cpp:49] Waiting for data I0419 22:05:26.239082 27863 solver.cpp:218] Iteration 5560 (2.56837 iter/s, 15.5741s/40 iters), loss = 0.297022 I0419 22:05:26.239117 27863 solver.cpp:237] Train net output #0: loss = 0.297022 (* 1 = 0.297022 loss) I0419 22:05:26.239125 27863 sgd_solver.cpp:105] Iteration 5560, lr = 0.00539683 I0419 22:05:41.795750 27863 solver.cpp:218] Iteration 5600 (2.57125 iter/s, 15.5566s/40 iters), loss = 0.430983 I0419 22:05:41.795900 27863 solver.cpp:237] Train net output #0: loss = 0.430983 (* 1 = 0.430983 loss) I0419 22:05:41.795914 27863 sgd_solver.cpp:105] Iteration 5600, lr = 0.00537294 I0419 22:05:57.356762 27863 solver.cpp:218] Iteration 5640 (2.57055 iter/s, 15.5609s/40 iters), loss = 0.343441 I0419 22:05:57.356796 27863 solver.cpp:237] Train net output #0: loss = 0.343441 (* 1 = 0.343441 loss) I0419 22:05:57.356803 27863 sgd_solver.cpp:105] Iteration 5640, lr = 0.00534915 I0419 22:06:13.037158 27863 solver.cpp:218] Iteration 5680 (2.55096 iter/s, 15.6804s/40 iters), loss = 0.455152 I0419 22:06:13.037303 27863 solver.cpp:237] Train net output #0: loss = 0.455152 (* 1 = 0.455152 
loss) I0419 22:06:13.037312 27863 sgd_solver.cpp:105] Iteration 5680, lr = 0.00532547 I0419 22:06:28.612375 27863 solver.cpp:218] Iteration 5720 (2.5682 iter/s, 15.5751s/40 iters), loss = 0.472045 I0419 22:06:28.612407 27863 solver.cpp:237] Train net output #0: loss = 0.472045 (* 1 = 0.472045 loss) I0419 22:06:28.612416 27863 sgd_solver.cpp:105] Iteration 5720, lr = 0.00530189 I0419 22:06:44.100929 27863 solver.cpp:218] Iteration 5760 (2.58256 iter/s, 15.4885s/40 iters), loss = 0.326134 I0419 22:06:44.101043 27863 solver.cpp:237] Train net output #0: loss = 0.326134 (* 1 = 0.326134 loss) I0419 22:06:44.101052 27863 sgd_solver.cpp:105] Iteration 5760, lr = 0.00527842 I0419 22:07:00.244200 27863 solver.cpp:218] Iteration 5800 (2.47783 iter/s, 16.1432s/40 iters), loss = 0.236468 I0419 22:07:00.244248 27863 solver.cpp:237] Train net output #0: loss = 0.236468 (* 1 = 0.236468 loss) I0419 22:07:00.244261 27863 sgd_solver.cpp:105] Iteration 5800, lr = 0.00525505 I0419 22:07:16.561473 27863 solver.cpp:218] Iteration 5840 (2.4514 iter/s, 16.3172s/40 iters), loss = 0.360293 I0419 22:07:16.561604 27863 solver.cpp:237] Train net output #0: loss = 0.360293 (* 1 = 0.360293 loss) I0419 22:07:16.561612 27863 sgd_solver.cpp:105] Iteration 5840, lr = 0.00523178 I0419 22:07:33.126489 27863 solver.cpp:218] Iteration 5880 (2.41475 iter/s, 16.5649s/40 iters), loss = 0.45782 I0419 22:07:33.126536 27863 solver.cpp:237] Train net output #0: loss = 0.45782 (* 1 = 0.45782 loss) I0419 22:07:33.126545 27863 sgd_solver.cpp:105] Iteration 5880, lr = 0.00520862 I0419 22:07:49.662434 27863 solver.cpp:218] Iteration 5920 (2.41898 iter/s, 16.5359s/40 iters), loss = 0.395273 I0419 22:07:49.662556 27863 solver.cpp:237] Train net output #0: loss = 0.395273 (* 1 = 0.395273 loss) I0419 22:07:49.662567 27863 sgd_solver.cpp:105] Iteration 5920, lr = 0.00518556 I0419 22:08:06.062175 27863 solver.cpp:218] Iteration 5960 (2.43908 iter/s, 16.3996s/40 iters), loss = 0.47459 I0419 22:08:06.062224 27863 solver.cpp:237] Train net output #0: loss = 0.47459 (* 1 = 0.47459 loss) I0419 22:08:06.062233 27863 sgd_solver.cpp:105] Iteration 5960, lr = 0.0051626 I0419 22:08:22.449678 27863 solver.cpp:218] Iteration 6000 (2.44089 iter/s, 16.3875s/40 iters), loss = 0.407193 I0419 22:08:22.449792 27863 solver.cpp:237] Train net output #0: loss = 0.407193 (* 1 = 0.407193 loss) I0419 22:08:22.449800 27863 sgd_solver.cpp:105] Iteration 6000, lr = 0.00513974 I0419 22:08:38.741186 27863 solver.cpp:218] Iteration 6040 (2.45528 iter/s, 16.2914s/40 iters), loss = 0.198213 I0419 22:08:38.741231 27863 solver.cpp:237] Train net output #0: loss = 0.198213 (* 1 = 0.198213 loss) I0419 22:08:38.741240 27863 sgd_solver.cpp:105] Iteration 6040, lr = 0.00511699 I0419 22:08:50.392266 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:08:50.625924 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_6070.caffemodel I0419 22:08:53.949851 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_6070.solverstate I0419 22:08:56.572008 27863 solver.cpp:330] Iteration 6070, Testing net (#0) I0419 22:08:56.572026 27863 net.cpp:676] Ignoring source layer train-data I0419 22:09:00.814945 27881 data_layer.cpp:73] Restarting data prefetching from start. 
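Between checkpoints the solver sustains roughly 2.4-2.6 iterations per second (the rate shown in parentheses on each solver.cpp:218 line), so end-to-end image throughput is simply that rate times the training batch size. A back-of-the-envelope sketch; the batch size is an assumption about this job's data layer, not something these lines report:

    iters_per_sec = 2.45     # typical steady-state rate, e.g. "2.45485 iter/s"
    batch_size = 128         # assumption: training batch size of this job
    print(f"~{iters_per_sec * batch_size:.0f} images/s")   # ~314 images/s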
I0419 22:09:01.264235 27863 solver.cpp:397] Test net output #0: accuracy = 0.466912 I0419 22:09:01.264282 27863 solver.cpp:397] Test net output #1: loss = 2.84426 (* 1 = 2.84426 loss) I0419 22:09:04.723060 27863 solver.cpp:218] Iteration 6080 (1.53954 iter/s, 25.9819s/40 iters), loss = 0.376134 I0419 22:09:04.723101 27863 solver.cpp:237] Train net output #0: loss = 0.376134 (* 1 = 0.376134 loss) I0419 22:09:04.723109 27863 sgd_solver.cpp:105] Iteration 6080, lr = 0.00509433 I0419 22:09:21.038641 27863 solver.cpp:218] Iteration 6120 (2.45165 iter/s, 16.3155s/40 iters), loss = 0.216864 I0419 22:09:21.038682 27863 solver.cpp:237] Train net output #0: loss = 0.216864 (* 1 = 0.216864 loss) I0419 22:09:21.038691 27863 sgd_solver.cpp:105] Iteration 6120, lr = 0.00507178 I0419 22:09:37.435082 27863 solver.cpp:218] Iteration 6160 (2.43956 iter/s, 16.3964s/40 iters), loss = 0.381805 I0419 22:09:37.435200 27863 solver.cpp:237] Train net output #0: loss = 0.381805 (* 1 = 0.381805 loss) I0419 22:09:37.435209 27863 sgd_solver.cpp:105] Iteration 6160, lr = 0.00504932 I0419 22:09:53.805796 27863 solver.cpp:218] Iteration 6200 (2.4434 iter/s, 16.3706s/40 iters), loss = 0.479016 I0419 22:09:53.805837 27863 solver.cpp:237] Train net output #0: loss = 0.479016 (* 1 = 0.479016 loss) I0419 22:09:53.805846 27863 sgd_solver.cpp:105] Iteration 6200, lr = 0.00502697 I0419 22:10:10.100077 27863 solver.cpp:218] Iteration 6240 (2.45485 iter/s, 16.2942s/40 iters), loss = 0.211993 I0419 22:10:10.100208 27863 solver.cpp:237] Train net output #0: loss = 0.211993 (* 1 = 0.211993 loss) I0419 22:10:10.100216 27863 sgd_solver.cpp:105] Iteration 6240, lr = 0.00500471 I0419 22:10:26.394647 27863 solver.cpp:218] Iteration 6280 (2.45482 iter/s, 16.2944s/40 iters), loss = 0.394681 I0419 22:10:26.394695 27863 solver.cpp:237] Train net output #0: loss = 0.394681 (* 1 = 0.394681 loss) I0419 22:10:26.394702 27863 sgd_solver.cpp:105] Iteration 6280, lr = 0.00498255 I0419 22:10:42.718843 27863 solver.cpp:218] Iteration 6320 (2.45036 iter/s, 16.3242s/40 iters), loss = 0.3133 I0419 22:10:42.718912 27863 solver.cpp:237] Train net output #0: loss = 0.3133 (* 1 = 0.3133 loss) I0419 22:10:42.718921 27863 sgd_solver.cpp:105] Iteration 6320, lr = 0.00496049 I0419 22:10:58.971139 27863 solver.cpp:218] Iteration 6360 (2.4612 iter/s, 16.2522s/40 iters), loss = 0.412002 I0419 22:10:58.971180 27863 solver.cpp:237] Train net output #0: loss = 0.412002 (* 1 = 0.412002 loss) I0419 22:10:58.971187 27863 sgd_solver.cpp:105] Iteration 6360, lr = 0.00493853 I0419 22:11:15.320992 27863 solver.cpp:218] Iteration 6400 (2.44651 iter/s, 16.3498s/40 iters), loss = 0.231316 I0419 22:11:15.321111 27863 solver.cpp:237] Train net output #0: loss = 0.231316 (* 1 = 0.231316 loss) I0419 22:11:15.321120 27863 sgd_solver.cpp:105] Iteration 6400, lr = 0.00491667 I0419 22:11:31.815673 27863 solver.cpp:218] Iteration 6440 (2.42504 iter/s, 16.4946s/40 iters), loss = 0.107494 I0419 22:11:31.815714 27863 solver.cpp:237] Train net output #0: loss = 0.107494 (* 1 = 0.107494 loss) I0419 22:11:31.815722 27863 sgd_solver.cpp:105] Iteration 6440, lr = 0.0048949 I0419 22:11:48.313845 27863 solver.cpp:218] Iteration 6480 (2.42452 iter/s, 16.4981s/40 iters), loss = 0.131094 I0419 22:11:48.313962 27863 solver.cpp:237] Train net output #0: loss = 0.131094 (* 1 = 0.131094 loss) I0419 22:11:48.313971 27863 sgd_solver.cpp:105] Iteration 6480, lr = 0.00487323 I0419 22:11:58.931430 27863 blocking_queue.cpp:49] Waiting for data I0419 22:12:04.694325 27863 solver.cpp:218] Iteration 6520 
(2.44195 iter/s, 16.3804s/40 iters), loss = 0.249012 I0419 22:12:04.694396 27863 solver.cpp:237] Train net output #0: loss = 0.249012 (* 1 = 0.249012 loss) I0419 22:12:04.694409 27863 sgd_solver.cpp:105] Iteration 6520, lr = 0.00485165 I0419 22:12:21.135951 27863 solver.cpp:218] Iteration 6560 (2.43286 iter/s, 16.4416s/40 iters), loss = 0.167777 I0419 22:12:21.136132 27863 solver.cpp:237] Train net output #0: loss = 0.167777 (* 1 = 0.167777 loss) I0419 22:12:21.136142 27863 sgd_solver.cpp:105] Iteration 6560, lr = 0.00483017 I0419 22:12:37.472918 27863 solver.cpp:218] Iteration 6600 (2.44846 iter/s, 16.3368s/40 iters), loss = 0.235242 I0419 22:12:37.472959 27863 solver.cpp:237] Train net output #0: loss = 0.235242 (* 1 = 0.235242 loss) I0419 22:12:37.472967 27863 sgd_solver.cpp:105] Iteration 6600, lr = 0.00480878 I0419 22:12:53.874295 27863 solver.cpp:218] Iteration 6640 (2.43882 iter/s, 16.4013s/40 iters), loss = 0.216518 I0419 22:12:53.874439 27863 solver.cpp:237] Train net output #0: loss = 0.216518 (* 1 = 0.216518 loss) I0419 22:12:53.874450 27863 sgd_solver.cpp:105] Iteration 6640, lr = 0.00478749 I0419 22:13:08.343482 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:13:08.597123 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_6677.caffemodel I0419 22:13:15.646440 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_6677.solverstate I0419 22:13:22.016634 27863 solver.cpp:330] Iteration 6677, Testing net (#0) I0419 22:13:22.016659 27863 net.cpp:676] Ignoring source layer train-data I0419 22:13:26.300890 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:13:26.797749 27863 solver.cpp:397] Test net output #0: accuracy = 0.490196 I0419 22:13:26.797796 27863 solver.cpp:397] Test net output #1: loss = 2.59859 (* 1 = 2.59859 loss) I0419 22:13:27.367651 27863 solver.cpp:218] Iteration 6680 (1.19427 iter/s, 33.4933s/40 iters), loss = 0.35227 I0419 22:13:27.367700 27863 solver.cpp:237] Train net output #0: loss = 0.35227 (* 1 = 0.35227 loss) I0419 22:13:27.367708 27863 sgd_solver.cpp:105] Iteration 6680, lr = 0.0047663 I0419 22:13:43.908990 27863 solver.cpp:218] Iteration 6720 (2.41819 iter/s, 16.5413s/40 iters), loss = 0.16566 I0419 22:13:43.909029 27863 solver.cpp:237] Train net output #0: loss = 0.16566 (* 1 = 0.16566 loss) I0419 22:13:43.909039 27863 sgd_solver.cpp:105] Iteration 6720, lr = 0.0047452 I0419 22:14:00.356058 27863 solver.cpp:218] Iteration 6760 (2.43205 iter/s, 16.447s/40 iters), loss = 0.299409 I0419 22:14:00.356178 27863 solver.cpp:237] Train net output #0: loss = 0.299409 (* 1 = 0.299409 loss) I0419 22:14:00.356186 27863 sgd_solver.cpp:105] Iteration 6760, lr = 0.00472419 I0419 22:14:16.751870 27863 solver.cpp:218] Iteration 6800 (2.43966 iter/s, 16.3957s/40 iters), loss = 0.27295 I0419 22:14:16.751924 27863 solver.cpp:237] Train net output #0: loss = 0.27295 (* 1 = 0.27295 loss) I0419 22:14:16.751935 27863 sgd_solver.cpp:105] Iteration 6800, lr = 0.00470327 I0419 22:14:33.375882 27863 solver.cpp:218] Iteration 6840 (2.40616 iter/s, 16.624s/40 iters), loss = 0.298068 I0419 22:14:33.376003 27863 solver.cpp:237] Train net output #0: loss = 0.298068 (* 1 = 0.298068 loss) I0419 22:14:33.376013 27863 sgd_solver.cpp:105] Iteration 6840, lr = 0.00468245 I0419 22:14:49.678500 27863 solver.cpp:218] Iteration 6880 (2.45361 iter/s, 16.3025s/40 iters), loss = 0.215106 I0419 22:14:49.678537 27863 solver.cpp:237] Train net output #0: loss = 0.215106 (* 1 = 0.215106 loss) I0419 
22:14:49.678544 27863 sgd_solver.cpp:105] Iteration 6880, lr = 0.00466172 I0419 22:15:06.149888 27863 solver.cpp:218] Iteration 6920 (2.42846 iter/s, 16.4714s/40 iters), loss = 0.129787 I0419 22:15:06.149977 27863 solver.cpp:237] Train net output #0: loss = 0.129787 (* 1 = 0.129787 loss) I0419 22:15:06.149986 27863 sgd_solver.cpp:105] Iteration 6920, lr = 0.00464108 I0419 22:15:22.555965 27863 solver.cpp:218] Iteration 6960 (2.43813 iter/s, 16.406s/40 iters), loss = 0.201552 I0419 22:15:22.556001 27863 solver.cpp:237] Train net output #0: loss = 0.201552 (* 1 = 0.201552 loss) I0419 22:15:22.556010 27863 sgd_solver.cpp:105] Iteration 6960, lr = 0.00462053 I0419 22:15:39.446632 27863 solver.cpp:218] Iteration 7000 (2.36818 iter/s, 16.8906s/40 iters), loss = 0.160063 I0419 22:15:39.446781 27863 solver.cpp:237] Train net output #0: loss = 0.160063 (* 1 = 0.160063 loss) I0419 22:15:39.446790 27863 sgd_solver.cpp:105] Iteration 7000, lr = 0.00460007 I0419 22:15:55.892277 27863 solver.cpp:218] Iteration 7040 (2.43228 iter/s, 16.4455s/40 iters), loss = 0.286133 I0419 22:15:55.892319 27863 solver.cpp:237] Train net output #0: loss = 0.286133 (* 1 = 0.286133 loss) I0419 22:15:55.892328 27863 sgd_solver.cpp:105] Iteration 7040, lr = 0.00457971 I0419 22:16:12.235953 27863 solver.cpp:218] Iteration 7080 (2.44743 iter/s, 16.3436s/40 iters), loss = 0.237785 I0419 22:16:12.236089 27863 solver.cpp:237] Train net output #0: loss = 0.237785 (* 1 = 0.237785 loss) I0419 22:16:12.236099 27863 sgd_solver.cpp:105] Iteration 7080, lr = 0.00455943 I0419 22:16:28.766155 27863 solver.cpp:218] Iteration 7120 (2.41983 iter/s, 16.5301s/40 iters), loss = 0.0468862 I0419 22:16:28.766216 27863 solver.cpp:237] Train net output #0: loss = 0.0468861 (* 1 = 0.0468861 loss) I0419 22:16:28.766228 27863 sgd_solver.cpp:105] Iteration 7120, lr = 0.00453924 I0419 22:16:45.104079 27863 solver.cpp:218] Iteration 7160 (2.4483 iter/s, 16.3379s/40 iters), loss = 0.254463 I0419 22:16:45.104179 27863 solver.cpp:237] Train net output #0: loss = 0.254463 (* 1 = 0.254463 loss) I0419 22:16:45.104189 27863 sgd_solver.cpp:105] Iteration 7160, lr = 0.00451915 I0419 22:17:01.332571 27863 solver.cpp:218] Iteration 7200 (2.46481 iter/s, 16.2284s/40 iters), loss = 0.246974 I0419 22:17:01.332610 27863 solver.cpp:237] Train net output #0: loss = 0.246974 (* 1 = 0.246974 loss) I0419 22:17:01.332618 27863 sgd_solver.cpp:105] Iteration 7200, lr = 0.00449914 I0419 22:17:17.653076 27863 solver.cpp:218] Iteration 7240 (2.45091 iter/s, 16.3205s/40 iters), loss = 0.145093 I0419 22:17:17.653193 27863 solver.cpp:237] Train net output #0: loss = 0.145093 (* 1 = 0.145093 loss) I0419 22:17:17.653203 27863 sgd_solver.cpp:105] Iteration 7240, lr = 0.00447922 I0419 22:17:34.059950 27863 solver.cpp:218] Iteration 7280 (2.43802 iter/s, 16.4068s/40 iters), loss = 0.200698 I0419 22:17:34.059994 27863 solver.cpp:237] Train net output #0: loss = 0.200698 (* 1 = 0.200698 loss) I0419 22:17:34.060003 27863 sgd_solver.cpp:105] Iteration 7280, lr = 0.00445939 I0419 22:17:34.966173 27871 data_layer.cpp:73] Restarting data prefetching from start. 
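The first 40-iteration window after each checkpoint reports a much lower rate (roughly 1.5-1.6 iter/s instead of ~2.5) because that window also contains writing the .caffemodel and .solverstate snapshots plus a full test pass. Subtracting a steady-state window time from a post-checkpoint one gives a rough per-checkpoint cost; the checkpoints recur every 607 iterations (2428, 3035, 3642, ...):

    # Rough checkpoint overhead from two "s/40 iters" window times in the log.
    steady_window    = 15.55   # typical window between tests (e.g. 15.5603 s)
    post_test_window = 24.26   # window spanning the iteration-2428 snapshot + test
    overhead = post_test_window - steady_window
    print(f"snapshot + test cost ~= {overhead:.1f} s per 607-iteration cycle")   # ~8.7 s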
I0419 22:17:35.244143 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_7284.caffemodel I0419 22:17:38.363059 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_7284.solverstate I0419 22:17:40.738634 27863 solver.cpp:330] Iteration 7284, Testing net (#0) I0419 22:17:40.738660 27863 net.cpp:676] Ignoring source layer train-data I0419 22:17:44.967483 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:17:45.531402 27863 solver.cpp:397] Test net output #0: accuracy = 0.496936 I0419 22:17:45.531438 27863 solver.cpp:397] Test net output #1: loss = 2.80069 (* 1 = 2.80069 loss) I0419 22:17:59.575760 27863 solver.cpp:218] Iteration 7320 (1.56766 iter/s, 25.5158s/40 iters), loss = 0.119391 I0419 22:17:59.576068 27863 solver.cpp:237] Train net output #0: loss = 0.119391 (* 1 = 0.119391 loss) I0419 22:17:59.576077 27863 sgd_solver.cpp:105] Iteration 7320, lr = 0.00443965 I0419 22:18:15.989019 27863 solver.cpp:218] Iteration 7360 (2.4371 iter/s, 16.413s/40 iters), loss = 0.131609 I0419 22:18:15.989063 27863 solver.cpp:237] Train net output #0: loss = 0.131609 (* 1 = 0.131609 loss) I0419 22:18:15.989069 27863 sgd_solver.cpp:105] Iteration 7360, lr = 0.00441999 I0419 22:18:32.377688 27863 solver.cpp:218] Iteration 7400 (2.44072 iter/s, 16.3886s/40 iters), loss = 0.0926085 I0419 22:18:32.377815 27863 solver.cpp:237] Train net output #0: loss = 0.0926085 (* 1 = 0.0926085 loss) I0419 22:18:32.377825 27863 sgd_solver.cpp:105] Iteration 7400, lr = 0.00440042 I0419 22:18:38.901708 27863 blocking_queue.cpp:49] Waiting for data I0419 22:18:48.784144 27863 solver.cpp:218] Iteration 7440 (2.43808 iter/s, 16.4063s/40 iters), loss = 0.160279 I0419 22:18:48.784188 27863 solver.cpp:237] Train net output #0: loss = 0.160279 (* 1 = 0.160279 loss) I0419 22:18:48.784195 27863 sgd_solver.cpp:105] Iteration 7440, lr = 0.00438094 I0419 22:19:05.350497 27863 solver.cpp:218] Iteration 7480 (2.41454 iter/s, 16.5663s/40 iters), loss = 0.282893 I0419 22:19:05.350637 27863 solver.cpp:237] Train net output #0: loss = 0.282893 (* 1 = 0.282893 loss) I0419 22:19:05.350646 27863 sgd_solver.cpp:105] Iteration 7480, lr = 0.00436154 I0419 22:19:21.726878 27863 solver.cpp:218] Iteration 7520 (2.44256 iter/s, 16.3763s/40 iters), loss = 0.168379 I0419 22:19:21.726923 27863 solver.cpp:237] Train net output #0: loss = 0.168379 (* 1 = 0.168379 loss) I0419 22:19:21.726931 27863 sgd_solver.cpp:105] Iteration 7520, lr = 0.00434223 I0419 22:19:38.121551 27863 solver.cpp:218] Iteration 7560 (2.43982 iter/s, 16.3946s/40 iters), loss = 0.111262 I0419 22:19:38.121668 27863 solver.cpp:237] Train net output #0: loss = 0.111262 (* 1 = 0.111262 loss) I0419 22:19:38.121677 27863 sgd_solver.cpp:105] Iteration 7560, lr = 0.00432301 I0419 22:19:54.551910 27863 solver.cpp:218] Iteration 7600 (2.43453 iter/s, 16.4303s/40 iters), loss = 0.19038 I0419 22:19:54.551975 27863 solver.cpp:237] Train net output #0: loss = 0.19038 (* 1 = 0.19038 loss) I0419 22:19:54.551990 27863 sgd_solver.cpp:105] Iteration 7600, lr = 0.00430387 I0419 22:20:10.952252 27863 solver.cpp:218] Iteration 7640 (2.43898 iter/s, 16.4003s/40 iters), loss = 0.202652 I0419 22:20:10.952394 27863 solver.cpp:237] Train net output #0: loss = 0.202652 (* 1 = 0.202652 loss) I0419 22:20:10.952404 27863 sgd_solver.cpp:105] Iteration 7640, lr = 0.00428481 I0419 22:20:27.490272 27863 solver.cpp:218] Iteration 7680 (2.41869 iter/s, 16.5379s/40 iters), loss = 0.280055 I0419 22:20:27.490312 27863 solver.cpp:237] Train net output #0: 
loss = 0.280055 (* 1 = 0.280055 loss) I0419 22:20:27.490320 27863 sgd_solver.cpp:105] Iteration 7680, lr = 0.00426584 I0419 22:20:43.932341 27863 solver.cpp:218] Iteration 7720 (2.43279 iter/s, 16.442s/40 iters), loss = 0.240859 I0419 22:20:43.932524 27863 solver.cpp:237] Train net output #0: loss = 0.240859 (* 1 = 0.240859 loss) I0419 22:20:43.932533 27863 sgd_solver.cpp:105] Iteration 7720, lr = 0.00424696 I0419 22:21:00.330610 27863 solver.cpp:218] Iteration 7760 (2.43931 iter/s, 16.3981s/40 iters), loss = 0.105186 I0419 22:21:00.330662 27863 solver.cpp:237] Train net output #0: loss = 0.105186 (* 1 = 0.105186 loss) I0419 22:21:00.330673 27863 sgd_solver.cpp:105] Iteration 7760, lr = 0.00422815 I0419 22:21:16.638736 27863 solver.cpp:218] Iteration 7800 (2.45277 iter/s, 16.3081s/40 iters), loss = 0.204231 I0419 22:21:16.638840 27863 solver.cpp:237] Train net output #0: loss = 0.204231 (* 1 = 0.204231 loss) I0419 22:21:16.638849 27863 sgd_solver.cpp:105] Iteration 7800, lr = 0.00420943 I0419 22:21:33.058951 27863 solver.cpp:218] Iteration 7840 (2.43603 iter/s, 16.4201s/40 iters), loss = 0.121645 I0419 22:21:33.058990 27863 solver.cpp:237] Train net output #0: loss = 0.121645 (* 1 = 0.121645 loss) I0419 22:21:33.059000 27863 sgd_solver.cpp:105] Iteration 7840, lr = 0.0041908 I0419 22:21:49.415546 27863 solver.cpp:218] Iteration 7880 (2.4455 iter/s, 16.3566s/40 iters), loss = 0.137653 I0419 22:21:49.415664 27863 solver.cpp:237] Train net output #0: loss = 0.137653 (* 1 = 0.137653 loss) I0419 22:21:49.415673 27863 sgd_solver.cpp:105] Iteration 7880, lr = 0.00417224 I0419 22:21:53.164531 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:21:53.457376 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_7891.caffemodel I0419 22:21:58.039054 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_7891.solverstate I0419 22:22:01.151788 27863 solver.cpp:330] Iteration 7891, Testing net (#0) I0419 22:22:01.151808 27863 net.cpp:676] Ignoring source layer train-data I0419 22:22:05.470309 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 22:22:06.059468 27863 solver.cpp:397] Test net output #0: accuracy = 0.511642 I0419 22:22:06.059509 27863 solver.cpp:397] Test net output #1: loss = 2.671 (* 1 = 2.671 loss) I0419 22:22:17.424597 27863 solver.cpp:218] Iteration 7920 (1.42811 iter/s, 28.009s/40 iters), loss = 0.110437 I0419 22:22:17.424638 27863 solver.cpp:237] Train net output #0: loss = 0.110437 (* 1 = 0.110437 loss) I0419 22:22:17.424644 27863 sgd_solver.cpp:105] Iteration 7920, lr = 0.00415377 I0419 22:22:33.960322 27863 solver.cpp:218] Iteration 7960 (2.41901 iter/s, 16.5357s/40 iters), loss = 0.148673 I0419 22:22:33.960469 27863 solver.cpp:237] Train net output #0: loss = 0.148673 (* 1 = 0.148673 loss) I0419 22:22:33.960479 27863 sgd_solver.cpp:105] Iteration 7960, lr = 0.00413538 I0419 22:22:50.253330 27863 solver.cpp:218] Iteration 8000 (2.45506 iter/s, 16.2929s/40 iters), loss = 0.113701 I0419 22:22:50.253373 27863 solver.cpp:237] Train net output #0: loss = 0.113701 (* 1 = 0.113701 loss) I0419 22:22:50.253382 27863 sgd_solver.cpp:105] Iteration 8000, lr = 0.00411707 I0419 22:23:06.735427 27863 solver.cpp:218] Iteration 8040 (2.42688 iter/s, 16.4821s/40 iters), loss = 0.098643 I0419 22:23:06.735533 27863 solver.cpp:237] Train net output #0: loss = 0.098643 (* 1 = 0.098643 loss) I0419 22:23:06.735543 27863 sgd_solver.cpp:105] Iteration 8040, lr = 0.00409884 I0419 22:23:23.430994 27863 solver.cpp:218] Iteration 8080 (2.39586 iter/s, 16.6955s/40 iters), loss = 0.137187 I0419 22:23:23.431036 27863 solver.cpp:237] Train net output #0: loss = 0.137187 (* 1 = 0.137187 loss) I0419 22:23:23.431046 27863 sgd_solver.cpp:105] Iteration 8080, lr = 0.0040807 I0419 22:23:39.844547 27863 solver.cpp:218] Iteration 8120 (2.43701 iter/s, 16.4135s/40 iters), loss = 0.0972873 I0419 22:23:39.844660 27863 solver.cpp:237] Train net output #0: loss = 0.0972873 (* 1 = 0.0972873 loss) I0419 22:23:39.844668 27863 sgd_solver.cpp:105] Iteration 8120, lr = 0.00406263 I0419 22:23:56.314699 27863 solver.cpp:218] Iteration 8160 (2.42865 iter/s, 16.4701s/40 iters), loss = 0.218486 I0419 22:23:56.314743 27863 solver.cpp:237] Train net output #0: loss = 0.218486 (* 1 = 0.218486 loss) I0419 22:23:56.314751 27863 sgd_solver.cpp:105] Iteration 8160, lr = 0.00404464 I0419 22:24:12.701892 27863 solver.cpp:218] Iteration 8200 (2.44093 iter/s, 16.3872s/40 iters), loss = 0.193288 I0419 22:24:12.702069 27863 solver.cpp:237] Train net output #0: loss = 0.193289 (* 1 = 0.193289 loss) I0419 22:24:12.702080 27863 sgd_solver.cpp:105] Iteration 8200, lr = 0.00402673 I0419 22:24:29.018483 27863 solver.cpp:218] Iteration 8240 (2.45152 iter/s, 16.3164s/40 iters), loss = 0.148644 I0419 22:24:29.018524 27863 solver.cpp:237] Train net output #0: loss = 0.148644 (* 1 = 0.148644 loss) I0419 22:24:29.018532 27863 sgd_solver.cpp:105] Iteration 8240, lr = 0.00400891 I0419 22:24:45.591166 27863 solver.cpp:218] Iteration 8280 (2.41361 iter/s, 16.5727s/40 iters), loss = 0.145303 I0419 22:24:45.591257 27863 solver.cpp:237] Train net output #0: loss = 0.145303 (* 1 = 0.145303 loss) I0419 22:24:45.591267 27863 sgd_solver.cpp:105] Iteration 8280, lr = 0.00399116 I0419 22:25:01.885073 27863 solver.cpp:218] Iteration 8320 (2.45492 iter/s, 16.2938s/40 iters), loss = 0.116296 I0419 22:25:01.885119 27863 solver.cpp:237] Train net output #0: loss = 0.116296 (* 1 = 0.116296 loss) I0419 22:25:01.885128 27863 sgd_solver.cpp:105] Iteration 8320, lr = 0.00397349 I0419 22:25:18.243134 27863 solver.cpp:218] Iteration 8360 (2.44528 iter/s, 16.358s/40 iters), loss = 0.162545 I0419 
22:25:18.243247 27863 solver.cpp:237] Train net output #0: loss = 0.162545 (* 1 = 0.162545 loss) I0419 22:25:18.243257 27863 sgd_solver.cpp:105] Iteration 8360, lr = 0.0039559 I0419 22:25:22.645536 27863 blocking_queue.cpp:49] Waiting for data I0419 22:25:34.575318 27863 solver.cpp:218] Iteration 8400 (2.44917 iter/s, 16.3321s/40 iters), loss = 0.0956047 I0419 22:25:34.575361 27863 solver.cpp:237] Train net output #0: loss = 0.0956048 (* 1 = 0.0956048 loss) I0419 22:25:34.575369 27863 sgd_solver.cpp:105] Iteration 8400, lr = 0.00393838 I0419 22:25:50.942952 27863 solver.cpp:218] Iteration 8440 (2.44385 iter/s, 16.3676s/40 iters), loss = 0.137866 I0419 22:25:50.943123 27863 solver.cpp:237] Train net output #0: loss = 0.137866 (* 1 = 0.137866 loss) I0419 22:25:50.943132 27863 sgd_solver.cpp:105] Iteration 8440, lr = 0.00392094 I0419 22:26:07.346608 27863 solver.cpp:218] Iteration 8480 (2.4385 iter/s, 16.4035s/40 iters), loss = 0.0765274 I0419 22:26:07.346650 27863 solver.cpp:237] Train net output #0: loss = 0.0765274 (* 1 = 0.0765274 loss) I0419 22:26:07.346659 27863 sgd_solver.cpp:105] Iteration 8480, lr = 0.00390358 I0419 22:26:13.995358 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:26:14.311553 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_8498.caffemodel I0419 22:26:18.897769 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_8498.solverstate I0419 22:26:21.256640 27863 solver.cpp:330] Iteration 8498, Testing net (#0) I0419 22:26:21.256731 27863 net.cpp:676] Ignoring source layer train-data I0419 22:26:25.393366 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:26:26.043236 27863 solver.cpp:397] Test net output #0: accuracy = 0.519608 I0419 22:26:26.043270 27863 solver.cpp:397] Test net output #1: loss = 2.69624 (* 1 = 2.69624 loss) I0419 22:26:34.415915 27863 solver.cpp:218] Iteration 8520 (1.47769 iter/s, 27.0693s/40 iters), loss = 0.190784 I0419 22:26:34.415961 27863 solver.cpp:237] Train net output #0: loss = 0.190784 (* 1 = 0.190784 loss) I0419 22:26:34.415969 27863 sgd_solver.cpp:105] Iteration 8520, lr = 0.0038863 I0419 22:26:50.765208 27863 solver.cpp:218] Iteration 8560 (2.44659 iter/s, 16.3493s/40 iters), loss = 0.196768 I0419 22:26:50.765262 27863 solver.cpp:237] Train net output #0: loss = 0.196768 (* 1 = 0.196768 loss) I0419 22:26:50.765272 27863 sgd_solver.cpp:105] Iteration 8560, lr = 0.0038691 I0419 22:27:07.141988 27863 solver.cpp:218] Iteration 8600 (2.44249 iter/s, 16.3767s/40 iters), loss = 0.1261 I0419 22:27:07.142105 27863 solver.cpp:237] Train net output #0: loss = 0.1261 (* 1 = 0.1261 loss) I0419 22:27:07.142114 27863 sgd_solver.cpp:105] Iteration 8600, lr = 0.00385197 I0419 22:27:23.482849 27863 solver.cpp:218] Iteration 8640 (2.44787 iter/s, 16.3407s/40 iters), loss = 0.104429 I0419 22:27:23.482916 27863 solver.cpp:237] Train net output #0: loss = 0.104429 (* 1 = 0.104429 loss) I0419 22:27:23.482930 27863 sgd_solver.cpp:105] Iteration 8640, lr = 0.00383491 I0419 22:27:39.897285 27863 solver.cpp:218] Iteration 8680 (2.43689 iter/s, 16.4144s/40 iters), loss = 0.118175 I0419 22:27:39.897409 27863 solver.cpp:237] Train net output #0: loss = 0.118175 (* 1 = 0.118175 loss) I0419 22:27:39.897420 27863 sgd_solver.cpp:105] Iteration 8680, lr = 0.00381793 I0419 22:27:56.301415 27863 solver.cpp:218] Iteration 8720 (2.43843 iter/s, 16.404s/40 iters), loss = 0.21746 I0419 22:27:56.301456 27863 solver.cpp:237] Train net output #0: loss = 0.21746 (* 1 = 0.21746 loss) 
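Transcribing the test outputs printed at each checkpoint so far shows the usual overfitting signature: training loss has fallen from about 3.3 to around 0.1, validation accuracy is still inching upward, but validation loss has hovered between roughly 2.6 and 2.85 since iteration 3035, with its lowest value (2.59859) at iteration 6677. A small Python sketch that tabulates those values (copied from the log) and picks the checkpoint with the lowest validation loss:

    # iteration: (val accuracy, val loss), transcribed from the test outputs above.
    test_points = {
        2428: (0.324142, 2.77914),
        3035: (0.377451, 2.66320),
        3642: (0.412990, 2.66218),
        4249: (0.439338, 2.60161),
        4856: (0.455882, 2.60976),
        5463: (0.460172, 2.70860),
        6070: (0.466912, 2.84426),
        6677: (0.490196, 2.59859),
        7284: (0.496936, 2.80069),
        7891: (0.511642, 2.67100),
        8498: (0.519608, 2.69624),
    }
    best_iter = min(test_points, key=lambda it: test_points[it][1])
    print(best_iter, test_points[best_iter])   # 6677 (0.490196, 2.59859)

Since each test pass is immediately preceded by a snapshot (e.g. snapshot_iter_6677.caffemodel), the checkpoint with the best validation loss can be recovered directly from those files.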
I0419 22:27:56.301465 27863 sgd_solver.cpp:105] Iteration 8720, lr = 0.00380103 I0419 22:28:12.644729 27863 solver.cpp:218] Iteration 8760 (2.44749 iter/s, 16.3433s/40 iters), loss = 0.0272381 I0419 22:28:12.644848 27863 solver.cpp:237] Train net output #0: loss = 0.0272382 (* 1 = 0.0272382 loss) I0419 22:28:12.644857 27863 sgd_solver.cpp:105] Iteration 8760, lr = 0.0037842 I0419 22:28:29.236337 27863 solver.cpp:218] Iteration 8800 (2.41087 iter/s, 16.5915s/40 iters), loss = 0.0687923 I0419 22:28:29.236378 27863 solver.cpp:237] Train net output #0: loss = 0.0687923 (* 1 = 0.0687923 loss) I0419 22:28:29.236387 27863 sgd_solver.cpp:105] Iteration 8800, lr = 0.00376745 I0419 22:28:45.671334 27863 solver.cpp:218] Iteration 8840 (2.43384 iter/s, 16.435s/40 iters), loss = 0.0484341 I0419 22:28:45.671449 27863 solver.cpp:237] Train net output #0: loss = 0.0484342 (* 1 = 0.0484342 loss) I0419 22:28:45.671459 27863 sgd_solver.cpp:105] Iteration 8840, lr = 0.00375077 I0419 22:29:02.034963 27863 solver.cpp:218] Iteration 8880 (2.44446 iter/s, 16.3635s/40 iters), loss = 0.159894 I0419 22:29:02.035014 27863 solver.cpp:237] Train net output #0: loss = 0.159894 (* 1 = 0.159894 loss) I0419 22:29:02.035023 27863 sgd_solver.cpp:105] Iteration 8880, lr = 0.00373416 I0419 22:29:18.441592 27863 solver.cpp:218] Iteration 8920 (2.43804 iter/s, 16.4066s/40 iters), loss = 0.0622979 I0419 22:29:18.441741 27863 solver.cpp:237] Train net output #0: loss = 0.0622979 (* 1 = 0.0622979 loss) I0419 22:29:18.441751 27863 sgd_solver.cpp:105] Iteration 8920, lr = 0.00371763 I0419 22:29:34.780884 27863 solver.cpp:218] Iteration 8960 (2.44811 iter/s, 16.3392s/40 iters), loss = 0.0596981 I0419 22:29:34.780925 27863 solver.cpp:237] Train net output #0: loss = 0.0596982 (* 1 = 0.0596982 loss) I0419 22:29:34.780933 27863 sgd_solver.cpp:105] Iteration 8960, lr = 0.00370117 I0419 22:29:51.253139 27863 solver.cpp:218] Iteration 9000 (2.42833 iter/s, 16.4722s/40 iters), loss = 0.14723 I0419 22:29:51.253288 27863 solver.cpp:237] Train net output #0: loss = 0.14723 (* 1 = 0.14723 loss) I0419 22:29:51.253304 27863 sgd_solver.cpp:105] Iteration 9000, lr = 0.00368478 I0419 22:30:07.632117 27863 solver.cpp:218] Iteration 9040 (2.44217 iter/s, 16.3788s/40 iters), loss = 0.0611965 I0419 22:30:07.632159 27863 solver.cpp:237] Train net output #0: loss = 0.0611966 (* 1 = 0.0611966 loss) I0419 22:30:07.632169 27863 sgd_solver.cpp:105] Iteration 9040, lr = 0.00366847 I0419 22:30:23.924369 27863 solver.cpp:218] Iteration 9080 (2.45516 iter/s, 16.2922s/40 iters), loss = 0.123406 I0419 22:30:23.924484 27863 solver.cpp:237] Train net output #0: loss = 0.123406 (* 1 = 0.123406 loss) I0419 22:30:23.924494 27863 sgd_solver.cpp:105] Iteration 9080, lr = 0.00365223 I0419 22:30:33.416949 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:30:33.756954 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_9105.caffemodel I0419 22:30:36.849524 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_9105.solverstate I0419 22:30:39.208019 27863 solver.cpp:330] Iteration 9105, Testing net (#0) I0419 22:30:39.208043 27863 net.cpp:676] Ignoring source layer train-data I0419 22:30:43.340698 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 22:30:44.058312 27863 solver.cpp:397] Test net output #0: accuracy = 0.525123 I0419 22:30:44.058351 27863 solver.cpp:397] Test net output #1: loss = 2.7958 (* 1 = 2.7958 loss) I0419 22:30:49.541359 27863 solver.cpp:218] Iteration 9120 (1.56147 iter/s, 25.6169s/40 iters), loss = 0.12285 I0419 22:30:49.541404 27863 solver.cpp:237] Train net output #0: loss = 0.12285 (* 1 = 0.12285 loss) I0419 22:30:49.541412 27863 sgd_solver.cpp:105] Iteration 9120, lr = 0.00363606 I0419 22:31:05.969278 27863 solver.cpp:218] Iteration 9160 (2.43488 iter/s, 16.4279s/40 iters), loss = 0.0714519 I0419 22:31:05.969396 27863 solver.cpp:237] Train net output #0: loss = 0.071452 (* 1 = 0.071452 loss) I0419 22:31:05.969405 27863 sgd_solver.cpp:105] Iteration 9160, lr = 0.00361996 I0419 22:31:22.313107 27863 solver.cpp:218] Iteration 9200 (2.44742 iter/s, 16.3437s/40 iters), loss = 0.130679 I0419 22:31:22.313151 27863 solver.cpp:237] Train net output #0: loss = 0.13068 (* 1 = 0.13068 loss) I0419 22:31:22.313159 27863 sgd_solver.cpp:105] Iteration 9200, lr = 0.00360393 I0419 22:31:38.724648 27863 solver.cpp:218] Iteration 9240 (2.43731 iter/s, 16.4115s/40 iters), loss = 0.114292 I0419 22:31:38.724766 27863 solver.cpp:237] Train net output #0: loss = 0.114292 (* 1 = 0.114292 loss) I0419 22:31:38.724774 27863 sgd_solver.cpp:105] Iteration 9240, lr = 0.00358798 I0419 22:31:55.099934 27863 solver.cpp:218] Iteration 9280 (2.44272 iter/s, 16.3752s/40 iters), loss = 0.0875807 I0419 22:31:55.099975 27863 solver.cpp:237] Train net output #0: loss = 0.0875808 (* 1 = 0.0875808 loss) I0419 22:31:55.099983 27863 sgd_solver.cpp:105] Iteration 9280, lr = 0.00357209 I0419 22:31:55.444103 27863 blocking_queue.cpp:49] Waiting for data I0419 22:32:11.652372 27863 solver.cpp:218] Iteration 9320 (2.41657 iter/s, 16.5524s/40 iters), loss = 0.0543663 I0419 22:32:11.652523 27863 solver.cpp:237] Train net output #0: loss = 0.0543664 (* 1 = 0.0543664 loss) I0419 22:32:11.652532 27863 sgd_solver.cpp:105] Iteration 9320, lr = 0.00355628 I0419 22:32:28.067289 27863 solver.cpp:218] Iteration 9360 (2.43683 iter/s, 16.4148s/40 iters), loss = 0.10501 I0419 22:32:28.067332 27863 solver.cpp:237] Train net output #0: loss = 0.10501 (* 1 = 0.10501 loss) I0419 22:32:28.067340 27863 sgd_solver.cpp:105] Iteration 9360, lr = 0.00354053 I0419 22:32:44.425262 27863 solver.cpp:218] Iteration 9400 (2.4453 iter/s, 16.3579s/40 iters), loss = 0.108626 I0419 22:32:44.425360 27863 solver.cpp:237] Train net output #0: loss = 0.108626 (* 1 = 0.108626 loss) I0419 22:32:44.425370 27863 sgd_solver.cpp:105] Iteration 9400, lr = 0.00352486 I0419 22:33:00.833751 27863 solver.cpp:218] Iteration 9440 (2.43777 iter/s, 16.4084s/40 iters), loss = 0.164317 I0419 22:33:00.833798 27863 solver.cpp:237] Train net output #0: loss = 0.164317 (* 1 = 0.164317 loss) I0419 22:33:00.833806 27863 sgd_solver.cpp:105] Iteration 9440, lr = 0.00350925 I0419 22:33:17.207448 27863 solver.cpp:218] Iteration 9480 (2.44295 iter/s, 16.3737s/40 iters), loss = 0.122389 I0419 22:33:17.207559 27863 solver.cpp:237] Train net output #0: loss = 0.122389 (* 1 = 0.122389 loss) I0419 22:33:17.207569 27863 sgd_solver.cpp:105] Iteration 9480, lr = 0.00349371 I0419 22:33:33.568086 27863 solver.cpp:218] Iteration 9520 (2.44491 iter/s, 16.3605s/40 iters), loss = 0.0673458 I0419 22:33:33.568125 27863 solver.cpp:237] Train net output #0: loss = 0.0673459 (* 1 = 0.0673459 loss) I0419 22:33:33.568132 27863 sgd_solver.cpp:105] Iteration 9520, lr = 0.00347824 I0419 22:33:50.061648 27863 solver.cpp:218] Iteration 9560 
(2.42519 iter/s, 16.4935s/40 iters), loss = 0.0467098 I0419 22:33:50.061791 27863 solver.cpp:237] Train net output #0: loss = 0.0467099 (* 1 = 0.0467099 loss) I0419 22:33:50.061807 27863 sgd_solver.cpp:105] Iteration 9560, lr = 0.00346284 I0419 22:34:06.385233 27863 solver.cpp:218] Iteration 9600 (2.45046 iter/s, 16.3235s/40 iters), loss = 0.0606559 I0419 22:34:06.385273 27863 solver.cpp:237] Train net output #0: loss = 0.060656 (* 1 = 0.060656 loss) I0419 22:34:06.385282 27863 sgd_solver.cpp:105] Iteration 9600, lr = 0.00344751 I0419 22:34:22.887336 27863 solver.cpp:218] Iteration 9640 (2.42394 iter/s, 16.5021s/40 iters), loss = 0.0450601 I0419 22:34:22.887457 27863 solver.cpp:237] Train net output #0: loss = 0.0450603 (* 1 = 0.0450603 loss) I0419 22:34:22.887467 27863 sgd_solver.cpp:105] Iteration 9640, lr = 0.00343225 I0419 22:34:39.345010 27863 solver.cpp:218] Iteration 9680 (2.43049 iter/s, 16.4576s/40 iters), loss = 0.0802854 I0419 22:34:39.345058 27863 solver.cpp:237] Train net output #0: loss = 0.0802855 (* 1 = 0.0802855 loss) I0419 22:34:39.345067 27863 sgd_solver.cpp:105] Iteration 9680, lr = 0.00341705 I0419 22:34:51.551568 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:34:52.000717 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_9712.caffemodel I0419 22:34:55.098628 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_9712.solverstate I0419 22:34:59.562919 27863 solver.cpp:330] Iteration 9712, Testing net (#0) I0419 22:34:59.562942 27863 net.cpp:676] Ignoring source layer train-data I0419 22:35:03.608970 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:35:04.350173 27863 solver.cpp:397] Test net output #0: accuracy = 0.525123 I0419 22:35:04.350219 27863 solver.cpp:397] Test net output #1: loss = 2.67643 (* 1 = 2.67643 loss) I0419 22:35:07.041116 27863 solver.cpp:218] Iteration 9720 (1.44425 iter/s, 27.6961s/40 iters), loss = 0.0655055 I0419 22:35:07.041157 27863 solver.cpp:237] Train net output #0: loss = 0.0655056 (* 1 = 0.0655056 loss) I0419 22:35:07.041165 27863 sgd_solver.cpp:105] Iteration 9720, lr = 0.00340193 I0419 22:35:23.530634 27863 solver.cpp:218] Iteration 9760 (2.42579 iter/s, 16.4895s/40 iters), loss = 0.127322 I0419 22:35:23.530678 27863 solver.cpp:237] Train net output #0: loss = 0.127322 (* 1 = 0.127322 loss) I0419 22:35:23.530685 27863 sgd_solver.cpp:105] Iteration 9760, lr = 0.00338686 I0419 22:35:39.929317 27863 solver.cpp:218] Iteration 9800 (2.43922 iter/s, 16.3987s/40 iters), loss = 0.0489704 I0419 22:35:39.929498 27863 solver.cpp:237] Train net output #0: loss = 0.0489705 (* 1 = 0.0489705 loss) I0419 22:35:39.929508 27863 sgd_solver.cpp:105] Iteration 9800, lr = 0.00337187 I0419 22:35:56.234472 27863 solver.cpp:218] Iteration 9840 (2.45324 iter/s, 16.305s/40 iters), loss = 0.0414051 I0419 22:35:56.234513 27863 solver.cpp:237] Train net output #0: loss = 0.0414052 (* 1 = 0.0414052 loss) I0419 22:35:56.234520 27863 sgd_solver.cpp:105] Iteration 9840, lr = 0.00335694 I0419 22:36:12.691823 27863 solver.cpp:218] Iteration 9880 (2.43053 iter/s, 16.4573s/40 iters), loss = 0.0982534 I0419 22:36:12.691944 27863 solver.cpp:237] Train net output #0: loss = 0.0982535 (* 1 = 0.0982535 loss) I0419 22:36:12.691953 27863 sgd_solver.cpp:105] Iteration 9880, lr = 0.00334208 I0419 22:36:28.985090 27863 solver.cpp:218] Iteration 9920 (2.45502 iter/s, 16.2932s/40 iters), loss = 0.122497 I0419 22:36:28.985131 27863 solver.cpp:237] Train net output #0: loss = 0.122497 
(* 1 = 0.122497 loss) I0419 22:36:28.985138 27863 sgd_solver.cpp:105] Iteration 9920, lr = 0.00332728 I0419 22:36:45.330269 27863 solver.cpp:218] Iteration 9960 (2.44721 iter/s, 16.3452s/40 iters), loss = 0.117691 I0419 22:36:45.330395 27863 solver.cpp:237] Train net output #0: loss = 0.117691 (* 1 = 0.117691 loss) I0419 22:36:45.330404 27863 sgd_solver.cpp:105] Iteration 9960, lr = 0.00331255 I0419 22:37:01.736164 27863 solver.cpp:218] Iteration 10000 (2.43817 iter/s, 16.4058s/40 iters), loss = 0.0736327 I0419 22:37:01.736212 27863 solver.cpp:237] Train net output #0: loss = 0.0736328 (* 1 = 0.0736328 loss) I0419 22:37:01.736219 27863 sgd_solver.cpp:105] Iteration 10000, lr = 0.00329788 I0419 22:37:18.491847 27863 solver.cpp:218] Iteration 10040 (2.38725 iter/s, 16.7556s/40 iters), loss = 0.00937526 I0419 22:37:18.491966 27863 solver.cpp:237] Train net output #0: loss = 0.00937531 (* 1 = 0.00937531 loss) I0419 22:37:18.491974 27863 sgd_solver.cpp:105] Iteration 10040, lr = 0.00328328 I0419 22:37:34.989179 27863 solver.cpp:218] Iteration 10080 (2.42465 iter/s, 16.4972s/40 iters), loss = 0.0379632 I0419 22:37:34.989220 27863 solver.cpp:237] Train net output #0: loss = 0.0379633 (* 1 = 0.0379633 loss) I0419 22:37:34.989228 27863 sgd_solver.cpp:105] Iteration 10080, lr = 0.00326875 I0419 22:37:51.327080 27863 solver.cpp:218] Iteration 10120 (2.4483 iter/s, 16.3379s/40 iters), loss = 0.102093 I0419 22:37:51.327219 27863 solver.cpp:237] Train net output #0: loss = 0.102093 (* 1 = 0.102093 loss) I0419 22:37:51.327227 27863 sgd_solver.cpp:105] Iteration 10120, lr = 0.00325427 I0419 22:38:07.765735 27863 solver.cpp:218] Iteration 10160 (2.43331 iter/s, 16.4385s/40 iters), loss = 0.0573684 I0419 22:38:07.765774 27863 solver.cpp:237] Train net output #0: loss = 0.0573684 (* 1 = 0.0573684 loss) I0419 22:38:07.765782 27863 sgd_solver.cpp:105] Iteration 10160, lr = 0.00323987 I0419 22:38:24.012789 27863 solver.cpp:218] Iteration 10200 (2.46199 iter/s, 16.247s/40 iters), loss = 0.127367 I0419 22:38:24.012908 27863 solver.cpp:237] Train net output #0: loss = 0.127367 (* 1 = 0.127367 loss) I0419 22:38:24.012918 27863 sgd_solver.cpp:105] Iteration 10200, lr = 0.00322552 I0419 22:38:38.626132 27863 blocking_queue.cpp:49] Waiting for data I0419 22:38:40.326988 27863 solver.cpp:218] Iteration 10240 (2.45187 iter/s, 16.3141s/40 iters), loss = 0.0211606 I0419 22:38:40.327031 27863 solver.cpp:237] Train net output #0: loss = 0.0211607 (* 1 = 0.0211607 loss) I0419 22:38:40.327040 27863 sgd_solver.cpp:105] Iteration 10240, lr = 0.00321124 I0419 22:38:56.783696 27863 solver.cpp:218] Iteration 10280 (2.43062 iter/s, 16.4567s/40 iters), loss = 0.0595373 I0419 22:38:56.783824 27863 solver.cpp:237] Train net output #0: loss = 0.0595373 (* 1 = 0.0595373 loss) I0419 22:38:56.783834 27863 sgd_solver.cpp:105] Iteration 10280, lr = 0.00319702 I0419 22:39:11.896152 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:39:12.368726 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_10319.caffemodel I0419 22:39:16.847277 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_10319.solverstate I0419 22:39:20.926640 27863 solver.cpp:330] Iteration 10319, Testing net (#0) I0419 22:39:20.926661 27863 net.cpp:676] Ignoring source layer train-data I0419 22:39:24.956310 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 22:39:25.769307 27863 solver.cpp:397] Test net output #0: accuracy = 0.523284 I0419 22:39:25.769354 27863 solver.cpp:397] Test net output #1: loss = 2.79235 (* 1 = 2.79235 loss) I0419 22:39:25.959056 27863 solver.cpp:218] Iteration 10320 (1.37102 iter/s, 29.1753s/40 iters), loss = 0.0655471 I0419 22:39:25.960806 27863 solver.cpp:237] Train net output #0: loss = 0.0655471 (* 1 = 0.0655471 loss) I0419 22:39:25.960820 27863 sgd_solver.cpp:105] Iteration 10320, lr = 0.00318287 I0419 22:39:42.029386 27863 solver.cpp:218] Iteration 10360 (2.48933 iter/s, 16.0686s/40 iters), loss = 0.0788399 I0419 22:39:42.029512 27863 solver.cpp:237] Train net output #0: loss = 0.07884 (* 1 = 0.07884 loss) I0419 22:39:42.029522 27863 sgd_solver.cpp:105] Iteration 10360, lr = 0.00316878 I0419 22:39:58.481910 27863 solver.cpp:218] Iteration 10400 (2.43126 iter/s, 16.4524s/40 iters), loss = 0.0255263 I0419 22:39:58.481956 27863 solver.cpp:237] Train net output #0: loss = 0.0255264 (* 1 = 0.0255264 loss) I0419 22:39:58.481964 27863 sgd_solver.cpp:105] Iteration 10400, lr = 0.00315475 I0419 22:40:14.957671 27863 solver.cpp:218] Iteration 10440 (2.42781 iter/s, 16.4757s/40 iters), loss = 0.0596457 I0419 22:40:14.957777 27863 solver.cpp:237] Train net output #0: loss = 0.0596458 (* 1 = 0.0596458 loss) I0419 22:40:14.957787 27863 sgd_solver.cpp:105] Iteration 10440, lr = 0.00314078 I0419 22:40:31.630046 27863 solver.cpp:218] Iteration 10480 (2.39919 iter/s, 16.6723s/40 iters), loss = 0.0527862 I0419 22:40:31.630087 27863 solver.cpp:237] Train net output #0: loss = 0.0527862 (* 1 = 0.0527862 loss) I0419 22:40:31.630096 27863 sgd_solver.cpp:105] Iteration 10480, lr = 0.00312688 I0419 22:40:48.048949 27863 solver.cpp:218] Iteration 10520 (2.43622 iter/s, 16.4189s/40 iters), loss = 0.0857335 I0419 22:40:48.049062 27863 solver.cpp:237] Train net output #0: loss = 0.0857335 (* 1 = 0.0857335 loss) I0419 22:40:48.049072 27863 sgd_solver.cpp:105] Iteration 10520, lr = 0.00311303 I0419 22:41:04.511240 27863 solver.cpp:218] Iteration 10560 (2.42981 iter/s, 16.4622s/40 iters), loss = 0.0891717 I0419 22:41:04.511286 27863 solver.cpp:237] Train net output #0: loss = 0.0891718 (* 1 = 0.0891718 loss) I0419 22:41:04.511296 27863 sgd_solver.cpp:105] Iteration 10560, lr = 0.00309925 I0419 22:41:21.052883 27863 solver.cpp:218] Iteration 10600 (2.41814 iter/s, 16.5416s/40 iters), loss = 0.039199 I0419 22:41:21.052987 27863 solver.cpp:237] Train net output #0: loss = 0.039199 (* 1 = 0.039199 loss) I0419 22:41:21.053004 27863 sgd_solver.cpp:105] Iteration 10600, lr = 0.00308553 I0419 22:41:37.278533 27863 solver.cpp:218] Iteration 10640 (2.46525 iter/s, 16.2256s/40 iters), loss = 0.0869988 I0419 22:41:37.278578 27863 solver.cpp:237] Train net output #0: loss = 0.0869989 (* 1 = 0.0869989 loss) I0419 22:41:37.278586 27863 sgd_solver.cpp:105] Iteration 10640, lr = 0.00307187 I0419 22:41:53.750073 27863 solver.cpp:218] Iteration 10680 (2.42844 iter/s, 16.4715s/40 iters), loss = 0.072473 I0419 22:41:53.750202 27863 solver.cpp:237] Train net output #0: loss = 0.072473 (* 1 = 0.072473 loss) I0419 22:41:53.750211 27863 sgd_solver.cpp:105] Iteration 10680, lr = 0.00305827 I0419 22:42:10.142231 27863 solver.cpp:218] Iteration 10720 (2.44021 iter/s, 16.392s/40 iters), loss = 0.0171564 I0419 22:42:10.142274 27863 solver.cpp:237] Train net output #0: loss = 0.0171565 (* 1 = 0.0171565 loss) I0419 22:42:10.142282 27863 sgd_solver.cpp:105] Iteration 10720, lr = 0.00304473 I0419 22:42:26.450583 27863 solver.cpp:218] Iteration 10760 (2.45273 iter/s, 
16.3083s/40 iters), loss = 0.0653762 I0419 22:42:26.450744 27863 solver.cpp:237] Train net output #0: loss = 0.0653763 (* 1 = 0.0653763 loss) I0419 22:42:26.450754 27863 sgd_solver.cpp:105] Iteration 10760, lr = 0.00303125 I0419 22:42:42.835194 27863 solver.cpp:218] Iteration 10800 (2.44134 iter/s, 16.3845s/40 iters), loss = 0.133441 I0419 22:42:42.835237 27863 solver.cpp:237] Train net output #0: loss = 0.133441 (* 1 = 0.133441 loss) I0419 22:42:42.835244 27863 sgd_solver.cpp:105] Iteration 10800, lr = 0.00301783 I0419 22:42:59.141438 27863 solver.cpp:218] Iteration 10840 (2.45305 iter/s, 16.3062s/40 iters), loss = 0.063498 I0419 22:42:59.141557 27863 solver.cpp:237] Train net output #0: loss = 0.063498 (* 1 = 0.063498 loss) I0419 22:42:59.141567 27863 sgd_solver.cpp:105] Iteration 10840, lr = 0.00300447 I0419 22:43:15.738474 27863 solver.cpp:218] Iteration 10880 (2.41008 iter/s, 16.5969s/40 iters), loss = 0.0841154 I0419 22:43:15.738519 27863 solver.cpp:237] Train net output #0: loss = 0.0841154 (* 1 = 0.0841154 loss) I0419 22:43:15.738528 27863 sgd_solver.cpp:105] Iteration 10880, lr = 0.00299116 I0419 22:43:32.141564 27863 solver.cpp:218] Iteration 10920 (2.43857 iter/s, 16.4031s/40 iters), loss = 0.0710147 I0419 22:43:32.141692 27863 solver.cpp:237] Train net output #0: loss = 0.0710148 (* 1 = 0.0710148 loss) I0419 22:43:32.141702 27863 sgd_solver.cpp:105] Iteration 10920, lr = 0.00297792 I0419 22:43:33.673166 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:43:34.169373 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_10926.caffemodel I0419 22:43:37.295161 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_10926.solverstate I0419 22:43:39.655792 27863 solver.cpp:330] Iteration 10926, Testing net (#0) I0419 22:43:39.655812 27863 net.cpp:676] Ignoring source layer train-data I0419 22:43:43.759848 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 22:43:44.588677 27863 solver.cpp:397] Test net output #0: accuracy = 0.530637 I0419 22:43:44.588721 27863 solver.cpp:397] Test net output #1: loss = 2.68209 (* 1 = 2.68209 loss) I0419 22:43:57.866835 27863 solver.cpp:218] Iteration 10960 (1.5549 iter/s, 25.7252s/40 iters), loss = 0.0888347 I0419 22:43:57.866878 27863 solver.cpp:237] Train net output #0: loss = 0.0888347 (* 1 = 0.0888347 loss) I0419 22:43:57.866887 27863 sgd_solver.cpp:105] Iteration 10960, lr = 0.00296474 I0419 22:44:14.315820 27863 solver.cpp:218] Iteration 11000 (2.43177 iter/s, 16.4489s/40 iters), loss = 0.0269397 I0419 22:44:14.315950 27863 solver.cpp:237] Train net output #0: loss = 0.0269398 (* 1 = 0.0269398 loss) I0419 22:44:14.315965 27863 sgd_solver.cpp:105] Iteration 11000, lr = 0.00295161 I0419 22:44:30.665756 27863 solver.cpp:218] Iteration 11040 (2.44651 iter/s, 16.3498s/40 iters), loss = 0.0438073 I0419 22:44:30.665797 27863 solver.cpp:237] Train net output #0: loss = 0.0438074 (* 1 = 0.0438074 loss) I0419 22:44:30.665807 27863 sgd_solver.cpp:105] Iteration 11040, lr = 0.00293854 I0419 22:44:47.186475 27863 solver.cpp:218] Iteration 11080 (2.42121 iter/s, 16.5207s/40 iters), loss = 0.149078 I0419 22:44:47.186566 27863 solver.cpp:237] Train net output #0: loss = 0.149079 (* 1 = 0.149079 loss) I0419 22:44:47.186574 27863 sgd_solver.cpp:105] Iteration 11080, lr = 0.00292553 I0419 22:45:03.903086 27863 solver.cpp:218] Iteration 11120 (2.39284 iter/s, 16.7165s/40 iters), loss = 0.0479921 I0419 22:45:03.903126 27863 solver.cpp:237] Train net output #0: loss = 0.0479922 (* 1 = 0.0479922 loss) I0419 22:45:03.903136 27863 sgd_solver.cpp:105] Iteration 11120, lr = 0.00291258 I0419 22:45:14.495800 27863 blocking_queue.cpp:49] Waiting for data I0419 22:45:20.284899 27863 solver.cpp:218] Iteration 11160 (2.44174 iter/s, 16.3818s/40 iters), loss = 0.0464803 I0419 22:45:20.285099 27863 solver.cpp:237] Train net output #0: loss = 0.0464804 (* 1 = 0.0464804 loss) I0419 22:45:20.285115 27863 sgd_solver.cpp:105] Iteration 11160, lr = 0.00289968 I0419 22:45:36.591106 27863 solver.cpp:218] Iteration 11200 (2.45308 iter/s, 16.306s/40 iters), loss = 0.038743 I0419 22:45:36.591150 27863 solver.cpp:237] Train net output #0: loss = 0.038743 (* 1 = 0.038743 loss) I0419 22:45:36.591159 27863 sgd_solver.cpp:105] Iteration 11200, lr = 0.00288685 I0419 22:45:52.941059 27863 solver.cpp:218] Iteration 11240 (2.44649 iter/s, 16.3499s/40 iters), loss = 0.013346 I0419 22:45:52.941171 27863 solver.cpp:237] Train net output #0: loss = 0.0133461 (* 1 = 0.0133461 loss) I0419 22:45:52.941182 27863 sgd_solver.cpp:105] Iteration 11240, lr = 0.00287406 I0419 22:46:09.215670 27863 solver.cpp:218] Iteration 11280 (2.45783 iter/s, 16.2745s/40 iters), loss = 0.0760383 I0419 22:46:09.215713 27863 solver.cpp:237] Train net output #0: loss = 0.0760383 (* 1 = 0.0760383 loss) I0419 22:46:09.215721 27863 sgd_solver.cpp:105] Iteration 11280, lr = 0.00286134 I0419 22:46:25.548027 27863 solver.cpp:218] Iteration 11320 (2.44913 iter/s, 16.3323s/40 iters), loss = 0.0993845 I0419 22:46:25.548151 27863 solver.cpp:237] Train net output #0: loss = 0.0993845 (* 1 = 0.0993845 loss) I0419 22:46:25.548163 27863 sgd_solver.cpp:105] Iteration 11320, lr = 0.00284867 I0419 22:46:41.912878 27863 solver.cpp:218] Iteration 11360 (2.44428 iter/s, 16.3647s/40 iters), loss = 0.0241425 I0419 22:46:41.912917 27863 solver.cpp:237] Train net output #0: loss = 0.0241425 (* 1 = 0.0241425 loss) I0419 22:46:41.912925 27863 sgd_solver.cpp:105] Iteration 11360, lr = 0.00283606 I0419 
22:46:58.325734 27863 solver.cpp:218] Iteration 11400 (2.43712 iter/s, 16.4128s/40 iters), loss = 0.0496592 I0419 22:46:58.325845 27863 solver.cpp:237] Train net output #0: loss = 0.0496592 (* 1 = 0.0496592 loss) I0419 22:46:58.325855 27863 sgd_solver.cpp:105] Iteration 11400, lr = 0.0028235 I0419 22:47:14.806668 27863 solver.cpp:218] Iteration 11440 (2.42706 iter/s, 16.4808s/40 iters), loss = 0.0789735 I0419 22:47:14.806710 27863 solver.cpp:237] Train net output #0: loss = 0.0789735 (* 1 = 0.0789735 loss) I0419 22:47:14.806720 27863 sgd_solver.cpp:105] Iteration 11440, lr = 0.002811 I0419 22:47:31.191896 27863 solver.cpp:218] Iteration 11480 (2.44123 iter/s, 16.3852s/40 iters), loss = 0.050915 I0419 22:47:31.191988 27863 solver.cpp:237] Train net output #0: loss = 0.050915 (* 1 = 0.050915 loss) I0419 22:47:31.191998 27863 sgd_solver.cpp:105] Iteration 11480, lr = 0.00279856 I0419 22:47:47.572124 27863 solver.cpp:218] Iteration 11520 (2.44198 iter/s, 16.3802s/40 iters), loss = 0.0340299 I0419 22:47:47.572167 27863 solver.cpp:237] Train net output #0: loss = 0.03403 (* 1 = 0.03403 loss) I0419 22:47:47.572175 27863 sgd_solver.cpp:105] Iteration 11520, lr = 0.00278617 I0419 22:47:51.946894 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:47:52.456749 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_11533.caffemodel I0419 22:47:56.621315 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_11533.solverstate I0419 22:47:59.583518 27863 solver.cpp:330] Iteration 11533, Testing net (#0) I0419 22:47:59.583539 27863 net.cpp:676] Ignoring source layer train-data I0419 22:48:03.394639 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:48:04.208025 27863 solver.cpp:397] Test net output #0: accuracy = 0.530637 I0419 22:48:04.208060 27863 solver.cpp:397] Test net output #1: loss = 2.6772 (* 1 = 2.6772 loss) I0419 22:48:14.666510 27863 solver.cpp:218] Iteration 11560 (1.47632 iter/s, 27.0944s/40 iters), loss = 0.0403126 I0419 22:48:14.666558 27863 solver.cpp:237] Train net output #0: loss = 0.0403127 (* 1 = 0.0403127 loss) I0419 22:48:14.666565 27863 sgd_solver.cpp:105] Iteration 11560, lr = 0.00277383 I0419 22:48:31.096405 27863 solver.cpp:218] Iteration 11600 (2.43459 iter/s, 16.4299s/40 iters), loss = 0.0505775 I0419 22:48:31.096447 27863 solver.cpp:237] Train net output #0: loss = 0.0505776 (* 1 = 0.0505776 loss) I0419 22:48:31.096455 27863 sgd_solver.cpp:105] Iteration 11600, lr = 0.00276155 I0419 22:48:47.582192 27863 solver.cpp:218] Iteration 11640 (2.42634 iter/s, 16.4858s/40 iters), loss = 0.0359816 I0419 22:48:47.582388 27863 solver.cpp:237] Train net output #0: loss = 0.0359816 (* 1 = 0.0359816 loss) I0419 22:48:47.582401 27863 sgd_solver.cpp:105] Iteration 11640, lr = 0.00274932 I0419 22:49:04.062708 27863 solver.cpp:218] Iteration 11680 (2.42714 iter/s, 16.4803s/40 iters), loss = 0.0561779 I0419 22:49:04.062762 27863 solver.cpp:237] Train net output #0: loss = 0.056178 (* 1 = 0.056178 loss) I0419 22:49:04.062774 27863 sgd_solver.cpp:105] Iteration 11680, lr = 0.00273715 I0419 22:49:20.337414 27863 solver.cpp:218] Iteration 11720 (2.45781 iter/s, 16.2747s/40 iters), loss = 0.0605178 I0419 22:49:20.337538 27863 solver.cpp:237] Train net output #0: loss = 0.0605179 (* 1 = 0.0605179 loss) I0419 22:49:20.337546 27863 sgd_solver.cpp:105] Iteration 11720, lr = 0.00272503 I0419 22:49:36.802230 27863 solver.cpp:218] Iteration 11760 (2.42944 iter/s, 16.4647s/40 iters), loss = 0.0803202 I0419 
22:49:36.802273 27863 solver.cpp:237] Train net output #0: loss = 0.0803203 (* 1 = 0.0803203 loss) I0419 22:49:36.802281 27863 sgd_solver.cpp:105] Iteration 11760, lr = 0.00271297 I0419 22:49:53.216833 27863 solver.cpp:218] Iteration 11800 (2.43686 iter/s, 16.4146s/40 iters), loss = 0.0211375 I0419 22:49:53.216967 27863 solver.cpp:237] Train net output #0: loss = 0.0211375 (* 1 = 0.0211375 loss) I0419 22:49:53.216980 27863 sgd_solver.cpp:105] Iteration 11800, lr = 0.00270096 I0419 22:50:09.511384 27863 solver.cpp:218] Iteration 11840 (2.45483 iter/s, 16.2944s/40 iters), loss = 0.0551623 I0419 22:50:09.511442 27863 solver.cpp:237] Train net output #0: loss = 0.0551624 (* 1 = 0.0551624 loss) I0419 22:50:09.511452 27863 sgd_solver.cpp:105] Iteration 11840, lr = 0.002689 I0419 22:50:25.824858 27863 solver.cpp:218] Iteration 11880 (2.45197 iter/s, 16.3134s/40 iters), loss = 0.0781777 I0419 22:50:25.824976 27863 solver.cpp:237] Train net output #0: loss = 0.0781778 (* 1 = 0.0781778 loss) I0419 22:50:25.824986 27863 sgd_solver.cpp:105] Iteration 11880, lr = 0.00267709 I0419 22:50:42.252748 27863 solver.cpp:218] Iteration 11920 (2.4349 iter/s, 16.4278s/40 iters), loss = 0.124717 I0419 22:50:42.252794 27863 solver.cpp:237] Train net output #0: loss = 0.124717 (* 1 = 0.124717 loss) I0419 22:50:42.252801 27863 sgd_solver.cpp:105] Iteration 11920, lr = 0.00266524 I0419 22:50:58.572521 27863 solver.cpp:218] Iteration 11960 (2.45102 iter/s, 16.3197s/40 iters), loss = 0.0624124 I0419 22:50:58.572645 27863 solver.cpp:237] Train net output #0: loss = 0.0624125 (* 1 = 0.0624125 loss) I0419 22:50:58.572654 27863 sgd_solver.cpp:105] Iteration 11960, lr = 0.00265344 I0419 22:51:15.136394 27863 solver.cpp:218] Iteration 12000 (2.41491 iter/s, 16.5638s/40 iters), loss = 0.028364 I0419 22:51:15.136441 27863 solver.cpp:237] Train net output #0: loss = 0.0283641 (* 1 = 0.0283641 loss) I0419 22:51:15.136451 27863 sgd_solver.cpp:105] Iteration 12000, lr = 0.00264169 I0419 22:51:31.479043 27863 solver.cpp:218] Iteration 12040 (2.44759 iter/s, 16.3426s/40 iters), loss = 0.0023318 I0419 22:51:31.479161 27863 solver.cpp:237] Train net output #0: loss = 0.00233185 (* 1 = 0.00233185 loss) I0419 22:51:31.479169 27863 sgd_solver.cpp:105] Iteration 12040, lr = 0.00263 I0419 22:51:47.891278 27863 solver.cpp:218] Iteration 12080 (2.43722 iter/s, 16.4121s/40 iters), loss = 0.0233364 I0419 22:51:47.891335 27863 solver.cpp:237] Train net output #0: loss = 0.0233364 (* 1 = 0.0233364 loss) I0419 22:51:47.891350 27863 sgd_solver.cpp:105] Iteration 12080, lr = 0.00261835 I0419 22:51:56.433064 27863 blocking_queue.cpp:49] Waiting for data I0419 22:52:04.297516 27863 solver.cpp:218] Iteration 12120 (2.4381 iter/s, 16.4062s/40 iters), loss = 0.0542893 I0419 22:52:04.297660 27863 solver.cpp:237] Train net output #0: loss = 0.0542894 (* 1 = 0.0542894 loss) I0419 22:52:04.297669 27863 sgd_solver.cpp:105] Iteration 12120, lr = 0.00260676 I0419 22:52:11.556560 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:52:12.078263 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_12140.caffemodel I0419 22:52:16.366231 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_12140.solverstate I0419 22:52:19.788842 27863 solver.cpp:330] Iteration 12140, Testing net (#0) I0419 22:52:19.788866 27863 net.cpp:676] Ignoring source layer train-data I0419 22:52:23.447198 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 22:52:24.364989 27863 solver.cpp:397] Test net output #0: accuracy = 0.537377 I0419 22:52:24.365036 27863 solver.cpp:397] Test net output #1: loss = 2.72201 (* 1 = 2.72201 loss) I0419 22:52:31.906739 27863 solver.cpp:218] Iteration 12160 (1.4488 iter/s, 27.6091s/40 iters), loss = 0.0078011 I0419 22:52:31.906782 27863 solver.cpp:237] Train net output #0: loss = 0.00780116 (* 1 = 0.00780116 loss) I0419 22:52:31.906791 27863 sgd_solver.cpp:105] Iteration 12160, lr = 0.00259522 I0419 22:52:48.328230 27863 solver.cpp:218] Iteration 12200 (2.43584 iter/s, 16.4215s/40 iters), loss = 0.0182909 I0419 22:52:48.328351 27863 solver.cpp:237] Train net output #0: loss = 0.018291 (* 1 = 0.018291 loss) I0419 22:52:48.328361 27863 sgd_solver.cpp:105] Iteration 12200, lr = 0.00258373 I0419 22:53:04.845567 27863 solver.cpp:218] Iteration 12240 (2.42171 iter/s, 16.5172s/40 iters), loss = 0.15716 I0419 22:53:04.845609 27863 solver.cpp:237] Train net output #0: loss = 0.15716 (* 1 = 0.15716 loss) I0419 22:53:04.845618 27863 sgd_solver.cpp:105] Iteration 12240, lr = 0.00257229 I0419 22:53:21.169081 27863 solver.cpp:218] Iteration 12280 (2.45046 iter/s, 16.3235s/40 iters), loss = 0.0183531 I0419 22:53:21.169212 27863 solver.cpp:237] Train net output #0: loss = 0.0183532 (* 1 = 0.0183532 loss) I0419 22:53:21.169222 27863 sgd_solver.cpp:105] Iteration 12280, lr = 0.0025609 I0419 22:53:37.583932 27863 solver.cpp:218] Iteration 12320 (2.43684 iter/s, 16.4147s/40 iters), loss = 0.0116184 I0419 22:53:37.583978 27863 solver.cpp:237] Train net output #0: loss = 0.0116184 (* 1 = 0.0116184 loss) I0419 22:53:37.583986 27863 sgd_solver.cpp:105] Iteration 12320, lr = 0.00254956 I0419 22:53:54.009896 27863 solver.cpp:218] Iteration 12360 (2.43517 iter/s, 16.4259s/40 iters), loss = 0.0579968 I0419 22:53:54.010012 27863 solver.cpp:237] Train net output #0: loss = 0.0579968 (* 1 = 0.0579968 loss) I0419 22:53:54.010022 27863 sgd_solver.cpp:105] Iteration 12360, lr = 0.00253828 I0419 22:54:10.504281 27863 solver.cpp:218] Iteration 12400 (2.42508 iter/s, 16.4943s/40 iters), loss = 0.0332996 I0419 22:54:10.504318 27863 solver.cpp:237] Train net output #0: loss = 0.0332997 (* 1 = 0.0332997 loss) I0419 22:54:10.504326 27863 sgd_solver.cpp:105] Iteration 12400, lr = 0.00252704 I0419 22:54:27.074631 27863 solver.cpp:218] Iteration 12440 (2.41396 iter/s, 16.5703s/40 iters), loss = 0.114331 I0419 22:54:27.074779 27863 solver.cpp:237] Train net output #0: loss = 0.114331 (* 1 = 0.114331 loss) I0419 22:54:27.074795 27863 sgd_solver.cpp:105] Iteration 12440, lr = 0.00251585 I0419 22:54:43.378970 27863 solver.cpp:218] Iteration 12480 (2.45335 iter/s, 16.3042s/40 iters), loss = 0.0314974 I0419 22:54:43.379012 27863 solver.cpp:237] Train net output #0: loss = 0.0314975 (* 1 = 0.0314975 loss) I0419 22:54:43.379021 27863 sgd_solver.cpp:105] Iteration 12480, lr = 0.00250471 I0419 22:54:59.945827 27863 solver.cpp:218] Iteration 12520 (2.41446 iter/s, 16.5668s/40 iters), loss = 0.109293 I0419 22:54:59.945973 27863 solver.cpp:237] Train net output #0: loss = 0.109293 (* 1 = 0.109293 loss) I0419 22:54:59.945986 27863 sgd_solver.cpp:105] Iteration 12520, lr = 0.00249362 I0419 22:55:16.338318 27863 solver.cpp:218] Iteration 12560 (2.44016 iter/s, 16.3924s/40 iters), loss = 0.0384199 I0419 22:55:16.338369 27863 solver.cpp:237] Train net output #0: loss = 0.03842 (* 1 = 0.03842 loss) I0419 22:55:16.338378 27863 sgd_solver.cpp:105] Iteration 12560, lr = 0.00248258 I0419 22:55:32.628963 27863 solver.cpp:218] Iteration 12600 (2.4554 iter/s, 16.2906s/40 
iters), loss = 0.0176858 I0419 22:55:32.629125 27863 solver.cpp:237] Train net output #0: loss = 0.0176858 (* 1 = 0.0176858 loss) I0419 22:55:32.629142 27863 sgd_solver.cpp:105] Iteration 12600, lr = 0.00247159 I0419 22:55:48.987574 27863 solver.cpp:218] Iteration 12640 (2.44522 iter/s, 16.3585s/40 iters), loss = 0.117972 I0419 22:55:48.987620 27863 solver.cpp:237] Train net output #0: loss = 0.117972 (* 1 = 0.117972 loss) I0419 22:55:48.987629 27863 sgd_solver.cpp:105] Iteration 12640, lr = 0.00246065 I0419 22:56:05.432485 27863 solver.cpp:218] Iteration 12680 (2.43237 iter/s, 16.4449s/40 iters), loss = 0.023853 I0419 22:56:05.432610 27863 solver.cpp:237] Train net output #0: loss = 0.0238531 (* 1 = 0.0238531 loss) I0419 22:56:05.432623 27863 sgd_solver.cpp:105] Iteration 12680, lr = 0.00244975 I0419 22:56:21.673655 27863 solver.cpp:218] Iteration 12720 (2.46289 iter/s, 16.2411s/40 iters), loss = 0.0628588 I0419 22:56:21.673699 27863 solver.cpp:237] Train net output #0: loss = 0.0628588 (* 1 = 0.0628588 loss) I0419 22:56:21.673707 27863 sgd_solver.cpp:105] Iteration 12720, lr = 0.00243891 I0419 22:56:31.952823 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:56:32.524888 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_12747.caffemodel I0419 22:56:35.656831 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_12747.solverstate I0419 22:56:38.046237 27863 solver.cpp:330] Iteration 12747, Testing net (#0) I0419 22:56:38.046255 27863 net.cpp:676] Ignoring source layer train-data I0419 22:56:41.925411 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 22:56:42.901571 27863 solver.cpp:397] Test net output #0: accuracy = 0.545343 I0419 22:56:42.901625 27863 solver.cpp:397] Test net output #1: loss = 2.72431 (* 1 = 2.72431 loss) I0419 22:56:47.666595 27863 solver.cpp:218] Iteration 12760 (1.53888 iter/s, 25.9929s/40 iters), loss = 0.0564331 I0419 22:56:47.666651 27863 solver.cpp:237] Train net output #0: loss = 0.0564331 (* 1 = 0.0564331 loss) I0419 22:56:47.666666 27863 sgd_solver.cpp:105] Iteration 12760, lr = 0.00242811 I0419 22:57:04.098392 27863 solver.cpp:218] Iteration 12800 (2.43431 iter/s, 16.4318s/40 iters), loss = 0.0523026 I0419 22:57:04.098436 27863 solver.cpp:237] Train net output #0: loss = 0.0523026 (* 1 = 0.0523026 loss) I0419 22:57:04.098444 27863 sgd_solver.cpp:105] Iteration 12800, lr = 0.00241736 I0419 22:57:20.457731 27863 solver.cpp:218] Iteration 12840 (2.44509 iter/s, 16.3593s/40 iters), loss = 0.0559671 I0419 22:57:20.457850 27863 solver.cpp:237] Train net output #0: loss = 0.0559671 (* 1 = 0.0559671 loss) I0419 22:57:20.457859 27863 sgd_solver.cpp:105] Iteration 12840, lr = 0.00240666 I0419 22:57:36.887910 27863 solver.cpp:218] Iteration 12880 (2.43456 iter/s, 16.4301s/40 iters), loss = 0.0295572 I0419 22:57:36.887956 27863 solver.cpp:237] Train net output #0: loss = 0.0295573 (* 1 = 0.0295573 loss) I0419 22:57:36.887965 27863 sgd_solver.cpp:105] Iteration 12880, lr = 0.002396 I0419 22:57:53.110689 27863 solver.cpp:218] Iteration 12920 (2.46567 iter/s, 16.2227s/40 iters), loss = 0.066929 I0419 22:57:53.110805 27863 solver.cpp:237] Train net output #0: loss = 0.066929 (* 1 = 0.066929 loss) I0419 22:57:53.110814 27863 sgd_solver.cpp:105] Iteration 12920, lr = 0.00238539 I0419 22:58:09.557145 27863 solver.cpp:218] Iteration 12960 (2.43215 iter/s, 16.4464s/40 iters), loss = 0.0229611 I0419 22:58:09.557188 27863 solver.cpp:237] Train net output #0: loss = 0.0229611 (* 1 = 
0.0229611 loss) I0419 22:58:09.557194 27863 sgd_solver.cpp:105] Iteration 12960, lr = 0.00237483 I0419 22:58:26.059368 27863 solver.cpp:218] Iteration 13000 (2.42392 iter/s, 16.5022s/40 iters), loss = 0.0701531 I0419 22:58:26.059567 27863 solver.cpp:237] Train net output #0: loss = 0.0701531 (* 1 = 0.0701531 loss) I0419 22:58:26.059581 27863 sgd_solver.cpp:105] Iteration 13000, lr = 0.00236432 I0419 22:58:30.517439 27863 blocking_queue.cpp:49] Waiting for data I0419 22:58:42.506896 27863 solver.cpp:218] Iteration 13040 (2.432 iter/s, 16.4473s/40 iters), loss = 0.129928 I0419 22:58:42.506937 27863 solver.cpp:237] Train net output #0: loss = 0.129928 (* 1 = 0.129928 loss) I0419 22:58:42.506947 27863 sgd_solver.cpp:105] Iteration 13040, lr = 0.00235385 I0419 22:58:58.844130 27863 solver.cpp:218] Iteration 13080 (2.4484 iter/s, 16.3372s/40 iters), loss = 0.00861658 I0419 22:58:58.844267 27863 solver.cpp:237] Train net output #0: loss = 0.00861661 (* 1 = 0.00861661 loss) I0419 22:58:58.844277 27863 sgd_solver.cpp:105] Iteration 13080, lr = 0.00234343 I0419 22:59:15.290391 27863 solver.cpp:218] Iteration 13120 (2.43218 iter/s, 16.4461s/40 iters), loss = 0.0127091 I0419 22:59:15.290443 27863 solver.cpp:237] Train net output #0: loss = 0.0127092 (* 1 = 0.0127092 loss) I0419 22:59:15.290452 27863 sgd_solver.cpp:105] Iteration 13120, lr = 0.00233305 I0419 22:59:31.618829 27863 solver.cpp:218] Iteration 13160 (2.44972 iter/s, 16.3284s/40 iters), loss = 0.0362785 I0419 22:59:31.618947 27863 solver.cpp:237] Train net output #0: loss = 0.0362785 (* 1 = 0.0362785 loss) I0419 22:59:31.618957 27863 sgd_solver.cpp:105] Iteration 13160, lr = 0.00232273 I0419 22:59:47.910670 27863 solver.cpp:218] Iteration 13200 (2.45523 iter/s, 16.2917s/40 iters), loss = 0.0200603 I0419 22:59:47.910710 27863 solver.cpp:237] Train net output #0: loss = 0.0200604 (* 1 = 0.0200604 loss) I0419 22:59:47.910718 27863 sgd_solver.cpp:105] Iteration 13200, lr = 0.00231244 I0419 23:00:04.247156 27863 solver.cpp:218] Iteration 13240 (2.44851 iter/s, 16.3365s/40 iters), loss = 0.0303158 I0419 23:00:04.247253 27863 solver.cpp:237] Train net output #0: loss = 0.0303159 (* 1 = 0.0303159 loss) I0419 23:00:04.247263 27863 sgd_solver.cpp:105] Iteration 13240, lr = 0.0023022 I0419 23:00:20.585031 27863 solver.cpp:218] Iteration 13280 (2.44831 iter/s, 16.3378s/40 iters), loss = 0.054773 I0419 23:00:20.585074 27863 solver.cpp:237] Train net output #0: loss = 0.054773 (* 1 = 0.054773 loss) I0419 23:00:20.585083 27863 sgd_solver.cpp:105] Iteration 13280, lr = 0.00229201 I0419 23:00:37.017997 27863 solver.cpp:218] Iteration 13320 (2.43414 iter/s, 16.4329s/40 iters), loss = 0.0383264 I0419 23:00:37.018116 27863 solver.cpp:237] Train net output #0: loss = 0.0383264 (* 1 = 0.0383264 loss) I0419 23:00:37.018126 27863 sgd_solver.cpp:105] Iteration 13320, lr = 0.00228186 I0419 23:00:49.971308 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:00:50.534863 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_13354.caffemodel I0419 23:00:53.925871 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_13354.solverstate I0419 23:00:56.614967 27863 solver.cpp:330] Iteration 13354, Testing net (#0) I0419 23:00:56.614989 27863 net.cpp:676] Ignoring source layer train-data I0419 23:01:00.416330 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 23:01:01.425493 27863 solver.cpp:397] Test net output #0: accuracy = 0.551471 I0419 23:01:01.425540 27863 solver.cpp:397] Test net output #1: loss = 2.74159 (* 1 = 2.74159 loss) I0419 23:01:03.200845 27863 solver.cpp:218] Iteration 13360 (1.52772 iter/s, 26.1828s/40 iters), loss = 0.0374607 I0419 23:01:03.200892 27863 solver.cpp:237] Train net output #0: loss = 0.0374607 (* 1 = 0.0374607 loss) I0419 23:01:03.200901 27863 sgd_solver.cpp:105] Iteration 13360, lr = 0.00227176 I0419 23:01:19.681996 27863 solver.cpp:218] Iteration 13400 (2.42702 iter/s, 16.4811s/40 iters), loss = 0.00860004 I0419 23:01:19.682109 27863 solver.cpp:237] Train net output #0: loss = 0.00860007 (* 1 = 0.00860007 loss) I0419 23:01:19.682119 27863 sgd_solver.cpp:105] Iteration 13400, lr = 0.0022617 I0419 23:01:36.256753 27863 solver.cpp:218] Iteration 13440 (2.41332 iter/s, 16.5746s/40 iters), loss = 0.168237 I0419 23:01:36.256798 27863 solver.cpp:237] Train net output #0: loss = 0.168237 (* 1 = 0.168237 loss) I0419 23:01:36.256808 27863 sgd_solver.cpp:105] Iteration 13440, lr = 0.00225169 I0419 23:01:52.505740 27863 solver.cpp:218] Iteration 13480 (2.4617 iter/s, 16.249s/40 iters), loss = 0.0111544 I0419 23:01:52.505904 27863 solver.cpp:237] Train net output #0: loss = 0.0111544 (* 1 = 0.0111544 loss) I0419 23:01:52.505914 27863 sgd_solver.cpp:105] Iteration 13480, lr = 0.00224172 I0419 23:02:08.740229 27863 solver.cpp:218] Iteration 13520 (2.46391 iter/s, 16.2343s/40 iters), loss = 0.00576859 I0419 23:02:08.740273 27863 solver.cpp:237] Train net output #0: loss = 0.00576859 (* 1 = 0.00576859 loss) I0419 23:02:08.740281 27863 sgd_solver.cpp:105] Iteration 13520, lr = 0.00223179 I0419 23:02:25.068984 27863 solver.cpp:218] Iteration 13560 (2.44967 iter/s, 16.3287s/40 iters), loss = 0.0327455 I0419 23:02:25.069084 27863 solver.cpp:237] Train net output #0: loss = 0.0327455 (* 1 = 0.0327455 loss) I0419 23:02:25.069094 27863 sgd_solver.cpp:105] Iteration 13560, lr = 0.00222191 I0419 23:02:41.370121 27863 solver.cpp:218] Iteration 13600 (2.45383 iter/s, 16.3011s/40 iters), loss = 0.03255 I0419 23:02:41.370162 27863 solver.cpp:237] Train net output #0: loss = 0.03255 (* 1 = 0.03255 loss) I0419 23:02:41.370170 27863 sgd_solver.cpp:105] Iteration 13600, lr = 0.00221208 I0419 23:02:57.625496 27863 solver.cpp:218] Iteration 13640 (2.46073 iter/s, 16.2554s/40 iters), loss = 0.0837436 I0419 23:02:57.625617 27863 solver.cpp:237] Train net output #0: loss = 0.0837436 (* 1 = 0.0837436 loss) I0419 23:02:57.625627 27863 sgd_solver.cpp:105] Iteration 13640, lr = 0.00220228 I0419 23:03:13.890691 27863 solver.cpp:218] Iteration 13680 (2.45926 iter/s, 16.2651s/40 iters), loss = 0.102445 I0419 23:03:13.890743 27863 solver.cpp:237] Train net output #0: loss = 0.102445 (* 1 = 0.102445 loss) I0419 23:03:13.890758 27863 sgd_solver.cpp:105] Iteration 13680, lr = 0.00219253 I0419 23:03:30.173295 27863 solver.cpp:218] Iteration 13720 (2.45661 iter/s, 16.2826s/40 iters), loss = 0.0638942 I0419 23:03:30.173415 27863 solver.cpp:237] Train net output #0: loss = 0.0638942 (* 1 = 0.0638942 loss) I0419 23:03:30.173425 27863 sgd_solver.cpp:105] Iteration 13720, lr = 0.00218283 I0419 23:03:46.564682 27863 solver.cpp:218] Iteration 13760 (2.44032 iter/s, 16.3913s/40 iters), loss = 0.016119 I0419 23:03:46.564719 27863 solver.cpp:237] Train net output #0: loss = 0.0161191 (* 1 = 0.0161191 loss) I0419 23:03:46.564728 27863 sgd_solver.cpp:105] Iteration 13760, lr = 0.00217316 I0419 23:04:02.727344 27863 solver.cpp:218] Iteration 13800 (2.47484 iter/s, 
16.1626s/40 iters), loss = 0.0361081 I0419 23:04:02.727427 27863 solver.cpp:237] Train net output #0: loss = 0.0361081 (* 1 = 0.0361081 loss) I0419 23:04:02.727435 27863 sgd_solver.cpp:105] Iteration 13800, lr = 0.00216354 I0419 23:04:18.985828 27863 solver.cpp:218] Iteration 13840 (2.46026 iter/s, 16.2584s/40 iters), loss = 0.00737973 I0419 23:04:18.985868 27863 solver.cpp:237] Train net output #0: loss = 0.00737977 (* 1 = 0.00737977 loss) I0419 23:04:18.985877 27863 sgd_solver.cpp:105] Iteration 13840, lr = 0.00215396 I0419 23:04:35.262595 27863 solver.cpp:218] Iteration 13880 (2.45749 iter/s, 16.2767s/40 iters), loss = 0.0074965 I0419 23:04:35.262710 27863 solver.cpp:237] Train net output #0: loss = 0.00749655 (* 1 = 0.00749655 loss) I0419 23:04:35.262719 27863 sgd_solver.cpp:105] Iteration 13880, lr = 0.00214442 I0419 23:04:51.662122 27863 solver.cpp:218] Iteration 13920 (2.43911 iter/s, 16.3994s/40 iters), loss = 0.0258829 I0419 23:04:51.662176 27863 solver.cpp:237] Train net output #0: loss = 0.0258829 (* 1 = 0.0258829 loss) I0419 23:04:51.662190 27863 sgd_solver.cpp:105] Iteration 13920, lr = 0.00213493 I0419 23:05:07.431871 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:05:08.067677 27863 solver.cpp:218] Iteration 13960 (2.43821 iter/s, 16.4055s/40 iters), loss = 0.022442 I0419 23:05:08.067721 27863 solver.cpp:237] Train net output #0: loss = 0.0224421 (* 1 = 0.0224421 loss) I0419 23:05:08.067729 27863 sgd_solver.cpp:105] Iteration 13960, lr = 0.00212548 I0419 23:05:08.067871 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_13961.caffemodel I0419 23:05:11.194530 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_13961.solverstate I0419 23:05:17.835605 27863 solver.cpp:330] Iteration 13961, Testing net (#0) I0419 23:05:17.835623 27863 net.cpp:676] Ignoring source layer train-data I0419 23:05:18.439764 27863 blocking_queue.cpp:49] Waiting for data I0419 23:05:21.316110 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 23:05:22.256157 27863 solver.cpp:397] Test net output #0: accuracy = 0.547181 I0419 23:05:22.256204 27863 solver.cpp:397] Test net output #1: loss = 2.74581 (* 1 = 2.74581 loss) I0419 23:05:37.143566 27863 solver.cpp:218] Iteration 14000 (1.37571 iter/s, 29.0759s/40 iters), loss = 0.0210473 I0419 23:05:37.143604 27863 solver.cpp:237] Train net output #0: loss = 0.0210473 (* 1 = 0.0210473 loss) I0419 23:05:37.143611 27863 sgd_solver.cpp:105] Iteration 14000, lr = 0.00211607 I0419 23:05:53.255388 27863 solver.cpp:218] Iteration 14040 (2.48265 iter/s, 16.1118s/40 iters), loss = 0.0240112 I0419 23:05:53.255558 27863 solver.cpp:237] Train net output #0: loss = 0.0240112 (* 1 = 0.0240112 loss) I0419 23:05:53.255568 27863 sgd_solver.cpp:105] Iteration 14040, lr = 0.0021067 I0419 23:06:09.158146 27863 solver.cpp:218] Iteration 14080 (2.51531 iter/s, 15.9026s/40 iters), loss = 0.0203596 I0419 23:06:09.158182 27863 solver.cpp:237] Train net output #0: loss = 0.0203597 (* 1 = 0.0203597 loss) I0419 23:06:09.158190 27863 sgd_solver.cpp:105] Iteration 14080, lr = 0.00209737 I0419 23:06:25.244455 27863 solver.cpp:218] Iteration 14120 (2.48659 iter/s, 16.0863s/40 iters), loss = 0.0155054 I0419 23:06:25.244568 27863 solver.cpp:237] Train net output #0: loss = 0.0155055 (* 1 = 0.0155055 loss) I0419 23:06:25.244577 27863 sgd_solver.cpp:105] Iteration 14120, lr = 0.00208809 I0419 23:06:41.296296 27863 solver.cpp:218] Iteration 14160 (2.49194 iter/s, 16.0517s/40 iters), loss = 0.0124813 I0419 23:06:41.296336 27863 solver.cpp:237] Train net output #0: loss = 0.0124814 (* 1 = 0.0124814 loss) I0419 23:06:41.296344 27863 sgd_solver.cpp:105] Iteration 14160, lr = 0.00207884 I0419 23:06:57.389991 27863 solver.cpp:218] Iteration 14200 (2.48545 iter/s, 16.0937s/40 iters), loss = 0.0105194 I0419 23:06:57.390116 27863 solver.cpp:237] Train net output #0: loss = 0.0105194 (* 1 = 0.0105194 loss) I0419 23:06:57.390125 27863 sgd_solver.cpp:105] Iteration 14200, lr = 0.00206964 I0419 23:07:13.459817 27863 solver.cpp:218] Iteration 14240 (2.48915 iter/s, 16.0697s/40 iters), loss = 0.0765347 I0419 23:07:13.459856 27863 solver.cpp:237] Train net output #0: loss = 0.0765347 (* 1 = 0.0765347 loss) I0419 23:07:13.459864 27863 sgd_solver.cpp:105] Iteration 14240, lr = 0.00206047 I0419 23:07:29.352607 27863 solver.cpp:218] Iteration 14280 (2.51687 iter/s, 15.8928s/40 iters), loss = 0.0123874 I0419 23:07:29.352720 27863 solver.cpp:237] Train net output #0: loss = 0.0123875 (* 1 = 0.0123875 loss) I0419 23:07:29.352730 27863 sgd_solver.cpp:105] Iteration 14280, lr = 0.00205135 I0419 23:07:45.431623 27863 solver.cpp:218] Iteration 14320 (2.48773 iter/s, 16.0789s/40 iters), loss = 0.00879509 I0419 23:07:45.431659 27863 solver.cpp:237] Train net output #0: loss = 0.0087951 (* 1 = 0.0087951 loss) I0419 23:07:45.431666 27863 sgd_solver.cpp:105] Iteration 14320, lr = 0.00204227 I0419 23:08:01.533174 27863 solver.cpp:218] Iteration 14360 (2.48424 iter/s, 16.1015s/40 iters), loss = 0.175333 I0419 23:08:01.533301 27863 solver.cpp:237] Train net output #0: loss = 0.175333 (* 1 = 0.175333 loss) I0419 23:08:01.533310 27863 sgd_solver.cpp:105] Iteration 14360, lr = 0.00203323 I0419 23:08:17.614151 27863 solver.cpp:218] Iteration 14400 (2.48743 iter/s, 16.0809s/40 iters), loss = 0.017103 I0419 23:08:17.614210 27863 solver.cpp:237] Train net output #0: loss = 0.017103 (* 1 = 0.017103 loss) I0419 23:08:17.614223 27863 sgd_solver.cpp:105] Iteration 14400, lr = 0.00202423 I0419 23:08:33.679576 27863 solver.cpp:218] Iteration 14440 (2.48983 iter/s, 
16.0654s/40 iters), loss = 0.0216395 I0419 23:08:33.679767 27863 solver.cpp:237] Train net output #0: loss = 0.0216395 (* 1 = 0.0216395 loss) I0419 23:08:33.679782 27863 sgd_solver.cpp:105] Iteration 14440, lr = 0.00201526 I0419 23:08:49.708482 27863 solver.cpp:218] Iteration 14480 (2.49552 iter/s, 16.0287s/40 iters), loss = 0.023332 I0419 23:08:49.708518 27863 solver.cpp:237] Train net output #0: loss = 0.0233321 (* 1 = 0.0233321 loss) I0419 23:08:49.708525 27863 sgd_solver.cpp:105] Iteration 14480, lr = 0.00200634 I0419 23:09:05.665762 27863 solver.cpp:218] Iteration 14520 (2.5067 iter/s, 15.9572s/40 iters), loss = 0.0114488 I0419 23:09:05.665915 27863 solver.cpp:237] Train net output #0: loss = 0.0114488 (* 1 = 0.0114488 loss) I0419 23:09:05.665930 27863 sgd_solver.cpp:105] Iteration 14520, lr = 0.00199746 I0419 23:09:21.753626 27863 solver.cpp:218] Iteration 14560 (2.48637 iter/s, 16.0877s/40 iters), loss = 0.0207251 I0419 23:09:21.753662 27863 solver.cpp:237] Train net output #0: loss = 0.0207251 (* 1 = 0.0207251 loss) I0419 23:09:21.753671 27863 sgd_solver.cpp:105] Iteration 14560, lr = 0.00198862 I0419 23:09:23.932597 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:09:24.533608 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_14568.caffemodel I0419 23:09:29.032946 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_14568.solverstate I0419 23:09:32.764778 27863 solver.cpp:330] Iteration 14568, Testing net (#0) I0419 23:09:32.764797 27863 net.cpp:676] Ignoring source layer train-data I0419 23:09:36.470068 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:09:37.556740 27863 solver.cpp:397] Test net output #0: accuracy = 0.550245 I0419 23:09:37.556787 27863 solver.cpp:397] Test net output #1: loss = 2.68992 (* 1 = 2.68992 loss) I0419 23:09:49.704720 27863 solver.cpp:218] Iteration 14600 (1.43107 iter/s, 27.9511s/40 iters), loss = 0.0090101 I0419 23:09:49.704759 27863 solver.cpp:237] Train net output #0: loss = 0.00901013 (* 1 = 0.00901013 loss) I0419 23:09:49.704767 27863 sgd_solver.cpp:105] Iteration 14600, lr = 0.00197981 I0419 23:10:05.771347 27863 solver.cpp:218] Iteration 14640 (2.48964 iter/s, 16.0666s/40 iters), loss = 0.0235501 I0419 23:10:05.771384 27863 solver.cpp:237] Train net output #0: loss = 0.0235502 (* 1 = 0.0235502 loss) I0419 23:10:05.771391 27863 sgd_solver.cpp:105] Iteration 14640, lr = 0.00197105 I0419 23:10:21.890324 27863 solver.cpp:218] Iteration 14680 (2.48155 iter/s, 16.1189s/40 iters), loss = 0.0330341 I0419 23:10:21.890431 27863 solver.cpp:237] Train net output #0: loss = 0.0330341 (* 1 = 0.0330341 loss) I0419 23:10:21.890440 27863 sgd_solver.cpp:105] Iteration 14680, lr = 0.00196232 I0419 23:10:37.962589 27863 solver.cpp:218] Iteration 14720 (2.48877 iter/s, 16.0722s/40 iters), loss = 0.0119345 I0419 23:10:37.962626 27863 solver.cpp:237] Train net output #0: loss = 0.0119345 (* 1 = 0.0119345 loss) I0419 23:10:37.962633 27863 sgd_solver.cpp:105] Iteration 14720, lr = 0.00195363 I0419 23:10:54.081426 27863 solver.cpp:218] Iteration 14760 (2.48157 iter/s, 16.1188s/40 iters), loss = 0.0348807 I0419 23:10:54.081538 27863 solver.cpp:237] Train net output #0: loss = 0.0348807 (* 1 = 0.0348807 loss) I0419 23:10:54.081547 27863 sgd_solver.cpp:105] Iteration 14760, lr = 0.00194498 I0419 23:11:10.170131 27863 solver.cpp:218] Iteration 14800 (2.48623 iter/s, 16.0886s/40 iters), loss = 0.0200636 I0419 23:11:10.170168 27863 solver.cpp:237] Train net output #0: loss = 
0.0200636 (* 1 = 0.0200636 loss) I0419 23:11:10.170176 27863 sgd_solver.cpp:105] Iteration 14800, lr = 0.00193637 I0419 23:11:26.257455 27863 solver.cpp:218] Iteration 14840 (2.48643 iter/s, 16.0873s/40 iters), loss = 0.0213709 I0419 23:11:26.257606 27863 solver.cpp:237] Train net output #0: loss = 0.0213709 (* 1 = 0.0213709 loss) I0419 23:11:26.257616 27863 sgd_solver.cpp:105] Iteration 14840, lr = 0.0019278 I0419 23:11:40.695611 27863 blocking_queue.cpp:49] Waiting for data I0419 23:11:42.359109 27863 solver.cpp:218] Iteration 14880 (2.48424 iter/s, 16.1015s/40 iters), loss = 0.0405363 I0419 23:11:42.359148 27863 solver.cpp:237] Train net output #0: loss = 0.0405363 (* 1 = 0.0405363 loss) I0419 23:11:42.359156 27863 sgd_solver.cpp:105] Iteration 14880, lr = 0.00191926 I0419 23:11:58.408591 27863 solver.cpp:218] Iteration 14920 (2.4923 iter/s, 16.0494s/40 iters), loss = 0.0168516 I0419 23:11:58.408710 27863 solver.cpp:237] Train net output #0: loss = 0.0168516 (* 1 = 0.0168516 loss) I0419 23:11:58.408718 27863 sgd_solver.cpp:105] Iteration 14920, lr = 0.00191076 I0419 23:12:14.512248 27863 solver.cpp:218] Iteration 14960 (2.48392 iter/s, 16.1035s/40 iters), loss = 0.0187314 I0419 23:12:14.512282 27863 solver.cpp:237] Train net output #0: loss = 0.0187314 (* 1 = 0.0187314 loss) I0419 23:12:14.512290 27863 sgd_solver.cpp:105] Iteration 14960, lr = 0.00190231 I0419 23:12:30.435276 27863 solver.cpp:218] Iteration 15000 (2.51209 iter/s, 15.923s/40 iters), loss = 0.0112809 I0419 23:12:30.435391 27863 solver.cpp:237] Train net output #0: loss = 0.0112809 (* 1 = 0.0112809 loss) I0419 23:12:30.435400 27863 sgd_solver.cpp:105] Iteration 15000, lr = 0.00189388 I0419 23:12:46.518683 27863 solver.cpp:218] Iteration 15040 (2.48705 iter/s, 16.0833s/40 iters), loss = 0.0257254 I0419 23:12:46.518719 27863 solver.cpp:237] Train net output #0: loss = 0.0257254 (* 1 = 0.0257254 loss) I0419 23:12:46.518726 27863 sgd_solver.cpp:105] Iteration 15040, lr = 0.0018855 I0419 23:13:02.610900 27863 solver.cpp:218] Iteration 15080 (2.48568 iter/s, 16.0922s/40 iters), loss = 0.00499501 I0419 23:13:02.611023 27863 solver.cpp:237] Train net output #0: loss = 0.004995 (* 1 = 0.004995 loss) I0419 23:13:02.611032 27863 sgd_solver.cpp:105] Iteration 15080, lr = 0.00187715 I0419 23:13:18.703243 27863 solver.cpp:218] Iteration 15120 (2.48567 iter/s, 16.0922s/40 iters), loss = 0.00590482 I0419 23:13:18.703279 27863 solver.cpp:237] Train net output #0: loss = 0.0059048 (* 1 = 0.0059048 loss) I0419 23:13:18.703287 27863 sgd_solver.cpp:105] Iteration 15120, lr = 0.00186884 I0419 23:13:34.780948 27863 solver.cpp:218] Iteration 15160 (2.48792 iter/s, 16.0777s/40 iters), loss = 0.0241536 I0419 23:13:34.781073 27863 solver.cpp:237] Train net output #0: loss = 0.0241536 (* 1 = 0.0241536 loss) I0419 23:13:34.781082 27863 sgd_solver.cpp:105] Iteration 15160, lr = 0.00186057 I0419 23:13:39.749687 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:13:40.355211 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_15175.caffemodel I0419 23:13:43.440119 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_15175.solverstate I0419 23:13:46.569245 27863 solver.cpp:330] Iteration 15175, Testing net (#0) I0419 23:13:46.569263 27863 net.cpp:676] Ignoring source layer train-data I0419 23:13:50.097005 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 23:13:51.115238 27863 solver.cpp:397] Test net output #0: accuracy = 0.550858 I0419 23:13:51.115276 27863 solver.cpp:397] Test net output #1: loss = 2.71725 (* 1 = 2.71725 loss) I0419 23:14:00.531002 27863 solver.cpp:218] Iteration 15200 (1.5534 iter/s, 25.75s/40 iters), loss = 0.0809868 I0419 23:14:00.531038 27863 solver.cpp:237] Train net output #0: loss = 0.0809868 (* 1 = 0.0809868 loss) I0419 23:14:00.531046 27863 sgd_solver.cpp:105] Iteration 15200, lr = 0.00185233 I0419 23:14:16.605932 27863 solver.cpp:218] Iteration 15240 (2.48835 iter/s, 16.0749s/40 iters), loss = 0.0151827 I0419 23:14:16.606017 27863 solver.cpp:237] Train net output #0: loss = 0.0151827 (* 1 = 0.0151827 loss) I0419 23:14:16.606026 27863 sgd_solver.cpp:105] Iteration 15240, lr = 0.00184413 I0419 23:14:32.714422 27863 solver.cpp:218] Iteration 15280 (2.48317 iter/s, 16.1084s/40 iters), loss = 0.0489809 I0419 23:14:32.714459 27863 solver.cpp:237] Train net output #0: loss = 0.0489809 (* 1 = 0.0489809 loss) I0419 23:14:32.714468 27863 sgd_solver.cpp:105] Iteration 15280, lr = 0.00183596 I0419 23:14:48.798714 27863 solver.cpp:218] Iteration 15320 (2.4869 iter/s, 16.0843s/40 iters), loss = 0.0332742 I0419 23:14:48.798848 27863 solver.cpp:237] Train net output #0: loss = 0.0332742 (* 1 = 0.0332742 loss) I0419 23:14:48.798857 27863 sgd_solver.cpp:105] Iteration 15320, lr = 0.00182783 I0419 23:15:04.882972 27863 solver.cpp:218] Iteration 15360 (2.48692 iter/s, 16.0841s/40 iters), loss = 0.0372654 I0419 23:15:04.883009 27863 solver.cpp:237] Train net output #0: loss = 0.0372653 (* 1 = 0.0372653 loss) I0419 23:15:04.883016 27863 sgd_solver.cpp:105] Iteration 15360, lr = 0.00181974 I0419 23:15:20.761565 27863 solver.cpp:218] Iteration 15400 (2.51912 iter/s, 15.8786s/40 iters), loss = 0.0540374 I0419 23:15:20.761649 27863 solver.cpp:237] Train net output #0: loss = 0.0540374 (* 1 = 0.0540374 loss) I0419 23:15:20.761658 27863 sgd_solver.cpp:105] Iteration 15400, lr = 0.00181168 I0419 23:15:36.744235 27863 solver.cpp:218] Iteration 15440 (2.50272 iter/s, 15.9826s/40 iters), loss = 0.00262664 I0419 23:15:36.744271 27863 solver.cpp:237] Train net output #0: loss = 0.00262663 (* 1 = 0.00262663 loss) I0419 23:15:36.744279 27863 sgd_solver.cpp:105] Iteration 15440, lr = 0.00180366 I0419 23:15:52.678894 27863 solver.cpp:218] Iteration 15480 (2.51026 iter/s, 15.9346s/40 iters), loss = 0.0230237 I0419 23:15:52.679005 27863 solver.cpp:237] Train net output #0: loss = 0.0230237 (* 1 = 0.0230237 loss) I0419 23:15:52.679014 27863 sgd_solver.cpp:105] Iteration 15480, lr = 0.00179568 I0419 23:16:08.696364 27863 solver.cpp:218] Iteration 15520 (2.49729 iter/s, 16.0174s/40 iters), loss = 0.00107224 I0419 23:16:08.696400 27863 solver.cpp:237] Train net output #0: loss = 0.00107222 (* 1 = 0.00107222 loss) I0419 23:16:08.696408 27863 sgd_solver.cpp:105] Iteration 15520, lr = 0.00178773 I0419 23:16:24.766757 27863 solver.cpp:218] Iteration 15560 (2.48905 iter/s, 16.0704s/40 iters), loss = 0.0404578 I0419 23:16:24.766876 27863 solver.cpp:237] Train net output #0: loss = 0.0404578 (* 1 = 0.0404578 loss) I0419 23:16:24.766886 27863 sgd_solver.cpp:105] Iteration 15560, lr = 0.00177981 I0419 23:16:40.855542 27863 solver.cpp:218] Iteration 15600 (2.48622 iter/s, 16.0887s/40 iters), loss = 0.0220762 I0419 23:16:40.855576 27863 solver.cpp:237] Train net output #0: loss = 0.0220762 (* 1 = 0.0220762 loss) I0419 23:16:40.855584 27863 sgd_solver.cpp:105] Iteration 15600, lr = 0.00177193 I0419 23:16:56.918905 27863 solver.cpp:218] Iteration 15640 (2.49014 
iter/s, 16.0633s/40 iters), loss = 0.00226359 I0419 23:16:56.918978 27863 solver.cpp:237] Train net output #0: loss = 0.00226358 (* 1 = 0.00226358 loss) I0419 23:16:56.918987 27863 sgd_solver.cpp:105] Iteration 15640, lr = 0.00176409 I0419 23:17:12.966799 27863 solver.cpp:218] Iteration 15680 (2.49255 iter/s, 16.0478s/40 iters), loss = 0.0100553 I0419 23:17:12.966837 27863 solver.cpp:237] Train net output #0: loss = 0.0100553 (* 1 = 0.0100553 loss) I0419 23:17:12.966845 27863 sgd_solver.cpp:105] Iteration 15680, lr = 0.00175628 I0419 23:17:28.931722 27863 solver.cpp:218] Iteration 15720 (2.5055 iter/s, 15.9649s/40 iters), loss = 0.0262595 I0419 23:17:28.931807 27863 solver.cpp:237] Train net output #0: loss = 0.0262595 (* 1 = 0.0262595 loss) I0419 23:17:28.931818 27863 sgd_solver.cpp:105] Iteration 15720, lr = 0.0017485 I0419 23:17:45.020516 27863 solver.cpp:218] Iteration 15760 (2.48622 iter/s, 16.0887s/40 iters), loss = 0.00776311 I0419 23:17:45.020556 27863 solver.cpp:237] Train net output #0: loss = 0.0077631 (* 1 = 0.0077631 loss) I0419 23:17:45.020565 27863 sgd_solver.cpp:105] Iteration 15760, lr = 0.00174076 I0419 23:17:52.772069 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:17:53.399564 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_15782.caffemodel I0419 23:17:57.426785 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_15782.solverstate I0419 23:17:59.779551 27863 solver.cpp:330] Iteration 15782, Testing net (#0) I0419 23:17:59.779665 27863 net.cpp:676] Ignoring source layer train-data I0419 23:18:03.394234 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:18:04.560686 27863 solver.cpp:397] Test net output #0: accuracy = 0.544118 I0419 23:18:04.560731 27863 solver.cpp:397] Test net output #1: loss = 2.69519 (* 1 = 2.69519 loss) I0419 23:18:05.463371 27863 blocking_queue.cpp:49] Waiting for data I0419 23:18:11.144557 27863 solver.cpp:218] Iteration 15800 (1.53116 iter/s, 26.124s/40 iters), loss = 0.00302385 I0419 23:18:11.144594 27863 solver.cpp:237] Train net output #0: loss = 0.00302382 (* 1 = 0.00302382 loss) I0419 23:18:11.144603 27863 sgd_solver.cpp:105] Iteration 15800, lr = 0.00173305 I0419 23:18:27.135848 27863 solver.cpp:218] Iteration 15840 (2.50137 iter/s, 15.9913s/40 iters), loss = 0.06426 I0419 23:18:27.135885 27863 solver.cpp:237] Train net output #0: loss = 0.06426 (* 1 = 0.06426 loss) I0419 23:18:27.135892 27863 sgd_solver.cpp:105] Iteration 15840, lr = 0.00172538 I0419 23:18:43.105052 27863 solver.cpp:218] Iteration 15880 (2.50483 iter/s, 15.9692s/40 iters), loss = 0.0494193 I0419 23:18:43.105182 27863 solver.cpp:237] Train net output #0: loss = 0.0494192 (* 1 = 0.0494192 loss) I0419 23:18:43.105191 27863 sgd_solver.cpp:105] Iteration 15880, lr = 0.00171774 I0419 23:18:59.191792 27863 solver.cpp:218] Iteration 15920 (2.48654 iter/s, 16.0866s/40 iters), loss = 0.00907987 I0419 23:18:59.191825 27863 solver.cpp:237] Train net output #0: loss = 0.00907985 (* 1 = 0.00907985 loss) I0419 23:18:59.191833 27863 sgd_solver.cpp:105] Iteration 15920, lr = 0.00171014 I0419 23:19:15.274309 27863 solver.cpp:218] Iteration 15960 (2.48718 iter/s, 16.0825s/40 iters), loss = 0.0420989 I0419 23:19:15.274439 27863 solver.cpp:237] Train net output #0: loss = 0.0420989 (* 1 = 0.0420989 loss) I0419 23:19:15.274449 27863 sgd_solver.cpp:105] Iteration 15960, lr = 0.00170257 I0419 23:19:31.381615 27863 solver.cpp:218] Iteration 16000 (2.48336 iter/s, 16.1072s/40 iters), loss = 
0.0154519 I0419 23:19:31.381651 27863 solver.cpp:237] Train net output #0: loss = 0.0154519 (* 1 = 0.0154519 loss) I0419 23:19:31.381659 27863 sgd_solver.cpp:105] Iteration 16000, lr = 0.00169503 I0419 23:19:47.460788 27863 solver.cpp:218] Iteration 16040 (2.48769 iter/s, 16.0791s/40 iters), loss = 0.0121085 I0419 23:19:47.460902 27863 solver.cpp:237] Train net output #0: loss = 0.0121084 (* 1 = 0.0121084 loss) I0419 23:19:47.460912 27863 sgd_solver.cpp:105] Iteration 16040, lr = 0.00168752 I0419 23:20:03.550848 27863 solver.cpp:218] Iteration 16080 (2.48602 iter/s, 16.09s/40 iters), loss = 0.00387434 I0419 23:20:03.550885 27863 solver.cpp:237] Train net output #0: loss = 0.00387431 (* 1 = 0.00387431 loss) I0419 23:20:03.550892 27863 sgd_solver.cpp:105] Iteration 16080, lr = 0.00168005 I0419 23:20:19.599071 27863 solver.cpp:218] Iteration 16120 (2.49249 iter/s, 16.0482s/40 iters), loss = 0.0119489 I0419 23:20:19.599133 27863 solver.cpp:237] Train net output #0: loss = 0.0119489 (* 1 = 0.0119489 loss) I0419 23:20:19.599141 27863 sgd_solver.cpp:105] Iteration 16120, lr = 0.00167261 I0419 23:20:35.680167 27863 solver.cpp:218] Iteration 16160 (2.4874 iter/s, 16.081s/40 iters), loss = 0.0312316 I0419 23:20:35.680202 27863 solver.cpp:237] Train net output #0: loss = 0.0312315 (* 1 = 0.0312315 loss) I0419 23:20:35.680209 27863 sgd_solver.cpp:105] Iteration 16160, lr = 0.00166521 I0419 23:20:51.793274 27863 solver.cpp:218] Iteration 16200 (2.48246 iter/s, 16.1131s/40 iters), loss = 0.023795 I0419 23:20:51.793357 27863 solver.cpp:237] Train net output #0: loss = 0.023795 (* 1 = 0.023795 loss) I0419 23:20:51.793366 27863 sgd_solver.cpp:105] Iteration 16200, lr = 0.00165784 I0419 23:21:07.893183 27863 solver.cpp:218] Iteration 16240 (2.4845 iter/s, 16.0998s/40 iters), loss = 0.0267566 I0419 23:21:07.893220 27863 solver.cpp:237] Train net output #0: loss = 0.0267566 (* 1 = 0.0267566 loss) I0419 23:21:07.893229 27863 sgd_solver.cpp:105] Iteration 16240, lr = 0.0016505 I0419 23:21:23.951866 27863 solver.cpp:218] Iteration 16280 (2.49087 iter/s, 16.0586s/40 iters), loss = 0.0321761 I0419 23:21:23.951988 27863 solver.cpp:237] Train net output #0: loss = 0.0321761 (* 1 = 0.0321761 loss) I0419 23:21:23.951998 27863 sgd_solver.cpp:105] Iteration 16280, lr = 0.00164319 I0419 23:21:39.862990 27863 solver.cpp:218] Iteration 16320 (2.51398 iter/s, 15.911s/40 iters), loss = 0.00572111 I0419 23:21:39.863027 27863 solver.cpp:237] Train net output #0: loss = 0.00572108 (* 1 = 0.00572108 loss) I0419 23:21:39.863034 27863 sgd_solver.cpp:105] Iteration 16320, lr = 0.00163591 I0419 23:21:55.950345 27863 solver.cpp:218] Iteration 16360 (2.48643 iter/s, 16.0873s/40 iters), loss = 0.0344095 I0419 23:21:55.950414 27863 solver.cpp:237] Train net output #0: loss = 0.0344094 (* 1 = 0.0344094 loss) I0419 23:21:55.950423 27863 sgd_solver.cpp:105] Iteration 16360, lr = 0.00162867 I0419 23:22:06.349265 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:22:06.988709 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_16389.caffemodel I0419 23:22:10.762959 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_16389.solverstate I0419 23:22:15.639768 27863 solver.cpp:330] Iteration 16389, Testing net (#0) I0419 23:22:15.639788 27863 net.cpp:676] Ignoring source layer train-data I0419 23:22:18.953994 27881 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 23:22:20.056716 27863 solver.cpp:397] Test net output #0: accuracy = 0.539828 I0419 23:22:20.056761 27863 solver.cpp:397] Test net output #1: loss = 2.68807 (* 1 = 2.68807 loss) I0419 23:22:23.777521 27863 solver.cpp:218] Iteration 16400 (1.43745 iter/s, 27.8271s/40 iters), loss = 0.020089 I0419 23:22:23.777555 27863 solver.cpp:237] Train net output #0: loss = 0.020089 (* 1 = 0.020089 loss) I0419 23:22:23.777563 27863 sgd_solver.cpp:105] Iteration 16400, lr = 0.00162146 I0419 23:22:39.773025 27863 solver.cpp:218] Iteration 16440 (2.50071 iter/s, 15.9955s/40 iters), loss = 0.0378885 I0419 23:22:39.773123 27863 solver.cpp:237] Train net output #0: loss = 0.0378885 (* 1 = 0.0378885 loss) I0419 23:22:39.773131 27863 sgd_solver.cpp:105] Iteration 16440, lr = 0.00161428 I0419 23:22:55.850960 27863 solver.cpp:218] Iteration 16480 (2.4879 iter/s, 16.0778s/40 iters), loss = 0.0335365 I0419 23:22:55.850996 27863 solver.cpp:237] Train net output #0: loss = 0.0335365 (* 1 = 0.0335365 loss) I0419 23:22:55.851004 27863 sgd_solver.cpp:105] Iteration 16480, lr = 0.00160713 I0419 23:23:11.940102 27863 solver.cpp:218] Iteration 16520 (2.48615 iter/s, 16.0891s/40 iters), loss = 0.00269466 I0419 23:23:11.940186 27863 solver.cpp:237] Train net output #0: loss = 0.00269462 (* 1 = 0.00269462 loss) I0419 23:23:11.940196 27863 sgd_solver.cpp:105] Iteration 16520, lr = 0.00160002 I0419 23:23:28.033332 27863 solver.cpp:218] Iteration 16560 (2.48553 iter/s, 16.0931s/40 iters), loss = 0.0437248 I0419 23:23:28.033367 27863 solver.cpp:237] Train net output #0: loss = 0.0437248 (* 1 = 0.0437248 loss) I0419 23:23:28.033375 27863 sgd_solver.cpp:105] Iteration 16560, lr = 0.00159293 I0419 23:23:44.128481 27863 solver.cpp:218] Iteration 16600 (2.48523 iter/s, 16.0951s/40 iters), loss = 0.0694906 I0419 23:23:44.128604 27863 solver.cpp:237] Train net output #0: loss = 0.0694906 (* 1 = 0.0694906 loss) I0419 23:23:44.128613 27863 sgd_solver.cpp:105] Iteration 16600, lr = 0.00158588 I0419 23:24:00.212872 27863 solver.cpp:218] Iteration 16640 (2.4869 iter/s, 16.0843s/40 iters), loss = 0.0387005 I0419 23:24:00.212906 27863 solver.cpp:237] Train net output #0: loss = 0.0387005 (* 1 = 0.0387005 loss) I0419 23:24:00.212914 27863 sgd_solver.cpp:105] Iteration 16640, lr = 0.00157886 I0419 23:24:16.330157 27863 solver.cpp:218] Iteration 16680 (2.48181 iter/s, 16.1173s/40 iters), loss = 0.0280161 I0419 23:24:16.330314 27863 solver.cpp:237] Train net output #0: loss = 0.028016 (* 1 = 0.028016 loss) I0419 23:24:16.330324 27863 sgd_solver.cpp:105] Iteration 16680, lr = 0.00157187 I0419 23:24:32.458215 27863 solver.cpp:218] Iteration 16720 (2.48017 iter/s, 16.1279s/40 iters), loss = 0.0472207 I0419 23:24:32.458256 27863 solver.cpp:237] Train net output #0: loss = 0.0472206 (* 1 = 0.0472206 loss) I0419 23:24:32.458263 27863 sgd_solver.cpp:105] Iteration 16720, lr = 0.00156491 I0419 23:24:40.799708 27863 blocking_queue.cpp:49] Waiting for data I0419 23:24:48.511688 27863 solver.cpp:218] Iteration 16760 (2.49168 iter/s, 16.0534s/40 iters), loss = 0.0203627 I0419 23:24:48.511807 27863 solver.cpp:237] Train net output #0: loss = 0.0203627 (* 1 = 0.0203627 loss) I0419 23:24:48.511816 27863 sgd_solver.cpp:105] Iteration 16760, lr = 0.00155798 I0419 23:25:04.600709 27863 solver.cpp:218] Iteration 16800 (2.48618 iter/s, 16.0889s/40 iters), loss = 0.0172858 I0419 23:25:04.600746 27863 solver.cpp:237] Train net output #0: loss = 0.0172858 (* 1 = 0.0172858 loss) I0419 23:25:04.600754 27863 sgd_solver.cpp:105] Iteration 16800, lr = 0.00155108 I0419 
23:25:20.654440 27863 solver.cpp:218] Iteration 16840 (2.49164 iter/s, 16.0537s/40 iters), loss = 0.0114336 I0419 23:25:20.654565 27863 solver.cpp:237] Train net output #0: loss = 0.0114336 (* 1 = 0.0114336 loss) I0419 23:25:20.654575 27863 sgd_solver.cpp:105] Iteration 16840, lr = 0.00154422 I0419 23:25:36.748930 27863 solver.cpp:218] Iteration 16880 (2.48534 iter/s, 16.0944s/40 iters), loss = 0.0203488 I0419 23:25:36.748970 27863 solver.cpp:237] Train net output #0: loss = 0.0203487 (* 1 = 0.0203487 loss) I0419 23:25:36.748977 27863 sgd_solver.cpp:105] Iteration 16880, lr = 0.00153738 I0419 23:25:52.838227 27863 solver.cpp:218] Iteration 16920 (2.48613 iter/s, 16.0893s/40 iters), loss = 0.017895 I0419 23:25:52.838361 27863 solver.cpp:237] Train net output #0: loss = 0.0178949 (* 1 = 0.0178949 loss) I0419 23:25:52.838369 27863 sgd_solver.cpp:105] Iteration 16920, lr = 0.00153057 I0419 23:26:08.912173 27863 solver.cpp:218] Iteration 16960 (2.48852 iter/s, 16.0738s/40 iters), loss = 0.0375178 I0419 23:26:08.912209 27863 solver.cpp:237] Train net output #0: loss = 0.0375177 (* 1 = 0.0375177 loss) I0419 23:26:08.912217 27863 sgd_solver.cpp:105] Iteration 16960, lr = 0.0015238 I0419 23:26:22.278030 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:26:22.940697 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_16996.caffemodel I0419 23:26:27.206215 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_16996.solverstate I0419 23:26:30.103225 27863 solver.cpp:330] Iteration 16996, Testing net (#0) I0419 23:26:30.103247 27863 net.cpp:676] Ignoring source layer train-data I0419 23:26:33.652246 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:26:34.910343 27863 solver.cpp:397] Test net output #0: accuracy = 0.552696 I0419 23:26:34.910408 27863 solver.cpp:397] Test net output #1: loss = 2.68867 (* 1 = 2.68867 loss) I0419 23:26:35.860911 27863 solver.cpp:218] Iteration 17000 (1.4843 iter/s, 26.9487s/40 iters), loss = 0.000658972 I0419 23:26:35.860949 27863 solver.cpp:237] Train net output #0: loss = 0.000658937 (* 1 = 0.000658937 loss) I0419 23:26:35.860956 27863 sgd_solver.cpp:105] Iteration 17000, lr = 0.00151705 I0419 23:26:51.802440 27863 solver.cpp:218] Iteration 17040 (2.50918 iter/s, 15.9415s/40 iters), loss = 0.017393 I0419 23:26:51.802480 27863 solver.cpp:237] Train net output #0: loss = 0.017393 (* 1 = 0.017393 loss) I0419 23:26:51.802489 27863 sgd_solver.cpp:105] Iteration 17040, lr = 0.00151033 I0419 23:27:07.798566 27863 solver.cpp:218] Iteration 17080 (2.50061 iter/s, 15.9961s/40 iters), loss = 0.00414568 I0419 23:27:07.798661 27863 solver.cpp:237] Train net output #0: loss = 0.00414566 (* 1 = 0.00414566 loss) I0419 23:27:07.798671 27863 sgd_solver.cpp:105] Iteration 17080, lr = 0.00150365 I0419 23:27:23.495787 27863 solver.cpp:218] Iteration 17120 (2.54824 iter/s, 15.6971s/40 iters), loss = 0.0138694 I0419 23:27:23.495826 27863 solver.cpp:237] Train net output #0: loss = 0.0138693 (* 1 = 0.0138693 loss) I0419 23:27:23.495833 27863 sgd_solver.cpp:105] Iteration 17120, lr = 0.00149699 I0419 23:27:39.601464 27863 solver.cpp:218] Iteration 17160 (2.4836 iter/s, 16.1056s/40 iters), loss = 0.01949 I0419 23:27:39.601617 27863 solver.cpp:237] Train net output #0: loss = 0.0194899 (* 1 = 0.0194899 loss) I0419 23:27:39.601626 27863 sgd_solver.cpp:105] Iteration 17160, lr = 0.00149036 I0419 23:27:55.670186 27863 solver.cpp:218] Iteration 17200 (2.48933 iter/s, 16.0686s/40 iters), loss = 0.0147974 
I0419 23:27:55.670223 27863 solver.cpp:237] Train net output #0: loss = 0.0147973 (* 1 = 0.0147973 loss) I0419 23:27:55.670231 27863 sgd_solver.cpp:105] Iteration 17200, lr = 0.00148376 I0419 23:28:11.750438 27863 solver.cpp:218] Iteration 17240 (2.48753 iter/s, 16.0802s/40 iters), loss = 0.00795631 I0419 23:28:11.750558 27863 solver.cpp:237] Train net output #0: loss = 0.00795627 (* 1 = 0.00795627 loss) I0419 23:28:11.750567 27863 sgd_solver.cpp:105] Iteration 17240, lr = 0.0014772 I0419 23:28:27.821211 27863 solver.cpp:218] Iteration 17280 (2.48901 iter/s, 16.0707s/40 iters), loss = 0.0069412 I0419 23:28:27.821249 27863 solver.cpp:237] Train net output #0: loss = 0.00694117 (* 1 = 0.00694117 loss) I0419 23:28:27.821256 27863 sgd_solver.cpp:105] Iteration 17280, lr = 0.00147065 I0419 23:28:43.864265 27863 solver.cpp:218] Iteration 17320 (2.4933 iter/s, 16.043s/40 iters), loss = 0.00345717 I0419 23:28:43.864331 27863 solver.cpp:237] Train net output #0: loss = 0.00345714 (* 1 = 0.00345714 loss) I0419 23:28:43.864339 27863 sgd_solver.cpp:105] Iteration 17320, lr = 0.00146414 I0419 23:28:59.823766 27863 solver.cpp:218] Iteration 17360 (2.50635 iter/s, 15.9594s/40 iters), loss = 0.00487878 I0419 23:28:59.823803 27863 solver.cpp:237] Train net output #0: loss = 0.00487875 (* 1 = 0.00487875 loss) I0419 23:28:59.823812 27863 sgd_solver.cpp:105] Iteration 17360, lr = 0.00145766 I0419 23:29:15.931124 27863 solver.cpp:218] Iteration 17400 (2.48334 iter/s, 16.1073s/40 iters), loss = 0.0609308 I0419 23:29:15.931243 27863 solver.cpp:237] Train net output #0: loss = 0.0609307 (* 1 = 0.0609307 loss) I0419 23:29:15.931252 27863 sgd_solver.cpp:105] Iteration 17400, lr = 0.00145121 I0419 23:29:32.017719 27863 solver.cpp:218] Iteration 17440 (2.48656 iter/s, 16.0865s/40 iters), loss = 0.0239871 I0419 23:29:32.017755 27863 solver.cpp:237] Train net output #0: loss = 0.0239871 (* 1 = 0.0239871 loss) I0419 23:29:32.017762 27863 sgd_solver.cpp:105] Iteration 17440, lr = 0.00144478 I0419 23:29:48.111284 27863 solver.cpp:218] Iteration 17480 (2.48547 iter/s, 16.0935s/40 iters), loss = 0.00567382 I0419 23:29:48.111419 27863 solver.cpp:237] Train net output #0: loss = 0.00567381 (* 1 = 0.00567381 loss) I0419 23:29:48.111433 27863 sgd_solver.cpp:105] Iteration 17480, lr = 0.00143839 I0419 23:30:04.189297 27863 solver.cpp:218] Iteration 17520 (2.48789 iter/s, 16.0779s/40 iters), loss = 0.0194538 I0419 23:30:04.189332 27863 solver.cpp:237] Train net output #0: loss = 0.0194538 (* 1 = 0.0194538 loss) I0419 23:30:04.189340 27863 sgd_solver.cpp:105] Iteration 17520, lr = 0.00143202 I0419 23:30:20.265022 27863 solver.cpp:218] Iteration 17560 (2.48823 iter/s, 16.0757s/40 iters), loss = 0.00490543 I0419 23:30:20.265142 27863 solver.cpp:237] Train net output #0: loss = 0.00490541 (* 1 = 0.00490541 loss) I0419 23:30:20.265151 27863 sgd_solver.cpp:105] Iteration 17560, lr = 0.00142568 I0419 23:30:36.347640 27863 solver.cpp:218] Iteration 17600 (2.48717 iter/s, 16.0825s/40 iters), loss = 0.000844397 I0419 23:30:36.347674 27863 solver.cpp:237] Train net output #0: loss = 0.000844385 (* 1 = 0.000844385 loss) I0419 23:30:36.347683 27863 sgd_solver.cpp:105] Iteration 17600, lr = 0.00141937 I0419 23:30:36.405485 27871 data_layer.cpp:73] Restarting data prefetching from start. 
I0419 23:30:37.112893 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_17603.caffemodel I0419 23:30:41.487341 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_17603.solverstate I0419 23:30:45.317898 27863 solver.cpp:330] Iteration 17603, Testing net (#0) I0419 23:30:45.317914 27863 net.cpp:676] Ignoring source layer train-data I0419 23:30:48.788841 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:30:50.094614 27863 solver.cpp:397] Test net output #0: accuracy = 0.550858 I0419 23:30:50.094660 27863 solver.cpp:397] Test net output #1: loss = 2.70259 (* 1 = 2.70259 loss) I0419 23:31:04.261139 27863 solver.cpp:218] Iteration 17640 (1.433 iter/s, 27.9135s/40 iters), loss = 0.0170771 I0419 23:31:04.261260 27863 solver.cpp:237] Train net output #0: loss = 0.017077 (* 1 = 0.017077 loss) I0419 23:31:04.261270 27863 sgd_solver.cpp:105] Iteration 17640, lr = 0.00141308 I0419 23:31:08.649593 27863 blocking_queue.cpp:49] Waiting for data I0419 23:31:20.384207 27863 solver.cpp:218] Iteration 17680 (2.48093 iter/s, 16.123s/40 iters), loss = 0.00200557 I0419 23:31:20.384244 27863 solver.cpp:237] Train net output #0: loss = 0.00200554 (* 1 = 0.00200554 loss) I0419 23:31:20.384253 27863 sgd_solver.cpp:105] Iteration 17680, lr = 0.00140683 I0419 23:31:36.448349 27863 solver.cpp:218] Iteration 17720 (2.49002 iter/s, 16.0641s/40 iters), loss = 0.0105732 I0419 23:31:36.448470 27863 solver.cpp:237] Train net output #0: loss = 0.0105731 (* 1 = 0.0105731 loss) I0419 23:31:36.448479 27863 sgd_solver.cpp:105] Iteration 17720, lr = 0.0014006 I0419 23:31:52.498878 27863 solver.cpp:218] Iteration 17760 (2.49215 iter/s, 16.0504s/40 iters), loss = 0.0171963 I0419 23:31:52.498914 27863 solver.cpp:237] Train net output #0: loss = 0.0171962 (* 1 = 0.0171962 loss) I0419 23:31:52.498922 27863 sgd_solver.cpp:105] Iteration 17760, lr = 0.0013944 I0419 23:32:08.607381 27863 solver.cpp:218] Iteration 17800 (2.48317 iter/s, 16.1085s/40 iters), loss = 0.0253724 I0419 23:32:08.607467 27863 solver.cpp:237] Train net output #0: loss = 0.0253724 (* 1 = 0.0253724 loss) I0419 23:32:08.607476 27863 sgd_solver.cpp:105] Iteration 17800, lr = 0.00138822 I0419 23:32:24.689136 27863 solver.cpp:218] Iteration 17840 (2.4873 iter/s, 16.0817s/40 iters), loss = 0.00361112 I0419 23:32:24.689173 27863 solver.cpp:237] Train net output #0: loss = 0.00361107 (* 1 = 0.00361107 loss) I0419 23:32:24.689182 27863 sgd_solver.cpp:105] Iteration 17840, lr = 0.00138208 I0419 23:32:40.808444 27863 solver.cpp:218] Iteration 17880 (2.4815 iter/s, 16.1193s/40 iters), loss = 0.0102756 I0419 23:32:40.808565 27863 solver.cpp:237] Train net output #0: loss = 0.0102755 (* 1 = 0.0102755 loss) I0419 23:32:40.808574 27863 sgd_solver.cpp:105] Iteration 17880, lr = 0.00137596 I0419 23:32:56.707056 27863 solver.cpp:218] Iteration 17920 (2.51596 iter/s, 15.8985s/40 iters), loss = 0.0204987 I0419 23:32:56.707096 27863 solver.cpp:237] Train net output #0: loss = 0.0204986 (* 1 = 0.0204986 loss) I0419 23:32:56.707103 27863 sgd_solver.cpp:105] Iteration 17920, lr = 0.00136987 I0419 23:33:12.790410 27863 solver.cpp:218] Iteration 17960 (2.48705 iter/s, 16.0833s/40 iters), loss = 0.0453436 I0419 23:33:12.790525 27863 solver.cpp:237] Train net output #0: loss = 0.0453435 (* 1 = 0.0453435 loss) I0419 23:33:12.790534 27863 sgd_solver.cpp:105] Iteration 17960, lr = 0.0013638 I0419 23:33:28.881855 27863 solver.cpp:218] Iteration 18000 (2.48581 iter/s, 16.0913s/40 iters), loss = 0.0290209 I0419 
23:33:28.881896 27863 solver.cpp:237] Train net output #0: loss = 0.0290208 (* 1 = 0.0290208 loss) I0419 23:33:28.881904 27863 sgd_solver.cpp:105] Iteration 18000, lr = 0.00135776 I0419 23:33:44.961694 27863 solver.cpp:218] Iteration 18040 (2.48759 iter/s, 16.0798s/40 iters), loss = 0.00747364 I0419 23:33:44.961807 27863 solver.cpp:237] Train net output #0: loss = 0.00747358 (* 1 = 0.00747358 loss) I0419 23:33:44.961815 27863 sgd_solver.cpp:105] Iteration 18040, lr = 0.00135175 I0419 23:34:01.001221 27863 solver.cpp:218] Iteration 18080 (2.49386 iter/s, 16.0394s/40 iters), loss = 0.00458119 I0419 23:34:01.001260 27863 solver.cpp:237] Train net output #0: loss = 0.00458113 (* 1 = 0.00458113 loss) I0419 23:34:01.001267 27863 sgd_solver.cpp:105] Iteration 18080, lr = 0.00134577 I0419 23:34:16.943205 27863 solver.cpp:218] Iteration 18120 (2.5091 iter/s, 15.942s/40 iters), loss = 0.0396522 I0419 23:34:16.943325 27863 solver.cpp:237] Train net output #0: loss = 0.0396521 (* 1 = 0.0396521 loss) I0419 23:34:16.943333 27863 sgd_solver.cpp:105] Iteration 18120, lr = 0.00133981 I0419 23:34:32.991322 27863 solver.cpp:218] Iteration 18160 (2.49252 iter/s, 16.048s/40 iters), loss = 0.0146326 I0419 23:34:32.991376 27863 solver.cpp:237] Train net output #0: loss = 0.0146326 (* 1 = 0.0146326 loss) I0419 23:34:32.991385 27863 sgd_solver.cpp:105] Iteration 18160, lr = 0.00133388 I0419 23:34:49.116569 27863 solver.cpp:218] Iteration 18200 (2.48059 iter/s, 16.1252s/40 iters), loss = 0.0315872 I0419 23:34:49.116680 27863 solver.cpp:237] Train net output #0: loss = 0.0315871 (* 1 = 0.0315871 loss) I0419 23:34:49.116689 27863 sgd_solver.cpp:105] Iteration 18200, lr = 0.00132797 I0419 23:34:51.936329 27871 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:34:52.650980 27863 solver.cpp:447] Snapshotting to binary proto file snapshot_iter_18210.caffemodel I0419 23:34:55.721935 27863 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_18210.solverstate I0419 23:34:58.099835 27863 solver.cpp:330] Iteration 18210, Testing net (#0) I0419 23:34:58.099853 27863 net.cpp:676] Ignoring source layer train-data I0419 23:35:01.539332 27881 data_layer.cpp:73] Restarting data prefetching from start. I0419 23:35:02.881618 27863 solver.cpp:397] Test net output #0: accuracy = 0.556373 I0419 23:35:02.881651 27863 solver.cpp:397] Test net output #1: loss = 2.69321 (* 1 = 2.69321 loss) I0419 23:35:02.881657 27863 solver.cpp:315] Optimization Done. I0419 23:35:02.881661 27863 caffe.cpp:259] Optimization Done.
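
Note on the learning-rate values reported by sgd_solver.cpp:105 throughout the log: the solver was configured with lr_policy "exp" (base_lr 0.01, gamma 0.99988908), and Caffe's "exp" policy computes lr = base_lr * gamma^iter. A minimal standalone Python check, separate from the DIGITS job, that reproduces the logged rates:

    # Reproduce Caffe's "exp" lr_policy: lr = base_lr * gamma^iter,
    # using the solver settings from this job (base_lr=0.01, gamma=0.99988908).
    base_lr, gamma = 0.01, 0.99988908

    def lr_at(iteration):
        return base_lr * gamma ** iteration

    for it in (15200, 16400, 18200):
        # Expect roughly 0.00185233, 0.00162146, 0.00132797,
        # matching the lr values logged at those iterations above.
        print(it, lr_at(it))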
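
The validation passes above report accuracy between about 0.54 and 0.56 while the training loss sits between roughly 0.0007 and 0.08; both curves are easier to inspect when pulled out of the log. A rough parsing sketch, assuming this output has been saved to a file (the name caffe_output.log is a placeholder):

    import re

    text = open("caffe_output.log").read()   # placeholder filename

    # (iteration, train loss) from lines like
    # "Iteration 18200 (2.48059 iter/s, 16.1252s/40 iters), loss = 0.0315872"
    train = [(int(i), float(l))
             for i, l in re.findall(r"Iteration (\d+) \([^)]*\), loss = ([\d.e+-]+)", text)]

    # (iteration, validation accuracy) from the "Testing net" / "accuracy =" pairs
    test_iters = [int(i) for i in re.findall(r"Iteration (\d+), Testing net", text)]
    accuracies = [float(a) for a in re.findall(r"Test net output #0: accuracy = ([\d.e+-]+)", text)]
    val = list(zip(test_iters, accuracies))

    print(train[-1])   # (18200, 0.0315872) for this log
    print(val[-1])     # (18210, 0.556373) for this log

The test/snapshot cadence is every 607 iterations (15782, 16389, 16996, 17603, 18210 = max_iter), which at batch_size 128 suggests a training set of roughly 607 x 128 ≈ 77.7k images; DIGITS typically validates and snapshots once per epoch.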
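
Training finishes at iteration 18210 with a final validation accuracy of 0.556373, and the last weights are written to snapshot_iter_18210.caffemodel. A minimal pycaffe sketch for loading that snapshot for inference; deploy.prototxt and example.jpg are placeholders (DIGITS normally generates a deploy network alongside train_val.prototxt), and mean.binaryproto is the mean file referenced earlier in the log:

    import numpy as np
    import caffe

    caffe.set_device(0)
    caffe.set_mode_gpu()

    # deploy.prototxt is a placeholder; the weights file is the final snapshot from the log.
    net = caffe.Net("deploy.prototxt", "snapshot_iter_18210.caffemodel", caffe.TEST)

    # Turn the DIGITS mean.binaryproto into a per-channel mean.
    blob = caffe.proto.caffe_pb2.BlobProto()
    blob.ParseFromString(open("mean.binaryproto", "rb").read())
    mean = caffe.io.blobproto_to_array(blob)[0].mean(axis=(1, 2))

    t = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
    t.set_transpose("data", (2, 0, 1))      # HWC -> CHW
    t.set_mean("data", mean)
    t.set_raw_scale("data", 255)            # caffe.io.load_image returns values in [0, 1]
    t.set_channel_swap("data", (2, 1, 0))   # RGB -> BGR, assuming the LMDB was built BGR

    img = caffe.io.load_image("example.jpg")            # placeholder image path
    net.blobs["data"].data[...] = t.preprocess("data", img)
    out = net.forward()
    top_blob = list(out.keys())[0]                      # output blob name depends on deploy.prototxt
    print("predicted class:", int(np.argmax(out[top_blob][0])))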