I splashed out over £1300 on a shiny 13" MacBook Pro hoping it would be powerful enough to run deep learning algorithms, but was disappointed to say the least.
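For context, the article's trick is to point Keras at PlaidML so the Intel Iris GPU does the work. The wiring I used was roughly this (a minimal sketch from memory, assuming plaidml-keras was installed with pip and plaidml-setup was run once to pick the Metal device; the article has the exact steps):

# Sketch of the PlaidML wiring, not the article's exact code.
# Assumes: pip install plaidml-keras, then a one-off plaidml-setup
# to select the metal_intel(r)_iris(tm) device shown in the log below.
import plaidml.keras
plaidml.keras.install_backend()   # must run before any "import keras"

import keras
print(keras.backend.backend())    # should report the plaidml backend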
Anyway, I followed this verbatim and here are the results:
INFO:plaidml:Opening device "metal_intel(r)_iris(tm)_plus_graphics_655.0"
Epoch 1/10
60000/60000 [==============================] - 76s 1ms/step - loss: nan - acc: 0.0999
Epoch 2/10
60000/60000 [==============================] - 65s 1ms/step - loss: nan - acc: 0.1000
Epoch 3/10
60000/60000 [==============================] - 64s 1ms/step - loss: nan - acc: 0.1000
Epoch 4/10
60000/60000 [==============================] - 65s 1ms/step - loss: nan - acc: 0.1000
Epoch 5/10
60000/60000 [==============================] - 64s 1ms/step - loss: nan - acc: 0.1000
Epoch 6/10
60000/60000 [==============================] - 63s 1ms/step - loss: nan - acc: 0.1000
Epoch 7/10
60000/60000 [==============================] - 64s 1ms/step - loss: nan - acc: 0.1000
Epoch 8/10
60000/60000 [==============================] - 63s 1ms/step - loss: nan - acc: 0.1000
Epoch 9/10
60000/60000 [==============================] - 64s 1ms/step - loss: nan - acc: 0.1000
Epoch 10/10
60000/60000 [==============================] - 75s 1ms/step - loss: nan - acc: 0.1000
Test accuracy: 0.1
Looks like I got an accuracy of 10% (0.1), which on a ten-class problem is no better than random guessing, and the loss is NaN, so the training diverged rather than overfitted; that's not something the dropout function can fix.
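If I wanted to salvage the run before blaming the hardware, the usual first suspects for a NaN loss are too high a learning rate and unbounded gradients. Something along these lines might be worth a try (a sketch only; the layer sizes and optimizer values are my guesses, not the article's):

# Sketch: same kind of MNIST classifier, but compiled with a smaller
# learning rate and gradient clipping, two common fixes for a NaN loss.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.optimizers import Adam

model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation="relu"),
    Dropout(0.2),
    Dense(10, activation="softmax"),
])
model.compile(
    optimizer=Adam(lr=1e-4, clipnorm=1.0),  # smaller lr, clipped gradient norm
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

If the loss still comes out as NaN even with a tiny learning rate, I'd start suspecting the Metal backend itself rather than the model.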
My alternatives are:
- Buy an NVIDIA Titan X (£1000) plus an external GPU enclosure (£300) and configure that setup with the MacBook Pro
- Use one of the many hosted deep learning platforms (FloydHub, Paperspace, or Google Colab), which manage the hardware for you on a "pay as you go" basis; see the quick Colab check below.
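For what it's worth, Colab needs no setup beyond switching the runtime type to GPU; a quick sanity check that a GPU is actually attached looks like this (a standard TensorFlow call, nothing specific to this article):

# Quick check in a Colab notebook after Runtime -> Change runtime type -> GPU.
import tensorflow as tf
print(tf.test.gpu_device_name())  # prints '/device:GPU:0' when a GPU is attached, '' otherwise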