Keras: A Deep Learning Library Built on Theano

Posted by jopen 9 years ago | 79K reads | Keras, Machine Learning

Keras is a minimalist, highly modular neural network library, written in Python on top of Theano.

Use Keras if you need a deep learning library that:

  • Allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
  • Supports both convolutional networks (for vision) and recurrent networks (for sequence data), as well as combinations of the two.
  • Runs seamlessly on CPU and GPU.
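Since Keras delegates all computation to Theano, switching between CPU and GPU is handled by Theano's own configuration rather than by Keras code. A sketch, assuming a standard Theano install of that era (`train.py` is a hypothetical script name):

```shell
# Run the same unmodified script on CPU or GPU by changing
# Theano's device flag -- the model code itself needs no changes.
THEANO_FLAGS=device=cpu python train.py
THEANO_FLAGS=device=gpu,floatX=float32 python train.py
```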

    Guiding principles

    • Modularity. A model is understood as a sequence of standalone, fully-configurable modules that can be plugged together with as little restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions and dropout are all standalone modules that you can combine to create new models.

    • Minimalism. Each module should be kept short and simple (<100 lines of code). Every piece of code should be transparent upon first reading. No black magic: it hurts iteration speed and ability to innovate.

    • Easy extensibility. A new feature (a new module, per the above definition, or a new way to combine modules together) is dead simple to add (as new classes/functions), and existing modules provide ample examples.

    • Work with Python. No separate models configuration files in a declarative format (like in Caffe or PyLearn2). Models are described in Python code, which is compact, easier to debug, benefits from syntax highlighting, and most of all, allows for ease of extensibility. See for yourself with the examples below.
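The modularity principle can be illustrated with a toy sketch (this is not Keras code): layers are standalone objects sharing one interface, and a container simply chains them. Real Keras layers wrap Theano expressions, but the plug-together shape is the same.

```python
# Toy illustration of pluggable, standalone modules (not Keras itself).
class Scale:
    """A 'layer' that multiplies every input element by a constant."""
    def __init__(self, factor):
        self.factor = factor

    def forward(self, x):
        return [v * self.factor for v in x]


class Shift:
    """A 'layer' that adds a constant to every input element."""
    def __init__(self, offset):
        self.offset = offset

    def forward(self, x):
        return [v + self.offset for v in x]


class Sequential:
    """A container that chains any modules exposing .forward()."""
    def __init__(self):
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x


model = Sequential()
model.add(Scale(2))
model.add(Shift(1))
out = model.forward([1, 2, 3])  # -> [3, 5, 7]
```

Because every module obeys the same tiny interface, new layer types can be dropped in without touching the container, which is the extensibility the README is describing.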

    Examples

    Multilayer Perceptron (MLP):

    from keras.models import Sequential
    from keras.layers.core import Dense, Dropout, Activation
    from keras.optimizers import SGD
    
    model = Sequential()
    # In this early Keras API, Dense takes (input_dim, output_dim) positionally:
    # 20 input features feeding a 64-unit hidden layer.
    model.add(Dense(20, 64, init='uniform'))
    model.add(Activation('tanh'))
    model.add(Dropout(0.5))
    model.add(Dense(64, 64, init='uniform'))
    model.add(Activation('tanh'))
    model.add(Dropout(0.5))
    model.add(Dense(64, 1, init='uniform'))
    model.add(Activation('softmax'))
    
    # Stochastic gradient descent with learning-rate decay and Nesterov momentum.
    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd)
    
    # X_train / y_train and X_test / y_test are assumed to be prepared elsewhere.
    model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
    score = model.evaluate(X_test, y_test, batch_size=16)
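The update rule behind `SGD(lr=0.1, momentum=0.9)` can be sketched in a few lines of plain Python (this is an illustration of the technique, not Keras internals). Classic momentum is shown for brevity; the `nesterov=True` variant additionally evaluates the gradient at a "lookahead" point, and `decay` shrinks the learning rate over time.

```python
# Toy sketch of SGD with classic momentum: the velocity accumulates a
# geometrically decaying sum of past gradients, damping oscillation.
def sgd_momentum_step(param, grad, velocity, lr=0.1, momentum=0.9):
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

# Minimize f(p) = p**2 (gradient 2*p), starting from p = 1.0:
p, v = 1.0, 0.0
for _ in range(200):
    p, v = sgd_momentum_step(p, 2 * p, v)
# After 200 steps p has converged very close to the minimum at 0.
```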

    Alternative implementation of MLP:

    model = Sequential()
    model.add(Dense(20, 64, init='uniform', activation='tanh'))
    model.add(Dropout(0.5))
    model.add(Dense(64, 64, init='uniform', activation='tanh'))
    model.add(Dropout(0.5))
    model.add(Dense(64, 1, init='uniform', activation='softmax'))
    
    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd)

    Project homepage: http://www.baiduhome.net/lib/view/home/1427618731971
