Kayak: Harvard's Python Library for Rapid Prototyping of Deep Neural Networks


Kayak is a Python library from Harvard for rapid prototyping of deep neural networks. Its hallmark is that it is simple and extensible enough to support fast development of prototype architectures and quick validation of ideas.

```python
import kayak
import numpy.random as npr

X = ...  # your feature matrix
Y = ...  # your label matrix
```

Create Kayak objects for features and labels.

```python
inputs  = kayak.Inputs(X)
targets = kayak.Targets(Y)
```

Create Kayak objects for the first-layer weights and biases, initializing them with random NumPy matrices.

```python
# input_dims, hidsize_1 (and hidsize_2 below) are integer layer sizes you choose.
weights_1 = kayak.Parameter(npr.randn(input_dims, hidsize_1))
biases_1  = kayak.Parameter(npr.randn(1, hidsize_1))
```

Create Kayak objects that implement a network layer. First, multiply the features by the weights and add the biases.

```python
hiddens_1a = kayak.ElemAdd(kayak.MatMult(inputs, weights_1), biases_1)
```
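Under the hood this step is the usual affine map, assuming ElemAdd broadcasts the single 1 × hidsize_1 bias row across all rows the way NumPy addition would. A plain-NumPy sketch of the value being computed, for illustration only (Kayak evaluates it lazily inside its own graph):

```python
import numpy as np

def affine_forward(X, W1, b1):
    # X: (N, input_dims), W1: (input_dims, hidsize_1), b1: (1, hidsize_1).
    # The single bias row is broadcast across all N examples.
    return np.dot(X, W1) + b1
```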

Then, apply a "relu" (rectified linear) nonlinearity.

Alternatively, you can apply your own favorite nonlinearity, or

add one for an idea that you want to try out.

```python
hiddens_1b = kayak.HardReLU(hiddens_1a)
```
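HardReLU presumably clamps negative activations to zero, i.e. elementwise max(0, x). A minimal NumPy sketch of that nonlinearity and of the subgradient a backward pass would use (Kayak's internal implementation may differ):

```python
import numpy as np

def hard_relu(x):
    # Elementwise rectifier: max(0, x).
    return np.maximum(x, 0.0)

def hard_relu_grad(x, d_out):
    # Subgradient: the upstream gradient passes through only where x > 0.
    return d_out * (x > 0)
```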

Now, apply a "dropout" layer to prevent co-adaptation. Got a

new idea for dropout? It's super easy to extend Kayak with it.

```python
hiddens_1 = kayak.Dropout(hiddens_1b, drop_prob=0.5)
```
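Standard dropout zeroes each activation independently with probability drop_prob during training. Below is a minimal NumPy sketch of the common "inverted dropout" variant, which rescales the survivors so the expected activation is unchanged; whether Kayak uses this exact scaling convention is not visible in the snippet above:

```python
import numpy as np
import numpy.random as npr

def dropout(h, drop_prob=0.5):
    # Keep each unit with probability (1 - drop_prob), then rescale the
    # survivors ("inverted" dropout; Kayak's convention may differ).
    mask = npr.rand(*h.shape) > drop_prob
    return h * mask / (1.0 - drop_prob)
```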

Okay, with that layer constructed, let's make another one the same way: linear transformation + bias with ReLU and dropout. First, create the second-layer parameters.

```python
weights_2 = kayak.Parameter(npr.randn(hidsize_1, hidsize_2))
biases_2  = kayak.Parameter(npr.randn(1, hidsize_2))
```

This time, let's compose all the steps, just to show we can.

```python
hiddens_2 = kayak.Dropout(kayak.HardReLU(kayak.ElemAdd(
    kayak.MatMult(hiddens_1, weights_2), biases_2)), drop_prob=0.5)
```

Make the output layer linear.

```python
weights_out = kayak.Parameter(npr.randn(hidsize_2, 1))
biases_out  = kayak.Parameter(npr.randn())
out = kayak.ElemAdd(kayak.MatMult(hiddens_2, weights_out), biases_out)
```

Apply a loss function. In this case, we'll just do squared loss.

```python
loss = kayak.MatSum(kayak.L2Loss(out, targets))
```

Maybe roll in an L1 norm for the first layer and an L2 norm for the others?

```python
objective = kayak.ElemAdd(loss,
                          kayak.L1Norm(weights_1, weight=100.0),
                          kayak.L2Norm(weights_2, weight=50.0),
                          kayak.L2Norm(weights_out, weight=3.0))
```
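Assuming L1Norm and L2Norm are weighted norm penalties on the flattened weight matrices (whether the L2 penalty is squared is a Kayak implementation detail not visible here), the objective being minimized reads roughly as:

```latex
\mathcal{L} \;=\; \sum_{n} \lVert \hat{y}_n - y_n \rVert^2
  \;+\; 100\,\lVert W_1 \rVert_1
  \;+\; 50\,\lVert W_2 \rVert_2
  \;+\; 3\,\lVert W_{\mathrm{out}} \rVert_2
```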

This is the fun part and is the whole point of Kayak. You can now get the gradient of anything in terms of anything else. Probably, if you're doing neural networks, you want the gradient of the parameters in terms of the overall objective. That way you can go off and do some kind of optimization.

```python
weights_1_grad   = objective.grad(weights_1)
biases_1_grad    = objective.grad(biases_1)
weights_2_grad   = objective.grad(weights_2)
biases_2_grad    = objective.grad(biases_2)
weights_out_grad = objective.grad(weights_out)
biases_out_grad  = objective.grad(biases_out)
```

Now use the gradients for learning. Probably this whole thing would be in a loop, and in practice you'd also use minibatches.
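To make that concrete, here is a minimal full-batch gradient-descent loop. It is a sketch, not the library's canonical training code: it assumes, as in Kayak's bundled examples, that a Parameter's NumPy array can be read and updated in place through its .value attribute, and that node values and gradients recompute after an update; the learning rate and epoch count are arbitrary:

```python
learn_rate = 0.001
params = [weights_1, biases_1, weights_2, biases_2,
          weights_out, biases_out]

for epoch in range(100):
    # Plain gradient-descent step on every parameter.
    for p in params:
        p.value -= learn_rate * objective.grad(p)
    print(epoch, objective.value)
```

For minibatches, the Kayak repository's examples wire a batcher object into Inputs and Targets; see the project's examples directory for the exact interface.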

Project homepage: http://www.baiduhome.net/lib/view/home/1424831137906
