Google Volley Framework Source Code Walkthrough


[工匠若水 http://blog.csdn.net/yanbober] Before reading this, see the previous post in the series, "Google Volley使用之自定義" (customizing Volley): http://blog.csdn.net/yanbober/article/details/45307099

Open-source project links

See also: the Volley customization post above and the Android Developer documentation.

Volley home page: https://android.googlesource.com/platform/frameworks/volley

Volley repository: git clone https://android.googlesource.com/platform/frameworks/volley

Volley demos on GitHub: searching GitHub for Volley turns up plenty of them, but the Android Developer documentation is the recommended reference.

Background

The last topic in the post on Volley usage basics covered Volley's request architecture; it is worth restating here.

This is the diagram shown on Android Developer:

(Figure: Volley request-flow diagram from the Android Developer documentation)

A RequestQueue maintains one cache dispatch thread (the cache thread) and a pool of network dispatch threads (the net threads). When a Request is added to the queue, the cache thread triages it: if the response can be served from the cache, the cache thread parses the cached content itself and delivers it to the main (UI) thread. If it is not in the cache, the request is moved onto the NetworkQueue, where every request that genuinely needs network I/O waits; the first available net thread takes a request off the NetworkQueue and sends it to the server. When the response arrives, that net thread parses the raw response data, writes it to the cache, and posts the parsed result back to the main thread.
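
To make that flow concrete before diving into the source, here is a minimal usage sketch that exercises the whole pipeline described in this post. The URL and the listener bodies are illustrative only:

RequestQueue queue = Volley.newRequestQueue(context);
StringRequest request = new StringRequest(Request.Method.GET, "http://www.example.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Delivered on the main (UI) thread with the parsed result.
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Delivered on the main (UI) thread on failure.
            }
        });
// request.setShouldCache(false); // opt this request out of the cache (caching is the default)
queue.add(request);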

Diving in

Reading the diagram alone is abstract, so import the cloned project into an IDE and follow the code with the figure at hand.

Following the same order as before: the first step in using Volley is to obtain a RequestQueue via Volley.newRequestQueue(context), so start with that method in Volley.java under toolbox.

/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

As the comment says, this creates a default instance of the worker pool and calls RequestQueue's start() on it. Internally it just delegates to the two-argument overload in the same class, shown below.

/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @param stack An {@link HttpStack} to use for the network, or null for default.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (PackageManager.NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}

As shown above, this method takes two parameters. The first, a Context, is used to reach app-level information such as the cache directory and package data; the second is passed as null by the one-argument newRequestQueue(Context context), which means the if (stack == null) branch here executes. Inside that branch the SDK version is checked: at API level 9 or higher the HttpStack instance becomes a HurlStack, below 9 it becomes an HttpClientStack. The reason for the version check is explained by the comment in the code, which links to the Android Developers blog post on Android's HTTP clients. HurlStack lives in toolbox and implements the toolbox HttpStack interface's HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders) method on top of HttpURLConnection; HttpClientStack also lives in toolbox and implements the same method, only on top of HttpClient.
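
For reference, the HttpStack contract that both stacks implement looks roughly like this in the toolbox sources (abridged):

public interface HttpStack {
    /**
     * Performs an HTTP request with the given parameters, returning the raw
     * Apache HttpResponse for BasicNetwork to post-process.
     */
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError;
}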

The userAgent is simply the app's package name plus its version code (for example com.example.app/3) and is passed into new HttpClientStack(AndroidHttpClient.newInstance(userAgent)) as the client's name tag.

Once the HttpStack exists, a Network instance is created from it. BasicNetwork (in toolbox) implements the Network interface and its public NetworkResponse performRequest(Request<?> request) method, which carries out the network request through the supplied HttpStack. Next a RequestQueue object is created, its start() method is called, and the queue is returned. RequestQueue lives in the root package and is a request dispatch queue backed by a pool of dispatcher threads. With that, newRequestQueue() is finished.
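
What newRequestQueue() assembles can also be wired up by hand. A sketch, assuming you want a differently named cache directory ("volley-custom" is arbitrary) and are on API level 9 or higher so HurlStack is appropriate:

File cacheDir = new File(context.getCacheDir(), "volley-custom");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
queue.start();

This mirrors the default path; it is also why the two-argument overload exists: pass a non-null HttpStack and everything downstream (BasicNetwork, DiskBasedCache, RequestQueue) stays the same.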

Now look at the start() method of RequestQueue in the root package:

/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

As the comment says, this starts the dispatchers for the queue. It first creates a CacheDispatcher instance and calls its start() method, then creates NetworkDispatcher instances in a for loop and starts each of them. Both CacheDispatcher and NetworkDispatcher extend Thread, and by default the loop runs DEFAULT_NETWORK_THREAD_POOL_SIZE (four) times. So once Volley.newRequestQueue(context) has been called, five threads keep running in the background waiting for requests: one CacheDispatcher cache thread and four NetworkDispatcher network threads.
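
The dispatcher count is not hard-wired: RequestQueue also has a constructor that takes a pool size, so the number of NetworkDispatcher threads can be tuned. A sketch (the pool size of 2 and the cache directory name are arbitrary, and HurlStack assumes API level 9+):

RequestQueue queue = new RequestQueue(
        new DiskBasedCache(new File(context.getCacheDir(), "volley")),
        new BasicNetwork(new HurlStack()),
        2); // threadPoolSize: how many NetworkDispatcher threads start() will spin up
queue.start();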

From the earlier usage posts we know that, once we have a RequestQueue, we only need to build the appropriate Request and pass it to RequestQueue's add() method to kick off a network request. In other words, add() is where the interesting work starts. Here is RequestQueue.add():

/**
 * Adds a Request to the dispatch queue.
 *
 * @param request The request to service
 * @return The passed-in request
 */
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

As the comment says, this adds a Request to the dispatch queue. Request is the abstract base class of every request. request.setRequestQueue(this) associates the request with this RequestQueue, and the request is then added, under synchronization, to the queue's mCurrentRequests HashSet for bookkeeping. request.setSequence(getSequenceNumber()) stamps the request with the queue's next sequence number so requests are processed in the order they were added, and request.addMarker("add-to-queue") adds a debug marker. if (!request.shouldCache()) asks whether this request may be cached: if not, it goes straight into the network queue via mNetworkQueue.add(request) and add() returns; if it may be cached, it is added (again under synchronization) to the cache queue, or parked in mWaitingRequests if a request with the same cache key is already in flight. By default every request is cacheable, and Request's setShouldCache(false) changes that default. So in the default case (shouldCache() returns true) the request ends up in the cache queue, and the cache thread that has been waiting in the background now gets to work. Here is CacheDispatcher's run() method:

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}

run() first calls Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND) to drop the thread's priority, then initializes the cache with mCache.initialize(). mCache is the instance handed in by newRequestQueue(Context context, HttpStack stack) in Volley.java; its Cache implementation is new DiskBasedCache(cacheDir), and with the defaults in Volley.java cacheDir ends up at /data/data/<app-package>/cache/volley/.
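
For orientation, the parts of the Cache contract that matter in this loop look roughly like this (abridged from the root-package Cache interface):

public interface Cache {
    Entry get(String key);              // used below via mCache.get(request.getCacheKey())
    void put(String key, Entry entry);  // NetworkDispatcher writes responses back through this
    void initialize();                  // blocking scan of the cache directory

    public static class Entry {
        public byte[] data;                          // the cached response body
        public Map<String, String> responseHeaders;  // the cached response headers
        public long ttl;                             // hard expiry timestamp
        public long softTtl;                         // soft expiry timestamp

        public boolean isExpired()     { return this.ttl < System.currentTimeMillis(); }
        public boolean refreshNeeded() { return this.softTtl < System.currentTimeMillis(); }
    }
}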

The while (true) shows that the cache thread runs indefinitely; the mQuit flag is what allows it to stop (the InterruptedException handler returns when mQuit is set). mCacheQueue.take() takes the request at the head of the blocking queue, waiting if none is available. mCache.get(request.getCacheKey()) then tries to fetch a cached response: if it is null, the request is put onto the network queue; if it exists but has expired, the request likewise goes to the network queue; otherwise no new network request is needed and the cached data is used directly. On that path parseNetworkResponse() is called to parse the cached data, and the parsed result is then delivered back. First, here is the relevant piece of the abstract Request base class:

/**
 * Subclasses must implement this to parse the raw network response
 * and return an appropriate response type. This method will be
 * called from a worker thread.  The response will not be delivered
 * if you return null.
 * @param response Response from the network
 * @return The parsed response, or null in the case of an error
 */
abstract protected Response<T> parseNetworkResponse(NetworkResponse response);

As the comment explains, this is purely the parsing hook: each subclass parses the raw network response into its own response type, and the method runs on a worker thread.
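
To make the hook concrete, this is roughly how the stock StringRequest implements it: decode the body bytes with the charset announced in the headers and attach cache metadata parsed from the response.

@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        // Decode the body using the charset from the response headers.
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    // Success carries both the parsed result and the Cache.Entry to store.
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}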

As mentioned earlier, once Volley.newRequestQueue(context) has been called there are five threads running in the background waiting for requests: one CacheDispatcher cache thread and four NetworkDispatcher network threads. Having just walked through CacheDispatcher's run(), here is how NetworkDispatcher works through the network request queue:

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}

Much like CacheDispatcher, there is a while (true) loop here, so the network request threads also run continuously.

The actual network request is issued by mNetwork.performRequest(request). Network is an interface, and as established earlier its concrete implementation here is BasicNetwork, so look at its performRequest() method next.

The Network interface:

public interface Network {
    /**
     * Performs the specified request.
     *
     * @param request Request to process
     * @return A {@link NetworkResponse} with data and caching metadata; will never be null
     * @throws VolleyError on errors
     */
    public NetworkResponse performRequest(Request<?> request) throws VolleyError;
}

As the comment says, it performs the specified request. The BasicNetwork implementation looks like this:

@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = Collections.emptyMap();
        try {
            // Gather headers.
            Map<String, String> headers = new HashMap<String, String>();
            addCacheHeaders(headers, request.getCacheEntry());
            httpResponse = mHttpStack.performRequest(request, headers);
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();

            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // Handle cache validation.
            if (statusCode == HttpStatus.SC_NOT_MODIFIED) {

                Entry entry = request.getCacheEntry();
                if (entry == null) {
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null,
                            responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }

                // A HTTP 304 response does not have all header fields. We
                // have to use the header fields from the cache entry plus
                // the new ones from the response.
                // http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
                entry.responseHeaders.putAll(responseHeaders);
                return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data,
                        entry.responseHeaders, true,
                        SystemClock.elapsedRealtime() - requestStart);
            }

            // Some responses such as 204s do not have content. We must check.
            if (httpResponse.getEntity() != null) {
                responseContents = entityToBytes(httpResponse.getEntity());
            } else {
                // Add 0 byte response as a way of honestly representing a
                // no-content request.
                responseContents = new byte[0];
            }

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusLine);

            if (statusCode < 200 || statusCode > 299) {
                throw new IOException();
            }
            return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                    SystemClock.elapsedRealtime() - requestStart);
        } catch (SocketTimeoutException e) {
            attemptRetryOnException("socket", request, new TimeoutError());
        } catch (ConnectTimeoutException e) {
            attemptRetryOnException("connection", request, new TimeoutError());
        } catch (MalformedURLException e) {
            throw new RuntimeException("Bad URL " + request.getUrl(), e);
        } catch (IOException e) {
            int statusCode = 0;
            NetworkResponse networkResponse = null;
            if (httpResponse != null) {
                statusCode = httpResponse.getStatusLine().getStatusCode();
            } else {
                throw new NoConnectionError(e);
            }
            VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
            if (responseContents != null) {
                networkResponse = new NetworkResponse(statusCode, responseContents,
                        responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);
                if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                        statusCode == HttpStatus.SC_FORBIDDEN) {
                    attemptRetryOnException("auth",
                            request, new AuthFailureError(networkResponse));
                } else {
                    // TODO: Only throw ServerError for 5xx status codes.
                    throw new ServerError(networkResponse);
                }
            } else {
                throw new NetworkError(networkResponse);
            }
        }
    }
}

This method is the concrete implementation of the network request, again built around a big while loop. The mHttpStack in mHttpStack.performRequest(request, headers) is the instance created in Volley's newRequestQueue(); as discussed earlier, the two stack implementations send the request with HttpURLConnection or HttpClient respectively, and the server's reply is then assembled into a NetworkResponse object and returned. Back in NetworkDispatcher, that NetworkResponse is handed to Request's parseNetworkResponse() to be parsed, and the result is written to the cache where applicable; the parsing itself is left to Request subclasses, because different kinds of Request naturally parse their data differently.
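
For reference, the NetworkResponse constructed in the listing above is just a value holder; abridged, its fields are roughly:

public class NetworkResponse {
    public final int statusCode;                // HTTP status code
    public final byte[] data;                   // raw response body
    public final Map<String, String> headers;   // response headers
    public final boolean notModified;           // true when a 304 was answered from the cache entry
    public final long networkTimeMs;            // round-trip time measured above
}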

As noted, NetworkDispatcher's run() ends with mDelivery.postResponse(request, response): after the NetworkResponse has been parsed, the postResponse() method of ExecutorDelivery (the implementation of the ResponseDelivery interface) is called to deliver the parsed data, as shown below:

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
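
The reason execute() ends up on the main thread is that the default delivery is built around a Handler bound to the main Looper (RequestQueue's default constructor passes new ExecutorDelivery(new Handler(Looper.getMainLooper()))). Roughly, the ExecutorDelivery constructor wraps that Handler in an Executor:

public ExecutorDelivery(final Handler handler) {
    // Wrap the Handler in an Executor, so every posted Runnable runs on the Handler's thread.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}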

Here a ResponseDeliveryRunnable is handed to mResponsePoster's execute(), which guarantees that the runnable's run() method executes on the main thread. This is what that run() method looks like:

/**
 * A Runnable used for delivering network responses to a listener on the
 * main thread.
 */
@SuppressWarnings("rawtypes")
private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }

        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }

        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}

Within that run() method, the key detail is this part:

// Deliver a normal response or error, depending.
if (mResponse.isSuccess()) {
    mRequest.deliverResponse(mResponse.result);
} else {
    mRequest.deliverError(mResponse.error);
}

This is the heart of the delivery step: depending on whether the response succeeded, mRequest's deliverResponse() or deliverError() hands the result back to the callbacks on the UI thread, and those are exactly the callback methods you implement (directly, or through the listeners you register) when you build a Request.
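
deliverResponse() is again left to the Request subclass; in the stock StringRequest it is roughly a one-liner that forwards the parsed result to the Response.Listener supplied when the request was constructed:

@Override
protected void deliverResponse(String response) {
    // Invoke the listener passed into the StringRequest constructor; we are on the main thread here.
    mListener.onResponse(response);
}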

The big picture again

Now is the time to go back to the Background section and the official diagram; after the walkthrough above, the picture should read naturally. Combining the figure with the analysis:

  1. When a RequestQueue is successfully created and started, it spins up one CacheDispatcher and, by default, four NetworkDispatchers.

  2. The CacheDispatcher cache dispatcher acts as the first layer of buffering. Once running, it blocks on the cache queue mCacheQueue waiting for requests. A request that has already been cancelled is marked as skipped and finished. A request with no cache entry, or whose entry has fully expired, goes straight into mNetworkQueue for the N NetworkDispatchers to handle. A request whose cached response exists and has not expired is parsed by the Request's parseNetworkResponse(), which determines whether the response counts as a success; the request and response are then handed to the Delivery for dispatch, and if the cache entry is soft-expired and needs refreshing, the request is additionally put back into mNetworkQueue.

  3. When a Request is added to the RequestQueue: a request that should not be cached (this needs to be set explicitly; caching is the default) is dropped straight into mNetworkQueue for the N NetworkDispatchers; a cacheable request that is new goes into mCacheQueue for the CacheDispatcher; a cacheable request whose URL already has a request in flight is parked in mWaitingRequests for the time being, and once the earlier request completes it is re-added to mCacheQueue.

  4. The NetworkDispatcher network dispatcher is where the network request actually happens; it hands the request to BasicNetwork for processing, and, just as on the cache path, the request and its result go to the Delivery for dispatch.

  5. The Delivery is effectively the last stage of request processing. Before the Delivery touches a request, the Request has already parsed the network response, so success or failure is already decided; the Delivery then acts accordingly. On success it triggers deliverResponse(), which ultimately invokes the Listener the developer attached to the Request; on failure it triggers deliverError(), which ultimately invokes the ErrorListener. Once delivery is done, the Request's life cycle ends: the Delivery calls the Request's finish() handling, which removes it from the RequestQueue, and if requests with the same URL are still waiting in the staging list, all of them are moved into mCacheQueue for the CacheDispatcher to process.

That covers the whole flow.

P.S.: the analysis above also explains why holding a single, app-wide RequestQueue is the recommended practice; it gives comparatively better performance and efficiency. A minimal sketch of that pattern follows.
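
A minimal sketch of the single-queue pattern; the class name VolleyHolder and its layout are illustrative, not part of Volley:

public class VolleyHolder {
    private static RequestQueue sQueue;

    public static synchronized RequestQueue get(Context context) {
        if (sQueue == null) {
            // Use the application context so the queue never holds on to an Activity.
            sQueue = Volley.newRequestQueue(context.getApplicationContext());
        }
        return sQueue;
    }
}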



Source: http://blog.csdn.net//yanbober/article/details/45307217
