Source Code Analysis of the Dispatcher and Interceptors in Android's OkHttp Framework

1. Basic Usage of OkHttp

OkHttpClient okHttpClient = new OkHttpClient.Builder().build();
Request request = new Request.Builder()
        .url("http://www.baidu.com")
        .get()
        .build();
Call call = okHttpClient.newCall(request);
try {
    // execute() is synchronous, so on Android it must be called off the main thread
    Response response = call.execute();
    Log.d("lpf", "execute: " + response.body().string());
} catch (IOException e) {
    e.printStackTrace();
}

That is all it takes to make one network request with OkHttp. The flow is:

  • Create an OkHttpClient through its builder
  • Create a Request through its builder
  • Pass the Request to OkHttpClient's newCall() method to create a Call object
  • Call the Call's execute() method (synchronous) to perform the request; calling enqueue() instead performs it asynchronously (see the example below)
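For comparison, a minimal asynchronous request looks like the following (reusing the okHttpClient and request built above; note that both callbacks run on an OkHttp worker thread, not on the Android main thread):

Call asyncCall = okHttpClient.newCall(request);
asyncCall.enqueue(new Callback() {
    @Override
    public void onFailure(Call call, IOException e) {
        Log.d("lpf", "request failed", e);
    }

    @Override
    public void onResponse(Call call, Response response) throws IOException {
        Log.d("lpf", "response: " + response.body().string());
    }
});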

2. OkHttp Dispatcher Source Code Analysis

2.1. Synchronous Requests

public Response execute() throws IOException {
    synchronized (this) {
        if (executed) throw new IllegalStateException("Already Executed");
        executed = true;
    }
    captureCallStackTrace();
    eventListener.callStart(this);
    try {
        client.dispatcher().executed(this);
        // issue the request through the interceptor chain
        Response result = getResponseWithInterceptorChain();
        if (result == null) throw new IOException("Canceled");
        return result;
    } catch (IOException e) {
        eventListener.callFailed(this, e);
        throw e;
    } finally {
        client.dispatcher().finished(this);
    }
}

The synchronous path is straightforward: execute() calls the dispatcher's executed() method, which puts the call into the synchronous request queue runningSyncCalls, and then getResponseWithInterceptorChain() runs the request through the interceptors; the resulting Response is simply returned.

The synchronous queue runningSyncCalls is used here; the asynchronous path has its own queues as well, which makes for a useful comparison.
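For reference, the dispatcher's side of the synchronous path is trivial; in OkHttp 3.x, Dispatcher.executed() does nothing more than book-keeping:

// Dispatcher (OkHttp 3.x): a synchronous call is only recorded, never handed to the thread pool
synchronized void executed(RealCall call) {
    runningSyncCalls.add(call);
}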

2.2. Asynchronous Requests

public void enqueue(Callback responseCallback) {
    synchronized (this) { // a Call may only be executed once
        if (executed) throw new IllegalStateException("Already Executed");
        executed = true;
    }
    captureCallStackTrace();
    eventListener.callStart(this);
    client.dispatcher().enqueue(new AsyncCall(responseCallback)); // hand off to the (default) dispatcher
}

The asynchronous call is handed to the dispatcher through its enqueue() method.

Dispatcher's enqueue() method:

synchronized void enqueue(AsyncCall call) { // the dispatcher hands out async tasks
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
        runningAsyncCalls.add(call);     // requests currently running
        executorService().execute(call); // run the task on the thread pool
    } else {
        readyAsyncCalls.add(call);       // queued; promoted later when capacity frees up
    }
}
  • If fewer than 64 asynchronous requests are running and fewer than 5 of them target the same host, the call is added to the running queue runningAsyncCalls (both limits are configurable, as shown below)
  • Otherwise it is added to the waiting queue readyAsyncCalls

A call placed into the running queue is immediately submitted to the executorService thread pool for execution.
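Both thresholds (64 overall, 5 per host) are ordinary fields on Dispatcher and can be adjusted through its public setters; for example:

OkHttpClient client = new OkHttpClient.Builder().build();
// raise the global and per-host concurrency limits of the default dispatcher
client.dispatcher().setMaxRequests(128);       // default is 64
client.dispatcher().setMaxRequestsPerHost(10); // default is 5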

AsyncCall's execute() method:

protected void execute() {
    boolean signalledCallback = false;
    try {
        // run the request through the interceptor chain
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
            signalledCallback = true;
            responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
            signalledCallback = true;
            responseCallback.onResponse(RealCall.this, response);
        }
    } catch (IOException e) {
        if (signalledCallback) {
            // Do not signal the callback twice!
            Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
        } else {
            eventListener.callFailed(RealCall.this, e);
            responseCallback.onFailure(RealCall.this, e);
        }
    } finally {
        client.dispatcher().finished(this); // always runs, success or failure
    }
}

getResponseWithInterceptorChain() performs the actual network request through the interceptors; if the call was canceled, onFailure is invoked, otherwise onResponse is invoked with the result.

Note the finally block: dispatcher().finished(this) is always executed at the end.

Dispatcher's finished() method:

private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
    int runningCallsCount;
    Runnable idleCallback;
    synchronized (this) {
        // remove the call from its queue
        if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
        if (promoteCalls) promoteCalls();
        runningCallsCount = runningCallsCount();
        idleCallback = this.idleCallback;
    }
    if (runningCallsCount == 0 && idleCallback != null) {
        idleCallback.run();
    }
}

This method handles the queue housekeeping: once a call finishes it is removed from the running queue, and promoteCalls() checks whether waiting calls can be moved into the running queue.
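For completeness, the generic finished() above is reached through two overloads (only the asynchronous one triggers promotion), and the promotion step looks roughly like this in OkHttp 3.x:

void finished(AsyncCall call) {
    finished(runningAsyncCalls, call, true);  // async: also promote waiting calls
}

void finished(RealCall call) {
    finished(runningSyncCalls, call, false);  // sync: nothing to promote
}

private void promoteCalls() {
    if (runningAsyncCalls.size() >= maxRequests) return; // already at max capacity
    if (readyAsyncCalls.isEmpty()) return;               // nothing waiting to be promoted

    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
        AsyncCall call = i.next();
        if (runningCallsForHost(call) < maxRequestsPerHost) {
            i.remove();
            runningAsyncCalls.add(call);
            executorService().execute(call);
        }
        if (runningAsyncCalls.size() >= maxRequests) return; // reached max capacity again
    }
}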

2.3. The Dispatcher's Thread Pool

The dispatcher's job is to schedule request tasks, and it holds a thread pool internally. For asynchronous requests, the task is handed to this pool for execution. So how is the default thread pool defined?

public synchronized ExecutorService executorService() {
    if (executorService == null) { // this pool is tuned for high concurrency and maximum throughput
        executorService = new ThreadPoolExecutor(
                0,                                              // core pool size
                Integer.MAX_VALUE,                              // maximum pool size
                60, TimeUnit.SECONDS,                           // idle thread keep-alive time and its unit
                new SynchronousQueue<Runnable>(),               // work queue
                Util.threadFactory("OkHttp Dispatcher", false)  // thread factory
        );
    }
    return executorService;
}

The core pool size is 0, which means the pool does not keep threads cached for us: any thread that has done no work for 60 seconds is reclaimed. The combination of a maximum pool size of Integer.MAX_VALUE with a SynchronousQueue work queue yields maximum throughput: when the pool needs to execute a task and no idle thread exists, it does not wait but immediately creates a new thread to run the task. The choice of work queue determines the pool's queueing behaviour; the common BlockingQueue implementations are ArrayBlockingQueue, LinkedBlockingQueue and SynchronousQueue.

Assume a task is submitted to the pool while all core threads are occupied:

ArrayBlockingQueue: an array-backed bounded blocking queue whose fixed capacity is set at construction. With this queue, a submitted task is first offered to the queue; once the queue is full, the next submission fails to enqueue, and if the pool has not yet reached its maximum thread count a new thread is created to run that task. As a result, a later task may end up running before earlier tasks that are still waiting in the queue.

LinkedBlockingQueue: a linked-list-based blocking queue whose capacity may or may not be specified. With a fixed capacity it behaves like ArrayBlockingQueue. Without one, the capacity defaults to Integer.MAX_VALUE, which effectively disables the maximum-thread parameter: offering a task to the queue always succeeds, so every task ends up running on the core threads, and if those stay busy the tasks simply keep waiting.

SynchronousQueue: a queue with no capacity at all. Using it means you want maximum concurrency: offering a task to the queue fails (unless a consumer is already waiting), and after the failure, if no idle thread is available and the pool has not reached its maximum size, a new thread is created for the task. There is no waiting at all; the only constraint is the maximum thread count, so pairing it with Integer.MAX_VALUE gives genuinely wait-free execution.

Note, however, that a process's memory is limited and every thread needs memory of its own, so threads cannot be created without bound. That is why, even with a maximum of Integer.MAX_VALUE threads, OkHttp additionally caps the number of concurrently running requests at 64: the cap prevents unbounded thread creation while the pool itself still delivers maximum throughput.
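A small standalone sketch of the offer() behaviour of the three queues, which is exactly what drives the ThreadPoolExecutor decisions described above:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueOfferDemo {
    public static void main(String[] args) {
        // Bounded array-backed queue: offer succeeds until the capacity (2) is reached.
        ArrayBlockingQueue<Runnable> array = new ArrayBlockingQueue<>(2);
        System.out.println(array.offer(() -> {})); // true
        System.out.println(array.offer(() -> {})); // true
        System.out.println(array.offer(() -> {})); // false -> a pool would create a non-core thread

        // Unbounded linked queue: offer always succeeds, so the max-thread limit never kicks in.
        LinkedBlockingQueue<Runnable> linked = new LinkedBlockingQueue<>();
        System.out.println(linked.offer(() -> {})); // true, no matter how many tasks are queued

        // SynchronousQueue has no capacity: offer fails unless a consumer is already waiting,
        // which is why OkHttp's pool immediately spawns a new thread for each task.
        SynchronousQueue<Runnable> sync = new SynchronousQueue<>();
        System.out.println(sync.offer(() -> {})); // false
    }
}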

3. OkHttp Interceptors

RealCall's getResponseWithInterceptorChain() method is the heart of OkHttp.

Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors()); // user-defined application interceptors go first
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) { // skip network interceptors for WebSocket calls
        interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));
    Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
            originalRequest, this, eventListener, client.connectTimeoutMillis(),
            client.readTimeoutMillis(), client.writeTimeoutMillis());
    return chain.proceed(originalRequest);
}

This method is the entry point of the actual network request; once it returns, you have the Response.

Inside, it builds a List of interceptors and adds them in order:

  • client.interceptors(): user-defined application interceptors
  • RetryAndFollowUpInterceptor: the retry and redirect interceptor
  • BridgeInterceptor: the bridge interceptor
  • CacheInterceptor: the cache interceptor
  • ConnectInterceptor: the connect interceptor
  • client.networkInterceptors(): user-defined network interceptors (skipped for WebSocket calls)
  • CallServerInterceptor: the call-server interceptor

A RealInterceptorChain is then created and its proceed() method is invoked.

Through the chain-of-responsibility pattern each interceptor is visited in turn; the data flows through the interceptors in a U shape: the request travels down the chain and the response travels back up.

PS: the chain-of-responsibility pattern is a behavioural (object) pattern that builds a chain of receiver objects for a request, with each handler doing its own share of the processing.

The handlers on the chain take care of the request; the client only has to send the request into the chain, without caring how it is processed or passed along, so the pattern decouples the sender of a request from its handlers.
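To make the chain contract concrete, here is a minimal application interceptor (a hypothetical logging example, not part of OkHttp itself). Each interceptor receives the request from chain.request(), must call chain.proceed() exactly once to pass control down the chain, and can then inspect or wrap the Response on the way back up:

OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(new Interceptor() {
            @Override
            public Response intercept(Chain chain) throws IOException {
                Request request = chain.request();
                long start = System.nanoTime();
                // "down" the chain: the request continues towards CallServerInterceptor
                Response response = chain.proceed(request);
                // "up" the chain: the response travels back through each interceptor
                long tookMs = (System.nanoTime() - start) / 1_000_000;
                Log.d("lpf", request.url() + " took " + tookMs + "ms");
                return response;
            }
        })
        .build();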

3.1. RetryAndFollowUpInterceptor: The Retry and Redirect Interceptor

RetryAndFollowUpInterceptor's intercept() method:

@Override
public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Call call = realChain.call();
    EventListener eventListener = realChain.eventListener();
    /**
     * Management class that maintains the relationship between the connection to the
     * server, the data stream and the request. It is actually put to use later, in
     * ConnectInterceptor.
     */
    StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
            createAddress(request.url()), call, eventListener, callStackTrace);
    this.streamAllocation = streamAllocation;
    int followUpCount = 0;
    Response priorResponse = null;
    while (true) {
        if (canceled) {
            streamAllocation.release();
            throw new IOException("Canceled");
        }
        Response response;
        boolean releaseConnection = true;
        try {
            // if proceed() throws, releaseConnection stays true
            response = realChain.proceed(request, streamAllocation, null, null);
            releaseConnection = false;
        } catch (RouteException e) {
            // route exception: the attempt to connect via a route failed, so the request
            // was never sent
            if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
                throw e.getLastConnectException();
            }
            releaseConnection = false;
            continue;
        } catch (IOException e) {
            // communication with the server failed after the request may have been sent
            // (e.g. the socket dropped while the stream was reading or writing)
            // only HTTP/2 throws ConnectionShutdownException, so for HTTP/1 requestSendStarted is always true
            boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
            if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
            releaseConnection = false;
            continue;
        } finally {
            // We're throwing an unchecked exception. Release any resources.
            // a failure other than the two cases above: release and clean up all resources
            if (releaseConnection) {
                streamAllocation.streamFailed(null);
                streamAllocation.release();
            }
        }
        // if this response was only reached after a retry/redirect, attach the prior
        // response; such prior responses never have a body
        if (priorResponse != null) {
            response = response.newBuilder()
                    .priorResponse(
                            priorResponse.newBuilder()
                                    .body(null)
                                    .build()
                    )
                    .build();
        }
        // handle some 3xx/4xx status codes, e.g. 301/302 redirects
        Request followUp = followUpRequest(response, streamAllocation.route());
        if (followUp == null) {
            if (!forWebSocket) {
                streamAllocation.release();
            }
            return response;
        }
        closeQuietly(response.body());
        // cap the number of follow-ups at 20 (MAX_FOLLOW_UPS)
        if (++followUpCount > MAX_FOLLOW_UPS) {
            streamAllocation.release();
            throw new ProtocolException("Too many follow-up requests: " + followUpCount);
        }
        if (followUp.body() instanceof UnrepeatableRequestBody) {
            streamAllocation.release();
            throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
        }
        // check whether the follow-up can reuse the same connection
        if (!sameConnection(response, followUp.url())) {
            streamAllocation.release();
            streamAllocation = new StreamAllocation(client.connectionPool(),
                    createAddress(followUp.url()), call, eventListener, callStackTrace);
            this.streamAllocation = streamAllocation;
        } else if (streamAllocation.codec() != null) {
            throw new IllegalStateException("Closing the body of " + response
                    + " didn't close its backing stream. Bad interceptor?");
        }
        request = followUp;
        priorResponse = response;
    }
}

This interceptor is the first to touch the request and the last to touch the response; it is responsible for deciding whether the whole request needs to be re-issued (retry or redirect).
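Both behaviours can be switched off from the client builder if you prefer to handle connection failures or redirects yourself; a small sketch using the standard builder options:

OkHttpClient client = new OkHttpClient.Builder()
        .retryOnConnectionFailure(false) // don't silently retry when a connection attempt fails
        .followRedirects(false)          // don't follow HTTP 3xx redirects automatically
        .followSslRedirects(false)       // don't follow redirects between HTTP and HTTPS
        .build();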

3.2. BridgeInterceptor: The Bridge Interceptor

BridgeInterceptor's intercept() method:

public Response intercept(Chain chain) throws IOException {
    Request userRequest = chain.request();
    Request.Builder requestBuilder = userRequest.newBuilder();
    RequestBody body = userRequest.body();
    if (body != null) {
        MediaType contentType = body.contentType();
        if (contentType != null) {
            requestBuilder.header("Content-Type", contentType.toString());
        }
        long contentLength = body.contentLength();
        if (contentLength != -1) {
            requestBuilder.header("Content-Length", Long.toString(contentLength));
            requestBuilder.removeHeader("Transfer-Encoding");
        } else {
            requestBuilder.header("Transfer-Encoding", "chunked");
            requestBuilder.removeHeader("Content-Length");
        }
    }
    if (userRequest.header("Host") == null) {
        requestBuilder.header("Host", hostHeader(userRequest.url(), false));
    }
    if (userRequest.header("Connection") == null) {
        requestBuilder.header("Connection", "Keep-Alive");
    }
    // If we add an "Accept-Encoding: gzip" header field we're responsible for also
  // decompressing
    // the transfer stream.
    boolean transparentGzip = false;
    if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
        transparentGzip = true;
        requestBuilder.header("Accept-Encoding", "gzip");
    }
    List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
    if (!cookies.isEmpty()) {
        requestBuilder.header("Cookie", cookieHeader(cookies));
    }
    if (userRequest.header("User-Agent") == null) {
        requestBuilder.header("User-Agent", Version.userAgent());
    }
    Response networkResponse = chain.proceed(requestBuilder.build());
    HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());
    Response.Builder responseBuilder = networkResponse.newBuilder()
            .request(userRequest);
    if (transparentGzip
            && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
            && HttpHeaders.hasBody(networkResponse)) {
        GzipSource responseBody = new GzipSource(networkResponse.body().source());
        Headers strippedHeaders = networkResponse.headers().newBuilder()
                .removeAll("Content-Encoding")
                .removeAll("Content-Length")
                .build();
        responseBuilder.headers(strippedHeaders);
        String contentType = networkResponse.header("Content-Type");
        responseBuilder.body(new RealResponseBody(contentType, -1L, Okio.buffer(responseBody)));
    }
    return responseBuilder.build();
}

BridgeInterceptor is the bridge between the application and the server: every request we send passes through it before reaching the server. It fills in things such as the content length, encoding, gzip compression and cookies, and after the response comes back it saves cookies and so on.

In short, the bridge interceptor's responsibility is to complete the request headers, request gzip compression by default, and wrap the response body so that gzip content is read transparently.
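The cookie handling above delegates entirely to the client's CookieJar, and the default jar stores nothing. A minimal in-memory implementation might look like the following (illustrative only, not thread-safe and without expiry handling; the usual okhttp3 and java.util imports are assumed):

OkHttpClient client = new OkHttpClient.Builder()
        .cookieJar(new CookieJar() {
            private final Map<String, List<Cookie>> store = new HashMap<>();

            @Override
            public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
                // invoked after the response, via HttpHeaders.receiveHeaders(...)
                store.put(url.host(), cookies);
            }

            @Override
            public List<Cookie> loadForRequest(HttpUrl url) {
                // invoked by BridgeInterceptor to build the "Cookie" request header
                List<Cookie> cookies = store.get(url.host());
                return cookies != null ? cookies : new ArrayList<Cookie>();
            }
        })
        .build();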

3.3. CacheInterceptor: The Cache Interceptor

CacheInterceptor's intercept() method; the cache decision logic is fairly involved:

public Response intercept(Chain chain) throws IOException {
    // look up the file cache using the MD5 of the URL (only GET requests are cached)
    Response cacheCandidate = cache != null
            ? cache.get(chain.request())
            : null;
    long now = System.currentTimeMillis();
    // cache strategy: based on the request and any cached response, decide whether to
    // hit the network, use the cache, or both (conditional GET)
    CacheStrategy strategy =
            new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;
    if (cache != null) {
        cache.trackResponse(strategy);
    }
    if (cacheCandidate != null && cacheResponse == null) {
        closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }
    // we are forbidden from using the network and there is no usable cache: fail with 504
    if (networkRequest == null && cacheResponse == null) {
        return new Response.Builder()
                .request(chain.request())
                .protocol(Protocol.HTTP_1_1)
                .code(504)
                .message("Unsatisfiable Request (only-if-cached)")
                .body(Util.EMPTY_RESPONSE)
                .sentRequestAtMillis(-1L)
                .receivedResponseAtMillis(System.currentTimeMillis())
                .build();
    }
    // no network request is needed, so the cached response is used and we're done
    if (networkRequest == null) {
        return cacheResponse.newBuilder()
                .cacheResponse(stripBody(cacheResponse))
                .build();
    }
    // go to the network
    Response networkResponse = null;
    try {
        networkResponse = chain.proceed(networkRequest);
    } finally {
        // If we're crashing on I/O or otherwise, don't leak the cache body.
        if (networkResponse == null && cacheCandidate != null) {
            closeQuietly(cacheCandidate.body());
        }
    }
    // If we have a cache response too, then we're doing a conditional get.
    if (cacheResponse != null) {
        // the server returned 304 Not Modified: reuse the cached response, updated with
        // the new timestamps and combined headers, as this request's response
        if (networkResponse.code() == HTTP_NOT_MODIFIED) {
            Response response = cacheResponse.newBuilder()
                    .headers(combine(cacheResponse.headers(), networkResponse.headers()))
                    .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
                    .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
                    .cacheResponse(stripBody(cacheResponse))
                    .networkResponse(stripBody(networkResponse))
                    .build();
            networkResponse.body().close();
            // Update the cache after combining headers but before stripping the
            // Content-Encoding header (as performed by initContentStream()).
            cache.trackConditionalCacheHit();
            cache.update(cacheResponse, response);
            return response;
        } else {
            closeQuietly(cacheResponse.body());
        }
    }
    // reaching here means the cache is unusable, so use the network response
    Response response = networkResponse.newBuilder()
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
    // write the response into the cache
    if (cache != null) {
        if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response,
                networkRequest)) {
            // Offer this request to the cache.
            CacheRequest cacheRequest = cache.put(response);
            return cacheWritingResponse(cacheRequest, response);
        }
        if (HttpMethod.invalidatesCache(networkRequest.method())) {
            try {
                cache.remove(networkRequest);
            } catch (IOException ignored) {
                // The cache cannot be written.
            }
        }
    }
    return response;
}

CacheInterceptor decides, before the request is sent out, whether the cache is hit. On a hit the network request can be skipped and the cached response used directly (only GET requests are ever cached).

Whether a cache entry exists: the first check in the method is whether a cached response exists at all. cacheResponse is the response found in the cache; if it is null, no matching entry was found, and the CacheStrategy that gets built contains only a networkRequest, which means a network request must be issued.

Caching of HTTPS requests: getting past that point means cacheResponse definitely exists, but it is not necessarily usable. If the current request is HTTPS and the cached entry has no handshake information, the cache is invalid.

Response code and response headers: this logic lives in isCacheable. When the cached response's status code is 200, 203, 204, 300, 301, 404, 405, 410, 414, 501 or 308, the only thing checked is whether the server sent Cache-Control: no-store (the resource must not be stored); if it did, the outcome is the same as in the previous two cases (the cache is unusable). Otherwise the cache may still be usable and the checks continue:

  • If the status code is none of 200, 203, 204, 300, 301, 404, 405, 410, 414, 501, 308, 302 or 307, the cache is unusable
  • When the status code is 302 or 307, the cache is unusable unless certain response headers are present (such as Expires or a Cache-Control directive like max-age, public or private)
  • If the response carries Cache-Control: no-store, the cache is unusable

The user's request configuration: OkHttp first examines the Request for this call. If the user set a Cache-Control: no-cache header (do not use the cache), or the request carries If-Modified-Since or If-None-Match (validation headers), the cache is not allowed to be used.

Whether the resource is immutable: if the cached response contains Cache-Control: immutable, the content for this request will never change and the cached response can be used directly. Otherwise the remaining freshness checks continue.
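Note that none of this runs unless a cache has been configured; by default OkHttpClient has no Cache. A minimal sketch of wiring one up, and of forcing a single request to be served from cache only (the directory name and size are just examples, and context is assumed to be an Android Context):

// 10 MiB disk cache in the app's cache directory
Cache cache = new Cache(new File(context.getCacheDir(), "http_cache"), 10L * 1024 * 1024);

OkHttpClient client = new OkHttpClient.Builder()
        .cache(cache)
        .build();

// Only allow a cached answer; if no usable cache entry exists, CacheInterceptor
// returns the synthetic 504 response seen in the source above.
Request cachedOnly = new Request.Builder()
        .url("http://www.baidu.com")
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();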

3.4. ConnectInterceptor: The Connect Interceptor

ConnectInterceptor's intercept() method:

public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();
    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();
    return realChain.proceed(request, streamAllocation, httpCodec, connection);
}

Although there is very little code here, most of the work is encapsulated in other classes; this method merely calls into them.

The StreamAllocation object we see here was created back in the first interceptor, the retry-and-redirect interceptor, but this is where it is actually used: "when a request goes out, a connection must be established, and once established a stream is needed to read and write data".

StreamAllocation coordinates the relationship between the request, the connection and the data stream: it finds a connection for the request and then obtains a stream to carry out the network communication.

The newStream() call used here finds or creates a valid connection to the request's host. The HttpCodec it returns contains the input and output streams and encapsulates the encoding and decoding of HTTP request and response messages, so it can be used directly to complete the HTTP exchange with the host.

Simply put, StreamAllocation maintains the connection: a RealConnection, which wraps the Socket, together with a pool of such connections. For a RealConnection to be reused it has to satisfy certain conditions (for example, it must match the new request's address and must not have been marked as refusing new streams).
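That connection reuse happens through the client's ConnectionPool, which can also be tuned on the builder; a minimal sketch with example values (5 idle connections kept for 5 minutes, matching OkHttp's defaults):

OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(new ConnectionPool(5, 5, TimeUnit.MINUTES)) // max idle connections, keep-alive duration
        .build();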

3.5. CallServerInterceptor: The Call-Server Interceptor

CallServerInterceptor's intercept() method:

public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    HttpCodec httpCodec = realChain.httpStream();
    StreamAllocation streamAllocation = realChain.streamAllocation();
    RealConnection connection = (RealConnection) realChain.connection();
    Request request = realChain.request();
    long sentRequestMillis = System.currentTimeMillis();
    realChain.eventListener().requestHeadersStart(realChain.call());
    httpCodec.writeRequestHeaders(request);
    realChain.eventListener().requestHeadersEnd(realChain.call(), request);
    Response.Builder responseBuilder = null;
    if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
        // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
        // Continue" response before transmitting the request body. If we don't get that, return
        // what we did get (such as a 4xx response) without ever transmitting the request body.
         //這個請求頭代表瞭在發送請求體之前需要和服務器確定是否願意接受客戶端發送的請求體
        //但是如果服務器不同意接受請求體,那麼我們就需要標記該連接不能再被復用,調用 noNewStreams() 關閉相關的Socket。
        if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
            httpCodec.flushRequest();
            realChain.eventListener().responseHeadersStart(realChain.call());
            responseBuilder = httpCodec.readResponseHeaders(true);
        }
        if (responseBuilder == null) {
            // Write the request body if the "Expect: 100-continue" expectation was met.
            realChain.eventListener().requestBodyStart(realChain.call());
            long contentLength = request.body().contentLength();
            CountingSink requestBodyOut =
                    new CountingSink(httpCodec.createRequestBody(request, contentLength));
            BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
            request.body().writeTo(bufferedRequestBody);
            bufferedRequestBody.close();
            realChain.eventListener()
                    .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
        } else if (!connection.isMultiplexed()) {
            // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1
            // connection from being reused. Otherwise we're still obligated to transmit
            // the request body to leave the connection in a consistent state.
            streamAllocation.noNewStreams();
        }
    }
    httpCodec.finishRequest();
    if (responseBuilder == null) {
        realChain.eventListener().responseHeadersStart(realChain.call());
        responseBuilder = httpCodec.readResponseHeaders(false);
    }
    Response response = responseBuilder
            .request(request)
            .handshake(streamAllocation.connection().handshake())
            .sentRequestAtMillis(sentRequestMillis)
            .receivedResponseAtMillis(System.currentTimeMillis())
            .build();
    int code = response.code();
    if (code == 100) {
        // server sent a 100-continue even though we did not request one.
        // try again to read the actual response
        responseBuilder = httpCodec.readResponseHeaders(false);
        response = responseBuilder
                .request(request)
                .handshake(streamAllocation.connection().handshake())
                .sentRequestAtMillis(sentRequestMillis)
                .receivedResponseAtMillis(System.currentTimeMillis())
                .build();
        code = response.code();
    }
    realChain.eventListener()
            .responseHeadersEnd(realChain.call(), response);
    if (forWebSocket && code == 101) {
        // Connection is upgrading, but we need to ensure interceptors see a non-null
        // response body.
        response = response.newBuilder()
                .body(Util.EMPTY_RESPONSE)
                .build();
    } else {
        response = response.newBuilder()
                .body(httpCodec.openResponseBody(response))
                .build();
    }
    if ("close".equalsIgnoreCase(response.request().header("Connection"))
            || "close".equalsIgnoreCase(response.header("Connection"))) {
        streamAllocation.noNewStreams();
    }
    if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
        throw new ProtocolException(
                "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
    }
    return response;
}

The call httpCodec.writeRequestHeaders(request) writes the request headers into a buffer; they are only actually sent to the server when flushRequest() is called.

CallServerInterceptor uses HttpCodec to send the request to the server and to parse the result into a Response; this interceptor is where the HTTP message is actually serialized and parsed.
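As an illustration of the 100-continue branch above: a request only enters that code path if the caller sets the header explicitly. A hypothetical POST doing so might look like this (the URL and body are placeholders):

RequestBody body = RequestBody.create(
        MediaType.parse("application/json; charset=utf-8"),
        "{\"key\":\"value\"}");

Request request = new Request.Builder()
        .url("http://www.example.com/upload")
        .header("Expect", "100-continue") // ask the server to confirm before the body is sent
        .post(body)
        .build();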

4. OkHttp Summary

The whole of OkHttp's functionality is implemented in these five default interceptors, so understanding how the interceptor pattern works is a prerequisite.

The five interceptors are:

Retry and redirect interceptor

Before handing the request on (to the next interceptor), the retry interceptor checks whether the user has canceled the call; after it gets the result back, it decides from the response code whether a redirect is needed, and if so the whole interceptor chain is run again with the follow-up request.

Bridge interceptor

Before handing on, the bridge interceptor adds the headers required by the HTTP protocol (e.g. Host) and some default behaviour (e.g. gzip compression); after it gets the result back, it saves cookies via the CookieJar and decompresses gzip data.

Cache interceptor

As the name suggests, the cache interceptor reads the cache and decides whether to use it before handing on; after it gets the result back, it decides whether to store it in the cache.

Connect interceptor

Before handing on, the connect interceptor finds or creates a connection and obtains the corresponding socket stream; it does no extra work on the way back.

Call-server interceptor

The call-server interceptor performs the actual communication with the server: it writes the request data and parses the response it reads back.

Each interceptor has its own responsibility, like a station on an assembly line; after these five stages the final product, the Response, is complete.
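Custom interceptors plug into this pipeline at two fixed points, which is worth keeping in mind when adding your own; a small sketch of both registration points (the interceptor bodies here are no-op placeholders):

OkHttpClient client = new OkHttpClient.Builder()
        // Application interceptors run before RetryAndFollowUpInterceptor:
        // they see each call exactly once, even if it is retried or redirected.
        .addInterceptor(chain -> chain.proceed(chain.request()))
        // Network interceptors run between ConnectInterceptor and CallServerInterceptor:
        // they see every network round trip, including redirects and conditional GETs.
        .addNetworkInterceptor(chain -> chain.proceed(chain.request()))
        .build();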

That concludes this look at OkHttp's dispatcher and interceptors on Android.
