Chromium Internals: The Network Stack


Inspecting the network: the old tool was the chrome://net-internals#sockets page. Current Chrome versions instead capture a log with chrome://net-export/, which you then load into a viewer to see, for example, how many sockets are open.
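A NetLog can also be captured from the command line. A hedged sketch of the invocation (the --log-net-log switch is real; the output path is an example):

chrome --log-net-log=/tmp/netlog.json

The resulting JSON file can be loaded into the NetLog viewer to inspect sockets, DNS lookups, and individual URL requests.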
1. Network Stack Overview
2. Code Structure
3. Anatomy of a Network Request (focused on HTTP)
3.1 URLRequest
3.2 URLRequestHttpJob
3.3 HttpNetworkTransaction
3.4 HttpStreamFactory
3.4.1 Proxy Resolution
3.4.2 Connection Management
3.4.3 Host Resolution
3.4.4 SSL/TLS
1. Network Stack Overview

The network stack is a mostly single-threaded, cross-platform library primarily used for resource fetching.

Its main interfaces are URLRequest and URLRequestContext.

URLRequest, as its name indicates, represents a request for a URL.

URLRequestContext contains all the associated context needed to fulfill URL requests, such as cookies, the host resolver, the proxy resolver, the cache, and so on.

Many URLRequest objects may share the same URLRequestContext.

Although the disk cache can use a dedicated thread, most network objects are not thread-safe, and several components (host resolution, certificate verification, etc.) may use unjoined worker threads.

Since the stack runs mostly on a single network thread, operations are not allowed to block that thread.

Therefore we use non-blocking operations with asynchronous callbacks (typically CompletionCallback).
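As a concrete illustration of that pattern, here is a minimal sketch; the Consumer class and its members are hypothetical, but the OK / ERR_IO_PENDING convention is the one used throughout //net:

class Consumer {
 public:
  void StartTransaction() {
    // Either completes synchronously, or returns net::ERR_IO_PENDING and
    // invokes the callback later on the network thread.
    int rv = transaction_->Start(
        &request_info_,
        base::BindOnce(&Consumer::OnStartComplete, base::Unretained(this)),
        net_log_);
    if (rv != net::ERR_IO_PENDING)
      OnStartComplete(rv);  // Completed synchronously.
  }

 private:
  void OnStartComplete(int result) {
    // |result| is net::OK or a negative net error code
    // (see net/base/net_errors.h).
  }
  ...
};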

The network stack code also logs most operations to the NetLog, which lets a consumer record those operations in memory and render them in a user-friendly format for debugging.

Chromium's developers wrote their own network stack in order to:

- allow coding to cross-platform abstractions;
- provide greater control than higher-level system networking libraries (e.g. WinHTTP or WinINET), specifically to:
  - avoid bugs that may exist in system libraries;
  - create more opportunities for performance optimization.

2.代码结构
net/base - 获取⼀些⽹络实⽤程序,例如主机解析,cookie,⽹络变化检测,SSL。

net/disk_cache - web resources缓存。

net/ftp - FTP实现。

代码主要基于旧的HTTP实现。

net/http - HTTP实现。

net/ocsp - 不使⽤系统库或系统未提供OCSP实施时的OCSP实施。

⽬前仅包含基于NSS的实现。

net/proxy - 代理(SOCKS和HTTP)配置,解析,脚本提取等。

net/quic - QUIC实现
net/socket - TCP套接字,“SSL套接字”和套接字池的跨平台实现。

net/socket_stream - WebSockets的套接字流。

net/spdy - HTTP2和SPDY实现。

net/url_request - URLRequest, URLRequestContext和URLRequestJob实现。

net/websockets - WebSockets实现。

3. Anatomy of a Network Request (focused on HTTP)

(Figure: http_network.jpg, a diagram of the layers described below.)
3.1 URLRequest
class URLRequest {
 public:
  // Construct a URLRequest for |url|, notifying events to |delegate|.
  URLRequest(const GURL& url, Delegate* delegate);

  // Specify the shared state.
  void set_context(URLRequestContext* context);

  // Start the request. Notifications will be sent to |delegate|.
  void Start();

  // Read data from the request.
  bool Read(IOBuffer* buf, int max_bytes, int* bytes_read);
};

class URLRequest::Delegate {
 public:
  // Called after the response has started coming in or an error occurred.
  virtual void OnResponseStarted(...) = 0;

  // Called when Read() calls complete.
  virtual void OnReadCompleted(...) = 0;
};
When a URLRequest is started, the first thing it does is decide what type of URLRequestJob to create.

The main job type is URLRequestHttpJob, which is used to fulfill http:// requests.

There are various other jobs, such as URLRequestFileJob (file://), URLRequestFtpJob (ftp://), URLRequestDataJob (data://), and so on.

The network stack determines the appropriate job to fulfill the request, but it provides two ways for clients to customize job creation: URLRequest::Interceptor and URLRequest::ProtocolFactory.

These are fairly redundant, except that the URLRequest::Interceptor interface is more extensive.

As the job progresses, it notifies the URLRequest, which notifies the URLRequest::Delegate as needed.
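Putting the pieces together, a consumer drives a request roughly as follows. This is a hypothetical sketch built on the simplified interface shown earlier (the Fetcher class, buffer size, and URL are invented for illustration):

class Fetcher : public URLRequest::Delegate {
 public:
  void Fetch(URLRequestContext* context) {
    request_.reset(new URLRequest(GURL("http://example.com/"), this));
    request_->set_context(context);  // Share cookies, cache, resolver, etc.
    request_->Start();
  }

  // URLRequest::Delegate implementation:
  void OnResponseStarted(...) override {
    // Headers are in; begin reading the body.
    int bytes_read = 0;
    if (request_->Read(buffer_.get(), 4096, &bytes_read))
      OnReadCompleted(...);  // The read completed synchronously.
  }

  void OnReadCompleted(...) override {
    // Consume the bytes just read, then call Read() again until it
    // signals end-of-stream.
  }

 private:
  scoped_ptr<URLRequest> request_;
  scoped_refptr<IOBuffer> buffer_;
};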

3.2 URLRequestHttpJob
URLRequestHttpJob first identifies the cookies to set for the HTTP request, which requires querying the CookieMonster in the request context.

This can be asynchronous, since the CookieMonster may be backed by an sqlite database.

After doing so, it asks the request context's HttpTransactionFactory to create an HttpTransaction.

Typically, the HttpCache is specified as the HttpTransactionFactory.

The HttpCache creates an HttpCache::Transaction to handle the HTTP request.

The HttpCache::Transaction first checks the HttpCache (which checks the disk cache) to see whether a cache entry already exists.

If it does, the response was already cached, or a network transaction already exists for this cache entry, so the transaction simply reads from that entry.

If the cache entry does not exist, we create it and ask the HttpCache's HttpNetworkLayer to create an HttpNetworkTransaction to service the request.

The HttpNetworkTransaction is given an HttpNetworkSession, which contains the contextual state for performing HTTP requests.

Some of this state comes from the URLRequestContext.
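In rough pseudocode (not the actual implementation; the method names are simplified), the decision the HttpCache::Transaction makes looks like this:

// Simplified sketch of HttpCache::Transaction's dispatch logic.
int HttpCache::Transaction::Start(...) {
  if (cache_->HasEntryFor(request_key_)) {
    // Response already cached, or a network transaction is already
    // populating the entry: attach to it and read from the cache.
    return ReadFromEntry();
  }
  // No entry: create one, then go to the network.
  CreateEntry(request_key_);
  network_transaction_ = network_layer_->CreateTransaction();
  return network_transaction_->Start(...);
}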

3.3 HttpNetworkTransaction
class HttpNetworkSession {
  ...
 private:
  // Shim so we can mock out ClientSockets.
  ClientSocketFactory* const socket_factory_;
  // Pointer to URLRequestContext's HostResolver.
  HostResolver* const host_resolver_;
  // Reference to URLRequestContext's ProxyService.
  scoped_refptr<ProxyService> proxy_service_;
  // Contains all the socket pools.
  ClientSocketPoolManager socket_pool_manager_;
  // Contains the active SpdySessions.
  scoped_ptr<SpdySessionPool> spdy_session_pool_;
  // Handles HttpStream creation.
  HttpStreamFactory http_stream_factory_;
};
The HttpNetworkTransaction asks the HttpStreamFactory to create an HttpStream.

The HttpStreamFactory returns an HttpStreamRequest, which is expected to handle all the logic of figuring out how to establish the connection and, once it is established, to wrap it with an HttpStream subclass that mediates talking directly to the network.

class HttpStream {
 public:
  virtual int SendRequest(...) = 0;
  virtual int ReadResponseHeaders(...) = 0;
  virtual int ReadResponseBody(...) = 0;
  ...
};
Currently there are only two main HttpStream subclasses, HttpBasicStream and SpdyHttpStream, although subclasses for HTTP pipelining are planned.

HttpBasicStream assumes it is reading/writing directly to a socket.

SpdyHttpStream reads and writes to a SpdyStream.

The network transaction calls methods on the stream and, on completion, invokes callbacks back to the HttpCache::Transaction, which notifies the URLRequestHttpJob and URLRequest as necessary.

For the HTTP path, generating and parsing HTTP requests and responses is handled by HttpStreamParser.

For the SPDY path, request and response parsing are handled by SpdyStream and SpdySession.

Based on the HTTP response, the HttpNetworkTransaction may need to perform HTTP authentication.

This may involve restarting the network transaction.
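In outline, the transaction drives the HttpStream interface shown above roughly like this (a simplified, synchronous-looking sketch; in reality each call may return net::ERR_IO_PENDING and complete via a callback):

// Hypothetical, simplified driver loop for an HttpStream.
int rv = stream->SendRequest(...);        // Send request headers (and body).
if (rv == net::OK)
  rv = stream->ReadResponseHeaders(...);  // Wait for and parse the headers.
while (rv == net::OK && !body_complete) {
  rv = stream->ReadResponseBody(...);     // Read the body, chunk by chunk.
}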

3.4 HttpStreamFactory
The HttpStreamFactory first performs proxy resolution to determine whether a proxy is needed.

The endpoint is set to the URL host or to the proxy server.

The HttpStreamFactory then checks the SpdySessionPool to see whether an available SpdySession exists for this endpoint.

If not, the stream factory requests a "socket" (TCP / proxy / SSL / etc.) from the appropriate pool.

If the socket is an SSL socket, it checks whether NPN indicated a protocol (which may be SPDY), and if so, uses the specified protocol.

For SPDY, we check whether a SpdySession already exists and use it if so; otherwise we create a new SpdySession from this SSL socket, create a SpdyStream from the SpdySession, and wrap a SpdyHttpStream around it.

For HTTP, we simply take the socket and wrap it in an HttpBasicStream.
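Condensed into pseudocode (hypothetical; in the real code this logic is spread across HttpStreamFactory's request and job classes), the branch looks like:

// Sketch of stream selection once a connected socket is available.
if (socket->IsSSL() && socket->NegotiatedProtocol() == "spdy") {
  SpdySession* session = spdy_session_pool->Find(endpoint);
  if (!session)
    session = spdy_session_pool->CreateFromSocket(endpoint, socket);
  stream = new SpdyHttpStream(session->CreateStream(...));
} else {
  stream = new HttpBasicStream(socket);  // Plain HTTP/1.x over the socket.
}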

3.4.1 Proxy Resolution

The HttpStreamFactory queries the ProxyService to return the ProxyInfo for the GURL.

The proxy service first needs to check that it has an up-to-date proxy configuration.

If it does not, it uses the ProxyConfigService to query the system for the current proxy settings.

If the proxy settings are set to no proxy or to a specific proxy, proxy resolution is simple (we return no proxy or the specific proxy).

Otherwise, we need to run a PAC script to determine the appropriate proxy (or lack thereof).

If we do not already have the PAC script, the proxy settings will either indicate that we should use WPAD auto-detection or specify a custom PAC URL, and we fetch the PAC script with the ProxyScriptFetcher.

Once we have the PAC script, we execute it via the ProxyResolver.

Note that we use a shim MultiThreadedProxyResolver object to dispatch PAC script execution to threads running ProxyResolverV8 instances.

This is because PAC script execution may block on host resolution.

Therefore, to prevent one stalled PAC script execution from blocking other proxy resolutions, we allow multiple PAC scripts to execute concurrently (caveat: V8 is not thread-safe, so we acquire a lock around the JavaScript bindings; when one V8 instance blocks on host resolution, it releases the lock so that another V8 instance can execute a PAC script to resolve the proxy for a different URL).
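For reference, here is a minimal example of the kind of PAC script being executed (the hostnames are hypothetical; FindProxyForURL() and helpers such as shExpMatch(), isInNet(), and dnsResolve() are part of the standard PAC interface, and the dnsResolve() call is exactly what can block on host resolution):

function FindProxyForURL(url, host) {
  // Internal hosts connect directly.
  if (shExpMatch(host, "*.corp.example.com"))
    return "DIRECT";
  // Hosts on the private network go through the internal proxy.
  if (isInNet(dnsResolve(host), "10.0.0.0", "255.0.0.0"))
    return "PROXY proxy.example.com:8080";
  // Everything else: try the proxy, fall back to a direct connection.
  return "PROXY proxy.example.com:8080; DIRECT";
}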

3.4.2 Connection Management

After the HttpStreamRequest has determined the appropriate endpoint (the URL endpoint or the proxy endpoint), it needs to establish a connection.

It does so by identifying the appropriate "socket" pool and requesting a socket from it.

Note that "socket" here basically means something we can read from and write to in order to send data over the network.

An SSL socket is built on top of a transport (TCP) socket and encrypts/decrypts the raw TCP data for the user.

Different socket types also handle different connection setups: HTTP/SOCKS proxies, SSL handshakes, and so on.

Socket pools are designed to be layered, so the various connection setups can be layered on top of other sockets.

An HttpStream can be agnostic of the actual underlying socket type, since it just needs to read from and write to the socket.

The socket pools perform a variety of functions: they enforce per-proxy, per-host, and per-process connection limits.

Currently these are set to 32 sockets per proxy, 6 sockets per destination host, and 256 sockets per process (not implemented exactly, but good enough).

Socket pools also abstract the socket request from its fulfillment, thereby giving us "late binding" of sockets.

A socket request can be fulfilled by a newly connected socket or by an idle socket (reused from a previous HTTP transaction).
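A sketch of what a pool request looks like from the caller's side (the names approximate the real ClientSocketHandle API but are simplified for illustration):

// Request a socket for a group ("host:port"); late binding means the
// handle is bound to whichever suitable socket becomes available first.
ClientSocketHandle handle;
int rv = pool->RequestSocket("www.example.com:80", params, priority,
                             &handle, callback);
if (rv == net::OK) {
  // An idle socket was available and was reused immediately.
} else if (rv == net::ERR_IO_PENDING) {
  // A ConnectJob is running (or the request is queued behind the group's
  // limit); |callback| fires once |handle| holds a connected socket.
}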

3.4.3 Host Resolution

Note that connection setup for a transport socket requires not only the transport (TCP) handshake, but usually host resolution beforehand.

HostResolverImpl uses getaddrinfo() to perform host resolution, which is a blocking call, so the resolver invokes it on unjoined worker threads.

Host resolution usually involves DNS resolution, but it may involve non-DNS namespaces such as NetBIOS/WINS.

Note that, as of this writing, we limit the number of concurrent host resolutions to 8, but we would like to optimize this value.

HostResolverImpl also contains a HostCache, which caches up to 1000 hostnames.
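The blocking call at the bottom of that path is plain getaddrinfo(); a minimal standalone illustration (example hostname; error handling abbreviated):

#include <netdb.h>

// Resolve "www.example.com" for a TCP connection. getaddrinfo() can block
// for seconds, which is why the resolver calls it on a worker thread,
// never on the network thread.
int ResolveExample() {
  struct addrinfo hints = {};
  hints.ai_family = AF_UNSPEC;      // Allow IPv4 or IPv6.
  hints.ai_socktype = SOCK_STREAM;  // TCP.

  struct addrinfo* results = nullptr;
  int err = getaddrinfo("www.example.com", "80", &hints, &results);
  if (err != 0)
    return err;  // See gai_strerror() for a description.

  // Convert |results| into the cross-platform AddressList here, then
  // post the result back to the network thread.
  freeaddrinfo(results);
  return 0;
}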

3.4.4 SSL/TLS
SSL sockets need to perform SSL connection setup as well as certificate verification.

Currently, on all platforms, we use NSS's libssl to handle the SSL connection logic.

However, we use platform-specific APIs for certificate verification.

We are moving towards a certificate verification cache, which consolidates multiple verification requests for the same certificate into a single verification job and caches the result for a period of time.

SSLClientSocketNSS roughly follows this sequence of events (ignoring advanced features such as Snap Start or DNSSEC-based certificate verification):

Connect() is called.

We set up NSS's SSL options based on the configuration specified by SSLConfig or by preprocessor macros.

Then we begin the handshake.

The handshake completes.

Assuming we did not hit any errors, we proceed to verify the server's certificate using the CertVerifier.

Certificate verification may take some time, so the CertVerifier uses the WorkerPool to actually call X509Certificate::Verify(), which is implemented with platform-specific APIs.
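That offload follows the usual post-to-a-worker, reply-on-the-network-thread pattern; a hypothetical sketch (the helper names here are invented, and the exact WorkerPool and X509Certificate::Verify() signatures have varied across Chromium versions):

// Runs on a worker thread: blocking, platform-specific verification.
void VerifyOnWorkerThread(X509Certificate* cert, const std::string& hostname,
                          CertVerifyResult* result) {
  cert->Verify(hostname, 0 /* flags */, result);
}

// Posted from the network thread by the CertVerifier:
base::WorkerPool::PostTaskAndReply(
    FROM_HERE,
    base::Bind(&VerifyOnWorkerThread, cert, hostname, result),
    base::Bind(&OnVerifyComplete, callback, result),
    true /* task_is_slow */);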

Note that Chromium carries its own NSS patches, which support advanced features that are not necessarily in the system's NSS installation, such as NPN, False Start, Snap Start, OCSP stapling, etc.

In current Chrome, networking has become a service of its own: the Network Service.

It is invoked via Mojo.
Life of a URLRequest
This document gives an overview of the browser's lower-layers for networking.
Networking in the browser ranges from high level Javascript APIs like fetch(), all the way down to writing encrypted bytes on a socket.
This document assumes that requests for URLs are mediated through the browser's Network Service, and focuses on all the layers below the Network Service, including key points of integration.
It's particularly targeted at people new to the Chrome network stack, but should also be useful for team members who may be experts at some parts of the stack, but are largely unfamiliar with other components. It starts by walking through how a basic request issued by another process works its way through the network stack, and then moves on to discuss how various components plug in.
If you notice any inaccuracies in this document, or feel that things could be better explained, please do not hesitate to submit patches.

Anatomy of the Network Stack
The network stack is located in //net/ in the Chrome repo, and uses the namespace “net”. Whenever a class name in this doc has no namespace, it can generally be assumed it's in //net/ and is in the net namespace.
The top-level network stack object is the URLRequestContext. The context has non-owning pointers to everything needed to create and issue a URLRequest. The context must outlive all requests that use it. Creating a context is a rather complicated process usually managed by URLRequestContextBuilder.
The primary use of the URLRequestContext is to create URLRequest objects using URLRequestContext::CreateRequest(). The URLRequest is the main interface used by direct consumers of the network stack. It manages loading URLs with the http, https, ws, and wss schemes. URLs for other schemes, such as file, filesystem, blob, chrome, and data, are managed completely outside of //net. Each URLRequest tracks a single request across all redirects until an error occurs, it's canceled, or a final response is received, with a (possibly empty) body.
The HttpNetworkSession is another major network stack object. It owns the HttpStreamFactory, the socket pools, and the HTTP/2 and QUIC session pools. It also has non-owning pointers to the network stack objects that more directly deal with sockets.
This document does not mention either of these objects much, but at layers above the HttpStreamFactory, objects often grab their dependencies from the URLRequestContext, while the HttpStreamFactory and layers below it generally get their dependencies from the HttpNetworkSession.
How many “Delegates”?
A URLRequest informs the consumer of important events for a request using two main interfaces: the URLRequest::Delegate interface and the NetworkDelegate interface.
The URLRequest::Delegate interface consists of a small set of callbacks needed to let the embedder drive a request forward. The NetworkDelegate is an object pointed to by the URLRequestContext and shared by all requests, and includes callbacks corresponding to most of the URLRequest::Delegate's callbacks, as well as an assortment of other methods.
The Network Service and Mojo
The network service, which lives in //services/network/, wraps //net/ objects, and provides cross-process network APIs and their implementations for the rest of Chrome. The network service uses the namespace “network” for all its classes. The Mojo interfaces it provides are in the network::mojom namespace. Mojo is Chrome's IPC layer. Generally there's a mojo::Remote proxy object in the consumer's process which also implements the network::mojom::Foo interface. When the proxy object's methods are invoked, it passes the call and all its arguments over a Mojo IPC channel, using a mojo::Receiver, to an implementation of the network::mojom::Foo interface in the network service (the implementation is typically a class named network::Foo), which may be running in another process, another thread in the consumer's process, or even the same thread in the consumer's process.
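To make that proxy/receiver pattern concrete, here is a hedged sketch for a hypothetical network::mojom::Foo interface (mojo::Remote and BindNewPipeAndPassReceiver() are the real Mojo primitives; Foo and CreateFoo are invented for illustration):

// In the consumer process: create the message pipe and keep the proxy end.
mojo::Remote<network::mojom::Foo> foo;

// Hand the receiver end to the network service, which binds it to its
// concrete implementation (typically a class named network::Foo).
network_context->CreateFoo(foo.BindNewPipeAndPassReceiver());

// Calls on |foo| are serialized and sent over the pipe; the implementation
// runs wherever it lives (another process, another thread, or even the
// same thread).
foo->DoSomething();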
The network::NetworkService object is a singleton that is used by Chrome to create all other network service objects. The primary objects it is used to create are the network::NetworkContexts, each of which owns its own mostly independent URLRequestContext. Chrome has a number of different NetworkContexts, as there is often a need to keep cookies, caches, and socket pools separate for different types of requests, depending on what's making the request. Here are the main NetworkContexts used by Chrome:
- The system NetworkContext, created and owned by Chrome's SystemNetworkContextManager, is used for requests that aren't associated with a particular user or Profile. It has no on-disk storage, so it loses all state, like cookies, after each browser restart. It has no in-memory HTTP cache, either. SystemNetworkContextManager also sets up global network service preferences.
- Each Chrome Profile, including incognito Profiles, has its own NetworkContext. Except for incognito and guest profiles, these contexts store information in their own on-disk store, which includes cookies and an HTTP cache, among other things. Each of these NetworkContexts is owned by a StoragePartition object in the browser process and created by the Profile's ProfileNetworkContextService.
- On platforms that support apps, each Profile has a NetworkContext for each app installed on that Profile. As with the main NetworkContext, these may have on-disk data, depending on the Profile and the App.
Life of a Simple URLRequest
A request for data is dispatched from some process, which results in creating a network::URLLoader in the network service (which, on desktop platforms, typically runs in its own process). The URLLoader then creates a URLRequest to drive the network request. That request first checks the HTTP cache, and then creates a network transaction object, if necessary, to actually fetch the data. That transaction tries to reuse a connection if one is available. If none is available, it creates a new one. Once it has established a connection, the HTTP request is dispatched, the response read and parsed, and the result returned back up the stack and sent over to the caller.
Of course, it's not quite that simple :-}.
Consider a simple request issued by some process other than the network service's process. Suppose it's an HTTP request, the response is uncompressed, there is no matching entry in the cache, and there are no idle sockets connected to the server in the socket pool.
Continuing with a “simple” URLRequest, here's a bit more detail on how things work.
Request starts in some (non-network) process
Summary:
- In the browser process, the network::mojom::NetworkContext interface is used to create a network::mojom::URLLoaderFactory.
- A consumer (e.g. the content::ResourceDispatcher for Blink, the content::NavigationURLLoaderImpl for frame navigations, or a network::SimpleURLLoader) passes a network::ResourceRequest object and a network::mojom::URLLoaderClient Mojo channel to the network::mojom::URLLoaderFactory, and tells it to create and start a network::mojom::URLLoader.
- Mojo sends the network::ResourceRequest over an IPC pipe to a network::URLLoaderFactory in the network process.
Chrome has a single browser process, which handles starting and configuring other processes, tab management, and navigation, among other things, and multiple child processes, which are generally sandboxed and have no network access themselves, apart from the network service (which either runs in its own process, or potentially in the browser process to conserve RAM). There are multiple types of child processes (renderer, GPU, plugin, network, etc.). The renderer processes are the ones that lay out webpages and run JavaScript.
The browser process creates the top level network::mojom::NetworkContext objects. The NetworkContext interface is privileged and can only be accessed from the browser process. The browser process uses it to create network::mojom::URLLoaderFactories, which can then be passed to less privileged processes to allow them to load resources using the NetworkContext. To create a URLLoaderFactory, a network::mojom::URLLoaderFactoryParams object is passed to the NetworkContext to configure fields that other processes are not trusted to set, for security and privacy reasons.
One such field is the net::IsolationInfo field, which includes:
- A net::NetworkIsolationKey, which is used to enforce network partitioning in the network stack, separating network resources used by different sites in order to protect against tracking a user across sites.
- A net::SiteForCookies, which is used to determine which site to send SameSite cookies for. SameSite cookies prevent cross-site attacks by only being accessible when that site is the top-level site.
- How to update these values across redirects.
A consumer, either in the browser process or a child process, that wants to make a network request gets a URLLoaderFactory from the browser process by some means, assembles a bunch of parameters in the large network::ResourceRequest object, creates a network::mojom::URLLoaderClient Mojo channel for the network::mojom::URLLoader to use to talk back to it, and then passes them all to the URLLoaderFactory, which returns a URLLoader object that it can use to manage the network request.
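As a concrete consumer-side example, network::SimpleURLLoader (mentioned above) wraps this plumbing; a hedged sketch, with the URL and size limit invented for illustration:

// Build the request parameters.
auto request = std::make_unique<network::ResourceRequest>();
request->url = GURL("https://example.com/data.json");

// Create the loader and fetch the body through a URLLoaderFactory.
auto loader = network::SimpleURLLoader::Create(std::move(request),
                                               traffic_annotation);
loader->DownloadToString(
    url_loader_factory,  // A network::mojom::URLLoaderFactory*.
    base::BindOnce([](std::unique_ptr<std::string> body) {
      // |body| is null on failure.
    }),
    /*max_body_size=*/1024 * 1024);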
network::URLLoaderFactory sets up the request in the network service
Summary:
- network::URLLoaderFactory creates a network::URLLoader.
- network::URLLoader uses the network::NetworkContext's URLRequestContext to create and start a URLRequest.
The network::URLLoaderFactory, along with all NetworkContexts and most of the network stack, lives on a single thread in the network service. It gets a reconstituted ResourceRequest object from the network::mojom::URLLoaderFactory Mojo pipe, does some checks to make sure it can service the request, and if so, creates a URLLoader, passing the request and the NetworkContext associated with the URLLoaderFactory.
The URLLoader then calls into the NetworkContext's net::URLRequestContext to create the URLRequest. The URLRequestContext has pointers to all the network stack objects needed to issue the request over the network, such as the cache, cookie store, and host resolver. The URLLoader then calls into the network::ResourceScheduler, which may delay starting the request, based on priority and other activity. Eventually, the ResourceScheduler starts the request.
Check the cache, request an HttpStream
Summary:
- The URLRequest asks the URLRequestJobFactory to create a URLRequestJob, and gets a URLRequestHttpJob.
- The URLRequestHttpJob asks the HttpCache to create an HttpTransaction, and gets an HttpCache::Transaction, assuming caching is enabled.
- The HttpCache::Transaction sees there's no cache entry for the request, and creates an HttpNetworkTransaction.
- The HttpNetworkTransaction calls into the HttpStreamFactory to request an HttpStream.
The URLRequest then calls into the URLRequestJobFactory to create a URLRequestHttpJob, a subclass of URLRequestJob, and then starts it (historically, non-network URL schemes were also dispatched through the network stack, so there were a variety of job types). The URLRequestHttpJob attaches cookies to the request, if needed. Whether or not SameSite cookies are attached depends on the IsolationInfo's SiteForCookies, the URL, and the URLRequest's request_initiator field.
The URLRequestHttpJob calls into the HttpCache to create an HttpCache::Transaction. The cache checks for an entry with the same URL and NetworkIsolationKey. If there's no matching entry, the HttpCache::Transaction will call into the HttpNetworkLayer to create an HttpNetworkTransaction, and transparently wrap it. The HttpNetworkTransaction then calls into the HttpStreamFactory to request an HttpStream to the server.
Create an HttpStream
Summary:
- HttpStreamFactory creates an HttpStreamFactory::Job.
- HttpStreamFactory::Job calls into the TransportClientSocketPool to populate a ClientSocketHandle.
- TransportClientSocketPool has no idle sockets, so it creates a TransportConnectJob and starts it.
- TransportConnectJob creates a StreamSocket and establishes a connection.
- TransportClientSocketPool puts the StreamSocket in the ClientSocketHandle, and calls into HttpStreamFactory::Job.
- HttpStreamFactory::Job creates an HttpBasicStream, which takes ownership of the ClientSocketHandle.
- It returns the HttpBasicStream to the HttpNetworkTransaction.
The HttpStreamFactory::Job creates a ClientSocketHandle to hold a socket, once connected, and passes it into the ClientSocketPoolManager. The ClientSocketPoolManager assembles the TransportSocketParams needed to establish the connection and creates a group name (“host:port”) used to identify sockets that can be used interchangeably.
The ClientSocketPoolManager directs the request to the TransportClientSocketPool, since there's no proxy and it's an HTTP request. The request is forwarded to the pool's ClientSocketPoolBase's ClientSocketPoolBaseHelper. If there isn't already an idle connection, and there are available socket slots, the ClientSocketPoolBaseHelper will create a new TransportConnectJob using the aforementioned params object. This job will do the actual DNS lookup by calling into the HostResolverImpl, if needed, and then finally establishes a connection. Once the socket is connected, ownership of the socket is passed to the ClientSocketHandle. The HttpStreamFactory::Job is then informed that the connection attempt succeeded, and it then creates an HttpBasicStream, which takes ownership of the ClientSocketHandle. It then passes ownership of the HttpBasicStream back to the HttpNetworkTransaction.
Send request and read the response headers
Summary:
- HttpNetworkTransaction gives the request headers to the HttpBasicStream, and tells it to start the request.
- HttpBasicStream sends the request, and waits for the response.
- The HttpBasicStream sends the response headers back to the HttpNetworkTransaction.
- The response headers are sent up through the URLRequest, to the network::URLLoader.
- They're then sent to the network::mojom::URLLoaderClient via Mojo.
The HttpNetworkTransaction passes the request headers to the HttpBasicStream, which uses an HttpStreamParser to (finally) format the request headers and body (if present) and send them to the server.
The HttpStreamParser waits to receive the response and then parses the HTTP/1.x response headers, and then passes them up through both the HttpNetworkTransaction and HttpCache::Transaction to the URLRequestHttpJob. The URLRequestHttpJob saves any cookies, if needed, and then passes the headers up to the URLRequest and on to the network::URLLoader, which sends the data over a Mojo pipe to the network::mojom::URLLoaderClient, passed in to the URLLoader when it was created.
Response body is read
Summary:
- network::URLLoader creates a raw Mojo data pipe, and passes one end to the network::mojom::URLLoaderClient.
- The URLLoader requests a shared memory buffer from the Mojo data pipe.
- The URLLoader tells the URLRequest to write to the memory buffer, and tells the pipe when data has been written to the buffer.
- The last two steps repeat until the request is complete.
Without waiting to hear back from the network::mojom::URLLoaderClient, the network::URLLoader allocates a raw mojo data pipe, and passes the client the read end of the pipe. The URLLoader then grabs an IPC buffer from the pipe, and passes a 64KB body read request down through the URLRequest all the way down to the HttpStreamParser. Once some data is read, possibly less than 64KB, the number of bytes read makes its way back to the URLLoader, which then tells the Mojo pipe the read was complete, and then requests another buffer from the pipe, to continue writing data to. The pipe may apply back pressure, to limit the amount of unconsumed data that can be in shared memory buffers at once. This process repeats until the response body is completely read.
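A sketch of the pipe setup (mojo::CreateDataPipe and the two-phase BeginWriteData/EndWriteData write are the real Mojo primitives; the comments compress the loop described above):

// Create the pipe; the consumer end goes to the URLLoaderClient.
mojo::ScopedDataPipeProducerHandle producer;
mojo::ScopedDataPipeConsumerHandle consumer;
CHECK_EQ(MOJO_RESULT_OK,
         mojo::CreateDataPipe(/*options=*/nullptr, producer, consumer));

// Repeatedly: (1) BeginWriteData() exposes a shared-memory buffer,
// (2) the URLRequest writes response bytes into it, and
// (3) EndWriteData() publishes them to the consumer.
// BeginWriteData() returning MOJO_RESULT_SHOULD_WAIT is the back-pressure
// signal: the consumer hasn't drained earlier buffers yet.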
URLRequest is destroyed
Summary:
- When complete, the network::URLLoaderFactory deletes the network::URLLoader, which deletes the URLRequest.
- During destruction, the HttpNetworkTransaction determines if the socket is reusable, and if so, tells the HttpBasicStream to return it to the socket pool.
When the URLRequest informs the network::URLLoader the request is complete, the URLLoader passes the message along to the network::mojom::URLLoaderClient, over its Mojo pipe, before telling the URLLoaderFactory to destroy the URLLoader, which results in destroying the URLRequest and closing all Mojo pipes related to the request.
When the HttpNetworkTransaction is being torn down, it figures out if the socket is reusable. If not, it tells the HttpBasicStream to close the socket. Either way, the ClientSocketHandle then returns the socket to the socket pool, either for reuse or so the socket pool knows it has another free socket slot.
Object Relationships and Ownership
A sample of the object relationships involved in the above process is diagrammed here: (diagram not reproduced in this copy)
There are a couple of points in the above diagram that are not obvious visually:
The method that generates the filter chain that is hung off the URLRequestJob is declared on URLRequestJob, but the only current implementation of it is on URLRequestHttpJob, so the generation is shown as happening from that class.
HttpTransactions of different types are layered; i.e., an HttpCache::Transaction contains a pointer to an HttpTransaction, but that pointed-to HttpTransaction generally is an HttpNetworkTransaction.
Additional Topics
HTTP Cache
The HttpCache::Transaction sits between the URLRequestHttpJob and the HttpNetworkTransaction, and implements the HttpTransaction interface, just like the HttpNetworkTransaction. The HttpCache::Transaction checks if a request can be served out of the cache. If a request needs to be revalidated, it handles sending a conditional revalidation request over the network. It may also break a range request into multiple cached and non-cached contiguous chunks, and may issue multiple network requests for a single range URLRequest.
The HttpCache::Transaction uses one of three disk_cache::Backends to actually store the cache's index and files: the in-memory backend, the blockfile cache backend, and the simple cache backend. The first is used in incognito mode. The latter two are both stored on disk, and are used on different platforms.
One important detail is that the cache has a read/write lock for each URL. The lock technically allows multiple readers at once, but since an HttpCache::Transaction always grabs the lock for both writing and reading before downgrading it to a read-only lock, all requests for the same URL are effectively serialized. The renderer process merges requests for the same URL in many cases, which mitigates this problem to some extent.
It's also worth noting that each renderer process also has its own in-memory cache, which has no relation to the cache implemented in net/, which lives in the network service.
Cancellation
A consumer can cancel a request at any time by deleting the network::mojom::URLLoader pipe used by the request. This will cause the network::URLLoader to destroy itself and its URLRequest.
When an HttpNetworkTransaction for a cancelled request is being torn down, it figures out if the socket the HttpStream owns can potentially be reused, based on the protocol (HTTP / HTTP/2 / QUIC) and any received headers. If the socket potentially can be reused, an HttpResponseBodyDrainer is created to try and read any remaining body bytes of the HttpStream, if any, before returning the socket to the SocketPool. If this takes too long, or there's an error, the socket is closed instead. Since this all happens at the layer below the cache, any drained bytes are not written to the cache, and as far as the cache layer is concerned, it only has a partial response.
Redirects
The URLRequestHttpJob checks if headers indicate a redirect when it receives them from the next layer down (typically the HttpCache::Transaction). If they indicate a redirect, it tells the cache the response is complete, ignoring the body, so the cache only has the headers. The cache then treats it as a complete entry, even if the headers indicated there will be a body.
The URLRequestHttpJob then checks with the URLRequest if the redirect should be followed. The URLRequest then informs the network::URLLoader about the redirect, which passes information about the redirect to network::mojom::URLLoaderClient, in the consumer process. Whatever issued the original request then checks if the redirect should be followed.
If the redirect should be followed, the URLLoaderClient calls back into the URLLoader over the network::mojom::URLLoader Mojo interface, which tells the URLRequest to follow the redirect. The URLRequest then creates a new URLRequestJob to send the new request. If the URLLoaderClient chooses to cancel the request instead, it can delete the network::mojom::URLLoader pipe, just like the cancellation case discussed above. In either case, the old HttpTransaction is destroyed, and the HttpNetworkTransaction attempts to drain the socket for reuse, as discussed in the previous section.
In some cases, the consumer may choose to handle a redirect itself, like passing off the redirect to a ServiceWorker. In this case, the consumer cancels the request and then calls into some other network::mojom::URLLoaderFactory with the new URL to continue the request.
Filters (gzip, deflate, brotli, etc)
When the URLRequestHttpJob receives headers, it sends a list of all Content-Encoding values to Filter::Factory, which creates a (possibly empty) chain of filters. As body bytes are received, they're passed through the filters at the URLRequestJob layer and the decoded bytes are passed back to the URLRequest::Delegate.
Since this is done above the cache layer, the cache stores the responses prior to decompression. As a result, if files aren't compressed over the wire, they aren't compressed in the cache, either.
Socket Pools
The ClientSocketPoolManager is responsible for assembling the parameters needed to connect a socket, and then sending the request to the right socket pool. Each socket request sent to a socket pool comes with a socket params object, a ClientSocketHandle, and a “group name”. The params object contains all the information a ConnectJob needs to create a connection of a given type, and different types of socket pools take different params types. The ClientSocketHandle will take temporary ownership of a connected socket and return it to the socket pool when done. All connections with the same group name in the same pool can be used to service the same connection requests, so it consists of host, port, protocol, and whether “privacy mode” is enabled for sockets in the group.
All socket pool classes derive from the ClientSocketPoolBase. The ClientSocketPoolBase handles managing sockets - which requests to create sockets for, which requests get connected sockets first, which sockets belong to which groups, connection limits per group, keeping track of and closing idle sockets, etc. Each ClientSocketPoolBase subclass has its own ConnectJob type, which establishes a connection using the socket params, before the pool hands out the connected socket.
Socket Pool Layering
Some socket pools are layered on top of other socket pools. This is done when a “socket” in a higher layer needs to establish a connection in a lower level pool and then take ownership of it as part of its connection process. For example, each socket in the SSLClientSocketPool is layered on top of a socket in the TransportClientSocketPool. There are a couple of additional complexities here.
From the perspective of the lower layer pool, all of its sockets that a higher layer pool owns are actively in use, even when the higher layer pool considers them idle. As a result, when a lower layer pool is at its connection limit and needs to make a new connection, it will ask any higher layer pools to close an idle connection if they have one, so it can make a new connection.
Since sockets in the higher layer pool are also in a group in the lower layer pool, they must have their own distinct group name. This is needed so that, for instance, SSL and HTTP connections won't be grouped together in the TcpClientSocketPool, which the SSLClientSocketPool sits on top of.
Socket Pool Class Relationships
The relationships between the important classes in the socket pools are shown diagrammatically for the lowest layer socket pool (TransportSocketPool) below. (diagram not reproduced in this copy)
