Most CDP drivers have a mechanism to wait for an idle network, or an almost-idle
network (sometimes called networkIdle2). These are events the browser must emit.
The page will now emit `networkIdle` when we are reasonably sure there's no more
network activity (this requires some slight changes to request interception,
since, I believe, intercepted requests should be considered).
`networkAlmostIdle` is currently _always_ emitted prior to emitting
`networkIdle`. We should tweak this but I can't, at a glance, think of a great
heuristic for when this should be emitted.
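For what it's worth, a minimal sketch of the kind of tracking involved, assuming we count in-flight requests (intercepted ones included) and debounce the way Chrome does - zero in-flight for 500ms for networkIdle, at most two for networkAlmostIdle. All names here are hypothetical:

```zig
// Hypothetical idle tracking: a request bumps the in-flight count, and the
// page only reports idle once the count has stayed at zero for a quiet
// period (500ms mirrors Chrome's debounce).
const IdleTracker = struct {
    inflight: usize = 0,
    last_activity_ms: i64 = 0,

    pub fn requestStarted(self: *IdleTracker, now_ms: i64) void {
        self.inflight += 1;
        self.last_activity_ms = now_ms;
    }

    pub fn requestCompleted(self: *IdleTracker, now_ms: i64) void {
        self.inflight -= 1;
        self.last_activity_ms = now_ms;
    }

    // For networkAlmostIdle, the check would be `self.inflight <= 2`.
    pub fn isIdle(self: *const IdleTracker, now_ms: i64) bool {
        return self.inflight == 0 and (now_ms - self.last_activity_ms) >= 500;
    }
};
```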
This further reduces the bouncing between page and server for loop polling: if
there is a page, the page polls; if there isn't, the server polls. Simpler.
Previously, the IO loop was doing three things:
1 - Managing timeouts (either from scripts or for our own needs)
2 - Handling browser IO events (page/script/xhr)
3 - Handling CDP events (accept, read, write, timeout)
With the libcurl merge, 1 was moved to an in-process scheduler and 2 was moved
to libcurl's own event loop. That meant the entire loop code, including
the dependency on tigerbeetle-io, existed only to handle a single TCP client.
Not only is that a lot of code, there was also friction between the two loops
(the libcurl one and our IO loop), which would result in latency: while one
loop is waiting for events, any events on the other loop go unprocessed.
This PR removes our IO loop. To accomplish this:
1 - The main accept loop is blocking. This is simpler and works perfectly well,
given we only allow 1 active connection.
2 - The client socket is passed to libcurl - yes, libcurl's loop can take
arbitrary FDs and poll them along with its own (see the sketch below).
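Point 2 relies on curl_multi_poll, which accepts caller-supplied descriptors via its extra_fds parameter. A rough sketch of the idea, assuming a `multi` handle and the accepted CDP `client_fd` (the libcurl calls are real; the surrounding function is illustrative):

```zig
const c = @cImport(@cInclude("curl/curl.h"));

// Waits up to 100ms for activity on either libcurl's own transfers or the
// CDP client socket, drives the transfers, and reports whether the CDP
// socket has data to read.
fn pollOnce(multi: *c.CURLM, client_fd: c.curl_socket_t) !bool {
    var extra = [1]c.struct_curl_waitfd{.{
        .fd = client_fd,
        .events = c.CURL_WAIT_POLLIN,
        .revents = 0,
    }};

    var numfds: c_int = 0;
    if (c.curl_multi_poll(multi, &extra, 1, 100, &numfds) != c.CURLM_OK) {
        return error.PollFailed;
    }

    var running: c_int = 0;
    _ = c.curl_multi_perform(multi, &running);

    return (extra[0].revents & c.CURL_WAIT_POLLIN) != 0;
}
```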
In addition to having one less dependency, the CDP code is quite a bit simpler,
especially around shutdowns and writes. This also removes _some_ of the latency
caused by the friction between page processing and CDP processing. Specifically,
when CDP blocks for input, http page events (script loading, xhr, ...) will
still be processed.
There's still friction. For one, the reverse isn't true: when the page is
waiting for events, CDP events aren't going to be processed. But page.wait
already has some sensitivity to this (e.g. the page.request_intercepted flag).
Also, when CDP waits, while we will process network events, page timeouts are
still not processed. Because of both these remaining issues, we still need to
jump between the two loops - but being able to block on CDP (even for a short
time) WITHOUT stopping the page's network I/O should reduce some latency.
Add a response_data event; CDP now captures the full body so that it can respond
to Network.getResponseBody. This isn't memory efficient, but I don't see
another way to do it. At least this way, it's only capturing/storing every
response body when (a) CDP is used and (b) Network.enable is called. That is,
as opposed to baking this into Http/Client.zig, which would force the memory
consumption for all use-cases.
There are arguably some optimizations we could make for XHR requests, which also
dupe/own the response. As of now, the response is dupe'd separately for CDP
and XHR.
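A simplified sketch of the capture path (hypothetical names, keyed by a numeric request id): chunks are only accumulated once capture has been enabled, so nothing is stored for non-CDP use-cases.

```zig
const std = @import("std");

const BodyCapture = struct {
    enabled: bool = false,
    allocator: std.mem.Allocator,
    bodies: std.AutoHashMap(u64, std.ArrayListUnmanaged(u8)),

    fn init(allocator: std.mem.Allocator) BodyCapture {
        return .{
            .allocator = allocator,
            .bodies = std.AutoHashMap(u64, std.ArrayListUnmanaged(u8)).init(allocator),
        };
    }

    // Called for every response_data notification.
    fn onResponseData(self: *BodyCapture, request_id: u64, chunk: []const u8) !void {
        if (!self.enabled) return;
        const entry = try self.bodies.getOrPutValue(request_id, .{});
        try entry.value_ptr.appendSlice(self.allocator, chunk);
    }

    // Backs Network.getResponseBody.
    fn getBody(self: *BodyCapture, request_id: u64) ?[]const u8 {
        const list = self.bodies.getPtr(request_id) orelse return null;
        return list.items;
    }
};
```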
With networking enabled, CDP listens to this event and emits a
`Network.loadingFinished` event. This event is used by puppeteer to know that
details about the response (i.e. the body) can be queried.
Added dummy handling for the Network.getResponseBody message. Returns an
empty body. Needed because we emit the loadingFinished event which signals
to drivers that they can ask for the body.
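The reply itself is tiny; `body` and `base64Encoded` are the fields the protocol defines, while `cmd.sendResult` just stands in for however the command writes its result:

```zig
// Hypothetical handler: always answers Network.getResponseBody with an
// empty, non-base64 body.
fn getResponseBody(cmd: anytype) !void {
    return cmd.sendResult(.{
        .body = "",
        .base64Encoded = false,
    });
}
```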
Stream (to json) the Transfer as a request and response object in the various
network interception-related events (e.g. Network.responseReceived).
Add a page.request_intercepted boolean flag for CDP to signal the page that
requests have been intercepted, allowing Page.wait to prioritize intercept
handling (or, at least, not block it).
Test speed has been improved only slightly, by tweaking a test that was taking 2 seconds to run.
Build has been improved by:
1 - moving logFunctionCallError out of js.Caller and to a standalone function
2 - removing some non-generic code from the generic portions of the logger
Caller.getter and Caller.setter have been removed in favor of calling
Caller.method. This wasn't previously possible - prior to our v8 upgrade, they
had different signatures.
Also removed a largely unused parser/str.zig file.
CDP translates this into a Network.loadingFailed. This is necessary to make sure
every Network.requestWillBeSent is paired with either a Network.loadingFailed
or a Network.responseReceived.
Outputs logfmt in release builds and a "pretty" print in debug mode. The format,
along with the log level, will become arguments to the binary at some point in
the future.
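A minimal sketch of picking the default from the build mode (the Format enum is illustrative; `builtin.mode` is the real thing):

```zig
const builtin = @import("builtin");

const Format = enum { logfmt, pretty };

// Pretty-print for Debug builds, logfmt otherwise. Eventually this (and the
// level) should come from CLI arguments instead.
pub const default_format: Format = if (builtin.mode == .Debug) .pretty else .logfmt;
```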
- Add 2 internal notifications
1 - http_request_start
2 - http_request_complete
- When the Network.enable CDP message is received, the browser context registers for
  these 2 events (when Network.disable is called, it unregisters); a registration sketch follows below.
- On http_request_start, CDP will emit a Network.requestWillBeSent message.
This _does not_ include all the fields, but what we have appears to be enough
for puppeteer.waitForNetworkIdle.
- On http_request_complete, CDP will emit a Network.responseReceived message.
This _does not_ include all the fields, but what we have appears to be enough
for puppeteer.waitForNetworkIdle.
We currently don't emit any other new events, including any network-specific
lifecycleEvent (i.e. Chrome will emit a networkIdle and a networkAlmostIdle).
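A rough sketch of the enable/disable wiring; BrowserContext, Request, register/unregister and sendEvent are all stand-ins for whatever the real types and calls are:

```zig
// Only listen to the internal notifications while Network.enable is in
// effect, and translate each into the corresponding CDP event.
fn networkEnable(bc: *BrowserContext) !void {
    try bc.notification.register(.http_request_start, bc, onRequestStart);
    try bc.notification.register(.http_request_complete, bc, onRequestComplete);
}

fn networkDisable(bc: *BrowserContext) void {
    bc.notification.unregister(.http_request_start, bc);
    bc.notification.unregister(.http_request_complete, bc);
}

fn onRequestStart(bc: *BrowserContext, req: *const Request) !void {
    // Only the subset of fields we actually have.
    try bc.sendEvent("Network.requestWillBeSent", .{
        .requestId = req.id,
        .request = .{ .url = req.url, .method = req.method },
    });
}
```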
To support this, the following other things were done:
- CDP now has a `notification_arena` which is re-used between browser contexts.
Normally, CDP code runs based on a "cmd" which has its own message_arena, but
these notifications happen out-of-band, so we needed a new arena which is
valid for handling 1 notification.
- HTTP Client is notification-aware. The SessionState no longer includes the
*http.Client directly. It instead includes an http.RequestFactory which is
the combination of the client + a specific configuration (i.e. *Notification).
This ensures that all requests made from that factory have the same settings
(a factory sketch follows this list).
- However, despite the above, _some_ requests do not appear to emit CDP events,
such as loading a <script src="X">. So the page still deals directly with the
*http.Client.
- Playwright and Puppeteer (but Playwright in particular) are very sensitive to
event ordering. These new events have introduced additional sensitivity.
The result sent to Page.navigate had to be moved to inside the navigate event
handler, which meant passing some cdp-specific data (the input.id) into the
NavigateOpts. This is the only way I found to keep both happy - the sequence
of events is closer to (but still pretty far from) what Chrome does.
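A sketch of the factory idea with illustrative names (the real types live in the http package): the client is paired with a fixed configuration, so anything created through the factory carries the same notification target.

```zig
// Illustrative only: every request created through the factory reports to
// the same notifier.
pub const RequestFactory = struct {
    client: *Client,
    notification: ?*Notification,

    pub fn initRequest(self: RequestFactory, method: Method, url: []const u8) !*Request {
        var req = try self.client.initRequest(method, url);
        req.notification = self.notification;
        return req;
    }
};
```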
Some data has to exist specifically for the navigation of one page to another.
For example, if a hyperlink is clicked, the URL begins its life with the
original page, but is transferred to the new page. The page_arena cannot be used
for such data.
It's possible to use the session_arena, but its lifetime is much longer and,
given enough navigation, could accumulate a lot of memory.
The new transfer_arena lives within the session, but its contents only survive
until the next navigation.
While currently only used for the navigation URL, the main goal here is to have
a place to put the request body on form submission, which has a lifetime similar
to a click url.
While I'm at it, I promoted the existing session arena and the new transfer
arena to the browser, allowing better memory re-use between sessions.
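A sketch of the intended lifetime, with illustrative names: the arena hangs off the session (and ultimately the browser) and is reset at each navigation.

```zig
const std = @import("std");

const Session = struct {
    transfer_arena: std.heap.ArenaAllocator,

    fn navigate(self: *Session, url: []const u8) ![]const u8 {
        // Whatever is still in the arena belonged to the previous navigation.
        _ = self.transfer_arena.reset(.retain_capacity);

        // The click URL now outlives the page it came from, but not the
        // navigation after this one.
        return self.transfer_arena.allocator().dupe(u8, url);
    }
};
```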
Replaces the existing, very specialized Notification with something more
general.
Currently, the existing page_navigate and page_navigated events have been migrated.
Telemetry's page navigation event now also hooks into these events to generate
the telemetry record.
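The general shape being aimed for, as a self-contained sketch (the event names are from this change; everything else is illustrative): listeners register for events, and every registered party - CDP, telemetry - sees the same dispatch.

```zig
const std = @import("std");

const Event = union(enum) {
    page_navigate: []const u8, // the target URL
    page_navigated: []const u8,
};

const Listener = struct {
    ctx: *anyopaque,
    func: *const fn (ctx: *anyopaque, event: Event) anyerror!void,
};

const Notification = struct {
    listeners: std.ArrayListUnmanaged(Listener) = .{},

    fn register(
        self: *Notification,
        allocator: std.mem.Allocator,
        ctx: *anyopaque,
        func: *const fn (*anyopaque, Event) anyerror!void,
    ) !void {
        try self.listeners.append(allocator, .{ .ctx = ctx, .func = func });
    }

    fn dispatch(self: *Notification, event: Event) !void {
        for (self.listeners.items) |listener| {
            try listener.func(listener.ctx, event);
        }
    }
};
```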
Emit contextCreated when it's needed, not when it actually happens.
I thought we could make this sync up, but we'd need to create 3 contexts to
satisfy both puppeteer and chromedp. So rather than having it partially
driven by notifications from Browser, I'd rather just fake it all for now.