Unlike XHR, Response is a bit more complicated: it can exist in Zig code
without ever being given to v8. So we need to track this handoff to know who is
responsible for freeing it: the Zig code (on error/shutdown) or the v8 code
(after promise resolution).
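A minimal sketch of the idea, with made-up names (the real type carries more
state): the object records which side currently owns it, and the Zig-side
cleanup only frees it if v8 never took over.

```zig
const std = @import("std");

// hypothetical sketch: track which side is responsible for freeing the Response
const Response = struct {
    owner: enum { zig, v8 } = .zig,

    // called when the response is handed to v8, i.e. the promise resolves;
    // from this point the v8 finalizer is responsible for cleanup
    fn transferToV8(self: *Response) void {
        self.owner = .v8;
    }

    // Zig-side cleanup path (error/shutdown): only free if we still own it
    fn deinit(self: *Response, allocator: std.mem.Allocator) void {
        if (self.owner == .zig) {
            allocator.destroy(self);
        }
    }
};
```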
This also cleans up a bad merge for the XHR finalizer and adds cleanup of the
`XMLHttpRequestEventTarget` callbacks.
When a type is finalized by V8, it's because it's fallen out of scope. When a
type is finalized by Zig, it's because the Context is being shut down.
Those are two different environments and might require distinct cleanup logic.
Specifically, a Zig-initiated finalization needs to consider that the page and
context are being shut down. It isn't necessarily safe to execute JavaScript at
this point, and thus, not safe to execute a callback (on_error, on_abort,
ready_state_change, ...).
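As a rough illustration (the names and callback plumbing here are made up), the
finalizer can branch on who initiated it and only touch JavaScript on the
v8-initiated path:

```zig
// hypothetical sketch of the two cleanup paths
const FinalizeReason = enum { v8_gc, context_shutdown };

const Xhr = struct {
    on_error: ?*const fn () void = null,

    fn finalize(self: *Xhr, reason: FinalizeReason) void {
        // only run JavaScript-facing callbacks when v8 finalized us; during a
        // Context shutdown it isn't necessarily safe to execute JavaScript
        if (reason == .v8_gc) {
            if (self.on_error) |cb| cb();
        }
        // Zig-side resources are released on both paths
        self.on_error = null;
    }
};
```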
This was already being handled for async scripts, but for sync scripts, we'd
log the error and then proceed to try to execute the body (which would be some
error message).
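A sketch of the sync-script fix, using stand-in helpers rather than the real
fetch/evaluate plumbing: after logging, bail out instead of falling through to
evaluation.

```zig
const std = @import("std");

fn runSyncScript(src: []const u8) void {
    const body = fetchBlocking(src) catch |err| {
        std.log.err("script fetch failed: {s} ({any})", .{ src, err });
        return; // previously we'd fall through and evaluate the error body
    };
    evaluate(body);
}

// stand-ins for the real fetch/evaluate code
fn fetchBlocking(src: []const u8) ![]const u8 {
    _ = src;
    return error.ConnectionRefused;
}

fn evaluate(body: []const u8) void {
    _ = body;
}
```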
This allows the header_callback to return a boolean indicating whether the
HTTP client should continue to process the request or abort it.
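In sketch form, with an illustrative signature (the real callback and its
arguments may differ):

```zig
const std = @import("std");

// returning true keeps processing the request; false tells the HTTP client
// to abort it
const HeaderCallback = *const fn (line: []const u8) bool;

fn onHeaderLine(line: []const u8) bool {
    // example policy: abort when we see a header we've been told to reject
    if (std.mem.startsWith(u8, line, "X-Example-Blocked:")) {
        return false;
    }
    return true;
}
```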
Deciding what should be an `lp.assert`, vs a `std.debug.assert`, vs a
debug-only assert is a little arbitrary.
Debug-only asserts, guarded with an `if (comptime IS_DEBUG)`, obviously avoid
the check in release and thus have a performance advantage. We also use them at
library boundaries. If libcurl says it will always emit a header line with a
trailing \r\n, is that really a check we need to do in production? I don't think
so. First, that code path is checked _a lot_ in debug. Second, it feels a bit
like we're testing libcurl (in production!)... why? A debug-only assertion should
be good enough to catch any changes in libcurl.
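For example, the libcurl boundary check above might look something like this (a
sketch; the parsing helper is illustrative, `IS_DEBUG` is the same comptime flag
mentioned earlier):

```zig
const std = @import("std");
const builtin = @import("builtin");

const IS_DEBUG = builtin.mode == .Debug;

fn parseHeaderLine(line: []const u8) []const u8 {
    if (comptime IS_DEBUG) {
        // libcurl promises a trailing \r\n on every header line; verify that
        // promise in debug builds only, where this path runs constantly
        std.debug.assert(std.mem.endsWith(u8, line, "\r\n"));
    }
    return std.mem.trimRight(u8, line, "\r\n");
}
```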
Zigdom broke request interception. It isn't zigdom specifically, but in zigdom
we properly block the parser when executing a normal (not async, not defer)
script. This does not work well with request interception, because an
intercepted request isn't blocked on HTTP data; it's blocked on a message from
CDP. Generally, neither our Page nor our ScriptManager is CDP-aware. And, even if
they were, it would be hard to break out of our parsing and return control to
the CDP server.
To fix this, we expand on the HTTP client's basic awareness of CDP (via its
`extra_socket` field). The HTTP client is now able to block until an intercepted
request is continued/aborted/fulfilled. It does this by being able to ask the
CDP client to read/process data.
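Roughly, the blocking looks like this (stand-in types; the real code lives in
the HTTP client and its CDP hooks):

```zig
const InterceptState = enum { pending, continued, aborted, fulfilled };

const Transfer = struct {
    intercept_state: InterceptState = .pending,
};

const CdpClient = struct {
    // stand-in: the real implementation reads from the CDP socket and applies
    // Fetch.continueRequest / Fetch.failRequest / Fetch.fulfillRequest
    fn readAndProcessMessages(self: *CdpClient, transfer: *Transfer) !void {
        _ = self;
        transfer.intercept_state = .continued;
    }
};

// the HTTP client blocks here until CDP resolves the interception
fn waitForInterception(cdp: *CdpClient, transfer: *Transfer) !void {
    while (transfer.intercept_state == .pending) {
        try cdp.readAndProcessMessages(transfer);
    }
}
```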
This does not yet work for intercepted authentication requests.
When using CDP, we poll the HTTP clients along with the CDP socket. Because this
polling can be long, we first process any pending message. This can end up
processing _all_ messages, in which case the poll will block for a long time.
This change makes it so that when that initial processing handles one or more
messages, we do not poll but instead return. This allows the page lifecycle to be
processed normally (and not just blocking on poll, waiting for the CDP client
to send data).
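A sketch of the new loop behavior, with stand-in names:

```zig
const Cdp = struct {
    // stand-in: drain already-buffered CDP messages, returning how many were handled
    fn processPendingMessages(self: *Cdp) !usize {
        _ = self;
        return 0;
    }

    // stand-in: the potentially long poll over the CDP socket and HTTP clients
    fn poll(self: *Cdp) !void {
        _ = self;
    }

    fn tick(self: *Cdp) !void {
        const processed = try self.processPendingMessages();
        if (processed > 0) {
            // return so the page lifecycle can run instead of blocking in poll,
            // waiting for the CDP client to send more data
            return;
        }
        try self.poll();
    }
};
```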
This changes how non-async module loading works. In general, module loading
is triggered by a v8 callback. We ask it to process a module (a
`<script type=module>`) and then, for every module that it depends on, we get
a callback. This
callback expects the nested v8.Module instance, so we need to load it then and
there (as opposed to dynamic imports, where we only have to return a promise).
Previously, we solved this by issuing a blocking HTTP GET in each callback. The
HTTP loop was able to continue downloading already-queued resources, but if
a module depended on 20 nested modules, we'd issue 20 blocking GETs, one after
the other.
Once a module is compiled, we can ask v8 for a list of its dependent modules. We
can then immediately start to download all of those modules. We then evaluate
the original module, which will trigger our callback. At this point, we still
need to block and wait for the response, but we've already started the download
and it's much faster. Sure, for the first module, we might need to wait the same
amount of time, but for the other 19, chances are by the time the callback
executes, we already have it downloaded and ready.
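The shape of the change, roughly (stand-in types and method names; the real
code goes through the v8 bindings):

```zig
const Module = struct {
    // the dependency specifiers v8 reports for a compiled module
    deps: []const []const u8,
};

const ScriptManager = struct {
    // stand-in: queue a non-blocking GET for the given URL
    fn startDownload(self: *ScriptManager, specifier: []const u8) !void {
        _ = self;
        _ = specifier;
    }

    // after compiling a module, start downloading every dependency so the
    // later (blocking) resolve callbacks mostly find the data already there
    fn prefetchDependencies(self: *ScriptManager, module: *const Module) !void {
        for (module.deps) |specifier| {
            try self.startDownload(specifier);
        }
    }
};
```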
Most CDP drivers have a mechanism to wait for an idle network, or an almost-idle
network (sometimes called networkIdle2). These are events the browser must emit.
The page will now emit `networkIdle` when we are reasonably sure there's no more
network activity (this requires some slight changes to request interception,
since, I believe, intercepted requests should be considered).
`networkAlmostIdle` is currently _always_ emitted prior to emitting
`networkIdle`. We should tweak this, but I can't, at a glance, think of a great
heuristic for when it should be emitted.
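A rough sketch of the emission logic (illustrative names): count in-flight
requests, treating intercepted requests as still active, and fire once when the
count reaches zero.

```zig
const Page = struct {
    // in-flight requests, including intercepted requests still waiting on CDP
    in_flight: usize = 0,
    network_idle_sent: bool = false,

    fn requestStarted(self: *Page) void {
        self.in_flight += 1;
    }

    fn requestFinished(self: *Page, emit: *const fn ([]const u8) void) void {
        self.in_flight -= 1;
        if (self.in_flight == 0 and !self.network_idle_sent) {
            self.network_idle_sent = true;
            // for now, networkAlmostIdle always goes out right before networkIdle
            emit("networkAlmostIdle");
            emit("networkIdle");
        }
    }
};
```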
This further reduces bouncing between the page and the server for loop polling.
If there is a page, the page polls. If there isn't a page, the server polls.
Simpler.
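In sketch form (stand-in types), exactly one side owns polling at any time:

```zig
const Page = struct {
    fn poll(self: *Page) !void {
        _ = self;
    }
};

const Server = struct {
    fn poll(self: *Server) !void {
        _ = self;
    }
};

const Browser = struct {
    page: ?*Page = null,
    server: *Server,

    fn pollOnce(self: *Browser) !void {
        if (self.page) |page| {
            try page.poll();
        } else {
            try self.server.poll();
        }
    }
};
```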