Avoids having to allocate small strings when going from v8 -> Zig. Also
added a separate type, string.Global, which uses the arena, rather than
the call_arena, if an allocation _is_ necessary. (This is similar to a feature
we had before, but was lost in zigdom.) Strings from v8 that need to be
persisted can be allocated directly v8 -> arena, rather than v8 -> call_arena
-> arena.
I think there are a lot of places where we should use string.String - where
strings are expected to be short (e.g. attribute names) - but I started with just
document.querySelector and querySelectorAll.
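Roughly, the idea (a sketch with made-up sizes and field names, not the actual
string.String API) is to keep short v8 values in an inline buffer and only fall
back to an arena allocation when the value is too long:

```zig
const std = @import("std");

// Sketch only: short strings live in a fixed inline buffer (no allocation);
// anything longer is duplicated into the supplied arena.
pub const String = union(enum) {
    short: struct { buf: [64]u8, len: u8 },
    allocated: []const u8,

    pub fn init(arena: std.mem.Allocator, value: []const u8) !String {
        if (value.len <= 64) {
            var s = String{ .short = .{ .buf = undefined, .len = @intCast(value.len) } };
            @memcpy(s.short.buf[0..value.len], value);
            return s;
        }
        return .{ .allocated = try arena.dupe(u8, value) };
    }

    pub fn slice(self: *const String) []const u8 {
        return switch (self.*) {
            .short => |*s| s.buf[0..s.len],
            .allocated => |a| a,
        };
    }
};
```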
Deciding what should be an lp.assert, vs a std.debug.assert, vs a debug-only
assert is a little arbitrary.
debug-only asserts, guarded with an `if (comptime IS_DEBUG)`, obviously avoid the
check in release and thus have a performance advantage. We also use them at
library boundaries. If libcurl says it will always emit a header line with a
trailing \r\n, is that really a check we need to do in production? I don't think
so. First, that code path is checked _a lot_ in debug. Second, it feels a bit
like we're testing libcurl (in production!)... why? A debug-only assertion should
be good enough to catch any changes in libcurl.
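For illustration, a debug-only assert at the libcurl boundary might look
something like this (a sketch; IS_DEBUG is just a constant derived from the
build mode):

```zig
const std = @import("std");
const builtin = @import("builtin");

const IS_DEBUG = builtin.mode == .Debug;

// Strips the trailing \r\n from a header line handed to us by libcurl.
// libcurl documents that the line always ends in \r\n, so we only pay for
// verifying that promise in debug builds; in release we simply trust it.
fn trimHeaderLine(line: []const u8) []const u8 {
    if (comptime IS_DEBUG) {
        std.debug.assert(std.mem.endsWith(u8, line, "\r\n"));
    }
    return line[0 .. line.len - 2];
}
```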
Zigdom broke request interception. It isn't zigdom specifically, but in zigdom
we properly block the parser when executing a normal (not async, not defer)
script. This does not work well with request interception, because an
intercepted request isn't blocked on HTTP data; it's blocked on a message from
CDP. Generally, neither our Page nor our ScriptManager is CDP-aware. And, even if
they were, it would be hard to break out of our parsing and return control to
the CDP server.
To fix this, we expand on the HTTP Client's basic awareness of CDP (via its
extra_socket field). The HTTP client is now able to block until an intercepted
request is continued/aborted/fulfilled. It does this by being able to ask the
CDP client to read/process data.
This does not yet work for intercepted authentication requests.
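The blocking flow has roughly this shape (a hypothetical sketch; the type and
function names are illustrative, not the real ones):

```zig
const InterceptState = enum { pending, continued, aborted, fulfilled };

// Stand-in for the CDP side. In the real code this would block on the extra
// socket, read one CDP message, and dispatch it (e.g. Fetch.continueRequest,
// Fetch.failRequest or Fetch.fulfillRequest).
const Cdp = struct {
    fn readAndProcessMessage(self: *Cdp) !void {
        _ = self;
    }
};

// The HTTP client keeps handing control to CDP until the intercepted request
// has been continued, aborted, or fulfilled.
fn waitForInterception(cdp: *Cdp, state: *InterceptState) !void {
    while (state.* == .pending) {
        try cdp.readAndProcessMessage();
    }
}
```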
The call_arena was previously owned by the js.Context, but a reference to it has
to exist on the page, and since the page is created before the context, that
field was set to undefined on the page. While this has never caused an issue,
there's no reason the page shouldn't own it, with the context simply referencing
it.
Also, renamed the js.Context.context_arena to simply `arena`, which is more
consistent with other arena names (e.g. page.arena).
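In sketch form (field names are illustrative), the ownership now looks like this:

```zig
const std = @import("std");

// The page owns the per-call arena...
const Page = struct {
    arena: std.heap.ArenaAllocator,
    call_arena: std.heap.ArenaAllocator,
};

// ...and the js.Context just borrows a reference to it. context_arena is now
// simply `arena`, matching page.arena.
const Context = struct {
    arena: std.heap.ArenaAllocator,
    call_arena: *std.heap.ArenaAllocator,
};
```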
Adds brotli as a submodule and compiles the decoder, making it available to
libcurl.
Some websites appear to send brotli-encoded responses even though we don't
advertise support for it (e.g. https://movie.douban.com).
Previously, the IO loop was doing three things:
1 - Managing timeouts (either from scripts or for our own needs)
2 - Handling browser IO events (page/script/xhr)
3 - Handling CDP events (accept, read, write, timeout)
With the libcurl merge, 1 was moved to an in-process scheduler and 2 was moved
to libcurl's own event loop. That means the entire loop code, including the
dependency on tigerbeetle-io, existed only to handle a single TCP client.
Not only is that a lot of code, there was also friction between the two loops
(the libcurl one and our IO loop), which would result in latency: while one
loop is waiting for events, any events on the other loop go unprocessed.
This PR removes our IO loop. To accomplish this:
1 - The main accept loop is blocking. This is simpler and works perfectly well,
given we only allow 1 active connection.
2 - The client socket is passed to libcurl - yes, libcurl's loop can take
arbitrary FDs and poll them along with its own (see the sketch below).
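As a minimal sketch, assuming we call libcurl through cImport (the surrounding
names and the 1s timeout are illustrative):

```zig
const c = @cImport(@cInclude("curl/curl.h"));

// Poll libcurl's own transfers plus the CDP client socket. Returns true when
// the CDP socket has data waiting, so control can go back to the CDP server.
fn pollOnce(multi: ?*c.CURLM, cdp_fd: c.curl_socket_t) !bool {
    var extra = [1]c.struct_curl_waitfd{.{
        .fd = cdp_fd,
        .events = c.CURL_WAIT_POLLIN,
        .revents = 0,
    }};
    var num_fds: c_int = 0;
    const rc = c.curl_multi_poll(multi, &extra, 1, 1000, &num_fds);
    if (rc != c.CURLM_OK) return error.CurlMultiPoll;
    return (extra[0].revents & c.CURL_WAIT_POLLIN) != 0;
}
```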
In addition to having one less dependency, the CDP code is quite a bit simpler,
especially around shutdowns and writes. This also removes _some_ of the latency
caused by the friction between page processing and CDP processing. Specifically,
when CDP now blocks for input, HTTP page events (script loading, xhr, ...) will
still be processed.
There's still friction. For one, the reverse isn't true: when the page is
waiting for events, CDP events aren't going to be processed. But page.wait
already has some sensitivity to this (e.g. the page.request_intercepted flag).
Also, when CDP waits, while we will process network events, page timeouts are
still not processed. Because of both these remaining issues, we still need to
jump between the two loops - but being able to block on CDP (even for a short
time) WITHOUT stopping the page's network I/O should reduce some latency.
Stream (to JSON) the Transfer as request and response objects in the various
network interception-related events (e.g. Network.responseReceived).
Add a page.request_intercepted boolean flag for CDP to signal the page that
requests have been intercepted, allowing Page.wait to prioritize intercept
handling (or, at least, not block it).
Enabling curl's cookie engine brings an advantage (see the sketch below):
* handles cookies during a redirect: when a server redirects and sets cookies,
  curl correctly sends those cookies back on the next request
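For reference, the engine can be enabled on an easy handle by passing an empty
CURLOPT_COOKIEFILE (in-memory only, no cookie file is read); a minimal sketch:

```zig
const c = @cImport(@cInclude("curl/curl.h"));

fn enableCookieEngine(easy: ?*c.CURL) !void {
    // An empty filename turns on the cookie engine without reading any file,
    // so cookies set during a redirect are replayed on the follow-up request.
    if (c.curl_easy_setopt(easy, c.CURLOPT_COOKIEFILE, "") != c.CURLE_OK)
        return error.CurlSetOpt;
}
```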
On client.request(req) we now immediately wrap the request in a Transfer. This
results in less copying of the Request object. It also makes transfer.uri
available, so CDP no longer needs to parse request.url with std.Uri.
The main advantage is that it's easier to manage resources. There was a use-
after-free before due to the sensitive nature of the transfer's lifetime. There
were also corner cases where some resources might not be freed. This is
hopefully fixed now that the Transfer's lifetime has been extended.
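A rough sketch of the shape (the types and fields are illustrative, not the
actual ones):

```zig
const std = @import("std");

const Request = struct {
    url: []const u8,
    method: []const u8,
};

const Transfer = struct {
    req: Request,
    uri: std.Uri,
};

// client.request wraps the Request in a heap-allocated Transfer right away,
// so the URI is parsed once and lives as long as the transfer does.
fn request(allocator: std.mem.Allocator, req: Request) !*Transfer {
    const transfer = try allocator.create(Transfer);
    errdefer allocator.destroy(transfer);
    transfer.* = .{
        .req = req,
        // parsed once here; CDP can use transfer.uri instead of re-parsing req.url
        .uri = try std.Uri.parse(req.url),
    };
    return transfer;
}
```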
This is necessary because of CloudFront, which will send gzipped content even if
we don't ask for it.
Properly handle scripts that are both async and defer.
Add a helper to print the state of the page wait. This can be helpful in
identifying what's causing the page to hang in page.wait.
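Something along these lines (a hypothetical sketch; the fields are made up for
illustration):

```zig
const std = @import("std");

const PageWaitState = struct {
    pending_transfers: usize,
    pending_timeouts: usize,
    request_intercepted: bool,
    parser_blocked: bool,
};

// Dump why Page.wait might still be blocking.
fn printWaitState(state: PageWaitState) void {
    std.debug.print(
        "page.wait: transfers={d} timeouts={d} intercepted={} parser_blocked={}\n",
        .{
            state.pending_transfers,
            state.pending_timeouts,
            state.request_intercepted,
            state.parser_blocked,
        },
    );
}
```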