Previously, the IO loop was doing three things:
1 - Managing timeouts (either from scripts or for our own needs)
2 - Handling browser IO events (page/script/xhr)
3 - Handling CDP events (accept, read, write, timeout)
With the libcurl merge, 1 was moved to an in-process scheduler and 2 was moved
to libcurl's own event loop. That means the entire loop code, including
the dependency on tigerbeetle-io, existed solely to handle a single TCP client.
Not only is that a lot of code, there was also friction between the two loops
(the libcurl one and our IO loop), which would result in latency: while one
loop was waiting for events, any events on the other loop went unprocessed.
This PR removes our IO loop. To accomplish this:
1 - The main accept loop is blocking. This is simpler and works perfectly well,
given we only allow 1 active connection.
2 - The client socket is passed to libcurl - yes, libcurl's loop can take
arbitrary FDs and poll them along with its own.
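For the second point, this leans on libcurl's curl_multi_poll, which accepts an
array of extra file descriptors to wait on alongside its own transfers. A
minimal sketch (the wrapper function and its wiring are illustrative, not our
actual code):
```zig
const c = @cImport(@cInclude("curl/curl.h"));

// Wait on libcurl's own transfers AND the CDP client socket in one call.
// Returns true when the client socket has readable data.
fn pollTransfersAndClient(multi: ?*c.CURLM, client_fd: c.curl_socket_t) !bool {
    var extra = [_]c.struct_curl_waitfd{.{
        .fd = client_fd,
        .events = c.CURL_WAIT_POLLIN,
        .revents = 0,
    }};
    var numfds: c_int = 0;
    // 100ms cap is arbitrary; curl_multi_poll returns as soon as anything is ready
    if (c.curl_multi_poll(multi, &extra, extra.len, 100, &numfds) != c.CURLM_OK) {
        return error.CurlMultiPoll;
    }
    var running: c_int = 0;
    _ = c.curl_multi_perform(multi, &running); // drive libcurl's transfers
    return (extra[0].revents & c.CURL_WAIT_POLLIN) != 0;
}
```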
In addition to removing a dependency, the CDP code is quite a bit simpler,
especially around shutdowns and writes. This also removes _some_ of the latency
caused by the friction between page processing and CDP processing. Specifically,
when CDP blocks for input, HTTP page events (script loading, XHR, ...) will
still be processed.
There's still friction. For one, the reverse isn't true: when the page is
waiting for events, CDP events aren't going to be processed. But page.wait
already has some sensitivity to this (e.g. the page.request_intercepted flag).
Also, when CDP waits, network events will be processed, but page timeouts
still won't be. Because of both these remaining issues, we still need to
jump between the two loops - but being able to block on CDP (even for a short
time) WITHOUT stopping the page's network I/O should reduce some latency.
Minor improvement to the correctness of TreeWalker.
Fun fact: this is the first case I've run into where we have to default
null and undefined to different values.
Also, tweaked the WPT test runner. WPT test results use | as a field delimiter,
but a WPT test name (and, I assume, a message) can contain a |. So we had at
least a few tests that were being reported as failed only because the result
line was weird / unexpected. There's no robust way to parse this, but I changed
the runner to look explicitly for |Pass or |Fail and to use those positions as
anchor points.
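A minimal sketch of that anchor-based parsing (the function and the result
struct are illustrative, not the runner's actual code):
```zig
const std = @import("std");

const Status = enum { pass, fail };

// Find "|Pass" / "|Fail" and treat that position as the field boundary,
// since the test name itself may legally contain a '|'.
fn parseResultLine(line: []const u8) ?struct { name: []const u8, status: Status } {
    if (std.mem.indexOf(u8, line, "|Pass")) |i| {
        return .{ .name = line[0..i], .status = .pass };
    }
    if (std.mem.indexOf(u8, line, "|Fail")) |i| {
        return .{ .name = line[0..i], .status = .fail };
    }
    return null; // weird / unexpected line: skip it rather than report a failure
}
```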
Aims to improve compatibility with the Lit framework (e.g. what Reddit is using).
1 - Adds support for adoptedStyleSheets to the Document and ShadowRoot
2 - Adds mock support for replace and replaceSync to the CSSStyleSheet
3 - Optionally include shadowroot in dump
4 - Special-case setting innerHTML on a TemplateElement
These are dummy implementations, but they do expose the ready and finished
promises, and do resolve the finished promise, so this should unblock basic cases.
Prototype resolution of Zig types previously had 2 limitations (bugs?). The first
was that the Zig prototype chain could only be 1 deep. You couldn't do A->B->C
where each of those was a Zig type (but you could do A->B->C->D->E ... so long
as every other type was a C opaque value).
The other limitation was that Zig prototypes only worked when the nested field
was directly embedded in the struct (i.e. not a pointer). So you could do:
```zig
const X = struct {
    proto: XParent,
};
```
But not:
```zig
const X = struct {
    proto: *XParent,
};
```
This addresses both limitations. The first issue is solved by keeping track
of the cumulative offset.
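Concretely, both of the following now resolve (the types are illustrative):
```zig
const A = struct {
    // ...
};

const B = struct {
    proto: *A, // pointer-based proto now works
};

const C = struct {
    proto: B, // and the Zig chain A -> B -> C can be more than 1 deep
};
```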
Previously, MutationObserver callbacks were called using the `jsCallScopeEnd`
mechanism. This was slow and resulted in records being split in a way that
callers might not expect. `jsCallScopeEnd` has been removed.
The new approach uses the loop.timeout mechanism, much like window.setTimeout,
and only registers a timeout when events have been handled. It should perform
much better.
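A rough sketch of the shape of this (the field and function names are
hypothetical, not the actual implementation; assume a
`records: std.ArrayList(MutationRecord)`, a `timeout_pending: bool`, and a
loop exposing a setTimeout-like `timeout`):
```zig
fn queueRecord(self: *MutationObserver, record: MutationRecord) !void {
    try self.records.append(record);
    if (!self.timeout_pending) {
        self.timeout_pending = true;
        // hypothetical: ask the loop to call `flush` on a later tick,
        // much like window.setTimeout(flush, 0)
        try self.loop.timeout(0, self, flush);
    }
}

fn flush(self: *MutationObserver) void {
    self.timeout_pending = false;
    self.callback(self.records.items); // deliver the whole batch to the JS callback
    self.records.clearRetainingCapacity();
}
```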
Exactly how MutationRecords are supposed to be grouped is still a mystery to me.
This new grouping is still wrong in many cases (according to WPT), but appears
slightly less wrong; I'm pretty hopeful clients don't really have hard-coded
expectations for this though.
Also implement the attributeFilter option of MutationObserver. (GitHub)
Don't error on JSON.stringify failure (likely caused by a circular reference).
In debug mode, try to print a [slightly] more meaningful value representation
when the default serialization results in [object Object].
Some APIs want a value or undefined. For these, we can't use an optional
return type, since null maps to JS null. Adds an Env.UndefinedOr(T) generic
union for such return types.
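A sketch of what such a union can look like (the actual Env implementation may
differ; `Collection` and `lookup` in the usage are made-up names):
```zig
pub fn UndefinedOr(comptime T: type) type {
    return union(enum) {
        undef: void, // serialized as JS `undefined`
        value: T, // serialized as the JS value for T
    };
}

// Hypothetical usage: an API that must yield a value or `undefined`, never `null`:
fn namedItem(self: *const Collection, name: []const u8) UndefinedOr([]const u8) {
    if (self.lookup(name)) |v| return .{ .value = v };
    return .undef;
}
```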
Root modules (non-cacheable) should register their module_id -> URL so that,
if they load a nested module, we can resolve the nested module's full URL.
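For illustration, the registration amounts to a map from module id to URL that
nested imports can resolve against (all names here are made up; the real code
differs):
```zig
const std = @import("std");

// initialized elsewhere with an allocator
var module_urls: std.AutoHashMap(u64, []const u8) = undefined;

// Called when a root (non-cacheable) module is loaded.
fn registerRootModule(module_id: u64, url: []const u8) !void {
    try module_urls.put(module_id, url);
}

// Called when that module imports "./nested.js": the parent's full URL
// is the base that the relative specifier gets resolved against.
fn baseUrlFor(module_id: u64) ?[]const u8 {
    return module_urls.get(module_id);
}
```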
Crypto.getRandomValues should mutate the given parameter as well as return
the value. This return value must be the same (JsObject) as the input parameter.
There might be more magical ways to solve this, but I opted for the simplest
and most flexible option: adding a `toZig` function to JsObject which does
what js.zig does internally when mapping JS values to Zig (and, of course, it
uses the same code).
This allows a caller to receive a JsObject (not too common, but we already do
that in a few places) and return that same JsObject (again, not too common, but
we do have support for returning JsObject directly already), with the main
addition that the JsObject can now be turned into a Zig type by the caller.
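A sketch of how this reads in practice (assuming `toZig` takes the target Zig
type; the exact signature and parameter mapping are illustrative):
```zig
const std = @import("std");

fn getRandomValues(js_obj: JsObject) !JsObject {
    // toZig maps the JS value to a Zig type using the same code path
    // js.zig uses internally for parameter mapping
    const buf = try js_obj.toZig([]u8);
    std.crypto.random.bytes(buf); // mutate the caller's typed array in place
    return js_obj; // must be the identical JsObject, not a copy
}
```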