This defaults to Auto, which means microtasks run when the call stack reaches a
depth of 0. It appears that both Node and Deno set this to Explicit.
I don't really understand why Auto doesn't work here. The documentation says the
call stack in question is the C/C++ call stack, and I don't see what would block
the current code from reaching a depth of 0. Still, we already have explicit
calls to performMicrotasksCheckpoint, which ties microtask execution to our
scheduler, so making the policy Explicit like this should...well, make it more explicit.
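Roughly, the shape of the change looks like this. This is only a sketch: the
`v8` module name, the placement of the isolate method, and `runScheduledTask`
are assumptions modeled on V8's C++ `SetMicrotasksPolicy` /
`PerformMicrotaskCheckpoint`, not necessarily our binding's exact API.

```zig
const v8 = @import("v8"); // module name is an assumption

// The isolate is created with the Explicit policy (SetMicrotasksPolicy in the
// C++ API), so v8 never drains microtasks on its own. Our scheduler then
// flushes them after every task it runs:
fn runScheduledTask(isolate: v8.Isolate, task: *const fn () void) void {
    task();

    // Promise jobs and other microtasks only run when we ask for them, so
    // their timing is tied to our scheduler rather than to v8 noticing the
    // C/C++ call stack unwinding back to depth 0.
    isolate.performMicrotasksCheckpoint();
}
```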
This broke a test, but since the tests are being redone in the [fetch PR](https://github.com/lightpanda-io/browser/pull/972) I simply removed the offending one.
- Continue to reuse the Browser/Env/Isolate, but now start a new session per test.
- Test http server now properly closes the sendFile fd
- Run WPT in ReleaseMode
- Add --quiet option to WPT and some commented-out debug code for dumping v8
memory stats
Depends on https://github.com/lightpanda-io/browser/pull/993
There are currently 3 ways to execute a page:
1 - page.navigate (as used in both the 'fetch' and 'serve' commands)
2 - jsRunner as used in unit tests
3 - main_wpt as used in the WPT runner
Both jsRunner and main_wpt replicate the page.navigate code, but in their own
hack-ish way. main_wpt re-implements the DOM walking in order to extract and
execute <script> tags, as well as the needed page lifecycle events.
This PR replaces the existing main_wpt loader with a call to page.navigate. To
support this, a test HTTP server was added. (The test HTTP server is extracted
from the existing unit-test server and re-used by both.)
There are benefits to this approach:
1 - The code is simpler
2 - More of the actual code and flow is tested
3 - There's 1 way to do things (page.navigate)
4 - Having an HTTP server might unlock some WPT tests
Technically, we're replacing file IO with network IO (i.e. http requests). This
has potential downsides:
1 - The tests might be more brittle
2 - The tests might be slower
I think we need to run it for a while to see if we get flaky behavior.
The goal for following PRs is to bring this unification to the jsRunner.
Removes the optional Platform, which only existed for tests.
There is now a global `@import("testing.zig").test_app` available. This is set up
when the test runner starts, and cleaned up at the end of the tests. Individual
tests don't have to worry about creating an App, which I assume was the reason I
made Platform optional, since that would have been something else that needed to
be set up.
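A minimal sketch of what a test now looks like (only `test_app` itself comes
from testing.zig; the test body is hypothetical):

```zig
const testing = @import("testing.zig");

test "uses the shared App" {
    // test_app is created once when the test runner starts and cleaned up at
    // the end of the run; individual tests just reference it.
    const app = testing.test_app;
    _ = app; // ...build whatever session/page the test needs from app...
}
```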
Previously, the IO loop was doing three things:
1 - Managing timeouts (either from scripts or for our own needs)
2 - Handling browser IO events (page/script/xhr)
3 - Handling CDP events (accept, read, write, timeout)
With the libcurl merge, 1 was moved to an in-process scheduler and 2 was moved
to libcurl's own event loop. That means the entire loop code, including
the dependency on tigerbeetle-io, existed just to handle a single TCP client.
Not only is that a lot of code, there was also friction between the two loops
(the libcurl one and our IO loop), which would result in latency: while one
loop is waiting for events, any events on the other loop go unprocessed.
This PR removes our IO loop. To accomplish this:
1 - The main accept loop is blocking. This is simpler and works perfectly well,
given we only allow 1 active connection.
2 - The client socket is passed to libcurl - yes, libcurl's loop can take
arbitrary FDs and poll them along with its own (see the sketch below).
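Roughly, the mechanism looks like this. It's a sketch with my own function and
variable names (not the actual CDP code), assuming libcurl's multi API via
cImport:

```zig
const c = @cImport(@cInclude("curl/curl.h"));

// Wait for activity on libcurl's own transfers OR on the CDP client socket.
// curl_multi_poll accepts arbitrary extra fds and polls them alongside its own.
fn waitForCdpOrNetwork(multi: ?*c.CURLM, client_fd: c_int) !bool {
    var extra = c.struct_curl_waitfd{
        .fd = client_fd,
        .events = c.CURL_WAIT_POLLIN,
        .revents = 0,
    };
    var numfds: c_int = 0;
    if (c.curl_multi_poll(multi, &extra, 1, 100, &numfds) != c.CURLM_OK) {
        return error.CurlMultiPoll;
    }
    // True when the CDP client socket has data to read; network transfers are
    // still serviced by calling curl_multi_perform in the surrounding loop.
    return (extra.revents & c.CURL_WAIT_POLLIN) != 0;
}
```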
In addition to having one less dependency, the CDP code is quite a bit simpler,
especially around shutdowns and writes. This also removes _some_ of the latency
caused by the friction between page processing and CDP processing. Specifically,
when CDP now blocks for input, http page events (script loading, xhr, ...) will
still be processed.
There's still friction. For one, the reverse isn't true: when the page is
waiting for events, CDP events aren't going to be processed. But page.wait
already has some sensitivity to this (e.g. the page.request_intercepted flag).
Also, when CDP waits, while we will process network events, page timeouts are
still not processed. Because of both these remaining issues, we still need to
jump between the two loops - but being able to block on CDP (even for a short
time) WITHOUT stopping the page's network I/O should reduce some latency.
Minor improvement to correctness of TreeWalker.
Fun fact: this is the first time I've run into a case where we have to default
null and undefined to different values.
Also, tweaked the WPT test runner. WPT test results use | as a field delimiter,
but a WPT test name (and, I assume, a message) can contain a |. So we had at
least a few tests that were being reported as failed only because the result
line was weird / unexpected. There's no great robust way to parse this, but I
changed the runner to look explicitly for |Pass or |Fail and use those positions
as anchor points.
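A rough sketch of the anchoring idea (my own function name, and it assumes the
line is shaped like `name|Pass|...` / `name|Fail|...`):

```zig
const std = @import("std");

// Split on the "|Pass" / "|Fail" anchor instead of every '|', since names and
// messages can themselves contain the delimiter.
fn parseResultLine(line: []const u8) ?struct { name: []const u8, pass: bool } {
    if (std.mem.indexOf(u8, line, "|Pass")) |i| {
        return .{ .name = line[0..i], .pass = true };
    }
    if (std.mem.indexOf(u8, line, "|Fail")) |i| {
        return .{ .name = line[0..i], .pass = false };
    }
    return null; // unexpected layout: let the caller flag it instead of guessing
}
```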
Aims to improve compatibility for the lit framework (e.g. what Reddit is using).
1 - Adds support for adoptedStyleSheets to the Document and ShadowRoot
2 - Adds mock support for replace and replaceSync to the CSSStyleSheet
3 - Optionally include shadowroot in dump
4 - Special-case setting innerHTML on a TemplateElement
These are dummy implementations, but they do expose the ready and finished
promises, and do resolve the finished promise, so it should unblock basic cases.
Prototype resolution of Zig types previously had 2 limitations (bugs?). The first
was that the Zig prototype chain could only be 1 deep. You couldn't do A->B->C
where each of those was a Zig type (but you could do A->B->C->D->E ... so long
as every other type was a C opaque value).
The other limitation was that Zig prototypes only worked when the nested field
was directly embedded in the struct (i.e. not a pointer). So you could do:
```zig
const X = struct {
    proto: XParent,
};
```
But not:
```zig
const X = struct {
    proto: *XParent,
};
```
This addresses both limitations. The first issue is solved by keeping track
of the cumulative offset.
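For example, a chain like the following (hypothetical types, reusing the A/B/C
naming above) should now resolve, whether the proto field is embedded or a
pointer:

```zig
const A = struct {
    // ...
};

const B = struct {
    proto: A, // embedded Zig prototype
};

const C = struct {
    proto: *B, // pointer prototype, in a chain 2+ levels deep (A->B->C)
};
```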
Previously, MutationObserver callbacks were called using the `jsCallScopeEnd`
mechanism. This was slow and resulted in records being split in a way that
callers might not expect. `jsCallScopeEnd` has been removed.
The new approach uses the loop.timeout mechanism, much like a window.setTimeout
and only registers a timeout when events have been handled. It should perform
much better.
Exactly how MutationRecords are supposed to be grouped is still a mystery to me.
This new grouping is still wrong in many cases (according to WPT), but appears
slightly less wrong; I'm pretty hopeful clients don't really have hard-coded
expectations for this though.
Also implement the attributeFilter option of MutationObserver. (GitHub)
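For reference, the filtering rule amounts to something like this (a sketch
independent of the real observer types):

```zig
const std = @import("std");

// With no attributeFilter, every attribute mutation is recorded; with a filter,
// only mutations whose attribute name appears in it are.
fn shouldRecordAttribute(filter: ?[]const []const u8, attr_name: []const u8) bool {
    const names = filter orelse return true;
    for (names) |name| {
        if (std.mem.eql(u8, name, attr_name)) return true;
    }
    return false;
}
```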