When EventTargetTBase is used, we pass the container as the target to libdom.
This is not safe, as libdom is expecting an event_target. We see, for example,
that when _dom_event_get_current_target is called, the refcnt is increased.
This works if the current_target is a valid event_target, but if it's a
Zig instance (like the Window) ... we're just altering some bits of the
window instance.
This attempts to add a dummy target to EventTargetTBase which can act as a
real event_target in place of the Zig instance.
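Roughly the idea, as a minimal sketch with hypothetical names and layout (the real libdom event_target is more involved): embed a real event-target value in EventTargetTBase and hand a pointer to it to libdom, so refcount updates land on valid memory rather than on arbitrary bytes of the Zig instance.
```zig
// Sketch only: names and layout are illustrative, not the actual libdom types.
const DomEventTarget = extern struct {
    refcnt: u32,
    // ... the real libdom event_target carries a vtable, listener list, etc.
};

const EventTargetTBase = struct {
    // Dummy-but-real target: libdom's refcnt bumps land here, not on the
    // bytes of the wrapping Zig instance (e.g. the Window).
    dom_target: DomEventTarget = .{ .refcnt = 1 },

    // What gets handed to libdom as the target / current_target.
    pub fn asEventTarget(self: *EventTargetTBase) *DomEventTarget {
        return &self.dom_target;
    }
};
```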
We currently set request._keepalive prematurely. There are [error cases] where
the request could be abandoned before being fully drained. While we do try to
drain in some cases, it isn't always possible. For this reason,
request.keepalive is only set at the end of the request lifecycle, at which
point we know the connection is ready to be re-used.
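A small sketch of the new ordering, with hypothetical names (not the actual request code): the keepalive decision is deferred until the request has finished and the body is known to be fully drained.
```zig
// Sketch, hypothetical names: keepalive is decided only at the end of the
// request lifecycle, when we know the connection can safely be re-used.
const Request = struct {
    _keepalive: bool = false,
    body_remaining: usize,

    fn fullyDrained(self: *const Request) bool {
        return self.body_remaining == 0;
    }

    // Called once, when the request reaches the end of its lifecycle.
    fn finish(self: *Request) void {
        self._keepalive = self.fullyDrained();
    }
};
```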
Probing a union match for a possible value should rarely hard-fail. Instead,
it should return an .{.invalid = {}} response to let the prober decide how to
proceed. This fixes a hard-fail when a JS value fails to probe as an array.
Also, add :modal pseudo-class.
(both issues came up looking at github integration)
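A rough sketch of that probing shape, with hypothetical names (not the actual prober API): a failed probe reports `.invalid` and lets the caller try the next union member, instead of hard-failing.
```zig
const std = @import("std");

// Sketch, hypothetical names: a probe reports "not this variant" instead of
// turning a mismatch into a hard failure.
const ProbeResult = union(enum) {
    value: i64,
    invalid: void,
};

fn probeAsInt(js_value: []const u8) ProbeResult {
    // Not an integer? Let the prober move on to the next candidate.
    const n = std.fmt.parseInt(i64, js_value, 10) catch return .{ .invalid = {} };
    return .{ .value = n };
}
```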
Test speed has been improved only slightly, by tweaking a test that was taking 2 seconds to run.
Build has been improved by:
1 - moving logFunctionCallError out of js.Caller and to a standalone function
2 - removing some non-generic code from the generic portions of the logger
Caller.getter and Caller.setter have been removed in favor of calling
Caller.method. This wasn't previously possible - prior to our v8 upgrade, they
had different signatures.
Also removed a largely unused parser/str.zig file.
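The build-time win from points 1 and 2 comes from keeping work that doesn't depend on the comptime type out of generic code, so it's compiled once instead of once per instantiation. A toy illustration of that pattern (hypothetical signatures, not the actual Caller code):
```zig
const std = @import("std");

// Non-generic: compiled once, no matter how many generic instantiations call it.
fn logFunctionCallError(name: []const u8, err: anyerror) void {
    std.log.err("'{s}' failed: {}", .{ name, err });
}

// Generic: only the part that actually depends on T stays here.
fn callMethod(comptime T: type, target: *T, name: []const u8) void {
    target.invoke() catch |err| logFunctionCallError(name, err);
}
```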
Libdom's formGetCollection doesn't work (as I would expect) for dynamically
added elements.
For example, given:
```js
let el = document.createElement('input');
document.getElementsByTagName('form')[0].append(el);
```
(and assume the page has a form), I'd expect `el.form` to be equal to the form
the input was added to. Instead, it's null. This is a problem given that
`dom_html_form_element_get_elements` uses the element's `form` attribute to
"collect" the elements.
This uses our existing querySelector to find the form elements.
In https://github.com/lightpanda-io/browser/pull/767 I tried to call loop.run
from within a loop.run (spoiler, it didn't work), in order to make sure aborted
connections were properly cleaned up before starting a new navigation. That
resulted in having loop.run no longer wait for timeouts for fear of having to
wait on a long timeout. That ended up breaking page.wait (used in the fetch
command).
This commit brings back the original behavior where loop.run() waits for all
completions, which is now safe to do since the nested loop.run() call has
been removed.
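A toy sketch of the restored behavior, with hypothetical names (not the actual Loop code): run() keeps ticking until every completion, timeouts included, has fired.
```zig
// Toy sketch, hypothetical names: run() drains everything, timeouts included,
// which is safe again now that run() is never re-entered from within itself.
const Loop = struct {
    pending_io: usize,
    pending_timeouts: usize,

    fn run(self: *Loop) void {
        while (self.pending_io > 0 or self.pending_timeouts > 0) {
            self.tick(); // blocks until the next completion or timer fires
        }
    }

    fn tick(self: *Loop) void {
        // Placeholder: a real implementation would poll the I/O backend here.
        if (self.pending_timeouts > 0) {
            self.pending_timeouts -= 1;
        } else if (self.pending_io > 0) {
            self.pending_io -= 1;
        }
    }
};
```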
This is one of the ways that puppeteer knows that navigation happened. It is
needed to support `waitForNavigation`, which compares the existing loader-id
with the new one, so the loader-id has to change.
Also, fix a crash that could happen if CDP disconnects while
connections are being aborted.
Currently, if there's an internal navigation event, we continue to process the
page normally. This has negative performance implications, and can result in
user-scripts producing unexpected results.
For example, imagine a page with a script that does:
```js
if (x) {
  form.submit();
}
reloadProduct();
```
The call to `form.submit()` should stop the script from executing. And, if the
page has 10 other <script> tags after this, they shouldn't be loaded or
executed.
This code terminates the execution of the current script on an internal
navigation event, and stops the rest of the page from loading.
While I believe this creates more "correct" behavior, it also introduces new
edge cases. There's now a period of time, between the script being terminated
and execution being resumed, where executing code is not safe.
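A toy sketch of the intended flow, with hypothetical names (the actual change also terminates the currently running script, as described above): once an internal navigation is requested, the remaining script tags on the page are neither loaded nor executed.
```zig
// Toy sketch, hypothetical names: an internal navigation (form.submit, click,
// ...) flags the page, and the remaining <script> tags are skipped.
const Page = struct {
    navigation_requested: bool = false,

    fn onInternalNavigation(self: *Page) void {
        self.navigation_requested = true;
        // The real change also terminates the currently executing script.
    }

    fn processScripts(self: *Page, scripts: []const []const u8) void {
        for (scripts) |src| {
            if (self.navigation_requested) return; // stop loading the rest of the page
            _ = src; // a real implementation would fetch and execute the script here
        }
    }
};
```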
The mix of sync and async HTTP requests requires care to avoid deadlocks.
Previously, it was possible for async requests to use up all available HTTP
state objects during a navigation flow (either directly, or via an internal
redirect (e.g. click, submit, ...)). This would block the navigation, which,
because everything is single-threaded, would block the I/O loop, resulting in a
deadlock.
The correct solution seems to be to remove all synchronous I/O. And I tried to
do that, but I ran into a wall with module-loading, which is initiated from V8.
V8 says "give me the source for this module", and I don't see a great way to
tell it: wait a bit.
So I went back to trying to make this work with the hybrid model, despite last
week's failures to get it to work. I changed two things:
1 - The http client will only directly initiate an async request if there are
at least 2 free state objects available (1 for the request, and leaving 1
free for any synchronous requests)
2 - Delayed navigation retries until there's at least 1 free http state object
available.
Commits from last week did help with this. First, we're now guaranteed to have
a single sync-request at a time (previously, we could have had 2). Secondly,
the async connection is now async end-to-end (previously, it could have blocked
on an empty state pool).
We could probably make this a bit more obvious by reserving 1 state object
for synchronous requests. But, since the long term solution is probably having
no synchronous requests, I'm happy with anything that lets me move past this
issue.
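A toy sketch of the two rules above, with hypothetical names (not the actual state-pool code): async acquisition backs off while fewer than two state objects are free, and the navigation path retries while none are free.
```zig
// Toy sketch, hypothetical names: async requests leave one state object free
// for synchronous requests; navigation retries wait for a free state object.
const StatePool = struct {
    free: usize,

    // Rule 1: only start an async request when at least 2 states are free.
    fn tryAcquireAsync(self: *StatePool) bool {
        if (self.free < 2) return false; // caller queues and retries later
        self.free -= 1;
        return true;
    }

    // Rule 2: navigation is delayed until at least 1 state is free.
    fn tryAcquireForNavigation(self: *StatePool) bool {
        if (self.free == 0) return false; // retry once something is released
        self.free -= 1;
        return true;
    }

    fn release(self: *StatePool) void {
        self.free += 1;
    }
};
```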