tools: js2c-cache #4777
Conversation
src/tls_wrap.cc Outdated
What's going on here? Accidental debug, or something else? Same with the return 0 below.
Yeah, accidental debug.
rvagg commented Jan 20, 2016
would we consider shipping the binary for this, with a more node-specific name than …
indutny commented Jan 20, 2016
We don't need to ship this anywhere; it is run at build time.
src/node.js Outdated
Looking at the definition of NativeModule.wrap 12 lines up - how is cached_data being used?
It isn't, I'm in the process of making it work.
@indutny oh oops - sorry for the noise!
rvagg commented Jan 20, 2016
@indutny so you figure that if anyone wants to make use of this kind of functionality outside of lib/*.js, like NW.js, they'll be using source and hacking Node a bit anyway?
indutny commented Jan 20, 2016
@rvagg I haven't thought about it that far yet, and so far I'm not sure if it provides any significant speed boost after all. Need to finish it up to do benchmarks.
indutny force-pushed from 4a34cea to 37d76fa
indutny commented Jan 20, 2016
Alright, I think the PR is ready now. Let's see what CI (especially ARM bots) think about it: https://ci.nodejs.org/job/node-test-pull-request/1314/
indutny commented Jan 20, 2016
@rvagg regarding NW.js, there is a …
bnoordhuis commented Jan 20, 2016
Data point: when I experimented with the preparser API a while ago (what corresponds to …), Vyacheslav later told me it only makes a difference when the source code is 100s or 1000s of kilobytes big. Ours is in the low 100s, but it's not loaded all at once.
indutny commented Jan 20, 2016
@bnoordhuis it seems that my small …
src/node.js Outdated
Style nit: we prefer camel case for JavaScript variables, no?
Ack.
indutny commented Jan 20, 2016
@bnoordhuis also, it is not exactly a preparser thing. It caches compiled code, not parser data.
bnoordhuis commented Jan 20, 2016
Yes, I get that. I'm curious to see if it makes a real difference. Full-codegen is pretty quick but maybe not on ARM. |
tools/js2c-cache.cc Outdated
Everywhere else nullptr is used. Is it okay to use NULL here?
Ack.
indutny commented Jan 20, 2016
Looks like this …
indutny commented Jan 20, 2016
New CI: https://ci.nodejs.org/job/node-test-pull-request/1315/ (the previous run is mostly green, except Windows).
tools/js2c.py Outdated
Unused function?
Ack.
indutny commented Jan 20, 2016
Gosh, one more Windows failure. Should be fixed now.
indutny commented Jan 20, 2016
benjamingr commented Jan 20, 2016
Sorry for the ignorance, but can this impact the speed of …
indutny commented Jan 20, 2016
Pushed a fix, new CI: https://ci.nodejs.org/job/node-test-pull-request/1323/
indutny commented Jan 20, 2016
@benjamingr possibly not; fs overhead is very likely bigger than compilation overhead, but this PR opens up new APIs for using the code cache in ChildProcess.
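A hedged sketch of what reusing a code cache across a ChildProcess boundary could look like with the options this PR adds; the fork/IPC plumbing and the JSON round-trip of the Buffer are illustrative assumptions, not an API introduced by the PR:

```js
'use strict';
const vm = require('vm');
const { fork } = require('child_process');

const source = '6 * 7;';

if (process.send) {
  // Child: the Buffer arrives JSON-serialized over IPC, so rebuild it first.
  process.on('message', (msg) => {
    const cache = msg.cache ? Buffer.from(msg.cache.data) : undefined;
    const script = new vm.Script(source, { cachedData: cache });
    console.log('child result:', script.runInThisContext(),
                'cache rejected?', script.cachedDataRejected);
    process.exit(0);
  });
} else {
  // Parent: compile once with produceCachedData and ship the cache to a child.
  const script = new vm.Script(source, { produceCachedData: true });
  const child = fork(__filename);
  child.send({ cache: script.cachedData });
}
```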
indutny commented Jan 20, 2016
New CI with Windows support: https://ci.nodejs.org/job/node-test-pull-request/1327/ (hopefully).
indutny commented Jan 20, 2016
One more fix, and it should be good to go: https://ci.nodejs.org/job/node-test-pull-request/1329/
tools/js2c.py Outdated
Perhaps use subprocess's communicate() to simplify? (written without testing)

    output, error = p.communicate(input=line)
    return output
Appears to be working locally, thank you for the suggestion.
indutny commented Jan 20, 2016
The last CI run was finally green. Please let me know if it LGTY.
Introduce `cachedData`/`produceCachedData` options for `v8.Script`. These can be used to consume/produce V8's code cache, speeding up compilation of known code.

PR-URL: nodejs#4777
Reviewed-By: Ben Noordhuis <[email protected]>
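A minimal sketch of the produce/consume cycle these options enable, written against Node's vm module; the filename option, cachedDataRejected, and the exact Buffer handling are assumed from current Node documentation rather than from this PR:

```js
'use strict';
const vm = require('vm');

const source = 'function add(a, b) { return a + b; } add(2, 3);';

// First compilation: ask V8 to serialize its code cache for this source.
const producer = new vm.Script(source, {
  filename: 'add.js',
  produceCachedData: true
});
// Buffer with V8's serialized code cache (may be undefined if none was produced).
const cache = producer.cachedData;

// Later compilation of the same source (e.g. on the next startup): hand the
// cache back so V8 can skip most of the compile work.
const consumer = new vm.Script(source, {
  filename: 'add.js',
  cachedData: cache
});

console.log('produced cache bytes:', cache ? cache.length : 0);
console.log('cache rejected on reuse?', consumer.cachedDataRejected);
console.log('result:', consumer.runInThisContext());  // 5
```

The cache is only valid for the same source and the same V8 version/flags, which is why the change below guards the build-time use behind host_arch == target_arch.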
lin7sh commented Aug 27, 2017
@hashseed Sounds crazy. Do you mean using V8 as the VM platform for any language that can compile to this "reverse engineered bytecode"? It overlaps a bit with wasm, but I believe JS bytecode has more features than low-level wasm.
Use V8 code caching when host_arch == target_arch. This commit may slightly improve startup time, and definitely improves require times (when doing many of them).
For example, executing the following file:
Takes ~74ms on master
Takes ~54ms with this commit
Numbers were taken on OS X with 64-bit Node.js binaries.
cc @nodejs/collaborators @nodejs/v8
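The benchmark file itself is not included in the thread. As a purely hypothetical stand-in, a script of the following shape (many require() calls of core modules, roughly timed) is the kind of workload those numbers describe:

```js
'use strict';
// Hypothetical stand-in for the benchmark file mentioned above: it requires a
// pile of core modules and reports how long that took, which is where a warm
// V8 code cache can shave off compile time.
const start = process.hrtime();

['assert', 'buffer', 'child_process', 'crypto', 'dns', 'events', 'fs',
 'http', 'https', 'net', 'os', 'path', 'querystring', 'stream', 'tls',
 'url', 'util', 'vm', 'zlib'].forEach((name) => require(name));

const [sec, nsec] = process.hrtime(start);
console.log('required core modules in %s ms',
            (sec * 1e3 + nsec / 1e6).toFixed(1));
```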