benchmark: remove %OptimizeFunctionOnNextCall #9615
Conversation
evanlucas commented Nov 15, 2016
/cc @mscdex since he's done a lot of performance work too
mscdex commented Nov 15, 2016
I think this should only be removed for functions that are potentially inlineable due to their function size.
bzoz commented Nov 24, 2016
@mscdex For most of the benchmarks, except the ones mentioned in the PR, removing the forced optimization improves performance. Could you also elaborate on those inlineable functions?
bzoz force-pushed from bd2fd0b to 5e1d6ba
bzoz commented Nov 29, 2016
Rebased on master, PTAL
jasnell commented Dec 5, 2016
@mscdex ... any further thoughts on this?
mscdex commented Dec 5, 2016 • edited
As far as inlineability goes, I would just check the current function lengths (there are other factors that determine inlineability, of course). If the function being tested is less than 600 characters, remove the forced optimization; otherwise leave it there.
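The length heuristic described above could be sketched roughly as follows. This is illustrative only: `isPotentiallyInlineable` and `INLINE_SOURCE_LIMIT` are hypothetical names, not Node.js or V8 APIs, and the 600-character threshold mirrors the (Crankshaft-era) source-size limit mentioned in this thread.

```javascript
// Sketch of the "check the current function lengths" idea above.
// V8's old heuristic counted source-code length, which
// Function.prototype.toString() approximates well enough here.
const INLINE_SOURCE_LIMIT = 600; // hypothetical constant for this sketch

function isPotentiallyInlineable(fn) {
  return fn.toString().length < INLINE_SOURCE_LIMIT;
}

// Example: a tiny benchmark kernel easily fits under the limit.
function add(a, b) { return a + b; }
console.log(isPotentiallyInlineable(add)); // true
```

Under this rule, only functions that fail the check would keep the forced optimization call.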
bzoz commented Dec 9, 2016
Most of the time, removing it will make the benchmarks perform better. Furthermore, this will make the benchmarks engine agnostic, so they can be used with chakracore (see nodejs/node-chakracore#134)
fhinkel commented Dec 9, 2016
I'm pretty sure that inlineability no longer depends on function length but on bytecode size (at least in the near future).
fhinkel commented Dec 9, 2016
I'm in favor of making the benchmarks VM agnostic. I don't see the point in measuring only the optimized runs instead of all the runs.
trevnorris commented Dec 9, 2016
As of V8 5.7.201 (and the same on a6976211d1a4d12df815fa9ff5341dcbcf1e8036, latest master): https://github.com/v8/v8/blob/5.7.201/src/flag-definitions.h#L329-L330
trevnorris commented Dec 9, 2016
The idea is to remove variability in the benchmarks to better compare between versions. Ideally we could run the benchmarks fully optimized, and also run them without allowing the code to be optimized at all.
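For context, the forced-optimization pattern this PR removes looked roughly like the sketch below (the function names are illustrative, not the actual benchmark code). `%OptimizeFunctionOnNextCall` is a V8 intrinsic that is only parseable under `node --allow-natives-syntax`; the sketch invokes it through `eval` inside a `try/catch` so it also runs without the flag.

```javascript
// Roughly the pattern being removed from the benchmarks (a sketch).
function work(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i;
  return sum;
}

work(1000); // warm up: give V8 type feedback
work(1000);
try {
  // Under --allow-natives-syntax this asks V8 to optimize `work` on
  // its next call; without the flag the eval throws a SyntaxError.
  eval('%OptimizeFunctionOnNextCall(work)');
} catch (e) {
  // Not running with --allow-natives-syntax; proceed unoptimized.
}
work(1000); // this call triggers the (forced) optimization

const start = process.hrtime.bigint();
const result = work(1e6); // measured run
const elapsed = process.hrtime.bigint() - start;
console.log(`result: ${result}, ns: ${elapsed}`);
```

Dropping the `eval`/intrinsic step is what makes the benchmark measure all runs (optimized or not) and work on engines other than V8.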
bzoz commented Dec 27, 2016
@trevnorris So, are you in favor of this change? Or maybe you have other suggestions?
bzoz force-pushed from 5e1d6ba to 371ff17
bzoz commented Jan 5, 2017
Rebased, PTAL
bzoz force-pushed from 371ff17 to c1a6689
bzoz commented Jan 18, 2017
Rebased again, PTAL
bzoz force-pushed from c1a6689 to d9f44c7
bzoz commented Feb 24, 2017
/cc @nodejs/collaborators
jasnell commented Feb 24, 2017
@mscdex ... Side note: I've been wondering if it would be worthwhile to introduce linting for functions that are over the inlineable size. Sure, there are quite a few instances of functions in core over that size, and I'm not suggesting that we should refactor those, just that they should require linting exceptions.
jasnell commented Feb 24, 2017
@trevnorris ... I've had a todo on my list for a while to see if there was some way that we could build in checking optimized and non-optimized runs for all benchmarks. I definitely think it would be worthwhile. Likewise, we should likely be making it easier to run the benchmarks in a way that compares use of the Ignition toolchain vs. Crankshaft.
addaleax commented Feb 24, 2017
@jasnell Won’t that go away with TurboFan anyway (v8/v8@0702ea3)?
jasnell commented Feb 24, 2017 • edited
Yes, sorry, I wasn't clear... it would be useful to be able to run the benchmarks both ways more easily (if this already exists and I just missed it, then just ignore me ;-) ...)
Trott commented Feb 25, 2017 • edited
Refocusing the conversation: I imagine @bzoz wants to know if anyone feels good enough about this to give it an approval, or else to describe clearly what changes would need to happen to get your approval. Anyone?
jasnell left a comment
While I'm generally ok with the idea of removing the forced optimization, I'd rather it be done more incrementally than this... with either one benchmark or one related group of benchmarks edited per commit.
evanlucas commented Mar 7, 2017
This isn't landing cleanly on v7.x-staging. Mind submitting a backport PR?
Removes all instances of %OptimizeFunctionOnNextCall from benchmarks Refs: nodejs#9615 Refs: nodejs#11720
bzoz commented Mar 8, 2017
@evanlucas backport here: #11744
Checklist
- make -j8 test (UNIX), or vcbuild test nosign (Windows) passes

Affected core subsystem(s)
benchmark
Description of change
This removes all instances of %OptimizeFunctionOnNextCall from Node.js benchmarks. As it turns out, most benchmarks benefit from its removal - they will perform better. See this table in gist.

Some of the benchmarks (buffers/buffer-swap.js, crypto/get-chiper.js, net/punnycode.js, path/parse-*.js, path/relative-*.js and tls/convertprotocols.js) benefit from a warmup phase. Those are executed twice, with only the second run being measured. For other benchmarks, warmup does not provide any advantage.

One benchmark that is slower is crypto/get-chiper.js, when calling getCiphers once. The previous version called this function once to trigger optimization and then again to benchmark its performance. The results of that function are cached, so it didn't provide valid data. This is fixed now.

Fixes: nodejs/node-chakracore#134
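The run-twice warmup described above can be sketched like this. The harness below is illustrative (`bench` and `measure` are hypothetical names, not the actual Node.js benchmark runner): the first pass warms caches and lets the engine optimize on its own, and only the second pass is timed.

```javascript
// Illustrative sketch of the warmup pattern: execute twice, measure
// only the second run.
function bench() {
  let acc = 0;
  for (let i = 0; i < 1e5; i++) acc += Math.sqrt(i);
  return acc;
}

function measure(fn) {
  fn(); // warmup run: not measured
  const start = process.hrtime.bigint();
  const result = fn(); // measured run
  const ns = process.hrtime.bigint() - start;
  return { result, ns };
}

const { ns } = measure(bench);
console.log(`measured second run in ${ns} ns`);
```

Note this also shows why the old getCiphers measurement was invalid: if `fn` caches its result, the measured second call times a cache hit, not the real work.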
cc @nodejs/benchmarking