
Conversation

@AA-Turner (Member)

This adds a benchmark of Docutils as an application. I thought a reasonable test load was Docutils' own docs (takes ~4.5-5s on my computer).

I haven't submitted a benchmark before---I don't know the best way of storing the input data, so for speed I copied the documentation into git here (the docs are public domain).


@mdboom (Contributor) left a comment

This is a good choice of benchmark -- a real-world workload over a non-trivial codebase.

In the interest of repository size, would it be possible to remove the images? I don't think docutils does much with them, other than linking to them (though correct me if I'm mistaken), so maybe we only need to include one blank image and adjust all of the links to point to that.

@AA-Turner (Member, Author)

I blanked every image file, so the files are still there but empty. I also removed every active .. include:: directive and various other files that are unneeded.

I moved the I/O outside the timing code; I couldn't think of anything better.


@AA-Turner (Member, Author)

I have no clue what is causing CI to fail; when I ran the bench_docutils function locally, everything worked fine. The logs are also unhelpful (exit code 1 != 0).


@mdboom (Contributor)

> I have no clue what is causing CI to fail; when I ran the bench_docutils function locally, everything worked fine. The logs are also unhelpful (exit code 1 != 0).


I think the clue might be in here:

Command failed with exit code 1

Traceback (most recent call last):
  File "/home/runner/work/pyperformance/pyperformance/pyperformance/data-files/benchmarks/bm_docutils/run_benchmark.py", line 57, in <module>
    runner.bench_time_func("docutils", bench_docutils, DOC_ROOT)
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_runner.py", line 462, in bench_time_func
    return self._main(task)
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_runner.py", line 427, in _main
    bench = self._worker(task)
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_runner.py", line 401, in _worker
    run = task.create_run()
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_worker.py", line 284, in create_run
    self.compute()
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_worker.py", line 348, in compute
    WorkerTask.compute(self)
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_worker.py", line 273, in compute
    self.compute_warmups_values()
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_worker.py", line 255, in compute_warmups_values
    self._compute_values(self.values, args.values)
  File "/home/runner/work/pyperformance/pyperformance/venv/cpython3.10-6b3a2b4e6fa6-compat-c0d88e07feb9/lib/python3.10/site-packages/pyperf/_worker.py", line 72, in _compute_values
    raise ValueError("benchmark function returned zero")
ValueError: benchmark function returned zero

In some cases during the test run, the new benchmark function returned a time of zero.

You could try to reproduce this locally by running:

python -u -m pyperformance.tests 
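For context, the ValueError in that traceback comes from a guard in pyperf's worker. Paraphrased (this is a sketch of the behaviour, not pyperf's actual source), it amounts to:

```python
def check_timing(value):
    # Sketch of the guard in pyperf's _worker.py (paraphrased): a
    # bench_time_func callable must return a positive number of seconds.
    if value <= 0:
        raise ValueError("benchmark function returned zero")
    return value
```

So any code path in the benchmark that yields an elapsed time of zero (for example, an empty input set, or a timer read at too coarse a resolution) will abort the run.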

@mdboom (Contributor) left a comment

Thanks for addressing my concerns. I'm approving this pending CI passing.

Co-authored-by: Michael Droettboom <mdboom@gmail.com>
@mdboom (Contributor) left a comment

LGTM

@AA-Turner (Member, Author)

@ericsnowcurrently / @gvanrossum you've both committed recently, if you've any time for a review of this PR I'd appreciate it! Thanks


@gvanrossum (Member)

Can I bow out? Eric and/or Mike will be able to review this.

elapsed = 0
for file in doc_root.rglob("*.txt"):
    file_contents = file.read_text(encoding="utf-8")
    t0 = time.perf_counter_ns()
A reviewer (Contributor) commented:
Suggested change:
- t0 = time.perf_counter_ns()
+ t0 = pyperf.perf_counter()

        "output_encoding": "unicode",
        "report_level": 5,
    })
    elapsed += time.perf_counter_ns() - t0
A reviewer (Contributor) commented:
Suggested change:
- elapsed += time.perf_counter_ns() - t0
+ elapsed += pyperf.perf_counter() - t0
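With both suggestions applied, the timed section uses `pyperf.perf_counter()`, which returns float seconds (what `bench_time_func` expects a benchmark function to accumulate and return) rather than the integer nanoseconds of `time.perf_counter_ns()`. A minimal sketch of the difference, with an import fallback so it runs without pyperf installed (pyperf's `perf_counter` wraps the same monotonic clock as `time.perf_counter`):

```python
import time

try:
    import pyperf
    perf_counter = pyperf.perf_counter  # the clock the suggestions switch to
except ImportError:
    perf_counter = time.perf_counter  # same underlying monotonic clock

# Timed section in the style of the PR's loop; the workload here is a
# hypothetical stand-in, not docutils.
t0 = perf_counter()
workload = sum(i * i for i in range(100_000))
elapsed = perf_counter() - t0

# elapsed is float seconds, suitable for returning to pyperf as-is
assert isinstance(elapsed, float) and elapsed >= 0.0
```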

@mdboom (Contributor)

@AA-Turner: It would be great to have this. Any chance you have time to address @kumaraditya303's concerns?

@AA-Turner (Member, Author)

@mdboom / @kumaraditya303 sorry for the delay here; could you please re-review?


@kumaraditya303 self-requested a review on August 29, 2022 17:18.
@ericsnowcurrently merged commit 864c3d9 into python:main on Sep 6, 2022.


5 participants: @AA-Turner, @mdboom, @gvanrossum, @kumaraditya303, @ericsnowcurrently