```python
ctx = (
    config.change_flags(numba__cache=cache) if cache is not None else nullcontext()
)
with ctx:
    benchmark.pedantic(compile_and_call_once, rounds=5, iterations=1)


def test_convolve1d_benchmark_c(batch, convolve_mode, benchmark):
    _test_convolve1d_benchmark(
        mode="CVM", batch=batch, convolve_mode=convolve_mode, benchmark=benchmark
    )
```
```python
@pytest.mark.parametrize("convolve_mode", ("full", "valid"), ids=lambda x: f"mode={x}")
@pytest.mark.parametrize("batch", (False, True), ids=lambda x: f"batch={x}")
def test_convolve1d_benchmark_numba(batch, convolve_mode, benchmark):
```
Why not parametrize `mode`?
I did that at first, but it makes it hard to isolate the mode I'm interested in benchmarking. We usually try to optimize one mode at a time, so the others are just noise.
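As an aside on the snippet above: the `nullcontext()` fallback is a general pattern for optionally applying a context manager without branching the `with` body. A minimal self-contained sketch, with a hypothetical `override_flag` helper standing in for `config.change_flags`:

```python
# Sketch of the conditional context-manager pattern used in the benchmark:
# use a real context manager when an option is set, otherwise a no-op.
# FLAGS and override_flag are illustrative, not PyTensor's actual API.
from contextlib import contextmanager, nullcontext

FLAGS = {"cache": True}


@contextmanager
def override_flag(name, value):
    # Temporarily override a global flag, restoring it on exit.
    old = FLAGS[name]
    FLAGS[name] = value
    try:
        yield
    finally:
        FLAGS[name] = old


def run(cache=None):
    # nullcontext() keeps the `with` block identical in both cases.
    ctx = override_flag("cache", cache) if cache is not None else nullcontext()
    with ctx:
        return FLAGS["cache"]
```

`run(False)` sees the overridden flag and restores it afterwards; `run()` leaves the global state untouched.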
```python
y = vector("z")
out = exp(2 * x * y + y)

rng = np.random.default_rng(42)
```
I put "use `sum(map(ord))` for seeds" in my instructions file; I hate seeing 42 everywhere (HAHA NERD NUMBER).
Also, for the purposes of ASV I'm not sure using a seed is sufficient. I guess the reason you want to seed it is to have better comparability between runs, but we can't be sure that the generator state will be advanced in the same way between different PRs.
Why isn't a seed sufficient? This is local to the test; `default_rng(42)` yields the same state (for the same OS/NumPy version).
Yeah, but the benchmarks will run across OS/NumPy versions; that's what I was thinking about.
It's an extremely minor point.
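For what it's worth, the `sum(map(ord, ...))` idea mentioned above is easy to sketch; the helper name here is made up:

```python
import numpy as np


def seed_from_name(name: str) -> int:
    # Derive a deterministic, non-"magic" seed from a test name,
    # per the sum(map(ord, ...)) suggestion in the thread above.
    return sum(map(ord, name))


seed = seed_from_name("test_block_diag_dot_benchmark")

# Two generators built from the same seed produce identical streams
# (on the same NumPy version/OS, which is the caveat raised above).
rng_a = np.random.default_rng(seed)
rng_b = np.random.default_rng(seed)
assert np.allclose(rng_a.standard_normal(4), rng_b.standard_normal(4))
```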
```python
def _test_join_benchmark(mode, ndim, axis, memory_layout, gc, benchmark):
    if ndim == 1 and not (memory_layout == "C-contiguous" and axis == 0):
        pytest.skip("Redundant parametrization")
```
We do this in linalg too; is there not a more idiomatic way to handle this in pytest?
You can make a fancy fixture that skips it programmatically, but then you have to go read the logic elsewhere.
```python
@pytest.mark.parametrize("rewrite", [True, False], ids=["rewrite", "no_rewrite"])
@pytest.mark.parametrize("size", [10, 100, 1000], ids=["small", "medium", "large"])
def test_block_diag_dot_benchmark(benchmark, size, rewrite):
    rng = np.random.default_rng()
```
To seed or not to seed, that is the question.
Seems fine unseeded? It shouldn't affect anything.
Work in #1122 and #1961 made me realize our test suite is a bit of a mess.

This PR collects all benchmarks under `tests/benchmark`. This will be changed again by #1122, but those changes should be kept separate from it.