6 changes: 6 additions & 0 deletions .github/workflows/test.yml
@@ -92,6 +92,8 @@ jobs:

- name: Run tests on Ubuntu Test
if: matrix.os == 'ubuntu-latest'
env:
OPENML_TEST_SERVER_ADMIN_KEY: ${{ secrets.OPENML_TEST_SERVER_ADMIN_KEY }}
run: |
if [ "${{ matrix.code-cov }}" = "true" ]; then
codecov="--cov=openml --long --cov-report=xml"
@@ -107,6 +109,8 @@ jobs:

- name: Run tests on Ubuntu Production
if: matrix.os == 'ubuntu-latest'
env:
OPENML_TEST_SERVER_ADMIN_KEY: ${{ secrets.OPENML_TEST_SERVER_ADMIN_KEY }}
run: |
if [ "${{ matrix.code-cov }}" = "true" ]; then
codecov="--cov=openml --long --cov-report=xml"
@@ -122,6 +126,8 @@ jobs:

- name: Run tests on Windows
if: matrix.os == 'windows-latest'
env:
OPENML_TEST_SERVER_ADMIN_KEY: ${{ secrets.OPENML_TEST_SERVER_ADMIN_KEY }}
run: | # we need a separate step because of the bash-specific if-statement in the previous one.
pytest -n 4 --durations=20 --dist load -sv --reruns 5 --reruns-delay 1 -m "not uses_test_server"

14 changes: 14 additions & 0 deletions CONTRIBUTING.md
@@ -96,6 +96,20 @@ To test your new contribution, add [unit tests](https://github.com/openml/openml
* Please ensure that the example is run on the test server by beginning with the call to `openml.config.start_using_configuration_for_example()`, which is done by default for tests derived from `TestBase`.
* Add the `@pytest.mark.sklearn` marker to your unit tests if they have a dependency on scikit-learn.

#### Running Tests That Require Admin Privileges

Some tests require admin privileges on the test server. They are skipped automatically unless an admin API key is provided, so regular contributors can simply ignore them. Core contributors who need to run these tests locally can provide the key as follows:

**Set up the key** by exporting the environment variable in your terminal before running the tests:

```bash
# Windows (PowerShell)
$env:OPENML_TEST_SERVER_ADMIN_KEY = "admin-key"
# Linux/macOS
export OPENML_TEST_SERVER_ADMIN_KEY="admin-key"
```
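On the Python side the key is read back from the environment with `os.environ`. A minimal sketch of the lookup (the variable name matches the export above; the helper name `get_admin_key` is hypothetical, not part of the openml API):

```python
import os

# Environment variable name, matching the export shown above.
ADMIN_KEY_ENV_VAR = "OPENML_TEST_SERVER_ADMIN_KEY"

def get_admin_key():
    """Return the admin key, or None when the variable is unset.

    A None result is what lets the admin-only tests skip gracefully
    instead of failing with an authentication error.
    """
    return os.environ.get(ADMIN_KEY_ENV_VAR)
```

Tests can then guard themselves with `@pytest.mark.skipif(not get_admin_key(), ...)`, which mirrors the pattern used in the test changes below.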

### Pull Request Checklist

You can go to the `openml-python` GitHub repository to create the pull request by [comparing the branch](https://github.com/openml/openml-python/compare) from your fork with the `develop` branch of the `openml-python` repository. When creating a pull request, make sure to follow the comments and structure provided by the template on GitHub.
Expand Down
1 change: 1 addition & 0 deletions openml/config.py
@@ -25,6 +25,7 @@

OPENML_CACHE_DIR_ENV_VAR = "OPENML_CACHE_DIR"
OPENML_SKIP_PARQUET_ENV_VAR = "OPENML_SKIP_PARQUET"
OPENML_TEST_SERVER_ADMIN_KEY_ENV_VAR = "OPENML_TEST_SERVER_ADMIN_KEY"
Collaborator:

in case of using a .env file, some code is required to load it and may add the extra dependency python-dotenv

Contributor Author:

yes definitely! Will go ahead with the new dep if approved.

_TEST_SERVER_NORMAL_USER_KEY = "normaluser"


2 changes: 1 addition & 1 deletion openml/testing.py
@@ -48,7 +48,7 @@ class TestBase(unittest.TestCase):
}
flow_name_tracker: ClassVar[list[str]] = []
test_server = "https://test.openml.org/api/v1/xml"
admin_key = "abc"
admin_key = os.environ.get(openml.config.OPENML_TEST_SERVER_ADMIN_KEY_ENV_VAR)
user_key = openml.config._TEST_SERVER_NORMAL_USER_KEY

# creating logger for tracking files uploaded to test server
4 changes: 4 additions & 0 deletions tests/test_datasets/test_dataset_functions.py
@@ -599,6 +599,10 @@ def _assert_status_of_dataset(self, *, did: int, status: str):
assert len(result) == 1
assert result[did]["status"] == status

@pytest.mark.skipif(
not os.environ.get(openml.config.OPENML_TEST_SERVER_ADMIN_KEY_ENV_VAR),
reason="Test requires admin key. Set OPENML_TEST_SERVER_ADMIN_KEY environment variable.",
)
@pytest.mark.flaky()
@pytest.mark.uses_test_server()
def test_data_status(self):
4 changes: 4 additions & 0 deletions tests/test_openml/test_config.py
@@ -106,6 +106,10 @@ def test_setup_with_config(self):


class TestConfigurationForExamples(openml.testing.TestBase):
@pytest.mark.skipif(
not os.environ.get(openml.config.OPENML_TEST_SERVER_ADMIN_KEY_ENV_VAR),
reason="Test requires admin key. Set OPENML_TEST_SERVER_ADMIN_KEY environment variable.",
)
@pytest.mark.production()
def test_switch_to_example_configuration(self):
"""Verifies the test configuration is loaded properly."""
Collaborator:

btw this test passes anyway even if you use any random admin_key, right?

Contributor Author (@rohansen856, Jan 15, 2026):

Yes, test_switch_to_example_configuration doesn't make any API calls; it only tests that start_using_configuration_for_example() correctly swaps the apikey and server values, so it passes with any value (even None is fine).

But test_data_status (tests/test_datasets/test_dataset_functions.py) does require a valid admin key because it makes actual API calls to openml.datasets.status_update(), which the server validates, so passing an invalid value returns an authentication error:

FAILED tests/test_datasets/test_dataset_functions.py::TestOpenMLDataset::test_data_status - openml.exceptions.OpenMLServerException: https://test.openml.org/api/v1/xml/data/status/update ...

Collaborator:

Yes, I think the more reasonable approach here is to update this test to use some other sentinel value.

Collaborator:

> Also, can you please elaborate on what you mean by "some other sentinel value"?

Just anything that helps verify that openml.config.apikey is updated appropriately. So any literal that isn't identical to TestBase.user_key.
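The sentinel idea can be illustrated with a toy sketch (all names here are hypothetical; `swap_key` merely stands in for the key swap that `start_using_configuration_for_example()` performs on `openml.config.apikey`). The point is that the test only needs a value distinguishable from the user key, not a real admin key:

```python
# Toy illustration of the sentinel approach: any literal that differs
# from the user key makes the key swap observable.
USER_KEY = "normaluser"                # stands in for TestBase.user_key
SENTINEL_KEY = "not-a-real-admin-key"  # any literal != USER_KEY works

def swap_key(current_key: str) -> str:
    # Stand-in for start_using_configuration_for_example(), which
    # replaces whatever key is configured with the test-server user key.
    return USER_KEY

# The swap is observable precisely because the sentinel differs from USER_KEY.
assert SENTINEL_KEY != USER_KEY
assert swap_key(SENTINEL_KEY) == USER_KEY
```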
