Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.

AI Usage

While using generative AI is allowed when contributing to this project, please keep the following points in mind:

  • Review all code yourself before you submit it.
  • Understand all the code you submit so you can answer any questions the maintainers may have when reviewing your PR.
  • Avoid being overly verbose in code and tests - extra code can be hard to review.
    • For example, avoid writing unit tests that duplicate existing ones or that test the third-party libraries you use.
  • Keep PR descriptions, comments, and follow ups concise.
  • Ensure AI-generated code meets the same quality standards as human-written code.

Development Guide

Refer to the Development Guide for help with environment setup, running tests, submitting a PR, or anything that will make you more productive.

Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

  • A reproducible test case or series of steps
  • The version of our code being used
  • Any modifications you've made relevant to the bug
  • Anything unusual about your environment or deployment

Contributing via Pull Requests

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

  1. You are working against the latest source on the develop branch.
  2. You check existing open and recently merged pull requests to make sure someone else hasn't addressed the problem already.
  3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
  4. The change works in Python3 (see supported Python Versions in setup.py).
  5. You have updated or added unit, functional, and integration tests as needed.
  6. The pull request is submitted to merge into the develop branch.

To send us a pull request, please:

  1. Fork the repository.
  2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
  3. Ensure local tests pass.
  4. Commit to your fork using clear commit messages.
  5. Send us a pull request, answering any default questions in the pull request interface.
  6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on forking a repository and creating a pull request.

Integration Test Guidelines

Integration tests run in CI via .github/workflows/integration-tests.yml. All jobs have AWS credentials. Tests in build and local jobs that require credentials are separated using a pytest marker.

Tests that require AWS credentials

Some tests in tests/integration/buildcmd/ and tests/integration/local/ need AWS credentials (e.g. STS calls, Lambda layer publishing, SAR template resolution). These are:

  • Excluded from build/local jobs via -m "not requires_credential"
  • Collected and run in the dedicated cloud-based-tests CI job via -m requires_credential

To mark a test that requires AWS credentials, add the marker:

import pytest

@pytest.mark.requires_credential
class TestMyCloudFeature(SomeBaseClass):
    ...

The marker is registered in tests/conftest.py. Build and local jobs automatically exclude these tests; cloud-based-tests automatically includes them.
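For reference, registering a custom marker in a conftest.py typically looks like the sketch below. The description string here is illustrative, not the repo's exact wording:

```python
# Hypothetical sketch of registering the requires_credential marker in
# tests/conftest.py; the description text is an assumption.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "requires_credential: the test needs real AWS credentials",
    )
```

Registering the marker this way keeps pytest from emitting unknown-marker warnings when the `-m` expressions select or exclude these tests.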

Docker container cleanup in parallel tests

start-api and start-lambda tests run in parallel (-n 2). Each test class snapshots existing Docker container IDs before starting its local server, and on teardown only removes containers created after the snapshot. This prevents one worker from killing another worker's containers.

If you write a new base class that manages Docker containers, follow the same pattern in start_lambda_api_integ_base.py and start_api_integ_base.py: snapshot container IDs in setUpClass, and scope removal to only new containers in tearDownClass.
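The snapshot-and-diff pattern can be sketched as plain logic, independent of Docker itself. Here `list_container_ids` and `remove_container` are hypothetical stand-ins for however the real base classes enumerate and remove containers (e.g. via the Docker SDK):

```python
# Sketch of the container-cleanup pattern described above; the callables
# passed in are hypothetical stand-ins for the real Docker operations.

class ContainerScopedTestBase:
    @classmethod
    def snapshot_containers(cls, list_container_ids):
        # setUpClass: remember which containers already existed
        cls._preexisting = set(list_container_ids())

    @classmethod
    def remove_new_containers(cls, list_container_ids, remove_container):
        # tearDownClass: remove only containers created after the snapshot,
        # so parallel workers never touch each other's containers
        for container_id in set(list_container_ids()) - cls._preexisting:
            remove_container(container_id)
```

The key design point is that teardown is scoped by set difference against the snapshot, not by "remove everything", which is what makes the pattern safe under `-n 2`.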

Tier 1 cross-platform smoke tests

A curated subset of ~50 tests marked with @pytest.mark.tier1 runs on every OS/container-runtime combination (e.g. Linux+Finch). These validate platform-specific code paths: file system operations, container runtime interaction, process spawning, and network operations.

There are two markers:

  • tier1 — standalone tests that don't duplicate any parameterized entry (e.g. a dedicated test_tier1_* method calling unique logic)
  • tier1_extra — dedicated test_tier1_* methods whose parameter set already exists in a @parameterized.expand list on the same class. These are excluded from normal Linux+Docker CI (via -m "not tier1_extra") to avoid running the same test twice, but included in the tier1 Finch job (via -m "tier1 or tier1_extra").

Run locally with: pytest -m "tier1 or tier1_extra" tests/integration tests/regression

To add a tier 1 test for a new feature:

  1. Add a dedicated test_tier1_* method that calls the existing test logic with one specific parameter set:
@pytest.mark.tier1
def test_tier1_my_feature(self):
    """Single test for cross-platform validation."""
    self._test_my_feature("runtime_x", use_container=False)

@pytest.mark.tier1
@skipIf(SKIP_DOCKER_TESTS or SKIP_DOCKER_BUILD, SKIP_DOCKER_MESSAGE)
def test_tier1_my_feature_in_container(self):
    """Single container test for cross-platform validation."""
    self._test_my_feature("runtime_x", use_container=True)
  2. If the parameter set already exists in a @parameterized.expand list, keep it in the original list and use @pytest.mark.tier1_extra instead of @pytest.mark.tier1 on the dedicated method. This avoids running the same test twice in Linux+Docker CI.

  3. If the parameter set does NOT exist in any parameterized list (i.e. it's unique to the tier1 method), use @pytest.mark.tier1.

  4. Each runtime should have one non-container and one container tier 1 test.
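Put together, a class that carries both a parameterized matrix and a dedicated tier1_extra method might look like this sketch. The class and method names are hypothetical, and the real classes drive the full runtime matrix with @parameterized.expand rather than the single hand-written case shown here:

```python
import pytest
from unittest import TestCase


class TestMyFeature(TestCase):
    # Hypothetical shared test logic; in the real classes this is invoked
    # from a @parameterized.expand list covering the full runtime matrix.
    def _test_my_feature(self, runtime, use_container):
        self.assertIsInstance(runtime, str)
        self.assertIsInstance(use_container, bool)

    # tier1_extra because this parameter set duplicates an entry in the
    # parameterized list: excluded from normal Linux+Docker CI, included
    # in the tier1 Finch job.
    @pytest.mark.tier1_extra
    def test_tier1_my_feature(self):
        self._test_my_feature("runtime_x", use_container=False)
```

This keeps the parameterized list authoritative for Linux+Docker CI while giving the cross-platform job a single, stable entry point per runtime.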

Finding contributions to work on

Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.

First time contributors

If this is your first time looking to contribute, looking at any 'contributors/welcome' or 'contributors/good-first-issue' issues is a great place to start.

Code of Conduct

This project has adopted the Amazon Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opensource-codeofconduct@amazon.com with any additional questions or comments.

Security issue notifications

If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.

Licensing

See the LICENSE file for our project's licensing. We will ask you to confirm the licensing of your contribution.

We may ask you to sign a Contributor License Agreement (CLA) for larger changes.