e2e/README.md
# End-to-end testing of kpt

## kpt live e2e tests

We currently have two different solutions for running e2e tests for the kpt
live functionality. We are working on reconciling these into one approach that
we can use consistently.

All e2e tests for live require that kind is available.

### Testing with go

We have a framework for running e2e tests based on test cases specified
under the `testdata/live-apply` folder (tests for other kpt live commands will
be added). The entry point for the test framework is the `live_test.go` file.

To run all the tests for live apply, use the make target:
```sh
make test-live-apply
```

It is possible to run a single test by specifying the name of the test case:
```sh
make test-live-apply T=crd-and-cr
```

#### Structure of a test case

Each test case is a folder directly under `testdata/live-apply`. In the root
of each test case folder there must be a `config.yaml` file that provides the
configuration of the test case (like whether a clean cluster is required and
the expected output). The package that will be applied with `kpt live apply` is
provided in the `resources` folder.
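
For illustration, the `crd-and-cr` test case mentioned above might be laid out
as follows (the file names inside `resources` are assumptions; only
`config.yaml` and the `resources` folder are required by the framework):

```
testdata/live-apply/crd-and-cr/
├── config.yaml        # test case configuration
└── resources/         # package applied with `kpt live apply`
    ├── crd.yaml
    └── cr.yaml
```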

#### Configuration options

These are the configuration options available in the `config.yaml` file:
* `exitCode`: The expected exit code after running the kpt live command. Defaults to 0.
* `stdErr`: The expected output on stderr after running the command. Defaults to "".
* `stdOut`: The expected output on stdout after running the command. Defaults to "".
* `inventory`: The expected inventory after running the command.
* `requiresCleanCluster`: Whether a new kind cluster should be created prior to running the test.
* `preinstallResourceGroup`: Whether the framework should make sure the ResourceGroup CRD is available before running the test.
* `kptArgs`: The arguments that will be used when executing the kpt live command.
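
As a sketch, a `config.yaml` combining several of these options might look
like the following (the field values here are illustrative, not taken from a
real test case):

```yaml
# Illustrative values only; the field names are those listed above.
exitCode: 0
requiresCleanCluster: true
preinstallResourceGroup: true
kptArgs:
  - "--reconcile-timeout=2m"
```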

### Testing with bash

This approach uses a bash script that runs through several scenarios for
kpt live in sequence. Run it with:
```sh
./live/end-to-end-test.sh
```
e2e/fn_test.go
//go:build docker
// +build docker

// Copyright 2021,2026 The kpt Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package e2e_test

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"github.com/kptdev/kpt/internal/fnruntime"
	"github.com/kptdev/kpt/pkg/test/runner"
)

func TestFnRender(t *testing.T) {
	runAllTests(t, filepath.Join(".", "testdata", "fn-render"))
}

func TestFnEval(t *testing.T) {
	runAllTests(t, filepath.Join(".", "testdata", "fn-eval"))
}

func TestFnSink(t *testing.T) {
	runAllTests(t, filepath.Join(".", "testdata", "fn-sink"))
}

// runAllTests scans the test cases in 'path', runs the command
// on all of the packages in path, and tests that the diff between
// the results and the original package is as expected.
func runAllTests(t *testing.T, path string) {
	cases, err := runner.ScanTestCases(path)
	if err != nil {
		t.Fatalf("failed to scan test cases: %s", err)
	}
	// Run all the sequential tests first, then run the parallel tests.
	runTests(t, cases, true)
	runTests(t, cases, false)
}

func runTests(t *testing.T, cases *runner.TestCases, sequential bool) {
	for _, c := range *cases {
		c := c // capture range variable
		if c.Config.Sequential != sequential {
			continue
		}
		// If the current function runtime doesn't match, skip this test case.
		currRuntime := strings.ToLower(os.Getenv(fnruntime.ContainerRuntimeEnv))
		if len(c.Config.Runtimes) > 0 {
			skip := true
			for _, rt := range c.Config.Runtimes {
				if currRuntime == strings.ToLower(rt) {
					skip = false
					break
				}
			}
			if skip {
				continue
			}
		}
		t.Run(c.Path, func(t *testing.T) {
			if !c.Config.Sequential {
				t.Parallel()
			}
			r, err := runner.NewRunner(t, c, c.Config.TestType)
			if err != nil {
				t.Fatalf("failed to create test runner: %s", err)
			}
			if r.Skip() {
				t.Skip()
			}
			err = r.Run()
			if err != nil {
				t.Fatalf("failed when running test: %s", err)
			}
		})
	}
}
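
The runtime-matching logic in `runTests` above can be sketched in isolation: a
case runs only when its `Runtimes` list is empty or contains the current
runtime, compared case-insensitively. A minimal self-contained sketch
(`shouldRun` is a hypothetical helper, not part of the test file):

```go
package main

import (
	"fmt"
	"strings"
)

// shouldRun reports whether a test case restricted to the given
// runtimes should run under the current container runtime.
// An empty runtimes list means the case runs everywhere.
func shouldRun(current string, runtimes []string) bool {
	if len(runtimes) == 0 {
		return true
	}
	for _, rt := range runtimes {
		if strings.EqualFold(current, rt) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldRun("docker", nil))                // true
	fmt.Println(shouldRun("podman", []string{"docker"})) // false
	fmt.Println(shouldRun("Docker", []string{"docker"})) // true
}
```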