Test your package's logic¶
Through testing frameworks¶
You should start by testing your package's logic using standard testing frameworks. These will allow you to run automated regression tests to ensure that your package's logic behaves as you intend as both the package and any of its dependencies evolve.
In fact, you can even start testing this way before you've bundled up your package into a container image and merged it into atlanhq/marketplace-packages!
In Python, we use the pytest testing framework to write integration tests. Make sure it's installed in your environment:
requirements-dev.txt

pip install -r requirements-dev.txt
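If you are creating requirements-dev.txt from scratch, the exact contents will vary by project; a minimal, assumed example only needs to pull in pytest (plus any other development-time tooling you use):

```text
# requirements-dev.txt: minimal assumed example; pin versions as appropriate
pytest
```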
Key elements of tests/test_package.py:
- Use the built-in TestId.make_unique() method to create a unique ID for the test run. This appends some randomly-generated characters onto the string you provide, to ensure each run of the test is unique.
  Use this generated ID for all objects your test creates
  To ensure your test is appropriately isolated from other tests (and possible later runs of the same test), use this generated ID in the naming of all of the objects your test creates. This will ensure it does not clobber, conflict or overlap with any other tests or test runs that might happen in parallel.
- Provide a list of file paths to the log files that need to be validated.
- Instead of duplicating code across tests, create fixtures and attach these functions to the tests. Fixtures decorated with @pytest.fixture() run before each test, providing the necessary data or setup for the test.
- When creating fixtures for Atlan assets (e.g. Connection), ensure that you call the delete_asset() utility function after yield, to clean up the test object upon test completion.
- Create a CustomConfig fixture for your test package with test values. Use monkeypatch.setenv(key, value) to patch RuntimeConfig environment variables. This approach is useful for testing code that depends on environment variables without altering the actual system environment.
- A common pattern is to create a test class, such as TestPackage, with methods that directly invoke the main() function of your package (main.py). This simulates running your package in a test environment (see the sketch after this list).
- It is also common to include a method that calls the utility function validate_files_exist() to ensure that certain files are created by the package.
- Additionally, include a method that calls the utility function validate_error_free_logs() to verify that there are no ERROR-level messages in the log files generated by the package.
- Optionally, you can create multiple test classes and methods to cover various conditions for the package. For example, a TestConnection class can be used to test connection functionality, and a TestProcessor class can include methods that call the package's Process.process() method (if implemented) to validate different processing logic within your package.
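Putting those pieces together, here is a minimal sketch of what such a test file could look like. The import locations, environment-variable name, and file paths are assumptions for illustration (the utility names themselves are the ones described above); adjust them to match your project's actual layout.

```python
# Sketch only: module paths, the environment-variable name and file paths below
# are assumptions for illustration; adjust them to your project's actual layout.
import pytest

from tests.utils import (  # assumed location of the shared test utilities
    TestId,
    validate_error_free_logs,
    validate_files_exist,
)
from app.main import main  # assumed module containing your package's main() entry point

# Unique ID for this test run; use it in the names of any objects the test creates
MODULE_NAME = TestId.make_unique("pkg_test")

LOG_FILES = [f"/tmp/{MODULE_NAME}/debug.log"]  # assumed path(s) to the logs the package writes


@pytest.fixture()
def custom_config(monkeypatch):
    # Patch RuntimeConfig environment variables with test values, without touching
    # the real system environment.
    monkeypatch.setenv("OUTPUT_PREFIX", f"/tmp/{MODULE_NAME}")  # assumed variable name
    yield


class TestPackage:
    def test_main(self, custom_config):
        # Invoke the package's entry point directly, simulating a real run
        main()

    def test_files_exist(self):
        # Confirm the files the package should produce actually exist
        validate_files_exist(LOG_FILES)

    def test_error_free_logs(self):
        # Confirm there are no ERROR-level messages in the generated logs
        validate_error_free_logs(LOG_FILES)
```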
In Kotlin, to write an integration test you need to extend the package toolkit's PackageTest class.

Use the -PpackageTests option to run the test

By default, integration tests will be skipped, since they require first setting up appropriate connectivity to an Atlan tenant to run. If you want to run them, you need to pass the -PpackageTests argument to Gradle.
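For example, assuming your build uses the standard Gradle test task (the task name here is an assumption; the -PpackageTests flag is the part described above):

```shell
./gradlew test -PpackageTests
```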
Key elements of src/test/kotlin/ImportPetStoreTest.kt:
- Extend the built-in PackageTest class to define a package test. Provide it a unique string to distinguish it from other integration tests (see the sketch after this list).
- Use the built-in makeUnique() method to create a unique ID for the test run. This appends some randomly-generated characters onto the string you provide, to ensure each run of the test is unique.
  Use this generated ID for all objects your test creates
  To ensure your test is appropriately isolated from other tests (and possible later runs of the same test), use this generated ID in the naming of all of the objects your test creates. This will ensure it does not clobber, conflict or overlap with any other tests or test runs that might happen in parallel.
- Override the setup() method to set up any necessary prerequisites for your integration test (such as creating any objects it will rely on when it runs).
- Call the runCustomPackage() method to actually run your package, with a predefined set of inputs and configuration.
- Pass the runCustomPackage() method a new configuration object specific to your package. This simulates the hand-off from the UI for your package to your code. In this example, we create a new OpenAPISpecLoaderCfg with the settings we want to test.
- You also need to pass the runCustomPackage() method the entry point for your package (usually just its main method).
- Any integration test that actually creates objects in the tenant (whether as part of the prerequisites or the actual running of the package) should override the teardown() method and implement any cleanup of created or side-effected objects.
  Do this just after setup
  While this overridden teardown() method can technically be defined anywhere, it is good practice to define it just after the setup(). This helps keep clear what is created or side-effected in the setup() versus what then needs to be cleaned up in the teardown().
- You can use built-in operations like removeConnection() to remove all assets that were created within (and including) a connection.
- You can then use as many @Test-annotated methods as you like to test various conditions of the result of running your package. These will only execute after the @BeforeClass method's work is all completed.
- A common pattern is to include a method that calls the built-in validateFilesExist() method to confirm that certain files are created by the package.
- Another common pattern is to include a method that calls the built-in validateErrorFreeLog() method to confirm there are no error-level messages in the log file that is generated by the package.
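Putting those pieces together, here is a minimal sketch of what such a test could look like. The import paths, constructor parameters and exact method signatures are assumptions based on the description above; confirm them against the package toolkit and its sample tests.

```kotlin
// Sketch only: import paths, constructor parameters and exact signatures are
// assumptions based on the description above, not a verbatim copy of the toolkit's API.
import com.atlan.model.enums.AtlanConnectorType
import com.atlan.pkg.PackageTest            // assumed location of the base test class
import org.testng.annotations.Test
// ...plus imports for OpenAPISpecLoaderCfg and OpenAPISpecLoader from the package under test

class ImportPetStoreTest : PackageTest("ips") {  // unique string to distinguish this test

    // Unique ID for this run; use it in the names of any objects the test creates
    private val connectionName = makeUnique("c1")

    private val files = listOf("debug.log")  // assumed file(s) the package should produce

    override fun setup() {
        // Run the package with a predefined configuration, simulating the UI hand-off
        runCustomPackage(
            OpenAPISpecLoaderCfg(),   // populate with the test values you want to exercise
            OpenAPISpecLoader::main,  // the package's entry point
        )
    }

    override fun teardown() {
        // Clean up anything the run created or side-effected
        removeConnection(connectionName, AtlanConnectorType.API)  // assumed signature
    }

    @Test
    fun filesCreated() {
        validateFilesExist(files)
    }

    @Test
    fun errorFreeLog() {
        validateErrorFreeLog()
    }
}
```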
Live on a tenant¶
You should then test the package live on a tenant. This will confirm:
- the UI renders as you intend,
- any inputs provided by the user through the UI are properly handed off to your logic, and
- your bundled package is orchestrated successfully through Atlan's back-end workflow engine (Argo).
Deploy the package¶
If you have kubectl access to your cluster, you can selectively deploy your package directly:
- Ensure you are on your cluster:
  loft use vcluster <tenant-name> --project default
  Replace <tenant-name> with the name of your tenant. (This assumes you are already logged in to Loft; naturally, log in there first if you are not already.)
- (One-off) Install node, if you do not already have npm available:
  brew install node
- Install the latest version of argopm:
  npm i -g argopm
- Deploy the package from its rendered output directory:
  argopm install . -n default -c --force
  If you are not in the output directory where your package was rendered, replace the . with the directory path for the rendered output.
Package must first be generally available
To follow these steps, you must first make your package generally available. (Generally available in this sense just means it is available to be deployed — it is not actually deployed to any tenant by default.)
If you do not have kubectl access to your cluster, you will need to selectively deploy the package through the atlanhq/marketplace-packages repository.
- Clone atlanhq/marketplace-packages to your local machine (if you have not already):
  git clone git@github.com:atlanhq/marketplace-packages.git
  cd marketplace-packages
  This assumes you have configured your Git client with appropriate credentials. If this step fails, you'll need to set up git first.
- Start from an up-to-date master branch (in particular if you already have the repository cloned locally):
  git checkout master
  git merge origin/master
- Create a branch in the local repository:
  git branch JIRA-TASK-ID
  git checkout JIRA-TASK-ID
  Replace JIRA-TASK-ID with the unique ID of the task in Jira where you are tracking your work.
- Create or edit the file deployment/tenants/<tenant-name>.pkl for the tenant where you want to deploy the package, with at least the following content:
  amends "../Deployment.pkl"
  include {
    ["@csa/openapi-spec-loader"] {}
  }
  Of course, use your own package's ID in place of @csa/openapi-spec-loader.
- Stage your new (or modified) .pkl file:
  git add deployment/tenants/<tenant-name>.pkl
  Remember to replace <tenant-name> with your actual tenant name. (This tells git which files to include all together in your next commit.)
- Commit your new (or modified) file to the branch:
  git commit -m 'Package deployment for ...'
  Provide a meaningful message for the new package you're deploying. (This tells git to take a local snapshot of all the changes you staged above.)
- Push your committed changes to the remote repository:
  git push --set-upstream origin JIRA-TASK-ID
  Remember that JIRA-TASK-ID is just a placeholder; replace it with the name of your actual branch. (This tells git to push all the local commits you've made against this branch to the remote GitHub repository, so they're available to everyone there.)
- Raise a pull request (PR) from your branch (JIRA-TASK-ID) to master on atlanhq/marketplace-packages.
  Will be auto-approved
  As long as you have named the file correctly and written valid contents, it will be auto-approved by a bot.
- Once auto-approved, you can self-merge to master.[1]
- Once the PR is merged, wait for the atlan-update script to run and complete on your tenant. By default it runs every 30 minutes, so it could take up to 1 hour before it has completed on your tenant.[2]
Test the package¶
Now that the package is deployed on your tenant:
- Hover over the New button in the upper-right, and then click New workflow.
- Select the pill that matches the name of the category you specified for your package. (If you did not specify one, it should be under Custom by default.)
- Select the tile for your package, and then click the Setup Workflow button in the upper-right.
- Fill in appropriate inputs to the UI to configure your package, click Next through each step (if more than one), and finally Run the package.
Running example
For our running example, this would produce the following UI:
Confirm:
- The inputs shown in the UI are as you expect, in particular if you use any rules to limit what inputs are shown.
- The values you provided in the inputs are picked up by your custom logic and influence how the package behaves.
- Your package runs to completion when you provide valid inputs.
- Your package fails with an error when you provide inputs it cannot use to run successfully.
1. If it fails, double-check you have the correct filename, which must end in .pkl.
2. It is also possible that synchronization has been disabled on your tenant, in which case atlan-update may not run at all. If that is the case, you will need to speak with whoever manages your tenant to see how you can test your package.