Databricks assets package¶
The Databricks assets package crawls Databricks assets and publishes them to Atlan for discovery.
Direct extraction¶
Will create a new connection
This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.
Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).
To crawl assets directly from Databricks:
Direct extraction from Databricks
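The original code listing did not survive extraction, so the following is a minimal sketch of how the builder chain described by the callouts below might look in the Python SDK. Method and parameter names such as `direct()`, `basic_auth()`, `metadata_extraction_method()`, `exclude_regex()`, `enable_view_lineage()`, `enable_source_level_filtering()` and `to_workflow()` are assumptions inferred from those callouts, not a verbatim copy of the package's API.

```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.packages import DatabricksCrawler

client = AtlanClient()

workflow = (
    DatabricksCrawler(
        client=client,
        connection_name="production",  # name of the connection to create
        # connection admins: a role (everyone with $admin), groups, or users
        admin_roles=[client.role_cache.get_id_for_name("$admin")],
        admin_groups=None,
        admin_users=None,
        row_limit=10000,           # maximum rows retrievable for any asset
        allow_query=True,          # allow queries against the connection
        allow_query_preview=True,  # allow sample data previews
    )
    .direct(  # crawl directly from the Databricks instance (assumed method name)
        hostname="your-workspace.cloud.databricks.com",
        port=443,
    )
    .basic_auth(  # personal access token + HTTP path (assumed method name)
        personal_access_token="<token>",
        http_path="<http-path>",
    )
    # JDBC is the default; REST requires a Unity Catalog-enabled instance
    .metadata_extraction_method(DatabricksCrawler.ExtractionMethod.JDBC)
    .include(assets=[])       # [] crawls all databases
    .exclude(assets=[])       # [] excludes no databases
    .exclude_regex(regex="")  # regex of assets to ignore (assumed method name)
    .enable_view_lineage(True)
    .enable_source_level_filtering(False)
    .to_workflow()  # convert the package into a Workflow object
)

# run asynchronously via the workflow client
response = client.workflows.run(workflow)
```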
- Base configuration for a new Databricks crawler.
- You must provide a name for the connection that the Databricks assets will exist within.
- You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users),
    - a list of groups (names) that will be connection admins, or
    - a list of users (names) that will be connection admins.
- You can specify a maximum number of rows that can be accessed for any asset in the connection (default: `10000`).
- You can specify whether you want to allow queries to this connection (default: `True`, as in this example) or deny all query access to the connection (`False`).
- You can specify whether you want to allow data previews on this connection (default: `True`, as in this example) or deny all sample data previews to the connection (`False`).
- When crawling assets directly from Databricks, you are required to provide the following information:
    - hostname of the Databricks instance.
    - port number of the Databricks instance (default: `443`).
- When using basic authentication, you are required to provide the following information:
    - personal access token through which to access the Databricks instance.
    - HTTP path of your Databricks instance.

    You can instead use either of the following authentication methods:

    - `aws_service()`:
        - `client_id`: client ID for your AWS service principal.
        - `client_secret`: client secret for your AWS service principal.
    - `azure_service()`:
        - `client_id`: client ID for your Azure service principal.
        - `client_secret`: client secret for your Azure service principal.
        - `tenant_id`: tenant ID (directory ID) for your Azure service principal.
- The extraction method determines the interface the package will use to extract metadata from Databricks. JDBC is the recommended method (default). The REST API method is supported only by Unity Catalog-enabled instances.
- You can also optionally specify the list of assets to include in crawling. For Databricks assets, this should be specified as a list of database names. If set to `[]`, all databases will be crawled.

    Recommendation

    When using the `DatabricksCrawler.ExtractionMethod.REST` extraction method, ensure that you use the `include_for_rest_api()` method, which accepts a list of database names to include during crawling.

- You can also optionally specify the list of assets to exclude from crawling. For Databricks assets, this should be specified as a list of database GUIDs. If set to `[]`, no databases will be excluded.

    Recommendation

    When using the `DatabricksCrawler.ExtractionMethod.REST` extraction method, ensure that you use the `exclude_for_rest_api()` method, which accepts a list of database names to exclude during crawling.

- You can also optionally specify an exclude regex for the crawler to ignore assets based on a naming convention.
- You can configure whether to enable view lineage as part of crawling Databricks (default: `True`).
- You can configure advanced settings to enable (`True`) or disable (`False`) schema-level filtering on the source. Schemas specified in the include filter will be fetched.
- Now you can convert the package into a `Workflow` object.
- Run the workflow by invoking the `run()` method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
Create the workflow via UI only
We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.
Offline extraction¶
Will create a new connection
This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.
Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).
To crawl Databricks assets from an S3 bucket:
Crawling Databricks assets from a bucket
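The offline-extraction listing was also lost to extraction. Below is a minimal sketch under the same assumptions as the direct-extraction example; the `s3()` method is named in the callouts, but the parameter names `bucket_name`, `bucket_prefix` and `bucket_region` are assumptions for illustration.

```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.packages import DatabricksCrawler

client = AtlanClient()

workflow = (
    DatabricksCrawler(
        client=client,
        connection_name="production",
        admin_roles=[client.role_cache.get_id_for_name("$admin")],
        admin_groups=None,
        admin_users=None,
        row_limit=10000,
        allow_query=True,
        allow_query_preview=True,
    )
    .s3(  # read previously extracted metadata files from object storage
        bucket_name="my-databricks-extracts",   # bucket/storage with the extracts
        bucket_prefix="databricks/production",  # everything after the bucket name
        bucket_region="us-east-1",              # optional, if applicable
    )
    .to_workflow()
)

response = client.workflows.run(workflow)
```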
- Base configuration for a new Databricks crawler.
- You must provide a name for the connection that the Databricks assets will exist within.
- You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users),
    - a list of groups (names) that will be connection admins, or
    - a list of users (names) that will be connection admins.
- You can specify a maximum number of rows that can be accessed for any asset in the connection (default: `10000`).
- You can specify whether you want to allow queries to this connection (default: `True`, as in this example) or deny all query access to the connection (`False`).
- You can specify whether you want to allow data previews on this connection (default: `True`, as in this example) or deny all sample data previews to the connection (`False`).
- When using `s3()`, you need to provide the following information:
    - name of the bucket/storage that contains the extracted metadata files.
    - prefix, which is everything after the bucket/storage name, including the `path`.
    - (optional) name of the region, if applicable.
- Now you can convert the package into a `Workflow` object.
- Run the workflow by invoking the `run()` method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
Create the workflow via UI only
We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.
Re-run existing workflow¶
To re-run an existing workflow for Databricks assets:
Re-run existing Databricks workflow
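Since the original listing is missing, here is a short sketch of the re-run flow described below, using the workflow client's `find_by_type()` and `rerun()` methods named in the steps; the `WorkflowPackage.DATABRICKS` enum member and the `client.workflows` attribute are assumptions about how the package prefix is passed.

```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.enums import WorkflowPackage

client = AtlanClient()

# find existing workflows created by the Databricks crawler package
existing = client.workflows.find_by_type(
    prefix=WorkflowPackage.DATABRICKS,  # assumed enum member for this package
    max_results=5,
)

if existing:
    # idempotent=True returns the details of a workflow that is already
    # running or pending instead of starting another run (default: False)
    response = client.workflows.rerun(existing[0], idempotent=True)
```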
- You can find workflows by their type using the workflow client `find_by_type()` method and providing the prefix for one of the packages. In this example, we do so for the `DatabricksCrawler`. (You can also specify the maximum number of matching workflows you want to retrieve.)
- Once you've found the workflow you want to re-run, you can simply call the workflow client `rerun()` method.
    - Optionally, you can use `rerun(idempotent=True)` to avoid re-running a workflow that is already in a running or pending state. If such a workflow is found, its details are returned instead of starting a new run. By default, `idempotent` is set to `False`.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
Requires multiple steps through the raw REST API
- Find the existing workflow.
- Send through the resulting re-run request.
POST /api/service/workflows/indexsearch
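The request body for this search did not survive extraction. As a rough sketch, assuming Atlan's workflow search accepts an Elasticsearch-style query, a prefix match on the nested `metadata.name` field might look like the following; only the endpoint and the `atlan-databricks` prefix come from this document, while the exact field names and query shape are assumptions.

```json
{
  "from": 0,
  "size": 5,
  "query": {
    "bool": {
      "filter": {
        "nested": {
          "path": "metadata",
          "query": {
            "prefix": {
              "metadata.name.keyword": {
                "value": "atlan-databricks"
              }
            }
          }
        }
      }
    }
  }
}
```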
- Searching by the `atlan-databricks` prefix will ensure you only find existing Databricks assets workflows.

    Name of the workflow

    The name of the workflow will be nested within the `_source.metadata.name` property of the response object. (Remember that since this is a search, there could be multiple results, so you may want to use the other details in each result to determine which workflow you really want.)
POST /api/service/workflows/submit
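A sketch of the submit request body, assuming the `namespace` and `resourceKind` values commonly used for Atlan workflow templates; only `resourceName` is taken from the step below, and the workflow name shown is a hypothetical placeholder.

```json
{
  "namespace": "default",
  "resourceKind": "WorkflowTemplate",
  "resourceName": "atlan-databricks-1234567890"
}
```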
- Send the name of the workflow as the `resourceName` to rerun it.