Confluent Kafka assets package

The Confluent Kafka assets package crawls Confluent Kafka assets and publishes them to Atlan for discovery.

Direct extraction

Will create a new connection

This should only be used to create the workflow the first time. Each time you run this method, it will create a new connection and new assets within that connection, which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.

Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).

To crawl assets directly from Confluent Kafka:

Direct extraction from Confluent Kafka
AtlanClient client = Atlan.getDefaultClient();
Workflow crawler = ConfluentKafkaCrawler.creator( // (1)
      client, // (2)
      "production", // (3)
      List.of(client.getRoleCache().getIdForName("$admin")), // (4)
      null,
      null
    )
    .direct( // (5)
      "dev-south.aws.confluent.cloud:9092",
      true
    )
    .apiToken(
      "api-key-here", // (6)
      "api-secret-here" // (7)
    )
    .skipInternal(true) // (8)
    .include(".*_DEV_TOPICS") // (9)
    .exclude(".*_TEST") // (10)
    .build()  // (11)
    .toWorkflow();  // (12)
WorkflowResponse response = crawler.run();  // (13)
  1. The ConfluentKafkaCrawler package will create a workflow to crawl assets from Confluent Kafka.
  2. You must provide an Atlan client.
  3. You must provide a name for the connection that the Confluent Kafka assets will exist within.
  4. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  5. When crawling assets directly from Confluent Kafka, you are required to provide the following information:

    • hostname and port number (host.example.com:9092) for the Kafka bootstrap server.
    • whether to use an encrypted SSL connection (true) or plaintext (false).
  6. You must provide an API key through which to access Kafka.
  7. You must provide an API secret through which to access Kafka.
  8. You can also optionally set whether to skip internal topics when crawling (true) or include them (false).
  9. You can also optionally provide the regular expression to use for including topics when crawling.
  10. You can also optionally provide the regular expression to use for excluding topics when crawling.
  11. Build the minimal package object.
  12. Now, you can convert the package into a Workflow object.
  13. You can then run the workflow using the run() method on the object you've created.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
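
    For example, a minimal sketch of blocking until the run completes, assuming the monitorStatus() helper on the SDK's WorkflowResponse (described in that introduction); the logger name is arbitrary:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.atlan.model.enums.AtlanWorkflowPhase;

Logger log = LoggerFactory.getLogger("kafka-crawler"); // arbitrary logger name
// Blocks until the workflow run reaches a terminal phase, logging progress along the way.
AtlanWorkflowPhase status = response.monitorStatus(log);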

Direct extraction from Confluent Kafka
from pyatlan.client.atlan import AtlanClient
from pyatlan.cache.role_cache import RoleCache
from pyatlan.model.packages import ConfluentKafkaCrawler

client = AtlanClient()

crawler = (
    ConfluentKafkaCrawler( # (1)
        connection_name="production", # (2)
        admin_roles=[RoleCache.get_id_for_name("$admin")],  # (3)
        admin_groups=None,
        admin_users=None,
    )
    .direct( # (4)
        bootstrap="dev-south.aws.confluent.cloud:9092",
        encrypted=True
    )
    .api_token(
        api_key="api-key-here", # (5)
        api_secret="api-secret-here" # (6)
    )
    .skip_internal(True) # (7)
    .include(regex=".*_DEV_TOPICS") # (8)
    .exclude(regex=".*_TEST") # (9)
    .to_workflow() # (10)
)
response = client.workflow.run(crawler) # (11)
  1. Base configuration for a new Confluent Kafka crawler.
  2. You must provide a name for the connection that the Confluent Kafka assets will exist within.
  3. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  4. When crawling assets directly from Confluent Kafka, you are required to provide the following information:

    • hostname and port number (host.example.com:9092) for the Kafka bootstrap server.
    • whether to use an encrypted SSL connection (True) or plaintext (False).
  5. You must provide an API key through which to access Kafka.
  6. You must provide an API secret through which to access Kafka.
  7. You can also optionally set whether to skip internal topics when crawling (True) or include them (False).
  8. You can also optionally provide the regular expression to use for including topics when crawling. (If set to None, all topics will be crawled.)
  9. You can also optionally provide the regular expression to use for excluding topics when crawling. (If set to None, no topics will be excluded.)
  10. Now, you can convert the package into a Workflow object.
  11. Run the workflow by invoking the run() method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
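
    For example, a minimal sketch of blocking until the run completes, assuming pyatlan's workflow client monitor() helper (described in that introduction):

import logging

logger = logging.getLogger(__name__)

# Blocks until the workflow run reaches a terminal phase, logging progress along the way.
status = client.workflow.monitor(workflow_response=response, logger=logger)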

Direct extraction from Confluent Kafka
val client = Atlan.getDefaultClient()
val crawler = ConfluentKafkaCrawler.creator( // (1)
        client, // (2)
        "production", // (3)
        listOf(client.getRoleCache().getIdForName("\$admin")), // (4)
        null,
        null
    )
    .direct( // (5)
        "dev-south.aws.confluent.cloud:9092",
        true
    )
    .apiToken(
        "api-key-here", // (6)
        "api-secret-here" // (7)
    )
    .skipInternal(true) // (8)
    .include(".*_DEV_TOPICS") // (9)
    .exclude(".*_TEST") // (10)
    .build()  // (11)
    .toWorkflow()  // (12)
val response = crawler.run()  // (13)
  1. The ConfluentKafkaCrawler package will create a workflow to crawl assets from Confluent Kafka.
  2. You must provide an Atlan client.
  3. You must provide a name for the connection that the Confluent Kafka assets will exist within.
  4. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  5. When crawling assets directly from Confluent Kafka, you are required to provide the following information:

    • hostname and port number (host.example.com:9092) for the Kafka bootstrap server.
    • whether to use an encrypted SSL connection (true) or plaintext (false).
  6. You must provide an API key through which to access Kafka.
  7. You must provide an API secret through which to access Kafka.
  8. You can also optionally set whether to skip internal topics when crawling (true) or include them (false).
  9. You can also optionally provide the regular expression to use for including topics when crawling.
  10. You can also optionally provide the regular expression to use for excluding topics when crawling.
  11. Build the minimal package object.
  12. Now, you can convert the package into a Workflow object.
  13. You can then run the workflow using the run() method on the object you've created.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
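
    For example, a minimal sketch of blocking until the run completes, assuming the same monitorStatus() helper on WorkflowResponse (described in that introduction); the logger name is arbitrary:

import org.slf4j.LoggerFactory

val log = LoggerFactory.getLogger("kafka-crawler") // arbitrary logger name
// Blocks until the workflow run reaches a terminal phase, logging progress along the way.
val status = response.monitorStatus(log)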

Create the workflow via UI only

We recommend creating the workflow only via the UI. To re-run an existing workflow, see the steps below.

Re-run existing workflow

To re-run an existing workflow for Confluent Kafka assets:

Re-run existing Confluent Kafka workflow
List<WorkflowSearchResult> existing = WorkflowSearchRequest // (1)
            .findByType(ConfluentKafkaCrawler.PREFIX, 5); // (2)
// Determine which of the results is the Confluent Kafka workflow you want to re-run...
WorkflowRunResponse response = existing.get(n).rerun(); // (3)
  1. You can search for existing workflows through the WorkflowSearchRequest class.
  2. You can find workflows by their type using the findByType() helper method and providing the prefix for one of the packages. In this example, we do so for the ConfluentKafkaCrawler. (You can also specify the maximum number of matching workflows to retrieve.)
  3. Once you've found the workflow you want to re-run, you can simply call the rerun() helper method on the workflow search result. The WorkflowRunResponse is just a subtype of WorkflowResponse so has the same helper method to monitor progress of the workflow run.

    • Optionally, you can call rerun(true) to re-run idempotently: if the workflow is already in a running or pending state, it will not be re-run, and the details of the already-running workflow are returned instead. By default, idempotency is false.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
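
    Since WorkflowRunResponse is a subtype of WorkflowResponse, the same hedged monitoring sketch from above applies, again assuming the monitorStatus() helper:

// Blocks until the re-run reaches a terminal phase (log is an SLF4J Logger, as above).
AtlanWorkflowPhase status = response.monitorStatus(log);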

Re-run existing Confluent Kafka workflow
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.enums import WorkflowPackage

client = AtlanClient()

existing = client.workflow.find_by_type(  # (1)
  prefix=WorkflowPackage.KAFKA_CONFLUENT_CLOUD, max_results=5
)

# Determine which Confluent Kafka workflow (n)
# from the list of results you want to re-run.
response = client.workflow.rerun(existing[n]) # (2)
  1. You can find workflows by their type using the workflow client find_by_type() method and providing the prefix for one of the packages. In this example, we do so for the ConfluentKafkaCrawler. (You can also specify the maximum number of matching workflows to retrieve.)
  2. Once you've found the workflow you want to re-run, you can simply call the workflow client rerun() method.

    • Optionally, you can pass idempotent=True to rerun() to re-run idempotently: if the workflow is already in a running or pending state, it will not be re-run, and the details of the already-running workflow are returned instead. By default, idempotent is False.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
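
    The rerun response can be monitored the same way, again assuming pyatlan's workflow client monitor() helper:

# Blocks until the re-run reaches a terminal phase (logger as in the earlier sketch).
status = client.workflow.monitor(workflow_response=response, logger=logger)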

Re-run existing Confluent Kafka workflow
val existing = WorkflowSearchRequest // (1)
            .findByType(ConfluentKafkaCrawler.PREFIX, 5) // (2)
// Determine which of the results is the
// Confluent Kafka workflow you want to re-run...
val response = existing[n].rerun() // (3)
  1. You can search for existing workflows through the WorkflowSearchRequest class.
  2. You can find workflows by their type using the findByType() helper method and providing the prefix for one of the packages. In this example, we do so for the ConfluentKafkaCrawler. (You can also specify the maximum number of matching workflows to retrieve.)
  3. Once you've found the workflow you want to re-run, you can simply call the rerun() helper method on the workflow search result. The WorkflowRunResponse is just a subtype of WorkflowResponse so has the same helper method to monitor progress of the workflow run.

    • Optionally, you can call rerun(true) to re-run idempotently: if the workflow is already in a running or pending state, it will not be re-run, and the details of the already-running workflow are returned instead. By default, idempotency is false.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
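
    As in the Java example, the same hedged monitoring sketch applies, assuming the monitorStatus() helper:

// Blocks until the re-run reaches a terminal phase (log is an SLF4J Logger, as above).
val status = response.monitorStatus(log)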

Requires multiple steps through the raw REST API

  1. Find the existing workflow.
  2. Send through the resulting re-run request.
POST /api/service/workflows/indexsearch
{
  "from": 0,
  "size": 5,
  "query": {
    "bool": {
      "filter": [
        {
          "nested": {
            "path": "metadata",
            "query": {
              "prefix": {
                "metadata.name.keyword": {
                  "value": "atlan-kafka-confluent-cloud" // (1)
                }
              }
            }
          }
        }
      ]
    }
  },
  "sort": [
    {
      "metadata.creationTimestamp": {
        "nested": {
          "path": "metadata"
        },
        "order": "desc"
      }
    }
  ],
  "track_total_hits": true
}
  1. Searching by the atlan-kafka-confluent-cloud prefix will ensure you only find existing Confluent Kafka assets workflows.

    Name of the workflow

    The name of the workflow will be nested within the _source.metadata.name property of the response object. (Remember since this is a search, there could be multiple results, so you may want to use the other details in each result to determine which workflow you really want.)

POST /api/service/workflows/submit
{
  "namespace": "default",
  "resourceKind": "WorkflowTemplate",
  "resourceName": "atlan-kafka-confluent-cloud-1684500411" // (1)
}
  1. Send the name of the workflow as the resourceName to re-run it.
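
Putting both steps together, here is a minimal sketch using Python's requests library. The tenant URL and bearer token are placeholders, and the Elasticsearch-style shape of the search response (hits.hits[n]._source.metadata.name) is an assumption based on the note above:

import requests

BASE_URL = "https://tenant.atlan.com"  # placeholder tenant URL
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder API token

# Step 1: find existing Confluent Kafka workflows by their name prefix.
search_request = {
    "from": 0,
    "size": 5,
    "query": {
        "bool": {
            "filter": [
                {
                    "nested": {
                        "path": "metadata",
                        "query": {
                            "prefix": {
                                "metadata.name.keyword": {
                                    "value": "atlan-kafka-confluent-cloud"
                                }
                            }
                        },
                    }
                }
            ]
        }
    },
    "sort": [
        {
            "metadata.creationTimestamp": {
                "nested": {"path": "metadata"},
                "order": "desc",
            }
        }
    ],
    "track_total_hits": True,
}
hits = requests.post(
    f"{BASE_URL}/api/service/workflows/indexsearch",
    json=search_request,
    headers=HEADERS,
).json()["hits"]["hits"]  # assumes an Elasticsearch-style response shape

# Step 2: re-run the chosen workflow (here, the most recent) by submitting its name.
name = hits[0]["_source"]["metadata"]["name"]
requests.post(
    f"{BASE_URL}/api/service/workflows/submit",
    json={
        "namespace": "default",
        "resourceKind": "WorkflowTemplate",
        "resourceName": name,
    },
    headers=HEADERS,
)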