Manage lineage
Create lineage between assets
Directly
To create lineage between assets, you need to create a `Process` entity.

Input and output assets must already exist

Note that the assets you reference as the inputs and outputs of the process must already exist before creating the process.
Create lineage between assets
- Use the `creator()` method to initialize the object with all necessary attributes for creating it.
- Provide a name for how the process will be shown in the UI.
- Provide the `qualifiedName` of the connection that ran the process.

  Tips for the connection

  The process itself must be created within a connection for both access control and icon labelling. Use a connection `qualifiedName` that indicates the system that ran the process:

  - You could use the same connection `qualifiedName` as the source system, if it was the source system "pushing" data to the target(s).
  - You could use the same connection `qualifiedName` as the target system, if it was the target system "pulling" data from the source(s).
  - You could use a different connection `qualifiedName` from either source or target, if there is a system in between doing the processing (for example, an ETL engine or orchestrator).
- (Optional) Provide the unique ID of the process within that connection. This could be the unique DAG ID for an orchestrator, for example. Since it is optional, you can also send `null` and the SDK will generate a unique ID for you based on the unique combination of inputs and outputs for the process.

  Use your own ID if you can

  While the SDK can generate this ID for you, it is based on the unique combination of inputs and outputs, so the ID can change if those inputs or outputs change. This could result in extra processes in lineage as the process itself changes over time. By using your own ID for the process, the same single process in Atlan will be updated whenever the process changes (even if its inputs or outputs change).
- Provide the list of inputs to the process. Note that each of these is only a `Reference` to an asset, not a full asset object. For a reference you only need (in addition to the type of asset) either:
  - its GUID (for the static `<Type>.refByGuid()` method)
  - its `qualifiedName` (for the static `<Type>.refByQualifiedName()` method)
- Provide the list of outputs to the process. Note that each of these is again only a `Reference` to an asset.
- (Optional) Provide the parent `LineageProcess` in which this process ran (for example, if this process is a subprocess of some higher-level process). If this is a top-level process, you can also send `null` for this parameter (as in this example).
- (Optional) You can also add other properties to the lineage process, such as SQL code that runs within the process.
- (Optional) You can also provide a link to the process, which will provide a button to click to go to that link from the Atlan UI when viewing the process in Atlan.
- Call the `save()` method to actually create the process. Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
- The response will include the single lineage process asset that was created.
- The response will also include the 5 data assets (3 inputs, 2 outputs) that were updated.
Create lineage between assets
- Use the `create()` method to initialize the object with all necessary attributes for creating it.
- Provide a name for how the process will be shown in the UI.
- Provide the `qualified_name` of the connection that ran the process.

  Tips for the connection

  The process itself must be created within a connection for both access control and icon labelling. Use a connection `qualified_name` that indicates the system that ran the process:

  - You could use the same connection `qualified_name` as the source system, if it was the source system "pushing" data to the target(s).
  - You could use the same connection `qualified_name` as the target system, if it was the target system "pulling" data from the source(s).
  - You could use a different connection `qualified_name` from either source or target, if there is a system in between doing the processing (for example, an ETL engine or orchestrator).
- (Optional) Provide the unique ID of the process within that connection. This could be the unique DAG ID for an orchestrator, for example. Since it is optional, you can also leave it out and the SDK will generate a unique ID for you based on the unique combination of inputs and outputs for the process.

  Use your own ID if you can

  While the SDK can generate this ID for you, it is based on the unique combination of inputs and outputs, so the ID can change if those inputs or outputs change. This could result in extra processes in lineage as the process itself changes over time. By using your own ID for the process, the same single process in Atlan will be updated whenever the process changes (even if its inputs or outputs change).
- Provide the list of inputs to the process. Note that each of these is only a `Reference` to an asset, not a full asset object. For a reference you only need (in addition to the type of asset) either:
  - its GUID (for the `ref_by_guid()` method)
  - its `qualified_name` (for the `ref_by_qualified_name()` method)
- Provide the list of outputs to the process. Note that each of these is again only a `Reference` to an asset.
- (Optional) Provide the parent `Process` in which this process ran (for example, if this process is a subprocess of some higher-level process). If this is a top-level process, you can also send `None` for this parameter (as in this example).
- (Optional) You can also add other properties to the lineage process, such as SQL code that runs within the process.
- (Optional) You can also provide a link to the process, which will provide a button to click to go to that link from the Atlan UI when viewing the process in Atlan.
- Call the `save()` method to actually create the process.
- Check that a `Process` was created.
- Check that only 1 `Process` was created.
- Check that tables were updated.
- Check that 5 tables (3 inputs, 2 outputs) were updated.
Create lineage between assets
- Use the `creator()` method to initialize the object with all necessary attributes for creating it.
- Provide a name for how the process will be shown in the UI.
- Provide the `qualifiedName` of the connection that ran the process.

  Tips for the connection

  The process itself must be created within a connection for both access control and icon labelling. Use a connection `qualifiedName` that indicates the system that ran the process:

  - You could use the same connection `qualifiedName` as the source system, if it was the source system "pushing" data to the target(s).
  - You could use the same connection `qualifiedName` as the target system, if it was the target system "pulling" data from the source(s).
  - You could use a different connection `qualifiedName` from either source or target, if there is a system in between doing the processing (for example, an ETL engine or orchestrator).
- (Optional) Provide the unique ID of the process within that connection. This could be the unique DAG ID for an orchestrator, for example. Since it is optional, you can also send `null` and the SDK will generate a unique ID for you based on the unique combination of inputs and outputs for the process.

  Use your own ID if you can

  While the SDK can generate this ID for you, it is based on the unique combination of inputs and outputs, so the ID can change if those inputs or outputs change. This could result in extra processes in lineage as the process itself changes over time. By using your own ID for the process, the same single process in Atlan will be updated whenever the process changes (even if its inputs or outputs change).
- Provide the list of inputs to the process. Note that each of these is only a `Reference` to an asset, not a full asset object. For a reference you only need (in addition to the type of asset) either:
  - its GUID (for the static `<Type>.refByGuid()` method)
  - its `qualifiedName` (for the static `<Type>.refByQualifiedName()` method)
- Provide the list of outputs to the process. Note that each of these is again only a `Reference` to an asset.
- (Optional) Provide the parent `LineageProcess` in which this process ran (for example, if this process is a subprocess of some higher-level process). If this is a top-level process, you can also send `null` for this parameter (as in this example).
- (Optional) You can also add other properties to the lineage process, such as SQL code that runs within the process.
- (Optional) You can also provide a link to the process, which will provide a button to click to go to that link from the Atlan UI when viewing the process in Atlan.
- Call the `save()` method to actually create the process. Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
- The response will include the single lineage process asset that was created.
- The response will also include the 5 data assets (3 inputs, 2 outputs) that were updated.
POST /api/meta/entity/bulk
- All assets must be wrapped in an `entities` array.
- You must provide the exact type name for a `Process` asset (case-sensitive).
- You must provide a name for the integration process.
- You must provide a unique `qualifiedName` for the integration process (case-sensitive).
- You must list all of the input assets to the process. These can be referenced by GUID or by `qualifiedName`.
- You must list all of the output assets from the process. These can also be referenced by either GUID or `qualifiedName`.
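As a sketch of what such a request body could look like, the following builds the structure described above. Every `qualifiedName` and the GUID are hypothetical placeholders, not values from a real tenant:

```python
import json

# Sketch of a request body for POST /api/meta/entity/bulk that creates a
# single Process linking two input tables to one output table. All
# qualifiedName values and the GUID below are hypothetical placeholders.
payload = {
    "entities": [
        {
            "typeName": "Process",  # exact, case-sensitive type name
            "attributes": {
                "name": "Daily ETL run",
                "qualifiedName": "default/snowflake/1234567890/etl/daily",
                "inputs": [
                    # an input referenced by qualifiedName...
                    {
                        "typeName": "Table",
                        "uniqueAttributes": {
                            "qualifiedName": "default/snowflake/1234567890/DB/SCH/RAW_A"
                        },
                    },
                    # ...or referenced by GUID
                    {"typeName": "Table", "guid": "b4113341-251b-4adc-81fb-2420501c30e6"},
                ],
                "outputs": [
                    {
                        "typeName": "Table",
                        "uniqueAttributes": {
                            "qualifiedName": "default/snowflake/1234567890/DB/SCH/TARGET"
                        },
                    },
                ],
            },
        }
    ]
}

body = json.dumps(payload)  # serialized request body for the POST
```

The `entities` array can carry multiple assets at once, which is why this endpoint is the bulk endpoint.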
Using OpenLineage

To create lineage between assets through OpenLineage, you need to send at least two events: one indicating the start of a job run and the other indicating that the job run has finished.

You must first configure OpenLineage

You must first configure a Spark Assets connection in Atlan before sending any OpenLineage events. (You can skip the Configure the integration in Apache Spark section.)
Start lineage between assets via OpenLineage
- Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
- Lineage is tracked through jobs. Each job must have:
  - the name of a connection (that already exists in Atlan),
  - a unique job name (used to idempotently update the same job with multiple runs), and
  - a unique URI indicating the code or system responsible for producing this lineage.
- A job must be run at least once for any lineage to exist, and these separate runs of the same job are tracked through `OpenLineageRun` objects.
- You can define any number of inputs (sources) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- You can chain any number of `input`s to the event to indicate the source datasets for the lineage.
- You can chain any number of `output`s to the event to indicate the target datasets for the lineage.
- Use the `emit()` method to actually send the event to Atlan to be processed. The processing itself occurs asynchronously, so a successful `emit()` only indicates that the event has been successfully sent to Atlan, not that it has (yet) been processed. Because this operation connects directly to Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
Complete lineage between assets via OpenLineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful, with a `COMPLETE`, or had some error, with a `FAIL`).
- Once again, use the `emit()` method to actually send the event to Atlan to be processed (asynchronously). Because this operation connects directly to Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
Start lineage between assets via OpenLineage
- Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
- Lineage is tracked through jobs. Each job must have:
  - the name of a connection (that already exists in Atlan),
  - a unique job name (used to idempotently update the same job with multiple runs), and
  - a unique URI indicating the code or system responsible for producing this lineage.
- A job must be run at least once for any lineage to exist, and these separate runs of the same job are tracked through `OpenLineageRun` objects.
- You can define any number of inputs (sources) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- You can chain any number of `input`s to the event to indicate the source datasets for the lineage.
- You can chain any number of `output`s to the event to indicate the target datasets for the lineage.
- Use the `emit()` method to actually send the event to Atlan to be processed. The processing itself occurs asynchronously, so a successful `emit()` only indicates that the event has been successfully sent to Atlan, not that it has (yet) been processed.
Complete lineage between assets via OpenLineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful, with a `COMPLETE`, or had some error, with a `FAIL`).
- Once again, use the `emit()` method to actually send the event to Atlan to be processed (asynchronously).
Start lineage between assets via OpenLineage
- Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
- Lineage is tracked through jobs. Each job must have:
  - the name of a connection (that already exists in Atlan),
  - a unique job name (used to idempotently update the same job with multiple runs), and
  - a unique URI indicating the code or system responsible for producing this lineage.
- A job must be run at least once for any lineage to exist, and these separate runs of the same job are tracked through `OpenLineageRun` objects.
- You can define any number of inputs (sources) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- You can chain any number of `input`s to the event to indicate the source datasets for the lineage.
- You can chain any number of `output`s to the event to indicate the target datasets for the lineage.
- Use the `emit()` method to actually send the event to Atlan to be processed. The processing itself occurs asynchronously, so a successful `emit()` only indicates that the event has been successfully sent to Atlan, not that it has (yet) been processed. Because this operation connects directly to Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
Complete lineage between assets via OpenLineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful, with a `COMPLETE`, or had some error, with a `FAIL`).
- Once again, use the `emit()` method to actually send the event to Atlan to be processed (asynchronously). Because this operation connects directly to Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
POST /events/openlineage/spark/api/v1/lineage
- Each event for a job run must have a time at which the event occurred.
- Each event must have a URI indicating the code or system responsible for producing this lineage.
- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- Lineage is tracked through jobs. Each job must have:
  - the name of a connection (that already exists in Atlan) as its `namespace`, and
  - a unique job name (used to idempotently update the same job with multiple runs).
- A job must be run at least once for any lineage to exist, and each event for the same run of a job must be associated with the same `runId`.
- You can define any number of inputs (sources) for lineage.
  - Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
  - The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage.
  - Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
  - The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
POST /events/openlineage/spark/api/v1/lineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful, with a `COMPLETE`, or had some error, with a `FAIL`).
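The pair of events described above can be sketched as follows. The connection name, job name, producer URI, and dataset namespaces and names are all hypothetical placeholders:

```python
from datetime import datetime, timezone
from uuid import uuid4

# Sketch of the two minimum OpenLineage run events for one job run.
# "snowflake-connection", "daily_etl", the producer URI, and the dataset
# namespace/names are hypothetical placeholders.
RUN_ID = str(uuid4())  # every event for the same run reuses this runId
PRODUCER = "https://example.com/my-pipeline-repo"

def run_event(event_type: str) -> dict:
    """Build one OpenLineage run event of the given type."""
    return {
        "eventType": event_type,
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "producer": PRODUCER,
        "run": {"runId": RUN_ID},
        "job": {"namespace": "snowflake-connection", "name": "daily_etl"},
        "inputs": [
            {"namespace": "snowflake://abc123.snowflakecomputing.com",
             "name": "DB.SCH.RAW_A"},
        ],
        "outputs": [
            {"namespace": "snowflake://abc123.snowflakecomputing.com",
             "name": "DB.SCH.TARGET"},
        ],
    }

# Send START first, then the terminal state (COMPLETE, or FAIL on error),
# each as a separate POST to /events/openlineage/spark/api/v1/lineage.
start_event = run_event("START")
complete_event = run_event("COMPLETE")
```

Because both events share the same `runId`, Atlan can associate the terminal event with the run it opened.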
Create lineage between columns
Directly

To create lineage between relational asset columns, you need to create a `ColumnProcess` entity.

Lineage with relational columns

Before creating the `ColumnProcess`, verify that lineage already exists between the associated relational assets, and ensure that the columns referenced as inputs and outputs already exist.
Create lineage between columns
- Use the `creator()` method to initialize the object with all necessary attributes for creating it.
- Provide a name for how the column process will be shown in the UI.
- Provide the `qualifiedName` of the connection that ran the column process.

  Tips for the connection

  The column process itself must be created within a connection for both access control and icon labelling. Use a connection `qualifiedName` that indicates the system that ran the column process:

  - You could use the same connection `qualifiedName` as the source system, if it was the source system "pushing" data to the target(s).
  - You could use the same connection `qualifiedName` as the target system, if it was the target system "pulling" data from the source(s).
  - You could use a different connection `qualifiedName` from either source or target, if there is a system in between doing the processing (for example, an ETL engine or orchestrator).
- (Optional) Provide the unique ID of the column process within that connection. This could be the unique DAG ID for an orchestrator, for example. Since it is optional, you can also send `null` and the SDK will generate a unique ID for you based on the unique combination of inputs and outputs for the column process.

  Use your own ID if you can

  While the SDK can generate this ID for you, it is based on the unique combination of inputs and outputs, so the ID can change if those inputs or outputs change. This could result in extra column processes in lineage as the process itself changes over time. By using your own ID for the column process, the same single process in Atlan will be updated whenever the process changes (even if its inputs or outputs change).
- Provide the list of inputs to the column process. Note that each of these is only a `Reference` to an asset, not a full asset object. For a reference you only need (in addition to the type of asset) either:
  - its GUID (for the static `<Type>.refByGuid()` method)
  - its `qualifiedName` (for the static `<Type>.refByQualifiedName()` method)
- Provide the list of outputs to the column process. Note that each of these is again only a `Reference` to an asset.
- Provide the parent `LineageProcess` in which this process ran, since this process is a subprocess of some higher-level process.
- (Optional) You can also add other properties to the column process, such as SQL code that runs within the column process.
- (Optional) You can also provide a link to the column process, which will provide a button to click to go to that link from the Atlan UI when viewing the column process in Atlan.
- Call the `save()` method to actually create the column process. Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
- The response will include the single column process asset that was created.
- The response will also include the 5 column assets (3 inputs, 2 outputs) that were updated.
Create lineage between columns
- Use the `create()` method to initialize the object with all necessary attributes for creating it.
- Provide a name for how the column process will be shown in the UI.
- Provide the `qualified_name` of the connection that ran the column process.

  Tips for the connection

  The column process itself must be created within a connection for both access control and icon labelling. Use a connection `qualified_name` that indicates the system that ran the column process:

  - You could use the same connection `qualified_name` as the source system, if it was the source system "pushing" data to the target(s).
  - You could use the same connection `qualified_name` as the target system, if it was the target system "pulling" data from the source(s).
  - You could use a different connection `qualified_name` from either source or target, if there is a system in between doing the processing (for example, an ETL engine or orchestrator).
- (Optional) Provide the unique ID of the column process within that connection. This could be the unique DAG ID for an orchestrator, for example. Since it is optional, you can also leave it out and the SDK will generate a unique ID for you based on the unique combination of inputs and outputs for the column process.

  Use your own ID if you can

  While the SDK can generate this ID for you, it is based on the unique combination of inputs and outputs, so the ID can change if those inputs or outputs change. This could result in extra column processes in lineage as the column process itself changes over time. By using your own ID for the column process, the same single column process in Atlan will be updated whenever the process changes (even if its inputs or outputs change).
- Provide the list of inputs to the column process. Note that each of these is only a `Reference` to an asset, not a full asset object. For a reference you only need (in addition to the type of asset) either:
  - its GUID (for the `ref_by_guid()` method)
  - its `qualified_name` (for the `ref_by_qualified_name()` method)
- Provide the list of outputs to the column process. Note that each of these is again only a `Reference` to an asset.
- Provide the parent `Process` in which this process ran, since this process is a subprocess of some higher-level process.
- (Optional) You can also add other properties to the column process, such as SQL code that runs within the column process.
- (Optional) You can also provide a link to the column process, which will provide a button to click to go to that link from the Atlan UI when viewing the column process in Atlan.
- Call the `save()` method to actually create the column process.
- Check that a `ColumnProcess` was created.
- Check that only 1 `ColumnProcess` was created.
- Check that columns were updated.
- Check that 5 columns (3 inputs, 2 outputs) were updated.
Create lineage between columns
- Use the `creator()` method to initialize the object with all necessary attributes for creating it.
- Provide a name for how the column process will be shown in the UI.
- Provide the `qualifiedName` of the connection that ran the column process.

  Tips for the connection

  The column process itself must be created within a connection for both access control and icon labelling. Use a connection `qualifiedName` that indicates the system that ran the column process:

  - You could use the same connection `qualifiedName` as the source system, if it was the source system "pushing" data to the target(s).
  - You could use the same connection `qualifiedName` as the target system, if it was the target system "pulling" data from the source(s).
  - You could use a different connection `qualifiedName` from either source or target, if there is a system in between doing the processing (for example, an ETL engine or orchestrator).
- (Optional) Provide the unique ID of the column process within that connection. This could be the unique DAG ID for an orchestrator, for example. Since it is optional, you can also send `null` and the SDK will generate a unique ID for you based on the unique combination of inputs and outputs for the column process.

  Use your own ID if you can

  While the SDK can generate this ID for you, it is based on the unique combination of inputs and outputs, so the ID can change if those inputs or outputs change. This could result in extra column processes in lineage as the process itself changes over time. By using your own ID for the column process, the same single process in Atlan will be updated whenever the process changes (even if its inputs or outputs change).
- Provide the list of inputs to the column process. Note that each of these is only a `Reference` to an asset, not a full asset object. For a reference you only need (in addition to the type of asset) either:
  - its GUID (for the static `<Type>.refByGuid()` method)
  - its `qualifiedName` (for the static `<Type>.refByQualifiedName()` method)
- Provide the list of outputs to the column process. Note that each of these is again only a `Reference` to an asset.
- Provide the parent `LineageProcess` in which this process ran, since this process is a subprocess of some higher-level process.
- (Optional) You can also add other properties to the column process, such as SQL code that runs within the column process.
- (Optional) You can also provide a link to the column process, which will provide a button to click to go to that link from the Atlan UI when viewing the column process in Atlan.
- Call the `save()` method to actually create the column process. Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
- The response will include the single column process asset that was created.
- The response will also include the 5 column assets (3 inputs, 2 outputs) that were updated.
POST /api/meta/entity/bulk
- All assets must be wrapped in an `entities` array.
- You must provide the exact type name for a `ColumnProcess` asset (case-sensitive).
- You must provide a name for the integration column process.
- You must provide a unique `qualifiedName` for the integration column process (case-sensitive).
- You must list all of the input assets to the column process. These can be referenced by GUID or by `qualifiedName`.
- You must list all of the output assets from the column process. These can also be referenced by either GUID or `qualifiedName`.
- You must provide the parent `LineageProcess` in which this process ran, since this process is a subprocess of some higher-level process.
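A sketch of what such a request body could look like follows. All `qualifiedName` values are hypothetical placeholders, and linking the parent process through a `process` relationship attribute is an assumption made for illustration:

```python
# Sketch of a request body for POST /api/meta/entity/bulk that creates a
# ColumnProcess. All qualifiedName values are hypothetical placeholders,
# and the "process" attribute used to link the parent LineageProcess is
# an assumption about the relationship name, shown only for shape.
payload = {
    "entities": [
        {
            "typeName": "ColumnProcess",  # exact, case-sensitive type name
            "attributes": {
                "name": "Derive TARGET.AMOUNT",
                "qualifiedName": "default/snowflake/1234567890/etl/daily/AMOUNT",
                "inputs": [
                    {
                        "typeName": "Column",
                        "uniqueAttributes": {
                            "qualifiedName": "default/snowflake/1234567890/DB/SCH/RAW_A/PRICE"
                        },
                    },
                ],
                "outputs": [
                    {
                        "typeName": "Column",
                        "uniqueAttributes": {
                            "qualifiedName": "default/snowflake/1234567890/DB/SCH/TARGET/AMOUNT"
                        },
                    },
                ],
                # parent process in which this column-level mapping ran
                # (relationship name assumed for illustration)
                "process": {
                    "typeName": "Process",
                    "uniqueAttributes": {
                        "qualifiedName": "default/snowflake/1234567890/etl/daily"
                    },
                },
            },
        }
    ]
}
```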
Using OpenLineage

To create column-level lineage between assets through OpenLineage, you need only extend the details of the outputs you send in your OpenLineage events.

You must first configure OpenLineage

You must first configure a Spark Assets connection in Atlan before sending any OpenLineage events. (You can skip the Configure the integration in Apache Spark section.)
Start column-level lineage between assets via OpenLineage
- Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
- Lineage is tracked through jobs. Each job must have:
  - the name of a connection (that already exists in Atlan),
  - a unique job name (used to idempotently update the same job with multiple runs), and
  - a unique URI indicating the code or system responsible for producing this lineage.
- A job must be run at least once for any lineage to exist, and these separate runs of the same job are tracked through `OpenLineageRun` objects.
- You can define any number of inputs (sources) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- For column-level lineage, you specify the mapping only on the target (outputs) end of the lineage, by chaining a `toField` for each output column.
- Each key for such a `toField()` chain is the name of a field (column) in the output dataset.
- You can then provide a list that defines all input (source) fields that map to this output field in column-level lineage.

  Create input fields from input datasets

  You can quickly create such an input (source) field from an input dataset using the `fromField()` method and the name of the column in that input dataset.
- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- You can chain any number of `input`s to the event to indicate the source datasets for the lineage.
- You can chain any number of `output`s to the event to indicate the target datasets for the lineage.
- Use the `emit()` method to actually send the event to Atlan to be processed. The processing itself occurs asynchronously, so a successful `emit()` only indicates that the event has been successfully sent to Atlan, not that it has (yet) been processed. Because this operation connects directly to Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
Complete lineage between assets via OpenLineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful with a `COMPLETE` or had some error with a `FAIL`).
- Once again, use the `emit()` method to actually send the event to Atlan to be processed (asynchronously). Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
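Whichever SDK you use, the events it emits ultimately follow the OpenLineage specification. A minimal sketch of the `START` and `COMPLETE` payloads for a single run, built as plain Python dictionaries (the connection name, job name, producer URI, namespace, and dataset names below are all illustrative assumptions, not values from your tenant):

```python
import uuid
from datetime import datetime, timezone

# One runId identifies the run; it must be shared by every event of that run.
run_id = str(uuid.uuid4())

# The job (connection name as namespace + unique job name) and the producer
# URI are repeated in every event of the run. All values here are assumed.
job = {"namespace": "ol-spark", "name": "dag_123"}
producer = "https://your.orchestrator/unique/id/123"

def event(event_type: str, inputs=None, outputs=None) -> dict:
    """Build one OpenLineage event for this run."""
    return {
        "eventType": event_type,
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": run_id},
        "job": job,
        "producer": producer,
        "inputs": inputs or [],
        "outputs": outputs or [],
    }

namespace = "snowflake://abc123.snowflakecomputing.com"  # assumed source namespace
start = event(
    "START",
    inputs=[{"namespace": namespace, "name": "OPS.DEFAULT.RUN_STATS"}],
    outputs=[{"namespace": namespace, "name": "OPS.DEFAULT.FULL_STATS"}],
)
complete = event("COMPLETE")  # terminal state that closes out the same run
```

Note how the terminal event carries the same `runId` as the `START` event; that shared identifier is what ties the two events into a single run of the job.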
Start column-level lineage between assets via OpenLineage
- Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
- Lineage is tracked through jobs. Each job must have:
    - the name of a connection (that already exists in Atlan),
    - a unique job name (used to idempotently update the same job with multiple runs), and
    - a unique URI indicating the code or system responsible for producing this lineage.
- A job must be run at least once for any lineage to exist, and these separate runs of the same job are tracked through `OpenLineageRun` objects.
- You can define any number of inputs (sources) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- For column-level lineage, you specify the mapping only on the target (outputs) end of the lineage, via the `to_fields` attribute.
- Each key is the name of a field (column) in the output dataset.
- You can then provide a list that defines all input (source) fields that map to this output field in column-level lineage.

    Create input fields from input datasets

    You can quickly create such an input (source) field from an input dataset using the `from_field()` method and the name of the column in that input dataset.

- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- You can chain any number of `input`s to the event to indicate the source datasets for the lineage.
- You can chain any number of `output`s to the event to indicate the target datasets for the lineage.
- Use the `emit()` method to actually send the event to Atlan to be processed. The processing itself occurs asynchronously, so a successful `emit()` only indicates that the event has been successfully sent to Atlan, not that it has (yet) been processed.
Complete lineage between assets via OpenLineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful with a `COMPLETE` or had some error with a `FAIL`).
- Once again, use the `emit()` method to actually send the event to Atlan to be processed (asynchronously).
Start column-level lineage between assets via OpenLineage
- Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
- Lineage is tracked through jobs. Each job must have:
    - the name of a connection (that already exists in Atlan),
    - a unique job name (used to idempotently update the same job with multiple runs), and
    - a unique URI indicating the code or system responsible for producing this lineage.
- A job must be run at least once for any lineage to exist, and these separate runs of the same job are tracked through `OpenLineageRun` objects.
- You can define any number of inputs (sources) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage. The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- For column-level lineage, you specify the mapping only on the target (outputs) end of the lineage, by chaining a `toField()` for each output column.
- Each key for such a `toField()` chain is the name of a field (column) in the output dataset.
- You can then provide a list that defines all input (source) fields that map to this output field in column-level lineage.

    Create input fields from input datasets

    You can quickly create such an input (source) field from an input dataset using the `fromField()` method and the name of the column in that input dataset.

- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- You can chain any number of `input`s to the event to indicate the source datasets for the lineage.
- You can chain any number of `output`s to the event to indicate the target datasets for the lineage.
- Use the `emit()` method to actually send the event to Atlan to be processed. The processing itself occurs asynchronously, so a successful `emit()` only indicates that the event has been successfully sent to Atlan, not that it has (yet) been processed. Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
Complete lineage between assets via OpenLineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful with a `COMPLETE` or had some error with a `FAIL`).
- Once again, use the `emit()` method to actually send the event to Atlan to be processed (asynchronously). Because this operation will directly persist the asset in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
POST /events/openlineage/spark/api/v1/lineage
- Each event for a job run must have a time at which the event occurred.
- Each event must have a URI indicating the code or system responsible for producing this lineage.
- Each run of a job must consist of at least two events: a `START` event indicating when the job run began, and some terminal state indicating when the job run finished.
- Lineage is tracked through jobs. Each job must have:
    - the name of a connection (that already exists in Atlan) as its `namespace`, and
    - a unique job name (used to idempotently update the same job with multiple runs).
- A job must be run at least once for any lineage to exist, and each event for the same run of a job must be associated with the same `runId`.
- You can define any number of inputs (sources) for lineage.
    - Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
    - The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- You can define any number of outputs (targets) for lineage.
    - Datasets used in data lineage need a `namespace` that follows the source-specific naming standards of OpenLineage.
    - The `name` of a dataset should use a `.`-qualified form. For example, a table should be `DATABASE_NAME.SCHEMA_NAME.TABLE_NAME`.
- For column-level lineage, you specify the mapping only on the target (outputs) end of the lineage, by including a `columnLineage` facet with an embedded `fields` object.
- Each key for the `fields` object is the name of a field (column) in the output dataset.
- You can then provide a list that defines all input (source) fields that map to this output field in column-level lineage.
POST /events/openlineage/spark/api/v1/lineage
- Since each run of a job must consist of at least two events, do not forget to send the terminal state indicating when the job has finished (whether it was successful with a `COMPLETE` or had some error with a `FAIL`).
Remove lineage between assets¶
To remove lineage between assets, you need to delete the `Process` entity that links them:
Only deletes the process indicated, no more
Be aware that this will only delete the process with the GUID specified. It will not remove any column processes that may also exist. To remove those column processes as well, you must identify the GUID of each column-level process and call the same `purge` method against each of those GUIDs.
Remove lineage between assets
- Provide the GUID for the process to the static `Asset.purge()` method. Because this operation will directly remove the asset from Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
- The response will include that single process that was purged.
- If you want to confirm the details, you'll need to type-check and then cast the generic `Asset` returned into a `Process`.
Remove lineage between assets
- Invoke the `asset.purge_by_guid()` method to delete the `Process`.
- Provide the GUID of the process to be purged.
- Check that a `Process` was purged.
- Check that only one `Process` was purged.
Remove lineage between assets
- Provide the GUID for the process to the static `Asset.purge()` method. Because this operation will directly remove the asset from Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
- The response will include that single process that was purged.
- If you want to confirm the details, you'll need to type-check and then cast the generic `Asset` returned into a `Process`.
DELETE /api/meta/entity/bulk?guid=6fa1f0d0-5720-4041-8243-c2a5628b68bf&deleteType=PURGE
- All of the details are in the request URL; there is no payload for a deletion. The GUID listed in the URL is that of the process itself (not any of its inputs or outputs).
More information
This will irreversibly delete the process, and therefore the lineage it represented. The input and output assets will also be updated, to no longer be linked to the (now non-existent) process. However, those input and output assets will continue to exist in Atlan.
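The raw deletion call above can be sketched with the Python standard library. The tenant URL and the API token are placeholder assumptions; the GUID is the example value from the request above:

```python
from urllib.parse import urlencode
from urllib.request import Request

base_url = "https://tenant.atlan.com"           # assumed tenant URL
guid = "6fa1f0d0-5720-4041-8243-c2a5628b68bf"   # GUID of the process to purge

# deleteType=PURGE makes this a hard (irreversible) delete of the process.
query = urlencode({"guid": guid, "deleteType": "PURGE"})
req = Request(
    url=f"{base_url}/api/meta/entity/bulk?{query}",
    method="DELETE",
    headers={"Authorization": "Bearer <api-token>"},  # placeholder token
)
# urllib.request.urlopen(req) would send the request; it is not executed here.
```

There is no request body: the GUID and the delete type travel entirely in the query string, which is why the same call pattern works for any asset type, not just processes.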