- Added Quickstart model counts to README. (#135)
- Corrected references to connectors and connections in the README. (#135)
PR #133 contains the following updates:
- This change is marked as breaking due to its impact on Redshift configurations.
- For Redshift users, comment data aggregated under the `conversations` field in the `jira__issue_enhanced` table is now disabled by default to prevent consistent errors related to Redshift's varchar length limits.
  - If you wish to re-enable `conversations` on Redshift, set the `jira_include_conversations` variable to `true` in your `dbt_project.yml`.
- Updated the `comment` seed data to ensure conversations are correctly disabled for Redshift by default.
- Renamed the `jira_is_databricks_sql_warehouse` macro to `jira_is_incremental_compatible`, which was updated to return `true` if the Databricks runtime is an all-purpose cluster (previously it checked only for a SQL warehouse runtime) or if the target is any other non-Databricks-supported destination.
  - This update addresses Databricks runtimes (e.g., endpoints and external runtimes) that do not support the `insert_overwrite` incremental strategy used in the `jira__daily_issue_field_history` and `int_jira__pivot_daily_field_history` models.
- For Databricks users, the `jira__daily_issue_field_history` and `int_jira__pivot_daily_field_history` models will now apply the incremental strategy only if running on an all-purpose cluster. All other Databricks runtimes will not utilize an incremental strategy.
- Added consistency tests for the `jira__project_enhanced` and `jira__user_enhanced` models.
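A minimal sketch of how a Redshift user could re-enable the disabled `conversations` field, per the note above (top-level `vars` placement is assumed; adjust to your project layout):

```yaml
# dbt_project.yml
vars:
  # Assumed placement: re-enables the conversations field on Redshift,
  # accepting the risk of hitting Redshift's varchar length limits.
  jira_include_conversations: true
```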
PR #131 contains the following updates:
Since the following changes are breaking, a `--full-refresh` after upgrading will be required.
- Changed the partitioning from days to weeks in the following models for BigQuery and Databricks All Purpose Cluster destinations:
  - `int_jira__pivot_daily_field_history`
    - Added field `valid_starting_at_week` for use with the new weekly partition logic.
  - `jira__daily_issue_field_history`
    - Added field `date_week` for use with the new weekly partition logic.
  - This adjustment reduces the total number of partitions, helping avoid partition limit issues in certain warehouses.
- For Databricks All Purpose Cluster destinations, updated the `file_format` to `delta` for improved performance.
- Updated the default materialization of `int_jira__issue_calendar_spine` from incremental to ephemeral to improve performance and maintainability.
- Updated README with the new default of 1 week for the `lookback_window` variable.
- Replaced the deprecated `dbt.current_timestamp_backcompat()` function with `dbt.current_timestamp()` to ensure all timestamps are captured in UTC for the following models:
  - `int_jira__issue_calendar_spine`
  - `int_jira__issue_join`
  - `jira__issue_enhanced`
- Updated model `int_jira__issue_calendar_spine` to prevent errors during compilation.
- Added consistency tests for the `jira__daily_issue_field_history` and `jira__issue_enhanced` models.
PR #127 contains the following updates:
⚠️ Since the following changes are breaking, a `--full-refresh` after upgrading will be required.
- To reduce storage, updated the default materialization of the upstream staging models to views. (See the `dbt_jira_source` CHANGELOG for more details.)
- Updated the incremental strategy of the following models to `insert_overwrite` for BigQuery and Databricks All Purpose Cluster destinations and `delete+insert` for all other supported destinations:
  - `int_jira__issue_calendar_spine`
  - `int_jira__pivot_daily_field_history`
  - `jira__daily_issue_field_history`
  - At this time, models for Databricks SQL Warehouse destinations are materialized as tables without support for incremental runs.
- Removed intermediate models `int_jira__agg_multiselect_history`, `int_jira__combine_field_histories`, and `int_jira__daily_field_history` by combining them with `int_jira__pivot_daily_field_history`. This reduces the redundancy of the data stored in tables, the number of full scans, and the volume of write operations.
  - Note that if you have previously run this package, these models may still exist in your destination schema; however, they will no longer be updated.
- Updated the default materialization of `int_jira__issue_type_parents` from a table to a view. This model is called only in `int_jira__issue_users`, so a view will reduce storage requirements while not significantly hindering performance.
- For Snowflake and BigQuery destinations, added the following `cluster_by` columns to the configs for incremental models:
  - `int_jira__issue_calendar_spine`: clustering on columns `['date_day', 'issue_id']`
  - `int_jira__pivot_daily_field_history`: clustering on columns `['valid_starting_on', 'issue_id']`
  - `jira__daily_issue_field_history`: clustering on columns `['date_day', 'issue_id']`
- For Databricks All Purpose Cluster destinations, updated incremental model file formats to `parquet` for compatibility with the `insert_overwrite` strategy.
- Added a default 3-day lookback to incremental models to accommodate late-arriving records. The number of days can be changed by setting the var `lookback_window` in your `dbt_project.yml`. See the Lookback Window section of the README for more details.
- Added macro `jira_lookback` to streamline the lookback window calculation.
- Added integration testing pipeline for Databricks SQL Warehouse.
- Added macro `jira_is_databricks_sql_warehouse` for detecting whether a Databricks target is an All Purpose Cluster or a SQL Warehouse.
- Updated the maintainer pull request template.
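As a sketch, overriding the default 3-day lookback described above might look like the following in `dbt_project.yml` (the value 7 is illustrative, not a recommendation):

```yaml
# dbt_project.yml
vars:
  # Illustrative value: widen the lookback window for late-arriving records.
  # The package default in this release is 3 days.
  lookback_window: 7
```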
PR #122 contains the following updates:
- The following fields in the below-mentioned models have been converted to a string datatype (previously integer) to ensure classic Jira projects may link issues to epics. In classic Jira projects, the epic reference is in hyperlink form (i.e., "https://url-here/epic-key") as opposed to an ID. As such, a string datatype is needed to successfully link issues to epics. If you are referencing these fields downstream, be sure to make any changes to account for the new datatype.
  - `revised_parent_issue_id` field within the `int_jira__issue_type_parents` model
  - `parent_issue_id` field within the `jira__issue_enhanced` model
- Updated README to highlight requirements for using custom fields with the `issue_field_history_columns` variable.
- Included auto-releaser GitHub Actions workflow to automate future releases.
- Updated the maintainer PR template to resemble the most up-to-date format.
- Updated `field` and `issue_field_history` seed files to ensure we have an updated test case to capture the epic-link scenario for classic Jira environments.
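For illustration only, passing fields through the `issue_field_history_columns` variable might look like the following (the field names are examples drawn from elsewhere in these notes, not a recommendation):

```yaml
# dbt_project.yml
vars:
  # Example field names; replace with the fields you actually want to track.
  issue_field_history_columns: ['summary', 'Cool Custom Field']
```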
PR #108 contains the following updates:
- Updated the `jira__daily_issue_field_history` model to make sure `issue_type` values are correctly joined into the downstream issue models. This applies only if issue type is leveraged within the `issue_field_history_columns` variable.

  Note: Please be aware that a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
- Fixed the `jira__daily_issue_field_history` model to make sure `component` values are correctly joined into the downstream issue models. This applies only if components are leveraged within the `issue_field_history_columns` variable. (PR #99)

  Note: Please be aware that a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
- Updated the `int_jira__issue_calendar_spine` logic, which now references the `int_jira__field_history_scd` model as an upstream dependency. (PR #104)
- Modified the `open_until` field within the `int_jira__issue_calendar_spine` model to be dependent on the `int_jira__field_history_scd` model's `valid_starting_on` column as opposed to the `issue` table's `updated_at` field. (PR #104)
  - This is required as some resolved issues (outside of the 30-day or `jira_issue_history_buffer` variable window) were having faulty incremental loads due to untracked fields (fields not tracked via the `issue_field_history_columns` variable, or other fields not identified in the history tables such as Links, Comments, etc.). These caused the `updated_at` column to update even though no tracked fields had changed, thus producing a faulty incremental load.
- Added additional seed rows to ensure the new configuration for components properly runs for all edge cases and compares against normal issue field history fields like `summary`. (PR #104)
- Incorporated the new `fivetran_utils.drop_schemas_automation` macro into the end of each Buildkite integration test job. (PR #98)
- Updated the pull request templates. (PR #98)
PR #95 applies the following changes:
- Added the `status_id` column as a default field for the `jira__daily_issue_field_history` model. This is required to perform an accurate join for the `status` field in incremental runs.
  - Please be aware a `dbt run --full-refresh` will be required following this upgrade.
PR #93 applies the following changes:
- Adds the option to use `field_name` instead of `field_id` as the field-grain for issue field history transformations. Previously, the package would strictly partition and join issue field data using `field_id`. However, this assumed that it was impossible to have fields with the same name in Jira. For instance, it is very easy to create another `Sprint` field, and different Jira users across your organization may choose the wrong or inconsistent version of the field.
  - Thus, to treat these as the same field, set the new `jira_field_grain` variable to `'field_name'` in your `dbt_project.yml` file. You must run a full refresh to accurately fold this change in.
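A minimal sketch of the variable change described above (placement under top-level `vars` is assumed):

```yaml
# dbt_project.yml
vars:
  # Treat same-named fields as one field; a full refresh is required afterward.
  jira_field_grain: 'field_name'
```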
PR #95 applies the following changes:
- With the addition of the default `status_id` field in the `jira__daily_issue_field_history` model, there is no longer a need to do the extra partitioning to fill values for the `status` field. As such, the `status` partitions were removed in favor of `status_id`. However, in the final CTE of the model we join in the status staging model to populate the appropriate status per the accurate `status_id` for the given day.
- Reverting the changes introduced in v0.12.1, except Databricks compatibility. Please stay tuned for a future release that will integrate the v0.12.1 changes in a bug-free release. (#88)
- Fixed the `jira__daily_issue_field_history` model to make sure component values are correctly joined into our issue models (#81).
  - Please note, a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
- Databricks compatibility 🧱 (#80).
PR #74 includes the following breaking changes:
- Dispatch update for the dbt-utils to dbt-core cross-db macros migration. Specifically, `{{ dbt_utils.<macro> }}` has been updated to `{{ dbt.<macro> }}` for the below macros:
  - `any_value`, `bool_or`, `cast_bool_to_text`, `concat`, `date_trunc`, `dateadd`, `datediff`, `escape_single_quotes`, `except`, `hash`, `intersect`, `last_day`, `length`, `listagg`, `position`, `replace`, `right`, `safe_cast`, `split_part`, `string_literal`, `type_bigint`, `type_float`, `type_int`, `type_numeric`, `type_string`, `type_timestamp`, `array_append`, `array_concat`, `array_construct`
- For the `current_timestamp` and `current_timestamp_in_utc` macros, the dispatch AND the macro names have been updated to the below, respectively:
  - `dbt.current_timestamp_backcompat`
  - `dbt.current_timestamp_in_utc_backcompat`
- `dbt_utils.surrogate_key` has also been updated to `dbt_utils.generate_surrogate_key`. Since the methods for creating surrogate keys differ, we suggest all users do a `--full-refresh` for the most accurate data. For more information, please refer to the dbt-utils release notes for this update.
- Dependencies on `fivetran/fivetran_utils` have been upgraded, previously `[">=0.3.0", "<0.4.0"]`, now `[">=0.4.0", "<0.5.0"]`.
- Incremental strategies for all incremental models in this package have been adjusted to use `delete+insert` if the warehouse being used is Snowflake, Postgres, or Redshift.
- While this is a patch update, it may also require a full refresh. Please run `dbt run --full-refresh` after upgrading to ensure you have the latest incremental logic.
- Updated logic for model `int_jira__issue_sprint` to further adjust how the current sprint is determined. It now uses a combination of the newest `updated_at` date for the issue and the newest `started_at` date of the sprint. This is to account for times when Jira updates two sprint records at the same time. (#77 and #78)
- For model `jira__issue_enhanced`, updated column names `sprint_id` and `sprint_name` to `current_sprint_id` and `current_sprint_name`, respectively, to confirm the record is for the current sprint. (#76)
- Updated logic for model `int_jira__issue_sprint` to adjust how the current sprint is determined. It now uses the newest `started_at` date of the sprint instead of the `updated_at` date. (#76)
- The default schema for the source tables is now built within a schema titled (`<target_schema>` + `_jira_source`) in your destination. The previous default schema for source was (`<target_schema>` + `_stg_jira`). This may be overwritten if desired. (#63)
- Flipped column aliases `sum_close_time_seconds` and `sum_current_open_seconds` of intermediate model `int_jira__user_metrics.sql`. (#66)
  - This ensures that downstream model `jira__user_enhanced.sql` calculates columns `avg_age_currently_open_seconds` and `avg_close_time_seconds` correctly. (#66)
- Updated README documentation for easier navigation and setup of the dbt package. (#63)
- Added `jira_[source_table_name]_identifier` variables to allow for easier flexibility of the package to refer to source tables with different names. (#63)
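As an illustration of the `jira_[source_table_name]_identifier` pattern above, pointing the package at a differently named issue table might look like this (the variable and table names below are assumptions derived from that pattern, not a confirmed package API):

```yaml
# dbt_project.yml
vars:
  # Hypothetical: follows the jira_[source_table_name]_identifier naming pattern.
  jira_issue_identifier: "my_custom_issue_table"
```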
- Corrected bug introduced in 0.8.0 that would prevent the correct `status` data from being passed to model `jira__daily_issue_field_history`. (#63)
  - Please note, a `dbt run --full-refresh` will be required after upgrading to this version in order to capture the updates.
- Corrected bug introduced in 0.8.0 that would prevent `sprint` data from being passed to model `jira__daily_issue_field_history`. (#62)
- Makes priority data optional. Allows new variable `jira_using_priorities`. Models `jira__issue_enhanced` and `int_jira__issue_join` won't require source `jira.priority` or contain priority-related columns if `jira_using_priorities: false`. (#55)
- @everettttt (#55)
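A sketch of disabling priority data as described above (top-level `vars` placement assumed):

```yaml
# dbt_project.yml
vars:
  # Drops priority-related columns and the dependency on the jira.priority source.
  jira_using_priorities: false
```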
- Previously, the `jira__daily_field_history` and `jira__issue_enhanced` models allowed users to leverage the `issue_field_history_columns` variable to bring through custom `field_id`s. However, the `field_id` was not very intuitive to report off of. Therefore, the package has been updated to bring through the `field_name` values in the variable and persist them through to the final models. (#54)
  - Please note, if you leveraged this variable in the past, you will want to update the `field_id` (customfield_000123) to be the `field_name` (Cool Custom Field) now. Further, a `dbt run --full-refresh` will be required as well.
- Multi-select fields that are populated within the `jira__daily_issue_field_history` and `jira__issue_enhanced` models are automatically joined with `stg_jira__field_option` to ensure the field names are populated. (#54)
🎉 dbt v1.0.0 Compatibility 🎉
- Adjusts the `require-dbt-version` to now be within the range [">=1.0.0", "<2.0.0"]. Additionally, the package has been updated for dbt v1.0.0 compatibility. If you are using a dbt version <1.0.0, you will need to upgrade in order to leverage the latest version of the package.
  - For help upgrading your package, we recommend reviewing this GitHub repo's Release Notes on what changes have been implemented since your last upgrade.
  - For help upgrading your dbt project to dbt v1.0.0, we recommend reviewing dbt-labs' Upgrading to 1.0.0 docs for more details on what changes must be made.
- Upgrades the package dependency to refer to the latest `dbt_jira_source`. Additionally, the latest `dbt_jira_source` package has a dependency on the latest `dbt_fivetran_utils`. Further, the latest `dbt_fivetran_utils` package also has a dependency on `dbt_utils` [">=0.8.0", "<0.9.0"].
  - Please note, if you are installing a version of `dbt_utils` in your `packages.yml` that is not in the range above, then you will encounter a package dependency error.
- This release of the `dbt_jira` package implements changes to the incremental logic within various models, highlighted in the Bug Fixes section below. As such, a `dbt run --full-refresh` will be required after upgrading the dependency for this package in your `packages.yml`.
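To avoid the dependency error mentioned above, a `packages.yml` that pins `dbt_utils` inside the required range might look like the following sketch (the `fivetran/jira` version is a placeholder, not a real pin):

```yaml
# packages.yml
packages:
  - package: fivetran/jira
    version: [">=X.X.X", "<X.X.X"]  # placeholder: pin to the release you are upgrading to
  - package: dbt-labs/dbt_utils
    version: [">=0.8.0", "<0.9.0"]  # range required by this release's dependency chain
```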
- Corrected CTE references within `int_jira__issue_assignee_resolution`. The final CTE referenced was previously selecting from `issue_field_history` when it should have been selecting from `filtered`. (#45)
- Modified the incremental logic within `int_jira__agg_multiselect_history` to properly capture the latest record. Previously, this logic only worked for updates made outside of a 24-hour window. This logic update will now capture any changes since the previous dbt run. (#48)
- Modified the `int_jira__issue_calendar_spine` model to use `dbt_utils.current_timestamp_in_utc` to better capture the current datetime across regions. (#47)
- @thibonacci (#45)
Refer to the relevant release notes on the GitHub repository for specific details on previous releases. Thank you!