Using the Terraform Databricks provider within Azure Databricks Workspace

We have moved our infrastructure to the Azure cloud, as mentioned in previous posts. With this move, we also started the practice of infrastructure as code, and our preferred method of deploying our infrastructure is Terraform. Recently I updated our Terraform code to use the Terraform Databricks provider to automate the creation of the clusters as well, not just the Azure Databricks workspace. This post covers some of my lessons learned while deploying our Databricks solution with Terraform.

Configuring the Databricks provider

When running the apply action of our Terraform code, the first problem was that Terraform tried to configure the Databricks provider and start deploying the cluster resources before the Azure Databricks workspace was fully operational, even with implicit and explicit dependencies on the azurerm_databricks_workspace resource.

My way of resolving this premature configuration of the Databricks provider was to configure it from the data.azurerm_databricks_workspace data object. Unfortunately, I still had to set an explicit dependency, using the "depends_on" clause, on the data.azurerm_databricks_workspace object for all of the Databricks provider resource objects.
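As an illustration, here is a minimal sketch of that wiring. It assumes the workspace itself is created elsewhere in the same configuration as azurerm_databricks_workspace.this (the object names are hypothetical) and that authentication comes from the Azure CLI or a managed identity:

    # Read the workspace back via a data object; referencing the resource's
    # attributes makes the data object wait for the workspace to exist.
    data "azurerm_databricks_workspace" "this" {
      name                = azurerm_databricks_workspace.this.name
      resource_group_name = azurerm_databricks_workspace.this.resource_group_name
    }

    # Configure the Databricks provider from the data object.
    provider "databricks" {
      host                        = "https://${data.azurerm_databricks_workspace.this.workspace_url}"
      azure_workspace_resource_id = data.azurerm_databricks_workspace.this.id
    }

    resource "databricks_cluster" "this" {
      # ... cluster attributes ...

      # Explicit dependency on the data object, not just the workspace resource.
      depends_on = [data.azurerm_databricks_workspace.this]
    }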

Azure Databricks workspace recreation

While developing our Databricks solution, we had to make some changes to the Azure Databricks workspace. These changes sometimes caused Terraform to destroy the old workspace and recreate a new one; when this happened, the apply action would fail due to the Databricks provider consistency check. The only way I found to resolve this problem was to manage the Terraform state file and remove all the data/resource objects related to the Databricks provider. You can do this by using the following command: terraform state rm <Databricks object> <Databricks object>
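For example, assuming the cluster resource and the Spark version data object are named databricks_cluster.this and data.databricks_spark_version.lts (hypothetical names), the command would look something like this:

    terraform state rm databricks_cluster.this data.databricks_spark_version.lts

After the Databricks objects are removed from the state, the workspace can be recreated and the Databricks resources deployed again on the next apply.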

Cluster Worker and Driver Node Attributes

When defining your cluster, I would recommend that you explicitly set the driver node type. Relying on the implied driver node type caused us a problem when we updated our node types via Terraform: the worker node type was updated, but the driver node type remained unchanged until I explicitly set the driver node type.
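A sketch of what that looks like in the cluster resource; the cluster name, node type, and autoscale settings are made-up values, and the Spark version comes from the data object covered in the next section:

    resource "databricks_cluster" "this" {
      cluster_name  = "example-cluster"
      spark_version = data.databricks_spark_version.lts.id

      # Set the driver node type explicitly instead of relying on the implied default.
      node_type_id        = "Standard_DS3_v2"
      driver_node_type_id = "Standard_DS3_v2"

      autotermination_minutes = 30

      autoscale {
        min_workers = 1
        max_workers = 4
      }
    }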

Managing the cluster attributes

The Databricks provider has two data objects that are useful when configuring your cluster:

The databricks_spark_version data object gets the Spark version string needed for the spark_version attribute of the databricks_cluster resource. The default is the latest version supported by your Databricks environment, but the data object has search attributes to help ensure you get the version you want.

Our requirement was to use the latest long term support version, which we achieved by setting the long_term_support attribute when creating our data databricks_spark_version object. Using this data object was helpful because, while we were developing our Databricks project, the latest long term support version within Azure was updated. All we had to do to ensure our cluster was using the new Spark version was re-run our terraform apply action, with no code change.
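A minimal sketch of that pattern (the data object name "lts" is hypothetical):

    data "databricks_spark_version" "lts" {
      long_term_support = true
    }

    resource "databricks_cluster" "this" {
      # ...
      spark_version = data.databricks_spark_version.lts.id
      # ...
    }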

The databricks_node_type data object gets the string for the node_type_id / driver_node_type_id attributes based on the search attributes. I didn't use this data object in my code, but I want to highlight its existence, as using it along with databricks_spark_version in your cluster resource configuration could make your code reusable if you needed to move from Azure to AWS or vice versa, provided you don't use any cloud-provider-specific search values.
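A sketch of how the two data objects could be combined for a cloud-agnostic cluster definition; the object names and search values are illustrative only:

    # Pick the smallest node type that has local disk and at least 4 cores,
    # without naming a cloud-specific VM size.
    data "databricks_node_type" "smallest" {
      local_disk = true
      min_cores  = 4
    }

    data "databricks_spark_version" "lts" {
      long_term_support = true
    }

    resource "databricks_cluster" "shared" {
      cluster_name        = "example-cluster"
      spark_version       = data.databricks_spark_version.lts.id
      node_type_id        = data.databricks_node_type.smallest.id
      driver_node_type_id = data.databricks_node_type.smallest.id

      autotermination_minutes = 30
      num_workers             = 2
    }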
