tag:blogger.com,1999:blog-74762472024-03-23T11:37:28.233+00:00Steven Wright BlogA collection of notes, lessons learnt and findings around SQL Server, Data integration and Dev-OpsSteve Wrighthttp://www.blogger.com/profile/15679255693378492813noreply@blogger.comBlogger80125tag:blogger.com,1999:blog-7476247.post-50695538670314719022022-01-03T14:35:00.001+00:002022-01-03T14:35:35.532+00:00Azure Data Factory Pre and Post Deployment for Octopus Deploy<p>This post covers how I solved a deployment and maintenance problem with Azure Data Factory (ADF). The problem was ensuring that the ADF instance only contains the active datasets, pipelines, dataflows and triggers. As mentioned in a previous post, my current company has tight security controls, which prevent us from linking our ADF instance to our git repository; our deployment method instead uses Octopus Deploy with a mixture of ARM templates and Terraform.</p><p>I use Terraform to create the ADF instance and then an ARM template for the ADF objects, such as datasets, pipelines and dataflows. This causes the maintenance problem, because the ARM template deployment process does not delete objects that no longer exist in the ARM template.</p><p>We discovered this after a recent significant change to our ADF code left many unused objects in our ADF instance. I didn’t want to leave the unused objects for the following reasons:</p><p></p><ul style="text-align: left;"><li>It would make it hard for other developers to tell which ADF objects our project uses and which it does not.</li><li>Time would be wasted updating unused ADF objects whenever the underlying data structures changed.</li><li>Eventually everyone would be too scared to remove unused ADF objects, because no one can remember what is used and no one wants to break the process.</li></ul><p></p><p>I was also not fond of the idea of deleting unused objects manually, as it would take a while and I could remove the wrong object by mistake. I looked for a built-in automated method to remove unused objects; I didn’t find one, but I came across this link: Sample pre- & post-deployment script (https://docs.microsoft.com/en-us/azure/data-factory/continuous-integration-delivery-sample-script).</p><p>The sample script was useful because within Octopus you can create <a href="https://octopus.com/docs/deployments/custom-scripts/script-modules" target="_blank">PowerShell Script Modules</a> and use them in your deployment projects for Pre-Deployment, Post-Deployment and Custom Deployment scripts. I made some changes to the sample script so that it works with our ARM template and converted it into a PowerShell module with 2 functions:</p><p></p><ul style="text-align: left;"><li>Invoke-ADFPreDeploymentStep</li><li>Invoke-ADFPostDeploymentStep</li></ul><p></p><p>Invoke-ADFPreDeploymentStep helped resolve another problem with deploying changes to our ADF instance: disabling any triggers linked to pipelines in the ARM template. Invoke-ADFPostDeploymentStep is where all the deleting of the unused objects happens; I also added an extra flag to control the starting of triggers, so that triggers can be configured to run only in our production environment.</p>
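<p>To give an idea of how this fits into an Octopus project, here is a minimal sketch of how the two functions might be called from pre- and post-deployment script steps. The parameter names on the functions below are assumptions for illustration only; check DeployDataFactory.ps1 (linked below) for the actual function signatures.</p><div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"># Pre-deployment step: stop the triggers referenced in the ARM template.
# $OctopusParameters is the variable dictionary Octopus makes available to scripts;
# the parameter names on the functions are hypothetical.
Invoke-ADFPreDeploymentStep `
    -ResourceGroupName $OctopusParameters["Azure.ResourceGroupName"] `
    -DataFactoryName   $OctopusParameters["Azure.DataFactoryName"] `
    -ArmTemplatePath   ".\ArmTemplate_master.json"

# ... the ARM template deployment of the ADF objects runs between the two steps ...

# Post-deployment step: delete objects missing from the template and, optionally, start the triggers.
Invoke-ADFPostDeploymentStep `
    -ResourceGroupName $OctopusParameters["Azure.ResourceGroupName"] `
    -DataFactoryName   $OctopusParameters["Azure.DataFactoryName"] `
    -ArmTemplatePath   ".\ArmTemplate_master.json" `
    -StartTriggers     ($OctopusParameters["Octopus.Environment.Name"] -eq "Production")
</pre></div>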
<p>A copy of this script can be downloaded from this location: <a href="https://github.com/zogamorph/BlogPostCode/blob/main/PowerShell/ADF-Pre-and-Post-Deployment-Script/DeployDataFactory.ps1" target="_blank">DeployDataFactory.ps1</a>.</p><p>There is a test script that you can use to see if the script works with your ARM template, available from this location: <a href="https://github.com/zogamorph/BlogPostCode/blob/main/PowerShell/ADF-Pre-and-Post-Deployment-Script/Testing-DataFactory.ps1">Testing-DataFactory.ps1</a></p>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com3tag:blogger.com,1999:blog-7476247.post-34913473415224516642021-12-30T10:15:00.010+00:002022-01-03T10:38:20.043+00:00Using the Terraform Databricks provider within Azure Databricks Workspace<p>We have moved our infrastructure to the Azure cloud, as mentioned in previous <a href="https://zogamorph.blogspot.com/search/label/Azure" target="_blank">posts</a>. With this move, we also started the practice of infrastructure as code. Our preferred method of deploying our infrastructure is Terraform. Recently I updated our Terraform code to use the Terraform Databricks provider to automate the creation of the clusters, not just the Azure Databricks Workspace. This post covers some of my lessons learned while deploying our Databricks solution with Terraform.</p><h4 style="text-align: left;">Configuring the Databricks provider</h4><p>When running the apply action of our Terraform code, the first problem was that Terraform tried to configure the Databricks provider and start the deployment of the cluster resource before the Azure Databricks Workspace was fully operational, even with implied and explicit dependencies on the azurerm_databricks_workspace resource.</p><p>My method of resolving this premature configuration of the Databricks provider was to use the data.azurerm_databricks_workspace object. Unfortunately, I still had to explicitly set a dependency, using the “depends_on” clause, on the data.azurerm_databricks_workspace object for all the Databricks provider resource objects.</p><h4 style="text-align: left;">Azure Databricks workspace recreation</h4><p>While developing our Databricks solution, we had to make some changes to the Azure Databricks Workspace. These changes sometimes caused Terraform to destroy the old workspace and recreate a new one; when this happened, the action would fail due to the Databricks provider consistency check. The only way to resolve this problem is to manage the Terraform state file and remove all the data/resource objects related to the Databricks provider. You can do this by using the following command: terraform state rm <Databricks object> <Databricks object></p><h4 style="text-align: left;">Cluster Worker and Driver Node Attributes </h4><p>When defining your cluster, I would recommend that you explicitly set the driver node type. Using the implied driver node type caused us a problem when we updated our node types via Terraform: the worker node type got updated, but the driver node type remained unchanged until I explicitly set it. 
</p><h4 style="text-align: left;">Managing the cluster attributes</h4><p>The Databricks provider has 2 data objects that are useful when configuring your cluster: </p><p></p><ul style="text-align: left;"><li><a href="https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/data-sources/spark_version" rel="nofollow" target="_blank">databricks_spark_version</a></li><li><a href="https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/data-sources/node_type" rel="nofollow" target="_blank">databricks_node_type</a></li></ul><p></p><p>The databricks_spark_version data object gets the Spark version string needed for the spark_version attribute of the databricks_cluster resource. The default is the latest version supported by your Databricks environment, but there are search attributes on the data object to help ensure you get the version you wish.</p><p>Our requirement was to use the latest long term support version, which we achieved by setting the long_term_support attribute when creating our databricks_spark_version data object. Using this data object was helpful because, while we were developing our Databricks project, the latest long term supported version within Azure got updated. All we had to do to ensure that our cluster was using this Spark version was re-run our terraform apply action, with no code change.</p><p>The databricks_node_type data object will get the string for the node_type_id / driver_node_type_id based on the search attributes. I didn’t use this data object in my code, but I want to highlight its existence, as using it and databricks_spark_version in your cluster resource configuration could make your code re-usable if you needed to move from Azure to AWS or vice versa, provided you didn’t use any cloud provider specific search values.</p>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com1tag:blogger.com,1999:blog-7476247.post-82059032176634797202021-10-09T16:08:00.001+02:002021-10-09T16:08:26.654+02:00Automating Databricks deployments via Octopus DeployIf you are using Octopus Deploy to automate the deployment of your solution(s) and part of your solution uses Databricks, then this post might interest you.<div><br /></div><div>We are using Databricks within my current project, and due to tight security controls, we cannot link our workspace to our git repository.</div><div><br /></div><div>We were manually importing our Databricks workbooks as our method of deployment, until we automated the process using Octopus Deploy and the newly created community step template: Import Databricks Workbooks.</div><div><br /></div><div>Currently, the community step template only imports workbooks with the ‘.ipynb’ or ‘.scala’ file extension and only communicates with the Databricks instance using HTTPS. The task uses the Databricks REST API v2.0 to update the Databricks instance, so there is no requirement to install any extra supporting software.</div><div><br /></div><div>The template only requires the following parameters:<br /><ul style="text-align: left;"><li>DataBricks Workbook Package: The package of the Databricks workbooks for deployment.</li><li>Databricks Instance Uri: The hostname of the Databricks workspace.</li><li>Databricks Access Token: Any supported access token to allow the requests through to the API (personal access tokens, Azure AD token).</li><li>Databricks Workbook Import Folder: The workspace folder path where the workbooks will be imported to. 
The template creates the folder path in the workspace if it doesn’t exist.</li></ul><div style="text-align: center;"><a href="https://www.blogger.com/#"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwivEE4t5pCnhYhdA7pkMvIKHT6qM09IXrbPWkvwXfhbzzX2PU_9m6JSFa2xja3OWtNReFYUqxsS95cmaUin7tYn0WuS93NevDfknG_pPWRcrTJJHAktyzXf1LKRPpXsRws1Ig/w400-h396/ImportDataBricksWorkbooksStep.png" /></a></div><br /></div>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-69835966954678913142021-01-31T15:09:00.000+00:002021-01-31T15:09:30.342+00:00Challenges of moving SQL Server data to Microsoft Azure SQL Server PaaS<p>This post covers one of the challenges we faced when moving our infrastructure from on-prem to Microsoft Azure: moving
SQL Server data from on-premises to Microsoft Azure SQL Server PaaS.</p>
<p>Moving our data was a challenge for the following reasons:</p>
<ul>
<li>Our company has tight controls and rules about what software/connectivity is allowed on our network</li>
<li>The database that we needed to migrate was over 100GB in size after compression.</li>
<li>We had no information about how long the migration steps would take, and we needed to ensure our migration was complete before the Monday at the end of our migration weekend.</li>
</ul>
<p>For most people migrating their data, the best option is to use <a href="https://docs.microsoft.com/en-us/azure/dms/" rel="nofollow" target="_blank">Azure Database Migration Service</a>, especially when transferring a database of the size we had. However, we couldn't use the Azure Database Migration Service for the following reasons:</p>
<p></p>
<ul style="text-align: left;">
<li>Azure Database Migration Service requires software installed on a server within our network</li>
<li>Connections to both databases were needed.</li>
</ul>
<div>These requirements were not acceptable to our company's security teams.</div>
<p></p>
<p>Another approach Azure offers for transferring data is <a href="https://docs.microsoft.com/en-us/azure/databox/data-box-overview#:~:text=The%20Microsoft%20Azure%20Data%20Box,%2C%20inexpensive%2C%20and%20reliable%20way.&text=Depending%20on%20whether%20you%20will,the%20device%20back%20to%20Azure." rel="nofollow" target="_blank">Azure Data Boxes</a>. There are various Azure Data Box options available, including the<a href="https://docs.microsoft.com/en-us/azure/databox/data-box-disk-overview" rel="nofollow" target="_blank"> Azure Data Box Disk</a>. There were a few reasons we had to discount this approach: </p>
<ul>
<li>The lead time for Azure Data Box disk exceeded our migration window.</li>
<li>There are extra costs for using this approach.</li>
<li>Our use case didn't precisely fit with the use cases for using Azure Data Boxes.</li>
</ul>
<p>Azure's final method for data transfer, which we used, is <a href="https://docs.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15" rel="nofollow" target="_blank">SQL Package </a>(<a href="https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/data-tier-applications?view=sql-server-ver15" rel="nofollow" target="_blank">DacFx</a>) to create a BACPAC for exporting/importing; a sketch of the export and import commands is shown after the list below. Our risks with SQL Package were as follows: </p>
<ul>
<li>Can it handle a database of over 100 GB in size (our largest table has 400 million rows of data)?</li>
<li>How long will it take to package, transfer and import?</li>
</ul>
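<p>For reference, here is a minimal sketch of the kind of SqlPackage export and import commands involved. The server names, credentials, container URL and file paths are placeholders, and the exact options used in our migration may have differed; it assumes SqlPackage.exe and AzCopy are already installed and on the path.</p><div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"># Export the on-premises database to a BACPAC file
& SqlPackage.exe /Action:Export /SourceServerName:"OnPremSqlServer" /SourceDatabaseName:"MyDatabase" /TargetFile:"D:\Export\MyDatabase.bacpac"

# Copy the BACPAC up to an Azure storage container (SAS token elided)
& azcopy copy "D:\Export\MyDatabase.bacpac" "https://mystorageaccount.blob.core.windows.net/migration?{SasToken}"

# Import the BACPAC into the Azure SQL Server PaaS database
& SqlPackage.exe /Action:Import /SourceFile:"D:\Import\MyDatabase.bacpac" /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDatabase" /TargetUser:"sqladmin" /TargetPassword:"{Password}"
</pre></div>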
<p>We found that SQL Package could handle our database size once we resolved our packaging errors: </p>
<ul>
<li>Cannot access a closed stream</li>
<li>Not enough disk space to complete the operation, even though the final location of the BACPAC file had enough free disk space.</li>
</ul>
<p>To resolve "not enough disk space error" we found that we needed to move the windows temp folder (TMP and TEMP environment variables) to a disk with as much free space as the database's size. </p><p>For the error: "cannot access a closed stream" we notice the location set by environment variable: LOCALAPPDATA needed to be on a disk with an excess of 80GB of free space.</p><p>We were able to complete the process of packaging, copying and importing into Azure within our required timeframe. Here is our breakdown of each stage of the process: </p>
<ul>
<li>Exporting to a BACPAC completed in about 9 hours.</li>
<li>Copying from our on-premises server to Azure took just over an hour using AZCopy.</li>
<li>Importing into our SQL Server PaaS database took about 9 hours to complete.</li></ul>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com1tag:blogger.com,1999:blog-7476247.post-76886518571258178652021-01-02T15:31:00.000+00:002021-01-02T15:31:23.374+00:00Azure SQL Server IaaS SSIS: A required privilege is not held by the client<p>As mentioned in a previous post, we are using Azure SQL Server IaaS to help our transition to the cloud. The main reason for this is the way some of our SSIS projects, which we inherited, were created:</p><ul><li>Some were using SSIS to manage the transactions, which uses MSDTC, and that is not supported in Azure.</li><li>Some use extra .NET libraries to read JSON and Office files, and external programs to do file operations like decompression.</li><li>Given the migration timelines and priorities, we didn't have time to refactor these projects.</li></ul><p>A while ago we started to deploy our SSIS projects to our Azure SQL Server IaaS virtual machine, a Windows Server 2019 machine with SQL Server 2019, only to get the following error: SSIS Deployment Error 6522 "A required privilege is not held by the client".</p><p>We were able to resolve our problem by using the steps we found in the following blog posts: </p><ul><li><a href="https://docs.microsoft.com/en-us/archive/blogs/dataaccesstechnologies/system-componentmodel-win32exception-a-required-privilege-is-not-held-by-the-client-while-deploying-ssis-project">System.ComponentModel.Win32Exception: A required privilege is not held by the client while Deploying SSIS Project | Microsoft Docs</a>.</li><li><a href="https://www.oraylis.de/blog/fixing-sql-2012-ssis-deployment-error-6522-a-required-privilege-is-not-held-by-the-client">Fixing SQL 2012 SSIS Deployment Error 6522 "A required privilege is not held by the client" | ORAYLIS</a>.</li></ul><p>The steps were to:</p><ol><li>Run Dcomcnfg.exe. Dcomcnfg.exe provides a user interface for modifying certain settings in the registry.</li><li>In the Component Services dialog, expand the Component Services > Computers > My Computer > DCOM Config node.</li><li>Right-click Microsoft SQL Server Integration Services 15.0, and then click Properties.</li><li>On the Security tab, click Edit in the Launch and Activation Permissions area.</li><li>Add the users NT SERVICE\MsDtsServer150, NT SERVICE\SQLSERVERAGENT and NT SERVICE\MSSQLSERVER, assign appropriate permissions, and then click OK.<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGXol0veAFkzQBBOEMC7gE2lDgq-PJarIJUmjsIIOBNAEK7HreJdJWx58tqJXwWH0J1TUEJn8GxkILeMPSSxgRIo4fbOXxNuzNKUWbWw2Zo7ISRsfGtilIiPxMImouNUm4dDSq/s1402/InkedDCom_LI.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Component Services Changes" border="0" data-original-height="854" data-original-width="1402" height="195" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGXol0veAFkzQBBOEMC7gE2lDgq-PJarIJUmjsIIOBNAEK7HreJdJWx58tqJXwWH0J1TUEJn8GxkILeMPSSxgRIo4fbOXxNuzNKUWbWw2Zo7ISRsfGtilIiPxMImouNUm4dDSq/w320-h195/InkedDCom_LI.jpg" title="Component Services Changes" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Component Services Changes</td></tr></tbody></table><br /></li><li>Run secpol.msc and within the dialog expand Local Security Policy > User Rights Assignment. 
</li><li>Double-click Replace a process level token, add NT SERVICE\SQLSERVERAGENT and NT SERVICE\MSSQLSERVER, and click OK.<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEbezbWO7Ydly74erazjxUiOtrQi_u4MM-nKLrYURW3Ru5kKvi-V-yO_K6KNgxp06F3lOHU57VyPhZ1pMa7Ul5lPNSqwGI5lqOtEpey_DLYp0XhlnMxonj_Z0dg0PE3UCDoCvm/s1522/InkedSecpolmsc_LI.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Local Security Policy Changes" border="0" data-original-height="830" data-original-width="1522" height="175" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEbezbWO7Ydly74erazjxUiOtrQi_u4MM-nKLrYURW3Ru5kKvi-V-yO_K6KNgxp06F3lOHU57VyPhZ1pMa7Ul5lPNSqwGI5lqOtEpey_DLYp0XhlnMxonj_Z0dg0PE3UCDoCvm/w320-h175/InkedSecpolmsc_LI.jpg" title="Local Security Policy Changes" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Local Security Policy Changes</td></tr></tbody></table><br /></li><li>Restart the Integration Services service.</li></ol>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-43513065144653099042020-12-30T16:00:00.003+00:002020-12-31T09:45:18.674+00:00Azure SQL Server IaaS: A quick approach to resolving collation conflict.<p>In this post I would like to highlight an approach that you can take to resolve the following error, which you might get after restoring your on-prem SQL Server database(s) to Azure SQL Server IaaS: "Cannot resolve the collation conflict between".</p>
<p>We were getting the error because the Azure IaaS SQL Server default collation is SQL_Latin1_General_CP1_CI_AS, while our on-prem default is Latin1_General_CP1_CI_AS, and our SQL code wasn’t setting an explicit collation for joins on character columns between temp tables and physical database tables.</p>
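<p>A quick way to confirm what the server and database collations actually are, assuming the SqlServer PowerShell module is installed and you can connect to the instance (the instance name below is a placeholder):</p><div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"># Check the server default collation and the collation of each database
Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation;"
Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT name, collation_name FROM sys.databases;"
</pre></div>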
<p>For a UK SQL Server install, the default server collation is Latin1_General_CP1_CI_AS when the Windows host language and location are set to English (United Kingdom).</p>
<p>We also didn’t have time to go through all our SQL code to find where we needed to add an explicit collation. The decision was to change the SQL Server default collation to Latin1_General_CP1_CI_AS.</p><p>Setting the SQL Server default collation can be done using this command on the Azure VM: <!--HTML generated using hilite.me--></p>
<div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;">.\ C<span style="border: 1px solid rgb(255, 0, 0);">:</span>\SQLServerFull\Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLServer /SQLSYSADMINACCOUNTS= <span style="color: #a31515;">"$env:COMPUTERNAME\<Windows Admin account>"</span> /SAPWD=<span style="color: #a31515;">"<Password>"</span> /SQLCOLLATION=Latin1_General_CI_AS (or the collcation that suits your needs)
</pre></div>
<p>This works because the Azure SQL Server IaaS template leaves the full setup files on the server under C:\SQLServerFull.</p>
<p>After the above command completes, there are extra server principals required for Azure SQL Server IaaS to integrate into Azure. The following SQL script creates the extra required server principals and the permissions needed: </p>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: blue;">CREATE</span> LOGIN [NT Service<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLIaaSExtensionQuery] <span style="color: blue;">FROM</span> WINDOWS <span style="color: blue;">WITH</span> DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
<span style="color: blue;">GO</span>
<span style="color: blue;">ALTER</span> SERVER <span style="color: blue;">ROLE</span> [sysadmin] <span style="color: blue;">ADD</span> MEMBER [NT Service<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLIaaSExtensionQuery]
<span style="color: blue;">GO</span>
<span style="color: blue;">CREATE</span> LOGIN [NT SERVICE<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLTELEMETRY] <span style="color: blue;">FROM</span> WINDOWS <span style="color: blue;">WITH</span> DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
<span style="color: blue;">GO</span>
<span style="color: blue;">GRANT</span> <span style="color: blue;">ALTER</span> <span style="color: blue;">ANY</span> EVENT <span style="color: blue;">SESSION</span> <span style="color: blue;">TO</span> [NT SERVICE<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLTELEMETRY] <span style="color: blue;">AS</span> [sa]
<span style="color: blue;">GRANT</span> <span style="color: blue;">CONNECT</span> <span style="color: blue;">ANY</span> <span style="color: blue;">DATABASE</span> <span style="color: blue;">TO</span> [NT SERVICE<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLTELEMETRY] <span style="color: blue;">AS</span> [sa]
<span style="color: blue;">GRANT</span> <span style="color: blue;">CONNECT</span> <span style="color: blue;">SQL</span> <span style="color: blue;">TO</span> [NT SERVICE<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLTELEMETRY] <span style="color: blue;">AS</span> [sa]
<span style="color: blue;">GRANT</span> <span style="color: blue;">VIEW</span> <span style="color: blue;">ANY</span> DEFINITION <span style="color: blue;">TO</span> [NT SERVICE<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLTELEMETRY] <span style="color: blue;">AS</span> [sa]
<span style="color: blue;">GRANT</span> <span style="color: blue;">VIEW</span> SERVER <span style="color: blue;">STATE</span> <span style="color: blue;">TO</span> [NT SERVICE<span style="border: 1px solid rgb(255, 0, 0);">\</span>SQLTELEMETRY] <span style="color: blue;">AS</span> [sa]
</pre></div>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-81382261214326618812020-10-12T22:58:00.003+02:002020-12-31T09:44:21.092+00:00Mapping an Azure file share for Azure SQL Server IaaS<p>As mentioned in a previous post: <a href="https://zogamorph.blogspot.com/2020/10/a-gotcha-when-using-azure-sql-server.html">A Gotcha when using the Azure SQL Server IaaS.</a> I am working on moving our company product to the Azure Cloud and having to use Azure SQL Server IaaS.</p><p>In this post, I will share how I used the Azure VM Custom Script Extensions to map a drive to an Azure file share that is accessible to the SQL Server services.</p><p>I couldn’t use the Azure portal script: </p><!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><table><tbody><tr><td><pre style="line-height: 125%; margin: 0px;">1
2
3
4
5
6
7
8
9</pre></td><td><pre style="line-height: 125%; margin: 0px;">$connectTestResult = Test-NetConnection -ComputerName mystorageaccount.file.core.windows.net -Port 445
<span style="color: blue;">if</span> ($connectTestResult.TcpTestSucceeded) {
<span style="color: green;"># Save the password so the drive will persist on reboot</span>
cmd.exe /C <span style="color: #a31515;">"cmdkey /add:`"mystorageaccount.file.core.windows.net`" /user:`"Azure\mystorageaccount`" /pass:`"mystorageaccount.file.core.windows.net key `""</span>
<span style="color: green;"># Mount the drive</span>
New-PSDrive -Name Z -PSProvider FileSystem -Root <span style="color: #a31515;">"\\mystorageaccount.file.core.windows.net\myfileshare"</span> -Persist
} <span style="color: blue;">else</span> {
Write-Error -Message <span style="color: #a31515;">"Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."</span>
}
</pre></td></tr></tbody></table></div>
<p>The reason is that the storage account key is stored in the Windows Credential Manager (cmdkey) of the executing user. I need this share to be accessible by the SQL Server services. While searching for a different method to mount a share, I came across this command: New-SmbGlobalMapping. This cmdlet can create an SMB mapping to a drive, with the option to persist, and grant access to a list of users.</p><p>As the script should be idempotent, I needed to find a way to test whether the mapping already exists, and found the matching Get-SmbGlobalMapping and Remove-SmbGlobalMapping. I was able to extend the portal script to use the SmbGlobalMapping cmdlets. Here is an example of my custom script:</p><!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><table><tbody><tr><td><pre style="line-height: 125%; margin: 0px;"> 1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32</pre></td><td><pre style="line-height: 125%; margin: 0px;"><span style="color: green;">#Setting update Azure File Share.</span>
<span style="color: blue;">param</span>
(
[string] $PassKey
)
$AzureUserName = <span style="color: #a31515;">"Azure\mystorageaccount"</span>
$StorageAccountHostName = <span style="color: #a31515;">"mystorageaccount.file.core.windows.net"</span> <span style="color: green;"># the full file endpoint host name is needed for the share path and connection test</span>
$FileshareName = <span style="color: #a31515;">"myfileshare"</span>
$FileShareURI = Join-Path -Path (<span style="color: #a31515;">"\\{0}"</span> -f $StorageAccountHostName) -ChildPath $FileshareName
$password = ConvertTo-SecureString -String $PassKey -AsPlainText -Force
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $AzureUserName, $password
<span style="color: green;">#Test that a connection can be made</span>
$connectTestResult = Test-NetConnection -ComputerName $StorageAccountHostName -Port 445
<span style="color: blue;">if</span> ($connectTestResult.TcpTestSucceeded) {
<span style="color: green;">#See if a SMBGlobalMapping has already made </span>
$Mapping = Get-SmbGlobalMapping -RemotePath $FileShareURI
<span style="color: green;">#If there a mapping then remove it</span>
<span style="color: blue;">if</span>($null -ne $Mapping )
{
Remove-SmbGlobalMapping -RemotePath $FileShareURI -Force
}
<span style="color: green;">#Create a persistent SMB mapping for the all the services </span>
New-SmbGlobalMapping -RemotePath $FileShareURI -Credential $cred -LocalPath Z<span style="border: 1px solid rgb(255, 0, 0);">:</span> -FullAccess <span style="border: 1px solid rgb(255, 0, 0);">@</span>( <span style="color: #a31515;">"NT AUTHORITY\SYSTEM"</span>, <span style="color: #a31515;">"NT AUTHORITY\NetworkService"</span>, <span style="color: #a31515;">"NT Service\MSSQLSERVER"</span>,<span style="color: #a31515;">"NT Service\MsDtsServer150"</span>,<span style="color: #a31515;">"NT Service\SQLSERVERAGENT"</span>, <span style="color: #a31515;">"$env:COMPUTERNAME\localVmAdmin"</span>) -Persistent $true
}
<span style="color: blue;">else</span> {
<span style="color: green;">#write a error message.</span>
Write-Error -Message <span style="color: #a31515;">"Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."</span>
}
</pre></td></tr></tbody></table></div>
<p>The next challenge was to supply the storage key to the custom script. I couldn’t find any examples of passing parameters to a script, but I noticed that the script(s) are executed using the PowerShell command-line interface.</p><p>So all I had to do was extend the value of the commandToExecute property to include my parameter name and enclose the value within escaped double-quotes.</p><p>Here is an example of my command property: </p>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0px;"><span style="color: #a31515;">"commandToExecute"</span>: <span style="color: #a31515;">"[concat ('powershell -ExecutionPolicy Unrestricted -File ', variables('firstFileName'), ' -EnvironmentName', ' \"', parameters('EnvironmentName'), '\" ', '-PassKey', ' \"',listKeys(variables('AzureStorageAccountId'),variables('AzureStorageAccountApiVersion')).keys[0].value,'\"')]"</span>
</pre></div>
<p>After using the script, setting the commandToExecute value for the custom script extension, provisioning Azure SQL Server with the ARM template and copying the SQL Server back-up files to the file share, I was able to restore all the databases from the mapped drive directly.</p><div><br /></div>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-19222451756242338802020-10-06T21:35:00.005+02:002020-12-31T09:44:00.840+00:00A Gotcha when using the Azure SQL Server IaaS<p>I am currently working on moving our company product to the Azure Cloud. In this post, I am going to share a gotcha I came across with our Azure SQL Server IaaS.</p><p>For our product, we have to use Azure SQL Server IaaS for a few reasons: the way our SSIS projects have been developed (using DTC, Office drivers and file operations); we have a few cross-database queries within our stored procedures; and a fair bit of automation using SQL Server Agent.</p><p>I have been helping to create our Azure infrastructure using Terraform and ARM templates to build our Azure SQL Server IaaS. Getting our SQL Server created was reasonably straightforward. The gotcha came when we looked at the result: the SQL Server Agent was installed but not started, and the start-up type for the service was set to manual. As mentioned, we use SQL Server Agent to automate a few processes within our product, so this wasn't the desired state; I must also admit I had forgotten that this is the SQL Server Agent default install behaviour.</p><p>To have the SQL Server Agent service start-up type set to automatic and the service started after provisioning, you can use the virtual machine custom script extension with a PowerShell script.</p><p>When provisioning an Azure SQL Server IaaS, you first need to create a virtual machine, then install the SQL Server extension. To use the custom script extension to configure the SQL Server Agent, a dependency on the SQL Server extension has to be stated. The following is an ARM example of how to create the dependency: </p><!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><table><tbody><tr><td><pre style="line-height: 125%; margin: 0px;"> 1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20</pre></td><td><pre style="line-height: 125%; margin: 0px;"><span style="color: #a31515;">"resources"</span><span style="border: 1px solid rgb(255, 0, 0);">:</span> [
{
"name": <span style="color: #a31515;">"vmsqlserver/sqlserver"</span>,
"type": <span style="color: #a31515;">"Microsoft.SqlVirtualMachine/SqlVirtualMachines"</span>,
"apiVersion": <span style="color: #a31515;">"2017-03-01-preview"</span>,
"location": <span style="color: #a31515;">"[resourceGroup().location]"</span>
<span style="border: 1px solid rgb(255, 0, 0);">......</span>
},
{
"type": <span style="color: #a31515;">"Microsoft.Compute/virtualMachines/extensions"</span>,
"apiVersion": <span style="color: #a31515;">"2015-06-15"</span>,
"name": <span style="color: #a31515;">"vmsqlserver/CustomScriptExtension"</span>,
"location": <span style="color: #a31515;">"[resourceGroup().location]"</span>,
"dependsOn": [
<span style="color: #a31515;">" vmsqlserver/sqlserver "</span>
],
<span style="border: 1px solid rgb(255, 0, 0);">......</span>
}
<span style="border: 1px solid rgb(255, 0, 0);">}</span>
]
</pre></td></tr></tbody></table></div>
<p>The following is the PowerShell script to set the service start-up type and to start the service:</p>
<!--HTML generated using hilite.me--><div style="background: rgb(255, 255, 255); border-color: gray; border-image: initial; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><table><tbody><tr><td><pre style="line-height: 125%; margin: 0px;"> 1
2
3
4
5
6
7
8
9
10
11
12</pre></td><td><pre style="line-height: 125%; margin: 0px;">$ServiceName = <span style="color: #a31515;">'SQLSERVERAGENT'</span>
$SQLServerAgentService = Get-Service -Name $ServiceName
<span style="color: blue;">if</span> ($SQLServerAgentService.Status -eq [System.ServiceProcess.ServiceControllerStatus]::Stopped)
{
$SQLServerAgentService.Start()
}
<span style="color: blue;">if</span> ($SQLServerAgentService.StartType -ne [System.ServiceProcess.ServiceStartMode]::Automatic)
{
Set-Service -Name $ServiceName -StartupType Automatic
}
</pre></td></tr></tbody></table></div>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-87026599494984700432020-09-27T13:37:00.006+02:002022-01-03T09:45:21.717+00:00How to optimise a dynamic search stored procedure.<p>I want to share the approach that I led, with the application developers, to optimise a feature within our application.</p><p>This feature was the search functionality, which almost every application has and which often gives the SQL Server database a bad reputation: it finds data by selected attribute(s) and limits the rows output based upon the application's rows and page selectors.</p><p>The problem was not the usual lousy T-SQL approach to optional search predicates:
</p>
<!-- HTML generated using hilite.me --><div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><table><tr><td><pre style="margin: 0; line-height: 125%">1
2</pre></td><td><pre style="margin: 0; line-height: 125%"><span style="color: #0000ff">where</span> (<span style="color: #0000ff">table</span>.column1 = @pColum1PredicateValue <span style="color: #0000ff">Or</span> @pColum1PredicateValue <span style="color: #0000ff">is</span> <span style="color: #0000ff">null</span>)
<span style="color: #0000ff">AND</span> (<span style="color: #0000ff">table</span>.column2 = @pColum2PredicateValue <span style="color: #0000ff">Or</span> @pColum2PredicateValue <span style="color: #0000ff">is</span> <span style="color: #0000ff">null</span>)
</pre></td></tr></table></div>
<p>The main problem was with the implementation of the stored procedure that built the dynamic T-SQL string. The string was initialised with a select statement, containing all fields required for the application output, and extra table joins to cover all possible predicates. The where clause only had the predicates that were selected by the user.</p><p>Another issue was the way the total number of rows was calculated: it would run the same dynamic T-SQL string again to get a row count, and this dynamic T-SQL string had some poorly implemented select expressions.</p><p>The final issue we identified was the method the stored procedure took to meet the application's need to display reference data, which are rows in reference tables, as a comma-separated list. The stored procedure would create a temporary table with all reference keys and their comma-separated lists, ignoring the predicates and page/rows selectors.</p><p>The approach I led was to re-write the stored procedure in the following ways:</p><ul><li>Changing the way the dynamic T-SQL string was generated and used.</li><li>Using a different method to send the required data to the application.</li></ul><h4 style="text-align: left;">Changing the way the dynamic T-SQL string was generated and used.</h4><p>I got the developers to break up the dynamic T-SQL string into three parts: the select clause, the from clause and the where clause, joined together at the end when they need to be run. The select clause would contain only the fields needed for what the dynamic T-SQL string was going to be used for: either count(PrimaryKey) or the select of the columns required. The from clause would only contain the table joins required for the select clause and the required predicates, while the where clause continued as before.</p><p>We then used the dynamic strings in two ways:</p><ul><li>The first usage was to get a total row count from the query, so that the application could work out the total number of pages needed to display all records from the query.</li><li>The second usage was to populate a temporary table with the primary keys of the root application table rows that met the query's where clause and, based on the ordering, fell within the page and number-of-rows selectors.</li></ul><h4 style="text-align: left;">Changing the method the required data is sent to the application.</h4><p>To get the data sent to the application, we wrote a static SQL statement that outputted the columns needed by the application, with the required table joins, and filtered the rows by joining to the temporary table that the dynamic strings populated. We also used the temporary table to limit the creation of the reference data comma-separated lists.</p><h4 style="text-align: left;">Conclusion </h4><p>The reason for sharing this post is that my application developers were very sceptical about taking this approach. I demonstrated my approach by implementing a few of the parameters, including the defaults, and showing the running times against the old version.</p><p>With the defaults and a few of the required predicates selected, the new approach outperformed the old method by about 80%; when all the predicates were selected the improvement was smaller, because the dynamic string ended up looking the same. 
We also had the same pattern of dynamic T-SQL use in other parts of the application, which we converted to use this new approach.</p><p>When these stored procedures went into production, our customers noticed the improvement and sent feedback that the application was running much quicker.</p>Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-80417717662941747132019-12-17T13:00:00.003+00:002019-12-17T13:09:49.124+00:00Loading SQL Server PowerShell Module tipAs mentioned in previous posts:<br />
<ul>
<li><a href="https://zogamorph.blogspot.com/2019/11/automating-building-of-ssis-projects.html">Automating building of SSIS projects</a></li>
<li><a href="https://zogamorph.blogspot.com/2019/11/automating-deployment-of-ssis-projects_10.html">Automating deployment of SSIS projects</a></li>
</ul>
I have been working on integrating our SSIS projects into our continuous delivery pipeline using Octopus, PowerShell and the SQL Server PowerShell module. In this post I will cover how I had to resolve the following error: New-Object : Cannot find type [Microsoft.SqlServer.Management.IntegrationServices.IntegrationServices]: verify that the assembly containing this type is loaded.<br />
<br />
I'm going to start by explaining the contributing factors which led to the aforementioned error: <br />
<ul>
<li>Where the script is executed</li>
<li>What the script does.</li>
</ul>
<h4>
Where the script is executed</h4>
As part of the deployment pipeline, we decided not to install Tentacles on our SQL Servers. To deploy our SQL Server changes we use one of our application servers to receive the Octopus packages, such as our SSIS projects and SQL scripts; the application server then runs the deployment PowerShell scripts to update our SQL Servers.<br />
<br />
As detailed in the image below: <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAF_54IrePAHWV4xQnhIOOKthbCuww_GZaTaa6H1Qn9-HnG-MfF-vS6jqe8TtWPQGXU5n4V7v2C45GJwje2d9bsMNam_NGnSq7uHSPkw982unqni93mC3mRMsIqRKOBEQU8mxN/s1600/Powershell+Modules.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Our Deployment Pipeline" border="0" data-original-height="420" data-original-width="581" height="289" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAF_54IrePAHWV4xQnhIOOKthbCuww_GZaTaa6H1Qn9-HnG-MfF-vS6jqe8TtWPQGXU5n4V7v2C45GJwje2d9bsMNam_NGnSq7uHSPkw982unqni93mC3mRMsIqRKOBEQU8mxN/s400/Powershell+Modules.png" title="Our Deployment Pipeline" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Our Deployment Pipeline</td></tr>
</tbody></table>
<br />
<br />
<h4>
What the script does</h4>
The script does the following steps: <br />
<ol>
<li>Check for the installation of the SQL Server PowerShell module </li>
<li>If the module is not installed then download a temporary copy and load all the module assemblies for the current PowerShell session.</li>
<li>Connect to the SSIS server </li>
<li>Create the SSIS folder if it doesn't exist. </li>
<li>Upload the SSIS ISPAC files to the server. </li>
<li>Link the environment configuration and SSIS project together.</li>
<li>Create/update the environment configuration (steps 3 to 5 are sketched in the example after this list).</li>
</ol>
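Steps 3 to 5 boil down to using the Microsoft.SqlServer.Management.IntegrationServices assembly that the module provides. The following is a simplified sketch of that part only; the server, folder, project and file names are placeholders, and the environment configuration steps (6 and 7) are omitted.<br />
<div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"># Connect to the SSIS server (step 3); assumes the IntegrationServices assembly is already loaded
$connection = New-Object System.Data.SqlClient.SqlConnection "Data Source=MySsisServer;Initial Catalog=master;Integrated Security=SSPI;"
$integrationServices = New-Object Microsoft.SqlServer.Management.IntegrationServices.IntegrationServices $connection
$catalog = $integrationServices.Catalogs["SSISDB"]

# Create the SSIS folder if it doesn't exist (step 4)
$folder = $catalog.Folders["MyFolder"]
if ($null -eq $folder) {
    $folder = New-Object Microsoft.SqlServer.Management.IntegrationServices.CatalogFolder ($catalog, "MyFolder", "Created by deployment")
    $folder.Create()
}

# Upload the SSIS ISPAC file to the server (step 5)
[byte[]] $projectBytes = [System.IO.File]::ReadAllBytes("C:\Deploy\MyProject.ispac")
$folder.DeployProject("MyProject", $projectBytes) | Out-Null
</pre></div>
<br />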
Each time our script executed, it would download a temporary copy of the module and then deploy our SSIS project. This was happening for all our SSIS projects, which were created as separate deployments.<br />
<br />
<h4>
How I got the error</h4>
To optimise our deployment pipeline, I decided to manually install the SQL Server PowerShell module on our application server to stop the repeated downloads of the module. Afterwards I started getting the error: New-Object : Cannot find type [Microsoft.SqlServer.Management.IntegrationServices.IntegrationServices]: verify that the assembly containing this type is loaded.<br />
<br />
My script used the following example, which is recommended on many sites on the internet:<br />
<br />
<div style="background: rgb(255 , 255 , 255); border-color: gray; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<table><tbody>
<tr><td><pre style="line-height: 125%; margin: 0px;"> 1
2
3
4
5
6
7
8
9
10
11
12</pre>
</td><td><pre style="line-height: 125%; margin: 0px;"><span style="color: rgb(0 , 0 , 255);">if</span> ((Get-SqlModuleInstalled -PowerShellModuleName <span style="color: rgb(163 , 21 , 21);">"SqlServer"</span>) -ne $true) {
<span style="color: rgb(0 , 128 , 0);"># Display message</span>
Write-Output <span style="color: rgb(163 , 21 , 21);">"PowerShell module SqlServer not present, downloading temporary copy ..."</span>
<span style="color: rgb(0 , 128 , 0);"># Download and install temporary copy</span>
Install-SqlServerPowerShellModule -PowerShellModuleName <span style="color: rgb(163 , 21 , 21);">"SqlServer"</span> -LocalModulesPath $LocalModules
<span style="color: rgb(0 , 128 , 0);"># Dependent assemblies</span>
Get-SqlServerAssemblies
}
<span style="color: rgb(0 , 0 , 255);">else</span> {
<span style="color: rgb(0 , 128 , 0);"># Load the IntegrationServices Assembly</span>
[Reflection.Assembly]<span style="border-image: none; border: 1px solid rgb(255, 0, 0);">::</span>LoadWithPartialName(<span style="color: rgb(163 , 21 , 21);">"Microsoft.SqlServer.Management.IntegrationServices"</span>) | Out-Null <span style="color: rgb(0 , 128 , 0);"># Out-Null suppresses a message that would normally be displayed saying it loaded out of GAC</span>
}
</pre>
</td></tr>
</tbody></table>
</div>
<br />
The problem was with the else block: it assumes that if the module is installed then the SQL Server assemblies will be in the Global Assembly Cache (GAC). In my case, they were not, as my application server didn’t have any component of SQL Server installed other than the module.<br />
<br />
<h4>
How I resolve the error</h4>
To resolve the error, I had to update my script to always load the assemblies from the installed location of the SQL Server PowerShell module. I managed this by changing my script to always call Get-SqlServerAssemblies, but tweaking the function to use -ListAvailable when calling Get-Module for the SqlServer module. This enables getting the path where the module is being loaded from, as shown in the code below:<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: rgb(255 , 255 , 255); border-color: gray; border-style: solid; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<table><tbody>
<tr><td><pre style="line-height: 125%; margin: 0px;"> 1
2
3
4
5
6
7
8
9
10
11
12</pre>
</td><td><pre style="line-height: 125%; margin: 0px;"><span style="color: rgb(0 , 0 , 255);">Function</span> Get-SqlServerAssmblies {
<span style="color: rgb(0 , 128 , 0);"># Declare parameters</span>
<span style="color: rgb(0 , 128 , 0);"># Get the folder where the SqlServer module ended up in</span>
$sqlServerModulePath = [System.IO.Path]<span style="border-image: none; border: 1px solid rgb(255, 0, 0);">::</span>GetDirectoryName((Get-Module -ListAvailable -Name <span style="color: rgb(163 , 21 , 21);">"SqlServer"</span>).Path)
<span style="color: rgb(0 , 128 , 0);"># Loop through the assemblies</span>
<span style="color: rgb(0 , 0 , 255);">foreach</span> ($assemblyFile <span style="color: rgb(0 , 0 , 255);">in</span> (Get-ChildItem -Path $sqlServerModulePath -Exclude msv*.dll | Where-Object { $_.Extension -eq <span style="color: rgb(163 , 21 , 21);">".dll"</span> })) {
<span style="color: rgb(0 , 128 , 0);"># Load the assembly</span>
[Reflection.Assembly]<span style="border-image: none; border: 1px solid rgb(255, 0, 0);">::</span>LoadFile($assemblyFile.FullName) | Out-Null
}
}
</pre>
</td></tr>
</tbody></table>
</div>
<br />
Once my script was updated, the deployment code was working again and now completes in less than 5 minutes.<br />
<br />
Here is a PowerShell Script template that will load the SQL Server module and assemblies either by download a temporary copy or using the installed module: <a href="https://1drv.ms/u/s!AnzU-D9LfffmkYIfNGxWwXj7yKf30w" target="_blank">Load-SQLServerModule.ps1</a>
<br />
<br />
<div>
Further reading:
<br />
<ul>
<li><a href="https://zogamorph.blogspot.com/2019/11/automating-building-of-ssis-projects.html" target="_blank">Automating building of SSIS projects</a></li>
<li><a href="https://zogamorph.blogspot.com/2019/11/automating-deployment-of-ssis-projects_10.html" target="_blank">Automating deployment of SSIS projects</a></li>
</ul>
</div>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-47398026789194948982019-11-10T18:21:00.000+00:002019-12-17T13:07:52.818+00:00Automating deployment of SSIS projectsAs mentioned in the previous post: <a href="http://zogamorph.blogspot.com/2019/11/automating-building-of-ssis-projects.html" target="_blank" title="Blog Post:- Automating building of SSIS projects">Automating building of SSIS projects</a>, I am working on integrating our SSIS
projects into our continuous build/deployment process. In this post, I will cover my implementation of automating
the deployment of SSIS projects.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijn4uqKZE-TrNu8dok3EPsNFRvr3B8QJY387ROUhLHdmkUj95pOt2DF4Ptdy4z9C5XojHf77H0-CJA4zI_KCkb-URNtzp-jPWYOVbcIVUg7NmX-Mjs8GUcm90gsuBCFB7vp_j2/s1600/octopus.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="100" data-original-width="100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijn4uqKZE-TrNu8dok3EPsNFRvr3B8QJY387ROUhLHdmkUj95pOt2DF4Ptdy4z9C5XojHf77H0-CJA4zI_KCkb-URNtzp-jPWYOVbcIVUg7NmX-Mjs8GUcm90gsuBCFB7vp_j2/s1600/octopus.png" /></a></div>
Our continuous delivery pipeline uses the <a href="https://octopus.com/" target="_blank" title="Octopus Deploy Web Site">Octopus Deploy</a> platform, which is a release management, automated deployment and operations runbook platform that consists of a central server and deployment agents referred to as Tentacles.
<br />
<br />
The Tentacles receive the deployment packages, which contain the code being deployed, and run the deployment scripts.<br />
<br />
The server is where all the deployment packages are stored, deployment projects are maintained, and the infrastructure, users and server settings are managed.<br />
<br />
Octopus deployment projects are defined as steps created from one of the following template types: built-in, custom or community contributed. A template can be defined in one of the following languages: PowerShell, C#, F#, Bash or Python3.
<br />
<br />
Before I could create my deployment project, I had to extend my automated build by adding another PowerShell script to create deployment packages and publish them to the server.
<br />
<br />
The sample of the script can be found here: <a href="https://1drv.ms/u/s!AnzU-D9LfffmkP055pMaBOhgTZRQiA?e=o2Z0o6" target="_blank">PublishSSIS.ps1</a><br />
<br />
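As an illustration, packaging and pushing the build output to the Octopus server could look something like the snippet below, using the Octopus CLI (octo.exe). The package id, paths, server URL and API key are placeholders, and PublishSSIS.ps1 may do this differently in the details.<br />
<div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"># Package the built ISPAC artefacts into a zip that Octopus can store (assumes octo.exe is on the path)
& octo pack --id="MySsisProject" --version="1.0.0" --basePath=".\artefacts\MySsisProject" --format="zip" --outFolder=".\packages"

# Push the package to the Octopus server's built-in package repository
& octo push --package=".\packages\MySsisProject.1.0.0.zip" --server="https://octopus.example.com" --apiKey="API-XXXXXXXXXXXXXXXX"
</pre></div>
<br />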
I started my deployment project by using the existing community templates for deploying SSIS ISPAC projects. I found them easy to use for deploying the SSIS project and configuring it, either by directly changing the SSIS parameters or via an environment configuration.
<br />
<br />
When using the environment configuration, which is how we deploy our SSIS projects, the existing templates copied all the SSIS project and package parameters into the environment configuration and set the values either from the SSIS project/package parameter or from a matching Octopus variable name.<br />
<br />
I didn’t like this approach as it made it hard to see and find
what configuration was set by Octopus. This was due to a large number of parameters
existing in the environment configuration from all the connections defined within
our SSIS projects and packages.<br />
<br />
I decided to create my own Octopus template, which I published to the Octopus library as: "<a href="https://github.com/zogamorph/Library/blob/master/step-templates/ssis-deploy-ispac-with-enviroment.json" target="_blank" title="Octopus Template:- SSIS Deploy ispac with Environment">SSIS Deploy ispac with Environment</a>". It only creates environment references when an Octopus variable name, which has to be wrapped with SSIS[], matches the name of an SSIS parameter. For package-level parameters the Octopus variable name also has to be prefixed with the package name within []. Any environment references for which there is no matching Octopus variable following this pattern are removed. <br />
<br />
My template also differs in the following ways:<br />
<br />
It will not create the SSIS catalogue if it doesn't exist, as I don't want a one-time operation included in my template, and I want to reduce the risk of my template not working due to new parameters being required for creating the SSIS catalogue in a future release of SQL Server. <br />
<br />
It will always create an environment reference for the project being deployed. The name of the environment reference will be the same as the project name being deployed, with a defined tag. By default the tag will be the name of the environment that the SSIS package is being deployed to.<br />
<br />
I would like
to give thanks to the other SSIS templates community
contributors as their code helped me to write my own
template.
<br />
<br />
<div>
Further reading:
<br />
<ul>
<li><a href="https://zogamorph.blogspot.com/2019/11/automating-building-of-ssis-projects.html" target="_blank">Automating building of SSIS projects</a></li>
<li><a href="https://zogamorph.blogspot.com/2019/12/loading-sql-server-powershell-module-tip.html" target="_blank">Loading SQL Server PowerShell Module tip</a></li>
</ul>
</div>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-83289921715101263292019-11-02T16:57:00.001+00:002019-12-17T13:08:11.043+00:00Automating building of SSIS projectsI am currently working for a healthcare solutions company on one of their leading products. Presently we are working on improving our continuous build/deployment processes. I have been looking into how to include all our SQL Server Integration Services (SSIS) projects and would like to share the approach taken for the continuous build in this blog post.<br />
<br />
Automating the build of SSIS project(s) is not as straightforward as it is for .NET projects. One of the main barriers is that for SSIS projects there is no native support in the Visual Studio standard build tool (MSBuild).<br />
<br />
If you wish to use MSBuild you would need to write your own build file with a custom extension library that you had either created yourself or downloaded from a community/open source code sharing site. My approach was to use PowerShell to do the following:<br />
<ol>
<li>Read an XML config file which has the solution and project file paths along with the build configuration.</li>
<li>Find the path to Visual Studio by using vswhere.exe, looking for version 15 and its installation path.</li>
<li>Then automate Visual Studio to build the required projects (see the sketch after this list).</li>
<li>Finally, copy all the built output into an artefact folder structure.</li>
</ol>
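To make the vswhere and build steps more concrete, here is a rough sketch. The solution, project, configuration and folder names are placeholders, and it simply drives devenv.com from the command line, which may differ from how the full script automates Visual Studio.<br />
<div style="background: #ffffff; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"># Locate Visual Studio 2017 (version 15) with vswhere.exe (step 2)
$vswhere = Join-Path ${env:ProgramFiles(x86)} "Microsoft Visual Studio\Installer\vswhere.exe"
$installPath = & $vswhere -version "[15.0,16.0)" -property installationPath

# Build the SSIS project by calling Visual Studio from the command line (step 3)
$devenv = Join-Path $installPath "Common7\IDE\devenv.com"
& $devenv "C:\Source\MySolution.sln" /Rebuild "Development" /Project "MySsisProject\MySsisProject.dtproj"

# Copy the built ISPAC into an artefact folder structure (step 4)
New-Item -ItemType Directory -Path "C:\Build\Artefacts\MySsisProject" -Force | Out-Null
Copy-Item -Path "C:\Source\MySsisProject\bin\Development\MySsisProject.ispac" -Destination "C:\Build\Artefacts\MySsisProject" -Force
</pre></div>
<br />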
I developed this script to run on a development computer or the build server. Integrating this script into our build platform, which is currently Jenkins, was an easy process as there was a PowerShell plugin for build projects.<br />
<br />
While running the script on the build server, there was a problem with the vswhere program not returning the installation path. To resolve this problem I added a parameter for passing in the install path of Visual Studio and bypassing the vswhere step.<br />
<br />
A copy of my script and an example of the config file can be found at this location: <a href="https://onedrive.live.com/?authkey=%21ADMOShdiouWs7o8&id=E6F77D4B3FF8D47C%21277924&cid=E6F77D4B3FF8D47C" target="_blank" title="SSIS Automate Build Files">SSIS Automate Build Files</a><br />
<br />
The reasons for using PowerShell were as follows:<br />
<ul>
<li>I needed the build process to be easy to maintain and able to be ported over to any build server platform that runs on Windows.</li>
<li>To reduce the risk of us having to revisit the build process due to changes of how SSIS projects are built or the extension library not able to support the build changes.</li>
</ul>
<div>
Further reading:
<br />
<ul>
<li><a href="http://zogamorph.blogspot.com/2019/11/automating-deployment-of-ssis-projects_10.html" target="_blank">Automating deployment of SSIS projects</a></li>
<li><a href="https://zogamorph.blogspot.com/2019/12/loading-sql-server-powershell-module-tip.html" target="_blank">Loading SQL Server PowerShell Module tip</a></li>
</ul>
</div>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-46147197334941646392016-01-24T15:52:00.000+00:002016-01-24T15:59:16.336+00:00Database Projects TopologiesI have been working with Visual Studio Database projects, starting with Database Professional through to its successor, SQL Server Data Tools. Within this post I wish to share some of the project topologies that I have used to manage my projects' database requirements.<br />
<br />
Before starting I would like to cover the two ways that database projects can be referenced, as these methods play an important part in the project topologies without having to develop code.<br />
<ul>
<li>External Referencing<br />When adding a database reference, setting the database name and, optionally, the server name options with a literal / variable value will configure the reference as external. This means any SQL objects within the reference will need to be referred to using 3/4 part naming. Also, the referenced DACPAC / project will need to be deployed first, separately, before deploying the current project. </li>
</ul>
<a href="https://lh3.googleusercontent.com/-bMrQiYSaHzI/VqTzccUlkkI/AAAAAAAAAT4/xWTvc05jKp4/s1600-h/ExternalReferenace%25255B2%25255D.png" target="_blank"><img alt="Screen Shoot of setting Database as a External Referenace" border="0" height="168" src="https://lh3.googleusercontent.com/-fs15ujTozmM/VqTzcy5K46I/AAAAAAAAAT8/D5Tj7vH9jcI/ExternalReferenace_thumb%25255B1%25255D.png?imgmax=800" style="background-image: none; border-bottom: 0px; border-left: 0px; border-right: 0px; border-top: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Screen Shoot of setting Database as a External Referenace" width="240" /></a><br />
<ul>
<li>Composite Referencing<br />Creating a database reference but not setting the database name / server name will configure the reference as internal (composite) to the database, allowing the use of two-part naming. Another benefit is that the project containing the reference can deploy the referenced objects within the same deployment.</li>
</ul>
<a href="https://lh3.googleusercontent.com/-jv7v1GYLfzY/VqTzdHhDS5I/AAAAAAAAAUE/Szv1DN4Cp4k/s1600-h/Composite%25255B4%25255D.png" target="_blank"><img alt="Screen Shoot of setting Database as a Composite Referenace " border="0" height="167" src="https://lh3.googleusercontent.com/-pROeJ927Y5A/VqTzdmc6PqI/AAAAAAAAAUM/-4W9QXE-PfE/Composite_thumb%25255B2%25255D.png?imgmax=800" style="background-image: none; border-bottom: 0px; border-left: 0px; border-right: 0px; border-top: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Screen Shoot of setting Database as a Composite Referenace " width="240" /></a><br />
<br />
The reasons behind how I came to create these topologies were as follows:<br />
<ul>
<li>Deployment problems. </li>
<li>Reducing repeated code. </li>
<li>Ensuring a consistent model. </li>
<li>Not being able to spend time creating, managing and installing extensions for the tools.</li>
</ul>
<h4>
Tables and Code splitting topology</h4>
I was working on a data warehouse project, using DB Pro, which was having deployment problems. The deployments were failing due to circular references, as each database required objects to be created in the other databases. To get a deployment to work I had to re-create the deployment script a few times.<br />
<br />
To deploy all the projects' compiled models in a single execution I restructured the projects in the following manner: <br />
<ul>
<li>A storage project: which contains all the table, index and file DDL. </li>
<li>A code project: which contains all the view, function and stored procedure DDL.</li>
</ul>
The code project would use a composite reference to the storage project for the same database, and external references to both the code and storage projects of the other databases.<br />
<br />
There was an added advantage of splitting the projects in this way: it allowed the DBA to check the deployment for table rebuilds, as he only had to check the output of the storage project. However, this method did cause some extra configuration management, which was mitigated by the deployment framework I was using at the time.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://lh3.googleusercontent.com/-Bhf5VnXwVVQ/VqTzd-Bj9WI/AAAAAAAAAUU/JhPkBxBXi-4/s1600-h/TablesAndCodeSplittingTopology%25255B8%25255D.png" style="margin-left: auto; margin-right: auto;"><img alt="Showing Tables and Code Splitting Topology" border="0" height="173" src="https://lh3.googleusercontent.com/-0Z_7RXBx8yE/VqTzeXzoXzI/AAAAAAAAAUc/OrdvdSf9KjA/TablesAndCodeSplittingTopology_thumb%25255B6%25255D.png?imgmax=800" style="background-image: none; border: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Showing Tables and Code Splitting Topology" width="240" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Showing the Table and Code Referencing Topology</td></tr>
</tbody></table>
<h4>
Publication, Subscriber and Publisher topology</h4>
Another project I worked on made use of SQL Server replication to replicate a subset of objects. To ensure that the objects being replicated were consistent between the databases, I used the following project topology:<br />
<ul>
<li>A publication project: containing all the objects which need to exist on both the publisher and the subscribers. </li>
<li>A publisher project: containing objects that only exist in the publisher database. </li>
<li>A subscriber project: containing objects only for the subscribing database.</li>
</ul>
The publisher and subscriber projects used a composite project reference to the publication project.<br />
This helped ensure that the publisher and subscriber databases had a consistent model for the replicated objects. Also, any changes only needed to be made in one project.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://lh3.googleusercontent.com/-tCPUstIbx2M/VqTzerMazqI/AAAAAAAAAUg/756XlkxjpHw/s1600-h/ReplicationTopology%25255B4%25255D.png" style="margin-left: auto; margin-right: auto;"><img alt="ReplicationTopology" border="0" height="240" src="https://lh3.googleusercontent.com/-sk9XVt_hwAg/VqTzewFsTBI/AAAAAAAAAUo/LIHPVdEactg/ReplicationTopology_thumb%25255B1%25255D.png?imgmax=800" style="background-image: none; border: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="ReplicationTopology" width="230" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Showing the Publication, Subscriber and Publisher Topology</td></tr>
</tbody></table>
<h4>
Packaging Standard Objects </h4>
Another topology I've used within my database projects is to separate any standard / self-contained code into a separate project, then apply a composite reference where required.<br />
<br />
Here are some examples: <br />
<ul>
<li>An example of self-contained code would be the tsqlt framework. To find out more please read the following link: <a href="http://zogamorph.blogspot.com/2014/07/integrating-tsqlt-with-sql-server-data.html">http://zogamorph.blogspot.com/2014/07/integrating-tsqlt-with-sql-server-data.html</a> </li>
<li>One of my previous workplaces had a standard set of objects and code which were applied to any replicated databases. </li>
<li>My current workplace has a standard security schema which is applied to all its databases.</li>
</ul>
Here are the reasons why I placed the standard code into a separate project: <br />
<ul>
<li>The standard code is maintained in one place. </li>
<li>It provides a deployment mechanism for the standards. </li>
<li>It makes it easier for database projects to get the standards applied, simply by referencing the project.</li>
</ul>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-76164758021414340962015-02-06T10:34:00.001+00:002015-02-06T10:49:28.373+00:00Automation database deployment with refactored objectsWhile automating the deployment for my SQL Server Data Tools (SSDT) projects, as mentioned in a previous post: <a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank">Automating SQL Server Database Projects Deployment</a>, I found it important to create / maintain the projects' refactor logs. This is because the SSDT / DAC framework uses a model-driven deployment method, which works by comparing the model with the target database to create / execute a delta script that updates the database to match the model. A problem with this method is misidentifying objects which have been renamed or transferred to a new schema.<br />
<br />
For code objects like stored procedures, functions, views etc., this is more of a nuisance, as the database will contain redundant code; this happens if the deployment option <strong>DropObjectsNotInSource</strong> is not set to true. It makes the database harder to maintain and may cause future deployments to fail if schema binding is involved. <br />
<br />
However, for tables or columns, again depending on the deployment options, this will cause one or more of the following: data loss, deployment failures or unnecessary steps within the delta script. A group of unnecessary steps that would be generated, mainly when columns are renamed, are table rebuilds. These steps could cause a deployment failure when copying the data from the original table to the temp table, if the renamed column has no default value and doesn't allow nulls. When the option <strong>BlockOnPossibleDataLoss</strong> is set to false, the framework is allowed to drop columns and tables regardless of whether they contain data; this could cause unintentional data loss if the dropped column / table was actually renamed / transferred, because the data wouldn't be copied to the new column / table.<br />
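For reference, both behaviours mentioned above are controlled by publish options; a hedged example of how they might be passed on a SqlPackage.exe command line (the paths and names are illustrative):
<div class="csharpcode">
<pre>
# Illustrative only: tighten or loosen the DAC deployment behaviour per environment.
& 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe' `
    /Action:Publish `
    /SourceFile:'MyDatabase.dacpac' `
    /TargetServerName:'MyServer' `
    /TargetDatabaseName:'MyDatabase' `
    /p:DropObjectsNotInSource=true `
    /p:BlockOnPossibleDataLoss=true
</pre>
</div>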
<br />
The SSDT mechanism for managing renames / transfers of database objects is to have the actions recorded in a log file called <ProjectName>.refactorlog. The refactor log is then used within the deployment to help generate the required steps to update the database. <br />
<br />
To create / update the refactor log, use the SSDT refactoring options by following these steps: <a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank"><img align="right" alt="Refactor Menu Options" border="0" src="http://lh5.ggpht.com/-EHNQoPBzWwE/VNSYzukioMI/AAAAAAAAARo/_ywm-E3AtkE/RefactorMenu15.png?imgmax=800" height="240" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; float: right; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Refactor Menu Options" width="206" /></a> <br />
<ul>
<li>Select the object you wish to refactor in the SQL Server Object Explorer under projects. </li>
<li>Right-click on the object, hover over the Refactor menu item and select the required action: <ul>
<li>Rename: The rename option will record that an object has been renamed. This will allow the deployment to create the following statement: EXEC sp_rename '<OldNameOfObject>', '<NewNameOfObject>'; </li>
<li>Move To Schema: The move schema refactor option will record that an object belonging to one schema is to be moved to the new schema. This will cause the following statement to be created in the deployment: ALTER SCHEMA <NewSchema> TRANSFER <ObjectType>::<OldSchema>.<NameOfObject>; </li>
</ul>
</li>
</ul>
When using one of the above options, the action will also be applied to all the objects which reference the object being refactored. <br />
<br />
<a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank"><img alt="Refactor Preview Screen" border="0" src="http://lh6.ggpht.com/-pAWo2zJbAl8/VNSY0A65DuI/AAAAAAAAARw/L_MBc-szqes/RenamePreview4.png?imgmax=800" height="467" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Refactor Preview Screen" width="644" /></a> <br />
<br />
One annoying problem, depending on the object being renamed / transferred, is that the refactor options will not rename the file within the project. So you could end up with a file called one thing, e.g. Myschema.csp_MyStoredProc.sql, while the object inside is MyTransfer.csp_MyRenamedProc. This can make the solution / project view impossible to use for finding objects. <br />
<br />
There are also two other options available which are mainly aimed at code objects like views, stored procedures and functions etc. I have not used these options, so I don't know if they impact the refactor log. <br />
<ul>
<li>Expand Wildcards: The expand wildcards refactoring is there to help remove SELECT * statements by replacing them with all the column names. </li>
<li>Fully-Qualify Names: The fully-qualify refactoring option is about updating the column names with a qualified table name or alias.</li>
</ul>
Also, when adding a new column to an already existing table, add it at the end of the table. Otherwise the framework will generate the table rebuild steps, as there is no option to force the use of ALTER TABLE ... ADD COLUMN.
<br />
<br />
Further reading:<br />
<ul>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">Automated Unit Testing SQL Server Code</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/integrating-tsqlt-with-sql-server-data.html" target="_blank">Integrating tsqlt with SQL Server Data Tools</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2014/08/integrating-tsqlt-with-team-foundation.html" target="_blank">Integrating tsqlt with Team Foundation Build Services continuous build</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank">Automating SQL Server Database projects Deployment</a></li>
</ul>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-6133013055490551652015-01-19T17:56:00.001+00:002015-01-19T17:59:38.027+00:00Building a simple dynamic site mapA while ago I worked on a project to improve a website's indexing within search engines. The project was initiated because there was a noticeable downward trend in usage for the site. The main reason was that the site was not being listed within relevant searches by search engines; this was because our content didn't emphasize the search keywords, due to our HTML layout, URL structure and crawling errors.<br />
<br />
One aspect of the project was to improve the site layout and provide more information publicly. The layout improvements were to include the use of headers to highlight search keywords, and a simpler layout to reduce HTML errors. Providing more information was to help increase the coverage of keywords. Another improvement made to the site was the restructuring of the site URLs, by making the URLs more human-readable and converting the site search functionality to use ASP.Net routes.<br />
<br />
A site map was introduced to help the search providers index our static and searchable content. More information about site maps, which Google, Bing and Yahoo support, can be found at the following link: <a href="http://www.sitemaps.org/" target="_blank">http://www.sitemaps.org/</a>.<br />
<br />
I needed to find a method of creating a site map that would only list the valid search URLs available, based upon the content within our database. The approach I took was to get SQL Server to generate the XML for the site map and to have an ASP.Net page that would stream the XML out. I would like to start off with the SQL that I used to create the site map XML, using the following elements of TSQL:<br />
<ul>
<li>WITH XMLNAMESPACES (DEFAULT 'http://www.sitemaps.org/schemas/sitemap/0.9'): to set the namespace of the XML document. </li>
<li>FOR XML PATH('url'): the FOR XML operator, also setting the parent element name. </li>
<li>ROOT('urlset'): for creating the root node for a valid XML document.</li>
</ul>
Full Code: <br />
<!-- code formatted by http://manoli.net/csharpformat/ --> <br />
<div class="csharpcode">
<pre class="alt"><span class="lnum"> 1: </span>;<span class="kwrd">WITH</span> XMLNAMESPACES (<span class="kwrd">DEFAULT</span> <span class="str">'http://www.sitemaps.org/schemas/sitemap/0.9'</span>)</pre>
<pre><span class="lnum"> 2: </span>, SiteMapURLS(URL, changefreq, LastUpdated, PriorityWeight) <span class="rem">-- Named query to get the site URL and the date when the URL was last changed</span></pre>
<pre class="alt"><span class="lnum"> 3: </span><span class="kwrd">AS</span></pre>
<pre><span class="lnum"> 4: </span>(</pre>
<pre class="alt"><span class="lnum"> 5: </span><span class="kwrd">SELECT</span> [ta].[URL] <span class="rem">-- URL pattern like 'http://www.mysite.co.uk/' + [ta].[URLPart] +'/' </span></pre>
<pre><span class="lnum"> 6: </span>, <span class="str">'daily'</span> <span class="rem">-- or one of: always, hourly, weekly, monthly, yearly, never</span></pre>
<pre class="alt"><span class="lnum"> 7: </span>, [ta].[DateField] <span class="rem">-- MAX(DateField) to get the last date</span></pre>
<pre><span class="lnum"> 8: </span>, 0.5 <span class="rem">-- Valid values range from 0.0 to 1.0</span></pre>
<pre class="alt"><span class="lnum"> 9: </span><span class="kwrd">FROM</span> TableA <span class="kwrd">AS</span> ta</pre>
<pre><span class="lnum"> 10: </span><span class="rem">-- Group By [ta].[URL] OR URL pattern like 'http://www.mysite.co.uk/' + [ta].[URLPart] +'/' </span></pre>
<pre class="alt"><span class="lnum"> 11: </span>) </pre>
<pre><span class="lnum"> 12: </span><span class="kwrd">SELECT</span> [smURL].[URL] <span class="kwrd">AS</span> <span class="str">'loc'</span> <span class="rem">-- creates the following: <loc> </span></pre>
<pre class="alt"><span class="lnum"> 13: </span>, REPLACE(<span class="kwrd">CONVERT</span>(<span class="kwrd">varchar</span>,ISNULL([smURL].[LastUpdated],<span class="str">'2013.01.01'</span>),102),<span class="str">'.'</span>,<span class="str">'-'</span>) <span class="kwrd">AS</span> <span class="str">'lastmod'</span> <span class="rem">-- creates the following: <lastmod> and corrects the date format</span></pre>
<pre><span class="lnum"> 14: </span>, [smURL].[changefreq] <span class="kwrd">AS</span> <span class="str">'changefreq'</span> <span class="rem">-- to create the following: <changefreq></span></pre>
<pre class="alt"><span class="lnum"> 15: </span>, <span class="kwrd">CAST</span>(ROUND([smURL].[PriorityWeight],1) <span class="kwrd">AS</span> <span class="kwrd">DECIMAL</span>(2,1)) <span class="kwrd">AS</span> <span class="str">'priority'</span> <span class="rem">-- to create the following <priority></span></pre>
<pre><span class="lnum"> 16: </span><span class="kwrd">FROM</span> SiteMapURLS <span class="kwrd">AS</span> smURL</pre>
<pre class="alt"><span class="lnum"> 17: </span><span class="kwrd">FOR</span> XML <span class="kwrd">PATH</span>(<span class="str">'url'</span>), ROOT(<span class="str">'urlset'</span>);</pre>
</div>
<br />
Within the website I created a generic handler called sitemap.ashx. This page would then stream the content of the stored procedure.<br />
<br />
While the site was being updated, it was decided to improve the data access layer by using Entity Framework. To keep the site's data access consistent, I also used the Entity Framework connection for the site map. The following code shows how I managed to stream the XML content out of the database:<br />
<!-- code formatted by http://manoli.net/csharpformat/ --><br />
<div class="csharpcode">
<pre class="alt"><span class="lnum"> 1: </span><span class="kwrd">Public</span> <span class="kwrd">Sub</span> ProcessRequest(context <span class="kwrd">As</span> HttpContext) <span class="kwrd">Implements</span> IHttpHandler.ProcessRequest</pre>
<pre><span class="lnum"> 2: </span> context.Response.ContentType = <span class="str">"text/xml"</span></pre>
<pre class="alt"><span class="lnum"> 3: </span> </pre>
<pre><span class="lnum"> 4: </span> <span class="rem">'Getting the connection from Entity Framework</span></pre>
<pre class="alt"><span class="lnum"> 5: </span> Using sqlConnection <span class="kwrd">As</span> SqlConnection = DbContext.Database.Connection</pre>
<pre><span class="lnum"> 6: </span> Using sqlCommand <span class="kwrd">As</span> <span class="kwrd">New</span> SqlCommand(<span class="str">"website.csp_GetWebSiteMap"</span> , sqlConnection)</pre>
<pre class="alt"><span class="lnum"> 7: </span> sqlCommand.CommandType = CommandType.StoredProcedure</pre>
<pre><span class="lnum"> 8: </span> sqlCommand.Connection.Open()</pre>
<pre class="alt"><span class="lnum"> 9: </span> Using xmlReader <span class="kwrd">As</span> XmlReader = sqlCommand.ExecuteXmlReader()</pre>
<pre><span class="lnum"> 10: </span> <span class="kwrd">Dim</span> xdocument <span class="kwrd">As</span> XDocument = xdocument.Load(xmlReader)</pre>
<pre class="alt"><span class="lnum"> 11: </span> context.Response.Write(xdocument)</pre>
<pre><span class="lnum"> 12: </span> xmlReader.Close()</pre>
<pre class="alt"><span class="lnum"> 13: </span> sqlCommand.Connection.Close()</pre>
<pre><span class="lnum"> 14: </span> <span class="kwrd">End</span> Using</pre>
<pre class="alt"><span class="lnum"> 15: </span> <span class="kwrd">End</span> Using</pre>
<pre><span class="lnum"> 16: </span> <span class="kwrd">End</span> Using</pre>
<pre class="alt"><span class="lnum"> 17: </span><span class="kwrd">End</span> Sub</pre>
</div>
<br />
To enable the search crawlers to find our site map I created a robots.txt file. This file also helps to restrict URLs which you don't want crawlers to access, like URLs which require an authenticated user.<br />
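As an illustration only (the host name and paths are made up), the robots.txt needs just a few lines to point crawlers at the site map and keep them away from authenticated areas:
<div class="csharpcode">
<pre>
User-agent: *
Disallow: /account/
Sitemap: http://www.mysite.co.uk/sitemap.ashx
</pre>
</div>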
<br />
There is one caveat to this approach, which is that the URLs created in SQL Server must not need to be encoded, which was the case for me. If the URLs you have do need to be encoded, you would need to create a simple SQL CLR function that calls the following method (<a href="http://msdn.microsoft.com/en-us/library/ee360286(v=vs.110).aspx)" target="_blank">http://msdn.microsoft.com/en-us/library/ee360286(v=vs.110).aspx)</a>.Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-86068323201565943832015-01-09T12:19:00.001+00:002015-02-04T11:50:36.086+00:00Automating SQL Server Database projects DeploymentWithin this post I wish to share the methods I used to automate the deployment of my SQL Server database projects. I needed to automate the deployment of my database projects within the build process so that I could integrate the execution of my unit tests, as mentioned in a previous post: <a href="http://zogamorph.blogspot.co.uk/2014/08/integrating-tsqlt-with-team-foundation.html" target="_blank">Integrating TSQLT with Team Foundation Build Services continuous build</a>. I also wanted to automate the deployment into the development, test and production environments for the following reasons: <br />
<ul>
<li>To empower the developers to do database deployments into the required environment, reducing the dependence on me. </li>
<li>To reduce the complexity of deploying the database projects. </li>
<li>To have an easy deployment process to document, i.e. run this script. </li>
<li>As a way to document the deployment of the database projects.</li>
</ul>
Before I start, I would like to highlight an invaluable type of project / solution file that helps with the automation. These are "Profiles" (publish profiles), which are used as configuration files for deployment. They store settings like the connection string, database name and other behavioural settings for DAC services (the framework used to deploy SQL Server database projects). These files can be created when deploying from Visual Studio. If the database solution, like mine, is made up of composite projects, then profiles, which can be shared between the SQL Server database projects, can ensure that the projects are deployed to the same SQL Server and database.<br />
<br />
As mentioned before I needed to automate the deployment of my database project for the unit tests. I couldn't use the MS-Test deployment functionality as I needed a post deployment script within a composite project to be executed. I decided to use PowerShell to automate SQLPackage.exe for the following reasons:<br />
<ul>
<li>It would be easier to manage the arguments required for SQLPackage </li>
<li>Easier to capture the output of SQLPackage into a log file.</li>
</ul>
The basic approach was to use Invoke-Expression on a dynamically created command string and redirect the output to a file, then check that $LASTEXITCODE was 0 after the execution to ensure no errors had happened. The command string was made up of hardcoded values and parameters passed into the script. A demo version of this can be downloaded from the following link: <a href="http://1drv.ms/1Ks0qq0" target="_blank">http://1drv.ms/1Ks0qq0</a>.<br />
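A minimal sketch of that pattern (the parameter names and paths are illustrative; the demo script linked above is the fuller version):
<div class="csharpcode">
<pre>
param(
    [string] $SqlPackagePath = 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe',
    [string] $DacPacPath,
    [string] $PublishProfilePath,
    [string] $LogFile = '.\deploy.log'
)

# Build the command string from hardcoded values and the parameters passed in,
# then run it and redirect all of its output into the log file.
$command = "& `"$SqlPackagePath`" /Action:Publish /SourceFile:`"$DacPacPath`" /Profile:`"$PublishProfilePath`""
Invoke-Expression "$command > `"$LogFile`" 2>&1"

if ($LASTEXITCODE -ne 0) {
    throw "SqlPackage.exe failed with exit code $LASTEXITCODE; see $LogFile for details."
}
</pre>
</div>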
<br />
To deploy the successful builds, the tool used within the company at the time was TFSDeployer ( <a href="http://tfsdeployer.codeplex.com/" target="_blank">http://tfsdeployer.codeplex.com/</a> ). Unfortunately I need to point out that this is not an active project; more information can be found here: <a href="http://tfsdeployer.codeplex.com/discussions/449767" target="_blank">http://tfsdeployer.codeplex.com/discussions/449767</a>. To get TFSDeployer to deploy the dacpac files (the build output for database projects), I needed to create a PowerShell script. I wanted to find a way of using PowerShell without calling an external process, SQLPackage or MSDeploy, and without relying on $LASTEXITCODE to detect errors.<br />
While searching for an alternative method I came across the DAC Services API (<a href="http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dac.dacservices.aspx" target="_blank">http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dac.dacservices.aspx</a>). The API is exposed to .Net, which allows interaction with all the functionality of the framework. With this API I was able to create a reusable function to do the following:<br />
<ul>
<li>Generate deployment report and script. </li>
<li>Optionally deploy the changes to target server and database. </li>
<li>Capture all the events into log files.</li>
</ul>
I also wrapped this function within a script which will take a parameter to load a DAC Services API library file. A version of this script can be downloaded from the following link: <a href="http://1drv.ms/1Kc5fDP" target="_blank">http://1drv.ms/1Kc5fDP</a>. Also a demo script of using the DAC function within a PowerShell script can be downloaded from here: <a href="http://1drv.ms/1Kc5sad" target="_blank">http://1drv.ms/1Kc5sad</a><br />
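A rough outline of how such a function can use the API (the assembly path, server and file names are placeholders; the downloadable scripts above are the complete versions):
<div class="csharpcode">
<pre>
# Load the DAC Services API library (the path is an example; the real script takes it as a parameter).
Add-Type -Path 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\Microsoft.SqlServer.Dac.dll'

$dacServices = New-Object Microsoft.SqlServer.Dac.DacServices 'Server=MyServer;Integrated Security=True'
$dacPackage  = [Microsoft.SqlServer.Dac.DacPackage]::Load('C:\Drop\MyDatabase.dacpac')

# Capture every deployment event into a log file.
Register-ObjectEvent -InputObject $dacServices -EventName Message -Action {
    Add-Content -Path 'C:\Logs\MyDatabase.deploy.log' -Value $EventArgs.Message.Message
} | Out-Null

$deployOptions = New-Object Microsoft.SqlServer.Dac.DacDeployOptions
$deployOptions.BlockOnPossibleDataLoss = $true

# Generate a deployment report and script first, then optionally apply the changes.
$dacServices.GenerateDeployReport($dacPackage, 'MyDatabase', $deployOptions) |
    Out-File 'C:\Logs\MyDatabase.DeployReport.xml'
$dacServices.GenerateDeployScript($dacPackage, 'MyDatabase', $deployOptions) |
    Out-File 'C:\Logs\MyDatabase.Deploy.sql'
$dacServices.Deploy($dacPackage, 'MyDatabase', $true, $deployOptions)
</pre>
</div>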
<br />
The scripts can only be executed within PowerShell 3 or above, as the DAC Services API library requires Microsoft .Net 4.0 or above. Also, the following SQL Server redistributable components are required:<br />
<ul>
<li>Microsoft SQL Server Data-Tier Application Framework -<em>DACFramework.msi</em> (<a href="http://www.microsoft.com/en-us/download/details.aspx?id=43370" target="_blank">http://www.microsoft.com/en-us/download/details.aspx?id=43370</a>)</li>
<li>Microsoft System CLR Types for Microsoft SQL Server 2014 - <em>SQLSysClrTypes.msi</em><br />Microsoft SQL Server 2014 Transact-SQL ScriptDom - <em>SQLDOM.msi</em><br />(<a href="http://www.microsoft.com/en-gb/download/details.aspx?id=42295" target="_blank">http://www.microsoft.com/en-gb/download/details.aspx?id=42295</a>)</li>
</ul>
Further reading:<br />
<ul>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">Automated Unit Testing SQL Server Code</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/integrating-tsqlt-with-sql-server-data.html" target="_blank">Integrating tsqlt with SQL Server Data Tools</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2014/08/integrating-tsqlt-with-team-foundation.html" target="_blank">Integrating tsqlt with Team Foundation Build Services continuous build</a></li>
<!--
<li><a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank">Automating SQL Server Database projects Deployment</a></li>
-->
</ul>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com2tag:blogger.com,1999:blog-7476247.post-85171422394898230022014-08-26T14:04:00.000+02:002015-02-04T11:48:31.627+00:00Integrating tsqlt with Team Foundation Build Services continuous buildAs I've mentioned in a previous blog post I have been integrating a SQL Server unit test framework: tsqlt into my project life-cycle (<a href="http://zogamorph.blogspot.co.uk/2014/07/integrating-tsqlt-with-sql-server-data.html" target="_blank">click here to read more</a>). Within this post I will share how I got my tsqlt unit tests executing within the continuous integration solution and the results influencing the outcome of the build. The continuous integration platform I am using is Team Foundation Build Services 2012 (TFBS). However my approach will work with any continuous integration platform which handles MS-Build and MS-Test, like Team City.<br />
<br />
<h4>
Build Integration</h4>
As I had created my database unit tests within SQL Server Data Tools (SSDT), integrating them into the continuous integration was a straightforward process: extend the build definition to include the unit test project in the list of solutions / projects which have to be built. As I had my own solution for all the database projects and other related projects, which was listed within the build definition, my unit tests were already included.<br />
<br />
I also added a very useful MS-Build argument to the build definition: <strong>/p:GenerateProjectSpecificOutputFolder=true</strong>, which puts the outputs of the projects in their own separate folders. This was another way of separating the unit test framework and the unit tests from the application database.<br />
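In effect the build definition ends up invoking MSBuild along these lines (a simplified, illustrative command; the real build supplies its own output directory):
<div class="csharpcode">
<pre>
& "$env:windir\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" `
    'MyDatabaseSolution.sln' `
    /p:Configuration=Release `
    /p:GenerateProjectSpecificOutputFolder=true `
    /p:OutDir='C:\Builds\Drop\'
</pre>
</div>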
<br />
<h4>
Unit Test Execution</h4>
Integrating the execution of the database unit tests could be achieved by writing a new unit test adapter to execute the tsqlt unit tests and report the results back to TFBS. However, I decided to use an easier approach by making use of the SSDT extension within MS-Test. I use an MS-Test database unit test to execute each tsqlt unit test and to check the execution result. I have reasons why I don't use MS-Tests to unit test my application database; these can be found in a previous blog post (<a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">click here to find out more</a>). <br />
<br />
I structured my MS-Test project to mirror the approach taken within tsqlt, by having an MS-Test class for each tsqlt test schema. Then, within each test class, there is one MS-Test database unit test per tsqlt unit test, using the same name minus the spaces and the leading “test”. This makes it easier to see the results of the tsqlt unit tests within the result viewer.<br />
<br />
Within the MS-Test database unit test the following SQL pattern was used: <br />
<!-- code formatted by http://manoli.net/csharpformat/ --> <br />
<div class="csharpcode">
<pre class="alt"><span class="lnum"> 1: </span><span class="rem">--Execute the unit tests</span></pre>
<pre><span class="lnum"> 2: </span><span class="kwrd">EXECUTE</span> [tSQLt].[Run] @TestName = N<span class="str">'[MyUnitTestClass].[test My Unit Test for My database Application]'</span>;</pre>
<pre class="alt"><span class="lnum"> 3: </span><span class="rem">--Then check the results using the following SQL</span></pre>
<pre><span class="lnum"> 4: </span><span class="kwrd">SELECT</span> [tr].[<span class="kwrd">Result</span>]</pre>
<pre class="alt"><span class="lnum"> 5: </span>, [tr].[Msg]</pre>
<pre><span class="lnum"> 6: </span><span class="kwrd">FROM</span> [tSQLt].[TestResult] <span class="kwrd">AS</span> tr</pre>
<pre class="alt"><span class="lnum"> 7: </span><span class="kwrd">WHERE</span> [tr].[<span class="kwrd">Class</span>] = <span class="str">'MyUnitTestClass'</span></pre>
<pre><span class="lnum"> 8: </span><span class="kwrd">AND</span> [tr].[TestCase] = <span class="str">'test My Unit Test for My database Application'</span>;</pre>
</div>
<br />
Then use the single value assert method to check row 1 column 1 contained the following value: Success.<br />
<br />
I have been able to write a T4 code generator which connects to a SQL Server database with tsqlt unit tests and does the following:<br />
<ul>
<li>Create a class file for each of the tsqlt test schema </li>
<li>Generate the database unit tests for each tsqlt test found within the schema. </li>
</ul>
You can download my code generator on the following link: <a href="https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C!385&authkey=!AH-1sMQPJIs_B0c&ithint=file%2c.zip" target="_blank">https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C!385&authkey=!AH-1sMQPJIs_B0c&ithint=file%2c.zip</a>.<br />
<br />
The code generator requires the following prerequisites:<br />
<ul>
<li>T4 toolbox visual studio extension which can be downloaded from the following links: <ul>
<li>Visual Studio 2012: <a href="http://visualstudiogallery.msdn.microsoft.com/7f9bd62f-2505-4aa4-9378-ee7830371684" target="_blank">http://visualstudiogallery.msdn.microsoft.com/7f9bd62f-2505-4aa4-9378-ee7830371684</a>. </li>
<li>Visual Studio 2013: <a href="http://visualstudiogallery.msdn.microsoft.com/791817a4-eb9a-4000-9c85-972cc60fd5aa" target="_blank">http://visualstudiogallery.msdn.microsoft.com/791817a4-eb9a-4000-9c85-972cc60fd5aa</a>.</li>
</ul>
</li>
</ul>
To use the code generator update the tsqltUnitTest.tt file with the following:<br />
<ul>
<li>SQL Server: With the name of the SQL Server which contains the unit test database. </li>
<li>Database name: The name of the database containing the unit tests. </li>
<li>Test name space: With the name of your MS-Test project that output files will be copied to.</li>
</ul>
Then right-click on the tsqltUnitTest.tt file and select the Run Custom Tool option. After the execution of the T4 template there will be a directory, with the same name as your namespace setting, containing all the files which need to be copied to your MS-Test project.<br />
<br />
To create the empty MS-Test project to copy the output of the code generator into, use the following steps:<br />
<ol>
<li>Add a new C# MS-Test project to the solution</li>
<br />
<li>Add a SQL Server unit tests file. This will add all the necessary references and code to convert the MS-Test project into a SQL Server test project. </li>
<br />
<li>Delete the newly added SQL Server unit test file.</li>
</ol>
<a href="http://lh6.ggpht.com/-M4OeIUc9I3A/U_tSi-u770I/AAAAAAAAAQI/tLos2fj_Nxc/s1600-h/MSTestProjectAddSSDTS%25255B5%25255D.png" target="_blank"><img alt="Adding MS-Test Project to solution" border="0" src="http://lh4.ggpht.com/-MbXt0FXfj8A/U_tSjgziOgI/AAAAAAAAAQM/z1hZhEAH0kY/MSTestProjectAddSSDTS_thumb%25255B3%25255D.png?imgmax=800" height="309" style="background-image: none; border: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Adding MS-Test Project to solution" width="640" /></a><br />
<br />
To get the unit tests to execute successfully I had to alter the standard build process template. The alteration was to insert an activity, after the build and before the MS-Test activities, to run a PowerShell script. The script would deploy my application database, and then deploy the unit test project, which includes the framework, to localdb.<br />
<br />
<a href="http://lh6.ggpht.com/-hgqMcXjHsy8/U_tSkVzdAoI/AAAAAAAAAQU/xHyMzGVDCqQ/s1600-h/BuildProcessTemplatePowerShellStep%25255B5%25255D.png" target="_blank"><img alt="Standard Build Process Template With Powershll Task" border="0" src="http://lh3.ggpht.com/-l-mBK8yIzuI/U_tSk2FgsoI/AAAAAAAAAQg/PBBDpLuLP2I/BuildProcessTemplatePowerShellStep_thumb%25255B3%25255D.png?imgmax=800" height="344" style="background-image: none; border: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Standard Build Process Template With Powershll Task" width="640" /></a><br />
<br />
The reason for the extra step was to work around the problem of SQLPackage only executing the pre/post-deployment files of the dacpac referenced by the source parameter. I needed my application database post-deployment script to be executed, as this sets up all the reference data which my unit tests require. <br />
<br />
To integrate the unit test execution I only had to update my build definition to start executing tests based on a DLL file pattern match, which was *test*, and change the setting "fail the build if tests fail" to true. <br />
<br />
Finally, another useful advantage of using MS-Test is that database developers can deploy and test their code against localdb all within Visual Studio. So now they can follow the same rules as their .Net counterparts by ensuring that no code is checked in before it passes its unit tests on their local machines.
<br />
<br />
Further reading:
<br />
<ul>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">Automated Unit Testing SQL Server Code</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/integrating-tsqlt-with-sql-server-data.html" target="_blank">Integrating tsqlt with SQL Server Data Tools</a></li>
<!--
<li><a href="http://zogamorph.blogspot.co.uk/2014/08/integrating-tsqlt-with-team-foundation.html" target="_blank">Integrating tsqlt with Team Foundation Build Services continuous build</a></li>
-->
<li><a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank">Automating SQL Server Database projects Deployment</a></li>
</ul>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com5tag:blogger.com,1999:blog-7476247.post-70636195036444917302014-07-09T11:23:00.000+02:002015-02-04T11:48:00.745+00:00Integrating tsqlt with SQL Server Data ToolsAs mentioned in a previous <a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">post </a>I've recently been working with a SQL Server unit test framework: tsqlt. I would like to share how I managed to integrate the framework and unit test into my project life-cycle.<br />
<br />
Unit tests should be treated the same as application code, meaning they should be contained within a project under source control. Being under source control will allow the tests to be maintained and changes tracked when multiple people are helping to write tests. Also having the unit test in a project means there is a deployment mechanism out-of-the-box.<br />
<br />
The tsqlt framework has a few requirements, which are as follows:<br />
<ul>
<li>CLR is enabled on the SQL Server </li>
<li>The framework and the unit tests are contained within the same database as the objects which are being tested </li>
<li>The database is set to trustworthy</li>
</ul>
As the unit tests are required to be contained within the same database as the objects being tested, and also due to the way SQL Server Data Tools (SSDT) project validation works, the easiest solution would be to add the unit tests and the framework objects into the same project as your application database.<br />
<br />
However, one question would be raised, which is: how do you stop the unit test framework and the unit tests getting into the production deployment? The tsqlt framework does offer an approach to this problem by supplying an uninstall stored procedure which could be called after deployment to remove the unit tests and the framework. This is more of a workaround than a solution to the problem of not deploying the tests and the framework into production.<br />
<br />
There is an alternative solution, which is the method I used, to integrate the unit tests into the project life cycle. The solution is to use the SSDT project setup method of composite database projects. The composite database project method, which has been around since Visual Studio Database Professionals, is basically having 2 (or more) database projects, where one references the other(s), and all of them get deployed into the same database on the same server.<br />
<br />
My method to implement composite SSDT database projects was as follows:<br />
<ul>
<li>Create an application database project as normal, which contains all the objects for the application requirements. </li>
<li>Create a unit test database project to contain all my unit tests and supporting objects. This project could also be where the tsqlt framework code is stored; however, I decided to create a separate tsqlt dacpac file by using the following steps: <ul>
<li>Create an empty database. </li>
<li>Execute the tsqlt install script within the empty database. </li>
<li>Use SQLPackage with the extract action.</li>
</ul>
<a href="http://lh3.ggpht.com/-bp-H4CCw2T0/U7xa6egFSdI/AAAAAAAAAN4/ZooX2YWRjOE/s1600-h/UnitTestSolutionExplorer11.png" target="_blank"><img alt="UnitTestSolutionExplorer" border="0" src="http://lh5.ggpht.com/-zbSnDLCGEic/U7xa7GM4GvI/AAAAAAAAAOA/3y3-dYEYi14/UnitTestSolutionExplorer_thumb6.png?imgmax=800" height="163" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="UnitTestSolutionExplorer" width="240" /></a> </li>
<li>Before starting to create my unit test code within my unit test project I had to set up the references. I created references to the application project and the tsqlt dacpac file, setting the database location to be “<strong>same database</strong>”, which is the same as removing the generated database name and variable.</li>
</ul>
<div style="text-align: center;">
<a href="http://lh6.ggpht.com/-qKWjqKVRV6A/U7xa7rDj95I/AAAAAAAAAOI/cylwL6-gOLM/s1600-h/UnitTestAddDatabaseReferencesProject.png" target="_blank"><img alt="Adding Database Reference To Application Project From Unit Test Project" border="0" src="http://lh3.ggpht.com/-ZrN6Vhl6tUY/U7xa8CLSQsI/AAAAAAAAAOM/Qji6rk_PlzY/UnitTestAddDatabaseReferencesProject%25255B2%25255D.png?imgmax=800" height="168" style="background-image: none; border-width: 0px; display: inline; margin-left: 0px; margin-right: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Adding Database Reference To Application Project From Unit Test Project" width="240" /></a><a href="http://lh4.ggpht.com/-JS3MKjFSnxM/U7xa8nLhjAI/AAAAAAAAAOY/vzcq-7SuuYo/s1600-h/UnitTestAddDatabaseReferencesDacPac3.png"><img alt="Add Database References to tsqlt dacpac file From Unit Test Project" border="0" src="http://lh6.ggpht.com/-3k7CD6CAlBo/U7xa9O35yPI/AAAAAAAAAOg/T96Vxg4sDXM/UnitTestAddDatabaseReferencesDacPac_.png?imgmax=800" height="168" style="background-image: none; border-width: 0px; display: inline; margin-left: 0px; margin-right: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="Add Database References to tsqlt dacpac file From Unit Test Project" width="240" /></a></div>
Setting up the projects in this way means I can deploy my application project, without the unit tests and the framework, to all the environments, and then add the unit tests to any one of my environments: development, test or a separate unit test server. <br />
<br />
The reason for creating a dacpac file was to have a single reusable file for my next projects, and if the unit test framework is updated I only have to update a single file. Also, hopefully at some point the SSDT project type will support the use of NuGet. This would mean that I could upload my dacpac to a NuGet server and add a package reference; then, if I ever updated the server package, the project would automatically pick up the new version. (There is a fork within the NuGet project which already allows this: <a href="https://nuget.codeplex.com/workitem/2439">https://nuget.codeplex.com/workitem/2439.)</a><br />
<br />
Here is a link to find out more about composite projects: <a href="http://blogs.msdn.com/b/ssdt/archive/2012/06/26/composite-projects-and-schema-compare.aspx">http://blogs.msdn.com/b/ssdt/archive/2012/06/26/composite-projects-and-schema-compare.aspx</a>.<br />
<br />
SQLPackage has a setting which allows a composite project to deploy its composite references' objects at the same time. This allows the unit test project deployment to deploy the unit tests along with the application database and the tsqlt framework database objects. However, be aware that only one pre/post-deployment file is executed, which is the one attached to the dacpac listed in the SQLPackage source file parameter; so composite referenced projects that have pre/post-deployment requirements will not have them executed. This can be a problem if your application database has a post-deployment script to load reference data.<br />
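The setting in question is the IncludeCompositeObjects publish option; a hedged example of using it from the command line (the server, database and file names are illustrative):
<div class="csharpcode">
<pre>
# Publishing the unit test dacpac; /p:IncludeCompositeObjects=True pulls in the application
# database and tsqlt framework objects referenced with the "same database" location.
& 'C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe' `
    /Action:Publish `
    /SourceFile:'MyDatabase.UnitTests.dacpac' `
    /TargetServerName:'(localdb)\Projects' `
    /TargetDatabaseName:'MyDatabaseUnitTests' `
    /p:IncludeCompositeObjects=True
</pre>
</div>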
<br />
A couple of closing notes:<br />
<br />
To create a unit test class within SSDT for tsqlt, add the following extended property on the schema which is to become a tsqlt test class: tSQLt.TestClass with a value of 1 <br />
e.g:<br />
<div class="csharpcode">
<pre class="alt"><span class="lnum"> 1: </span><span class="kwrd">EXECUTE</span> sp_addextendedproperty</pre>
<pre><span class="lnum"> 2: </span>@name = N<span class="str">'tSQLt.TestClass'</span></pre>
<pre class="alt"><span class="lnum"> 3: </span>, @value = 1</pre>
<pre><span class="lnum"> 4: </span>, @level0type = N<span class="str">'SCHEMA'</span></pre>
<pre class="alt"><span class="lnum"> 5: </span>, @level0name = N<span class="str">'MyUnitSchema'</span>;</pre>
</div>
<br />
And when creating a unit test, the stored procedure name must start with "test ".
<br />
<br />
Further reading:
<br />
<ul>
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">Automated Unit Testing SQL Server Code</a></li>
<!--
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/integrating-tsqlt-with-sql-server-data.html" target="_blank">Integrating tsqlt with SQL Server Data Tools</a></li>
-->
<li><a href="http://zogamorph.blogspot.co.uk/2014/08/integrating-tsqlt-with-team-foundation.html" target="_blank">Integrating tsqlt with Team Foundation Build Services continuous build</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank">Automating SQL Server Database projects Deployment</a></li>
</ul>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com5tag:blogger.com,1999:blog-7476247.post-5148588560497761152014-07-09T11:22:00.000+02:002015-02-04T11:43:24.314+00:00Automated Unit Testing SQL Server CodeI have never been a fan of automated unit testing of SQL code mainly because previous frameworks / tools, which I came across, all seem to miss the point of SQL code which is either retrieving or manipulating data.<br />
<br />
Some of the frameworks would only test SQL scalar functions, which limited code coverage. Others would test stored procedures, but only the output parameter and not the data being returned, which limited the usefulness of the test. The methods used to determine whether a test passed or failed were also limited, as some would only check the return value; that the correct number of rows were returned; that a single row of data appeared in the dataset; or that a value was returned in a given row and column. This meant that you could write code that would satisfy the test and still be functionally incorrect, as the rest of the data, which was not checked, could be inaccurate.<br />
<br />
Another problem that some of the previous frameworks forced you to solve was how to make the tests repeatable, as data has state which is changed by the code that you are testing. If you rerun a test it may not be against the same data state as the previous test run. This could cause tests to fail because the expected results were out of date, or because of constraint violations as the previous run's data wasn't removed. Also, getting the data into a starting state could be time-consuming and error-prone because of identity properties, referential integrity and table constraints.<br />
<br />
However, I recently came across a unit test framework called tsqlt which has changed my mind about automating unit testing against my SQL code. The framework supports the testing of any executable SQL code and offers approaches to isolate embedded stored procedures, triggers and functions. There is a range of asserts for tests to use to make sure that the code is functionally correct, ranging from data compares, structure compares and the existence of objects, to checking that calls were made to embedded code.<br />
<br />
The framework helps with repeatable unit tests by wrapping the execution of tests within database transactions and then rolling back once the execution has completed. Also, the framework offers tools for getting the database into a particular test state by faking the database tables and applying selective constraints (this is similar to mocking objects in .Net), which makes inserting data easier. More details of how to use and get the tsqlt framework can be found at the following URL: <a href="http://tsqlt.org/">http://tsqlt.org/</a>. <br />
<br />
I would like to share some of the methods I use to make using the framework a little bit easier.<br />
<br />
Creating the SetUp stored procedure within each test class helps reduce the amount of repeated code that each unit test would otherwise have to contain. The SetUp stored procedure is automatically called before each unit test within the test class is executed. The code I put in my SetUp procedures was the faking of the required tables and the inserting of any initial data required.<br />
<br />
To create a SetUp stored procedure, just create a stored procedure called SetUp, e.g.<br />
<br />
<!-- code formatted by http://manoli.net/csharpformat/ --> <br />
<div class="csharpcode">
<pre class="alt"><span class="lnum"> 1: </span><span class="kwrd">CREATE</span> <span class="kwrd">Procedure</span> [MyTestClass].[Setup]</pre>
<pre><span class="lnum"> 2: </span><span class="kwrd">AS</span> </pre>
<pre class="alt"><span class="lnum"> 3: </span><span class="kwrd">EXECUTE</span> [tSQLt].[FakeTable]</pre>
<pre><span class="lnum"> 4: </span>@TableName = N<span class="str">'[MyAppSchema].[MyAppTable]'</span></pre>
<pre class="alt"><span class="lnum"> 5: </span>, @<span class="kwrd">Identity</span> = 1</pre>
<pre><span class="lnum"> 6: </span>, @ComputedColumns = 0</pre>
<pre class="alt"><span class="lnum"> 7: </span>, @Defaults = 1</pre>
<pre><span class="lnum"> 8: </span><span class="kwrd">RETURN</span> 0</pre>
</div>
<br />
I found that grouping my unit tests based upon C.R.U.D operations helped to get the most out of the SetUp stored procedure, as my insert tests needed the tables to retain their identity columns and defaults to ensure that the insert stored procedures worked, whereas the updates only required the defaults and the reads only required a fake table.<br />
<br />
I found that putting my test data into table-valued functions / views reduced repeated code. It also helped me in creating my actual data, as I was able to apply filters and case statements to get the required results for the particular tests. <br />
<br />
For timestamp columns you can omit them from the expected results so they are not compared with the actual table. However, my method for dealing with my timestamp columns was within my SetUp stored procedures / unit tests and application code. <br />
<br />
Within the SetUp stored procedures I would alter the defaults to a static date and time, which was completely different to the initial data, before faking the table, and then use the same static value in my expected table within the unit test. Within my application code I changed my approach to updating my timestamp columns: instead of directly updating the column by using a call to the SYSDATETIME() function, I would update the column by using the DEFAULT keyword, e.g.<br />
<!-- code formatted by http://manoli.net/csharpformat/ --><br />
<div class="csharpcode">
<pre class="alt"><span class="lnum"> 1: </span><span class="kwrd">UPDATE</span> [AppSchema].[AppTable]</pre>
<pre><span class="lnum"> 2: </span><span class="kwrd">SET</span> [AppColumn1] = @pValue1</pre>
<pre class="alt"><span class="lnum"> 3: </span>, [AppColumn2] = @pValue2</pre>
<pre><span class="lnum"> 4: </span>, [AppColumn3] = @pValue3</pre>
<pre class="alt"><span class="lnum"> 5: </span>, [LastUpdated] = <span class="kwrd">DEFAULT</span></pre>
<pre><span class="lnum"> 6: </span><span class="kwrd">WHERE</span> [KeyColumnId] = @pKeyColumnId</pre>
</div>
<br />
Further reading:
<br />
<ul>
<!--
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/automated-unit-testing-sql-server-code.html" target="_blank">Automated Unit Testing SQL Server Code</a></li>
-->
<li><a href="http://zogamorph.blogspot.co.uk/2014/07/integrating-tsqlt-with-sql-server-data.html" target="_blank">Integrating tsqlt with SQL Server Data Tools</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2014/08/integrating-tsqlt-with-team-foundation.html" target="_blank">Integrating tsqlt with Team Foundation Build Services continuous build</a></li>
<li><a href="http://zogamorph.blogspot.co.uk/2015/01/automating-sql-server-database-projects.html" target="_blank">Automating SQL Server Database projects Deployment</a></li>
</ul>
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com2tag:blogger.com,1999:blog-7476247.post-20116666580226580072014-06-01T16:03:00.001+02:002014-06-10T20:18:32.609+02:00Configuring database files within Microsoft SQL Server Development Tools<p>Over the years SQL Server database development has evolved from non-structured tools (Query Analyser/Management Studio) to fully structured development tools (Visual Studio Database Professionals/SQL Server Data Tools). However there many important aspects of database development which hasn't changed over the years. One of which I would like to cover within this blog post is the database files configuration especially within Microsoft development tools: SSDTS and VSDB-Pro. </p> <p>Getting your database files size and growth incorrectly configured can hurt your database performance. As on a project we had our first test release into production of our data warehouse and run our ETL process. At the time the databases had to take on the default file settings of the model database. After the process completed within three hours our data base files all grew in excess of 40GB. We did another test release, after tearing down the environment, with database file settings worked into the deployment and re-run our ETL process. This rerun completed in almost half the time of the first test release. </p> <p>I have been using the Microsoft database development tools ever since they have been release. I have found that they do improve the development and deployment process of the database life-cycle. I also believe that all objects involved with your database should be included within the project of these tools so that there is one consistent method of deployment of the databases. </p> <p>Unfortunately there are some restrictions which make putting all the objects within the project impossible to achieve. One of these are that you cannot apply parameters within the file definitions. This is unsuitable as this enforces the database files to be the same size for all environments which don’t always have same disk space assigned. </p> <p>I would like to share a method, which works for both VSDB-Pro & SSDTS, which I've come up with to get around the restriction of not being able to parameterise the file sizes: </p> <ol> <li>Create some variables for the size of the file(s) and how much the file(s) growth should be e.g. <filegroup>FileSize , <filegroup>Filegrowth <a href="http://lh6.ggpht.com/-dOlgEgnG1xg/U5dL7G2i1YI/AAAAAAAAAM8/I5gUOsQ7dk4/s1600-h/CMD%252520Variables%25255B10%25255D.png"><img title="CMD Variables" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="CMD Variables" src="http://lh4.ggpht.com/-gU6eiYGyb8A/U5dL8pZqDuI/AAAAAAAAANE/dcpRfKvaJB8/CMD%252520Variables_thumb%25255B8%25255D.png?imgmax=800" width="640" height="401"></a> <li>Create your files with an initial size and growth configuration e.g. 
<br /><div class="csharpcode"><pre class="csharpcode">ALTER DATABASE [$(DatabaseName)]
ADD FILE ( NAME = <logicalfilename>
, FILENAME = '$(SQLDatabaseDataPath)\$(DatabaseName)_<logicalfilename>.'
, SIZE = 3072 KB
, MAXSIZE = UNLIMITED
, FILEGROWTH = 1024 KB )
TO FILEGROUP [<FileGroup>];</pre></div><br /><li>Then, within the post-deployment script, add the following SQL statement for each file, which checks whether the current size of the file is under the configured variable file size and updates the file configuration if it is:<br /><div class="csharpcode"><pre class="csharpcode">IF (SELECT ([size] * 8 / 1024) FROM sys.database_files WHERE name = '<logicalfilename>') < $(<filegroup>FileSize)
BEGIN
    ALTER DATABASE [$(DatabaseName)]
    MODIFY FILE ( NAME = N'<logicalfilename>'
    , SIZE = $(<filegroup>FileSize)MB
    , FILEGROWTH = $(<filegroup>Filegrowth)MB)
END</pre></div>In the above example the unit of measure is MB. </li></ol><br /><p>I would like to share some recommendations for using this approach: </p><br /><ul><br /><li>I would recommend creating one set of variables, <filegroup>FileSize and <filegroup>Filegrowth, per file group, as it is recommended to have the same file configuration for every file within a file group, due to the way that SQL Server handles multiple files in the same file group. <br /><li>Having instant file initialisation configured on your SQL Server boosts the speed of data file creation. To enable instant file initialisation, do the following steps: <br /><ul><br /><li>Open Local Security Policy on the SQL Server <br /><li>Select Local Policies \ User Rights Assignment \ Perform volume maintenance tasks. 
<br /><li>When adding a user, change the location to the local computer and add the following user: NT SERVICE\MSSQLSERVER (if your SQL Server is the default instance; otherwise use the following pattern: NT SERVICE\MSSQL$<instance name>) </li></ul><a href="http://lh3.ggpht.com/-n2lH-9o31_g/U4syk370sGI/AAAAAAAAANM/bV1hXw3dgKA/s1600-h/SQLSERVERIFI%25255B1%25255D.png" target="_blank"><img title="SQLSERVERIFI" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="SQLSERVERIFI" src="http://lh3.ggpht.com/-A_e_YZLxtkc/U4syldyqN0I/AAAAAAAAANU/kXbp93q0ArM/SQLSERVERIFI_thumb.png?imgmax=800" width="640" height="338"></a></li></ul>
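<p>Finally, as a quick sanity check after a deployment, the sizes that actually ended up on the server can be compared with the variable values by querying sys.database_files in the deployed database. The following is a minimal sketch (the size and growth columns are reported in 8 KB pages, hence the conversion to MB):</p><div class="csharpcode"><pre class="csharpcode">-- Run in the deployed database: shows the current size and growth setting per file.
SELECT name
     , type_desc
     , size * 8 / 1024 AS current_size_mb
     , CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;</pre></div>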
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-74763527269380129382014-05-19T13:51:00.002+02:002014-05-19T13:51:59.660+02:00Waking up Reporting Services with scriptingI have recently deployed a new reporting solution using SQL Server 2008 Reporting Services. I wanted to look into a solution to ensure that the first user, after Reporting Services had spun down its application pools, doesn't have to wait for them to be re-initiated. This was so that I could avoid complaints about Reporting Services performance.<br />
<br />
I was looking for a simple solution to either stop Reporting Services clearing down the application pools or find a way of speeding up the first request. I decided on an approach to speed up the first request made by a user, because I didn't want to make application settings changes in case other problems occurred.<br />
<br />
After searching for a method of speeding up the Reporting Services application pool start-up, I came across a useful PowerShell script. The script, which was written for PowerShell V2, made use of the .Net class <a href="http://msdn.microsoft.com/en-us/library/system.net.webrequest(v=vs.80).aspx" target="_blank">System.Net.WebRequest</a> to make a request to the report manager. <br />
<br />
I made a copy of this script and applied some changes; my version is available from the following location: <a href="https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21375" title="https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21375">https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21375</a><br />
<br />
I decided to change my approach after finding this script. My solution is to automate a request to Reporting Services, using the above script, at a time before my first user arrives. This forces Reporting Services to spin up its application pools and take the cost of the first request instead of the user. <br />
<br />
Between implementing my solution and writing this blog post, PowerShell V3 and V4 have been released. <br />
Here is a script for PowerShell V3 and above: <a href="https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21376" title="https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21376">https://onedrive.live.com/redir?resid=E6F77D4B3FF8D47C%21376</a> Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-8571903036371706792012-10-01T12:33:00.000+02:002014-05-15T15:43:03.217+02:00Couple of Tips for an External ActivatorI recently had to review a solution that was designed to keep an Oracle database up to date with changes made within a SQL Server database. The solution was implemented by using CDC, Service Broker and an External Activator to process the messages. Within this post I would like to share a couple of tips that I gave to help improve the efficiency of the External Activator process. <br />
<br />
<strong>Tip 1:</strong> Avoid the temptation of checking whether there are messages to process. <br />
<br />
This applies especially when the External Activator starts, as the programme is only launched when there are messages to be read. The only way to check whether there are existing messages is to run a SELECT doing a count on the queue. This is a wasted call to the database server and, depending on the number of messages within the queue, it could be an intensive query to run. Also, attempting to read an empty queue does not cause an error within SQL Server. <br />
<br />
The best way to check whether there are still messages to be processed is to use the RECEIVE command and check whether nulls have been returned. If nulls are returned, the External Activator process can finish, as there are no more messages to process. Another method is to wrap the RECEIVE command in a WAITFOR; this causes the RECEIVE to either return nothing (leaving your variables null) once the specified timeout has elapsed, or to return a message as soon as one has been received by SQL Server. <br />
<br />
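Here is a rough T-SQL sketch of that pattern. The queue name dbo.TargetQueue is made up for illustration only; substitute the queue your External Activator is attached to: <br />
<div class="csharpcode"><pre class="csharpcode">-- dbo.TargetQueue is a placeholder queue name for illustration only.
DECLARE @conversation_handle UNIQUEIDENTIFIER,
        @message_type        SYSNAME,
        @message_body        VARBINARY(MAX);

-- Wait up to 5 seconds; returns as soon as a message arrives,
-- otherwise the variables stay NULL and the process can exit.
WAITFOR (
    RECEIVE TOP (1)
            @conversation_handle = conversation_handle,
            @message_type        = message_type_name,
            @message_body        = message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;

IF @conversation_handle IS NULL
    PRINT 'No more messages - the External Activator process can finish.';</pre></div><br />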
<strong>Tip 2:</strong> Consider alternative ADO.net query methods. <br />
<br />
Within ADO.net there are several methods for executing a T-SQL command, which all have their pros and cons depending on the T-SQL command being executed. Also, considering your solution requirements, there may be a better way of getting the messages out of the database. <br />
If your requirement is that you only need to read one message at a time from the database, then it might be useful to consider using ExecuteNonQuery with output parameters to retrieve the message and its type. Using this method can slightly improve the communication between the application and SQL Server.
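A rough sketch of the T-SQL side of that pattern is shown below. The procedure and queue names (dbo.GetNextMessage, dbo.TargetQueue) are made up for illustration; an ADO.net SqlCommand could execute the procedure with ExecuteNonQuery and then read the two output parameters back: <br />
<div class="csharpcode"><pre class="csharpcode">-- dbo.GetNextMessage and dbo.TargetQueue are placeholder names for illustration only.
CREATE PROCEDURE dbo.GetNextMessage
    @MessageType SYSNAME        OUTPUT,
    @MessageBody VARBINARY(MAX) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Wait up to 5 seconds for a single message; the output parameters
    -- remain NULL when the queue is empty.
    WAITFOR (
        RECEIVE TOP (1)
                @MessageType = message_type_name,
                @MessageBody = message_body
        FROM dbo.TargetQueue
    ), TIMEOUT 5000;
END;</pre></div><br />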
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-59514274024377068842012-09-30T19:47:00.001+02:002012-10-10T22:23:56.199+02:00Missing the option of creating table dependent objects within separate filesI have been developing databases for over 12 years, for use with applications and business intelligence projects. One of the challenges I used to face was how to put the SQL Server database under source control. Over the years I came across several methods: direct database development with a nightly backup, script(s) placed under source control, and SSMS projects. <br />
Microsoft helped to solve the problem when they released Visual Studio Database Professional (DataDude), back when Visual Studio 2005 was around, which added a project structure, development methods, and build and deployment methods. Recently Microsoft has released a replacement database development tool: SQL Server Data Tools, which has some improvements and some missing features. <br />
One of the missing features is the folder structure choice, which I talk about in another post: <a href="http://zogamorph.blogspot.co.uk/2012/06/how-to-create-old-database-project.html" target="_blank">http://zogamorph.blogspot.co.uk/2012/06/how-to-create-old-database-project.html</a>. Another change is the dropping of the development method of one object per file, which started to be relaxed in the Visual Studio 2008 release. The option is still there; however, it's not easy to apply it as a self-imposed rule when using the new table designer. <br />
Here are a few reasons why I think having one object per file is a good idea: <br />
<ul>
<li>Forces all the objects to be named instead of having system-created names. This helps when finding build problems within the database project. Naming all your objects is also good coding practice, and it makes comparing databases with 3rd-party products easier. </li>
<li>Reduces the risk of code being lost, as there is less chance of check-in conflicts. This can also help to improve productivity by not having to waste time resolving them. </li>
<li>Can help relate and audit deployment changes against source control check-ins: if a file has been changed, then the object has been changed, meaning you can compare the list of changed files in source control to the deployment script to make sure that all generated changes within the deployment script are valid. </li>
<li>Can also help projects which choose the data model to be the version of truth for the database, as tables can stay under the data modelling tool's control while database developers add operational indexes without them being lost each time the data model is synchronised to the database project. </li>
<li>Reduces the number of deployments a developer has to do. When doing a get-latest, if source control only brings down index files then a developer might not have to waste time deploying the database. </li>
<li>It is the same style that some application developers apply in their coding standards: .Net developers, with the help of JetBrains ReSharper, put each of their objects in a single file for some of the same reasons I mention above.</li>
</ul>
I have raised a Microsoft Connect ticket to get an option added to the table designer to allow linked objects to be added as separate files, to make it easier to stick to the rule of one object per file. If you agree with me and would like to see the feature added, please vote for it and add your feedback against the following ticket: <a href="https://connect.microsoft.com/SQLServer/feedback/details/748552/create-table-dependent-objects-within-separate-files">https://connect.microsoft.com/SQLServer/feedback/details/748552/create-table-dependent-objects-within-separate-files</a><br />
<br />
<br />
<br />
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com0tag:blogger.com,1999:blog-7476247.post-20058288166790046892012-06-11T21:50:00.001+02:002012-06-11T21:50:28.675+02:00Cannot update a database that has been registered as a data-tier application<p>Within SQL Server 2008 R2 a new feature was introduced: the data-tier application database. This feature allows developers and database administrators to package a database schema so that it can be deployed to any other database server and versioned. More about this feature can be found here: <a href="http://msdn.microsoft.com/en-us/library/ee240739(v=sql.105).aspx">http://msdn.microsoft.com/en-us/library/ee240739(v=sql.105).aspx</a> or <a href="http://dacguy.wordpress.com/">http://dacguy.wordpress.com/</a></p> <p>This feature has also been added, as I have just found out, to previous versions: SQL Server 2005 via Service Pack 4 and 2008 via Service Pack 2. However SQL Server Management Studio for SQL Server 2008 hasn't been updated with the tools for the new feature; you have to use the SQL Server 2008 R2 version. The reason for including the support in the previous versions might have something to do with the release of the new SQL Server Data Tools (SSDT), the replacement for Visual Studio Database Professional, which uses DAC Packs and the DAC framework to deploy a database.</p> <p>I have been using SSDT to deploy a new SQL Server 2008 (SP3) database for my current project. When I was trying to deploy an update to my database I was getting the following error: Cannot update a database that has been registered as a data-tier application without also updating its registration.</p> <p>I was surprised to be getting this error; even though I did once register my database as a data-tier application, I had since re-created the database a few times without registering it as a data-tier application. I was even getting the error when I manually dropped and re-created the database. Dropping the database within SQL Server 2008 doesn't delete the data-tier application registration, so if another database is created with the same name the registration is maintained when using the DAC framework.</p> <p>To get around this problem you need to remove the data-tier application registration. To do this with SQL Server 2008 Management Studio you will have to run the following SQL statement: DELETE FROM [msdb].dbo.[sysdac_instances_internal] WHERE [instance_name] = '<Databasename,varchar,>'.</p>
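<p>If you want to see what is registered before deleting anything, a small guard around the delete can help avoid removing the wrong row. The following is a minimal sketch; 'MyDatabase' is a placeholder for the name of your database:</p><div class="csharpcode"><pre class="csharpcode">-- 'MyDatabase' is a placeholder for your database name.
IF EXISTS (SELECT 1
           FROM [msdb].dbo.[sysdac_instances_internal]
           WHERE [instance_name] = 'MyDatabase')
BEGIN
    DELETE FROM [msdb].dbo.[sysdac_instances_internal]
    WHERE [instance_name] = 'MyDatabase';
END;</pre></div>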
Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com3tag:blogger.com,1999:blog-7476247.post-45204528236322043272012-06-05T13:02:00.001+02:002012-06-05T13:02:38.634+02:00How to create the old database project folders for SQL Server Data Tools.<p>I have been working with the new SQL Server Data Tools database project. This is going to be the successor to the Database Professional tool, also known as DataDude. There are several differences in how this product works.</p> <p>One of them is the new tool's project folder structure. There is now no folder structure within the project, which means any file is automatically created in the root folder of the project. This is unlike Database Professional, which had two comprehensive folder structures; the options were an object-type folder tree or a schema-related folder tree. My preference was always the object-type folder structure, where script files containing similar objects were all stored in the same folder, e.g. tables under a table folder, primary keys under the keys folder, views under a view folder, etc.</p> <p>If you miss the object-type folder structure, as I do, there is a method to create this structure very quickly within your new database project by using the following steps: <ol> <li>Create a new SQL Server Data Tools project <li>Once the project has been created, edit the project file: <ul> <li>Right-click on the project <li>Select the Unload Project option <li>Right-click on the project (which will be greyed out) <li>Select the Edit Project option</li></ul> <li>Then copy the contents of the file <a href="http://sdrv.ms/LkIxvc" target="_blank">DBProFolderList.xml</a> into your new SQL Server project file, within the Project node. I would recommend inserting it just before the closing Project tag. <li>Reload the project.</li></ol> <p>However, once you have the folder structure in place the tool will not honour it. You will have to spend time ensuring that object script files are created in, or moved into, the correct location. Also be aware that the schema compare tool, when reverse engineering into the project, uses the schema-related folder tree structure to place newly created objects from the database.</p> Steven Wrighthttp://www.blogger.com/profile/10250648827790297322noreply@blogger.com4