9 Dec 2009

My Adventures in Codeplex

In October 2009 I created two CodePlex projects which I would like to tell you about:

SQL Server Reporting Services MSBuild Tasks (ssrsmsbuildtasks)

A few years ago I created some MSBuild tasks to help deploy reports for my project (details can be found here); since then my tasks have been re-used in a few other projects within my company.  However, lately we have been doing a few projects which make use of Reporting Services integrated mode with SharePoint, which meant that my tasks were unusable as they only worked with a native mode report server.

So I have been updating them to include support for an integrated mode report server.  Before adding the integrated mode support I also took some time to rewrite my original native mode tasks.  The rewrite reflects an improved understanding of MSBuild, making more use of Item Groups and metadata to set the report server properties of the report items being deployed.
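
As a rough illustration of the Item Group approach, each report item can carry its server properties as metadata. The task and metadata names below are assumptions for illustration only, not the project's actual names:

```xml
<ItemGroup>
  <!-- Each report carries its deployment properties as item metadata -->
  <Reports Include="Reports\SprintBurndown.rdl">
    <TargetFolder>/Scrum/Sprint</TargetFolder>
    <DataSourceName>TfsReportDS</DataSourceName>
  </Reports>
  <Reports Include="Reports\ProductBurndown.rdl">
    <TargetFolder>/Scrum/Release</TargetFolder>
    <DataSourceName>TfsReportDS</DataSourceName>
  </Reports>
</ItemGroup>

<Target Name="DeployReports">
  <!-- Hypothetical task name; the real tasks ship with ssrsmsbuildtasks -->
  <DeployReport ReportServerUrl="http://localhost/reportserver"
                ReportFiles="@(Reports)" />
</Target>
```

The task can then read each item's metadata (e.g. %(TargetFolder)) to decide where and how to publish that report, rather than taking one flat list of properties.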

I placed the tasks on CodePlex as I needed a better way of sharing them than through blog posts.  Using the tasks also helps to get your reporting project integrated with any continuous build of your project.  Another way they can help is with automating unit testing of your reporting project, which I have talked about how to do in a previous post.

The SQL Server Reporting Services MSBuild Tasks are available from the following URL: http://ssrsmsbuildtasks.codeplex.com/

SQL Server Reporting Services Slide Shower (ssrsslideshower)

While working on our Scrum for Team System process templates for Team Foundation Server (http://www.scrumforteamsystem.com) I created a useful tool.  The tool shows reports that are important to a scrum team in a sort of slide show presentation; more details can be found in this previous post.

While writing the tool I realised that most of its mechanics don’t depend on Team Foundation Server or our Scrum template.  I also thought it would be useful for people who wish to demo any reports within their Reporting Services infrastructure.  So I have removed the dependencies on Team Foundation Server and our Scrum template.

I have plans to improve the tool, within the limits of my coding knowledge, to allow report parameter values or labels to be set as it renders the reports within the slide show.  I also plan to get it working with an integrated mode report server.  However, my ssrsmsbuildtasks project has taken most of my time; over the next few weeks I plan to start making my changes to this project.

I placed the project on CodePlex because within our Scrum for Team System V3 template for Team Foundation Server 2010 (Beta 2 is now available from this link) we have dropped the inclusion of this tool and plan to release it on CodePlex instead.  This is mainly because it was a value-add tool and not needed as part of the process template.  Also, the changes made to the template and its reports mean the report parameters no longer have clear defaults, which fits into the improvement plans I mention above.

The SQL Server Reporting Services Slide Shower is available from the following URL: http://ssrsslideshower.codeplex.com/

7 Dec 2009

SfTS V3 Beta 2: Helping to see the wood for the trees

Within any application lifecycle management methodology, reports are a useful tool to help see how well, or not, the project is progressing.  Within the Scrum methodology the reports used for tracking progress are the burndown reports.  In the Scrum for Team System process templates for Team Foundation Server these reports are available to help with managing the project.

With a manually run burndown report, when an unexpected trend in the burndown happens there is an option to manually add an annotation.  This helps during a retrospective when talking about why the trend oddity occurred.  In our previous versions of the Scrum for Team System templates our burndown reports would only show each day's end-of-day aggregated totals.  Any unexpected totals would have to be investigated to understand the underlying cause.  Previously this would have involved using Team Foundation Explorer to review the history and comments, using Excel pivot tables to query the data, or writing a report against the warehouse.  This can take some time to create and analyse the result set.

In connection with the above, for the last few months I, with the help of my colleague Kate Begley, have been working on writing the reports for the Scrum for Team System V3 process template (Beta 2 has been released and details are available from here), part of which has been adding drill-through report functionality to the reports.  This helps with the investigation, for example explaining why there was an unexpected upward trend in the sprint burndown seen in the report below:


To get drill-through details for the end of a day, click on any data point, for example within the blue circle in the image above; the drill-through details report will be displayed as seen below:


In this example the details report shows the work items broken down into the incomplete tasks which contribute to the sprint burndown line's aggregated daily totals, the tasks which have been completed up to that day, and the tasks that have been descoped within the selected sprint / team.  The incomplete tasks are further grouped by their states, showing the subtotals for each state as well as the grand total of incomplete tasks which makes up the burndown trend.

To investigate the unexpected upward trend in the sprint burndown I would need to compare the totals with those of the previous day.  To do that I would export the details report of the day concerned and the previous day's to Excel and compare these side by side, as shown below:


The above shows that the reason for the upward trend in this example was new work being added to the sprint.  Keep in mind this might not always be the case, as other outcomes could have happened:

  • New tasks could have been added, and an incomplete task could have had its work remaining adjusted upward because of unexpected issues
  • Alternatively, the currently incomplete tasks could have had their work remaining adjusted upward because of impediments

Other reports in beta 2 which have the drill-through functionality are:

  • QA / Bug History Chart
  • Release / Product Burndown Chart by Day
  • Release / Product Cumulative Flow
  • Sprint / Sprint Burndown Chart
  • Sprint / Sprint Cumulative Flow

27 Aug 2009

Unit Testing Report within Reporting Services: My Theory

I have been working with Microsoft SQL Server Reporting Services, since its first release, on many of my projects.  One of the biggest tasks I find with writing a report is testing it and making sure the data that is displayed is correct.  My method of developing and testing reports is as follows:

  • Write the queries for creating the report data set(s) outside Reporting Services, with the parameters defined as they will be configured within the report.
    • Define the parameters that the report data set will need and assign a valid value.
    • Execute the queries.
    • Inspect the results to make sure that they meet the expected results.
    • Repeat with a new valid value, or fix any issues.
  • After one, two, or even three passes have worked, I take the queries and put them into report data sets.
  • Then I define the report layout.
  • Then I preview the report to make sure it’s still correct.
  • Then I release to our testers for testing.

With this amount of manual testing there is potential for mistakes to creep past for several reasons: the test data often isn’t diverse enough to test all the logic of the queries; the report data has a lot of rows, which aren’t all inspected; or testing was underestimated and time runs out, so the amount of testing is reduced.

At the beginning of last year (2008) I looked into a way of testing the reports with an automated tool.  I was looking for a way to take the information the report was displaying and compare it to some expected results.  The need to test the report itself, and not just the underlying queries, was for the following reasons:

General Reasons:

  • The report could be doing more data manipulation outside the SQL queries that have been unit tested:
    • Embedded reporting calculations
    • Embedded VB.Net code
    • External class libraries code

(all of the above could be unit tested separately, but could still produce incorrect information if wired up incorrectly)

  • Test visibility settings for sections of the reports.
  • Test any embedded report calculations
  • The report could be using MDX, and there isn’t an easy unit testing framework for MDX

My Project Reasons:

  • All the reports used embedded SQL and MDX.
  • Writing reports against a 3rd party data warehouse and processes
  • There is no database project in which to use a unit test framework for SQL.

Searching the internet I came across this article: Using Visual Studio 2005 to Perform Load Testing on a SQL Server 2005 Reporting Services Report Server, which gives a guide on how to stress test reports.  This is helpful to ensure that the reporting solution/architecture can handle the demands placed upon it.  I also thought that this method of testing could be used for two other forms of testing:

  • Simple unit testing: ensure that the report can render.  Placing this within a build process that adds data into the report database, then runs the web test to check the report renders, can help highlight changes to the database that break the reports.  This can happen if the reports are based on an application database used for simple reporting and the database is managed by an ORM tool (which at least one of our projects is doing).
  • Build verification testing: does the report render?  Running a web test after deployment would highlight any error in the deployment of the solution or of the infrastructure.

I helped to get this method of testing implemented, for the above-mentioned reasons, on an application project, where it did help to capture and highlight builds that broke the reports.  However, it didn’t meet my requirements fully, as this method, being based on web tests, only inspects the HTTP responses.

It wasn’t until recently that I started looking at unit testing reports again, because on my current project testing the reports is one of the hardest jobs, for the following reason: the reports are developed against a platform, so I have no control over the flow of the data.  I looked again at the beginning of this year (2009) and again didn’t find much that would help me.  So I went back to the drawing board to design my own approach, and remembered that in June 2008 I had used Reporting Services in an interesting way: I wrote a report which was created only to generate a dynamic XML document for a web page to consume and bind to another control (more information can be found here: Could HTTP Soap / T-SQL endpoint be replaced by Reporting Services XML Reports?).  So I took a further look into the XML render option to see how reports that use charts, which my current project heavily utilises, would render into XML, and found that all the values of the data points within the charts were exposed.  This meant that I could test the reports in the way that I intended.

My next challenge was to design the process that would render the report into XML and then perform some sort of test.  The test itself was easy: if the report was going to produce an XML document, then I should be able to determine how the report would render that XML, based on the expected results from a known set of data in the data store the report queries, the known parameter values passed to the report, and the XML rendering settings of the report.  So I decided to do a comparison of the report XML against a prepared XML document.  The next problem was how to do the comparison, as there wasn’t a tool that I could find; I asked a few of my more .NET-savvy colleagues as well, with no joy.

Then I remembered that SQL Server Integration Services (SSIS) has an XML task that can do an XML compare, comparing the elements’ attributes and, more importantly, their values.  So I set about creating an SSIS package to do the following:

  • Loop through a control file that contains the location of the reports to test, the required parameters and their values, and the location of the expected XML output.
  • Download the reports in XML format from the report server.
  • Use the XML task to compare the output from the report server against the expected results.

Some initial testing of this approach on my reports gave good results.  However I haven’t been able to fully implement the approach for the following reasons:

  • Need to work on creating the test data for the project.
  • Need to work on getting the test data through the 3rd party process.
  • Need to work out how to deal with time, as many of my reports display time differently.
  • Need to work out how to make it part of the build/test process.

I have also recently found that my approach is similar to a tool that is available to buy from Innosphere.

3 Aug 2009

SfTS V3: The Beta 1 report newcomers

As mentioned in my previous blog post, there are some new reports available in the Scrum for Team System V3 Beta 1 Template. I would like to give a high level overview of the new reports as there will be full report guidance available with the template explaining in more detail how to use the reports.
The new Diagnostic report isn’t really a new report; it’s more of a spin-off from another report. Version 2.x included a Version report that provided details about the status of the Team Foundation Server warehouse process. This covered information like:
  • When the process last ran
  • When the process finished loading data into the warehouse
  • How often the warehouse process would run
  • When the cube was last changed

Team Foundation Server 2010 provides more information about the Team Foundation Server warehouse process, including:
  • Whether the warehouse is blocked from running
  • Whether OLAP processing is blocked from running
  • The last full process start and end times
  • The last incremental process start and end times

As there was more information available, we split the report out so it was less cluttered and easier to read. The Diagnostic report is really useful for investigating reporting issues such as reports showing out-of-date data.
The Sprint Team Aggregate report, which is new to version 3, is like a classic Sprint burndown in that it shows the burndown of the Sprint Backlog Tasks; however, the report separately plots each selected team within the selected Sprint. There is also the option to show an aggregated total of the selected teams. This gives a comparison of how well the teams are working.


The final new report, and saving the best till last, is the “Sprint Burndown – Tasks and Stories” report, which provides a comparative view of story versus task work burndown at a Sprint or team level.


This is based upon the experiences of our agile coaches and other Scrum teams about what the Sprint burndown should be focusing on: getting Product Backlog items “Done”. So this report shows how many story points have been burnt down, as well as the traditional view of the task burndown.

One of the issues that this report can show is how healthy the task burndown actually is within the sprint. The team(s) could be completing tasks that are not helping towards the completion of stories, which can highlight quality issues and too much work in progress. Another use for this report is to highlight whether the Product Backlog Items have been correctly sized, or whether the Sprint Backlog Tasks have been broken down correctly according to the size of the associated Product Backlog Item.

This is also our first report to make use of the Reporting Services 2008 enhanced charting features.

29 Jul 2009

SfTS V3 Operation: Reports face lift.

For the last few months I, with the help of my colleague Kate Begley, have been working on writing the reports for the Scrum for Team System V3 process template; Beta 1 has been released and details are available from http://blogs.conchango.com/sfts/archive/2009/07/28/scrum-for-team-system-v3-beta-programme.aspx.

There are a number of changes, from a reporting aspect, introduced in the Team Foundation Server 2010 platform and the Scrum for Team System V3 process template, such as: warehouse schema changes; support for Reporting Services 2008 only; a sophisticated multi-team support model; and a new QA model. As a result the reports have been completely rewritten, so there is a limited number in this beta release.
However I would like to highlight some of the smaller updates that have been applied to the reports to help their readability and usability, which are as follows:
  • A reporting site folder structure has been created to make browsing the reports easier
  • Tool tips on the charts to display the unit, value and date that each point on the chart is showing.
  • A Today strip line, with tool tip showing where Today is.
  • Small descriptions have been embedded into the reports, so when viewing a report in Report Manager or Team Foundation Server Web Access its description can be seen, giving a snapshot of its function.
  • A report footer has been added with the same embedded description and a link to the report guidance (which will be coming soon).
  • The portal reports can be resized within the SharePoint portal by changing the height and width report parameters, to help with customising the portal.
The reports that are still available from previous versions are as follows:
  • Version – Diagnostic
  • Bug History Chart – QA
  • Product Burndown Chart By Day – Release
  • Product Cumulative Flow – Release
  • Sprint Burndown Chart – Sprint
  • Sprint Cumulative Flow – Sprint
Alongside these are the additional new reports for version 3, which I will cover in another blog post:
  • Warehouse Status - Diagnostic
  • Sprint Burndown - Tasks and Stories Chart – Sprint
  • Sprint Team Aggregate – Sprint

30 Mar 2009

Reporting Services 2008 Report automatic upgrade gotcha.

For the Scrum for Team System project I have been focusing on the reports, and I came across an issue caused by the way the Reporting Services 2008 "automatic upgrade" feature upgraded our reports.

The Scrum for Team System template contains over 20 reports, all written for Reporting Services 2005.  This was because Team Foundation Server 2008 only worked on SQL Server 2005 and Reporting Services 2005.  Then in August 2008 Team Foundation Server 2008 Service Pack 1 was released, which added support for Team Foundation Server 2008 to run on the SQL Server 2008 platform.

With Reporting Services 2008 there is a new report definition language schema. However, Reporting Services has a backward compatibility method, as described below:

The report server validates a report definition file against the included reference to the RDL namespace, which specifies the version of the report definition schema being used.

The report is automatically upgraded the first time it is viewed, but the stored report definition file remains unchanged.

So if you edit the report from the server it will still be in the old schema, not the new one.

To get the automatically upgraded report definition you need to open the report in one of the authoring tools: Business Intelligence Development Studio (BIDS) or Microsoft Report Builder 2.0.

So I did some testing to see how the 2005 reports would look after the Reporting Services 2008 processing.  I found that two reports didn’t render the same as they did on Reporting Services 2005.  The two reports had a common layout: they were using a sleazy Reporting Services hack, the green-bar matrix.  The cell which controls the background colour was a lot more visible than in the original, as was the text.  So I rewrote the reports and they were released with version 2.2.

Then one of our template users reported that one of the other reports had stopped working.  The report started to give the following error:

The processing of Parent for the tablix ‘table1’ cannot be performed. The comparison failed. Please check the data type returned by the Parent.

I was able to trace the error to an option on one of the report’s defined groups: Recursive parent, which can be found under Advanced; removing the option made the report work again.  I haven’t been able to trace why the error happens; it only seems to occur when our data comes back in a certain way, which I am currently looking into.

Some practical SQL Spatial tips.

I have just finished a project where I made a lot of use of the SQL Server spatial types to do some processing and loading into the database. Here are some of the lessons I learnt:

  • The first method that needs to be called after instantiating a SqlGeographyBuilder object is SetSrid(), followed by BeginGeography() and then BeginFigure().
  • Before sending a SqlGeometry object to SQL Server, use the STIsValid() method to ensure that the geometry object is valid.

    I created a console application which loaded line data, in the OS (Ordnance Survey) coordinate system, from a flat file. Some of the line data within the file wasn’t as correct as I had been led to believe. The application created the SqlGeometry object in .NET, but SQL Server then raised an error when it received the object and tried to save it into the table.

  • The STPointN() function is 1-based, as stated in Books Online.
  • If your query needs to select some data derived from the spatial functions, you can help performance by using persisted computed columns on the base table.

    I had to create a view which needed the latitude and longitude of the start and end points of the geography line, as well as the distance. The view would take about 20 seconds to execute. Once I created some persisted computed columns on the base table, the view finished executing in under 3 seconds.

  • You can create more than one spatial index on a spatial column.
  • Joining tables using the spatial functions isn’t great for performance.
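
As a sketch of the persisted computed column tip, the derived values can be computed once at write time instead of per query. The table and column names below are made up for illustration, and this assumes a geography column:

```sql
-- Precompute the derived values once, at write time, instead of per query
ALTER TABLE dbo.RouteLine
    ADD StartLat AS (Geom.STStartPoint().Lat)  PERSISTED,
        StartLon AS (Geom.STStartPoint().Long) PERSISTED,
        EndLat   AS (Geom.STEndPoint().Lat)    PERSISTED,
        EndLon   AS (Geom.STEndPoint().Long)   PERSISTED,
        LengthM  AS (Geom.STLength())          PERSISTED;

-- The view can now read the persisted columns rather than
-- re-evaluating the spatial methods for every row
SELECT StartLat, StartLon, EndLat, EndLon, LengthM
FROM dbo.RouteLine;
```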

1 Mar 2009

Working With Reporting Services Multiple Value Parameters

A couple of colleagues of mine were working on a complicated Reporting Services project.  They had one report with a parameter which allowed a user to select multiple numeric values; the problem they were facing was how to pass the selected values into the query.

They were looking for a way within SQL Server to split a string by a delimiter, as they were using a stored procedure to access the data, and multiple value parameters, regardless of parameter type, are passed in as a string of comma-separated values.
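
For the stored procedure case, the comma-separated string has to be split inside SQL. Here is a minimal sketch of such a splitter; the function name is invented, and on SQL Server 2016 and later the built-in STRING_SPLIT function does this for you:

```sql
-- Splits '10,3,4,56' into a table of ints (works on SQL Server 2005+)
CREATE FUNCTION dbo.SplitCsv (@list varchar(max))
RETURNS @values TABLE (value int)
AS
BEGIN
    DECLARE @pos int, @next int;
    SET @pos = 1;
    SET @next = CHARINDEX(',', @list, @pos);
    WHILE @next > 0
    BEGIN
        INSERT @values VALUES (CAST(SUBSTRING(@list, @pos, @next - @pos) AS int));
        SET @pos = @next + 1;
        SET @next = CHARINDEX(',', @list, @pos);
    END
    IF @pos <= LEN(@list)
        INSERT @values VALUES (CAST(SUBSTRING(@list, @pos, LEN(@list)) AS int));
    RETURN;
END
```

Inside the stored procedure the parameter can then be used as:
Where column1 in (select value from dbo.SplitCsv(@pMultiValueParam))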

One thing they weren’t aware of was that with a multiple value parameter the behaviour differs depending on how the data is being retrieved, i.e. via a stored procedure or an embedded SQL statement.
If the data set uses a direct query to gather the data, then the multi value parameter can be used in the query with an IN clause like so:

Where column1 in (@pMultiValueParam)

What happens is that before the query is sent to SQL Server, the report server substitutes @pMultiValueParam with a comma-separated list, which makes a valid IN clause like so:

Where column1 in (10, 3, 4, 56)

It has also been blogged (http://www.socha.com/blogs/john/2009/03/tfs-report-issues-with-sql-server-2008.html) that in Reporting Services 2008 there is a change in this behaviour.  You could see the following message when there are no values to select:

Incorrect syntax near ')'.

When there are no values to pass, Reporting Services 2008 simply removes @pMultiValueParam, so you get something like "Where column1 in ()", which prevents the query from running.

The solution is to add an expression to the pMultiValueParam report parameter. The expression is evaluated in order to determine what is passed to the query.

Like so:

=IIF(Parameters!pMultiValueParam.Count > 0, Parameters!pMultiValueParam.Value, "")

23 Jan 2009

How to load spatial data into SQL Server 2008 from .Net

I have been working on a project which made use of the spatial data type geography within SQL Server 2008.

An issue I had was how to load the geography data from a KML file into SQL Server 2008. Currently there are no out-of-the-box tools to do this. There is a 3rd party tool, Safe FME, which offers either its own tool or components which extend Integration Services. This was overkill for my needs as I only had to do the load once.

So I wrote a console application which parses the KML file to extract the point data, converts it to a SqlGeography type, and stores it in the database.

To use the SqlGeography type from C# you need to add a reference to the Microsoft.SqlServer.Types namespace, which can be found in Microsoft.SqlServer.Types.dll.
Then use the following code to create the SqlGeography instance:

// use SqlGeographyBuilder to help create the SqlGeography type
SqlGeographyBuilder geographyBuilder = new SqlGeographyBuilder();
string[] longLat;
SqlGeography geography;

// get the co-ordinates element from the KML
XElement coOrdinates = element.Element(ns + "Point").Element(ns + "coordinates");

// set the Spatial Reference Identifier that will be used to create the point
geographyBuilder.SetSrid(4326);

// state the type of geography object to create
geographyBuilder.BeginGeography(OpenGisGeographyType.Point);

longLat = coOrdinates.Value.Split(new char[1] { ',' });

// add the first figure: a lat/long point (KML stores longitude first)
geographyBuilder.BeginFigure(Convert.ToDouble(longLat[1]), Convert.ToDouble(longLat[0]));

// close the figure and the geography
geographyBuilder.EndFigure();
geographyBuilder.EndGeography();

// get the builder to return the constructed SqlGeography
geography = geographyBuilder.ConstructedGeography;

return geography;

After creating the data type I then needed to write the code to send the data to SQL Server, which is as follows:

// set the command text
string sqlCommandText = "insert into [dbo].[Location]([Location],[CoOrdinates]) Values(@pLocation,@pCoOrdinates)";

// create the command object and set the command type
SqlCommand sqlCommand = new SqlCommand(sqlCommandText, sqlConnection);
sqlCommand.CommandType = CommandType.Text;

// create and add the parameter for the standard sql data type
sqlCommand.Parameters.Add(new SqlParameter("@pLocation", name.Value));

// create and add the parameter for the sql geography data type;
// as this is a system CLR type, the parameter's UDT type name must be set
sqlCommand.Parameters.Add(new SqlParameter("@pCoOrdinates", geography) { UdtTypeName = "Geography" });

// execute the command
sqlCommand.ExecuteNonQuery();

Like the Report Viewer control, the Microsoft.SqlServer.Types assembly is not installed with .NET. For the application to work on another computer, without having to install SQL Server or its client tools, a redistributable package needs to be installed.

The SQL Server System CLR Types package contains the components implementing the new geometry, geography, and hierarchyid types in SQL Server 2008. This component can be installed separately from the server to allow client applications to use these types outside of the server.

A version can be found here:


19 Jan 2009

Business Intelligence with SQL Server 2008 Geometry Data Type.

I am currently working on a project which automates a business intelligence process based upon images and image recognition. The process is as follows:

  • Take a photograph
  • Put the photograph through the image recognition software and record the recognition data.
  • Process the recognition data into meaningful business metrics
  • Produce a report on the metrics

The part of the process above that I am helping with is processing the recognition data (the hit data) into meaningful business data. This process takes the hit data (which is a point), then creates some square blocks, which I use to group the hit data and perform some metric operations. The metrics currently use data about the area and the density of hits within the area. To help process this data I decided to make use of the SQL Server 2008 spatial type: geometry.

Firstly I would like to point out that I didn’t have to use the geometry type. As I am currently dealing with square areas, I can group or locate all the hits that fall within an area by using the following filter clause: HitY between MinY and MaxY and HitX between MinX and MaxX. However, reading that filter alone can cause misunderstanding about what it is supposed to be doing. Using the geometry data type, that filter is replaced by something which does state what it is doing: area.STIntersects(HitPoint) = 1, with the added bonus of less code being written and read, and less chance of a bug being created.
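
As a sketch of the two equivalent filters (the table and column names are invented for illustration):

```sql
-- Coordinate-range version: correct, but the intent is implicit
SELECT b.BlockId, COUNT(*) AS Hits
FROM dbo.Block b
JOIN dbo.Hit h
  ON h.HitX BETWEEN b.MinX AND b.MaxX
 AND h.HitY BETWEEN b.MinY AND b.MaxY
GROUP BY b.BlockId;

-- Geometry version: the intent (hits inside the block's area) is explicit
SELECT b.BlockId, COUNT(*) AS Hits
FROM dbo.Block b
JOIN dbo.Hit h
  ON b.Area.STIntersects(h.HitPoint) = 1
GROUP BY b.BlockId;
```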

Another point I would like to make is that by using the geometry type my code is adaptable to change. Currently our process works with square areas, and in the future that might change. With the standard SQL filters, all the filter clauses over the data would have to be updated and made more sophisticated, which could mean a longer development and testing cycle and an increased risk of issues. With the geometry type, only the way that the area polygon is created would need to be updated, which helps reduce the development and testing cycle, which in turn reduces the risk of issues.

Another benefit of having some of the data stored as the geometry spatial data type is that I was able to evaluate and debug the steps of the process far more easily. This is due to the way that SQL Server Management Studio displays spatial data, as shown below.