Pragmatic SQL Server
Wednesday, December 5, 2012
The need for a better Excel to SQL Server connection...
Just a bit of a rant here...
If there were ever two products from Microsoft made for each other, they are Excel and SQL Server... (well, that is, now that the newer xlsx format is available).
But sadly, these two products don't 'schmooze' as easily as they might.
Oh sure, there are plenty of ways to 'integrate' Excel with SQL Server, but they are all sadly wanting. To prove that point, just Google (or Bing, or whatever) something like 'how do I submit parameter queries to SQL Server from Excel' and you'll see what I mean.
All those GUI tools don't do the job well.
Writing .NET or ADO code, or any of the other alphabet soup of technologies... well, you can do it, but you really need to know those languages and their 'gotchas' to make them work.
If you've ever developed an SSRS report in BIDS, you know how simple and straightforward it is.
Build a data source. Set the connection for the data source. Write the query. Need input parameters? Define them in the query object. Done. Now run the query - boom, you have your answer set.
Here's the real question ... Why in the world isn't this kind of functionality provided as a means to integrate Excel to SQL Server?
Surely Microsoft can see that this capability ought to exist for Excel. If they cannot, it is beyond me why... or maybe it isn't beyond me.
Because after all, if SQL developers had such a tool, they wouldn't need SharePoint or all the other middleware tools, would they?
They wouldn't need anything at all except Excel and SQL Server...and maybe that's why it won't ever happen?
provisioning for the unanticipated...
I'm in the process of setting up an SSAS cube.
I'm early in the process... tweaking and perfecting the ETL for the fact and dimension tables.
I just ran into something I should have thought of before but had never encountered until now (not because I'm brilliant, far from it; it's because until now I was just ... plain ... lucky).
Here is what happened. I was feeling pretty smug as the ETL had been running perfectly for about a week when one day I came in and noticed a failed job in SQL Monitor (Red Gate).
I began to investigate why the job failed. It failed trying to re-apply an FK create on a table after the table had been updated.
After digging into why the FK create wouldn't happen, I discovered it was nothing about the syntax or mechanics of the ALTER TABLE statement, but rather a disconcerting disconnect between a fact table and an associated dimension table: the fact table had exactly one more row than should have been there.
That extra row had a key value of '0' (put there by a default constraint in the CREATE TABLE statement). The '0' told me that for this row there was no matching value in the dimension table I was trying to relate it to! In a perfect world that never happens (luck); in the real world it can and does (my luck ran out).
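(By the way, a quick way to spot that situation is a query like the one below. The table and column names here are hypothetical stand-ins for your own fact and dimension tables; the idea is simply to list fact keys that have no match in the dimension.)
-- Hypothetical names: list FactSales rows whose ProductKey has no
-- matching row in DimProduct (the default constraint supplies the 0).
SELECT f.ProductKey, COUNT(*) AS OrphanRows
FROM dbo.FactSales AS f
LEFT OUTER JOIN dbo.DimProduct AS d ON f.ProductKey = d.ProductKey
WHERE d.ProductKey IS NULL
GROUP BY f.ProductKey
;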
I have since solved the mystery of why and how that extra row got there, but here's the point/suggestion I'm blogging about today.
When you are loading your dimension tables, it is a good move to create a row for exactly the condition I describe above. My dim tables now all have a row with a key of '0'; its remaining columns hold 'N/A' (for text data types) or '0' (for numeric data types).
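(Here is a minimal sketch of what seeding that 'row zero' can look like. The dimension and column names are hypothetical, and I'm assuming the surrogate key is not an IDENTITY column; if it is, you'd wrap the insert in SET IDENTITY_INSERT ON/OFF.)
-- Seed an 'unknown member' row with key 0 in a hypothetical dimension,
-- so fact rows that fail the lookup still satisfy the FK.
IF NOT EXISTS (SELECT 1 FROM dbo.DimProduct WHERE ProductKey = 0)
BEGIN
    INSERT INTO dbo.DimProduct (ProductKey, ProductName, ProductCategory, ListPrice)
    VALUES (0, 'N/A', 'N/A', 0);   -- 'N/A' for text columns, 0 for numeric columns
END
;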
That way the load won't fail, and you won't be sent on a mind-racking journey through Microsoft's error messages about why a key could not be created (ultimately, in my case, it wasn't about the key's ALTER TABLE statement at all, but about the data in the two tables I was trying to relate).
You can then also code corrections into the ETL for fact rows that end up joined to the '0' row in the key column, and have them run AFTER the load job has completed (this time it won't fail with an error that has you chasing your tail about why an FK could not be created).
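(Something like the sketch below is what I mean by an after-the-load correction; again, every name is hypothetical, and it assumes the staging row still carries the natural key so the fact row can be re-pointed once the real dimension member exists.)
-- AFTER the load job completes: re-key fact rows that landed on the
-- unknown member (key 0) if a matching dimension row now exists.
UPDATE f
SET    f.ProductKey = d.ProductKey
FROM   dbo.FactSales    AS f
JOIN   dbo.StagingSales AS s ON s.SalesID = f.SalesID            -- staging row holds the natural key
JOIN   dbo.DimProduct   AS d ON d.ProductCode = s.ProductCode    -- natural key match
WHERE  f.ProductKey = 0
;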
Labels:
CUBE,
ETL,
FK,
FOREIGN KEY,
SQL Monitor,
SSAS
Tuesday, November 20, 2012
LOOPING VS. CURSORS
I just cannot say this enough - "Cursors are eeeeeeeeeeeeeeeviiiiilllllllllllllllllllllllll!!!!!!!!!!!!!!!!!!!"
I recently encountered this post in Spiceworks and joined in the conversation -
POST:
I believe I already know the answer to this, but I'd love to hear from some people in the community.
Let's say that we're writing a procedure that needs to perform a calculation based on each row of a table - the example I have below is impractical but shows my point. Just how bad is the performance of using a cursor to loop through the rows versus a method such as the one below? Is the way I have outlined below the general best way to do a loop like this?
3 COLUMN EXAMPLE TABLE: rownumber identity PK, intData1, intData2
WHILE(@currentRowNumber <= @maxRowNumber)
BEGIN
INSERT INTO #SomeTable(myCalculation)
SELECT SUM(intData1 + intData2) FROM myTable WHERE PK = @currentRowNumber
SET @currentRowNumber = @currentRowNumber + 1
END
Interestingly, the first 'responder' posted this...
"I have recently been asked to remove cursors from existing stored procedures"
Now why would someone have instructed this person to remove cursors from all their existing stored procedures?
As I've previously posted, there are tons of opinions - some for, some against - on the use of cursors; however, nothing speaks better than quantitative data. To resolve this quantitatively, just google 'Itzik Ben-Gan' and 'cursor'. Once you've reviewed his articles, you should be clear on why this responder would have been told to take cursors out of their stored procedure code.
To net it all out - there is hardly any situation you can craft where a cursor will out-perform well-crafted T-SQL.
(There is an interesting exception: declaring a STATIC cursor gives considerably better performance than a plain old vanilla cursor.)
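(To make that concrete, here is a set-based rewrite of the loop quoted in the POST above. It assumes #SomeTable already exists and that every rownumber between the loop bounds is present in myTable; the SUM in the original is over a single row, so it reduces to intData1 + intData2.)
-- One set-based INSERT replaces the entire WHILE loop:
-- every row of myTable contributes one row to #SomeTable.
INSERT INTO #SomeTable (myCalculation)
SELECT intData1 + intData2
FROM myTable
;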
That said, if you are operating on a table of, say, 2,000,000 rows, or even a more modest 200,000 rows, figuring out a relational answer to the processing objective will almost always be faster than 'fetching and operating' on one row at a time until you reach the end of the table. On more than one occasion I have started writing a loop like the one above and, while writing out the various operations to perform inside it, suddenly seen that I could just join tables and do a 'mass upsert' type of operation instead. In short order I've taken my 'input batch' table and written two queries. The first joins the batch table to the target and inserts every batch row whose key is not in the target; then I delete those rows from the batch. What is left are batch rows whose keys already exist in the target - so presumably something about the data is different (or maybe not, depending on who wrote the code that generated the 'input batch' table) - and I write an update, joining on the key column(s), that sets everything in the target to the batch's corresponding values (except for the key, of course).
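(Here is a minimal sketch of that 'mass upsert' pattern, with hypothetical batch and target tables keyed on a single ID column; the OUTPUT clause is just one way to remember which keys were inserted so they can be removed from the batch.)
-- Keys inserted in step 1, so step 2 knows which batch rows to remove.
DECLARE @inserted TABLE (ID INT PRIMARY KEY);

-- 1) Insert batch rows whose key is not yet in the target.
INSERT INTO dbo.Target (ID, ColA, ColB)
OUTPUT inserted.ID INTO @inserted (ID)
SELECT b.ID, b.ColA, b.ColB
FROM dbo.InputBatch AS b
LEFT OUTER JOIN dbo.Target AS t ON t.ID = b.ID
WHERE t.ID IS NULL;

-- 2) Delete those rows from the batch; what remains already exists in the target.
DELETE b
FROM dbo.InputBatch AS b
JOIN @inserted AS i ON i.ID = b.ID;

-- 3) Update the target from the remaining batch rows (everything except the key).
UPDATE t
SET    t.ColA = b.ColA,
       t.ColB = b.ColB
FROM dbo.Target AS t
JOIN dbo.InputBatch AS b ON b.ID = t.ID;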
And this is the point of my statement that cursors are evil. It isn't just the endless arguing about the performance of the cursor itself; it is the nature of the beast.
The first time I ever saw a cursor, I thought to myself - "this is just a crutch that has been provided to people who are not accustomed to thinking relationally about data in an RDBMS." My reason for reaching that conclusion is the 'loop' model in the POST. Just about every cursor operation I've ever seen is the equivalent of some kind of loop; the loop is there to process each row (usually in a batch construct) and it reeks of sequential thinking - 'first I'll do this to this piece of data in the row, then I'll do that to that piece of data, and so on... and when I'm done I'll commit the change to this row and go get the next row, until I've reached the 'eof' (end of table)'.
If you must do things this way, then by all means use a loop or a cursor. However, be prepared for the day when someone you report to looks at your code, considers all the overhead you're stuffing onto the servers (and what that means for the annual budget), and tells you to strip all those cursors out (hint: start writing the code now that will find every place you've used a cursor in a stored procedure or job).
Why? Because crutches become the preferred way of doing things, and pretty soon you are missing the whole point of a relational database. In effect you have taken a super-charged race car and tied 30 tons of dead weight to it, and then you'll be left pondering why things take so long. "Where are the performance bottlenecks?" If the person you report to is any good at relational thinking, they'll know where the bottlenecks are - and they'll tell you to dump the cursors.
Lesson here - think hard about why you think you need that loop... because you probably don't.
Labels:
cursor,
loop,
SQL Server 2005,
SQL Server 2008,
UPSERT,
while loop
Monday, November 19, 2012
when pivot isn't the answer...
Recently I helped someone out on Spiceworks, so I thought I might as well post the problem and the resolution here as well.
The requestor wrote: "Right now I get a separate row for each Diagnosis_Code with the same service_ID, I need them to appear as separate columns in one row for each service_ID. There are other columns in the output not listed here, so it cannot be a crosstab, unless that can be nested, I don't know."
The problem description went something like this:
I have a table Diagnosis_Codes with a list of codes as Diagnosis_Code, PK Diagnosis_Code_ID
I have a table Service_Diagnoses which has FK Service_IDs and FK Diagnosis_Code_IDs
I have a table Services with Service_IDs
For one Service_ID, there can be multiple entries in Service_Diagnoses pointing to that Service_ID. Service_Diagnoses.Diagnosis_Code_ID is used to get a human usable definition for each entry in Service_Diagnoses. It is possible for a single Service_ID to have multiple (not more than 4) entries in Service_Diagnoses.
(Hope that is understandable)
Here is what I need to do:
For each service ID I must produce 1 row. In this row, I need the Service_ID, and the (up to 4) Diagnosis_Code's.
Service_ID | Diagnosis_Code1 | Diagnosis_Code2 | Diagnosis_Code3 | Diagnosis_Code4
Or, it can even be the Service_ID and the (up to 4) Service_Diagnoses.Diagnosis_Code_ID
------------------------------------------------------------------ end ---------------------------------------------------------------------------------
Now I admit, usually when someone posts something like this I try to get a little more information. But after looking at this one for a bit, it was clear enough what the requestor wanted; the problem is that a PIVOT isn't well suited to this 'free-form' kind of result.
So I returned to a trusted approach that served me well before PIVOT came along, and it turned out to be the solution to the requestor's needs (and got me the 'best answer' cookie)!
Here is most of that response (and all you need to understand how to craft the solution to this problem!)
If I understand your tables, they would look something like these:
SELECT * FROM serviceid
serviceid
456
789
123
SELECT * FROM diagnosis
diagid diagcode
78 froze
12 burnt
23 dead
56 broke
SELECT * FROM servicediag
serviceid diagid
123 78
123 12
123 23
456 78
456 12
456 23
789 56
789 23
123 56
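(If you want to follow along, something like this will stand up the sample data; I'm assuming plain INT keys and a short VARCHAR for the code, and the multi-row VALUES syntax needs SQL Server 2008 or later.)
-- Minimal sample tables for the walkthrough below.
CREATE TABLE serviceid   (serviceid INT);
CREATE TABLE diagnosis   (diagid INT, diagcode VARCHAR(20));
CREATE TABLE servicediag (serviceid INT, diagid INT);

INSERT INTO serviceid (serviceid) VALUES (456), (789), (123);

INSERT INTO diagnosis (diagid, diagcode)
VALUES (78, 'froze'), (12, 'burnt'), (23, 'dead'), (56, 'broke');

INSERT INTO servicediag (serviceid, diagid)
VALUES (123, 78), (123, 12), (123, 23),
       (456, 78), (456, 12), (456, 23),
       (789, 56), (789, 23), (123, 56);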
The first thing I like to do is write a query that joins all the tables together so I can see how the data looks when combined.
You can put it into a real table or a temp table, but the query below does the joins.
SELECT s.serviceid, sd.diagid, d.diagcode
INTO serviceidlist
FROM serviceid AS s
LEFT OUTER JOIN servicediag AS sd ON s.serviceid=sd.serviceid
LEFT OUTER JOIN diagnosis AS d ON sd.diagid=d.diagid
;
The resulting table contents can be seen using the following:
SELECT * FROM serviceidlist;
serviceid diagid diagcode
456 78 froze
456 12 burnt
456 23 dead
789 56 broke
789 23 dead
123 78 froze
123 12 burnt
123 23 dead
123 56 broke
But you want one line per service id, with a maximum of four columns of diagnosis code ids, so we need to apply a running sequence (1 to 4) within each serviceid.
SELECT * ,
rn=ROW_NUMBER() OVER (PARTITION BY serviceid ORDER BY diagid)
INTO serviceidorderedlist
FROM serviceidlist
;
Now the list looks like the one from the following query on the resulting table, "serviceidorderedlist".
SELECT * FROM serviceidorderedlist
;
serviceid diagid diagcode rn
123 12 burnt 1
123 23 dead 2
123 56 broke 3
123 78 froze 4
456 12 burnt 1
456 23 dead 2
456 78 froze 3
789 23 dead 1
789 56 broke 2
Now, all we need is a technique to consolidate all the diag code ids for the service id.
SELECT serviceid,
       SUM([diagid1]) AS [diagid1],
       SUM([diagid2]) AS [diagid2],
       SUM([diagid3]) AS [diagid3],
       SUM([diagid4]) AS [diagid4]
FROM
(
    SELECT serviceid, diagid AS [diagid1], 0 AS [diagid2], 0 AS [diagid3], 0 AS [diagid4] FROM serviceidorderedlist WHERE rn=1
    UNION
    SELECT serviceid, 0 AS [diagid1], diagid AS [diagid2], 0 AS [diagid3], 0 AS [diagid4] FROM serviceidorderedlist WHERE rn=2
    UNION
    SELECT serviceid, 0 AS [diagid1], 0 AS [diagid2], diagid AS [diagid3], 0 AS [diagid4] FROM serviceidorderedlist WHERE rn=3
    UNION
    SELECT serviceid, 0 AS [diagid1], 0 AS [diagid2], 0 AS [diagid3], diagid AS [diagid4] FROM serviceidorderedlist WHERE rn=4
) dt
GROUP BY serviceid
;
The query above generates a row for each instance of a service id (up to 4), putting the appropriate diag code id into the correct column and 0 into the other columns. The union of the queries in the derived table (dt) gives you up to four rows per service id, where only one column holds a diag code id and the other three hold 0 - one row for each intersection of a service id and a diag code id. But that can still be up to four rows per service id, so we 'wrap' the derived table query (dt) with an outer select that groups on the service id and sums the diag code columns. Since only one row in each group has a non-zero value for any given column, adding the 0s in the SUM function has no effect on the data, and you get the result below.
serviceid diagid1 diagid2 diagid3 diagid4
123 12 23 56 78
456 12 23 78 0
789 23 56 0 0
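(As a side note, and not part of what I sent the requestor: the same consolidation can be written with conditional aggregation - a MAX(CASE...) per column over the ordered list - instead of the four-way UNION. Same idea, less repetition.)
-- Alternative: one pass over serviceidorderedlist with MAX(CASE...) per column.
SELECT serviceid,
       MAX(CASE WHEN rn = 1 THEN diagid ELSE 0 END) AS diagid1,
       MAX(CASE WHEN rn = 2 THEN diagid ELSE 0 END) AS diagid2,
       MAX(CASE WHEN rn = 3 THEN diagid ELSE 0 END) AS diagid3,
       MAX(CASE WHEN rn = 4 THEN diagid ELSE 0 END) AS diagid4
FROM serviceidorderedlist
GROUP BY serviceid
;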
Wednesday, October 3, 2012
Cursors are almost never the answer...
Over the years, I've been amazed at how many times I encounter the use of cursors, while loops, etc.
I confess I fall into the trap of using a while loop more often than I'd care to admit.
Stated bluntly, using cursors is a procedural crutch. Here we have a database engine written to perform operations efficiently on massive amounts of information (millions, maybe billions of rows) and what do people tend to do?
They try to perform a series of operations on one row at a time, using the same procedural thinking that they learned in computer science 101.
Yes, I admit it, sometimes it is easier to think that way, but that is not the point.
The point is to ask 'what is the most efficient way?'!
Numerous posts are sprinkled all over the internet by heavyweights such as Itzik Ben-Gan and Paul Nielsen that demonstrate, with hard numbers, that the cursor and/or while-loop procedural approach is rarely, if ever, justified.
There is almost no problem space these techniques can be applied to that cannot be solved more efficiently with a little careful thought and the proper application of table joins.
Think about it.
Monday, September 17, 2012
Track report usage
One of the best ways to ascertain how effective your portal is in reaching the users the reports were designed for is to track how many times a report is viewed and who is viewing it.
The report above is a no-frills report I keep in the 'administrator reports' section of our company portal, solely for my own use.
It shows the report name and the number of times it has been viewed during the period specified in the input parameters.
The query is as follows:
USE [ReportServerSP]

DECLARE @start DATETIME    -- < on the actual report these are input parameters,
DECLARE @end DATETIME      -- < they are declared and set here only for clarity
SET @start = '08/01/2012'
SET @end = '08/31/2012'

SELECT [ReportPath] AS [Report], COUNT(*) AS [TimesViewedForPeriod]
FROM [dbo].[ExecutionLog2] AS el
WHERE [TimeStart] BETWEEN @start AND @end
  AND [ReportPath] <> 'unknown'
GROUP BY [ReportPath]
ORDER BY COUNT(*) DESC
;
The database, ReportServerSP, is one we created when we integrated SSRS with WSS 3.0; the 'SP' at the end tells us it is for the SharePoint-based portal.
When a report name is clicked, the next report drilled to provides a breakdown as to the number of times a particular user on the DOMAIN viewed the report.
That linked report's query is:
USE [ReportServerSP]

-- @start, @end and @reportpath are input parameters on the actual linked report,
-- just as @start and @end are on the report above.
SELECT [UserName], COUNT(*) AS [UserViewsForPeriod]
FROM [dbo].[ExecutionLog2] AS el
WHERE [TimeStart] BETWEEN @start AND @end
  AND [ReportPath] IN ( @reportpath )
GROUP BY [UserName]
;
Using these two reports has come in very handy. There have been times when the President wanted to know, for a particular report, whether it was viewed during a range of dates and, if so, how many times and by whom; on those days these reports earned their keep.
Even if you don't have SSRS integrated with WSS, you should be able to extract the same data by pointing these queries at your ReportServer database.
Have fun with it!
Friday, September 14, 2012
Make Reports Easier to Use
One of the things about a successful SSRS install is that after a short time there are so many folders and reports that the general user population gets lost.
Try this. I have found it well received by the user community where I work.
Create a Word document as a directory of your reports in a folder.
Each report has its name as a hyperlink directly to the report on the portal.
I captured an image of the rendered report and embedded it in the document under the report title; this gives the user a 'picture' of what the report looks like.
Below that I provide a short description of the input parameters the report takes and the data the report provides as output.
Each time a report is changed, deleted, or added, I update the document.
I have also shown the user community how they can be alerted by email whenever the document changes.
In this way the user community is kept abreast of the reports on the portal and has an easier way of browsing them and getting a direct link to the report they want. It saves a lot of time rummaging around in folders they may not know as well as the people who use a particular report every day.
Also, note the particular report highlighted above: it is a 'control panel' report.
My service manager wanted a set of analytic reports to examine service on a particular manufacturer's product lines - how many parts have we installed, what % was warranty, how much labor and travel, etc.
We built this control panel report to feed the exact same set of inputs to each report in the series.
In this way he can run a report, examine it, export it to Excel if he wishes, then press the backspace key to get back to the control panel. All he need do then is click the next hyperlinked report name or image, and the same set of inputs is fed to the next report in the series.
The point here is that a portal may contain a great set of reports; they may be innovative and extremely useful. But if they are hard to locate, or the user community is frustrated in its attempts to use them, the portal won't succeed, no matter how brilliant the reports are.
Take a bit of extra time to make the reports as easily accessible as you possibly can. Solicit feedback from the user community; don't be defensive when they call 'your baby' ugly though; listen to what they are telling you and do your best to incorporate their ideas. Follow up with them and ask them about your implementation of THEIR IDEAS; you'll find that in itself will get their attention. Once you have the lines of communication open on using the reports, you'll hopefully find they're being used more often!