T-SQL Tuesday #13 – A Mysterious Business

SQL Server Best Practices
It’s T-SQL Tuesday again and Steve Jones (blog|twitter), Mr SQLServerCentral, is hosting this month. Steve has provided a topic that is near and dear to the heart of pretty much every DBA – “What issues have you had in interacting with the business to get your job done”.

I really didn’t know what I could possibly write about this given that I’ve never had an issue with the business.

Stopped laughing yet?

Here’s an actual business challenge that I have had the fun to deal with in the last few months.

What is the maximum number of columns you can have in a table?

It’s a simple little question like that which instantly sets off the alarms in your head, especially when it comes from a Business Analyst. What worried me even more was that this individual had a habit of promising the moon on a stick to users without actually understanding the bigger picture of what might be involved.

The BA was delighted when I told him that the non-wide table limit was 1,024 columns (based upon the Maximum Capacity Specifications in BOL). This took the plotz-meter from rumble to gurgle.
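For anyone curious how close their own tables come to that limit, a quick query against the system catalog will show it (a sketch only; run it in the database you care about):

```sql
-- Count columns per user table and compare against the
-- 1,024-column limit for non-wide tables (per the Maximum
-- Capacity Specifications in BOL). Wide tables using column
-- sets can go to 30,000, but that is a different beast.
SELECT  t.name          AS TableName,
        COUNT(*)        AS ColumnCount,
        1024 - COUNT(*) AS ColumnsRemaining
FROM    sys.tables AS t
        INNER JOIN sys.columns AS c
            ON c.object_id = t.object_id
GROUP BY t.name
ORDER BY ColumnCount DESC;
```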

I heard nothing more from the BA for a couple of months so figured it had been just a curiosity that I had answered, then…

Please deploy this work order to create a new table

A work order showed up in my queue. A script was attached to create a bunch of new tables. One of these tables was to have 514 columns, another 365, and another 234, along with a few others in the 30-40 column range. Those are some pretty wide tables. Each of these tables was also to have a whole host of indexes (more than 40 on one of them) and was expected to contain several million rows of data. There were also FK constraints (well, at least there was some DRI).
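Once tables like that are deployed, a catalog query makes the over-indexing easy to spot (again just a sketch; the threshold here is illustrative, not a rule):

```sql
-- List user tables with their index counts to flag
-- over-indexed tables, like the one in the work order
-- that carried more than 40 indexes.
SELECT  t.name            AS TableName,
        COUNT(i.index_id) AS IndexCount
FROM    sys.tables AS t
        LEFT JOIN sys.indexes AS i
            ON  i.object_id = t.object_id
            AND i.index_id > 0          -- ignore the heap entry
GROUP BY t.name
HAVING  COUNT(i.index_id) > 10          -- arbitrary threshold
ORDER BY IndexCount DESC;
```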

With an impending sense of doom I asked why we were creating such wide tables and how they were going to be used. I had a hope that these were going to be used in some kind of OLAP solution. It was a faint hope given that we don’t have Analysis Services deployed anywhere.

I was advised that data was going to be loaded into the tables using a “real-time” ETL process to provide reporting functionality to the user base. A bunch of reports would be written that could be called on demand, and a small group of power users would also have ad-hoc reporting access.

I could understand if all of these tables were denormalized to the max; however, actually getting any useful data out would require performing a minimum of four joins.

The business needs them

When I took this back to the development team, they stated that they had developed it to the requirements they were given. I can understand and appreciate this; they were working with what they were provided. I have no doubt they questioned the requirement and were simply told to proceed.

I performed a code review and sent back my recommendations: narrower tables, denormalization, and fewer indexes (given that they could never cover any of the queries that would be executed). I also explained the huge performance issues we were going to experience, the impact on the other databases on the server, and how we were going to see some quite significant disk issues.

My recommendations were returned with “these need to go in, the business needs them and we have made a commitment to get them in by next week”.

You’re there to keep the boss happy

My concerns were obviously not being heeded, so it was escalation time. I went to my boss and explained the situation. I went over the concerns I had, the expected performance problems, the bad design, and the lack of denormalization, which would have been the ideal solution for what the team was attempting to do.

Continue reading on SirSQL.net.
