5 Tips for Friday: Database Configurations

It’s time once again for 5 Tips for Friday. This week’s topic is Database Configurations.

  1. Beware default file growth settings: Unless the database is always going to be tiny, the default database file growth settings are horrible. If your log file is 1 GB, then adding 10% of space is not an issue. But each time you add 10%, the next 10% gets bigger. Before you know it, the log file is 100 GB and needs to grow, so it adds 10 GB (10%) to it. And since the log file cannot be instantly initialized, it has to write 10 GB of zeroes to the log file while all queries hitting the database wait. Also, frequent and uneven growths of the log file lead to high VLF (virtual log files – the internal logical units of a log file) counts.

    I always set my data and log files to use a hard value of MBs for file growths, never percentages. I also pre-size the files so I don’t need to rely on auto-growths, but they are there just in case. My general rule is to grow by 1024 MB (1 GB) for data files or 512 MB for log files.
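
    As a concrete sketch (the database name MyDatabase and the logical file names MyDatabase and MyDatabase_log are assumptions – substitute your own), the growth settings can be changed like so:

    Alter Database MyDatabase
    Modify File (Name = N'MyDatabase', FileGrowth = 1024MB);
    Alter Database MyDatabase
    Modify File (Name = N'MyDatabase_log', FileGrowth = 512MB);
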
  2. Pre-size your database files: When we create a new database, we require the developers to estimate how big it will be a year from now. Turns out that this is surprisingly hard for someone with a PhD in mathematics from MIT. Who knew?

    We always pad their estimates. I like to round up to a nice round number, and if they tell me anything less than 100 GB, I start off with 200 GB of disk space for the data file. I don’t make the data file itself 200 GB; I allocate 200 GB of disk space and then make the data file something like 50 GB. For a new database, I generally allocate half as much disk space for the log as for the data and make the actual log file something like 4 GB.

    Then we monitor the files for available space and proactively make adjustments as needed. Our script to check for space runs every half hour, and if the free space is below a certain threshold (a sliding threshold based on the size of the database), it sends the DBA team an email alert. This repeats every half hour until the issue is fixed.
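
    The core of such a free-space check is a simple query (this is a minimal sketch, not our actual monitoring script; run it in the context of the database you are checking):

    Select name As FileName,
           size / 128 As SizeMB,
           FileProperty(name, 'SpaceUsed') / 128 As UsedMB,
           (size - FileProperty(name, 'SpaceUsed')) / 128 As FreeMB
    From sys.database_files;

    Both size and FILEPROPERTY(…, 'SpaceUsed') report 8 KB pages, so dividing by 128 converts to MB.
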
  3. Be mindful of the database owners: I’m talking about the account that is the actual owner of the database, not just someone who is a member of the db_owner database role. You can see the owner of the database in sys.databases in the column owner_sid. It stores the owner’s SID (Security Identifier), and that user gets mapped to the highly privileged database user dbo. If you want to see who the SID refers to, you can pass the owner_sid to the system function suser_sname() to get the login name. Like so:

    Select name As DatabaseName, suser_sname(owner_sid) as OwnerName
    From sys.databases;

    I’m not here to talk about the security aspect of which account is used, just to make sure that you have a valid owner. If the SID cannot be resolved to an active account, you will get odd permissions errors. When a sysadmin does something in the database, he or she impersonates the dbo account. If the dbo account is mapped to an invalid account, you get an error stating that what was done could not be done under the current security context. I prefer to use sa as the owner of all of my databases, but if you ever use the TRUSTWORTHY property and/or cross-database ownership chaining, don’t use sa. Use a minimally privileged account that is not tied to a real person.
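
    Fixing an invalid or undesired owner is a one-liner (the database name MyDatabase is a placeholder; substitute a minimally privileged account for sa if you use TRUSTWORTHY or cross-database ownership chaining):

    Alter Authorization On Database::MyDatabase To sa;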

  4. Setting an AG secondary to be owned by sa (or some other specific account): If you create a database (including restoring a database without replacing an existing one), you are set as the database owner. On the primary, you can easily change the database owner, but you will get an error if you try to change the owner on a secondary database. At that point, your options are to fail over and change it, or wait until a failover occurs and then change it.

    I prefer to manually restore the backups for the database and then use the JOIN ONLY option for adding the new database to the AG (Availability Group). When I do this, I use T-SQL to do the initial backups and restores. On the primary, I set the owner as sa, and then on the secondary, I use the EXECUTE AS clause to execute the restore as the sa account. This works even if the sa account is disabled (disabling only prevents it from logging in; it can still be impersonated). So I use:

    Execute As Login = 'sa';
    Restore Database ... With Norecovery;
    Restore Log ... With Norecovery;
    Revert;
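
    Once the restores are done, the secondary copy can be joined to the Availability Group (the AG name MyAG and database name MyDatabase are placeholders):

    Alter Database MyDatabase Set HADR Availability Group = MyAG;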
  5. Things to check when upgrading a database from a lower version: The obvious setting you will want to change on a database when upgrading it to a higher version (whether that is due to the instance being upgraded or the database being restored onto a higher version instance) is compatibility level. Certain new features and enhancements may require the new compatibility level. You want to take advantage of those if you can, so don’t forget this setting.
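
    For example, to move an upgraded database to the SQL Server 2022 compatibility level (160; the database name MyDatabase is a placeholder):

    Alter Database MyDatabase Set Compatibility_Level = 160;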

    Another setting that you should check is page verification if this database could have ever been on SQL Server 2000 or older. This is the mechanism that SQL Server uses to validate that pages are not corrupt. Fortunately, the best option is the default, so this is only a concern for upgraded databases and not new databases. In SQL Server 2000, the default option was TORN_PAGE_DETECTION. Since SQL Server 2005, the default and best option is CHECKSUM. It should never be set to NONE. I rarely speak definitively like this, but this is a case where NONE is never an acceptable setting. Make sure you are set to CHECKSUM.
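
    You can check all databases at once and fix any stragglers (MyDatabase is a placeholder):

    Select name, page_verify_option_desc
    From sys.databases;

    Alter Database MyDatabase Set Page_Verify Checksum;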

    Lastly, if the upgraded database was ever on SQL Server 2000 and has never had a successful DBCC CHECKDB with DATA_PURITY run on it, data purity is not being enforced. You need to run this CHECKDB command on the database and fix any errors it reports. Repeat until it completes successfully. From that point forward, data purity will be enforced. Some examples of data purity issues include numbers stored as -0 (negative 0) or a float value outside the valid range for the column. You have to find the problem rows and manually update them to valid values.
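
    The command itself is straightforward (MyDatabase is a placeholder):

    DBCC CheckDB (N'MyDatabase') With Data_Purity;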

    Read more about DATA_PURITY checks here: CHECKDB From Every Angle: How to tell if data purity checks will be run? by Paul Randal (blog|@Paulrandal).