5 Tips for Friday: Massively Parallel Queries

My day-to-day job deals mostly with performance tuning queries that run massively parallel. Think hundreds or thousands of instances of the same query running at once with different parameter values. With a normal query, 1 extra second of run time does not matter much. If 2,000 instances of the query are running at the same time, that's an extra 2,000 seconds (33 minutes 20 seconds). It adds up, and it matters in these cases, so performance tuning is different.

Here are some best practices for tuning massively parallel queries:

  1. Conserve parallelism: If you are familiar with the server option cost threshold for parallelism, then you are probably aware that pretty much everybody agrees the default setting of 5 is way too low. My rule of thumb is to bump it up to 50 straight away and then tune up or down later if needed.

    Cost threshold for parallelism is the estimated cost a serial plan must exceed before SQL Server will consider a parallel plan. If the estimated cost of the query is above this threshold, the optimizer also compiles a parallel plan and ultimately uses whichever plan, serial or parallel, has the lower cost. If the parallel plan costs more, it uses the serial plan.

    As parallel threads increase, concurrency decreases. The ultimate goal is to get the right mix of parallel and serial queries so that the server is used effectively but there isn't massive contention for threads. When running queries massively parallel like we do, concurrency quickly becomes an issue. If you have 1,000 queries running in serial, they will use 1,000 threads. If they are running in parallel, they will use (1,000 X the degree of parallelism) threads. So if each runs with 8 threads, that's 8,000 threads. Queries are backed up at this point.

    So we want to be less aggressive about making plans go parallel. All of our SQL Servers use a cost threshold of 500 (10 X my normal setting) so that only really expensive queries go parallel. And the really expensive queries that do go parallel usually get tuned on the client side to issue fewer instances at a time. The DBAs are constantly on the lookout for any query with the potential to take over the server, and plenty have tried. (See the sp_configure sketch after this list.)
  2. Kill the zombies: When I first started where I work, one of my first tasks was to create a maintenance task we dubbed Query Killer. Because of the nature of the work we do, our team of data scientists are constantly running scripts and ad hoc queries on our systems. They are allowed to do this to a point. Query Killer will look for any query that has been running for 15 minutes or more. If not being run by an official application service account or a DBA, the query will be killed and the person running it will be sent an email informing them. The DBA team is copied on these emails. If a long running query is being executed by an official application service account or a DBA, it simply sends the DBA team an email so we’re aware of it.

    When we first turned it on, it killed several dozen zombie connections. We learned that the Python scripts and a couple of other tools (might have been LINQPad, but I can't swear to it) were starting an explicit transaction for them. They would connect, run a quick query (or sometimes not even run a query), and then the connection would sit open waiting for a COMMIT or ROLLBACK to be issued.

    As it was, we had scaled out our main databases to a 5-node Availability Group (AG) because we were hitting the ~32K connection limit (connection ID is a smallint). Killing off zombie connections and poorly behaving user queries is part of our strategy of not wasting valuable resources. In this case, the resources we're preserving are connections and threads. (A rough sketch of the detection logic is shown after this list.)
  3. Parameterization, parameterization, parameterization: This is true for all workloads, but especially true for massively parallel queries. This very morning, I had someone inquire about why it took one of his report queries 8 seconds to run at 6 AM. The query was not parameterized. Running it manually takes less than 1 second.

    I checked my monitoring history for the time span between 5:58 AM and 6:02 AM. In that 4-minute span, the query was executed 2,450 times with different values. I see this a lot when someone runs a batch of new queries that have not been parameterized. They will sit for a while in the SOS_SCHEDULER_YIELD wait type. Compiling plans is a CPU-intensive process, and all the actively running queries in the batch are trying to compile at the same time, so they have to yield to each other regularly until they get their plans compiled.

    I had another issue one day where a server was running like a dog. The plan cache was humongous, and things were being pushed out of cache constantly as the server tried to right itself. I flushed all the caches and the server returned to normal within a few minutes. The culprit was that one of the developers had decided to repopulate one of the caches during the night using a previously untested (and unparameterized) query. He ran approximately 60,000 instances of this query overnight. Each one had a different value, and each got its own plan compiled. (See the parameterization sketch after this list.)
  4. Make it rain with cache: We have several different caching mechanisms where I work. There are lots of apps that need to analyze the same data in slightly different ways. A couple of our caching mechanisms store cached data or cached query results in a distributed fashion. If you need to work with a certain set of data, you use one cache. If you are running queries that will be executed over and over, you go to another cache. The data caches decide when the cache needs to be updated and hit the database as needed. The query caches return the cached results if they are cached; if not, they get the results from the server, cache them, and then return the results to the caller.

    Without these caches, we would need 2 or 3 times as many AG nodes to support the load.
  5. Use the right data type: I'm sure we've all heard about the evils of using the wrong data type. It's one of the top 5 SQL coding issues where I work. It's amazing how many simple queries are turned into monsters because they pass in the wrong data types and the query has to perform a scan instead of a seek. We have a lot of data, which equates to a lot of big tables. You do not want to scan a 600,000,000+ row table to find one or two rows. Trust me on that one. The most egregious example I've encountered was a query that was taking ~45 minutes to run. I changed the parameters to the right data types, and it went down to 4 seconds. (The last sketch below shows the kind of mismatch that causes this.)
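
For tip 1, this is the standard way to change cost threshold for parallelism with sp_configure. The value of 50 is just my rule-of-thumb starting point; tune it for your own workload.

```sql
-- Raise the cost threshold for parallelism so only expensive queries go parallel.
-- 50 is a starting rule of thumb; our massively parallel servers run at 500.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```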
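
For tip 2, here is a minimal sketch of the detection side of a Query Killer job. The 15-minute threshold comes from the post; the excluded login names are placeholders, and the email notifications our real job sends are omitted.

```sql
-- Find user queries that have been running for 15 minutes or more,
-- excluding official service accounts and DBAs (hypothetical login names).
SELECT s.session_id,
       s.login_name,
       r.start_time,
       DATEDIFF(MINUTE, r.start_time, GETDATE()) AS running_minutes,
       'KILL ' + CAST(s.session_id AS varchar(10)) + ';' AS kill_command
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE s.is_user_process = 1
  AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 15
  AND s.login_name NOT IN (N'AppServiceAccount', N'DBATeamLogin');  -- placeholders
```

This only surfaces the candidates and builds the KILL statements; the actual job decides whether to execute them and emails the query's owner.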
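
For tip 3, the difference between an ad hoc query with a literal value and a parameterized call through sp_executesql. The table and column names are made up for illustration; the point is that the parameterized form compiles one plan that every execution can reuse instead of compiling 2,450 of them.

```sql
-- Ad hoc: every distinct literal value compiles its own plan.
-- SELECT OrderID, OrderTotal FROM dbo.Orders WHERE CustomerID = 42;

-- Parameterized: one plan, reused for every value.
EXEC sys.sp_executesql
    N'SELECT OrderID, OrderTotal FROM dbo.Orders WHERE CustomerID = @CustomerID;',
    N'@CustomerID int',
    @CustomerID = 42;
```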
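
For tip 5 (and Kevin's question in the comments), a contrived example of the kind of mismatch that can turn a seek into a scan. The table is hypothetical; the key point is that an nvarchar parameter compared to a varchar column can force SQL Server to implicitly convert the column side, because nvarchar has higher data type precedence.

```sql
-- Hypothetical table: dbo.Accounts, where AccountNumber is varchar(20) and indexed.

-- nvarchar outranks varchar, so the column side can get a CONVERT_IMPLICIT,
-- and the index on AccountNumber gets scanned instead of seeked.
DECLARE @Account nvarchar(20) = N'A-1001';

SELECT AccountID, Balance
FROM dbo.Accounts
WHERE AccountNumber = @Account;

-- Matching the column's type lets the optimizer seek the index.
DECLARE @Account2 varchar(20) = 'A-1001';

SELECT AccountID, Balance
FROM dbo.Accounts
WHERE AccountNumber = @Account2;
```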
2 Comments
  1. Many thanks for the useful tips. Regarding tip 5 (use the right data type): was this caused by implicit conversion, or was there another reason?

    1. Hi Kevin. Thanks.

      When mismatched data types occur, SQL Server follows a specific order of data type precedence to determine how to handle the conversion. In some cases it converts the value or parameter the column is being compared to; in others it has to convert the data in the column. When it has to convert the column, it scans the index or table, converts every value, and then compares it to the other side. Instead of being able to seek straight to the value it needs and read only those pages, it has to read the whole index/table to determine which rows match.
