Wednesday, December 3, 2014

SQL Server table insert performance optimization


Setting


In a data warehouse, I am joining a fact table to 20 dimensions. The fact table has 32 million rows and 30 columns. It is a temporary staging table, so I don't have to deal with other users reading from or writing to it. I select 10 columns from the fact table and 20 columns from the respective dimensions. The dimension tables are small (between 3 and 15,000 rows). The fields on which the tables are joined are a mix of integers and nvarchars. I use a SELECT ... INTO statement. There are no indexes on the tables.
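The query described above has roughly the following shape. All table and column names (FactSales, DimCustomer, and so on) are placeholders I am using for illustration, not the actual schema:

```sql
-- Hypothetical shape of the staging query: 10 fact columns plus
-- 20 dimension columns, joined on a mix of int and nvarchar keys.
SELECT  f.SaleId,
        f.Amount,             /* ... 8 more fact columns ... */
        d1.CustomerName,      -- from DimCustomer (int join key)
        d2.ProductLabel       -- from DimProduct (nvarchar join key)
        /* ... columns from the remaining 18 dimensions ... */
INTO    dbo.StagingResult     -- SELECT ... INTO creates a new heap
FROM    dbo.FactSales   AS f
JOIN    dbo.DimCustomer AS d1 ON d1.CustomerKey  = f.CustomerKey
JOIN    dbo.DimProduct  AS d2 ON d2.ProductCode  = f.ProductCode
/* ... 18 more joins ... */;
```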


The execution speed of this query is too slow to be useful.


Tried-out solutions


Because the query takes too long to run, I tried the following solutions:



  1. Split the 20 joins into 4 queries of 5 joins each. Query performance remained low, however.

  2. Put indexes on the foreign key columns. No significant decrease in execution time.

  3. Make sure the fields in the join conditions are integers. This gave a performance increase of about 25%, but not the gain I am looking for.

  4. Use an INSERT INTO ... SELECT statement instead of SELECT ... INTO. Performance was worse because of log file growth, even though the database uses the simple recovery model.
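On the last point: even under the simple recovery model, INSERT ... SELECT into a heap is fully logged unless the target is locked with a TABLOCK hint, which would explain the log growth observed with the plain INSERT INTO attempt. A minimal sketch, assuming SQL Server 2008 or later, an empty heap target, and placeholder table names:

```sql
-- Sketch only: dbo.StagingResult and dbo.FactSales are hypothetical names.
-- WITH (TABLOCK) allows minimal logging when inserting into an empty heap
-- under the simple (or bulk-logged) recovery model.
INSERT INTO dbo.StagingResult WITH (TABLOCK)
SELECT  f.SaleId, f.Amount, d1.CustomerName
FROM    dbo.FactSales   AS f
JOIN    dbo.DimCustomer AS d1 ON d1.CustomerKey = f.CustomerKey;
```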


These findings led me to examine the actual execution plan, which shows that 89% of the cost lies in the table insert. The remaining cost is an 8% table scan on the fact table and 2% hash matching for the inner joins.
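The same breakdown can also be measured without the graphical plan by turning on per-statement I/O and timing statistics before running the query (table names again hypothetical):

```sql
-- Output appears on the Messages tab in SSMS: logical reads per table,
-- plus CPU and elapsed time per statement.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT  f.SaleId, f.Amount
INTO    dbo.StagingResult
FROM    dbo.FactSales AS f;   /* ... joins as in the real query ... */

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

A large elapsed-versus-CPU gap on the statement, combined with modest logical reads, points at write cost rather than the joins.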


Questions



  1. What are the possible reasons for the slow table insert?

  2. What are ways to identify this bottleneck without the execution plan?

  3. What actions can I take to reduce the cost of the table insert?




