Thursday, March 26, 2015

How to handle too many inserts?


I have a database which is accessed from remote locations around the world with very long delays and poor transfer speeds. I currently transfer the data inside a transaction, since either all or none of the data should be inserted. The values are currently inserted one at a time, and the 200 inserts take a total of about 37 seconds before the commit.
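For reference, a minimal sketch of the pattern described above, assuming one parameterized INSERT per row inside a single SqlTransaction; the connection string, table name (Table_A) and column (Payload) are placeholders, not the real schema:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class UploadExample
    {
        static void UploadBatch(string connectionString, IEnumerable<string> values)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var transaction = connection.BeginTransaction())
                {
                    foreach (var value in values)
                    {
                        // Each ExecuteNonQuery is a separate round trip to the server,
                        // so 200 rows at roughly 185 ms of latency each adds up to ~37 s.
                        using (var command = new SqlCommand(
                            "INSERT INTO Table_A (Payload) VALUES (@payload)",
                            connection, transaction))
                        {
                            command.Parameters.AddWithValue("@payload", value);
                            command.ExecuteNonQuery();
                        }
                    }
                    transaction.Commit(); // all or nothing, as required
                }
            }
        }
    }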


The problem is that there may be 10 sites continuously trying to insert data at a 2-minute interval. If they fail, they will retry until they succeed. This adds up to an impossible situation, as they will start building a backlog.
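(Rough arithmetic: if each batch really takes about 40 seconds and the batches are processed one at a time, 10 sites sending every 2 minutes means roughly 400 seconds of insert work arriving per 120-second window, so a strictly serial queue can never catch up.)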


I understand I can use snapshot isolation to allow reads in parallel. But how should I handle the parallel inserts? How do banks and other systems do it, since they should also build a backlog if they process everything through a queue?
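For what it's worth, snapshot isolation has to be enabled on the database first (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON), and the client then opts into it per transaction. A sketch of the client side, where MyDatabase and the method name are placeholders:

    using System.Data;
    using System.Data.SqlClient;

    class SnapshotExample
    {
        // Assumes the DBA has already run:
        //   ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
        static void InsertUnderSnapshotIsolation(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                // Readers see a consistent snapshot and are not blocked by this
                // writer while the transaction is open.
                using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
                {
                    // ... perform the inserts here, then:
                    transaction.Commit();
                }
            }
        }
    }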


EDIT1: I use MSSQL, and on the remote client I use C# to talk to the MSSQL server.


EDIT2: The inserts are of 10 different types, and they are linked in pairs.


EDIT3: The inserts are connected in pairs. The first insert is a generic one which is the same for all types and goes into the same table, let's call it Table_A. The second insert is specific to the type of data being inserted: it is made to 1 of 10 tables, and while the data varies greatly, that table also stores the row index of the first insert in Table_A. This is how they are linked.
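A sketch of how such a pair could be written in C#, assuming the link is the identity value of the Table_A row; Table_Type1, TableA_Id, Payload and Detail are made-up names standing in for the real schema:

    using System;
    using System.Data.SqlClient;

    class PairedInsertExample
    {
        static void InsertPair(SqlConnection connection, SqlTransaction transaction,
                               string payload, string detail)
        {
            int tableAId;

            // First insert: the generic row, identical for every type.
            using (var command = new SqlCommand(
                "INSERT INTO Table_A (Payload) VALUES (@payload); " +
                "SELECT CAST(SCOPE_IDENTITY() AS int);",
                connection, transaction))
            {
                command.Parameters.AddWithValue("@payload", payload);
                tableAId = (int)command.ExecuteScalar();
            }

            // Second insert: the type-specific row, which stores the id of the
            // Table_A row so the two stay linked.
            using (var command = new SqlCommand(
                "INSERT INTO Table_Type1 (TableA_Id, Detail) VALUES (@id, @detail)",
                connection, transaction))
            {
                command.Parameters.AddWithValue("@id", tableAId);
                command.Parameters.AddWithValue("@detail", detail);
                command.ExecuteNonQuery();
            }
        }
    }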


EDIT4: The transfer speed from a good connection is about 1 second, and that is no problem. The problem is the remote part, which takes about 40 seconds. Table_A contains about 10^7 rows and is well indexed. All the other tables are somewhat smaller and also well indexed.




