I'm currently investigating moving our InnoDB database to an NDB Cluster (rather than buying bigger servers again and moving the master/slaves onto them).
So far my testing on a small cluster has been fruitful, and suggests that we would benefit from this (especially in data resiliency and write access). However, I have one small niggling concern. We have one large table (approx. 30% of the whole database size, or around 60 GB at present) which is comprised entirely of BLOB fields (80+), plus a single field at the start, uniqRowID (INT AUTO_INCREMENT), to identify the row. Few of these fields contain very much data (mainly some text, numbers, dates etc.), and many are actually empty (NULL). None of them are indexed. We add new rows constantly, and read the old rows frequently too; updates are few and far between.
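For reference, a minimal sketch of the table shape described above; the table and column names (other than uniqRowID) are hypothetical placeholders, not our actual schema:

```sql
-- Hypothetical sketch: one AUTO_INCREMENT key followed by 80+
-- unindexed, nullable BLOB columns (most of them NULL in practice).
CREATE TABLE big_blob_table (
  uniqRowID INT NOT NULL AUTO_INCREMENT,
  field01 BLOB NULL,
  field02 BLOB NULL,
  -- ... roughly 80 more BLOB columns ...
  PRIMARY KEY (uniqRowID)
) ENGINE=NDBCLUSTER;
```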
Is this likely to become an issue when running this in a cluster?
From my reading I understand that in NDB Cluster, BLOB fields are held in separate tables/pages in the background, so they need to be read separately from the main row data. But I can't see whether this works like InnoDB, where the first part of the data is stored inline in the main table and the rest held separately, or whether the whole value is stored separately.
I ran some simple tests, inserting records with similar data (both disk-based and in-memory tables), with various joins etc., which showed no differences. But I am concerned that my small test system simply isn't stressing it enough, and I don't have the resources (or finances) to run a full-size test yet.
Can anyone with experience of this tell me whether I am going the right way with this, or am I just walking into a nightmare?