I have three tables in a PostgreSQL database that I'm querying via a view and some joins.
CREATE TABLE network_info (
    network   CIDR NOT NULL,
    some_info TEXT NULL,
    PRIMARY KEY (network)
);

CREATE TABLE ipaddr_info (
    ipaddr    INET NOT NULL,
    some_info INT  NULL,
    PRIMARY KEY (ipaddr, some_info)
);

CREATE TABLE ipaddrs (
    addr INET NOT NULL
);
CREATE VIEW ipaddr_summary AS
SELECT DISTINCT
    i.addr          AS ip_address,
    a.some_info     AS network_info,
    COUNT(b.ipaddr) AS ip_info_count
FROM ipaddrs AS i
LEFT JOIN network_info AS a
    ON (i.addr << a.network)
LEFT JOIN ipaddr_info AS b
    ON (i.addr = b.ipaddr)
GROUP BY i.addr, a.some_info;
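For context on where the time likely goes: the `<<` containment join has no index support by default, so each row of `ipaddrs` is compared against every row of `network_info`. A GiST index using the `inet_ops` operator class (available since PostgreSQL 9.4) can serve `<<` directly. A sketch of indexes worth trying, assuming PostgreSQL 9.4+ (the index names here are illustrative):

```sql
-- GiST index so "i.addr << a.network" can use an index scan instead of
-- a full nested-loop comparison (requires PostgreSQL 9.4 or later):
CREATE INDEX network_info_network_gist
    ON network_info USING gist (network inet_ops);

-- The primary key on ipaddr_info (ipaddr, some_info) already provides a
-- btree whose leading column serves the equality join on ipaddr.

-- A plain btree on the driving table's join column may also help:
CREATE INDEX ipaddrs_addr_idx ON ipaddrs (addr);
```

Separately, `SELECT DISTINCT` on top of a `GROUP BY` over the same columns is redundant work; the `GROUP BY` alone already produces distinct rows.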
All three tables have roughly 150K rows right now, and running SELECT * FROM ipaddr_summary; takes about three hours on an Intel Pentium 4 2.8 GHz dual-core machine with 2 GB of RAM.
Is there a way to restructure or optimize this particular schema or view to make the query faster, or is it a hardware issue? I'm going to spin up a beefy VM in the cloud and test, but I wanted to see whether there's a way to optimize without just throwing more hardware at it.
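Before testing on bigger hardware, it may be worth confirming what plan the query actually uses; `EXPLAIN` without `ANALYZE` returns instantly and will show whether the `<<` join is a nested loop over the full cross product:

```sql
-- Plan only (fast, does not run the query):
EXPLAIN SELECT * FROM ipaddr_summary;

-- Actual row counts and buffer usage (runs the full query, so only
-- worth doing once it finishes in reasonable time):
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM ipaddr_summary;
```

With no index usable by `<<`, a nested loop over two 150K-row tables means on the order of 22 billion comparisons, which is consistent with a multi-hour runtime on that hardware.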