Friday, March 23, 2012

Performance Question

This is a very vague question, but I was hoping someone might have a general answer for me as to why a table with 71.5 million rows in it might take 26 seconds to come back from:

set transaction isolation level read uncommitted
select count(*) from communications

Communications is the table containing the 71.5 million rows. Could this be because the statistics on the table's primary key are out of whack? If so, what's an effective way to update the stats on the primary key? I can't necessarily do a:

update statistics communications communication_id
with fullscan

Or can I? I mean, when I try that, it takes ages to come back and I finally have to force it to bail. Can someone recommend a sample size I should use?
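For what it's worth, this is the direction I've been poking at: check when the stats were last updated, then do a sampled update instead of a full scan. The 10 percent figure below is just a guess on my part, not a recommendation, and communication_id is assumed to be the statistics object on the primary key:

-- when were the statistics on this table last updated?
select s.name, stats_date(s.object_id, s.stats_id) as last_updated
from sys.stats as s
where s.object_id = object_id('communications');

-- sampled update: much cheaper than WITH FULLSCAN on 71.5 million rows
update statistics communications communication_id
with sample 10 percent;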

I'm open to suggestions, because any time I ask performance-related questions, people point me toward articles that are 40 pages long, filled with a bunch of unrelated stuff, and that don't help me at all.

Any suggestions that don't involve me reading a dozen 40-page articles? Thanks!

- James

select count(*) from communications

This statement always causes a table or clustered index scan (or a scan of a smaller nonclustered index, if one exists, since the engine only needs to count rows), and a scan is the slowest data access operation. If your table is almost read-only, with few data modifications, you may try storing its record count somewhere and updating that count via triggers. But such an approach creates a hotspot and leads to heavy contention: generally speaking, every modifying transaction would wait until the previous one completes, whether or not they are trying to modify the same resources in the table.
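Here is a minimal sketch of that trigger-maintained counter; the table and trigger names are hypothetical. Note that the single counter row is exactly the hotspot described above, since every insert and delete serializes on it:

-- one-row table holding the running count
create table dbo.communications_rowcount (row_count bigint not null);

-- seed it once, ideally during a quiet window
insert into dbo.communications_rowcount (row_count)
select count(*) from dbo.communications;
go

-- keep the counter in sync on every insert and delete
-- (updates do not change the row count, so they are not handled here)
create trigger trg_communications_count
on dbo.communications
after insert, delete
as
begin
    set nocount on;
    update dbo.communications_rowcount
    set row_count = row_count
                  + (select count(*) from inserted)
                  - (select count(*) from deleted);
end;
go

Alternatively, if an approximate figure is good enough, SQL Server (2005 and later) already tracks per-partition row counts in its metadata, and reading them requires no scan at all:

-- approximate row count from catalog metadata; returns instantly
select sum(ps.row_count) as approximate_rows
from sys.dm_db_partition_stats as ps
where ps.object_id = object_id('dbo.communications')
  and ps.index_id in (0, 1);  -- heap (0) or clustered index (1) only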

