Friday, March 30, 2012


Performance Tweaks

I have a client that is having some performance issues with a SQL 2000 database. The application seems slow on the server as well as on the workstations, so it does not seem like a network or workstation issue; it seems to be a SQL issue. The server is decent: a high-end P3 processor, 1 GB of memory, and a RAID 5 SCSI array of 10k RPM drives. At this point they are only "testing" this application; only a handful of users are using the app. Any suggestions on performance tweaks would be great. They had a pseudo-administrator who forced SQL to use 500 MB of the 1 GB in the server... other than that it's all default.
thanks for the help,
rob|||Move the log files somewhere that is mirrored and on a different physical drive than the data. Also, the data files for your database should be on a different physical drive than master, model, and tempdb.
This sets you up to get up-to-the-minute recovery with no loss of data (using BACKUP LOG with the NO_TRUNCATE option).
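Wayne's up-to-the-minute recovery works roughly like this; a sketch with a hypothetical database name and file paths:

```sql
-- After losing the data drive, first back up the tail of the log
-- (possible because the log lives on a separate, mirrored drive):
BACKUP LOG MyAppDB TO DISK = 'E:\backup\MyAppDB_tail.trn' WITH NO_TRUNCATE

-- Then restore the last full backup, any log backups, and the tail:
RESTORE DATABASE MyAppDB FROM DISK = 'E:\backup\MyAppDB_full.bak' WITH NORECOVERY
RESTORE LOG MyAppDB FROM DISK = 'E:\backup\MyAppDB_tail.trn' WITH RECOVERY
```

The database must be using the Full recovery model for the log chain to be intact.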
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
|||Maybe these links could help him in troubleshooting:
SQL Server 7.0 Performance Tuning Guide :
http://www.microsoft.com/SQL/techinfo/perftuninguide.htm
Proper SQL Server Configuration Settings :
http://support.microsoft.com/support/kb/articles/Q166/9/67.ASP
Q175658 - How to gather Information for effective troubleshooting of
Performance :
http://support.microsoft.com/support/kb/articles/Q175/6/58.ASP
Q224587 INF: Troubleshooting Application Performance with SQL Server
http://support.microsoft.com/support/kb/articles/q224/5/87.asp
SQL Server Query Performance Troubleshooter :
http://support.microsoft.com/support/tshoot/sql7queryperf.asp
Index Tuning Wizard 2000
http://msdn.microsoft.com/library/techart/itwforsql.htm
Troubleshooting the Index Tuning Wizard
http://msdn.microsoft.com/library/psdk/sql/tr_servtools_48ro.htm
Designing an Index
http://msdn.microsoft.com/library/psdk/sql/cm_8_des_05_2ri0.htm
Thanks,
Vikram
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.

|||500 MB of memory is not much for SQL Server; however, it all depends on how big your database is and how many concurrent users you have. It may be enough, it may not. Open perfmon to check whether there is any paging going on on the server. Also, do you have the correct indexes? How did you configure your data files? Is your application using a lot of stored procedures? Are those stored procedures very CPU intensive?
Without any detail, it's hard to determine what's going on.
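Eric's point about the 500 MB cap can be checked, and lifted, with sp_configure; a sketch assuming the instance should be allowed most of the 1 GB:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Inspect the current setting (run_value is in MB):
EXEC sp_configure 'max server memory (MB)'

-- Let SQL Server use up to ~800 MB, leaving headroom for the OS:
EXEC sp_configure 'max server memory (MB)', 800
RECONFIGURE
```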
--
Eric Li
SQL DBA
MCDBA
|||RAID 5 is slow for log files.
|||So are 10k drives.
You'll need to capture perfmon data and see where your bottlenecks are. The usual suspects:
RAM - 500 MB is low, and even 1 GB could be.
CPU - maybe you need additional CPUs.
IO - RAID 5 with 10k drives... that is rather slow. You may be queueing up IO requests.
Indexes, bad sprocs, excessive blocking, deadlocks, etc.
Lots of work to do to figure this out.
Greg Jackson
PDX, Oregon
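Several of the usual suspects above can be checked from Query Analyzer before reaching for perfmon; a rough sketch using SQL 2000 commands:

```sql
-- Blocking: the BlkBy column shows which spid is blocking whom.
EXEC sp_who2

-- Accumulated wait statistics; high I/O-related waits point at the
-- RAID 5 / 10k-drive bottleneck.
DBCC SQLPERF(WAITSTATS)

-- How SQL Server is actually using its memory.
DBCC MEMORYSTATUS
```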

Performance tuning

Is there a way we can identify the most frequently hit tables in the database? We would like to pin the most frequently used tables in memory.
We have 6 GB of physical memory and would like to allocate 4-5 GB of memory to SQL Server.
Note: 4 or 5 of the tables have at least a million records.
Thanks in advance
Regards,
Amar|||Amar,
SQL Server Profiler is your tool.
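A Profiler trace can also be scripted as a server-side trace with the SQL 2000 trace procedures; a sketch (the file path is hypothetical, and the captured batch text still has to be scanned for table names afterwards):

```sql
DECLARE @TraceID int

-- Create a trace that writes to a file with a 50 MB cap.
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\table_hits', 50

-- Capture the text of completed batches
-- (event 12 = SQL:BatchCompleted, column 1 = TextData).
EXEC sp_trace_setevent @TraceID, 12, 1, 1

EXEC sp_trace_setstatus @TraceID, 1   -- start the trace

-- Later: stop, close, and load the trace file for analysis.
-- EXEC sp_trace_setstatus @TraceID, 0
-- EXEC sp_trace_setstatus @TraceID, 2
-- SELECT TextData FROM ::fn_trace_gettable('C:\traces\table_hits.trc', DEFAULT)
```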
|||Hi
Pinning very small tables can be argued for under extreme circumstances, but
tables with millions of rows, forget it. If you have queries that need to
process so many rows, you have a basic data architecture problem.
Pinning a table results in that memory not being available to other table
caches, so in effect, you might starve the cache and worsen performance.
SQL Server keeps track of how many times a page has been accessed, so it
will not discard heavily used pages.
Only after tuning every query, and optimizing every index and table, should
you consider pinning tables. By then you would know which tables are problems.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
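For reference, pinning itself is done with DBCC PINTABLE; a sketch using a hypothetical small lookup table (with Mike's caveats firmly in mind):

```sql
DECLARE @db_id int, @tbl_id int
SET @db_id = DB_ID('MyAppDB')                     -- hypothetical database
SET @tbl_id = OBJECT_ID('MyAppDB..SmallLookup')   -- hypothetical small table

DBCC PINTABLE (@db_id, @tbl_id)      -- pages stay in cache once read
-- DBCC UNPINTABLE (@db_id, @tbl_id) -- undo when finished
```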
|||Hi Mike,
Thanks for your quick response. A couple of points I wanted to mention: we are using a product supplied by our vendor, so any query changes/tuning need to go through our QA/testing. We were looking at pinning tables as a temporary fix while our vendor works on tuning the queries. Do you think that is a good idea?
Another thing we noticed is that our SQL Server is using only 1.7 GB of memory when we have a lot more unused memory. As you said, we are seeing a very high cache hit ratio, but at the same time we are also seeing very high disk reads, around 200 MB per second.
We were looking for a quick fix, as our users are finding it very difficult to work with a slow-performing system, which is affecting the business.
Thanks again for your help.
Regards,
Amar
|||As Mike said, you can pin very small tables if necessary, but you're not
likely to get much improved performance. If you pin the large tables
(the 4 or 5 with millions of rows), and those tables fill up available
SQL Server memory, you might find performance degraded as SQL Server has
to continually grab all other data from disk. You'd need to measure how
large the tables are and weigh that against the size of the remaining
tables and available memory.
What Windows version and edition, and which SQL Server edition, are you using?
David Gugick
Quest Software
www.imceda.com
www.quest.com
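Measuring the tables, as David suggests, can be done with sp_spaceused; a sketch (table name hypothetical):

```sql
-- Reserved, data, index, and unused space for one table:
EXEC sp_spaceused 'dbo.Orders'

-- Or for every user table at once, via the undocumented helper:
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''
```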
|||Hi David,
Thanks for the response.
We are using SQL Server Enterprise Edition and Windows 2003 Enterprise Edition.
It is a clustered SQL Server setup with 2 nodes.
Is there a way we can identify most frequently hit tables in the database?
|||"Frequently hit" is a somewhat nebulous term. Those tables that are
accessed most frequently may not be the best candidates for pinning if they
are always accessed in an index-optimized fashion. Proper indexing keeps
page reads to a minimum and keeps the data cache fresh with usable data.
Conversely, a table that is accessed infrequently, but is accessed by
table scan or clustered index scan operations, can cause the data cache
to get flushed of its good data and replaced with data that has no
business being there.
So my recommendation for a temporary fix is get SQL Server to use as
much memory as possible and at the same time start performance tuning
the database. I think you'll find that looking for candidates now for
pinning is a waste of time since it's going to be time consuming and
difficult to determine the benefits.
I realize you are looking for a quick, temporary fix to performance
issues, but these problems are almost always related to query and index
design. There is little in the way of quick fixes that can help, short
of adding/utilizing all available memory.
I'm not sure if it was you who started another thread on your memory
issues, but are you running AWE and SP4? If so, have you downloaded the
latest SP4 AWE hotfix?
David Gugick
Quest Software
www.imceda.com
www.quest.com
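If AWE is in play on that 6 GB box, the relevant SQL 2000 settings look roughly like this (it also requires the /PAE boot switch and the Lock Pages in Memory privilege for the service account; the values are illustrative):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'awe enabled', 1                  -- takes effect on restart
EXEC sp_configure 'max server memory (MB)', 5120    -- cap at 5 of the 6 GB
RECONFIGURE
```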
|||Hi Mike and David,
Thanks a lot for quick response and all the suggestions.
Regards,
Amar

Performance turning

Is there a way we can identify most frequently hit tables in the database. We
would like to pin most frequently used tables in memory.
We have 6GB physical memory and like to allocate as 4-5 GB of memory to SQL
server.
Note: 4 or 5 of the tables have at least a million records.
Thanks in advance
Regards,
AmarAmar
SQL Server Profiler is your tool.
"Amar" <Amar@.discussions.microsoft.com> wrote in message
news:08AF5B1E-3DE4-452B-8FF0-AF2C68EE4CDF@.microsoft.com...
> Is there a way we can identify most frequently hit tables in the database.
We
> would like to pin most frequently used tables in memory.
> We have 6GB physical memory and like to allocate as 4-5 GB of memory to
SQL
> server.
> Note: 4 or 5 of the tables have at least a million records.
> Thanks in advance
> Regards,
> Amar|||Hi
Pinning very small tables can be argued for under extreme circumstances, but
tables with millions of rows, forget it. If you have queries that need to
process so many rows, you have a basic data architecture problem.
Pinning a table results in that memory not being available to other table
caches, so in effect, you might starve the cache and worsen performance.
SQL Server keeps track of how many times a page has been accessed, so it
will not discard heavily used pages.
Only after turning every query, and optimizing every index and table, should
you consider pinning tables. By then you would know what tables are problems.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Amar" wrote:
> Is there a way we can identify most frequently hit tables in the database. We
> would like to pin most frequently used tables in memory.
> We have 6GB physical memory and like to allocate as 4-5 GB of memory to SQL
> server.
> Note: 4 or 5 of the tables have at least a million records.
> Thanks in advance
> Regards,
> Amar|||Hi Mike,
Thanks for your quick response. Couple of points I wanted to mention here
are we are using a product which is supplied by our supplier and hence
changing/tuning query needs to go through our QA/testing. We were looking at
this approach of pinning the table as a temporary fix while our supplier
works on tuning the queries. Would you think that is a good idea?
Other thing we noticed was our SQL is using only 1.7 GB of memory when we
have lot more unused memory. As you have said earlier we are noticing very
high cache hit at the same time we are also seeing very high disk read that
is 200M Bytes per sec.
We were looking for a quick fix as our users are finding it very difficult
to work with slow performing system which is affecting the business.
Thanks again for your help.
Regards,
Amar
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> Pinning very small tables can be argued for under extreme circumstances, but
> tables with millions of rows, forget it. If you have queries that need to
> process so many rows, you have a basic data architecture problem.
> Pinning a table results in that memory not being available to other table
> caches, so in effect, you might starve the cache and worsen performance.
> SQL Server keeps track of how many times a page has been accessed, so it
> will not discard heavily used pages.
> Only after turning every query, and optimizing every index and table, should
> you consider pinning tables. By then you would know what tables are problems.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Amar" wrote:
> > Is there a way we can identify most frequently hit tables in the database. We
> > would like to pin most frequently used tables in memory.
> > We have 6GB physical memory and like to allocate as 4-5 GB of memory to SQL
> > server.
> >
> > Note: 4 or 5 of the tables have at least a million records.
> >
> > Thanks in advance
> > Regards,
> > Amar|||Amar wrote:
> Hi Mike,
> Thanks for your quick response. Couple of points I wanted to mention
> here are we are using a product which is supplied by our supplier and
> hence changing/tuning query needs to go through our QA/testing. We
> were looking at this approach of pinning the table as a temporary fix
> while our supplier works on tuning the queries. Would you think that
> is a good idea?
> Other thing we noticed was our SQL is using only 1.7 GB of memory
> when we have lot more unused memory. As you have said earlier we are
> noticing very high cache hit at the same time we are also seeing very
> high disk read that is 200M Bytes per sec.
> We were looking for a quick fix as our users are finding it very
> difficult to work with slow performing system which is affecting the
> business.
>
As Mike said, you can pin very small tables if necessary, but you're not
likely to get much improved performance. If you pin the large tables
(the 4 or 5 with millions of rows), and those tables fill up available
SQL Server memory, you might find performance degraded as SQL Server has
to continually grab all other data from disk. You'd need to measure how
large the tables are and weight that against the size of the remaining
table and available memory.
What server version and edition and SQL edition are you using?
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Hi David,
Thanks for the response.
We are using SQL Enterprise edition and Windows 2003 Enterprise edition.
It is clustred sql with 2 nodes
Is there a way we can identify most frequently hit tables in the database?
"David Gugick" wrote:
> Amar wrote:
> > Hi Mike,
> > Thanks for your quick response. Couple of points I wanted to mention
> > here are we are using a product which is supplied by our supplier and
> > hence changing/tuning query needs to go through our QA/testing. We
> > were looking at this approach of pinning the table as a temporary fix
> > while our supplier works on tuning the queries. Would you think that
> > is a good idea?
> >
> > Other thing we noticed was our SQL is using only 1.7 GB of memory
> > when we have lot more unused memory. As you have said earlier we are
> > noticing very high cache hit at the same time we are also seeing very
> > high disk read that is 200M Bytes per sec.
> >
> > We were looking for a quick fix as our users are finding it very
> > difficult to work with slow performing system which is affecting the
> > business.
> >
> As Mike said, you can pin very small tables if necessary, but you're not
> likely to get much improved performance. If you pin the large tables
> (the 4 or 5 with millions of rows), and those tables fill up available
> SQL Server memory, you might find performance degraded as SQL Server has
> to continually grab all other data from disk. You'd need to measure how
> large the tables are and weight that against the size of the remaining
> table and available memory.
> What server version and edition and SQL edition are you using?
>
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>|||Amar wrote:
> Hi David,
> Thanks for the response.
> We are using SQL Enterprise edition and Windows 2003 Enterprise
> edition. It is clustred sql with 2 nodes
> Is there a way we can identify most frequently hit tables in the
> database?
>
"Frequently hit" is a somewhat nebulous term. Those tables that are
accessed most frequently may not be best candidates for pinning if they
are always accessed in an index optimzed fashion. Proper indexing keeps
page reads to a minimum and keeps the data cache fresh with usable data.
Conversely, a table that is accessed infrequently, but is accessed by
table scan or clustered index scan operations, can cause the data cache
to get flushed of its good data and replaced with data that has no
business being there.
So my recommendation for a temporary fix is get SQL Server to use as
much memory as possible and at the same time start performance tuning
the database. I think you'll find that looking for candidates now for
pinning is a waste of time since it's going to be time consuming and
difficult to determine the benefits.
I realize you are looking for a quick, temporary fix to performance
issues, but these problems are almost always related to query and index
design. There is little in the way of quick fixes that can help, short
of adding/utilizing all available memory.
I'm not sure if it was you that started another thread on your memory
issues, but are you running AWE and SP4. If so, have you download the
latest SP4 AWE hotfix?
--
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Hi Mike and David,
Thanks a lot for quick response and all the suggestions.
Regards,
Amar
"David Gugick" wrote:
> Amar wrote:
> > Hi David,
> > Thanks for the response.
> > We are using SQL Enterprise edition and Windows 2003 Enterprise
> > edition. It is clustred sql with 2 nodes
> > Is there a way we can identify most frequently hit tables in the
> > database?
> >
> "Frequently hit" is a somewhat nebulous term. Those tables that are
> accessed most frequently may not be best candidates for pinning if they
> are always accessed in an index optimzed fashion. Proper indexing keeps
> page reads to a minimum and keeps the data cache fresh with usable data.
> Conversely, a table that is accessed infrequently, but is accessed by
> table scan or clustered index scan operations, can cause the data cache
> to get flushed of its good data and replaced with data that has no
> business being there.
> So my recommendation for a temporary fix is get SQL Server to use as
> much memory as possible and at the same time start performance tuning
> the database. I think you'll find that looking for candidates now for
> pinning is a waste of time since it's going to be time consuming and
> difficult to determine the benefits.
> I realize you are looking for a quick, temporary fix to performance
> issues, but these problems are almost always related to query and index
> design. There is little in the way of quick fixes that can help, short
> of adding/utilizing all available memory.
> I'm not sure if it was you that started another thread on your memory
> issues, but are you running AWE and SP4. If so, have you download the
> latest SP4 AWE hotfix?
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>

Performance turning

Is there a way we can identify most frequently hit tables in the database. W
e
would like to pin most frequently used tables in memory.
We have 6GB physical memory and like to allocate as 4-5 GB of memory to SQL
server.
Note: 4 or 5 of the tables have at least a million records.
Thanks in advance
Regards,
AmarAmar
SQL Server Profiler is your tool.
"Amar" <Amar@.discussions.microsoft.com> wrote in message
news:08AF5B1E-3DE4-452B-8FF0-AF2C68EE4CDF@.microsoft.com...
> Is there a way we can identify most frequently hit tables in the database.
We
> would like to pin most frequently used tables in memory.
> We have 6GB physical memory and like to allocate as 4-5 GB of memory to
SQL
> server.
> Note: 4 or 5 of the tables have at least a million records.
> Thanks in advance
> Regards,
> Amar|||Hi
Pinning very small tables can be argued for under extreme circumstances, but
tables with millions of rows, forget it. If you have queries that need to
process so many rows, you have a basic data architecture problem.
Pinning a table results in that memory not being available to other table
caches, so in effect, you might starve the cache and worsen performance.
SQL Server keeps track of how many times a page has been accessed, so it
will not discard heavily used pages.
Only after turning every query, and optimizing every index and table, should
you consider pinning tables. By then you would know what tables are problems
.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Amar" wrote:

> Is there a way we can identify most frequently hit tables in the database.
We
> would like to pin most frequently used tables in memory.
> We have 6GB physical memory and like to allocate as 4-5 GB of memory to SQ
L
> server.
> Note: 4 or 5 of the tables have at least a million records.
> Thanks in advance
> Regards,
> Amar|||Hi Mike,
Thanks for your quick response. Couple of points I wanted to mention here
are we are using a product which is supplied by our supplier and hence
changing/tuning query needs to go through our QA/testing. We were looking at
this approach of pinning the table as a temporary fix while our supplier
works on tuning the queries. Would you think that is a good idea?
Other thing we noticed was our SQL is using only 1.7 GB of memory when we
have lot more unused memory. As you have said earlier we are noticing very
high cache hit at the same time we are also seeing very high disk read that
is 200M Bytes per sec.
We were looking for a quick fix as our users are finding it very difficult
to work with slow performing system which is affecting the business.
Thanks again for your help.
Regards,
Amar
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> Pinning very small tables can be argued for under extreme circumstances, b
ut
> tables with millions of rows, forget it. If you have queries that need to
> process so many rows, you have a basic data architecture problem.
> Pinning a table results in that memory not being available to other table
> caches, so in effect, you might starve the cache and worsen performance.
> SQL Server keeps track of how many times a page has been accessed, so it
> will not discard heavily used pages.
> Only after turning every query, and optimizing every index and table, shou
ld
> you consider pinning tables. By then you would know what tables are proble
ms.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Amar" wrote:
>|||Amar wrote:
> Hi Mike,
> Thanks for your quick response. Couple of points I wanted to mention
> here are we are using a product which is supplied by our supplier and
> hence changing/tuning query needs to go through our QA/testing. We
> were looking at this approach of pinning the table as a temporary fix
> while our supplier works on tuning the queries. Would you think that
> is a good idea?
> Other thing we noticed was our SQL is using only 1.7 GB of memory
> when we have lot more unused memory. As you have said earlier we are
> noticing very high cache hit at the same time we are also seeing very
> high disk read that is 200M Bytes per sec.
> We were looking for a quick fix as our users are finding it very
> difficult to work with slow performing system which is affecting the
> business.
>
As Mike said, you can pin very small tables if necessary, but you're not
likely to get much improved performance. If you pin the large tables
(the 4 or 5 with millions of rows), and those tables fill up available
SQL Server memory, you might find performance degraded as SQL Server has
to continually grab all other data from disk. You'd need to measure how
large those tables are and weigh that against the size of the remaining
tables and the available memory.
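For reference, the pinning mechanism under discussion here is DBCC PINTABLE on SQL Server 2000. A minimal sketch follows; the table name is a hypothetical stand-in, not one from the actual schema:

```sql
-- Hypothetical sketch of pinning a small lookup table on SQL Server 2000.
-- SmallLookupTable is an assumed name, used only for illustration.
DECLARE @db_id int, @tbl_id int
SET @db_id = DB_ID()
SET @tbl_id = OBJECT_ID('SmallLookupTable')
DBCC PINTABLE (@db_id, @tbl_id)

-- When it is no longer needed, release the pages again:
-- DBCC UNPINTABLE (@db_id, @tbl_id)
```

Note that pinning only keeps pages in cache once they have been read; it does not pre-load the table, and as discussed above it can starve the rest of the cache.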
What Windows version and edition, and which SQL Server edition, are you using?
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Hi David,
Thanks for the response.
We are using SQL Server Enterprise Edition and Windows 2003 Enterprise Edition.
It is a clustered SQL Server installation with 2 nodes.
Is there a way we can identify most frequently hit tables in the database?
"David Gugick" wrote:

> Amar wrote:
> As Mike said, you can pin very small tables if necessary, but you're not
> likely to get much improved performance. If you pin the large tables
> (the 4 or 5 with millions of rows), and those tables fill up available
> SQL Server memory, you might find performance degraded as SQL Server has
> to continually grab all other data from disk. You'd need to measure how
> large the tables are and weight that against the size of the remaining
> table and available memory.
> What server version and edition and SQL edition are you using?
>
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>|||Amar wrote:
> Hi David,
> Thanks for the response.
> We are using SQL Enterprise edition and Windows 2003 Enterprise
> edition. It is clustred sql with 2 nodes
> Is there a way we can identify most frequently hit tables in the
> database?
>
"Frequently hit" is a somewhat nebulous term. The tables that are
accessed most frequently may not be the best candidates for pinning if they
are always accessed in an index-optimized fashion. Proper indexing keeps
page reads to a minimum and keeps the data cache fresh with usable data.
Conversely, a table that is accessed infrequently, but is accessed by
table scan or clustered index scan operations, can cause the data cache
to get flushed of its good data and replaced with data that has no
business being there.
So my recommendation for a temporary fix is to get SQL Server to use as
much memory as possible and, at the same time, start performance tuning
the database. I think you'll find that looking for pinning candidates now
is a waste of time, since it's going to be time-consuming and difficult
to determine the benefits.
I realize you are looking for a quick, temporary fix to performance
issues, but these problems are almost always related to query and index
design. There is little in the way of quick fixes that can help, short
of adding/utilizing all available memory.
I'm not sure if it was you who started another thread on your memory
issues, but are you running AWE and SP4? If so, have you downloaded the
latest SP4 AWE hotfix?
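A sketch of the memory configuration David is suggesting, for SQL Server 2000 Enterprise Edition. The 5120 MB cap is purely illustrative, and the assumption is that the OS side (the /PAE boot option and the "Lock Pages in Memory" privilege for the service account) has already been set up:

```sql
-- Illustrative sketch only: enable AWE and raise the memory cap so SQL Server
-- can use more of the 6 GB in the box. 5120 MB is an assumed value, not a
-- recommendation for this specific system.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'awe enabled', 1      -- takes effect after a service restart
EXEC sp_configure 'max server memory', 5120
RECONFIGURE
```

With AWE enabled on SQL 2000, memory management is static, so leave enough headroom for the OS and any other processes on the cluster nodes.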
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Hi Mike and David,
Thanks a lot for quick response and all the suggestions.
Regards,
Amar
"David Gugick" wrote:

> Amar wrote:
> "Frequently hit" is a somewhat nebulous term. Those tables that are
> accessed most frequently may not be best candidates for pinning if they
> are always accessed in an index optimzed fashion. Proper indexing keeps
> page reads to a minimum and keeps the data cache fresh with usable data.
> Conversely, a table that is accessed infrequently, but is accessed by
> table scan or clustered index scan operations, can cause the data cache
> to get flushed of its good data and replaced with data that has no
> business being there.
> So my recommendation for a temporary fix is get SQL Server to use as
> much memory as possible and at the same time start performance tuning
> the database. I think you'll find that looking for candidates now for
> pinning is a waste of time since it's going to be time consuming and
> difficult to determine the benefits.
> I realize you are looking for a quick, temporary fix to performance
> issues, but these problems are almost always related to query and index
> design. There is little in the way of quick fixes that can help, short
> of adding/utilizing all available memory.
> I'm not sure if it was you that started another thread on your memory
> issues, but are you running AWE and SP4. If so, have you download the
> latest SP4 AWE hotfix?
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>

Performance Tuning UPDATE Statement

Below is a simple UPDATE that I have to perform on a table that has
about 2.5 million rows (about 4 million in production). This query
runs for an enormous amount of time (over 1 hour). Both
ChangerRoleID and ChangerID are indexed (not unique). Is there
any way to performance-tune this?

Controlling the physical drive of the log file isn't possible at our
client sites (we don't have control) and the recovery model needs to
be set to "Full".

UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID WHERE
ChangerRoleID IS NULL

Any Help would be greatly appreciated!
On 4 Aug 2004 08:27:50 -0700, MAS wrote:

>Below is a simple UPDATE that I have to perform on a table that has
>about 2.5 million rows (about 4 million in production) This query
>runs for an enourmous amount of time (over 1 hour). Both the
>ChangerRoleID and the ChangerID are indexed (not unique). Is there
>any way to performance tune this?
>Controlling the physical drive of the log file isn't possible at our
>client sites (we don't have control) and the recovery model needs to
>be set to "Full".
>UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID WHERE
>ChangerRoleID IS NULL
>Any Help would be greatly appreciated!

Hi MAS,

If you remove the non-unique index on ChangerRoleID before doing the
update and recreate it afterwards, you'll probably save some time. The
index could have been useful if only a few of all rows match the IS NULL
condition, but with over an hour of execution time, I think there are so
many matches that a full table scan will be quicker. Removing the index
before doing the update saves SQL Server the extra work of constantly
having to update the index to keep it in sync with the data. Of course,
this might affect other queries that execute during the update and would
have benefited from this index. The index on ChangerID will neither be
used nor cause extra work for this update.
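A sketch of the drop-and-recreate approach just described. The index name and its definition are assumptions for illustration; the real names would come from sp_helpindex on the table:

```sql
-- Hypothetical sketch of Hugo's suggestion. IX_ClientsHistory_ChangerRoleID
-- is an assumed index name; check sp_helpindex 'CLIENTSHISTORY' for the
-- actual name and definition before dropping anything.
DROP INDEX CLIENTSHISTORY.IX_ClientsHistory_ChangerRoleID

UPDATE CLIENTSHISTORY
SET ChangerRoleID = ChangerID
WHERE ChangerRoleID IS NULL

-- Recreate the index once the mass update is done.
CREATE INDEX IX_ClientsHistory_ChangerRoleID
ON CLIENTSHISTORY (ChangerRoleID)
```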

Check if there's a trigger that gets fired by the update. If you can
safely disable that trigger during the update process, do so. Same for
constraints: are there any CHECK or REFERENCES (foreign key) constraints
defined for ChangerRoleID? If so, disable constraint checking (again, only
if it is safe, i.e. you have to be sure that this update won't cause
violation of the constraint *and* that no other person accessing the
database during the time constraint checking is disabled will be able to
cause violations of the constraint).
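If disabling is safe under the conditions above, the trigger and constraint toggling might look like this sketch (table-wide ALL variants shown; in practice you would name the specific trigger or constraint):

```sql
-- Sketch only: suspend triggers and constraint checking for the duration of
-- the mass update, then restore them. Only safe if you are certain the update
-- cannot violate any constraint and no one else can, either.
ALTER TABLE CLIENTSHISTORY DISABLE TRIGGER ALL
ALTER TABLE CLIENTSHISTORY NOCHECK CONSTRAINT ALL

UPDATE CLIENTSHISTORY
SET ChangerRoleID = ChangerID
WHERE ChangerRoleID IS NULL

ALTER TABLE CLIENTSHISTORY CHECK CONSTRAINT ALL
ALTER TABLE CLIENTSHISTORY ENABLE TRIGGER ALL
```

Note that re-enabling with CHECK CONSTRAINT ALL does not re-validate existing rows; use WITH CHECK if you need the constraints re-verified against the data.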

You state that the recovery model needs to be full; from that I conclude
that you can't lock other users out of the database during the update. Can
you at least take measures to prevent other users from using (updating,
but preferably reading as well) the CLIENTSHISTORY table?

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||[posted and mailed, please reply in news]

MAS (mas32677@.hotmail.com) writes:
> Below is a simple UPDATE that I have to perform on a table that has
> about 2.5 million rows (about 4 million in production) This query
> runs for an enourmous amount of time (over 1 hour). Both the
> ChangerRoleID and the ChangerID are indexed (not unique). Is there
> any way to performance tune this?
> Controlling the physical drive of the log file isn't possible at our
> client sites (we don't have control) and the recovery model needs to
> be set to "Full".
> UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID WHERE
> ChangerRoleID IS NULL
> Any Help would be greatly appreciated!

To add to what Hugo said, if that index on ChangerRoleID is clustered,
and many rows have a NULL value, then you are in for a problem.

It may help to do it in batches:

DECLARE @batch_size int, @rowc int
SELECT @batch_size = 50000
SELECT @rowc = @batch_size
SET ROWCOUNT @batch_size
WHILE @rowc = @batch_size
BEGIN
   UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID
   WHERE ChangerRoleID IS NULL
     AND ChangerID IS NOT NULL
   SELECT @rowc = @@rowcount
END
SET ROWCOUNT 0

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp