Friday, March 30, 2012

Performance Tweaks

I have a client that is having some performance issues with a SQL 2000
database. The application seems to move slowly on the server as well as the
workstations, so it does not seem like a network or workstation issue. It seems
to be a SQL issue. The server is a decent server with a high-end P3 processor, 1
GB of memory, and a RAID 5 SCSI drive array running 10k RPM drives. At this point they
are only "testing" this application; only a handful of users are using the
app. Any suggestions on performance tweaks would be great. They had a
pseudo-administrator that forced SQL to use 500 MB of the 1 GB in the
server... other than that it's all default.
thanks for the help,
rob
Move the log files to somewhere that is mirrored and on a different
physical drive than the data. Also, the data files for your database should
be on a different physical drive than master, model, and tempdb.
This sets you up in a position where you can get up-to-the-minute recovery
with no loss of data (using BACKUP LOG with the NO_TRUNCATE option).
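As a rough illustration of the tail-log backup and file placement Wayne describes, here is a minimal sketch for SQL Server 2000; the database name, paths, and file names are hypothetical, not from the original post:

-- Back up the tail of the transaction log, even if the data files are lost
BACKUP LOG MyAppDB
    TO DISK = 'D:\Backups\MyAppDB_tail.trn'
    WITH NO_TRUNCATE

-- One way to move the log to another physical drive on SQL Server 2000:
-- detach, copy the .ldf at the operating-system level, then re-attach
EXEC sp_detach_db 'MyAppDB'
EXEC sp_attach_db 'MyAppDB',
    'E:\Data\MyAppDB.mdf',
    'F:\Logs\MyAppDB_log.ldf'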
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
Maybe these links could help him in troubleshooting:
SQL Server 7.0 Performance Tuning Guide:
http://www.microsoft.com/SQL/techinfo/perftuninguide.htm
Proper SQL Server Configuration Settings:
http://support.microsoft.com/support/kb/articles/Q166/9/67.ASP
Q175658 - How to gather Information for effective troubleshooting of
Performance:
http://support.microsoft.com/support/kb/articles/Q175/6/58.ASP
Q224587 INF: Troubleshooting Application Performance with SQL Server
http://support.microsoft.com/support/kb/articles/q224/5/87.asp
SQL Server Query Performance Troubleshooter:
http://support.microsoft.com/support/tshoot/sql7queryperf.asp
Index Tuning Wizard 2000
http://msdn.microsoft.com/library/techart/itwforsql.htm
Troubleshooting the Index Tuning Wizard
http://msdn.microsoft.com/library/psdk/sql/tr_servtools_48ro.htm
Designing an Index
http://msdn.microsoft.com/library/psdk/sql/cm_8_des_05_2ri0.htm
Thanks,
Vikram
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.

500 MB of memory is not much for SQL Server; however, it all depends on how
big your database is and how many concurrent users you have. It may be enough, or it
may not. Open Perfmon to check whether there is any paging going on on the
server. Also, do you have the correct indexes? How did you configure your data
files? Is your application using a lot of stored procedures? Are those
stored procedures very CPU intensive?
Without any detail, it's hard to determine what's going on.
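On the 500 MB cap mentioned in the question, this is roughly how that setting is inspected and adjusted in SQL Server 2000 (the 800 MB value below is only an example for a 1 GB box, not a recommendation):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Show the current cap (value is in MB)
EXEC sp_configure 'max server memory (MB)'
-- Raise the cap, leaving some room for the operating system
EXEC sp_configure 'max server memory (MB)', 800
RECONFIGURE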
--
Eric Li
SQL DBA
MCDBA
RAID 5 is slow for log files.
So are 10k drives.
You'll need to capture perfmon data and see where your bottlenecks are. The
usual suspects:
RAM - 500 MB is low, and 1 GB could be too.
CPU - Maybe you need additional CPUs.
IO - RAID 5 with 10k drives... that is rather slow. You may be queueing up
IO requests.
Indexes, bad sprocs, excessive blocking, deadlocks, etc.
Lots of work to do to figure this out.
Greg Jackson
PDX, Oregon
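To go with the perfmon advice above, here are a couple of quick server-side checks that work on SQL Server 2000; this is a sketch, not a full methodology:

-- Cumulative wait statistics since the counters were last cleared
DBCC SQLPERF(WAITSTATS)

-- Sessions that are currently blocked and what they are waiting on
SELECT spid, blocked, lastwaittype, waittime, cmd
FROM master..sysprocesses
WHERE blocked <> 0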

Performance tuning

Is there a way we can identify the most frequently hit tables in the database? We
would like to pin the most frequently used tables in memory.
We have 6 GB of physical memory and would like to allocate 4-5 GB of memory to SQL
Server.
Note: 4 or 5 of the tables have at least a million records.
Thanks in advance
Regards,
Amar
Amar
SQL Server Profiler is your tool.
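As a rough sketch of mining a saved Profiler trace for table references on SQL Server 2000 (the trace path and table name are placeholders, and counting statements that mention a table is only an approximation of "hits"):

-- Count captured statements that mention a given table
SELECT COUNT(*) AS statements
FROM ::fn_trace_gettable('C:\traces\workload.trc', DEFAULT)
WHERE TextData LIKE N'%ORDER_DETAIL%'
-- repeat per candidate table, or group however suits the workload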
"Amar" <Amar@.discussions.microsoft.com> wrote in message
news:08AF5B1E-3DE4-452B-8FF0-AF2C68EE4CDF@.microsoft.com...
> Is there a way we can identify most frequently hit tables in the database.
We
> would like to pin most frequently used tables in memory.
> We have 6GB physical memory and like to allocate as 4-5 GB of memory to
SQL
> server.
> Note: 4 or 5 of the tables have at least a million records.
> Thanks in advance
> Regards,
> Amar
Hi
Pinning very small tables can be argued for under extreme circumstances, but
tables with millions of rows, forget it. If you have queries that need to
process so many rows, you have a basic data architecture problem.
Pinning a table results in that memory not being available to other table
caches, so in effect, you might starve the cache and worsen performance.
SQL Server keeps track of how many times a page has been accessed, so it
will not discard heavily used pages.
Only after tuning every query, and optimizing every index and table, should
you consider pinning tables. By then you would know what tables are problems.
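For reference, pinning in SQL Server 2000 is done per table with DBCC PINTABLE; a minimal sketch follows (the table name is hypothetical, and per the advice above this should be used sparingly, if at all):

DECLARE @db int, @obj int
SELECT @db = DB_ID(), @obj = OBJECT_ID('dbo.SmallLookupTable')
-- Keep the table's pages in the buffer cache once they have been read
DBCC PINTABLE(@db, @obj)
-- Reverse it later
DBCC UNPINTABLE(@db, @obj)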
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
Hi Mike,
Thanks for your quick response. A couple of points I wanted to mention here:
we are using a product supplied by our supplier, and hence any
query changes/tuning need to go through our QA/testing. We were looking at
this approach of pinning the tables as a temporary fix while our supplier
works on tuning the queries. Do you think that is a good idea?
Another thing we noticed is that our SQL Server is using only 1.7 GB of memory when we
have a lot more unused memory. As you said earlier, we are seeing a very
high cache hit ratio, but at the same time we are also seeing very high disk reads,
about 200 MB per second.
We were looking for a quick fix, as our users are finding it very difficult
to work with a slow-performing system, which is affecting the business.
Thanks again for your help.
Regards,
Amar
As Mike said, you can pin very small tables if necessary, but you're not
likely to get much improved performance. If you pin the large tables
(the 4 or 5 with millions of rows), and those tables fill up available
SQL Server memory, you might find performance degraded as SQL Server has
to continually grab all other data from disk. You'd need to measure how
large the tables are and weigh that against the size of the remaining
tables and the available memory.
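A quick way to get those table sizes (the table name is a placeholder; sp_MSforeachtable is undocumented but ships with SQL Server 2000):

-- Rows, reserved, data, and index size for one table
EXEC sp_spaceused 'dbo.OrderHistory'
-- Or, roughly, for every user table in the database
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''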
What server version and edition, and which SQL Server edition, are you using?
David Gugick
Quest Software
www.imceda.com
www.quest.com
Hi David,
Thanks for the response.
We are using SQL Server Enterprise Edition and Windows 2003 Enterprise Edition.
It is a clustered SQL setup with 2 nodes.
Is there a way we can identify the most frequently hit tables in the database?
"Frequently hit" is a somewhat nebulous term. Those tables that are
accessed most frequently may not be the best candidates for pinning if they
are always accessed in an index-optimized fashion. Proper indexing keeps
page reads to a minimum and keeps the data cache fresh with usable data.
Conversely, a table that is accessed infrequently, but is accessed by
table scan or clustered index scan operations, can cause the data cache
to get flushed of its good data and replaced with data that has no
business being there.
So my recommendation for a temporary fix is to get SQL Server to use as
much memory as possible and at the same time start performance tuning
the database. I think you'll find that looking for candidates now for
pinning is a waste of time since it's going to be time consuming and
difficult to determine the benefits.
I realize you are looking for a quick, temporary fix to performance
issues, but these problems are almost always related to query and index
design. There is little in the way of quick fixes that can help, short
of adding/utilizing all available memory.
I'm not sure if it was you that started another thread on your memory
issues, but are you running AWE and SP4? If so, have you downloaded the
latest SP4 AWE hotfix?
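Since AWE came up, enabling it on SQL Server 2000 Enterprise Edition looks roughly like this; it assumes /PAE is set in boot.ini and the service account has the Lock Pages in Memory right, and the 5120 MB cap is only an example for a 6 GB box:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
-- With AWE on, SQL Server 2000 no longer manages memory dynamically,
-- so set an explicit ceiling that leaves room for the OS
EXEC sp_configure 'max server memory (MB)', 5120
RECONFIGURE
-- A service restart is required for 'awe enabled' to take effect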
David Gugick
Quest Software
www.imceda.com
www.quest.com
Hi Mike and David,
Thanks a lot for quick response and all the suggestions.
Regards,
Amar

Performance Tuning UPDATE Statement

Below is a simple UPDATE that I have to perform on a table that has
about 2.5 million rows (about 4 million in production). This query
runs for an enormous amount of time (over 1 hour). Both the
ChangerRoleID and the ChangerID are indexed (not unique). Is there
any way to performance tune this?

Controlling the physical drive of the log file isn't possible at our
client sites (we don't have control) and the recovery model needs to
be set to "Full".

UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID WHERE
ChangerRoleID IS NULL

Any help would be greatly appreciated!

Hi MAS,

If you remove the non-unique index on ChangerRoleID before doing the
update and recreate it afterwards, you'll probably save some time. The
index could have been useful if only a few of all rows match the IS NULL
condition, but with over an hour of execution time, I think there are so
many matches that a full table scan will be quicker. Removing the index
before doing the update saves SQL Server the extra work of constantly
having to update the index to keep it in sync with the data. Of course,
this might affect other queries that execute during the update and would
have benefited from this index. The index on ChangerID will neither be
used nor cause extra work for this update.
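A sketch of the drop-and-recreate approach described above; the index name is an assumption, not taken from the original post:

-- Assuming the non-unique index is named IX_ClientsHistory_ChangerRoleID
DROP INDEX CLIENTSHISTORY.IX_ClientsHistory_ChangerRoleID

UPDATE CLIENTSHISTORY
SET ChangerRoleID = ChangerID
WHERE ChangerRoleID IS NULL

CREATE INDEX IX_ClientsHistory_ChangerRoleID ON CLIENTSHISTORY (ChangerRoleID)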

Check if there's a trigger that gets fired by the update. If you can
safely disable that trigger during the update process, do so. Same for
constraints: are there any CHECK or REFERENCES (foreign key) constraints
defined for ChangerRoleID? If so, disable constraint checking (again, only
if it is safe, i.e. you have to be sure that this update won't cause
violation of the constraint *and* that no other person accessing the
database during the time constraint checking is disabled will be able to
cause violations of the constraint).
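A sketch of temporarily disabling the trigger and constraint checks described above (the ALL keyword is used for brevity and assumes, per the caveats above, that it is safe to do so):

-- Before the update: disable triggers and constraint checking on the table
ALTER TABLE CLIENTSHISTORY DISABLE TRIGGER ALL
ALTER TABLE CLIENTSHISTORY NOCHECK CONSTRAINT ALL

-- ... run the UPDATE here ...

-- Afterwards: re-enable them (CHECK CONSTRAINT ALL does not re-validate existing rows)
ALTER TABLE CLIENTSHISTORY CHECK CONSTRAINT ALL
ALTER TABLE CLIENTSHISTORY ENABLE TRIGGER ALL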

You state that the recovery model needs to be full; from that I conclude
that you can't lock other users out of the database during the update. Can
you at least take measures to prevent other users from using (updating,
but preferably reading as well) the CLIENTSHISTORY table?

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
[posted and mailed, please reply in news]

To add to what Hugo said, if that index on ChangerRoleID is clustered,
and many rows have a NULL value, then you are in for a problem.

It may help to do it in batches:

DECLARE @batch_size int, @rowc int
SELECT @batch_size = 50000
SELECT @rowc = @batch_size
SET ROWCOUNT @batch_size        -- limit each UPDATE to one batch of rows
WHILE @rowc = @batch_size
BEGIN
   UPDATE CLIENTSHISTORY SET ChangerRoleID = ChangerID
   WHERE ChangerRoleID IS NULL
     AND ChangerID IS NOT NULL
   SELECT @rowc = @@rowcount    -- stop once a batch comes back short
END
SET ROWCOUNT 0

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Performance Tuning Transactional Replication

Hello,
I have configured replication in a test lab using three identical two-processor servers. One server is being
used as the distributor. I am running an application on a fourth machine (a laptop) that is directly connected
to the source server. This application inserts records into the source database at a designated rate. When I
query against the source and target tables simultaneously, there is huge latency in the number of records being
replicated (the target is minutes behind the source). I have modified the log reader agent and distribution
agent properties (polling interval, maxbcpthreads, etc.). After doing that, I did notice some improvement, but
it is not enough. I need to have the two databases synchronized within seconds of each other. Are there any
other modifications that I can make to speed up the synchronization of data? By the time I have inserted 10K
records into the source, the target has only about 7K.
It is possible that you are loading your server too much. Latency is a
function of throughput: the more throughput you put on your system, the
greater your overall latency.
Can you run this command in your distribution database so we can get an idea
of undelivered commands?
select * from MSdistribution_status
Also, can you check whether you can use replication of stored
procedure execution? This can radically improve performance.
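For illustration, publishing a stored procedure's execution in SQL Server 2000 transactional replication looks roughly like this; the publication and procedure names are hypothetical:

EXEC sp_addarticle
    @publication = N'SourcePub',
    @article = N'usp_InsertRecord',
    @source_object = N'usp_InsertRecord',
    @type = N'proc exec'   -- replicate the procedure call rather than the individual row changes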
"Nupee" <anonymous@.discussions.microsoft.com> wrote in message
news:7B6AC640-D49F-4EB7-8EBF-4C394D6D0F5F@.microsoft.com...
> Hello,
> I have configured replication in a test lab using three identical two
processor based servers. One server is being used as the distributor. I am
running an application on a fourth machine(laptop) that is directly
connected to the Source Server. This application inserts records into the
source database at a designated rate. when I query against the source and
target tables simultaneaously, there is huge latency in the number of
records that is being replicated (Target is minutes behind the source). I
have modified the the log agent and distribution agent properties (polling
interval, the maxbcpthreads, etc.). After doing that, I did notice some
improvement but it is not enough. I need to have the two database
synchronized within seconds of each other. Is there any other modifications
that I can make to speed up the synchronization of data? By the time I have
inserted 10K records into the source, the target has only half 7K.

Performance Tuning Tips for SQL Server 2000

Dear All:
Can somebody help me on Performance tuning the SQL Server 2000 as I have
just begin learning the SQL server 2000. The current server configuration
stands as follows:
Windows 2000 Server
Dual CPU with Hyperthreading
2 GB RAM
SQL Server 2000 Standard Edition
The CPU shoots to 100% generally during peak hours. I would be grateful if
somebody could help me in setting up the correct performance monitors and tweaking
settings in terms of SQL Server, hardware, and Windows 2000.
Thanks
M Z.
Hi
My guess is 1 word: Indexes
Check that appropriate indexes are in place. Look at
http://www.sql-server-performance.com for good information.
Regards
Mike
How's your memory usage look? Got any other apps running on the same box?
Here's an MS Article on how to Troubleshoot SQL Server performance:
http://support.microsoft.com/kb/298475
In addition to all the other great suggestions, run SQL Profiler and do a
default trace. Just watch the counters for a while. Look for high CPU,
high duration, and high reads/writes (in comparison to all the other requests
being made). There is no hard and fast rule, but my personal goal is
that no query / update should take more than 1 second, optimally 100ms.
Obviously this is not always obtainable, but if you see queries taking 10 to
20 seconds, it's definitely a place to start looking.
Bill
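Once the trace points at a suspect statement, a common next step in Query Analyzer is to measure its I/O and timing, roughly like this (the query itself is only a placeholder):

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
-- run the suspect query here, for example:
SELECT * FROM dbo.SomeBusyTable WHERE SomeColumn = 42
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF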
"Mitul Z." <Mitul Z.@.discussions.microsoft.com> wrote in message
news:82B959A3-5E7C-4F7B-81C9-9D02726B7F79@.microsoft.com...
> Dear All:
> Can somebody help me on Performance tuning the SQL Server 2000 as I have
> just begin learning the SQL server 2000. The current server configuration
> stands as follows:
> Windows 2000 Server
> Dual CPU with Hyperthreading
> 2 GB RAM
> SQL Server 2000 Standard Edition
> The CPU shoots to 100% generally during peak hours. It would be grateful
> if
> somebody could help me in setting up correct performance monitors and
> tweak
> settings in terms of SQL Server with Hardware and Windows 2000.
> Thanks
> M
Microsoft SQL Server 2000 Performance Tuning Technical Reference
http://www.microsoft.com/mspress/india/books/book17654.htm

Performance Tuning SQL query in Trigger

I am using SQL Server 2000. I have written an UPDATE trigger on the CORP_CAGE table to log the details in
the CORP_CAGE_LOG_HIST table if there are any changes to the EMP_SEQ_NO column.

Please find the structure of the CORP_CAGE table:

1.CORP_CAGE_SEQ_NO
2.RECEIVED_DATE
3.EMP_SEQ_NO

The CORP_CAGE table has 50,000 records. The trigger "Check_Update" is fired when I execute the following
query from the application, which updates 10,000 records.

UPDATE CORP_CAGE SET EMP_SEQ_NO=NULL WHERE EMP_SEQ_NO=111

Please find the trigger below. The trigger can easily detect whether any UPDATE was done to the
EMP_SEQ_NO column by using the UPDATE() function.
But when it comes to the INSERT part, it takes more time (nearly 1 hour, or sometimes it will hang).
For a small number of records, this trigger works fine.


CREATE TRIGGER Check_Update ON dbo.CORP_CAGE FOR UPDATE AS
BEGIN
    IF UPDATE(EMP_SEQ_NO)
    BEGIN
        INSERT CORP_CAGE_LOG_HIST
        (
            CAGE_LOG_SEQ_NUM,
            BEFORE_VALUE,
            AFTER_VALUE,
            ENTRY_USER,
            FIELD_UPDATED
        )
        SELECT
            i.CAGE_LOG_SEQ_NUM,
            d.RECEIVED_DATE,
            i.RECEIVED_DATE,
            i.UPDATE_USER,
            'EMP_SEQ_NO'
        FROM
            inserted i,
            deleted d
        WHERE
            i.CAGE_LOG_SEQ_NUM = d.CAGE_LOG_SEQ_NUM
    END
END

Please help me with performance tuning this query.

I don't have the schema of your table, which in this case is critical. However, if this statement:

Code Snippet

UPDATE CORP_CAGE SET EMP_SEQ_NO=NULL WHERE EMP_SEQ_NO=111

is updating 10,000 records then your join is going to cause an update that is the cross product of 10,000 x 10,000 or 100,000,000 logical records. This cannot be right. Look at your trigger WHERE condition:

Code Snippet

WHERE
i.CAGE_LOG_SEQ_NUM = d.CAGE_LOG_SEQ_NUM

since you are updating ONLY for EMP_SEQ_NO = 111, and you are joining the INSERTED pseudo table -- with 10,000 records -- with the DELETED pseudo table -- also with 10,000 records -- and since all records of the INSERTED pseudo table and all records of the DELETED pseudo table have the same EMP_SEQ_NO -- specifically 111 -- you end up with the cross product. You need to include the KEY information as part of the join condition.
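For illustration only -- and assuming, from the table structure posted above, that CORP_CAGE_SEQ_NO is the primary key of CORP_CAGE and is carried into the inserted and deleted pseudo tables -- the SELECT feeding the INSERT would then match one row to one row instead of producing a cross product:

Code Snippet

SELECT
    i.CAGE_LOG_SEQ_NUM,
    d.RECEIVED_DATE,
    i.RECEIVED_DATE,
    i.UPDATE_USER,
    'EMP_SEQ_NO'
FROM inserted i
JOIN deleted d
    ON i.CORP_CAGE_SEQ_NO = d.CORP_CAGE_SEQ_NO   -- join on the table's own key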

performance tuning on high volume server

I am looking to improve the performance of my sql server databases.

I currently have a dual location system, the database server setup is basically a quad xeon with 4gb at my office and a double xeon with 4gb at a remote webhosting location. There are separate application/web/intranet servers at each site. The two databases servers are replicated with the local server publishing to the remote server.

The relational database holds circa 26 million records, growing by a volume of 10,000 per day, there are approximately 50,000 queries performed per day.

My theory is that the replication of the two databases is causing a slowdown; despite fast network connections (averaging 200ms between servers) the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and then replicate to the remote server, placing any burden on the second server?

I am planning to upgrade the local server to a high capacity 4+ cpu 64bit server, my problem is that although I have noticed a slow down in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?

Hi Gavin. What type of replication scheme have you implemented? Transactional/merge? Peer-to-peer, read-only subscriber, queued/immediate updating subscriber, etc.? Where to start on the research would have a lot to do with what type of topology you are using.

How are you getting to the conclusion that replication is responsible for the slowdown?

|||

Hi - it is a transactional type replication.

Chad Boyd MSFT wrote:

How are you getting to the conclusion that replication is responsible for the slowdown?

When turning off replication, the performance improved. It seems that the bandwidth between the servers would explain this.

What can I tell you about the topology?

|||

Hi Gavin. Is it a bi-directional replication setup? i.e. are the subscribers set to replicate updates back to the publisher, or are the subscribers simply read-only? What type of link exists between the sites (T1, T3, partial T, etc.)?

From the general sound of things, you don't have an extremely busy write server, so a decent link between the 2 sites sounds sufficient for what you have, which is why it would be surprising to hear that the bottleneck is the network bandwidth...of course, it most certainly could be, depending on the types of transactions you are seeing; this is just me thinking out loud.

You mentioned that when you turned off replication performance improved; do you mean that end-users received responses to queries faster? Or did you notice particular counters drop significantly? Or did blocking/locking issues possibly dissipate?

I'd be surprised if the link is the bottleneck, since when you say performance improved I'm going to assume you mean end-users started seeing faster response times to requests...if that's the case, it would seem that there is something occurring on the box itself that is slowing down the response times (of course, that could be the replication agent keeping a lock on something because it is waiting for a response from the subscriber across a slow link, but in transactional replication that's not as common as with merge, where the agents query the tables directly...in transactional replication, the log is read directly).

Anything you can post that explains what you are seeing in terms of what is showing you performance is improved? Counters, query response times, etc.?

|||

I am also working with large volumes of data -- an average of 12 million records per table and a total of 23 million records. There are no clustered or other indexes, because the data is too bulky to change. While running queries I find that my application hangs even if I set the ODBC timeout to 0. Unlike you, I am working with a normal x86 2.86 GHz machine with 504 MB of RAM.

Please advise.

|||

thank you for all your replies and assistance so far.

Chad - to answer your questions the subscribers are all read-only and there is a T1 link between sites.

As I am a developer and not a database specialist, I have decided that I need to bring in some outsourced consultancy. Before I do this I would like to do some research so that I am up to speed and have as much background knowledge as possible.

I think that the first thing I should do is measure as much as possible and establish the facts. Could you please advise me as to what tools and applications I can utilise to gather statistics?

What can I learn from my log files? What monitoring tools can I install?
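For illustration, two cheap starting points on SQL 2000 are checking for blocking in master..sysprocesses and keeping an eye on log space; neither query below is specific to this setup:

SELECT spid, blocked, waittime, lastwaittype, cpu, physical_io, program_name
FROM master..sysprocesses
WHERE blocked <> 0          -- sessions currently waiting on another session

DBCC SQLPERF(LOGSPACE)      -- log size and percent used for every database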

Performance Tuning methods

Hello everyone:
Can anyone list the general performance tuning methods? Thanks a lot.
ZYT|||Refer to the http://www.sql-server-performance.com website for all the tips and goodies in its PERFORMANCE section.

Performance tuning issues

I have built a solution which runs for two hours on a server with 4 CPUs at 2 GHz each and 2 GB of RAM on Windows 2000 Server (CPU utilization almost 70% and almost out of RAM). I moved the two source databases and the solution to a new box running 8 Xeons at 3 GHz each and 16 GB of RAM running Windows 2003 Server 32-bit, and it still runs for 2 hours (CPU utilization 10% and ample RAM left).

I was expecting it to run much faster. So I started exploring the performance tuning features in SSIS and started tweaking the following:

Control Flow:

DefaultBufferMaxRows: Based on the row size and buffer size, I calculated the max rows.

DefaultBufferSize: Set this to max 100MB

DataFlow Destination:

Rows Per Batch: Set this equal to the number of rows expected from the source.

Maximum Insert Commit Size: Set this to zero since memory was not an issue

I took the recommendations from other threads on similar issues here including the excellent recommendations at http://www.microsoft.com/technet/prodtechnol/sql/2005/ssisperf.mspx but now the job is running for 6 hours.

Can anyone explain what I am doing wrong here? I have tried each of the above one by one and all together. No matter what combination I try it does not work any faster, and both the source and destination databases are on the same server. Even SELECTs from the same database slowed down from 10 minutes to one hour.

Any assistance is appreciated, I need to get this job run in an hour.

Thanks!

- Philips.

How complex is your solution? I would recommend you look at a lower grain; take a look at the execution log and compare it against previous logs from the old server to see if you can identify a specific part of the process as the bottleneck...|||

I'd also recommend watching the OVAL webcast that I talk about here:

Donald Farmer's Technet webcast
(http://blogs.conchango.com/jamiethomson/archive/2006/06/14/4076.aspx)

-Jamie

|||Did you set up the server to access more than 4 GB of memory?

You need to configure Windows and then SQL Server.|||Yes, SQL Server is using around 14 GB of memory and AWE is turned on. Thanks!|||

It is a financial warehouse job which collects data from an ERP system, loads a staging area and then the datamart. It also creates aggregate tables. There are around 8 packages called from the master package.

My problem is I cannot find any way of using those four performance tuning settings accurately.

Any change I make to the default settings slows the job down.

|||

Thanks! I went through it.

I am at a point where I am willing to drop all the control flow tasks and use plain old INSERT INTO statements, using the power of the DBMS engine rather than trying to get the SSIS engine to do it, unless I figure out what is wrong and why.

|||

Philips-HCR wrote:

Thanks! I went through it.

I am at a point where I am willing to drop all the control flow task and use plain old insert into statements and use the power of the DBMS engine than try to get the SSIS engine to do it. Unless I figure out what is wrong and why.

There's nothing wrong with doing that. The use of SSIS does not dictate that you should use data flows to move your data about. If SQL is an option then invariably it will be the best option; it depends on your requirements and your preferences. You can issue SQL from an Execute SQL Task and still leverage all the other good stuff in SSIS like logging, workflow, portability etc... if you so wish.
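For illustration, the kind of set-based statement an Execute SQL Task could run in place of a data flow might look like the sketch below; the staging and datamart table names are made up, not from this thread:

INSERT INTO dbo.FactSales (OrderDate, CustomerKey, Amount)
SELECT s.OrderDate, c.CustomerKey, s.Amount
FROM staging.Sales AS s
JOIN dbo.DimCustomer AS c
    ON c.CustomerID = s.CustomerID   -- lookup resolved by the engine in one set-based pass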

-Jamie

Performance Tuning Help needed

This is a Sql 2000 server running on Win 2k. This is a 1
CPU box connected to a SAN.
I ran a profiler trace for 3 hours and collected all the
data. Some of the queries show a large number of reads.
When I run those queries individually the number of reads is
much smaller. For example, the trace file shows 36503
reads and the individual query plan shows 91 CPU reads.
What's going on? Is the profiler tracking all I/O activity
at that time ? I mean, is it tracking all tempdb usage and
any other I/O activity at this time. What would explain
this large drop in I/O ?
THanks"JackA" <anonymous@.discussions.microsoft.com> wrote in message
news:061701c3a972$5d6cc080$a301280a@.phx.gbl...
> Whats going on ? Is the profiler tracking all I/O activity
> at that time ? I mean, is it tracking all tempdb usage and
> any other I/O activity at this time. What would explain
> this large drop in I/O ?
I'm new to SQL Server so take what I say with a grain of salt. But does
the profiler keep track of how many executions there are for each SQL statement?
Executed once, the number of reads may be small, but if it was executed 1000
times during that timeframe I would expect the number to be large.|||I assume you ran the query with the same parameters?
If so, then this large a difference in reads might
indicate the trace captured that query with a different
plan. If the choice of parameters can result in large
differences in the rows involved, then depending on what
parameters were used when the query was last compiled,
different executions could get different plans.
A smaller difference, on the order of several hundred or
even more than 1k, might indicate a compile or recompile.
>--Original Message--
>This is a Sql 2000 server running on Win 2k. This is a 1
>CPU box connected to a SAN.
>I ran a profiler trace for 3 hours and collected all the
>data. Some of the queries show a large amount of reads.
>When I run those queries individually the # of reads is
>fairly smaller. For example the trace file shows 36503
>reads and the individual query plan shows 91 CPU reads.
>Whats going on ? Is the profiler tracking all I/O
activity
>at that time ? I mean, is it tracking all tempdb usage
and
>any other I/O activity at this time. What would explain
>this large drop in I/O ?
>THanks
>.
>|||Jack,
are you comparing Profiler output to Query Analyzer query plan?
If so... the difference is "normal".
Trust the Profiler, if it shows 36000 reads, that's really happening
in the DB.
peksi
"JackA" <anonymous@.discussions.microsoft.com> wrote in message
news:061701c3a972$5d6cc080$a301280a@.phx.gbl...
> This is a Sql 2000 server running on Win 2k. This is a 1
> CPU box connected to a SAN.
> I ran a profiler trace for 3 hours and collected all the
> data. Some of the queries show a large amount of reads.
> When I run those queries individually the # of reads is
> fairly smaller. For example the trace file shows 36503
> reads and the individual query plan shows 91 CPU reads.
> Whats going on ? Is the profiler tracking all I/O activity
> at that time ? I mean, is it tracking all tempdb usage and
> any other I/O activity at this time. What would explain
> this large drop in I/O ?
> THanks|||Hi Jack
The profiler shows what's happening at run time, which can be different from
what happens when you run the query for a few reasons, eg:
(a) The level & type of SQL Server activity is different when you run the
query from when the same query was captured by the profiler.
(b) The security context from which you ran the query may be different from
the query picked up by profiler.
(c) General resource levels on the server may be different.
Capturing the execution plans & comparing them may reveal some explanation
of what accounts for the different levels of reads, but this only takes you
part way to resolving the actual problem.
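For illustration, a simple way to capture comparable numbers in Query Analyzer is to switch on the I/O and time statistics around the call; the procedure name and parameter below are placeholders:

SET STATISTICS IO ON
SET STATISTICS TIME ON
EXEC dbo.usp_SlowQuery @SomeParam = 1   -- placeholder for the captured statement
SET STATISTICS IO OFF
SET STATISTICS TIME OFF

The logical reads reported per table can then be lined up against the Reads column in the trace.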
HTH
Regards,
Greg Linwood
SQL Server MVP
"JackA" <anonymous@.discussions.microsoft.com> wrote in message
news:061701c3a972$5d6cc080$a301280a@.phx.gbl...
> This is a Sql 2000 server running on Win 2k. This is a 1
> CPU box connected to a SAN.
> I ran a profiler trace for 3 hours and collected all the
> data. Some of the queries show a large amount of reads.
> When I run those queries individually the # of reads is
> fairly smaller. For example the trace file shows 36503
> reads and the individual query plan shows 91 CPU reads.
> Whats going on ? Is the profiler tracking all I/O activity
> at that time ? I mean, is it tracking all tempdb usage and
> any other I/O activity at this time. What would explain
> this large drop in I/O ?
> Thanks

Performance Tuning for Row-by-Row Update Statement

hi

For an unavoidable reason, I have to use row-by-row processing
(update) on a temporary table to update a history table every day.
I have around 60,000 records in the temporary table and about 2 million in
the history table.

Could anyone please suggest different methods to improve the runtime
of the query?

Would highly appreciate it!|||Is the row-by-row processing done in a cursor? Must you update exactly one
row at a time (if so, why?) or would it be acceptable to update 2,3 or 50
rows at a time?

You can use SET ROWCOUNT and a loop to fine-tune the batch size of rows to
be updated. Bigger batches should improve performance over updating single
rows.

SET ROWCOUNT 50

WHILE 1=1
BEGIN

UPDATE SomeTable
SET ...
WHERE /* row not already updated */

IF @@ROWCOUNT = 0
BREAK

END

SET ROWCOUNT 0

--
David Portas
SQL Server MVP
--|||"Muzamil" <muzamil@.hotmail.com> wrote in message
news:5a998f78.0405211023.24b40513@.posting.google.c om...
> hi
> For an unavoidable reason, I have to use row-by-row processing
> (update) on a temporary table to update a history table every day.
> I have around 60,000 records in temporary table and about 2 million in
> the history table.

Not much you can do if you absolutely HAVE to do row-by-row updating.

You might want to post DDL, etc. so others can take a crack at it. I've
seen it many times: someone will say "I have to use a cursor" or "I have to
update one row at a time", and then someone posts a much better/faster
solution.

Also, how are you handling transactions? Explicitly or implicitly? If
you're doing them implicitly, are you wrapping each update in its own, or
can you batch, say, 20 updates?

Finally, where are your log files? Separate physical drives?

> Could any one please suggest different methods to imporve the runtime
> of the query?
> Would highly appreciate!|||Hi
Thanks for your reply.

The row-by-row update is mandatory because the legacy system is
sending us information such as "Add", "Modify" or "Delete", and
this information HAS to be processed in the same order, otherwise we'll
get erroneous data.
I know it's a dumb way of doing things, but this is what our and their
IT departments have chosen as the correct course of action after several
meetings. Hence the batch idea will not work here.

I am not using cursors; instead I am using a loop based on the
primary key.

The log files are on different drives.

I've also tried using "WITH (ROWLOCK)" in the update statement but
it's not helping much.

Can you please still throw in some ideas? It would be a great help!

Thanks

"Greg D. Moore \(Strider\)" <mooregr_deleteth1s@.greenms.com> wrote in message news:<tOxrc.234090$M3.65389@.twister.nyroc.rr.com>...
> "Muzamil" <muzamil@.hotmail.com> wrote in message
> news:5a998f78.0405211023.24b40513@.posting.google.c om...
> > hi
> > For an unavoidable reason, I have to use row-by-row processing
> > (update) on a temporary table to update a history table every day.
> > I have around 60,000 records in temporary table and about 2 million in
> > the history table.
> Not much you can do if you absolutely HAVE to do row-by-row updating.
> You might want to post DDL, etc. so others can take a crack at it. I've
> seen many times someone will say, "I have to use a cursor", "I have to
> update one row at a time" and then someone posts a much better/faster
> solution.
> Also, how are you handling transactions? Explicitly or implicitely? If
> you're doing them implicitely, are you wrapping each update in its own, or
> can up batch say 20 updates?
> Finally, where's your log files? Separate physical drives?
>
> > Could any one please suggest different methods to imporve the runtime
> > of the query?
> > Would highly appreciate!|||Muzamil (muzamil@.hotmail.com) writes:
> The row-by-row update is mandatory becuase the leagacy system is
> sending us the information such as "Add", "Modify" or "delete" and
> this information HAS to be processed in the same order otherwise we'll
> get the erroneous data.

Ouch. Life is cruel, sometimes.

I wonder what possibilities there could be to find parallel streams,
that is updates that could be performed independently. Maybe you
can modify 10 rows at a time then. But it does not sound like a very
easy thing to do.

Without knowing the details of the system, it is difficult to give
much advice. But any sort of pre-aggregation you can do, is probably
going to pay back.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Details of the system:
The legacy system sends us records flagged with "Add", "Modify" or
"Delete".
The purpose of these flags is self-explanatory. But the fun began when
we noticed that within the same file, the legacy system sends us "Add" and
then "Modify". Thus, we were left with no other option except to do
row-by-row processing.
We came up with the following logic:

a) If the record's StatusFlag is 'A' and the record's key does not exist in
the DataWareHouse table, then the record is inserted into the
DataWareHouse table.

b) If the record's StatusFlag is 'A' but the record's key exists in
the DataWareHouse table, then the record is marked as invalid and will
be inserted into the InvalidTable.

c) If the record's StatusFlag is 'M', the record's key exists in
the DataWareHouse table and the record is active, then the corresponding
record in the DataWareHouse table will be updated.

d) If the record's StatusFlag is 'M' and the record's key exists in
the DataWareHouse table but the record is inactive, then the record is
marked as invalid and will be inserted into the InvalidTable.

e) If the record's StatusFlag is 'M' and the record's key does not exist in
the DataWareHouse table, then the record is marked as invalid and will
be inserted into the InvalidTable.

f) If the record's StatusFlag is 'D', the record's key exists in
the DataWareHouse table and the record is active, then the corresponding
record in the DataWareHouse table will be updated as inactive.

g) If the record's StatusFlag is 'D' and the record's key exists in
the DataWareHouse table but the record is inactive, then the record is
marked as invalid and will be inserted into the InvalidTable.

h) If the record's StatusFlag is 'D' and the record's key does not exist in
the DataWareHouse table, then the record is marked as invalid and will
be inserted into the InvalidTable.

This logic takes care of ALL the anomalies we were facing before, but
at the cost of a long processing time.

I await your comments.

Thanks

Erland Sommarskog <sommar@.algonet.se> wrote in message news:<Xns94F53BF51111Yazorman@.127.0.0.1>...
> Muzamil (muzamil@.hotmail.com) writes:
> > The row-by-row update is mandatory becuase the leagacy system is
> > sending us the information such as "Add", "Modify" or "delete" and
> > this information HAS to be processed in the same order otherwise we'll
> > get the erroneous data.
> Ouch. Life is cruel, sometimes.
> I wonder what possibilities there could be to find parallel streams,
> that is updates that could be performed independently. Maybe you
> can modify 10 rows at a time then. But it does not sound like a very
> easy thing to do.
> Without knowing the details of the system, it is difficult to give
> much advice. But any sort of pre-aggregation you can do, is probably
> going to pay back.|||Muzamil (muzamil@.hotmail.com) writes:
> Details of the system:
> The leagcy system sends us records flagged with "Add", "modify" or
> "delete".
> The purpose of these flags is self-explnatory. But the fun began when
> we noticed that within same file , legacy system sends us "Add" and
> then "Modify". Thus, we were left with no other option except to do
> row-by-row processing.
> We came up with the following logic:

Hm, you might be missing a few cases. What if you get an Add, and the record
exists in DW, but is marked inactive? With your current logic, the
input record is moved to the Invalid table.

And could that feeding system be as weird as to send Add, Modify, Delete,
and Add again? Well, for a robust solution this is what we should assume.

It's a tricky problem, and I was about to defer it, when I
recalled a solution that a colleague did for one of our stored procedures.
The secret word for tonight is bucketing! Assuming that there are
only a couple of input records for each key value, this should be
an excellent solution. You create buckets, so that each bucket has
at most one row per key value. Here is an example on how to do it:

UPDATE inputtbl
SET bucket = (SELECT count(*)
FROM inputtbl b
WHERE a.keyval = b.keyval
AND a.rownumber < b.rownumber) + 1
FROM inputtbl a

inputtbl.keyval is the key for the records in the DW table. Rownumber
is a column which describes the processing order. I assume that
you have such a column.

So now you can iterate over the buckets, and for each bucket, you can do
set-based processing. You still have to iterate, but instead of over 60,000
rows, only over a couple of buckets.
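For illustration, the loop over the buckets might look like the sketch below (my own illustration, not part of the original reply). Note that with the assignment above, the chronologically first row for each key gets the highest bucket number, so the loop walks the buckets from high to low:

DECLARE @bucket int
SELECT @bucket = MAX(bucket) FROM inputtbl
WHILE @bucket >= 1
BEGIN
    -- set-based Add/Modify/Delete handling restricted to this bucket,
    -- e.g. UPDATE/INSERT ... FROM inputtbl WHERE bucket = @bucket
    SET @bucket = @bucket - 1
END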

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||I think I was not articulate enough to convey the logic properly.
Anyway, thanks to everyone for your help.
By using ROWLOCK and proper indexes, I was able to reduce the time considerably.

Erland Sommarskog <sommar@.algonet.se> wrote in message news:<Xns94F6821D6ABYazorman@.127.0.0.1>...
> Muzamil (muzamil@.hotmail.com) writes:
> > Details of the system:
> > The leagcy system sends us records flagged with "Add", "modify" or
> > "delete".
> > The purpose of these flags is self-explnatory. But the fun began when
> > we noticed that within same file , legacy system sends us "Add" and
> > then "Modify". Thus, we were left with no other option except to do
> > row-by-row processing.
> > We came up with the following logic:
> Hm, you might be missing a few cases. What if you get an Add, and record
> exists in DW, but is marked inactive? With your current logic, the
> input record moved to the Invalid table.
> And could that feediug system be as weird as to send Add, Modify, Delete,
> and Add again? Well, for a robust solution this is what we should assume.
> It's a tricky problem, and I was about to defer the problem, when I
> recalled a solution that colleague did for one of our stored procedures.
> The secret word for tonight is bucketing! Assuming that there are
> only a couple of input records for each key value, this should be
> an excellent solution. You create buckets, so that each bucket has
> at most one row per key value. Here is an example on how to do it:
> UPDATE inputtbl
> SET bucket = (SELECT count(*)
> FROM inputtbl b
> WHERE a.keyval = b.keyval
> AND a.rownumber < b.rownumber) + 1
> FROM inputtbl a
> input.keyval is the keys for the records in the DW table. Rownumber
> is a column which as describes the processing order. I assume that
> you have such a column.
> So now you can iterate over the buckets, and for each bucket, you can do
> set- based processing. You still have to iterate, but instead over 60000
> rows, only over a couple of buckets.|||Muzamil (muzamil@.hotmail.com) writes:
> I think I was not articulate enough to convey the logic properly.
> Anyways, thanks to everyone for your help. By using the ROWLOCK and
> proper indexes, I was ale to reduce the time considerably.

Good indexes are always useful, and of course for iterative processing
they are even more imperative, since the cost of a less-than-optimal plan
is multiplied.

I'm just curious, would my bucketing idea be applicable to your problem?
It should give you even more speed, but if performance is good enough now, there
is of course no reason to spend more time on it.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Performance tuning for a stored procedure

I have an SP that is big, huge, 700-800 lines.

I am not an expert, but I need to figure out every possible way that
I can improve the performance of this SP.

In the next couple of weeks I will work on preparing SQL statements
that will create the tables, insert sample records and run the SP.
I hope people will look at my SP and give me any hints on how
I can write the SP better.

In the meantime, after looking at the SP briefly, my first observations are:

1- use SET NOCOUNT ON
2- avoid using UNION statements
3- use WITH (NOLOCK) with each SELECT statement
4- avoid using NESTED Select statements
5- use #temp tables
6- avoid renaming tables in SELECT statements, for example SELECT * FROM
tblClients C

Am I correct to consider the above 6 points as valid causes of
performance problems?

I would appreciate any comments/help

Thank you very much|||Hi Serge,

On Thu, 9 Sep 2004 00:13:31 -0400, serge wrote:

>I have an SP that is big, huge, 700-800 lines.
>I am not an expert but I need to figure out every possible way that
>I can improve the performance speed of this SP.
>In the next couple of weeks I will work on preparing SQL statements
>that will create the tables, insert sample record and run the SP.
>I would hope people will look at my SP and give me any hints on how
>I can better write the SP.
>In the meantime, after looking at the SP briefly, my first observations are:
>1- use SET NOCOUNT ON

This will not affect performance, but it is good practice. Many clients
choke on the numerous "(n row(s) affected)" messages returned by an SP
without this setting.

>2- avoid using UNION statements

Depends. Sometimes, a UNION can be quicker than the alternative. I've seen
cases where a query with a complicated WHERE clause that was evaluated
with a table scan was rewritten as two (or more) queries with UNION; each
of those queries was resolved with an index and the UNION version ran a
lot quicker.

You can gain performance by using UNION ALL instead of UNION where
possible. Sometimes, this can be made possible by extending the WHERE
clause of one of the queries in the UNION (though you have to be careful
that the cost of this extension doesn't outweigh the gain of UNION ALL
over UNION!).

Run the following examples and check execution plans and statistics on I/O
and time used for execution:

USE pubs
SELECT au_id, au_lname, au_fname
FROM authors
WHERE au_id LIKE '2%'
OR au_lname = 'Ringer'
GO
SELECT au_id, au_lname, au_fname
FROM authors
WHERE au_id LIKE '2%'
UNION
SELECT au_id, au_lname, au_fname
FROM authors
WHERE au_lname = 'Ringer'
GO
SELECT au_id, au_lname, au_fname
FROM authors
WHERE au_id LIKE '2%'
UNION ALL
SELECT au_id, au_lname, au_fname
FROM authors
WHERE au_lname = 'Ringer'
AND au_id NOT LIKE '2%'
GO

>3- use WITH (NOLOCK) with each SELECT statement

This can gain you some performance (especially if concurrency is high on
your database), but you run the risk of getting dirty reads. If that risk
is acceptable, go ahead. If you don't know what a dirty read is, then
don't use this option.

>4- avoid using NESTED Select statements

Again: depends. If you can safely and easily replace the nested select (or
subquery, as it's usually called) with other code, do so, then test both
versions to see if performance actually has improved (if often won't
improve, as SQL Server's optimizer already uses the same execution plan).

Similar, if you can safely replace a correlated subquery with a
non-correlated, do so and test both versions.

But if removing the subquery means that you have to code lots more SQL, it
might hurt performance instead of improving it. And if you can gain some
performance by replacing an intuitive subquery with a contrived and hard
to understand query, then you might want to reconsider if you really value
performance higher than maintainability. One day, you will find yourself
staring at that query, wondering what the %$# that ^%#$&%# query is
supposed to do.

>5- use #temp tables

At the risk of repeating myself: depends. If you find the same subquery
used over and over in the procedure, it MIGHT help performance if you
execute that subquery into a #temp table and use that for the rest of the
execution. It MIGHT also help further to index the temp table. But, again,
it might also hurt performance - creating the temp table and storing the
data induces some overhead as well and if you're not careful, you might be
faced with numerous recompilations of the stored procedure that wouldn't
be needed without the temp table.

If you use a temp table to break a complicated query down in steps, you
have a good chance of degrading performance. In one complicated query, the
optimizer may choose an execution plan that you would never think of but
that's faster than the obvious way to execute it; if you dictate the steps
by executing them separately with a temp table for intermediate results,
you take a lot of options from the optimizer. Of course, there is also the
consideration of maintainability and readability of your code, so you
might choose to accept the performance degradation, just so that you will
understand your code when (not if!!) you (or someone else) have to get
back to it later.

>6- avoid renaming tables in SELECT statements, for example SELECT * FROM
>tblClients C

I've never heard that using a table alias (as this is called) would hurt
performance. If you have any evidence of this, please point me to it. I
would be highly surprised.

In fact, using an alias is absolutely needed when you use the same table
more than once in a query and when you use derived tables; in all other
cases (except for single-table queries or very short table names) I'd also
heartily recommend using an alias. Do choose a mnemonic alias, not just a
random one or two letter combination!

>Am i correct to consider the above 6 points as valid things in terms of
>causing
>performance problems?

See above. And you might also want to take a look at this site:
http://www.sql-server-performance.com/

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||As always, Hugo has already given an excellent response. Just some
additions here...

> 3- use WITH (NOLOCK) with each SELECT statement

Don't use this, unless there is a specific performance problem with
respect to the statement in question. And even then you should only use
it after you have assessed whether the risks are acceptable.

> 6- avoid renaming tables in SELECT statements, for example SELECT * FROM
> tblClients C

Nonsense.

> 2- avoid using UNION statements

This is not a goal. Your goal should be to write statements that are
easy to read and understand and are effective. It starts there. In
general, you should not use GROUP BY, DISTINCT or UNION (without ALL) if
you don't need it. But if you do need it, then go for it, SQL-Server is
optimized for their use.

> 4- avoid using NESTED Select statements

Start with a set orientation in mind. That way you will automatically
avoid most unnecessary nested selects. The ones that remain are probably
your best choice.

> 5- use #temp tables

Don't use intermediate tables unless you have to. If you have to then
#temp tables are usually better than creating a permanent table and
dropping it at the end.

Gert-Jan
--
(Please reply only to the newsgroup)|||serge (sergea@.nospam.ehmail.com) writes:
> I have an SP that is big, huge, 700-800 lines.

Hey, you should see some of our procedures. We have one that is 3000 lines
long!

But, OK, 700 lines is no small size for a stored procedure.

Hugo and Gert-Jan have already pointed out weaknesses in your observations,
but I would like to make some supplemental comments.

> 2- avoid using UNION statements

This is a bad rule. Sometimes UNION may be the wrong solution. Sometimes
it is the right one. Even if we are talking from a performance perspective.

> 3- use WITH (NOLOCK) with each SELECT statement

Only do this if there are unavoidable table scans, and you really
are experiencing contention problems - *and* if you can accept that the
results are not consistent.

> 4- avoid using NESTED Select statements

Again, not a very good rule. But it depends a little on what you mean.
Say that you have:

SELECT a, b, c = (SELECT SUM(c) FROM B WHERE B.col = A.col)
FROM A
WHERE col2 BETWEEN 12 AND 19

My observation is that a rewrite using a derived table often gives better
performance:

SELECT A.a, A.b, B.c
FROM A
JOIN (SELECT c = SUM(c), col FROM B GROUP BY col) AS B ON A.col = B.col
WHERE col2 BETWEEN 12 AND 19

This may look expensive if B is large and there are only a handful of
values between 12 and 19 in A. But the above is only a logical description
of the query. The optimizer may recast computation order, as long as the
result is the same, and often does with a very good result.

The same thing applies to update queries:

UPDATE A
SET c = (SELECT SUM(c) FROM B WHERE A.col = B.col)
WHERE col2 BETWEEN 12 AND 19

While the above is ANSI-compliant, this is usually more effective:

UPDATE A
SET c = B.c
FROM A
JOIN (SELECT c = SUM(c), col FROM B GROUP BY col) AS B ON A.col = B.col
WHERE col2 BETWEEN 12 AND 19

You must benchmark all such changes. It may not always be the best thing
to do.

> 5- use #temp tables

This is a very complex topic. Yes, it can sometimes be a good thing to
save intermediate results in a temp table. But temp tables can also
cause performance problems, since if you fill up a temp table, SQL Server
may opt to recompile the procedure. And recompiling a 700 line stored
procedure can easily take a few seconds. This can be avoided by using
table variables instead. Table variables never cause recompilations.
But then again, you may want those recompilations, because it can slash
the execution time of the procedure from three hours to two minutes.

Here is a real-life story about a procedure optimization that I did
some time ago. The code originated from a stored procedure that I had
written in 1997 for MS SQL 6.0, and used a couple of temp tables on
which a bunch of operations were performed. At a customer site, this
procedure took too long. I tried a lot of tricks in the book,
but few gave any effect.

Eventually, I replaced most of the temp-table manipulation with a
50+ line SELECT statement which performed a FULL JOIN of three table
sources, of which at least one was a derived table. (And a three-way
full join requires at least one derived table in itself, because you
have to full-join two by two.)

So why did I not write it this way in 1997? Well, at that time SQL Server
did not have FULL JOIN or derived tables. Also, the requirements of the
original procedure were different, and more complex than the current one.

> 6- avoid renaming tables in SELECT statements, for example SELECT * FROM
> tblClients C

No, use aliases. The impact on execution performance is negligible, but
the impact on developer performance should not be ignored.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||>>1- use SET NOCOUNT ON
> This will not affect performance, but it is good practice. Many clients
> choke on the numerous "(n row(s) affected)" messages returned by an SP
> without this setting.

From the SQL 2000 BOL:

<Excerpt href="tsqlref.chm::/ts_set-set_3ed0.htm">
For stored procedures that contain several statements that do not return
much actual data, this can provide a significant performance boost because
network traffic is greatly reduced.
</Excerpt>
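For illustration, the setting normally goes at the top of the procedure body; the procedure and table names here are made up:

CREATE PROCEDURE dbo.usp_Example
AS
SET NOCOUNT ON   -- suppresses the "(n row(s) affected)" messages for the rest of the procedure
UPDATE dbo.SomeTable SET Flag = 1 WHERE Flag = 0
SELECT Flag, COUNT(*) AS Cnt FROM dbo.SomeTable GROUP BY Flag   -- real result sets still reach the client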
--
Hope this helps.

Dan Guzman
SQL Server MVP|||On Fri, 10 Sep 2004 11:30:32 GMT, Dan Guzman wrote:

>>>1- use SET NOCOUNT ON
>>
>> This will not affect performance, but it is good practice. Many clients
>> choke on the numerous "(n row(s) affected)" messages returned by an SP
>> without this setting.
>
>From the SQL 2000 BOL:
><Excerpt href="tsqlref.chm::/ts_set-set_3ed0.htm">
>For stored procedures that contain several statements that do not return
>much actual data, this can provide a significant performance boost because
>network traffic is greatly reduced.
></Excerpt>
Hi Dan,

Yes, you're right. My bad. Thanks for pointing out my error!

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||Thank you all for your responses.

A lot of new things for me in your answers; I'll have to read them many
times and take the time to fully understand them.

Thanks again.|||> 3- use WITH (NOLOCK) with each SELECT statement

This post is for everyone who gave me answers about the WITH (NOLOCK).

I've been using WITH (NOLOCK) in my own applications for a few years,
and I do know about dirty reads. Basically, the SELECT statement will not
wait for a transaction holding a lock to finish before getting the data. For
example, if a transaction is at the stage of modifying a column from 'USA' to
'Canada', the SELECT statement will fetch me 'USA' even though the record
is locked and in the process of having the column's value changed to 'Canada'.

Am I correct with my explanation?

Now, I have an SP with 300 lines, with a lot of tables, joins, left joins and
one or a few unions.

If I run this SP with parameters in SQL Query Analyzer, it takes 1 second
to return 11 rows. However, when the front-end application, which is ASP based,
calls this SP, it takes 23-25 seconds for the ASP page to load!

If I use WITH (NOLOCK) on all the tables, the ASP page loads in 4-5
seconds!

I am planning on understanding this SP and all the other SPs and finding all the
ways possible to write them better in order to improve the code and get
better performance results. This will be a slow process.

Until then, if my explanation of WITH (NOLOCK) is valid and there
is only one side effect, then maybe we can temporarily use WITH (NOLOCK)
for the short term.

As always, I appreciate any feedback.

Thank you very much|||Hi Serge,

On Tue, 14 Sep 2004 00:23:04 -0400, serge wrote:

>I've been using WITH (NOLOCK) in my own applications for a few years.
>And i do know about Dirty Reads. It's basically the SELECT statement will
>not
>wait for a transaction with a lock to finish before getting the data. For
>example,
>if a transaction is at the stage of modifying a column from 'USA' to
>'Canada',
>the SELECT statement will fetch me USA even though the record
>is locked and in the process of having the column's value being changed to
>Canada.
>Am i correct with my explanation?

Only partially. It depends on how "far" the processing of the other
transaction has gone. If the update has already been performed but the
transaction is not yet finished, the SELECT statement with the NOLOCK hint
will return the value 'Canada'.

This can cause quite unexpected side effects when the other transaction
later has to be rolled back - you'll have read a value that logically
never even existed in the database. It might even be that the rollback
occured BECAUSE the value 'Canada' violates a business rule - it's
inserted in the table first, then the trigger starts that checks the
business rules and a rollback is initiated if the business rules were
violated. Normal locking behaviour ensures that nobody ever sees the
"illegal" value 'Canada', as this row is locked until the transaction
finishes; performing a dirty read means that you run the risk of returning
this "illegal" value.

>If i run this SP with parameters in the SQL Query Analyzer, it takes 1
>second
>to return 11 rows. However, the front-end application which is ASP based,
>when calls this SP, it takes 23-25 seconds for the ASP page to load!
>If i use WITH (NOLOCK) on all the tables, the ASP page loads in 4-5
>seconds!

If the SP executes in 1 second from QA, then you should be able to see
comparable performance from any other client. I suspect that this
difference is caused by something in the ASP code. As I have never used
ASP myself, I'll leave it to others to comment on this.

Your solution to use dirty reads looks more like a workaround than like a
fix. I hope someone more ASP-savvy then me can help you find the real
cause of the delay.

And I suggest you think very long and very hard about what harm might be
caused if your SP reads and uses data changed by an unfinished transaction
that might even be rolled back.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||Hi Hugo,

Your reply was very informative for me.

I liked your explanation of the "illegal" value. It's true, I didn't think
about dirty reads the way you explained them, where the value is actually 'Canada'
but it hasn't been finalized and the record hasn't been committed yet.

As far as ASP having a problem, yes, the ASP page has a serious problem.
When I run the profiler, a bunch of SPs get executed THREE straight times!!!
This I find very bad and I'll have to investigate the repeats (I guess
the page is being refreshed 3 times, thus triggering all the SPs to be executed three
times).

And yes I am considering WITH (NOLOCK) as a workaround for now and
not a fix.

I'll keep investigating.

Thank you very much Hugo.

> >I've been using WITH (NOLOCK) in my own applications for a few years.
> >And i do know about Dirty Reads. It's basically the SELECT statement will
> >not
> >wait for a transaction with a lock to finish before getting the data. For
> >example,
> >if a transaction is at the stage of modifying a column from 'USA' to
> >'Canada',
> >the SELECT statement will fetch me USA even though the record
> >is locked and in the process of having the column's value being changed
to
> >Canada.
> >Am i correct with my explanation?
> Only partially. It depends on how "far" the processing of the other
> transaction has gone. If the update has already performed but the
> transaction is not yet finished, the SELECT statement with nolock hint
> will return the value 'Canada'.
> This can cause quite unexpected side effects when the other transaction
> later has to be rolled back - you'll have read a value that logically
> never even existed in the database. It might even be that the rollback
> occured BECAUSE the value 'Canada' violates a business rule - it's
> inserted in the table first, then the trigger starts that checks the
> business rules and a rollback is initiated if the business rules were
> violated. Normal locking behaviour ensures that nobody ever sees the
> "illegal" value 'Canada', as this row is locked until the transaction
> finishes; performing dirty read means that you run the risk of returning
> this "illegal" value.
>
> >If i run this SP with parameters in the SQL Query Analyzer, it takes 1
> >second
> >to return 11 rows. However, the front-end application which is ASP based,
> >when calls this SP, it takes 23-25 seconds for the ASP page to load!
> >If i use WITH (NOLOCK) on all the tables, the ASP page loads in 4-5
> >seconds!
> If the SP executes in 1 second from QA, then you should be able to see
> comparable performance from any other client. I suspect that this
> difference is caused by something in the ASP code. As I have never used
> ASP myself, I'll leave it to others to comment on this.
> Your solution to use dirty reads looks more like a workaround than like a
> fix. I hope someone more ASP-savvy then me can help you find the real
> cause of the delay.
> And I suggest you think very long and very hard about what harm might be
> caused if your SP reads and uses data changed by an unfinished transaction
> that might even be rolled back.|||On Tue, 14 Sep 2004 08:02:05 -0400, serge wrote:

>As far ASP having a problem, yes it has a serious problem the ASP page.
(snip)
>I'll keep investigating.

Hi Serge,

You might consider looking in the microsoft.public hierarchy of
newsgroups. There are lots of them with "asp" somewhere in the name, so
there's a good chance you'll find an answer to this in one of those
groups. I know too little (read: nothing <g>) about asp to recognise which
group might be the best suited.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||serge (sergea@.nospam.ehmail.com) writes:
> If i run this SP with parameters in the SQL Query Analyzer, it takes 1
> second to return 11 rows. However, the front-end application which is
> ASP based, when calls this SP, it takes 23-25 seconds for the ASP page
> to load!

If the ASP code calls it three times, there is something to be fixed.

But exactly how was it run? Was any of the calls preceded by
SET FMTONLY ON? FMTONLY is a sort of NOEXEC thing; SQL Server only
returns information about result sets.

Assuming that your ASP code uses ADO, make sure that you use
CommandType = adCmdStoredProc. Do not use adCmdText with an
EXEC statement or ODBC call syntax.

If there is an indexed view or indexed computed column involved
somewhere, this can explain the difference between ASP and QA. In QA,
the setting ARITHABORT is ON by default, but it's off when you use
ADO. Thus, issuing SET ARITHABORT ON (or setting it as the default for the
database with ALTER DATABASE, or for the server with sp_configure) could
give some effect.
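For illustration, the first two options mentioned above look like this; the database name is made up, and the server-wide route goes through the 'user options' bitmask in sp_configure:

SET ARITHABORT ON                          -- per connection, e.g. at the top of the procedure
ALTER DATABASE MyAppDb SET ARITHABORT ON   -- default for new connections to this database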

Also, SET NOCOUNT ON is good.

> If i use WITH (NOLOCK) on all the tables, the ASP page loads in 4-5
> seconds!

This could indicate that there is some blocking. Are you alone on the
server when it takes 25 seconds to run? Unless there are concurrency
issues, I find it difficult to believe that (NOLOCK) has that drastic
effect. There is some cost for locks, but I find it difficult to believe
that it is that huge.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||A lot of good information you've provided me here. I will have to look at
the ASP
code and investigate all your points.

To answer your question: yes, I am alone on the server when it takes 25
seconds to run.
Checking SQL Profiler, I see 3 sets of the same SPs being called, and the
main SP that is slow takes about 6.5 seconds to run; multiply that by 3 and that's 20
seconds.
The ASP page is also calling other SPs, totaling 23-25 seconds.

I'll have to investigate much further, go through your points and the other
points I've received, and find out what I can fix/improve.

Thanks again!

> If the ASP code calls it three times, there is something to be fixed.
> But exactly how was it run? Was any of the calls preceded by
> SET FMTONLY ON? FMTONLY it a sort of NOEXEC thing, SQL Server only
> returns information about result sets.
> Assuming that your ASP code uses, ADO, make sure that you use
> CommantType = adStoredProcedure. Do not use adCommandText with an
> EXEC statement or a ODBC call syntax.
> If there there is an indexed view or indexed computed column involved
> somewhere, this can explain the difference between ASP and QA. In QA,
> the setting ARITHABORT is ON by default, but it's off when you use
> ADO. Thus, issuing SET ARITHABORT ON (or setting it default for the
> database with ALTER DATABASE or for the server with sp_configure could
> give some effect.)
> Also, SET NOCOUNT ON is good.
> > If i use WITH (NOLOCK) on all the tables, the ASP page loads in 4-5
> > seconds!
> This could indicate that there is some blocking. Are you alone on the
> server when it takes 25 seconds to run? Unless there are concurrency
> issues, I find it difficult to believe that (NOLOCK) has that drastic
> effect. There is some cost for locks, but I find it difficult to believe
> that it is that huge.