
Friday, March 23, 2012

Performance Q

Hi,
I have a table which stores a parent-child relationship and I want to update
a column in the table to hold the ultimate parent, i.e. the top level parent
for each parent. I have written a scalar UDF which creates a table
variable to hold all the relationships, and I then select the top level from this.
To update 6793 rows it takes just over 2 minutes; my question is, does this seem
like acceptable performance? I'm trying to improve the performance, and
looking at the execution plan I can see 63% of the cost is a Table
Spool/Eager spool, another 29% is a table scan.
I have tried creating a unique non-clustered and also a clustered index on
ParentId, ChildId and re-running the query; each time the execution plan
changes to use the index, but the overall time is not reduced.
My next question is: are there performance issues or other concerns with
using a datetime field in an index? I have a table which stores historical
data for several days and I often want to pull out one day's info. I also want to
enforce that the data cannot be duplicated for the same day, so I want to
create a composite key on Date_Stamp plus another column.
That sounds horrific; no, it can't be OK for updating ~7000 rows to take 2 minutes,
since I don't suppose these are really big and complex functions (are they?).
Just post your DDL here so we can see where the bottleneck is.
In general, indexes on datetime columns work fine.
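For the one-row-per-day requirement, a minimal sketch (the history table and the
second key column are made up here, since that table's DDL wasn't posted; style 112
keeps the computed column deterministic so it can be indexed):
-- Strip the time portion into a computed column, then enforce uniqueness per day.
ALTER TABLE dbo.HistoryData
ADD Date_Only AS CONVERT(char(8), Date_Stamp, 112) -- yyyymmdd
GO
CREATE UNIQUE INDEX UX_HistoryData_Day ON dbo.HistoryData (Date_Only, SomeOtherCol)
GO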
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"Neil" <Neil@.discussions.microsoft.com> schrieb im Newsbeitrag
news:538D1605-FF40-4B0B-81FA-88B6D22577D4@.microsoft.com...
> Hi,
> I have a table which stores a parent-child relationship and I want to
> update
> a column in the table to hold the ultimate parent, i.e. the top level
> parent
> for each parent. I have written a scalar UDF which creates a creates a
> table
> var to hold all the relationships, I then select the top level from this.
> To update 6793 rows it takes just over 2mins, my question is does this
> seem
> like acceptable performance? I'm trying to improve the performance and
> looking at the execution plan I can see 63% of the cost is a Table
> Spool/Eager spool, another 29% is a table scan.
> I have tried creating a unique non-clustered and also a clustered index on
> ParentId,ChildId and re-running the query, each time the execution plan
> changes to use the index but the overall time is not reduced.
>
> My next question is are there are performance issues or other concerns
> with
> using a datetime field in an index? I have a table which stores historical
> data for several days and often want to pull out one days info also I want
> to
> enforce that the data cannot be duplicated for the same day so I want to
> create a composite key on Date_Stamp plus another col.|||We really need to see your DDL to figure that out. Also, you can try to use
System Monitor to look at Avg. Disk Queue Length, Buffer Cache Hit Ratio and
Page Life Expectancy. You may be memory starved.
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"Neil" <Neil@.discussions.microsoft.com> wrote in message
news:538D1605-FF40-4B0B-81FA-88B6D22577D4@.microsoft.com...
Hi,
I have a table which stores a parent-child relationship and I want to update
a column in the table to hold the ultimate parent, i.e. the top level parent
for each parent. I have written a scalar UDF which creates a creates a table
var to hold all the relationships, I then select the top level from this.
To update 6793 rows it takes just over 2mins, my question is does this seem
like acceptable performance? I'm trying to improve the performance and
looking at the execution plan I can see 63% of the cost is a Table
Spool/Eager spool, another 29% is a table scan.
I have tried creating a unique non-clustered and also a clustered index on
ParentId,ChildId and re-running the query, each time the execution plan
changes to use the index but the overall time is not reduced.
My next question is are there are performance issues or other concerns with
using a datetime field in an index? I have a table which stores historical
data for several days and often want to pull out one days info also I want
to
enforce that the data cannot be duplicated for the same day so I want to
create a composite key on Date_Stamp plus another col.|||The DDL for the table is -
CREATE TABLE [dbo].[CDRCustomerRelationship] (
DATE_STAMP datetime DEFAULT getdate(),
[ParentId] [int] NOT NULL ,
[ChildId] [int] NOT NULL ,
[DeletionFlag] [char] (1) NOT NULL ,
[UltimateParent] [int] NULL ,
[TopLevelFlag] AS (CASE WHEN ([ParentID] = [UltimateParent]) THEN 1 ELSE 0
END)
) ON [PRIMARY]
GO
The DDL for the UDF is -
/* Returns the Ultimate Parent */
CREATE FUNCTION [dbo].[ufGetUltimateParent] (@iStartingPoint int)
RETURNS int
AS
BEGIN
DECLARE @EntityInfo TABLE
(
EParentId int, -- Parent EntityID
EChildId int, -- EntityID
Depth int -- depth
)
DECLARE @UltimateEParentId int
DECLARE @Level int
SELECT @Level = 0 -- initializes the level to value 0
DECLARE @InsertCount int
SELECT @InsertCount = 1
-- Try to find the first Parent Record by selecting the starting parent as a child
INSERT INTO @EntityInfo
SELECT
ParentId, -- parent id which will be @iStartingPoint
ChildId, -- child id
@Level -- depth
FROM
dbo.CDRCustomerRelationship
WHERE
ChildId = @iStartingPoint
AND DeletionFlag = 'N'
-- Check to see if ultimate parent
SELECT @InsertCount = @@ROWCOUNT
IF ( @InsertCount > 0 )
BEGIN
SELECT @Level = @Level + 1
-- There are more parents so start finding them
WHILE @InsertCount > 0
BEGIN
INSERT @EntityInfo
SELECT
ParentId, -- related item
ChildId, -- item
@Level -- depth
FROM
(SELECT DISTINCT EParentId FROM @EntityInfo WHERE Depth = @Level - 1) AS tmp
INNER JOIN
dbo.CDRCustomerRelationship
ON tmp.EParentId = ChildId
WHERE DeletionFlag = 'N'
SELECT @InsertCount = @@ROWCOUNT
-- Check to see if ultimate parent
IF ( @InsertCount > 0 )
SELECT @Level = @Level + 1
END -- end loop, no more parents
END
ELSE
BEGIN -- No parents in hierarchy so insert record for itself
INSERT INTO @EntityInfo
SELECT
@iStartingPoint,
@iStartingPoint,
0 AS Depth
END
SELECT @UltimateEParentId = EParentId FROM @EntityInfo WHERE Depth =
(SELECT MAX(Depth) FROM @EntityInfo)
RETURN @UltimateEParentId
END
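On SQL Server 2005 the whole update can also be done in one set-based pass with a
recursive CTE instead of calling the scalar UDF once per row. A sketch against the
DDL above (it assumes the hierarchy contains no cycles and, like the UDF, walks up
from ChildId; column names come from the posted table):
;WITH Chain (ChildId, AncestorId, Depth) AS
(
    SELECT ChildId, ParentId, 1
    FROM dbo.CDRCustomerRelationship
    WHERE DeletionFlag = 'N'
    UNION ALL
    SELECT c.ChildId, r.ParentId, c.Depth + 1
    FROM Chain AS c
    INNER JOIN dbo.CDRCustomerRelationship AS r
        ON r.ChildId = c.AncestorId
       AND r.DeletionFlag = 'N'
)
UPDATE t
SET UltimateParent = top1.AncestorId
FROM dbo.CDRCustomerRelationship AS t
INNER JOIN
(
    -- keep only the highest ancestor reached for each child
    SELECT ChildId, AncestorId,
           ROW_NUMBER() OVER (PARTITION BY ChildId ORDER BY Depth DESC) AS rn
    FROM Chain
) AS top1
    ON top1.ChildId = t.ChildId AND top1.rn = 1
WHERE t.DeletionFlag = 'N'
-- add OPTION (MAXRECURSION 0) if the hierarchy can be deeper than 100 levels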

"Jens Sü?meyer" wrote:

> That sounds horrific, no that can't be ok to update ~7000 rows within 2 min,
> since i don't guess that these are really big and complex function, (are
> they ?). Just post you DDL in here to see where the bottleneck is located
> at.
> In common, indexes on datetime columns work fine.
> HTH, Jens Suessmeyer.
> --
> http://www.sqlserver2005.de
> --
>
> "Neil" <Neil@.discussions.microsoft.com> schrieb im Newsbeitrag
> news:538D1605-FF40-4B0B-81FA-88B6D22577D4@.microsoft.com...
>
>|||Out of interest what kind of performance would you expect time wise?
"Tom Moreau" wrote:

> We really need to see your DDL to figure that out. Also, you can try to use
> System Monitor to look at Avg. Disk Queue Length, Buffer Cache Hit Ratio and
> Page Life Expectancy. You may be memory starved.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> ..
> "Neil" <Neil@.discussions.microsoft.com> wrote in message
> news:538D1605-FF40-4B0B-81FA-88B6D22577D4@.microsoft.com...
> Hi,
> I have a table which stores a parent-child relationship and I want to update
> a column in the table to hold the ultimate parent, i.e. the top level parent
> for each parent. I have written a scalar UDF which creates a creates a table
> var to hold all the relationships, I then select the top level from this.
> To update 6793 rows it takes just over 2mins, my question is does this seem
> like acceptable performance? I'm trying to improve the performance and
> looking at the execution plan I can see 63% of the cost is a Table
> Spool/Eager spool, another 29% is a table scan.
> I have tried creating a unique non-clustered and also a clustered index on
> ParentId,ChildId and re-running the query, each time the execution plan
> changes to use the index but the overall time is not reduced.
>
> My next question is are there are performance issues or other concerns with
> using a datetime field in an index? I have a table which stores historical
> data for several days and often want to pull out one days info also I want
> to
> enforce that the data cannot be duplicated for the same day so I want to
> create a composite key on Date_Stamp plus another col.
>|||Didn't read the whole of your DDL code, but this might give you an impression of
how to do it:
CREATE FUNCTION UlitmateParent
(
@ChildValue INT
)
RETURNS INT
AS
BEGIN
DECLARE @UltimateParent INT
DECLARE @ParentID INT
SET @ParentId = -1
WHILE @UltimateParent IS NOT NULL --> Condition depends on what is entered if there is no Parent at all
BEGIN
Select @UltimateParent = ParentID from CDRCustomerRelationship Where
ChildID = @ChildValue
IF @UltimateParent IS NOT NULL
BEGIN
SET @ChildValue = @UltimateParent
END
END
RETURN @ChildValue
END
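Note: as written, the WHILE condition is false on entry because @UltimateParent
starts out NULL, so the loop body never runs. A corrected sketch of the same idea
(assuming a top-level row either has no parent row at all or points to itself, and
ignoring cycles; if a child has several parents this follows one of them arbitrarily):
CREATE FUNCTION dbo.ufGetUltimateParentLoop (@ChildValue int)
RETURNS int
AS
BEGIN
    DECLARE @ParentId int
    WHILE 1 = 1
    BEGIN
        SET @ParentId = NULL -- reset so "no parent row found" is detectable
        SELECT @ParentId = ParentId
        FROM dbo.CDRCustomerRelationship
        WHERE ChildId = @ChildValue
          AND DeletionFlag = 'N'
        IF @ParentId IS NULL OR @ParentId = @ChildValue
            BREAK -- reached the top of the hierarchy
        SET @ChildValue = @ParentId -- climb one level
    END
    RETURN @ChildValue
END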
Self-relational tables and queries can be found here:
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"Neil" <Neil@.discussions.microsoft.com> schrieb im Newsbeitrag
news:7CB6B6F9-559F-4EDF-8602-48AF7423312D@.microsoft.com...
> The DDL for the table is -
>
> CREATE TABLE [dbo].[CDRCustomerRelationship] (
> DATE_STAMP datetime DEFAULT getdate(),
> [ParentId] [int] NOT NULL ,
> [ChildId] [int] NOT NULL ,
> [DeletionFlag] [char] (1) NOT NULL ,
> [UltimateParent] [int] NULL ,
> [TopLevelFlag] AS (CASE WHEN ([ParentID] = [UltimateParent]) THEN 1 ELSE 0
> END)
> ) ON [PRIMARY]
> GO
> The DDL for the UDF is -
> /* Returns the Ultimate Parent */
> CREATE FUNCTION [dbo].[ufGetUltimateParent] (@.iStartingPoint int)
> RETURNS int
> AS
> BEGIN
> DECLARE @.EntityInfo TABLE
> (
> EParentId int, -- Parent EntityID
> EChildId int, -- EntityID
> Depth int -- depth
> )
> DECLARE @.UltimateEParentId int
>
> DECLARE @.Level int
> SELECT @.Level = 0 -- initializes the level to value 0
> DECLARE @.InsertCount int
> SELECT @.InsertCount = 1
> -- Try to find the first Parent Record by selecting the starting parent as
> a
> child
> INSERT INTO @.EntityInfo
> SELECT
> ParentId, -- parent id which will be @.startPoint
> ChildId, -- child id
> @.Level -- depth
> FROM
> DBO.CDRCustomerRelationship
> WHERE
> ChildId = @.iStartingPoint
> AND DeletionFlag = 'N'
>
> -- Check To see if ultimate parent
> SELECT @.InsertCount = @.@.ROWCOUNT
> IF ( @.InsertCount > 0 )
> BEGIN
> SELECT @.Level = @.Level + 1
> -- There are more parents so start finding them
> WHILE @.InsertCount > 0
> BEGIN
> INSERT @.EntityInfo
> SELECT
> ParentId, -- relateditem
> ChildId, -- item
> @.Level -- depth
> FROM
> (SELECT DISTINCT EParentId FROM @.EntityInfo WHERE Depth = @.Level-1) AS tmp
> INNER JOIN
> dbo.CDRCustomerRelationship
> ON tmp.EParentId = ChildId
> WHERE DeletionFlag = 'N'
>
> SELECT @.InsertCount = @.@.ROWCOUNT
> -- Check To see if ultimate parent
> IF ( @.InsertCount > 0 )
> SELECT @.Level = @.Level + 1
> END -- end Loop, no more parents
> END
> ELSE
> BEGIN --No parents in hierachy so insert record for itself
> INSERT INTO @.EntityInfo
> SELECT
> @.iStartingPoint,
> @.iStartingPoint,
> 0 AS Depth
> END
> SELECT @.UltimateEParentId = EParentId FROM @.EntityInfo Where Depth =
> (SELECT MAX(Depth) FROM @.EntityInfo)
> RETURN @.UltimateEParentId
> END
>
>
>
>
>
>
>
>
>
>
>
>
> "Jens Smeyer" wrote:
>|||I have it down to 34 secs now by adding a clustered index to the table, which seems
more reasonable
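One plausible form of that index, guessing the column order from the UDF's repeated
lookups by ChildId (the post doesn't say which columns the clustered index covers):
CREATE CLUSTERED INDEX CIX_CDRCustomerRelationship
ON dbo.CDRCustomerRelationship (ChildId, ParentId)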
"Jens Sü?meyer" wrote:

> Didn't read the whole of your DDL code, but this might give you an impression of
> how to do it:
>
> CREATE FUNCTION UlitmateParent
> (
> @.ChildValue INT
> )
> RETURNS INT
> AS
> BEGIN
> DECLARE @.UltimateParent INT
> DECLARE @.ParentID INT
> SET @.ParentId = -1
> WHILE @.UltimateParent IS NOT NULL --> Condition depends on what is entered
> if there is not Parent at all
> BEGIN
> Select @.UltimateParent = ParentID from CDRCustomerRelationship Where
> ChildID = @.ChildValue
> IF @.UltimateParent IS NOT NULL
> BEGIN
> SET @.ChildValue = @.UltimateParent
> END
> END
> RETURN @.ChildValue
> END
>
> Self-relational tables and queries can be found here:
>
> HTH, Jens Suessmeyer.
> --
> http://www.sqlserver2005.de
> --
> "Neil" <Neil@.discussions.microsoft.com> schrieb im Newsbeitrag
> news:7CB6B6F9-559F-4EDF-8602-48AF7423312D@.microsoft.com...
>
>|||MS article here, sorry ;-)
http://msdn.microsoft.com/library/d..._5yk3.asp
"Jens Smeyer" <Jens@.Remove_this_For_Contacting.sqlserver2005.de> schrieb
im Newsbeitrag news:%23O36HAjVFHA.3636@.TK2MSFTNGP14.phx.gbl...
> Didn't read the whole of your DDL code, but this might give you an impression of
> how to do it:
>
> CREATE FUNCTION UlitmateParent
> (
> @.ChildValue INT
> )
> RETURNS INT
> AS
> BEGIN
> DECLARE @.UltimateParent INT
> DECLARE @.ParentID INT
> SET @.ParentId = -1
> WHILE @.UltimateParent IS NOT NULL -->Condition depends on what is
> entered if there is not Parent at all
> BEGIN
> Select @.UltimateParent = ParentID from CDRCustomerRelationship Where
> ChildID = @.ChildValue
> IF @.UltimateParent IS NOT NULL
> BEGIN
> SET @.ChildValue = @.UltimateParent
> END
> END
> RETURN @.ChildValue
> END
>
> Self-relational tables and queries can be found here:
>
> HTH, Jens Suessmeyer.
> --
> http://www.sqlserver2005.de
> --
> "Neil" <Neil@.discussions.microsoft.com> schrieb im Newsbeitrag
> news:7CB6B6F9-559F-4EDF-8602-48AF7423312D@.microsoft.com...
>|||Try to use the function (perhaps with some corrections based on your
system); perhaps that'll also help you.
"Neil" <Neil@.discussions.microsoft.com> schrieb im Newsbeitrag
news:780B3895-1C43-4BB2-B360-444932B9AB2C@.microsoft.com...
>I have it down to 34 secs now by adding clustered index to table, this
>seems
> more reasonable
> "Jens Smeyer" wrote:
>

Performance problems when running through COM+ and DTC

You didn't post the statement so it is hard to say, but you can either set
MAXDOP at the server level or specify a hint in the query to limit the
number of CPUs a single statement uses.
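A sketch of both options (the table names in the INSERT are placeholders, since the
actual statement wasn't posted):
-- Server-wide cap: affects every query on the instance
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 1
RECONFIGURE
-- Per-statement hint, without touching the server setting
INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM dbo.SourceTable
OPTION (MAXDOP 1)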
Andrew J. Kelly SQL MVP
"Anders Evensen" <anders.evensen@.millionhandshakes.com> wrote in message
news:OIKIRdUbHHA.1400@.TK2MSFTNGP06.phx.gbl...
> Hi everyone,
> we have a performance problem when running a relatively heavy INSERT
> statement from a COM+ application against SQL Server 2005 (SP1). The query
> takes up all CPU resources (4 CPUs) on the database server while
> processing (about 15 minutes) and the database server does not respond to
> other queries. The general response from the database server computer is
> also poor, including its desktop and other user interactions.
> When running the same statement from Managerment Studio, it takes about
> same time to complete, but it only takes up 1 CPU and other queries can
> run at the same time.
> This happens only for some queries. A minor change to the SELECT-part of
> the query may make the problem go away.
> The SQL Server database is a clustered 64 bit installation. The SQL Server
> has SP1 installed, but not SP2. Is it likely that this issue is fixed in
> SP2.
>
> Thanks in advance.
>
>
The number of threads used is always determined at run time based on a
number of factors, so even if it uses all the processors one time, it can easily
use just one the next time around. But in this case I feel it is related to
how it is being called and something called parameter sniffing. You can get
two very different plans if they are not called identically and evaluate to
the same datatypes etc. Again it would help to see the real statement.
Andrew J. Kelly SQL MVP
"Anders Evensen" <anders.evensen@.millionhandshakes.com> wrote in message
news:efOOB7UbHHA.1508@.TK2MSFTNGP06.phx.gbl...
> Thanks. We will try this.
> However, I am very interesting in knowing if there is a logical
> explanation to why SQL Server processes uses totally different CPU
> resources when running the statement from COM+ in a DTC transaction
> compared to running it from Management Studio.
> -Anders
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23U4xx0UbHHA.4012@.TK2MSFTNGP03.phx.gbl...
>
|||On Fri, 23 Mar 2007 14:25:37 +0100, "Anders Evensen"
<anders.evensen@.millionhandshakes.com> wrote:

>Thanks. We will try this.
>However, I am very interesting in knowing if there is a logical explanation
>to why SQL Server processes uses totally different CPU resources when
>running the statement from COM+ in a DTC transaction compared to running it
>from Management Studio.
I believe COM+ often sets the isolation level to repeatable read, which
could explain the situation - Management Studio doesn't do that.
J.

>-Anders
>"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
>news:%23U4xx0UbHHA.4012@.TK2MSFTNGP03.phx.gbl...
>
|||Actually I think it uses Serializable, but I am not 100% sure.
Andrew J. Kelly SQL MVP
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:fuj8039981uc8e6ipe83k51r7f3a9092s3@.4ax.com...
> On Fri, 23 Mar 2007 14:25:37 +0100, "Anders Evensen"
> <anders.evensen@.millionhandshakes.com> wrote:
>
> I believe COM+ often sets isolation level to repeatable read, which
> could explain the situation - management studio doesn't do that.
> J.
>
>
|||On Sat, 24 Mar 2007 09:40:55 +0100, "Tibor Karaszi"
<tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote:

>Yep, it is serializable per default.
Right ... the point being he can try to replicate the behavior by
setting the isolation mode in Management Studio; I meant to point that
out too.
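For instance, to mimic those defaults in a Management Studio session before running
the same statement (a minimal sketch):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
-- run the same INSERT here
ROLLBACK TRANSACTION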
J.
|||Thanks. We are actually using read committed as the isolation level from
COM+, and the read committed snapshot option is turned on for the database.
Management Studio is using read committed as well.
-A
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:321b03pup5h45fns2puit4buo90655cqso@.4ax.com...
> On Sat, 24 Mar 2007 09:40:55 +0100, "Tibor Karaszi"
> <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote:
>
> Right, ... the point being he can try to replicate the behavior by
> setting the isolation mode in management studio, I meant to point that
> out too.
> J.
>
|||On Sun, 25 Mar 2007 13:34:43 +0200, "Anders Evensen"
<anders.evensen@.millionhandshakes.com> wrote:

>Thanks. We are actually using read committed as the isolation level from
>COM+, and the read commitet snapshot option is turned on for the database.
>Management Studio is using read committed as well.
Then I guess I would ask, are you *sure* that when you run it through
COM+, nothing else is executing? You're running an INSERT statement,
does COM+ send the exact string you use in Management Studio, or does it do a
prepared statement or some such? Have you run Profiler to be clear on
this?
The COM+ connections might also prep with other random settings that
could be factors. Do they return exactly the same results either way?
You could use profiler to display the plans from executing from either
side, it wouldn't tell you *why* exactly, but it might give more
hints.
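One quick check along those lines is to compare the effective session settings on
the two connections (a sketch; the DMV form is SQL Server 2005 only):
-- run in each session and compare the output
DBCC USEROPTIONS
-- or, on SQL Server 2005, for the current session
SELECT session_id, transaction_isolation_level, ansi_nulls, arithabort, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID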
J.

>-A
>"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
>news:321b03pup5h45fns2puit4buo90655cqso@.4ax.com.. .
>

Friday, March 9, 2012

Performance of SQL2k5 with snapshot isolation level turned on

I did several tests of the performance of SQL2k5, and I see quite strange results.
I restored a database from SQL2k, rebuilt indexes, and updated statistics. I did
everything described in the migration process, then backed up the resulting database.
In each of the next tests I restored the database from that backup, then ran the tests.
Since our application, an OLTP system, is very performance sensitive, we need a
good distribution of transaction execution times. If I run the tests on
SQL2k5 with snapshot isolation not turned on, I see a good
distribution - less than 0.5% of transactions take more than 200 ms, and the
average execution time is ~5 ms. If I change the compatibility level to
90 and turn on allow_snapshot_isolation, the behavior changes dramatically.
Nearly 5% of transactions start to take more than 200 ms, and the average execution
time also increases significantly. There are no transactions which
actually use the snapshot isolation level; the stored procedures are exactly the
same as before. There is no memory pressure and there are no IO bottlenecks. During
the tests the counters showed nearly 1 GB of free memory. IO system - EMC CX500.
Data on 5x73GB FC 15kRPM drives (RAID0), log on 3 drives (RAID0). The database is
2 GB in size, but only ~200 MB of the data was read in the tests. There is a 1.2 GB
write cache, which means that all writing during the test went only to the write
cache on the CX500; the tests were not long enough to fill the entire write cache.
Windows 2003 x64 Enterprise Edition, SQL2k5 x64 Developer Edition.
CPU load was ~50% in the tests without snapshot isolation, and ~70% in the
tests with snapshot isolation. 2 CPUs.
From the tests it looks like I cannot remove the replication to QueryDB and run
queries on the same computer as the OLTP load...
Why is the impact of snapshot isolation so big?
As soon as you change the isolation level to either Snapshot Isolation or
Read Committed Snapshot, SQL Server will start saving ALL updated data in
the version store in tempdb, whether or not any queries ever read the
versioned data. So every changed row will have additional overhead
associated with it.
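A quick way to watch how much of tempdb the version store actually takes while the
test runs (SQL Server 2005 DMV; a sketch):
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
FROM sys.dm_db_file_space_usage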
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"andsm" <andsm@.discussions.microsoft.com> wrote in message
news:B12BA2E7-8639-4DE6-9113-75F56E6BAB7C@.microsoft.com...
>I did several tests of performance of SQL2k5, and I see quite strange
>results.
> I restored database from SQL2k, rebuilded indexes, updated statistics. I
> did
> all what described in migration process. Backuped the resulting database.
> In
> each of next tests I restored the database from backup. Next I run tests.
> Since our application, OLTP, is very performance sensitive, we need to
> have
> good distribution of transaction execution time. In case if I run tests on
> SQL2k5, when snapshot isolation level is not turned on, I see good
> distribution - less than 0.5% of transactions executing more than 200ms,
> average execution time is ~5 ms. In case if I change compatibility level
> to
> 90 and turn on allow_snapshot_isolation, behavior is dramatically
> changing.
> Near 5% of transactions start to execute more than 200 ms, average
> execution
> time is also significally increasing. There are no transactions which are
> really use snapshot isolation level, stored procedures are exacly same as
> before. There is no any memory pressure, no IO bottlenecks. During the
> tests
> counters showed nearby 1 GB of free memory. IO system - EMC CX500. Data on
> 5x73GB FC 15kRPM drives (RAID0), log on 3 drives (RAID0). Database have 2
> GB
> size, but only ~200 MB of the data was read in the tests. 1.2GB write
> cache -
> it means what all writing during the test was done only to write cache on
> CX500, tests were not so long to fill the entire write cache. Windows 2003
> x64 Enterprise Edition, SQL2k5 x64 Developer Edition.
> CPU load was ~50% in tests without snapshot isolation level, and ~70% with
> tests with snapshot level. 2CPU.
> From the test it looks like I can not remove replication to QueryDB and
> run
> queries on same computer where OLTP load...
> Why impact of snapshot isolation level is so big?
>|||I understand that snapshot isolation adds some overhead, but I expected
less. CPU consumption is about 1.4 times higher with the snapshot
option turned on. It means that most performance-sensitive OLTP
applications which currently use SQL2k and replicate to another
database for longer-running queries cannot remove the replication and run those
queries on the same server as the main OLTP load after migrating to SQL2k5. Quite
disappointing...
"Kalen Delaney" wrote:

> As soon as you change the isolation level to either Snapshot Isolation or
> Read Committed Snapshot, SQL Server will start saving ALL updated data in
> the version store in tempdb, whether or not any queries every read the
> versioned data. So every changed row will have additional overhead
> associated with it.
> --
> HTH
> Kalen Delaney, SQL Server MVP
> www.solidqualitylearning.com
>
> "andsm" <andsm@.discussions.microsoft.com> wrote in message
> news:B12BA2E7-8639-4DE6-9113-75F56E6BAB7C@.microsoft.com...
>
>|||There is a lot to consider with this. Snapshot isolation or read committed
snapshot writes every single transaction into the version store. Since the
version store is in tempdb, in a high volume environment, you are in effect
pounding tempdb with every single transaction you issue. These are intended
to be applied when you have situations when you do not want reads to block
writes, BUT there is a performance trade-off when you implement that
functionality.
There are a couple of things you can do to minimize the impact of the
version store overhead. First, you need to move tempdb to its own disk
device so that it doesn't have to compete with everything else. Secondly,
you can add additional files (equal to the number of processors) to tempdb
that are of equal size in order to stripe the writes. BUT, you are still
going to have a performance impact.
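A sketch of adding one extra, equally sized tempdb data file (name, path and sizes
are placeholders; repeat the ADD FILE for each additional processor):
ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,
    FILENAME = 'T:\tempdb\tempdev2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
)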
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"andsm" <andsm@.discussions.microsoft.com> wrote in message
news:C3C9D444-5CD1-4C9A-9F2F-4BCD5DF26019@.microsoft.com...
>I understand what snapshot isolation adds some overhead, but I expected
> lesser overhead. CPU consumption near 1.4 times more in case if snapshot
> option turned on. It means what for most of performance sensitive OLTP
> applications, which currently use SQL2k and which use replication to
> another
> database used for longer queries, can not remove replication and run
> queries
> on same server where main OLTP load after migration to SQL2k5. Quite
> disappointing...
> "Kalen Delaney" wrote:
>
