Monday, March 26, 2012
Performance question.
Hello Everybody,
I have a table which has around 30 million rows.
The table structure is as follows:
CREATE TABLE TestTable
(Id INT,              -- primary key
EmpId INT,            -- has a nonclustered index
DeptName VARCHAR(50),
Hours NUMERIC(5,2),
Tdate DATETIME,
ProjectNumber SMALLINT
-- ...and a few more columns
)
And I have the following query, which takes around 1 minute 10 seconds to run.
SELECT
DeptName,
EmpId,
SUM(CASE WHEN ProjectNumber = 11 THEN Hours ELSE 0 END) AS FinHours,
SUM(CASE WHEN ProjectNumber = 12 THEN Hours ELSE 0 END) AS HrHours,
SUM(CASE WHEN ProjectNumber = 13 THEN Hours ELSE 0 END) AS TaxHours
FROM TestTable
WHERE Tdate BETWEEN @Date1 AND @Date2
GROUP BY
DeptName,
EmpId
I do not have an index on the ProjectNumber column because it will have only about 200 distinct values.
If I create an index on the GROUP BY columns, would it improve performance?
Please let me know how I can improve performance.
Thanks.

I don't know about the rest of your queries or your usage patterns, but the most obvious choice in this case is to make the PK nonclustered and create a clustered index on the Tdate column to support your WHERE clause.
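A minimal sketch of that change; the constraint and index names here are assumptions, not from the original post:

-- Assumed names throughout; substitute the real PK constraint name.
ALTER TABLE TestTable DROP CONSTRAINT PK_TestTable
GO
CREATE CLUSTERED INDEX IX_TestTable_Tdate ON TestTable (Tdate)
GO
ALTER TABLE TestTable ADD CONSTRAINT PK_TestTable
    PRIMARY KEY NONCLUSTERED (Id)
GO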
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--
"mvp" <mvp@.discussions.microsoft.com> wrote in message
news:569B7FD3-452C-4B8F-9056-06798A32B3EA@.microsoft.com...
> Hello Everybody,
> I have a table which has arround 30 millions rows.
> Table structure is as following..
> CREATE TABLE TestTable
> (Id INT, --which is PK,
> EmpId INT, --There is a non cluster index on it.
> DeptName VARCHAR(50),
> Hours NUMERIC(5,2),
> Tdate DATETIME,
> ProjectNumber smallint,
> .and few more columns
> .
> .
> )
> And i have following query, which is taking arround 1 minute 10 sec to
> run.
>
> SELECT
> DeptName,
> EmpId,
> SUM(CASE WHEN ProjectNumber = 11 THEN Hours ELSE 0
> END) AS FinHours,
> SUM(CASE WHEN ProjectId = 12 THEN Hours ELSE 0
> END) AS HrHours,
> SUM(CASE WHEN ProjectId = 13 THEN Hours ELSE 0
> END) AS TaxHours,
> FROM TestTable WHERE Tdate between @.Date1 and @.Date2
> GROUP BY
> DeptName,
> EmpId
> I do not have index on ProjectNumber column because this column will have
> only
> 200 distinct values.
> If i create index on Group by Columns, would it improve performance ?
> Pls let me know, how can i imporve performance ?
>
Also, you may try an index on all the columns involved in the query: Tdate first if the interval is narrow; DeptName and EmpId first if the interval is wide.

Your query indicates that you are using the following columns: DeptName, EmpId, ProjectNumber, Hours, and Tdate. This means that SQL Server will have
to look at all of the rows being returned regardless of whether an index
exists on your grouped columns. This will be the case unless you were to
create a covering index for all of the columns being returned. In your case,
that's a lot of columns so I don't recommend it.
According to your DDL, I don't see an index on Tdate. I would actually start with that. Depending on the number of rows being returned by your query, the optimizer may or may not choose to use that index (due to the expense of bookmark lookups). Still, I would attempt a nonclustered index on Tdate first.
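A sketch of that index, plus a covering variant for this one query (the names are assumptions, and SQL 2000 has no INCLUDE, so the covering version puts every referenced column in the key):

-- Narrow index to support the WHERE clause:
CREATE NONCLUSTERED INDEX IX_TestTable_Tdate ON TestTable (Tdate)

-- Covering variant: wider, but avoids bookmark lookups entirely for this query.
CREATE NONCLUSTERED INDEX IX_TestTable_Tdate_Covering
    ON TestTable (Tdate, DeptName, EmpId, ProjectNumber, Hours)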
Assuming you're not using the data for anything else (or much else), Adam's method could be the best route. However, it would make every nonclustered index on this table larger.
Since all nonclustered indexes also include the clustered index key, your key would go from a 4-byte data type (INT) to an 8-byte one (DATETIME); at 30 million rows, that adds roughly 114 MB to each index on your table (30,000,000 rows x 4 extra bytes). Combined with the fact that a "uniquifier" is applied to non-unique clustered indexes, which can add another 4 bytes per row, the overhead for each nonclustered index could reach about 228 MB.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:%237v%239uXBGHA.892@.TK2MSFTNGP12.phx.gbl...
>I don't know about the rest of your queries or your usage patterns, but the
>most obvious choice in this case is to make the PK nonclustered and create
>a clustered index on the Tdate column to support your WHERE clause.
>
> --
> Adam Machanic
> Pro SQL Server 2005, available now
> http://www.apress.com/book/bookDisplay.html?bID=457
> --
>
> "mvp" <mvp@.discussions.microsoft.com> wrote in message
> news:569B7FD3-452C-4B8F-9056-06798A32B3EA@.microsoft.com...
>|||"Jared Ko" <JaredKo05@.sanbeiji.removethispart.com> wrote in message
news:uPOE8AZBGHA.736@.TK2MSFTNGP10.phx.gbl...
> Since all nonclustered indexes also include the clustered index key, and
> your key is going from a 4-byte data type to an 8-byte data type and add
> up 114MB to each index on your table. Combined with the fact that a
> "uniquifier" is applied to all non-unique clustered indexes, could add
> another 4 bytes to your rows and bring each nonclustered index up to 228
> MB.
Slight correction: The uniquifier is only added to non-unique rows, not
every row. So if the majority are unique (which we might expect from a
DATETIME column), the uniquifier will add very little overhead.
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--

Thanks Adam. I was frantically searching for that information while I was writing my response. I was thinking that was the case, but a couple of web sites I hit suggested otherwise.
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:OybGaZZBGHA.2356@.tk2msftngp13.phx.gbl...
> "Jared Ko" <JaredKo05@.sanbeiji.removethispart.com> wrote in message
> news:uPOE8AZBGHA.736@.TK2MSFTNGP10.phx.gbl...
> Slight correction: The uniquifier is only added to non-unique rows, not
> every row. So if the majority are unique (which we might expect from a
> DATETIME column), the uniquifier will add very little overhead.
>
> --
> Adam Machanic
> Pro SQL Server 2005, available now
> http://www.apress.com/book/bookDisplay.html?bID=457
> --
>
Best is to test for yourself. Have two similar tables and populate them with a bunch of rows: one with all unique values, the other with all the same value. Then check the size of each index. That is how I concluded that the uniquifier is only added for the duplicates.
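A sketch of that test; the table names, index names, and row counts are all made up:

-- Two clustered indexes over the same row count: one all-unique, one all-duplicate.
CREATE TABLE UniqueKeys (dt DATETIME NOT NULL)
CREATE TABLE DupKeys    (dt DATETIME NOT NULL)
CREATE CLUSTERED INDEX cix_unique ON UniqueKeys (dt)
CREATE CLUSTERED INDEX cix_dup    ON DupKeys (dt)

DECLARE @i INT
SET @i = 0
WHILE @i < 100000
BEGIN
    INSERT UniqueKeys VALUES (DATEADD(second, @i, '20060101'))  -- all distinct
    INSERT DupKeys    VALUES ('20060101')                       -- all identical
    SET @i = @i + 1
END

-- DupKeys should come out larger: a 4-byte uniquifier per duplicated key.
EXEC sp_spaceused 'UniqueKeys'
EXEC sp_spaceused 'DupKeys'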
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Jared Ko" <JaredKo05@.sanbeiji.removethispart.com> wrote in message
news:uyOPGjZBGHA.3984@.TK2MSFTNGP14.phx.gbl...
> Thanks Adam. I was frantically searching for that information while I was
writing my response. I
> was thinking that was the case but a couple of web sites I hit suggested o
therwise.
> "Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
> news:OybGaZZBGHA.2356@.tk2msftngp13.phx.gbl...
>|||"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:OK1JSoZBGHA.3472@.TK2MSFTNGP09.phx.gbl...
> Best is to test for yourself. Have to similar tables, populate them with
> bunch of rows. One unique, the other all with same value. Check size of
> the index. that is how I concluded that uniqifier is only added for the
> duplicates.
I took the lazy way out. _Inside SQL Server 2000_, page 412:
"If your clustered index was not created with the UNIQUE property, SQL
Server adds a 4-byte field when necessary to make each key unique."
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--

Just a question: wouldn't horizontal partitioning be useful in this case? Of course, that is apart from the index on the Tdate field.
Peter
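On SQL 2000 that would mean a partitioned view rather than table partitioning; a minimal sketch, with the member table names, date ranges, and trimmed column list all assumed:

-- Member tables carry CHECK constraints on the partitioning column,
-- so the optimizer can skip members whose range cannot match.
CREATE TABLE TestTable_2006H1
    (Id INT NOT NULL, Tdate DATETIME NOT NULL
        CHECK (Tdate >= '20060101' AND Tdate < '20060701'),
     PRIMARY KEY (Id, Tdate))
CREATE TABLE TestTable_2006H2
    (Id INT NOT NULL, Tdate DATETIME NOT NULL
        CHECK (Tdate >= '20060701' AND Tdate < '20070101'),
     PRIMARY KEY (Id, Tdate))
GO
CREATE VIEW TestTableAll
AS
SELECT Id, Tdate FROM TestTable_2006H1
UNION ALL
SELECT Id, Tdate FROM TestTable_2006H2
GO
-- A query with WHERE Tdate BETWEEN @Date1 AND @Date2 against the view
-- only touches the member tables whose CHECK ranges overlap the interval.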
Performance question for beginners
Hi,
On SQL 2000, which of the following queries will have better performance:
select * from Phones
where ContactID in (select ID From Contacts where ClientID = 14)
Or
select * from Phones
inner join Contacts on Phones.ContactID = Contacts.ID
inner join Clients on Contacts.ClientID = Clients.ID
where ClientID = 14
As far as I know the second is better, but my own tests prove otherwise...
Thanks,
It depends. First, no "SELECT *". It's bad practice, a performance hog, a
network hog, and makes performance tuning impossible. The columns you
select have a lot to do with the overall execution plan SQL Server uses.
In the second example, you're selecting all columns in all three tables.
Re-run the queries with the columns you really want back and report the
results. Also post your indexes, keys, and table DDL.
David Gugick
Imceda Software
www.imceda.com

SQL Server will try to generate the most efficient execution plan possible
based on your query. Given 2 different queries that are semantically
identical, it is entirely possible that SQL Server will generate the same
efficient execution plan even though the queries are expressed differently.
You'll then get the same performance.
As David pointed out, your queries are semantically different so you're
comparing apples and oranges. The example below shows identical execution
plans with different but semantically identical queries:
CREATE TABLE Contacts
(
ID int NOT NULL
CONSTRAINT PK_Contacts PRIMARY KEY,
ClientID int NOT NULL
)
GO
CREATE INDEX Index1 ON Contacts(ClientID)
CREATE TABLE Phones
(
PhoneID int NOT NULL
CONSTRAINT PK_Phones PRIMARY KEY,
ContactID int NOT NULL
CONSTRAINT FK_Phones_Contacts FOREIGN KEY
REFERENCES Contacts(ID)
)
GO
SELECT p.*
FROM Phones p
WHERE ContactID IN (SELECT ID FROM Contacts WHERE ClientID = 14)
GO
SELECT p.*
FROM Phones p
WHERE EXISTS
(
SELECT *
FROM Contacts c
WHERE c.ID = p.ContactID AND
c.ClientID = 14
)
GO
Hope this helps.
Dan Guzman
SQL Server MVP
"gwenda" <nina@.community.nospam> wrote in message
news:5199EF86-020D-4475-92F2-E7B64EDCA2C0@.microsoft.com...
> Hi,
> On an SQL 2000, which of the following queries will have better
> performance:
> select * from Phones
> where ContactID in (select ID From Contacts where ClientID = 14)
> Or
> select * from Phones
> inner join Contacts on Phones.ContactID = Contacts.ID
> inner join Clients on Contacts.ClientID = Clients.ID
> where ClientID = 14
> As far as I know the second is better, but my own tests prove otherwise...
> Thanks,
These are two totally different queries. The second involves three tables, returning columns from all three, while the first one only returns data from one table and only touches two tables. If you have good indexes on the ClientID and ContactID columns, the first one will be very fast.
The second should be pretty fast too, but it is entirely a matter of indexes and how everything is structured.
----
Louis Davidson - drsql@.hotmail.com
SQL Server MVP
Compass Technology Management - www.compass.net
Pro SQL Server 2000 Database Design -
http://www.apress.com/book/bookDisplay.html?bID=266
Blog - http://spaces.msn.com/members/drsql/
Note: Please reply to the newsgroups only unless you are interested in
consulting services. All other replies may be ignored :)
"gwenda" <nina@.community.nospam> wrote in message
news:5199EF86-020D-4475-92F2-E7B64EDCA2C0@.microsoft.com...
> Hi,
> On an SQL 2000, which of the following queries will have better
> performance:
> select * from Phones
> where ContactID in (select ID From Contacts where ClientID = 14)
> Or
> select * from Phones
> inner join Contacts on Phones.ContactID = Contacts.ID
> inner join Clients on Contacts.ClientID = Clients.ID
> where ClientID = 14
> As far as I know the second is better, but my own tests prove otherwise...
> Thanks,
Hi,
Thank you all for these helpful responses, and sorry for not providing all the needed data. So:
Phones:
CREATE TABLE [Phones] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[ContactID] [int] NOT NULL ,
[Type] [tinyint] NOT NULL ,
[Number] [varchar] (50) COLLATE Latin1_General_CI_AS NOT NULL ,
[Remarks] [nvarchar] (150) COLLATE Latin1_General_CI_AS NULL ,
CONSTRAINT [PK_Phones] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY] ,
CONSTRAINT [FK_Phones_Contacts] FOREIGN KEY
(
[ContactID]
) REFERENCES [Contacts] (
[ID]
)
) ON [PRIMARY]
GO
Contacts:
CREATE TABLE [Contacts] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[CategoryCode] [tinyint] NULL ,
[Name] [char] (100) COLLATE Latin1_General_CI_AS NULL ,
[JobTitle] [varchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[Address] [varchar] (200) COLLATE Latin1_General_CI_AS NULL ,
[Email] [char] (50) COLLATE Latin1_General_CI_AS NULL ,
[IsEmergency] [bit] NULL ,
[LastUpdated] [datetime] NULL ,
[Remarks] [nvarchar] (500) COLLATE Latin1_General_CI_AS NULL ,
[ClientID] [int] NULL ,
CONSTRAINT [PK_Contacts] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY] ,
CONSTRAINT [FK_Contacts_Clients] FOREIGN KEY
(
[ClientID]
) REFERENCES [Clients] (
[ID]
)
) ON [PRIMARY]
GO
Clients:
CREATE TABLE [Clients] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [nvarchar] (150) COLLATE Latin1_General_CI_AS NOT NULL ,
[Description] [nvarchar] (200) COLLATE Latin1_General_CI_AS NULL ,
CONSTRAINT [PK_Clients] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
) ON [PRIMARY]
GO
The two "fixed" queries are:
select p.ID, p.ContactID, p.Type, p.Number, p.Remarks
from Phones p
where ContactID in (select ID From Contacts where ClientID = 14)
OR
select p.ID, p.ContactID, p.Type, p.Number, p.Remarks
from Phones p
inner join Contacts on p.ContactID = Contacts.ID
inner join Clients on Contacts.ClientID = Clients.ID
where Contacts.ClientID = 14
The purpose is getting the information from Phones with the best performance.
Hope that will help you to help me.
Thanks again,
Gwenda

The performance of each approach depends on the relative table size (and
number of distinct ID's that "select ID From Contacts where ClientID =
14" produces). If there are just a few distinct ID's for ClientID 14,
then the first approach could be faster. If not, then I would expect the
second approach to be faster. So it really depends. You would have to
test it for yourself.
You could also try the following approach and see how fast that is.
select p.ID, p.ContactID, p.Type, p.Number, p.Remarks
from Phones p
where EXISTS (select 1 From Contacts where ClientID = 14 AND
ID=p.ContactID)
BTW: based on the clustered index that already exists on Contacts(ID),
this query would benefit from a nonclustered index on
Contacts(ClientID).
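A sketch of that index (the name is an assumption; since Contacts(ID) is the clustered key, it travels along in the nonclustered leaf rows automatically):

CREATE NONCLUSTERED INDEX IX_Contacts_ClientID ON Contacts (ClientID)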
HTH,
Gert-Jan
Thanks :)
"Gert-Jan Strik" wrote:
> The performance of each approach depends on the relative table size (and
> number of distinct ID's that "select ID From Contacts where ClientID =
> 14" produces). If there are just a few distinct ID's for ClientID 14,
> then the first approach could be faster. If not, then I would expect the
> second approach to be faster. So it really depends. You would have to
> test it for yourself.
> You could also try the following approach and see how fast that is.
> select p.ID, p.ContactID, p.Type, p.Number, p.Remarks
> from Phones p
> where EXISTS (select 1 From Contacts where ClientID = 14 AND
> ID=p.ContactID)
> BTW: based on the clustered index that already exists on Contacts(ID),
> this query would benefit from a nonclustered index on
> Contacts(ClientID).
> HTH,
> Gert-Jan
> gwenda wrote:
>
Performance Question
Hello Everybody.
Not sure how the SQL Server query optimizer works. If I have a query like the following:
select * from table where field1 = 'A' or field2 = 'B' or field3 = 'C'
how will the above query execute? If field1 = 'A' is true, will the other part of the query (field2 = 'B' or field3 = 'C') still be evaluated or not? I ask because I have a very complicated query built on the same concept.
Please let me know.

In this case, if any OR expression is true it will stop processing the list
as that satisfies the WHERE clause. But you cannot guarantee the order in which the expressions are evaluated. It is up to the engine to determine that at run time, and as such any of the columns may be evaluated first.
Andrew J. Kelly SQL MVP
"mvp" <mvp@.discussions.microsoft.com> wrote in message
news:F3BC8CDD-E914-45FD-8A74-74E3BBD922C0@.microsoft.com...
> Hello Everybody.
> Not sure how sql server query optimizer works ?
> if i have a query like following.
> select * from table where field1 = 'A' or field2 = 'B' or field2 = 'C'
> so how the above query will execute ? if field1 = 'A' is true, the other
> part of the query field2 = 'B' or field3 = 'C' will execute or not ?
> because i have a very complecated query with the same concept.
> Pls let me know
>|||On Fri, 16 Dec 2005 11:53:02 -0800, mvp wrote:
>Hello Everybody.
>Not sure how sql server query optimizer works ?
>if i have a query like following.
>select * from table where field1 = 'A' or field2 = 'B' or field2 = 'C'
>so how the above query will execute ? if field1 = 'A' is true, the other
>part of the query field2 = 'B' or field3 = 'C' will execute or not ?
>because i have a very complecated query with the same concept.
>Pls let me know
Hi mvp,
What Andrew says is correct, *IF* the optimizer decides to use a table
scan or an index scan to satisfy the query. However, there are also
completely different techniques that the optimizer might choose.
For instance, if both field1 and field2 are indexed, and both indexes
are sufficiently selective, the optimizer might decide to do individual
index lookups for each of the three comparisons, then combine the
results to get the end result. To see an example of this, run the
following query against Northwind and check the execution plan:
SELECT *
FROM Orders
WHERE ShippedDate = '19980506'
OR OrderID = 10550
OR OrderID = 11067
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
Friday, March 23, 2012
Performance Question
Hi there,
In an SP, I want to update table A based on values in table B after data manipulation.
Which of the following options is better from a performance point of view?
(1) Using a cursor
(2) Using the 'table' datatype to hold one table
(3) Using a temporary table instead of a cursor
Does any other, better approach exist?
Can the 'table' datatype be used in an 'Execute SQL Task' in DTS?
TIA,
Hari

Hi
I would go with (2), but if you post DDL + sample data + expected results it will be easier to solve the problem.
Also consider:
UPDATE a SET col = b.col1
FROM tableA a JOIN tableB b ON b.pk = a.pk
"sqlprogrammer" <anonymous@.discussions.microsoft.com> wrote in message
news:097901c397bb$25543520$a401280a@.phx.gbl...
> Hi there,
> In a SP, I want to update table A based on values in table
> B after data manipulation.
> Which of the following option is better in the performance
> point of view.
> (1) Using Cursor
> (2) Using 'table' datatype to hold one table
> (3) Using temporary table instead of cursor.
> Is there any other better approach exist?
> Can 'table' datatype be used in 'Execute SQLTask' in DTS?
> TIA,
I would also go for the table datatype. But if you can share your code with us, then we can try to give you a better solution. Did you look at using the following syntax?
Update a
Set Col1 = b.Col1
From <TableA> a
Join <TableB> b On a.id = b.id
something along these lines, if you want to update a table by comparing the values from another table ...
--
HTH,
Vinod Kumar
MCSE, DBA, MCAD, MCSD
http://www.extremeexperts.com
"sqlprogrammer" <anonymous@.discussions.microsoft.com> wrote in message
news:097901c397bb$25543520$a401280a@.phx.gbl...
> Hi there,
> In a SP, I want to update table A based on values in table
> B after data manipulation.
> Which of the following option is better in the performance
> point of view.
> (1) Using Cursor
> (2) Using 'table' datatype to hold one table
> (3) Using temporary table instead of cursor.
> Is there any other better approach exist?
> Can 'table' datatype be used in 'Execute SQLTask' in DTS?
> TIA,
> Hari
Wednesday, March 21, 2012
Performance problem?
I'm having performance problems in the following scenario:
I import about 3 million rows into a table with DTS (no indexes, because of the large amount of data). I then run a script with 3 delete and about 25 update statements. This takes about 3 hours (is that long?). I then truncate the table and repeat this process (with new data, of course). It takes longer and longer on each iteration. What can I do to improve the performance? Would dropping and recreating the table, or the actual DB, help? I've noticed that the time I save by running this script with an index in place is offset during the import, since it takes much longer to import that much data with an index. Any ideas would be greatly appreciated! Thanks!
Also, recovery is set to SIMPLE!
Hi
Why don't you:
Drop the Indexes
Load the Data
Create the Indexes
Run the Update/Deletes
Running deletes and updates against a suitably indexed table is a lot faster.
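A sketch of that sequence; the table and index names are assumptions, and step 2 stands in for the DTS load:

-- 1. Drop the nonclustered indexes so the bulk load is not slowed down.
DROP INDEX MyImportTable.IX_MyImportTable_KeyCol

-- 2. Load the 3 million rows here (DTS / BULK INSERT).

-- 3. Recreate the indexes in one pass over the loaded data.
CREATE INDEX IX_MyImportTable_KeyCol ON MyImportTable (KeyCol)

-- 4. Now run the delete/update script; it can seek instead of scanning.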
"mikeb" wrote:
> I'm having performance problems in the following scenario:
> I import about 3 million rows into a table with DTS(no indexes because of
> the large amount of data). I then run a script with 3 delete and about 25
> update stmnts. This takes about 3 hours( Is that long?), I then Truncate the
> table and recreate this process(with new data ofcourse). It takes longer and
> longer on each iteration. What can i do to improve the performance? Would
> dropping and recreating the table or the actual DB help? I've noticed that
> the time I save running this script with an index during the import is offset
> , since it takes much longer to import that much data with an index. Any
> ideas would be greatly appreciated! Thanks!
> Also, recovery is set to SIMPLE!
>
Tuesday, March 20, 2012
Performance Problem with Twelve Months to Date
I have a performance problem with the following query. I'm using [Twelve Months to Date] on my columns and also as a filter for NONEMPTY on my rows. It takes about 15 seconds to run on AW and much longer on the cube that I'm developing. Any ideas on how I can speed this up?
Code Snippet
WITH MEMBER
[Date].[Fiscal Date Calculations].[Twelve Months to Date]
AS
Aggregate(
{ [Date].[Fiscal Date Calculations].DefaultMember } *
{
ParallelPeriod(
[Date].[Fiscal].[Month],
11,
[Date].[Fiscal].CurrentMember
) : [Date].[Fiscal].CurrentMember
}
)
SELECT
(
{
{[Product].[Category].&[1]} *
{[Product].[Subcategory].&[3]}
}
,
[Date].[Fiscal Date Calculations].[Twelve Months to Date]
)
+
(
{
{[Product].[Category].&[2]} *
{[Product].[Subcategory].&[5]}
}
,
[Date].[Fiscal Date Calculations].[Twelve Months to Date]
)
ON 0,
NONEMPTY
(
[Customer].[Customer].children,
[Date].[Fiscal Date Calculations].[Twelve Months to Date]
)
ON 1
FROM
[Adventure Works]
WHERE
(
[Date].[Fiscal].[Date].&[1078],
[Measures].[Internet Sales Amount]
)
This version of the Adventure Works query runs in 6 seconds (I used [Day Of Week] to hold the calculated member, since [Fiscal Date Calculations] isn't a standard attribute in Adventure Works):
Code Snippet
WITH MEMBER
[Date].[Day Of Week].[Twelve Months to Date]
AS
Aggregate(
{ [Date].[Day Of Week].DefaultMember } *
{
ParallelPeriod(
[Date].[Fiscal].[Month],
11,
[Date].[Fiscal].CurrentMember
) : [Date].[Fiscal].CurrentMember
}
)
SELECT
{(
[Product].[Category].&[1],
[Product].[Subcategory].&[3]
)
,
(
[Product].[Category].&[2],
[Product].[Subcategory].&[5]
)}
*
{[Date].[Day Of Week].[Twelve Months to Date]}
ON 0,
NONEMPTY
(
[Customer].[Customer].children,
{ParallelPeriod(
[Date].[Fiscal].[Month],
11,
[Date].[Fiscal].CurrentMember
) : [Date].[Fiscal].CurrentMember} *
{(
[Product].[Category].&[1],
[Product].[Subcategory].&[3]
)
,
(
[Product].[Category].&[2],
[Product].[Subcategory].&[5]
)}
)
ON 1
FROM
[Adventure Works]
WHERE
(
[Date].[Fiscal].[Date].&[1078],
[Measures].[Internet Sales Amount]
)
Thanks Deepak - this did give me a pointer, which has meant that I've been able to reduce the time for the query I'm developing.
Performance problem deleting records
Hi,
I delete 48,000 records from a SQL 2000 db with the following SQL statement (NO SPO!):
Delete * From myTable where dateTimeField < anyDateTimeValue
The dateTimeField has an index. I do not use transactions. Also there is no
trigger on the table.
I receive a timeout after 30 seconds. The complete delete operation
requires about 5 minutes.
What can I do to speed up the operation?
Thanks
Christian

Christian,
> Delete * From myTable where dateTimeField < anyDateTimeValue
It is Access syntax.
What kind of index is it - clustered or nonclustered?
Try
Delete From myTable where dateTimeField >'19000101' and dateTimeField <
anyDateTimeValue
"Christian Havel" <ChristianHavel@.discussions.microsoft.com> wrote in
message news:89BD70AD-1FD4-488B-9608-6C351A0557FC@.microsoft.com...
> Hi,
> I delete from a SQL 2000 db 48.000 records with the following SQL
> statement
> (NO SPO!):
> Delete * From myTable where dateTimeField < anyDateTimeValue
> The dateTimeField has an index. I do not use transactions. Also there is
> no
> trigger on the table.
> I receive an timeout after 30 seconds. The complete delete operation
> requires about 5 minutes.
> What can I do to speed up the operation?
> Thanks
Delete in small batches:
SET ROWCOUNT 5000
DELETE FROM MyTable WHERE dateTimeField < anyDateTimeValue
WHILE @@ROWCOUNT > 0
BEGIN
DELETE FROM MyTable WHERE dateTimeField < anyDateTimeValue
END
SET ROWCOUNT 0
(Note the priming DELETE before the loop: SET statements reset @@ROWCOUNT to 0, so the WHILE test needs one delete to have run first.)
If you are on SQL 2005, use DELETE TOP (5000) instead.
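A sketch of the SQL 2005 form (the cutoff value is an assumption):

DECLARE @cutoff datetime
SET @cutoff = '20070101'   -- assumption: whatever boundary you are purging up to

-- TOP replaces SET ROWCOUNT for limiting DML on SQL 2005 and later.
DELETE TOP (5000) FROM MyTable WHERE dateTimeField < @cutoff
WHILE @@ROWCOUNT > 0
    DELETE TOP (5000) FROM MyTable WHERE dateTimeField < @cutoff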
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Christian Havel" <ChristianHavel@.discussions.microsoft.com> wrote in
message news:89BD70AD-1FD4-488B-9608-6C351A0557FC@.microsoft.com...
> Hi,
> I delete from a SQL 2000 db 48.000 records with the following SQL
> statement
> (NO SPO!):
> Delete * From myTable where dateTimeField < anyDateTimeValue
> The dateTimeField has an index. I do not use transactions. Also there is
> no
> trigger on the table.
> I receive an timeout after 30 seconds. The complete delete operation
> requires about 5 minutes.
> What can I do to speed up the operation?
> Thanks
Andrew,
> If you are on SQL2005 use DELETE TOP 5000 instead.
Do you think it is "safe" using TOP without ORDER BY clause? I think we do
not have contol ove what gets deleted.
I got used to use ORDER BY with TOP clause
I'd prefer something like this:
WITH OrdersRN AS
(
SELECT *, ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum
FROM dbo.MyOrders
)
DELETE FROM OrdersRN
WHERE RowNum <= 3;
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:O%23QZyp3$HHA.4956@.TK2MSFTNGP06.phx.gbl...
> Delete in small batches:
> SET ROWCOUNT 5000
> WHILE @.@.ROWCOUNT > 0
> BEGIN
> DELETE FROM MyTable WHERE dateTimeField < anyDateTimeValue
> END
> SET ROWCOUNT 0
> If you are on SQL2005 use DELETE TOP 5000 instead.
>
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Christian Havel" <ChristianHavel@.discussions.microsoft.com> wrote in
> message news:89BD70AD-1FD4-488B-9608-6C351A0557FC@.microsoft.com...
>> Hi,
>> I delete from a SQL 2000 db 48.000 records with the following SQL
>> statement
>> (NO SPO!):
>> Delete * From myTable where dateTimeField < anyDateTimeValue
>> The dateTimeField has an index. I do not use transactions. Also there is
>> no
>> trigger on the table.
>> I receive an timeout after 30 seconds. The complete delete operation
>> requires about 5 minutes.
>> What can I do to speed up the operation?
>> Thanks
>> Christian
>|||> I delete from a SQL 2000 db 48.000 records with the following SQL
> statement
> (NO SPO!):
> Delete * From myTable where dateTimeField < anyDateTimeValue
> The dateTimeField has an index. I do not use transactions. Also there is
> no
> trigger on the table.
> I receive an timeout after 30 seconds. The complete delete operation
> requires about 5 minutes.
> What can I do to speed up the operation?
Verify that a poorly written trigger (or triggers) is not contributing to
the problem.
Hi Uri,
Why do you want to control which rows get deleted? At the end of the day, all the rows that match the WHERE clause will be gone, as in:
create table #t(i int)
insert #t values(1)
insert #t values(2)
GO
delete top (1) from #t where i > 0
delete top (1) from #t where i > 0
I think Andrew's approach is perfectly safe. Please correct me if I am wrong.
The WHERE clause will determine what gets deleted in this case, not the ORDER BY, since we want all of the rows that meet the WHERE clause to go. It may be beneficial to use an ORDER BY if it helps to find the 5000 rows faster, but it may not. I haven't seen the actual DDL, so they should try it both ways to see. But either way it is safe, as it will only delete what the WHERE clause says to delete.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%2301lZ23$HHA.4844@.TK2MSFTNGP02.phx.gbl...
> Andrew
>> If you are on SQL2005 use DELETE TOP 5000 instead.
> Do you think it is "safe" using TOP without ORDER BY clause? I think we do
> not have contol ove what gets deleted.
> I got used to use ORDER BY with TOP clause
> I'd prefer something liker that
> WITH OrdersRN AS
> (
> SELECT *, ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum
> FROM dbo.MyOrders
> )
> DELETE FROM OrdersRN
> WHERE RowNum <= 3;
>
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:O%23QZyp3$HHA.4956@.TK2MSFTNGP06.phx.gbl...
>> Delete in small batches:
>> SET ROWCOUNT 5000
>> WHILE @.@.ROWCOUNT > 0
>> BEGIN
>> DELETE FROM MyTable WHERE dateTimeField < anyDateTimeValue
>> END
>> SET ROWCOUNT 0
>> If you are on SQL2005 use DELETE TOP 5000 instead.
>>
>> --
>> Andrew J. Kelly SQL MVP
>> Solid Quality Mentors
>>
>> "Christian Havel" <ChristianHavel@.discussions.microsoft.com> wrote in
>> message news:89BD70AD-1FD4-488B-9608-6C351A0557FC@.microsoft.com...
>> Hi,
>> I delete from a SQL 2000 db 48.000 records with the following SQL
>> statement
>> (NO SPO!):
>> Delete * From myTable where dateTimeField < anyDateTimeValue
>> The dateTimeField has an index. I do not use transactions. Also there is
>> no
>> trigger on the table.
>> I receive an timeout after 30 seconds. The complete delete operation
>> requires about 5 minutes.
>> What can I do to speed up the operation?
>> Thanks
>> Christian
Christian,
Such a query might take a long time if:
- the table is heavily used during the delete (which causes blocking)
- the table has many indexes (the rows need to be deleted from the
indexes too)
- the index on dateTimeField is nonclustered, and myTable has many rows
- there are tables referencing myTable and their foreign key is not
indexed
- there are tables referencing myTable and the foreign key is defined
with ON DELETE CASCADE
If you are deleting a small percentage of the table, then you could delete in batches (as suggested by others). If you are deleting a large percentage of the table, you could consider dropping all nonclustered indexes, executing the delete, and then recreating the dropped indexes.
If you have tables with a foreign key constraint to myTable, then make
sure these foreign keys are indexed.
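A sketch of that fix for one referencing table (all names here are assumptions):

-- Assumption: ChildTable has a FOREIGN KEY (myTableId) REFERENCES myTable(id).
-- Without this index, every delete from myTable scans ChildTable to verify
-- that no rows still reference the deleted keys.
CREATE NONCLUSTERED INDEX IX_ChildTable_myTableId ON ChildTable (myTableId)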
HTH,
Gert-Jan
Why not just increase the timeout threshold? Deleting in smaller batches may not speed up 'the operation'.
Linchi
"Andrew J. Kelly" wrote:
> Delete in small batches:
> SET ROWCOUNT 5000
> WHILE @.@.ROWCOUNT > 0
> BEGIN
> DELETE FROM MyTable WHERE dateTimeField < anyDateTimeValue
> END
> SET ROWCOUNT 0
> If you are on SQL2005 use DELETE TOP 5000 instead.
>
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Christian Havel" <ChristianHavel@.discussions.microsoft.com> wrote in
> message news:89BD70AD-1FD4-488B-9608-6C351A0557FC@.microsoft.com...
> > Hi,
> >
> > I delete from a SQL 2000 db 48.000 records with the following SQL
> > statement
> > (NO SPO!):
> >
> > Delete * From myTable where dateTimeField < anyDateTimeValue
> >
> > The dateTimeField has an index. I do not use transactions. Also there is
> > no
> > trigger on the table.
> > I receive an timeout after 30 seconds. The complete delete operation
> > requires about 5 minutes.
> > What can I do to speed up the operation?
> >
> > Thanks
> > Christian
>
Performance problem
After a disk failure (the disk was in a RAID 1 configuration), SQL Server started to work very slowly.
SQL Server started to raise the following event:
Event Type: Information
Event Source: MSSQLSERVER
Event Category: (2)
Event ID: 17055
Date: 7/3/2006
Time: 8:54:12 AM
User: N/A
Computer: SRVCA2
Description:
17052 :
This SQL Server has been optimized for 8 concurrent queries. This limit
has been exceeded by 19 queries and performance may be adversely
affected.
SQL Server worked fine before this. I replaced the hard disk and the rebuild of the configuration was successful. In the SQL events I saw that SQL Server had performed recovery of the databases.
I have SQL Server Personal Edition installed.
Can someone help me with this one please?

The version of the db engine shipped with the Personal Edition has an inbuilt restriction: once more than a certain number of concurrent activities exceed the limit, the workload governor slows the engine down. There are a number of different activities which contribute to the count. Check DBCC CONCURRENCYVIOLATION to gauge how often the service goes over the limit. Maybe you could upgrade.
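The check itself is a one-liner on the affected instance:

-- Reports how often the Personal Edition/MSDE workload governor limit
-- has been exceeded since the counters were last reset.
DBCC CONCURRENCYVIOLATION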
Jack Vamvas
___________________________________
Receive free SQL tips - www.ciquery.com/sqlserver.htm
___________________________________
"Sale" <aleksandar.stancic@.gmail.com> wrote in message
news:1151913565.386437.173310@.75g2000cwc.googlegroups.com...
> After a disk failure ( disk was in raid 1 configuration ) SQL Server
> start to work very slow.
> SqlServer start to raise following event
> Event Type: Information
> Event Source: MSSQLSERVER
> Event Category: (2)
> Event ID: 17055
> Date: 7/3/2006
> Time: 8:54:12 AM
> User: N/A
> Computer: SRVCA2
> Description:
> 17052 :
> This SQL Server has been optimized for 8 concurrent queries. This limit
> has been exceeded by 19 queries and performance may be adversely
> affected.
>
> SQL Server worked fine before this. I replaced hard disk and rebuild of
> configuration was sucesfull. In SQL events i saw that SQL Server has
> performed recovery of databases.
> I have SQL Server Personal edition installed.
> Can someone help me with this one please.
>|||"Jack Vamvas" <DEL_TO_REPLYtechsupport@.ciquery.com> wrote in message
news:TI-dnbeidMt4RzXZnZ2dnUVZ8s-dnZ2d@.bt.com...
> The verion of the db engine shipped with the Personal Edition has an
inbuilt
> restriction , that once more than a certain number of activities pass the
> restriction.
> There are a number of different activities which contribute to the count.
> Check DBCC CONCURRENCYVIOLATION to gauge how often the service goes over
> the limit. Maybe you could upgrade
Most likely the slower disk I/O during the failure was the cause here. You may want to upgrade to something like SQL 2005 Express, which removes this particular restriction (but has others).
I upgraded Personal Edition to Standard, and after this everything works fine.
Thanks for your help, people.
Monday, March 12, 2012
Performance Problem
I have the following DDL/indexes and the following stored procedure. I am getting 28 seconds per user, which is unacceptable. What would be the best way to index this table?
The UserName column has 28000 rows for one user (User1) and the TR column has mostly unique numbers.
Thanks for any help.
CREATE TABLE [dbo].[CostCompAvailTemp] (
[UserName] [char] (40) NOT NULL ,
[TR] [decimal](10, 0) NOT NULL ,
[RHour] [smallint] NULL ,
[Def] [decimal](5, 3) NULL ,
[Price] [decimal](4, 2) NULL ,
[RDL] [smallint] NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [CostCompAvailTemp_ix1] ON
[dbo].[CostCompAvailTemp]([UserName]) ON [PRIMARY]
GO
CREATE INDEX [CostCompAvailTemp_ix2] ON [dbo].[CostCompAvailTemp]([TR]) ON [PRIMARY]
GO
----------------------------------------------------------------------------
--CREATE PROCEDURE [dbo].[CostCompCopyAvailIntoTemp]
Declare @UserName char(40)
Declare @AvailNum as int
Declare @PVal as dec(4,2)
Declare @PCol as int
--as
declare @TR as int
/* clean up old */
delete from CostCompAvailTemp
where UserName = @UserName
set @TR = 1
while @TR < 6
begin
insert into CostCompAvailTemp
select @UserName,
case @TR
when 1 then TR1
when 2 then TR2
when 3 then TR3
when 4 then TR4
when 5 then TR5
end as TR,
Deadline,
CreditAvail,
case @PCol
when 1 then Price1
when 2 then Price2
when 3 then Price3
else @PVal
end,
Null
from AvailabilityDetail
where AvailNumber = @AvailNum
set @TR = @TR + 1
end
/* clean the invalid values */
delete from CostCompAvailTemp
where TR < 1
or TR is Null
GO

I think the main problem of your process is with this statement:
delete from CostCompAvailTemp
where TR < 1
or TR is Null
Here you are not using the index you created, since the column TR is not part of the clustered index. You could either add the TR column to that index or create a second index for this column. You could also consider creating a primary key for your temp table.
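A sketch of those two options (constraint names assumed; note that the OP later says TR will contain duplicates, which would rule out a key on UserName and TR alone):

-- Option 1: widen the existing clustered index to (UserName, TR).
CREATE CLUSTERED INDEX [CostCompAvailTemp_ix1]
    ON [dbo].[CostCompAvailTemp] (UserName, TR)
    WITH DROP_EXISTING

-- Option 2: a primary key, which needs a genuinely unique column set.
ALTER TABLE [dbo].[CostCompAvailTemp]
    ADD CONSTRAINT PK_CostCompAvailTemp PRIMARY KEY NONCLUSTERED (UserName, TR)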
Greetings,
Lic. Alfonso Rafael Chavez de León
IT Consultant and DBA
Xignux Corporativo SA de CV
"DXC" wrote:
> I have the following DDL/indexes and the following Stored procedure. I am
> getting 28 seconds for per user which is unexceptable. What will be the be
st
> way to index this table ?
> UserName column has 28000 rows for 1 user (User1) and TR column has mostly
> usique numbers........
> Thanks for any help.
> CREATE TABLE [dbo].[CostCompAvailTemp] (
> [UserName] [char] (40) NOT NULL ,
> [TR] [decimal](10, 0) NOT NULL ,
> [RHour] [smallint] NULL ,
> [Def] [decimal](5, 3) NULL ,
> [Price] [decimal](4, 2) NULL ,
> [RDL] [smallint] NULL
> ) ON [PRIMARY]
> GO
> CREATE CLUSTERED INDEX [CostCompAvailTemp_ix1] ON
> [dbo].[CostCompAvailTemp]([UserName]) ON [PRIMARY]
> GO
> CREATE INDEX [CostCompAvailTemp_ix2] ON [dbo].[CostCompAvail
Temp]([TR]) ON
> [PRIMARY]
> GO
> ----
--
>
> --CREATE PROCEDURE [dbo].[CostCompCopyAvailIntoTemp]
> Declare @.UserName char(40)
> Declare @.AvailNum as int
> Declare @.PVal as dec(4,2)
> Declare @.PCol as int
> --as
> declare @.TR as int
> /* clean up old */
> delete from CostCompAvailTemp
> where UserName = @.UserName
> set @.TR = 1
> while @.TR < 6
> begin
> insert into CostCompAvailTemp
> select @.UserName,
> case @.TR
> when 1 then TR1
> when 2 then TR2
> when 3 then TR3
> when 4 then TR4
> when 5 then TR5
> end as TR,
> Deadline,
> CreditAvail,
> case @.PCol
> when 1 then Price1
> when 2 then Price2
> when 3 then Price3
> else @.PVal
> end,
> Null
> from AvailabilityDetail
> where AvailNumber = @.AvailNum
> set @.TR = @.TR + 1
> end
> /* clean then invalid values */
> delete from CostCompAvailTemp
> where TR < 1
> or TR is Null
>
> GO
>|||"DXC" <DXC@.discussions.microsoft.com> wrote in message
news:4AE4D52C-DD9D-4C25-B592-A02F9A29C554@.microsoft.com...
>I have the following DDL/indexes and the following Stored procedure. I am
> getting 28 seconds for per user which is unexceptable. What will be the
> best
> way to index this table ?
> UserName column has 28000 rows for 1 user (User1) and TR column has mostly
> usique numbers........
> Thanks for any help.
> CREATE TABLE [dbo].[CostCompAvailTemp] (
> [UserName] [char] (40) NOT NULL ,
> [TR] [decimal](10, 0) NOT NULL ,
> [RHour] [smallint] NULL ,
> [Def] [decimal](5, 3) NULL ,
> [Price] [decimal](4, 2) NULL ,
> [RDL] [smallint] NULL
> ) ON [PRIMARY]
> GO
> CREATE CLUSTERED INDEX [CostCompAvailTemp_ix1] ON
> [dbo].[CostCompAvailTemp]([UserName]) ON [PRIMARY]
> GO
> CREATE INDEX [CostCompAvailTemp_ix2] ON [dbo].[CostCompAvailT
emp]([TR])
> ON
> [PRIMARY]
> GO
>
First, use one query, not 5.
Second, don't insert crap and then clean it up later; just refrain from inserting the invalid rows to begin with.
Here's a rewrite using a table variable, a cross join, and a derived table:
--CREATE PROCEDURE [dbo].[CostCompCopyAvailIntoTemp]
Declare @UserName char(40)
Declare @AvailNum as int
Declare @PVal as dec(4,2)
Declare @PCol as int
--as
set nocount on
delete from CostCompAvailTemp
where UserName = @UserName
declare @TR table(tr int primary key)
insert into @TR(tr) values (1)
insert into @TR(tr) values (2)
insert into @TR(tr) values (3)
insert into @TR(tr) values (4)
insert into @TR(tr) values (5)
insert into CostCompAvailTemp
select * from
(
select
@UserName UserName,
case tr.tr
when 1 then TR1
when 2 then TR2
when 3 then TR3
when 4 then TR4
when 5 then TR5
end as TR,
Deadline RHour,
CreditAvail Def,
case @PCol
when 1 then Price1
when 2 then Price2
when 3 then Price3
else @PVal
end Price,
Null RDL
from AvailabilityDetail
cross join @TR tr
where AvailNumber = @AvailNum
) dt
where TR >= 1 -- keep only valid rows (the original deleted TR < 1 or NULL afterwards)
..
"David Browne" wrote:
> "DXC" <DXC@.discussions.microsoft.com> wrote in message
> news:4AE4D52C-DD9D-4C25-B592-A02F9A29C554@.microsoft.com...
> First, use one query, not 5.
> Second, don't insert crap, and then clean it up later. Just refrain from
> inserting the invalid rows to begin with.
> Here's a rewrite using a temp table, a cross join and a derived table:
> --CREATE PROCEDURE [dbo].[CostCompCopyAvailIntoTemp]
> Declare @.UserName char(40)
> Declare @.AvailNum as int
> Declare @.PVal as dec(4,2)
> Declare @.PCol as int
> --as
> set nocount on
> delete from CostCompAvailTemp
> where UserName = @.UserName
>
> declare @.TR table(tr int primary key)
> insert into @.TR(tr) values (1)
> insert into @.TR(tr) values (2)
> insert into @.TR(tr) values (3)
> insert into @.TR(tr) values (4)
> insert into @.TR(tr) values (5)
> insert into CostCompAvailTemp
> select * from
> (
> select
> @.Username UserName,
> case tr.tr
> when 1 then TR1
> when 2 then TR2
> when 3 then TR3
> when 4 then TR4
> when 5 then TR5
> end as TR,
> Deadline RHour,
> CreditAvail Def,
> case @.PCol
> when 1 then Price1
> when 2 then Price2
> when 3 then Price3
> else @.PVal
> end Price,
> Null RDL
> from AvailabilityDetail
> cross join @.TR tr
> where AvailNumber = @.AvailNum
> ) dt
> where TR < 1
> or TR is Null
>
>
Performance problem
start to work very slow.
SqlServer start to raise following event
Event Type: Information
Event Source: MSSQLSERVER
Event Category: (2)
Event ID: 17055
Date: 7/3/2006
Time: 8:54:12 AM
User: N/A
Computer: SRVCA2
Description:
17052 :
This SQL Server has been optimized for 8 concurrent queries. This limit
has been exceeded by 19 queries and performance may be adversely
affected.
SQL Server worked fine before this. I replaced the hard disk and the rebuild
of the configuration was successful. In the SQL event log I saw that SQL
Server performed recovery of the databases.
I have SQL Server Personal Edition installed.
Can someone help me with this one, please?
The version of the db engine shipped with the Personal Edition has an inbuilt
restriction: once more than a certain number of concurrent activities are
running, the engine deliberately slows the workload down.
There are a number of different activities which contribute to the count.
Check DBCC CONCURRENCYVIOLATION to gauge how often the service goes over
the limit. Maybe you could upgrade.
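For reference, a minimal sketch of that check (DBCC CONCURRENCYVIOLATION is
the SQL Server 2000 workload-governor counter; treat the RESET option as an
assumption to verify in Books Online):

DBCC CONCURRENCYVIOLATION
-- reports how often the concurrent-workload limit has been exceeded
-- DBCC CONCURRENCYVIOLATION (RESET)  -- clears the counters after review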
--
Jack Vamvas
___________________________________
Receive free SQL tips - www.ciquery.com/sqlserver.htm
___________________________________
"Sale" <aleksandar.stancic@.gmail.com> wrote in message
news:1151913565.386437.173310@.75g2000cwc.googlegroups.com...
> After a disk failure (the disk was in a RAID 1 configuration), SQL Server
> started to work very slowly.
> SQL Server started to raise the following event:
> Event Type: Information
> Event Source: MSSQLSERVER
> Event Category: (2)
> Event ID: 17055
> Date: 7/3/2006
> Time: 8:54:12 AM
> User: N/A
> Computer: SRVCA2
> Description:
> 17052 :
> This SQL Server has been optimized for 8 concurrent queries. This limit
> has been exceeded by 19 queries and performance may be adversely
> affected.
>
> SQL Server worked fine before this. I replaced the hard disk and the
> rebuild of the configuration was successful. In the SQL event log I saw
> that SQL Server performed recovery of the databases.
> I have SQL Server Personal Edition installed.
> Can someone help me with this one please.
>|||"Jack Vamvas" <DEL_TO_REPLYtechsupport@.ciquery.com> wrote in message
news:TI-dnbeidMt4RzXZnZ2dnUVZ8s-dnZ2d@.bt.com...
> The version of the db engine shipped with the Personal Edition has an
> inbuilt restriction: once more than a certain number of concurrent
> activities are running, the engine deliberately slows the workload down.
> There are a number of different activities which contribute to the count.
> Check DBCC CONCURRENCYVIOLATION to gauge how often the service goes over
> the limit. Maybe you could upgrade.
Most likely the slower disk I/O during the failure was the cause here.
You may want to upgrade to something like SQL 2005 Express, which removes
this particular restriction (but has others).
> --
> Jack Vamvas
> ___________________________________
> Receive free SQL tips - www.ciquery.com/sqlserver.htm
> ___________________________________
>
> "Sale" <aleksandar.stancic@.gmail.com> wrote in message
> news:1151913565.386437.173310@.75g2000cwc.googlegroups.com...
> > After a disk failure (the disk was in a RAID 1 configuration), SQL Server
> > started to work very slowly.
> >
> > SQL Server started to raise the following event:
> >
> > Event Type: Information
> > Event Source: MSSQLSERVER
> > Event Category: (2)
> > Event ID: 17055
> > Date: 7/3/2006
> > Time: 8:54:12 AM
> > User: N/A
> > Computer: SRVCA2
> > Description:
> > 17052 :
> > This SQL Server has been optimized for 8 concurrent queries. This limit
> > has been exceeded by 19 queries and performance may be adversely
> > affected.
> >
> >
> > SQL Server worked fine before this. I replaced the hard disk and the
> > rebuild of the configuration was successful. In the SQL event log I saw
> > that SQL Server performed recovery of the databases.
> >
> > I have SQL Server Personal Edition installed.
> >
> > Can someone help me with this one please.
> >
>
I upgraded Personal Edition to Standard, and after this everything works
fine.
Thanks for your help, people.
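If you ever need to confirm what you are running after such an upgrade,
SERVERPROPERTY (available from SQL Server 2000 on) reports it directly:

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel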
Friday, March 9, 2012
performance on Reconcile/Compuware
The performance bottleneck is at the following query, sent by the
Reconcile/Compuware third party application to SQL server.
The most expensive step is the sort on the requ_tag column.
However, if I move attv_value (text data type) to be the last returned
column, the performance is OK.
Does anyone know why?
What can I do from the SQL Server side to improve performance without
changing the TSQL?
(Indexing?)
SELECT RC_Requirements.requ_id,
RC_Requirement_type.reqt_id,
attv_value, -- text data type
requ_tag
FROM [RC_Requirements],
[RC_Requirement_type],
[RC_Attribute_value],
[RC_Attribute_definition]
WHERE RC_Requirement_type.reqt_id = RC_Requirements.reqt_id
AND RC_Attribute_value.requ_id = RC_Requirements.requ_id
AND RC_Attribute_value.attd_id = RC_Attribute_definition.attd_id
AND RC_Requirements.proj_id=43144
ORDER BY RC_Requirements.requ_tag
"pc" <pc@.discussions.microsoft.com> wrote in message
news:25CB6A22-0DF5-4B18-A783-67F4846941D1@.microsoft.com...
> The performance bottleneck is at the following query, sent by the
> Reconcile/Compuware third party application to SQL server.
> The most expensive step is the sort on the requ_tag column.
> However, if I move attv_value (text data type) to be the last returned
> column, the performance is OK.
> Does anyone know why?
> What can I do from the SQL Server side to improve performance without
> changing the TSQL?
> (Indexing?)
>
You can always try indexing requ_tag. This may avoid the expensive sort, as
the index will already maintain a sorted list of requ_tag.
David
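A minimal sketch of that suggestion, folding in the proj_id filter as well
(the index name and the two-column composition are my assumptions, not from
the thread):

CREATE NONCLUSTERED INDEX IX_RC_Requirements_proj_tag
ON RC_Requirements (proj_id, requ_tag)

With proj_id first, the WHERE proj_id = 43144 predicate can seek into the
index, and since requ_tag is the next key column the matching rows come back
already in requ_tag order, which can let the optimizer skip the sort.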
Wednesday, March 7, 2012
Performance of query joining tables linked between different databases
The following query takes almost two minutes to run:
SELECT Reference.dbo.tlkpInstruments.IDInstrument,
Reference.dbo.tlkpInstruments.Instrument, dbo.tblAccounts.*
FROM dbo.tblAccounts LEFT OUTER JOIN
Reference.dbo.tlkpInstruments ON dbo.tblAccounts.Commodities =
Reference.dbo.tlkpInstruments.IDInstrument
tlkpInstruments contains 4140 rows. Primary key is IDInstrument (int).
All rows retrieved in less than 5 seconds.
tblAccounts contains 43348 rows. Primary key is AccID (bigint) + OffID (int).
All rows retrieved in less than 5 seconds.
Why is the query running so slowly?
Thanks.
The linked column in the subsidiary table is the primary key of that table
and is indexed that way. In the main table, I've tried indexing the linked
column separately and adding it to the primary key. Neither approach makes
any substantial change.
I've also tried importing the subsidiary table into the main database. There
is no change to the performance.
Anything else I can try?
"Tibor Karaszi" wrote:
> Start by adding indexes on the columns that you do the join over. Also consider making one or even
> both of the indexes clustered indexes (assuming that is suitable).
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "TLD" <TLD@.discussions.microsoft.com> wrote in message
> news:5FF860E0-152C-4C5C-AB90-1FA78D9E05B1@.microsoft.com...
>
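For concreteness, the kind of index Tibor means on the main table would be
something like this (the index name is my own):

CREATE NONCLUSTERED INDEX IX_tblAccounts_Commodities
ON dbo.tblAccounts (Commodities)

As described above, TLD has already tried this without much gain, which is a
hint that the bottleneck may not be the join itself.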
99% chance it's the way you've indexed the tables.
Post the DDL (CREATE TABLE and CREATE INDEX statements) and someone can
probably help.
-Dave
TLD wrote:
> The linked column in the subsidiary table is the primary key of that table
> and is indexed that way. In the main table, I've tried indexing the linked
> column separately and adding it to the primary key. Neither approach makes
> any substantial change.
> I've also tried importing the subsidiary table into the main database. There
> is no change to the performance.
> Anything else I can try?
> "Tibor Karaszi" wrote:
-Dave Markle
http://www.markleconsulting.com/blog
TLD, I don't see a thing wrong with your query from a performance
perspective.
Try this:
SELECT
i.IDInstrument,
i.Instrument,
a.*
into #bleh
FROM dbo.tblAccounts a
LEFT JOIN dbo.tlkpInstruments i
ON a.Commodities = i.IDInstrument;
DROP TABLE #bleh;
How long does that take? If it's quick, it probably indicates that you
have a problem with the speed of returning rows to the client. This can
be caused by a network configuration issue or by having some task on the
client side that's eating up CPU cycles.
What happens when you put this in front of your query? What are the
results in your "Messages" window?
SET STATISTICS IO ON;
-Dave
TLD wrote:
> Here are table and index definitions:
>
> CREATE TABLE [dbo].[tblAccounts] (
> [AccID] [bigint] NOT NULL ,
> [OffID] [int] NOT NULL ,
> [ID] [bigint] NULL ,
> [AccNo] [char] (5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [AccName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Office] [char] (3) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Level] [tinyint] NULL ,
> [3rdPartyGty] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Division] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [USRating] [varchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [UKRating] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [InitReqmnts] [int] NULL ,
> [StressExposure] [int] NULL ,
> [MarDefExcess] [int] NULL ,
> [ARTL] [int] NULL ,
> [SRTL] [int] NULL ,
> [IMTL] [int] NULL ,
> [PCOvr] [tinyint] NULL ,
> [Action] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [EvalDate] [smalldatetime] NULL ,
> [RenewDate] [smalldatetime] NULL ,
> [LevelPrevious] [int] NULL ,
> [EvalDatePrevious] [smalldatetime] NULL ,
> [Note] [varchar] (125) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Exclude] [bit] NULL ,
> [NetLiquidity] [int] NULL ,
> [MaxRisk] [int] NULL ,
> [NotAmt] [int] NULL ,
> [MarReqd] [int] NULL ,
> [Country] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [SHEquity] [int] NULL ,
> [EstProfit] [int] NULL ,
> [Exposure] [numeric](10, 2) NULL ,
> [Options] [bit] NULL ,
> [Contracts] [int] NULL ,
> [Symbol] [varchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [PriceCurrent] [money] NULL ,
> [PriceHistoric] [money] NULL ,
> [Documentation] [bit] NULL ,
> [Commodities] [int] NULL ,
> [Comment] [varchar] (125) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [MD] [bit] NULL ,
> [Diversified] [smallint] NULL ,
> [Doc] [smallint] NULL ,
> [Credit] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Legal] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [Accepted] [smallint] NULL ,
> [Man] [bit] NULL ,
> [TriggerContracts] [int] NULL ,
> [Inst] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
> ) ON [PRIMARY]
>
> CREATE UNIQUE CLUSTERED
> INDEX [PK_tblAccounts] ON [dbo].[tblAccounts] ([AccID], [OffID],
> [Commodities])
> WITH
> FILLFACTOR = 90
> ,DROP_EXISTING
> ON [PRIMARY]
> CREATE TABLE [dbo].[tlkpInstruments] (
> [IDInstrument] [int] IDENTITY (1, 1) NOT NULL ,
> [Instrument] [nvarchar] (15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
> [Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [ExchangeID] [int] NOT NULL ,
> [GroupID] [int] NULL ,
> [Collective] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
> ) ON [PRIMARY]
> CREATE UNIQUE CLUSTERED
> INDEX [PK_tlkpInstruments] ON [dbo].[tlkpInstruments] ([IDInstrument])
> WITH
> DROP_EXISTING
> ON [PRIMARY]
>
> "TLD" wrote:
-Dave Markle
http://www.markleconsulting.com/blog
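As an aside on the SET STATISTICS IO ON suggestion above, this is how it is
typically wrapped around the query in Query Analyzer (a sketch using the
query from this thread):

SET STATISTICS IO ON;
SELECT i.IDInstrument, i.Instrument, a.*
FROM dbo.tblAccounts a
LEFT JOIN Reference.dbo.tlkpInstruments i
    ON a.Commodities = i.IDInstrument;
SET STATISTICS IO OFF;

The Messages window then prints, per table, a line of the form
Table 'tblAccounts'. Scan count ..., logical reads ..., physical reads ...,
read-ahead reads ..., which separates buffer-cache work from actual disk
reads.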
This is a local instance of SQL Server. There is no network connection. The only
client running is Query Analyzer.
The SELECT... INTO took one second.
Re-running the query takes 1m54s:
Application Profile Statistics
  Timer resolution (milliseconds): 0 / 0
  Number of INSERT, UPDATE, DELETE statements: 0 / 0
  Rows effected by INSERT, UPDATE, DELETE statements: 0 / 0
  Number of SELECT statements: 1 / 1
  Rows effected by SELECT statements: 43348 / 43348
  Number of user transactions: 2 / 2
  Average fetch time: 0 / 0
  Cumulative fetch time: 0 / 0
  Number of fetches: 0 / 0
  Number of open statement handles: 0 / 0
  Max number of opened statement handles: 0 / 0
  Cumulative number of statement handles: 0 / 0
Network Statistics
  Number of server roundtrips: 1 / 1
  Number of TDS packets sent: 1 / 1
  Number of TDS packets received: 2709 / 2709
  Number of bytes sent: 644 / 644
  Number of bytes received: 1.10936e+007 / 1.10936e+007
Time Statistics
  Cumulative client processing time: 0 / 0
  Cumulative wait time on server replies: 64 / 64
"Dave Markle" <"dma[remove_ZZ]ZZrkle" wrote:
> TLD, I don't see a thing wrong with your query from a performance
> perspective.
> Try this:
> SELECT
> i.IDInstrument,
> i.Instrument,
> a.*
> into #bleh
> FROM dbo.tblAccounts a
> LEFT JOIN dbo.tlkpInstruments i
> ON a.Commodities = i.IDInstrument;
> DROP TABLE #bleh;
> How long does that take? If it's quick, it probably indicates that you
> have a problem with the speed of returning rows to the client. This can
> be caused by a network configuration issue or by having some task on the
> client side that's eating up CPU cycles.
> What happens when you put this in front of your query? What are the
> results in your "Messages" window?
> SET STATISTICS IO ON;
> -Dave
>
> TLD wrote:
>
> --
> -Dave Markle
> http://www.markleconsulting.com/blog
>
The two minutes is slow, of course,
but your "5 seconds" timings are suspect, too.
Are you running on a tiny, tiny machine, or on a busy server with just
all kinds of other stuff going on?
You ought to be able to fetch both tables and the join in five seconds
easy, even with no keys at all, on anything like a current machine.
J.
On Fri, 26 Jan 2007 09:38:01 -0800, TLD
<TLD@.discussions.microsoft.com> wrote:
>The following query takes almost two minutes to run:
>SELECT Reference.dbo.tlkpInstruments.IDInstrument,
>Reference.dbo.tlkpInstruments.Instrument, dbo.tblAccounts.*
>FROM dbo.tblAccounts LEFT OUTER JOIN
> Reference.dbo.tlkpInstruments ON dbo.tblAccounts.Commodities =
>Reference.dbo.tlkpInstruments.IDInstrument
>tlkpInstruments contains 4140 rows. Primary key is IDInstrument (int).
>All rows retrieved in less than 5 seconds.
>tblAccounts contains 43348 rows. Primary key is AccID (bigint) + OffID (int).
>All rows retrieved in less than 5 seconds.
>Why is the query running so slowly?
>Thanks.
I'm using a dual-Opteron 64 workstation with 2G of memory. SQL Server is
installed as a local instance. There is no active network connection.
"JXStern" wrote:
> The two minutes is slow, of course,
> But your "5 seconds" timings are suspect, too.
> Are you running on a tiny, tiny machine, or on a busy server with just
> all kinds of other stuff going on?
> You ought to be able to fetch both tables and the join in five seconds
> easy, even with no keys at all, on anything like a current machine.
> J.
>
> On Fri, 26 Jan 2007 09:38:01 -0800, TLD
> <TLD@.discussions.microsoft.com> wrote:
>
>
I should add that the performance is the same when running with an Access
front-end in a server environment. This is why I'm trying to get it
straightened out on a local machine first. The speed of return of the tables
by themselves is less than five seconds, more like three.
This shouldn't make any difference but there are many more records in the
main table without an associated record in the subsidiary than there are with
one. That brings up a lot of NULLs.
What about the order of the fields in the join? Could that make any
difference?
"JXStern" wrote:
> The two minutes is slow, of course,
> But your "5 seconds" timings are suspect, too.
> Are you running on a tiny, tiny machine, or on a busy server with just
> all kinds of other stuff going on?
> You ought to be able to fetch both tables and the join in five seconds
> easy, even with no keys at all, on anything like a current machine.
> J.
>
> On Fri, 26 Jan 2007 09:38:01 -0800, TLD
> <TLD@.discussions.microsoft.com> wrote:
>
>
Here is an answer:
The field which is causing the delay is
TriggerContracts int 4
When all except this field are accessed with the join, the return is less
than five seconds. When this field is included the return is two minutes.
Bracketing the name doesn't change the performance. Changing the name to
TContracts doesn't change the performance. Adding the field to the clustered
index doesn't change the performance. What in the world is going on?
"JXStern" wrote:
> The two minutes is slow, of course,
> But your "5 seconds" timings are suspect, too.
> Are you running on a tiny, tiny machine, or on a busy server with just
> all kinds of other stuff going on?
> You ought to be able to fetch both tables and the join in five seconds
> easy, even with no keys at all, on anything like a current machine.
> J.
>
> On Fri, 26 Jan 2007 09:38:01 -0800, TLD
> <TLD@.discussions.microsoft.com> wrote:
>
>
The problem has been solved. By reworking my front end, I was able to
eliminate some of the fields in the query. Now it runs in less than five
seconds.
Thanks to all for your help.
"JXStern" wrote:
> The two minutes is slow, of course,
> But your "5 seconds" timings are suspect, too.
> Are you running on a tiny, tiny machine, or on a busy server with just
> all kinds of other stuff going on?
> You ought to be able to fetch both tables and the join in five seconds
> easy, even with no keys at all, on anything like a current machine.
> J.
>
> On Fri, 26 Jan 2007 09:38:01 -0800, TLD
> <TLD@.discussions.microsoft.com> wrote:
>
>