Just curious if anyone has experience with this.
SSRS2005
I have a report that has a dataset and populates a table.
I have 8 sub-reports that are placed on this report.
Is this approach the best for performance, or should I add the other 8
datasets to this one report and then include a table for each of them?
Just wondering what kind of overhead using the sub-reports creates.
Thanks in advance for your replies.
It sounds like this is not a master-detail type of thing. I use subreports if I
plan on using the report inside multiple reports or if I am doing
master-detail. Otherwise I put it all in the same report.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Chris" <cexley@.enableconsulting.com> wrote in message
news:%23Fgv55x4GHA.772@.TK2MSFTNGP02.phx.gbl...
> Just curious if anyone has experience with this.
> SSRS2005
> I have a report that has a dataset and populates a table.
> I have 8 sub-reports that are placed on this report.
> Is this approach the best for performance, or should I add the other 8
> datasets to this one report,
> and then include a table for each of them.
> Just wondering what kind of overhead this is creating using the Sub
> Reports.
> Thanks in advance for your reply's
Thanks Bruce. I created them as separate reports so that I could use them
in multiple places. I was just wondering about performance hits and whether
there was a preferred method of achieving better performance.
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:ercR5Oy4GHA.1496@.TK2MSFTNGP05.phx.gbl...
> It sounds like this not a master-detail type of thing. I use subreports if
> I plan on using the report inside multiple reports or if I am doing
> master-detail. Otherwise I put it all in the same report.
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "Chris" <cexley@.enableconsulting.com> wrote in message
> news:%23Fgv55x4GHA.772@.TK2MSFTNGP02.phx.gbl...
>> Just curious if anyone has experience with this.
>> SSRS2005
>> I have a report that has a dataset and populates a table.
>> I have 8 sub-reports that are placed on this report.
>> Is this approach the best for performance, or should I add the other 8
>> datasets to this one report,
>> and then include a table for each of them.
>> Just wondering what kind of overhead this is creating using the Sub
>> Reports.
>> Thanks in advance for your reply's
>>
>
Monday, March 26, 2012
Performance Question
Tuesday, March 20, 2012
Performance Problem
I am experiencing a very strange performance problem. A nightly job that had
been consistently running 2 hours each night is suddenly running 16 hours.
A trace reveals a section of code taking 2500-3800 milliseconds of CPU to
process. The execution plan for the select statement shows the indexes are
correctly being selected and an index scan is being performed on both tables
in the select. There are approx 400,000 rows, and the trace says it is
reading every row (although index scanning).
If I run the same select statement in Query Analyzer while the Agent job is
running, it processes in about 35 milliseconds, reading less than 60 rows (as
it should).
Any ideas on why I am getting such a large performance gap?
Help!
Thanks
Richard Douglass
It's probably something that changed between then and now.
"Richard Douglass" wrote:
> I am experience a very strange performance problem. A nightly job that ha
d
> been consistently running 2 hours each night is suddenly running 16 hours.
> A trace reveals a section of code taking 2500-3800 milliseconds of CPU to
> process. The execution plan for the select statement shows the indexes ar
e
> correctly being selected and an index s
is being performed on both tabl
es
> in the select. There are approx 400,000 rows, and the trace says it is
> reading every row (although index s
ing)
> If I run the same select statement in query analyzer while the Agent job i
s
> running it processes in about 35 milliseconds, reading less than 60 rows (
As
> it should)
> Any ideas on why I am getting such a large performance gap'
> help!
> Thanks
> Richard Douglass
>
Nothing has changed. The code is stable and has not been modified in almost
a year.
It makes no sense that the SELECT in the job runs for 3000 milliseconds and
a cut-and-paste of the same query runs in 40 milliseconds in Query Analyzer.
Both the job and QA produce the exact same execution plan (same logical
look, both tables index scanning).
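The symptom here (the same statement, apparently the same plan shape, yet wildly different cost depending on whether the job or an ad hoc session runs it) is the classic stale-cached-plan pattern. A minimal Python sketch of the idea, purely as a toy analogy and not SQL Server internals (every name and number below is invented):

```python
# Toy model of plan-cache reuse: the first parameter seen determines the
# cached plan, and later callers reuse it even when it is a poor fit.

plan_cache = {}

def compile_plan(selectivity):
    # A "seek" plan is cheap for selective predicates; a "scan" plan
    # touches every row regardless of the parameter.
    return "seek" if selectivity < 0.01 else "scan"

def run_query(query_id, selectivity, total_rows=400_000):
    # Reuse whatever plan was compiled for the FIRST parameter seen.
    plan = plan_cache.setdefault(query_id, compile_plan(selectivity))
    rows_read = int(total_rows * selectivity) if plan == "seek" else total_rows
    return plan, rows_read

# The nightly job happens to compile first with an unselective parameter:
print(run_query("q1", 0.9))      # scan plan, every row read
# A later, highly selective call reuses the same cached scan plan:
print(run_query("q1", 0.0001))   # still the scan plan, still every row
# An ad hoc run in Query Analyzer compiles its own plan:
print(run_query("q2", 0.0001))   # seek plan, only a handful of rows
```

In this toy picture, refreshing statistics or forcing a recompile corresponds to evicting the cached entry so the next execution compiles a plan appropriate to its actual parameter.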
"KH" <KH@.discussions.microsoft.com> wrote in message
news:7DEC1462-2C11-4340-BAA2-8DD92B47C0C5@.microsoft.com...
> It's probably something that changed between then and now.
Is there anything to the timing that your "nightly" job runs at night
(during backups?) and your QueryAnalyzer job you test is run during the day?
(grasping at straws?)
Message posted via http://www.webservertalk.com
Richard, I saw a very similar thing at one of my customers. My suspicion
was the driver (ODBC versus the native SQL driver; don't ask me why). If we
ran a query in QA, it ran in sub-second, but when run through the app it ran
in 3 to 4 seconds. Exact same parameters and select etc.
I ended up changing the query in the procedure and did a pre-select and got
both to run in the same time frame.
I know this doesn't make sense, but have you tried dropping and re-creating
the index?
Bill
"Richard Douglass" <RichardD@.arisinc.com> wrote in message
news:e2ncx%23jNFHA.3380@.TK2MSFTNGP15.phx.gbl...
> Nothing has changed. The code is stable and has not been modified in
> almost
> a year.
> It makes no sense that the SELECT in the job runs for 3000 milliseconds
> and
> a cut-n-paste of the same query runs in 40 milliseconds in query analyzer.
> Both the job and QA produce the exact same execution plan (Same logical
> look, both tables index s
ing)
> "KH" <KH@.discussions.microsoft.com> wrote in message
> news:7DEC1462-2C11-4340-BAA2-8DD92B47C0C5@.microsoft.com...
> had
> hours.
> to
> are
> tables
> is
> (As
>|||Have you tried updating the stats on those two tables? You might have
gotten a bad query plan that is being reused by the job. When you run the
query individually it will get it's own plan. Also make sure that you have
SET NOCOUNT ON set at the beginning of the code in the job.
Andrew J. Kelly SQL MVP
"Richard Douglass" <RichardD@.arisinc.com> wrote in message
news:ukuptfjNFHA.2136@.TK2MSFTNGP14.phx.gbl...
>I am experience a very strange performance problem. A nightly job that had
> been consistently running 2 hours each night is suddenly running 16 hours.
> A trace reveals a section of code taking 2500-3800 milliseconds of CPU to
> process. The execution plan for the select statement shows the indexes
> are
> correctly being selected and an index s
is being performed on both
> tables
> in the select. There are approx 400,000 rows, and the trace says it is
> reading every row (although index s
ing)
> If I run the same select statement in query analyzer while the Agent job
> is
> running it processes in about 35 milliseconds, reading less than 60 rows
> (As
> it should)
> Any ideas on why I am getting such a large performance gap'
> help!
> Thanks
> Richard Douglass
>|||"Richard Douglass" <RichardD@.arisinc.com> wrote in message
news:e2ncx%23jNFHA.3380@.TK2MSFTNGP15.phx.gbl...
It sounds very much like a problem we're having on one of our projects here
as well!
Sometime during Easter the performance of one of our stored procedures has
gone VERY bad!
Our code is also stable and has not been modified since some time before
Easter!
With the Profiler we can see that the SP call from the web application takes
approx. 6 seconds, but if we run the same SP call from QA it takes approx. a
second!
We suspected an MDAC 2.8 update that had been applied during Easter to be
the source of the problem, but after an uninstall and restart it's still
slow!
Has anyone else out there experienced this problem? And if anyone has solved
it, please explain thoroughly!
Cheers
Rene
Wednesday, March 7, 2012
Performance of backup across network
I am hoping someone with experience can help me with this.
I have been doing differential and log backups across a network to a device
specified by a UNC path for some time. The backup sizes have been small
enough that performance was no issue. Full backups, which total about 250
GB, I have been doing to a RAID set on the local machine, then doing a copy
(literally the command line copy from a batch script) to the same UNC path
as the diff and logs. The backup itself takes about 2.5 hours, and the copy
another 4 hours. The network is dedicated gigabit. The target machine uses
RAID 10 and can do local I/O at >= 60 MB/s. The network can definitely
sustain 30 MB/s with an app that has a clue about the network. xcopy can
sometimes hit that pace, but it and copy more typically do 17 MB/s.
Because I want to use the disk space on the SQL Server box for something
other than backups, I now want to do full backups directly across the
network, using the UNC path as for diff and log. My problem is that this is
dog slow (no insult to dogs, most of which run much faster than I,
intended). I am getting about 7 MB/s, which means the full backup is about
10 hours! Considering that the performance from copy (17 MB/s) is hardly
exemplary, I find SQL Server performance pretty embarrassing.
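The figures above are internally consistent; a quick back-of-the-envelope check (assuming 1 GB = 1024 MB):

```python
# Time to move a 250 GB backup at each throughput rate mentioned in
# the post (1 GB taken as 1024 MB).

def transfer_hours(size_gb, rate_mb_per_s):
    """Hours to transfer size_gb gigabytes at rate_mb_per_s MB/s."""
    return size_gb * 1024 / rate_mb_per_s / 3600

for label, rate in [("BACKUP over UNC", 7),
                    ("copy/xcopy", 17),
                    ("network capacity", 30)]:
    print(f"{label}: {rate} MB/s -> {transfer_hours(250, rate):.1f} h")
```

At 7 MB/s the 250 GB backup takes just over 10 hours, at 17 MB/s about 4.2 hours (matching the roughly 4-hour copy), and at the 30 MB/s the dedicated gigabit link can sustain it would be about 2.4 hours, close to the 2.5-hour local backup.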
The only useful tip I found on the net was to back up to multiple devices.
When I do this, my network throughput actually *drops* to about 6.5 MB/s
(for 2, 3 or 4 devices). The same tip suggested that multiple devices might
not help if max worker threads is not bumped up. I have not had a chance to
restart SQL Server (this is a 24x7 public database), but we typically have
100-120 connections, so I am not sure that bumping this up will do anything,
anyway.
Does anyone have experience with network backups and is getting better
throughput? I have considered forcing opportunistic locking in the
redirector, but this seems like blind hope.
Do I need to use something with a clue about networks, like the Veritas SQL
Agent? We use Veritas to spin our tapes, but we abandoned SQL Agent 2 years
ago when we could not get it to work.
Help with backing up directly over the network would be much appreciated.
Comments from SQL Agent users are also welcome.
TIA
--
Scott Nichol
Hi Scott
Have you looked at SQL Litespeed? It's all about optimising backup
performance.
http://www.imceda.com/LiteSpeed_Description.htm
Regards,
Greg Linwood
SQL Server MVP
"Scott Nichol" <reply_to_newsgroup@.scottnichol.com> wrote in message
news:u%23NuPnBaEHA.3596@.tk2msftngp13.phx.gbl...
> I am hoping someone with experience can help me with this.
> I have been doing differential and log backups across a network to a
device
> specified by a UNC path for some time. The backup sizes have been small
> enough that performance was no issue. Full backups, which total about 250
> GB, I have been doing to a RAID set on the local machine, then doing a
copy
> (literally the command line copy from a batch script) to the same UNC path
> as the diff and logs. The backup itself takes about 2.5 hours, and the
copy
> another 4 hours. The network is dedicated gigabit. The target machine
uses
> RAID 10 and can do local I/O at >= 60 MB/s. The network can definitely
> sustain 30 MB/s with an app that has a clue about the network. xcopy can
> sometimes hit that pace, but it and copy more typically do 17 MB/s.
> Because I want to use the disk space on the SQL Server box for something
> other than backups, I now want to do full backups directly across the
> network, using the UNC path as for diff and log. My problem is that this
is
> dog slow (no insult to dogs, most of which run much faster than I,
> intended). I am getting about 7 MB/s, which means the full backup is
about
> 10 hours! Considering that the performance from copy (17 MB/s) is hardly
> exemplary, I find SQL Server performance pretty embarassing.
> The only useful tip I found on the net was to backup to multiple devices.
> When I do this, my network throughput actually *drops* to about 6.5 MB/s
> (for 2, 3 or 4 devices). The same tip suggested that multiple devices
might
> not help if max worker threads is not bumped up. I have not had a chance
to
> restart SQL Server (this is a 24x7 public database), but we typically have
> 100-120 connections, so I am not sure that bumping this up will do
anything,
> anyway.
> Does anyone have experience with network backups and is getting better
> throughput? I have considered forcing opportunistic locking in the
> redirector, but this seems like blind hope.
> Do I need to use something with a clue about networks, like the Veritas
SQL
> Agent? We use Veritas to spin our tapes, but we abandoned SQL Agent 2
years
> ago when we could not get it to work.
> Help with backing up directly over the network would be much appreciated.
> Comments from SQL Agent users are also welcome.
> TIA
> --
> Scott Nichol
>
Performance of backup across network
I am hoping someone with experience can help me with this.
I have been doing differential and log backups across a network to a device
specified by a UNC path for some time. The backup sizes have been small
enough that performance was no issue. Full backups, which total about 250
GB, I have been doing to a RAID set on the local machine, then doing a copy
(literally the command line copy from a batch script) to the same UNC path
as the diff and logs. The backup itself takes about 2.5 hours, and the copy
another 4 hours. The network is dedicated gigabit. The target machine uses
RAID 10 and can do local I/O at >= 60 MB/s. The network can definitely
sustain 30 MB/s with an app that has a clue about the network. xcopy can
sometimes hit that pace, but it and copy more typically do 17 MB/s.
Because I want to use the disk space on the SQL Server box for something
other than backups, I now want to do full backups directly across the
network, using the UNC path as for diff and log. My problem is that this is
dog slow (no insult to dogs, most of which run much faster than I,
intended). I am getting about 7 MB/s, which means the full backup is about
10 hours! Considering that the performance from copy (17 MB/s) is hardly
exemplary, I find SQL Server performance pretty embarassing.
The only useful tip I found on the net was to backup to multiple devices.
When I do this, my network throughput actually *drops* to about 6.5 MB/s
(for 2, 3 or 4 devices). The same tip suggested that multiple devices might
not help if max worker threads is not bumped up. I have not had a chance to
restart SQL Server (this is a 24x7 public database), but we typically have
100-120 connections, so I am not sure that bumping this up will do anything,
anyway.
Does anyone have experience with network backups and is getting better
throughput? I have considered forcing opportunistic locking in the
redirector, but this seems like blind hope.
Do I need to use something with a clue about networks, like the Veritas SQL
Agent? We use Veritas to spin our tapes, but we abandoned SQL Agent 2 years
ago when we could not get it to work.
Help with backing up directly over the network would be much appreciated.
Comments from SQL Agent users are also welcome.
TIA
Scott Nichol
Hi Scott
Have you looked at SQL Litespeed? It's all about optimising backup
performance.
http://www.imceda.com/LiteSpeed_Description.htm
Regards,
Greg Linwood
SQL Server MVP
"Scott Nichol" <reply_to_newsgroup@.scottnichol.com> wrote in message
news:u%23NuPnBaEHA.3596@.tk2msftngp13.phx.gbl...
> I am hoping someone with experience can help me with this.
> I have been doing differential and log backups across a network to a
device
> specified by a UNC path for some time. The backup sizes have been small
> enough that performance was no issue. Full backups, which total about 250
> GB, I have been doing to a RAID set on the local machine, then doing a
copy
> (literally the command line copy from a batch script) to the same UNC path
> as the diff and logs. The backup itself takes about 2.5 hours, and the
copy
> another 4 hours. The network is dedicated gigabit. The target machine
uses
> RAID 10 and can do local I/O at >= 60 MB/s. The network can definitely
> sustain 30 MB/s with an app that has a clue about the network. xcopy can
> sometimes hit that pace, but it and copy more typically do 17 MB/s.
> Because I want to use the disk space on the SQL Server box for something
> other than backups, I now want to do full backups directly across the
> network, using the UNC path as for diff and log. My problem is that this
is
> dog slow (no insult to dogs, most of which run much faster than I,
> intended). I am getting about 7 MB/s, which means the full backup is
about
> 10 hours! Considering that the performance from copy (17 MB/s) is hardly
> exemplary, I find SQL Server performance pretty embarassing.
> The only useful tip I found on the net was to backup to multiple devices.
> When I do this, my network throughput actually *drops* to about 6.5 MB/s
> (for 2, 3 or 4 devices). The same tip suggested that multiple devices
might
> not help if max worker threads is not bumped up. I have not had a chance
to
> restart SQL Server (this is a 24x7 public database), but we typically have
> 100-120 connections, so I am not sure that bumping this up will do
anything,
> anyway.
> Does anyone have experience with network backups and is getting better
> throughput? I have considered forcing opportunistic locking in the
> redirector, but this seems like blind hope.
> Do I need to use something with a clue about networks, like the Veritas
SQL
> Agent? We use Veritas to spin our tapes, but we abandoned SQL Agent 2
years
> ago when we could not get it to work.
> Help with backing up directly over the network would be much appreciated.
> Comments from SQL Agent users are also welcome.
> TIA
> --
> Scott Nichol
>
I have been doing differential and log backups across a network to a device
specified by a UNC path for some time. The backup sizes have been small
enough that performance was no issue. Full backups, which total about 250
GB, I have been doing to a RAID set on the local machine, then doing a copy
(literally the command line copy from a batch script) to the same UNC path
as the diff and logs. The backup itself takes about 2.5 hours, and the copy
another 4 hours. The network is dedicated gigabit. The target machine uses
RAID 10 and can do local I/O at >= 60 MB/s. The network can definitely
sustain 30 MB/s with an app that has a clue about the network. xcopy can
sometimes hit that pace, but it and copy more typically do 17 MB/s.
Because I want to use the disk space on the SQL Server box for something
other than backups, I now want to do full backups directly across the
network, using the UNC path as for diff and log. My problem is that this is
dog slow (no insult to dogs, most of which run much faster than I,
intended). I am getting about 7 MB/s, which means the full backup is about
10 hours! Considering that the performance from copy (17 MB/s) is hardly
exemplary, I find SQL Server performance pretty embarassing.
The only useful tip I found on the net was to backup to multiple devices.
When I do this, my network throughput actually *drops* to about 6.5 MB/s
(for 2, 3 or 4 devices). The same tip suggested that multiple devices might
not help if max worker threads is not bumped up. I have not had a chance to
restart SQL Server (this is a 24x7 public database), but we typically have
100-120 connections, so I am not sure that bumping this up will do anything,
anyway.
Does anyone have experience with network backups and is getting better
throughput? I have considered forcing opportunistic locking in the
redirector, but this seems like blind hope.
Do I need to use something with a clue about networks, like the Veritas SQL
Agent? We use Veritas to spin our tapes, but we abandoned SQL Agent 2 years
ago when we could not get it to work.
Help with backing up directly over the network would be much appreciated.
Comments from SQL Agent users are also welcome.
TIA
Scott Nichol
Hi Scott
Have you looked at SQL Litespeed? It's all about optimising backup
performance.
http://www.imceda.com/LiteSpeed_Description.htm
Regards,
Greg Linwood
SQL Server MVP
"Scott Nichol" <reply_to_newsgroup@.scottnichol.com> wrote in message
news:u%23NuPnBaEHA.3596@.tk2msftngp13.phx.gbl...