We provide an indexing and searching service for our clients, and the
searches are bogging down as the demand has increased. We need to increase
the capacity of our system, so I am looking for some guidance.
Current hardware:
Single AMD Athlon 2800+ CPU
2.5 GB RAM
2 SATA 10,000 RPM disks on Adaptec RAID controller in RAID 0 mode.
2 IDE 7,200 RPM disks on Promise RAID controller in RAID 0 mode.
1 IDE 7,200 RPM system disk
Both the SQL database and the full-text catalogs are on the Adaptec RAID
system because it is much faster than the old Promise system. We are using
the Promise RAID system for our application data files.
Current software:
Windows 2000 Advanced Server
SQL Server 2000 version 8.00.760
Language = us_english
Database and index:
Single table with about 15 M rows.
Approximately 300,000 rows are added and removed every day. The additions
and deletions are running non-stop day and night while the searches are
being done. We are using change tracking to manage the insertions and
deletions.
We are indexing a single ASCII text column with an average of about 100
characters and 7 words per row.
Application:
Our application receives search requests via a TCP/IP connection, queues
them internally and feeds them into the SQL searching system. Our
application is multi-threaded, and we have it set so that it never sends
more than 4 simultaneous searches to the SQL server. During the busy times
we sometimes have 30 to 100 pending searches waiting in line in the queue in
our application.
We are using the CONTAINSTABLE method for full-text searching with a limit
of 750 rows to be returned by CONTAINSTABLE. Virtually all of our searches
have a series of AND clauses. Some of them are even more complex with
parentheses and OR clauses.
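To illustrate the shape of these queries (with hypothetical table and column
names), a search looks roughly like:
-- Hypothetical names; 750 caps the rows CONTAINSTABLE returns by rank.
SELECT m.MsgKey, m.MsgText, k.[RANK]
FROM CONTAINSTABLE(Messages, MsgText,
     '("word1" AND "word2") AND ("ext1" OR "ext2")', 750) AS k
     JOIN Messages AS m ON m.MsgKey = k.[KEY]
ORDER BY k.[RANK] DESC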
Some simple searches take only a couple of seconds, but we are seeing some
of the more complex searches taking 20 to 30 seconds during light periods.
During heavy periods when we have 4 simultaneous searches going, the search
times are dragging out to 45 to 60 seconds, which is unacceptable.
When we were using the old, slow Promise RAID array for the database and
catalog, we were clearly I/O bound.
With the newer and faster Adaptec RAID array, we see our CPU time pegging at
100% for extended periods.
Search load:
We are averaging one search every 5 seconds over a non-stop 24-hour period
(including very slow times in the early morning). The demand (which we
cannot currently handle) is probably several searches per second during the
busy periods.
In addition to the searches, approximately 800 simple, single-key (key
type=long integer) queries are done to the SQL database per minute.
Future needs and scalability:
We see the search demands of our customers growing briskly over time.
Within a year we may have 3 or more times the number of searches that we
currently have. So scalability is an important issue.
What suggestions do you have for upgrades to our system to handle the
current and anticipated future search needs?
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com (decision tree modeling)
http://www.nlreg.com (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
First off, the best thing you can do is to cache your queries so that they
don't hit the search catalog at all. The problem with this approach is that
you will not have real-time results, but there is nothing to stop you from
incrementally regenerating the pre-generated HTML search-results pages in
response to new content as it is added. On one system where we did this we
achieved a staggering 1400% performance improvement.
So my question to you is: can you cache queries, i.e., are many of your
searches on the same topics? If so, such a caching scheme is excellent for this.
Another option is to partition your tables into different catalogs so that a
portion of your data exists in each catalog. MSSearch allocates threads to
each catalog so there are performance advantages to implementing such
partitioning. If you can't break your data into logical groups such that you
know a query will be satisfied by a single catalog, you will have to do a
union-based query.
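For example, a union-based query across two hypothetical partitions
(Messages_A and Messages_B, each in its own catalog) would look something like:
-- Each partition has its own catalog, so MSSearch can use separate
-- threads; UNION ALL merges the per-catalog results.
DECLARE @cond varchar(4000)
SET @cond = '"word1" AND "word2"'
SELECT m.MsgKey, k.[RANK]
FROM CONTAINSTABLE(Messages_A, MsgText, @cond, 750) AS k
     JOIN Messages_A AS m ON m.MsgKey = k.[KEY]
UNION ALL
SELECT m.MsgKey, k.[RANK]
FROM CONTAINSTABLE(Messages_B, MsgText, @cond, 750) AS k
     JOIN Messages_B AS m ON m.MsgKey = k.[KEY]
ORDER BY [RANK] DESC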
Unfortunately for your current box, SQL FTS benefits from multiple
processors, with a sweet spot at 8-way machines. Using SQL FTS on a
single-proc box is not optimal, even with hyperthreading.
Also, while RAID 0 can offer better performance than RAID 1 or RAID 5, it is
not fault tolerant, and in the long run it may not be the best choice.
Did you follow the recommendations in this article:
http://www.microsoft.com/downloads/d...displaylang=en
or this one?
http://msdn.microsoft.com/library/de...tml/sp04f9.asp
"Phil Sherrod" <phil.sherrod@.REMOVETHISsandh.com> wrote in message
news:N76dnQOZqsxcFBfcRVn-gQ@.giganews.com...
> We provide an indexing and searching service for our clients, and the
> searches are bogging down as the demand has increased. We need to
> increase
> the capacity of our system, so I am looking for some guidance.
> Current hardware:
> Single AMD Athlon 2800+ CPU
> 2.5 GB RAM
> 2 IDE SATA 10,000 RPM disks on Adaptec RAID controller in RAID 0 mode.
> 2 IDE IDE 7,200 RPM disks on Promise RAID controller in RAID 0 mode.
> 1 IDE 7,200 RPM system disk
> Both the SQL database and the full-text catalogs are on the Adaptec RAID
> system because it is much faster than the old Promise system. We are using
> the Promise RAID system for our application data files.
> Current software:
> Windows 2000 Advanced Server
> SQL server 2000 version 8.00.760
> Language = us_english
> Database and index:
> Single table with about 15 M rows.
> Approximately 300,000 rows are added and removed every day. The
> additions
> and deletions are running non-stop day and night while the searches are
> being done. We are using change tracking to manage the insertions and
> deletions.
> We are indexing a single ASCII text column with an average of about 100
> characters and 7 words per row.
> Application:
> Our application receives search requests via a TCP/IP connection, queues
> them internally and feeds them into the SQL searching system. Our
> application is multi-threaded, and we have it set so that it never sends
> more than 4 simultaneous searches to the SQL server. During the busy
> times
> we sometimes have 30 to 100 pending searches waiting in line in the queue
> in
> our application.
> We are using the CONTAINSTABLE method for full-text searching with a limit
> of 750 rows to be returned by CONTAINSTABLE. Virtually all of our
> searches
> have a series of AND clauses. Some of them are even more complex with
> parentheses and OR clauses.
> Some simple searches take only a couple of seconds, but we are seeing some
> of the more complex searches taking 20 to 30 seconds during light periods.
> During heavy periods when we have 4 simultaneous searches going, the
> search
> times are dragging out to 45 to 60 seconds which is unacceptable.
> When we were using the old, slow Promise RAID array for the database and
> catalog, we were clearly I/O bound.
> With the newer and faster Adaptec RAID array, we see our CPU time pegging
> at
> 100% for extended periods.
> Search load:
> We are averaging one search every 5 seconds over a non-stop 24-hour period
> (including very slow times in the early morning). The demand (which we
> cannot currently handle) is probably several searches per second during
> the
> busy periods.
> In addition to the searches, approximately 800 simple, single-key (key
> type=long integer) queries are done to the SQL database per minute.
> Future needs and scalability:
> We see the search demands of our customers growing briskly over time.
> Within a year we may have 3 or more times the number of searches that we
> currently have. So scalability is an important issue.
> What suggestions do you have for upgrades to our system to handle the
> current and anticipated future search needs?
> --
> Phil Sherrod
> (phil.sherrod 'at' sandh.com)
> http://www.dtreg.com (decision tree modeling)
> http://www.nlreg.com (nonlinear regression)
> http://www.NewsRover.com (Usenet newsreader)
> http://www.LogRover.com (Web statistics analysis)
|||Hi Phil,
I see that you've implemented many of the ideas & references we discussed
previously under the "Re: Need help with full-text performance for large
database" thread in this newsgroup. More specifically, we discussed there
that FTI was mostly I/O bound; now you're no longer I/O bound, but are
instead bound by the CPU. I'm not sure whether your server will support
multiple CPUs, but that's the next step. FYI, you can use the following
command at a command prompt on the server to launch Task Manager under the
system account, which lets you set the MSSearch service's processor affinity
to one CPU:
at <current_time+1_min> /interactive taskmgr.exe
Then use sp_configure to set SQL Server to use the other CPU, so that the
MSSearch service does not affect your normal SQL Server processing. You
should also set the sp_fulltext_service resource_usage level to 5 to allow
more concurrent SQL FTS queries.
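A sketch of those two settings, assuming you pin SQL Server to the first CPU
(adjust the affinity bitmask for your box):
-- 'affinity mask' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Pin SQL Server to CPU 0 (bitmask 0x1), leaving the other CPU free.
EXEC sp_configure 'affinity mask', 1
RECONFIGURE
-- Raise MSSearch resource usage from the default of 3 to the maximum of 5.
EXEC sp_fulltext_service 'resource_usage', 5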
Could you post examples of your more complex searches that are taking 20 to
30 seconds to complete? Are they using multiple CONTAINS or CONTAINSTABLE
statements, and/or do they depend upon additional WHERE clause restrictions?
Finally, I'd also recommend (if you have not already done so) that you try
to get a copy of the SQL Server 2005 (codename Yukon) beta 2, which can be
downloaded from MSDN if you are a subscriber, as Yukon contains many, many
FTS performance improvements!
Thanks,
John
"Phil Sherrod" <phil.sherrod@.REMOVETHISsandh.com> wrote in message
news:N76dnQOZqsxcFBfcRVn-gQ@.giganews.com...
> We provide an indexing and searching service for our clients, and the
> searches are bogging down as the demand has increased. We need to
increase
> the capacity of our system, so I am looking for some guidance.
> Current hardware:
> Single AMD Athlon 2800+ CPU
> 2.5 GB RAM
> 2 IDE SATA 10,000 RPM disks on Adaptec RAID controller in RAID 0 mode.
> 2 IDE IDE 7,200 RPM disks on Promise RAID controller in RAID 0 mode.
> 1 IDE 7,200 RPM system disk
> Both the SQL database and the full-text catalogs are on the Adaptec RAID
> system because it is much faster than the old Promise system. We are using
> the Promise RAID system for our application data files.
> Current software:
> Windows 2000 Advanced Server
> SQL server 2000 version 8.00.760
> Language = us_english
> Database and index:
> Single table with about 15 M rows.
> Approximately 300,000 rows are added and removed every day. The
additions
> and deletions are running non-stop day and night while the searches are
> being done. We are using change tracking to manage the insertions and
> deletions.
> We are indexing a single ASCII text column with an average of about 100
> characters and 7 words per row.
> Application:
> Our application receives search requests via a TCP/IP connection, queues
> them internally and feeds them into the SQL searching system. Our
> application is multi-threaded, and we have it set so that it never sends
> more than 4 simultaneous searches to the SQL server. During the busy
times
> we sometimes have 30 to 100 pending searches waiting in line in the queue
in
> our application.
> We are using the CONTAINSTABLE method for full-text searching with a limit
> of 750 rows to be returned by CONTAINSTABLE. Virtually all of our
searches
> have a series of AND clauses. Some of them are even more complex with
> parentheses and OR clauses.
> Some simple searches take only a couple of seconds, but we are seeing some
> of the more complex searches taking 20 to 30 seconds during light periods.
> During heavy periods when we have 4 simultaneous searches going, the
search
> times are dragging out to 45 to 60 seconds which is unacceptable.
> When we were using the old, slow Promise RAID array for the database and
> catalog, we were clearly I/O bound.
> With the newer and faster Adaptec RAID array, we see our CPU time pegging
at
> 100% for extended periods.
> Search load:
> We are averaging one search every 5 seconds over a non-stop 24-hour period
> (including very slow times in the early morning). The demand (which we
> cannot currently handle) is probably several searches per second during
the
> busy periods.
> In addition to the searches, approximately 800 simple, single-key (key
> type=long integer) queries are done to the SQL database per minute.
> Future needs and scalability:
> We see the search demands of our customers growing briskly over time.
> Within a year we may have 3 or more times the number of searches that we
> currently have. So scalability is an important issue.
> What suggestions do you have for upgrades to our system to handle the
> current and anticipated future search needs?
> --
> Phil Sherrod
> (phil.sherrod 'at' sandh.com)
> http://www.dtreg.com (decision tree modeling)
> http://www.nlreg.com (nonlinear regression)
> http://www.NewsRover.com (Usenet newsreader)
> http://www.LogRover.com (Web statistics analysis)
|||John,
Thank you for your message, and thank you for the help you provided
previously. I hope you got your MVP rating.
I never updated you on the solution to the reliability problems that we
finally got from Microsoft. When we designed our system, we were concerned
about overflowing a 32-bit integer in a few years, so we used a double
precision floating point value as the key; it had only integer values in it.
Apparently, there is a bug in FTS that causes it to crash occasionally if
you use a floating point key. We dropped back to a 32-bit integer key, and
that solved the crashes.
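For anyone hitting the same bug, the rebuild over an int key with change
tracking looks roughly like this (all names here are hypothetical):
-- PK_Messages is a unique index on the new int key column.
EXEC sp_fulltext_table  'Messages', 'drop'
EXEC sp_fulltext_table  'Messages', 'create', 'MsgCatalog', 'PK_Messages'
EXEC sp_fulltext_column 'Messages', 'MsgText', 'add'
EXEC sp_fulltext_table  'Messages', 'activate'
-- Full population runs first, then changes are tracked and applied
-- in the background.
EXEC sp_fulltext_table  'Messages', 'start_change_tracking'
EXEC sp_fulltext_table  'Messages', 'start_background_updateindex'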
On 4-Nov-2004, "John Kane" <jt-kane@.comcast.net> wrote:
> I see that you've implemented many of the ideas & references we discussed
> previously under the "Re: Need help with full-text performance for large
> database" thread in this newsgroup and more specifically, we discussed
> that
> FTI was mostly I/O bound, now that you're no longer i/o bound but are now
> bound by the CPU.
Yes, that is correct. The switch to the Adaptec RAID controller with the
10k RPM disks tremendously increased our I/O capacity. So we're now CPU
bound.
I have read that putting the catalog and SQL table on separate RAID
controllers improves performance, but the old Promise controller that we
have is SO slow compared to the Adaptec that we opted to use it only for our
application files. We may consider adding a second Adaptec RAID controller
with two more SATA drives if you still think that splitting the table and
catalog would provide significant performance improvement.
> While I'm not sure if your server will support, multiple
> CPU's, but that's the next step
Our current server will support only a single CPU. We are considering a new
one with dual Xeon 3.2 GHz CPUs and 4 GB of memory on an 800 MHz FSB. Pretty
much all of the servers we've found that support more than 2 CPUs use the
slower 3.0 GHz CPUs with a 333 or 300 MHz FSB. I would appreciate your
thoughts on whether this will be adequate.
> and then use sp_configure to set SQL Server to use the other CPU to avoid
> the MSSearch service from affecting your normal SQL Server processing. You
> should set the sp_fulltext_service resource_usage level to 5 to allow for
> more concurrent SQL FTS queries.
Please explain the sp_fulltext_service resource_usage level. What is the
default value? How exactly do we increase it?
Currently, we are sending 4 simultaneous searches to the server. Do you
think this is a good number?
> Could you post examples of your more complex searches that are taking 20
> to
> 30 seconds to complete? Are they using multiple CONTAINS or CONTAINSTABLE
> statements &/or depend upon additional where clause restrictions?
There is only a single CONTAINSTABLE clause in any of our searches.
Here is an example of a simple search that typically takes 2 to 10 seconds
depending on the words specified:
("Basic" AND "Instinct" AND "Sharon" AND "Stone") AND (JPG OR GIF OR PNG OR
MPG OR AVI OR NZB)
A complex query would combine up to 50 searches similar to the one shown
above combined with OR, like this:
(query1) OR (query2) OR ... OR (query50)
where query1, etc., is ("Basic" AND "Instinct" AND "Sharon" AND "Stone") AND
(JPG OR GIF OR PNG OR MPG OR AVI OR NZB) or something similar. In other
words, each basic query has AND and OR operators, and up to 50 of them, each
enclosed in parentheses, are combined using OR.
Note that there may be 5 million or more entries in the catalog for JPG, 2
million for MPG, etc., but the combination will probably only match a few
messages if any.
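For concreteness, one of those composite conditions is handed to
CONTAINSTABLE as a single string, roughly like this (names hypothetical):
-- The entire OR'ed condition travels as one search-condition string.
DECLARE @cond varchar(8000)
SET @cond = '(("Basic" AND "Instinct" AND "Sharon" AND "Stone")
              AND (JPG OR GIF OR PNG OR MPG OR AVI OR NZB))
          OR (("Some" AND "Other" AND "Words")
              AND (JPG OR GIF OR PNG OR MPG OR AVI OR NZB))'
SELECT m.MsgKey, k.[RANK]
FROM CONTAINSTABLE(Messages, MsgText, @cond, 750) AS k
     JOIN Messages AS m ON m.MsgKey = k.[KEY]
ORDER BY k.[RANK] DESC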
> Finally, I'd also recommend (if you have not already done so) that you try
> to get a copy of SQL Server 2005 (codename Yukon) beta2 that can be
> downloaded from MSDN, if you are a subscriber to MSDN as Yukon contains
> many, many FTS performance improvements!
We have a Universal MSDN subscription, so that's not a problem. Do you
think the beta2 version is reliable enough for production use?
What's involved in transferring a table and catalog with 15 M rows to Yukon?
It would take a long time to build the index from scratch. Can Yukon import
SQL tables and FTS catalogs from SQL 8?
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com (decision tree modeling)
http://www.nlreg.com (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
|||You're welcome, Phil,
Thanks for the update on the FT Catalog corruption (i.e., reliability)
problem, and yes, an int (or short char datatype) is the preferred datatype
for use with a FT Index.
The default value for sp_fulltext_service resource_usage is 3, with 5 being
the maximum (or dedicated) value. This controls not only the maximum number
of concurrent FTS queries, but also how much memory the MSSearch service
will use, up to a maximum of 512 MB, if that memory is available.
Resource_usage   Concurrent connections
==============   ======================
1                 128
2                 512
3                1024 (default)
4                2048
5                4096
Valid value parameters are from 1 to 5, with 3 as the default; these values
indicate proportionally allocated memory within minimum and maximum
boundaries for the given physical memory available on the server. These
minimum and maximum values relate to the number of word lists kept by the
Microsoft Search engine until Shadow Merge completes. The following table
relates Microsoft Search resource levels with the minimum and maximum number
of word lists held in memory.
Parameter   Word lists held in memory (min/max)
=========   ===================================
1           20/30
2           20/30
3           20/40
4           20/50
5           30/60
For example, a server with 512 MB of RAM and a resource_usage value of 5
will get 60 word lists held in memory and a computer with 128 MB will get 30
word lists held in memory. The actual values are computed by determining the
available physical memory on the server. The number of rows maintained in
memory by the Microsoft Search service before the Master Merge is 250,000 or
500,000, depending on the available physical memory on the server. In
addition to the memory allocated to SQL Server, it is recommended that a
minimum of 15 MB of RAM be reserved for the Microsoft Search service and a
maximum of 512 MB of RAM be allocated for the Microsoft Search service when
you are full-text indexing tables with millions of rows.
So, a single CONTAINSTABLE clause, but with your complex queries being a
long set of AND'ed and/or OR'ed search words, that might be enough to
justify the CPU usage.
As for the SQL Server 2005 (Yukon) beta2, it is stable, but I'm not sure I'd
say it is production ready. Beta 3 will be early next year, with the RTM
version scheduled to be released in the summer of 2005, although expect some
schedule slippage, based upon past experience <g>. As for transferring the
data, you could (on a test server) detach your SQL 2000 mdf & ldf files and
copy them to a new Yukon installation; however, I doubt you'll be able to
transfer the FT Catalog files, as the underlying structures have drastically
changed. On the other hand, the time to build the Yukon-based FT Catalog
will be orders of magnitude faster than the build time with SQL Server
2000!
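A sketch of that detach/copy/attach sequence (database name and file paths
hypothetical):
-- On the SQL 2000 server:
EXEC sp_detach_db 'SearchDB'
-- ...copy SearchDB.mdf and SearchDB_log.ldf to the Yukon test box, then:
EXEC sp_attach_db 'SearchDB',
     'D:\Data\SearchDB.mdf',
     'D:\Data\SearchDB_log.ldf'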
Thanks,
John
PS: As for the MVP, time and over-allocation is still a factor, so we'll
see!
"Phil Sherrod" <phil.sherrod@.REMOVETHISsandh.com> wrote in message
news:X-mdnTT0H6UGNBfcRVn-gg@.giganews.com...
> John,
> Thank you for your message, and thank you for the help you provided
> previously. I hope you got your MVP rating.
> I never updated you on the solution to the reliability problems that we
> finally got from Microsoft. When we designed our system, we were
concerned
> about overflowing a 32-bit integer in a few years, so we used a double
> precision floating point value as the key; it had only integer values in
it.
> Apparently, there is a bug in FTS that causes it to crash occasionally if
> you use a floating point key. We dropped back to a 32-bit integer key,
and[vbcol=seagreen]
> that solved the crashes.
> On 4-Nov-2004, "John Kane" <jt-kane@.comcast.net> wrote:
discussed[vbcol=seagreen]
now
> Yes, that is correct. The switch to the Adaptec RAID controller with the
> 10k RPM disks tremendously increased our I/O capacity. So we're now CPU
> bound.
> I have read that putting the catalog and SQL table on separate RAID
> controllers improves performance, but the old Promise controller that we
> have is SO slow compared to the Adaptec that we opted to use it only for
our
> application files. We may consider adding a second Adaptec RAID
controller
> with two more SATA drives if you still think that splitting the table and
> catalog would provide significant performance improvement.
>
> Our current server will support only a single CPU. We are considering a
new
> one with dual Xeon 3.2 GHz CPUs and 4 G of 800 MB FSB memory. Pretty much
> all of the servers we've found with more than 2 CPU capability use the
> slower 3.0 G CPU's with 333 or 300 MB FSB. I would appreciate your
comments[vbcol=seagreen]
> about your feeling about whether this will be adequate.
avoid[vbcol=seagreen]
You[vbcol=seagreen]
for[vbcol=seagreen]
> Please explain the sp_fulltext_service resource_usage level. What is the
> default value? How exactly do we increase it?
> Currently, we are sending 4 simultaneous searches to the server. Do you
> think this is a good number?
CONTAINSTABLE
> There is only a single CONTAINSTABLE clause in any of our searches.
> Here is an example of a simple search that typically takes 2 to 10 seconds
> depending on the words specified:
> ("Basic" AND "Instinct" AND "Sharon" AND "Stone") AND (JPG OR GIF OR PNG
OR
> MPG OR AVI OR NZB)
> A complex query would combine up to 50 searches similar to the one shown
> above combined with OR, like this:
> (query1) OR (query2) or ... or (query50)
> Where query1, etc, is ("Basic" AND "Instinct" AND "Sharon" AND "Stone")
AND
> (JPG OR GIF OR PNG OR MPG OR AVI OR NZB) or something similar to it. In
> other words, each basic query will have AND and OR operators and up to 50
of[vbcol=seagreen]
> them enclosed in parentheses will be combined using OR.
> Note that there may be 5 million or more entries in the catalog for JPG, 2
> million for MPG, etc., but the combination will probably only match a few
> messages if any.
try
> We have a Universal MSDN subscription, so that's not a problem. Do you
> think the beta2 version is reliable enough for production use?
> What's involved in transferring a table and catalog with 15 M rows to
Yukon?
> It would take a long time to build the index from scratch. Can Yukon
import
> SQL tables and FTS catalogs from SQL 8?
> --
> Phil Sherrod
> (phil.sherrod 'at' sandh.com)
> http://www.dtreg.com (decision tree modeling)
> http://www.nlreg.com (nonlinear regression)
> http://www.NewsRover.com (Usenet newsreader)
> http://www.LogRover.com (Web statistics analysis)
|||On 5-Nov-2004, "John Kane" <jt-kane@.comcast.net> wrote:
> As for SQL Server 2005 (Yukon) beta2 version, it is stable, but I'm not
> sure
> if I'd say it was production ready. Beta 3 will be early next year, with
> the
> RTM version scheduled to be released in the summer of 2005, although
> expect
> some schedule slippage, based upon past experiences <g>. As for
> transferring
> the data, you could (on a test server), detach your SQL 2000 mdf & ldf
> files
> and copy them to a new Yukon installation, however, I doubt if you're be
> able to transfer the FT Catalog files as the underlying structures have
> drastically changed. Although, the time to build the Yukon based FT
> Catalog
> will be order's of magnitude faster than the build time with SQL Server
> 2000!
OK, if we install the Yukon beta2, will we be able to simply install beta3
and the final release over it, or will we have to uninstall the beta as new
betas come out and when the final release arrives? Every minute our server
is down is a minute when we have unhappy customers.
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com (decision tree modeling)
http://www.nlreg.com (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
|||One other question about Yukon: Is the syntax of the SQL query going to be
the same with CONTAINSTABLE to do the full-text search?
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com (decision tree modeling)
http://www.nlreg.com (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
|||Hi Hilary,
Is there an easy way, like a perfmon counter, to tell how many searches hit
a cache and how many don't?
Thanks,
-Kevin Kline
Quest Software
SQL Server MVP
I support the Professional Association for SQL Server (PASS),
www.sqlpass.org
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:uAp1%236qwEHA.2908@.tk2msftngp13.phx.gbl...
> First off, the best thing you can do is to cache your queries, so that
they
> don't hit the search catalog at all. The problem with this approach is
that
> you will not have real time results, but there is nothing to stop you from
> incrementally generating the per generated html search results pages in
> response to new content which is added. On one system where we did this we
> achieved a staggering 1400%
> So my question to you is can you cache queries, ie are many of your
searches
> on the same topics? If so such a caching scheme is excellent for this.
> Another option is to partition your tables into different catalogs so that
a
> portion of your data exists in each catalog. MSSearch allocates threads to
> each catalog so there are performance advantages to implementing such
> partitioning. If you can't break your data logically into different groups
> so you know a query will be satisfied with a single catalog you will have
to
> do a union based query.
> Unfortunately SQL FTS benefits from multiple processors, with a sweet spot
> for 8 way processors. Using SQL FTS on a single proc box is not optimal
even
> with hyperthreading.
> Also while RAID 0 can offer better performance than RAID 1 or RAID 5 it is
> not fault tolerant, and in the long run may not be the best choice.
> Did you follow the recommendations in this article:
>
http://www.microsoft.com/downloads/d...displaylang=en
> or this one?
>
http://msdn.microsoft.com/library/de...tml/sp04f9.asp[vbcol=seagreen]
>
>
>
> "Phil Sherrod" <phil.sherrod@.REMOVETHISsandh.com> wrote in message
> news:N76dnQOZqsxcFBfcRVn-gQ@.giganews.com...
using[vbcol=seagreen]
queue[vbcol=seagreen]
limit[vbcol=seagreen]
some[vbcol=seagreen]
periods.[vbcol=seagreen]
pegging[vbcol=seagreen]
period
>
|||Hi Kevin
No, by caching queries I mean caching the search results, so that when a
search on the same search phrase is repeated, the results are returned from
the cache of search results rather than from the catalog.
This requires you to pregenerate search-results pages for each frequently
issued or expensive search phrase. You could go so far as to pregenerate all
previous searches into static pages of search results and refresh them on a
daily or hourly basis.
When a query comes in, a list is checked to see whether the search results
are pregenerated; if they are, the client is redirected to the static pages.
If not, the search is directed to the catalog.
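A minimal sketch of that lookup list as a SQL table (schema entirely
hypothetical):
-- Maps a normalized search phrase to its pregenerated static page.
CREATE TABLE SearchCache (
    Phrase      varchar(450) NOT NULL PRIMARY KEY,
    PageUrl     varchar(512) NOT NULL,
    RefreshedAt datetime     NOT NULL DEFAULT GETDATE()
)
-- On each request: redirect if cached, otherwise hit the catalog.
DECLARE @phrase varchar(450)
SET @phrase = 'basic instinct sharon stone jpg'
SELECT PageUrl FROM SearchCache WHERE Phrase = @phrase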
Please refer to this link for an isapi extension that does this.
http://groups.google.com/groups?hl=e...TNGP09.phx.gbl
and
http://groups.google.com/groups?hl=e...TNGP09.phx.gbl
Interestingly enough, MSSearch (the engine which provides the search service
for SQL FTS) relies on the file system cache to provide its caching. No
queries per se are cached; rather, the catalog pages are "cached". One of
the developers who worked on MSSearch told me that this works optimally for
them, as opposed to any other cache-scavenging mechanism. One of the reasons
for this is the size of the database pages MSSearch uses.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Kevin Kline [MVP]" <kevin.kline@.quest.com> wrote in message
news:eHQulZ3wEHA.1400@.TK2MSFTNGP11.phx.gbl...
> Hi Hilary,
> Is there an easy way, like a perfmon counter, to tell how many searches
hit[vbcol=seagreen]
> a cache and how many don't?
> Thanks,
> -Kevin Kline
> Quest Software
> SQL Server MVP
> I support the Professional Association for SQL Server (PASS),
> www.sqlpass.org
> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
> news:uAp1%236qwEHA.2908@.tk2msftngp13.phx.gbl...
> they
> that
from[vbcol=seagreen]
we[vbcol=seagreen]
> searches
that[vbcol=seagreen]
> a
to[vbcol=seagreen]
groups[vbcol=seagreen]
have[vbcol=seagreen]
> to
spot[vbcol=seagreen]
> even
is
>
http://www.microsoft.com/downloads/d...displaylang=en
>
http://msdn.microsoft.com/library/de...tml/sp04f9.asp[vbcol=seagreen]
mode.[vbcol=seagreen]
RAID[vbcol=seagreen]
> using
are[vbcol=seagreen]
and[vbcol=seagreen]
100[vbcol=seagreen]
queues[vbcol=seagreen]
sends[vbcol=seagreen]
> queue
> limit
> some
> periods.
and[vbcol=seagreen]
> pegging
> period
we[vbcol=seagreen]
during[vbcol=seagreen]
we
>
|||John,
We have upgraded our indexing/searching server to a dual-CPU box with 4 GB of
memory. We also shifted some of our application programs to a separate
machine. Things are running better. We decided to stick with SQL 2000 for
now.
On 4-Nov-2004, "John Kane" <jt-kane@.comcast.net> wrote:
> and then use sp_configure to set SQL Server to use the other CPU to avoid
> the MSSearch service from affecting your normal SQL Server processing. You
> should set the sp_fulltext_service resource_usage level to 5 to allow for
> more concurrent SQL FTS queries.
We attempted to do this today, but the command was not accepted. We typed:
EXEC sp_fulltext_service 'resource_usage' '5'
but it was rejected with a syntax error near "5". What exactly should we type?
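For reference, sp_fulltext_service takes the property name and the value as
two parameters separated by a comma, so the missing comma would explain the
syntax error:
EXEC sp_fulltext_service 'resource_usage', 5
-- Omitting the value reports the current setting:
EXEC sp_fulltext_service 'resource_usage'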
Also, I need some guidance about setting memory usage for the SQL server and
mssearch. On the new server, I left the SQL memory setting at "Automatic".
Right now the SQL server is using about 1.5 GB of memory and mssearch is using
about 20 MB. mssearch is paging heavily and the SQL server is having few if
any page faults. We want to increase the memory space used by mssearch to
reduce the paging. I know how to allocate a static memory space for the SQL
server, but I don't know how to adjust the space used by mssearch. What do you
suggest?
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com (decision tree modeling)
http://www.nlreg.com (nonlinear regression)
http://www.LogRover.com (web traffic analysis)
http://www.NewsRover.com (Usenet newsreader)