Backup and restore of FAST Search for SharePoint 2010

A colleague asked me a question: if FAST Search for SharePoint 2010 is fully restored to an earlier point in time, what happens when FAST Search starts its next incremental crawl? Will FAST Search look at the content databases, find the record of the last crawl, and index new or changed items? Will it detect that the index is now inconsistent with the actual content? Or will it simply run a full crawl again?

 

Some Basics

===================

FAST Search Server 2010 for SharePoint supports several indexing connectors. They can be divided into three types:

· The Microsoft SharePoint Server 2010 indexing connectors and crawling framework (Content SSA)

· Federated search connectors

o Federated search connectors enable you to pass a query to a target system and display results returned from that system without actually crawling that content.

· The FAST Search Server 2010 for SharePoint specific indexing connectors

o FAST Search Web Crawler

o FAST Search JDBC Connector

o FAST Search Lotus Notes Connector

Based on this, we can see that only the Content SSA and the FAST-specific indexing connectors actually crawl items; federated search connectors do not.

 

How does FAST Search for SharePoint crawl items?

====================

For the specific indexing connectors, incremental crawls mostly rely on checksum-based change detection. This means that even if you restore FAST Search to a previous recovery point, the connector still compares checksums to decide whether an item has changed relative to the state it has on record. After one incremental crawl following the restore, the index is correct again for users' queries, so there is no lasting impact for this type.
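To make the checksum idea concrete, here is a minimal sketch in Python (purely conceptual, not FAST's actual implementation; names such as compute_checksum and crawl_incremental are made up for illustration). The point is that the decision to re-feed an item depends only on comparing the item's current checksum with whatever checksum the connector has on record, so after a restore the next incremental crawl automatically re-feeds everything that differs from the restored state.

    import hashlib

    def compute_checksum(content: bytes) -> str:
        """Hypothetical content checksum, e.g. a digest of the raw item bytes."""
        return hashlib.sha1(content).hexdigest()

    def crawl_incremental(source_items, last_crawl_state):
        """Return the items that must be (re)fed to the indexing pipeline.

        source_items:     {item_id: content_bytes} read from the source system
        last_crawl_state: {item_id: checksum} the connector has on record; after a
                          full restore this is simply the older, restored state
        """
        changed = {}
        for item_id, content in source_items.items():
            checksum = compute_checksum(content)
            if last_crawl_state.get(item_id) != checksum:
                changed[item_id] = content              # new or modified vs. recorded state
                last_crawl_state[item_id] = checksum
        return changed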

For the Content SSA, we need to go a bit deeper to explain.

For this type of connectors, crawl can be divided into two steps:

1. Gathering

2. Feeding items to the ‘filter’ component.

SharePoint 2010 and FAST Search Server 2010 for SharePoint use the same process for gathering SharePoint internal content. The difference is which component processes an item after the search engine has gathered it (see the sketch after this list):

· For SharePoint Search, iFilters are used.

· For FAST Search Server 2010 for SharePoint, the FAST Content Plug-in feeds each batch of gathered items to the FAST Search pipeline via the FAST Content Distributor, where the items are filtered and processed into the index.
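The split can be pictured with a small conceptual sketch (the function names below are invented for illustration and do not correspond to real SharePoint or FAST APIs): the gathering step is shared, and the attached search product only determines which processing callback receives the gathered batch.

    from typing import Callable, Dict, List

    Item = Dict[str, str]                      # e.g. {"url": ..., "content": ...}
    Processor = Callable[[List[Item]], None]   # processing backend for gathered batches

    def gather(start_urls: List[str]) -> List[Item]:
        """Step 1: gathering. The same for SharePoint Search and FAST Search:
        enumerate the content source and fetch the items (stubbed here)."""
        return [{"url": u, "content": "<document from %s>" % u} for u in start_urls]

    def process_with_ifilters(batch: List[Item]) -> None:
        """SharePoint Search path: parse each item locally with iFilters (stub)."""
        for item in batch:
            print("iFilter parsed:", item["url"])

    def process_with_fast_plugin(batch: List[Item]) -> None:
        """FAST path: the Content Plug-in submits the whole batch to the FAST
        Content Distributor, which runs the item processing pipeline (stub)."""
        print("Submitted batch of %d items to the FAST Content Distributor" % len(batch))

    def crawl(start_urls: List[str], processor: Processor) -> None:
        """Step 2 depends on which search product is attached to the crawler."""
        processor(gather(start_urls))

    # crawl(["http://portal/sites/a"], process_with_fast_plugin)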

 

Now we will focus on the gathering part.

During an incremental crawl, the crawler passes a change log cookie (which it received from the WFE during the previous crawl) to the WFE. This change log cookie contains the GUID of the applicable content database and a row ID from the EventCache table.

With this row ID, the WFE looks up the EventCache table, determines which items have changed since the last crawl, and then responds to the crawler with the list of items that need to be crawled.
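The cookie handshake can be modelled with a few lines of Python (a simplified sketch only; the real EventCache schema and change log API are richer than this, and the type and function names below are assumptions): the WFE returns every change event whose row ID is greater than the one embedded in the cookie, then hands back a new cookie pointing at the latest row.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ChangeLogCookie:
        content_db_guid: str   # which content database this cookie applies to
        last_row_id: int       # last EventCache row the crawler has already seen

    @dataclass
    class ChangeEvent:
        row_id: int
        item_url: str

    def get_changes(event_cache: List[ChangeEvent],
                    cookie: ChangeLogCookie) -> Tuple[List[ChangeEvent], ChangeLogCookie]:
        """Simplified WFE behaviour: return all events newer than the cookie's row ID,
        plus a fresh cookie pointing at the newest row for the next crawl."""
        new_events = [e for e in event_cache if e.row_id > cookie.last_row_id]
        new_last = max((e.row_id for e in event_cache), default=cookie.last_row_id)
        return new_events, ChangeLogCookie(cookie.content_db_guid, new_last)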

 

Imagine we have the following event sequence:

· Incremental crawl 1 -> FAST Search full backup -> ItemA changed -> Incremental crawl 2 -> FAST Search full restore -> Incremental crawl 3

Incremental crawl 3 will not crawl ItemA: the crawler's change log cookie was already advanced by incremental crawl 2, but the restored FAST index still reflects the state before ItemA was changed. This brings inconsistency.
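The sequence can be replayed with a toy simulation (row IDs and item names are made up, and it assumes, as described above, that the crawler's cookie is not rolled back by the FAST restore while the index is):

    # Toy replay of the event sequence above; not real SharePoint/FAST behaviour.
    event_cache = []     # (row_id, item_id, version) change events held by the WFE
    index = {}           # item_id -> version currently searchable in FAST
    cookie = 0           # crawler's last-seen EventCache row ID

    def incremental_crawl():
        global cookie
        for row_id, item_id, version in event_cache:
            if row_id > cookie:
                index[item_id] = version     # item is fed to FAST and (re)indexed
        cookie = max((r for r, _, _ in event_cache), default=cookie)

    event_cache.append((1, "ItemA", "v1"))
    incremental_crawl()                      # incremental crawl 1 indexes ItemA v1
    index_backup = dict(index)               # FAST Search full backup

    event_cache.append((2, "ItemA", "v2"))   # ItemA changed
    incremental_crawl()                      # incremental crawl 2 indexes ItemA v2

    index = dict(index_backup)               # FAST Search full restore; the cookie
                                             # in the crawl database is NOT rolled back
    incremental_crawl()                      # incremental crawl 3: cookie is already at
                                             # row 2, so the ItemA change is never re-fed
    print(index)                             # {'ItemA': 'v1'} -> stale, inconsistent index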

Another thing to consider is that the EventCache table is cleaned up by a SharePoint timer job. If the recovery point is from a long time ago, this is another factor that might cause inconsistency.

 

After this explanation, the conclusion is as follows:

The index of SharePoint site content may become inconsistent with the actual content, while the indexes built by the other connector types should be fine.

The way to avoid the inconsistency is to run a full crawl after the full restore. Search remains usable in the meantime, and once the full crawl completes, the index is fully consistent again.

 

Reference

==================

Full backup and restore (FAST Search Server 2010 for SharePoint)

http://technet.microsoft.com/en-us/library/ff460221(v=office.14).aspx#BKMK_FullRestore

SP2010 Search *Explained: Crawling

http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/10/30/sp2010-search-explained-crawling.aspx

SharePoint 2010/2013: “Change Log” Timer Job is not cleaning up Expired entries in EventCache Table

http://blogs.msdn.com/b/spses/archive/2013/05/02/sharepoint-2010-2013-change-log-timer-job-is-not-cleaning-up-expired-entries-in-eventcache-table.aspx

Plan for crawling and federation (FAST Search Server 2010 for SharePoint)

http://technet.microsoft.com/en-us/library/ff383278.aspx