
  • A Tour of SharePoint 2013 Part 1
    01 February 2013
    9:59 AM

    Category:Portals and Collaboration
    Post By:AvePointSA

    -This is a contribution from Tyler Bithell, Chief Technical Architect – Portals at B2B Technologies.

    This post covers Search after the service application has been configured… see instructions for configuring the service application here…

    This is part one of a series of posts that will cover SharePoint 2013 Search in detail.  One important note right off is that FAST Search has been folded into SharePoint 2013 Search.  Another important note is that Search is very resource intensive, so you are going to want to meet the minimum specs or better in a production environment.

    Also, whatever you do, DO NOT set your virus scanner to scan the search data folders in real time, and do not use dynamic RAM allocation for search servers.  They couldn't have been clearer about this at the conference.  Another tip I picked up from the conference is that 10 million items is the threshold for moving from a single-server environment to one in which the search components are distributed across multiple servers.

    To access the Search Service Application, click Manage service applications under Application Management on the Central Admin home page.

    From there, click on your Search Service Application's name to go to the Search Administration screen.
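If you prefer the SharePoint 2013 Management Shell, you can grab the same service application from PowerShell.  A minimal sketch (the display name "Search Service Application" is an assumption; substitute whatever name shows in Central Admin):

```powershell
# Retrieve the Search Service Application by name (name is an assumption --
# use the name shown under Manage service applications if yours differs)
$ssa = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$ssa | Select-Object Name, Status, SearchCenterUrl
```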


    The first thing to note is that you can change your default content access account by clicking on the account name, which brings up an edit dialog…
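The same change can be scripted.  A hedged sketch using the Content administration object; the account name and password below are placeholders:

```powershell
# Change the default content access (crawl) account -- the account name
# and password here are placeholders, not real values
$ssa = Get-SPEnterpriseSearchServiceApplication
$content = New-Object Microsoft.Office.Server.Search.Administration.Content($ssa)
$password = ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force
$content.SetDefaultGatheringAccount("CONTOSO\svc_crawl", $password)
```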


    You can change the contact email address for crawls by clicking on the current email address.

    You can add a proxy server for federation by clicking None next to Proxy server for crawling and federation…  I believe this is something you would need to set up if you wanted Hybrid Search between SharePoint on-premises and SharePoint Online, but that is something I need to look into further, and set up for myself, in order to confirm.


    You can disable search alert status and query logging, and you can change your global search center URL.

    If you scroll down, you are presented with the Search Application Topology.


    The Search Application Topology lists your Search Components and databases.

    Starting from the top…

    The Admin component runs all the system processes that search needs in order to function.  You can have multiple Admin components in your farm, but only one can be active at any given time.

    The Crawl component crawls content based on settings stored in the crawl database(s).  You can add crawl components to increase crawl performance.

    The Content processing component processes crawled items before passing them on to the index component.  This is where documents are parsed, properties are mapped, and so on.

    The Analytics processing component handles search and usage analytics.

    The Query processing component handles all analysis and processing of search queries and results.

    Index partitions are a means of dividing up the index.  They are stored on disk and collectively make up the Search Index.

    Index replicas are exactly what they sound like: really, they are index partition replicas.  Each replica has an index component attached to it.  Creating replicas is a way to achieve fault tolerance, in that you have two or more replicas of an index partition living on different servers, so that if one server goes down, that portion of the index is still available.
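All of the components described above can be listed from PowerShell, which is handy for checking which server hosts what.  A sketch:

```powershell
# List the components in the active search topology and the servers they run on
$ssa = Get-SPEnterpriseSearchServiceApplication
$topology = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
Get-SPEnterpriseSearchComponent -SearchTopology $topology |
    Select-Object Name, ServerName
```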

    The Administration database is where all of your configuration data is stored.  You will have one and only one of these.

    The Analytics reporting database stores your search usage analytics results.

    The Crawl database is your crawl history store and crawl operation manager.  You can have multiple crawl databases and each one can have one or more crawl components associated with it.

    Finally you'll have a Link database which stores data extracted by your content processing component as well as click-through data.

    Additional components and databases must be created via PowerShell, and that subject warrants its own post.
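As a taste of what that post will cover: the general pattern for topology changes is to clone the active topology, add components to the clone, and then activate the clone.  A hedged sketch (the server name "SERVER2" is a placeholder):

```powershell
# General pattern for topology changes: clone, modify, activate
$ssa = Get-SPEnterpriseSearchServiceApplication
$active = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
$clone = New-SPEnterpriseSearchTopology -SearchApplication $ssa -Clone -SearchTopology $active

# Add a second crawl component on another server ("SERVER2" is a placeholder)
$server = Get-SPEnterpriseSearchServiceInstance -Identity "SERVER2"
New-SPEnterpriseSearchCrawlComponent -SearchTopology $clone -SearchServiceInstance $server

# Activate the modified clone
Set-SPEnterpriseSearchTopology -Identity $clone
```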

    There is quite a bit of information out on TechNet, and some of the info in this post came from diagrams that can be found at…

    This ends part one… I'm hoping to have time to get the other parts of this up soon, so stay tuned.
