A Geeks Guide to Peer Cache in ConfigMgr Current Branch

Feb 09 2017

In ConfigMgr Current Branch v1610, Microsoft added a new feature called Peer Cache. It had previously made a few guest appearances in the Technical Preview branch, but has now finally made it to the shinier branch, i.e. Current Branch. This feature is designed to help reduce the network impact of delivering content to clients in distributed environments, and it works with the content types that ConfigMgr supports (updates, legacy packages, applications, images, etc.).

Here is a step-by-step guide that shows you how to set up Peer Cache in ConfigMgr Current Branch v1610 or later, along with some background information.

Disclaimer: This is the very first release of Peer Cache in ConfigMgr Current Branch, so please don’t be upset over some of its limitations :)

  

image
ConfigMgr Peer Cache in action: a Windows 10 deployment downloading the WIM file from a peer rather than the distribution point. Very shiny.

The Guide

This guide covers four simple steps to get it going:

  • Step 1 – Create a collection structure
  • Step 2 – Configure Peer Cache Client settings
  • Step 3 – Create boundaries and boundary groups
  • Step 4 – Verifying that it works

But first… a little bit of background.

ConfigMgr Current Branch Peer Cache 101

The way Peer Cache works in ConfigMgr Current Branch v1610 is that you enable, via client settings, which machines in each site are allowed to share content with their friends. These machines are called Peer Cache Sources. Once these machines have the content, other machines in the same boundary group can download the content from their “friends” rather than from a remote DP. You can basically see these clients as extra distribution points :)
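The lookup behavior this post observes can be modeled in a few lines of Python. This is a sketch under my own assumptions, not ConfigMgr's actual code; the locality names (SUBNET, PEER, REMOTE) are the ones that show up in the CAS.log excerpts later in this post:

```python
# Hypothetical model of the content-lookup ranking observed in this post.
# Locality names as they appear in CAS.log: SUBNET (a DP on the client's own
# subnet), PEER (a peer cache source in the same boundary group), and
# REMOTE (any other DP).
RANK = {"SUBNET": 0, "PEER": 1, "REMOTE": 2}

def rank_sources(sources):
    """Order candidate content locations: a same-subnet DP wins over peers,
    and peers win over remote DPs (matching the New York/Chicago behavior
    described later in this post)."""
    return sorted(sources, key=lambda s: RANK[s[1]])

chicago = [
    ("http://cm01.corp.viamonstra.com/sms_dp_smspkg$/ps10007f", "REMOTE"),
    ("http://w10peer-0006.corp.viamonstra.com:8003/sccm_branchcache$/ps100083", "PEER"),
]
print(rank_sources(chicago)[0][1])  # PEER - the Chicago clients pick the peer
```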

Note #1: The content you want to have available for peer caching must be fully deployed to the peer cache sources, so that it's located in their cache, before it becomes available to other clients. But once they have it, there is no need to wait before deploying to the rest. ConfigMgr learns about its new “distribution points” very quickly, within minutes in my testing.

Note #2: Peer caching is done per boundary group, so if a client roams to a new site (new boundary group), it needs to run a hardware inventory first so that ConfigMgr knows about it.

Note #3: Peer Cache does work together with BranchCache, but I did not include BranchCache setup in this guide.

Note #4: Peer Cache in ConfigMgr Current Branch v1610 is a direct replacement for the WinPE Peer Cache feature that was introduced in ConfigMgr Current Branch v1511.

 

Scenario

In my lab, I have two sites, New York (192.168.1.0/24) which has a local DP, and Chicago (192.168.4.0/24) which does not have a local DP.

  • New York:  With the CM01 DP, has five clients: W10PEER-0001 – W10PEER-0005.
  • Chicago: With no DP, has five clients: W10PEER-0006 – W10PEER-0010.

Note: To set up a lab with multiple routed networks, I recommend using a virtual router instead of the typical NAT switch in Hyper-V or VMware. It can be based on either Linux or Windows, and you can find a step-by-step guide here: http://deploymentresearch.com/Research/Post/285/Using-a-virtual-router-for-your-lab-and-test-environment

 

Sites
Showing off my Microsoft Paint skills. :)

Step 1 – Create a collection structure

Since you need to deploy content to a few machines (at least one) in each site first, I created a collection structure that looks like this:

• Peer Cache Sources – All Sites: In this collection, I added two machines from Chicago and two machines from New York.
• Peer Cache Clients – New York: Here I added three other machines in the New York site, just for testing.
• Peer Cache Clients – Chicago: Here I added three other machines in the Chicago site, again just for testing.

       

Note: In order for OS Deployment to use a peer for content, you must add the SMSTSPeerDownload collection variable, set to True, to the collection(s) you are deploying the task sequence to. Optionally, you can also add the SMSTSPreserveContent variable to force the machine to keep the packages used during OSD in its cache. If you skip adding the SMSTSPeerDownload variable, the client will always go to a distribution point for the packages.
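The effect of that variable can be sketched like this. The variable names are the real ones from the note above, but the helper function and its logic are a hypothetical illustration, not ConfigMgr's implementation:

```python
# Hypothetical helper illustrating the effect of the SMSTSPeerDownload
# collection variable described above (the function and return values
# are a sketch, not ConfigMgr's actual behavior model).
def osd_content_source(variables):
    """Where an OSD step will look for content, given collection variables."""
    if str(variables.get("SMSTSPeerDownload", "")).lower() == "true":
        return "peer-or-dp"  # peers are considered for content
    return "dp-only"         # without the variable, always use a DP

print(osd_content_source({"SMSTSPeerDownload": "True"}))  # peer-or-dp
print(osd_content_source({}))                             # dp-only
```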

       

Collection Variables
Configuring OS Deployment to use content from a peer (if available).

         

Peer-0003
Collection structure for peer caching created.

           

Step 2 – Configure Peer Cache Client settings

To make ConfigMgr clients share content with others, they must be configured to do so via client settings. You also need to extend the ConfigMgr client cache (see below).

           

Warning: Do not enable peer caching on all your clients; just pick a few in each site. The current implementation does not work very well with all clients configured for it. The main reason is that every peer that has the requested content gets returned in a content lookup, and if the first one happens to be offline, it takes quite a long time (currently 7.5 minutes) for a client to fail over to the next peer. Hopefully this behavior will change as peer caching is improved.
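To see why this matters, here is a quick back-of-the-envelope calculation in plain Python. The 7.5-minute figure is the one quoted in the warning above; the rest is simple arithmetic:

```python
# Worst-case wait when the first few returned peers are offline.
# The ~7.5 minute failover time is the figure quoted in the warning above.
FAILOVER_MINUTES = 7.5

def worst_case_wait_minutes(offline_peers_tried):
    """Minutes spent failing over before reaching a working content source."""
    return offline_peers_tried * FAILOVER_MINUTES

print(worst_case_wait_minutes(4))  # 30.0 - half an hour lost on 4 offline peers
```

With every client a peer cache source, a handful of powered-off machines in the lookup results is enough to stall a deployment badly, which is why picking a few always-on machines per site is the safer configuration.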

           

Coolness: Behind the scenes, the client setting is named CCM_SuperPeerClientConfig, and you will also see SuperPeer mentioned in the log files.

           

1. In the Administration workspace, in the Client Settings node, create a new custom client device setting named Peer Cache Sources.

2. In the Peer Cache Sources dialog box, select the Client Cache Settings check box, and then in the left pane, select Client Cache Settings.

3. In the Custom Device Settings pane, set the Maximum cache size to something useful, like 65 GB, and then enable peer caching by setting the Enable Configuration Manager client in full OS to share content policy to Yes.

Note: A better way to set the maximum cache size is to use a configuration item that, via a script, sets it dynamically depending on how much free disk space the machine has. You can find a good example from Heath Lawson (@HeathL17) here: https://blogs.msdn.microsoft.com/helaw/2014/01/07/configuration-manager-cache-management.

4. Deploy the Peer Cache Sources client setting to the Peer Cache Sources – All Sites collection.
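The dynamic-sizing idea from the note in step 3 can be sketched as follows. This is a hypothetical Python illustration of the sizing logic only (the linked example uses a configuration item script instead, and the fraction, floor, and ceiling values here are my own arbitrary assumptions):

```python
import shutil

def suggested_cache_gb(free_gb, fraction=0.25, floor_gb=10, ceiling_gb=100):
    """Give a share of free disk space to the client cache, clamped to a
    sane range. The 25% / 10 GB / 100 GB values are example choices only."""
    return max(floor_gb, min(ceiling_gb, int(free_gb * fraction)))

# Free space on the system drive, in GB ("/" resolves to the current
# drive root on Windows as well).
free_gb = shutil.disk_usage("/").free // 2**30
print(suggested_cache_gb(free_gb))
# e.g. a machine with 260 GB free would get a 65 GB cache
```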

           

Peer-0004
Configuring the client settings for peer caching.

           

Step 3 – Create boundaries and boundary groups

Since peer caching clients find friends within a boundary group only, you need a somewhat decent structure of boundary groups. For simplicity in my testing, I simply created the following boundary groups:

• New York: To which I added the 192.168.1.1 – 192.168.1.254 IP range boundary
• Chicago: To which I added the 192.168.4.1 – 192.168.4.254 IP range boundary
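Boundary-group matching for these two IP ranges can be illustrated with a few lines of Python (a sketch of the matching idea only; ConfigMgr obviously does this server-side):

```python
import ipaddress

# The two IP-range boundaries from the list above.
BOUNDARY_GROUPS = {
    "New York": ("192.168.1.1", "192.168.1.254"),
    "Chicago":  ("192.168.4.1", "192.168.4.254"),
}

def boundary_group_for(ip):
    """Return the boundary group whose IP range contains the client's IP,
    or None if the client is outside every boundary."""
    addr = ipaddress.ip_address(ip)
    for name, (lo, hi) in BOUNDARY_GROUPS.items():
        if ipaddress.ip_address(lo) <= addr <= ipaddress.ip_address(hi):
            return name
    return None

print(boundary_group_for("192.168.4.42"))  # Chicago
```

A client that roams outside every boundary (None here) has no peers to ask, which ties back to Note #2: after moving, the client also needs a hardware inventory before ConfigMgr treats it as part of the new boundary group.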

Peer-0005
My boundary groups in this example.

             

Step 4 – Verifying that it works

Now it's time to verify that it works. In my example, I deployed a 1 GB package to the Peer Cache Sources collection (containing two clients in each site).

Once these clients had the content, I deployed it to the remaining clients in each site and watched what happened by following the CAS.log on each client.

Behavior in New York

You would think that once you enable peer caching, ConfigMgr clients would always use it in the first place, right? Absolutely not :) In New York, since I have a local DP on the same subnet as the peer cache clients, the clients in New York get the content from the real DP, in my case CM01.

Note: This behavior, always using a DP when on the same subnet, is by design. I don't agree with that being very clever, but that's how it is right now. If you want to utilize peer caching to its fullest in ConfigMgr 1610, simply put the DP on its own subnet, without any clients.

Shorthand: In the default configuration, clients in New York get content from the CM01 DP, even though their peer caching friends have the content. This is the interesting line in the log:

Matching DP location found 0 - http://cm01.corp.viamonstra.com/sms_dp_smspkg$/ps10007f (Locality: SUBNET)

image
Clients in New York were downloading packages from the local DP, even though there were Peer Cache Sources available.

Behavior in Chicago

In Chicago, everything works as expected: since there is no local DP in Chicago and the clients are in their own boundary group, the clients get the content from their peer caching friends. Below is a CAS.log example from a client in Chicago, and as you can see, it ranked the peer cache source above the remote CM01 DP (which it also found).

Shorthand: The clients in Chicago are getting content from their peer caching friends. This is the interesting line in the log:

Matching DP location found 0 - http://w10peer-0006.corp.viamonstra.com:8003/sccm_branchcache$/ps100083 (Locality: PEER)
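If you want to follow many clients at once, that interesting CAS.log line is easy to pick apart with a small script. Here is a hypothetical Python sketch (the regex is mine, written against the log lines quoted in this post, not any documented log format):

```python
import re

# Parse the "Matching DP location found" line from CAS.log into
# (url, locality), where locality is e.g. SUBNET or PEER.
LINE_RE = re.compile(r"Matching DP location found \d+ - (\S+) \(Locality: (\w+)\)")

def parse_cas_line(line):
    m = LINE_RE.search(line)
    return (m.group(1), m.group(2)) if m else None

line = ("Matching DP location found 0 - "
        "http://w10peer-0006.corp.viamonstra.com:8003/sccm_branchcache$/ps100083 "
        "(Locality: PEER)")
print(parse_cas_line(line)[1])  # PEER
```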

image
Clients in Chicago download content from their peers.

             

As a final touch, after waiting for 24 hours, you can see the clients reporting their download history, both in ContentTransferManager.log and in the Monitoring workspace, under the Distribution Status / Client Data Sources node.

Note: There is an unsupported/undocumented way to shorten the upload interval for lab and demo scenarios, but until the ConfigMgr team says it's OK to publish those details, I'll wait :)

image
The ContentTransferManager.log uploading the download history.

image
The Client Data Sources node after clients report their download history.

Written by Johan Arwidmark

Happy deployment, and thanks for reading!
/ The Deployment Research team


