
Citrix Provisioning Services Design Part 1

Nowadays installing most software products is not that difficult anymore; designing the whole environment around those products, however, is still not that easy. Logically a design should be adjusted to the current infrastructure and to the requirements and wishes of the organization, so in my opinion a design will never be 100% identical for two different infrastructures. A design is never simply wrong; however, you should always be able to provide the arguments why a decision has been made. In this article (series) I will describe and discuss the PVS components you should consider when creating a PVS design.

 

Hardware

I think all technicians/engineers love talking and discussing hardware. However, hardware discussions have become less important now that systems like Citrix Provisioning Services (PVS) typically run on hypervisor technologies. Sizing the hardware of the VM is of course still an important design consideration, but I won’t go into detail in this article, because I will release a separate article (series) where the hardware is covered in detail. Follow my site for that upcoming series.

Farm

A farm represents the highest level within a PVS infrastructure. Several settings are made at farm level, which are then reflected on the components in the farm. The advice is to use a single farm where possible, to keep maintenance and administration simple. However, there are several arguments for considering more farms, such as government rules/laws, separated IT departments (where the PVS farm Delegation of Control cannot accommodate the IT organization model), specific farm settings, or no (or really slow) communication within the network (PVS servers are constantly communicating with the database). If your organization does not have these conditions, use one PVS farm.

Site

Within a farm at least one Site is available, and it’s possible to create more. Each Site contains other PVS components like PVS server(s), Device Collection(s), vDisk Update Management, Views and more, and requires at least one PVS server and one Device Collection. Each Site is its own entity: a PVS server can only stream to the Target Devices that are available in the Device Collections within its own Site. In other words, a PVS server cannot facilitate Target Devices in other Sites. More Sites will increase the complexity, so again the advice is to limit the number of Sites. Considerations for setting up additional Sites are: delegation of control (administration), separation of streaming servers / network locations, different vDisk Pools and different MAK credentials.

Device Collections

Within a Site one or more Device Collections can be created. A Device Collection is a group of PVS Target Devices (clients) that have similar properties. This can be based on geographical location, subnet ranges, different departments and/or the role of the Target Device (for example XenApp 6.5 servers, Windows 8 XenDesktop 7.5 VDAs or Windows 7 XenDesktop 7.1 VDAs). Separation can also be done for Delegation of Control of the Target Devices. It’s advisable to create a Device Collection based on the assignment of the vDisk to Target Devices: you can drag a vDisk to a Device Collection, so all Target Devices in that collection will have this vDisk assigned, as the sketch below illustrates.
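
To make that grouping rule concrete, here is a minimal Python sketch. This is purely an illustrative model, not the PVS SDK; the device names, roles and vDisk file names are made-up examples.

    # Illustrative model of Device Collections: group Target Devices by role
    # and assign one vDisk per collection (all names are hypothetical).
    from collections import defaultdict

    target_devices = [
        {"name": "XA65-001", "role": "XenApp 6.5"},
        {"name": "XA65-002", "role": "XenApp 6.5"},
        {"name": "W7VDA-001", "role": "Windows 7 XenDesktop 7.1"},
    ]

    # One collection per role; one vDisk assigned to the whole collection.
    collections = defaultdict(list)
    for device in target_devices:
        collections[device["role"]].append(device["name"])

    vdisk_per_collection = {
        "XenApp 6.5": "vdisk-xa65.vhd",
        "Windows 7 XenDesktop 7.1": "vdisk-w7-vda.vhd",
    }

    for role, devices in collections.items():
        print(f"Collection '{role}' -> {vdisk_per_collection[role]}: {devices}")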

Views

The Views component is available both at Site level and at farm level. Views offer another representation of the Target Devices than the Device Collection representation: a View can consist of Target Devices from multiple Device Collections and, when using Views at farm level, even from different Sites. The pity of Views is that they are not built dynamically; devices must be added one by one, so it can be complicated to keep Views up to date (the small sketch below shows the difference). A View is an optional component, so if there are no requirements for it you can skip creating Views.
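
A small, purely illustrative Python sketch of why static Views go stale (collection and device names are made up): a View is a hand-maintained list, while a dynamic grouping would be a filter that is re-evaluated on demand.

    # A View is a static, hand-maintained device list; it does not follow
    # later changes in the collections it was built from (hypothetical data).
    collections = {
        "Amsterdam": ["AMS-001", "AMS-002"],
        "Rotterdam": ["RTM-001"],
    }

    # Static View: devices were added one by one at creation time.
    view_all_sites = ["AMS-001", "AMS-002", "RTM-001"]

    # A new device is added to a collection afterwards...
    collections["Rotterdam"].append("RTM-002")

    # ...a dynamic filter would pick it up, but the static View does not.
    dynamic = [d for devices in collections.values() for d in devices]
    print("Static view :", view_all_sites)   # misses RTM-002
    print("Dynamic list:", dynamic)          # includes RTM-002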

Store (Image Location)

It’s advisable that each Site has its own Store (technically you can assign the same Store to multiple Sites, but you will get strange behavior when using the same vDisk (names) in multiple Sites). A Store is a logical name for the configured storage location. The storage location can be based on local storage, a CIFS share (UNC path) or a shared LUN. Using a CIFS share or shared LUN has the advantage that the vDisks are only stored at one location, but it requires additional rights for the streaming service to access the vDisks (a service account). Also, the data travels the network twice: first from the CIFS share/LUN to the PVS server, followed by the streaming process from the PVS server to the Target Device(s). Local storage logically does not require additional rights and the data is only sent over the network once. However, the vDisks must then be stored on each PVS server, which logically requires more disk space. Also, PVS does not provide a replication component, so you need to set up something yourself. That can be a simple robocopy script, a DFS-R infrastructure or something in between, like the sketch below.
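
As a minimal sketch of such a do-it-yourself replication (plain Python instead of robocopy; all paths are assumptions to adjust to your environment), this copies new or changed vDisk files from a master location to the local Store of each PVS server:

    # Naive vDisk replication sketch: copy new/changed files from a master
    # store to each PVS server's local store (all paths are hypothetical).
    import os
    import shutil

    MASTER_STORE = r"\\fileserver\pvsstore"
    LOCAL_STORES = [r"\\pvs01\d$\Store", r"\\pvs02\d$\Store"]

    for store in LOCAL_STORES:
        for fname in os.listdir(MASTER_STORE):
            if not fname.lower().endswith((".vhd", ".vhdx", ".avhd", ".pvp")):
                continue  # only vDisk-related files
            src = os.path.join(MASTER_STORE, fname)
            dst = os.path.join(store, fname)
            # Copy only when the file is missing or the master copy is newer.
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)
                print(f"Replicated {fname} -> {store}")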

You also need to determine how much storage you require for the PVS Store. Most important are the number of vDisks and the size of those vDisks, but also take into account storage space for updates (versions) of the vDisks; a back-of-the-envelope calculation follows below.
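
A simple sizing helper in Python; all input values below are placeholder assumptions to replace with your own figures, not Citrix guidance.

    # Rough store sizing: base vDisks plus headroom for versions/updates.
    vdisk_count = 4          # number of vDisks in the store (assumed)
    vdisk_size_gb = 40       # average size per vDisk (assumed)
    versions_per_vdisk = 3   # retained update versions (assumed)
    version_size_gb = 5      # average differencing disk per version (assumed)

    store_gb = vdisk_count * (vdisk_size_gb + versions_per_vdisk * version_size_gb)
    print(f"Estimated store size: {store_gb} GB")  # 4 * (40 + 3*5) = 220 GB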

Database

PVS uses a SQL database to store its data. The most recent SQL Server versions are supported, including all available editions (Express, Workgroup, Standard and Enterprise). PVS has built-in support for Microsoft SQL mirroring, where PVS will fail over automatically to the mirror database if the primary database is not available. PVS also includes Offline Database Support functionality, which arranges that PVS keeps functioning if the database(s) is/are not available. However, Citrix recommends using this functionality only in stable production environments, in other words a stable SQL environment that is normally always available. On Citrix eDocs the exact database usage is documented, so you can easily calculate the required size of the database; in an environment of around 1420 Target Devices the calculation came to 52 MB for the whole database. A hedged version of such a calculation is sketched below.
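
As an illustration of such a calculation in Python: the base size and per-object growth figures below are placeholder assumptions, so take the real values from Citrix eDocs before relying on the outcome.

    # Rough database sizing helper. All figures are placeholders; substitute
    # the real per-object values published on Citrix eDocs.
    base_mb = 20.0            # empty database plus farm/site overhead (assumed)
    per_device_kb = 10.0      # growth per Target Device record (assumed)
    per_vdisk_kb = 2.0        # growth per vDisk record (assumed)

    def estimate_db_mb(devices, vdisks):
        return base_mb + (devices * per_device_kb + vdisks * per_vdisk_kb) / 1024.0

    # Prints roughly 34 MB with these placeholder values; the point is that
    # even ~1420 devices keeps the database small.
    print(f"Estimated database size: {estimate_db_mb(1420, 4):.0f} MB")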

(vDisk) Load Balancing

PVS offers load balancing at two stages: first when a Target Device boots, and secondly when a Target Device has lost its connection with a PVS server (fault tolerance). The default mode at both stages is that the PVS infrastructure determines which server has the least load (based on the number of connected Target Devices).

PVS offers two additional load balancing mechanisms, which can be configured per vDisk. With these techniques you could use one Site and still accomplish that Target Devices only connect to nearby Provisioning Services (and possibly to other PVS servers if the nearby servers are not available). The first mechanism is called Best Effort. When a Target Device connects to the PVS infrastructure, the first step is to determine whether there are PVS server(s) available within the same subnet as the Target Device. If one server is available, the Target Device will be connected to this server; if there are more, the PVS server with the least load will be picked. If there is no server available in the subnet, the least busy PVS server within the whole Site will be used.

The second mechanism, called Fixed, also determines whether there are PVS server(s) available within the subnet. If there is one, it will be assigned; if there are more servers in the same subnet, the least busy one will be used. But if there is no PVS server available in the subnet, the Target Device will not get connected to a PVS server at all (just as if no PVS server were available in the Site). Using Best Effort or Fixed requires that the PVS server(s) and the Target Devices share subnets. These advanced load balancing mechanisms can be used to keep the PVS traffic within one data center, one rack or one enclosure; the sketch below models this selection logic.
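
Below is a minimal Python sketch of the three selection modes as described above. The server names, addresses and connection counts are made up, and this models the behavior rather than the actual PVS implementation.

    import ipaddress

    # Each PVS server: streaming address and current number of connected devices.
    servers = [
        {"name": "PVS01", "ip": "10.1.1.10", "connections": 120},
        {"name": "PVS02", "ip": "10.1.1.11", "connections": 95},
        {"name": "PVS03", "ip": "10.2.1.10", "connections": 40},
    ]

    def pick_server(device_ip, subnet_prefix, mode):
        """Model of PVS vDisk load balancing.

        mode: 'least_busy'  - ignore subnets, pick least connections in the Site
              'best_effort' - prefer the least busy server in the device's
                              subnet, fall back to the whole Site
              'fixed'       - only servers in the device's subnet qualify
        """
        device_net = ipaddress.ip_network(f"{device_ip}/{subnet_prefix}",
                                          strict=False)
        local = [s for s in servers
                 if ipaddress.ip_address(s["ip"]) in device_net]

        if mode == "fixed":
            candidates = local              # no fallback: fail if subnet is empty
        elif mode == "best_effort":
            candidates = local or servers   # fall back to least busy in the Site
        else:
            candidates = servers            # default: least busy in the Site

        if not candidates:
            return None                     # device cannot connect (Fixed mode)
        return min(candidates, key=lambda s: s["connections"])["name"]

    print(pick_server("10.1.1.50", 24, "best_effort"))  # PVS02 (least busy local)
    print(pick_server("10.3.1.50", 24, "best_effort"))  # PVS03 (Site fallback)
    print(pick_server("10.3.1.50", 24, "fixed"))        # None (no local server)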

Summary

In the article series Citrix Provisioning Services Design I’m walking through the PVS components and the considerations that should be made when designing/setting up a PVS infrastructure. In this first part I discussed the PVS Farm, Site, Device Collection, View, Store, Database and Load Balancing.