With the release of SP1 for XenClient a few weeks ago, I thought this would be a good topic for my first post. XenClient is a Type 1 client hypervisor developed by Citrix that allows you to run multiple virtual machines (VMs) simultaneously on a single laptop. Alongside XenClient there is also a “Synchronizer” appliance that lets you upload, download, and back up VM images, and even remotely erase lost client devices. There are a slew of other features and components to this system, so I thought I’d give a brief architectural overview of XenClient and its various features.
Control Domain: The control domain virtualizes hardware for XenClient VMs. All disk, network, audio and USB traffic goes through the control domain to and from each VM. If you’re familiar with XenServer then you’ll know this as Dom 0.
Service VM: The service VM lets you manage XenClient VMs. Using the service VM you can create, modify, delete and even upload VMs to the Synchronizer. The current service VM for XenClient runs “Citrix Receiver for XenClient” and lets you view and operate each VM you have running on XenClient. More service VMs are planned to add additional functionality to XenClient in the future.
GPU Passthrough: As the name implies, GPU passthrough gives a specified VM direct access to the GPU, without the hypervisor or control domain acting as a go-between. This feature allows your VM to experience the full graphical capabilities of your hardware just as if it were installed on bare metal. See here for a demo of this feature. Currently this feature is “experimental” and can only be enabled on one VM per XenClient device. Citrix has stated that you will be able to enable it on multiple VMs in the future.
AMT: Intel Active Management Technology is a hardware-based remote administration tool that lets you track assets, power client devices on and off, and troubleshoot issues with XenClient VMs or XenClient itself. See here for a good demo on this.
Secure Application Sharing: The best way to describe this feature is to say that it’s basically XenApp for your local XenClient VMs. It allows you to work in one VM while using applications installed on another VM. You can publish applications from one or more VMs (known as “application publishing VMs”) and “subscribe” to them via Citrix Dazzle on any VM you’ve configured for application subscription (known as the “application subscribing VM”). Just like XenApp, any application you subscribe to and then launch actually runs and executes on the publishing VM and is merely displayed on the subscribing VM (Citrix TV has a good demo of this feature here). A configurable application spidering process runs on each application publishing VM to discover the applications that will be viewable in Citrix Dazzle on the application subscribing VM. To configure this process, you’ll have to edit an XML file at the following location:
C:\Documents and Settings\All Users\Application Data\Citrix\Xci\Applications\XciDiscoveryConfig.xml
Below is short section of what’s included in this file:
<DiscoveryPath Enabled="true" Recurse="true" Wildcard="*.lnk">C:\ProgramData\Microsoft\Windows\Start Menu</DiscoveryPath>
<DiscoveryPath Enabled="true" Recurse="true" Wildcard="*.lnk">C:\Users\Administrator\AppData\Roaming\Microsoft\Windows\Start Menu</DiscoveryPath>
<DiscoveryPath Enabled="true" Recurse="true" Wildcard="*.msc">C:\Windows\system32</DiscoveryPath>
<Whitelist IgnoreCase="true">^.:\\Program Files\\Internet Explorer\\iexplore.exe</Whitelist>
With the exception of the references to perfmon, these are the default configurations for this section of the file. Perfmon on Windows 7 is located at “C:\Windows\system32\perfmon.msc”. As you can see, I’ve added its parent directory in the “DiscoveryPaths” section of the file and also whitelisted the .msc specifically in the “Whitelists” section. Now perfmon is ready to use in the application subscribing VM. You can follow a similar procedure to make any application or group of applications usable in your application subscribing VMs. This feature is currently “experimental”.
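To make that concrete, the two perfmon additions would look something like the following sketch. The DiscoveryPath entry is the one shown above; the Whitelist regex is my own, modeled on the default iexplore.exe entry, so treat the exact escaping as an assumption and adjust it to your file:

```xml
<!-- DiscoveryPaths section: spider C:\Windows\system32 for .msc snap-ins -->
<DiscoveryPath Enabled="true" Recurse="true" Wildcard="*.msc">C:\Windows\system32</DiscoveryPath>

<!-- Whitelists section: allow perfmon.msc through (regex pattern is illustrative,
     patterned after the default iexplore.exe entry) -->
<Whitelist IgnoreCase="true">^.:\\Windows\\system32\\perfmon\.msc</Whitelist>
```

After saving the file, the spidering process should pick up perfmon and surface it in Dazzle on the subscribing VM.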
Intel TXT: The role Trusted Execution Technology plays in the XenClient architecture is to cryptographically checksum the XenClient installation at every boot. In more basic terms, its function is to ensure that the hypervisor hasn’t been tampered with while offline. This feature is currently unsupported with XenClient.
TPM: TXT checksums are stored in the Trusted Platform Module. The encryption key is sealed by the TPM and only released if the checksums match. Like TXT, this feature is currently unsupported.
Synchronizer: Synchronizer is an appliance that allows you to centrally manage and deploy virtual machines in your XenClient environment. In its current release, Synchronizer runs exclusively on XenServer, and all management and configuration of the appliance is done through a web front-end. VMs deployed by Synchronizer will, depending on how you’ve configured them, communicate periodically with the appliance over HTTPS. Some examples of this communication include checking for new images issued to the user, checking for updates to existing images, or verifying that a “kill pill” hasn’t been issued for any VM. Synchronizer will even synchronize your XenClient password to match your Active Directory password, even though the XenClient device itself isn’t part of Active Directory.
Through the use of snapshots Synchronizer can, in conjunction with XenClient, provide you with the capability of downloading, uploading or even backing up your XenClient VMs. To learn more about this process I highly recommend this article.
Dynamic Image Mode: If you’ve worked with desktop virtualization before, then the concept of “layering” desktop images is already familiar to you: you have an operating system, with applications streamed/virtualized on top of that, and user settings redirected/streamed on top of those. What’s interesting about Dynamic Image Mode VMs is that each of these layers is now a separate .vhd file, with the three together comprising one operating system.
Unlike static mode VMs, the Dynamic Image Mode OS layer is not persistent across reboots; those familiar with VDI should recognize this behavior as well. Any changes made to the base OS are wiped clean upon each reboot. The “Application” and “Documents and Settings” layers are redirected to the appropriate .vhd files through the use of junction points. If you are using Windows 7, then “C:\Program Files\Citrix\AppCache” is redirected to the “Application” .vhd and “C:\Users” is redirected to the “Documents and Settings” .vhd. Unlike the OS .vhd, the Application and Documents and Settings .vhds are persistent across reboots. When backing up a Dynamic Image Mode VM to Synchronizer, only the “Documents and Settings” .vhd is backed up. Updates to a Dynamic Image Mode VM update the OS .vhd only. For more on this, I refer you to this article and once again to the article I mentioned before. Dynamic Image Mode VMs are currently “experimental” for XenClient.
So there you have it: XenClient in a nutshell. While several key features of XenClient are currently “experimental”, I’m personally very excited about the future of this technology. The SP1 release already includes a number of improvements: support for .vhd images created with XenConvert, support for images streamed from Citrix Provisioning Server, faster boot times, the ability to boot directly to a VM and more! A good rundown of some of the other important improvements in SP1 can be found here. If you’d like to learn more, I’d suggest reading the XenClient and Synchronizer user/admin guides. Lastly, I’ll refer you to this video from Synergy 2010 that goes into a good amount of technical detail on the different features of XenClient.