Posts Tagged VDI

Key features for VDI storage

One of the biggest trends in IT infrastructure today is dedicated “storage systems” for VDI. I put “storage systems” in scare quotes because many of the vendors making these systems would object to being called a storage system. Regardless, the primary use case driving the sales of many of these systems is as a storage location for VDI. The reason for this is that traditional arrays have proven woefully inadequate to handle the amount and type of IO VDI can generate.

The architecture backing these systems varies greatly, but when you're looking for a dedicated storage solution for your VDI environment, here are the top features I look for:

Speed. This one should be obvious, but any storage system dedicated to VDI needs to be fast. Anyone who’s ever designed storage for a VDI environment can tell you that VDI workloads can generate tremendous amounts of mostly write IO with very ‘bursty’ workload patterns. Traditional storage arrays with active-passive controllers, ALUA architecture and tiered HDD storage weren’t created with this workload in mind. Trying to design a VDI environment on this architecture can become cost- and performance-prohibitive in some cases. Indeed, many businesses are spending 40%-60% of their VDI budget on storage alone.
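To put rough numbers on that, here’s a back-of-envelope calculation in PowerShell. Every figure is an illustrative assumption rather than a measurement from any particular environment, but it shows how quickly the spindle count (and the bill) grows when write-heavy VDI IO hits a traditional RAID array:

```powershell
# Back-of-envelope sizing only -- every figure below is an assumption,
# not a measurement from any particular environment.
$desktops     = 1000
$iopsPerVm    = 25      # steady-state IOPS per desktop (assumed)
$writeRatio   = 0.8     # VDI steady-state is typically write-heavy (assumed)
$raid5Penalty = 4       # backend IOs per front-end write on RAID 5

$frontEnd = $desktops * $iopsPerVm
$backEnd  = ($frontEnd * (1 - $writeRatio)) + ($frontEnd * $writeRatio * $raid5Penalty)

$hddIops = 180          # rough figure for a 15k RPM SAS drive (assumed)
"Front-end IOPS : $frontEnd"
"Back-end IOPS  : $backEnd"
"15k HDDs needed: {0:N0}" -f [math]::Ceiling($backEnd / $hddIops)
```

Swap in your own assessment numbers; the point is that the RAID write penalty multiplies exactly the kind of IO VDI generates most, which is how storage ends up eating half the budget.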

Today, the speed problem is being solved with a variety of methods. RAM is being used as a read/write location (e.g. Atlantis ILIO) for microsecond access times. “All flash” arrays are being purpose-built to hold 100% SSD drives (Invicta, XtremIO, Pure, etc.). Adding to this, a whole host of “converged” compute/storage appliances are popping up utilizing local disk/flash for increased speed and simplicity (Nutanix, Simplivity, VSAN, ScaleIO, etc.). To be clear, each of the systems I’ve mentioned can do more than just VDI; VDI just happens to be a good use case for them. If you’re looking for a place to put your VDI environment, the ability to rapidly process lots of random write IO should be of paramount concern, and you should know that there are currently many ways to address it.

Data reduction. This one will be more controversial, particularly for non-persistent fanboys. Nevertheless, persistent VDI is a fact of life for many VDI environments. As such, large amounts of duplicate data will be written to storage, and data reduction mechanisms become very important as a result. De-duplication and compression are the most effective methods, preferably performed in-line. Again, various solutions from Atlantis to Invicta to XtremIO to Pure all offer these features, but with very different architectures. If you have no persistent desktops, this feature becomes less important. However, data reduction can still be quite valuable in many non-persistent VDI architectures as well; XenDesktop MCS, for example, could greatly benefit from storage with de-duplication. I also find that many of my customers who start out thinking they’ll have only non-persistent desktops quickly discover users who need persistence during the course of their migration. Don’t be surprised by the need for this feature at a later point; plan for it at the beginning and make sure your storage platform has the appropriate data reduction features.
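For a rough sense of why this matters for persistent desktops, here’s a quick capacity sketch; the per-desktop allocation and the dedupe ratio are assumptions for illustration only:

```powershell
# Illustrative capacity math only -- sizes and the dedupe ratio are assumptions.
$desktops     = 1000
$gbPerDesktop = 40          # space allocated per persistent desktop (assumed)
$dedupeRatio  = 10          # 10:1, often quoted for full-clone VDI (assumed)

$logicalGb  = $desktops * $gbPerDesktop
$physicalGb = $logicalGb / $dedupeRatio

"Logical capacity      : {0:N0} GB" -f $logicalGb     # 40,000 GB
"Physical after dedupe : {0:N0} GB" -f $physicalGb    # 4,000 GB
```

Since persistent desktops are mostly copies of the same OS and applications, even conservative dedupe ratios change the economics dramatically.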

Scale. I don’t know how many VDI projects I’ve heard of where storage was purchased to support X users, only for the VDI project to take off faster and at a larger scale than expected. The project then stalls because the storage system can’t handle more than the X users it was designed for, and the business doesn’t have enough budget to purchase another storage system. For this reason, any storage dedicated to VDI should be able to scale both “up” and “out”: “up” to support more capacity and “out” to support more IO. The scaling of the system should be such that it remains one unified system…not multiple systems with a unified control plane. The converged solutions (VSAN, Nutanix, et al.) are great at this, and all flash arrays typically have this capability as well, e.g. Invicta and XtremIO.

Ease of Management. This sounds basic and very obvious, but make sure you evaluate “ease of management” when purchasing any VDI-specific storage solution. The reason is simple: any VDI-specific storage system is bound to have a much different architecture than any arrays you currently have in your environment. The harder it is to manage, the higher the learning curve will be for existing admins. My criterion for determining if a VDI storage system is “easy” to manage is this – “can my VDI admins manage this?” (and that’s no slight to VDI admins!). The management of the system shouldn’t require a lot of legacy SAN knowledge or skillsets. This makes the environment more agile by not relying on multiple teams for basic functions, and it doesn’t burden SAN teams with a disparate island of storage they must learn and manage. Again, many of the converged solutions are great at this, as are some of the newer AFAs.

There are many other important factors in deciding what to look for in a storage solution for your VDI environment. Whatever the architecture, if it doesn’t include the above four features, I’d look elsewhere.

Note: Vijay Swami wrote an excellent article entitled “A buyer’s guide for the All Flash Array Market”. After writing this post, I found it interesting to read his thoughts and note how many of the things he looks for in an AFA are similar to my top features for VDI storage. Regardless, it’s good reading; if you haven’t already, check it out.


Double the Procedure, Double the Price?

In my last post I touched briefly on a claim I’m hearing a lot in IT circles these days.  This claim is often heard in discussions surrounding multi-hypervisor environments and, most recently, in VDI discussions.  The claim in question, at its core, says this – “If you have two procedures to perform the same task, you double your operational expense in performing that task”.  Given the prevalence of this argument, I wanted to focus on it in one post even though I’ve touched on it elsewhere.

As mentioned in my last post, Shawn Bass recently displayed this logic in a debate at VMworld.  The example given is a company with a mixture of physical and virtual desktops.  In this scenario they manage their physical desktops with Altiris/SCCM and use image-based management techniques for their non-persistent virtual desktops.  Since you are using two different procedures to accomplish the same task (update desktops), it is claimed that you then “double” your operational expense.

As I’ve said, in many scenarios this is clearly false.  The only way having two procedures “doubles” your operational cost is if both procedures require an equal amount of time/effort/training/etc. to implement and maintain.  And the odd thing about this example is that it actually proves the opposite of what it claims.  It’s very common for organizations to have physical desktops that they manage differently than their non-persistent virtual desktops.  Are these organizations just not privy to the nuances of operational expenditures?  I don’t think so; in many cases these organizations chose VDI at least in part for easier desktop management.  For many, it’s just easier and much faster to maintain a small group of “golden images” rather than hundreds or thousands of individual images.  So in this example, adding the second procedure of image-based management can actually reduce the overall operational expense.  Now a large portion of my desktops can be managed much more efficiently than they were before; this reduces the overall time and energy I spend managing my desktops and thus reduces my operational expense.
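To make that concrete with some toy numbers (all invented purely for illustration, not taken from any real environment):

```powershell
# Toy numbers only -- invented purely to illustrate the argument.
$sccmHours  = 60   # hours per patch cycle for 1,500 physical desktops via SCCM (assumed)
$imageHours = 8    # hours per cycle to update 4 golden images for 1,000 VDI users (assumed)

# One procedure for all 2,500 desktops (everything managed via SCCM):
$singleProcedure = $sccmHours * (2500 / 1500)

# Two procedures (SCCM for physical, golden images for virtual):
$twoProcedures = $sccmHours + $imageHours

"Single procedure : {0:N0} hours per cycle" -f $singleProcedure   # 100
"Two procedures   : {0:N0} hours per cycle" -f $twoProcedures     # 68
```

Two procedures, yet fewer total hours than one procedure applied to everything; the “double the cost” claim only holds if both procedures cost the same per desktop.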

We see this same logic in a lot of multi-hypervisor discussions as well.  “Two hypervisors, two ways of managing things, double the operational expense”.  When done wrong, a multi-hypervisor environment can fall into this trap.  However, before treating this logic as universally true, you have to evaluate your own IT staff and workload requirements.  Some workloads will be managed/backed up/recovered in a disaster/etc. differently than the rest of your infrastructure anyway, so putting these workloads on a separate hypervisor isn’t going to add to that expense.  The management of the second hypervisor itself doesn’t necessarily “double” your cost either, as in many cases the knowledge your staff already possesses about how a hypervisor works in general translates well into managing an alternate hypervisor.  A lot more could be said here, but in the end, CAPEX savings should outweigh any nominal added OPEX or you’re doing it wrong.

In general, standardization and common management platforms are things every IT department should strive for. Like “best practice” recommendations from vendors, however, we don’t apply them universally.  The main problem with this line of thinking is that it states a generalization as a universal truth and applies it to all situations while ignoring the subtle complexities of individual environments.  In IT, it’s just not that easy.


The Great Persistence Debate

There was a good discussion at VMworld this year between persistent and non-persistent VDI proponents.  The debate spawned from discussions on Twitter surrounding a blog post by Andre Leibovici entitled “Open letter to non-persistent VDI fanboys…”.  Representing the persistent side of the debate were Andre Leibovici and Shawn Bass.  Non-persistent fanboys were represented by Jason Langone and Jason Mattox.  Overall, this is a good discussion, with both sides pointing out some strengths and weaknesses of each position.

So which is the better VDI management model, persistent or non-persistent?  Personally, I think Andre nailed it near the end of the debate: it’s all about use case!  I know that’s the typical IT answer to most questions, but it really is the best answer in many of these “best tech” debates.  What matters to most customers is not which is the “best” but which is the “right fit”.  A Ferrari may be the best car in the world, but it’s clearly not the right fit for a family of four on a budget.  So while it may be fun and entertaining to discuss which is the best, in the real world the most relevant question is ‘which is the right fit given a particular use case?’.  If you have a call center with a small application portfolio, that is an obvious use case for non-persistent desktops (though certainly not the only one).  I agree with the persistence crowd with regard to larger environments that have extensive application portfolios.  The time it takes to virtualize and package all these applications, and the impossibly large amount of software required to go non-persistent for all desktops in such an environment (UEM, app publishing, app streaming, etc.), makes persistence a much more viable option.  This is why many VDI environments will usually have a mixture of persistent and non-persistent desktops.  These are extreme examples, but it’s clear that no one model is perfect for every situation.

Other random thoughts from this discussion:

- Throughout the debate, and in most discussions surrounding persistent desktops, the persistent desktop crowd often points to new technology advances that make persistent desktops a viable option.  Flash-based arrays, inline de-duplication, etc. are all cited as examples.  The only problem with this is that while the technology exists today, many customers still don’t have it and aren’t willing to make the additional investment in a new array or other technology on top of the VDI software investment.  So the technology exists, and we can have very high-level, academic discussions about running persistent desktops on it, but for many customers it’s still not a reality.
- Here again, like most times this discussion crops up, the non-persistent crowd makes a point of trumpeting the ease of managing non-persistent desktops while glossing over how difficult it can be to actually deploy this desktop type when organizations are seeking a high percentage of VDI users.  Even if we ignore the technical challenges around application delivery, users still have to like the desktop…and most companies will have more users than they realize who will require/demand persistent desktops.
- About midway through the debate there is talk about how non-persistence limits the user and how installing apps is what users want, but earlier in the debate the panel all agreed that just allowing users to install whatever app they want is a security and support nightmare.  I found this dichotomy interesting in that it illuminates this truth – whichever desktop model you choose, the user is limited in some way.  Whatever marketing you may hear to the contrary, remember that.

And last but certainly not least…

In this debate Shawn delivers an argument I hear a lot in IT that I disagree with, and maybe it deserves a separate post.  He talks about the “duality” of operational expense when you are managing non-persistent desktops using image-based management in an environment where you still have physical endpoints being managed by Altiris/SCCM.  He says you actually “double” your operational expense managing these desktops in different ways.  The logic undergirding this argument is the assumption that ‘double the procedure equals double the operational cost’.  To me this is not necessarily true and, for many environments, definitely false.  The only way having two procedures “doubles” your operational cost is if both procedures require an equal amount of time/effort/training/etc. to implement and maintain.  And for many customers (who implement VDI at least partly for easier desktop management), it’s clear that image-based management is viewed as the easier and faster way to maintain desktops.  I see this same logic applied to multi-hypervisor environments as well, and I simply disagree that having multiple procedures will always double or even increase your operational cost.

Any other thoughts, comments or disagreements are welcome in the comment section!


View and XenDesktop vCenter Permissions

Both VMware View and Citrix XenDesktop require permissions within vCenter to provision and manage virtual desktops.  VMware and Citrix both have documentation on the exact permissions required for this user account.  Creating a service account with the minimal set of permissions necessary, however, can be cumbersome, and as a result many businesses have elected to just create an account with “Administrator” permissions within vCenter.  While much easier to create, this configuration will not win you any points with a security auditor.

To make this process a bit easier I’ve created a couple of quick scripts, one for XenDesktop and one for View, that create “roles” with the minimal permissions necessary for each VDI platform.  For XenDesktop, the script creates a role called “Citrix XenDesktop” with the privileges specified here.  For View, the script creates a role called “VMware View” with the privileges specified on pages 87-88 here.  VMware mentions creating three roles in its documentation, but I just created one with all the permissions necessary for View Manager, Composer and local mode.  Removing the “local mode” permissions is easy enough in the script if you don’t think you’re going to use it, and since the vast majority of View deployments I’ve seen use Composer, I didn’t see it as necessary to separate that into a different role either.  You’ll also note that I used the privilege “Id” instead of “Name”.  The problem I ran into is that “Name” is not unique among privileges (e.g. there is a “Power On” under both “vApp” and “Virtual Machine”) while “Id” is unique.  So, for consistency’s sake, I used “Id” to reference every privilege.  The only thing that needs to be modified in these scripts is your vCenter IP/hostname after “Connect-VIServer”.
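If you just want to see the approach without downloading the scripts, here’s a condensed sketch. Note the privilege Ids below are a small sample for illustration only, not the complete lists from the Citrix and VMware documentation:

```powershell
# Condensed sketch of the approach the scripts take -- the Ids below are a
# small sample; use the full privilege lists from the vendor docs in production.
Connect-VIServer -Server "vcenter.example.com"   # your vCenter IP/hostname here

$privIds = @(
    "VirtualMachine.Interact.PowerOn",
    "VirtualMachine.Interact.PowerOff",
    "VirtualMachine.Provisioning.DeployTemplate",
    "Datastore.AllocateSpace"
)

# "Id" is used instead of "Name" because names like "Power On" appear under
# both the vApp and Virtual Machine privilege groups, while Ids are unique.
$privileges = Get-VIPrivilege -Id $privIds
New-VIRole -Name "Citrix XenDesktop" -Privilege $privileges
```

The full scripts simply do the same thing with the complete privilege lists from the respective vendor documents.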

Of course, these scripts could be expanded to automate more tasks, such as creating a user account and giving access to specific folders or clusters, etc., but I will let all the PowerCLI gurus out there handle that. 🙂  Really, the only goal of these scripts is to automate the particular task that most people skip due to its tedious nature.  Feel free to download, critique and expand as necessary.

VMwareView_CreateRole.ps1

CitrixXenDesktop_CreateRole.ps1


VCP5-DT Blueprint Study Guide

Those familiar with VMware certification exams will have experience studying with the excellent exam blueprints that accompany each test.  I took the VCP5-DT (VMware View 5) test several weeks ago and used its exam blueprint to study from.  While filling out the blueprint for my own study purposes, I thought it might be a useful tool for others, so I went ahead and completed most of the rest of the blueprint.  I did, however, leave out certain portions for various reasons.  These reasons range from a) the meaning of the particular section was unclear, b) portions of the blueprint were redundant or c) certain sections can only be known through real-world experience (e.g. troubleshooting).  Despite these omissions, there is quite a bit of content here (30 pages).  I got most of it from the resources listed in the exam blueprint and even copied and pasted tables as necessary.  I did add my own commentary in several places where I felt the listed resources did not go far enough in their explanation.

Download the blueprint study guide here.



Personal vDisks and Application Conflict Resolution

With the recent release of XenDesktop 5.6, Citrix has introduced the “Personal vDisk” feature into its XenDesktop product line.  See below for links on how Personal vDisks work, but the basic idea behind this technology is that it allows you to create pools of non-persistent desktops while still allowing users to install applications on top of these desktops, with those applications persisting between reboots and base image updates.  This is a significant improvement over “dedicated” virtual desktops, where any update to the base image would completely wipe out user customization.  That limitation forced administrators to apply updates to each dedicated desktop individually, which would, over time, consume large amounts of storage space.  Needless to say, the Personal vDisk model is a welcome step forward for Citrix.

Now, with this release there was some exciting news about this technology’s ability to resolve application conflicts between user- and admin-installed apps.  For example, in this video, between the 6:00 and 7:40 mark, an interesting scenario is given where a user installs Firefox 9 but the admin installs Firefox 10 as part of an image update.  The default behavior is that Firefox 9 will be “hidden” and Firefox 10 will be the application available to end users.  Another scenario is given where both the user and admin have installed the exact same application; we are told that in this scenario the user-installed app is removed from the Personal vDisk to save space and only the admin-installed app is utilized.  In the Personal vDisk FAQ, we’re also told that “Should an end-user change conflict with an administrator’s change, personal vDisk provides a simple and automatic way to reconcile the changes”.  With these things in mind, I set out to test this feature myself and see how it actually works.  As you might have guessed, things aren’t quite as “easy” as advertised.

What follows are the high-level steps I took to initially test this feature and try to get it to work:

Test #1

  1. Install Firefox 10 in the base/parent image
  2. Update Inventory and Shutdown, create new snapshot
  3. Update Image
  4. Install Firefox 11 as user
    At this point I was expecting an error or some warning denying the installation of Firefox 11 because it conflicts with an admin-installed app.  However, this did not happen and I was able to install Firefox 11 as a user.  This led to my next test.

Test #2

  1. Install Firefox 11 in the base/parent image
  2. Update Inventory and Shutdown, create new snapshot
  3. Update image
  4. Install Firefox 10 as user
    Again, I was expecting some kind of error or warning at this point, but it never happened.  As a user, I was able to install the older version of Firefox without any issues.  This led to another test.

Test #3

  1. Install Firefox 11 in the base/parent image.
  2. Update Inventory and Shutdown, create new snapshot.
  3. Update image.
  4. Install Firefox 11 as user and observe more space being taken up on the Personal vDisk.
    Again, there were no warnings or errors at this point, despite directly creating a conflict between a user- and admin-installed app and wasting space on the Personal vDisk.  I tried this same test with several different applications but had the same result each time.  Frustrated, I turned to the Citrix Forums and found the answer to why this doesn’t work the way I expected.

As explained in that forum thread, the reason my tests didn’t turn out the way I thought they would is that Personal vDisk application conflict resolution does not happen proactively, while a user is installing an application, but only after a base image update, when files or folders have been modified and updated.  To borrow the example given in the forum at a more granular level, say that “app.dll” is present in the base image.  The user installs an application or in some way changes “app.dll” on their virtual desktop.  This change will persist indefinitely until “app.dll” is once again updated in the base image.  At that point the inventory process will note that “app.dll” has been modified, and the user’s changes to “app.dll” will be overwritten the first time the virtual desktop boots up after the image update.
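To make the rule concrete, here’s a rough sketch of the reconciliation logic as I understand it from that thread. This is purely my own illustration of the behavior, not Citrix’s actual implementation:

```powershell
# Purely illustrative of the behavior described above -- NOT Citrix's code.
# Rule: a user's change to a file persists until that same file is changed
# in the base image; after the next image update, the base image copy wins.
function Resolve-PvdFileConflict {
    param(
        [string]$BaseHashAfterUpdate,   # file hash in the updated base image
        [string]$BaseHashAtInventory,   # file hash recorded at the last inventory
        [string]$UserVDiskHash          # file hash on the Personal vDisk, if modified
    )

    if ($BaseHashAfterUpdate -ne $BaseHashAtInventory) {
        # The file changed in the base image since the last inventory, so the
        # admin's copy overwrites any user modification on next boot.
        return "UseBaseImageCopy"
    }
    if ($UserVDiskHash) {
        # Base image untouched: the user's modified copy persists, even
        # across image updates that don't touch this particular file.
        return "KeepUserCopy"
    }
    return "UseBaseImageCopy"
}
```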

I decided to test this out at the individual file level to easily verify the results.  Here is a file in C:\Test on my base image.  Note the size:

As a user, I modify this file by deleting all of the content and create another file in this directory.  Note the sizes:

Now, these user changes persist between reboots and even persist between image updates when this specific file is not updated.  However, when I go back into my base image and update that file (add a word), here’s what it looks like to the user after an image update:

As you can see, the admin changes in the base image have overwritten the user changes.  If we go back to my earlier examples, we will see that this same behavior holds true for entire applications as well.  For instance, on Test #3, if I go back into the base image and reinstall Firefox 11, those files get removed from the Personal vDisk the first time it boots up and I now use the application as installed by the administrator from the base image.  On Test #2, if I go back in and reinstall Firefox 11 on the base image, I now see Firefox 11 as the end user and the Firefox 10 files are overwritten.

Conclusion
While the Personal vDisk feature of XenDesktop 5.6 is a definite step in the right direction, there is still some work that needs to be done on application conflict resolution.  Currently, the only way to be sure that admin-installed apps overwrite any conflicting user-installed apps is to regularly go into the base image and update or reinstall your applications.  Further, since the default behavior is for admin-installed apps to “win” in the event of a conflict, administrators should take care when updating applications and images, as they could inadvertently overwrite user-installed apps they didn’t intend to, and this could lead to a confusing experience for the user (“Hey!  I didn’t install this version!?”).

Not having a solid application conflict mechanism in place isn’t a deal-breaker for me; after all, current “dedicated” desktops don’t have a solution for this either.  However, it is important to know how this works and when overwrites occur so you can properly manage applications in your environment and aren’t unintentionally creating a bad experience for your users.  A future post may delve into ways to modify the default behavior (admin apps overwriting user apps), but for now I put this out there for all who may be confused as to how this works, as I was.

Here are some useful Personal vDisk links:
http://blogs.citrix.com/2011/08/29/digging-into-ringcube/
http://support.citrix.com/proddocs/topic/xendesktop-ibi/cds-about-personal-vdisks-ibi.html
http://www.citrix.com/tv/#videos/5359
http://www.citrix.com/tv/#videos/5348
http://www.citrix.com/tv/#videos/5269


How server virtualization killed VDI

There’ve been some interesting discussions about VDI recently, and many of these discussions share a common theme – that VDI is not all that it was made out to be and that there are better ways to deliver desktops to your users “anytime, anywhere”.  This line of thinking has existed for some time but has recently come into vogue after years of overhyped VDI promises and underwhelming results bred pent-up, cynical disillusionment.  Understanding where this hype came from is instructive in learning the mindset of most organizations starting VDI implementations and why many of these implementations haven’t lived up to the promises, failing from both a technical and a user-acceptance standpoint.

The Hype

Vendor marketing, ever-present atop the Peak of Inflated Expectations.

I’ve lost track of the number of people I’ve talked with over the years who want to pursue VDI with the following justification: “We had so much success with server virtualization that virtualizing our desktops just made sense”.  In fact, hearing this statement just this past week is what prompted this post.  At a cursory glance, this reasoning does have a common-sense appeal; however, the devil is in the details, and it is precisely this reasoning that has led so many people astray in regards to VDI.  Why might this be?  Because server virtualization is a freak of nature!  Very few things in life become cheaper when more features are added (as other commentators have noted).  In moving from physical to virtual with a server infrastructure, an organization not only saved money through server consolidation and reduced power and cooling but also added some extremely valuable features that just weren’t available before.  Things like server mobility, easier disaster recovery, rapid server deployment, etc. all added tremendous value to server virtualization on top of the financial benefits of moving to such a solution.  With the success of server virtualization in hand, many organizations rushed headlong into VDI deployments thinking they would get the same benefits at the same low cost with the same level of ease because hey, desktops are easy, right?  Sensing the excitement building around this “next stage of virtualization”, vendor marketing departments went into overdrive hyping VDI and touting its technical benefits and cost effectiveness just as they did with server virtualization.  At that time, the unique workload characteristics of desktops, and how those would translate to a virtual, shared-image environment, had not been taken into account by those already experienced with server virtualization (myself included).

The Reality

In reality, virtualizing your desktops is not “easy”, and the success of a server virtualization project by no means guarantees success in setting up a virtual desktop infrastructure.  The technical differences between these two technology domains are significant and important to note.  For instance, in most server environments you’ll have a large number of idle servers at any given time.  Desktops, however, are busy all the time with user activity.  Adding to that, the user “lives” in the virtual desktop, so any lag in performance, any delay in mouse clicks, will be immediately noticeable.  And since you’ve virtualized their desktop, the user still expects the same level of graphical performance as their local PC, whether they’re viewing work-related material or not.  How many of your servers average 20-30 IOPS on a continual basis?  Many of these problems simply didn’t exist with server virtualization.  Servers remain online almost all the time; with VDI, however, desktops are rebooted on a regular basis, which can lead to boot storms.  With server virtualization you installed one application per server; with VDI, users want to install their own applications (UIA) and you have a whole assortment of “long tail” applications that you have to develop a strategy around.  Adding to these VDI complexities are profile and “user” management, printing and more.

The Conclusion

Server virtualization is an anomaly, and the prevailing opinion of desktops as a lesser form of server has lulled the masses into thinking VDI would be a piece of cake.  The bottom line is, VDI is not “easy”, and this line of thinking has led to many failed VDI implementations.  While the technical challenges listed above, and previously on this website and others, are very real, they would never have “surprised” anyone or even been a problem if they had been planned for and designed around carefully.

Paradoxically, while the conflation of desktop and server virtualization and the “desktops are easy” mentality has contributed to so many failed VDI implementations, I am convinced that these failures simply wouldn’t have happened if VDI had been implemented in a similar fashion to server virtualization.  No one started a server virtualization project with a “virtualize everything” mentality.  Server workloads were carefully analyzed to determine the best candidates for virtualization.  After these “low hanging fruit” servers were identified, IT departments slowly worked their way up to more resource-intensive servers with unique workload characteristics.  In the end, some servers remained physical, and this was fine because it was anticipated as part of the overall strategy.  A similar strategy should be taken with VDI.  Develop a comprehensive desktop and application virtualization strategy.  Create application and user catalogs to determine where your users and applications fit into this strategy.  Then start with your “low hanging fruit” – your call centers or task workers – and slowly work your way up to “knowledge workers” with more unique and demanding requirements.  Ultimately, you may end up with some users who remain on physical desktops.  Careful planning and a realistic level-headedness will result in a successful VDI implementation.  Knowing what VDI “is” and “is not” is essential in determining your end goal for such a project and setting expectations about what VDI will do for your organization.



Addendum: While doing some research after writing the above post, I ran across a presentation by Ron Oglesby in which he raises similar points.  As always, it’s a great presentation.  Here is the link, enjoy!
