Microsoft Office (Exchange) 365 – RDSH Myth

For many years now, "everything in the cloud" has been happening more and more. Among all the services and software available as a service, most vendors promise a cheaper and much simpler way to manage their software. I know some of you will disagree with what I'm writing and some will agree; this is a view based on my experience in the field :)

A couple of weeks ago I launched a small poll on Twitter asking this question: "Why do you think companies are moving to Exchange 365?" Here are the results: "It's less complex" won the poll, followed by "it's a fashion" and then "it's cheaper"... I tend to agree with everything here, because if you get rid of a complete Exchange infrastructure along with all the people you need to architect and manage it, it will be cheaper and less complex for sure! But that is just a dream that ignores the complexity of companies and the way users actually use their Outlook.

What I am trying to point out here is: moving to Exchange 365 is not as easy as it seems. Some companies do have a "basic" Outlook / Exchange usage and it won't bring issues, but most of the companies I have seen ran into problems because Microsoft and Microsoft's partners did not capture the way users were used to working with their Outlook mail client.

The picture above is the "put everything into the cloud, you will save money" ideal. This ideal is true and can be reached when you know your users' work habits and already have an organised mail infrastructure. But this ideal can easily be broken. It looks easy on paper or in a PowerPoint presentation, yet simple things can break this kind of project into pieces and make it fail. How? Here is a list (to be completed :D):

- Outlook plugins
- Online Mode
- RDSH environment
- Bad architecture decision / consulting
- ...

Outlook plugins are the worst enemy of cloudification, because in 90% of cases they force you to keep the Outlook mail client; they are a road block for OWA adoption... Once you're stuck with the Outlook mail client, you need to deal with Online or Cached mode with Exchange 365... Piece of cake, right? Workstation / laptop --> Cached mode enabled, no problem! But what about…
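As a quick aside on the cached-mode point: on a workstation you would normally just confirm that cached mode is enforced by policy before worrying about anything else. A minimal sketch, assuming the Outlook 2013 (15.0) policy key; the path and the Enable value are my assumption for that version, so check them against your Office build.

# Hedged sketch: read the "Use Cached Exchange Mode" policy for Outlook 2013 (15.0).
# The key path and value name are assumptions for that version; adjust 15.0 to 14.0 or 16.0
# to match your Office deployment.
$key = 'HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode'
if (Test-Path $key) {
    (Get-ItemProperty -Path $key -Name Enable -ErrorAction SilentlyContinue).Enable   # 1 = cached mode forced
} else {
    'No Cached Mode policy found - Outlook uses the profile default'
}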

Expand virtual machines hard disk – automation

Sometimes, at a customer's site with an infrastructure already in place (XenApp with PVS, or XenDesktop pooled VDI with PVS), the D: drive is too small. This is the drive where Windows event logs and application/service logs (UPM for example) are redirected, where the page file is often redirected as well, and where memory dump files can be generated. The PVS cache can also live on this drive:

Cache on device RAM with overflow on hard disk: when RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local write cache disk to accommodate newer data in RAM. The amount of RAM specified is the non-paged kernel memory that the target device consumes.

Cache on device hard disk: the cache on local HD is stored in a file on a secondary local hard drive of the device. It gets created as an invisible file in the root folder of the secondary local HD. The cache file size grows as needed, but never gets larger than the original vDisk, and often not larger than the free space on the original vDisk. It is slower than RAM cache, but faster than server cache, and it works in an HA environment.

A lack of space on this drive will bring some slowness into users' sessions, so the drive needs to be expanded a bit to get back to a normal user experience. To expand these disks, two actions need to be done:

- Expand the virtual machine hard disk - in this example, VMware virtual machines
- Expand the disk within the operating system (Windows)

In addition to the following script, the psexec tool (Microsoft Sysinternals) is used to remotely execute the diskpart commands listed in a text file (diskpart.txt) which is uploaded to the virtual machines. Targeted virtual machines need to be powered on. Psexec.exe and diskpart.txt need to be in the same folder as the PowerShell script; of course you can specify their paths as it suits your needs.

This script uses XenDesktop / XenApp commands to list all the virtual machines with a SessionSupport value equal to SingleSession, which means the VDI only in my case. If you want to check the list of virtual machines…
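The script itself is not included in this excerpt, but the overall flow can be sketched like this. A minimal sketch, assuming VMware PowerCLI and the Citrix Broker snap-in are loaded, that the broker's HostedMachineName matches the vCenter VM name, and that psexec.exe and diskpart.txt sit next to the script; vcenter.lab.local and the +10 GB growth are placeholders.

# Minimal sketch: grow the vSphere disk, then extend the D: volume inside Windows.
Add-PSSnapin Citrix.Broker.Admin.V2 -ErrorAction SilentlyContinue
Connect-VIServer -Server "vcenter.lab.local"          # placeholder vCenter name

# diskpart.txt pushed to each VM would contain something like:
#   rescan
#   select volume D
#   extend

$machines = Get-BrokerMachine -SessionSupport SingleSession   # VDI only in my case
foreach ($m in $machines) {
    # 1. Grow the virtual disk at the hypervisor level (+10 GB here, as an example);
    #    the assumption is that the last disk of the VM is the D: drive.
    #    Older PowerCLI versions only expose -CapacityKB instead of -CapacityGB.
    $disk = Get-VM -Name $m.HostedMachineName | Get-HardDisk | Select-Object -Last 1
    Set-HardDisk -HardDisk $disk -CapacityGB ($disk.CapacityGB + 10) -Confirm:$false

    # 2. Extend the volume inside Windows with diskpart, executed remotely via psexec
    Copy-Item -Path .\diskpart.txt -Destination "\\$($m.DNSName)\c$\diskpart.txt"
    .\psexec.exe "\\$($m.DNSName)" -accepteula -s cmd.exe /c "diskpart /s C:\diskpart.txt"
}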

XenDesktop / XenApp 7.x – VMware / AD / delivery group notes and descriptions sync

Several times I have had the need to synchronise virtual machine notes (VMware) with the Active Directory computer description. In big environments, different teams manage each of these components, so being able to link an Active Directory computer account and a VM to a XenApp / XenDesktop delivery group has often been seen as useful.

Delivery group name: Desktop123
Virtual machine note (VMware): Desktop123
Active Directory account description: Desktop123

The idea is to simply synchronise this information across the platforms so everyone knows quickly what each machine does. In this particular example it was about XenApp servers and XenDesktop VDI. You will need a machine where:

- the XenDesktop 7.x SDK (PowerShell) is installed
- VMware PowerCLI is installed
- the RSAT role is deployed as well

Thanks to Rodolphe Herpeux, who simplified the first version of this script I wrote.
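The script is not reproduced in this excerpt; a minimal sketch of the core loop, assuming the Broker snap-in, PowerCLI and the ActiveDirectory module are all available on that machine and that HostedMachineName matches the vCenter VM name (server names are placeholders), could look like this:

# Minimal sketch: push the delivery group name into the vSphere VM note
# and the Active Directory computer description.
Add-PSSnapin Citrix.Broker.Admin.V2 -ErrorAction SilentlyContinue
Import-Module ActiveDirectory
Connect-VIServer -Server "vcenter.lab.local"           # placeholder

$machines = Get-BrokerMachine | Where-Object { $_.DesktopGroupName }
foreach ($m in $machines) {
    $group = $m.DesktopGroupName
    $name  = ($m.MachineName -split '\\')[-1]           # MachineName is DOMAIN\NAME

    # VMware side: the VM note becomes the delivery group name
    Set-VM -VM $m.HostedMachineName -Notes $group -Confirm:$false

    # Active Directory side: same value in the computer account description
    # (append '$' if your environment requires the sAMAccountName form)
    Set-ADComputer -Identity $name -Description $group
}

Run on a schedule, this keeps the three views aligned whenever machines are added to or removed from a delivery group.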

NetScaler 10.5 and StoreFront 2.5.2 Configuration

Citrix NetScaler 10.5 has been out for a couple of weeks now, and if you want to read what's new in this release just click on the [link], because there are so many things that I won't list everything here. I will use this blog to refresh the "how to" posts I already did about NetScaler, and I will go through the basic setup, the certificate request and import, and the Access Gateway configuration to plug in my XenDesktop 7.5 lab.

First, you need to download your NetScaler (download the VPX if you're using a virtual appliance). You can find the appliance corresponding to your hypervisor:

- VMware ESX
- Microsoft Hyper-V
- Citrix XenServer
- KVM

You can download it here: [link] - a myCitrix account is required. Once you boot up the appliance and give it the basic information like IP address, subnet and gateway, you can fire up the GUI in your favorite browser. You need to log on and follow the step-by-step screenshots:

The basic configuration is done. Now it's time to add a certificate for the Access Gateway: creating a private key, a CSR and finally importing the PEM certificate. Don't forget to change the nsroot password.

Now that the certificate part is done (thanks to DigiCert for my lab), you can go ahead to the next step and configure your StoreFront server to create a new store ready to connect to the NetScaler Access Gateway. The StoreFront part is easy and quick to do; you can then continue by creating the Access Gateway using the new wizard and following these steps:

Here you go, just a reboot and the Access Gateway is up and running. In the end I had a few issues with Application Firewall with Google Chrome and Safari on a Mac OS X computer; you need to enable the learning mode to check what needs to be changed in the Application Firewall rules to allow connections to your Access Gateway.

You can customize the NetScaler Access Gateway logon page and your StoreFront very easily. Eric, one of my CTP friends, wrote a very short and nice blog about that [link], and a very detailed blog was written by Feng Huang, a Citrite, here [link]. This blog should give you a good overview of what needs to be done to set up an Access Gateway with StoreFront; for those who don't have time to test, now you know!

Are we missing something?

As you might know, I'm the CTO of a super cool company here in France (Activlan), based around Paris, and one side of my job is to look into my crystal ball to figure out what our customers will need and how they could use us to stay on top of their productivity with their IT: reducing costs and accelerating processes, giving flexibility and freedom to their users, and keeping information safe when needed.

What's very cool about my job is that I always exchange so much with all of you during events, when we meet here and there, online and in real life; it gives me a flavor of what's happening in IT in a lot of countries very different from France. Of course I try to give back what I learned from all this shared experience and knowledge, but these last months I've been busy working hard on some other projects.

So, this title brings me back to an old blog, VDI, ok? What's next?, published in May 2012, where my conclusion was: what really matters in the vWorld? In the end, the data. I think that was about right in 2012, and you know, with all the VDI, RDSH, offline and online, hypervisors of all types, applications installed, streamed or isolated, using a phone, a tablet, a thin client or a computer, in the end the only thing that matters remains the data.

Software vendors in our segment are pushing their mobile solutions (i.e. MAM and MDM) harder and harder, thinking everyone should buy this software and work with tablets and phones. I think we aren't quite there just yet... When someone is hired into a company, the first days are almost always a giant waste of time (and money): no desktop ready, no application access, etc. In big companies, MDM and MAM need to be addressed, but they will not be widely used within the next 2-3 years. What users expect from their company is to have access to their data (the core need) through applications accessed via a desktop, or not, but with a consistent environment. They want to work in an optimal way during their working hours and sometimes be able to access their data from home or a remote location, but taking over people's personal phones is over-rated for now. The MAM / MDM hype reminds me of the…

Cloudify my lab with Windows Azure

As I got unlimited access to Windows Azure, I wanted to check how I could extend my lab into it and use it to store VM workloads (at first). Here is what you need:

- Citrix NetScaler VPX (tested with NS10.1: Build 122.17.nc & NS10.1: Build 123.9.nc)
- Windows Azure access
- Homelab (running on vSphere 5.5)

Of course, you need licences for everything...

Considerations: before configuring a CloudBridge tunnel between a CloudBridge appliance in the datacenter and Microsoft Azure, consider the following points:

- The CloudBridge appliance must have a public-facing IPv4 address (type SNIP) to use as a tunnel end-point address for the CloudBridge tunnel. Also, the CloudBridge appliance should not be behind a NAT device (or you'll have to set up a route for your LAN computers; I explain how at the end of this blog).
- Azure supports the following IPsec settings for a CloudBridge tunnel, therefore you must specify the same IPsec settings when configuring the CloudBridge appliance for the tunnel: IKE version = v1, encryption algorithm = AES, hash algorithm = HMAC SHA1.
- You must configure the firewall at the datacenter edge to allow the following: any UDP packets for port 500, any UDP packets for port 4500, any ESP (IP protocol number 50) packets.
- IKE re-keying, which is the renegotiation of new cryptographic keys between the CloudBridge tunnel end points to establish new SAs, is not supported. When the Security Associations (SAs) expire, the tunnel goes into the DOWN state. Therefore, you must set a very large value for the lifetimes of the SAs.
- You must configure Microsoft Azure before specifying the tunnel configuration on the CloudBridge appliance, because the public IP address of the Azure end (gateway) of the tunnel and the PSK are automatically generated when you set up the tunnel configuration in Azure. You need this information to specify the tunnel configuration on the CloudBridge appliance.

First things first, you need to log on to your Windows Azure account and follow the next steps to begin configuring the IPsec tunnel by creating a local network:

1. In the left pane, click NETWORKS.
2. In the lower left-hand corner of the screen, click + NEW.
3. In the NEW navigation pane, click NETWORK, then click VIRTUAL NETWORK, and then click ADD LOCAL NETWORK.
4. In the ADD A LOCAL NETWORK wizard, on the "specify your local network details" screen, set the following parameters: NAME, VPN DEVICE IP ADDRESS.
5. In the lower right corner of the screen,…
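For reference, the Azure-side steps can also be scripted instead of clicked through. A minimal sketch using the classic (Service Management) Azure PowerShell module of that era, with hypothetical network and site names; the netcfg XML referenced below would contain the local network and virtual network definitions you would otherwise enter in the wizard.

# Minimal sketch, classic Azure PowerShell module (Service Management); names and the
# NetworkConfig.xml path are placeholders for this lab.
Import-Module Azure
Add-AzureAccount

# Push the network definition (local network site + virtual network) as a netcfg XML file
Set-AzureVNetConfig -ConfigurationPath "C:\lab\NetworkConfig.xml"

# Create the gateway on the Azure side (static routing, which matches the IKEv1
# requirement above) and retrieve its public IP plus the PSK; both values are then
# re-used in the CloudBridge tunnel configuration on the NetScaler.
New-AzureVNetGateway -VNetName "LabVNet" -GatewayType StaticRouting      # takes a while
(Get-AzureVNetGateway -VNetName "LabVNet").VIPAddress
(Get-AzureVNetGatewayKey -VNetName "LabVNet" -LocalNetworkSiteName "Homelab").Value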

Citrix XenDesktop 7 – Create Persistent Hypervisor Connection and Hosting Unit, Unattended

I blogged about how to automate Citrix XenDesktop 7 deployment and database creation, and how to join an existing XenDesktop 7 site unattended. Now, to continue and go a bit further in the automation process, I needed and wanted to know how to automate the Hosting configuration by adding a connection and resources to the DDC in an unattended way. This blog covers the creation process for XenServer 6.x and vCenter (vSphere) 5.1, since I don't have access to Hyper-V (yet). I went over the Citrix eDocs to check how I could do this and found it here: [link]. Thanks to Livio for some PowerShell help :) It helps to understand what needs to be set up, and after a few tests I ended up writing this script to automate this part:

This script has been tested with Citrix XenDesktop 7, XenServer 6.2 and vSphere 5.1.
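The script itself is not part of this excerpt, but a minimal sketch of the eDocs-style approach for the vCenter case (not the exact script from the post), with hypothetical connection names, credentials and inventory paths, looks roughly like this:

# Minimal sketch: create a persistent hypervisor connection and a hosting unit unattended.
Add-PSSnapin Citrix* -ErrorAction SilentlyContinue

$cred = Get-Credential    # vCenter service account

# 1. Create the hypervisor connection in the host service...
$connection = New-Item -Path @("XDHyp:\Connections\vCenterLab") `
    -ConnectionType "VCenter" `
    -HypervisorAddress @("https://vcenter.lab.local/sdk") `
    -UserName $cred.UserName `
    -SecurePassword $cred.Password `
    -Scope @() -Persist

# ...and register it with the broker
New-BrokerHypervisorConnection -HypHypervisorConnectionUid $connection.HypervisorConnectionUid

# 2. Create the hosting unit (resources). Root, network and storage paths are browsable
#    under XDHyp:\Connections\<name>\ and will differ in your environment.
New-Item -Path @("XDHyp:\HostingUnits\ClusterLab") `
    -HypervisorConnectionName "vCenterLab" `
    -RootPath "XDHyp:\Connections\vCenterLab\Lab.datacenter\Cluster1.cluster" `
    -NetworkPath @("XDHyp:\Connections\vCenterLab\Lab.datacenter\Cluster1.cluster\VM Network.network") `
    -StoragePath @("XDHyp:\Connections\vCenterLab\Lab.datacenter\Cluster1.cluster\Datastore1.storage") `
    -PersonalvDiskStoragePath @()

Once both items exist, Studio shows the connection and the hosting unit just as if they had been created through the wizard.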

This is the personalized installation I do when I deploy VMware Tools on the VMs that will get a VDA installed. Don't forget to install VMware Tools before the Citrix Virtual Desktop Agent! It's always good to share this information, because I've had a lot of questions regarding VMware Tools installation with Citrix XenDesktop VDAs.

- Toolbox – Enable – Used for functions like time synchronization and clean shutdown of the guest.
- Memory Control Driver – Enable – Driver for improved memory management in the virtual machine. This driver is available and recommended if you use VMware vSphere. Excluding this driver hinders the memory management capabilities of the virtual machine in a vSphere deployment.
- ThinPrint Driver – Disable – Handled by Citrix printing in the VDA.
- Paravirtual SCSI – Disable – Used for high I/O operations with a SAN and mostly applicable to server VMs, not VDAs. This driver is for PVSCSI adapters, which enhance the performance of some virtualized applications.
- Mouse Driver – Enable – Needed, as it fixes glitches with the mouse.
- File System Sync Driver – Disable – Driver for the synchronization of the file system within the virtual machine, for example for the preparation of backups. Only used if you have dedicated VMs and use agents in the VMs to back them up. In VDA environments the most common setting is profile management, in which data is moved to a share as opposed to being local on the VMs.
- Shared Folders – Disable – Directory for data exchange between the host system and the guest system. Currently only works with VMware Workstation, and I have seen it cause a lot of synchronization issues.
- SCSI Driver – Enable – Installs and improves the BusLogic SCSI driver. If you use LSI Logic this driver is not required.
- SVGA Driver – Disable – We want to use the Citrix VGA adapter and not the VMware VGA. Use CTX123952 (below) as a workaround if using Windows 7.
- Audio Driver – Enable – Needed to play back sound. This sound driver is required for all 64-bit Windows guest operating systems, and for 32-bit Windows Server 2003, Windows Server 2008, and Windows Vista guest operating systems if you use the virtual machine with VMware Server, Workstation, or Fusion.
- VMXNet NIC Driver – Enable – Network card driver for the VMXNet VMware network card. Improves network performance of the virtual machine, especially in gigabit environments. Furthermore the CPU…
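For an unattended deployment, the selection above could be expressed as a silent VMware Tools install that strips the unwanted components. A minimal sketch; the feature names (Hgfs, ThinPrint, Sync, PVSCSI, SVGA) are my assumption for the VMware Tools builds of that era, and D:\ assumes the Tools ISO is mounted, so verify both against your version.

# Hedged sketch: silent VMware Tools install keeping everything except the components
# flagged "Disable" above (Shared Folders, ThinPrint, File System Sync, PVSCSI, SVGA).
Start-Process -FilePath "D:\setup64.exe" -Wait -ArgumentList `
    '/S /v"/qn REBOOT=R ADDLOCAL=ALL REMOVE=Hgfs,ThinPrint,Sync,PVSCSI,SVGA"'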

Power and Capacity Management, a bit further

Power and Capacity Management is a great feature for XenApp; I use it more and more for Activlan customers in the new implementations we do. Last week I had to find out how to automate workload and server capacity reporting by email. To remind you what Power and Capacity Management is, here is a short explanation:

Citrix XenApp Power and Capacity Management can help reduce power consumption and manage XenApp server capacity by dynamically scaling up or scaling down the number of online XenApp servers. Consolidating sessions onto fewer online servers improves server utilization, helps minimize power consumption, and helps provide sufficient capacity to handle server loads. As users log on to the system and reduce the idle capacity (the amount of capacity available for additional sessions), other servers in the workload are powered up. As users log off and idle capacity increases, idle servers are powered down. This helps optimize capacity for XenApp workloads. Scheduling provides an automated approach: an administrator defines specific times for powering workloads on and off. For example, a schedule powers on servers at 8 in the morning and powers them down at 7 in the evening, Monday through Friday. The administrator can manually override capacity and schedule settings to accommodate unexpected demand. Load consolidation and power management operate in unison; load consolidation ensures sessions are not spread across online servers, which provides a better opportunity to power off excess servers later using power management. Use Power and Capacity Management to observe and record utilization and capacity levels. Console monitoring and report generation provide valuable information, even if you do not enable power management and load consolidation. Power and Capacity Management respects all configured XenApp server settings, farm settings, and policies.

This is my lab console:

I had to figure out how to generate reports automatically with Citrix Power and Capacity Management; there is an option to generate reports within the PCM console, but nothing to send these reports automatically by email. When you generate reports through the console you get pretty good graphs and tables: these reports look good, and this is exactly what I needed to generate. PCM uses SQL Reporting Services, so it shouldn't be that hard to generate an email from these reports. I'm not a SQL expert or anything, but I've done the following changes to set up two subscriptions: open a web browser and…

VDI, ok? What's next?

This blog is a follow-up to the discussion we had in Vienna during the Geek Speak session at the E2E event. I had to leave to catch my flight back to France, but the discussion was very interesting and I thought about it during my whole trip... I'm still thinking about it while writing this blog.

VDI, desktops... shared, remote, dedicated, pooled and/or virtual. VDI gives the possibility to deliver desktops to everyone, everywhere. Let's say it: in most companies, users still need a desktop, a Microsoft desktop. Why is that? Just because they are used to accessing a Microsoft Windows desktop at home, and during the last 20 years we didn't deliver applications any other way. The desktop rules application access, or at least it did until 3-4 years ago, when smartphones and tablets / iPads came into everyone's life and changed the Microsoft desktop user's life by giving direct access to an application. Everyone is getting used to accessing applications without going through a Microsoft Windows desktop, and I think that will change a lot of things within 5 years regarding the way we deliver an environment to our users.

Desktop vs application: why are we accessing a desktop today? Mainly to open applications and to be able to switch from one window to another, copy and paste between applications, etc. Starting from this statement, which I think everyone will agree with, why do we need this layer (the Microsoft desktop) to access applications? As I mentioned before, we have habits and we are used to opening our applications through a Microsoft desktop. I remember trying to publish Internet Explorer a few years ago on thin clients; on the Web Interface, only applications were published, no desktop at all. We had to fall back and publish a desktop again because the user experience was different: users were used to clicking to switch between applications instead of using the alt-tab keys. The amount of memory we tried to save by not publishing a desktop was quite a lot, and as we had to give a desktop back, we had to recalculate all the memory consumed per user for a desktop and add more servers according to our results.

As you can notice in the graph above, the difference between a seamless published Excel 2010 and a desktop (XenApp 6.5 with Excel 2010) is double. As we needed to publish desktops instead of only using published applications, we had…