Musings of a PC

Thoughts about Windows, TV and technology in general


Quick tip: file differences in Visual Studio

Visual Studio has a pretty good file differencing tool … but it only seems to be available from the GUI if the files you are comparing are under source control.

If you want to compare other files, e.g. the output from an app, a common suggestion is this:

devenv.exe /diff list1.txt list2.txt

which will start a separate instance of Visual Studio and then run the diff tool.

If you want to use diff within a running instance, though, you can do so from the Visual Studio Command Window (CTRL+W, A):

Tools.DiffFiles list1.txt list2.txt

This doesn’t require any plugins or extensions to be installed.


Screen flexibility in Windows

For a while, I’ve been using my Surface Pro 2 with an external monitor, with the Surface beneath the monitor. In Windows, I’ve had the Surface’s screen to the left of the monitor and I’ve “trained” myself that if I want to move something from the monitor “down” to the Surface, I have to move it to the left.

Today, I added a 2nd external monitor, daisy-chained over DisplayPort, and then moved the SP2 so that it sits underneath both of them, like this:


I then opened the Screen resolution dialog and started dragging the SP’s screen across so that it would sit between the two external monitors. As I did, I realised you could alter the vertical position of the screen relative to the two monitors. In doing so, I discovered that you can literally match how the monitors are laid out:


I’ve now got to retrain my muscle memory to move content in the physical direction of the screens rather than according to the logical layout I had memorised, but I am really impressed that this is possible!

(Probably obvious to most, but sometimes it is the little things that can make a big difference!)


2012 in review

The stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

4,329 films were submitted to the 2012 Cannes Film Festival. This blog had 35,000 views in 2012. If each view were a film, this blog would power 8 Film Festivals


Now available: The Release Candidate for System Center Virtual Machine Manager 2012

Really pleased to see the Release Candidate made available:

It has been interesting learning about VMM 2012 by using the beta VHD but it became frustrating recently when the provided copy of SQL Server expired. I didn’t have the mental energy to move the databases onto our production SQL Server so I’m glad that a refresh of the VHD is now out.

Evaluating VMM 2012 by using the VHD is, in my opinion, the simplest way to start playing around with VMM and learning what it can do. Download, attach to a virtual server and away you go!

It is also possible – and supported – to upgrade from VMM 2008 R2 SP1 to VMM 2012 RC and then to VMM 2012 RTM, so if you have an existing VMM 2008 R2 SP1 estate, you can upgrade to VMM 2012 RC safe in the knowledge that you’ll be able to upgrade to RTM when it arrives.

Speaking of which, I wonder when it will arrive? Are the various System Center products going to be released independently or in one big hit? If the latter, I suspect we may be waiting a while because I think VMM is the first Release Candidate … DPM has only recently hit beta!

I can’t see any support listed for Linux, though, which is a shame given that RHEL, amongst others, is supposed to be a supported guest of Hyper-V.

SCVMM 2012: Server Fabric Lifecycle

… or all about high availability, update management and dynamic optimization.

The goal of the HA feature in VMM is to ensure that a VM can recover from failure, e.g. the failure of a host, and to ensure that a VM can easily be migrated. Over and above the capabilities of HA in previous versions of VMM, VMM 2012 adds the ability to create & delete clusters, manage clusters in untrusted domains, have a non-HA VM on a cluster and make the VMM server itself highly available. VMM 2012 also adds the functionality to manage Citrix XenServer (on top of the existing functionality to manage VMware).

Update management is a new feature of VMM 2012 and aims to keep Windows fabric servers up to date. The reason why this has been added to VMM is to enable management of the complete fabric from a single pane of glass – and that includes all aspects of the server fabric lifecycle.

The feature requires a pre-existing, dedicated, root WSUS 3.0 SP2 64-bit server. If the WSUS server is remote, the WSUS console is required on the VMM server. WSUS in SSL mode is supported.

VMM gets a catalog of updates from the update server. It points the fabric servers to the correct update server, i.e. configures the Windows Update Agent (WUA) on each fabric server.

A baseline is then created. The baseline is a logical grouping of updates against which compliance is assessed. VMM provides two sample baselines, for Security and Critical updates. You can assign a baseline to hosts, host groups and host clusters, plus VMM server roles (library server, PXE server, update server and VMM server). You cannot assign it to VMs (running or stored) or VHDs in the library.

A scan is then conducted to see whether the server is compliant with the assigned baseline. VMM leverages WUA for applicability and compliance checks. Scans are on demand and automatable using PowerShell. VMM then makes the server compliant by installing the missing updates. Update installation progress can be tracked in the VMM console, and remediation is likewise on demand and automatable using PowerShell.
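Conceptually, the scan boils down to a set difference between a baseline and what a server has installed. A minimal Python sketch of that idea (the KB numbers and names are made up for illustration; this is not VMM's actual implementation):

```python
# Illustrative sketch: a baseline is a named set of required updates, and
# a compliance scan reports which of them a server is missing.

def scan(baseline_updates, installed_updates):
    """Return the updates a server is missing against a baseline."""
    return sorted(set(baseline_updates) - set(installed_updates))

# Hypothetical baseline and host state.
security_baseline = {"KB2393802", "KB2479943", "KB2508429"}
host_installed = {"KB2393802", "KB2508429"}

missing = scan(security_baseline, host_installed)
# An empty list would mean the host is compliant with the baseline.
print(missing)  # → ['KB2479943']
```

Remediation is then just "install everything in the missing list", which is what VMM automates for you.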

There is an orchestrated workflow for remediating a Hyper-V cluster whereby each node in turn is put into maintenance mode, evacuated using Live Migration, updated with the missing updates based on its assigned baselines, and taken out of maintenance mode before the workflow moves on to the next node. It supports Windows Server 2008 as well as R2 clusters and is automatable using PowerShell.
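That workflow reads naturally as a loop. A hedged sketch in Python, where the three callbacks are placeholders standing in for the real maintenance-mode, Live Migration and update operations (they are not VMM cmdlets):

```python
# Sketch of the orchestrated cluster remediation loop described above.

def remediate_cluster(nodes, drain, update, restore):
    """Update each cluster node in turn, keeping the rest serving VMs."""
    for node in nodes:
        drain(node)    # maintenance mode on, evacuate VMs via Live Migration
        update(node)   # install missing updates per the assigned baselines
        restore(node)  # maintenance mode off; then move to the next node

# Record the order of operations against a hypothetical two-node cluster.
log = []
remediate_cluster(
    ["node1", "node2"],
    drain=lambda n: log.append(("drain", n)),
    update=lambda n: log.append(("update", n)),
    restore=lambda n: log.append(("restore", n)),
)
print(log)
```

The key property is that only one node is ever out of service at a time, which is why the cluster stays available throughout.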

Dynamic Optimization is another new feature of VMM 2012. It keeps a cluster balanced for resource usage; Live Migration avoids VM downtime and the feature does not require Operations Manager. It supports Hyper-V, VMware and Citrix XenServer clusters.

DO has two modes – manual and automatic, with the default being manual. The feature optimises for CPU, memory, Disk I/O and Network I/O. It optimises when resource usage goes above the DO threshold. There is a configurable level of aggressiveness … more aggressive = more migrations = more balanced. The default is least aggressive.
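My reading of the threshold behaviour can be sketched as follows (an illustration of the idea, not VMM's actual algorithm): a lower threshold is more aggressive, so more hosts become migration candidates.

```python
# Hypothetical illustration of the DO threshold: hosts whose resource
# usage exceeds the threshold become candidates for rebalancing migrations.

def hosts_to_balance(usage_by_host, do_threshold):
    """Return hosts whose usage fraction (0.0-1.0) exceeds the threshold."""
    return [h for h, u in usage_by_host.items() if u > do_threshold]

usage = {"host1": 0.92, "host2": 0.40, "host3": 0.75}
print(hosts_to_balance(usage, 0.85))  # least aggressive → ['host1']
print(hosts_to_balance(usage, 0.60))  # more aggressive → ['host1', 'host3']
```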

There is also Power Optimisation, which extends DO and can only be enabled if DO is in automatic mode. It optimises for the same resources as DO, but acts when resource usage goes below the PO threshold: it powers physical hosts off and on when it can move their guests elsewhere. It evacuates a host before powering it off and ensures that the evacuation will not push other nodes above the DO threshold, and that powering off will not violate cluster quorum requirements. It leverages out-of-band management for power off/on.

I need to follow up with Microsoft on the cluster quorum requirements because as I understand it, an even-node cluster requires a file share witness, whilst an odd-node cluster doesn’t … so if you turn off any node in a cluster, you are changing the quorum requirements!
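For what it's worth, the arithmetic behind that concern is easy to sketch. With majority-based quorum, a cluster needs more than half of its votes online, and the file share witness exists precisely to add one vote to an even-node cluster (this is the simplified node-majority model, not the full Windows Server clustering rules):

```python
# Back-of-the-envelope quorum arithmetic for a majority-based cluster.

def surviving_failures(total_votes):
    """How many voters can fail while a majority remains online?"""
    majority = total_votes // 2 + 1
    return total_votes - majority

print(surviving_failures(4))  # 4 nodes, no witness: only 1 failure tolerated
print(surviving_failures(5))  # 4 nodes + file share witness: 2 failures
print(surviving_failures(3))  # 3 nodes, no witness needed: 1 failure
```

So powering off a node does shift the numbers, which is why the quorum check before a PO power-off matters.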

SCVMM 2012: Storage Overview

VMM aims to expose a common model for storage across different arrays, with end-to-end visibility of storage as it relates to hypervisor hosts. The aim is to allow IT to do more, providing deep integration into the UI and PowerShell with a minimal learning curve, streamlining storage tasks across different arrays, taking advantage of more advanced storage features.

That said, VMM is not a storage resource manager. There is no value in trying to replace partner-specific tools, it is not possible to keep up with new capabilities, and attempting to be an SRM product would mean that VMM would not ship on time!

What this functionality offers the administrator is the ability to control what host groups can access in terms of available storage logical units and available storage pools.

The standard used by VMM is SMI-S and the four companies announced as supported so far are EMC, HP, HDS and NetApp.

There is support for VDS but it is largely deprecated with the future focus being on SMI-S.

For me, this presents quite a challenge if I want to use VMM to manage the storage used with the VMs because I’ve now got to make sure that the storage is “compatible” with VMM. There’s no real news about Dell, my preferred supplier, which makes things extra tricky. It may be that I’ll have to stick with something like Dell’s MD3000i array, which supports VDS, wait a few years until there is more clarity around SMI-S and VMM’s storage capability, and change to an SMI-S array at that time.

Having said all of that, it looks like it might be possible to get hold of an SMI-S provider for Dell’s MD arrays – both iSCSI and DAS!

… however, that appears to be for an early version of the software that Dell were working on. There seems to be a newer version from what I can gather in a manual I found:

but I haven’t been able to find the corresponding download. I am encouraged, though, that it should be technically feasible to control a Dell MD array from VMM 2012 so the hunt continues!

SCVMM 2012: Overview of Networking

Just as an aside, it is worth noting that VMM 2012 has the following user role profiles:

  • VMM Admin
    • Scope: Entire system
    • Can take any action
    • Can use Administrator console or PowerShell
  • Delegated Admin
    • Scope: host groups and clouds
    • Set up fabric by configuring hosts, networking and storage
    • Create cloud from physical capacity
    • Assign cloud to self-service users
    • Can use Administrator console or PowerShell
  • Self-Service User
    • Scope: clouds only
    • Author templates
    • Deploy/manage VMs and Services
    • Share resources
    • Revocable actions
    • Quota as a shared and per-user limit
    • Can use Administrator console, PowerShell and Self-service portal
  • Read-only Administrator
    • Scope: host groups and clouds
    • No actions

Network Fabric Management

  • Define logical networks using VLANs and Subnets per datacentre location
  • Address management for static IPs, Load Balancer VIPs and MAC addresses
  • Automated provisioning of Load Balancers

A logical network is the abstraction of the physical network infrastructure, which allows you to model the network based on business needs. You can use them to describe networks for different purposes, e.g. traffic isolation, provision network for different SLAs.

It can span host groups in different locations with different IP subnets or VLANs. For each IP subnet/VLAN, it is possible to define IP pools of addresses to be used by VMM. Pools can contain IPv4 addresses or IPv6 addresses but not both.

An IP pool consists of a range of addresses, which is then described in terms of static IPs, reserved IPs and virtual IPs. Once the pool is defined, when a new VM is created, an IP address is checked-out. When the VM is deleted or migrated, the IP address is checked-in.
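The check-out/check-in lifecycle is easy to picture with a small sketch (illustrative only, not VMM's implementation; the addresses and VM names are made up):

```python
# Minimal sketch of an IP pool with check-out on VM creation and
# check-in on VM deletion or migration.

class IPPool:
    def __init__(self, addresses):
        self.available = list(addresses)   # the defined range of the pool
        self.assigned = {}                 # address -> VM name

    def check_out(self, vm):
        """Assign the next free address to a new VM."""
        if not self.available:
            raise RuntimeError("IP pool exhausted")
        ip = self.available.pop(0)
        self.assigned[ip] = vm
        return ip

    def check_in(self, ip):
        """Return an address when its VM is deleted or migrated away."""
        del self.assigned[ip]
        self.available.append(ip)

pool = IPPool(["192.168.1.10", "192.168.1.11"])
ip = pool.check_out("vm01")
pool.check_in(ip)  # vm01 deleted: the address goes back to the pool
```

The MAC address pools described below work the same way: define a range, associate it with a host group, check out on VM creation and check in on deletion.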

The virtual IPs are used for load balancers; they are similarly checked out from the IP pool. Adding a load balancer to VMM requires a PowerShell provider. Once the provider has been added, a load balancer is defined through its connection properties and the connection validated. A VIP template is then defined in terms of the protocol, LB method, persistence and health monitors. There is support for F5, Citrix and Brocade, along with Microsoft’s NLB. There will also be a published interface if you want to develop your own PowerShell provider.

VMM also supports MAC Address Pool management. You define the MAC range, associate it to a host group and then, when a VM is created, a MAC address is checked out and when the VM is deleted, the MAC address is checked in.

SCVMM 2012: Bare Metal Deployment in Action!

As promised, here is a bare metal deployment in screenshots as initiated from SCVMM 2012. One thing to note is that the bare metal server must be configured to have network booting as the first option so that an unattended PXE boot can be initiated.

In part 2, I covered the steps required in VMM to initiate a bare metal deployment. The following screenshots show what happens on the bare metal server once VMM has kicked off the job.


So the first thing that the host does is a PXE boot. Once that is successful (and you may need to review this post), the host starts to transfer the boot file from the WDS server:


This allows the server to boot into WinPE:


and the VMM bare metal deployment starts:


The principle behind bare metal deployment is that VMM actually deploys a VHD rather than installing the OS onto the raw hard drive.



Once that is done, there is a customisation stage:


and the enabling of the Hyper-V role:


(Remember that this is the bare metal deployment of a new Hyper-V host because this is VMM doing this)

The install then cleans up …


and the host reboots. Although the host is configured to try PXE booting first, the WDS server refuses the PXE boot so the hardware then continues to boot to the hard drive.


From here on, it is a standard OS installation.





One more reboot …


… and the server finally finishes with a complete installation of Windows Server 2008 R2 SP1 and, in my case, already joined to the domain.

Very painless and very fast – the above deployment took about 30 minutes.

SCVMM 2012: Getting WDS to work!

In SCVMM 2012- Preparing for Bare Metal Deployment, part 3, I looked at setting up WDS as one of the key parts to getting bare metal deployment to work.

One of the screens in the WDS configuration wizard is the PXE server initial settings:

In the blog posting, I said to leave that setting as “Do not respond to any client computers”. I said this because I was under the impression that the provider that VMM installs onto the WDS server would cause WDS to behave in the way it needs to behave for VMM.

That is not the case.

Indeed, it seems that setting the option to the middle choice – respond only to known client computers – is not the correct option either despite the fact that Microsoft explained that when the bare metal server does a boot from PXE, the PXE server talks to VMM to authorise the PXE boot.

In testing, it looks like the only way WDS will respond is if you set the option to the last choice – respond to all client computers (known and unknown). For my testing, I also selected the checkbox: require administrator approval for unknown computers. That way, you won’t suddenly get a bunch of systems trying to boot off your WDS server!

The setting can be changed retrospectively from the WDS console by right-clicking on the server, choosing Properties and then selecting the PXE Response tab:


I’ll post a separate blog entry showing the various stages that a bare metal server goes through as the deployment proceeds, but hopefully the above change will get everything going for you.

SCVMM 2012: Protecting with DPM

The whole of System Center is getting a revamp this year, so we aren’t just looking at a new version of Virtual Machine Manager, but also a new version of Data Protection Manager.

In DPM 2012, there is support for protecting VMM 2012 and 2008 R2, along with item-level recovery of VM contents even when DPM is running inside a VM (it used to require a physical host so that it could use the Hyper-V role) and rapid block-level backups of VMs running on stand-alone hosts.

As ever with DPM, you can protect at the host level or the guest level.

If you back up at the host level, you can protect or recover the whole machine. You can protect non-Windows servers and line-of-business applications without VSS writers. However, there is no granularity of backup – it is the whole thing.

By comparison, if you back up at the guest level, you protect or recover specific data, e.g. a SQL Server database, Exchange, SharePoint, etc. It is equivalent to protecting a physical version of that server.

Backing up VMM provides full application backup of the VMM database to disk and tape, and supports original location recovery and restore as files to a network location.

DPM seamlessly protects Live Migrating VMs on CSV (cluster shared volume) clusters. However, in my experience, for this to work optimally, the storage hardware must support VSS. Without that support, DPM can only back up through the node that “owns” the CSV storage. Either way, the VM is backed up regardless of which node in the cluster hosts the VM.

For recovery, you can restore the VM back to the original host or cluster, or you can restore the VM to a different host or cluster, or you can perform item level recovery (individual files from within the VHDs) to a file share.

If you have primary & secondary DPM sites and the primary site goes down, the DPM admin switches protection to the DPM DR server and backup & recovery of production servers continues seamlessly. DPM does a good job of bare metal recovery as well.