May 13, 2011

Well, it’s been a while since I wrote a new blog post, mostly due to a lack of time. Looking back at the last 6 months or so, there were some serious personal issues, but also a lot of studying of new products. So what happened lately besides my personal problems?

Starting from the beginning: last year I studied for and obtained my certification for Cisco UCS implementation, a new compute hardware platform. Since my company has a certain preference for Cisco, and given my background in Microsoft/compute technology, I was asked to join this new “adventure”. I have to admit, while I manage some HP blades in our lab environment, UCS is pretty cool. I love those service profiles: with just a few mouse clicks I can switch a profile and boot up a whole other OS from my SAN environment. We only recently started using UCS in our lab this way, but I think it could greatly benefit us; in fact, I truly believe many of our customers can benefit from such stateless computing systems. If a blade server fails, just replace it, set the profile and let it boot again. Or, when you have a spare blade, UCS will automatically move the service profile from the failed system to the spare system. For your end users there will be a small disruption, but it won’t take ages before it’s repaired.

I’m not going into too much detail about UCS, but I can surely recommend it to you. Also check out the great and continually improving simulator Cisco offers free of charge.

Anyhow, besides this I was also asked to get my VMware certification. This basically has to do with the UC, or Cisco Unified Communications, product line (just like UCS, by the way). Although I’m not a voice guy and not planning to become one, Cisco did make it possible to virtualize the UC environment on VMware. So to support my colleagues I followed the VMware training for the VCP4 examination, which I passed a couple of months later. I have to admit, this was probably one of my toughest exams ever, and as such I’m pleased I can call myself a VCP. Like I said, we are currently setting up a lab/demo environment with UCS, and of course VMware is one of the products we just set up. Besides this we also installed and configured Hyper-V, but that was truly a pain to configure. Well, at least it was a pain for me as a VMware engineer 🙂 Simple tasks, like adding a shared LUN, take different tools in different locations to complete. Maybe it’s because I don’t have a lot of experience with it yet, so right now I’m actually reading a book about it.
In the near future we also want to implement XenServer so we have multiple virtualization products running on our storage, all on our 4 UCS blades.
VDI will also be configured on all those platforms, since our customers are asking for it.

If you think I’m done, well, think again. Besides reading and studying all the products above (which I still do), I’ve also done training for NetApp. IMHO, if you know just VMware, you should also know how storage works. In the past I always thought it was just a bunch of disks with some form of connectivity like FC or Ethernet, and I didn’t see any fun in it. I couldn’t care less about a bunch of disks; besides, I basically hate hardware, especially when issues arise. Hardware should just work, nothing more, nothing less.
But after my recent NetApp certification path I actually can say I enjoyed it very much.
There are a lot of thoughts going through my head when I think about possible future implementations and configurations. What will I do for a configuration with VMware: NFS or a LUN?
Why choose FC if there’s no historical investment in FC present? In fact, with UCS 2.0 you can even boot from iSCSI, so FC isn’t needed anymore for completely stateless computing. All this and much more is what I’ve been thinking about over the last months, and every time I feel a little smile when I think about it. Where I previously enjoyed security, I foresee that I’m going to switch my love. Certainly I won’t give up my interest in security, but virtualization from A to Z is IMHO the thing I want to do.

For a couple of weeks now, we’ve been working to win some important customers for our private cloud ideology. This might become a great start, and I might blog more about it.

For now, I’m loving it 🙂

Dec 14, 2010

As one of the long-term moderators at the Petri website, I often see questions that could easily be looked up or answered. A lot of these questions are about protocols such as FTP, HTTP or DHCP.

I know for sure that the vast majority of IT professionals are already aware of this, and I’m sure most of them just look the questions up on Google, for example. And maybe the more experienced IT professionals will read the Requests for Comments, also called the RFCs.

But it’s the less experienced people I’m currently targeting, because I think RFCs are a very important resource when you work with protocols.

But what actually is a Request for Comments?
Wikipedia will tell you: in computer network engineering, a Request for Comments (RFC) is a memorandum published by the Internet Engineering Task Force (IETF) describing methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems.

In other words, it’s a document which describes standards for an important part of our job, for example protocols like DHCP, HTTP and FTP.

OK, let’s take DHCP as an example. I often see questions asking whether it is possible to force a client to use a certain DHCP server. Well, the answer is no, since DHCP uses a broadcast mechanism to find a DHCP server. The first one that responds will serve the IP address.

So let’s take a look at RFC 2131, which describes the Dynamic Host Configuration Protocol, or DHCP.
As you can see, it’s a document of about 45 pages. I’m not going to tell you how to read it, but I’ll show you where you can find the answer to the question above.

If you skip to page 13, section 3.1, you’ll find the following text: The client broadcasts a DHCPDISCOVER message[…]
Errr? But what does it do? Well, if you scroll a bit further you’ll find a short definition of DHCPDISCOVER, namely: DHCPDISCOVER – Client broadcast to locate available servers.
So they are actually telling you that the client sends out a certain packet to find DHCP servers. This means there is no option available to select a certain server. Of course you can force it by temporarily disabling the other DHCP servers, but that is not the point of this post. The point is that a lot of such questions are documented very, very well.
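To make that concrete, here is a minimal sketch (my own illustration, not code from the RFC) of what a DHCPDISCOVER message looks like when you lay it out in Python following the RFC 2131 message format. Notice that the fixed-format header simply has no field in which the client could name a preferred server — which is exactly why you can’t target a specific DHCP server.

```python
import struct

def build_dhcp_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
    """Build a minimal DHCPDISCOVER message following the RFC 2131 layout."""
    packet = struct.pack(
        "!BBBBIHH4s4s4s4s",
        1,            # op: 1 = BOOTREQUEST (client to server)
        1,            # htype: 1 = Ethernet
        6,            # hlen: MAC address length
        0,            # hops
        xid,          # transaction ID, chosen by the client
        0,            # secs
        0x8000,       # flags: broadcast bit set
        b"\x00" * 4,  # ciaddr: the client has no IP address yet
        b"\x00" * 4,  # yiaddr
        b"\x00" * 4,  # siaddr
        b"\x00" * 4,  # giaddr
    )
    packet += mac.ljust(16, b"\x00")   # chaddr: client hardware address (16 bytes)
    packet += b"\x00" * 64             # sname
    packet += b"\x00" * 128            # file
    packet += b"\x63\x82\x53\x63"      # magic cookie (RFC 2131, section 3)
    packet += b"\x35\x01\x01"          # option 53 (DHCP message type) = 1: DHCPDISCOVER
    packet += b"\xff"                  # end option
    return packet

# The client then sends this to the broadcast address, roughly like:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
#   sock.sendto(packet, ("255.255.255.255", 67))  # every DHCP server on the subnet sees it
```

Every DHCP server on the subnet receives the same broadcast, and whichever answers first with a DHCPOFFER wins.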

A nice scheme of how the DHCP process works can be found on page 14. If you scroll a bit further, you’ll also find the explanation of the process.

The same of course applies to HTTP/1.1, which has been the current standard since about 1997. The RFC for HTTP is RFC 2616. This RFC consists of 176 pages. That’s quite a lot, but knowing them is very useful, especially when you need to do advanced troubleshooting.

I’m not saying you need to remember each of them, but you do need to know where to find them. Just remember the website, ietf.org (the acronym of the Internet Engineering Task Force), where all those documents can be found, or use Google to find them 🙂

However, this is not the only publisher of standards. Another one is the IEEE, the Institute of Electrical and Electronics Engineers. This one is a more commercial organization, but also extremely important.

For example, I hope you have heard of VLAN tagging. If not, please read this Wikipedia link.
Anyhow, VLAN tagging is defined in the IEEE 802.1Q standard. For all the 802.1 standards you can follow this link. Those documents basically provide the same kind of information as the IETF’s.
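As a small taste of the kind of detail 802.1Q specifies, here is a sketch (my own illustration of the wire format, not text from the standard) of how the 4-byte VLAN tag sits inside an Ethernet frame: the TPID value 0x8100 in the EtherType position, followed by the Tag Control Information carrying the priority bits and the 12-bit VLAN ID.

```python
import struct

def parse_dot1q(frame: bytes):
    """Return (vlan_id, priority) if the Ethernet frame is 802.1Q-tagged, else None."""
    # Bytes 12-13 of an Ethernet frame hold the EtherType; 0x8100 marks a VLAN tag.
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != 0x8100:
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)  # Tag Control Information
    priority = tci >> 13        # 3-bit PCP (priority code point)
    vlan_id = tci & 0x0FFF      # 12-bit VLAN identifier
    return vlan_id, priority

# Example: a frame tagged with VLAN 100, priority 5 (headers only, payload omitted)
frame = b"\xff" * 6 + b"\xaa" * 6 + b"\x81\x00" + struct.pack("!H", (5 << 13) | 100)
print(parse_dot1q(frame))  # → (100, 5)
```

Exactly this bit layout — which bits are priority, which are the VLAN ID — is what you find spelled out in the 802.1Q document itself.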

Both the IETF and the IEEE are extremely important in current networks. I really suggest reading some of their documents to get an impression of what they are and what they do. I think it will give you great insight into the protocols and other network standards.

In fact, just a few days ago I actually used RFC 959, which describes the FTP protocol. So whether you’re an advanced or a novice IT professional, it really doesn’t matter: we all use them, and if not, we all should.
Since I got the feeling that RFCs don’t get the attention they deserve, I felt like bringing them back under attention again.

Dec 03, 2010

Like Microsoft ISA Server, the Configuration Storage Server (CSS) in TMG uses ADAM (Active Directory Application Mode) to store the configuration. When creating a replica of the CSS, ADAM is also used to replicate the data.

But what if the primary fails and you have to reinstall the server? Well, in that case you can still point the firewall to the replica CSS. However, when installing a new replica of the secondary CSS, you will run into issues with ADAM. One of the issues you might get looks like this:

Event ID: 2091

Ownership of the following FSMO role (Operations Master role) is assigned to a server which is deleted or does not exist.

Operations which require contacting a FSMO role owner will fail until this condition is corrected.

Because of this error, the roles need to be transferred to another CSS server. There are two possible ways to do this: 1) transferring the role or 2) seizing the role. It’s actually just like Active Directory: seizing is something you only do when the previous FSMO holder isn’t available anymore. If it is still available but you want to replace that server, you should use the transfer method.

But how do you do this in a Forefront environment?

Let’s say we have two ISA servers and we want to add an additional CSS on a different computer, with the computer names CSS01, ISA01 and ISA02. CSS01 will become the primary CSS, and we want to decommission the current primary CSS running on ISA01.

First of all, let’s tackle the easy part. In the ISA or TMG management console, right-click the array and simply change the configured primary CSS to the secondary or replica CSS: so instead of ISA01 as your primary CSS, change it to CSS01. After this is done, you need to move the FSMO roles to CSS01.

Okay, first you need to start the ADAM Tools Command Prompt. Click the Start button, go to All Programs >> ADAM, and there you’ll find the ADAM Tools Command Prompt. Basically it opens a new command prompt with its starting point in the C:\Windows\ADAM folder. These tools are installed when you install a CSS on a computer.

Once you are at the command prompt, follow this procedure:

  1. Open an ADAM tools command prompt on ISA01 or ISA02.
  2. At the command prompt, type: dsmgmt.exe
  3. At the dsmgmt: command prompt, type: roles
  4. At the fsmo maintenance: command prompt, type: connections
  5. At the server connections: command prompt, type: connect to server CSS01.domain.local:2171

The ADAM port used by ISA or TMG is 2171, so take note of this. Otherwise it will try to connect to port 389, which is the default LDAP port for ADAM or AD.

Once connected, you can transfer the roles, if that is still possible. To transfer them, follow the procedure below.

  1. At the server connections: command prompt, type: quit
  2. At the fsmo maintenance: command prompt, type: transfer naming master
  3. At the fsmo maintenance: command prompt, type: transfer schema master

And you’re done! If all went well, the roles are transferred; if not, you will get error messages in your command-line window. OK, that’s one part, but what if ISA01 had issues with its CSS? For example, if objects are tombstoned or corrupted in any way. Or maybe ISA01 has crashed and cannot be recovered anymore. Or what if you tried to transfer the role and received a warning like this:

Event ID: 1837

An attempt to transfer the operations master role represented by the following object failed.

In that case you can seize the FSMO roles instead of transferring. To do this follow the procedure below:

  1. At the server connections: command prompt, type: quit
  2. At the fsmo maintenance: command prompt, type: seize naming master
  3. At the fsmo maintenance: command prompt, type: seize schema master

If you want to add ISA01 again as a CSS, simply install the Configuration Storage Server again as a replica and you’re done.

Dec 02, 2010

Recently I came across something interesting while building a new ISA environment in which the Firewall Client will be the main client type. Almost all traffic needs to be authenticated before being sent to its destination. Since the Firewall Client is designed for that (the traffic is not just HTTP(S) and FTP over HTTP), we advised installing the Firewall Client on every Citrix server and client.

However, during some initial testing I noticed something weird. Although some of the traffic was being balanced, I noticed that Firewall Client traffic wasn’t balanced at all. At first I was really stumped and didn’t know where to start troubleshooting. I checked, double-checked and triple-checked the configuration to make sure everything was set correctly. I even had a colleague of mine check it again, to prevent myself from thinking in circles. Still, I couldn’t find any weird configuration issues, and neither could my colleague.


Note: the screenshot was made during off-hours, so SecureNAT also looks imbalanced at the moment. This is not important for this article 😉

So what happened here? In the end, Jason Jones pointed me to this Microsoft article, where Microsoft states:

Load balancing is not supported with Forefront TMG Clients or ISA Firewall Clients

Issue: Client machines running Forefront TMG Clients or ISA Firewall Clients may have issues connecting to an array of Forefront TMG servers with any type of load balancing configured on the related Forefront TMG network.

Cause: Load balancing (either integrated or using an external load balancer) is not supported together with Forefront TMG Clients or ISA Firewall Clients.

Solution: Instead of using a load balancer, use DNS round robin to point the clients to the Forefront TMG array member’s dedicated IP addresses.

Hmmm, this is not fun. Why should you use DNS round robin? Is this by design? Why is that? After further investigation and talking with Jason Jones, I heard the following:

The FW client uses a control channel to facilitate authentication and communication with the TMG firewall. For proper operation, Firewall Clients must therefore be configured to communicate directly with the TMG firewall’s dedicated IP address (DIP) not the VIP.

Jason Jones has been an MVP for Microsoft Forefront for a pretty long time, so I’ll trust him. Personally, I’m not very fond of using DNS round robin for balancing like this, even if it is forced by the design of the Firewall Client. In my opinion, Microsoft should address this “issue”.

Oh, before I forget: the reason I have a problem with this is that I see an issue coming up when one node fails. Just imagine this:

A FWC client is configured to use a DNS name to connect to the ISA or Forefront TMG array, with a single DNS record holding the dedicated IP addresses of both array members.

The client will look up an IP address for that DNS record and may receive the address of the failed host. The FWC then tries to connect to the failed ISA server, and if the host doesn’t respond, you won’t be connected. I’m not sure exactly what would happen, since this is truly new for me, but I guess you’ll never get connected until you’re lucky enough that the client receives the IP address of the functioning ISA/TMG node.
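The failure mode I’m worried about can be sketched with a toy simulation. This is purely my own illustration, with made-up names and IP addresses: round robin hands out the array members’ dedicated IPs in turn, with no knowledge of which node is down, so a naive client that resolves once keeps landing on the dead node half the time.

```python
from itertools import cycle

class RoundRobinDNS:
    """Rotates through the A records for a name, like DNS round robin does."""
    def __init__(self, records):
        self._records = cycle(records)

    def resolve(self, name):
        # DNS has no idea whether the host behind a record is alive.
        return next(self._records)

def firewall_client_connect(dns, name, alive_nodes):
    """Resolve once and try that node, as a naive Firewall Client would."""
    ip = dns.resolve(name)
    return ip, ip in alive_nodes

# Hypothetical dedicated IPs for the two array members
dns = RoundRobinDNS(["192.168.1.11", "192.168.1.12"])
alive = {"192.168.1.12"}  # the first node has failed

for _ in range(4):
    ip, ok = firewall_client_connect(dns, "tmg-array.domain.local", alive)
    print(ip, "connected" if ok else "NOT connected")
```

In this sketch, every other lookup hands out the dead node’s address, which is exactly why a real load balancer (which health-checks its members) would feel so much safer here than plain round-robin DNS.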

I’m forced to use this configuration for one of my clients, but trust me, it doesn’t really feel good. 😉

Nov 12, 2010

While building some basic VMware training, I figured I needed a lab environment for my colleagues to test things out. For this I simply created an empty VMware template in ESX and imported it into VMware Lab Manager (did I already say that I love this product?).

So after creating a basic environment containing a DC, a vCenter server, an XP machine and 2 ESX servers, I wanted to install ESX 4.0. However, during this process I received the following error:

“Could not format a vmfs volume.” At first I thought, “What the hell? It’s just a VMDK file… just format that bloody…” So of course I started shouting at my computer, like every technical engineer would do. Luckily I was working from home over a VPN connection to our lab, so no one could hear me. 😉


I was a bit intrigued by it… what went wrong? I thought this could become interesting. So after I regained my senses, I popped open a new browser tab and went to Google. After a while I found a blog post stating that it might have to do with NFS storage. However, I’m running this on our FC SAN environment, so although that might be an issue, it wasn’t the exact issue I had. It did make me think, though.

VMware Lab Manager uses linked clones. What if… what if that was causing the issue? So I created a simple lab with just one ESX server in it, and I enabled the “Full Clone” option.


OK, that seems to work: I could install ESX! But now what? Can I still use the “capture to library” option to capture and share my setup with my colleagues? The problem is that there is no option to do a full clone of the ESX servers when I choose to clone to workspace. It states: “Create a Linked Clone of All Virtual Machines or Selected Virtual Machines”

Nope, that didn’t work either.

OK, but then what? How can I create a virtual test environment to teach my colleagues some VMware stuff without sending them to expensive training? They don’t need to certify themselves, they only need to know the basics…

But there is still another option: what if I use Archive to Library instead of Capture to Library and then share it? That might work out, since here I do get an option to create full clones. I could also share this one, and in this case you won’t have an issue with customizations and such either.


So creating the archive is what I did. After a while (enough time to drink some coffee) it was finished. But now what? I still couldn’t use it.

I still needed to deploy it to my workspace in order to get it working, so from my library I chose the option “Clone to workspace”.

And hey, now I get an option to do full clones. That looks promising, doesn’t it?


Testing this setup brought me a “Hurray!” moment, because it passed the 10% error limit 🙂 And yes, it did finish the installation.


It took me about 2 hours to solve and test this, shouting included of course. 🙂 Time for a nice cup of coffee.

Anyhow, to recap the issue: the problem that the VMFS volume couldn’t be formatted lies in the fact that I was using linked clones of an original ESX configuration. Somehow ESX didn’t like that and crashed. Full clones, however, work fine, though you can understand that this may become an issue when you lack storage.