Friday, August 23, 2013

Exam 70-498 : Delivering Continuous Value with Visual Studio 2012 Application Lifecycle Management

 

I finally got the time to go and write the “Delivering Continuous Value with Visual Studio 2012 Application Lifecycle Management” exam.

I was quite nervous about this one, as it has been a while since I have written any form of exam, and because it is very “non-technical” in focus. These “fuzzy” questions can often be very misleading.

Luckily I passed, fairly well actually, so I thought I would jot down some of my crib notes…

1) Know the TFS process templates

Especially the terminology and the artefacts that are included in the different process templates. The questions are almost a matter of disqualifying the incorrect answers and then you are left with what can only be the correct ones.

2) Have a good grasp on the methodologies / processes

Especially scrum! The scrum guide is a fairly concise guide and small enough to read in one sitting (even for me!), so there is no real reason not to work through it in any case.

Once again, have a good grasp of the CMMI, Agile and Scrum terminology, artefacts and processes.

3) Read

I really recommend “Professional Scrum Development with Microsoft Visual Studio 2012” by Richard Hundhausen. Even if you are not going to take the exam, it is a very good read indeed.

Have a look online; there are a lot of brain dumps available for this exam. I would take a look at the questions, but really scrutinise the answers. I looked over a couple and there are definitely plenty of wrong answers provided to the questions. Just be careful and don’t learn the answers off by heart!

4) Work through the free jumpstart

Yes, there is actually a free jumpstart for this exam. Definitely worth spending some time on.

 

Done and dusted!

Good luck if you are going to give it a go.

Friday, August 16, 2013

Why move from VSS to TFS (Very Sore Safe to Truly Fantastic Server)

Let me give you a hint: Not only is it faster, it’s also more reliable! (There, blog post done : )

Let me expand on the above:

It’s fast!

Seriously, a lot faster.

Anybody who has ever had to sit and pay the VSS tax while dreaming of a post-work beer, waiting for a history lookup, a search, or especially a “View Difference”, will know what I mean.

There is a great difference in architecture between the two. I’ll discuss a few of the differences to give you an idea of why you should consider moving.

Storage:

TFS uses a SQL Server database to store Team Project Collections; VSS uses the file system. So how is this better?

· Size – (Yes, it does matter) VSS can reliably store up to about 4GB; TFS can go into terabytes

· Reliability – Ever had a network error while checking in on VSS? You’re left with corrupt files and a caffeine overdose. TFS wraps each check-in in a transaction which is rolled back if there is an error

· Indexing on the tables means faster searches – Did I mention TFS is faster?

· And of course, with a database as your data store, you get all the usual goodies like mirrored and clustered DBs for TFS, so you never have to lose anything or have any downtime!
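The transactional check-in point is easy to illustrate. Here is a minimal sketch in Python, using SQLite as a stand-in for TFS’s SQL Server store (the table, file names and helper function are all hypothetical, purely for illustration):

```python
import sqlite3

# An in-memory database standing in for the project collection store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkins (file TEXT NOT NULL, contents TEXT NOT NULL)")

def check_in(files):
    """Commit all files atomically; a failure mid-way rolls everything back."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            for name, contents in files:
                if contents is None:  # simulate a network error mid check-in
                    raise IOError("connection dropped while uploading " + name)
                conn.execute("INSERT INTO checkins VALUES (?, ?)", (name, contents))
        return True
    except IOError:
        return False

# A failed check-in leaves nothing behind -- no half-written, corrupt state.
assert check_in([("a.cs", "class A {}"), ("b.cs", None)]) is False
assert conn.execute("SELECT COUNT(*) FROM checkins").fetchone()[0] == 0

# A successful check-in commits both files together.
assert check_in([("a.cs", "class A {}"), ("b.cs", "class B {}")]) is True
assert conn.execute("SELECT COUNT(*) FROM checkins").fetchone()[0] == 2
```

Contrast that with VSS, where each file is written to the file share one by one, so a dropped connection leaves the database half-updated.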

Networking:

TFS uses HTTP services vs. file shares (That should be enough said)

· Option of a TFS proxy for remote sites to save bandwidth and speed things up a little

· Did I mention that TFS is faster?

Security:

TFS uses Windows Role-Based Security vs. VSS security (I don’t think the methodology was good enough for someone to even come up with a name for it – I’ll just call it Stup-Id, there we go, you’re welcome ;)

Windows Role-Based Security vs. VSS’s Stup-Id:

· With Win Roles you can specify who’s allowed to view, check-out, check-in and lots more. With Stup-Id you can set rights per project, but all users must have the same rights for the database folder. This means all users can access and completely muck up the shared folders. Not pretty.

Extra functionality and pure awesomeness:

· Shelvesets – really handy for storing code that you don’t want to check in just yet. Say you go for lunch and you’re afraid that BigDog might chew up your hard drive again: all you do is shelve your code, which stores it on the TFS server. Once you’ve replaced said eaten hard drive you just unshelve and... tada! No need to say the dog ate my homework.

· Code review – Developer A can request a code review from another developer, who can add comments to the code and send it back. (Basically sharing a shelveset)

· Gated check-ins: You can set rules to only allow check-ins when certain conditions have been met. For example, only check in code when:

o the code builds successfully, or

o all unit tests have passed, or

o the code has been reviewed

· Work Items – bug/issue tracking made with love. It removes that nagging feeling at the back of your mind that one of these days a PHP or MySQL update will break your free open source ticketing/bug tracking system.

· Changesets – basically all the items that you’ve changed and are checking in. You can also associate changesets with work items for better issue tracking.

· Build automation – automate builds and deployments (How cool is this?)
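Conceptually, a gated check-in is just a predicate evaluated before the server accepts your changes. A toy sketch of the idea (in TFS this is configured on the build definition, not hand-written; the function and parameter names here are made up):

```python
def gated_check_in(build_ok, tests_passed, reviewed, require_all=False):
    """Accept a check-in only when the configured gates are satisfied.

    With require_all=False, any single passing gate is enough (as in the
    "or" list above); with require_all=True, every gate must pass.
    """
    gates = [build_ok, tests_passed, reviewed]
    return all(gates) if require_all else any(gates)

# The build and tests failed, but the code was reviewed -- accepted:
assert gated_check_in(build_ok=False, tests_passed=False, reviewed=True) is True

# A stricter policy requires every gate to pass -- rejected:
assert gated_check_in(build_ok=True, tests_passed=True, reviewed=False,
                      require_all=True) is False
```

The key point is that the gate runs on the server before the changeset is committed, so bad code never lands in the main branch at all.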

But for me the Pièce de résistance is:

Have you ever had a new developer change files outside of the IDE? Maybe clear the read-only attribute and make some changes? This completely confuses VSS and is a great way to get your source control out of sync. In TFS you can edit files outside of the IDE to your heart’s content, and TFS will pick the changes up and queue them for the next check-in.

How to move?

Google “Visual SourceSafe Upgrade Tool for Team Foundation Server” and follow the instructions.

And that is why TFS will make you happy. Better source control means better code quality, leading to happy customers, and maybe being the next Bill Gates (unless you wanted to be Guy Fawkes).


Thursday, August 15, 2013

The TFS Apprentice…

Welcome Dawie...the TFS Apprentice

Having joined TFC at the beginning of July 2013, Dawie Snyman has the unenviable challenge of becoming an expert in all things ALM and TFS.

Like many of my clients, TFS is completely foreign to him!

Dawie will be contributing to our blog, offering up a new perspective on TFS... that of the 'first time user'.

Looking forward to his insights…

See his first post over here..

Tuesday, August 6, 2013

Software Deployment (Part 2)

In the previous post I was discussing how one could go about packaging software to make the long journey from development into production.


In this post I will take a brief look at a couple of tools or applications that I have come across, to take those packages and automate their deployment. Using them will lower the friction and reduce the reliance on human (and possibly problematic) intervention.

 

Continuous Integration

Once again, we all know that continuous integration is a “basic right” when it comes to development environments, but it does not need to be limited to them. If you are using one of the numerous CI environments, extending it to deploy the packages from the previous post should be fairly simple.

I have done this a couple of times, to varying degrees of complexity, in TFS. It is possible to alter the Build Template to do pretty much anything you require. You can set up default deployment mechanisms and then, by simply changing a few parameters, point them at different environments.

I have done everything from database deployments and remote MSI installations to SharePoint deployments, using just the TFS Build to do all the work.

 

3rd Party Deployment Agents

InRelease

You must have heard by now: a very exciting acquisition by Microsoft was the InRelease application from InCycle. It basically extends TFS Build and adds a deployment workflow. It takes the build output (which could be anything that was discussed in the previous post) and, once again, kicks off a workflow that includes everything from environment configuration to authorisation of deployment steps.

In SAP they speak of “Transports” between environments, and this, in my mind, speaks to the same idea of transporting the package into different environments.

I’m really excited about this, and I can already see a couple of my clients making extensive use of it.


Octopus Deploy

Another deployment-focussed package that I have been following is Octopus Deploy (OD). OD works on the same premise as InRelease, having agents/deployers/tentacles in the deployment environment that “do the actual work”.

A key differentiator is that OD sources updates etc. from NuGet feeds, so you need to package your deliverables and then post them to a NuGet server. As I explained in the previous post, NuGet is a very capable platform, and with a number of free NuGet servers around, you can very easily create your “private” environment for package deployment.
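To give you a feel for it, packaging a deliverable for a NuGet feed is little more than a .nuspec file and a `nuget pack`. A minimal sketch (the id, version and file paths here are hypothetical, purely for illustration):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.WebApp</id>
    <version>1.0.3</version>
    <authors>MyCompany</authors>
    <description>Web application deployment package.</description>
  </metadata>
  <files>
    <!-- Everything under the build output goes into the package root -->
    <file src="bin\Release\**\*.*" target="" />
  </files>
</package>
```

Run `nuget pack MyCompany.WebApp.nuspec` and push the resulting .nupkg to your NuGet feed, where the deployment agents can pick it up.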

System Center

Do not forget System Center, or more specifically System Center Configuration Manager (SCCM). SCCM is a great way to push or deploy applications (generally MSIs) to different servers or environments. It is very capable in its own right and, more importantly (assuming you have packaged the software properly), can be set up, configured and managed by the ops team.