Wednesday, December 13, 2017

Tips and Tricks: Open Workitem in Visual Studio

Problem:

Getting annoyed when working in Visual Studio 2017 and each time you open a work item from the Team Explorer pane, it opens in the web?

Well, according to recent conversations I have had, it appears that not too many people know about the ability to set this behaviour.


Solution:

By default, VS2017 will open work items in the browser, but there is a way to change this. Open Visual Studio and select Tools > Options from the menu. Find the "Work Items" section, and under General you can change the behaviour.

Setting it to "Visual Studio (compatibility mode)" will open the work items in Visual Studio as before.

image


Be warned though: this option is due to be removed in the next major release of Visual Studio, where the default behaviour will be to open the work item in the browser. So use it while you can :-)



Monday, December 4, 2017

Tips and Tricks: User has Allow Delete work items, but no delete button on work item

Problem:

The user in TFS/VSTS has all the rights required to delete work items. The problem is that when you open a work item the "Delete" button is missing, and you do not have the option to delete in query lists.

When you look at the inherited rights you see something like this:

image

All indications are that the permission is allowed, but the end result is that it is denied.


Solution:

This may be because the user is in the "Stakeholder" access level. If you pay close attention to the "unavailable features" you will notice that deleting work items is one of the things a stakeholder can't do.

You can now either acquire a license for the user and move him/her into the Basic access level or higher, or the user will need to ask someone that does have those rights to perform the deletions.


Monday, November 27, 2017

Tips and Tricks: TFS Excel Plugin not loading

Problem:

Close an Excel spreadsheet, with or without a TFS/VSTS connected list. When you open the spreadsheet again, the Team tab is missing from the ribbon bar.

You then need to re-enable the plugin through Excel options to get the Team tab back.

This also disconnects your TFS/VSTS linked worksheets, causing you to have to reconnect or re-open a query to carry on working with the work items.


Solution:

  1. With Excel closed, open up the registry editor (regedit)
  2. Then navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\Excel\Addins and find the TFCOfficeShim entries:
    clip_image002
    There may be more than one, and the version number at the end may differ from the image above
  3. Remove/delete the “older versions” (I surely do not have to remind you to take a backup of your registry before you make any changes)
  4. Then find the “LoadBehavior” value in the remaining key and make sure that its value is 3
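If you prefer, the same fix can be applied by importing a .reg file. This is only a sketch: the exact key name under Addins (here TFCOfficeShim.Connect.16) is an assumption and depends on the version installed on your machine, so verify it in regedit first.

```reg
Windows Registry Editor Version 5.00

; Assumed key name - check the exact TFCOfficeShim entry on your machine first
[HKEY_CURRENT_USER\Software\Microsoft\Office\Excel\Addins\TFCOfficeShim.Connect.16]
; 3 = load the add-in automatically when Excel starts
"LoadBehavior"=dword:00000003
```

Save it as fix-team-tab.reg and double-click it to import, then re-open Excel.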

Now re-open the spreadsheet and see if it will load automatically.


Monday, November 6, 2017

Using Office UI Fabric to create a VSTS Extension

In the beginning

In the beginning god wanted to make earth. So, god found a cool looking bootstrapper and loaded up his favorite command line.
God then stepped through and downloaded the hundreds of packages needed to run the bootstrapper, and then the bootstrapper itself:

npm install create-planet -g

God created earth (create-planet earth) and all was good for a few days. God then decided he wanted to do more. Humans were missing.

npm install human

+ <human@0.0.1>
added 1702 packages in 5904.903s


God then referenced "humans" in earth.js and tried to compile.

Module not found: Error: Can't resolve sin

Ok, no biggy, npm install sin. Compile.

[at-loader] ./node_modules/@types/Inhabitants/index.d.ts:90:13
TS2403: Subsequent variable declarations must have the same type. Variable
'intellegentlife' must be of type 'Homosapiens', but here has type 'Neanderthal'


Ok, remove package Neanderthal. Compile.

earthpack error -ERROR in earth.js from UglifyUniverse Unexpected token: name (continentParts)

Earthpack.. where did that come from? Ok...
5 hours later, god found the Earthpack config and was able to fix the configuration, and the compile was good.

So god ran earth and was promptly told:
Universe.js Error: Invariant Violation: Minified World error #379.

Then there was a big bang.

And now…

I took on a project a while back to summarise and write a “quick start” series of posts for the ALM/DevOps Rangers to highlight the usage of Office UI Fabric. This meant that I needed to get my head around changing an existing extension from “simple” Typescript to React while incorporating Office UI components. As you may have surmised from the above analogy, I have spent way too much time trying to get things running the way that I want them to, based on someone else's concepts, intentions and bootstrapping.

If you are interested in the outcomes, please follow the series posted on the msdn blog:
  1. The start

Sunday, November 5, 2017

Containers on Azure as part of a CI/CD pipeline

In my previous posts I spoke about the journey from setting up a container to ultimately publishing it in a continuous fashion to a registry using VSTS.

You may have noticed that a lot of time has elapsed since my last post, and there are a couple of reasons for that.

First of all, work got in the way. Secondly, I noticed a trend where it became fairly popular to blog about the journey from where I left off, so I hung back and followed those for a while.

Instead of going ahead and creating a bunch of posts to show how to publish your container and run it in production, I’m going to hand it off to a bunch of other capable people :-)

To delve deeper, or just for more information, these are all good reads:


And if you are interested in a brief discussion on how to move to a microservices based architecture this is a good read : Modernizing a Monolithic Application using Microservices and Azure

 

Monday, January 30, 2017

Deploy Docker images to a Private Azure Container Registry

This post continues the journey of creating a dotnet application, containerizing and ultimately deploying the image to production.

The first thing we need to do is to get the source into a source repository (I’m of course going to use VSTS), then we need to configure a build and then push the images to a registry. We will then be able to deploy the images from the registry to our hosts, but more on that later.
Note: Some of these steps may incur some cost, so I would highly recommend at the very least creating a Dev Essentials account. This should cover any costs while we are playing.

I’m assuming you have already pushed your code to a repository in VSTS, so the next step is to create an Azure account, if you do not have one already, and then to set up a container registry.

To create your own private azure container registry to publish the images to:
  1. Login to azure
  2. Select container registries
    image 
  3. This should give you a list and you will need to click “add” to create a new container registry
  4. Fill in the required details and create a new registry
  5. Once created, open up the blade and select the Access Key settings. This should contain the registry name, login server and user name and password details (make sure the “Admin User” is enabled)
    image

Now let’s move on to VSTS.
First we need to “connect” VSTS and your container registry:
  1. Login to your VSTS project and under settings, select the services configuration:
    image
  2. Using the details that were in the Access Key settings on the Azure container registry blade, create a docker registry service with your “Login Server” as the docker registry url and the user name and password:
    image

Finally, it is time to create the builds. As you would expect, add a new “empty” build definition that links to your source repository. Instead of selecting the “Hosted” build queue, use the “Hosted Linux Preview” queue; Docker is not available on the normal hosted Windows agents yet.
Add 2 command line tasks and 3 docker tasks:
image

Note: If you do not have the docker tasks, you will need to install them from the marketplace.
Now configure the tasks as follows:

Command Line 1
  Tool: dotnet
  Arguments: restore
  Advanced/Working Folder: the folder that your source is located in. In my case it was $(build.sourcesdirectory)/dotnet_sample/

Command Line 2
  Tool: dotnet
  Arguments: publish -c release -o $(build.sourcesdirectory)/dotnet_sample/output/ (or an "output" folder under your source location)
  Advanced/Working Folder: see above

Docker 1
  Docker Registry Connection: the service connection that you created earlier
  Action: Build an image
  Docker File: the location of your dockerfile. In my case it was $(build.sourcesdirectory)/dotnet_sample/dockerfile
  Build Context: the location of your source code. In my case $(build.sourcesdirectory)/dotnet_sample
  Image Name: the name and tag that you want to give your image. In my case I just used dotnet_sample:$(Build.BuildId)
  Advanced/Working Folder: same as the other working folders

Docker 2
  Docker Registry Connection: the service connection that you created earlier
  Action: Run a Docker Command
  Command: tag dotnet_sample:$(Build.BuildId) $(DockerRegistryUrl)/sample/dotnet_sample:$(Build.BuildId)
  (the image name must be the same as in the task above, and $(DockerRegistryUrl) must be your Azure container registry url or login server)
  Advanced/Working Folder: same as the other working folders

Docker 3
  Docker Registry Connection: the service connection that you created earlier
  Action: Push an image
  Image Name: the name you passed in when tagging your container above. In my case it was $(DockerRegistryUrl)/sample/dotnet_sample:$(Build.BuildId)
  Advanced/Working Folder: same as the other working folders
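The five tasks above can be sanity-checked locally before queuing the build, since they map directly onto plain CLI commands. A sketch only: the registry url and build id below are placeholders standing in for your ACR login server and $(Build.BuildId).

```shell
REGISTRY_URL="myregistry.azurecr.io"   # placeholder: your registry login server
BUILD_ID="1"                           # placeholder: stands in for $(Build.BuildId)

cd dotnet_sample                       # the folder containing your source and dockerfile

# Command Line 1 and 2: restore dependencies and publish the app
dotnet restore
dotnet publish -c release -o output

# Docker 1: build the image from the dockerfile
docker build . -t "dotnet_sample:$BUILD_ID" --rm

# Docker 2: tag the local image with the private registry name
docker tag "dotnet_sample:$BUILD_ID" "$REGISTRY_URL/sample/dotnet_sample:$BUILD_ID"

# Docker 3: push it (run "docker login" against the registry first)
docker push "$REGISTRY_URL/sample/dotnet_sample:$BUILD_ID"
```

If this sequence works on your machine, the hosted build should behave the same way.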

Now you can save and queue the build. Hopefully it will look something like this:
image

If all has passed, a quick and easy way to see if your image is in your registry is to navigate to your docker registry’s catalogue url: “https://<<registry_url>>/v2/_catalog”. This will likely prompt you to login with the username and password that you set up previously, and then a json file will be downloaded. Opening this file will show you all the images hosted in your registry.
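If you would rather not read the downloaded file by hand, the catalogue endpoint just returns a small piece of JSON. A hedged sketch: the registry url and credential variables in the comment are placeholders, so here we work against a sample response of the same shape.

```shell
# To fetch the real thing (placeholders - substitute your registry url and credentials):
#   curl -s -u "$ACR_USER:$ACR_PASS" "https://myregistry.azurecr.io/v2/_catalog" > catalog.json
# For illustration, the response is shaped like this:
cat <<'EOF' > catalog.json
{"repositories":["sample/dotnet_sample"]}
EOF

# Print one repository name per line
tr ',' '\n' < catalog.json | grep -o '"[^"]*"' | tr -d '"' | grep -v '^repositories$'
```

If your pushed image shows up in that list, the build and push both worked.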

In this post we have moved from a locally created image to one residing in our private registry. In the next post we will continue the journey a bit further…

Wednesday, January 25, 2017

Windows joining in the containerization fun

So in the previous posts getting started, creating an application and configuring the container we saw how to install docker, create a sample application and deploy it and run it in a docker container.

Thus far the containers were a Linux flavor and, believe it or not, we were running a dotnet application on them.

With Windows 10 (1511 November update or later) or Windows Server 2016, and Docker Beta 26 or newer, it is possible to create windows containers.

In your system tray, right-click on the docker icon and then select “Switch to Windows Containers”

image

Wait for it to complete.

Once it has switched over, go back to the application that we created in <<link>> and edit the dockerfile.

Change the first line from FROM microsoft/dotnet to FROM microsoft/dotnet:nanoserver
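Putting that change together with the dockerfile from the earlier post, the Windows variant ends up looking like this (docker_sample.dll is the app name from the sample we built before):

```dockerfile
FROM microsoft/dotnet:nanoserver
WORKDIR /app
ENV ASPNETCORE_URLS http://*:5000
EXPOSE 5000
COPY ./output /app
ENTRYPOINT ["dotnet","docker_sample.dll"]
```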

Then, as before, run the following commands:


docker build . -t dotnet_sample --rm
docker run -p 80:5000 dotnet_sample

Then simply navigate to http://localhost/ and voila!! You are now running your same application in a windows based container!

If you are skeptical about what platform you are actually running on, download this sample, edit the dockerfile and go to the “Docker” tab

 

As much fun as this is, the goal behind using containers is not to simply play with it on your machine. We want to automate the creation and deployment of the containers.

Next up, I will show you how to use VSTS to automatically build and deploy to an Azure Container host.

 

Monday, January 23, 2017

Creating your first container

When you start dealing with docker you will notice a bunch of terminology being thrown at you. It is a good idea to at least skim the documentation and get a basic understanding about these terms.

That said, we are going to simply go through a bunch of steps which should give you a basic understanding. Here goes…

If you have your application ready to containerize, then the next thing you need is a dockerfile. The dockerfile is basically a setup file for your container.
Note: If you have the docker extensions installed in VS Code, you can open the folder containing your sample project, press “Ctrl+Shift+P” and then start typing docker. Select the “Add docker files to workspace” option and provide values for the prompts. This will generate a template for you:
image


Let’s create a dockerfile by simply creating a new file and naming it "dockerfile".
For the contents, we will start with something simple like this:
FROM microsoft/dotnet 
WORKDIR /app 
ENV ASPNETCORE_URLS http://*:5000 
EXPOSE 5000 
COPY ./output /app 
ENTRYPOINT ["dotnet","docker_sample.dll"] 
Let’s break this down:
FROM microsoft/dotnet
This is basically saying: if you look at the docker repository, there is an image by the name of "microsoft/dotnet", and I want that one as my base image. We can be more specific and add a tag (for example “microsoft/dotnet:1.0-runtime”) and it will get that one; in our case it will just get the latest image available. In fact, it is the same as saying “microsoft/dotnet:latest”.

WORKDIR /app
This is the working folder inside the container.

ENV ASPNETCORE_URLS http://*:5000
Here we are explicitly setting an environment variable in the container for your web app to use.

EXPOSE 5000
When creating a container, you can see it as a "closed system". The only way to expose things is to punch holes through a "firewall". Here we are saying: I have an application in the container and it is accessible through port 5000, so I want port 5000 open to the world.
For default web sites you may have port 80 etc., but without this, you won't be able to access your application. Once you have “exposed” the port, you still need to map to it via your host.

COPY ./output /app
This is where we are busy "populating" the container. This simply states that, from the current directory that I'm in (on my machine), I want to copy everything from the "output" folder to the "app" folder inside the container.

ENTRYPOINT ["dotnet","docker_sample.dll"]
Finally, when the container is started, this is the entry point. This will simply execute "dotnet docker_sample.dll" when the container is started.

If you have followed the previous post <<link>> you should now be able to open a PowerShell shell and, in the folder where your application and dockerfile are, type:
docker build . -t dotnet_sample --rm

If this is the first time you are running this, you will notice it starting to download a bunch of images. Once that is done, you may see something like this:
image

If you type docker images now, you will see a list of images that have been downloaded to your machine, including a new one named "dotnet_sample":
image

Next comes the fun part, let’s run it…
docker run -p 80:5000 dotnet_sample
If you are lucky you should see something like this:
image

Now we have a running container, but how do we access it? Notice the text that says "Now listening on: http://*:5000"? Navigate to http://localhost:5000 … oops, not accessible? Remember that this is not running "on your machine", it is running in a container. The -p 80:5000 parameter that we passed basically says: let’s cross the boundary and map the docker host's (your machine's) port 80 to port 5000 in the container. Now navigate to http://localhost/ . See something familiar?
Open another PowerShell shell and type
docker ps
and you will see the running containers on your machine (hopefully you have at least one):
image

So what have we done?
  1. We created an application in the previous post,
  2. added a dockerfile,
  3. built the docker image from the dockerfile and finally
  4. ran the image

It may be worth mentioning at this juncture that this is actually a Linux instance, and we are running a dotnet core web application in the Linux image via your Windows host. Is this a crazy world or what?


Wednesday, January 18, 2017

Create and run a dotnet core sample

Now that you have everything to get started, I’m going to create a simple application that we can run. If you have the dotnet core SDK installed, this is fairly simple.
Drop down to your command line / PowerShell and in a new, empty folder (I’m using a dotnet_sample folder, which will become the "name" of the application) simply type:
dotnet new -t web
This will create a bunch of files which are basically a "starter" web application. If you had just typed "dotnet new", a simple command line "hello world" would have been created.

Even though we have installed dotnet core 1.1, this sample is still generated to use 1.0.1, so we want to "upgrade" it to use 1.1.
Open up the project.json file and look for the dependencies section:
"dependencies": {
  "Microsoft.NETCore.App": {
    "version": "1.0.1",
    "type": "platform"
  },

change the version to 1.1 : "version": "1.1"
then, further down in the file under the "frameworks" section, change "netcoreapp1.0" to "netcoreapp1.1"
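After both edits, the relevant parts of project.json should look roughly like this. A sketch only: the other entries that the template generates are left out here.

```json
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1",
      "type": "platform"
    }
  },
  "frameworks": {
    "netcoreapp1.1": {}
  }
}
```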

The next step is to "install" or download all the dependencies. For this we simply run:
dotnet restore
Note: you may have to install gulp (if you are using a clean machine) by typing npm install -g gulp
A whole bunch of packages are downloaded and installed, getting your application into a state that is ready to run.
Finally, execute the following:
dotnet run
This will “execute” the web application and put it into a running state.
If you navigate (using your favorite browser) to http://localhost:5000 you should see something like this:
image

For completeness, here is the PowerShell prompt with the commands run in order. (Yours may look a bit different due to package caching etc.)
image
Now you have a fancy new application, ready to containerize!
To package a dotnet web app we need to run:
 dotnet publish -c Release -o output

Monday, January 16, 2017

Docker– Getting Started

If you have not heard of this thing called docker, or more generically containerization, then
  1. What rock have you been under?
  2. Here is a quick guide to start off with.
Even though docker is a Linux concept, Microsoft has embraced it and started building the ability to run either Windows or Linux containers in your Windows environment. The catch is, if you want to start playing with windows containers on your desktop, you will need “64bit Windows 10 Pro, Enterprise and Education (1511 November update, Build 10586 or later)” and have the Hyper-V and Container features enabled.
Once you have that sorted you can start by installing the following:
  • Docker for Windows
    I recommend using the beta channel as this has the support for Windows containers using Hyper-V.
  • Kitematic, you can get it directly from here
  • dotnetCore SDK, ‘cause that is how we roll…
  • A cool IDE like VS Code
    If you are using VS Code then don't forget to add the Docker extension
  • And assuming you have a new, non-developer machine do not forget node. We will use it to restore packages when we start playing with demo samples.
Install docker for Windows. Once you have it installed, you will notice the docker icon in your system tray. Right-click on it and select “Open Kitematic”. This will tell you where to download and ultimately put Kitematic. This is strictly not necessary, and we can do everything we need without it, but it is a “nice to have”.
image
This should get you ready to start playing.
A note: if you have Visual Studio (2015.3 or newer) installed, then I can recommend installing the dotnetcore tools preview.
In the next post we will create a simple application that we can start doing stuff with.