
What Every SharePoint Admin Needs to Know About Host Named Site Collections


This post intends to tell you everything you need to know about host named site collections so that you can decide if they are appropriate for your environment.  It is NOT telling you to run out and create everything as a host named site collection in SharePoint; instead, it is meant to educate you about a feature that exists as part of the product.  Simply put, I heard from some of my customers that there was misinformation in the community saying to never use host named site collections.  That is plain wrong, just as saying to always use them is wrong.  This post explains what host named site collections are, how to use them, and some limitations to consider if you intend to use them.

What Are Host Named Site Collections

A host-named site collection allows you to address a site collection with a unique DNS name, such as “http://fabrikam.com”. 

Typically you will create a SharePoint web application that contains many path-based site collections sharing the same host name (DNS name).  For example, Team A has a site collection at http://contoso.com/sites/teamA, and Team B has a site collection at http://contoso.com/sites/teamB.  These are referred to as path-based site collections, and this is the recommendation for most corporate scenarios.  Host named site collections enable you to assign a unique DNS name to each site collection.  For example, you can address them as http://TeamA.contoso.com and http://TeamB.contoso.com, which enables hosters to scale to many customers.

SharePoint makes the decision on how to map the host name to the site collection when the SPSite object is constructed.  It internally uses the SPWebApplication object to find the web application in the configuration database and determine if there is a host header associated with the site collection.  If no host header information is returned, this is a typical site collection.  Turn on Verbose logging, and for path-based sites you will see a ULS entry similar to “Looking up the additional information about the typical site http://contoso.com/sites/teamB”.  If host header information is returned, then the host named site collection information is retrieved.  You can see this in the ULS log with the entry “Site lookup found the host header site http://hosta.sharepoint.com/Pages/default.aspx”.
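If you want to see which kind of site collection you have, the SPSite object exposes a HostHeaderIsSiteName property.  A quick check from the SharePoint 2010 Management Shell (the URLs here are just the examples above):

# HostHeaderIsSiteName is True for host named site collections, False for path-based ones
Get-SPSite http://contoso.com/sites/teamB | Select-Object Url, HostHeaderIsSiteName
Get-SPSite http://TeamA.contoso.com | Select-Object Url, HostHeaderIsSiteName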

Host named site collections have been in the product since SharePoint 2003, where the capability was referred to as “scalable hosting mode”.  The point is that host named site collections are not a new feature in SharePoint 2010; they have been around for a long time, and customers have been using them successfully in their environments for many scenarios.

Admittedly, there were several known issues with the implementation early on in SharePoint 2007, such as the inability to use the BLOB cache with host named site collections (see Stefan Gossner on MOSS 2007 blob caching and its limitations), which contributed to the misinformation in the community and lingering perception issues about the appropriate use of host named site collections.  This issue, as well as several others, was addressed early on in SharePoint 2007.  You can see in the screen shot below that images are in fact cached by the BLOB cache in SharePoint 2007 just fine.

image

To be clear, the BLOB cache issue was never an issue in SharePoint 2010.  In fact, SharePoint 2010 made two significant improvements for host named site collections: the ability to use managed paths with them, and the ability to use off-box SSL termination with them.  Host named site collections can also be used to implement multi-tenancy solutions (for a complete discussion of multi-tenancy in SharePoint 2010, see Spence Harbar’s excellent Rational Guide to Multi Tenancy in SharePoint 2010).  Office 365 is implemented using host named site collections and multi-tenancy, because that is how the product scales to support many, many customers in that specific hosting scenario.

Creating Host Named Site Collections

To better understand host named site collections, let’s see how you create them.  You cannot use the self-service site creation web UI to create a host-named site collection; instead, you use PowerShell.  The following code creates a new web application listening on port 80, and two host named site collections that use the Publishing Portal site template.

$w = New-SPWebApplication -DatabaseName "WSS_Content_HostNameTest" `
    -ApplicationPool "SharePoint - Content App Pool" -Name "HostNameTest" -Port 80
#Remember to do IISRESET /noforce on each server before using the new web application

$w = Get-SPWebApplication "HostNameTest"

New-SPSite http://HostA.SharePoint.com -OwnerAlias "SHAREPOINT\kirkevans" `
    -HostHeaderWebApplication $w -Name "HostA" -Template "BLANKINTERNETCONTAINER#0"

New-SPSite http://HostB.SharePoint.com -OwnerAlias "SHAREPOINT\kirkevans" `
    -HostHeaderWebApplication $w -Name "HostB" -Template "BLANKINTERNETCONTAINER#0"

The result is a web application that contains my two host-named site collections.  There is a third site collection for the Office Viewing Cache because I am using Office Web Apps.  I did this to show that you can have a mix of path-based and host named site collections within the same web application.

image
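If you prefer PowerShell over Central Administration, you can also enumerate the site collections in the web application to verify what was created (a quick sketch using the web application from the script above):

# List every site collection in the HostNameTest web application, host named or path-based
Get-SPWebApplication "HostNameTest" | Get-SPSite -Limit All | Select-Object Url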

We created two host named site collections, but I didn’t say anything about adding the host headers into the IIS bindings.  That is because the web application that contains them listens on *:80; any port 80 traffic that is not otherwise directed to another web application on the same machine is directed to this web application.  You can verify this by looking at the bindings for the containing web application in IIS.

image
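You can also check the bindings from PowerShell with the IIS WebAdministration module.  This is just a sketch and assumes the IIS site carries the same name as the web application, “HostNameTest”:

Import-Module WebAdministration
# Expect a single http binding of *:80 with no host header
Get-WebBinding -Name "HostNameTest"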

This is an important point to consider.  If you create the web application with a non-default port, something like 3759, your host named site collections should also be on that port.  Otherwise, you will have to manually add the host header entries to the bindings in IIS, and do this on every machine.  Just say no to non-default ports; use port 80 and make everyone’s life easier.  You really do not want to start editing the bindings for the web applications manually, especially on every machine in your farm.  Typically you will just use a web application that listens on *:80.

Reiterating this point: if you are going to implement host named site collections, you should not use a host header on the web application that houses them, or it will not work properly.  Doing so creates a host name binding for the web application in IIS, and the other host names will not be routed to that IIS site.  In the example I used, all host names are processed by the web application listening on port 80, which alleviates the need to manually add host bindings in IIS.  This has one unfortunate downside: if another web application that uses a host header, say contoso.com:80, is stopped due to an application pool failure or other issue, then the application that listens on *:80 will process that traffic.  This can lead to unexpected results if the contoso.com:80 site uses a different authentication mechanism or is processed by a different application pool identity.  It’s just a function of how IIS works, but it is something to consider when designing your applications and something you should test in your environment (you do test things before implementing them in production, don’t you?).

Managed Paths and Host Named Site Collections

As stated previously, an improvement to host named site collections in SharePoint 2010 is the ability to use managed paths.  As we saw when creating host named site collections, there is no UI to create a host header managed path; instead, you use PowerShell with the -HostHeader switch on the New-SPManagedPath command.

New-SPManagedPath "cthub" -HostHeader -Explicit

$w = Get-SPWebApplication "HostNameTest"

New-SPSite http://HostA.SharePoint.com/cthub -OwnerAlias "SHAREPOINT\kirkevans" `
     -HostHeaderWebApplication $w -Name "HostA Content Type Hub" -Template "sts#0"

In this example, we create a managed path “cthub” and then create a site collection under it.  The results are as you would expect: a new site collection is created at the HostA.SharePoint.com DNS name under the managed path “cthub”.

image
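You can confirm that the managed path was created for host named site collections (rather than scoped to a single web application) with a quick check:

# Host header managed paths are defined for the farm, not for a specific web application
Get-SPManagedPath -HostHeader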

This can be hugely beneficial when defining consistent provisioning solutions.

Can I Use Alternate Access Mappings with Host Named Site Collections?

SharePoint provides a capability to create a web application using a host name such as http://publishing.contoso.com, and you extend the web application using a new alternate access mapping for the web application to a new host header such as http://www.fabrikam.com. This is particularly useful when you have different users accessing the same content through different authentication means. Internal users authenticated with Windows claims can access the web application using http://publishing.contoso.com, and external users access the same content using http://www.fabrikam.com. The difference is the URL that you use to access the web application.

With host named site collections in SharePoint 2010, you do not have the ability to provide different authentication mechanisms based on the host header. Remember that this happens at the web application level. Using our previous example of a web application “HostNameTest” that contains two host named site collections, “HostA.SharePoint.com” and “HostB.SharePoint.com”, it is not possible in SharePoint 2007 or 2010 to provide different authentication to each of the site collections. This means that you cannot use alternate access mappings with host named site collections. This should be a consideration when designing your information architecture.  If you need to support site collections responding to multiple host-name URLs, consider using path-based site collections with alternate access mappings instead of host-named site collections.

Host Named Site Collections Only Use One Host Name

Continuing the discussion of AAMs and host named site collections, you cannot use multiple host names to address a site collection in SharePoint 2010. Because host-named site collections have a single URL, they do not support alternate access mappings and are always considered to be in the Default zone.  This is important if you are using a reverse proxy to provide access to external users. Products like Unified Access Gateway 2010 allow external users to authenticate to your gateway and access a site as http://uag.sharepoint.com and forward the call to http://portal.sharepoint.com. Remember that URL rewriting is not permitted. Further, a site collection can only respond to one host name. This means if you are using a reverse proxy, it must forward the calls to the same URL.  If your networking team has a policy against exposing internal URLs externally, you must instead use web applications and extend the web application using an alternate access mapping.

Host Named Site Collections and SSL

We’ve pointed out several times that we can only use one host name, but can we use both HTTP and SSL based URLs simultaneously?  The answer is no.  By default, host-named site collections in SharePoint 2010 use the same protocol scheme as the public URL of the Default zone of the web application.  If you wish to provide host-named site collections in your web application over SSL, ensure that the public URL in the Default zone of your web application is an HTTPS-based URL. 

[Update: thanks, Spence Harbar!]  The August 2010 Cumulative Update adds support for HTTP-based and HTTPS-based host-named site collections to coexist in the same web application.  By default this support is disabled.  To enable it, set the Microsoft.SharePoint.Administration.SPWebService.ContentService.EnableHostHeaderSiteBasedSchemeSelection property to true.  Once enabled, SharePoint no longer uses the web application's Default zone public URL protocol scheme for all host-named site collections in that web application; instead, it uses the protocol scheme provided during host-named site collection creation, restoration, or rename.  Host-named site collections created before this update is installed will default to the HTTP protocol scheme when this property is set to true.  These site collections can be switched to the HTTPS protocol scheme by renaming the host-named site collection and providing an HTTPS-based URL as the new site collection URL.
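In PowerShell, setting that property looks roughly like the following sketch (run from the SharePoint 2010 Management Shell after the cumulative update is installed):

# Allow HTTP and HTTPS host named site collections to coexist in the same web application
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$contentService.EnableHostHeaderSiteBasedSchemeSelection = $true
$contentService.Update()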

To configure SSL for host-named site collections, enable SSL when creating the Web application. This will create an IIS Web site with an SSL binding instead of an HTTP binding. After the Web application is created, open IIS Manager and assign a certificate to that SSL binding. You can then create site collections in that Web application.
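Putting that together, a sketch of an SSL web application with an HTTPS host named site collection might look like the following.  The names and database are placeholders, and the certificate still has to be assigned to the SSL binding in IIS Manager before the site will respond.

$w = New-SPWebApplication -DatabaseName "WSS_Content_HostNameSSL" `
    -ApplicationPool "SharePoint - Content App Pool" -Name "HostNameSSL" `
    -Port 443 -SecureSocketsLayer

New-SPSite https://HostA.SharePoint.com -OwnerAlias "SHAREPOINT\kirkevans" `
    -HostHeaderWebApplication $w -Name "HostA Secure" -Template "BLANKINTERNETCONTAINER#0"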

A server certificate has to be installed and assigned to the IIS Web site. Each host-named site collection in a Web application will share the single server certificate assigned to the IIS Web site.  You need to acquire a wildcard certificate or subject alternate name certificate and then use a host-named site collection URL policy that matches that certificate.  For example, you will need a *.sharepoint.com wildcard certificate to generate host-named site collection URLs such as https://hosta.sharepoint.com, https://hostb.sharepoint.com, and so on, to enable these sites to pass browser SSL validation.  If you require unique second-level domain names for your sites, you have to create multiple Web applications rather than multiple host-named site collections.

Because SharePoint Server 2010 uses the public URL in the Default zone of the Web application to determine whether host-named site collections will be rendered as HTTP or SSL, you can also use host named site collections with off-box SSL termination.  As discussed in the previous section, “Host Named Site Collections Only Use One Host Name”, the SSL terminator must preserve the original HTTP host header from the client.  As discussed in the TechNet paper, Plan for host-named site collections (SharePoint Server 2010), there are 3 requirements to use SSL termination with host-named site collections:

  • The public URL in the Default zone of the Web application must be an HTTPS-based URL.
  • The SSL terminator or reverse proxy must preserve the original HTTP host header from the client.
  • If the client SSL request is sent to the default SSL port (443), then the SSL terminator or reverse proxy must forward the decrypted HTTP request to the front-end Web server on the default HTTP port (80). If the client SSL request is sent to a non-default SSL port, then the SSL terminator or reverse proxy must forward the decrypted HTTP request to the front-end Web server on the same non-default port.

The ability to use SSL termination and managed paths makes host header site collections a very powerful tool for hosting scenarios.

Size Matters

When you need to create vanity URLs, the first question you must ask yourself is “how many URLs will I need?”  The unit of scale for SharePoint is the site collection.  There is a tested limit of around 100 web applications for SharePoint, while each web application can have up to 250,000 site collections (see the notes in the section “Know Your Limits” for discussion of the number of web applications).  When talking about options for vanity URLs, size matters.  I have heard several people categorically denounce host named site collections without providing context around the size of the environment or the solution being architected.  It is important to remember the guidance in the SharePoint Server 2010 capacity management: Software boundaries and limits paper when architecting your information architecture.  If you require a handful of vanity URLs, you are likely better served by creating web applications and leveraging alternate access mappings.  If you are creating hundreds or thousands, you obviously cannot create that many web applications, so you are left with host-named site collections.  See the section “Know Your Limits” below and understand the software boundaries and limits when designing your information architecture. 

Uptime Matters

When you create a new web application using the SharePoint web UI, you are reminded to run “IISRESET /noforce” on each of the servers before using the web application.  Simply put, this affects your uptime, even if only for a very short duration.  A benefit of host named site collections is that you do not need to issue an IISRESET when you create a site collection, host named or path-based.  Maybe this isn’t a big deal in your environment, but in some environments an IISRESET is a big deal.  You can see in my example above that I include a note reminding you to use IISRESET on each machine; this is only because I created a new web application as part of the demo.

Can’t I Just Rewrite the Path? 

ASP.NET 2.0 introduced a very cool feature that lets you rewrite a URL programmatically, such that a request for http://contoso.com/employees/kirk could be rewritten to http://contoso.com/sites/HR/pages/Employees.aspx?Emp=Kirk.  This allows for URLs that are easier for end users to understand and makes your web site easier to navigate.  SharePoint does not support asymmetrical URL rewriting (http://technet.microsoft.com/en-us/library/cc288609(office.12).aspx).  Modifications to the content path or the host name are not supported.  This means that URL rewrites (different from redirects, see the next section) cannot be used with SharePoint 2010.  I have seen several developers try to do this for things like removing “Pages” from the URL; this is not supported either.  The inability to rewrite the path has implications for other scenarios, such as reverse proxies.  See the section “Host Named Site Collections Only Use One Host Name” for a discussion of reverse proxies.

What About Redirects?

If your goal is to provide vanity URLs, there is another approach to consider: HTTP redirects.  You can configure IIS to redirect all requests for a particular path to another resource.  This is different from URL rewriting because the client makes a request to a resource, receives a 302 redirect response, and then requests the new resource based on the information in the 302 redirect.  In fact, Jie Li mentions using redirects in an attempt to address SEO.  For many cases, this is absolutely a great approach.  However, consider that SOAP (web services) does not handle 302 redirects.  This means anything that calls SharePoint’s web services with the vanity URL will not work.  If your end users try to use one of the Office client applications to work with SharePoint using the vanity URL, they will receive unexpected behavior and error messages because those applications use SOAP to communicate with the server. 
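If you want to experiment with this approach, IIS redirects can be scripted with the WebAdministration module.  This is only a sketch; the site path and target URL are placeholders for your environment, and the “Found” status produces the 302 response discussed above.

Import-Module WebAdministration
# Redirect requests to a vanity path on an IIS site to the real SharePoint URL with a 302 (Found)
$psPath = "IIS:\Sites\Default Web Site\employees"
Set-WebConfigurationProperty -PSPath $psPath -Filter system.webServer/httpRedirect -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath $psPath -Filter system.webServer/httpRedirect -Name destination -Value "http://contoso.com/sites/HR/Pages/Employees.aspx"
Set-WebConfigurationProperty -PSPath $psPath -Filter system.webServer/httpRedirect -Name httpResponseStatus -Value "Found"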

Kerberos and Host Named Site Collections

You are probably wondering if you can use Kerberos with a host named site collection, and the answer is yes.  To prove this, I configured the web application “HostNameTest” to use Negotiate for authentication.  Its application pool runs as the account sharepoint\sp_app.  The DNS entry for “hosta.sharepoint.com” was set up as an A record.  I added an SPN “HTTP/hosta.sharepoint.com” to that account, made a request, and KLIST.exe shows that I received a Kerberos ticket. 

image

Yes, Kerberos works with host-named site collections.
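If you want to reproduce the test, the commands are roughly as follows; the SPN and account come from the example above, and you would substitute your own.

# Register the SPN for the application pool account (run with domain admin rights)
setspn -S HTTP/hosta.sharepoint.com sharepoint\sp_app
# After browsing to the site, list the Kerberos tickets on the client
klist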

Migrating Path-Based to Host Named Site Collections

If you have a number of web applications that provide vanity URLs and you are approaching some of the software boundaries for too many web applications (see the next section!), you are probably wondering if you can somehow collapse the information architecture to use host named site collections instead of path-based site collections in web applications.  The answer is yes!

In order to convert from path-based to host named site collections, you have to use backup and restore using PowerShell.  Back up the site collection to a file using Backup-SPSite, and then restore to a host named site collection using Restore-SPSite.

Backup-SPSite http://server_name/sites/site_name -Path C:\Backup\site_name.bak

Remove-SPSite -Identity http://server_name/sites/site_name -Confirm:$False  

Restore-SPSite http://www.example.com -Path C:\Backup\site_name.bak -HostHeaderWebApplication http://server_name 

It goes without saying that when you back up and restore, make sure you are not creating multiple identical site collections in your farm, to avoid duplicate GUIDs.  It is important that you remember to use Remove-SPSite before Restore-SPSite in this example.

In SharePoint 2007, a site collection restore was performed with “stsadm -o restore”, which did not provide a mechanism to specify which content database would contain the site collection.  A nice change in SharePoint 2010 is that the Restore-SPSite command allows you to specify the content database that will house the site collection.  This lets you kill two birds with one stone: if your old path-based site collection was in a content database approaching the 200 GB limit, you can move the content to a new content database at the same time.

By default, the site collection is set to read-only for the duration of the backup to reduce the potential for user activity during the backup operation to corrupt the backup.  Another nice improvement in SharePoint 2010 (compared to SharePoint 2007) is the ability to use SQL snapshots with SQL Server Enterprise Edition.  If you have SQL Server Enterprise Edition, we recommend using the UseSqlSnapshot parameter because it ensures a valid backup while allowing users to continue reading and writing to the site collection during the backup.
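For example, a sketch of the backup using a snapshot (the URL and path are the same placeholders used above):

# Requires SQL Server Enterprise Edition; users can keep reading and writing during the backup
Backup-SPSite http://server_name/sites/site_name -Path C:\Backup\site_name.bak -UseSqlSnapshot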

See the next section, “Know Your Limits”, for discussion on limitations of the Backup-SPSite and Restore-SPSite commands.

Know Your Limits

There are many factors that you should consider when designing any information architecture, host named site collections or not.  As a SharePoint administrator, you should commit the SharePoint Server 2010 capacity management: Software boundaries and limits paper to memory.  I am pointing out these limits because, while most readers will have small farms with a handful of cases where host named site collections are used, there are a few administrators of huge farms that could potentially have tens of thousands of host named site collections.  Understand the limits of the product when designing your provisioning solution.  Also understand that some of these limits are extreme boundaries that may not be at all applicable to your environment; in fact, your environment may not be capable of reaching some of these numbers. 

  1. The number of application pools is limited to 10 per web server.
  2. There is no published limit for the number of web applications in SharePoint 2010; the limit in SharePoint 2007 was 100 web applications total.  Most of the Premier Field Engineers that I work with recommend treating that limit as still applicable to most SharePoint 2010 environments, due to the number of timer jobs, resource contention, and other factors that vary by environment.  In fact, most PFEs recommend a practical limit of around 20 web applications to improve the manageability of the environment.
  3. Each web application can have up to 300 content databases. 
  4. Each content database can have up to 200 GB of content. 
  5. A maximum of 2000 site collections per content database is recommended.  If you intend to create many site collections, keep this and the maximum content database size in mind. 

Another consideration to keep in mind is how your information architecture may impact search.  Search will not automatically pick up host named site collections; you may need to manually add their addresses as start addresses or as content sources (a sketch follows the list below).  Given the limit of 100 start addresses per content source, you may need to define multiple content sources and start addresses.  This means you need to keep the following software boundaries and limits in mind:

  1. Limit to 100 scope rules per scope; 600 total per search service application.
  2. Maximum 200 site scopes and 200 shared scopes per search service application.
  3. Maximum 50 content sources per search service application.
  4. Maximum 100 start addresses per content source.
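Here is the sketch mentioned above for adding a host named site collection as a start address to an existing content source.  The content source name is a placeholder, and this assumes a single search service application in the farm.

$ssa = Get-SPEnterpriseSearchServiceApplication
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "HNSC Sites"
# Each content source supports a maximum of 100 start addresses
$cs.StartAddresses.Add("http://hosta.sharepoint.com")
$cs.Update()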

If you are using host named site collections for vanity URLs (not using multi-tenancy), you should also consider the behavior of the PeoplePicker.  You can adjust the PeoplePicker settings to only resolve names within the current site collection (http://technet.microsoft.com/en-us/library/gg602070.aspx).  This doesn’t really have anything to do with host named site collections specifically, but it is worth pointing out.

As mentioned in the Migrating Path-Based to Host Named Site Collections section above, content is migrated from path-based to host-named site collections using Backup-SPSite and Restore-SPSite (replacing the stsadm backup and restore).  This command has a tested limit of 15 GB.  I have one customer who is able to use this command with much larger content databases, and another customer who times out with 18 GB databases.  It varies depending on disk latency for the local disk as well as disk latency / throughput in SQL. Know that the supported limit is 15 GB if you intend to migrate from path-based to host named site collections.

Wrapping Up

Host named site collections are a great tool to keep in mind when designing your information architecture.  However, you must also weigh the limitations discussed here, decide if host named site collections are appropriate for your scenario, and understand the practical limits of the product when designing your information architecture.  And the next time you hear someone categorically say “never use host named site collections”, please point them to this blog post so we can bring them up to date.

Many thanks to Spence Harbar, Sean Livingston, and Keith Bendure for reviewing this article!  As always, if you have any feedback to this post, please post comments to this blog post so that others can see. 


Setting Object Cache Accounts in SharePoint 2010


This post will show how to set the PortalSuperUser and PortalSuperReader accounts for SharePoint 2010 using PowerShell.

Background

I frequently create web applications in my SharePoint 2010 environment that use Windows claims authentication.  When you specify the authentication to use claims, an important step is to set the Portal Super User and Portal Super Reader accounts so that the object cache can be read. 

The TechNet documentation on setting object cache accounts explains why to set these accounts, but many people don’t remember to set them until they see errors in the event log.

To set these, you go to the User Policy button in the ribbon in Central Administration, add the Portal Super User account with Full Control, and add the Portal Super Reader account with Full Read permission.  Then you go to PowerShell and set the web application property.  I like telling my customers to use this method because it’s easy to copy the claims user name from the UI and paste it into PowerShell.

Ali Mazaheri points out that this is a very important step when upgrading from SharePoint 2007 to SharePoint 2010 as you can get Access Denied errors after upgrading if you don’t set object cache accounts, even for the site collection administrator. 

Implementation

Here is a quick bit of PowerShell script to make things a little easier.  Instead of manually setting the Full Read and Full Control permissions using the web UI, I do everything in one shot.

foreach ($wa in Get-SPWebApplication)
{
    if ($wa.UseClaimsAuthentication)
    {
        $superUser = "i:0#.w|sharepoint\sp_superuser"
        $superReader = "i:0#.w|sharepoint\sp_superreader"

        #Grant the super user account Full Control on the web application
        $fullPolicy = $wa.Policies.Add($superUser, $superUser)
        $fullPolicy.PolicyRoleBindings.Add($wa.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl))

        #Grant the super reader account Full Read on the web application
        $readPolicy = $wa.Policies.Add($superReader, $superReader)
        $readPolicy.PolicyRoleBindings.Add($wa.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead))

        #Set the object cache accounts for the web application
        $wa.Properties["portalsuperuseraccount"] = $superUser
        $wa.Properties["portalsuperreaderaccount"] = $superReader

        $wa.Update()
    }
}
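To confirm the settings took effect, you can read the properties back afterward (a quick check, nothing more):

# Show the object cache accounts for each claims web application
foreach ($wa in Get-SPWebApplication)
{
    if ($wa.UseClaimsAuthentication)
    {
        Write-Host $wa.Url $wa.Properties["portalsuperuseraccount"] $wa.Properties["portalsuperreaderaccount"]
    }
}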

This should save you quite a bit of time and make the process much less error prone.  After writing this, I noticed that Chris O’Brien wrote a similar script back in 2010, and I’d bet if I did more searches I would find similar scripts.  Got an approach that works for you?  Please share in the comments!

For More Information

Configure Object Cache User Accounts

Migrate users and permissions from SharePoint Server 2007 to SharePoint Server 2010

 Set object caching user accounts with PowerShell

ALM for SharePoint Apps: Implementing Continuous Integration


This post shows how to implement continuous integration for a provider-hosted app in SharePoint 2013. 

Overview

This is Part 2 of a series.

In my previous post, ALM for SharePoint Apps: Configuring a TFS Build Server with Team Foundation Service, I talked about configuring an on-premises TFS Team Build 2012 server that connects to Team Foundation Service in the cloud and demonstrated how you could configure a build that creates an app package.  In this post, we’ll continue with that same example to implement continuous integration such that when a build completes it will automatically deploy the app to a SharePoint site and will deploy the web application to an IIS server.

Download the Office / SharePoint 2013 Continuous Integration with TFS 2012

There is a fantastic project on CodePlex to help you get started with Continuous Integration with TFS 2012, Office / SharePoint 2013 Continuous Integration with TFS 2012.  I highly suggest reading through the Office / SharePoint 2013 CI with TFS 2012 page to get started and understand pre-requisites. 

Configure the OfficeSharePointCI Pre-Requisites

For my example, the build server I created in the previous post has SharePoint 2013, Visual Studio 2012, and the Office Developer Tools for Visual Studio 2012 installed.  If your server does not have the pre-requisites, there are scripts and instructions available to help you collect dependencies from SharePoint 2013 and Visual Studio 2012 to deploy to your build server. 

The scripts use Web Deploy to deploy the web project for your provider-hosted apps to an IIS server.  You will need to install and configure Web Deploy.  Thankfully this is a painless process that the Web Platform Installer makes much easier for you.  Once installed, you will configure Web Deploy on your web site.

In IIS, right-click the web site that you want to deploy to and choose Deploy / Configure Web Deploy Publishing.

image

That will bring up a dialog where you specify the user to grant publishing permissions to.  This user should be a member of the local Administrators group. 

image

Click Setup. This will generate a .publishsettings file in the path specified in this wizard.

image

 

If you run into any issues, there are some handy troubleshooting tips on the blog post Installing and Configuring Web Deploy.

Configure the Web Deploy Publishing Profile

In Visual Studio, open your provider-hosted app project.  Right-click the web application and choose Publish.

NOTE:  This is not the best way to do this.  Instead, you should publish the APP project (not the web).  I cover the correct way to do this in my post Part 4 – ALM for SharePoint Apps: Understanding Provider Hosted App Publishing.

image

The wizard asks if you want to select or import a publish profile.  One option is to choose import, and select the file that you created when you set up Web Deploy.

image

However, when you do that the profile is named “Default Settings *” and I don’t see an option to rename it. 

image

Instead, I prefer to create a new publishing profile with a descriptive name like “ALMDemo – Dev”. 

image

Provide the server, site name, and destination URL.  Click Validate Connection and you should see a green check mark indicating success.

image

Click Next and choose a release or debug configuration (I chose Debug), then click Next.  Click the Preview button and you’ll get a preview of what’s about to be deployed.

image

Click Publish.

Now check in all of your changes.  This created the publishing profile that you will use with TFS Team Build as part of the build process.

NOTE:  This is not the best way to do this.  Instead, you should publish the APP project (not the web).  I cover the correct way to do this in my post Part 4 – ALM for SharePoint Apps: Understanding Provider Hosted App Publishing.

A Note on AppPrincipals

The CodePlex project that I am using assumes that the app principal for your environment has already been registered; the script does not take care of registering it for you.  You could easily add this step into the script, but for now make sure that the app principal has already been registered.  For testing, you will likely need to navigate to your SharePoint server at _layouts/appregnew.aspx and register the app principal prior to the builds.

image

Click create, then copy the value for the app ID into your app manifest’s client ID and web.config’s client ID values.

image

Run the project and make sure that everything works as expected.  Once you’ve smoke-tested it, check it into source control.  Don’t worry that we've hard-coded the URL for the web server here; we’ll change that in an upcoming blog post. 

If you made changes to the web.config, make sure to re-deploy your changes to the IIS site!

 

Check the OfficeSharePointCI CodePlex Artifacts Into Source Control

The next step is to check the stuff you downloaded from CodePlex into source control. From my previous post, I already have a project called ALMDemo in TFS. 

image

 

I copied the DeployScripts and BuildProcessTemplates folders from the OfficeSharePointCI download to my Projects folder.

image

Now that the file structure matches what I want in source control, it is easier to map everything in TFS.  Right-click the project node (in my case it is called “ALMDemo”) in the Source Control Explorer and choose Add Items to Folder.

image

TFS then warns me about changing the mapping for the existing project, I say OK, and then I choose the items to add. 

image

Now that everything is mapped and there are pending changes, I check in the changes.

image

The result is that the two folders are added to source control and my project’s local folder is mapped to the new location and downloaded locally.

image

Edit the Parameters.ps1 PowerShell Script

In the $ALMDemo\DeployScripts\SharePointApp folder, there is a PowerShell script called Parameters.ps1 that you will need to edit.

 

image

This file will let you provide the URL of the SharePoint server that is being deployed to, the credentials of the user deploying the app to SharePoint, the credentials of the user deploying the web site to IIS, and the name of the Web Deploy server.  You could deploy to O365 here or to an on-premises SharePoint installation.  I am deploying to my Azure IaaS virtual machine, which means I still have a domain user.

image

Now that you’re done editing, save and check in your changes.

image

Edit the Build Definition

In Visual Studio 2012, go to the Team Explorer tab and then the Builds menu.  Select the build that you previously created (demonstrated in my previous post), right-click and choose Edit Build Definition.

On the Process tab, you can choose a process template to use.  The template we will use is a copy of the one just checked into TFS, OfficeToolsAppTemplate.xaml.  Rather than edit the one we just downloaded, let’s create a new one based on that existing template.  Expand the Build Process Template section and choose New.

image

I chose to make a copy of the existing file and create a new one called MyAppBuildProcessTemplate.xaml.

 

image

Set the MSBuild Arguments to a value of:

/p:ActivePublishProfile="ALMDemo - Dev"

Make sure to replace the name of the publishing profile with the one that you created in the section “Configure the Web Deploy Publishing Profile” above.

Next change the Deployment Script parameter to point to the SharePointAppDeploy.ps1 script in the DeployScripts folder.  In my project that value is:

$/ALMDemo/DeployScripts/SharePointApp/SharePointAppDeploy.ps1

The finished product:

image

 

Test the Build

Right-click the build and choose Queue New Build.  At the end of the build, you should see the following:

image

Click the link to trust the app.

image

Now go back to the app and click it, and BOOM!  Your app was deployed to SharePoint and to IIS as part of the build.

If you get an error saying that there is no working folder mapping for the SharePointAppDeploy.ps1 file, go into the build definition and map it.

image

 

image

Summary

There are a few moving pieces, but once you get everything configured you don’t have to touch it again: you simply check changes into source control and initiate a build, and the provider-hosted app is deployed for you.

 

For More Information

Office / SharePoint 2013 Continuous Integration with TFS 2012

Microsoft Office Developer Tools for Visual Studio 2012

ALM for SharePoint Apps: Configuring a TFS Build Server with Team Foundation Service

Installing and Configuring Web Deploy

Moving Path Based to Host Named Site Collections


This post illustrates a problem with detaching content databases that contain site collections restored from path-based site collections to host named site collections.

Background

The recommendation for SharePoint 2013 is to use a single web application and leverage host named site collections.  In a previous post, I wrote about What Every SharePoint Admin Needs to Know About Host Named Site Collections.  In that post, I showed one approach for moving an existing path-based site collection to a host-header site collection.  This is invaluable if you have too many web applications in your farm and need to consolidate the site collections while preserving URLs.  It’s also invaluable for improving the health of your farm; I have seen multiple farms whose performance issues were resolved by consolidating web applications into host named site collections.

As a reminder, I provided the following sample script:

Backup-SPSite http://server_name/sites/site_name -Path C:\Backup\site_name.bak 
Remove-SPSite -Identity http://server_name/sites/site_name -Confirm:$False
Restore-SPSite http://www.example.com -Path C:\Backup\site_name.bak -HostHeaderWebApplication http://server_name 

This works, and the site collection is restored successfully with the new host header.  However, there are some additional considerations you’ll want to be aware of.

Existing Web Application With the Same Url

The first problem is that the site collection may be at the root of a web application with the same URL that you are trying to move to a host named site collection.  For example, I have a web application, Intranet.Contoso.lab, that contains a single root site collection that is path-based.  I want to move this to a host named site collection, but that URL is already in use.  The fix is to delete the web application first.  Don’t worry, you have the option of preserving the content database just in case something goes wrong, in which case you could create a new web application using the existing content database and you’ll be back to where you started.  Here is a function that you can use to move your path-based site collection to a host-named site collection and optionally delete the existing web application while preserving the original content database.

 

 
function Move-PathBasedToHNSC(
    [string]$siteUrl, 
    [string]$backupFilePath, 
    [string]$pathBasedWebApplication, 
    [bool]$deletePathBasedWebApplication, 
    [string]$hostHeaderUrl,
    [string]$hostHeaderWebApplication, 
    [string]$contentDatabase)
{
    Backup-SPSite $siteUrl -Path $backupFilePath

    if($deletePathBasedWebApplication)
    {
        #If the HNSC uses the same URL as an existing web application,
        #the web application must be removed
        Remove-SPWebApplication $pathBasedWebApplication -RemoveContentDatabases:$false -DeleteIISSite:$true
    }
    else
    {
        #Not removing the web application, so just remove the site collection
        Remove-SPSite -Identity $siteUrl -Confirm:$false
    }

    Restore-SPSite $hostHeaderUrl -Path $backupFilePath `
        -HostHeaderWebApplication $hostHeaderWebApplication -ContentDatabase $contentDatabase    
}

Move-PathBasedToHNSC -siteUrl http://HNSCMoveTest2.Contoso.lab `
    -backupFilePath "C:\Backup\HNSCMoveTest2.bak" `
    -pathBasedWebApplication http://HNSCMoveTest2.contoso.lab `
    -deletePathBasedWebApplication $true `
    -hostHeaderUrl http://HNSCMoveTest2.contoso.lab `
    -hostHeaderWebApplication http://spdev `
    -ContentDatabase WSS_Content_HNSC

Before I run the script, here’s what the list of web applications looks like:

image

After running the script, the web application is gone, and I now see the host named site collection in the new web application and in the content database that I specified.

image

As the administrator, I’m happy because there’s one less web application to maintain and, likely, the performance of my farm will increase a bit. 

Detaching (Dismounting) the Content Database

Here’s where the weird things start happening.  You can detach a content database so that it’s not serving any content, but the database is still in SQL Server.  You might do this for a number of reasons, such as upgrades.  Let’s try detaching the content database using PowerShell:

Dismount-SPContentDatabase WSS_Content_HNSC

Now we want to attach it again.

Mount-SPContentDatabase "WSS_Content_HNSC" -WebApplication http://spdev

Go back to the browser and hit refresh, and after some time the host-named site collection will render correctly.  However, we have a few problems.  First, go look at the site collections again in Central Administration.  You might see that the site collection is gone!  We run some PowerShell to see what’s up:

PS C:\> get-spwebapplication http://spdev | get-spsite -limit all

Url                                                   
---                                                   
http://spdev  

Huh?  Where’d my site collection go?  If we look in the content database, we can see the site content is still there.  However, SharePoint doesn’t seem to know the site exists.  I tried Get-SPSite, even stsadm -o EnumSites, and the site isn’t showing anywhere.  Thanks to my colleague, Joe Rodgers, for showing me the fix. 

$db = Get-SPContentDatabase WSS_Content_HNSC
$db.RefreshSitesInConfigurationDatabase()

This refreshes the sites in the site map in the configuration database, at which point the site collection appears again in PowerShell and in the UI.

image

PS C:\> get-spwebapplication http://spdev | get-spsite -limit all

Url                                                    
---                                                    
http://spdev                                           
http://hnscmovetest.contoso.lab                        
http://hnscmovetest2.contoso.lab                       
http://hnscmovetest3.contoso.lab 

If you are upgrading and have used this technique to move path-based to host-named site collections, I would definitely recommend keeping this in mind.  Note that this behavior does not seem to happen when you create a new host-named site collection or a new path-based site collection, it only seems to happen when you move an existing path-based site collection to become a host-named site collection. I also only tested this in SharePoint 2010. 

Summary

SharePoint scales by having many site collections instead of many web applications, and host named site collections are a fantastic way to get there without changing URLs.  Honestly, the behavior around detaching and attaching the content database and losing the information in the site map seems unintended to me.  I haven’t tried this in SharePoint 2013 to see if the problem reproduces there; I’d be interested to hear if anyone reproduces this in an SP2013 environment.  If so, leave comments!

What Every Developer Needs to Know About SharePoint Apps, CSOM, and Anonymous Publishing Sites


This post will show what works and what doesn’t with CSOM and REST in a SharePoint 2013 publishing site that permits anonymous access.  More importantly, we show what you should and should not do… and why.

Overview

I frequently see questions about using SharePoint apps with “public-facing web sites” where the web content is available to anonymous users.  There are a lot of misconceptions about what is possible.  This post will dive into some of the gory details of CSOM with anonymous access.  This demonstration will use an on-premises lab environment instead of O365.

Setting Up the Demonstration

I created a new web application that allows anonymous access (http://anonymous.contoso.lab) and a site collection using the Publishing template.  I then enabled anonymous access for the entire site by going to Site Settings/ Site Permissions and clicking the Anonymous Access button in the ribbon.  Notice that checkbox “Require Use Remote Interfaces permission” that is checked by default… leave it checked for now.

image

Next, I created a SharePoint-hosted app.  I just slightly modified the out of box template.

'use strict';

var context = SP.ClientContext.get_current();
var user = context.get_web().get_currentUser();

$(document).ready(function () {
    getUserName();
});

function getUserName() {
    context.load(user);
    context.executeQueryAsync(onGetUserNameSuccess, onGetUserNameFail);
}


function onGetUserNameSuccess() {
    if (null != user) {
        //The user is not null
        var userName = null;
        try {
            userName = user.get_title();
        } catch (e) {
            userName = "Anonymous user!";
        }
    }
    $('#message').text('Hello ' + userName);
}


function onGetUserNameFail(sender, args) {
    alert('Failed to get user name. Error:' + args.get_message());
}

Next, I add a client app part to the project.  The client app part isn’t going to do anything special, it just says Hello World.

<%@ Page language="C#" Inherits="Microsoft.SharePoint.WebPartPages.WebPartPage, Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="Utilities" Namespace="Microsoft.SharePoint.Utilities" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<WebPartPages:AllowFraming ID="AllowFraming" runat="server" />
<html>
<head>
    <title></title>
    <script type="text/javascript" src="../Scripts/jquery-1.8.2.min.js"></script>
    <script type="text/javascript" src="/_layouts/15/MicrosoftAjax.js"></script>
    <script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>
    <script type="text/javascript" src="/_layouts/15/sp.js"></script>

    <script type="text/javascript">
        'use strict';

        // Set the style of the client web part page to be consistent with the host web.
        (function () {
            var hostUrl = '';
            if (document.URL.indexOf('?') != -1) {
                var params = document.URL.split('?')[1].split('&');
                for (var i = 0; i < params.length; i++) {
                    var p = decodeURIComponent(params[i]);
                    if (/^SPHostUrl=/i.test(p)) {
                        hostUrl = p.split('=')[1];
                        document.write('<link rel="stylesheet" href="' + hostUrl + '/_layouts/15/defaultcss.ashx" />');
                        break;
                    }
                }
            }
            if (hostUrl == '') {
                document.write('<link rel="stylesheet" href="/_layouts/15/1033/styles/themable/corev15.css" />');
            }
        })();
    </script>
</head>
<body>
    <h1>Hello, world!</h1>
</body>
</html>

Next, I went to Central Administration / Apps / Manage App Catalog and created an app catalog for the web application.  I published the SharePoint-hosted app in Visual Studio (which just generates the .app package) and then uploaded the .app package to the App Catalog.

image

Next, as the site collection administrator, I added the app to the publishing site.

image

Finally, I edit the main page of the publishing site and add the client app part and test that it works.  Check in the page and publish and you should see something like this:

image

What Do Anonymous Users See?

The question on everybody’s mind is: what happens if there is no authenticated user?  In our simple test, recall that the only thing we are showing is a simple IFRAME with some styling obtained from SharePoint.  The only thing our IFRAME shows is a page that contains static HTML, “Hello, world!”.  I highlighted the “Sign In” link to show that I really am an anonymous user.

image

Now, click the link “SPHostedClientWebPart Title” (yeah, I know, I am lazy… I should have given it a better name) and you are taken to the full-page experience for the app.  What do we see?  We get an error.

image

That error is saying that the anonymous user does not have access to use the Client Side Object Model.  Just for grins, let’s try the REST API with the URL http://anonymous.contoso.lab/_api/Web.  First, you get a login prompt.  Next, you see error text that says you do not have access.

image

This makes sense because the CSOM and REST API are not available by default to anonymous users.

Enabling CSOM for Anonymous Users

Let me start this section by saying what I am about to show you comes with some risk that you will need to evaluate for your environment.  Continue reading the entire article to learn about the risks.

That said, go back to Site Settings / Site Permissions and then click the Anonymous Access button in the ribbon.  Remember that rather cryptic-sounding checkbox Require Use Remote Interfaces permission?  Uncheck it and click OK.

image

That check box decouples use of the CSOM from the Use Remote Interfaces permission.  When checked, it means that the user must possess the Use Remote Interfaces permission, which allows access to SOAP, WebDAV, and the Client Object Model.  You can remove this permission from users, disabling their ability to use SharePoint Designer.  There are cases where you want to remove this permission, such as for anonymous users, but still want them to be able to use the CSOM.  This is exactly what the checkbox lets you do: you enable use of the CSOM without requiring users to have that permission.

To test the change, go back to the main page for your site.  Of course, we still see the IFRAME from before; that’s not doing anything with CSOM.  Click the title of the web part to see the full-page immersive experience.  This time we do not see an error message; instead, our code fell into the exception handler because the Title property of the User object was not initialized.  Our error handling code interprets this as an anonymous user. 

image

You just used the app model in a public-facing SharePoint site with anonymous users.

In case you are interested, you can set the same property with PowerShell using a comically long yet self-descriptive method UpdateClientObjectModelUseRemoteAPIsPermissionSetting.

PS C:\> $site = Get-SPSite http://anonymous.contoso.lab
PS C:\> $site.UpdateClientObjectModelUseRemoteAPIsPermissionSetting($false)

How about that REST API call?  What happens with anonymous users now?

image

Now that you know how to fire the shotgun, let’s help you move it away from your foot.

All Or Nothing

There is no way to selectively enable parts of the CSOM EXCEPT search.

UPDATE: Thanks to Sanjay for pointing out that it is possible to enable search for anonymous users without enabling the entire CSOM or REST API, and thanks to Waldek Mastykarz for a great article showing how to do it.

Enabling use of the CSOM for anonymous users presents a possible information disclosure risk, in that it potentially divulges much more information than you would anticipate.  Let me make that clear: if you remove the Require Use Remote Interfaces permission setting for an anonymous site, the entire CSOM is now available to anonymous users.  Of course, that doesn’t mean they can do anything they want; SharePoint permissions still apply.  If a list is not made available to anonymous users, then you can’t use the CSOM to circumvent that security requirement.  Similarly, an anonymous user will only be able to see lists or items that have been explicitly made available to anonymous users.  It’s important to know that more than just what you see on the web page is now available via CSOM or REST.

ViewFormPagesLockDown?  Ha!

In a public-facing web site using SharePoint, you want to make sure that users cannot go to form pages such as Pages/Forms/AllItems.aspx, where they would see things like Created By and Modified By.  The ViewFormPagesLockDown feature is enabled by default for publishing sites to protect against this very scenario.  This feature reduces the fine-grained permissions for the Limited Access permission level, removing the permission to View Application Pages or Use Remote Interfaces.  This means that an anonymous user cannot go to Pages/Forms/AllItems.aspx and see all of the pages in that library.  If we enable CSOM for anonymous users, you still won’t be able to access Created By and Modified By via the HTML UI in the browser, but you can now access that information using CSOM or REST.

To demonstrate, let’s use the REST API as an anonymous user to look through the items in the Pages library by appending _api/Web/Lists/Pages/Items to the site.

image

I’ll give you a moment to soak that one in.

Let me log in as Dan Jump, the CEO of my fictitious Contoso company in my lab environment.  Dan authors a page and publishes it.  An anonymous user now uses the REST API (or CSOM, but if you are reading this hopefully you get that they are the same endpoint) using the URL:


http://anonymous.contoso.lab/_api/Web/Lists/Pages/Items(4)/FieldValuesAsText

image

The resulting text shows his domain account (contoso\danj) and his name (Dan Jump).  This may not be a big deal in many organizations, but for some this would be a huge deal and an unintended disclosure of personally identifiable information.  Understand that if you enable the CSOM for your public-facing internet site, you run the risk of information disclosure. 

For those that might be confused about my using the term “CSOM” but showing examples using REST, here is some code to show you it works.  I don’t need OAuth or anything here, and I run this from a non-domain joined machine to prove that you can now get to the data.

using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext ctx = new ClientContext("http://anonymous.contoso.lab"))
            {
                ctx.Credentials = System.Net.CredentialCache.DefaultCredentials;
                Console.WriteLine("Enter the list name: ");
                string listName = Console.ReadLine();
                List list = ctx.Web.Lists.GetByTitle(listName);

                Console.WriteLine("Enter the field name: ");
                string fieldName = Console.ReadLine();


                CamlQuery camlQuery = new CamlQuery();
                
                ListItemCollection listItems = list.GetItems(camlQuery);
                ctx.Load(
                     listItems,
                     items => items
                         .Include(
                             item => item[fieldName],
                             item => item["Author"],
                             item => item["Editor"]));
                try
                {
                    ctx.ExecuteQuery();

                    foreach (var myListItem in listItems)
                    {
                        Console.WriteLine("{0} = {1}, Created By={2}, Modified By={3}", 
fieldName, myListItem[fieldName], ((FieldUserValue)myListItem["Author"]).LookupValue,  
((FieldUserValue)myListItem["Editor"]).LookupValue );
                    }
                }
                catch (Exception oops)
                {
                    Console.WriteLine(oops.Message);
                }
            }
        }
    }
}

This code simply asks for a list name and a column name that you would like to query data for, such as “Title”.  I also include the Created By and Modified By fields to demonstrate the potential disclosure risk.  Since the CSOM is available to anonymous users, I can call it from a remote machine and gain information that was not intended to be disclosed.

clip_image002

You see that CSOM and REST are calling the same endpoint and getting the same data.
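If it helps to see the REST side from code as well, here is a rough sketch of the same anonymous call made with HttpClient from a console application.  This is my addition rather than code from the original post, and the URL and Accept header are simply the ones assumed from the example above.

using System;
using System.Net.Http;

namespace AnonymousRestDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var client = new HttpClient())
            {
                //Ask for JSON instead of the default Atom/XML payload
                client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

                //Hypothetical anonymous publishing site used throughout this walkthrough
                string url = "http://anonymous.contoso.lab/_api/Web/Lists/Pages/Items(4)/FieldValuesAsText";

                //No credentials are attached; if CSOM/REST is enabled for anonymous users, this succeeds
                string json = client.GetStringAsync(url).GetAwaiter().GetResult();
                Console.WriteLine(json);
            }
        }
    }
}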

Security Trimming Still in Effect

At this point I have probably freaked a few readers out who didn’t understand the full consequences when they unchecked that checkbox (or, more likely, people who skimmed and did not read this section).  Does this mean that anonymous users can do ANYTHING they want to with the CSOM?  Of course not.  When you configured anonymous access for the web application, you specified the anonymous policy, likely with “Deny Write – Has no write access”. 

image

This means what it says: Anonymous users cannot write, even with the REST API or by using CSOM code.  Further, anonymous users can only see the information that you granted them to see when you configured anonymous access for the site. 
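To make that concrete, here is a small sketch (again my addition, not code from the post) showing that the Deny Write policy still applies through CSOM: the update below is rejected when ExecuteQuery is called.  The site URL, list, and item ID are the hypothetical ones used in this walkthrough.

using System;
using Microsoft.SharePoint.Client;

namespace AnonymousWriteTest
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext ctx = new ClientContext("http://anonymous.contoso.lab"))
            {
                List pages = ctx.Web.Lists.GetByTitle("Pages");
                ListItem item = pages.GetItemById(4);
                item["Title"] = "Changed by an anonymous user?";
                item.Update();

                try
                {
                    ctx.ExecuteQuery();
                }
                catch (ServerUnauthorizedAccessException denied)
                {
                    //Expected: the server rejects the write with an access denied error
                    Console.WriteLine(denied.Message);
                }
            }
        }
    }
}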

image

If there is content in the site that you don’t want anonymous users to access, you have to break permission inheritance and remove the right for viewers to read.

image

Additionally, there is some information that is already locked down.  Logged in as the site collection administrator, I can go to the User Information List and see information about the site users.

http://anonymous.contoso.lab/_api/web/lists/getbytitle('User%20Information%20List')/Items

If I try that same URL as an anonymous user, I simply get a 404 not found. 

To summarize this section and make it perfectly clear: security trimming is still in effect.  Unpublished pages are not visible by default to anonymous users.  They can only see the lists that enable anonymous access.  If, despite what I’ve written so far, you decide to enable CSOM for anonymous users, then you will want to make sure that you don’t accidentally grant access for anonymous users to things they shouldn’t have access to.

Potential for DoS

When you use SharePoint to create web pages, you carefully construct the information that is shown on the page and you control how frequently it is queried or seen.  Hopefully you perform load tests to confirm your environment can sustain the expected traffic load before putting it into production.

With CSOM enabled for anonymous users, all that testing was pointless.

There is no caching with the CSOM, so this opens up the possibility for me to do things like send batched queries that request 2,000 items from multiple lists simultaneously, staying under the default list view threshold while still taxing your database.  Now if I can get a few other people to run that code, say by using some JavaScript exploit and posting it to Twitter, I have the makings of a DoS attack… or at least one hell of a stressful day for your databases.
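To illustrate the kind of load I mean, here is a rough sketch (my own illustration, not code from the post) of how a single CSOM round trip can batch several 2,000-item queries; the list names are hypothetical.

using System;
using Microsoft.SharePoint.Client;

namespace LoadSketch
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext ctx = new ClientContext("http://anonymous.contoso.lab"))
            {
                //Stay under the 5,000-item list view threshold, but ask for a lot of data
                CamlQuery query = new CamlQuery();
                query.ViewXml = "<View><RowLimit>2000</RowLimit></View>";

                foreach (string listName in new[] { "Pages", "Documents", "Site Assets" })
                {
                    List list = ctx.Web.Lists.GetByTitle(listName);
                    ListItemCollection items = list.GetItems(query);
                    ctx.Load(items);
                }

                //One HTTP request from the client, three expensive list queries on the server
                ctx.ExecuteQuery();
            }
        }
    }
}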

You Really Need to Understand How OAuth Works

Hopefully by now I have convinced you that enabling anonymous users to directly access the CSOM is not advised (and hence why it is turned off by default).  At this point in the conversation, I usually hear someone chime in about using the app-only policy.  Let me tell you why this is potentially a MONUMENTALLY bad idea.

With provider-hosted apps, I can use this thing called the App Only Policy.  This lets my app perform actions that the current user is not authorized to do.  As an example, an administrator installs the app and grants it permission to write to a list.  A user who has read-only permission can use the app, and the app can still write to the list even though the user does not have permission.  Pretty cool! 

I have presented on the SharePoint 2013 app model around the world, and I can assure you that it all comes down to security and permissions.  Simply put, you must invest the time to understand how OAuth works in SharePoint 2013.  Watch my Build 2013 talk Understanding Authentication and Permissions with Apps for SharePoint and Office as a good starter.

The part to keep in mind is how OAuth works and why we say you ABSOLUTELY MUST USE SSL WITH SHAREPOINT APPS.  It works by the remote web site sending an access token in the “Authorization” HTTP header with a value of “Bearer ” plus a base64-encoded string.  Notice I didn’t say encrypted, I said encoded.  That means the information can easily be decoded by anybody who can read the HTTP header value.
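If you want to see for yourself just how little protection the encoding provides, here is a minimal sketch (mine, not from the post) that decodes the claims portion of a captured JWT access token with nothing but Convert.FromBase64String; no key is required.

using System;
using System.Text;

namespace TokenPeek
{
    class Program
    {
        static string DecodeSegment(string segment)
        {
            //Convert base64url to standard base64 and restore the padding
            string s = segment.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)
            {
                case 2: s += "=="; break;
                case 3: s += "="; break;
            }
            return Encoding.UTF8.GetString(Convert.FromBase64String(s));
        }

        static void Main(string[] args)
        {
            //Paste a captured "Authorization: Bearer" value here
            string accessToken = "<captured access token>";

            //A JWT is three base64url segments: header.payload.signature
            string[] parts = accessToken.Split('.');

            //parts[1] is the claims payload: issuer, audience, app identity, expiration, and so on
            Console.WriteLine(DecodeSegment(parts[1]));
        }
    }
}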

I showed a working example of this when I wrote a Fiddler extension to inspect SharePoint tokens.  It isn’t that hard to crack open an access token to see what’s in it.

image

If you have a public-facing web site, you are likely letting everyone access it using HTTP and not requiring HTTPS (of course you are).  If you try to use a provider-hosted app without using SSL, then anyone can get to the Authorization token and replay the action, or worse.  They could run a series of tests against lists, the web, the root web, the site collection, additional site collections, or the tenant to see just what level of permission the app has.  If the app has been granted Full Control permission to a list, it has the ability to do anything it wants to that list, including deleting it.  Even though your app may only be writing a document to the list, your app is authorized to do anything with that list.  Let me start playing around with CSOM and REST, and I can do some nasty stuff to your environment. 

One more thing… the access token is good for 12 hours.  That’s a pretty large window of time for someone to get creative and make HTTP calls to your server, doing things that you never intended.

Doing It The Right Way

Suffice it to say, you DO NOT want to make CSOM calls from a provider-hosted app against the anonymous site, and you DO NOT want to enable CSOM for anonymous users.  Does this completely rule out using the app model?  

You could set up a different zone with a different URL that uses SSL, and your app will communicate to SharePoint using only server-side calls to an SSL protected endpoint.  To achieve this, the app would have to use the app-only policy because no user information would be passed as part of the token (see my blog post, SharePoint 2013 App Only Policy Made Easy for more information).

image

The reason that I stressed server-side calls in the previous paragraph is simple: if they were client-side calls using the Silverlight CSOM or JavaScript CSOM implementation, we’d be back at the previous problem of exposing the CSOM directly to anonymous users.

This pattern means there are certain interactions with apps that are not going to work easily.  For instance, using this pattern with an ACS trust means that a context token will not be passed to your app because your app is using HTTP.  You can still communicate with SharePoint, but the coding is going to look a bit different. 

string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);

//Get the access token for the URL.
//Requires this app to be registered with the tenant.
string accessToken = TokenHelper.GetAppOnlyAccessToken(
    TokenHelper.SharePointPrincipal,
    siteUri.Authority,
    realm).AccessToken;

//Get a client context with the access token.
using (var clientContext = TokenHelper.GetClientContextWithAccessToken(
    siteUri.ToString(), accessToken))
{
    //Do work here
}

Instead of reading information from the context token, we are simply making a request to Azure ACS to get the access token based on the realm and the URL of the SharePoint site.  This keeps the access token completely away from users, uses SSL when communicating with SharePoint, and allows you to control the frequency and shape of information that is queried rather than opening access to the whole API to any developer who wants to have a go at your servers.

Enabling Search for Anonymous Users

Sanjay Narang pointed out in the comments to this post that it is possible to enable anonymous search REST queries without removing the Require Use Remote Interfaces permission setting.  This is detailed in the article SharePoint Search REST API overview.  Administrators can restrict which query parameters are exposed to anonymous users by using a file called queryparametertemplate.xml.  To demonstrate, I first make sure that the site still requires the Use Remote Interfaces permission.

image

Next, I make sure that search results will return something.  I have a library, Members Only, that contains a single document.

image

That document contains the following text.

image

I break permissions for the library and remove the ability for anonymous users to access it.

image

A search for “dan” returns two results if I am an authenticated user.

image

A search for “dan” only returns 1 result as an anonymous user.

image

If I attempt to use the search REST API as an authenticated user, I get results.

image

If I attempt it as an anonymous user, I get an HTTP 500 error. 

image

Looking in the ULS logs, you will see that the following error occurs.

Microsoft.Office.Server.Search.REST.SearchServiceException: The SafeQueryPropertiesTemplateUrl "The SafeQueryPropertiesTemplateUrl "{0}" is not a valid URL." is not a valid URL. 

To address this, we will use the approach detailed by Waldek Mastykarz in his blog post Configuring SharePoint 2013 Search REST API for anonymous users.  I copy the XML from the SharePoint Search REST API overview article and paste it into Notepad.  As instructed in that article, you need to replace the farm ID, site ID, and web ID.  You can get those from PowerShell.

PS C:\> Get-SPFarm | select ID

Id                                                                             
--                                                                             
bfa6aff8-2dd4-4fcf-8f80-926f869f63e8                                           



PS C:\> Get-SPSite http://anonymous.contoso.lab | select ID

ID                                                                             
--                                                                             
6a851e78-5065-447a-9094-090555b6e855                                           



PS C:\> Get-SPWeb http://anonymous.contoso.lab | select ID

ID                                                                             
--                                                                             
8d0a1d1a-cdae-4210-a794-ba0206af1751   

Given those values, I can now replace them in the XML file.

image

Save the XML file to a document library named QueryPropertiesTemplate.

image

Finally, append &QueryTemplatePropertiesUrl='spfile://webroot/queryparametertemplate.xml' to the query.

image

Now try the query as an anonymous user, and you get results even though the site still requires the Use Remote Interfaces permission.  The rest of the CSOM and REST API is not accessible; only search is, and only for the query properties allowed through that file.
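For reference, here is a rough sketch (my addition) of what that anonymous query looks like from code; the site URL and query text are the ones assumed in this walkthrough.

using System;
using System.Net.Http;

namespace AnonymousSearch
{
    class Program
    {
        static void Main(string[] args)
        {
            string url = "http://anonymous.contoso.lab/_api/search/query" +
                "?querytext='dan'" +
                "&QueryTemplatePropertiesUrl='spfile://webroot/queryparametertemplate.xml'";

            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

                //Results are security trimmed, so only anonymous-visible content is returned
                string json = client.GetStringAsync(url).GetAwaiter().GetResult();
                Console.WriteLine(json);
            }
        }
    }
}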

image

Summary

This was a long read (it took even longer to write), but hopefully it helps answer the question of whether you should use apps for your public-facing site.  As you saw, you can easily create client app parts and even custom actions.  You should not enable CSOM for anonymous access unless you are OK with the risks and can mitigate them.  You can still use the app model, but it’s going to take a little more engineering and will require your SharePoint site to have an SSL-protected endpoint that is different from the endpoint that users will access.

For More Information

Understanding Authentication and Permissions with Apps for SharePoint and Office

SharePoint 2013 App Only Policy Made Easy

Fiddler extension to inspect SharePoint tokens

Configuring SharePoint 2013 Search REST API for anonymous users.

SharePoint Search REST API overview

Attaching Remote Event Receivers to Lists in the Host Web


This post shows how to attach a remote event receiver to a list in the host web for a SharePoint provider-hosted app.

Background

While working on the SharePoint 2013 Ignite content, apps were still very much new and very little documentation existed.  We were fighting a problem of using an app that required a feature to first be activated in SharePoint.  How could we activate the feature?  That’s when Keenan Newton wrote his blog post, Defining content in Host Web from an App for SharePoint.  The idea was so simple: use the App Installed event. 

This post follows the same pattern.  I am going to take a pretty long path to do something that can be accomplished pretty quickly because there are a few confusing elements to this pattern:

  1. Handle the app installed event.
  2. When the app installed event occurs, an event is sent to our service.  We use the client side object model to attach an event receiver to a list in the host web.
  3. When an item is added to the list, an ItemAdded event is sent to our service.

Visually, it looks like this:

image

Once you understand this pattern, you’ll use it for all sorts of things such as activating features, creating subsites, applying themes, all kinds of stuff.

If you don’t care about how it all works, just skip to the end to the section “Show Me The Code!”

Remote Event Receivers

A remote event receiver is just like a traditional event receiver in SharePoint.  Your code registers itself with SharePoint to be called whenever an event occurs, such as a list is being deleted or a list item is being added.  With full trust code solutions, you would register your code by giving SharePoint an assembly name and type name.  Server side code for apps isn’t installed on SharePoint, but rather on your own web server, so how would you register a remote endpoint?  Provide a URL to a service. 

image

If you aren’t familiar with remote event receivers, go check out the Developer training for Office, SharePoint, Project, Visio, and Access Services which includes a module on remote event receivers. 

The point that I want to highlight here is that you tell SharePoint what WCF service endpoint to call when a specific event occurs.  That means that SharePoint needs to be able to resolve the address to that endpoint.

Handle App Installed and App Uninstalling

To perform this step, I assume you already have an Office 365 Developer Site Collection.  If you don’t, you can sign up for a free 30-day trial.  Even better, as an MSDN subscriber you get an Office 365 developer tenant as part of your MSDN benefits.

In Visual Studio 2013, create a new provider-hosted app. 

image

Provide the URL for your Office 365 developer site, used for debugging, and leave the host type as Provider-hosted.

image

The next screen asks if you want to use the traditional Web Forms model for your app, or if you prefer ASP.NET MVC.  I really love ASP.NET MVC, so I’ll use that option.

image

Finally, you are asked about how your app will authenticate.  We are using Office 365, so leave the default option, “Use Windows Azure Access Control Service”.  Click Finish.

image

Once your project is created, click on the app project (not the web project) and change its Handle App Installed and Handle App Uninstalling properties to True.

image

That will create a WCF service for you in the project where you can now handle an event for when the app is installed.

image

There are two methods, ProcessEvent and ProcessOneWayEvent, and sample code exists in the ProcessEvent method to show you how to get started with a remote event receiver. 

We are going to use the ProcessEvent method to register an event receiver on a list in the host web.  We will also use the ProcessEvent method to unregister the remote event receiver when the app is uninstalled.  Clean up after yourself!

Add a breakpoint in the ProcessEvent method, but don’t hit F5 just yet.

Debugging Remote Event Receivers

Let me restate that last part if you didn’t catch it: SharePoint needs to be able to resolve the address to your WCF endpoint.  Let me change that picture just a bit:

image

See the difference?  Here we have Office 365 calling our web service.  If we told O365 that our WCF service was available at http://localhost:44307/AppEventReceiver.svc, that server would try to make an HTTP call to localhost… the HTTP call would never leave that server.  There’s no way that SharePoint can figure out that what you really meant was to traverse your corporate firewall and get past the Windows Firewall on your laptop to call an HTTP endpoint in IIS Express.

Thankfully, someone incredibly smart on the Visual Studio team (hats off, Chaks!) figured out how to use Windows Azure Service Bus to debug remote events.  That means that SharePoint now has an endpoint that it can deliver messages to, and your app can then connect to Service Bus to receive those messages.

image

Even better, you really don’t have to know much about this to make it all work.  If you don’t have an Azure subscription already, you can sign up for a free trial.  If you have MSDN, you get an Azure subscription as part of your MSDN benefits that includes monthly credits!  If you are worried about the cost here, don’t be: as of today, you are charged $0.10 for every 100 relay hours, $0.01 for every 10,000 messages.  I seriously doubt anyone is leaving their machine debugging for that long.

Once you have an Azure subscription, log into the Windows Azure Management Portal.  Go to the Service Bus extension on the left of the screen.

image

On the bottom of the screen, click Create to add a new namespace. 

image

Give it a unique name and provide a location near you.

image

Once the namespace is created, click the Connection Information button and you will see your connection string.  Copy it to your clipboard.

image

Go back to Visual Studio.  In the Solution Explorer pane, click the app project (not the web project), then go to Project / AttachEventsInHostWeb Properties…

image

Go to the SharePoint tab, check the checkbox to enable debugging via Windows Azure Service Bus, and paste your connection string. 

image

Now, let’s test our app so far.  Press F5 in Visual Studio to start debugging.

image

Our breakpoint is then hit.  Let’s inspect where the WCF message was sent to. In the Watch window in Visual Studio, add the value System.ServiceModel.OperationContext.Current.RequestContext.RequestMessage.Headers.To

image

You can see that SharePoint Online sent the message to:

https://kirkevans.servicebus.windows.net/2228577862/1147692368/obj/0958b186-260a-4fb4-a140-7437d6f2b686/Services/AppEventReceiver.svc

This is the service bus endpoint used during debugging.  This solves our earlier problem of SharePoint not being able to send messages to https://localhost:44307.  The messages are relayed from Service Bus to our local endpoint.

Ask for Permission

The last bit of setup that we need to do is to ask for permission.  We are going to add a remote event receiver to a list in the host web, which means we need to ask for permission to manage the list.  We don’t need Full Control for this operation; we just need Manage.  Further, we only need Manage permission for a list, not the whole web, site collection, or tenant.

The list we will work with is an Announcements list, which has a template ID of 104.  Adding the BaseTemplateId=104 property in a list permission request significantly reduces the number and type of lists that a user chooses from when granting permission.

image

Notice the app-only permission request?  That’s added when we handle the App Installed and App Uninstalling events, because when those happen we want to execute operations that the current user may not have permission to perform. 

Show Me The Code!

Finally, we’re here.  First, let’s define the name of the event handler and implement the required ProcessEvent method.

private const string ReceiverName = "ItemAddedEvent";
private const string ListName = "Announcements";

public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
{
            
    SPRemoteEventResult result = new SPRemoteEventResult();

    switch (properties.EventType)
    {
        case SPRemoteEventType.AppInstalled:
            HandleAppInstalled(properties);
            break;
        case SPRemoteEventType.AppUninstalling:
            HandleAppUninstalling(properties);
            break;
        case SPRemoteEventType.ItemAdded:
            HandleItemAdded(properties);
            break;
    }

            
    return result;
}

Those methods (HandleAppInstalled, HandleAppUninstalling, HandleItemAdded) are methods that we will define. 

   1:  private void HandleAppInstalled(SPRemoteEventProperties properties)
   2:  {
   3:      using (ClientContext clientContext =
   4:          TokenHelper.CreateAppEventClientContext(properties, false))
   5:      {
   6:          if (clientContext != null)
   7:          {
   8:              List myList = clientContext.Web.Lists.GetByTitle(ListName);
   9:              clientContext.Load(myList, p => p.EventReceivers);
  10:              clientContext.ExecuteQuery();
  11:   
  12:              bool rerExists = false;
  13:   
  14:              foreach (var rer in myList.EventReceivers)
  15:              {
  16:                  if (rer.ReceiverName == ReceiverName)
  17:                  {
  18:                      rerExists = true;
  19:                      System.Diagnostics.Trace.WriteLine("Found existing ItemAdded receiver at "
  20:                          + rer.ReceiverUrl);
  21:                  }
  22:              }
  23:   
  24:              if (!rerExists)
  25:              {
  26:                  EventReceiverDefinitionCreationInformation receiver =
  27:                      new EventReceiverDefinitionCreationInformation();
  28:                  receiver.EventType = EventReceiverType.ItemAdded;
  29:   
  30:                  //Get WCF URL where this message was handled
  31:                  OperationContext op = OperationContext.Current;
  32:                  Message msg = op.RequestContext.RequestMessage;
  33:   
  34:                  receiver.ReceiverUrl = msg.Headers.To.ToString();
  35:   
  36:                  receiver.ReceiverName = ReceiverName;
  37:                  receiver.Synchronization = EventReceiverSynchronization.Synchronous;
  38:                  myList.EventReceivers.Add(receiver);
  39:   
  40:                  clientContext.ExecuteQuery();
  41:   
  42:                  System.Diagnostics.Trace.WriteLine("Added ItemAdded receiver at "
  43:                      + msg.Headers.To.ToString());
  44:              }
  45:          }
  46:      }
  47:  }

Lines 8-10 just get the list and its event receivers using the client side object model.  The real work is in lines 24-38, where we obtain the WCF address that the message was originally sent to and use that URL for our new event receiver.  This is how we add a remote event receiver to a list in the host web.

We need to clean up after ourselves, otherwise we may continue to receive messages after someone has uninstalled the app.

   1:  private void HandleAppUninstalling(SPRemoteEventProperties properties)
   2:  {
   3:      using (ClientContext clientContext =
   4:          TokenHelper.CreateAppEventClientContext(properties, false))
   5:      {
   6:          if (clientContext != null)
   7:          {
   8:              List myList = clientContext.Web.Lists.GetByTitle(ListName);
   9:              clientContext.Load(myList, p => p.EventReceivers);
  10:              clientContext.ExecuteQuery();
  11:   
  12:              var rer = myList.EventReceivers.Where(
  13:                  e => e.ReceiverName == ReceiverName).FirstOrDefault();
  14:   
  15:              try
  16:              {
  17:                  System.Diagnostics.Trace.WriteLine("Removing ItemAdded receiver at "
  18:                      + rer.ReceiverUrl);
  19:   
  20:                  //This will fail when deploying via F5, but works
  21:                  //when deployed to production
  22:                  rer.DeleteObject();
  23:                  clientContext.ExecuteQuery();
  24:              }
  25:              catch (Exception oops)
  26:              {
  27:                  System.Diagnostics.Trace.WriteLine(oops.Message);
  28:              }
  29:          }
  30:      }
  31:  }

Now let’s handle the ItemAdded event.

   1:  private void HandleItemAdded(SPRemoteEventProperties properties)
   2:  {
   3:      using (ClientContext clientContext =
   4:          TokenHelper.CreateRemoteEventReceiverClientContext(properties))
   5:      {
   6:          if (clientContext != null)
   7:          {
   8:              try
   9:              {
  10:                  List photos = clientContext.Web.Lists.GetById(
  11:                      properties.ItemEventProperties.ListId);
  12:                  ListItem item = photos.GetItemById(
  13:                      properties.ItemEventProperties.ListItemId);
  14:                  clientContext.Load(item);
  15:                  clientContext.ExecuteQuery();
  16:   
  17:                  item["Title"] += "\nUpdated by RER " +
  18:                      System.DateTime.Now.ToLongTimeString();
  19:                  item.Update();
  20:                  clientContext.ExecuteQuery();
  21:              }
  22:              catch (Exception oops)
  23:              {
  24:                  System.Diagnostics.Trace.WriteLine(oops.Message);
  25:              }
  26:          }
  27:      }
  28:  }

I need to point out line 4.  TokenHelper has two different methods for creating a client context for an event.  The first is CreateAppEventClientContext, which is used for app events such as AppInstalled or AppUninstalling.  The second is CreateRemoteEventReceiverClientContext, which is used for all other events.  This has tripped me up on more than one occasion; make sure to use the CreateRemoteEventReceiverClientContext method for handling item events.

That’s really all there is to it… we use the AppInstalled event to register an event receiver on a list in the host web, and we use the same WCF service to handle the event.  These operations require Manage permission on the object where the event receiver is being added.

Testing it Out

We’ve gone through the steps of creating the app and adding the service bus connection string, let’s see the code work!  Add breakpoints to each of your private methods in the WCF service and press F5 to see it work.

We are prompted to trust the app.  Notice that only the announcements lists in the host web show in the drop-down.

image

Click Trust It.  A short time later, the breakpoint in the HandleAppInstalled method fires.  We continue debugging, and then O365 prompts us to log in. 

Your app’s main entry point is then shown.

image

Without closing the browser (which would stop your debugging session), go back to your SharePoint site.  Go to the Announcements list and add a new announcement.

image

W00t!  Our breakpoint for the ItemAdded event is then hit!

image

If you want to inspect the properties of the remote event receiver that was attached, you can use Chris O’Brien’s scripts from his post, Add/delete and list Remote Event Receivers with PowerShell/CSOM:

image 

Debugging and the Handle App Uninstalling Event

Recall that we use the App Installed event to register a remote event receiver on a list in the host web.  We also want to remove that remote event receiver from the list when the app is uninstalled.  However, if we handle the AppUninstalling event and try to unregister the receiver using DeleteObject(), it doesn’t work: you will consistently receive an error saying you don’t have permissions.  This only happens when side-loading the app, which is what happens when you use F5 to deploy the solution with Visual Studio. 

Unfortunately, that means the receivers that are registered for the list hang around.  The only way to get rid of them is to delete the list.  Again, this only occurs when side-loading the app; it doesn’t happen when the app is deployed normally.

To see the App Uninstalling event work, we are going to need to deploy our app.

Deploy to Azure and App Catalog

In my previous post, Creating a SharePoint 2013 App With Azure Web Sites, I showed how to create an Azure web site, go to AppRegNew.aspx to create a client ID, and a client secret.  I then showed how to publish the app to an Azure web site, and package the app to generate the .app package.  I did the same here, deploying the web application to an Azure web site called “rerdemo”. 

image

Instead of copying the .app package to a Developer Site Collection, we are instead going to copy the .app package to our App Catalog for our tenant.  Just go to the Apps for SharePoint library and upload the .app package.

image

Now go to a SharePoint site that you want to deploy the app to.  Make sure to create an Announcements list.  Our app could have done this in the App Installed event, but c’mon, this post is long enough as it is.  I’ll leave that as an exercise to the reader.
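That said, if you did want the app to create the list itself, a rough sketch (my addition, under the same Manage permission and clientContext assumptions as the code above) might look like this, called from HandleAppInstalled:

private void EnsureAnnouncementsList(ClientContext clientContext)
{
    //Requires a using directive for System.Linq
    ListCollection lists = clientContext.Web.Lists;
    clientContext.Load(lists, l => l.Include(x => x.Title));
    clientContext.ExecuteQuery();

    if (!lists.Any(x => x.Title == ListName))
    {
        ListCreationInformation creationInfo = new ListCreationInformation();
        creationInfo.Title = ListName;
        creationInfo.TemplateType = (int)ListTemplateType.Announcements;

        clientContext.Web.Lists.Add(creationInfo);
        clientContext.ExecuteQuery();
    }
}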

image

Before we add the app to the site, let’s see something incredibly cool.  Go to the Azure web site in Visual Studio, right-click and choose Settings, and turn up logging for everything.

image

Click save.

Right-click the Azure web site and choose View Streaming Logs in Output Window.  You’ll be greeted with a friendly message.

image

Now go back to your SharePoint site and choose add an app.  You should now see your app as one of the apps that can be installed.

image

Click Trust It.

image

Your app will show that it is installing.

image

Go back to Visual Studio and look at the streaming output logs.

image

OMG.  I don’t know about you, but I nearly wet myself when I saw that.  That is so unbelievably cool.  Let’s keep playing to see what other messages show up.  Go to the Announcements list and add an item.

image

Shortly after clicking OK, you’ll see the Title has changed.

image

Finally, uninstall the app to test our HandleAppUninstalling method.

image

We see a new message that the remote event receiver is being removed.

image

And we can again use Chris O’Brien’s PowerShell script to check if there are any remote event receivers still attached to the Announcements list.

image

Now, go back to Visual Studio.  Right-click on the Azure web site and choose Settings.  Go to the Logs tab and choose Download Logs.

image

A file is now in my Downloads folder.

image

I can double-click on the zip file to navigate into it.  Go to LogFiles / http / RawLogs and see the log file that is sitting there.  Double-click it.  You can see the IIS logs for your site!

image

For More Information

Developer training for Office, SharePoint, Project, Visio, and Access Services

Defining content in Host Web from an App for SharePoint

Add/delete and list Remote Event Receivers with PowerShell/CSOM

Streaming Diagnostics Trace Logging from the Azure Command Line (plus Glimpse!)

Creating a SharePoint 2013 App With Azure Web Sites

Install a New Active Directory forest on an Azure Virtual Network


This post will show how to install a new Active Directory forest on an Azure Virtual Network.  We will use this domain controller and virtual network in subsequent posts.

DISCLAIMER: This post does not contain definitive guidance on the correct way to create a domain controller in Azure.  For more definitive guidance, please see TechNet guidance, including Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines.  Please don’t expect that I will be able to answer support issues for your particular AD deployment scenario. 

I am going to loosely follow along with the article “Install a new Active Directory forest on an Azure virtual network” to show how to set up a new forest, just adding pictures along the way. 

That said, let’s just dive in.

Create an Affinity Group

I am going to use an affinity group because I want the compute and storage resources located closely together.  I created an affinity group named “kirke-java-east”, but the name can be whatever you want. 

image

Create a Storage Account

You can create a storage account as part of the wizard to create a new virtual machine, but I prefer to create it ahead of time.  I made sure to use the affinity group that we just created as the location.

image

Note that you can use zone redundant storage or geo-replicated storage, but I chose to use locally redundant to reduce costs.  In a production scenario, I would provision according to requirements.

Create the Virtual Network

Start by creating a virtual network.  Go to Networks / Virtual Networks and choose “Create a virtual network”.  Provide the name and region and click next.  The virtual network doesn’t participate in the affinity group, so we choose the same region as our affinity group.

image

Leave DNS servers blank, and don’t worry about creating a VPN just yet.

image

For the subnet address space, I chose to use a 10.0.0.0 start address with a CIDR of /24 (256).

image

Create the Cloud Service

Again, you could do this as part of the wizard to create a virtual machine, but I am showing the cloud service creation separately for completeness.  Create a new cloud service.  I used the custom create option, but quick create does the same thing.  Use the same affinity group you chose previously.

image

Note that the name can be anything you want, what matters is the affinity group.

Create the Virtual Machine

Now create the virtual machine.  Choose the latest Windows Server image from the gallery.

image

Next we’ll give some properties, including the size, login name, and password.

image

Now I can use the VNet, cloud service, and storage account that were created previously.  I choose not to use an availability set for the VM.

image

Note that we could have skipped the affinity group, because the virtual machine will be created in the same location as the virtual network.  I will use the affinity group in a subsequent post. 

Finally, choose to install the VM Agent.

image

Click finish, and after some time your virtual machine will be created.

Set a Static IP Address

The IP address will remain for the duration that the VM is running, but it can change if the VM is shut down.  We can use PowerShell to assign a static IP to our previously created VM.  We use Test-AzureStaticVNetIP to test whether the address is available (IsAvailable=true); if it isn’t, the cmdlet returns a list of available addresses.

image

We then assign the static IP.

image

The script I used is:

Code Snippet
Test-AzureStaticVNetIP -VNetName KirkE-Java-VNet -IPAddress 10.0.0.5
Get-AzureVM -ServiceName kirke-java-east -Name DC1 | Set-AzureStaticVNetIP -IPAddress 10.0.0.5 | Update-AzureVM

 

Create an Empty Disk and Format

In the Azure Management Portal, select the virtual machine and attach an empty disk to it.

image

The next screen is where you specify the size, for example 10 GB.  Make sure to leave the other settings as the default.

image

Once the virtual machine is done updating, connect to the VM using remote desktop.  Once connected, choose Tools / Computer Management.

image

Choose Disk Management, and you will be prompted to initialize the disk.  Choose OK.

image

Once initialized, right-click the new disk and choose “New Simple Volume”.

image

Next, next, next, Finish. 

image

You are then prompted to format the disk.  Choose “Format disk”.

image

image

Bob’s yer uncle, a new disk is now available.

Install Active Directory Domain Services

In the Server Manager dashboard, choose Add Roles and Features.  Choose role-based.

image

Use the local server (pretty cool, notice the IP address is the static one that we used previously).

image

Choose Active Directory Domain Services.

image

You will be prompted to add features.  You need these, so click “Add Features”.

image

Click Next, and when prompted to add additional features just click Next.

You are prompted to install the selected roles and features.  Click Install.  Optionally you can automatically restart the server.  A restart is not required to install ADDS, but is required after you promote the machine to a domain controller. 

image

You can view progress while ADDS is being installed. 

image

If you aren’t a fan of watching progress bars, you can close the wizard or wait for it to complete. 

Once complete, you will see a warning icon in the dashboard.  Click it to see the additional steps required.

image

Promote to a Domain Controller

The next step is to promote the VM to a domain controller.  I am following along with the TechNet documentation, “Install a New Windows Server 2012 Active Directory Forest (Level 200)”.  Choose Add a new forest; I used the name “corp.blueskyabove.us”.  Use your own name, of course.

image

I then leave the default functional levels, and provide a password.

image

On the delegation options screen, just click Next.

image

Leave the NetBIOS domain name.

image

Now change the drive letter to the drive we created before.  Instead of putting the files on the OS drive, we will use our new data disk, the E drive.

image

Next, Next, Install, and the server will automatically restart.

Log In

You can now log into your new domain controller, using the domain credentials.  You can see that I now have Active Directory Users and Computers, and can see that I am logged in as corp\myadmin.

image

Set the DNS Server for the Virtual Network

Now that we’ve created the domain controller, we can set it as the DNS server for the virtual network.  Go to the virtual network in the Azure management portal and go to the Configure tab.  Set the name and IP of the virtual machine and click Save.

image

Finally, select the VM and click Restart to trigger the VM to configure DNS resolver settings with the IP address of the new DNS server.

Congratulations, you now have a domain controller in Azure, and it is configured as the DNS server for the virtual network.  We’ll use this in a subsequent post.

For More Information

Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines

Install a new Active Directory forest on an Azure virtual network

Configure a Static Internal IP Address for a VM

Install a New Windows Server 2012 Active Directory Forest (Level 200)

Deploy Azure Roles Joined to a VNet Using Eclipse


This post will show how to deploy an Azure worker role running Apache Tomcat using Eclipse, where the worker role is joined to a VNet.

Background

I have been working with a customer to deploy existing Java applications to Azure.  One of their main requirements is to be able to communicate with on-premises resources using either a site to site VPN or ExpressRoute.  Further, the components that they deploy must be able to communicate with virtual machines in separate cloud services within the same subscription, and those components must be accessible only from within the VNet.  The components they utilize must not be accessible publicly over the internet.  Finally, the components are currently deployed on-premises in an environment that is way under capacity in order to accommodate burst.

We first looked at Azure Websites because they are simple to use.  While it is possible to integrate Azure Websites with a virtual network, this does not grant access to the website from the virtual network, so this option doesn’t work for their scenario.  We then looked at utilizing Azure virtual machines, but they were concerned about traffic bursts and having to pre-provision many virtual machines in order to use autoscaling of VMs. 

Benefits of the Solution

Because this customer’s scenario requires advanced networking and the ability to accommodate burst traffic, it was an ideal fit for Azure Role Instances.  Azure Role Instances (aka Web Roles and Worker Roles) enable you to deploy the application to the Azure fabric, and new instances are automatically provisioned or deprovisioned based on the autoscaling rules that you specify.  The guest OS can be automatically updated, relieving you of many of the management tasks for the underlying operating system. 

Further, because we are able to deploy the roles within an Azure Virtual Network (VNet), we are able to provide fast and secure communication between components as well as enable connectivity to on-premises resources.

Michael Washam wrote a great blog post that details how to join Azure Role Instances to a Simple Virtual Network in Microsoft Azure.  Leveraging the content from his post, I was able to show the customer how to accomplish the task using Eclipse.

This particular customer that I am working with will deploy their worker role into a cloud service that needs to access resources in another cloud service.  The solution that I will build will contain a virtual network with a worker role and two virtual machines, and another cloud service with a virtual machine that is not part of the virtual network.

image

The worker role that we deploy to Azure will be in its own cloud service, and we will deploy two VMs in their own cloud services to prove connectivity.  Azure provides its own DNS for host-name resolution between virtual machines within the same cloud service, but that resolution does not span cloud services.  To provide name resolution across cloud services, we have to provide our own DNS.  The easiest way to do this is to create a new Windows Server 2012 R2 virtual machine and create a new Active Directory forest.  That’s why I have the domain controller in the solution: promoting it also gives us a DNS server.  The virtual machine that is not part of the VNet will not have name resolution to resources within the second cloud service, proving that our DNS server is actually providing name resolution.

Create an Affinity Group

The first step is to create an affinity group.  In the Azure Management Portal (http://manage.windowsazure.com), go to Settings / Affinity Groups and choose Add.

image

Provide a name and the region for your affinity group.  The name doesn’t matter, I just use the location in the name as a convention so I can easily reference it later.

image

Create a Virtual Network

The next step is to create a virtual network.  Go to Networks / New / Virtual Network / Custom Create.

image

On the next page, specify the name of the virtual network and use the same region as your previously created affinity group.

image

For the purposes of this post, we are going to leave the DNS server blank and are not configuring point-to-site or site-to-site connectivity.  We are just going to show how to deploy to a simple virtual network named kirke-java-west-vnet with a single subnet named JavaAppSubnet.

image

Click the Complete button (the check mark at the bottom right) to create the virtual network.

Create a new Active Directory Forest

I am not going to cover all of the required steps here as I’ve already written up the steps in my blog post, Install a New Active Directory forest on an Azure Virtual Network.  Follow those steps to create the AD forest and then update the Azure Virtual Network to use that domain controller’s DNS.  The key to creating the VM is to place it in the virtual network while you are creating it.

image

Once the VM is created, we assign a static IP, testing to see if the IP is available.

Code Snippet
Test-AzureStaticVNetIP -VNetName kirke-java-west-vnet -IPAddress 10.0.0.7
Get-AzureVM -ServiceName JavaDC -Name JavaDC | Set-AzureStaticVNetIP -IPAddress 10.0.0.7 | Update-AzureVM

Next, follow the instructions in the blog post (Install a New Active Directory forest on an Azure Virtual Network) to add a disk and promote to a domain controller in a new AD forest.

After the VM is promoted to a domain controller, the VM is rebooted.  Go to the Virtual Network and set the virtual machine as the DNS server.

image

Finally, restart the VM for the DNS settings to take effect.

image

 

Configure the Azure Role Instance in Eclipse

In my blog post, Creating an Eclipse Development Environment for Azure, I showed how to create a virtual machine that uses the Azure Toolkit for Eclipse.  In that post, I showed how to create a simple Java application that is deployed to an Azure Worker Role running Apache Tomcat.  Rather than rehash all the steps for creating the project, I will simply point you to that post for details on creating the initial project structure and Azure Deployment Project.  Once you have the Azure Deployment Project created, the next step is to configure the Azure Role Instance using the Azure Toolkit for Eclipse.

In the Azure Deployment Project, locate the .cscfg file and open it, then go to the Source tab to view the XML markup.

image

Beneath the Role ending element, add the following markup:

Code Snippet
<NetworkConfiguration>
  <VirtualNetworkSite name="kirke-java-west-vnet" />
  <AddressAssignments>
    <InstanceAddress roleName="WorkerRole1">
      <Subnets>
        <Subnet name="JavaAppSubnet" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>

Note that the roleName value needs to match the “name” attribute on the Role element, as shown in this image:

image

Make sure to Save All to save your work.

Enable Remote Access to the Azure Role Instance

In order for us to do some testing, we are going to need to be able to RDP to the Azure worker role.  We can do this before we deploy, or even enable it for a previously deployed instance.  In order to do this, we need a certificate.  For our purposes, a self-signed certificate will work just fine.

In order for this to work in our environment, we need to enable .NET Framework 3.5.  Just open PowerShell by clicking the icon in the taskbar.

image

In PowerShell, run the following command:

Code Snippet
  1. Install-WindowsFeature Net-Framework-Core

After a few minutes, .NET Framework 3.5 will be installed. 

image

Now, in Eclipse, click the “New Self Signed Certificate” toolbar item.

image

I provide a password, then save the .CER and .PFX files in the “cert” folder for my project.

image

Next, right-click the Azure Deployment Project and choose Azure / Properties.

image

Go to the Remote Access node and check the “Enable all roles to accept Remote Desktop connections with these login credentials” option.  Provide a username, password, and the path to the .CER file that you just created.

image

Once you choose OK, the .CSCFG file will be updated with the values you just entered.

image

We can now publish our changes to Azure using the “Publish to Azure Cloud” button in the toolbar.

image

You are then prompted for the private key password.

image

Once the role is deployed, we can connect to it.  Go to Cloud Services, Instances, click on the Staging tab, and the Connect button will be enabled.

image

You can now RDP into the Azure Role Instance that you just deployed!

Testing It Out

To prove how this works, I am going to create two new virtual machines in their own cloud services.  Azure provides its own DNS, but name resolution does not span cloud services.  In order to get name resolution across cloud services, I will need to provide my own DNS for the VNet. 

The first virtual machine will be part of the same VNet.  The second virtual machine will not be part of the VNet.

For both VMs, I will use a Windows Server 2012 R2 image.

image

image

For each VM, I will create a new cloud service.  The first VM will be joined to the VNet.  To join to a VNet, simply choose the virtual network from the dropdown.

image

The second VM will create a new cloud service and be in the same region, but not part of the virtual network.

image

Once the VMs are created, I RDP into them.  Let’s just prove that the VM that is not part of the VNet (the VM named VM2) will not have connectivity.  I show the IPConfig here to show it’s not part of the 10.0.x network that we are using for the other components.

image

OK, so that VM isn’t going to be able to connect, let alone use the domain controller for name resolution, so let’s move on to the case where things work. 

The next test will test the VM that is part of the same virtual network.  From VM1 (the one deployed into the VNet), I run a simple ping test using its IP address.

image

Cool, we can connect because this VM is part of the same VNet.  Now, let’s try name resolution, since we provided our own DNS server.  We first test by pinging the DNS server by name (corp.blueskyabove.us), and we see the reply comes from 10.0.0.7… proving that name resolution works.

image

Woohoo! 

OK, one last test.  When we deployed our cloud service we configured it so that we could RDP into it.  I connect and then run the same ping test to prove name resolution, and show ipconfig to show the DNS suffix is reddog.microsoft.com, which is consistent for worker roles deployed to Azure.

image

If we needed to communicate from the worker role to the virtual machine, we could update DNS to provide the name resolution (because Azure does not provide name resolution across cloud services). 

The Payoff

Besides the fact that it’s cool, why did we do all of this?  The fact is that in many deployments, you will need more than a single cloud service and will need name resolution across them.  Further, you will use virtual networks to establish gateways to communicate to on-premises resources, in which case you would use your own DNS server to resolve to on-premises resources.  It’s important to understand the connectivity options and how various components communicate.

Another huge benefit is that we also show how Azure roles (web and worker roles) can join in a virtual network.  This enables scenarios such as Java applications that communicate to an on-premises Cassandra database in a secure manner. 

For More Information

integrate Azure Websites with a virtual network

Connecting Web or Worker Roles to a Simple Virtual Network in Windows Azure

Manage Upgrades to the Azure Guest Operating System (Guest OS)

Creating an Eclipse Development Environment for Azure

Azure Toolkit for Eclipse


Autoscaling Azure–Virtual Machines


This post will demonstrate autoscaling in Azure virtual machines.

image

Background

While I spend most of my time working with PaaS (platform as a service) components of Azure such as Cloud Services and Websites, I frequently need to help customers with solutions that require IaaS (infrastructure as a service) virtual machines.  A topic that comes up very regularly in both of those conversations is how autoscaling works. 

The short version is that you have to pre-provision virtual machines, and autoscale turns them on or off according to the rules you specify.  One of those rules might be queue length, enabling you to build a highly scalable solution that provides cloud elasticity.

Autoscaling Virtual Machines

Let’s look at autoscaling virtual machines.  To autoscale a virtual machine, you need to pre-provision the number of VMs and add them to an availability set.  Using the Azure Management Portal to create the VM, I choose a Windows Server 2012 R2 Datacenter image and provide the name, size, and credentials.

image

The next page allows me to specify the cloud service, region or VNet, storage account, and an availability set.  If I don’t already have an availability set, I can create one.  I already created one called “AVSet”, so I add the new VM to the existing availability set.

image

Finally, add the extensions required for your VM and click OK to create the VM.  Make sure to enable the VM Agent; we’ll use that later.

image

You can see that I’ve created 5 virtual machines. 

image

I’ve forgotten to place VM2 in the availability set.  No problem, I can go to its configuration and add it to the availability set.

image

This is the benefit of autoscaling and the cloud.  I might use the virtual machine for a stateless web application where it’s unlikely that I need all 5 virtual machines running constantly.  If I were running this on-premises, I would typically just leave them running, consuming resources that I don’t actually utilize (overprovisioned).  I can reduce my cost by running them in the cloud and only utilize the resources that I need when I need them.  Autoscale for virtual machines simply turns some of the VMs on or off depending on rules that I specify.

To show this, let’s configure autoscale for my availability set.  Once VM5 is in the Running state, I go to the Cloud Services tool in the portal and then navigate to my cloud service’s dashboard.  On the dashboard I will see a section for autoscale status:

image

It says that I can save up to 60% by configuring autoscale.  Click the link to configure autoscale.  This is the most typical demo that you’ll see: scaling by CPU.  In this screenshot, I’ve configured autoscale to start at 1 instance.  The target CPU range is between 60% and 80%.  If CPU exceeds that range, we’ll scale up by 2 more instances and then wait 20 minutes before the next action.  If CPU falls below that range, we’ll scale down by 1 instance and wait 20 minutes.

image

Easy enough to understand.  A lesser known but incredibly cool pattern is scaling by queues.  In a previous post, I wrote about Solving a throttling problem with Azure where I used a queue-centric work pattern.  Notice the Scale by Metric option provides Queue as an option:

image

That means we can scale based on how many messages are waiting in the queue.  If the messages are increasing, then our application is not able to process them fast enough, thus we need more capacity.  Once the number of messages levels off, we don’t need the additional capacity, so we can turn the VMs off until they are needed again.

I changed my autoscale settings to use a Service Bus queue, scaling up by 1 instance every 5 minutes and down by 1 instance every 5 minutes.

image

After we let the virtual machines run for a while, we can see that all but one of them were turned off due to our autoscale rules.

image

Just a Little Code

Virtual machines need something to do, so we’ll create a simple solution that sends and receives messages on a Service Bus queue.  On my laptop, I have an application called “Sender.exe” that sends messages to a queue.  Each virtual machine has an application on it that I’ve written called Receiver.exe that simply receives messages from the queue.  We will have up to 5 receivers working simultaneously as competing consumers of the queue.

image

The Sender application sends a message to the queue once every second.

Sender
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using System;

namespace Sender
{
    class Program
    {
        static void Main(string[] args)
        {
            string connectionString =
                CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

            QueueClient client =
                QueueClient.CreateFromConnectionString(connectionString, "myqueue");

            int i = 0;
            while (true)
            {
                var message = new BrokeredMessage("Test " + i);
                client.Send(message);

                Console.WriteLine("Sent message: {0} {1}",
                    message.MessageId,
                    message.GetBody<string>());

                //Sleep for 1 second
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
                i++;
            }
        }
    }
}

The Receiver application reads messages from the queue once every 3 seconds.  The idea is that the sender will send the messages faster than 1 machine can handle, which will let us observe how autoscale works.

Receiver
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Receiver
{
    class Program
    {
        static void Main(string[] args)
        {
            string connectionString =
                CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

            QueueClient client =
                QueueClient.CreateFromConnectionString(connectionString, "myqueue");

            while (true)
            {
                var message = client.Receive();
                if (null != message)
                {
                    Console.WriteLine("Received {0} : {1}",
                        message.MessageId,
                        message.GetBody<string>());

                    message.Complete();
                }

                //Sleep for 3 seconds
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(3));
            }
        }
    }
}

I built the Receiver.exe application in Visual Studio then copied all of the files in the bin/debug folder to the c:\temp folder on each virtual machine. 

image

Running Startup Tasks with Autoscale

As each virtual machine is started, I want the Receiver.exe code to execute upon startup.  I could go into each machine and set a group policy to assign computer startup scripts, but since we are working with Azure, we can use the custom script extension, which will run each time the machine is started.  When I created the virtual machines earlier, I enabled the Azure VM Agent on each one, so the custom script extension is available to us.

We need to upload a PowerShell script to be used as a startup task to execute the Receiver.exe code that is already sitting on the computer.  The code for the script is stupid simple:

Startup.ps1
Set-Location "C:\temp"
.\Receiver.exe

This script is uploaded from my local machine to Azure blob storage as a block blob using the following commands:

Upload block blob
$context = New-AzureStorageContext -StorageAccountName "kirkestorage" -StorageAccountKey "QiCZBIREDACTEDuYcqemWtwhTLlw=="
Set-AzureStorageBlobContent -Blob "startup.ps1" -Container "myscripts" -File "c:\temp\startup.ps1" -Context $context -Force

I then set the custom script extension on each virtual machine.

AzureVMCustomScriptExtension
$vms = Get-AzureVM -ServiceName "kirkeautoscaledemo"
foreach ($vm in $vms)
{
    Set-AzureVMCustomScriptExtension -VM $vm -StorageAccountName "kirkestorage" -StorageAccountKey "QiCZBIREDACTEDuYcqemWtwhTLlw==" -ContainerName "myscripts" -FileName "startup.ps1"
    $vm | Update-AzureVM
}

Once updated, I can see that the Receiver.exe is running on the one running virtual machine:

image

Testing It Out

The next step is to fire up the Sender and start sending messages to the queue.  The only problem is that I haven’t provided a good way to see what is going on, such as how many messages are currently in the queue.  One simple way to do this is to use the Service Bus Explorer tool, a free download.  Simply enter the connection string for your Service Bus namespace and you will be able to connect and see how many messages are in the queue.  I can send a few messages, then stop the sender.  Refresh the queue, and the number of messages decreases once every 3 seconds.

image
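
If you would rather check the queue depth from PowerShell than install a separate tool, a minimal sketch like the following should also work.  The path to Microsoft.ServiceBus.dll and the connection string are assumptions; point them at your own NuGet packages folder and namespace.

# Load the Service Bus client assembly (path is an assumption; adjust to your NuGet packages folder)
Add-Type -Path "C:\temp\Microsoft.ServiceBus.dll"

# Same connection string used by Sender.exe and Receiver.exe (placeholder values)
$connectionString = "Endpoint=sb://REDACTED.servicebus.windows.net/;SharedAccessKeyName=REDACTED;SharedAccessKey=REDACTED"

# Ask the namespace for the queue description and report how many messages are waiting
$namespaceManager = [Microsoft.ServiceBus.NamespaceManager]::CreateFromConnectionString($connectionString)
$queue = $namespaceManager.GetQueue("myqueue")
Write-Host ("Messages in myqueue: {0}" -f $queue.MessageCount)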

OK, so our queue receiver is working.  Now let’s see if it triggers autoscale.  I’ll fire up the Sender and let it run for a while.

image

The number of messages in the queue continues to grow…

image

And after a few minutes one of the virtual machines is automatically started.

image

I check and make sure that Receiver.exe is executing:

image

After waiting for a while (I lost track of time; my guess is 30 minutes or so), you can see that all of the VMs are now running because the incoming messages outpaced our virtual machines’ ability to process them.

image

Once there are around 650 messages in the queue, I turn the sender off.  The number of messages starts to drop quickly.  Since we are draining messages out of the queue, we should be able to observe autoscale shutting things down.  About 5 minutes after the queue drained to zero, I saw the following:

image

Go back to the dashboard for the cloud service, and once autoscale shuts down the remaining virtual machines (all but one, just like we defined) you see the following:

image

Monitoring

I just showed how to execute code when the machine is started, but is there any way to see in the logs when an autoscale operation occurs?  You bet!  Go to the Management Services tool in the management portal:

image

Go to the Operation Logs tab, and take a look at the various ExecuteRoleSetOperation entries. 

image

Click on the details for one.

Operation Log Entry
  1. <SubscriptionOperation xmlns="http://schemas.microsoft.com/windowsazure"
  2.                        xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  3.   <OperationId>REDACTED</OperationId>
  4.   <OperationObjectId>/REDACTED/services/hostedservices/KirkEAutoscaleDemo/deployments/VM1/Roles/Operations</OperationObjectId>
  5.   <OperationName>ExecuteRoleSetOperation</OperationName>
  6.   <OperationParameters xmlns:d2p1="http://schemas.datacontract.org/2004/07/Microsoft.WindowsAzure.ServiceManagement">
  7.     <OperationParameter>
  8.       <d2p1:Name>subscriptionID</d2p1:Name>
  9.       <d2p1:Value>REDACTED</d2p1:Value>
  10.     </OperationParameter>
  11.     <OperationParameter>
  12.       <d2p1:Name>serviceName</d2p1:Name>
  13.       <d2p1:Value>KirkEAutoscaleDemo</d2p1:Value>
  14.     </OperationParameter>
  15.     <OperationParameter>
  16.       <d2p1:Name>deploymentName</d2p1:Name>
  17.       <d2p1:Value>VM1</d2p1:Value>
  18.     </OperationParameter>
  19.     <OperationParameter>
  20.       <d2p1:Name>roleSetOperation</d2p1:Name>
  21.       <d2p1:Value><?xml version="1.0" encoding="utf-16"?>
  22.         <z:anyType xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
  23.                    xmlns:d1p1="http://schemas.microsoft.com/windowsazure"
  24.                    i:type="d1p1:ShutdownRolesOperation"
  25.                    xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/">
  26.           <d1p1:OperationType>ShutdownRolesOperation</d1p1:OperationType>
  27.           <d1p1:Roles>
  28.             <d1p1:Name>VM3</d1p1:Name>
  29.           </d1p1:Roles>
  30.           <d1p1:PostShutdownAction>StoppedDeallocated</d1p1:PostShutdownAction>
  31.         </z:anyType>
  32.       </d2p1:Value>
  33.     </OperationParameter>
  34.   </OperationParameters>
  35.   <OperationCaller>
  36.     <UsedServiceManagementApi>true</UsedServiceManagementApi>
  37.     <UserEmailAddress>Unknown</UserEmailAddress>
  38.     <ClientIP>REDACTED</ClientIP>
  39.   </OperationCaller>
  40.   <OperationStatus>
  41.     <ID>REDACTED</ID>
  42.     <Status>Succeeded</Status>
  43.     <HttpStatusCode>200</HttpStatusCode>
  44.   </OperationStatus>
  45.   <OperationStartedTime>2015-02-20T20:47:54Z</OperationStartedTime>
  46.   <OperationCompletedTime>2015-02-20T20:48:38Z</OperationCompletedTime>
  47.   <OperationKind>ShutdownRolesOperation</OperationKind>
  48. </SubscriptionOperation>

Notice on line 26 that the operation type is “ShutdownRolesOperation”, and on line 28 the role name is VM3.  That entry occurred after VM3 was automatically shut down.

Summary

This post demonstrated Azure Autoscale turning virtual machines on and off according to the number of messages in a queue.  This pattern can be hugely valuable for building scalable solutions while taking advantage of the elasticity of cloud resources.  You only pay for what you use, and it’s in your best interest to design solutions that take advantage of that and avoid over-provisioning resources.

For More Information

Solving a throttling problem with Azure

Automating VM Customization tasks using Custom Script Extension

Service Bus Explorer

Creating Dev and Test Environments with Windows PowerShell


This post will discuss creating application environments with Windows PowerShell.  We will use these environments in subsequent posts.

Background

I have participated in a series of readiness workshops with our top GSI partners.  Part of the workshop includes case studies where we have the participants review requirements and propose solutions within a few constraints.  One of the case studies involves Visual Studio Release Management.  It’s a very interesting case study, but I felt it could have been so much better with a demo.  You know me, as I come up with ideas like this I try to get some time at the keyboard to build the demo and then share it here on my blog.  This time is no exception.

The case study shows off Visual Studio Release Management deploying to several environments.  I’ll get to that part later; for now we will focus on designing the environment to suit our needs.

Requirements

We will have three environments: dev, stage, and prod.  Each environment must have an uptime SLA of 99.95% or higher while minimizing cost.  These virtual machines must be accessible from on-premises using HTTP over port 8080, but not accessible externally (no public HTTP endpoint exposed).  A CPU spike in one environment should not affect the other environments. 

If we break the requirements down, we can see that there are three environments, each requiring an SLA of 99.95% or higher while minimizing cost.  We implement two virtual machines in each environment and place them within an availability set to meet the 99.95% SLA requirement.  To minimize our cost during the POC phase, we will use the Small VM size (for more information on VM sizes, see Virtual Machine and Cloud Service Sizes for Azure).  We’ll take care of serving HTTP over port 8080 in a future post, as it requires some internal configuration of the VM, but we can satisfy the requirement not to expose port 8080 externally simply by not adding an endpoint for the environment.  We’ll partially address the on-premises connectivity in this post, but that will also need a little more work to complete later.  Finally, we address the customer’s concerns about CPU by placing each environment within its own cloud service.

Our proposed architecture looks like the following.

image

I created this environment using my MSDN subscription in about an hour just using the new Azure portal (https://portal.azure.com).  However, it’s been a while since I did anything with PowerShell, so let me show you how to do this using the service management PowerShell SDK.

The Virtual Network

The first task is to create a virtual network XML configuration file.  To be honest, I created this using the old Azure portal (https://manage.windowsazure.com) and then exported the VNet to a file.  In the file you can see the 3 subnets with the incredibly clever names of Subnet-1, Subnet-2, and Subnet-3.  The virtual network itself has a name; in this file it’s “kirketestvnet-southcentral”.  You probably want to change that.  The value “kirketest” is part of a naming scheme that I use: I prefix all of the services (virtual network, storage, and cloud service) with the same value, which helps avoid name collisions since the storage and cloud service names must be globally unique.  In this example, I’ve also added a gateway.  We aren’t going to create the gateway just yet, but we will leave the gateway subnet in the definition for now.

NetworkConfig.xml
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="kirketestvnet-southcentral" Location="South Central US">
        <AddressSpace>
          <AddressPrefix>10.0.1.0/24</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Subnet-1">
            <AddressPrefix>10.0.1.0/27</AddressPrefix>
          </Subnet>
          <Subnet name="Subnet-2">
            <AddressPrefix>10.0.1.32/27</AddressPrefix>
          </Subnet>
          <Subnet name="Subnet-3">
            <AddressPrefix>10.0.1.64/26</AddressPrefix>
          </Subnet>
          <Subnet name="GatewaySubnet">
            <AddressPrefix>10.0.1.128/29</AddressPrefix>
          </Subnet>
        </Subnets>
        <Gateway>
          <VPNClientAddressPool>
            <AddressPrefix>10.0.0.0/24</AddressPrefix>
          </VPNClientAddressPool>
          <ConnectionsToLocalNetwork />
        </Gateway>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>

To set the virtual network configuration, you use the following PowerShell script.

Set-AzureVNetConfig
$vnetConfigFilePath = "C:\temp\NetworkConfig.xml"

Set-AzureVNetConfig -ConfigurationPath $vnetConfigFilePath

A word of caution here… this will update ALL of the virtual networks for your subscription.  If you use the XML as-is above, that is the same as telling Azure to delete any existing virtual networks you may have and then add the new network named “kirketestvnet-southcentral”.  Luckily, if those virtual networks are in use, the operation will fail.  I find it much less risky to simply export the existing virtual network configuration, make whatever changes I need to, and then use Set-AzureVNetConfig to apply all of the changes.
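
Here is a minimal sketch of that safer flow, assuming the Azure service management module is loaded and a subscription is selected:

# Export the subscription's current virtual network configuration to a file
Get-AzureVNetConfig -ExportToFile "C:\temp\CurrentNetworkConfig.xml"

# Edit the exported file to add (or change) your VirtualNetworkSite, then re-apply the whole configuration
Set-AzureVNetConfig -ConfigurationPath "C:\temp\CurrentNetworkConfig.xml"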

Creating Virtual Machines

When you create a virtual machine in Azure, you choose a base image.  Using Get-AzureVMImage, you can get a list of all the available images and their locations.  The image name will be something family-friendly like this:

a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201505.01-en.us-127GB.vhd

Notice the date and version number on that image.  This image will go away at some point, replaced by a newer image.  Hardcoding that image name in your script will cause you problems later, but it’s easy to obtain the latest image version instead.  Michael Collier provides a great post, The Case of the Latest Windows Azure VM Image, with a simple solution: get the images sorted by published date, descending, and take the first one.  He also explains in that post that not all images are available in all locations, so you should include the location as part of your filter.

Get-LatestVMImage
function Get-LatestVMImage([string]$imageFamily, [string]$location)
{
    # From https://michaelcollier.wordpress.com/2013/07/30/the-case-of-the-latest-windows-azure-vm-image/
    $images = Get-AzureVMImage `
        | where { $_.ImageFamily -eq $imageFamily } `
        | where { $_.Location.Split(";") -contains $location } `
        | Sort-Object -Descending -Property PublishedDate
    return $images[0].ImageName;
}

Calling this function is really simple: just provide the image family and the location name (obtained from Get-AzureLocation).

Code Snippet
  1. #$imageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201505.01-en.us-127GB.vhd"
  2. $imageName=Get-LatestVMImage-imageFamily"Windows Server 2012 R2 Datacenter"-location$location

Now that we have an image, we create a new AzureVMConfig with the settings for our VM.  We then create an AzureProvisioningConfig to tell the provisioning engine that this is a Windows machine with a username and password.  We assign a subnet, and we are left with the configuration object; we haven’t yet told Azure to create a VM.  This lets us create the configuration for multiple VMs at once before we finally start the provisioning process.  Putting the VMs in an availability set is as easy as providing the –AvailabilitySetName parameter (for more information, see Michael Washam’s post, Understanding and Configuring Availability Sets).

Create Multiple VMs
New-AzureService -ServiceName $serviceName -Location $location

$vm1 = New-AzureVMConfig -Name "DEV1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm1

$vm2 = New-AzureVMConfig -Name "DEV2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm2

New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName

We now have the basic building blocks out of the way.  The rest of the script is simply putting everything together. 

The Script

The rest of the script is fairly straightforward.  We create a cloud service for each environment, two virtual machines in an availability set in each cloud service, each VM is placed within a virtual network subnet.  The full script is shown here:

Full Script
function Get-LatestVMImage([string]$imageFamily, [string]$location)
{
    # From https://michaelcollier.wordpress.com/2013/07/30/the-case-of-the-latest-windows-azure-vm-image/
    $images = Get-AzureVMImage `
        | where { $_.ImageFamily -eq $imageFamily } `
        | where { $_.Location.Split(";") -contains $location } `
        | Sort-Object -Descending -Property PublishedDate
    return $images[0].ImageName;
}

$prefix = "mydemo"
$storageAccountName = ($prefix + "storage")
$location = "South Central US"
$vnetConfigFilePath = "C:\temp\NetworkConfig.xml"

#$imageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201505.01-en.us-127GB.vhd"
$imageName = Get-LatestVMImage -imageFamily "Windows Server 2012 R2 Datacenter" -location $location

$size = "Small"
$adminUsername = "YOUR_USERNAME_HERE"
$adminPassword = "YOUR_PASSWORD_HERE"
$vnetName = ($prefix + "vnet-southcentral")
#Use Get-AzureSubscription to find your subscription ID
$subscriptionID = "YOUR_SUBSCRIPTION_ID_HERE"

#Set the current subscription
Select-AzureSubscription -SubscriptionId $subscriptionID -Current

#Create storage account
New-AzureStorageAccount -StorageAccountName $storageAccountName -Location $location

#Set the current storage account
Set-AzureSubscription -SubscriptionId $subscriptionID -CurrentStorageAccountName $storageAccountName

#Create virtual network
Set-AzureVNetConfig -ConfigurationPath $vnetConfigFilePath


#Development environment
$avSetName = "AVSET-DEV"
$serviceName = ($prefix + "DEV")
$subnetName = "Subnet-1"

New-AzureService -ServiceName $serviceName -Location $location

$vm1 = New-AzureVMConfig -Name "DEV1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm1

$vm2 = New-AzureVMConfig -Name "DEV2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm2

New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName


#Staging environment
$avSetName = "AVSET-STAGE"
$serviceName = ($prefix + "STAGE")
$subnetName = "Subnet-2"

New-AzureService -ServiceName $serviceName -Location $location

$vm1 = New-AzureVMConfig -Name "STAGE1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm1

$vm2 = New-AzureVMConfig -Name "STAGE2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm2

New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName


#Production environment
$avSetName = "AVSET-PROD"
$serviceName = ($prefix + "PROD")
$subnetName = "Subnet-3"

New-AzureService -ServiceName $serviceName -Location $location

$vm1 = New-AzureVMConfig -Name "PROD1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm1

$vm2 = New-AzureVMConfig -Name "PROD2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
Set-AzureSubnet -SubnetNames $subnetName -VM $vm2

New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName

 

The Result

I ran the script using my MSDN subscription.  In about 10 minutes I had 6 virtual machines grouped within 3 cloud services, with each environment in its own subnet of the virtual network.  I could then change the subscription and prefix variables and run the script again in a different subscription.  I could have done all of this using the portal (the first time, I did), but once the script was complete it became an asset for creating the environments in a repeatable and consistent manner.

The virtual network shows the resources that are deployed into the correct subnets.

image

Looking at the Configure tab of a single virtual machine lets us see that the virtual machine is part of an availability set.

image

Clean Up

This is my MSDN subscription, so I don’t want to leave resources lying around if I am not using them.  Clean up is simple: I run another script to delete everything I just created.  Again, a word of caution: this deletes the virtual machines, the associated VHD files, the cloud services, the storage account, and the virtual network configuration.  This is not a reversible operation. 

Clean Up
$prefix = "mydemo"
$storageAccountName = ($prefix + "storage")
$subscriptionID = "YOUR_SUBSCRIPTION_ID_HERE"

#Set up credentials
Add-AzureAccount

#Set the current subscription
Select-AzureSubscription -SubscriptionId $subscriptionID -Current

$serviceName = ($prefix + "DEV")
#The following command deletes the associated VHD, but takes a while
Get-AzureVM -ServiceName $serviceName | % { Remove-AzureVM -Name $_.Name -ServiceName $serviceName -DeleteVHD }
Remove-AzureService -ServiceName $serviceName -Force

$serviceName = ($prefix + "STAGE")
#The following command deletes the associated VHD, but takes a while
Get-AzureVM -ServiceName $serviceName | % { Remove-AzureVM -Name $_.Name -ServiceName $serviceName -DeleteVHD }
Remove-AzureService -ServiceName $serviceName -Force

$serviceName = ($prefix + "PROD")
#The following command deletes the associated VHD, but takes a while
Get-AzureVM -ServiceName $serviceName | % { Remove-AzureVM -Name $_.Name -ServiceName $serviceName -DeleteVHD }
Remove-AzureService -ServiceName $serviceName -Force

#Remove storage account.  This will fail if the
#disks haven't finished deleting yet.
Remove-AzureStorageAccount -StorageAccountName $storageAccountName

#Remove all unused VNets
Remove-AzureVNetConfig

Coming up next we’ll look at how we can establish on-premises connectivity for just a few devices, and then we’ll turn our attention to deploying some code and services to these virtual machines using Visual Studio Release Management.

For More Information

Virtual Machine and Cloud Service Sizes for Azure

Manage the availability of virtual machines

The Case of the Latest Windows Azure VM Image

Understanding and Configuring Availability Sets

Configure a Point-to-Site VPN Connection to an Azure VNet


This post shows how to create a point-to-site (P2S) VPN connection to an Azure virtual network (VNet). 

Background

In my previous post, I showed how to create a virtual network configuration XML file and how to create several environments (dev, stage, and prod) that are each deployed into a separate subnet.  It’s kind of a goofy network architecture, because typically you see VNets configured to model the tiers of a single application (front tier, middle tier, backend tier).  However, it suits my use case and lets me show how to create a point-to-site VPN connection that enables me to communicate with all of the environments through a single connection.

I am showing point-to-site in this post because that’s what I use for demos while I am on the road.  If you travel for work or work remotely, you likely use an agent that you run in order to connect to the corporate network.  That agent establishes a secure connection to the corporate network, enabling you to access resources even from public locations.  That’s exactly what a point-to-site network is: it includes an installer that adds a VPN connection.  Here you can see that I have a VPN connection to Microsoft IT VPN that allows me to VPN into the Microsoft corporate network, and another VPN connection named “DevOps-demo-dev-southcentral” that enables me to connect to an Azure virtual network.

image

When I click Connect on that VPN connection, the agent appears.

image

I then click Connect, and I am securely connected to the virtual network in Azure.  I can now access any resources within that virtual network as though they were part of my local network. 

There are two other types of connectivity to Azure:  site-to-site VPN and ExpressRoute.  A site-to-site VPN allows you to create a secure connection between your on-premises site and the virtual network by using a Windows RRAS server or configuring a gateway device.  ExpressRoute lets you create private connections between Azure and your on-premises or co-located infrastructure without going over the internet.  For more information on choosing between a VPN and ExpressRoute, see ExpressRoute or Virtual Network VPN - What's right for me?

Create the Network

In my previous post, Creating Dev and Test Environments with Windows PowerShell, I showed an example of an XML configuration file for an Azure virtual network.  I used a simple network with three subnets.  One of the elements in that XML file is an additional gateway subnet.  When you create a virtual network, you can choose to configure a point-to-site VPN.

image

When you configure the subnets, you can then add a gateway subnet.

image

I’m lazy and I cheated.  I created the network using this wizard, and then exported the virtual network.  Who likes authoring XML documents directly, anyway?

image

Exporting your new network results in an XML file that looks like this:

NetworkConfig.xml
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <Dns />
    <VirtualNetworkSites>
      <VirtualNetworkSite name="kirketestvnet-southcentral" Location="South Central US">
        <AddressSpace>
          <AddressPrefix>10.0.1.0/24</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="Subnet-1">
            <AddressPrefix>10.0.1.0/27</AddressPrefix>
          </Subnet>
          <Subnet name="Subnet-2">
            <AddressPrefix>10.0.1.32/27</AddressPrefix>
          </Subnet>
          <Subnet name="Subnet-3">
            <AddressPrefix>10.0.1.64/26</AddressPrefix>
          </Subnet>
          <Subnet name="GatewaySubnet">
            <AddressPrefix>10.0.1.128/29</AddressPrefix>
          </Subnet>
        </Subnets>
        <Gateway>
          <VPNClientAddressPool>
            <AddressPrefix>10.0.0.0/24</AddressPrefix>
          </VPNClientAddressPool>
          <ConnectionsToLocalNetwork />
        </Gateway>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>

If you have an existing network and want to add a point-to-site VPN to it, simply export the XML configuration, add the gateway subnet and the VPNClientAddressPool nodes, and then import the configuration file.

Create the Gateway

Now that you’ve created the virtual network and the gateway subnet, it’s time to create the gateway itself.  In the Azure Management Portal (https://manage.windowsazure.com), go to the dashboard view of your VNet and click “Create Gateway”. 

image

While the gateway is being created, the status will look similar to this:

image

This process takes some time to complete (expect around 30 minutes).  In the meantime, start on the next step: creating certificates.
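
If you prefer scripting to the portal, the gateway can likely be created with the service management cmdlets as well; this is a sketch that assumes the virtual network name from the configuration file above:

# Create a dynamic routing gateway for the virtual network (expect roughly 30 minutes)
New-AzureVNetGateway -VNetName "kirketestvnet-southcentral" -GatewayType DynamicRouting

# Check on the gateway state while it provisions
Get-AzureVNetGateway -VNetName "kirketestvnet-southcentral"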

Creating Certificates

The communication between on-premises and Azure is secured using a self-signed root certificate.  If you are reading this blog, there is a high likelihood that you are a developer with Visual Studio installed.  If not, install Microsoft Visual Studio Express 2013 for Windows Desktop, which is free of charge.  In Windows 8, go to the Start screen, open the charm bar, and click settings.  Enable the “Show administrative tools” option.

image

Now go to the all apps view and look for the Visual Studio Tools folder.

image

In that folder you will find the Visual Studio 2013 command prompt (thankfully, this is much easier to locate in Visual Studio 2015!)

image

Right-click and run as Administrator.

Now that we have the command prompt open, we can create two certificates using the following commands:

Create Certificates
makecert -sky exchange -r -n "CN=DevOpsDemoRootCert" -pe -a sha1 -len 2048 -ss My "DevOpsDemoRootCert.cer"

makecert.exe -n "CN=DevOpsDemoClientCert" -pe -sky exchange -m 96 -ss My -in "DevOpsDemoRootCert" -is my -a sha1

Once you’ve created the certificates, upload the root certificate to the management portal.

image

image

The result shows the certificate has been uploaded.

image

The client certificate depends on the root certificate.  We will export the client certificate and choose whether we want to use a password or a group to provide access to the certificate.  Open certmgr.msc.

image

Navigate to Certificates – Current User / Personal / Certificates.  Right-click on the client certificate that you just created and choose export. 

image

Follow the directions to export the client certificate, including the private key.  The result will be a .pfx file; you will distribute that .pfx file to each machine where the VPN client will be installed.
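
If you prefer PowerShell to the certmgr.msc wizard, a sketch like the following should export the client certificate and install it on a target machine.  The subject name, file path, and password are assumptions based on the makecert commands above.

# Export the client certificate (including the private key) to a password-protected .pfx
$password = ConvertTo-SecureString -String "P@ssw0rd!" -Force -AsPlainText
Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN=DevOpsDemoClientCert" } |
    Export-PfxCertificate -FilePath "C:\temp\DevOpsDemoClientCert.pfx" -Password $password

# On each client machine, import the .pfx into the current user's personal store
Import-PfxCertificate -FilePath "C:\temp\DevOpsDemoClientCert.pfx" -CertStoreLocation Cert:\CurrentUser\My -Password $password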

Right-click on the .pfx file and choose Install.  Leave the installation location as Current User, and provide the password when prompted.

image

image

image

image

image

Click finish, and the certificate is now installed.

Create the VPN Client Configuration Package

Go back to the Azure Management Portal.  You may need to refresh the page to get the most current status.  Once the gateway is created, it looks like this:

image

Now click on the link to download the 32-bit or 64-bit client VPN package.

image

When you download, the file name will be a GUID.  Feel free to save as whatever file name you want.

image

Right-click the .EXE file and choose Properties.  On the Properties page, choose Unblock.

image

Now double-click the .EXE to run it.  You are asked if you want to install the VPN client.

image

Test It Out

Now that everything is wired together, the last thing to try is to actually VPN in.  Connect to the VPN client.

image

Now that we’re connected, run a simple ping test and see that it fails.

image

It fails because the Windows firewall in the VM itself is blocking the communication.  When we created the VM, a public endpoint for Remote Desktop connections was created.  That would connect us through the cloud service name; in my case it is kirketestDEV.cloudapp.net.  However, we are already connected to the network, so we don’t have to use that endpoint.  Just open a new remote desktop connection and connect to the IP address of the machine, 10.0.1.4.

image

We connect to the VM using Remote Desktop and enable the Windows Firewall inbound rule for ICMP ping.  

image
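
On Windows Server 2012 R2 you can likely enable the built-in rule from an elevated PowerShell prompt inside the VM instead of clicking through the firewall UI; the rule display name below is an assumption and can vary by OS version and locale:

# Allow inbound ICMPv4 echo requests (ping) through the Windows Firewall
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"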

Now go back to the console window and try the ping test again.  It works!

image

We were able to set up a point-to-site connection to our VNet.  I frequently use this approach while demonstrating virtual networks when I am on the road traveling because I can connect to the network from anywhere, even hotel or conference wireless connections.  I can now introduce the idea of a VPN connection and show exactly what to expect. 

Clean Up

I am using my MSDN subscription for this, which gives me $150 worth of Azure credits per month.  When we created the dynamic routing gateway (the process that took 30 minutes), that created a network resource that is billable.  Looking at the VPN Gateway Pricing page, I can see that the cost for the dynamic routing gateway is $0.036 US per hour, which is around $27 per month. 

image

While that still leaves me with plenty of room for other projects, I may not want or need that gateway in my MSDN subscription all month if I am just using it for a demo.  Just go back to the virtual network’s dashboard and delete the gateway (or script it, as shown below).  Of course, the next time you need it you will have to create the gateway again and suffer the 30-minute wait, but that’s kind of a small operational price to pay for something that is so incredibly cool and convenient.
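
A sketch of deleting the gateway from PowerShell, assuming the virtual network name used earlier:

# Remove the dynamic routing gateway to stop the hourly charge (the virtual network itself remains)
Remove-AzureVNetGateway -VNetName "kirketestvnet-southcentral"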

For More Information

ExpressRoute or Virtual Network VPN - What's right for me?

Configure a Virtual Network with a Site-to-Site VPN Connection

Configure a Point-to-Site VPN connection to an Azure Virtual Network.

VPN Gateway Pricing

Visual Studio Release Management, DSC, and Azure VMs


This post will show how to use Visual Studio Release Management with Windows PowerShell Desired State Configuration (DSC) to configure multiple deployment environments and deploy an application. 

If you just want to see the big payoff, scroll all the way down to the section “The Big Payoff”.

Background

I am presenting a series of workshops to our top GSI partners around the world, and part of the workshop involves a case study on DevOps.  A company (the name’s not important here; call them Fabrikam or Contoso) has suffered quality issues trying to deploy solutions into multiple environments.  They can get solutions to work in the development environment, but have trouble in the staging and production environments.  The developers and IT staff have both made manual changes to each environment, so they are not exactly sure of the dependencies for the application and how to ensure that each environment has those dependencies available.  They have an existing investment in Team Foundation Server, and we need to propose a solution.  The attendees have approximately 30 minutes to come up with an answer.  This post details the configuration and ends with a demo of exactly why this is so cool.

The environments we will release code to look like the following.

image

There is a complete walkthrough and scripts for creating this environment in my previous post, Creating Dev and Test Environments with Windows PowerShell

For each environment, we will make sure that each server has IIS and ASP.NET 4.5 installed and that the default web site is stopped; we’ll then create a new web site, deploy our code to it, and finally open port 8080 on the firewall.  Rather than write lots of imperative PowerShell to do all of this, we’ll use Desired State Configuration (DSC).

Again, if you just want to see what Release Management does and where’s the big payoff, scroll all the way down to the section “The Big Payoff”.

Release Management and Visual Studio Online

While I do find some twisted pleasure in installing server software and configuring permissions, I’d rather not set up TFS and Release Management Server just for this.  If you want to go that way, you can; the steps are documented online at Install Release Management.  I much prefer to use Visual Studio Online for all of my projects now, and it includes Release Management as part of the service.  You only need to install the Release Management Client for Team Foundation Server 2013.

Check In Source

Open Visual Studio 2013 and create a new web application project.

image

I choose a new MVC project that does not perform authentication, and I uncheck the “Host in the cloud” option.

image

Next, I add a folder called Deployment.  Our project will include details about deploying the solution.

image

Add a file in that folder named “Configuration.psd1”.

image

Replace the contents of that file.

Configuration.psd1
@{
    AllNodes = @(
        @{
            NodeName = $env:COMPUTERNAME;
        }
    );
}

Now add a second file to the folder named “InstallWebApp.ps1”.  This is the PowerShell DSC file that our Release Management template will use.  It describes the dependencies that the application requires, including ensuring that IIS is installed, ASP.NET 4.5 is installed, the default web site is stopped, and a new web site is created that uses the contents of our application to serve up the site.  We also enable port 8080 in the Windows Firewall.  We could go one step further and change the port binding for the web site to use port 8080, but let’s leave it as-is for now.

InstallWebApp.ps1
  1. configuration Fabrikam_POC
  2. {
  3.     # Import the module that defines custom resources
  4.     Import-DscResource -Module xWebAdministration
  5.     Import-DscResource -Module xNetworking
  6.     # Dynamically find the applicable nodes from configuration data
  7.     Node $AllNodes.NodeName
  8.     {
  9.         # Install the IIS role
  10.         WindowsFeature IIS
  11.         {
  12.             Ensure          = "Present"
  13.             Name            = "Web-Server"
  14.         }
  15.         # Install the ASP .NET 4.5 role
  16.         WindowsFeature AspNet45
  17.         {
  18.             Ensure          = "Present"
  19.             Name            = "Web-Asp-Net45"
  20.         }
  21.         # Stop an existing website
  22.         xWebsite DefaultSite
  23.         {
  24.             Ensure          = "Present"
  25.             Name            = "Default Web Site"
  26.             State           = "Stopped"
  27.             PhysicalPath    = "C:\Inetpub\wwwroot"
  28.             DependsOn       = "[WindowsFeature]IIS"
  29.         }
  30.         # Copy the website content
  31.         File WebContent
  32.         {
  33.             Ensure          = "Present"
  34.             SourcePath      = "$applicationPath\_PublishedWebSites\RMDemo"
  35.             DestinationPath = "C:\inetpub\fabrikam"
  36.             Recurse         = $true
  37.             Type            = "Directory"
  38.             DependsOn       = "[WindowsFeature]AspNet45"
  39.         }
  40.         # Create a new website
  41.         xWebsite Fabrikam
  42.         {
  43.             Ensure          = "Present"
  44.             Name            = "Fabrikam POC"
  45.             State           = "Started"
  46.             PhysicalPath    = "C:\inetpub\fabrikam"
  47.             DependsOn       = "[File]WebContent"
  48.         }
  49.         xFirewall Firewall8080
  50.         {
  51.             Name            = "Allow 8080"
  52.             DisplayName     = "Allow 8080"
  53.             DisplayGroup    = "Fabrikam POC Group"
  54.             Ensure          = "Present"
  55.             Access          = "Allow"
  56.             State           = "Enabled"
  57.             Profile         = ("Any")
  58.             Direction       = "InBound"
  59.             RemotePort      = ("8080", "8080")
  60.             LocalPort       = ("8080", "8080")
  61.             Protocol        = "TCP"
  62.             Description     = "Allow 8080 for Fabrikam POC App"
  63.         }
  64.     }
  65. }
  66. Fabrikam_POC -ConfigurationData $applicationPath\Deployment\Configuration.psd1

Two bits of weirdness to explain here.  The first is the SourcePath on line 34.  When the web site is built and the output is copied to the Drop folder, the web site will be in a folder named _PublishedWebsites.  You can see this in Visual Studio Online by inspecting the build output.

image

The second is the ConfigurationData argument on the last line (line 66).  Admittedly, I copied bits and pieces of scripts trying to get this to work, and that was an artifact I couldn’t seem to get around.  There may be a simpler way; please leave comments if you see opportunities to improve this.

Our InstallWebApp.ps1 script imports two modules, xNetworking and xWebAdministration, neither of which will exist on the server.  We have to deploy those as artifacts to the servers as well.  Create a new folder named Modules as a child of Deployment.

image

Go download the DSC Resource Kit (All Modules).  In File Explorer, select the xNetworking and xWebAdministration folders and drag them to the Modules folder.  The resulting structure looks like this.

image

Now that we have copied the required dependencies, the next step is to create a PowerShell script to deploy them.  I create a file “InstallModules.ps1” that will use DSC to ensure the modules are deployed. 

InstallModules.ps1
configuration InstallModules
{
    Node $env:COMPUTERNAME
    {
        # Copy the Modules
        File ModuleContent
        {
            Ensure          = "Present"
            SourcePath      = "$applicationPath\Deployment\Modules"
            DestinationPath = "$env:ProgramFiles\WindowsPowershell\Modules"
            Recurse         = $true
            Type            = "Directory"
        }
    }
}

InstallModules

You’ll see when we later configure the workflow for the release template that we need a workflow activity that ensures the modules are deployed before we call the InstallWebApp.ps1 script.  This is because the InstallWebApp.ps1 depends on the modules being present.

!!!IMPORTANT STEP HERE!!!

The way that this solution works is that Release Management takes the build output and releases it to the specific environment.  We have to make sure that all of the files we just added are included in the build output; they won’t be by default.  For every file that is a descendant of the Deployment folder, go to the properties of that file and set Copy to Output Directory to Copy Always.  You can multi-select files to make this easier.

image

The last step is to check the code in.  Right-click the solution and choose Add solution to source control.  You are prompted for Git or Team Foundation Version Control.  I prefer Git, but the choice has no bearing on the rest of this post.

image

Since I chose Git, my source is checked into a local repository.  I now need to push it to Visual Studio Online.  Go to VisualStudio.com and create an account if you haven’t already (there’s a free trial, and if you have MSDN you have access already through subscriber benefits).  Create a new project.

image

Once the project is created, we are assured our team will absolutely love this. 

image

Go to the code section, and you will see a URL for the repository.

image

Go back to Visual Studio.  Right-click the solution and choose Commit.  Provide a commit message and then click the Commit button.

image

After you click Commit, you can now click Sync in the resulting dialog to push changes to the server.

image

You can now provide the URL for the remote repository to push our changes to the server.

image

Click publish, then go back to Visual Studio Online to see that the code was committed!

image

Go back to Visual Studio.  We need to define a new build.  Under the team project for your Visual Studio Online project (you may have to connect to the team project first), go to Builds and then choose New Build Definition.

image

Give the new build a name, and choose Enabled for queue processing.

image

For the Trigger, I choose Continuous Integration.  This comes in really handy as you are troubleshooting the PowerShell deployment scripts: as you make changes to them, you will need to kick off a build so that the files are available for Release Management to deploy.  For example, I forgot the Copy Always step from above (that’s critical to making this work), so I just had to make a change and then check in the source again, automatically kicking off a new build.

image

For the Build Defaults tab, I chose to use the Hosted Build Controller.  I could create my own build controller, but I love that Visual Studio Online provides that for me, too.

image

Finally, on the Process tab, choose the project or just provide the relative path of the .sln file.

image

Before continuing, test your build definition by queuing a new build.

image

image

Open the Release Management Client

Go to the Release Management client.  You are prompted to connect. 

image

There is some initial configuration for pick lists that is required to identify the stage type and the technology type.

image

We also need to connect to Azure in order to connect those stage types to environments.  Go to the Administration / Manage Azure tab.  You are prompted for the subscription ID, the management certificate key, and a storage account name. 

image

This seems like a good assignment for a summer intern to add some value… create a prompt that connects to Azure and lets you pick this stuff instead of having to read my self-indulgently long blog post to find this needle in a haystack.  However, there’s a straightforward workaround, and if you are working with Azure it’s something you should know how to do anyway. 

Go to PowerShell and run the following command (this requires that the Azure PowerShell SDK is installed, available from http://azure.microsoft.com/en-us/downloads/).

image
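
The command in the screenshot is presumably Get-AzurePublishSettingsFile, which opens a browser so you can sign in and download a .publishsettings file for your subscription:

# Opens a browser to download a .publishsettings file containing the subscription ID and management certificate
Get-AzurePublishSettingsFile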

You are prompted to save the file.

image

That file has the subscription ID (red arrow) and management certificate key (blue arrow).

image

Use the name of a storage account in your Azure subscription.

 image

The management certificate is needed in order to connect to the deployment environment and do things like start the environment and stop it.  The storage account is used as the Drop folder for the build output.  If you open up the blob container “releasemanagement”, you will see the contents of the Drop folder for each Build that is released.  Notice the Deployment folder that contains all of the PowerShell DSC stuff we were working with previously, and the _PublishedWebsites folder that contains the web site.

image
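
If you want to poke around the drop contents from PowerShell rather than the portal, a quick sketch might look like the following; the storage account name and key are placeholders for the account you configured on the Manage Azure tab.

# List the blobs that Release Management copied into the releasemanagement container
$context = New-AzureStorageContext -StorageAccountName "YOUR_STORAGE_ACCOUNT" -StorageAccountKey "YOUR_STORAGE_KEY"
Get-AzureStorageBlob -Container "releasemanagement" -Context $context | Select-Object Name, Length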

Define the Environments

Now go to the Configure Paths tab and create a new Azure vNext environment.  Click the button to Link Azure Environment.

image

Choose the subscription, and then choose a cloud service that you will deploy to and choose Link.

image

Click the Link Azure Servers button to link the VMs we previously created (see the post Creating Dev and Test Environments with Windows PowerShell).

image

You’ll now have defined the DEV environment that is linked to those Azure virtual machines.

image

Do the same thing, creating an environment for STAGE and PROD, linking to the appropriate servers.

image

Now that we have the environments, we need to create a release path. 

Create a Release Path

Go to Configure Paths / vNext Release Paths.  Click the New button.

image

Add stages.  For each stage, choose the environment.  For the Dev environment, I enabled the Automated checkbox for the acceptance step.  This means that we can automatically release to the Dev environment upon a successful build.

image

Define Components

On the Configure Apps / Components tab, add a new component.  This allows us to associate the component to deploy from the build drop location.  I chose “Build with application”, and I entered a “\” to access the root of the build drop.

image

Create a Release Template

Go to Configure Apps / vNext Release Templates and choose the New button.  Provide a name, and choose the release path that we created previously.  We can also choose if this release template will be triggered upon a build.  We will select that.  Click the Edit button to choose a team project and a build definition.

image

In the Toolbox, right-click Components and choose Add.

image

We can then choose the component that we created previously, “Web Site Files”.  Click the Link button.

image

In the workflow designer, add the “Start Environment” action and four “Deploy Using PS/DSC” actions.  Choose the server name, provide the username and password, and select the component that we just added.  The PSScriptPath for the first action points to the InstallModules.ps1 script that we added to the build output; this will ensure that the xWebAdministration and xNetworking modules are present.  The second action installs the web application files.  We have 4 deployment actions: server1:modules, server1:webapp, server2:modules, server2:webapp.  Fully admitting there might be (OK, there probably is) a better way to accomplish this, but this is how I got it to work.

image

An important note, or the release will fail: change SkipCACheck to true for each deploy action, in each stage (dev, stage, and prod).

image

The Big Payoff

It’s time to see the fruits of our labor!  Go to the project and make a change.  For instance, edit Index.cshtml.

Index.cshtml
@{
    ViewBag.Title = "Home Page";
}

<div class="jumbotron">
    <h1>Success!</h1>
    <p class="lead">Our demo worked.</p>
    <p>If you are seeing this, the demo gods favored me today!</p>
</div>

Save the change and commit, then push the change to your Visual Studio Online repository.  We enabled our build definition to automatically build upon check-in, and we enabled our release path to automatically deploy to the Dev environment upon build.  After we push our changes, we see that a build has automatically been queued.

image

Once the build is complete, go to the Release Management client and go to the Releases / Releases tab.  Our new release was automatically initiated.

image

Double click on the release to view progress.

image

Should something go wrong, you’ll receive an email.

image

Side note: You end up getting a deluge of emails from this thing, and I hate emails.  Maybe I am missing it, but I expected to see some tighter integration with TFS work item tracking right here: let me submit work items directly from the RM client (or even automatically).  It seems reasonable that the validation stage should be a work item assigned to testers; once the work item is closed, a request to move to the next stage is created, and a failed release becomes a defect.  Again, maybe that’s in there somewhere, but I’m not finding it.

If you want to kick off a release manually (for instance, you forgot a parameter in one of the deployment actions like I just did and need to edit the release template), you can go to Configure Apps / vNext Release Templates, select your template, and choose New Release.  Choose the target stage (most likely Dev) and choose a build.

image

In a previous post, Configure a Point-to-Site VPN Connection to an Azure VNet, I showed how you could establish a VPN connection to an Azure virtual network.  To test, I connected to the VPN and then opened a browser on my local machine.

image

In case you are not as in awe of this as I am, let’s recap everything that was done.

  • Used the PowerShell script from the post Creating Dev and Test Environments with Windows PowerShell to create 3 environments, each environment having 2 servers in an availability set and part of a virtual network.
  • Used Visual Studio to check in source code to Visual Studio Online.
  • Used Visual Studio Online hosted build controller to automatically build our project.  I could have added a few tests as part of the build just to really show off.
  • Used Release Management to automatically release our project to multiple environments.
  • Used PowerShell DSC to ensure critical dependencies were available on the target machines, including IIS, ASP.NET 4.5, stopping the default web site, and adding a new web site.  We also pushed our code to the new web site, and opened a firewall port 8080 just to show off.

That’s an impressive list… but even more impressive is that now that we’ve verified everything is working in the Dev environment, I receive an email letting me know that I have a request to validate a release. 

image

I open the Release Management client and can see my approval requests. 

image

image

Once I approve, Release Management moves on to the next stage.

image

If I forget to turn on the servers before a check-in, no problem.  Release Management will turn the environment on.  If I make manual changes to the environment and accidentally remove a dependency, no problem.  Release Management will ensure the dependency is there as part of the DSC script.

Of course, I have to show the final money shot… we successfully deployed to each environment, gated by approvals and validation throughout the release process.

image

Workflow-controlled releases to multiple environments in a predictable and consistent fashion without manual intervention.  Fewer bugs, less time troubleshooting deployments. 

For More Information

Install Release Management

Install the Release Management Client for Team Foundation Server 2013

Creating Dev and Test Environments with Windows PowerShell

Configure a Point-to-Site VPN Connection to an Azure VNet

Updated Fiddler OAuth Inspector


This post will detail some of the updates made to the Fiddler OAuth inspector and gives examples of how to use it.

Background

I previously wrote about Creating a Fiddler Extension for SharePoint 2013 App Tokens.  As my friend Andrew Connell let me know, the tool is valuable beyond the context of SharePoint: it is a generic JWT parser that can be used with any JWT token.  This post will show the generic capabilities as well as a few SharePoint-specific capabilities.  The updated source code for the Fiddler extension can be found at https://github.com/andrewconnell/SPOAuthFiddlerExt.

I’ve been asked a few times to write up some documentation on the tool and show how to use it.  The rest of this post will show how to deploy it and some techniques for using it.  If you have any more suggestions on how to use it, please leave comments.  If you have any changes you’d like to see in it, submit a pull request for the GitHub repository at https://github.com/andrewconnell/SPOAuthFiddlerExt.

Deploying the Fiddler Extension

Deploying the extension is simple.  Build the project, then copy the SPOAuthFiddlerExt.dll, Newtonsoft.Json.dll, and Newtonsoft.Json.xml files to the C:\Program Files (x86)\Fiddler2\Inspectors folder.  Here is a picture of the Inspectors folder on my machine.

image
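
If you’d rather script the copy than drag files around, a sketch like this from an elevated PowerShell prompt should do it; the bin\Debug path is an assumption based on a default local build.

# Copy the extension and its Json.NET dependency into Fiddler's Inspectors folder (requires elevation)
$source = "C:\source\SPOAuthFiddlerExt\bin\Debug"
$destination = "C:\Program Files (x86)\Fiddler2\Inspectors"
Copy-Item -Path "$source\SPOAuthFiddlerExt.dll", "$source\Newtonsoft.Json.dll", "$source\Newtonsoft.Json.xml" -Destination $destination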

There is a Visual Studio post-build action defined that will copy the required DLLs for you.  In order for this to work, you have to run Visual Studio as administrator. 

image

Once you deploy the add-in and start Fiddler, you can click on an HTTP request and see the SPOAuth tab under “Inspectors”.

image

Data will only show up in the extension when the request being inspected contains the “Authorization: Bearer” HTTP header with a JWT token value.  It will also show up when the request contains a form post value named “AppContext”, “AppContextToken”, “AccessToken”, or “SPAppToken” with a JWT token value.  These are to handle SharePoint-specific inspections.

Example: Single Page Application with Azure Active Directory

In a previous post, I showed how to create a Single Page Application that calls a custom Web API protected by Azure AD.  I am using IIS Express to develop and debug the application, so I need to use the URL “https://localhost.fiddler” with the application in order for localhost traffic to be captured in Fiddler (for more information, see Problem: Traffic sent to http://localhost or http://127.0.0.1 is not captured). 

I replace any HTTP calls in my application to use the host name localhost.fiddler.  The first is in app.js: replace the localhost endpoint with the Fiddler endpoint.

image

The next change is the actual service call itself, in mailSvc.js, changing “localhost” to localhost.fiddler.

image

Change the project to launch using the localhost.fiddler host name so that hitting F5 will launch the browser using the localhost.fiddler endpoint.

image

Now that I have made changes in the app, I need to also make changes in Azure AD.  When the user logs into the application, they are redirected to Azure AD to sign in, and then are redirected back to the application.  The first change is to update the sign-on URL for the application with the localhost.fiddler host name.

image

Finally, we change the reply URL for the application to reply to the localhost.fiddler endpoint.

image

Start Fiddler, and then run the project.  You should now see the output in Fiddler.

Know What to Look For

The extension is used to decode an access token.  It doesn't perform validation, and it doesn't point out where errors might have occurred; you need to have a sense of what you're looking for.  The most frequent issues that I see are an invalid audience (verified in the "aud" claim) and having a different set of claims than expected.
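If you want a quick sanity check outside of Fiddler, the payload of a JWT can be decoded with a few lines of PowerShell.  This is a minimal sketch, not part of the extension; paste a real token value into $jwt and inspect the claims yourself.

Decode a JWT Payload
  1. $jwt = "<paste the Bearer token value here>"
  2. #The payload is the second base64url-encoded segment of the token
  3. $payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
  4. #Pad the base64 string to a multiple of 4 characters before decoding
  5. switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
  6. $claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
  7. #Verify the audience matches what your Web API expects
  8. $claims.aud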

Here's a good example.  I was debugging my SPA application and kept getting a weird error, "Exception of type 'Microsoft.Office365.Discovery.DiscoveryFailedException' was thrown":

image

Sweet!  I go look in Fiddler and see the 500 HTTP error when I call the custom Web API. 

image

Click on that record and go to the Inspectors/SPOAuth tab.

image

You can now inspect the JWT token.  First, we look at it in plain text view.

image

Next, we switch over to an easier-to-read property grid.  I immediately see my problem.  I am logged in as a user who has access to the Azure AD tenant but does not have an O365 account in my tenant; I need to be logged in as a user who has access to O365.

image

To verify this, I launch an InPrivate browsing session.  I am prompted to log in:

image

This time it works as expected.

image

I can pop over to Fiddler to verify that the user is now the correct user in the access token, and I can see the return values in the response pane.

image

Example: SharePoint Provider-Hosted App

Here’s another example of using the extension.  I create a new SharePoint provider-hosted app, and I want to capture the traffic in Fiddler.  I go to the appmanifest.xml file and change the StartUrl to the localhost.fiddler URL.

image

Start Fiddler and then hit F5 in Visual Studio to start debugging.  You are prompted to log in:

image

You are asked if you trust the app; click Trust It.

image

The app runs, and in Fiddler I can see the request contains “SPAppToken”. 

image

Go to the SPOAuth tab, and it’s empty.  Huh?

image

Go back to the raw view, and notice that the value posted from SharePoint to our app contains the SPAppToken in the body, but the value includes an error message. 

image

This is actually a very common error when debugging and deploying SharePoint apps.  It is saying that the app principal in SharePoint is registered with a different authority than the one our app is using.  This is because the SharePoint tools automatically create an app principal using the authority "localhost", but our application's authority is "localhost.fiddler".  The same error can easily occur when your app principal's authority is "localhost" but you try to deploy to another host such as azurewebsites.net. 

There’s not really a clean way to set up Fiddler using IIS Express with SharePoint Online such that you can see all traffic including the incoming context token as well as the outgoing CSOM or REST API calls.  If you really need this capability, then set up IIS on your machine and use a host header or machine name for your app’s URL instead of “localhost”.  Despite there not being a clean way to see the incoming HTTP POST that contains the context token when using IIS Express, you can still easily see the outbound requests to SharePoint.

For example, I created a new SharePoint provider hosted app and started capturing using Fiddler.  I didn’t see the HTTP POST that SharePoint sends to my app using localhost, but I did see the outgoing request from the CSOM call to /_vti_bin/client.svc/ProcessQuery.

image

And I can go to the SPOAuth tab to inspect the access token values.

image

I can use the grid view to look at the properties and determine the exact date that the token expires.

image

I can make some assumptions that if the audience contains "00000003-0000-0ff1-ce00-000000000000", then this is a token specific to SharePoint apps.  Then I can use the third tab, marked "Property Values", to show the values carried in the access token. 

image

In this screen shot, there are a few things that can be useful for troubleshooting.  For instance, we can determine the client ID and tenant ID, which are useful for troubleshooting 401 Unauthorized issues. 

In the case of a high-trust SharePoint token, you can glean a little more information.  When using Windows claims, a high-trust token includes the SID for the user in the nameid property; this is how it identifies the user.  Separately, the high-trust token must also identify the app, which is why it contains a nested JWT token in its "actortoken" claim. 

image

If you’re interested in knowing more about high-trust SharePoint apps, see my post High Trust SharePoint Apps on Non-Microsoft Platforms.

For More Information

Problem: Traffic sent to http://localhost or http://127.0.0.1 is not captured

Creating a Fiddler Extension for SharePoint 2013 App Tokens – post that gives the general structure of the code and how to build a Fiddler add-in.  Note that the code in that post is outdated; you can obtain the latest code from the GitHub repository.

https://github.com/andrewconnell/SPOAuthFiddlerExt – GitHub repository to obtain the latest source code

High Trust SharePoint Apps on Non-Microsoft Platforms

Inside SharePoint 2013 OAuth Context Tokens

Adding Active Directory Certificate Services to a Lab Environment


 

This post will show how to add Active Directory Certificate Services to a lab environment.

Background

I often create a set of virtual machines that include a database, an IIS server, and an Active Directory domain controller.  Frequently I will need to add a certificate for a web site on the IIS server.  While self-signed certificates can be useful, it is often much better to use a trusted certificate from a certificate authority.  One easy way to do this is to have the domain controller issue the certificates.

I don’t have to do this frequently enough to have memorized the steps, so this post is as much for you, dear Reader, as it is for me Smile

Note that this post does not contain prescriptive guidance on the best way to set this up in a production scenario.  It is meant for a small development lab that includes an Active Directory domain controller in a standalone forest.  I frequently have to do this with SharePoint farms, so my VM setup is:

image

I will install AD CS on the Active Directory VM.

Installing Active Directory Certificate Services

In the Server Manager dashboard, click Add roles and features.

image

On the Select Installation Type screen, choose Role-based or feature-based installation.

image

On the Select destination server screen, choose Next.

image

On the Select server roles screen, choose Active Directory Certificate Services.  When you click it, you are prompted to add dependent features.

image

Click Add Features, then click Next, Next, Next, Next, Install.

image

Once completed, click Close.

image

Once the installation completes, you need to configure Active Directory Certificate Services.  In Server Manager, you should see an alert to configure AD CS.

image

The first screen asks for credentials.

image

On the Role Services screen, choose Certification Authority and wait about a minute.

image

On the next screen, choose Enterprise CA.

image

For CA Type choose Root CA.

image

On the Private Key screen, choose to create a new private key.

image

Next.

image

Next.

image

Next.

image

Next.

image

Configure.

image

Again, this post does not contain prescriptive guidance on the best way to set this up in a production scenario.  It is meant for a small development lab that includes an Active Directory domain controller in a standalone forest.
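If you would rather script the installation than click through Server Manager, a rough PowerShell equivalent is shown below.  It assumes you are creating a new enterprise root CA with a new private key and are running the commands on the domain controller as a domain admin.

Install-WindowsFeature AD-Certificate -IncludeManagementTools

Install-AdcsCertificationAuthority -CAType EnterpriseRootCa -Force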

Add Permission for the IIS Server

This one has tripped me up a few times.  I’ll go to the IIS Server and click Create Domain Certificate.

image

Fill in the information and click Next.

image

On the next screen, where you specify an online certification authority, the Select button is disabled.

image

This is because the server running IIS does not have permission to the certificate template.  Go back to the directory server and open Certification Authority.

image

Expand the server and choose Certificate Templates.  Right-click and choose Manage.

image

Go to the Web Server template, right-click, and choose Properties.

image

On the Security tab, add the IIS server and grant it Read and Enroll permissions.

image

Next do the same for the user that will be enrolling the certificate on the IIS server.  In my case, I am logged onto the IIS server VM as sp_setup.

image

Reboot the IIS server for the permissions to take effect. 

Create a Domain Certificate from IIS

On the IIS server, you can now issue a domain certificate.  Click the Create Domain Certificate  link in IIS Manager.

image

Fill in the distinguished name properties.

image

image

You can now select the CA and specify a friendly name.

image

The Result

The result is a new certificate that has been issued and is available in Server Certificates in IIS Manager.

image

You can now select this certificate for an IIS site.

image

If you get an error at this point indicating the request was denied, you may need to reboot the IIS server and try again.  You should also double-check the user that is requesting the certificate to make sure you added them in the security permissions for the template on the CA server.

For More Information

Online Certification Authority "Select" greyed out IIS with 2008 R2 PKI

How to create certificate authority and configure it for IIS

Azure AD Application Proxy and SharePoint 2013


This post will show how to configure Azure Active Directory Application Proxy for an on-premises SharePoint 2013 installation using Kerberos constrained delegation.

Background

I see this question in emails and online forums alike almost weekly: “We need users to access our on-premises SharePoint farm from their mobile phones.” This usually involves an in-depth conversation about reverse proxies and ADFS.  When a developer hears “at least 2 more servers”, they kind of freak out and say, “well that’s just not worth it, I’ll find another way.” 

The Azure Active Directory team made this RIDICULOUSLY easy, and it avoids the infrastructure burden of adding new servers and opening firewall ports.  The Azure Active Directory Application Proxy is a software reverse proxy that routes requests from a cloud entry point to on-premises resources.  It can pre-authenticate calls to the server, and it unlocks the power of Azure AD to enable things like dynamic groups, conditional access, and multi-factor authentication over existing on-premises assets. 

Newly introduced for Azure Active Directory is the ability to use Windows Integrated Authentication to authenticate to on-premises systems.  In order for this to work, systems must use Kerberos constrained delegation.  As my friend Miguel Wood demonstrated at SharePoint Saturday Houston this weekend, this enables you to use an iPad to access an on-premises server without multiple authentication prompts!  The remainder of this post shows how to set up the environment to enable this scenario. 

Just to prove it, here is what you will be able to do with Azure AD Application Proxy: my on-premises SharePoint farm accessed remotely from an Android emulator.

image

How about accessing our internal SharePoint site from an iPad!

image

Prerequisites

The steps in this post require an Azure Active Directory Premium subscription.  I simply activated the Azure AD Premium Trial on my tenant.

I assume your on-premises users that will access SharePoint using Azure AD Application Proxy have the same UPN as the users in the cloud.  This means if my on-premises user in Active Directory is “demouser@blueskyabove.us”, the user is listed in Azure AD as “demouser@blueskyabove.us” as well.

image

image

I did this in my lab by using the Azure AD Connect tool (currently in Preview) to synchronize my on-premises AD to Azure AD.  These configuration steps are outside the scope of this post.

The user that will access the on-premises SharePoint server (I am going to use demouser3@blueskyabove.us) needs to be assigned a license for Azure Active Directory Basic or Premium.  If you have an O365 directory or a free directory, you can enable the AAD Premium trial, then go to the Licenses tab for your directory to assign the user.  The application will fail if the user does not have a Basic or Premium license.

image

Lab Setup

I have 4 virtual machines in my lab environment, all in the same domain.

  • JavaDC – Domain controller using Active Directory Domain Services.  Also serves as enterprise DNS.
  • ADSync – Windows Server 2012 R2 machine that runs the Azure AD Application Proxy Connector.
  • DevSP – Windows Server 2012 R2 machine where SharePoint 2013 is installed.
  • DevSQL – Windows Server 2012 R2 running SQL Server 2014.

In Azure AD, I have a user "demouser3@blueskyabove.us" that I will use for testing, and this user is also in my on-premises AD.  We will register the "Portal" application.  Users of that application make a secure request to the on-premises farm without requiring a reverse proxy or ADFS. 

image

Add a DNS Entry

Add an A record in DNS for your web application's URL.  It cannot be a CNAME; it needs to be an A record.

image
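If the DNS zone is managed on the domain controller, the same record can be added from PowerShell.  The zone name below matches my lab; the IP address is illustrative and should be the address of your SharePoint web front end.

Add-DnsServerResourceRecordA -ZoneName "corp.blueskyabove.us" -Name "portal" -IPv4Address "192.168.1.10"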

Create a Web Application and Site Collection

In Central Administration, create a new web application with no host header on port 443, using SSL and Negotiate authentication.  This web application is currently behind the firewall and is not routable outside the network. 

image

Make sure to set the port to 443 and set “Use Secure Sockets Layer” to Yes.

image

Change the application pool to a domain account.  My lab environment uses an account "corp\sp_app".  This step is important because this is the account I will use when configuring Kerberos constrained delegation.

image

Now open up the SharePoint 2013 Management Shell on the SharePoint server and create a host-named site collection.

Host Named Site Collection
  1. $w = Get-SPWebApplication https://spdev
  2. New-SPSite -Url https://portal.corp.blueskyabove.us -OwnerAlias "corp\sp_setup" -HostHeaderWebApplication $w -Name "Portal" -Template "STS#0"

Add a Wildcard SSL Certificate

Now that we've created the web application, we need to assign an SSL certificate to the web site binding in IIS.  In my previous post, Adding Active Directory Certificate Services to a Lab Environment, I showed how to add Active Directory Certificate Services to your lab environment.  We will now simply request a Domain Certificate.

Go to IIS Manager, select the server node, and select Server Certificates.

image

Click Create Domain Certificate.

image

For the common name, I used *.corp.blueskyabove.us (notice the asterisk). 

image

On the next page, select the certification authority and provide the friendly name. 

image

Once you click Finish, you will see the new certificate.

image

Go to IIS Manager, select the new IIS site that was created, and choose Bindings.

image

Select the binding and choose Edit.

image

Select the domain certificate. 

image

For more information on using Active Directory Certificate Services, visit my previous blog entry at Adding Active Directory Certificate Services to a Lab Environment.

Grant Permissions

Make sure to grant permission to the test users.  I simply allowed “Everyone” as visitors.

image

 

Set up Kerberos Stuff

When I created the web application, I used "CORP\sp_app" as the application pool account.  Add service principal names (SPNs) to the account for "http/portal" and "http/portal.corp.blueskyabove.us".  The easiest way to do this is to open a PowerShell window on the domain controller and use the following commands:

setspn -S http/portal sp_app

setspn -S http/portal.corp.blueskyabove.us sp_app

image

You can see that I used setspn -L sp_app to list the SPNs registered for the account.

Go to Active Directory Users and Computers.  Find the computer name where the Azure Active Directory Application Proxy Connector will be installed.  Right-click that computer and choose Properties.

image

On the Delegation tab, choose "Trust this computer for delegation to specified services only" to constrain the delegation, and select "Use any authentication protocol".  Click the Add button and enter the user name of the application pool account (sp_app); that lets you choose the SPNs we just registered.  Click OK, then check the Expanded option to show all the SPNs and confirm that the computer is now trusted to delegate to those services.

image

This enables the Application Proxy Connector to impersonate users in AD to the applications that are listed.

Test Kerberos Stuff

Before we go any further, let's make sure the basic plumbing works.  From the machine that will run the Application Proxy Connector, verify that you can open a browser and access the https://portal.corp.blueskyabove.us site.  Add the site to the Local Intranet zone in IE so that it will automatically pass credentials (avoiding the authentication challenge prompt).

image

Now browse to the site, and you should see it served with a valid SSL certificate.  We're still within our corporate network at this point; we'll enable remote access in just a bit.

image

Now open a PowerShell window and type "klist" to see the list of Kerberos tickets.  You should see the SPN you registered.

image

Look in the Security log on the SharePoint server; you should see events with ID 4624 indicating logon using Kerberos.

image

 

Publish the Application in Azure AD

In your Azure AD tenant, go to the Applications tab and choose “Add” to add a new application.

image

On the next screen, choose “Publish an application that will be accessible from outside your network”. 

image

Provide the name of your application and the internal URL, and set the pre-authentication method to "Azure Active Directory".

image

Once the application is created, you will see 3 steps that need to be completed.  Enable the Application Proxy in the directory.

image

Next, download the Application Proxy connector and install it.  In my environment, I install it on the "ADSync" server. 

image

The installer prompts you to log into Azure AD.

image

When complete, you will see “Setup Successful”.

image

After the connector is installed, look in the event log (Applications and Services Log / Microsoft / AadApplicationProxy / Connector / Admin) to see that the service started successfully.

image

Go back to the Azure Management Portal and go to your new application's Configure tab.  Change the Internal Authentication Method to "Integrated Windows Authentication".  Change the Internal Application SPN to the FQDN SPN for your application, which was http/portal.corp.blueskyabove.us.

image

Make sure to click Save to save your changes.

Go to the Users and Groups tab and assign the users.  Note that you could use groups as well.

image

Test It Out

Open the MyApps portal, http://myapps.microsoft.com, and log in as the user that was assigned the application.  You will see the applications assigned to the user.

image

Click the Portal application.  In my environment, I get this:

image

You can see the page render, just briefly, before the redirect.  I decided to turn off the Minimal Download Strategy feature for the site.  In the SharePoint site, go to Site Settings / Site Features and disable the Minimal Download Strategy feature.

image

When I spoke with Miguel Wood about this, he said he didn't have to do this in his environment.  If you get this error, try disabling MDS and then try again:

image

Success!  We were able to access our internal SharePoint farm from the internet without opening any firewall ports or adding infrastructure. 

Do Epic Stuff

Maybe that wasn’t quite easy enough.  What if you wanted to enable access to Central Administration remotely so you didn’t have to VPN into your network?  Maybe you want to temporarily provide remote access to Central Administration so that someone can configure your farm remotely without giving them network credentials?  The Azure AD Application Proxy is a very cool solution.

In SharePoint Central Administration, go to Manage Web Applications and click the Central Administration web application.  Click the Authentication Providers button in the ribbon.

image

Click the “Default” link, then change the IIS Authentication Settings to Negotiate.

image

Now add an SPN for Central Administration, grant delegation permission from the machine where the connector is installed to that SPN, and publish the application in the Azure portal.  You now have remote access to Central Administration.

image

It’s not just SharePoint that this would work for… this works for any internal system.  Perhaps you have an application that you want to enable on users’ phones: this works as well, just as I showed in the screenshot of the Android emulator previously.  There is also a MyApps launcher available for iOS that makes it easy to access apps from your iPhone and iPad without installing a browser extension.  Here is a screen shot of the application (thanks to Todd Baginski for grabbing these for me!)

image

And the SharePoint portal accessed from the app.

image

For the O365 users, you will be glad to see that the app experience is integrated.

image

image

I will be showing this and much more of Azure AD at the SharePoint Evolution Conference 2015, April 20-22nd in London, UK.  If you are attending, please make sure to come to the session to learn more about the capabilities of Azure AD.

For More Information

Azure Active Directory Connect Public Preview – March 2015 update

MyApps SSO Launcher for iOS


Deploying Play Framework Apps with the Azure Toolkit for Eclipse


This post shows how to deploy a Play Framework app using the Azure Toolkit for Eclipse.

Background

I am working on a proof of concept with a customer that has several existing types of applications and is deploying them to Microsoft Azure to better understand application migration capabilities.  One of the application types uses the Play framework.  While I am not really a Java developer (although I play one on my blog occasionally), I was able to get this up and going in a relatively short amount of time. 

Install Java JDK and Play Framework

In a previous post, Creating an Eclipse Development Environment for Azure, I showed how to create a virtual machine that runs Eclipse Luna as an IDE to build Java applications.  I am going to use that same VM, but the only thing I am using from it is Eclipse… I don’t need Tomcat or Jetty or even the JRE that is installed on it because Play apps are self-contained and don’t need a container. 

I start by installing an older version of the JDK, version 7u79 for Windows 64 bit from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.  I need this older version of the JDK because my demo is going to use an older version of Play Framework, version 1.2.5.5 (the latest version at the time of this writing is 2.3.8). 

The last thing I did was to edit the environment variables for the machine, making sure that JAVA_HOME points to the right directory where I installed the JDK, and adding the path to Play to the %PATH% variable. 

image
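If you prefer to set these from a script rather than the System Properties dialog, something like the following works.  The install paths are illustrative and must match where you actually installed the JDK and Play.

Set Environment Variables
  1. #Illustrative paths; adjust to your JDK 7u79 and Play 1.2.5.5 install locations
  2. [Environment]::SetEnvironmentVariable("JAVA_HOME", "C:\Program Files\Java\jdk1.7.0_79", "Machine")
  3. [Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\play-1.2.5.5", "Machine")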

Create an App

Now that we have the JDK and Play installed, we can create an app.  Simple enough: just open a command prompt and type "play".  You'll see some awesome ASCII art and the valid commands.

image

To create an app, just type "play new <name of app>".  I used "play new playdemo".  You are then asked for a name for the application; I used "Play Demo".

image

To test it, run the command “play run <name of app>”.  My command is “play run playdemo”. 

image

Now open a browser to http://localhost:9000 and you’ll see the app running.  While it looks like just a documentation page, it is actually your app that you can customize. 

image

Open the App Using Eclipse

Now that I have an app, I want to deploy it to Azure.  Since I have been doing a bit of work with Eclipse, it's pretty easy to use the Azure Toolkit for Eclipse to deploy the app.  In order to use the Play app with Eclipse, use the "eclipsify" command: "play eclipsify playdemo".

image

You can now open Eclipse and use the File / Import / General command to import an existing project into the workspace.

image

Browse for the folder that contains your app.

image

Click Finish, and you now have an Eclipse project.

image

Deploy the App Using the Azure Toolkit for Eclipse

OK, now we've created an app and opened it with Eclipse.  Now it's time to deploy it using the Azure Toolkit for Eclipse.  Choose New / Azure Deployment Project; this project will be used to deploy our application.

image

Give the project a name and click Next.

image

I have multiple JDKs installed in my environment, so I choose which JDK to use for this demo. 

image

Normally, I would go to the Server tab and configure my Tomcat or Jetty server, but Play apps don’t need a container… we just click Finish and we’re done.  This creates the following project structure.

image

We now need to tell the deployment project about the Play application.  Right-click the WorkerRole1 node and choose Properties.  Go to the Components node and see that we are already copying the JDK to the deployment root.  Also, notice that a default HelloWorld.war file was added to our deployment project.  On the Components node, highlight the HelloWorld.war and remove it.

image

We now need to deploy the Play framework.  On the Components node, click Add.  Choose the Directory button and select the directory where Play is installed.  For the Import Method, choose Zip.  Click the "As Name" field and it will generate a name for you.  For the Deploy Method, choose Unzip and leave the To Directory with the default value.  This will zip the Play framework directory, upload it to Azure, and then unzip the Play framework to the folder we chose.

image

Choose OK.

We’ll do something just slightly different with our Play app.  In the Components node, click Add, then choose Directory and browse to the directory where the Play app resides.  The import method this time is Copy, and the deploy method is Copy.  This tells the add-in to copy the contents of the folder during deployment.

image

Click OK.

When we ran our Play app previously, we used port 9000, which is the default.  We need to map an external port to this internal port.  Go to the Endpoints node and add a new endpoint of type "Input" with public port 80 and private port 9000.

image

We need a way for the cloud service to run our Play application.  There are a few ways to do this, but a simple one is to create a .BAT file that runs the same Play command we ran previously during our smoke test.  I just went to my temp directory and created a new .BAT file with the following contents:

PlayDemo.bat
  1. cd play1-1.2.5.5
  2. play run ../playdemo

Here is a screen shot that shows the contents of my file.  The filename doesn’t matter, but the relative paths do… they match the relative path structure of how we told Eclipse to deploy our application and the Play framework.

image

Once we create the .BAT file, we can add it to our deployment.  The import method is Copy, and the deploy method is Exec.  This will execute the .BAT file in the AppRoot folder, allowing relative paths.

image

Click OK, and the Components tab looks like the following:

image

Order matters… we need the JDK first, then the Play framework, then our app, then finally the executable.

Testing Testing… Is This Thing On?

A quick test is to click on the deployment project and then use the menu toolbar button to run in the Azure emulator. 

image

You'll see a few command prompts pop up, and finally you'll see a command window with the Play ASCII art (not enough ASCII art in today's computing, if you ask me), and the Azure Compute Emulator will show that Java is running.

image

Open a browser and go to http://localhost:9000 (still using our local port for now) and confirm you see the Play app.

image

Once you’re done playing around, you can reset the Azure emulator to remove the deployment.

image

 

Publish to Azure

We've confirmed our packaging works as expected; now it's time to push to Azure.  Use the Publish to Azure Cloud button to publish.

image

On the next screen, you need to provide your publish settings.  Click Import, and then Download.  You are prompted to log in; save the .publishsettings file and then browse to it.

image

Click OK, and the details are added for you.  You then need to create a Storage account and provide a new Service.  Choose the Target OS; I chose Windows Server 2012 R2.  You can choose between Staging and Production; I use Production to get a clean URL.

image

Click Publish, and wait.  The storage account is created, the cloud service is created, the app is packaged into the .cspkg format and uploaded to Azure, new virtual machines are created, the package is downloaded to the new VM, and its contents are unzipped as specified in our settings.  There's quite a bit going on here, so it might take a while.

The Big Payoff

Of course, we have to show the final product: our Play application running in Azure without any modifications. 

image

The bigger payoff is the set of capabilities that the app now has.  We can now scale the application automatically as CPU usage dictates, as shown in my post Autoscaling Azure–Cloud Services, with new VM instances being created and destroyed as necessary.  The underlying OS doesn't necessarily need to be managed or patched by us because the guest OS instances are automatically patched as part of the evergreen lifecycle.  This is huge… we just focus on the application and data without having to manage the underlying OS.

Another payoff that is maybe not so obvious is that the cloud service role that we just created can be deployed into a virtual network.  If we have ExpressRoute or even a site-to-site VPN configured, this means our cloud service can now access on-premises resources without making modifications to our existing code.  I showed how to do this in the post Deploy Azure Roles Joined to a VNet Using Eclipse.

Note that in this post, we deployed directly from Eclipse, but it is COMPLETELY possible to use Eclipse to package the solution and then use PowerShell or the cross-platform CLI to upload the artifacts and deploy the package instead of doing this from an IDE.
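As a rough sketch of what that might look like with the service management cmdlets, the commands below deploy a packaged cloud service.  The subscription name, service name, label, and file paths are illustrative and depend on what your deployment project generated.

Deploy the Package with PowerShell
  1. Select-AzureSubscription -SubscriptionName "MySubscription" -Current
  2. New-AzureDeployment -ServiceName "playdemo" -Slot "Production" `
  3.     -Package "C:\workspace\PlayDemoDeploy\deploy\WindowsAzurePackage.cspkg" `
  4.     -Configuration "C:\workspace\PlayDemoDeploy\deploy\ServiceConfiguration.cscfg" `
  5.     -Label "PlayDemo v1"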

For More Information

Play framework

Tutorial: Running Play Framework Applications in Microsoft Azure Cloud Services 

My Java blog posts

Deploy Azure Roles Joined to a VNet Using Eclipse

Autoscaling Azure–Cloud Services

Creating Dev and Test Environments with Windows PowerShell


This post will discuss creating application environments with Windows PowerShell.  We will use these environments in subsequent posts.

Background

I have participated in a series of readiness workshops with our top GSI partners.  Part of the workshop includes case studies where we have the participants review requirements and propose solutions within a few constraints.  One of the case studies involves Visual Studio Release Management.  It's a very interesting case study, but I felt it could have been so much better with a demo.  You know me: as I come up with ideas like this, I try to get some time at the keyboard to build the demo and then share it here on my blog.  This time is no exception.

The case study shows off Visual Studio Release Management deploying to several environments.  I'll get to that part later; for now we will focus on designing the environment to suit our needs.

Requirements

We will have three environments: dev, stage, and prod.  Each environment must have an uptime SLA of 99.95% or higher while minimizing cost.  These virtual machines must be accessible from on-premises using HTTP over port 8080, but not accessible externally (no HTTP endpoint exposed).  A CPU spike in one environment should not affect the other environments. 

If we break the requirements down, we can see that there are three environments, each requiring an SLA of 99.95% or higher while minimizing cost.  We implement two virtual machines in each environment and place them within an availability set to meet the 99.95% SLA requirement.  To minimize our cost during the POC phase, we will use the Small VM size (for more information on VM sizes, see Virtual Machine and Cloud Service Sizes for Azure).  We'll take care of the HTTP over port 8080 requirement in a future post, as it requires some internal configuration of the VM, but we can satisfy the requirement not to expose port 8080 externally simply by not adding an endpoint for the environment.  We'll partially address the on-premises connectivity in this post, but that will also need a little more work to complete later.  Finally, we address the customer's concerns about CPU by placing each environment within its own cloud service.

Our proposed architecture looks like the following.

image

I created this environment using my MSDN subscription in about an hour just using the new Azure portal (https://portal.azure.com).  However, it's been a while since I did anything with PowerShell, so let me show you how to do this using the service management PowerShell SDK.

The Virtual Network

The first task is to create a virtual network XML configuration file.  To be honest, I created this using the old Azure portal (https://manage.windowsazure.com), then exported the VNet to a file.  In the file you can see the 3 subnets with the incredibly clever names of Subnet-1, Subnet-2, and Subnet-3.  The virtual network itself has a name; in this file it's "kirketestvnet-southcentral".  You probably want to change that Smile  The value "kirketest" is part of a naming scheme that I use: I prefix all of the services (virtual network, storage, and cloud service) with the same value, which helps avoid name collisions since the storage and cloud service names must be globally unique.  In this example, I've also added a gateway.  We aren't going to create the gateway just yet, but we will leave the gateway subnet in the definition for now.

NetworkConfig.xml
  1. <NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  2.   <VirtualNetworkConfiguration>
  3.     <VirtualNetworkSites>
  4.       <VirtualNetworkSite name="kirketestvnet-southcentral" Location="South Central US">
  5.         <AddressSpace>
  6.           <AddressPrefix>10.0.1.0/24</AddressPrefix>
  7.         </AddressSpace>
  8.         <Subnets>
  9.           <Subnet name="Subnet-1">
  10.             <AddressPrefix>10.0.1.0/27</AddressPrefix>
  11.           </Subnet>
  12.           <Subnet name="Subnet-2">
  13.             <AddressPrefix>10.0.1.32/27</AddressPrefix>
  14.           </Subnet>
  15.           <Subnet name="Subnet-3">
  16.             <AddressPrefix>10.0.1.64/26</AddressPrefix>
  17.           </Subnet>
  18.           <Subnet name="GatewaySubnet">
  19.             <AddressPrefix>10.0.1.128/29</AddressPrefix>
  20.           </Subnet>
  21.         </Subnets>
  22.         <Gateway>
  23.           <VPNClientAddressPool>
  24.             <AddressPrefix>10.0.0.0/24</AddressPrefix>
  25.           </VPNClientAddressPool>
  26.           <ConnectionsToLocalNetwork />
  27.         </Gateway>
  28.       </VirtualNetworkSite>      
  29.     </VirtualNetworkSites>
  30.   </VirtualNetworkConfiguration>
  31. </NetworkConfiguration>

To set the virtual network configuration, you use the following PowerShell script.

Set-AzureVNetConfig
  1. $vnetConfigFilePath = "C:\temp\NetworkConfig.xml"
  2.  
  3. Set-AzureVNetConfig -ConfigurationPath $vnetConfigFilePath

A word of caution here… this will update ALL of the virtual networks for your subscription.  If you use the XML as-is above, that is the same as telling Azure to delete existing virtual networks that you may have and then add the new network named "kirketestvnet-southcentral".  Luckily, if those virtual networks are in use, the operation will fail.  I find it much less risky to simply export the existing virtual network, make whatever changes I need to, then use Set-AzureVNetConfig to apply all of the changes.
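That safer workflow looks something like the sketch below (the file path is just an example).

Export, Edit, Then Apply
  1. #Export the current virtual network configuration for the subscription
  2. Get-AzureVNetConfig -ExportToFile "C:\temp\CurrentNetworkConfig.xml"
  3. #...edit the file, changing only the networks you intend to modify...
  4. Set-AzureVNetConfig -ConfigurationPath "C:\temp\CurrentNetworkConfig.xml"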

Creating Virtual Machines

When you create a virtual machine in Azure, you choose a base image.  Using Get-AzureVMImage, you can get a list of all the available images and their locations.  The image name will be something family-friendly like this:

a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201505.01-en.us-127GB.vhd

Notice the date and version number on that image.  This image will go away at some point, replaced by a newer image.  Hardcoding that image name in your script will cause you problems later, but it's easy to just obtain the latest image version.  Michael Collier provides a great post on The Case of the Latest Windows Azure VM Image with a simple solution: sort the images by published date, descending, and take the first one.  He also explains in that post how not all images are available in all locations, so you should include the location as part of your filter.

Get-LatestVMImage
  1. function Get-LatestVMImage([string]$imageFamily, [string]$location)
  2. {
  3.     #From https://michaelcollier.wordpress.com/2013/07/30/the-case-of-the-latest-windows-azure-vm-image/
  4.     $images = Get-AzureVMImage `
  5.     | where { $_.ImageFamily -eq $imageFamily } `
  6.         | where { $_.Location.Split(";") -contains $location} `
  7.         | Sort-Object -Descending -Property PublishedDate
  8.         return $images[0].ImageName;
  9. }

Calling this function is really simple: just provide the image family and the location name (obtained using Get-AzureLocation).

Code Snippet
  1. #$imageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201505.01-en.us-127GB.vhd"
  2. $imageName = Get-LatestVMImage -imageFamily "Windows Server 2012 R2 Datacenter" -location $location

Now that we have an image, we create a new AzureVMConfig with the settings for our VM.  We then create an AzureProvisioningConfig to tell the provisioning engine that this is a Windows machine with a username and password.  We set the subnet, and we are left with the configuration object.  We haven't yet told Azure to create a VM.  This lets us create the configuration for multiple VMs at once before we finally start the provisioning process.  Putting the VMs in an availability set is as easy as providing the -AvailabilitySetName parameter (for more information, see Michael Washam's post, Understanding and Configuring Availability Sets).

Create Multiple VMs
  1.   New-AzureService -ServiceName $serviceName -Location $location
  2.                          
  3. $vm1 = New-AzureVMConfig -Name "DEV1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  4. Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
  5. Set-AzureSubnet -SubnetNames $subnetName -VM $vm1
  6.  
  7. $vm2 = New-AzureVMConfig -Name "DEV2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  8. Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
  9. Set-AzureSubnet -SubnetNames $subnetName -VM $vm2
  10.  
  11. New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName

We now have the basic building blocks out of the way.  The rest of the script is simply putting everything together. 

The Script

The rest of the script is fairly straightforward.  We create a cloud service for each environment with two virtual machines in an availability set, and each VM is placed within a virtual network subnet.  The full script is shown here:

Full Script
  1. function Get-LatestVMImage([string]$imageFamily, [string]$location)
  2. {
  3.     #From https://michaelcollier.wordpress.com/2013/07/30/the-case-of-the-latest-windows-azure-vm-image/
  4.     $images = Get-AzureVMImage `
  5.     | where { $_.ImageFamily -eq $imageFamily } `
  6.         | where { $_.Location.Split(";") -contains $location} `
  7.         | Sort-Object -Descending -Property PublishedDate
  8.         return $images[0].ImageName;
  9. }
  10.  
  11.   $prefix = "mydemo"
  12. $storageAccountName = ($prefix + "storage")
  13.                $location = "South Central US"
  14. $vnetConfigFilePath = "C:\temp\NetworkConfig.xml"
  15.  
  16. #$imageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-Datacenter-201505.01-en.us-127GB.vhd"
  17. $imageName = Get-LatestVMImage -imageFamily "Windows Server 2012 R2 Datacenter" -location $location
  18.  
  19. $size = "Small"
  20.   $adminUsername = "YOUR_USERNAME_HERE"
  21.   $adminPassword = "YOUR_PASSWORD_HERE"
  22. $vnetName = ($prefix + "vnet-southcentral")
  23. #Use Get-AzureSubscription to find your subscription ID
  24. $subscriptionID = "YOUR_SUBSCRIPTION_ID_HERE"
  25.  
  26. #Set the current subscription
  27. Select-AzureSubscription -SubscriptionId $subscriptionID -Current
  28.  
  29. #Create storage account
  30. New-AzureStorageAccount -StorageAccountName $storageAccountName -Location $location
  31.  
  32. #Set the current storage account
  33. Set-AzureSubscription -SubscriptionId $subscriptionID -CurrentStorageAccountName $storageAccountName
  34.  
  35. #Create virtual network
  36. Set-AzureVNetConfig -ConfigurationPath $vnetConfigFilePath
  37.  
  38.  
  39. #Development environment
  40. $avSetName = "AVSET-DEV"
  41. $serviceName = ($prefix + "DEV")
  42. $subnetName = "Subnet-1"
  43.  
  44.   New-AzureService -ServiceName $serviceName -Location $location
  45.                          
  46. $vm1 = New-AzureVMConfig -Name "DEV1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  47. Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
  48. Set-AzureSubnet -SubnetNames $subnetName -VM $vm1
  49.  
  50. $vm2 = New-AzureVMConfig -Name "DEV2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  51. Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
  52. Set-AzureSubnet -SubnetNames $subnetName -VM $vm2
  53.  
  54. New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName
  55.  
  56.  
  57. #Staging environment
  58. $avSetName = "AVSET-STAGE"
  59. $serviceName = ($prefix + "STAGE")
  60. $subnetName = "Subnet-2"
  61.  
  62.   New-AzureService -ServiceName $serviceName -Location $location
  63.                          
  64. $vm1 = New-AzureVMConfig -Name "STAGE1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  65. Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
  66. Set-AzureSubnet -SubnetNames $subnetName -VM $vm1
  67.  
  68. $vm2 = New-AzureVMConfig -Name "STAGE2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  69. Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
  70. Set-AzureSubnet -SubnetNames $subnetName -VM $vm2
  71.  
  72. New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName
  73.  
  74.  
  75. #Production environment
  76. $avSetName = "AVSET-PROD"
  77. $serviceName = ($prefix + "PROD")
  78. $subnetName = "Subnet-3"
  79.  
  80.   New-AzureService -ServiceName $serviceName -Location $location
  81.                          
  82. $vm1 = New-AzureVMConfig -Name "PROD1" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  83. Add-AzureProvisioningConfig -VM $vm1 -Windows -AdminUsername $adminUsername -Password $adminPassword
  84. Set-AzureSubnet -SubnetNames $subnetName -VM $vm1
  85.  
  86. $vm2 = New-AzureVMConfig -Name "PROD2" -InstanceSize $size -ImageName $imageName -AvailabilitySetName $avSetName
  87. Add-AzureProvisioningConfig -VM $vm2 -Windows -AdminUsername $adminUsername -Password $adminPassword
  88. Set-AzureSubnet -SubnetNames $subnetName -VM $vm2
  89.  
  90. New-AzureVM -ServiceName $serviceName -VMs $vm1,$vm2 -VNetName $vnetName

 

The Result

I ran the script using my MSDN subscription.  In about 10 minutes I had 6 virtual machines grouped within 3 cloud services and 3 subnets of a single virtual network.  I could then change the subscription and prefix variables and run the script in a different subscription.  I could have done all of this using the portal (the first time, I did), but once I had the script completed it became an asset to create the environments in a repeatable and consistent manner.

The virtual network shows the resources that are deployed into the correct subnets.

image

Looking at the Configure tab of a single virtual machine lets us see that the virtual machine is part of an availability set.

image

Clean Up

This is my MSDN subscription, so I don't want to leave resources lying around if I am not using them.  Clean up is simple: I run another script to delete everything I just created.  Again, a word of caution: this deletes the virtual machines, the associated VHD files, the cloud services, the storage account, and the virtual network configuration.  This is not a reversible operation. 

Clean Up
  1. $prefix = "mydemo"
  2. $storageAccountName = ($prefix + "storage")
  3. $subscriptionID = "YOUR_SUBSCRIPTION_ID_HERE"
  4.  
  5. #Set up credentials
  6. Add-AzureAccount
  7.  
  8. #Set the current subscription
  9. Select-AzureSubscription -SubscriptionId $subscriptionID -Current
  10.  
  11. $serviceName = ($prefix + "DEV")
  12. #The following command deletes the associated VHD, but takes awhile
  13. Get-AzureVM -ServiceName $serviceName | %{Remove-AzureVM -Name $_.Name -ServiceName $serviceName -DeleteVHD}
  14. Remove-AzureService -ServiceName $serviceName -Force
  15.  
  16. $serviceName = ($prefix + "STAGE")
  17. #The following command deletes the associated VHD, but takes awhile
  18. Get-AzureVM -ServiceName $serviceName | %{Remove-AzureVM -Name $_.Name -ServiceName $serviceName -DeleteVHD}
  19. Remove-AzureService -ServiceName $serviceName -Force
  20.  
  21. $serviceName = ($prefix + "PROD")
  22. #The following command deletes the associated VHD, but takes awhile
  23. Get-AzureVM -ServiceName $serviceName | %{Remove-AzureVM -Name $_.Name -ServiceName $serviceName -DeleteVHD}
  24. Remove-AzureService -ServiceName $serviceName -Force
  25.  
  26. #Remove storage account.  This will fail if the
  27. #disks haven't finished deleting yet.
  28. Remove-AzureStorageAccount -StorageAccountName $storageAccountName
  29.  
  30. #Remove all unused VNets
  31. Remove-AzureVNetConfig

Coming up next we’ll look at how we can establish on-premises connectivity for just a few devices, and then we’ll turn our attention to deploying some code and services to these virtual machines using Visual Studio Release Management.

For More Information

Virtual Machine and Cloud Service Sizes for Azure

Manage the availability of virtual machines

The Case of the Latest Windows Azure VM Image

Understanding and Configuring Availability Sets

Configure a Point-to-Site VPN Connection to an Azure VNet


This post shows how to create a point-to-site (P2S) VPN connection to an Azure virtual network (VNet). 

Background

In my previous post, I showed how to create a virtual network configuration XML file and how to create several environments (dev, stage, and prod) that are each deployed into a separate subnet.  It's kind of a goofy network architecture because typically you see VNets configured to model the tiers of a single application (front tier, middle tier, backend tier).  However, it suits my use case and lets me show how to create a point-to-site connection that can communicate with all of the environments through a single connection.

I am showing point-to-site in this post because that's what I use for demos while I am on the road.  If you travel for work or work remotely, you likely use an agent that you run in order to connect to the corporate network.  That agent establishes a secure connection to the corporate network, enabling you to access resources even from public locations.  That's exactly what a point-to-site network is: it includes an installer that adds a VPN connection.  Here you can see that I have a VPN connection to Microsoft IT VPN that allows me to VPN into the Microsoft corporate network, and another VPN connection named "DevOps-demo-dev-southcentral" that enables me to connect to an Azure virtual network.

image

When I click Connect on that VPN connection, the agent appears.

image

I then click Connect, and I am securely connected to the virtual network in Azure.  I can now access any resources within that virtual network as though they were part of my local network. 

There are two other types of connectivity to Azure:  site-to-site VPN and ExpressRoute.  A site-to-site VPN allows you to create a secure connection between your on-premises site and the virtual network by using a Windows RRAS server or configuring a gateway device.  ExpressRoute lets you create private connections between Azure and your on-premises or co-located infrastructure without going over the internet.  For more information on choosing between a VPN and ExpressRoute, see ExpressRoute or Virtual Network VPN – What’s right for me?

Create the Network

In my previous post, Creating Dev and Test Environments with Windows PowerShell, I showed an example of an XML configuration file for an Azure virtual network.  I used a simple network with three subnets.  One of the elements in that XML file is an additional gateway subnet.  When you create a virtual network, you can choose to configure a point-to-site VPN.

image

When you configure the subnets, you can then add a gateway subnet.

image

I’m lazy and I cheated.  I created the network using this wizard, and then exported the virtual network.  Who likes authoring XML documents directly, anyway?

image

Exporting your new network results in an XML file that looks like this:

NetworkConfig.xml
  1. <NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  2.   <VirtualNetworkConfiguration>
  3.     <Dns />
  4.     <VirtualNetworkSites>
  5.       <VirtualNetworkSite name="kirketestvnet-southcentral" Location="South Central US">
  6.         <AddressSpace>
  7.           <AddressPrefix>10.0.1.0/24</AddressPrefix>
  8.         </AddressSpace>
  9.         <Subnets>
  10.           <Subnet name="Subnet-1">
  11.             <AddressPrefix>10.0.1.0/27</AddressPrefix>
  12.           </Subnet>
  13.           <Subnet name="Subnet-2">
  14.             <AddressPrefix>10.0.1.32/27</AddressPrefix>
  15.           </Subnet>
  16.           <Subnet name="Subnet-3">
  17.             <AddressPrefix>10.0.1.64/26</AddressPrefix>
  18.           </Subnet>
  19.           <Subnet name="GatewaySubnet">
  20.             <AddressPrefix>10.0.1.128/29</AddressPrefix>
  21.           </Subnet>
  22.         </Subnets>
  23.         <Gateway>
  24.           <VPNClientAddressPool>
  25.             <AddressPrefix>10.0.0.0/24</AddressPrefix>
  26.           </VPNClientAddressPool>
  27.           <ConnectionsToLocalNetwork />
  28.         </Gateway>
  29.       </VirtualNetworkSite>
  30.     </VirtualNetworkSites>
  31.   </VirtualNetworkConfiguration>
  32. </NetworkConfiguration>

If you have an existing network and want to add a point-to-site VPN to it, simply export the XML configuration, add the gateway subnet and the VPNClientAddressPool nodes, and then import the configuration file.

Create the Gateway

Now that you’ve created the virtual network and the gateway subnet, it’s time to create the gateway itself.  In the Azure Management Portal (https://manage.windowsazure.com), go to the dashboard view of your VNet and click “Create Gateway”. 

image

While the gateway is being created, the status will look similar to this:

image

This process takes some time to complete (expect around 30 minutes).  In the meantime, start on the next step: creating certificates.
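If you prefer PowerShell over the portal, the same dynamic routing gateway can be created with the service management cmdlets; the VNet name below is the one from my configuration file.

New-AzureVNetGateway -VNetName "kirketestvnet-southcentral" -GatewayType DynamicRouting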

Creating Certificates

The communication between on-premises and Azure is secured using a self-signed root certificate.  If you are reading this blog, there is a high likelihood that you are a developer with Visual Studio installed.  If not, install Microsoft Visual Studio Express 2013 for Windows Desktop, which is free of charge.  In Windows 8, go to the Start screen, open the charm bar, and click settings.  Enable the “Show administrative tools” option.

image

Now go to the all apps view and look for the Visual Studio Tools folder.

image

In that folder you will find the Visual Studio 2013 command prompt (thankfully, this is much easier to locate in Visual Studio 2015!)

image

Right-click and run as Administrator.

Now that we have the command prompt open, we can create two certificates using the following commands:

Create Certificates
  1. makecert -sky exchange -r -n "CN=DevOpsDemoRootCert" -pe -a sha1 -len 2048 -ss My "DevOpsDemoRootCert.cer"
  2.  
  3. makecert.exe -n "CN=DevOpsDemoClientCert" -pe -sky exchange -m 96 -ss My -in "DevOpsDemoRootCert" -is my -a sha1

Once you’ve created the certificates, upload the root certificate to the management portal.

image

image

The result shows the certificate has been uploaded.

image

The client certificate depends on the root certificate.  We will export the client certificate and choose whether we want to use a password or a group to provide access to the certificate.  Open certmgr.msc.

image

Navigate to Certificates – Current User / Personal / Certificates.  Right-click on the client certificate that you just created and choose export. 

image

Follow the directions to export the client certificate, including the private key.  The result is a .pfx file; distribute that .pfx file to each machine where the client will be installed.
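As an alternative to the certmgr.msc wizard, you can export the client certificate from PowerShell.  This sketch assumes the subject name used in the makecert command above and a password of your choosing.

Export the Client Certificate
  1. $password = ConvertTo-SecureString -String "P@ssw0rd!" -Force -AsPlainText
  2. Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=DevOpsDemoClientCert" } |
  3.     Export-PfxCertificate -FilePath "C:\temp\DevOpsDemoClientCert.pfx" -Password $password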

Right-click on the .pfx file and choose Install.  Leave the installation location as Current User, and provide the password when prompted.

image

image

image

image

image

Click Finish, and the certificate is now installed.

Create the VPN Client Configuration Package

Go back to the Azure Management Portal.  You may need to refresh the page to get the most current status.  Once the gateway is created, it looks like this:

image

Now click on the link to download the 32-bit or 64-bit client VPN package.

image

When you download the package, the file name will be a GUID.  Feel free to save it with whatever file name you want.

image

Right-click the .EXE file and choose Properties.  On the Properties page, choose Unblock.

image

Now double-click the .EXE to run it.  You are asked if you want to install the VPN client.

image

Test It Out

Now that everything is wired together, the last thing to try is to actually VPN in.  Connect using the VPN client.

image

Now that we’re connected, run a simple ping test and see that it fails.

image

It fails because the Windows firewall in the VM itself is blocking the communication.  When we created the VM, a public endpoint for Remote Desktop connections was created.  That would connect us through the cloud service name; in my case it is kirketestDEV.cloudapp.net.  However, we are already connected to the network, so we don't have to use that endpoint.  Just open a new remote desktop connection and connect to the IP address of the machine, 10.0.1.4.

image

We connect to the VM using Remote Desktop and enable the Windows Firewall inbound rule for ICMP ping.  

image
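If you prefer to do this from an elevated PowerShell window inside the VM, enabling the built-in echo request rule does the same thing; the display name below is the default on Windows Server 2012 R2.

Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"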

Now go back to the console window and try the ping test again.  It works!

image

We were able to set up a point-to-site connection to our VNet.  I frequently use this approach when demonstrating virtual networks while traveling, because I can connect to the network from anywhere, even hotel or conference wireless connections.  I can then introduce the idea of a VPN connection and show exactly what to expect. 

Clean Up

I am using my MSDN subscription for this, which gives me $150 worth of Azure credit per month.  When we created the dynamic routing gateway (the process that took 30 minutes), we created a billable network resource.  Looking at the VPN Gateway Pricing page, I can see that the dynamic routing gateway costs $0.036 US per hour, which is around $27 per month. 

image

While that still leaves plenty of room for other projects, I may not want or need that gateway in my MSDN subscription all month if I am only using it for a demo.  Just go back to the virtual network’s dashboard and delete the gateway.  Of course, the next time you need it you will have to create the gateway again and sit through the 30 minute wait, but that is a small operational price to pay for something so convenient.
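Deleting (and later recreating) the gateway can also be scripted with the classic Azure PowerShell module, which is handy if you do this before and after every demo.  This is a sketch; the virtual network name is a placeholder for your own.

Remove and Recreate the Gateway
  1. # Delete the dynamic routing gateway when you are done with the demo
  2. Remove-AzureVNetGateway -VNetName "kirketestVNet"
  3.  
  4. # Recreate it the next time you need it (expect the same ~30 minute wait)
  5. New-AzureVNetGateway -VNetName "kirketestVNet" -GatewayType DynamicRouting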

For More Information

ExpressRoute or Virtual Network VPN – What’s right for me?

Configure a Virtual Network with a Site-to-Site VPN Connection

Configure a Point-to-Site VPN connection to an Azure Virtual Network.

VPN Gateway Pricing

Visual Studio Release Management, DSC, and Azure VMs


This post will show how to use Visual Studio Release Management with Windows PowerShell Desired State Configuration (DSC) to configure multiple deployment environments and deploy an application. 

If you just want to see the big payoff, scroll all the way down to the section “The Big Payoff”.

Background

I am presenting a series of workshops to our top GSI partners around the world, and part of the workshop involves a case study on DevOps.  A company (the name’s not important here, call them Fabrikam or Contoso) has suffered quality issues trying to deploy solutions into multiple environments.  They can get solutions to work in the development environment, but have trouble in the staging and production environments.  The developers and IT staff have both made manual changes to each environment, so they are not exactly sure of the dependencies for the application or how to ensure that each environment has those dependencies available.  They have an existing investment in Team Foundation Server, and we need to propose a solution.  The attendees have approximately 30 minutes to come up with an answer.  This post details the configuration and ends with a demo of exactly why this is so cool.

The environments we will release code to look like the following.

image

There is a complete walkthrough and scripts for creating this environment in my previous post, Creating Dev and Test Environments with Windows PowerShell.

For each environment, we will make sure that each server has IIS and ASP.NET 4.5 installed and that the default web site is stopped; then we’ll create a new web site, deploy our code to it, and finally open port 8080 on the firewall.  Rather than write lots of PowerShell to do all of this, we’ll use Desired State Configuration (DSC).

Again, if you just want to see what Release Management does and where’s the big payoff, scroll all the way down to the section “The Big Payoff”.

Release Management and Visual Studio Online

While I do find some twisted pleasure in installing server software and configuring permissions, I’d rather not set up TFS and Release Management Server just for this.  If you want to go that route, you can; the steps are documented online at Install Release Management.  I much prefer to use Visual Studio Online for all of my projects now, and it includes Release Management as part of the service.  You only need to install the Release Management Client for Team Foundation Server 2013.

Check In Source

Open Visual Studio 2013 and create a new web application project.

image

I choose a new MVC project that does not perform authentication, and I uncheck the “Host in the cloud” option.

image

Next, I add a folder called Deployment.  Our project will include details about deploying the solution.

image

Add a file in that folder named “Configuration.psd1”.

image

Replace the contents of that file.

Configuration.psd1
  1. @{
  2.    AllNodes = @(
  3.      @{
  4.          NodeName = $env:COMPUTERNAME;        
  5.       }
  6.    );
  7. }

Now add a second file to the folder named “InstallWebApp.ps1”.  This is the PowerShell DSC file that our Release Management template will use.  It describes the dependencies that the application requires, including ensuring that IIS is installed, ASP.NET 4.5 is installed, the default web site is stopped, and a new web site is created that uses the contents of our application to serve up the site.  We also enable port 8080 in the Windows Firewall.  We could go one step further and change the port binding for the web site to use port 8080, but let’s leave it as-is for now.

InstallWebApp.ps1
  1.   configuration Fabrikam_POC
  2. {  
  3.     # Import the module that defines custom resources  
  4.       Import-DscResource -Module xWebAdministration
  5.     Import-DscResource -Module xNetworking
  6.     # Dynamically find the applicable nodes from configuration data  
  7.     Node $AllNodes.NodeName  
  8.     {  
  9.         # Install the IIS role  
  10.           WindowsFeature IIS
  11.         {  
  12.               Ensure          = "Present"
  13.               Name            = "Web-Server"
  14.         }  
  15.         # Install the ASP .NET 4.5 role  
  16.           WindowsFeature AspNet45
  17.         {  
  18.               Ensure          = "Present"
  19.               Name            = "Web-Asp-Net45"
  20.         }  
  21.         # Stop an existing website          
  22.            xWebsite DefaultSite
  23.         {  
  24.               Ensure          = "Present"
  25.               Name            = "Default Web Site"
  26.               State           = "Stopped"
  27.               PhysicalPath    = "C:\Inetpub\wwwroot"
  28.               DependsOn       = "[WindowsFeature]IIS"
  29.         }  
  30.         # Copy the website content  
  31.           File WebContent
  32.         {  
  33.               Ensure          = "Present"
  34.               SourcePath      = "$applicationPath\_PublishedWebSites\RMDemo"
  35.               DestinationPath = "C:\inetpub\fabrikam"
  36.               Recurse         = $true
  37.               Type            = "Directory"
  38.               DependsOn       = "[WindowsFeature]AspNet45"
  39.         }         
  40.         # Create a new website          
  41.            xWebsite Fabrikam
  42.         {  
  43.               Ensure          = "Present"
  44.               Name            = "Fabrikam POC"
  45.               State           = "Started"
  46.               PhysicalPath    = "C:\inetpub\fabrikam"
  47.               DependsOn       = "[File]WebContent"
  48.         }  
  49.           xFirewall Firewall8080
  50.         {  
  51.               Name            = "Allow 8080"
  52.               DisplayName     = "Allow 8080"
  53.               DisplayGroup    = "Fabrikam POC Group"
  54.               Ensure          = "Present"
  55.               Access          = "Allow"
  56.               State           = "Enabled"
  57.               Profile         = ("Any")  
  58.               Direction       = "InBound"
  59.               RemotePort      = ("8080", "8080")  
  60.               LocalPort       = ("8080", "8080")  
  61.               Protocol        = "TCP"
  62.               Description     = "Allow 8080 for Fabrikam POC App"
  63.         }  
  64.     }  
  65. }
  66. Fabrikam_POC -ConfigurationData $applicationPath\Deployment\Configuration.psd1

Two bits of weirdness to explain here.  The first is the SourcePath on line 34: when the web site is built and the output is copied to the Drop folder, the web site will be in a folder named _PublishedWebsites.  You can see this in Visual Studio Online by inspecting the build output.

image

The second is the ConfigurationData argument on the last line.  Admittedly, I copied bits and pieces of scripts trying to get this to work, and that was an artifact I couldn’t seem to get around.  There may be a simpler way; please leave comments if you see opportunities to improve this.
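If you want to test the configuration outside of Release Management, you can compile and apply it locally.  This is a sketch; $applicationPath is normally supplied by Release Management at deploy time, so it is stubbed out here and the drop path is a placeholder.

Test the Configuration Locally
  1. # Stub the variable that Release Management normally provides
  2. $applicationPath = "C:\Drops\RMDemo"
  3.  
  4. # Dot-source the configuration; the last line of the script compiles it to a MOF
  5. . "$applicationPath\Deployment\InstallWebApp.ps1"
  6.  
  7. # Apply the compiled configuration (the output folder name matches the configuration name)
  8. Start-DscConfiguration -Path .\Fabrikam_POC -Wait -Verbose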

Our InstallWebApp.ps1 script imports two modules, xNetworking and xWebAdministration, neither of which will exist on the server.  We have to deploy those as artifacts to the servers as well.  Create a new folder named Modules as a child of Deployment.

image

Go download the DSC Resource Kit (All Modules).  In File Explorer, select the xNetworking and xWebAdministration folders and drag them to the Modules folder.  The resulting structure looks like this.

image

Now that we have copied the required dependencies, the next step is to create a PowerShell script to deploy them.  I create a file “InstallModules.ps1” that will use DSC to ensure the modules are deployed. 

InstallModules.ps1
  1. configuration InstallModules
  2.     {
  3.     Node $env:COMPUTERNAME
  4.     {
  5.         # Copy the Modules
  6.         File ModuleContent
  7.         {
  8.             Ensure          = "Present"
  9.             SourcePath      = "$applicationPath\Deployment\Modules"
  10.             DestinationPath = "$env:ProgramFiles\WindowsPowershell\Modules"
  11.             Recurse         = $true
  12.             Type            = "Directory"
  13.         }       
  14.     }
  15. }
  16.  
  17.   InstallModules

You’ll see when we later configure the workflow for the release template that we need a workflow activity that ensures the modules are deployed before we call the InstallWebApp.ps1 script.  This is because the InstallWebApp.ps1 depends on the modules being present.
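A quick way to confirm the modules landed on a target server after the InstallModules step is to list them (a sketch; the module names match the folders copied under Deployment\Modules):

Check the Modules
  1. Get-Module -ListAvailable xWebAdministration, xNetworking | Select-Object Name, Version, ModuleBase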

!!!IMPORTANT STEP HERE!!!

The way that this solution works is that Release Management will take the build output and release it to the specific environment.  We have to make sure that all of the files we just added are included in the build output; they won’t be by default.  For every file that is a descendant of the Deployment folder, go to the properties of that file and set Copy to Output Directory to Copy Always.  You can multi-select files to make this easier.

image

The last step is to check the code in.  Right-click the solution and choose Add solution to source control.  You are prompted for Git or Team Foundation Version Control.  I prefer Git, but the choice has no bearing on the rest of this post.

image

Since I chose Git, my source is checked into a local repository.  I now need to push it to Visual Studio Online.  Go to VisualStudio.com and create an account if you haven’t already (there’s a free trial, and if you have MSDN you have access already through subscriber benefits).  Create a new project.

image

Once the project is created, we are assured our team will absolutely love this. 

image

Go to the code section, and you will see a URL for the repository.

image

Go back to Visual Studio.  Right-click the solution and choose Commit.  Provide a commit message and then click the Commit button.

image

After you click Commit, you can now click Sync in the resulting dialog to push changes to the server.

image

You can now provide the URL for the remote repository to push our changes to the server.

image

Click publish, then go back to Visual Studio Online to see that the code was committed!

image
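If you prefer the command line to the Team Explorer dialogs, the equivalent Git commands look something like the following; the remote URL is a placeholder for the one shown in the Code section of your project.

Push with Git
  1. git remote add origin https://myaccount.visualstudio.com/DefaultCollection/_git/RMDemo
  2. git add -A
  3. git commit -m "Add web application and deployment scripts"
  4. git push -u origin master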

Go back to Visual Studio.  We need to define a new build.  Under the team project for your Visual Studio Online project (you may have to connect to the team project first), go to Builds and then choose New Build Definition.

image

Give the new build a name, and choose Enabled for queue processing.

image

For the Trigger, I choose Continuous Integration.  This comes in really handy as you troubleshoot the PowerShell deployment scripts: as you make changes to them, you will need to kick off a build so that the files are available for Release Management to deploy.  For example, I forgot the Copy Always step from above (that’s critical to making this work), so I just had to make a change and check in the source again, automatically kicking off a new build.

image

For the Build Defaults tab, I chose to use the Hosted Build Controller.  I could create my own build controller, but I love that Visual Studio Online provides that for me, too.

image

Finally, on the Process tab, choose the project or just provide the relative path of the .sln file.

image

Before continuing, test your build definition by queuing a new build.

image

image

Open the Release Management Client

Go to the Release Management client.  You are prompted to connect. 

image

There is some initial configuration for pick lists that is required to identify the stage type and the technology type.

image

We also need to connect to Azure in order to connect those stage types to environments.  Go to the Administration / Manage Azure tab.  You are prompted for the subscription ID, the management certificate key, and a storage account name. 

image

This seems like a good assignment for a summer intern to add some value… create a prompt that connects to Azure and lets you pick this stuff instead of having to read my self-indulgently long blog post to find this needle in a haystack.  However, there’s a straightforward workaround, and if you are working with Azure you should know how to do this anyway. 

Go to PowerShell and run the following command (this requires that the Azure PowerShell module is installed, available from http://azure.microsoft.com/en-us/downloads/).

image
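For reference, the usual way to do this with the classic Azure PowerShell module is the Get-AzurePublishSettingsFile cmdlet, which opens a browser and downloads a .publishsettings file for your subscription (a sketch, assuming that is the approach shown in the screenshot):

Download the Publish Settings
  1. Get-AzurePublishSettingsFile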

You are prompted to save the file.

image

That file has the subscription ID (red arrow) and management certificate key (blue arrow).

image

Use the name of a storage account in your Azure subscription.

 image

The management certificate is needed in order to connect to the deployment environment and do things like start the environment and stop it.  The storage account is used as the Drop folder for the build output.  If you open up the blob container “releasemanagement”, you will see the contents of the Drop folder for each Build that is released.  Notice the Deployment folder that contains all of the PowerShell DSC stuff we were working with previously, and the _PublishedWebsites folder that contains the web site.

image

Define the Environments

Now go to the Configure Paths tab and create a new Azure vNext environment.  Click the button to Link Azure Environment.

image

Choose the subscription, and then choose a cloud service that you will deploy to and choose Link.

image

Click the Link Azure Servers button to link the VMs we previously created (see the post Creating Dev and Test Environments with Windows PowerShell).

image

You’ll now have defined the DEV environment that is linked to those Azure virtual machines.

image

Do the same thing, creating an environment for STAGE and PROD, linking to the appropriate servers.

image

Now that we have the environments, we need to create a release path. 

Create a Release Path

Go to Configure Paths / vNext Release Paths.  Click the New button.

image

Add stages.  For each stage, choose the environment.  For the Dev environment, I enabled the Automated checkbox for the acceptance step.  This means that we can automatically release to the Dev environment upon a successful build.

image

Define Components

On the Configure Apps / Components tab, add a new component.  This allows us to associate the component to deploy from the build drop location.  I chose “Build with application”, and I entered a “\” to access the root of the build drop.

image

Create a Release Template

Go to Configure Apps / vNext Release Templates and choose the New button.  Provide a name, and choose the release path that we created previously.  We can also choose if this release template will be triggered upon a build.  We will select that.  Click the Edit button to choose a team project and a build definition.

image

In the Toolbox, right-click Components and choose Add.

image

We can then choose the component that we created previously, “Web Site Files”.  Click the Link button.

image

In the workflow designer, add the “Start Environment” action and four “Deploy Using PS/DSC” actions.  Choose the server name, provide the username and password, and select the component that we just added.  The PSScriptPath for the first action points to the InstallModules.ps1 script that we added to the build output; this ensures that the xWebAdministration and xNetworking modules are present.  The second action installs the web application files.  We end up with 4 deployment actions: server1:modules, server1:webapp, server2:modules, server2:webapp.  I fully admit there might be (OK, there probably is) a better way to accomplish this, but this is how I got it to work.

image

An important note, else the release will fail: change SkipCACheck to true for each deploy action, in each stage (Dev, Stage, and Prod).

image

The Big Payoff

It’s time to see the fruits of our labor!  Go to the project and make a change.  For instance, edit Index.cshtml.

Index.cshtml
  1. @{
  2.     ViewBag.Title = "Home Page";
  3. }
  4.  
  5. <div class="jumbotron">
  6.     <h1>Success!</h1>
  7.     <p class="lead">Our demo worked.</p>
  8.     <p>If you are seeing this, the demo gods favored me today!</p>
  9. </div>

Save the change and commit, then push the change to your Visual Studio Online repository.  We enabled our build definition to automatically build upon check-in, and we enabled our release path to automatically deploy to the Dev environment upon build.  After we push our changes, we see that a build has automatically been queued.

image

Once the build is complete, go to the Release Management client and go to the Releases / Releases tab.  Our new release was automatically initiated.

image

Double click on the release to view progress.

image

Should something go wrong, you’ll receive an email.

image

Side note: you end up getting a deluge of emails from this thing, and I hate emails.  Maybe I am missing it, but I expected to see tighter integration with TFS work item tracking right here: let me submit work items directly from the RM client (or even automatically).  It seems reasonable that the validation stage should be a work item assigned to testers; once the work item is closed, a request to move to the next stage is created, and a failed release becomes a defect.  Again, maybe that’s in there somewhere, but I’m not finding it.

If you want to kick off a release manually (for instance, you forgot a parameter in one of the deployment actions like I just did and need to edit the release template), you can go to Configure Apps / vNext Release Templates, select your template, and choose New Release.  Choose the target stage (most likely Dev) and choose a build.

image

In a previous post, Configure a Point-to-Site VPN Connection to an Azure VNet, I showed how you could establish a VPN connection to an Azure virtual network.  To test, I connected to the VPN and then opened a browser on my local machine.

image

In case you are not as in awe of this as I am, let’s recap everything that was done.

  • Used the PowerShell script from the post Creating Dev and Test Environments with Windows PowerShell to create 3 environments, each with 2 servers in an availability set, all joined to a virtual network.
  • Used Visual Studio to check in source code to Visual Studio Online.
  • Used Visual Studio Online hosted build controller to automatically build our project.  I could have added a few tests as part of the build just to really show off.
  • Used Release Management to automatically release our project to multiple environments.
  • Used PowerShell DSC to ensure critical dependencies were available on the target machines, including IIS, ASP.NET 4.5, stopping the default web site, and adding a new web site.  We also pushed our code to the new web site, and opened a firewall port 8080 just to show off.

That’s an impressive list… but even more impressive is that, now that we’ve verified everything is working in the Dev environment, I receive an email letting me know that I have a request to validate a release. 

image

I open the Release Management client and can see my approval requests. 

image

image

Once I approve, Release Management moves on to the next stage.

image

If I forget to turn on the servers before a check-in, no problem.  Release Management will turn the environment on.  If I make manual changes to the environment and accidentally remove a dependency, no problem.  Release Management will ensure the dependency is there as part of the DSC script.

Of course, I have to show the final money shot… we successfully deployed to each environment, gated by approvals and validation throughout the release process.

image

Workflow-controlled releases to multiple environments in a predictable and consistent fashion, without manual intervention.  Fewer bugs, and less time troubleshooting deployments. 

For More Information

Install Release Management

Install the Release Management Client for Team Foundation Server 2013

Creating Dev and Test Environments with Windows PowerShell

Configure a Point-to-Site VPN Connection to an Azure VNet

Azure Resource Manager Templates with Visual Studio 2015


This post will show you how to create an Azure Resource Manager template using Visual Studio 2015.

Background

In a previous post, I talked about Creating Dev and Test Environments with Windows PowerShell and showed how to create a virtual network with 3 subnets, and how to create 3 environments that each have 2 virtual machines in an availability set to each of the subnets.

image

I got some honest feedback from friends saying that I should stop showing how to use the service management API and should instead start showing examples of how to use Azure Resource Manager.

At the time, I didn’t really know Azure Resource Manager, although I had certainly seen it a number of times.  When I looked at the templates, they looked like a bunch of scary JSON, and I didn’t use the new Azure portal very often.  Since then, I’ve put in a bit more time and thought I would show you how you can create your own Azure Resource Manager template to create an environment that looks like the above.

The template for the following post is available on GitHub at https://github.com/kaevans/DevTestProd.

If you are interested in additional ARM templates, there is a gallery of them at https://github.com/Azure/azure-quickstart-templates.

Getting Started

I am still using Visual Studio 2015 RC, as Visual Studio 2015 isn’t due to release for another two weeks.  Create a new Azure Resource Group project.

image

At this point you can choose from a number of templates to get you started.  For instance, there is a template for “Windows Server Virtual Machines with Load Balancer” that has pretty much what we want to create for a single environment.

image

Clicking that will create the following:

image

Two files are created. LoadBalancedVirtualMachine.json is the Azure Resource Manager (ARM) template that describes the resources to be added or updated.  LoadBalancedVirtualMachine.param.dev.json is a parameter file that enables you to provide parameter values for the template.

The JSON Outline pane separates the template into its 3 parts: parameters, variables, and resources.

The parameters section contains the parameters that you will provide for the template.  You can provide the parameters as a JSON file, or the end user can use the new portal to provide values.  You’ll see this when we deploy the template.

image

The variables section contains the variables used internally within the template’s resources.  These variables help you avoid hard-coding values into the resources themselves, and enable you to promote values to become parameters later.  For instance, instead of having a variable “availabilitySetName”, you might decide to make that a user-defined parameter instead.

image

Finally, the resources section defines the resources to be added or updated during the deployment.  This was the part that scared me as I saw some crazy-looking JSON and thought, “oh hell no, that’s worse than editing XML by hand”.

image
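For orientation, the overall shape of a template is nothing more than the following skeleton (a minimal sketch using the 2015-01-01 schema):

Template Skeleton
  1. {
  2.   "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  3.   "contentVersion": "1.0.0.0",
  4.   "parameters": { },
  5.   "variables": { },
  6.   "resources": [ ]
  7. }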

Thankfully, using the Visual Studio 2015 tooling and just a little bit of cleverness, we can tailor this template to our liking.

Delete What You Don’t Need

Now that I’ve been working with these tools, I think the best way to get started is to use a template that looks like what you want, and then start deleting the things you don’t need.  For instance, this template includes a load balancer that wasn’t in my original design, so I am going to delete it and the “loadBalancerName” parameter from the JSON Outline.  Visual Studio then shows red markers in the margin.  I scroll down to the first marker and see that there is a reference in the template to the parameter.

image

Easy enough, I don’t need a load balancer, so I delete that line of code.

image

Rinse and repeat until the red marks are all gone in the margin.

Edit What You Want

The next step is to make edits.  Right now, the template represents a single environment, but we want 3 environments of 2 VMs each, deployed into 3 different subnets.  This means we will do quite a bit of editing.

Availability Sets

Click on the Availability Set node in the JSON Outline.  Notice that the availability set resource uses a variable “availabilitySetName”.

image

We are going to have 3 availability sets (one for dev, stage, and prod, respectively).  Let’s change that variable to “devAvailabilitySetName” with a value “DevAvSet”.

image

We now have red marks in the margin.

image

Scroll down to the red marks and correct the errors.

DevAvailabilitySet
  1. {
  2.   "apiVersion": "2015-05-01-preview",
  3.   "type": "Microsoft.Compute/availabilitySets",
  4.   "name": "[variables('devAvailabilitySetName')]",
  5.   "location": "[resourceGroup().location]",
  6.   "tags":
  7.   {
  8.     "displayName": "DevAvailabilitySet"
  9.   }
  10. },

We have now defined the availability set for the dev environment; we will come back and create one each for stage and prod in a little bit.

Virtual Networks

We currently have a virtual network with a single subnet.  Let’s add a few subnets.  Instead of including the subnet name, location, address space, subnet names, or subnet prefixes in the resource template, I am going to use variables.

Virtual Network
  1. {
  2.   "apiVersion": "2015-05-01-preview",
  3.   "type": "Microsoft.Network/virtualNetworks",
  4.   "name": "[parameters('virtualNetworkName')]",
  5.   "location": "[resourceGroup().location]",
  6.   "dependsOn": [ ],
  7.   "tags":
  8.   {
  9.     "displayName": "VirtualNetwork"
  10.   },
  11.   "properties":
  12.   {
  13.     "addressSpace":
  14.     {
  15.       "addressPrefixes":
  16.       [
  17.         "[variables('VirtualNetworkPrefix')]"
  18.       ]
  19.     },
  20.     "subnets":
  21.     [
  22.       {
  23.         "name": "[variables('VirtualNetworkSubnet1Name')]",
  24.         "properties":
  25.         {
  26.           "addressPrefix": "[variables('VirtualNetworkSubnet1Prefix')]"
  27.         }
  28.       },
  29.       {
  30.         "name": "[variables('VirtualNetworkSubnet2Name')]",
  31.         "properties":
  32.         {
  33.           "addressPrefix": "[variables('VirtualNetworkSubnet2Prefix')]"
  34.         }
  35.       },
  36.       {
  37.         "name": "[variables('VirtualNetworkSubnet3Name')]",
  38.         "properties":
  39.         {
  40.           "addressPrefix": "[variables('VirtualNetworkSubnet3Prefix')]"
  41.         }
  42.       }
  43.     ]
  44.   }
  45. },

Each place where we’ve used the [variables()] syntax requires an accompanying variable to be declared.  We add those to the “variables” section.  This allows me to easily separate out the variables and promote them to input parameters later if I decide I want the user to be able to configure this value.

Variables
  1. "VirtualNetworkPrefix": "10.0.0.0/16",
  2. "VirtualNetworkSubnet1Name": "Subnet-1",
  3. "VirtualNetworkSubnet1Prefix": "10.0.0.0/24",
  4. "VirtualNetworkSubnet2Name": "Subnet-2",
  5. "VirtualNetworkSubnet2Prefix": "10.0.1.0/24",
  6. "VirtualNetworkSubnet3Name": "Subnet-3",
  7. "VirtualNetworkSubnet3Prefix": "10.0.2.0/24",

NetworkInterface

The NetworkInterface binds a virtual machine to a subnet.  If we want to place a VM in Subnet-1, we need to create a network interface to map the two together.  If you examine the networkInterface resource, you will see that it uses the copy element with the copyindex() function.

image

This allows us to perform rudimentary looping to create multiple resources.  The end user provides a parameter for how many instances they want, and we create that many networkInterfaces.  If the user provides the name “NetworkInterface” with 3 instances, the output would be “NetworkInterface0”, “NetworkInterface1”, and “NetworkInterface2” (copyindex() is zero-based).  I don’t want the user to have to provide this name, so we’ll turn it into a variable.  However, I want to use this for the dev environment, and I don’t want to hard-code the environment name “dev” into the resource definition.  Instead, I add a variable “devPrefix” and concatenate its value with “nic” and the copyindex.

NetworkInterfaces
  1. {
  2.   "apiVersion": "2015-05-01-preview",
  3.   "type": "Microsoft.Network/networkInterfaces",
  4.   "name": "[concat(variables('devPrefix'), 'nic', copyindex())]",
  5.   "location": "[resourceGroup().location]",
  6.   "tags":
  7.   {
  8.     "displayName": "DevNetworkInterfaces"
  9.   },
  10.   "copy":
  11.   {
  12.     "name": "nicLoop",
  13.     "count": "[variables('numberOfInstances')]"
  14.   },
  15.   "dependsOn":
  16.   [
  17.     "[concat('Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'))]"
  18.   ],
  19.   "properties":
  20.   {
  21.     "ipConfigurations":
  22.     [
  23.       {
  24.         "name": "ipconfig1",
  25.         "properties":
  26.         {
  27.           "privateIPAllocationMethod": "Dynamic",
  28.           "subnet":
  29.           {
  30.             "id": "[variables('subnet1Ref')]"
  31.           }
  32.         }
  33.       }
  34.     ]
  35.  
  36.   }
  37. },

If the user inputs 3 instances with a prefix of “dev”, the names would now be “devnic0”, “devnic1”, and “devnic2”.

Virtual Machines

After we edit the network interfaces, we have more red marks in the margin.  This tells us that we need to edit the virtual machines.  The first mark is in the dependsOn section.  We edit it to use the same naming scheme that we just used for our network interfaces.

image

We do the same for the networkInterfaces section.

image

The virtual machines are named with a single prefix.  If we want three environments (dev, stage, prod), we have to change this from a single prefix to 3 different prefixes.  Delete the parameter named “vmNamePrefix” and use the variable “devPrefix” that we introduced previously.

image

image

Update the tag to reflect this is the development environment.

image

Finally, we have to specify the name of the OS disk.  Since we are using the copyindex() function to differentiate disks, we need to also include the environment name.

OSDisk
  1. "osDisk":
  2. {
  3.   "name": "osdisk",
  4.   "vhd":
  5.   {
  6.     "uri": "[concat('http://',parameters('newStorageAccountName'),'.blob.core.windows.net/vhds/','osdisk', variables('devPrefix'), copyindex(), '.vhd')]"
  7.   },
  8.   "caching": "ReadWrite",
  9.   "createOption": "FromImage"
  10. }

The result is the following section.

DevVirtualMachines
  1. {
  2.       "apiVersion": "2015-05-01-preview",
  3.       "type": "Microsoft.Compute/virtualMachines",
  4.       "name": "[concat(variables('devPrefix'), copyindex())]",
  5.       "copy":
  6.       {
  7.         "name": "virtualMachineLoop",
  8.         "count": "[variables('numberOfInstances')]"
  9.       },
  10.       "location": "[resourceGroup().location]",
  11.       "tags":
  12.       {
  13.         "displayName": "DevVirtualMachines"
  14.       },
  15.       "dependsOn":
  16.       [
  17.         "[concat('Microsoft.Storage/storageAccounts/', parameters('newStorageAccountName'))]",
  18.         "[concat('Microsoft.Network/networkInterfaces/', variables('devPrefix'), 'nic', copyindex())]",
  19.         "[concat('Microsoft.Compute/availabilitySets/', variables('devAvailabilitySetName'))]"
  20.       ],
  21.       "properties":
  22.       {
  23.         "availabilitySet":
  24.         {
  25.           "id": "[resourceId('Microsoft.Compute/availabilitySets',variables('devAvailabilitySetName'))]"
  26.         },
  27.         "hardwareProfile":
  28.         {
  29.           "vmSize": "[parameters('vmSize')]"
  30.         },
  31.         "osProfile":
  32.         {
  33.           "computername": "[concat(variables('devPrefix'), copyIndex())]",
  34.           "adminUsername": "[parameters('adminUsername')]",
  35.           "adminPassword": "[parameters('adminPassword')]"
  36.         },
  37.         "storageProfile":
  38.         {
  39.           "imageReference":
  40.           {
  41.             "publisher": "[parameters('imagePublisher')]",
  42.             "offer": "[parameters('imageOffer')]",
  43.             "sku": "[parameters('imageSKU')]",
  44.             "version": "latest"
  45.           },
  46.           "osDisk":
  47.           {
  48.             "name": "osdisk",
  49.             "vhd":
  50.             {
  51.               "uri": "[concat('http://',parameters('newStorageAccountName'),'.blob.core.windows.net/vhds/','osdisk', variables('devPrefix'), copyindex(), '.vhd')]"
  52.             },
  53.             "caching": "ReadWrite",
  54.             "createOption": "FromImage"
  55.           }
  56.         },
  57.         "networkProfile":
  58.         {
  59.           "networkInterfaces":
  60.           [
  61.             {
  62.               "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('devPrefix'), 'nic', copyindex()))]"
  63.             }
  64.           ]
  65.         }
  66.       }
  67.     }

After all that work, you are now back to pretty much where you started, but with a few improvements.

You Now Have One Environment

The result is now a little more evident in the resources node of the JSON Outline pane.  Instead of a single generic environment, we are now set up for a Dev environment.

image

While it may seem that we are no better off than before, you will see that we can now more quickly duplicate sections and create the stage and prod environments.

Staging Environment

I could easily right-click the “resources” node in the JSON Outline and choose “Add New Resource”.  That would add the resource, variables, and parameters to support the newly added resource.  However, we would just have to go back and edit the parameters and variables, so we are going to edit the JSON by hand.  Don’t worry, it’s not that bad.

AvailabilitySet

Click on the “DevAvailabilitySet” node in the JSON Outline.  The entire JSON block is highlighted.  Collapse the DevVirtualMachines node so that you can easily see where to paste a copy.

image

Paste the following to define an availability set for the staging environment.

StageAvailabilitySet
  1. {
  2.   "apiVersion": "2015-05-01-preview",
  3.   "type": "Microsoft.Compute/availabilitySets",
  4.   "name": "[variables('stageAvailabilitySetName')]",
  5.   "location": "[resourceGroup().location]",
  6.   "tags":
  7.   {
  8.     "displayName": "StageAvailabilitySet"
  9.   }
  10. },

We get a red mark in the margin because we reference a variable, “stageAvailabilitySetName”, that doesn’t yet exist.  Just add that to the variables section to fix it.

image
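For reference, the fix is a single new entry in the variables section.  The value shown here simply mirrors the “DevAvSet” naming used earlier and is an assumption.

Variables
  1. "stageAvailabilitySetName": "StageAvSet",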

NetworkInterfaces

 

Now we copy the DevNetworkInterfaces section and paste below the StageAvailabilitySet section.

StageNetworkInterface
  1. {
  2.   "apiVersion": "2015-05-01-preview",
  3.   "type": "Microsoft.Network/networkInterfaces",
  4.   "name": "[concat(variables('stagePrefix'), 'nic', copyindex())]",
  5.   "location": "[resourceGroup().location]",
  6.   "tags":
  7.   {
  8.     "displayName": "StageNetworkInterfaces"
  9.   },
  10.   "copy":
  11.   {
  12.     "name": "nicLoop",
  13.     "count": "[variables('numberOfInstances')]"
  14.   },
  15.   "dependsOn":
  16.   [
  17.     "[concat('Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'))]"
  18.   ],
  19.   "properties":
  20.   {
  21.     "ipConfigurations":
  22.     [
  23.       {
  24.         "name": "ipconfig1",
  25.         "properties":
  26.         {
  27.           "privateIPAllocationMethod": "Dynamic",
  28.           "subnet":
  29.           {
  30.             "id": "[variables('subnet2Ref')]"
  31.           }
  32.         }
  33.       }
  34.     ]
  35.  
  36.   }
  37. },

Notice on line 30 that we are putting these VMs in subnet2, just like in the diagram at the beginning of the post.  We also introduce a new variable, “stagePrefix” with a value of “stage”.
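Note that line 30 references variables('subnet2Ref').  If your starting template only defined subnet1Ref, you will need references for the other subnets as well.  A sketch, assuming a vnetID variable built from the virtual network name (the exact variable names are assumptions):

Subnet References
  1. "vnetID": "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]",
  2. "subnet1Ref": "[concat(variables('vnetID'), '/subnets/', variables('VirtualNetworkSubnet1Name'))]",
  3. "subnet2Ref": "[concat(variables('vnetID'), '/subnets/', variables('VirtualNetworkSubnet2Name'))]",
  4. "subnet3Ref": "[concat(variables('vnetID'), '/subnets/', variables('VirtualNetworkSubnet3Name'))]",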

VirtualMachines

We’ve already done the hard work to parameterize all of this, so now we can copy the virtual machines section and replace “dev” with “stage”.

VirtualMachines
  1. {
  2.       "apiVersion": "2015-05-01-preview",
  3.       "type": "Microsoft.Compute/virtualMachines",
  4.       "name": "[concat(variables('stagePrefix'), copyindex())]",
  5.       "copy":
  6.       {
  7.         "name": "virtualMachineLoop",
  8.         "count": "[variables('numberOfInstances')]"
  9.       },
  10.       "location": "[resourceGroup().location]",
  11.       "tags":
  12.       {
  13.         "displayName": "StageVirtualMachines"
  14.       },
  15.       "dependsOn":
  16.       [
  17.         "[concat('Microsoft.Storage/storageAccounts/', parameters('newStorageAccountName'))]",
  18.         "[concat('Microsoft.Network/networkInterfaces/', variables('stagePrefix'), 'nic', copyindex())]",
  19.         "[concat('Microsoft.Compute/availabilitySets/', variables('stageAvailabilitySetName'))]"
  20.       ],
  21.       "properties":
  22.       {
  23.         "availabilitySet":
  24.         {
  25.           "id": "[resourceId('Microsoft.Compute/availabilitySets',variables('stageAvailabilitySetName'))]"
  26.         },
  27.         "hardwareProfile":
  28.         {
  29.           "vmSize": "[parameters('vmSize')]"
  30.         },
  31.         "osProfile":
  32.         {
  33.           "computername": "[concat(variables('stagePrefix'), copyIndex())]",
  34.           "adminUsername": "[parameters('adminUsername')]",
  35.           "adminPassword": "[parameters('adminPassword')]"
  36.         },
  37.         "storageProfile":
  38.         {
  39.           "imageReference":
  40.           {
  41.             "publisher": "[parameters('imagePublisher')]",
  42.             "offer": "[parameters('imageOffer')]",
  43.             "sku": "[parameters('imageSKU')]",
  44.             "version": "latest"
  45.           },
  46.           "osDisk":
  47.           {
  48.             "name": "osdisk",
  49.             "vhd":
  50.             {
  51.               "uri": "[concat('http://',parameters('newStorageAccountName'),'.blob.core.windows.net/vhds/','osdisk', variables('stagePrefix'), copyindex(), '.vhd')]"
  52.             },
  53.             "caching": "ReadWrite",
  54.             "createOption": "FromImage"
  55.           }
  56.         },
  57.         "networkProfile":
  58.         {
  59.           "networkInterfaces":
  60.           [
  61.             {
  62.               "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('stagePrefix'), 'nic', copyindex()))]"
  63.             }
  64.           ]
  65.         }
  66.       }
  67.     }

Prod Environment

That went much more quickly this time!  We can now copy the sections we just pasted and replace “stage” with “prod”.  The easiest way to do this is to leverage the editor… just collapse the 3 sections we previously created.

image

Paste.  Now select the three sections you just pasted.  Use Ctrl+H to find and replace…

image

The result is our set of resources describing the environments we want to deploy.

image

Testing Things Out

Let’s test things out.  When we created the project, a file called “LoadBalancedVirtualMachine.param.dev.json” was created.  Let’s use that file to provide the parameter values for our script.

image

Params
  1. {
  2.   "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  3.   "contentVersion": "1.0.0.0",
  4.   "parameters":
  5.   {
  6.     "virtualNetworkName":
  7.     {
  8.       "value": "kirkermvnet"
  9.     },
  10.     "adminUsername":
  11.     {
  12.       "value": "myadmin"
  13.  
  14.     }
  15.   }
  16. }

Notice we don’t provide a value for every parameter because some have default values.

Right-click on the “Deploy-AzureResourceGroup.ps1” script and choose “Open with PowerShell ISE”.

image

In the PowerShell ISE command window, execute “Switch-AzureMode AzureResourceManager” and then “Add-AzureAccount”.

image

Now run the script.
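Putting it together, the deployment from the ISE looks something like the following sketch.  The parameter names come from the Visual Studio-generated Deploy-AzureResourceGroup.ps1 and may differ slightly by tooling version; the resource group name, location, and file paths are placeholders.

Deploy the Template
  1. Switch-AzureMode AzureResourceManager
  2. Add-AzureAccount
  3. .\Deploy-AzureResourceGroup.ps1 `
  4.     -ResourceGroupName "DevTestProd" `
  5.     -ResourceGroupLocation "West US" `
  6.     -TemplateFile ".\Templates\LoadBalancedVirtualMachine.json" `
  7.     -TemplateParametersFile ".\Templates\LoadBalancedVirtualMachine.param.dev.json"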

Debugging

And if you followed along to this point, you probably got the same outcome as me… everything was provisioned correctly except the virtual machines.

image

There are a few things you can do here, such as go to the portal and look at the logs for the resource group.

Get-AzureResourceGroupLog
  1. Get-AzureResourceGroupLog -ResourceGroup DevTestProd -DetailedOutput

We can also go to the resource group and click on the last deployment date.  From there we can see the parameters used to deploy the template.

image

Scroll down and you can see the list of operations.

image

Click on an operation to get the details.  This is the same information as what you pulled from the logs in PowerShell.  More details are available at Troubleshooting deployments.

Truth be told, this screenshot shows Status=OK, but I was staring at “Bad Request” for quite a while without any helpful information beyond that.  Troubleshooting templates can be frustrating when you have little to go on (and have been editing JSON directly for the past few hours), but trust me… this is worth troubleshooting through.

image

After a few hours of pulling my hair out and not getting anything beyond “Bad Request”, I finally thought to use a password stronger than “pass@word1”.  I’ll be darned, it worked.  Not only that, but provisioning with Azure Resource Manager is asynchronous, so your scripts finish a heck of a lot sooner than they used to because VMs provision in parallel.

image

We can go to the new portal and see all of our stuff.  For instance, we can inspect the virtual network in the resource group and confirm the VMs are allocated to different subnets.

image

More importantly, we can now tag our resources and have those tags show up in billing, and we can use Role-Based Access Control (RBAC) to control access to resources.  This is so much better than adding everyone as a subscription admin.

image

Download the Code

The template for this post is available on GitHub at https://github.com/kaevans/DevTestProd.  Something to call out is that I added a link that will let you import the JSON template to your subscription.

image

When you click the link, you are taken to the Azure portal where you can import the template.

image

Save, and now you can provide parameters using the portal.

image

You could also go to the Marketplace and search for Template Deployment.

image

image

Once you create the template deployment, you can edit the template.

image

 

For More Information

Download the code – https://github.com/kaevans/DevTestProd

Gallery of ARM templates – https://github.com/Azure/azure-quickstart-templates

Azure Resource Manager Overview

Using Azure PowerShell with Resource Manager

Using the Azure CLI with Resource Manager

Using the Azure Portal to manage resources

Authoring templates

Troubleshooting deployments
