Better Question

Sorry to say, but people do not seem to ask good questions.  In fact, I find that people don’t seem to ask questions at all.  I notice people making a lot of assumptions without ever reflecting on whether those assumptions are right.

When people do ask questions, they seem to ask the wrong questions.  The usual question coming out of people is essentially “why?”.

  • Why did you write the component like that?
  • Why aren’t there any unit tests?
  • Why is your project behind schedule, over budget and full of bugs?

A “why” question elicits a belief-based answer.  And when a “why” question does get a non-belief-based answer, it’s usually because the asker injected the question full of their own beliefs to get that answer.  A better and easier way to ask questions is to use the other W’s instead: who, what, when, where, whow (the W is silent).

  • How many unit tests are there?
  • What is this testing?
  • When are the tests being run?
  • Where are the tests run?
  • Who writes the tests?
  • How do you decide what gets tested?

Part of the reason I avoid belief-based questions is to avoid distractions.  Generally, if I’m involved there is a goal that I am trying to achieve (say, getting a project back on schedule) and I don’t have time for philosophical debates about anything.  I save those discussions for after hours with yummy beverages and delicious food.

Stepping back and asking questions is an important skill.  Learning to ask useful questions is an even more important skill to develop.  For instance, “What would you need to get this million-line project to 80% code coverage in unit testing in 10 days?” is sometimes a good question to ask.  It will start a conversation, probably a long one, that will get the team thinking outside their normal patterns.  In many cases this is all that is needed.

Using questions skillfully can also help develop and deepen relationships with people, which can be useful in bringing a team together.

Rescue Support – Technologies

As I set out to build Rescue Support I have specific technologies in mind to use. It should be clear that I reserve the right to grab whatever technology catches my fancy along the way. Getting started, the menu will include the following technologies:

  • Xamarin.Forms
  • HTML5/CSS3/JS
  • AngularJS
  • Breeze
  • ASP.NET Web API & MVC
  • Azure
  • Docker Containers
  • Microservices

Part of the issue developers always run into is that focusing on one set of technologies means missing out on others. I’m not really going to pay attention to Node.js, AWS, and a bazillion other ‘new’ developments. There are also a lot of technologies I’ve not even heard of yet that will probably sound really cool later.

I’m also not listing everything here. For instance, Bower and some of the ‘new’ tools that have appeared on the scene aren’t listed but I’ll probably talk about them at some point. There are also the frameworks that are built on top of these tools. For instance, I’m going to use HotTowel as the starting point for my client. I also make use of Unity, Enterprise Logging and several libraries on the server side to make my life easier. I plan to talk about all of them along the way.

The wonderful thing about being a developer is that no matter what time it is there is always something new to learn. The scary thing about being a developer is getting left behind, waking up one day and finding out that nobody cares that you can seriously rock some IUnknown with your mad STL skills.

Pet Rescue Support Project

I’m always anxious about falling behind on the latest technology. While work provides opportunities to work with new stuff, it’s not always what I’m interested in. To help me keep up with ‘new’ technology I’m going to make up my own project and use it as a framework to work on stuff that interests me.

We, my family and I, volunteer with a dog rescue. We will transport dogs, foster them, interview adopters, help at events (dog wrangling), and do fund raising. Through my involvement in rescues I’ve gotten a pretty clear picture of what running a dog rescue is like. Our primary rescue is a bit of a basket case.

Any money a rescue has goes to feeding the dogs and paying for medical care. Right now, I am fostering a puppy that was hit by a car. Her leg was broken and required surgery. We had to raise money to cover the costs and she is staying with us until she is fully healed and ready to be adopted (she’ll be here for about 8 weeks). Because there is no money there isn’t much in the way of automation. Anything free is usable, but what generally happens is lots of emails and Facebook messages get bounced around. This tends to make volunteers work a lot harder than necessary.

My solution is Pet Rescue Support. I plan to build a set of applications, both web based and mobile, that will help pet rescues operate more efficiently. Luckily Microsoft provides some ‘free’ Azure support and there are other avenues through which I can get stuff to help. The apps built will be provided to rescues free of charge.

What will Pet Rescue Support do? My initial plan is to provide the following features:

  • Surrender application
  • Adoption Application
  • Intake workflow
  • Inventory management
  • Volunteer Management & Communication
  • Integration with PetFinder

Everything listed above is generally handled manually through email and Excel workbooks.

How will rescues benefit from these services? First, email and manual processes will be reduced. Instead of having to dig through an inbox to find adoption applications, volunteers can see a list of applications. They will be able to search and sort the applications as well to make finding what they are looking for easier.

One issue we have is keeping track of all the approved adopters who have not adopted yet. We need a way to communicate with them: they need to know when and where we are having events, and whether we have any dogs that might match what they are looking for.

I’ve already started working on this project. My plan is to create the applications and write about how I created them and my reasoning behind the choices I make (besides ‘the technology looked cool and I wanted to use it’). I also plan to release the solution as open source as I go along.

Stay tuned.

The Ringer

I don’t even know where to start with this one.
I’ve been assisting a client with recruiting a team of developers to work on a project. For whatever reason they were having difficulty finding local talent, so we spread our nets out to see if we could find what they needed. Naturally this meant that the interviews had to be done remotely. The usual process is that the staffing agency does a screening (they suck at this, given the quality of what makes it through), then we do a technical screen (where we usually reject about 90% of the candidates), then we do a more in-depth interview with the candidate. If we think the person has the technical capabilities to do the work for the client, we pass the candidate on and the client does their interview. We did find one local candidate this way; that person had an in-person interview and was hired. The rest were offered positions without an in-person interview, accepted, and showed up for their first day on the job, which is where the wheels came off the bus.

One of the candidates we passed, who was then hired, turned out to have been using a ringer during the interviews. The person who showed up for the first day of work had none of the technical capabilities that had been demonstrated during the interviews (we write some code and do some design work: things that are difficult to fake). The team quickly caught on to this individual’s lack of skill, and the individual was quickly removed.

In the post mortem of this experience I’m looking at how I can further improve the process to prevent this from happening again. The immediate answer is that I’ll be adding a web cam to the interview, but I need to check with HR to see if there are any rules I need to navigate in doing so. For instance: can we require a photo ID with the resume submission from the staffing agencies?

On one hand I’m laughing my butt off about the situation because it is comical. On the other hand it is a serious problem for my client because we have to go back to the well to replace this person while ensuring this doesn’t happen again. Which, in my sick twisted mind, just makes it funnier.

So here’s the thing: if you plan to use a ringer for your interviews, you might want to make sure that your ringer does not so greatly exceed your own capabilities that, if you do get the job, you’ll be immediately identified as a fraud. Perhaps have the ringer provide you a recording of the interview and go study the questions that were asked and the information provided so you’ll be prepared. I mean, this person screwed up so massively that the team immediately removed them.

What isn’t funny about this is the impact it can have on clients doing remote work. Remote work requires trust between the parties involved, and when an individual like this shows up they can cause people to question using remote workers. To be clear, this situation was an onsite gig, but I can’t help but wonder how eager they will be to do remote work now.

What is a data lake?

Late to the party as usual. So what the blazes is a data lake?

Some quick research basically paints this picture for me:

  • Store ALL data in 1 place
    • Relational data
    • Flat files
    • images
  • Schema on READ

There are other bullet points but these were, to me, the important ones. The idea is to take all the data in its original form and just store it. Unlike a data warehouse, where the data would be transformed into the warehouse’s schema, in a lake you leave the data as is. The schema gets applied at the time the data is read.

All of this seems like a pretty cool idea. Data storage today is fast and cheap, so why not? I don’t have an answer and don’t see the damage it can cause as of yet. However, I can easily see data lakes turning into junk drawers if organizations don’t apply some governance over what goes in.

I also see issues in the details. How exactly does the platform apply “Schema on read”? What if I want to do a join between Northwind.dbo.Customers and a bunch of jpeg image files of the customers? Are we writing little utilities that do this “schema on read” or is the platform doing it?
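To make “schema on read” concrete, here is a minimal sketch of one of those little utilities (C#; the Customer shape and the CSV layout are made up for illustration). The raw line sits in the lake untouched; the types only appear at the moment the line is read:

```csharp
using System;
using System.Globalization;

// A typed view that exists only at read time; the stored data stays raw.
public class Customer
{
    public int Id;
    public string Name;
    public DateTime SignedUp;
}

public static class SchemaOnRead
{
    // The "lake" just keeps raw CSV lines; the schema is applied here, on read.
    public static Customer ReadCustomer(string rawLine)
    {
        string[] f = rawLine.Split(',');
        return new Customer
        {
            Id = int.Parse(f[0], CultureInfo.InvariantCulture),
            Name = f[1],
            SignedUp = DateTime.ParseExact(f[2], "yyyy-MM-dd", CultureInfo.InvariantCulture)
        };
    }
}
```

If the platform doesn’t do this for you, this kind of per-consumer reader is effectively what “schema on read” ends up meaning.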

I don’t really see data lakes replacing data warehouses. In fact I think they’re complementary ideas.

Shortcut – empty solution template

Windows Explorer lets you create new files by right clicking and then going into the New submenu.   You should already know this.

What you might not know is that you can add your own template files, an empty Visual Studio solution for example. To do this:

  1. Create an empty solution (in the New Project dialog go under “Other Project Types/Visual Studio Solutions”)
  2. Copy the sln file to c:\windows\shellnew and rename it template.sln (to make step #5 easier)
  3. Open regedit, go to HKEY_CLASSES_ROOT and find .sln
  4. Add a new key called ShellNew
  5. Add a string value to the key called FileName and set its value to “template.sln”

Now you can create new solutions by just right clicking wherever you need to setup a new solution.

You can do this for any file type you want. Just look in the registry and add the key. Windows just copies the file you specify from the shellnew folder to the new location. So you could create a C# class template with all your favorite stuff and then just make new copies whenever you need.
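The registry steps (3 to 5) can also be captured in a .reg file so the tweak is repeatable on a new machine; this sketch assumes the template.sln name from step #2:

```reg
Windows Registry Editor Version 5.00

; Tells Explorer's New menu to copy template.sln from C:\Windows\ShellNew
[HKEY_CLASSES_ROOT\.sln\ShellNew]
"FileName"="template.sln"
```

Double-clicking the .reg file merges the key, which is handy if you rebuild your workstation regularly.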

Better sleep

Another post that isn’t really technical but it does have its place. Your brain needs rest, and there is excellent science that says 8 hours of sleep is roughly what is needed. So if you must get 8 hours of sleep you might as well make the most of the 8 hours so you can get the most from the other 16. Here are a few items to consider:

  • Kill all lights. I mean every source of light needs to be blocked out: turn off lights, get blackout curtains, put all of the gadgets in another room. You want your sleeping spot to effectively be a deep dark cave.
  • The bed is for sleeping. Don’t watch TV in bed, don’t sit in bed working on the laptop. When it’s time for sleep get in bed, if you’re not sleeping get out of bed.
  • If it takes more than 15 minutes to fall asleep: get out of bed do something else and try again when you’re ready.
  • Chill the room. I keep my house at 70F. I don’t mind it being cooler at night when I’m sleeping (i.e. 68F), but the rest of the family complains and it’s also harder for me to get back out of bed, so 70 to 72F is where the temperature stays.
  • Take a hot shower before bed. I’m unclear on the details but basically the cooling effect when you get out of the shower is supposed to help you fall asleep.
  • I use nature sounds to help me sleep. Specifically, I have a playlist of thunderstorms that I play when I go to bed. The Bluetooth speakers I use have LEDs on several of the buttons; I have put electrical tape over them so no light escapes. My wife has commented that she sleeps better with the storms going too.
  • There are various nighttime teas that you can try if you have trouble falling asleep. They seem to work for my wife. I have no idea because I can make myself sleep inside 3 minutes.
  • Keep your wakeup time constant (I get up at 5am). If you want to sleep in: go to bed earlier.
  • Spend money on your bed. I have no idea what we were thinking but we bought one of those stupid Tempur-Pedic mattresses. It’s been worth the money. It is very comfortable.
  • I sleep with one foot outside the blankets. No idea why this helps but it does. I’m tall enough that I can reach the bottom of the bed and kind of work the sheets out of the way so my foot is outside where it is cooler (I like to sleep under a lot of blankets and dogs).

If you have any tricks that work for you: please share them.

Making business travel enjoyable

I hate business travel. In fact, I pretty much hate leaving my house except for a few specific reasons, none of which include anything to do with business. However, there are occasions where it is necessary and has to be done.

If it has to be done at least we can do it in a way that makes it enjoyable. Here are a few items I practice:

  1. Travel at odd times. The usual company policy is something like Monday to Thursday, meaning that you leave home Monday morning and return home Thursday sometime. I leave Sunday night as late as I can. While my particular flight is always full, it’s no big deal because everybody is pretty relaxed. The few times I was forced to fly on a Monday morning I was quickly reminded why I hate travel.
  2. Check your luggage. Every self-important, OMG! I gotta get there, suit has their wheelie bag and is in a mad dash to get on the plane so they can stuff it in the overhead. And on nearly every flight there is a bunch of the same self-important people who are told they’ll have to check their luggage because there is no room. Sorry, the stress, tension and lack of manners exhibited by people isn’t worth it (another reason I hate travel). Just check the luggage, even if you have to pay for it – the convenience is worth it.
  3. Related to #2: use a baggage delivery service! I use this thing called Bags VIP, which handles getting my bags from the airport to wherever I’m going. I don’t have to wait for the bag throwers to unload the plane; I just get in my car and head to the hotel.
  4. Use Uber or a taxi. I used to drive myself to the airport, but I’ve stopped. Initially I was a good boy and parked in the remote parking lot, but eventually I quit that and started parking at the terminal. The problem was that I’d end up having to ride the train between terminals to get back to my car, so I wasn’t saving time. Leaving my house I use Uber, and coming home I’ll just grab a taxi waiting by the curb. This generally costs about half of what parking at the terminal was costing and is actually a lot more convenient.
  5. Get to know the people at the hotel. I stay at the same hotel every time I head out. In this case I’m always in the same city so it really is the same hotel. They put me in the room I like and generally get me in my room faster than other people just because they see me regularly. The guy who runs the restaurant in the hotel knows me and always greets me and makes me feel at home. It’s a minor thing but it improves things.
  6. Don’t eat in your room. I used to be bad about this. I’d get back to the hotel, order room service and just vegetate in front of the TV until I went to sleep. I’ve now made a rule that I can’t eat in my room. I try to eat outside of the hotel at least once each visit and try not to always go to the same restaurant.
  7. Get a Pelican case! Most people use Pelican cases for photography gear or other delicate equipment, but they make freaking awesome suitcases and provide you tons of room to pack all your stuff while not violating the airline policies. They’re expensive, but they’re tough and will last forever.
  8. Use the rewards programs. Sign up for every one of them and make sure you get your points. I have at least a week of free hotel stays with my main hotel, I’m near the top of my airline’s preferred customer program, the rental car company already has me in the highest tier and I’ve got more than a week of free car rentals saved up. I don’t use my company’s card (who the blazes uses Diner’s Club today???), I use my own and have enough points for a first class ticket across the Atlantic (need 2 though….).
  9. Admirals Club or whatever. The Citi card I have includes access to AA’s Admirals Club. They’re nice, and late at night while waiting to get on the bus it’s a nice place to hang out. The Admiral will even provide free bourbon; it’s not great bourbon, but it’s free bourbon, which tastes pretty good to me.

 

That’s what I have so far. I’d love to hear other people’s secrets.

 

Azure Diagnostic Logging

Resources

Enabling Diagnostics in Azure Cloud Services and Virtual Machines

Get logging in Windows Azure with Enterprise Library

Microsoft Azure Diagnostics Part 1: Introduction

Bare metal logging – Trace with AzureDiagnosticsTraceFilter

Following the principle of “do the simplest possible thing that works”, the simplest approach to diagnostics is to instrument your application using .NET’s System.Diagnostics.Trace class. Just use Trace.WriteLine to log information as your application is executing, for example:

Trace.WriteLine("Trace information goes here", "Category goes here");

The category is optional but can help provide context or filtering information.

For an application running in Azure you can add the following to your web.config or app.config to cause your trace information to be written to an Azure table:

<system.diagnostics>
  <trace>
    <listeners>
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.7.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

This will cause all of the trace information to be written to an Azure Table called WADLogsTable. The storage account that gets used is configured as part of the diagnostic configuration.

This approach is the absolute easiest and fastest way to instrument your application for diagnostics. It’s no frills but done properly it is extremely effective. With the addition of the Azure trace listener you have the convenience of being able to review the logging information in a table. It would be really easy to write a tool to make reading the logs easier.

Enterprise Library – Custom Trace Listener for Azure Table Storage

While just using the diagnostic Trace object will get the job done, most applications benefit from something a little more sophisticated. This is where libraries like log4net, NLog and the Logging Application Block (referred to as LAB from here on) come into the picture.

My practice is to use the Enterprise Library in my projects, so logging is performed through LAB. However, at this time LAB does not have a built-in way to write messages to Azure Storage. There are some tricks involving configuring Azure Diagnostics local resources that are supposed to copy the text files into Azure Blob storage; however, I’ve not been able to make them work. Rather than continuing to hack my way through the mystery, it was actually easier to just write my own TraceListener that writes what I want to Azure Table storage in the way I want it written.

To create the trace listener you first have to get LAB from NuGet. Create a class library project and add references to the LAB assemblies (see the sample). You’ll also need to get the Windows Azure Storage package from NuGet and add references to it.

Inside your class library project create a class that inherits from CustomTraceListener and override TraceData, Write, and WriteLine.

You’ll also need to create an entity that inherits from Azure Storage’s TableEntity. The benefit of this is that you can shape your log table to look any way you want. The TableEntry object I created has a subset of the properties from LAB’s LogEntry object. In the constructor I set the PartitionKey and RowKey using a date-time string.
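For illustration, the date-based keys might be built along these lines (a sketch; the DiagLogEntry name and the exact format strings are my choices, not from the sample code). In the real entity these would be the PartitionKey and RowKey properties inherited from TableEntity:

```csharp
using System;
using System.Globalization;

// Sketch of a table entity's key scheme: partition by day, row by time-of-day.
public class DiagLogEntry
{
    public string PartitionKey;
    public string RowKey;

    public DiagLogEntry(DateTime timestamp)
    {
        // One partition per day makes it easy to pull a single day's log.
        PartitionKey = timestamp.ToString("yyyyMMdd", CultureInfo.InvariantCulture);
        // Time-of-day down to ticks gives a sortable, near-unique row key within the day.
        RowKey = timestamp.ToString("HHmmssfffffff", CultureInfo.InvariantCulture);
    }
}
```

Because entities in a partition sort by RowKey, a day’s log comes back in time order for free.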

For the trace listener to work it will need to be configured with information about the storage account, account key and the name of the table to write entries to. This is done as part of the TraceListener’s configuration. You can either use the EntLib configuration utility or just hand write the configuration yourself. The resulting configuration information looks like this:

<listeners>
  <add ListenerDataType="Microsoft….CustomTraceListenerData, Microsoft….Logging"
       AccountName="account goes here" AccountKey="key goes here"
       TableName="DiagLog"
       type="GuerillaProgrammer….AzureTableTraceListener, GuerillaProgrammer.AzureTableListener"
       name="AzureTableTraceListener" />
</listeners>

Once you have your TraceListener put together and added to your project, you can then write a tool to view the log in whatever way you want. Because the partition key cuts off by the day, you can easily filter the log to view just a day at a time if you wish.

Inside your application you simply use LAB as usual. In fact, if you have existing applications that use LAB, you can just throw in the new trace listener and start benefiting from it.

The sample code can be downloaded here. The packages will have to be restored from NuGet before it can be built.

Where to next?

What I’ve outlined here is very simple logging. I did not address strategies or anything else; just plain old “I need to know what my application is doing” logging. Nothing else. At this time I’m looking at the Semantic Logging Application Block to see what it offers. I’m also exploring ways to introduce Aspect Oriented Programming (AOP) into existing applications in order to add logging aspects without having to make major refactoring changes to them. New applications may be worth starting off with AOP in place from the very beginning, but I need to look at how well Unity supports AOP.

Configuration Notes

Overview

The intent of this document is to lay out some very simple practices and some basic knowledge to make application configuration as straightforward as possible.  The outcome is that you will be able to build your application for different environments without having to edit the web.config file for each build.

Only the most trivial applications avoid the need for configuration files.  Almost every application I have dealt with has needed some type of configuration.  In the normal course of development I need to be able to test my application while developing it, and the testers need a different database to do their job.  To do this I use a configuration file.  This is the simplest case, but the need for configuration gets more complex as you add environments, features, and the need to integrate with other systems.

Environments

The typical custom application development project will have five environments that will be used to control the release of the finished work product.  The names of the environments can change depending upon the particular customer but the following are common environment names.

Self-hosted

This is your workstation or laptop where all of your work is done.  The major difference from all of the other environments is that this should be the only environment that has Visual Studio installed.  This is also the environment where the application should run immediately as soon as a new person gets the source code from whatever repository the project uses (TFS, Subversion, etc.) and presses F5 to run the application in the debugger.

Development

This is an environment where everybody’s code comes together and is built and deployed so it can be tested by the developers in an environment similar to what will be seen in production.  While Visual Studio shouldn’t be installed here it often ends up here so debugging can be performed.

Test

This is the QA environment where the product is built and deployed so that QA testers can perform complete functional testing in an environment that duplicates production.  Visual Studio should absolutely not be installed anywhere in this environment.  Developers do not have access to this environment, deployment is done either via an automated script, a QA administrator, or a build master.

Staging

This is essentially a production environment.  The product is deployed here for final acceptance and testing before the product goes live.  Visual Studio is not here and developers are not permitted access to this environment*.  The deployment to this environment is done through automated scripts or performed by system administrators.

Production

This is the live environment that end users access.    The deployment to this environment is  through automated scripts or performed by system administrators.   It should be obvious that developers are absolutely not permitted in this environment.*

*In reality developers end up with access to these environments but I personally think they are foolish accepting this access.  It only transfers risk from the administrators who are responsible for operating the environment to the developer.

Build Configurations

Inside your Visual Studio project you will need to create a build configuration for each of the above environments.  By default when a new solution is created Visual Studio automatically creates Debug and Release build configurations.  Luckily you can rename them and add additional configurations as needed.

When you create a new build configuration you can add a configuration transform to any web project in the solution.  Visual Studio will do this through the project’s context menu.  For instance an ASP.NET MVC project by default will have a web.config file along with a web.debug.config and web.release.config.  If you create a new build configuration called “Staging” you can add web.Staging.config to the project by right clicking on the project in solution explorer and selecting “Add Config Transforms”.  Visual Studio will add a skeleton transform file to your project for you.

(Screenshot: the “Add Config Transforms” context menu.)
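For reference, the skeleton transform file Visual Studio generates looks roughly like this (the exact comments vary by Visual Studio version):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- Elements added here, marked with xdt:Transform attributes, are applied
       to web.config when building the Staging configuration. -->
</configuration>
```

An empty skeleton means the Staging build produces a web.config identical to the base file until you add transforms.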

Recommendations:

The self-hosted environment should be in the base configuration file (web.config) and should have the correct settings for a new developer to get going without needing to make any changes.

Everybody on the team should have the same settings and the web.config file should not need to be changed unless the change affects everybody on the team.  Nobody should need to remember to exclude web.config from the change set because they have a different database connection string or something else equally silly.

All of the other environments should have their own configuration transform and should be setup so that developers do not need to make any changes when they perform a build for that environment.

Configuration Transform

The configuration transform allows you to change the configuration depending upon the build being performed.  Unfortunately, without hacking, Visual Studio and MSBuild only perform transforms for web.config.  There are solutions to transform app.config, but they are outside the scope of this article.

The configuration transform capability in Visual Studio offers a wide range of capabilities.  Below are the capabilities that are most commonly used.  For more information on Configuration Transforms refer to this article: Web.config Transformation Syntax for Web Project Deployment Using Visual Studio

For information on how to do transformations for other configuration files refer to this Code Project article: Transform app.config and web.config

Replace

Replaces the selected element with the element in the transform file.  For example, we can turn off debug output for the compilation tag in our web.config using the following transform

<system.web>
  <compilation debug="false" targetFramework="4.5" xdt:Transform="Replace" />
</system.web>

Notice that there is no locator specified.  In this case it is not needed.  However, in appSettings or other sections where you need to target a specific setting in a dictionary, you’ll need to specify a locator.  In this case I’ll use the Condition locator, which uses an XPath expression.  Here is an example for appSettings:

<appSettings>
  <add key="ErrorMode" value="" xdt:Transform="Replace" xdt:Locator="Condition(@key='ErrorMode')" />
</appSettings>

Generally the Condition locator is the only one you’ll need, however there are other locators you can use (see the MSDN documentation).

Insert

Insert jams in the element from the transform.  This is useful for turning on authentication in production for instance.  A simple example:

<system.web>
  <authorization>
    <deny users="?" xdt:Transform="Insert" />
  </authorization>
</system.web>

Visual Studio Support

Visual Studio will help you test out your configuration transform without having to build your solution for each build type.  To see the results of your transform:

  1. Right click on the transform file in the Solution Explorer
  2. Select “Preview Transform”
  3. The editor window will be split in half. On the left is the original web.config and on the right is the configuration file after the transforms have been applied.

(Screenshots: the “Preview Transform” context menu and the preview window.)

Custom Configuration Sections

The usual practice in most projects is to jam every possible configuration need into the appSettings section.  It works, but as the application’s complexity grows and its configuration needs grow, the appSettings section becomes difficult to manage.  At some point along this path the use of custom configuration sections will come up.

If you search the internet for information about custom configuration sections you’ll find information about how to write a custom configuration section class so you can make your own.  However, writing code for a custom configuration section is usually unnecessary.  The appSettings section behaves just like NameValueSectionHandler, which you can use for your own sections.

As an example, our application needed to send email to users.  We needed to store the URL for our mail server, the user name and password to access the mail server, which port to use, and a few other settings.  Instead of just putting this in appSettings we set up our own section called Messaging and then added Messaging to our configuration.

To set this up we added a new section called Messaging to the configSections of our configuration file:

<configSections>

<section name="Messaging" type="System.Configuration.NameValueSectionHandler" />

</configSections>

Then we added the Messaging section to the configuration file.
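The section itself looks just like an appSettings block.  As a sketch (SmtpUser comes from the code below; the other keys and values are illustrative assumptions based on the settings described above):

```xml
<!-- Sketch of the custom Messaging section; key names other than SmtpUser are assumed -->
<Messaging>
  <add key="SmtpServer" value="smtp.example.com" />
  <add key="SmtpPort" value="587" />
  <add key="SmtpUser" value="mailuser" />
  <add key="SmtpPassword" value="secret" />
</Messaging>
```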

It looks just like App.Settings, but keeps our settings separated and neatly organized.  To access our settings we just have to use one extra line of code as follows:

NameValueCollection settings = ConfigurationManager.GetSection("Messaging") as NameValueCollection;

if(null == settings) {

   throw new ConfigurationErrorsException("Missing Messaging configuration");

}

string smtpUser = settings["SmtpUser"];

The code above already checks that the section exists and throws if it is missing.  It is not difficult to wrap this in a class that encapsulates the lookup so you are not writing the same lines of code repeatedly.
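As a sketch of such a wrapper (the class name and the typed accessor are illustrative, assuming the Messaging section described above):

```csharp
using System.Collections.Specialized;
using System.Configuration;

// Hypothetical wrapper: loads the Messaging section once and
// exposes the settings through named accessors.
public static class MessagingConfig
{
    private static readonly NameValueCollection Settings =
        ConfigurationManager.GetSection("Messaging") as NameValueCollection
        ?? throw new ConfigurationErrorsException("Missing Messaging configuration");

    public static string SmtpUser => Settings["SmtpUser"];
}
```

Callers then read MessagingConfig.SmtpUser and never repeat the GetSection/null-check boilerplate.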

External Configuration Sections

In some cases the volume of configuration information makes it sensible to divide the configuration file into multiple files.  Unfortunately, the configuration transforms done by Visual Studio are not applied to external configuration files, but you can store each environment’s configuration information in a separate file instead.  Keep in mind that in a web application all of the configuration files have to be marked as content so they are deployed, which means every environment’s files end up in every deployment.  That is not a good idea, so you’ll need to add some custom build steps to remove the unneeded configuration files from the deployment package.

In our web application we’ll add an App_Config folder to store our external configuration files.  We’ll then add whatever configuration files we want, such as connectionStrings.config.

In our web.config file we will setup our connection strings as follows:

<connectionStrings configSource="App_Config\connectionStrings.config" />
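The external file then contains the complete section as its root element, with no surrounding <configuration> wrapper.  A sketch (the connection string name and value are assumptions):

```xml
<!-- App_Config\connectionStrings.config: the section element is the root -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=dbserver;Database=AppDb;Integrated Security=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```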

The cool part of this approach is that the code written to access the configuration information does not change.  The application is not aware that the configuration information it is using is actually stored outside of the .NET configuration file.

Encrypting Configuration Sections

There are occasions where sensitive information needs to be stored in configuration files.  User names and passwords are a common example.  Having plaintext usernames and passwords lying around a file system is generally not considered secure.  Further, as a developer I really do not want to know the usernames and passwords for the production environment.  Happily, there is a solution to this problem that does not require your application to encrypt and decrypt configuration information itself.

In this case we are going to use the Pkcs12ProtectedConfigurationProvider, which Microsoft released as sample code.  The provider uses an X509 certificate for encryption and decryption, which means you’ll need to install the certificate on the build server (or workstation) and in the production environment where the application will run.

The code for Pkcs12ProtectedConfigurationProvider can be found here: Pkcs12ProtectedConfigurationProvider 1.0.1

Add the provider to your solution.  In your web.config file (not in the transform) add the following:

<configProtectedData>

<providers>

<add name="Pkcs12Provider" thumbprint="XXXX" type="Pkcs12ProtectedConfigurationProvider.Pkcs12ProtectedConfigurationProvider, PKCS12ProtectedConfigurationProvider" />

</providers>

</configProtectedData>

The thumbprint value comes from the certificate.  To get the thumbprint, use the Certificates snap-in in the Microsoft Management Console.

Now that you can read encrypted configuration information, you need encrypted configuration information to read.  The process for creating the encrypted sections of the configuration file is as follows:

  1. Create a plaintext version of the configuration file. This can be done either by doing a build and taking the resulting web.config, or by using Visual Studio to produce the configuration file. This web.config file will contain the production (or staging) user names and passwords that we want to secure.
  2. Use the ASPNET_REGIIS utility to encrypt the secure sections of the configuration file:
     1. Start either PowerShell or the command prompt
     2. Copy the Pkcs12 assembly to the .NET Framework folder (c:\windows\microsoft.net\framework\v4.0.30319\). Also add that path to your path (PATH %PATH%;….)
     3. Create a temp directory with one sub-directory for each config file (c:\temp\prod, c:\temp\staging)
     4. cd into one of the config directories (c:\temp\prod)
     5. Enter aspnet_regiis -pef "connectionStrings" "." -prov "Pkcs12Provider"
     6. Repeat for any other sections that need to be encrypted

After whoever holds the production credentials has encrypted everything that needs to be encrypted, they return the configuration files to development.  You extract the encrypted sections and put them into the corresponding configuration transform using the Replace transform.
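In the transform this looks roughly like the following sketch (the ciphertext is abbreviated, and the exact element details depend on the provider’s output):

```xml
<!-- Web.Release.config: replace the whole section with the encrypted version -->
<connectionStrings configProtectionProvider="Pkcs12Provider" xdt:Transform="Replace">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                 xmlns="http://www.w3.org/2001/04/xmlenc#">
    <CipherData>
      <CipherValue>...base64 ciphertext produced by aspnet_regiis...</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>
```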

We now have secured our configuration information and our application does not have to make any code changes.

When the production (or staging) version is built, the encrypted connection strings will be transformed into the web.config file.

One limitation of encrypting configuration information is that you may not encrypt app.settings.

Also keep in mind that the web.config file cannot be more than 250 KB in size.  For most applications this is not an issue.  However, if you find your web.config file growing too large you can use the external configuration file feature previously discussed to move parts of the configuration information out of web.config.

Conclusion

Creating and maintaining configuration information for applications can be made easier by using the previously discussed features of the .NET platform and Visual Studio.  Configuration transforms allow the web.config file to change depending upon the type of build being performed.  Using the same class that App.Settings uses allows you to easily create custom configuration sections without writing extra code.  You can move configuration sections into external files to make them easier to manage.  Sensitive information can be encrypted in your configuration files without needing to write additional code.

Making use of this knowledge will reduce errors and make deployment to production easier.