Better living through PowerShell – Archive old pictures

I’m lazy; I hate having to do the same thing twice.  It’s boring.  If there is something that needs to be done regularly, I want it automated.  That’s why I became a software developer.

Naturally: the cobbler’s kids have no shoes.  I’m kept so busy making things better for my clients that I forget to take time to make things better for myself.  Time to start changing that.

In this case, I have a fairly simple problem: I take lots of pictures on my cell phone.  My cell phone uploads the pictures to my OneDrive.  I use OneDrive on all my machines.  My Surface has a 256 GB SSD in it.  There are about 127 GB of pictures on my OneDrive.  See the issue?

My main workstation has a 4 TB external hard drive where I keep ‘stuff’, and it has a media folder where old pictures and videos are stored.

The requirements seem easy:

  • pictures in my Camera Roll folder on OneDrive that are more than 30 days old need to be moved to the Camera Roll folder in my media archive.
  • I also want video files (MP4) that are more than 5 days old to move as well.
  • I’d also like to keep a log so I can tell that this is happening.

The way I chose to implement this (the title should give it away) was to write a very simple PowerShell script and then schedule it with the Windows Task Scheduler.

Here is the script:

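What follows is a minimal sketch of what a script like this looks like, pieced together from the walkthrough below; the folder paths and log file name are placeholders rather than the exact ones from my machine.

    # Archive old pictures and videos from the OneDrive Camera Roll.
    # NOTE: the paths below are placeholders - point them at your own folders.
    $log = "D:\Media\MoveOneDrivePictures.log"
    $cameraRoll = "$env:OneDrive\Pictures\Camera Roll"
    $archive = "D:\Media\Camera Roll"

    # Count what is about to move: JPEGs more than 30 days old, MP4s more than 5 days old.
    $jpegCount = (Get-ChildItem "$cameraRoll\*.jpg" |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
        Measure-Object).Count
    $mp4Count = (Get-ChildItem "$cameraRoll\*.mp4" |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-5) } |
        Measure-Object).Count

    # Log the counts, with the date formatted from most significant part to least.
    $today = (Get-Date).ToString("yyyy-MM-dd HH:mm")
    Add-Content $log "$today - moving $jpegCount pictures and $mp4Count videos"

    # Move the old files to the archive.
    Get-ChildItem "$cameraRoll\*.jpg" |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
        Move-Item -Destination $archive
    Get-ChildItem "$cameraRoll\*.mp4" |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-5) } |
        Move-Item -Destination $archive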

While this is easy, I had to look all of this up, so I don’t feel bad explaining it in detail.  I’ll break it down step by step.

Before I get into it though, I want to point out that you can type each of the commands above into the PowerShell command line to see what they do.  You can even change things around to see what happens.  Or you can load the script into the PowerShell ISE and step through it with a debugger.

The first part of the script just counts how many files are going to be moved.

The very first line (after the comment) just gives me a shortcut to where I’m going to write my logging information.

Next, Get-ChildItem gets all the JPEGs in my camera roll.  Get-ChildItem is also aliased to the old DOS “DIR” command and the UNIX ‘ls’ command if you don’t want to type all of those letters.  The list of files is passed on to Where-Object using a pipe (the ‘|’ thing).

Where-Object filters out pictures that are less than 30 days old and pipes the rest on to Measure-Object.  Measure-Object counts and measures stuff; in this case we just want the count.  Notice that I have all the commands wrapped inside parentheses?  That lets us take just the count output from Measure-Object and assign it to our $jpegCount variable.

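In the sketch above, that counting pipeline looks like this:

    $jpegCount = (Get-ChildItem "$cameraRoll\*.jpg" |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
        Measure-Object).Count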

The exact same process is used to get a count of video files in my camera roll.  The only differences are that we are looking for MP4 files and we only want files more than 5 days old.

Next I make a custom date string by getting today’s date with Get-Date and passing it to ToString with a custom format string.  I like formatting dates from most significant part to least significant.  It makes more sense.  You don’t write two hundred and fifty-five dollars as “fifty-five dollars, two hundred”, do you?  Then why do we write dates as April 27, 2017?

With our nicely formatted date and file counts we now update the log file with the information about how many files are about to be moved.  Add-Content appends whatever you provide to the file you specify.

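In the sketch above, those two lines look like this:

    $today = (Get-Date).ToString("yyyy-MM-dd HH:mm")
    Add-Content $log "$today - moving $jpegCount pictures and $mp4Count videos"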

With the log file updated we’re now ready to move the files around.  Moving the pictures looks just like getting the count.  The differences are that we are not assigning the results to a variable, so we don’t need the parentheses, and we pass the filtered list of files to Move-Item instead of Measure-Object.

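Again from the sketch above, the move looks like this:

    Get-ChildItem "$cameraRoll\*.jpg" |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
        Move-Item -Destination $archive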

Moving the videos is the same process with the differences as explained before.

We now have our script that does the work.  The last step is to schedule the script to run once a day.  To do this, run Task Scheduler as administrator and create a new task.  Here is how I do it:

  1. Windows+Q to bring up Cortana and search for “Task Scheduler”

[screenshot: searching for Task Scheduler]

  2. Right-click on “Task Scheduler” and choose “Run as Administrator”
  3. In Task Scheduler click on “Create Basic Task”

[screenshot: the Create Basic Task action]

  4. Give it a name and click “Next”

[screenshot: Create Basic Task - name]

  5. Set the trigger to daily and click “Next”

[screenshot: Create Basic Task - trigger]

  6. Select a time for the job to run. I like to do things at 1am.  Click “Next”

[screenshot: Create Basic Task - start time]

  7. Select “Start a program” and click “Next”

[screenshot: Create Basic Task - action]

  8. Now we enter our actual program. In this case you can just type the entire command line in the main box.  The command you want to type is:

PowerShell -noninteractive -file C:\Users\your user name\Documents\WindowsPowerShell\MoveOneDrivePictures.ps1

With obvious adjustments.  When you’re done click “Next”

[screenshot: Create Basic Task - program and arguments]

  9. You’ll be presented with a confirmation message because Task Scheduler is going to split things up for you. Just say “Yes”

[screenshot: Create Basic Task - confirmation]

  10. You’ll be presented with a summary of what you want scheduled. If you’re happy just click “Finish” and you’re done.

[screenshot: Create Basic Task - summary]

You can test out your job by right-clicking on it and choosing “Run”.  Just go see if the log file was created.  If it was, your job ran.  Otherwise you have some debugging to do.

Overall this is not a horribly complex script or task, but it opens the door to other things.  We’ll get into those things soon.

Rescue Support: Problem Solving

Getting started learning a new piece of tech can occasionally be frustrating.  In my case I ran into what I suspect is a common problem with open source software.  Namely, support: what do you do when there isn’t any?

To set the stage, I’m learning AngularJS 1.  The reason I’m staying on 1 and not 2 might be a topic for another day.  I’m using Pluralsight to help me learn and a lot of the videos I’m watching are by John Papa.  He has a pretty cool framework called Angular HotTowel that is easy to add to a Visual Studio project.  Here is where our trouble starts.

The trouble started when I tried to add the Angular UI Grid to my project.  It blew up because HotTowel uses an old release of AngularJS that is not compatible with Angular UI Grid.  Inside Visual Studio’s package manager all the libraries that HotTowel uses had newer versions available, but when you update to them HotTowel’s navigation breaks.

This is where software development problem solving takes over.  Running in the debugger I can trace through the code and see what is happening but it’s not clear why it is not working.  Reading the Angular library code is very time consuming and I just want to get on with things.  Surely somebody else has experienced this problem already so off to the search engines.

Googling the problem “HotTowel navigation broken” reveals nothing.  Lots of variations and different combinations made no difference.  The next step is to ask.  The obvious place for me was the HotTowel project on GitHub.  This brings up another piece of software development problem solving: clearly explaining what the problem is and then providing the steps to reproduce it.

Explaining a problem and showing people how to reproduce it can be difficult and time consuming.  However, I’ve usually found the effort worthwhile, and in many cases it leads to the answer before you even ask somebody else.  To get started I needed to isolate the problem to determine if it was an Angular problem or a HotTowel problem.  This led to me reading a lot of material on how Angular does navigation.  It also put a spotlight on the problem for me.

I started off with this article by Viral Patel, “AngularJS Routing and Views Tutorial with Example”.  I used his sample code and had it up and working.  Viral is even so nice as to provide a plnkr.co environment to play around in.  His example is nearly identical to what HotTowel is doing, right down to the early version of AngularJS.  If you change line 41 so that it refers to version 1.6.2 of AngularJS, we reproduce our problem.   We’re making progress.

Something in Angular’s routing changed.  To try and understand what is happening I went off to Angular’s site (probably a better place to start in the first place, but it might not have revealed the problem so quickly).  In their tutorial (which is excellent by the way) there is a section on routing and views.  In the section on Configuring a Module we start to see what changed:

[screenshot: the Configuring a Module section of the Angular tutorial, introducing hashPrefix]

What is this hashPrefix nonsense?  Where did that come from, what does it do and is it causing my trouble?  Easy enough to find out.

Returning to the plnkr we just broke, we can switch to the app.js file, change it to be like the one in the tutorial, and see what happens.  We need to add ngRoute to our application, then add $locationProvider to the config function, and finally set up the hashPrefix.  Easy:

[screenshot: app.js updated with ngRoute, $locationProvider and the hashPrefix]
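The change is roughly the sketch below; the module, controller and template names are stand-ins rather than the tutorial’s exact code.

    // Pull in ngRoute and set the hash prefix explicitly.
    // Module, controller and template names here are hypothetical.
    var app = angular.module('orderApp', ['ngRoute']);

    app.config(['$routeProvider', '$locationProvider',
        function ($routeProvider, $locationProvider) {
            $locationProvider.hashPrefix('!');   // '!' is already the default in AngularJS 1.6

            $routeProvider
                .when('/addOrder', { templateUrl: 'add_order.html', controller: 'AddOrderController' })
                .when('/showOrders', { templateUrl: 'show_orders.html', controller: 'ShowOrdersController' })
                .otherwise({ redirectTo: '/addOrder' });
        }]);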

Unfortunately, that doesn’t appear to have fixed the problem.  Reading in more detail in the Angular tutorial, we notice a little green box with this nugget of wisdom:

[screenshot: the tutorial’s note about the ‘!’ hash prefix]

Our HTML links for adding an order or showing an order do not have bangs in them (‘bang’ is shorthand for the exclamation mark, ‘splat’ is the asterisk ‘*’; I forget the rest of the Unix cool guy slang).  Return to the plnkr and see if just adding a bang to the path fixes our problem.

In plnkr return to the index.html file and add bangs after the hash marks:

[screenshot: index.html links with bangs added after the hash marks]
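In other words, the links change roughly like this (route names are again hypothetical):

    <!-- Old style: no bang, no longer matches under AngularJS 1.6 -->
    <a href="#/addOrder">Add Order</a>
    <!-- New style: bang added to match the default '!' hash prefix -->
    <a href="#!/addOrder">Add Order</a>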

Unfortunately, things still don’t work.  But after playing around a little more I removed the $locationProvider from the config method and things started working.  So the config method ends up looking like this:

[screenshot: the config method without $locationProvider]
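Something like this, with the $locationProvider pieces gone and the ‘!’ default left alone (names still hypothetical):

    // Working config: just the routes. The '!' prefix is already the
    // default in AngularJS 1.6, so nothing else needs to change here.
    app.config(['$routeProvider', function ($routeProvider) {
        $routeProvider
            .when('/addOrder', { templateUrl: 'add_order.html', controller: 'AddOrderController' })
            .when('/showOrders', { templateUrl: 'show_orders.html', controller: 'ShowOrdersController' })
            .otherwise({ redirectTo: '/addOrder' });
    }]);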

At this point things work.  With that in mind I return to my HotTowel application and make a few changes.  HotTowel already includes ngRoute, so all I have to do is change how SideBar.html creates the links:

[screenshot: the updated links in SideBar.html]

Running the app and retesting shows that the simple addition of a bang (‘!’) to our links fixes the problem.  Talk about annoying.

For a person who has been working with Angular for a long time this might not seem like a big deal.  The reason I’m taking the time to walk through this process is to show how methodically breaking a problem down will help you solve it.  As you gain experience with a technology you’ll still encounter problems that in hindsight are trivial but at the moment bring you to a standstill.  The key is to isolate the problem in a reproduction.

Had my reproduction of the problem not led me straight to the solution I would have used the reproduction to ask the community at large what was going on.  Keep in mind that everybody is busy so the more succinct you make your question and reproduction the better your odds are for getting help.

I recommend asking for help as quickly as possible.  In cases like posting to Stack Overflow or the MSDN forums you might not get an answer for days, if at all.  So posting a question while you continue your own work is a good strategy.  In some cases you’ll arrive at the answer before anybody else answers the question for you.  In others, somebody will point you in the right direction.  As a final note on asking for help: don’t forget to ask your own team.  Whether it’s work or your social circle, maybe somebody there already has the answer or can help you figure it out.

Custom .NET Configuration Sections

In September of 2015 I wrote a more extensive blog post that covered this material.  However, it covered a lot of other material and buried what I’m discussing here.  If you want more information on how to make your configurations work with custom build configurations, read “Configuration Notes”.

You are writing a cool new application that needs some of its settings to be stored in a configuration file.  Happily .NET has always provided a nice configuration subsystem.  In general everybody knows how to use it.

What I’m going to point out here is that you don’t need to use appSettings for your configuration stuff.  In my experience appSettings is a horrible place to put your configuration information because it becomes crowded with stuff.

The alternative requires no code and, in fact, works just like appSettings, but it allows you to organize your settings to keep different components separated from each other.   In my current project I need to store the names of the Azure Table Storage tables being used by the service.  Here is what my configuration file looks like:

[screenshot: the app.config file with the custom RescueServices section]
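A sketch of what that file might look like; the table names are made up for illustration.

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <!-- Same handler that appSettings itself uses -->
        <section name="RescueServices"
                 type="System.Configuration.NameValueSectionHandler, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      </configSections>

      <RescueServices>
        <!-- Hypothetical table names -->
        <add key="AdoptionApplicationsTable" value="AdoptionApplications" />
        <add key="SurrenderApplicationsTable" value="SurrenderApplications" />
      </RescueServices>
    </configuration>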

The secret sauce is in the type attribute for our RescueServices section.  The type, NameValueSectionHandler, is the same type used for appSettings.  You can verify this by finding the machine.config file and checking out how the sections are defined.

In order to access your configuration information you just need to write the following lines of code:

[screenshot: the code that reads the RescueServices section]
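A sketch of that code, using the hypothetical key from the config sample above:

    // Add a reference to System.Configuration for ConfigurationManager.
    using System.Collections.Specialized;
    using System.Configuration;

    // Ask for the custom section and cast it to the same collection type
    // that appSettings hands back (a read-only NameValueCollection).
    var rescueSettings =
        (NameValueCollection)ConfigurationManager.GetSection("RescueServices");

    // "AdoptionApplicationsTable" is the hypothetical key from the sample above.
    string adoptionTable = rescueSettings["AdoptionApplicationsTable"];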

This is the only downside to not being in appSettings: ConfigurationManager doesn’t provide a shortcut.  Instead you first have to ask for your section and cast the result to the right type.  For us that is just a NameValueCollection (same as appSettings) and we’re good to go.  Once you have the collection you can go to town.

A truly lazy developer would create a base class that handles all of this so that all you must do is provide the name of the configuration section and it handles the rest.  That is left as an exercise for the reader.

Better Question

Sorry to say, but people do not seem to ask good questions.  In fact, I find that people don’t seem to ask questions at all.  I notice people making a lot of assumptions without ever reflecting on whether they got their assumptions right.

When people do ask questions, they seem to ask the wrong questions.  The usual question coming out of people is essentially “why?”.

  • Why did you write the component like that?
  • Why aren’t there any unit tests?
  • Why is your project behind schedule, over budget and full of bugs?

A “why” question elicits a belief answer.  When people do set up a “why” question that gets a non-belief-based answer, they have injected the question with their own beliefs to get that answer.  A better and easier way to ask questions is to use the other W’s instead: who, what, when, where, and whow (the W is silent).

  • How many unit tests are there?
  • What is this testing?
  • When are the tests being run?
  • Where are the tests run?
  • Who writes the tests?
  • How do you decide what gets tested?

Part of the reason I avoid belief-based questions is to avoid distractions.  Generally, if I’m involved there is a goal I am trying to achieve (say, getting a project back on schedule) and I don’t have time for philosophical debates about anything.  I save those discussions for after hours with yummy beverages and delicious food.

Stepping back and asking questions is an important skill.  Learning to ask useful questions is an even more important skill to develop.  For instance, “What would you need to get this million-line project to 80% code coverage in unit testing in 10 days?” is sometimes a good question to ask.  It will start a conversation, probably a long one, that will get the team thinking outside their normal patterns.  In many cases this is all that is needed.

Using questions skillfully can also help develop and deepen relationships with people.  This can be useful in helping bring a team together.

Rescue Support – Technologies

As I set out to build Rescue Support I have specific technologies in mind to use. It should be clear that I reserve the right to grab whatever technology catches my fancy along the way.   Getting started, the menu will include the following technologies:

  • Xamarin Forms
  • HTML5/CSS3/JS
  • AngularJS
  • Breeze
  • ASP.NET Web API & MVC
  • Azure
  • Docker Containers
  • Microservices

Part of the issue developers always run into is that focusing on one set of technologies means missing out on others. I’m not really going to pay attention to Node.js, AWS, and a bazillion other ‘new’ developments. There are also a lot of technologies I’ve not even heard of yet but that will probably sound really cool later.

I’m also not listing everything here. For instance, Bower and some of the ‘new’ tools that have appeared on the scene aren’t listed but I’ll probably talk about them at some point. There are also the frameworks that are built on top of these tools. For instance, I’m going to use HotTowel as the starting point for my client. I also make use of Unity, Enterprise Logging and several libraries on the server side to make my life easier. I plan to talk about all of them along the way.

The wonderful thing about being a developer is that no matter what time it is there is always something new to learn. The scary thing about being a developer is getting left behind, waking up one day and finding out that nobody cares that you can seriously rock some IUnknown with your mad STL skills.

Pet Rescue Support Project

I’m always anxious about falling behind on the latest technology. While work provides opportunities to work with new stuff, it’s not always what I’m interested in. To help me keep up with ‘new’ technology I’m going to make up my own project and use it as a framework to work on stuff that interests me.

We, my family and I, volunteer with a dog rescue. We transport dogs, foster them, interview adopters, help at events (dog wrangling), and do fundraising. Through my involvement in rescues I’ve gotten a pretty clear picture of what running a dog rescue is like. Our primary rescue is a bit of a basket case.

Any money a rescue has goes to feeding the dogs and paying for medical care. Right now I am fostering a puppy that was hit by a car. Her leg was broken and required surgery. We had to raise money to cover the costs and she is staying with us until she is fully healed and ready to be adopted (she’ll be here for about 8 weeks). Because there is no money, there isn’t much in the way of automation. Anything free is usable, but what generally happens is lots of emails and Facebook messages get bounced around. This tends to make volunteers work a lot harder than necessary.

My solution is Pet Rescue Support. I plan to build a set of applications, both web based and mobile, that will help pet rescues operate more efficiently. Luckily Microsoft provides some ‘free’ Azure support and there are other avenues through which I can get stuff to help. The apps built will be provided to rescues free of charge.

What will Pet Rescue Support do? My initial plan is to provide the following features:

  • Surrender application
  • Adoption Application
  • Intake workflow
  • Inventory management
  • Volunteer Management & Communication
  • Integration with PetFinder

Everything listed above is generally handled manually through email and Excel workbooks.

How will rescues benefit from these services? First, email and manual processes will be reduced. Instead of having to dig through an inbox to find adoption applications, volunteers can see a list of applications. They will be able to search and sort the applications as well to make finding what they are looking for easier.

One issue we have is keeping track of all the approved adopters who have not adopted yet. We need a way to communicate with them. They need to know when and where we are having events, and whether we have any dogs that might match what they are looking for.

I’ve already started working on this project. My plan is to create the applications and write about how I created them and the reasoning behind the choices I make (besides ‘the technology looked cool and I wanted to use it’). I also plan to release the solution as open source as I go along.

Stay tuned.

The Ringer

I don’t even know where to start with this one.

I’ve been assisting a client with recruiting a team of developers to work on a project. For whatever reason, they were having difficulty finding local talent, so we had spread our nets out to see if we could find what they needed. Naturally this meant that the interviews had to be done remotely. The usual process is that the staffing agency does a screening (they suck at this, given the quality of what makes it through), then we do a technical screen (where we usually reject about 90% of the candidates), then we do a more in-depth interview with the candidate. If we think the person has the technical capabilities to do the work for the client, we pass the candidate on and the client does their interview. We did find one local candidate this way; that person had an in-person interview and was hired. The rest were given offers without ever having an in-person interview, they accepted, and they showed up for their first day on the job, which is where the wheels came off the bus.

One of the candidates we passed along, and who was hired, turned out to have been using a ringer during the interviews. The person who showed up for the first day of work had none of the technical capabilities that had been demonstrated during the interviews (we write some code and do some design work: things that are difficult to fake). The team quickly caught on to this individual’s lack of skill and the person was removed.

In the post mortem of this experience I’m looking at how I can further improve the process to prevent this from happening again. The immediate answer is that I’ll be adding a webcam to the interview. But I need to check with HR to see if there are any rules I need to navigate in doing so. For instance: can we require a photo ID with the resume submission from the staffing agencies?

On one hand I’m laughing my butt off about the situation because it is comical. On the other hand it is a serious problem for my client, because we have to go back to the well to replace this person while ensuring this doesn’t happen again. Which, in my sick twisted mind, just makes it funnier.

So here’s the thing: if you plan to use a ringer for your interviews, you might want to make sure that your ringer does not so greatly exceed your own capabilities that, if you do get the job, you’ll be immediately identified as a fraud. Perhaps have the ringer provide you a recording of the interview and go study the questions that were asked and the information provided so you’ll be prepared. I mean, this person screwed up so massively that the team immediately removed the individual.

What isn’t funny about this is the impact it can have on clients doing remote work. Remote work requires trust between the parties involved, and when an individual like this shows up they can cause people to question using remote workers. To be clear, this situation was an onsite gig, but I can’t help but wonder how eager they will be to do remote work now.