Monday, January 14, 2019

10 Years In The Cloud: A Retrospective

I am celebrating 10 years of cloud computing work. This post looks back on a decade of cloud activity and where it has led.

2008-2009: Cloud Computing, the New Thing 

In late 2008, working at Microsoft Partner Neudesic, our CTO Tim Marshall and I were invited to a Microsoft feedback session in Redmond about "Project Red Dog". Red Dog, it turns out, was about this new thing called Cloud Computing. Amazon had been doing this for a few years, and Microsoft was going to also enter the market. This "cloud computing" was a new idea and a new way of doing things—but it sounded exciting. A few months later, "Windows Azure" was released. As Neudesic is a consulting company, we started learning it and looking for early prospects.

When Microsoft introduces a new product or service, a lot of work goes into evangelism and education and finding early adopters. As a Microsoft partner, we did a lot of joint work with Microsoft: visits to prospects, proof-of-concept projects, training sessions, code camps.

Tim had his own ideas about developing the market, and one of those was starting Azure user groups in the ten or so locations we had across the United States. I and other colleagues (including Mickey Williams and Chris Rolon) started sponsoring monthly meetings, sometimes held at Microsoft field locations. Since this was all new, meeting attendance could just as easily be 5 or 20 or 50 people, depending. But we kept at it, and we got the word out there, and interest started growing. At meetings we would cover new cloud services that had just become available, or show off things we had built, or discuss useful patterns for applications. It was fun, and there was pizza.

We learned things about the cloud: the infrastructure was really advanced, but the individual hardware components could fail: you had to plan for redundancy and recovery. The economics of the cloud were different: you had to consider lifetime of the data and resources you allocated, else you would "leave the faucet running". Almost everyone who was an early adopter had an Unexpectedly Large Cloud Bill story. Developers giggled with pleasure at the ease of self-deployment; but sometimes you'd hear a horror tale where someone lost important data all because they weren't careful enough when clicking in the management portal. We started reinforcing the importance of separating Production accounts from Development accounts.

2010-2014 : Azure Evangelism and Early Adopters

As Windows Azure was evangelized, prospects started to line up. I participated in a great deal of proof-of-concept project work, sometimes arranged by and paid for by Microsoft. One that stands out was going to Coca Cola headquarters in Atlanta to show how readily existing web sites could be migrated to Windows Azure. The first web site we migrated was in ASP.NET/SQL Server, which was a slam-dunk and just took a handful of days. The second site used Java Server Pages and Oracle—definitely not in my wheelhouse—but in two weeks' time we had migrated it as well.

I wrote The Windows Azure Handbook in 2010, which I believe was the first book out for Azure. The book contained Microsoft messaging from the time: Platform-as-a-Service (PaaS) is better than Infrastructure-as-a-Service (IaaS) and so on. Today Azure is equally well-suited for PaaS and IaaS and the message has changed. We've learned that there are those who value the cloud for innovative new ways of doing things (the PaaS people); but also those who value the ability to leverage existing skills and don't want their world rocked (the IaaS people).


I also released through Neudesic an Azure ROI calculator, long before there was a comprehensive one available from Microsoft. You can see from this screenshot how few cloud services there were in those early years. The number of cloud services available today is vast and ever-expanding.


There were real cloud projects happening too by this time. At first, there had been a lot of interest but prospects seemed hesitant to actually take the plunge. There was, for example, a great fear of vendor lock-in. Eventually, and with increasing rapidity, adoption started happening. The vast majority of these projects were web site & database migrations for established companies; but start-ups had a different mentality: they wanted to do everything in the cloud from Day 1.

As head of the Custom App Dev practice at Neudesic, I made sure we had Azure-trained consultants in every region. As new cloud services appeared, they drew interest from our other practices: SQL Azure database and (later on) Power BI interested the SQL / Business Intelligence practice, and Service Bus interested the Connected Systems practice.

Microsoft started a Windows Azure category of their Most Valuable Professional program, and I was honored to be a Microsoft MVP from 2010-2014. I met some great MVPs on my visits to Microsoft (and hired one, Michael Collier), along with the Windows Azure product team.

Although activity was intense, Windows Azure wasn't perfect. For three years in a row, Azure went down during the annual MVP summit, usually for reasons like someone having forgotten to renew a security certificate. We MVPs were initially amused, but in later years it meant customers were affected. AWS seemed to have a hiccup once or twice a year as well. We started educating customers about what dependency on a cloud platform meant for reliability, and about fallback plans for when a region or the entire platform was unavailable. Both platforms have improved in reliability since then.

In 2011 Microsoft asked me to teach Azure training sessions in Amsterdam and Germany. This was a fun trip—except for the blistering winter snowstorm—and I met some MVPs including Kris van der Mast and Christian Weyer. This helped me realize that cloud computing was a worldwide phenomenon, and also that different regions had different problems to address: in Europe, for example, there were laws about where clients' data had to be stored, and that didn't always align well with existing data centers.

My Azure class in Munich, Germany

As the years went by, Azure added more and more services and would occasionally drop support for a service (never popular). New data centers were continually added around the world.

Azure Storage Explorer

I created a free storage tool named Azure Storage Explorer and placed it on CodePlex, which turned out to be a hit. Over the next few years, Azure Storage Explorer had over 280,000 downloads! I would do a handful of updates a year to ASE, usually because Microsoft had added a new feature or because the Storage API had changed.


Eventually, there was one breaking API change too many and I stopped maintaining it--but made the source available on CodePlex. A second reason for not working on it was simply how busy I was on cloud projects.

A few years later, Microsoft finally came out with their own tool, with nearly the same name: Microsoft Azure Storage Explorer. You can also now manage storage through the Azure Portal. It's about time!

Recently I've had some thoughts about creating some new, updated cloud tools. See the end of this post for more.

2015-2019: The Maturing Cloud Becomes Essential

Cloud has exploded and is no longer something reserved for brazen early adopters or just a few specialists. At Neudesic, we consult widely on multiple cloud platforms: Microsoft Azure, Amazon Web Services, and now Google Cloud Platform.

New cloud services continue to arrive. There are services for Mobile and APIs and Non-Relational Databases and Distributed Memory Cache and Machine Learning. We now have Serverless Computing (AWS Lambda or Azure Functions), where you don't even have to allocate a server: just upload your function code and the platform takes it from there.

Names were changed. Windows Azure became Microsoft Azure, so the branding wouldn't be focused on one operating system. SQL Azure became SQL Database. Azure Web Sites became Azure App Services. Even Visual Studio Team Services / TFS Online was rebranded as Azure DevOps.

Software-as-a-Service (SaaS)

About 4 years ago I joined a product team to work on creating a Software-as-a-Service offering out of a legacy HR product named HRadvocate. It was a major amount of work to update the architecture and user interface, but eventually we had something deployed to Windows Azure with a reliable SaaS architecture that kept clients' data isolated from each other in separate databases.

SaaS Architecture on Azure

Authentication was initially through Azure Active Directory, with the idea that enterprises could use Microsoft's ADConnect to link their enterprise AD to AAD. It turned out that clients were demanding Active Directory Federation Services (ADFS) integration, so we added support for that. Later we added SAML support so products like PingFederate can be used to authenticate. Now our SaaS product could authenticate each client differently.

An Azure customer required a hybrid architecture, where Azure-hosted HRadvocate needed to integrate with multiple other systems--all of which were local to the enterprise. These systems connected to the former HR system via database integration, a structure that had to be maintained. To fit into this arrangement, I developed SQL Connector, a set of SQL Server functions written in C# that allow enterprise databases to query data in the cloud. This allowed the cloud data to be synced locally. Now, the local systems could continue to use their existing database integration, even though our SaaS was now part of the mix.

Amazon Web Services

I'd obviously been very focused on Microsoft Azure up until now, but that was about to change. Client requirements for HRadvocate led to a decision that we had to be able to run on Amazon Web Services as well as Azure. This led to several years of work on AWS and I am now proficient in it. Getting our solution to work on both Azure and AWS—while keeping a common source code base—was a lot of work but was also very educational. Azure's Cloud Service, SQL Database, Blob Storage, and Redis Cache mapped in a straightforward way to AWS's Elastic Beanstalk/EC2, RDS SQL Server, S3, and ElastiCache. About the only thing we couldn't transition was Azure Active Directory, but that's fine since we offer multiple ways of authenticating.

SaaS Architecture on AWS

We also targeted Amazon's Commercial Cloud Services (C2S). To support this we added to the product the ability to run air-gapped (without Internet); this required locating and replacing any code (including from open source libraries) that took the availability of the web for granted. Chart libraries like Google Charts had to be replaced with Highcharts, which could be local to the application. We added support for the FIPS 140-2 standard, using only encryption algorithms and code that had been certified to be compliant.

During this time, we continued supporting our product on Azure as well. Being able to run on two cloud platforms provided a lot of insight about what is the same and what is different between leading cloud platforms. There certainly seems to be a lot of copying going on between mainstream cloud platforms: when one provider comes out with a useful cloud service, it's not long before the competition has a very similar service. For example, Amazon has AWS Lambda for serverless-computing while Azure has Azure Functions. For those still worried about vendor lock-in, this keeping-up-with-the-Joneses activity should be comforting. The principles for building a good solution in the cloud transcend any one platform.

The Cloud in 2019

Ten years have gone by, and Cloud has certainly come into the mainstream. Just about all of us now use cloud computing every day, whether we realize it or not. Doing a web search? Streaming a movie? Using a social network? Making an online purchase? Cloud computing is an integral part of that.

Ten years ago, some big tech companies had cloud infrastructure but no one was providing cloud computing services to the public except Amazon. Now, there are clouds by Microsoft, Google, IBM, Oracle, SalesForce, SAP, VMWare, ...the list goes on and on. As for Microsoft, Azure is now also a leading cloud platform: it does PaaS and IaaS; half its VMs are reportedly running Linux; and there are a whopping 54 data centers worldwide. The growth has been phenomenal.

Cloud computing is no longer considered a speculative idea or a novelty for organizations: now, it's a common assumption that you'll be leveraging a cloud in anything new you develop. Ten years ago there was a lot of indecision about whether to go cloud or not; today, going to the cloud is a given, and the discussion is about which platform and which services to use.

Some of my Neudesic colleagues from the early days have gone on to work at Microsoft or Amazon.

Cloud platforms seem to have improved uptime from 10 years ago, but there are still those moments when something goes wrong and a substantial number of clients are affected. You can still be in for a long wait when a cloud platform is recovering from an issue and each customer account has to be restored.

It's been a really interesting decade of cloud work, and there is plenty more to come. The do-it-yourself nature of the cloud is inherently satisfying, as is being able to change your mind and alter your deployment at will. Services that handle the details and let you focus on your application are a joy to use. You still need to know what you're doing architecturally and keep the cloud's different economic model in mind, but things like auto-scale and recovery are increasingly included in new cloud services. New services like Machine Learning are opening up new vistas for developers, and there's never been a more fun time to experiment—for just pennies.


Sunday, January 6, 2019

Consultant Tips for Air Travel, Revisited

Back in 2012 I posted a series on How to be a Consultant, which included a segment on Air Travel. After several years of not having to travel at all, I resumed a grueling travel schedule in 2018. So, here's an updated review of what to expect in US air travel and some tips for making the best of it.

To set some context, my travel involved flying from Southern California to Dulles Airport in Washington DC and back, for a week at a time. I took approximately 46 air flights in 2018.

I knew when this started I was in for an adjustment: as bad as my memories of air travel were, several years had gone by and it was surely even worse now.

Tip #1: Choose Your Airline Carefully

All of my flights were on United Airlines, which brings me to Tip #1: choose your airline carefully.

While air travel is far less comfortable than it used to be across the board, that doesn't mean all of the airlines are exactly the same. For example, Southwest Airlines won't charge you for checked bags, even though nearly every other airline does. True, when one airline gets yet another nasty idea--like charging you for bags, or charging more for the better seats--most of the other airlines start doing the same thing. It's not 100% uniform, though, and some airlines have been known to back off from some of their more evil tactics when there is enough passenger backlash.

Now I know as I write this that you may have little to no choice in which airline you fly: for some routes, one carrier is simply dominant. That was the case for me: United Airlines was clearly the airline I would be using for my particular route. Still, if you're flying out of or into major airports you may find you do have a choice, and in those cases you should do some careful thinking about which airline to use. This is the age of Internet reviews, after all, so there is a great deal of online information to be found about airline experiences and rankings. You can even find reviews of particular seats on particular aircraft.

Tip #2: Leverage Airline Loyalty Programs

Now that you've determined the airline you'll be using, it's time to get the most out of them, which brings me to Tip #2: leverage airline loyalty programs. The constant reduction in service and new fees imposed by airlines is all about keeping their fares low so that you choose them when searching for a flight. Because of this environment, airlines tremendously value passenger loyalty, and they reward frequent flyers. Sign up for your airline's loyalty program, and be sure to specify your loyalty ID whenever you book a flight. You'll start accruing air miles, which will start paying off in benefits.

What kind of benefits can you expect? The specific benefits you get and what you have to do to qualify for them varies from one airline to another but can also be found online (here's United's MileagePlus).

In my earlier period of travel, I flew American Airlines, accrued air miles, and earned some status--but that was all ancient history now, and I was starting fresh with United.

Here's what I initially experienced, starting in early 2018 and having never flown United before. I was persona non grata:

  • When boarding, there were 5 boarding groups. I was almost always assigned Boarding Group 5, which meant I was one of the last few passengers on the plane.
  • Checking bags cost $30 for the first bag and $40 for the second

...and here's what things were like near the end of 2018, after I had been on 40+ United flights:

  • I had been awarded Premier Gold status
  • I was in Boarding Group 1 every time, first on board (well, first after some special groups like elite status passengers, military servicemen, and passengers with infants).
  • My checked bags were free
  • Special offers were extended to me when I made reservations
  • I was automatically added to upgrade lists in case there was a business class or first class seat available

That's quite a difference. The airlines may be mercilessly charging more and taking away comforts, but it feels amazing to get some special treatment and recognition for all that travel you're doing.

I should mention that another ingredient was using United's credit card, which accelerated my benefits. That's covered in the next tip.

Tip #3: Use the Airlines' Credit Card to Buy Your Tickets

Early on in my year of travel, I noticed that every United flight included an unwelcome push to sign up for their MileagePlus Explorer Credit Card. As much as I dislike aggressive sales to a captive audience, I had to admit the benefits sounded good given how frequently I was traveling. In the case of United's card, this included 50,000 air miles, Boarding Group 2, a free checked bag, and 2 day passes to the United Club lounge.


These benefits were real, and represented a way to "buy" my way into higher status simply by using the airline's credit card to reserve my flights. To be sure, these cards don't have a very good interest rate, but that didn't concern me in the least, since I expensed my flights promptly and always paid my bills in full.

After signing up for my card, I went from Boarding Group 5 to Boarding Group 2 on my very next flight and one of my checked bags was now free. I had jumpstarted my loyalty program!

After several months, the 50,000 miles were applied to my account--not to mention the miles I was getting for taking all those flights. I used these miles over the last year to buy quite a few flights for my daughter in Kentucky to come home to visit us in California in the summer and over the holidays.

As the loyalty program did its thing, successive flights moved me to Silver and then Gold status. Now 2 checked bags were free, and I was in Boarding Group 1.

The United Club passes were also great (see Tip #4: Utilize Airline Lounges).

Note that the specific benefits change often with airline credit cards, so if you're planning to use a specific card, check what's currently being offered.

Tip #4: Utilize Airline Lounges

In my earlier years as a traveling consultant, American Airlines had the Admiral's Club. I saw signs for this in the airport but didn't know what it was. One time, when I was traveling with an executive, I was brought into the lounge as a guest--and what an eye-opening experience! Really comfortable chairs. Work tables with outlets. Internet. Free drinks. Snacks. Newspapers. Magazines. Most of all, pleasant and safe surroundings. Quite the difference from sitting at the gate.

In the original version of these airport lounges, frequent flyers with the means would pay hundreds of dollars for an annual membership. In these leaner times, fewer passengers are able or willing to do that, so the lounges also offer day passes. For example, at United's United Club, I can buy a Day Pass for $59.


Tip #4 is to utilize airline lounges when warranted. You might not think you spend enough time in an airport to warrant the cost of an airline lounge, but there are times when you will: that delayed flight; that cancelled flight that strands you in the airport overnight; that bad weather that causes mass cancellations and disrupts the airline schedules across the board, packing the gates with too many people. These are the unpleasant times when you may spend quite a few extra hours in the airport. In times like that, I don't hesitate to buy a day pass for the nearest airline lounge.

I had received two United Club day passes with my United credit card. On one particularly bad trip where I had to spend a rough night in Denver International Airport, I used a club pass once the lounge opened at 5 am to get into a better place where I could clean up, enjoy the free refreshments, and nap until my mid-day flight in peace and safety. I used my second pass on what I knew would be my final flight, deliberately getting to the airport early to enjoy a few hours of comfort before boarding.

Tip #5: Use Direct Flights

For most of my travel history, I've had connections on my flights--but this year I came to change my mind and now insist on direct flights. In my case, a direct flight would mean driving all the way to Los Angeles International (unthinkable, could be 3 hours on California freeways) or San Diego (90 miles away). So, I would drive the 40-50 miles to a nearby airport and settle for a connecting flight, usually in Denver.

At first I didn't think connecting flights were all that bad. After all, it gave you a chance to use the restroom and perhaps buy a meal or a snack (I'm a diabetic, so keeping my energy up is important). But I learned through sad experience that connecting flights can be really, really problematic.

The first issue with connecting flights is how little time you may have between flights. Since the airline system is really busy, it's not unusual for flights to be delayed along with the ripple effect that can have on the system. If your connection has a really short layover time, such as less than an hour, you are really taking your chances. I found frequent delays in my connections at DIA, and sometimes missed the last flight of the day back home. That meant going to customer service, trying to find a different flight to a different airport, and arranging for my wife to meet me. That could also mean your checked bags are on their way to the original airport. A missed connection is simply a mess.

Missing a connection, as bad as it can be, is nothing compared to what a system-wide airline failure can mean for a connection. On one of my flights home, which had a connection in Denver, there were a lot of thunderstorms wreaking havoc over Colorado. About half an hour before we were supposed to land at DIA, our pilot announced what was happening with the weather and that DIA was temporarily no longer accepting planes. He announced that we would be diverting to Colorado Springs, where we would wait until DIA re-opened and then fly back there. Well, okay, what can you do. Clearly I wouldn't be making my connecting flight. A few minutes later, another announcement: Colorado Springs was now overwhelmed and no longer accepting planes either. Oh, and we were extremely low on fuel. The new plan was to land at a nearby airport named Grand Junction that none of us had ever heard of--possibly the smallest airport I have ever been to. An airport so small, there wasn't even a jetway to deplane passengers from a plane our size.


The captain did buy pizza for all 170 passengers while we were waiting at Grand Junction, which was a nice gesture. We did eventually get back to Denver hours later, where a line to United Customer Service literally stretched the entire length of Terminal B. There was no flight out. While I did receive a hotel voucher from the airline, I was also assured that there weren't any hotel rooms available. That meant spending the night in the airport. I tried to make a "bed" out of several chairs near a cafe. It was an extremely uncomfortable evening, trying to sleep this way in the brightly lit airport that remained noisy at all hours. Finally at 5 am the United Club opened and soon after some airport stores opened. I quickly purchased shaving items and a hairbrush, made myself more presentable, and headed to the lounge where the wait for my mid-day replacement flight was much more comfortable.

It took me 27 hours all told to get home. It is for the above reasons that I decided to avoid connecting flights, and I have ever since. My wife has graciously driven the 90 miles to San Diego International to drop me off and pick me up where I can get a direct flight to Dulles airport. Direct flights are a far less worrisome way to fly.

Tip #6: Buy Premium Economy Seating

If you're traveling on business, you're no doubt subject to an expense policy. More likely than not, you're required to fly in Economy class in the main cabin. Today, however, there are multiple levels of Economy: your airline may offer a "Premium Economy" that offers a far better experience. In United's case "Economy Plus" is the name given to a subset of the main cabin with seats that have extra legroom, are closer to the front of the plane, and include power outlets. If it's permitted by your expense policy, these are the seats you want to be sitting in.

The airlines keep finding ways to stuff more seats on planes. That means we have been steadily losing seat width and leg room for years. Airlines may wedge you in tighter and tighter from side-to-side, but at least the extra legroom in a Premium Economy seat allows you to fully extend your legs. It also makes it easier to get in and out of your seat, possibly without having to ask other passengers to get up.

Power in the seat means you can charge your phone or tablet. However, set your expectations accordingly. In a row of 3 seats, there will only be 2 power outlets, so there's no guarantee you will have access to one. Also, there's a big difference between Boeing and Airbus planes: on Boeing planes, the outlets are between the seats, somewhere underneath where you're sitting. Your chances of actually seeing the outlets are zero, so you are left trying to plug your power cord into an outlet you can't even see--awkward and frustrating at best. Airbus, on the other hand, puts their outlets between the seat backs in front of you at a height you can easily reach, which makes a world of difference. It's actually possible to use the outlets on an Airbus.

I was hoping in-seat power would also let me plug in a laptop, but I have yet to see 3-prong grounded outlets on planes. Still, you can work on a laptop off battery pretty effectively in a Premium Economy seat. In regular economy, you might not have enough room to fully open the laptop.

If Premium Economy seating isn't an option for you, consider an Exit Row which will also give you more legroom.

Under no circumstances should you consider a sub-Economy class such as "Basic Economy" which (depending on the airline) may not even allow carry-on bags.

Tip #7: Choose Your Seat Carefully

After so much travel, I became an expert seat-selector. My criteria:

  • The seat needed to be Economy Plus or Exit Row. I wanted/needed the extra legroom, and the in-seat power would be valuable.
  • The seat should ideally be near the front of the plane, so I wouldn't have to wait as long to get off. This was particularly crucial if the flight was the first leg of a connection. 
  • Never book the very first row of the main cabin, because there are no seatback pockets or storage in front of you.
  • Consider proximity to the restrooms. They might be near the front or near the back, depending on the model plane. 
  • Unless very familiar with the location, check online seat reviews to discover particularly good or bad seats. Although I never recline my airline seat, those who do would probably care if the seat is able to recline or not.

And then, there's the question of which seat to select:

  • Window seat: The thrill of looking out airplane windows died for me some 25 years ago, but the window seat is valuable for another reason: you only have one human being pressed up against you. The window side provides a place of refuge. On the other hand, it's more work to get out of your seat. If you're hesitant to bother the people between you and the aisle to get up and you have to go, you may have some uncomfortable waiting time ahead of you. If you're using armrest controls for viewing movies, be forewarned that the passenger in the middle seat may (will) inadvertently block your access and even accidentally change your volume or channel.
  • Middle seat: This is of course where you don't want to be, with people uncomfortably pressed up against you on both sides. Avoid at all costs.
  • Aisle seat: Like the window seat, you only have one person pressed up against you--but you have flight attendants and passengers on the way to/from the restrooms constantly bumping you. And, depending on model plane, you may have less than half the under-seat storage room that middle and window have. On the plus side, if you're using armrest controls for watching movies you probably won't have another passenger's elbow in the way. You also have unfettered access to get to the restroom.

So which is better, window or aisle? That's a tough call. When I started flying in 2018 I always opted for the window seat, but by the time my year of flying ended I had switched to the aisle seat.

Despite the above criteria, seat selection is sometimes very limited. I've been forced to book the dreaded middle seat on a handful of flights simply because nothing else was available. Even when this happens, don't lose hope. When checking in for your flight, take another look at the seat map and change your seat selection if something has opened up. This has even happened for me at the very last minute, when checking in at the airport kiosk.

Tip #8: Choose Your Plane Carefully

When you reserve your flight, the specific kind of plane is usually mentioned--and you should think about that just as carefully as you do the fare price or arrival time. Some planes are simply more uncomfortable than others.

I already mentioned the poor arrangement of in-seat power on Boeing planes under Tip #6. If using the power outlets is not important to you, then I don't find much reason to differentiate between Airbus and Boeing planes. A lot of the comfort factors, such as legroom and row spacing, are airline-dictated.

Figure out what is most important to you about plane model, and then check online flight reviews to determine what will make you most comfortable.

On Boeing planes, the model I would avoid whenever possible is the 757. Although I've never been able to find anything online to substantiate that seat width is narrower on 757s, every time I fly these planes the seats are noticeably narrow and uncomfortable.

Tip #9: Plan Your Entertainment

You're going to spend hours and hours on an airplane: what will you do to pass the time? One option is to watch in-flight movies or TV. Airlines offer quite a few choices for this now, which could involve a screen on the seat in front of you, or streaming to your phone or other device. Just what options are available will vary by airline and model plane.

On United, for instance, you can watch TV and movies free using your phone, tablet, or laptop. No charge is nice, but will it actually work? In actual experience, neither my phone nor my laptop browser would work, apparently because of app version, browser version, or missing plug-ins. Despite attempting to get things configured in advance of my flight, I never did get streaming to a device of mine working--but others on the flight clearly did.

What I did end up using quite a bit on United, when available, was DirectTV via the seatback screen, with armrest controls (however, see my warning about being able to access the armrest control under Tip #7). This included live DirectTV, but also 8 or so featured movies on dedicated channels. I found this really was the best way to pass 5-6 hours of flying; it helped the hours speed by.

When I didn't have personal entertainment, I would read or nap. Reading I would do on my phone via Amazon Kindle.

If you're going to use in-flight video entertainment, be sure to invest in some ear-buds.

Tip #10: Make a Conscientious Choice About Bags

I've heard it a million times from business travelers: don't check your bags, so you don't have to wait endlessly for them at Baggage Claim. This used to be a slam-dunk decision, but now it's not so clear-cut.

There simply isn't as much overhead space for carry-ons as there used to be on planes. Indeed, some of the lower levels of economy don't even allow you to use the overhead bins. As the airline industry has stuffed flights fuller and fuller, the war for bags and the ensuing rage have only gotten worse. It's typical while waiting for boarding to start to hear an appeal for volunteers to check their carry-ons because there won't be enough room for everyone's bags. If this does happen, you'll pick up your bag at Baggage Claim but there won't be a charge.

Personally, I decided this madness simply isn't worth it: I always check my bags, and my status means no baggage fees. Yes, it means waiting at the carousel, which is extra time. But you might well end up there anyway given how insufficient space is on planes nowadays. Now, I just relax in my seat while the other passengers and flight attendants war over space in the overhead bins. I do carry on a laptop, which fits under the seat in front of me.

If you are going to carry on your bags, then ensure you are in a low boarding group (see tips #1-3) and choose a bag size that will fit comfortably in the overhead space. Or better yet, something that will fit under your seat.

If you are checking a bag, make sure your bag looks unique. You'll often see signs at baggage claim warning you that many bags look alike, and it's true. After one flight, I waited for my black wardrobe bag but it never showed up. Instead, someone else's black wardrobe bag was left on the carousel. Clearly, someone had mistakenly made off with my bag. I brought the other bag and my bag claim check to the airline baggage office, and they were able to call the other party and get them to come back to the airport to exchange bags. But, you won't have to worry about this if your bag is unique-enough looking to stand out.

Tip #11: Buy Bundles to Simplify Expenses

As previously mentioned, airlines keep adding new kinds of fees. Fees for bags. Fees for seats. If you're purchasing tickets that you will be expensing, this can cause wrinkles. For me, it was a lot easier getting approval and reimbursement of my air travel costs if I had a single charge rather than multiple on my credit card. When making reservations, I took advantage of bundles.

A bundle is an offer you can take when booking your flight that combines those extra fees for a particular cabin class or bag check fees. Using a bundle not only simplifies your expensing, it also prevents your accounting department from over-scrutinizing multiple individual charges.

Tip #12: Get TSA Pre✓

TSA Pre✓ is like a fast lane for air travel: a faster, easier path through airport security.

What TSA Pre✓ does is let you go through a different security line that is a world of difference from the regular one. What do you usually encounter at the airport security line? Grim-faced security officers who order you around. Take off your belt. Take off your shoes. Remove your electronics. Reassemble yourself on the other side. It's a hassle and it's a pain. TSA Pre✓ is a parallel universe where the TSA actually likes you and treats you like a well-known friend. You're greeted with a friendly smile, and you don't need to remove anything.

TSA Pre✓ costs $85 a year at the time of this writing and is well worth it. If you're traveling internationally, go for the slightly more expensive Global Entry program. Signing up will require an online submission, then visiting an office at a nearby airport to get interviewed and fingerprinted. Then you'll wait for a background check, approval, and assignment of a Known Traveler Number. Once you have your KTN, make sure you specify it when making reservations. The TSA Pre✓ logo will appear on your boarding pass and you'll be able to go through the fast lane.

Tip #13: Stay Positive

All sorts of things can go wrong in air travel, and even under the best circumstances it's rarely comfortable. A huge factor in whether a flight is pleasant or unpleasant is the attitude of the passengers and crew. You can't control other people's attitudes, but you can maintain a positive attitude yourself--and that will influence others. Be the positive person on your flight who overlooks the shortcomings of others and helps make the experience better.

Sunday, December 9, 2018

Ghost : A Father-Daughter Project, Part 1

Over the last Thanksgiving break, our oldest daughter Susan came home from college for a few days. She is studying web design and marketing, but has found herself more and more becoming a front-end UX designer. She's been spending a lot of time working on HTML/CSS/JavaScript projects with great success.

While we had a few days together, we decided it would be fun to collaborate on a project that would be useful for her coursework: a program that learns. She suggested implementing the game of GHOST that our family has always played, and that's what we did.

Rules of GHOST

In case you're not familiar with it, GHOST is a game well-suited for car trips, sitting around the dinner table, or any other time you have 2 to N people sitting around with nothing to do. It's a spelling game, and if you make a valid word of 3 letters or more, you've lost the round. The first time you lose a round, you're a "G". The next time you lose a round, you're a "GH". Once you get all the way to "GHOST", you're out of the game. Here's a sample sequence or two to give you the idea:

Player 1: K
Player 2: N
Player 3: O
Player 1: C
Player 2: K
Player 3: "That's a word. You're a G."

Player 1: P
Player 2: A
Player 1: C
Player 2: I
Player 1: N
Player 2: A
Player 1: "I challenge you. What was your word?"
Player 2: "Ah, you got me. I didn't want to go out on PACING, so I bluffed. I'm a G-H-O."

Although GHOST is a simple game, there are nuances and strategy to it. A competent player is not as simple to implement as you might think.

Our Game

Our version of Ghost will be a two-player edition, human against computer.

Since Susan has done front-end development but not back-end or database work, I volunteered to put together a starting point program, and write the back-end code to her specifications. Since I spend much of my time with ASP.NET, I created a sample ASP.NET Model-View-Controller (MVC) project, and added web.config and code to connect to a SQL Server database.

Since this game is supposed to learn, it most definitely needs a database so it can grow its word list as it plays. This is just the simplest of databases: one table named Words containing one column, Word. We initially seeded the word list with a couple dozen entries, but the list has been growing through game play ever since. At the time of this writing, it has over 1400 words. We could of course license a dictionary, but that would defeat the learning purpose of this exercise.
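
For reference, here's a minimal sketch of that table's DDL (the column length and the primary key are assumptions on my part):

CREATE TABLE Words
(
    Word VARCHAR(64) NOT NULL PRIMARY KEY   -- the key conveniently blocks duplicate words
);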


Our first objective was to implement basic game play in order to arrive at a functional game, although not a very smart one. The basic algorithm for game play is this:

Basic Gameplay Flowchart (click to enlarge)

When it's the human's turn, he or she has 3 options:

1. Play: press a letter key to continue the word.
2. Challenge: click a Challenge button.
3. That's a Word: click a That's a Word button.


The Challenge and That's a Word buttons aren't visible unless at least 3 letters have been played.

Initially the human player goes first. In each new round, the game will flip who the starting player is.

Flow for Human Plays a Letter

When the human presses a letter key, the letter is added to the current word in play. Non-letter keys are ignored.

Next, a check is made to see whether the human player has just completed a word: if they have, they have lost the round. This is done with a call to the back end to look up the current word-in-play against the Ghost word list.

SELECT word FROM words WHERE word=@round

If the word is found in the word list, the game informs the player and a loss is scored for them. The player gets the next letter in G-H-O-S-T, and if T has been reached the game is over.


If the word was not found in the word list, the computer needs to take a turn. A query is made of the word list for winning words: words that begin with the current word-in-play and whose length has the right parity (odd or even) such that the computer will win if the word is played out. For example, let's say the current word in play is B A C. That means a winning word must begin with BAC and also be an odd number of characters. The search for winning words would include BACON but not BACK or BACCARAT. If one or more winning words are found, one is randomly selected and the next letter is played.

A good algorithm for selecting a winning word took some thought and experimentation. The example query below shows how a winning word is selected if the human player went first. The first WHERE clause ensures the word selected begins with the word-in-play. The second clause ensures the word selected is longer than the word-in-play. The third clause ensures the selected word, when played out, will result in the human player making a complete word and not the computer. The ORDER BY clause tells SQL Server to return matches in random order, so one of the winning words is picked at random.

SELECT TOP 1 Word from Words 
WHERE word LIKE @round + '%' 
AND LEN(word)>LEN(@round) 
AND ((LEN(word) % 2)=1) -- odd length: the human, not the computer, completes it
ORDER BY NEWID()

The above query is actually augmented further, because we don't want to target a winning word only to discover we accidentally made a losing word on the way; for example, in a round where the computer goes first, playing P to pursue the word ZIPPER would be a mistake because the computer itself would complete the losing word ZIP. To achieve this, more WHERE clauses are added to the query to ensure the computer does not select any word that could lead through a losing position.
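
As a sketch of how that guard might look (one possible approach; the game's actual query may differ), a NOT EXISTS clause can reject any candidate that passes through a shorter listed word the computer itself would complete:

-- Sketch only: skip winning-word candidates that pass through a losing prefix.
-- Parity here assumes the human went first (the computer completes even lengths).
SELECT TOP 1 w.Word FROM Words w
WHERE w.Word LIKE @round + '%'
AND LEN(w.Word) > LEN(@round)
AND ((LEN(w.Word) % 2)=1)
AND NOT EXISTS (
    SELECT 1 FROM Words p          -- no shorter listed word is a prefix...
    WHERE w.Word LIKE p.Word + '%'
    AND LEN(p.Word) >= 3
    AND LEN(p.Word) > LEN(@round)
    AND LEN(p.Word) < LEN(w.Word)
    AND ((LEN(p.Word) % 2)=0)      -- ...that the computer itself would complete
)
ORDER BY NEWID()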

If no winning words are found, then the computer must either challenge the user or make a bluff. We came up with this rule: if the word-in-play is 3 letters or more in length and there are no losing words in the word list, a challenge is made. The human player can then admit they were bluffing, or tell the computer their word. If the human player was bluffing, a loss is scored for them. If a word is provided, the game adds the word to its word list and scores a loss for itself.

If not challenging, then the computer must bluff. Initially a random letter was selected for bluffing in our game, but that was often too obvious a bluff in game play, with nonsense letter combinations. Susan came up with the idea of scanning the word list for the next letter to play, which results in more credible letter sequences. The bluff letter is played.
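
Here's a minimal sketch of one way such a scan could be written (my reconstruction, assuming the scan looks for the sequence in play anywhere inside a known word and bluffs with the letter that follows it):

-- Sketch: find any known word containing the sequence in play,
-- and bluff with the letter that immediately follows it.
SELECT TOP 1 SUBSTRING(Word, CHARINDEX(@round, Word) + LEN(@round), 1) AS BluffLetter
FROM Words
WHERE CHARINDEX(@round, Word) > 0
AND CHARINDEX(@round, Word) + LEN(@round) <= LEN(Word)  -- a following letter exists
ORDER BY NEWID()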

Flow for Human Clicks Challenge Button

The human player can click a Challenge button if they don't believe the computer is making a valid word.

The game looks for a word in its word list that begins with the current word-in-play. If found, the user is informed what the word is, and then someone loses the round. Usually this is the human player, except in the case where the computer's word has been fully played: in that case, the computer has made a complete word and loses the round.
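
The lookup itself can be a simple prefix match along these lines (a sketch):

SELECT TOP 1 Word FROM Words WHERE Word LIKE @round + '%'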


If the computer was bluffing (no matching word in the word list), it admits it was bluffing and takes a loss.


Flow for Human Clicks That's a Word Button

The human player can click a That's a Word button to indicate the computer has made a complete word. Since the Ghost game is designed to learn as it plays, it trusts the human player to be truthful (and a good speller), and adds the word to its word list database. Now it knows the new word for future play.
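
The learning step is then a one-line insert, sketched here with a guard against re-adding a known word:

IF NOT EXISTS (SELECT 1 FROM Words WHERE Word = @round)
    INSERT INTO Words (Word) VALUES (@round);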


Of course, trusting the human player comes with risks. That's the reason for the next section we'll discuss, Administration.

Administration

Our game has a page for viewing the word list. If you provide administrator credentials, this page also allows adding and removing words. This is important because our game trusts the human player when learning new words. If there's a typographic error, or a disallowed word (like a proper name), or profanity, we want to be able to correct or remove it.


The back end of the administrative functions consists of simple DELETE and INSERT queries against the database.

Summary

Well, that's our game--so far. From a learning / intelligence perspective, Ghost can:

  • Learn new words
  • Distinguish between potential winning and losing words
  • Bluff convincingly

Susan is next going to re-do my placeholder front-end with her own UX design, which I'm sure will be highly creative. We'll cover that in a future Part 2.

I am greatly enjoying teaming up with my daughter on a project--something we haven't done since those middle school science project days that now seem so long ago.

Friday, December 7, 2018

Visualizing Workflow Activity with Sankey Diagrams

In this post, I'll demonstrate how something called a Sankey Diagram can be used with charting software to visually show workflow activity.

The Problem of Showing Workflow Activity Effectively

If you've ever worked with business workflows in software, you've likely struggled with the problem of communicating workflow activity to clients: it's important, but it's also difficult. This is especially true with complex workflows. While users are inherently familiar with their own workflow (or the portion of it relating to their position), graphically depicting activity can be daunting.

It's not difficult to understand why this is a difficult problem: just look at how workflows are organized and stored in computer systems. Although you might see workflows depicted with flowcharts or UML diagrams at times, their storage in digital systems tends to be a complex hierarchy of multiple levels, sometimes captured in the form of an XML dialect. Entities involved in the workflow have states and have to be gated through allowable future states depending on the rules of the workflow. Advancing from one state to another can happen in response to a user action; in response to a message from an external system; because a certain amount of time has passed; or can be automatic. On top of all that, some workflow paths may run in parallel.

Most often, you'll see bar/column/pie/donut charts used to show a breakdown of activity within one workflow stage. That's a fine way to show what's going on in a single stage, but it doesn't provide a view of activity across the workflow. That across-the-workflow view can be pretty important: are a significant portion of online orders being returned by customers? You wouldn't get any insight into such connections just looking at workflow activity one stage at a time.

Sankey Diagrams Illustrate Flow Well

It's in showing the flow of activity that Sankey diagrams become very helpful. Sankey diagrams are designed to show flow, and they do so with connecting lines or arrows whose width is proportional to the amount of activity.

You can see some simple and complex examples of Sankey Charts on the Google Charts gallery. While there, notice that Google's implementation lets you hover over a particular flow for more information, including count. But even without a visible count, you can tell relative activity by the thickness of the connection between source and destination. In the example below, we can see that the B-X connection has far less activity than the B-Y connection. If you imagine that the start and end states are stages or substages of your workflow, you begin to see the possibilities.

Simple Sankey Diagram

Here's a more complex example from the same Google gallery page that shows flow across a series of states. Even though there's a lot more going on this time, transitions from one state to another are clearly shown and the rate of activity is easy to gauge from the width of the connecting lines. This is what makes Sankey diagrams great for illustrating workflow.

Complex Sankey Diagram

Although the Google library is very good, I'm going to be using the Highcharts chart library for the remainder of this post, simply because that's what I use regularly. Google Charts requires Internet access and the license terms disallow including the library locally in your application; in contrast, Highcharts can be self-contained in the application but the library does need to be purchased. Both libraries render excellent results.

If you want to play around with the idea of Sankey diagrams without doing any coding, check out the SankeyMATIC page. It's a great way to prototype a Sankey diagram before deciding to invest development effort.

In looking at a Sankey diagram, you might get the idea that they must be complex to program, but this is not the case at all. Most chart libraries that support Sankey diagrams simply take arrays as data input, where each array element specifies the name of a source state, the name of a destination state, and a count. The chart library takes it from there, stitching things together for you. Both Google Charts and Highcharts operate this way. We'll see an example of that shortly.

Sample Scenario: An Order Workflow

To show an example, we'll imagine the order workflow for a company that accepts both online orders and phone orders.

  • Orders have to be prepaid unless credit is approved. 
  • Once an order is prepaid or credit-approved, associates assemble the order by pulling SKUs from a warehouse. The order may also be gift-wrapped. 
  • Once assembled, orders are placed on a shipping dock awaiting pick up by the shipping carrier.
  • Orders that were not prepaid are billed and followed-up by accounts receivable until the order is paid in full.

Our workflow is implemented as a series of stage-substage-state combinations. Specific actions connect one state to another. For example, submitting an online order that requests credit transitions state from Shopping | Online Order | Order Submitted to Order Confirmation | Credit Approval | Credit Review; whereas a prepaid order would transition to Order Confirmation | Payment Verification | Applying Payment.

Order Workflow Stages, Substages, and States

Now imagine this system is up and running and users want to see order activity. We could of course do the usual simple charts to show what is happening at each stage of the workflow. That might look something like this:


Single-Stage Views

This is certainly useful information; it lets us look into the breakdown of shopping activity. But it tells us nothing about what's happening across the workflow. So, we're now going to create a Sankey Diagram in Highcharts to provide that added view.

Creating Sankey Diagrams for the Sample Scenario

We can first start out simply, by showing the transition from the first stage (Shopping) to the second stage (Order Confirmation). To do that, we'll take a standard JSFiddle for a Highcharts Sankey diagram and simply modify the data array as follows. We're merely supplying array entries, each with a source state, destination state, and count. Our array includes all of the Shopping stage start states and all of the Order Confirmation stage end states. We know the source and end states from our workflow, and we know the counts by querying our order database.
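
Reconstructed here as a sketch (the state names come from our workflow; the counts are illustrative placeholders, and the Highcharts sankey module is assumed to be loaded), the chart setup looks something like this:

// Sketch of the Highcharts Sankey setup; counts are placeholder values.
Highcharts.chart('container', {
    title: { text: 'Shopping to Order Confirmation' },
    series: [{
        type: 'sankey',
        keys: ['from', 'to', 'weight'],   // each row: source state, destination state, count
        data: [
            ['Online Order', 'Credit Review', 520],
            ['Online Order', 'Applying Payment', 230],
            ['Phone Order', 'Credit Review', 30],
            ['Phone Order', 'Applying Payment', 15]
        ],
        name: 'Orders'
    }]
});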

Below is the resulting Sankey diagram. Even though we're only focusing on the first two stages of the overall workflow, we can see the Sankey diagram yields a very rich view of what is going on. We can hover over the connecting lines for the exact count, but just at a glance we can see a great deal. We see that phone orders are minuscule compared to online orders. We see that most of the orders are in various states of credit approval processing. We are now getting a sense of how activity is flowing, which tells more of a story than just looking at one stage at a time.

Sankey diagram: Shopping Stage to Order Confirmation Stage

Now, let's add the remaining stages. Our data list now looks like this, with more array elements.
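
Sketched again with placeholder counts (the later-stage state names here are my stand-ins, not the exact ones from the product), the array simply gains rows for each additional transition:

// Placeholder counts; later-stage state names are illustrative stand-ins.
const orderFlows = [
    ['Online Order', 'Credit Review', 520],
    ['Online Order', 'Applying Payment', 230],
    ['Phone Order', 'Credit Review', 30],
    ['Phone Order', 'Applying Payment', 15],
    ['Credit Review', 'Assembling Order', 430],
    ['Applying Payment', 'Assembling Order', 220],
    ['Assembling Order', 'Shipping Dock', 610],
    ['Shipping Dock', 'Billing', 390]
];
// This array replaces the series data in the chart configuration above.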


And below, the Sankey chart showing activity flow across the entire workflow. Now we really are getting a sense for what's happening across the board (JSFiddle).


I'd like to point out a few useful things about the Highcharts implementation. First off, you can hover over any connection line to get a tooltip showing the end-state name and the count. With some coding, you could also arrange it so that clicking on a section drills down into a detail chart.

Highcharts Sankey Diagram - Detail on Hover

Another useful feature is the menu at top right, which permits the chart to be exported as a graphic.


Our sample scenario is a modest workflow: what if your workflow is much more complex, to the point where the diagram is really crammed? Well, you certainly need to keep the information understandable and you should strive to avoid overwhelming the user. Here are some strategies to consider:

  • Set an appropriate number of colors--too many may reduce understandability. Consider whether it makes sense to color-code stages.
  • When showing the full workflow, leave out the most detailed state level and provide that elsewhere.
  • Show multi-stage sequences in sections rather than the entire workflow in a single view.
  • Allow users to click on a stage to get an expanded view of the detail in that stage.
  • Group states of little interest together into a single node or leave them out altogether.

If I were doing this for a client rather than a blog post, I would put extra time into finishing touches: adjusting colors, adjusting font and text effects, and trimming information that isn't of interest to the audience. Even without doing so, I hope this introduction to Sankey diagrams provides some insight into how workflow activity can be shown to users in a meaningful way.

I only discovered Sankey diagrams recently, but their usefulness was immediately apparent. Not only are they useful, they're also very simple to create using leading chart libraries. If you're facing the challenge of visualizing workflow activity, I encourage you to try them out.


Wednesday, October 31, 2018

Setting Up Transparent Data Encryption

This post discusses the Transparent Data Encryption (TDE) feature in SQL Server and how to use it.

TDE: What It Is and Why It Exists

When it comes to database encryption, there are two areas to think about: encryption during transport and encryption at rest.

Encryption during transport means the communication between the database and your client (your application, or SQL Server Management Studio, for example) is encrypted. Many developers who use SQL Server are already familiar with specifying Encrypt=True in a connection string. It isn't necessary to create a certificate to use this feature, but in a Production environment you'd want to create a certificate and configure the client to only trust that certificate.
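
For instance, a connection string with transport encryption enabled might look like this (server name and credentials are placeholders):

Server=myserver.example.com;Database=App;User ID=appuser;Password=...;Encrypt=True;TrustServerCertificate=False;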

All well and good, but encryption during transport doesn't change the fact that the database data on disk is not encrypted. If you dumped the database file of your Contacts database, you would see visible names and contact information. If someone made off with that file, they'd have access to the data.

This is where encryption at rest comes in: keeping the database data encrypted on disk. That means data is encrypted when inserted or updated, and decrypted when queried. If you consider what would be involved in doing this yourself in your application code, it's pretty daunting: you'd need to be sure encryption and decryption was applied uniformly, and doing so without a performance impact would be a major feat; plus, external applications like report generators would no longer be able to do anything with the database.

Fortunately, the Transparent Data Encryption feature exists and it is extremely well done. Once you turn it on, it just works. Data in the data file is encrypted. Data you work with isn't. Conceptually, you can think of it like the diagram below (and if you want all the specific encryption details, see the Microsoft documentation link at the top of this post). And as we said earlier, the data can also be encrypted during transport with a connection string option.


In my experience TDE doesn't noticeably impact performance. If you're an authorized user who has specified valid credentials, nothing will seem at all different to you. But if you dumped the database files, you would no longer be able to see understandable data.

Although TDE is a very nice feature, it's only available in Enterprise Edition--so it comes at a price. There is one other edition where TDE is available, and that's Developer Edition. This means you can experiment with the feature--or demonstrate it to a client--without having to buy Enterprise Edition up front. Understand, however, that you cannot use Developer Edition in a Production environment.

Enabling TDE

The procedure to enable TDE is not difficult. These are the steps:

1. Install SQL Server Developer Edition or Enterprise Edition.
2. Run SQL Statements to create a key and certificate.
3. Run SQL Statements to enable TDE.
4. Back up the certificate and key file.

1. Install SQL Server Developer Edition or Enterprise Edition


You can download SQL Server Developer Edition from the MSDN web site. For Enterprise Edition, follow the instructions you receive through your purchasing channel to obtain the software.

Create or restore a database, and ensure the database is functional and that you can get to it from SQL Server Management Studio.

2. Run SQL Statements to Create a Key and Certificate


A master key and a certificate are needed for the encryption feature. To create them, run the statements below against the master database.

USE master
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'my-password';
GO

CREATE CERTIFICATE TDEServerCert WITH SUBJECT = 'My DEK Certificate';
GO
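
If you want to confirm the certificate was created, you can query sys.certificates; this is just a sanity check and isn't required:

SELECT name, subject, expiry_date FROM master.sys.certificates WHERE name = 'TDEServerCert';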

3. Run SQL Statements to Enable TDE


Next, connect to your application database (named App in the example) and run the statements below to enable TDE:

USE App
GO

CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128 ENCRYPTION BY SERVER CERTIFICATE TDEServerCert;
GO

ALTER DATABASE App SET ENCRYPTION ON;
GO
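
Encryption happens in the background, and for a large database it can take a while. If you want to watch progress, you can query the sys.dm_database_encryption_keys view; an encryption_state of 3 means encryption is complete:

SELECT DB_NAME(database_id) AS database_name, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys;
GO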

4. Back Up the Certificate and Key File


This next step makes a backup of the certificate and private key used for TDE. This step is vital: any backups you make from this point forward cannot be restored unless you have the certificate and key files.

BACKUP CERTIFICATE TDEServerCert TO FILE = 'c:\xfer\TDEServerCert.crt'
    WITH PRIVATE KEY
    (
        FILE = 'c:\xfer\TDEServerCert.pfx',
        ENCRYPTION BY PASSWORD = 'my-password'
    )
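
Later on, if you ever need to restore one of your database backups onto a different server, you'll first have to restore this certificate into that server's master database (creating a master key there first, as in step 2, if one doesn't exist). Using the file names and password from the backup above, the statement would look like this:

USE master
GO

CREATE CERTIFICATE TDEServerCert
    FROM FILE = 'c:\xfer\TDEServerCert.crt'
    WITH PRIVATE KEY
    (
        FILE = 'c:\xfer\TDEServerCert.pfx',
        DECRYPTION BY PASSWORD = 'my-password'
    );
GO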

Confirming TDE


After enabling TDE, you'll want to confirm your application still works like it always has.

To confirm to yourself that TDE is really active, or provide evidence to an auditor, you can use this query:

SELECT name, is_master_key_encrypted_by_server, is_encrypted FROM master.sys.databases

This will display a list of databases and whether or not they are encrypted.

name    is_master_key_encrypted_by_server   is_encrypted
master  1                                   0
tempdb  0                                   1
model   0                                   0
msdb    0                                   0
App     0                                   1

Notice that tempdb shows as encrypted too: once any database on an instance uses TDE, SQL Server encrypts tempdb as well. If you're still skeptical, you can also dump your database files: you'll no longer find readable data in them.


Thursday, August 23, 2018

Release Management and My Release Tool for Full and Differential Releases

In this post I'll discuss some of the common tasks I perform for release management, and a tool I created to help with it, release.exe. You can find release.exe's source code here on GitHub.

Release Management : Your Mileage May Vary

If you're responsible for software release management, source control is a given--but what else does release management entail? That depends: on what you hold important, on what constraints come with your target environment(s), and on what customer requirements you have to contend with. Release management might mean nothing more than deploying the latest code from source control to a public cloud; or, it might be a very complex multi-step process involving release packaging, electronic or media transfer to a customer, security scans, patching, approval(s), and network transfers by client IT departments--where some of the process is out of your hands. Whether simple or complex, good release management requires discipline and careful tracking. A well-thought-out procedure, supported with some tools, makes all the difference.

In the release management I regularly perform, common tasks are these:

1. Packaging up a full release to ship to a location, where it will be delivered to the client, go through multiple security processing steps, and eventually end up on-site, ready for deployment.
2. On-site deployment of an approved release to new or existing servers.

The most interesting recent development in all of this has been the ability to generate differential releases, in which only the files that have changed are delivered. This adds several more common tasks:

3. Packaging up a partial release (just what's changed) to ship to a location, and go through the same processing and approval steps.
4. On-site deployment of an approved partial release to new or existing servers.

Differential releases are massively valuable, especially when your full release might be tens of thousands of files (perhaps spanning multiple DVDs) while an update might change only a handful of files taking up a tenth of a DVD. However, getting differential releases to work smoothly and seamlessly requires careful attention to detail. Most importantly, you need a means to verify that what you end up with is a complete, intact release.

To help with release packaging and on-site release verification, I created the release.exe command for Windows. Let's take a look at what it can do.

Hashing: a way to verify that a file has the expected contents

My release.exe command borrows an idea from my Alpha Micro minicomputer days: file hashes and hashed directory files. Back then, our DIR command had a very useful /HASH switch which would give us a hash code for a file, such as 156-078-940-021. Changing even a single byte of a file would yield a dramatically different hash.

When we would ship releases to customers, we would include a directory file of every file with its hash code. On the receiving end, a client could use a verify command which would read the hashed directory file and compare it against the computed hash of each file on the local system--displaying any discrepancies found. This process worked beautifully, and I've always missed having it on Windows. Now I have a version of the same concept in a tool I can use on Windows.

The release command can generate a file hash via release hash:

Command form: release hash file

The hash is a partial MD5 hash. Why partial? Well, the entire hash is really long (20 segments), which is rather onerous if you need to send a hash code to someone or discuss it with someone else. So, I've shortened it to the first two and last two segments of the full MD5 hash. Since the hash will change dramatically if even one byte changes, this is perfectly adequate for our purposes.

Here's a sample output:

path> release hash readme.txt
05B-8E8-D57-E7C readme.txt

path> release hash release.exe
BB9-AFA-F22-32A release.exe

File hashes will form the basis for packaging up releases with a manifest of files and their hashes; and for verifying those manifests on the receiving side.
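
If you're curious how a partial hash like this can be computed, here's a minimal sketch in Python. It assumes the shortened form is simply the first six and last six hex characters of the MD5 digest, grouped into three-character segments--the actual grouping inside release.exe may differ:

import hashlib

def partial_hash(path):
    """Abbreviated MD5: first 6 and last 6 hex characters of the digest,
    grouped into 3-character segments, e.g. 05B-8E8-D57-E7C."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream, so large files are fine
            md5.update(chunk)
    digest = md5.hexdigest().upper()                    # 32 hex characters
    short = digest[:6] + digest[-6:]                    # keep first and last 6
    return "-".join(short[i:i + 3] for i in range(0, 12, 3))

print(partial_hash("readme.txt"))                       # e.g. 05B-8E8-D57-E7C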

Creating A Full Release Manifest

To generate a complete release, we first gather the files intended for the release in a folder named for the release. For example, if our application's latest changeset in source control was 3105, we might create a 3105_release folder. Into that folder we copy all of our release files, which will likely include many files and many subfolders.

With the release files copied, we can now use the release create command to create a release manifest:

Command form: release create release-name.txt

3105_release> release create 3105.txt
Creating manifest for c:\3105_release
F7C-2C3-AE1-4BC C:\3105_release\readme.txt
63A-EE0-17F-2D4 C:\3105_release\bin\appmain.dll
9AB-6F4-RE3-007 C:\3105_release\bin\security.dll
3B2-B16-5Ac-007 C:\3105_release\bin\service.dll
47C-08D-A42-FD5 C:\3105_release\bin\en-US\resources.dll
98D-1E1-399-A7A C:\3105_release\Content\css\site.css
652-8A0-52A-ED0 C:\3105_release\Views\Login\login.cshtml
179-488-E60-E22 C:\3105_release\Views\App\main.cshtml
77c-874-963-791 C:\3105_release\Views\App\add.cshtml
6E5-3B0-68C-349 C:\3105_release\Views\Admin\customize.cshtml
E02-C9C-A53-37C C:\3105_release\Views\Admin\settings.cshtml
F01-a37-eed-629 C:\3105_release\Views\Report\monthlysales.cshtml
...

The result of all this is simply to add one file to the release, 3105.txt in this case, which lists every file in the release along with its hash. We also add release.exe itself to the release folder. This will give us what we need on the receiving end to verify the release is correct.
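
To make the process concrete, here is roughly what release create does, sketched in Python using the partial_hash function from the earlier sketch (the real tool's output format may differ in its details):

import os

def create_manifest(release_dir, manifest_name):
    """Walk the release folder, writing one 'hash path' line per file."""
    manifest_path = os.path.join(release_dir, manifest_name)
    with open(manifest_path, "w") as manifest:
        for root, _dirs, files in os.walk(release_dir):
            for name in files:
                path = os.path.join(root, name)
                if os.path.abspath(path) == os.path.abspath(manifest_path):
                    continue                            # don't list the manifest itself
                manifest.write(f"{partial_hash(path)} {path}\n")

create_manifest(r"c:\3105_release", "3105.txt")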

Verifying a Release

Once your release has gone through all of the permutations that get it where it needs to go, and you have deployed it, you'll want to verify that it is complete and intact. Because the release shipped with release.exe and the manifest .txt file, you can easily verify it by opening a command window, CDing to the root of where the release was deployed, and using the release verify command.

Command form: release verify release-name.txt

If every file in the manifest is present and has the expected hash, you'll see Release Verified in green.

c:\InetPub\wwwroot> release verify 3105.txt
8713 files checked
Release Verified

If on the other hand there are differences, you will see one or more errors listed in yellow or red. Yellow indicates a file is present but doesn't have the expected hash. Red indicates a missing file.

c:\InetPub\wwwroot> release verify 3105.txt
FILE NOT FOUND   c:\3105_release\Views\Report\summary.cshtml
A41-BBC-B4B-125  c:\3105_release\Content\css\site.css - ERROR: file is different
782-661-022-411  c:\3105_release\web.config - ERROR: file is different
8713 files checked
3 error(s)

In reviewing the results, note that it may well be normal for a file or two to be different. For example, an ASP.NET web application might have a different web.config file, with settings specific to the target environment.

This simple procedure, which generally takes under a minute even for large releases, is a huge confidence builder that your release is right. If you're in a position where processing steps sometimes lose files, mangle files, or rename files, using release.exe can detect and warn you about all of that.
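
Conceptually, verification is just manifest creation in reverse: re-hash each listed file and compare. Here's a minimal sketch, again in Python and again assuming the 'hash path' manifest format from the earlier sketch:

import os

def verify_release(manifest_name):
    """Re-hash every file in the manifest and report missing or changed files."""
    errors = 0
    with open(manifest_name) as manifest:
        for line in manifest:
            expected, path = line.rstrip("\n").split(" ", 1)
            if not os.path.exists(path):
                print(f"FILE NOT FOUND   {path}")
                errors += 1
            elif (actual := partial_hash(path)) != expected:
                print(f"{actual}  {path} - ERROR: file is different")
                errors += 1
    print(f"{errors} error(s)" if errors else "Release Verified")
    return errors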

Creating A Differential Release

At the start of this article I mentioned differential releases, where only changed files are provided. You can generate a differential release (and its manifest .txt file) with the release diff command.

Command form: release diff release-name.txt prior-release-name.txt

Up until now, we have seen variations of the release command that create manifest .txt files or verify them. The release diff command is different: it will not only generate a manifest .txt file, it will also compare it to the prior full release's manifest .txt file--and then delete files from the release folder that have not changed. For this reason, a prominent warning is displayed, and the operator must press Y to confirm they are in the intended directory and wish to proceed. Be careful to run this command only from a folder where you intend files to be removed.

Let's say some time has passed since your last full release (3105) and you now wish to issue release 3148--but only a dozen or so files have changed.

1. You start by creating a 3148_release folder and publishing all of your release files to that folder. So far, this is identical to the process used for full releases.
2. You copy into the folder release.exe and the manifest from the last full release, 3105.txt.
3. Next, you use the release diff command to create a differential release:

3148_release> release diff 3148.txt 3105.txt
Differential release:
    New release manifest file ............ 3148.txt
    Prior release manifest file .......... 3105.txt
    Files common to prior release and this release will be DELETED from this folder, leaving only new/changed files.

WARNING: This command will DELETE FILES from c:\3148_release\
Are you sure? Type Y to proceed 

4. After confirming this is what you want to do, you press Y and release.exe goes to work.
5. When release.exe is finished, you will see a summary of what it did:

...
Differential release created:
    Release manifest file .................. 3148.txt
    Files in Full Release .................. 8713
    Files in Differential Release .......... 12
    Files removed from this directory ...... 8701

Only 12 files were left in the directory, because the other 8701 files were identical to the last full release--so they don't need to be in the update. Your folder contains only the handful of files that have changed since last release, making for a smaller, simpler release package.

However, the 3148.txt manifest will list every file in the cumulative release and its hash. This is important, because on-site you will be overlaying this partial 3148 release on top of a prior 3105 full release. You want to be able to perform a release verify 3148.txt command which will verify the entire release, not just the changed files.

c:\InetPub\wwwroot> release verify 3148.txt
8713 files checked
Release Verified
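
For completeness, the heart of the diff logic can be sketched the same way: read both manifests, then delete from the folder any file whose hash is unchanged since the prior release. (The real release.exe adds the interactive warning shown earlier; this sketch assumes the current directory is the new release folder.)

import os

def load_manifest(name):
    """Map each file path in a manifest to its expected hash."""
    entries = {}
    with open(name) as manifest:
        for line in manifest:
            file_hash, path = line.rstrip("\n").split(" ", 1)
            entries[path] = file_hash
    return entries

def diff_release(new_manifest, prior_manifest):
    """Delete files identical to the prior release; what remains is the diff.
    The new manifest still lists every file, so verify covers the whole release."""
    prior = load_manifest(prior_manifest)
    removed = 0
    for path, file_hash in load_manifest(new_manifest).items():
        if prior.get(path) == file_hash:
            os.remove(path)
            removed += 1
    print(f"Files removed from this directory ...... {removed}")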

Summary

As someone who has to generate releases regularly--sometimes in a hurry--I've found that release.exe has already made my life a lot easier. It is also making deployment a lot less problematic on the customer delivery side: the completeness and correctness of a deployment can be immediately ascertained, and if there are problems, the specific files are clearly identified.

Download source code