Wednesday, October 31, 2018

Setting Up Transparent Data Encryption

This post discusses the Transparent Data Encryption (TDE) feature in SQL Server and how to use it.

TDE: What It Is and Why It Exists

When it comes to database encryption, there are two areas to think about: encryption during transport and encryption at rest.

Encryption during transport means the communication between the database and your client (your application, or SQL Server Management Studio for example) is encrypted. Many developers who use SQL Server are already familiar with specifying Encrypt=True in a connection string. It isn't necessary to create a certificate to use this feature, but in a Production environment you'd want to create a certificate and configure the client to only trust that certificate.
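For example, a connection string with transport encryption enabled might look like the following (the server, database, and trust settings shown are placeholders for illustration):

```
Server=myserver.example.com;Database=App;Integrated Security=true;Encrypt=True;TrustServerCertificate=False;
```

With TrustServerCertificate=False, the client validates the server's certificate rather than trusting it blindly, which is what you'd want in Production.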

All well and good, but encryption during transport doesn't change the fact that the database data on disk is not encrypted. If you dumped the database file of your Contacts database, you would see visible names and contact information. If someone made off with that file, they'd have access to the data.

This is where encryption at rest comes in: keeping the database data encrypted on disk. That means data is encrypted when inserted or updated, and decrypted when queried. If you consider what would be involved in doing this yourself in your application code, it's pretty daunting: you'd need to be sure encryption and decryption was applied uniformly, and doing so without a performance impact would be a major feat; plus, external applications like report generators would no longer be able to do anything with the database.

Fortunately, the Transparent Data Encryption feature exists and it is extremely well done. Once you turn it on, it just works. Data in the data file is encrypted. Data you work with isn't. Conceptually, you can think of it like the diagram below (and if you want all the specific encryption details, see the Microsoft documentation link at the top of this post). And as we said earlier, the data can also be encrypted during transport with a connection string option.


In my experience TDE doesn't noticeably impact performance. If you're an authorized user who has specified valid credentials, nothing will seem at all different to you. But if you dumped the database files, you would no longer be able to see understandable data.

Although TDE is a very nice feature, it's only available in Enterprise Edition--so it comes at a price. There is one other edition where TDE is available, and that's Developer Edition. This means you can experiment with the feature--or demonstrate it to a client--without having to buy Enterprise Edition up front. Understand, however, that you cannot use Developer Edition in a Production environment.

Enabling TDE

The procedure to enable TDE is not difficult. These are the steps:

1. Install SQL Server Developer Edition or Enterprise Edition.
2. Run SQL Statements to create a key and certificate.
3. Run SQL Statements to enable TDE.
4. Back up the certificate and key file.

1. Install SQL Server Developer Edition or Enterprise Edition


You can download SQL Server Developer Edition from the MSDN web site. For Enterprise Edition, follow the instructions you receive through your purchasing channel to obtain the software.

Create or restore a database, and ensure the database is functional and that you can get to it from SQL Server Management Studio.

2. Run SQL Statements to Create a Certificate


A master key and a certificate are needed for the encryption feature. To create them, run the statements below against the master database.

USE master
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'my-password';
GO

CREATE CERTIFICATE TDEServerCert WITH SUBJECT = 'My DEK Certificate';

GO

3. Run SQL Statements to Enable TDE


Next, connect to your application database (named App in the example) and run the statements below to enable TDE:

USE App
GO

CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128 ENCRYPTION BY SERVER CERTIFICATE TDEServerCert;
GO

ALTER DATABASE App SET ENCRYPTION ON;
GO

4. Back Up the Certificate and Key File


This next step makes a backup of the certificate and private key used for TDE. This step is vital: any database backups you make from this point forward cannot be restored unless you have the certificate and key files.

BACKUP CERTIFICATE TDEServerCert TO FILE = 'c:\xfer\TDEServerCert.crt'
    WITH PRIVATE KEY
    (
        FILE = 'c:\xfer\TDEServerCert.pfx',
        ENCRYPTION BY PASSWORD = 'my-password'
    )
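Should you ever need to restore an encrypted database's backup on a different server, you would first recreate the certificate from these backed-up files. Here is a sketch of what that looks like; 'new-server-password' is a placeholder, and the file paths and password match the backup example above:

```sql
USE master
GO

-- The recovery server needs its own database master key (if it doesn't already have one).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'new-server-password';
GO

-- Recreate the certificate from the backed-up certificate and private key files.
CREATE CERTIFICATE TDEServerCert
    FROM FILE = 'c:\xfer\TDEServerCert.crt'
    WITH PRIVATE KEY (
        FILE = 'c:\xfer\TDEServerCert.pfx',
        DECRYPTION BY PASSWORD = 'my-password'
    );
GO
```

Once the certificate exists on the recovery server, RESTORE DATABASE works as usual.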

Confirming TDE


After enabling TDE, you'll want to confirm your application still works like it always has. 

To confirm to yourself that TDE is really active, or provide evidence to an auditor, you can use this query:

SELECT [name], is_master_key_encrypted_by_server, is_encrypted FROM master.sys.databases

This will display a list of databases and whether or not they are encrypted.

name    is_master_key_encrypted_by_server   is_encrypted
master  1                                   0
tempdb  0                                   1
model   0                                   0
msdb    0                                   0
App     0                                   1

If you're still skeptical, you can also dump your database files.


Thursday, August 23, 2018

Release Management and my release tool for full and differential releases

In this post I'll discuss some of the common tasks I perform for release management, and a tool I created to help with it, release.exe. You can find release.exe's source code here on github.

Release Management : Your Mileage May Vary

If you're responsible for software release management, source control is a given--but what else does release management entail? That really depends... it depends on what you hold important, on what constraints come with your target environment(s), and on what customer requirements you have to contend with. Release management might mean nothing more than deploying the latest code from source control to a public cloud; or, it might be a very complex multi-step process involving release packaging, electronic or media transfer to a customer, security scans, patching, approval(s), and network transfers by client IT departments--where some of the process is out of your hands. Whether simple or complex, good release management requires discipline and careful tracking. A well-thought-out procedure, supported with some tools, makes all the difference.

In the release management I regularly perform, common tasks are these:

1. Packaging up a full release to ship to a location, where it will be delivered to the client, go through multiple security processing steps, and eventually end up on-site, ready for deployment.
2. On-site deployment of an approved release to new or existing servers.

The most interesting recent development in all of this has been the ability to generate differential releases, where only files that have been changed are delivered. This adds several more common tasks:

3. Packaging up a partial release (just what's changed) to ship to a location, and go through the same processing and approval steps.
4. On-site deployment of an approved partial release to new or existing servers.

Differential releases are massively valuable, especially when your full release spans tens of thousands of files (perhaps multiple DVDs) while an update changes only a handful of files taking up a tenth of a DVD. However, getting differential releases to work smoothly and seamlessly requires some careful attention to detail. Most importantly, you need a means to verify that what you end up with is a complete, intact release.

To help with release packaging and on-site release verification, I created the release.exe command for Windows. Let's take a look at what it can do.

Hashing: a way to verify that a file has the expected contents

My release.exe command borrows an idea from my Alpha Micro minicomputer days: file hashes and hashed directory files. Back then, our DIR command had a very useful /HASH switch which would give us a hash code for a file, such as 156-078-940-021. Changing even a single byte of a file would yield a dramatically different hash.

When we would ship releases to customers, we would include a directory file of every file with its hash code. On the receiving end, a client could use a verify command which would read the hashed directory file and compare it against the computed hash of each file on the local system--displaying any discrepancies found. This process worked beautifully, and I've always missed having it on Windows. Now I have a version of the same concept in a tool I can use on Windows.

The release command can generate a file hash, with the command release hash:

Command form: release hash file

The hash is a partial MD5 hash. Why partial? Well, the entire hash is really long (20 segments), which is rather onerous if you need to send a hash code to someone or discuss it with someone else. So, I've shortened it to the first two and last two segments of the full MD5 hash. Since the hash will change dramatically if even one byte changes, this is perfectly adequate for our purposes.
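The idea can be sketched in Python. This is only an illustration of the truncated-hash concept, not release.exe's actual source, and the tool's exact segmentation likely differs (my sketch segments the raw MD5 hex digest):

```python
import hashlib

def partial_hash(path):
    """Shortened file hash: first two and last two 3-character segments
    of the file's MD5 hex digest (uppercased), joined with hyphens."""
    digest = hashlib.md5(open(path, "rb").read()).hexdigest().upper()
    # Split the 32-char hex digest into 3-char segments (final one is shorter).
    segments = [digest[i:i + 3] for i in range(0, len(digest), 3)]
    return "-".join(segments[:2] + segments[-2:])
```

Because a strong hash changes unpredictably with any input change, even a heavily truncated form still catches corrupted or substituted files with very high probability.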

Here's a sample output:

path> release hash readme.txt
05B-8E8-D57-E7C readme.txt

path> release hash release.exe
BB9-AFA-F22-32A release.exe

File hashes will form the basis for packaging up releases with a manifest of files and their hashes; and for verifying those manifests on the receiving side.

Creating A Full Release Manifest

To generate a complete release, we first get the files intended for the release in a folder with the name of the release. For example, if our application's latest changeset in source control was 3105, we might create a 3105_release folder. Within that we copy all of our release files, which will likely include many files and many subfolders.

With the release files copied, we can now use the release create command to create a release manifest:

Command form: release create release-name.txt

3105_release> release create 3105.txt
Creating manifest for c:\3105_release
F7C-2C3-AE1-4BC C:\3105_release\readme.txt
63A-EE0-17F-2D4 C:\3105_release\bin\appmain.dll
9AB-6F4-EE3-007 C:\3105_release\bin\security.dll
3B2-B16-5AC-007 C:\3105_release\bin\service.dll
47C-08D-A42-FD5 C:\3105_release\bin\en-US\resources.dll
98D-1E1-399-A7A C:\3105_release\Content\css\site.css
652-8A0-52A-ED0 C:\3105_release\Views\Login\login.cshtml
179-488-E60-E22 C:\3105_release\Views\App\main.cshtml
77C-874-963-791 C:\3105_release\Views\App\add.cshtml
6E5-3B0-68C-349 C:\3105_release\Views\Admin\customize.cshtml
E02-C9C-A53-37C C:\3105_release\Views\Admin\settings.cshtml
F01-A37-EED-629 C:\3105_release\Views\Report\monthlysales.cshtml
...

The result of all this is simply to add one file to the release, 3105.txt in this case, which contains every file in the release and its hash. We also add release.exe itself to the release folder. This will give us what we need on the receiving end to verify the release is correct.
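Building such a manifest is essentially a directory walk plus a hash per file. A minimal Python sketch of the idea (not release.exe's actual code; hash_fn stands in for whatever file-hash routine you use):

```python
import os

def create_manifest(release_dir, manifest_name, hash_fn):
    """Walk the release folder and write one 'HASH path' line per file,
    mirroring what 'release create' produces. Returns the manifest path."""
    lines = []
    for root, _dirs, files in os.walk(release_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            lines.append(hash_fn(path) + " " + path)
    manifest_path = os.path.join(release_dir, manifest_name)
    with open(manifest_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return manifest_path
```

The manifest is written after the walk so it doesn't list itself.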

Verifying a Release

Once your release has gone through all of the permutations that get it to where it needs to go, and you have deployed it, you'll want to verify that it is complete and intact. Because the release shipped with release.exe and the manifest .txt file, you can easily verify it by opening a command window, changing to the root directory where the release was deployed, and running the release verify command.

Command form: release verify release-name.txt

If every file in the manifest is present and has the expected hash, you'll see Release Verified in green.

c:\InetPub\wwwroot> release verify 3105.txt
8713 files checked
Release Verified

If on the other hand there are differences, you will see one or more errors listed in yellow or red. Yellow indicates a file is present but doesn't have the expected hash. Red indicates a missing file.

c:\InetPub\wwwroot> release verify 3105.txt
FILE NOT FOUND   c:\3105_release\Views\Report\summary.cshtml
A41-BBC-B4B-125  c:\3105_release\Content\css\site.css - ERROR: file is different
782-661-022-411  c:\3105_release\web.config - ERROR: file is different
8713 files checked
3 error(s)

In reviewing the results, note that it may well be normal for a file or two to be different. For example, an ASP.NET web application might have a different web.config file, with settings specific to the target environment.

This simple procedure, which generally takes under a minute even for large releases, is a huge confidence builder that your release is right. If you're in a position where processing steps sometimes lose files, mangle files, or rename files, using release.exe can detect and warn you about all of that.
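The verification step itself is simple to reason about: read each manifest line, check the file exists, and compare hashes. A Python sketch of the concept (again, not the tool's actual code; hash_fn is a placeholder for the file-hash routine):

```python
import os

def verify_release(manifest_path, hash_fn):
    """Check every file listed in the manifest: present, and hash matches.
    Returns (files_checked, list_of_error_messages)."""
    errors = []
    checked = 0
    for line in open(manifest_path):
        expected_hash, path = line.rstrip("\n").split(" ", 1)
        checked += 1
        if not os.path.exists(path):
            errors.append("FILE NOT FOUND  " + path)        # missing file (red)
        elif hash_fn(path) != expected_hash:
            errors.append(path + " - ERROR: file is different")  # wrong hash (yellow)
    return checked, errors
```

An empty error list corresponds to the "Release Verified" result.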

Creating A Differential Release

At the start of this article I mentioned differential releases, where only changed files are provided. You can generate a differential release (and its manifest .txt file) with the release diff command.

Command form: release diff release-name.txt prior-release-name.txt

Up until now, we have seen variations of the release command that create manifest .txt files or verify them. The release diff command is different: it will not only generate a manifest .txt file, it will also compare it to the prior full release's manifest .txt file--and then delete files from the release folder that have not changed. For this reason, a prominent warning is displayed. The operator must press Y to continue, after confirming they are in the directory they want to be in and wish to proceed. Be careful to only run this command from a folder where you intend files to be removed.

Let's say some time has passed since your last full release (3105) and you now wish to issue release 3148--but only a dozen or so files have changed.

1. You start by creating a 3148_release folder and publishing all of your release files to that folder. So far, this is identical to the process used for full releases.
2. You copy into the folder release.exe and the manifest from the last full release, 3105.txt.
3. Next, you use the release diff command to create a differential release:

3148_release> release diff 3148.txt 3105.txt
Differential release:
    New release manifest file ............ 3148.txt
    Prior release manifest file .......... 3105.txt
    Files common to prior release and this release will be DELETED from this folder, leaving only new/changed files.

WARNING: This command will DELETE FILES from c:\3148_release\
Are you sure? Type Y to proceed 

4. After confirming this is what you want to do, you press Y and release.exe goes to work.
5. When release.exe is finished, you will see a summary of what it did:

...
Differential release created:
    Release manifest file .................. 3148.txt
    Files in Full Release .................. 8713
    Files in Differential Release .......... 12
    Files removed from this directory ...... 8701

Only 12 files were left in the directory, because the other 8701 files were identical to the last full release--so they don't need to be in the update. Your folder contains only the handful of files that have changed since last release, making for a smaller, simpler release package.
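The pruning step can be sketched as follows. This is an illustration of the technique, not release.exe's source; it assumes the prior manifest stores paths relative to the release folder (the real tool's format may differ), and hash_fn is a placeholder:

```python
import os

def make_differential(release_dir, prior_manifest, hash_fn):
    """Delete from release_dir every file whose relative path AND hash match
    the prior release's manifest, leaving only new/changed files.
    Returns (kept, removed) counts."""
    prior = {}
    for line in open(prior_manifest):
        h, relpath = line.rstrip("\n").split(" ", 1)
        prior[relpath] = h
    kept, removed = 0, 0
    for root, _dirs, files in os.walk(release_dir):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, release_dir)
            if prior.get(rel) == hash_fn(path):
                os.remove(path)   # unchanged since the prior release
                removed += 1
            else:
                kept += 1
    return kept, removed
```

Comparing by relative path matters because the prior and new release folders (3105_release vs 3148_release) have different roots.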

However, the 3148.txt manifest will list every file in the cumulative release and its hash. This is important, because on-site you will be overlaying this partial 3148 release on top of a prior 3105 full release. You want to be able to perform a release verify 3148.txt command which will verify the entire release, not just the changed files.

c:\InetPub\wwwroot> release verify 3148.txt
8713 files checked
Release Verified

Summary: 

The release.exe command has already made my life a lot easier, as someone who has to regularly generate releases--sometimes in a hurry. It is also making deployment a lot less problematic on the customer delivery side: the completeness and correctness of deployments can be immediately ascertained, and if there are problems the specific files are clearly identified.

Download source code


Saturday, August 4, 2018

My First Chrome Extension: Airgap Scan

Today I wrote my first Chrome Extension, and it was fun. I want to share what the experience was like. The code for this post may be found here: https://github.com/davidpallmann/AirgapScan

Chrome has become a favorite browser to many, and if you do web development at all you have no doubt seen how important Chrome Extensions have become. Some of the ones I use frequently are WhatFont (tells me what font I am looking at when I hover over text in a page) and ng-inspector (AngularJS variable inspector), among many others.

It's always best to learn something new when you have a firm project idea in mind, something that needs to be created. Fortunately, I had a project in mind.

AirgapScan: An Extension That Scans Pages for Disallowed Internet URLs

Today I decided I was in need of a Chrome Extension to help verify whether my web site pages were air-gapped. What is air-gapping? Air-gapping is when your software has to be able to run in a location that allows no Internet access; for various reasons, there are security-minded customers with that requirement. Honoring this requirement is harder than you might think: as modern developers, we tend to take Internet availability for granted. And we frequently rely on open source libraries, many of which also take Internet availability for granted.

And so, having made changes to support air-gapping, it's important to test that we haven't missed an Internet reference somewhere. That's why I wanted this Chrome extension: when our testers visit one of our solution's web pages, I want the extension to report if there are airgap violations (that is, Internet access outside of the approved network).

The way I'd like this to work is as follows: you browse to a page in your application. If you want to check your air-gapping, you right-click the AirgapScan icon and select Scan Page. You'll either get a happy green message box telling you all is well, or a red alert box listing the HREF(s) found in the page that are disallowed.

Hello, World

But you have to walk before you can run, so first up was to take a basic tutorial and create a simple "Hello, World!" extension. I stumbled across an excellent getting-started tutorial by Jake Prins, How to Create and Publish a Chrome Extension in 20 Minutes, which walked me through the basics.

I was surprised and pleased to learn just how easy it is to write a Chrome Extension. In a nutshell, here are the basics:

  1. Web Technologies. Your extension is written in familiar HTML, CSS, and JavaScript.
  2. Developer Mode. While developing, you can easily load and re-load your extension in Chrome as you make changes, making for a great interactive experience as you work. This is done by visiting chrome://extensions and switching on Developer Mode. When you want to load your extension, click LOAD UNPACKED and browse to the folder where your files are. It's a very simple process.
  3. Manifest. Your extension starts with a manifest.json file, in which you declare a number of things about your extension--including name, version, icon, permissions needed, and which css / script files it uses.
  4. Scripts. You'll write some JavaScript code to do your magic. Depending on what you do, you may have to create more than one based on Google's rules. Again, tutorials and documentation are your friend.
  5. You Can Use Your Favorite Libraries. Used to using jQuery? Or one of the many other popular libraries out there? It's fine to include those in your extension--just copy the .js/.css files to your extension folder and declare them in your manifest.
  6. Your Extension Can Do A Lot Of Things. In the tutorial I took, I learned I could control the page that is created when a new browser tab is opened. Later on, I learned how to scan the current page's DOM. You can also do things like add context menus to a selection on the page. It's a really powerful platform.

Creating the AirgapScan Extension

1. The Manifest

The first element of any Chrome Extension is the manifest, manifest.json.

{
  "manifest_version": 2,
  "name": "Airgap Scan",
  "author": "David Pallmann",
  "version": "1.0",
  "description": "Scans the page for Internet references. Useful for testing software meant for air-gapped environments (without public Internet access).",
  "background": {
    "scripts": [ "background.js" ]
  },
  "icons": {
    "128": "icon-128.png"
  },
  "browser_action": {
    "default_icon": "tab-icon.png",
    "default_title": "Airgap Scanner"
  },
  "content_scripts": [
    {
      "matches": [ "<all_urls>" ],
      "css": [ "jquery-confirm.min.css" ],
      "js": [ "jquery-2.2.4.min.js", "jquery-confirm.min.js", "content.js" ]
    }
  ],
  "permissions": [
    "contextMenus"
  ]
}

Key things to note about the manifest:

  • It lists required permissions, similar to what you do in a phone app. When the user installs, they'll be asked to confirm they are okay with the extension's required permissions. In my case, I had to list adding context menus as a permission, since I want to use a context menu item to perform scans on-demand.
  • The background object's scripts property specifies one of my script files, background.js.
  • The content_scripts object declares some important things. 
    • The matches property indicates which URLs the extension is active for (all URLs, in my case). 
    • The css property lists CSS files we're including (jquery-confirm.min.css). 
    • The js property lists JavaScript files we're including: my own source file content.js, plus libraries jquery.js and jquery-confirm.js.

Why is my JavaScript code in two places (background.js and content.js)? Well, with Chrome extensions there are content scripts, which run in the context of the page (content.js). But you may also need a global script or page that runs for the lifetime of your extension (background.js).

2. Content.js

Content.js is my page-level JavaScript code. This includes the following important elements:

  • A message listener, whose purpose is to listen for the context menu's Scan Page command being clicked. When that happens, the listener invokes the airgapScan function.
  • The airgapScan function, which is the heart of the extension. It uses jQuery to find all the HREFs on the page. It discounts some of them, such as mailto: and javascript: links. The rest, it compares against the array of allowableNetworks. If the href partially matches any of the allowable networks, all is well. If not, an error is counted and the URL is added to a list of in-violation URLs. After scanning the HREFs on the page, a green message box (if no errors) or red alert box (displaying the problem URLs) is displayed.

// content.js

// Routes a message from background.js (context menu action selected)
// to the airgapScan function in this file.

chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    sendResponse(airgapScan());
});


// airgapScan: scan the page, examine each href. Collect a list of
// non-allowable hrefs and display an alert.

function airgapScan() {

    console.log('--- Airgap Scan ---');

    var allowableNetworks = [
        '://10.',                  // allowed: [http|https]://10.x.x.x
        '://www.mytestdomain.com'  // allowed: [http|https]://www.mytestdomain.com...
    ];
    var urlCount = 0;
    var errorCount = 0;
    var url;
    var urls = [];
    var errList = '';
    var listAll = false;

    $("a").each(function () {
        if (this.href != undefined) {
            url = this.href;
            if (url != null && url != '' && url.indexOf('javascript:') == -1 && url.indexOf('mailto:') == -1) {
                urlCount++;
                urls.push(url);
                var error = true;
                for (var p = 0; p < allowableNetworks.length; p++) {
                    if (url.indexOf(allowableNetworks[p]) != -1) {
                        error = false;
                        break;
                    }
                }
                if (error) {
                    errorCount++;
                    console.error('URL outside of network detected: ' + url);
                    errList = errList + '\n' + url;
                }
            }
        }
    });

    if (listAll && urls.length > 0) {
        for (var i = 0; i < urls.length; i++) {
            console.log(i.toString() + ': ' + urls[i]);
        }
    }

    console.log('--- end Airgap Scan - URLs: ' + urlCount.toString() + ', Errors: ' + errorCount.toString() + ' ---');

    if (errorCount > 0) {
        var violation = (errorCount == 1)
            ? '1 url that violates'
            : errorCount.toString() + ' urls that violate';
        $.alert({
            //icon: 'fa fa-warning',
            type: 'red',
            title: 'Airgap Alert',
            content: 'Warning: Airgap scan found ' + violation + ' airgap rules:\n' + errList,
            useBootstrap: false
        });
    }
    else {
        $.alert({
            title: 'Airgap OK',
            type: 'green',
            content: 'All good: No airgap errors found',
            useBootstrap: false
        });
    }
}

// Default state is that the user initiates a scan from the context menu.
// Uncomment the line below if you want the scan to run automatically when a page loads.
//airgapScan();

3. Background.js

Background.js is the lifetime-of-the-extension JavaScript file. It contains:
  • A call to chrome.contextMenus.create, which adds a context menu item to the extension, available to the user by right-clicking its icon.
  • A listener to respond to the menu item being clicked. This in turn sends a message to content.js to invoke the airgapScan function.

// Add "Scan page" action to extension context menu.


chrome.contextMenus.create({
    id: "AG_ScanPage",
    title: "Scan Page",
    contexts: ["browser_action"]
});

// When the context menu item is selected, send a message to content.js to run an airgap scan.

chrome.contextMenus.onClicked.addListener(function (info, tab) {
    if (tab && info.menuItemId == "AG_ScanPage")
        chrome.tabs.sendMessage(tab.id, { args: null }, function (response) {
        });
});

4. Library Files

As mentioned earlier, we are using a few libraries: jquery and jquery-confirm. The .js and .css files for them are included in the folder, and are referenced in the manifest.

5. Icons

Lastly, we have some icons in different sizes. The icon for AirgapScan is shown below.


And that's it. Time from first-time-hello-world-extension to AirgapScan was just a few hours on a Saturday. 

AirgapScan in Action

As I developed AirgapScan, I continually tested in chrome://extensions. When I had an update, I would REMOVE and then LOAD UNPACKED to get the latest changes applied, then visit a fresh page to test it out.


After visiting a page to be tested, the AG icon is visible. Hovering over it shows its name in a tooltip. Right-clicking it shows the Scan Page context menu that the extension code added.


Clicking Scan Page quickly comes back with a message box with the results of the scan. If one or more in-violation HREFs are found, a red alert box itemizes them.


If no violations are found, a green Airgap OK message appears.



You can download this extension here: https://github.com/davidpallmann/AirgapScan

This was a lot of fun, plus I created something that my team needs. Chrome Extensions are surprisingly easy to create, and the platform is well thought through, which makes it a pleasure to use. I am highly motivated now to create other extensions.


Sunday, December 3, 2017

An AngularJS Dashboard, Part 9: Unit Tests

NOTE: for best results, view the http: version of this page (else you won't get syntax highlighting).

This is Part 9 in a series on creating a dashboard in AngularJS. I'm blogging as I progressively create the dashboard, refining my Angular experience along the way. An online demo of the latest work is available here.

Previously in Part 8, we added role support and improved the mobile experience. Today, we're going to add unit tests for our AngularJS code using Jasmine. Unit testing of code is highly important, and Angular cites testability as one of its core principles.

Here's what we'll be ending up with:

Unit Testing

We've wanted to have unit tests all along during this series, but ran into many problems getting them working. I've shared in the past that one of the frustrating things about Angular is how much surface area is exposed in the framework and how many different ways there are of doing something; but that problem is multiplied tenfold when it comes to unit testing a component. I struggled for many weeks to find the right combination of code in my tests that would instantiate my component and test its functions. The good news is, I'm finally there and can at last tackle the subject in today's post.

On the AngularJS web site (angularjs.org), two tools are listed as recommended tools for unit testing: Karma and Jasmine. After some research, I settled on Jasmine (https://jasmine.github.io/).

Visual Studio Integration

The next step was to make it possible to have my Jasmine tests run in Visual Studio's Test Explorer. This can be achieved by going to Visual Studio's Tools | Extensions and Updates menu and installing Chutzpah. You can read up on Chutzpah here.

Chutzpah in Visual Studio Extensions and Updates

With Chutzpah installed, the latest Dashboard project code will now show tests in Visual Studio's Test Explorer. You'll notice that there are a great many tests. The idea is to test every property and function in our component's controller. The tests themselves are mostly very simple, as we shall see.

Dashboard project with Jasmine tests in Visual Studio

To run the tests, right-click Module : [ component : dashboard ] and select Run Selected Tests.

Running All Tests

Soon afterward, the test results will be displayed. All of the tests should pass and show in green.

All Tests Passed

Test Setup

The convention encouraged by the AngularJS team is to name your tests after your component / controller / service files with the file type '.spec' inserted. In our dashboard project, the tests are in the file named dashboardController.spec.js.

References

In our spec file, we begin with a special section of reference comments. These lines are processed by Jasmine and cause needed JavaScript files to be loaded--they kind of serve the same purpose as script tags in an HTML page. You'll find that many of the .js files loaded in the project's index.html page are reproduced here.
/// <reference path="../Scripts/es6-promise.auto.min.js" />
/// <reference path="../Scripts/jquery-3.2.1.min.js" />
/// <reference path="../Scripts/jquery-ui.min.js" />
/// <reference path="../Scripts/jquery-ui.touch-punch.js" />
/// <reference path="../Scripts/spectrum.js" />
/// <reference path="../Scripts/toastr.js" />
/// <reference path="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" />
/// <reference path="../Scripts/angular.js" />
/// <reference path="../Scripts/angular-mocks.js" />
/// <reference path="../app/app.module.js" />
/// <reference path="../components/dashboard/google.chart.service.js" />
/// <reference path="https://www.gstatic.com/charts/loader.js" />
/// <reference path="../components/dashboard/demo.data.service.js" />
/// <reference path="../components/dashboard/dashboard.component.js" />
References

Note that we are referencing our canned demo data service in our references, not the sql data service. It is a common practice in AngularJS unit testing to mock out services. Our already-existing demo data service will serve this purpose.

Support Functions

The next section includes some support functions. These will be used to load the controller's html template.
// Dashboard component unit tests

// test support functions

function httpGetSync(filePath) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", filePath, false);
    xhr.send();
    return xhr.responseText;
}

function preloadTemplate(path) {
    return inject(function ($templateCache) {
        var response = httpGetSync(path);
        $templateCache.put(path, response);
    });
}
Support Functions

Describe Block and Template Compilation

All of our tests are enclosed in a describe block.
describe('component: dashboard', function () {
    var $rootScope = null;
    var $compile = null;
    var element, scope;
    var ChartService, DataService, httpBackend, $ctrl;

    beforeEach(module('dashboardApp'));
    beforeEach(module('dashboard'));
    beforeEach(preloadTemplate('/components/dashboard/dashboard.template.html'));

    beforeEach(inject(function (_$rootScope_, _$compile_, $injector) {

        $compile = _$compile_;
        $rootScope = _$rootScope_;

        scope = $rootScope.$new();

        element = angular.element('<html><head><title>Dashboard</title><meta charset="utf-8" /><link rel="icon" href="data:;base64,iVBORw0KGgo="><link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet"><link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"><link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" /><link href="Content/css/dashboard.css" rel="stylesheet" /><link href="Content/css/toastr.css" rel="stylesheet" /><link href="Content/css/spectrum.css" rel="stylesheet" />' +
            '<body><div style="width: 1920px"><dashboard id="dashboard"></dashboard></div></body></html>');

        $compile(element)(scope);
        scope.$digest();
        $ctrl = scope.$$childHead.$ctrl;
    }));

    ...tests...
});
describe block

The describe statement gives us the category name we saw earlier in Visual Studio Test Explorer. Within it are several beforeEach statements. The first two declare our dashboard app and module. The third loads the controller's HTML template file. The fourth encloses an inject statement.

The inject statement injects dependencies. To someone on their first AngularJS project, inject is kind of fascinating: there are dozens of objects you can pass to it as parameters, and you have great freedom in what you include. We are passing several objects and functions important to setting up the test: the AngularJS root scope $rootScope, its $compile service, and the $injector object. Notice that the injector expects surrounding underscores in some of the names. This is a convention in AngularJS testing: the injector strips the underscores when resolving the dependency, leaving the plain name free for a local variable. We save these values in variables for later use.

Next, we compile our HTML. Usually our project has an index.html page which AngularJS renders into. In our test, we have a similar fragment of HTML assigned to the variable element. The $compile function is used to compile the element and set up a scope. Then, scope.$digest() is called to perform an AngularJS digest cycle. Lastly, the controller of the component is assigned to the variable $ctrl. If all of this sounds a bit complex and non-obvious, it was! It took many, many frustrating weeks before I found the right combination of code that would work.

The Tests

And now, we can discuss the tests themselves. Below are the first few, which test controller properties. Notice that these tests are self-documenting: the it(...) function's first parameter is a description of what is being tested; these are where the test names came from that we saw earlier in Visual Studio Test Explorer. The second parameter is a function to run. The function can do whatever it needs to, but in our case we are mostly concerned with verifying that properties in the controller contain expected values. We use the expect statement with a matcher such as .toContain or .toEqual to check a property against an expected value.

    // These tests confirm the controller contains expected properties 

    it('$ctrl.title has expected value', function () {
        expect($ctrl.title).toContain('Dashboard');
    });

    it('$ctrl.chartProvider has expected value', function () {
        expect($ctrl.chartProvider).toContain('Google');
    });

    it('$ctrl.dataProvider has expected value', function () {
        expect($ctrl.dataProvider).toContain('Demo');
    });

    it('$ctrl.tilesacross has expected value', function () {
        var expectedTilesAcross = 8;
        expect($ctrl.tilesacross).toEqual(expectedTilesAcross);
    });

Property tests

Further down in the spec file are tests that invoke controller functions. We can access properties and functions in our controller via the $ctrl object. As with the property tests, we use expect to verify the results are correct.
     it('$ctrl.moveTileUp(id) to swap tiles', function () {
         var title1 = $ctrl.tiles[0].title;
         var title2 = $ctrl.tiles[1].title;
         $ctrl.moveTileUp('2');
         expect($ctrl.tiles[0].title).toEqual(title2);
         expect($ctrl.tiles[1].title).toEqual(title1);
     });

     it('$ctrl.moveTileDown(id) to swap tiles', function () {
         var title1 = $ctrl.tiles[0].title;
         var title2 = $ctrl.tiles[1].title;
         $ctrl.moveTileDown('1');
         expect($ctrl.tiles[0].title).toEqual(title2);
         expect($ctrl.tiles[1].title).toEqual(title1);
     });

     it('$ctrl.removeTile(id) to remove tile | resetDashboard to restore tile', function () {
         var length = $ctrl.tiles.length;
         $ctrl.removeTile('1');
         expect($ctrl.tiles.length).toEqual(length - 1);
         $ctrl.resetDashboard();
         expect($ctrl.tiles.length).toEqual(length); 
     });
Function tests

Promises, Promises

One very big problem I had getting my tests working had to do with JavaScript promises. In our controller, we normally wrap service calls in a Promise, since they may or may not be asynchronous. It turns out Jasmine and AngularJS together don't innately support native promises: a native promise resolves outside AngularJS's digest cycle, so test expectations can run before the promised values ever arrive.

My first attempt to resolve this issue was to add a Promise polyfill as a reference. Unfortunately, this didn't solve anything and tests were still failing.

To ultimately combat this problem, I added a flag to the data service named requiresPromise. It is true in the sql data service (which makes Ajax calls to the MVC controller), and false in the demo data service (which simply returns objects). With this testable flag in place, the controller code that uses promises is bypassed in tests. It's a bit disheartening that I had to code around this issue, but I have yet to find a better solution.
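The pattern is simple enough to sketch. The following is an illustration of the approach, not the post's exact controller code--the names makeController and loadTiles are hypothetical, though requiresPromise and getTileLayout match the services described in this series:

```javascript
// Hypothetical sketch: a controller method that bypasses promises
// when the data service is synchronous.
function makeController(DataService) {
    var self = { tiles: null };
    self.loadTiles = function () {
        if (DataService.requiresPromise) {
            // sql data service path: wrap the asynchronous call in a promise
            Promise.resolve(DataService.getTileLayout()).then(function (tiles) {
                self.tiles = tiles;
            });
        } else {
            // demo data service path: synchronous assignment, easy to test
            self.tiles = DataService.getTileLayout();
        }
    };
    return self;
}

// In a Jasmine spec, the demo service's synchronous path needs no async plumbing:
var demoService = {
    requiresPromise: false,
    getTileLayout: function () { return [{ title: 'Orders' }]; }
};
var ctrl = makeController(demoService);
ctrl.loadTiles();
// ctrl.tiles now holds the tile array immediately, with no digest-cycle concerns
```

Because the demo service's branch never touches a promise, expect statements can inspect ctrl.tiles right after the call.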

Summary

Today we added unit tests, written in Jasmine, integrated with Visual Studio Test Explorer. We have written tests for the vast majority of properties and functions in the controller.

Download Code
Dashboard_08.zip
https://drive.google.com/open?id=11bej1Wf_YmqqW0Saed-J0SfTy2ERR1Qu

Monday, November 27, 2017

An AngularJS Dashboard, Part 8: Mobile Improvements and Role Support

NOTE: for best results, view the http: version of this page (else you won't get syntax highlighting).

This is Part 8 in a series on creating a dashboard in AngularJS. I'm blogging as I progressively create the dashboard, refining my Angular experience along the way. An online demo of the latest work is available here.

Previously in Part 7, we added a smart tile fill algorithm, improving how our dashboard renders regardless of the layout's mix of tile sizes or the width of the browser window. Today in Part 8, we're going to focus on two areas: improving the mobile experience and adding role support. We'll improve the mobile experience by setting a better initial viewport page width on devices and implementing a friendlier way to re-arrange the dashboard. By role support, we mean restricting who can see certain dashboard information and adding the ability to customize dashboards for different roles/departments.

Today we will:
• Add a meta tag for viewport sizing on mobile devices
• Detect a too-small mobile width and adjust viewport size
• Add an alternative to drag-and-drop for reordering tiles on mobile devices
• Relocate some actions in the tile action menu into a second dashboard menu
• Add a new tile action, Copy Tile
• Allow different users to sign in to our demo project
• Track roles for users
• Support dashboard layouts for roles
• Allow tiles to be restricted to a role
• Add personalized tiles through the use of data queries that filter data for the current user

Here's a glimpse of what we'll end up with today:

Today's objective

A Better Mobile Experience

While we haven't exactly ignored mobile devices up till now, we haven't really focused on making mobile a fantastic experience. If you look back at earlier posts in the series, you'll see that the mobile views were, well, too big. Although the tile layout would re-render for available space, the dashboard layout was simply too large for the screen, which meant the tile content and menus were too small to be of practical use to a mobile user.

MOBILE VIEWPORT

By default, mobile device browsers fully expect to be handed pages that are too large for their screen size; as a practical matter, they therefore support zooming and scrolling. Because our markup hasn't addressed this behavior, our dashboard has been rendering too cramped on small screens, resulting in text that is too small to read and controls that are too small to interact with. Although the user could certainly scale the page, they shouldn't have to. What we want is a great out-of-the-box display on phones and tablets that is easy to interact with.

Fortunately, there is an element we can add to our markup that addresses this: a meta tag that sets viewport size. The typical tag looks like this, which is also what we'll use:

<meta id="viewport" name="viewport" content="width=device-width, initial-scale=1"> 

This immediately makes things better, but doesn't solve all of our problems. The smallest possible rendering of our dashboard is one tile across, but since tiles can be 1 or 2 units wide/tall, we can't really render dashboards well unless we're sure we have enough room for 2 tile units across--or 16 + 200 + 16 + 200 + 16 = 448 pixels. On something like a portrait iPhone we might have only 375 pixels of width. We're going to need to add logic, then, to check whether our screen width is under 500 pixels. If it is, we're going to replace the meta viewport tag's content, telling the device to scale to 500 pixels across. This will give us room to render our dashboard.
var scope1, ctrl1;
$(document).ready(function () {
    if (window.innerWidth < 500) {  // If device is so small we can't fit a wide tile, scale meta viewport tag to minimum width of 500px
        var mvp = document.getElementById('viewport');
        mvp.setAttribute('content', 'width=500');
    }
    if (typeof google != 'undefined') {
        google.charts.load("current", { packages: ["corechart", 'table'] });
    }
});

With this in place, we can now try our dashboard out on a variety of mobile devices. When we do, as the results below show, we are seeing markedly improved displays that are large enough to read and use. The desktop, which previously looked fine, remains unchanged.

Dashboard on Android Phone (Portrait)

Dashboard on iPad (Landscape)

Dashboard on Desktop

Tile Menu and Dashboard Menu

Up until now, our tiles have had a tile menu. An ellipsis appears at the top right of a tile, when hovered over or clicked on, revealing a menu of tile and dashboard actions. Some of those actions really affect the entire dashboard, not just the current tile: Add Tile, Reset Dashboard, Make Default Layout. To factor things better in the UI, we now have a separate dashboard menu for dashboard actions, represented by a gear icon at top right.

Dashboard Menu

The tile menu remains, with a smaller number of options. We've also added a new tile action, Copy Tile, which adds a copy of the current tile to the layout.

Tile Menu

Rearranging Tiles

A second mobile concern is the way we've provided for users to rearrange the tile layout: drag-and-drop. This works great on the desktop, but is problematic on Android and iOS mobile devices. Although we've previously added the Touch-Punch JavaScript library to get touch events working, the experience is still problematic--especially on iOS devices. We also experimented with adding some polyfills for HTML5 drag-and-drop, but none of this solved the issue. What we've decided to do, then, is replace drag-and-drop on mobile devices with a reordering dialog.

The Rearrange Tiles action now displays the following dialog on devices 1024 pixels wide or smaller:

Mobile Reorder Tiles Dialog

The user can quickly move tiles up or down by tapping the arrow buttons. As the tile order is changed, the dashboard layout updates live. Clicking Save commits the changes, and Cancel undoes them.

On the desktop, Rearrange Tiles continues to provide the same drag-and-drop experience we've had previously. Now every user can customize their dashboard layout easily, regardless of the device they're using.
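The swap behind the dialog's up arrow can be sketched roughly as follows. This is a hypothetical simplification: the real controller's moveTileUp lives on the controller and also triggers a re-render, whereas here the tiles array is passed in explicitly. Tile ids are strings, as in the unit tests shown elsewhere in this series:

```javascript
// Hypothetical sketch: move the tile with the given id one position earlier
// in the layout by swapping it with its predecessor.
function moveTileUp(tiles, id) {
    for (var i = 1; i < tiles.length; i++) {
        if (tiles[i].id === id) {
            var tmp = tiles[i - 1];
            tiles[i - 1] = tiles[i];
            tiles[i] = tmp;
            break;
        }
    }
}

var tiles = [
    { id: '1', title: 'Customers' },
    { id: '2', title: 'Orders' }
];
moveTileUp(tiles, '2');
// tiles[0] is now the Orders tile; tiles[1] is Customers
```

moveTileDown is the mirror image, swapping a tile with its successor.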

Role Support

Dashboards are all about providing meaningful information at-a-glance--but that information isn't necessarily meant for all eyes. To make ng-dashboard smarter about what it shows, we need awareness of a user's roles.

User Logins

In order to make it easy to test out how roles are working, our simple test project has been upgraded. The current user is displayed at top right, and by clicking on the name you can select a different login from the drop-down menu. You can choose between John Smith (an Admin), Marcia Brady (a Manager), and Stuart Downey (in Sales). The code will set a cookie and remember who you are; and if you're signing on for the first time, it will log you in as john.smith. There are no passwords to worry about.

Changing login

Representing Roles

Since our dashboard ships in a simple test project without real security, how will we handle role management? We're going to assume that whatever authentication/authorization system you have, each user can be said to be in one or more roles/departments--and that those roles can be represented by an array of string names, such as "Employee", "Manager", "Executive", "Sales", "Accounting", etc. In addition, we attach special significance to the role "Admin": this role gets access to advanced dashboard actions such as Make Default Layout. This scheme should be easily adaptable to your authN/authZ system.
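With roles represented as an array of strings, a role check is nearly a one-liner. The sketch below is illustrative; the Angular controller exposes a similar userInRole function (used later when marking tiles hidden), though its exact implementation may differ:

```javascript
// Hypothetical sketch: test whether a user's role array contains a role name.
function userInRole(roles, role) {
    if (!roles) return false;          // no roles at all: not in any role
    return roles.indexOf(role) !== -1; // simple membership test on string names
}

var roles = ['Employee', 'Sales'];
// userInRole(roles, 'Sales') is true; userInRole(roles, 'Admin') is false
```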

In our test project's MVC controller, a few user names are hard-coded (john.smith, an Admin; marcia.brady, a Manager; and stuart.downey, a salesman). This would more properly be driven by our database, but that work is still on the backlog.

Role Defaults for Dashboards 

Up until now, we have had two places to look for the user's dashboard when we load it. First, if the user has saved a custom edition of their dashboard, it will be in the DashboardLayout table under the user's username. If that isn't found, the code looks for the default dashboard for all users, stored in the same table under username 'default'. This was done with a query ordered by Priority that selected the top 1 match. This ensured a customized layout would be selected if one existed for the user; otherwise the default layout would be used.

All of that is working well, but it would be valuable to also store default layouts for various roles.
Imagine for example that you've set up the perfect dashboard for salespeople in your organization--how do you get it to all your salespeople, and to new salespeople who join the company in the future? Using the 'default' layout isn't a good idea, because not everyone in your organization is a salesperson; and a customized layout for one user is only available to that one user. What we need, then, is another level where a dashboard layout can be saved for a role. With today's update, the code now searches for a dashboard to show the user in a 3-level search:

1. User Saved Custom Layout: Use the saved custom dashboard whose username matches the current user.
2. Default Role Layout: Otherwise, use a saved dashboard whose username matches one of the user's roles.
3. Default Layout: Otherwise, use the default dashboard for all users (username='default').

Step 2 needs some explanation, because a user can be in multiple roles; if a user was in both the Manager and Sales roles, and dashboards were defined for each role, which one should be used? The way we handle that is to use the Priority field already built in to the DashboardLayout table. The default layout is priority 1, and saved user custom layouts are priority 10. That leaves priorities 2-9 for role-based layouts. If ng-dashboard found a Sales dashboard with priority 3 and a Manager dashboard with priority 5, it would choose the Manager dashboard, since the query orders by priority descending. This is the query used to select a matching dashboard based on priority:

SELECT TOP 1 DashboardId FROM DashboardLayout 
WHERE DashboardName='Home' AND
(Username=username OR Username IN (role-list) OR Username='default')
ORDER BY [Priority] DESC

Although the addition of role support is important and useful, note that we do not yet have UI support for it--another item for our backlog. That means, right now, you'd need to do some database work to define a role-based layout before you can use it. Probably the easiest way to do that is to first save a custom layout, then update the DashboardLayout record's priority and username (to be a role name).

With all of this in place, we now have a default dashboard layout for everybody and the ability to specify default layouts for different roles, all while preserving a user's right to customize their dashboard layout to their liking.

Restricting Tiles to a Role

It would be useful to be able to restrict some dashboard information to certain roles. For example, you might want to limit compensation information to Accounting, or perhaps limit order information to Sales, or restrict employee information to their manager. Although you could take the approach of designing different dashboards for different departments or roles, that doesn't work well in practice; it's more useful to be able to restrict things at the tile level, so that good dashboard layouts can be shared freely across the organization without fear of someone seeing something they shouldn't.

Our approach is two-fold. First, we've added a role column to the database DashboardQuery table. This allows us to indicate that a query requires a role, such as Manager. When the MVC controller loads queries to pass on to the Angular controller, it will not include any queries the user isn't authorized for.

Secondly, we've added a role property to tiles.  Here's the tile configuration dialog with the new Required Role input. If a role is specified, the tile will only be available to users in that role; if a user without the required role accesses the dashboard, the tile will not render.

Required Role in Tile Configuration Dialog

To make this work in Angular, we've made the following changes to our code:

• In the MVC controller, LoadDashboard now includes a list of all system roles in the dashboard object it returns. In our test project, these names are hard-coded; in a real application, they should be supplied by the authorization system. It also returns a filtered list of queries; any queries that require a role the current user doesn't have are not included.
Dashboard dashboard = new Dashboard()
{
    DashboardName = "Home",
    Username = username,
    IsAdmin = CurrentUserIsAdmin(Request),
    Tiles = new List<Tile>(),
    Queries = new List<DashboardQuery>(),
    Roles = new List<string>(),
    IsDefault = false
};

dashboard.Roles.Add("Accounting");
dashboard.Roles.Add("Admin");
dashboard.Roles.Add("Employee");
dashboard.Roles.Add("Executive");
dashboard.Roles.Add("Manager");
dashboard.Roles.Add("Marketing");
dashboard.Roles.Add("Manufacturing");
dashboard.Roles.Add("Sales");

...

using (SqlConnection conn = new SqlConnection(System.Configuration.ConfigurationManager.AppSettings["Database"]))
{
    conn.Open();

    // Load queries.

    DashboardQuery dashboardQuery = null;

    String query = "SELECT * FROM DashboardQuery ORDER BY Name";

    dashboard.Queries.Add(new DashboardQuery()
        {
            QueryName = "inline",
            ValueType = "number",
            Role = ""
        });

    bool addQuery = false;
    String[] roles = CurrentUserRoles(Request);

    using (SqlCommand cmd = new SqlCommand(query, conn))
    {
        using(SqlDataReader reader = cmd.ExecuteReader())
        {
            while(reader.Read())
            {
                dashboardQuery = new DashboardQuery()
                {
                        QueryName = Convert.ToString(reader["Name"]),
                        ValueType = Convert.ToString(reader["ValueType"]),
                        Role = Convert.ToString(reader["Role"])
                };

                addQuery = true;
                if (!String.IsNullOrEmpty(dashboardQuery.Role)) // don't add query if it requires a role the user doesn't have
                {
                    if (roles != null)
                    {
                        addQuery = false;
                        foreach(String role in roles)
                        {
                            if (role==dashboardQuery.Role)
                            {
                                addQuery = true;
                                break;
                            }
                        }
                    }
                }

                if (addQuery)
                {
                    dashboard.Queries.Add(dashboardQuery);
                }
            }
        }
    }

MVC Controller C# LoadDashboard code to return master role list

• The MVC controller GetUser function returns the user's roles
// /Dashboard/GetUser .... returns username, roles, and admin privilege of current user.

[HttpGet]
public JsonResult GetUser()
{
    // If ?user=<username> specified, set username and create cookie
    String username = Request.QueryString["user"];
    if (String.IsNullOrEmpty(username)) // if nothing specified in URL, ...
    {
        // ...check for existing username cookie
        if (Request != null && Request.Cookies.AllKeys.Contains("dashboard-username"))
        {
            username = Request.Cookies["dashboard-username"].Value;
        }
        else
        {
            username = "john.smith";    // else default to john.smith
            HttpCookie cookie = new HttpCookie("dashboard-username");   // Set cookie
            cookie.Value = username;
            Response.Cookies.Add(cookie);
        }
    }
    else
    {
        HttpCookie cookie = new HttpCookie("dashboard-username");   // Set cookie
        cookie.Value = username;
        Response.Cookies.Add(cookie);
    }

    User user = new User()
    {
        Username = CurrentUsername(Request),
        Roles = CurrentUserRoles(Request),
        IsAdmin = CurrentUserIsAdmin(Request)
    };
    return Json(user, JsonRequestBehavior.AllowGet);
}

MVC Controller C# GetUser code to return user roles

• The Data Service passes the roles to the controller, which holds the list in a roles variable.
// Load tile definitions and perform remaining initialization.

self.LoadDashboard = function () {
    if (!DataService.requiresPromise) {
        self.tiles = DataService.getTileLayout();
        self.queries = DataService.queries;
        self.roles = DataService.roles;
Angular controller storing queries in LoadDashboard

• The Angular controller's ComputeLayout function now sets a hidden flag on each tile. A tile is marked hidden if it requires a role the user does not have.
self.computeLayout = function () {
    if (self.tiles == null) return;

    var matrix = [];

    for (var r = 0; r < self.tilesdown; r++) {
        matrix.push([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]); // support up to 20 tile units across 
    }

    var numcols = self.tablecolumns();
    if (numcols < 1) numcols = 1;

    // This is used in template to render things like table <col> elements.
    self.tilesacross = numcols;
    self.tileunits = [];
    for (var u = 0; u < numcols; u++) {
        self.tileunits.push(u);
    }

    // set tile.hidden for each tile based on role assignments.

    var tile = null;
    for (var t = 0; t < self.tiles.length; t++) {
        tile = self.tiles[t];
        tile.hidden = false;
        if (tile.role) {
            if (!self.userInRole(tile.role)) {
                tile.hidden = true;
            }
        }
    }
Angular ComputeLayout function code to mark unauthorized tiles hidden 

• Lastly, the HTML template markup has an ng-if condition (line 2) which will skip rendering a tile if tile.hidden is true.
                <!-- Populated tile (data loaded) -->
                <div id="tile-{{tile.id}}" ng-if="tile.haveData && !tile.hidden"
                     class="tile" ng-class="tile.classes" ng-style="{ 'background-color': $ctrl.tileColor(tile.id), 'color': $ctrl.tileTextColor(tile.id), 'top': $ctrl.tileY(tile.id), 'left': $ctrl.tileX(tile.id), 'border': ($ctrl.configIndex==$index)?'dotted':'none', 'border-color': ($ctrl.configIndex==$index)?'white':'transparent' }"
                     style="overflow: hidden; position: absolute; display: inline-block"
                     draggable="true" ondragstart="tile_dragstart(event);"
                     ondrop="tile_drop(event);" ondragover="tile_dragover(event);">
                    <div class="dropdown" style="height: 100%">
                        <!-- tile menu -->
                        <div class="hovermenu">
                            <a href="javascript:void(0)" class="dropdown-toggle" style="text-decoration: none; color: inherit" data-toggle="dropdown">
                                <i class="fa fa-ellipsis-h" aria-hidden="true"></i>
                            </a>
                            <ul class="dropdown-menu" style="margin-top: -10px; margin-left:-144px !important; font-size: 16px !important">
                                <li><a id="tile-config-{{tile.id}}" href="#" onclick="configureTile(this.id);"><i class="fa fa-gear" aria-hidden="true"></i>  Configure Tile</a></li>
                                <li><a id="tile-config-{{tile.id}}" href="#" onclick="copyTile(this.id);"><i class="fa fa-clone" aria-hidden="true"></i>  Copy Tile</a></li>
                                <li><a id="tile-remove-{{tile.id}}" href="#" onclick="removeTileConfirm(this.id);"><i class="fa fa-trash-o" aria-hidden="true"></i>  Remove Tile</a></li>
                            </ul>
                        </div>
                        <a id="config-{{tile.id}}" ng-href="{{$ctrl.rearranging ? null : tile.link}}" style="color: inherit; text-decoration: inherit;">
                            <div style="overflow: hidden; white-space: nowrap"> {{tile.title}}</div>
                            <div style="position: relative; height: 100%">
                                <!-- COUNTER tile -->
                                <div ng-if="tile.type=='counter' && tile.height==1"
                                     style="text-align: center; position: absolute; left: 0; right: 0; margin: 0 auto; top: 25px">
                                    <div style="font-size: 72px">{{tile.value}}</div>
                                    <div style="font-size: 16px">{{tile.label}}</div>
                                </div>
HTML Markup code to Conditionally Render Tiles

Personalized Tiles

Much of the information you might track in a business dashboard--such as sales, orders, customers, inventory levels, etc.--is more useful to a user if it can be filtered to be personal. For example, imagine a tile that shows the count of orders, or a table tile listing open orders. If you're a salesperson, you would probably be more interested in those tiles if they listed your orders.

To support personalization, we'll make it possible for our data queries (stored in the DashboardQuery database table) to reference the current user's username. The symbol @username in a query will be automatically replaced by the current user name when the MVC controller executes a data query.

For example, we've previously shown an Orders counter tile that uses the Order Count data query. If instead we wanted a My Orders counter tile, we can create an Order Count (My Orders) query backed by this SQL:

SELECT COUNT(ord.OrderId) FROM [Order] ord INNER JOIN Employee emp ON emp.EmployeeId=ord.SalesPersonEmployeeId AND emp.Username=@username

We can then use this personalized query to define a My Orders tile, limited to the Sales role.

Personalized Tile My Orders

The resulting tile will show different results based on the current user:

My Orders tile for stuart.downey (a salesman)


My Orders tile for john.smith (an admin, no orders)

(no tile displayed)
My Orders tile for marcia.brady (not in Sales role, tile not visible)

Roles in Action

Now it's time to see this role support in action. Imagine you are Admin John Smith, and you have just created the following default dashboard for employees:
  1. A Customers counter tile, showing the count of customers. Available to all.
  2. An Orders counter tile, showing the count of orders. Available to all.
  3. A My Orders counter tile, showing the current user's order count. Restricted to Sales.
  4. A Customer Satisfaction KPI tile, showing average customer rating. Available to all.
  5. A My Open Orders table tile, listing the current user's open orders. Restricted to Sales.
  6. A Revenue Share by Store pie chart tile, showing revenue by store. Restricted to Sales.
  7. An Orders tile, listing all orders. Available to all.
  8. A Revenue by Store bar chart tile, showing revenue amounts by store. Restricted to Sales.
  9. A My Direct Reports table tile, showing current user's direct reports. Restricted to Manager.
Let's see how this dashboard is shown to different users, starting with our admin, John Smith. Because he is an admin, John gets to see all tiles. Because John is neither a manager nor a salesperson, the personalized tiles My Orders, My Open Orders, and My Direct Reports are empty as there is no data for John.

Default dashboard as viewed by Admin john.smith

Now, instead imagine the dashboard is being viewed by Stuart Downey, a sales executive who has the Sales role. His dashboard is shown below. Note he is seeing different data for his personalized tiles than John: he has direct reports, and he has orders.

Default dashboard as viewed by Sales Executive Stuart Downey

Finally, let's view the dashboard as Office Manager Marcia Brady. Marcia is not in the Sales role, so tiles like My Orders, My Open Orders, Revenue Share by Store, and Revenue by Store don't appear. But she is a manager, so she does see her direct reports listed in the My Direct Reports tile.

Default dashboard as viewed by Office Manager Marcia Brady

Our role-based features are working well: users see only what they are authorized to see, and much of the information is personalized for their scope of interest.

To see this for yourself, download the sample project and database and follow the instructions to use the SQL Server Data Service.

Summary

Today in Part 8 we achieved the following:

  • Improved the mobile experience
    • Made use of a meta viewport tag to set ideal page width on mobile devices
    • Changed the Rearrange Tiles implementation for mobile devices to use a reordering dialog
    • Added a dashboard menu and relocated some actions from the tile menu
    • Added a Copy Tile action to the tile menu
  • Added role support
    • Added the concept of multiple user logins and roles to the test project
    • Added support for default layouts for roles
    • Added ability to restrict a tile to a role
    • Added personalized tiles through the use of @username in data queries
In this update we've also made some progress on unit testing--but we're going to explore all that in Part 9.

AngularJS: How is it Holding Up?

And how are we feeling about AngularJS at this point? There's definitely good and bad:

  • Angular has some powerful directives (ng-if, ng-style, ng-repeat, etc.) that have been fun to leverage in our markup; however, some of these directives have quirks so occasionally this has been frustrating to get right.
  • Angular more or less forces you to modularize everything you do--you end up with separate controllers, services, and templates. That's good. On the other hand, we are loading many more discrete files now. It remains to be seen if we can take advantage of something like ASP.NET bundles to make loading of dependency JS files more efficient.
  • Angular is designed to facilitate unit testing. In theory, our controller and our services should all be easily testable. In practice, this is one of the things that has taken the longest to get working. There are a lot of caveats to writing Angular tests, which we'll get into next time.
  • There's a huge amount of online information and support for Angular. But you get a lot of conflicting advice. Part of the reason for that is there's so much exposed in Angular in terms of layers, APIs, and JavaScript objects. There are many ways to do things. Perhaps that's good in some ways, but when you're trying to get a problem solved it's frustrating to have to sort through dozens of suggestions and perspectives before you land on something that works.
So far, AngularJS has been useful but also frustrating at times.

Download Code
Dashboard_08.zip
https://drive.google.com/open?id=11bej1Wf_YmqqW0Saed-J0SfTy2ERR1Qu

Next: An AngularJS Dashboard, Part 9: Unit Tests