Super Simple systemJS Module Loader Tutorial

I’m currently in the process of building a new web application.  I want to use a module loader.  I have used RequireJS as an AMD loader, but with ES2015 (ES6), which supports module loading natively, around the corner, I would like to use something that is somewhat future proof.  So, I’ve decided to investigate SystemJS.

To learn SystemJS, I wanted to create a very simple application with no frills that uses ES5. Once I have this figured out, I will start using ES6 and/or TypeScript.


SystemJS is a universal module loader.  It can load modules synchronously or asynchronously.  In this post I’m going to load a very simple global module dynamically. SystemJS can load these additional module formats:

  • esm: ECMAScript Module (previously referred to as es6)
  • cjs: CommonJS
  • amd: Asynchronous Module Definition
  • global: Global shim module format
  • register: System.register or System.registerDynamic compatibility module format

Assumptions of the Reader

I assume the reader has the following knowledge:

  • Some knowledge of Node.js and npm
  • JavaScript
  • Chrome Developer Tools

I assume that since you want to learn about SystemJS, which is a more advanced tool, you understand npm and how to install libraries using package.json.  I also assume you know how to create a server to run the examples on.  In this example, I’m using http-server, which I installed using npm.

Tools and Versions I Used

Here’s the structure of the application:

  • package.json: used to install SystemJS
  • .gitignore: can be ignored.  It’s used with the Git source code repository.
  • node_modules: where npm installs SystemJS
  • main.js: the module to be dynamically loaded and executed by SystemJS

All source code can be found here:

Example without using systemJS:

Example using systemJS:

First run “npm install”. This will look in package.json and find dependencies to install.  The only client dependency that is required is systemjs.

npm install

file structure with systemJS

Let’s now display a very simple HTML page.  Here’s the code:
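The page from the post’s screenshot isn’t reproduced here, so below is a minimal sketch of what index.html might look like at this stage. The title and the div’s id are my assumptions, not taken from the original:

```html
<!DOCTYPE html>
<html>
<head>
    <title>SystemJS Tutorial</title>
</head>
<body>
    <!-- a placeholder element the module will update later -->
    <div id="message">Hello, the server is working.</div>
</body>
</html>
```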

Let’s do a sanity check to validate that the server is working. You can use any server; I have decided to use http-server for its simplicity and lack of bloat.  In the images below, you will notice I installed http-server globally, and then ran it by just calling “http-server”.  I then loaded the page using the port that http-server provided.

npm install -g http-server


confirm http-server is running

Everything should be working now.

Let’s create a module. I’m using the term module somewhat loosely here.  Since the code is in a separate file, systemJS will treat it like a module. Here I’m using the Revealing Module Pattern.

Let’s create a new folder called app and add the file main.js. Here’s the code to include in main.js.

global module
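The code from the post’s screenshot isn’t shown here, so this is a sketch of what main.js plausibly contains, using the Revealing Module Pattern the post names. The identifiers domUpdater and updateMessage, and the message text, are illustrative assumptions (the post later refers to a domUpdater):

```javascript
// main.js — a global module using the Revealing Module Pattern.
// Assumes the page has a <div id="message"> to write into.
var domUpdater = (function () {
    // private state, hidden inside the closure
    var message = "Hello from main.js, loaded as a module";

    // writes the private message into the page
    function updateMessage() {
        document.getElementById("message").textContent = message;
    }

    // reveal only the functions callers need
    return {
        updateMessage: updateMessage
    };
})();
```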

We aren’t using SystemJS or doing module loading yet, but we will soon. For the time being, let’s see how main.js is loaded.

Add a reference in index.html to main.js. This can be added in the <head> element.  Then add the script that calls the domUpdater after the last div.  The script must come after the div because of how the HTML page is loaded and executed.
index html without using systemJS
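Since the screenshot isn’t reproduced, here is a hedged sketch of index.html at this point. The file paths and element id are assumptions based on the folder structure described above:

```html
<!DOCTYPE html>
<html>
<head>
    <title>SystemJS Tutorial</title>
    <!-- main.js is fetched as soon as the page loads -->
    <script src="app/main.js"></script>
</head>
<body>
    <div id="message"></div>
    <!-- must come after the div, which doesn't exist until the parser reaches it -->
    <script>
        domUpdater.updateMessage();
    </script>
</body>
</html>
```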

Let’s run the web page and see how the modules are loaded.  In the image below you will see that in Chrome Developer Tools I added a breakpoint in index.html. This allows me to see what files have been loaded in the Network tab. As we can see, the main.js file has been loaded; it is loaded as soon as the page is. Imagine if you had 20 files that needed to be loaded. Bundling can take care of this somewhat, but what if you had multiple bundles and not all of them are needed when the page loads?

debug web page without systemJS

Now let’s do some module loading using systemJS.

In index.html, remove the script in the <head> that references main.js.  Also remove the script after the div that calls domUpdater.

Now we need to add the SystemJS library and load main.js using SystemJS.

In index.html, reference the system.js file in the <head>.

index.html with systemJS and module loading
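Again, the screenshot isn’t reproduced, so this is a sketch of what the SystemJS version of index.html likely looks like. The path to system.js and the use of a then-callback around console.debug are assumptions inferred from the debugging steps described below:

```html
<!DOCTYPE html>
<html>
<head>
    <title>SystemJS Tutorial</title>
    <!-- only the loader is referenced up front; main.js is no longer here -->
    <script src="node_modules/systemjs/dist/system.js"></script>
</head>
<body>
    <div id="message"></div>
    <script>
        // System.import fetches app/main.js on demand and returns a promise
        System.import("app/main.js").then(function () {
            console.debug("main.js loaded");
            domUpdater.updateMessage();
        });
    </script>
</body>
</html>
```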

Now, refresh the page, put a breakpoint on the console.debug statement, and refresh again.  If you look at the Network tab in Chrome Developer Tools, you will see that main.js has not been loaded yet.

debug module loading and systemJS

Continue the application and you will see in the Network tab that main.js is loaded dynamically in the process.

debug systemJS and dynamic module loading


How I Navigated to AngularJS

This article is about the process I went through of selecting AngularJS as my MV* JavaScript framework of choice. I will discuss the evolution I went through from learning jQuery, Knockout, and Backbone.js to eventually settling on AngularJS.

I’ve been programming in JavaScript since the late ’90s. JavaScript is not going anywhere. JavaScript is very popular today, and I believe it will only get more popular. For programming a browser, JavaScript is the least common denominator. This is the only language that all browsers support. But JavaScript has many pitfalls; just read the book JavaScript: The Good Parts by Douglas Crockford. In the early years, I used JavaScript mostly for validation and simple DOM manipulation. Writing different code for each browser sucked. Back in the early years JavaScript seemed like a toy; there weren’t many best practices, and the expectations of what could be accomplished with JavaScript were limited.
Continue reading “How I Navigated to AngularJS”

Setting-up AngularJS, Angular Seed, Node.js and Karma

I’ve used AngularJS for a few months, but I have no knowledge when it comes to testing AngularJS apps. I have a subscription to and wanted to go through their online video training course for AngularJS. Specifically with this course I want to learn how to use Karma to do testing.

I’m usually extremely happy with their courses, but at the beginning of this course I was somewhat disappointed. In Section 7 (“Angular Seed”) new technologies were introduced. The author introduced Angular-Seed, Node.js and Karma. I’ve worked with Node.js, but there are probably many people who have never used it. I believe the author took for granted that the student knew Node.js. For those who have never used Node.js this could be an obstacle.

When I started the course I didn’t have Node.js installed. I installed Node.js and things didn’t work as intended. I couldn’t run tests in Node.js because Karma was not installed. Once I installed Karma, Chrome wouldn’t launch in the tests.

With all the issues I was having, I wondered if others were having the same problems. If others were having similar issues, were they discouraged from continuing with the course? So I decided to create this blog post to help others get started with AngularJS, Angular-Seed, Node.js and Karma.

Here are my assumptions:

  • You are a Windows developer
  • Google Chrome browser is installed
  • You will use the Windows Command Prompt instead of Bash
  • You have little or no knowledge of Node.js
  • You have little or no knowledge of Karma
  • You will use Node.js as the web server. I will step you through this.
  • You have access to an IDE. Visual Studio will be fine; Notepad++ would also probably work. I use JetBrains WebStorm. JetBrains has a free 30-day trial for WebStorm.

Here are the high-level steps we will follow:

  1. Download and install Angular-Seed
  2. Download and install JetBrains WebStorm (Optional)
  3. Download and install Node.js
  4. Confirm Node.js is installed
  5. Run Karma Unit Tests – will fail because Karma is not installed
  6. Install Karma
  7. Run Karma Unit Tests again – Will fail because Chrome will not start
  8. Add System Variable to Windows
  9. Confirm System Variable Were Added
  10. Run Unit Tests again – should succeed
  11. Confirm that unit tests are being tracked by Karma
  12. Start Web Server by using Node.js

Continue reading “Setting-up AngularJS, Angular Seed, Node.js and Karma”

Table Scans and Index Scans affect more than the table they access

SQL Server only queries data in memory (data cache). If the data needed is not cached, SQL Server will retrieve the data from disk and load it to data cache, and then SQL Server will use the data from the cache.

I have a general guideline that Table Scans and Index Scans are bad. This may not be an issue for small tables, but for large tables scans can cause significant performance issues. For example, if a query accesses a table that is 20 GB in size and a scan occurs, then there is a good chance that all data for that entity will be loaded in memory. If this data is not in memory, then SQL Server must fetch the data from disk and load it into memory. Fetching data from disk is usually an expensive IO process. If there is not enough available space in the data cache, SQL Server will remove (flush) data from the cache to make room for the data that was retrieved from disk.

Data that is used often and cached can be removed from the cache due to poor queries or the lack of an index. Here’s a contrived example. We have 2 tables. The 1st table is Orders; it contains 10 million records and requires 3 GB of disk space. The Orders table is extremely important, is used in most queries, and queries that access it must return very quickly. The 2nd table, TaskLog, contains 200 million records and requires 7 GB of disk space. For simplicity, neither table has any nonclustered indexes.

Let’s presume that the server has 8 GB of memory. If all queries are executed on the Orders table, eventually most of the data from the Orders table would be in the data cache. There would be little need for SQL Server to access the disk. Queries would execute fairly fast.

Now, UserA queries the TaskLog table. The query gets counts of TaskType (see example query below). When the user executes this query, a table scan is used. Since the data is not in memory, SQL Server will transfer it from disk to memory. The problem is that there is not enough memory to contain both the Orders and TaskLog tables, so SQL Server will flush Orders data from memory and replace it with the TaskLog data.

SELECT Count(TaskType)
FROM TaskLog

Now the issue is that any queries that need to access Orders will retrieve data from disk. This will incur a performance penalty.

There are many options to solve this problem; indexes could be created on both the Orders and TaskLog table, more memory could be added, and there are probably other options.

But how do you identify whether memory allocation is a problem? Below is a query that retrieves the space used by all clustered and nonclustered indexes. It will show the size of the entity on disk and how much of the entity is in memory.

SELECT
    PhysicalSize.TableName,
    PhysicalSize.IndexName,
    PhysicalSize.Index_MB,
    BufferSize.Buffer_MB,
    CASE
       WHEN Index_MB != 0 AND Buffer_MB != 0 THEN
            CAST(Buffer_MB AS Float) / CAST(Index_MB AS Float)
       ELSE 0
    END IndexInBuffer_Percent
FROM (
    SELECT
        OBJECT_NAME(i.OBJECT_ID) AS TableName,
        i.[name] AS IndexName,
        i.index_id AS IndexID,
        SUM(a.used_pages) / 128 AS 'Index_MB'
    FROM sys.indexes AS i
    JOIN sys.partitions AS p ON
        p.OBJECT_ID = i.OBJECT_ID
        AND p.index_id = i.index_id
    JOIN sys.allocation_units AS a ON
        a.container_id = p.partition_id
    GROUP BY i.OBJECT_ID, i.index_id, i.[name]
) PhysicalSize
JOIN (
    SELECT
        obj.[name] AS TableName,
        i.[name] AS IndexName,
        obj.[index_id] AS IndexID,
        count_BIG(*) AS Buffered_Page_Count ,
        count_BIG(*) /128 as Buffer_MB --8192 / (1024 * 1024)
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN (
        SELECT object_name(object_id) AS name
            ,index_id ,allocation_unit_id, object_id
        FROM sys.allocation_units AS au
        INNER JOIN sys.partitions AS p ON
            au.container_id = p.hobt_id
            AND (au.type = 1 OR au.type = 3 OR au.type = 2)
    ) AS obj ON
        bd.allocation_unit_id = obj.allocation_unit_id
    LEFT JOIN sys.indexes i on
        i.object_id = obj.object_id
        AND i.index_id = obj.index_id
    WHERE database_id = db_id()
    GROUP BY obj.[name], obj.index_id, i.[name]
) BufferSize ON
    PhysicalSize.TableName = BufferSize.TableName
    AND PhysicalSize.IndexID = BufferSize.IndexID
ORDER BY BufferSize.Buffer_MB DESC

Here’s a sample result from the query (names have been changed to protect the innocent):

Table Name Index Name Index MB Buffer MB Index In Buffer Percent
Table1 PK_Table1 211875 20586 10%
Table2 PK_Table2 3711 3348 90%
Table3 PK_Table3 27689 2246 8%
Table4 IX_Table4_A 52181 1675 3%
Table5 PK_Table5 278409 1436 1%
Table4 IX_Table4_B 28585 1418 5%
Table2 IX_Table2_A 725 745 103%
Table6 PK_Table6 572 572 100%
Table3 IX_Table3_A 15701 493 3%
Table3 IX_Table3_B 17756 467 3%
Table7 PK_Table7 461 461 100%

Table2 is equivalent to our Orders table in the example. It’s very important that results from this table are returned fairly fast. As we can see, 90% of the data for PK_Table2 is stored in memory; this is good.

PK_Table1 is 211 GB, and 20 GB of it is in memory. For this example, speed in retrieving data from this table isn’t that important, and 20 GB in memory seems like too much. This could be an indication that a scan is being used to access this data, or that someone is running a query they shouldn’t. This provides some good information to further my investigation.

Having one bad query can affect not just the performance of one table, but the performance of the system as a whole.

How-to deploy application to Windows Azure Compute Emulator with CSRUN

It’s a pain that every time I want to start a Windows Azure application locally, I must first start Visual Studio and the debugger. In this post I will describe how to start an application in the Windows Azure Compute Emulator without using the Visual Studio debugger. This will work for an application that is a Web Role or Worker Role. There are multiple ways to do this, but I believe this may be the simplest.

In this post I will start from the very beginning of creating a new Windows Azure Cloud Service project and creating a default ASP.NET MVC 4 website. If you want to skip all the setup and get to the meat of csrun, then scroll to the bottom of the post.

Things Needed

  • Visual Studio 2012
  • Windows Azure SDK

Let’s get started.
Continue reading “How-to deploy application to Windows Azure Compute Emulator with CSRUN”

Re-Learning Backbone.js – Require.js (AMD and Shim)

In this post we are going to learn how to use Require.js with Backbone.js and Underscore.js.

This post builds on the Re-Learning Backbone.js series.

As usual, the examples in this tutorial are extremely simple. We have one goal here, and that is to load Underscore.js and Backbone.js using Require.js.

We are going to start out with an example that doesn’t function correctly. Don’t worry, I believe it’s important to show you the evolution of creating an application from the very beginning to a working version. We will take very small steps to get where we need to go.

Here are the libraries and their version that we will use in this post:

  • jQuery – version: 1.8.3
  • RequireJS – version: 2.1.2
  • Backbone.js – version: 0.9.9
  • Underscore.js – version 1.4.3

Here’s the source code for the example below: Source

Getting Started

To get started we need a structure for our website such as this:

We also need the following libraries jQuery, Backbone.js, Underscore.js and Require.js. These libraries should be stored in the “libs” directory.

First create two files. The first file will be called Require1.html and this file will be in the root directory. Add the following code to the file.

Here’s the code for Require1.html

<html>
    <script data-main="scripts/main1" src="scripts/libs/require.js"></script>
</html>

All the HTML file does is tell RequireJS to execute the main1.js file in the “scripts” directory.

The second file will be called main1.js and this file will be in the “scripts” directory. Add the following code to the file.

require.config({
    urlArgs: "bust=" + (new Date()).getTime(),  //Remove for prod
    paths: {
        "jquery": "libs/jquery-1.8.3",
        "underscore": "libs/underscore",
        "backbone": "libs/backbone"
    }
});

require(["jquery", "underscore", "backbone"],
    function ($, _, Backbone) {
        console.log("Test output");
        console.log("$: " + typeof $);
        console.log("_: " + typeof _);
        console.log("Backbone: " + typeof Backbone);
    });

The main1.js file has two parts: the config method and the require method. The config method is used to set up RequireJS. The config is not mandatory, but it does simplify the code. In the config we included only the “paths” option, but there are many other options available; to see a list, go to RequireJS Config Options. In the paths option we identify the modules that will be needed. Each module file is in the “scripts/libs” directory. You will notice that each JavaScript file that is referenced does not include the “.js” extension; RequireJS assumes that all files are scripts, so the “.js” is not needed. The order of the values in the config paths is not important; the files can be in any order. If we wanted, we could have put backbone first.
Continue reading “Re-Learning Backbone.js – Require.js (AMD and Shim)”

Browser Script Loading

In this post I will discuss how scripts are loaded and executed by the browser.

It’s common to have scripts loaded by the browser using the script tag (<script>) with a src (source) attribute. In the old days (the early versions of IE, Firefox, and Chrome), script tags would load and execute synchronously. Today newer browsers support downloading scripts in parallel; the scripts are still executed in the order they are declared.

Here’s a list of browser support for asynchronous downloading of scripts:

The previous image is from BrowserScope.

In the following example we will load 6 scripts (note the script at the bottom). The order of the scripts does matter: since backbone.js depends on underscore.js, underscore.js must be declared before backbone.js. Here’s the code that will be used to help us understand script loading.
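A sketch of what that page could look like is below. The filenames are placeholders of my own; the only point carried over from the text is that underscore.js is declared before backbone.js, and that the inline script at the bottom runs only after every script above it has executed:

```html
<html>
<head>
    <script src="libs/jquery.js"></script>
    <!-- underscore.js must come before backbone.js, which depends on it -->
    <script src="libs/underscore.js"></script>
    <script src="libs/backbone.js"></script>
    <script src="scripts/app.js"></script>
    <script src="scripts/helpers.js"></script>
</head>
<body>
    <!-- the 6th script: runs after all the scripts above have executed -->
    <script>
        console.log("All scripts declared above have run by now.");
    </script>
</body>
</html>
```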
Continue reading “Browser Script Loading”

Re-Learning Backbone.js – Nested Views

Previously in this series we have learned about views and collections. Now let’s learn about creating nested views based on a collection.

Most, if not all, of the concepts learned in the previous Re-Learning Backbone.js tutorials will be used in this blog post. As in previous posts, we will start off very simple and take very small steps to get to our end goal. The reason to take all these steps is to make sure each addition to our solution works. This will be a 7-part post.

For this post our end goal is to create a very simple list of movies. Each individual movie displayed will be managed by an individual Backbone view. There will be another view that creates and manages the child views. I have bigger plans than this, but first wanted to do something simple.

Continue reading “Re-Learning Backbone.js – Nested Views”

Re-Learning Backbone.js – Events (Pub-Sub)

Since we are here to learn about Backbone.js, we are going to use the built-in feature of Backbone called Events. Backbone.js Events is a feature that provides Pub-Sub. As usual, I’m going to attempt to keep this as simple as possible. To provide a basic understanding of Pub-Sub, we will not work with views, models or collections; we will only work with Backbone Events. My goal is to keep this extremely simple, and the concepts that you learn here can assist you when building more complex, maintainable, extensible, flexible, and other “-ilities” websites.

Below is an example of a prototypical webpage. In a webpage like this, anytime the user performs an action, such as login, search, or sort, the page refreshes. One reason we are learning Backbone.js is so that we can provide a better experience to the user by building single page applications (SPAs).

If we created an application like the following with Backbone.js, there could be many views (items in red boxes). In a webpage like this, the developer may define in code that the Search view is aware of the Movie List view, or that the Login view is aware of the Recommendation view. For example, if the user logs in, the Login view will directly notify the Recommendation view of the login. This is fine for a simple SPA. But if this is done on a complex website like Trello, which is created with Backbone.js, maintainability, extensibility, and other “-ilities” may become an issue.

For example, what happens if management wants to replace the Recommendation view with a new Friends view? Now the developer must change the code in the Login view to update the Friends view. In this example the developer has tightly coupled the Login view to the Recommendation view. Probably not the best decision the developer has made.

Here’s a few quotes:

“Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically” Gang of Four book on Design patterns

“Publishers are loosely coupled to subscribers, and need not even know of their existence. With the topic being the focus, publishers and subscribers are allowed to remain ignorant of system topology. Each can continue to operate normally regardless of the other. In the traditional tightly coupled client–server paradigm, the client cannot post messages to the server while the server process is not running, nor can the server receive messages unless the client is running. Many pub/sub systems decouple not only the locations of the publishers and subscribers, but also decouple them temporally. A common strategy used by middleware analysts with such pub/sub systems is to take down a publisher to allow the subscriber to work through the backlog (a form of bandwidth throttling).” WikiPedia –

“The general idea behind the Observer pattern is the promotion of loose coupling (or decoupling as it’s also referred as). Rather than single objects calling on the methods of other objects, an object instead subscribes to a specific task or activity of another object and is notified when it occurs. Observers are also called Subscribers and we refer to the object being observed as the Publisher (or the subject). Publishers notify subscribers when events occur.” Addy Osmani –

This is somewhat techno babble, but once you understand the Pub-Sub or Observer pattern, these definitions make a lot of sense. They may not help a lot here, but hopefully we’ll get you to the point where these definitions do make sense.

So forget about the example above. We are going to use a simpler example. Let’s assume we have a security system. The security system is made up of 3 key parts: a door (publisher), a control panel (hub), and customer service (subscriber). Anytime the door (publisher) opens, customer service (subscriber) will be notified. But we don’t want to tightly couple these objects together. We need a mediator that manages the subscribers and the publishers, and this is where the control panel (hub) comes in.
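The door / control panel / customer service relationship can be sketched with a tiny hand-rolled hub. Backbone.Events gives you this same on/trigger style API for free; the hub below is only a stand-in so the idea is visible without loading Backbone, and all the names in it are mine, not from the post:

```javascript
// controlPanel is the hub (mediator): it tracks subscribers per event name.
var controlPanel = {
    channels: {},
    // subscribe: customer service registers interest in "door:open"
    on: function (event, callback) {
        (this.channels[event] = this.channels[event] || []).push(callback);
    },
    // publish: the door announces it opened; the hub notifies subscribers
    trigger: function (event, data) {
        (this.channels[event] || []).forEach(function (callback) {
            callback(data);
        });
    }
};

// customer service (subscriber) — knows nothing about the door object
controlPanel.on("door:open", function (door) {
    console.log("Customer service notified: " + door + " opened");
});

// door (publisher) — knows nothing about customer service
controlPanel.trigger("door:open", "front door");
```

Notice that the door and customer service never reference each other; swapping the subscriber for a new one requires no change to the publisher.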
Continue reading “Re-Learning Backbone.js – Events (Pub-Sub)”

Re-Learning Backbone.js – Collections

Backbone.js collections are used to store and manage a group of similar or related objects. If all we wanted to do was store related objects we could use a JavaScript array, but Backbone.js Collection provides an infrastructure that allows us to manage the set of data more effectively and efficiently.

As usual, I will attempt to keep the sample very simple. We will focus on collections, but we will need the assistance of models in this example.

In the following example we are creating 2 models and adding them to a collection.

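The embedded code didn’t survive here, so below is a hedged sketch of what the example likely looks like. The Movie model and the collection follow the text; the movie titles and attribute names are my own. It assumes Backbone and Underscore are already loaded on the page:

```javascript
// Declare a Movie model class — nothing special here.
var Movie = Backbone.Model.extend({});

// A collection that manages Movie models.
var MovieCollection = Backbone.Collection.extend({
    model: Movie
});

// Create 2 models and add them to the collection.
var movie1 = new Movie({ title: "The Matrix" });
var movie2 = new Movie({ title: "Inception" });

var movies = new MovieCollection();
movies.add([movie1, movie2]);

console.log(movies.length); // 2
```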

In the first section of the code we declare a Movie model class. There’s nothing special here.
Continue reading “Re-Learning Backbone.js – Collections”