One kubectl context per shell session

Context switching is no fun, be it kernel/user mode switching, multitasking at work, or changing between kubectl contexts. It's an overhead that should be eliminated, or at least reduced.

I deal with multiple Kubernetes clusters on a daily basis. Without some nice tools/tricks, switching between these clusters is tedious and dangerous. If not careful, a deadly command could be carried out on a totally unexpected cluster.

What I have today, in terms of tools, are kube-ps1, kubectx and kubens. In zsh, the current context and namespace show up right in the prompt.

These are all good. No kidding, these goodies have been making my life so much easier. It's very easy to switch between contexts, and the current context always shows in the prompt. But there is one problem they don't solve: the constant context switching. Even though each switch takes little effort, it adds up when you have to do it hundreds of times.

Every kubectl context switch is global. It is an actual change to the kubeconfig file, after all. It would be really nice if we could stick to one context in a shell session and a different context in another session. Recently, I discovered a technique that achieves this. Simply add this snippet to the .zshrc file:

# kubeconfig per session
file="$(mktemp -t "kubectx.XXXXXX")"
export KUBECONFIG="${file}:${KUBECONFIG}"
cat <<EOF >"${file}"
apiVersion: v1
kind: Config
current-context: ""
EOF

It creates a temporary kubeconfig file for each zsh session, which holds that session's context information without impacting any other sessions.
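One small addition of my own, not from the original snippet: the scratch files accumulate over time, so it's worth deleting the current session's file when the shell exits.

```shell
# Same trick, plus cleanup: delete this session's scratch kubeconfig on exit
file="$(mktemp -t "kubectx.XXXXXX")"
export KUBECONFIG="${file}:${KUBECONFIG}"
trap 'rm -f "$file"' EXIT
```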

A game changer.


Execute a bash script via C#/.NET Core

With .NET Core now being a cross-platform framework, it's very easy to invoke a Bash script from C# code. It's not a common practice, but in cases where a .NET library or REST/RPC API is lacking, being able to run a script out-of-process is valuable. So here is a nice extension method that I wrote and find a joy to call.

To call the method, one can simply do, e.g.:
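Neither the method nor the call survives in this copy of the post, so here is a sketch of both. This is my reconstruction, not the author's original code; the class and method names are assumptions.

```csharp
// Sketch: running a Bash command from .NET Core via System.Diagnostics.Process.
using System.Diagnostics;
using System.Threading.Tasks;

public static class BashExtensions
{
    public static async Task<string> ExecuteBashAsync(this string script)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            // naive quoting; a real implementation should escape more carefully
            Arguments = $"-c \"{script.Replace("\"", "\\\"")}\"",
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };
        using (var process = Process.Start(psi))
        {
            var output = await process.StandardOutput.ReadToEndAsync();
            process.WaitForExit();
            return output;
        }
    }
}
```

With an extension method of this shape, the call site reads naturally, e.g. `var result = await "echo hello".ExecuteBashAsync();`.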


echo vs printf


In bash, or generally the family of shell languages, echo and printf are often used to output messages to the screen (or terminal, or tty, or stdout, to earn a few more geek points…). It mostly doesn’t matter when to use which. In fact, most of the time, echo is used.

But here is a case that will bite if one doesn’t understand a little more about the details.

When preparing a Kubernetes secret yaml file, the secret data itself needs to be base64-encoded, e.g. the password value in the yaml snippet below.

apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
data:
  password: bWFnaWNfcGFzc3dvcmQ=

To generate this base64 string, one often does this in a shell: echo "magic_password" | base64 -, then copies the output to the yaml file. Guess what: soon after applying this yaml to the cluster, one is in for long hours of investigating authentication failures.

How the heck could the password be bad? they ask, scratching their head.

The devil is in the details. Notice the differences between these two commands:

> echo "magic_password"
magic_password
> echo -n "magic_password"
magic_password%

The % in the second command's output is zsh's way of marking a partial line, i.e. one ending with no newline character. It also means that the first command's output ended with one, which gets fed into the downstream base64 command. Now you see where the problem is? An invisible newline char finds its way into the password via echo. No wonder all those password errors.
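The difference is easy to see by comparing the two encodings directly (a quick sketch; any POSIX shell with base64 will do):

```shell
# echo appends a newline, so the encoded secret differs by one byte
echo "magic_password" | base64      # bWFnaWNfcGFzc3dvcmQK  <- "\n" encoded too
printf "magic_password" | base64    # bWFnaWNfcGFzc3dvcmQ=  <- what we want
```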

So the -n switch solves the problem? Yes, but it's not really recommended: the -n behavior is not consistent across systems (POSIX sh's echo, for one, doesn't guarantee it). The truly portable way is printf "magic_password". printf never outputs an extra newline unless the format string explicitly asks for one, as in printf "hello\n".



An alternative of kubectl patch

Usually, if a resource needs to be updated in place in Kubernetes, a few options are available. If the resource was created using kubectl create/apply -f, one just needs to update the yaml file and apply -f it again. This is the option I see used most.

However, I had a secret created with --from-literal; I didn't have a yaml file for it. In this case, lots of articles suggest kubectl patch. Looking at the documentation, oh boy, I don't know you my friend, I literally got a headache.

So I quickly moved on and figured out this little but nice trick to complete the task:

kubectl create secret generic my-super-password --from-literal=password=12345678 --dry-run --output yaml | kubectl apply -f -


Negation in ssh config

A little trick I learned today.

So the Yubikey can't be cooler when it comes to securing your private key and, you know, all sorts of identity/authentication-related stuff. At work, its use is enforced via ~/.ssh/config using PKCS11Provider /Library/OpenSC/lib/

But there are cases where the Yubikey is not used. E.g., to git clone from a repo where SSH is used with a regular key pair. A simple negation entry can solve this need:

# Use my regular identity for Azure DevOps
Host !ssh.dev.azure.com *
    PKCS11Provider /Library/OpenSC/lib/
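For the Azure DevOps host itself, a plain entry then picks up the regular key. The host name and key path below are my assumptions, not from the original post:

```
Host ssh.dev.azure.com
    IdentityFile ~/.ssh/id_rsa
```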


A lazy loading solution for Angular 1.x

In this post, I'm going to show you a solution for lazy loading Angular 1.x modules. Angular 1.x doesn't support it out of the box, and it's a very critical feature for many large applications dealing with serious business.

The demo project used for this post can be found here.

Isn't this problem already solved by Angular 2 and Aurelia?

Some of you might ask: given that Angular 2 is already in beta, and there is another even better framework called Aurelia almost ready for its first release, why do we still need to care about Angular 1.x? There are indeed some valid reasons for that.

  • Many existing Angular 1.x projects will just not migrate to the new framework
  • Both Angular 2 and Aurelia are just in beta stage, and it'll take time for the majority to be confident enough to start using them on new critical projects
  • etc.

So this solution will still be helpful for at least a while.

And a bonus point: in this solution, I'm also gonna show you how to write ES6/ES2015 code and use the systemjs loader even in today's Angular 1.x projects. Another bonus: the lazily loaded modules are also well bundled using systemjs-builder, so you can have a seamless workflow for both development and production environments.

In the rest of this post, if not explicitly declared, by the term Angular, I’ll just mean Angular 1.x.

Why does it matter?

It's funny that Angular fosters modular design/separation of concerns for large client applications, but doesn't provide a lazy loading story. The module meta language it provides is far from ideal, but it still works (the plain ES6/ES2015 module is the one true king of the module kingdom).

Modular design helps with a lot of things, including team collaboration, maintainability, etc. But it doesn't really help in production if all the nice modules have to be loaded entirely up front for the app to run.

In reality, we want to load only the needed modules initially for a faster boot experience, and lazily load the other modules when the user triggers the related functionality. And this really matters for the performance of most serious applications.

All right then, how?

So you are still interested in this offering. Great, let’s get to the details. In order to achieve this lazy loading goal, three problems have to be solved:

  1. When, where and how is a module going to be triggered to load?
  2. How is a module going to be actually loaded?
  3. Once the module is loaded, how should it be registered to Angular, so that it can be used down the road?

I'll answer these three questions in the following sections. But first, let's imagine a demo project so we can code it up; it'll be much easier to see real working code than to just read a dry post.

The little demo project

We'll have this structure for the demo. Logically, the app will have a homepage (the initial load) from which we can link to two other lazy-loaded pages (powered by Angular): the contact page and the about page.

The app.* files will serve the homepage as the main entry point of the app. In each lazy-loaded module, we'll have all of its Angular resources defined in a self-contained way and wire them all up in the respective module.js, which, as you'll see later, also serves as the bundling point.


Without further ado, let's get to resolving the three problems to lazy load Angular modules.

The trigger

In a JavaScript client app, it usually takes a router component to serve navigation. It is natural to think that if we can somehow extend the router, we can trigger the actual loading when a navigation is requested and register the loaded modules with Angular. And this is indeed true for our solution. We'll use ui-router to easily define the lazy loading points and seamlessly wire up with systemjs to do the actual loading work.

We favor ui-router over ng-route because it provides more convenient lazy loading support, which in turn comes from the ui-router-extras project and its future states. Following is a snippet of what the wire-up looks like.
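The original snippet didn't survive in this copy of the post; the sketch below shows the rough shape under my assumptions (state names, paths and the futureState fields beyond stateName/url/type are made up, not the author's code):

```javascript
// A sketch only; the real wire-up lives in the demo project's app.js.
angular.module('app').config(function ($futureStateProvider) {
  // the systemLazy state factory delegates the actual loading to
  // SystemLazyLoadService (covered in the next section)
  $futureStateProvider.stateFactory('systemLazy',
    ['futureState', 'SystemLazyLoadService',
      function (futureState, loader) {
        return loader.load(futureState.src, futureState.exportName);
      }]);

  function addSystemLazyState(stateName, url, src, exportName) {
    $futureStateProvider.futureState({
      stateName: stateName,
      url: url,
      src: src,
      exportName: exportName,
      type: 'systemLazy'
    });
  }

  // two lazily loaded modules; the export names match their module.js files
  addSystemLazyState('contact', '/contact', 'contact/module.js', 'contact');
  addSystemLazyState('about', '/about', 'about/module.js', 'about');
});
```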

The key pieces to notice in the above snippet are:

  • A state factory called systemLazy is created using the $futureStateProvider.stateFactory function. This state factory delegates the state preparation (the lazy loading) to a service called SystemLazyLoadService. More on the details of this service in the next section
  • Then we add two future states, the contact and about modules, using the function addSystemLazyState, which in turn calls $futureStateProvider.futureState. Notice how we take care of the state name, the routing URL, the source location of the JavaScript module, and optionally the export key of the Angular module (respectively contact and about, found in the module.js files)

The loading and registration

Now let's talk about the actual module loading and the registration of the loaded Angular module. As I mentioned above, both are handled by the SystemLazyLoadService.

You may have noticed that this is just a regular ES6/ES2015 module, which is also registered as an Angular service. The logic is fairly straightforward. It mainly does two things:

  1. Loading: the service calls System.import and lets systemjs take care of the actual loading business. Thanks to the great systemjs loader, this single line of code is all we need for the loading part
  2. Registration: once the module is loaded back via systemjs, the next big thing is to register it with Angular, so that we can use the module down the road. We use a nice library called ocLazyLoad to take care of this part of the business. Again, while it is just one line of code, ocLazyLoad is actually doing a lot of work behind the scenes. With ocLazyLoad's help, we can stay away from dealing with Angular's variety of providers to register all lazily loaded Angular resources

The last and important matter: bundling

Now we have solved the three problems needed to enable lazy loading of Angular modules. By integrating all these libraries, we can seamlessly define the lazy loading points and load each module only when it is needed. Nice, but there is one last very important thing before we can call this solution complete: the bundling. As I mentioned above, the well-crafted modules will not help in a production environment if we don't have a bundling story.

By using systemjs-builder, we have achieved this goal easily as well. Following is an excerpt of the bundle.js file you can find in the demo project.

Notice at the bottom of the script that we have three separate bundles generated: the app entry point (the initial load), the contact module, and the about module. These correspond to the future states defined in app.js.

Following is a config sample to enable the usage of the generated bundle files. With this config, systemjs will be able to load the bundles instead of the actual individual module files.
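The exact sample isn't reproduced here, but the shape of such a config is roughly this (bundle file names and module paths are assumptions):

```javascript
// systemjs consults this map and fetches the bundle file instead of the
// individual module files listed for it.
System.config({
  bundles: {
    'dist/app-bundle.js': ['app/app'],
    'dist/contact-bundle.js': ['contact/module'],
    'dist/about-bundle.js': ['about/module']
  }
});
```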


In this post, I presented a solution that enables lazy loading of Angular 1.x modules. It will help a lot with app boot performance as the app's functionality grows down the road.

While next-generation JavaScript frameworks like Angular 2 and Aurelia are great and almost ready for release, there is still a large base of existing apps that will just stay with Angular 1.x, and this lazy loading solution can be of great support for their maintenance.

The accompanying demo project can be found here.

Hope this helps,



A LESS plugin for systemjs/builder

In the previous post, I briefly mentioned that systemjs/builder has great support for extensibility via a plug-in mechanism. In this post, I will show you how we can leverage this to make loading/bundling LESS files work on top of the systemjs loading pipeline. We are essentially aiming for two goals:

  1. During development, we should be able to save and refresh to see the results of LESS file changes
  2. During production builds, we should be able to compile and bundle the generated CSS into the bundle file

The github repository of this plug-in and its usage can be found here.

A brief word of LESS

According to its official website: Less is a CSS pre-processor, meaning that it extends the CSS language, adding features that allow variables, mixins, functions and many other techniques that allow you to make CSS that is more maintainable, themable and extendable.

LESS can run in multiple environments, most importantly in the browser and node.js. These are the two exact environments that our plug-in will need to support. However, unlike the usual cases, we will invoke the LESS API programmatically, instead of running the node.js CLI or using a <script /> tag to include it on a web page.

The entry point of the LESS API is essentially a single render call that takes the LESS source and produces the compiled CSS.

A quick overview of the plug-in mechanism of systemjs

According to systemjs documentation:

A plugin is just a set of overrides for the loader hooks of the ES6 module specification. The hooks plugins can override are locate, fetch, translate and instantiate.

The behavior of the hooks is:

  • Locate: Overrides the location of the plugin resource
  • Fetch: Called with third argument representing default fetch function, has full control of fetch output.
  • Translate: Returns the translated source from load.source, can also set load.metadata.sourceMap for full source maps support.
  • Instantiate: Providing this hook as a promise or function allows the plugin to hook instantiate. Any return value becomes the defined custom module object for the plugin call.

In our case, we are going to override the Translate hook and another undocumented but obviously necessary one for the bundling scenario. It’s called bundle.

The implementation of system-less, a LESS plug-in for systemjs

Our first goal is to be able to load LESS files and apply the generated CSS styles on the fly during development time. We implement this by overriding the Translate hook like this:

There are three major parts of this implementation. First, we import the LESS browser compilation module less/lib/less-browser. This module is a wrapper of the core LESS logic. Second, we call the render method to compile the loaded LESS file content. Notice that the file content is already loaded by the systemjs pipeline, so that we don’t need to worry about the network loading part of it. Third, once we get the compiled results, the CSS styles, we need to inject them to the DOM, so that the browser will be able to pick them up and render the related markups with the new styles.

It’s a fairly straightforward logic to compile and apply LESS files in browsers.

Now we come to the second goal: being able to compile and bundle LESS into the bundle file. This is a must-have for today's web landscape. We can't afford to load and compile LESS on the fly in a production system; that would kill performance. Unlike loading LESS in the browser, bundling via systemjs-builder happens in a node.js environment, so the logic is a bit different. Here is what it looks like:

There are a few things to notice in this implementation. First, we have a minified version of the injection logic, which will be inlined into the bundle; it is called to inject the CSS styles when systemjs loads the bundle. Second, we now have stubs of System.register for each of the LESS/CSS files, which systemjs will interpret correctly at load time. Third, optionally for this post but a must-have for a real plug-in, we use clean-css to optimize the generated CSS styles. With this implementation, systemjs-builder will be able to figure out the LESS files at build time and compile and bundle them into the bundle file together with the other resources.


In this post, I walked through the process of developing a systemjs/builder plug-in for LESS resources. The plug-in mechanism is a powerful tool to extend systemjs/builder's functionality. In fact, there are already quite a few great plug-ins that can be used directly in your project. With these plug-ins, we can easily set up a seamless save-and-refresh workflow for development, and optimize loading performance in production via bundling.

Hope this helps,



JavaScript modules and a loader, systemjs

In this post, I will talk a little bit about how to write modular JavaScript code and how to use the modules via a popular loader, systemjs. This post is accompanied by a demo project.

The very first piece of JavaScript code

Every JavaScript developer can understand the following code, if you're fine with calling it "code".
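The snippet itself was lost from this copy; judging from the description below, it was something along these lines (a reconstruction, not the original):

```html
<input type="text" value="copy me" onmouseover="this.select()">
```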

It's nothing but a cool little trick to help a user select the entire text inside an input box. It was the very first piece of JavaScript code I wrote, I still remember, back in 2000, almost 15 years ago. JavaScript could simply be used like that. In fact, 15 years ago, there were so many places on the web using this kind of scattered JavaScript to serve various purposes that couldn't be achieved with HTML alone (or tables, if you recall). The point is, JavaScript at that time was not something you would treat as a first-class web development technique. Java, PHP or ASP were, but not JavaScript.

Fast forward

Fifteen years later, JavaScript is not only a first-class citizen of the web front-end society, but also the cool kid in many other areas, like the server side (node.js) and even the Internet of Things. A piece of code like onmouseover="" is not cool anymore; in fact, it's almost a crime to write it, unless you're kidding. JavaScript is not a mere scripting language anymore. In fact, all mainstream JavaScript implementations are JIT-compilation based, or at least mix compilation and interpretation, for decent performance.

We started to write thousands of lines of JavaScript in either one file or a set of files. This code, plus all the libraries (jQuery, Angular, just to name a few), amounts to almost a sea of JavaScript, yet most of it is loaded into the browser using the plain old <script/> tags that we are all familiar with. But we all know the pain of using this tag, and the consequences if not enough attention is paid when maintaining these tags, the ordering in particular. And the round-trip overhead that the many <script/> tags will incur. You might argue that putting everything into a bundle or a few bundles solves the problem. But again, what about the churn of maintaining the bundle definition files? You still have to be very careful with the ordering, etc.

The popular module formats/loaders

So, to solve this JavaScript code organization problem, the module concept has become more and more popular over the past several years, eventually leading to a few popular module formats and their loaders. One of them is CommonJS, the de facto module standard on the node.js platform. The other is AMD, which was invented for browser scenarios.

The CommonJS loading scenario is relatively straightforward because the modules/files are loaded directly from the local file system. It is naturally a sync operation. In fact, in node.js, it is just a require() function call, as shown below:

The AMD format, however, is a bit different. In the browser world, loading resources, e.g. scripts, from an Internet server should always be asynchronous considering the IO latency. Also, the module code needs to be wrapped, usually as an IIFE; otherwise it all becomes globals. Here is an example:
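The AMD example is likewise missing from this copy; below is a stand-in, with a tiny stub of define() (normally supplied by an AMD loader such as RequireJS, and asynchronous there) so the shape is concrete:

```javascript
// Minimal stub of an AMD registry; a real loader fetches dependency
// scripts asynchronously before invoking the factory callback.
const registry = {};
function define(name, deps, factory) {
  registry[name] = factory(...deps.map((d) => registry[d]));
}

// An AMD module: the body is wrapped in a factory function, so nothing
// leaks into the global scope, and dependencies arrive as arguments.
define('greeter', [], () => ({
  greet: (who) => 'Hello, ' + who,
}));

define('app', ['greeter'], (greeter) => greeter.greet('AMD'));

console.log(registry.app); // prints "Hello, AMD"
```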

In order to load the scripts, a browser context aware loader is needed. RequireJS was the de facto AMD loader. It is a bit outdated now. I’ll show you why later. Today we have the new cool kid called systemjs. We’ll get back to systemjs’ details later in this post. Based on the above code snippets, we can see there are some clear pros and cons for each format.

The CommonJS format is really nice in the sense that we don't need to wrap things in a function call, and the node.js loader (require()) takes care of the exports holder as well. But the bad part is also obvious: it has no async semantics in the loader. You can require anywhere in the code, but we really need loading to be async for browser scenarios. AMD and its loaders support async very well and totally work; however, the wrapper-style syntax is not ideal. It's only halfway to an ideal JavaScript module solution.

The ideal module solution, ES6/ES2015

The JavaScript community has been moving very quickly lately. In particular, ES6/ES2015 has been approved, with quite a few goodies in it. I personally think the new module format is the one with the biggest potential impact on the web. With the new ES6 module format, we can rewrite the code above like this:

In the above code snippets, I'm also using the new ES6 class syntax, which is another very nice feature. Back to the point of module formats: you can see we now have a much cleaner way of defining modules and their dependencies. There is no need to wrap things in an IIFE anymore, and no need for the magic exports holder object either. With all these good aspects, what's really left is a loader to make this module format work. And the loader had better work with much less overhead.
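This copy of the post lost the snippet, so here is an illustrative stand-in. In a real module file the export/import lines would be live; they are shown as comments here so the rest runs anywhere:

```javascript
// greeter.js would declare, with no IIFE and no exports holder object:
//   export class Greeter { ... }
// and the consumer would state its dependency declaratively:
//   import { Greeter } from './greeter.js';

class Greeter {
  // ES6 class syntax, another of the new goodies
  greet(who) {
    return 'Hello, ' + who;
  }
}

console.log(new Greeter().greet('ES2015')); // prints "Hello, ES2015"
```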

The systemjs loader

systemjs is a loader that supports the new ES6 module format perfectly. On top of that, it supports not only the new format but also all the popular legacy formats: CommonJS, AMD, and even globals. Isn't it awesome? In fact, it is even more powerful, as I'll show below. For a detailed demo, please see this little github demo repository.

Firstly, systemjs can load various module formats, as I mentioned above. Though in reality we don't usually mix too many different formats in a project; what we really want is the ES6 module format support. It is amazing to have this supported even before the mainstream browsers fully support it. systemjs accomplishes this with the help of transpilers; the popular ones are Babel, Traceur, and our beloved TypeScript. What does this mean? It means you can write ES6 modules today and don't need to worry about whether the browsers support them or not, because the ES6 modules will be seamlessly transpiled down to ES5, which is fully supported today by all mainstream browsers. The transpilation can happen on the fly in the browser for the development workflow. For production scenarios, of course, it can and must happen at build/bundle time.

Secondly, systemjs supports plugin loaders, meaning that many other kinds of resources can be loaded via systemjs just like JavaScript. This pattern is so popular today, and it is so easy to use to manage modular web client apps (aka SPAs). For example, HTML template files can be loaded dynamically and also bundled together with the feature's JavaScript components. The same goes for LESS/CSS files: they can be authored and bundled in the same modular way. And all of it loads via systemjs seamlessly/happily.

Thirdly, systemjs is not alone. It is accompanied by two other awesome tools, JSPM and systemjs-builder. JSPM, as the name suggests, is a package manager for browser scenarios, much as npm is for node.js. With JSPM's support, you can easily consume both well-designed npm packages and even raw repositories on github.

systemjs-builder is the build/bundle part of the systemjs story. Remember, bundling is a way to overcome the HTTP 1.x head-of-line blocking issue, which will eventually disappear once HTTP/2 is widely adopted. Bundling by that time will be an anti-pattern that we will need to un-learn. Really, the loader is the thing we are after, and bundling is the necessary feature it carries to deal with today's reality. This is also the reason why I personally favor the loader concept/tool over bundler solutions like webpack. Webpack is also an awesome tool, but I don't see a clear future for it because it solves the problem in a not-very-correct way.


The momentum we are seeing in the JavaScript and web front-end community is very exciting. It is generating very good stuff right now, and the ES6 module format and systemjs are just two of the many. I hope this little post has been helpful for folks who are new to this world. Again, please go check out this little github demo repository to get some practical experience of how all these things work together, beautifully.

Till next time,



A list of readings on async programming

Understanding the SynchronizationContext in ASP.NET

It’s All About the SynchronizationContext

Don’t Block on Async Code

ExecutionContext vs SynchronizationContext


Something I should have been aware of two years ago

I really wish I had known about these changes two years ago. It would have saved me some hours resolving a weird gacutil issue.

Hope this helps,

