One kubectl context per shell session

Context switching is no fun, be it kernel/user mode switching, multitasking at work, or changing between kubectl contexts. It’s an overhead that should be eliminated, or at least reduced.

I deal with multiple Kubernetes clusters on a daily basis. Without some nice tools/tricks, switching between these clusters is tedious and dangerous. If you’re not careful, a deadly command could be carried out on a totally unexpected cluster.

The tools I have today are kube-ps1, kubectx and kubens. In zsh, the prompt looks like:
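A rough sketch of the zsh wiring behind that prompt (the kube-ps1 source path is an assumption; it varies by install method):

```shell
# ~/.zshrc sketch (the kube-ps1 path is an assumption; adjust for your install)
source /opt/homebrew/opt/kube-ps1/share/kube-ps1.sh
PROMPT='$(kube_ps1) '$PROMPT

# The prompt then shows the current context/namespace, e.g.:
#   (⎈ |prod-cluster:default) ~ %
# kubectx and kubens switch the context and namespace parts respectively.
```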

These are all good. No kidding, these goodies have been making my life so much easier. It’s very easy to switch between contexts, and the current one always shows in the prompt. But there is one problem they don’t solve: the constant context switching itself. Even though each switch takes little effort, it adds up when you have to do it hundreds of times.

Every kubectl context switch is global; it is an actual change to the kubeconfig file, after all. It would be really nice if we could stick to one context in one shell session and a different context in another. Recently, I discovered a technique that achieves this. Simply add this snippet to the .zshrc file:

# kubeconfig per session
file="$(mktemp -t "kubectx.XXXXXX")"
export KUBECONFIG="${file}:${KUBECONFIG}"
cat <<EOF >"${file}"
apiVersion: v1
kind: Config
current-context: ""
EOF

It creates a temporary kubeconfig file for each zsh session that holds that session’s context information without impacting any other session.
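Why it works, as I understand kubectl’s merge rules: kubectl writes a value to the file in the KUBECONFIG list that already defines it, and the session-local temp file is the one defining current-context. A self-contained sketch of the plumbing (no cluster needed):

```shell
# Recreate the per-session setup and inspect it (no kubectl required)
file="$(mktemp -t "kubectx.XXXXXX")"
export KUBECONFIG="${file}:${KUBECONFIG}"
cat <<EOF >"${file}"
apiVersion: v1
kind: Config
current-context: ""
EOF

# The session-local file sits first in the colon-separated list,
# so context switches are recorded there, not in ~/.kube/config
echo "session kubeconfig: ${KUBECONFIG%%:*}"
grep current-context "${KUBECONFIG%%:*}"
```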

A game changer.


Execute a bash script via C#/.NET Core

With .NET Core now being a cross-platform framework, it’s very easy to invoke a Bash script from C# code. It’s not a common practice, but in cases where a .NET library or a REST/RPC API is lacking, being able to run a script out-of-process is valuable. So here is a nice extension method that I wrote and found a joy to call.
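A sketch of what such an extension method can look like (the names ShellExtensions and Bash, and the naive quote escaping, are my assumptions rather than the original code):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

public static class ShellExtensions
{
    // Runs the given command through /bin/bash -c and returns its stdout.
    public static async Task<string> Bash(this string cmd)
    {
        var escaped = cmd.Replace("\"", "\\\"");
        var startInfo = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            Arguments = $"-c \"{escaped}\"",
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };

        using (var process = Process.Start(startInfo))
        {
            var output = await process.StandardOutput.ReadToEndAsync();
            process.WaitForExit();
            return output;
        }
    }
}
```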

To call the method, one can simply do, e.g.:
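For example, with a hypothetical string extension named Bash:

```csharp
// Hypothetical usage; the Bash extension name is an assumption
var output = await "ls -la | grep .json".Bash();
Console.WriteLine(output);
```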


echo vs printf


In bash, or generally the family of shell languages, echo and printf are often used to output messages to the screen (or terminal, or tty, or stdout, to earn a few more geek points…). It mostly doesn’t matter which one you use; in fact, echo is the usual choice.

But here is a case that will bite if one doesn’t understand a little more about the details.

When preparing a Kubernetes secret yaml file, the secret data itself needs to be base64-encoded, e.g. the password value in the yaml snippet below.

apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
data:
  password: bWFnaWNfcGFzc3dvcmQ=

To generate this base64 string, one often does this in a shell: echo "magic_password" | base64 -, then copies the output to the yaml file. Guess what: soon after applying this yaml to the cluster, one is in for long hours of investigating authentication failures.

How the heck could the password be bad? they ask while scratching their head.

The devil is in the details. Notice the differences between these two commands:

> echo "magic_password"
magic_password
> echo -n "magic_password"
magic_password%

The % in the second command’s output means that it’s a partial line; in other words, it ends with no newline character. It also means that the first command ended with one, which is then fed into the downstream base64 command. Now you see where the problem is? An invisible newline character finds its way into the password via echo. No wonder all those password errors.

So the -n switch solves the problem? Yes, but it’s not really recommended: the -n behavior is not consistent across systems. The correct way of doing this is printf "magic_password". printf never outputs an extra newline unless forced to by an explicit format like printf "hello\n".
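The stray newline shows up directly in the encodings:

```shell
echo "magic_password" | base64     # bWFnaWNfcGFzc3dvcmQK  (trailing newline encoded)
printf "magic_password" | base64   # bWFnaWNfcGFzc3dvcmQ=
```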

Enjoy!


An alternative of kubectl patch

Usually, when a resource needs to be updated in place in Kubernetes, a few options are available. If the resource was created using kubectl create/apply -f, you just need to update the yaml file and apply -f it again. This is the option I see used most.

However, I had a secret created with --from-literal before; I didn’t have a yaml file for it. In this case, lots of articles suggest kubectl patch. Looking at its documentation, oh boy, I don’t know about you, my friend, but I literally got a headache.

So I quickly moved on and figured out this little but nice trick to complete the task:

kubectl create secret generic my-super-password --from-literal=password=12345678 --dry-run --output yaml | kubectl apply -f -


Negation in ssh config

A little trick I learned today.

So Yubikey can’t be cooler when it comes to securing your private key and, you know, all sorts of identity/authentication-related stuff. At work, it’s enforced via ~/.ssh/config using PKCS11Provider /Library/OpenSC/lib/opensc-pkcs11.so.

But there are cases where the Yubikey is not used, e.g. git cloning from a repo over SSH with a regular key pair. A simple negation entry solves this need:

# Use my regular identity for Azure DevOps
Host !ssh.dev.azure.com *
    PKCS11Provider /Library/OpenSC/lib/opensc-pkcs11.so


A lazy loading solution for Angular 1.x

In this post, I’m going to show you a solution for lazy loading Angular 1.x modules. Angular 1.x doesn’t support it out of the box, yet it’s a critical feature for many large applications handling serious business.

The demo project used for this post can be found here: https://github.com/jack4it/angular-1x-lazy-load.

Isn’t this problem already solved by Angular 2 and Aurelia?

Some of you might ask: given that Angular 2 is already in beta, and there is another, even better framework called Aurelia almost ready for its first release, why do we still need to care about Angular 1.x? There are indeed some valid reasons:

  • Many existing Angular 1.x projects will just not migrate to the new framework
  • Both Angular 2 and Aurelia are still in beta, and it’ll take time for the majority to be confident enough to use them on new critical projects
  • etc.

So this solution will still be helpful for at least a while.

As a bonus, in this solution I’m also going to show you how to write ES6/ES2015 code and use the systemjs loader even for today’s Angular 1.x projects. As another bonus, the lazy-loaded modules are also properly bundled using systemjs-builder, so you get a seamless workflow for both development and production environments.

In the rest of this post, unless explicitly stated otherwise, by the term Angular I’ll just mean Angular 1.x.

Why does it matter?

It’s funny that Angular fosters modular design and separation of concerns for large client applications, but doesn’t provide a lazy loading story. The module meta language it provides is far from ideal, but it still works (the plain ES6/ES2015 module is the one true king of the module kingdom).

Modular design helps with a lot of things, including team collaboration, maintainability, etc. But it doesn’t really help in production if all those good modules have to be loaded entirely up front for the app to run.

In reality, we want to load only the needed modules initially for a faster boot experience, and lazily load the other modules when the user triggers the related functionality of the app. Performance-wise, this really matters for most serious applications.

All right then, how?

So you are still interested in this offering. Great, let’s get to the details. In order to achieve this lazy loading goal, three problems have to be solved:

  1. When, where and how is a module going to be triggered to load?
  2. How is a module going to be actually loaded?
  3. Once the module is loaded, how should it be registered to Angular, so that it can be used down the road?

I’ll answer these three questions in the following sections. But first, let’s imagine a demo project so that we can code it up; it’s much easier to see real working code than to just read a dry post.

The little demo project

We’ll have this structure for the demo. Logically the app will have a homepage (the initial load) where we can link to two other lazy-loaded pages (powered by Angular): the contact page and the about page.

The app.* files serve the homepage purpose as the main entry point of the app. In each lazy-loaded module, all of its Angular resources are defined in a self-contained way and wired up in the respective module.js, which, as you’ll see later, also serves as the bundling point.

[screenshot: the demo project’s folder structure]

Without further ado, let’s solve the three problems and lazy load Angular modules.

The trigger

In a JavaScript client app, it usually takes a router component to serve the navigation purpose. It is natural to think that if we can somehow extend the router, we can trigger the actual loading when a navigation is requested and register the loaded modules with Angular. And this is indeed what our solution does. We’ll use ui-router to easily define the lazy loading points and seamlessly wire it up with systemjs to do the actual loading work.

We favor ui-router over ng-route because it provides a more convenient path to lazy loading support, which in turn comes from the ui-router-extras project and its future states. Here is a snippet of how the wire-up looks.
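The original snippet is reconstructed below as a sketch; the module names and the exact shape of the future-state objects are assumptions based on the ui-router-extras docs, not the demo project verbatim:

```javascript
// Sketch: wiring ui-router-extras future states to a lazy-load service
import angular from "angular";
import "angular-ui-router";
import "ui-router-extras";
import "oclazyload";

export const app = angular.module("app",
  ["ui.router", "ct.ui.router.extras", "oc.lazyLoad"]);

app.config(["$futureStateProvider", ($futureStateProvider) => {
  // a state factory that delegates the state preparation (the lazy
  // loading) to our SystemLazyLoadService
  $futureStateProvider.stateFactory("systemLazy",
    ["SystemLazyLoadService", "futureState",
      (systemLazyLoadService, futureState) =>
        systemLazyLoadService.load(futureState)]);

  const addSystemLazyState = (stateName, url, src, ngModule) =>
    $futureStateProvider.futureState(
      { type: "systemLazy", stateName, url, src, ngModule });

  // the two lazy modules; src points at each module.js,
  // ngModule is the export key of the Angular module
  addSystemLazyState("contact", "/contact", "contact/module.js", "contact");
  addSystemLazyState("about", "/about", "about/module.js", "about");
}]);
```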

The key pieces to notice in the above snippet are:

  • A state factory called systemLazy is created using the $futureStateProvider.stateFactory function. This state factory delegates the state preparation (the lazy loading) to a service called SystemLazyLoadService. More on the details of this service in the next section
  • Then we add two future states, the contact and about modules, using the function addSystemLazyState, which in turn calls $futureStateProvider.futureState. Notice how we specify the state name, the routing URL, the source location of the JavaScript module and, optionally, the export key of the Angular module (respectively contact and about, found in the module.js files)

The loading and registration

Now let’s talk about the actual module loading and the registration of the loaded Angular module. As mentioned above, this is achieved by the SystemLazyLoadService, which looks like the snippet below.
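A sketch of the service, reconstructed from the description that follows; details may differ from the demo project:

```javascript
// Sketch: a regular ES6 class registered as an Angular service
export class SystemLazyLoadService {
  constructor($ocLazyLoad) {
    this.$ocLazyLoad = $ocLazyLoad;
  }

  load(futureState) {
    // systemjs takes care of the actual network loading
    return System.import(futureState.src)
      .then((module) => {
        // pick the exported Angular module by its export key,
        // then let ocLazyLoad register it with Angular
        const ngModule = module[futureState.ngModule];
        return this.$ocLazyLoad.load(ngModule);
      });
  }
}

SystemLazyLoadService.$inject = ["$ocLazyLoad"];
```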

You may have noticed that this is just a regular ES6/ES2015 module, which is also registered as an Angular service. The logic is fairly straightforward. It mainly does two things:

  1. Loading: We call System.import and let systemjs take care of the actual loading business. Thanks to the great systemjs loader, this single line of code is all we need for the loading part
  2. Registration: Once the module is loaded via systemjs, the next big thing is to register it with Angular so that we can use it down the road. We use a nice library called ocLazyLoad to take care of this part of the business. Again, while it is just one more line of code, ocLazyLoad is actually doing a lot of work behind the scenes. With ocLazyLoad’s help, we can stay away from dealing with Angular’s variety of providers to register all the lazily loaded Angular resources

The last and important matter: bundling

Now we have solved the three problems needed to enable lazy loading of Angular modules. By integrating these libraries, we can seamlessly define the lazy loading points and load the respective module only when it is needed. Nice, but there is one last, very important thing before we can call this solution complete: the bundling. As I mentioned above, well-crafted modules will not help in a production environment if we don’t have a bundling story.

By using systemjs-builder, we have also achieved this goal easily. Following is an excerpt of the bundle.js file you can find from the demo project.
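A sketch of what such a bundle.js can look like with systemjs-builder (the paths and output file names are assumptions):

```javascript
// Sketch: three bundles via systemjs-builder, using its arithmetic
// expressions to subtract the lazy modules from the app entry bundle
const Builder = require("systemjs-builder");

const builder = new Builder("./", "config.js");

Promise.all([
  // the app entry point, minus the lazily loaded modules
  builder.bundle(
    "app/app.js - app/contact/module.js - app/about/module.js",
    "dist/app.bundle.js"),
  builder.bundle("app/contact/module.js", "dist/contact.bundle.js"),
  builder.bundle("app/about/module.js", "dist/about.bundle.js"),
]).then(() => console.log("bundles written"));
```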

Notice at the bottom of the script that three separate bundles are generated: the app entry point (the initial load), the contact module and the about module. These correspond to the future states defined in app.js.

Following is a config sample that enables the use of the generated bundle files. With this config, systemjs will load the bundles instead of the individual module files.
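A sketch of such a config, using systemjs’s bundles option (the bundle file names are assumptions):

```javascript
// Sketch: tell systemjs which modules live in which bundle, so a
// request for a module fetches its bundle instead of the raw file
System.config({
  bundles: {
    "dist/app.bundle.js": ["app/app.js"],
    "dist/contact.bundle.js": ["app/contact/module.js"],
    "dist/about.bundle.js": ["app/about/module.js"]
  }
});
```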

Summary

In this post, I presented a solution that enables lazy loading of Angular 1.x modules. It helps a lot with app boot performance as the app’s functionality grows down the road.

While next-generation JavaScript frameworks like Angular 2 and Aurelia are great and almost ready for release, there is still a large base of existing apps that will just stay with Angular 1.x, and this lazy loading solution can be of great support for their maintenance.

The accompanying demo project can be found here: https://github.com/jack4it/angular-1x-lazy-load.

Hope this helps,

-Jack


A LESS plugin for systemjs/builder

In a previous post, I briefly mentioned that systemjs/builder has great support for extensibility through a plug-in mechanism. In this post, I will show you how we can leverage this to make loading/bundling LESS files work on top of the systemjs loading pipeline. We are essentially aiming for two goals:

  1. During development time, we should be able to save and refresh to see the results of LESS file changes
  2. During production build time, we should be able to compile and bundle the generated CSS into the bundle file

The GitHub repository of this plug-in and its usage can be found here: https://github.com/jack4it/system-less.

A brief word of LESS

According to its official website: Less is a CSS pre-processor, meaning that it extends the CSS language, adding features that allow variables, mixins, functions and many other techniques that allow you to make CSS that is more maintainable, themable and extendable.

LESS can run in multiple environments, most importantly in the browser and in node.js. These are exactly the two environments our plug-in needs to support. However, unlike the usual cases, we will invoke the LESS API programmatically, instead of running the node.js CLI or using a <script /> tag to include it on a web page.

The entry point of the LESS API looks like below:
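A sketch of the programmatic entry point, per the less documentation of that era (the input string is just an example):

```javascript
var less = require("less");

// render() compiles a LESS string and resolves with the generated CSS
less.render(".box { width: (100px / 2); }", { compress: true })
  .then(function (output) {
    console.log(output.css); // the compiled CSS string
  }, function (error) {
    console.error(error);
  });
```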

A quick overview of the plug-in mechanism of systemjs

According to systemjs documentation:

A plugin is just a set of overrides for the loader hooks of the ES6 module specification. The hooks plugins can override are locate, fetch, translate and instantiate.

The behavior of the hooks is:

  • Locate: Overrides the location of the plugin resource
  • Fetch: Called with third argument representing default fetch function, has full control of fetch output.
  • Translate: Returns the translated source from load.source, can also set load.metadata.sourceMap for full source maps support.
  • Instantiate: Providing this hook as a promise or function allows the plugin to hook instantiate. Any return value becomes the defined custom module object for the plugin call.

In our case, we are going to override the Translate hook and another undocumented but obviously necessary one for the bundling scenario. It’s called bundle.

The implementation of system-less, a LESS plug-in for systemjs

Our first goal is to be able to load LESS files and apply the generated CSS styles on the fly during development time. We implement this by overriding the Translate hook like this:
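A sketch of the hook (the import path follows less 2.x’s layout; the injection details are simplified from the actual plug-in):

```javascript
// Sketch: a systemjs translate hook that compiles LESS in the browser
import lessFactory from "less/lib/less-browser";

const less = lessFactory(window, {});

export function translate(load) {
  // load.source already holds the LESS file content fetched by systemjs
  return less.render(load.source, { filename: load.address })
    .then((output) => {
      // inject the compiled CSS into the DOM so the browser applies it
      const style = document.createElement("style");
      style.textContent = output.css;
      document.head.appendChild(style);
      return "";
    });
}
```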

There are three major parts to this implementation. First, we import the LESS browser compilation module less/lib/less-browser; this module is a wrapper of the core LESS logic. Second, we call the render method to compile the loaded LESS file content. Notice that the file content is already loaded by the systemjs pipeline, so we don’t need to worry about the network loading part. Third, once we get the compiled result, the CSS styles, we need to inject them into the DOM so that the browser can pick them up and render the related markup with the new styles.

It’s a fairly straightforward logic to compile and apply LESS files in browsers.

Now we come to the second goal: being able to compile and bundle LESS into the bundle file. This is a must-have for today’s web landscape; we can’t afford to load and compile LESS on the fly in a production system. That would kill performance. Unlike loading LESS in the browser, bundling via systemjs-builder happens in a node.js environment, so the logic is a bit different. Here is what it looks like:
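A sketch of the bundle-time path; the System.register stub and the minified injection helper are simplified from the description that follows:

```javascript
// Sketch: a systemjs-builder bundle hook that compiles LESS in node.js
const less = require("less");
const CleanCSS = require("clean-css");

// minified injection helper, inlined into the bundle and invoked at load time
const inject = "function(c){var s=document.createElement('style');" +
  "s.textContent=c;document.head.appendChild(s);}";

exports.bundle = function (loads) {
  // compile every LESS file that systemjs-builder traced
  return Promise.all(loads.map((load) =>
    less.render(load.source, { filename: load.address })
  )).then((outputs) => {
    // optimize the combined CSS with clean-css
    const css = new CleanCSS()
      .minify(outputs.map((o) => o.css).join("\n")).styles;

    // emit a System.register stub per file, plus one injection call
    const stubs = loads.map((load) =>
      `System.register("${load.name}", [], function() {` +
      ` return { setters: [], execute: function() {} }; });`
    ).join("\n");

    return stubs + `\n(${inject})(${JSON.stringify(css)});`;
  });
};
```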

There are a few things to notice in this implementation. First, we have a minified version of the injection logic, which is inlined into the bundle; it is called to inject the CSS styles when systemjs loads the bundle. Second, we now have System.register stubs for each of the LESS/CSS files, which systemjs interprets correctly at load time. Third, optionally for this post but a must-have for a real plug-in, we use clean-css to optimize the generated CSS styles. With this in place, at production build time systemjs-builder can figure out the LESS files, then compile and bundle them into the bundle file together with the other resources.

Summary

In this post, I walked through the process of developing a systemjs/builder plug-in for LESS resources. The plug-in mechanism is a powerful tool for extending systemjs/builder functionality. In fact, there are already quite a few great plug-ins that can be used directly in your project. With these plug-ins, we can set up a seamless workflow: save and refresh during development, and optimized loading performance in production via bundling.

Hope this helps,

-Jack
