
Programs and Tools

Is Software as a Service a Good Idea?

August 18, 2020

Whether you like it or not, software as a service (SaaS) has become extremely popular amongst both individuals and companies. But how good of an idea is it?

The past decade or so has seen an ever-increasing number of people turning towards software as a service (SaaS) providers to meet their computing needs.

Popular examples include Gmail, Google Drive, Google Docs, Jira, etc., but hundreds if not thousands of other companies both large and small have started offering SaaS. Some of their offerings are free and others require the user to subscribe.

This all sounds fine and dandy and there are certainly a lot of benefits, but is it all rainbows, fairies and roses? Or is there a darker side to it as well? Should everyone jump on the bandwagon or are there still reasons for sticking with more traditional applications installed locally on a user’s machine?

Those are some of the questions we are going to be exploring here.

The Basics

First of all, let’s start by establishing a few of the basics of SaaS.

What is SaaS?

For the purposes of this article, we will be looking at the offerings that provide full-fledged applications on the internet. They run in a browser and/or as an app on a smartphone or tablet and are available from any modern device with an internet connection.

The data produced in or by the application is saved on a server and is only accessible by the user through the application itself. In most cases, an internet connection is required to even use the application at all.

Here are some popular examples:

  • G Suite (Google Docs, Gmail, etc)
  • Jira Cloud
  • WordPress.com (not WordPress.org)
  • Blogger
  • Salesforce
  • DocuSign
  • Slack
  • etc

There are, of course, examples of SaaS that are more open such as WordPress.org or Jira Server which allow you to download the software and install it on your own server. However, we are only going to focus on the closed systems for now.

Who Uses SaaS?

Users range from private individuals looking for an easy, convenient way to get things done on their digital devices to large multinational corporations looking to outsource part of their IT costs and to give their employees a secure, consistent, centralized way to do their work and collaborate online.

The new and sudden challenges companies have faced in keeping their employees productive and working during the current COVID-19 pandemic have driven even more of them to SaaS.

So now that we’ve covered some of the basics, let’s get to the good part.

…read the rest of the article →

Windows 95 as an Electron App

August 6, 2020

An operating system that was released 25 years ago and required an entire computer to run can now instead be run in a browser window with an interpreted language.

I know this project has been around for a while and I have run across it before, but today I decided to post about Windows 95 running in Electron. Why? Because it’s possible.

This is certainly a marvel of modern technology in that an operating system that was released 25 years ago and required an entire computer to run can now instead be run in a browser window using an interpreted language like JavaScript as its underpinnings.

It doesn’t run perfectly, but it runs well enough to perform most tasks and, I would argue, in many cases runs better than it did on some old hardware. You can even run old games with it, though they can be somewhat buggy.

Windows 95 Way Back When

Gateway 2000 PC

I’m old enough to remember when Windows 95 came out. At the time, we had a family 486 PC (a Gateway 2000) running MS-DOS and Windows 3.1. We had to upgrade the RAM in order to install Windows 95. Unfortunately, I don’t remember what the exact specs of the machine were, but the RAM was definitely in the low double-digit MB range. All I specifically remember was the 200 MB hard drive that always seemed to be full.

We also added a CD-ROM drive, which the computer didn’t originally come with, in order to make installing Windows 95 easier. It was possible to purchase Windows 95 on CD or on floppy disks; the floppy version, however, required thirteen disks to install the OS, which is why we opted for the CD.

…read the rest of the article →

Debugging Node.js Remotely with Visual Studio Code

August 4, 2020

Visual Studio Code is a tool with many talents. Among those is the ability to not just debug Node.js applications, but also to debug them remotely.

Debugging a Node.js application remotely using Visual Studio Code is a small matter of configuration. Microsoft’s do-all editor makes it easy to create a debug configuration that teams can even commit into their repositories so that all developers can benefit from it.

In order to simulate a remote Node.js application in this article, we are going to run a simple one in Docker. That means the following will also work for applications running in a Docker container. If you are only interested in the configuration for Visual Studio Code, then feel free to scroll down to the “Visual Studio Code Configuration” section below.

The following example project is also available as a repository on GitHub which may make it easier to understand the structure.

A Simple Node.js Script

The first thing we need to do is to set up a basic Node.js app that we can test with. We won’t program much here because we don’t need to for this example. Instead, we will just include this simple script:

const printTest = () => {
    let test = 'test';
    test += ' value';
    console.log(test);
};

setInterval(printTest, 1000);

The code doesn’t really do much, but it lets us set a breakpoint at line 3 and step through in order to see the difference in test before and after ' value' is appended to it. setInterval(printTest, 1000) is the application loop and keeps the script running in Docker so we don’t have to restart it every time. That’s enough for our purposes here.

Of course, this code is just representative of any other Node.js code that can be debugged. This would also easily work with TypeScript without any further changes to the code or configuration.

We will then save it as ourScript.js and copy it into our Docker container.

Docker Configuration

The next thing we need to do is set up our Docker environment for testing. To do this, we will create a simple Dockerfile that looks like this:

FROM node:14-alpine
WORKDIR /app
COPY ./ourScript.js .
CMD ["node", "--inspect=0.0.0.0:9229", "./ourScript.js"]
EXPOSE 9229

Again, there isn’t anything fancy going on here. There are, however, two important parts of this configuration that will enable us to debug remotely.

These are: --inspect=0.0.0.0:9229 in CMD and EXPOSE 9229.

Both of these are critical for debugging a Node.js app/script remotely in Docker. If you want to debug an application on a server, you will most likely just need to use the --inspect flag without the IP address.
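For orientation, a minimal .vscode/launch.json attach configuration for this kind of setup might look like the following sketch (the configuration name is arbitrary, and localRoot/remoteRoot assume the script lives in the workspace root locally and in /app in the container, matching the WORKDIR in the Dockerfile above):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "attach",
            "name": "Attach to Docker",
            "address": "localhost",
            "port": 9229,
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/app"
        }
    ]
}
```

Note that EXPOSE only documents the port; when running the container, it still has to be published, for example with docker run -p 9229:9229, before VS Code can attach.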

…read the rest of the article →

Using Redis Sentinel with Docker and Marathon

July 17, 2020

Note: This article expands upon the principles as explained in my other post about Redis Sentinel and Docker Compose. If you haven’t read it yet, you should do so before continuing with this article.

Using Redis Sentinel with Docker and Marathon is a relatively complex procedure that requires every instance of Redis to be able to communicate with all other instances.

Using Redis Sentinel with Docker and Marathon is a relatively complex procedure that requires every instance of Redis to be able to communicate with all other instances. The Sentinels have to talk to both the master and the slaves while the slaves have to be able to synchronize with the master.

Since there will be a lot of code in this article, I have created a GitHub repository where it might be easier to follow and understand.

The example in this post will work with the same setup as defined in my other article about Redis Sentinel and Docker Compose:

  • We need to define a master instance.
  • We need to set up one or more slave instances.
  • We need to start at least three Sentinel instances.
  • They all need to communicate with each other.

This setup is relatively easy to accomplish with Docker Compose, but what if we want each instance to run in its own Docker container with Marathon? That is where things begin to get a little more complex.

Marathon Configuration

First of all, we need to set up our Marathon configuration so that it deploys our Docker images properly. Essentially, this is the same as the “docker-compose.yml” file from my last post, but in JSON format, plus a couple of extra parameters required by Marathon:
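To give a feel for the general shape of a Marathon app definition, here is a rough sketch for a single Redis instance (the id, resource values, image tag, and port mapping are all assumptions for illustration, not the article’s actual configuration):

```json
{
    "id": "/redis/master",
    "instances": 1,
    "cpus": 0.5,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "redis:6",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 6379, "hostPort": 0, "protocol": "tcp" }
            ]
        }
    }
}
```

Each Redis, slave and Sentinel instance would get its own app definition along these lines, with the command and ports adjusted accordingly.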

…read the rest of the article →

Using Redis Sentinel with Docker Compose

July 13, 2020

Redis is an easy-to-use solution for anyone looking for a robust key-value store. It is feature-rich but relatively simple to use, and even has official Docker images. This post will not go into any more detail about what exactly Redis is, as it assumes the reader already knows. If not, you can read about it on the official Redis website.

What we will discuss, however, is how to create a failover solution using Redis Sentinel and Docker Compose. There are several code examples in this post, so it might be easier to follow them, and to understand the project structure, in the repository on GitHub.

Redis Sentinel is essentially a mode in which the Redis server is started that watches the master Redis instance and chooses a replacement from the slave instances in the event that the master instance is unreachable.

In order for it to do this, the following needs to be configured:

  • We need to define a master instance.
  • We need to set up one or more slave instances.
  • We need to start at least three Sentinel instances.
  • They all need to communicate with each other.

So how do we do all of this?

Redis makes it relatively easy. First, we start a normal instance of Redis, which will be the master instance. Then we start the additional instances that will become the slaves, passing each one a flag with the IP address/hostname and port of the master instance. This flag defines them as slaves and tells them which instance is the master. An example command looks like this:

redis-server --slaveof 127.0.0.1 6379

Now we have the first two bullet points in our list taken care of, but we still need to start at least three Sentinel instances. Redis needs at least three so that the Sentinels can reliably “vote” for a slave instance to become master. This is a bit trickier, as we first have to define a configuration file. More on that later, but for now, here is the command to use when starting a Sentinel instance:

redis-server /redis/sentinel.conf --sentinel
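The configuration file is covered in more detail later, but as a preview, a minimal sentinel.conf might look something like this sketch (the master name mymaster is the conventional placeholder, and the address is a made-up example; Sentinel expects the master’s actual IP address and port here):

```conf
port 26379
# watch a master named "mymaster"; the trailing 2 is the quorum
sentinel monitor mymaster 192.168.0.10 6379 2
# consider the master down after 5 seconds without a response
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
sentinel parallel-syncs mymaster 1
```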

All of these instances will need to be on separate servers, or set up with different configurations if running on the same server. That is beyond the scope of this article, though, as we are going to isolate each instance in a Docker container as a solution to this problem.

To use Docker, we will need to start multiple containers at once. The best way to do that is with Docker Compose. This article will assume some background knowledge of both Docker and Docker Compose. First we need to create a Compose File that looks something like the following:
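As a rough idea of the shape such a Compose file takes, here is a sketch (the service names and image tag are assumptions, and a real setup would define three Sentinel services rather than one):

```yaml
version: "3"
services:
  redis-master:
    image: redis:6
  redis-slave:
    image: redis:6
    # the service name resolves via Docker's internal DNS
    command: redis-server --slaveof redis-master 6379
  sentinel:
    image: redis:6
    command: redis-server /redis/sentinel.conf --sentinel
    volumes:
      # the sentinel.conf has to be provided from the host
      # (or baked into a custom image)
      - ./sentinel.conf:/redis/sentinel.conf
```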

…read the rest of the article →

Emulating Mac OS 9 on macOS 10.15

June 27, 2020

Several years ago, I bought an old, colorful iMac G3 running Mac OS 9. It runs my old software wonderfully, but an emulated version of Mac OS 9 on my modern MacBook Pro is just so much more convenient.

For several years now, I have had an old iMac G3 from about 2000 sitting around in one corner of my home office. It works perfectly fine, but I rarely start it up because I really don’t have much of a use for it most days. Occasionally, I play old games or run old software that I still have from 20 years ago, but those occasions are few and far between.

But today boredom and curiosity got the better of me.

I fired up the old iMac and, as usual, it reliably started right up. Compared to my modern MacBook Pro, however, it is obviously noisy, slow and the resolution is terrible. Those are always the first things that strike me whenever I decide to use the iMac, so the thought I had today was: why not try to emulate Mac OS 9 on my MacBook Pro running macOS 10.15 Catalina instead of always having to boot my iMac?

iMac Running Mac OS 9

Emulation cannot, of course, replace the experience of actually using the iMac, since it doesn’t give you the full immersion of using an authentically vintage computer. Whenever I sit in front of that old CRT screen listening to it hum, I always feel a bit like I did back then when I was in school, using AppleWorks on one of these colorful machines to type up my homework — usually some essay first written by hand.

But I digress. I decided to try to emulate Mac OS 9 on my MacBook Pro so that I wouldn’t always have to start the iMac whenever I felt like playing Age of Empires, Civilization III, the original Tomb Raider or Railroad Tycoon II. Plus, I figured I could really jack up the specs on the emulator, which would allow me to play a few more games that my iMac won’t run (it only has 64 MB of RAM). Not to mention I could then also run Photoshop 6 again (the last version of Photoshop I purchased).

…read the rest of the article →

Why I Use Virtual Machines for Projects

June 19, 2020

I don’t like clutter. In fact, I am very picky about what I install on my computer which is one reason I love virtual machines (VMs). They enable me to install all sorts of things to try them out and then, when I feel like the machine has become cluttered, I can just delete it and start anew without any risk of data loss.

The reason I mention that is that I like to try out new programming languages and frameworks, which always involves installing new runtimes, compilers, libraries, IDEs (and/or plugins), etc. I like to experiment. Most of the time when I try one out, I end up leaving it installed for a while but rarely use it again. That means I forget about it, and eventually the unused installations fossilize and turn into layers of sediment that build up over time.

Not really, but you get the picture.

To prevent this, I use a virtual machine. Any new programming language, tool, etc. I want to try gets installed in a virtual machine first. The majority of the time, I use it for a short while and then stop. At some point, I delete the VM with its fossilized runtimes and compilers and start anew.

There are, of course, exceptions. If I notice I really enjoy a specific language or a particular editor, I will install it on the host machine. This is exactly the process I went through when transitioning most of my personal development from PHP to Java, then to Node.js, then to TypeScript. These subsequently made their way from the VM onto my host machine since I use them all the time.

The Operating Systems I Use

Needless to say, I use Linux in my virtual machines. It is free, there are several different flavors, and it simply works. Not to mention, other than maybe a few Microsoft technologies, most development tools, languages, etc. are made to run on Linux: you can install them without a hassle and they just work. I’ve found this is especially important when you only want to try something out, since a miserable experience getting everything to work would sour your first impression.

My host machine is a late 2013 model 15″ MacBook Pro. And yes, it is that old, but I bought the top-of-the-line model at the time and still don’t feel the need to upgrade yet. It runs the latest version of macOS (as of this writing 10.15 Catalina) and has been a very robust machine that has served me extremely well over the past seven years. I will probably use it until it dies.

But back to Linux.

Not only do I like to try out new programming languages, I also like to try out new flavors of Linux. That means almost every time I install a new VM, it is not what I had before. Sometimes it is a Debian derivative such as Ubuntu, or Debian itself; other times it is a Red Hat-family flavor like Fedora or CentOS; and sometimes I feel like openSUSE or even Arch Linux. And even if I keep the same flavor, I almost always choose a different GUI: sometimes GNOME, sometimes Xfce, sometimes KDE, and so on.

VMs are, of course, absolutely ideal for this. It would be a nightmare to constantly try to install new versions of Linux on bare metal — especially if you require wireless internet like I do.

…read the rest of the article →

The Productivity Tools I Use

June 11, 2020

In this post, I talk about which productivity tools I use for my work as a developer, my personal development projects and my personal life.

Like so many other people out there, I have a lot to do. There is always some project that is calling, some person needing an answer from me, some book that is begging to be read, some piece of writing that won’t compose itself, etc. Over the years, I have tried a large number of different methods for organizing myself and increasing my productivity. These range from digital solutions that promise to be the ultimate tool to increase one’s productivity to simple analog solutions using pen and paper.

And yet none of the tools I’ve tried have been that promised be-all and end-all solution that magically boosts my productivity. 

I have, though, managed to narrow it down over the years to a specific set I use depending on the type of task at hand. In this post, I am going to talk a bit about which tools I use in which contexts with the intention of hopefully helping someone choose the right one for their needs.

Moving from Analog to Digital

Paper Planner Format

Last year, I switched from a largely analog organization system to a fully digital one. For years, I used paper planners with a weekly format: the days on the left and, on the right side of the spread, a blank page I would use to plan my to-dos for the week. I would transfer this list of tasks from week to week, which was tedious, but since I had to do it by hand, I re-evaluated each and every item on the list every week. Through this method, unimportant to-dos would eventually be dropped entirely, freeing up both physical and mental space for more important things.

This system worked very well for a long time. Then I started a new job where I got a new iPhone, a new iPad Air with an Apple Pencil and a new MacBook Pro. I’ve had various smartphones and tablets over the years, but with the excitement of the new equipment, I decided to finally take the plunge into an all-digital system.

…read the rest of the article →