

Subscription Fatigue and Software

November 26, 2023

Have you ever upgraded one of your favorite programs or apps that you bought a permanent license for, only to be confronted by the need to subscribe to use the newest version?

This happened to me recently for one of the apps I frequently use on my iPad: GoodNotes. I bought version 5 a couple of years ago, but starting with version 6, they require a subscription to use it. I would have paid a new one-time license fee for the upgrade, but that is no longer an option and therefore I have moved on to Apple Notes instead.

GoodNotes is not the first instance of this happening to me and I find it to be a rather disturbing trend in the software industry. From an economic standpoint, I can see why a subscription model is much better for software development companies than a one-time purchase model, but it most certainly isn’t better for their customers.

Why Companies Are Moving to Subscription Models

Ever more software companies are moving to a subscription model in order to generate a more consistent stream of revenue. Instead of having spikes of income when they release new major versions of a piece of software, subscriptions allow them to have a continuous cash flow.

When Microsoft, for example, moved Office to a subscription model, they claimed it would allow them to focus more on the development of the software rather than on marketing major new version releases. It also has the added business benefit of ensuring a constant revenue stream.

From a business perspective, both of those are no-brainers. After all, what company wouldn’t want to have continuous cash flow while lowering marketing and release costs? This is, however, an entirely selfish perspective that could backfire.

Subscription Fatigue

Something that these software companies seem to forget is that a large number of other companies are also doing the same thing. This means that users end up with subscription fatigue which may drive away potential customers.

It is not uncommon for users to be subscribed to many different pieces of software or services at the same time. The costs add up pretty quickly which means that users are likely to second-guess whether they really need to sign up for a new one.

If I, for example, have to pay $10 or even $5 a month for a piece of software, I am going to carefully consider whether I really need it badly enough to spend money on it every month. Part of that is because I am already paying for other programs or services monthly or annually, and at some point I just don’t want to, or can’t afford to, spend any more money on software subscriptions.

I don’t have a problem paying for the software I use on a regular basis such as my Apple One subscription. I use iCloud Drive, Apple Music, Apple TV+ and the accompanying software almost every single day. The benefit I get from it therefore justifies the recurring cost.

However, most applications, such as GoodNotes or Microsoft Office, I only use sporadically. I may play with a free trial subscription, but most of the time, I cancel it before I have to start paying for it because I just can’t justify the recurring cost.

Applications I only use every once in a while are exactly the type that I would pay a one-time license fee for, but am not willing to pay for continuously.

Conclusion

To be fair to software subscriptions, there are certainly some upsides to them. For example, you always have the latest version, and most of the time you get a continuous stream of new features. Some customers, such as businesses, can even benefit from them.

By subscribing to a product such as Microsoft Office, companies receive a continuous stream of small, incremental updates instead of having to purchase brand-new licenses for major releases with sweeping changes. This reduces costs: employees don’t have to be retrained for a major new version, and there is no expensive deployment of a major upgrade.

That said, the individual consumer is the one who loses out. Their pockets aren’t nearly as deep and the benefits, while real, are fewer. At some point, the cost simply exceeds the benefits for software you don’t use all the time.

With so many companies now requiring a subscription for their software, it has gotten to the point that if I see one is required, I am unlikely to even consider the product. These days, I just skip it and look for something else that is either free or that I only have to pay for once.

This article originally appeared on Alex’s Notebook.

Linker: October 27, 2023

October 27, 2023

As always, there is a lot going on in the world of tech. This week’s “Linker” features experimental updates, new releases and a few tips and tricks.

Microsoft is testing new Windows 11 privacy controls in Europe. As most people in the tech world are aware, privacy laws are much stricter in Europe than in most other parts of the world. It looks like Microsoft is adding additional privacy controls in Windows 11 to better comply with local laws.

Report finds few open source projects actively maintained. A study of open source projects shows that maintenance is in decline and that 1 in 8 open source downloads carries a known risk. Only about 11% of open source projects are actively maintained.

Turning a Node.js Monolith into a Monorepo without Disrupting the Team. The title pretty much says it all. It is a useful article about the best strategy for turning a monolith into a monorepo without too much disruption of development.

What I Learned as a Product Designer at Apple. An ex-designer from Apple gives insight into what she learned during her time at the company. It is an interesting read for anyone who wants to better understand how Apple sees design.

Microsoft .NET 8 nears the finish line. .NET 8 is getting closer to its final release. There are a number of improvements and changes and it will receive three years of support.

Node.js 21 brings WebSocket client. Node 21 was recently released, and one of the features added is experimental support for a built-in WebSocket client.

Linker: October 20, 2023

October 20, 2023

This is the first of a regular series of posts featuring links of interest to developers and nerds. Enjoy them and if you have any suggestions, feel free to let us know by contacting us or in the comments below!

Angular users want better server-side rendering. A survey conducted in 2022 shows that developers not only want better server-side rendering in Angular, but they also want better debugging, profiling and testing.

30 Best Web Development Frameworks for 2023: A Comprehensive Guide. This is a great guide for anyone looking to start a new web project in 2023. It includes a comprehensive list of both frontend and backend frameworks in a variety of languages and weighs their pros and cons.

Green hills forever: Windows XP activation algorithm cracked after 21 years. This is probably entirely useless information, but could be amusing if, for example, you have an old PC lying around or want to run Windows XP in a virtual machine. The only real use-case I can think of for that would be to run old software such as games, but perhaps you have a better idea of what to do with it.

Node.js 21 is now available! The OpenJS Foundation announced the release of Node 21 this week. Following tradition, we can expect Node 20 to move into LTS later this month.

Linux & Open Source News: GNOME has a pull request to drop X11 support, plus Ubuntu 23.10 is out.

Why I’ve Switched from React to Angular for My Projects

October 15, 2023

For years, my go-to frontend framework was React. I started using it professionally around 2015 and so it was a natural choice to use for my personal projects as well since I was already in the mindset.

Of course, I have also dabbled around with others since then, particularly with Vue, but they never really stuck. I always just ended up using React in the end for whatever project I wanted to pursue.

So why would I suddenly want to start using Angular when nothing else stuck?

Note: I know that some developers call React a library rather than a framework. That is probably more accurate, but I’m going to continue calling it a framework in this article because when I say “React”, I’m referring to the entire ecosystem rather than just React itself.

A New Job

Earlier this year, I started a new job. I was hired as a frontend developer, but the project called for using Angular rather than React. When I applied for the job, I figured it would give me a good opportunity to truly try out another framework and see how it works, looks and feels for me. What I discovered blew me away.

Now, I know Angular isn’t the most popular framework amongst developers and particularly not amongst frontend developers. I’m pretty sure that is because it forces you to think much more like a backend developer than frameworks like React or Vue.

It is a strongly opinionated, object-oriented framework and you have to think in terms of classes (not the CSS variety!), services, components and modules. React and Vue are much more flexible in terms of how you can compose your components and structure your code. Both have their pros and cons, but as a full-stack developer, I’ve found I rather like having to write object-oriented code.

Why Angular?

After having used Angular professionally for several months now, I have decided that React just can’t compare to Angular in a number of different ways. I’ll sum them up here in no particular order.

Services

I love services in Angular. They are the primary way data is handled and are simple, intuitive, easy to unit test and ideal for handling large amounts of data.

So how do they work in Angular? I won’t go into a huge amount of detail because that is beyond the scope of this article, but I will give a very quick overview. Essentially, they are injectable classes that are used to fetch and store data in the application.

Angular treats each one of them as a singleton, and therefore the data stored in the class is persisted for the entirety of the user’s session. To use one, you inject it into your component class via the constructor, which gives you access to all of its public methods and properties, including the data it holds.
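
As a concrete illustration, here is a minimal sketch of the pattern. The NotesService name and its methods are my own invention; in real Angular, the class would be annotated with @Injectable({ providedIn: 'root' }) and the injector would supply the shared instance:

```typescript
// Sketch of an Angular-style service: a class holding session data.
// In Angular, @Injectable({ providedIn: 'root' }) would make this a
// singleton that the framework injects into component constructors.
class NotesService {
  private notes: string[] = [];

  add(note: string): void {
    this.notes.push(note);
  }

  getAll(): string[] {
    // Return a copy so callers can't mutate the internal state.
    return [...this.notes];
  }
}

// Angular injects one shared instance everywhere; we simulate that here.
const notesService = new NotesService();
notesService.add('Remember to water the plants');
console.log(notesService.getAll().length); // 1
```

Because the same instance is injected everywhere, any component that reads from the service sees the data that any other component has written.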

When comparing Angular services to the closest thing in the React world, you get a combination of the native Context API and a third-party global store such as Redux. The Context API allows you to avoid prop drilling, but it has to be used with caution because every time its value changes, React re-renders every mounted component that consumes it. Data is also only persisted as long as the component containing the provider is mounted.

Redux and other stores are globally available throughout the application and data is persisted throughout the entire user session. Re-rendering isn’t as much of an issue here, but all of that comes at a cost: massive amounts of boilerplate code. Writing reducers and thunk actions in Redux is powerful, but tedious and painful, especially if you use TypeScript. Getting the types right can have you pulling your hair out in frustration.

Angular services solve all of those problems and are as easy to use as writing any other class in TypeScript. That’s it. That’s literally it. No boilerplate, no re-rendering problems. It just works.

…read the rest of the article →

The Problem of Having Too Many Technologies to Choose From

September 27, 2023

There are constantly new libraries and frameworks coming out on the market. Most are open source and some even become popular very quickly. While this might seem like a great thing since it continuously pushes what technology can do, there are some darker sides to it that no one seems to talk about.

Staying On Top of Things

First of all, developers have to constantly keep up with the latest developments. While this isn’t a problem in itself, the rate at which new technologies emerge makes it very difficult, time-consuming and often stressful to do so.

Personally, I love to play with new frameworks as they come out. I love to experiment with them and see what they can do. In fact, I feel like I read more documentation than I do books. Sometimes I fall in love with one and want to immediately abandon the codebase for whatever project I’m currently working on and start over again with that new tech. Most of the time though, I try it and the excitement quickly fizzles out.

I do this in my free time, which, since having my first child, has been significantly reduced, so I am no longer able to experiment and play as much as I could before. Fortunately, I have no pressing need at the moment to learn anything new outside of work (which I use work time for), but it also means I am going to fall behind.

Decision Time

Even if you are able to keep up with the latest technologies, that doesn’t mean you don’t have any more problems. In fact, one of my greatest problems stems from the fact that I have experimented and played with a significant number of libraries and frameworks: what the heck do I use for my projects?

I am constantly coming up with new development projects to keep myself entertained and, once I have the initial idea, the first hurdle to getting started is deciding what technologies I want to use. Since I have played with several of them, I know a lot about their pros and cons. This might seem like a good thing, but it means I have to think a lot about what to use before I begin.

This usually takes the form of first sorting out the technologies that I either don’t want to work with or that aren’t suitable for the project. Then, I weigh the pros and cons of each of what remains until I finally come up with a reason to pick one.

Of course, it isn’t necessarily a bad thing to carefully choose the technologies that are most suitable for your project. The problem lies in the fact that so many technologies are so similar, with only nuanced pros and cons, that several of them would be equally good for the task at hand.

For example, if you are going to create a web application, do you choose React, Vue or Angular? Or something else entirely? Svelte? Would it make more sense to just skip a frontend framework entirely and go with good old-fashioned server-side rendering?

All of them will get the job done just fine, so which do you choose? Whatever happens to be your favorite at the time? In my case, I have professional experience with all but Svelte, so I know their ins and outs pretty well and am equally efficient with all of them in terms of development time.

The same goes with backend technologies. Should I build a TypeScript-based backend? If so, do I use Nest.js? Or do I just use Express or Fastify? Or do I go with something even more mature and feature-rich such as Symfony for PHP or Spring Boot for Java?

I have experience in all of them and could easily work with any one of them. Because of that, I often end up spending days getting wrapped up in the decision and often don’t even start the project because I exhaust myself just deciding what to use to make it!

This isn’t a problem that exists for work, however. Usually, you work with a team of people and the technologies are already established. If you are starting a new project, there is usually some sort of consensus amongst the developers about which technologies to use and it usually quickly becomes obvious what the choice is going to be.

Conclusion

I have discussed this issue with other developers and it seems like I am not alone with these problems. Some either fight to keep up with what’s new on the market or they are stressed out because they don’t have time and are unable to.

Surprisingly few seem to share the problem of deciding which technologies to use as I described above. That is because they simply have a set of technologies they always use for personal projects and, even if they experiment with new ones, they still fall back on their favorites for everything because they know them best and are most comfortable with them.

I have always thought that having a set of favorite technologies I can fall back on would be ideal, but that starts the loop all over again: which ones do I choose?!?

Do you have a problem deciding which technologies you want to use for your projects? How do you keep up with emerging technologies? Let us know in the comments below!

Vue.js: Route-Level Code Splitting with a Page Loader

June 18, 2023

Vue.js makes it easy to implement router-level code splitting. Page loaders are a great way to indicate that your application is loading.

Vue.js combined with Vue Router makes it easy to implement router-level code splitting. For this post, I’m going to assume you know what I am talking about. If you don’t, please see the Vue Router documentation on Lazy Loading Routes.

Code splitting at the router level, however, means that the user downloads a new JavaScript file every time the route changes. As the user browses the website, these files are cached so the browser won’t have to download them again, but the initial load of each one may take a moment, which is why it is important to give the user feedback to let him or her know that your website is still actually doing something.

That is where page loaders come in. A page loader is nothing more than some sort of indication that a page is loading. They come in many forms, from simple text (“Loading…”) to fancy animations. For this example, we are going to build a Google-style loader.

Our page loader is an animated bar (in Vue.js’s green!) that runs across the top of the screen. This is a convenient way of displaying a loading status as it is non-blocking and universal.

So how do we do that?

Fortunately, Vue.js makes it easy. I also tried to implement this example using React and React Router, but gave up after two days of fighting with it. It only took me a couple of hours to implement it in Vue.js, and most of that was spent trying to get the bar animation the way I wanted it. The actual logic that shows and hides the page loader was done in only a few minutes.

At this point, if you would just like to get straight into the code without the explanation, see the GitHub repository I created for it.

Code Splitting

The first thing we need to do is add the code splitting at the route-level by lazy-loading the views. When you create a Vue.js app using their init script, the About page will automatically be lazy-loaded. We will modify this slightly to lazy load the homepage as well using the import() function:

import { createRouter, createWebHistory } from 'vue-router'

const router = createRouter({
  history: createWebHistory(import.meta.env.BASE_URL),
  routes: [
    {
      path: '/',
      name: 'home',
      component: () => import('../views/HomeView.vue')
    },
    {
      path: '/about',
      name: 'about',
      // route level code-splitting
      // this generates a separate chunk (About.[hash].js) for this route
      // which is lazy-loaded when the route is visited.
      component: () => import('../views/AboutView.vue')
    }
  ]
})

export default router

That is all that needs to be done in order to code-split at the route level. As you add more routes to your application, you just use the import() function to lazy-load them.

Adding a UI Store

Before we get to the page loader component, we need a way to control when the page loader should be shown. To do this, we are going to use Vue’s official store library, Pinia. In this example, I have created a store called “UI”, but you can put this logic in any existing store you might have, depending on what makes sense for your application.

…read the rest of the article →

It’s Amazing How Little Work Developers Are Allowed to Get Done

June 6, 2023

Having been employed as a developer in one form or another for the past couple of decades, I can thoroughly relate to what this blogger has written about how little work actually gets done and why.

“I’ve been employed in tech for years, but I’ve almost never worked” — that is the title of a blog article I came across today that rang so true in my ears that I immediately had to write a blog post about it myself.

Having been employed as a developer in one form or another for the past couple of decades, I can thoroughly relate to what this blogger has written. Frankly, it is shocking how little work actually gets done in programming teams.

That isn’t to say that programmers are lazy or aren’t willing to work, but rather the companies that employ them often prevent them from working. Either management does not give them enough to do or processes irrelevant to their actual work prevent them from getting anything done. In most cases, it’s a little bit of both.

The Agile Way of Working

The author of the article also includes scathing criticism of the popular Agile way of working. He argues that not only is “productivity […] sacrificed in the name of predictability”, but also that it leads to “task bloat” where people make tasks much more difficult than they actually are, as well as the fact that “agility”, in practice, makes companies much more rigid rather than “agile”.

Essentially, he puts into accurate words what I myself have experienced as a developer at various companies. Proponents of the Agile Method will argue that it streamlines processes and promotes communication. In my experience, that means the opposite: more “meta-work” (i.e. managing Jira, physical boards, paperwork, demos, estimating difficulty, etc) and more meetings, most of which are irrelevant for me and my job at any given time.

The Agile Method is an enormous distraction from your actual work as a developer. It’s a system designed to make managers feel good and in control and they do so by preventing developers from getting their jobs done.

That is, at least, my experience and I would argue that articles like this one prove that I am not alone in my opinion of it.

Hype About New Technologies

In the world of programming, new technologies are constantly emerging and being hyped. It seems like every month there is some great new library, framework, AI chat bot, programming language, software architecture, compiler, packager, etc that will improve the lives of developers and companies alike.

However, the opposite is frequently true. Not only does management frequently jump on the bandwagon of whatever is currently trendy and order products be built around it, they do it in a way that demoralizes developers.

They may, for instance, hire extra developers specifically to work on it which leads to more “task bloating” since there isn’t enough work to go around. Even worse is when developers are forced to build a product that no one wants or uses. It was just built at the whim of some manager who thought the latest and greatest was going to solve all of their company’s problems.

Again, I’m not alone in having frequently experienced that in the workplace. The blogger also talks about his own experiences with it.

Conclusion

I am not going to summarize everything he says in his article as that would be pointless. Suffice it to say that it is extremely well-written and makes a huge number of valid points about the work experience as a developer.

There is also a German translation of this blog article available from Golem.de:

Wir arbeiten nicht. Null.

This article originally appeared on Alex’s Notebook.

What has your experience been with working as a developer or with developers?

Reducing the Number of WordPress Plugins

May 29, 2023

One of WordPress’s greatest strengths is its positively huge plugin ecosystem. There are plugins for just about everything you can think of out there. At the same time, however, it is also its Achilles’ heel.

Anyone can write a plugin and publish it on the official WordPress repository for anyone else to install. On the one hand, that’s a wonderful thing. It brings an enormous amount of freedom to the platform, but on the other hand, it also has the potential for serious security risks to your WordPress installation.

In fact, we frequently see this in the tech news: headlines about security vulnerabilities found in plugins are not a rare sight. It doesn’t matter whether the programmer was acting maliciously or not; many plugins are simply not secure. The WordPress plugin ecosystem is rife with security holes.

As such, I have always been very cautious about installing plugins in my WordPress installations. My first rule is that I only install plugins from trusted sources and with a lot of installs. Anything from Automattic (the company behind WordPress), for example, is generally a safe bet.

Not only does going with a trusted source with a lot of installs increase the likelihood that the code is of high quality, such plugins are also more likely to receive updates if a security vulnerability is found. Others may never see another update, even if there is a severe security hole. It is entirely up to the developer.

Other than that, I regularly go through the few plugins I do have and see if I can’t somehow get rid of another one of them. Newer versions of the WordPress Core will sometimes bring features that you previously had to rely on plugins for. Other times, you just stop using whatever functionality they offered, so you can uninstall them.

As a programmer, I also don’t install plugins for simple tasks. Instead, I add the functionality to my theme or write a plugin myself so that I know I can trust it: I know exactly what it does. While I run the risk of unwittingly introducing a security hole myself, it at least won’t be so widespread that bad actors are likely to take the time to actively exploit it.

Plugins will also potentially slow down your website by loading unnecessary JavaScript files, fonts, images, CSS files, or other resources. They may be needed for the plugin to work, but if you either don’t need the functionality anymore or only need a fraction of what it loads, it may not be worth the performance trade-off.

In any case, plugins can be a wonderful way of expanding the functionality of your WordPress website, especially if you aren’t a programmer, but they also have a negative side to them. In my opinion, you can absolutely enjoy the massive plugin ecosystem, but they are to be enjoyed with caution.

This article originally appeared on Alex’s Notebook.

What are your experiences with WordPress and its plugin ecosystem? Have you ever had any security or incompatibility issues with them? Let us know in the comments below!

WordPress vs a Custom-Made Website

September 9, 2020

There are valid reasons for choosing WordPress to power your website, but there are also many good reasons for creating a custom-made website. In this article, we will explore some of them.

When I first started working on Developer’s Notebook, the website was originally going to be a completely static website based on the React framework Next.js. The plan was to write articles as Markdown files which would be committed into the project’s repository. When the project was built, they would then be made into static pages by Next.js. A simple enough concept.

In fact, I released the code I wrote for this concept as an open source project, which is still available on my GitHub account today.

I wrote a generator that automatically put together the RSS feed as well as a sitemap.xml file when the project was built. I also spent time poring over the schema.org specs and implementing a strategy to automatically add the correct structured data to posts for SEO.

It was a ton of work and I put a lot of time into it. And yet, in the end, I still decided to go for WordPress to power Developer’s Notebook. Why did I do that?

Use-Cases

I don’t love WordPress. I feel I need to start off by saying that right away. I don’t hate it either, but it isn’t always my first choice. PHP is not my favorite language and I have enough experience with WordPress to know how unstable it can become if you don’t treat it properly, but I also know it has its strengths and sometimes a WordPress website can even be advantageous when compared to a custom-made website.

Small Projects

Let’s start off with small projects. When it comes to WordPress, it is important to distinguish between small and large projects. For our purposes here, we can define a small project as being a simple application. I’m not talking about scale here, but rather simplicity. A fairly simple website without much custom logic is a great candidate for WordPress regardless of scale.

A few examples of this could be a blog, a basic website such as for a restaurant, a personal portfolio, a shop with WooCommerce, a forum with bbPress, a self-hosted social network with BuddyPress, a news website, an online magazine and so on. The list is very long.

These are ideal candidates for WordPress because it provides most of the functionality out of the box. For some of them, you have to install extra plugins, but the point is that it doesn’t involve a lot of complex, custom business logic.

Large Projects

On the other hand, we have large, complex projects. A lot of businesses start with a WordPress website just to get online as soon as possible. This makes perfect business sense as they can get started without much hassle.

However, it is often the case that their business needs outgrow what WordPress was originally designed for. At that point, it doesn’t make much sense to keep WordPress around, and it is usually advisable to switch to a custom-made website, since maintaining an outgrown instance of WordPress will generally cost more money, time and effort in the end than simply rebuilding it.

This problem isn’t just specific to WordPress, but could be applied to most pre-made software that offers certain functionality out-of-the-box. This sort of software isn’t inherently bad for businesses, but it just isn’t possible for a single platform to cater to every specific need.

…read the rest of the article →

Debugging Node.js Remotely with Visual Studio Code

August 4, 2020

Visual Studio Code is a tool with many talents. Among those is the ability to not just debug Node.js applications, but also to debug them remotely.

Debugging a Node.js application remotely using Visual Studio Code is a small matter of configuration. Microsoft’s do-all editor makes it easy to create a debug configuration that teams can even commit into their repositories so that all developers can benefit from it.

In order to simulate a remote Node.js application in this article, we are going to run a simple one in Docker. That means the following will also work for applications running in a Docker container. If you are only interested in the configuration for Visual Studio Code, then feel free to scroll down to the “Visual Studio Code Configuration” section below.

The following example project is also available as a repository on GitHub which may make it easier to understand the structure.

A Simple Node.js Script

The first thing we need to do is to set up a basic Node.js app that we can test with. We won’t program much here because we don’t need to for this example. Instead, we will just include this simple script:

const printTest = () => {
    let test = 'test';
    test += ' value';
    console.log(test);
};

setInterval(printTest, 1000);

The code doesn’t really do much, but it allows us to set a breakpoint at line 3 and then step through the code in order to see the difference in test before and after ' value' is appended to it. setInterval(printTest, 1000) is the application loop and keeps the script running in Docker so we don’t have to restart it every time. That’s enough for our purposes here.

Of course, this code is just representative of any other Node.js code that can be debugged. This would also easily work with TypeScript without any further changes to the code or configuration.

We will then save it as ourScript.js and copy it into our Docker container.

Docker Configuration

The next thing we need to do is set up our Docker environment for testing. To do this, we will create a simple Dockerfile that looks like this:

FROM node:14-alpine
WORKDIR /app
COPY ./ourScript.js .
CMD ["node", "--inspect=0.0.0.0:9229", "./ourScript.js"]
EXPOSE 9229

Again, there isn’t anything fancy going on here. There are, however, two important parts of this configuration that will enable us to debug remotely.

These are: --inspect=0.0.0.0:9229 in CMD and EXPOSE 9229.

Both of these are critical for debugging a Node.js app/script remotely in Docker. If you want to debug an application on a server, you will most likely just need to use the --inspect flag without the IP address.
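To put both pieces together, the container can be built and started with the inspector port published to the host. The image and container names below are just example assumptions, not part of the original project:

```shell
# Build the image from the Dockerfile above
docker build -t node-debug-example .

# Run it, publishing inspector port 9229 so the host can attach
docker run --rm -p 9229:9229 --name node-debug-example node-debug-example
```

With the container running, a debugger on the host can attach to localhost:9229.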

…read the rest of the article →

Using Redis Sentinel with Docker and Marathon

July 17, 2020

Note: This article expands upon the principles as explained in my other post about Redis Sentinel and Docker Compose. If you haven’t read it yet, you should do so before continuing with this article.

Using Redis Sentinel with Docker and Marathon is a relatively complex procedure that requires every instance of Redis to be able to communicate with all other instances. The Sentinels have to talk to both the master and the slaves while the slaves have to be able to synchronize with the master.

Since there will be a lot of code in this article, I have created a GitHub repository where it might be easier to follow and understand.

The example in this post will work with the same setup as defined in my other article about Redis Sentinel and Docker Compose:

  • We need to define a master instance.
  • We need to set up one or more slave instances.
  • We need to start at least three Sentinel instances.
  • They all need to communicate with each other.

This setup is relatively easy to accomplish with Docker Compose, but what if we want each instance to run in its own Docker container with Marathon? That is where things begin to get a little more complex.

Marathon Configuration

First of all, we need to set up our Marathon configuration so that it deploys our Docker images properly. Essentially, this is the same as the “docker-compose.yml” file from my last post, but in JSON format, plus a couple of extra parameters that Marathon requires:
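As a rough sketch of the shape such a definition takes, here is a minimal Marathon app definition for the master instance. The app id, image tag, and resource values are placeholder assumptions for illustration only:

```json
{
  "id": "/redis/master",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "redis:alpine",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 6379, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  }
}
```

The slave and Sentinel apps would follow the same pattern with their own ids, commands, and port mappings.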

…read the rest of the article →

Using Redis Sentinel with Docker Compose

July 13, 2020

Redis is an easy-to-use solution for anyone looking for a robust key-value store. It is feature-rich, but relatively simple to use, and even has official Docker images. This post will not go into any more detail about what exactly Redis is, as it assumes the reader already knows. If not, you can read about it on the official Redis website.

What we will discuss, however, is how to create a failover solution using Redis Sentinel and Docker Compose. There are several code examples in this post, so it might be easier to follow them as well as to understand the project structure on GitHub.

Redis Sentinel is essentially a mode in which the Redis server can be started. A Sentinel watches the master Redis instance and, in the event that the master becomes unreachable, chooses a replacement from among the slave instances.

In order for it to do this, the following needs to be configured:

  • We need to define a master instance.
  • We need to set up one or more slave instances.
  • We need to start at least three Sentinel instances.
  • They all need to communicate with each other.

So how do we do all of this?

Redis makes it relatively easy. First, we start a normal instance of Redis, which will be the master. Then we start additional instances that will become the slaves; when starting them, we pass a flag with the IP address/hostname and port of the master instance. This flag both marks these instances as slaves and tells them which instance is the master. An example command looks like this:

redis-server --slaveof 127.0.0.1 6379

Now we have the first two bullet points in our list taken care of, but we still need to start at least three Sentinel instances. Redis needs multiple Sentinels so that they can “vote” for a slave instance to become the new master. This is a bit trickier, as we first have to define a configuration file. More on that later, but for now, here is the command to use when starting a Sentinel instance:

redis-server /redis/sentinel.conf --sentinel

All of these instances will need to run on separate servers, or be set up with different configurations if running on the same server. That is beyond the scope of this article, though, as we are going to isolate each instance in a Docker container instead.
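For reference, a minimal sentinel.conf might look something like the following. The master name mymaster and hostname redis-master are assumptions for illustration; the trailing 2 in the monitor line is the quorum, i.e. how many Sentinels must agree that the master is down before a failover starts:

```
port 26379
sentinel monitor mymaster redis-master 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
```

Each Sentinel instance would get its own copy of this file.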

To use Docker, we will need to start multiple containers at once. The best way to do that is with Docker Compose. This article assumes some background knowledge of both Docker and Docker Compose. First, we need to create a Compose file that looks something like the following:
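As a rough sketch of the shape such a Compose file takes — the service names and Redis image tag here are assumptions, and the Sentinel service would need a custom image with the sentinel.conf baked in:

```yaml
version: "3"
services:
  redis-master:
    image: redis:alpine
  redis-slave:
    image: redis:alpine
    # The service name resolves to the master container inside the Compose network
    command: redis-server --slaveof redis-master 6379
    depends_on:
      - redis-master
  redis-sentinel:
    build: ./sentinel
    command: redis-server /redis/sentinel.conf --sentinel
    depends_on:
      - redis-master
      - redis-slave
```

Scaling the slave and Sentinel services then gives us the one-master, multiple-slave, three-Sentinel topology from the bullet list above.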

…read the rest of the article →