Simply put, containerizing your code means packaging it with a consistent environment to guarantee that the code runs reliably no matter where the container is deployed. One of the most expensive headaches for software engineers stems from sinking endless hours into developing software that, when deployed, behaves differently than it did in the environment where it was developed.
By running code inside a container, the host machine is responsible only for running a consistent container engine, while the container is responsible for providing a consistent application environment. Any network port mappings are specified when the container is run, ensuring the application code receives network traffic if needed.
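As a concrete sketch, here is what a minimal Dockerfile for a hypothetical Node.js service might look like; the base image, file names, and port are assumptions for illustration, not a prescription:

```dockerfile
# Start from a known base image so the runtime is identical everywhere
FROM node:20-alpine

WORKDIR /app

# Install dependencies inside the container, not on the host
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code into the image
COPY . .

# Document the port the application listens on
EXPOSE 3000

CMD ["node", "server.js"]
```

Building the image bakes in the environment, while the network mapping happens at run time, e.g. `docker run -p 8080:3000 my-app`, which forwards traffic from the host's port 8080 to the container's port 3000.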
Aside from the luxury of running code in a consistent environment, agnostic to its deployment destination, engineers can improve site reliability by managing the health of running containers without reaching for application-level debugging as a first resort.
For example, let's say an engineering team deploys an application with a tiny-yet-mighty bug in it. This little nuisance has the ability to crash the entire application, but only about 1 in 10,000 users is likely to stumble across it. Without containerization, a bug like this could take down an entire server, possibly affecting other running processes and forcing the engineering team to manually respond and revive the server. Only once the server is revived does the team buy some time to diagnose the real source of the bug.
If the same code were running inside a container, this bug would, at worst, take down the container, leaving the server and any adjacent processes unaffected. Furthermore, if the server ran a process that managed the health of that container, then instead of paging a member of the engineering team to manually revive the container, that management process could be given permission to restart the container itself until all health checks succeed. If this sounds familiar, Kubernetes (k8s) is a tool that does exactly this and more.
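As a sketch of what that automated health management looks like in practice, a Kubernetes Deployment can declare a liveness probe; the names, image tag, and health endpoint below are all assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical deployment name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0   # hypothetical image tag
          ports:
            - containerPort: 3000
          # If this probe fails repeatedly, the kubelet restarts
          # the container automatically, with no human paged.
          livenessProbe:
            httpGet:
              path: /healthz    # assumed health-check endpoint
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
```

With this in place, a crash caused by our hypothetical 1-in-10,000 bug triggers an automatic container restart rather than a manual server revival.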
In today's cloud computing world, containerizing your code is almost always a good move; running your code in a protected, consistent environment removes ambiguity when unexpected scenarios occur. But when it comes to containerizing code that ultimately runs somewhere else (as is the case for front-end code), the justification loses merit, because all that container does is receive traffic and serve static files. With a myriad of services offering free-to-affordable, robust static file hosting (like S3), the only thing better for engineers who manage containers is not managing anything at all.
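For instance, a compiled front-end build can be pushed to static hosting with a single command, skipping containers entirely; the bucket name and build directory here are assumptions:

```shell
# Upload the compiled front-end assets to an S3 bucket,
# removing stale files from previous deploys.
# (Assumes the AWS CLI is installed and configured with credentials.)
aws s3 sync ./build s3://my-frontend-bucket --delete
```

There is no server or container to monitor afterward; the hosting service handles availability.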
There are, however, some scenarios that still justify containerizing code that isn't executed inside the container itself. One example is when that container is part of a larger network of containers, and the surrounding containers benefit from having the front-end code within their ecosystem. Ultimately, going this route incurs more responsibility for the system and, if not carefully considered, can be overkill for the use case.
At Breadboard Engineering, we're constantly weighing tradeoffs to ensure we are solving the right challenges with the right amount of effort. Prudent engineering, without solving for problems that don't exist, is a foundational value of our team. As with any challenge, understanding the problem being solved is the best foothold for creating a lasting solution. Sound like your kind of building environment? Breadboard Engineering is hiring.