Posts in: System Architect

Cloud Native

How does your team work with cloud infrastructure?

Cloud Native Topics

  • Development process
  • System design
  • Builds and packages
  • Deployments
  • Release
  • Cloud infrastructure

Development process

When working with cloud systems, the proven method for developing and running applications in production is DevOps

DevOps is the practice of code > build > test > deploy > release > repeat

DevOps is about bridging the gap between development and production

System design

You can design applications in several ways, but if your applications run in the cloud then microservices are usually the better fit; otherwise, why carry an on-premise design into the cloud when you can take advantage of cloud infrastructure?

Builds, CI and packages

Containers are by now the default way to build and package applications, as they are convenient to ship and deploy

It's easy to spin up a local development environment using containers and work on your application

CI is an integral part of this flow: every commit triggers a pipeline that builds your code and prepares it for deployment
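
A minimal CI build step could look like the sketch below. It assumes a Dockerfile in the repository root and a hypothetical registry URL; a real pipeline would run this inside a CI system such as Jenkins or GitHub Actions.

    import subprocess

    IMAGE = "registry.example.com/my-app"   # hypothetical registry and image name

    def build_and_push(git_sha: str) -> None:
        tag = f"{IMAGE}:{git_sha}"
        # Build the container image from the repository's Dockerfile
        subprocess.run(["docker", "build", "-t", tag, "."], check=True)
        # Run the test suite inside the freshly built image
        subprocess.run(["docker", "run", "--rm", tag, "pytest"], check=True)
        # Push the image so it is ready for deployment
        subprocess.run(["docker", "push", tag], check=True)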

Deployments

Deployments to production are easier when the application is packaged as a container image

This is the step after the build: it confirms that the tests passed and that the container image with the latest code has already been pushed to the image repository, ready to be deployed to production
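
The deployment step itself can be very small if a Kubernetes cluster runs the workload; a sketch assuming a hypothetical Deployment and container both named my-app:

    import subprocess

    def deploy(image_tag: str) -> None:
        # Point the existing Deployment at the freshly pushed image; Kubernetes rolls it out
        subprocess.run(
            ["kubectl", "set", "image", "deployment/my-app", f"my-app={image_tag}"],
            check=True,
        )
        # Wait until the rollout finishes (or fails)
        subprocess.run(["kubectl", "rollout", "status", "deployment/my-app"], check=True)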

Release

This is the step after a successful deployment, when the new image is already running in production

Now the choice is when to enable the new features, using a feature toggle

Once the new features are enabled the release step is complete
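
A feature toggle can be as simple as a flag read at runtime, so code can be deployed dark and switched on later. The sketch below reads the flag from an environment variable; the flag name is hypothetical, and real setups often use a flag service or config store instead.

    import os

    def feature_enabled(name: str) -> bool:
        # A flag like FEATURE_NEW_CHECKOUT=true enables the feature without a redeploy
        return os.environ.get(f"FEATURE_{name.upper()}", "false").lower() == "true"

    if feature_enabled("new_checkout"):
        ...  # serve the new code path
    else:
        ...  # keep serving the existing behavior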

Cloud Infrastructure

To manage the containers and successfully implement microservices you'll need a Kubernetes cluster to orchestrate the container runtime

Other services, like email delivery, databases, load balancers and more, can be integrated with your Kubernetes cluster and used by the entire stack

Summary

Cloud native has proven to deliver better results and happier developers

But hey, you can always just start a long-running VM and install some stuff on it

DR for production systems

If you clicked this blog post then you probably need to implement disaster recovery (DR) in your production systems, or have been asked about it.

Let’s take a look at the options for DR in production.

Non-prod Vs. Prod

In order to provide a DR solution we'll need to look at the current state of the systems, and specifically: is the product deployed in production, or shipped to clients as a downloadable software package?

Non-prod basically means a company that develops software which is sent to clients and installed on premises in the client's data center.

It can be self-managed with a support SLA, but the client is responsible for the infrastructure and operations.

Production means the software is deployed to a system that actively handles requests from clients and must stay online for users to use it.

The two scenarios are very different: in non-prod there's no DevOps, since the product is not in production, but if the product is in production then DevOps is needed to bridge the gap between development and production systems.

Passive DR

Passive DR means keeping a duplicate of the infrastructure and systems that is activated only when needed.

The goal is minimal downtime and a second copy of the data in another location, available when needed.

This approach is suitable for non-prod, where there is no real need for active DR since the product is not in production.

Also, switching from passive DR to active takes time: you need to check and test that every system is up and that the data was indeed replicated as expected, so it's a slow process.

Active DR

Active DR means running a duplicate of the infrastructure and systems concurrently with the main production systems.

Basically it's two sets of production running simultaneously, which is very suitable for production systems since the DR site is active and already serving production traffic.

  • Active DR can absorb new load immediately and respond to it via auto-scaling.
  • Production load can be shared between the main site and the DR (see the sketch after this list).
  • Active DR is in use, meaning the costs you pay are being utilized, in contrast to passive DR, which is paid for but sits idle.
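
To illustrate the load-sharing point, here is a minimal sketch that splits incoming requests between the main site and the DR site using weights; the endpoint URLs and the 70/30 split are hypothetical, and in practice this is usually done with weighted DNS or a global load balancer rather than application code.

    import random

    # Hypothetical endpoints for the two active sites
    SITES = [
        ("https://main.example.com", 0.7),   # main production site takes 70% of traffic
        ("https://dr.example.com", 0.3),     # active DR site takes 30% of traffic
    ]

    def pick_site() -> str:
        urls, weights = zip(*SITES)
        return random.choices(urls, weights=weights, k=1)[0]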

Summary

If your company needs a DR solution you have two options to pick from: review your stack and system design, consider both options, and decide which one is best for your product.

Benefits of Databases in Microservices

If you are using a microservices approach in your stack then you might want to take it a step further and add a dedicated database for each service.

In this blog post we’ll discuss the benefits of having databases in Microservices.

Dedicated database per service

When every service has its own database it actually simplifies things, since every application is ultimately based on CRUD operations.

And every service can be built around the specific requirements it has.

Mixing database types

One of the common questions when building software is how to store the data: what kind of database should we use? SQL? NoSQL?

Choosing one type of database over another can limit the application stack, so why not use several types of databases?

So let's assume the login service stores users, passwords and emails. It does not require extreme efficiency and speed, since login happens once per user for as long as the session is open and the user has not logged out.

In this case we can choose whichever login implementation is easiest and fastest to build.

What about the feed service? Let's say your application has a feed of data for the users; this should be low latency and very fast, so you'll probably want to use a key-value store database.

You get the gist: every service now has its own mini stack.
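
A small sketch of the idea, with hypothetical service names: the login service keeps its users in its own relational database (SQLite here for brevity), while the feed service keeps per-user feeds in its own key-value store (Redis via the redis-py package, assuming a running Redis instance); neither service touches the other's database.

    import sqlite3
    import redis  # assumes the redis-py package and a local Redis server

    # --- login service: its own SQL database ---
    login_db = sqlite3.connect("login_service.db")
    login_db.execute("CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY, password_hash TEXT)")
    login_db.execute("INSERT OR REPLACE INTO users VALUES (?, ?)", ("user@example.com", "hash..."))
    login_db.commit()

    # --- feed service: its own key-value store ---
    feed_db = redis.Redis(host="localhost", port=6379)
    feed_db.lpush("feed:user@example.com", "first item", "second item")
    latest = feed_db.lrange("feed:user@example.com", 0, 9)  # last 10 feed items, low-latency read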

Enhanced database security

Once each service uses its own database, only that specific service accesses that database.

In other words, database access is scoped per service.

Unlike one big database that all services connect to, and let's be honest, probably with the same credentials.

So you can add rules so that only a specific service can access its database and no other service can, and only that service holds the CRUD credentials for that specific database.
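
One way to apply this, sketched below with hypothetical environment variable names: each service reads only its own credentials from its own environment (or secret store), so the login service literally has no credentials for the feed database.

    import os

    def login_db_url() -> str:
        # Only the login service's environment defines these variables
        user = os.environ["LOGIN_DB_USER"]
        password = os.environ["LOGIN_DB_PASSWORD"]
        host = os.environ["LOGIN_DB_HOST"]
        return f"postgresql://{user}:{password}@{host}/login"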

Reduce database load

If every service connects to its own database then read/write operations are faster, because each database handles fewer connections, unlike one big database that all services connect to.

That reduces hardware requirements as well, since the database hardware can be sized for the load of each individual service.

Clean database

What I mean by a clean database is that every database accumulates unused or deprecated data that needs to be cleaned up.

With a dedicated database for each service that cleanup is easier, since you know the data belongs only to its parent service, and any change to the application can easily be applied to its database too.

Let's assume you decide that a specific service is deprecated; you'll probably want to delete its data.

How do you do that if you work with one big database? But if that service only uses its own dedicated database, then you simply retire the service together with its database.

Backup is easier

When you have one big database for all services you need to back up that entire database, regardless of how much of the data is actually used.

What about restore? The same applies here: you'll need to restore the entire database rather than just the specific data that was affected.

Now assume that every service has its own database and you need to schedule a backup or restore; you only need to do it per service, not for one giant database holding every service's data.
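
As a rough sketch (the service and host names are hypothetical, and mysqldump is just one example), a per-service backup can be a simple loop that dumps each service's database to its own file, so any one of them can be restored independently:

    import subprocess

    SERVICES = ["login", "feed", "billing"]   # hypothetical services, one database each

    for service in SERVICES:
        dump_file = f"{service}.sql"
        # Each service's database is dumped on its own
        with open(dump_file, "w") as out:
            subprocess.run(["mysqldump", "-h", f"{service}-db.internal", f"{service}_db"],
                           stdout=out, check=True)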

SpinningOps helps startups improve their system design; contact us HERE and ask what we can do for your application.

Bots As Part Of Your Cloud Ops

Do you have bots in your stack?

How do you assign permissions to a bot?

Bots in the context of your cloud ops

I'll explain. Let's say you use Jenkins for your CI/CD pipelines: how does Jenkins clone new code from the code repository?

Or how does your Slack channel receive alerts from an app?

Sometimes you need bots in your stack, but here's the challenge: what permissions does a bot get?

Let’s start with naming your bot

A good practice is to name the bot after what it's supposed to do, for example:

  • bot-jenkins
  • bot-slack
  • bot-s3-read-only
  • etc..

You get the point: start the name with bot- so other people won't confuse it with human users.

What about naming policies?

I use the same naming for permissions and policies; it's much easier to manage something when its name tells you what it is and what it does.

Keys or Roles?

I prefer using Roles; it's better than just putting a secret somewhere and not knowing who uses it.

But there might be a situation where you'll need to use keys, for example if you need to rsync files from a remote server on a different cloud vendor, then Roles are not an option; just make sure those keys have exactly the permissions needed for the task.

Once the task is done, deactivate the keys.
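
If the bot's keys live in AWS IAM, deactivating them can itself be automated; a minimal sketch using boto3, where the bot user name and key id are placeholders:

    import boto3

    iam = boto3.client("iam")

    # Deactivate the access key once the one-off task is finished
    iam.update_access_key(
        UserName="bot-s3-read-only",   # hypothetical bot user
        AccessKeyId="AKIA...",         # the key that was used for the task
        Status="Inactive",
    )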

Summary

Bots are part of your stack, so give each one exactly the permissions its specific task requires.

6 Rules For Cloud Architect

Are you a cloud architect? How do you plan a new infrastructure for a product?

How do you build workflow in the cloud?

What are the considerations of cloud security, costs and automation?

All are relevant questions when planning a new cloud design for a product's runtime, so let's discuss them.
Also, this is my approach, and it has served me well in all of my designs and cloud operations in production.

Preplan

Preplanning is not one of the 6 rules, just a starting point.

It's better to plan before starting to build any project. In order to plan you'll need to understand the product, so ask these questions:

  • What problem does the product solve?
  • Who’s going to use it? (demographics)
  • What are the business risks of downtime?
  • What is the expected or current revenue?
  • What is the technical flow of the product? (user login, integrate with API, consume data from database, etc..)

The more you ask, the more information you'll have during the design process, so don't skip this step.
It's easy to jump straight into building things without asking about needs and requirements.

Costs

If your design costs more than the revenue it supports, the product won't justify itself. This is very important, as a design that is bad from a cost perspective can significantly affect the entire business.

So, in every step of the planning consider costs!

Cloud Security

Every product carries business risk: what if the application is down for an hour? What is the effect on reputation and revenue?

What if some services and data are exposed to unauthorized parties?

So include security measures in the design to make sure your product is protected, but don't overdo it, as that can cause issues with workflow and runtime.

Balance is key here.

Automation

Building and working without automation means spending time on repetitive tasks; this is not efficient and will slow delivery.

Try the IaC (Infrastructure as Code) approach; it means you can deploy and modify an entire infrastructure in minutes.

Also, you can find out the current stack components by checking the IaC files.

Combine IaC and Immutable infrastructure to get maximum results.
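
There are several IaC tools (Terraform, CloudFormation, Pulumi and others); as a rough illustration, here is a minimal Pulumi program in Python that declares an S3 bucket, so the bucket's existence and configuration live in version control rather than in someone's head. The resource name is a placeholder.

    import pulumi
    import pulumi_aws as aws

    # The bucket is described as code; "pulumi up" creates or updates it to match
    artifacts = aws.s3.Bucket("app-artifacts")

    pulumi.export("artifacts_bucket", artifacts.id)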

Decouple Dependencies

When building software and infrastructure it's easy to tie components together and hard-code things; the more hard-coding and dependencies there are between components, the more issues they will cause.

Let's say you designed the infrastructure with hard-coded IP addresses; this means those IPs can never change, and the same goes for other hard-coded config files.

Another example is a service whose start-up depends on other services, for example an application that requires the monitoring agent in order to start; monitoring is nice, but it should not affect production services.
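
A small sketch of both points (the variable names and defaults are hypothetical): endpoints come from the environment instead of being hard-coded, and a missing monitoring agent is logged rather than blocking start-up.

    import logging
    import os

    # Resolve the database by name/environment variable, never by a hard-coded IP
    DB_HOST = os.environ.get("DB_HOST", "db.internal")

    def start_monitoring() -> None:
        agent_url = os.environ.get("MONITORING_AGENT_URL")
        if not agent_url:
            # Monitoring is optional: log a warning and keep serving traffic
            logging.warning("monitoring agent not configured, starting without it")
            return
        # ... register with the agent at agent_url ...

    start_monitoring()
    # ... continue application start-up using DB_HOST ...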

Continuous Software Updates

A software freeze is a risk in my opinion; this approach leads to more work that needs planning, and the longer the freeze, the harder the eventual upgrade.

Let's say you're using Python 3.6 and pip packages in your code; this means you cannot simply upgrade your OS, because a new OS ships with a newer Python version, and that Python version comes with a newer pip.

So now you can't upgrade your Python, pip or OS, just because you did not make updates part of your regular operations.

Keep your code and system up to date!

Remove Single Point Of Failure

Similar to coupled dependencies, relying on a single endpoint or component is risky. Let's say you're using one load balancer: what happens if that load balancer is overloaded?

A single database? The same issue.

Those are simple examples, but your product probably has more components that are single points of failure.

The fewer single points of failure, the better!

Compute Group Vs. Cluster

Do you have a cluster or compute group for your production?

What is a compute group?

What is a cluster?

Compute Group

A compute group is a set of identical servers performing the same function.

Example of a compute group: Apache (web servers)
Your website traffic is increasing and you need to add more servers to handle the load, so you add another web server (let's say it's Apache) and then another.

Those web servers perform the same function, serving web files (HTML) to visitors on the website; the web servers are not connected to each other and don't “know” about the other web servers.

So, how does this scenario work?

The load balancer forwards traffic to the web servers (let's say round-robin), and those web servers serve the files to the visitors.

You can add or remove web servers per demand.
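
Round-robin itself is trivial to sketch in code; the backend addresses below are placeholders, and in practice the load balancer (not your application) does this:

    import itertools

    # Identical, independent web servers: the compute group
    BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

    next_backend = itertools.cycle(BACKENDS)

    def route_request() -> str:
        # Each request goes to the next server in turn
        return next(next_backend)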

Cluster

A cluster is a group of compute servers that are connected, operate together, and “know” about every other member of the cluster.

Example of a cluster: Kafka

A Kafka cluster typically has a minimum of 3 nodes; each node is configured with the addresses of the other members of the cluster, and the nodes elect a leader among themselves.

If there's an issue with one of the nodes, another can take over its responsibilities, and thus you achieve high availability.

3 Rules For Cloud Security

What is your cloud security approach?

When designing a product to run in the cloud, it's best practice to include IT and cloud security in the product's runtime, infrastructure and operations.

The Challenge

When using the cloud, the approach needs to be different from on-prem or from simply consuming SaaS from another provider: it's very easy to open ports and permit access to cloud resources, and because it's in the “cloud” it might be reachable from public and external networks.

Tracking every modification, or blocking admins and developers from modifying resources, can hinder the normal operation of IT and development, so it's better to implement a different approach.

An approach that is a mindset: cloud security considerations in every project and every modification, because changes are necessary in order to improve and develop the product you're working on.

Authentication

Authentication means: who are you?

Examples of identities, roles and positions:

  • admin
  • developer
  • contractor
  • customers
  • etc..

Authorization

Authorization means: What can you do?

Examples of permissions:

  • add users
  • delete users
  • add new clients
  • open security-group ports
  • download files
  • access resources (databases, servers)
  • etc..

Connection

Connection means: Where are you connecting from?

Examples of connections (a sketch combining all three rules follows this list):

  • Official HQ Offices
  • Remote workers (VPN)
  • Customers (anywhere)
  • Private-Link
  • etc..
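
Putting the three rules together, an access decision checks who you are, what you are allowed to do, and where you are connecting from. The sketch below is a simplified, hypothetical policy check, not any specific cloud provider's API; the roles, actions and network range are made up.

    import ipaddress

    OFFICE_NET = ipaddress.ip_network("203.0.113.0/24")   # hypothetical HQ range
    PERMISSIONS = {"developer": {"download files"}, "admin": {"add users", "delete users"}}

    def allowed(role: str, action: str, source_ip: str) -> bool:
        authenticated = role in PERMISSIONS                               # who are you?
        authorized = action in PERMISSIONS.get(role, set())               # what can you do?
        trusted_network = ipaddress.ip_address(source_ip) in OFFICE_NET   # where are you connecting from?
        return authenticated and authorized and trusted_network

    allowed("developer", "download files", "203.0.113.42")   # True
    allowed("developer", "delete users", "203.0.113.42")     # False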

How to successfully have a secure cloud account?

Choose the approach best suited to you and your team and implement it as a mindset; the approach based on these 3 recommended rules is easy to remember and easy to implement.

Do you maintain a regular cloud security operation?
Do you know the status of your cloud security?
If your answer is yes, then contact us now and we'll handle the cloud security for you.
To contact us click HERE

How software update freeze can make your stack obsolete

Do you update your software frequently?

Is software update part of your CI/CD pipelines?

What is continuous update?

The issue with hard-coding software versions and not updating

Just to clarify, this post is about third-party software and packages you import into your application (via apt, yum, pip, gem, etc., or by downloading binaries, .jar files, etc.), and also about OS versions

The wrong approach, in my opinion, is to statically pin version numbers for imported packages and keep the same OS version throughout your infrastructure and code

Why you ask?

Once your code works with the third-party software and the tests pass, you assume the process is complete and go back to working on your code

Everything works, until it doesn't!

Scenario 1

Let's say you're using Java and a vulnerability is discovered and fixed in a new release. Now you need to upgrade to that release, but your runtime version is too far behind the latest and cannot be upgraded directly

Or better yet, you can upgrade, but other components that communicate with your code are not compatible with the latest version

What do you do? Oh yes, revert!

Scenario 2

The operating system is a few versions behind the latest, let's say Ubuntu 18.04, and now you want to move to Ubuntu 22.04 while your code runs on Python 3.6

Guess what: Ubuntu 22.04 does not ship with the same Python as Ubuntu 18.04

Now you need to compile Python 3.6 from source, install it on Ubuntu 22.04, and make sure to update the PATH so it uses Python 3.6

Backlog

So now you decide to move to Python 3.10 instead of Python 3.6, but what about the pip packages? They are probably not compatible. Why? Because the installed packages are tied to the Python version too

Now go over your entire codebase and make sure every function works with the new Python version; at that point you'll probably decide it's too much work right now and skip the upgrade

Solution

Simple, don’t freeze software updates!

If you keep your software up to date (including your OS) it forces you to adapt as you go! There's no need to plan or schedule big upgrades, because it's a mindset: your software keeps evolving
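
One lightweight way to keep this habit honest is a recurring CI step that surfaces outdated dependencies; the sketch below just wraps pip's own report, and failing the build on any outdated package is only one example policy.

    import json
    import subprocess
    import sys

    # Ask pip which installed packages have newer releases available
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    outdated = json.loads(result.stdout)

    for pkg in outdated:
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")

    # Example policy: fail the pipeline if anything is outdated
    sys.exit(1 if outdated else 0)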

Stopping your software from evolving does not make sense; in fact it's the opposite of your job description… developer

How to keep your software with latest version?

The answer is CD (continuous delivery)

CD is about how fast, how reliably and how frequently you deploy your code to production

So the goal is to be able to deploy to production whenever you want, even a few times per day; if you can do that, you know your code is always in a releasable state

So using the latest software releases while keeping your code in a releasable state will make your job easier and your product better

Backup vs. Restore

What is your approach regarding your application’s data recovery?

Is it Backup? or Restore?

What is application backup?

It depends on your application, but in most cases it will have a database, so that is basically your priority, along with your application's core code.

You're probably using a code repository to manage and store your code, and you can always clone the entire repo and save it somewhere safe.

As for the database, you should dump it to a safe storage location.

In both cases you copy the data to a safe external location; it can be S3, a local hard drive or a remote server. All of these options boil down to keeping a copy of the data somewhere else.

The challenge comes when you need to use that backup: will it be restorable?

What is application restore?

Application restore is the process where you back up and restore in the same run: you back up the data and restore it right away.

When you restore the data right after the backup is made, you verify that the backup does what it's supposed to do, and you can be sure that when (or if) you need to use it, you can.

How can you backup and restore in one process?

Let's describe a simple example. Assume it's a MySQL database: you dump the database, copy it to a testing server, and restore it into a MySQL container.

Once the restore is complete, the next stage is to connect to the database and verify the data is OK.

So this is the process (a rough sketch in code follows the list):

  1. dump the db
  2. copy it to testing server / instance
  3. restore it to a container
  4. connect to the db and run a query
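
A sketch of the four steps, assuming a MySQL database and Docker on the testing server; the host names, credentials and the test query are placeholders, and a real script would also wait for MySQL inside the container to finish initializing before restoring.

    import subprocess

    DB, DUMP = "app_db", "app_db.sql"

    # 1. dump the db from production
    with open(DUMP, "w") as out:
        subprocess.run(["mysqldump", "-h", "prod-db.internal", "-u", "backup", DB],
                       stdout=out, check=True)

    # 2. (copy the dump to the testing server, e.g. with scp or a shared bucket)

    # 3. restore it into a throwaway MySQL container
    subprocess.run(["docker", "run", "-d", "--name", "restore-test",
                    "-e", "MYSQL_ROOT_PASSWORD=test", "-e", f"MYSQL_DATABASE={DB}",
                    "mysql:8"], check=True)
    with open(DUMP) as dump:
        subprocess.run(["docker", "exec", "-i", "restore-test",
                        "mysql", "-uroot", "-ptest", DB], stdin=dump, check=True)

    # 4. connect to the db and run a query to verify the data
    subprocess.run(["docker", "exec", "restore-test",
                    "mysql", "-uroot", "-ptest", DB, "-e", "SELECT COUNT(*) FROM users;"],
                   check=True)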

Why should you backup and restore in one process?

Again, it ensures the backup is OK and gives you the reliability and confidence that it's available on demand.

How to choose the best performing hardware for a server

Choosing the optimal hardware for a server is something that happens often, and making the right decision can prevent issues with those servers later on. So how do you choose server hardware?

Ask

My approach is to ask first: what is the server meant for? What tasks should it run?

The more you ask, the better and more informed your decision will be.

Calculate

Make a list of hardware flavors with their prices, and choose the lowest-priced flavor that still matches the application's hardware requirements.
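
As a toy illustration (the flavors and prices below are made up), the calculation is just "the cheapest option that still meets the requirements":

    # Hypothetical flavors: (name, vCPUs, RAM in GB, monthly price in USD)
    FLAVORS = [
        ("small", 2, 4, 30),
        ("medium", 4, 8, 60),
        ("large", 8, 16, 120),
    ]

    def cheapest(min_cpu: int, min_ram_gb: int):
        candidates = [f for f in FLAVORS if f[1] >= min_cpu and f[2] >= min_ram_gb]
        return min(candidates, key=lambda f: f[3])

    print(cheapest(min_cpu=4, min_ram_gb=8))   # ('medium', 4, 8, 60)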

Test

Once you have the information about what tasks the server should run, you're ready to test solutions.

Start a test server with the minimal hardware requirements, install the application and exercise it in a lab environment. Obviously you can't fully reproduce production like that, but you can test everything except real traffic and user behavior.

Keep adjusting the server's hardware flavor until you get the best-performing, most cost-efficient option.

Disposable Application Resources

Build your servers as disposable application resources. What does that mean?

It means that what's installed on your server is just the application: no database and no local configuration is saved there, only the application's runtime code.

For data and config, use a mounted volume, disk or NFS share attached to the server, thus making the server itself a disposable compute resource.

Using this approach you can scale your servers however you need, according to runtime and load.