Choosing AWS Cognito for your user authentication and authorization needs is an excellent option. Cognito provides a lot of capabilities, and with all that flexibility comes some complexity. It can be hard to wrap your head around how to set it up, and you probably have questions like:
- Should I use a User Pool or an Identity Pool?
- If I create a User Pool, do I need to use a federated Identity Provider?
- When the documentation says that Cognito can be used as an OIDC provider, what does that mean?
The goal of this article is to shed some light on these topics and help you set up Cognito for your project. We’ll use Go for the examples, but you should be able to understand the ideas behind the code and apply them to a project using other languages.
Read More...
Terraform doesn’t concern itself with the directory structure of our project. It cares about state. We, as the users of the project, are the ones who benefit from a clean and easy-to-understand directory structure.
In this post, we’ll explore basic directory structures used for Terraform projects.
Note: If you want a more in-depth discussion about the state and directory structure relationship, you might like my guide Meditations on Directory Structure for Terraform Projects.
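As a preview, one common layout (the names here are purely illustrative) keeps reusable modules separate from per-environment root modules, where each environment owns its own state:

```
.
├── modules/
│   └── network/            # reusable module, no state of its own
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── environments/
    ├── dev/                # root module with its own backend/state
    │   ├── main.tf
    │   └── backend.tf
    └── prod/
        ├── main.tf
        └── backend.tf
```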
Read More...
ArgoCD’s documentation is quite good. I just feel there is one key question that is often left unanswered: how do I get my private SSH key into ArgoCD in a declarative way that doesn’t require hard-coding the key into a secret YAML file?
In this post, we are going to use the External Secrets Operator (ESO) to get the private SSH key from AWS SSM Parameter Store and inject it into ArgoCD using a Kubernetes Secret.
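As a sketch of where we’re headed, an ExternalSecret along these lines can render the repository Secret that ArgoCD expects; the store name, repository URL, and parameter path below are placeholders you’d replace with your own:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: argocd-repo-creds
  namespace: argocd
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-parameter-store          # assumed SecretStore configured for SSM
    kind: SecretStore
  target:
    name: private-repo
    template:
      metadata:
        labels:
          argocd.argoproj.io/secret-type: repository
      data:
        url: git@github.com:example/private-repo.git   # placeholder repo
        sshPrivateKey: "{{ .sshPrivateKey }}"
  data:
    - secretKey: sshPrivateKey
      remoteRef:
        key: /argocd/ssh-private-key   # placeholder parameter name
```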
Read More...
One of the design principles of the security pillar in the AWS Well-Architected Framework is “Implement a strong identity foundation”, that is:
“Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize identity management, and aim to eliminate reliance on long-term static credentials.”
Well Architected Framework - Security Pillar - Design principles
When we start using Kubernetes in AWS EKS, we might take some shortcuts during the learning phase and add all the policies to a role that we assign directly to our nodes. The issue is that, purely for practicality, we grant far more privileges than we need. It can also become a habit: the way we learn something sometimes becomes our default pattern. So let’s break that habit and explore how to assign IAM Roles to Kubernetes service accounts in AWS EKS.
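To give a flavour of the end result, with IAM Roles for Service Accounts (IRSA) a pod’s permissions come from a role referenced in an annotation on its service account, instead of from the node role; the names and account ID below are made up:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader                # hypothetical service account
  namespace: default
  annotations:
    # The role must trust the cluster's OIDC provider; this ARN is a placeholder.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-reader
```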
Read More...
The fastest way to learn something is through practice. Most of my work is on AWS, so running a local Kubernetes cluster is not the best option: I want to test Kubernetes integration with other AWS services. To run experiments, I create a cluster using eksctl, run my tests, and then destroy it.
In this post, I will assume that you already have an AWS account and have the AWS CLI, eksctl, and kubectl installed on your machine.
Read More...
If you check the AWS documentation, they use eksctl to create the EKS cluster. eksctl uses CloudFormation under the hood, and even though I could fetch the generated template in the end, eksctl still feels like an imperative way of creating an EKS cluster. I prefer to keep track of all of my infrastructure as code, and using eksctl leaves an essential part of the infrastructure out of the codebase: the cluster itself.
I’ll describe how to create a Kubernetes cluster in Amazon EKS using Terraform in this article.
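The core of it is a single aws_eks_cluster resource; this is a minimal sketch, assuming the cluster role and subnets are defined elsewhere in the configuration:

```hcl
resource "aws_eks_cluster" "main" {
  name     = "example"                 # placeholder cluster name
  role_arn = aws_iam_role.cluster.arn  # assumed role with AmazonEKSClusterPolicy attached

  vpc_config {
    subnet_ids = var.subnet_ids        # assumed list of existing subnet IDs
  }
}
```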
Read More...
I’ve been using AWS EC2 instances for a while now, and I’ve always struggled to find a clean way to manage the users and SSH keys for my instances. I’ve tried a few different approaches and settled on the one I think is best so far.
In this article, I’ll create a regular EC2 instance as an example, but you can use the same approach to set up users and SSH keys, or run other commands, on any other type of EC2 instance (e.g. bastion hosts, EKS nodes, ECS nodes, etc.). The key is using cloud-init.
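For a taste of what that looks like, a cloud-config fragment passed as the instance’s user data can create a user and install an SSH key; the user name and key below are placeholders:

```yaml
#cloud-config
users:
  - name: alice                        # placeholder user name
    groups: [sudo]
    shell: /bin/bash
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]   # passwordless sudo for this user
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza... alice@example   # placeholder public key
```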
Read More...
Sometimes it feels easier to work on complex and challenging tasks with our tools than to do the simple initial steps for a project. The reason for this is that we lack practice starting projects. If you work for a company, you’ll probably set up your Infrastructure as Code (IaC) once and then iterate on it. Unless you work at a consultancy or start projects just for fun, you might forget the initial steps to set up a project.
Read More...
I decided to add a new copy button to all of the code blocks on my site. I’ll show you how to do the same, but I also want to share my thoughts on adding dependencies to any project.
These days, adding new functionality to a project is as easy as importing a new library. It didn’t use to be like that: blog posts were as valuable as code. Code used to be simpler, and it was written for a specific scenario. But if you understood how the code worked, you could change it and adapt it to solve your own problem.
Read More...
Good command-line tools are a pleasure to work with. A feature I’m always grateful for is when the developer took the time to provide both an output that is human-readable and one that is easy to pass to another tool for further processing. A neat trick to differentiate between these two cases is to have the application detect whether its output is being “piped” to another program. If the process’s output is not being piped or redirected, we can assume that the user is looking at the results in a terminal. If that is not the case, our application can behave differently and produce output that is easier for another program to parse. In this post, I’ll show you how to determine whether the stdout of a program is being redirected or piped using the Go programming language.
Read More...