Using sidecar containers in k8s
Pods are the basic unit of deployment in k8s, and in a typical k8s setup your application container is probably running inside a pod. A pod can run multiple containers concurrently, all sharing the pod's volumes and network interface. This is called the sidecar pattern. We run an EMR Django application connected to an RDS Postgres database using RDS IAM auth. To make this work, we run a cronjob which generates a token every 10 minutes, which is used by the a
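The token-refresh loop described above can be sketched roughly like this. This is a minimal illustration, not the post's actual code: it assumes boto3 (with AWS credentials available) for generating the IAM auth token, and the shared file path and refresh interval are made-up placeholders.

```python
import time
from pathlib import Path

TOKEN_FILE = Path("/shared/rds-token")  # hypothetical path on a volume shared across the pod
REFRESH_SECONDS = 600  # the post regenerates the token every 10 minutes

def fetch_iam_token(host: str, port: int, user: str, region: str) -> str:
    """Generate a short-lived RDS IAM auth token (requires boto3 + AWS credentials)."""
    import boto3  # imported lazily so the rest of the sketch runs without it
    client = boto3.client("rds", region_name=region)
    return client.generate_db_auth_token(
        DBHostname=host, Port=port, DBUsername=user, Region=region
    )

def write_token(token: str, path: Path = TOKEN_FILE) -> None:
    """Write the token to the shared volume so the app container can read it."""
    path.write_text(token)

def refresh_loop(host: str, port: int, user: str, region: str) -> None:
    """What the sidecar/cronjob would run: refresh the token on an interval."""
    while True:
        write_token(fetch_iam_token(host, port, user, region))
        time.sleep(REFRESH_SECONDS)
```

Because the containers share the pod's volumes, the application container can simply read the token file whenever it opens a database connection.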
Setting up AWS Amplify for a Next JS SSR app with Terraform
Setting up Amplify for a Next.js SSR app via Terraform
Kedarkantha Trek
Summary of the Kedarkantha trek
Reducing our Deployment times by 87%
Leveraging GitHub Actions, using state to determine whether a Docker image needs to be rebuilt or not.
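The gist of the approach, as I understand it from the summary, is to derive a fingerprint of the build inputs and rebuild the image only when it changes. A hypothetical sketch (the file names and the idea of storing the fingerprint in workflow state/cache are illustrative, not the post's exact mechanism):

```python
import hashlib
from pathlib import Path

def build_fingerprint(paths) -> str:
    """Hash the files that feed the Docker build (Dockerfile, lockfiles, ...)."""
    digest = hashlib.sha256()
    for p in sorted(paths):
        digest.update(p.name.encode())
        digest.update(p.read_bytes())
    return digest.hexdigest()

def needs_rebuild(current: str, stored) -> bool:
    """Compare against the fingerprint persisted in state from the last build."""
    return stored != current
```

When the stored fingerprint matches, the workflow can skip the `docker build` step entirely and reuse the previously pushed image tag, which is where the big time savings come from.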
Moving from AWS Loadbalancers to Nginx
We run a k8s cluster for our healthcare EMR application. At the time, we only had one public-facing service, so we used a Kubernetes Service of type LoadBalancer. resource "kubernetes_service" "django" { metadata { name = "django" namespace = kubernetes_namespace.app_namespace.metadata[0].name annotations = tomap( { "service.beta.kubernetes.io/aws-load-balancer-ssl-cert" = var.django_acm_arn, "service.beta.kubernetes.io/aws-load-balancer-ssl-por
Chopta Chandrashila Trek
Pro Tips
1. Add/plan for buffer days for bad weather. Summit hikes usually go to the highest altitude possible, and the weather at those places can change within minutes.
2. Prebook your hotels/hostels. Internet connectivity is spotty. Use hotel aggregator apps to look for hotels and call them up. Sometimes you might get a cheaper rate. Ask if geyser and other facilities are available before booking.
3. Figure out multiple ways to get to your base village of the hike. If you miss your bus, k
Django Bulk Save - 2 Fast 2 Big
Continuing from Django Bulk Save, I wanted to test how the copy_from function would perform when I scale to over a million records. It scales incredibly well: it took 4 seconds to write 1.6 million rows. The rest of the article strays from the actual topic; I decided to play around with different CSV libraries and figure out which would be the most efficient to use when you have a file this large. You'd probably never encounter a scenario where you'd have to parse a mil
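As a baseline for that kind of CSV benchmark, the stdlib csv module can be timed like this. A hypothetical helper, not the post's benchmark code; the row count in practice would be in the millions:

```python
import csv
import io
import time

def time_csv_write(rows) -> float:
    """Return seconds taken to serialize rows with the stdlib csv writer."""
    buf = io.StringIO()
    start = time.perf_counter()
    csv.writer(buf).writerows(rows)
    return time.perf_counter() - start
```

Third-party libraries would be measured the same way, swapping in their writer for `csv.writer`, so the comparison is apples to apples.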
Streaming through millions of records trying to match an encrypted field : Java
A year ago I was working on a really tiny part of a large non-consumer-facing product (idk what's the correct term). I can't divulge much; I have done a lot of illegal things, like calling a politician an idiot on social media, so I don't want to add more to the list. Imagine you ran an ice cream subscription service, and you had a lot of people on the payroll tasked with managing renewals for subscribers. Let's call them agents. You had an online portal for customers to manage/renew their subsc
Connecting Django to RDS via pgbouncer using IAM auth
Setting up the infra We primarily use a Django backend with RDS Postgres in a Kubernetes cluster for production environments. We were trying to set up an environment that would be HIPAA compliant and secure, while keeping complexity in the backend layer to a minimum. We decided to go with IAM authentication to connect to RDS from the Django application. The tricky part was figuring out how we would add pgbouncer between RDS and Django to manage connection pooling. We set up a
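Conceptually, the app connects to pgbouncer locally while the rotating IAM token plays the role of the password. A hypothetical Django `DATABASES` fragment to illustrate the shape of this, with made-up names and ports rather than the post's actual config:

```python
def database_settings(token: str) -> dict:
    """Build a Django-style DATABASES entry that routes through local pgbouncer."""
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app",                  # illustrative database name
        "USER": "iam_db_user",          # illustrative IAM-enabled DB user
        "PASSWORD": token,              # the short-lived IAM auth token
        "HOST": "127.0.0.1",            # pgbouncer sidecar, not RDS directly
        "PORT": "6432",                 # pgbouncer's default listen port
        "CONN_MAX_AGE": 0,              # let pgbouncer own connection pooling
    }
```

Keeping `CONN_MAX_AGE` at 0 hands pooling entirely to pgbouncer, which is the point of putting it between Django and RDS.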
Django Bulk Save
Using bulk_create? You could switch to copy_from if your use case fits.
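A minimal sketch of what the switch looks like, assuming psycopg2 (whose cursor exposes `copy_from`) and an already-open connection; the table and column names are placeholders:

```python
import io

def rows_to_tsv(rows) -> io.StringIO:
    """Serialize rows into copy_from's default format: tab-separated text."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(v) for v in row) + "\n")
    buf.seek(0)
    return buf

def copy_rows(conn, table, columns, rows) -> None:
    """Bulk-load rows with Postgres COPY instead of the INSERTs bulk_create issues."""
    with conn.cursor() as cur:
        cur.copy_from(rows_to_tsv(rows), table, columns=columns)
    conn.commit()
```

Note that `copy_from`'s text format has no CSV-style quoting, so values containing tabs or newlines would need escaping before a sketch like this is production-safe.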