VPC, bastion host (rarely needed thanks to TGW), organization-wide network routes and a set of security groups for every cluster
based on AWS EKS, providing HA, a managed control plane (masters/etcd), simple upgrades and IAM integration
Docker/kubelet are tuned to support a wide range of applications (e.g. ELK, which requires ulimit=65536)
worker nodes are created as EC2 spot instances, reducing EC2 costs by up to 90% at the expense of the rare event of a node being killed due to high spot instance demand. This happens roughly once a month and is handled automatically by Kubernetes machinery, causing only minor disturbance
worker nodes live in two AWS Availability Zones. This reduces the chance that the cluster runs out of spot instance capacity, but incurs Regional Data Transfer costs ($0.01/GB at the time of writing). If your application does heavy data hauling, it may be worth using AZ affinity
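One common way to get AZ affinity is pod affinity on the zone topology key, so all replicas of a chatty workload land in the same Availability Zone. A minimal sketch (the app name and label are hypothetical; older clusters use the `failure-domain.beta.kubernetes.io/zone` key instead):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-hauler            # hypothetical data-heavy application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: data-hauler
  template:
    metadata:
      labels:
        app: data-hauler
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: data-hauler
              # co-locate all replicas in one AZ to avoid
              # cross-AZ Regional Data Transfer charges
              topologyKey: topology.kubernetes.io/zone
```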
cluster-autoscaler dynamically adds worker nodes when capacity is requested and removes them when they sit idle. It also takes care of pod relocation when spot instances are terminated due to high demand. overprovisioner makes sure there is always spare capacity available, eliminating node provisioning delay
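Overprovisioning is commonly implemented with low-priority placeholder pods: they reserve headroom on the cluster, and when a real workload arrives the scheduler evicts them, triggering cluster-autoscaler to bring up a replacement node in the background. A sketch under that assumption (names, replica count and resource sizes are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1                  # below the default priority (0), so any real pod preempts these
globalDefault: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioner
spec:
  replicas: 2
  selector:
    matchLabels:
      app: overprovisioner
  template:
    metadata:
      labels:
        app: overprovisioner
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1     # does nothing; only holds the resource reservation
          resources:
            requests:
              cpu: "1"                    # sized to keep roughly one node of spare capacity
              memory: 2Gi
```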
Thanks to the Amazon VPC CNI plugin, Kubernetes pods have the same IP address inside the pod as they do on the VPC network. This allows developers to interact directly with Kubernetes pods (or, even better, Kubernetes headless services), even for applications that do their own service discovery and advertisement, as Kafka does
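A headless Service pairs naturally with VPC-routable pod IPs: DNS for the service name resolves to the pod IPs themselves, which clients on the VPC can reach directly. A minimal sketch for a Kafka-style workload (name, label and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  clusterIP: None        # headless: DNS returns the backing pod IPs, no virtual IP
  selector:
    app: kafka
  ports:
    - name: broker
      port: 9092
```

Resolving `kafka.<namespace>.svc.cluster.local` then yields the broker pod addresses, so Kafka's own advertised-listener mechanism keeps working unchanged.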
Kubernetes pod logs are collected with filebeat, stored in AWS Elasticsearch and accessible via a Kubernetes-hosted Kibana. es-curator wipes logs older than 7 days
Kubernetes Services can be automatically exposed via Kubernetes Ingresses with the help of jx/exposecontroller. It manages FQDN hostnames and TLS annotations so you don’t need to hardcode them in each deployment
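With exposecontroller, opting a Service into automatic Ingress creation is typically a single annotation; a sketch (the service name is hypothetical, and the exact annotation/label key can vary between exposecontroller versions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    fabric8.io/expose: "true"   # exposecontroller watches for this and generates the Ingress
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```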
HTTP (and potentially TCP/UDP) ingresses are exposed externally via nginx-ingress using a single AWS ELB
By default each ingress is exposed as HTTPS with the help of letsencrypt/cert-manager. Permanent services like Jenkins/Nexus receive production certificates; dynamic ones, like feature branch preview deploys, get staging certificates due to Let’s Encrypt request quotas
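Selecting the staging versus production issuer usually comes down to one cert-manager annotation on the Ingress. A sketch (hostnames and issuer names are illustrative; newer cert-manager releases use the `cert-manager.io/cluster-issuer` annotation key instead):

```yaml
apiVersion: extensions/v1beta1      # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: preview-app                 # hypothetical feature-branch preview deploy
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging   # staging keeps within quota
spec:
  tls:
    - hosts:
        - preview.example.com
      secretName: preview-app-tls   # cert-manager stores the issued certificate here
  rules:
    - host: preview.example.com
      http:
        paths:
          - backend:
              serviceName: preview-app
              servicePort: 80
```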
Jenkins, configured to use Kubernetes pods as build agents. CI secrets are pre-created and accessible as Kubernetes Secrets
Nexus with the S3 plugin (at the time of writing, repositories need to be manually configured to use it)
ChartMuseum / Monocular to store and browse released Helm charts
Kubernetes Dashboard in full-access mode
Prometheus Operator with metrics-server, Prometheus and Grafana for collecting and displaying both cluster and application metrics
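With the Prometheus Operator, application metrics are typically wired up declaratively via a ServiceMonitor rather than by editing Prometheus config. A sketch (the app name, label and port name are hypothetical, and the `release` label must match your Prometheus instance's serviceMonitorSelector):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: prometheus     # so the operator's Prometheus picks this monitor up
spec:
  selector:
    matchLabels:
      app: my-app           # matches the Service exposing the metrics endpoint
  endpoints:
    - port: metrics         # named port on the Service serving /metrics
      interval: 30s
```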
Keycloak for providing a centralized authentication bridge between OpenID/SAML-enabled services and LDAP/OneLogin/Okta identity providers. It comes pre-configured to use the organization’s Active Directory and can easily be extended to allow custom users (e.g. external client/collaborator logins) or social login
oauth2-proxy, which can be used as an nginx authentication URL for services that don’t support authentication natively. It is enabled by merely adding a couple of annotations to the Ingress, saving hours of developer time
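Those couple of annotations are the standard nginx-ingress external-auth hooks pointed at oauth2-proxy. A sketch (the service name and auth host are illustrative):

```yaml
apiVersion: extensions/v1beta1      # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: internal-tool               # hypothetical service with no built-in auth
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx asks oauth2-proxy whether the request is authenticated...
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    # ...and redirects unauthenticated users to the login flow
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
    - host: tool.example.com
      http:
        paths:
          - backend:
              serviceName: internal-tool
              servicePort: 80
```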
(Coming Soon) Sentry
Arrange a Demo
Contact us to arrange a short demo, either in person or via Zoom.