adapted from a conversation with areyesgfx
To test, develop, and deploy software more easily, it's generally helpful to have a local, repeatable configuration that enables rapid prototyping and development. A caveat up front: this is definitely not something you can implement in a day or a week. It's an "aspirational architecture" that takes effort from engineering, ops, and others to make work if you're starting with an existing project. That said, there are things you can do when starting a new project to set this up and enable success further on.
The basic concept is that a developer can run the whole stack (or at least their subsection of it) entirely locally. That's not always completely possible (sometimes you need to reach out to an auth server or something else that's too annoying to mock, but there are ways around that too). The setup is based around Docker with some related workflows. The two main components are the base app and a docker-compose file.
The docker-compose file holds the definitions of all the dependent services of the base app. In most cases this means the database, messaging service, a metrics collector, etc. For things that reach out to AWS services, LocalStack is amazing: it can mock most AWS services, even on the free tier, and behaves like S3/DynamoDB/DocumentDB/SecretsManager, but completely locally.
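As a concrete illustration, a minimal compose file along those lines might look like this. The service names, images, and ports here are assumptions for the example, not a prescription:

```yaml
# docker-compose.yml — hypothetical local dependency stack
services:
  redis:
    image: redis:7
    ports:
      - "6379:6379"        # exposed to the host so a natively-run app can reach it
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"        # LocalStack's single edge port for all mocked AWS services
    environment:
      - SERVICES=s3,dynamodb,secretsmanager   # limit to what the app actually uses
```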
In the base app, the most important thing is that all the endpoints are parameterized/configurable, so they can be pointed at the localhost/Docker environment instead of the "real" URLs. For instance, instead of hardcoding the database endpoint, it can be set to point at the local Redis (or whatever) instance running in Docker; the same way there are separate "qa" and "prod" configurations, there's also a "local" configuration.
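A minimal sketch of what that parameterization could look like, assuming environment variables for configuration and boto3 for AWS access (the variable names and defaults are illustrative, not from the original setup):

```python
import os

import boto3
import redis

# Endpoints come from the environment; the "local" config points them
# at the docker-compose services instead of the real infrastructure.
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")

# When set (e.g. to http://localhost:4566 for LocalStack), boto3 talks
# to the local mock; when unset, it falls back to the real AWS endpoints.
AWS_ENDPOINT_URL = os.environ.get("AWS_ENDPOINT_URL")

cache = redis.Redis.from_url(REDIS_URL)
s3 = boto3.client("s3", endpoint_url=AWS_ENDPOINT_URL)
```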
The workflow for devs looks like this, once all the dependencies are defined in the docker-compose:
1. `docker compose up` starts all the dependency services.
2. Run the app with the local config. There are two options here (the full loop is sketched below):
   - Run the app in a container alongside the dependencies, referencing them by service name (e.g. `http://redis:6379`).
   - Run the app natively on the host and point at `http://localhost:6379`, optionally with an `/etc/hosts` entry mapping the dependency name to localhost.
3. When you're done, `docker compose down` shuts down all the dependencies and cleans things up.

This has a lot of benefits, namely that you can do literally whatever you want to the database, then just blow it away and try again if it didn't work. Usually people's local DBs are self-populated with dummy data, so you shouldn't be pulling full copies of prod onto people's computers (that's also a massive liability issue, since prod is full of PII and GDPR-covered data that shouldn't be distributed to random people's machines). Of the two options, "run on host, dependencies in containers" is (as a developer) far preferred. Everything pointing at localhost is a minor hiccup, but being able to run the app natively and actually hook up a debugger is massively more beneficial for productivity than being limited to println debugging.
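Putting it together, the day-to-day loop might look something like this (a hedged sketch; the app command and environment variables are placeholders for whatever your project uses):

```sh
# Bring up the dependency stack in the background
docker compose up -d

# Run the app natively on the host, pointed at the local dependencies,
# so you can attach a debugger to the process directly
REDIS_URL=redis://localhost:6379 \
AWS_ENDPOINT_URL=http://localhost:4566 \
python app.py

# When you're done (or want a clean slate), tear everything down;
# -v also removes the volumes, wiping the throwaway database state
docker compose down -v
```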
To deploy with this setup, there are also a few ways to handle things, but the most straightforward approach I've found is as follows (this is based on an ECS/EC2 architecture, but a Kubernetes architecture works conceptually the same):
1. On every merge to the `main` branch, CI builds the container image and pushes it with a `qa` or `latest` or some semantically versioned (e.g. `v2.3.1`) tag.
2. When it's time to run a qa deployment, a script updates the ECS/EC2 cluster definition/load balancer group with the newly built `qa`-tagged image and then forces a redeployment to the cluster.
3. To promote to prod, the `prod` tag is updated to point at the `qa` image, and again the redeploy/cutover/pod rolling deployment is run (both steps are sketched below).
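A rough sketch of what the qa deployment and prod promotion could look like with the AWS CLI and Docker. The cluster, service, and registry names are hypothetical placeholders, and this assumes the task definitions reference images by tag (`:qa`, `:prod`), so forcing a new deployment pulls whatever image the tag currently points at:

```sh
# qa deployment: roll the cluster onto the freshly built qa-tagged image
aws ecs update-service \
  --cluster my-app-qa-cluster \
  --service my-app-service \
  --force-new-deployment

# prod promotion: retag the verified qa image as prod, push, then roll prod
docker pull registry.example.com/my-app:qa
docker tag registry.example.com/my-app:qa registry.example.com/my-app:prod
docker push registry.example.com/my-app:prod

aws ecs update-service \
  --cluster my-app-prod-cluster \
  --service my-app-service \
  --force-new-deployment
```

The nice property of retagging rather than rebuilding for prod is that the exact artifact that passed qa is the one that ships.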