A few years ago, while working as an Infrastructure Engineer for a startup team, I wanted to do something really simple but powerful: share a browser bookmarks folder of all our service endpoints with my team, preferably with two-way synchronization. I googled for existing solutions in the Firefox and Google Chrome extension stores but found squat. So I took it upon myself to design and implement a solution, and this is what I came up with!
As a User, I would like to install a Browser Extension that lets me log in to an online service and create and manage organizations and teams, with the possibility of inviting colleagues. Creating a team within an organization should automatically create a bookmark folder in the browser. Any bookmarks or sub-folders created at any level under the team folder should be transparently synchronized to other team members, either immediately or when they next log in to the service in their own browser.
In terms of Data Security, team members have Read and/or Write access to all content in the team folder, but no finer granularity than that, for simplicity. This simplicity shapes the architecture design down the line and keeps data access privileges easy to secure.
A Modern Application Architecture Solution
In my opinion, the unique challenge with this project is implementing efficient and robust synchronization across users; all the other features are solved problems. Synchronization should also be easy to implement and easy to replace as a component, in case more efficient solutions arise in the future. I divided the solution into 3 parts:
- Public RESTful API for authentication and User, Organization and Team management
- Data Persistence Layer for state
- Synchronization of bookmark state across team members
For synchronization, I decided to leverage server-side Apache CouchDB for its replication capabilities. On the client side, the browser extension maintains a two-way synchronized state between the browser bookmarks and a PouchDB database. Periodically, the extension replicates the local PouchDB documents with the central CouchDB database, thereby replicating state with every other user. Cue Power Rangers soundtrack!
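On the server side, CouchDB replication is driven through its documented `/_replicate` HTTP endpoint. A minimal Python sketch of building such a request (the database names and server URL below are hypothetical placeholders, not the project's real values):

```python
import json

def build_replication_request(source: str, target: str, continuous: bool = False) -> str:
    """Build the JSON body for a POST to CouchDB's /_replicate endpoint.

    The "source", "target" and "continuous" fields are part of CouchDB's
    documented replication API.
    """
    return json.dumps({
        "source": source,
        "target": target,
        "continuous": continuous,
    })

# Hypothetical example: replicate a local bookmarks database to a
# central per-team database on the server.
body = build_replication_request(
    "bookmarks_local",
    "https://couch.example.com/bookmarks_team_42",
)
print(body)
```

The client-side equivalent is PouchDB's sync API, which the extension calls against the same central database.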
I deliberated for several days over whether to persist all the state in CouchDB (including user management) or to separate user management state from bookmark state. I chose the latter for modularity. Besides, Relational Databases are more appropriate for relational data such as Users, Organizations, Teams and Team Memberships, thanks to their data integrity and transactional features. As for bookmarks, the data model is similar across browsers, although there are differences. A NoSQL database is a better fit there, with the crucial benefit of CouchDB's built-in replication!
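To make the split concrete, a bookmark could be stored as a CouchDB document along these lines (the field names and values are my illustration, not the project's actual schema):

```json
{
  "_id": "bookmark:9f2c1a",
  "type": "bookmark",
  "team_id": "42",
  "parent_folder": "folder:devops",
  "title": "Grafana",
  "url": "https://grafana.internal.example.com"
}
```

Relational entities (Users, Organizations, Teams, Memberships) stay in the relational database, while documents like this one live in CouchDB and replicate freely.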
The RESTful API is a Stateless Application. It exposes Data Models as REST Resources but never writes any state to disk itself; logs are directed to standard out. The Data Persistence applications (the databases) are Stateful Applications. CouchDB has an HTTP interface as well. Both the REST API and CouchDB are public services, and the browser extension communicates with both to implement different features. A common authentication method therefore needs to be implemented. I chose JWT, as the token can be used to authenticate against both services.
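The shared-secret JWT scheme can be sketched with the Python standard library alone. This is only an illustration of the mechanism (HS256 signing and verification); production code would use a vetted library such as PyJWT, and the secret and claims below are hypothetical:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

secret = b"shared-secret"  # hypothetical; both services load the real one from config
token = sign_jwt({"sub": "user-123", "teams": ["42"]}, secret)
assert verify_jwt(token, secret)
assert not verify_jwt(token, b"wrong-secret")
```

Because both the REST API and the CouchDB-fronting proxy hold the same secret, either can validate a token issued by the login endpoint.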
Allons-y! To the code editor
After some architecture design towards a proof of concept, I was itching to start coding! My first task was implementing the RESTful API with the Django REST Framework.
I initialized a new Git repo with a fresh Django project and added Django REST Framework support. So much productivity! Then, I ensured database credentials and other values could be set at runtime from environment variables, following the twelve-factor app methodology. Application runtime dependencies for production, development and unit testing are separated into different requirements files. I coded REST endpoints for CRUD operations on:
- Users
- Organizations
- Teams
- Team Membership
plus login (JWT token generation) and “logout” (JWT token blacklisting). For authorizing HTTP requests against CouchDB, I deployed quay/jwtproxy. If configured with the same secret key as the token generator, it can validate a JWT token and reverse proxy requests to CouchDB. There are arguably better alternatives, such as service meshes like Istio, to which I will migrate at some point.
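The environment-variable configuration mentioned above can be sketched in a Django-style settings module. The variable names and defaults here are illustrative assumptions, not the project's actual settings:

```python
import os

# Read settings from the environment with safe development defaults,
# in the spirit of twelve-factor config. Production deployments set
# these variables; nothing sensitive is baked into the code base.
DATABASE_SETTINGS = {
    "NAME": os.environ.get("POSTGRES_DB", "bookmarks"),
    "USER": os.environ.get("POSTGRES_USER", "bookmarks"),
    "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
    "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
    "PORT": int(os.environ.get("POSTGRES_PORT", "5432")),
}
```

The same pattern covers the JWT secret, the CouchDB URL, and any other value that differs between environments.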
For a local development environment, and thinking ahead to deployments, I containerized the application and created production, development and unit testing Dockerfiles. Each Docker image contains the source code plus the respective runtime dependencies. This keeps production Docker images minimal, which reduces deployment time. It also improves quality and security by not bundling development dependencies into production.
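A production Dockerfile along these lines keeps the image lean; the base image, paths and entrypoint are assumptions for illustration:

```dockerfile
# Production image: source code plus production dependencies only.
FROM python:3.11-slim

WORKDIR /app

# Install only the production requirements file; development and
# testing images would install their own requirements files instead.
COPY requirements/production.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]
```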
I also created production, development and unit testing Docker Compose files, leveraging file composition to stay DRY. This allowed me to:
- have my development environment isolated from my host laptop environment
- have a production-like local runtime environment to test production-like configuration
- run unit tests also in an isolated environment
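Compose's multi-file layering is what makes this DRY: a shared base file plus a small per-environment override. Service names and values below are illustrative:

```yaml
# docker-compose.yml (base, shared by all environments)
services:
  api:
    build: .
    environment:
      POSTGRES_HOST: db
  db:
    image: postgres:15

# A development override file would then be layered on top, e.g.
#   docker compose -f docker-compose.yml -f docker-compose.dev.yml up
# adding a bind mount of the source tree for live reloading:
#
#   services:
#     api:
#       volumes:
#         - .:/app
```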
I host all my projects on Gitlab. I added Gitlab Continuous Integration jobs to:
- Build Docker images. Images are tagged with the commit Semantic Version, if available, and the commit hash
- Run unit tests through the testing Docker image
- Push Docker images to Gitlab’s Private Docker Registry, if builds pass
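A trimmed `.gitlab-ci.yml` for that pipeline might look like this; the job layout and the test Dockerfile name are assumptions, while the `CI_*` variables are GitLab's predefined ones:

```yaml
stages: [build, test, push]

build:
  stage: build
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .

test:
  stage: test
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker build -f Dockerfile.test -t api-tests .
    - docker run --rm api-tests

push:
  stage: push
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```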
Monolithic vs Modular Architectures
With all the features above, I ticked most of the factors of a twelve-factor application. But I had also created a monolithic application architecture. All the REST API features are implemented in the same code base, which has some adverse effects. During development:
- the code base is larger and so are the number of runtime dependency Python libraries
- all unit tests are executed even when only some parts of the application change. My unit test count was approaching 40 when I last checked, and that was just for the REST API operations.
During deployment, larger Docker images also mean slower deployments and more wasted disk space in the container orchestrator worker nodes.
So, I started refactoring the monolithic RESTful API into several smaller RESTful sub-projects. Each microservice has its own Git repository, Dockerfile and Continuous Integration job:
- Authentication: Responds with a JSON Web Token on successful login. Also supports refreshing an existing token.
- Account Management: Endpoints for new Account registration, e-mail verification and profile management.
- Teams: RESTful endpoints for the collaborative features. Implements Organizations, Teams and membership of both.
- Authorization: Not a RESTful API. The Account Management, Teams and CouchDB public services require authentication, so I implemented Single Sign-On with JWT plus a reverse proxy. It verifies token authenticity and proxies the request to the respective backend service. I am planning to deploy the application to Kubernetes as the container orchestrator, so I looked into Service Meshes to replace this service in the future. A Service Mesh would secure both public access and microservice inter-communication.
Development of the application is currently on hold; I use it sporadically to try out new tools and ideas. I would love to open-source it, like Keybase or Bitwarden. With GitLab CI in place and Docker images automatically pushed to a Docker Registry, I am ready to go to Cloud Infrastructure town! I want to be able to quickly create, change or destroy a Cloud development runtime environment, and I also want multiple runtime environments, like staging and production. So, the next steps are to create a development GKE cluster and set up GitLab Auto DevOps.