Backend Engineer - Bridgelinx Technologies

Feb 2022 - Present


Bridgelinx, a Pakistani freight-tech startup that raised $10 million in seed funding, set out to transform how freight moves within the country. When I joined Bridgelinx as a Backend Engineer, the company had just closed its seed round and the product was moving past the MVP stage. My primary mission was to build the backend for the company's suite of 16 apps and web portals, which until then connected directly to Firebase. The absence of a relational database caused data redundancy issues and hurt application performance.

During my tenure at Bridgelinx, I undertook several impactful projects; a few are described below.

Project Order to Jobs

Working closely with Bridgelinx's Process Design and Implementation (PDI) team, we identified critical areas where the product failed to accurately map real-world business processes. Based on those discussions, our Engineering Lead and I formulated a strategic roadmap, known as "Project Order to Jobs." Its core idea was to segregate carrier and shipper data within the product and dynamically link the two as successive stages of the freight movement process unfolded.

Once the project details were finalized, our Engineering Lead and I designed the database schema and created a migration plan to move data from Firebase to the new backend.

Backend Implementation

Given the interconnected nature of the data, we chose a graph database, Neo4j, as our primary database. I spearheaded the backend development using Node.js and GraphQL. GraphQL was chosen for several advantages:

  • Validation Layer: GraphQL enforces data type validation as defined in the schema, eliminating the need for manual validation and type-casting.
  • Neo4j Integration: GraphQL integrates seamlessly with Neo4j; the same GraphQL schema drives the database model, and the @neo4j/graphql package auto-generates queries and mutations, enabling rapid prototyping.
  • Efficient Data Retrieval: GraphQL allows clients to request only the necessary data, reducing bandwidth consumption.
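
As an illustration, a type definition in the style of @neo4j/graphql might look like the following. The Order, Job, Shipper, and Carrier types and their fields are hypothetical, not the actual Bridgelinx schema; @id and @relationship are directives provided by the library.

```graphql
type Order {
  id: ID! @id
  status: String!
  shipper: Shipper! @relationship(type: "PLACED_BY", direction: OUT)
  jobs: [Job!]! @relationship(type: "HAS_JOB", direction: OUT)
}

type Job {
  id: ID! @id
  status: String!
  carrier: Carrier @relationship(type: "ASSIGNED_TO", direction: OUT)
}

type Shipper {
  id: ID! @id
  name: String!
}

type Carrier {
  id: ID! @id
  name: String!
}
```

From type definitions like these, the library generates CRUD queries and mutations automatically, which is what made prototyping fast.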

To enhance query performance, I employed Redis for caching infrequently updated data and also utilized it as a broker for GraphQL subscriptions.
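
The caching side of this follows the standard cache-aside pattern. A minimal sketch, with a plain in-memory Map standing in for Redis (in production the get/set calls would be Redis commands with a TTL; all names here are illustrative):

```javascript
// Cache-aside sketch: read-through cache for rarely-updated lookup data.
// A Map stands in for Redis here purely for illustration.
const cache = new Map();

async function fetchFromDatabase(key) {
  // Placeholder for a real Neo4j query; returns a dummy record.
  return { key, fetchedAt: Date.now() };
}

async function getCached(key) {
  if (cache.has(key)) return cache.get(key); // cache hit: skip the DB
  const value = await fetchFromDatabase(key); // cache miss: query the DB
  cache.set(key, value); // populate the cache for subsequent reads
  return value;
}
```

Because the cached data rarely changes, a simple invalidate-on-write (or short TTL) is enough to keep it consistent.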

Maintaining History

Recognizing that most of our users were non-technical and human error was a recurring challenge, I designed and implemented a service that maintained a comprehensive history of every data change. Backed by a Postgres instance, it recorded:

  • The GraphQL type of the data, the operation performed (create, update, or delete), and the name of the GraphQL mutation that triggered the change.
  • The previous values, the updated values, and their difference. The code for computing the difference between two nested JavaScript objects can be found on my GitHub.
  • The timestamp at which the operation was executed.
  • Information of the user who executed the operation.
  • The primary identifier for the data in the main database.
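
To illustrate the idea behind the difference computation (this is a minimal sketch, not the actual implementation published on my GitHub), a recursive diff of two nested objects might look like:

```javascript
// Minimal sketch of a nested-object diff: for every leaf path whose value
// changed, return the previous and updated values. Illustrative only.
function diff(prev, next) {
  const changes = {};
  const keys = new Set([...Object.keys(prev || {}), ...Object.keys(next || {})]);
  for (const key of keys) {
    const a = prev ? prev[key] : undefined;
    const b = next ? next[key] : undefined;
    if (a === b) continue; // unchanged leaf, skip
    const bothObjects =
      a && b && typeof a === 'object' && typeof b === 'object' &&
      !Array.isArray(a) && !Array.isArray(b);
    if (bothObjects) {
      const nested = diff(a, b); // recurse into nested objects
      if (Object.keys(nested).length > 0) changes[key] = nested;
    } else {
      changes[key] = { previous: a, updated: b };
    }
  }
  return changes;
}
```

Storing only the diff alongside the full before/after snapshots makes individual changes easy to scan when diagnosing an incident.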

The impact of this project was manifold. First, it let us diagnose bugs that might have slipped past even a rigorous QA routine. Second, it highlighted areas where other teams needed more training or clarification, boosting the overall business impact. Third, it informed feature planning and user-experience decisions.

Deployment to AWS

I orchestrated the entire backend architecture's deployment to AWS, excluding the Neo4j database, utilizing various AWS services, including:

  • CodePipeline to manage the CI/CD pipeline.
  • Elastic Beanstalk to manage environments, EC2 instances, and the Amazon RDS PostgreSQL instance for history maintenance.
  • ElastiCache for a managed Redis instance.
  • AWS Lambda functions to create and retrieve history entries.
  • API Gateway to expose the Lambda functions.
  • Amazon SQS to queue requests to the Lambda functions.

As for the primary database, I selected Neo4j Aura Professional Edition.

Migrating Data from Firebase

One of my significant contributions was the design and implementation of a robust data migration tool. This tool transferred existing data and aligned it with the new schema while maintaining data integrity. Notably, the new schema introduced entities that did not previously exist, and the migration plan we devised during the brainstorming phase ensured a smooth transition.
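
The kind of transform involved can be sketched as follows: a flat Firebase document is split into the separate shipper-side and carrier-side entities of the new schema. All field and entity names here are hypothetical, chosen only to illustrate the shape of the mapping.

```javascript
// Hypothetical migration transform: split a flat Firebase order document
// into the segregated entities of the new graph schema.
function splitOrder(firebaseOrder) {
  const { id, status, shipperName, carrierName } = firebaseOrder;
  return {
    order: { id, status },                               // shipper-facing entity
    job: { orderId: id, carrier: carrierName ?? null },  // carrier-facing entity
    shipper: { name: shipperName },                      // newly introduced entity
  };
}
```

Running such transforms idempotently, keyed on the original document IDs, is what lets a migration be re-run safely until the cutover.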

Finance Revamp

Another notable undertaking was the overhaul of the financial and invoicing system. I implemented a comprehensive ledger for all financial transactions, enabling the introduction of client wallets. Clients could settle invoices using their wallet balances and make advance deposits, significantly enhancing working capital management.
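
A minimal sketch of how such a wallet ledger can work: the ledger is append-only, deposits are credits, invoice settlements are debits, and the balance is derived by summing entries rather than stored. The API below is illustrative, not Bridgelinx's actual implementation.

```javascript
// Append-only wallet ledger: the balance is always derived from the
// entries, so every change in a client's funds has an auditable record.
class WalletLedger {
  constructor() {
    this.entries = [];
  }
  deposit(amount, ref) {
    // Advance deposit: credit the wallet.
    this.entries.push({ type: 'credit', amount, ref, at: Date.now() });
  }
  settleInvoice(amount, invoiceId) {
    // Settle an invoice from the wallet: debit, guarded against overdraft.
    if (amount > this.balance()) throw new Error('insufficient wallet balance');
    this.entries.push({ type: 'debit', amount, ref: invoiceId, at: Date.now() });
  }
  balance() {
    return this.entries.reduce(
      (sum, e) => sum + (e.type === 'credit' ? e.amount : -e.amount), 0);
  }
}
```

Deriving the balance from the entry log, rather than mutating a stored number, keeps the ledger auditable and makes reconciliation against invoices straightforward.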
