I'm a senior backend engineer with over 20 years in software and more than 7 years working professionally with Go.
I’ve built a broad range of Go services, including REST/JSON microservices, gRPC/Protobuf APIs and event-driven systems using platforms such as Amazon SQS/SNS and Google Pub/Sub. My work often involves designing and implementing concurrent processing patterns, using goroutines, channels, mutexes, wait groups and structured context propagation.
I have substantial experience integrating services with modern observability stacks. This includes sending JSON logs, metrics, spans and traces through OpenTelemetry into systems such as Grafana, and building dashboards and alerts based on SLIs and operational behaviour. I’ve used these tools extensively to troubleshoot and refine production systems in real time.
I value high-quality testing and have practical experience with unit, integration and end-to-end testing. I frequently use gomock and testify to support modern testing workflows, including generating mocks and validating behaviour cleanly and predictably.
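A typical shape for such tests is the table-driven style below, shown here with only the standard testing package; the function under test and its cases are invented, and in practice testify assertions and gomock-generated collaborators slot into the same structure:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// normalizeSlug is an invented stand-in for real business logic under test.
func normalizeSlug(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

// Table-driven tests keep cases declarative and each one individually named.
func TestNormalizeSlug(t *testing.T) {
	cases := []struct {
		name, in, want string
	}{
		{"trims whitespace", "  Abc ", "abc"},
		{"lowercases", "GoLang", "golang"},
		{"already clean", "ok", "ok"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := normalizeSlug(tc.in); got != tc.want {
				t.Errorf("normalizeSlug(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}

// main lets the sketch run standalone; `go test` is what runs TestNormalizeSlug.
func main() {
	fmt.Println(normalizeSlug("  Example ")) // prints "example"
}
```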
I’ve deployed services through Jenkins, Harness and GitHub Actions, and have written custom GitHub Apps to support CI/CD automation. I’ve also optimised PR pipelines with automated linting, testing and other quality checks.
Operationally, I’ve deployed services to Kubernetes across GKE, EKS and AKS, as well as to Google Cloud Run and AWS Lambda.
I created and maintain Config, a lightweight Go library for managing service configuration across multiple sources with predictable precedence and hot-reload support. It reads configuration from command-line arguments → environment variables → YAML/JSON files, merging sources in that order of precedence, and populates Go structs via expressive tags (default, required, src, base64, literal). It also supports querying values directly by path.
The library includes support for nested structures, slice indexing, duration parsing ("30m", "2 hours"), and live reload behaviour — when watched files change, structs are rehydrated or callbacks invoked automatically. Designed without dependencies like spf13/viper, it focuses on clarity, predictable behaviour and operational reliability.
Shortify.pro is a production-grade URL shortening platform that I designed and built using a clean, service-oriented architecture. The system runs as a set of lightweight Go microservices on Google Cloud Run, backed by Firestore as a scalable NoSQL datastore. Each service is independently deployable, fast to start, and designed to operate predictably under load.
The platform uses an event-driven pipeline: when a URL is shortened, a post-processing event is published to Google Pub/Sub, triggering downstream tasks such as analytics enrichment, statistics aggregation and security checks. As part of its safety model, the system integrates with the Google Safe Browsing API to detect phishing, malware or otherwise unsafe destinations.
The frontend is built in React, and the backend issues signed JWT tokens to allow users to manage their shortened URLs securely. Deployment is fully automated using GitHub Actions, covering linting, tests, container builds and incremental rollout to Cloud Run.
For observability, all services emit structured logs, metrics, spans and traces using OpenTelemetry, surfaced in Grafana dashboards. The public API surface is protected by Cloudflare, which provides caching, rate-limiting and edge-level security filtering.
Flow Control is a lightweight traffic-management service that tracks how often individual traffic sources (such as IP addresses) access specific API endpoints. It exposes simple APIs: one to record that a source has accessed a target, and another to check whether that source has exceeded a configured rate limit for that target. This gives operators a fast, reliable way to determine when a client is behaving normally, approaching a limit, or needs to be slowed or blocked.
The platform is built around three core capabilities:
1. Event Recording
A dedicated endpoint accepts reports that a particular source has accessed a particular target. These recorded events form the basis for all rate-limit evaluations and daily analytics.
2. Rate-Limit Evaluation
A second endpoint checks whether a given source is currently “over limit” for a specific target according to the operator’s configuration. Firestore is used as the underlying counter store, providing predictable performance, horizontal scalability and simple operational behaviour.
3. Cloudflare Enforcement with Exponential Backoff
If an operator chooses to block a source, Flow Control integrates directly with Cloudflare’s Firewall API. Each ban increments the source’s daily ban count, and ban durations increase automatically using exponential backoff: short blocks for first-time offenders, progressively longer blocks for repeat abuse. This allows gentle handling of legitimate users while still suppressing persistent problematic traffic.
Architecturally, the service stays deliberately simple: stateless Go services running on Cloud Run; Firestore-backed counters for per-source/per-target tracking; per-target rate-limit rules for fine-grained behaviour; and automatic Cloudflare bans that escalate via backoff. Because it runs entirely on serverless infrastructure and exposes its own lightweight APIs, Flow Control does not require an API gateway in front of your services; it can be called directly from any application or edge worker that reports traffic events or performs checks.
Flow Control gives API operators a clean, deterministic way to measure incoming traffic behaviour and apply consistent, automated responses — from soft rate limits to full Cloudflare bans — without adding operational complexity.