For Technical Interviewers

Abror Aliboyev

Software Engineer

9 years building with TypeScript, Python, and Go. I'm not tied to specific frameworks — I've picked up new stacks, languages, and tools throughout my career as projects demanded it. I've written Python modules, worked at the protocol level, managed bare-metal servers, and debugged things most developers never touch. The frameworks I mention are what I've used most recently, but I'm comfortable adapting to whatever the problem needs.

Selected engineering wins

  • 14,000+ publications — built the entire platform, every integration, solo
  • 99.9% uptime across 20+ microservices at a US anti-fraud startup
  • Recovered a corrupted 11M-row database in 26 hours under exam-season pressure
  • 100× speed improvement — rewrote Python scrapers to Go
  • Designed pipelines from scripts → Airflow → Temporal for unreliable legal data

How I approach work

Align first, then build.

Before anyone writes code, the team should be on the same page — API protocols, data schemas, conventions. Align on how things communicate and what the contracts look like, then everyone can go build with confidence instead of discovering incompatibilities later. I'm also obsessed with efficiency — every line of code matters, and readable code beats a wall of comments. Bugs are part of development; they should be minimized, not treated as moral failures. Good architecture doesn't need shortcuts, hardcoded values, or placeholder fixes.
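
As a sketch of what "align on the contract first" looks like in practice, here's a minimal Python version using a frozen dataclass as the shared schema. The `OrderEvent` shape and its fields are hypothetical, just to show the idea: producer and consumer import the same contract instead of passing ad-hoc dicts around.

```python
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

# A shared, versioned contract: both sides of the integration import
# this module, so incompatibilities show up at review time, not in prod.
@dataclass(frozen=True)
class OrderEvent:
    order_id: str
    amount_cents: int  # integers for money, never floats
    currency: str      # ISO 4217 code, e.g. "USD"
    created_at: str    # ISO 8601 UTC timestamp

    def to_payload(self) -> dict:
        return asdict(self)

event = OrderEvent(
    order_id="ord_123",
    amount_cents=4999,
    currency="USD",
    created_at=datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat(),
)
print(event.to_payload())
```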

Understand before you touch.

At Codevalet, I joined an 80k+ line codebase spread across multiple microservices. Before writing any code, I spent a week reading, tracing, and documenting the entire architecture — every service, every connection, every implicit assumption. I wrote two documents: one explaining how it all worked, and another making the case for why we should rewrite it from scratch and take nothing from the current codebase. That investment paid off — when I started coding, I shipped my first PR in two days. I'm confident in my ability to get up to speed on existing code quickly, and I think that comes from taking documentation seriously before jumping in.

No shortcuts.

I don't like implementing solutions with quick fixes and placeholders. If I'm asked to take a shortcut, I'll still try to find a better way, because it's the developer who ends up living with bad code and its maintenance burden down the road. A well-planned architecture usually doesn't require hardcoded values or temporary workarounds. When it does, that's a signal that something needs rethinking, not patching.

Test what matters, not everything.

I don't enjoy writing tests for the sake of coverage numbers. Test the parts you know are critical — the business logic, the edge cases, the integration points. Leave room for things to break if they're not critical. When testing tries to cover 100% of a codebase, development becomes boring and slow. These days AI handles a lot of the tedious test writing, which honestly makes me much happier about the whole process.
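
A minimal sketch of that philosophy: put the tests where the money math and edge cases live, and don't chase coverage elsewhere. The `apply_discount` function here is hypothetical.

```python
def apply_discount(total_cents: int, percent: int) -> int:
    """Business-critical: money math must be exact and clamped."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

# The tests target the edge cases that actually matter:
assert apply_discount(10_000, 0) == 10_000  # no discount
assert apply_discount(10_000, 100) == 0     # full discount
assert apply_discount(9_999, 33) == 6_700   # integer rounding, no float drift
```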

If I have an opinion, I'll show you why.

When I disagree with a technical decision, I don't just argue — I explain my reasoning and, if needed, build a prototype to demonstrate the alternative. If it's beyond my scope, I'll come asking for input. But if a task is trusted to me, I expect the space to use my expertise. I've found that the best teams work this way — trust the person closest to the problem, and discuss when it affects the bigger picture.

Readability is the real documentation.

I'd rather read well-structured code with clear naming than a codebase covered in comments explaining what should be obvious from the code itself. Types help too — TypeScript strict mode with well-designed types is documentation that the compiler enforces. Comments go stale, types don't.
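
The types-as-documentation point isn't TypeScript-specific; here's the same idea sketched in Python with a `TypedDict`. The `Deployment` shape is hypothetical — the point is that the type states what a comment would otherwise have to, and a type checker keeps it honest.

```python
from typing import Literal, TypedDict

# The type says what a comment would otherwise explain: status is one
# of three known states, nothing else.
class Deployment(TypedDict):
    service: str
    replicas: int
    status: Literal["pending", "running", "failed"]

def is_healthy(d: Deployment) -> bool:
    return d["status"] == "running" and d["replicas"] > 0

print(is_healthy({"service": "api", "replicas": 3, "status": "running"}))
```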

The stack, in depth

Frontend

Started with JavaScript and PHP, moved to Vue through Laravel's ecosystem, built scienceweb.uz in Vue 2.6 and later rewrote it with Vue 3 and Composition API. Moved to React for the ecosystem breadth. I know the tradeoffs — Next.js is great for most apps but starts struggling when the frontend is highly interactive; Vue handles that better performance-wise, and TanStack is a better React story than Next.js for certain use cases. I pick based on what the project actually needs, not habit. Tailwind CSS for styling, Zustand over Redux for state. I avoid component libraries when I can — custom components are easier to evolve.

Backend & Languages

Python, TypeScript, and Go — not just at the framework level. I've written Python modules (custom Elasticsearch query generation, GitHub Linguist port from Ruby), understand asyncio deeply and know when async/await actually helps vs when it's unnecessary overhead. I can explain the difference between multiprocessing parallelism, multithreaded concurrency, and cooperative multitasking — many developers conflate these. FastAPI is my preferred Python framework; Flask is too minimal, Django too heavy, but I can work with both. Node.js and Express/Fastify for TypeScript backends. Go is my current focus — I love its concurrency model and tried implementing Telegram's MTProto protocol as a learning exercise.
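
A minimal sketch of that distinction: with asyncio's cooperative multitasking, two I/O waits overlap on a single thread, so the total wall time is one wait, not two. The same code with blocking calls would serialize — and CPU-bound work wouldn't benefit at all, which is where multiprocessing comes in.

```python
import asyncio
import time

# Cooperative multitasking: both sleeps yield to the event loop, so two
# 0.2s waits overlap instead of running back to back. This helps for
# I/O-bound work; CPU-bound work still needs multiprocessing, because a
# single event loop (and, under the GIL, threads) executes Python
# bytecode on one core at a time.
async def fake_io(label: str) -> str:
    await asyncio.sleep(0.2)  # stands in for a network call
    return label

async def main() -> list[str]:
    return await asyncio.gather(fake_io("a"), fake_io("b"))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)  # overlapped: ~0.2s total, not 0.4s
```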

Databases

PostgreSQL is my default — it handles most use cases well. I've also worked with MongoDB (including running self-managed clusters on EC2), MySQL (recovered a corrupted 11M-row database by hand), and Redis. Wrote a Python module to generate Elasticsearch queries and understand its internals well enough to reason about scoring and analyzers. For AI projects I've used vector databases — Weaviate and Qdrant — for semantic search and embeddings.
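
To give the flavor of programmatic Elasticsearch query generation, here's a hedged sketch — plain dicts mirroring the Query DSL's `bool` shape. The field names and structure are illustrative, not the actual module.

```python
# Build an Elasticsearch bool query from simple inputs: full-text match
# on one field, exact-match term filters on the rest. Filters don't
# affect scoring, which is exactly what you want for facets.
def build_search(text: str, filters: dict[str, str], size: int = 10) -> dict:
    return {
        "size": size,
        "query": {
            "bool": {
                "must": [{"match": {"title": text}}],
                "filter": [
                    {"term": {field: value}} for field, value in filters.items()
                ],
            }
        },
    }

q = build_search("machine learning", {"lang": "en", "status": "published"})
print(q["query"]["bool"]["filter"])
```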

Infrastructure

Kubernetes, Docker, Helm, Nginx, Linux, Git — the full picture around software delivery, not just the code. I prepared for the CKA exam and run a 9-node cluster at home with Calico CNI and eBPF, which has meant debugging real issues like ARM and eBPF networking incompatibilities. On cloud, I worked intensively with AWS at YoFi — SAM templates, CDK, SST, Lambda, DynamoDB, Neptune. Wrote SAM templates to spawn and auto-scale MongoDB clusters on raw EC2. Ansible for provisioning, ArgoCD for GitOps, Traefik for ingress, VictoriaMetrics and Grafana for observability.

Tooling & Process

Biome over ESLint + Prettier — one tool, faster, opinionated. Bun as package manager for speed. Conventional commits for readable git history. Code reviews focused on architecture and correctness, not style (that's what formatters are for). I write tests for behavior, not implementation — integration tests over unit tests for most business logic.

Hard problems I've dealt with

Recovering a corrupted MySQL database under exam pressure

A university Moodle LMS platform I maintained at I-Edu Group went down right before exam season. The sysadmin ran a distro update and rebooted the server mid-write, corrupting parts of a MySQL database with over 11 million rows across 100+ tables. The worst hit was the answers table — the one holding test questions and student responses. I spent 26 hours straight tracking down corrupted rows from over a million entries, manually recovering what I could. In the end, we got back about 99% of the data. It was one of the most stressful experiences I've had, but the platform came back online and exams went ahead.
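
I'm not reproducing the actual recovery here, but the general technique for salvaging rows around corruption can be sketched: bisect id ranges with a reader that may fail, narrowing failures down to individual rows so everything else can be copied out. The reader below is a simulation, not MySQL.

```python
def salvage(read_range, lo: int, hi: int, bad: list[int]) -> list:
    """Recover rows in [lo, hi) using a reader that raises on corrupt
    spans; bisect failures down to single ids so good rows survive."""
    if lo >= hi:
        return []
    try:
        return read_range(lo, hi)  # whole span is clean
    except IOError:
        if hi - lo == 1:           # isolated a single corrupt row
            bad.append(lo)
            return []
        mid = (lo + hi) // 2
        return salvage(read_range, lo, mid, bad) + salvage(read_range, mid, hi, bad)

# Simulated table: ids 0..15, with rows 5 and 11 corrupted.
CORRUPT = {5, 11}
def read_range(lo: int, hi: int) -> list:
    if any(i in CORRUPT for i in range(lo, hi)):
        raise IOError("corrupt page")
    return list(range(lo, hi))

bad: list[int] = []
rows = salvage(read_range, 0, 16, bad)
print(len(rows), sorted(bad))  # 14 [5, 11]
```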

AWS Neptune bottleneck and MongoDB migration at YoFi

At YoFi, our data ingestion pipeline hit a wall with AWS Neptune — a serverless graph database that became the main bottleneck. No matter how much we scaled compute and RAM, it degraded past 100 orders per second, and we were processing 200k+ orders with the volume growing exponentially. RDS was also becoming unsustainably expensive, and SQL wasn't a natural fit for Shopify's data structure. We migrated to MongoDB, which solved the cost and flexibility problems, but Atlas pricing was still too high. I ended up writing SAM templates and scripts to spawn and auto-scale MongoDB clusters on EC2 instances — not Kubernetes, just raw EC2 with MongoDB's own replication. Getting replication stable on that setup had its own set of challenges, but we got it working and the cost savings were significant.
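
As a rough illustration of the EC2 setup (not the actual SAM templates), a self-managed MongoDB replica set ultimately boils down to the document passed to the `replSetInitiate` admin command — one member per instance. The hostnames and priority scheme here are hypothetical.

```python
# Build the replSetInitiate document for a self-managed replica set:
# one member per EC2 instance, with one preferred primary and lower-
# priority members as failover targets.
def replset_config(name: str, hosts: list[str]) -> dict:
    return {
        "_id": name,
        "members": [
            {"_id": i, "host": f"{h}:27017", "priority": 1 if i == 0 else 0.5}
            for i, h in enumerate(hosts)
        ],
    }

cfg = replset_config("rs0", ["ec2-a.internal", "ec2-b.internal", "ec2-c.internal"])
print(cfg["members"])
```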

Building reliable legal data pipelines for Lexpert

For the Lexpert agent in Asy AI, I needed to build a stable data pipeline over legal data from lex.uz — which is not a reliable data platform by any measure. I built and rewrote the pipeline several times. The first attempt was simple Python scripts, but when scale and observability demanded more, I adopted Airflow and RabbitMQ. That helped with flow but added Kubernetes deployment complexity — Airflow's experience on Kubernetes is not the smoothest, and developing workflows on it was error-prone. When I switched to Temporal, everything clicked — the flexibility to run workflows at scale and speed made a real difference. I also rewrote the scrapers from Python and Scrapy to Go, which gave roughly a 100× speed improvement, since Scrapy tasks would sometimes get stuck on different response formats. Scraping lex.uz reliably is still one of the hardest data challenges I've worked on.
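
Much of that rewrite came down to control over failure: explicit timeouts and bounded retries instead of tasks silently hanging. The pattern, sketched with a stubbed fetch function (the real scrapers hit actual HTTP endpoints):

```python
import time

def fetch_with_retry(fetch, url: str, attempts: int = 3, backoff: float = 0.01):
    """Bounded retries with exponential backoff: a hung or failing
    request gives up cleanly instead of stalling the whole pipeline."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch(url)
        except TimeoutError as err:
            last_err = err
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"gave up on {url} after {attempts} attempts") from last_err

# Stub that fails twice, then succeeds -- simulating a flaky source.
calls = {"n": 0}
def flaky_fetch(url: str) -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("stuck response")
    return "<html>ok</html>"

body = fetch_with_retry(flaky_fetch, "https://example.com/doc")
print(body)
```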

Beyond work

I build side projects to explore ideas and learn new things. Tried implementing Telegram's MTProto in Go, run a homelab cluster that I constantly tinker with, and built this site as an experiment — a portfolio that adapts to its audience rather than presenting a one-size-fits-all resume.

Open source contributions, personal tools, and technical writing are how I stay curious and give back to the community.
