Cranial Technologies is seeking a Senior Distributed Systems Engineer / Architect to design and build highly scalable custom systems that process large volumes of data across CPU-, disk-, and network-intensive workloads. This is a deeply hands-on role requiring strong systems thinking, algorithm design, and performance-optimization skills.
What You'll Do
- Design and implement scalable distributed systems that handle heavy CPU, disk, and network workloads.
- Architect systems for high throughput, reliability, and efficient resource utilization.
- Develop distributed algorithms and data processing pipelines.
- Analyze system behavior to identify bottlenecks across compute, storage, and network layers.
- Optimize workloads to maximize efficiency and minimize resource waste.
- Develop strategies for parallelization, batching, and workload scheduling.
- Implement system components and tooling primarily in Python and Bash.
- Build custom orchestration, automation, and distributed job execution mechanisms.
- Write efficient algorithms and low-level logic to manage large-scale workloads.
- Build instrumentation, metrics, and telemetry to measure system performance.
- Develop dashboards and analysis workflows to guide optimization decisions.
- Use empirical data and experimentation to improve system behavior.
- Design systems that operate reliably across distributed environments.
- Implement monitoring, debugging, and recovery mechanisms for large-scale systems.
- Collaborate with infrastructure and platform teams to ensure smooth deployment and operation.
What We're Looking For
- Strong experience building distributed systems or large-scale backend infrastructure
- Deep understanding of systems performance (CPU, memory, disk I/O, networking)
- Experience optimizing workloads for throughput and efficiency
- Strong Python development skills
- Strong Bash / shell scripting skills
- Ability to implement and reason about algorithms and system-level logic
- Experience with parallel processing, distributed job execution, or large data pipelines
- Familiarity with Linux systems, resource scheduling, and performance tuning
- Understanding of networked systems and distributed coordination
- Strong data-driven mindset with focus on measurement and experimentation
- Experience building observability, metrics, and instrumentation
- Ability to debug complex systems in production environments
- U.S. citizenship
Nice to Have
- Experience with high-performance computing (HPC) workloads
- Experience with containerized environments (Docker/Kubernetes)
- Background in large-scale data processing or distributed compute frameworks
- Familiarity with performance profiling tools and system tracing
Technical Stack
- Python
- Bash
- Linux
- Docker
- Kubernetes
Work Mode
This is a hybrid role with options for remote work.
Cranial Technologies is an equal opportunity employer.


