Paramo Technologies is seeking a DataOps Platform Engineer to operate, automate, and evolve our hybrid data platform infrastructure. This role is central to enabling data teams by ensuring platform stability, scalability, and reliability across on-prem and Azure environments using modern infrastructure and automation tooling.
What You'll Do
- Operate and maintain core platform services: Spark, Flink, Airflow, OpenMetadata, and supporting components.
- Support and optimize internal analytical engines such as StarRocks, and assist in the rollout of technologies like Neo4j.
- Operate hybrid infrastructure: on-prem (VMware vSphere) + Azure (networking, identity, managed services, private connectivity patterns).
- Enable platform connectivity and operational readiness for data warehouse / analytics ecosystems (e.g., Synapse, Snowflake, BigQuery or similar): authentication/access patterns, runtime integration, operational monitoring (no pipeline ownership).
- Implement and maintain Infrastructure as Code using Terraform.
- Automate configuration and deployments using Ansible.
- Maintain GitOps workflows using ArgoCD.
- Manage CI/CD pipelines for platform components via Jenkins.
- Maintain and improve the observability stack: logging with Fluent Bit and Graylog; metrics, alerts, and dashboards with New Relic and Grafana.
- Support Splunk usage including platform-side onboarding and operational administration tasks (e.g., forwarders/integrations, source onboarding, access/retention patterns as needed in coordination with security/IT).
- Drive platform reliability through incident response, RCA, and lifecycle operations (upgrades, patches, feature enablement), working in an Agile/Kanban delivery model.
- Produce and maintain high-quality documentation: runbooks, operational guides, onboarding docs, and service ownership standards.
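To give a flavor of the day-to-day observability work above, here is a minimal, purely illustrative Python sketch of the kind of log triage that often sits behind alert tuning. The log format and the `error_rate` helper are hypothetical, not part of our actual stack; Fluent Bit and Graylog handle this at scale in production.

```python
from collections import Counter

def error_rate(log_lines):
    """Return the fraction of lines whose level field is ERROR.

    Assumes a hypothetical '<timestamp> <LEVEL> <message>' line format.
    """
    levels = Counter(line.split()[1] for line in log_lines if line.strip())
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

sample = [
    "2024-05-01T12:00:00Z INFO  job started",
    "2024-05-01T12:00:01Z ERROR connection refused",
    "2024-05-01T12:00:02Z INFO  retrying",
    "2024-05-01T12:00:03Z INFO  job finished",
]
print(error_rate(sample))  # 0.25
```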
What We're Looking For
- 5+ years of experience operating distributed systems or data platform components (Spark/Flink/Airflow).
- Practical experience operating cloud/hybrid environments, including Azure services used in data platforms (networking/connectivity, identity, managed services).
- Solid Linux, networking, troubleshooting, and automation skills.
- Hands-on experience with Terraform and Ansible.
- Practical experience with GitOps (ideally ArgoCD).
- Experience with CI/CD (preferably Jenkins).
- Hands-on familiarity with observability fundamentals: logs, metrics, alerts (Fluent Bit, Graylog, New Relic, Grafana).
- Strong operational ownership: incident handling, problem management, and continuous improvement (SLIs/SLOs, alert quality, capacity).
- Upper-intermediate English proficiency.
- On-call availability.
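The SLI/SLO ownership mentioned above boils down to simple error-budget arithmetic. The sketch below is a hypothetical illustration of that math (the function names and the 30-day window are assumptions for the example), not a description of our internal tooling.

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability SLO over the window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is blown)."""
    return 1.0 - downtime_minutes / error_budget_minutes(slo, window_days)

# A 99.9% SLO over 30 days leaves about 43 minutes of error budget.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 10.8), 2))  # 0.75
```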
Nice to Have
- Experience with analytical databases / distributed engines (StarRocks, ClickHouse, Druid, etc.).
- Interest or exposure to graph databases (Neo4j).
- Familiarity with developer-platform enablement tools such as Backstage (or similar).
- Exposure to Azure ML Studio or adjacent ML platform tooling (enablement/ops perspective).
Technical Stack
Our platform leverages technologies including Spark, Flink, Airflow, OpenMetadata, StarRocks, Neo4j, VMware vSphere, Azure, Terraform, Ansible, ArgoCD, Jenkins, Fluent Bit, Graylog, New Relic, Grafana, Splunk, Synapse, Snowflake, and BigQuery.
Team & Environment
Our culture is open, collaborative, and respectful. We value professional development, support well-being, listen to team needs, and aim to create a home-like environment where everyone can thrive.
Benefits & Compensation
- 22 days of annual leave
- 10 days of public/national holidays
- Health insurance options
- Access to online learning platforms
- On-site English classes in some countries
- Professional development support
Compensation includes a competitive salary.
Work Mode
This role supports a hybrid work model, with flexibility to work remotely or on-site based on personal preference and location.
We are an equal opportunity employer and welcome applications from all qualified candidates regardless of background.