Stanford Health Care is looking for an Enterprise Information Management (EIM) Platform Administrator to join our Enterprise Information Management department. This role focuses on the automated administration, monitoring, and support of our EIM platform, Databricks on Azure. You will be responsible for ensuring data integrity, security, and compliance while designing and performing standard DataOps support processes and leading the development of new solutions using cloud and AI technologies.
What You'll Do
- Administer Databricks workspaces: users/groups, workspace objects, permissions, cluster policies, pools, jobs, repos, and compute access.
- Lead development of automations and infrastructure-as-code for repeatable workspace provisioning and configuration.
- Design and enforce platform standards: environment separation, workspace segmentation, cluster policy design, catalog structure, and secure library/dependency management.
- Oversee and refine data platform and operational processes, ensuring high availability and performance of data pipelines.
- Respond to incidents using defined support processes; assess post-incident learnings and drive preventive actions.
- Analyze platform performance metrics versus SLAs and vendor contracts, identifying and driving improvements.
- Design and implement scalable and efficient data pipelines for various data types from sources including databases, APIs, and third-party services.
- Partner with Security and Compliance to define requirements and solutions as technology evolves.
- Operate and support Unity Catalog and access controls over catalogs, schemas, tables, external locations, and storage credentials in accordance with SHC policies.
- Configure and maintain secure connectivity to platform services: ADLS Gen2, Key Vault, Azure Monitor/Log Analytics, private networking, and approved ingestion endpoints.
- Assess controls to meet healthcare data security requirements for PHI/PII (encryption in transit and at rest, secure secret management, key rotation coordination, and audit logging) and conduct recurring access reviews.
- Drive cost governance: usage reporting, cluster sizing guidance, job scheduling, idle/overprovisioned-cluster reduction, and tagging/chargeback support.
- Create comprehensive documentation of solutions and operations processes.
- Conduct training sessions for internal teams on operational protocols and best practices.
- Work closely with cross-functional teams and lead discussions on platform capabilities and operational processes/standards.
- Evaluate and recommend new tools and technologies to enhance the cloud data platform's capabilities.
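To illustrate the cluster-policy and cost-governance duties above, the sketch below builds a Databricks cluster-policy definition that pins auto-termination, caps cluster size, and requires a cost-center tag for chargeback. This is a hypothetical example, not SHC's actual standard; the tag name `CostCenter` and the value `EIM-1234` are placeholders, while the attribute paths and `fixed`/`range` types follow the Databricks cluster-policy definition schema.

```python
import json

def build_cluster_policy(max_workers: int, cost_center_tag: str) -> str:
    """Return a Databricks cluster-policy definition as a JSON string."""
    definition = {
        # Idle clusters shut down after 30 minutes to reduce cost.
        "autotermination_minutes": {"type": "fixed", "value": 30},
        # Users may size clusters freely up to the cap.
        "num_workers": {"type": "range", "maxValue": max_workers},
        # A cost-center tag is required so usage can be charged back.
        "custom_tags.CostCenter": {"type": "fixed", "value": cost_center_tag},
    }
    return json.dumps(definition, indent=2)

policy_json = build_cluster_policy(max_workers=8, cost_center_tag="EIM-1234")
print(policy_json)
```

Generating the definition in code rather than by hand keeps it reviewable and repeatable; the resulting JSON can then be applied through infrastructure-as-code (for example, Terraform's `databricks_cluster_policy` resource) as part of automated workspace provisioning.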
What We're Looking For
- A BS/BA degree in information technology, information systems, business management, business analytics, business administration or a directly related field from an accredited college or university.
- Four (4) or more years of experience as a Cloud Engineer, Data Engineer, or in a similar role with a focus on cloud technologies.
- Certifications in Azure (e.g., Azure Solutions Architect, Azure Data Engineer) and/or Databricks.
- Strong knowledge of Azure services, including Databricks, Microsoft Fabric, Azure Data Factory, Azure SQL Database, and Azure Storage.
- Experience with a variety of cloud database services.
- Proficiency in infrastructure-as-code tools (e.g., Terraform, ARM templates, Bicep).
- Recent experience with the architecture, design, and implementation of complex, highly available, and highly scalable solutions.
- Experience with data operations, ETL processes, and data pipeline management.
- Familiarity with data security best practices and compliance standards.
- Proficiency with CI/CD for production deployments.
- Current knowledge across the breadth of Databricks product and platform features.
- Excellent problem-solving skills and the ability to work independently and collaboratively.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Experience with DevOps practices and tools (e.g., Azure DevOps, Git).
- Knowledge of programming languages and frameworks such as Python, PySpark, SQL, R, and Scala.
Technical Stack
- Cloud Platforms: Azure, GCP, AWS
- Azure Services: Databricks, Microsoft Fabric, Azure Data Factory, Azure SQL Database, Azure Storage
- Infrastructure-as-Code: Terraform, ARM templates, Bicep
- DevOps Tools: Azure DevOps, Git
- Programming Languages: Python, PySpark, SQL, R, Scala
Team & Environment
This role resides in Stanford Health Care's Enterprise Information Management department.
Benefits & Compensation
- Compensation generally ranges from $66.52 to $88.14 per hour.
Work Mode
This position is onsite at Stanford Health Care.
Stanford Health Care strongly values diversity and is committed to equal opportunity and non-discrimination in all of its policies and practices, including the area of employment. Accordingly, SHC does not discriminate against any person on the basis of race, color, sex, sexual orientation or gender identity and/or expression, religion, age, national or ethnic origin, political beliefs, marital status, medical condition, genetic information, veteran status, or disability, or the perception of any of the above.