Ann Arbor, Michigan, United States · Remote (Global) · USD 195,000 - 235,000 yearly

Utilidata is hiring a Principal Platform Architect

About the Role

Utilidata is looking for a Principal Platform Architect to define and lead the architecture of our Karman data center solution. You will focus on high-frequency waveform acquisition, low-latency control paths, and hardware/software co-design for GPU-dense environments, serving as a technical leader and mentor across multiple engineering disciplines.

What You'll Do

  • Own the end-to-end architecture of the Karman data center platform, spanning high-resolution waveform sensing and acquisition, embedded firmware and Yocto-based OS layers, edge compute and control loops, and off-device visibility, analytics, and orchestration.
  • Define architectural principles that prioritize determinism, reliability, security, and low latency in power-critical production environments.
  • Architect systems integrating GPU telemetry, power states, and control interfaces.
  • Serve as a technical leader and mentor for sub-system level principal engineers and architects across hardware, firmware, operating system layers, and on-device software teams.
  • Partner with Product and Customer teams to align architecture with real-world data center constraints.
  • Drive technical reviews, architectural documentation, and long-term roadmap planning.
  • Translate product and customer requirements into technical designs that meet near-term business commitments while establishing a clear path to long-term scalability.
  • Architect ultra-low latency software paths for waveform capture and processing, real-time anomaly detection, and closed-loop power control.
  • Ensure system designs support predictable response times suitable for power capping and spike mitigation in GPU-dense server racks.
  • Lead performance modeling, instrumentation, and optimization across the full stack.
  • Provide deep technical leadership in low-level OS internals, kernel scheduling, memory management, I/O paths, firmware, bootloaders, BSPs, hardware abstraction layers, and real-time/near-real-time execution environments.
  • Partner with hardware teams on sensor integration, signal fidelity, timing accuracy, and compute placement.

What We're Looking For

  • 10+ years of experience in systems architecture, embedded systems, or high-performance platforms.
  • Prior experience designing and delivering distributed embedded systems from initial architecture through production build and large-scale deployment in mission-critical environments.
  • Deep expertise in low-level operating systems, firmware, and embedded software.
  • Proven track record designing low-latency, high-throughput, real-time or near-real-time systems.
  • Strong background in hardware-software co-design, especially in power-constrained or performance-critical environments.
  • Experience with performance profiling, tracing, and optimization across CPU, memory, networking, and I/O.
  • Experience with IoT or edge platforms that operate at scale and under strict reliability constraints.
  • Familiarity with real-time scheduling, timing analysis, and deterministic system behavior.
  • Proven ability to lead and influence across teams and organizational levels without formal authority by building trust and driving alignment.
  • Proven ability to communicate effectively across all organizational levels and translate complex information into clear, actionable guidance.
  • Demonstrated ability to lead architecture across multiple teams and disciplines.
  • Willingness to travel up to 20% of time.

Nice to Have

  • Experience working with GPU-based systems or other accelerators, including power and performance tradeoffs.
  • Familiarity with NVIDIA SoC architectures and experience developing production-ready implementations.
  • Experience in data center infrastructure, power systems, or energy-aware computing.
  • Familiarity with RDMA-based architectures, high-speed interconnects, and zero-copy data pipelines.
  • Exposure to AI/ML inference pipelines, especially in real-time or edge contexts.

Technical Stack

  • Yocto-based OS
  • GPU telemetry
  • NVIDIA SoC architectures
  • RDMA-based architectures
  • High-speed interconnects
  • AI/ML inference pipelines

Team & Environment

You will work closely with engineering leadership, hardware, firmware, platform, and data teams.

Benefits & Compensation

  • Base compensation of $195,000 to $235,000 depending on experience.
  • Health, dental, vision insurance
  • 401(k) with employer match
  • Flexible paid time off
  • Flexible work environment
  • Mentorship and growth opportunities

Work Mode

This role is open to remote candidates globally; the company's locations are in the United States.

Utilidata values the diversity of our team. We provide equal employment opportunities without regard to race, color, religion, creed, sex, gender, sexual orientation, gender identity or expression, national origin, age, physical disability, mental disability, medical condition, pregnancy or childbirth, genetics, genetic information, marital status, status as a covered veteran, or any other basis protected by applicable federal, state, and local laws.

Required Skills
Yocto-based OS · GPU telemetry · NVIDIA SoC architectures · RDMA-based architectures · High-speed interconnects · AI/ML inference pipelines · Systems Architecture · Embedded Systems · Hardware-Software Co-design · Low-level Operating Systems · Firmware · Distributed Systems · Real-time Systems
About company
Utilidata

A fast-growing NVIDIA-backed edge AI company enabling greater visibility and control of power utilization in energy-intensive infrastructure, like the electric grid and data centers. Karman, the company's distributed AI platform powered by a custom NVIDIA module, is transforming the way utility companies operate the grid edge and will enable data centers to unlock more compute for the same provisioned power.

Job Details
Department Engineering
Category Embedded
Posted 14 days ago