Responsibilities
- Use and develop web crawling technologies to capture and catalog data on the internet
- Support and improve our web crawling infrastructure
- Structure, define, and model captured data, providing semantic data definitions and automating data quality monitoring for the data that we crawl
- Develop new techniques to increase speed, efficiency, scalability, and reliability of web crawls
- Use our big data processing platform to build data pipelines, publish data, and ensure the reliable availability of data that we crawl
- Work with our data product and engineering teams to design and implement new data products from captured data, and to enhance existing products
Requirements
- 7+ years industry experience with clear examples of strategic technical problem solving and implementation
- Strong software development architecture and fundamentals for backend applications
- Solid understanding of the browser rendering pipeline and web application architecture (auth, cookies, HTTP request/response)
- Solid programming experience: strong grasp of object-oriented design and experience building applications using asynchronous programming paradigms (e.g., async/await, event loops, or concurrency libraries)
- Experience building crawlers
- Proficient in Linux/Unix command-line utilities, Linux system administration, architecture, and resource management
- Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness)
- Thrive in a fast-paced environment and work independently
- Work effectively in a remote setting (proactive about managing blockers, reaching out with questions, and participating in team activities)
- Strong written communication skills on Slack/Chat and in documents
- Experienced in writing data design docs (pipeline design, dataflow, schema design)
- Can scope and break down projects, and communicate progress and blockers effectively to your manager, team, and stakeholders
Nice to Have
- Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering
- Experience as a Red Teamer
- Experience working in data acquisition
- Experience in network architecture and how to debug and inspect network traffic (DNS, IPv4, Proxies, Application ports and interfaces; packet capture and analysis)
- Experience with Apache Spark
- Experience with SQL, including writing advanced queries (e.g., window functions, CTEs)
- Experience with streaming data platforms (e.g., Kafka or other pub/sub; Spark Streaming or other stream processing)
- Experience with cloud computing services (AWS preferred; GCP, Azure, or similar)
- Experience working in Databricks (including Delta Live Tables, data lakehouse patterns, etc.)
- Knowledge of modern data design and storage patterns (e.g., incremental updating, partitioning and segmentation, rebuilds and backfills)
- Experience with data warehousing (e.g., Databricks, Snowflake, Redshift, BigQuery, or similar)
- Understanding of modern data storage formats and tools (e.g., Parquet, ORC, Avro, Delta Lake)
Benefits
- Stock
- Competitive Salaries
- Unlimited paid time off
- Medical, dental, & vision insurance
- Health, fitness, and office stipends
- The permanent ability to work wherever and however you want
Work Arrangement
Hybrid
Additional Information
- People Data Labs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits.
- Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
