Come [legally] hack with us the data on the largest exchange running our world. Not NASDAQ; the one with far more events: the global ad exchanges, where millions of ads are born and clicked every second. Step behind the curtain of the algorithms and competitors that move $1T in annual budgets. Plunge into a world of ISP-scale traffic volumes, sub-second predictions, and terabytes of live, real-world data. Use cutting-edge analysis, ML, and engineering, or just plain hacker-like thinking, to outperform the market.

Arpeely is a data-science startup leveraging data analysis, ML, engineering, and multi-disciplinary thinking to gain a market edge and exploit hidden opportunities in real-time advertising. Processing over 0.5M requests per second and serving over 20B sub-second predictions daily, we build and operate machine-learning algorithms running on the world’s largest real-time-bidding (RTB) ad exchanges. Arpeely is a Google AdX vendor and serves clients spanning from startups to S&P 10 companies.

Founded by ex-IDF veterans, Arpeely is an ad-tech startup building and operating machine-learning algorithms for autonomous media acquisition. Our business and tech meet at the intersection of the exciting fields of digital marketing, fraud detection, and ML/AI. Connected to the world’s largest real-time-bidding ad exchange, Google AdX, we use data, ML, engineering, and hacker-like thinking to short-circuit and outperform the market.

We are looking for a passionate Senior DevOps Engineer to join our small all-star team. In this role, you will design, implement, and deploy products and infrastructure that directly amplify our core business and enable new strategic growth opportunities. The backbone systems you build will sustain extreme impression loads and support sub-second ML prediction and on-the-fly ML training.

If you are experienced but still hungry to learn and make an impact - we’d love to have you on our team!

You'll be responsible for

  • Scaling our RTB bidders and ML prediction infrastructure to support extreme loads of data and traffic - 1M QPS.
  • Owning deployment, monitoring, troubleshooting, maintenance, and uptime of all our production GCP cloud environments.
  • Building infrastructure for massive data ingestion (1TB/hr), continuous ML training, and real-time prediction.
  • Building, updating, and implementing CI/CD pipelines and DevOps automation processes, methodologies, and tools.
  • Working closely with the Engineering and Data teams, taking full responsibility and ownership from conception to post-deployment in a collaborative, fast-paced environment.
  • Staying on top of modern technologies with our infra stack: GCP, Kubernetes, Prometheus, Grafana, Python 3, Go, BigQuery, Redis, MySQL.

Who we are looking for

  • At least 2 years of experience as a DevOps Engineer, preferably at a startup
  • Strong technical skills and a good understanding of systems and infrastructure
  • Experienced in building the full application release cycle (CI/CD)
  • Familiar with how modern web applications work and scale
  • Knowledge of networking, firewall rules management, and application security
  • Comfortable in Linux environments, including scripting and programming
  • Ability to see the bigger picture and carry out system architecture planning
  • Proven DevOps and infrastructure experience - an advantage
  • Understanding of product and a passion for building software that impacts millions of users
  • Experience with Redis, relational and NoSQL databases/data warehouses, or equivalent
  • Independence, ownership, and a sense of urgency; a self-starter with a startup mentality

Work from our [newly renovated] offices in Tel Aviv, Midtown Commerce Tower.