This job is currently archived.
Posted on: 26 February 2017
Join our team

DataCamp is building the future of data science education. Our students get real hands-on experience by completing self-paced, interactive data science courses from the best instructors in the world, right in the browser. In fact, over 800,000 students around the world have completed nearly 50 million DataCamp exercises to date!

The role

We are looking for a talented data engineer who can help us build distributed data pipelines. You will be responsible for managing our entire data flow, from ingesting data from many different sources to provisioning a computation layer using Apache Spark. You will work closely with the Application Engineers, DevOps Engineers, and Data Scientists to provide the infrastructure and technology required for building prediction systems, recommendation engines, and more.

What we are looking for

- You have 2+ years of experience using Python or R in production.
- You have built cool stuff before and can take ownership of a product or feature.
- You can write clean, maintainable, performant, and testable code.
- You are passionate about data science and education.
- You work well in a team!
- Experience with large-scale distributed computing and API design for distributed services is a plus.
- Experience developing and maintaining production data pipelines (Kafka, Apache Spark, etc.) is a plus.
- A notion of data science concepts is also a plus.

What we offer

In addition to joining a creative, flexible, and international start-up, you'll enjoy the following:

- A competitive salary, including a company car.
- Stock options in an early-stage start-up with a lot of growth potential.
- Flexibility: want to work from home? Take a week off to go on holiday? Work different hours? We can work something out.
- Free lunch every day when working from our Leuven office.
- The opportunity to work in an internationally focused, fast-paced technology start-up.
3000 Leuven, Belgium