Big Data Engineer (Romania)

For our Inventory Forecast team in Bucharest, located on the 35th floor of the Sky Tower building (the tallest building in Romania), we are hiring a Big Data Engineer. The Inventory Forecast team gives publishers and buyers a 360-degree view of their future inventory and estimated fill rate, while assisting them in their day-to-day media-planning activities.

If interested, please send your CV to 
For selected candidates, we offer a relocation package and an upfront payment of one salary at relocation.

As a Big Data Engineer, you will:
- Participate in the analysis and design of services processing huge amounts of data and offering relevant business insights to our customers
- Apply all the math you studied in university to enhance our forecasting models
- Contribute to the development and maintenance effort during the full cycle of the product
- Use Amazon Web Services to deploy and scale in order to achieve the desired performance
- Positively influence the scalability of our systems through your technical choices
- Significantly influence the evolution of the global ad tech market through correct predictions of future revenue
- Continuously reinvent the product you're working on in order to disrupt the very dynamic ad tech industry

Your Job Requirements:
- Studies in Computer Science, Mathematics, IT, or related discipline
- Extensive knowledge of Java programming, with a particular focus on concurrent/multi-threaded programming, Spring, and JDBC
- Interest in debugging & solving JVM performance problems
- Proven ability in Java analysis, design and development for high performance applications
- Experience applying development best practices (documentation, code reviews, coding standards, design patterns)
- Knowledge of Linux/UNIX environments
- Good knowledge of English, both spoken and written, as we have global exposure

The following are assets:
- A deep understanding of current Big-Data technologies and the ability to offer suggestions on the ones best suited to each problem
- Hands-on experience with the Hadoop stack (e.g. MapReduce, YARN) and Spark on AWS (e.g. administration, configuration management, monitoring, debugging, and performance tuning)
- Knowledge of SQL and of the trade-offs involved in choosing among SQL, NoSQL, and DWH solutions
- Knowledge of deploying and scaling cloud services using AWS
- Experience with AWS Elastic MapReduce (EMR)
- Experience / interest in Spark, Scala & functional programming
- Appetite for mathematical models and machine learning

Our offer (bonuses, benefits) - what’s in it for you:
- Casual & friendly working environment with opportunities to impact the company through your ideas and involvement; we have a start-up mindset and an easy-going communication style (even with top management)
- Technology diversity (autonomy to choose which technology to use for the quarterly roadmap project) and real technical challenges
- Working in international, multicultural teams across all our offices
- Flexible working schedule and a work-from-home option
- Annual individual budget for trainings and certifications
- Bonus system, on top of base salary, paid quarterly (for real, not just on paper)