Who we are
We are an AI startup with ~10 curious employees who are passionate about computer vision. We provide photorealistic augmented images so that visual AI can be trained safely and reliably. Our training data is therefore particularly relevant to the AR industry and to autonomous vehicles. Our unique recording technology enriches existing training data with mixed reality according to customer needs. This puts us one rabbit’s jump ahead of our competitors.
However, a team full of smart rabbits like ours can only achieve top performance and keep growing if a data talent like you supports us in our novel approach to data delivery. As Senior Data Pipeline Engineer (*) for our Computer Vision and Machine Learning data pipelines, you would be responsible for our data infrastructure.
What we offer
- Flexible work and home office options
- 30 paid vacation days
- A fair salary (up to 90k yearly) plus virtual stock options
- A permanent contract
- A personal mentor during your onboarding
- Exciting, highly technology-driven colleagues and an international team
- Various further development and training opportunities
- An open error culture and a growing feedback culture
- A growing corporate culture which we invite you to shape together with us
- For talents outside of Germany: Relocation help and German language courses
Who we would like to be part of our team
- Captivating talents who work in an agile AI environment and want to contribute their own ideas
- Inspiring and self-confident people with a positive attitude
- Pleasant people with a keen sense of quality, organizational talent, and a high level of commitment
- Self-sufficient colleagues with an independent way of working
How you can help us
- Be responsible for implementing and administering high-performance computing solutions to scale our products for our growing customer base
- Be tightly integrated into our team to plan and define end-to-end solutions consisting of hardware and operating system technologies
- Build and maintain our data pipelines, which run on-premises and in the cloud
- Polish our data security and user management services
- Manage our on-premises and cloud IT services
How you can convince us
- Vast experience as a Data Engineer or in a similar position, preferably in a startup or another fast-growing environment
- Hands-on proficiency in Python (3.9+), especially with modules dealing with concurrency, such as threading, multiprocessing, and asyncio
- Profound understanding of container technology such as Docker, as well as administration and orchestration of container-based services
- Extensive experience with databases and large data volumes
- Solid understanding of Linux server operating systems and cloud services like AWS
- Deep knowledge of orchestration technologies such as Kubernetes and Luigi
- Strong communication skills in English and the ability to work in a highly motivated team
If you want to make a difference with us, apply here. Please attach your résumé, including your GitHub link or other code samples, references, and any additional information you consider useful, as a single PDF file.
*At rabbitAI, we believe in equal opportunities for everyone. We firmly reject discrimination and instead advocate inclusion. Because for us, the individual counts – not gender, sexual identity, disability, age, origin, or religion!