• Transamerica
  • Denver, CO

Job Description

We are looking for Data/Machine Learning engineers at all levels to help us build a robust, scalable data platform that supports AI/ML data pipelines, reporting, and data analysis as our business scales. We use cloud-native (AWS) cutting-edge technologies like Spark, Kinesis/Kafka streaming, Graph, infrastructure as code, and CI/CD to deliver high-quality data solutions to analysts, data scientists, and partners. We're looking for an engineer who takes ownership of their work, has a strong focus on quality, and enjoys working in a collaborative environment.

At Transamerica, we believe achieving a secure future requires both smart financial planning and a healthy lifestyle. We're using data science, machine learning, computer vision, natural language processing, and IoT to revolutionize the way our customers save, invest, protect, and retire, and to help them develop better wellness habits. As part of the Data Engineering team in our Analytics Execution group, you will work with data scientists and analytics engagement managers to develop innovative data-based solutions that transform the way we do business.

Required Qualifications


• Solid understanding of data engineering, integration, and warehousing concepts and patterns
• Experience designing, building, and maintaining batch and streaming data solutions at scale in both on-premises and cloud environments, specifically in the Hadoop ecosystem
• Proficiency with Linux operations and development, including basic commands and shell scripting
• Demonstrated experience with DevOps methodologies and continuous integration/continuous delivery practices
• Fluency in Python, R, and Java
• Excellent command of SQL
• Strong experience and knowledge of data modeling concepts
• A passion for data science and machine learning, with a strong desire to develop your analysis and modeling skills

Preferred Qualifications

• 3–5 years of experience building productionalized data pipelines
• Strong experience ingesting large volumes of structured and unstructured data in both streaming and batch ingestion patterns
• 2–4 years of cloud development experience with the AWS and/or Azure stack
• Solid experience with statistical analysis and machine learning libraries
• Previous experience with NoSQL database implementations
• Understanding of the fundamentals of lambda architectures and serverless applications
• Proficiency in Tableau
• Comfort leveraging ETL tools such as Informatica
• Proficiency in Scala or Node.js
• A master's degree in a quantitative field

Responsibilities

• Partner with data scientists, analytics engagement managers, and other data engineers to discover, collect, cleanse, and refine the data needed for analysis and modeling
• Analyze large data sets to extract actionable insights and inform experimental design and model development
• Design robust, reusable, and scalable data-driven solutions and data pipeline frameworks to automate the ingestion, processing, and delivery of both structured and unstructured batch and real-time streaming data
• Build models using basic statistical and machine learning techniques, partnering with data scientists for education and guidance

Working Conditions

Office environment