Job Duties:
- Collaborate with the PMO team on product backlog grooming, sprint planning, execution, review, and retrospective.
- Analyze and translate business and technical requirements into user stories.
- Design and develop programs, pipelines, and scripts to extract, enrich, transform, and analyze data, and to move and load data from various business systems into cloud data warehouses and operational data stores.
- Utilize the Databricks environment to build batch and streaming pipelines.
- Use Snowflake streams to perform change data capture (CDC) and run continuous incremental loads into Snowflake (see the Snowflake sketch following this list).
- Use the Delta file format to enable merge/update operations on big data systems (see the Delta merge sketch below).
- Use Azure cloud services including ADF, Storage Accounts (ADLS Gen2, Blob), Event Hubs, App Services, AKS, and Databricks to perform data curation and build data applications.
- Utilize AWS cloud services including S3, IAM, EC2, and MSK, together with Snowflake, to build a distributed cloud data warehouse.
- Use tools including Airflow and the Databricks job scheduler to schedule data pipelines (see the Airflow sketch below).
- Build container images using Docker; orchestrate their execution in the cloud using Kubernetes (K8s) and deploy the services.
- Use Python Flask to develop APIs that meet business requirements (see the Flask sketch below).
- Build a data validation framework to verify data quality (see the validation sketch below).
- Participate in back-end development practices, including code reviews, code tuning, improvements, load balancing, usability, and automation.
- Create and perform User Acceptance Testing to validate deliverables against business requirements.
- Create user manuals, knowledge management articles, and runbooks to support frictionless operations in production.
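A minimal sketch of the Snowflake streams CDC pattern named in the duties, using the snowflake-connector-python library. The connection parameters and the object names (RAW_ORDERS, ORDERS_STREAM, ORDERS, and their columns) are hypothetical placeholders, not details from this posting.

    import snowflake.connector

    # Connect to Snowflake (all credentials below are placeholder assumptions).
    conn = snowflake.connector.connect(
        account="my_account",   # hypothetical account identifier
        user="etl_user",        # hypothetical user
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    cur = conn.cursor()

    # A stream records inserts/updates/deletes on the source table (CDC).
    cur.execute("CREATE STREAM IF NOT EXISTS ORDERS_STREAM ON TABLE RAW_ORDERS")

    # Consuming the stream in a DML statement advances its offset, so each
    # run merges only the rows changed since the last run: a continuous
    # incremental load rather than a full reload.
    cur.execute("""
        MERGE INTO ORDERS AS t
        USING ORDERS_STREAM AS s
          ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED AND s.METADATA$ACTION = 'DELETE' THEN DELETE
        WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS, t.UPDATED_AT = s.UPDATED_AT
        WHEN NOT MATCHED AND s.METADATA$ACTION = 'INSERT' THEN
          INSERT (ORDER_ID, STATUS, UPDATED_AT) VALUES (s.ORDER_ID, s.STATUS, s.UPDATED_AT)
    """)
    conn.commit()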
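The Delta merge/update duty could look like this PySpark sketch, assuming a Spark session with the delta-spark package available (on Databricks, the session and Delta support are provided). The table paths and the customer_id join key are hypothetical.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

    # Incoming batch of changed records (hypothetical landing path).
    updates = spark.read.parquet("/mnt/landing/customers_changes")

    # Upsert into the Delta table: update rows whose keys match,
    # insert rows that are new. Delta makes this atomic on big data.
    target = DeltaTable.forPath(spark, "/mnt/curated/customers")
    (target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())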
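Pipeline scheduling of the kind described might be expressed as an Airflow DAG like the following sketch (Airflow 2.x API); the dag_id, schedule, and script path are illustrative assumptions.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A daily-scheduled DAG wrapping one curation step (hypothetical names).
    with DAG(
        dag_id="curation_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        run_curation = BashOperator(
            task_id="run_curation",
            bash_command="python /opt/pipelines/curate.py",  # hypothetical script
        )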
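For the Python Flask APIs, a minimal sketch of a service endpoint; the route, the in-memory RUNS store, and the run names are invented for illustration only.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical in-memory store of pipeline run statuses.
    RUNS = {"daily_curation": "succeeded"}

    @app.route("/runs/<name>", methods=["GET"])
    def get_run(name):
        # Return the status of a named run, or 404 if unknown.
        status = RUNS.get(name)
        if status is None:
            return jsonify(error="unknown run"), 404
        return jsonify(name=name, status=status)

    if __name__ == "__main__":
        app.run(port=8080)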
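A data validation framework could be as simple as named checks applied to a DataFrame, as in this PySpark sketch; the Check structure and the not_null rule are assumptions, not the framework referenced in the duties.

    from dataclasses import dataclass
    from typing import Callable, List
    from pyspark.sql import DataFrame
    import pyspark.sql.functions as F

    @dataclass
    class Check:
        """A named validation rule over a DataFrame."""
        name: str
        fn: Callable[[DataFrame], bool]

    def not_null(col: str) -> Callable[[DataFrame], bool]:
        # Rule: the column contains no NULL values.
        return lambda df: df.filter(F.col(col).isNull()).count() == 0

    def run_checks(df: DataFrame, checks: List[Check]) -> List[str]:
        """Return the names of the checks that failed."""
        return [c.name for c in checks if not c.fn(df)]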
Work Location: Various unanticipated work locations throughout the United States; must be willing to relocate.
Minimum Requirements:
Education: Master's degree in Computer Science, or in Computer Information Systems and Information Technology
Experience: One (1) year