Job Title: Senior Data Engineer - Spark, Scala, Databricks
Remote (EST or CST time zone required)
LinkedIn profile is a must
Experience: 7+ years
Engagement Description
• Code development, code maintenance, code promotions
• Leading or assisting application solution design
Required Skills/Experience
• Knowledge of HTML, JavaScript, .NET/C#, Java, JSON, XML, or similar languages and data formats
• Hadoop development experience (Scala/Spark/Hive)
• Prior Java/UI development experience
Required Qualifications:
• BTech in computer science or a similar field with a minimum of 5 years' experience in data engineering
• Experience in ETL/pipeline development using tools such as Azure Databricks/Apache Spark and Azure Data Factory, with development expertise in batch and real-time data integration
• Experience in programming using Scala or Python
• Experience writing stored procedures
• Experience in data ingestion, preparation, integration, and operationalization techniques to optimally address data requirements
• Experience with cloud data warehouses such as Azure Synapse and Snowflake
• Experience with orchestration tools, Azure DevOps, and GitHub
• Experience building end-to-end architecture for data lakes, data warehouses, and data marts
• Experience with relational data processing technologies such as MS SQL, Delta Lake, and Spark SQL
• Experience owning end-to-end development, including coding, testing, debugging, and deployment
• Extensive knowledge of ETL and data warehousing concepts, strategies, and methodologies
• Familiarity with Azure services such as Azure Functions, Azure Data Lake Store, and Azure Cosmos DB
• Familiarity with Healthcare business data models
• Ability to provide solutions that are forward-thinking in data and analytics
• Must be team oriented with strong collaboration, prioritization, and adaptability skills
• Excellent written and verbal communication skills, including presentation skills