
Big Data Developer

Cognizant
Full-time
On-site
New South Wales
IT Infrastructure

Position Summary:

We are seeking a highly skilled and experienced Big Data Developer to join our engineering team. The ideal candidate will have strong expertise in building and maintaining robust, scalable, high-performance big data solutions using modern big data tools and technologies. You will be responsible for developing data pipelines, transforming large volumes of data, and optimizing data infrastructure to enable real-time and batch processing for business insights and analytics.

Mandatory Skills:

· Design, develop, and maintain scalable and secure big data processing systems using tools such as Apache Spark, Hadoop, Hive, and Kafka.

· Build and optimize data pipelines and architectures for the ingestion, processing, and storage of data from diverse sources (structured, semi-structured, and unstructured).

· Develop ETL/ELT workflows and data transformation logic using custom code in Python, Scala, and Java, along with job scheduling tools such as Control-M.

· Translate business needs and requirements into technical specifications; develop application strategy and high-level design documentation.

· Participate in solution design discussions, competitive displacements, proof-of-concept engagements, solution demonstrations, and technical workshops.

· Manage development sprints effectively and efficiently, from planning through execution and review, in an agile development environment.

· Convert business knowledge into technical knowledge and vice versa, and translate those insights into effective frontline action.

· Build and manage end-to-end big data streaming and batch pipelines per business requirements.

· Report issues and blockers to management and customers promptly to mitigate risk.

· Collaborate with other teams, analysts, and stakeholders to understand data requirements and deliver efficient data solutions.

· Ensure data quality, consistency, and security by implementing appropriate data validation, logging, monitoring, and governance standards.

· Tune and monitor the performance of data systems, jobs and queries.

· Integrate and manage data platforms in the Azure cloud environment.

· Mentor junior developers and support team best practices in code review, version control, and testing.

· Document system designs, data flows, and technical specifications.

· Proactively identify and eliminate impediments and facilitate flow.

· Experience with project management software (e.g., JIRA, Manuscript) to support task estimation and execution.

· Participate in engineering development projects and facilitate sprint releases.

· Create reusable assets and supporting artifacts based on the industry landscape.

Mandatory Knowledge & Qualifications:

· Bachelor’s or Master’s degree in computer science, data engineering, information technology, or a related field.

· 4+ years of experience in Big Data technologies.

· Experience with template-oriented data validation and cleansing frameworks.

· Strong programming skills in Python, Scala, Java, and shell scripting.

· Extensive experience in data transformation and building ETL pipelines using PySpark.

· Strong knowledge of performance improvement for existing big data streaming and batch pipelines.

· Proficiency in big data tools and frameworks such as Apache Spark, Hadoop, Kafka, and Hive.

· Strong proficiency in designing and developing generic ingestion and streaming frameworks that can be leveraged by new initiatives and projects.

· Experience with NoSQL databases such as HBase and Azure Cosmos DB.

· Experience optimizing high-volume data loads in Spark.

· Deep knowledge of the Azure cloud platform.

· Experience in identifying complex data lineage using Apache Atlas.

· Strong knowledge of creating, scheduling and monitoring jobs using Control-M.

· Strong knowledge of data analysis, Natural Language Processing (NLP), Machine Learning (ML) techniques, and data visualization.

· Deep understanding of the data governance tool Alex.

· Strong understanding of the Banking and Financial Services (BFS) domain is mandatory.

· Build predictive models and machine-learning algorithms.

· Present information using data visualization techniques.

Required Skills & Technology Stack:

Primary Skills:

• Hadoop – Expert

• Apache Spark – Expert

• Apache Atlas – Expert

• Python – Expert

• Scala – Expert

• Shell Script – Expert

• PySpark – Expert

• scikit-learn – Expert

• Matplotlib – Expert

• Azure Storage Explorer – Expert

Database Skills:

• HBase – Expert

• Hive – Expert

• Oracle – Expert

• SQL Server – Expert

• MySQL – Expert

• Cosmos DB – Expert

Other Skills:

• NLP – Expert

• ML – Expert

• Data Science – Expert

• Data Visualization – Expert

Tools:

• PyCharm/VSCode – Advanced

• Control-M – Advanced

• Code Versioning Tool – Advanced

• Project Management tools (JIRA, Confluence, ServiceNow) – Advanced

• MS Visio – Expert

• HP ALM/QC – Expert

Salary Range: $75,000-$85,000

Date of Posting: 25 August 2025

Next Steps: If you feel this opportunity suits you, or Cognizant is the type of organization you would like to join, we want to have a conversation with you! Please apply directly with us.

For a complete list of open opportunities with Cognizant, visit http://www.cognizant.com/careers. Cognizant is committed to providing Equal Employment Opportunities. Successful candidates will be required to undergo a background check.

#LI-CTSAPAC