
Current Openings

Join us today to unlock possibilities

AWS Data Engineer

Mumbai & Pune

Experience: Minimum 3 years

Education: BS degree in Computer Science, Data Engineering, or similar

Employment Type: Regular/Full Time

Job Description & Summary

We are looking for a Data Engineer to design AWS data ingestion and transformation pipelines based on the specific needs driven by Product Owners and user stories. The candidate should have strong knowledge of and interest in big data technologies, along with a background in data engineering. The candidate will work directly with senior data engineers, product owners, and customers to deliver data products in a collaborative, agile environment, and will continuously integrate and ship code into our cloud production environments.

Desired Skills & Experience

  • Building the Data Lake using AWS technologies like S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, EMR
  • Developing sustainable, scalable and adaptable data pipelines
  • Operationalizing data pipelines to support advanced analytics and decision making
  • Building data APIs and data delivery services to support critical operational and analytical applications
  • Leveraging capabilities of Databricks Lakehouse functionality as needed to build Common/Conformed layers within the data lake
  • Contributing to the design of robust systems with an eye on the long-term maintenance and support of the application
  • Leveraging reusable code modules to solve problems across the team and organization
  • Handling multiple functions and roles across projects and Agile teams
  • At least 2-3 years' experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
  • At least 1 year's experience with the following big data framework concerns: file formats (Parquet, Avro, ORC), resource management, distributed processing
  • At least 2-3 years' experience developing applications with monitoring, build tools, version control, unit testing, TDD, and change management to support DevOps
  • At least 2 years' experience with SQL and shell scripting
  • At least 1-2 years' experience with Spark programming (PySpark or Scala)
  • At least a year's experience with Databricks implementations
  • Familiarity with the concepts of “delta lake” and “lakehouse” technologies
  • At least 1 year's experience designing, building, and deploying production-level data pipelines using tools from the Hadoop stack; comfortable developing applications that use tools such as Hive/Impala, HBase, Oozie, Spark, NiFi, Apache Beam, and Apache Airflow
  • At least a year's experience with MS Azure, Amazon Web Services (AWS), Google Compute, or another public cloud service
  • At least 1 year's experience working with streaming using Spark, Flink, or Kafka
  • Intermediate experience/knowledge in at least one scripting language (Python, Perl, JavaScript)
  • Some experience implementing open-source frameworks and exposure to various open-source and packaged software architectures (Elasticsearch, Spark, Scala, Splunk, Jenkins, etc.)


  • Self-paced learning pursuits to stay current with technology trends in the data engineering domain
  • Intermediate experience in a Data Pipeline Development role; demonstrated strong execution capabilities


Java Developer / Senior Developer


C/C++/Linux Developer / Senior Developer

Are you ready for your NSEIT journey?
