ATS

Create the future with us

Current Openings

Job Title: Lead Data Scientist

Location: New Jersey, USA (Hybrid)

Experience: 12+ Years

Job Type: Full-time / Contract-to-Hire

Domain: BFSI | Healthcare | Retail | Insurance | Telecom

Job Summary:

We are seeking a seasoned Lead Data Scientist with 12+ years of end-to-end experience in data science, machine learning, AI integration, and advanced analytics to join our enterprise data team in New Jersey. The ideal candidate brings a proven track record of driving data-centric decision-making, designing scalable AI/ML models, and mentoring cross-functional data teams to build production-ready solutions.

Key Responsibilities:

Lead and architect large-scale data science solutions using Python, R, and SQL in cloud-native environments (Azure, AWS, or GCP).

Develop, train, and deploy predictive models for classification, regression, NLP, computer vision, and recommendation systems.

Collaborate with business stakeholders to translate use cases into data-driven strategies and measurable KPIs.

Implement MLOps best practices, version control, model monitoring, and continuous integration using tools like MLflow, DVC, and GitHub Actions (a minimal MLflow sketch follows this list).

Design and optimize data pipelines for model training and inference using Apache Spark, Databricks, and Airflow.

Utilize deep learning frameworks such as TensorFlow and PyTorch for advanced AI solutions.

Conduct A/B testing, hypothesis testing, and causal inference for experimentation and optimization (see the z-test sketch after this list).

Perform feature engineering and data wrangling on large datasets using pandas, NumPy, and PySpark.

Lead initiatives in Responsible AI, model explainability, and bias mitigation using SHAP, LIME, and fairness toolkits.

Present findings to CXO-level stakeholders with compelling visualizations (Power BI / Tableau).

Manage and mentor a team of junior and mid-level data scientists and data engineers.

Ensure data governance and compliance with frameworks such as HIPAA, GDPR, and SOC 2.

Collaborate with DevOps and data engineering teams to scale models in production.
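
To give a flavor of the MLOps work above, here is a minimal, illustrative MLflow tracking sketch. The experiment name, model, and dataset are placeholders, not a prescribed stack:

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Any tabular classification data works here; this is a public demo dataset.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name

    with mlflow.start_run():
        params = {"n_estimators": 200, "max_depth": 6}
        model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        mlflow.log_params(params)                 # record hyperparameters
        mlflow.log_metric("test_auc", auc)        # record the evaluation metric
        mlflow.sklearn.log_model(model, "model")  # version the trained artifact

And because A/B testing is routine in this role, a self-contained two-proportion z-test needs only the standard library (the conversion counts below are made up):

    import math

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value via the normal CDF: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
        return z, math.erfc(abs(z) / math.sqrt(2))

    # Hypothetical experiment: 4.0% vs 4.6% conversion on 10,000 users per arm.
    z, p = two_proportion_ztest(400, 10_000, 460, 10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")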

Required Skills:
Programming: Python, R, SQL, PySpark

AI/ML: Scikit-learn, TensorFlow, PyTorch, XGBoost

Big Data: Hadoop, Spark, Hive, Databricks

Cloud: Azure ML, AWS SageMaker, GCP Vertex AI

MLOps: MLflow, DVC, Airflow, Docker, Kubernetes

Visualization: Power BI, Tableau, Matplotlib, Seaborn

Version Control & CI/CD: Git, GitHub Actions, Azure DevOps

Databases: PostgreSQL, MongoDB, Snowflake, Azure Synapse

Certifications Preferred:

Microsoft Certified: Azure Data Scientist Associate

Google Professional Machine Learning Engineer

AWS Certified Machine Learning – Specialty

Nice to Have:
Experience with GenAI/LLM implementations (OpenAI, LangChain)

Contributions to data product design, microservices, or ML APIs

Involvement in Data Mesh / Data Fabric architectures

Job Title: Principal Data Scientist / AI & Analytics Leader

Location: New Jersey, USA (Onsite/Hybrid)

Experience Required: 12+ Years

Employment Type: Full-time / Long-term Contract

About the Role:
We are looking for a visionary Data Science Leader who brings a rare blend of hands-on technical expertise, strategic thinking, and team leadership to help solve complex business problems using data, AI, and analytics. This role demands excellence in architecting scalable machine learning solutions, leading AI innovation, and aligning technical outcomes with real-world business value.

Core Responsibilities:
Architect end-to-end AI/ML pipelines, from raw data to actionable insights, deploying models in scalable production environments.

Lead multi-disciplinary teams of data scientists, ML engineers, and analysts, ensuring timely delivery of high-impact projects.

Drive the adoption of AI/ML in business units, transforming traditional processes in domains like insurance underwriting, patient risk modeling, fraud detection, and demand forecasting.

Integrate advanced analytics with enterprise data lakes, cloud-native platforms (Azure, AWS, GCP), and business intelligence tools.

Define and govern the ML model lifecycle, including data collection, labeling, training, retraining, monitoring, and drift detection (a drift-check sketch follows this list).

Champion Responsible AI by implementing fairness, interpretability, and governance frameworks.

Present findings to executive leadership with an emphasis on ROI, efficiency gains, and strategic advantages.

Implement best-in-class data engineering and ML infrastructure using Spark, Airflow, Docker, and Kubernetes.

Collaborate with IT, product, and compliance teams to ensure secure, scalable, and compliant AI deployments.
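
As one concrete instance of the drift detection named above, a population stability index (PSI) check needs nothing beyond NumPy; the 0.2 alert threshold is a common rule of thumb, not a universal standard:

    import numpy as np

    def psi(expected, actual, bins=10):
        """PSI between training-time and live score distributions (NumPy arrays)."""
        # Bin edges come from the reference (training) distribution.
        edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
        edges[0] = min(edges[0], actual.min()) - 1e-9    # widen the outer edges to
        edges[-1] = max(edges[-1], actual.max()) + 1e-9  # catch out-of-range live values

        e_frac = np.histogram(expected, edges)[0] / len(expected)
        a_frac = np.histogram(actual, edges)[0] / len(actual)

        # Clip empty bins to avoid log(0) and division by zero.
        e_frac = np.clip(e_frac, 1e-6, None)
        a_frac = np.clip(a_frac, 1e-6, None)
        return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 50_000)  # stand-in for training-time scores
    live_scores = rng.normal(0.3, 1.1, 50_000)   # stand-in for drifted production scores
    print(f"PSI = {psi(train_scores, live_scores):.3f}")  # > 0.2 usually warrants review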

Key Technologies and Tools:
Languages: Python, Scala, R, SQL

ML/DL: Scikit-learn, LightGBM, CatBoost, TensorFlow, PyTorch

Big Data & ETL: Apache Spark, Kafka, Azure Data Factory, Snowflake

MLOps: MLflow, Kubeflow, Airflow, GitHub Actions, Jenkins

Visualization: Power BI, Tableau, Plotly

Databases: PostgreSQL, MongoDB, Cosmos DB

Cloud Ecosystems: Azure (preferred), AWS, GCP

DevOps Integration: Docker, Kubernetes, Terraform

Preferred Qualifications:
Experience in domain-specific AI: Healthcare AI, Financial Risk Modeling, Customer 360, Retail Personalization

Strong grasp of statistical modeling, optimization algorithms, and causal inference

Exposure to GenAI platforms (OpenAI, Hugging Face, LangChain) for conversational analytics or intelligent automation

Certified in one or more:

Azure AI Engineer Associate

AWS Certified Machine Learning – Specialty

Google Cloud Professional Data Engineer

Why Join Us?

Work with Fortune 500 clients across BFSI, Healthcare, E-Commerce, and Manufacturing

Lead a cutting-edge AI CoE (Center of Excellence)

Opportunity to mentor, innovate, and drive real digital transformation

Job Title: Senior Data Engineer – Python Specialist

Location: New Jersey, USA (Hybrid / Onsite Flexibility)

Experience: 12+ Years

Employment Type: Full-time / Contract-to-Hire

Industry Domains: Insurance | Banking | Healthcare | Retail | Logistics

Job Summary:
We are seeking a highly experienced Senior Data Engineer with deep proficiency in Python-based data engineering, cloud integration, and distributed data pipeline development. The ideal candidate will have a proven history of designing scalable data solutions in modern cloud ecosystems (Azure, AWS, GCP) and delivering high-performance ETL/ELT systems in production environments.

Key Responsibilities:
Design and develop complex ETL/ELT pipelines using Python and orchestration tools such as Apache Airflow or Azure Data Factory (an illustrative Airflow DAG follows this list).

Engineer robust data ingestion frameworks from diverse sources (APIs, flat files, relational/NoSQL DBs, streaming data).

Work with structured, semi-structured, and unstructured data to build curated, trusted datasets for analytics, ML, and reporting use cases.

Optimize data pipelines for scalability, cost-efficiency, and performance, leveraging partitioning, indexing, and caching techniques.

Build reusable data components and frameworks for automated data validation, logging, and alerting.

Collaborate with Data Scientists and Analysts to ensure data availability and quality for model training and visualization.

Implement data governance policies including data lineage, metadata management, and access control.

Deploy and manage solutions on cloud platforms like Azure Synapse, AWS Redshift, Google BigQuery, or Snowflake.

Apply DevOps practices to data engineering using Git, Docker, CI/CD pipelines, and IaC tools such as Terraform.

Build real-time streaming pipelines using Kafka, Spark Structured Streaming, or Kinesis where required (see the Kafka producer sketch below).
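
To make the orchestration expectation concrete, here is a minimal, hypothetical Airflow 2.x DAG; the task bodies are stubs and every name is illustrative:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling raw records from the source system...")  # stub

    def transform():
        print("cleaning and conforming the raw records...")     # stub

    def load():
        print("writing curated rows to the warehouse...")       # stub

    with DAG(
        dag_id="daily_sales_etl",        # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task  # linear dependency chain

On the streaming side, a kafka-python producer can be equally small; the broker address and topic below are placeholders:

    import json
    import time

    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",  # placeholder broker address
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Emit a few fake click events to a hypothetical topic.
    for i in range(5):
        event = {"user_id": i, "action": "click", "ts": time.time()}
        producer.send("clickstream-events", value=event)

    producer.flush()  # block until all buffered records are delivered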

Technical Skillset:
Programming: Python (Pandas, PySpark, SQLAlchemy), SQL, Shell Scripting

ETL Tools: Airflow, Azure Data Factory, AWS Glue, dbt

Data Lakes & Warehouses: Azure Data Lake Gen2, Snowflake, Redshift, Synapse Analytics

Big Data: Apache Spark, Hadoop Ecosystem

Databases: PostgreSQL, MySQL, MongoDB, Cassandra

Cloud Platforms: Azure (preferred), AWS, GCP

Streaming Platforms: Apache Kafka, AWS Kinesis

CI/CD & DevOps: Git, Docker, Jenkins, GitHub Actions

Monitoring & Logging: Prometheus, Grafana, ELK Stack

Others: FastAPI, RESTful APIs

Preferred Certifications:
Microsoft Certified: Azure Data Engineer Associate

AWS Certified Big Data – Specialty or AWS Certified Data Analytics – Specialty

Google Cloud Certified: Professional Data Engineer

Key Soft Skills:
Strong problem-solving and debugging mindset

Excellent stakeholder communication and requirement translation skills

Experience leading small teams or mentoring junior data engineers

Ability to work independently on large-scale data initiatives

Job Title: Senior Multi-Cloud Engineer – Remote

Location: Remote (Anywhere in the USA)

Experience: 12+ Years

Employment Type: Full-time / W2 or C2C Contract

Domain: BFSI | Healthcare | Retail | Telecom | Logistics

About the Role
We are looking for a Senior Multi-Cloud Engineer with a deep understanding of the AWS, Azure, and GCP ecosystems to architect and manage hybrid-cloud environments for enterprise clients. The role involves designing secure, scalable, and automated cloud solutions while ensuring cost-efficiency, governance, and observability across platforms.

Key Responsibilities:
Architect, deploy, and manage cloud infrastructure across AWS, Azure, and GCP using Infrastructure-as-Code (Terraform, Bicep, ARM, CloudFormation).

Implement CI/CD pipelines using tools like Azure DevOps, GitHub Actions, GitLab CI, or Jenkins for multi-cloud deployments.

Design and manage Kubernetes clusters (AKS, EKS, GKE) and containerized applications (Docker, Helm).

Configure and manage networking, DNS, VPNs, load balancers, and firewalls across cloud environments.

Create and enforce multi-cloud security policies using tools like Azure Security Center, AWS GuardDuty, and GCP Security Command Center.

Implement cost optimization strategies using native cost analyzers and third-party platforms (a small boto3 audit sketch follows this list).

Monitor cloud environments using Prometheus, Grafana, ELK, CloudWatch, Azure Monitor, or Stackdriver.

Work with DevOps teams to define reusable patterns and modules, ensuring standardization and compliance.

Lead disaster recovery, backup, and high availability strategies across cloud providers.

Collaborate with developers, architects, and project managers to align cloud infrastructure with app needs and business goals.

Support cloud-to-cloud migration projects and hybrid infrastructure integrations.
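
As a taste of the day-to-day automation, the sketch below uses boto3 to flag unattached EBS volumes across regions, a typical first pass at cost cleanup; the region list and output format are illustrative:

    import boto3  # assumes AWS credentials are already configured

    def unattached_volumes(regions):
        """Yield (region, volume_id, size_gb) for EBS volumes attached to nothing."""
        for region in regions:
            ec2 = boto3.client("ec2", region_name=region)
            pages = ec2.get_paginator("describe_volumes").paginate(
                Filters=[{"Name": "status", "Values": ["available"]}]
            )
            for page in pages:
                for vol in page["Volumes"]:
                    yield region, vol["VolumeId"], vol["Size"]

    # Illustrative region list; in practice this comes from config or the account itself.
    for region, vol_id, size in unattached_volumes(["us-east-1", "us-west-2"]):
        print(f"{region}: {vol_id} ({size} GiB) is unattached and still billing")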

Key Technical Skills:
Cloud Platforms: AWS, Azure, GCP (Expert in at least two, working knowledge of the third)

IaC Tools: Terraform (multi-provider), ARM/Bicep, CloudFormation, Pulumi

Containers & Orchestration: Docker, Kubernetes (AKS/EKS/GKE), Helm

DevOps & CI/CD: Azure DevOps, GitHub Actions, Jenkins, GitLab CI

Security & IAM: Azure RBAC, AWS IAM, GCP IAM, HashiCorp Vault

Monitoring & Logging: CloudWatch, Azure Monitor, Stackdriver, ELK Stack, Datadog

Networking: VNet, VPC, Peering, Load Balancers, Private Link, Route Tables

Automation: Python, PowerShell, Bash scripting

Backup/DR: Azure Site Recovery, AWS Backup, GCP Snapshots

Cost Tools: AWS Cost Explorer, Azure Cost Management, GCP Pricing Calculator

Certifications Preferred:
AWS Certified Solutions Architect – Professional

Microsoft Certified: Azure Solutions Architect Expert

Google Professional Cloud Architect

CKA or CKS (Certified Kubernetes Administrator / Certified Kubernetes Security Specialist)

Why Join Us?

100% remote-first environment

Exposure to multi-industry cloud transformations

Opportunity to lead cloud modernization & migration programs

Competitive compensation, paid certifications, and flexible hours
