AI Engineer

heva

Software Engineering, Data Science
Bengaluru, Karnataka, India
Posted on Oct 6, 2025

Link to Job Description

https://heva.notion.site/AI-Engineer-Bengaluru-IN-2804d306509080108a6fe562bb8b3c09?source=copy_link

Company Overview

heva is an AI practice management platform for doctors serving medical tourists. Founded by product and technical leaders from Apple, Google, Stryker, and Harvard, heva is scaling quickly and eager to grow its team.

We are backed by top-tier investors, including Flybridge Capital, Benchstrength, Collide Capital, Spice Capital, Bharat Biotech, and The MBA Fund.

Join us to reshape the future of global healthcare.

Role Overview

As an AI Engineer at heva, you will architect, build, and maintain the intelligent backbone of our conversational systems. You’ll work at the cutting edge of Generative AI, leveraging tools like LangChain, LangGraph, and LangSmith to create scalable, multi-agent conversational flows that deliver real utility to patients and providers across borders. This is a backend-heavy role focused on building memory-rich agents using LLMs, vector databases, and event-driven architectures.

You will spend 80% of your time developing and optimizing AI agent pipelines, and 20% collaborating across AI research, product, and engineering teams to improve outcomes, reliability, and speed of deployment.

We’re looking for someone deeply familiar with GenAI fundamentals — long-term vs short-term memory, retrieval-augmented generation (RAG), embedding stores, multi-agent orchestration — and experienced in deploying these systems in production environments using Python and modern cloud infrastructure.

Responsibilities

  • Architect, develop, and maintain conversational AI agents using LangChain, LangGraph, and LangSmith.
  • Integrate and fine-tune LLMs from OpenAI, Anthropic, and other providers to support dynamic workflows.
  • Design and implement memory strategies (short-term, long-term, episodic) to support continuous, context-rich user interactions.
  • Build RAG pipelines with Pinecone, Cohere, PostgreSQL, and Redis.
  • Manage and evolve a multi-agent architecture (MCP) that coordinates autonomous agents.
  • Optimize agent performance, latency, token usage, and prompt structure for real-time applications.
  • Develop backend services in Python for orchestration, data flow, and API integrations.
  • Collaborate with product, backend, and frontend engineers to connect agent outputs to user-facing experiences.
  • Ensure system robustness and observability through proper logging, tracing, and testing workflows.
  • Stay up to date with emerging research and GenAI frameworks; help guide heva’s AI roadmap.

Qualifications

  • 5+ years of backend engineering experience, with at least 2 years focused on LLMs or AI agent systems.
  • Strong Python engineering skills and experience with event-driven, asynchronous architectures.
  • Deep understanding of GenAI concepts: embeddings, memory types, RAG, and prompt engineering.
  • Production-level experience with LangChain, LangGraph, or similar orchestration frameworks.
  • Familiarity with OpenAI, Anthropic, Cohere, or similar foundation model APIs.
  • Experience with vector databases like Pinecone and data stores like PostgreSQL and Redis.
  • Knowledge of multi-agent systems and coordination strategies (MCP or similar patterns).
  • Strong problem-solving, debugging, and architectural decision-making skills.
  • Clear communicator and collaborator with cross-functional teams.
  • Advanced English proficiency (C1 or C2).