🌐 Welcome to Tanit AI

Tanit AI is a healthcare technology startup pioneering neuro-symbolic AI to transform fertility and reproductive medicine. In a world where 1 in 6 people are affected by infertility, our mission is to elevate fertility practice for doctors and empower patients in their parenthood journey.

➡️ Mission

At Tanit AI, our mission is to leverage advanced artificial intelligence to educate and support individuals in their fertility journey.

We strive to provide accessible, personalized information and guidance to help people make informed decisions about their reproductive health.

➡️ Vision

Our vision is to be a global leader in AI-driven fertility education, fostering a future where technology and human understanding converge to address critical reproductive health challenges. We aim to empower individuals worldwide with the knowledge and tools needed to navigate their fertility journey with confidence.

Role Description

We are seeking a passionate Data / Bioinformatics Engineer (Knowledge Graph) intern to design and develop a large-scale medical knowledge graph that integrates multiple biomedical ontologies through public APIs and open data sources.

This knowledge graph will form the foundation for medical reasoning, semantic search, and decision-support systems, powering explainable AI capabilities within Tanit AI’s assistant. The intern will work with graph databases (Neo4j) and ontology data models to create an intelligent knowledge infrastructure ready for next-generation GraphRAG (graph + retrieval-augmented generation) applications.
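To give a concrete flavour of the GraphRAG pattern mentioned above, here is a minimal sketch of the retrieval half: pulling the neighborhood of a matched concept out of Neo4j and flattening it into context for a language model. The `Concept` label, the `label` property, and the connection details are illustrative assumptions, not Tanit AI’s production schema.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

# Connection details are placeholders for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def fetch_graph_context(concept_label: str, limit: int = 50) -> str:
    """Return the 1-hop neighborhood of a concept as plain-text triples,
    ready to be injected into a retrieval-augmented generation prompt."""
    query = (
        "MATCH (c:Concept {label: $label})-[r]-(n:Concept) "
        "RETURN c.label AS source, type(r) AS relation, n.label AS target "
        "LIMIT $limit"
    )
    with driver.session() as session:
        result = session.run(query, label=concept_label, limit=limit)
        return "\n".join(
            f"{rec['source']} -[{rec['relation']}]-> {rec['target']}" for rec in result
        )

# The returned triples would be appended to an LLM prompt; the generation
# step itself is omitted here.
print(fetch_graph_context("female infertility"))
driver.close()
```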

Key Responsibilities

Ontology integration & data modeling

🧬 Ingest and harmonize biomedical ontologies and terminologies using open APIs and datasets.

πŸ—ΊοΈ Map entities, relationships, and hierarchies into a unified medical graph schema.

Graph design & implementation

🧩 Model medical concepts and relationships using Neo4j or equivalent graph databases.

πŸ—οΈ Define graph schemas, metadata standards, and semantic relationships for reasoning and retrieval.

Data pipelines & automation

βš™οΈ Develop robust ETL pipelines to fetch, transform, and update ontology data at scale.

📚 Ensure reproducibility, versioning, and provenance tracking for all imported sources.
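One lightweight way to meet the provenance requirement, sketched below under the assumption of file-based snapshots, is to store each raw download alongside a small manifest recording its source URL, retrieval time, and content hash.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_provenance(source_url: str, raw_bytes: bytes, out_dir: Path) -> Path:
    """Write the raw download next to a provenance manifest so every graph
    load can be traced back to an exact source snapshot."""
    out_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(raw_bytes).hexdigest()
    data_path = out_dir / f"{digest[:12]}.raw"
    data_path.write_bytes(raw_bytes)
    manifest = {
        "source_url": source_url,
        "sha256": digest,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(raw_bytes),
    }
    (out_dir / f"{digest[:12]}.provenance.json").write_text(json.dumps(manifest, indent=2))
    return data_path

# Usage (URL is a placeholder):
# record_provenance("https://example.org/hp.obo", downloaded_bytes, Path("snapshots"))
```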

Required Skills

🐍 Strong Python skills (data processing, APIs, automation).

🧪 Strong understanding of biomedical ontologies.

🌐 Familiarity with graph databases and knowledge graph modeling.