HEIDELBERG AI
REFERENCE LIBRARY

Templates worth stealing

Every template below is a real, working n8n.io workflow you can copy into your own space in two clicks. Each one is drawn in the same n8n-Agent shape we use in the course, so you can see at a glance which nodes are the trigger, the agent, the model, the memory, and the tools — and where your own ideas plug in.

How to use: read the drawing, read when to use, then click the link to open the template on n8n.io. Use the Import from URL button inside your n8n editor to paste it straight into a new workflow.

Agent foundations

AI Agent chat

💬 Chat Trigger [TRIGGER] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]

The top line is the main flow; the ⇢ lines are the Model, Memory, and Tools plugging into the Agent on its three input ports. This is the same shape n8n draws on its canvas.

What it does. The canonical agent template — a chat trigger wired straight into an AI Agent node with one chat model behind it. No memory, no tools. The single smallest thing that counts as an 'agent'.

When you'd use it. Starting point for any new idea. Copy it, add one tool or a memory node, and you've got your own thing. Great for testing that a credential works before layering on complexity.

Open on n8n.io →

Scrape and summarize webpages with AI

▶️ Manual / Webhook T… [TRIGGER] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🌐 HTTP Request [TOOL]

What it does. Fetch a webpage, hand the HTML to an AI Agent, get a clean summary back. The agent holds an HTTP Request tool it can call more than once (e.g. to follow links).

When you'd use it. A lab-news digest, summarising a pre-print's abstract, pulling protocol highlights from a supplier page. Anything where 'read a page and tell me the gist' is the whole task.

Open on n8n.io →

RAG — know your documents

Basic RAG chat

💬 Chat Trigger [TRIGGER] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🧵 Simple Memory [MEMORY]
  ⇢ 🗂️ Vector Store (Simp… [TOOL]

What it does. Two flows in one: (1) ingest documents into a vector store, (2) chat over them. The agent retrieves the top matching chunks and the model answers only from those.
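
The retrieve step can be sketched in a few lines of Python. The 4-number lists below are toy stand-ins for real embeddings, and `cosine` / `retrieve` are illustrative names, not n8n node internals; a real vector store does the same similarity ranking at scale.

```python
# Toy sketch of RAG's retrieve step: rank stored chunks by similarity
# to the query vector and keep only the best matches for the model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

chunks = {
    "Centrifuge at 4 C for 10 min": [0.9, 0.1, 0.0, 0.2],
    "Incubate overnight at 37 C":   [0.1, 0.8, 0.3, 0.0],
    "Store aliquots at -80 C":      [0.2, 0.1, 0.9, 0.1],
}

def retrieve(query_vec, k=2):
    # Rank every chunk by cosine similarity to the query; keep the top k.
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec), reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.15, 0.05, 0.2])  # a query vector "near" the first chunk
```

Only those top chunks reach the chat model, which is what "answers only from those" means in practice.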

When you'd use it. Your lab's SOPs, a folder of protocols, recent meeting notes — anything you want the AI to answer grounded in, not guess about. The 'hello world' of RAG.

🧵
Simple Memory — session key gotcha

The Simple Memory node needs a session key to separate different users' chat histories. When the Chat Trigger connects directly to the Agent you can leave the key mode on Connected Chat Trigger Node and n8n fills it in automatically. The moment you add any node between the Chat Trigger and the Agent (a Set node, a router, a webhook pre-processor), that auto-detection breaks — you must switch to Define below and enter: ={{ $('When chat message received').first().json.sessionId }}. This was the exact bug that broke Session 2 (cos2 voice-to-lab-ai) — intermediate nodes between trigger and memory made the key come through empty.

Open on n8n.io →

PDF-based RAG with OpenAI + Pinecone + Cohere reranking

📤 Form Upload [TRIGGER, ingest] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🗂️ Pinecone Vector St… [TOOL]
  ⇢ 🎯 Cohere Reranker [TOOL]

What it does. Upload PDFs through a form → chunk → embed → Pinecone. Chat side adds a Cohere reranker so the three chunks the model actually reads are the three best of twenty candidates, not just the three highest-scoring embeddings.
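
The retrieve-then-rerank idea in miniature: stage 1 is cheap embedding similarity (faked here with precomputed scores), stage 2 re-scores the shortlist with a stronger signal. Cohere's reranker plays stage 2 in the template; the term-overlap stub below is just an invented stand-in.

```python
# Two-stage retrieval: a wide, cheap first pass, then a narrow,
# careful second pass over only the shortlisted candidates.
def embedding_stage(candidates, n=20):
    # candidates: list of (text, embedding_score); keep the n best.
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:n]

def rerank_stage(query, candidates, k=3):
    # Re-score with a stronger (here: crude word-overlap) signal.
    q = set(query.lower().split())
    def overlap(text):
        return len(q & set(text.lower().split()))
    return sorted(candidates, key=lambda c: overlap(c[0]), reverse=True)[:k]

docs = [
    ("mouse cortex single-cell atlas", 0.91),
    ("yeast metabolic flux analysis", 0.90),      # high embedding score, wrong topic
    ("single-cell atlas of mouse retina", 0.88),
    ("grant budget spreadsheet tips", 0.87),
]
shortlist = embedding_stage(docs, n=4)
best = rerank_stage("single-cell atlas mouse", shortlist, k=2)
```

Note how the off-topic paper with the second-highest embedding score drops out after reranking — that is the whole point of the extra step.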

When you'd use it. Reading 30+ papers for a review and you want to chat over them. Reranking is the quiet step that separates 'cute demo' from 'I'd actually quote this in my thesis'.

Open on n8n.io →

Local chatbot with RAG (Ollama + Qdrant)

📤 Form Upload (PDFs) [TRIGGER, ingest] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ 🏠 Ollama Chat Model [CHAT MODEL]
  ⇢ 🧵 Simple Memory [MEMORY]
  ⇢ 🗂️ Qdrant Vector Stor… [TOOL]

What it does. Same RAG pattern, but the chat model runs on Ollama and the vectors sit in Qdrant — all on your own machine. No API keys, no cloud, no data leaving the building.

When you'd use it. When the documents contain unpublished data, patient information, or anything that can't leave institute hardware. Slower than the cloud version; zero data-leaving risk.

🧵 Simple Memory — session key gotcha: the note under Basic RAG chat applies here too.

Open on n8n.io →

Voice & inbox automation

Voice-to-email (Telegram + Whisper + Gmail)

✈️ Telegram Trigger [TRIGGER] → 🎙️ Whisper STT [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR] → 📤 Gmail Send [POST-PROCESS]
  ⇢ OpenAI Chat Model [CHAT MODEL]

What it does. Send a voice note to a Telegram bot → Whisper transcribes it → an AI Agent drafts a polished email → Gmail sends (or saves as draft). Same shape as your Session 02 cos2 workflow, different post-process.

When you'd use it. Replying to supervisor / collaborator emails while walking between the wet-lab and the desk. Talk once, Gmail draft sits there when you're back at the keyboard.

Open on n8n.io →

Smart email auto-responder

📥 Gmail Poll [TRIGGER] → 🔀 Classify (If) [PRE-PROCESS, router] → 🧠 AI Agent [ORCHESTRATOR] → 📤 Gmail Draft [POST-PROCESS]
  ⇢ OpenAI Chat Model [CHAT MODEL]

What it does. Polls Gmail every hour. An AI classifier tags each email (urgent / FYI / needs reply / spam). An AI Agent drafts a reply only for the 'needs reply' bucket. Human approves in a second step before sending.
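
The triage logic reads as a few lines of Python. The keyword rules below are invented placeholders for the template's AI classifier; the important property is that drafts are saved, never sent.

```python
# Classify-then-draft sketch: tag each mail, draft only for the
# "needs reply" bucket, and leave sending to a human.
def classify(subject):
    s = subject.lower()
    if "unsubscribe" in s or "winner" in s:
        return "spam"
    if s.endswith("?") or "please" in s:
        return "needs reply"
    if "urgent" in s:
        return "urgent"
    return "fyi"

def triage(subjects):
    drafts = []
    for subj in subjects:
        if classify(subj) == "needs reply":
            drafts.append(f"DRAFT re: {subj}")  # saved as a Gmail draft, not sent
    return drafts

queue = triage([
    "Can you share the protocol?",
    "URGENT: freezer alarm",
    "Newsletter - unsubscribe anytime",
])
```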

When you'd use it. A PI's inbox, a lab-manager alias, a shared student-help address. Keeps the Agent from auto-sending anything risky — the human stays in the loop.

Open on n8n.io →

Knowledge graph & GraphRAG

Academic knowledge graph from research papers (Neo4j)

📤 Form / Drive Trigg… [TRIGGER] → 📄 PDF Extract [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🕸️ Neo4j Write [TOOL]

What it does. Drop a batch of papers in; an AI Agent pulls out entities (authors, methods, species, datasets) and relationships, then writes them as nodes and edges into Neo4j. You end up with a queryable graph instead of a pile of PDFs.

When you'd use it. The moment a literature review gets big enough that a flat spreadsheet isn't enough — e.g. 'which labs used which antibodies on which mouse line'. Graph queries answer that in seconds; keyword search never will.

Open on n8n.io →

Chat with your PDFs via GraphRAG (no vector store)

📤 Form Upload [TRIGGER, ingest] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🧵 Simple Memory [MEMORY]
  ⇢ 🕸️ Graph Query Tool [TOOL]

What it does. Ingests documents into a graph instead of a vector store. The agent traverses the graph at query time — following entity → relation → entity hops — so answers can chain facts across papers, not just retrieve one chunk.
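
The hop-following can be sketched as a breadth-first search over triples. The entities and relations below are invented toy data, and `path` is an illustration of the traversal idea, not the template's actual query tool.

```python
# Toy graph traversal: follow entity -> relation -> entity edges
# to find what connects two terms across documents.
from collections import deque

triples = [
    ("GeneX", "encodes", "ProteinY"),
    ("ProteinY", "binds", "ReceptorZ"),
    ("ReceptorZ", "studied_in", "Paper2023"),
]

def neighbours(entity):
    for s, r, o in triples:
        if s == entity:
            yield r, o

def path(start, goal):
    # Breadth-first search; returns the list of hops, or None if unconnected.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, hops = queue.popleft()
        if node == goal:
            return hops
        for rel, nxt in neighbours(node):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + [(node, rel, nxt)]))
    return None

chain = path("GeneX", "Paper2023")
```

A chunk retriever could only hand back one of those facts; the traversal chains all three.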

When you'd use it. When 'show me the chunk that mentions X' isn't enough and you need 'what connects X to Y'. Shines for mechanism-of-action questions, pathway walks, 'who cites whom'.

🧵 Simple Memory — session key gotcha: the note under Basic RAG chat applies here too.

Open on n8n.io →

Custom reasoning agent with GraphRAG + ontology

💬 Chat Trigger [TRIGGER] → 📐 Ontology Loader [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🕸️ Graph Query Tool [TOOL]

What it does. Same idea as GraphRAG, but the graph is shaped by a hand-written ontology (your domain's types + allowed relations). The agent reasons inside those rules, which stops it from inventing relationships the domain doesn't permit.
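
An ontology check is a small gate in front of every graph write. The types and relations below are invented examples of what such a schema might contain, not the template's actual ontology.

```python
# Before a triple is written to the graph, confirm the relation is
# allowed between those two entity types.
ONTOLOGY = {
    ("Gene", "encodes", "Protein"),
    ("Protein", "interacts_with", "Protein"),
    ("Author", "wrote", "Paper"),
}
TYPES = {"TP53": "Gene", "p53": "Protein", "MDM2": "Protein"}

def is_allowed(subj, rel, obj):
    # Look up each entity's type, then check the typed triple.
    return (TYPES.get(subj), rel, TYPES.get(obj)) in ONTOLOGY

ok = is_allowed("TP53", "encodes", "p53")   # Gene encodes Protein: permitted
bad = is_allowed("TP53", "wrote", "p53")    # a Gene cannot author anything: rejected
```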

When you'd use it. Any domain with a real schema — medical, legal, regulatory, experimental protocols. When 'close enough' isn't good enough and the agent must respect that Gene → encodes → Protein but never Gene → authored-by → Protein.

Open on n8n.io →

Document parsing & OCR

AI research agents with Mistral OCR

📤 Form / Drive Trigg… [TRIGGER] → 🔍 Mistral OCR [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR] → 📝 Notion / Doc Write [POST-PROCESS]
  ⇢ Mistral Chat Model [CHAT MODEL]

What it does. Mistral's OCR turns scanned PDFs (figures, tables, handwritten scans) into clean structured text. An AI research agent then reads the output and writes a synthesis. Handles what plain PDF-text extraction quietly fails on.

When you'd use it. Old scanned papers, historical lab notebooks, conference posters, anything where the PDF is actually an image. Also when tables matter — Mistral OCR preserves structure plain extraction wipes out.

Open on n8n.io →

Parse PDF/DOCX from Drive → OCR → Slack

📁 Drive Trigger [TRIGGER] → 📄 PDF / DOCX Parse [PRE-PROCESS] → 🔍 OCR Fallback [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR] → 💬 Slack Alert [POST-PROCESS]
  ⇢ OpenAI Chat Model [CHAT MODEL]

What it does. Watches a Drive folder, extracts text + tables from every new PDF or DOCX, runs OCR on the scanned ones, and pings Slack with a summary. One pipeline that handles both native and scanned documents.
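
The "OCR fallback" decision is simple: if native extraction returns almost no text, the page is probably a scanned image. The sketch below shows only the routing rule; the actual extraction and OCR happen in the template's parser and OCR nodes, and the 40-character threshold is an arbitrary illustrative value.

```python
# Routing sketch: pages whose native text layer is (nearly) empty
# get sent down the OCR branch instead.
def route(pages, min_chars=40):
    routed = []
    for page_text in pages:
        if len(page_text.strip()) >= min_chars:
            routed.append(("native", page_text))
        else:
            routed.append(("ocr", page_text))  # hand this page to the OCR node
    return routed

decisions = route([
    "Methods: cells were cultured in DMEM with 10% FBS at 37 C...",
    "",  # scanned page: the extractor found no text layer at all
])
```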

When you'd use it. A shared 'to-read' folder for the lab — new papers, SOPs, meeting notes — where you want a one-line summary in Slack so nothing slips. Same shape works for Mattermost.

Open on n8n.io →

Research paper analysis system

📤 Form Upload [TRIGGER] → 📄 Section Extractor [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🧵 Simple Memory [MEMORY]

What it does. End-to-end paper-reading pipeline: parses the PDF, extracts sections (abstract / methods / results), summarises each, and exposes a chat interface so you can ask the paper questions.

When you'd use it. When you have a single paper you actually need to understand — not skim — and you want quick answers to 'what's the effect size', 'what n did they use', 'what's the limitation they buried in the discussion'.

🧵 Simple Memory — session key gotcha: the note under Basic RAG chat applies here too.

Open on n8n.io →

Scientific research pipelines

Comprehensive literature review across PubMed + arXiv + Scholar

💬 Form / Chat Trigge… [TRIGGER] → 🔀 Multi-DB Fan-out [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🧬 PubMed Search [TOOL]
  ⇢ 📐 arXiv Search [TOOL]
  ⇢ 🎓 Scholar Search [TOOL]

What it does. One query → parallel searches across PubMed, arXiv, and Google Scholar → deduplicates → ranks → the agent writes a structured review (themes, gaps, disagreements) with citations back to each source.
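
The deduplicate step matters because the same paper comes back from PubMed, arXiv, and Scholar with slightly different metadata. A common approach, sketched here with invented records: prefer the DOI as the identity key, fall back to a normalised title when there is none.

```python
# Dedupe sketch for the fan-out results: one key per paper,
# DOI first, normalised title as fallback.
import re

def key(record):
    if record.get("doi"):
        return record["doi"].lower()
    # Strip case and punctuation so near-identical titles collide.
    return re.sub(r"[^a-z0-9]", "", record["title"].lower())

def dedupe(records):
    seen, unique = set(), []
    for r in records:
        k = key(r)
        if k not in seen:
            seen.add(k)
            unique.append(r)   # first source wins; later duplicates dropped
    return unique

hits = dedupe([
    {"title": "Attention Is All You Need", "doi": None, "source": "arxiv"},
    {"title": "Attention is all you need.", "doi": None, "source": "scholar"},
    {"title": "Some Other Paper", "doi": "10.1000/xyz", "source": "pubmed"},
])
```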

When you'd use it. Kicking off a new project or writing the background section of a grant. Covers biomed (PubMed) + physical / ML (arXiv) + the long tail (Scholar) in one go, so you don't miss a community.

Open on n8n.io →

arXiv paper summarisation

▶️ Manual / Schedule [TRIGGER] → 📐 arXiv Fetch [PRE-PROCESS] → 📄 PDF Extract [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]

What it does. Give it an arXiv ID or URL → downloads the PDF → an AI Agent writes a tight structured summary (contribution, method, results, limits). Handles the maths-heavy pre-prints arXiv is full of.

When you'd use it. Keeping up with a fast-moving ML / stats / comp-bio area where ten new pre-prints land every Friday. Paired with a schedule trigger + a Mattermost post, this is your personal 'paper of the day' bot.

Open on n8n.io →

Daily RAG paper hub: arXiv → Notion

Schedule [TRIGGER] → 📐 arXiv Search [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR] → 📝 Notion Write [POST-PROCESS]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🗂️ Vector Store [TOOL]

What it does. Every morning: pulls new arXiv papers on your topics, embeds them into a small RAG store, and writes one Notion page per paper with summary + key quotes + 'why you'd care'. Your RAG index grows on its own.

When you'd use it. When you want a durable, searchable archive of what you're reading — not just a firehose. A month in, you can chat over every paper you've ever opened, grounded in the exact text.

Open on n8n.io →

Open deep-research autonomous agent

💬 Chat / Form Trigge… [TRIGGER] → 🗺️ Planner Step [PRE-PROCESS] → 🧠 AI Agent [ORCHESTRATOR]
  ⇢ OpenAI Chat Model [CHAT MODEL]
  ⇢ 🧵 Simple Memory [MEMORY, long-horizon]
  ⇢ 🌐 Web Search [TOOL]
  ⇢ 📥 HTTP Fetch [TOOL]

What it does. Give it a question; the agent plans, searches, reads, re-queries, and iterates until it has a multi-page, cited answer. Same shape as the paid 'deep research' products — but running in your own n8n.
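
The plan-search-read-iterate loop, reduced to a toy. `search` is a stub returning canned snippets; a real run would call the Web Search tool and stop on a model judgement about whether the notes suffice, not on a fixed note count.

```python
# Loose sketch of the deep-research loop: each finding seeds a
# follow-up query until the notes look sufficient or a round
# budget runs out.
def search(query):
    canned = {
        "consensus on X": ["X is debated", "two camps on X"],
        "two camps on X": ["camp A cites trials", "camp B cites cohorts"],
    }
    return canned.get(query, [])

def deep_research(question, max_rounds=3, enough=3):
    notes, queries = [], [question]
    for _ in range(max_rounds):
        if not queries or len(notes) >= enough:
            break
        q = queries.pop(0)
        for snippet in search(q):
            notes.append(snippet)
            queries.append(snippet)  # re-query on what was just learned
    return notes

report = deep_research("consensus on X")
```

The cited multi-page answer in the template is then just the model writing up those accumulated notes.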

When you'd use it. The open-ended questions that take a human half a day — 'what's the current consensus on X', 'who are the serious labs working on Y', 'what are the three strongest arguments against Z'. Run it overnight; read the report in the morning.

🧵 Simple Memory — session key gotcha: the note under Basic RAG chat applies here too.

Open on n8n.io →

Missing something you need? Drop a suggestion in #lab-ai and we'll add it here — or paste any n8n.io template URL into the channel and the AI will reply with a drawing in this same style.