Indexify simplifies building and serving durable, multi-stage workflows as interconnected Python functions and automagically deploys them as APIs.
A workflow encodes data ingestion and transformation stages that can be implemented using Python functions. Each of these functions is a logical compute unit that can be retried upon failure or assigned to specific hardware.
Note
Indexify is the open-source core compute engine that powers Tensorlake's Serverless Workflow Engine for processing unstructured data. Book a Demo to learn more about the platform.
Indexify is a versatile data processing framework for all kinds of use-cases, including:
- Scraping and Summarizing Websites
- Extracting and Indexing PDF Documents
- Transcribing and Summarizing Audio Files
- Object Detection and Description
- Knowledge Graph RAG and Question Answering
- Dynamic Routing: Route data to different specialized models based on conditional branching logic (see the sketch after this list).
- Local Inference: Execute LLMs directly within workflow functions using LLamaCPP, vLLM, or Hugging Face Transformers.
- Distributed Processing: Run functions in parallel across machines and combine their results as they complete.
- Workflow Versioning: Version compute graphs to update previously processed data to reflect the latest functions and models.
- Resource Allocation: Span workflows across GPU and CPU instances so that functions can be assigned to their optimal hardware.
- Request Optimization: Maximize GPU utilization by automatically queuing and batching invocations in parallel.
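As a preview of dynamic routing, the sketch below uses the SDK's @indexify_router() decorator to choose a downstream function per input. It is a minimal sketch rather than canonical SDK usage: the handler functions and length threshold are hypothetical, and the graph.route(...) wiring call is an assumption based on the routing pattern in the SDK docs.

```python
from typing import List, Union

from indexify import Graph, indexify_function, indexify_router


@indexify_function()
def fetch_text(url: str) -> str:
    # Hypothetical: download the text to be summarized
    ...


@indexify_function()
def summarize_with_small_model(text: str) -> str:
    # Hypothetical: cheap, fast model for short inputs
    ...


@indexify_function()
def summarize_with_large_model(text: str) -> str:
    # Hypothetical: larger model for long inputs
    ...


# A router returns the downstream function that the input should flow to.
@indexify_router()
def route_by_length(text: str) -> List[Union[summarize_with_small_model, summarize_with_large_model]]:
    if len(text) > 2_000:
        return summarize_with_large_model
    return summarize_with_small_model


graph = Graph(name="summarization_router", start_node=fetch_text, description="...")
graph.add_edge(fetch_text, route_by_length)
# Assumed wiring call: register the router's possible targets with the graph.
graph.route(route_by_length, [summarize_with_small_model, summarize_with_large_model])
```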
Install Indexify's SDK and CLI into your development environment:
```bash
pip install indexify
```

Define a workflow by implementing its data transformations as composable Python functions decorated with `@indexify_function()`. These functions form the nodes of a Graph, which is Indexify's representation of a compute graph.
Functions serve as discrete units within a Graph, defining the boundaries for retry attempts and resource allocation. They separate computationally heavy tasks like LLM inference from lightweight ones like database writes.
The example below is a pipeline that parses a PDF document, chunks its pages, and embeds and stores the chunks.
```python
from pydantic import BaseModel
from indexify import indexify_function, Graph
from typing import List

class Document(BaseModel):
    pages: List[str]

# Parse a PDF and extract text
@indexify_function()
def process_document(file: File) -> Document:
    # Process a PDF and extract pages
    ...

class TextChunk(BaseModel):
    chunk: str
    page_number: int

# Chunk the pages for embedding and retrieval
@indexify_function()
def chunk_document(document: Document) -> List[TextChunk]:
    # Split the pages
    ...

# Embed a single chunk.
# Note: (Automatic Map) Indexify automatically parallelizes functions when they consume an element
# from a function that produces a List
@indexify_function()
def embed_and_write(chunk: TextChunk) -> ChunkEmbedding:
    # Run an embedding model on the chunk
    # Write the embedding to the database
    ...

# Construct a compute graph connecting the three functions defined above into a workflow
# and run them as a pipeline
graph = Graph(name="document_ingestion_pipeline", start_node=process_document, description="...")
graph.add_edge(process_document, chunk_document)
graph.add_edge(chunk_document, embed_and_write)
```

Read the Docs to learn more about how to test, deploy, and create API endpoints for Workflows.
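For instance, invoking the graph locally during development and then deploying it might look like the sketch below. This is a minimal, hedged sketch: it assumes the Graph.run / Graph.output invocation pattern and the RemoteGraph.deploy helper from the SDK's README, and the `file=...` payload is a hypothetical stand-in for the start node's input.

```python
from indexify import RemoteGraph

# Run the pipeline locally; keyword arguments are forwarded to the start node
# (`file=...` stands in for a hypothetical File payload for process_document).
invocation_id = graph.run(block_until_done=True, file=...)

# Fetch the outputs produced by a specific function in the graph.
embeddings = graph.output(invocation_id, "embed_and_write")

# Deploy the same graph to an Indexify server to expose it as a remote API endpoint.
RemoteGraph.deploy(graph)
```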
- Architecture of Indexify
- Packaging Dependencies of Functions
- Programming Model
- Deploying Compute Graph Endpoints using Docker Compose
- Deploying Compute Graph Endpoints using Kubernetes
- Function Batching: Process multiple functions in a single batch to improve efficiency.
- Data Localized Execution: Boost performance by prioritizing execution on machines where intermediate outputs exist already.
- Reducer Optimizations: Optimize performance by batching the serial execution of reduce function calls.
- Parallel Scheduling: Reduce latency by enabling parallel execution across multiple machines.
- Cyclic Graph Support: Enable more flexible agentic behaviors by leveraging cycles in graphs.
- Ephemeral Graphs: Perform multi-stage inference and retrieval without persisting intermediate outputs.
- Data Loader Functions: Stream values into graphs over time using the `yield` keyword.
- TypeScript SDK: Build an SDK for writing workflows in TypeScript.