
Intelli

A framework for creating chatbots and AI agent workflows. It enables seamless integration with multiple AI models, including OpenAI, LLaMA, DeepSeek, Stable Diffusion, and Mistral, through a unified access layer. Intelli also supports the Model Context Protocol (MCP) for standardized interaction with AI models.

Install

# Basic installation
pip install intelli

# With MCP support
pip install "intelli[mcp]"

For detailed usage instructions, refer to the documentation.

Code Examples

Create Chatbot

Switch between multiple chatbot providers without changing your code.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

def call_chatbot(provider, model=None, api_key=None, options=None):
    # prepare common input
    input = ChatModelInput("You are a helpful assistant.", model)
    input.add_user_message("What is the capital of France?")

    # create the chatbot instance
    chatbot = Chatbot(api_key, provider, options=options)
    response = chatbot.chat(input)

    return response

# call chatGPT (GPT-5 is default)
call_chatbot(ChatProvider.OPENAI)

# call gpt-4o explicitly
call_chatbot(ChatProvider.OPENAI, "gpt-4o")

# call Claude 3.7 Sonnet
call_chatbot(ChatProvider.ANTHROPIC, "claude-3-7-sonnet-20250219")

# call google gemini
call_chatbot(ChatProvider.GEMINI)

# call NVIDIA DeepSeek
call_chatbot(ChatProvider.NVIDIA, "deepseek-ai/deepseek-r1")

# call vLLM (self-hosted)
call_chatbot(ChatProvider.VLLM, "meta-llama/Llama-3.1-8B-Instruct",
             options={"baseUrl": "http://localhost:8000"})

Create AI Flows

You can create a flow of tasks executed by different AI models. Here's an example of creating a blog post flow:

from intelli.flow import Agent, Task, SequenceFlow, TextTaskInput, TextProcessor

# define agents
blog_agent = Agent(agent_type='text', provider='openai', mission='write blog posts',
                   model_params={'key': YOUR_OPENAI_API_KEY, 'model': 'gpt-4'})
copy_agent = Agent(agent_type='text', provider='gemini', mission='generate description',
                   model_params={'key': YOUR_GEMINI_API_KEY, 'model': 'gemini'})
artist_agent = Agent(agent_type='image', provider='stability', mission='generate image',
                     model_params={'key': YOUR_STABILITY_API_KEY})

# define tasks
task1 = Task(TextTaskInput('blog post about electric cars'), blog_agent, log=True)
task2 = Task(TextTaskInput('Generate short image description for image model'), copy_agent,
             pre_process=TextProcessor.text_head, log=True)
task3 = Task(TextTaskInput('Generate cartoon style image'), artist_agent, log=True)

# start the sequence flow
flow = SequenceFlow([task1, task2, task3], log=True)
final_result = flow.start()

Graph-Based Agents

To build async flows with multiple paths, refer to the flow tutorial.

Or build the entire flow using natural language with Vibe Agents. Refer to the documentation for more details.
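As a rough illustration of a multi-path graph flow, the sketch below wires three tasks into a small dependency graph. It assumes `Flow` is exported alongside `SequenceFlow`, accepts `tasks` and `map_paths` dictionaries, and runs asynchronously, as described in the flow tutorial; treat these names and parameters as assumptions and check the tutorial for the exact API.

import asyncio
from intelli.flow import Agent, Task, Flow, TextTaskInput  # Flow import assumed; see the tutorial

# agents, in the same style as the sequence example above
analyst = Agent(agent_type='text', provider='openai', mission='analyze a topic',
                model_params={'key': YOUR_OPENAI_API_KEY, 'model': 'gpt-4'})
writer = Agent(agent_type='text', provider='gemini', mission='write summaries',
               model_params={'key': YOUR_GEMINI_API_KEY, 'model': 'gemini'})

task1 = Task(TextTaskInput('analyze the electric car market'), analyst, log=True)
task2 = Task(TextTaskInput('summarize the analysis'), writer, log=True)
task3 = Task(TextTaskInput('list three open questions'), writer, log=True)

# map_paths wires each task to the tasks that consume its output (parameter name assumed)
flow = Flow(
    tasks={"analysis": task1, "summary": task2, "questions": task3},
    map_paths={"analysis": ["summary", "questions"], "summary": [], "questions": []},
)

results = asyncio.run(flow.start())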

Generate Images

Use the image controller to generate art from multiple models with minimal code changes:

from intelli.controller.remote_image_model import RemoteImageModel
from intelli.model.input.image_input import ImageModelInput

# model details - change only two words to switch providers
provider = "openai"
model_name = "dall-e-3"

# prepare the input details
prompt = "cartoonishly-styled solitary snake logo, looping elegantly to form both the body of the python and an abstract play on data nodes."
image_input = ImageModelInput(prompt=prompt, width=1024, height=1024, model=model_name)

# call the model (openai/stability)
wrapper = RemoteImageModel(your_api_key, provider)
results = wrapper.generate_images(image_input)
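To switch providers, only the two model-detail lines change. For example (the Stability model name below is illustrative; check Stability's current model list):

provider = "stability"
model_name = "stable-diffusion-xl-1024-v1-0"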

GGUF Optimized Models

llama.cpp provides an efficient way to run language models locally, with support for models in the GGUF format; check the docs.
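As a hypothetical sketch of what local GGUF usage could look like with the chatbot interface: the provider name and option keys below are assumptions, so consult the GGUF docs for the actual parameters.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# point the chatbot at a local GGUF file (provider and option names assumed)
options = {"model_path": "./models/llama-3.1-8b-instruct.Q4_K_M.gguf"}
bot = Chatbot(provider=ChatProvider.LLAMACPP, options=options)

input = ChatModelInput("You are a helpful assistant.")
input.add_user_message("Summarize the GGUF format in one sentence.")
response = bot.chat(input)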

MCP Calculator Demo

Check out the MCP Calculator Demo for an example of how to create an MCP server with math operations and a client that uses a flow to interpret natural-language queries.
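For orientation, a minimal MCP math server might look like the sketch below. It uses the official mcp Python package's FastMCP helper rather than the demo's exact code; see the demo for the Intelli client side.

from mcp.server.fastmcp import FastMCP

# a tiny MCP server exposing math operations as tools
mcp = FastMCP("Calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default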

MCP DataFrame Demo

Check out the MCP DataFrame Demo for an example of how to serve dataframes as MCP servers and use them within Intelli flows, enabling integration with AI models.
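In the same spirit, a dataframe can be exposed through MCP tools. The sketch below is illustrative (the sample data and tool surface are assumptions), not the demo's actual code.

import pandas as pd
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("DataFrameServer")

# hypothetical sample data; the demo loads its own dataframe
df = pd.DataFrame({"product": ["a", "b"], "sales": [120, 340]})

@mcp.tool()
def describe() -> str:
    """Return summary statistics for the dataframe."""
    return df.describe(include="all").to_string()

@mcp.tool()
def head(n: int = 5) -> str:
    """Return the first n rows of the dataframe."""
    return df.head(n).to_string()

if __name__ == "__main__":
    mcp.run()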

Connect Your Docs With Chatbot

IntelliPy allows you to chat with your docs using multiple LLMs. To connect your data, visit the IntelliNode App, start a project using the Document option, upload your documents or images, and copy the generated One Key. This key will be used to connect the chatbot to your uploaded data.

# create a chatbot with the IntelliNode one key
bot = Chatbot(YOUR_OPENAI_API_KEY, "openai", {"one_key": YOUR_ONE_KEY})

input = ChatModelInput("You are a helpful assistant.")  # uses GPT-5 by default
input.add_user_message("What is the procedure for requesting a refund according to the user manual?")

# optional: return the name of the matched file
input.attach_reference = True

response = bot.chat(input)

Repository Setup

  1. Install the requirements.

pip install -r requirements.txt

  2. Rename .example.env to .env and fill in the keys.

  3. Run the test cases; examples below.

# images
python3 -m unittest intelli.test.integration.test_remote_image_model

# chatbot
python3 -m unittest intelli.test.integration.test_chatbot

# mistral
python3 -m unittest intelli.test.integration.test_mistralai_wrapper

# ai flows
python3 -m unittest intelli.test.integration.test_flow_sequence

Pillars

  • The wrapper layer provides low-level access to the latest AI models.
  • The controller layer offers a unified input interface to any AI model by handling provider differences.
  • The function layer provides abstract functionality that extends based on the app's use cases.
  • Flows: create a flow of AI agents working toward user tasks.
  • Vibe Agents: generate a graph of agents from an intent.

If you reference Vibe Agents, please cite this repository (see CITATION.cff).