Introduction to Ingra - Functions as a Service (FaaS)

Learn how Ingra, a Functions as a Service (FaaS) platform, integrates with Large Language Models (LLMs) to enable unlimited function tool calling and seamless automation.

Introduction

Ingra is a highly extensible Functions as a Service (FaaS) platform designed to integrate with various Large Language Models (LLMs). It allows LLMs to execute a wide range of functions and even create new ones on the fly. The platform aims to democratize AI by letting users quickly prototype, automate workflows, and integrate diverse services through seamless function execution.

How Ingra Works

Ingra hosts functions that you either create or subscribe to. These functions are accessible via an API that can be invoked by any GPT or LLM using an OpenAPI contract.

Read more about this in OpenAPI and Swagger Specs.
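
To make the contract concrete, here is a minimal sketch of how a client (for example, an LLM tool-calling runtime) might invoke an Ingra-hosted function over HTTP. The base URL, path, and payload shape below are hypothetical assumptions for illustration; the real operation paths and schemas come from your generated OpenAPI spec.

```typescript
// Minimal sketch of invoking an Ingra-hosted function over HTTP.
// The base URL, path, and payload shape are hypothetical placeholders;
// the real operations and schemas come from your OpenAPI spec.
async function invokeIngraFunction(
  functionSlug: string,
  args: Record<string, unknown>,
  apiKey: string
): Promise<unknown> {
  const response = await fetch(
    `https://hubs.ingra.ai/api/v1/functions/${functionSlug}`, // hypothetical endpoint
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(args),
    }
  );
  if (!response.ok) {
    // Even failed calls return a body the caller (or LLM) can inspect.
    throw new Error(`Function call failed with status ${response.status}`);
  }
  return response.json();
}
```

Because every function is exposed as an HTTP operation described by a schema, any LLM that supports tool calling can discover and invoke it.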

How Ingra Works with LLMs / GPTs (e.g. ChatGPT)

The following diagram visualizes the role Ingra plays in function tool calling for any LLM or GPT.

```mermaid
sequenceDiagram
  autonumber
  participant User
  participant GPT
  participant IngraHubs as Ingra Hubs
  participant VM as Secure VM
  participant Function as Function Logic

  User->>GPT: Sends Prompt
  GPT->>IngraHubs: API Request (OpenAPI)
  IngraHubs->>VM: Invoke Function
  VM->>Function: Load Context (Env Vars, OAuth Tokens, Args)
  Function-->>VM: Execute Logic
  VM-->>IngraHubs: Return Result
  IngraHubs-->>GPT: Send Result
  GPT-->>User: Provide Output

  %% Add self-feedback loop
  alt Self-Feedback Needed
    GPT->>GPT: Self-Feedback & Re-Evaluate
    GPT->>IngraHubs: Re-Invoke API
  end
```

Let's break down each step in detail:

  1. User Sends Prompt: The interaction begins with the user sending a prompt to GPT. This could be a request for information, an action to perform, or any task that GPT can interpret.
  2. GPT Processes the Prompt: GPT interprets the user's prompt and formulates an API request using the OpenAPI schema provided by Ingra.
  3. Ingra Receives the Request: Ingra takes the API request, identifies the appropriate function, and validates it before proceeding.
  4. Secure VM Invokes the Function: Once validated, Ingra invokes the function within a secure Virtual Machine (VM). This ensures safe and isolated execution of the function.
  5. Load Function Context: Within the VM, the function loads its context, which includes environment variables, OAuth tokens, and any arguments necessary for execution (a sketch of this context follows the list).
  6. Execute Function Logic: The function logic is executed using the loaded context. This step is where the main processing occurs based on the user's initial prompt.
  7. Return Execution Result: After execution, the VM returns the result back to Ingra, which processes the output.
  8. GPT Provides Output: Ingra sends the execution result back to GPT, which then provides the final output to the user.
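
To make steps 4–6 concrete, here is a hedged sketch of what a function might look like from the inside. The `FunctionContext` shape and the calendar example are illustrative assumptions, not Ingra's actual runtime API; they simply mirror the context described in step 5.

```typescript
// Illustrative only: FunctionContext is an assumed shape mirroring the
// context described in step 5, not Ingra's actual runtime API.
interface FunctionContext {
  env: Record<string, string>;          // environment variables
  oauthTokens: Record<string, string>;  // OAuth tokens per connected service
  args: Record<string, unknown>;        // arguments from the API request
}

// A hypothetical function that lists a user's upcoming calendar events.
async function handler(ctx: FunctionContext): Promise<unknown> {
  const token = ctx.oauthTokens["google"]; // loaded by the secure VM (step 5)
  const limit = typeof ctx.args.limit === "number" ? ctx.args.limit : 10;
  const res = await fetch(
    `https://www.googleapis.com/calendar/v3/calendars/primary/events?maxResults=${limit}`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  // Success or failure, the result flows back through the VM to Ingra (step 7).
  return res.json();
}
```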

Self-Feedback and Re-Evaluation

Why This Works:

It's simple! Invoked functions always return an output, whether they succeed or fail. Modern LLMs like GPT-3 can interpret these outputs, allowing them to refine and re-evaluate the prompt to achieve the desired result.

A key feature of this workflow is the self-feedback loop within GPT. After providing the initial result, GPT can:

  • Self-Feedback: Review the output to decide if further refinement is necessary.
  • Adjust & Re-Evaluate: If needed, GPT can modify its prompt or API request and re-invoke the function through Ingra to enhance the results. This iterative process allows for more dynamic interactions and ensures that the output aligns closely with the user's needs (see the sketch below).
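
As a rough illustration, the loop might look like the following. `invokeIngraFunction` is the hypothetical client sketched earlier, and the retry policy (three attempts, revising the arguments after each failure) is an assumption about how an LLM runtime could behave, not something Ingra prescribes.

```typescript
// Hedged sketch of the self-feedback loop an LLM runtime might run.
// invokeIngraFunction is the hypothetical client sketched earlier;
// llmAdjustArgs stands in for the model re-reading the error output
// and reformulating its request.
async function callWithSelfFeedback(
  functionSlug: string,
  initialArgs: Record<string, unknown>,
  apiKey: string,
  maxAttempts = 3
): Promise<unknown> {
  let args = initialArgs;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // Success or failure, the function returns output the model can read.
      return await invokeIngraFunction(functionSlug, args, apiKey);
    } catch (error) {
      if (attempt === maxAttempts) throw error;
      // Re-evaluate: let the model revise its arguments based on the error.
      args = await llmAdjustArgs(args, error);
    }
  }
  throw new Error("unreachable");
}

// Placeholder: in reality this would be another LLM call that interprets
// the error output and proposes revised arguments.
async function llmAdjustArgs(
  args: Record<string, unknown>,
  error: unknown
): Promise<Record<string, unknown>> {
  console.warn("Re-evaluating after error:", error);
  return args; // a real implementation would return revised arguments
}
```

The key design point is that a failed call is still useful output: because errors come back as readable responses, the model can treat them as feedback rather than dead ends.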

Core Features

Read more about Ingra's core features here.