An open-source, AI-powered Functions-as-a-Service (FaaS) platform that enables LLMs like ChatGPT to perform real-world tasks through dynamic function execution. By connecting LLMs to function calls, Ingra unlocks automation, complex integrations, and real-time context loading—paving the way towards true AGI potential and beyond.


Ingra

Ingra is an open-source platform designed to enrich the developer experience of curating and hooking up AI function tool-calling capabilities. It integrates seamlessly with any LLM by providing the ability to curate, manage, and host functions, which you can use to build a personal assistant suite tailored to your needs.

Our goal is to make these tools freely available to everyone and to enable a community-driven approach to personal assistant development.

What's Inside?

This repository is organized as a Turborepo, containing multiple applications and packages:

Documentation

Visit Ingra Documentation to view the full documentation.

Applications

  • hubs: Next.js application serving the UI for the hubs interface and the LLM API calls for collections and functions.
  • auth: Next.js application serving the authentication UI.
  • docs: Next.js application serving the documentation site.

Packages

  • @repo/components: A React component library shared across applications.
  • @repo/shared: Utilities library shared across applications.
  • @repo/db: Database schema and client, built with Prisma.
  • @repo/eslint-config: Shared ESLint configurations.
  • @repo/typescript-config: TypeScript configurations used throughout the monorepo.

Tech Stack

The project leverages a modern tech stack for optimal performance and scalability:

  • Frontend Framework: Next.js (^14.1.0)
  • UI Components: Headless UI, Radix UI, Heroicons, Lucide React
  • Form Handling: React Hook Form, @hookform/resolvers
  • State Management: Built-in Next.js capabilities, Context API
  • Styling: Tailwind CSS, SASS, Tailwind Merge, Tailwindcss Animate
  • Backend: Prisma (5.10.2) as the ORM
  • Database: PostgreSQL (Prisma)
  • Deployment: Vercel Platform
  • Email Service: AWS SES
  • Authentication: Custom built-in implementation, with no third-party auth provider, for enhanced security
  • Additional Libraries: date-fns for date handling, jsonwebtoken for JWT-based auth, uuid for generating unique identifiers, zod for form validation, chrono-node for natural-language date-time parsing.

Features

  • Functions Hub: Curate a variety of functions that can be integrated into your personal assistant suite. The hub serves as a central repository for all available functions.
  • Auto-Generated OpenAPI and Swagger Specs: Automatically generate OpenAPI and Swagger specifications for your curated functions, ensuring standardized documentation and ease of use.
  • Virtual Machine Execution: All functions run in a secure virtual machine environment, providing isolation and security.
  • Privacy Controls: Provide users with granular privacy controls to manage their data and interactions, ensuring transparency and trust.
  • OpenAI ChatGPT Plugin Integration: A seamless way to expose your personally curated functions as a ChatGPT plugin.
  • Function Marketplace: A marketplace where users can share, rate, and review functions, enhancing discoverability and quality.
  • Self Function Generation: An API endpoint that lets users curate their own functions using GPT.
  • Chat: A built-in AI assistant, powered by LangChain, that converses and interacts through tool calling, with cron support.
  • Self Hosting: The option to switch the database endpoint or run local LLMs, so users can flexibly host their own data. Configurable services include PostgreSQL, Redis, Pinecone, and AWS SES (Google OAuth and the OpenAI API key can be overridden via built-in environment variables).
  • Workflows Hub: Chain one or more functions into workflows for more comprehensive tasks.
  • User Onboarding: Streamlined onboarding process to help new users get started quickly and efficiently, including step-by-step guides and support resources.
  • Advanced Security Features: Implement advanced security measures such as encryption, secure access controls, and more to protect user data and function integrity.
  • Collaboration Tools: Develop tools to facilitate collaboration among users and developers, such as version control, code reviews, and more.
  • Personalization Options: Offer advanced personalization options to tailor the assistant’s behavior and responses to individual preferences.
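For the self-hosting option above, the configurables boil down to pointing each service at your own infrastructure. A rough sketch of what the overrides might look like (the variable names other than DATABASE_URL are illustrative assumptions — consult .env.example for the actual keys):

```shell
# Hypothetical self-hosting overrides — names are illustrative, not confirmed
DATABASE_URL="postgresql://user:password@localhost:5432/ingra?schema=public"  # psql
REDIS_URL="redis://localhost:6379"                                            # redis
PINECONE_API_KEY="your-pinecone-key"                                          # pinecone
AWS_SES_REGION="us-east-1"                                                    # aws ses
OPENAI_API_KEY="your-openai-key"                                              # overridable via env
```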

Installation

  1. Clone the repository:

    git clone https://github.com/ingra-ai/ingra-hubs
    cd ingra-hubs
  2. Install dependencies:

    pnpm install
    # or, if you're using npm
    npm install
    # or, if you're using yarn
    yarn install
  3. Set up your environment variables:

    cp .env.example apps/hubs/.env
    cp .env.example apps/docs/.env
    cp .env.example apps/auth/.env

    Then fill in your database connection details and any other required environment variables in each copied .env file.

Environment Variables

Ingra Hubs uses several environment variables for its configuration. You will need to set these in your .env file. Here's a guide to the required environment variables:

Database Setup with Prisma

To learn how to work with Prisma, see the Prisma README.

  1. Configure your database connection:

    Ensure your .env file includes the database connection string for your PostgreSQL database:

    DATABASE_URL="postgresql://user:password@localhost:5432/mydatabase?schema=public"
  2. Run Prisma migrations:

    To set up your database schema or apply any updates, run the Prisma migration command:

    pnpm db:migrate:dev

    This command creates the tables in your database according to the schema defined in prisma/schema.prisma.

  3. Generate Prisma client:

    Generate the Prisma client to interact with your database:

    pnpm db:generate

Authentication

We use magic-link authentication (passwordless email sign-in) to authenticate users.

Therefore, the following configuration is needed:

JWT Configuration Setup

To securely manage user sessions and authentication, Ingra Hubs utilizes JSON Web Tokens (JWT). You must generate a secure JWT secret for your application:

  • JWT_SECRET: A secret key used for signing JWT tokens. Ensure this is a long, random string to maintain security.

You can use OpenSSL to generate a secure secret:

openssl rand -base64 48
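If you're working from the repo root, one way to generate the secret and store it in the hubs app's env file in a single step (the target path follows the setup above; adjust it if your layout differs):

```shell
# Generate a 48-byte random secret (64 base64 characters)
JWT_SECRET="$(openssl rand -base64 48)"

# Append it to the hubs app's env file
# (mkdir -p is a no-op when run from the repo root, where apps/hubs already exists)
mkdir -p apps/hubs
echo "JWT_SECRET=${JWT_SECRET}" >> apps/hubs/.env
```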

Setup AWS SES Configuration

These variables are required for integrating Amazon Simple Email Service (SES) for sending emails:

  • AWS_SES_REGION: The AWS region where your SES instance is located, e.g., us-east-1.
  • AWS_SES_ACCESS_KEY: Your AWS access key ID for SES.
  • AWS_SES_SECRET: Your AWS secret access key for SES.
  • AWS_SES_MAIL_FROM: The email address used as the sender in emails sent from the portal. This email must be verified with Amazon SES.

For more detailed instructions and additional configurations, refer to the Amazon SES Documentation.
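Put together, the SES entries in your .env might look like the following (all values are placeholders, not real credentials):

```shell
# AWS SES configuration — placeholder values, replace with your own
AWS_SES_REGION="us-east-1"
AWS_SES_ACCESS_KEY="AKIAXXXXXXXXXXXXXXXX"
AWS_SES_SECRET="your-aws-secret-access-key"
AWS_SES_MAIL_FROM="noreply@example.com"  # must be verified with Amazon SES
```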


Running the Development Server

pnpm dev

Turborepo Remote Caching

Turborepo can use a technique known as Remote Caching to share cache artifacts across machines, enabling you to share build caches with your team and CI/CD pipelines.

By default, Turborepo will cache locally. To enable Remote Caching you will need an account with Vercel. If you don't have an account you can create one, then enter the following commands:

npx turbo login

This will authenticate the Turborepo CLI with your Vercel account.

Next, you can link your Turborepo to your Remote Cache by running the following command from the root of your Turborepo:

npx turbo link

Contributing

We welcome contributions to Ingra Hubs! Whether it's bug reports, feature requests, or code contributions, please feel free to open an issue or make a pull request. Thanks in advance!

License

Ingra Hubs is open-source software licensed under the MIT License.

Learn More

To further enhance your development with Ingra Hubs, explore the documentation and resources of the key technologies and libraries used in the project:

