submitted 1 year ago* (last edited 1 year ago) by Blaed@lemmy.world to c/fosai@lemmy.world

Hello everyone!

I am working on better workflows to bring back a more consistent posting schedule. In the meantime, I'd like to leave you with a new update from LocalAI & Continue.

Check these projects out! More info from the Continue & LocalAI teams below:

Continue

The open-source autopilot for software development: a VS Code extension that brings the power of ChatGPT to your IDE.

LocalAI

LocalAI is a drop-in replacement REST API that's compatible with the OpenAI API specification for local inferencing. It lets you run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. No GPU required.
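
Because LocalAI mirrors the OpenAI API, anything that can talk to OpenAI can point at it instead. As a minimal sketch (assuming LocalAI is already running on localhost:8080 with a model registered as "gpt-3.5-turbo", as in the walkthrough below), a plain HTTP request is all it takes:

    import requests

    # LocalAI serves the OpenAI-compatible API on port 8080 by default.
    # The model name must match one configured in LocalAI.
    response = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Say hello from LocalAI!"}],
            "temperature": 0.7,
        },
    )
    print(response.json()["choices"][0]["message"]["content"])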

Combining the Power of Continue + LocalAI!


Note

From this release, the llama backend supports only gguf files (see 943). LocalAI still supports ggml files, however: we ship a version of llama.cpp from before that change in a separate backend named llama-stable, so ggml files can still be loaded. If you were manually specifying the llama backend to load ggml files, from this release on you should use llama-stable instead, or not specify a backend at all (LocalAI will handle this automatically).
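
For reference, the backend is chosen in a model's YAML definition file. A minimal sketch, with an illustrative model name and file name (not taken from the example):

    name: my-ggml-model
    # Force the pre-gguf llama.cpp build so older ggml files keep loading;
    # omit this line to let LocalAI pick a backend automatically.
    backend: llama-stable
    parameters:
      model: my-model.ggmlv3.q4_0.bin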

Continue


This document presents an example of integration with continuedev/continue.

Integration Setup Walkthrough

  1. As outlined in Continue's documentation, install the Visual Studio Code extension from the marketplace and open it.

  2. In this example, LocalAI will download the gpt4all model and set it up as "gpt-3.5-turbo". Refer to the docker-compose.yaml file for details.

    # Clone LocalAI
    git clone https://github.com/go-skynet/LocalAI
    
    cd LocalAI/examples/continue
    
    # Start with docker-compose
    docker-compose up --build -d
    
  3. Type /config within Continue's VSCode extension, or edit the file located at ~/.continue/config.py on your system with the following configuration:

    from continuedev.src.continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo
    # ContinueConfig and Models are assumed to live in the core package of the
    # same version; adjust these import paths to match your Continue install.
    from continuedev.src.continuedev.core.config import ContinueConfig
    from continuedev.src.continuedev.core.models import Models
    
    config = ContinueConfig(
       ...
       models=Models(
            default=OpenAI(
               api_key="my-api-key",  # placeholder; LocalAI does not validate it
               model="gpt-3.5-turbo",
               openai_server_info=OpenAIServerInfo(
                  api_base="http://localhost:8080",  # the LocalAI container from step 2
                  model="gpt-3.5-turbo"  # must match the model name LocalAI serves
               )
            )
       ),
    )
    

This setup enables you to send queries directly to the model running in the Docker container. Note that api_key does not need to be a real key; it is included only as a placeholder.
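
To sanity-check the wiring before opening the extension, you can ask the container which models it is serving. A quick sketch, assuming the docker-compose stack from step 2 is up:

    import requests

    # /v1/models is part of the OpenAI-compatible surface LocalAI implements;
    # "gpt-3.5-turbo" should be listed once setup has finished.
    models = requests.get("http://localhost:8080/v1/models").json()
    print([m["id"] for m in models["data"]])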

If editing the configuration seems confusing, you may copy and paste the provided default config.py file over the existing one in ~/.continue/config.py after initializing the extension in the VSCode IDE.
