Emiola Mariam Olamide

🤖 LLM_Agent - Run AI Locally and Access Real-Time Data


📖 Overview

LLM_Agent is an AI project that lets you run a compact Large Language Model (Qwen2.5-0.5B) right on your own computer; no technical skills are required. Built with FastAPI and llama-cpp-python, the app lets you chat naturally and even fetch real-time information from the web in “Search Mode.” It features a user-friendly frontend built with HTML, CSS, and JavaScript, and it runs entirely inside Docker.

🚀 Getting Started

To get started with LLM_Agent, follow these simple steps to download and run the software.

✨ Key Features

  • Local inference: runs the Qwen2.5-0.5B model on your own machine via llama-cpp-python.
  • Search Mode: fetches real-time information from the web when you need current answers.
  • Simple web UI: a frontend built with HTML, CSS, and JavaScript, backed by FastAPI.
  • One-command deployment: everything runs inside Docker with docker-compose.

💾 System Requirements

To use LLM_Agent, your computer should meet the following requirements:

  • Docker and Docker Compose installed (see the Docker website for instructions).
  • Enough free disk space for the Docker images and the Qwen2.5-0.5B model weights.

📥 Download & Install

To get LLM_Agent, download the latest release from the GitHub Releases page.

📦 Step-by-Step Installation

  1. Download: Go to the Releases page and download the latest version of LLM_Agent suitable for your system.

  2. Install Docker: If you haven’t already, install Docker and Docker Compose by following the instructions on the Docker website.

  3. Open Terminal/Command Prompt:
    • For Windows, search for “cmd” in the Start menu.
    • For Mac, open “Terminal” from your applications.
    • For Linux, you can open Terminal from your application menu.
  4. Navigate to the Download Folder: Use the cd command to navigate to the location where you downloaded LLM_Agent.
    cd path/to/your/download/folder
    
  5. Run LLM_Agent:
    • Type the following command and press Enter:
      docker-compose up
      
    • This will start the application. Wait for a few moments as Docker sets everything up.
  6. Access the Application: Open your web browser and go to http://localhost:8000. This will take you to the LLM_Agent interface.
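For reference, the behavior of `docker-compose up` is driven by the `docker-compose.yml` file that ships with the release. The exact contents depend on the release you downloaded; a minimal sketch of what such a file might look like (the service name and port mapping here are assumptions, not the project’s actual file):

```yaml
# Hypothetical sketch -- the real docker-compose.yml ships with the release.
services:
  llm-agent:
    build: .            # build the image from the Dockerfile in this folder
    ports:
      - "8000:8000"     # expose the backend on http://localhost:8000
```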

🧑‍🤝‍🧑 Using LLM_Agent

Once you have LLM_Agent running, you will see a simple interface where you can interact with the AI.
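Behind the web interface, the frontend talks to the FastAPI backend over HTTP. The README does not document the API, but as a purely illustrative sketch (the `/chat` path and the `message`/`mode` field names are assumptions, not LLM_Agent’s documented API), a Python client might build a request like this:

```python
import json
from urllib import request

# Hypothetical payload -- the field names ("message", "mode") and the
# /chat endpoint are assumptions, not LLM_Agent's documented API.
payload = {"message": "What is the capital of France?", "mode": "chat"}

req = request.Request(
    "http://localhost:8000/chat",              # the local Docker deployment
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the containers running, the response could be read with:
#   with request.urlopen(req) as resp:
#       print(json.load(resp))
```

In practice the bundled web UI sends these requests for you; this sketch only illustrates the shape of the exchange.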

🔍 Switching Modes

From the interface you can switch between regular chat, which answers from the local model alone, and “Search Mode,” which also fetches real-time information from the web.
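Conceptually, the backend only needs to branch on the selected mode for each request. A hypothetical sketch of such a dispatcher (the function and flag names are illustrative, not the project’s actual code):

```python
# Hypothetical mode dispatcher -- names are illustrative, not LLM_Agent's code.

def answer_from_model(prompt: str) -> str:
    """Stand-in for a local Qwen2.5-0.5B call via llama-cpp-python."""
    return f"[local model reply to: {prompt}]"

def answer_with_search(prompt: str) -> str:
    """Stand-in for Search Mode: fetch real-time web results, then answer."""
    return f"[web-grounded reply to: {prompt}]"

MODES = {
    "chat": answer_from_model,     # pure local inference
    "search": answer_with_search,  # augmented with real-time data
}

def handle(prompt: str, mode: str = "chat") -> str:
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode!r}")
    return MODES[mode](prompt)
```

In the real app, the mode you pick in the web UI would set this flag on each request to the backend.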

📜 Troubleshooting

If you encounter any issues while installing or running LLM_Agent, try the following:

  • Make sure Docker is running before you execute docker-compose up.
  • If http://localhost:8000 does not load, check that no other application is already using port 8000.
  • Inspect the container output with docker-compose logs to look for error messages.
  • Stop everything with docker-compose down, then run docker-compose up again for a clean restart.

💬 Support

If you need help or have questions about LLM_Agent, you can open an issue on the GitHub Issues page.

🏷️ Topics

🔗 Additional Resources

For further information on Docker and FastAPI, consider checking out the following resources:

  • Docker documentation: https://docs.docker.com/
  • FastAPI documentation: https://fastapi.tiangolo.com/

By following these steps, you can easily download, install, and start using LLM_Agent. Enjoy exploring the capabilities of your personal AI!