LLM Llama2 Llava AI-Messenger on WSL Links for Needed Info

 Hi


I am happy to play with LLM distributions.

Here is what you need to get going with LLaVA and AI-Messenger:


How to build on windows? · Issue #103 · ggerganov/llama.cpp · GitHub

GitHub - liltom-eth/llama2-webui: Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for Generative Agents/Apps.


WSL/WSL/wsl-config.md at main · MicrosoftDocs/WSL · GitHub

https://github.com/microsoft/WSL2-Linux-Kernel

(for a custom-built kernel, point `.wslconfig` at it with e.g. `kernel=C:\\temp\\myCustomKernel`, as in the MicrosoftDocs page above)
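A minimal sketch of the `.wslconfig` (placed in `%UserProfile%`), following the wsl-config.md page linked above; the path is the example path from that doc:

```ini
[wsl2]
kernel=C:\\temp\\myCustomKernel
```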



Copilot's answer to "Which LocalLlama enables generating pictures?":

LocalLlama is a fascinating project that combines language models with image generation capabilities. Let me provide you with some details:

  1. LLaVA (Large Language-and-Vision Assistant):

  2. Quick Start Example for LLaVA:

    • To experiment with LLaVA, follow these steps:
      1. Build llama.cpp with LLaVA support:
        git clone https://github.com/ggerganov/llama.cpp.git
        cd llama.cpp
        git checkout llava
        mkdir build && cd build
        cmake ..
        cmake --build . --config Release
        mkdir -p ~/.ai/bin/llava
        cp bin/llava bin/ggml-metal.metal ~/.ai/bin/llava
        
      2. Download LLaVA models from Hugging Face. You can choose the f16 model or a quantized version suitable for your system:
        mkdir -p ~/.ai/models/llava
        cd ~/.ai/models/llava
        wget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-f16.gguf
        wget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/mmproj-model-f16.gguf
        
      3. Describe an image (e.g., input-picture.jpg):
        cd ~/.ai/models/llava
        ~/.ai/bin/llava/llava ggml-model-f16.gguf mmproj-model-f16.gguf ~/Desktop/input-picture.jpg
        
    • The model repository on Hugging Face is mys/ggml_llava-v1.5-7b (used in the wget commands above), and the LLaVA branch lives in the llama.cpp repository.

Feel free to explore LLaVA and create intriguing image descriptions! 📸🌟
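To save retyping, the run step above can be wrapped in a small shell function (a sketch assuming the `~/.ai` install layout from the steps above; `describe_image` is just a name I made up):

```shell
# Wrapper around the llava binary and models installed in the steps above;
# paths follow the ~/.ai layout used there.
describe_image() {
  ~/.ai/bin/llava/llava \
    ~/.ai/models/llava/ggml-model-f16.gguf \
    ~/.ai/models/llava/mmproj-model-f16.gguf \
    "$1"
}
```

Usage: `describe_image ~/Desktop/input-picture.jpg`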


Downloading the f16 GGUF can be a problem on Windows 11; Linux handles it better. Running natively is fine, but WSL gave me trouble with the CUDA-ready custom kernel config, failing with error 0x80070057 ("The parameter is incorrect"). If you know how to fix that, you are home with WSL and the f16 GGUF for LLaVA.

AI-Messenger:

https://github.com/shinomakoi/AI-Messenger 

For video generation, instead of the f16 model and AI-Messenger, you can use e.g.:


README.md · ali-vilab/text-to-video-ms-1.7b at main (huggingface.co)

It's installed via pip and driven by short Python 3 programs.
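A minimal sketch of that workflow, assuming the `diffusers` API shown on the model card linked above (the repo id there is `damo-vilab/text-to-video-ms-1.7b`; the download is several GB and wants a CUDA GPU, so treat this as a starting point, not a verified recipe):

```shell
pip install diffusers transformers accelerate torch

python3 - <<'PY'
import torch
from diffusers import DiffusionPipeline

# Repo id and fp16 variant as shown on the model card; this downloads the weights.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()  # helps fit into limited VRAM

frames = pipe("a panda eating bamboo", num_inference_steps=25).frames
print(len(frames), "frames generated")
PY
```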


Regarding Ollama: a web UI that works with Ollama and lets you add more models (e.g. MidJourney) is here: GitHub - liltom-eth/llama2-webui: Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for Generative Agents/Apps.



Cheers

@BrainLaundryMaster
