Python Llama 2. We are unlocking the power of large language models.

Llama 2, developed by Meta AI and released in 2023, is an open-source model proficient in a wide range of natural language processing (NLP) tasks, such as text generation, text summarization, question answering, code generation, and translation. It comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations, and it continues Meta's open approach to generative AI. The Hugging Face integration was contributed by zphang, with contributions from BlackSamorez, and there are community repositories containing custom implementations of the model as described in the paper "Llama 2: Open Foundation and Fine-Tuned Chat Models".

Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts; it was developed by fine-tuning Llama 2 using a higher sampling of code. Code Llama Python is a language-specialized variation of Code Llama, further fine-tuned on 100B tokens of Python code. Because Python is the most benchmarked language for code generation, and because Python and PyTorch play an important role in the AI community, a specialized model provides additional utility.

To install Python, visit the Python website, where you can choose your OS and download the version of Python you like. Then create a project directory and a Python virtual environment (I chose the name llama2 for the virtual environment):

mkdir llm
python3.11 -m venv llama2

llama-cpp-python is a Python binding for llama.cpp. To install the package, run:

pip install llama-cpp-python

This will also build llama.cpp from source and install it alongside the Python package; if this fails, add --verbose to the pip install to see the full cmake build log. It is also possible to install a pre-built wheel with basic CPU support. To install the server package and get started:

pip install llama-cpp-python[server]
python3 -m llama_cpp.server --model models/7B/llama-model.gguf

To get model weights, you can enter a model repo such as TheBloke/Llama-2-13B-chat-GGUF and, below it, a specific filename to download, such as llama-2-13b-chat.q4_K_M.gguf, then click Download. Alternatively, download and install Ollama and pull a chat model:

ollama pull llama2:13b-chat

Other routes include running Llama 2 on your own Mac using the LLM tool and Homebrew, and llama-gpt (getumbrel/llama-gpt), a self-hosted, offline, ChatGPT-like chatbot that is 100% private, with no data leaving your device, and which now supports Code Llama as well (using a q2_K quantization as its LLM, for example). LangChain is a powerful, open-source framework designed to help you develop applications powered by a large language model; a blog-generation platform built with it lives at https://github.com/krishnaik06/Complete-Langchain-Tutorials/tree/main/Blog%20Generation, and there is a notebook on how to run the Llama 2 chat model with 4-bit quantization on a local computer or Google Colab. Small-scale training projects exist too: we can train a toy Llama2 model with python baby_llama/run.py model=llama2. One walkthrough uses the llama-cpp-python module (installed via pip) with the 7B chat "Q8" version of Llama 2; a minimal example of that pattern follows.
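As a quick smoke test, the binding can be driven from a few lines of Python. This is a minimal sketch rather than code from any of the original posts: the model path and prompt are placeholders, so point model_path at whichever GGUF file you actually downloaded.

```python
from llama_cpp import Llama

# Hypothetical path: use the GGUF file you downloaded above.
llm = Llama(model_path="./models/llama-2-13b-chat.q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: What is Llama 2? A:",
    max_tokens=128,   # stop after 128 generated tokens
    stop=["Q:"],      # cut off before the model invents a follow-up question
    echo=False,       # do not repeat the prompt in the output
)
print(output["choices"][0]["text"])
```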
Let's dive in and get started with Llama 2. Before we start, we need to get access to the model: fill in Meta's "Request access to Meta Llama" form (first name, last name, date of birth, and so on), and read the license document carefully to make sure the intended use case doesn't violate the Acceptable Use Policy. Meta says they may need up to 2 days to process the request. The download links might change, but a single-node, "bare metal" setup is similar to the steps below; ensure you can use the model via python3 before building anything on top of it.

Where did Llama 2 come from? The Llama 2 family of large language models (LLMs) is a collection of pre-trained and fine-tuned generative text models, available in 7B, 13B, and 70B parameter variants; links to other models can be found in the index at the bottom. Its improved performance, open-source nature, and commitment to safety and transparency make Llama 2 a promising model for a wide range of applications, and as more developers and researchers gain access, we can expect to see a surge in innovative AI-powered solutions. You can deploy and use Llama 2 foundation models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK, fine-tune them through Amazon SageMaker JumpStart, and fine-tune Llama 2 on Trainium instances in both the SageMaker Studio UI and the SageMaker Python SDK.

Several resources are worth knowing about. The goal of the llama-recipes repository is to provide a scalable library for fine-tuning Meta Llama models, along with example scripts and notebooks to quickly get started in a variety of use cases, from fine-tuning for domain adaptation to building LLM-based applications. Llama for Python Programmers is a course designed for programmers who want to leverage the Llama 2 LLM and take advantage of the generative artificial intelligence (AI) revolution. For Chinese-language Llama models, see (and contribute to) LBMoon/Llama2-Chinese on GitHub.

For the chatbot built in this guide, we will use Python to write the script that sets up and runs the pipeline. You will write your code in llama_chatbot.py and store your secret keys and API tokens in an env file (.env); wherever the code says 'Your_API_Token', replace it with your actual API token, as shown in the sketch below. (A related forum question asked for a script that prompts the user for a web address and business type, visits the address, finds matching businesses and any email addresses connected to them, and writes the business name and email address to a Google spreadsheet; the same building blocks apply there.)
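Here is a small sketch of the secrets-handling step. It assumes the python-dotenv package and a hypothetical variable name LLAMA_API_TOKEN; the original posts only say that tokens live in .env, not what they are called.

```python
# llama_chatbot.py -- minimal secrets loading, assuming python-dotenv is installed
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from the .env file into the environment

# Hypothetical key name; match whatever you put in your .env file.
API_TOKEN = os.getenv("LLAMA_API_TOKEN")
if API_TOKEN is None:
    raise RuntimeError("LLAMA_API_TOKEN is not set; add it to your .env file")
```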
As with Llama 2, considerable safety mitigations were applied to the fine-tuned versions of the model. To successfully complete this guide, you will need Python 3.9 or higher, the Streamlit library, the UbiOps client library, and a UbiOps account; the tutorial then proceeds in steps (Step 1: prerequisites and dependencies; Step 2: preparing the data; Step 3: querying the model with prompts). Before we get started, you will also need to install panel==1.3, ctransformers, and langchain. You can use the Panel chat interface to build an AI chatbot with the Mistral 7B model, or build an AI chatbot with both Mistral 7B and Llama2 using LangChain. For hosted inference, the three Llama 2 chat models (llama-7b-v2-chat, llama-13b-v2-chat, and llama-70b-v2-chat) are hosted on Replicate; next, set the global variables of the llama-2-70b-chat model. In this post (and an accompanying video by @DataProfessor), we build a Llama 2 chatbot in Python using Streamlit for the frontend, while the LLM backend is handled through API calls to the Llama 2 model hosted on Replicate.

LlamaIndex can be installed as a custom selection of integrations around the core package:

pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-llms-replicate
pip install llama-index-embeddings-huggingface

Examples are in the docs/examples folder, including how to build a simple vector store index; indices are kept in the indices folder.

Here's a hands-on demonstration of how to create a local chatbot using LangChain and Llama 2: initialize a Python virtualenv and install the required packages. Llama 2 is an LLM developed by Meta, available with 7B, 13B, and 70B parameters; compared with Llama 1 it adds, among other improvements, a longer context length (4,000 tokens) and grouped-query attention for fast inference with the 70B model. Define the model: we are using "llama-2-7b-chat" (the ggmlv3 q4_0 quantization is a common choice) along with some other hyperparameters to tune it. For the PDF-context demo, also install PyPDF2 (pip install llama-cpp-python PyPDF2): PyPDF2 lets us take a PDF and read it in Python, and we will use it later to give Llama 2 context; once the libraries are installed, we import them and load our Llama 2 model. When everything is set up, just run your chat script in the terminal, passing the model path with --model, to start the Llama 2 model and interact with it.

If you want the llama2.c toolchain instead: Step 1, get the Llama 2 checkpoints by following the Meta instructions; once we have those checkpoints, we have to convert them into the llama2.c format. This requires three steps and takes around 1 hour: install the Python dependencies (pip install -r requirements.txt) and then use the export.py script.

You can also containerize the server. The Dockerfile creates a Docker image that starts the model server; afterwards you can build and run the Docker container with:

docker build -t llama-cpu-server .
docker run -p 5000:5000 llama-cpu-server

To re-try after you tweak your parameters on a GPU machine, open a terminal and run nvidia-smi, find the process ID (PID) under Processes, and run kill [PID]; you may also need to re-start your notebook from the beginning. A minimal LangChain version of the local chatbot follows.
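The LangChain demo reduces to a few lines once a quantized model is on disk. This is a minimal sketch, assuming the langchain-community package and a local GGUF file; the exact import path depends on your LangChain version, and the model path is a placeholder.

```python
from langchain_community.llms import LlamaCpp

# Hypothetical local path to the quantized chat model discussed above.
llm = LlamaCpp(
    model_path="./models/llama-2-7b-chat.Q4_0.gguf",
    n_ctx=2048,       # context window in tokens
    temperature=0.7,  # sampling temperature
)

print(llm.invoke("Explain grouped-query attention in one sentence."))
```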
Llama 2 is a collection of pretrained and fine-tuned generative text models designed for dialogue use cases; it is an auto-regressive language model that uses an optimized transformer architecture. The fine-tuned LLMs, called Llama-2-chat, are optimized for dialogue, outperform open-source chat models on most benchmarks, and are on par with popular closed-source models in human evaluations for helpfulness and safety. (Figure: Llama 1 vs Llama 2 benchmarks; source: huggingface.co.) For comparison with the successor family: according to a Hugging Face model page, Llama-3 8B got a 66.6 score compared to 45.7 for Llama-2 7B in the MMLU (Massive Multitask Language Understanding) benchmark, and Llama-3 also got a 72.6 score in CommonSense QA (a dataset for commonsense question answering). Llama 3 has since been integrated into Meta AI, and you can see its performance first-hand by using Meta AI for coding tasks and problem solving. There is also a plugin for the LLM utility that adds support for Llama 2 and many other llama-cpp compatible models.

Code Llama is built on top of Llama 2 and is available in three models: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, tuned to follow instructions. It has been released with the same permissive community license as Llama 2, is available for commercial use, and is free for research; integration is available throughout the Hugging Face ecosystem.

On weight formats and downloads: GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. To fetch GGUF files I recommend using the huggingface-hub Python library, which also works on the command line, including multiple files at once; a sketch follows below. There is another high-speed way to download the checkpoints and tokenizers, the pyllama package:

python -m llama.download                   # download all models
python -m llama.download --model_size 7B   # only the 7B files

To merge LoRA weights with the meta-llama/Llama-2-7b-hf base model, simply run the following script:

python merge_lora_model.py results/final_checkpoint/ results/merged_model/

To use the raw Meta checkpoints with llama.cpp-style tooling, cd llama.cpp; first, you need to unshard the model checkpoints to a single file. Let's do this for the 30B model:

python merge-weights.py --input_dir D:\Downloads\LLaMA --model_size 30B

Here D:\Downloads\LLaMA is the root folder of the downloaded torrent with weights; this will create a merged.pth file in the root folder of this repo. There are four models (7B, 13B, 30B, 65B) available this way.

One quirk of the LLaMA tokenizer, a BPE model based on sentencepiece, is that when decoding a sequence, if the first token is the start of a word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string. Finally, a build note: after a recent llama.cpp refactor the cmake build needed a small update, and as of version 0.80 the build should work correctly, with Gemma2 also supported.
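For the huggingface-hub route, here is an illustrative sketch rather than code from the original post; hf_hub_download is the library's standard download call, and the repo/filename pair matches the GGUF repository mentioned above.

```python
from huggingface_hub import hf_hub_download

# Downloads one quantized file from the TheBloke/Llama-2-13B-chat-GGUF repo
# and returns the local path (cached under ~/.cache/huggingface by default).
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-chat-GGUF",
    filename="llama-2-13b-chat.q4_K_M.gguf",
)
print(model_path)
```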
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. llama.cpp itself is a plain C/C++ implementation optimized for Apple silicon and x86 architectures, supporting various integer quantization schemes and BLAS libraries; its objective is to run the LLaMA model with 4-bit integer quantization on a MacBook (on macOS, GPU support can be a hassle, so CPU-only is a reasonable choice). There is a guide for setting up and running Llama 2 on Mac systems with Apple silicon: it covers installing prerequisites like Python and Git, cloning the necessary repositories, downloading and converting the Llama models, and finally running the model with example prompts. You can run GGUF quantizations with llama.cpp or any of the projects based on it; a summary of mentioned or recommended projects includes llama, llama.cpp, llama2-chatbot, cog-llama-template, OpenPipe, cog, and ollama (whose tagline is "get up and running with Llama 3, Mistral, Gemma 2, and other large language models").

llama-cpp-python wraps llama.cpp, which makes it easy to use the library in Python; it supports inference for many LLM models, which can be accessed on Hugging Face, and there is a notebook that goes over how to run llama-cpp-python within LangChain. When you start python3 -m llama_cpp.server, it will use llama.cpp as the backend by default to run llama-2-7b-chat, and this allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc.). The example uses a quantized llama-2-7b-chat.Q4_0.gguf model stored locally at ~/Models/llama-2-7b-chat.Q4_0.gguf; remember to replace the model version as needed. To split the model across GPUs, see the llama_cpp.LLAMA_SPLIT_* options: main_gpu (int, default 0) is interpreted according to split_mode, meaning the GPU used for the entire model under LLAMA_SPLIT_NONE, the GPU used for small tensors and intermediate results under LLAMA_SPLIT_ROW, and ignored under LLAMA_SPLIT_LAYER.

Sampling is worth understanding: token generation can use greedy decoding, random sampling, top-k sampling, top-p sampling, and their variants including temperature scaling (a sketch of top-p follows below). Prompting large language models like Llama 2 is an art and a science, and poorly assembled prompts show up quickly: in one retrieval setup, the response from Llama-2 directly mirrors one piece of context and includes no information from the others, and with an oversized max_new_tokens the model produces many newlines after the answer (if the answer is 100 tokens and max_new_tokens is 150, you get 50 newlines).

Beyond NVIDIA hardware, the latest release of Intel Extension for PyTorch (v2.1.10+xpu) officially supports Intel Arc A-series graphics on WSL2, built-in Windows, and built-in Linux, and Llama 2 7B and Llama 2-Chat 7B inference has been demonstrated on Intel Arc A770 graphics on Windows and WSL2. In the web UI workflow, run the code in the second code cell to download the 7B version of LLaMA 2 to run the web UI with:

!python download-model.py TheBloke/Llama-2-7B-GPTQ

This is the GPTQ-optimized version of the model, which significantly reduces the cost of running the model by using quantization.
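To make the sampling menu concrete, here is a small self-contained sketch of temperature-scaled nucleus (top-p) sampling over a logits vector. It is illustrative only and not taken from llama.cpp; greedy decoding falls out as the limiting case of always taking the argmax.

```python
import numpy as np

def sample_top_p(logits, p=0.9, temperature=1.0):
    """Temperature-scaled nucleus (top-p) sampling over one logits vector."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())       # softmax, numerically stable
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]             # token ids, most to least likely
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1 # smallest set covering mass >= p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(np.random.choice(nucleus, p=nucleus_probs))

# Greedy decoding, by contrast, is simply: int(np.argmax(logits)).
example_logits = [2.0, 1.0, 0.5, -1.0]
print(sample_top_p(example_logits, p=0.9, temperature=0.8))
```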
In the Llama for Python Programmers course mentioned above, you'll learn how open-source LLMs can run on self-hosted hardware, made possible through techniques such as quantization, using the llama.cpp package. If you are troubleshooting, I'd strongly suggest you start by getting llama.cpp to work as a command line tool; if you have that going, then you're in a good place to try to configure the Python bindings to have identical behavior (with the question narrowly focused on the bindings themselves, and the larger hardware/OS questions safely out of scope). A typical question of this kind: "I would like to use Llama 2 7B locally on my Windows 11 machine with Python. I have a conda venv installed with CUDA and PyTorch with CUDA support and Python 3.10, so I am ready to go."

Mind the memory budget. There is no way to run a Llama-2-70B chat model entirely on an 8 GB GPU alone, not even with quantization (see the file and memory sizes of the Q2 quantization); your best bet to run Llama-2-70B, combined with your system memory, is maybe. Note that new versions of llama-cpp-python use GGUF model files; this is a breaking change, so match the library version to the model version as needed (one post, for example, pins llama-cpp-python 0.1.77 for a GGML-era model). One Japanese walkthrough summarizes local execution in three steps: (1) prepare a Python virtual environment; (2) prepare the Llama 2 model (llama-2-7b-chat.ggmlv3.q4_0.bin); (3) install the packages; otherwise the setup is the same as in the previous post.

llama2-wrapper is the backend of llama2-webui, which can run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Install it and start an OpenAI-compatible API:

pip install llama2-wrapper
python -m llama2_wrapper.server

Meta's announcement framed Llama 2 as an updated version of Llama 1, trained on a new mix of publicly available data, with significantly improved performance; it is the latest commercially usable, openly licensed large language model, free for both research and commercial use, and the release includes model weights and starting code for pretrained and fine-tuned Llama language models. One deep dive (Llama 2 vs ChatGPT) covers how to format chat prompts, when to use which Llama variant, when to use ChatGPT over Llama, how system prompts work, and assorted tips and tricks. On the fine-tuning side, Fine-tune LLaMA 2 (7B-70B) on Amazon SageMaker is a complete guide from setup to QLoRA fine-tuning and deployment; finally, we showcase how to perform inference using the fine-tuned model and how it compares against the baseline model, comparing the fine-tuned Llama 2 model with the pre-trained one to show the effectiveness of fine-tuning.

Three places where you can find hosted Llama 2 models: 1. https://llama2.ai 2. https://labs.perplexity.ai/ 3. https://huggingface.co/chat. Happy prompting! And because the local servers speak the OpenAI protocol, a client sketch follows.
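Since both llama_cpp.server and llama2-wrapper expose OpenAI-compatible endpoints, any OpenAI client library can talk to them. This sketch makes a few assumptions: the openai Python package is installed, the server is listening on localhost port 8000 (llama-cpp-python's default), and the model name is just a label the local server may ignore.

```python
from openai import OpenAI

# Local server started with: python3 -m llama_cpp.server --model models/7B/llama-model.gguf
# Port 8000 is llama-cpp-python's default; adjust if your server differs.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="local-llama-2",  # hypothetical name; the local server typically ignores it
    messages=[{"role": "user", "content": "Give me one tip for prompting Llama 2."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```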
You'll explore how far Meta's models can be pushed with modest tooling. One project's goal is to bridge the existing gap by offering a clear-cut reference implementation encapsulating all transformer logic within a concise Python file not exceeding 500 lines of code; though the original facebook/llama is written in Python, its complexity is rather high due to multiple dependencies and sophisticated optimizations.

In one guide, I show how you can fine-tune Llama 2 to be a dialog summarizer. Last weekend, I wanted to fine-tune Llama 2 (which now reigns supreme in the Open LLM leaderboard) on a dataset of my own collection of Google Keep notes; each of my notes has both a title and a body, so I wanted to train Llama to generate a body from a given title. After fine-tuning Llama 2 with your own data, you can generate text with the fine-tuned model by running the following command in the terminal:

python run_generation.py --model_type gpt2 --model_name_or_path ./llama-2-chat-7B-finetuned --length 100 --prompt "USER: Hi, I want to cancel my subscription."

One important caveat to recognize is that fine-tuning is sometimes unnecessary: other approaches are easier to implement and, in some cases, better suited for the use case. (Figure 1: Llama2, the Python coder. Header image: a llama typing on a keyboard, by stability-ai/sdxl.)

For document question answering and summarization, begin by passing the raw text array from your PDF to Llama 2; in the example, we load a PDF document in the same directory as the Python application and prepare it for processing (see the sketch below). For multiple-document summarization, Llama 2 extracts text from the documents and utilizes an attention mechanism over the extracted passages. If you log the model with MLflow as the registered model llama2-gguf-chat, the chat.py program included in the logmodel GitHub tree is useful for testing the logged model.

For hosted usage, ChatLlamaAPI shows how to use LangChain with LlamaAPI, a hosted version of Llama 2 that adds support for function calling. The SDK allows your application to interact with LlamaAPI seamlessly, abstracting the handling of aiohttp sessions and headers for a simplified interaction. Install and import it with:

%pip install --upgrade --quiet llamaapi
from llamaapi import LlamaAPI

With the help of picoLLM Compression, compressed Llama 2 and Llama 3 models are small enough to run even on a Raspberry Pi, and the picoLLM Inference Engine also runs on Android, iOS, and web browsers; in just a few lines of code you can run LLM inference with Llama 2 and Llama 3 using the picoLLM Inference Engine Python SDK. We will also see how to use the llama-cpp-python library to run the Zephyr LLM, an open-source model based on the Mistral model.

Model-card notes for Llama 2 7B: this is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Input: models input text only. Output: models generate text only. For detailed information on model training, architecture and parameters, evaluations, responsible AI, and safety, refer to the research paper.
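The PDF step can be sketched in a few lines. This is a minimal illustration, assuming the PyPDF2 package (3.x API) and a file named document.pdf next to the script; the summarization prompt and truncation limit are my assumptions, not the original author's code.

```python
from PyPDF2 import PdfReader
from llama_cpp import Llama

# Extract the raw text from a PDF in the same directory as the script.
reader = PdfReader("document.pdf")  # hypothetical filename
raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Feed the extracted text to Llama 2 as context, truncated to fit the window.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_0.gguf", n_ctx=4096)
prompt = f"Summarize the following document:\n\n{raw_text[:6000]}\n\nSummary:"
print(llm(prompt, max_tokens=256)["choices"][0]["text"])
```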
The files here were downloaded locally from Meta: a folder llama-2-7b-chat containing checklist.chk, consolidated.00.pth, and params.json. Now I would like to interact with the model.
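Before wiring the checkpoint into an inference stack, it can help to confirm what was downloaded. A small sketch using only the standard library and the three files listed above:

```python
import json
from pathlib import Path

ckpt_dir = Path("llama-2-7b-chat")  # the folder downloaded from Meta

# List the checkpoint files (checklist.chk, consolidated.00.pth, params.json).
for f in sorted(ckpt_dir.iterdir()):
    print(f.name, f.stat().st_size, "bytes")

# params.json holds the model hyperparameters (dimensions, layer count, etc.).
with open(ckpt_dir / "params.json") as fh:
    print(json.load(fh))
```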