Exploring LLMs with Ollama and Llama3

I am going to use Node.js and Windows 11, but everything here should work the same on Linux or macOS; I just like my Windows machine for personal use.

Steps to install the LLMs:

1) Install Ollama

This can be done from the Ollama website https://ollama.com/ or with curl like this:

curl -fsSL https://ollama.com/install.sh | sh

2) Pull Llama3 with Ollama

Ollama works something like a package manager, but for LLMs, so to download Llama3 use the following command:

ollama pull llama3

3) After doing so you can use Llama3 on your terminal

ollama run llama3 "make a hello world in C++"

Steps to use it in useful scenarios:

1) Set up a Node.js/npm project


mkdir personal-llama
cd personal-llama
npm init -y

We need to install a few packages:

ollama, which is a client for the Ollama server running on the same machine.
sqlite3 (plus the sqlite wrapper package, whose promise-based open() helper we use below), because I want some personal data available so the model can base its responses on me.
chalk for nice logs.
nanoid for tracking ids.

npm install ollama sqlite sqlite3 chalk nanoid
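
With the dependencies installed, the top of the script imports everything. This is a minimal sketch; the file name index.js and the ESM setup ("type": "module" in package.json) are my assumptions, chosen because the snippets below rely on top-level await:

// index.js: assumes "type": "module" in package.json so ESM imports and top-level await work
import { Ollama } from 'ollama';
import sqlite3 from 'sqlite3';
import { open } from 'sqlite';   // promise-based wrapper that provides open()
import chalk from 'chalk';
import { nanoid } from 'nanoid';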

2) Set up Ollama and the DB

First we need to connect to our Ollama server, which listens on port 11434 by default:

const ollama = new Ollama({ host: 'http://localhost:11434' });

Also take the question from the arguments, or in whatever way you prefer:

const question = process.argv.slice(2).join(' ');
if (!question) {
  console.error(chalk.red('Error: Please provide a question'));
  process.exit(1);
}

Then connect to the database with SQLite (the open() helper comes from the sqlite wrapper package installed above):


const PERSONAL_DATABASE_FILE = 'personal.sqlite';
const db = await open({
  filename: PERSONAL_DATABASE_FILE,
  driver: sqlite3.Database
});
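
The chat call in the next step interpolates a familyAndFriendsData string. One way to build it is to query the database and join the rows; this is a minimal sketch that assumes a hypothetical friends_and_family table with name, relationship, and notes columns that you have seeded with your own data:

// Hypothetical table holding the personal data the model should know about
await db.exec(`CREATE TABLE IF NOT EXISTS friends_and_family (
  name TEXT, relationship TEXT, notes TEXT
)`);

const rows = await db.all('SELECT name, relationship, notes FROM friends_and_family');
const familyAndFriendsData = rows
  .map(row => `${row.name} (${row.relationship}): ${row.notes}`)
  .join('\n');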
  

3) Prepare the response with Ollama and Llama3

Use the chat function of the ollama client, which takes a system message so the model knows the context and the data it should keep in mind.


const response = await ollama.chat({
  model: 'llama3',
  messages: [
    {
      role: 'system',
      content: `
      You are a personal assistant of ---Put your name----
      Friends And Family Data: ${familyAndFriendsData}
      `
    },
    {
      role: 'user',
      content: `
      Question: ${question}
      `
    }
  ],
  requestId: nanoid() // id for tracking this request
});
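
The WhatsApp step below reads this script's stdout and looks for a "Response:" keyword, so finish the script by printing the model's answer behind that prefix (a small sketch; response.message.content is where the ollama client puts the reply):

// Print the answer behind a "Response:" keyword so a caller can extract it from stdout
console.log(chalk.green(`Response: ${response.message.content}`));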

Now Use Ollama with WhatsApp:

1) Set up your whatsapp-web.js project

You can set up the WhatsApp project by using this repo: whatsapp bot repo. There you will find the client.on("message") callback; a minimal skeleton is sketched below, and inside that callback you need to use the code that follows it.
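
If you prefer to start from scratch instead of the repo, a minimal whatsapp-web.js skeleton looks roughly like this (a sketch; LocalAuth for session persistence and qrcode-terminal for showing the login QR code are my choices, not part of the original setup):

const { Client, LocalAuth } = require('whatsapp-web.js');
const qrcode = require('qrcode-terminal');

const client = new Client({ authStrategy: new LocalAuth() });

client.on('qr', qr => qrcode.generate(qr, { small: true })); // scan this with your phone once
client.on('ready', () => console.log('WhatsApp client is ready'));

client.on('message', async (message) => {
    // the getLlamaAnswer code below goes here
});

client.initialize();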


const { execSync } = require('child_process');

const getLlamaAnswer = async (message) => {

    // Run the personal-llama script synchronously; LLAMA_DIR is the directory
    // of the Node.js project created in the previous steps.
    const output = execSync(`npm run start "${message.body}"`, { cwd: LLAMA_DIR });
    const stringOutput = output?.toString();

    if (!stringOutput) {
        return "Lo siento, no tengo una respuesta para eso."; // "Sorry, I don't have an answer for that."
    }

    // The script prints the answer behind a "Response:" keyword; keep everything after it.
    const keyword = "Response:";
    const responseIndex = stringOutput.indexOf(keyword);

    if (responseIndex === -1) {
        return "Lo siento, no tengo una respuesta para eso.";
    }

    const extractedResponse = stringOutput.slice(responseIndex + keyword.length).trim();
    console.log(extractedResponse);

    return extractedResponse;
};

const output = await getLlamaAnswer(message);

await client.sendMessage(message.from, `${output}`);
  

This executes the script that we created in the previous steps; remember LLAMA_DIR is the directory of your Node.js project. It runs on the local machine and returns the output. In my case I had a console.log at the end of my first script with "Response:" at the beginning, which is why I have this extractedResponse step.

Results? Here are some images:

A complete conversation on WhatsApp

So the AI used some imagination in its responses
