Course: Large Language Models on AWS: Building and Deploying Open-Source LLMs


Compiling llama.cpp demo

- [Instructor] Here I have llama.cpp downloaded. How do I know that this code base is installed? Typically, if I forget, I look at the prompt. I can see that I'm on master because I'm using Oh My Zsh, which shows the branch. But I could also run git remote -v, and this tells me, oh, okay, good, I'm inside of a repo, here's exactly the origin, and I can go from there. Now, what I typically recommend is to optimize the compile if you're going to be working with this. In my particular situation here, what I would typically do is optimize the compile for my architecture. So in this case, let's break down the command. First, the time command is something I'll typically prepend on a first compile, just as a sanity check, so I can see how long it took to compile. Now, if we just run it real quick, you can see here it says 0.166 total, so it's already been compiled and I don't have to do anything different. But in the case of…
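The two checks described above can be reproduced from a shell inside the llama.cpp checkout. This is a minimal sketch, assuming the repo's classic Makefile build; the -j parallelism value and the architecture note are illustrative assumptions, not the instructor's exact command:

    # Confirm this directory is a git clone and show its origin remote,
    # the same check the instructor does instead of trusting the prompt.
    git remote -v

    # Prefix the build with `time` as a sanity check on compile duration.
    # -j"$(nproc)" parallelizes across cores (Linux; on macOS use
    # `sysctl -n hw.ncpu` instead). Adjust for your machine.
    time make -j"$(nproc)"

    # Architecture-specific optimization flags vary by llama.cpp version;
    # check the repo's build documentation for your release.

On a repeat run with nothing left to rebuild, zsh's time reports something like "0.166 total", matching the output shown in the video.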
