Course: Large Language Models on AWS: Building and Deploying Open-Source LLMs
Compiling llama.cpp demo
- [Instructor] Here I have llama.cpp downloaded. How do I know that this code base is installed? Typically what I do, if I forget, is look at the prompt, right? And I can see that it's on master because I'm using Oh My Zsh. But I also could do a `git remote -v`, and this tells me, oh, okay, good, I'm inside of a repo. Here's exactly the origin, and I can go from there. Now, what I would typically recommend is to optimize the compile if you're going to be working with this. In my particular situation here, what I would typically do is actually optimize the compile for my architecture. So in this case, let's break down the command. First step: the `time` command is typically something I'll use when I do a first compile, just so I have a sanity check and can look at how long it took to compile. Now if we just run it real quick, you can see here it says, okay, you know, 0.166 total. So it's already been compiled; I don't have to do anything different. But in the case of…
Contents
- Implications of Amdahl's law: A walkthrough (4m 5s)
- Compiling llama.cpp demo (4m 17s)
- GGUF file format (3m 18s)
- Python UV scripting (3m 55s)
- Python UV packaging overview (1m 59s)
- Key concepts in llama.cpp walkthrough (4m 37s)
- GGUF quantized llama.cpp end-to-end demo (4m 3s)
- Llama.cpp on AWS G5 demo (4m 20s)