课程: Large Language Models on AWS: Building and Deploying Open-Source LLMs
Implications of Amdahl’s law: A walkthrough
- [Instructor] Let's take a look at parallel compilation with llama.cpp. This is really common when you're dealing with large language models: you git clone a project and compile it locally on your machine, so it's important to understand some of the implications of compiling. First up, we look at some real data from my Lambda box, which has a 24-core, 48-thread Threadripper, and what it really exposes is Amdahl's law in practice via compilation. On the x-axis we have parallel jobs, in this case the -j flag passed to make, and every time you increase that number you add more threads. Now, some of those threads may be IO bound, so the CPU isn't the limiting factor, and eventually you run out of gains from adding threads. The blue line tracks compilation time against the left y-axis, and the green line shows CPU utilization against the right y-axis. The yellow reference line here…
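To make those diminishing returns concrete, here is a minimal Python sketch of Amdahl's law applied to a parallel build. The 90% parallelizable fraction and the 600-second single-job baseline are illustrative assumptions, not measurements from the instructor's Threadripper demo; the point is only that once the serial portion of the build dominates, going from -j24 to -j48 barely moves the needle.

    # Minimal sketch (illustrative only) of Amdahl's law for a parallel build.
    # ASSUMPTIONS: the serial fraction (linking, dependency scanning, I/O) is ~10%
    # and `make -j1` takes ~600 seconds; neither number comes from the course demo.

    def amdahl_speedup(parallel_fraction: float, jobs: int) -> float:
        """Speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / jobs)

    P = 0.90          # assumed parallelizable fraction of the build
    BASELINE_S = 600  # assumed wall-clock time for `make -j1`, in seconds

    for jobs in (1, 2, 4, 8, 16, 24, 32, 48):
        speedup = amdahl_speedup(P, jobs)
        print(f"-j{jobs:<3} speedup {speedup:5.2f}x  est. build {BASELINE_S / speedup:6.1f}s")

Running this shows the curve flattening: the modeled speedup at -j24 is about 7.5x, and doubling to -j48 only nudges it toward the 1/(1-p) = 10x ceiling, which mirrors the plateau in the compile-time curve described above.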
Contents
- Implications of Amdahl's law: A walkthrough (4 min 5 sec)
- Compiling llama.cpp demo (4 min 17 sec)
- GGUF file format (3 min 18 sec)
- Python UV scripting (3 min 55 sec)
- Python UV packaging overview (1 min 59 sec)
- Key concepts in llama.cpp walkthrough (4 min 37 sec)
- GGUF quantized llama.cpp end-to-end demo (4 min 3 sec)
- Llama.cpp on AWS G5 demo (4 min 20 sec)