Saturday, 12 October 2024

Llama3 on a shoestring Part 2: Upgrading the CPU

Image generated locally with Stable Diffusion, also on a shoestring


In Part 1, llama3 ran but the resulting chatbot was frustratingly slow. A key problem is the pre-built Docker container, which requires the CPU to support AVX instructions before it will use the GPU. With a GPU you can get by without an AVX-capable CPU, but that means rebuilding from source code, much like what I did when installing GPU support for TensorFlow on the same CPU.

Docker provides a very quick and tempting way to test various LLM models without interfering with other Python installs, so it was worth having a quick look at what AVX is.

AVX instructions from 2011 CPUs
 

AVX is Advanced Vector Extensions, first shipped on Intel CPUs in 2011. My AMD Phenom II X4 was bought in 2009 and thus missed the boat. Now, the Phenom II sat in an AM3+ socket, so there was hope that a later AM3+ CPU might have AVX support. That turned out to be the AMD Bulldozer series, sold as the AMD FX-4000 to FX-8000 series, which do support AVX.
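Before ordering anything, you can confirm the AVX situation on Linux by checking the CPU flags the kernel reports in /proc/cpuinfo. A minimal check might look like this:

```shell
# Print "AVX supported" if the avx flag appears in /proc/cpuinfo,
# "No AVX" otherwise. The Phenom II X4 reports no avx flag;
# the Bulldozer FX parts should report one.
if grep -qw avx /proc/cpuinfo; then
    echo "AVX supported"
else
    echo "No AVX"
fi
```

The `-w` flag matches `avx` as a whole word, so it won't false-positive on related flags like `avx2`.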

AMD Bulldozer FX-4000 to FX-8000 series


Incredibly, they are still on sale online, with a Chinese vendor offering the FX-4100 for just RM26.40 (about USD6), up to an FX-6350 for RM148.50 (USD34). That fits my idea of a shoestring budget, so I plumped for the mid-range FX-6100 at RM49.50 (USD11.50).


 

AMD FX-6100 is now just RM49.50

The next thing to do was to check whether my equally ancient mainboard, the Asus M5A78LE, supports the FX-6100. The manual says it does support the FX series.

And since LLM programs require lots of memory, I might as well push my luck and fill it up. The M5A78LE takes a maximum of 32GB of DDR3 DRAM, twice my current 16GB. I picked up 4 x 8GB Kingston HyperX Fury Blue (1600MHz) for RM181.50 (USD42), so the whole upgrade cost me RM231 (USD53).
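Once the new DIMMs are in, it is worth confirming that Linux actually sees the full 32GB. One quick way:

```shell
# MemTotal in /proc/meminfo is reported in kB; convert to GiB.
awk '/^MemTotal:/ {printf "%.1f GiB\n", $2 / 1024 / 1024}' /proc/meminfo

# Optionally, list per-DIMM size and speed (needs root):
# sudo dmidecode -t memory | grep -E 'Size|Speed'
```

Expect MemTotal to read slightly under 32 GiB, since the kernel and firmware reserve a little for themselves.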


 

Happily, both worked without trouble, and where it had previously failed, the GPU-enabled Docker container now ran:

$ docker run -it --rm --gpus=all -v /home/heong/ollama:/root/.ollama:z -p 11434:11434 --name ollama ollama/ollama

2024/10/12 07:32:48 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

time=2024-10-12T07:32:48.673Z level=INFO source=images.go:753 msg="total blobs: 11"

time=2024-10-12T07:32:48.820Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"

time=2024-10-12T07:32:48.822Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"

time=2024-10-12T07:32:48.885Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"

time=2024-10-12T07:32:48.907Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"

time=2024-10-12T07:32:49.342Z level=INFO source=types.go:107 msg="inference compute" id=GPU-49ab809b-7b47-3fd0-60c1-f03c4a8959bd library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="11.2 GiB"

You can query it with curl:

$ curl http://localhost:11434/api/generate -d '{"model": "llama2","prompt": "Tell me about Jeeves the butler","stream": true,"options": {"seed": 123,"top_k": 20,"top_p": 0.9,"temperature": 0}}'

And the speed went up quite a bit. 
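To put an actual number on "quite a bit", the final JSON object from /api/generate includes eval_count (tokens generated) and eval_duration (in nanoseconds), from which you can work out tokens per second. A sketch, using placeholder figures rather than my own measurements:

```shell
# eval_count and eval_duration come from the last JSON object
# of the /api/generate response; the values below are examples.
eval_count=245
eval_duration=9800000000   # nanoseconds

awk -v c=$eval_count -v d=$eval_duration \
    'BEGIN {printf "%.1f tokens/s\n", c / (d / 1e9)}'
```

Anything in the double digits is comfortable for interactive chat; sub-1 token/s is roughly where the CPU-only setup was painful.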
