
How to build Ollama to run LLMs on RISC-V Linux

RISC-V is the newest entrant in the SBC/low-end desktop space, and since I'm in possession of a HiFive Premier P550 motherboard, I'm running it through my usual gauntlet of benchmarks—partly to see how fast it is, and partly to gauge how far along RISC-V support is across a wide swath of Linux software.

From my first tests on the VisionFive 2 back in 2023 to today, RISC-V has seen quite a bit of growth, fueled by economics, geopolitical wrangling, and developer interest.

The P550 uses the ESWIN EIC7700X SoC, and while its CPU isn't fast by modern standards, it is fast enough—and the system has enough RAM and IO—to run most modern Linux-y things. Including llama.cpp and Ollama!

Compiling Ollama for RISC-V Linux

I'm running Ubuntu 24.04.1 on my P550 board, and when I try running Ollama's simple install script, it bails out immediately: Ollama only publishes amd64 and arm64 Linux binaries, so the script rejects riscv64 as an unsupported architecture.
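Ollama is a Go program wrapping llama.cpp, though, and both build on riscv64, so you can compile it from source instead. Here's a rough sketch of the process, assuming the generic build steps from Ollama's docs/development.md; the exact steps, Go version requirements, and the model name below are illustrative and vary by release:

    # Build dependencies on Ubuntu 24.04. Ollama may want a newer Go than
    # the distro package provides; grab one from go.dev if the build complains.
    sudo apt install -y git golang cmake build-essential

    # Fetch the Ollama source
    git clone https://github.com/ollama/ollama.git
    cd ollama

    # Build the bundled llama.cpp runners, then the ollama binary itself.
    # Older releases build the runners with 'go generate ./...'; newer ones
    # drive it through cmake/make, so check docs/development.md in your checkout.
    go generate ./...
    go build .

    # Start the server and try a small model (CPU-only inference, so be patient)
    ./ollama serve &
    ./ollama run tinyllama

On a CPU this slow, stick to small quantized models; anything beyond a few billion parameters will be painful.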

If AI chatbots are the future, I hate it

[Image: AT&T Fiber Internet speedtest graph]

About a week ago, my home Internet (AT&T Fiber) went from the ~1 Gbps I pay for down to about 100 Mbps (see how I monitor my home Internet with a Pi). Since latency was fine, it wasn't too inconvenient, and I considered waiting it out to see if the speed recovered on its own.

But as you can see around 7/7 on that graph, the 100 Mbps dropped to about 8 Mbps, and that's the point where my wife starts noticing how slow the Internet is. Action level.

So I fired up AT&T's support chat. I'm a programmer; I can usually find my way around the wily ways of chatbots.

Except AT&T's AI-powered chatbot seems to have a fiendish tendency to equate 'WiFi' with 'Internet', no doubt due to so many people thinking they are one and the same.