It is possible to load and run 14-billion-parameter LLMs on a Raspberry Pi 5 with 16 GB of memory ($120). However, generation is slow, at roughly 0.6 tokens per second; a 13-billion-parameter model runs somewhat faster, at 1.36 tokens per second. A firmware update with improved SDRAM timing further improved these results.
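To put those rates in perspective, a quick back-of-the-envelope estimate of response latency (the 256-token reply length is an illustrative assumption, not from the measurements above):

```python
def generation_time_s(num_tokens: int, tokens_per_second: float) -> float:
    """Estimate wall-clock seconds to generate a response of num_tokens
    at a steady decode rate of tokens_per_second."""
    return num_tokens / tokens_per_second

# A hypothetical 256-token reply at the measured rates:
t_14b = generation_time_s(256, 0.6)   # 14B model at 0.6 tok/s
t_13b = generation_time_s(256, 1.36)  # 13B model at 1.36 tok/s
print(f"14B: {t_14b / 60:.1f} min, 13B: {t_13b / 60:.1f} min")
# → 14B: 7.1 min, 13B: 3.1 min
```

So even a modest reply takes minutes on this hardware, which is why the firmware's SDRAM-timing gains matter for a memory-bandwidth-bound workload like LLM decoding.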