
Large Language Models Speed Test
Let's test the LLMs' speed on GPU vs CPU
Comparing the prediction speed of several LLMs: llama3 (Meta/Facebook), phi3 (Microsoft), gemma (Google), and mistral (open source) on CPU and GPU.
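The post doesn't spell out the exact benchmarking harness, so here is a minimal timing sketch, assuming the models are pulled locally and served through ollama's Python client, with the num_gpu option set to 0 to force CPU-only inference; the prompt and model tags are illustrative placeholders.

```python
# Hypothetical benchmark sketch, not the exact harness used in the post.
# Assumes models are available locally via ollama; num_gpu=0 disables GPU layers.
import ollama

MODELS = ["llama3", "phi3", "gemma", "mistral"]
PROMPT = "Explain the difference between a CPU and a GPU in two sentences."

def tokens_per_second(model: str, use_gpu: bool) -> float:
    options = {} if use_gpu else {"num_gpu": 0}  # 0 GPU layers -> CPU-only inference
    resp = ollama.generate(model=model, prompt=PROMPT, options=options)
    # eval_count = generated tokens, eval_duration = generation time in nanoseconds
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

for model in MODELS:
    gpu_tps = tokens_per_second(model, use_gpu=True)
    cpu_tps = tokens_per_second(model, use_gpu=False)
    print(f"{model:10s}  GPU: {gpu_tps:6.1f} tok/s   CPU: {cpu_tps:6.1f} tok/s")
```

Tokens per second of generation is the usual headline number for this kind of comparison, since it factors out differences in response length between models.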
Let's test the logical fallacy detection quality of different LLMs
Comparing several LLMs: llama3 (Meta), phi3 (Microsoft), gemma (Google), mistral (open source), and qwen (Alibaba).
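The teaser doesn't show the evaluation setup, so here is a minimal sketch of how such a comparison could be scored, assuming the same local ollama chat API; the fallacy examples and the exact-match scoring are made up for illustration and are not the dataset used in the post.

```python
# Hypothetical evaluation sketch, not the post's actual test set or prompts.
import ollama

MODELS = ["llama3", "phi3", "gemma", "mistral", "qwen"]

# (statement, expected fallacy) pairs -- illustrative examples only
CASES = [
    ("Everyone is buying this phone, so it must be the best one.", "bandwagon"),
    ("You can't trust his argument about diet; he's overweight.", "ad hominem"),
]

def detect_fallacy(model: str, statement: str) -> str:
    messages = [
        {"role": "system",
         "content": "Name the logical fallacy in the user's statement. "
                    "Answer with the fallacy name only."},
        {"role": "user", "content": statement},
    ]
    resp = ollama.chat(model=model, messages=messages)
    return resp["message"]["content"].strip().lower()

for model in MODELS:
    correct = sum(expected in detect_fallacy(model, text) for text, expected in CASES)
    print(f"{model:10s} {correct}/{len(CASES)} correct")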
Quite some time ago I trained an object detection AI
On one cold winter day in July … that is, in Australia … I felt an urgent need to train an AI model for detecting uncapped concrete reinforcement bars…