Contact Us
We read every email. We are a small team and try to reply within 3-5 business days. For the fastest response, please pick the right address below so we can route your message correctly.
How to reach us
- General questions, feedback, bug reports: Found a wrong VRAM number? A model that should not be in our database? Tell us and we will fix it.
- Hardware or model data corrections: If you have measured tokens-per-second on a real machine and our estimate is way off, we want to know. Please include your GPU model, the inference engine and version (e.g. Ollama 0.x.x, llama.cpp build XYZ), the model name and quantization, the context length, and your measured tokens/s.
- Editorial or content corrections: If something on a guide page is inaccurate, outdated, or unclear, point us at the URL and the specific paragraph.
- Press, partnerships, sponsorship: Briefly tell us who you are and what you have in mind. We do not accept paid placements inside articles, but we are happy to discuss display ad partnerships and product reviews.
- Privacy questions, GDPR / CCPA requests: Data subject access requests, deletion requests, or any privacy-related concern.
- Legal, copyright, DMCA: Trademark, copyright, or other legal notices.
Before you write
A few questions we often get are already answered on the site - it is faster to check there first:
- "How do you calculate VRAM?" - see the methodology page.
- "Which model should I start with on a laptop with 16 GB RAM?" - see the 2026 Model Guide.
- "What is Q4 vs Q5 vs Q8?" - see Choosing the Right Quantization.
- "Should I use Ollama or LM Studio?" - see Ollama vs LM Studio.
- General getting-started questions - see the FAQ.
What we cannot help with
- We cannot run a model for you. Everything on RunLocalModel happens client-side.
- We cannot recover lost model files, debug your specific Python environment, or troubleshoot CUDA driver issues. Those questions are better posted in the llama.cpp discussions or on the relevant model's Hugging Face page.
- We do not give individualized hardware purchasing advice over email. Use the home page checker with the GPU you are considering and you will get the same answer we would give.