{"id":6712,"date":"2024-07-02T09:58:51","date_gmt":"2024-07-02T08:58:51","guid":{"rendered":"https:\/\/rewirenow.com\/?p=6712"},"modified":"2025-08-20T14:32:47","modified_gmt":"2025-08-20T13:32:47","slug":"fine-tuning-hallucinations-in-llms","status":"publish","type":"post","link":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/","title":{"rendered":"Fine-tuning hallucinations in LLMs"},"content":{"rendered":"\n\n<section class=\"container-block   \"  style=\"\" >\r\n        <div class=\"block  container  container--1024  width-under--laptop  wysiwyg\">\r\n        \n<div class=\"at-spacer at-spacer--69d4f0703a50e\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703a50e {\n    height: 10px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703a50e {\n        height: 10px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703a50e {\n        height: 10px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703a50e {\n        height: 10px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<p>This article originally <a class=\"\" href=\"https:\/\/www.linkedin.com\/pulse\/fine-tuning-hallucinations-wouter-huygen-76c1e\/\">published on LinkedIn<\/a>. The writer, <a class=\"\" href=\"https:\/\/www.linkedin.com\/in\/whuygen\/\">Wouter Huygen<\/a> is partner and CEO at Rewire. 
<\/p>\n\n\n<div class=\"at-spacer at-spacer--69d4f0703a71e\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703a71e {\n    height: 30px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703a71e {\n        height: 30px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703a71e {\n        height: 30px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703a71e {\n        height: 30px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<p>A <a class=\"\" href=\"https:\/\/arxiv.org\/pdf\/2405.05904v2\">new paper<\/a> reveals that fine-tuning is not a wonder drug for prevailing LLM hallucinations. Rather the reverse: fine-tuning can actually worsen performance when aiming to develop factual correctness in specialized application domains.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-style-plain has-small-font-size is-layout-flow wp-block-quote-is-layout-flow\" style=\"border-style:none;border-width:0px;border-radius:0px;font-style:normal;font-weight:100\">\n<p class=\"has-large-font-size\" style=\"font-style:normal;font-weight:300\"><em>Using supervised learning on new knowledge fine-tunes hallucinations, instead of enhancing accuracy<\/em><\/p>\n<\/blockquote>\n\n\n\n<p>These findings could have profound implications. 
What if precisely these specialized areas provide most of the value of LLM use cases?<\/p>\n\n\n<div class=\"at-spacer at-spacer--69d4f0703b6a4\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703b6a4 {\n    height: 40px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703b6a4 {\n        height: 40px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703b6a4 {\n        height: 40px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703b6a4 {\n        height: 40px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">The hard problem of hallucinations<\/h3>\n\n\n\n<p><br>The beauty of LLMs is that they are very generic and general-purpose: they contain \u201cknowledge\u201d on a very wide range of subjects covered in the training data. This forms the basis for the claim that the current path will get us (close) to AGI. I don\u2019t think that is true, but that\u2019s for another day.<\/p>\n\n\n\n<p>Clearly, generative AI currently works only up to a point. Measuring hallucination rates is notoriously difficult, but roughly speaking the tech works well in 80% of cases. And yes, performance depends on many factors, including the prompting abilities of the user. That being said, getting rid of the remaining 20% is arguably the biggest headache of the AI industry.<\/p>\n\n\n\n<p>A long-standing question in neuroscience and philosophy is how consciousness arises in the brain. How does a bunch of molecules and electromagnetic waves produce the miracle of our conscious experience? This is referred to as the hard problem of consciousness. But what if science has the premise all wrong? 
What if consciousness does not arise from matter, but matter is (an illusion) formed in consciousness?<\/p>\n\n\n\n<blockquote class=\"wp-block-quote has-large-font-size is-layout-flow wp-block-quote-is-layout-flow\" style=\"border-style:none;border-width:0px;border-radius:0px\">\n<p class=\"has-large-font-size\" style=\"font-style:normal;font-weight:300\"><em>Hallucinations are the current hard problem for AI to crack<\/em><\/p>\n<\/blockquote>\n\n\n\n<p>Similarly, hallucinations are inherent to the design of generative AI technology, not an accidental by-product. The technology is designed to dream up content, based on probabilistic relationships captured in the model parameters.<\/p>\n\n\n\n<p>Big tech proclaims that the issue can be solved through further scaling, but experts in the field increasingly recognize that we have to view hallucination as a feature, not a bug. After all, who would not be hallucinating after reading the entire internet \ud83d\ude09<\/p>\n\n\n\n<p>For the short term, the applicability of LLMs \u2013 despite their amazing feats \u2013 remains more limited than we might hope, especially in high-stakes situations and\/or very specialized areas. And these might just be the areas that herald the most value (e.g. in healthcare, providing accurate diagnostic\/treatment solutions).<\/p>\n\n\n\n<p>Unless fundamental algorithmic breakthroughs come along, or scaling proves to work after all, we have to learn how to make the best of what we&#8217;ve got. 
Work with the strengths, while minimizing downside impact.<\/p>\n\n\n<div class=\"at-spacer at-spacer--69d4f0703b979\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703b979 {\n    height: 40px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703b979 {\n        height: 40px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703b979 {\n        height: 40px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703b979 {\n        height: 40px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Using fine-tuning to develop domain-specific applications<\/h3>\n\n\n\n<p>Since the beginning of the Gen AI hype, fine-tuning has been touted as one of the ways to improve performance in specific application areas. The approach is to use supervised learning on domain-specific data (e.g. proprietary company data) to fine-tune a foundational (open source) model, specializing it for a certain use case and increasing factuality.<\/p>\n\n\n\n<p>Intuitively this makes sense. The foundation model is pre-trained on generic text prediction with a very broad base of foundational knowledge. 
Further fine-tuning would then provide the required specialization, based on proprietary and company-specific facts.<\/p>\n\n\n<div class=\"at-spacer at-spacer--69d4f0703bad3\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703bad3 {\n    height: 40px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703bad3 {\n        height: 40px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703bad3 {\n        height: 40px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703bad3 {\n        height: 40px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Fine-tuning does not work well for new information<\/h3>\n\n\n\n<p><br>A <a class=\"\" href=\"https:\/\/arxiv.org\/pdf\/2405.05904v2\">recent paper<\/a> investigates the impact of fine-tuning on new information. The authors set out to validate the hypothesis that new knowledge can have an unexpected negative impact on model performance, rather than improving it in a specific area. The outcomes are surprising, counter-intuitive at first glance, and impactful.<\/p>\n\n\n\n<p>Fine-tuning on new knowledge proceeds much more slowly than on existing knowledge (i.e. knowledge that was included in the pre-training data set). But most importantly, beyond a certain point of training, new knowledge deteriorates model performance on existing knowledge. In other words, incorporating specific new information in fine-tuning increases hallucinations. Worse yet, the hallucination rate grows linearly with more training on unknown content.<\/p>\n\n\n\n<p>In intuitive terms, it seems as if the model gets confused by new information and \u201cunlearns\u201d existing knowledge.<\/p>\n\n\n\n<p><strong>Exhibit 1<\/strong>. Train and development accuracies as a function of the fine-tuning duration, when fine-tuning on 50% Known and 50% Unknown examples. 
<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"940\" height=\"889\" data-src=\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs.png\" alt=\"\" class=\"wp-image-6779 lazyload\" style=\"--smush-placeholder-width: 940px; --smush-placeholder-aspect-ratio: 940\/889;width:597px;height:auto\" data-srcset=\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs.png 940w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-300x284.png 300w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-768x726.png 768w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-125x118.png 125w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-63x60.png 63w\" data-sizes=\"(max-width: 940px) 100vw, 940px\" src=\"data:image\/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==\" \/><noscript><img loading=\"lazy\" decoding=\"async\" width=\"940\" height=\"889\" src=\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs.png\" alt=\"\" class=\"wp-image-6779\" style=\"width:597px;height:auto\" srcset=\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs.png 940w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-300x284.png 300w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-768x726.png 768w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-125x118.png 125w, https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/Train-and-dev-accuracy-of-LLMs-63x60.png 63w\" sizes=\"(max-width: 940px) 100vw, 940px\" \/><\/noscript><\/figure>\n\n\n\n<p>Source: <a class=\"\" href=\"https:\/\/arxiv.org\/pdf\/2405.05904v2\">paper<\/a> from Zorik Gekhman et al.<\/p>\n\n\n\n<p><br>These conclusions have serious 
implications for anyone aiming to develop specialized LLM use cases. Fine-tuning remains useful for strengthening model performance in known areas, as well as for improving the form and structure of the desired output. But using fine-tuning to increase factuality on new information does not work well and has undesirable, opposite effects.<\/p>\n\n\n<div class=\"at-spacer at-spacer--69d4f0703c2f3\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703c2f3 {\n    height: 40px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703c2f3 {\n        height: 40px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703c2f3 {\n        height: 40px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703c2f3 {\n        height: 40px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">The unfortunate correlation between accuracy and value<\/h3>\n\n\n\n<p><br>Using LLMs to build <em>knowledge assistants<\/em> is a promising use case across many fields. These use cases thrive in highly knowledge-intensive industries, allowing users to query situation-specific information on demand. This includes healthcare workers, pharmaceutical advisors, customer service, professional services, etc. Not only do these assistants increase the effectiveness and efficiency of their users, they also make it possible to accumulate enterprise knowledge and IP in a much more sustainable and scalable manner. They become like digital co-workers that never resign, unless you fire them.<\/p>\n\n\n\n<p>As long as humans can be in the loop, verifying output, or when the impact of inaccurate information is low, current LLM technology is already good enough. 
But in many situations, most of the value would actually come from reliability and factual correctness, rather than an 80% answer that can be manually adjusted (like drafting an email).<\/p>\n\n\n<div class=\"at-spacer at-spacer--69d4f0703c44e\" style=\"background-color: transparent\">\n<style>\n\/* Ultrawide *\/\n.at-spacer--69d4f0703c44e {\n    height: 40px;\n}\n\n\/* Desktop *\/\n@media (max-width: 1920px) {\n    .at-spacer--69d4f0703c44e {\n        height: 40px;\n    }\n}\n\n\/* Laptop *\/\n@media (max-width: 1024px) {\n    .at-spacer--69d4f0703c44e {\n        height: 40px;\n    }\n}\n\n\/* Mobile *\/\n@media (max-width: 767px) {\n    .at-spacer--69d4f0703c44e {\n        height: 40px;\n    }\n}\n<\/style>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">What to do instead?<\/h3>\n\n\n\n<p><br>To enhance performance in specific application areas amidst existing technological constraints, companies and developers must adopt a pragmatic, empirical engineering approach, combining techniques to forge the best possible solutions. Innovations like Retrieval-Augmented Generation (RAG), fine-tuning processes that account for new versus existing knowledge, advanced context embedding, and post-processing output verification are reshaping our methodologies daily.<\/p>\n\n\n\n<p>The new insights discussed here demonstrate the importance of staying abreast of this fast-developing field in order to keep pushing the performance boundaries of Gen AI applications. 
Until new breakthroughs happen in foundation models, we have to keep learning new tricks of the trade to get the most out of today&#8217;s state of the art.<\/p>\n\n\r\n    <\/div>\r\n<\/section>\n\n<section class=\"container-block   \"  style=\"\" >\r\n        <div class=\"block  container  container--1024  width-under--laptop  wysiwyg\">\r\n        \n\n<section class=\"cta-box-block  js-cta-box  block\" >\n    <div class=\"container\">\n        <div class=\"cta-box\" style=\"background: linear-gradient(0deg, rgba(37,206,206, 0.13), rgba(106,61,255, 0.13));\">\n                            <figure class=\"cta-box__image\">\n                        <span class=\"cta-box__img\" data-lazy style=\"background-color: #f1f1f1;\">\r\n        \r\n                    <img data-lazy-full=\"https:\/\/rewirenow.com\/app\/uploads\/2024\/02\/gen-ai-bg-980x600.jpg\" alt=\"Decorative background iconography - Generative AI consulting services for enterprises\" data-is-lazy>\r\n            <\/span>\r\n\r\n                <\/figure>\n                        <div class=\"cta-box__content\">\n                                    <h4 class=\"cta-box__title\">Turning Generative AI potential into bottom line impact<\/h4>\n                                                    <p class=\"cta-box__description\">Our strategies enable you to harness generative AI, moving beyond marginal or tactical gains to achieve transformational success.<\/p>\n                                                    <a class=\"cta-box__link btn btn--next\" href=\"https:\/\/rewirenow.com\/en\/data-ai-transformation\/generative-ai\/\" target=\"_blank\" aria-label=\"Explore our Generative AI services\">Explore our Generative AI services<\/a>\n                            <\/div>\n        <\/div>\n    <\/div>\n<\/section>\n\r\n    <\/div>\r\n<\/section>","protected":false},"excerpt":{"rendered":"<p>Fine-tuning can worsen factual correctness in specialized application domains. 
We discuss the implications.<\/p>\n","protected":false},"author":4,"featured_media":6716,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[141],"tags":[146,147,143,142,145,144],"class_list":["post-6712","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-generative-ai","tag-ai","tag-artificial-intelligence","tag-gen-ai","tag-generative-ai","tag-large-language-model","tag-llm"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Fine-tuning hallucinations in LLMs | Rewire | Data &amp; AI Consultancy<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Fine-tuning hallucinations in LLMs | Rewire | Data &amp; AI Consultancy\" \/>\n<meta property=\"og:description\" content=\"Fine-tuning can worsen factual correctness in specialized application domains. 
We discuss the implications.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\" \/>\n<meta property=\"og:site_name\" content=\"Rewire | Data &amp; AI Consultancy\" \/>\n<meta property=\"article:published_time\" content=\"2024-07-02T08:58:51+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-08-20T13:32:47+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1280\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"alexgevers\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"alexgevers\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\"},\"author\":{\"name\":\"alexgevers\",\"@id\":\"https:\/\/rewirenow.com\/en\/#\/schema\/person\/0d620038fdcaeafad7e2d6dbba026cf9\"},\"headline\":\"Fine-tuning hallucinations in LLMs\",\"datePublished\":\"2024-07-02T08:58:51+00:00\",\"dateModified\":\"2025-08-20T13:32:47+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\"},\"wordCount\":1017,\"publisher\":{\"@id\":\"https:\/\/rewirenow.com\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg\",\"keywords\":[\"AI\",\"Artificial Intelligence\",\"Gen AI\",\"Generative AI\",\"Large Language Model\",\"LLM\"],\"articleSection\":[\"Generative AI\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\",\"url\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\",\"name\":\"Fine-tuning hallucinations in LLMs | Rewire | Data &amp; AI 
Consultancy\",\"isPartOf\":{\"@id\":\"https:\/\/rewirenow.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg\",\"datePublished\":\"2024-07-02T08:58:51+00:00\",\"dateModified\":\"2025-08-20T13:32:47+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage\",\"url\":\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg\",\"contentUrl\":\"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg\",\"width\":1920,\"height\":1280,\"caption\":\"Surreal image of woman in white gown on desert\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/rewirenow.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Fine-tuning hallucinations in LLMs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/rewirenow.com\/en\/#website\",\"url\":\"https:\/\/rewirenow.com\/en\/\",\"name\":\"Rewire | Data & AI Consultancy\",\"description\":\"Impossible No 
More\",\"publisher\":{\"@id\":\"https:\/\/rewirenow.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/rewirenow.com\/en\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/rewirenow.com\/en\/#organization\",\"name\":\"Rewire | Data & AI Consultancy\",\"url\":\"https:\/\/rewirenow.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/rewirenow.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/rewirenow.com\/app\/uploads\/2024\/03\/Rewire-Logo.png\",\"contentUrl\":\"https:\/\/rewirenow.com\/app\/uploads\/2024\/03\/Rewire-Logo.png\",\"width\":1080,\"height\":1080,\"caption\":\"Rewire | Data & AI Consultancy\"},\"image\":{\"@id\":\"https:\/\/rewirenow.com\/en\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/rewirenow.com\/en\/#\/schema\/person\/0d620038fdcaeafad7e2d6dbba026cf9\",\"name\":\"alexgevers\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/rewirenow.com\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/975c6046cc08cb2dd08b47d3dcde1977?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/975c6046cc08cb2dd08b47d3dcde1977?s=96&d=mm&r=g\",\"caption\":\"alexgevers\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Fine-tuning hallucinations in LLMs | Rewire | Data &amp; AI Consultancy","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/","og_locale":"en_US","og_type":"article","og_title":"Fine-tuning hallucinations in LLMs | Rewire | Data &amp; AI Consultancy","og_description":"Fine-tuning can worsen factual correctness in specialized application domains. We discuss the implications.","og_url":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/","og_site_name":"Rewire | Data &amp; AI Consultancy","article_published_time":"2024-07-02T08:58:51+00:00","article_modified_time":"2025-08-20T13:32:47+00:00","og_image":[{"width":1920,"height":1280,"url":"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg","type":"image\/jpeg"}],"author":"alexgevers","twitter_card":"summary_large_image","twitter_misc":{"Written by":"alexgevers","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#article","isPartOf":{"@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/"},"author":{"name":"alexgevers","@id":"https:\/\/rewirenow.com\/en\/#\/schema\/person\/0d620038fdcaeafad7e2d6dbba026cf9"},"headline":"Fine-tuning hallucinations in LLMs","datePublished":"2024-07-02T08:58:51+00:00","dateModified":"2025-08-20T13:32:47+00:00","mainEntityOfPage":{"@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/"},"wordCount":1017,"publisher":{"@id":"https:\/\/rewirenow.com\/en\/#organization"},"image":{"@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg","keywords":["AI","Artificial Intelligence","Gen AI","Generative AI","Large Language Model","LLM"],"articleSection":["Generative AI"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/","url":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/","name":"Fine-tuning hallucinations in LLMs | Rewire | Data &amp; AI 
Consultancy","isPartOf":{"@id":"https:\/\/rewirenow.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage"},"image":{"@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg","datePublished":"2024-07-02T08:58:51+00:00","dateModified":"2025-08-20T13:32:47+00:00","breadcrumb":{"@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#primaryimage","url":"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg","contentUrl":"https:\/\/rewirenow.com\/app\/uploads\/2024\/07\/hofmann-natalia-lxrkrBx-c_o-unsplash.jpg","width":1920,"height":1280,"caption":"Surreal image of woman in white gown on desert"},{"@type":"BreadcrumbList","@id":"https:\/\/rewirenow.com\/en\/resources\/blog\/fine-tuning-hallucinations-in-llms\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/rewirenow.com\/en\/"},{"@type":"ListItem","position":2,"name":"Fine-tuning hallucinations in LLMs"}]},{"@type":"WebSite","@id":"https:\/\/rewirenow.com\/en\/#website","url":"https:\/\/rewirenow.com\/en\/","name":"Rewire | Data & AI Consultancy","description":"Impossible No More","publisher":{"@id":"https:\/\/rewirenow.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/rewirenow.com\/en\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/rewirenow.com\/en\/#organization","name":"Rewire | Data & AI Consultancy","url":"https:\/\/rewirenow.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/rewirenow.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/rewirenow.com\/app\/uploads\/2024\/03\/Rewire-Logo.png","contentUrl":"https:\/\/rewirenow.com\/app\/uploads\/2024\/03\/Rewire-Logo.png","width":1080,"height":1080,"caption":"Rewire | Data & AI Consultancy"},"image":{"@id":"https:\/\/rewirenow.com\/en\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/rewirenow.com\/en\/#\/schema\/person\/0d620038fdcaeafad7e2d6dbba026cf9","name":"alexgevers","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/rewirenow.com\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/975c6046cc08cb2dd08b47d3dcde1977?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/975c6046cc08cb2dd08b47d3dcde1977?s=96&d=mm&r=g","caption":"alexgevers"}}]}},"_links":{"self":[{"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/posts\/6712"}],"collection":[{"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/comments?post=6712"}],"version-history":[{"count":10,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/posts\/6712\/revisions"}],"predecessor-version":[{"id":10459,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/posts\/6712\/revisions\/10459"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/media\/6716"}],"wp:attachment":[{"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/media?parent=6712"}],"wp:term":[{"taxonomy":"category","embedda
ble":true,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/categories?post=6712"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rewirenow.com\/en\/wp-json\/wp\/v2\/tags?post=6712"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}