<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Finetuning on Kaushik's Portfolio</title>
    <link>https://kauuu.github.io/topics/finetuning/</link>
    <description>Recent content in Finetuning on Kaushik's Portfolio</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 25 Dec 2025 00:00:00 +0100</lastBuildDate>
    <atom:link href="https://kauuu.github.io/topics/finetuning/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Evaluating and Finetuning LLMs for Multilingual Legal Summarisation</title>
      <link>https://kauuu.github.io/projects/llm-legal-summarisation/</link>
      <pubDate>Thu, 25 Dec 2025 00:00:00 +0100</pubDate>
      <guid>https://kauuu.github.io/projects/llm-legal-summarisation/</guid>
      <description>Adapted Apertus 8B/70B for Swiss legal summarisation, showing that full fine-tuning outperforms GPT-4o and Claude 3.5 Sonnet on BERTScore and ROUGE, while highlighting trade-offs between LoRA and full fine-tuning. Click to read more!</description>
    </item>
  </channel>
</rss>