<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>AI on Kaushik's Portfolio</title><link>https://kauuu.github.io/topics/ai/</link><description>Recent content in AI on Kaushik's Portfolio</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 05 Mar 2026 00:00:00 +0100</lastBuildDate><atom:link href="https://kauuu.github.io/topics/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Characterisation of Goalkeeper Actions from Skeletal Data</title><link>https://kauuu.github.io/projects/action-recognition/</link><pubDate>Thu, 05 Mar 2026 00:00:00 +0100</pubDate><guid>https://kauuu.github.io/projects/action-recognition/</guid><description>Characterised goalkeeper actions from skeletal data by learning robust representations via contrastive learning and analysing their structure using clustering techniques. Click to read more!</description></item><item><title>Mechanistic Interpretability of VLM on Spatial Relational Reasoning</title><link>https://kauuu.github.io/projects/vlm-interpretability/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0100</pubDate><guid>https://kauuu.github.io/projects/vlm-interpretability/</guid><description>Studied LLaVA-1.5-7B using a controlled synthetic benchmark; revealed an existence bias causing yes-bias in binary relational tasks through logit lens, linear probing, and attention analyses, showing mid-layer representations are linearly decodable but misaligned with final decisions. 
Click to read more!</description></item><item><title>Evaluating and Finetuning LLMs for Multilingual Legal Summarisation</title><link>https://kauuu.github.io/projects/llm-legal-summarisation/</link><pubDate>Thu, 25 Dec 2025 00:00:00 +0100</pubDate><guid>https://kauuu.github.io/projects/llm-legal-summarisation/</guid><description>Adapted Apertus 8B/70B for Swiss legal summarisation, showing full fine-tuning outperforms GPT-4o and Claude 3.5 Sonnet on BERTScore and ROUGE, while highlighting trade-offs between LoRA and full fine-tuning. Click to read more!</description></item><item><title>Diffusion Models: An Intuitive Understanding</title><link>https://kauuu.github.io/posts/diffusion-model/</link><pubDate>Thu, 06 Nov 2025 22:43:32 +0100</pubDate><guid>https://kauuu.github.io/posts/diffusion-model/</guid><description>&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: This post is my notes on understanding diffusion models from an intuitive perspective. It is not a formal explanation, and I might have made mistakes. Please reach out to me if you find any errors!&lt;/p&gt;
&lt;h1 id="introduction"&gt;Introduction&lt;/h1&gt;
&lt;p&gt;The original paper came from UC Berkeley and was titled &lt;em&gt;Denoising Diffusion Probabilistic Models&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The idea might sound counter-intuitive: the model takes random noise and transforms it into a realistic image &lt;em&gt;step-by-step&lt;/em&gt;.&lt;/p&gt;</description></item></channel></rss>