diff --git a/CHARACTER_REPLACEMENT_GUIDE.md b/CHARACTER_REPLACEMENT_GUIDE.md
new file mode 100644
index 0000000..3435fde
--- /dev/null
+++ b/CHARACTER_REPLACEMENT_GUIDE.md
@@ -0,0 +1,149 @@
+# 3D Character Replacement Guide
+
+This guide explains how to replace the default 3D character in the Work Experience section with your own avatar.
+
+## Current Character Setup
+
+- **Location**: Work Experience section displays a 3D human character
+- **Model File**: `/public/models/animations/developer.glb`
+- **Component**: `src/components/Developer.jsx`
+- **Animations**: 4 animations (idle, salute, clapping, victory)
+- **Animation Files**: Located in `/public/models/animations/` folder
+
+## Option 1: ReadyPlayerMe (Recommended)
+
+### Step 1: Create Your Avatar
+1. **Direct URL**: https://readyplayer.me/avatar
+2. **Alternative**: Go to https://readyplayer.me/ → Click "Try it now" → Create account
+3. **Choose Creation Method**:
+ - **From Photo**: Upload a clear photo of yourself
+ - **From Scratch**: Build manually using their editor
+4. **Customize**: Adjust hair, clothing, accessories, facial features
+5. **Download**: Export as GLB format
+
+### Step 2: Replace the Model
+1. **Rename** your downloaded file to `developer.glb`
+2. **Replace** the existing file at `/public/models/animations/developer.glb`
+3. **Keep** all animation files (idle.fbx, salute.fbx, clapping.fbx, victory.fbx)
+
+### Step 3: Test
+1. Run `npm run dev`
+2. Navigate to Work Experience section
+3. Hover over different work experiences to test animations
+4. Check that the model loads without errors
+
+## Option 2: VRoid Studio (Most Customizable)
+
+### Step 1: Create Avatar
+1. **Download**: https://vroid.com/en/studio
+2. **Create**: Anime-style avatar from scratch
+3. **Export**: As VRM format
+4. **Convert**: Use an online VRM-to-GLB converter (or Blender) to produce a GLB file
+
+### Step 2: Replace Model
+- Follow the same steps as in Option 1, Step 2
+
+## Option 3: Mixamo + Adobe
+
+### Step 1: Create Character
+1. **Visit**: https://www.mixamo.com/
+2. **Create Account**: Adobe account required
+3. **Choose Character**: Select base character or upload custom
+4. **Customize**: Appearance and clothing
+5. **Download**: Export as FBX, then convert it to GLB (Mixamo does not export GLB directly)
+
+### Step 2: Replace Model
+- Follow the same steps as in Option 1, Step 2
+
+## Alternative Options
+
+### VRChat Integration
+- **URL**: https://hub.vrcav.com/
+- **Process**: Create Avatar → Select Ready Player Me
+- Often offers a photo-upload option even when the main ReadyPlayerMe site doesn't
+
+### Free Model Sources
+- **Sketchfab**: Search for "human character GLB"
+- **Mixamo**: Free characters with animations
+- **OpenGameArt**: Free 3D models
+
+## Troubleshooting
+
+### If Animations Don't Work
+1. **Check Console**: Look for errors in browser dev tools
+2. **Bone Structure**: Ensure your model has similar bone names
+3. **Retarget Animations**: Use Blender to retarget animations to your model
+
+### If Model is Wrong Size
+1. **Scale**: Adjust `scale={3}` in `src/sections/Experience.jsx:26`
+2. **Position**: Modify `position-y={-3}` in `src/sections/Experience.jsx:26`
+
+### If Materials Look Wrong
+Update material references in `src/components/Developer.jsx` (see the sketch after this list):
+- `Wolf3D_Hair` - Hair material
+- `Wolf3D_Skin` - Skin material
+- `Wolf3D_Body` - Body material
+- `Wolf3D_Outfit_Top/Bottom` - Clothing materials
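+
+For orientation, the mesh/material wiring in `Developer.jsx` typically looks like the sketch below. The node and material names come from your exported GLB, so treat `Wolf3D_Hair` here as a placeholder and substitute whatever names your model actually uses:
+
+```jsx
+// Minimal sketch — node/material names depend on your export
+<skinnedMesh
+  geometry={nodes.Wolf3D_Hair.geometry}
+  material={materials.Wolf3D_Hair}
+  skeleton={nodes.Wolf3D_Hair.skeleton}
+/>
+```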
+
+### Animation Issues
+The animations expect a standard humanoid rig:
+- A `Hips` root bone
+- Standard humanoid bone naming throughout the skeleton
+
+If the bone names don't match, the animations won't retarget properly.
+
+## Technical Details
+
+### Model Requirements
+- **Format**: GLB/GLTF
+- **Rigged**: Must have skeleton for animations
+- **Bone Structure**: Humanoid bone names compatible with existing animations
+- **Size**: Keep under 10MB for good performance
+
+### Animation Files
+- **idle.fbx**: Default standing pose
+- **salute.fbx**: Saluting gesture
+- **clapping.fbx**: Clapping hands
+- **victory.fbx**: Victory pose
+
+### Component Structure
+```jsx
+// src/components/Developer.jsx
+const Developer = ({ animationName = 'idle', ...props }) => {
+ // Loads model and animations
+ // Switches between animations based on work experience hover
+}
+```
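+
+A slightly fuller sketch of how the GLB model and the FBX clips are usually wired together with drei hooks. This assumes the standard ReadyPlayerMe + Mixamo pattern; your actual `Developer.jsx` may differ in the details:
+
+```jsx
+import { useEffect, useRef } from 'react';
+import { useGLTF, useFBX, useAnimations } from '@react-three/drei';
+
+const Developer = ({ animationName = 'idle', ...props }) => {
+  const group = useRef();
+  const { scene } = useGLTF('/models/animations/developer.glb');
+
+  // Load a Mixamo clip and give it a stable name so it can be selected by prop
+  // (repeat for salute.fbx, clapping.fbx, victory.fbx)
+  const { animations: idleClips } = useFBX('/models/animations/idle.fbx');
+  idleClips[0].name = 'idle';
+
+  const { actions } = useAnimations(idleClips, group);
+
+  useEffect(() => {
+    // Cross-fade into the requested animation whenever the prop changes
+    actions[animationName]?.reset().fadeIn(0.5).play();
+    return () => actions[animationName]?.fadeOut(0.5);
+  }, [animationName, actions]);
+
+  return (
+    <group ref={group} {...props} dispose={null}>
+      <primitive object={scene} />
+    </group>
+  );
+};
+
+export default Developer;
+```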
+
+### Usage in Experience Section
+```jsx
+// src/sections/Experience.jsx
+// Key usage (canvas wrapper, lighting and controls omitted):
+<Developer position-y={-3} scale={3} animationName={animationName} />
+```
+
+## File Structure
+```
+public/
+└── models/
+    └── animations/
+        ├── developer.glb   <- Replace this file
+        ├── idle.fbx        <- Keep these
+        ├── salute.fbx      <- Keep these
+        ├── clapping.fbx    <- Keep these
+        └── victory.fbx     <- Keep these
+```
+
+## Testing Checklist
+- [ ] Model loads without errors
+- [ ] All 4 animations work (idle, salute, clapping, victory)
+- [ ] Hover interactions trigger animations
+- [ ] Model is properly scaled and positioned
+- [ ] No console errors
+- [ ] Performance is acceptable
+
+## Need Help?
+If you encounter issues:
+1. Check browser console for errors
+2. Ensure model format is GLB
+3. Verify bone structure matches expected format
+4. Test with a simple model first
+5. Consider using Blender for model adjustments
\ No newline at end of file
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 100644
index 0000000..d9e0a45
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1,88 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+## Development Commands
+
+- `npm run dev` - Start development server (runs on http://localhost:5173)
+- `npm run build` - Build production version
+- `npm run preview` - Preview production build locally
+- `npm run lint` - Run ESLint to check code quality
+
+## Project Architecture
+
+This is a React + Three.js portfolio website built with Vite. The site showcases an interactive 3D experience with sections for hero, about, projects, publications, work experience, and contact.
+
+### Key Technologies
+- **React 18** - UI framework
+- **Three.js + React Three Fiber** - 3D graphics and animations
+- **React Three Drei** - Helper utilities for Three.js
+- **GSAP** - Animations and transitions
+- **Tailwind CSS** - Styling framework
+- **EmailJS** - Contact form functionality
+- **Leva** - 3D development controls (hidden in production)
+
+### Core Structure
+
+**Main App Flow**: `App.jsx` renders sections in order: Navbar → Hero → About → Projects → Publications → WorkExperience → Contact → Footer
+
+**3D Scene Architecture** (a composition sketch follows this list):
+- `Hero.jsx` contains the main 3D canvas with interactive elements
+- `HeroCamera.jsx` handles camera controls and mouse interactions
+- `HackerRoom.jsx` is the main 3D room model
+- Floating 3D elements: `Cube.jsx`, `Rings.jsx`, tech logos (`PythonLogo.jsx`, `PyTorchLogo.jsx`, etc.)
+- `calculateSizes()` in `constants/index.js` handles responsive positioning
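+
+A rough sketch of how these pieces compose inside the Hero canvas; the fallback loader name, light values, and the `sizes` fields are illustrative rather than copied from the source:
+
+```jsx
+<Canvas className="w-full h-full">
+  <Suspense fallback={<CanvasLoader />}>
+    <PerspectiveCamera makeDefault position={[0, 0, 30]} />
+    <HeroCamera isMobile={isMobile}>
+      <HackerRoom scale={sizes.deskScale} position={sizes.deskPosition} />
+    </HeroCamera>
+    <ambientLight intensity={1} />
+    <directionalLight position={[10, 10, 10]} intensity={0.5} />
+  </Suspense>
+</Canvas>
+```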
+
+**Data Management**:
+- `src/constants/index.js` contains all static data (projects, publications, work experience, navigation)
+- Responsive breakpoints handled via `react-responsive` hooks
+
+**Component Pattern**:
+- 3D components in `/components` directory
+- Page sections in `/sections` directory
+- Shared utilities in `/hooks` directory
+- All 3D models stored in `/public/models`
+- Textures and assets in `/public/textures` and `/public/assets`
+
+### Email Configuration
+
+The contact form and newsletter subscription use EmailJS. Environment variables needed:
+- `VITE_EMAILJS_SERVICE_ID` - EmailJS service ID
+- `VITE_EMAILJS_TEMPLATE_ID` - Template ID for contact form
+- `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID` - Template ID for newsletter subscriptions
+- `VITE_EMAILJS_PUBLIC_KEY` - EmailJS public key
+
+**Important**: Use `VITE_` prefix for environment variables in Vite (not `REACT_APP_`)
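+
+In the code these are read through Vite's `import.meta.env`, roughly like this:
+
+```jsx
+// Sketch: reading the EmailJS configuration in a Vite project
+const serviceId = import.meta.env.VITE_EMAILJS_SERVICE_ID;
+const templateId = import.meta.env.VITE_EMAILJS_TEMPLATE_ID;
+const publicKey = import.meta.env.VITE_EMAILJS_PUBLIC_KEY;
+```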
+
+For detailed setup instructions, see `EMAILJS_SETUP.md`
+
+### 3D Model Loading
+
+Models are loaded from `/public/models` using the `useGLTF` hook (see the sketch after this list). Key models:
+- `hacker-room.glb` - Main desk/room scene
+- `computer.glb` - Interactive computer for projects
+- `cube.glb`, `react.glb` - Floating elements
+- Animation files in `/models/animations/`
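+
+A minimal sketch of the loading pattern (the real components typically map individual nodes and materials rather than rendering a single primitive):
+
+```jsx
+import { useGLTF } from '@react-three/drei';
+
+const HackerRoom = (props) => {
+  const { scene } = useGLTF('/models/hacker-room.glb');
+  return <primitive object={scene} {...props} />;
+};
+
+// Preloading avoids a visible hitch when the component first mounts
+useGLTF.preload('/models/hacker-room.glb');
+```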
+
+### Performance Considerations
+
+- All 3D models are preloaded using `useGLTF.preload()`
+- Responsive sizing calculated once and passed to components
+- Suspense boundaries with custom loading components
+- Media queries determine render complexity based on device
+
+### Deployment
+
+#### Vercel Deployment
+1. Connect your GitHub repository to Vercel
+2. Configure environment variables in Vercel project settings:
+ - `VITE_EMAILJS_SERVICE_ID`
+ - `VITE_EMAILJS_TEMPLATE_ID`
+ - `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID`
+ - `VITE_EMAILJS_PUBLIC_KEY`
+3. Deploy (Vercel runs `npm run build` automatically)
+4. Test contact form and newsletter functionality after deployment
+
+#### Local Development with EmailJS
+Create `.env.local` file with EmailJS environment variables for local testing.
+Never commit this file to version control.
\ No newline at end of file
diff --git a/EMAILJS_SETUP.md b/EMAILJS_SETUP.md
new file mode 100644
index 0000000..f85af63
--- /dev/null
+++ b/EMAILJS_SETUP.md
@@ -0,0 +1,151 @@
+# EmailJS Setup Guide for Contact Form & Newsletter
+
+This guide explains how to configure EmailJS for both the contact form and newsletter subscription functionality on your Vercel-deployed website.
+
+## Required Environment Variables
+
+Add these environment variables to your Vercel project settings:
+
+```
+VITE_EMAILJS_SERVICE_ID=your_service_id_here
+VITE_EMAILJS_TEMPLATE_ID=your_contact_template_id_here
+VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID=your_newsletter_template_id_here
+VITE_EMAILJS_PUBLIC_KEY=your_public_key_here
+```
+
+## EmailJS Account Setup
+
+### 1. Create EmailJS Account
+1. Go to [EmailJS.com](https://www.emailjs.com/)
+2. Sign up for a free account
+3. Verify your email address
+
+### 2. Add Email Service
+1. Go to the "Email Services" section
+2. Click "Add New Service"
+3. Choose your email provider (Gmail, Outlook, etc.)
+4. Follow the setup instructions for your provider
+5. Note the **Service ID** for your environment variables
+
+### 3. Create Email Templates
+
+#### Contact Form Template
+1. Go to "Email Templates" section
+2. Click "Create New Template"
+3. Set up the template with these variables:
+ - `{{from_name}}` - Sender's name
+ - `{{from_email}}` - Sender's email
+ - `{{to_name}}` - Your name (Jan Magnus Heimann)
+ - `{{to_email}}` - Your email (jan@heimann.ai)
+ - `{{message}}` - Contact message content
+4. Example template:
+
+```
+Subject: New Contact Form Message from {{from_name}}
+
+From: {{from_name}} ({{from_email}})
+To: {{to_name}}
+
+Message:
+{{message}}
+
+---
+This message was sent from your portfolio contact form.
+```
+
+5. Save and note the **Template ID**
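+
+For reference, the contact form typically maps its fields onto these template variables with an `emailjs.send` call along these lines (the `form` state object is illustrative):
+
+```jsx
+import emailjs from '@emailjs/browser';
+
+// Sketch: contact form submit handler — field names mirror the template variables above
+const handleSubmit = async (e) => {
+  e.preventDefault();
+  await emailjs.send(
+    import.meta.env.VITE_EMAILJS_SERVICE_ID,
+    import.meta.env.VITE_EMAILJS_TEMPLATE_ID,
+    {
+      from_name: form.name,   // illustrative form state
+      from_email: form.email,
+      to_name: 'Jan Magnus Heimann',
+      to_email: 'jan@heimann.ai',
+      message: form.message,
+    },
+    import.meta.env.VITE_EMAILJS_PUBLIC_KEY,
+  );
+};
+```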
+
+#### Newsletter Subscription Template
+1. Create another new template for newsletter subscriptions
+2. Set up with these variables:
+ - `{{subscriber_email}}` - Newsletter subscriber's email
+ - `{{to_name}}` - Your name
+ - `{{to_email}}` - Your email
+ - `{{message}}` - Subscription notification message
+3. Example template:
+
+```
+Subject: New Newsletter Subscription
+
+Hello {{to_name}},
+
+You have a new newsletter subscription!
+
+Subscriber Email: {{subscriber_email}}
+
+{{message}}
+
+---
+This notification was sent from your portfolio newsletter signup.
+```
+
+4. Save and note the **Newsletter Template ID**
+
+### 4. Get Public Key
+1. Go to "Account" section
+2. Find your **Public Key**
+3. Note this for your environment variables
+
+## Vercel Deployment Setup
+
+### 1. Add Environment Variables to Vercel
+1. Go to your Vercel project dashboard
+2. Navigate to Settings → Environment Variables
+3. Add each of the four environment variables:
+ - `VITE_EMAILJS_SERVICE_ID`
+ - `VITE_EMAILJS_TEMPLATE_ID`
+ - `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID`
+ - `VITE_EMAILJS_PUBLIC_KEY`
+4. Set them for Production, Preview, and Development environments
+
+### 2. Redeploy Your Application
+After adding environment variables, trigger a new deployment so Vercel picks up the new configuration.
+
+## Testing
+
+### Local Testing
+1. Create a `.env.local` file in your project root
+2. Add your environment variables:
+```
+VITE_EMAILJS_SERVICE_ID=your_service_id_here
+VITE_EMAILJS_TEMPLATE_ID=your_contact_template_id_here
+VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID=your_newsletter_template_id_here
+VITE_EMAILJS_PUBLIC_KEY=your_public_key_here
+```
+3. Run `npm run dev` and test both forms
+
+### Production Testing
+1. Deploy to Vercel with environment variables configured
+2. Test the contact form at `yourdomain.com/#contact`
+3. Test the newsletter signup at `yourdomain.com/#blog`
+
+## Troubleshooting
+
+### Common Issues
+1. **Environment variables not found**: Make sure variables start with `VITE_` prefix for Vite
+2. **EmailJS service errors**: Verify your service ID and public key are correct
+3. **Template not found**: Double-check template IDs match exactly
+4. **CORS errors**: EmailJS should handle CORS automatically, but verify your domain is allowlisted in EmailJS settings
+
+### Email Delivery Issues
+1. Check your EmailJS dashboard for sent email logs
+2. Verify your email service connection is active
+3. Check spam folders for test emails
+4. Ensure your email service has proper authentication
+
+## Features
+
+### Contact Form (`/src/sections/Contact.jsx`)
+- Full name, email, and message fields
+- Form validation
+- Loading states during submission
+- Success/error notifications
+- Automatic form reset after successful submission
+
+### Newsletter Subscription (`/src/sections/Blog.jsx`)
+- Email address field with validation
+- Loading states during subscription
+- Success/error notifications
+- Automatic field reset after successful subscription
+
+Both forms use the shared Alert component for consistent user feedback.
\ No newline at end of file
diff --git a/README.md b/README.md
index cc5e3d7..49017e6 100644
--- a/README.md
+++ b/README.md
@@ -11,11 +11,7 @@
-
A 3D Dev Portfolio
-
-
- Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family!!
-
+
3D Developer Portfolio
## 📋 Table of Contents
@@ -27,7 +23,6 @@
5. 🕸️ [Snippets (Code to Copy)](#snippets)
6. 🔗 [Links](#links)
7. 📦 [Assets](#assets)
-8. 🚀 [More](#more)
## 🚨 Tutorial
@@ -872,11 +867,3 @@ Here is the list of all the resources used in the project video:
Models and Assets used in the project can be found [here](https://drive.google.com/file/d/1UiJyotDmF2_tBC-GeLpRZuFY_gx5e7iX/view?usp=sharing)
-## 🚀 More
-**Advance your skills with Next.js Pro Course**
-
-Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning experience. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go!
-
-
-
-
diff --git a/VIDEO_REPLACEMENT_GUIDE.md b/VIDEO_REPLACEMENT_GUIDE.md
new file mode 100644
index 0000000..5fe6ae6
--- /dev/null
+++ b/VIDEO_REPLACEMENT_GUIDE.md
@@ -0,0 +1,232 @@
+# Project Demo Video Replacement Guide
+
+This guide explains how to replace the default demo videos in the "My Selected Work" section with your actual project content.
+
+## Current Video Setup
+
+- **Location**: "My Selected Work" section displays videos on 3D computer screens
+- **Video Files**: `/public/textures/project/project1.mp4` through `project4.mp4`
+- **Component**: `src/components/DemoComputer.jsx`
+- **Display**: Videos loop automatically on interactive 3D computer models
+- **Integration**: Each project in `src/constants/index.js` has a `texture` property pointing to its video (see the sketch below)
+
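+For reference, the corresponding entry in `src/constants/index.js` looks roughly like this (only the `texture` path matters for video replacement; the other fields are illustrative):
+
+```jsx
+{
+  title: 'AutoApply - AI Job Application SaaS',
+  // ...other project fields...
+  texture: '/textures/project/project1.mp4',
+},
+```
+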
+## Video Files to Replace
+
+### Project 1: AutoApply - AI Job Application SaaS
+- **File**: `/public/textures/project/project1.mp4`
+- **Current**: Generic demo video
+- **Replace With**: AutoApply platform demo, dashboard metrics, job application process
+
+### Project 2: OpenRLHF Fork - Scalable RLHF Framework
+- **File**: `/public/textures/project/project2.mp4`
+- **Current**: Generic demo video
+- **Replace With**: Training dashboards, performance graphs, model comparisons
+
+### Project 3: ArchUnit TypeScript - Architecture Testing
+- **File**: `/public/textures/project/project3.mp4`
+- **Current**: Generic demo video
+- **Replace With**: Code analysis results, dependency graphs, test outputs
+
+### Project 4: Domain-Specific GPT-2 Fine-Tuning
+- **File**: `/public/textures/project/project4.mp4`
+- **Current**: Generic demo video
+- **Replace With**: Training progress, text generation examples, model comparisons
+
+## Video Requirements
+
+### Technical Specifications
+- **Format**: MP4 (H.264 codec recommended)
+- **Resolution**: 1920x1080 or 1280x720 (16:9 aspect ratio works best)
+- **Duration**: 10-30 seconds (loops automatically)
+- **File Size**: Keep under 10MB each for good web performance
+- **Frame Rate**: 30fps recommended
+- **Audio**: Not required (videos play without sound)
+
+### Content Guidelines
+- **Show Real Functionality**: Display actual project features, not mockups
+- **Clear Visuals**: High contrast, readable text, smooth animations
+- **Loop Seamlessly**: Ensure first and last frames connect smoothly
+- **Focus on Key Features**: Highlight main project capabilities
+- **Professional Quality**: Clean, polished screen recordings
+
+## Content Creation Ideas
+
+### AutoApply Video Content
+- **Dashboard Overview**: Show user analytics, success rates, application tracking
+- **Job Application Process**: Demonstrate automated form filling
+- **AI Detection**: Visualize YOLOv8 form detection in action
+- **Results Metrics**: Display $480K ARR, 10K+ users, 78K+ applications
+- **Multi-agent System**: Show GPT-4 and Claude-3 API integration
+
+### OpenRLHF Video Content
+- **Training Dashboard**: Real-time loss curves, convergence graphs
+- **Performance Metrics**: 15% memory reduction, 23% faster convergence
+- **Multi-GPU Setup**: Show distributed training across 8x A100 clusters
+- **DPO/PPO Comparison**: Before/after training pipeline results
+- **Code Examples**: Brief code snippets with syntax highlighting
+
+### ArchUnit Video Content
+- **Dependency Analysis**: Show circular dependency detection
+- **Architecture Validation**: Live testing of code structure rules
+- **Pattern Matching**: Demonstrate glob/regex pattern matching
+- **GitHub Integration**: Show the 400+ stars, community adoption
+- **Testing Framework**: Jest/Mocha integration examples
+
+### GPT-2 Video Content
+- **Training Progress**: Show loss curves, ROUGE score improvements
+- **Text Generation**: Live aerospace paper summarization demo
+- **Tokenization Process**: Visualize domain-specific vocabulary
+- **Model Comparison**: Before/after fine-tuning results
+- **Technical Metrics**: 12% ROUGE improvement, 4 GPU setup
+
+## Recording Tools
+
+### Screen Recording Software
+- **macOS**: QuickTime Player, Screenshot (Cmd+Shift+5)
+- **Windows**: OBS Studio, Bandicam, Camtasia
+- **Cross-Platform**: OBS Studio (free), Loom, ScreenFlow
+
+### Video Editing Tools
+- **Basic**: iMovie (macOS), Movie Maker (Windows)
+- **Advanced**: Adobe Premiere Pro, Final Cut Pro, DaVinci Resolve
+- **Online**: Canva, Kapwing, ClipChamp
+
+### Optimization Tools
+- **HandBrake**: Free video compression
+- **FFmpeg**: Command-line video processing
+- **Online**: CloudConvert, Zamzar
+
+## Step-by-Step Replacement Process
+
+### Step 1: Create Your Videos
+1. **Plan Content**: Decide what to show for each project
+2. **Set Up Recording**: Use screen recording software
+3. **Record in High Quality**: 1080p or 720p, 30fps
+4. **Keep It Short**: 10-30 seconds per video
+5. **Edit if Needed**: Trim, add transitions, optimize
+
+### Step 2: Optimize Videos
+1. **Compress**: Use HandBrake or similar tool
+2. **Check Size**: Ensure each video is under 10MB
+3. **Test Playback**: Verify videos play smoothly
+4. **Ensure Loop**: First and last frames should connect
+
+### Step 3: Replace Files
+1. **Backup Originals**: Copy current videos to backup folder
+2. **Replace Files**:
+ - Replace `project1.mp4` with AutoApply video
+ - Replace `project2.mp4` with OpenRLHF video
+ - Replace `project3.mp4` with ArchUnit video
+ - Replace `project4.mp4` with GPT-2 video
+3. **Keep Same Names**: Don't change filenames, just replace content
+
+### Step 4: Test
+1. **Start Dev Server**: Run `npm run dev`
+2. **Navigate to Projects**: Go to "My Selected Work" section
+3. **Check All Videos**: Verify each project displays correctly
+4. **Test Interactions**: Hover over projects, check video loops
+5. **Check Console**: Look for any loading errors
+
+## File Structure
+```
+public/
+└── textures/
+    └── project/
+        ├── project1.mp4   <- AutoApply demo
+        ├── project2.mp4   <- OpenRLHF demo
+        ├── project3.mp4   <- ArchUnit demo
+        ├── project4.mp4   <- GPT-2 demo
+        └── project5.mp4   <- (unused, can be removed)
+```
+
+## Troubleshooting
+
+### Video Not Playing
+1. **Check Format**: Ensure MP4 with H.264 codec
+2. **Check Size**: Large files may cause loading issues
+3. **Browser Console**: Look for error messages
+4. **Try Different Browser**: Test in Chrome, Firefox, Safari
+
+### Poor Performance
+1. **Reduce File Size**: Compress videos further
+2. **Lower Resolution**: Use 720p instead of 1080p
+3. **Shorter Duration**: Trim to 10-15 seconds
+4. **Check Network**: Slow connections may struggle
+
+### Video Quality Issues
+1. **Increase Bitrate**: Higher quality encoding
+2. **Check Source**: Ensure original recording is high quality
+3. **Avoid Upscaling**: Don't increase resolution of low-quality source
+4. **Test on Different Devices**: Mobile vs desktop performance
+
+### Loop Issues
+1. **Match First/Last Frame**: Ensure seamless loop
+2. **Add Fade Transition**: Smooth transition between end and start
+3. **Check Video Length**: Very short videos may loop too quickly
+
+## Advanced Customization
+
+### Adding 3D Elements
+If you want to enhance the demos with 3D visualizations:
+- **AutoApply**: Floating job application forms, animated success metrics
+- **OpenRLHF**: Neural network node visualizations, GPU cluster representations
+- **ArchUnit**: Interactive dependency trees, architecture layer displays
+- **GPT-2**: Transformer architecture visualization, token flow animations
+
+### Custom Video Textures
+You can also use the videos as textures on other 3D objects:
+```jsx
+// Example: use a project video as a texture on another 3D object
+import { useVideoTexture } from '@react-three/drei';
+const videoTexture = useVideoTexture('/textures/project/project1.mp4');
+// ...then apply it with <meshBasicMaterial map={videoTexture} toneMapped={false} />
+```
+
+### Dynamic Video Switching
+For interactive demos, videos can be switched based on user interaction:
+```jsx
+// Example: Switch videos based on hover state
+const currentVideo = isHovered ? '/textures/project/demo.mp4' : '/textures/project/idle.mp4';
+```
+
+## Testing Checklist
+- [ ] All 4 videos replaced with actual project content
+- [ ] Videos load without errors
+- [ ] Videos loop seamlessly
+- [ ] File sizes are optimized (under 10MB each)
+- [ ] Videos display correctly on 3D computer screens
+- [ ] No console errors
+- [ ] Performance is acceptable across devices
+- [ ] Videos are relevant to their respective projects
+
+## Performance Tips
+- **Preloading**: Project videos are preloaded automatically, so no extra setup is needed
+- **Use Video Compression**: H.264 codec with appropriate bitrate
+- **Test on Mobile**: Ensure videos work on different devices
+- **Monitor Network Usage**: Large videos may impact loading times
+
+## Need Help?
+If you encounter issues:
+1. Check browser console for errors
+2. Verify video format is MP4 with H.264 codec
+3. Test with a simple, small video first
+4. Consider using online video converters
+5. Check that file names match exactly
+6. Ensure videos are in the correct directory
+
+## Alternative Approaches
+
+### Image Sequences
+Instead of video, you can use image sequences:
+- Convert video to image frames
+- Use `useTexture` with animated sprite sheets
+- Better for simple animations
+
+### GIF Support
+While not recommended for performance, GIFs can work:
+- Convert to MP4 for better compression
+- Use online GIF to MP4 converters
+
+### Interactive Demos
+For more advanced demos, consider:
+- Embedded iframes showing live applications
+- Interactive WebGL demos
+- Real-time API demonstrations
\ No newline at end of file
diff --git a/cv.txt b/cv.txt
new file mode 100644
index 0000000..a6c75a2
--- /dev/null
+++ b/cv.txt
@@ -0,0 +1,149 @@
+\documentclass[a4paper]{article}
+
+\usepackage[utf8]{inputenc}
+\usepackage{fontenc}
+\usepackage{enumitem}
+\usepackage[margin=0.5in]{geometry}
+\usepackage{hyperref}
+\usepackage{anyfontsize}
+
+% Remove section numbering
+\setcounter{secnumdepth}{0}
+
+% Custom font size - meeting 10.5+ requirement
+\renewcommand{\normalsize}{\fontsize{10.5}{12.6}\selectfont}
+\normalsize
+
+% Define section style with better spacing
+\renewcommand{\section}[1]{%
+ \vspace{0.4em}%
+ {\Large\textbf{#1}}\\[-0.7em]%
+ \rule{\textwidth}{1pt}%
+ \vspace{0.2em}%
+}
+
+\begin{document}
+\pagenumbering{gobble}
+
+\begin{center}
+{\Large\textbf{Jan Magnus Heimann}}\\[0.2em]
+heimann.ai\\[0.2em]
+jan@heimann.ai
+\end{center}
+
+\section{Professional Profile}
+AI/ML Engineer specializing in Reinforcement Learning and Large Language Models with proven track record of deploying production-grade AI systems. Delivered significant business impact including \$1.5M cost reduction through RL-based optimization and 20\% engagement improvement in advertising. Expert in training and fine-tuning transformer models, implementing multi-agent RL systems, and building scalable ML pipelines.
+
+\section{Skills}
+\textbf{Programming Languages:} Python, JavaScript, TypeScript, C++, SQL, Swift\\
+\textbf{Machine Learning:} PyTorch, TensorFlow, Hugging Face Transformers, LangChain, CUDA, JAX\\
+\textbf{Reinforcement Learning:} PPO, SAC, DQN, A3C, Multi-Agent RL, Reward Shaping, Policy Gradient Methods\\
+\textbf{LLMs \& NLP:} Fine-tuning (LoRA/QLoRA), RAG Systems, Context Engineering, Vector Databases\\
+\textbf{MLOps:} Docker, Kubernetes, AWS, GCP, MLflow, Weights \& Biases, Model Serving, Comet ML
+
+
+
+
+
+\section{Experience}
+
+\textbf{Machine Learning Engineer} \hfill Apr 2025 – Present\\[0.05em]
+\textit{DRWN AI}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Developing Multi-Agent Reinforcement Learning system using PPO to optimize advertising budget allocation, achieving 15-25\% improvement in cost-per-acquisition (CPA) across client campaigns
+ \item Implemented custom reward functions adapting to diverse KPIs (CTR, ROAS, impressions), reducing average cost-per-click by 18\% while maintaining target reach
+ \item Built real-time inference pipeline serving RL policies with 95ms latency, processing 2M+ daily bid decisions across 50+ active campaigns
+ \item Integrated transformer models for campaign feature extraction, improving RL convergence speed by 30\% through better state representations
+\end{itemize}
+
+\textbf{Machine Learning Engineer/Advisor, Part time} \hfill Oct 2024 – Mar 2025\\[0.05em]
+\textit{Deepmask GmbH}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Fine-tuned DeepSeek R1 (70B parameters) using LoRA with rank-16 adaptation, achieving +4\% BLEU and +6\% ROUGE-L on German benchmarks
+ \item Implemented production RAG system combining dense embeddings with hybrid search, processing 100K+ documents with 92\% retrieval accuracy
+ \item Optimized LLM inference using quantization and batching strategies, achieving 3x throughput improvement while maintaining quality
+ \item Built comprehensive evaluation framework tracking perplexity, task-specific metrics, and human preference alignment across multiple German NLP benchmarks
+\end{itemize}
+
+\textbf{Machine Learning Engineer} \hfill Mar 2024 – Mar 2025\\[0.05em]
+\textit{Rocket Factory Augsburg AG}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Designed RL pipeline using PPO to optimize rocket design parameters, training agents to minimize cost-per-payload while satisfying structural constraints
+ \item Implemented Graph Neural Networks to encode rocket component relationships, providing state representations for RL agents evaluating 100K+ configurations
+ \item Created custom OpenAI Gym environment interfacing with physics simulators, enabling RL agents to learn from 10K+ simulated trajectories
+ \item Achieved \$1.5M projected cost reduction per launch through RL-discovered optimizations improving structural efficiency by 12\%
+\end{itemize}
+
+\textbf{Assistant Machine Learning Researcher} \hfill May 2024 – Dec 2024\\[0.05em]
+\textit{Massachusetts Institute of Technology}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Developed Graph Neural Networks with attention mechanisms for material synthesis prediction, improving accuracy by 9.2\% over baseline methods
+ \item Implemented multi-task transformer pretraining on 500K material descriptions, fine-tuning shared representations across 12 downstream tasks
+ \item Applied BERT-style masked language modeling to scientific text, creating domain-specific embeddings that improved material property prediction by 4.7\%
+\end{itemize}
+
+\textbf{Software Engineer} \hfill Jan 2023 – Mar 2024\\[0.05em]
+\textit{OHB Systems AG}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Built ML pipeline automating FEM analysis using Gaussian Processes for uncertainty quantification, reducing engineering cycle time by 25\%
+ \item Developed LSTM-based anomaly detection for satellite telemetry data, implementing attention mechanisms for interpretable predictions
+ \item Deployed models using MLflow and Docker, establishing continuous training pipelines triggered by distribution shift detection
+\end{itemize}
+
+\textbf{Co-Founder/Software Lead} \hfill Jan 2021 – Dec 2022\\[0.05em]
+\textit{GetMoBie GmbH}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Led development of mobile banking application serving 20K+ users, presenting at "Die Höhle der Löwen" TV show
+ \item Implemented Random Forest models for transaction categorization and fraud detection on 1M+ records, achieving 0.95 AUC
+ \item Built collaborative filtering recommendation system using matrix factorization, increasing financial product adoption by 15\%
+ \item Managed team of 5 developers while establishing ML pipelines for real-time inference and model monitoring
+\end{itemize}
+
+\textbf{Machine Learning Engineer Intern} \hfill Aug 2020 – May 2021\\[0.05em]
+\textit{BMW AG}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Created job recommendation system using collaborative filtering on implicit feedback data, facilitating 100+ internal role transitions
+ \item Implemented document classification using TF-IDF and SVM, achieving 89\% F1-score on 50K corporate documents
+\end{itemize}
+
+\section{Projects}
+
+\textbf{AutoApply - AI Job Application Automation SaaS}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Built multi-agent system using GPT-4 and Claude-3 APIs to automate job applications, generating \$480K ARR with 10K+ monthly active users
+ \item Implemented form detection using fine-tuned YOLOv8 achieving 94.3\% accuracy, processing 78K+ successful applications
+ \item Scaled infrastructure to handle 2.8M+ monthly queries with 99.7\% uptime using containerized microservices
+\end{itemize}
+
+\textbf{OpenRLHF Fork - Scalable RLHF Training Framework}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Forked and enhanced OpenRLHF framework to implement hybrid DPO/PPO training pipeline, reducing GPU memory usage by 15\% through gradient checkpointing optimizations
+ \item Achieved 23\% faster convergence on reward model training by implementing adaptive KL penalty scheduling and batch-wise advantage normalization
+ \item Contributed multi-node distributed training support using DeepSpeed ZeRO-3, enabling training of 13B parameter models on 8x A100 clusters
+\end{itemize}
+
+\textbf{Domain-Specific GPT-2 Fine-Tuning}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Fine-tuned GPT-2 medium on 10K aerospace papers using custom tokenizer with domain-specific vocabulary extensions
+ \item Achieved 12\% ROUGE score improvement for technical summarization through careful hyperparameter tuning and data augmentation
+ \item Implemented distributed training across 4 GPUs using gradient accumulation to simulate larger batch sizes
+\end{itemize}
+
+\textbf{ArchUnit TypeScript - Open Source Library}
+\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$]
+ \item Created TypeScript architecture testing library achieving 400+ GitHub stars and widespread adoption in JavaScript ecosystem
+ \item Implemented AST-based static analysis supporting circular dependency detection, layered architecture validation, and code metrics (LCOM, coupling, abstractness)
+ \item Built pattern matching system with glob/regex support and universal testing framework integration (Jest, Vitest, Jasmine, Mocha)
+\end{itemize}
+
+\section{Publications}
+Heimann, J., et al. "Reaction Graph Networks for Inorganic Synthesis Condition Prediction of Solid State Materials", \textit{AI4Mat-2024: NeurIPS 2024 Workshop on AI for Accelerated Materials Design}
+
+\section{Education}
+\textbf{Bachelor of Science in Aerospace Engineering} \hfill 2025\\
+Technical University of Munich
+
+\textbf{Bachelor of Science in Astronomical \& Planetary Sciences} \hfill 2024\\
+Arizona State University
+
+\end{document}
\ No newline at end of file
diff --git a/debug_jan_model.js b/debug_jan_model.js
new file mode 100644
index 0000000..4d48b1f
--- /dev/null
+++ b/debug_jan_model.js
@@ -0,0 +1,34 @@
+// Temporary debug script to inspect jan.glb model structure
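+// Usage: render <ModelDebugger /> somewhere inside the app's <Canvas>/Suspense tree
+// (e.g. temporarily in Hero.jsx), then check the browser console for the logged output.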
+import React, { useEffect } from 'react';
+import { useGLTF, useGraph } from '@react-three/drei';
+import { SkeletonUtils } from 'three-stdlib';
+
+const ModelDebugger = () => {
+ const { scene } = useGLTF('/models/animations/jan.glb');
+ const clone = React.useMemo(() => SkeletonUtils.clone(scene), [scene]);
+ const { nodes, materials } = useGraph(clone);
+
+ useEffect(() => {
+ console.log('=== JAN.GLB MODEL STRUCTURE ===');
+ console.log('Available nodes:', Object.keys(nodes));
+ console.log('Available materials:', Object.keys(materials));
+
+ // Log each node with its properties
+ Object.entries(nodes).forEach(([name, node]) => {
+ console.log(`Node: ${name}`, {
+ type: node.type,
+ hasGeometry: !!node.geometry,
+ hasMaterial: !!node.material,
+ hasChildren: node.children?.length > 0,
+ childrenCount: node.children?.length || 0,
+ position: node.position,
+ rotation: node.rotation,
+ scale: node.scale
+ });
+ });
+ }, [nodes, materials]);
+
+ return null;
+};
+
+export default ModelDebugger;
\ No newline at end of file
diff --git a/index.html b/index.html
index bca8e7f..d26457b 100644
--- a/index.html
+++ b/index.html
@@ -1,13 +1,16 @@
-    <title>Adrian Hajdin</title>
+    <title>Jan Magnus Heimann - AI/ML Engineer</title>