diff --git a/CHARACTER_REPLACEMENT_GUIDE.md b/CHARACTER_REPLACEMENT_GUIDE.md new file mode 100644 index 0000000..3435fde --- /dev/null +++ b/CHARACTER_REPLACEMENT_GUIDE.md @@ -0,0 +1,149 @@ +# 3D Character Replacement Guide + +This guide explains how to replace the default 3D character in the Work Experience section with your own avatar. + +## Current Character Setup + +- **Location**: Work Experience section displays a 3D human character +- **Model File**: `/public/models/animations/developer.glb` +- **Component**: `src/components/Developer.jsx` +- **Animations**: 4 animations (idle, salute, clapping, victory) +- **Animation Files**: Located in `/public/models/animations/` folder + +## Option 1: ReadyPlayerMe (Recommended) + +### Step 1: Create Your Avatar +1. **Direct URL**: https://readyplayer.me/avatar +2. **Alternative**: Go to https://readyplayer.me/ → Click "Try it now" → Create account +3. **Choose Creation Method**: + - **From Photo**: Upload a clear photo of yourself + - **From Scratch**: Build manually using their editor +4. **Customize**: Adjust hair, clothing, accessories, facial features +5. **Download**: Export as GLB format + +### Step 2: Replace the Model +1. **Rename** your downloaded file to `developer.glb` +2. **Replace** the existing file at `/public/models/animations/developer.glb` +3. **Keep** all animation files (idle.fbx, salute.fbx, clapping.fbx, victory.fbx) + +### Step 3: Test +1. Run `npm run dev` +2. Navigate to Work Experience section +3. Hover over different work experiences to test animations +4. Check that the model loads without errors + +## Option 2: VRoid Studio (Most Customizable) + +### Step 1: Create Avatar +1. **Download**: https://vroid.com/en/studio +2. **Create**: Anime-style avatar from scratch +3. **Export**: As VRM format +4. **Convert**: Use an online converter to produce a GLB file + +### Step 2: Replace Model +- Follow same steps as Option 1, Step 2 + +## Option 3: Mixamo + Adobe + +### Step 1: Create Character +1. **Visit**: https://www.mixamo.com/ +2. **Create Account**: Adobe account required +3. **Choose Character**: Select base character or upload custom +4. **Customize**: Appearance and clothing +5. **Download**: As FBX format, then convert to GLB (e.g. in Blender) + +### Step 2: Replace Model +- Follow same steps as Option 1, Step 2 + +## Alternative Options + +### VRChat Integration +- **URL**: https://hub.vrcav.com/ +- **Process**: Create Avatar → Select Ready Player Me +- Often offers a photo-upload option when the main site doesn't + +### Free Model Sources +- **Sketchfab**: Search for "human character GLB" +- **Mixamo**: Free characters with animations +- **OpenGameArt**: Free 3D models + +## Troubleshooting + +### If Animations Don't Work +1. **Check Console**: Look for errors in browser dev tools +2. **Bone Structure**: Ensure your model has similar bone names +3. **Retarget Animations**: Use Blender to retarget animations to your model + +### If Model is Wrong Size +1. **Scale**: Adjust `scale={3}` in `src/sections/Experience.jsx:26` +2. 
**Position**: Modify `position-y={-3}` in `src/sections/Experience.jsx:26` + +### If Materials Look Wrong +Update material references in `src/components/Developer.jsx`: +- `Wolf3D_Hair` - Hair material +- `Wolf3D_Skin` - Skin material +- `Wolf3D_Body` - Body material +- `Wolf3D_Outfit_Top/Bottom` - Clothing materials + +### Animation Issues +The model expects these bone names for animations: +- Hips (root bone) +- Standard humanoid bone structure +- If bones don't match, animations won't work properly + +## Technical Details + +### Model Requirements +- **Format**: GLB/GLTF +- **Rigged**: Must have skeleton for animations +- **Bone Structure**: Humanoid bone names compatible with existing animations +- **Size**: Keep under 10MB for good performance + +### Animation Files +- **idle.fbx**: Default standing pose +- **salute.fbx**: Saluting gesture +- **clapping.fbx**: Clapping hands +- **victory.fbx**: Victory pose + +### Component Structure +```jsx +// src/components/Developer.jsx +const Developer = ({ animationName = 'idle', ...props }) => { + // Loads model and animations + // Switches between animations based on work experience hover +} +``` + +### Usage in Experience Section +```jsx +// src/sections/Experience.jsx + +``` + +## File Structure +``` +public/ +├── models/ + ├── animations/ + │ ├── developer.glb <- Replace this file + │ ├── idle.fbx <- Keep these + │ ├── salute.fbx <- Keep these + │ ├── clapping.fbx <- Keep these + │ └── victory.fbx <- Keep these +``` + +## Testing Checklist +- [ ] Model loads without errors +- [ ] All 4 animations work (idle, salute, clapping, victory) +- [ ] Hover interactions trigger animations +- [ ] Model is properly scaled and positioned +- [ ] No console errors +- [ ] Performance is acceptable + +## Need Help? +If you encounter issues: +1. Check browser console for errors +2. Ensure model format is GLB +3. Verify bone structure matches expected format +4. Test with a simple model first +5. Consider using Blender for model adjustments \ No newline at end of file diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..d9e0a45 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,88 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Development Commands + +- `npm run dev` - Start development server (runs on http://localhost:5173) +- `npm run build` - Build production version +- `npm run preview` - Preview production build locally +- `npm run lint` - Run ESLint to check code quality + +## Project Architecture + +This is a React + Three.js portfolio website built with Vite. The site showcases an interactive 3D experience with sections for hero, about, projects, publications, work experience, and contact. 
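+
+For orientation, here is a minimal sketch of how `App.jsx` composes those sections (import paths and the wrapper class are assumptions; the Main App Flow notes below give the authoritative order):
+
+```jsx
+// src/App.jsx (sketch only — the actual file may differ)
+import Navbar from './sections/Navbar.jsx';
+import Hero from './sections/Hero.jsx';
+import About from './sections/About.jsx';
+import Projects from './sections/Projects.jsx';
+import Publications from './sections/Publications.jsx';
+import WorkExperience from './sections/Experience.jsx';
+import Contact from './sections/Contact.jsx';
+import Footer from './sections/Footer.jsx';
+
+const App = () => (
+  <main className="max-w-7xl mx-auto">
+    <Navbar />
+    <Hero />
+    <About />
+    <Projects />
+    <Publications />
+    <WorkExperience />
+    <Contact />
+    <Footer />
+  </main>
+);
+
+export default App;
+```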
+ +### Key Technologies +- **React 18** - UI framework +- **Three.js + React Three Fiber** - 3D graphics and animations +- **React Three Drei** - Helper utilities for Three.js +- **GSAP** - Animations and transitions +- **Tailwind CSS** - Styling framework +- **EmailJS** - Contact form functionality +- **Leva** - 3D development controls (hidden in production) + +### Core Structure + +**Main App Flow**: `App.jsx` renders sections in order: Navbar → Hero → About → Projects → Publications → WorkExperience → Contact → Footer + +**3D Scene Architecture**: +- `Hero.jsx` contains the main 3D canvas with interactive elements +- `HeroCamera.jsx` handles camera controls and mouse interactions +- `HackerRoom.jsx` is the main 3D room model +- Floating 3D elements: `Cube.jsx`, `Rings.jsx`, tech logos (`PythonLogo.jsx`, `PyTorchLogo.jsx`, etc.) +- `calculateSizes()` in `constants/index.js` handles responsive positioning + +**Data Management**: +- `src/constants/index.js` contains all static data (projects, publications, work experience, navigation) +- Responsive breakpoints handled via `react-responsive` hooks + +**Component Pattern**: +- 3D components in `/components` directory +- Page sections in `/sections` directory +- Shared utilities in `/hooks` directory +- All 3D models stored in `/public/models` +- Textures and assets in `/public/textures` and `/public/assets` + +### Email Configuration + +The contact form and newsletter subscription use EmailJS. Environment variables needed: +- `VITE_EMAILJS_SERVICE_ID` - EmailJS service ID +- `VITE_EMAILJS_TEMPLATE_ID` - Template ID for contact form +- `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID` - Template ID for newsletter subscriptions +- `VITE_EMAILJS_PUBLIC_KEY` - EmailJS public key + +**Important**: Use `VITE_` prefix for environment variables in Vite (not `REACT_APP_`) + +For detailed setup instructions, see `EMAILJS_SETUP.md` + +### 3D Model Loading + +Models are loaded from `/public/models` using `useGLTF` hook. Key models: +- `hacker-room.glb` - Main desk/room scene +- `computer.glb` - Interactive computer for projects +- `cube.glb`, `react.glb` - Floating elements +- Animation files in `/models/animations/` + +### Performance Considerations + +- All 3D models are preloaded using `useGLTF.preload()` +- Responsive sizing calculated once and passed to components +- Suspense boundaries with custom loading components +- Media queries determine render complexity based on device + +### Deployment + +#### Vercel Deployment +1. Connect your GitHub repository to Vercel +2. Configure environment variables in Vercel project settings: + - `VITE_EMAILJS_SERVICE_ID` + - `VITE_EMAILJS_TEMPLATE_ID` + - `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID` + - `VITE_EMAILJS_PUBLIC_KEY` +3. Deploy using `npm run build` +4. Test contact form and newsletter functionality after deployment + +#### Local Development with EmailJS +Create `.env.local` file with EmailJS environment variables for local testing. +Never commit this file to version control. \ No newline at end of file diff --git a/EMAILJS_SETUP.md b/EMAILJS_SETUP.md new file mode 100644 index 0000000..f85af63 --- /dev/null +++ b/EMAILJS_SETUP.md @@ -0,0 +1,151 @@ +# EmailJS Setup Guide for Contact Form & Newsletter + +This guide explains how to configure EmailJS for both the contact form and newsletter subscription functionality on your Vercel-deployed website. 
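+
+As a quick orientation, this is roughly how the forms call EmailJS — a minimal sketch assuming the `@emailjs/browser` SDK; the real `src/sections/Contact.jsx` adds validation, loading state, and success/error alerts:
+
+```jsx
+// Sketch only: send a contact message using the environment variables and
+// template fields described in this guide.
+import emailjs from '@emailjs/browser';
+
+const sendContactMessage = ({ name, email, message }) =>
+  emailjs.send(
+    import.meta.env.VITE_EMAILJS_SERVICE_ID,
+    import.meta.env.VITE_EMAILJS_TEMPLATE_ID,
+    {
+      from_name: name,
+      from_email: email,
+      to_name: 'Jan Magnus Heimann',
+      to_email: 'jan@heimann.ai',
+      message,
+    },
+    import.meta.env.VITE_EMAILJS_PUBLIC_KEY
+  );
+```
+
+The newsletter form works the same way, but uses `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID` and passes `subscriber_email` instead of the contact fields.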
+ +## Required Environment Variables + +Add these environment variables to your Vercel project settings: + +``` +VITE_EMAILJS_SERVICE_ID=your_service_id_here +VITE_EMAILJS_TEMPLATE_ID=your_contact_template_id_here +VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID=your_newsletter_template_id_here +VITE_EMAILJS_PUBLIC_KEY=your_public_key_here +``` + +## EmailJS Account Setup + +### 1. Create EmailJS Account +1. Go to [EmailJS.com](https://www.emailjs.com/) +2. Sign up for a free account +3. Verify your email address + +### 2. Add Email Service +1. Go to the "Email Services" section +2. Click "Add New Service" +3. Choose your email provider (Gmail, Outlook, etc.) +4. Follow the setup instructions for your provider +5. Note the **Service ID** for your environment variables + +### 3. Create Email Templates + +#### Contact Form Template +1. Go to "Email Templates" section +2. Click "Create New Template" +3. Set up the template with these variables: + - `{{from_name}}` - Sender's name + - `{{from_email}}` - Sender's email + - `{{to_name}}` - Your name (Jan Magnus Heimann) + - `{{to_email}}` - Your email (jan@heimann.ai) + - `{{message}}` - Contact message content +4. Example template: + +``` +Subject: New Contact Form Message from {{from_name}} + +From: {{from_name}} ({{from_email}}) +To: {{to_name}} + +Message: +{{message}} + +--- +This message was sent from your portfolio contact form. +``` + +5. Save and note the **Template ID** + +#### Newsletter Subscription Template +1. Create another new template for newsletter subscriptions +2. Set up with these variables: + - `{{subscriber_email}}` - Newsletter subscriber's email + - `{{to_name}}` - Your name + - `{{to_email}}` - Your email + - `{{message}}` - Subscription notification message +3. Example template: + +``` +Subject: New Newsletter Subscription + +Hello {{to_name}}, + +You have a new newsletter subscription! + +Subscriber Email: {{subscriber_email}} + +{{message}} + +--- +This notification was sent from your portfolio newsletter signup. +``` + +4. Save and note the **Newsletter Template ID** + +### 4. Get Public Key +1. Go to "Account" section +2. Find your **Public Key** +3. Note this for your environment variables + +## Vercel Deployment Setup + +### 1. Add Environment Variables to Vercel +1. Go to your Vercel project dashboard +2. Navigate to Settings → Environment Variables +3. Add each of the four environment variables: + - `VITE_EMAILJS_SERVICE_ID` + - `VITE_EMAILJS_TEMPLATE_ID` + - `VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID` + - `VITE_EMAILJS_PUBLIC_KEY` +4. Set them for Production, Preview, and Development environments + +### 2. Redeploy Your Application +After adding environment variables, trigger a new deployment so Vercel picks up the new configuration. + +## Testing + +### Local Testing +1. Create a `.env.local` file in your project root +2. Add your environment variables: +``` +VITE_EMAILJS_SERVICE_ID=your_service_id_here +VITE_EMAILJS_TEMPLATE_ID=your_contact_template_id_here +VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID=your_newsletter_template_id_here +VITE_EMAILJS_PUBLIC_KEY=your_public_key_here +``` +3. Run `npm run dev` and test both forms + +### Production Testing +1. Deploy to Vercel with environment variables configured +2. Test the contact form at `yourdomain.com/#contact` +3. Test the newsletter signup at `yourdomain.com/#blog` + +## Troubleshooting + +### Common Issues +1. **Environment variables not found**: Make sure variables start with `VITE_` prefix for Vite +2. 
**EmailJS service errors**: Verify your service ID and public key are correct +3. **Template not found**: Double-check template IDs match exactly +4. **CORS errors**: EmailJS should handle CORS automatically, but verify your domain is allowlisted in EmailJS settings + +### Email Delivery Issues +1. Check your EmailJS dashboard for sent email logs +2. Verify your email service connection is active +3. Check spam folders for test emails +4. Ensure your email service has proper authentication + +## Features + +### Contact Form (`/src/sections/Contact.jsx`) +- Full name, email, and message fields +- Form validation +- Loading states during submission +- Success/error notifications +- Automatic form reset after successful submission + +### Newsletter Subscription (`/src/sections/Blog.jsx`) +- Email address field with validation +- Loading states during subscription +- Success/error notifications +- Automatic field reset after successful subscription + +Both forms use the shared Alert component for consistent user feedback. \ No newline at end of file diff --git a/README.md b/README.md index cc5e3d7..49017e6 100644 --- a/README.md +++ b/README.md @@ -11,11 +11,7 @@ tailwindcss -

A 3D Dev Portfolio

- -
- Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family!! -
+

3D Developer Portfolio

## 📋 Table of Contents @@ -27,7 +23,6 @@ 5. 🕸️ [Snippets (Code to Copy)](#snippets) 6. 🔗 [Links](#links) 7. 📦 [Assets](#assets) -8. 🚀 [More](#more) ## 🚨 Tutorial @@ -872,11 +867,3 @@ Here is the list of all the resources used in the project video: Models and Assets used in the project can be found [here](https://drive.google.com/file/d/1UiJyotDmF2_tBC-GeLpRZuFY_gx5e7iX/view?usp=sharing) -## 🚀 More -**Advance your skills with Next.js Pro Course** - -Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning experience. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! - - -Project Banner - diff --git a/VIDEO_REPLACEMENT_GUIDE.md b/VIDEO_REPLACEMENT_GUIDE.md new file mode 100644 index 0000000..5fe6ae6 --- /dev/null +++ b/VIDEO_REPLACEMENT_GUIDE.md @@ -0,0 +1,232 @@ +# Project Demo Video Replacement Guide + +This guide explains how to replace the default demo videos in the "My Selected Work" section with your actual project content. + +## Current Video Setup + +- **Location**: "My Selected Work" section displays videos on 3D computer screens +- **Video Files**: `/public/textures/project/project1.mp4` through `project4.mp4` +- **Component**: `src/components/DemoComputer.jsx` +- **Display**: Videos loop automatically on interactive 3D computer models +- **Integration**: Each project in `src/constants/index.js` has a `texture` property pointing to its video + +## Video Files to Replace + +### Project 1: AutoApply - AI Job Application SaaS +- **File**: `/public/textures/project/project1.mp4` +- **Current**: Generic demo video +- **Replace With**: AutoApply platform demo, dashboard metrics, job application process + +### Project 2: OpenRLHF Fork - Scalable RLHF Framework +- **File**: `/public/textures/project/project2.mp4` +- **Current**: Generic demo video +- **Replace With**: Training dashboards, performance graphs, model comparisons + +### Project 3: ArchUnit TypeScript - Architecture Testing +- **File**: `/public/textures/project/project3.mp4` +- **Current**: Generic demo video +- **Replace With**: Code analysis results, dependency graphs, test outputs + +### Project 4: Domain-Specific GPT-2 Fine-Tuning +- **File**: `/public/textures/project/project4.mp4` +- **Current**: Generic demo video +- **Replace With**: Training progress, text generation examples, model comparisons + +## Video Requirements + +### Technical Specifications +- **Format**: MP4 (H.264 codec recommended) +- **Resolution**: 1920x1080 or 1280x720 (16:9 aspect ratio works best) +- **Duration**: 10-30 seconds (loops automatically) +- **File Size**: Keep under 10MB each for good web performance +- **Frame Rate**: 30fps recommended +- **Audio**: Not required (videos play without sound) + +### Content Guidelines +- **Show Real Functionality**: Display actual project features, not mockups +- **Clear Visuals**: High contrast, readable text, smooth animations +- **Loop Seamlessly**: Ensure first and last frames connect smoothly +- **Focus on Key Features**: Highlight main project capabilities +- **Professional Quality**: Clean, polished screen recordings + +## Content Creation Ideas + +### AutoApply Video Content +- **Dashboard Overview**: Show user analytics, success rates, application tracking +- **Job Application Process**: Demonstrate automated form filling +- **AI Detection**: Visualize YOLOv8 form detection in action +- **Results Metrics**: Display $480K ARR, 10K+ users, 78K+ applications +- **Multi-agent System**: Show GPT-4 and Claude-3 API 
integration + +### OpenRLHF Video Content +- **Training Dashboard**: Real-time loss curves, convergence graphs +- **Performance Metrics**: 15% memory reduction, 23% faster convergence +- **Multi-GPU Setup**: Show distributed training across 8x A100 clusters +- **DPO/PPO Comparison**: Before/after training pipeline results +- **Code Examples**: Brief code snippets with syntax highlighting + +### ArchUnit Video Content +- **Dependency Analysis**: Show circular dependency detection +- **Architecture Validation**: Live testing of code structure rules +- **Pattern Matching**: Demonstrate glob/regex pattern matching +- **GitHub Integration**: Show the 400+ stars, community adoption +- **Testing Framework**: Jest/Mocha integration examples + +### GPT-2 Video Content +- **Training Progress**: Show loss curves, ROUGE score improvements +- **Text Generation**: Live aerospace paper summarization demo +- **Tokenization Process**: Visualize domain-specific vocabulary +- **Model Comparison**: Before/after fine-tuning results +- **Technical Metrics**: 12% ROUGE improvement, 4 GPU setup + +## Recording Tools + +### Screen Recording Software +- **macOS**: QuickTime Player, Screenshot (Cmd+Shift+5) +- **Windows**: OBS Studio, Bandicam, Camtasia +- **Cross-Platform**: OBS Studio (free), Loom, ScreenFlow + +### Video Editing Tools +- **Basic**: iMovie (macOS), Movie Maker (Windows) +- **Advanced**: Adobe Premiere Pro, Final Cut Pro, DaVinci Resolve +- **Online**: Canva, Kapwing, ClipChamp + +### Optimization Tools +- **HandBrake**: Free video compression +- **FFmpeg**: Command-line video processing +- **Online**: CloudConvert, Zamzar + +## Step-by-Step Replacement Process + +### Step 1: Create Your Videos +1. **Plan Content**: Decide what to show for each project +2. **Set Up Recording**: Use screen recording software +3. **Record in High Quality**: 1080p or 720p, 30fps +4. **Keep It Short**: 10-30 seconds per video +5. **Edit if Needed**: Trim, add transitions, optimize + +### Step 2: Optimize Videos +1. **Compress**: Use HandBrake or similar tool +2. **Check Size**: Ensure each video is under 10MB +3. **Test Playback**: Verify videos play smoothly +4. **Ensure Loop**: First and last frames should connect + +### Step 3: Replace Files +1. **Backup Originals**: Copy current videos to backup folder +2. **Replace Files**: + - Replace `project1.mp4` with AutoApply video + - Replace `project2.mp4` with OpenRLHF video + - Replace `project3.mp4` with ArchUnit video + - Replace `project4.mp4` with GPT-2 video +3. **Keep Same Names**: Don't change filenames, just replace content + +### Step 4: Test +1. **Start Dev Server**: Run `npm run dev` +2. **Navigate to Projects**: Go to "My Selected Work" section +3. **Check All Videos**: Verify each project displays correctly +4. **Test Interactions**: Hover over projects, check video loops +5. **Check Console**: Look for any loading errors + +## File Structure +``` +public/ +├── textures/ + ├── project/ + │ ├── project1.mp4 <- AutoApply demo + │ ├── project2.mp4 <- OpenRLHF demo + │ ├── project3.mp4 <- ArchUnit demo + │ ├── project4.mp4 <- GPT-2 demo + │ └── project5.mp4 <- (unused, can be removed) +``` + +## Troubleshooting + +### Video Not Playing +1. **Check Format**: Ensure MP4 with H.264 codec +2. **Check Size**: Large files may cause loading issues +3. **Browser Console**: Look for error messages +4. **Try Different Browser**: Test in Chrome, Firefox, Safari + +### Poor Performance +1. **Reduce File Size**: Compress videos further +2. 
**Lower Resolution**: Use 720p instead of 1080p +3. **Shorter Duration**: Trim to 10-15 seconds +4. **Check Network**: Slow connections may struggle + +### Video Quality Issues +1. **Increase Bitrate**: Higher quality encoding +2. **Check Source**: Ensure original recording is high quality +3. **Avoid Upscaling**: Don't increase resolution of low-quality source +4. **Test on Different Devices**: Mobile vs desktop performance + +### Loop Issues +1. **Match First/Last Frame**: Ensure seamless loop +2. **Add Fade Transition**: Smooth transition between end and start +3. **Check Video Length**: Very short videos may loop too quickly + +## Advanced Customization + +### Adding 3D Elements +If you want to enhance the demos with 3D visualizations: +- **AutoApply**: Floating job application forms, animated success metrics +- **OpenRLHF**: Neural network node visualizations, GPU cluster representations +- **ArchUnit**: Interactive dependency trees, architecture layer displays +- **GPT-2**: Transformer architecture visualization, token flow animations + +### Custom Video Textures +You can also use the videos as textures on other 3D objects: +```jsx +// Example: Use video on a different 3D shape +const videoTexture = useVideoTexture('/textures/project/project1.mp4'); +``` + +### Dynamic Video Switching +For interactive demos, videos can be switched based on user interaction: +```jsx +// Example: Switch videos based on hover state +const currentVideo = isHovered ? '/textures/project/demo.mp4' : '/textures/project/idle.mp4'; +``` + +## Testing Checklist +- [ ] All 4 videos replaced with actual project content +- [ ] Videos load without errors +- [ ] Videos loop seamlessly +- [ ] File sizes are optimized (under 10MB each) +- [ ] Videos display correctly on 3D computer screens +- [ ] No console errors +- [ ] Performance is acceptable across devices +- [ ] Videos are relevant to their respective projects + +## Performance Tips +- **Preload Important Videos**: Videos are automatically preloaded +- **Use Video Compression**: H.264 codec with appropriate bitrate +- **Test on Mobile**: Ensure videos work on different devices +- **Monitor Network Usage**: Large videos may impact loading times + +## Need Help? +If you encounter issues: +1. Check browser console for errors +2. Verify video format is MP4 with H.264 codec +3. Test with a simple, small video first +4. Consider using online video converters +5. Check that file names match exactly +6. 
Ensure videos are in the correct directory + +## Alternative Approaches + +### Image Sequences +Instead of video, you can use image sequences: +- Convert video to image frames +- Use `useTexture` with animated sprite sheets +- Better for simple animations + +### GIF Support +While not recommended for performance, GIFs can work: +- Convert to MP4 for better compression +- Use online GIF to MP4 converters + +### Interactive Demos +For more advanced demos, consider: +- Embedded iframes showing live applications +- Interactive WebGL demos +- Real-time API demonstrations \ No newline at end of file diff --git a/cv.txt b/cv.txt new file mode 100644 index 0000000..a6c75a2 --- /dev/null +++ b/cv.txt @@ -0,0 +1,149 @@ +\documentclass[a4paper]{article} + +\usepackage[utf8]{inputenc} +\usepackage{fontenc} +\usepackage{enumitem} +\usepackage[margin=0.5in]{geometry} +\usepackage{hyperref} +\usepackage{anyfontsize} + +% Remove section numbering +\setcounter{secnumdepth}{0} + +% Custom font size - meeting 10.5+ requirement +\renewcommand{\normalsize}{\fontsize{10.5}{12.6}\selectfont} +\normalsize + +% Define section style with better spacing +\renewcommand{\section}[1]{% + \vspace{0.4em}% + {\Large\textbf{#1}}\\[-0.7em]% + \rule{\textwidth}{1pt}% + \vspace{0.2em}% +} + +\begin{document} +\pagenumbering{gobble} + +\begin{center} +{\Large\textbf{Jan Magnus Heimann}}\\[0.2em] +heimann.ai\\[0.2em] +jan@heimann.ai +\end{center} + +\section{Professional Profile} +AI/ML Engineer specializing in Reinforcement Learning and Large Language Models with proven track record of deploying production-grade AI systems. Delivered significant business impact including \$1.5M cost reduction through RL-based optimization and 20\% engagement improvement in advertising. Expert in training and fine-tuning transformer models, implementing multi-agent RL systems, and building scalable ML pipelines. 
+ +\section{Skills} +\textbf{Programming Languages:} Python, JavaScript, TypeScript, C++, SQL, Swift\\ +\textbf{Machine Learning:} PyTorch, TensorFlow, Hugging Face Transformers, LangChain, CUDA, JAX\\ +\textbf{Reinforcement Learning:} PPO, SAC, DQN, A3C, Multi-Agent RL, Reward Shaping, Policy Gradient Methods\\ +\textbf{LLMs \& NLP:} Fine-tuning (LoRA/QLoRA), RAG Systems, Context Engineering, Vector Databases\\ +\textbf{MLOps:} Docker, Kubernetes, AWS, GCP, MLflow, Weights \& Biases, Model Serving, Comet ML + + + + + +\section{Experience} + +\textbf{Machine Learning Engineer} \hfill Apr 2025 – Present\\[0.05em] +\textit{DRWN AI} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Developing Multi-Agent Reinforcement Learning system using PPO to optimize advertising budget allocation, achieving 15-25\% improvement in cost-per-acquisition (CPA) across client campaigns + \item Implemented custom reward functions adapting to diverse KPIs (CTR, ROAS, impressions), reducing average cost-per-click by 18\% while maintaining target reach + \item Built real-time inference pipeline serving RL policies with 95ms latency, processing 2M+ daily bid decisions across 50+ active campaigns + \item Integrated transformer models for campaign feature extraction, improving RL convergence speed by 30\% through better state representations +\end{itemize} + +\textbf{Machine Learning Engineer/Advisor, Part time} \hfill Oct 2024 – Mar 2025\\[0.05em] +\textit{Deepmask GmbH} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Fine-tuned DeepSeek R1 (70B parameters) using LoRA with rank-16 adaptation, achieving +4\% BLEU and +6\% ROUGE-L on German benchmarks + \item Implemented production RAG system combining dense embeddings with hybrid search, processing 100K+ documents with 92\% retrieval accuracy + \item Optimized LLM inference using quantization and batching strategies, achieving 3x throughput improvement while maintaining quality + \item Built comprehensive evaluation framework tracking perplexity, task-specific metrics, and human preference alignment across multiple German NLP benchmarks +\end{itemize} + +\textbf{Machine Learning Engineer} \hfill Mar 2024 – Mar 2025\\[0.05em] +\textit{Rocket Factory Augsburg AG} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Designed RL pipeline using PPO to optimize rocket design parameters, training agents to minimize cost-per-payload while satisfying structural constraints + \item Implemented Graph Neural Networks to encode rocket component relationships, providing state representations for RL agents evaluating 100K+ configurations + \item Created custom OpenAI Gym environment interfacing with physics simulators, enabling RL agents to learn from 10K+ simulated trajectories + \item Achieved \$1.5M projected cost reduction per launch through RL-discovered optimizations improving structural efficiency by 12\% +\end{itemize} + +\textbf{Assistant Machine Learning Researcher} \hfill May 2024 – Dec 2024\\[0.05em] +\textit{Massachusetts Institute of Technology} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Developed Graph Neural Networks with attention mechanisms for material synthesis prediction, improving accuracy by 9.2\% over baseline methods + \item Implemented multi-task transformer pretraining on 500K material descriptions, fine-tuning shared representations across 12 downstream tasks + \item Applied BERT-style masked language modeling to scientific 
text, creating domain-specific embeddings that improved material property prediction by 4.7\% +\end{itemize} + +\textbf{Software Engineer} \hfill Jan 2023 – Mar 2024\\[0.05em] +\textit{OHB Systems AG} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Built ML pipeline automating FEM analysis using Gaussian Processes for uncertainty quantification, reducing engineering cycle time by 25\% + \item Developed LSTM-based anomaly detection for satellite telemetry data, implementing attention mechanisms for interpretable predictions + \item Deployed models using MLflow and Docker, establishing continuous training pipelines triggered by distribution shift detection +\end{itemize} + +\textbf{Co-Founder/Software Lead} \hfill Jan 2021 – Dec 2022\\[0.05em] +\textit{GetMoBie GmbH} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Led development of mobile banking application serving 20K+ users, presenting at "Die Höhle der Löwen" TV show + \item Implemented Random Forest models for transaction categorization and fraud detection on 1M+ records, achieving 0.95 AUC + \item Built collaborative filtering recommendation system using matrix factorization, increasing financial product adoption by 15\% + \item Managed team of 5 developers while establishing ML pipelines for real-time inference and model monitoring +\end{itemize} + +\textbf{Machine Learning Engineer Intern} \hfill Aug 2020 – May 2021\\[0.05em] +\textit{BMW AG} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Created job recommendation system using collaborative filtering on implicit feedback data, facilitating 100+ internal role transitions + \item Implemented document classification using TF-IDF and SVM, achieving 89\% F1-score on 50K corporate documents +\end{itemize} + +\section{Projects} + +\textbf{AutoApply - AI Job Application Automation SaaS} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Built multi-agent system using GPT-4 and Claude-3 APIs to automate job applications, generating \$480K ARR with 10K+ monthly active users + \item Implemented form detection using fine-tuned YOLOv8 achieving 94.3\% accuracy, processing 78K+ successful applications + \item Scaled infrastructure to handle 2.8M+ monthly queries with 99.7\% uptime using containerized microservices +\end{itemize} + +\textbf{OpenRLHF Fork - Scalable RLHF Training Framework} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Forked and enhanced OpenRLHF framework to implement hybrid DPO/PPO training pipeline, reducing GPU memory usage by 15\% through gradient checkpointing optimizations + \item Achieved 23\% faster convergence on reward model training by implementing adaptive KL penalty scheduling and batch-wise advantage normalization + \item Contributed multi-node distributed training support using DeepSpeed ZeRO-3, enabling training of 13B parameter models on 8x A100 clusters +\end{itemize} + +\textbf{Domain-Specific GPT-2 Fine-Tuning} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Fine-tuned GPT-2 medium on 10K aerospace papers using custom tokenizer with domain-specific vocabulary extensions + \item Achieved 12\% ROUGE score improvement for technical summarization through careful hyperparameter tuning and data augmentation + \item Implemented distributed training across 4 GPUs using gradient accumulation to simulate larger batch sizes +\end{itemize} + +\textbf{ArchUnit TypeScript - Open 
Source Library} +\begin{itemize}[leftmargin=*, topsep=1pt, itemsep=1pt, label=$\bullet$] + \item Created TypeScript architecture testing library achieving 400+ GitHub stars and widespread adoption in JavaScript ecosystem + \item Implemented AST-based static analysis supporting circular dependency detection, layered architecture validation, and code metrics (LCOM, coupling, abstractness) + \item Built pattern matching system with glob/regex support and universal testing framework integration (Jest, Vitest, Jasmine, Mocha) +\end{itemize} + +\section{Publications} +Heimann, J., et al. "Reaction Graph Networks for Inorganic Synthesis Condition Prediction of Solid State Materials", \textit{AI4Mat-2024: NeurIPS 2024 Workshop on AI for Accelerated Materials Design} + +\section{Education} +\textbf{Bachelor of Science in Aerospace Engineering} \hfill 2025\\ +Technical University of Munich + +\textbf{Bachelor of Science in Astronomical \& Planetary Sciences} \hfill 2024\\ +Arizona State University + +\end{document} \ No newline at end of file diff --git a/debug_jan_model.js b/debug_jan_model.js new file mode 100644 index 0000000..4d48b1f --- /dev/null +++ b/debug_jan_model.js @@ -0,0 +1,34 @@ +// Temporary debug script to inspect jan.glb model structure +import React, { useEffect } from 'react'; +import { useGLTF, useGraph } from '@react-three/drei'; +import { SkeletonUtils } from 'three-stdlib'; + +const ModelDebugger = () => { + const { scene } = useGLTF('/models/animations/jan.glb'); + const clone = React.useMemo(() => SkeletonUtils.clone(scene), [scene]); + const { nodes, materials } = useGraph(clone); + + useEffect(() => { + console.log('=== JAN.GLB MODEL STRUCTURE ==='); + console.log('Available nodes:', Object.keys(nodes)); + console.log('Available materials:', Object.keys(materials)); + + // Log each node with its properties + Object.entries(nodes).forEach(([name, node]) => { + console.log(`Node: ${name}`, { + type: node.type, + hasGeometry: !!node.geometry, + hasMaterial: !!node.material, + hasChildren: node.children?.length > 0, + childrenCount: node.children?.length || 0, + position: node.position, + rotation: node.rotation, + scale: node.scale + }); + }); + }, [nodes, materials]); + + return null; +}; + +export default ModelDebugger; \ No newline at end of file diff --git a/index.html b/index.html index bca8e7f..d26457b 100644 --- a/index.html +++ b/index.html @@ -1,13 +1,16 @@ - + + - + - Adrian Hajdin - - + Jan Magnus Heimann - AI/ML Engineer + + +
- - + + + \ No newline at end of file diff --git a/package-lock.json b/package-lock.json index f94cda8..bdf2e23 100644 --- a/package-lock.json +++ b/package-lock.json @@ -13,6 +13,7 @@ "@react-three/drei": "^9.111.3", "@react-three/fiber": "^8.17.6", "@types/three": "^0.168.0", + "gray-matter": "^4.0.3", "gsap": "^3.12.5", "leva": "^0.9.35", "maath": "^0.10.8", @@ -20,7 +21,9 @@ "react": "^18.3.1", "react-dom": "^18.3.1", "react-globe.gl": "^2.27.2", + "react-markdown": "^10.1.0", "react-responsive": "^10.0.0", + "react-syntax-highlighter": "^15.6.1", "three": "^0.167.1", "three-stdlib": "^2.32.2" }, @@ -2202,6 +2205,15 @@ "resolved": "https://registry.npmjs.org/@types/debounce/-/debounce-1.2.4.tgz", "integrity": "sha512-jBqiORIzKDOToaF63Fm//haOCHuwQuLa2202RK4MozpA6lh93eCBc+/8+wZn5OzjJt3ySdc+74SXWXB55Ewtyw==" }, + "node_modules/@types/debug": { + "version": "4.1.12", + "resolved": "https://registry.npmjs.org/@types/debug/-/debug-4.1.12.tgz", + "integrity": "sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ==", + "license": "MIT", + "dependencies": { + "@types/ms": "*" + } + }, "node_modules/@types/draco3d": { "version": "1.4.10", "resolved": "https://registry.npmjs.org/@types/draco3d/-/draco3d-1.4.10.tgz", @@ -2210,8 +2222,40 @@ "node_modules/@types/estree": { "version": "1.0.5", "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.5.tgz", - "integrity": "sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw==", - "dev": true + "integrity": "sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw==" + }, + "node_modules/@types/estree-jsx": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/@types/estree-jsx/-/estree-jsx-1.0.5.tgz", + "integrity": "sha512-52CcUVNFyfb1A2ALocQw/Dd1BQFNmSdkuC3BkZ6iqhdMfQz7JWOFRuJFloOzjk+6WijU56m9oKXFAXc7o3Towg==", + "license": "MIT", + "dependencies": { + "@types/estree": "*" + } + }, + "node_modules/@types/hast": { + "version": "3.0.4", + "resolved": "https://registry.npmjs.org/@types/hast/-/hast-3.0.4.tgz", + "integrity": "sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==", + "license": "MIT", + "dependencies": { + "@types/unist": "*" + } + }, + "node_modules/@types/mdast": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-4.0.4.tgz", + "integrity": "sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA==", + "license": "MIT", + "dependencies": { + "@types/unist": "*" + } + }, + "node_modules/@types/ms": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/@types/ms/-/ms-2.1.0.tgz", + "integrity": "sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA==", + "license": "MIT" }, "node_modules/@types/offscreencanvas": { "version": "2019.7.3", @@ -2267,6 +2311,12 @@ "meshoptimizer": "~0.18.1" } }, + "node_modules/@types/unist": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-3.0.3.tgz", + "integrity": "sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q==", + "license": "MIT" + }, "node_modules/@types/webxr": { "version": "0.5.20", "resolved": "https://registry.npmjs.org/@types/webxr/-/webxr-0.5.20.tgz", @@ -2275,8 +2325,7 @@ "node_modules/@ungap/structured-clone": { "version": "1.2.0", "resolved": 
"https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz", - "integrity": "sha512-zuVdFrMJiuCDQUMCzQaD6KL28MjnqqN8XnAqiEq9PNm/hCPTSGfrXCOfwj1ow4LFb/tNymJPwsNbVePc1xFqrQ==", - "dev": true + "integrity": "sha512-zuVdFrMJiuCDQUMCzQaD6KL28MjnqqN8XnAqiEq9PNm/hCPTSGfrXCOfwj1ow4LFb/tNymJPwsNbVePc1xFqrQ==" }, "node_modules/@use-gesture/core": { "version": "10.3.1", @@ -2613,6 +2662,16 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/bail": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/bail/-/bail-2.0.2.tgz", + "integrity": "sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/balanced-match": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", @@ -2800,6 +2859,16 @@ } ] }, + "node_modules/ccount": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/ccount/-/ccount-2.0.1.tgz", + "integrity": "sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/chalk": { "version": "2.4.2", "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", @@ -2814,6 +2883,46 @@ "node": ">=4" } }, + "node_modules/character-entities": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/character-entities/-/character-entities-2.0.2.tgz", + "integrity": "sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-entities-html4": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/character-entities-html4/-/character-entities-html4-2.1.0.tgz", + "integrity": "sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-entities-legacy": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/character-entities-legacy/-/character-entities-legacy-3.0.0.tgz", + "integrity": "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-reference-invalid": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/character-reference-invalid/-/character-reference-invalid-2.0.1.tgz", + "integrity": "sha512-iBZ4F4wRbyORVsu0jPV7gXkOsGYjGHPmAyv+HiHG8gi5PtC9KI2j1+v8/tlibRvjoWX027ypmG/n0HtO5t7unw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/chokidar": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.6.0.tgz", @@ -2870,6 +2979,16 @@ "resolved": "https://registry.npmjs.org/colord/-/colord-2.9.3.tgz", "integrity": "sha512-jeC1axXpnb0/2nn/Y1LPuLdgXBLH7aDcHu4KEKfqw3CUhX7ZpfBSlPKyqXE6btIgEzfWtrX3/tyBCaCvXvMkOw==" }, + "node_modules/comma-separated-tokens": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-2.0.3.tgz", + "integrity": 
"sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/commander": { "version": "4.1.1", "resolved": "https://registry.npmjs.org/commander/-/commander-4.1.1.tgz", @@ -3149,7 +3268,6 @@ "version": "4.3.6", "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.6.tgz", "integrity": "sha512-O/09Bd4Z1fBrU4VzkhFqVgpPzaGbw6Sm9FEkBT1A/YBXQFGuuSxa1dN2nxgxS34JmKXqYx8CZAwEVoJFImUXIg==", - "dev": true, "dependencies": { "ms": "2.1.2" }, @@ -3162,6 +3280,19 @@ } } }, + "node_modules/decode-named-character-reference": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/decode-named-character-reference/-/decode-named-character-reference-1.2.0.tgz", + "integrity": "sha512-c6fcElNV6ShtZXmsgNgFFV5tVX2PaV4g+MOAkb8eXHvn6sryJBrZa9r0zV6+dtTyoCKxtDy5tyQ5ZwQuidtd+Q==", + "license": "MIT", + "dependencies": { + "character-entities": "^2.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/deep-is": { "version": "0.1.4", "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz", @@ -3226,6 +3357,19 @@ "webgl-constants": "^1.1.1" } }, + "node_modules/devlop": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/devlop/-/devlop-1.1.0.tgz", + "integrity": "sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA==", + "license": "MIT", + "dependencies": { + "dequal": "^2.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/didyoumean": { "version": "1.2.2", "resolved": "https://registry.npmjs.org/didyoumean/-/didyoumean-1.2.2.tgz", @@ -3731,6 +3875,19 @@ "url": "https://opencollective.com/eslint" } }, + "node_modules/esprima": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/esprima/-/esprima-4.0.1.tgz", + "integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", + "license": "BSD-2-Clause", + "bin": { + "esparse": "bin/esparse.js", + "esvalidate": "bin/esvalidate.js" + }, + "engines": { + "node": ">=4" + } + }, "node_modules/esquery": { "version": "1.6.0", "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.6.0.tgz", @@ -3764,6 +3921,16 @@ "node": ">=4.0" } }, + "node_modules/estree-util-is-identifier-name": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/estree-util-is-identifier-name/-/estree-util-is-identifier-name-3.0.0.tgz", + "integrity": "sha512-hFtqIDZTIUZ9BXLb8y4pYGyk6+wekIivNVTcmvk8NoOh+VeRn5y6cEHzbURrWbfp1fIqdVipilzj+lfaadNZmg==", + "license": "MIT", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/esutils": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", @@ -3773,6 +3940,12 @@ "node": ">=0.10.0" } }, + "node_modules/extend": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz", + "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==", + "license": "MIT" + }, "node_modules/extend-shallow": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", @@ -3847,6 +4020,19 @@ "reusify": "^1.0.4" } }, + "node_modules/fault": { + "version": "1.0.4", + "resolved": 
"https://registry.npmjs.org/fault/-/fault-1.0.4.tgz", + "integrity": "sha512-CJ0HCB5tL5fYTEA7ToAq5+kTwd++Borf1/bifxd9iT70QcXr4MRrO3Llf8Ifs70q+SJcGHFtnIE/Nw6giCtECA==", + "license": "MIT", + "dependencies": { + "format": "^0.2.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/fflate": { "version": "0.8.2", "resolved": "https://registry.npmjs.org/fflate/-/fflate-0.8.2.tgz", @@ -3955,6 +4141,14 @@ "url": "https://github.com/sponsors/isaacs" } }, + "node_modules/format": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/format/-/format-0.2.2.tgz", + "integrity": "sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww==", + "engines": { + "node": ">=0.4.x" + } + }, "node_modules/fraction.js": { "version": "4.3.7", "resolved": "https://registry.npmjs.org/fraction.js/-/fraction.js-4.3.7.tgz", @@ -4227,6 +4421,43 @@ "integrity": "sha512-EtKwoO6kxCL9WO5xipiHTZlSzBm7WLT627TqC/uVRd0HKmq8NXyebnNYxDoBi7wt8eTWrUrKXCOVaFq9x1kgag==", "dev": true }, + "node_modules/gray-matter": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/gray-matter/-/gray-matter-4.0.3.tgz", + "integrity": "sha512-5v6yZd4JK3eMI3FqqCouswVqwugaA9r4dNZB1wwcmrD02QkV5H0y7XBQW8QwQqEaZY1pM9aqORSORhJRdNK44Q==", + "license": "MIT", + "dependencies": { + "js-yaml": "^3.13.1", + "kind-of": "^6.0.2", + "section-matter": "^1.0.0", + "strip-bom-string": "^1.0.0" + }, + "engines": { + "node": ">=6.0" + } + }, + "node_modules/gray-matter/node_modules/argparse": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", + "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", + "license": "MIT", + "dependencies": { + "sprintf-js": "~1.0.2" + } + }, + "node_modules/gray-matter/node_modules/js-yaml": { + "version": "3.14.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.1.tgz", + "integrity": "sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==", + "license": "MIT", + "dependencies": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, "node_modules/gsap": { "version": "3.12.5", "resolved": "https://registry.npmjs.org/gsap/-/gsap-3.12.5.tgz", @@ -4323,11 +4554,151 @@ "node": ">= 0.4" } }, + "node_modules/hast-util-parse-selector": { + "version": "2.2.5", + "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-2.2.5.tgz", + "integrity": "sha512-7j6mrk/qqkSehsM92wQjdIgWM2/BW61u/53G6xmC8i1OmEdKLHbk419QKQUjz6LglWsfqoiHmyMRkP1BGjecNQ==", + "license": "MIT", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-to-jsx-runtime": { + "version": "2.3.6", + "resolved": "https://registry.npmjs.org/hast-util-to-jsx-runtime/-/hast-util-to-jsx-runtime-2.3.6.tgz", + "integrity": "sha512-zl6s8LwNyo1P9uw+XJGvZtdFF1GdAkOg8ujOw+4Pyb76874fLps4ueHXDhXWdk6YHQ6OgUtinliG7RsYvCbbBg==", + "license": "MIT", + "dependencies": { + "@types/estree": "^1.0.0", + "@types/hast": "^3.0.0", + "@types/unist": "^3.0.0", + "comma-separated-tokens": "^2.0.0", + "devlop": "^1.0.0", + "estree-util-is-identifier-name": "^3.0.0", + "hast-util-whitespace": "^3.0.0", + "mdast-util-mdx-expression": "^2.0.0", + "mdast-util-mdx-jsx": "^3.0.0", + "mdast-util-mdxjs-esm": "^2.0.0", + "property-information": "^7.0.0", + "space-separated-tokens": "^2.0.0", + 
"style-to-js": "^1.0.0", + "unist-util-position": "^5.0.0", + "vfile-message": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-whitespace": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-3.0.0.tgz", + "integrity": "sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw==", + "license": "MIT", + "dependencies": { + "@types/hast": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hastscript": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-6.0.0.tgz", + "integrity": "sha512-nDM6bvd7lIqDUiYEiu5Sl/+6ReP0BMk/2f4U/Rooccxkj0P5nm+acM5PrGJ/t5I8qPGiqZSE6hVAwZEdZIvP4w==", + "license": "MIT", + "dependencies": { + "@types/hast": "^2.0.0", + "comma-separated-tokens": "^1.0.0", + "hast-util-parse-selector": "^2.0.0", + "property-information": "^5.0.0", + "space-separated-tokens": "^1.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hastscript/node_modules/@types/hast": { + "version": "2.3.10", + "resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.10.tgz", + "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", + "license": "MIT", + "dependencies": { + "@types/unist": "^2" + } + }, + "node_modules/hastscript/node_modules/@types/unist": { + "version": "2.0.11", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", + "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==", + "license": "MIT" + }, + "node_modules/hastscript/node_modules/comma-separated-tokens": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-1.0.8.tgz", + "integrity": "sha512-GHuDRO12Sypu2cV70d1dkA2EUmXHgntrzbpvOB+Qy+49ypNfGgFQIC2fhhXbnyrJRynDCAARsT7Ou0M6hirpfw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/hastscript/node_modules/property-information": { + "version": "5.6.0", + "resolved": "https://registry.npmjs.org/property-information/-/property-information-5.6.0.tgz", + "integrity": "sha512-YUHSPk+A30YPv+0Qf8i9Mbfe/C0hdPXk1s1jPVToV8pk8BQtpw10ct89Eo7OWkutrwqvT0eicAxlOg3dOAu8JA==", + "license": "MIT", + "dependencies": { + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/hastscript/node_modules/space-separated-tokens": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-1.1.5.tgz", + "integrity": "sha512-q/JSVd1Lptzhf5bkYm4ob4iWPjx0KiRe3sRFBNrVqbJkFaBm5vbbowy1mymoPNLRa52+oadOhJ+K49wsSeSjTA==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/highlight.js": { + "version": "10.7.3", + "resolved": "https://registry.npmjs.org/highlight.js/-/highlight.js-10.7.3.tgz", + "integrity": "sha512-tzcUFauisWKNHaRkN4Wjl/ZA07gENAjFl3J/c480dprkGTg5EQstgaNFqBfUqCq54kZRIEcreTsAgF/m2quD7A==", + "license": "BSD-3-Clause", + "engines": { + "node": "*" + } + }, + "node_modules/highlightjs-vue": { + "version": "1.0.0", + "resolved": 
"https://registry.npmjs.org/highlightjs-vue/-/highlightjs-vue-1.0.0.tgz", + "integrity": "sha512-PDEfEF102G23vHmPhLyPboFCD+BkMGu+GuJe2d9/eH4FsCwvgBpnc9n0pGE+ffKdph38s6foEZiEjdgHdzp+IA==", + "license": "CC0-1.0" + }, "node_modules/hls.js": { "version": "1.3.5", "resolved": "https://registry.npmjs.org/hls.js/-/hls.js-1.3.5.tgz", "integrity": "sha512-uybAvKS6uDe0MnWNEPnO0krWVr+8m2R0hJ/viql8H3MVK+itq8gGQuIYoFHL3rECkIpNH98Lw8YuuWMKZxp3Ew==" }, + "node_modules/html-url-attributes": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/html-url-attributes/-/html-url-attributes-3.0.1.tgz", + "integrity": "sha512-ol6UPyBWqsrO6EJySPz2O7ZSr856WDrEzM5zMqp+FJJLGMW35cLYmmZnl0vztAZxRUoNZJFTCohfjuIJ8I4QBQ==", + "license": "MIT", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/hyphenate-style-name": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/hyphenate-style-name/-/hyphenate-style-name-1.1.0.tgz", @@ -4416,6 +4787,12 @@ "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", "dev": true }, + "node_modules/inline-style-parser": { + "version": "0.2.4", + "resolved": "https://registry.npmjs.org/inline-style-parser/-/inline-style-parser-0.2.4.tgz", + "integrity": "sha512-0aO8FkhNZlj/ZIbNi7Lxxr12obT7cL1moPfE4tg1LkX7LlLfC6DeX4l2ZEud1ukP9jNQyNnfzQVqwbwmAATY4Q==", + "license": "MIT" + }, "node_modules/internal-slot": { "version": "1.0.7", "resolved": "https://registry.npmjs.org/internal-slot/-/internal-slot-1.0.7.tgz", @@ -4438,6 +4815,30 @@ "node": ">=12" } }, + "node_modules/is-alphabetical": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-2.0.1.tgz", + "integrity": "sha512-FWyyY60MeTNyeSRpkM2Iry0G9hpr7/9kD40mD/cGQEuilcZYS4okz8SN2Q6rLCJ8gbCt6fN+rC+6tMGS99LaxQ==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-alphanumerical": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/is-alphanumerical/-/is-alphanumerical-2.0.1.tgz", + "integrity": "sha512-hmbYhX/9MUMF5uh7tOXyK/n0ZvWpad5caBA17GsC6vyuCqaWliRG5K1qS9inmUhEMaOBIW7/whAnSwveW/LtZw==", + "license": "MIT", + "dependencies": { + "is-alphabetical": "^2.0.0", + "is-decimal": "^2.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/is-array-buffer": { "version": "3.0.4", "resolved": "https://registry.npmjs.org/is-array-buffer/-/is-array-buffer-3.0.4.tgz", @@ -4566,6 +4967,16 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/is-decimal": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/is-decimal/-/is-decimal-2.0.1.tgz", + "integrity": "sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/is-extendable": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-1.0.1.tgz", @@ -4634,6 +5045,16 @@ "node": ">=0.10.0" } }, + "node_modules/is-hexadecimal": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/is-hexadecimal/-/is-hexadecimal-2.0.1.tgz", + "integrity": "sha512-DgZQp241c8oO6cA1SbTEWiXeoxV42vlcJxgH+B3hi1AiqqKruZR3ZGF8In3fj4+/y/7rHvlOZLZtgJ/4ttYGZg==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, 
"node_modules/is-map": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/is-map/-/is-map-2.0.3.tgz", @@ -4691,6 +5112,18 @@ "node": ">=8" } }, + "node_modules/is-plain-obj": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-4.1.0.tgz", + "integrity": "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/is-plain-object": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz", @@ -5012,6 +5445,15 @@ "json-buffer": "3.0.1" } }, + "node_modules/kind-of": { + "version": "6.0.3", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", + "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/leva": { "version": "0.9.35", "resolved": "https://registry.npmjs.org/leva/-/leva-0.9.35.tgz", @@ -5096,6 +5538,16 @@ "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==", "dev": true }, + "node_modules/longest-streak": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/longest-streak/-/longest-streak-3.1.0.tgz", + "integrity": "sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/loose-envify": { "version": "1.4.0", "resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz", @@ -5107,6 +5559,20 @@ "loose-envify": "cli.js" } }, + "node_modules/lowlight": { + "version": "1.20.0", + "resolved": "https://registry.npmjs.org/lowlight/-/lowlight-1.20.0.tgz", + "integrity": "sha512-8Ktj+prEb1RoCPkEOrPMYUN/nCggB7qAWe3a7OpMjWQkh3l2RD5wKRQ+o8Q8YuI9RG/xs95waaI/E6ym/7NsTw==", + "license": "MIT", + "dependencies": { + "fault": "^1.0.0", + "highlight.js": "~10.7.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/lru-cache": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz", @@ -5133,47 +5599,642 @@ "css-mediaquery": "^0.1.2" } }, - "node_modules/merge-value": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/merge-value/-/merge-value-1.0.0.tgz", - "integrity": "sha512-fJMmvat4NeKz63Uv9iHWcPDjCWcCkoiRoajRTEO8hlhUC6rwaHg0QCF9hBOTjZmm4JuglPckPSTtcuJL5kp0TQ==", - "dependencies": { - "get-value": "^2.0.6", - "is-extendable": "^1.0.0", - "mixin-deep": "^1.2.0", - "set-value": "^2.0.0" + "node_modules/mdast-util-from-markdown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/mdast-util-from-markdown/-/mdast-util-from-markdown-2.0.2.tgz", + "integrity": "sha512-uZhTV/8NBuw0WHkPTrCqDOl0zVe1BIng5ZtHoDk49ME1qqcjYmmLmOf0gELgcRMxN4w2iuIeVso5/6QymSrgmA==", + "license": "MIT", + "dependencies": { + "@types/mdast": "^4.0.0", + "@types/unist": "^3.0.0", + "decode-named-character-reference": "^1.0.0", + "devlop": "^1.0.0", + "mdast-util-to-string": "^4.0.0", + "micromark": "^4.0.0", + "micromark-util-decode-numeric-character-reference": "^2.0.0", + "micromark-util-decode-string": "^2.0.0", + "micromark-util-normalize-identifier": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": 
"^2.0.0", + "unist-util-stringify-position": "^4.0.0" }, - "engines": { - "node": ">=0.10.0" + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" } }, - "node_modules/merge2": { - "version": "1.4.1", - "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", - "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", - "dev": true, - "engines": { - "node": ">= 8" + "node_modules/mdast-util-mdx-expression": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/mdast-util-mdx-expression/-/mdast-util-mdx-expression-2.0.1.tgz", + "integrity": "sha512-J6f+9hUp+ldTZqKRSg7Vw5V6MqjATc+3E4gf3CFNcuZNWD8XdyI6zQ8GqH7f8169MM6P7hMBRDVGnn7oHB9kXQ==", + "license": "MIT", + "dependencies": { + "@types/estree-jsx": "^1.0.0", + "@types/hast": "^3.0.0", + "@types/mdast": "^4.0.0", + "devlop": "^1.0.0", + "mdast-util-from-markdown": "^2.0.0", + "mdast-util-to-markdown": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" } }, - "node_modules/meshline": { - "version": "3.3.1", - "resolved": "https://registry.npmjs.org/meshline/-/meshline-3.3.1.tgz", - "integrity": "sha512-/TQj+JdZkeSUOl5Mk2J7eLcYTLiQm2IDzmlSvYm7ov15anEcDJ92GHqqazxTSreeNgfnYu24kiEvvv0WlbCdFQ==", - "peerDependencies": { - "three": ">=0.137" + "node_modules/mdast-util-mdx-jsx": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/mdast-util-mdx-jsx/-/mdast-util-mdx-jsx-3.2.0.tgz", + "integrity": "sha512-lj/z8v0r6ZtsN/cGNNtemmmfoLAFZnjMbNyLzBafjzikOM+glrjNHPlf6lQDOTccj9n5b0PPihEBbhneMyGs1Q==", + "license": "MIT", + "dependencies": { + "@types/estree-jsx": "^1.0.0", + "@types/hast": "^3.0.0", + "@types/mdast": "^4.0.0", + "@types/unist": "^3.0.0", + "ccount": "^2.0.0", + "devlop": "^1.1.0", + "mdast-util-from-markdown": "^2.0.0", + "mdast-util-to-markdown": "^2.0.0", + "parse-entities": "^4.0.0", + "stringify-entities": "^4.0.0", + "unist-util-stringify-position": "^4.0.0", + "vfile-message": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" } }, - "node_modules/meshoptimizer": { - "version": "0.18.1", - "resolved": "https://registry.npmjs.org/meshoptimizer/-/meshoptimizer-0.18.1.tgz", - "integrity": "sha512-ZhoIoL7TNV4s5B6+rx5mC//fw8/POGyNxS/DZyCJeiZ12ScLfVwRE/GfsxwiTkMYYD5DmK2/JXnEVXqL4rF+Sw==" - }, - "node_modules/micromatch": { - "version": "4.0.8", - "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz", - "integrity": "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==", - "dev": true, + "node_modules/mdast-util-mdxjs-esm": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/mdast-util-mdxjs-esm/-/mdast-util-mdxjs-esm-2.0.1.tgz", + "integrity": "sha512-EcmOpxsZ96CvlP03NghtH1EsLtr0n9Tm4lPUJUBccV9RwUOneqSycg19n5HGzCf+10LozMRSObtVr3ee1WoHtg==", + "license": "MIT", + "dependencies": { + "@types/estree-jsx": "^1.0.0", + "@types/hast": "^3.0.0", + "@types/mdast": "^4.0.0", + "devlop": "^1.0.0", + "mdast-util-from-markdown": "^2.0.0", + "mdast-util-to-markdown": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-phrasing": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/mdast-util-phrasing/-/mdast-util-phrasing-4.1.0.tgz", + "integrity": "sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w==", + 
"license": "MIT", + "dependencies": { + "@types/mdast": "^4.0.0", + "unist-util-is": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-hast": { + "version": "13.2.0", + "resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-13.2.0.tgz", + "integrity": "sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFMeko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA==", + "license": "MIT", + "dependencies": { + "@types/hast": "^3.0.0", + "@types/mdast": "^4.0.0", + "@ungap/structured-clone": "^1.0.0", + "devlop": "^1.0.0", + "micromark-util-sanitize-uri": "^2.0.0", + "trim-lines": "^3.0.0", + "unist-util-position": "^5.0.0", + "unist-util-visit": "^5.0.0", + "vfile": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-markdown": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/mdast-util-to-markdown/-/mdast-util-to-markdown-2.1.2.tgz", + "integrity": "sha512-xj68wMTvGXVOKonmog6LwyJKrYXZPvlwabaryTjLh9LuvovB/KAH+kvi8Gjj+7rJjsFi23nkUxRQv1KqSroMqA==", + "license": "MIT", + "dependencies": { + "@types/mdast": "^4.0.0", + "@types/unist": "^3.0.0", + "longest-streak": "^3.0.0", + "mdast-util-phrasing": "^4.0.0", + "mdast-util-to-string": "^4.0.0", + "micromark-util-classify-character": "^2.0.0", + "micromark-util-decode-string": "^2.0.0", + "unist-util-visit": "^5.0.0", + "zwitch": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-string": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/mdast-util-to-string/-/mdast-util-to-string-4.0.0.tgz", + "integrity": "sha512-0H44vDimn51F0YwvxSJSm0eCDOJTRlmN0R1yBh4HLj9wiV1Dn0QoXGbvFAWj2hSItVTlCmBF1hqKlIyUBVFLPg==", + "license": "MIT", + "dependencies": { + "@types/mdast": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/merge-value": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/merge-value/-/merge-value-1.0.0.tgz", + "integrity": "sha512-fJMmvat4NeKz63Uv9iHWcPDjCWcCkoiRoajRTEO8hlhUC6rwaHg0QCF9hBOTjZmm4JuglPckPSTtcuJL5kp0TQ==", + "dependencies": { + "get-value": "^2.0.6", + "is-extendable": "^1.0.0", + "mixin-deep": "^1.2.0", + "set-value": "^2.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/merge2": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", + "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", + "dev": true, + "engines": { + "node": ">= 8" + } + }, + "node_modules/meshline": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/meshline/-/meshline-3.3.1.tgz", + "integrity": "sha512-/TQj+JdZkeSUOl5Mk2J7eLcYTLiQm2IDzmlSvYm7ov15anEcDJ92GHqqazxTSreeNgfnYu24kiEvvv0WlbCdFQ==", + "peerDependencies": { + "three": ">=0.137" + } + }, + "node_modules/meshoptimizer": { + "version": "0.18.1", + "resolved": "https://registry.npmjs.org/meshoptimizer/-/meshoptimizer-0.18.1.tgz", + "integrity": "sha512-ZhoIoL7TNV4s5B6+rx5mC//fw8/POGyNxS/DZyCJeiZ12ScLfVwRE/GfsxwiTkMYYD5DmK2/JXnEVXqL4rF+Sw==" + }, + "node_modules/micromark": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/micromark/-/micromark-4.0.2.tgz", + "integrity": 
"sha512-zpe98Q6kvavpCr1NPVSCMebCKfD7CA2NqZ+rykeNhONIJBpc1tFKt9hucLGwha3jNTNI8lHpctWJWoimVF4PfA==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "@types/debug": "^4.0.0", + "debug": "^4.0.0", + "decode-named-character-reference": "^1.0.0", + "devlop": "^1.0.0", + "micromark-core-commonmark": "^2.0.0", + "micromark-factory-space": "^2.0.0", + "micromark-util-character": "^2.0.0", + "micromark-util-chunked": "^2.0.0", + "micromark-util-combine-extensions": "^2.0.0", + "micromark-util-decode-numeric-character-reference": "^2.0.0", + "micromark-util-encode": "^2.0.0", + "micromark-util-normalize-identifier": "^2.0.0", + "micromark-util-resolve-all": "^2.0.0", + "micromark-util-sanitize-uri": "^2.0.0", + "micromark-util-subtokenize": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-core-commonmark": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/micromark-core-commonmark/-/micromark-core-commonmark-2.0.3.tgz", + "integrity": "sha512-RDBrHEMSxVFLg6xvnXmb1Ayr2WzLAWjeSATAoxwKYJV94TeNavgoIdA0a9ytzDSVzBy2YKFK+emCPOEibLeCrg==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "decode-named-character-reference": "^1.0.0", + "devlop": "^1.0.0", + "micromark-factory-destination": "^2.0.0", + "micromark-factory-label": "^2.0.0", + "micromark-factory-space": "^2.0.0", + "micromark-factory-title": "^2.0.0", + "micromark-factory-whitespace": "^2.0.0", + "micromark-util-character": "^2.0.0", + "micromark-util-chunked": "^2.0.0", + "micromark-util-classify-character": "^2.0.0", + "micromark-util-html-tag-name": "^2.0.0", + "micromark-util-normalize-identifier": "^2.0.0", + "micromark-util-resolve-all": "^2.0.0", + "micromark-util-subtokenize": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-factory-destination": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-factory-destination/-/micromark-factory-destination-2.0.1.tgz", + "integrity": "sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-character": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-factory-label": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-factory-label/-/micromark-factory-label-2.0.1.tgz", + "integrity": "sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "devlop": "^1.0.0", + "micromark-util-character": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-factory-space": { + 
"version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-factory-space/-/micromark-factory-space-2.0.1.tgz", + "integrity": "sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-character": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-factory-title": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-factory-title/-/micromark-factory-title-2.0.1.tgz", + "integrity": "sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-factory-space": "^2.0.0", + "micromark-util-character": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-factory-whitespace": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-factory-whitespace/-/micromark-factory-whitespace-2.0.1.tgz", + "integrity": "sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-factory-space": "^2.0.0", + "micromark-util-character": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-util-character": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/micromark-util-character/-/micromark-util-character-2.1.1.tgz", + "integrity": "sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-util-chunked": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-chunked/-/micromark-util-chunked-2.0.1.tgz", + "integrity": "sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-symbol": "^2.0.0" + } + }, + "node_modules/micromark-util-classify-character": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-classify-character/-/micromark-util-classify-character-2.0.1.tgz", + "integrity": "sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": 
"https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-character": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-util-combine-extensions": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-combine-extensions/-/micromark-util-combine-extensions-2.0.1.tgz", + "integrity": "sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-chunked": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-util-decode-numeric-character-reference": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/micromark-util-decode-numeric-character-reference/-/micromark-util-decode-numeric-character-reference-2.0.2.tgz", + "integrity": "sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-symbol": "^2.0.0" + } + }, + "node_modules/micromark-util-decode-string": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-decode-string/-/micromark-util-decode-string-2.0.1.tgz", + "integrity": "sha512-nDV/77Fj6eH1ynwscYTOsbK7rR//Uj0bZXBwJZRfaLEJ1iGBR6kIfNmlNqaqJf649EP0F3NWNdeJi03elllNUQ==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "decode-named-character-reference": "^1.0.0", + "micromark-util-character": "^2.0.0", + "micromark-util-decode-numeric-character-reference": "^2.0.0", + "micromark-util-symbol": "^2.0.0" + } + }, + "node_modules/micromark-util-encode": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-encode/-/micromark-util-encode-2.0.1.tgz", + "integrity": "sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT" + }, + "node_modules/micromark-util-html-tag-name": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-html-tag-name/-/micromark-util-html-tag-name-2.0.1.tgz", + "integrity": "sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT" + }, + "node_modules/micromark-util-normalize-identifier": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-normalize-identifier/-/micromark-util-normalize-identifier-2.0.1.tgz", + "integrity": "sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q==", + "funding": [ + { + "type": "GitHub 
Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-symbol": "^2.0.0" + } + }, + "node_modules/micromark-util-resolve-all": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-resolve-all/-/micromark-util-resolve-all-2.0.1.tgz", + "integrity": "sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-util-sanitize-uri": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-sanitize-uri/-/micromark-util-sanitize-uri-2.0.1.tgz", + "integrity": "sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "micromark-util-character": "^2.0.0", + "micromark-util-encode": "^2.0.0", + "micromark-util-symbol": "^2.0.0" + } + }, + "node_modules/micromark-util-subtokenize": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/micromark-util-subtokenize/-/micromark-util-subtokenize-2.1.0.tgz", + "integrity": "sha512-XQLu552iSctvnEcgXw6+Sx75GflAPNED1qx7eBJ+wydBb2KCbRZe+NwvIEEMM83uml1+2WSXpBAcp9IUCgCYWA==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT", + "dependencies": { + "devlop": "^1.0.0", + "micromark-util-chunked": "^2.0.0", + "micromark-util-symbol": "^2.0.0", + "micromark-util-types": "^2.0.0" + } + }, + "node_modules/micromark-util-symbol": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/micromark-util-symbol/-/micromark-util-symbol-2.0.1.tgz", + "integrity": "sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT" + }, + "node_modules/micromark-util-types": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/micromark-util-types/-/micromark-util-types-2.0.2.tgz", + "integrity": "sha512-Yw0ECSpJoViF1qTU4DC6NwtC4aWGt1EkzaQB8KPPyCRR8z9TWeV0HbEFGTO+ZY1wB22zmxnJqhPyTpOVCpeHTA==", + "funding": [ + { + "type": "GitHub Sponsors", + "url": "https://github.com/sponsors/unifiedjs" + }, + { + "type": "OpenCollective", + "url": "https://opencollective.com/unified" + } + ], + "license": "MIT" + }, + "node_modules/micromatch": { + "version": "4.0.8", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz", + "integrity": "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==", + "dev": true, "dependencies": { "braces": "^3.0.3", "picomatch": "^2.3.1" @@ -5218,8 +6279,7 @@ "node_modules/ms": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", - "integrity": 
"sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", - "dev": true + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==" }, "node_modules/mz": { "version": "2.7.0", @@ -5459,6 +6519,31 @@ "node": ">=6" } }, + "node_modules/parse-entities": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-4.0.2.tgz", + "integrity": "sha512-GG2AQYWoLgL877gQIKeRPGO1xF9+eG1ujIb5soS5gPvLQ1y2o8FL90w2QWNdf9I361Mpp7726c+lj3U0qK1uGw==", + "license": "MIT", + "dependencies": { + "@types/unist": "^2.0.0", + "character-entities-legacy": "^3.0.0", + "character-reference-invalid": "^2.0.0", + "decode-named-character-reference": "^1.0.0", + "is-alphanumerical": "^2.0.0", + "is-decimal": "^2.0.0", + "is-hexadecimal": "^2.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/parse-entities/node_modules/@types/unist": { + "version": "2.0.11", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.11.tgz", + "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==", + "license": "MIT" + }, "node_modules/path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", @@ -5769,6 +6854,15 @@ "url": "https://github.com/prettier/prettier?sponsor=1" } }, + "node_modules/prismjs": { + "version": "1.30.0", + "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.30.0.tgz", + "integrity": "sha512-DEvV2ZF2r2/63V+tK8hQvrR2ZGn10srHbXviTlcv7Kpzw8jWiNTqbVgjO3IY8RxrrOUF8VPMQQFysYYYv0YZxw==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, "node_modules/promise-worker-transferable": { "version": "1.0.4", "resolved": "https://registry.npmjs.org/promise-worker-transferable/-/promise-worker-transferable-1.0.4.tgz", @@ -5788,6 +6882,16 @@ "react-is": "^16.13.1" } }, + "node_modules/property-information": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/property-information/-/property-information-7.1.0.tgz", + "integrity": "sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/punycode": { "version": "2.3.1", "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", @@ -5917,6 +7021,33 @@ "react": ">=16.13.1" } }, + "node_modules/react-markdown": { + "version": "10.1.0", + "resolved": "https://registry.npmjs.org/react-markdown/-/react-markdown-10.1.0.tgz", + "integrity": "sha512-qKxVopLT/TyA6BX3Ue5NwabOsAzm0Q7kAPwq6L+wWDwisYs7R8vZ0nRXqq6rkueboxpkjvLGU9fWifiX/ZZFxQ==", + "license": "MIT", + "dependencies": { + "@types/hast": "^3.0.0", + "@types/mdast": "^4.0.0", + "devlop": "^1.0.0", + "hast-util-to-jsx-runtime": "^2.0.0", + "html-url-attributes": "^3.0.0", + "mdast-util-to-hast": "^13.0.0", + "remark-parse": "^11.0.0", + "remark-rehype": "^11.0.0", + "unified": "^11.0.0", + "unist-util-visit": "^5.0.0", + "vfile": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + }, + "peerDependencies": { + "@types/react": ">=18", + "react": ">=18" + } + }, "node_modules/react-reconciler": { "version": "0.27.0", "resolved": "https://registry.npmjs.org/react-reconciler/-/react-reconciler-0.27.0.tgz", @@ -5966,6 +7097,23 @@ "react": ">=16.8.0" } }, + 
"node_modules/react-syntax-highlighter": { + "version": "15.6.1", + "resolved": "https://registry.npmjs.org/react-syntax-highlighter/-/react-syntax-highlighter-15.6.1.tgz", + "integrity": "sha512-OqJ2/vL7lEeV5zTJyG7kmARppUjiB9h9udl4qHQjjgEos66z00Ia0OckwYfRxCSFrW8RJIBnsBwQsHZbVPspqg==", + "license": "MIT", + "dependencies": { + "@babel/runtime": "^7.3.1", + "highlight.js": "^10.4.1", + "highlightjs-vue": "^1.0.0", + "lowlight": "^1.17.0", + "prismjs": "^1.27.0", + "refractor": "^3.6.0" + }, + "peerDependencies": { + "react": ">= 0.14.0" + } + }, "node_modules/read-cache": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/read-cache/-/read-cache-1.0.0.tgz", @@ -6008,6 +7156,122 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/refractor": { + "version": "3.6.0", + "resolved": "https://registry.npmjs.org/refractor/-/refractor-3.6.0.tgz", + "integrity": "sha512-MY9W41IOWxxk31o+YvFCNyNzdkc9M20NoZK5vq6jkv4I/uh2zkWcfudj0Q1fovjUQJrNewS9NMzeTtqPf+n5EA==", + "license": "MIT", + "dependencies": { + "hastscript": "^6.0.0", + "parse-entities": "^2.0.0", + "prismjs": "~1.27.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/character-entities": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/character-entities/-/character-entities-1.2.4.tgz", + "integrity": "sha512-iBMyeEHxfVnIakwOuDXpVkc54HijNgCyQB2w0VfGQThle6NXn50zU6V/u+LDhxHcDUPojn6Kpga3PTAD8W1bQw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/character-entities-legacy": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-entities-legacy/-/character-entities-legacy-1.1.4.tgz", + "integrity": "sha512-3Xnr+7ZFS1uxeiUDvV02wQ+QDbc55o97tIV5zHScSPJpcLm/r0DFPcoY3tYRp+VZukxuMeKgXYmsXQHO05zQeA==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/character-reference-invalid": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-reference-invalid/-/character-reference-invalid-1.1.4.tgz", + "integrity": "sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/is-alphabetical": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-1.0.4.tgz", + "integrity": "sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/is-alphanumerical": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-alphanumerical/-/is-alphanumerical-1.0.4.tgz", + "integrity": "sha512-UzoZUr+XfVz3t3v4KyGEniVL9BDRoQtY7tOyrRybkVNjDFWyo1yhXNGrrBTQxp3ib9BLAWs7k2YKBQsFRkZG9A==", + "license": "MIT", + "dependencies": { + "is-alphabetical": "^1.0.0", + "is-decimal": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/is-decimal": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-decimal/-/is-decimal-1.0.4.tgz", + "integrity": 
"sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/is-hexadecimal": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-hexadecimal/-/is-hexadecimal-1.0.4.tgz", + "integrity": "sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/parse-entities": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-2.0.0.tgz", + "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==", + "license": "MIT", + "dependencies": { + "character-entities": "^1.0.0", + "character-entities-legacy": "^1.0.0", + "character-reference-invalid": "^1.0.0", + "is-alphanumerical": "^1.0.0", + "is-decimal": "^1.0.0", + "is-hexadecimal": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/refractor/node_modules/prismjs": { + "version": "1.27.0", + "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.27.0.tgz", + "integrity": "sha512-t13BGPUlFDR7wRB5kQDG4jjl7XeuH6jbJGt11JHPL96qwsEHNX2+68tFXqc1/k+/jALsbSWJKUOT/hcYAZ5LkA==", + "license": "MIT", + "engines": { + "node": ">=6" + } + }, "node_modules/regenerator-runtime": { "version": "0.14.1", "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.1.tgz", @@ -6031,6 +7295,39 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/remark-parse": { + "version": "11.0.0", + "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-11.0.0.tgz", + "integrity": "sha512-FCxlKLNGknS5ba/1lmpYijMUzX2esxW5xQqjWxw2eHFfS2MSdaHVINFmhjo+qN1WhZhNimq0dZATN9pH0IDrpA==", + "license": "MIT", + "dependencies": { + "@types/mdast": "^4.0.0", + "mdast-util-from-markdown": "^2.0.0", + "micromark-util-types": "^2.0.0", + "unified": "^11.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-rehype": { + "version": "11.1.2", + "resolved": "https://registry.npmjs.org/remark-rehype/-/remark-rehype-11.1.2.tgz", + "integrity": "sha512-Dh7l57ianaEoIpzbp0PC9UKAdCSVklD8E5Rpw7ETfbTl3FqcOOgq5q2LVDhgGCkaBv7p24JXikPdvhhmHvKMsw==", + "license": "MIT", + "dependencies": { + "@types/hast": "^3.0.0", + "@types/mdast": "^4.0.0", + "mdast-util-to-hast": "^13.0.0", + "unified": "^11.0.0", + "vfile": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/require-from-string": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", @@ -6218,6 +7515,19 @@ "loose-envify": "^1.1.0" } }, + "node_modules/section-matter": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/section-matter/-/section-matter-1.0.0.tgz", + "integrity": "sha512-vfD3pmTzGpufjScBh50YHKzEu2lxBWhVEHsNGoEXmCmn2hKGfeNLYMzCJpe8cD7gqX7TJluOVpBkAequ6dgMmA==", + "license": "MIT", + "dependencies": { + "extend-shallow": "^2.0.1", + "kind-of": "^6.0.0" + }, + "engines": { + "node": ">=4" + } + }, "node_modules/semver": { "version": "6.3.1", "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz", @@ -6349,6 +7659,16 
@@ "node": ">=0.10.0" } }, + "node_modules/space-separated-tokens": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-2.0.2.tgz", + "integrity": "sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/split-string": { "version": "3.1.0", "resolved": "https://registry.npmjs.org/split-string/-/split-string-3.1.0.tgz", @@ -6372,6 +7692,12 @@ "node": ">=0.10.0" } }, + "node_modules/sprintf-js": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", + "integrity": "sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==", + "license": "BSD-3-Clause" + }, "node_modules/stats-gl": { "version": "2.2.8", "resolved": "https://registry.npmjs.org/stats-gl/-/stats-gl-2.2.8.tgz", @@ -6547,6 +7873,20 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/stringify-entities": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/stringify-entities/-/stringify-entities-4.0.4.tgz", + "integrity": "sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg==", + "license": "MIT", + "dependencies": { + "character-entities-html4": "^2.0.0", + "character-entities-legacy": "^3.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/strip-ansi": { "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", @@ -6572,6 +7912,15 @@ "node": ">=8" } }, + "node_modules/strip-bom-string": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/strip-bom-string/-/strip-bom-string-1.0.0.tgz", + "integrity": "sha512-uCC2VHvQRYu+lMh4My/sFNmF2klFymLX1wHJeXnbEJERpV/ZsVuonzerjfrGpIGF7LBVa1O7i9kjiWvJiFck8g==", + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/strip-json-comments": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", @@ -6584,6 +7933,24 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/style-to-js": { + "version": "1.1.17", + "resolved": "https://registry.npmjs.org/style-to-js/-/style-to-js-1.1.17.tgz", + "integrity": "sha512-xQcBGDxJb6jjFCTzvQtfiPn6YvvP2O8U1MDIPNfJQlWMYfktPy+iGsHE7cssjs7y84d9fQaK4UF3RIJaAHSoYA==", + "license": "MIT", + "dependencies": { + "style-to-object": "1.0.9" + } + }, + "node_modules/style-to-object": { + "version": "1.0.9", + "resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-1.0.9.tgz", + "integrity": "sha512-G4qppLgKu/k6FwRpHiGiKPaPTFcG3g4wNVX/Qsfu+RqQM30E7Tyu/TEgxcL9PNLF5pdRLwQdE3YKKf+KF2Dzlw==", + "license": "MIT", + "dependencies": { + "inline-style-parser": "0.2.4" + } + }, "node_modules/sucrase": { "version": "3.35.0", "resolved": "https://registry.npmjs.org/sucrase/-/sucrase-3.35.0.tgz", @@ -6879,6 +8246,16 @@ "node": ">=8.0" } }, + "node_modules/trim-lines": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz", + "integrity": "sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6U2YloATfqBhBTSMhTpgBHtU0Mf3Rg==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/troika-three-text": { "version": "0.49.1", "resolved": 
"https://registry.npmjs.org/troika-three-text/-/troika-three-text-0.49.1.tgz", @@ -6906,6 +8283,16 @@ "resolved": "https://registry.npmjs.org/troika-worker-utils/-/troika-worker-utils-0.49.0.tgz", "integrity": "sha512-1xZHoJrG0HFfCvT/iyN41DvI/nRykiBtHqFkGaGgJwq5iXfIZFBiPPEHFpPpgyKM3Oo5ITHXP5wM2TNQszYdVg==" }, + "node_modules/trough": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/trough/-/trough-2.2.0.tgz", + "integrity": "sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, "node_modules/ts-interface-checker": { "version": "0.1.13", "resolved": "https://registry.npmjs.org/ts-interface-checker/-/ts-interface-checker-0.1.13.tgz", @@ -7064,6 +8451,93 @@ "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/unified": { + "version": "11.0.5", + "resolved": "https://registry.npmjs.org/unified/-/unified-11.0.5.tgz", + "integrity": "sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0", + "bail": "^2.0.0", + "devlop": "^1.0.0", + "extend": "^3.0.0", + "is-plain-obj": "^4.0.0", + "trough": "^2.0.0", + "vfile": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-is": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-6.0.0.tgz", + "integrity": "sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxfYw==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-position": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/unist-util-position/-/unist-util-position-5.0.0.tgz", + "integrity": "sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-stringify-position": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-4.0.0.tgz", + "integrity": "sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-5.0.0.tgz", + "integrity": "sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0", + "unist-util-is": "^6.0.0", + "unist-util-visit-parents": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit-parents": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-6.0.1.tgz", + "integrity": "sha512-L/PqWzfTP9lzzEa6CKs0k2nARxTdZduw3zyh8d2NVBnsyvHjSX4TWse388YrrQKbvI8w20fGjGlhgT96WwKykw==", + 
"license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0", + "unist-util-is": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/update-browserslist-db": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.0.tgz", @@ -7142,6 +8616,34 @@ "resolved": "https://registry.npmjs.org/v8n/-/v8n-1.5.1.tgz", "integrity": "sha512-LdabyT4OffkyXFCe9UT+uMkxNBs5rcTVuZClvxQr08D5TUgo1OFKkoT65qYRCsiKBl/usHjpXvP4hHMzzDRj3A==" }, + "node_modules/vfile": { + "version": "6.0.3", + "resolved": "https://registry.npmjs.org/vfile/-/vfile-6.0.3.tgz", + "integrity": "sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0", + "vfile-message": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile-message": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-4.0.2.tgz", + "integrity": "sha512-jRDZ1IMLttGj41KcZvlrYAaI3CfqpLpfpf+Mfig13viT6NKvRzWZ+lXz0Y5D60w6uJIBAOGq9mSHf0gktF0duw==", + "license": "MIT", + "dependencies": { + "@types/unist": "^3.0.0", + "unist-util-stringify-position": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/vite": { "version": "5.4.2", "resolved": "https://registry.npmjs.org/vite/-/vite-5.4.2.tgz", @@ -7446,6 +8948,15 @@ "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", "dev": true }, + "node_modules/xtend": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.2.tgz", + "integrity": "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==", + "license": "MIT", + "engines": { + "node": ">=0.4" + } + }, "node_modules/yallist": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz", @@ -7499,6 +9010,16 @@ "optional": true } } + }, + "node_modules/zwitch": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/zwitch/-/zwitch-2.0.4.tgz", + "integrity": "sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==", + "license": "MIT", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } } } } diff --git a/package.json b/package.json index e6fc783..5c0f418 100644 --- a/package.json +++ b/package.json @@ -15,6 +15,7 @@ "@react-three/drei": "^9.111.3", "@react-three/fiber": "^8.17.6", "@types/three": "^0.168.0", + "gray-matter": "^4.0.3", "gsap": "^3.12.5", "leva": "^0.9.35", "maath": "^0.10.8", @@ -22,7 +23,9 @@ "react": "^18.3.1", "react-dom": "^18.3.1", "react-globe.gl": "^2.27.2", + "react-markdown": "^10.1.0", "react-responsive": "^10.0.0", + "react-syntax-highlighter": "^15.6.1", "three": "^0.167.1", "three-stdlib": "^2.32.2" }, diff --git a/public/assets/MongoDB_SpringGreen.png b/public/assets/MongoDB_SpringGreen.png new file mode 100644 index 0000000..7b6492d Binary files /dev/null and b/public/assets/MongoDB_SpringGreen.png differ diff --git a/public/assets/autoapply.png b/public/assets/autoapply.png new file mode 100644 index 0000000..72e36a3 Binary files /dev/null and b/public/assets/autoapply.png differ diff --git a/public/assets/claude.png b/public/assets/claude.png new file mode 100644 index 
0000000..5a38939 Binary files /dev/null and b/public/assets/claude.png differ diff --git a/public/assets/deepmask.png b/public/assets/deepmask.png new file mode 100644 index 0000000..328bfbd Binary files /dev/null and b/public/assets/deepmask.png differ diff --git a/public/assets/getmobie.png b/public/assets/getmobie.png new file mode 100644 index 0000000..d483546 Binary files /dev/null and b/public/assets/getmobie.png differ diff --git a/public/assets/huggingface.png b/public/assets/huggingface.png new file mode 100644 index 0000000..49e2841 Binary files /dev/null and b/public/assets/huggingface.png differ diff --git a/public/assets/jan_ghibli.png b/public/assets/jan_ghibli.png new file mode 100644 index 0000000..08ba240 Binary files /dev/null and b/public/assets/jan_ghibli.png differ diff --git a/public/assets/linkedin.svg b/public/assets/linkedin.svg new file mode 100644 index 0000000..3b1151b --- /dev/null +++ b/public/assets/linkedin.svg @@ -0,0 +1,7 @@ + + + + + + + \ No newline at end of file diff --git a/public/assets/mit.png b/public/assets/mit.png new file mode 100644 index 0000000..286b971 Binary files /dev/null and b/public/assets/mit.png differ diff --git a/public/assets/ohb.png b/public/assets/ohb.png new file mode 100644 index 0000000..42b4270 Binary files /dev/null and b/public/assets/ohb.png differ diff --git a/public/assets/publication1.png b/public/assets/publication1.png new file mode 100644 index 0000000..e0984c5 Binary files /dev/null and b/public/assets/publication1.png differ diff --git a/public/assets/python.png b/public/assets/python.png new file mode 100644 index 0000000..20f36f4 Binary files /dev/null and b/public/assets/python.png differ diff --git a/public/assets/pytorch.png b/public/assets/pytorch.png new file mode 100644 index 0000000..872b5dc Binary files /dev/null and b/public/assets/pytorch.png differ diff --git a/public/assets/rfa.png b/public/assets/rfa.png new file mode 100644 index 0000000..5e53e13 Binary files /dev/null and b/public/assets/rfa.png differ diff --git a/public/assets/x.svg b/public/assets/x.svg new file mode 100644 index 0000000..8f60fab --- /dev/null +++ b/public/assets/x.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/public/favicon.ico b/public/favicon.ico new file mode 100644 index 0000000..8a9a762 Binary files /dev/null and b/public/favicon.ico differ diff --git a/public/models/animations/jan.glb b/public/models/animations/jan.glb new file mode 100644 index 0000000..15d4a77 Binary files /dev/null and b/public/models/animations/jan.glb differ diff --git a/public/textures/jan_photo.png b/public/textures/jan_photo.png new file mode 100644 index 0000000..c2293eb Binary files /dev/null and b/public/textures/jan_photo.png differ diff --git a/public/textures/project/autoapply.mp4 b/public/textures/project/autoapply.mp4 new file mode 100644 index 0000000..3f53739 Binary files /dev/null and b/public/textures/project/autoapply.mp4 differ diff --git a/public/vite.svg b/public/vite.svg index e7b8dfb..73ac9ee 100644 --- a/public/vite.svg +++ b/public/vite.svg @@ -1 +1 @@ - \ No newline at end of file +pls \ No newline at end of file diff --git a/src/App.jsx b/src/App.jsx index 7e4fbc9..4d74008 100644 --- a/src/App.jsx +++ b/src/App.jsx @@ -3,9 +3,10 @@ import About from './sections/About.jsx'; import Footer from './sections/Footer.jsx'; import Navbar from './sections/Navbar.jsx'; import Contact from './sections/Contact.jsx'; -import Clients from './sections/Clients.jsx'; +import Publications from './sections/Publications.jsx'; 
import Projects from './sections/Projects.jsx'; import WorkExperience from './sections/Experience.jsx'; +import Blog from './sections/Blog.jsx'; const App = () => { return ( @@ -14,8 +15,9 @@ const App = () => { - + +
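The body of this `src/App.jsx` hunk lost its JSX element lines during extraction; only the import changes survive (Clients removed, Publications swapped in, Blog added). As a reading aid, the sketch below shows what the resulting component plausibly looks like — the `<main>` wrapper, the Hero import, and the exact ordering of sections are assumptions, not part of the recorded diff.

```jsx
// Hypothetical reconstruction of src/App.jsx after this change.
// Only the imports are confirmed by the diff; wrapper and ordering are assumed.
import Hero from './sections/Hero.jsx';
import About from './sections/About.jsx';
import Footer from './sections/Footer.jsx';
import Navbar from './sections/Navbar.jsx';
import Contact from './sections/Contact.jsx';
import Publications from './sections/Publications.jsx';
import Projects from './sections/Projects.jsx';
import WorkExperience from './sections/Experience.jsx';
import Blog from './sections/Blog.jsx';

const App = () => {
  return (
    <main>
      <Navbar />
      <Hero />
      <About />
      <Projects />
      <Publications />   {/* replaces <Clients /> */}
      <WorkExperience />
      <Blog />           {/* newly added section */}
      <Contact />
      <Footer />
    </main>
  );
};

export default App;
```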
diff --git a/src/components/BlogCard.jsx b/src/components/BlogCard.jsx new file mode 100644 index 0000000..1e716a7 --- /dev/null +++ b/src/components/BlogCard.jsx @@ -0,0 +1,77 @@ +import { useState } from 'react'; + +const BlogCard = ({ post, onReadMore }) => { + const [isHovered, setIsHovered] = useState(false); + + const formatDate = (dateString) => { + const date = new Date(dateString); + return date.toLocaleDateString('en-US', { + year: 'numeric', + month: 'long', + day: 'numeric' + }); + }; + + return ( +
setIsHovered(true)} + onMouseLeave={() => setIsHovered(false)} + > + + {/* Category and date */} +
+ {post.category} + {formatDate(post.date)} +
+ + {/* Title */} +

+ {post.title} +

+ + {/* Excerpt */} +

+ {post.excerpt} +

+ + {/* Tags */} +
+ {post.tags.slice(0, 3).map((tag, index) => ( + + {tag} + + ))} + {post.tags.length > 3 && ( + +{post.tags.length - 3} more + )} +
+ + {/* Footer */} +
+
+ {post.author} + + {post.readTime} +
+ + +
+
+ ); +}; + +export default BlogCard; \ No newline at end of file diff --git a/src/components/BlogPost.jsx b/src/components/BlogPost.jsx new file mode 100644 index 0000000..68cb768 --- /dev/null +++ b/src/components/BlogPost.jsx @@ -0,0 +1,169 @@ +import ReactMarkdown from 'react-markdown'; +import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'; +import { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism'; + +const BlogPost = ({ post, onBack }) => { + const formatDate = (dateString) => { + const date = new Date(dateString); + return date.toLocaleDateString('en-US', { + year: 'numeric', + month: 'long', + day: 'numeric' + }); + }; + + const components = { + code: ({ node, inline, className, children, ...props }) => { + const match = /language-(\w+)/.exec(className || ''); + return !inline && match ? ( + <SyntaxHighlighter style={vscDarkPlus} language={match[1]} {...props}> + {String(children).replace(/\n$/, '')} + </SyntaxHighlighter> + ) : ( + <code className={className} {...props}> + {children} + </code> + ); + }, + h1: ({ children }) => ( +

+ {children} +

+ ), + h2: ({ children }) => ( +

+ {children} +

+ ), + h3: ({ children }) => ( +

+ {children} +

+ ), + p: ({ children }) => ( +

+ {children} +

+ ), + ul: ({ children }) => ( +
    + {children} +
+ ), + ol: ({ children }) => ( +
    + {children} +
+ ), + li: ({ children }) => ( +
  • + {children} +
  • + ), + blockquote: ({ children }) => ( +
    + {children} +
    + ), + strong: ({ children }) => ( + + {children} + + ), + em: ({ children }) => ( + + {children} + + ), + a: ({ children, href }) => ( + + {children} + + ) + }; + + return ( +
    + {/* Back button */} + + + {/* Article header */} +
    +
    + + {/* Category and date */} +
    + {post.category} + {formatDate(post.date)} +
    + + {/* Title */} +

    + {post.title} +

    + + {/* Author info */} +
    +
    +

    {post.author}

    +

    {post.readTime}

    +
    +
    + + {/* Tags */} +
    + {post.tags.map((tag, index) => ( + + {tag} + + ))} +
    +
    + + {/* Content */} +
    + + {post.content} + +
    +
    + + {/* Back to blog button */} +
    + +
    +
+ ); +}; + +export default BlogPost; \ No newline at end of file diff --git a/src/components/CanvasErrorBoundary.jsx b/src/components/CanvasErrorBoundary.jsx new file mode 100644 index 0000000..c297d62 --- /dev/null +++ b/src/components/CanvasErrorBoundary.jsx @@ -0,0 +1,58 @@ +import React from 'react'; +import { useMobileDetection } from '../hooks/useMobileDetection'; + +class CanvasErrorBoundary extends React.Component { + constructor(props) { + super(props); + this.state = { hasError: false, error: null }; + } + + static getDerivedStateFromError(error) { + return { hasError: true, error }; + } + + componentDidCatch(error, errorInfo) { + console.error('Canvas Error:', error, errorInfo); + } + + render() { + if (this.state.hasError) { + return this.props.fallback || <CanvasErrorFallback error={this.state.error} />; + } + + return this.props.children; + } +} + +const CanvasErrorFallback = ({ error }) => { + const deviceInfo = useMobileDetection(); + + return ( +
    +
    +
    + + + +
    +

    + {deviceInfo.isMobile ? 'Mobile 3D View Unavailable' : '3D View Error'} +

    +

    + {deviceInfo.isMobile + ? 'Your device may not support 3D graphics. Please try on desktop for the full experience.' + : 'Unable to load 3D content. Please refresh the page or try a different browser.' + } +

    + +
    +
    + ); +}; + +export default CanvasErrorBoundary; \ No newline at end of file diff --git a/src/components/ClaudeLogo.jsx b/src/components/ClaudeLogo.jsx new file mode 100644 index 0000000..caf7858 --- /dev/null +++ b/src/components/ClaudeLogo.jsx @@ -0,0 +1,18 @@ +import { Float, useTexture } from '@react-three/drei'; + +const ClaudeLogo = (props) => { + const texture = useTexture('/assets/claude.png'); + + return ( + + + + + + + + + ); +}; + +export default ClaudeLogo; \ No newline at end of file diff --git a/src/components/DemoComputer.jsx b/src/components/DemoComputer.jsx index 9106491..02e7278 100644 --- a/src/components/DemoComputer.jsx +++ b/src/components/DemoComputer.jsx @@ -12,7 +12,12 @@ const DemoComputer = (props) => { const { nodes, materials, animations } = useGLTF('/models/computer.glb'); const { actions } = useAnimations(animations, group); - const txt = useVideoTexture(props.texture ? props.texture : '/textures/project/project1.mp4'); + // Use video texture directly + const txt = useVideoTexture(props.texture, { + muted: true, + loop: true, + start: true, + }); useEffect(() => { if (txt) { diff --git a/src/components/DemoComputerSimple.jsx b/src/components/DemoComputerSimple.jsx new file mode 100644 index 0000000..f401437 --- /dev/null +++ b/src/components/DemoComputerSimple.jsx @@ -0,0 +1,98 @@ +import { useRef, useEffect } from 'react'; +import { useGLTF, useTexture, useVideoTexture } from '@react-three/drei'; +import gsap from 'gsap'; +import { useGSAP } from '@gsap/react'; + +const DemoComputerSimple = (props) => { + const group = useRef(); + const { nodes, materials } = useGLTF('/models/computer.glb'); + + // Use video texture directly without complex mobile detection + const txt = useVideoTexture(props.texture, { + muted: true, + loop: true, + start: true, + }); + + useEffect(() => { + if (txt) { + txt.flipY = false; + } + }, [txt]); + + useGSAP(() => { + gsap.from(group.current.rotation, { + y: Math.PI / 2, + duration: 1, + ease: 'power3.out', + }); + }, [txt]); + + return ( + + + + + + + + + + + + + + + + + + + + ); +}; + +useGLTF.preload('/models/computer.glb'); + +export default DemoComputerSimple; \ No newline at end of file diff --git a/src/components/Developer.jsx b/src/components/Developer.jsx index 2115710..08623df 100644 --- a/src/components/Developer.jsx +++ b/src/components/Developer.jsx @@ -12,7 +12,7 @@ import { SkeletonUtils } from 'three-stdlib'; const Developer = ({ animationName = 'idle', ...props }) => { const group = useRef(); - const { scene } = useGLTF('/models/animations/developer.glb'); + const { scene } = useGLTF('/models/animations/jan.glb'); const clone = React.useMemo(() => SkeletonUtils.clone(scene), [scene]); const { nodes, materials } = useGraph(clone); @@ -31,80 +31,76 @@ const Developer = ({ animationName = 'idle', ...props }) => { group, ); + // Debug: Log available nodes and materials + useEffect(() => { + console.log('=== JAN.GLB MODEL STRUCTURE ==='); + console.log('Available nodes:', Object.keys(nodes)); + console.log('Available materials:', Object.keys(materials)); + + // Log each node with its properties + Object.entries(nodes).forEach(([name, node]) => { + console.log(`Node: ${name}`, { + type: node.type, + hasGeometry: !!node.geometry, + hasMaterial: !!node.material, + hasChildren: node.children?.length > 0, + childrenCount: node.children?.length || 0 + }); + }); + }, [nodes, materials]); + useEffect(() => { actions[animationName].reset().fadeIn(0.5).play(); return () => actions[animationName].fadeOut(0.5); }, 
[animationName]); + // Helper function to safely render a mesh if it exists + const renderMesh = (nodeName, materialName, meshName) => { + const node = nodes[nodeName]; + const material = materials[materialName]; + + if (!node || !material) { + return null; + } + + return ( + + ); + }; + return ( + {/* Use the Hips bone as root (confirmed to exist in jan.glb) */} - - - - - - - - - - + + {/* Render available meshes - only the ones that exist in jan.glb */} + {renderMesh('Wolf3D_Hair', 'Wolf3D_Hair', 'Hair')} + {renderMesh('Wolf3D_Body', 'Wolf3D_Body', 'Body')} + {renderMesh('Wolf3D_Outfit_Bottom', 'Wolf3D_Outfit_Bottom', 'OutfitBottom')} + {renderMesh('Wolf3D_Outfit_Footwear', 'Wolf3D_Outfit_Footwear', 'Footwear')} + {renderMesh('Wolf3D_Outfit_Top', 'Wolf3D_Outfit_Top', 'OutfitTop')} + {renderMesh('EyeLeft', 'Wolf3D_Eye', 'EyeLeft')} + {renderMesh('EyeRight', 'Wolf3D_Eye', 'EyeRight')} + {renderMesh('Wolf3D_Head', 'Wolf3D_Skin', 'Head')} + {renderMesh('Wolf3D_Teeth', 'Wolf3D_Teeth', 'Teeth')} ); }; -useGLTF.preload('/models/animations/developer.glb'); +useGLTF.preload('/models/animations/jan.glb'); + +// Preload animations to reduce loading warnings +useFBX.preload('/models/animations/idle.fbx'); +useFBX.preload('/models/animations/salute.fbx'); +useFBX.preload('/models/animations/clapping.fbx'); +useFBX.preload('/models/animations/victory.fbx'); export default Developer; diff --git a/src/components/EnhancedCanvasLoader.jsx b/src/components/EnhancedCanvasLoader.jsx new file mode 100644 index 0000000..0e87ac6 --- /dev/null +++ b/src/components/EnhancedCanvasLoader.jsx @@ -0,0 +1,90 @@ +import { Html, useProgress } from '@react-three/drei'; +import { useState, useEffect } from 'react'; +import { useMobileDetection } from '../hooks/useMobileDetection'; + +const EnhancedCanvasLoader = ({ timeout = 10000, onTimeout }) => { + const { progress } = useProgress(); + const [isTimedOut, setIsTimedOut] = useState(false); + const deviceInfo = useMobileDetection(); + + useEffect(() => { + const timer = setTimeout(() => { + if (progress < 100) { + setIsTimedOut(true); + onTimeout?.(); + } + }, timeout); + + return () => clearTimeout(timer); + }, [progress, timeout, onTimeout]); + + if (isTimedOut) { + return ( + +
    + + + +
    +

    + {deviceInfo.isMobile + ? 'Loading is taking longer than expected on mobile' + : 'Loading timeout - please refresh to try again' + } +

    + + + ); + } + + return ( + + +

    + {progress !== 0 ? `${progress.toFixed(2)}%` : 'Loading...'} +

    + {deviceInfo.isMobile && ( +

    + Loading 3D content... +

    + )} + + ); +}; + +export default EnhancedCanvasLoader; \ No newline at end of file diff --git a/src/components/GlobalErrorBoundary.jsx b/src/components/GlobalErrorBoundary.jsx new file mode 100644 index 0000000..3e4ec3f --- /dev/null +++ b/src/components/GlobalErrorBoundary.jsx @@ -0,0 +1,84 @@ +import React from 'react'; + +class GlobalErrorBoundary extends React.Component { + constructor(props) { + super(props); + this.state = { hasError: false, error: null, errorInfo: null }; + } + + static getDerivedStateFromError(error) { + // Update state so the next render will show the fallback UI + return { hasError: true }; + } + + componentDidCatch(error, errorInfo) { + // Log the error to console in development + if (process.env.NODE_ENV === 'development') { + console.error('Global Error Boundary caught an error:', error); + console.error('Error Info:', errorInfo); + } + + // You can also log the error to an error reporting service here + this.setState({ + error: error, + errorInfo: errorInfo + }); + } + + render() { + if (this.state.hasError) { + // Custom fallback UI + return ( +
+        <div>
+          <h2>Something went wrong</h2>
+          <p>
+            We encountered an error while loading the 3D content. Please try refreshing the page.
+          </p>
+          {process.env.NODE_ENV === 'development' && this.state.error && (
+            <details>
+              <summary>Error Details (Development Only)</summary>
+              <pre>
+                {this.state.error.toString()}
+                {this.state.errorInfo.componentStack}
+              </pre>
+            </details>
+          )}
+        </div>
    + ); + } + + return this.props.children; + } +} + +export default GlobalErrorBoundary; \ No newline at end of file diff --git a/src/components/HackerRoom.jsx b/src/components/HackerRoom.jsx index cb02cc4..608455c 100644 --- a/src/components/HackerRoom.jsx +++ b/src/components/HackerRoom.jsx @@ -5,17 +5,39 @@ Files: hacker-room-new.glb [34.62MB] > /Users/hsuwinlat/Desktop/jsm pj/threejscc */ import { useGLTF, useTexture } from '@react-three/drei'; +import { useEffect } from 'react'; export function HackerRoom(props) { const { nodes, materials } = useGLTF('/models/hacker-room.glb'); const monitortxt = useTexture('textures/desk/monitor.png'); - const screenTxt = useTexture('textures/desk/screen.png'); + // Option 1: Replace the original screen.png file + // const screenTxt = useTexture('textures/desk/screen.png'); + + // Option 2: Use your own photo (uncomment the line below and comment the line above) + const screenTxt = useTexture('textures/jan_photo.png'); + + // Configure the photo texture to show face clearly + useEffect(() => { + if (screenTxt) { + // Center the face and prevent hair cutoff + screenTxt.wrapS = screenTxt.wrapT = 1001; // ClampToEdgeWrapping + screenTxt.offset.set(0.1, 0.2); // Shift more left and slightly up to center face + screenTxt.repeat.set(1.2, 1.2); // Scale up slightly to show more of the image + screenTxt.center.set(0.5, 0.5); // Set center point for transformations + screenTxt.needsUpdate = true; + } + }, [screenTxt]); return ( - + diff --git a/src/components/HuggingFaceLogo.jsx b/src/components/HuggingFaceLogo.jsx new file mode 100644 index 0000000..4a6a726 --- /dev/null +++ b/src/components/HuggingFaceLogo.jsx @@ -0,0 +1,18 @@ +import { Float, useTexture } from '@react-three/drei'; + +const HuggingFaceLogo = (props) => { + const texture = useTexture('/assets/huggingface.png'); + + return ( + + + + + + + + + ); +}; + +export default HuggingFaceLogo; \ No newline at end of file diff --git a/src/components/MobileOptimizedCanvas.jsx b/src/components/MobileOptimizedCanvas.jsx new file mode 100644 index 0000000..cc82c49 --- /dev/null +++ b/src/components/MobileOptimizedCanvas.jsx @@ -0,0 +1,116 @@ +import { Canvas } from '@react-three/fiber'; +import { useMobileDetection } from '../hooks/useMobileDetection'; +import { useEffect, useState } from 'react'; + +const MobileOptimizedCanvas = ({ children, fallback, ...props }) => { + const deviceInfo = useMobileDetection(); + const [webglSupported, setWebglSupported] = useState(true); + const [canvasError, setCanvasError] = useState(false); + + useEffect(() => { + // Check WebGL support + const canvas = document.createElement('canvas'); + const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl'); + + if (!gl) { + console.warn('WebGL not supported'); + setWebglSupported(false); + return; + } + + // Check for mobile-specific WebGL issues + if (deviceInfo.isMobile) { + try { + const extension = gl.getExtension('WEBGL_lose_context'); + if (extension) { + // Test context stability + const testBuffer = gl.createBuffer(); + if (!testBuffer) { + throw new Error('WebGL context unstable'); + } + gl.deleteBuffer(testBuffer); + } + } catch (error) { + console.warn('Mobile WebGL context issues:', error); + setWebglSupported(false); + } + } + + // Cleanup + if (gl.getExtension('WEBGL_lose_context')) { + gl.getExtension('WEBGL_lose_context').loseContext(); + } + }, [deviceInfo.isMobile]); + + // Mobile-specific Canvas configuration + const getMobileCanvasProps = () => { + const baseProps = { + style: { + 
width: '100%', + height: '100%', + display: 'block', + touchAction: 'none' // Prevent mobile scroll interference + }, + onCreated: (state) => { + // Mobile performance optimizations + state.gl.setSize(state.size.width, state.size.height); + state.gl.setPixelRatio(Math.min(window.devicePixelRatio, 2)); // Limit pixel ratio on mobile + + // Mobile-specific renderer settings + if (deviceInfo.isMobile) { + state.gl.physicallyCorrectLights = false; + state.gl.powerPreference = 'low-power'; + } + + // Force a render to ensure Canvas initializes + state.gl.render(state.scene, state.camera); + }, + onError: (error) => { + console.error('Canvas error:', error); + setCanvasError(true); + } + }; + + if (deviceInfo.isMobile) { + return { + ...baseProps, + dpr: Math.min(window.devicePixelRatio, 2), // Limit DPI on mobile + performance: { min: 0.1, max: 0.8 }, // Conservative performance settings + gl: { + antialias: false, // Disable antialiasing on mobile for performance + alpha: true, + powerPreference: 'low-power', + failIfMajorPerformanceCaveat: false + } + }; + } + + return baseProps; + }; + + // Show fallback if WebGL is not supported or Canvas errored + if (!webglSupported || canvasError) { + return fallback || ( +
+      <div>
+        <p>
+          {deviceInfo.isMobile ? 'Mobile 3D not supported' : '3D content unavailable'}
+        </p>
+      </div>
    + ); + } + + return ( + + {children} + + ); +}; + +export default MobileOptimizedCanvas; \ No newline at end of file diff --git a/src/components/MobileProjectDisplay.jsx b/src/components/MobileProjectDisplay.jsx new file mode 100644 index 0000000..9b67da4 --- /dev/null +++ b/src/components/MobileProjectDisplay.jsx @@ -0,0 +1,49 @@ +import { useMobileDetection } from '../hooks/useMobileDetection'; +import { generateFallbackImage } from '../utils/projectFallbacks'; + +const MobileProjectDisplay = ({ texture, project }) => { + const deviceInfo = useMobileDetection(); + const fallbackImage = generateFallbackImage(texture); + + // For mobile, show a simple 2D preview instead of 3D + if (deviceInfo.isMobile) { + return ( +
+      <div>
+        {/* Background pattern */}
+
+        {/* Project preview */}
+        <img
+          src={fallbackImage}
+          alt="Project preview"
+          onError={(e) => {
+            e.target.src = '/assets/project-logo1.png';
+          }}
+        />
+
+        <p>Project Preview</p>
+        <p>View on desktop for interactive 3D demo</p>
+
+        {/* Decorative elements */}
+      </div>
    + ); + } + + return null; // Let the 3D Canvas handle desktop +}; + +export default MobileProjectDisplay; \ No newline at end of file diff --git a/src/components/MongoDBLogo.jsx b/src/components/MongoDBLogo.jsx new file mode 100644 index 0000000..28f1f16 --- /dev/null +++ b/src/components/MongoDBLogo.jsx @@ -0,0 +1,18 @@ +import { Float, useTexture } from '@react-three/drei'; + +const MongoDBLogo = (props) => { + const texture = useTexture('/assets/MongoDB_SpringGreen.png'); + + return ( + + + + + + + + + ); +}; + +export default MongoDBLogo; \ No newline at end of file diff --git a/src/components/PyTorchLogo.jsx b/src/components/PyTorchLogo.jsx new file mode 100644 index 0000000..fb0da99 --- /dev/null +++ b/src/components/PyTorchLogo.jsx @@ -0,0 +1,18 @@ +import { Float, useTexture } from '@react-three/drei'; + +const PyTorchLogo = (props) => { + const texture = useTexture('/assets/pytorch.png'); + + return ( + + + + + + + + + ); +}; + +export default PyTorchLogo; \ No newline at end of file diff --git a/src/components/PythonLogo.jsx b/src/components/PythonLogo.jsx new file mode 100644 index 0000000..ab2ebf6 --- /dev/null +++ b/src/components/PythonLogo.jsx @@ -0,0 +1,18 @@ +import { Float, useTexture } from '@react-three/drei'; + +const PythonLogo = (props) => { + const texture = useTexture('/assets/python.png'); + + return ( + + + + + + + + + ); +}; + +export default PythonLogo; \ No newline at end of file diff --git a/src/constants/index.js b/src/constants/index.js index 54252cd..04500c9 100644 --- a/src/constants/index.js +++ b/src/constants/index.js @@ -1,288 +1,487 @@ -export const navLinks = [ - { - id: 1, - name: 'Home', - href: '#home', - }, - { - id: 2, - name: 'About', - href: '#about', - }, - { - id: 3, - name: 'Work', - href: '#work', - }, - { - id: 4, - name: 'Contact', - href: '#contact', - }, -]; - -export const clientReviews = [ - { - id: 1, - name: 'Emily Johnson', - position: 'Marketing Director at GreenLeaf', - img: 'assets/review1.png', - review: - 'Working with Adrian was a fantastic experience. He transformed our outdated website into a modern, user-friendly platform. His attention to detail and commitment to quality are unmatched. Highly recommend him for any web dev projects.', - }, - { - id: 2, - name: 'Mark Rogers', - position: 'Founder of TechGear Shop', - img: 'assets/review2.png', - review: - 'Adrian’s expertise in web development is truly impressive. He delivered a robust and scalable solution for our e-commerce site, and our online sales have significantly increased since the launch. He’s a true professional! Fantastic work.', - }, - { - id: 3, - name: 'John Dohsas', - position: 'Project Manager at UrbanTech ', - img: 'assets/review3.png', - review: - 'I can’t say enough good things about Adrian. He was able to take our complex project requirements and turn them into a seamless, functional website. His problem-solving abilities are outstanding.', - }, - { - id: 4, - name: 'Ether Smith', - position: 'CEO of BrightStar Enterprises', - img: 'assets/review4.png', - review: - 'Adrian was a pleasure to work with. He understood our requirements perfectly and delivered a website that exceeded our expectations. His skills in both frontend backend dev are top-notch.', - }, -]; - -export const myProjects = [ - { - title: 'Podcastr - AI Podcast Platform', - desc: 'Podcastr is a revolutionary Software-as-a-Service platform that transforms the way podcasts are created. 
With advanced AI-powered features like text-to-multiple-voices functionality, it allows creators to generate diverse voiceovers from a single text input.', - subdesc: - 'Built as a unique Software-as-a-Service app with Next.js 14, Tailwind CSS, TypeScript, Framer Motion and Convex, Podcastr is designed for optimal performance and scalability.', - href: 'https://www.youtube.com/watch?v=zfAb95tJvZQ', - texture: '/textures/project/project1.mp4', - logo: '/assets/project-logo1.png', - logoStyle: { - backgroundColor: '#2A1816', - border: '0.2px solid #36201D', - boxShadow: '0px 0px 60px 0px #AA3C304D', - }, - spotlight: '/assets/spotlight1.png', - tags: [ - { +export const navLinks = [{ id: 1, - name: 'React.js', - path: '/assets/react.svg', - }, - { - id: 2, - name: 'TailwindCSS', - path: 'assets/tailwindcss.png', - }, - { - id: 3, - name: 'TypeScript', - path: '/assets/typescript.png', - }, - { - id: 4, - name: 'Framer Motion', - path: '/assets/framer.png', - }, - ], - }, - { - title: 'LiveDoc - Real-Time Google Docs Clone', - desc: 'LiveDoc is a powerful collaborative app that elevates the capabilities of real-time document editing. As an enhanced version of Google Docs, It supports millions of collaborators simultaneously, ensuring that every change is captured instantly and accurately.', - subdesc: - 'With LiveDoc, users can experience the future of collaboration, where multiple contributors work together in real time without any lag, by using Next.js and Liveblocks newest features.', - href: 'https://www.youtube.com/watch?v=y5vE8y_f_OM', - texture: '/textures/project/project2.mp4', - logo: '/assets/project-logo2.png', - logoStyle: { - backgroundColor: '#13202F', - border: '0.2px solid #17293E', - boxShadow: '0px 0px 60px 0px #2F6DB54D', + name: 'Home', + href: '#home', }, - spotlight: '/assets/spotlight2.png', - tags: [ - { - id: 1, - name: 'React.js', - path: '/assets/react.svg', - }, - { + { id: 2, - name: 'TailwindCSS', - path: 'assets/tailwindcss.png', - }, - { - id: 3, - name: 'TypeScript', - path: '/assets/typescript.png', - }, - { - id: 4, - name: 'Framer Motion', - path: '/assets/framer.png', - }, - ], - }, - { - title: 'CarePulse - Health Management System', - desc: 'An innovative healthcare platform designed to streamline essential medical processes. 
It simplifies patient registration, appointment scheduling, and medical record management, providing a seamless experience for both healthcare providers and patients.', - subdesc: - 'With a focus on efficiency, CarePulse integrantes complex forms and SMS notifications, by using Next.js, Appwrite, Twillio and Sentry that enhance operational workflows.', - href: 'https://www.youtube.com/watch?v=lEflo_sc82g', - texture: '/textures/project/project3.mp4', - logo: '/assets/project-logo3.png', - logoStyle: { - backgroundColor: '#60f5a1', - background: - 'linear-gradient(0deg, #60F5A150, #60F5A150), linear-gradient(180deg, rgba(255, 255, 255, 0.9) 0%, rgba(208, 213, 221, 0.8) 100%)', - border: '0.2px solid rgba(208, 213, 221, 1)', - boxShadow: '0px 0px 60px 0px rgba(35, 131, 96, 0.3)', + name: 'About', + href: '#about', }, - spotlight: '/assets/spotlight3.png', - tags: [ - { - id: 1, - name: 'React.js', - path: '/assets/react.svg', - }, - { - id: 2, - name: 'TailwindCSS', - path: 'assets/tailwindcss.png', - }, - { + { id: 3, - name: 'TypeScript', - path: '/assets/typescript.png', - }, - { + name: 'Work', + href: '#work', + }, + { id: 4, - name: 'Framer Motion', - path: '/assets/framer.png', - }, - ], - }, - { - title: 'Horizon - Online Banking Platform', - desc: 'Horizon is a comprehensive online banking platform that offers users a centralized finance management dashboard. It allows users to connect multiple bank accounts, monitor real-time transactions, and seamlessly transfer money to other users.', - subdesc: - 'Built with Next.js 14 Appwrite, Dwolla and Plaid, Horizon ensures a smooth and secure banking experience, tailored to meet the needs of modern consumers.', - href: 'https://www.youtube.com/watch?v=PuOVqP_cjkE', - texture: '/textures/project/project4.mp4', - logo: '/assets/project-logo4.png', - logoStyle: { - backgroundColor: '#0E1F38', - border: '0.2px solid #0E2D58', - boxShadow: '0px 0px 60px 0px #2F67B64D', + name: 'Blog', + href: '#blog', }, - spotlight: '/assets/spotlight4.png', - tags: [ - { + { + id: 5, + name: 'Contact', + href: '#contact', + }, +]; + +export const publications = [{ id: 1, - name: 'React.js', - path: '/assets/react.svg', - }, - { - id: 2, - name: 'TailwindCSS', - path: 'assets/tailwindcss.png', - }, - { - id: 3, - name: 'TypeScript', - path: '/assets/typescript.png', - }, - { - id: 4, - name: 'Framer Motion', - path: '/assets/framer.png', - }, - ], - }, - { - title: 'Imaginify - AI Photo Manipulation App', - desc: 'Imaginify is a groundbreaking Software-as-a-Service application that empowers users to create stunning photo manipulations using AI technology. With features like AI-driven image editing, a payments system, and a credits-based model.', - subdesc: - 'Built with Next.js 14, Cloudinary AI, Clerk, and Stripe, Imaginify combines cutting-edge technology with a user-centric approach. 
It can be turned into a side income or even a full-fledged business.', - href: 'https://www.youtube.com/watch?v=Ahwoks_dawU', - texture: '/textures/project/project5.mp4', - logo: '/assets/project-logo5.png', - logoStyle: { - backgroundColor: '#1C1A43', - border: '0.2px solid #252262', - boxShadow: '0px 0px 60px 0px #635BFF4D', + title: 'Reaction Graph Networks for Inorganic Synthesis Condition Prediction of Solid State Materials', + authors: 'Heimann, J., et al.', + venue: 'AI4Mat-2024: NeurIPS 2024 Workshop on AI for Accelerated Materials Design', + conference: 'NeurIPS 2024', + workshop: 'AI4Mat-2024', + workshopFull: 'AI for Accelerated Materials Design', + year: '2024', + abstract: 'We present a novel approach using Graph Neural Networks with attention mechanisms to predict inorganic material synthesis conditions. Our method achieves significant improvements in accuracy over baseline approaches by leveraging the structural relationships in reaction graphs and incorporating domain-specific material science knowledge.', + image: '/assets/publication1.png', // You can add publication images here + pdf: 'https://openreview.net/forum?id=VGsXQOTs1E', + tags: ['Graph Neural Networks', 'Materials Science', 'Deep Learning', 'Synthesis Prediction', 'AI4Science'] + } + // Add more publications as they become available +]; + +export const myProjects = [{ + title: 'AutoApply - AI Job Application SaaS', + desc: 'A revolutionary multi-agent system that automates job applications using GPT-4 and Claude-3 APIs. AutoApply has generated $480K ARR with 10K+ monthly active users by intelligently applying to relevant positions.', + subdesc: 'Built with fine-tuned YOLOv8 for form detection achieving 94.3% accuracy, processing 78K+ successful applications. Scaled infrastructure handles 2.8M+ monthly queries with 99.7% uptime using containerized microservices.', + href: 'https://github.com/janMagnusHeimann/autoapply-turbo-charge-jobs', + texture: '/textures/project/autoapply.mp4', + logo: '/assets/autoapply.png', + logoStyle: { + backgroundColor: '#2A1816', + border: '0.2px solid #36201D', + boxShadow: '0px 0px 60px 0px #AA3C304D', + }, + spotlight: '/assets/spotlight1.png', + tags: [{ + id: 1, + name: 'GPT-4', + path: '/assets/claude.png', + }, + { + id: 2, + name: 'React', + path: '/assets/react.svg', + }, + { + id: 3, + name: 'Python', + path: '/assets/python.png', + }, + { + id: 4, + name: 'TypeScript', + path: '/assets/typescript.png', + }, + ], }, - spotlight: '/assets/spotlight5.png', - tags: [ - { + { + title: 'OpenRLHF Fork - Scalable RLHF Framework', + desc: 'Enhanced OpenRLHF framework implementing hybrid DPO/PPO training pipeline with significant performance improvements. Reduced GPU memory usage by 15% and achieved 23% faster convergence on reward model training.', + subdesc: 'Contributed multi-node distributed training support using DeepSpeed ZeRO-3, enabling training of 13B parameter models on 8x A100 clusters. 
Implemented adaptive KL penalty scheduling and batch-wise advantage normalization.', + href: 'https://github.com/janMagnusHeimann/OpenRLHF', + texture: '/textures/project/project2.mp4', + logo: '/assets/project-logo2.png', + logoStyle: { + backgroundColor: '#13202F', + border: '0.2px solid #17293E', + boxShadow: '0px 0px 60px 0px #2F6DB54D', + }, + spotlight: '/assets/spotlight2.png', + tags: [{ + id: 1, + name: 'PyTorch', + path: '/assets/pytorch.png', + }, + { + id: 2, + name: 'RLHF', + path: '/assets/huggingface.png', + }, + { + id: 3, + name: 'DeepSpeed', + path: '/assets/python.png', + }, + { + id: 4, + name: 'PPO/DPO', + path: '/assets/claude.png', + }, + ], + }, + { + title: 'ArchUnit TypeScript - Architecture Testing', + desc: 'Open source TypeScript architecture testing library with 400+ GitHub stars and widespread adoption in the JavaScript ecosystem. Enables developers to validate architectural rules and maintain code quality at scale.', + subdesc: 'Implemented AST-based static analysis supporting circular dependency detection, layered architecture validation, and code metrics. Built pattern matching system with glob/regex support and universal testing framework integration.', + href: 'https://github.com/LukasNiessen/ArchUnitTS', + texture: '/textures/project/project3.mp4', + logo: '/assets/typescript.png', + logoStyle: { + backgroundColor: '#60f5a1', + background: 'linear-gradient(0deg, #60F5A150, #60F5A150), linear-gradient(180deg, rgba(255, 255, 255, 0.9) 0%, rgba(208, 213, 221, 0.8) 100%)', + border: '0.2px solid rgba(208, 213, 221, 1)', + boxShadow: '0px 0px 60px 0px rgba(35, 131, 96, 0.3)', + }, + spotlight: '/assets/spotlight3.png', + tags: [{ + id: 1, + name: 'TypeScript', + path: '/assets/typescript.png', + }, + { + id: 2, + name: 'AST Analysis', + path: '/assets/terminal.png', + }, + { + id: 3, + name: 'Jest', + path: '/assets/react.svg', + }, + { + id: 4, + name: 'Open Source', + path: '/assets/github.svg', + }, + ], + }, + { + title: 'Domain-Specific GPT-2 Fine-Tuning', + desc: 'Fine-tuned GPT-2 medium on 10K aerospace papers using custom tokenizer with domain-specific vocabulary extensions. 
Achieved significant improvements in technical summarization capabilities.', + subdesc: 'Implemented distributed training across 4 GPUs using gradient accumulation and achieved 12% ROUGE score improvement for technical summarization through careful hyperparameter tuning and data augmentation.', + href: 'https://github.com/jan-heimann/gpt2-aerospace-finetuning', + texture: '/textures/project/project4.mp4', + logo: '/assets/project-logo4.png', + logoStyle: { + backgroundColor: '#0E1F38', + border: '0.2px solid #0E2D58', + boxShadow: '0px 0px 60px 0px #2F67B64D', + }, + spotlight: '/assets/spotlight4.png', + tags: [{ + id: 1, + name: 'GPT-2', + path: '/assets/claude.png', + }, + { + id: 2, + name: 'HuggingFace', + path: '/assets/huggingface.png', + }, + { + id: 3, + name: 'PyTorch', + path: '/assets/pytorch.png', + }, + { + id: 4, + name: 'NLP', + path: '/assets/python.png', + }, + ], + }, +]; + +export const blogPosts = [ + { id: 1, - name: 'React.js', - path: '/assets/react.svg', - }, - { + title: "Building Scalable Machine Learning Pipelines with MLflow and Docker", + excerpt: "A deep dive into creating production-ready ML pipelines that scale efficiently across different environments.", + content: `# Building Scalable Machine Learning Pipelines with MLflow and Docker + +## Introduction + +In today's rapidly evolving AI landscape, deploying machine learning models to production requires more than just good algorithms. This article explores how to build robust, scalable ML pipelines using MLflow for experiment tracking and Docker for containerization. + +## Key Components + +### 1. MLflow for Experiment Management +- **Model Registry**: Version control for ML models +- **Experiment Tracking**: Monitor metrics, parameters, and artifacts +- **Model Serving**: Deploy models as REST APIs + +### 2. Docker for Containerization +- **Reproducible Environments**: Consistent deployment across platforms +- **Scalability**: Easy horizontal scaling with orchestration tools +- **Isolation**: Prevent dependency conflicts + +## Implementation Strategy + +\`\`\`python +import mlflow +import mlflow.sklearn +from mlflow.models import infer_signature + +# Track experiment +with mlflow.start_run(): + model = train_model(X_train, y_train) + + # Log metrics + mlflow.log_metric("accuracy", accuracy) + mlflow.log_metric("f1_score", f1) + + # Log model + signature = infer_signature(X_test, predictions) + mlflow.sklearn.log_model(model, "model", signature=signature) +\`\`\` + +## Best Practices + +1. **Version Everything**: Code, data, and models +2. **Automate Testing**: Unit tests and integration tests +3. **Monitor Performance**: Real-time model performance tracking +4. **Implement CI/CD**: Automated deployment pipelines + +## Conclusion + +Building scalable ML pipelines requires careful consideration of tooling, architecture, and operational practices. 
MLflow and Docker provide a solid foundation for production ML systems.`, + author: "Jan Heimann", + date: "2025-01-08", + readTime: "8 min read", + tags: ["MLflow", "Docker", "Machine Learning", "DevOps", "Production"], + category: "ML Engineering", + }, + { id: 2, - name: 'TailwindCSS', - path: 'assets/tailwindcss.png', - }, - { + title: "Graph Neural Networks for Materials Discovery", + excerpt: "Exploring how Graph Neural Networks can revolutionize materials science by predicting synthesis conditions and properties.", + content: `# Graph Neural Networks for Materials Discovery + +## The Challenge + +Materials discovery traditionally relies on expensive experiments and trial-and-error approaches. Graph Neural Networks (GNNs) offer a promising solution by modeling the structural relationships in materials. + +## Why GNNs for Materials? + +### Graph Representation +- **Atoms as Nodes**: Each atom becomes a node with features +- **Bonds as Edges**: Chemical bonds define the graph structure +- **Structural Awareness**: Natural representation of molecular structure + +### Advantages over Traditional ML +- **Permutation Invariance**: Order of atoms doesn't matter +- **Size Flexibility**: Handle molecules of varying sizes +- **Interpretability**: Attention mechanisms show important regions + +## Implementation with PyTorch Geometric + +\`\`\`python +import torch +import torch.nn.functional as F +from torch_geometric.nn import GCNConv, global_mean_pool + +class MaterialGNN(torch.nn.Module): + def __init__(self, num_features, hidden_dim, num_classes): + super(MaterialGNN, self).__init__() + self.conv1 = GCNConv(num_features, hidden_dim) + self.conv2 = GCNConv(hidden_dim, hidden_dim) + self.conv3 = GCNConv(hidden_dim, hidden_dim) + self.classifier = torch.nn.Linear(hidden_dim, num_classes) + + def forward(self, x, edge_index, batch): + x = F.relu(self.conv1(x, edge_index)) + x = F.relu(self.conv2(x, edge_index)) + x = F.relu(self.conv3(x, edge_index)) + x = global_mean_pool(x, batch) + return self.classifier(x) +\`\`\` + +## Real-World Applications + +1. **Synthesis Prediction**: Predicting optimal conditions for material synthesis +2. **Property Prediction**: Estimating material properties from structure +3. **Drug Discovery**: Accelerating pharmaceutical research +4. 
**Catalyst Design**: Optimizing catalytic materials + +## Results and Impact + +Our research shows significant improvements over traditional methods: +- **9.2% accuracy improvement** in synthesis prediction +- **Faster convergence** in training time +- **Better generalization** to unseen materials + +## Future Directions + +- **Multi-modal Integration**: Combining structural and experimental data +- **Uncertainty Quantification**: Providing confidence estimates +- **Active Learning**: Iteratively improving models with new data + +The future of materials discovery lies in the intelligent combination of domain knowledge and advanced ML techniques.`, + author: "Jan Heimann", + date: "2025-01-05", + readTime: "12 min read", + tags: ["Graph Neural Networks", "Materials Science", "PyTorch", "AI4Science"], + category: "Research", + }, + { id: 3, - name: 'TypeScript', - path: '/assets/typescript.png', - }, - { - id: 4, - name: 'Framer Motion', - path: '/assets/framer.png', - }, - ], - }, + title: "Optimizing React Three Fiber Performance", + excerpt: "Tips and tricks for building smooth 3D web experiences with React Three Fiber, focusing on performance optimization.", + content: `# Optimizing React Three Fiber Performance + +## Introduction + +React Three Fiber (R3F) brings the power of Three.js to React applications, but achieving smooth 60fps performance requires careful optimization. This guide covers essential techniques for building performant 3D web experiences. + +## Key Optimization Strategies + +### 1. Geometry and Material Optimization + +\`\`\`jsx +import { useMemo } from 'react' +import { useFrame } from '@react-three/fiber' + +function OptimizedMesh() { + // Memoize geometry to prevent recreation + const geometry = useMemo(() => new THREE.SphereGeometry(1, 32, 32), []) + + // Reuse materials across instances + const material = useMemo(() => new THREE.MeshStandardMaterial({ + color: 'hotpink' + }), []) + + return +} +\`\`\` + +### 2. Instancing for Multiple Objects + +\`\`\`jsx +import { useRef } from 'react' +import { InstancedMesh } from 'three' + +function InstancedSpheres({ count = 1000 }) { + const meshRef = useRef() + + useFrame(() => { + // Animate instances efficiently + for (let i = 0; i < count; i++) { + // Update individual instance transforms + } + }) + + return ( + + {/* Individual instances */} + + ) +} +\`\`\` + +### 3. Level of Detail (LOD) + +\`\`\`jsx +import { Detailed } from '@react-three/drei' + +function LODModel() { + return ( + + + + + + ) +} +\`\`\` + +## Performance Monitoring + +### Frame Rate Monitoring +- Use \`useFrame\` callback timing +- Implement performance budgets +- Monitor GPU utilization + +### Memory Management +- Dispose of unused geometries and materials +- Use object pooling for frequently created objects +- Monitor memory leaks with DevTools + +## Best Practices + +1. **Frustum Culling**: Don't render objects outside the camera view +2. **Texture Optimization**: Use appropriate texture sizes and formats +3. **Shader Optimization**: Minimize fragment shader complexity +4. **Batch Operations**: Group similar rendering operations + +## Conclusion + +Building performant 3D web applications requires a deep understanding of both React and Three.js optimization techniques. 
By following these practices, you can create smooth, engaging 3D experiences that run well across devices.`, + author: "Jan Heimann", + date: "2025-01-02", + readTime: "10 min read", + tags: ["React Three Fiber", "Three.js", "Performance", "3D Web", "Optimization"], + category: "Frontend Development", + } ]; export const calculateSizes = (isSmall, isMobile, isTablet) => { - return { - deskScale: isSmall ? 0.05 : isMobile ? 0.06 : 0.065, - deskPosition: isMobile ? [0.5, -4.5, 0] : [0.25, -5.5, 0], - cubePosition: isSmall ? [4, -5, 0] : isMobile ? [5, -5, 0] : isTablet ? [5, -5, 0] : [9, -5.5, 0], - reactLogoPosition: isSmall ? [3, 4, 0] : isMobile ? [5, 4, 0] : isTablet ? [5, 4, 0] : [12, 3, 0], - ringPosition: isSmall ? [-5, 7, 0] : isMobile ? [-10, 10, 0] : isTablet ? [-12, 10, 0] : [-24, 10, 0], - targetPosition: isSmall ? [-5, -10, -10] : isMobile ? [-9, -10, -10] : isTablet ? [-11, -7, -10] : [-13, -13, -10], - }; + return { + deskScale: isSmall ? 0.05 : isMobile ? 0.06 : 0.065, + deskPosition: isMobile ? [0.5, -4.5, 0] : [0.25, -5.5, 0], + reactLogoPosition: isSmall ? [3, 4, 0] : isMobile ? [5, 4, 0] : isTablet ? [5, 4, 0] : [12, 3, 0], + ringPosition: isSmall ? [-5, 7, 0] : isMobile ? [-10, 10, 0] : isTablet ? [-12, 10, 0] : [-24, 10, 0], + // Tech logo positions + pythonPosition: isSmall ? [2, 4, 0] : isMobile ? [4, 4, 0] : isTablet ? [4, 4, 0] : [10, 3, 0], + huggingfacePosition: isSmall ? [4, -2, 0] : isMobile ? [6, -2, 0] : isTablet ? [7, -2, 0] : [10, -2, 0], + mongodbPosition: isSmall ? [-4, 1, 0] : isMobile ? [-6, 1, 0] : isTablet ? [-8, 1, 0] : [-14, 1, 0], + }; }; -export const workExperiences = [ - { - id: 1, - name: 'Framer', - pos: 'Lead Web Developer', - duration: '2022 - Present', - title: "Framer serves as my go-to tool for creating interactive prototypes. I use it to bring designs to life, allowing stakeholders to experience the user flow and interactions before development.", - icon: '/assets/framer.svg', - animation: 'victory', - }, - { - id: 2, - name: 'Figma', - pos: 'Web Developer', - duration: '2020 - 2022', - title: "Figma is my collaborative design platform of choice. I utilize it to work seamlessly with team members and clients, facilitating real-time feedback and design iterations. Its cloud-based.", - icon: '/assets/figma.svg', - animation: 'clapping', - }, - { - id: 3, - name: 'Notion', - pos: 'Junior Web Developer', - duration: '2019 - 2020', - title: "Notion helps me keep my projects organized. I use it for project management, task tracking, and as a central hub for documentation, ensuring that everything from design notes to.", - icon: '/assets/notion.svg', - animation: 'salute', - }, -]; +export const workExperiences = [{ + id: 1, + name: 'DRWN AI', + pos: 'Machine Learning Engineer', + duration: 'Apr 2025 - Present', + title: "Developing Multi-Agent Reinforcement Learning systems using PPO to optimize advertising budget allocation, achieving 15-25% improvement in cost-per-acquisition across client campaigns. Built real-time inference pipeline serving RL policies with 95ms latency.", + icon: '/assets/framer.svg', + animation: 'victory', + }, + { + id: 2, + name: 'Rocket Factory Augsburg', + pos: 'Machine Learning Engineer', + duration: 'Mar 2024 - Mar 2025', + title: "Designed RL pipeline using PPO to optimize rocket design parameters, achieving $1.5M projected cost reduction per launch. 
Implemented Graph Neural Networks to encode rocket component relationships and created custom OpenAI Gym environments.", + icon: '/assets/rfa.png', + animation: 'clapping', + }, + { + id: 3, + name: 'MIT', + pos: 'Assistant ML Researcher', + duration: 'May 2024 - Dec 2024', + title: "Developed Graph Neural Networks with attention mechanisms for material synthesis prediction, improving accuracy by 9.2% over baseline methods. Implemented multi-task transformer pretraining on 500K material descriptions.", + icon: '/assets/mit.png', + animation: 'salute', + }, + { + id: 4, + name: 'Deepmask GmbH', + pos: 'ML Engineer/Advisor', + duration: 'Oct 2024 - Mar 2025', + title: "Fine-tuned DeepSeek R1 (70B parameters) using LoRA with rank-16 adaptation, achieving +4% BLEU and +6% ROUGE-L on German benchmarks. Implemented production RAG system with 92% retrieval accuracy.", + icon: '/assets/deepmask.png', + animation: 'idle', + }, + { + id: 5, + name: 'OHB Systems AG', + pos: 'Software Engineer', + duration: 'Jan 2023 - Mar 2024', + title: "Built ML pipeline automating FEM analysis using Gaussian Processes, reducing engineering cycle time by 25%. Developed LSTM-based anomaly detection for satellite telemetry data and deployed models using MLflow and Docker.", + icon: '/assets/ohb.png', + animation: 'salute', + }, + { + id: 6, + name: 'GetMoBie GmbH', + pos: 'Co-Founder/Software Lead', + duration: 'Jan 2021 - Dec 2022', + title: "Led development of mobile banking application serving 20K+ users, presented at 'Die Höhle der Löwen' TV show. Implemented Random Forest models for transaction categorization and fraud detection on 1M+ records.", + icon: '/assets/getmobie.png', + animation: 'clapping', + }, +]; \ No newline at end of file diff --git a/src/content/README.md b/src/content/README.md new file mode 100644 index 0000000..84f8e9f --- /dev/null +++ b/src/content/README.md @@ -0,0 +1,117 @@ +# 📝 Blog Content Management + +## How to Add New Articles + +Adding new blog articles is super simple! Just edit the `blogPosts.js` file and add your article as a JavaScript object. + +### Step 1: Open the Blog Posts File + +Open `src/content/blogPosts.js` in your code editor. + +### Step 2: Add Your Article + +Add a new object to the `blogPosts` array: + +```javascript +export const blogPosts = [ + // ... existing articles ... + { + id: 5, // increment this number + title: "Your Article Title", + excerpt: "Brief description of your article (shows on cards)", + author: "Jan Heimann", + date: "2025-01-10", // YYYY-MM-DD format + readTime: "7 min read", + tags: ["Tag1", "Tag2", "Tag3"], + category: "Your Category", + content: `# Your Article Title + +## Introduction + +Your markdown content goes here... + +### Code Examples + +\`\`\`javascript +const example = () => { + console.log("This is a code example!"); +}; +\`\`\` + +### More Content + +- Use markdown syntax +- **Bold text** +- *Italic text* +- [Links](https://example.com) + +## Conclusion + +Wrap up your article here.` + } +]; +``` + +### Step 3: Save and It's Live! + +That's it! 
Your article will automatically appear on the website with: +- ✅ Full search functionality +- ✅ Category filtering +- ✅ Syntax highlighting for code +- ✅ Responsive design +- ✅ Markdown rendering + +## Article Fields Explained + +| Field | Description | Required | +|-------|-------------|----------| +| `id` | Unique number for the article | ✅ Yes | +| `title` | Main headline | ✅ Yes | +| `excerpt` | Brief description (shows on cards) | ✅ Yes | +| `author` | Your name | ✅ Yes | +| `date` | Publication date (YYYY-MM-DD) | ✅ Yes | +| `readTime` | Estimated reading time | ✅ Yes | +| `tags` | Array of searchable tags | ✅ Yes | +| `category` | Category for filtering | ✅ Yes | +| `content` | Full markdown content | ✅ Yes | + +## Markdown Features + +Your `content` field supports: + +- **Headers**: `# H1`, `## H2`, `### H3` +- **Emphasis**: `**bold**`, `*italic*` +- **Code blocks**: \`\`\`language\`\`\` +- **Inline code**: \`code\` +- **Links**: `[text](url)` +- **Lists**: `- item` or `1. item` +- **Line breaks**: Just add empty lines + +## Categories + +Common categories you can use: +- "ML Engineering" +- "Research" +- "Frontend Development" +- "AI & Business" +- "Tutorial" +- "Opinion" + +Or create your own! New categories automatically appear in the filter. + +## Pro Tips + +1. **Use descriptive tags** - they're searchable +2. **Keep excerpts under 150 characters** for best card display +3. **Include code examples** - they get syntax highlighting +4. **Use proper markdown headers** for good structure + +## Example Categories by Content Type + +- **Technical Deep Dives**: "ML Engineering", "Research" +- **Tutorials**: "Tutorial", "How-To" +- **Business Content**: "AI & Business", "Entrepreneurship" +- **Web Development**: "Frontend Development", "React" +- **Opinion Pieces**: "Opinion", "Industry Analysis" + +That's it! Super simple content management. 🎉 \ No newline at end of file diff --git a/src/content/blog/_template.md b/src/content/blog/_template.md new file mode 100644 index 0000000..56c0633 --- /dev/null +++ b/src/content/blog/_template.md @@ -0,0 +1,92 @@ +--- +title: "Your Article Title Here" +excerpt: "A brief, compelling description of your article that will appear on the blog cards. Keep it under 150 characters for best results." +author: "Jan Heimann" +date: "2025-01-09" +readTime: "5 min read" +tags: ["Tag1", "Tag2", "Tag3", "Tag4"] +category: "Your Category" +featured: false +--- + +# Your Article Title Here + +## Introduction + +Start with a compelling introduction that hooks the reader and explains what they'll learn from this article. + +## Main Content Sections + +### Use descriptive subheadings + +Write your content using markdown syntax. You can include: + +- **Bold text** for emphasis +- *Italic text* for emphasis +- `inline code` for technical terms +- [Links](https://example.com) to external resources + +### Code Examples + +```python +# Python code example +def hello_world(): + print("Hello, World!") + return "success" + +# More complex example +class ExampleClass: + def __init__(self, name): + self.name = name + + def greet(self): + return f"Hello, {self.name}!" +``` + +```javascript +// JavaScript example +const exampleFunction = (param) => { + console.log(`Parameter: ${param}`); + return param * 2; +}; + +// React component example +const MyComponent = ({ data }) => { + return ( +
+    <div>
+      <h2>{data.title}</h2>
+      <p>{data.description}</p>
+    </div>
    + ); +}; +``` + +### Lists and Structure + +1. **Numbered lists** for step-by-step instructions +2. **Bullet points** for features or benefits +3. **Subheadings** to break up content + +Key points to remember: +- Keep paragraphs concise +- Use code examples liberally +- Include practical takeaways +- End with actionable insights + +## Conclusion + +Summarize the main points and provide next steps or additional resources for readers who want to dive deeper. + +--- + +## Template Usage Instructions + +1. **Filename**: Use descriptive, kebab-case filenames (e.g., `my-awesome-article.md`) +2. **Frontmatter**: Update all fields in the frontmatter section +3. **Date Format**: Use YYYY-MM-DD format +4. **Tags**: Use relevant, searchable tags +5. **Category**: Choose from existing categories or create new ones +6. **Featured**: Set to `true` for featured articles +7. **Content**: Write in markdown format with proper headings + +This file serves as a template - copy it to create new articles! \ No newline at end of file diff --git a/src/content/blog/building-ai-powered-saas.md b/src/content/blog/building-ai-powered-saas.md new file mode 100644 index 0000000..540d0f6 --- /dev/null +++ b/src/content/blog/building-ai-powered-saas.md @@ -0,0 +1,296 @@ +--- +title: "Building AutoApply: Lessons from Creating an AI-Powered SaaS that Generated $480K ARR" +excerpt: "Key insights and technical challenges from building a multi-agent system that automates job applications using GPT-4 and Claude-3, serving 10K+ monthly active users." +author: "Jan Heimann" +date: "2025-01-09" +readTime: "15 min read" +tags: ["SaaS", "AI", "GPT-4", "Claude-3", "Computer Vision", "YOLOv8", "Entrepreneurship"] +category: "AI & Business" +featured: true +--- + +# Building AutoApply: Lessons from Creating an AI-Powered SaaS that Generated $480K ARR + +## The Problem That Started It All + +Job searching is broken. The average job seeker spends 5+ hours per application, manually filling out repetitive forms, only to get rejected or ignored. After experiencing this frustration firsthand, I decided to build a solution that would automate the entire process using cutting-edge AI. + +## The Technical Architecture + +### Multi-Agent System Design + +AutoApply uses a sophisticated multi-agent architecture where different AI models handle specific tasks: + +```python +class JobApplicationAgent: + def __init__(self): + self.form_detector = YOLOv8FormDetector() + self.text_processor = GPT4TextProcessor() + self.content_generator = Claude3ContentGenerator() + self.quality_checker = QualityAssuranceAgent() + + async def process_application(self, job_url, user_profile): + # 1. Detect form elements using computer vision + form_elements = await self.form_detector.detect_forms(job_url) + + # 2. Extract job requirements + job_data = await self.text_processor.extract_requirements(job_url) + + # 3. Generate tailored responses + responses = await self.content_generator.generate_responses( + job_data, user_profile, form_elements + ) + + # 4. Quality assurance + validated_responses = await self.quality_checker.validate(responses) + + return validated_responses +``` + +### Computer Vision for Form Detection + +The breakthrough came when I realized that traditional web scraping wasn't reliable enough. 
Instead, I fine-tuned YOLOv8 to detect form elements visually: + +```python +import torch +from ultralytics import YOLO + +class FormDetector: + def __init__(self): + self.model = YOLO('models/form_detector_v8.pt') + self.confidence_threshold = 0.85 + + def detect_forms(self, screenshot): + results = self.model(screenshot) + + detected_elements = [] + for result in results: + boxes = result.boxes + for box in boxes: + if box.conf > self.confidence_threshold: + element_type = self.get_element_type(box.cls) + detected_elements.append({ + 'type': element_type, + 'bbox': box.xyxy.tolist(), + 'confidence': box.conf.item() + }) + + return detected_elements +``` + +**Key Achievement**: 94.3% accuracy in form detection across 1,000+ different job sites. + +## Scaling Challenges and Solutions + +### 1. API Rate Limiting + +With 10K+ monthly active users, API costs and rate limits became critical: + +```python +import asyncio +import aiohttp +from collections import defaultdict + +class APIRateLimiter: + def __init__(self): + self.request_counts = defaultdict(int) + self.last_reset = defaultdict(float) + self.semaphores = { + 'openai': asyncio.Semaphore(50), # 50 concurrent requests + 'anthropic': asyncio.Semaphore(30) + } + + async def make_request(self, provider, endpoint, data): + async with self.semaphores[provider]: + # Implement exponential backoff + for attempt in range(3): + try: + async with aiohttp.ClientSession() as session: + response = await session.post(endpoint, json=data) + if response.status == 200: + return await response.json() + except Exception as e: + await asyncio.sleep(2 ** attempt) + + raise Exception(f"Failed after 3 attempts for {provider}") +``` + +### 2. Database Optimization + +Processing 2.8M+ monthly queries required careful database design: + +```sql +-- Optimized indexing for job applications +CREATE INDEX CONCURRENTLY idx_applications_user_status_date +ON applications (user_id, status, created_at DESC); + +-- Partitioning by date for better performance +CREATE TABLE applications_2024_01 PARTITION OF applications +FOR VALUES FROM ('2024-01-01') TO ('2024-02-01'); +``` + +## Business Model and Growth + +### Revenue Streams + +1. **Subscription Tiers**: + - Basic: $29/month (50 applications) + - Pro: $79/month (200 applications) + - Enterprise: $199/month (unlimited) + +2. **Usage-Based Pricing**: $0.50 per additional application + +### Growth Metrics + +- **MRR Growth**: $0 → $40K in 6 months +- **User Acquisition**: 70% organic, 30% paid +- **Churn Rate**: 8% monthly (industry average: 15%) +- **Customer LTV**: $340 + +## Key Technical Innovations + +### 1. Context-Aware Response Generation + +```python +class ContextualResponseGenerator: + def __init__(self): + self.embeddings_model = SentenceTransformer('all-MiniLM-L6-v2') + self.response_cache = {} + + def generate_response(self, question, job_context, user_profile): + # Create semantic embeddings for context matching + question_embedding = self.embeddings_model.encode(question) + + # Find similar past responses + similar_responses = self.find_similar_responses(question_embedding) + + # Generate contextual response + prompt = f""" + Job Context: {job_context} + User Profile: {user_profile} + Question: {question} + Similar Past Responses: {similar_responses} + + Generate a tailored response that: + 1. Addresses the specific question + 2. Highlights relevant experience + 3. Matches the company's tone + """ + + return self.llm.generate(prompt) +``` + +### 2. 
Quality Assurance Pipeline + +```python +class QualityAssuranceAgent: + def __init__(self): + self.grammar_checker = LanguageTool('en-US') + self.relevance_scorer = RelevanceScorer() + + async def validate_response(self, response, job_requirements): + checks = await asyncio.gather( + self.check_grammar(response), + self.check_relevance(response, job_requirements), + self.check_length(response), + self.check_keywords(response, job_requirements) + ) + + return { + 'is_valid': all(check['passed'] for check in checks), + 'score': sum(check['score'] for check in checks) / len(checks), + 'suggestions': [check['suggestion'] for check in checks if check['suggestion']] + } +``` + +## Lessons Learned + +### 1. AI Model Selection Matters + +- **GPT-4**: Excellent for reasoning and complex tasks +- **Claude-3**: Better for creative writing and nuanced responses +- **Custom Models**: Essential for domain-specific tasks (form detection) + +### 2. User Experience is Everything + +```javascript +// Real-time progress tracking +const ApplicationProgress = ({ applicationId }) => { + const [progress, setProgress] = useState(0); + const [currentStep, setCurrentStep] = useState(''); + + useEffect(() => { + const ws = new WebSocket(`ws://api.autoapply.co/progress/${applicationId}`); + + ws.onmessage = (event) => { + const data = JSON.parse(event.data); + setProgress(data.progress); + setCurrentStep(data.current_step); + }; + + return () => ws.close(); + }, [applicationId]); + + return ( +
+    <div>
+      <div style={{ width: `${progress}%` }} />
+      <p>Currently: {currentStep}</p>
+    </div>
    + ); +}; +``` + +### 3. Monitoring and Observability + +```python +import structlog +from opentelemetry import trace + +logger = structlog.get_logger() +tracer = trace.get_tracer(__name__) + +@tracer.start_as_current_span("process_application") +def process_application(job_url, user_id): + with tracer.start_as_current_span("form_detection"): + forms = detect_forms(job_url) + logger.info("Forms detected", count=len(forms), user_id=user_id) + + with tracer.start_as_current_span("content_generation"): + responses = generate_responses(forms) + logger.info("Responses generated", user_id=user_id) + + return responses +``` + +## The Road Ahead + +### Current Challenges + +1. **Ethical Considerations**: Ensuring fair use and transparency +2. **Technical Debt**: Refactoring early MVP code +3. **Scalability**: Preparing for 100K+ users +4. **Competition**: Staying ahead of copycats + +### Future Enhancements + +- **Multi-language Support**: Expanding to European markets +- **Video Interview Prep**: AI-powered interview coaching +- **Performance Analytics**: Detailed success metrics +- **Enterprise Features**: Team management and reporting + +## Conclusion + +Building AutoApply taught me that successful AI products require more than just good models – they need excellent user experience, robust infrastructure, and continuous iteration based on user feedback. + +The key takeaways: +1. **Start with a real problem** you've experienced yourself +2. **Combine multiple AI models** for better results +3. **Focus on reliability** over feature complexity +4. **Monitor everything** – AI systems are unpredictable +5. **Scale gradually** – don't overengineer early + +AutoApply continues to evolve, and I'm excited to see how AI will transform the job search process in the coming years. + +--- + +*Want to learn more about building AI-powered SaaS products? Feel free to reach out – I'm always happy to share insights with fellow entrepreneurs and developers.* \ No newline at end of file diff --git a/src/content/blog/cuda-basics-blog-post.md b/src/content/blog/cuda-basics-blog-post.md new file mode 100644 index 0000000..e81f54e --- /dev/null +++ b/src/content/blog/cuda-basics-blog-post.md @@ -0,0 +1,456 @@ +# CUDA: Unleashing the Power of GPU Computing + +## Introduction + +In the world of high-performance computing, the shift from CPU-only processing to GPU-accelerated computing has been nothing short of revolutionary. At the heart of this transformation lies CUDA (Compute Unified Device Architecture), NVIDIA's parallel computing platform that has democratized GPU programming and enabled breakthroughs in fields ranging from scientific computing to artificial intelligence. Whether you're looking to accelerate your scientific simulations, train deep learning models, or process massive datasets, understanding CUDA is essential. + +## What is CUDA? + +CUDA is a parallel computing platform and programming model developed by NVIDIA that enables developers to use GPUs (Graphics Processing Units) for general-purpose computing. Introduced in 2006, CUDA transformed GPUs from specialized graphics rendering devices into powerful parallel processors capable of tackling complex computational problems. + +The key insight behind CUDA is that many computational problems can be expressed as parallel operations—the same operation applied to many data elements simultaneously. While CPUs excel at sequential tasks with complex branching logic, GPUs with their thousands of cores are perfect for parallel workloads. 
CUDA provides the tools and abstractions to harness this massive parallelism. + +### Why GPU Computing? + +Consider this comparison: +- A modern CPU might have 8-16 cores, each optimized for sequential execution +- A modern GPU has thousands of smaller cores designed for parallel execution +- For parallelizable tasks, GPUs can be 10-100x faster than CPUs + +## Core Concepts and Architecture + +### 1. The CUDA Programming Model + +CUDA extends C/C++ with a few key concepts: + +```cuda +// CPU code (host) +int main() { + int *h_data, *d_data; // h_ for host, d_ for device + int size = 1024 * sizeof(int); + + // Allocate memory on host + h_data = (int*)malloc(size); + + // Allocate memory on GPU + cudaMalloc(&d_data, size); + + // Copy data from host to device + cudaMemcpy(d_data, h_data, size, cudaMemcpyHostToDevice); + + // Launch kernel with 256 blocks, 1024 threads per block + myKernel<<<256, 1024>>>(d_data); + + // Copy results back + cudaMemcpy(h_data, d_data, size, cudaMemcpyDeviceToHost); + + // Cleanup + cudaFree(d_data); + free(h_data); +} + +// GPU code (device) +__global__ void myKernel(int *data) { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + data[idx] = data[idx] * 2; // Simple operation +} +``` + +### 2. Thread Hierarchy + +CUDA organizes threads in a hierarchical structure: + +- **Thread**: The basic unit of execution +- **Block**: A group of threads that can cooperate and share memory +- **Grid**: A collection of blocks + +```cuda +// Understanding thread indexing +__global__ void indexExample() { + // Global thread ID calculation + int globalIdx = blockIdx.x * blockDim.x + threadIdx.x; + + // 2D grid example + int x = blockIdx.x * blockDim.x + threadIdx.x; + int y = blockIdx.y * blockDim.y + threadIdx.y; + int idx = y * gridDim.x * blockDim.x + x; +} +``` + +### 3. Memory Hierarchy + +CUDA provides several memory types with different characteristics: + +```cuda +__global__ void memoryExample(float *input, float *output) { + // Shared memory - fast, shared within block + __shared__ float tile[256]; + + // Registers - fastest, private to each thread + float temp = input[threadIdx.x]; + + // Global memory - large but slow + output[threadIdx.x] = temp; + + // Constant memory - cached, read-only + // Texture memory - cached, optimized for 2D locality +} +``` + +### 4. 
GPU Architecture Basics + +Modern NVIDIA GPUs consist of: +- **Streaming Multiprocessors (SMs)**: Independent processors that execute blocks +- **CUDA Cores**: Basic arithmetic units within SMs +- **Warp Schedulers**: Manage thread execution in groups of 32 (warps) +- **Memory Controllers**: Handle data movement + +## Writing Your First CUDA Program + +Let's create a complete CUDA program that adds two vectors: + +```cuda +#include +#include + +// CUDA kernel for vector addition +__global__ void vectorAdd(float *a, float *b, float *c, int n) { + // Calculate global thread ID + int tid = blockIdx.x * blockDim.x + threadIdx.x; + + // Boundary check + if (tid < n) { + c[tid] = a[tid] + b[tid]; + } +} + +int main() { + int n = 1000000; // 1 million elements + size_t size = n * sizeof(float); + + // Allocate host memory + float *h_a = (float*)malloc(size); + float *h_b = (float*)malloc(size); + float *h_c = (float*)malloc(size); + + // Initialize input vectors + for (int i = 0; i < n; i++) { + h_a[i] = i; + h_b[i] = i * 2; + } + + // Allocate device memory + float *d_a, *d_b, *d_c; + cudaMalloc(&d_a, size); + cudaMalloc(&d_b, size); + cudaMalloc(&d_c, size); + + // Copy input data to device + cudaMemcpy(d_a, h_a, size, cudaMemcpyHostToDevice); + cudaMemcpy(d_b, h_b, size, cudaMemcpyHostToDevice); + + // Launch kernel + int threadsPerBlock = 256; + int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock; + vectorAdd<<>>(d_a, d_b, d_c, n); + + // Copy result back to host + cudaMemcpy(h_c, d_c, size, cudaMemcpyDeviceToHost); + + // Verify results + for (int i = 0; i < 10; i++) { + printf("%.0f + %.0f = %.0f\n", h_a[i], h_b[i], h_c[i]); + } + + // Cleanup + free(h_a); free(h_b); free(h_c); + cudaFree(d_a); cudaFree(d_b); cudaFree(d_c); + + return 0; +} +``` + +Compile and run: +```bash +nvcc vector_add.cu -o vector_add +./vector_add +``` + +## Advanced CUDA Features + +### 1. Shared Memory Optimization + +Shared memory is crucial for performance optimization: + +```cuda +__global__ void matrixMultiplyShared(float *A, float *B, float *C, int width) { + __shared__ float tileA[16][16]; + __shared__ float tileB[16][16]; + + int bx = blockIdx.x, by = blockIdx.y; + int tx = threadIdx.x, ty = threadIdx.y; + + int row = by * 16 + ty; + int col = bx * 16 + tx; + + float sum = 0.0f; + + // Loop over tiles + for (int tile = 0; tile < width/16; tile++) { + // Load tiles into shared memory + tileA[ty][tx] = A[row * width + tile * 16 + tx]; + tileB[ty][tx] = B[(tile * 16 + ty) * width + col]; + __syncthreads(); + + // Compute partial product + for (int k = 0; k < 16; k++) { + sum += tileA[ty][k] * tileB[k][tx]; + } + __syncthreads(); + } + + C[row * width + col] = sum; +} +``` + +### 2. Atomic Operations + +For concurrent updates to shared data: + +```cuda +__global__ void histogram(int *data, int *hist, int n) { + int tid = blockIdx.x * blockDim.x + threadIdx.x; + if (tid < n) { + atomicAdd(&hist[data[tid]], 1); + } +} +``` + +### 3. Dynamic Parallelism + +Launch kernels from within kernels: + +```cuda +__global__ void parentKernel(int *data, int n) { + if (threadIdx.x == 0) { + // Launch child kernel + childKernel<<<1, 256>>>(data + blockIdx.x * 256, 256); + } +} +``` + +### 4. 
CUDA Streams + +Enable concurrent operations: + +```cuda +cudaStream_t stream1, stream2; +cudaStreamCreate(&stream1); +cudaStreamCreate(&stream2); + +// Async operations on different streams +cudaMemcpyAsync(d_a, h_a, size, cudaMemcpyHostToDevice, stream1); +cudaMemcpyAsync(d_b, h_b, size, cudaMemcpyHostToDevice, stream2); + +kernel1<<>>(d_a); +kernel2<<>>(d_b); + +cudaStreamSynchronize(stream1); +cudaStreamSynchronize(stream2); +``` + +## Optimization Techniques + +### 1. Coalesced Memory Access + +Ensure threads access contiguous memory: + +```cuda +// Good - coalesced access +__global__ void good(float *data) { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + float val = data[idx]; // Thread 0->data[0], Thread 1->data[1], etc. +} + +// Bad - strided access +__global__ void bad(float *data) { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + float val = data[idx * 32]; // Thread 0->data[0], Thread 1->data[32], etc. +} +``` + +### 2. Occupancy Optimization + +Balance resources for maximum throughput: + +```cuda +// Use CUDA occupancy calculator +int blockSize; +int minGridSize; +cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, myKernel, 0, 0); + +// Launch with optimal configuration +myKernel<<>>(data); +``` + +### 3. Warp-Level Primitives + +Leverage warp-level operations: + +```cuda +__global__ void warpReduce(float *data) { + float val = data[threadIdx.x]; + + // Warp-level reduction + for (int offset = 16; offset > 0; offset /= 2) { + val += __shfl_down_sync(0xffffffff, val, offset); + } + + if (threadIdx.x % 32 == 0) { + // Thread 0 of each warp has the sum + atomicAdd(output, val); + } +} +``` + +## CUDA Libraries and Ecosystem + +NVIDIA provides highly optimized libraries: + +### 1. cuBLAS - Linear Algebra + +```cpp +#include + +cublasHandle_t handle; +cublasCreate(&handle); + +// Matrix multiplication: C = alpha * A * B + beta * C +cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, + m, n, k, &alpha, + d_A, m, d_B, k, &beta, d_C, m); +``` + +### 2. cuDNN - Deep Learning + +```cpp +#include + +cudnnHandle_t cudnn; +cudnnCreate(&cudnn); + +// Convolution forward pass +cudnnConvolutionForward(cudnn, &alpha, xDesc, x, wDesc, w, + convDesc, algo, workspace, workspaceSize, + &beta, yDesc, y); +``` + +### 3. Thrust - C++ Template Library + +```cpp +#include +#include + +thrust::device_vector d_vec(1000000); +thrust::sort(d_vec.begin(), d_vec.end()); +``` + +## Debugging and Profiling + +### 1. Error Checking + +Always check CUDA errors: + +```cuda +#define CUDA_CHECK(call) \ + do { \ + cudaError_t error = call; \ + if (error != cudaSuccess) { \ + fprintf(stderr, "CUDA error at %s:%d - %s\n", \ + __FILE__, __LINE__, cudaGetErrorString(error)); \ + exit(1); \ + } \ + } while(0) + +// Usage +CUDA_CHECK(cudaMalloc(&d_data, size)); +``` + +### 2. NVIDIA Nsight Tools + +- **Nsight Systems**: System-wide performance analysis +- **Nsight Compute**: Kernel-level profiling +- **cuda-memcheck**: Memory error detection + +```bash +# Profile your application +nsys profile ./my_cuda_app +ncu --set full ./my_cuda_app +``` + +## Common Pitfalls and Best Practices + +### 1. Memory Management +- Always free allocated memory +- Use cudaMallocManaged for unified memory when appropriate +- Be aware of memory bandwidth limitations + +### 2. Thread Divergence +```cuda +// Avoid divergent branches +if (threadIdx.x < 16) { + // Half the warp takes this path +} else { + // Other half takes this path - causes divergence +} +``` + +### 3. 
Grid and Block Size Selection +- Block size should be multiple of 32 (warp size) +- Consider hardware limits (max threads per block, registers) +- Use occupancy calculator for guidance + +### 4. Synchronization +```cuda +// Block-level synchronization +__syncthreads(); + +// Device-level synchronization +cudaDeviceSynchronize(); +``` + +## Real-World Applications + +1. **Deep Learning**: Training neural networks (PyTorch, TensorFlow) +2. **Scientific Computing**: Molecular dynamics, climate modeling +3. **Image Processing**: Real-time filters, computer vision +4. **Finance**: Monte Carlo simulations, risk analysis +5. **Cryptography**: Password cracking, blockchain mining + +## Getting Started Resources + +1. **NVIDIA CUDA Toolkit**: Essential development tools +2. **CUDA Programming Guide**: Comprehensive official documentation +3. **CUDA by Example**: Excellent book for beginners +4. **GPU Gems**: Advanced techniques and algorithms +5. **NVIDIA Developer Forums**: Active community support + +## Future of CUDA + +CUDA continues to evolve with new GPU architectures: +- **Tensor Cores**: Specialized units for AI workloads +- **Ray Tracing Cores**: Hardware-accelerated ray tracing +- **Multi-Instance GPU (MIG)**: Partition GPUs for multiple users +- **CUDA Graphs**: Reduce kernel launch overhead + +## Conclusion + +CUDA has transformed the computing landscape by making GPU programming accessible to developers worldwide. What started as a way to use graphics cards for general computation has evolved into a comprehensive ecosystem powering everything from AI breakthroughs to scientific discoveries. + +The key to mastering CUDA is understanding its parallel execution model and memory hierarchy. Start with simple kernels, profile your code, and gradually optimize. Remember that not all problems benefit from GPU acceleration—CUDA shines when you have massive parallelism and arithmetic intensity. + +As we enter an era of increasingly parallel computing, CUDA skills become ever more valuable. Whether you're accelerating existing applications or building new ones from scratch, CUDA provides the tools to harness the incredible power of modern GPUs. + +--- + +*Ready to start your CUDA journey? Download the CUDA Toolkit from NVIDIA's developer site and begin with simple vector operations. The world of accelerated computing awaits, and with CUDA, you have the key to unlock it.* \ No newline at end of file diff --git a/src/content/blog/future-of-ai-blog-post.md b/src/content/blog/future-of-ai-blog-post.md new file mode 100644 index 0000000..56af311 --- /dev/null +++ b/src/content/blog/future-of-ai-blog-post.md @@ -0,0 +1,199 @@ +# The Future of AI: Navigating the Next Decade of Intelligent Systems + +## The AI revolution is just beginning. Here's what leaders need to know about the transformative technologies that will reshape business and society in the coming decade. + +As we stand at the threshold of 2025, artificial intelligence has evolved from a promising technology to a fundamental driver of business transformation. The rapid advancement from simple chatbots to sophisticated reasoning systems like OpenAI's o1 and DeepSeek's R1 signals that we're entering a new phase of AI capability—one that will fundamentally reshape how organizations operate, compete, and create value. + +The question is no longer whether AI will transform your industry, but how quickly you can adapt to harness its potential while navigating its complexities. 
+ +## The Current State: AI at an Inflection Point + +Today's AI landscape is characterized by unprecedented capability and accessibility. Large language models have democratized access to AI, enabling organizations of all sizes to leverage sophisticated natural language processing, code generation, and analytical capabilities. Meanwhile, specialized AI systems are achieving superhuman performance in domains ranging from protein folding to strategic game playing. + +But we're witnessing something more profound than incremental improvement. The emergence of multimodal models that seamlessly process text, images, and audio, combined with reasoning capabilities that can tackle complex mathematical and scientific problems, suggests we're approaching a fundamental shift in what machines can accomplish. + +**Key indicators of this inflection point:** +- AI models demonstrating emergent capabilities not explicitly programmed +- Dramatic cost reductions in AI deployment (100x decrease in inference costs since 2020) +- Integration of AI into critical business processes across industries +- Shift from AI as a tool to AI as a collaborative partner + +## Five Transformative Trends Shaping AI's Future + +### 1. The Rise of Agentic AI + +The next frontier of AI isn't just about answering questions—it's about taking action. Agentic AI systems will autonomously pursue complex goals, manage multi-step processes, and coordinate with other AI agents and humans to accomplish objectives. + +**What this means for business:** +- Autonomous AI employees handling complete workflows +- Self-improving systems that optimize their own performance +- AI-to-AI marketplaces where specialized agents collaborate +- Dramatic reduction in operational overhead for routine tasks + +**Timeline:** Early agentic systems are already emerging. Expect widespread adoption by 2027, with mature ecosystems by 2030. + +### 2. Reasoning and Scientific Discovery + +The ability of AI to engage in complex reasoning marks a paradigm shift. Models like OpenAI's o1 and DeepSeek's R1 demonstrate that AI can now work through multi-step problems, explore hypotheses, and even conduct scientific research. + +**Transformative potential:** +- Acceleration of drug discovery and materials science +- AI-driven hypothesis generation and experimental design +- Mathematical theorem proving and discovery +- Complex system optimization across supply chains and infrastructure + +**Business impact:** Organizations that integrate reasoning AI into their R&D processes will achieve 10x productivity gains in innovation cycles. + +### 3. The Convergence of Physical and Digital AI + +As robotics hardware catches up with AI software, we're approaching an era where AI won't just think—it will act in the physical world with unprecedented dexterity and autonomy. + +**Key developments:** +- Humanoid robots entering manufacturing and service industries +- AI-powered autonomous systems in agriculture, construction, and logistics +- Seamless integration between digital planning and physical execution +- Embodied AI learning from physical interactions + +**Projection:** By 2030, 30% of physical labor in structured environments will be augmented or automated by AI-powered robotics. + +### 4. Personalized AI: From General to Specific + +The future of AI is deeply personal. Rather than one-size-fits-all models, we're moving toward AI systems that adapt to individual users, learning their preferences, work styles, and goals. 
+ +**Evolution pathway:** +- Personal AI assistants that understand context and history +- Domain-specific AI trained on proprietary organizational knowledge +- Adaptive learning systems that improve through interaction +- Privacy-preserving personalization through federated learning + +**Critical consideration:** The balance between personalization and privacy will define the boundaries of acceptable AI deployment. + +### 5. AI Governance and Ethical AI by Design + +As AI systems become more powerful and pervasive, governance frameworks are evolving from afterthoughts to fundamental architecture components. + +**Emerging frameworks:** +- Built-in explainability and audit trails +- Automated bias detection and mitigation +- Regulatory compliance through technical standards +- International cooperation on AI safety standards + +**Business imperative:** Organizations that build ethical AI practices now will avoid costly retrofitting and maintain social license to operate. + +## Industries at the Forefront of AI Transformation + +### Healthcare: From Reactive to Predictive + +AI is shifting healthcare from treating illness to preventing it. Continuous monitoring, genetic analysis, and behavioral data will enable AI to predict health issues years before symptoms appear. + +**2030 vision:** +- AI-driven personalized medicine based on individual genetics +- Virtual health assistants managing chronic conditions +- Drug discovery timelines reduced from decades to years +- Surgical robots performing complex procedures with superhuman precision + +### Financial Services: Intelligent Money + +The financial sector is becoming an AI-first industry, with algorithms making microsecond trading decisions and AI advisors managing trillions in assets. + +**Transformation vectors:** +- Real-time fraud prevention with 99.99% accuracy +- Hyper-personalized financial products +- Autonomous trading systems operating within regulatory frameworks +- Democratized access to sophisticated financial strategies + +### Education: Adaptive Learning at Scale + +AI tutors that adapt to each student's learning style, pace, and interests will make personalized education accessible globally. + +**Revolutionary changes:** +- AI teaching assistants providing 24/7 support +- Curriculum that evolves based on job market demands +- Skill verification through AI-proctored assessments +- Lifelong learning companions that grow with learners + +### Manufacturing: The Autonomous Factory + +Smart factories will self-optimize, predict maintenance needs, and adapt production in real-time to demand fluctuations. + +**Industry 5.0 features:** +- Zero-defect manufacturing through AI quality control +- Demand-driven production with minimal waste +- Human-robot collaboration enhancing worker capabilities +- Supply chain orchestration across global networks + +## Navigating the Challenges Ahead + +### The Talent Imperative + +The AI skills gap represents both the greatest challenge and opportunity for organizations. Success requires not just hiring AI specialists but reskilling entire workforces. + +**Strategic priorities:** +- Establish AI literacy programs for all employees +- Create centers of excellence for AI innovation +- Partner with educational institutions for talent pipelines +- Develop retention strategies for AI talent + +### Infrastructure and Integration + +Legacy systems and data silos remain significant barriers to AI adoption. Organizations must modernize their technology stacks while maintaining operational continuity. 
+ +**Critical investments:** +- Cloud-native architectures supporting AI workloads +- Data governance frameworks ensuring quality and compliance +- API-first strategies enabling AI integration +- Edge computing infrastructure for real-time AI + +### Ethical and Societal Considerations + +As AI systems gain capability, questions of accountability, fairness, and societal impact become paramount. + +**Essential considerations:** +- Establishing clear accountability for AI decisions +- Ensuring equitable access to AI benefits +- Managing workforce transitions with dignity +- Contributing to societal discussions on AI governance + +## Strategic Imperatives for Leaders + +### 1. Develop an AI-First Mindset + +Stop thinking of AI as a technology to implement and start thinking of it as a capability to cultivate. Every business process, customer interaction, and strategic decision should be examined through the lens of AI enhancement. + +### 2. Invest in Data as a Strategic Asset + +AI is only as good as the data it learns from. Organizations must treat data as a strategic asset, investing in quality, governance, and accessibility. + +### 3. Build Adaptive Organizations + +The pace of AI advancement requires organizational agility. Create structures that can rapidly experiment, learn, and scale successful AI initiatives. + +### 4. Embrace Responsible Innovation + +Ethical AI isn't a constraint—it's a competitive advantage. Organizations that build trust through responsible AI practices will win in the long term. + +### 5. Think Ecosystem, Not Enterprise + +The future of AI is collaborative. Build partnerships, participate in industry initiatives, and contribute to the broader AI ecosystem. + +## The Road Ahead: 2025-2035 + +The next decade will witness AI's evolution from a powerful tool to an indispensable partner in human progress. We'll see: + +- **2025-2027**: Consolidation of current capabilities, widespread adoption of generative AI, emergence of early agentic systems +- **2028-2030**: Breakthrough in artificial general intelligence (AGI) capabilities, seamless human-AI collaboration, transformation of major industries +- **2031-2035**: Potential achievement of AGI, fundamental restructuring of work and society, new forms of human-AI symbiosis + +## Conclusion: The Time for Action is Now + +The future of AI isn't a distant possibility—it's unfolding before us at an accelerating pace. Organizations that move decisively to build AI capabilities, while thoughtfully addressing the associated challenges, will shape the next era of human achievement. + +The choice isn't whether to adopt AI, but how quickly and effectively you can integrate it into your organization's DNA. Those who hesitate risk not just competitive disadvantage but potential irrelevance. + +As we navigate this transformative period, success will belong to those who view AI not as a threat to human potential but as its greatest amplifier. The organizations that thrive will be those that combine the creativity, empathy, and wisdom of humans with the speed, scale, and precision of AI. + +The future of AI is not predetermined—it's being written now by the choices we make and the actions we take. What role will your organization play in shaping this future? + +--- + +*The journey to an AI-powered future begins with a single step. Whether you're just starting your AI transformation or looking to accelerate existing initiatives, the time for action is now. 
The future belongs to those who prepare for it today.* \ No newline at end of file diff --git a/src/content/blog/graph-neural-networks-materials.md b/src/content/blog/graph-neural-networks-materials.md new file mode 100644 index 0000000..7ce5985 --- /dev/null +++ b/src/content/blog/graph-neural-networks-materials.md @@ -0,0 +1,73 @@ +--- +title: "Graph Neural Networks for Materials Discovery" +excerpt: "Exploring how Graph Neural Networks can revolutionize materials science by predicting synthesis conditions and properties." +author: "Jan Heimann" +date: "2025-01-05" +readTime: "12 min read" +tags: ["Graph Neural Networks", "Materials Science", "PyTorch", "AI4Science"] +category: "Research" +featured: true +--- + +# Graph Neural Networks for Materials Discovery + +## The Challenge + +Materials discovery traditionally relies on expensive experiments and trial-and-error approaches. Graph Neural Networks (GNNs) offer a promising solution by modeling the structural relationships in materials. + +## Why GNNs for Materials? + +### Graph Representation +- **Atoms as Nodes**: Each atom becomes a node with features +- **Bonds as Edges**: Chemical bonds define the graph structure +- **Structural Awareness**: Natural representation of molecular structure + +### Advantages over Traditional ML +- **Permutation Invariance**: Order of atoms doesn't matter +- **Size Flexibility**: Handle molecules of varying sizes +- **Interpretability**: Attention mechanisms show important regions + +## Implementation with PyTorch Geometric + +```python +import torch +import torch.nn.functional as F +from torch_geometric.nn import GCNConv, global_mean_pool + +class MaterialGNN(torch.nn.Module): + def __init__(self, num_features, hidden_dim, num_classes): + super(MaterialGNN, self).__init__() + self.conv1 = GCNConv(num_features, hidden_dim) + self.conv2 = GCNConv(hidden_dim, hidden_dim) + self.conv3 = GCNConv(hidden_dim, hidden_dim) + self.classifier = torch.nn.Linear(hidden_dim, num_classes) + + def forward(self, x, edge_index, batch): + x = F.relu(self.conv1(x, edge_index)) + x = F.relu(self.conv2(x, edge_index)) + x = F.relu(self.conv3(x, edge_index)) + x = global_mean_pool(x, batch) + return self.classifier(x) +``` + +## Real-World Applications + +1. **Synthesis Prediction**: Predicting optimal conditions for material synthesis +2. **Property Prediction**: Estimating material properties from structure +3. **Drug Discovery**: Accelerating pharmaceutical research +4. **Catalyst Design**: Optimizing catalytic materials + +## Results and Impact + +Our research shows significant improvements over traditional methods: +- **9.2% accuracy improvement** in synthesis prediction +- **Faster convergence** in training time +- **Better generalization** to unseen materials + +## Future Directions + +- **Multi-modal Integration**: Combining structural and experimental data +- **Uncertainty Quantification**: Providing confidence estimates +- **Active Learning**: Iteratively improving models with new data + +The future of materials discovery lies in the intelligent combination of domain knowledge and advanced ML techniques. 
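For readers who want to try the architecture above end to end, here is a minimal forward-pass sketch on toy data (graph sizes, feature dimensions, and class count are illustrative; it assumes PyTorch Geometric is installed and `MaterialGNN` is defined as shown earlier):

```python
import torch
from torch_geometric.data import Data, Batch

# Two toy "materials", each with 4 atoms, 16 features per atom, and a few bonds
# (all numbers here are illustrative, not real chemistry)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
graphs = [Data(x=torch.randn(4, 16), edge_index=edge_index) for _ in range(2)]

# Mini-batch the graphs the same way a PyTorch Geometric DataLoader would
batch = Batch.from_data_list(graphs)

model = MaterialGNN(num_features=16, hidden_dim=64, num_classes=3)
logits = model(batch.x, batch.edge_index, batch.batch)
print(logits.shape)  # torch.Size([2, 3]) -> one prediction per graph
```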
\ No newline at end of file diff --git a/src/content/blog/langchain-basics-blog-post.md b/src/content/blog/langchain-basics-blog-post.md new file mode 100644 index 0000000..10a514e --- /dev/null +++ b/src/content/blog/langchain-basics-blog-post.md @@ -0,0 +1,466 @@ +# LangChain: Building Powerful LLM Applications Made Simple + +## Introduction + +The rise of large language models (LLMs) like GPT-4, Claude, and LLaMA has opened up incredible possibilities for AI-powered applications. However, building production-ready LLM applications involves much more than just making API calls to these models. You need to handle prompts, manage conversation history, connect to external data sources, and orchestrate complex workflows. This is where LangChain comes in—a framework designed to simplify and streamline the development of LLM-powered applications. + +## What is LangChain? + +LangChain is an open-source framework created by Harrison Chase in October 2022 that provides a set of tools and abstractions for building applications with LLMs. It's designed around the principle of composability, allowing developers to chain together different components to create sophisticated applications. + +The framework addresses several key challenges in LLM application development: +- **Context management**: Handling conversation history and context windows +- **Data connectivity**: Integrating LLMs with external data sources +- **Agent capabilities**: Building LLMs that can use tools and take actions +- **Memory systems**: Implementing short-term and long-term memory for applications +- **Prompt engineering**: Managing and optimizing prompts systematically + +LangChain has quickly become one of the most popular frameworks in the LLM ecosystem, with implementations in both Python and JavaScript/TypeScript. + +## Core Concepts and Components + +### 1. Models: The Foundation + +LangChain provides a unified interface for working with different LLM providers. Whether you're using OpenAI, Anthropic, Hugging Face, or local models, the interface remains consistent. + +```python +from langchain.llms import OpenAI +from langchain.chat_models import ChatOpenAI + +# Standard LLM +llm = OpenAI(temperature=0.7) +response = llm("What is the capital of France?") + +# Chat model (for conversation-style interactions) +chat = ChatOpenAI(temperature=0.7) +from langchain.schema import HumanMessage, SystemMessage + +messages = [ + SystemMessage(content="You are a helpful geography teacher."), + HumanMessage(content="What is the capital of France?") +] +response = chat(messages) +``` + +### 2. Prompts: Template Management + +Prompt templates help you create reusable, dynamic prompts that can be filled with variables at runtime. + +```python +from langchain.prompts import PromptTemplate, ChatPromptTemplate + +# Simple prompt template +prompt = PromptTemplate( + input_variables=["product"], + template="What are the main features of {product}?" +) + +# Chat prompt template +chat_prompt = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant that explains technical concepts."), + ("human", "Explain {concept} in simple terms.") +]) + +# Using the template +formatted_prompt = prompt.format(product="iPhone 15") +response = llm(formatted_prompt) +``` + +### 3. Chains: Composing Components + +Chains are the core of LangChain's composability. They allow you to combine multiple components into a single, reusable pipeline. 
+ +```python +from langchain.chains import LLMChain, SimpleSequentialChain + +# Basic LLM Chain +chain = LLMChain(llm=llm, prompt=prompt) +result = chain.run("smartphone") + +# Sequential chain - output of one becomes input of next +first_prompt = PromptTemplate( + input_variables=["topic"], + template="Write a brief outline about {topic}." +) +second_prompt = PromptTemplate( + input_variables=["outline"], + template="Expand this outline into a detailed article: {outline}" +) + +chain1 = LLMChain(llm=llm, prompt=first_prompt) +chain2 = LLMChain(llm=llm, prompt=second_prompt) + +overall_chain = SimpleSequentialChain( + chains=[chain1, chain2], + verbose=True +) +result = overall_chain.run("artificial intelligence") +``` + +### 4. Memory: Maintaining Context + +LangChain provides various memory implementations to maintain conversation context across interactions. + +```python +from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory +from langchain.chains import ConversationChain + +# Buffer memory - stores everything +memory = ConversationBufferMemory() +conversation = ConversationChain( + llm=llm, + memory=memory, + verbose=True +) + +# Have a conversation +conversation.predict(input="Hi, my name is Alex") +conversation.predict(input="What's my name?") # Will remember! + +# Summary memory - summarizes long conversations +summary_memory = ConversationSummaryMemory(llm=llm) +conversation_with_summary = ConversationChain( + llm=llm, + memory=summary_memory, + verbose=True +) +``` + +### 5. Document Loaders and Text Splitters + +For RAG (Retrieval Augmented Generation) applications, LangChain provides tools to load and process documents. + +```python +from langchain.document_loaders import TextLoader, PyPDFLoader +from langchain.text_splitter import RecursiveCharacterTextSplitter + +# Load documents +loader = PyPDFLoader("document.pdf") +documents = loader.load() + +# Split into chunks +text_splitter = RecursiveCharacterTextSplitter( + chunk_size=1000, + chunk_overlap=200 +) +chunks = text_splitter.split_documents(documents) +``` + +### 6. Vector Stores and Embeddings + +Vector stores enable semantic search over your documents. + +```python +from langchain.embeddings import OpenAIEmbeddings +from langchain.vectorstores import FAISS + +# Create embeddings +embeddings = OpenAIEmbeddings() + +# Create vector store +vectorstore = FAISS.from_documents(chunks, embeddings) + +# Search for relevant documents +query = "What is the main topic discussed?" +relevant_docs = vectorstore.similarity_search(query, k=3) +``` + +## Building a RAG Application + +Let's build a complete RAG (Retrieval Augmented Generation) application that can answer questions about uploaded documents. + +```python +from langchain.chains import RetrievalQA +from langchain.document_loaders import DirectoryLoader +from langchain.embeddings import OpenAIEmbeddings +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain.vectorstores import Chroma +from langchain.chat_models import ChatOpenAI + +# 1. Load documents +loader = DirectoryLoader('./data', glob="**/*.pdf") +documents = loader.load() + +# 2. Split documents +text_splitter = RecursiveCharacterTextSplitter( + chunk_size=1500, + chunk_overlap=200 +) +splits = text_splitter.split_documents(documents) + +# 3. Create embeddings and vector store +embeddings = OpenAIEmbeddings() +vectorstore = Chroma.from_documents( + documents=splits, + embedding=embeddings, + persist_directory="./chroma_db" +) + +# 4. 
Create retriever +retriever = vectorstore.as_retriever( + search_type="similarity", + search_kwargs={"k": 4} +) + +# 5. Create QA chain +llm = ChatOpenAI(temperature=0, model_name="gpt-4") +qa_chain = RetrievalQA.from_chain_type( + llm=llm, + chain_type="stuff", + retriever=retriever, + return_source_documents=True, + verbose=True +) + +# 6. Ask questions +query = "What are the main points discussed in the documents?" +result = qa_chain({"query": query}) + +print(f"Answer: {result['result']}") +print(f"Source documents: {result['source_documents']}") +``` + +## Agents: LLMs with Tools + +One of LangChain's most powerful features is the ability to create agents—LLMs that can use tools to accomplish tasks. + +```python +from langchain.agents import initialize_agent, Tool +from langchain.agents import AgentType +from langchain.tools import DuckDuckGoSearchRun +from langchain.tools import ShellTool + +# Define tools +search = DuckDuckGoSearchRun() +shell = ShellTool() + +tools = [ + Tool( + name="Search", + func=search.run, + description="Useful for searching the internet for current information" + ), + Tool( + name="Terminal", + func=shell.run, + description="Useful for running shell commands" + ) +] + +# Create agent +agent = initialize_agent( + tools, + llm, + agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, + verbose=True +) + +# Use the agent +result = agent.run("Search for the current weather in San Francisco and create a file with that information") +``` + +## Advanced Features + +### 1. Custom Chains + +Create your own chains for specific use cases: + +```python +from langchain.chains.base import Chain +from typing import Dict, List + +class CustomAnalysisChain(Chain): + """Custom chain for analyzing text sentiment and extracting entities.""" + + @property + def input_keys(self) -> List[str]: + return ["text"] + + @property + def output_keys(self) -> List[str]: + return ["sentiment", "entities"] + + def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: + text = inputs["text"] + + # Sentiment analysis + sentiment_prompt = f"Analyze the sentiment of this text: {text}" + sentiment = self.llm(sentiment_prompt) + + # Entity extraction + entity_prompt = f"Extract all named entities from this text: {text}" + entities = self.llm(entity_prompt) + + return { + "sentiment": sentiment, + "entities": entities + } +``` + +### 2. Streaming Responses + +For better user experience, stream responses as they're generated: + +```python +from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler + +# Create LLM with streaming +streaming_llm = ChatOpenAI( + streaming=True, + callbacks=[StreamingStdOutCallbackHandler()], + temperature=0 +) + +# Use in a chain +chain = LLMChain(llm=streaming_llm, prompt=prompt) +chain.run("Tell me a story") # Will print as it generates +``` + +### 3. LangChain Expression Language (LCEL) + +LCEL provides a declarative way to compose chains: + +```python +from langchain.schema.runnable import RunnablePassthrough + +# Create a chain using LCEL +rag_chain = ( + {"context": retriever, "question": RunnablePassthrough()} + | prompt + | llm + | StrOutputParser() +) + +# Run the chain +result = rag_chain.invoke("What is the main topic?") +``` + +### 4. 
Callbacks and Monitoring + +Monitor and control your LangChain applications: + +```python +from langchain.callbacks import StdOutCallbackHandler +from langchain.callbacks.manager import CallbackManager + +# Custom callback +class CustomCallback(StdOutCallbackHandler): + def on_llm_start(self, serialized, prompts, **kwargs): + print(f"LLM started with prompt: {prompts[0]}") + + def on_llm_end(self, response, **kwargs): + print(f"LLM finished. Token usage: {response.llm_output}") + +# Use callbacks +callback_manager = CallbackManager([CustomCallback()]) +llm = ChatOpenAI(callback_manager=callback_manager) +``` + +## Best Practices and Tips + +### 1. Error Handling + +Always implement proper error handling, especially for API calls: + +```python +from langchain.schema import OutputParserException +import time + +def robust_chain_call(chain, input_text, max_retries=3): + for attempt in range(max_retries): + try: + return chain.run(input_text) + except Exception as e: + if attempt == max_retries - 1: + raise + print(f"Attempt {attempt + 1} failed: {e}") + time.sleep(2 ** attempt) # Exponential backoff +``` + +### 2. Cost Management + +Monitor and control API costs: + +```python +from langchain.callbacks import get_openai_callback + +with get_openai_callback() as cb: + result = llm("What is the meaning of life?") + print(f"Total Tokens: {cb.total_tokens}") + print(f"Total Cost: ${cb.total_cost}") +``` + +### 3. Prompt Optimization + +Use few-shot examples for better results: + +```python +from langchain.prompts import FewShotPromptTemplate + +examples = [ + {"input": "happy", "output": "sad"}, + {"input": "tall", "output": "short"}, +] + +example_prompt = PromptTemplate( + input_variables=["input", "output"], + template="Input: {input}\nOutput: {output}" +) + +few_shot_prompt = FewShotPromptTemplate( + examples=examples, + example_prompt=example_prompt, + prefix="Give the opposite of each input:", + suffix="Input: {input}\nOutput:", + input_variables=["input"] +) +``` + +### 4. Testing + +Test your chains thoroughly: + +```python +import pytest +from unittest.mock import Mock + +def test_chain(): + # Mock the LLM + mock_llm = Mock() + mock_llm.return_value = "Paris" + + # Create chain with mock + chain = LLMChain(llm=mock_llm, prompt=prompt) + result = chain.run("France") + + assert result == "Paris" + mock_llm.assert_called_once() +``` + +## Common Use Cases + +1. **Customer Support Chatbots**: Using conversation memory and RAG for context-aware responses +2. **Document Analysis**: Extracting insights from large document collections +3. **Code Generation**: Creating development tools that understand context +4. **Research Assistants**: Agents that can search, analyze, and synthesize information +5. **Data Processing Pipelines**: Automated workflows for processing unstructured data + +## Getting Started Resources + +1. **Official Documentation**: Comprehensive guides at python.langchain.com +2. **LangChain Hub**: Repository of shared prompts and chains +3. **Community**: Active Discord and GitHub discussions +4. **Templates**: Pre-built application templates for common use cases +5. **LangSmith**: Tool for debugging and monitoring LangChain applications + +## Conclusion + +LangChain has emerged as an essential framework for building LLM applications, providing the tools and abstractions needed to go from prototype to production. Its modular design, extensive integrations, and active community make it an excellent choice for developers looking to harness the power of LLMs. 
+ +The key to mastering LangChain is understanding its composable nature. Start with simple chains, experiment with different components, and gradually build more complex applications. Whether you're building a simple chatbot or a sophisticated AI agent, LangChain provides the flexibility and power you need. + +As the LLM landscape continues to evolve rapidly, LangChain keeps pace by adding new features, integrations, and optimizations. By learning LangChain, you're not just learning a framework—you're gaining the skills to build the next generation of AI-powered applications. + +--- + +*Ready to start building with LangChain? Install it with `pip install langchain openai` and begin creating your first LLM application. The future of AI applications is being built with LangChain, and now you have the knowledge to be part of it.* \ No newline at end of file diff --git a/src/content/blog/mlflow-docker-pipelines.md b/src/content/blog/mlflow-docker-pipelines.md new file mode 100644 index 0000000..ee5f737 --- /dev/null +++ b/src/content/blog/mlflow-docker-pipelines.md @@ -0,0 +1,59 @@ +--- +title: "Building Scalable Machine Learning Pipelines with MLflow and Docker" +excerpt: "A deep dive into creating production-ready ML pipelines that scale efficiently across different environments." +author: "Jan Heimann" +date: "2025-01-08" +readTime: "8 min read" +tags: ["MLflow", "Docker", "Machine Learning", "DevOps", "Production"] +category: "ML Engineering" +featured: true +--- + +# Building Scalable Machine Learning Pipelines with MLflow and Docker + +## Introduction + +In today's rapidly evolving AI landscape, deploying machine learning models to production requires more than just good algorithms. This article explores how to build robust, scalable ML pipelines using MLflow for experiment tracking and Docker for containerization. + +## Key Components + +### 1. MLflow for Experiment Management +- **Model Registry**: Version control for ML models +- **Experiment Tracking**: Monitor metrics, parameters, and artifacts +- **Model Serving**: Deploy models as REST APIs + +### 2. Docker for Containerization +- **Reproducible Environments**: Consistent deployment across platforms +- **Scalability**: Easy horizontal scaling with orchestration tools +- **Isolation**: Prevent dependency conflicts + +## Implementation Strategy + +```python +import mlflow +import mlflow.sklearn +from mlflow.models import infer_signature + +# Track experiment +with mlflow.start_run(): + model = train_model(X_train, y_train) + + # Log metrics + mlflow.log_metric("accuracy", accuracy) + mlflow.log_metric("f1_score", f1) + + # Log model + signature = infer_signature(X_test, predictions) + mlflow.sklearn.log_model(model, "model", signature=signature) +``` + +## Best Practices + +1. **Version Everything**: Code, data, and models +2. **Automate Testing**: Unit tests and integration tests +3. **Monitor Performance**: Real-time model performance tracking +4. **Implement CI/CD**: Automated deployment pipelines + +## Conclusion + +Building scalable ML pipelines requires careful consideration of tooling, architecture, and operational practices. MLflow and Docker provide a solid foundation for production ML systems. 
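To tie the two tools in the title together, here is a minimal sketch of serving and containerizing a logged model with MLflow's built-in Docker support (the run ID and image name are placeholders):

```bash
# Serve a logged model locally as a REST API
mlflow models serve -m "runs:/<run_id>/model" -p 5000

# Or build a self-contained Docker image around the same model
mlflow models build-docker -m "runs:/<run_id>/model" -n my-ml-service
docker run -p 5001:8080 my-ml-service   # the image serves on port 8080 inside the container
```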
\ No newline at end of file diff --git a/src/content/blog/openrlhf-blog-post.md b/src/content/blog/openrlhf-blog-post.md new file mode 100644 index 0000000..19c8a09 --- /dev/null +++ b/src/content/blog/openrlhf-blog-post.md @@ -0,0 +1,131 @@ +# OpenRLHF: The Game-Changing Framework for Reinforcement Learning from Human Feedback + +## Introduction + +In the rapidly evolving landscape of large language models (LLMs), Reinforcement Learning from Human Feedback (RLHF) has emerged as a crucial technique for aligning AI systems with human values and preferences. However, implementing RLHF efficiently at scale has remained a significant challenge—until now. Enter OpenRLHF, an open-source framework that's revolutionizing how researchers and developers approach RLHF training. + +## What is OpenRLHF? + +OpenRLHF is the first easy-to-use, high-performance open-source RLHF framework built on Ray, vLLM, ZeRO-3, and HuggingFace Transformers. Designed to make RLHF training simple and accessible, it addresses the key pain points that have historically made RLHF implementation complex and resource-intensive. + +The framework has gained significant traction in the AI community, with notable adoptions including: +- CMU's Advanced Natural Language Processing course using it as a teaching case +- HKUST successfully reproducing DeepSeek-R1 training on small models +- MIT & Microsoft utilizing it for research on emergent thinking in LLMs +- Multiple academic papers and industry projects building on top of the framework + +## Key Features That Set OpenRLHF Apart + +### 1. Distributed Architecture with Ray + +OpenRLHF leverages Ray for efficient distributed scheduling, separating Actor, Reward, Reference, and Critic models across different GPUs. This architecture enables scalable training for models up to 70B parameters, making it accessible for a wider range of research applications. + +The framework also supports Hybrid Engine scheduling, allowing all models and vLLM engines to share GPU resources. This minimizes idle time and maximizes GPU utilization—a critical factor when dealing with expensive compute resources. + +### 2. vLLM Inference Acceleration + +One of the most significant bottlenecks in RLHF training is sample generation, which typically consumes about 80% of the training time. OpenRLHF addresses this through integration with vLLM and Auto Tensor Parallelism (AutoTP), delivering high-throughput, memory-efficient sample generation. This native integration with HuggingFace Transformers ensures seamless and fast generation, making it arguably the fastest RLHF framework available today. + +### 3. Memory-Efficient Training + +Built on DeepSpeed's ZeRO-3, deepcompile, and AutoTP, OpenRLHF enables large model training without heavyweight frameworks. It works directly with HuggingFace, making it easy to load and fine-tune pretrained models without the usual memory overhead concerns. + +### 4. Advanced Algorithm Implementations + +The framework doesn't just implement standard PPO—it incorporates advanced tricks and optimizations from the community's best practices. 
Beyond PPO, OpenRLHF supports: + +- **REINFORCE++ and variants** (REINFORCE++-baseline, GRPO, RLOO) +- **Direct Preference Optimization (DPO)** and its variants (IPO, cDPO) +- **Kahneman-Tversky Optimization (KTO)** +- **Iterative DPO** for online RLHF workflows +- **Rejection Sampling** and **Conditional SFT** +- **Knowledge Distillation** capabilities +- **Process Reward Model (PRM)** support + +## Performance That Speaks Volumes + +OpenRLHF demonstrates impressive performance gains compared to existing solutions. In benchmarks against optimized versions of DSChat: + +- **7B models**: 1.82x speedup +- **13B models**: 2.5x speedup +- **34B models**: 2.4x speedup +- **70B models**: 2.3x speedup + +These improvements translate directly into faster experimentation cycles and reduced compute costs—critical factors for both research labs and production deployments. + +## Getting Started with OpenRLHF + +Installation is straightforward, with Docker being the recommended approach: + +```bash +# Launch Docker container +docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN -v $PWD:/openrlhf nvcr.io/nvidia/pytorch:24.07-py3 bash + +# Install OpenRLHF +pip install openrlhf + +# For vLLM acceleration (recommended) +pip install openrlhf[vllm] + +# For the latest features +pip install git+https://github.com/OpenRLHF/OpenRLHF.git +``` + +## Real-World Applications + +The versatility of OpenRLHF makes it suitable for various use cases: + +### 1. Standard RLHF Training +Train models using human preference data to improve helpfulness, harmlessness, and honesty. + +### 2. Reinforced Fine-tuning +Implement custom reward functions for domain-specific optimization without needing human annotations. + +### 3. Multi-turn Dialogue Optimization +Support for complex conversational scenarios with proper handling of chat templates. + +### 4. Multimodal Extensions +Projects like LMM-R1 demonstrate how OpenRLHF can be extended for multimodal tasks. + +## Advanced Features for Production + +### Flexible Data Processing +OpenRLHF provides sophisticated data handling capabilities: +- Support for multiple dataset formats +- Integration with HuggingFace's chat templates +- Ability to mix multiple datasets with configurable sampling probabilities +- Packing of training samples for efficiency + +### Model Checkpoint Compatibility +Full compatibility with HuggingFace models means you can: +- Use any pretrained model from the HuggingFace Hub +- Save checkpoints in standard formats +- Seamlessly integrate with existing ML pipelines + +### Performance Optimization Options +- Ring Attention support for handling longer sequences +- Flash Attention 2 integration +- QLoRA and LoRA support for parameter-efficient training +- Gradient checkpointing for memory optimization + +## Community and Ecosystem + +OpenRLHF has fostered a vibrant community with contributors from major tech companies and research institutions including ByteDance, Tencent, Alibaba, Baidu, Allen AI, and Berkeley's Starling Team. + +The project maintains comprehensive documentation, provides example scripts for various training scenarios, and offers both GitHub Issues and direct communication channels for support. + +## Looking Forward + +As RLHF continues to be crucial for developing aligned AI systems, OpenRLHF is positioned to be the go-to framework for researchers and practitioners. 
Recent developments show the framework adapting to new techniques like REINFORCE++ and supporting reproduction efforts of state-of-the-art models like DeepSeek-R1. + +The roadmap includes continued performance optimizations, support for emerging RLHF algorithms, and enhanced tooling for production deployments. + +## Conclusion + +OpenRLHF represents a significant step forward in democratizing RLHF training. By addressing the key challenges of scalability, performance, and ease of use, it enables more researchers and developers to experiment with and deploy RLHF-trained models. Whether you're a researcher exploring new alignment techniques or an engineer building production AI systems, OpenRLHF provides the tools and flexibility needed to succeed. + +If you're interested in contributing or using OpenRLHF, visit the [GitHub repository](https://github.com/OpenRLHF/OpenRLHF) or check out the [comprehensive documentation](https://openrlhf.readthedocs.io/). The future of aligned AI is being built collaboratively, and OpenRLHF is leading the charge. + +--- + +*This post is based on OpenRLHF version as of January 2025. For the latest updates and features, please refer to the official repository.* \ No newline at end of file diff --git a/src/content/blog/pytorch-basics-blog-post.md b/src/content/blog/pytorch-basics-blog-post.md new file mode 100644 index 0000000..cb4510f --- /dev/null +++ b/src/content/blog/pytorch-basics-blog-post.md @@ -0,0 +1,303 @@ +# PyTorch: A Comprehensive Guide to the Deep Learning Framework + +## Introduction + +In the world of deep learning, choosing the right framework can make the difference between a smooth development experience and endless frustration. PyTorch has emerged as one of the most popular choices among researchers and practitioners alike, known for its intuitive design, dynamic computation graphs, and Pythonic nature. Whether you're building your first neural network or developing state-of-the-art models, PyTorch provides the tools and flexibility you need. + +## What is PyTorch? + +PyTorch is an open-source machine learning library developed by Facebook's AI Research lab (FAIR) and released in 2016. Built on the Torch library, PyTorch brings the power of GPU-accelerated tensor computations to Python with an emphasis on flexibility and ease of use. + +What sets PyTorch apart is its philosophy: it's designed to be intuitive and Pythonic, making it feel like a natural extension of Python rather than a separate framework. This approach has made it the preferred choice for many researchers, leading to its adoption in countless research papers and production systems at companies like Tesla, Uber, and Microsoft. + +## Core Concepts and Components + +### 1. Tensors: The Foundation + +At the heart of PyTorch are tensors—multi-dimensional arrays similar to NumPy's ndarrays but with GPU acceleration capabilities. Tensors are the basic building blocks for all computations in PyTorch. + +```python +import torch + +# Creating tensors +x = torch.tensor([1.0, 2.0, 3.0]) +y = torch.zeros(3, 4) # 3x4 matrix of zeros +z = torch.randn(2, 3, 4) # 2x3x4 tensor with random values + +# Moving tensors to GPU +if torch.cuda.is_available(): + x = x.to('cuda') + # or x = x.cuda() +``` + +### 2. Autograd: Automatic Differentiation + +PyTorch's automatic differentiation engine, autograd, is what makes training neural networks possible. It automatically computes gradients for tensor operations, enabling backpropagation without manual derivative calculations. 
+ +```python +# Enable gradient computation +x = torch.tensor([2.0, 3.0], requires_grad=True) +y = x ** 2 + 3 * x + 1 + +# Compute gradients +y.sum().backward() +print(x.grad) # Gradients: dy/dx = 2x + 3 +``` + +### 3. Neural Network Module (torch.nn) + +The `torch.nn` module provides high-level building blocks for constructing neural networks. It includes pre-built layers, activation functions, and loss functions. + +```python +import torch.nn as nn +import torch.nn.functional as F + +class SimpleNet(nn.Module): + def __init__(self): + super(SimpleNet, self).__init__() + self.fc1 = nn.Linear(784, 128) + self.fc2 = nn.Linear(128, 10) + self.dropout = nn.Dropout(0.2) + + def forward(self, x): + x = F.relu(self.fc1(x)) + x = self.dropout(x) + x = self.fc2(x) + return F.log_softmax(x, dim=1) +``` + +### 4. Optimizers + +PyTorch provides various optimization algorithms through `torch.optim`, making it easy to train models with different optimization strategies. + +```python +model = SimpleNet() +optimizer = torch.optim.Adam(model.parameters(), lr=0.001) + +# Training loop +for epoch in range(num_epochs): + for batch_data, batch_labels in dataloader: + # Forward pass + outputs = model(batch_data) + loss = F.nll_loss(outputs, batch_labels) + + # Backward pass + optimizer.zero_grad() + loss.backward() + optimizer.step() +``` + +## Building Your First Neural Network + +Let's walk through a complete example of building and training a neural network for image classification using the MNIST dataset. + +```python +import torch +import torch.nn as nn +import torch.optim as optim +from torch.utils.data import DataLoader +from torchvision import datasets, transforms + +# Define the network +class MNISTNet(nn.Module): + def __init__(self): + super(MNISTNet, self).__init__() + self.conv1 = nn.Conv2d(1, 32, kernel_size=3) + self.conv2 = nn.Conv2d(32, 64, kernel_size=3) + self.fc1 = nn.Linear(9216, 128) + self.fc2 = nn.Linear(128, 10) + self.pool = nn.MaxPool2d(2) + self.dropout = nn.Dropout(0.25) + + def forward(self, x): + x = self.pool(F.relu(self.conv1(x))) + x = self.pool(F.relu(self.conv2(x))) + x = x.view(-1, 9216) # Flatten + x = F.relu(self.fc1(x)) + x = self.dropout(x) + x = self.fc2(x) + return F.log_softmax(x, dim=1) + +# Set up data +transform = transforms.Compose([ + transforms.ToTensor(), + transforms.Normalize((0.1307,), (0.3081,)) +]) + +train_dataset = datasets.MNIST('./data', train=True, download=True, transform=transform) +train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True) + +# Initialize model, loss, and optimizer +device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') +model = MNISTNet().to(device) +optimizer = optim.Adam(model.parameters(), lr=0.001) + +# Training loop +model.train() +for epoch in range(10): + for batch_idx, (data, target) in enumerate(train_loader): + data, target = data.to(device), target.to(device) + + optimizer.zero_grad() + output = model(data) + loss = F.nll_loss(output, target) + loss.backward() + optimizer.step() + + if batch_idx % 100 == 0: + print(f'Epoch: {epoch}, Batch: {batch_idx}, Loss: {loss.item():.4f}') +``` + +## Key Features That Make PyTorch Powerful + +### 1. Dynamic Computation Graphs + +Unlike static graph frameworks, PyTorch builds its computation graph on-the-fly. This means you can use regular Python control flow (if statements, loops) in your models, making debugging and experimentation much easier. 
+ +```python +def dynamic_model(x, use_dropout=True): + x = self.layer1(x) + if use_dropout: # Python control flow! + x = self.dropout(x) + for i in range(x.size(0)): # Dynamic loops! + if x[i].sum() > 0: + x[i] = self.special_layer(x[i]) + return x +``` + +### 2. Easy Debugging + +Since PyTorch executes operations immediately (eager execution), you can use standard Python debugging tools like pdb, print statements, or IDE debuggers to inspect your code. + +### 3. Rich Ecosystem + +PyTorch has spawned a rich ecosystem of libraries: +- **torchvision**: Computer vision datasets, models, and transforms +- **torchtext**: Natural language processing utilities +- **torchaudio**: Audio processing tools +- **PyTorch Lightning**: High-level framework for organizing PyTorch code +- **Hugging Face Transformers**: State-of-the-art NLP models + +### 4. Production Ready + +With TorchScript and TorchServe, PyTorch models can be optimized and deployed in production environments: + +```python +# Convert to TorchScript for production +scripted_model = torch.jit.script(model) +scripted_model.save('model.pt') + +# Load and use in production +loaded = torch.jit.load('model.pt') +prediction = loaded(input_tensor) +``` + +## Advanced PyTorch Features + +### Custom Datasets + +Creating custom datasets is straightforward with PyTorch's Dataset class: + +```python +from torch.utils.data import Dataset + +class CustomDataset(Dataset): + def __init__(self, data_path): + self.data = self.load_data(data_path) + + def __len__(self): + return len(self.data) + + def __getitem__(self, idx): + sample = self.data[idx] + # Process and return sample + return processed_sample +``` + +### Mixed Precision Training + +PyTorch supports automatic mixed precision training for faster training with minimal code changes: + +```python +from torch.cuda.amp import autocast, GradScaler + +scaler = GradScaler() + +for data, target in dataloader: + optimizer.zero_grad() + + with autocast(): + output = model(data) + loss = criterion(output, target) + + scaler.scale(loss).backward() + scaler.step(optimizer) + scaler.update() +``` + +### Distributed Training + +Scale your training across multiple GPUs or machines: + +```python +import torch.distributed as dist +import torch.multiprocessing as mp + +def train(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + model = nn.parallel.DistributedDataParallel(model) + # Training code here + +if __name__ == "__main__": + world_size = torch.cuda.device_count() + mp.spawn(train, args=(world_size,), nprocs=world_size) +``` + +## Best Practices and Tips + +### 1. Memory Management +- Use `del` and `torch.cuda.empty_cache()` to free up GPU memory +- Detach tensors from the computation graph when not needed: `tensor.detach()` +- Use gradient checkpointing for very deep models + +### 2. Performance Optimization +- Set `torch.backends.cudnn.benchmark = True` for convolutional networks +- Use DataLoader with multiple workers: `num_workers > 0` +- Profile your code with `torch.profiler` to identify bottlenecks + +### 3. Reproducibility +```python +# Set random seeds for reproducibility +torch.manual_seed(42) +torch.cuda.manual_seed_all(42) +torch.backends.cudnn.deterministic = True +torch.backends.cudnn.benchmark = False +``` + +## Common Pitfalls and How to Avoid Them + +1. **Forgetting to zero gradients**: Always call `optimizer.zero_grad()` before `loss.backward()` +2. **Not moving data to the correct device**: Ensure both model and data are on the same device +3. 
**In-place operations on leaf variables**: Avoid operations like `x += 1` on tensors with `requires_grad=True` +4. **Memory leaks**: Remember to detach tensors when accumulating losses or metrics + +## Getting Started Resources + +1. **Official Tutorials**: PyTorch.org provides excellent tutorials for beginners +2. **PyTorch Lightning**: For organizing complex projects +3. **Fast.ai**: High-level library built on PyTorch with excellent courses +4. **Papers with Code**: Find PyTorch implementations of research papers + +## Conclusion + +PyTorch has revolutionized deep learning development by providing a framework that's both powerful and intuitive. Its dynamic nature, combined with strong GPU support and a rich ecosystem, makes it an excellent choice for both research and production applications. + +Whether you're prototyping a new architecture, implementing a research paper, or building a production system, PyTorch provides the flexibility and tools you need. Its Pythonic design means that if you know Python, you're already halfway to mastering PyTorch. + +The key to becoming proficient with PyTorch is practice. Start with simple models, experiment with the examples provided, and gradually work your way up to more complex architectures. The PyTorch community is vibrant and helpful, so don't hesitate to seek help when needed. + +As deep learning continues to evolve, PyTorch remains at the forefront, constantly adding new features while maintaining its core philosophy of being researcher-friendly and production-ready. Whether you're building the next breakthrough in AI or solving practical business problems, PyTorch is a framework that will grow with your needs. + +--- + +*Ready to start your PyTorch journey? Install it with `pip install torch torchvision` and begin experimenting. The future of AI is being built with PyTorch, and now you have the knowledge to be part of it.* \ No newline at end of file diff --git a/src/content/blog/react-three-fiber-performance.md b/src/content/blog/react-three-fiber-performance.md new file mode 100644 index 0000000..5f1574c --- /dev/null +++ b/src/content/blog/react-three-fiber-performance.md @@ -0,0 +1,100 @@ +--- +title: "Optimizing React Three Fiber Performance" +excerpt: "Tips and tricks for building smooth 3D web experiences with React Three Fiber, focusing on performance optimization." +author: "Jan Heimann" +date: "2025-01-02" +readTime: "10 min read" +tags: ["React Three Fiber", "Three.js", "Performance", "3D Web", "Optimization"] +category: "Frontend Development" +featured: false +--- + +# Optimizing React Three Fiber Performance + +## Introduction + +React Three Fiber (R3F) brings the power of Three.js to React applications, but achieving smooth 60fps performance requires careful optimization. This guide covers essential techniques for building performant 3D web experiences. + +## Key Optimization Strategies + +### 1. Geometry and Material Optimization + +```jsx +import { useMemo } from 'react' +import { useFrame } from '@react-three/fiber' + +function OptimizedMesh() { + // Memoize geometry to prevent recreation + const geometry = useMemo(() => new THREE.SphereGeometry(1, 32, 32), []) + + // Reuse materials across instances + const material = useMemo(() => new THREE.MeshStandardMaterial({ + color: 'hotpink' + }), []) + + return +} +``` + +### 2. 
Instancing for Multiple Objects + +```jsx +import { useRef } from 'react' +import { InstancedMesh } from 'three' + +function InstancedSpheres({ count = 1000 }) { + const meshRef = useRef() + + useFrame(() => { + // Animate instances efficiently + for (let i = 0; i < count; i++) { + // Update individual instance transforms + } + }) + + return ( + + {/* Individual instances */} + + ) +} +``` + +### 3. Level of Detail (LOD) + +```jsx +import { Detailed } from '@react-three/drei' + +function LODModel() { + return ( + + + + + + ) +} +``` + +## Performance Monitoring + +### Frame Rate Monitoring +- Use `useFrame` callback timing +- Implement performance budgets +- Monitor GPU utilization + +### Memory Management +- Dispose of unused geometries and materials +- Use object pooling for frequently created objects +- Monitor memory leaks with DevTools + +## Best Practices + +1. **Frustum Culling**: Don't render objects outside the camera view +2. **Texture Optimization**: Use appropriate texture sizes and formats +3. **Shader Optimization**: Minimize fragment shader complexity +4. **Batch Operations**: Group similar rendering operations + +## Conclusion + +Building performant 3D web applications requires a deep understanding of both React and Three.js optimization techniques. By following these practices, you can create smooth, engaging 3D experiences that run well across devices. \ No newline at end of file diff --git a/src/content/blogPosts.js b/src/content/blogPosts.js new file mode 100644 index 0000000..5d6c07a --- /dev/null +++ b/src/content/blogPosts.js @@ -0,0 +1,1641 @@ +export const blogPosts = [ + { + id: 1, + title: "Building AutoApply: Lessons from Creating an AI-Powered SaaS that Generated $480K ARR", + excerpt: "Key insights and technical challenges from building a multi-agent system that automates job applications using GPT-4 and Claude-3, serving 10K+ monthly active users.", + author: "Jan Heimann", + date: "2025-01-09", + readTime: "15 min read", + tags: ["SaaS", "AI", "GPT-4", "Claude-3", "Computer Vision", "YOLOv8", "Entrepreneurship"], + category: "My Projects", + content: `## Coming Soon + +This detailed case study about building AutoApply is currently being prepared and will be available soon. + +In the meantime, if you have questions about building AI-powered SaaS products or want to learn more about the technical challenges behind AutoApply, feel free to reach out! + +--- + +*Check back soon for the full story of how AutoApply was built from concept to 10000+ active users.*` + }, + { + id: 2, + title: "OpenRLHF: The Game-Changing Framework for Reinforcement Learning from Human Feedback", + excerpt: "An open-source framework built on Ray, vLLM, ZeRO-3, and HuggingFace Transformers that makes RLHF training simple and accessible, with up to 2.5x speedup over existing solutions.", + author: "Jan Heimann", + date: "2025-01-15", + readTime: "8 min read", + tags: ["RLHF", "Ray", "vLLM", "ZeRO-3", "HuggingFace", "OpenSource"], + category: "ML Engineering", + content: `## Introduction + +In the rapidly evolving landscape of large language models (LLMs), Reinforcement Learning from Human Feedback (RLHF) has emerged as a crucial technique for aligning AI systems with human values and preferences. However, implementing RLHF efficiently at scale has remained a significant challenge—until now. Enter OpenRLHF, an open-source framework that's revolutionizing how researchers and developers approach RLHF training. + +## What is OpenRLHF? 
+ +OpenRLHF is the first easy-to-use, high-performance open-source RLHF framework built on Ray, vLLM, ZeRO-3, and HuggingFace Transformers. Designed to make RLHF training simple and accessible, it addresses the key pain points that have historically made RLHF implementation complex and resource-intensive. + +The framework has gained significant traction in the AI community, with notable adoptions including: +- CMU's Advanced Natural Language Processing course using it as a teaching case +- HKUST successfully reproducing DeepSeek-R1 training on small models +- MIT & Microsoft utilizing it for research on emergent thinking in LLMs +- Multiple academic papers and industry projects building on top of the framework + +## Key Features That Set OpenRLHF Apart + +### 1. Distributed Architecture with Ray + +OpenRLHF leverages Ray for efficient distributed scheduling, separating Actor, Reward, Reference, and Critic models across different GPUs. This architecture enables scalable training for models up to 70B parameters, making it accessible for a wider range of research applications. + +The framework also supports Hybrid Engine scheduling, allowing all models and vLLM engines to share GPU resources. This minimizes idle time and maximizes GPU utilization—a critical factor when dealing with expensive compute resources. + +### 2. vLLM Inference Acceleration + +One of the most significant bottlenecks in RLHF training is sample generation, which typically consumes about 80% of the training time. OpenRLHF addresses this through integration with vLLM and Auto Tensor Parallelism (AutoTP), delivering high-throughput, memory-efficient sample generation. This native integration with HuggingFace Transformers ensures seamless and fast generation, making it arguably the fastest RLHF framework available today. + +### 3. Memory-Efficient Training + +Built on DeepSpeed's ZeRO-3, deepcompile, and AutoTP, OpenRLHF enables large model training without heavyweight frameworks. It works directly with HuggingFace, making it easy to load and fine-tune pretrained models without the usual memory overhead concerns. + +### 4. Advanced Algorithm Implementations + +The framework doesn't just implement standard PPO—it incorporates advanced tricks and optimizations from the community's best practices. Beyond PPO, OpenRLHF supports: + +- **REINFORCE++ and variants** (REINFORCE++-baseline, GRPO, RLOO) +- **Direct Preference Optimization (DPO)** and its variants (IPO, cDPO) +- **Kahneman-Tversky Optimization (KTO)** +- **Iterative DPO** for online RLHF workflows +- **Rejection Sampling** and **Conditional SFT** +- **Knowledge Distillation** capabilities +- **Process Reward Model (PRM)** support + +## Performance That Speaks Volumes + +OpenRLHF demonstrates impressive performance gains compared to existing solutions. In benchmarks against optimized versions of DSChat: + +- **7B models**: 1.82x speedup +- **13B models**: 2.5x speedup +- **34B models**: 2.4x speedup +- **70B models**: 2.3x speedup + +These improvements translate directly into faster experimentation cycles and reduced compute costs—critical factors for both research labs and production deployments. 
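+Much of that speedup comes from pushing rollout generation onto vLLM. As a rough sketch of the kind of batched generation step involved (this uses vLLM's public API directly, with a small placeholder model and made-up prompts, not OpenRLHF's internal wiring):
+
+\`\`\`python
+from vllm import LLM, SamplingParams
+
+# Placeholder model for illustration; in RLHF the current actor weights would be served
+llm = LLM(model="facebook/opt-125m")
+params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
+
+prompts = [
+    "Summarize RLHF in one sentence.",
+    "Why is sample generation the bottleneck in PPO training?",
+]
+
+# Batched, high-throughput generation - the phase that dominates RLHF training time
+outputs = llm.generate(prompts, params)
+for out in outputs:
+    print(out.prompt, "->", out.outputs[0].text)
+\`\`\`
+
+OpenRLHF integrates this generation path with its Ray-based scheduling, which is a large part of where the speedups above come from.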
+ +## Getting Started with OpenRLHF + +Installation is straightforward, with Docker being the recommended approach: + +\`\`\`bash +# Launch Docker container +docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN -v $PWD:/openrlhf nvcr.io/nvidia/pytorch:24.07-py3 bash + +# Install OpenRLHF +pip install openrlhf + +# For vLLM acceleration (recommended) +pip install openrlhf[vllm] + +# For the latest features +pip install git+https://github.com/OpenRLHF/OpenRLHF.git +\`\`\` + +## Real-World Applications + +The versatility of OpenRLHF makes it suitable for various use cases: + +### 1. Standard RLHF Training +Train models using human preference data to improve helpfulness, harmlessness, and honesty. + +### 2. Reinforced Fine-tuning +Implement custom reward functions for domain-specific optimization without needing human annotations. + +### 3. Multi-turn Dialogue Optimization +Support for complex conversational scenarios with proper handling of chat templates. + +### 4. Multimodal Extensions +Projects like LMM-R1 demonstrate how OpenRLHF can be extended for multimodal tasks. + +## Advanced Features for Production + +### Flexible Data Processing +OpenRLHF provides sophisticated data handling capabilities: +- Support for multiple dataset formats +- Integration with HuggingFace's chat templates +- Ability to mix multiple datasets with configurable sampling probabilities +- Packing of training samples for efficiency + +### Model Checkpoint Compatibility +Full compatibility with HuggingFace models means you can: +- Use any pretrained model from the HuggingFace Hub +- Save checkpoints in standard formats +- Seamlessly integrate with existing ML pipelines + +### Performance Optimization Options +- Ring Attention support for handling longer sequences +- Flash Attention 2 integration +- QLoRA and LoRA support for parameter-efficient training +- Gradient checkpointing for memory optimization + +## Community and Ecosystem + +OpenRLHF has fostered a vibrant community with contributors from major tech companies and research institutions including ByteDance, Tencent, Alibaba, Baidu, Allen AI, and Berkeley's Starling Team. + +The project maintains comprehensive documentation, provides example scripts for various training scenarios, and offers both GitHub Issues and direct communication channels for support. + +## Looking Forward + +As RLHF continues to be crucial for developing aligned AI systems, OpenRLHF is positioned to be the go-to framework for researchers and practitioners. Recent developments show the framework adapting to new techniques like REINFORCE++ and supporting reproduction efforts of state-of-the-art models like DeepSeek-R1. + +The roadmap includes continued performance optimizations, support for emerging RLHF algorithms, and enhanced tooling for production deployments. + +## Conclusion + +OpenRLHF represents a significant step forward in democratizing RLHF training. By addressing the key challenges of scalability, performance, and ease of use, it enables more researchers and developers to experiment with and deploy RLHF-trained models. Whether you're a researcher exploring new alignment techniques or an engineer building production AI systems, OpenRLHF provides the tools and flexibility needed to succeed. + +If you're interested in contributing or using OpenRLHF, visit the [GitHub repository](https://github.com/OpenRLHF/OpenRLHF) or check out the [comprehensive documentation](https://openrlhf.readthedocs.io/). 
The future of aligned AI is being built collaboratively, and OpenRLHF is leading the charge. + +--- + +*This post is based on OpenRLHF version as of January 2025. For the latest updates and features, please refer to the official repository.*` + }, + { + id: 3, + title: "PyTorch: A Comprehensive Guide to the Deep Learning Framework", + excerpt: "In the world of deep learning, PyTorch has emerged as one of the most popular choices among researchers and practitioners alike, known for its intuitive design, dynamic computation graphs, and Pythonic nature.", + author: "Jan Heimann", + date: "2025-01-12", + readTime: "12 min read", + tags: ["PyTorch", "Deep Learning", "Machine Learning", "AI", "Framework"], + category: "ML Engineering", + content: `## Introduction + +In the world of deep learning, choosing the right framework can make the difference between a smooth development experience and endless frustration. PyTorch has emerged as one of the most popular choices among researchers and practitioners alike, known for its intuitive design, dynamic computation graphs, and Pythonic nature. Whether you're building your first neural network or developing state-of-the-art models, PyTorch provides the tools and flexibility you need. + +## What is PyTorch? + +PyTorch is an open-source machine learning library developed by Facebook's AI Research lab (FAIR) and released in 2016. Built on the Torch library, PyTorch brings the power of GPU-accelerated tensor computations to Python with an emphasis on flexibility and ease of use. + +What sets PyTorch apart is its philosophy: it's designed to be intuitive and Pythonic, making it feel like a natural extension of Python rather than a separate framework. This approach has made it the preferred choice for many researchers, leading to its adoption in countless research papers and production systems at companies like Tesla, Uber, and Microsoft. + +## Core Concepts and Components + +### 1. Tensors: The Foundation + +At the heart of PyTorch are tensors—multi-dimensional arrays similar to NumPy's ndarrays but with GPU acceleration capabilities. Tensors are the basic building blocks for all computations in PyTorch. + +\`\`\`python +import torch + +# Creating tensors +x = torch.tensor([1.0, 2.0, 3.0]) +y = torch.zeros(3, 4) # 3x4 matrix of zeros +z = torch.randn(2, 3, 4) # 2x3x4 tensor with random values + +# Moving tensors to GPU +if torch.cuda.is_available(): + x = x.to('cuda') + # or x = x.cuda() +\`\`\` + +### 2. Autograd: Automatic Differentiation + +PyTorch's automatic differentiation engine, autograd, is what makes training neural networks possible. It automatically computes gradients for tensor operations, enabling backpropagation without manual derivative calculations. + +\`\`\`python +# Enable gradient computation +x = torch.tensor([2.0, 3.0], requires_grad=True) +y = x ** 2 + 3 * x + 1 + +# Compute gradients +y.sum().backward() +print(x.grad) # Gradients: dy/dx = 2x + 3 +\`\`\` + +### 3. Neural Network Module (torch.nn) + +The \`torch.nn\` module provides high-level building blocks for constructing neural networks. It includes pre-built layers, activation functions, and loss functions. 
+ +\`\`\`python +import torch.nn as nn +import torch.nn.functional as F + +class SimpleNet(nn.Module): + def __init__(self): + super(SimpleNet, self).__init__() + self.fc1 = nn.Linear(784, 128) + self.fc2 = nn.Linear(128, 10) + self.dropout = nn.Dropout(0.2) + + def forward(self, x): + x = F.relu(self.fc1(x)) + x = self.dropout(x) + x = self.fc2(x) + return F.log_softmax(x, dim=1) +\`\`\` + +### 4. Optimizers + +PyTorch provides various optimization algorithms through \`torch.optim\`, making it easy to train models with different optimization strategies. + +\`\`\`python +model = SimpleNet() +optimizer = torch.optim.Adam(model.parameters(), lr=0.001) + +# Training loop +for epoch in range(num_epochs): + for batch_data, batch_labels in dataloader: + # Forward pass + outputs = model(batch_data) + loss = F.nll_loss(outputs, batch_labels) + + # Backward pass + optimizer.zero_grad() + loss.backward() + optimizer.step() +\`\`\` + +## Building Your First Neural Network + +Let's walk through a complete example of building and training a neural network for image classification using the MNIST dataset. + +\`\`\`python +import torch +import torch.nn as nn +import torch.optim as optim +from torch.utils.data import DataLoader +from torchvision import datasets, transforms + +# Define the network +class MNISTNet(nn.Module): + def __init__(self): + super(MNISTNet, self).__init__() + self.conv1 = nn.Conv2d(1, 32, kernel_size=3) + self.conv2 = nn.Conv2d(32, 64, kernel_size=3) + self.fc1 = nn.Linear(9216, 128) + self.fc2 = nn.Linear(128, 10) + self.pool = nn.MaxPool2d(2) + self.dropout = nn.Dropout(0.25) + + def forward(self, x): + x = self.pool(F.relu(self.conv1(x))) + x = self.pool(F.relu(self.conv2(x))) + x = x.view(-1, 9216) # Flatten + x = F.relu(self.fc1(x)) + x = self.dropout(x) + x = self.fc2(x) + return F.log_softmax(x, dim=1) + +# Set up data +transform = transforms.Compose([ + transforms.ToTensor(), + transforms.Normalize((0.1307,), (0.3081,)) +]) + +train_dataset = datasets.MNIST('./data', train=True, download=True, transform=transform) +train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True) + +# Initialize model, loss, and optimizer +device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') +model = MNISTNet().to(device) +optimizer = optim.Adam(model.parameters(), lr=0.001) + +# Training loop +model.train() +for epoch in range(10): + for batch_idx, (data, target) in enumerate(train_loader): + data, target = data.to(device), target.to(device) + + optimizer.zero_grad() + output = model(data) + loss = F.nll_loss(output, target) + loss.backward() + optimizer.step() + + if batch_idx % 100 == 0: + print(f'Epoch: {epoch}, Batch: {batch_idx}, Loss: {loss.item():.4f}') +\`\`\` + +## Key Features That Make PyTorch Powerful + +### 1. Dynamic Computation Graphs + +Unlike static graph frameworks, PyTorch builds its computation graph on-the-fly. This means you can use regular Python control flow (if statements, loops) in your models, making debugging and experimentation much easier. + +\`\`\`python +def dynamic_model(x, use_dropout=True): + x = self.layer1(x) + if use_dropout: # Python control flow! + x = self.dropout(x) + for i in range(x.size(0)): # Dynamic loops! + if x[i].sum() > 0: + x[i] = self.special_layer(x[i]) + return x +\`\`\` + +### 2. Easy Debugging + +Since PyTorch executes operations immediately (eager execution), you can use standard Python debugging tools like pdb, print statements, or IDE debuggers to inspect your code. + +### 3. 
Rich Ecosystem + +PyTorch has spawned a rich ecosystem of libraries: +- **torchvision**: Computer vision datasets, models, and transforms +- **torchtext**: Natural language processing utilities +- **torchaudio**: Audio processing tools +- **PyTorch Lightning**: High-level framework for organizing PyTorch code +- **Hugging Face Transformers**: State-of-the-art NLP models + +### 4. Production Ready + +With TorchScript and TorchServe, PyTorch models can be optimized and deployed in production environments: + +\`\`\`python +# Convert to TorchScript for production +scripted_model = torch.jit.script(model) +scripted_model.save('model.pt') + +# Load and use in production +loaded = torch.jit.load('model.pt') +prediction = loaded(input_tensor) +\`\`\` + +## Advanced PyTorch Features + +### Custom Datasets + +Creating custom datasets is straightforward with PyTorch's Dataset class: + +\`\`\`python +from torch.utils.data import Dataset + +class CustomDataset(Dataset): + def __init__(self, data_path): + self.data = self.load_data(data_path) + + def __len__(self): + return len(self.data) + + def __getitem__(self, idx): + sample = self.data[idx] + # Process and return sample + return processed_sample +\`\`\` + +### Mixed Precision Training + +PyTorch supports automatic mixed precision training for faster training with minimal code changes: + +\`\`\`python +from torch.cuda.amp import autocast, GradScaler + +scaler = GradScaler() + +for data, target in dataloader: + optimizer.zero_grad() + + with autocast(): + output = model(data) + loss = criterion(output, target) + + scaler.scale(loss).backward() + scaler.step(optimizer) + scaler.update() +\`\`\` + +### Distributed Training + +Scale your training across multiple GPUs or machines: + +\`\`\`python +import torch.distributed as dist +import torch.multiprocessing as mp + +def train(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + model = nn.parallel.DistributedDataParallel(model) + # Training code here + +if __name__ == "__main__": + world_size = torch.cuda.device_count() + mp.spawn(train, args=(world_size,), nprocs=world_size) +\`\`\` + +## Best Practices and Tips + +### 1. Memory Management +- Use \`del\` and \`torch.cuda.empty_cache()\` to free up GPU memory +- Detach tensors from the computation graph when not needed: \`tensor.detach()\` +- Use gradient checkpointing for very deep models + +### 2. Performance Optimization +- Set \`torch.backends.cudnn.benchmark = True\` for convolutional networks +- Use DataLoader with multiple workers: \`num_workers > 0\` +- Profile your code with \`torch.profiler\` to identify bottlenecks + +### 3. Reproducibility +\`\`\`python +# Set random seeds for reproducibility +torch.manual_seed(42) +torch.cuda.manual_seed_all(42) +torch.backends.cudnn.deterministic = True +torch.backends.cudnn.benchmark = False +\`\`\` + +## Common Pitfalls and How to Avoid Them + +1. **Forgetting to zero gradients**: Always call \`optimizer.zero_grad()\` before \`loss.backward()\` +2. **Not moving data to the correct device**: Ensure both model and data are on the same device +3. **In-place operations on leaf variables**: Avoid operations like \`x += 1\` on tensors with \`requires_grad=True\` +4. **Memory leaks**: Remember to detach tensors when accumulating losses or metrics + +## Getting Started Resources + +1. **Official Tutorials**: PyTorch.org provides excellent tutorials for beginners +2. **PyTorch Lightning**: For organizing complex projects +3. 
**Fast.ai**: High-level library built on PyTorch with excellent courses +4. **Papers with Code**: Find PyTorch implementations of research papers + +## Conclusion + +PyTorch has revolutionized deep learning development by providing a framework that's both powerful and intuitive. Its dynamic nature, combined with strong GPU support and a rich ecosystem, makes it an excellent choice for both research and production applications. + +Whether you're prototyping a new architecture, implementing a research paper, or building a production system, PyTorch provides the flexibility and tools you need. Its Pythonic design means that if you know Python, you're already halfway to mastering PyTorch. + +The key to becoming proficient with PyTorch is practice. Start with simple models, experiment with the examples provided, and gradually work your way up to more complex architectures. The PyTorch community is vibrant and helpful, so don't hesitate to seek help when needed. + +As deep learning continues to evolve, PyTorch remains at the forefront, constantly adding new features while maintaining its core philosophy of being researcher-friendly and production-ready. Whether you're building the next breakthrough in AI or solving practical business problems, PyTorch is a framework that will grow with your needs. + +--- + +*Ready to start your PyTorch journey? Install it with \`pip install torch torchvision\` and begin experimenting. The future of AI is being built with PyTorch, and now you have the knowledge to be part of it.*` + }, + { + id: 4, + title: "LangChain: Building Powerful LLM Applications Made Simple", + excerpt: "A comprehensive guide to LangChain, the framework that simplifies building production-ready LLM applications with composable components and advanced RAG capabilities.", + author: "Jan Heimann", + date: "2025-01-16", + readTime: "12 min read", + tags: ["LangChain", "LLM", "AI", "Python", "RAG"], + category: "ML Engineering", + content: `## Introduction + +The rise of large language models (LLMs) like GPT-4, Claude, and LLaMA has opened up incredible possibilities for AI-powered applications. However, building production-ready LLM applications involves much more than just making API calls to these models. You need to handle prompts, manage conversation history, connect to external data sources, and orchestrate complex workflows. This is where LangChain comes in—a framework designed to simplify and streamline the development of LLM-powered applications. + +## What is LangChain? + +LangChain is an open-source framework created by Harrison Chase in October 2022 that provides a set of tools and abstractions for building applications with LLMs. It's designed around the principle of composability, allowing developers to chain together different components to create sophisticated applications. + +The framework addresses several key challenges in LLM application development: +- **Context management**: Handling conversation history and context windows +- **Data connectivity**: Integrating LLMs with external data sources +- **Agent capabilities**: Building LLMs that can use tools and take actions +- **Memory systems**: Implementing short-term and long-term memory for applications +- **Prompt engineering**: Managing and optimizing prompts systematically + +LangChain has quickly become one of the most popular frameworks in the LLM ecosystem, with implementations in both Python and JavaScript/TypeScript. + +## Core Concepts and Components + +### 1. 
Models: The Foundation + +LangChain provides a unified interface for working with different LLM providers. Whether you're using OpenAI, Anthropic, Hugging Face, or local models, the interface remains consistent. + +\`\`\`python +from langchain.llms import OpenAI +from langchain.chat_models import ChatOpenAI + +# Standard LLM +llm = OpenAI(temperature=0.7) +response = llm("What is the capital of France?") + +# Chat model (for conversation-style interactions) +chat = ChatOpenAI(temperature=0.7) +from langchain.schema import HumanMessage, SystemMessage + +messages = [ + SystemMessage(content="You are a helpful geography teacher."), + HumanMessage(content="What is the capital of France?") +] +response = chat(messages) +\`\`\` + +### 2. Prompts: Template Management + +Prompt templates help you create reusable, dynamic prompts that can be filled with variables at runtime. + +\`\`\`python +from langchain.prompts import PromptTemplate, ChatPromptTemplate + +# Simple prompt template +prompt = PromptTemplate( + input_variables=["product"], + template="What are the main features of {product}?" +) + +# Chat prompt template +chat_prompt = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant that explains technical concepts."), + ("human", "Explain {concept} in simple terms.") +]) + +# Using the template +formatted_prompt = prompt.format(product="iPhone 15") +response = llm(formatted_prompt) +\`\`\` + +### 3. Chains: Composing Components + +Chains are the core of LangChain's composability. They allow you to combine multiple components into a single, reusable pipeline. + +\`\`\`python +from langchain.chains import LLMChain, SimpleSequentialChain + +# Basic LLM Chain +chain = LLMChain(llm=llm, prompt=prompt) +result = chain.run("smartphone") + +# Sequential chain - output of one becomes input of next +first_prompt = PromptTemplate( + input_variables=["topic"], + template="Write a brief outline about {topic}." +) +second_prompt = PromptTemplate( + input_variables=["outline"], + template="Expand this outline into a detailed article: {outline}" +) + +chain1 = LLMChain(llm=llm, prompt=first_prompt) +chain2 = LLMChain(llm=llm, prompt=second_prompt) + +overall_chain = SimpleSequentialChain( + chains=[chain1, chain2], + verbose=True +) +result = overall_chain.run("artificial intelligence") +\`\`\` + +### 4. Memory: Maintaining Context + +LangChain provides various memory implementations to maintain conversation context across interactions. + +\`\`\`python +from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory +from langchain.chains import ConversationChain + +# Buffer memory - stores everything +memory = ConversationBufferMemory() +conversation = ConversationChain( + llm=llm, + memory=memory, + verbose=True +) + +# Have a conversation +conversation.predict(input="Hi, my name is Alex") +conversation.predict(input="What's my name?") # Will remember! + +# Summary memory - summarizes long conversations +summary_memory = ConversationSummaryMemory(llm=llm) +conversation_with_summary = ConversationChain( + llm=llm, + memory=summary_memory, + verbose=True +) +\`\`\` + +### 5. Document Loaders and Text Splitters + +For RAG (Retrieval Augmented Generation) applications, LangChain provides tools to load and process documents. 
+ +\`\`\`python +from langchain.document_loaders import TextLoader, PyPDFLoader +from langchain.text_splitter import RecursiveCharacterTextSplitter + +# Load documents +loader = PyPDFLoader("document.pdf") +documents = loader.load() + +# Split into chunks +text_splitter = RecursiveCharacterTextSplitter( + chunk_size=1000, + chunk_overlap=200 +) +chunks = text_splitter.split_documents(documents) +\`\`\` + +### 6. Vector Stores and Embeddings + +Vector stores enable semantic search over your documents. + +\`\`\`python +from langchain.embeddings import OpenAIEmbeddings +from langchain.vectorstores import FAISS + +# Create embeddings +embeddings = OpenAIEmbeddings() + +# Create vector store +vectorstore = FAISS.from_documents(chunks, embeddings) + +# Search for relevant documents +query = "What is the main topic discussed?" +relevant_docs = vectorstore.similarity_search(query, k=3) +\`\`\` + +## Building a RAG Application + +Let's build a complete RAG (Retrieval Augmented Generation) application that can answer questions about uploaded documents. + +\`\`\`python +from langchain.chains import RetrievalQA +from langchain.document_loaders import DirectoryLoader +from langchain.embeddings import OpenAIEmbeddings +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain.vectorstores import Chroma +from langchain.chat_models import ChatOpenAI + +# 1. Load documents +loader = DirectoryLoader('./data', glob="**/*.pdf") +documents = loader.load() + +# 2. Split documents +text_splitter = RecursiveCharacterTextSplitter( + chunk_size=1500, + chunk_overlap=200 +) +splits = text_splitter.split_documents(documents) + +# 3. Create embeddings and vector store +embeddings = OpenAIEmbeddings() +vectorstore = Chroma.from_documents( + documents=splits, + embedding=embeddings, + persist_directory="./chroma_db" +) + +# 4. Create retriever +retriever = vectorstore.as_retriever( + search_type="similarity", + search_kwargs={"k": 4} +) + +# 5. Create QA chain +llm = ChatOpenAI(temperature=0, model_name="gpt-4") +qa_chain = RetrievalQA.from_chain_type( + llm=llm, + chain_type="stuff", + retriever=retriever, + return_source_documents=True, + verbose=True +) + +# 6. Ask questions +query = "What are the main points discussed in the documents?" +result = qa_chain({"query": query}) + +print(f"Answer: {result['result']}") +print(f"Source documents: {result['source_documents']}") +\`\`\` + +## Agents: LLMs with Tools + +One of LangChain's most powerful features is the ability to create agents—LLMs that can use tools to accomplish tasks. + +\`\`\`python +from langchain.agents import initialize_agent, Tool +from langchain.agents import AgentType +from langchain.tools import DuckDuckGoSearchRun +from langchain.tools import ShellTool + +# Define tools +search = DuckDuckGoSearchRun() +shell = ShellTool() + +tools = [ + Tool( + name="Search", + func=search.run, + description="Useful for searching the internet for current information" + ), + Tool( + name="Terminal", + func=shell.run, + description="Useful for running shell commands" + ) +] + +# Create agent +agent = initialize_agent( + tools, + llm, + agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, + verbose=True +) + +# Use the agent +result = agent.run("Search for the current weather in San Francisco and create a file with that information") +\`\`\` + +## Advanced Features + +### 1. 
Custom Chains + +Create your own chains for specific use cases: + +\`\`\`python +from langchain.chains.base import Chain +from typing import Dict, List + +class CustomAnalysisChain(Chain): + """Custom chain for analyzing text sentiment and extracting entities.""" + + @property + def input_keys(self) -> List[str]: + return ["text"] + + @property + def output_keys(self) -> List[str]: + return ["sentiment", "entities"] + + def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: + text = inputs["text"] + + # Sentiment analysis + sentiment_prompt = f"Analyze the sentiment of this text: {text}" + sentiment = self.llm(sentiment_prompt) + + # Entity extraction + entity_prompt = f"Extract all named entities from this text: {text}" + entities = self.llm(entity_prompt) + + return { + "sentiment": sentiment, + "entities": entities + } +\`\`\` + +### 2. Streaming Responses + +For better user experience, stream responses as they're generated: + +\`\`\`python +from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler + +# Create LLM with streaming +streaming_llm = ChatOpenAI( + streaming=True, + callbacks=[StreamingStdOutCallbackHandler()], + temperature=0 +) + +# Use in a chain +chain = LLMChain(llm=streaming_llm, prompt=prompt) +chain.run("Tell me a story") # Will print as it generates +\`\`\` + +### 3. LangChain Expression Language (LCEL) + +LCEL provides a declarative way to compose chains: + +\`\`\`python +from langchain.schema.runnable import RunnablePassthrough + +# Create a chain using LCEL +rag_chain = ( + {"context": retriever, "question": RunnablePassthrough()} + | prompt + | llm + | StrOutputParser() +) + +# Run the chain +result = rag_chain.invoke("What is the main topic?") +\`\`\` + +### 4. Callbacks and Monitoring + +Monitor and control your LangChain applications: + +\`\`\`python +from langchain.callbacks import StdOutCallbackHandler +from langchain.callbacks.manager import CallbackManager + +# Custom callback +class CustomCallback(StdOutCallbackHandler): + def on_llm_start(self, serialized, prompts, **kwargs): + print(f"LLM started with prompt: {prompts[0]}") + + def on_llm_end(self, response, **kwargs): + print(f"LLM finished. Token usage: {response.llm_output}") + +# Use callbacks +callback_manager = CallbackManager([CustomCallback()]) +llm = ChatOpenAI(callback_manager=callback_manager) +\`\`\` + +## Best Practices and Tips + +### 1. Error Handling + +Always implement proper error handling, especially for API calls: + +\`\`\`python +from langchain.schema import OutputParserException +import time + +def robust_chain_call(chain, input_text, max_retries=3): + for attempt in range(max_retries): + try: + return chain.run(input_text) + except Exception as e: + if attempt == max_retries - 1: + raise + print(f"Attempt {attempt + 1} failed: {e}") + time.sleep(2 ** attempt) # Exponential backoff +\`\`\` + +### 2. Cost Management + +Monitor and control API costs: + +\`\`\`python +from langchain.callbacks import get_openai_callback + +with get_openai_callback() as cb: + result = llm("What is the meaning of life?") + print(f"Total Tokens: ${'{'}{cb.total_tokens}{'}'}") + print(f"Total Cost: ${'{'}{cb.total_cost}{'}'}") +\`\`\` + +### 3. 
Prompt Optimization + +Use few-shot examples for better results: + +\`\`\`python +from langchain.prompts import FewShotPromptTemplate + +examples = [ + {"input": "happy", "output": "sad"}, + {"input": "tall", "output": "short"}, +] + +example_prompt = PromptTemplate( + input_variables=["input", "output"], + template="Input: {input}\\nOutput: {output}" +) + +few_shot_prompt = FewShotPromptTemplate( + examples=examples, + example_prompt=example_prompt, + prefix="Give the opposite of each input:", + suffix="Input: {input}\\nOutput:", + input_variables=["input"] +) +\`\`\` + +### 4. Testing + +Test your chains thoroughly: + +\`\`\`python +import pytest +from unittest.mock import Mock + +def test_chain(): + # Mock the LLM + mock_llm = Mock() + mock_llm.return_value = "Paris" + + # Create chain with mock + chain = LLMChain(llm=mock_llm, prompt=prompt) + result = chain.run("France") + + assert result == "Paris" + mock_llm.assert_called_once() +\`\`\` + +## Common Use Cases + +1. **Customer Support Chatbots**: Using conversation memory and RAG for context-aware responses +2. **Document Analysis**: Extracting insights from large document collections +3. **Code Generation**: Creating development tools that understand context +4. **Research Assistants**: Agents that can search, analyze, and synthesize information +5. **Data Processing Pipelines**: Automated workflows for processing unstructured data + +## Getting Started Resources + +1. **Official Documentation**: Comprehensive guides at python.langchain.com +2. **LangChain Hub**: Repository of shared prompts and chains +3. **Community**: Active Discord and GitHub discussions +4. **Templates**: Pre-built application templates for common use cases +5. **LangSmith**: Tool for debugging and monitoring LangChain applications + +## Conclusion + +LangChain has emerged as an essential framework for building LLM applications, providing the tools and abstractions needed to go from prototype to production. Its modular design, extensive integrations, and active community make it an excellent choice for developers looking to harness the power of LLMs. + +The key to mastering LangChain is understanding its composable nature. Start with simple chains, experiment with different components, and gradually build more complex applications. Whether you're building a simple chatbot or a sophisticated AI agent, LangChain provides the flexibility and power you need. + +As the LLM landscape continues to evolve rapidly, LangChain keeps pace by adding new features, integrations, and optimizations. By learning LangChain, you're not just learning a framework—you're gaining the skills to build the next generation of AI-powered applications. + +--- + +*Ready to start building with LangChain? Install it with \`pip install langchain openai\` and begin creating your first LLM application. 
The future of AI applications is being built with LangChain, and now you have the knowledge to be part of it.*` + }, + { + id: 5, + title: "CUDA: Unleashing the Power of GPU Computing", + excerpt: "Master GPU programming with CUDA to accelerate your computational workloads by 10-100x over traditional CPU processing and unlock massive parallel computing power.", + author: "Jan Heimann", + date: "2025-01-17", + readTime: "15 min read", + tags: ["CUDA", "GPU", "Parallel Computing", "NVIDIA", "Performance"], + category: "ML Engineering", + content: `## Introduction + +In the world of high-performance computing, the shift from CPU-only processing to GPU-accelerated computing has been nothing short of revolutionary. At the heart of this transformation lies CUDA (Compute Unified Device Architecture), NVIDIA's parallel computing platform that has democratized GPU programming and enabled breakthroughs in fields ranging from scientific computing to artificial intelligence. Whether you're looking to accelerate your scientific simulations, train deep learning models, or process massive datasets, understanding CUDA is essential. + +## What is CUDA? + +CUDA is a parallel computing platform and programming model developed by NVIDIA that enables developers to use GPUs (Graphics Processing Units) for general-purpose computing. Introduced in 2006, CUDA transformed GPUs from specialized graphics rendering devices into powerful parallel processors capable of tackling complex computational problems. + +The key insight behind CUDA is that many computational problems can be expressed as parallel operations—the same operation applied to many data elements simultaneously. While CPUs excel at sequential tasks with complex branching logic, GPUs with their thousands of cores are perfect for parallel workloads. CUDA provides the tools and abstractions to harness this massive parallelism. + +### Why GPU Computing? + +Consider this comparison: +- A modern CPU might have 8-16 cores, each optimized for sequential execution +- A modern GPU has thousands of smaller cores designed for parallel execution +- For parallelizable tasks, GPUs can be 10-100x faster than CPUs + +## Core Concepts and Architecture + +### 1. The CUDA Programming Model + +CUDA extends C/C++ with a few key concepts: + +\`\`\`cuda +// CPU code (host) +int main() { + int *h_data, *d_data; // h_ for host, d_ for device + int size = 1024 * sizeof(int); + + // Allocate memory on host + h_data = (int*)malloc(size); + + // Allocate memory on GPU + cudaMalloc(&d_data, size); + + // Copy data from host to device + cudaMemcpy(d_data, h_data, size, cudaMemcpyHostToDevice); + + // Launch kernel with 256 blocks, 1024 threads per block + myKernel<<<256, 1024>>>(d_data); + + // Copy results back + cudaMemcpy(h_data, d_data, size, cudaMemcpyDeviceToHost); + + // Cleanup + cudaFree(d_data); + free(h_data); +} + +// GPU code (device) +__global__ void myKernel(int *data) { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + data[idx] = data[idx] * 2; // Simple operation +} +\`\`\` + +### 2. 
Thread Hierarchy + +CUDA organizes threads in a hierarchical structure: + +- **Thread**: The basic unit of execution +- **Block**: A group of threads that can cooperate and share memory +- **Grid**: A collection of blocks + +\`\`\`cuda +// Understanding thread indexing +__global__ void indexExample() { + // Global thread ID calculation + int globalIdx = blockIdx.x * blockDim.x + threadIdx.x; + + // 2D grid example + int x = blockIdx.x * blockDim.x + threadIdx.x; + int y = blockIdx.y * blockDim.y + threadIdx.y; + int idx = y * gridDim.x * blockDim.x + x; +} +\`\`\` + +### 3. Memory Hierarchy + +CUDA provides several memory types with different characteristics: + +\`\`\`cuda +__global__ void memoryExample(float *input, float *output) { + // Shared memory - fast, shared within block + __shared__ float tile[256]; + + // Registers - fastest, private to each thread + float temp = input[threadIdx.x]; + + // Global memory - large but slow + output[threadIdx.x] = temp; + + // Constant memory - cached, read-only + // Texture memory - cached, optimized for 2D locality +} +\`\`\` + +### 4. GPU Architecture Basics + +Modern NVIDIA GPUs consist of: +- **Streaming Multiprocessors (SMs)**: Independent processors that execute blocks +- **CUDA Cores**: Basic arithmetic units within SMs +- **Warp Schedulers**: Manage thread execution in groups of 32 (warps) +- **Memory Controllers**: Handle data movement + +## Writing Your First CUDA Program + +Let's create a complete CUDA program that adds two vectors: + +\`\`\`cuda +#include +#include + +// CUDA kernel for vector addition +__global__ void vectorAdd(float *a, float *b, float *c, int n) { + // Calculate global thread ID + int tid = blockIdx.x * blockDim.x + threadIdx.x; + + // Boundary check + if (tid < n) { + c[tid] = a[tid] + b[tid]; + } +} + +int main() { + int n = 1000000; // 1 million elements + size_t size = n * sizeof(float); + + // Allocate host memory + float *h_a = (float*)malloc(size); + float *h_b = (float*)malloc(size); + float *h_c = (float*)malloc(size); + + // Initialize input vectors + for (int i = 0; i < n; i++) { + h_a[i] = i; + h_b[i] = i * 2; + } + + // Allocate device memory + float *d_a, *d_b, *d_c; + cudaMalloc(&d_a, size); + cudaMalloc(&d_b, size); + cudaMalloc(&d_c, size); + + // Copy input data to device + cudaMemcpy(d_a, h_a, size, cudaMemcpyHostToDevice); + cudaMemcpy(d_b, h_b, size, cudaMemcpyHostToDevice); + + // Launch kernel + int threadsPerBlock = 256; + int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock; + vectorAdd<<>>(d_a, d_b, d_c, n); + + // Copy result back to host + cudaMemcpy(h_c, d_c, size, cudaMemcpyDeviceToHost); + + // Verify results + for (int i = 0; i < 10; i++) { + printf("%.0f + %.0f = %.0f\\n", h_a[i], h_b[i], h_c[i]); + } + + // Cleanup + free(h_a); free(h_b); free(h_c); + cudaFree(d_a); cudaFree(d_b); cudaFree(d_c); + + return 0; +} +\`\`\` + +Compile and run: +\`\`\`bash +nvcc vector_add.cu -o vector_add +./vector_add +\`\`\` + +## Advanced CUDA Features + +### 1. 
Shared Memory Optimization + +Shared memory is crucial for performance optimization: + +\`\`\`cuda +__global__ void matrixMultiplyShared(float *A, float *B, float *C, int width) { + __shared__ float tileA[16][16]; + __shared__ float tileB[16][16]; + + int bx = blockIdx.x, by = blockIdx.y; + int tx = threadIdx.x, ty = threadIdx.y; + + int row = by * 16 + ty; + int col = bx * 16 + tx; + + float sum = 0.0f; + + // Loop over tiles + for (int tile = 0; tile < width/16; tile++) { + // Load tiles into shared memory + tileA[ty][tx] = A[row * width + tile * 16 + tx]; + tileB[ty][tx] = B[(tile * 16 + ty) * width + col]; + __syncthreads(); + + // Compute partial product + for (int k = 0; k < 16; k++) { + sum += tileA[ty][k] * tileB[k][tx]; + } + __syncthreads(); + } + + C[row * width + col] = sum; +} +\`\`\` + +### 2. Atomic Operations + +For concurrent updates to shared data: + +\`\`\`cuda +__global__ void histogram(int *data, int *hist, int n) { + int tid = blockIdx.x * blockDim.x + threadIdx.x; + if (tid < n) { + atomicAdd(&hist[data[tid]], 1); + } +} +\`\`\` + +### 3. Dynamic Parallelism + +Launch kernels from within kernels: + +\`\`\`cuda +__global__ void parentKernel(int *data, int n) { + if (threadIdx.x == 0) { + // Launch child kernel + childKernel<<<1, 256>>>(data + blockIdx.x * 256, 256); + } +} +\`\`\` + +### 4. CUDA Streams + +Enable concurrent operations: + +\`\`\`cuda +cudaStream_t stream1, stream2; +cudaStreamCreate(&stream1); +cudaStreamCreate(&stream2); + +// Async operations on different streams +cudaMemcpyAsync(d_a, h_a, size, cudaMemcpyHostToDevice, stream1); +cudaMemcpyAsync(d_b, h_b, size, cudaMemcpyHostToDevice, stream2); + +kernel1<<>>(d_a); +kernel2<<>>(d_b); + +cudaStreamSynchronize(stream1); +cudaStreamSynchronize(stream2); +\`\`\` + +## Optimization Techniques + +### 1. Coalesced Memory Access + +Ensure threads access contiguous memory: + +\`\`\`cuda +// Good - coalesced access +__global__ void good(float *data) { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + float val = data[idx]; // Thread 0->data[0], Thread 1->data[1], etc. +} + +// Bad - strided access +__global__ void bad(float *data) { + int idx = blockIdx.x * blockDim.x + threadIdx.x; + float val = data[idx * 32]; // Thread 0->data[0], Thread 1->data[32], etc. +} +\`\`\` + +### 2. Occupancy Optimization + +Balance resources for maximum throughput: + +\`\`\`cuda +// Use CUDA occupancy calculator +int blockSize; +int minGridSize; +cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, myKernel, 0, 0); + +// Launch with optimal configuration +myKernel<<>>(data); +\`\`\` + +### 3. Warp-Level Primitives + +Leverage warp-level operations: + +\`\`\`cuda +__global__ void warpReduce(float *data) { + float val = data[threadIdx.x]; + + // Warp-level reduction + for (int offset = 16; offset > 0; offset /= 2) { + val += __shfl_down_sync(0xffffffff, val, offset); + } + + if (threadIdx.x % 32 == 0) { + // Thread 0 of each warp has the sum + atomicAdd(output, val); + } +} +\`\`\` + +## CUDA Libraries and Ecosystem + +NVIDIA provides highly optimized libraries: + +### 1. cuBLAS - Linear Algebra + +\`\`\`cpp +#include + +cublasHandle_t handle; +cublasCreate(&handle); + +// Matrix multiplication: C = alpha * A * B + beta * C +cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, + m, n, k, &alpha, + d_A, m, d_B, k, &beta, d_C, m); +\`\`\` + +### 2. 
cuDNN - Deep Learning + +\`\`\`cpp +#include + +cudnnHandle_t cudnn; +cudnnCreate(&cudnn); + +// Convolution forward pass +cudnnConvolutionForward(cudnn, &alpha, xDesc, x, wDesc, w, + convDesc, algo, workspace, workspaceSize, + &beta, yDesc, y); +\`\`\` + +### 3. Thrust - C++ Template Library + +\`\`\`cpp +#include +#include + +thrust::device_vector d_vec(1000000); +thrust::sort(d_vec.begin(), d_vec.end()); +\`\`\` + +## Debugging and Profiling + +### 1. Error Checking + +Always check CUDA errors: + +\`\`\`cuda +#define CUDA_CHECK(call) \\ + do { \\ + cudaError_t error = call; \\ + if (error != cudaSuccess) { \\ + fprintf(stderr, "CUDA error at %s:%d - %s\\n", \\ + __FILE__, __LINE__, cudaGetErrorString(error)); \\ + exit(1); \\ + } \\ + } while(0) + +// Usage +CUDA_CHECK(cudaMalloc(&d_data, size)); +\`\`\` + +### 2. NVIDIA Nsight Tools + +- **Nsight Systems**: System-wide performance analysis +- **Nsight Compute**: Kernel-level profiling +- **cuda-memcheck**: Memory error detection + +\`\`\`bash +# Profile your application +nsys profile ./my_cuda_app +ncu --set full ./my_cuda_app +\`\`\` + +## Common Pitfalls and Best Practices + +### 1. Memory Management +- Always free allocated memory +- Use cudaMallocManaged for unified memory when appropriate +- Be aware of memory bandwidth limitations + +### 2. Thread Divergence +\`\`\`cuda +// Avoid divergent branches +if (threadIdx.x < 16) { + // Half the warp takes this path +} else { + // Other half takes this path - causes divergence +} +\`\`\` + +### 3. Grid and Block Size Selection +- Block size should be multiple of 32 (warp size) +- Consider hardware limits (max threads per block, registers) +- Use occupancy calculator for guidance + +### 4. Synchronization +\`\`\`cuda +// Block-level synchronization +__syncthreads(); + +// Device-level synchronization +cudaDeviceSynchronize(); +\`\`\` + +## Real-World Applications + +1. **Deep Learning**: Training neural networks (PyTorch, TensorFlow) +2. **Scientific Computing**: Molecular dynamics, climate modeling +3. **Image Processing**: Real-time filters, computer vision +4. **Finance**: Monte Carlo simulations, risk analysis +5. **Cryptography**: Password cracking, blockchain mining + +## Getting Started Resources + +1. **NVIDIA CUDA Toolkit**: Essential development tools +2. **CUDA Programming Guide**: Comprehensive official documentation +3. **CUDA by Example**: Excellent book for beginners +4. **GPU Gems**: Advanced techniques and algorithms +5. **NVIDIA Developer Forums**: Active community support + +## Future of CUDA + +CUDA continues to evolve with new GPU architectures: +- **Tensor Cores**: Specialized units for AI workloads +- **Ray Tracing Cores**: Hardware-accelerated ray tracing +- **Multi-Instance GPU (MIG)**: Partition GPUs for multiple users +- **CUDA Graphs**: Reduce kernel launch overhead + +## Conclusion + +CUDA has transformed the computing landscape by making GPU programming accessible to developers worldwide. What started as a way to use graphics cards for general computation has evolved into a comprehensive ecosystem powering everything from AI breakthroughs to scientific discoveries. + +The key to mastering CUDA is understanding its parallel execution model and memory hierarchy. Start with simple kernels, profile your code, and gradually optimize. Remember that not all problems benefit from GPU acceleration—CUDA shines when you have massive parallelism and arithmetic intensity. + +As we enter an era of increasingly parallel computing, CUDA skills become ever more valuable. 
Whether you're accelerating existing applications or building new ones from scratch, CUDA provides the tools to harness the incredible power of modern GPUs. + +--- + +*Ready to start your CUDA journey? Download the CUDA Toolkit from NVIDIA's developer site and begin with simple vector operations. The world of accelerated computing awaits, and with CUDA, you have the key to unlock it.*` + }, + { + id: 6, + title: "The Future of AI: Navigating the Next Decade of Intelligent Systems", + excerpt: "Exploring the trajectory from current LLMs to AGI and the transformative impact AI will have on business and society in the coming decade.", + author: "Jan Heimann", + date: "2025-01-18", + readTime: "14 min read", + tags: ["AI", "AGI", "Future Tech", "Machine Learning", "Society"], + category: "Future of AI", + content: `## The AI revolution is just beginning. Here's what leaders need to know about the transformative technologies that will reshape business and society in the coming decade. + +As we stand at the threshold of 2025, artificial intelligence has evolved from a promising technology to a fundamental driver of business transformation. The rapid advancement from simple chatbots to sophisticated reasoning systems like OpenAI's o1 and DeepSeek's R1 signals that we're entering a new phase of AI capability—one that will fundamentally reshape how organizations operate, compete, and create value. + +The question is no longer whether AI will transform your industry, but how quickly you can adapt to harness its potential while navigating its complexities. + +## The Current State: AI at an Inflection Point + +Today's AI landscape is characterized by unprecedented capability and accessibility. Large language models have democratized access to AI, enabling organizations of all sizes to leverage sophisticated natural language processing, code generation, and analytical capabilities. Meanwhile, specialized AI systems are achieving superhuman performance in domains ranging from protein folding to strategic game playing. + +But we're witnessing something more profound than incremental improvement. The emergence of multimodal models that seamlessly process text, images, and audio, combined with reasoning capabilities that can tackle complex mathematical and scientific problems, suggests we're approaching a fundamental shift in what machines can accomplish. + +**Key indicators of this inflection point:** +- AI models demonstrating emergent capabilities not explicitly programmed +- Dramatic cost reductions in AI deployment (100x decrease in inference costs since 2020) +- Integration of AI into critical business processes across industries +- Shift from AI as a tool to AI as a collaborative partner + +## Five Transformative Trends Shaping AI's Future + +### 1. The Rise of Agentic AI + +The next frontier of AI isn't just about answering questions—it's about taking action. Agentic AI systems will autonomously pursue complex goals, manage multi-step processes, and coordinate with other AI agents and humans to accomplish objectives. + +**What this means for business:** +- Autonomous AI employees handling complete workflows +- Self-improving systems that optimize their own performance +- AI-to-AI marketplaces where specialized agents collaborate +- Dramatic reduction in operational overhead for routine tasks + +**Timeline:** Early agentic systems are already emerging. Expect widespread adoption by 2027, with mature ecosystems by 2030. + +### 2. 
Reasoning and Scientific Discovery + +The ability of AI to engage in complex reasoning marks a paradigm shift. Models like OpenAI's o1 and DeepSeek's R1 demonstrate that AI can now work through multi-step problems, explore hypotheses, and even conduct scientific research. + +**Transformative potential:** +- Acceleration of drug discovery and materials science +- AI-driven hypothesis generation and experimental design +- Mathematical theorem proving and discovery +- Complex system optimization across supply chains and infrastructure + +**Business impact:** Organizations that integrate reasoning AI into their R&D processes will achieve 10x productivity gains in innovation cycles. + +### 3. The Convergence of Physical and Digital AI + +As robotics hardware catches up with AI software, we're approaching an era where AI won't just think—it will act in the physical world with unprecedented dexterity and autonomy. + +**Key developments:** +- Humanoid robots entering manufacturing and service industries +- AI-powered autonomous systems in agriculture, construction, and logistics +- Seamless integration between digital planning and physical execution +- Embodied AI learning from physical interactions + +**Projection:** By 2030, 30% of physical labor in structured environments will be augmented or automated by AI-powered robotics. + +### 4. Personalized AI: From General to Specific + +The future of AI is deeply personal. Rather than one-size-fits-all models, we're moving toward AI systems that adapt to individual users, learning their preferences, work styles, and goals. + +**Evolution pathway:** +- Personal AI assistants that understand context and history +- Domain-specific AI trained on proprietary organizational knowledge +- Adaptive learning systems that improve through interaction +- Privacy-preserving personalization through federated learning + +**Critical consideration:** The balance between personalization and privacy will define the boundaries of acceptable AI deployment. + +### 5. AI Governance and Ethical AI by Design + +As AI systems become more powerful and pervasive, governance frameworks are evolving from afterthoughts to fundamental architecture components. + +**Emerging frameworks:** +- Built-in explainability and audit trails +- Automated bias detection and mitigation +- Regulatory compliance through technical standards +- International cooperation on AI safety standards + +**Business imperative:** Organizations that build ethical AI practices now will avoid costly retrofitting and maintain social license to operate. + +## Industries at the Forefront of AI Transformation + +### Healthcare: From Reactive to Predictive + +AI is shifting healthcare from treating illness to preventing it. Continuous monitoring, genetic analysis, and behavioral data will enable AI to predict health issues years before symptoms appear. + +**2030 vision:** +- AI-driven personalized medicine based on individual genetics +- Virtual health assistants managing chronic conditions +- Drug discovery timelines reduced from decades to years +- Surgical robots performing complex procedures with superhuman precision + +### Financial Services: Intelligent Money + +The financial sector is becoming an AI-first industry, with algorithms making microsecond trading decisions and AI advisors managing trillions in assets. 
+ +**Transformation vectors:** +- Real-time fraud prevention with 99.99% accuracy +- Hyper-personalized financial products +- Autonomous trading systems operating within regulatory frameworks +- Democratized access to sophisticated financial strategies + +### Education: Adaptive Learning at Scale + +AI tutors that adapt to each student's learning style, pace, and interests will make personalized education accessible globally. + +**Revolutionary changes:** +- AI teaching assistants providing 24/7 support +- Curriculum that evolves based on job market demands +- Skill verification through AI-proctored assessments +- Lifelong learning companions that grow with learners + +### Manufacturing: The Autonomous Factory + +Smart factories will self-optimize, predict maintenance needs, and adapt production in real-time to demand fluctuations. + +**Industry 5.0 features:** +- Zero-defect manufacturing through AI quality control +- Demand-driven production with minimal waste +- Human-robot collaboration enhancing worker capabilities +- Supply chain orchestration across global networks + +## Navigating the Challenges Ahead + +### The Talent Imperative + +The AI skills gap represents both the greatest challenge and opportunity for organizations. Success requires not just hiring AI specialists but reskilling entire workforces. + +**Strategic priorities:** +- Establish AI literacy programs for all employees +- Create centers of excellence for AI innovation +- Partner with educational institutions for talent pipelines +- Develop retention strategies for AI talent + +### Infrastructure and Integration + +Legacy systems and data silos remain significant barriers to AI adoption. Organizations must modernize their technology stacks while maintaining operational continuity. + +**Critical investments:** +- Cloud-native architectures supporting AI workloads +- Data governance frameworks ensuring quality and compliance +- API-first strategies enabling AI integration +- Edge computing infrastructure for real-time AI + +### Ethical and Societal Considerations + +As AI systems gain capability, questions of accountability, fairness, and societal impact become paramount. + +**Essential considerations:** +- Establishing clear accountability for AI decisions +- Ensuring equitable access to AI benefits +- Managing workforce transitions with dignity +- Contributing to societal discussions on AI governance + +## Strategic Imperatives for Leaders + +### 1. Develop an AI-First Mindset + +Stop thinking of AI as a technology to implement and start thinking of it as a capability to cultivate. Every business process, customer interaction, and strategic decision should be examined through the lens of AI enhancement. + +### 2. Invest in Data as a Strategic Asset + +AI is only as good as the data it learns from. Organizations must treat data as a strategic asset, investing in quality, governance, and accessibility. + +### 3. Build Adaptive Organizations + +The pace of AI advancement requires organizational agility. Create structures that can rapidly experiment, learn, and scale successful AI initiatives. + +### 4. Embrace Responsible Innovation + +Ethical AI isn't a constraint—it's a competitive advantage. Organizations that build trust through responsible AI practices will win in the long term. + +### 5. Think Ecosystem, Not Enterprise + +The future of AI is collaborative. Build partnerships, participate in industry initiatives, and contribute to the broader AI ecosystem. 
+ +## The Road Ahead: 2025-2035 + +The next decade will witness AI's evolution from a powerful tool to an indispensable partner in human progress. We'll see: + +- **2025-2027**: Consolidation of current capabilities, widespread adoption of generative AI, emergence of early agentic systems +- **2028-2030**: Breakthrough in artificial general intelligence (AGI) capabilities, seamless human-AI collaboration, transformation of major industries +- **2031-2035**: Potential achievement of AGI, fundamental restructuring of work and society, new forms of human-AI symbiosis + +## Conclusion: The Time for Action is Now + +The future of AI isn't a distant possibility—it's unfolding before us at an accelerating pace. Organizations that move decisively to build AI capabilities, while thoughtfully addressing the associated challenges, will shape the next era of human achievement. + +The choice isn't whether to adopt AI, but how quickly and effectively you can integrate it into your organization's DNA. Those who hesitate risk not just competitive disadvantage but potential irrelevance. + +As we navigate this transformative period, success will belong to those who view AI not as a threat to human potential but as its greatest amplifier. The organizations that thrive will be those that combine the creativity, empathy, and wisdom of humans with the speed, scale, and precision of AI. + +The future of AI is not predetermined—it's being written now by the choices we make and the actions we take. What role will your organization play in shaping this future? + +--- + +*The journey to an AI-powered future begins with a single step. Whether you're just starting your AI transformation or looking to accelerate existing initiatives, the time for action is now. The future belongs to those who prepare for it today.*` + } +]; + +// Utility functions for blog management +export const getBlogCategories = () => { + const categories = [...new Set(blogPosts.map(post => post.category))]; + return ['All', ...categories]; +}; + + +export const getBlogPostById = (id) => { + return blogPosts.find(post => post.id === id); +}; + +export const getBlogPostsByCategory = (category) => { + if (category === 'All') return blogPosts; + return blogPosts.filter(post => post.category === category); +}; + +export const searchBlogPosts = (query) => { + const searchTerm = query.toLowerCase(); + return blogPosts.filter(post => + post.title.toLowerCase().includes(searchTerm) || + post.excerpt.toLowerCase().includes(searchTerm) || + post.tags.some(tag => tag.toLowerCase().includes(searchTerm)) + ); +}; \ No newline at end of file diff --git a/src/hooks/useMobileDetection.js b/src/hooks/useMobileDetection.js new file mode 100644 index 0000000..c19ac41 --- /dev/null +++ b/src/hooks/useMobileDetection.js @@ -0,0 +1,62 @@ +import { useState, useEffect } from 'react'; +import { useMediaQuery } from 'react-responsive'; + +export const useMobileDetection = () => { + const [deviceInfo, setDeviceInfo] = useState({ + isMobile: false, + isTablet: false, + isDesktop: false, + isIOS: false, + isAndroid: false, + isSafari: false, + isChrome: false, + supportsVideoTexture: true, + }); + + const isMobileQuery = useMediaQuery({ maxWidth: 768 }); + const isTabletQuery = useMediaQuery({ minWidth: 768, maxWidth: 1024 }); + const isSmallQuery = useMediaQuery({ maxWidth: 440 }); + + useEffect(() => { + const userAgent = navigator.userAgent || navigator.vendor || window.opera; + + // Detect mobile devices + const isMobile = isMobileQuery || 
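+      // User-agent fallback: some phones report viewports wider than 768px
+      // (e.g. in landscape), so known mobile UAs are treated as mobile too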
/Android|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(userAgent); + const isTablet = isTabletQuery && !isMobile; + const isDesktop = !isMobile && !isTablet; + + // Detect specific platforms + const isIOS = /iPad|iPhone|iPod/.test(userAgent) && !window.MSStream; + const isAndroid = /Android/i.test(userAgent); + + // Detect browsers + const isSafari = /Safari/.test(userAgent) && !/Chrome/.test(userAgent); + const isChrome = /Chrome/.test(userAgent); + + // Determine video texture support + let supportsVideoTexture = true; + + // iOS Safari has known issues with video textures + if (isIOS && isSafari) { + supportsVideoTexture = false; + } + + // Some Android devices have performance issues with video textures + if (isAndroid && isSmallQuery) { + supportsVideoTexture = false; + } + + setDeviceInfo({ + isMobile, + isTablet, + isDesktop, + isIOS, + isAndroid, + isSafari, + isChrome, + supportsVideoTexture, + }); + }, [isMobileQuery, isTabletQuery, isSmallQuery]); + + return deviceInfo; +}; \ No newline at end of file diff --git a/src/hooks/useMobileVideoTexture.js b/src/hooks/useMobileVideoTexture.js new file mode 100644 index 0000000..06d6434 --- /dev/null +++ b/src/hooks/useMobileVideoTexture.js @@ -0,0 +1,94 @@ +import { useEffect, useRef, useState } from 'react'; +import { useVideoTexture, useTexture } from '@react-three/drei'; +import { useMobileDetection } from './useMobileDetection'; + +export const useMobileVideoTexture = (src, fallbackImage = null) => { + const deviceInfo = useMobileDetection(); + const [shouldUseVideo, setShouldUseVideo] = useState(false); // Start with false for mobile + const [isLoading, setIsLoading] = useState(true); + const [hasError, setHasError] = useState(false); + const videoRef = useRef(null); + + // Determine if we should use video based on device capabilities + useEffect(() => { + // On mobile devices, default to static images to avoid loading issues + if (deviceInfo.isMobile || deviceInfo.isTablet) { + setShouldUseVideo(false); + setIsLoading(false); + return; + } + + // Only use video on desktop + setShouldUseVideo(true); + }, [deviceInfo]); + + // Use video texture only if device supports it and we've determined it should use video + const videoTexture = useVideoTexture(shouldUseVideo ? 
src : null, { + muted: true, + loop: true, + playsInline: true, + autoPlay: true, + }); + + // Always load fallback image + const fallbackTexture = useTexture(fallbackImage || '/assets/project-logo1.png'); + + useEffect(() => { + if (shouldUseVideo && videoTexture) { + const video = videoTexture?.source?.data; + if (video && video instanceof HTMLVideoElement) { + videoRef.current = video; + + // Set mobile-specific attributes + video.setAttribute('playsinline', true); + video.setAttribute('muted', true); + video.setAttribute('autoplay', true); + video.setAttribute('loop', true); + + // iOS Safari specific handling + if (deviceInfo.isIOS && deviceInfo.isSafari) { + video.muted = true; + video.defaultMuted = true; + + // Force playsinline for iOS Safari + video.setAttribute('webkit-playsinline', true); + video.setAttribute('playsinline', true); + } + + // Error handling + const handleError = () => { + console.warn('Video failed to load, using fallback'); + setHasError(true); + setShouldUseVideo(false); + setIsLoading(false); + }; + + const handleCanPlay = () => { + setIsLoading(false); + setHasError(false); + }; + + video.addEventListener('error', handleError); + video.addEventListener('canplaythrough', handleCanPlay); + + // Cleanup + return () => { + video.removeEventListener('error', handleError); + video.removeEventListener('canplaythrough', handleCanPlay); + }; + } + } else { + // Not using video, mark as loaded + setIsLoading(false); + } + }, [shouldUseVideo, videoTexture, deviceInfo]); + + // Return appropriate texture or fallback + return { + texture: shouldUseVideo && !hasError ? videoTexture : fallbackTexture, + isLoading, + hasError, + shouldUseVideo, + deviceInfo, + }; +}; \ No newline at end of file diff --git a/src/index.css b/src/index.css index 3f89dd8..0faacc7 100644 --- a/src/index.css +++ b/src/index.css @@ -1,197 +1,203 @@ @import url('https://fonts.cdnfonts.com/css/general-sans'); - @tailwind base; @tailwind components; @tailwind utilities; - * { - scroll-behavior: smooth; + scroll-behavior:smooth; } body { - background: #010103; - font-family: 'General Sans', sans-serif; + background:#010103; + font-family:'General Sans', sans-serif; } @layer utilities { - .c-space { - @apply sm:px-10 px-5; - } - - .head-text { - @apply sm:text-4xl text-3xl font-semibold text-gray_gradient; - } - - .nav-ul { - @apply flex flex-col items-center gap-4 sm:flex-row md:gap-6 relative z-20; - } - - .nav-li { - @apply text-neutral-400 hover:text-white font-generalsans max-sm:hover:bg-black-500 max-sm:w-full max-sm:rounded-md py-2 max-sm:px-5; - } - - .nav-li_a { - @apply text-lg md:text-base hover:text-white transition-colors; - } - - .nav-sidebar { - @apply absolute left-0 right-0 bg-black-200 backdrop-blur-sm transition-all duration-300 ease-in-out overflow-hidden z-20 mx-auto sm:hidden block; - } - - .text-gray_gradient { - @apply bg-gradient-to-r from-[#BEC1CF] from-60% via-[#D5D8EA] via-60% to-[#D5D8EA] to-100% bg-clip-text text-transparent; - } - - /* button component */ - .btn { - @apply flex gap-4 items-center justify-center cursor-pointer p-3 rounded-md bg-black-300 transition-all active:scale-95 text-white mx-auto; - } - - .btn-ping { - @apply animate-ping absolute inline-flex h-full w-full rounded-full bg-green-400 opacity-75; - } - - .btn-ping_dot { - @apply relative inline-flex rounded-full h-3 w-3 bg-green-500; - } - - /* hero section */ - .hero_tag { - @apply text-center xl:text-6xl md:text-5xl sm:text-4xl text-3xl font-generalsans font-black !leading-normal; - } - - /* about 
section */ - .grid-container { - @apply w-full h-full border border-black-300 bg-black-200 rounded-lg sm:p-7 p-4 flex flex-col gap-5; - } - - .grid-headtext { - @apply text-xl font-semibold mb-2 text-white font-generalsans; - } - - .grid-subtext { - @apply text-[#afb0b6] text-base font-generalsans; - } - - .copy-container { - @apply cursor-pointer flex justify-center items-center gap-2; - } - - /* projects section */ - .arrow-btn { - @apply w-10 h-10 p-3 cursor-pointer active:scale-95 transition-all rounded-full arrow-gradient; - } - - .tech-logo { - @apply w-10 h-10 rounded-md p-2 bg-neutral-100 bg-opacity-10 backdrop-filter backdrop-blur-lg flex justify-center items-center; - } - - /* clients section */ - .client-container { - @apply grid md:grid-cols-2 grid-cols-1 gap-5 mt-12; - } - - .client-review { - @apply rounded-lg md:p-10 p-5 col-span-1 bg-black-300 bg-opacity-50; - } - - .client-content { - @apply flex lg:flex-row flex-col justify-between lg:items-center items-start gap-5 mt-7; - } - - /* work experience section */ - .work-container { - @apply grid lg:grid-cols-3 grid-cols-1 gap-5 mt-12; - } - - .work-canvas { - @apply col-span-1 rounded-lg bg-black-200 border border-black-300; - } - - .work-content { - @apply col-span-2 rounded-lg bg-black-200 border border-black-300; - } - - .work-content_container { - @apply grid grid-cols-[auto_1fr] items-start gap-5 transition-all ease-in-out duration-500 cursor-pointer hover:bg-black-300 rounded-lg sm:px-5 px-2.5; - } - - .work-content_logo { - @apply rounded-3xl w-16 h-16 p-2 bg-black-600; - } - - .work-content_bar { - @apply flex-1 w-0.5 mt-4 h-full bg-black-300 group-hover:bg-black-500 group-last:hidden; - } - - /* contact section */ - .contact-container { - @apply max-w-xl relative z-10 sm:px-10 px-5 mt-12; - } - - .field-label { - @apply text-lg text-white-600; - } - - .field-input { - @apply w-full bg-black-300 px-5 py-2 min-h-14 rounded-lg placeholder:text-white-500 text-lg text-white-800 shadow-black-200 shadow-2xl focus:outline-none; - } - - .field-btn { - @apply bg-black-500 px-5 py-2 min-h-12 rounded-lg shadow-black-200 shadow-2xl flex justify-center items-center text-lg text-white gap-3; - } - - .field-btn_arrow { - @apply w-2.5 h-2.5 object-contain invert brightness-0; - } - - /* footer */ - .social-icon { - @apply w-12 h-12 rounded-full flex justify-center items-center bg-black-300 border border-black-200; - } + .c-space { + @apply sm:px-10 px-5; + } + .head-text { + @apply sm:text-4xl text-3xl font-semibold text-gray_gradient; + } + .nav-ul { + @apply flex flex-col items-center gap-4 sm:flex-row md:gap-6 relative z-20; + } + .nav-li { + @apply text-neutral-400 hover:text-white font-generalsans max-sm:hover:bg-black-500 max-sm:w-full max-sm:rounded-md py-2 max-sm:px-5; + } + .nav-li_a { + @apply text-lg md:text-base hover:text-white transition-colors; + } + .nav-sidebar { + @apply absolute left-0 right-0 bg-black-200 backdrop-blur-sm transition-all duration-300 ease-in-out overflow-hidden z-20 mx-auto sm:hidden block; + } + .text-gray_gradient { + @apply bg-gradient-to-r from-[#BEC1CF] from-60% via-[#D5D8EA] via-60% to-[#D5D8EA] to-100% bg-clip-text text-transparent; + } + /* button component */ + .btn { + @apply flex gap-4 items-center justify-center cursor-pointer p-3 rounded-md bg-black-300 transition-all active:scale-95 text-white mx-auto; + } + .btn-ping { + @apply animate-ping absolute inline-flex h-full w-full rounded-full bg-green-400 opacity-75; + } + .btn-ping_dot { + @apply relative inline-flex rounded-full h-3 w-3 
bg-green-500; + } + /* hero section */ + .hero_tag { + @apply text-center xl:text-6xl md:text-5xl sm:text-4xl text-3xl font-generalsans font-black !leading-normal; + } + /* about section */ + .grid-container { + @apply w-full h-full border border-black-300 bg-black-200 rounded-lg sm:p-7 p-4 flex flex-col gap-5; + } + .grid-headtext { + @apply text-xl font-semibold mb-2 text-white font-generalsans; + } + .grid-subtext { + @apply text-[#afb0b6] text-base font-generalsans; + } + .copy-container { + @apply cursor-pointer flex justify-center items-center gap-2; + } + /* projects section */ + .arrow-btn { + @apply w-10 h-10 p-3 cursor-pointer active:scale-95 transition-all rounded-full arrow-gradient; + } + .tech-logo { + @apply w-10 h-10 rounded-md p-2 bg-neutral-100 bg-opacity-10 backdrop-filter backdrop-blur-lg flex justify-center items-center; + } + /* publications section */ + .publications-container { + @apply grid grid-cols-1 gap-8 mt-12; + } + .publication-item { + @apply rounded-lg p-6 bg-black-300 bg-opacity-50 border border-black-200 hover:bg-opacity-70 transition-all duration-300; + } + .publication-image { + @apply mb-6; + } + .publication-content { + @apply space-y-4; + } + .publication-header { + @apply flex flex-col lg:flex-row lg:justify-between lg:items-start gap-3; + } + .publication-title { + @apply text-xl font-semibold text-white-800 leading-tight; + } + .publication-meta { + @apply flex flex-col lg:items-end gap-1; + } + .publication-venue { + @apply text-white-600 font-medium text-sm; + } + .publication-conference { + @apply text-white-800 font-bold text-lg bg-gradient-to-r from-blue-400 to-purple-500 bg-clip-text text-transparent mb-1; + } + .publication-workshop { + @apply text-white-700 font-medium text-sm; + } + .publication-workshop-subtitle { + @apply text-white-600 font-light text-sm mt-1 mb-3 italic; + } + .publication-year { + @apply text-white-500 text-sm; + } + .publication-authors { + @apply text-white-700 font-light; + } + .publication-abstract { + @apply text-white-600 leading-relaxed; + } + .publication-actions { + @apply flex flex-wrap gap-4 pt-2; + } + .publication-link { + @apply flex items-center gap-2 text-white-600 hover:text-white transition-colors cursor-pointer; + } + .publication-tags { + @apply flex flex-wrap gap-2 pt-3; + } + .publication-tag { + @apply px-3 py-1 bg-black-200 text-white-600 text-sm rounded-full border border-black-100; + } + /* work experience section */ + .work-container { + @apply grid lg:grid-cols-3 grid-cols-1 gap-5 mt-12; + } + .work-canvas { + @apply col-span-1 rounded-lg bg-black-200 border border-black-300; + } + .work-content { + @apply col-span-2 rounded-lg bg-black-200 border border-black-300; + } + .work-content_container { + @apply grid grid-cols-[auto_1fr] items-start gap-5 transition-all ease-in-out duration-500 cursor-pointer hover:bg-black-300 rounded-lg sm:px-5 px-2.5; + } + .work-content_logo { + @apply rounded-3xl w-16 h-16 p-2 bg-black-600; + } + .work-content_bar { + @apply flex-1 w-0.5 mt-4 h-full bg-black-300 group-hover:bg-black-500 group-last:hidden; + } + /* contact section */ + .contact-container { + @apply max-w-xl relative z-10 sm:px-10 px-5 mt-12; + } + .field-label { + @apply text-lg text-white-600; + } + .field-input { + @apply w-full bg-black-300 px-5 py-2 min-h-14 rounded-lg placeholder:text-white-500 text-lg text-white-800 shadow-black-200 shadow-2xl focus:outline-none; + } + .field-btn { + @apply bg-black-500 px-5 py-2 min-h-12 rounded-lg shadow-black-200 shadow-2xl flex justify-center 
items-center text-lg text-white gap-3; + } + .field-btn_arrow { + @apply w-2.5 h-2.5 object-contain invert brightness-0; + } + /* footer */ + .social-icon { + @apply w-12 h-12 rounded-full flex justify-center items-center bg-black-300 border border-black-200; + } } .waving-hand { - animation-name: wave-animation; - animation-duration: 2.5s; - animation-iteration-count: infinite; - transform-origin: 70% 70%; - display: inline-block; + animation-name:wave-animation; + animation-duration:2.5s; + animation-iteration-count:infinite; + transform-origin:70% 70%; + display:inline-block; } .arrow-gradient { - background-image: linear-gradient( - to right, - rgba(255, 255, 255, 0.1) 10%, - rgba(255, 255, 255, 0.000025) 50%, - rgba(255, 255, 255, 0.000025) 50%, - rgba(255, 255, 255, 0.025) 100% - ); + background-image:linear-gradient( to right, rgba(255, 255, 255, 0.1) 10%, rgba(255, 255, 255, 0.000025) 50%, rgba(255, 255, 255, 0.000025) 50%, rgba(255, 255, 255, 0.025) 100%); } @keyframes wave-animation { - 0% { - transform: rotate(0deg); - } - 15% { - transform: rotate(14deg); - } - 30% { - transform: rotate(-8deg); - } - 40% { - transform: rotate(14deg); - } - 50% { - transform: rotate(-4deg); - } - 60% { - transform: rotate(10deg); - } - 70% { - transform: rotate(0deg); - } - 100% { - transform: rotate(0deg); - } -} + 0% { + transform:rotate(0deg); + } + 15% { + transform:rotate(14deg); + } + 30% { + transform:rotate(-8deg); + } + 40% { + transform:rotate(14deg); + } + 50% { + transform:rotate(-4deg); + } + 60% { + transform:rotate(10deg); + } + 70% { + transform:rotate(0deg); + } + 100% { + transform:rotate(0deg); + } +} \ No newline at end of file diff --git a/src/sections/About.jsx b/src/sections/About.jsx index 3b97ba7..26a9e06 100644 --- a/src/sections/About.jsx +++ b/src/sections/About.jsx @@ -7,7 +7,7 @@ const About = () => { const [hasCopied, setHasCopied] = useState(false); const handleCopy = () => { - navigator.clipboard.writeText(' adrian@jsmastery.pro'); + navigator.clipboard.writeText('jan@heimann.ai'); setHasCopied(true); setTimeout(() => { @@ -20,13 +20,12 @@ const About = () => {
    - grid-1
    + Jan Magnus Heimann
    - Hi, I’m Adrian Hajdin
    + Hi, I'm Jan Magnus Heimann
    - With 12 years of experience, I have honed my skills in both frontend and backend dev, creating dynamic
    - and responsive websites.
    + AI/ML Engineer specializing in Reinforcement Learning and Large Language Models with proven track record of deploying production-grade AI systems and delivering significant business impact.
    @@ -34,13 +33,30 @@ const About = () => {
    - grid-2
    + Python
    + React
    + PyTorch
    + TypeScript
    + HuggingFace
      Tech Stack
    - I specialize in a variety of languages, frameworks, and tools that allow me to build robust and scalable
    - applications
    + I specialize in Python, PyTorch, TensorFlow, and advanced ML frameworks for building robust and scalable AI applications including multi-agent RL systems and fine-tuned LLMs.
    @@ -58,12 +74,12 @@ const About = () => {
        showGraticules
        globeImageUrl="//unpkg.com/three-globe/example/img/earth-night.jpg"
        bumpImageUrl="//unpkg.com/three-globe/example/img/earth-topology.png"
    -   labelsData={[{ lat: 40, lng: -100, text: 'Rjieka, Croatia', color: 'white', size: 15 }]}
    +   labelsData={[{ lat: 48.1351, lng: 11.5820, text: 'Munich, Germany', color: 'white', size: 15 }]}
      />
    - I’m very flexible with time zone communications & locations
    - I'm based in Rjieka, Croatia and open to remote work worldwide.
    + I'm very flexible with time zone communications & locations
    + I'm based in Munich, Germany and open to remote work worldwide.
    @@ -74,10 +90,9 @@ const About = () => {
      grid-3
    - My Passion for Coding
    + My Passion for AI & Machine Learning
    - I love solving problems and building things through code. Programming isn't just my
    - profession—it's my passion. I enjoy exploring new technologies, and enhancing my skills.
    + I love solving complex problems through AI and building systems that push the boundaries of what's possible. Machine Learning isn't just my profession—it's my passion for creating intelligent solutions.
    @@ -95,7 +110,7 @@ const About = () => {
      Contact me
      copy
    - adrian@jsmastery.pro
    + jan@heimann.ai

    diff --git a/src/sections/Blog.jsx b/src/sections/Blog.jsx new file mode 100644 index 0000000..c3db654 --- /dev/null +++ b/src/sections/Blog.jsx @@ -0,0 +1,203 @@ +import { useState } from 'react'; +import emailjs from '@emailjs/browser'; +import { blogPosts, getBlogCategories } from '../content/blogPosts.js'; +import BlogCard from '../components/BlogCard.jsx'; +import BlogPost from '../components/BlogPost.jsx'; +import useAlert from '../hooks/useAlert.js'; +import Alert from '../components/Alert.jsx'; + +const Blog = () => { + const [selectedPost, setSelectedPost] = useState(null); + const [searchTerm, setSearchTerm] = useState(''); + const [selectedCategory, setSelectedCategory] = useState('All'); + const [newsletterEmail, setNewsletterEmail] = useState(''); + const [newsletterLoading, setNewsletterLoading] = useState(false); + const { alert, showAlert, hideAlert } = useAlert(); + + // Get unique categories + const categories = getBlogCategories(); + + // Filter posts based on search and category + const filteredPosts = blogPosts.filter(post => { + const matchesSearch = post.title.toLowerCase().includes(searchTerm.toLowerCase()) || + post.excerpt.toLowerCase().includes(searchTerm.toLowerCase()) || + post.tags.some(tag => tag.toLowerCase().includes(searchTerm.toLowerCase())); + + const matchesCategory = selectedCategory === 'All' || post.category === selectedCategory; + + return matchesSearch && matchesCategory; + }); + + // Posts are already sorted by date in contentLoader + const sortedPosts = filteredPosts; + + const handleReadMore = (post) => { + setSelectedPost(post); + }; + + const handleBack = () => { + setSelectedPost(null); + }; + + const handleNewsletterSubmit = (e) => { + e.preventDefault(); + if (!newsletterEmail.trim()) return; + + setNewsletterLoading(true); + + emailjs + .send( + import.meta.env.VITE_EMAILJS_SERVICE_ID, + import.meta.env.VITE_EMAILJS_NEWSLETTER_TEMPLATE_ID, + { + subscriber_email: newsletterEmail, + to_name: 'Jan Magnus Heimann', + to_email: 'jan@heimann.ai', + message: `New newsletter subscription from: ${newsletterEmail}`, + }, + import.meta.env.VITE_EMAILJS_PUBLIC_KEY, + ) + .then( + () => { + setNewsletterLoading(false); + showAlert({ + show: true, + text: 'Successfully subscribed to newsletter! 🎉', + type: 'success', + }); + + setTimeout(() => { + hideAlert(); + setNewsletterEmail(''); + }, 3000); + }, + (error) => { + setNewsletterLoading(false); + console.error(error); + + showAlert({ + show: true, + text: 'Failed to subscribe. Please try again 😢', + type: 'danger', + }); + }, + ); + }; + + if (selectedPost) { + return ; + } + + return ( +
    + {alert.show && } +
    + {/* Header */} +
    +

    Tech Blog

    +

    + Sharing insights on machine learning, web development, and cutting-edge technology +

    +
    + + {/* Search and Filter */} +
    + {/* Search */} +
    + setSearchTerm(e.target.value)} + className="w-full bg-black-300 border border-black-200 rounded-lg px-4 py-2 pl-10 text-white placeholder-gray-400 focus:outline-none focus:border-blue-500" + /> + + + +
    + + {/* Category Filter */} +
    + {categories.map(category => ( + + ))} +
    +
    + + {/* Results count */} +
    +

    + {filteredPosts.length === blogPosts.length + ? `${filteredPosts.length} articles` + : `${filteredPosts.length} of ${blogPosts.length} articles`} +

    +
    + + + {/* All Posts */} +
    + + {filteredPosts.length === 0 ? ( +
    +

    + No articles found matching your search. +

    +
    + ) : ( +
    + {sortedPosts.map(post => ( + + ))} +
    + )} +
    + + {/* Newsletter Signup */} +
    +

    Stay Updated

    +

    + Get notified when I publish new articles about machine learning, web development, and technology insights. +

    +
    + setNewsletterEmail(e.target.value)} + required + className="flex-1 bg-black-200 border border-black-100 rounded-lg px-4 py-2 text-white placeholder-gray-400 focus:outline-none focus:border-blue-500" + /> + +
    +
    +
    +
    + ); +}; + +export default Blog; \ No newline at end of file diff --git a/src/sections/Clients.jsx b/src/sections/Clients.jsx deleted file mode 100644 index 6a09893..0000000 --- a/src/sections/Clients.jsx +++ /dev/null @@ -1,37 +0,0 @@ -import { clientReviews } from '../constants/index.js'; - -const Clients = () => { - return ( -
    -

    Hear from My Clients

    - -
    - {clientReviews.map((item) => ( -
    -
    -

    {item.review}

    - -
    -
    - reviewer -
    -

    {item.name}

    -

    {item.position}

    -
    -
    - -
    - {Array.from({ length: 5 }).map((_, index) => ( - star - ))} -
    -
    -
    -
    - ))} -
    -
    - ); -}; - -export default Clients; diff --git a/src/sections/Contact.jsx b/src/sections/Contact.jsx index 10cbdc7..d142d50 100644 --- a/src/sections/Contact.jsx +++ b/src/sections/Contact.jsx @@ -22,16 +22,16 @@ const Contact = () => { emailjs .send( - import.meta.env.VITE_APP_EMAILJS_SERVICE_ID, - import.meta.env.VITE_APP_EMAILJS_TEMPLATE_ID, + import.meta.env.VITE_EMAILJS_SERVICE_ID, + import.meta.env.VITE_EMAILJS_TEMPLATE_ID, { from_name: form.name, - to_name: 'JavaScript Mastery', + to_name: 'Jan Magnus Heimann', from_email: form.email, - to_email: 'sujata@jsmastery.pro', + to_email: 'jan@heimann.ai', message: form.message, }, - import.meta.env.VITE_APP_EMAILJS_PUBLIC_KEY, + import.meta.env.VITE_EMAILJS_PUBLIC_KEY, ) .then( () => { @@ -74,8 +74,7 @@ const Contact = () => {

    Let's talk

    - Whether you’re looking to build a new website, improve your existing platform, or bring a unique project to
    - life, I’m here to help.
    + Whether you're looking to implement AI/ML solutions, optimize existing systems with reinforcement learning, or discuss cutting-edge research opportunities, I'm here to help.

    diff --git a/src/sections/Footer.jsx b/src/sections/Footer.jsx index 64a9d18..6d1fb10 100644 --- a/src/sections/Footer.jsx +++ b/src/sections/Footer.jsx @@ -8,18 +8,18 @@ const Footer = () => {
    - © 2024 Adrian Hajdin. All rights reserved.
    + © 2025 Jan Magnus Heimann. All rights reserved.

    ); }; diff --git a/src/sections/Hero.jsx b/src/sections/Hero.jsx index dfbbefd..13d260e 100644 --- a/src/sections/Hero.jsx +++ b/src/sections/Hero.jsx @@ -4,11 +4,7 @@ import { Canvas } from '@react-three/fiber'; import { useMediaQuery } from 'react-responsive'; import { PerspectiveCamera } from '@react-three/drei'; -import Cube from '../components/Cube.jsx'; -import Rings from '../components/Rings.jsx'; -import ReactLogo from '../components/ReactLogo.jsx'; import Button from '../components/Button.jsx'; -import Target from '../components/Target.jsx'; import CanvasLoader from '../components/Loading.jsx'; import HeroCamera from '../components/HeroCamera.jsx'; import { calculateSizes } from '../constants/index.js'; @@ -26,9 +22,9 @@ const Hero = () => {

    - Hi, I am Adrian 👋
    + Hi, I am Jan Magnus Heimann 👋
    - Building Products & Brands
    + AI/ML Engineer & Researcher

    @@ -42,12 +38,6 @@ const Hero = () => { - - - - - - diff --git a/src/sections/Navbar.jsx b/src/sections/Navbar.jsx index f36939d..7882340 100644 --- a/src/sections/Navbar.jsx +++ b/src/sections/Navbar.jsx @@ -25,7 +25,7 @@ const Navbar = () => {
    - Adrian
    + Jan Magnus Heimann

    View on GitHub

    + arrow +
    +
    +
    - +
    + {/* Mobile: Show simple 2D preview */} + {deviceInfo.isMobile ? ( + + ) : ( + /* Desktop: Show 3D Canvas */ + + + +
    + }> + + + + +
    + +
    + )} +
    - - -
    - - - -
    - }> - - - - -
    - -
    -
    + ))}
    ); diff --git a/src/sections/Publications.jsx b/src/sections/Publications.jsx new file mode 100644 index 0000000..57713f9 --- /dev/null +++ b/src/sections/Publications.jsx @@ -0,0 +1,105 @@ +import { publications } from '../constants/index.js'; + +const Publications = () => { + return ( +
    +

    Research Publications

    +

    + My research contributions to the field of AI and machine learning, focusing on innovative approaches to material science and synthesis prediction. +

    + +
    + {publications.map((publication) => ( +
    + {/* Publication Image/Visual */} + {publication.image && ( +
    + {`${publication.title} +
    + )} + + {/* Publication Content */} +
    +
    +

    {publication.title}

    + {publication.workshop && ( +

    + NeurIPS: {publication.workshop} → {publication.workshopFull} +

    + )} +
    + {publication.conference && ( + {publication.conference} + )} + {publication.year} +
    +
    + +
    +

    {publication.authors}

    +
    + + {publication.abstract && ( +
    +

    {publication.abstract}

    +
    + )} + +
    + {publication.pdf && ( + + link + PDF + + )} + {publication.arxiv && ( + + link + arXiv + + )} + {publication.code && ( + + github + Code + + )} +
    + + {publication.tags && ( +
    + {publication.tags.map((tag, index) => ( + + {tag} + + ))} +
    + )} +
    +
    + ))} +
    +
    + ); +}; + +export default Publications; \ No newline at end of file diff --git a/src/utils/contentLoader.js b/src/utils/contentLoader.js new file mode 100644 index 0000000..2e59d2c --- /dev/null +++ b/src/utils/contentLoader.js @@ -0,0 +1,159 @@ +import matter from 'gray-matter'; + +// Auto-discover all markdown files in the blog content directory +const blogFiles = import.meta.glob('../content/blog/*.md', { as: 'raw', eager: true }); + +// Process markdown files and extract metadata +const processMarkdownFiles = () => { + const posts = []; + + console.log('Blog files found:', Object.keys(blogFiles)); + console.log('Total files:', Object.keys(blogFiles).length); + + Object.entries(blogFiles).forEach(([path, content]) => { + // Extract filename for slug generation + const filename = path.split('/').pop().replace('.md', ''); + + // Skip template files (starting with underscore) + if (filename.startsWith('_')) { + console.log('Skipping template file:', filename); + return; + } + + console.log('Processing file:', filename); + + // Parse frontmatter and content + const { data: frontmatter, content: markdownContent } = matter(content); + + const slug = filename.toLowerCase().replace(/[^a-z0-9]+/g, '-'); + + // Generate ID from filename hash (simple approach) + const id = filename.split('').reduce((acc, char) => acc + char.charCodeAt(0), 0); + + // Validate required frontmatter fields + const requiredFields = ['title', 'excerpt', 'author', 'date', 'readTime', 'tags', 'category']; + const hasRequiredFields = requiredFields.every(field => frontmatter[field]); + + if (!hasRequiredFields) { + console.warn(`Blog post ${filename} is missing required frontmatter fields:`, { + missing: requiredFields.filter(field => !frontmatter[field]), + available: Object.keys(frontmatter) + }); + return; + } + + // Create post object + const post = { + id, + slug, + filename, + title: frontmatter.title, + excerpt: frontmatter.excerpt, + author: frontmatter.author, + date: frontmatter.date, + readTime: frontmatter.readTime, + tags: Array.isArray(frontmatter.tags) ? 
frontmatter.tags : [frontmatter.tags], + category: frontmatter.category, + content: markdownContent, + // Additional metadata + lastModified: new Date().toISOString(), + wordCount: markdownContent.split(/\s+/).length, + estimatedReadTime: Math.ceil(markdownContent.split(/\s+/).length / 200) // ~200 words per minute + }; + + posts.push(post); + }); + + return posts; +}; + +// Load and process all blog posts +export const loadBlogPosts = () => { + try { + const posts = processMarkdownFiles(); + + // Sort posts by date (newest first) + posts.sort((a, b) => new Date(b.date) - new Date(a.date)); + + return posts; + } catch (error) { + console.error('Error loading blog posts:', error); + return []; + } +}; + +// Get a single blog post by slug +export const getBlogPost = (slug) => { + const posts = loadBlogPosts(); + return posts.find(post => post.slug === slug); +}; + +// Get blog posts by category +export const getBlogPostsByCategory = (category) => { + const posts = loadBlogPosts(); + return posts.filter(post => post.category === category); +}; + + +// Get all unique categories +export const getBlogCategories = () => { + const posts = loadBlogPosts(); + const categories = [...new Set(posts.map(post => post.category))]; + return ['All', ...categories]; +}; + +// Get all unique tags +export const getBlogTags = () => { + const posts = loadBlogPosts(); + const tags = [...new Set(posts.flatMap(post => post.tags))]; + return tags.sort(); +}; + +// Search blog posts +export const searchBlogPosts = (query) => { + const posts = loadBlogPosts(); + const searchTerm = query.toLowerCase(); + + return posts.filter(post => + post.title.toLowerCase().includes(searchTerm) || + post.excerpt.toLowerCase().includes(searchTerm) || + post.tags.some(tag => tag.toLowerCase().includes(searchTerm)) || + post.category.toLowerCase().includes(searchTerm) + ); +}; + +// Get blog statistics +export const getBlogStats = () => { + const posts = loadBlogPosts(); + + return { + totalPosts: posts.length, + categories: getBlogCategories().length - 1, // Exclude 'All' + tags: getBlogTags().length, + totalWords: posts.reduce((acc, post) => acc + post.wordCount, 0), + averageReadTime: Math.ceil(posts.reduce((acc, post) => acc + post.estimatedReadTime, 0) / posts.length) + }; +}; + +// Utility function to validate markdown file structure +export const validateMarkdownFile = (content) => { + try { + const { data: frontmatter, content: markdownContent } = matter(content); + + const requiredFields = ['title', 'excerpt', 'author', 'date', 'readTime', 'tags', 'category']; + const missingFields = requiredFields.filter(field => !frontmatter[field]); + + return { + valid: missingFields.length === 0, + frontmatter, + content: markdownContent, + missingFields, + errors: missingFields.length > 0 ? 
[`Missing required fields: ${missingFields.join(', ')}`] : [] + }; + } catch (error) { + return { + valid: false, + errors: [error.message] + }; + } +}; \ No newline at end of file diff --git a/src/utils/projectFallbacks.js b/src/utils/projectFallbacks.js new file mode 100644 index 0000000..d583a4c --- /dev/null +++ b/src/utils/projectFallbacks.js @@ -0,0 +1,35 @@ +// Project fallback images mapping +export const projectFallbacks = { + '/textures/project/project1.mp4': '/assets/project-logo1.png', + '/textures/project/project2.mp4': '/assets/project-logo2.png', + '/textures/project/project3.mp4': '/assets/project-logo3.png', + '/textures/project/project4.mp4': '/assets/project-logo4.png', + '/textures/project/project5.mp4': '/assets/project-logo5.png', +}; + +// Generate fallback image path from video path +export const generateFallbackImage = (videoPath) => { + if (!videoPath) return '/assets/project-logo1.png'; + + // Use mapping first + if (projectFallbacks[videoPath]) { + return projectFallbacks[videoPath]; + } + + // Fallback to pattern matching + const matches = videoPath.match(/project(\d+)\.mp4/); + if (matches && matches[1]) { + return `/assets/project-logo${matches[1]}.png`; + } + + return '/assets/project-logo1.png'; +}; + +// Alternative static screenshots (if you want to create dedicated screenshots) +export const projectScreenshots = { + '/textures/project/project1.mp4': '/assets/screenshots/autoapply-screenshot.png', + '/textures/project/project2.mp4': '/assets/screenshots/openrlhf-screenshot.png', + '/textures/project/project3.mp4': '/assets/screenshots/archunit-screenshot.png', + '/textures/project/project4.mp4': '/assets/screenshots/gpt2-screenshot.png', + '/textures/project/project5.mp4': '/assets/screenshots/project5-screenshot.png', +}; \ No newline at end of file
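
For reference, a minimal usage sketch (not part of the diff above) of how a project card could consume the new `useMobileVideoTexture` hook together with `generateFallbackImage`. The `ProjectScreen` component, its props, and the plane dimensions are illustrative assumptions; only the hook and helper signatures come from this PR, and the component would need to render inside an `@react-three/fiber` `<Canvas>` (typically under `<Suspense>`).

```jsx
// Illustrative sketch only: ProjectScreen is a hypothetical component, not part of this PR.
import { useMobileVideoTexture } from '../hooks/useMobileVideoTexture';
import { generateFallbackImage } from '../utils/projectFallbacks';

const ProjectScreen = ({ videoSrc }) => {
  // On mobile/tablet, or when the video errors, the hook returns the static fallback texture instead.
  const { texture, isLoading } = useMobileVideoTexture(videoSrc, generateFallbackImage(videoSrc));

  // Render nothing until either the video or the fallback image is ready.
  if (isLoading) return null;

  return (
    <mesh>
      {/* Simple plane showing whichever texture the hook resolved */}
      <planeGeometry args={[4, 2.5]} />
      <meshBasicMaterial map={texture} toneMapped={false} />
    </mesh>
  );
};

export default ProjectScreen;
```

The point of the indirection is that the rendering code stays identical on desktop and mobile; the hook alone decides whether a video texture or a static project logo backs the material.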