
Recent LLM Development Highlights

LLM
AI
Development
Automation

A summary of my recent work developing LLM-powered applications, automation systems, and AI-enhanced tooling.

Over the past few months, I have been actively building and refining a broad range of LLM-powered tools and applications. These projects span practical engineering workflows, domain-specific automation, signal-processing support tools, and real-time cloud-connected AI endpoints. Below is a summary of the main developments.

🚀 LLM-Powered Development Highlights

1. Local AI Endpoints on Mobile Devices

I built a small, efficient edge AI endpoint on a Pixel phone, capable of receiving data via Cloudflare Tunnel and producing LLM outputs in real time.
This allows:

  • local image/audio inference
  • lightweight embeddings
  • remote command execution
  • secure zero-trust gateway integration

This setup demonstrates how smartphones can act as miniature LLM servers for rapid prototyping.
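A minimal version of such an endpoint can be sketched with the Python standard library alone. Here `run_model` is a placeholder for the actual on-device inference, and the port is illustrative; `cloudflared` would simply be pointed at the local port:

```python
# Sketch of a local JSON inference endpoint (stdlib only).
# run_model is a hypothetical stand-in for on-device LLM inference.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_model(payload):
    """Placeholder: echo the prompt and a token count."""
    prompt = payload.get("prompt", "")
    return {"echo": prompt, "tokens": len(prompt.split())}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_model(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # Expose via: cloudflared tunnel --url http://localhost:8080
    HTTPServer(("127.0.0.1", port), InferenceHandler).serve_forever()
```

With Zero-Trust policies in front of the tunnel, only authenticated clients ever reach the handler.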


2. Cloudflare AI Search + RAG Indexing

I implemented Cloudflare AI Search to index my own Python projects and documents, enabling:

  • semantic search
  • document-aware Q&A
  • automatic reranking
  • similarity caching for speed

I experimented with pausing/resuming indexing, tuning chunking, and customizing retrieval strategies for different repos.
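The core retrieval step behind such an index can be illustrated with a toy bag-of-words embedding standing in for Cloudflare's real embedding and reranking models; the chunks below are illustrative:

```python
# Sketch of RAG-style retrieval: rank chunks by cosine similarity
# to the query, using a toy bag-of-words "embedding".
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "beamforming combines sensor signals with delays",
    "Cloudflare Tunnel exposes local services securely",
    "chunking strategy controls retrieval granularity",
]
top = retrieve("how does chunking affect retrieval", chunks, k=1)
```

Tuning chunk size changes what `embed` sees per chunk, which is exactly why chunking strategy has such a strong effect on retrieval quality.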


3. Automated Technical Writing Workflows

Built automated templates using LLMs for:

  • LaTeX article/report/book generation
  • daily Swedish vocabulary and “Dagens svensklektion” (“today’s Swedish lesson”) compilation
  • CNC machining analytics documentation
  • ML books with structured conceptual questions, MCQs, and diagrams

These workflows turn raw Markdown into professional documents efficiently.
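The Markdown-to-LaTeX step can be sketched roughly as follows; the template and the deliberately minimal converter are illustrative, not the production pipeline (which handles far more Markdown syntax):

```python
# Sketch: wrap (LLM-generated) Markdown into a LaTeX article.
# convert_md handles only a tiny subset of Markdown on purpose.
import re

TEMPLATE = r"""\documentclass{article}
\title{%s}
\begin{document}
\maketitle
%s
\end{document}"""

def convert_md(md):
    # "## Heading" -> \section{Heading}, **bold** -> \textbf{...}
    out = re.sub(r"^## (.+)$", r"\\section{\1}", md, flags=re.M)
    out = re.sub(r"\*\*(.+?)\*\*", r"\\textbf{\1}", out)
    return out

def render(title, md_body):
    return TEMPLATE % (title, convert_md(md_body))
```

The same pattern extends to report and book classes by swapping the template.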


4. AI-Enhanced Web Applications

Developed several AI-backed web demos, including:

  • a passive towed-array sonar simulation with beamforming, LOFAR, and recognition logic
  • a crypto-forensics reporting tool
  • interactive DSP teaching tools

The frontends are built with React, while the backends connect to LLM APIs for explanation generation and analysis.
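The beamforming stage of the sonar demo boils down to classic delay-and-sum; a minimal sketch for a uniform linear array is below, with sensor spacing, sample rate, and sound speed as illustrative values:

```python
# Sketch of delay-and-sum beamforming for a uniform linear array.
# Integer-sample shifts keep the example short; a real implementation
# would use fractional delays (e.g. phase shifts in the frequency domain).
import numpy as np

def delay_and_sum(signals, d, theta, c=1500.0, fs=8000.0):
    """signals: (n_sensors, n_samples); d: spacing in metres;
    theta: steering angle in radians from broadside."""
    n, _ = signals.shape
    delays = d * np.arange(n) * np.sin(theta) / c      # seconds per sensor
    shifts = np.round(delays * fs).astype(int)         # integer-sample shifts
    aligned = np.stack([np.roll(s, -k) for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)

# toy case: a plane wave from broadside (theta = 0) needs no delays
fs, f = 8000.0, 200.0
t = np.arange(1024) / fs
wave = np.sin(2 * np.pi * f * t)
array = np.tile(wave, (4, 1))
out = delay_and_sum(array, d=1.5, theta=0.0)
```

Steering `theta` across a grid of angles and plotting output power gives the familiar bearing-time display that the LOFAR and recognition logic build on.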


5. LLM-Powered Personal Productivity Tools

Created specialized assistants for:

  • structured daily logs and summaries
  • job-matching architecture design
  • ML engineering

These automations significantly reduce writing time and help maintain consistent documentation quality.
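Keeping the daily logs structured comes down to a fixed record that the assistant fills in and renders; the field names here are illustrative, not the actual schema:

```python
# Sketch of a structured daily-log record rendered to Markdown.
# Field names are hypothetical placeholders for the real schema.
from dataclasses import dataclass, field

@dataclass
class DailyLog:
    day: str
    highlights: list = field(default_factory=list)
    blockers: list = field(default_factory=list)
    next_steps: list = field(default_factory=list)

def to_markdown(log):
    lines = [f"# Daily log {log.day}"]
    for title, items in [("Highlights", log.highlights),
                         ("Blockers", log.blockers),
                         ("Next steps", log.next_steps)]:
        lines.append(f"## {title}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```

Because the record is structured, the same data can feed both the daily summary and longer-horizon rollups.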


6. LLM-Integrated Dev Environment

Enhanced my Neovim setup with:

  • custom AI code completion
  • FZF-based search + LLM inline refactoring
  • automated README improvement tools
  • Python/Rust code generation pipelines

This workflow supports rapid prototyping across signal processing, CNC analytics, and ML engineering.


7. Secure Cloud Workflows

Integrated LLM operations into:

  • Cloudflare Zero-Trust
  • secure SSH over Tunnel
  • GitHub → Cloudflare Pages deployment pipelines
  • local → remote artifact synchronization

This makes it possible to run LLM services safely and reproduce deployments without exposing internal networks.
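The local-to-remote sync step might look like the sketch below; the SSH host alias `pixel-tunnel` is hypothetical and would map to a `cloudflared` ProxyCommand in `~/.ssh/config`. The command is returned rather than executed so it can be inspected (or dry-run) first:

```python
# Sketch: build an rsync-over-SSH command for artifact sync.
# "pixel-tunnel" is a hypothetical SSH alias backed by cloudflared.
def sync_artifacts(local_dir, remote_dir, host="pixel-tunnel", dry_run=True):
    cmd = ["rsync", "-az", "--delete"]
    if dry_run:
        cmd.append("--dry-run")
    cmd += [f"{local_dir}/", f"{host}:{remote_dir}/"]
    # run with subprocess.run(cmd, check=True) once the dry run looks right
    return cmd
```

Defaulting to a dry run keeps `--delete` from ever surprising you on the remote side.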


🎯 Summary

These recent LLM developments span practical engineering, cloud automation, signal-processing support, language learning, and secure infrastructure. Together they form a fast-evolving environment that blends LLMs, DSP, cloud services, and automation, supporting both my personal workflows and applied industry projects.

More deep-dives will follow in upcoming posts.