For an upcoming project requiring 24/7 data ingestion and processing, I set up Claude Code on a remote Linux server (VPS) and discovered a whole new world.

I learned about the sandbox concept: an isolated environment where you can experiment freely without affecting anything outside it. The main ways to get one are a rented server (VPS) or a container. Claude Code itself runs inside that sandbox - if something breaks, the damage stays contained. You can customize the setup for your project and let it run unsupervised.

I’d encountered terms like container, Docker, cloud deployment, and remote MCP servers before, but never really dug into what they meant.

This time I wanted to understand all these concepts before setting up the server.

What I Learned

VPS - Virtual Private Server

A virtual Linux machine in the cloud that runs 24/7. You rent it and connect remotely via SSH.

Why use one? Long-running tasks without keeping your computer on, a clean isolated environment, and agents that run whenever you need them.

Session Management - tmux

A terminal multiplexer that keeps sessions running even after you disconnect from the server. Essential when you want Claude Code to keep working while you’re offline, or for long-running tasks.
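For example (the session name here is just an illustration): tmux new -s claude starts a named session, Ctrl+b then d detaches from it, and tmux attach -t claude picks it right back up the next time you SSH in.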

Docker

Software for building and running containers (among other things).

Image

A read-only “template” of an application: code + dependencies + configuration. It doesn’t run by itself - it just sits there until you start it.

Container

A running instance of an image - the packaged application with everything it needs, actually executing. When you run an image, a working container is created from it.

Why use containers: to ensure that what runs locally will run exactly the same elsewhere. Nothing is worse than “it works on my machine” - containers give you one consistent environment everywhere.
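A quick illustration, with a hypothetical image name: docker build -t myapp . turns a Dockerfile into an image, and docker run myapp creates and starts a container from that image - the same image behaves the same on any machine with Docker.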

Docker Hub

A central repository of ready-made images (like GitHub, but for Docker images). Want PostgreSQL? There’s a ready-made image to run. When deploying to the cloud, you push your image to Docker Hub and pull it from the server.
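For example, docker pull postgres downloads the official PostgreSQL image from Docker Hub, ready to run as a container.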


Setting Up the Server

I asked Claude where to host the server. After comparing options, I went with Hetzner - good enough for learning and personal projects. Production will move to Google Cloud or AWS later.

I created an account and spun up a new server. Claude recommended the specs that fit my needs.

SSH Key

I created an SSH key - a secure way to connect to a remote server.

You open a terminal and connect to the server over the internet - all communication is encrypted.

It has two parts: a private key that stays on your machine, and a public key you give to the server.

When connecting, the server verifies that your private key matches the public key it has on file - you prove you hold the private key without ever sending it.
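In practice: ssh-keygen -t ed25519 generates the pair on your machine, and the public key goes into ~/.ssh/authorized_keys on the server (Hetzner also lets you paste it in when creating the server).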

Run the command ssh root@IP in the terminal and you get a shell on the remote Linux server.

I installed a VS Code extension that allows working on remote servers - much more convenient.

Once the server was ready, I gave Claude Code the IP and it walked me through connecting and setting up Claude Code on it.


Preparing the Environment

I ran Claude Code on the server while my local Claude Code helped in parallel - installing GitHub CLI, Docker, Vercel, and other tools.

How I connected to GitHub: via SSH key. I already had one from my local machine, so I copied it to the server and connected through it.

How to sync between local and remote Docker: through Docker Hub - essentially GitHub for Docker images. Push and pull from anywhere.
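Concretely, with a hypothetical image name: docker tag myapp username/myapp, then docker push username/myapp from one machine and docker pull username/myapp on the other.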

Infrastructure Summary

  • Claude Code connected to subscription
  • Node.js + Bun
  • Docker
  • GitHub
  • VS Code Remote
  • Skills
  • Vercel

What Can You Do With This?

Since the server has a public IP, you can host whatever you want on it: agents, APIs, websites, databases - including long-running things Vercel doesn’t support, like a full application backend.

Other interesting use cases: an agent connected to my WhatsApp running 24/7, automations, API calls.

Claude Code on the server can run scripts, browse the web with Puppeteer (a headless browser - one without a GUI - running in the background), send WhatsApp messages, make API calls, and schedule tasks with cron.
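A cron entry is just a schedule plus a command - for example (the path is hypothetical), 0 7 * * * python3 /home/user/collect.py runs a script every day at 07:00.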

It’s like a computer running 24/7 that Claude controls - agents execute tasks, data flows in and gets processed, all remotely.


The Demo: Prediction Model with InfluxDB

To experiment, I used Claude Code on the server to build a prediction model demo with InfluxDB - a database built specifically for time series data.

Why Is It Better Than PostgreSQL or Regular SQL?

Better data storage: each data point is stored with its timestamp as the primary identifier instead of a row ID.

This enables writing millions of points per second, aggressive compression (10x-100x more efficient), time-optimized queries like “give me the average of the last 5 minutes”, and automatic data management (like deleting data older than 30 days).
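As a rough sketch of what that looks like with the official influxdb-client Python package (the URL, token, org, bucket, and measurement names are placeholders):

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="MY_TOKEN", org="my-org")

# Each point is written with its timestamp (implicitly "now" here).
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(bucket="ras-demo", record=Point("tank").tag("sensor", "water_temp").field("value", 24.3))

# Flux query: the average of the last 5 minutes.
flux = '''
from(bucket: "ras-demo")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "tank" and r._field == "value")
  |> mean()
'''
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_value())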

Classic use cases: sensors, server metrics, financial market data, weather.


The Simulation

We built a simulation showing how weather affects fish tank conditions: real weather data combined with realistic simulated sensor readings.

Architecture

  • Python script that collects data, creates simulated data, and sends to InfluxDB
  • Script that pulls data from InfluxDB
  • A Grafana dashboard, running in Docker, that displays the visualizations

Docker: InfluxDB is a separate server (like PostgreSQL). Instead of installing it on the machine, Docker runs it in an isolated container. Advantages: easy to install, easy to remove, no “mess” on the system.
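In this case, something like docker run -d -p 8086:8086 influxdb:2 brings the database up, and deleting the container later leaves the host untouched.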

The Data

Weather via Open-Meteo - a completely free API with no registration required. Send a GET request with coordinates, get temperature, humidity, wind.
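A minimal sketch of that request with the requests library - the coordinates are arbitrary, and the variable names follow Open-Meteo’s current-conditions parameters as I understand them:

import requests

# Current weather for a pair of coordinates; Open-Meteo needs no API key.
resp = requests.get(
    "https://api.open-meteo.com/v1/forecast",
    params={
        "latitude": 32.08,
        "longitude": 34.78,
        "current": "temperature_2m,relative_humidity_2m,wind_speed_10m",
    },
    timeout=10,
)
current = resp.json()["current"]
print(current["temperature_2m"], current["relative_humidity_2m"], current["wind_speed_10m"])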

RAS Simulator - a Python class Claude wrote that simulates the fish tank’s sensors (RAS stands for recirculating aquaculture system). It takes the external temperature and uses it to calculate water temperature, pH, dissolved oxygen, and ammonia.
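I don’t have the original class, but a minimal sketch of the idea looks something like this - every coefficient below is invented purely for illustration:

import random

class RASSimulator:
    """Simulated fish-tank sensors driven by the outside air temperature."""

    def __init__(self, base_water_temp=24.0):
        self.base_water_temp = base_water_temp

    def read(self, air_temp_c):
        # Water temperature lags behind the air, plus a little sensor noise.
        water_temp = self.base_water_temp + 0.3 * (air_temp_c - self.base_water_temp) + random.gauss(0, 0.1)
        # Warmer water holds less dissolved oxygen.
        dissolved_oxygen = max(5.0, 11.0 - 0.15 * water_temp + random.gauss(0, 0.2))
        # pH and ammonia drift slightly around typical values.
        ph = 7.2 + random.gauss(0, 0.05)
        ammonia = max(0.0, 0.25 + 0.01 * (water_temp - 24) + random.gauss(0, 0.02))
        return {
            "water_temp": round(water_temp, 2),
            "dissolved_oxygen": round(dissolved_oxygen, 2),
            "ph": round(ph, 2),
            "ammonia": round(ammonia, 3),
        }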

The scripts run continuously in the background and update in real-time.

Prediction Model

Using the Prophet library - Meta’s library for time series forecasting.

The model predicts water temperature: it takes the weather data as a regressor (an explanatory variable) and writes its predictions back to InfluxDB, and the dashboard displays them - a blue line for actual values, a dashed orange line for the forecast.
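A minimal, self-contained sketch of that setup with Prophet - the synthetic data, column names, and 24-hour horizon are all placeholders; in the real project the history comes from InfluxDB:

import numpy as np
import pandas as pd
from prophet import Prophet

# Synthetic hourly history just to make the sketch runnable:
# water temperature loosely driven by a daily air-temperature cycle.
ds = pd.date_range("2024-01-01", periods=24 * 14, freq="h")
air_temp = 18 + 6 * np.sin(np.arange(len(ds)) * 2 * np.pi / 24)
y = 24 + 0.3 * (air_temp - 24) + np.random.normal(0, 0.1, len(ds))
df = pd.DataFrame({"ds": ds, "y": y, "air_temp": air_temp})

model = Prophet()
model.add_regressor("air_temp")  # weather as an explanatory variable
model.fit(df)

# The future frame covers history plus the horizon; the regressor must be filled for all of it.
future = model.make_future_dataframe(periods=24, freq="h")
future["air_temp"] = 18 + 6 * np.sin(np.arange(len(future)) * 2 * np.pi / 24)

forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())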