@lossless.zone/objectstorage is a self-hosted, S3-compatible object storage server packaged as a single Docker image. It gives you an S3 endpoint for applications, a browser-based management UI for humans, and a Rust-powered storage engine underneath.
The short version: it is the storage box you can run yourself when you want S3 semantics without signing up for a cloud bucket, wiring together MinIO, or building an admin console from scratch.
Why Another Object Storage Server?
Object storage is one of those pieces of infrastructure that looks simple until you operate it. The protocol is familiar: buckets, keys, objects, credentials, policies, multipart uploads. The hard part is everything around it.
You need a process that starts predictably in Docker. You need credentials that can be managed without rebuilding an image. You need a UI for buckets, policies, object browsing, and quick inspection. You need an S3-compatible API so existing SDKs and tools keep working. If the storage grows beyond one disk, you need clustering, erasure coding, drive awareness, and repair behavior that does not require a separate product.
@lossless.zone/objectstorage wraps those requirements into one product layer. The storage protocol and data path are handled by @push.rocks/smartstorage, a Rust-backed S3-compatible engine. The objectstorage package adds the Docker runtime, Deno-based control process, admin authentication, management APIs, and a dees-catalog web UI.
┌─────────────────────────────────────────────────────────────────┐
│                          objectstorage                          │
│                                                                 │
│   Management UI                  Storage Engine                 │
│   port 3000                      port 9000                      │
│                                                                 │
│   dees-catalog SPA               smartstorage                   │
│   typed ops API  ───────▶        Rust S3 server                 │
│   admin auth                     SigV4, policies, streaming I/O │
│                                                                 │
│   optional cluster layer: QUIC, erasure coding, drive paths     │
└─────────────────────────────────────────────────────────────────┘
A Docker Container, Not a Framework Exercise
The recommended start path is intentionally boring:
docker run -d \
--name objectstorage \
-p 9000:9000 \
-p 3000:3000 \
-v objstdata:/data \
-e OBJST_ACCESS_KEY=myadminkey \
-e OBJST_SECRET_KEY=mysupersecret \
-e OBJST_ADMIN_PASSWORD=myuipassword \
code.foss.global/lossless.zone/objectstorage:latest
Port 9000 is the S3-compatible API. Port 3000 is the management UI. /data is the persistent volume for bucket data and product metadata.
The image is Alpine-based, runs Deno in production, uses tini for signal handling, and exposes the cluster transport port 4433 for distributed deployments. The build is multi-stage: a Node/pnpm stage bundles the frontend with @git.zone/tsbundle, and the final image contains the Deno runtime, backend code, shared interfaces, and bundled web assets.
That split matters. This is not just a library that happens to start a server. It is meant to be deployed as a service.
S3 Compatibility Where Applications Need It
Applications talk to objectstorage through the normal S3 model. Any S3-compatible client can point at http://localhost:9000, use path-style addressing, and authenticate with the configured access key and secret.
import { S3Client, PutObjectCommand, ListObjectsV2Command } from '@aws-sdk/client-s3';
const s3 = new S3Client({
  endpoint: 'http://localhost:9000',
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'myadminkey',
    secretAccessKey: 'mysupersecret',
  },
  forcePathStyle: true,
});
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
  Body: 'Hello, object storage!',
}));
const result = await s3.send(new ListObjectsV2Command({
  Bucket: 'my-bucket',
}));
console.log(result.Contents);
The underlying smartstorage engine handles the protocol details: AWS SigV4 authentication, IAM-style bucket policies, multipart uploads, byte-range requests, and streaming I/O. For large files, that distinction is important. The engine is Rust-based and designed around streaming and backpressure rather than loading whole objects into a JavaScript process.
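As a concrete sketch of how those features surface in a client, the snippet below streams a larger upload through the AWS SDK's multipart helper and then reads back a single byte range. It reuses the endpoint and credentials from the example above; the object key and local file path are made-up placeholders.

import { createReadStream } from 'node:fs';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({
  endpoint: 'http://localhost:9000',
  region: 'us-east-1',
  credentials: { accessKeyId: 'myadminkey', secretAccessKey: 'mysupersecret' },
  forcePathStyle: true,
});

// Multipart upload: the SDK streams the file in parts instead of buffering it.
const upload = new Upload({
  client: s3,
  params: {
    Bucket: 'my-bucket',
    Key: 'backups/db-dump.tar.gz',               // hypothetical object key
    Body: createReadStream('./db-dump.tar.gz'),  // hypothetical local file
  },
  partSize: 8 * 1024 * 1024, // 8 MiB per part
});
await upload.done();

// Byte-range read: fetch only the first kilobyte of the object.
const ranged = await s3.send(new GetObjectCommand({
  Bucket: 'my-bucket',
  Key: 'backups/db-dump.tar.gz',
  Range: 'bytes=0-1023',
}));
const firstKiB = await ranged.Body?.transformToByteArray();
console.log(firstKiB?.length); // 1024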
The Management UI Is Part of the Product
Raw S3 compatibility is useful for applications. It is not enough for operators.
The management UI is served from the same container on port 3000. After logging in as admin with OBJST_ADMIN_PASSWORD, you get views for server status, buckets, objects, policies, access keys, and configuration.
The object browser is the part users notice first. It behaves more like a file manager than a cloud API explorer: column-style browsing, folder prefixes, upload, download, preview, move, rename, delete, and context menus. Text objects can be opened in a Monaco-powered code editor and saved back to storage. PDFs render inline with page navigation, zoom, thumbnails, print, and download controls.
That is a practical difference from a minimal S3 server. Operators do not need to keep switching between aws s3 ls, a bucket policy JSON file, and a separate dashboard. The common workflows are in the product.
Named Policies Instead of Copy-Pasted JSON
S3 bucket policies are powerful, but raw policy JSON is awkward to manage by hand. objectstorage adds a named policy layer on top.
You create a reusable policy once, then attach it to one or more buckets. The PolicyManager persists named policies under the storage directory, tracks bucket attachments, merges attached statements, replaces the ${bucket} placeholder with the concrete bucket name, and applies the resulting S3 policy to the bucket.
[
  {
    "Sid": "PublicRead",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::${bucket}/*"
  }
]
Attach that policy to assets, downloads, and public-site, and each bucket receives the correct concrete resource ARN. Updating the named policy recomputes the attached bucket policies. Deleting a policy detaches it and reapplies the affected buckets.
This is a small abstraction, but it removes one of the most common sources of S3 admin drift: nearly identical policy documents copied across buckets and edited manually.
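For illustration only, the placeholder substitution amounts to something like the following sketch; the real PolicyManager additionally persists the named policies and their bucket attachments under the storage directory.

interface IPolicyStatement {
  Sid: string;
  Effect: 'Allow' | 'Deny';
  Principal: string | Record<string, string[]>;
  Action: string | string[];
  Resource: string | string[];
}

// The named policy from above, kept as a template with the ${bucket} placeholder.
const publicRead: IPolicyStatement[] = [{
  Sid: 'PublicRead',
  Effect: 'Allow',
  Principal: '*',
  Action: 's3:GetObject',
  Resource: 'arn:aws:s3:::${bucket}/*',
}];

// Stamp the template out per bucket by replacing ${bucket} in every Resource entry.
function renderPolicyForBucket(statements: IPolicyStatement[], bucketName: string): string {
  const rendered = statements.map((statement) => ({
    ...statement,
    Resource: Array.isArray(statement.Resource)
      ? statement.Resource.map((r) => r.replaceAll('${bucket}', bucketName))
      : statement.Resource.replaceAll('${bucket}', bucketName),
  }));
  return JSON.stringify({ Version: '2012-10-17', Statement: rendered });
}

for (const bucket of ['assets', 'downloads', 'public-site']) {
  console.log(renderPolicyForBucket(publicRead, bucket));
}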
Runtime Credentials and Admin State
objectstorage has two authentication surfaces.
The S3 API uses access-key credentials and SigV4 verification. The admin UI uses a JWT-backed login flow with the fixed username admin and the configured admin password.
The default standalone configuration is deliberately simple:
OBJST_ACCESS_KEY=admin
OBJST_SECRET_KEY=admin
OBJST_ADMIN_PASSWORD=admin
OBJST_REGION=us-east-1
Production deployments should override those values. Credentials can also be managed from the UI. The container persists runtime-managed access credentials in /data/.objectstorage/admin-config.json, so credential changes survive restart when /data is backed by a persistent volume.
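On the application side, the matching habit is to avoid hardcoding those values and read the endpoint and credentials from the app's own environment. This is a generic sketch with made-up client-side variable names, not part of the objectstorage API.

import { S3Client } from '@aws-sdk/client-s3';

// Hypothetical client-side variable names; their values just need to match
// whatever OBJST_ACCESS_KEY and OBJST_SECRET_KEY were set to on the container.
const s3 = new S3Client({
  endpoint: process.env.OBJECT_STORAGE_ENDPOINT ?? 'http://localhost:9000',
  region: process.env.OBJECT_STORAGE_REGION ?? 'us-east-1',
  credentials: {
    accessKeyId: process.env.OBJECT_STORAGE_ACCESS_KEY ?? 'admin',
    secretAccessKey: process.env.OBJECT_STORAGE_SECRET_KEY ?? 'admin',
  },
  forcePathStyle: true,
});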
The product boundary here is clear: objectstorage owns admin UX, product metadata, and startup configuration. smartstorage owns the S3 auth and policy enforcement path.
Cluster Mode: QUIC, Erasure Coding, and Drives
Standalone mode is enough for local development, CI, small internal services, and simple single-node deployments. Cluster mode is for the point where one disk or one node is no longer enough.
Enable it with environment variables:
docker run -d --name objst-node1 \
-p 9000:9000 \
-p 3000:3000 \
-p 4433:4433/udp \
-v /mnt/disk1:/drive1 \
-v /mnt/disk2:/drive2 \
-e OBJST_CLUSTER_ENABLED=true \
-e OBJST_CLUSTER_NODE_ID=node-1 \
-e OBJST_CLUSTER_SEED_NODES=node2:4433,node3:4433 \
-e OBJST_DRIVE_PATHS=/drive1,/drive2 \
-e OBJST_ERASURE_DATA_SHARDS=4 \
-e OBJST_ERASURE_PARITY_SHARDS=2 \
-e OBJST_ACCESS_KEY=myadminkey \
-e OBJST_SECRET_KEY=mysupersecret \
code.foss.global/lossless.zone/objectstorage:latest
In clustered mode, objectstorage passes cluster config into smartstorage. The engine uses QUIC for inter-node transport, Reed-Solomon erasure coding for data durability, heartbeat-based failure detection, multi-drive paths, quorum reads and writes, and background shard repair.
The default erasure coding layout is 4+2: four data shards and two parity shards. Objects are chunked, split into shards, and distributed across available drives and nodes. Any four of the six shards can reconstruct the original data.
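A quick back-of-the-envelope for that default layout, assuming each shard is the chunk size divided by the number of data shards (the usual Reed-Solomon arrangement):

// Default layout: 4 data shards + 2 parity shards per 4 MiB chunk.
const dataShards = 4;
const parityShards = 2;
const chunkSize = 4 * 1024 * 1024; // OBJST_ERASURE_CHUNK_SIZE default: 4194304 bytes

const shardSize = chunkSize / dataShards;                        // 1 MiB per shard
const storedPerChunk = shardSize * (dataShards + parityShards);  // 6 MiB written
const overhead = storedPerChunk / chunkSize;                     // 1.5x raw-to-stored ratio
const survivableLosses = parityShards;                           // any 2 of 6 shards can disappear

console.log({ shardSize, overhead, survivableLosses });
// So a 1 GiB object costs roughly 1.5 GiB of drive space and still reads
// correctly after losing any two of the drives or nodes holding its shards.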
That makes the product useful in two very different scenarios: a single Docker volume on a developer machine, or a small distributed storage cluster with explicit drives and node identities.
Configuration Without a Config File Tax
Configuration can come from constructor options, CLI flags, or environment variables. Environment variables win, which is the right default for containers.
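Conceptually the resolution is a simple fallback chain; this is an illustrative sketch rather than the package's actual option parsing, and the relative order of CLI flags and constructor options is an assumption.

// Illustrative precedence: environment variable first, then CLI flag,
// then constructor option, then the built-in default.
function resolveS3Port(cliFlag?: number, constructorOption?: number): number {
  const fromEnv = process.env.OBJST_PORT ? Number(process.env.OBJST_PORT) : undefined;
  return fromEnv ?? cliFlag ?? constructorOption ?? 9000;
}

console.log(resolveS3Port(undefined, 9100)); // 9100 unless OBJST_PORT is set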
The core server settings are:
| Variable | Meaning | Default |
|---|---|---|
| OBJST_PORT | S3 API port | 9000 |
| UI_PORT | Management UI port | 3000 |
| OBJST_STORAGE_DIR | Storage directory | /data |
| OBJST_ACCESS_KEY | Initial S3 access key | admin |
| OBJST_SECRET_KEY | Initial S3 secret key | admin |
| OBJST_ADMIN_PASSWORD | Admin UI password | admin |
| OBJST_REGION | SigV4 region | us-east-1 |
Cluster settings follow the same model:
| Variable | Meaning | Default |
|---|---|---|
| OBJST_CLUSTER_ENABLED | Enable clustering | false |
| OBJST_CLUSTER_NODE_ID | Stable node ID | auto-generated when omitted |
| OBJST_CLUSTER_QUIC_PORT | QUIC transport port | 4433 |
| OBJST_CLUSTER_SEED_NODES | Comma-separated seed nodes | empty |
| OBJST_DRIVE_PATHS | Comma-separated drive paths | storage dir |
| OBJST_ERASURE_DATA_SHARDS | Data shard count | 4 |
| OBJST_ERASURE_PARITY_SHARDS | Parity shard count | 2 |
| OBJST_ERASURE_CHUNK_SIZE | Erasure chunk size in bytes | 4194304 |
| OBJST_HEARTBEAT_INTERVAL_MS | Cluster heartbeat interval in ms | 5000 |
| OBJST_HEARTBEAT_TIMEOUT_MS | Cluster heartbeat timeout in ms | 30000 |
For development, the Deno CLI can run directly in ephemeral mode:
deno run --allow-all mod.ts server --ephemeral
That stores data under ./.nogit/objstdata, which keeps local test data out of the normal project tree.
A Typed Operations Surface
The UI is not scraping HTML or shelling out to a CLI. The backend exposes typed operations through @api.global/typedrequest and @api.global/typedserver. Handlers cover admin login, identity verification, status, config, buckets, objects, credentials, and policies.
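The concrete definitions live in the shared interfaces that ship in the image; purely as an illustration of the shape (not the actual definitions), a typed operation pairs a method name with request and response payload types:

// Illustrative shape of one management operation; the real interfaces are
// defined in the shared interfaces package and served via @api.global/typedserver.
interface IRequest_Admin_CreateBucket {
  method: 'createBucket';   // hypothetical method name
  request: {
    jwt: string;            // admin session token from the login flow
    bucketName: string;
  };
  response: {
    success: boolean;
  };
}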
Internally, the ObjectStorageContainer owns the lifecycle:
const container = new ObjectStorageContainer({
  objstPort: 9000,
  uiPort: 3000,
  storageDirectory: '/data',
  accessCredentials: [{
    accessKeyId: 'myadminkey',
    secretAccessKey: 'mysupersecret',
  }],
  adminPassword: 'myuipassword',
});
await container.start();
On startup it loads persisted admin credentials, starts smartstorage with the mapped server/storage/auth/cluster config, creates an S3 client for management operations, loads named policies, and starts the ops server that serves the bundled SPA.
That structure keeps the package small. The product layer does not reimplement the storage engine. It orchestrates the engine, exposes user-facing workflows, and packages the result.
Where It Fits
objectstorage is not trying to be every cloud storage service. Its sweet spot is pragmatic self-hosting:
- Development and CI need fast local S3 semantics without external credentials.
- Small production systems need a durable object store with a management UI and simple Docker deployment.
- Internal platforms need S3-compatible storage that can be inspected, credentialed, and policy-managed by operators.
- Edge or private infrastructure needs object storage that can grow from one volume to multi-node, erasure-coded storage without changing the application protocol.
The important design choice is that the application interface stays boring. Apps use S3. Operators use the UI. The storage engine does Rust-powered protocol and data-path work. The product container handles deployment and management.
The Takeaway
@lossless.zone/objectstorage turns @push.rocks/smartstorage into a runnable product: S3-compatible API, Docker image, admin UI, typed management API, named policies, runtime credentials, and optional clustered storage.
For teams already building on S3-compatible tooling, that means less glue code. For teams running their own infrastructure, it means object storage can be treated like a normal service: start it, mount /data, set credentials, open the dashboard, and point clients at port 9000.