Compare commits

...

No commits in common. "aa2bacc0cc0a76345043eabec0d4c8203e78eb50" and "08bf13cc32ec651d8d78ddfe0f04dd5c6619ec6d" have entirely different histories.

86 changed files with 1408 additions and 5878 deletions

.claude/settings.json Normal file

@ -0,0 +1,40 @@
{
  "$schema": "https://claude.ai/claude-code/settings.schema.json",
  "permissions": {
    "allow": [
      "Bash(bun test*)",
      "Bash(bun run*)",
      "Bash(bun build*)",
      "Bash(bun install*)",
      "Bash(bunx tsc --noEmit*)",
      "Bash(git status*)",
      "Bash(git diff*)",
      "Bash(git log*)",
      "Bash(git add*)",
      "Bash(git commit*)",
      "Bash(ls*)",
      "Bash(cat /proc/*)",
      "Bash(cat /sys/*)"
    ],
    "deny": [
      "Bash(rm -rf /)*",
      "Bash(sudo *)",
      "Bash(*--force*)"
    ]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "bunx tsc --noEmit --pretty 2>&1 | head -20 || true",
            "description": "Type check after file changes",
            "timeout": 10000
          }
        ]
      }
    ]
  }
}

.claude/skills/build.md Normal file

@ -0,0 +1,28 @@
# /build
Build the systant CLI binary.
## Instructions
1. Run type checking first: `bunx tsc --noEmit`
2. If types pass, build the binary: `bun build index.ts --compile --outfile dist/systant`
3. Report the binary size and location
4. If there are errors, show them clearly and suggest fixes
## Nix Build
For NixOS deployment, the binary is built by Nix using:
```bash
nix build .#systant
```
If you update dependencies (`bun.lock`), update the vendor hash in `nix/package.nix`; a rebuild prints the expected value:
```bash
nix build .#systant 2>&1 | grep 'got:'
```
## Success Criteria
- No TypeScript errors
- Binary created at `dist/systant`
- Binary is executable

.claude/skills/plan.md Normal file

@ -0,0 +1,26 @@
# /plan
Enter planning mode to design an implementation approach.
## Instructions
1. Enter plan mode using the EnterPlanMode tool
2. Explore the codebase to understand current state
3. Identify affected files and components
4. Design the implementation approach
5. Present the plan for user approval before coding
## When to Use
- New features
- Architectural changes
- Complex bug fixes
- Refactoring tasks
## Output
A clear plan including:
- Files to create/modify
- Key implementation steps
- Potential risks or considerations
- Testing approach

.claude/skills/release.md Normal file

@ -0,0 +1,28 @@
# /release
Prepare a release of systant.
## Instructions
1. Ensure working directory is clean (`git status`)
2. Run tests: `bun test`
3. Type check: `bunx tsc --noEmit`
4. Build binary: `bun build index.ts --compile --outfile dist/systant`
5. Ask user for version bump type (patch/minor/major)
6. Update version in package.json
7. Create git commit with message: "release: v{version}"
8. Create git tag: `v{version}`
9. Report next steps (push, publish, etc.)
## Prerequisites
- All tests must pass
- No TypeScript errors
- Clean git working directory (or user confirms to proceed)
## Success Criteria
- Binary built successfully
- Version bumped in package.json
- Git commit and tag created
- Clear instructions for next steps
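Step 6 above (bumping the version in package.json) can be sketched as a small pure helper plus Bun file I/O. This is a minimal sketch: `bumpVersion` is a hypothetical name, and the semver handling is simplified (no prerelease or build tags).

```typescript
// Pure helper: compute the next semver string for a bump kind.
export function bumpVersion(version: string, kind: "patch" | "minor" | "major"): string {
  const [major, minor, patch] = version.split(".").map(Number);
  if (kind === "major") return `${major + 1}.0.0`;
  if (kind === "minor") return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}

// Hypothetical wiring with Bun built-ins (runs from the repo root):
// const pkg = await Bun.file("package.json").json();
// pkg.version = bumpVersion(pkg.version, "minor");
// await Bun.write("package.json", JSON.stringify(pkg, null, 2) + "\n");
```

Keeping the arithmetic pure makes it trivial to unit-test with `bun test`, separate from the file write.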

.claude/skills/test.md Normal file

@ -0,0 +1,22 @@
# /test
Run the test suite.
## Instructions
1. Run `bun test` to execute all tests
2. If tests fail:
- Analyze the failure messages
- Identify the root cause
- Suggest specific fixes
3. If tests pass, report the summary
## Options
- `/test <pattern>` - Run tests matching a pattern (e.g., `/test metrics`)
- `/test --watch` - Run in watch mode
## Success Criteria
- All tests pass
- Clear reporting of any failures with actionable suggestions

.claude/skills/typecheck.md Normal file

@ -0,0 +1,16 @@
# /typecheck
Run TypeScript type checking.
## Instructions
1. Run `bunx tsc --noEmit --pretty`
2. If errors found:
- List each error with file, line, and message
- Provide suggested fixes for each
3. If no errors, confirm success
## Success Criteria
- Report all type errors clearly
- Suggest actionable fixes

.gitignore vendored

@ -1,36 +1,46 @@
Removed:

# The directory Mix will write compiled artifacts to.
server/_build/

# If you run "mix test --cover", coverage assets end up here.
server/cover/

# The directory Mix downloads your dependencies sources to.
server/deps/

# Where third-party dependencies like ExDoc output generated docs.
/doc/

# If the VM crashes, it generates a dump, let's ignore it too.
erl_crash.dump

# Also ignore archive artifacts (built via "mix archive.build").
*.ez

# Ignore package tarball (built via "mix hex.build").
system_stats_daemon-*.tar

# Temporary files, for example, from tests.
/tmp/

# Nix direnv cache and generated files
.direnv/

# Nix result symlinks
result
result-*
auth_failures.log

# Home Assistant development files
dev-config/.storage/
dev-config/home-assistant.log*
dev-config/deps/
dev-config/.cloud/

Added:

# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# Dependencies
node_modules
.pnp
.pnp.js

# Local env files
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Testing
coverage

# Turbo
.turbo

# Vercel
.vercel

# Build Outputs
.next/
out/
build
dist

# Debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Misc
.DS_Store
*.pem

# Nix
.direnv
.direnv/*

# Local config (use systant.toml.example as template)
systant.toml

.vscode/settings.json

@ -1,3 +1,3 @@
Removed:

{
  "elixirLS.projectDir": "server"
}

Added:

{
  "terminal.integrated.fontFamily": "Fira Code"
}

AGENTS.md Normal file

@ -0,0 +1,103 @@
# Agents
Specialized sub-agents for systant development tasks.
## test-runner
Use this agent after writing or modifying code to run the test suite and verify changes.
**Responsibilities:**
- Run `bun test` and report results
- Identify failing tests and their root causes
- Suggest fixes for test failures
- Run specific test files when targeted testing is needed
**Trigger:** After implementing features, fixing bugs, or modifying existing code.
## code-reviewer
Use this agent to review code changes before committing.
**Responsibilities:**
- Check for Bun best practices (no Node.js patterns)
- Verify type safety and explicit return types
- Look for potential bugs or edge cases
- Ensure code follows project conventions
- Flag any security concerns (especially in command execution)
**Trigger:** Before creating commits or PRs.
## metrics-specialist
Use this agent when working on system metric collection.
**Responsibilities:**
- Understand Linux /proc and /sys interfaces
- Know cross-platform metric collection strategies
- Ensure metrics are properly typed and documented
- Validate metric units and normalization
**Context:** Systant collects CPU, memory, disk, and network metrics. Metrics should be normalized (percentages 0-100, bytes for sizes) and include metadata for Home Assistant discovery.
## mqtt-specialist
Use this agent when working on MQTT publishing or Home Assistant integration.
**Responsibilities:**
- Understand MQTT topic conventions
- Know Home Assistant discovery protocol
- Ensure proper QoS and retain flag usage
- Handle connection lifecycle (connect, reconnect, disconnect)
- Design topic hierarchies for commands and events
**Context:** Systant publishes to MQTT with Home Assistant auto-discovery. Topics follow the pattern `systant/{hostname}/{metric_type}`. Command topics use `systant/{hostname}/command/{action}`.
## events-specialist
Use this agent when working on the event/command system.
**Responsibilities:**
- Design secure command execution with allowlists
- Implement event handlers and action dispatching
- Ensure proper input validation and sanitization
- Handle timeouts and error reporting
- Consider security implications of remote command execution
**Context:** Systant listens for MQTT commands and executes configured actions. Security is paramount - all commands must be validated against an allowlist, inputs sanitized, and execution sandboxed where possible.
## debug-investigator
Use this agent when troubleshooting issues or unexpected behavior.
**Responsibilities:**
- Add strategic logging to trace execution
- Isolate the problem to specific components
- Form and test hypotheses
- Propose minimal fixes
**Trigger:** When something isn't working as expected.
## architect
Use this agent for design decisions and architectural questions.
**Responsibilities:**
- Evaluate trade-offs between approaches
- Consider future extensibility
- Maintain consistency with existing patterns
- Document decisions in code comments or CLAUDE.md
**Trigger:** When facing design choices or planning new features.
## security-auditor
Use this agent when reviewing security-sensitive code.
**Responsibilities:**
- Review command execution paths for injection vulnerabilities
- Validate input sanitization
- Check allowlist/denylist implementations
- Ensure proper authentication for MQTT commands
- Review file system access patterns
**Context:** Systant executes commands based on MQTT messages. This is a critical attack surface that requires careful security review.
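A minimal sketch of the allowlist validation the events-specialist and security-auditor agents are responsible for. The `CommandConfig` shape, function name, and metacharacter list are illustrative assumptions, not systant's actual config format.

```typescript
// Hypothetical allowlist check for incoming MQTT commands.
interface CommandConfig {
  allowedCommands: string[]; // only these command names may run
  maxArgs: number;           // cap on argument count
}

export function validateCommand(
  config: CommandConfig,
  command: string,
  args: string[],
): { ok: true } | { ok: false; reason: string } {
  if (!config.allowedCommands.includes(command)) {
    return { ok: false, reason: `command not in allowlist: ${command}` };
  }
  if (args.length > config.maxArgs) {
    return { ok: false, reason: "too many arguments" };
  }
  // Reject shell metacharacters so arguments can never escape
  // argv-style execution into shell interpretation.
  const unsafe = /[;&|`$<>\\\n]/;
  for (const arg of args) {
    if (unsafe.test(arg)) {
      return { ok: false, reason: `unsafe argument: ${arg}` };
    }
  }
  return { ok: true };
}
```

Returning a structured reason (rather than throwing) makes it easy to publish the rejection back on a response topic for auditing.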

CLAUDE.md

@ -1,225 +1,120 @@
Removed (old CLAUDE.md):

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Common Commands

### Development

```bash
# Install dependencies
mix deps.get

# Compile the project
mix compile

# Run in development (non-halt mode)
mix run --no-halt

# Run tests
mix test

# Run specific test
mix test test/systant_test.exs

# Enter development shell (via Nix)
nix develop

# Run both server and dashboard together (recommended)
just dev
# or directly: hivemind

# Run components individually
just server     # or: cd server && mix run --no-halt
just dashboard  # or: cd dashboard && mix phx.server

# Other just commands
just deps       # Install dependencies for both projects
just compile    # Compile both projects
just test       # Run tests for both projects
just clean      # Clean both projects
```

### Production

```bash
# Build production release
MIX_ENV=prod mix release

# Run production release
_build/prod/rel/systant/bin/systant start
```

## Architecture Overview

This is an Elixir OTP application that serves as a systemd daemon for MQTT-based system monitoring, designed for deployment across multiple NixOS hosts to integrate with Home Assistant.

### Core Components

- **Systant.Application** (`lib/systant/application.ex`): OTP application supervisor that starts the MQTT client
- **Systant.MqttClient** (`lib/systant/mqtt_client.ex`): GenServer handling MQTT connection, metrics publishing, and command subscriptions
- **Systant.MqttHandler** (`lib/systant/mqtt_handler.ex`): Custom Tortoise handler for processing command messages with security validation
- **Systant.CommandExecutor** (`lib/systant/command_executor.ex`): Secure command execution engine with whitelist validation and audit logging
- **Systant.SystemMetrics** (`lib/systant/system_metrics.ex`): Comprehensive Linux system metrics collection with configuration support
- **Systant.Config** (`lib/systant/config.ex`): TOML-based configuration loader with environment variable overrides
- **Dashboard.Application** (`dashboard/lib/dashboard/application.ex`): Phoenix LiveView dashboard application
- **Dashboard.MqttSubscriber** (`dashboard/lib/dashboard/mqtt_subscriber.ex`): Real-time MQTT subscriber that feeds data to the LiveView dashboard

### Key Libraries

- **Tortoise**: MQTT client library for pub/sub functionality
- **Jason**: JSON encoding/decoding for message payloads
- **Toml**: TOML configuration file parsing
- **Phoenix LiveView**: Real-time dashboard framework

### MQTT Behavior

- Publishes comprehensive system metrics (CPU, memory, disk, GPU, network, temperature, processes) to stats topic
- Subscribes to commands topic for incoming events that can trigger user-customizable actions
- Uses hostname-based randomized client ID to avoid conflicts across multiple hosts
- Configurable startup delay (default 5 seconds) before first metrics publish
- Real-time metrics collection with configurable intervals
- **Connection verification**: Tests MQTT connectivity on startup with timeout-based validation
- **Graceful shutdown**: Exits cleanly via `System.stop(1)` when MQTT broker unavailable (prevents crash dumps)

### Configuration System

Systant uses a TOML-based configuration system with environment variable overrides:

- **Config File**: `systant.toml` (current dir, `~/.config/systant/`, or `/etc/systant/`)
- **Module Control**: Enable/disable metric collection modules (cpu, memory, disk, gpu, network, temperature, processes, system)
- **Filtering Options**: Configurable filtering for disks, network interfaces, processes
- **Environment Overrides**: `MQTT_HOST`, `MQTT_PORT`, `SYSTANT_INTERVAL`, `SYSTANT_LOG_LEVEL`

#### Key Configuration Sections

- `[general]`: Collection intervals, enabled modules
- `[mqtt]`: Broker settings, client ID prefix, credentials
- `[commands]`: Command execution settings, security options
- `[[commands.available]]`: User-defined command definitions with security parameters
- `[disk]`: Mount filtering, filesystem exclusions
- `[gpu]`: NVIDIA/AMD GPU limits and settings
- `[network]`: Interface filtering, traffic thresholds
- `[processes]`: Top process limits, sorting options
- `[temperature]`: CPU/sensor temperature monitoring

### Default Configuration

- **MQTT Host**: `mqtt.home` (configurable via `MQTT_HOST`)
- **Stats Topic**: `systant/${hostname}/stats` (per-host topics)
- **Command Topic**: `systant/${hostname}/commands` (per-host topics)
- **Response Topic**: `systant/${hostname}/responses` (command responses)
- **Publish Interval**: 30 seconds (configurable via `SYSTANT_INTERVAL`)
- **Command System**: Enabled by default with example commands (restart, info, df, ps, ping)

### NixOS Deployment

This project includes a complete Nix packaging and NixOS module:

- **Package**: `nix/package.nix` - Builds the Elixir release using beamPackages.mixRelease
- **Module**: `nix/nixos-module.nix` - Provides `services.systant` configuration options
- **Development**: Use `nix develop` for development shell with Elixir/Erlang

The NixOS module supports:
- Configurable MQTT connection settings
- Per-host topic naming using `${config.networking.hostName}`
- Environment variable configuration for runtime settings
- Systemd service with security hardening
- Auto-restart and logging to systemd journal

## Dashboard

The project includes a Phoenix LiveView dashboard (`dashboard/`) that provides real-time monitoring of all systant instances.

### Dashboard Features
- Real-time host status updates via MQTT subscription
- LiveView interface showing all connected hosts
- Automatic reconnection and error handling

### Dashboard MQTT Configuration
- Subscribes to `systant/+/stats` to receive updates from all hosts
- Uses hostname-based client ID: `systant-dashboard-${hostname}` to avoid conflicts
- Connects to `mqtt.home:1883` (same broker as systant instances)

### Important Implementation Notes
- **Tortoise Handler**: The `handle_message/3` callback must return `{:ok, state}`, not `[]`
- **Topic Parsing**: Topics may arrive as lists or strings, handle both formats
- **Client ID Conflicts**: Use unique client IDs to prevent connection instability

## Development Roadmap

### Phase 1: System Metrics Collection (Completed)
- ✅ **SystemMetrics Module**: `server/lib/systant/system_metrics.ex` - Comprehensive metrics collection
- ✅ **CPU Metrics**: Load averages (1/5/15min) via `/proc/loadavg`
- ✅ **Memory Metrics**: System memory data via `/proc/meminfo` with usage percentages
- ✅ **Disk Metrics**: Disk usage and capacity via `df` command with configurable filtering
- ✅ **GPU Metrics**: NVIDIA (nvidia-smi) and AMD (rocm-smi) GPU monitoring with temperature, utilization, memory
- ✅ **Network Metrics**: Interface statistics via `/proc/net/dev` with traffic filtering
- ✅ **Temperature Metrics**: CPU temperature and lm-sensors data via system files and `sensors` command
- ✅ **Process Metrics**: Top processes by CPU/memory via `ps` command with configurable limits
- ✅ **System Info**: Uptime via `/proc/uptime`, kernel version, OS info, Erlang runtime data
- ✅ **MQTT Integration**: Real metrics published with configurable intervals replacing simple messages
- ✅ **Configuration System**: Complete TOML-based configuration with environment overrides
- ✅ **Dashboard Integration**: Phoenix LiveView dashboard with real-time graphical metrics display

#### Implementation Details
- Uses Linux native system commands and `/proc` filesystem for accuracy over Erlang os_mon
- Configuration-driven metric collection with per-module enable/disable capabilities
- Advanced filtering: disk mounts/types, network interfaces, process thresholds
- Graceful error handling with fallbacks when commands/files unavailable
- JSON payload structure: `{timestamp, hostname, cpu, memory, disk, gpu, network, temperature, processes, system}`
- Dashboard displays metrics as progress bars and cards with color-coded status indicators
- TOML configuration with environment variable overrides for deployment flexibility

### Phase 2: Command System (Completed)
- ✅ **Command Execution**: `server/lib/systant/command_executor.ex` - Secure command processing with whitelist validation
- ✅ **MQTT Handler**: `server/lib/systant/mqtt_handler.ex` - Custom Tortoise handler for command message processing
- ✅ **User Configuration**: Commands fully configurable via `systant.toml` with security parameters
- ✅ **MQTT Integration**: Commands via `systant/{hostname}/commands`, responses via `systant/{hostname}/responses`
- ✅ **Security Features**: Whitelist-only execution, parameter validation, timeouts, comprehensive logging
- ✅ **Built-in Commands**: `list` command shows all available user-defined commands

#### Command System Features
- **User-Configurable Commands**: Define custom commands in `systant.toml` with triggers, allowed parameters, timeouts
- **Enterprise Security**: No arbitrary shell execution, strict parameter validation, execution timeouts
- **Simple Interface**: Send `{"command":"trigger","params":[...]}`, receive structured JSON responses
- **Request Tracking**: Auto-generated request IDs for command/response correlation
- **Comprehensive Logging**: Full audit trail of all command executions with timing and results

#### Example Command Usage

```bash
# Send commands via MQTT
mosquitto_pub -t "systant/hostname/commands" -m '{"command":"list"}'
mosquitto_pub -t "systant/hostname/commands" -m '{"command":"info"}'
mosquitto_pub -t "systant/hostname/commands" -m '{"command":"df","params":["/home"]}'
mosquitto_pub -t "systant/hostname/commands" -m '{"command":"restart","params":["nginx"]}'

# Listen for responses
mosquitto_sub -t "systant/+/responses"
```

### Phase 3: Home Assistant Integration (Completed)
- ✅ **MQTT Auto-Discovery**: `server/lib/systant/ha_discovery.ex` - Publishes HA discovery configurations for automatic device registration
- ✅ **Device Registration**: Creates unified "Systant {hostname}" device in Home Assistant with comprehensive sensor suite
- ✅ **Sensor Auto-Discovery**: CPU load averages, memory usage, system uptime, temperatures, GPU metrics, disk usage, network throughput
- ✅ **Configuration Integration**: TOML-based enable/disable with `homeassistant.discovery_enabled` setting
- ✅ **Value Templates**: Proper JSON path extraction for nested metrics data with error handling
- ✅ **Real-time Updates**: Seamless integration with existing MQTT stats publishing - no additional topics needed

#### Home Assistant Integration Features
- **Automatic Discovery**: No custom integration required - uses standard MQTT discovery protocol
- **Device Grouping**: All sensors grouped under single "Systant {hostname}" device for clean organization
- **Comprehensive Metrics**: CPU, memory, disk, GPU (NVIDIA/AMD), network throughput, temperature, and system sensors
- **Configuration Control**: Enable/disable discovery via `systant.toml` configuration
- **Template Flexibility**: Advanced Jinja2 templates handle optional/missing data gracefully
- **Topic Structure**: Discovery on `homeassistant/#`, stats remain on `systant/{hostname}/stats`

#### Setup Instructions
1. **Configure MQTT Discovery**: Set `homeassistant.discovery_enabled = true` in `systant.toml`
2. **Start Systant**: Discovery messages published automatically on startup (1s after MQTT connection)
3. **Check Home Assistant**: Device and sensors appear automatically in MQTT integration
4. **Verify Metrics**: All sensors should show current values within 30 seconds

#### Available Sensors
- **CPU**: Load averages (1m, 5m, 15m), temperature
- **Memory**: Usage percentage, used/total in GB
- **Disk**: Root and home filesystem usage percentages
- **GPU**: NVIDIA/AMD utilization, temperature, memory usage
- **Network**: RX/TX throughput in MB/s for primary interface (real-time bandwidth monitoring)
- **System**: Uptime in hours, kernel version, online status

### Future Plans
- Multi-host deployment for comprehensive system monitoring
- Advanced alerting and threshold monitoring
- Historical data retention and trending

Added (new CLAUDE.md):

# Systant

A system monitoring agent that collects metrics, monitors services, and reports to MQTT/Home Assistant; and responds to events over MQTT to trigger commands or other behavior.

## Project Overview

Systant is a lightweight CLI tool written in Bun/TypeScript that:

- Collects system metrics (CPU, memory, disk, network)
- Monitors service health
- Publishes data to MQTT brokers
- Supports Home Assistant auto-discovery
- **Listens for MQTT commands** to trigger actions (run scripts, restart services, etc.)
- **Responds to events** with configurable handlers
- Runs as a daemon or one-shot command

### Architecture

```
index.ts       # CLI entry point (yargs)
src/
  commands/    # CLI command handlers
  metrics/     # System metric collectors
  mqtt/        # MQTT client and publishing
  events/      # MQTT event listeners and handlers
  actions/     # Executable actions (shell, service, notify)
  ha/          # Home Assistant discovery
  config/      # Configuration loading
```

### Event/Command System

Systant subscribes to MQTT topics and executes configured actions:

```
Topic:   systant/{hostname}/command/{action}
Payload: { "args": [...], "timeout": 30 }

Topic:   systant/{hostname}/event/{event_name}
Payload: { ... event data ... }
```

Actions are sandboxed and configurable via allowlists in the config file. Security is critical - never execute arbitrary commands without validation.

### Key Design Decisions

- **Single binary**: Compiles to standalone executable via `bun build --compile`
- **No external services**: Uses Bun built-ins (sqlite, file, etc.)
- **Config-driven**: TOML configuration for flexibility
- **Typed throughout**: Full TypeScript with strict mode

## Tech Stack

- **Runtime**: Bun (not Node.js)
- **CLI**: yargs
- **Config**: TOML
- **MQTT**: mqtt.js or Bun-native when available
- **Package**: Nix flake for reproducible builds

## Bun Conventions

Default to using Bun instead of Node.js.

- Use `bun <file>` instead of `node <file>` or `ts-node <file>`
- Use `bun test` instead of `jest` or `vitest`
- Use `bun build <file.html|file.ts|file.css>` instead of `webpack` or `esbuild`
- Use `bun install` instead of `npm install` or `yarn install` or `pnpm install`
- Use `bun run <script>` instead of `npm run <script>`
- Use `bunx <package> <command>` instead of `npx <package> <command>`
- Bun automatically loads .env, so don't use dotenv.

### Bun APIs

- `Bun.serve()` for HTTP/WebSocket servers
- `bun:sqlite` for SQLite (not better-sqlite3)
- `Bun.file()` for file I/O (not node:fs readFile/writeFile)
- `Bun.$\`cmd\`` for shell commands (not execa)
- Native `WebSocket` (not ws)

### Testing

```ts
import { test, expect, describe, beforeEach } from "bun:test";

describe("MetricCollector", () => {
  test("collects CPU metrics", async () => {
    const metrics = await collectCPU();
    expect(metrics.usage).toBeGreaterThanOrEqual(0);
  });
});
```

Run tests: `bun test`
Run specific: `bun test src/metrics`
Watch mode: `bun test --watch`

## Code Style

- Prefer `async/await` over callbacks
- Use explicit return types on public functions
- Prefer `interface` over `type` for object shapes
- Use `const` by default, `let` only when reassignment needed
- No classes unless state encapsulation is genuinely needed
- Prefer pure functions and composition

## Commands

```bash
bun run start     # Run in development
bun run dist      # Build standalone binary
bun test          # Run tests
bun test --watch  # Watch mode
```

## Planning Protocol

When implementing features:
1. Discuss the approach before writing code
2. Start with types/interfaces
3. Write tests alongside implementation
4. Keep PRs focused and small
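The event/command topic contract above can be sketched in TypeScript. The topic parsing is runnable; the mqtt.js wiring and the `dispatch` function are assumptions shown only in comments.

```typescript
// Payload shape from the command contract above.
export interface CommandPayload {
  args?: string[];
  timeout?: number; // seconds; the contract defaults to 30
}

// Pure: extract the action name from a command topic, or null if the
// topic doesn't belong to this host's command namespace.
export function parseCommandTopic(hostname: string, topic: string): string | null {
  const prefix = `systant/${hostname}/command/`;
  return topic.startsWith(prefix) ? topic.slice(prefix.length) : null;
}

// Hypothetical wiring with mqtt.js:
// const client = mqtt.connect("mqtt://broker");
// client.subscribe(`systant/${hostname}/command/+`);
// client.on("message", (topic, msg) => {
//   const action = parseCommandTopic(hostname, topic);
//   if (action) dispatch(action, JSON.parse(msg.toString()) as CommandPayload);
// });
```

Keeping the topic parsing pure means the routing logic can be covered by `bun test` without a live broker.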

Procfile

@ -1,2 +0,0 @@
server: cd server && mix run --no-halt
dashboard: cd dashboard && mix phx.server

README.md

@ -1,91 +1,15 @@
Removed (old README):

# Systant

A comprehensive Elixir-based system monitoring solution with real-time dashboard, designed for deployment across multiple NixOS hosts.

## Components

- **Server** (`server/`): Elixir OTP application that collects and publishes system metrics via MQTT
- **Dashboard** (`dashboard/`): Phoenix LiveView web dashboard for real-time monitoring
- **Nix Integration**: Complete NixOS module and packaging for easy deployment

## Features

### System Metrics Collection
- **CPU**: Load averages (1/5/15min) and utilization monitoring
- **Memory**: System memory usage and swap monitoring
- **Disk**: Usage statistics and capacity monitoring for all drives
- **System Alarms**: Real-time alerts for disk space, memory pressure, etc.
- **System Info**: Uptime, Erlang/OTP versions, scheduler information

### Real-time Dashboard
- Phoenix LiveView interface showing all connected hosts
- Live system metrics and alert monitoring
- Automatic reconnection and error handling

### MQTT Integration
- Publishes comprehensive system metrics every 30 seconds
- Uses hostname-based topics: `systant/${hostname}/stats`
- Structured JSON payloads with full system data
- Configurable MQTT broker connection

## Quick Start

### Development Environment

```bash
# Enter Nix development shell
nix develop

# Run both server and dashboard together (recommended)
just dev

# Or run components individually
just server     # Start systant server
just dashboard  # Start Phoenix LiveView dashboard

# Other development commands
just deps       # Install dependencies for both projects
just compile    # Compile both projects
just test       # Run tests for both projects
just clean      # Clean both projects
```

#### Hivemind Process Management

The project uses Hivemind for managing multiple processes during development:
- Server runs on MQTT publishing system metrics every 30 seconds
- Dashboard runs on http://localhost:4000 with real-time LiveView interface
- Color-coded logs for easy debugging (server=green, dashboard=yellow)

### Production Deployment (NixOS)

```bash
# Build and install via Nix
nix build
sudo nixos-rebuild switch --flake .

# Or use the NixOS module in your configuration:
# imports = [ ./path/to/systant/nix/nixos-module.nix ];
# services.systant.enable = true;
```

## Configuration

Default MQTT configuration (customizable via environment variables):
- **Host**: `mqtt.home:1883`
- **Topics**: `systant/${hostname}/stats` and `systant/${hostname}/commands`
- **Interval**: 30 seconds
- **Client ID**: `systant_${random}` (auto-generated to avoid conflicts)

## Architecture

- **Server**: `server/lib/systant/mqtt_client.ex` - MQTT publishing and command handling
- **Metrics**: `server/lib/systant/system_metrics.ex` - System data collection using `:os_mon`
- **Dashboard**: `dashboard/lib/dashboard/mqtt_subscriber.ex` - Real-time MQTT data consumption
- **Nix**: `nix/package.nix` and `nix/nixos-module.nix` - Complete packaging and deployment

## Roadmap

- ✅ **Phase 1**: System metrics collection with real-time dashboard
- 🔄 **Phase 2**: Command system for remote host management
- 🔄 **Phase 3**: Home Assistant integration for automation

See `CLAUDE.md` for detailed development context and implementation notes.

Added (new README):

# systant

To install dependencies:

```bash
bun install
```

To run:

```bash
bun run index.ts
```

This project was created using `bun init` in bun v1.3.6. [Bun](https://bun.com) is a fast all-in-one JavaScript runtime.

bun.lock Normal file

@ -0,0 +1,152 @@
{
"lockfileVersion": 1,
"configVersion": 1,
"workspaces": {
"": {
"name": "systant",
"dependencies": {
"mqtt": "^5.14.1",
"smol-toml": "^1.6.0",
"yargs": "^18.0.0",
},
"devDependencies": {
"@types/bun": "latest",
"@types/yargs": "^17.0.35",
},
"peerDependencies": {
"typescript": "^5",
},
},
},
"packages": {
"@babel/runtime": ["@babel/runtime@7.28.6", "", {}, "sha512-05WQkdpL9COIMz4LjTxGpPNCdlpyimKppYNoJ5Di5EUObifl8t4tuLuUBBZEpoLYOmfvIWrsp9fCl0HoPRVTdA=="],
"@types/bun": ["@types/bun@1.3.6", "", { "dependencies": { "bun-types": "1.3.6" } }, "sha512-uWCv6FO/8LcpREhenN1d1b6fcspAB+cefwD7uti8C8VffIv0Um08TKMn98FynpTiU38+y2dUO55T11NgDt8VAA=="],
"@types/node": ["@types/node@25.0.9", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-/rpCXHlCWeqClNBwUhDcusJxXYDjZTyE8v5oTO7WbL8eij2nKhUeU89/6xgjU7N4/Vh3He0BtyhJdQbDyhiXAw=="],
"@types/readable-stream": ["@types/readable-stream@4.0.23", "", { "dependencies": { "@types/node": "*" } }, "sha512-wwXrtQvbMHxCbBgjHaMGEmImFTQxxpfMOR/ZoQnXxB1woqkUbdLGFDgauo00Py9IudiaqSeiBiulSV9i6XIPig=="],
"@types/ws": ["@types/ws@8.18.1", "", { "dependencies": { "@types/node": "*" } }, "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg=="],
"@types/yargs": ["@types/yargs@17.0.35", "", { "dependencies": { "@types/yargs-parser": "*" } }, "sha512-qUHkeCyQFxMXg79wQfTtfndEC+N9ZZg76HJftDJp+qH2tV7Gj4OJi7l+PiWwJ+pWtW8GwSmqsDj/oymhrTWXjg=="],
"@types/yargs-parser": ["@types/yargs-parser@21.0.3", "", {}, "sha512-I4q9QU9MQv4oEOz4tAHJtNz1cwuLxn2F3xcc2iV5WdqLPpUnj30aUuxt1mAxYTG+oe8CZMV/+6rU4S4gRDzqtQ=="],
"abort-controller": ["abort-controller@3.0.0", "", { "dependencies": { "event-target-shim": "^5.0.0" } }, "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg=="],
"ansi-regex": ["ansi-regex@6.2.2", "", {}, "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg=="],
"ansi-styles": ["ansi-styles@6.2.3", "", {}, "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg=="],
"base64-js": ["base64-js@1.5.1", "", {}, "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="],
"bl": ["bl@6.1.6", "", { "dependencies": { "@types/readable-stream": "^4.0.0", "buffer": "^6.0.3", "inherits": "^2.0.4", "readable-stream": "^4.2.0" } }, "sha512-jLsPgN/YSvPUg9UX0Kd73CXpm2Psg9FxMeCSXnk3WBO3CMT10JMwijubhGfHCnFu6TPn1ei3b975dxv7K2pWVg=="],
"broker-factory": ["broker-factory@3.1.13", "", { "dependencies": { "@babel/runtime": "^7.28.6", "fast-unique-numbers": "^9.0.26", "tslib": "^2.8.1", "worker-factory": "^7.0.48" } }, "sha512-H2VALe31mEtO/SRcNp4cUU5BAm1biwhc/JaF77AigUuni/1YT0FLCJfbUxwIEs9y6Kssjk2fmXgf+Y9ALvmKlw=="],
"buffer": ["buffer@6.0.3", "", { "dependencies": { "base64-js": "^1.3.1", "ieee754": "^1.2.1" } }, "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA=="],
"buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="],
"bun-types": ["bun-types@1.3.6", "", { "dependencies": { "@types/node": "*" } }, "sha512-OlFwHcnNV99r//9v5IIOgQ9Uk37gZqrNMCcqEaExdkVq3Avwqok1bJFmvGMCkCE0FqzdY8VMOZpfpR3lwI+CsQ=="],
"cliui": ["cliui@9.0.1", "", { "dependencies": { "string-width": "^7.2.0", "strip-ansi": "^7.1.0", "wrap-ansi": "^9.0.0" } }, "sha512-k7ndgKhwoQveBL+/1tqGJYNz097I7WOvwbmmU2AR5+magtbjPWQTS1C5vzGkBC8Ym8UWRzfKUzUUqFLypY4Q+w=="],
"commist": ["commist@3.2.0", "", {}, "sha512-4PIMoPniho+LqXmpS5d3NuGYncG6XWlkBSVGiWycL22dd42OYdUGil2CWuzklaJoNxyxUSpO4MKIBU94viWNAw=="],
"concat-stream": ["concat-stream@2.0.0", "", { "dependencies": { "buffer-from": "^1.0.0", "inherits": "^2.0.3", "readable-stream": "^3.0.2", "typedarray": "^0.0.6" } }, "sha512-MWufYdFw53ccGjCA+Ol7XJYpAlW6/prSMzuPOTRnJGcGzuhLn4Scrz7qf6o8bROZ514ltazcIFJZevcfbo0x7A=="],
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"emoji-regex": ["emoji-regex@10.6.0", "", {}, "sha512-toUI84YS5YmxW219erniWD0CIVOo46xGKColeNQRgOzDorgBi1v4D71/OFzgD9GO2UGKIv1C3Sp8DAn0+j5w7A=="],
"escalade": ["escalade@3.2.0", "", {}, "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA=="],
"event-target-shim": ["event-target-shim@5.0.1", "", {}, "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ=="],
"events": ["events@3.3.0", "", {}, "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q=="],
"fast-unique-numbers": ["fast-unique-numbers@9.0.26", "", { "dependencies": { "@babel/runtime": "^7.28.6", "tslib": "^2.8.1" } }, "sha512-3Mtq8p1zQinjGyWfKeuBunbuFoixG72AUkk4VvzbX4ykCW9Q4FzRaNyIlfQhUjnKw2ARVP+/CKnoyr6wfHftig=="],
"get-caller-file": ["get-caller-file@2.0.5", "", {}, "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="],
"get-east-asian-width": ["get-east-asian-width@1.4.0", "", {}, "sha512-QZjmEOC+IT1uk6Rx0sX22V6uHWVwbdbxf1faPqJ1QhLdGgsRGCZoyaQBm/piRdJy/D2um6hM1UP7ZEeQ4EkP+Q=="],
"help-me": ["help-me@5.0.0", "", {}, "sha512-7xgomUX6ADmcYzFik0HzAxh/73YlKR9bmFzf51CZwR+b6YtzU2m0u49hQCqV6SvlqIqsaxovfwdvbnsw3b/zpg=="],
"ieee754": ["ieee754@1.2.1", "", {}, "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="],
"inherits": ["inherits@2.0.4", "", {}, "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="],
"ip-address": ["ip-address@10.1.0", "", {}, "sha512-XXADHxXmvT9+CRxhXg56LJovE+bmWnEWB78LB83VZTprKTmaC5QfruXocxzTZ2Kl0DNwKuBdlIhjL8LeY8Sf8Q=="],
"js-sdsl": ["js-sdsl@4.3.0", "", {}, "sha512-mifzlm2+5nZ+lEcLJMoBK0/IH/bDg8XnJfd/Wq6IP+xoCjLZsTOnV2QpxlVbX9bMnkl5PdEjNtBJ9Cj1NjifhQ=="],
"lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
"minimist": ["minimist@1.2.8", "", {}, "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="],
"mqtt": ["mqtt@5.14.1", "", { "dependencies": { "@types/readable-stream": "^4.0.21", "@types/ws": "^8.18.1", "commist": "^3.2.0", "concat-stream": "^2.0.0", "debug": "^4.4.1", "help-me": "^5.0.0", "lru-cache": "^10.4.3", "minimist": "^1.2.8", "mqtt-packet": "^9.0.2", "number-allocator": "^1.0.14", "readable-stream": "^4.7.0", "rfdc": "^1.4.1", "socks": "^2.8.6", "split2": "^4.2.0", "worker-timers": "^8.0.23", "ws": "^8.18.3" }, "bin": { "mqtt_pub": "build/bin/pub.js", "mqtt_sub": "build/bin/sub.js", "mqtt": "build/bin/mqtt.js" } }, "sha512-NxkPxE70Uq3Ph7goefQa7ggSsVzHrayCD0OyxlJgITN/EbzlZN+JEPmaAZdxP1LsIT5FamDyILoQTF72W7Nnbw=="],
"mqtt-packet": ["mqtt-packet@9.0.2", "", { "dependencies": { "bl": "^6.0.8", "debug": "^4.3.4", "process-nextick-args": "^2.0.1" } }, "sha512-MvIY0B8/qjq7bKxdN1eD+nrljoeaai+qjLJgfRn3TiMuz0pamsIWY2bFODPZMSNmabsLANXsLl4EMoWvlaTZWA=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"number-allocator": ["number-allocator@1.0.14", "", { "dependencies": { "debug": "^4.3.1", "js-sdsl": "4.3.0" } }, "sha512-OrL44UTVAvkKdOdRQZIJpLkAdjXGTRda052sN4sO77bKEzYYqWKMBjQvrJFzqygI99gL6Z4u2xctPW1tB8ErvA=="],
"process": ["process@0.11.10", "", {}, "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A=="],
"process-nextick-args": ["process-nextick-args@2.0.1", "", {}, "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag=="],
"readable-stream": ["readable-stream@4.7.0", "", { "dependencies": { "abort-controller": "^3.0.0", "buffer": "^6.0.3", "events": "^3.3.0", "process": "^0.11.10", "string_decoder": "^1.3.0" } }, "sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg=="],
"rfdc": ["rfdc@1.4.1", "", {}, "sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA=="],
"safe-buffer": ["safe-buffer@5.2.1", "", {}, "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="],
"smart-buffer": ["smart-buffer@4.2.0", "", {}, "sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWLEs+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg=="],
"smol-toml": ["smol-toml@1.6.0", "", {}, "sha512-4zemZi0HvTnYwLfrpk/CF9LOd9Lt87kAt50GnqhMpyF9U3poDAP2+iukq2bZsO/ufegbYehBkqINbsWxj4l4cw=="],
"socks": ["socks@2.8.7", "", { "dependencies": { "ip-address": "^10.0.1", "smart-buffer": "^4.2.0" } }, "sha512-HLpt+uLy/pxB+bum/9DzAgiKS8CX1EvbWxI4zlmgGCExImLdiad2iCwXT5Z4c9c3Eq8rP2318mPW2c+QbtjK8A=="],
"split2": ["split2@4.2.0", "", {}, "sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg=="],
"string-width": ["string-width@7.2.0", "", { "dependencies": { "emoji-regex": "^10.3.0", "get-east-asian-width": "^1.0.0", "strip-ansi": "^7.1.0" } }, "sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ=="],
"string_decoder": ["string_decoder@1.3.0", "", { "dependencies": { "safe-buffer": "~5.2.0" } }, "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA=="],
"strip-ansi": ["strip-ansi@7.1.2", "", { "dependencies": { "ansi-regex": "^6.0.1" } }, "sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA=="],
"tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="],
"typedarray": ["typedarray@0.0.6", "", {}, "sha512-/aCDEGatGvZ2BIk+HmLf4ifCJFwvKFNb9/JeZPMulfgFracn9QFcAf5GO8B/mweUjSoblS5In0cWhqpfs/5PQA=="],
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"undici-types": ["undici-types@7.16.0", "", {}, "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="],
"util-deprecate": ["util-deprecate@1.0.2", "", {}, "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="],
"worker-factory": ["worker-factory@7.0.48", "", { "dependencies": { "@babel/runtime": "^7.28.6", "fast-unique-numbers": "^9.0.26", "tslib": "^2.8.1" } }, "sha512-CGmBy3tJvpBPjUvb0t4PrpKubUsfkI1Ohg0/GGFU2RvA9j/tiVYwKU8O7yu7gH06YtzbeJLzdUR29lmZKn5pag=="],
"worker-timers": ["worker-timers@8.0.29", "", { "dependencies": { "@babel/runtime": "^7.28.6", "tslib": "^2.8.1", "worker-timers-broker": "^8.0.15", "worker-timers-worker": "^9.0.13" } }, "sha512-9jk0MWHhWAZ2xlJPXr45oe5UF/opdpfZrY0HtyPizWuJ+ce1M3IYk/4IIdGct3kn9Ncfs+tkZt3w1tU6KW2Fsg=="],
"worker-timers-broker": ["worker-timers-broker@8.0.15", "", { "dependencies": { "@babel/runtime": "^7.28.6", "broker-factory": "^3.1.13", "fast-unique-numbers": "^9.0.26", "tslib": "^2.8.1", "worker-timers-worker": "^9.0.13" } }, "sha512-Te+EiVUMzG5TtHdmaBZvBrZSFNauym6ImDaCAnzQUxvjnw+oGjMT2idmAOgDy30vOZMLejd0bcsc90Axu6XPWA=="],
"worker-timers-worker": ["worker-timers-worker@9.0.13", "", { "dependencies": { "@babel/runtime": "^7.28.6", "tslib": "^2.8.1", "worker-factory": "^7.0.48" } }, "sha512-qjn18szGb1kjcmh2traAdki1eiIS5ikFo+L90nfMOvSRpuDw1hAcR1nzkP2+Hkdqz5thIRnfuWx7QSpsEUsA6Q=="],
"wrap-ansi": ["wrap-ansi@9.0.2", "", { "dependencies": { "ansi-styles": "^6.2.1", "string-width": "^7.0.0", "strip-ansi": "^7.1.0" } }, "sha512-42AtmgqjV+X1VpdOfyTGOYRi0/zsoLqtXQckTmqTeybT+BDIbM/Guxo7x3pE2vtpr1ok6xRqM9OpBe+Jyoqyww=="],
"ws": ["ws@8.19.0", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": ">=5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-blAT2mjOEIi0ZzruJfIhb3nps74PRWTCz1IjglWEEpQl5XS/UNama6u2/rjFkDDouqr4L67ry+1aGIALViWjDg=="],
"y18n": ["y18n@5.0.8", "", {}, "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="],
"yargs": ["yargs@18.0.0", "", { "dependencies": { "cliui": "^9.0.1", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "string-width": "^7.2.0", "y18n": "^5.0.5", "yargs-parser": "^22.0.0" } }, "sha512-4UEqdc2RYGHZc7Doyqkrqiln3p9X2DZVxaGbwhn2pi7MrRagKaOcIKe8L3OxYcbhXLgLFUS3zAYuQjKBQgmuNg=="],
"yargs-parser": ["yargs-parser@22.0.0", "", {}, "sha512-rwu/ClNdSMpkSrUb+d6BRsSkLUq1fmfsY6TOpYzTwvwkg1/NRG85KBy3kq++A8LKQwX6lsu+aWad+2khvuXrqw=="],
"concat-stream/readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="],
}
}


@@ -1,5 +0,0 @@
[
import_deps: [:phoenix],
plugins: [Phoenix.LiveView.HTMLFormatter],
inputs: ["*.{heex,ex,exs}", "{config,lib,test}/**/*.{heex,ex,exs}"]
]

dashboard/.gitignore (37 lines, vendored)

@@ -1,37 +0,0 @@
# The directory Mix will write compiled artifacts to.
/_build/
# If you run "mix test --cover", coverage assets end up here.
/cover/
# The directory Mix downloads your dependencies sources to.
/deps/
# Where 3rd-party dependencies like ExDoc output generated docs.
/doc/
# Ignore .fetch files in case you like to edit your project deps locally.
/.fetch
# If the VM crashes, it generates a dump, let's ignore it too.
erl_crash.dump
# Also ignore archive artifacts (built via "mix archive.build").
*.ez
# Temporary files, for example, from tests.
/tmp/
# Ignore package tarball (built via "mix hex.build").
dashboard-*.tar
# Ignore assets that are produced by build tools.
/priv/static/assets/
# Ignore digested assets cache.
/priv/static/cache_manifest.json
# In case you use Node.js/npm, you want to ignore these.
npm-debug.log
/assets/node_modules/


@@ -1,18 +0,0 @@
# Dashboard
To start your Phoenix server:
* Run `mix setup` to install and setup dependencies
* Start Phoenix endpoint with `mix phx.server` or inside IEx with `iex -S mix phx.server`
Now you can visit [`localhost:4000`](http://localhost:4000) from your browser.
Ready to run in production? Please [check our deployment guides](https://hexdocs.pm/phoenix/deployment.html).
## Learn more
* Official website: https://www.phoenixframework.org/
* Guides: https://hexdocs.pm/phoenix/overview.html
* Docs: https://hexdocs.pm/phoenix
* Forum: https://elixirforum.com/c/phoenix-forum
* Source: https://github.com/phoenixframework/phoenix


@@ -1,5 +0,0 @@
@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";
/* This file is for your main application CSS */


@@ -1,44 +0,0 @@
// If you want to use Phoenix channels, run `mix help phx.gen.channel`
// to get started and then uncomment the line below.
// import "./user_socket.js"
// You can include dependencies in two ways.
//
// The simplest option is to put them in assets/vendor and
// import them using relative paths:
//
// import "../vendor/some-package.js"
//
// Alternatively, you can `npm install some-package --prefix assets` and import
// them using a path starting with the package name:
//
// import "some-package"
//
// Include phoenix_html to handle method=PUT/DELETE in forms and buttons.
import "phoenix_html"
// Establish Phoenix Socket and LiveView configuration.
import {Socket} from "phoenix"
import {LiveSocket} from "phoenix_live_view"
import topbar from "../vendor/topbar"
let csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content")
let liveSocket = new LiveSocket("/live", Socket, {
longPollFallbackMs: 2500,
params: {_csrf_token: csrfToken}
})
// Show progress bar on live navigation and form submits
topbar.config({barColors: {0: "#29d"}, shadowColor: "rgba(0, 0, 0, .3)"})
window.addEventListener("phx:page-loading-start", _info => topbar.show(300))
window.addEventListener("phx:page-loading-stop", _info => topbar.hide())
// connect if there are any LiveViews on the page
liveSocket.connect()
// expose liveSocket on window for web console debug logs and latency simulation:
// >> liveSocket.enableDebug()
// >> liveSocket.enableLatencySim(1000) // enabled for duration of browser session
// >> liveSocket.disableLatencySim()
window.liveSocket = liveSocket


@@ -1,74 +0,0 @@
// See the Tailwind configuration guide for advanced usage
// https://tailwindcss.com/docs/configuration
const plugin = require("tailwindcss/plugin")
const fs = require("fs")
const path = require("path")
module.exports = {
content: [
"./js/**/*.js",
"../lib/dashboard_web.ex",
"../lib/dashboard_web/**/*.*ex"
],
theme: {
extend: {
colors: {
brand: "#FD4F00",
}
},
},
plugins: [
require("@tailwindcss/forms"),
// Allows prefixing tailwind classes with LiveView classes to add rules
// only when LiveView classes are applied, for example:
//
// <div class="phx-click-loading:animate-ping">
//
plugin(({addVariant}) => addVariant("phx-click-loading", [".phx-click-loading&", ".phx-click-loading &"])),
plugin(({addVariant}) => addVariant("phx-submit-loading", [".phx-submit-loading&", ".phx-submit-loading &"])),
plugin(({addVariant}) => addVariant("phx-change-loading", [".phx-change-loading&", ".phx-change-loading &"])),
// Embeds Heroicons (https://heroicons.com) into your app.css bundle
// See your `CoreComponents.icon/1` for more information.
//
plugin(function({matchComponents, theme}) {
let iconsDir = path.join(__dirname, "../deps/heroicons/optimized")
let values = {}
let icons = [
["", "/24/outline"],
["-solid", "/24/solid"],
["-mini", "/20/solid"],
["-micro", "/16/solid"]
]
icons.forEach(([suffix, dir]) => {
fs.readdirSync(path.join(iconsDir, dir)).forEach(file => {
let name = path.basename(file, ".svg") + suffix
values[name] = {name, fullPath: path.join(iconsDir, dir, file)}
})
})
matchComponents({
"hero": ({name, fullPath}) => {
let content = fs.readFileSync(fullPath).toString().replace(/\r?\n|\r/g, "")
let size = theme("spacing.6")
if (name.endsWith("-mini")) {
size = theme("spacing.5")
} else if (name.endsWith("-micro")) {
size = theme("spacing.4")
}
return {
[`--hero-${name}`]: `url('data:image/svg+xml;utf8,${content}')`,
"-webkit-mask": `var(--hero-${name})`,
"mask": `var(--hero-${name})`,
"mask-repeat": "no-repeat",
"background-color": "currentColor",
"vertical-align": "middle",
"display": "inline-block",
"width": size,
"height": size
}
}
}, {values})
})
]
}


@@ -1,165 +0,0 @@
/**
* @license MIT
* topbar 2.0.0, 2023-02-04
* https://buunguyen.github.io/topbar
* Copyright (c) 2021 Buu Nguyen
*/
(function (window, document) {
"use strict";
// https://gist.github.com/paulirish/1579671
(function () {
var lastTime = 0;
var vendors = ["ms", "moz", "webkit", "o"];
for (var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) {
window.requestAnimationFrame =
window[vendors[x] + "RequestAnimationFrame"];
window.cancelAnimationFrame =
window[vendors[x] + "CancelAnimationFrame"] ||
window[vendors[x] + "CancelRequestAnimationFrame"];
}
if (!window.requestAnimationFrame)
window.requestAnimationFrame = function (callback, element) {
var currTime = new Date().getTime();
var timeToCall = Math.max(0, 16 - (currTime - lastTime));
var id = window.setTimeout(function () {
callback(currTime + timeToCall);
}, timeToCall);
lastTime = currTime + timeToCall;
return id;
};
if (!window.cancelAnimationFrame)
window.cancelAnimationFrame = function (id) {
clearTimeout(id);
};
})();
var canvas,
currentProgress,
showing,
progressTimerId = null,
fadeTimerId = null,
delayTimerId = null,
addEvent = function (elem, type, handler) {
if (elem.addEventListener) elem.addEventListener(type, handler, false);
else if (elem.attachEvent) elem.attachEvent("on" + type, handler);
else elem["on" + type] = handler;
},
options = {
autoRun: true,
barThickness: 3,
barColors: {
0: "rgba(26, 188, 156, .9)",
".25": "rgba(52, 152, 219, .9)",
".50": "rgba(241, 196, 15, .9)",
".75": "rgba(230, 126, 34, .9)",
"1.0": "rgba(211, 84, 0, .9)",
},
shadowBlur: 10,
shadowColor: "rgba(0, 0, 0, .6)",
className: null,
},
repaint = function () {
canvas.width = window.innerWidth;
canvas.height = options.barThickness * 5; // need space for shadow
var ctx = canvas.getContext("2d");
ctx.shadowBlur = options.shadowBlur;
ctx.shadowColor = options.shadowColor;
var lineGradient = ctx.createLinearGradient(0, 0, canvas.width, 0);
for (var stop in options.barColors)
lineGradient.addColorStop(stop, options.barColors[stop]);
ctx.lineWidth = options.barThickness;
ctx.beginPath();
ctx.moveTo(0, options.barThickness / 2);
ctx.lineTo(
Math.ceil(currentProgress * canvas.width),
options.barThickness / 2
);
ctx.strokeStyle = lineGradient;
ctx.stroke();
},
createCanvas = function () {
canvas = document.createElement("canvas");
var style = canvas.style;
style.position = "fixed";
style.top = style.left = style.right = style.margin = style.padding = 0;
style.zIndex = 100001;
style.display = "none";
if (options.className) canvas.classList.add(options.className);
document.body.appendChild(canvas);
addEvent(window, "resize", repaint);
},
topbar = {
config: function (opts) {
for (var key in opts)
if (options.hasOwnProperty(key)) options[key] = opts[key];
},
show: function (delay) {
if (showing) return;
if (delay) {
if (delayTimerId) return;
delayTimerId = setTimeout(() => topbar.show(), delay);
} else {
showing = true;
if (fadeTimerId !== null) window.cancelAnimationFrame(fadeTimerId);
if (!canvas) createCanvas();
canvas.style.opacity = 1;
canvas.style.display = "block";
topbar.progress(0);
if (options.autoRun) {
(function loop() {
progressTimerId = window.requestAnimationFrame(loop);
topbar.progress(
"+" + 0.05 * Math.pow(1 - Math.sqrt(currentProgress), 2)
);
})();
}
}
},
progress: function (to) {
if (typeof to === "undefined") return currentProgress;
if (typeof to === "string") {
to =
(to.indexOf("+") >= 0 || to.indexOf("-") >= 0
? currentProgress
: 0) + parseFloat(to);
}
currentProgress = to > 1 ? 1 : to;
repaint();
return currentProgress;
},
hide: function () {
clearTimeout(delayTimerId);
delayTimerId = null;
if (!showing) return;
showing = false;
if (progressTimerId != null) {
window.cancelAnimationFrame(progressTimerId);
progressTimerId = null;
}
(function loop() {
if (topbar.progress("+.1") >= 1) {
canvas.style.opacity -= 0.05;
if (canvas.style.opacity <= 0.05) {
canvas.style.display = "none";
fadeTimerId = null;
return;
}
}
fadeTimerId = window.requestAnimationFrame(loop);
})();
},
};
if (typeof module === "object" && typeof module.exports === "object") {
module.exports = topbar;
} else if (typeof define === "function" && define.amd) {
define(function () {
return topbar;
});
} else {
this.topbar = topbar;
}
}.call(this, window, document));


@@ -1,65 +0,0 @@
# This file is responsible for configuring your application
# and its dependencies with the aid of the Config module.
#
# This configuration file is loaded before any dependency and
# is restricted to this project.
# General application configuration
import Config
config :dashboard,
generators: [timestamp_type: :utc_datetime]
# Configures the endpoint
config :dashboard, DashboardWeb.Endpoint,
url: [host: "localhost"],
adapter: Bandit.PhoenixAdapter,
render_errors: [
formats: [html: DashboardWeb.ErrorHTML, json: DashboardWeb.ErrorJSON],
layout: false
],
pubsub_server: Dashboard.PubSub,
live_view: [signing_salt: "kl+uafFV"]
# Configures the mailer
#
# By default it uses the "Local" adapter which stores the emails
# locally. You can see the emails in your browser, at "/dev/mailbox".
#
# For production it's recommended to configure a different adapter
# at the `config/runtime.exs`.
config :dashboard, Dashboard.Mailer, adapter: Swoosh.Adapters.Local
# Configure esbuild (the version is required)
config :esbuild,
version: "0.17.11",
dashboard: [
args:
~w(js/app.js --bundle --target=es2017 --outdir=../priv/static/assets --external:/fonts/* --external:/images/*),
cd: Path.expand("../assets", __DIR__),
env: %{"NODE_PATH" => Path.expand("../deps", __DIR__)}
]
# Configure tailwind (the version is required)
config :tailwind,
version: "3.4.3",
dashboard: [
args: ~w(
--config=tailwind.config.js
--input=css/app.css
--output=../priv/static/assets/app.css
),
cd: Path.expand("../assets", __DIR__)
]
# Configures Elixir's Logger
config :logger, :console,
format: "$time $metadata[$level] $message\n",
metadata: [:request_id]
# Use Jason for JSON parsing in Phoenix
config :phoenix, :json_library, Jason
# Import environment specific config. This must remain at the bottom
# of this file so it overrides the configuration defined above.
import_config "#{config_env()}.exs"


@@ -1,75 +0,0 @@
import Config
# For development, we disable any cache and enable
# debugging and code reloading.
#
# The watchers configuration can be used to run external
# watchers to your application. For example, we can use it
# to bundle .js and .css sources.
config :dashboard, DashboardWeb.Endpoint,
# Binding to loopback ipv4 address prevents access from other machines.
# Change to `ip: {0, 0, 0, 0}` to allow access from other machines.
http: [ip: {127, 0, 0, 1}, port: 4000],
check_origin: false,
code_reloader: true,
debug_errors: true,
secret_key_base: "fQwe0EM9wVUgpFSQi1AcH3YzXPWDo8oX39gORi8+lcMNR4WCwpRS8cXb5LKd/kY6",
watchers: [
esbuild: {Esbuild, :install_and_run, [:dashboard, ~w(--sourcemap=inline --watch)]},
tailwind: {Tailwind, :install_and_run, [:dashboard, ~w(--watch)]}
]
# ## SSL Support
#
# In order to use HTTPS in development, a self-signed
# certificate can be generated by running the following
# Mix task:
#
# mix phx.gen.cert
#
# Run `mix help phx.gen.cert` for more information.
#
# The `http:` config above can be replaced with:
#
# https: [
# port: 4001,
# cipher_suite: :strong,
# keyfile: "priv/cert/selfsigned_key.pem",
# certfile: "priv/cert/selfsigned.pem"
# ],
#
# If desired, both `http:` and `https:` keys can be
# configured to run both http and https servers on
# different ports.
# Watch static and templates for browser reloading.
config :dashboard, DashboardWeb.Endpoint,
live_reload: [
patterns: [
~r"priv/static/(?!uploads/).*(js|css|png|jpeg|jpg|gif|svg)$",
~r"priv/gettext/.*(po)$",
~r"lib/dashboard_web/(controllers|live|components)/.*(ex|heex)$"
]
]
# Enable dev routes for dashboard and mailbox
config :dashboard, dev_routes: true
# Do not include metadata nor timestamps in development logs
config :logger, :console, format: "[$level] $message\n"
# Set a higher stacktrace during development. Avoid configuring such
# in production as building large stacktraces may be expensive.
config :phoenix, :stacktrace_depth, 20
# Initialize plugs at runtime for faster development compilation
config :phoenix, :plug_init_mode, :runtime
config :phoenix_live_view,
# Include HEEx debug annotations as HTML comments in rendered markup
debug_heex_annotations: true,
# Enable helpful, but potentially expensive runtime checks
enable_expensive_runtime_checks: true
# Disable swoosh api client as it is only required for production adapters.
config :swoosh, :api_client, false


@@ -1,20 +0,0 @@
import Config
# Note we also include the path to a cache manifest
# containing the digested version of static files. This
# manifest is generated by the `mix assets.deploy` task,
# which you should run after static files are built and
# before starting your production server.
config :dashboard, DashboardWeb.Endpoint, cache_static_manifest: "priv/static/cache_manifest.json"
# Configures Swoosh API Client
config :swoosh, api_client: Swoosh.ApiClient.Finch, finch_name: Dashboard.Finch
# Disable Swoosh Local Memory Storage
config :swoosh, local: false
# Do not print debug messages in production
config :logger, level: :info
# Runtime production configuration, including reading
# of environment variables, is done on config/runtime.exs.


@@ -1,102 +0,0 @@
import Config
# config/runtime.exs is executed for all environments, including
# during releases. It is executed after compilation and before the
# system starts, so it is typically used to load production configuration
# and secrets from environment variables or elsewhere. Do not define
# any compile-time configuration in here, as it won't be applied.
# The block below contains prod specific runtime configuration.
# ## Using releases
#
# If you use `mix release`, you need to explicitly enable the server
# by passing the PHX_SERVER=true when you start it:
#
# PHX_SERVER=true bin/dashboard start
#
# Alternatively, you can use `mix phx.gen.release` to generate a `bin/server`
# script that automatically sets the env var above.
if System.get_env("PHX_SERVER") do
config :dashboard, DashboardWeb.Endpoint, server: true
end
if config_env() == :prod do
# The secret key base is used to sign/encrypt cookies and other secrets.
# A default value is used in config/dev.exs and config/test.exs but you
# want to use a different value for prod and you most likely don't want
# to check this value into version control, so we use an environment
# variable instead.
secret_key_base =
System.get_env("SECRET_KEY_BASE") ||
raise """
environment variable SECRET_KEY_BASE is missing.
You can generate one by calling: mix phx.gen.secret
"""
host = System.get_env("PHX_HOST") || "example.com"
port = String.to_integer(System.get_env("PORT") || "4000")
config :dashboard, :dns_cluster_query, System.get_env("DNS_CLUSTER_QUERY")
config :dashboard, DashboardWeb.Endpoint,
url: [host: host, port: 443, scheme: "https"],
http: [
# Enable IPv6 and bind on all interfaces.
# Set it to {0, 0, 0, 0, 0, 0, 0, 1} for local network only access.
# See the documentation on https://hexdocs.pm/bandit/Bandit.html#t:options/0
# for details about using IPv6 vs IPv4 and loopback vs public addresses.
ip: {0, 0, 0, 0, 0, 0, 0, 0},
port: port
],
secret_key_base: secret_key_base
# ## SSL Support
#
# To get SSL working, you will need to add the `https` key
# to your endpoint configuration:
#
# config :dashboard, DashboardWeb.Endpoint,
# https: [
# ...,
# port: 443,
# cipher_suite: :strong,
# keyfile: System.get_env("SOME_APP_SSL_KEY_PATH"),
# certfile: System.get_env("SOME_APP_SSL_CERT_PATH")
# ]
#
# The `cipher_suite` is set to `:strong` to support only the
# latest and more secure SSL ciphers. This means old browsers
# and clients may not be supported. You can set it to
# `:compatible` for wider support.
#
# `:keyfile` and `:certfile` expect an absolute path to the key
# and cert in disk or a relative path inside priv, for example
# "priv/ssl/server.key". For all supported SSL configuration
# options, see https://hexdocs.pm/plug/Plug.SSL.html#configure/1
#
# We also recommend setting `force_ssl` in your config/prod.exs,
# ensuring no data is ever sent via http, always redirecting to https:
#
# config :dashboard, DashboardWeb.Endpoint,
# force_ssl: [hsts: true]
#
# Check `Plug.SSL` for all available options in `force_ssl`.
# ## Configuring the mailer
#
# In production you need to configure the mailer to use a different adapter.
# Also, you may need to configure the Swoosh API client of your choice if you
# are not using SMTP. Here is an example of the configuration:
#
# config :dashboard, Dashboard.Mailer,
# adapter: Swoosh.Adapters.Mailgun,
# api_key: System.get_env("MAILGUN_API_KEY"),
# domain: System.get_env("MAILGUN_DOMAIN")
#
# For this example you need include a HTTP client required by Swoosh API client.
# Swoosh supports Hackney and Finch out of the box:
#
# config :swoosh, :api_client, Swoosh.ApiClient.Hackney
#
# See https://hexdocs.pm/swoosh/Swoosh.html#module-installation for details.
end


@@ -1,24 +0,0 @@
import Config
# We don't run a server during test. If one is required,
# you can enable the server option below.
config :dashboard, DashboardWeb.Endpoint,
http: [ip: {127, 0, 0, 1}, port: 4002],
secret_key_base: "3kX5M3PaOeCmcUWHkFMjWsDhknhlbtZz14hLZACeEJXkV2i6tAGNw/7H5Fq2aYiL",
server: false
# In test we don't send emails
config :dashboard, Dashboard.Mailer, adapter: Swoosh.Adapters.Test
# Disable swoosh api client as it is only required for production adapters
config :swoosh, :api_client, false
# Print only warnings and errors during test
config :logger, level: :warning
# Initialize plugs at runtime for faster test compilation
config :phoenix, :plug_init_mode, :runtime
# Enable helpful, but potentially expensive runtime checks
config :phoenix_live_view,
enable_expensive_runtime_checks: true


@@ -1,9 +0,0 @@
defmodule Dashboard do
@moduledoc """
Dashboard keeps the contexts that define your domain
and business logic.
Contexts are also responsible for managing your data, regardless
if it comes from the database, an external API or others.
"""
end


@@ -1,35 +0,0 @@
defmodule Dashboard.Application do
# See https://hexdocs.pm/elixir/Application.html
# for more information on OTP Applications
@moduledoc false
use Application
@impl true
def start(_type, _args) do
children = [
DashboardWeb.Telemetry,
{DNSCluster, query: Application.get_env(:dashboard, :dns_cluster_query) || :ignore},
{Phoenix.PubSub, name: Dashboard.PubSub},
# Start the Finch HTTP client for sending emails
{Finch, name: Dashboard.Finch},
# Start real MQTT subscriber
Dashboard.MqttSubscriber,
# Start to serve requests, typically the last entry
DashboardWeb.Endpoint
]
# See https://hexdocs.pm/elixir/Supervisor.html
# for other strategies and supported options
opts = [strategy: :one_for_one, name: Dashboard.Supervisor]
Supervisor.start_link(children, opts)
end
# Tell Phoenix to update the endpoint configuration
# whenever the application is updated.
@impl true
def config_change(changed, _new, removed) do
DashboardWeb.Endpoint.config_change(changed, removed)
:ok
end
end


@@ -1,3 +0,0 @@
defmodule Dashboard.Mailer do
use Swoosh.Mailer, otp_app: :dashboard
end


@@ -1,97 +0,0 @@
defmodule Dashboard.MqttSubscriber do
@moduledoc """
Simple MQTT subscriber for development dashboard.
"""
use GenServer
require Logger
alias Phoenix.PubSub
@pubsub_topic "systant:hosts"
def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
def get_hosts do
GenServer.call(__MODULE__, :get_hosts)
end
@impl true
def init(_opts) do
# Start MQTT connection directly with hostname-based client ID to avoid conflicts
{:ok, hostname} = :inet.gethostname()
client_id = "systant-dashboard-#{hostname}"
connection_opts = [
client_id: client_id,
server: {Tortoise.Transport.Tcp, host: "mqtt.home", port: 1883},
handler: {__MODULE__, []},
subscriptions: [{"systant/+/stats", 0}]
]
case Tortoise.Connection.start_link(connection_opts) do
{:ok, _pid} ->
Logger.info("Dashboard MQTT subscriber connected successfully")
{:ok, %{hosts: %{}}}
{:error, {:already_started, _pid}} ->
Logger.info("Dashboard MQTT connection already exists, reusing")
{:ok, %{hosts: %{}}}
{:error, reason} ->
Logger.error("Failed to connect to MQTT broker: #{inspect(reason)}")
{:stop, reason}
end
end
@impl true
def handle_call(:get_hosts, _from, state) do
{:reply, state.hosts, state}
end
@impl true
def handle_info(_msg, state) do
{:noreply, state}
end
# Tortoise handler callbacks
def connection(status, state) do
Logger.info("MQTT connection status: #{status}")
{:ok, state}
end
def subscription(status, topic, state) do
Logger.info("MQTT subscription status for #{topic}: #{status}")
{:ok, state}
end
def handle_message(topic, payload, state) do
topic_parts = if is_binary(topic), do: String.split(topic, "/"), else: topic
case topic_parts do
["systant", hostname, "stats"] ->
case Jason.decode(payload) do
{:ok, data} ->
host_data = Map.put(data, "last_seen", DateTime.utc_now())
# Broadcast to LiveView
PubSub.broadcast(Dashboard.PubSub, @pubsub_topic, {:host_update, hostname, host_data})
# Update our state
GenServer.cast(__MODULE__, {:update_host, hostname, host_data})
{:error, _reason} ->
:ok
end
_ ->
:ok
end
{:ok, state}
end
@impl true
def handle_cast({:update_host, hostname, host_data}, state) do
updated_hosts = Map.put(state.hosts, hostname, host_data)
{:noreply, %{state | hosts: updated_hosts}}
end
@impl true
def terminate(_reason, _state), do: :ok
end
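
Each host is expected to publish its stats as a JSON object on `systant/<hostname>/stats`. A sketch of such a payload follows; the field names are illustrative, since the subscriber only requires valid JSON and adds `last_seen` itself:

```json
{
  "cpu_percent": 12.5,
  "memory_used_mb": 2048,
  "memory_total_mb": 16384,
  "uptime_seconds": 86400
}
```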

View File

@ -1,116 +0,0 @@
defmodule DashboardWeb do
@moduledoc """
The entrypoint for defining your web interface, such
as controllers, components, channels, and so on.
This can be used in your application as:
use DashboardWeb, :controller
use DashboardWeb, :html
The definitions below will be executed for every controller,
component, etc., so keep them short and clean, focused
on imports, uses and aliases.
Do NOT define functions inside the quoted expressions
below. Instead, define additional modules and import
those modules here.
"""
def static_paths, do: ~w(assets fonts images favicon.ico robots.txt)
def router do
quote do
use Phoenix.Router, helpers: false
# Import common connection and controller functions to use in pipelines
import Plug.Conn
import Phoenix.Controller
import Phoenix.LiveView.Router
end
end
def channel do
quote do
use Phoenix.Channel
end
end
def controller do
quote do
use Phoenix.Controller,
formats: [:html, :json],
layouts: [html: DashboardWeb.Layouts]
use Gettext, backend: DashboardWeb.Gettext
import Plug.Conn
unquote(verified_routes())
end
end
def live_view do
quote do
use Phoenix.LiveView,
layout: {DashboardWeb.Layouts, :app}
unquote(html_helpers())
end
end
def live_component do
quote do
use Phoenix.LiveComponent
unquote(html_helpers())
end
end
def html do
quote do
use Phoenix.Component
# Import convenience functions from controllers
import Phoenix.Controller,
only: [get_csrf_token: 0, view_module: 1, view_template: 1]
# Include general helpers for rendering HTML
unquote(html_helpers())
end
end
defp html_helpers do
quote do
# Translation
use Gettext, backend: DashboardWeb.Gettext
# HTML escaping functionality
import Phoenix.HTML
# Core UI components
import DashboardWeb.CoreComponents
# Shortcut for generating JS commands
alias Phoenix.LiveView.JS
# Routes generation with the ~p sigil
unquote(verified_routes())
end
end
def verified_routes do
quote do
use Phoenix.VerifiedRoutes,
endpoint: DashboardWeb.Endpoint,
router: DashboardWeb.Router,
statics: DashboardWeb.static_paths()
end
end
@doc """
When used, dispatch to the appropriate controller/live_view/etc.
"""
defmacro __using__(which) when is_atom(which) do
apply(__MODULE__, which, [])
end
end

View File

@ -1,676 +0,0 @@
defmodule DashboardWeb.CoreComponents do
@moduledoc """
Provides core UI components.
At first glance, this module may seem daunting, but its goal is to provide
core building blocks for your application, such as modals, tables, and
forms. The components consist mostly of markup and are well-documented
with doc strings and declarative assigns. You may customize and style
them in any way you want, based on your application growth and needs.
The default components use Tailwind CSS, a utility-first CSS framework.
See the [Tailwind CSS documentation](https://tailwindcss.com) to learn
how to customize them or feel free to swap in another framework altogether.
Icons are provided by [heroicons](https://heroicons.com). See `icon/1` for usage.
"""
use Phoenix.Component
use Gettext, backend: DashboardWeb.Gettext
alias Phoenix.LiveView.JS
@doc """
Renders a modal.
## Examples
<.modal id="confirm-modal">
This is a modal.
</.modal>
JS commands may be passed to the `:on_cancel` to configure
the closing/cancel event, for example:
<.modal id="confirm" on_cancel={JS.navigate(~p"/posts")}>
This is another modal.
</.modal>
"""
attr :id, :string, required: true
attr :show, :boolean, default: false
attr :on_cancel, JS, default: %JS{}
slot :inner_block, required: true
def modal(assigns) do
~H"""
<div
id={@id}
phx-mounted={@show && show_modal(@id)}
phx-remove={hide_modal(@id)}
data-cancel={JS.exec(@on_cancel, "phx-remove")}
class="relative z-50 hidden"
>
<div id={"#{@id}-bg"} class="bg-zinc-50/90 fixed inset-0 transition-opacity" aria-hidden="true" />
<div
class="fixed inset-0 overflow-y-auto"
aria-labelledby={"#{@id}-title"}
aria-describedby={"#{@id}-description"}
role="dialog"
aria-modal="true"
tabindex="0"
>
<div class="flex min-h-full items-center justify-center">
<div class="w-full max-w-3xl p-4 sm:p-6 lg:py-8">
<.focus_wrap
id={"#{@id}-container"}
phx-window-keydown={JS.exec("data-cancel", to: "##{@id}")}
phx-key="escape"
phx-click-away={JS.exec("data-cancel", to: "##{@id}")}
class="shadow-zinc-700/10 ring-zinc-700/10 relative hidden rounded-2xl bg-white p-14 shadow-lg ring-1 transition"
>
<div class="absolute top-6 right-5">
<button
phx-click={JS.exec("data-cancel", to: "##{@id}")}
type="button"
class="-m-3 flex-none p-3 opacity-20 hover:opacity-40"
aria-label={gettext("close")}
>
<.icon name="hero-x-mark-solid" class="h-5 w-5" />
</button>
</div>
<div id={"#{@id}-content"}>
{render_slot(@inner_block)}
</div>
</.focus_wrap>
</div>
</div>
</div>
</div>
"""
end
@doc """
Renders flash notices.
## Examples
<.flash kind={:info} flash={@flash} />
<.flash kind={:info} phx-mounted={show("#flash")}>Welcome Back!</.flash>
"""
attr :id, :string, doc: "the optional id of flash container"
attr :flash, :map, default: %{}, doc: "the map of flash messages to display"
attr :title, :string, default: nil
attr :kind, :atom, values: [:info, :error], doc: "used for styling and flash lookup"
attr :rest, :global, doc: "the arbitrary HTML attributes to add to the flash container"
slot :inner_block, doc: "the optional inner block that renders the flash message"
def flash(assigns) do
assigns = assign_new(assigns, :id, fn -> "flash-#{assigns.kind}" end)
~H"""
<div
:if={msg = render_slot(@inner_block) || Phoenix.Flash.get(@flash, @kind)}
id={@id}
phx-click={JS.push("lv:clear-flash", value: %{key: @kind}) |> hide("##{@id}")}
role="alert"
class={[
"fixed top-2 right-2 mr-2 w-80 sm:w-96 z-50 rounded-lg p-3 ring-1",
@kind == :info && "bg-emerald-50 text-emerald-800 ring-emerald-500 fill-cyan-900",
@kind == :error && "bg-rose-50 text-rose-900 shadow-md ring-rose-500 fill-rose-900"
]}
{@rest}
>
<p :if={@title} class="flex items-center gap-1.5 text-sm font-semibold leading-6">
<.icon :if={@kind == :info} name="hero-information-circle-mini" class="h-4 w-4" />
<.icon :if={@kind == :error} name="hero-exclamation-circle-mini" class="h-4 w-4" />
{@title}
</p>
<p class="mt-2 text-sm leading-5">{msg}</p>
<button type="button" class="group absolute top-1 right-1 p-2" aria-label={gettext("close")}>
<.icon name="hero-x-mark-solid" class="h-5 w-5 opacity-40 group-hover:opacity-70" />
</button>
</div>
"""
end
@doc """
Shows the flash group with standard titles and content.
## Examples
<.flash_group flash={@flash} />
"""
attr :flash, :map, required: true, doc: "the map of flash messages"
attr :id, :string, default: "flash-group", doc: "the optional id of flash container"
def flash_group(assigns) do
~H"""
<div id={@id}>
<.flash kind={:info} title={gettext("Success!")} flash={@flash} />
<.flash kind={:error} title={gettext("Error!")} flash={@flash} />
<.flash
id="client-error"
kind={:error}
title={gettext("We can't find the internet")}
phx-disconnected={show(".phx-client-error #client-error")}
phx-connected={hide("#client-error")}
hidden
>
{gettext("Attempting to reconnect")}
<.icon name="hero-arrow-path" class="ml-1 h-3 w-3 animate-spin" />
</.flash>
<.flash
id="server-error"
kind={:error}
title={gettext("Something went wrong!")}
phx-disconnected={show(".phx-server-error #server-error")}
phx-connected={hide("#server-error")}
hidden
>
{gettext("Hang in there while we get back on track")}
<.icon name="hero-arrow-path" class="ml-1 h-3 w-3 animate-spin" />
</.flash>
</div>
"""
end
@doc """
Renders a simple form.
## Examples
<.simple_form for={@form} phx-change="validate" phx-submit="save">
<.input field={@form[:email]} label="Email"/>
<.input field={@form[:username]} label="Username" />
<:actions>
<.button>Save</.button>
</:actions>
</.simple_form>
"""
attr :for, :any, required: true, doc: "the data structure for the form"
attr :as, :any, default: nil, doc: "the server side parameter to collect all input under"
attr :rest, :global,
include: ~w(autocomplete name rel action enctype method novalidate target multipart),
doc: "the arbitrary HTML attributes to apply to the form tag"
slot :inner_block, required: true
slot :actions, doc: "the slot for form actions, such as a submit button"
def simple_form(assigns) do
~H"""
<.form :let={f} for={@for} as={@as} {@rest}>
<div class="mt-10 space-y-8 bg-white">
{render_slot(@inner_block, f)}
<div :for={action <- @actions} class="mt-2 flex items-center justify-between gap-6">
{render_slot(action, f)}
</div>
</div>
</.form>
"""
end
@doc """
Renders a button.
## Examples
<.button>Send!</.button>
<.button phx-click="go" class="ml-2">Send!</.button>
"""
attr :type, :string, default: nil
attr :class, :string, default: nil
attr :rest, :global, include: ~w(disabled form name value)
slot :inner_block, required: true
def button(assigns) do
~H"""
<button
type={@type}
class={[
"phx-submit-loading:opacity-75 rounded-lg bg-zinc-900 hover:bg-zinc-700 py-2 px-3",
"text-sm font-semibold leading-6 text-white active:text-white/80",
@class
]}
{@rest}
>
{render_slot(@inner_block)}
</button>
"""
end
@doc """
Renders an input with label and error messages.
A `Phoenix.HTML.FormField` may be passed as argument,
which is used to retrieve the input name, id, and values.
Otherwise all attributes may be passed explicitly.
## Types
This function accepts all HTML input types, considering that:
* You may also set `type="select"` to render a `<select>` tag
* `type="checkbox"` is used exclusively to render boolean values
* For live file uploads, see `Phoenix.Component.live_file_input/1`
See https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input
for more information. Unsupported types, such as hidden and radio,
are best written directly in your templates.
## Examples
<.input field={@form[:email]} type="email" />
<.input name="my-input" errors={["oh no!"]} />
"""
attr :id, :any, default: nil
attr :name, :any
attr :label, :string, default: nil
attr :value, :any
attr :type, :string,
default: "text",
values: ~w(checkbox color date datetime-local email file month number password
range search select tel text textarea time url week)
attr :field, Phoenix.HTML.FormField,
doc: "a form field struct retrieved from the form, for example: @form[:email]"
attr :errors, :list, default: []
attr :checked, :boolean, doc: "the checked flag for checkbox inputs"
attr :prompt, :string, default: nil, doc: "the prompt for select inputs"
attr :options, :list, doc: "the options to pass to Phoenix.HTML.Form.options_for_select/2"
attr :multiple, :boolean, default: false, doc: "the multiple flag for select inputs"
attr :rest, :global,
include: ~w(accept autocomplete capture cols disabled form list max maxlength min minlength
multiple pattern placeholder readonly required rows size step)
def input(%{field: %Phoenix.HTML.FormField{} = field} = assigns) do
errors = if Phoenix.Component.used_input?(field), do: field.errors, else: []
assigns
|> assign(field: nil, id: assigns.id || field.id)
|> assign(:errors, Enum.map(errors, &translate_error(&1)))
|> assign_new(:name, fn -> if assigns.multiple, do: field.name <> "[]", else: field.name end)
|> assign_new(:value, fn -> field.value end)
|> input()
end
def input(%{type: "checkbox"} = assigns) do
assigns =
assign_new(assigns, :checked, fn ->
Phoenix.HTML.Form.normalize_value("checkbox", assigns[:value])
end)
~H"""
<div>
<label class="flex items-center gap-4 text-sm leading-6 text-zinc-600">
<input type="hidden" name={@name} value="false" disabled={@rest[:disabled]} />
<input
type="checkbox"
id={@id}
name={@name}
value="true"
checked={@checked}
class="rounded border-zinc-300 text-zinc-900 focus:ring-0"
{@rest}
/>
{@label}
</label>
<.error :for={msg <- @errors}>{msg}</.error>
</div>
"""
end
def input(%{type: "select"} = assigns) do
~H"""
<div>
<.label for={@id}>{@label}</.label>
<select
id={@id}
name={@name}
class="mt-2 block w-full rounded-md border border-gray-300 bg-white shadow-sm focus:border-zinc-400 focus:ring-0 sm:text-sm"
multiple={@multiple}
{@rest}
>
<option :if={@prompt} value="">{@prompt}</option>
{Phoenix.HTML.Form.options_for_select(@options, @value)}
</select>
<.error :for={msg <- @errors}>{msg}</.error>
</div>
"""
end
def input(%{type: "textarea"} = assigns) do
~H"""
<div>
<.label for={@id}>{@label}</.label>
<textarea
id={@id}
name={@name}
class={[
"mt-2 block w-full rounded-lg text-zinc-900 focus:ring-0 sm:text-sm sm:leading-6 min-h-[6rem]",
@errors == [] && "border-zinc-300 focus:border-zinc-400",
@errors != [] && "border-rose-400 focus:border-rose-400"
]}
{@rest}
>{Phoenix.HTML.Form.normalize_value("textarea", @value)}</textarea>
<.error :for={msg <- @errors}>{msg}</.error>
</div>
"""
end
# All other inputs (text, datetime-local, url, password, etc.) are handled here...
def input(assigns) do
~H"""
<div>
<.label for={@id}>{@label}</.label>
<input
type={@type}
name={@name}
id={@id}
value={Phoenix.HTML.Form.normalize_value(@type, @value)}
class={[
"mt-2 block w-full rounded-lg text-zinc-900 focus:ring-0 sm:text-sm sm:leading-6",
@errors == [] && "border-zinc-300 focus:border-zinc-400",
@errors != [] && "border-rose-400 focus:border-rose-400"
]}
{@rest}
/>
<.error :for={msg <- @errors}>{msg}</.error>
</div>
"""
end
@doc """
Renders a label.
"""
attr :for, :string, default: nil
slot :inner_block, required: true
def label(assigns) do
~H"""
<label for={@for} class="block text-sm font-semibold leading-6 text-zinc-800">
{render_slot(@inner_block)}
</label>
"""
end
@doc """
Generates a generic error message.
"""
slot :inner_block, required: true
def error(assigns) do
~H"""
<p class="mt-3 flex gap-3 text-sm leading-6 text-rose-600">
<.icon name="hero-exclamation-circle-mini" class="mt-0.5 h-5 w-5 flex-none" />
{render_slot(@inner_block)}
</p>
"""
end
@doc """
Renders a header with title.
"""
attr :class, :string, default: nil
slot :inner_block, required: true
slot :subtitle
slot :actions
def header(assigns) do
~H"""
<header class={[@actions != [] && "flex items-center justify-between gap-6", @class]}>
<div>
<h1 class="text-lg font-semibold leading-8 text-zinc-800">
{render_slot(@inner_block)}
</h1>
<p :if={@subtitle != []} class="mt-2 text-sm leading-6 text-zinc-600">
{render_slot(@subtitle)}
</p>
</div>
<div class="flex-none">{render_slot(@actions)}</div>
</header>
"""
end
@doc ~S"""
Renders a table with generic styling.
## Examples
<.table id="users" rows={@users}>
<:col :let={user} label="id">{user.id}</:col>
<:col :let={user} label="username">{user.username}</:col>
</.table>
"""
attr :id, :string, required: true
attr :rows, :list, required: true
attr :row_id, :any, default: nil, doc: "the function for generating the row id"
attr :row_click, :any, default: nil, doc: "the function for handling phx-click on each row"
attr :row_item, :any,
default: &Function.identity/1,
doc: "the function for mapping each row before calling the :col and :action slots"
slot :col, required: true do
attr :label, :string
end
slot :action, doc: "the slot for showing user actions in the last table column"
def table(assigns) do
assigns =
with %{rows: %Phoenix.LiveView.LiveStream{}} <- assigns do
assign(assigns, row_id: assigns.row_id || fn {id, _item} -> id end)
end
~H"""
<div class="overflow-y-auto px-4 sm:overflow-visible sm:px-0">
<table class="w-[40rem] mt-11 sm:w-full">
<thead class="text-sm text-left leading-6 text-zinc-500">
<tr>
<th :for={col <- @col} class="p-0 pb-4 pr-6 font-normal">{col[:label]}</th>
<th :if={@action != []} class="relative p-0 pb-4">
<span class="sr-only">{gettext("Actions")}</span>
</th>
</tr>
</thead>
<tbody
id={@id}
phx-update={match?(%Phoenix.LiveView.LiveStream{}, @rows) && "stream"}
class="relative divide-y divide-zinc-100 border-t border-zinc-200 text-sm leading-6 text-zinc-700"
>
<tr :for={row <- @rows} id={@row_id && @row_id.(row)} class="group hover:bg-zinc-50">
<td
:for={{col, i} <- Enum.with_index(@col)}
phx-click={@row_click && @row_click.(row)}
class={["relative p-0", @row_click && "hover:cursor-pointer"]}
>
<div class="block py-4 pr-6">
<span class="absolute -inset-y-px right-0 -left-4 group-hover:bg-zinc-50 sm:rounded-l-xl" />
<span class={["relative", i == 0 && "font-semibold text-zinc-900"]}>
{render_slot(col, @row_item.(row))}
</span>
</div>
</td>
<td :if={@action != []} class="relative w-14 p-0">
<div class="relative whitespace-nowrap py-4 text-right text-sm font-medium">
<span class="absolute -inset-y-px -right-4 left-0 group-hover:bg-zinc-50 sm:rounded-r-xl" />
<span
:for={action <- @action}
class="relative ml-4 font-semibold leading-6 text-zinc-900 hover:text-zinc-700"
>
{render_slot(action, @row_item.(row))}
</span>
</div>
</td>
</tr>
</tbody>
</table>
</div>
"""
end
@doc """
Renders a data list.
## Examples
<.list>
<:item title="Title">{@post.title}</:item>
<:item title="Views">{@post.views}</:item>
</.list>
"""
slot :item, required: true do
attr :title, :string, required: true
end
def list(assigns) do
~H"""
<div class="mt-14">
<dl class="-my-4 divide-y divide-zinc-100">
<div :for={item <- @item} class="flex gap-4 py-4 text-sm leading-6 sm:gap-8">
<dt class="w-1/4 flex-none text-zinc-500">{item.title}</dt>
<dd class="text-zinc-700">{render_slot(item)}</dd>
</div>
</dl>
</div>
"""
end
@doc """
Renders a back navigation link.
## Examples
<.back navigate={~p"/posts"}>Back to posts</.back>
"""
attr :navigate, :any, required: true
slot :inner_block, required: true
def back(assigns) do
~H"""
<div class="mt-16">
<.link
navigate={@navigate}
class="text-sm font-semibold leading-6 text-zinc-900 hover:text-zinc-700"
>
<.icon name="hero-arrow-left-solid" class="h-3 w-3" />
{render_slot(@inner_block)}
</.link>
</div>
"""
end
@doc """
Renders a [Heroicon](https://heroicons.com).
Heroicons come in three styles: outline, solid, and mini.
By default, the outline style is used, but solid and mini may
be applied by using the `-solid` and `-mini` suffix.
You can customize the size and colors of the icons by setting
width, height, and background color classes.
Icons are extracted from the `deps/heroicons` directory and bundled within
your compiled app.css by the plugin in your `assets/tailwind.config.js`.
## Examples
<.icon name="hero-x-mark-solid" />
<.icon name="hero-arrow-path" class="ml-1 w-3 h-3 animate-spin" />
"""
attr :name, :string, required: true
attr :class, :string, default: nil
def icon(%{name: "hero-" <> _} = assigns) do
~H"""
<span class={[@name, @class]} />
"""
end
## JS Commands
def show(js \\ %JS{}, selector) do
JS.show(js,
to: selector,
time: 300,
transition:
{"transition-all transform ease-out duration-300",
"opacity-0 translate-y-4 sm:translate-y-0 sm:scale-95",
"opacity-100 translate-y-0 sm:scale-100"}
)
end
def hide(js \\ %JS{}, selector) do
JS.hide(js,
to: selector,
time: 200,
transition:
{"transition-all transform ease-in duration-200",
"opacity-100 translate-y-0 sm:scale-100",
"opacity-0 translate-y-4 sm:translate-y-0 sm:scale-95"}
)
end
def show_modal(js \\ %JS{}, id) when is_binary(id) do
js
|> JS.show(to: "##{id}")
|> JS.show(
to: "##{id}-bg",
time: 300,
transition: {"transition-all transform ease-out duration-300", "opacity-0", "opacity-100"}
)
|> show("##{id}-container")
|> JS.add_class("overflow-hidden", to: "body")
|> JS.focus_first(to: "##{id}-content")
end
def hide_modal(js \\ %JS{}, id) do
js
|> JS.hide(
to: "##{id}-bg",
transition: {"transition-all transform ease-in duration-200", "opacity-100", "opacity-0"}
)
|> hide("##{id}-container")
|> JS.hide(to: "##{id}", transition: {"block", "block", "hidden"})
|> JS.remove_class("overflow-hidden", to: "body")
|> JS.pop_focus()
end
@doc """
Translates an error message using gettext.
"""
def translate_error({msg, opts}) do
# When using gettext, we typically pass the strings we want
# to translate as a static argument:
#
# # Translate the number of files with plural rules
# dngettext("errors", "1 file", "%{count} files", count)
#
# However the error messages in our forms and APIs are generated
# dynamically, so we need to translate them by calling Gettext
# with our gettext backend as the first argument. Translations are
# available in the errors.po file (as we use the "errors" domain).
if count = opts[:count] do
Gettext.dngettext(DashboardWeb.Gettext, "errors", msg, msg, count, opts)
else
Gettext.dgettext(DashboardWeb.Gettext, "errors", msg, opts)
end
end
@doc """
Translates the errors for a field from a keyword list of errors.
"""
def translate_errors(errors, field) when is_list(errors) do
for {^field, {msg, opts}} <- errors, do: translate_error({msg, opts})
end
end
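
A quick sketch of how `translate_error/1` dispatches between the two Gettext calls above (the messages here are illustrative examples in the "errors" domain):

```elixir
# opts contains :count, so the plural-aware dngettext/6 path is taken
translate_error({"should be at least %{count} character(s)", count: 3})

# no :count, so the simple dgettext/4 path is taken
translate_error({"can't be blank", []})
```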

View File

@ -1,14 +0,0 @@
defmodule DashboardWeb.Layouts do
@moduledoc """
This module holds different layouts used by your application.
See the `layouts` directory for all templates available.
The "root" layout is a skeleton rendered as part of the
application router. The "app" layout is set as the default
layout on both `use DashboardWeb, :controller` and
`use DashboardWeb, :live_view`.
"""
use DashboardWeb, :html
embed_templates "layouts/*"
end

View File

@ -1,32 +0,0 @@
<header class="px-4 sm:px-6 lg:px-8">
<div class="flex items-center justify-between border-b border-zinc-100 py-3 text-sm">
<div class="flex items-center gap-4">
<a href="/">
<img src={~p"/images/logo.svg"} width="36" />
</a>
<p class="bg-brand/5 text-brand rounded-full px-2 font-medium leading-6">
v{Application.spec(:phoenix, :vsn)}
</p>
</div>
<div class="flex items-center gap-4 font-semibold leading-6 text-zinc-900">
<a href="https://twitter.com/elixirphoenix" class="hover:text-zinc-700">
@elixirphoenix
</a>
<a href="https://github.com/phoenixframework/phoenix" class="hover:text-zinc-700">
GitHub
</a>
<a
href="https://hexdocs.pm/phoenix/overview.html"
class="rounded-lg bg-zinc-100 px-2 py-1 hover:bg-zinc-200/80"
>
Get Started <span aria-hidden="true">&rarr;</span>
</a>
</div>
</div>
</header>
<main class="px-4 py-20 sm:px-6 lg:px-8">
<div class="mx-auto max-w-2xl">
<.flash_group flash={@flash} />
{@inner_content}
</div>
</main>

View File

@ -1,17 +0,0 @@
<!DOCTYPE html>
<html lang="en" class="[scrollbar-gutter:stable]">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="csrf-token" content={get_csrf_token()} />
<.live_title default="Dashboard" suffix=" · Phoenix Framework">
{assigns[:page_title]}
</.live_title>
<link phx-track-static rel="stylesheet" href={~p"/assets/app.css"} />
<script defer phx-track-static type="text/javascript" src={~p"/assets/app.js"}>
</script>
</head>
<body class="bg-white">
{@inner_content}
</body>
</html>

View File

@ -1,24 +0,0 @@
defmodule DashboardWeb.ErrorHTML do
@moduledoc """
This module is invoked by your endpoint in case of errors on HTML requests.
See config/config.exs.
"""
use DashboardWeb, :html
# If you want to customize your error pages,
# uncomment the embed_templates/1 call below
# and add pages to the error directory:
#
# * lib/dashboard_web/controllers/error_html/404.html.heex
# * lib/dashboard_web/controllers/error_html/500.html.heex
#
# embed_templates "error_html/*"
# The default is to render a plain text page based on
# the template name. For example, "404.html" becomes
# "Not Found".
def render(template, _assigns) do
Phoenix.Controller.status_message_from_template(template)
end
end

View File

@ -1,21 +0,0 @@
defmodule DashboardWeb.ErrorJSON do
@moduledoc """
This module is invoked by your endpoint in case of errors on JSON requests.
See config/config.exs.
"""
# If you want to customize a particular status code,
# you may add your own clauses, such as:
#
# def render("500.json", _assigns) do
# %{errors: %{detail: "Internal Server Error"}}
# end
# By default, Phoenix returns the status message from
# the template name. For example, "404.json" becomes
# "Not Found".
def render(template, _assigns) do
%{errors: %{detail: Phoenix.Controller.status_message_from_template(template)}}
end
end

View File

@ -1,9 +0,0 @@
defmodule DashboardWeb.PageController do
use DashboardWeb, :controller
def home(conn, _params) do
# The home page is often custom made,
# so skip the default app layout.
render(conn, :home, layout: false)
end
end

View File

@ -1,10 +0,0 @@
defmodule DashboardWeb.PageHTML do
@moduledoc """
This module contains pages rendered by PageController.
See the `page_html` directory for all templates available.
"""
use DashboardWeb, :html
embed_templates "page_html/*"
end

View File

@ -1,98 +0,0 @@
<.flash_group flash={@flash} />
<div class="left-[40rem] fixed inset-y-0 right-0 z-0 hidden lg:block xl:left-[50rem]">
<svg
viewBox="0 0 1480 957"
fill="none"
aria-hidden="true"
class="absolute inset-0 h-full w-full"
preserveAspectRatio="xMinYMid slice"
>
<path fill="#EE7868" d="M0 0h1480v957H0z" />
<path
d="M137.542 466.27c-582.851-48.41-988.806-82.127-1608.412 658.2l67.39 810 3083.15-256.51L1535.94-49.622l-98.36 8.183C1269.29 281.468 734.115 515.799 146.47 467.012l-8.928-.742Z"
fill="#FF9F92"
/>
<path
d="M371.028 528.664C-169.369 304.988-545.754 149.198-1361.45 665.565l-182.58 792.025 3014.73 694.98 389.42-1689.25-96.18-22.171C1505.28 697.438 924.153 757.586 379.305 532.09l-8.277-3.426Z"
fill="#FA8372"
/>
<path
d="M359.326 571.714C-104.765 215.795-428.003-32.102-1349.55 255.554l-282.3 1224.596 3047.04 722.01 312.24-1354.467C1411.25 1028.3 834.355 935.995 366.435 577.166l-7.109-5.452Z"
fill="#E96856"
fill-opacity=".6"
/>
<path
d="M1593.87 1236.88c-352.15 92.63-885.498-145.85-1244.602-613.557l-5.455-7.105C-12.347 152.31-260.41-170.8-1225-131.458l-368.63 1599.048 3057.19 704.76 130.31-935.47Z"
fill="#C42652"
fill-opacity=".2"
/>
<path
d="M1411.91 1526.93c-363.79 15.71-834.312-330.6-1085.883-863.909l-3.822-8.102C72.704 125.95-101.074-242.476-1052.01-408.907l-699.85 1484.267 2837.75 1338.01 326.02-886.44Z"
fill="#A41C42"
fill-opacity=".2"
/>
<path
d="M1116.26 1863.69c-355.457-78.98-720.318-535.27-825.287-1115.521l-1.594-8.816C185.286 163.833 112.786-237.016-762.678-643.898L-1822.83 608.665 571.922 2635.55l544.338-771.86Z"
fill="#A41C42"
fill-opacity=".2"
/>
</svg>
</div>
<div class="px-4 py-10 sm:px-6 sm:py-28 lg:px-8 xl:px-28 xl:py-32">
<div class="mx-auto max-w-xl lg:mx-0">
<svg viewBox="0 0 71 48" class="h-12" aria-hidden="true">
<path
d="m26.371 33.477-.552-.1c-3.92-.729-6.397-3.1-7.57-6.829-.733-2.324.597-4.035 3.035-4.148 1.995-.092 3.362 1.055 4.57 2.39 1.557 1.72 2.984 3.558 4.514 5.305 2.202 2.515 4.797 4.134 8.347 3.634 3.183-.448 5.958-1.725 8.371-3.828.363-.316.761-.592 1.144-.886l-.241-.284c-2.027.63-4.093.841-6.205.735-3.195-.16-6.24-.828-8.964-2.582-2.486-1.601-4.319-3.746-5.19-6.611-.704-2.315.736-3.934 3.135-3.6.948.133 1.746.56 2.463 1.165.583.493 1.143 1.015 1.738 1.493 2.8 2.25 6.712 2.375 10.265-.068-5.842-.026-9.817-3.24-13.308-7.313-1.366-1.594-2.7-3.216-4.095-4.785-2.698-3.036-5.692-5.71-9.79-6.623C12.8-.623 7.745.14 2.893 2.361 1.926 2.804.997 3.319 0 4.149c.494 0 .763.006 1.032 0 2.446-.064 4.28 1.023 5.602 3.024.962 1.457 1.415 3.104 1.761 4.798.513 2.515.247 5.078.544 7.605.761 6.494 4.08 11.026 10.26 13.346 2.267.852 4.591 1.135 7.172.555ZM10.751 3.852c-.976.246-1.756-.148-2.56-.962 1.377-.343 2.592-.476 3.897-.528-.107.848-.607 1.306-1.336 1.49Zm32.002 37.924c-.085-.626-.62-.901-1.04-1.228-1.857-1.446-4.03-1.958-6.333-2-1.375-.026-2.735-.128-4.031-.61-.595-.22-1.26-.505-1.244-1.272.015-.78.693-1 1.31-1.184.505-.15 1.026-.247 1.6-.382-1.46-.936-2.886-1.065-4.787-.3-2.993 1.202-5.943 1.06-8.926-.017-1.684-.608-3.179-1.563-4.735-2.408l-.043.03a2.96 2.96 0 0 0 .04-.029c-.038-.117-.107-.12-.197-.054l.122.107c1.29 2.115 3.034 3.817 5.004 5.271 3.793 2.8 7.936 4.471 12.784 3.73A66.714 66.714 0 0 1 37 40.877c1.98-.16 3.866.398 5.753.899Zm-9.14-30.345c-.105-.076-.206-.266-.42-.069 1.745 2.36 3.985 4.098 6.683 5.193 4.354 1.767 8.773 2.07 13.293.51 3.51-1.21 6.033-.028 7.343 3.38.19-3.955-2.137-6.837-5.843-7.401-2.084-.318-4.01.373-5.962.94-5.434 1.575-10.485.798-15.094-2.553Zm27.085 15.425c.708.059 1.416.123 2.124.185-1.6-1.405-3.55-1.517-5.523-1.404-3.003.17-5.167 1.903-7.14 3.972-1.739 1.824-3.31 3.87-5.903 4.604.043.078.054.117.066.117.35.005.699.021 1.047.005 3.768-.17 7.317-.965 10.14-3.7.89-.86 1.685-1.817 2.544-2.71.716-.746 1.584-1.159 
2.645-1.07Zm-8.753-4.67c-2.812.246-5.254 1.409-7.548 2.943-1.766 1.18-3.654 1.738-5.776 1.37-.374-.066-.75-.114-1.124-.17l-.013.156c.135.07.265.151.405.207.354.14.702.308 1.07.395 4.083.971 7.992.474 11.516-1.803 2.221-1.435 4.521-1.707 7.013-1.336.252.038.503.083.756.107.234.022.479.255.795.003-2.179-1.574-4.526-2.096-7.094-1.872Zm-10.049-9.544c1.475.051 2.943-.142 4.486-1.059-.452.04-.643.04-.827.076-2.126.424-4.033-.04-5.733-1.383-.623-.493-1.257-.974-1.889-1.457-2.503-1.914-5.374-2.555-8.514-2.5.05.154.054.26.108.315 3.417 3.455 7.371 5.836 12.369 6.008Zm24.727 17.731c-2.114-2.097-4.952-2.367-7.578-.537 1.738.078 3.043.632 4.101 1.728.374.388.763.768 1.182 1.106 1.6 1.29 4.311 1.352 5.896.155-1.861-.726-1.861-.726-3.601-2.452Zm-21.058 16.06c-1.858-3.46-4.981-4.24-8.59-4.008a9.667 9.667 0 0 1 2.977 1.39c.84.586 1.547 1.311 2.243 2.055 1.38 1.473 3.534 2.376 4.962 2.07-.656-.412-1.238-.848-1.592-1.507Zm17.29-19.32c0-.023.001-.045.003-.068l-.006.006.006-.006-.036-.004.021.018.012.053Zm-20 14.744a7.61 7.61 0 0 0-.072-.041.127.127 0 0 0 .015.043c.005.008.038 0 .058-.002Zm-.072-.041-.008-.034-.008.01.008-.01-.022-.006.005.026.024.014Z"
fill="#FD4F00"
/>
</svg>
<h1 class="text-brand mt-10 flex items-center text-sm font-semibold leading-6">
<.icon name="hero-computer-desktop" class="h-4 w-4" />
Systant Dashboard
<small class="bg-brand/5 text-[0.8125rem] ml-3 rounded-full px-2 font-medium leading-6">
Real-time monitoring
</small>
</h1>
<p class="text-[2rem] mt-4 font-semibold leading-10 tracking-tighter text-zinc-900 text-balance">
Monitor all your hosts in real-time.
</p>
<p class="mt-4 text-base leading-7 text-zinc-600">
Phoenix LiveView dashboard for systant hosts. Get real-time system statistics via MQTT from all your monitored servers.
</p>
<div class="flex">
<div class="w-full sm:w-auto">
<div class="mt-10 grid grid-cols-1 gap-x-6 gap-y-4 sm:grid-cols-2">
<a
href="/hosts"
class="group relative rounded-2xl px-6 py-4 text-sm font-semibold leading-6 text-zinc-900 sm:py-6"
>
<span class="absolute inset-0 rounded-2xl bg-zinc-50 transition group-hover:bg-zinc-100 sm:group-hover:scale-105">
</span>
<span class="relative flex items-center gap-4 sm:flex-col">
<.icon name="hero-computer-desktop" class="h-6 w-6" />
View Hosts
</span>
</a>
<a
href="https://github.com/ryanpandya/systant"
class="group relative rounded-2xl px-6 py-4 text-sm font-semibold leading-6 text-zinc-900 sm:py-6"
>
<span class="absolute inset-0 rounded-2xl bg-zinc-50 transition group-hover:bg-zinc-100 sm:group-hover:scale-105">
</span>
<span class="relative flex items-center gap-4 sm:flex-col">
<svg viewBox="0 0 24 24" aria-hidden="true" class="h-6 w-6">
<path
fill-rule="evenodd"
clip-rule="evenodd"
d="M12 0C5.37 0 0 5.506 0 12.303c0 5.445 3.435 10.043 8.205 11.674.6.107.825-.262.825-.585 0-.292-.015-1.261-.015-2.291C6 21.67 5.22 20.346 4.98 19.654c-.135-.354-.72-1.446-1.23-1.738-.42-.23-1.02-.8-.015-.815.945-.015 1.62.892 1.845 1.261 1.08 1.86 2.805 1.338 3.495 1.015.105-.8.42-1.338.765-1.645-2.67-.308-5.46-1.37-5.46-6.075 0-1.338.465-2.446 1.23-3.307-.12-.308-.54-1.569.12-3.26 0 0 1.005-.323 3.3 1.26.96-.276 1.98-.415 3-.415s2.04.139 3 .416c2.295-1.6 3.3-1.261 3.3-1.261.66 1.691.24 2.952.12 3.26.765.861 1.23 1.953 1.23 3.307 0 4.721-2.805 5.767-5.475 6.075.435.384.81 1.122.81 2.276 0 1.645-.015 2.968-.015 3.383 0 .323.225.707.825.585a12.047 12.047 0 0 0 5.919-4.489A12.536 12.536 0 0 0 24 12.304C24 5.505 18.63 0 12 0Z"
fill="#18181B"
/>
</svg>
Source Code
</span>
</a>
</div>
</div>
</div>
</div>
</div>

View File

@ -1,52 +0,0 @@
defmodule DashboardWeb.Endpoint do
use Phoenix.Endpoint, otp_app: :dashboard
# The session will be stored in the cookie and signed;
# this means its contents can be read but not tampered with.
# Set :encryption_salt if you would also like to encrypt it.
@session_options [
store: :cookie,
key: "_dashboard_key",
signing_salt: "C+2rMQXr",
same_site: "Lax"
]
socket "/live", Phoenix.LiveView.Socket,
websocket: [connect_info: [session: @session_options]],
longpoll: [connect_info: [session: @session_options]]
# Serve at "/" the static files from "priv/static" directory.
#
# You should set gzip to true if you are running phx.digest
# when deploying your static files in production.
plug Plug.Static,
at: "/",
from: :dashboard,
gzip: false,
only: DashboardWeb.static_paths()
# Code reloading can be explicitly enabled under the
# :code_reloader configuration of your endpoint.
if code_reloading? do
socket "/phoenix/live_reload/socket", Phoenix.LiveReloader.Socket
plug Phoenix.LiveReloader
plug Phoenix.CodeReloader
end
plug Phoenix.LiveDashboard.RequestLogger,
param_key: "request_logger",
cookie_key: "request_logger"
plug Plug.RequestId
plug Plug.Telemetry, event_prefix: [:phoenix, :endpoint]
plug Plug.Parsers,
parsers: [:urlencoded, :multipart, :json],
pass: ["*/*"],
json_decoder: Phoenix.json_library()
plug Plug.MethodOverride
plug Plug.Head
plug Plug.Session, @session_options
plug DashboardWeb.Router
end

View File

@ -1,25 +0,0 @@
defmodule DashboardWeb.Gettext do
@moduledoc """
A module providing Internationalization with a gettext-based API.
By using [Gettext](https://hexdocs.pm/gettext), your module compiles translations
that you can use in your application. To use this Gettext backend module,
call `use Gettext` and pass it as an option:
use Gettext, backend: DashboardWeb.Gettext
# Simple translation
gettext("Here is the string to translate")
# Plural translation
ngettext("Here is the string to translate",
"Here are the strings to translate",
3)
# Domain-based translation
dgettext("errors", "Here is the error message to translate")
See the [Gettext Docs](https://hexdocs.pm/gettext) for detailed usage.
"""
use Gettext.Backend, otp_app: :dashboard
end

View File

@ -1,605 +0,0 @@
defmodule DashboardWeb.HostsLive do
@moduledoc """
LiveView for real-time systant host monitoring.
"""
use DashboardWeb, :live_view
alias Phoenix.PubSub
@pubsub_topic "systant:hosts"
@impl true
def mount(_params, _session, socket) do
if connected?(socket) do
# Subscribe to host updates from MQTT
PubSub.subscribe(Dashboard.PubSub, @pubsub_topic)
end
# Start with empty hosts - will be populated by MQTT
hosts = %{}
socket =
socket
|> assign(:hosts, hosts)
|> assign(:show_raw_data, %{}) # Track which hosts show raw data
|> assign(:page_title, "Systant Hosts")
{:ok, socket}
end
@impl true
def handle_info({:host_update, hostname, host_data}, socket) do
require Logger
Logger.info("LiveView received host update for #{hostname}: #{inspect(host_data)}")
updated_hosts = Map.put(socket.assigns.hosts, hostname, host_data)
{:noreply, assign(socket, :hosts, updated_hosts)}
end
@impl true
def handle_event("toggle_raw", %{"hostname" => hostname}, socket) do
current_state = Map.get(socket.assigns.show_raw_data, hostname, false)
updated_raw_data = Map.put(socket.assigns.show_raw_data, hostname, !current_state)
{:noreply, assign(socket, :show_raw_data, updated_raw_data)}
end
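The `handle_info/2` clause above expects `{:host_update, hostname, host_data}` messages on the `"systant:hosts"` topic. The publishing side is not part of this diff, but as a hedged sketch, whatever consumes the MQTT messages (e.g. a Tortoise handler elsewhere in the app) would decode the payload and broadcast roughly like this:

```elixir
# Hypothetical sketch of the publishing side. The payload shape and the
# presence of a "hostname" key are assumptions based on how the LiveView
# above keys its :hosts map; names here are illustrative, not from the diff.
payload = ~s({"hostname": "nas", "cpu": {"avg1": 0.42}})
%{"hostname" => hostname} = host_data = Jason.decode!(payload)

Phoenix.PubSub.broadcast(
  Dashboard.PubSub,
  "systant:hosts",
  {:host_update, hostname, host_data}
)
```

Every connected LiveView that subscribed in `mount/3` then receives the tuple and merges it into its `:hosts` assign.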
@impl true
def render(assigns) do
~H"""
<div class="px-4 py-10 sm:px-6 sm:py-28 lg:px-8 xl:px-28 xl:py-32">
<div class="mx-auto max-w-xl lg:mx-0 lg:max-w-3xl">
<h1 class="text-brand mt-10 flex items-center text-sm font-semibold leading-6">
<.icon name="hero-computer-desktop" class="h-4 w-4" />
Systant Host Monitor
</h1>
<p class="text-[2rem] mt-4 font-semibold leading-10 tracking-tighter text-zinc-900">
Real-time system monitoring across all hosts
</p>
<p class="mt-4 text-base leading-7 text-zinc-600">
Live MQTT-powered dashboard showing statistics from all your systant-enabled hosts.
</p>
<div class="mt-10 grid gap-6">
<%= if Enum.empty?(@hosts) do %>
<div class="rounded-lg border border-zinc-200 p-8 text-center">
<.icon name="hero-signal-slash" class="mx-auto h-12 w-12 text-zinc-400" />
<h3 class="mt-4 text-lg font-semibold text-zinc-900">No hosts detected</h3>
<p class="mt-2 text-sm text-zinc-600">
Waiting for systant hosts to publish data via MQTT...
</p>
</div>
<% else %>
<%= for {hostname, host_data} <- @hosts do %>
<.host_card
hostname={hostname}
data={host_data}
show_raw={Map.get(@show_raw_data, hostname, false)}
/>
<% end %>
<% end %>
</div>
</div>
</div>
"""
end
attr :hostname, :string, required: true
attr :data, :map, required: true
attr :show_raw, :boolean, default: false
defp host_card(assigns) do
assigns = assign(assigns, :show_raw, assigns[:show_raw] || false)
~H"""
<div class="rounded-lg border border-zinc-200 bg-white p-6 shadow-sm">
<!-- Host Header -->
<div class="flex items-center justify-between mb-6">
<div class="flex items-center space-x-3">
<div class="rounded-full bg-green-100 p-2">
<.icon name="hero-server" class="h-5 w-5 text-green-600" />
</div>
<div>
<h3 class="text-lg font-semibold text-zinc-900"><%= @hostname %></h3>
<p class="text-sm text-zinc-600">
Last seen: <%= format_datetime(@data["last_seen"]) %>
</p>
</div>
</div>
<div class="flex items-center space-x-2">
<button
phx-click="toggle_raw"
phx-value-hostname={@hostname}
class="text-xs px-2 py-1 rounded border border-zinc-300 hover:bg-zinc-50"
>
<%= if @show_raw, do: "Hide Raw", else: "Show Raw" %>
</button>
<div class="rounded-full bg-green-100 px-3 py-1">
<span class="text-xs font-medium text-green-800">Online</span>
</div>
</div>
</div>
<%= if @show_raw do %>
<!-- Raw Data View -->
<div class="mt-4">
<h4 class="text-sm font-medium text-zinc-700 mb-2">Raw Data:</h4>
<pre class="text-xs bg-zinc-50 p-3 rounded border overflow-x-auto">
<%= Jason.encode!(@data, pretty: true) %>
</pre>
</div>
<% else %>
<!-- Graphical Dashboard View -->
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6">
<!-- CPU Load Averages -->
<.metric_card
title="CPU Load Average"
icon="hero-cpu-chip"
data={@data["cpu"]}
type={:load_average}
/>
<!-- Memory Usage -->
<.metric_card
title="Memory Usage"
icon="hero-circle-stack"
data={@data["memory"]}
type={:memory}
/>
<!-- Disk Usage -->
<.metric_card
title="Disk Usage"
icon="hero-server-stack"
data={@data["disk"]}
type={:disk}
/>
<!-- GPU Metrics -->
<%= if @data["gpu"] do %>
<.metric_card
title="GPU Status"
icon="hero-tv"
data={@data["gpu"]}
type={:gpu}
/>
<% end %>
<!-- Network Interfaces -->
<%= if @data["network"] && length(@data["network"]) > 0 do %>
<.metric_card
title="Network Interfaces"
icon="hero-signal"
data={@data["network"]}
type={:network}
/>
<% end %>
<!-- Temperature Sensors -->
<%= if @data["temperature"] do %>
<.metric_card
title="Temperature"
icon="hero-fire"
data={@data["temperature"]}
type={:temperature}
/>
<% end %>
</div>
<!-- Additional Metrics Row -->
<%= if @data["processes"] do %>
<div class="mt-6">
<.metric_card
title="Top Processes"
icon="hero-list-bullet"
data={@data["processes"]}
type={:processes}
/>
</div>
<% end %>
<!-- System Info -->
<div class="mt-6 p-4 bg-zinc-50 rounded-lg">
<h4 class="text-sm font-medium text-zinc-700 mb-2">System Information</h4>
<div class="grid grid-cols-2 md:grid-cols-4 gap-4 text-sm">
<div>
<span class="text-zinc-600">Uptime:</span>
<span class="ml-1 font-medium"><%= format_uptime(@data["system"]["uptime_seconds"]) %></span>
</div>
<div>
<span class="text-zinc-600">Erlang:</span>
<span class="ml-1 font-medium"><%= @data["system"]["erlang_version"] %></span>
</div>
<div>
<span class="text-zinc-600">OTP:</span>
<span class="ml-1 font-medium"><%= @data["system"]["otp_release"] %></span>
</div>
<div>
<span class="text-zinc-600">Schedulers:</span>
<span class="ml-1 font-medium"><%= @data["system"]["schedulers"] %></span>
</div>
</div>
</div>
<% end %>
</div>
"""
end
defp format_datetime(%DateTime{} = datetime) do
Calendar.strftime(datetime, "%Y-%m-%d %H:%M:%S UTC")
end
defp format_datetime(_), do: "Unknown"
defp format_uptime(nil), do: "Unknown"
defp format_uptime(seconds) when is_integer(seconds) do
days = div(seconds, 86400)
hours = div(rem(seconds, 86400), 3600)
minutes = div(rem(seconds, 3600), 60)
cond do
days > 0 -> "#{days}d #{hours}h #{minutes}m"
hours > 0 -> "#{hours}h #{minutes}m"
true -> "#{minutes}m"
end
end
defp format_uptime(_), do: "Unknown"
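The div/rem arithmetic above can be checked with a standalone copy of the function (module name here is hypothetical, for illustration only):

```elixir
# Standalone check of the uptime arithmetic used in format_uptime/1.
defmodule UptimeCheck do
  def format_uptime(seconds) when is_integer(seconds) do
    days = div(seconds, 86_400)
    hours = div(rem(seconds, 86_400), 3600)
    minutes = div(rem(seconds, 3600), 60)

    cond do
      days > 0 -> "#{days}d #{hours}h #{minutes}m"
      hours > 0 -> "#{hours}h #{minutes}m"
      true -> "#{minutes}m"
    end
  end
end

# 90_061 s = 1 day + 1 hour + 61 s, so the 61 leftover seconds
# contribute 1 minute:
IO.puts(UptimeCheck.format_uptime(90_061))  # "1d 1h 1m"
```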
attr :title, :string, required: true
attr :icon, :string, required: true
attr :data, :map, required: true
attr :type, :atom, required: true
defp metric_card(assigns) do
~H"""
<div class="bg-white border border-zinc-200 rounded-lg p-4">
<div class="flex items-center space-x-2 mb-3">
<.icon name={@icon} class="h-5 w-5 text-zinc-600" />
<h4 class="font-medium text-zinc-900"><%= @title %></h4>
</div>
<%= case @type do %>
<% :load_average -> %>
<.load_average_display data={@data} />
<% :memory -> %>
<.memory_display data={@data} />
<% :disk -> %>
<.disk_display data={@data} />
<% :gpu -> %>
<.gpu_display data={@data} />
<% :network -> %>
<.network_display data={@data} />
<% :temperature -> %>
<.temperature_display data={@data} />
<% :processes -> %>
<.processes_display data={@data} />
<% _ -> %>
<p class="text-sm text-zinc-500">No data available</p>
<% end %>
</div>
"""
end
defp load_average_display(assigns) do
~H"""
<%= if @data do %>
<div class="space-y-2">
<div class="flex justify-between items-center">
<span class="text-sm text-zinc-600">1 min</span>
<span class="font-mono text-sm"><%= format_float(@data["avg1"]) %></span>
</div>
<.progress_bar value={@data["avg1"]} max={4.0} color={load_color(@data["avg1"])} />
<div class="flex justify-between items-center">
<span class="text-sm text-zinc-600">5 min</span>
<span class="font-mono text-sm"><%= format_float(@data["avg5"]) %></span>
</div>
<.progress_bar value={@data["avg5"]} max={4.0} color={load_color(@data["avg5"])} />
<div class="flex justify-between items-center">
<span class="text-sm text-zinc-600">15 min</span>
<span class="font-mono text-sm"><%= format_float(@data["avg15"]) %></span>
</div>
<.progress_bar value={@data["avg15"]} max={4.0} color={load_color(@data["avg15"])} />
</div>
<% else %>
<p class="text-sm text-zinc-500">No load data available</p>
<% end %>
"""
end
defp memory_display(assigns) do
~H"""
<%= if @data && @data["total_kb"] do %>
<div class="space-y-2">
<div class="flex justify-between items-center">
<span class="text-sm text-zinc-600">Used</span>
<span class="font-mono text-sm"><%= @data["used_percent"] %>%</span>
</div>
<.progress_bar value={@data["used_percent"]} max={100} color={memory_color(@data["used_percent"])} />
<div class="text-xs text-zinc-500 space-y-1">
<div class="flex justify-between">
<span>Total:</span>
<span><%= format_kb(@data["total_kb"]) %></span>
</div>
<div class="flex justify-between">
<span>Used:</span>
<span><%= format_kb(@data["used_kb"]) %></span>
</div>
<div class="flex justify-between">
<span>Available:</span>
<span><%= format_kb(@data["available_kb"]) %></span>
</div>
</div>
</div>
<% else %>
<p class="text-sm text-zinc-500">No memory data available</p>
<% end %>
"""
end
defp disk_display(assigns) do
~H"""
<%= if @data && @data["disks"] do %>
<div class="space-y-3">
<%= for disk <- @data["disks"] do %>
<div class="space-y-1">
<div class="flex justify-between items-center">
<span class="text-xs text-zinc-600 truncate"><%= disk["mounted_on"] %></span>
<span class="font-mono text-xs"><%= disk["use_percent"] %>%</span>
</div>
<.progress_bar value={disk["use_percent"]} max={100} color={disk_color(disk["use_percent"])} />
<div class="flex justify-between text-xs text-zinc-500">
<span><%= disk["used"] %> used</span>
<span><%= disk["available"] %> free</span>
</div>
</div>
<% end %>
</div>
<% else %>
<p class="text-sm text-zinc-500">No disk data available</p>
<% end %>
"""
end
attr :value, :any, required: true
attr :max, :any, required: true
attr :color, :string, default: "bg-blue-500"
defp progress_bar(assigns) do
# Guard against nil values (e.g. a missing utilization reading) before dividing.
assigns = assign(assigns, :percentage, min((assigns.value || 0) / assigns.max * 100, 100))
~H"""
<div class="w-full bg-zinc-200 rounded-full h-2">
<div
class={"h-2 rounded-full transition-all duration-300 #{@color}"}
style={"width: #{@percentage}%"}
>
</div>
</div>
"""
end
# Helper functions for formatting and colors
defp format_float(nil), do: "N/A"
defp format_float(value) when is_float(value), do: :erlang.float_to_binary(value, decimals: 2)
defp format_float(value), do: to_string(value)
defp format_kb(nil), do: "N/A"
defp format_kb(kb) when is_integer(kb) do
cond do
kb >= 1_048_576 -> "#{Float.round(kb / 1_048_576, 1)} GB"
kb >= 1_024 -> "#{Float.round(kb / 1_024, 1)} MB"
true -> "#{kb} KB"
end
end
defp load_color(load) when is_float(load) do
cond do
load >= 2.0 -> "bg-red-500"
load >= 1.0 -> "bg-yellow-500"
true -> "bg-green-500"
end
end
defp load_color(_), do: "bg-zinc-400"
defp memory_color(percent) when is_float(percent) do
cond do
percent >= 90 -> "bg-red-500"
percent >= 75 -> "bg-yellow-500"
true -> "bg-blue-500"
end
end
defp memory_color(_), do: "bg-zinc-400"
defp disk_color(percent) when is_integer(percent) do
cond do
percent >= 90 -> "bg-red-500"
percent >= 80 -> "bg-yellow-500"
true -> "bg-green-500"
end
end
defp disk_color(_), do: "bg-zinc-400"
# GPU Display Component
defp gpu_display(assigns) do
~H"""
<%= if @data do %>
<div class="space-y-3">
<!-- NVIDIA GPUs -->
<%= if @data["nvidia"] && length(@data["nvidia"]) > 0 do %>
<div class="text-xs text-zinc-600 font-medium mb-2">NVIDIA</div>
<%= for gpu <- @data["nvidia"] do %>
<div class="space-y-1">
<div class="flex justify-between items-center">
<span class="text-xs text-zinc-600 truncate"><%= gpu["name"] %></span>
<span class="font-mono text-xs"><%= gpu["utilization_percent"] %>%</span>
</div>
<.progress_bar value={gpu["utilization_percent"]} max={100} color={gpu_color(gpu["utilization_percent"])} />
<div class="flex justify-between text-xs text-zinc-500">
<span><%= gpu["temperature_c"] %>°C</span>
<span><%= format_mb(gpu["memory_used_mb"]) %>/<%= format_mb(gpu["memory_total_mb"]) %></span>
</div>
</div>
<% end %>
<% end %>
<!-- AMD GPUs -->
<%= if @data["amd"] && length(@data["amd"]) > 0 do %>
<div class="text-xs text-zinc-600 font-medium mb-2">AMD</div>
<%= for gpu <- @data["amd"] do %>
<div class="space-y-1">
<div class="flex justify-between items-center">
<span class="text-xs text-zinc-600 truncate"><%= gpu["name"] %></span>
<span class="font-mono text-xs"><%= gpu["utilization_percent"] || "N/A" %>%</span>
</div>
<.progress_bar value={gpu["utilization_percent"] || 0} max={100} color={gpu_color(gpu["utilization_percent"])} />
<div class="text-xs text-zinc-500">
<span><%= format_float(gpu["temperature_c"]) %>°C</span>
</div>
</div>
<% end %>
<% end %>
<%= if (length(@data["nvidia"] || []) + length(@data["amd"] || [])) == 0 do %>
<p class="text-sm text-zinc-500">No GPUs detected</p>
<% end %>
</div>
<% else %>
<p class="text-sm text-zinc-500">No GPU data available</p>
<% end %>
"""
end
# Network Display Component
defp network_display(assigns) do
~H"""
<%= if @data && length(@data) > 0 do %>
<div class="space-y-3">
<%= for interface <- Enum.take(@data, 3) do %>
<div class="space-y-1">
<div class="flex justify-between items-center">
<span class="text-xs text-zinc-600 font-medium"><%= interface["interface"] %></span>
</div>
<div class="grid grid-cols-2 gap-2 text-xs">
<div class="text-zinc-500">
<span class="text-green-600">↓</span> <%= format_bytes(interface["rx_bytes"]) %>
</div>
<div class="text-zinc-500">
<span class="text-blue-600">↑</span> <%= format_bytes(interface["tx_bytes"]) %>
</div>
</div>
<%= if (interface["rx_errors"] || 0) + (interface["tx_errors"] || 0) > 0 do %>
<div class="text-xs text-red-500">
Errors: RX <%= interface["rx_errors"] %>, TX <%= interface["tx_errors"] %>
</div>
<% end %>
</div>
<% end %>
</div>
<% else %>
<p class="text-sm text-zinc-500">No network interfaces</p>
<% end %>
"""
end
# Temperature Display Component
defp temperature_display(assigns) do
~H"""
<%= if @data do %>
<div class="space-y-3">
<!-- CPU Temperature -->
<%= if @data["cpu"] do %>
<div class="space-y-1">
<div class="flex justify-between items-center">
<span class="text-xs text-zinc-600">CPU</span>
<span class={"font-mono text-xs text-#{temp_color(@data["cpu"])}"}><%= format_float(@data["cpu"]) %>°C</span>
</div>
</div>
<% end %>
<!-- Sensor Data -->
<%= if @data["sensors"] && map_size(@data["sensors"]) > 0 do %>
<%= for {chip_name, temps} <- Enum.take(@data["sensors"], 3) do %>
<div class="text-xs text-zinc-600 font-medium"><%= chip_name %></div>
<%= for {sensor, temp} <- Enum.take(temps, 2) do %>
<div class="flex justify-between items-center">
<span class="text-xs text-zinc-500"><%= sensor %></span>
<span class={"font-mono text-xs text-#{temp_color(temp)}"}><%= format_float(temp) %>°C</span>
</div>
<% end %>
<% end %>
<% end %>
<%= if !@data["cpu"] && (!@data["sensors"] || map_size(@data["sensors"]) == 0) do %>
<p class="text-sm text-zinc-500">No temperature sensors</p>
<% end %>
</div>
<% else %>
<p class="text-sm text-zinc-500">No temperature data</p>
<% end %>
"""
end
# Processes Display Component
defp processes_display(assigns) do
~H"""
<%= if @data && length(@data) > 0 do %>
<div class="space-y-2">
<%= for process <- Enum.take(@data, 8) do %>
<div class="flex justify-between items-center text-xs">
<div class="flex-1 min-w-0">
<div class="truncate font-mono text-zinc-700"><%= process["command"] %></div>
<div class="text-zinc-500"><%= process["user"] %> (PID <%= process["pid"] %>)</div>
</div>
<div class="text-right ml-2">
<div class="font-mono text-zinc-700"><%= format_float(process["cpu_percent"]) %>%</div>
<div class="text-zinc-500"><%= format_float(process["memory_percent"]) %>%</div>
</div>
</div>
<% end %>
</div>
<% else %>
<p class="text-sm text-zinc-500">No process data</p>
<% end %>
"""
end
# Additional helper functions
defp format_mb(nil), do: "N/A"
defp format_mb(mb) when is_integer(mb) do
cond do
mb >= 1024 -> "#{Float.round(mb / 1024, 1)} GB"
true -> "#{mb} MB"
end
end
defp format_bytes(bytes) when is_integer(bytes) do
cond do
bytes >= 1_073_741_824 -> "#{Float.round(bytes / 1_073_741_824, 1)} GB"
bytes >= 1_048_576 -> "#{Float.round(bytes / 1_048_576, 1)} MB"
bytes >= 1_024 -> "#{Float.round(bytes / 1_024, 1)} KB"
true -> "#{bytes} B"
end
end
defp format_bytes(_), do: "N/A"
defp gpu_color(util) when is_integer(util) do
cond do
util >= 80 -> "bg-red-500"
util >= 50 -> "bg-yellow-500"
true -> "bg-green-500"
end
end
defp gpu_color(_), do: "bg-zinc-400"
defp temp_color(temp) when is_number(temp) do
cond do
temp >= 80 -> "red-600"
temp >= 70 -> "yellow-600"
temp >= 60 -> "yellow-500"
true -> "green-600"
end
end
defp temp_color(_), do: "zinc-500"
end

View File

@ -1,45 +0,0 @@
defmodule DashboardWeb.Router do
use DashboardWeb, :router
pipeline :browser do
plug :accepts, ["html"]
plug :fetch_session
plug :fetch_live_flash
plug :put_root_layout, html: {DashboardWeb.Layouts, :root}
plug :protect_from_forgery
plug :put_secure_browser_headers
end
pipeline :api do
plug :accepts, ["json"]
end
scope "/", DashboardWeb do
pipe_through :browser
get "/", PageController, :home
live "/hosts", HostsLive, :index
end
# Other scopes may use custom stacks.
# scope "/api", DashboardWeb do
# pipe_through :api
# end
# Enable LiveDashboard and Swoosh mailbox preview in development
if Application.compile_env(:dashboard, :dev_routes) do
# If you want to use the LiveDashboard in production, you should put
# it behind authentication and allow only admins to access it.
# If your application does not have an admins-only section yet,
# you can use Plug.BasicAuth to set up some basic authentication
# as long as you are also using SSL (which you should anyway).
import Phoenix.LiveDashboard.Router
scope "/dev" do
pipe_through :browser
live_dashboard "/dashboard", metrics: DashboardWeb.Telemetry
forward "/mailbox", Plug.Swoosh.MailboxPreview
end
end
end
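The comment above recommends putting LiveDashboard behind authentication in production. As a minimal sketch of that advice, assuming `Plug.BasicAuth` and an environment variable for the password (both are assumptions, not part of this diff):

```elixir
# Hypothetical router fragment: basic auth in front of LiveDashboard.
# Credentials come from the environment, not literals, and this should
# only be used over SSL as the comment notes.
pipeline :admins_only do
  plug :basic_auth
end

defp basic_auth(conn, _opts) do
  Plug.BasicAuth.basic_auth(conn,
    username: "admin",
    password: System.fetch_env!("DASHBOARD_PASSWORD")
  )
end

scope "/dev" do
  pipe_through [:browser, :admins_only]
  live_dashboard "/dashboard", metrics: DashboardWeb.Telemetry
end
```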

View File

@ -1,70 +0,0 @@
defmodule DashboardWeb.Telemetry do
use Supervisor
import Telemetry.Metrics
def start_link(arg) do
Supervisor.start_link(__MODULE__, arg, name: __MODULE__)
end
@impl true
def init(_arg) do
children = [
# Telemetry poller will execute the given period measurements
# every 10_000ms. Learn more here: https://hexdocs.pm/telemetry_metrics
{:telemetry_poller, measurements: periodic_measurements(), period: 10_000}
# Add reporters as children of your supervision tree.
# {Telemetry.Metrics.ConsoleReporter, metrics: metrics()}
]
Supervisor.init(children, strategy: :one_for_one)
end
def metrics do
[
# Phoenix Metrics
summary("phoenix.endpoint.start.system_time",
unit: {:native, :millisecond}
),
summary("phoenix.endpoint.stop.duration",
unit: {:native, :millisecond}
),
summary("phoenix.router_dispatch.start.system_time",
tags: [:route],
unit: {:native, :millisecond}
),
summary("phoenix.router_dispatch.exception.duration",
tags: [:route],
unit: {:native, :millisecond}
),
summary("phoenix.router_dispatch.stop.duration",
tags: [:route],
unit: {:native, :millisecond}
),
summary("phoenix.socket_connected.duration",
unit: {:native, :millisecond}
),
sum("phoenix.socket_drain.count"),
summary("phoenix.channel_joined.duration",
unit: {:native, :millisecond}
),
summary("phoenix.channel_handled_in.duration",
tags: [:event],
unit: {:native, :millisecond}
),
# VM Metrics
summary("vm.memory.total", unit: {:byte, :kilobyte}),
summary("vm.total_run_queue_lengths.total"),
summary("vm.total_run_queue_lengths.cpu"),
summary("vm.total_run_queue_lengths.io")
]
end
defp periodic_measurements do
[
# A module, function and arguments to be invoked periodically.
# This function must call :telemetry.execute/3 and a metric must be added above.
# {DashboardWeb, :count_users, []}
]
end
end
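The `periodic_measurements/0` comment says each entry must call `:telemetry.execute/3` and have a matching metric declared in `metrics/0`. As a hedged sketch (module, event name, and the host count are all hypothetical), the wiring looks like:

```elixir
# Hypothetical periodic measurement: the poller invokes this MFA every
# 10s, and the emitted event must match a metric declared in metrics/0.
defmodule Dashboard.Measurements do
  def dispatch_host_count do
    # In a real app the count would be read from wherever host state lives.
    :telemetry.execute([:dashboard, :hosts], %{count: 3}, %{})
  end
end

# In periodic_measurements/0:
#   {Dashboard.Measurements, :dispatch_host_count, []}
# In metrics/0:
#   last_value("dashboard.hosts.count")
```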

View File

@ -1,80 +0,0 @@
defmodule Dashboard.MixProject do
use Mix.Project
def project do
[
app: :dashboard,
version: "0.1.0",
elixir: "~> 1.14",
elixirc_paths: elixirc_paths(Mix.env()),
start_permanent: Mix.env() == :prod,
aliases: aliases(),
deps: deps()
]
end
# Configuration for the OTP application.
#
# Type `mix help compile.app` for more information.
def application do
[
mod: {Dashboard.Application, []},
extra_applications: [:logger, :runtime_tools]
]
end
# Specifies which paths to compile per environment.
defp elixirc_paths(:test), do: ["lib", "test/support"]
defp elixirc_paths(_), do: ["lib"]
# Specifies your project dependencies.
#
# Type `mix help deps` for examples and options.
defp deps do
[
{:phoenix, "~> 1.7.21"},
{:phoenix_html, "~> 4.1"},
{:phoenix_live_reload, "~> 1.2", only: :dev},
{:phoenix_live_view, "~> 1.0"},
{:floki, ">= 0.30.0", only: :test},
{:phoenix_live_dashboard, "~> 0.8.3"},
{:esbuild, "~> 0.8", runtime: Mix.env() == :dev},
{:tailwind, "~> 0.2.0", runtime: Mix.env() == :dev},
{:heroicons,
github: "tailwindlabs/heroicons",
tag: "v2.1.1",
sparse: "optimized",
app: false,
compile: false,
depth: 1},
{:swoosh, "~> 1.5"},
{:finch, "~> 0.13"},
{:telemetry_metrics, "~> 1.0"},
{:telemetry_poller, "~> 1.0"},
{:gettext, "~> 0.26"},
{:jason, "~> 1.2"},
{:dns_cluster, "~> 0.1.1"},
{:bandit, "~> 1.5"},
{:tortoise, "~> 0.9.5"}
]
end
# Aliases are shortcuts or tasks specific to the current project.
# For example, to install project dependencies and perform other setup tasks, run:
#
# $ mix setup
#
# See the documentation for `Mix` for more info on aliases.
defp aliases do
[
setup: ["deps.get", "assets.setup", "assets.build"],
"assets.setup": ["tailwind.install --if-missing", "esbuild.install --if-missing"],
"assets.build": ["tailwind dashboard", "esbuild dashboard"],
"assets.deploy": [
"tailwind dashboard --minify",
"esbuild dashboard --minify",
"phx.digest"
]
]
end
end

View File

@ -1,37 +0,0 @@
%{
"bandit": {:hex, :bandit, "1.7.0", "d1564f30553c97d3e25f9623144bb8df11f3787a26733f00b21699a128105c0c", [:mix], [{:hpax, "~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}, {:plug, "~> 1.18", [hex: :plug, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:thousand_island, "~> 1.0", [hex: :thousand_island, repo: "hexpm", optional: false]}, {:websock, "~> 0.5", [hex: :websock, repo: "hexpm", optional: false]}], "hexpm", "3e2f7a98c7a11f48d9d8c037f7177cd39778e74d55c7af06fe6227c742a8168a"},
"castore": {:hex, :castore, "1.0.14", "4582dd7d630b48cf5e1ca8d3d42494db51e406b7ba704e81fbd401866366896a", [:mix], [], "hexpm", "7bc1b65249d31701393edaaac18ec8398d8974d52c647b7904d01b964137b9f4"},
"dns_cluster": {:hex, :dns_cluster, "0.1.3", "0bc20a2c88ed6cc494f2964075c359f8c2d00e1bf25518a6a6c7fd277c9b0c66", [:mix], [], "hexpm", "46cb7c4a1b3e52c7ad4cbe33ca5079fbde4840dedeafca2baf77996c2da1bc33"},
"esbuild": {:hex, :esbuild, "0.10.0", "b0aa3388a1c23e727c5a3e7427c932d89ee791746b0081bbe56103e9ef3d291f", [:mix], [{:jason, "~> 1.4", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "468489cda427b974a7cc9f03ace55368a83e1a7be12fba7e30969af78e5f8c70"},
"expo": {:hex, :expo, "1.1.0", "f7b9ed7fb5745ebe1eeedf3d6f29226c5dd52897ac67c0f8af62a07e661e5c75", [:mix], [], "hexpm", "fbadf93f4700fb44c331362177bdca9eeb8097e8b0ef525c9cc501cb9917c960"},
"file_system": {:hex, :file_system, "1.1.0", "08d232062284546c6c34426997dd7ef6ec9f8bbd090eb91780283c9016840e8f", [:mix], [], "hexpm", "bfcf81244f416871f2a2e15c1b515287faa5db9c6bcf290222206d120b3d43f6"},
"finch": {:hex, :finch, "0.20.0", "5330aefb6b010f424dcbbc4615d914e9e3deae40095e73ab0c1bb0968933cadf", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.6.2 or ~> 1.7", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.1", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "2658131a74d051aabfcba936093c903b8e89da9a1b63e430bee62045fa9b2ee2"},
"floki": {:hex, :floki, "0.38.0", "62b642386fa3f2f90713f6e231da0fa3256e41ef1089f83b6ceac7a3fd3abf33", [:mix], [], "hexpm", "a5943ee91e93fb2d635b612caf5508e36d37548e84928463ef9dd986f0d1abd9"},
"gen_state_machine": {:hex, :gen_state_machine, "3.0.0", "1e57f86a494e5c6b14137ebef26a7eb342b3b0070c7135f2d6768ed3f6b6cdff", [:mix], [], "hexpm", "0a59652574bebceb7309f6b749d2a41b45fdeda8dbb4da0791e355dd19f0ed15"},
"gettext": {:hex, :gettext, "0.26.2", "5978aa7b21fada6deabf1f6341ddba50bc69c999e812211903b169799208f2a8", [:mix], [{:expo, "~> 0.5.1 or ~> 1.0", [hex: :expo, repo: "hexpm", optional: false]}], "hexpm", "aa978504bcf76511efdc22d580ba08e2279caab1066b76bb9aa81c4a1e0a32a5"},
"heroicons": {:git, "https://github.com/tailwindlabs/heroicons.git", "88ab3a0d790e6a47404cba02800a6b25d2afae50", [tag: "v2.1.1", sparse: "optimized", depth: 1]},
"hpax": {:hex, :hpax, "1.0.3", "ed67ef51ad4df91e75cc6a1494f851850c0bd98ebc0be6e81b026e765ee535aa", [:mix], [], "hexpm", "8eab6e1cfa8d5918c2ce4ba43588e894af35dbd8e91e6e55c817bca5847df34a"},
"jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
"mime": {:hex, :mime, "2.0.7", "b8d739037be7cd402aee1ba0306edfdef982687ee7e9859bee6198c1e7e2f128", [:mix], [], "hexpm", "6171188e399ee16023ffc5b76ce445eb6d9672e2e241d2df6050f3c771e80ccd"},
"mint": {:hex, :mint, "1.7.1", "113fdb2b2f3b59e47c7955971854641c61f378549d73e829e1768de90fc1abf1", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1 or ~> 0.2.0 or ~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", "fceba0a4d0f24301ddee3024ae116df1c3f4bb7a563a731f45fdfeb9d39a231b"},
"nimble_options": {:hex, :nimble_options, "1.1.1", "e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"},
"nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"},
"phoenix": {:hex, :phoenix, "1.7.21", "14ca4f1071a5f65121217d6b57ac5712d1857e40a0833aff7a691b7870fc9a3b", [:mix], [{:castore, ">= 0.0.0", [hex: :castore, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix_pubsub, "~> 2.1", [hex: :phoenix_pubsub, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:plug_cowboy, "~> 2.7", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:plug_crypto, "~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:websock_adapter, "~> 0.5.3", [hex: :websock_adapter, repo: "hexpm", optional: false]}], "hexpm", "336dce4f86cba56fed312a7d280bf2282c720abb6074bdb1b61ec8095bdd0bc9"},
"phoenix_html": {:hex, :phoenix_html, "4.2.1", "35279e2a39140068fc03f8874408d58eef734e488fc142153f055c5454fd1c08", [:mix], [], "hexpm", "cff108100ae2715dd959ae8f2a8cef8e20b593f8dfd031c9cba92702cf23e053"},
"phoenix_live_dashboard": {:hex, :phoenix_live_dashboard, "0.8.7", "405880012cb4b706f26dd1c6349125bfc903fb9e44d1ea668adaf4e04d4884b7", [:mix], [{:ecto, "~> 3.6.2 or ~> 3.7", [hex: :ecto, repo: "hexpm", optional: true]}, {:ecto_mysql_extras, "~> 0.5", [hex: :ecto_mysql_extras, repo: "hexpm", optional: true]}, {:ecto_psql_extras, "~> 0.7", [hex: :ecto_psql_extras, repo: "hexpm", optional: true]}, {:ecto_sqlite3_extras, "~> 1.1.7 or ~> 1.2.0", [hex: :ecto_sqlite3_extras, repo: "hexpm", optional: true]}, {:mime, "~> 1.6 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:phoenix_live_view, "~> 0.19 or ~> 1.0", [hex: :phoenix_live_view, repo: "hexpm", optional: false]}, {:telemetry_metrics, "~> 0.6 or ~> 1.0", [hex: :telemetry_metrics, repo: "hexpm", optional: false]}], "hexpm", "3a8625cab39ec261d48a13b7468dc619c0ede099601b084e343968309bd4d7d7"},
"phoenix_live_reload": {:hex, :phoenix_live_reload, "1.6.0", "2791fac0e2776b640192308cc90c0dbcf67843ad51387ed4ecae2038263d708d", [:mix], [{:file_system, "~> 0.2.10 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:phoenix, "~> 1.4", [hex: :phoenix, repo: "hexpm", optional: false]}], "hexpm", "b3a1fa036d7eb2f956774eda7a7638cf5123f8f2175aca6d6420a7f95e598e1c"},
"phoenix_live_view": {:hex, :phoenix_live_view, "1.1.2", "af6f090e3dc7d5ff41de10aa1039e0543e8151f99afa44097a832bcb139790d8", [:mix], [{:igniter, ">= 0.6.16 and < 1.0.0-0", [hex: :igniter, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:lazy_html, "~> 0.1.0", [hex: :lazy_html, repo: "hexpm", optional: true]}, {:phoenix, "~> 1.6.15 or ~> 1.7.0 or ~> 1.8.0-rc", [hex: :phoenix, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 3.3 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.15", [hex: :plug, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "54b2e4a31b8689a1604b3a2e0b1d54bb89e9476022c9ebbe585e9dd800674965"},
"phoenix_pubsub": {:hex, :phoenix_pubsub, "2.1.3", "3168d78ba41835aecad272d5e8cd51aa87a7ac9eb836eabc42f6e57538e3731d", [:mix], [], "hexpm", "bba06bc1dcfd8cb086759f0edc94a8ba2bc8896d5331a1e2c2902bf8e36ee502"},
"phoenix_template": {:hex, :phoenix_template, "1.0.4", "e2092c132f3b5e5b2d49c96695342eb36d0ed514c5b252a77048d5969330d639", [:mix], [{:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}], "hexpm", "2c0c81f0e5c6753faf5cca2f229c9709919aba34fab866d3bc05060c9c444206"},
"plug": {:hex, :plug, "1.18.1", "5067f26f7745b7e31bc3368bc1a2b818b9779faa959b49c934c17730efc911cf", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "57a57db70df2b422b564437d2d33cf8d33cd16339c1edb190cd11b1a3a546cc2"},
"plug_crypto": {:hex, :plug_crypto, "2.1.1", "19bda8184399cb24afa10be734f84a16ea0a2bc65054e23a62bb10f06bc89491", [:mix], [], "hexpm", "6470bce6ffe41c8bd497612ffde1a7e4af67f36a15eea5f921af71cf3e11247c"},
"swoosh": {:hex, :swoosh, "1.19.5", "5abd71be78302ba21be56a2b68d05c9946ff1f1bd254f949efef09d253b771ac", [:mix], [{:bandit, ">= 1.0.0", [hex: :bandit, repo: "hexpm", optional: true]}, {:cowboy, "~> 1.1 or ~> 2.4", [hex: :cowboy, repo: "hexpm", optional: true]}, {:ex_aws, "~> 2.1", [hex: :ex_aws, repo: "hexpm", optional: true]}, {:finch, "~> 0.6", [hex: :finch, repo: "hexpm", optional: true]}, {:gen_smtp, "~> 0.13 or ~> 1.0", [hex: :gen_smtp, repo: "hexpm", optional: true]}, {:hackney, "~> 1.9", [hex: :hackney, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:mail, "~> 0.2", [hex: :mail, repo: "hexpm", optional: true]}, {:mime, "~> 1.1 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mua, "~> 0.2.3", [hex: :mua, repo: "hexpm", optional: true]}, {:multipart, "~> 0.4", [hex: :multipart, repo: "hexpm", optional: true]}, {:plug, "~> 1.9", [hex: :plug, repo: "hexpm", optional: true]}, {:plug_cowboy, ">= 1.0.0", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:req, "~> 0.5.10 or ~> 0.6 or ~> 1.0", [hex: :req, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "c953f51ee0a8b237e0f4307c9cefd3eb1eb751c35fcdda2a8bccb991766473be"},
"tailwind": {:hex, :tailwind, "0.2.4", "5706ec47182d4e7045901302bf3a333e80f3d1af65c442ba9a9eed152fb26c2e", [:mix], [{:castore, ">= 0.0.0", [hex: :castore, repo: "hexpm", optional: false]}], "hexpm", "c6e4a82b8727bab593700c998a4d98cf3d8025678bfde059aed71d0000c3e463"},
"telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"},
"telemetry_metrics": {:hex, :telemetry_metrics, "1.1.0", "5bd5f3b5637e0abea0426b947e3ce5dd304f8b3bc6617039e2b5a008adc02f8f", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "e7b79e8ddfde70adb6db8a6623d1778ec66401f366e9a8f5dd0955c56bc8ce67"},
"telemetry_poller": {:hex, :telemetry_poller, "1.3.0", "d5c46420126b5ac2d72bc6580fb4f537d35e851cc0f8dbd571acf6d6e10f5ec7", [:rebar3], [{:telemetry, "~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "51f18bed7128544a50f75897db9974436ea9bfba560420b646af27a9a9b35211"},
"thousand_island": {:hex, :thousand_island, "1.3.14", "ad45ebed2577b5437582bcc79c5eccd1e2a8c326abf6a3464ab6c06e2055a34a", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "d0d24a929d31cdd1d7903a4fe7f2409afeedff092d277be604966cd6aa4307ef"},
"tortoise": {:hex, :tortoise, "0.9.9", "2e467570ef1d342d4de8fdc6ba3861f841054ab524080ec3d7052ee07c04501d", [:mix], [{:gen_state_machine, "~> 2.0 or ~> 3.0", [hex: :gen_state_machine, repo: "hexpm", optional: false]}], "hexpm", "4a316220b4b443c2497f42702f0c0616af3e4b2cbc6c150ebebb51657a773797"},
"websock": {:hex, :websock, "0.5.3", "2f69a6ebe810328555b6fe5c831a851f485e303a7c8ce6c5f675abeb20ebdadc", [:mix], [], "hexpm", "6105453d7fac22c712ad66fab1d45abdf049868f253cf719b625151460b8b453"},
"websock_adapter": {:hex, :websock_adapter, "0.5.8", "3b97dc94e407e2d1fc666b2fb9acf6be81a1798a2602294aac000260a7c4a47d", [:mix], [{:bandit, ">= 0.6.0", [hex: :bandit, repo: "hexpm", optional: true]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:plug_cowboy, "~> 2.6", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:websock, "~> 0.5", [hex: :websock, repo: "hexpm", optional: false]}], "hexpm", "315b9a1865552212b5f35140ad194e67ce31af45bcee443d4ecb96b5fd3f3782"},
}


@ -1,11 +0,0 @@
## `msgid`s in this file come from POT (.pot) files.
##
## Do not add, change, or remove `msgid`s manually here as
## they're tied to the ones in the corresponding POT file
## (with the same domain).
##
## Use `mix gettext.extract --merge` or `mix gettext.merge`
## to merge POT files into PO files.
msgid ""
msgstr ""
"Language: en\n"


@ -1,10 +0,0 @@
## This is a PO Template file.
##
## `msgid`s here are often extracted from source code.
## Add new translations manually only if they're dynamic
## translations that can't be statically extracted.
##
## Run `mix gettext.extract` to bring this file up to
## date. Leave `msgstr`s empty as changing them here has no
## effect: edit them in PO (`.po`) files instead.

Binary file not shown (was 152 B).


@ -1,6 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 71 48" fill="currentColor" aria-hidden="true">
<path
d="m26.371 33.477-.552-.1c-3.92-.729-6.397-3.1-7.57-6.829-.733-2.324.597-4.035 3.035-4.148 1.995-.092 3.362 1.055 4.57 2.39 1.557 1.72 2.984 3.558 4.514 5.305 2.202 2.515 4.797 4.134 8.347 3.634 3.183-.448 5.958-1.725 8.371-3.828.363-.316.761-.592 1.144-.886l-.241-.284c-2.027.63-4.093.841-6.205.735-3.195-.16-6.24-.828-8.964-2.582-2.486-1.601-4.319-3.746-5.19-6.611-.704-2.315.736-3.934 3.135-3.6.948.133 1.746.56 2.463 1.165.583.493 1.143 1.015 1.738 1.493 2.8 2.25 6.712 2.375 10.265-.068-5.842-.026-9.817-3.24-13.308-7.313-1.366-1.594-2.7-3.216-4.095-4.785-2.698-3.036-5.692-5.71-9.79-6.623C12.8-.623 7.745.14 2.893 2.361 1.926 2.804.997 3.319 0 4.149c.494 0 .763.006 1.032 0 2.446-.064 4.28 1.023 5.602 3.024.962 1.457 1.415 3.104 1.761 4.798.513 2.515.247 5.078.544 7.605.761 6.494 4.08 11.026 10.26 13.346 2.267.852 4.591 1.135 7.172.555ZM10.751 3.852c-.976.246-1.756-.148-2.56-.962 1.377-.343 2.592-.476 3.897-.528-.107.848-.607 1.306-1.336 1.49Zm32.002 37.924c-.085-.626-.62-.901-1.04-1.228-1.857-1.446-4.03-1.958-6.333-2-1.375-.026-2.735-.128-4.031-.61-.595-.22-1.26-.505-1.244-1.272.015-.78.693-1 1.31-1.184.505-.15 1.026-.247 1.6-.382-1.46-.936-2.886-1.065-4.787-.3-2.993 1.202-5.943 1.06-8.926-.017-1.684-.608-3.179-1.563-4.735-2.408l-.077.057c1.29 2.115 3.034 3.817 5.004 5.271 3.793 2.8 7.936 4.471 12.784 3.73A66.714 66.714 0 0 1 37 40.877c1.98-.16 3.866.398 5.753.899Zm-9.14-30.345c-.105-.076-.206-.266-.42-.069 1.745 2.36 3.985 4.098 6.683 5.193 4.354 1.767 8.773 2.07 13.293.51 3.51-1.21 6.033-.028 7.343 3.38.19-3.955-2.137-6.837-5.843-7.401-2.084-.318-4.01.373-5.962.94-5.434 1.575-10.485.798-15.094-2.553Zm27.085 15.425c.708.059 1.416.123 2.124.185-1.6-1.405-3.55-1.517-5.523-1.404-3.003.17-5.167 1.903-7.14 3.972-1.739 1.824-3.31 3.87-5.903 4.604.043.078.054.117.066.117.35.005.699.021 1.047.005 3.768-.17 7.317-.965 10.14-3.7.89-.86 1.685-1.817 2.544-2.71.716-.746 1.584-1.159 2.645-1.07Zm-8.753-4.67c-2.812.246-5.254 1.409-7.548 2.943-1.766 1.18-3.654 1.738-5.776 
1.37-.374-.066-.75-.114-1.124-.17l-.013.156c.135.07.265.151.405.207.354.14.702.308 1.07.395 4.083.971 7.992.474 11.516-1.803 2.221-1.435 4.521-1.707 7.013-1.336.252.038.503.083.756.107.234.022.479.255.795.003-2.179-1.574-4.526-2.096-7.094-1.872Zm-10.049-9.544c1.475.051 2.943-.142 4.486-1.059-.452.04-.643.04-.827.076-2.126.424-4.033-.04-5.733-1.383-.623-.493-1.257-.974-1.889-1.457-2.503-1.914-5.374-2.555-8.514-2.5.05.154.054.26.108.315 3.417 3.455 7.371 5.836 12.369 6.008Zm24.727 17.731c-2.114-2.097-4.952-2.367-7.578-.537 1.738.078 3.043.632 4.101 1.728a13 13 0 0 0 1.182 1.106c1.6 1.29 4.311 1.352 5.896.155-1.861-.726-1.861-.726-3.601-2.452Zm-21.058 16.06c-1.858-3.46-4.981-4.24-8.59-4.008a9.667 9.667 0 0 1 2.977 1.39c.84.586 1.547 1.311 2.243 2.055 1.38 1.473 3.534 2.376 4.962 2.07-.656-.412-1.238-.848-1.592-1.507Zl-.006.006-.036-.004.021.018.012.053Za.127.127 0 0 0 .015.043c.005.008.038 0 .058-.002Zl-.008.01.005.026.024.014Z"
fill="#FD4F00"
/>
</svg>

(was 3.0 KiB)


@ -1,5 +0,0 @@
# See https://www.robotstxt.org/robotstxt.html for documentation on how to use the robots.txt file
#
# To ban all spiders from the entire site uncomment the next two lines:
# User-agent: *
# Disallow: /


@ -1,14 +0,0 @@
defmodule DashboardWeb.ErrorHTMLTest do
use DashboardWeb.ConnCase, async: true
# Bring render_to_string/4 for testing custom views
import Phoenix.Template
test "renders 404.html" do
assert render_to_string(DashboardWeb.ErrorHTML, "404", "html", []) == "Not Found"
end
test "renders 500.html" do
assert render_to_string(DashboardWeb.ErrorHTML, "500", "html", []) == "Internal Server Error"
end
end


@ -1,12 +0,0 @@
defmodule DashboardWeb.ErrorJSONTest do
use DashboardWeb.ConnCase, async: true
test "renders 404" do
assert DashboardWeb.ErrorJSON.render("404.json", %{}) == %{errors: %{detail: "Not Found"}}
end
test "renders 500" do
assert DashboardWeb.ErrorJSON.render("500.json", %{}) ==
%{errors: %{detail: "Internal Server Error"}}
end
end


@ -1,8 +0,0 @@
defmodule DashboardWeb.PageControllerTest do
use DashboardWeb.ConnCase
test "GET /", %{conn: conn} do
conn = get(conn, ~p"/")
assert html_response(conn, 200) =~ "Peace of mind from prototype to production"
end
end


@ -1,37 +0,0 @@
defmodule DashboardWeb.ConnCase do
@moduledoc """
This module defines the test case to be used by
tests that require setting up a connection.
Such tests rely on `Phoenix.ConnTest` and also
import other functionality to make it easier
to build common data structures and query the data layer.
Finally, if the test case interacts with the database,
we enable the SQL sandbox, so changes done to the database
are reverted at the end of every test. If you are using
PostgreSQL, you can even run database tests asynchronously
by setting `use DashboardWeb.ConnCase, async: true`, although
this option is not recommended for other databases.
"""
use ExUnit.CaseTemplate
using do
quote do
# The default endpoint for testing
@endpoint DashboardWeb.Endpoint
use DashboardWeb, :verified_routes
# Import conveniences for testing with connections
import Plug.Conn
import Phoenix.ConnTest
import DashboardWeb.ConnCase
end
end
setup _tags do
{:ok, conn: Phoenix.ConnTest.build_conn()}
end
end


@ -1 +0,0 @@
ExUnit.start()

flake.lock generated

@ -20,16 +20,16 @@
   },
   "nixpkgs": {
     "locked": {
-      "lastModified": 1753939845,
-      "narHash": "sha256-K2ViRJfdVGE8tpJejs8Qpvvejks1+A4GQej/lBk5y7I=",
+      "lastModified": 1768395095,
+      "narHash": "sha256-ZhuYJbwbZT32QA95tSkXd9zXHcdZj90EzHpEXBMabaw=",
       "owner": "NixOS",
       "repo": "nixpkgs",
-      "rev": "94def634a20494ee057c76998843c015909d6311",
+      "rev": "13868c071cc73a5e9f610c47d7bb08e5da64fdd5",
       "type": "github"
     },
     "original": {
       "owner": "NixOS",
-      "ref": "nixos-unstable",
+      "ref": "nixpkgs-unstable",
       "repo": "nixpkgs",
       "type": "github"
     }


@ -1,82 +1,59 @@
 {
-  description = "Elixir system monitor daemon";
-  inputs = {
-    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
-    flake-utils.url = "github:numtide/flake-utils";
-  };
+  description = "Systant - System monitoring agent with MQTT and Home Assistant integration";
+  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
+  inputs.flake-utils.url = "github:numtide/flake-utils";
   outputs =
     {
       self,
       nixpkgs,
       flake-utils,
-      ...
     }:
+    {
+      # NixOS module (system-independent)
+      nixosModules.default = import ./nix/nixos-module.nix;
+      # Overlay to add systant to pkgs
+      overlays.default = final: prev: {
+        systant = final.callPackage ./nix/package.nix { src = self; };
+      };
+    }
+    //
     flake-utils.lib.eachDefaultSystem (
       system:
       let
         pkgs = import nixpkgs {
-          system = system;
+          inherit system;
           config.allowUnfree = true;
+          overlays = [ self.overlays.default ];
         };
       in
       {
+        packages = {
+          systant = pkgs.systant;
+          default = pkgs.systant;
+        };
+        apps.default = {
+          type = "app";
+          program = "${pkgs.systant}/bin/systant";
+        };
         devShells.default = pkgs.mkShell {
           buildInputs = with pkgs; [
-            # Elixir/Erlang for server
-            elixir
-            elixir-ls
-            erlang
-            # File watching for Phoenix live reload
-            inotifyTools
-            # Process management and task running
-            hivemind
-            just
-            # AI/Development tools
-            claude-code
-            # Node.js for Phoenix assets
-            nodejs_20
-            # Mosquito for MQTT support
-            mosquitto
+            bashInteractive
+            glibcLocales
+            git
+            bun
+            inotify-tools
           ];
           shellHook = ''
-            echo "Systant development environment loaded"
-            echo "Elixir: $(elixir --version | tail -1)"
-            echo "Node.js: $(node --version)"
-            echo ""
-            echo "Directories:"
-            echo "  server/    - Elixir systant daemon"
-            echo "  dashboard/ - Phoenix LiveView dashboard"
-            echo ""
-            echo "Commands:"
-            echo "  cd server && mix run --no-halt  - Run systant daemon"
-            echo "  cd dashboard && mix phx.server  - Run Phoenix dashboard"
+            export PROJECT_ROOT=$PWD
           '';
         };
-        packages = {
-          default = pkgs.callPackage ./nix/package.nix {
-            src = ./server;
-          };
-          systant = pkgs.callPackage ./nix/package.nix {
-            src = ./server;
-          };
-        };
-        apps = {
-          default = {
-            type = "app";
-            program = "${self.packages.${system}.default}/bin/systant";
-          };
-        };
       }
-    )
-    // {
-      nixosModules.default = import ./nix/nixos-module.nix;
-    };
+    );
 }

index.ts Normal file

@ -0,0 +1,130 @@
import yargs from "yargs";
import { hideBin } from "yargs/helpers";
import { loadConfig } from "./src/config";
import { connect } from "./src/mqtt";
import { createEntityManager } from "./src/entities";
const DEFAULT_CONFIG_PATH = "./systant.toml";
yargs(hideBin(process.argv))
.scriptName("systant")
.usage("$0 <cmd> [args]")
.command(
"run",
"Start the systant daemon",
(yargs) => {
return yargs.option("config", {
alias: "c",
type: "string",
default: DEFAULT_CONFIG_PATH,
describe: "Path to config file",
});
},
async (argv) => {
await run(argv.config);
}
)
.command(
"check",
"Check config and connectivity",
(yargs) => {
return yargs.option("config", {
alias: "c",
type: "string",
default: DEFAULT_CONFIG_PATH,
describe: "Path to config file",
});
},
async (argv) => {
await check(argv.config);
}
)
.command(
"once",
"Poll all entity states once, then exit",
(yargs) => {
return yargs.option("config", {
alias: "c",
type: "string",
default: DEFAULT_CONFIG_PATH,
describe: "Path to config file",
});
},
async (argv) => {
await once(argv.config);
}
)
.demandCommand(1, "\nError: You need to specify a command!")
.help()
.parse();
async function run(configPath: string): Promise<void> {
const config = await loadConfig(configPath);
console.log(`Starting systant on ${config.systant.hostname}`);
const mqtt = await connect(config, config.systant.hostname);
const entities = createEntityManager(config, mqtt);
await entities.start();
// Handle shutdown
const shutdown = async () => {
console.log("\nShutting down...");
entities.stop();
await mqtt.disconnect();
process.exit(0);
};
process.on("SIGINT", shutdown);
process.on("SIGTERM", shutdown);
console.log("Systant running. Press Ctrl+C to stop.");
}
async function check(configPath: string): Promise<void> {
console.log(`Checking config: ${configPath}`);
try {
const config = await loadConfig(configPath);
const entityCount = Object.keys(config.entities).length;
console.log("Config loaded successfully");
console.log(` MQTT broker: ${config.mqtt.broker}`);
console.log(` Entities: ${entityCount} configured (default interval: ${config.systant.defaultInterval}s)`);
console.log(` HA discovery: ${config.homeassistant.discovery}`);
console.log("\nTesting MQTT connection...");
const hostname = config.systant.hostname;
const mqtt = await connect(config, hostname);
console.log("MQTT connection successful");
await mqtt.disconnect();
console.log("\nAll checks passed!");
} catch (err) {
console.error("Check failed:", err instanceof Error ? err.message : err);
process.exit(1);
}
}
async function once(configPath: string): Promise<void> {
const config = await loadConfig(configPath);
const hostname = config.systant.hostname;
const mqtt = await connect(config, hostname);
const entities = createEntityManager(config, mqtt);
// Start will do initial poll of all entities
await entities.start();
// Wait a moment for the initial polls to complete
await new Promise((resolve) => setTimeout(resolve, 1000));
entities.stop();
await mqtt.disconnect();
console.log("Entity states published successfully");
}
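The `check` output above, together with the example in `nix/nixos-module.nix`, suggests the shape of the TOML file that `loadConfig` reads (default path `./systant.toml`). A minimal sketch under those assumptions; table and key names are inferred from the config accesses in the code (`config.systant.hostname`, `config.mqtt.broker`, `config.homeassistant.discovery`, `config.entities`), not confirmed against `src/config.ts`:

```toml
# Sketch of systant.toml; key names inferred from index.ts and the
# NixOS module example, not confirmed against src/config.ts.
[systant]
hostname = "myhost"
defaultInterval = 30   # seconds

[mqtt]
broker = "mqtt://localhost:1883"
topicPrefix = "systant"

[homeassistant]
discovery = true
discoveryPrefix = "homeassistant"

[entities.cpu_usage]
type = "sensor"
state_command = "awk '/^cpu / {u=$2+$4; t=$2+$4+$5; print int(u*100/t)}' /proc/stat"
unit = "%"
icon = "mdi:cpu-64-bit"
name = "CPU Usage"
```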


@ -1,33 +0,0 @@
# Systant development tasks
# Start both server and dashboard
dev:
hivemind
# Start just the server
server:
cd server && mix run --no-halt
# Start just the dashboard
dashboard:
cd dashboard && mix phx.server
# Install dependencies for both projects
deps:
cd server && mix deps.get
cd dashboard && mix deps.get
# Compile both projects
compile:
cd server && mix compile
cd dashboard && mix compile
# Run tests for both projects
test:
cd server && mix test
cd dashboard && mix test
# Clean both projects
clean:
cd server && mix clean
cd dashboard && mix clean


@ -1,100 +1,122 @@
-{
-  config,
-  lib,
-  pkgs,
-  ...
-}:
-with lib;
+{ config, lib, pkgs, ... }:
 let
-  cfg = config.services.systant;
+  cfg = config.systant;
+  settingsFormat = pkgs.formats.toml { };
 in
 {
-  options.services.systant = {
-    enable = mkEnableOption "Systant MQTT Daemon";
-    package = mkOption {
-      type = types.package;
-      default = pkgs.callPackage ./package.nix { src = ../server; };
-      description = "The systant package to use";
+  options.systant = {
+    enable = lib.mkEnableOption "systant system monitoring agent";
+    package = lib.mkOption {
+      type = lib.types.package;
+      default = pkgs.systant;
+      defaultText = lib.literalExpression "pkgs.systant";
+      description = "The systant package to use.";
     };
-    mqttHost = mkOption {
-      type = types.str;
-      default = "localhost";
-      description = "MQTT broker hostname";
-    };
-    mqttPort = mkOption {
-      type = types.int;
-      default = 1883;
-      description = "MQTT broker port";
-    };
-    mqttUsername = mkOption {
-      type = types.nullOr types.str;
+    configFile = lib.mkOption {
+      type = lib.types.nullOr lib.types.path;
       default = null;
-      description = "MQTT username (null for no auth)";
+      description = ''
+        Path to the systant configuration file (TOML).
+        If set, this takes precedence over the settings option.
+      '';
     };
-    mqttPassword = mkOption {
-      type = types.nullOr types.str;
-      default = null;
-      description = "MQTT password (null for no auth)";
+    settings = lib.mkOption {
+      type = settingsFormat.type;
+      default = { };
+      description = ''
+        Configuration for systant in Nix attribute set form.
+        Will be converted to TOML. Ignored if configFile is set.
+      '';
+      example = lib.literalExpression ''
+        {
+          mqtt = {
+            broker = "mqtt://localhost:1883";
+            topicPrefix = "systant";
+          };
+          entities = {
+            cpu_usage = {
+              type = "sensor";
+              state_command = "awk '/^cpu / {u=$2+$4; t=$2+$4+$5; print int(u*100/t)}' /proc/stat";
+              unit = "%";
+              icon = "mdi:cpu-64-bit";
+              name = "CPU Usage";
+            };
+          };
+          homeassistant = {
+            discovery = true;
+            discoveryPrefix = "homeassistant";
+          };
+        }
+      '';
     };
-    statsTopic = mkOption {
-      type = types.str;
-      default = "systant/${config.networking.hostName}/stats";
-      description = "MQTT topic for publishing stats";
-    };
-    commandTopic = mkOption {
-      type = types.str;
-      default = "systant/${config.networking.hostName}/commands";
-      description = "MQTT topic for receiving commands";
-    };
-    publishInterval = mkOption {
-      type = types.int;
-      default = 30000;
-      description = "Interval between stats publications (milliseconds)";
+    user = lib.mkOption {
+      type = lib.types.str;
+      default = "systant";
+      description = "User account under which systant runs.";
+    };
+    group = lib.mkOption {
+      type = lib.types.str;
+      default = "systant";
+      description = "Group under which systant runs.";
     };
   };
-  config = mkIf cfg.enable {
-    systemd.user.services.systant = {
-      description = "Systant MQTT Daemon";
-      after = [ "network.target" ];
-      wantedBy = [ "default.target" ];
-      environment = {
-        SYSTANT_MQTT_HOST = cfg.mqttHost;
-        SYSTANT_MQTT_PORT = toString cfg.mqttPort;
-        SYSTANT_MQTT_USERNAME = mkIf (cfg.mqttUsername != null) cfg.mqttUsername;
-        SYSTANT_MQTT_PASSWORD = mkIf (cfg.mqttPassword != null) cfg.mqttPassword;
-        SYSTANT_STATS_TOPIC = cfg.statsTopic;
-        SYSTANT_COMMAND_TOPIC = cfg.commandTopic;
-        SYSTANT_PUBLISH_INTERVAL = toString cfg.publishInterval;
-        # Override RELEASE_COOKIE to bypass file reading
-        RELEASE_COOKIE = "systant-bypass-cookie";
-        # Set log level to debug for troubleshooting
-        SYSTANT_LOG_LEVEL = "debug";
-        # Ensure we have the full system PATH including /run/current-system/sw/bin where grim lives
-        PATH = mkForce "/run/wrappers/bin:/run/current-system/sw/bin:/usr/bin:/bin";
-      };
+  config = lib.mkIf cfg.enable {
+    # Create systant user/group if using defaults
+    users.users.${cfg.user} = lib.mkIf (cfg.user == "systant") {
+      isSystemUser = true;
+      group = cfg.group;
+      description = "Systant service user";
+    };
+    users.groups.${cfg.group} = lib.mkIf (cfg.group == "systant") { };
+    # Generate config file from settings if configFile not provided
+    environment.etc."systant/config.toml" = lib.mkIf (cfg.configFile == null && cfg.settings != { }) {
+      source = settingsFormat.generate "systant-config.toml" cfg.settings;
+    };
+    systemd.services.systant = {
+      description = "Systant system monitoring agent";
+      wantedBy = [ "multi-user.target" ];
+      after = [ "network-online.target" ];
+      wants = [ "network-online.target" ];
       serviceConfig = {
-        Type = "exec";
-        ExecStart = "${cfg.package}/bin/systant start";
-        ExecStop = "${cfg.package}/bin/systant stop";
-        Restart = "always";
-        RestartSec = 5;
-        StandardOutput = "journal";
-        StandardError = "journal";
-        SyslogIdentifier = "systant";
-        WorkingDirectory = "${cfg.package}";
+        Type = "simple";
+        User = cfg.user;
+        Group = cfg.group;
+        ExecStart =
+          let
+            configPath =
+              if cfg.configFile != null
+              then cfg.configFile
+              else "/etc/systant/config.toml";
+          in
+          "${cfg.package}/bin/systant run --config ${configPath}";
+        Restart = "on-failure";
+        RestartSec = "5s";
+        # Hardening
+        NoNewPrivileges = true;
+        ProtectSystem = "strict";
+        ProtectHome = true;
+        PrivateTmp = true;
+        ProtectKernelTunables = true;
+        ProtectKernelModules = true;
+        ProtectControlGroups = true;
+        # Allow reading system metrics
+        ReadOnlyPaths = [
+          "/proc"
+          "/sys"
+        ];
       };
     };
   };


@ -1,46 +1,66 @@
 {
   lib,
-  beamPackages,
-  src,
+  stdenvNoCC,
+  stdenv,
+  bun,
+  cacert,
+  src, # passed from flake.nix
 }:
-beamPackages.mixRelease rec {
+let
+  # Fixed-output derivation to fetch npm dependencies
+  # Update the hash when bun.lock changes by running:
+  #   nix build .#systant 2>&1 | grep 'got:'
+  deps = stdenvNoCC.mkDerivation {
+    pname = "systant-deps";
+    version = "0.1.0";
+    inherit src;
+    buildPhase = ''
+      export HOME=$TMPDIR
+      bun install --frozen-lockfile
+    '';
+    installPhase = ''
+      cp -r node_modules $out
+    '';
+    nativeBuildInputs = [ bun cacert ];
+    outputHashMode = "recursive";
+    outputHashAlgo = "sha256";
+    # To update: nix build .#systant 2>&1 | grep 'got:'
+    outputHash = "sha256-hQ1ZzOFOHHeaAtyfCXxX6jpqB7poFLwavgMW8yMwaHw=";
+  };
+in
+stdenv.mkDerivation {
   pname = "systant";
   version = "0.1.0";
   inherit src;
-  # Disable distributed Erlang to avoid COOKIE requirement
-  postInstall = ''
-    # Create wrapper script that sets proper environment including COOKIE
-    mv $out/bin/systant $out/bin/.systant-wrapped
-    cat > $out/bin/systant << EOF
-    #!/bin/sh
-    export RELEASE_DISTRIBUTION=none
-    export RELEASE_NODE=nonode@nohost
-    export RELEASE_COOKIE=dummy_cookie_for_single_node
-    # Default to "start" command if no arguments provided
-    if [ \$# -eq 0 ]; then
-      exec "$out/bin/.systant-wrapped" start
-    else
-      exec "$out/bin/.systant-wrapped" "\$@"
-    fi
-    EOF
-    chmod +x $out/bin/systant
-  '';
-  # Mix dependencies will be automatically fetched and cached by Nix
-  mixFodDeps = beamPackages.fetchMixDeps {
-    pname = "systant-mix-deps";
-    inherit src version;
-    sha256 = "sha256-99aIYuSEO7V0Scgh6c4+FIStQpM2ccUvY1NwBArvhi8=";
-  };
+  nativeBuildInputs = [ bun ];
+  buildPhase = ''
+    export HOME=$TMPDIR
+    cp -r ${deps} node_modules
+    chmod -R u+w node_modules
+    bun build index.ts --compile --outfile systant
+  '';
+  installPhase = ''
+    mkdir -p $out/bin
+    cp systant $out/bin/systant
+  '';
+  # Bun's compiled binaries don't like being stripped
+  dontStrip = true;
   meta = with lib; {
-    description = "Systant - System stats MQTT daemon for monitoring system metrics";
+    description = "System monitoring agent with MQTT and Home Assistant integration";
     homepage = "https://git.ryanpandya.com/ryan/systant";
     license = licenses.mit;
-    maintainers = [ ];
     platforms = platforms.linux;
   };
 }

package.json Normal file

@ -0,0 +1,23 @@
{
"name": "systant",
"version": "0.1.0",
"module": "index.ts",
"devDependencies": {
"@types/bun": "latest",
"@types/yargs": "^17.0.35"
},
"peerDependencies": {
"typescript": "^5"
},
"private": true,
"scripts": {
"start": "bun run index.ts",
"dist": "bun build index.ts --compile --outfile dist/systant"
},
"type": "module",
"dependencies": {
"mqtt": "^5.14.1",
"smol-toml": "^1.6.0",
"yargs": "^18.0.0"
}
}


@ -1,4 +0,0 @@
# Used by "mix format"
[
inputs: ["{mix,.formatter}.exs", "{config,lib,test}/**/*.{ex,exs}"]
]


@ -1,5 +0,0 @@
import Config
config :logger, :console,
format: "$time $metadata[$level] $message\n",
metadata: [:request_id]


@ -1,18 +0,0 @@
import Config
# Get hostname for topic construction
hostname = case :inet.gethostname() do
{:ok, hostname} -> List.to_string(hostname)
_ -> "unknown"
end
# Runtime configuration that can use environment variables
config :systant, Systant.MqttClient,
host: System.get_env("SYSTANT_MQTT_HOST", "mqtt.home"),
port: String.to_integer(System.get_env("SYSTANT_MQTT_PORT", "1883")),
client_id: System.get_env("SYSTANT_CLIENT_ID", "systant"),
username: System.get_env("SYSTANT_MQTT_USERNAME"),
password: System.get_env("SYSTANT_MQTT_PASSWORD"),
stats_topic: System.get_env("SYSTANT_STATS_TOPIC", "systant/#{hostname}/stats"),
command_topic: System.get_env("SYSTANT_COMMAND_TOPIC", "systant/#{hostname}/commands"),
publish_interval: String.to_integer(System.get_env("SYSTANT_PUBLISH_INTERVAL", "30000"))


@ -1,18 +0,0 @@
defmodule Systant do
@moduledoc """
Documentation for `Systant`.
"""
@doc """
Hello world.
## Examples
iex> Systant.hello()
:world
"""
def hello do
:world
end
end


@ -1,19 +0,0 @@
defmodule Systant.Application do
# See https://hexdocs.pm/elixir/Application.html
# for more information on OTP Applications
@moduledoc false
use Application
@impl true
def start(_type, _args) do
children = [
{Systant.MqttClient, []}
]
# See https://hexdocs.pm/elixir/Supervisor.html
# for other strategies and supported options
opts = [strategy: :one_for_one, name: Systant.Supervisor]
Supervisor.start_link(children, opts)
end
end


@ -1,493 +0,0 @@
defmodule Systant.CommandExecutor do
@moduledoc """
Secure command execution system for Systant.
Executes only predefined commands from the configuration with strict validation,
parameter checking, timeouts, and comprehensive logging.
"""
require Logger
@doc """
Execute a command based on MQTT command message
"""
def execute_command(command_data, config) do
with {:ok, parsed_command} <- parse_command(command_data),
{:ok, command_config} <- find_command_config(parsed_command.trigger, config),
{:ok, validated_params} <- validate_parameters(parsed_command.params, command_config),
{:ok, final_command} <- build_command(command_config, validated_params) do
Logger.info(
"Executing command: #{command_config["name"]} with params: #{inspect(validated_params)}"
)
execute_system_command(final_command, command_config, parsed_command)
else
{:error, reason} ->
Logger.warning("Command execution failed: #{reason}")
{:error, reason}
end
end
@doc """
List all available commands from configuration
"""
def list_available_commands(config) do
commands_config = Systant.Config.get(config, ["commands"]) || %{}
if commands_config["enabled"] do
available = commands_config["available"] || []
Enum.map(available, fn cmd ->
%{
name: cmd["name"],
description: cmd["description"],
trigger: cmd["trigger"],
allowed_params: cmd["allowed_params"] || [],
timeout: cmd["timeout"] || 10,
detached: cmd["detached"] || false
}
end)
else
[]
end
end
# Private functions
defp parse_command(command_data) do
case command_data do
%{"command" => trigger} = data when is_binary(trigger) ->
{:ok,
%{
trigger: trigger,
params: data["params"] || [],
request_id: data["request_id"] || generate_request_id(),
timestamp: data["timestamp"] || DateTime.utc_now() |> DateTime.to_iso8601()
}}
_ ->
{:error,
"Invalid command format. Expected: {\"command\": \"trigger\", \"params\": [...]}"}
end
end
defp find_command_config(trigger, config) do
commands_config = Systant.Config.get(config, ["commands"]) || %{}
unless commands_config["enabled"] do
{:error, "Command execution is disabled in configuration"}
else
available = commands_config["available"] || []
case Enum.find(available, fn cmd -> cmd["trigger"] == trigger end) do
nil -> {:error, "Command '#{trigger}' not found in configuration"}
command_config -> {:ok, command_config}
end
end
end
defp validate_parameters(params, command_config) when is_list(params) do
allowed_params = command_config["allowed_params"] || []
# If no parameters are allowed, params must be empty
if Enum.empty?(allowed_params) and not Enum.empty?(params) do
{:error, "Command '#{command_config["trigger"]}' does not accept parameters"}
else
# Validate each parameter against allowed list
invalid_params =
Enum.reject(params, fn param ->
Enum.member?(allowed_params, param)
end)
if Enum.empty?(invalid_params) do
{:ok, params}
else
{:error,
"Invalid parameters: #{inspect(invalid_params)}. Allowed: #{inspect(allowed_params)}"}
end
end
end
defp validate_parameters(_, _), do: {:error, "Parameters must be a list"}
defp build_command(command_config, params) do
base_command = command_config["command"]
if is_binary(base_command) do
# Substitute parameters in the command string
final_command_string = substitute_parameters_in_string(base_command, params)
# If running as root and this looks like a Wayland command, wrap with sudo
final_command_with_user = maybe_wrap_with_sudo(final_command_string, command_config)
# Return the command string directly - we'll handle shell execution in execute_regular_command
{:ok, final_command_with_user}
else
{:error, "Command configuration must be a string"}
end
end
defp maybe_wrap_with_sudo(command_string, command_config) do
# Check if we're running as root and this command needs user privileges
if System.get_env("USER") == "root" and needs_user_privileges?(command_string, command_config) do
# Get the first non-root user ID (typically 1000)
case find_user_uid() do
{:ok, uid} ->
"sudo -u '##{uid}' #{command_string}"
{:error, _reason} ->
command_string
end
else
command_string
end
end
defp needs_user_privileges?(command_string, command_config) do
# Check if this is a Wayland command that needs user session
wayland_commands = ["grim", "hyprctl", "swaymsg", "wlr-", "waybar", "wofi"]
Enum.any?(wayland_commands, fn cmd ->
String.contains?(command_string, cmd)
end) or command_config["run_as_user"] == true
end
defp find_user_uid() do
# Look for the first non-root user in /run/user/
case File.ls("/run/user") do
{:ok, dirs} ->
user_dirs =
Enum.filter(dirs, fn dir ->
String.match?(dir, ~r/^\d+$/) and dir != "0"
end)
case user_dirs do
[uid | _] -> {:ok, uid}
[] -> {:error, "No user sessions found"}
end
{:error, reason} ->
{:error, "Cannot access /run/user: #{reason}"}
end
end
defp substitute_parameters_in_string(command_string, params) do
param_map = build_param_map(params)
# Replace $VARIABLE patterns in the command string
Enum.reduce(param_map, command_string, fn {var_name, value}, acc ->
String.replace(acc, "$#{var_name}", value)
end)
end
defp build_param_map(params) do
# For now, use simple mapping: first param is $SERVICE, $PATH, $PROCESS, $HOST, etc.
# In the future, could support named parameters
case params do
[param1] ->
%{"SERVICE" => param1, "PATH" => param1, "PROCESS" => param1, "HOST" => param1}
[param1, param2] ->
%{"SERVICE" => param1, "PATH" => param2, "PROCESS" => param1, "HOST" => param1}
_ ->
%{}
end
end
defp build_command_environment() do
# Get current environment
env = System.get_env()
# Start with current environment, but inject user's ~/.local/bin
enhanced_env = Map.put(env, "PATH", "#{env["HOME"]}/.local/bin:#{env["PATH"]}")
# If running as root, add Wayland session environment for user commands
if System.get_env("USER") == "root" do
# Find the user's Wayland session info
case find_user_wayland_session() do
{:ok, wayland_env} ->
Map.merge(enhanced_env, wayland_env)
{:error, _reason} ->
enhanced_env
end
else
enhanced_env
end
end
defp find_user_wayland_session() do
# Look for active Wayland sessions in /run/user/
case File.ls("/run/user") do
{:ok, dirs} ->
      # Find user directories that expose an active wayland-1 socket (typically uid 1000)
user_dirs =
Enum.filter(dirs, fn dir ->
String.match?(dir, ~r/^\d+$/) and File.exists?("/run/user/#{dir}/wayland-1")
end)
case user_dirs do
[uid | _] ->
runtime_dir = "/run/user/#{uid}"
{:ok,
%{
"XDG_RUNTIME_DIR" => runtime_dir,
"WAYLAND_DISPLAY" => "wayland-1"
}}
[] ->
{:error, "No active Wayland sessions found"}
end
{:error, reason} ->
{:error, "Cannot access /run/user: #{reason}"}
end
end
defp execute_system_command(final_command, command_config, parsed_command) do
is_detached = command_config["detached"] || false
# Convert to milliseconds
timeout = (command_config["timeout"] || 10) * 1000
# Build environment for command execution
env = build_command_environment()
if is_detached do
Logger.info("Executing detached command: #{inspect(final_command)}")
else
Logger.info("Executing system command: #{inspect(final_command)} (timeout: #{timeout}ms)")
end
Logger.debug("Environment PATH: #{Map.get(env, "PATH")}")
Logger.debug("Environment USER: #{Map.get(env, "USER")}")
Logger.debug("Environment HOME: #{Map.get(env, "HOME")}")
Logger.debug("Environment XDG_RUNTIME_DIR: #{Map.get(env, "XDG_RUNTIME_DIR")}")
if is_detached do
# For detached processes, spawn and immediately return success
execute_detached_command(final_command, env, parsed_command)
else
# For regular processes, wait for completion with timeout
execute_regular_command(final_command, env, timeout, parsed_command)
end
end
defp execute_detached_command(command_string, env, parsed_command) do
try do
# Use spawn to start process without waiting
port =
Port.open({:spawn_executable, "/bin/sh"}, [
:binary,
:exit_status,
args: ["-c", command_string],
env: Enum.map(env, fn {k, v} -> {String.to_charlist(k), String.to_charlist(v)} end)
])
# Close the port immediately to detach
Port.close(port)
Logger.info("Detached command started successfully")
{:ok,
%{
request_id: parsed_command.request_id,
command: parsed_command.trigger,
status: "success",
output: "Command started in detached mode",
detached: true,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}}
rescue
error ->
Logger.error("Failed to start detached command: #{inspect(error)}")
{:ok,
%{
request_id: parsed_command.request_id,
command: parsed_command.trigger,
status: "error",
output: "",
error: "Failed to start detached command: #{inspect(error)}",
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}}
end
end
defp execute_regular_command(command_string, env, timeout, parsed_command) do
start_time = System.monotonic_time(:millisecond)
    # Wrap the command so the shell first echoes its own PID; exec then
    # replaces the shell with the command, so the echoed PID identifies the
    # running command and lets us kill its process group on timeout.
wrapper_script = """
echo "SYSTANT_PID:$$"
exec #{command_string}
"""
port =
Port.open({:spawn_executable, "/bin/sh"}, [
:binary,
:exit_status,
:stderr_to_stdout,
args: ["-c", wrapper_script],
env: Enum.map(env, fn {k, v} -> {String.to_charlist(k), String.to_charlist(v)} end)
])
# Set up monitoring
ref = Port.monitor(port)
# Collect output with PID extraction
output = collect_port_output_with_pid(port, ref, timeout, "", nil)
case output do
{:ok, data, exit_status, _pid} ->
execution_time = System.monotonic_time(:millisecond) - start_time
case exit_status do
0 ->
Logger.info("Command completed successfully in #{execution_time}ms")
{:ok,
%{
request_id: parsed_command.request_id,
command: parsed_command.trigger,
status: "success",
output: String.trim(data),
execution_time: execution_time / 1000.0,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}}
code ->
Logger.warning("Command failed with exit code #{code} in #{execution_time}ms")
{:ok,
%{
request_id: parsed_command.request_id,
command: parsed_command.trigger,
status: "error",
output: String.trim(data),
error: "Command exited with code #{code}",
execution_time: execution_time / 1000.0,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}}
end
{:timeout, partial_output, pid} ->
execution_time = System.monotonic_time(:millisecond) - start_time
# First, close the port to prevent more data
try do
Port.close(port)
rescue
_ -> :ok
end
# Kill the process group if we have a PID
if pid do
kill_process_group(pid)
end
# Flush any remaining port messages to prevent them from going to other processes
flush_port_messages(port)
Logger.error("Command timed out after #{timeout}ms and was terminated")
{:ok,
%{
request_id: parsed_command.request_id,
command: parsed_command.trigger,
status: "error",
output: String.trim(partial_output),
error: "Command timed out after #{timeout / 1000} seconds and was terminated",
execution_time: execution_time / 1000.0,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}}
{:error, reason} ->
execution_time = System.monotonic_time(:millisecond) - start_time
Logger.error("Command execution failed: #{inspect(reason)}")
{:ok,
%{
request_id: parsed_command.request_id,
command: parsed_command.trigger,
status: "error",
output: "",
error: "Execution failed: #{inspect(reason)}",
execution_time: execution_time / 1000.0,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}}
end
end
defp kill_process_group(pid) when is_integer(pid) do
# Kill the entire process group
Logger.info("Killing process group for PID #{pid}")
System.cmd("kill", ["-TERM", "-#{pid}"], stderr_to_stdout: true)
# Give it a moment to terminate gracefully
Process.sleep(100)
# Force kill if still alive
case System.cmd("kill", ["-0", "#{pid}"], stderr_to_stdout: true) do
{_, 0} ->
Logger.warning("Process #{pid} still alive, sending SIGKILL")
System.cmd("kill", ["-KILL", "-#{pid}"], stderr_to_stdout: true)
_ ->
:ok
end
end
defp kill_process_group(_), do: :ok
defp flush_port_messages(port) do
receive do
{^port, _} ->
# Recursively flush more messages
flush_port_messages(port)
after
0 ->
# No more messages
:ok
end
end
defp collect_port_output_with_pid(port, ref, timeout, acc, pid) do
receive do
{^port, {:data, data}} ->
# Extract PID if we see it in the output
{new_pid, cleaned_data} = extract_pid(data, pid)
collect_port_output_with_pid(port, ref, timeout, acc <> cleaned_data, new_pid)
{^port, {:exit_status, status}} ->
# Demonitor to avoid receiving DOWN message
Port.demonitor(ref, [:flush])
{:ok, acc, status, pid}
{:DOWN, ^ref, :port, ^port, reason} ->
{:error, reason}
after
timeout ->
# Demonitor to avoid receiving DOWN message after timeout
Port.demonitor(ref, [:flush])
{:timeout, acc, pid}
end
end
defp extract_pid(data, current_pid) do
case Regex.run(~r/SYSTANT_PID:(\d+)\n/, data) do
[full_match, pid_str] ->
pid = String.to_integer(pid_str)
cleaned = String.replace(data, full_match, "")
{pid, cleaned}
nil ->
{current_pid, data}
end
end
defp generate_request_id do
:crypto.strong_rand_bytes(16) |> Base.encode16(case: :lower)
end
end
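For illustration, the positional `$VARIABLE` substitution performed by `substitute_parameters_in_string/2` with the map from `build_param_map/1` can be replayed standalone (the `nginx` argument and the `systemctl` template are assumed examples, not taken from any real config):

```elixir
# Single-parameter case: the one value fills every positional variable.
param_map = %{"SERVICE" => "nginx", "PATH" => "nginx", "PROCESS" => "nginx", "HOST" => "nginx"}

command =
  Enum.reduce(param_map, "systemctl restart $SERVICE", fn {var, value}, acc ->
    String.replace(acc, "$" <> var, value)
  end)
# command == "systemctl restart nginx"
```

Note that this is plain prefix replacement: only the variables present in the template are touched, and unknown `$NAMES` pass through unchanged.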

defmodule Systant.Config do
@moduledoc """
Configuration loader and manager for Systant.
Loads configuration from TOML files with environment variable overrides.
Provides a clean API for accessing configuration values throughout the application.
"""
require Logger
@default_config_paths [
"systant.toml", # Current directory
"~/.config/systant/systant.toml", # User config
"/etc/systant/systant.toml" # System config
]
@default_config %{
"general" => %{
"enabled_modules" => ["cpu", "memory", "disk", "gpu", "network", "temperature", "processes", "system"],
"collection_interval" => 30000,
"startup_delay" => 5000
},
"cpu" => %{"enabled" => true},
"memory" => %{"enabled" => true, "show_detailed" => true},
"disk" => %{
"enabled" => true,
"include_mounts" => [],
"exclude_mounts" => ["/snap", "/boot", "/dev", "/sys", "/proc", "/run", "/tmp"],
"exclude_types" => ["tmpfs", "devtmpfs", "squashfs", "overlay"],
"min_usage_percent" => 1
},
"gpu" => %{
"enabled" => true,
"nvidia_enabled" => true,
"amd_enabled" => true,
"max_gpus" => 8
},
"network" => %{
"enabled" => true,
"include_interfaces" => [],
"exclude_interfaces" => ["lo", "docker0", "br-", "veth", "virbr"],
"min_bytes_threshold" => 1024
},
"temperature" => %{
"enabled" => true,
"cpu_temp_enabled" => true,
"sensors_enabled" => true,
"temp_unit" => "celsius"
},
"processes" => %{
"enabled" => true,
"max_processes" => 10,
"sort_by" => "cpu",
"min_cpu_percent" => 0.1,
"min_memory_percent" => 0.1,
"max_command_length" => 50
},
"system" => %{
"enabled" => true,
"include_uptime" => true,
"include_load_average" => true,
"include_kernel_version" => true,
"include_os_info" => true
},
"mqtt" => %{
"host" => "mqtt.home",
"port" => 1883,
"client_id_prefix" => "systant",
"username" => "",
"password" => "",
"qos" => 0
},
"homeassistant" => %{
"discovery_enabled" => true,
"discovery_prefix" => "homeassistant"
},
"logging" => %{
"level" => "info",
"log_config_changes" => true,
"log_metric_collection" => false
}
}
@doc """
Load configuration from TOML file with environment variable overrides.
Returns the merged configuration map.
"""
def load_config do
config =
@default_config
|> load_toml_config()
|> apply_env_overrides()
|> validate_config()
if get_in(config, ["logging", "log_config_changes"]) do
Logger.info("Systant configuration loaded successfully")
end
config
end
@doc """
Get a configuration value by path (e.g., ["disk", "enabled"] or "general.collection_interval")
"""
def get(config, path) when is_list(path) do
get_in(config, path)
end
def get(config, path) when is_binary(path) do
path_list = String.split(path, ".")
get_in(config, path_list)
end
@doc """
Check if a module is enabled in the configuration
"""
def module_enabled?(config, module_name) when is_binary(module_name) do
enabled_modules = get(config, ["general", "enabled_modules"]) || []
module_config = get(config, [module_name, "enabled"])
Enum.member?(enabled_modules, module_name) and module_config != false
end
@doc """
Get MQTT configuration with environment variable overrides
"""
def mqtt_config(config) do
mqtt_base = get(config, ["mqtt"]) || %{}
%{
host: System.get_env("MQTT_HOST") || mqtt_base["host"] || "mqtt.home",
port: parse_int(System.get_env("MQTT_PORT")) || mqtt_base["port"] || 1883,
client_id: generate_client_id(mqtt_base["client_id_prefix"] || "systant"),
username: System.get_env("MQTT_USERNAME") || mqtt_base["username"] || nil,
password: System.get_env("MQTT_PASSWORD") || mqtt_base["password"] || nil,
stats_topic: "systant/#{get_hostname()}/stats",
command_topic: "systant/#{get_hostname()}/commands",
publish_interval: get(config, ["general", "collection_interval"]) || 30000,
qos: mqtt_base["qos"] || 0
}
end
# Private functions
defp load_toml_config(default_config) do
config_file = find_config_file()
case config_file do
nil ->
Logger.info("No configuration file found, using defaults")
default_config
path ->
case File.read(path) do
{:ok, content} ->
case Toml.decode(content) do
{:ok, toml_config} ->
Logger.info("Loaded configuration from #{path}")
deep_merge(default_config, toml_config)
{:error, reason} ->
Logger.error("Failed to parse TOML config at #{path}: #{inspect(reason)}")
default_config
end
{:error, reason} ->
Logger.error("Failed to read config file #{path}: #{inspect(reason)}")
default_config
end
end
end
defp find_config_file do
expanded_paths = @default_config_paths |> Enum.map(&expand_config_path/1)
Logger.debug("Searching for config files at: #{inspect(expanded_paths)}")
case Enum.find(expanded_paths, &File.exists?/1) do
nil ->
Logger.debug("No config file found, checked: #{inspect(expanded_paths)}")
nil
path ->
Logger.debug("Found config file at: #{path}")
path
end
end
defp expand_config_path("~" <> rest) do
home = System.user_home()
Path.join(home, rest)
end
defp expand_config_path(path) do
Path.expand(path)
end
defp apply_env_overrides(config) do
# Apply environment variable overrides for common settings
config
|> put_env_override(["general", "collection_interval"], "SYSTANT_INTERVAL", &parse_int/1)
|> put_env_override(["logging", "level"], "SYSTANT_LOG_LEVEL", &String.downcase/1)
|> put_env_override(["mqtt", "host"], "MQTT_HOST")
|> put_env_override(["mqtt", "port"], "MQTT_PORT", &parse_int/1)
end
defp put_env_override(config, path, env_var, transform \\ &(&1)) do
case System.get_env(env_var) do
nil -> config
value ->
transformed_value = transform.(value)
put_in(config, path, transformed_value)
end
end
defp validate_config(config) do
# Basic validation - could be expanded
collection_interval = get(config, ["general", "collection_interval"])
if collection_interval && collection_interval < 1000 do
Logger.warning("Collection interval #{collection_interval}ms is very low, consider >= 1000ms")
end
config
end
defp deep_merge(left, right) when is_map(left) and is_map(right) do
Map.merge(left, right, fn _key, left_val, right_val ->
deep_merge(left_val, right_val)
end)
end
defp deep_merge(_left, right), do: right
defp parse_int(str) when is_binary(str) do
case Integer.parse(str) do
{int, _} -> int
_ -> nil
end
end
defp parse_int(int) when is_integer(int), do: int
defp parse_int(_), do: nil
defp generate_client_id(prefix) do
"#{prefix}_#{get_hostname()}_#{:rand.uniform(1000)}"
end
defp get_hostname do
case :inet.gethostname() do
{:ok, hostname} -> List.to_string(hostname)
_ -> "unknown"
end
end
end
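The `deep_merge/2` above is what lets a partial TOML file override only the keys it names while keeping the defaults for everything else; a minimal standalone sketch with the same semantics:

```elixir
defmodule MergeDemo do
  # Same behavior as the private deep_merge/2 above: maps merge key-by-key,
  # any non-map value is replaced by the right-hand (override) value.
  def deep_merge(left, right) when is_map(left) and is_map(right) do
    Map.merge(left, right, fn _key, l, r -> deep_merge(l, r) end)
  end

  def deep_merge(_left, right), do: right
end

merged =
  MergeDemo.deep_merge(
    %{"mqtt" => %{"host" => "mqtt.home", "port" => 1883}},
    %{"mqtt" => %{"host" => "broker.lan"}}
  )
# merged == %{"mqtt" => %{"host" => "broker.lan", "port" => 1883}}
```

Overriding only `mqtt.host` in a user TOML file therefore leaves `mqtt.port` at its default.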

defmodule Systant.HaDiscovery do
@moduledoc """
Home Assistant MQTT Discovery integration for Systant.
Publishes device and entity discovery configurations to Home Assistant
via MQTT following the HA discovery protocol.
Discovery topic format: homeassistant/<component>/<node_id>/<object_id>/config
"""
require Logger
@manufacturer "Systant"
@model "Systant"
@doc """
Publish all discovery configurations for a host
"""
def publish_discovery(client_pid, hostname, config \\ nil) do
app_config = config || Systant.Config.load_config()
ha_config = Systant.Config.get(app_config, ["homeassistant"]) || %{}
if ha_config["discovery_enabled"] != false do
discovery_prefix = ha_config["discovery_prefix"] || "homeassistant"
device_config = build_device_config(hostname)
# Publish device discovery first
publish_device_discovery(client_pid, hostname, device_config, discovery_prefix)
# Publish sensor discoveries
publish_sensor_discoveries(client_pid, hostname, device_config, discovery_prefix)
# Publish command buttons if commands are enabled
commands_config = Systant.Config.get(app_config, ["commands"]) || %{}
if commands_config["enabled"] do
publish_command_discoveries(client_pid, hostname, device_config, discovery_prefix, app_config)
end
Logger.info("Published Home Assistant discovery for #{hostname}")
else
Logger.info("Home Assistant discovery disabled in configuration")
end
end
@doc """
Remove all discovery configurations for a host
"""
def remove_discovery(client_pid, hostname, config \\ nil) do
app_config = config || Systant.Config.load_config()
ha_config = Systant.Config.get(app_config, ["homeassistant"]) || %{}
discovery_prefix = ha_config["discovery_prefix"] || "homeassistant"
# Remove by publishing empty payloads to discovery topics
sensors = get_sensor_definitions(hostname)
Enum.each(sensors, fn {component, object_id, _config} ->
topic = "#{discovery_prefix}/#{component}/#{hostname}/#{object_id}/config"
Tortoise.publish(client_pid, topic, "", retain: true)
end)
Logger.info("Removed Home Assistant discovery for #{hostname}")
end
# Private functions
defp publish_device_discovery(client_pid, hostname, device_config, discovery_prefix) do
# Use device-based discovery for multiple components
components_config = %{
device: device_config,
components: build_all_components(hostname, device_config)
}
topic = "#{discovery_prefix}/device/#{hostname}/config"
payload = Jason.encode!(components_config)
Tortoise.publish(client_pid, topic, payload, retain: true)
end
defp publish_sensor_discoveries(client_pid, hostname, device_config, discovery_prefix) do
sensors = get_sensor_definitions(hostname)
Enum.each(sensors, fn {component, object_id, config} ->
full_config = Map.merge(config, %{device: device_config})
topic = "#{discovery_prefix}/#{component}/#{hostname}/#{object_id}/config"
payload = Jason.encode!(full_config)
Tortoise.publish(client_pid, topic, payload, retain: true)
end)
end
defp publish_command_discoveries(client_pid, hostname, device_config, discovery_prefix, app_config) do
commands_config = Systant.Config.get(app_config, ["commands"]) || %{}
available_commands = commands_config["available"] || []
# Clear stale command slots (up to 2x current command count, minimum 10)
max_slots = max(length(available_commands) * 2, 10)
for i <- 0..(max_slots - 1) do
topic = "#{discovery_prefix}/button/#{hostname}/command_#{i}/config"
Tortoise.publish(client_pid, topic, "", retain: true)
end
# Publish actual command buttons
available_commands
|> Enum.with_index()
|> Enum.each(fn {cmd, index} ->
button_config = build_command_button_config(cmd, hostname, device_config)
topic = "#{discovery_prefix}/button/#{hostname}/command_#{index}/config"
payload = Jason.encode!(button_config)
Tortoise.publish(client_pid, topic, payload, retain: true)
end)
Logger.info("Published #{length(available_commands)} command buttons for #{hostname}")
end
defp build_device_config(hostname) do
%{
identifiers: ["systant_#{hostname}"],
name: hostname |> String.capitalize(),
manufacturer: @manufacturer,
model: @model,
sw_version: Application.spec(:systant, :vsn) |> to_string()
}
end
defp build_all_components(hostname, device_config) do
get_sensor_definitions(hostname)
|> Enum.map(fn {_component, object_id, config} ->
Map.merge(config, %{device: device_config})
|> Map.put(:object_id, object_id)
end)
end
defp get_sensor_definitions(hostname) do
base_topic = "systant/#{hostname}/stats"
[
# CPU Sensors
{"sensor", "cpu_load_1m",
build_sensor_config("CPU Load 1m", "#{base_topic}", "cpu.avg1", "load", "mdi:speedometer")},
{"sensor", "cpu_load_5m",
build_sensor_config("CPU Load 5m", "#{base_topic}", "cpu.avg5", "load", "mdi:speedometer")},
{"sensor", "cpu_load_15m",
build_sensor_config(
"CPU Load 15m",
"#{base_topic}",
"cpu.avg15",
"load",
"mdi:speedometer"
)},
# Memory Sensors
{"sensor", "memory_used_percent",
build_sensor_config(
"Memory Used",
"#{base_topic}",
"memory.used_percent",
"%",
"mdi:memory"
)},
{"sensor", "memory_used_gb",
build_sensor_config(
"Memory Used GB",
"#{base_topic}",
"memory.used_kb",
"GB",
"mdi:memory",
"{{ (value_json.memory.used_kb | float / 1024 / 1024) | round(2) }}"
)},
{"sensor", "memory_total_gb",
build_sensor_config(
"Memory Total GB",
"#{base_topic}",
"memory.total_kb",
"GB",
"mdi:memory",
"{{ (value_json.memory.total_kb | float / 1024 / 1024) | round(2) }}"
)},
# System Sensors
{"sensor", "uptime_hours",
build_sensor_config(
"Uptime",
"#{base_topic}",
"system.uptime_seconds",
"h",
"mdi:clock-outline",
"{{ (value_json.system.uptime_seconds | float / 3600) | round(1) }}"
)},
{"sensor", "kernel_version",
build_sensor_config(
"Kernel Version",
"#{base_topic}",
"system.kernel_version",
nil,
"mdi:linux"
)},
# Temperature Sensors
{"sensor", "cpu_temperature",
build_sensor_config(
"CPU Temperature",
"#{base_topic}",
"temperature.cpu",
"°C",
"mdi:thermometer"
)},
# GPU Sensors - NVIDIA
{"sensor", "gpu_nvidia_utilization",
build_sensor_config(
"NVIDIA GPU Utilization",
"#{base_topic}",
"gpu.nvidia[0].utilization_percent",
"%",
"mdi:expansion-card",
"{{ value_json.gpu.nvidia[0].utilization_percent if value_json.gpu.nvidia and value_json.gpu.nvidia|length > 0 else 0 }}"
)},
{"sensor", "gpu_nvidia_temperature",
build_sensor_config(
"NVIDIA GPU Temperature",
"#{base_topic}",
"gpu.nvidia[0].temperature_c",
"°C",
"mdi:thermometer",
"{{ value_json.gpu.nvidia[0].temperature_c if value_json.gpu.nvidia and value_json.gpu.nvidia|length > 0 else none }}"
)},
{"sensor", "gpu_nvidia_memory",
build_sensor_config(
"NVIDIA GPU Memory",
"#{base_topic}",
"gpu.nvidia[0].memory_used_mb",
"MB",
"mdi:memory",
"{{ value_json.gpu.nvidia[0].memory_used_mb if value_json.gpu.nvidia and value_json.gpu.nvidia|length > 0 else none }}"
)},
# GPU Sensors - AMD
{"sensor", "gpu_amd_utilization",
build_sensor_config(
"AMD GPU Utilization",
"#{base_topic}",
"gpu.amd[0].utilization_percent",
"%",
"mdi:expansion-card",
"{{ value_json.gpu.amd[0].utilization_percent if value_json.gpu.amd and value_json.gpu.amd|length > 0 else 0 }}"
)},
{"sensor", "gpu_amd_temperature",
build_sensor_config(
"AMD GPU Temperature",
"#{base_topic}",
"gpu.amd[0].temperature_c",
"°C",
"mdi:thermometer",
"{{ value_json.gpu.amd[0].temperature_c if value_json.gpu.amd and value_json.gpu.amd|length > 0 else none }}"
)},
# Disk Sensors - Main filesystem usage
{"sensor", "disk_root_usage",
build_sensor_config(
"Root Disk Usage",
"#{base_topic}",
"disk.disks",
"%",
"mdi:harddisk",
"{{ (value_json.disk.disks | selectattr('mounted_on', 'equalto', '/') | list | first).use_percent if value_json.disk.disks else 0 }}"
)},
{"sensor", "disk_home_usage",
build_sensor_config(
"Home Disk Usage",
"#{base_topic}",
"disk.disks",
"%",
"mdi:harddisk",
"{{ (value_json.disk.disks | selectattr('mounted_on', 'equalto', '/home') | list | first).use_percent if (value_json.disk.disks | selectattr('mounted_on', 'equalto', '/home') | list) else 0 }}"
)},
# Network Sensors - Primary interface throughput
{"sensor", "network_rx_throughput",
build_sensor_config(
"Network RX Throughput",
"#{base_topic}",
"network.rx_throughput",
"MB/s",
"mdi:download-network",
"{{ (value_json.network[0].rx_throughput_bps | float / 1024 / 1024) | round(2) if value_json.network and value_json.network|length > 0 else 0 }}"
)},
{"sensor", "network_tx_throughput",
build_sensor_config(
"Network TX Throughput",
"#{base_topic}",
"network.tx_throughput",
"MB/s",
"mdi:upload-network",
"{{ (value_json.network[0].tx_throughput_bps | float / 1024 / 1024) | round(2) if value_json.network and value_json.network|length > 0 else 0 }}"
)},
# Status sensors
{"sensor", "last_seen",
build_sensor_config("Last Seen", "#{base_topic}", "timestamp", nil, "mdi:clock-outline", "{{ value_json.timestamp }}")}
]
end
defp build_sensor_config(
name,
state_topic,
value_template_path,
unit,
icon,
custom_template \\ nil
) do
base_config = %{
name: name,
state_topic: state_topic,
value_template: custom_template || "{{ value_json.#{value_template_path} }}",
icon: icon,
unique_id:
"systant_#{String.replace(state_topic, "/", "_")}_#{String.replace(value_template_path, ".", "_")}",
availability: %{
topic: state_topic,
value_template: """
{% set last_seen = as_timestamp(value_json.timestamp) %}
{% set now = as_timestamp(now()) %}
{{ 'online' if (now - last_seen) < 180 else 'offline' }}
"""
},
origin: %{
name: "Systant",
sw_version: Application.spec(:systant, :vsn) |> to_string(),
support_url: "https://github.com/user/systant"
}
}
if unit do
Map.put(base_config, :unit_of_measurement, unit)
else
base_config
end
end
defp build_command_button_config(cmd, hostname, device_config) do
trigger = cmd["trigger"]
name = cmd["description"] || "#{String.capitalize(trigger)} Command"
icon = cmd["icon"] || "mdi:console-line"
%{
name: name,
command_topic: "systant/#{hostname}/commands",
payload_press: Jason.encode!(%{command: trigger}),
availability: %{
topic: "systant/#{hostname}/stats",
value_template: """
{% set last_seen = as_timestamp(value_json.timestamp) %}
{% set now = as_timestamp(now()) %}
{{ 'online' if (now - last_seen) < 180 else 'offline' }}
"""
},
device: device_config,
icon: icon,
unique_id: "systant_#{hostname}_command_#{trigger}",
origin: %{
name: "Systant",
sw_version: Application.spec(:systant, :vsn) |> to_string(),
support_url: "https://github.com/user/systant"
}
}
end
end
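A minimal sketch of one sensor discovery message in the `homeassistant/<component>/<node_id>/<object_id>/config` format named in the moduledoc (the hostname and the commented-out publish call are assumptions; the field names follow `build_sensor_config/6` above):

```elixir
hostname = "workstation"
discovery_prefix = "homeassistant"

topic = "#{discovery_prefix}/sensor/#{hostname}/cpu_load_1m/config"

payload = %{
  name: "CPU Load 1m",
  state_topic: "systant/#{hostname}/stats",
  value_template: "{{ value_json.cpu.avg1 }}",
  unique_id: "systant_#{hostname}_cpu_load_1m"
}
# Published retained so Home Assistant re-creates the entity after a restart:
# Tortoise.publish(client_id, topic, Jason.encode!(payload), retain: true)
```

Publishing an empty retained payload to the same topic, as `remove_discovery/3` does, deletes the entity again.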

defmodule Systant.MqttClient do
use GenServer
require Logger
@moduledoc """
MQTT client for publishing system stats and handling commands
"""
def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
def init(_opts) do
# Load the TOML-based configuration
app_config = Systant.Config.load_config()
mqtt_config = Systant.Config.mqtt_config(app_config)
Logger.info("Starting MQTT client with config: #{inspect(mqtt_config)}")
Logger.info("Attempting to connect to MQTT broker at #{mqtt_config.host}:#{mqtt_config.port}")
# Get hostname using same method as SystemMetrics
{:ok, hostname_charlist} = :inet.gethostname()
hostname = List.to_string(hostname_charlist)
# Store both configs for later use
state_config = %{
app_config: app_config,
mqtt_config: mqtt_config,
previous_network_stats: nil,
hostname: hostname
}
connection_opts = [
client_id: mqtt_config.client_id,
server: {Tortoise.Transport.Tcp, host: to_charlist(mqtt_config.host), port: mqtt_config.port},
handler: {Systant.MqttHandler, [client_id: mqtt_config.client_id]},
user_name: mqtt_config.username,
password: mqtt_config.password,
subscriptions: [{mqtt_config.command_topic, mqtt_config.qos}]
]
case Tortoise.Connection.start_link(connection_opts) do
{:ok, _pid} ->
Logger.info("MQTT client process started, verifying connection...")
# Wait a bit to verify the connection actually works
case wait_for_connection(mqtt_config.client_id, 5000) do
:ok ->
Logger.info("MQTT connection verified successfully")
# Send system metrics after a short delay to ensure dashboard is ready
startup_delay = Systant.Config.get(app_config, ["general", "startup_delay"]) || 5000
Process.send_after(self(), :publish_startup_stats, startup_delay)
Logger.info("Will publish initial stats in #{startup_delay}ms")
# Publish Home Assistant discovery after MQTT connection
Process.send_after(self(), :publish_ha_discovery, 1000)
Logger.info("Will publish HA discovery in 1000ms")
schedule_stats_publish(mqtt_config.publish_interval)
{:ok, state_config}
:timeout ->
Logger.error("MQTT connection timeout - broker at #{mqtt_config.host}:#{mqtt_config.port} is not responding")
Logger.error("Shutting down systant due to MQTT connection failure")
System.stop(1)
{:stop, :connection_timeout}
{:error, reason} ->
Logger.error("MQTT connection verification failed: #{inspect(reason)}")
Logger.error("Shutting down systant due to MQTT connection failure")
System.stop(1)
{:stop, reason}
end
{:error, reason} ->
Logger.error("Failed to start MQTT client: #{inspect(reason)}")
{:stop, reason}
end
end
def handle_info(:publish_ha_discovery, state) do
Logger.info("Publishing Home Assistant discovery configuration")
# Get hostname from system metrics (reuse existing logic)
stats = Systant.SystemMetrics.collect_metrics(state.app_config)
Systant.HaDiscovery.publish_discovery(state.mqtt_config.client_id, stats.hostname, state.app_config)
{:noreply, state}
end
def handle_info(:publish_startup_stats, state) do
Logger.info("Publishing initial system metrics")
{_stats, updated_state} = collect_and_publish_stats(state)
{:noreply, updated_state}
end
def handle_info(:publish_stats, state) do
{_stats, updated_state} = collect_and_publish_stats(state)
schedule_stats_publish(state.mqtt_config.publish_interval)
{:noreply, updated_state}
end
def handle_info(_msg, state) do
{:noreply, state}
end
def terminate(reason, _state) do
Logger.info("MQTT client terminating: #{inspect(reason)}")
:ok
end
defp collect_and_publish_stats(state) do
# Collect metrics with previous network stats for throughput calculation
stats = Systant.SystemMetrics.collect_metrics(state.app_config, state.previous_network_stats)
# Store current network stats for next iteration
current_network_stats = case Map.get(stats, :network) do
network_data when is_list(network_data) ->
%{
interfaces: network_data,
timestamp: System.monotonic_time(:second)
}
_ -> nil
end
updated_state = Map.put(state, :previous_network_stats, current_network_stats)
# Publish the stats
payload = Jason.encode!(stats)
case Tortoise.publish(state.mqtt_config.client_id, state.mqtt_config.stats_topic, payload, qos: state.mqtt_config.qos) do
:ok ->
Logger.info("Published system metrics for #{stats.hostname}")
{:error, reason} ->
Logger.error("Failed to publish stats: #{inspect(reason)}")
end
{stats, updated_state}
end
# Legacy function for compatibility if needed
defp publish_stats(app_config, mqtt_config) do
stats = Systant.SystemMetrics.collect_metrics(app_config)
payload = Jason.encode!(stats)
case Tortoise.publish(mqtt_config.client_id, mqtt_config.stats_topic, payload, qos: mqtt_config.qos) do
:ok ->
Logger.info("Published system metrics for #{stats.hostname}")
{:error, reason} ->
Logger.error("Failed to publish stats: #{inspect(reason)}")
end
end
defp schedule_stats_publish(interval) do
Process.send_after(self(), :publish_stats, interval)
end
defp wait_for_connection(client_id, timeout_ms) do
# Try to publish a test message to verify the connection
test_topic = "systant/connection_test"
test_payload = "test"
try do
case Tortoise.publish_sync(client_id, test_topic, test_payload, qos: 0, timeout: timeout_ms) do
:ok ->
Logger.debug("MQTT connection test successful")
:ok
{:error, :timeout} ->
Logger.error("MQTT connection test timed out")
:timeout
{:error, reason} ->
Logger.error("MQTT connection test failed: #{inspect(reason)}")
{:error, reason}
other ->
Logger.error("MQTT connection test unexpected result: #{inspect(other)}")
{:error, other}
end
rescue
error ->
Logger.error("MQTT connection test exception: #{inspect(error)}")
{:error, :connection_failed}
catch
:exit, reason ->
Logger.error("MQTT connection test exit: #{inspect(reason)}")
{:error, :connection_failed}
end
end
end
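The client threads `previous_network_stats` through each publish cycle so that throughput can be derived from byte-counter deltas; the per-interface calculation itself lives in `Systant.SystemMetrics` (not shown in this diff), but the arithmetic reduces to the following (the counter values are assumed samples):

```elixir
# Two snapshots of a monotonic rx byte counter, 30 seconds apart.
prev = %{rx_bytes: 1_000_000, timestamp: 100}
curr = %{rx_bytes: 3_621_440, timestamp: 130}

elapsed = curr.timestamp - prev.timestamp
rx_throughput_bps = div(curr.rx_bytes - prev.rx_bytes, elapsed)
# 2_621_440 bytes over 30 s => 87_381 B/s
```

This is also why the very first publish after startup carries no throughput: with `previous_network_stats` still `nil`, there is no earlier snapshot to diff against.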

defmodule Systant.MqttHandler do
@moduledoc """
Custom MQTT handler for processing command messages
"""
@behaviour Tortoise.Handler
require Logger
def init(args) do
Logger.info("Initializing MQTT handler")
# Get the client_id from the passed arguments
client_id = Keyword.get(args, :client_id)
Logger.info("Handler initialized with client_id: #{client_id}")
state = %{client_id: client_id}
{:ok, state}
end
def connection(status, state) do
case status do
:up ->
Logger.info("MQTT connection established successfully")
:down ->
Logger.error("MQTT connection lost - check MQTT broker availability and configuration")
:terminating ->
Logger.info("MQTT connection terminating")
{:error, reason} ->
Logger.error("MQTT connection failed: #{inspect(reason)}")
other ->
Logger.error("MQTT connection status unknown: #{inspect(other)}")
end
{:ok, state}
end
def subscription(status, topic_filter, state) do
case status do
:up ->
Logger.info("Subscribed to #{topic_filter}")
:down ->
Logger.warning("Subscription to #{topic_filter} lost")
end
{:ok, state}
end
def handle_message(topic, payload, state) do
# Topic can come as a list or string, normalize it
topic_str = case topic do
topic when is_binary(topic) -> topic
topic when is_list(topic) -> Enum.join(topic, "/")
_ -> to_string(topic)
end
Logger.info("Received MQTT message on topic: #{topic_str}")
# Only process command topics
if String.contains?(topic_str, "/commands") do
process_command_message(topic_str, payload, state)
else
Logger.debug("Ignoring non-command message on topic: #{topic_str}")
end
{:ok, state}
end
def terminate(reason, _state) do
Logger.info("MQTT handler terminating: #{inspect(reason)}")
:ok
end
# Private functions
defp process_command_message(topic, payload, state) do
try do
# Parse the JSON command
case Jason.decode(payload) do
{:ok, command_data} ->
Logger.info("Processing command: #{inspect(command_data)}")
execute_and_respond(command_data, topic, state)
{:error, reason} ->
Logger.error("Failed to parse command JSON: #{inspect(reason)}")
send_error_response(topic, "Invalid JSON format", nil, state)
end
rescue
error ->
Logger.error("Error processing command: #{inspect(error)}")
send_error_response(topic, "Command processing failed", nil, state)
end
end
defp execute_and_respond(command_data, topic, state) do
# Load current configuration
config = Systant.Config.load_config()
# Use client_id from handler state
client_id = state.client_id
# Handle special "list" command to show available commands
if command_data["command"] == "list" do
available_commands = Systant.CommandExecutor.list_available_commands(config)
response = %{
request_id: command_data["request_id"] || generate_request_id(),
command: "list",
status: "success",
output: "Available commands: #{Enum.map_join(available_commands, ", ", & &1.trigger)}",
data: available_commands,
execution_time: 0.0,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}
response_topic = String.replace(topic, "/commands", "/responses")
response_payload = Jason.encode!(response)
case Tortoise.publish_sync(client_id, response_topic, response_payload, qos: 0) do
:ok ->
Logger.info("List response sent successfully")
{:error, reason} ->
Logger.error("Failed to send list response: #{inspect(reason)}")
end
else
case Systant.CommandExecutor.execute_command(command_data, config) do
{:ok, response} ->
# Send response to the response topic
response_topic = String.replace(topic, "/commands", "/responses")
response_payload = Jason.encode!(response)
case Tortoise.publish_sync(client_id, response_topic, response_payload, qos: 0) do
:ok ->
Logger.info("Command response sent successfully")
{:error, reason} ->
Logger.error("Failed to send command response: #{inspect(reason)}")
end
{:error, reason} ->
send_error_response(topic, reason, command_data["request_id"], state)
end
end
end
defp send_error_response(topic, error_message, request_id, state) do
client_id = state.client_id
response_topic = String.replace(topic, "/commands", "/responses")
error_response = %{
request_id: request_id || "unknown",
command: "unknown",
status: "error",
output: "",
error: error_message,
execution_time: 0.0,
timestamp: DateTime.utc_now() |> DateTime.to_iso8601()
}
response_payload = Jason.encode!(error_response)
case Tortoise.publish_sync(client_id, response_topic, response_payload, qos: 0) do
:ok ->
Logger.info("Error response sent successfully")
{:error, reason} ->
Logger.error("Failed to send error response: #{inspect(reason)}")
end
end
defp generate_request_id do
:crypto.strong_rand_bytes(16) |> Base.encode16(case: :lower)
end
end
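The handler above defines a simple request/response contract: commands arrive as JSON on a `…/commands` topic, and the response envelope is published to the matching `…/responses` topic. A minimal TypeScript sketch of that contract (the interfaces and helper names are illustrative, not part of the agent):

```typescript
// Sketch of the command/response envelope used by the MQTT handler.
// The response topic is derived by swapping "/commands" for "/responses".
interface CommandRequest {
  command: string;
  request_id?: string;
}

interface CommandResponse {
  request_id: string;
  command: string;
  status: "success" | "error";
  output: string;
  error?: string;
  execution_time: number;
  timestamp: string; // ISO 8601
}

function responseTopic(commandTopic: string): string {
  return commandTopic.replace("/commands", "/responses");
}

// Mirrors send_error_response/4: unknown fields fall back to "unknown".
function buildErrorResponse(req: Partial<CommandRequest>, message: string): CommandResponse {
  return {
    request_id: req.request_id ?? "unknown",
    command: req.command ?? "unknown",
    status: "error",
    output: "",
    error: message,
    execution_time: 0.0,
    timestamp: new Date().toISOString(),
  };
}
```

For example, a command received on `systant/host1/commands` would have its error response published to `responseTopic("systant/host1/commands")`, i.e. `systant/host1/responses`.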

@ -1,841 +0,0 @@
defmodule Systant.SystemMetrics do
@moduledoc """
Collects system metrics using Erlang's built-in :os_mon application.
Provides CPU, memory, disk, and network statistics.
"""
require Logger
@doc """
Collect system metrics based on configuration
"""
def collect_metrics(config \\ nil, previous_network_stats \\ nil) do
config = config || Systant.Config.load_config()
base_metrics = %{
timestamp: DateTime.utc_now() |> DateTime.to_iso8601(),
hostname: get_hostname()
}
# Collect metrics based on enabled modules
enabled_modules = Systant.Config.get(config, ["general", "enabled_modules"]) || []
Enum.reduce(enabled_modules, base_metrics, fn module_name, acc ->
if Systant.Config.module_enabled?(config, module_name) do
case module_name do
"cpu" -> Map.put(acc, :cpu, collect_cpu_metrics(config))
"memory" -> Map.put(acc, :memory, collect_memory_metrics(config))
"disk" -> Map.put(acc, :disk, collect_disk_metrics(config))
"gpu" -> Map.put(acc, :gpu, collect_gpu_metrics(config))
"network" -> Map.put(acc, :network, collect_network_metrics(config, previous_network_stats))
"temperature" -> Map.put(acc, :temperature, collect_temperature_metrics(config))
"processes" -> Map.put(acc, :processes, collect_process_metrics(config))
"system" -> Map.put(acc, :system, collect_system_info(config))
_ -> acc
end
else
acc
end
end)
end
@doc """
Collect CPU metrics using Linux system files and commands
"""
def collect_cpu_metrics(_config) do
get_load_averages()
end
@doc """
Collect memory metrics using Linux /proc/meminfo
"""
def collect_memory_metrics(_config) do
get_memory_info()
end
@doc """
Collect disk metrics using Linux df command
"""
def collect_disk_metrics(config) do
get_disk_usage(config)
end
@doc """
Collect GPU metrics from NVIDIA and AMD GPUs
"""
def collect_gpu_metrics(config) do
gpu_config = Systant.Config.get(config, ["gpu"]) || %{}
%{
nvidia: if(gpu_config["nvidia_enabled"] != false, do: get_nvidia_gpu_info(config), else: []),
amd: if(gpu_config["amd_enabled"] != false, do: get_amd_gpu_info(config), else: [])
}
end
@doc """
Collect network interface statistics with throughput calculation
"""
def collect_network_metrics(config, previous_stats \\ nil) do
get_network_stats(config, previous_stats)
end
@doc """
Collect temperature data from system sensors
"""
def collect_temperature_metrics(config) do
get_temperature_data(config)
end
@doc """
Collect top processes by CPU and memory usage
"""
def collect_process_metrics(config) do
get_top_processes(config)
end
@doc """
Collect general system information
"""
def collect_system_info(config) do
system_config = Systant.Config.get(config, ["system"]) || %{}
try do
base_info = %{}
base_info
|> maybe_add(:uptime_seconds, get_uptime(), system_config["include_uptime"])
|> maybe_add(:erlang_version, System.version(), true)
|> maybe_add(:otp_release, System.otp_release(), true)
|> maybe_add(:schedulers, System.schedulers(), true)
|> maybe_add(:logical_processors, System.schedulers_online(), true)
|> maybe_add(:kernel_version, get_kernel_version(), system_config["include_kernel_version"])
|> maybe_add(:os_info, get_os_info(), system_config["include_os_info"])
rescue
_ ->
Logger.warning("System info collection failed")
%{}
end
end
# Private helper functions
defp get_hostname do
case :inet.gethostname() do
{:ok, hostname} -> List.to_string(hostname)
_ -> "unknown"
end
end
defp get_uptime do
try do
# Get actual system uptime by reading /proc/uptime on Linux
case File.read("/proc/uptime") do
{:ok, content} ->
content
|> String.trim()
|> String.split(" ")
|> List.first()
|> String.to_float()
|> trunc()
_ ->
# Fallback to Erlang VM uptime if /proc/uptime unavailable
:erlang.statistics(:wall_clock) |> elem(0) |> div(1000)
end
rescue
_ -> nil
end
end
# Linux system metrics implementation
defp get_load_averages do
try do
case File.read("/proc/loadavg") do
{:ok, content} ->
[avg1, avg5, avg15 | _] = String.split(String.trim(content), " ")
%{
avg1: String.to_float(avg1),
avg5: String.to_float(avg5),
avg15: String.to_float(avg15)
}
_ -> nil
end
rescue
_ -> nil
end
end
defp get_memory_info do
try do
case File.read("/proc/meminfo") do
{:ok, content} ->
# Parse /proc/meminfo into a map
meminfo = content
|> String.split("\n")
|> Enum.reduce(%{}, fn line, acc ->
case String.split(line, ":") do
[key, value] ->
# Extract numeric value (remove "kB" suffix)
clean_value = value |> String.trim() |> String.replace(" kB", "")
case Integer.parse(clean_value) do
{num, _} -> Map.put(acc, String.trim(key), num)
_ -> acc
end
_ -> acc
end
end)
total = Map.get(meminfo, "MemTotal", 0)
available = Map.get(meminfo, "MemAvailable", 0)
free = Map.get(meminfo, "MemFree", 0)
used = total - available
%{
total_kb: total,
available_kb: available,
free_kb: free,
used_kb: used,
used_percent: if(total > 0, do: Float.round(used / total * 100.0, 2), else: 0)
}
_ -> nil
end
rescue
_ -> nil
end
end
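The `/proc/meminfo` parse above splits each `Key:  value kB` line, then derives used memory as `MemTotal - MemAvailable`. The same logic is easy to mirror in the TypeScript rewrite; a sketch under the same assumptions (kB values, `MemTotal`/`MemAvailable` present; the function name is illustrative):

```typescript
// Parse /proc/meminfo-style "Key:   12345 kB" lines into usage numbers.
interface MemInfo {
  total_kb: number;
  available_kb: number;
  used_kb: number;
  used_percent: number;
}

function parseMemInfo(content: string): MemInfo {
  const fields = new Map<string, number>();
  for (const line of content.split("\n")) {
    const [key, value] = line.split(":");
    if (key === undefined || value === undefined) continue;
    const num = parseInt(value.trim().replace(" kB", ""), 10);
    if (!Number.isNaN(num)) fields.set(key.trim(), num);
  }
  const total = fields.get("MemTotal") ?? 0;
  const available = fields.get("MemAvailable") ?? 0;
  const used = total - available;
  return {
    total_kb: total,
    available_kb: available,
    used_kb: used,
    // Round to two decimals, matching Float.round(_, 2) above.
    used_percent: total > 0 ? Math.round((used / total) * 10000) / 100 : 0,
  };
}
```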
defp get_disk_usage do
try do
# Use df command to get disk usage
case System.cmd("df", ["-h", "--exclude-type=tmpfs", "--exclude-type=devtmpfs"]) do
{output, 0} ->
disks = output
|> String.split("\n")
|> Enum.drop(1) # Skip header
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.map(fn line ->
case String.split(line) do
[filesystem, size, used, available, use_percent, mounted_on] ->
%{
filesystem: filesystem,
size: size,
used: used,
available: available,
use_percent: String.replace(use_percent, "%", "") |> parse_percentage(),
mounted_on: mounted_on
}
_ -> nil
end
end)
|> Enum.filter(&(&1 != nil))
%{disks: disks}
_ -> %{disks: []}
end
rescue
_ -> %{disks: []}
end
end
defp parse_percentage(str) do
case Integer.parse(str) do
{num, _} -> num
_ -> 0
end
end
# GPU Metrics Implementation
defp get_nvidia_gpu_info do
try do
case System.cmd("nvidia-smi", ["--query-gpu=name,utilization.gpu,utilization.memory,temperature.gpu,memory.used,memory.total", "--format=csv,noheader,nounits"]) do
{output, 0} ->
output
|> String.split("\n")
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.with_index()
|> Enum.map(fn {line, index} ->
case String.split(line, ", ") do
[name, gpu_util, mem_util, temp, mem_used, mem_total] ->
%{
id: index,
name: String.trim(name),
utilization_percent: parse_int(gpu_util),
memory_utilization_percent: parse_int(mem_util),
temperature_c: parse_int(temp),
memory_used_mb: parse_int(mem_used),
memory_total_mb: parse_int(mem_total)
}
_ -> nil
end
end)
|> Enum.filter(&(&1 != nil))
_ -> []
end
rescue
_ -> []
end
end
defp get_amd_gpu_info do
try do
# Try to get AMD GPU info from sysfs or rocm-smi if available
case System.cmd("rocm-smi", ["--showuse", "--showtemp", "--showmemuse", "--csv"]) do
{output, 0} ->
parse_rocm_smi_output(output)
_ ->
# Fallback to sysfs for basic AMD GPU info
get_amd_sysfs_info()
end
rescue
_ -> []
end
end
defp parse_rocm_smi_output(output) do
try do
lines = String.split(output, "\n") |> Enum.filter(&(String.trim(&1) != ""))
case lines do
[header | data_lines] ->
# Parse header to get column positions
headers = String.split(header, ",")
data_lines
|> Enum.with_index()
|> Enum.map(fn {line, index} ->
values = String.split(line, ",")
# Create a map from headers to values
data_map = Enum.zip(headers, values) |> Enum.into(%{})
%{
id: index,
name: "AMD GPU #{Map.get(data_map, "device", "unknown")}",
utilization_percent: parse_int(Map.get(data_map, "GPU use (%)", "0")),
memory_utilization_percent: parse_int(Map.get(data_map, "GPU Memory Allocated (VRAM%)", "0")),
temperature_c: parse_float(Map.get(data_map, "Temperature (Sensor edge) (C)", "0")),
memory_used_mb: nil, # rocm-smi doesn't provide absolute memory values in this format
memory_total_mb: nil
}
end)
_ -> []
end
rescue
_ -> []
end
end
defp get_amd_sysfs_info do
# Basic AMD GPU detection via sysfs
try do
case File.ls("/sys/class/drm") do
{:ok, entries} ->
entries
|> Enum.filter(&String.starts_with?(&1, "card"))
|> Enum.take(4) # Limit to first 4 GPUs
|> Enum.with_index()
|> Enum.map(fn {card, index} ->
%{
id: index,
name: "AMD GPU #{card}",
utilization_percent: nil,
memory_utilization_percent: nil,
temperature_c: nil,
memory_used_mb: nil,
memory_total_mb: nil
}
end)
_ -> []
end
rescue
_ -> []
end
end
# Network Metrics Implementation
defp get_network_stats do
try do
case File.read("/proc/net/dev") do
{:ok, content} ->
content
|> String.split("\n")
|> Enum.drop(2) # Skip header lines
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.map(&parse_network_interface/1)
|> Enum.filter(&(&1 != nil))
_ -> []
end
rescue
_ -> []
end
end
defp parse_network_interface(line) do
case String.split(line, ":") do
[interface_part, stats_part] ->
interface = String.trim(interface_part)
stats = stats_part |> String.trim() |> String.split() |> Enum.map(&parse_int/1)
if length(stats) >= 16 do
[rx_bytes, rx_packets, rx_errs, rx_drop, _, _, _, _,
tx_bytes, tx_packets, tx_errs, tx_drop | _] = stats
%{
interface: interface,
rx_bytes: rx_bytes,
rx_packets: rx_packets,
rx_errors: rx_errs,
rx_dropped: rx_drop,
tx_bytes: tx_bytes,
tx_packets: tx_packets,
tx_errors: tx_errs,
tx_dropped: tx_drop
}
else
nil
end
_ -> nil
end
end
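`parse_network_interface/1` relies on the fixed `/proc/net/dev` column layout: after the interface name and colon come eight receive counters, then eight transmit counters, so `tx_bytes` sits at index 8. A TypeScript sketch of the same split (interface and function names are illustrative):

```typescript
// /proc/net/dev data line: "eth0: <8 rx counters> <8 tx counters>"
// Columns 0-7 are receive counters, 8-15 are transmit counters.
interface IfaceStats {
  interface: string;
  rx_bytes: number;
  rx_packets: number;
  tx_bytes: number;
  tx_packets: number;
}

function parseNetDevLine(line: string): IfaceStats | null {
  const [namePart, statsPart] = line.split(":");
  if (namePart === undefined || statsPart === undefined) return null;
  const cols = statsPart.trim().split(/\s+/).map((s) => parseInt(s, 10) || 0);
  if (cols.length < 16) return null;
  return {
    interface: namePart.trim(),
    rx_bytes: cols[0] ?? 0,
    rx_packets: cols[1] ?? 0,
    tx_bytes: cols[8] ?? 0,
    tx_packets: cols[9] ?? 0,
  };
}
```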
# Temperature Metrics Implementation
defp get_temperature_data do
%{
cpu: get_cpu_temperature(),
sensors: get_lm_sensors_data()
}
end
defp get_cpu_temperature do
try do
# Try multiple common CPU temperature sources
cpu_temp_sources = [
"/sys/class/thermal/thermal_zone0/temp",
"/sys/class/thermal/thermal_zone1/temp",
"/sys/class/hwmon/hwmon0/temp1_input",
"/sys/class/hwmon/hwmon1/temp1_input"
]
Enum.find_value(cpu_temp_sources, fn path ->
case File.read(path) do
{:ok, content} ->
temp_millic = content |> String.trim() |> parse_int()
if temp_millic > 0, do: temp_millic / 1000.0, else: nil
_ -> nil
end
end)
rescue
_ -> nil
end
end
defp get_lm_sensors_data do
try do
case System.cmd("sensors", ["-A", "-j"]) do
{output, 0} ->
case Jason.decode(output) do
{:ok, data} -> simplify_sensors_data(data)
_ -> %{}
end
_ -> %{}
end
rescue
_ -> %{}
end
end
defp simplify_sensors_data(sensors_data) when is_map(sensors_data) do
sensors_data
|> Enum.reduce(%{}, fn {chip_name, chip_data}, acc ->
case chip_data do
chip_map when is_map(chip_map) ->
temps = extract_temperatures(chip_map)
if map_size(temps) > 0 do
Map.put(acc, chip_name, temps)
else
acc
end
_ -> acc
end
end)
end
defp simplify_sensors_data(_), do: %{}
defp extract_temperatures(chip_data) when is_map(chip_data) do
chip_data
|> Enum.reduce(%{}, fn {sensor_name, sensor_data}, acc ->
case sensor_data do
sensor_map when is_map(sensor_map) ->
temp_input = Map.get(sensor_map, "temp1_input") ||
Map.get(sensor_map, "temp2_input") ||
Map.get(sensor_map, "temp3_input")
if is_number(temp_input) do
Map.put(acc, sensor_name, temp_input)
else
acc
end
_ -> acc
end
end)
end
defp extract_temperatures(_), do: %{}
# Process Metrics Implementation
defp get_top_processes do
try do
case System.cmd("ps", ["aux", "--sort=-pcpu", "--no-headers"]) do
{output, 0} ->
output
|> String.split("\n")
|> Enum.take(10) # Top 10 processes
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.map(&parse_process_line/1)
|> Enum.filter(&(&1 != nil))
_ -> []
end
rescue
_ -> []
end
end
defp parse_process_line(line) do
case String.split(line) do
[user, pid, cpu, mem, _vsz, _rss, _tty, _stat, _start, _time | command_parts] ->
%{
user: user,
pid: parse_int(pid),
cpu_percent: parse_float(cpu),
memory_percent: parse_float(mem),
command: Enum.join(command_parts, " ") |> String.slice(0, 50) # Limit command length
}
_ -> nil
end
end
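`parse_process_line/1` depends on `ps aux` emitting eleven whitespace-separated columns, with everything from column eleven onward being the command. A TypeScript sketch of the same split (names are illustrative):

```typescript
// ps aux columns: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND...
interface ProcInfo {
  user: string;
  pid: number;
  cpu_percent: number;
  memory_percent: number;
  command: string;
}

function parsePsLine(line: string, maxCmdLen = 50): ProcInfo | null {
  const parts = line.trim().split(/\s+/);
  if (parts.length < 11) return null;
  return {
    user: parts[0] ?? "",
    pid: parseInt(parts[1] ?? "0", 10) || 0,
    cpu_percent: parseFloat(parts[2] ?? "0") || 0,
    memory_percent: parseFloat(parts[3] ?? "0") || 0,
    // Everything after the tenth column is the command; truncate like the Elixir code.
    command: parts.slice(10).join(" ").slice(0, maxCmdLen),
  };
}
```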
# Helper functions
defp parse_int(str) when is_binary(str) do
case Integer.parse(str) do
{num, _} -> num
_ -> 0
end
end
defp parse_int(num) when is_integer(num), do: num
defp parse_int(_), do: 0
defp parse_float(str) when is_binary(str) do
case Float.parse(str) do
{num, _} -> num
_ -> 0.0
end
end
defp parse_float(num) when is_float(num), do: num
defp parse_float(num) when is_integer(num), do: num * 1.0
defp parse_float(_), do: 0.0
# Configuration-aware helper functions
defp maybe_add(map, _key, _value, false), do: map
defp maybe_add(map, _key, _value, nil), do: map
defp maybe_add(map, key, value, _), do: Map.put(map, key, value)
defp get_kernel_version do
case File.read("/proc/version") do
{:ok, content} -> content |> String.trim() |> String.slice(0, 100)
_ -> nil
end
end
defp get_os_info do
try do
case File.read("/etc/os-release") do
{:ok, content} ->
content
|> String.split("\n")
|> Enum.reduce(%{}, fn line, acc ->
case String.split(line, "=", parts: 2) do
[key, value] ->
clean_value = String.trim(value, "\"")
Map.put(acc, String.downcase(key), clean_value)
_ -> acc
end
end)
|> Map.take(["name", "version", "id", "version_id"])
_ -> %{}
end
rescue
_ -> %{}
end
end
# Update helper functions to accept config parameter
defp get_disk_usage(config) do
disk_config = Systant.Config.get(config, ["disk"]) || %{}
try do
case System.cmd("df", ["-h", "--exclude-type=tmpfs", "--exclude-type=devtmpfs"]) do
{output, 0} ->
disks = output
|> String.split("\n")
|> Enum.drop(1)
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.map(fn line ->
case String.split(line) do
[filesystem, size, used, available, use_percent, mounted_on] ->
%{
filesystem: filesystem,
size: size,
used: used,
available: available,
use_percent: String.replace(use_percent, "%", "") |> parse_percentage(),
mounted_on: mounted_on
}
_ -> nil
end
end)
|> Enum.filter(&(&1 != nil))
|> filter_disks(disk_config)
%{disks: disks}
_ -> %{disks: []}
end
rescue
_ -> %{disks: []}
end
end
defp filter_disks(disks, config) do
include_mounts = config["include_mounts"] || []
exclude_mounts = config["exclude_mounts"] || []
exclude_types = config["exclude_types"] || []
min_usage = config["min_usage_percent"] || 0
disks
|> Enum.filter(fn disk ->
# Include filter (if specified)
include_match = if Enum.empty?(include_mounts) do
true
else
Enum.any?(include_mounts, &String.contains?(disk.mounted_on, &1))
end
# Exclude filter
exclude_match = Enum.any?(exclude_mounts, &String.starts_with?(disk.mounted_on, &1))
type_exclude = Enum.any?(exclude_types, &String.contains?(disk.filesystem, &1))
# Usage filter
usage_ok = disk.use_percent >= min_usage
include_match and not exclude_match and not type_exclude and usage_ok
end)
end
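`filter_disks/2` combines four independent predicates: an include list (empty means include everything), an exclude mount-prefix list, a filesystem-type exclude, and a minimum-usage floor. The rule is compact enough to restate as a TypeScript sketch (types and names are illustrative):

```typescript
interface Disk {
  filesystem: string;
  mounted_on: string;
  use_percent: number;
}

interface DiskFilter {
  include_mounts?: string[];
  exclude_mounts?: string[];
  exclude_types?: string[];
  min_usage_percent?: number;
}

// A disk survives when: included (or no include list), not excluded by mount
// prefix, not excluded by filesystem type, and above the usage floor.
function filterDisks(disks: Disk[], cfg: DiskFilter): Disk[] {
  const include = cfg.include_mounts ?? [];
  const exclude = cfg.exclude_mounts ?? [];
  const excludeTypes = cfg.exclude_types ?? [];
  const minUsage = cfg.min_usage_percent ?? 0;
  return disks.filter((d) => {
    const included = include.length === 0 || include.some((m) => d.mounted_on.includes(m));
    const excluded = exclude.some((m) => d.mounted_on.startsWith(m));
    const typeExcluded = excludeTypes.some((t) => d.filesystem.includes(t));
    return included && !excluded && !typeExcluded && d.use_percent >= minUsage;
  });
}
```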
defp get_nvidia_gpu_info(config) do
gpu_config = Systant.Config.get(config, ["gpu"]) || %{}
max_gpus = gpu_config["max_gpus"] || 8
try do
case System.cmd("nvidia-smi", ["--query-gpu=name,utilization.gpu,utilization.memory,temperature.gpu,memory.used,memory.total", "--format=csv,noheader,nounits"]) do
{output, 0} ->
output
|> String.split("\n")
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.take(max_gpus)
|> Enum.with_index()
|> Enum.map(fn {line, index} ->
case String.split(line, ", ") do
[name, gpu_util, mem_util, temp, mem_used, mem_total] ->
%{
id: index,
name: String.trim(name),
utilization_percent: parse_int(gpu_util),
memory_utilization_percent: parse_int(mem_util),
temperature_c: parse_int(temp),
memory_used_mb: parse_int(mem_used),
memory_total_mb: parse_int(mem_total)
}
_ -> nil
end
end)
|> Enum.filter(&(&1 != nil))
_ -> []
end
rescue
_ -> []
end
end
defp get_amd_gpu_info(config) do
gpu_config = Systant.Config.get(config, ["gpu"]) || %{}
max_gpus = gpu_config["max_gpus"] || 8
try do
case System.cmd("rocm-smi", ["--showuse", "--showtemp", "--showmemuse", "--csv"]) do
{output, 0} ->
parse_rocm_smi_output(output) |> Enum.take(max_gpus)
_ ->
get_amd_sysfs_info() |> Enum.take(max_gpus)
end
rescue
_ -> []
end
end
defp get_network_stats(config, previous_stats \\ nil) do
network_config = Systant.Config.get(config, ["network"]) || %{}
current_time = System.monotonic_time(:second)
try do
case File.read("/proc/net/dev") do
{:ok, content} ->
current_interfaces = content
|> String.split("\n")
|> Enum.drop(2)
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.map(&parse_network_interface/1)
|> Enum.filter(&(&1 != nil))
|> filter_network_interfaces(network_config)
|> Enum.map(&calculate_throughput(&1, previous_stats, current_time))
current_interfaces
_ -> []
end
rescue
_ -> []
end
end
defp filter_network_interfaces(interfaces, config) do
include_interfaces = config["include_interfaces"] || []
exclude_interfaces = config["exclude_interfaces"] || []
min_bytes = config["min_bytes_threshold"] || 0
interfaces
|> Enum.filter(fn iface ->
# Include filter
include_match = if Enum.empty?(include_interfaces) do
true
else
Enum.member?(include_interfaces, iface.interface)
end
# Exclude filter
exclude_match = Enum.any?(exclude_interfaces, fn pattern ->
String.starts_with?(iface.interface, pattern)
end)
# Traffic threshold
has_traffic = (iface.rx_bytes + iface.tx_bytes) >= min_bytes
include_match and not exclude_match and has_traffic
end)
end
defp get_temperature_data(config) do
temp_config = Systant.Config.get(config, ["temperature"]) || %{}
result = %{}
result = if temp_config["cpu_temp_enabled"] != false do
Map.put(result, :cpu, get_cpu_temperature())
else
result
end
result = if temp_config["sensors_enabled"] != false do
Map.put(result, :sensors, get_lm_sensors_data())
else
result
end
result
end
defp get_top_processes(config) do
process_config = Systant.Config.get(config, ["processes"]) || %{}
max_processes = process_config["max_processes"] || 10
sort_by = process_config["sort_by"] || "cpu"
min_cpu = process_config["min_cpu_percent"] || 0.0
min_memory = process_config["min_memory_percent"] || 0.0
max_cmd_len = process_config["max_command_length"] || 50
sort_flag = case sort_by do
"memory" -> "-pmem"
_ -> "-pcpu"
end
try do
case System.cmd("ps", ["aux", "--sort=#{sort_flag}", "--no-headers"]) do
{output, 0} ->
output
|> String.split("\n")
|> Enum.take(max_processes * 2) # Get extra in case filtering removes some
|> Enum.filter(&(String.trim(&1) != ""))
|> Enum.map(&parse_process_line(&1, max_cmd_len))
|> Enum.filter(&(&1 != nil))
|> Enum.filter(fn proc ->
proc.cpu_percent >= min_cpu and proc.memory_percent >= min_memory
end)
|> Enum.take(max_processes)
_ -> []
end
rescue
_ -> []
end
end
defp parse_process_line(line, max_cmd_len) do
case String.split(line) do
[user, pid, cpu, mem, _vsz, _rss, _tty, _stat, _start, _time | command_parts] ->
%{
user: user,
pid: parse_int(pid),
cpu_percent: parse_float(cpu),
memory_percent: parse_float(mem),
command: Enum.join(command_parts, " ") |> String.slice(0, max_cmd_len)
}
_ -> nil
end
end
defp calculate_throughput(current_interface, previous_stats, current_time) do
case previous_stats do
%{interfaces: prev_interfaces, timestamp: prev_time} ->
# Find matching interface in previous data
prev_interface = Enum.find(prev_interfaces, &(&1.interface == current_interface.interface))
if prev_interface && prev_time do
time_diff = current_time - prev_time
if time_diff > 0 do
rx_bytes_diff = current_interface.rx_bytes - prev_interface.rx_bytes
tx_bytes_diff = current_interface.tx_bytes - prev_interface.tx_bytes
# Calculate bytes per second
rx_throughput = max(0, rx_bytes_diff / time_diff)
tx_throughput = max(0, tx_bytes_diff / time_diff)
current_interface
|> Map.put(:rx_throughput_bps, Float.round(rx_throughput, 2))
|> Map.put(:tx_throughput_bps, Float.round(tx_throughput, 2))
else
# First measurement or time error
current_interface
|> Map.put(:rx_throughput_bps, 0.0)
|> Map.put(:tx_throughput_bps, 0.0)
end
else
# No previous data for this interface
current_interface
|> Map.put(:rx_throughput_bps, 0.0)
|> Map.put(:tx_throughput_bps, 0.0)
end
_ ->
# No previous data at all
current_interface
|> Map.put(:rx_throughput_bps, 0.0)
|> Map.put(:tx_throughput_bps, 0.0)
end
end
end
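The throughput math in `calculate_throughput/3` is a counter delta over a monotonic time delta, clamped at zero so counter resets or interface bounces never report negative rates. A standalone TypeScript sketch of the same calculation (the helper name is illustrative):

```typescript
// Bytes-per-second from two cumulative counter samples.
// Negative deltas (counter reset / interface bounce) clamp to 0.
interface Sample {
  bytes: number;
  timeSec: number; // monotonic seconds, as from System.monotonic_time(:second)
}

function throughputBps(prev: Sample, current: Sample): number {
  const dt = current.timeSec - prev.timeSec;
  if (dt <= 0) return 0; // first measurement or clock anomaly
  const delta = current.bytes - prev.bytes;
  // Round to two decimals, matching Float.round(_, 2) above.
  return Math.round(Math.max(0, delta / dt) * 100) / 100;
}
```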

@ -1,46 +0,0 @@
defmodule SystemStatsDaemon.MixProject do
use Mix.Project
def project do
[
app: :systant,
version: "0.1.0",
elixir: "~> 1.18",
start_permanent: Mix.env() == :prod,
deps: deps(),
releases: releases()
]
end
# Run "mix help compile.app" to learn about applications.
def application do
[
extra_applications: [:logger],
mod: {Systant.Application, []}
]
end
# Run "mix help deps" to learn about dependencies.
defp deps do
[
{:tortoise, "~> 0.9.5"},
{:jason, "~> 1.4"},
{:toml, "~> 0.7"}
]
end
defp releases do
[
systant: [
include_executables_for: [:unix],
applications: [runtime_tools: :permanent],
include_erts: true,
strip_beams: false,
env: %{
"RELEASE_DISTRIBUTION" => "none",
"RELEASE_NODE" => "nonode@nohost"
}
]
]
end
end

@ -1,11 +0,0 @@
%{
"cowlib": {:hex, :cowlib, "2.13.0", "db8f7505d8332d98ef50a3ef34b34c1afddec7506e4ee4dd4a3a266285d282ca", [:make, :rebar3], [], "hexpm", "e1e1284dc3fc030a64b1ad0d8382ae7e99da46c3246b815318a4b848873800a4"},
"emqtt": {:hex, :emqtt, "1.14.4", "f34fd1e612e3138e61e9a2d27b0f9674e1da87cc794d30b7916d96f6ee7eef71", [:rebar3], [{:cowlib, "2.13.0", [hex: :cowlib, repo: "hexpm", optional: false]}, {:getopt, "1.0.3", [hex: :getopt, repo: "hexpm", optional: false]}, {:gun, "2.1.0", [hex: :gun, repo: "hexpm", optional: false]}], "hexpm", "9065ba581ea899fde316b7eafd03f3c945044c151480bf3adabc6b62b0e60dad"},
"gen_state_machine": {:hex, :gen_state_machine, "3.0.0", "1e57f86a494e5c6b14137ebef26a7eb342b3b0070c7135f2d6768ed3f6b6cdff", [:mix], [], "hexpm", "0a59652574bebceb7309f6b749d2a41b45fdeda8dbb4da0791e355dd19f0ed15"},
"getopt": {:hex, :getopt, "1.0.3", "4f3320c1f6f26b2bec0f6c6446b943eb927a1e6428ea279a1c6c534906ee79f1", [:rebar3], [], "hexpm", "7e01de90ac540f21494ff72792b1e3162d399966ebbfc674b4ce52cb8f49324f"},
"gun": {:hex, :gun, "2.1.0", "b4e4cbbf3026d21981c447e9e7ca856766046eff693720ba43114d7f5de36e87", [:make, :rebar3], [{:cowlib, "2.13.0", [hex: :cowlib, repo: "hexpm", optional: false]}], "hexpm", "52fc7fc246bfc3b00e01aea1c2854c70a366348574ab50c57dfe796d24a0101d"},
"jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
"syslog": {:hex, :syslog, "1.1.0", "6419a232bea84f07b56dc575225007ffe34d9fdc91abe6f1b2f254fd71d8efc2", [:rebar3], [], "hexpm", "4c6a41373c7e20587be33ef841d3de6f3beba08519809329ecc4d27b15b659e1"},
"toml": {:hex, :toml, "0.7.0", "fbcd773caa937d0c7a02c301a1feea25612720ac3fa1ccb8bfd9d30d822911de", [:mix], [], "hexpm", "0690246a2478c1defd100b0c9b89b4ea280a22be9a7b313a8a058a2408a2fa70"},
"tortoise": {:hex, :tortoise, "0.9.9", "2e467570ef1d342d4de8fdc6ba3861f841054ab524080ec3d7052ee07c04501d", [:mix], [{:gen_state_machine, "~> 2.0 or ~> 3.0", [hex: :gen_state_machine, repo: "hexpm", optional: false]}], "hexpm", "4a316220b4b443c2497f42702f0c0616af3e4b2cbc6c150ebebb51657a773797"},
}

@ -1,6 +0,0 @@
#!/bin/sh
# Configure environment for release
export MIX_ENV=prod
export RELEASE_DISTRIBUTION=none
export RELEASE_NODE=nonode@nohost

@ -1 +0,0 @@
/home/ryan/.config/systant/systant.toml

@ -1,116 +0,0 @@
# Systant Configuration Example
# Copy to systant.toml and customize for your environment
[general]
enabled_modules = ["cpu", "memory", "disk", "gpu", "network", "temperature", "processes", "system"]
collection_interval = 30000 # milliseconds
startup_delay = 5000 # milliseconds
[mqtt]
host = "localhost" # MQTT broker hostname/IP
port = 1883 # MQTT broker port
client_id_prefix = "systant" # Prefix for MQTT client ID
username = "" # MQTT username (optional)
password = "" # MQTT password (optional)
qos = 0 # MQTT QoS level
# Home Assistant MQTT Discovery Configuration
[homeassistant]
discovery_enabled = true # Enable/disable HA auto-discovery
discovery_prefix = "homeassistant" # HA discovery topic prefix
[cpu]
enabled = true
[memory]
enabled = true
show_detailed = true
[disk]
enabled = true
include_mounts = [] # Only include these mounts (empty = all)
exclude_mounts = ["/snap", "/boot", "/dev", "/sys", "/proc", "/run", "/tmp"]
exclude_types = ["tmpfs", "devtmpfs", "squashfs", "overlay"]
min_usage_percent = 1 # Minimum usage to report
[gpu]
enabled = true
nvidia_enabled = true
amd_enabled = true
max_gpus = 8
[network]
enabled = true
include_interfaces = [] # Only include these interfaces (empty = all)
exclude_interfaces = ["lo", "docker0", "br-", "veth", "virbr"]
min_bytes_threshold = 1024 # Minimum traffic to report
[temperature]
enabled = true
cpu_temp_enabled = true
sensors_enabled = true
temp_unit = "celsius"
[processes]
enabled = true
max_processes = 10
sort_by = "cpu" # "cpu" or "memory"
min_cpu_percent = 0.0
min_memory_percent = 0.0
max_command_length = 50
[system]
enabled = true
include_uptime = true
include_load_average = true
include_kernel_version = true
include_os_info = true
[commands]
enabled = true
timeout_seconds = 30
log_executions = true
# Example commands - customize for your needs
[[commands.available]]
trigger = "restart"
command = "systemctl"
allowed_params = ["nginx", "apache2", "docker", "ssh"]
description = "Restart system services"
[[commands.available]]
trigger = "info"
command = "uname"
allowed_params = ["-a"]
description = "Show system information"
[[commands.available]]
trigger = "df"
command = "df"
allowed_params = ["-h", "/", "/home", "/var", "/tmp"]
description = "Show disk usage"
[[commands.available]]
trigger = "ps"
command = "ps"
allowed_params = ["aux", "--sort=-pcpu", "--sort=-pmem"]
description = "Show running processes"
[[commands.available]]
trigger = "ping"
command = "ping"
allowed_params = ["-c", "4", "8.8.8.8", "google.com", "1.1.1.1"]
description = "Network connectivity test"
# Example of a detached command for long-running processes
[[commands.available]]
trigger = "start_app"
command = "firefox"
detached = true # Don't wait for the process to exit, just launch it
timeout = 5 # Timeout only applies to launching, not running
description = "Start Firefox browser (detached)"
[logging]
level = "info" # debug, info, warn, error
log_config_changes = true
log_metric_collection = false
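The `[[commands.available]]` entries above define a whitelist: a request only runs when its trigger matches a configured command and every requested parameter appears in that command's `allowed_params`. A minimal TypeScript sketch of that validation rule (the types and function name are illustrative, not the agent's actual code):

```typescript
interface AvailableCommand {
  trigger: string;
  command: string;
  allowed_params?: string[];
}

// A request is allowed only if the trigger exists and every param is whitelisted.
function isAllowed(available: AvailableCommand[], trigger: string, params: string[]): boolean {
  const cmd = available.find((c) => c.trigger === trigger);
  if (!cmd) return false;
  const allowed = cmd.allowed_params ?? [];
  return params.every((p) => allowed.includes(p));
}

const commands: AvailableCommand[] = [
  { trigger: "ping", command: "ping", allowed_params: ["-c", "4", "8.8.8.8"] },
];
```

So `isAllowed(commands, "ping", ["-c", "4", "8.8.8.8"])` passes, while an unlisted parameter or an unknown trigger is rejected outright.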

@ -1,8 +0,0 @@
defmodule SystantTest do
use ExUnit.Case
doctest Systant
test "greets the world" do
assert Systant.hello() == :world
end
end

@ -1 +0,0 @@
ExUnit.start()

130
src/config.ts Normal file
@ -0,0 +1,130 @@
import { parse } from "smol-toml";
export interface SystantConfig {
hostname: string;
defaultInterval: number;
}
export interface MqttConfig {
broker: string;
username?: string;
password?: string;
clientId?: string;
topicPrefix: string;
}
export type EntityType = "sensor" | "binary_sensor" | "light" | "switch" | "button";
export interface EntityConfig {
type: EntityType;
state_command?: string; // for stateful entities (not button)
on_command?: string; // for light/switch
off_command?: string; // for light/switch
press_command?: string; // for button
interval?: number; // override default interval
unit?: string; // for sensor
device_class?: string; // for sensor (timestamp, etc.) or binary_sensor
icon?: string;
name?: string;
availability?: boolean; // default true; set false to keep last value when offline
}
export interface HomeAssistantConfig {
discovery: boolean;
discoveryPrefix: string;
}
export interface Config {
systant: SystantConfig;
mqtt: MqttConfig;
entities: Record<string, EntityConfig>;
homeassistant: HomeAssistantConfig;
}
const defaults = {
systant: {
defaultInterval: 30,
hostname: process.env.HOSTNAME || require("os").hostname(),
},
mqtt: {
broker: "mqtt://localhost:1883",
topicPrefix: "systant",
},
homeassistant: {
discovery: true,
discoveryPrefix: "homeassistant",
},
};
export async function loadConfig(path: string): Promise<Config> {
const file = Bun.file(path);
if (!(await file.exists())) {
throw new Error(`Config file not found: ${path}`);
}
const content = await file.text();
const parsed = parse(content) as Record<string, unknown>;
return buildConfig(parsed);
}
function buildConfig(parsed: Record<string, unknown>): Config {
const config: Config = {
systant: { ...defaults.systant },
mqtt: { ...defaults.mqtt },
entities: {},
homeassistant: { ...defaults.homeassistant },
};
// MQTT settings
if (parsed.mqtt && typeof parsed.mqtt === "object") {
Object.assign(config.mqtt, parsed.mqtt);
}
// Systant settings
if (parsed.systant && typeof parsed.systant === "object") {
Object.assign(config.systant, parsed.systant);
}
// Entities
if (parsed.entities && typeof parsed.entities === "object") {
const entities = parsed.entities as Record<string, unknown>;
for (const [key, value] of Object.entries(entities)) {
if (key === "interval") continue; // skip the default interval
if (value && typeof value === "object" && "type" in value) {
const e = value as Record<string, unknown>;
const entityType = String(e.type) as EntityType;
// Buttons need press_command, others need state_command
const hasRequiredCommand = entityType === "button"
? "press_command" in e
: "state_command" in e;
if (!hasRequiredCommand) continue;
config.entities[key] = {
type: entityType,
state_command: typeof e.state_command === "string" ? e.state_command : undefined,
on_command: typeof e.on_command === "string" ? e.on_command : undefined,
off_command: typeof e.off_command === "string" ? e.off_command : undefined,
press_command: typeof e.press_command === "string" ? e.press_command : undefined,
interval: typeof e.interval === "number" ? e.interval : undefined,
unit: typeof e.unit === "string" ? e.unit : undefined,
device_class: typeof e.device_class === "string" ? e.device_class : undefined,
icon: typeof e.icon === "string" ? e.icon : undefined,
name: typeof e.name === "string" ? e.name : undefined,
availability: typeof e.availability === "boolean" ? e.availability : true,
};
}
}
}
// Home Assistant
if (parsed.homeassistant && typeof parsed.homeassistant === "object") {
Object.assign(config.homeassistant, parsed.homeassistant);
}
return config;
}
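`buildConfig` silently drops any entity table that lacks the command its type requires: buttons need `press_command`, every other type needs `state_command`. That predicate is worth stating in isolation; a small sketch (the function name is illustrative):

```typescript
type EntityType = "sensor" | "binary_sensor" | "light" | "switch" | "button";

// Mirrors the validation in buildConfig: which command key makes an entity usable?
function hasRequiredCommand(type: EntityType, entry: Record<string, unknown>): boolean {
  return type === "button" ? "press_command" in entry : "state_command" in entry;
}
```

An entity like `[entities.cpu_temp]` with only a `type = "sensor"` line would therefore be skipped until a `state_command` is added.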

110
src/entities.ts Normal file
@ -0,0 +1,110 @@
import type { Config, EntityConfig } from "./config";
import type { MqttConnection } from "./mqtt";
export interface EntityManager {
start(): Promise<void>;
stop(): void;
}
export function createEntityManager(config: Config, mqtt: MqttConnection): EntityManager {
const timers: Timer[] = [];
async function setupStatefulEntity(id: string, entity: EntityConfig): Promise<void> {
const interval = entity.interval ?? config.systant.defaultInterval;
const isControllable = entity.type === "light" || entity.type === "switch";
if (!entity.state_command) return;
const stateCommand = entity.state_command; // capture so the narrowed type survives inside the closure
// State polling
const pollState = async () => {
try {
const output = await Bun.$`sh -c ${stateCommand}`.text();
const value = output.trim();
console.debug(`[${id}] state: ${value}`);
await mqtt.publish(`${id}/state`, value, true); // retain state
} catch (err) {
console.error(`[${id}] state poll failed:`, err instanceof Error ? err.message : err);
}
};
// Initial state poll
await pollState();
// Schedule periodic polling
const timer = setInterval(pollState, interval * 1000);
timers.push(timer);
// Command subscription for controllable entities
if (isControllable && (entity.on_command || entity.off_command)) {
await mqtt.subscribe(`${id}/set`, async (_topic, payload) => {
const command = payload.toString().toUpperCase();
console.log(`[${id}] received command: ${command}`);
try {
if (command === "ON" && entity.on_command) {
await Bun.$`sh -c ${entity.on_command}`;
console.log(`[${id}] executed on_command`);
} else if (command === "OFF" && entity.off_command) {
await Bun.$`sh -c ${entity.off_command}`;
console.log(`[${id}] executed off_command`);
} else {
console.warn(`[${id}] unknown command or no handler: ${command}`);
return;
}
// Re-poll state after command execution
await new Promise((r) => setTimeout(r, 500)); // brief delay for state to settle
await pollState();
} catch (err) {
console.error(`[${id}] command failed:`, err instanceof Error ? err.message : err);
}
});
}
const typeLabel = isControllable ? `${entity.type} (controllable)` : entity.type;
console.log(` ${id}: ${typeLabel}, poll every ${interval}s`);
}
async function setupButton(id: string, entity: EntityConfig): Promise<void> {
if (!entity.press_command) return;
const pressCommand = entity.press_command; // capture so the narrowed type survives inside the closure
await mqtt.subscribe(`${id}/press`, async (_topic, _payload) => {
console.log(`[${id}] button pressed`);
try {
await Bun.$`sh -c ${pressCommand}`;
console.log(`[${id}] executed press_command`);
} catch (err) {
console.error(`[${id}] press_command failed:`, err instanceof Error ? err.message : err);
}
});
console.log(` ${id}: button`);
}
return {
async start(): Promise<void> {
const entityCount = Object.keys(config.entities).length;
if (entityCount === 0) {
console.log("No entities configured");
return;
}
console.log(`Starting ${entityCount} entity manager(s):`);
for (const [id, entity] of Object.entries(config.entities)) {
if (entity.type === "button") {
await setupButton(id, entity);
} else {
await setupStatefulEntity(id, entity);
}
}
},
stop(): void {
for (const timer of timers) {
clearInterval(timer);
}
timers.length = 0;
},
};
}

src/mqtt.ts (new file, 177 lines)
@@ -0,0 +1,177 @@
import mqtt, { type MqttClient, type IClientOptions } from "mqtt";
import type { Config, EntityConfig } from "./config";
export interface MqttConnection {
client: MqttClient;
publish(topic: string, payload: string | object, retain?: boolean): Promise<void>;
subscribe(topic: string, handler: (topic: string, payload: Buffer) => void): Promise<void>;
disconnect(): Promise<void>;
}
export async function connect(config: Config, hostname: string): Promise<MqttConnection> {
const options: IClientOptions = {
clientId: config.mqtt.clientId || `systant-${hostname}`,
username: config.mqtt.username,
password: config.mqtt.password,
will: {
topic: `${config.mqtt.topicPrefix}/${hostname}/status`,
payload: Buffer.from("offline"),
qos: 1,
retain: true,
},
};
const client = mqtt.connect(config.mqtt.broker, options);
const handlers = new Map<string, (topic: string, payload: Buffer) => void>();
await new Promise<void>((resolve, reject) => {
client.on("connect", () => {
console.log(`Connected to MQTT broker: ${config.mqtt.broker}`);
resolve();
});
client.on("error", reject);
});
client.on("message", (topic, payload) => {
for (const [pattern, handler] of handlers) {
if (topicMatches(pattern, topic)) {
handler(topic, payload);
}
}
});
// Publish online status
await publishAsync(client, `${config.mqtt.topicPrefix}/${hostname}/status`, "online", true);
// Publish HA discovery if enabled
if (config.homeassistant.discovery) {
await publishDiscovery(client, config, hostname);
}
return {
client,
async publish(topic: string, payload: string | object, retain = false): Promise<void> {
const fullTopic = `${config.mqtt.topicPrefix}/${hostname}/${topic}`;
const data = typeof payload === "object" ? JSON.stringify(payload) : payload;
await publishAsync(client, fullTopic, data, retain);
},
async subscribe(topic: string, handler: (topic: string, payload: Buffer) => void): Promise<void> {
const fullTopic = `${config.mqtt.topicPrefix}/${hostname}/${topic}`;
handlers.set(fullTopic, handler);
await new Promise<void>((resolve, reject) => {
client.subscribe(fullTopic, { qos: 1 }, (err) => {
if (err) reject(err);
else resolve();
});
});
console.log(`Subscribed to: ${fullTopic}`);
},
async disconnect(): Promise<void> {
await publishAsync(client, `${config.mqtt.topicPrefix}/${hostname}/status`, "offline", true);
await new Promise<void>((resolve) => client.end(false, {}, () => resolve()));
},
};
}
function publishAsync(client: MqttClient, topic: string, payload: string, retain: boolean): Promise<void> {
return new Promise((resolve, reject) => {
client.publish(topic, payload, { qos: 1, retain }, (err) => {
if (err) reject(err);
else resolve();
});
});
}
function topicMatches(pattern: string, topic: string): boolean {
if (pattern === topic) return true;
if (pattern.endsWith("#")) {
return topic.startsWith(pattern.slice(0, -1));
}
return false;
}
async function publishDiscovery(client: MqttClient, config: Config, hostname: string): Promise<void> {
const prefix = config.homeassistant.discoveryPrefix;
const topicPrefix = config.mqtt.topicPrefix;
const entityCount = Object.keys(config.entities).length;
if (entityCount === 0) {
console.log("No entities configured, skipping HA discovery");
return;
}
for (const [id, entity] of Object.entries(config.entities)) {
const payload = buildDiscoveryPayload(id, entity, hostname, topicPrefix);
const discoveryTopic = `${prefix}/${entity.type}/${hostname}_${id}/config`;
await publishAsync(client, discoveryTopic, JSON.stringify(payload), true);
}
console.log(`Published Home Assistant discovery for ${entityCount} entities`);
}
function buildDiscoveryPayload(
id: string,
entity: EntityConfig,
hostname: string,
topicPrefix: string
): Record<string, unknown> {
const displayName = entity.name || id.replace(/_/g, " ");
const payload: Record<string, unknown> = {
name: displayName,
unique_id: `systant_${hostname}_${id}`,
device: {
identifiers: [`systant_${hostname}`],
name: `${hostname}`,
manufacturer: "Systant",
},
};
// Stateful entities have a state_topic (buttons don't)
if (entity.type !== "button") {
payload.state_topic = `${topicPrefix}/${hostname}/${id}/state`;
}
// Add availability tracking unless explicitly disabled
if (entity.availability !== false) {
payload.availability_topic = `${topicPrefix}/${hostname}/status`;
payload.payload_available = "online";
payload.payload_not_available = "offline";
}
// Common optional fields
if (entity.icon) payload.icon = entity.icon;
// Type-specific fields
switch (entity.type) {
case "sensor":
if (entity.unit) payload.unit_of_measurement = entity.unit;
if (entity.device_class) payload.device_class = entity.device_class;
break;
case "binary_sensor":
payload.payload_on = "ON";
payload.payload_off = "OFF";
if (entity.device_class) payload.device_class = entity.device_class;
break;
case "light":
case "switch":
payload.command_topic = `${topicPrefix}/${hostname}/${id}/set`;
payload.payload_on = "ON";
payload.payload_off = "OFF";
payload.state_on = "ON";
payload.state_off = "OFF";
break;
case "button":
payload.command_topic = `${topicPrefix}/${hostname}/${id}/press`;
payload.payload_press = "PRESS";
break;
}
return payload;
}
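Note that `topicMatches` above supports only exact topics and the trailing `#` multi-level wildcard; the single-level `+` wildcard from the MQTT spec is not handled, which is fine here because every subscription uses an exact topic. A standalone restatement of that behavior:

```typescript
// Standalone restatement of the matching rule used by the message dispatcher:
// exact match, or a trailing "#" multi-level wildcard. The MQTT "+" single-level
// wildcard is deliberately not supported.
function topicMatches(pattern: string, topic: string): boolean {
  if (pattern === topic) return true;
  if (pattern.endsWith("#")) {
    return topic.startsWith(pattern.slice(0, -1));
  }
  return false;
}

console.log(topicMatches("systant/host/#", "systant/host/cpu/state")); // true
console.log(topicMatches("systant/host/+/state", "systant/host/cpu/state")); // false ("+" unsupported)
```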

systant.toml.example (new file, 75 lines)
@@ -0,0 +1,75 @@
# Systant Configuration
# Copy this to systant.toml and customize for your system
[systant]
# hostname = "myhost" # defaults to system hostname
[mqtt]
broker = "mqtt://localhost:1883"
# username = "user"
# password = "secret"
# clientId = "systant-myhost" # defaults to systant-{hostname}
topicPrefix = "systant"
[entities]
interval = 30 # default interval in seconds
# Sensor examples
[entities.cpu_usage]
type = "sensor"
state_command = "awk '/^cpu / {u=$2+$4; t=$2+$4+$5; print int(u*100/t)}' /proc/stat"
unit = "%"
icon = "mdi:cpu-64-bit"
name = "CPU Usage"
[entities.memory]
type = "sensor"
state_command = "awk '/MemTotal/{t=$2} /MemAvailable/{a=$2} END {print int((t-a)/t*100)}' /proc/meminfo"
unit = "%"
icon = "mdi:memory"
name = "Memory Usage"
[entities.last_seen]
type = "sensor"
state_command = "date -Iseconds"
device_class = "timestamp"
icon = "mdi:clock-check"
name = "Last Seen"
availability = false # keeps last value when offline
# Binary sensor example (read-only on/off)
# [entities.service_running]
# type = "binary_sensor"
# state_command = "systemctl is-active myservice >/dev/null && echo ON || echo OFF"
# device_class = "running"
# icon = "mdi:cog"
# name = "My Service"
# Light example (controllable, for things like monitors)
# [entities.screen]
# type = "light"
# state_command = "xrandr | grep -q 'connected primary' && echo ON || echo OFF"
# on_command = "xrandr --output DP-1 --auto"
# off_command = "xrandr --output DP-1 --off"
# icon = "mdi:monitor"
# name = "Screen"
# Switch example (controllable on/off)
# [entities.vpn]
# type = "switch"
# state_command = "systemctl is-active openvpn >/dev/null && echo ON || echo OFF"
# on_command = "systemctl start openvpn"
# off_command = "systemctl stop openvpn"
# icon = "mdi:vpn"
# name = "VPN"
# Button example (just executes a command)
# [entities.sync_time]
# type = "button"
# press_command = "ntpdate pool.ntp.org"
# icon = "mdi:clock-sync"
# name = "Sync Time"
[homeassistant]
discovery = true
discoveryPrefix = "homeassistant"
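With discovery enabled as above, each entity should surface on predictable topics. The sketch below derives them for the `[entities.cpu_usage]` example using the topic construction from src/mqtt.ts; the hostname `myhost` is an assumption for illustration:

```typescript
// Sketch: topics the cpu_usage example produces, assuming hostname "myhost"
// and the defaults above (topicPrefix "systant", discoveryPrefix "homeassistant").
const hostname = "myhost"; // assumed hostname, for illustration only
const topicPrefix = "systant";
const discoveryPrefix = "homeassistant";
const id = "cpu_usage";

// State is published (retained) here every poll interval:
const stateTopic = `${topicPrefix}/${hostname}/${id}/state`;
// Home Assistant reads the retained discovery payload from here:
const discoveryTopic = `${discoveryPrefix}/sensor/${hostname}_${id}/config`;

console.log(stateTopic); // systant/myhost/cpu_usage/state
console.log(discoveryTopic); // homeassistant/sensor/myhost_cpu_usage/config
```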

tsconfig.json (new file, 29 lines)
@@ -0,0 +1,29 @@
{
"compilerOptions": {
// Environment setup & latest features
"lib": ["ESNext"],
"target": "ESNext",
"module": "Preserve",
"moduleDetection": "force",
"jsx": "react-jsx",
"allowJs": true,
// Bundler mode
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"verbatimModuleSyntax": true,
"noEmit": true,
// Best practices
"strict": true,
"skipLibCheck": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedIndexedAccess": true,
"noImplicitOverride": true,
// Some stricter flags (disabled by default)
"noUnusedLocals": false,
"noUnusedParameters": false,
"noPropertyAccessFromIndexSignature": false
}
}