Compare commits
76 Commits
| SHA1 |
|---|
| f96c656385 |
| 50c0b4d012 |
| 851903eba4 |
| cf2d2869d8 |
| ed969fe0a7 |
| 00e706b186 |
| 19ec70a540 |
| 09d80deec5 |
| b5881fc248 |
| a15d6b0e27 |
| fcc277d1d3 |
| 23259fcce6 |
| e24fad4adc |
| e73ada9adc |
| 8ffb06318e |
| 74d09a43cd |
| 4b11a6663c |
| ced5d77e27 |
| 9b8fdb411c |
| 1a2b8a6516 |
| f16c9c86eb |
| 8ca18f785e |
| f23163501c |
| d05d0938ff |
| 2ce8e95623 |
| dedcfc638d |
| 52081ad8f0 |
| 5ecea9fc21 |
| c4501b381f |
| 2c3c9e359a |
| 191f43372d |
| 694e20e370 |
| 706174eee3 |
| df0f3f42f9 |
| af448da749 |
| 18e54c4be9 |
| 652353c641 |
| 632f9e348c |
| 013e2fc2f6 |
| c89c2143ce |
| e83810fb76 |
| db5ca2f74f |
| 0c14da3435 |
| 5dfe632a81 |
| 1db8f2c70b |
| 8e4e436d18 |
| 8fbdb6aa8c |
| 465b5263ac |
| eb1c091bd2 |
| 48c468b4db |
| 6abc1e23c7 |
| e06c464fb0 |
| ad10a92f26 |
| f56103089d |
| 646858267b |
| 631cf21195 |
| aa05dca668 |
| c99cc3c18a |
| 4557e8af89 |
| 8bc3937956 |
| 37616c4d2d |
| 3f3c38d3ad |
| 50295c1b51 |
| 299bf98f13 |
| 20674b676f |
| 72d9ae137f |
| 70dd911a4c |
| 33b805b582 |
| aae616c92b |
| eef04ca7d5 |
| d1eabea3e3 |
| b24c08ec5a |
| 20a1a0e01a |
| 2af91aa749 |
| f57d755156 |
| e96ce8a7d3 |
BIN
PiCopy_Logo.png
Normal file
Binary file not shown. (PNG, 450 KiB)
136
README.md
@@ -1,3 +1,7 @@
<div align="center">
<img src="PiCopy_Logo.png" alt="PiCopy Logo" width="160">
</div>

# PiCopy

**Automatic USB backup station for the Raspberry Pi with a web interface**
@@ -18,10 +22,19 @@ PiCopy verwandelt deinen Raspberry Pi in ein eigenständiges Backup-Gerät. Stec
| 🔄 | **Duplicate handling** | Skip / overwrite / rename |
| ✅ | **MD5 verification** | Check every file for integrity after copying |
| 🗑️ | **Empty source** | Delete source files after a successful copy (move mode) |
| 🔀 | **Multiple sources** | Copy several USB source ports to one destination at once |
| 💾 | **Internal storage** | Use the Pi's internal SD card as the copy destination |
| 🖧 | **NAS / SMB upload** | Upload to a network share after the local backup |
| 📁 | **Samba share** | Expose the internal storage as an SMB network share |
| 📡 | **WiFi fallback** | Starts its own hotspot when no WLAN is available |
| 🔒 | **WireGuard VPN** | VPN connection for secure remote access |
| 📊 | **System monitoring** | CPU temperature, RAM and SD card usage on the dashboard |
| 💽 | **Storage panel** | Fill level of all connected drives (source, destination, internal) with progress bars |
| ⚠️ | **Disk space warning** | Checks before copying whether enough space is available and warns if not |
| 🕐 | **Copy history** | The last 100 copy jobs are stored |
| ⚡ | **Headless operation** | No monitor or keyboard required |
| 🔁 | **Autostart** | Starts automatically at Pi boot via systemd |
| 💿 | **Format drive** | Format the destination USB drive directly in the browser (exFAT / FAT32 / NTFS) |

---

@@ -81,13 +94,17 @@ Zeigt den Live-Fortschritt mit:
- Percentage progress + progress bar
- File counter (`23 / 147 Dateien`)
- Transferred data volume (`1.2 GB / 3.5 GB`)
- Speed (`⚡ 12.4 MB/s`)
- Remaining time (`⏱ ~4 Min.`)
- Current file
- Phase indicator: *Copy → Verify → Empty source*

After completion: a summary with an ✕ button (disappears automatically after 5 minutes).

A running copy job can be stopped at any time via the *Cancel* button.

**Disk space warning:** Before starting, PiCopy checks whether the destination drive has enough free space. If the space is insufficient, a yellow warning box appears in the log area showing the required and available amounts, and the *Empty source* function is disabled automatically for that run.

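The free-space check described above can be sketched with Python's `shutil.disk_usage`. This is an illustrative sketch, not PiCopy's actual code; the function name and call site are assumptions:

```python
import shutil

def has_enough_space(dest_path: str, bytes_needed: int) -> bool:
    """Return True if the destination filesystem has at least bytes_needed free."""
    free = shutil.disk_usage(dest_path).free
    return free >= bytes_needed

# Example: would 1 MiB fit on the root filesystem?
print(has_enough_space('/', 1024 * 1024))
```

PiCopy additionally ties this check to the move mode: when it returns False, source deletion is skipped even if *Empty source* is enabled.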
### USB Port Configuration & File Explorer

@@ -104,6 +121,14 @@ Nach dem Abschluss: Zusammenfassung mit ✕-Button (verschwindet nach 5 Minuten
- **Gray dot** = port configured, no device plugged in
- **File explorer** for browsing the connected drives

#### Multiple source ports

Several USB ports can be configured as sources. When a copy job starts, all connected source ports are copied to the destination one after another, each into its own subfolder. The port assignments can be cleared all at once via *Reset port assignment*.

#### Internal storage as destination

Instead of a USB drive, the Raspberry Pi's internal storage (`/opt/picopy/internal`) can be selected as the copy destination. This is useful when no destination USB device is available, or as an intermediate buffer.

### Copy Settings

| Setting | Default | Description |
@@ -127,6 +152,49 @@ Nach dem Abschluss: Zusammenfassung mit ✕-Button (verschwindet nach 5 Minuten
| 📷+🎬 Both | Photos + videos combined |
| ✕ All | No filter, copy all files |

### Format Drive

The destination drive can be formatted directly in the web interface, no PC required. The *Format* button appears in the destination card as soon as a USB device is selected in the dropdown list.

| File system | Mac | Windows | File sizes | Recommendation |
|---|---|---|---|---|
| **exFAT** | ✅ read/write | ✅ read/write | unlimited | Recommended for photo/video backup |
| **FAT32** | ✅ read/write | ✅ read/write | max. 4 GB | Older devices / maximum compatibility |
| **NTFS** | ✅ read / ✗ write | ✅ read/write | unlimited | Windows-only workflows |

> **Note:** Formatting irrevocably erases all data on the drive. PiCopy asks for confirmation before starting.
>
> The required packages (`exfatprogs`, `dosfstools`, `ntfs-3g`) are installed automatically along with PiCopy.

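The three file systems map to different `mkfs` tools from the packages listed above. A minimal sketch of how such a command line could be assembled; the function name and label handling are illustrative, not PiCopy's actual code, and the exFAT label flag differs between tool families:

```python
def build_format_command(fs: str, device: str, label: str) -> list[str]:
    """Map a file-system choice to an mkfs command line (not executed here)."""
    if fs == 'exfat':
        # exfatprogs uses -L for the volume label (the older exfat-utils used -n)
        return ['mkfs.exfat', '-L', label, device]
    if fs == 'fat32':
        return ['mkfs.vfat', '-F', '32', '-n', label, device]   # from dosfstools
    if fs == 'ntfs':
        return ['mkfs.ntfs', '-f', '-L', label, device]         # -f = quick format
    raise ValueError(f'unsupported file system: {fs}')

print(build_format_command('fat32', '/dev/sda1', 'PICOPY'))
```

In a real tool the resulting list would be passed to `subprocess.run` after unmounting the partition.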
### Copy History

Every completed copy job is stored in the history (up to 100 entries). The history shows:
- Start time and duration
- Number of copied, skipped and failed files
- Transferred data volume
- Any error message

The history can be cleared completely via the web interface.

### System Monitoring

The dashboard shows live:

| Value | Description |
|---|---|
| CPU temperature | Current value in °C (from `/sys/class/thermal/`) |
| RAM total / used | In MB and as a percentage |
| SD card total / used | In GB and as a percentage |

#### Storage panel

Below the system values, a storage panel shows the fill level of all connected drives:

- **Source** (green), **destination** (blue) and **other** devices are colour-coded
- Shown: used / total storage, free storage, and percentage fill level
- The progress bar changes colour: green (< 75 %), yellow (75–89 %), red (≥ 90 %)
- Internal storage is also shown when it is configured as the destination

### Remote Copy: NAS / SMB

After the local copy, PiCopy uploads to configured NAS shares:

@@ -136,13 +204,28 @@ Nach dem lokalen Kopieren lädt PiCopy auf konfigurierte NAS-Freigaben hoch:
3. *Save & test connection*: PiCopy tests the connection immediately
4. Multiple NAS targets are possible, each individually enableable

### Samba Share (Internal Storage)

The Pi's internal storage can be exposed as an SMB network share:

- Share name: `PiCopy`
- Samba is installed automatically on first activation
- Accessible from Windows, macOS and Linux on the same network

```
\\<pi-ip>\PiCopy
```

### WiFi Settings

| Mode | Description |
|---|---|
| **Home network** | Scan for WLAN networks and connect |
| **Hotspot (AP)** | Its own WLAN when no home network is reachable |

- Available networks can be scanned and selected directly in the web interface
- If the home network connection drops, the hotspot is activated automatically

**Hotspot defaults:**
- SSID: `PiCopy`
- Password: `PiCopy,`
@@ -150,6 +233,19 @@ Nach dem lokalen Kopieren lädt PiCopy auf konfigurierte NAS-Freigaben hoch:

The hotspot starts automatically at boot when the configured WLAN is not available.

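WiFi is managed via NetworkManager (`nmcli`, see the technology table further down). The hotspot fallback corresponds to `nmcli`'s built-in hotspot subcommand; this sketch only builds the argument list with the documented defaults and does not execute anything:

```python
def hotspot_command(ssid: str = 'PiCopy', password: str = 'PiCopy,') -> list[str]:
    """Build the nmcli call that starts an access point with the given credentials."""
    return ['nmcli', 'device', 'wifi', 'hotspot', 'ssid', ssid, 'password', password]

print(' '.join(hotspot_command()))
```

A real fallback would run this via `subprocess.run` after the home-network connection attempt times out.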
### WireGuard VPN

PiCopy supports WireGuard for secure remote access (e.g. from the internet):

1. Click *Install WireGuard*: installs `wireguard` and `openresolv` via apt
2. Paste the WireGuard configuration file (`.conf`) into the text field
3. Click *Connect*
4. Optional: enable *Connect automatically at startup*

To disconnect, click *Disconnect*, or uninstall WireGuard via the interface.

> **Note:** The private key is masked in the display (`****`) but remains stored on the Pi.

---

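Masking the `PrivateKey` line before displaying a config in the UI can be done with a small filter. This is an illustrative sketch, not the code PiCopy uses:

```python
import re

def mask_private_key(conf_text: str) -> str:
    """Replace the value of any PrivateKey line in a WireGuard config with ****."""
    return re.sub(r'(?m)^(\s*PrivateKey\s*=\s*).+$', r'\1****', conf_text)

print(mask_private_key('[Interface]\nPrivateKey = abc123=\nAddress = 10.0.0.2/32'))
```

The multiline flag `(?m)` makes `^`/`$` match per line, so only the key line is rewritten and the rest of the config stays intact.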
## Folder Structure on the Destination

@@ -167,6 +263,15 @@ Der Hotspot startet automatisch beim Boot wenn das konfigurierte WLAN nicht verf
```
└── notes.txt
```

With multiple source ports, each source gets its own subfolder:

```
/destination-drive/
└── 2024-01-15_143022/
    ├── Samsung_USB/
    └── SanDisk_Extreme/
```

---

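The timestamped folder names follow the configured `folder_format` plus an optional time suffix, matching the `2024-01-15_143022` example above. A sketch of the naming, assuming the documented `%Y-%m-%d` default and `add_time` option (the function name is illustrative):

```python
from datetime import datetime

def backup_folder_name(ts: datetime,
                       folder_format: str = '%Y-%m-%d',
                       add_time: bool = True) -> str:
    """Build the destination folder name for one copy run."""
    name = ts.strftime(folder_format)
    if add_time:
        name += '_' + ts.strftime('%H%M%S')
    return name

print(backup_folder_name(datetime(2024, 1, 15, 14, 30, 22)))
# → 2024-01-15_143022
```

With `add_time` disabled, two runs on the same day would land in the same dated folder.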
## Update

@@ -243,7 +348,9 @@ sudo systemctl stop picopy
| USB detection | `lsblk` + `udevadm` |
| USB monitoring | `pyudev` (udev events) |
| WiFi management | NetworkManager (`nmcli`) |
| VPN | WireGuard (`wg-quick`) |
| NAS sync | `rclone` (SMB) |
| Network share | Samba (`smbd`) |
| Service | systemd (autostart, auto-restart) |

**File paths on the Pi:**

@@ -252,9 +359,13 @@ sudo systemctl stop picopy
|---|---|
| `/opt/picopy/app.py` | Main application |
| `/opt/picopy/config.json` | Configuration (ports, WiFi, settings) |
| `/opt/picopy/state.json` | Last copy status (persisted) |
| `/opt/picopy/history.json` | Copy history (max. 100 entries) |
| `/opt/picopy/rclone.conf` | NAS credentials (rclone) |
| `/opt/picopy/internal/` | Internal storage as copy destination |
| `/opt/picopy/logs/picopy.log` | Log file |
| `/opt/picopy/version.txt` | Current version number |
| `/etc/wireguard/picopy.conf` | WireGuard configuration |
| `/etc/systemd/system/picopy.service` | Systemd service |

---

@@ -274,33 +385,28 @@ sudo systemctl stop picopy

This is how a new release is created that all users automatically see as an update:

**1. Bump the version**

In `version.txt`:
```
1.0.72
```

**2. Commit & push**

```bash
git add version.txt
git commit -m "Release v1.0.72"
git push
```

**3. Create a release/tag in Gitea** *(optional, but recommended)*

At [git.leuschner.dev/Tobias/PiCopy/releases](https://git.leuschner.dev/Tobias/PiCopy/releases) → *New release* → set tag `v1.0.72`.

**That's it.** All running PiCopy instances detect the update automatically within 6 hours and show the badge in the web interface.

> **Note:** `version.txt` is the source of truth. `app.py` reads this file at startup, and the installer/updater places it next to the app at `/opt/picopy/version.txt`.

---

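The update check compares the repository's `version.txt` against the installed one. A plausible numeric comparison is sketched below; PiCopy's actual check may differ, and the function name is an assumption:

```python
def is_newer(remote: str, local: str) -> bool:
    """Compare dotted version strings numerically, so that 1.0.72 > 1.0.9."""
    def to_tuple(v: str) -> tuple[int, ...]:
        return tuple(int(part) for part in v.strip().split('.'))
    return to_tuple(remote) > to_tuple(local)

print(is_newer('1.0.72', '1.0.9'))
# → True
```

A plain string comparison would get this wrong (`'1.0.72' < '1.0.9'` lexically), which is why the parts are converted to integers first.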
67
install.sh
@@ -35,24 +35,69 @@ echo ""

# ── System-Pakete ─────────────────────────────────────────────────────────────
info "Systemabhängigkeiten werden installiert..."
apt-get update -q
apt-get install -y -q python3 python3-venv python3-pip util-linux rclone \
    exfatprogs dosfstools ntfs-3g
ok "Systemabhängigkeiten installiert"

# ── Verzeichnisse anlegen ─────────────────────────────────────────────────────
info "Installationsverzeichnis: $INSTALL_DIR"
mkdir -p "$INSTALL_DIR/logs"
mkdir -p "$INSTALL_DIR/picopy"
mkdir -p "$INSTALL_DIR/routes"
mkdir -p "$INSTALL_DIR/templates"

# ── Hilfsfunktion: Datei kopieren oder herunterladen ──────────────────────────
install_file() {
    local src="$1"   # relativer Pfad im Repo / im lokalen Verzeichnis
    local dst="$2"   # absoluter Zielpfad

    if [ -f "./$src" ]; then
        info "Lokale Datei wird verwendet: $src"
        cp "./$src" "$dst"
    else
        info "Datei wird heruntergeladen: $src"
        curl -sSfL "$REPO_RAW/$src" -o "$dst" \
            || warn "Download fehlgeschlagen: $src (nicht kritisch wenn optional)"
    fi
}

# ── Hauptdateien ──────────────────────────────────────────────────────────────
install_file "app.py" "$INSTALL_DIR/app.py"
ok "app.py installiert"

install_file "version.txt" "$INSTALL_DIR/version.txt"
ok "version.txt installiert"

install_file "PiCopy_Logo.png" "$INSTALL_DIR/PiCopy_Logo.png"
ok "Logo installiert"

# ── picopy/ Paket ─────────────────────────────────────────────────────────────
install_file "picopy/__init__.py" "$INSTALL_DIR/picopy/__init__.py"
install_file "picopy/config.py" "$INSTALL_DIR/picopy/config.py"
install_file "picopy/state.py" "$INSTALL_DIR/picopy/state.py"
install_file "picopy/usb.py" "$INSTALL_DIR/picopy/usb.py"
install_file "picopy/copy_engine.py" "$INSTALL_DIR/picopy/copy_engine.py"
install_file "picopy/wifi.py" "$INSTALL_DIR/picopy/wifi.py"
install_file "picopy/wireguard.py" "$INSTALL_DIR/picopy/wireguard.py"
install_file "picopy/samba.py" "$INSTALL_DIR/picopy/samba.py"
install_file "picopy/upload.py" "$INSTALL_DIR/picopy/upload.py"
install_file "picopy/system.py" "$INSTALL_DIR/picopy/system.py"
ok "picopy/-Paket installiert"

# ── routes/ Paket ─────────────────────────────────────────────────────────────
install_file "routes/__init__.py" "$INSTALL_DIR/routes/__init__.py"
install_file "routes/copy_routes.py" "$INSTALL_DIR/routes/copy_routes.py"
install_file "routes/wifi_routes.py" "$INSTALL_DIR/routes/wifi_routes.py"
install_file "routes/wireguard_routes.py" "$INSTALL_DIR/routes/wireguard_routes.py"
install_file "routes/upload_routes.py" "$INSTALL_DIR/routes/upload_routes.py"
install_file "routes/system_routes.py" "$INSTALL_DIR/routes/system_routes.py"
install_file "routes/browse_routes.py" "$INSTALL_DIR/routes/browse_routes.py"
ok "routes/-Paket installiert"

# ── templates/ ────────────────────────────────────────────────────────────────
install_file "templates/index.html" "$INSTALL_DIR/templates/index.html"
ok "Template installiert"

# ── Python-Umgebung ───────────────────────────────────────────────────────────
info "Python venv wird erstellt..."
python3 -m venv "$INSTALL_DIR/venv"

0
picopy/__init__.py
Normal file
98
picopy/config.py
Normal file
@@ -0,0 +1,98 @@
"""PiCopy – Konfiguration, Pfade, Konstanten, Logging."""

import os
import json
import logging
from pathlib import Path

RAW_BASE = 'https://git.leuschner.dev/Tobias/PiCopy/raw/branch/main'
VERSION_FILE = Path(__file__).parent.parent / 'version.txt'


def load_installed_version():
    try:
        return VERSION_FILE.read_text(encoding='utf-8').strip() or '1.0.4'
    except Exception:
        return 'X.X.X'


VERSION = load_installed_version()

BASE_DIR = Path('/opt/picopy')
CONFIG_FILE = BASE_DIR / 'config.json'
STATE_FILE = BASE_DIR / 'state.json'
LOG_DIR = BASE_DIR / 'logs'
LOG_FILE = LOG_DIR / 'picopy.log'
INTERNAL_DEST_DIR = BASE_DIR / 'internal'
LOG_DIR.mkdir(parents=True, exist_ok=True)
HISTORY_FILE = BASE_DIR / 'history.json'
MAX_HISTORY = 100

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
    handlers=[logging.FileHandler(LOG_FILE), logging.StreamHandler()]
)
log = logging.getLogger('picopy')

NM_AP_CON = 'PiCopy-AP'
NM_CLIENT_CON = 'PiCopy-WiFi'
WIFI_BOOT_WAIT = 25  # Sekunden warten beim Start bevor AP gestartet wird

DEFAULT_CONFIG = {
    # USB
    'source_ports': [],  # [{port, label}, ...]
    'source_port': None, 'source_label': '',  # Migration legacy
    'dest_port': None, 'dest_label': '',
    'dest_type': 'usb', 'internal_dest_label': 'Interner Speicher',
    'internal_share_enabled': False,
    'folder_format': '%Y-%m-%d', 'add_time': True,
    'subfolder': True, 'auto_copy': True,
    'file_filter': '', 'exclude_system': True,
    'duplicate_handling': 'skip',
    'verify_checksum': False, 'delete_source': False,
    # WiFi
    'wifi_ssid': '', 'wifi_password': '',
    'ap_ssid': 'PiCopy', 'ap_password': 'PiCopy,',
    # WireGuard
    'wireguard_auto': False,
}


def load_cfg():
    cfg = DEFAULT_CONFIG.copy()
    try:
        if CONFIG_FILE.exists():
            cfg.update(json.loads(CONFIG_FILE.read_text(encoding='utf-8')))
    except (json.JSONDecodeError, ValueError) as e:
        log.error(f'config.json korrupt ({e}), verwende Standardwerte')
        try: CONFIG_FILE.rename(CONFIG_FILE.with_suffix('.corrupt'))
        except Exception: pass
    except Exception as e:
        log.warning(f'config.json nicht lesbar: {e}')
    return cfg


def save_cfg(cfg):
    _atomic_write(CONFIG_FILE, json.dumps(cfg, indent=2))


def _atomic_write(path: Path, content: str) -> None:
    """Schreibt atomar: erst .tmp, dann os.replace() - sicher bei Stromausfall."""
    tmp = path.with_suffix(path.suffix + '.tmp')
    try:
        tmp.write_text(content, encoding='utf-8')
        with open(tmp, 'rb') as fh:
            os.fsync(fh.fileno())  # Daten wirklich auf Datenträger schreiben
        os.replace(str(tmp), str(path))  # Atomares Umbenennen (POSIX-Garantie)
    except Exception:
        try: tmp.unlink(missing_ok=True)
        except Exception: pass
        raise


def _fmt_bytes(b):
    if b < 1024: return f'{b} B'
    if b < 1024**2: return f'{b/1024:.1f} KB'
    if b < 1024**3: return f'{b/1024**2:.1f} MB'
    return f'{b/1024**3:.2f} GB'
415
picopy/copy_engine.py
Normal file
@@ -0,0 +1,415 @@
"""PiCopy – Kopierlogik: do_copy, check_auto_copy, usb_monitor."""

import hashlib as _hashlib
import os
import re
import shutil
import subprocess
import threading
import time
from datetime import datetime
from pathlib import Path

from picopy.config import load_cfg, _fmt_bytes, log
from picopy.state import (
    copy_state, copy_lock, save_state, append_history, add_log
)
from picopy.usb import usb_devices, ensure_mount, internal_dest_device

_copy_thread: threading.Thread | None = None

SYSTEM_EXCLUDES = {
    '.DS_Store', 'Thumbs.db', 'thumbs.db', 'desktop.ini',
    '.Spotlight-V100', '.Trashes', '.fseventsd', '.TemporaryItems',
    '.VolumeIcon.icns', 'RECYCLER', '$RECYCLE.BIN',
    'System Volume Information', '.DocumentRevisions-V100',
}


def _should_copy(f: Path, cfg: dict) -> bool:
    if cfg.get('exclude_system'):
        for part in f.parts:
            if part in SYSTEM_EXCLUDES:
                return False
        if f.name.startswith('._'):
            return False
    filt = cfg.get('file_filter', '').strip()
    if filt:
        allowed = {e.strip().lower().lstrip('.') for e in filt.split(',') if e.strip()}
        if f.suffix.lower().lstrip('.') not in allowed:
            return False
    return True


def _unique_path(p: Path) -> Path:
    stem, suffix, parent = p.stem, p.suffix, p.parent
    i = 1
    while True:
        candidate = parent / f'{stem}_({i}){suffix}'
        if not candidate.exists():
            return candidate
        i += 1


def _file_md5(p: Path) -> str:
    h = _hashlib.md5()
    with open(p, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()


def _resolve_source_ports(cfg) -> list:
    """Gibt source_ports als [{port, label}]-Liste zurück. Migriert altes source_port-Feld."""
    ports = cfg.get('source_ports') or []
    if not ports and cfg.get('source_port'):
        ports = [{'port': cfg['source_port'], 'label': cfg.get('source_label', '')}]
    return ports


def _configured_destination(cfg, devs):
    if cfg.get('dest_type') == 'internal':
        return internal_dest_device(cfg)
    return next((d for d in devs if d['usb_port'] == cfg.get('dest_port')), None)


def do_copy(src_devs, dst_dev, cfg):
    """Kopiert von einer oder mehreren Quellen auf ein Ziel."""
    dst_mp = None
    dst_owned = False
    src_mounts = []  # [(src_dev, src_mp, src_owned)]
    _upload_thread = None
    _hist = {
        'start': time.time(),
        'ok': False, 'copied': 0, 'skipped': 0, 'errors': 0,
        'bytes': 0, 'error_msg': '',
    }
    try:
        with copy_lock:
            copy_state.update(running=True, progress=0, error=None,
                              done=0, total=0, logs=[], current='',
                              bytes_total=0, bytes_done=0,
                              start_ts=time.time(), eta_sec=None, speed_bps=0,
                              phase='copy',
                              space_warning=False, space_needed=0, space_free=0,
                              last_success_file='')
        save_state()
        n = len(src_devs)
        add_log(f'Kopiervorgang gestartet ({n} Quelle{"n" if n != 1 else ""})')

        dst_mp, dst_owned = ensure_mount(dst_dev)
        if not dst_mp:
            raise RuntimeError(f'Ziel nicht mountbar: {dst_dev["device"]}')
        add_log(f'Ziel: {dst_mp} ({dst_dev["label"]})')

        ts = datetime.now()
        date_str = ts.strftime(cfg['folder_format'])
        if cfg.get('add_time'):
            date_str += '_' + ts.strftime('%H%M%S')

        # -- Alle Quellen mounten & Dateien sammeln -------------------------
        # source_data: [(src_dev, src_path, files, dst_dir, incomplete_marker)]
        source_data = []
        total = 0
        bytes_total = 0

        for src_dev in src_devs:
            with copy_lock:
                cancelled = not copy_state['running']
            if cancelled:
                add_log('Abgebrochen')
                return

            src_mp_i, src_owned_i = ensure_mount(src_dev)
            src_mounts.append((src_dev, src_mp_i, src_owned_i))
            if not src_mp_i:
                add_log(f'Quelle nicht mountbar: {src_dev["device"]} - übersprungen')
                continue

            add_log(f'Quelle: {src_mp_i} ({src_dev["label"]})')
            src_path = Path(src_mp_i)
            all_files = [f for f in src_path.rglob('*') if f.is_file()]
            files = [f for f in all_files if _should_copy(f, cfg)]
            n_filtered = len(all_files) - len(files)
            if n_filtered:
                add_log(f'{n_filtered} Dateien gefiltert ({src_dev["label"]})')

            label = re.sub(r'[^\w\-]', '_', src_dev.get('label', 'source'))
            dst_dir_i = Path(dst_mp) / date_str
            if cfg.get('subfolder'):
                dst_dir_i = dst_dir_i / label
            dst_dir_i.mkdir(parents=True, exist_ok=True)
            add_log(f'Zielordner: {dst_dir_i}')

            for stale in dst_dir_i.rglob('*.picopy_tmp'):
                stale.unlink(missing_ok=True)

            incomplete_marker_i = dst_dir_i / '.picopy_incomplete'
            import json as _json
            incomplete_marker_i.write_text(_json.dumps({
                'started': datetime.now().isoformat(),
                'source': src_dev.get('label', ''),
            }))

            total += len(files)
            bytes_total += sum(f.stat().st_size for f in files)
            source_data.append((src_dev, src_path, files, dst_dir_i, incomplete_marker_i))

        with copy_lock:
            copy_state['total'] = total
            copy_state['bytes_total'] = bytes_total
        add_log(f'{total} Dateien gesamt ({_fmt_bytes(bytes_total)})')

        # -- Speicherplatz-Prüfung ------------------------------------------
        try:
            dst_free = shutil.disk_usage(dst_mp).free
        except Exception:
            dst_free = 0
        if bytes_total > 0 and dst_free < bytes_total:
            with copy_lock:
                copy_state.update(space_warning=True,
                                  space_needed=bytes_total,
                                  space_free=dst_free)
            add_log(
                f'⚠ Nicht genug Speicherplatz! '
                f'Benötigt: {_fmt_bytes(bytes_total)}, '
                f'Verfügbar: {_fmt_bytes(dst_free)} – '
                f'Quelle wird nicht gelöscht'
            )
        save_state()

        # -- Phase 1: Kopieren (alle Quellen) --------------------------------
        dup_mode = cfg.get('duplicate_handling', 'skip')
        all_copied_pairs = []
        skipped = 0
        io_errors = 0
        global_done = 0

        for src_dev_i, src_path_i, files_i, dst_dir_i, _ in source_data:
            if len(src_devs) > 1:
                add_log(f'Kopiere: {src_dev_i["label"]}')
            for f in files_i:
                with copy_lock:
                    cancelled = not copy_state['running']
                if cancelled:
                    add_log('Abgebrochen')
                    return
                global_done += 1
                rel = f.relative_to(src_path_i)
                dst_f = dst_dir_i / rel
                try:
                    dst_f.parent.mkdir(parents=True, exist_ok=True)
                except OSError as mkdir_err:
                    io_errors += 1
                    add_log(f'⚠ Verzeichnis nicht erstellbar ({dst_f.parent.name}): {mkdir_err}')
                    with copy_lock:
                        copy_state.update(done=global_done,
                                          progress=int(global_done/total*100) if total else 100,
                                          current=str(f.name))
                    continue

                if dst_f.exists():
                    if dup_mode == 'skip':
                        if dst_f.stat().st_size == f.stat().st_size:
                            skipped += 1
                            with copy_lock:
                                copy_state.update(done=global_done,
                                                  progress=int(global_done/total*100) if total else 100,
                                                  current=str(f.name))
                            continue
                        else:
                            add_log(f'Unvollständige Datei, wird neu kopiert: {f.name}')
                    elif dup_mode == 'rename':
                        dst_f = _unique_path(dst_f)

                fsize = f.stat().st_size
                tmp_f = dst_f.with_name(dst_f.name + '.picopy_tmp')
                try:
                    shutil.copy2(f, tmp_f)
                    os.replace(str(tmp_f), str(dst_f))
                except OSError as copy_err:
                    try: tmp_f.unlink(missing_ok=True)
                    except Exception: pass
                    io_errors += 1
                    add_log(f'⚠ Fehler bei {f.name}: {copy_err}')
                    with copy_lock:
                        copy_state.update(done=global_done,
                                          progress=int(global_done/total*100) if total else 100,
                                          current=str(f.name))
                    continue
                all_copied_pairs.append((f, dst_f))

                with copy_lock:
                    copy_state['bytes_done'] += fsize
                    copy_state['last_success_file'] = str(dst_f)
                    bd = copy_state['bytes_done']
                    bt = copy_state['bytes_total']
                    elapsed = time.time() - copy_state['start_ts']
                    speed = bd / elapsed if elapsed > 1 else 0
                    eta = int((bt - bd) / speed) if speed > 0 and bt > bd else 0
                    copy_state.update(done=global_done,
                                      progress=int(global_done/total*100) if total else 100,
                                      current=str(f.name), speed_bps=int(speed), eta_sec=eta)
                if global_done % 20 == 0:
                    save_state()

        msg_parts = [f'{len(all_copied_pairs)} kopiert']
        if skipped:
            msg_parts.append(f'{skipped} übersprungen')
        if io_errors:
            msg_parts.append(f'{io_errors} Fehler (I/O)')

        # -- Phase 2: Verifizieren ------------------------------------------
        verify_errors = 0
        verified_pairs = list(all_copied_pairs)

        if cfg.get('verify_checksum') and all_copied_pairs:
            with copy_lock:
                copy_state.update(phase='verify', progress=0, done=0,
                                  total=len(all_copied_pairs), current='',
                                  eta_sec=None, speed_bps=0)
            add_log(f'Verifiziere {len(all_copied_pairs)} Dateien...')
            verified_pairs = []

            for i, (src_f, dst_f) in enumerate(all_copied_pairs):
                with copy_lock:
                    cancelled = not copy_state['running']
                    if not cancelled:
                        copy_state.update(done=i+1,
                                          progress=int((i+1)/len(all_copied_pairs)*100),
                                          current=src_f.name)
                if cancelled:
                    add_log('Abgebrochen')
                    return
                if _file_md5(src_f) == _file_md5(dst_f):
                    verified_pairs.append((src_f, dst_f))
                else:
                    verify_errors += 1
                    add_log(f'⚠ Prüfsummenfehler: {src_f.name}')
                    try: dst_f.unlink()
                    except Exception: pass

            if verify_errors:
                msg_parts.append(f'{verify_errors} Prüfsummenfehler!')
                add_log(f'Verifizierung: {verify_errors} Fehler!')
            else:
                add_log(f'Alle {len(verified_pairs)} Dateien verifiziert ✓')

        # -- Phase 3: Quelle löschen ----------------------------------------
        if cfg.get('delete_source') and verified_pairs:
            with copy_lock:
                _space_warn = copy_state.get('space_warning', False)
            if _space_warn:
                add_log('Quelldateien NICHT gelöscht (Speicherplatz unzureichend)')
            elif verify_errors:
                add_log('Quelldateien NICHT gelöscht (Prüfsummenfehler)')
            else:
                with copy_lock:
                    copy_state.update(phase='delete', current='')
                add_log(f'Lösche {len(verified_pairs)} Quelldateien...')
                del_errors = 0
                for src_f, _ in verified_pairs:
                    try:
                        src_f.unlink()
                    except Exception as e:
                        del_errors += 1
                        log.warning(f'Löschen fehlgeschlagen: {src_f}: {e}')
                if del_errors:
                    msg_parts.append(f'{del_errors} Löschfehler')
                else:
                    add_log('Quelle geleert ✓')

        subprocess.run(['sync'], capture_output=True)
        for _, _, _, _, incomplete_marker_i in source_data:
            try: incomplete_marker_i.unlink(missing_ok=True)
            except Exception: pass

        with copy_lock:
            copy_state['last_copy'] = datetime.now().isoformat()
            _hist['bytes'] = copy_state['bytes_done']
        _hist.update(ok=True, copied=len(all_copied_pairs),
                     skipped=skipped, errors=io_errors)
        add_log('Fertig! ' + ', '.join(msg_parts))

        dst_dir_root = Path(dst_mp) / date_str
        upload_files = [dst_f for _, dst_f in verified_pairs if dst_f.exists()]
        if upload_files:
            from picopy.upload import run_uploads
            _upload_thread = threading.Thread(
                target=run_uploads,
                args=(dst_dir_root, cfg, upload_files),
                daemon=True
            )
            _upload_thread.start()
        elif any(t.get('enabled') for t in cfg.get('upload_targets', [])):
            add_log('NAS-Upload: keine neu auf das Ziel übertragenen Dateien')

    except Exception as e:
        log.exception('Copy failed')
        with copy_lock:
            copy_state['error'] = str(e)
        _hist['error_msg'] = str(e)
        add_log(f'Fehler: {e}')

    finally:
        # Erst warten bis NAS-Upload fertig, dann erst unmounten
        if _upload_thread is not None and _upload_thread.is_alive():
|
||||
add_log('Warte auf NAS-Upload vor Unmount...')
|
||||
_upload_thread.join()
|
||||
subprocess.run(['sync'], capture_output=True)
|
||||
for _, src_mp_i, src_owned_i in src_mounts:
|
||||
if src_owned_i and src_mp_i:
|
||||
subprocess.run(['umount', src_mp_i], capture_output=True)
|
||||
if dst_owned and dst_mp:
|
||||
subprocess.run(['umount', dst_mp], capture_output=True)
|
||||
with copy_lock:
|
||||
copy_state['running'] = False
|
||||
copy_state['current'] = ''
|
||||
copy_state['phase'] = 'idle'
|
||||
save_state()
|
||||
# Verlaufseintrag speichern
|
||||
append_history({
|
||||
'ts': datetime.now().isoformat(),
|
||||
'duration': int(time.time() - _hist['start']),
|
||||
'sources': [d.get('label', d.get('device', '?')) for d in src_devs],
|
||||
'dest': dst_dev.get('label', dst_dev.get('device', '?')) if dst_dev else '?',
|
||||
'copied': _hist['copied'],
|
||||
'skipped': _hist['skipped'],
|
||||
'errors': _hist['errors'],
|
||||
'bytes': _hist['bytes'],
|
||||
'ok': _hist['ok'],
|
||||
'error': _hist['error_msg'],
|
||||
})
|
||||
|
||||
|
||||
def check_auto_copy():
|
||||
cfg = load_cfg()
|
||||
src_ports = _resolve_source_ports(cfg)
|
||||
if not cfg.get('auto_copy') or not src_ports:
|
||||
return
|
||||
if cfg.get('dest_type') != 'internal' and not cfg.get('dest_port'):
|
||||
return
|
||||
with copy_lock:
|
||||
if copy_state['running'] or copy_state['error']:
|
||||
return
|
||||
devs = usb_devices()
|
||||
srcs = [next((d for d in devs if d['usb_port'] == sp['port']), None) for sp in src_ports]
|
||||
srcs = [s for s in srcs if s is not None]
|
||||
dst = _configured_destination(cfg, devs)
|
||||
if srcs and dst:
|
||||
log.info(f'Auto-Copy: {len(srcs)} Quelle(n) und Ziel verbunden')
|
||||
threading.Thread(target=do_copy, args=(srcs, dst, cfg), daemon=True).start()
|
||||
|
||||
|
||||
def usb_monitor():
|
||||
try:
|
||||
import pyudev
|
||||
ctx = pyudev.Context()
|
||||
mon = pyudev.Monitor.from_netlink(ctx)
|
||||
mon.filter_by(subsystem='block', device_type='disk')
|
||||
for dev in iter(mon.poll, None):
|
||||
if dev.action == 'add':
|
||||
log.info(f'USB eingesteckt: {dev.device_node}')
|
||||
threading.Timer(3.0, check_auto_copy).start()
|
||||
except ImportError:
|
||||
log.warning('pyudev nicht verfügbar')
|
||||
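The tail of the copy loop above derives throughput, ETA and percentage from cumulative byte counters (`bd` bytes copied so far, `bt` bytes total). The same arithmetic as a standalone sketch; the function and variable names here are illustrative, not part of the module:

```python
def progress_stats(bytes_done: int, bytes_total: int, elapsed: float):
    """Mirror of the speed/ETA arithmetic used in the copy loop (sketch)."""
    # Avoid a wildly inflated speed estimate in the first second
    speed = bytes_done / elapsed if elapsed > 1 else 0
    eta = int((bytes_total - bytes_done) / speed) if speed > 0 and bytes_total > bytes_done else 0
    pct = int(bytes_done / bytes_total * 100) if bytes_total else 100
    return speed, eta, pct

# 50 MiB copied out of 100 MiB in 10 s -> 5 MiB/s, 10 s remaining, 50 %
speed, eta, pct = progress_stats(50 * 2**20, 100 * 2**20, 10.0)
```

Guarding the division with `elapsed > 1` and `speed > 0` keeps the ETA at 0 instead of raising `ZeroDivisionError` right after the transfer starts.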
picopy/samba.py (Normal file, 151 lines)
@@ -0,0 +1,151 @@
"""PiCopy – internal storage/Samba: internal_share_state, all Samba functions."""

import os
import re
import shutil
import subprocess
import threading
from pathlib import Path

from picopy.config import INTERNAL_DEST_DIR, load_cfg, save_cfg, log

SAMBA_CONF = Path('/etc/samba/smb.conf')
SAMBA_BEGIN = '# BEGIN PICOPY INTERNAL SHARE'
SAMBA_END = '# END PICOPY INTERNAL SHARE'

internal_share_state = {
    'installed': False,
    'enabled': False,
    'active': False,
    'path': str(INTERNAL_DEST_DIR),
    'share': 'PiCopy',
    'pkg_running': False,
    'pkg_error': None,
    'error': None,
}
internal_share_lock = threading.Lock()


def _internal_usage():
    INTERNAL_DEST_DIR.mkdir(parents=True, exist_ok=True)
    usage = shutil.disk_usage(INTERNAL_DEST_DIR)
    return {
        'path': str(INTERNAL_DEST_DIR),
        'total': usage.total,
        'used': usage.used,
        'free': usage.free,
    }


def smbd_installed():
    return shutil.which('smbd') is not None


def _systemctl(*args, timeout=20):
    try:
        return subprocess.run(['systemctl'] + list(args), capture_output=True,
                              text=True, timeout=timeout)
    except Exception as e:
        return subprocess.CompletedProcess(['systemctl'] + list(args), 1,
                                           stdout='', stderr=str(e))


def _smbd_active():
    if not smbd_installed():
        return False
    r = _systemctl('is-active', 'smbd', timeout=5)
    return r.returncode == 0 and r.stdout.strip() == 'active'


def internal_share_update_state():
    cfg = load_cfg()
    usage = _internal_usage()
    with internal_share_lock:
        internal_share_state.update(
            installed=smbd_installed(),
            enabled=bool(cfg.get('internal_share_enabled')),
            active=_smbd_active(),
            path=usage['path'],
            total=usage['total'],
            used=usage['used'],
            free=usage['free'],
        )
        return dict(internal_share_state)


def _write_samba_share(enabled: bool):
    old = SAMBA_CONF.read_text(encoding='utf-8') if SAMBA_CONF.exists() else ''
    pattern = re.compile(rf'\n?{re.escape(SAMBA_BEGIN)}.*?{re.escape(SAMBA_END)}\n?', re.S)
    cleaned = pattern.sub('\n', old).rstrip() + '\n'
    if enabled:
        INTERNAL_DEST_DIR.mkdir(parents=True, exist_ok=True)
        INTERNAL_DEST_DIR.chmod(0o755)
        block = f"""
{SAMBA_BEGIN}
[PiCopy]
path = {INTERNAL_DEST_DIR}
browseable = yes
read only = yes
guest ok = yes
force user = root
{SAMBA_END}
"""
        cleaned += block
    tmp = SAMBA_CONF.with_suffix('.conf.picopy_tmp')
    tmp.write_text(cleaned, encoding='utf-8')
    os.replace(str(tmp), str(SAMBA_CONF))


def _install_samba_if_needed():
    if smbd_installed():
        return True, ''
    with internal_share_lock:
        internal_share_state.update(pkg_running=True, pkg_error=None)
    try:
        r = subprocess.run(['apt-get', 'install', '-y', 'samba'],
                           capture_output=True, text=True, timeout=300,
                           env={**os.environ, 'DEBIAN_FRONTEND': 'noninteractive'})
        if r.returncode != 0:
            err = (r.stderr.strip().splitlines()[-1]
                   if r.stderr.strip() else 'samba-Installation fehlgeschlagen')
            with internal_share_lock:
                internal_share_state['pkg_error'] = err
            return False, err
        return True, ''
    except Exception as e:
        with internal_share_lock:
            internal_share_state['pkg_error'] = str(e)
        return False, str(e)
    finally:
        with internal_share_lock:
            internal_share_state['pkg_running'] = False


def set_internal_share_enabled(enabled: bool):
    ok, err = (True, '')
    if enabled:
        ok, err = _install_samba_if_needed()
        if not ok:
            return False, err
    elif not smbd_installed():
        cfg = load_cfg()
        cfg['internal_share_enabled'] = False
        save_cfg(cfg)
        internal_share_update_state()
        return True, ''
    try:
        _write_samba_share(enabled)
        if enabled:
            _systemctl('enable', '--now', 'smbd', timeout=60)
            _systemctl('restart', 'smbd', timeout=60)
        else:
            _systemctl('restart', 'smbd', timeout=60)
        cfg = load_cfg()
        cfg['internal_share_enabled'] = enabled
        save_cfg(cfg)
        internal_share_update_state()
        return True, ''
    except Exception as e:
        with internal_share_lock:
            internal_share_state['error'] = str(e)
        return False, str(e)
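`_write_samba_share` keeps the PiCopy share inside a marker-delimited block so it can be rewritten any number of times without touching the rest of `smb.conf`. The remove-then-append step, reduced to a standalone sketch (the sample config text is made up):

```python
import re

BEGIN = '# BEGIN PICOPY INTERNAL SHARE'
END = '# END PICOPY INTERNAL SHARE'

def replace_block(conf_text: str, block_body: str) -> str:
    """Remove any existing marker block, then append a fresh one."""
    pattern = re.compile(rf'\n?{re.escape(BEGIN)}.*?{re.escape(END)}\n?', re.S)
    cleaned = pattern.sub('\n', conf_text).rstrip() + '\n'
    return cleaned + f'\n{BEGIN}\n{block_body}\n{END}\n'

once = replace_block('[global]\nworkgroup = WORKGROUP\n', '[PiCopy]\npath = /data')
twice = replace_block(once, '[PiCopy]\npath = /data')
assert once == twice  # rewriting is idempotent: never a second copy of the block
```

Because `re.S` makes `.` match newlines, the whole old block (however many lines) is removed in one substitution before the new one is appended.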
picopy/state.py (Normal file, 73 lines)
@@ -0,0 +1,73 @@
"""PiCopy – copy status, history, add_log."""

import json
import threading
from datetime import datetime
from pathlib import Path

from picopy.config import (
    STATE_FILE, HISTORY_FILE, MAX_HISTORY,
    _atomic_write, log
)

copy_state = {
    'running': False, 'progress': 0,
    'total': 0, 'done': 0, 'current': '',
    'error': None, 'last_copy': None, 'logs': [],
    'bytes_total': 0, 'bytes_done': 0,
    'start_ts': None, 'eta_sec': None, 'speed_bps': 0,
    'phase': 'idle',
    'space_warning': False, 'space_needed': 0, 'space_free': 0,
    'last_success_file': '',
}
copy_lock = threading.Lock()


def load_state():
    global copy_state
    try:
        if STATE_FILE.exists():
            saved = json.loads(STATE_FILE.read_text(encoding='utf-8'))
            saved['running'] = False
            saved['current'] = ''
            copy_state.update(saved)
    except (json.JSONDecodeError, ValueError) as e:
        log.warning(f'state.json korrupt ({e}), starte mit leerem Zustand')
        try: STATE_FILE.rename(STATE_FILE.with_suffix('.corrupt'))
        except Exception: pass
    except Exception as e:
        log.warning(f'state.json nicht lesbar: {e}')


def save_state():
    try:
        with copy_lock:
            data = dict(copy_state)
        _atomic_write(STATE_FILE, json.dumps(data))
    except Exception:
        pass


def load_history() -> list:
    try:
        if HISTORY_FILE.exists():
            return json.loads(HISTORY_FILE.read_text(encoding='utf-8'))
    except Exception:
        pass
    return []


def append_history(entry: dict):
    h = load_history()
    h.insert(0, entry)
    try:
        _atomic_write(HISTORY_FILE, json.dumps(h[:MAX_HISTORY]))
    except Exception as e:
        log.warning(f'Verlauf speichern fehlgeschlagen: {e}')


def add_log(msg):
    log.info(msg)
    with copy_lock:
        copy_state['logs'].append({'t': datetime.now().strftime('%H:%M:%S'), 'm': msg})
        copy_state['logs'] = copy_state['logs'][-200:]
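`state.py` persists through `_atomic_write` from `picopy.config`, which is not part of this diff. A typical write-to-temp-then-`os.replace` implementation of such a helper, shown purely as an assumption about what it likely does (the name `atomic_write` and the demo path are illustrative):

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write(path: Path, text: str):
    """Write to a temp file in the same directory, fsync, then atomically replace.

    os.replace() is atomic on POSIX, so readers never see a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=str(path.parent), prefix=path.name)
    try:
        with os.fdopen(fd, 'w', encoding='utf-8') as fh:
            fh.write(text)
            fh.flush()
            os.fsync(fh.fileno())  # make sure the bytes hit the SD card
        os.replace(tmp, str(path))
    except Exception:
        try: os.unlink(tmp)
        except OSError: pass
        raise

p = Path(tempfile.gettempdir()) / 'picopy_state_demo.json'
atomic_write(p, json.dumps({'running': False}))
assert json.loads(p.read_text(encoding='utf-8')) == {'running': False}
p.unlink()
```

Writing into the same directory matters: `os.replace` is only atomic when source and target are on the same filesystem.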
picopy/system.py (Normal file, 222 lines)
@@ -0,0 +1,222 @@
"""PiCopy – system info, drive formatting, update system."""

import os
import shutil
import subprocess
import threading
import time
from datetime import datetime
from pathlib import Path

import urllib.request as _urlreq

from picopy.config import BASE_DIR, RAW_BASE, VERSION, _atomic_write, log
from picopy.state import copy_state

update_state = {
    'current': VERSION,
    'latest': None,
    'available': False,
    'checking': False,
    'error': None,
    'last_checked': None,
}
update_lock = threading.Lock()

format_state = {'running': False, 'error': None, 'done': False, 'fs': '', 'device': ''}

FORMAT_FILESYSTEMS = {
    'exfat': {
        'label': 'exFAT',
        'desc': 'Empfohlen – Mac & Windows, keine 4-GB-Dateigrößenbeschränkung',
        'cmd': lambda dev, name: ['mkfs.exfat', '-n', name, dev],
        'pkg': 'exfatprogs',
    },
    'fat32': {
        'label': 'FAT32',
        'desc': 'Mac & Windows, max. 4 GB pro Datei',
        'cmd': lambda dev, name: ['mkfs.vfat', '-F', '32', '-n', name[:11], dev],
        'pkg': 'dosfstools',
    },
    'ntfs': {
        'label': 'NTFS',
        'desc': 'Windows nativ, Mac nur lesen',
        'cmd': lambda dev, name: ['mkfs.ntfs', '-f', '-L', name[:32], dev],
        'pkg': 'ntfs-3g',
    },
}

# List of all files that have to be downloaded during an update
UPDATE_FILES = [
    'app.py',
    'version.txt',
    'PiCopy_Logo.png',
    'picopy/__init__.py',
    'picopy/config.py',
    'picopy/state.py',
    'picopy/usb.py',
    'picopy/copy_engine.py',
    'picopy/wifi.py',
    'picopy/wireguard.py',
    'picopy/samba.py',
    'picopy/upload.py',
    'picopy/system.py',
    'routes/__init__.py',
    'routes/copy_routes.py',
    'routes/wifi_routes.py',
    'routes/wireguard_routes.py',
    'routes/upload_routes.py',
    'routes/system_routes.py',
    'routes/browse_routes.py',
    'templates/index.html',
]


def get_sysinfo() -> dict:
    info: dict = {}
    # CPU temperature (Raspberry Pi)
    for zone in ('/sys/class/thermal/thermal_zone0/temp',
                 '/sys/class/thermal/thermal_zone1/temp'):
        try:
            raw = Path(zone).read_text().strip()
            info['cpu_temp'] = round(int(raw) / 1000, 1)
            break
        except Exception:
            info['cpu_temp'] = None
    # RAM
    try:
        mem: dict = {}
        for line in Path('/proc/meminfo').read_text().splitlines():
            parts = line.split()
            if len(parts) >= 2:
                mem[parts[0].rstrip(':')] = int(parts[1])
        total = mem.get('MemTotal', 0)
        avail = mem.get('MemAvailable', 0)
        used = total - avail
        info['ram_total'] = round(total / 1024)
        info['ram_used'] = round(used / 1024)
        info['ram_pct'] = round(used / total * 100) if total else 0
    except Exception:
        info['ram_total'] = info['ram_used'] = info['ram_pct'] = None
    # SD card (root filesystem)
    try:
        du = shutil.disk_usage('/')
        info['disk_total'] = round(du.total / 1e9, 1)
        info['disk_used'] = round(du.used / 1e9, 1)
        info['disk_pct'] = round(du.used / du.total * 100) if du.total else 0
    except Exception:
        info['disk_total'] = info['disk_used'] = info['disk_pct'] = None
    return info


def _vtuple(v):
    try:
        return tuple(int(x) for x in v.strip().lstrip('v').split('.'))
    except Exception:
        return (0,)


def check_for_updates():
    with update_lock:
        if update_state['checking']:
            return
        update_state['checking'] = True
        update_state['error'] = None

    try:
        req = _urlreq.urlopen(f'{RAW_BASE}/version.txt', timeout=10)
        latest = req.read().decode().strip()
        avail = _vtuple(latest) > _vtuple(VERSION)
        with update_lock:
            update_state.update(latest=latest, available=avail,
                                last_checked=datetime.now().isoformat())
        if avail:
            log.info(f'Update verfügbar: {VERSION} -> {latest}')
    except Exception as e:
        with update_lock:
            update_state['error'] = str(e)
        log.warning(f'Update-Check fehlgeschlagen: {e}')
    finally:
        with update_lock:
            update_state['checking'] = False


def update_check_loop():
    time.sleep(5)  # Check once shortly after startup
    while True:
        check_for_updates()
        time.sleep(6 * 3600)  # Then every 6 hours


def install_update():
    """Downloads all module files, checks their syntax and replaces them atomically."""
    log.info('Update wird heruntergeladen...')

    # Fetch version.txt first and validate the new code
    vreq = _urlreq.urlopen(f'{RAW_BASE}/version.txt', timeout=10)
    new_version = vreq.read().decode().strip()

    # Download app.py and check its syntax
    req = _urlreq.urlopen(f'{RAW_BASE}/app.py', timeout=60)
    new_app_code = req.read().decode()
    compile(new_app_code, 'app.py', 'exec')

    # Logo
    logo_req = _urlreq.urlopen(f'{RAW_BASE}/PiCopy_Logo.png', timeout=30)
    logo_data = logo_req.read()

    # Write all files
    for rel_path in UPDATE_FILES:
        dest = BASE_DIR / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)

        url = f'{RAW_BASE}/{rel_path}'
        if rel_path == 'app.py':
            content_bytes = new_app_code.encode('utf-8')
        elif rel_path == 'version.txt':
            content_bytes = (new_version + '\n').encode('utf-8')
        elif rel_path == 'PiCopy_Logo.png':
            content_bytes = logo_data
        else:
            try:
                r = _urlreq.urlopen(url, timeout=60)
                content_bytes = r.read()
            except Exception as e:
                log.warning(f'Update: {rel_path} konnte nicht heruntergeladen werden: {e}')
                continue

        tmp = dest.with_suffix(dest.suffix + '.tmp')
        tmp.write_bytes(content_bytes)
        with open(tmp, 'rb') as fh:
            os.fsync(fh.fileno())
        os.replace(str(tmp), str(dest))

    log.info('Update installiert - starte Dienst neu...')
    subprocess.Popen(['systemctl', 'restart', 'picopy'])


def do_format(fs: str, name: str, dev: str):
    """Formats a drive. Runs in a background thread."""
    format_state.update(running=True, error=None, done=False, fs=fs, device=dev)
    try:
        # Unmount if currently mounted
        subprocess.run(['umount', dev], capture_output=True)

        cmd = FORMAT_FILESYSTEMS[fs]['cmd'](dev, name)
        r = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
        if r.returncode != 0:
            err = r.stderr.strip() or r.stdout.strip() or 'Unbekannter Fehler'
            # Helpful message when the mkfs package is missing
            pkg = FORMAT_FILESYSTEMS[fs]['pkg']
            if 'not found' in err or r.returncode == 127:
                err = f'Befehl nicht gefunden – bitte installieren: apt install {pkg}'
            format_state.update(error=err)
            return
        format_state.update(done=True)
        log.info(f'Formatierung {fs} auf {dev} abgeschlossen')
    except subprocess.TimeoutExpired:
        format_state.update(error='Timeout – Formatierung dauerte zu lange')
    except Exception as e:
        format_state.update(error=str(e))
    finally:
        format_state['running'] = False
picopy/upload.py (Normal file, 379 lines)
@@ -0,0 +1,379 @@
|
||||
"""PiCopy – NAS-Upload (rclone): upload_state, upload_lock, alle rclone-Helpers, run_uploads."""
|
||||
|
||||
import json
|
||||
import posixpath
|
||||
import re
|
||||
import select
|
||||
import subprocess
|
||||
import threading
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
from picopy.config import BASE_DIR, load_cfg, log
|
||||
from picopy.state import add_log
|
||||
|
||||
RCLONE_CONF = BASE_DIR / 'rclone.conf'
|
||||
|
||||
upload_state = {
|
||||
'running': False,
|
||||
'current': '',
|
||||
'results': [],
|
||||
'progress': 0,
|
||||
'total': 0,
|
||||
'done': 0,
|
||||
'bytes_total': 0,
|
||||
'bytes_done': 0,
|
||||
'current_file': '',
|
||||
'eta_sec': None,
|
||||
'speed_bps': 0,
|
||||
}
|
||||
upload_lock = threading.Lock()
|
||||
|
||||
|
||||
def _rclone(*args, timeout=60):
|
||||
try:
|
||||
return subprocess.run(
|
||||
['rclone', '--config', str(RCLONE_CONF)] + list(args),
|
||||
capture_output=True, text=True, timeout=timeout
|
||||
)
|
||||
except subprocess.TimeoutExpired:
|
||||
return subprocess.CompletedProcess(args, 1, stdout='', stderr=f'Timeout nach {timeout}s')
|
||||
except Exception as e:
|
||||
return subprocess.CompletedProcess(args, 1, stdout='', stderr=str(e))
|
||||
|
||||
|
||||
def _rclone_obscure(pw):
|
||||
r = subprocess.run(['rclone', 'obscure', pw],
|
||||
capture_output=True, text=True, timeout=10)
|
||||
return r.stdout.strip()
|
||||
|
||||
|
||||
def _parse_percent(text: str):
|
||||
m = re.search(r'(\d+(?:\.\d+)?)%', text)
|
||||
if not m:
|
||||
return None
|
||||
try:
|
||||
return max(0.0, min(100.0, float(m.group(1))))
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def _rclone_copyto_progress(src: Path, dest: str, base_done: int,
|
||||
file_size: int, total_bytes: int, start_ts: float,
|
||||
timeout: int = 7200):
|
||||
args = [
|
||||
'rclone', '--config', str(RCLONE_CONF),
|
||||
'copyto', str(src), dest,
|
||||
'--retries', '1',
|
||||
'--progress',
|
||||
'--stats', '1s',
|
||||
'--stats-one-line',
|
||||
]
|
||||
try:
|
||||
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
|
||||
text=True, bufsize=1)
|
||||
started = time.time()
|
||||
stderr_parts = []
|
||||
buf = ''
|
||||
while True:
|
||||
if p.poll() is not None:
|
||||
break
|
||||
if time.time() - started > timeout:
|
||||
p.kill()
|
||||
return subprocess.CompletedProcess(args, 1, stdout='', stderr=f'Timeout nach {timeout}s')
|
||||
|
||||
ready, _, _ = select.select([p.stderr], [], [], 0.2) if p.stderr else ([], [], [])
|
||||
if not ready:
|
||||
time.sleep(0.1)
|
||||
continue
|
||||
chunk = p.stderr.read(1)
|
||||
if not chunk:
|
||||
continue
|
||||
stderr_parts.append(chunk)
|
||||
if chunk not in ('\r', '\n'):
|
||||
buf += chunk
|
||||
continue
|
||||
|
||||
pct = _parse_percent(buf)
|
||||
buf = ''
|
||||
if pct is not None:
|
||||
transferred = int(file_size * pct / 100)
|
||||
bytes_done = base_done + transferred
|
||||
elapsed = time.time() - start_ts
|
||||
speed = bytes_done / elapsed if elapsed > 1 else 0
|
||||
eta = int((total_bytes - bytes_done) / speed) if speed > 0 and total_bytes > bytes_done else 0
|
||||
with upload_lock:
|
||||
upload_state.update(bytes_done=bytes_done,
|
||||
progress=int(bytes_done / total_bytes * 100) if total_bytes else 100,
|
||||
speed_bps=int(speed), eta_sec=eta)
|
||||
|
||||
stdout, stderr_tail = p.communicate(timeout=5)
|
||||
if stderr_tail:
|
||||
stderr_parts.append(stderr_tail)
|
||||
return subprocess.CompletedProcess(args, p.returncode, stdout=stdout or '',
|
||||
stderr=''.join(stderr_parts))
|
||||
except subprocess.TimeoutExpired:
|
||||
return subprocess.CompletedProcess(args, 1, stdout='', stderr=f'Timeout nach {timeout}s')
|
||||
except Exception as e:
|
||||
return subprocess.CompletedProcess(args, 1, stdout='', stderr=str(e))
|
||||
|
||||
|
||||
def _remote_name(tid):
|
||||
return f'picopy_{tid}'
|
||||
|
||||
|
||||
def _join_remote_path(*parts) -> str:
|
||||
return '/'.join(str(p).strip('/') for p in parts if str(p).strip('/'))
|
||||
|
||||
|
||||
def _remote_exists(remote_path: str) -> bool:
|
||||
return _remote_size(remote_path) is not None
|
||||
|
||||
|
||||
def _remote_size(remote_path: str):
|
||||
r = _rclone('lsjson', remote_path, timeout=20)
|
||||
if r.returncode != 0:
|
||||
return None
|
||||
try:
|
||||
data = json.loads(r.stdout or '[]')
|
||||
if isinstance(data, dict):
|
||||
return data.get('Size')
|
||||
if isinstance(data, list) and data:
|
||||
item = data[0]
|
||||
return item.get('Size') if isinstance(item, dict) else None
|
||||
return None
|
||||
except (json.JSONDecodeError, ValueError):
|
||||
return None
|
||||
|
||||
|
||||
def _remote_unique_rel_path(t: dict, rel_path: str) -> str:
|
||||
if not _remote_exists(_smb_conn(t, rel_path)):
|
||||
return rel_path
|
||||
|
||||
parent = posixpath.dirname(rel_path)
|
||||
name = posixpath.basename(rel_path)
|
||||
stem, suffix = posixpath.splitext(name)
|
||||
i = 1
|
||||
while True:
|
||||
candidate_name = f'{stem}_({i}){suffix}'
|
||||
candidate = _join_remote_path(parent, candidate_name)
|
||||
if not _remote_exists(_smb_conn(t, candidate)):
|
||||
return candidate
|
||||
i += 1
|
||||
|
||||
|
||||
def _smb_conn(t: dict, path: str = '') -> str:
|
||||
"""Baut ein rclone-Ziel fuer gespeicherte SMB-Targets.
|
||||
|
||||
Bei rclone SMB ist die Freigabe der erste Pfadteil nach dem Remote:
|
||||
remote:share/ordner. Die Remote-Konfiguration enthaelt Host und Login.
|
||||
"""
|
||||
share = t.get('smb_share', '')
|
||||
remote_path = _join_remote_path(share, path)
|
||||
if t.get('id'):
|
||||
return f'{_remote_name(t["id"])}:{remote_path}'
|
||||
|
||||
host = t.get('smb_host', '')
|
||||
if not host:
|
||||
return f':{remote_path}'
|
||||
conn = f':smb,host={host}'
|
||||
if t.get('smb_user'):
|
||||
conn += f',user={t["smb_user"]}'
|
||||
if t.get('smb_pass'):
|
||||
conn += f',pass={t["smb_pass"]}'
|
||||
conn += f':{remote_path}'
|
||||
return conn
|
||||
|
||||
|
||||
def configure_smb_remote(tid, host, share, user, pw):
|
||||
rn = _remote_name(tid)
|
||||
_rclone('config', 'delete', rn)
|
||||
args = ['config', 'create', rn, 'smb', f'host={host}']
|
||||
if user:
|
||||
args += [f'user={user}']
|
||||
if pw:
|
||||
args += [f'pass={_rclone_obscure(pw)}']
|
||||
r = _rclone(*args)
|
||||
return r.returncode == 0, r.stderr.strip()
|
||||
|
||||
|
||||
def delete_remote(tid):
|
||||
_rclone('config', 'delete', _remote_name(tid))
|
||||
|
||||
|
||||
def test_remote(tid):
|
||||
cfg = load_cfg()
|
||||
targets = cfg.get('upload_targets', [])
|
||||
t = next((x for x in targets if x['id'] == tid), {'id': tid})
|
||||
dest_root = t.get('dest_path', 'PiCopy').strip('/')
|
||||
root = _smb_conn(t)
|
||||
dest = _smb_conn(t, dest_root)
|
||||
test_dir_name = '.picopy_writetest'
|
||||
test_dir = _smb_conn(t, f'{dest_root}/{test_dir_name}' if dest_root else test_dir_name)
|
||||
# 1. Verbindung prüfen
|
||||
r = _rclone('lsd', root, timeout=15)
|
||||
if r.returncode != 0:
|
||||
err = r.stderr.strip().splitlines()[-1] if r.stderr.strip() else 'Verbindung fehlgeschlagen'
|
||||
return False, f'Verbindung: {err}'
|
||||
# 2. Zielordner und Schreibzugriff prüfen: Ziel anlegen, Testverzeichnis anlegen + sofort löschen
|
||||
mk = _rclone('mkdir', dest, timeout=15)
|
||||
if mk.returncode != 0:
|
||||
err = mk.stderr.strip().splitlines()[-1] if mk.stderr.strip() else 'Zielordner konnte nicht angelegt werden'
|
||||
return False, f'Zielordner: {err}'
|
||||
rw = _rclone('mkdir', test_dir, timeout=15)
|
||||
if rw.returncode != 0:
|
||||
err = rw.stderr.strip().splitlines()[-1] if rw.stderr.strip() else 'Schreiben fehlgeschlagen'
|
||||
return False, f'Kein Schreibzugriff: {err}'
|
||||
_rclone('rmdir', test_dir, timeout=10)
|
||||
return True, ''
|
||||
|
||||
|
||||
def run_uploads(local_dir: Path, cfg: dict, upload_files=None):
|
||||
"""Lädt die zuletzt lokal geschriebenen Dateien zu allen aktiven Fernzielen hoch."""
|
||||
# Frische Config laden damit zwischenzeitliche Änderungen (z.B. Deaktivierung) berücksichtigt werden
|
||||
current_cfg = load_cfg()
|
||||
targets = [t for t in current_cfg.get('upload_targets', []) if t.get('enabled')]
|
||||
if not targets:
|
||||
return
|
||||
|
||||
with upload_lock:
|
||||
upload_state.update(running=True, results=[], current='',
|
||||
progress=0, total=0, done=0,
|
||||
bytes_total=0, bytes_done=0,
|
||||
current_file='', eta_sec=None, speed_bps=0)
|
||||
|
||||
for t in targets:
|
||||
name = t.get('name', t['id'])
|
||||
with upload_lock:
|
||||
upload_state.update(current=name, progress=0, total=0, done=0,
|
||||
bytes_total=0, bytes_done=0,
|
||||
current_file='', eta_sec=None, speed_bps=0)
|
||||
|
||||
add_log(f'Upload >> {name}...')
|
||||
dest_root = t.get('dest_path', 'PiCopy').strip('/')
|
||||
root = _smb_conn(t)
|
||||
# local_dir ist der lokal erzeugte Datumsordner. Auf dem NAS soll die
|
||||
# gleiche Struktur entstehen wie auf dem Ziellaufwerk: Ziel/Datum/...
|
||||
dest_rel = _join_remote_path(dest_root, local_dir.name)
|
||||
dest = _smb_conn(t, dest_rel)
|
||||
share = t.get('smb_share', '')
|
||||
dest_label = _join_remote_path(share, dest_rel) or '/'
|
||||
add_log(f'Upload {name}: Ziel {dest_label}')
|
||||
|
||||
# Quellverzeichnis prüfen
|
||||
if not local_dir.exists():
|
||||
err = f'Quellverzeichnis nicht gefunden: {local_dir}'
|
||||
add_log(f'Upload {name}: ✗ {err}')
|
||||
with upload_lock:
|
||||
upload_state['results'].append({'name': name, 'ok': False, 'msg': err})
|
||||
continue
|
||||
|
||||
# 1. Verbindung prüfen
|
||||
conn = _rclone('lsd', root, timeout=15)
|
||||
add_log(f'Upload {name}: Verbindung rc={conn.returncode}')
|
||||
if conn.returncode != 0:
|
||||
err = (conn.stderr.strip().splitlines()[-1] if conn.stderr.strip()
|
||||
else 'NAS nicht erreichbar')
|
||||
add_log(f'Upload {name}: ✗ {err}')
|
||||
with upload_lock:
|
||||
upload_state['results'].append({'name': name, 'ok': False, 'msg': err})
|
||||
continue
|
||||
|
||||
# 2. Zielordner anlegen
|
||||
mk = _rclone('mkdir', dest, timeout=30)
|
||||
add_log(f'Upload {name}: mkdir rc={mk.returncode}'
|
||||
+ (f' err={mk.stderr.strip()[:100]}' if mk.returncode != 0 else ''))
|
||||
|
||||
# 3. Kopieren mit Fortschritt
|
||||
add_log(f'Upload {name}: starte copy von {local_dir}')
|
||||
dup_mode = cfg.get('duplicate_handling', 'skip')
|
||||
if upload_files is None:
|
||||
files = sorted(f for f in local_dir.rglob('*') if f.is_file())
|
||||
else:
|
||||
files = []
|
||||
for f in upload_files:
|
||||
f = Path(f)
|
||||
try:
|
||||
f.relative_to(local_dir)
|
||||
except ValueError:
|
||||
continue
|
||||
if f.is_file():
|
||||
                files.append(f)
        files = sorted(files)
        dirs = sorted({p for f in files for p in f.relative_to(local_dir).parents
                       if str(p) != '.'})
        bytes_total = sum(f.stat().st_size for f in files)
        with upload_lock:
            upload_state.update(total=len(files), bytes_total=bytes_total,
                                progress=100 if not files else 0)

        for d in dirs:
            _rclone('mkdir', _smb_conn(t, _join_remote_path(dest_rel, d.as_posix())), timeout=30)

        errors = []
        skipped = 0
        start_ts = time.time()
        for idx, f in enumerate(files, start=1):
            rel = f.relative_to(local_dir).as_posix()
            fsize = f.stat().st_size
            remote_rel = _join_remote_path(dest_rel, rel)
            with upload_lock:
                upload_state.update(done=idx, current_file=rel,
                                    progress=int(idx / len(files) * 100) if files else 100)

            if dup_mode == 'skip':
                remote_size = _remote_size(_smb_conn(t, remote_rel))
                if remote_size == fsize:
                    skipped += 1
                    with upload_lock:
                        bd = upload_state['bytes_done'] + fsize
                        elapsed = time.time() - start_ts
                        speed = bd / elapsed if elapsed > 1 else 0
                        eta = int((bytes_total - bd) / speed) if speed > 0 and bytes_total > bd else 0
                        upload_state.update(bytes_done=bd,
                                            progress=int(bd / bytes_total * 100) if bytes_total else 100,
                                            speed_bps=int(speed), eta_sec=eta)
                    continue
            elif dup_mode == 'rename':
                remote_rel = _remote_unique_rel_path(t, remote_rel)

            with upload_lock:
                base_done = upload_state['bytes_done']
            rr = _rclone_copyto_progress(f, _smb_conn(t, remote_rel),
                                         base_done, fsize, bytes_total, start_ts)
            if rr.returncode != 0:
                errors.append(rr.stderr.strip() or f'{rel}: unbekannter Fehler')
                if len(errors) >= 5:
                    break

            with upload_lock:
                bd = base_done + fsize
                elapsed = time.time() - start_ts
                speed = bd / elapsed if elapsed > 1 else 0
                eta = int((bytes_total - bd) / speed) if speed > 0 and bytes_total > bd else 0
                upload_state.update(bytes_done=bd,
                                    progress=int(bd / bytes_total * 100) if bytes_total else 100,
                                    speed_bps=int(speed), eta_sec=eta)

        r = subprocess.CompletedProcess(
            args=['rclone', 'copyto'],
            returncode=1 if errors else 0,
            stdout='',
            stderr='\n'.join(errors),
        )
        ok = r.returncode == 0
        err = ''
        if not ok:
            err = r.stderr.strip() or 'Unbekannter Fehler'
            add_log(f'Upload {name}: rclone stderr: {err[:300]}')
        elif skipped:
            add_log(f'Upload {name}: {skipped} Dateien übersprungen')

        with upload_lock:
            upload_state['results'].append({'name': name, 'ok': ok, 'msg': err})
        add_log(f'Upload {name}: {"✓ OK" if ok else "✗ Fehler - " + err}')

    with upload_lock:
        upload_state['running'] = False
        upload_state['current'] = ''
        upload_state['current_file'] = ''
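The bookkeeping in the loop above (bytes done, percent, speed, ETA) is plain arithmetic and can be sanity-checked in isolation. This is a minimal standalone re-implementation of that math, not the module itself:

```python
def progress_stats(bytes_done, bytes_total, elapsed):
    # Mirrors the upload loop: percent complete, speed in B/s, ETA in seconds
    progress = int(bytes_done / bytes_total * 100) if bytes_total else 100
    speed = bytes_done / elapsed if elapsed > 1 else 0
    eta = int((bytes_total - bytes_done) / speed) if speed > 0 and bytes_total > bytes_done else 0
    return progress, speed, eta

# 50 MB of 200 MB after 10 s: 25 %, 5 MB/s, 30 s remaining
print(progress_stats(50_000_000, 200_000_000, 10.0))  # → (25, 5000000.0, 30)
```

As in the original, `elapsed <= 1` reports a speed of 0, so the first second of a transfer never produces an inflated rate or a bogus ETA.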
131
picopy/usb.py
Normal file
@@ -0,0 +1,131 @@
"""PiCopy – USB detection: usb_devices, usb_port_of, ensure_mount, cleanup_stale_mounts."""

import os
import re
import json
import subprocess
from pathlib import Path

from picopy.config import INTERNAL_DEST_DIR, log


def usb_port_of(dev_name):
    """Returns the physical USB port path (e.g. '2-2').
    Primarily via udevadm, with a sysfs fallback."""
    # Primary: udevadm (more reliable)
    try:
        r = subprocess.run(
            ['udevadm', 'info', '-q', 'path', '-n', f'/dev/{dev_name}'],
            capture_output=True, text=True, timeout=5
        )
        if r.returncode == 0:
            port = None
            for seg in r.stdout.strip().split('/'):
                if re.fullmatch(r'\d+-[\d.]+', seg):
                    port = seg
            if port:
                return port
    except Exception:
        pass
    # Fallback: sysfs readlink
    try:
        real = Path(f'/sys/block/{dev_name}').resolve()
        port = None
        for seg in str(real).split('/'):
            if re.fullmatch(r'\d+[\-\d.]+', seg) and ':' not in seg:
                port = seg
        return port
    except Exception:
        return None


def usb_devices():
    try:
        out = subprocess.check_output(
            ['lsblk', '-J', '-o', 'NAME,TRAN,MOUNTPOINT,LABEL,SIZE,MODEL'],
            timeout=10, text=True
        )
        data = json.loads(out)
    except Exception as e:
        log.error(f'lsblk: {e}')
        return []

    result = []
    for bd in data.get('blockdevices', []):
        if bd.get('tran') != 'usb':
            continue
        name = bd['name']
        port = usb_port_of(name)
        model = (bd.get('label') or bd.get('model') or name).strip()
        for child in (bd.get('children') or []):
            result.append({
                'device': f'/dev/{child["name"]}',
                'usb_port': port,
                'mount': child.get('mountpoint') or '',
                'label': (child.get('label') or model).strip(),
                'size': child.get('size') or bd.get('size') or '',
            })
        if not bd.get('children'):
            result.append({
                'device': f'/dev/{name}',
                'usb_port': port,
                'mount': bd.get('mountpoint') or '',
                'label': model,
                'size': bd.get('size') or '',
            })
    return result


def ensure_mount(dev_info):
    if dev_info.get('internal'):
        INTERNAL_DEST_DIR.mkdir(parents=True, exist_ok=True)
        return str(INTERNAL_DEST_DIR), False
    mp = dev_info.get('mount')
    if mp:
        return mp, False
    dev = dev_info['device']
    mp = f'/mnt/picopy{dev.replace("/", "_")}'
    os.makedirs(mp, exist_ok=True)
    r = subprocess.run(['mount', dev, mp], capture_output=True)
    if r.returncode:
        log.error(f'mount failed: {r.stderr.decode()}')
        return None, False
    return mp, True


def cleanup_stale_mounts() -> None:
    """Cleans up PiCopy mounts left over at startup (e.g. after a power failure)."""
    try:
        with open('/proc/mounts') as fh:
            mps = [line.split()[1] for line in fh if '/mnt/picopy' in line]
        for mp in mps:
            log.info(f'Bereinige veralteten Mount: {mp}')
            subprocess.run(['umount', '-l', mp], capture_output=True)
    except Exception as e:
        log.warning(f'Stale-Mount-Bereinigung fehlgeschlagen: {e}')


def internal_dest_device(cfg=None):
    from picopy.config import load_cfg, _fmt_bytes
    cfg = cfg or load_cfg()
    usage = _internal_usage()
    return {
        'device': 'internal',
        'usb_port': '__internal__',
        'mount': str(INTERNAL_DEST_DIR),
        'label': cfg.get('internal_dest_label') or 'Interner Speicher',
        'size': _fmt_bytes(usage['free']) + ' frei',
        'internal': True,
    }


def _internal_usage():
    import shutil
    INTERNAL_DEST_DIR.mkdir(parents=True, exist_ok=True)
    usage = shutil.disk_usage(INTERNAL_DEST_DIR)
    return {
        'path': str(INTERNAL_DEST_DIR),
        'total': usage.total,
        'used': usage.used,
        'free': usage.free,
    }
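`usb_port_of` keeps the last `/`-separated segment of the udevadm device path that matches `\d+-[\d.]+`, which skips interface segments like `1-1.3:1.0`. That matching logic can be exercised standalone; the sample path below is illustrative, not captured from a real device:

```python
import re

def last_port_segment(udev_path):
    # Last '/'-segment shaped like a USB port path, e.g. '1-1.3'
    port = None
    for seg in udev_path.strip().split('/'):
        if re.fullmatch(r'\d+-[\d.]+', seg):
            port = seg
    return port

sample = ('/devices/platform/scb/fd500000.pcie/pci0000:00/0000:01:00.0'
          '/usb1/1-1/1-1.3/1-1.3:1.0/host0/target0:0:0/0:0:0:0/block/sda')
print(last_port_segment(sample))  # → 1-1.3
```

Taking the *last* match matters: a hub port (`1-1`) appears before the device's own port (`1-1.3`), and the deeper one identifies the physical socket.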
167
picopy/wifi.py
Normal file
@@ -0,0 +1,167 @@
"""PiCopy – WiFi: wifi_state, wifi_lock, nm(), helpers, wifi_monitor."""

import subprocess
import threading
import time

from picopy.config import (
    NM_AP_CON, NM_CLIENT_CON, WIFI_BOOT_WAIT,
    load_cfg, log
)

wifi_state = {
    'mode': 'unknown',  # 'client' | 'ap' | 'disconnected'
    'ssid': '',
    'ip': '',
}
wifi_lock = threading.Lock()


def nm(*args):
    return subprocess.run(['nmcli'] + list(args),
                          capture_output=True, text=True, timeout=20)


def get_wlan0_info():
    r = nm('-t', '-f', 'DEVICE,STATE,CONNECTION', 'dev')
    for line in r.stdout.splitlines():
        parts = line.split(':')
        if parts and parts[0] == 'wlan0':
            return {
                'state': parts[1] if len(parts) > 1 else '',
                'connection': ':'.join(parts[2:]) if len(parts) > 2 else '',
            }
    return {'state': '', 'connection': ''}


def get_wifi_ip():
    r = nm('-t', '-f', 'IP4.ADDRESS', 'dev', 'show', 'wlan0')
    for line in r.stdout.splitlines():
        if 'IP4.ADDRESS' in line:
            ip = line.split(':')[-1].split('/')[0].strip()
            if ip:
                return ip
    return ''


def is_client_connected():
    info = get_wlan0_info()
    return (info['state'] == 'connected'
            and info['connection']
            and NM_AP_CON not in info['connection'])


def is_ap_active():
    r = nm('-t', '-f', 'NAME,STATE', 'con', 'show', '--active')
    return any(NM_AP_CON in l and 'activated' in l for l in r.stdout.splitlines())


def start_ap(ssid, password):
    log.info(f'Starte AP: {ssid}')
    nm('con', 'delete', NM_AP_CON)
    time.sleep(1)
    r = nm('dev', 'wifi', 'hotspot',
           'ifname', 'wlan0',
           'ssid', ssid,
           'password', password,
           'con-name', NM_AP_CON)
    ok = r.returncode == 0
    if ok:
        log.info('AP gestartet')
    else:
        log.error(f'AP Fehler: {r.stderr}')
    return ok


def stop_ap():
    log.info('Stoppe AP')
    nm('con', 'down', NM_AP_CON)


def connect_client_wifi(ssid, password):
    log.info(f'Verbinde mit WiFi: {ssid}')
    # Delete any existing PiCopy WiFi connection first
    nm('con', 'delete', NM_CLIENT_CON)
    time.sleep(1)
    r = nm('dev', 'wifi', 'connect', ssid,
           'password', password,
           'name', NM_CLIENT_CON,
           'ifname', 'wlan0')
    ok = r.returncode == 0
    if ok:
        log.info(f'Verbunden mit {ssid}')
    else:
        log.error(f'WiFi-Verbindung fehlgeschlagen: {r.stderr.strip()}')
    return ok


def scan_wifi_networks():
    nm('dev', 'wifi', 'rescan')
    time.sleep(2)
    r = nm('-t', '-f', 'SSID,SIGNAL,SECURITY', 'dev', 'wifi', 'list')
    seen, nets = set(), []
    for line in r.stdout.splitlines():
        parts = line.split(':')
        if len(parts) >= 2:
            ssid = parts[0].strip()
            signal = parts[1].strip() if len(parts) > 1 else '0'
            security = ':'.join(parts[2:]).strip() if len(parts) > 2 else ''
            if ssid and ssid not in seen:
                seen.add(ssid)
                nets.append({'ssid': ssid,
                             'signal': int(signal) if signal.isdigit() else 0,
                             'security': security})
    return sorted(nets, key=lambda x: -x['signal'])


def update_wifi_state():
    info = get_wlan0_info()
    if info['state'] == 'connected':
        if NM_AP_CON in info['connection']:
            with wifi_lock:
                wifi_state.update(mode='ap',
                                  ssid=load_cfg().get('ap_ssid', 'PiCopy'),
                                  ip='10.42.0.1')
        else:
            ip = get_wifi_ip()
            with wifi_lock:
                wifi_state.update(mode='client',
                                  ssid=info['connection'],
                                  ip=ip)
    else:
        with wifi_lock:
            wifi_state.update(mode='disconnected', ssid='', ip='')


def wifi_monitor():
    log.info(f'WiFi-Monitor: warte {WIFI_BOOT_WAIT}s auf Verbindung...')
    time.sleep(WIFI_BOOT_WAIT)

    while True:
        try:
            update_wifi_state()
            with wifi_lock:
                mode = wifi_state['mode']

            if mode == 'disconnected':
                cfg = load_cfg()
                ssid = cfg.get('wifi_ssid', '')
                pw = cfg.get('wifi_password', '')

                connected = False
                if ssid:
                    connected = connect_client_wifi(ssid, pw)
                    if connected:
                        time.sleep(5)
                        update_wifi_state()

                if not connected:
                    ap_ssid = cfg.get('ap_ssid', 'PiCopy')
                    ap_pw = cfg.get('ap_password', 'PiCopy,')
                    if start_ap(ap_ssid, ap_pw):
                        time.sleep(3)
                        with wifi_lock:
                            wifi_state.update(mode='ap', ssid=ap_ssid, ip='10.42.0.1')

        except Exception as e:
            log.error(f'WiFi-Monitor Fehler: {e}')

        time.sleep(30)
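`scan_wifi_networks` splits each `nmcli -t` line on `:` and rejoins the tail, so a security field that itself contains colons (e.g. `WPA1:WPA2`) survives intact while duplicate SSIDs are collapsed. The parsing can be checked against canned output; the sample lines are invented for illustration:

```python
def parse_wifi_list(text):
    # Same split/rejoin and dedupe logic as scan_wifi_networks, minus nmcli
    seen, nets = set(), []
    for line in text.splitlines():
        parts = line.split(':')
        if len(parts) >= 2:
            ssid = parts[0].strip()
            signal = parts[1].strip()
            security = ':'.join(parts[2:]).strip() if len(parts) > 2 else ''
            if ssid and ssid not in seen:
                seen.add(ssid)
                nets.append({'ssid': ssid,
                             'signal': int(signal) if signal.isdigit() else 0,
                             'security': security})
    return sorted(nets, key=lambda x: -x['signal'])

sample = 'HomeNet:83:WPA2\nHomeNet:61:WPA2\nCafe:47:WPA1:WPA2\n'
print(parse_wifi_list(sample))
```

Only the first (strongest-seen) entry per SSID is kept; sorting by negative signal puts the best network first.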
152
picopy/wireguard.py
Normal file
@@ -0,0 +1,152 @@
"""PiCopy – WireGuard VPN: wg_state, wg_lock, all wg_* functions, wg_monitor."""

import os
import re
import shutil
import subprocess
import threading
import time
from pathlib import Path

from picopy.config import log

WG_CONF = Path('/etc/wireguard/picopy.conf')
WG_IFACE = 'picopy'


def wg_is_installed():
    return shutil.which('wg-quick') is not None


wg_state = {
    'connected': False,
    'ip': '',
    'peer': '',
    'error': None,
    'has_config': False,
    'installed': False,
    'pkg_running': False,
    'pkg_action': '',
    'pkg_error': None,
}
wg_lock = threading.Lock()


def wg_update_state():
    inst = wg_is_installed()
    has_conf = WG_CONF.exists()
    if not inst:
        with wg_lock:
            wg_state.update(installed=False, connected=False, ip='', peer='',
                            has_config=has_conf)
        return
    r = subprocess.run(['wg', 'show', WG_IFACE],
                       capture_output=True, text=True, timeout=5)
    if r.returncode != 0:
        with wg_lock:
            wg_state.update(installed=True, connected=False, ip='', peer='',
                            has_config=has_conf)
        return
    ip_r = subprocess.run(['ip', '-4', 'addr', 'show', WG_IFACE],
                          capture_output=True, text=True, timeout=5)
    ip = ''
    for line in ip_r.stdout.splitlines():
        if line.strip().startswith('inet '):
            ip = line.strip().split()[1].split('/')[0]
            break
    peer = ''
    for line in r.stdout.splitlines():
        if line.startswith('peer:'):
            peer = line.split(':', 1)[-1].strip()
            break
    with wg_lock:
        wg_state.update(installed=True, connected=True, ip=ip, peer=peer,
                        error=None, has_config=has_conf)


def wg_connect():
    if not WG_CONF.exists():
        with wg_lock:
            wg_state['error'] = 'Keine Konfiguration vorhanden'
        return False
    r = subprocess.run(['wg-quick', 'up', WG_IFACE],
                       capture_output=True, text=True, timeout=30)
    if r.returncode == 0:
        time.sleep(1)
        wg_update_state()
        log.info('WireGuard verbunden')
        return True
    lines = r.stderr.strip().splitlines() if r.stderr.strip() else []
    real_errors = [l for l in lines if not l.strip().startswith('[#]')]
    err = (real_errors[-1] if real_errors else lines[-1] if lines else 'Unbekannter Fehler')
    if 'resolvconf' in err and 'not found' in err:
        err = 'resolvconf fehlt - bitte WireGuard deinstallieren und neu installieren (openresolv wird dann mitinstalliert)'
    with wg_lock:
        wg_state.update(connected=False, error=err)
    log.error(f'WireGuard Fehler: {err}')
    return False


def wg_disconnect():
    r = subprocess.run(['wg-quick', 'down', WG_IFACE],
                       capture_output=True, text=True, timeout=15)
    with wg_lock:
        wg_state.update(connected=False, ip='', peer='', error=None)
    log.info('WireGuard getrennt')
    return r.returncode == 0


def _wg_apt(action: str, packages: list):
    """Runs apt-get install/remove and updates the pkg_* state."""
    with wg_lock:
        if wg_state['pkg_running']:
            return
        wg_state.update(pkg_running=True, pkg_action=action, pkg_error=None)
    try:
        cmd = ['apt-get', action, '-y'] + packages
        r = subprocess.run(cmd, capture_output=True, text=True, timeout=300,
                           env={**os.environ, 'DEBIAN_FRONTEND': 'noninteractive'})
        if r.returncode != 0:
            err = (r.stderr.strip().splitlines()[-1]
                   if r.stderr.strip() else f'apt-get {action} fehlgeschlagen')
            log.error(f'WireGuard apt {action}: {err}')
            with wg_lock:
                wg_state['pkg_error'] = err
        else:
            log.info(f'WireGuard apt {action} abgeschlossen')
    except Exception as e:
        with wg_lock:
            wg_state['pkg_error'] = str(e)
    finally:
        with wg_lock:
            wg_state['pkg_running'] = False
            wg_state['pkg_action'] = ''
        wg_update_state()


def wg_install():
    _wg_apt('install', ['wireguard', 'wireguard-tools', 'openresolv'])


def wg_uninstall():
    wg_disconnect()
    _wg_apt('remove', ['wireguard', 'wireguard-tools'])


def wg_save_config(content: str):
    try:
        WG_CONF.parent.mkdir(parents=True, exist_ok=True)
        WG_CONF.write_text(content, encoding='utf-8')
        WG_CONF.chmod(0o600)
        return True, ''
    except Exception as e:
        return False, str(e)


def wg_monitor():
    while True:
        try:
            wg_update_state()
        except Exception:
            pass
        time.sleep(10)
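`wg_update_state` scrapes plain-text output from `wg show` and `ip -4 addr show`: the first unindented `peer:` line and the first `inet` line. That extraction is easy to verify against canned output; the key and address below are invented sample values, not real tool output:

```python
def parse_wg(wg_out, ip_out):
    # First 'inet' line gives the tunnel IPv4, first 'peer:' line the peer key
    ip = ''
    for line in ip_out.splitlines():
        if line.strip().startswith('inet '):
            ip = line.strip().split()[1].split('/')[0]
            break
    peer = ''
    for line in wg_out.splitlines():
        if line.startswith('peer:'):
            peer = line.split(':', 1)[-1].strip()
            break
    return ip, peer

wg_out = 'interface: picopy\n  public key: abc=\npeer: XyZpeerKey=\n  endpoint: 203.0.113.5:51820\n'
ip_out = '7: picopy: <POINTOPOINT,NOARP,UP>\n    inet 10.8.0.2/24 scope global picopy\n'
print(parse_wg(wg_out, ip_out))  # → ('10.8.0.2', 'XyZpeerKey=')
```

Using `split(':', 1)` keeps base64 peer keys intact even though they may contain no further colons; the CIDR suffix is stripped from the address.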
17
routes/__init__.py
Normal file
@@ -0,0 +1,17 @@
"""PiCopy – register_routes(app): registers all blueprints."""


def register_routes(app):
    from routes.copy_routes import copy_bp
    from routes.wifi_routes import wifi_bp
    from routes.wireguard_routes import wireguard_bp
    from routes.upload_routes import upload_bp
    from routes.system_routes import system_bp
    from routes.browse_routes import browse_bp

    app.register_blueprint(copy_bp)
    app.register_blueprint(wifi_bp)
    app.register_blueprint(wireguard_bp)
    app.register_blueprint(upload_bp)
    app.register_blueprint(system_bp)
    app.register_blueprint(browse_bp)
147
routes/browse_routes.py
Normal file
@@ -0,0 +1,147 @@
"""PiCopy – Blueprint: /api/browse, /api/history*, /api/internal-share*."""

import os
import subprocess
from datetime import datetime
from pathlib import Path

from flask import Blueprint, jsonify, request

from picopy.config import load_cfg, HISTORY_FILE, INTERNAL_DEST_DIR, log
from picopy.state import load_history
from picopy.usb import usb_devices, internal_dest_device
from picopy.samba import internal_share_update_state, set_internal_share_enabled

browse_bp = Blueprint('browse', __name__)

_browse_mounts = {}  # usb_port -> mount_point


def _mp_is_alive(mp):
    """Checks whether a mount point is actually active and readable."""
    try:
        with open('/proc/mounts') as f:
            mounted = any(mp in line.split() for line in f)
        if not mounted:
            return False
        os.listdir(mp)  # I/O test: fails if the device was removed
        return True
    except Exception:
        return False


def _drop_browse_mount(port):
    """Cleans up a stale browse mount."""
    mp = _browse_mounts.pop(port, None)
    if mp:
        subprocess.run(['umount', '-l', mp], capture_output=True)
        log.info(f'Browse-Mount bereinigt: {mp}')


def get_browse_mp(dev):
    if dev.get('internal'):
        INTERNAL_DEST_DIR.mkdir(parents=True, exist_ok=True)
        return str(INTERNAL_DEST_DIR)
    port = dev.get('usb_port', '')

    # Prefer the system's auto-mount
    if dev.get('mount') and _mp_is_alive(dev['mount']):
        return dev['mount']

    # Check the cached mount
    mp = _browse_mounts.get(port)
    if mp:
        if _mp_is_alive(mp):
            return mp
        _drop_browse_mount(port)  # stale -> clean up

    # Mount fresh
    mp = f'/mnt/picopy_br_{port}'
    os.makedirs(mp, exist_ok=True)
    r = subprocess.run(['mount', dev['device'], mp], capture_output=True)
    if r.returncode == 0:
        _browse_mounts[port] = mp
        return mp
    return None


@browse_bp.route('/api/browse')
def r_browse():
    port = request.args.get('port', '')
    rpath = request.args.get('path', '').lstrip('/')

    devs = usb_devices()
    dev = internal_dest_device(load_cfg()) if port == '__internal__' else None
    if dev is None:
        dev = next((d for d in devs if d['usb_port'] == port), None)
    if not dev:
        return jsonify(error='Gerät nicht verbunden - bitte neu einstecken'), 404

    mp = get_browse_mp(dev)
    if not mp:
        return jsonify(error='Gerät nicht lesbar - bitte neu einstecken'), 500

    try:
        base = Path(mp).resolve()
        target = (base / rpath).resolve()

        if not str(target).startswith(str(base)):
            return jsonify(error='Ungültiger Pfad'), 400
        if not target.is_dir():
            return jsonify(error='Kein Verzeichnis'), 400

        entries = []
        for item in sorted(target.iterdir(),
                           key=lambda x: (x.is_file(), x.name.lower())):
            try:
                s = item.stat()
                entries.append({
                    'name': item.name,
                    'dir': item.is_dir(),
                    'size': s.st_size if item.is_file() else None,
                    'mtime': datetime.fromtimestamp(s.st_mtime).strftime('%d.%m.%y %H:%M'),
                })
            except OSError:
                pass

        rel = str(target.relative_to(base))
        return jsonify(path='' if rel == '.' else rel, entries=entries)

    except OSError as e:
        import errno as _errno
        if e.errno == _errno.EIO:
            # I/O error = device unplugged, clean up the mount
            _drop_browse_mount(port)
            return jsonify(error='Gerät nicht mehr erreichbar - bitte neu einstecken'), 503
        return jsonify(error=str(e)), 500
    except Exception as e:
        return jsonify(error=str(e)), 500


@browse_bp.route('/api/history')
def r_history():
    return jsonify(load_history())


@browse_bp.route('/api/history', methods=['DELETE'])
def r_history_clear():
    try:
        HISTORY_FILE.write_text('[]', encoding='utf-8')
    except Exception:
        pass
    return jsonify(ok=True)


@browse_bp.route('/api/internal-share/status')
def r_internal_share_status():
    return jsonify(internal_share_update_state())


@browse_bp.route('/api/internal-share', methods=['POST'])
def r_internal_share_set():
    data = request.get_json(force=True) or {}
    enabled = bool(data.get('enabled'))
    ok, err = set_internal_share_enabled(enabled)
    if not ok:
        return jsonify(error=err), 500
    return jsonify(ok=True, status=internal_share_update_state())
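`r_browse` guards against path traversal by resolving both paths and comparing them as a string prefix. A standalone sketch of the same idea follows; note it deliberately compares path components rather than using raw `startswith`, because a plain string prefix would also accept a sibling such as `/mnt/data2` when the base is `/mnt/data` (a known caveat of the string form used in the route):

```python
from pathlib import Path

def is_within(base, rpath):
    # Resolve both sides, then require target == base or base among its parents
    base = Path(base).resolve()
    target = (base / rpath.lstrip('/')).resolve()
    return target == base or base in target.parents

print(is_within('/tmp', 'a/b'))            # → True
print(is_within('/tmp', '../etc/passwd'))  # → False
```

Resolving before comparing is what defuses `..` segments and symlink tricks; the component comparison then closes the sibling-prefix gap.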
153
routes/copy_routes.py
Normal file
@@ -0,0 +1,153 @@
"""PiCopy – Blueprint: /api/copy/*, /api/devices, /api/storage-info, /api/status, /api/config*."""

import shutil
import subprocess
import threading

from flask import Blueprint, jsonify, request

from picopy.config import load_cfg, save_cfg, _fmt_bytes
from picopy.state import copy_state, copy_lock
from picopy.usb import usb_devices, ensure_mount, internal_dest_device
from picopy.wifi import wifi_state, wifi_lock
from picopy.wireguard import wg_state, wg_lock
from picopy.samba import internal_share_update_state
import picopy.copy_engine as _ce

copy_bp = Blueprint('copy', __name__)


def _resolve_source_ports(cfg) -> list:
    ports = cfg.get('source_ports') or []
    if not ports and cfg.get('source_port'):
        ports = [{'port': cfg['source_port'], 'label': cfg.get('source_label', '')}]
    return ports


def _configured_destination(cfg, devs):
    if cfg.get('dest_type') == 'internal':
        return internal_dest_device(cfg)
    return next((d for d in devs if d['usb_port'] == cfg.get('dest_port')), None)


@copy_bp.route('/api/devices')
def r_devices():
    return jsonify(usb_devices())


@copy_bp.route('/api/storage-info')
def r_storage_info():
    cfg = load_cfg()
    devs = usb_devices()
    result = []

    src_ports = {sp['port'] for sp in _resolve_source_ports(cfg)}
    dst_port = cfg.get('dest_port')

    def _du_for_dev(dev):
        mp, owned = ensure_mount(dev)
        if not mp:
            return dict(total=None, used=None, free=None, pct=None)
        try:
            du = shutil.disk_usage(mp)
            return dict(total=du.total, used=du.used, free=du.free,
                        pct=round(du.used / du.total * 100) if du.total else 0)
        except Exception:
            return dict(total=None, used=None, free=None, pct=None)
        finally:
            if owned:
                subprocess.run(['umount', mp], capture_output=True)

    for dev in devs:
        port = dev['usb_port']
        if port in src_ports:
            role = 'source'
        elif port == dst_port:
            role = 'dest'
        else:
            role = 'other'
        entry = dict(
            role=role,
            label=dev.get('label') or dev.get('device') or f'Port {port}',
            port=port,
            device=dev.get('device', ''),
            size_str=dev.get('size', ''),
        )
        entry.update(_du_for_dev(dev))
        result.append(entry)

    if cfg.get('dest_type') == 'internal':
        entry = dict(role='dest',
                     label=cfg.get('internal_dest_label') or 'Interner Speicher',
                     port='__internal__', device='internal', size_str='')
        entry.update(_du_for_dev({'internal': True}))
        result.append(entry)

    return jsonify(result)


@copy_bp.route('/api/config', methods=['GET', 'POST'])
def r_config():
    if request.method == 'POST':
        cfg = load_cfg()
        cfg.update(request.get_json(force=True))
        save_cfg(cfg)
        return jsonify(ok=True)
    return jsonify(load_cfg())


@copy_bp.route('/api/config/ports/reset', methods=['POST'])
def r_ports_reset():
    cfg = load_cfg()
    cfg['source_ports'] = []
    cfg['source_port'] = None
    cfg['source_label'] = ''
    cfg['dest_port'] = None
    cfg['dest_label'] = ''
    cfg['dest_type'] = 'usb'
    save_cfg(cfg)
    return jsonify(ok=True)


@copy_bp.route('/api/status')
def r_status():
    with copy_lock:
        cs = dict(copy_state)
    with wifi_lock:
        ws = dict(wifi_state)
    with wg_lock:
        wgs = dict(wg_state)
    share = internal_share_update_state()
    return jsonify(copy=cs, wifi=ws, vpn=wgs, internal_share=share)


@copy_bp.route('/api/copy/start', methods=['POST'])
def r_start():
    with copy_lock:
        if copy_state['running']:
            return jsonify(error='Bereits aktiv'), 400
        if _ce._copy_thread is not None and _ce._copy_thread.is_alive():
            return jsonify(error='Abbruch wird noch abgeschlossen - bitte kurz warten und erneut versuchen.'), 400
    cfg = load_cfg()
    devs = usb_devices()
    body = request.get_json(force=True) or {}
    wanted_ports = body.get('ports')  # None = all configured sources
    src_ports = _resolve_source_ports(cfg)
    srcs = [next((d for d in devs if d['usb_port'] == sp['port']), None) for sp in src_ports]
    srcs = [s for s in srcs if s is not None]
    if wanted_ports is not None:
        srcs = [s for s in srcs if s['usb_port'] in wanted_ports]
    if not srcs:
        return jsonify(error='Keine Quellgeräte gefunden (Ports nicht verbunden)'), 400
    dst = _configured_destination(cfg, devs)
    if not dst:
        return jsonify(error='Zielgerät nicht gefunden'), 400
    t = threading.Thread(target=_ce.do_copy, args=(srcs, dst, cfg), daemon=True)
    _ce._copy_thread = t
    t.start()
    return jsonify(ok=True)


@copy_bp.route('/api/copy/cancel', methods=['POST'])
def r_cancel():
    with copy_lock:
        copy_state['running'] = False
    return jsonify(ok=True)
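`_resolve_source_ports` keeps backwards compatibility with an older single-key config (`source_port`/`source_label`) while the new list key `source_ports` always wins when present. The fallback can be demonstrated with a standalone copy of the helper:

```python
def resolve_source_ports(cfg):
    # New-style list wins; otherwise synthesize one entry from the legacy keys
    ports = cfg.get('source_ports') or []
    if not ports and cfg.get('source_port'):
        ports = [{'port': cfg['source_port'], 'label': cfg.get('source_label', '')}]
    return ports

print(resolve_source_ports({'source_port': '1-1.2'}))
# → [{'port': '1-1.2', 'label': ''}]
print(resolve_source_ports({'source_ports': [{'port': '2-1', 'label': 'SD'}],
                            'source_port': '1-1.2'}))
# → [{'port': '2-1', 'label': 'SD'}]
```

Because `or []` also catches an empty list, a config that explicitly sets `source_ports: []` still falls back to the legacy key.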
86
routes/system_routes.py
Normal file
@@ -0,0 +1,86 @@
"""PiCopy – Blueprint: /api/sysinfo, /api/update/*, /api/format/*, /api/system/*."""

import subprocess
import threading

from flask import Blueprint, jsonify, request

from picopy.usb import usb_devices
from picopy.state import copy_state
from picopy.system import (
    get_sysinfo, update_state, update_lock,
    format_state, FORMAT_FILESYSTEMS,
    check_for_updates, install_update, do_format,
)

system_bp = Blueprint('system', __name__)


@system_bp.route('/api/sysinfo')
def r_sysinfo():
    return jsonify(get_sysinfo())


@system_bp.route('/api/update/status')
def r_update_status():
    with update_lock:
        return jsonify(dict(update_state))


@system_bp.route('/api/update/check', methods=['POST'])
def r_update_check():
    threading.Thread(target=check_for_updates, daemon=True).start()
    return jsonify(ok=True)


@system_bp.route('/api/update/install', methods=['POST'])
def r_update_install():
    from picopy.config import log
    try:
        install_update()
        return jsonify(ok=True)
    except SyntaxError as e:
        return jsonify(error=f'Update-Datei ungültig: {e}'), 500
    except Exception as e:
        log.exception('Update fehlgeschlagen')
        return jsonify(error=str(e)), 500


@system_bp.route('/api/format/status')
def r_format_status():
    return jsonify(dict(format_state))


@system_bp.route('/api/format', methods=['POST'])
def r_format():
    if format_state['running']:
        return jsonify(error='Formatierung läuft bereits'), 409
    if copy_state.get('running'):
        return jsonify(error='Kopiervorgang läuft – bitte warten'), 409

    body = request.get_json(force=True)
    fs = body.get('fs', '').lower()
    name = (body.get('name') or 'PICOPY').upper()
    dev = body.get('device', '')

    if fs not in FORMAT_FILESYSTEMS:
        return jsonify(error=f'Unbekanntes Dateisystem: {fs}'), 400
    if not dev.startswith('/dev/'):
        return jsonify(error='Ungültiges Gerät'), 400

    # Safety check: the device must be a known USB device
    known = [d['device'] for d in usb_devices()]
    if dev not in known:
        return jsonify(error='Gerät nicht als USB-Laufwerk erkannt'), 400

    threading.Thread(target=do_format, args=(fs, name, dev), daemon=True).start()
    return jsonify(ok=True)


@system_bp.route('/api/system/reboot', methods=['POST'])
def r_system_reboot():
    threading.Thread(target=lambda: (
        __import__('time').sleep(1),
        subprocess.Popen(['reboot'])
    ), daemon=True).start()
    return jsonify(ok=True)
123
routes/upload_routes.py
Normal file
@@ -0,0 +1,123 @@
|
||||
"""PiCopy – Blueprint: /api/upload/*."""
|
||||
|
||||
import subprocess
|
||||
import uuid as _uuid_mod
|
||||
|
||||
from flask import Blueprint, jsonify, request
|
||||
|
||||
from picopy.config import load_cfg, save_cfg
|
||||
from picopy.upload import (
|
||||
upload_state, upload_lock,
|
||||
configure_smb_remote, delete_remote, test_remote,
|
||||
_rclone_obscure, RCLONE_CONF as _RCLONE_CONF,
|
||||
)
|
||||
|
||||
upload_bp = Blueprint('upload', __name__)
|
||||
|
||||
|
||||
@upload_bp.route('/api/upload/targets', methods=['GET'])
def r_upload_list():
    return jsonify(load_cfg().get('upload_targets', []))


@upload_bp.route('/api/upload/targets', methods=['POST'])
def r_upload_add():
    data = request.get_json(force=True)
    cfg = load_cfg()
    tid = data.get('id') or _uuid_mod.uuid4().hex[:8]
    ctype = data.get('type', 'smb')

    if ctype != 'smb':
        return jsonify(error='Only SMB/NAS is supported'), 400
    ok, err = configure_smb_remote(
        tid, data.get('host', ''), data.get('share', ''),
        data.get('user', ''), data.get('pass', ''))

    if not ok:
        return jsonify(error=f'rclone: {err}'), 500

    # Store the credentials directly in the entry (for the connection string at upload time)
    obscured_pw = _rclone_obscure(data.get('pass', '')) if data.get('pass') else ''
    entry = {
        'id': tid, 'type': ctype,
        'name': data.get('name', ctype),
        'dest_path': data.get('dest_path', 'PiCopy'),
        'enabled': True,
        'smb_host': data.get('host', ''),
        'smb_share': data.get('share', ''),
        'smb_user': data.get('user', ''),
        'smb_pass': obscured_pw,
    }
    targets = [t for t in cfg.get('upload_targets', []) if t['id'] != tid]
    targets.append(entry)
    cfg['upload_targets'] = targets
    save_cfg(cfg)
    return jsonify(ok=True, id=tid)


@upload_bp.route('/api/upload/targets/<tid>', methods=['DELETE'])
def r_upload_del(tid):
    cfg = load_cfg()
    cfg['upload_targets'] = [t for t in cfg.get('upload_targets', []) if t['id'] != tid]
    save_cfg(cfg)
    delete_remote(tid)
    return jsonify(ok=True)


@upload_bp.route('/api/upload/browse', methods=['POST'])
def r_upload_browse():
    """List a server's SMB shares without a saved config (rclone connection string)."""
    data = request.get_json(force=True)
    host = data.get('host', '').strip()
    user = data.get('user', '').strip()
    pw = data.get('pass', '')
    if not host:
        return jsonify(error='Server address missing'), 400
    conn = f':smb,host={host}'
    if user:
        conn += f',user={user}'
    if pw:
        try:
            obscured = _rclone_obscure(pw)
            conn += f',pass={obscured}'
        except Exception:
            pass
    conn += ':'
    r = subprocess.run(
        ['rclone', '--config', str(_RCLONE_CONF), 'lsd', conn],
        capture_output=True, text=True, timeout=15
    )
    if r.returncode != 0:
        lines = r.stderr.strip().splitlines()
        err = lines[-1] if lines else 'Connection failed'
        return jsonify(error=err), 400
    shares = [line.strip().split()[-1] for line in r.stdout.splitlines() if line.strip()]
    return jsonify(shares=shares)


@upload_bp.route('/api/upload/targets/<tid>/toggle', methods=['POST'])
def r_upload_toggle(tid):
    cfg = load_cfg()
    for t in cfg.get('upload_targets', []):
        if t['id'] == tid:
            t['enabled'] = not t.get('enabled', True)
            break
    save_cfg(cfg)
    return jsonify(ok=True)


@upload_bp.route('/api/upload/targets/<tid>/test', methods=['POST'])
def r_upload_test(tid):
    from picopy.config import log
    try:
        ok, err = test_remote(tid)
    except Exception as e:
        log.exception('upload test failed')
        ok, err = False, str(e)
    return jsonify(ok=ok, error=err)


@upload_bp.route('/api/upload/status')
def r_upload_status():
    with upload_lock:
        return jsonify(dict(upload_state))
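The `/api/upload/browse` route above assembles an rclone SMB connection string on the fly instead of writing a remote to the config file. The string-building step can be pulled out into a small pure helper — a minimal sketch, assuming the password has already been run through `rclone obscure`; the helper name `build_smb_conn` is ours, not part of the codebase:

```python
def build_smb_conn(host: str, user: str = '', obscured_pw: str = '') -> str:
    """Build an on-the-fly rclone SMB connection string (no saved remote needed)."""
    conn = f':smb,host={host}'
    if user:
        conn += f',user={user}'
    if obscured_pw:
        # rclone expects the obscured form here, never the plaintext password
        conn += f',pass={obscured_pw}'
    return conn + ':'
```

The resulting string can be passed to `rclone lsd`/`rclone copy` exactly like a named remote, which keeps throwaway credentials out of `rclone.conf`.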
83  routes/wifi_routes.py  Normal file
@@ -0,0 +1,83 @@
"""PiCopy – Blueprint: /api/wifi/*."""

import threading
import time

from flask import Blueprint, jsonify, request

from picopy.config import load_cfg, save_cfg
from picopy.wifi import (
    wifi_state, wifi_lock,
    is_ap_active, stop_ap, start_ap,
    connect_client_wifi, update_wifi_state,
    scan_wifi_networks,
)

wifi_bp = Blueprint('wifi', __name__)


@wifi_bp.route('/api/wifi/scan')
def r_wifi_scan():
    nets = scan_wifi_networks()
    return jsonify(nets)


@wifi_bp.route('/api/wifi/connect', methods=['POST'])
def r_wifi_connect():
    data = request.get_json(force=True)
    ssid = data.get('ssid', '').strip()
    pw = data.get('password', '').strip()
    if not ssid:
        return jsonify(error='SSID missing'), 400
    cfg = load_cfg()
    cfg['wifi_ssid'] = ssid
    cfg['wifi_password'] = pw
    save_cfg(cfg)

    def _connect():
        ap_was_active = is_ap_active()
        if ap_was_active:
            stop_ap()
            time.sleep(2)
        ok = connect_client_wifi(ssid, pw)
        if ok:
            time.sleep(5)
            update_wifi_state()
        else:
            if ap_was_active:
                start_ap(cfg.get('ap_ssid', 'PiCopy'), cfg.get('ap_password', 'PiCopy,'))
            update_wifi_state()

    threading.Thread(target=_connect, daemon=True).start()
    return jsonify(ok=True, msg='Connection attempt started')


@wifi_bp.route('/api/wifi/ap', methods=['POST'])
def r_wifi_ap():
    data = request.get_json(force=True)
    ssid = data.get('ssid', '').strip()
    pw = data.get('password', '').strip()
    if not ssid or len(pw) < 8:
        return jsonify(error='SSID missing or password too short (min. 8 characters)'), 400
    cfg = load_cfg()
    cfg['ap_ssid'] = ssid
    cfg['ap_password'] = pw
    save_cfg(cfg)

    def _restart_ap():
        if is_ap_active():
            stop_ap()
            time.sleep(2)
        start_ap(ssid, pw)
        time.sleep(3)
        with wifi_lock:
            wifi_state.update(mode='ap', ssid=ssid, ip='10.42.0.1')

    threading.Thread(target=_restart_ap, daemon=True).start()
    return jsonify(ok=True)


@wifi_bp.route('/api/wifi/status')
def r_wifi_status():
    with wifi_lock:
        return jsonify(dict(wifi_state))
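The background `_connect` worker in `/api/wifi/connect` implements a simple fallback: stop the access point (it blocks the radio), try to join the client network, and restore the AP if joining fails so the device stays reachable. Stripped of the timing sleeps, the decision logic can be sketched as a pure function with the NetworkManager calls injected as callables — the names `try_connect`/`restore_ap` are illustrative, not functions from the codebase:

```python
def connect_with_fallback(ap_active, stop_ap, try_connect, restore_ap):
    """Try to join a client Wi-Fi; restore the AP on failure.

    Returns True if the client connection succeeded.
    """
    if ap_active:
        stop_ap()          # the AP occupies the radio, so stop it first
    ok = try_connect()
    if not ok and ap_active:
        restore_ap()       # fall back so the device stays reachable
    return ok
```

Injecting the side-effecting calls makes the fallback order testable without touching NetworkManager.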
71  routes/wireguard_routes.py  Normal file
@@ -0,0 +1,71 @@
"""PiCopy – Blueprint: /api/wireguard/*."""

import re
import threading

from flask import Blueprint, jsonify, request

from picopy.config import load_cfg, save_cfg
from picopy.wireguard import (
    wg_state, wg_lock, WG_CONF,
    wg_connect, wg_disconnect,
    wg_install, wg_uninstall,
    wg_save_config,
)

wireguard_bp = Blueprint('wireguard', __name__)


@wireguard_bp.route('/api/wireguard/config', methods=['GET', 'POST'])
def r_wg_config():
    if request.method == 'POST':
        data = request.get_json(force=True)
        content = data.get('content', '')
        if not content.strip():
            return jsonify(error='Configuration is empty'), 400
        ok, err = wg_save_config(content)
        if not ok:
            return jsonify(error=err), 500
        auto = data.get('auto')
        if auto is not None:
            c = load_cfg()
            c['wireguard_auto'] = bool(auto)
            save_cfg(c)
        with wg_lock:
            wg_state['has_config'] = True
        return jsonify(ok=True)
    if WG_CONF.exists():
        content = WG_CONF.read_text(encoding='utf-8')
        masked = re.sub(r'(PrivateKey\s*=\s*)(.+)', r'\1****', content)
        return jsonify(exists=True, config=masked)
    return jsonify(exists=False, config='')


@wireguard_bp.route('/api/wireguard/connect', methods=['POST'])
def r_wg_connect():
    threading.Thread(target=wg_connect, daemon=True).start()
    return jsonify(ok=True, msg='Connection attempt started')


@wireguard_bp.route('/api/wireguard/disconnect', methods=['POST'])
def r_wg_disconnect():
    ok = wg_disconnect()
    return jsonify(ok=ok)


@wireguard_bp.route('/api/wireguard/install', methods=['POST'])
def r_wg_install():
    with wg_lock:
        if wg_state['pkg_running']:
            return jsonify(error='Already running'), 400
    threading.Thread(target=wg_install, daemon=True).start()
    return jsonify(ok=True)


@wireguard_bp.route('/api/wireguard/uninstall', methods=['POST'])
def r_wg_uninstall():
    with wg_lock:
        if wg_state['pkg_running']:
            return jsonify(error='Already running'), 400
    threading.Thread(target=wg_uninstall, daemon=True).start()
    return jsonify(ok=True)
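The GET branch of `/api/wireguard/config` masks the `PrivateKey` line before the file is returned, so the key never reaches the browser. The same one-line `re.sub` can be checked in isolation (the wrapper function `mask_private_key` is our naming for illustration):

```python
import re

def mask_private_key(conf_text: str) -> str:
    """Replace the value of every PrivateKey line with **** before display."""
    return re.sub(r'(PrivateKey\s*=\s*)(.+)', r'\1****', conf_text)
```

Because `.` does not match newlines by default, only the key value itself is replaced; the following `Address`, `Peer`, etc. lines pass through untouched.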
1793  templates/index.html  Normal file
(File diff suppressed because it is too large)
@@ -1 +1 @@
-1.0.1
+1.0.72