Some of the most common applications in a Usenet stack are SABnzbd, Sonarr, and Radarr.
SABnzbd is a binary newsreader that handles downloads from Usenet, Sonarr acts as a PVR for TV shows, and Radarr, a fork of Sonarr, is a PVR for movies. This post shows how to deploy a Usenet stack on Kubernetes.
I recently helped a friend get these apps deployed on Kubernetes and published the YAML files. You can find them in my Github repo, but I will also share them further down.
Each YAML below creates a Kubernetes Deployment and a Service that exposes the app as a NodePort. Once you have deployed an app with kubectl create -f <file>, you can find the port on which to reach it by running:
kubectl get service sabnzbd-service
NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
sabnzbd-service   NodePort   10.96.114.12   <none>        8080:32726/TCP   12d
In the example above, the app is accessible via any Kubernetes Node IP on port 32726.
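If you don't know your node IPs offhand, a quick way to list them is the standard kubectl command below (nothing specific to this stack); combining a node's INTERNAL-IP with the NodePort gives you the URL, e.g. http://<node-ip>:32726.

kubectl get nodes -o wide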
The YAML file below deploys SABnzbd on Kubernetes with one container and one service.
Here is the actual YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sabnzbd-deployment
  labels:
    app: sabnzbd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sabnzbd
  template:
    metadata:
      labels:
        app: sabnzbd
    spec:
      containers:
      - name: sabnzbd
        image: linuxserver/sabnzbd
        ports:
        - containerPort: 6789
        volumeMounts:
        - mountPath: /config
          name: sabnzbd-config
        - mountPath: /downloads
          name: sabnzbd-downloads
        - mountPath: /incomplete-downloads
          name: sabnzbd-incomplete
      volumes:
      - name: sabnzbd-config
        hostPath:
          path: /nfs/media/sabnzbd/config # <<< change to local NFS mount on your k8s nodes
      - name: sabnzbd-downloads
        hostPath:
          path: /nfs/media/sabnzbd/completed-downloads # <<< change to local NFS mount on your k8s nodes
      - name: sabnzbd-incomplete
        hostPath:
          path: /nfs/media/sabnzbd/incomplete-downloads # <<< change to local NFS mount on your k8s nodes
---
kind: Service
apiVersion: v1
metadata:
  name: sabnzbd-service
spec:
  selector:
    app: sabnzbd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort
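Once the manifest is applied, a quick sanity check (plain kubectl, using the label from the manifest above) is to confirm the pod is running and that the hostPath volumes show up under Mounts:

kubectl get pods -l app=sabnzbd
kubectl describe pod -l app=sabnzbd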
This YAML deploys Sonarr on Kubernetes with one container and one service.
Here is the actual YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr-deployment
  labels:
    app: sonarr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarr
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
      - name: sonarr
        image: linuxserver/sonarr
        ports:
        - containerPort: 8989
        volumeMounts:
        - mountPath: /config
          name: sonarr-config
        - mountPath: /tv
          name: sonarr-tv
      volumes:
      - name: sonarr-config
        hostPath:
          path: /nfs/media/sonarr/config # <<< change to local NFS mount on your k8s nodes
      - name: sonarr-tv
        hostPath:
          path: /nfs/media/sonarr/tv # <<< change to local NFS mount on your k8s nodes
---
kind: Service
apiVersion: v1
metadata:
  name: sonarr-service
spec:
  selector:
    app: sonarr
  ports:
  - protocol: TCP
    port: 8989
    targetPort: 8989
  type: NodePort
This YAML deploys Radarr on Kubernetes with one container and one service.
Here is the actual YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr-deployment
  labels:
    app: radarr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: radarr
  template:
    metadata:
      labels:
        app: radarr
    spec:
      containers:
      - name: radarr
        image: linuxserver/radarr
        ports:
        - containerPort: 7878
        volumeMounts:
        - mountPath: /config
          name: radarr-config
        - mountPath: /downloads
          name: radarr-downloads
        - mountPath: /movies
          name: radarr-movies
      volumes:
      - name: radarr-config
        hostPath:
          path: /nfs/media/radarr/config # <<< change to local NFS mount on your k8s nodes
      - name: radarr-downloads
        hostPath:
          path: /nfs/media/radarr/downloads # <<< change to local NFS mount on your k8s nodes
      - name: radarr-movies
        hostPath:
          path: /nfs/media/radarr/movies # <<< change to local NFS mount on your k8s nodes
---
kind: Service
apiVersion: v1
metadata:
  name: radarr-service
spec:
  selector:
    app: radarr
  ports:
  - protocol: TCP
    port: 7878
    targetPort: 7878
  type: NodePort
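With all three apps deployed, you can list their NodePorts in a single command (the service names are the ones from the manifests above):

kubectl get services sabnzbd-service sonarr-service radarr-service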



That’s the same issue I had. SQLite just doesn’t seem to work well with NFS. I just made it select the same node each time with a static path on that node, and since then it’s been running fine!
Yeah, you’re right, SQLite doesn’t do well with NFS.
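For reference, pinning one of these Deployments to a single node, so the SQLite database can live on a local (non-NFS) path, can be done with a nodeSelector in the pod template. The fragment below is only a sketch: the node name and the path are placeholders you would replace with your own.

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-node-1   # placeholder node name
      volumes:
      - name: sonarr-config
        hostPath:
          path: /data/sonarr/config          # local path on that node, not NFS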
Sorry for the late reply. I assume it’s either a networking issue or that the credentials for your indexer aren’t correct. You should be able to get the logs with docker logs; they should tell you more.
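For the Kubernetes Deployments in this post, the same information is available by pulling the container logs with kubectl, for example:

kubectl logs deployment/sonarr-deployment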
Don’t know if anyone will reply to this, but why is the SABnzbd containerPort 6789 while all the other ports are 8080? SABnzbd’s port is 8080, so I didn’t know where the 6789 came from.
I’m pretty sure you found a bug in my YAML file. That port doesn’t make sense to me either.