# dms+k8s — DMS dialect for Kubernetes manifests

- Brand: dms+k8s
- File extension: `.dms.k8s`
- Spec version: 0.1 (draft)
- DMS tier required: 1 (`_dms_tier: 1` in front matter)
- Parent spec: TIER1.md
- Tier-0 base: SPEC.md
- Pre-1.0: breaking changes are still possible. No version-bump rules apply yet.
## What this is

dms+k8s is a tier-1 dialect that expresses Kubernetes resource manifests
in DMS shape. Each top-level resource is a `|resource(...)` call carrying
the resource's kind, apiVersion, and metadata as decorator params;
the value tree carries whatever sub-block the kind uses (`spec`, `data`,
`subjects`, …).
Why a dialect rather than YAML? k8s YAML pain is well-documented:

- **Document separators.** `---` between resources is positional and brittle; tools that join/split files must respect them. A DMS file is one document; multi-resource manifests use a list root.
- **Indent fragility.** A four-deep `metadata.labels` indented one space wrong silently parses as a different shape. DMS rejects indent ambiguity by construction.
- **String-vs-int coercion.** `version: 1.10` becomes the float `1.1`, and `port: "80"` is a string while `port: 80` is an int — k8s YAML's type coercion has bitten everyone. DMS values are typed at the source.
- **Comment loss.** `kubectl apply -f` strips comments. DMS preserves them through decode → mutate → re-emit.
- **Cross-file lint pain.** Multi-doc YAML with selector mismatches between Deployment and Service is hard to grep. The DMS tier-1 decorator sidecar makes labels / selectors a structured query surface, not strings to regex.
dms+k8s is not a wire format. A separate runtime layer converts
decoded `Document_t1` ↔ k8s API JSON / YAML when `kubectl` or the API
server needs it.
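That runtime layer is easy to picture. A minimal sketch in Python — `emit_k8s` and the dict shapes it consumes are illustrative names for this document, not spec-defined API — showing how one decoded resource call is re-shaped into the object the API server expects:

```python
# Sketch only: re-shape a decoded |resource(...) call (kind, decorator
# params, value tree) into a k8s API object. All names are illustrative.

def emit_k8s(kind, params, body):
    """Build a k8s API object dict from a decoded resource call."""
    metadata = {"name": params["name"]}
    if "namespace" in params:
        metadata["namespace"] = params["namespace"]
    obj = {
        "apiVersion": params.get("api", "v1"),
        "kind": kind,
        "metadata": metadata,
    }
    obj.update(body)  # spec: / data: / subjects: pass through untouched
    return obj

ns = emit_k8s("Namespace", {"api": "v1", "name": "prod"}, {})
# ns == {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "prod"}}
```

From here, serializing `obj` to JSON (or YAML) is the wire step; decorator fan-out like label mirroring would run before this point.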
## Quick comparison
```yaml
# Standard k8s YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: prod
  labels:
    app: web
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: prod
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```
In dms+k8s:
```
+++
_dms_tier: 1
_dms_imports:
  + dialect: "k8s"
    version: "1.0.0"
+++
+ |resource("Deployment", api: "apps/v1", name: "web", namespace: "prod")
  🏷️labels(app: "web", tier: "frontend")
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: "web"
    template:
      🏷️labels(app: "web")
      spec:
        containers:
          + |container("nginx", image: "nginx:1.21")
            ports:
              + |port(80)
            resources:
              requests:
                memory: "256Mi"
                cpu: "100m"
              limits:
                memory: "512Mi"
                cpu: "500m"
+ |resource("Service", api: "v1", name: "web", namespace: "prod")
  spec:
    selector:
      app: "web"
    ports:
      + |port(80, target: 80)
```
Wins:

- `🏷️labels(...)` is one decorator call instead of nested `metadata.labels` + `spec.selector.matchLabels` + `spec.template.metadata.labels` repetition. The dialect runtime applies the labels in all three places (or wherever the kind's contract requires).
- `|port(80)` is a one-line container port; `|port(80, target: 80)` is a Service port. Same family, different positional + named shape per kind context.
- Numeric values stay numeric; quoted strings stay strings. No `"80"`-vs-`80` ambiguity.
- Comments and indentation are spec-defined, not whitespace-ambiguous.
## Dialect canonical spec
```
+++
_dms_tier: 0
+++
name: "k8s"
version: "1.0.0"
version_strategy: "caret"
families:
  + name: "resource"
    default_sigils: ["|"]
    empty_default: {}
    content_slot: "spec"
    params:
      mode: "positional"
      positional:
        - { name: "kind", type: "string", required: true }
      typed:
        api: { type: "string" }
        name: { type: "string", required: true }
        namespace: { type: "string" }
        annotations: { type: "map_of string" }
      required: []
  + name: "container"
    default_sigils: ["|"]
    empty_default: {}
    content_slot: "config"
    params:
      mode: "positional"
      positional:
        - { name: "name", type: "string", required: true }
      typed:
        image: { type: "string", required: true }
        imagePullPolicy: { type: "string" }
        command: { type: "list_of string" }
        args: { type: "list_of string" }
        env: { type: "map_of string" }
        workingDir: { type: "string" }
  + name: "port"
    default_sigils: ["|"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "positional"
      positional:
        - { name: "port", type: "integer", required: true }
      typed:
        target: { type: "integer" }
        protocol: { type: "string" }
        name: { type: "string" }
        nodePort: { type: "integer" }
  + name: "labels"
    default_sigils: ["🏷️"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "wildcard"
  + name: "annotations"
    default_sigils: ["📝"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "wildcard"
  + name: "secret"
    default_sigils: ["🔒"]
    empty_default: ""
    content_slot: ~
    params:
      mode: "positional"
      positional:
        - { name: "name", type: "string", required: true }
      typed:
        key: { type: "string" }
        optional: { type: "boolean", default: false }
  + name: "configmap"
    default_sigils: ["⚙️"]
    empty_default: ""
    content_slot: ~
    params:
      mode: "positional"
      positional:
        - { name: "name", type: "string", required: true }
      typed:
        key: { type: "string" }
        optional: { type: "boolean", default: false }
  + name: "volume"
    default_sigils: ["💾"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "positional"
      positional:
        - { name: "name", type: "string", required: true }
      typed:
        emptyDir: { type: "any" }
        hostPath: { type: "string" }
        secret: { type: "string" }
        configMap: { type: "string" }
        pvc: { type: "string" }
  + name: "mount"
    default_sigils: ["📂"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "positional"
      positional:
        - { name: "name", type: "string", required: true }
        - { name: "path", type: "string", required: true }
      typed:
        readOnly: { type: "boolean", default: false }
        subPath: { type: "string" }
  + name: "selector"
    default_sigils: ["🎯"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "wildcard"
  + name: "rule"
    default_sigils: ["🔐"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "wildcard_with_typed"
      typed:
        verbs: { type: "list_of string", required: true }
        resources: { type: "list_of string" }
        apiGroups: { type: "list_of string" }
        resourceNames: { type: "list_of string" }
  + name: "probe"
    default_sigils: ["💚"]
    empty_default: {}
    content_slot: ~
    params:
      mode: "wildcard_with_typed"
      typed:
        httpGet: { type: "any" }
        exec: { type: "any" }
        tcpSocket: { type: "any" }
        initialDelaySeconds: { type: "integer" }
        periodSeconds: { type: "integer", default: 10 }
        timeoutSeconds: { type: "integer", default: 1 }
        successThreshold: { type: "integer", default: 1 }
        failureThreshold: { type: "integer", default: 3 }
```
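A decoder consuming this spec checks each call against its family's `params` block. A Python sketch of that check, using the `port` family above — `validate_call` and `TYPES` are illustrative names, not dialect API:

```python
# Sketch: validate a decorator call against a family params block.
# PORT_FAMILY mirrors the "port" family from the canonical spec above.

PORT_FAMILY = {
    "mode": "positional",
    "positional": [{"name": "port", "type": "integer", "required": True}],
    "typed": {
        "target": {"type": "integer"},
        "protocol": {"type": "string"},
        "name": {"type": "string"},
        "nodePort": {"type": "integer"},
    },
}

TYPES = {"integer": int, "string": str, "boolean": bool}

def validate_call(family, positional, named):
    """Return the merged param map, or raise on a contract violation."""
    out = {}
    spec_pos = family["positional"]
    if len(positional) > len(spec_pos):
        raise ValueError("too many positional params")
    for spec, value in zip(spec_pos, positional):
        if not isinstance(value, TYPES[spec["type"]]):
            raise TypeError(f"{spec['name']}: expected {spec['type']}")
        out[spec["name"]] = value
    for spec in spec_pos[len(positional):]:  # positions not supplied
        if spec.get("required"):
            raise ValueError(f"missing required param {spec['name']}")
    for key, value in named.items():
        spec = family["typed"].get(key)
        if spec is None:
            raise ValueError(f"unknown param {key}")
        if not isinstance(value, TYPES[spec["type"]]):
            raise TypeError(f"{key}: expected {spec['type']}")
        out[key] = value
    return out

validate_call(PORT_FAMILY, [80], {"target": 80})   # ok
# validate_call(PORT_FAMILY, ["80"], {})           # TypeError: string, not int
```

A fuller version would also enforce `required` on typed params (e.g. `container`'s `image`) and apply `default` values.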
## Sigil cheatsheet

| Sigil | Family | Shape |
|---|---|---|
| \| | `resource` | top-level k8s object (Deployment, Service, ConfigMap, …) |
| \| | `container` | container spec inside `containers:` |
| \| | `port` | container port / service port |
| 🏷️ | `labels` | `metadata.labels` + `selector.matchLabels` (auto-mirror) |
| 📝 | `annotations` | `metadata.annotations` |
| 🔒 | `secret` | Secret reference (envVar / volume / imagePullSecret) |
| ⚙️ | `configmap` | ConfigMap reference |
| 💾 | `volume` | `volumes[]` entry |
| 📂 | `mount` | `volumeMounts[]` entry |
| 🎯 | `selector` | `spec.selector` + `spec.template.metadata.labels` mirror |
| 🔐 | `rule` | RBAC rule inside Role / ClusterRole |
| 💚 | `probe` | livenessProbe / readinessProbe / startupProbe |
## Worked example: Deployment + Service + ConfigMap
```
+++
_dms_tier: 1
_dms_imports:
  + dialect: "k8s"
    version: "1.0.0"
+++
+ |resource("ConfigMap", api: "v1", name: "web-config", namespace: "prod")
  data:
    log_level: "info"
    feature_flags: |
      auth_v2=true
      cache_v2=true
+ |resource("Deployment", api: "apps/v1", name: "web", namespace: "prod")
  🏷️labels(app: "web", tier: "frontend", version: "v3.2.1")
  📝annotations(
    "owner": "platform-team",
    "deployment.kubernetes.io/revision": "42"
  )
  spec:
    replicas: 3
    🎯selector(app: "web")
    template:
      spec:
        containers:
          + |container("nginx", image: "nginx:1.21")
            ports:
              + |port(80, name: "http")
            env:
              LOG_LEVEL: ⚙️configmap("web-config", key: "log_level")
              API_KEY: 🔒secret("web-creds", key: "api_key")
            💚probe(httpGet: {path: "/health", port: 80}, periodSeconds: 5)
            volumeMounts:
              + 📂mount("config", "/etc/web/config", readOnly: true)
            resources:
              requests: {memory: "256Mi", cpu: "100m"}
              limits: {memory: "512Mi", cpu: "500m"}
        volumes:
          + 💾volume("config", configMap: "web-config")
+ |resource("Service", api: "v1", name: "web", namespace: "prod")
  🏷️labels(app: "web")
  spec:
    🎯selector(app: "web")
    ports:
      + |port(80, target: "http")
    type: "ClusterIP"
```
Reads top-to-bottom. Reuse: `🏷️labels(app: "web")` is written once on the
Deployment and once on the Service; the dialect runtime copies it into
each kind's correct YAML position (`metadata.labels`,
`spec.selector.matchLabels`, `spec.template.metadata.labels` for the
Deployment; `metadata.labels` and `spec.selector` for the Service).
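The copy step the runtime performs can be sketched as follows — `MIRROR_PATHS`, `apply_labels`, and the exact path sets are illustrative for this document, not the dialect's normative contract:

```python
# Sketch: one 🏷️labels(...) call fans out to each kind's label paths.

MIRROR_PATHS = {
    "Deployment": [
        ("metadata", "labels"),
        ("spec", "selector", "matchLabels"),
        ("spec", "template", "metadata", "labels"),
    ],
    "Service": [("metadata", "labels")],
}

def set_path(obj, path, values):
    """Merge values into the nested dict at path, creating levels as needed."""
    node = obj
    for key in path[:-1]:
        node = node.setdefault(key, {})
    node.setdefault(path[-1], {}).update(values)

def apply_labels(kind, obj, labels):
    for path in MIRROR_PATHS[kind]:
        set_path(obj, path, labels)
    return obj

dep = apply_labels("Deployment", {}, {"app": "web"})
# dep["spec"]["selector"]["matchLabels"] == {"app": "web"}
```

A label rename then touches the decorator call only; the fan-out recomputes every mirrored path.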
## Bigger example: app + Ingress + HorizontalPodAutoscaler + RBAC

A full production-grade application stack: the Deployment + Service + ConfigMap from the worked example above, extended with an Ingress, a HorizontalPodAutoscaler, and RBAC objects (ServiceAccount + Role + RoleBinding). Native YAML first, then the dms+k8s equivalent.

### Native YAML (multi-document)
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web
  namespace: prod
  labels:
    app: web
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: web-role
  namespace: prod
  labels:
    app: web
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-rolebinding
  namespace: prod
  labels:
    app: web
subjects:
  - kind: ServiceAccount
    name: web
    namespace: prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: web-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: prod
  labels:
    app: web
    tier: frontend
    version: v3.2.1
  annotations:
    owner: platform-team
    deployment.kubernetes.io/revision: "42"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
        tier: frontend
        version: v3.2.1
    spec:
      serviceAccountName: web
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
              name: http
          env:
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: web-config
                  key: log_level
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: web-creds
                  key: api_key
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      volumes:
        - name: config
          configMap:
            name: web-config
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: prod
  labels:
    app: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: http
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: prod
  labels:
    app: web
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  name: http
  tls:
    - hosts:
        - myapp.example.com
      secretName: web-tls
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
  namespace: prod
  labels:
    app: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
### dms+k8s equivalent
```
+++
_dms_tier: 1
_dms_imports:
  + dialect: "k8s"
    version: "1.0.0"
+++
# ── RBAC ─────────────────────────────────────────────────────────────────────
+ |resource("ServiceAccount", api: "v1", name: "web", namespace: "prod")
  🏷️labels(app: "web")

# Role: rules are structured decorator calls — grep '🔐rule' finds all rules
+ |resource("Role", api: "rbac.authorization.k8s.io/v1",
            name: "web-role", namespace: "prod")
  🏷️labels(app: "web")
  rules:
    + 🔐rule(
        apiGroups: [""],
        resources: ["configmaps", "secrets"],
        verbs: ["get", "list", "watch"],
      )
    + 🔐rule(
        apiGroups: [""],
        resources: ["pods"],
        verbs: ["get", "list"],
      )

+ |resource("RoleBinding", api: "rbac.authorization.k8s.io/v1",
            name: "web-rolebinding", namespace: "prod")
  🏷️labels(app: "web")
  subjects:
    + {kind: "ServiceAccount", name: "web", namespace: "prod"}
  roleRef:
    apiGroup: "rbac.authorization.k8s.io"
    kind: "Role"
    name: "web-role"

# ── App resources ─────────────────────────────────────────────────────────────
+ |resource("Deployment", api: "apps/v1", name: "web", namespace: "prod")
  # Labels written once here — runtime mirrors to metadata.labels,
  # spec.selector.matchLabels, and spec.template.metadata.labels
  🏷️labels(app: "web", tier: "frontend", version: "v3.2.1")
  📝annotations(
    owner: "platform-team",
    "deployment.kubernetes.io/revision": "42",
  )
  spec:
    replicas: 3
    🎯selector(app: "web")
    template:
      spec:
        serviceAccountName: "web"
        containers:
          + |container("nginx", image: "nginx:1.21")
            ports:
              + |port(80, name: "http")
            env:
              LOG_LEVEL: ⚙️configmap("web-config", key: "log_level")
              API_KEY: 🔒secret("web-creds", key: "api_key")
            # 💚probe: structured — httpGet.path and port are typed
            💚probe(
              httpGet: {path: "/health", port: 80},
              initialDelaySeconds: 5,
              periodSeconds: 5,
            )
            resources:
              requests: {memory: "256Mi", cpu: "100m"}
              limits: {memory: "512Mi", cpu: "500m"}
        volumes:
          + 💾volume("config", configMap: "web-config")

+ |resource("Service", api: "v1", name: "web", namespace: "prod")
  🏷️labels(app: "web")
  spec:
    🎯selector(app: "web")
    ports:
      + |port(80, target: "http")
    type: "ClusterIP"

# ── Ingress ───────────────────────────────────────────────────────────────────
+ |resource("Ingress", api: "networking.k8s.io/v1", name: "web", namespace: "prod")
  🏷️labels(app: "web")
  📝annotations(
    "nginx.ingress.kubernetes.io/rewrite-target": "/",
    "nginx.ingress.kubernetes.io/ssl-redirect": "true",
  )
  spec:
    ingressClassName: "nginx"
    rules:
      + {
          host: "myapp.example.com",
          http: {
            paths: [
              {
                path: "/",
                pathType: "Prefix",
                backend: {service: {name: "web", port: {name: "http"}}},
              },
            ],
          },
        }
    tls:
      + {hosts: ["myapp.example.com"], secretName: "web-tls"}

# ── HorizontalPodAutoscaler ───────────────────────────────────────────────────
+ |resource("HorizontalPodAutoscaler", api: "autoscaling/v2",
            name: "web", namespace: "prod")
  🏷️labels(app: "web")
  spec:
    scaleTargetRef:
      apiVersion: "apps/v1"
      kind: "Deployment"
      name: "web"
    minReplicas: 2
    maxReplicas: 10
    metrics:
      # CPU metric — averageUtilization: 70 is integer, not "70"
      + {
          type: "Resource",
          resource: {
            name: "cpu",
            target: {type: "Utilization", averageUtilization: 70},
          },
        }
      + {
          type: "Resource",
          resource: {
            name: "memory",
            target: {type: "Utilization", averageUtilization: 80},
          },
        }
```
## What's visible in DMS that's lost or obscured in YAML
**Labels auto-mirror across all seven objects.** The YAML above writes
`app: web` ten times across the seven resources (ServiceAccount, Role,
RoleBinding, Deployment, Service, Ingress, HPA). In dms+k8s,
`🏷️labels(app: "web")` appears once per resource. The dialect runtime
applies it wherever each kind's contract demands it — `metadata.labels`,
`spec.selector.matchLabels`, `spec.template.metadata.labels` for the
Deployment; just `metadata.labels` for the ServiceAccount; and so on. A
label rename is one edit per resource, not a find-and-replace across ten
YAML paths.
**RBAC rules are structured, not nested YAML.** `🔐rule(apiGroups: [...],
resources: [...], verbs: [...])` has a typed param signature: `verbs` is
`list_of string`, `resources` is `list_of string`. Omitting `verbs` or
supplying a non-list is a decoder error. In YAML, the `rules:` list is
untyped; a typo like `verb: ["get"]` (missing the s) isn't caught until
the manifest reaches the API server — and a rule that ends up with no
verbs grants no access at all.
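The `🔐rule` contract can be checked entirely client-side, before anything reaches a cluster. A minimal sketch — `check_rule` is an illustrative name, not dialect API:

```python
# Sketch: enforce the 🔐rule family contract (verbs required,
# every list-typed param a list of strings) at decode time.

def check_rule(params):
    if "verbs" not in params:
        raise ValueError("rule: missing required param 'verbs'")
    for key in ("verbs", "resources", "apiGroups", "resourceNames"):
        if key in params:
            value = params[key]
            if not (isinstance(value, list)
                    and all(isinstance(v, str) for v in value)):
                raise TypeError(f"rule: {key} must be list_of string")
    return params

check_rule({"apiGroups": [""], "resources": ["pods"], "verbs": ["get"]})  # ok
# check_rule({"verb": ["get"]})  # ValueError — the typo is caught here
```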
**Integer metric targets stay integer.** `averageUtilization: 70` is a
DMS integer. YAML's `averageUtilization: 70` is also an integer, but the
quoted `averageUtilization: "70"` (which some templating tools emit)
silently becomes a string. DMS rejects the type mismatch at the decoder;
there is no coercion.
**Single import covers all kinds.** Ingress, HorizontalPodAutoscaler,
Role, RoleBinding, and ServiceAccount are all native k8s kinds — the
same dms+k8s import handles them. Genuine CRDs from a third party (e.g.
Istio's VirtualService) would live in a separate dialect like
`dms+k8s-istio`, imported as a second `_dms_imports` entry under a
non-conflicting sigil (see the CRDs section).
## Auto-mirror semantics

Several decorators have kind-dependent placement. The dialect runtime applies them where each kind expects them:

| Decorator | Deployment / StatefulSet / DaemonSet | Service | ConfigMap / Secret | Job / CronJob |
|---|---|---|---|---|
| `🏷️labels(...)` | `metadata.labels` + `spec.template.metadata.labels` | `metadata.labels` | `metadata.labels` | `metadata.labels` + `spec.template.metadata.labels` |
| `🎯selector(...)` | `spec.selector.matchLabels` + `spec.template.metadata.labels` (consistency check) | `spec.selector` | (n/a) | `spec.selector.matchLabels` |
| `📝annotations(...)` | `metadata.annotations` | `metadata.annotations` | `metadata.annotations` | `metadata.annotations` |
A consistency check fires at decode time: a Deployment's `🎯selector` and
its template's `🏷️labels` must agree on the keys they share; otherwise
the runtime errors, pointing at the source line of the offending
decorator. This catches the most common class of k8s YAML bug.
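That decode-time check is small enough to sketch directly — `check_selector` is an illustrative name for this document, not dialect API:

```python
# Sketch: a Deployment's selector and its template labels must agree
# on every key both maps define.

def check_selector(selector, template_labels):
    """Raise if any shared key carries different values in the two maps."""
    conflicts = {
        k for k in selector.keys() & template_labels.keys()
        if selector[k] != template_labels[k]
    }
    if conflicts:
        raise ValueError(f"selector/labels mismatch on keys: {sorted(conflicts)}")

check_selector({"app": "web"}, {"app": "web", "tier": "frontend"})  # ok
# check_selector({"app": "web"}, {"app": "api"})  # ValueError
```

In plain YAML the equivalent mismatch only surfaces at apply time, as a Deployment whose selector matches zero pods.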
## Multi-document files

A dms+k8s file's body is a list of resources; each list element is a
`|resource(...)` call. The encoder preserves source order for
round-trip; apply-time ordering (e.g. a Namespace before the objects
inside it) is left to the consuming tool. Single-resource files are also
valid:
```
+++
_dms_tier: 1
_dms_imports:
  + dialect: "k8s"
    version: "1.0.0"
+++
|resource("Namespace", api: "v1", name: "prod")
```
(Top-level root with a single decorator on the document — the body value tree carries the namespace's metadata + spec under the `content_slot` hoist.)
## Versioning

The dialect's `version` field tracks Kubernetes API spec generations
loosely:

- `1.0.0` — covers Deployment, Service, ConfigMap, Secret, Pod, Namespace, StatefulSet, DaemonSet, Job, CronJob, Role, ClusterRole, RoleBinding, ClusterRoleBinding, Ingress, NetworkPolicy. Stable resources as of Kubernetes ≥ 1.27.
- Future minor bumps add CRDs / Custom Resource families (kept additive).
- Major bump only on incompatible breaking changes to existing family signatures.

`version_strategy: caret` (default) — a file's `version: "1.0.0"` matches
any installed `1.x.y` ≥ `1.0.0`. Pin tighter with `version_strategy: tilde`
in the dialect spec.
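The two strategies differ only in how much of the version must match. A sketch of the resolution rule (illustrative code, not the spec's normative algorithm):

```python
# Sketch: caret accepts any compatible release within the same major
# version; tilde locks major.minor as well.

def parse(v):
    return tuple(int(p) for p in v.split("."))

def matches(required, installed, strategy="caret"):
    req, inst = parse(required), parse(installed)
    if inst < req:
        return False                  # never downgrade below the floor
    if strategy == "caret":
        return inst[0] == req[0]      # same major
    if strategy == "tilde":
        return inst[:2] == req[:2]    # same major.minor
    raise ValueError(f"unknown strategy {strategy!r}")

matches("1.0.0", "1.4.2")             # True under caret
matches("1.0.0", "1.4.2", "tilde")    # False — minor differs
```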
## CRDs

Custom resources extend the family table at registry-load time. A port
that supports CRDs reads `_dms_imports.allow` / `deny` to scope which
CRD families a file may use, and reads CRD OpenAPI schemas to populate
each family's `typed` block.
```
+++
_dms_tier: 1
_dms_imports:
  + dialect: "k8s"
    version: "1.0.0"
    allow:
      resource: ["Deployment", "Service", "ConfigMap"]
  + dialect: "k8s-istio"
    version: "1.0.0"
    bind:
      "🕸️": ["virtualservice", "destinationrule"]
+++
```
The second import binds two Istio CRD families to a non-conflicting
sigil. Source code references them as `🕸️virtualservice(...)`.
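The "non-conflicting" requirement is a registry-load check: a new binding must not claim a sigil another loaded dialect already owns. A sketch — `bind` and the registry shape are illustrative, not spec API:

```python
# Sketch: registry maps sigil -> set of family names; binding a sigil
# that is already claimed is a load-time error.

def bind(registry, sigil, families):
    if sigil in registry:
        raise ValueError(
            f"sigil {sigil!r} already bound to {sorted(registry[sigil])}")
    registry[sigil] = set(families)
    return registry

reg = {"🏷️": {"labels"}, "|": {"resource", "container", "port"}}
bind(reg, "🕸️", ["virtualservice", "destinationrule"])  # ok
# bind(reg, "🏷️", ["mylabels"])  # ValueError — conflict with dms+k8s
```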
## Open questions

- **Mixed kinds in one file vs one-per-file.** The industry is split. dms+k8s allows both; consumer pipelines can normalize either way.
- **`status` block round-trip.** k8s API objects carry a `status` set by controllers. Round-tripping through dms+k8s loses `status` by default (since authors don't write it). Add an `_apply` / `_runtime` flag if a port wants to preserve server-side status.
- **JSON-Patch / Strategic-Merge-Patch generation** from a diff between two `Document_t1` values. Useful for `kubectl patch`-style workflows. Not spec-defined yet.
- **Helm values overlay.** A separate `dms+helm` dialect could compose on top: `_dms_imports` adds helm template families, and the body carries the values + chart references.
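One direction the patch-generation question could take: compute an RFC 7386 JSON Merge Patch from two decoded value trees. A pure-stdlib sketch — `diff` is illustrative, not spec behavior:

```python
# Sketch: RFC 7386 merge patch that transforms old into new.
# A None value means "delete this key" per the merge-patch semantics.

def diff(old, new):
    patch = {}
    for key in old.keys() | new.keys():
        if key not in new:
            patch[key] = None                       # deletion
        elif key not in old:
            patch[key] = new[key]                   # addition
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            sub = diff(old[key], new[key])          # recurse into maps
            if sub:
                patch[key] = sub
        elif old[key] != new[key]:
            patch[key] = new[key]                   # changed scalar/list
    return patch

diff({"spec": {"replicas": 3}}, {"spec": {"replicas": 5}})
# -> {"spec": {"replicas": 5}}
```

Strategic merge patch would need more than this (list merge keys like `name` for `containers[]`), which is part of why it remains an open question.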
## Why not just YAML / JSON / HCL?

- **YAML** — type ambiguity, indent fragility, multi-doc bookkeeping. The pain dms+k8s most directly addresses.
- **JSON** — too verbose for hand-authoring; loses comments; no native multi-doc.
- **HCL / Terraform** — incompatible model (resources are `resource "foo" "bar" { ... }` blocks); also doesn't address the k8s-specific label/selector mirror problem.
- **Pure DMS tier-0** — possible (just write nested tables), but loses the structured-decorator surface that lets `🏷️labels` and `🎯selector` carry kind-aware placement semantics.
## See also

- `TIER1.md` — tier-1 spec.
- `dialects/dms+html.md` — element-shaped data, sister dialect.
- `dialects/dms+hcl.md` — block-shaped data, contrast.