knowledge-distillation-in-deep-learning - Skill Dossier

Design deployment-focused distillation systems that balance model size, accuracy, calibration, and cascade escalation under real resource limits. Best for teacher-student compression, threshold design, and failure-aware deployment. Activate on "model compression", "teacher-student", "distillation score", "cascade model", "edge deployment", or "model calibration". NOT for generic deep-learning overviews, prompt optimization, or training work without a concrete distillation objective.
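The teacher-student and cascade ideas in the description can be sketched as follows. This is a minimal illustration, not part of the skill itself: the function names, temperature, loss weighting, and confidence threshold are all illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style KD: blend a softened-teacher term with hard-label cross-entropy.

    T and alpha are illustrative hyperparameters; tune per deployment budget.
    """
    p_t = softmax(teacher_logits, T)                      # softened teacher targets
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)  # softened student log-probs
    # Cross-entropy against soft targets (equals KL up to a teacher-entropy
    # constant); the T^2 factor keeps gradient scale comparable across T.
    soft = -np.sum(p_t * log_p_s, axis=-1).mean() * (T * T)
    log_q = np.log(softmax(student_logits) + 1e-12)       # T=1 for the hard term
    hard = -log_q[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

def should_escalate(student_probs, threshold=0.85):
    """Cascade rule: route to the larger model when student confidence is low."""
    return np.asarray(student_probs).max(axis=-1) < threshold
```

The escalation threshold is where calibration matters: if the student's confidences are miscalibrated, a fixed cutoff will over- or under-escalate, so the threshold is typically set on a held-out set against a target escalation rate.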
Category: Uncategorized
Allowed Tools
Read, Write, Edit, Glob, Grep
Skills use the open SKILL.md standard — the same file works across all platforms.
Install all 463+ skills as a plugin
claude plugin marketplace add curiositech/windags-skills
claude plugin install windags-skills
Claude activates knowledge-distillation-in-deep-learning automatically when your task matches its description.
