knowledge-distillation-in-deep-learning - Skill Dossier
Design deployment-focused distillation systems that balance model size, accuracy, calibration, and cascade escalation under real resource limits. Best for teacher-student compression, threshold design, and failure-aware deployment. Activate on "model compression", "teacher-student", "distillation score", "cascade model", "edge deployment", or "model calibration". NOT for generic deep-learning overviews, prompt optimization, or training work without a concrete distillation objective.
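For orientation, here is a minimal sketch of the two mechanisms the description names: a temperature-scaled teacher-student distillation loss and a confidence-threshold cascade that escalates hard inputs from student to teacher. It assumes a PyTorch classification setup; the function names and the temperature, alpha, and threshold defaults are illustrative choices, not part of the skill itself.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Soften both distributions with the same temperature; the teacher's
    # softened output is the soft target the student learns to match.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps soft-target gradient magnitudes comparable
    # to the hard-label term as the temperature changes.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def cascade_predict(x, student, teacher, threshold=0.9):
    # Run the cheap student on everything; escalate to the teacher only
    # where the student's top-class confidence falls below the threshold.
    with torch.no_grad():
        probs = F.softmax(student(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        escalate = conf < threshold
        if escalate.any():
            teacher_pred = teacher(x).argmax(dim=-1)
            pred = torch.where(escalate, teacher_pred, pred)
    return pred, escalate  # the escalate mask doubles as a cost signal
```

A higher threshold routes more traffic to the teacher, trading serving cost for accuracy; the threshold is only meaningful if the student is reasonably calibrated, which is why calibration appears alongside threshold design in the description.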

Category: Uncategorized

Allowed Tools

Read, Write, Edit, Glob, Grep

Skills use the open SKILL.md standard — the same file works across all platforms.

Install all 463+ skills as a plugin:
claude plugin marketplace add curiositech/windags-skills
claude plugin install windags-skills

Claude activates knowledge-distillation-in-deep-learning automatically when your task matches its description.

Example prompts
"Use knowledge-distillation-in-deep-learning to help me build a feature system"
"I need expert help with design deployment-focused distillation systems tha..."