Adapting to a Cambrian AI/SW/HW explosion with open co-design competitions and Collective Knowledge
Main Author: | Grigori Fursin |
---|---|
Format: | Proceeding |
Language: | English |
Published: | 2017 |
Online Access: | https://zenodo.org/record/2544258 |
Contents:
- The original presentation was shared via SlideShare. The slides are from ARM's Research Summit 2017 talk "Community-Driven and Knowledge-Guided Optimization of AI Applications Across the Whole SW/HW Stack". Related links:
  - cKnowledge.org
  - cKnowledge.org/repo
  - cKnowledge.org/repo-beta
  - cKnowledge.org/android-apps.html
  - cKnowledge.org/ai
  - developer.arm.com/research/summit

  Co-designing the whole AI/SW/HW stack in terms of speed, accuracy, energy consumption, size, cost and other metrics has become extremely complex, time-consuming and costly. With no rigorous methodology for analyzing performance and accumulating optimisation knowledge, we are destined to drown in the ever-growing number of design choices, system features and conflicting optimisation goals.

  We present our novel community-driven approach to solving these problems. Originating in the natural sciences, this approach is embodied in Collective Knowledge (CK), our open-source, cross-platform workflow framework and repository for automatic, collaborative and reproducible experimentation. CK helps organize, unify and share representative workloads, data sets, AI frameworks, libraries, compilers, scripts, models and other artifacts as customizable and reusable components with a common JSON API (a minimal usage sketch follows this record). CK helps bring academia, industry and end users together to gradually expose optimisation choices at all levels (for example, from parameterized models and algorithmic skeletons to compiler flags and hardware configurations) and to autotune them across diverse inputs and platforms.

  Optimisation knowledge is continuously aggregated in a reproducible way in public or private repositories such as cKnowledge.org/repo, and can then be mined and extrapolated to predict better AI algorithm choices, compiler transformations and hardware designs. We also demonstrate how we use this approach in practice together with ARM and other companies to adapt to a Cambrian AI/SW/HW explosion: by creating an open repository of reusable AI artifacts and then collaboratively optimising and co-designing the whole deep learning stack (software, hardware and models).
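A minimal sketch of what invoking a CK component through its common JSON API could look like in Python is shown below. It assumes the Collective Knowledge framework is installed (for example via `pip install ck`) and that a repository containing `program` components tagged `image-classification` has already been pulled; the component names and tags here are illustrative placeholders rather than artifacts named in the slides.

```python
# Hypothetical example: querying CK components via the unified JSON API.
# Assumes the CK framework is installed and a suitable repository has been pulled.
import ck.kernel as ck

# Every CK call takes a dictionary and returns a dictionary; 'action' selects
# the operation and 'module_uoa' selects the type of component to act on.
r = ck.access({'action': 'search',
               'module_uoa': 'program',
               'tags': 'image-classification'})   # illustrative tag

if r['return'] > 0:
    # A non-zero 'return' code signals an error; 'error' holds the message.
    ck.err(r)

# Print the unified names of the matching program components.
for entry in r.get('lst', []):
    print(entry['data_uoa'])
```

The same lookup is also exposed through the command-line front end (roughly `ck search program --tags=image-classification`), which is how the CLI and the Python JSON API mirror each other.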