The cTuning Foundation is a global non-profit organization that develops a common methodology and open-source tools to support sustainable, collaborative and reproducible research in computer science, and that organizes and automates artifact evaluation and reproducibility initiatives at machine learning and systems conferences and journals.[1]
Founded | 2014 |
---|---|
Founder | Grigori Fursin |
Type | Non-profit research and development organization, Engineering organization |
Registration no. | W943003814 |
Focus | Collaborative software, Open Science, Open Source Software, Reproducibility, Computer Science, Machine learning, Artifact Evaluation, Performance tuning, Knowledge management |
Origins | Collective Tuning Initiative & MILEPOST GCC |
Area served | Worldwide |
Method | Develop open-source tools, a public repository of knowledge, and a common methodology for collaborative and reproducible experimentation |
Website | ctuning |
Notable projects
- Collective Mind - a Python package providing a collection of portable, extensible and ready-to-use automation recipes with a human-friendly interface, designed to help the community compose, benchmark and optimize complex AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware.[2][3][4]
- Collective Knowledge - an open-source framework to organize software projects as a database of reusable components with common automation actions and extensible meta descriptions based on FAIR principles, implement portable research workflows, and crowdsource experiments across diverse platforms provided by volunteers.[5]
- ACM ReQuEST - Reproducible Quality-Efficient Systems Tournaments to co-design efficient software/hardware stacks for deep learning algorithms in terms of speed, accuracy and cost across diverse platforms, environments, libraries, models and data sets.[6]
- MILEPOST GCC - open-source technology to build machine-learning-based self-optimizing compilers.
- Artifact Evaluation - validation of experimental results from papers published at computer systems and machine learning conferences.[7][8][9]
- Reproducible Papers - a public index of reproducible papers with portable workflows and reusable research components.
History
Grigori Fursin developed cTuning.org at the end of the MILEPOST project in 2009 to continue his research on machine-learning-based program and architecture optimization as a community effort.[10][11]
In 2014, the cTuning Foundation was registered in France as a non-profit research and development organization. It received funding from the EU TETRACOM project and ARM to develop the Collective Knowledge framework and to prepare a reproducible-research methodology for ACM and IEEE conferences.[12]
In 2020, the cTuning Foundation joined MLCommons as a founding member to accelerate innovation in ML.[13]
In 2023, the cTuning Foundation joined a new initiative by the Autonomous Vehicle Computing Consortium and MLCommons to develop an automotive industry-standard machine learning benchmark suite.[14]
Since 2024, the cTuning Foundation has supported the MLCommons Croissant metadata format to help standardize ML datasets.[15]
Funding
Current funding comes from European Union research and development programmes, Microsoft, and other organizations.[16]
References
- ^ "ACM TechTalk "Reproducing 150 Research Papers and Testing Them in the Real World: Challenges and Solutions with Grigori Fursin"". Retrieved 11 February 2021.
- ^ Fursin, Grigori (June 2023). Toward a common language to facilitate reproducible research and technology transfer: challenges and solutions. keynote at the 1st ACM Conference on Reproducibility and Replicability. doi:10.5281/zenodo.8105339.
- ^ Online catalog of automation recipes developed by MLCommons
- ^ HPCWire: MLPerf Releases Latest Inference Results and New Storage Benchmark, September 2023
- ^ Fursin, Grigori (October 2020). Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common interfaces. Philosophical Transactions of the Royal Society. arXiv:2011.01149. doi:10.1098/rsta.2020.0211. Retrieved 22 October 2020.
- ^ Ceze, Luis (20 June 2018), ACM ReQuEST'18 front matters and report (PDF), ISBN 9781450359238
- ^ Fursin, Grigori; Bruce Childers; Alex K. Jones; Daniel Mosse (June 2014). TRUST'14. Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering at PLDI'14. doi:10.1145/2618137.
- ^ Fursin, Grigori; Christophe Dubach (June 2014). Community-driven reviewing and validation of publications. Proceedings of TRUST'14 at PLDI'14. arXiv:1406.4020. doi:10.1145/2618137.2618142.
- ^ Childers, Bruce R; Grigori Fursin; Shriram Krishnamurthi; Andreas Zeller (March 2016). Artifact evaluation for publications. Dagstuhl Perspectives Workshop 15452. doi:10.4230/DagRep.5.11.29.
- ^ World's First Intelligent, Open Source Compiler Provides Automated Advice on Software Code Optimization, IBM press release, June 2009 (link)
- ^ Grigori Fursin. Collective Tuning Initiative: automating and accelerating development and optimization of computing systems. Proceedings of the GCC Summit'09, Montreal, Canada, June 2009 (link)
- ^ Article on TTP project "COLLECTIVE KNOWLEDGE: A FRAMEWORK FOR SYSTEMATIC PERFORMANCE ANALYSIS AND OPTIMIZATION", HiPEACinfo, July 2015 (link)
- ^ MLCommons press release: "MLCommons Launches and Unites 50+ Global Technology and Academic Leaders in AI and Machine Learning to Accelerate Innovation in ML" (link)
- ^ AVCC press release: "AVCC and MLCommons Join Forces to Develop an Automotive Industry Standard Machine Learning Benchmark Suite" (link)
- ^ MLCommons press release: "New Croissant Metadata Format helps Standardize ML Datasets. Support from Hugging Face, Google Dataset Search, Kaggle, and Open ML, makes datasets easily discoverable and usable." (link)
- ^ cTuning Foundation partners