Getting to Know llama2.c!
Reliability of This Article
by Our Founder/CEO&CTO Hiroyuki Chishiro
- He has 12 years of research experience in real-time systems.
- He teaches OS (Linux kernel) in English at the University of Tokyo.
- From September 2012 to August 2013, he was a visiting researcher at the Department of Computer Science, the University of North Carolina at Chapel Hill (UNC), Chapel Hill, North Carolina, United States, where he worked on research and development of real-time Linux in C.
- He has more than 15 years of programming experience across many languages: C/C++, Python, Solidity/Vyper, Java, Ruby, Go, Rust, D, HTML/CSS/JS/PHP, MATLAB, Verse (UEFN), and Assembler (x64, ARM).
- While a faculty member at the University of Tokyo, he developed the "Extension of LLVM Compiler" in C++ language and his own real-time OS "Mcube Kernel" in C language, which he published as open source on GitHub.
- Since January 2020, he has been CTO of Guarantee Happiness LLC, Chapel Hill, North Carolina, United States, in charge of e-commerce site development and web/social network marketing. Since June 2022, he has also been CEO & CTO of Japanese Tar Heel, Inc. in Chapel Hill, North Carolina, United States.
- We have been engaged in disseminating useful information on AI and Crypto (Web3).
- We have written more than 20 articles on AI, including AI chatbots such as ChatGPT, Auto-GPT, and Gemini (formerly Bard). He has worked under contract as a prompt engineer, manager, and quality assurance (QA) specialist for several companies in San Francisco, United States (Silicon Valley in the broadest sense of the word).
- We have written more than 40 articles on cryptocurrency (including smart contract programming). He has worked as an outsourced translator of English cryptocurrency articles into Japanese for a company in London, England.
You can learn from us.
If you would like to know the recommended job sites for AI Engineers, please click the following.
If you would like to know the recommended job sites for Prompt Engineers, please click the following.
Table of Contents
What is llama2.c?
llama2.c is a framework written in pure C that can run inference on LLaMA 2 models.
llama2.c was developed under the influence of llama.cpp.
If you are interested in llama.cpp, please click the following.
llama2.c is available as open source on GitHub.
$ git clone https://github.com/karpathy/llama2.c
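After cloning, the typical workflow described in the project's README is to grab a small demo checkpoint, build run.c, and run inference on it. A sketch of those steps (the checkpoint URL and command-line flags reflect the README at the time of writing; check the repository for the current versions):

```shell
cd llama2.c

# Download a small demo checkpoint (a 15M-parameter model trained on TinyStories)
wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin

# Build run.c and generate text from a prompt
# (-t = temperature, -n = number of steps, -i = input prompt)
make run
./run stories15M.bin -t 0.8 -n 256 -i "Once upon a time"
```

The small TinyStories checkpoints exist precisely because llama2.c is educational: they let you watch the whole pipeline work in seconds on a laptop CPU, without needing the full 7B LLaMA 2 weights.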
The open source license is MIT License.
The features and limitations of llama2.c are as follows.
- The ability to train the LLaMA 2 LLM architecture in PyTorch and then run inference with "run.c", a single file of about 700 lines of pure C
- Super-simple, minimal, and educational compared to llama.cpp
- (As of August 2024) only fp32 models can be inferred, so models larger than 7B cannot be loaded and the models cannot be quantized
Projects that Run llama2.c in Other Languages and Environments
Projects are underway to run llama2.c in other languages and environments.
Details are given on the official llama2.c page.
The main projects are as follows.
- C language
- llama3.c (@jameswdelancey): A port of llama2.c to the LLaMA 3 8B base and instruct models
- C++ language
- llama2.cpp (@leloykun): C++ port of llama2.c
- llama2.cpp (@coldlarry): C++ port of llama2.c
Introductory Videos of llama2.c
These are introductory videos of llama2.c.
Summary
We introduced llama2.c, a framework for running inference on LLaMA 2 in pure C.
We hope that more people will be influenced by llama2.c to learn C language!