
Can LLMs run natively on your iPhone? Discover MLC-LLM: An open framework that brings language models (LLM) directly to a broad class of GPU-accelerated platforms

Large Language Models (LLMs) are the current hot topic in the field of Artificial Intelligence. Significant progress has already been made in a wide range of sectors such as healthcare, finance, education, and entertainment. Well-known large language models such as GPT, DALL-E, and BERT perform extraordinary tasks and make life easier. While GPT-3 can complete code, answer questions like a human, and generate content from a short natural-language prompt, DALL-E 2 can create images in response to a simple text description. These models are contributing to huge transformations in AI and machine learning and helping push the field through a paradigm shift.

With an increasing number of models being developed, the need arises for powerful servers to meet their extensive compute, memory, and hardware-acceleration requirements. To make these models truly effective and efficient, they would need to run independently on consumer devices, which would increase their accessibility and availability and allow users to access powerful AI tools on their personal devices without needing an internet connection or relying on cloud servers. MLC-LLM, an open framework that brings LLMs directly to a broad class of GPU-accelerated platforms such as CUDA, Vulkan, and Metal, was recently introduced to address this.

MLC LLM enables native deployment of language models on a broad range of hardware backends, including CPUs and GPUs, and in native applications. This means that any language model can run on local devices without the need for a server or cloud-based infrastructure. MLC LLM provides a productive framework that allows developers to optimize model performance for their own use cases, such as Natural Language Processing (NLP) or Computer Vision. It can also be accelerated with local GPUs, making it possible to run complex models with high accuracy and speed on personal devices.


Specific instructions are provided for running LLMs and chatbots natively on iPhone, Windows, Linux, Mac, and web browsers. For iPhone users, MLC LLM provides an iOS chat app that can be installed via its TestFlight page. The app requires at least 6 GB of memory to run smoothly and has been tested on the iPhone 14 Pro Max and iPhone 12 Pro. Text-generation speed on the iOS app can be unstable at times and may be slow at first before returning to normal speed.

For Windows, Linux, and Mac users, MLC LLM provides a command-line interface (CLI) app for chatting with the bot in the terminal. Before installing the CLI app, users need to install some dependencies, including Conda to manage the app and, for NVIDIA GPU users on Windows and Linux, the latest Vulkan driver. After installing the dependencies, users can follow the prompts to install the CLI app and start chatting with the bot. For web-browser users, MLC LLM provides a companion project called WebLLM, which deploys models natively in the browser. Everything runs inside the browser with no server support and is accelerated with WebGPU.
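As a rough sketch, the CLI setup described above looks like the following. The package name (`mlc-chat-nightly`), channel names, and model identifier below reflect the project's published instructions at the time of writing and may well have changed, so treat this as an illustration rather than a definitive recipe; consult the project's GitHub page for the current commands.

```shell
# Create and activate a fresh Conda environment for the chat app
conda create -n mlc-chat
conda activate mlc-chat

# Git and Git LFS are needed to fetch model weights
conda install -y git git-lfs
git lfs install

# Install the prebuilt CLI from the project's Conda channel
# (package/channel names as documented at the time of writing)
conda install -y -c mlc-ai -c conda-forge mlc-chat-nightly

# Download prebuilt model weights into the expected ./dist layout,
# then start chatting in the terminal
mkdir -p dist
git clone https://huggingface.co/mlc-ai/demo-vicuna-v1-7b-int3 dist/vicuna-v1-7b
mlc_chat_cli
```

Note that on Windows and Linux with an NVIDIA GPU, the latest Vulkan driver should be installed before running `mlc_chat_cli`, as mentioned above.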

In conclusion, MLC LLM is an amazing universal solution for natively deploying LLMs on different hardware backends and in native applications. It's a great option for developers who want to build models that can run on a wide variety of devices and hardware configurations.


Check out the Github link, Project, and Blogs. Don't forget to subscribe to our 20k+ ML SubReddit, Discord channel, and Email newsletter, where we share the latest news on AI research, cool AI projects, and more. If you have any questions regarding the above article or if you have missed anything, please do not hesitate to email us at Asif@marktechpost.com



Tanya Malhotra is a final-year student at the University of Petroleum and Energy Studies, Dehradun, pursuing a BTech in Computer Engineering with a major in Artificial Intelligence and Machine Learning.
She is a data science enthusiast with strong analytical and critical-thinking skills, coupled with a keen interest in acquiring new skills, leading teams, and managing work in an organized manner.



Image Source: www.marktechpost.com

