Study Notes | Phi-3-mini + LM Studio: A Smaller Model That Works Just as Well

Eric Chang
9 min read · Apr 26, 2024


Microsoft recently released Phi-3-mini, an open LLM and a genuinely impressive small language model (SLM). The 3.8B Phi-3-mini model is already available on Azure AI Studio, Hugging Face, and Ollama.

Before getting into benchmark numbers, what interests me most is Phi-3-mini's commercial positioning, because it focuses on pushing language-model performance within a small model. You can see this from the paper's title alone: the team is squarely targeting mobile-device applications ("Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone").

In the paper, Microsoft's team demonstrates inference speed on an iPhone: on the A16 chip, the 4-bit quantized (q4) model generates 12 tokens per second while occupying only about 1.8 GB of memory, which is a respectable speed for a mobile device.
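The ~1.8 GB figure lines up with simple arithmetic on a 4-bit quantized 3.8B-parameter model. A back-of-envelope sketch (ignoring quantization scale overhead and the KV cache, which add a bit on top):

```python
# Back-of-envelope memory footprint of a 4-bit quantized Phi-3-mini.
# Real q4 GGUF files store slightly more than 4 bits/weight (scales, metadata),
# and inference also needs a KV cache, so this is a lower-bound sketch.
params = 3.8e9          # parameter count reported in the paper
bits_per_weight = 4     # idealized 4-bit quantization
bytes_total = params * bits_per_weight / 8

print(f"{bytes_total / 1024**3:.2f} GiB")  # ≈ 1.77 GiB, close to the paper's ~1.8 GB
```
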

How Phi-3-mini was trained

Phi-3-mini comes in two context lengths: 4K and 128K tokens. The 128K version is the one that really impressed me: a small model that can still handle the majority of input scenarios.
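To get a feel for how much a 128K window actually holds, here is a rough estimate. The 0.75 words-per-token ratio and 500 words per page are common rules of thumb, not exact figures:

```python
# Rough sense of what a 128K-token context window can hold.
# 1 token ≈ 0.75 English words and ~500 words/page are assumptions.
context_tokens = 128_000
words = context_tokens * 0.75
pages = words / 500

print(f"~{words:,.0f} words, roughly {pages:.0f} printed pages")
```

That is on the order of an entire novel in a single prompt, which is why a small model with this window covers most practical inputs.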

Phi-3 Mini-128K-Instruct has 3.8B parameters. Training used 512 H100-80G GPUs for 7 days, on 3.3T tokens of data. (I originally had phi-3-mini translate the paper's description of the training data for me, and the output came back a bit garbled; here is a cleaned-up version:)
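The reported setup (512 GPUs, 7 days, 3.3T tokens) implies a particular training throughput. A quick sketch of the arithmetic; these are derived estimates, not numbers Microsoft publishes directly:

```python
# Throughput implied by the reported training setup: 3.3T tokens,
# 512 H100-80G GPUs, 7 days. Derived estimate, not an official figure.
tokens = 3.3e12
gpus = 512
seconds = 7 * 24 * 3600          # 7 days of wall-clock time

tokens_per_sec = tokens / seconds
per_gpu = tokens_per_sec / gpus

print(f"~{tokens_per_sec:,.0f} tokens/s cluster-wide, ~{per_gpu:,.0f} tokens/s per GPU")
```
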

- Publicly available documents, rigorously filtered to select high-quality educational data and code;
- Newly created synthetic, "textbook-like" data designed to teach math, coding, common-sense reasoning, and world knowledge (science, daily activities, theory of mind, and so on);
- High-quality chat-format supervised data covering a wide range of topics, reflecting human preferences along dimensions such as instruction-following, truthfulness, honesty, and helpfulness.

As for timing, the model was trained between February and April 2024. Because it was trained on offline data, its knowledge cutoff is October 2023.

Hands-on with LM Studio

For this hands-on I chose LM Studio to run phi-3-mini. It is a very capable platform for running LLMs; unlike ollama, LM Studio provides a rich graphical interface where you can search for the model you want and chat with it directly.

After downloading and installing LM Studio from the official website, go to the search page and search for phi-3 to download the model. I chose microsoft/Phi-3-mini-4k-instruct-gguf here, so that even machines without a GPU can run it smoothly.

Back in the chat interface (via the left-hand menu), you can pick the model to use at the top. We select phi-3-mini and start testing it on factual-knowledge and summarization tasks to get a feel for it.

USER
where is taiwan
AI
Taiwan, officially known as the Republic of China (ROC), is an island located in East Asia. It lies off the southeastern coast of mainland China across the Taiwan Strait and is situated to the north of the Philippines and northeast of Indonesia's province of Kalimantan. The capital city of Taiwan is Taipei. Please note that geopolitical recognition varies among different countries, with some recognizing it as an independent nation while others view it as part of China.
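Beyond the chat UI, LM Studio can also serve the loaded model through an OpenAI-compatible HTTP endpoint (its Local Server feature, which defaults to http://localhost:1234/v1). A minimal sketch of sending the same question as an API call; the port and model identifier are assumptions, so adjust them to your local setup:

```python
import json

# LM Studio's Local Server exposes an OpenAI-compatible API.
# Default URL and the model name below are assumptions for this sketch.
BASE_URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "messages": [{"role": "user", "content": "where is taiwan"}],
    "temperature": 0.7,
    "max_tokens": 256,
}

# To actually send it (requires the LM Studio server to be running):
# import urllib.request
# req = urllib.request.Request(BASE_URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())

print(json.dumps(payload, indent=2))
```

This is handy once you move past interactive testing and want to script evaluations against the local model.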

Let's also test how well it summarizes:

USER
summary the content below:
Phi-3 models were developed in accordance with the Microsoft Responsible AI Standard, which is a company-wide set of requirements based on the following six principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness. Phi-3 models underwent rigorous safety measurement and evaluation, red-teaming, sensitive use review, and adherence to security guidance to help ensure that these models are responsibly developed, tested, and deployed in alignment with Microsoft's standards and best practices.
Building on our prior work with Phi models ("Textbooks Are All You Need"), Phi-3 models are also trained using high-quality data. They were further improved with extensive safety post-training, including reinforcement learning from human feedback (RLHF), automated testing and evaluations across dozens of harm categories, and manual red-teaming. Our approach to safety training and evaluations are detailed in our technical paper, and we outline recommended uses and limitations in the model cards. See the model card collection.
AI
The Phi-3 models were created following Microsoft's Responsible AI Standard, focusing on principles such as accountability, transparency, fairness, reliability, safety, privacy, security, and inclusiveness. These models underwent extensive testing for safety, including red-teaming, reviews of sensitive uses, and adherence to security protocols. Phi-3 models are enhanced by using high-quality data and additional safety measures like RLHF, automated evaluations across multiple harm categories, and manual red-teaming efforts. Details on their training and evaluation process are available in Microsoft's technical paper, along with recommended uses and restrictions outlined in the model cards.

So how does Phi-3-mini actually perform?

Looking at raw scores alone, phi-3-mini's absolute numbers are not especially dazzling. In particular, Phi-3 performs poorly on factual-knowledge benchmarks such as TriviaQA; the paper suggests this may be because a smaller model has less capacity to retain facts.

(Figure: model performance comparison)

Still, Microsoft argues that phi-3-mini's performance is comparable to Meta's Llama 2 (even though Llama 3 is already out). Beyond the mobile story, Microsoft plans to follow up with open models Phi-3-small (7B) and Phi-3-medium (14B), which should put it in direct competition with Meta in open-source LLMs: the pitch is smaller models that achieve equal or even better results.

(The paper notes that phi reaches MMLU performance on par with Llama 2 despite the smaller model size.)

--

Written by Eric Chang
Data Scientist. Strengthen learning through writing.