
/vt/ - Virtual Youtubers

>> No.57646921
File: 305 KB, 949x846, holy_war.png

>>57643320
It's a fine-tune of Llama 2, so the system requirements are the same as for base Llama 2. 13B at half precision (~26 GB of weights) won't even fit in a 3090's 24 GB, while 7B at half precision takes roughly 14 GB of VRAM: https://discuss.huggingface.co/t/llama-7b-gpu-memory-requirement/34323/8
Of course, quantization can cut those numbers down significantly.
