
I agree that synchronization causes overhead, so 2x GPUs won't achieve the ideal 0.5x total runtime. But here, taking your Alpaca benchmark as an example, 2 GPUs take 3.6x the single-GPU runtime with Huggingface, or 1.15x with Unsloth Max.

In other words, every benchmark, in either HF or Unsloth, is slower in absolute terms when going from 1 to 2 GPUs. That makes me think something is wrong with the test.
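Spelling out the arithmetic (a quick sanity check; the relative-runtime figures are the ones quoted above, and perfect linear scaling is assumed as the baseline):

    # Relative wall-clock runtime vs. the 1-GPU run (1.0 = unchanged)
    ideal = 1 / 2  # perfect scaling on 2 GPUs -> 0.5x runtime
    for name, observed in [("Huggingface", 3.6), ("Unsloth Max", 1.15)]:
        efficiency = ideal / observed  # fraction of ideal scaling achieved
        slower = "slower" if observed > 1 else "faster"
        print(f"{name}: {observed}x runtime -> {efficiency:.0%} of ideal, "
              f"{slower} than 1 GPU in absolute terms")

Anything above 1.0x means the second GPU made the run strictly slower in wall-clock terms, not just less efficient.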

Could you share your benchmark code?



You can refer to QLoRA's official finetuning notebook https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zb... ! Obviously I can't share the code we have, but if you use the same datasets and the same settings (bsz = 2, ga = 4, max_grad_norm = 0.3, num_epochs = 1, seed = 3407, max_seq_len = 2048), you should be able to replicate it.
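For anyone trying to replicate: a minimal sketch of how those settings map onto a stock QLoRA run with transformers/peft/trl. The base model, Alpaca dataset variant, and LoRA config below are my assumptions, not the exact benchmark setup, and the SFTTrainer keyword layout varies across trl versions:

    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig, TrainingArguments)
    from peft import LoraConfig
    from trl import SFTTrainer

    model_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
    dataset = load_dataset("tatsu-lab/alpaca", split="train")  # assumed Alpaca variant

    # QLoRA: 4-bit NF4 base weights, bf16 compute
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    args = TrainingArguments(
        output_dir="qlora-alpaca",
        per_device_train_batch_size=2,  # bsz = 2
        gradient_accumulation_steps=4,  # ga = 4
        max_grad_norm=0.3,
        num_train_epochs=1,
        seed=3407,
    )
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        args=args,
        train_dataset=dataset,
        peft_config=LoraConfig(task_type="CAUSAL_LM"),  # default r/alpha, assumed
        dataset_text_field="text",  # tatsu-lab/alpaca ships a pre-formatted "text" column
        max_seq_length=2048,  # max_seq_len = 2048
    )
    trainer.train()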



