Eyy, a web API! You could’ve just said that right away. There are more than just web APIs, kid.
How is this API relevant to your choice of hardware for running these models locally?
No, just calling your bluff. git gud m8
Application Programming Interface? Are you talking about something on the internet? In a GPU driver? On your phone?
Then also, what’s the size of the model you’re using? Weights in int32? fp4? Somewhere in between? That’s where RAM requirements come in.
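For a sense of scale, here’s a rough back-of-envelope sketch (weights only, ignoring KV cache and runtime overhead; the 7B figure is just an example, not anyone’s specific model):

```python
# Rough estimate of memory needed just to hold the weights.
# Ignores KV cache, activations, and runtime overhead.

def weight_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    """GiB required to store the weights at a given precision."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / (1024 ** 3)

for bits, label in [(32, "int32/fp32"), (16, "fp16"), (8, "int8"), (4, "fp4/int4")]:
    print(f"7B model @ {label:>9}: {weight_memory_gib(7, bits):5.1f} GiB")
```

Same 7B model lands anywhere from roughly 26 GiB at int32 down to about 3 GiB at fp4, which is why the precision question matters as much as the parameter count.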
I get that you’re trying to do a mic drop or something, but you’re not being very clear
Again, you’d be waiting around all day
Then don’t go with an Apple chip. They’re impressive for how little power they consume, but any 50 watt chip will get absolutely destroyed by a 500 watt GPU; even one from almost a decade ago will beat it.
And you’ll save money to boot, if you don’t count your power bill
If you enjoy waiting around, sure
And then have them all fail within a week of each other, in under a couple of months. They’re garbage; get something decent like a Toshiba.
Going Seagate is a great way to save cash and lose data.
I just turn it on in the BIOS. And on Windows, disable Fast Startup.
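If you’d rather script it than click through the power options, something like this should do it (Windows only, needs an elevated prompt; HiberbootEnabled is the registry value usually behind Fast Startup, but double-check on your build):

```python
# Windows-only sketch: flips the registry value behind Fast Startup.
# Run from an admin Python session; reboot for it to take effect.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # HiberbootEnabled: 1 = Fast Startup on, 0 = off
    winreg.SetValueEx(key, "HiberbootEnabled", 0, winreg.REG_DWORD, 0)

print("Fast Startup disabled (HiberbootEnabled = 0).")
```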