wewbull

joined 2 years ago
[–] wewbull@feddit.uk 2 points 12 hours ago

...and quite warm.

[–] wewbull@feddit.uk 8 points 20 hours ago (2 children)

India is an addict. It's hooked on cheap Russian oil and gas. It's going to have a hard time when it has to go cold turkey.

[–] wewbull@feddit.uk 6 points 20 hours ago

It's not an add-on feature. The LLM produces whatever output gets the best score it can. Things that increase the score:

  • Things appropriate to the tokens in the request
  • Things which look like what it's been trained on.

So that includes:

  • Relevant facts
  • Grammatically correct language
  • A friendly style of writing
  • Etc.

If it has no relevant facts, it will maximise the others to get a good score. Hence you get confidently wrong statements: sounding like it knows what it's talking about scores higher than actually giving correct information.

This behaviour is inherent to machine learning at its current level, though. It's like a "fake it until you make it" person who will never admit they're wrong.
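A toy sketch of that scoring idea (my own illustration; the categories, candidates and numbers are made up, not how any real model is implemented):

```python
# When no factual signal is available, fluent and friendly phrasing still wins
# over admitting ignorance, so the confident-sounding answer gets picked.
candidates = {
    "The answer is definitely 42, as established in 1978.": {"fluent": 0.9, "friendly": 0.8, "factual": 0.0},
    "I don't actually know the answer to that.":            {"fluent": 0.9, "friendly": 0.4, "factual": 0.0},
    "answer maybe is uh something like":                     {"fluent": 0.2, "friendly": 0.1, "factual": 0.0},
}

def score(parts):
    # Combined score: with "factual" stuck at zero, the other terms decide.
    return parts["fluent"] + parts["friendly"] + parts["factual"]

best = max(candidates, key=lambda c: score(candidates[c]))
print(best)  # the confident-sounding (but unsupported) answer scores highest
```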

[–] wewbull@feddit.uk -1 points 20 hours ago

In this thread

🤯...🤯...🤯🤯...🤯

[–] wewbull@feddit.uk 1 point 1 day ago

Beat me to it.

[–] wewbull@feddit.uk 1 point 1 day ago

No wonder the birth rate has been down.

[–] wewbull@feddit.uk 1 point 1 day ago (1 children)

End-to-end encryption of an interaction with a chatbot would mean the company never decrypts your messages to it: it operates on the encrypted text, produces an encrypted response which only you can decrypt, and sends that to you. You then decrypt the response.

So yes. It would require operating on encrypted data.
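A minimal sketch of that flow (my own toy example, using textbook RSA's multiplicative homomorphism as a stand-in; real schemes for operating on encrypted data are far more involved and this one is deliberately insecure):

```python
# The "server" only ever sees ciphertexts, yet its output decrypts to a
# meaningful result for the key holder. Requires Python 3.8+ for pow(e, -1, m).

# Tiny, insecure key just for demonstration.
p, q = 61, 53
n = p * q                        # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def enc(m): return pow(m, e, n)  # client-side encryption
def dec(c): return pow(c, d, n)  # client-side decryption

# Client encrypts two numbers and sends only the ciphertexts.
c1, c2 = enc(6), enc(7)

# Server works on ciphertexts alone: the product of ciphertexts
# decrypts to the product of the plaintexts.
c_prod = (c1 * c2) % n

assert dec(c_prod) == 6 * 7      # 42, recoverable only by the key holder
```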

[–] wewbull@feddit.uk 5 points 1 day ago

"Burning it out" still leaves contamination. You need to remove it.

[–] wewbull@feddit.uk 17 points 2 days ago (1 children)

I think it's different. The fundamental operation of all these models is multiplying big matrices of numbers together. GPUs are already optimised for this. Crypto was trying to make the algorithm fit the GPU rather than it being a natural fit.

With FPGAs you take a 10x loss in clock speed but can have precisely the algorithm you want. ASICs then give you the clock speed back.

GPUs are already ASICs that implement the ideal operation for ML/AI, so FPGAs would be a backwards step.
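A minimal sketch of the matrix-multiply point (my own illustration in plain NumPy): a transformer attention block boils down to a handful of matmuls plus a softmax, which is exactly the workload GPUs are built for.

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # three matmuls
    scores = q @ k.T / np.sqrt(k.shape[-1])     # another matmul
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)   # softmax
    return weights @ v                          # and one more matmul

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 64))              # 128 tokens, 64-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((64, 64)) for _ in range(3))
print(attention(x, Wq, Wk, Wv).shape)           # (128, 64)
```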

[–] wewbull@feddit.uk 1 points 2 days ago (4 children)

If an AI can work on encrypted data, it's not encrypted.

[–] wewbull@feddit.uk 40 points 2 days ago (9 children)

It's when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it's all driven by fewer than 10 companies. Nvidia is lapping up the money of course, but once the AI companies stop buying GPUs in crazy numbers it's going to be a rocky ride down.
