[-] projectmoon@lemm.ee 53 points 3 days ago

Depends on the continuity and who's writing it, but often yes. He was notably portrayed this way in the Justice League cartoon.

[-] projectmoon@lemm.ee 4 points 4 days ago

The only problem I really have is context size. It's hard to go beyond an 8k context and keep decent generation speed with 16 GB of VRAM and 16 GB of RAM. Gonna get more RAM at some point, though, and hope ollama/llama.cpp gets better at memory management. Hopefully the distributed inference from llama.cpp ends up in ollama.
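
For reference, bumping the context looks something like this through ollama's REST API (the model tag and the 8k value are just illustrative; the KV cache eats more VRAM as num_ctx grows):

```python
# Minimal sketch: asking a local ollama server for a larger context
# window per request. Model tag and num_ctx are illustrative; the KV
# cache grows with num_ctx, so VRAM use climbs with it.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mixtral:8x7b",       # hypothetical local model tag
        "prompt": "Continue the story where the heroes regroup.",
        "stream": False,
        "options": {"num_ctx": 8192},  # context window in tokens
    },
    timeout=600,
)
print(response.json()["response"])
```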

[-] projectmoon@lemm.ee 7 points 4 days ago

I do have a local setup. Not powerful enough to run Mixtral 8x22b, but can run 8x7b (albeit quite slowly). Use it a lot.

[-] projectmoon@lemm.ee 7 points 4 days ago

Not trying to get around anything. No funny instructions like my grandma singing a lullaby about illegal activities. Just using instructions to tell a story. Even something like having a superhero in a fight is enough to trigger this. It also doesn't explain why regenerating makes it continue.

25
submitted 4 days ago* (last edited 4 days ago) by projectmoon@lemm.ee to c/chatgpt@lemmy.world

Over the weekend (this past Saturday specifically), GPT-4o seems to have gone from capable and fairly permissive at generating creative writing to refusing to generate basically anything, citing alleged content policy violations. It'll just say "can't assist with that" or "can't continue." But 80% of the time, if you regenerate the response, it'll happily continue on its way.

It's like someone updated some policy configuration over the weekend and accidentally put an extra 0 in a field for censorship.

GPT-4 and GPT-3.5 seem unaffected by this, which makes it even weirder. Switching to GPT-4 produces none of the issues that 4o is having.

I noticed this happening literally in the middle of generating text.

See also: https://old.reddit.com/r/ChatGPT/comments/1droujl/ladies_gentlemen_this_is_how_annoying_kiddie/

https://old.reddit.com/r/ChatGPT/comments/1dr3axv/anyone_elses_ai_refusing_to_do_literally_anything/

15

Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st-gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB of VRAM. I can run 7-13b models extremely quickly using ollama with ROCm (19+ tokens/sec). I can run Beyonder 4x7b Q6 at around 3 tokens/second.

I want to get to the point where I can run the Q4 quant of Mixtral 8x7b at an acceptable speed (5+ tokens/sec). I can run the Q3 quant at about 2 to 3 tokens per second. Q4 takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.
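
For context, here's a hedged sketch of what partial offloading looks like through the llama-cpp-python bindings (the file name and layer count are assumptions; the idea is to push as many layers as fit into the 16 GB of VRAM and let the rest run on the CPU):

```python
# Hedged sketch using llama-cpp-python: offload part of a Mixtral 8x7b
# Q4 GGUF to the GPU and leave the remaining layers on the CPU. Path
# and n_gpu_layers are illustrative; tune the layer count to whatever
# actually fits in 16 GB of VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct-q4_k_m.gguf",  # hypothetical path
    n_gpu_layers=20,  # layers offloaded to the GPU; the rest stay in RAM
    n_ctx=4096,       # context window; this also consumes VRAM
    use_mmap=True,    # memory-map the file instead of copying it all in
)

output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```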

What's the easiest/cheapest way to get my system to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU too?

As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. Not sure if ollama can split across different-brand GPUs yet, but I know that capability is in llama.cpp now.
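
Purely as a hypothetical sketch (whether a ROCm card and an old Nvidia card can actually be combined depends on which backend llama.cpp was built with, e.g. Vulkan), the split itself is expressed like this in the same bindings:

```python
# Hypothetical two-GPU split via llama-cpp-python. tensor_split sets
# the fraction of the model placed on each device; the ratios below
# assume a 16 GB card as device 0 and a 4 GB card as device 1.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct-q4_k_m.gguf",  # hypothetical path
    n_gpu_layers=-1,          # try to offload every layer
    tensor_split=[0.8, 0.2],  # ~80% to GPU 0, ~20% to GPU 1
)
```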

Thanks for any pointers!

[-] projectmoon@lemm.ee 41 points 5 months ago

The fork was originally created because upstream NewPipe elected not to include SponsorBlock functionality.

[-] projectmoon@lemm.ee 29 points 5 months ago

Depends on the language. There is no explicit typing in JavaScript, for example. That's why TypeScript was invented.

5
submitted 9 months ago by projectmoon@lemm.ee to c/meta@lemm.ee

Not sure if this has been asked before. I tried searching and couldn't find anything. I have an issue where pictures from startrek.website don't show up on the homepage. It seems to only affect startrek.website. Going to the link directly loads the image just fine. Is this something wrong with lemm.ee?

[-] projectmoon@lemm.ee 28 points 9 months ago

It used to be open source, then it went completely closed. As mentioned, Organic Maps is the fork that is the continuation of the GPL app.

[-] projectmoon@lemm.ee 51 points 9 months ago

I think "complex" refers to the various dark patterns used by Windows and Mac/iOS to scare and/or force users that know nothing of computers into using the default browsers.

9
submitted 10 months ago* (last edited 10 months ago) by projectmoon@lemm.ee to c/protonprivacy@lemmy.world

For the past few days, the Android app has been very slow. The app itself loads fine and is responsive, but it takes many seconds to load messages, sometimes up to 30 seconds. At first I thought it was a blip, but it's been going on for a few days now. Anyone else have this problem?

Edit: clearing cache in the app settings (not system settings) fixed it.

[-] projectmoon@lemm.ee 29 points 10 months ago

You should probably add what license the icon will be under, if it's submitted to the project. Creative Commons? GPL?

[-] projectmoon@lemm.ee 22 points 11 months ago

Unfortunately, it doesn't look like that BBC experiment is going well. They've barely posted anything relative to what they could post. They should set up their systems to auto-post to Mastodon whenever they post to Twitter or wherever else.

[-] projectmoon@lemm.ee 28 points 11 months ago

Am I missing something? Or is the link to this tool not actually present in the post? I only see a screenshot.

[-] projectmoon@lemm.ee 21 points 11 months ago

Pretty sure the original developer of Infinity is one of the few people who will try to follow Reddit's new API rules and charge a subscription fee to cover it. At least that was the case a few months ago. Not sure what's currently happening.


projectmoon

joined 1 year ago