
Sorry if I'm not the first to bring this up. It seems like a simple enough solution.

[-] PlatinumSf@pawb.social 6 points 1 year ago

No joke, probably Intel. The cards won't hold a candle to a 4090, but they're actually pretty decent for both gaming and ML tasks. AMD definitely needs to speed up the timeline on their new ML API, though.

[-] JoeCoT@kbin.social 6 points 1 year ago

Problem with Intel cards is that they're a relatively recent release and not very popular yet. It's going to be a while before games optimize for them.

For example, the Arc cards aren't supported for Starfield. They might run, but not as well as they could if Starfield had been optimized for them too. But the cards have only been out a year.

[-] luna@lemmy.catgirl.biz 3 points 1 year ago

The more people use Arc, the quicker it becomes mainstream and gets optimised for. But Arc is still considered "beta" and slow in people's minds, even though there have been huge improvements and the old benchmarks don't hold any value anymore. Chicken-and-egg problem. :/

Disclaimer: I have an Arc A770 16GB because every other sensible upgrade path would have cost 3x-4x more for the same performance uplift (and I'm not buying an 8GB card in 2023+). But now I'm starting to get really angry at people blaming Intel for "not supporting this new game". All that GPUs should support is the graphics API, to the letter of the specification; all this day-1 patching and driver hotfixing to make games run decently is BS. Games need to feed the API, and GPUs need to process what the API tells them, nothing more, nothing less. It's a complex issue, and I think Nvidia held the monopoly for too long: everything is optimised for Nvidia at the cost of making it worse for everyone else.

[-] dan@upvote.au 6 points 1 year ago* (last edited 1 year ago)

Isn't the entire point of DirectX and OpenGL that they abstract away the GPU-specific details? You write code once and it works on any graphics card that supports the standard. It sounds like games are moving back towards what we had in the old days, when they had specific code per graphics card?
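[Editor's note] The abstraction dan is describing can be sketched as an interface the game codes against, with each vendor supplying a conforming implementation behind it. This is a toy illustration only; all class and function names here are invented for the sketch and don't come from any real graphics API.

```python
# Toy model of a graphics API as a specification: the game calls one
# interface, and any conforming driver can sit behind it.
class Backend:
    """The 'specification': entry points every driver must implement."""
    def draw_triangle(self) -> str:
        raise NotImplementedError

class VendorA(Backend):
    def draw_triangle(self) -> str:
        return "vendor A rasterized a triangle"

class VendorB(Backend):
    def draw_triangle(self) -> str:
        return "vendor B rasterized a triangle"

def render_frame(gpu: Backend) -> str:
    # The game never branches on the vendor -- the same call works
    # on any card whose driver conforms to the interface.
    return gpu.draw_triangle()
```

In this idealized model, per-vendor "optimization" in the game itself shouldn't be necessary; in practice, drivers differ enough in performance characteristics that games and driver teams special-case each other anyway, which is the tension the thread is about.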

[-] luna@lemmy.catgirl.biz 4 points 1 year ago

I think the issue started with GPU-architecture-tailored technologies like PhysX or GameWorks, but I'm probably wrong. For example, I have nothing against PhysX per se, but it only runs natively (fast) on Nvidia cores. I have an issue when there's a monetary incentive or exclusive partnering between Nvidia and game studios: if you want to play the game with all the features, bells, and whistles it was designed with, you also need to buy their overpriced (and, this generation, underperforming) GPUs, because you'd be missing out on features or performance on any other GPU architecture.
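[Editor's note] The vendor-gating pattern described above can be sketched in a few lines: the same feature takes a fast path on one vendor's hardware and silently falls back to a slower generic path everywhere else. The vendor name and functions are invented for illustration; this is not how PhysX is actually implemented.

```python
# Toy sketch of vendor-gated acceleration: one hardware vendor gets
# an exclusive fast path, everyone else gets a slower fallback.
def physics_step(gpu_vendor: str) -> str:
    if gpu_vendor == "VendorN":          # hypothetical exclusive vendor
        return "fast native path"        # hardware-accelerated
    return "slow generic fallback"       # everyone else pays a penalty
```

The result is exactly the asymmetry complained about here: the feature "works" everywhere, but benchmarks and feature parity favor whichever vendor the fast path was written for.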

If this trend continues everybody will need a €1k+ gpu from nvidia and a €1k+ gpu from AMD and hot-swap between them depending on what game you wish to play.

this post was submitted on 08 Sep 2023
314 points (100.0% liked)

Technology
