submitted 16 hours ago* (last edited 16 hours ago) by CoderSupreme@programming.dev to c/programming@programming.dev

I used Google before, but since I degoogled, my contacts live only on my Android phone. However, I would like to be able to access them on Linux as well and keep them synced.
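One common approach (a suggestion, not something stated in the thread) is to self-host a CardDAV server such as Radicale or Nextcloud, point DAVx⁵ at it on Android, and sync the same address book to Linux with vdirsyncer. A minimal vdirsyncer config sketch, assuming a hypothetical server at dav.example.com:

```ini
[general]
status_path = "~/.local/share/vdirsyncer/status/"

[pair contacts]
a = "contacts_local"
b = "contacts_remote"
collections = ["from a", "from b"]

[storage contacts_local]
# Local vCard files, one .vcf per contact
type = "filesystem"
path = "~/.contacts/"
fileext = ".vcf"

[storage contacts_remote]
# CardDAV server reachable from both phone and desktop
type = "carddav"
url = "https://dav.example.com/"
username = "user"
password = "secret"
```

After writing the config, `vdirsyncer discover contacts` followed by `vdirsyncer sync` pulls the collections down; many Linux contact apps (e.g. GNOME Contacts via Evolution Data Server) can also talk to the CardDAV server directly.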

... (github.com)
submitted 7 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/linux@programming.dev
submitted 10 months ago* (last edited 8 months ago) by CoderSupreme@programming.dev to c/auai@programming.dev

Permanently Deleted

... (www.omgubuntu.co.uk)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml
... (www.phind.com)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/technology@lemmy.ml
... (programming.dev)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/localllama@sh.itjust.works

CogVLM: Visual Expert for Pretrained Language Models

Presents CogVLM, a powerful open-source visual language foundation model that achieves state-of-the-art performance on 10 classic cross-modal benchmarks.

repo: https://github.com/THUDM/CogVLM
abs: https://arxiv.org/abs/2311.03079

... (github.com)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml

A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.

... (github.com)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml

A terminal workspace with batteries included

... (programming.dev)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/localllama@sh.itjust.works

article: https://x.ai

xAI trained a prototype LLM (Grok-0) with 33 billion parameters. This early model approaches LLaMA 2 (70B) capabilities on standard LM benchmarks while using only half its training resources. In the last two months, we have made significant improvements in reasoning and coding capabilities, leading up to Grok-1, a state-of-the-art language model that is significantly more powerful, achieving 63.2% on the HumanEval coding task and 73% on MMLU.

Permanently Deleted (programming.dev)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/python@programming.dev

Permanently Deleted

... (programming.dev)
submitted 11 months ago* (last edited 7 months ago) by CoderSupreme@programming.dev to c/opensource@lemmy.ml

CoderSupreme

joined 1 year ago