submitted 1 year ago by ylai@lemmy.ml to c/models@lemmy.intai.tech
Med-PaLM (sites.research.google)

Med-PaLM is a large language model (LLM) designed to provide high-quality answers to medical questions.

Med-PaLM harnesses the power of Google’s large language models, which we have aligned to the medical domain and evaluated using medical exams, medical research, and consumer queries. Our first version of Med-PaLM, released as a preprint in late 2022 and published in Nature in July 2023, was the first AI system to surpass the pass mark on US Medical Licensing Examination (USMLE)-style questions. Med-PaLM also generates accurate, helpful long-form answers to consumer health questions, as judged by panels of physicians and users.

We introduced our latest model, Med-PaLM 2, at The Check Up, Google Health’s annual health event, in March 2023. Med-PaLM 2 achieves an accuracy of 86.5% on USMLE-style questions, a 19% leap over our own state-of-the-art results with Med-PaLM. According to physicians, the model's long-form answers to consumer medical questions have improved substantially. In the coming months, Med-PaLM 2 will also be made available to a select group of Google Cloud customers for limited testing, to explore use cases and share feedback, as we investigate safe, responsible, and meaningful ways to use this technology.

submitted 1 year ago* (last edited 1 year ago) by ylai@lemmy.ml to c/models@lemmy.intai.tech

Nous-Hermes-Llama2-13b is currently the highest ranked 13B LLaMA finetune on the Open LLM Leaderboard.

Model Description

Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.

This Hermes model uses the exact same dataset as Hermes on Llama-1. This ensures consistency between the old and new Hermes, for anyone who wants a model that behaves like the original, just more capably.

This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. Fine-tuning was performed with a 4096-token sequence length on an 8x A100 80GB DGX machine.
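Getting good output from an instruction fine-tune like this depends on matching the prompt format it was trained on. The Hermes line is commonly reported to use the Alpaca-style layout; a minimal sketch under that assumption (verify against the model card before relying on it):

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Build an Alpaca-style prompt, the format the Hermes line of models
    is commonly reported to use (check the model card to confirm)."""
    if user_input:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + user_input + "\n\n"
            "### Response:\n"
        )
    return "### Instruction:\n" + instruction + "\n\n### Response:\n"

prompt = build_alpaca_prompt("Summarize the benefits of fine-tuning.")
```

The resulting string is what you would pass to the model's tokenizer; generation then continues after the `### Response:` marker.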

Announcements


cross-posted from: https://lemmy.world/post/1954892

It's looking really good! Major features include controlnet, support for SDXL, and a whole bunch of other cool things.

Download: https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0

Llama 2 - Meta AI (ai.meta.com)

cross-posted from: https://lemmy.fmhy.ml/post/649641

We could have AI models in a couple years that hold the entire internet in their context window.

submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech

Docs: https://phasellm.com/docs/phasellm/eval.html

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

Features:

  • 200+ tasks implemented. See the task-table for a complete list.
  • Support for models loaded via transformers (including quantization via AutoGPTQ), GPT-NeoX, and Megatron-DeepSpeed, with a flexible tokenization-agnostic interface.
  • Support for commercial APIs including OpenAI, goose.ai, and TextSynth.
  • Support for evaluation on adapters (e.g. LoRA) supported in HuggingFace's PEFT library.
  • Evaluating with publicly available prompts ensures reproducibility and comparability between papers.
  • Task versioning to ensure reproducibility when tasks are updated.
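The "tokenization-agnostic interface" above means tasks hand the backend raw strings and receive log-likelihoods back, so any tokenizer can sit behind the same API. A hypothetical sketch of that design (class and method names are illustrative, not the framework's actual API):

```python
import math

class EvalModel:
    """Hypothetical tokenization-agnostic interface: tasks pass raw strings,
    and the model adapter handles tokenization internally."""
    def loglikelihood(self, context: str, continuation: str) -> float:
        raise NotImplementedError

class UniformToyModel(EvalModel):
    """Toy adapter assigning each character equal probability, for illustration."""
    def loglikelihood(self, context: str, continuation: str) -> float:
        return len(continuation) * math.log(1 / 27)  # 26 letters + space

def score_multiple_choice(model: EvalModel, question: str, choices: list[str]) -> int:
    """A multiple-choice task compares continuation log-likelihoods
    and picks the index with the highest score."""
    scores = [model.loglikelihood(question, c) for c in choices]
    return max(range(len(choices)), key=lambda i: scores[i])

best = score_multiple_choice(UniformToyModel(), "Q: sky color? A:", [" blue", " turquoise"])
```

Because every backend (transformers, GPT-NeoX, a commercial API) implements the same string-level interface, the 200+ tasks never need to know how text is tokenized.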
submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech

Model Description

Redmond-Hermes-Coder 15B is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.

This model was trained with a WizardCoder base, which itself uses a StarCoder base model.

The model is truly great at code, but it does come with a tradeoff. While far better at code than the original Nous-Hermes built on Llama, it is worse than WizardCoder at pure code benchmarks such as HumanEval.

It comes in at 39% on HumanEval, with WizardCoder at 57%. This is a preliminary experiment, and we are exploring improvements now.

However, it does seem better than WizardCoder at a variety of non-code tasks, including writing.
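HumanEval results like the 39% above are typically reported as pass@k: the probability that at least one of k sampled completions passes a problem's unit tests. For reference, the standard unbiased estimator from the original HumanEval work can be sketched as:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n sampled completions per problem,
    of which c passed the unit tests, estimate the probability that at
    least one of k draws (without replacement) from the n is correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: a pass is guaranteed
    result = 1.0
    for i in range(n - c + 1, n + 1):
        result *= 1 - k / i  # probability all k draws miss the c passing samples
    return 1 - result

# With 4 of 10 samples passing, pass@1 telescopes to the plain pass rate:
print(round(pass_at_k(10, 4, 1), 4))  # prints 0.4
```

Scores like "39% on HumanEval" usually correspond to pass@1 averaged over the benchmark's 164 problems.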

Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.

Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' (v1) GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.

submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech

Archive:

@Yampeleg: The first model to beat 100% of ChatGPT-3.5, available on Hugging Face.

🔥 OpenChat_8192

🔥 105.7% of ChatGPT (Vicuna GPT-4 Benchmark)

Less than a month ago, the world watched ORCA [1] become the first model ever to outpace ChatGPT on Vicuna's benchmark.

Today, the race to replicate these results in open source comes to an end.

Minutes ago OpenChat scored 105.7% of ChatGPT.

But wait! There is more!

Not only did OpenChat beat Vicuna's benchmark, it did so while pulling off a LIMA [2] move!

Training was done using 6K GPT-4 conversations out of the ~90K ShareGPT conversations.

The model comes in three versions: the basic OpenChat model, OpenChat-8192, and OpenCoderPlus (code generation: 102.5% of ChatGPT).

This is a significant achievement considering that it's the first (released) open-source model to surpass the Vicuna benchmark. 🎉🎉
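For context, the Vicuna GPT-4 benchmark has a judge model score both systems' answers to the same questions, and the headline figure is the ratio of total scores. A hedged sketch of that arithmetic (the exact scoring protocol varies between papers, and these numbers are illustrative only, not OpenChat's actual evaluation):

```python
def relative_score(model_scores: list[float], reference_scores: list[float]) -> float:
    """Vicuna-style relative score: the sum of judge-assigned scores for
    the candidate model divided by the sum for the reference model
    (e.g. ChatGPT), expressed as a percentage. Exceeding 100% means the
    judge preferred the candidate overall."""
    return 100 * sum(model_scores) / sum(reference_scores)

# Illustrative per-question judge scores on a 1-10 scale:
print(round(relative_score([9, 8.5, 9], [8, 8.5, 8.5]), 1))  # prints 106.0
```

This is why a model can score above 100%: the metric is relative preference by a judge, not an absolute accuracy.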

Congratulations to the authors!!


[1] Orca, the first model to cross 100% of ChatGPT: https://arxiv.org/pdf/2306.02707.pdf
[2] LIMA: Less Is More for Alignment. TL;DR: a small number of very high quality samples (1,000 in the paper) can be as powerful as much larger datasets: https://arxiv.org/pdf/2305.11206

Model Catalog (lemmy.intai.tech)
submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech
BLIP (lemmy.intai.tech)
submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech
submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech

A watermark-free Modelscope-based video model capable of generating high-quality video at 1024x576. This model was trained with offset noise using 9,923 clips and 29,769 tagged frames at 24 frames, 1024x576 resolution.
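Offset noise, mentioned above, adds a per-sample, per-channel constant to the Gaussian noise used during diffusion training, which helps a model learn to produce very dark or very bright outputs. A minimal NumPy sketch of the idea (the 0.1 strength is a commonly cited default, not necessarily what this model used):

```python
import numpy as np

def offset_noise(shape, strength=0.1, rng=None):
    """Gaussian noise plus a per-sample, per-channel constant offset.
    shape = (batch, channels, height, width)."""
    rng = rng or np.random.default_rng(0)
    base = rng.standard_normal(shape)
    # One scalar offset per (sample, channel), broadcast over H and W,
    # shifting the mean brightness of the whole noise map:
    offset = rng.standard_normal(shape[:2] + (1, 1))
    return base + strength * offset

noise = offset_noise((2, 4, 64, 64))
```

During training this tensor would replace the plain Gaussian noise added to the latents; the constant shift breaks the zero-mean assumption that otherwise biases generations toward mid-gray brightness.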

submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech
submitted 1 year ago* (last edited 1 year ago) by manitcor@lemmy.intai.tech to c/models@lemmy.intai.tech

Machine Learning - Learning/Language Models


Discussion of models, their use, setup, and options.

Please include the models used with your outputs; workflows are optional.

Model Catalog

We follow Lemmy’s code of conduct.

Communities

Useful links

founded 1 year ago