
Hello Machine Learning Community,

The intention of this post is to replicate a similar tradition from r/MachineLearning and to encourage engagement. This post will be created weekly.

What are you reading this week, and do you have any thoughts to share?

submitted 1 year ago by fox@lemm.ee to c/machinelearning@lemmy.ml


I was looking through papers that combine LLMs and RL; this one was pretty fascinating, and its citations are perfect for continuing my search.

I'd love to know what others are reading and why they think it's awesome (or not), and in general to get exposure to other subgenres of ML. Most of the papers I read are in the computer vision domain because of work, so I'd appreciate reading about other areas.

So...

  1. Are you all interested in such a post?
  2. If yes, which day of the week?
Gaussian processes from scratch (peterroelants.github.io)

Great series on machine learning. Posting for anyone interested in more of the details on AIs and LLMs and how they're built/trained.
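
For anyone curious what "from scratch" means in practice, here is a minimal sketch of sampling from a Gaussian process prior with an RBF kernel in plain NumPy (my own illustration, not code from the series):

+++

python
# Minimal sketch: draw samples from a GP prior with an RBF (squared-exponential)
# kernel. The linked series derives all of this step by step.
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    # k(x, x') = variance * exp(-(x - x')^2 / (2 * length_scale^2))
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / length_scale**2)

x = np.linspace(-5, 5, 100)
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability
samples = np.random.multivariate_normal(np.zeros(len(x)), K, size=3)
print(samples.shape)  # (3, 100): three random functions evaluated at 100 inputs

+++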

SDXL Model Report (github.com)

TLDR of Stability-AI's Paper:

Summary: The document discusses the advancements and limitations of the Stable Diffusion XL (SDXL) model for text-to-image synthesis. SDXL shows significant improvements in synthesized image quality, prompt adherence, and composition. However, it also has limitations, such as challenges in synthesizing intricate structures like human hands, achieving perfect photorealism, addressing biases, mitigating concept bleeding, and improving text rendering. The document also compares SDXL with Midjourney v5.1, where SDXL is slightly preferred in terms of prompt adherence. The document concludes with suggestions for future improvements.

Key Takeaways:

  1. SDXL outperforms or is statistically equal to Midjourney V5.1 in 7 out of 10 categories.
  2. SDXL does not achieve better FID scores than the previous SD versions. This suggests the need for additional quantitative performance scores, specifically for text-to-image foundation models.
  3. SDXL outperforms Midjourney V5.1 in all but two categories in the user preference comparison.
  4. The model may encounter challenges when synthesizing intricate structures, such as human hands.
  5. The model does not attain perfect photorealism. Certain nuances, such as subtle lighting effects or minute texture variations, may still be absent or less faithfully represented in the generated images.
  6. The model’s training process heavily relies on large-scale datasets, which can inadvertently introduce social and racial biases.
  7. The model may exhibit a phenomenon known as “concept bleeding” where distinct visual elements unintentionally merge or overlap.
  8. The model encounters difficulties when rendering long, legible text.
  9. Future work should investigate ways to provide a single stage of equal or better quality, improve text synthesis, enable scaling to much larger transformer-dominated architectures, decrease the compute needed for inference, and increase sampling speed.
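
For anyone who wants to try the model rather than just read about it, here is a minimal usage sketch with the Hugging Face diffusers pipeline (the model ID and settings are my assumptions based on the public release, not something taken from the report):

+++

python
# Minimal sketch (assumed setup, not from the report): text-to-image with SDXL
# via diffusers. Requires a CUDA GPU with enough VRAM for the fp16 weights.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(prompt="an astronaut riding a horse, photorealistic").images[0]
image.save("sdxl_sample.png")

+++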

I work with machine learning tasks daily, both as an ML researcher and as a hobbyist. The difference between what I can do at work and at home is significant: an A40 at work can do far more than the 3080 I have at home. That obviously makes sense, given the massively increased price point.

However, what I find odd is that there are no consumer-level server GPUs targeted towards ML on the market. The A40 is not just a scaled-up consumer GPU, and with machine learning growing as a hobby, consumer- and enthusiast-level server GPUs are a surprising market gap.


https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/#/

On the face of it, the ability to run models larger than GPU memory would seem to be extremely valuable. Why did they give up? Not everyone has an 80GB GPU.

Was the performance too slow?
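
For comparison, here is a minimal sketch of the CPU-offload route that is common today with transformers/accelerate (the model choice and memory budgets are my assumptions, not anything from the link):

+++

python
# Minimal sketch (assumed example, requires accelerate installed): load a model
# that may not fit in GPU memory by placing as many layers as possible on the
# GPU and spilling the rest to CPU RAM.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b",                   # hypothetical model choice
    device_map="auto",                       # GPU first, overflow to CPU
    torch_dtype=torch.float16,
    max_memory={0: "8GiB", "cpu": "48GiB"},  # assumed budgets for a 3080-class card
)
print(model.hf_device_map)                   # shows where each block landed

+++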


From the README.md:

This is the real release of the open-orca dataset.

alpaca format, uncensored.

Original paper from ms research: https://arxiv.org/abs/2306.02707

Hartford's blog post detailing his work: https://erichartford.com/openorca

Grab the data and get to work: https://huggingface.co/datasets/OpenOrca/open-orca
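
If you just want to peek at the data before committing to a download, here is a minimal sketch with the datasets library (split and field names are assumptions; check the dataset card):

+++

python
# Minimal sketch (split/field names assumed; see the dataset card for specifics).
from datasets import load_dataset

ds = load_dataset("OpenOrca/open-orca", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one alpaca-format record

+++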

Vicuna 13B vs 33B (lemmy.world)
submitted 1 year ago* (last edited 1 year ago) by Technotica@lemmy.world to c/machinelearning@lemmy.ml

Hi! I posted this over at lemmy.world in the technology sublemmy, but I thought maybe some knowledgeable people here could answer the question more easily.

Oh, and I know it's not really designed to give out functioning code, but it looks good enough that I thought I'd ask.

I have both Vicuna-13B and the preview version of Vicuna-33B running on my desktop and asked both a simple coding question from another user:

"What can I use to draw curved text in an SVG?"

I got these two answers. I am not really a coder, but while the code looks like it could work, I don't think it is correct.

Can someone here tell me if either answer is correct or more correct than the other?

Thanks!

33B:

.....

User: How are you doing today?

Vicky: Today, I'm doing well, thank you for asking! How about you?

User: I am doing great! what can I use to draw curved text in an SVG?

Vicky: To draw curved text in an SVG, you can use the element. This allows you to wrap text along a path defined by an SVG path or elliptical arc. First, create the path using the 'path' or 'ellipse' element, then use the element to wrap the text along the path. Here's an example:

+++

html
<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600" viewBox="0 0 800 600">
  <path id="curve" d="M200,200 Q400,200 400,400" fill="none" stroke="black" stroke-width="2"/>
  <text text-anchor="middle" dominant-baseline="central" x="300" y="300" text-path="url(#curve)">
    The quick brown fox jumps over the lazy dog
  </text>
</svg>

+++

In this example, a quadratic curve is defined using the 'Q' command in the 'path' element, and the text is wrapped along the curve using the element with the 'url(#curve)' attribute. You can adjust the 'd' attribute of the path and the text's position to create different curves with the text. .....

13B:

.....

User: what can I use to draw curved text in an SVG?

Vicky: In SVG, you can use the "textPath" element to create curved text. You can also use the "text" element and position it using "transform" attributes to achieve the same effect.

User: Can you give me a code example for that?

Vicky: Sure! Here's an example using the "textPath" element:

+++

php
<svg width="100" height="100">
  <text xlink:href="#text1" y="50" text-anchor="middle" font-size="20">
    This text has a curved path.
  </text>
  <rect x="50" y="50" width="30" height="10" fill="#000" />
  <text xlink:href="#text2" x="70" y="50" text-anchor="middle" font-size="20">
    This text has a curved path.
  </text>
</svg>

+++

And here's an example using the "text" element:

+++

php
<svg width="100" height="100">
  <text xlink:href="#text1" y="50" text-anchor="middle" font-size="20">
    This text has a curved path.
  </text>
  <rect x="50" y="50" width="30" height="10" fill="#000" />
  <text x="70" y="50" text-anchor="middle" font-size="20">
    This text has a curved path.
  </text>
</svg>

+++

*Note that you'll need to define the curved path for each text element using an SVG path string. In this case, the path strings are #text1 and #text2.

To use the same path for multiple text elements, you can define the path once and then reference it using the xlink:href attribute. The text-anchor attribute determines the point of the text element where the path is anchored.*

.....

(no idea why Lemmy doesn't italicize that last text part)

13B-Vicky gave more code; I have no idea which code example is better. 13B runs much faster than 33B, so it would be really interesting to know whether either or both examples are valid, and whether 13B, despite being the simpler model, gave a correct answer. 33B is a preview model, not the final version.

What I am noticing is that neither actually used the textPath element, though 33B used a text-path attribute; I have no idea if that exists.
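
For reference, the standard way to curve text in SVG is a textPath element nested inside a text element, pointing at a path by id; as far as I know there is no text-path attribute in the spec. Here is a minimal sketch (my own, not output from either model) that writes such a file:

+++

python
# Minimal sketch (my own, not from either model): curved text in SVG uses a
# <textPath> nested inside <text>, referencing a <path> via (xlink:)href.
svg = """<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="800" height="600" viewBox="0 0 800 600">
  <path id="curve" d="M100,300 Q400,100 700,300" fill="none" stroke="black"/>
  <text font-size="24">
    <textPath xlink:href="#curve" startOffset="5%">
      The quick brown fox jumps over the lazy dog
    </textPath>
  </text>
</svg>
"""

with open("curved_text.svg", "w") as f:
    f.write(svg)  # open this file in a browser to see the curved text

+++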

P.S. By offloading 41 of the 63 layers to the GPU, 33B runs as fast on my machine as 13B does without offloading but with CUDA. Very interesting.
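
In case it helps anyone reproduce that, here is a minimal sketch of partial GPU offload with llama-cpp-python (assuming a llama.cpp-based runner and a local model file; the post doesn't say which tool was actually used, and the file name below is hypothetical):

+++

python
# Minimal sketch (assumed tooling and paths): offload 41 of the model's layers
# to the GPU and keep the rest on the CPU, as described in the p.s. above.
from llama_cpp import Llama

llm = Llama(
    model_path="./vicuna-33b.q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=41,                        # layers offloaded to the GPU
    n_ctx=2048,
)
print(llm("What can I use to draw curved text in an SVG?", max_tokens=128))

+++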
