ChaoticNeutralCzech

joined 2 years ago
13
defection rule (datasheet4u.com)
 

No longer want to fight for an imperialist despot? There's a single-chip solution for that!

Here's the PDF for higher resolution: https://datasheet4u.com/pdf-down/C/D/1/CD1379CP_ShaoxingSilicoreTechnology.pdf

It's either detection or deflection. The chip detects horizontal and vertical sync pulses in composite video and drives the deflection yokes accordingly to create a linear sawtooth of the correct phase and frequency for the beam to scan a rectangle while the visible part of the video signal is transmitted.
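The same lock-a-ramp-to-sync idea can be sketched in the digital domain. This is a hypothetical illustration (my own function names and a toy signal), not how the analog CD1379CP actually works internally: find the falling edges of horizontal sync pulses in a sampled composite signal, then generate a linear sawtooth that restarts at each edge.

```python
# Hypothetical sketch: recover horizontal sync from a sampled composite-video
# stream and derive the deflection sawtooth locked to it. Sync tips sit below
# black level, so a simple threshold separates them from picture content.

def find_sync_edges(samples, threshold=0.1):
    """Indices where the signal falls below the sync-tip threshold
    (falling edges of horizontal sync pulses)."""
    return [i for i in range(1, len(samples))
            if samples[i] < threshold <= samples[i - 1]]

def deflection_sawtooth(samples, threshold=0.1):
    """Linear ramp from -1 to +1 between consecutive sync edges --
    the yoke waveform that sweeps the beam across one scanline."""
    edges = find_sync_edges(samples, threshold)
    out = [0.0] * len(samples)
    for start, end in zip(edges, edges[1:]):
        period = end - start
        for i in range(start, end):
            out[i] = -1.0 + 2.0 * (i - start) / period
    return out

# Toy composite signal: 0.0 = sync tip, 0.3 = black level, 64-sample lines
line = [0.0] * 5 + [0.3] * 59
signal = line * 4
saw = deflection_sawtooth(signal)
```

The real chip does this with an analog PLL rather than edge detection, which is what keeps the sawtooth's phase and frequency stable even through noisy or missing sync pulses.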

[–] ChaoticNeutralCzech@lemmy.one 3 points 1 month ago (1 children)

Finally a surrealist reference I get

The aspect ratio problem only seems to affect images in edited comments. The viewport size remains cached even though the image src is different.

[–] ChaoticNeutralCzech@lemmy.one 2 points 2 months ago

Nah. Still, I find them pretentious and prefer en-dashes (which the text is also littered with): 20 em-dashes (—) and 5 en-dashes (–) – counted by my text editor – is just too many.

[–] ChaoticNeutralCzech@lemmy.one 5 points 2 months ago* (last edited 1 month ago) (1 children)

They have released a statement about this so rest assured it's OK.

Edit: I visited them at 39c3 and they're humans (with cat ears).

[–] ChaoticNeutralCzech@lemmy.one 5 points 2 months ago

Not yet, the TLD application needs to be submitted (that's what costs all that money) and approved, so it will take about 1.5 years if successful.

What's your 🏴󠁥󠁳󠁣󠁴󠁿Catalan project BTW?

[–] ChaoticNeutralCzech@lemmy.one 3 points 2 months ago* (last edited 2 months ago) (1 children)

Well, they're a team of 6 including a dedicated graphic design & marketing person, and they've produced a video and FAQ too, plus they've succeeded at bringing the ICANN application fee down as a non-profit. Yes, "kinship-based infrastructure" rubs me the wrong way too, but because it reeks of corporate investor talk, not AI. So I'm pretty sure they did take the time to write the article and every piece of text on the website. Not to mention the legal document (bound by Belgian law) that ensures the money goes towards the stated mission.

I don't like that a big tech corporation can register .meow too but there's no avoiding that. Even the Catalan domain, whose purpose is to promote their language and culture, has seen "misuse" such as nyan.cat.

 

The most :3 top-level domain could become real! 100% queer-owned and queer-operated with proceeds funding LGBTQIA+ infrastructure.

[–] ChaoticNeutralCzech@lemmy.one 1 points 5 months ago* (last edited 5 months ago)

Hmmm... Looks all right to me, so why can't I replicate the issue with this comment?

 

This image is square and the upper half is transparent

[–] ChaoticNeutralCzech@lemmy.one 5 points 8 months ago (2 children)

FYI, Thiruvananthapuram is in India. I don't know why neither of them cared to mention that.

[–] ChaoticNeutralCzech@lemmy.one 7 points 11 months ago

No way Teams is the most lightweight vehicle around

[–] ChaoticNeutralCzech@lemmy.one 5 points 11 months ago

Now do Krita

...oh wait

 


I think this is one of the worst ones:
Attention: Surveillez votre tête (roughly "Warning: Keep watch over your head")

US company sells ridiculously machine-translated US safety signs that obviously don't follow European standards. Feel free to pick the funniest ones and make a collection.

Sorry, I can't speak French so I can't do that myself. Yes, it's a little hypocritical of me to laugh at what I think is a machine translation while using one myself. Maybe the French ones are not machine-translated but I'm guessing they might be because half of the German ones are ridiculous.

You're right. Later in the video, this shot with the same fake film effect appears and that's indubitably AI (look at bottom right):

The video narration implied this is footage from a rare or unfinished film, though.

 

Found by @CrayonRosary@lemmy.world: it originates from Dune by Alejandro Jodorowsky - Teaser Trailer (1976)


Source: used as B-roll in the intro of this video: https://www.youtube.com/watch?v=f8AJk2Sns_k&t=3

Here are individual frames but image search (SauceNAO, Google Lens, IQDB, Yandex) has not been helpful.

Frames (images not shown)
Transcript: Close-up shot of a woman's face with a neutral expression, short brown '80s hair, lipstick, thick sharp eyeliner and glowing aqua irises. Widescreen with a higher-than-usual amount of film artifacts.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the last one in the series. Bye!

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, upscaling digital art with a well-trained algorithm will likely have little to no undesirable effect. Why? The drawing originated as a series of brush strokes, fill areas, gradients etc. that could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist–Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.)

Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess for detail (an assumption that does not hold for photos), you could redraw every stroke and arc with vector shapes in the same colors and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process – not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
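The flag analogy can be made concrete with a toy sketch (my own function names, not any upscaler's actual code): a two-band "flag" described abstractly and rendered at any resolution, versus naive pixel upscaling of an already-rendered low-res copy. Because no feature is smaller than 2 pixels in the low-res render, the two routes agree exactly.

```python
# Toy illustration of "redraw from the description" vs "upscale the pixels".

def render_flag(width, height):
    """Render a flag whose top half is colour 1 and bottom half colour 2
    directly from its 'vector' description at the target resolution."""
    return [[1 if y < height // 2 else 2 for _ in range(width)]
            for y in range(height)]

def nearest_neighbour(img, factor):
    """Naive pixel upscaling: repeat every pixel factor x factor times."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

low = render_flag(8, 4)          # low-res render of the description
up = nearest_neighbour(low, 4)   # pixel-upscaled to 32x16
hi = render_flag(32, 16)         # re-rendered from the description

assert up == hi  # no detail had to be guessed, so the results match
```

Real drawings have curves and anti-aliased edges, which is where trained upscalers earn their keep; nearest-neighbour would leave those blocky, while the model infers the smooth shape the pixels came from.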

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Gazebo on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Infinity Gauntlet on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

15
submitted 1 year ago* (last edited 1 year ago) by ChaoticNeutralCzech@lemmy.one to c/morphmoe@ani.social
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Frostpunk Automaton on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Land Dreadnought

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Crabsquid on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Seamoth and other Subnautica creatures in the comments
