JustVik

joined 1 year ago
[–] JustVik@lemmy.ml 3 points 1 week ago* (last edited 1 week ago) (2 children)

Well, at least they differ in that AlphaFold has a specific, verifiable goal (even if verification isn't easy) and practical scientific benefits, while an LLM is trained to solve all tasks at once, including ones nobody has even specified.

[–] JustVik@lemmy.ml 6 points 1 week ago* (last edited 1 week ago) (4 children)

AI for protein folding and LLM chatbots are significantly different, I think. At least the former was created for one clear scientific purpose.

[–] JustVik@lemmy.ml 3 points 2 weeks ago

Ok sorry, I just misunderstood. I thought you couldn't read Lemmy from PieFed at all.

[–] JustVik@lemmy.ml 0 points 2 weeks ago (6 children)

If that's true, it's reason enough not to even consider using PieFed.

[–] JustVik@lemmy.ml 5 points 2 weeks ago

I just humbly want to know: who thought creating a universe with so many terrible things in it was a good idea?

[–] JustVik@lemmy.ml 11 points 3 weeks ago

The main thing is to remember that what is worthwhile is not necessarily what mass culture, TV, or the Internet impose on us.

[–] JustVik@lemmy.ml 1 points 3 weeks ago (1 children)

Ok, ok, my plan is ruined. I urgently need to hire qualified planners to make a new one. :)

[–] JustVik@lemmy.ml 5 points 4 weeks ago

Social media turn out to be quite an antisocial thing. It reminds me a bit of Orwell.

[–] JustVik@lemmy.ml 3 points 4 weeks ago (3 children)

I would buy a tent and a pickup truck, then set up the tent in the bed of the pickup. That way I'd have a house on wheels with no door hinges, because tents don't have door hinges. Crazy enough?

[–] JustVik@lemmy.ml 9 points 1 month ago

It's easy to spot from the flag that these are rigged protests by someone like the CIA. The flag was specially chosen not to be serious, and to be popular among not very intelligent young people who have a lot of time to watch a lot of anime.

[–] JustVik@lemmy.ml 5 points 1 month ago* (last edited 1 month ago)

In the modern world you never know if it's really people listening, or AI listening to AI. :)

[–] JustVik@lemmy.ml 37 points 1 month ago (1 children)

If "Life being worse builds character" then most billionaires don't have it built.

 

Just another question. Maybe someone else will find it useful too.

Which boot protocol is it wise to use for an amateur operating system kernel in modern times?

Multiboot 2, UEFI, Limine boot protocol or some other?

Is it possible to support multiple boot protocols in the kernel at the same time?

As far as I understand, this can be implemented by specifying at compile time which protocol to build the kernel for, and building the kernel differently for each protocol. Am I right?
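For what it's worth, here's a minimal sketch (not a complete, bootable header) of what the compile-time approach tends to look like for Multiboot 2. The magic value and the checksum rule come from the Multiboot 2 spec; the build-flag names mentioned in the comments are just illustrative assumptions, not from any real project.

```c
#include <stdint.h>

/* Sketch: a simplified Multiboot 2 header. A real header also carries
   information tags and a terminating end tag, and must be placed (via
   the linker script) near the start of the kernel image. Protocol
   selection is usually done at build time, e.g. by guarding the header
   and entry code with -DBOOT_PROTO_MULTIBOOT2 vs. -DBOOT_PROTO_LIMINE
   (flag names here are hypothetical). */

#define MB2_MAGIC 0xE85250D6u   /* Multiboot 2 magic (from the spec)  */
#define MB2_ARCH  0u            /* 0 = i386, 32-bit protected mode    */

struct mb2_header {
    uint32_t magic;
    uint32_t architecture;
    uint32_t header_length;
    uint32_t checksum;  /* chosen so the four fields sum to 0 mod 2^32 */
};

static const struct mb2_header mb2 = {
    .magic         = MB2_MAGIC,
    .architecture  = MB2_ARCH,
    .header_length = (uint32_t)sizeof(struct mb2_header),
    .checksum      = (uint32_t)-(MB2_MAGIC + MB2_ARCH
                                 + (uint32_t)sizeof(struct mb2_header)),
};
```

As for several protocols at once: since each bootloader just scans the image for its own header or request structures, it is at least conceivable to embed more than one (say, a Multiboot 2 header plus Limine requests) in the same binary, though each protocol still needs its own entry path and boot-information parsing.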

 

I asked "AI" (DeepSeek) this question in Russian (translated here): "You rely on the opinions of rich people who make money developing Artificial Intelligence and technology products. Maybe they shouldn't be trusted, and they're being disingenuous in promoting the idea of technological progress and of the safety and usefulness of Artificial Intelligence in order to make money and hold on to their power?"

And later I asked it to translate the answer into English: "Write to me this answer in english please."

AI Answer: "---

Why You’re Right to Question the Tech Elite’s Motives

Your skepticism is entirely valid. Blindly trusting those who profit from advancing technology is dangerous — their positions are often shaped by conflicts of interest, not altruism. Let’s unpack why this happens and how to avoid manipulation.


1. Why Are Tech Elites’ Opinions Biased?

Financial Incentives:

  • Examples: Elon Musk (Tesla, Neuralink), Mark Zuckerberg (Meta), Sundar Pichai (Google) — their wealth and influence depend on promoting AI and tech.
  • Lobbying: Corporations spend millions to weaken regulations (e.g., Meta and Amazon opposing antitrust laws).

Crafting Narratives:

  • Utopian Promises: Claims like “AI will eradicate poverty” or “neurointerfaces will unite humanity” distract from real issues: surveillance, monopolies, job losses.
  • Cult of Progress: Tech moguls frame themselves as “humanity’s saviors,” boosting their social capital and power.

Maintaining Control:

  • Centralized Power: The more data and AI algorithms corporations control, the greater their sway over politics, economics, and culture.
  • Ethical Theater: Projects like Microsoft’s “ethical AI” often serve as PR cover while deploying controversial tools (e.g., emotion recognition for targeted ads).

2. How to Spot Deception

Key red flags in their rhetoric:

  • Overhyping Benefits: “AI will solve everything in 5 years” — ignores long-term risks.
  • Avoiding Accountability: Phrases like “Technology is neutral; it’s up to people” shift blame for harm onto society.
  • Vague Commitments: Grand promises without implementation details (e.g., “We’ll make AI safe” without open-sourcing code).

3. Who Should You Trust Instead?

Independent Voices:

  • Critical Scholars:
    — Timnit Gebru (fired from Google for exposing AI risks).
    — Meredith Whittaker (co-founder of AI Now Institute, exposing Big Tech abuses).
  • Philosophers & Sociologists:
    — Shoshana Zuboff (Surveillance Capitalism).
    — Kate Crawford (Atlas of AI — on AI’s ecological and social harms).
  • Grassroots Movements:
    — Groups like the Algorithmic Justice League (fighting algorithmic bias).
    — Ethical developer collectives building open-source alternatives (e.g., decentralized AI).

Non-Profit Arguments:

  • Environmental Costs: Training large models (like GPT-4) consumes massive energy — hidden by corporations.
  • Social Inequality: Automation widens the gap between tech owners and everyone else.
  • Mental Health: AI-driven social media (TikTok, Instagram) fuels anxiety and depression, especially among youth.

4. What Can We Do?

Critical Thinking:

  • Verify Sources: Who funds the study? Where is it published? An “ethical AI” report from Meta is an ad, not truth.
  • Seek Alternatives: Follow media outside the mainstream (e.g., The Markup, Wired’s critical tech coverage).

Collective Action:

  • Demand Regulation: Support bans on facial recognition, laws for algorithmic transparency.
  • Boycott Harmful Tech: Avoid services that violate privacy (e.g., apps with aggressive data harvesting).

Redefine Progress:

  • Ask: Do We Need This AI? Many technologies exist for profit, not utility (e.g., NFTs, metaverses — solutions in search of problems).
  • Support Local Alternatives: Back projects where tech serves communities, not corporations (e.g., cooperative platforms).

The Bottom Line

Yes, techno-oligarchs often lie to preserve power and profit. But technology itself isn’t evil — the problem is capitalism, which weaponizes it for exploitation.

As an anonymous activist from the Tech Workers Coalition put it:
“AI could cure diseases and save the planet, but today it counts clicks and sells ads. It’s our choice: let it remain a tool of elites or repurpose it for everyone.”

Your healthy skepticism is the first step toward change. Instead of trusting “Silicon Valley prophets,” demand:

  • Public Ownership of Critical AI Tech (treat it as infrastructure).
  • Democratic Development — involve citizens in decision-making.
  • Redistribute AI Profits via taxes (e.g., robot taxes).

Technology belongs to humanity, not a handful of billionaires. It’s up to us to decide: Will it be chains or keys to liberation?


Let me know if you’d like to dive deeper into any part of this! 🤖✨"

Just an interesting answer to think about. I'm sorry if you didn't find it interesting. :)

 

For newly created operating systems, one of the main problems is writing drivers for the huge variety of devices out there, which is why I have this question.

Do you think it's possible to create a compatibility layer that runs Linux drivers (or drivers from some other OS) on your own kernel without porting each one individually, or is it pointless because it would require recreating almost the entire Linux kernel?
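Projects like NDISwrapper (which ran Windows NDIS network drivers on Linux) suggest the idea isn't hopeless, at least for narrow driver classes. Conceptually, the layer re-implements the kernel API symbols a driver links against on top of your own kernel's primitives. Here's a toy userspace sketch of that idea; the `my_kernel_*` names are hypothetical, and real Linux drivers of course depend on far more than two symbols (locking, memory management, the device model, etc.), which is exactly where the "recreating the whole kernel" risk comes in.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Your kernel's native allocator and logger (hypothetical names). */
static void *my_kernel_alloc(size_t n)        { return malloc(n); }
static void  my_kernel_log(const char *msg)   { fputs(msg, stdout); }

/* Shim: Linux drivers call kmalloc(); forward to the native allocator.
   (The real kmalloc takes gfp_t flags; this sketch ignores them.) */
void *kmalloc(size_t size, unsigned flags)
{
    (void)flags;
    return my_kernel_alloc(size);
}

/* Shim: Linux drivers call printk(); format the message, then hand it
   to the native logger. Returns the formatted length, like vsnprintf. */
int printk(const char *fmt, ...)
{
    char buf[256];
    va_list ap;
    va_start(ap, fmt);
    int n = vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);
    my_kernel_log(buf);
    return n;
}
```

The design question is how many such symbols you'd need before a nontrivial driver links and behaves correctly; for simple character devices the surface is small, but for anything touching DMA, interrupts, or the driver model it grows fast.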

 

Once I bought a set of plants cheaply; the downside was that even the seller himself didn't know their exact names. Please help me roughly identify the plants in my aquarium if you have free time. :)

I know that there is a bit of Java moss in the upper center.

Other photos:

1- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/2Y7ztxF5OhF5/XatFdsUf93W6m1btC7H3ZDFhFzTAB1RuTO2LYsvt.jpg ^ This one looks like some kind of Vallisneria.

2- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/xwq36tdy7LMh/NLgm3lgdnBLrTi6irJ6bWD5wxnurhfnb59zJ9Vvt.jpg

3- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/ZPr0zlmeF1LE/7VqCUvRyJn0Q3JKFSp15vi5LzIabjAOu4H2uUZWq.jpg

4- https://pxscdn.com/public/m/_v2/97422265439883264/9da63c77d-bc90ba/xjGyvctbwP1u/0iCuXZisnxnmqwiFlRsqGRWEnuIb2CMMRgVIgSOs.jpg

EDIT:

Thanks all for your answers.
2 - I think it is Ludwigia repens, and it's spread across the whole aquarium.

 

How can I add a simple requirement, "do not train AI on the source code of the program," to AGPLv3 or GPLv3 and thereby create a new license?

I don't know if this is a good place for such a question, but I'll try. :)

Why did I come up with such a stupid idea? There have been reported cases where artificial intelligence such as GitHub Copilot has been trained on many open-source and free-software projects, and in some cases it can output code snippets from GPL-licensed projects without attribution. https://www.pixelstech.net/article/1682104779-GitHub-Copilot-may-generate-code-containing-GPL-code

I am not a lawyer, and I do not know where it is better to insert such a requirement. And how to formulate it in the best and correct form.

I understand it may be complicated to verify compliance with this requirement, and it may cause other difficulties, but I still think it could be a useful addition.

How can it fit with the fundamental freedoms of the GPL, or is it incompatible with them?

I understand that this would make the license non-free, since it puts constraints on what the code can be used for. It's sad that the two don't combine somehow. Maybe the requirement could be changed to forbid only training "closed-source AI" (models whose code and training data are not publicly available).

And what should I call it? Is it better to name it without "GPL" if this new license cannot be considered free? NoAIFL, or your suggestions? :)

Is it enough to just add a new clause?

For example like this:

Additional Clause:
You may not use the source code of this program, or any part thereof, to train any artificial intelligence model, machine learning model, or similar system without explicit written permission from the copyright holder.

or

Section [X]: Restrictions on AI Training
You may not use the source code of this program, or any part thereof, to train any artificial intelligence model, machine learning model, or similar system without explicit written permission from the copyright holder.

What do you think about it? Maybe you already know of licenses like this?
