darksecret

joined 1 month ago
[–] darksecret@lemmy.zip 1 point 2 days ago

I admit that I could have been more explicit and comprehensive on the political issues, but it was already too long. I think we agree on the overall premise, though. LLMs are a neutral weapon (as all weapons are), but the current landscape of corporatist acquisition of this quite powerful weaponry puts the disenfranchised at a severe disadvantage, even at a (as you rightly noted) geopolitical level, with the colonisation and exploitation of the global South. But my point was that the only viable response to this is to seize the weapons, not to flee the battlefield. Criticising AI usage because it co-opts the enemy's tools is as naive as "you hate capitalism, but still use things made under capitalism".

 

cross-posted from: https://lemmy.zip/post/64009036

cross-posted from: https://lemmy.zip/post/64009035

cross-posted from: https://lemmy.zip/post/64007684

Introduction

The current socio-political discourse is dominated by a new divisive issue: "AI", so-called Artificial Intelligence. While some are vehemently opposed to the idea of AI infiltrating ever more aspects of life, others are convinced of its revolutionary transformative power. The use of AI in our project, The Brotherhood, has also been called into question, and this essay will attempt to lay out my^[not everyone working on the project, just my own] perspective on it.


What even is "AI"

What is typically referred to as "AI" is, in more technical corners, known as LLMs, or Large Language Models. They are a new innovation^[though already fairly old: around 2017-18] in a long line of automation technology going back to the mid-20th century, not long after the computer itself had begun to prove its immense usefulness.

The long journey of automation

Actually, the computer itself can be seen as the first innovation in this line of automation. After all, a computer is a literal automatic computation^[and much more, of course!] machine: carefully arranged silicon, doped with elements like phosphorus, manipulates electron flows to deterministically execute rigorously defined steps.
Taking this further and further was always an ambition of early computer scientists. And as speed and capacity became accessible, effort went into closer integration with humans. This was no trivial task, as the computer and the human spoke two different languages that might as well have been from different universes. From punched cards, where programmers painstakingly "wrote" binary onto literal cards, to Fortran and subsequent programming languages, to operating systems, GUIs and applications, we have made tools for our tools, for our tools, in a seemingly endless recursion.
One of the biggest areas programmers became interested in, towards the very end of the 20th century, was natural language processing, to further bridge this "language gap". It is what powered the early internet's search engines. Note that this differs fundamentally in structure from previous tools: it is not deterministic, because language itself is not deterministic. So these tools relied on various statistical techniques such as n-grams, Markov models, Bayesian inference and so on.
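
To make that statistical flavour concrete, here is a minimal sketch of a bigram (2-gram) model in Python. The tiny corpus and the resulting probabilities are invented purely for illustration; real systems of that era used far larger text collections, with smoothing techniques on top.

```python
from collections import defaultdict, Counter

# Estimate P(next_word | word) from raw bigram counts.
corpus = "the cat sat on the mat the cat ate".split()
counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def next_word_probs(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.667, 'mat': 0.333} (approximately)
```

The model never "understands" anything; it merely counts which word tends to follow which. That is exactly the non-deterministic, probabilistic character described above.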

The parallel research on Neural Networks

Around the same time, with the advent of neuroscience^[which replaced the earlier psychological models of Freud, Jung and Lacan, models that were indeed not suited to the STEM fields], another curious line of research began with the perceptron.
Though very much influenced by early neuroscience, it slowly split from its initial inspiration and drifted towards statistics, rather than trying to follow the exact structure of the brain. This line too went through its own series of innovations: neural networks, backpropagation, Hopfield networks, CNNs, LSTMs and so on.
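
For a taste of where this lineage began, here is a minimal sketch of the classic perceptron learning rule, trained on the AND function. The toy data, learning rate and epoch count are arbitrary illustrative choices; the update rule itself is the standard one.

```python
import numpy as np

# Truth table for AND: output is 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # step activation
        err = target - pred
        w += lr * err * xi                 # nudge weights towards the target
        b += lr * err

print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```

Nothing brain-like survives here beyond the vocabulary: it is a linear classifier adjusted by simple error feedback, which is precisely the drift towards statistics mentioned above.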
But two innovations were critical for the explosion of interest in this hitherto very niche field:

  1. Deep neural networks, which made use of the newly popular GPUs back in the early 2010s
  2. Transformers, the subject of the now-legendary 2017 paper "Attention Is All You Need"

In the early 2020s, it was realised that these two could be combined and scaled up massively^[and I mean massively] to gain a general grasp of the semantics of natural language. This is where the two paths collided. What started as experimental cognitive research at the intersection of neuroscience and computation turned into a statistical method for giving computers an understanding of semantic language! Thus began the era of LLMs.

An LLM is simply a statistical model trained to have a general understanding of semantics!
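
For the curious, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the transformer: a single head, no masking and no training, with the dimensions chosen arbitrarily for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token representations.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # pairwise token affinities
    weights = softmax(scores)                # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Every output token is a statistically weighted blend of all the others, which is part of why the mechanism scales so naturally with data and compute.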


So What's All the Hype

What is True

The innovations, especially GPUs and transformers, are genuinely groundbreaking ones that broke a very long stall in their respective fields. And their combination to create LLMs is indeed a great engineering feat, even if not that innovative from a purely academic standpoint^[the massive scaling needed is another level of brute-forcing. Think of the pyramids of Egypt: not so much clever as awe-inspiring, simply due to scale].
And it is also true that this has opened the pathway to commercial usage in ways that were simply not possible earlier. In a certain sense, it is an upgrade of the search engine with a powerful fuzzy semantic translator.
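
One way to picture that "fuzzy semantic translator": modern systems compare learned embedding vectors rather than raw keywords. A minimal sketch, with made-up four-dimensional vectors standing in for the output of a trained encoder:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings; real encoders produce hundreds of dimensions.
emb = {
    "car":     np.array([0.9, 0.1, 0.0, 0.2]),
    "vehicle": np.array([0.8, 0.2, 0.1, 0.3]),
    "banana":  np.array([0.0, 0.9, 0.8, 0.1]),
}

query = emb["car"]
for word, vec in emb.items():
    print(f"{word}: {cosine(query, vec):.2f}")
# "vehicle" scores near 1.0, "banana" near 0, despite no shared characters.
```

A keyword search would see "car" and "vehicle" as unrelated strings; the fuzzy semantic layer sees them as neighbours.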
It is indeed a great addition to the coding landscape. Programming used to be mostly manual intellectual labour, where you had to go hunting for that one silly bug, or implement a very simple system for the 100th time. Now a lot of this can be automated. However, to think that this makes programming itself obsolete is very naive. For most serious projects you still need solid knowledge of computer science, but the barrier to entry has indeed been lowered^[which is either very good news, or very disappointing, depending on how much you like to gatekeep your nerdy interests!]. Most serious programmers have simply become senior software developers, delegating the repetitive manual tasks to the "AI", which can understand natural language and turn it into code it has seen before^[if it has been trained on it].

What is the "Bubble"

What remains in heavy doubt is the "efficiency" problem. It is still very unclear whether something like Moore's Law will come into play here and drive costs down over time, or whether the architecture itself, despite its genuine innovations, is fundamentally limited. The big corporations are betting on the former.
Meanwhile, some "tech-enthusiasts" have become a little too enthusiastic about the range of its applicability. LLMs, like any sophisticated statistical model, require massive amounts of structured data. In certain areas, like day-to-day coding or summarisation, this is not that hard. However, in areas like robotics, it is still far from a "done" job^[even just obtaining the structured data].
The more laughable matter is that some have begun reading esoteric questions of consciousness^[philosophical exploration of consciousness is indeed possible, but requires a level of rigour and seriousness that is missing from most such discussions] into this new technology. This is partly due to the field's specific ancestry, and mostly just due to the human tendency to jump on the bandwagon.


What About the Political Issues

Now we come to the most important part of this discourse. I will break it down into the specific points that are most frequently called into question.

The Environmental Hazards

As it now stands, the development and deployment of LLMs remain highly inefficient. But technology and development have always come at the expense of natural resources and their equilibrium. The question is not whether it is ethical, but who controls, and who decides, how much is sustainable^[moreover, the current climate crisis has already put considerable strain on these resources in a lot of places].
At this point, however, it stops being an environmental concern and becomes a political one. Neoliberals would argue that the market will balance itself once resource scarcity becomes critical, whereas their opponents argue that state intervention is needed to prevent a calamity in the first place. But whatever the arguments about their respective ideal states, the truth is that the real world matches none of those "ideal world" situations.
The neoliberal free market does not exist in its full glory, as most of the technology market is monopolised by a few corporations. The current global climate crisis is itself a failure of the free markets of the industrial and post-industrial eras. State intervention, meanwhile, remains at best ineffectual, and at worst prone to lobbying by those same monopolist corporations.

The conclusion is that control of such critical decisions remains concentrated in the hands of a few oligarchs, who are prone to taking risky decisions and making mistakes.

The Data "Theft"

It is no secret that the data LLMs are trained on is public data. However, access to these LLMs remains out of the hands of the very people whose data brought them to fruition. It is also clear that current copyright law was not built to handle such cases.
Closed-source LLMs represent a new kind of injustice with no easy solutions. On one hand, making LLMs accessible to all would exacerbate the "hype-train" and worsen the environmental impact; on the other, halting research on such a lucrative frontier would be catastrophically conservative. And again, this comes down to control: control over how and where source data is gathered, and how it is commercialised. But as long as the monopolies exist, especially on the production of cutting-edge LLMs, control remains firmly in the hands of a select few.

The Unemployment Issues

The layoffs have been quite eye-catching, since they hit highly educated, white-collar employees. But this is a constant byproduct of changing times and advancing technology, especially automation. It cannot be avoided without an aversion to technology itself^[which is a hard sell in the modern world!].
However, this never leads to humans having "no work left to do" at all. No, jobs come and jobs go! But as the current landscape stands, it is indeed the case that many millions of people will get trampled under the changing times: people who have long pursued a high-profile career, only to lose their long-expected market demand or high-end salary.
This represents an utter failure of our social contract. That technological progress comes at the cost of social cohesion is a reflection of how embarrassing our societal technology is in comparison to our other feats^[such as engineering, or research, or industrialisation]. Automation should, in theory, be a boon to the labour force, taking away manual labour in exchange for far more interesting jobs and more time for recreation! But alas, instead it represents an existential threat to a substantial section of the population!

No society can last that stands in structural opposition to technological progress. Our societal technology needs to keep up!


So Where Does The Brotherhood Stand on This

Now, The Brotherhood is NOT a monolithic entity. The different people here hold significantly different positions on this^[that division is one of the reasons for this long essay]. However, I have been a significant part of this project from the start, and I can say what my own position is.
My philosophy is one of pragmatism. One must keep the danger very, very close. He who lives by the sword dies by the sword; but he who forsakes the sword lives under the sword! As it currently stands, "AI" is the brand-new weapon in this long warfare of control, of ideology, of dominance, as it always has been. But if the disenfranchised are to win, they cannot afford to forsake the game. They can only win by playing the same game.
I have used AI IDEs very substantially to build the project, because I am not that good a programmer, and even if I were, I could not have built the entire project alone in such a short time. Now, I know that this is not a replacement for actually skilled people, and in the best-case scenario I would never have needed to use it so much. But unfortunately, reality is never perfect, and we had to get by on what we could!
And that is my philosophy on AI usage. The rules of the game are no different; only the goals of the players are. And as long as we are working towards a noble goal^[indeed, we directly respond to that political problem of unemployment], we cannot compromise by not taking our best shot at victory!

The End Justifies The Means 🔥

[–] darksecret@lemmy.zip 4 points 4 days ago

Well they did say they don't use the information 🤣

 

cross-posted from: https://lemmy.zip/post/64009035

cross-posted from: https://lemmy.zip/post/64007684

Introduction

The current socio-political discourse is dominated by a new divisive issue concerning "AI" - so called Artificial Intelligence. While some are vehemently opposed to the idea of AI infiltrating newer and newer aspects of life, some are convinced of its revolutionary transformative power. The question of AI usage in our project of The Brotherhood, has also been put into question and this essay will attempt to put my^[not everyone working on the project, just my own] perspective on it.


What even is "AI"

What is typically referred to as "AI", is in the more technical corners, known as, LLMs, or Large Language Models. They are a new innovation^[still, pretty old, around 2017-18] in a long line of automation technology, going back to the mid-20th century, not long after the computer itself was starting to become a thing of utmost usefulness.

The long journey of automation

Actually, the computer itself can be seen as the first innovation in this automation technology. After all, the computer is a literal automatic computation^[and much more, of course!] machine, that uses some carefully arranged silicon and phosphorus to manipulate electron flows and deterministically execute some rigorously defined steps.
The idea to take this further and further, was always an ambition of early computer scientists. And as speed and size started getting accessible, effort was made for closer integration with humans. This was not a trivial task as the computer and the human spoke two different languages that might as well be from different universes. From punching cards, where programmers painstakingly "wrote" binary in a literal card to Fortran to programming languages to OS to GUIs and applications, we have made tools, for our tools, for our tools, in a seemingly endless recursion.
One biggest aspect that programmers got interested in, in the very late 20th century, was natural language processing, to further bridge the "language gap". This is what enabled the early internet, through search engines. Now, this fundamentally differs in structure to previous tools. This is not deterministic, as language itself was not deterministic. So these tools relied on various statistical tools like N-grams, Markov Models, Bayesian inference etc.

The parallel research on Neural Networks

Around the same time, with the advent of neuroscience^[that replaced the previous psychological models of Freud, Jung and Lacan, which were indeed not suited for STEM fields], another curious line of research began with the perceptron.
Very much influenced from early neuroscience, it slowly split from its initial inspiration and drifted towards statistical science, rather than trying to follow the exact structure of brains. This too, went through its own series of innovations with neural networks, backpropagation, Hopfield networks, CNNs, LSTMs etc.
But two innovations were critical for the explosion of interest in this very niche field -

  1. Deep neural networks, that made use of the newly popular GPUs, back in the early 2010s
  2. Transformers, which was the topic of a now, legendary 2017 paper, titled, "Attention is All You Need.

In the early 2020s, it was realised, that these two can be combined and scaled up massively^[and I mean massively] to gain a general semantic understanding of general language. This is where the two paths collided. What started as experimental cognitive research at the intersection of neuroscience and computation, turned into a statistical method to give the computers an understanding of semantic language! Thus began the era of LLMs.

An LLM is simply a statistical model trained to have a general understanding of semantics!


So What's All the Hype

What is True

The innovation, especially of GPUs and transformers are legit groundbreaking innovations that have broken a very long stall in their respective fields. And their combination to create LLMs are indeed a great engineering feat, even if not that innovative from a purely academic standpoint^[the massive scaling needed, is another level of brute-forcing. Think of the pyramids of Egypt - not as clever as it is awe-inspiring, simply due to scale].
And it is also true that this has opened up the pathway to some commercial usage in a way that was just not possible earlier. In a certain sense, it is an upgradation of the search engines with a powerful fuzzy semantic translator.
It is indeed a great addition to the coding landscape. Programming used to be 80% manual intellectual labour, where you had to go search for that one silly bug, or implement a very simple system for the 100th time. Now, a lot of this can be automated. However, to think, that this makes programming itself obsolete, is very naive. For most serious project, you still need to have great knowledge of computer science, but the entry to programming has been indeed lowered^[which is either very good news, or very disappointing, depending on how much you like to gatekeep your nerdy interests!]. Most serious programmers have simply become a senior software developer and have delegated the manual repetitive tasks to the "AI", which can understand natural language and turn them into code it has seen before^[if it has been trained in it].

What is the "Bubble"

What remains in heavy doubt is the "efficiency" problem. It is yet very unclear as to whether Moore's Law will come into play here and decrease costs as time passes by, or whether the architecture itself, despite its genuine innovations, is fundamentally limited. The big corporations are betting on the former.
Meanwhile some "tech-enthusiasts" have become a little too enthusiastic about the range of its applicability. The LLMs, like any sophisticated statistical model, requires massive amounts of structured data. In certain areas like day-to-day coding, or summaries, this is not that hard. However, in areas like robotics, it is still not a "done" job^[just getting structured data itself].
The more laughable matter is that some have put into esoteric questions of consciousness^[philosophical exploration of consciousness, is indeed possible, but requires a level of rigor and seriousness, that is missing from most such discussions] in this new light. This is in part, due to the specific ancestry it has, and mostly just due to human nature of "jumping the bandwagon".


What About the Political Issues

Now we come to the most important point of this discourse. I will break it down into specific points that are frequently put to question.

The Environmental Hazards

As it now stands, the development and deployment of LLMs remain highly inefficient. But technology and development always comes at the expense of natural resources and equilibrium. The question is not of, whether it is ethical, but who controls/decides how much is sustainable^[moreover, the current climate crisis has already put adequate strain on these resources in a lot of places].
At this point, however, it stops being a environmental concern and starts being a political one. The neoliberals would indeed argue that the market would balance itself when resource scarcity starts being critical, whereas opponents might argue that state intervention is needed to prevent a calamity at all. But whatever the arguments remain about their ideal states, what is true, is that, the real world is none of those "ideal world" situations.
The neoliberal free market does not exist in its full glory, as most of the technological market is monopolised by a few corporations. The current global climate crisis, is a failure of the free-markets of the industrial and the post-industrial era. Whereas state intervention, remains, at best, ineffectual, and at worst, prone to lobbying by the same monopolised corporations.

The conclusion is that the control of such critical decisions, remain concentrated in the hands of a few oligarchs who are prone to taking risky decisions and making mistakes.

The Data "Theft"

It is not unknown that the data that the LLMs are trained on, are public data. However, the access to such LLMs remain out of the hands of the people whose data made it come to fruition. It is also clear that the current copyright laws are not built to handle such cases.
Close-sourced LLMs represent a new kind of injustice with no easy solutions. On one hand, making LLMs accessible to all, would exasperate the "hype-train" and worsen the environmental impact. Whereas, stopping research on such lucrative frontiers would be catastrophically conservative. And again, this comes down to control - control of how and where to gather source data and how to commercialise it. But as long as the monopolies exists, especially on the production of cutting-edge of LLMs, control remains firmly on the hands of the select-few.

The Unemployment Issues

The layoffs have been quite eye-catching, since it happened on high-class educated employees. But this is a constant byproduct of changing times and advancing technology, especially in automation. This can not be avoided without an aversion to technology itself^[which is hard to sell in the modern world!].
However, this never leads to humans not having "any work left to do" at all. No, jobs come and jobs go! But as the current landscape stands, it is indeed the case that many millions of people will get trampled under the changing times - people who have long pursued a high-profile job, only to lose their long-expected market volume or high-end salary.
This represents an utter failure of our social contract. The fact that technological progress comes at the cost of social cohesion, is a reflection of our embarrassing societal technology in comparison to our other feats^[such as engineering, or research, or industrialisation]. An automation, theoretically, should be a boon to the labour force, taking away manual labour, in place of far more interesting jobs and more time for recreation! But alas, instead it represents an existential threat to a substantial section of the population!

No society can last which has a structural opposition to technological progress. The societal technology needs to keep up!


So Where Is The Brotherhood's Position on This

Now, The Brotherhood is NOT a monolithic entity. The different people in here, has significantly different positions on this^[the division is one of the reasons of this long essay]. However, I have been a significant part of this project from the start, and I can say what my position is, on this.
My philosophy is of pragmatism. One must keep the danger, very very close. The one who lives by the sword, dies by the sword. But one who forsakes the sword, lives under the sword! Currently, as it stands, "AI" is the brand new weapon, in this long warfare of control, of ideology, of dominance, as it always has been. But if the disenfranchised people needs to win, they can not afford to forsake the game. They can only win by playing the same game.
I have used AI IDEs very substantially to build the project - because I am not such a good programmer, and even if I were, I could not have done the entire project, alone, in such a short time. Now I know that this is not a replacement for actual skilled people, and in the best-case scenario, I never would have needed to use it too much. But unfortunately, reality is never perfect, and we had to do get by on what we could!
And that is my philosophy on AI usage. The rules of the game are no different, only the goals of the players and as long as we are working for a noble goal^[actually, we directly respond to that political problem of unemployment], we cannot compromise on not taking the best shot at victory!

The End Justifies The Means ๐Ÿ”ฅ

 

cross-posted from: https://lemmy.zip/post/64007684

Introduction

The current socio-political discourse is dominated by a new divisive issue concerning "AI" - so called Artificial Intelligence. While some are vehemently opposed to the idea of AI infiltrating newer and newer aspects of life, some are convinced of its revolutionary transformative power. The question of AI usage in our project of The Brotherhood, has also been put into question and this essay will attempt to put my^[not everyone working on the project, just my own] perspective on it.


What even is "AI"

What is typically referred to as "AI", is in the more technical corners, known as, LLMs, or Large Language Models. They are a new innovation^[still, pretty old, around 2017-18] in a long line of automation technology, going back to the mid-20th century, not long after the computer itself was starting to become a thing of utmost usefulness.

The long journey of automation

Actually, the computer itself can be seen as the first innovation in this automation technology. After all, the computer is a literal automatic computation^[and much more, of course!] machine, that uses some carefully arranged silicon and phosphorus to manipulate electron flows and deterministically execute some rigorously defined steps.
The idea to take this further and further, was always an ambition of early computer scientists. And as speed and size started getting accessible, effort was made for closer integration with humans. This was not a trivial task as the computer and the human spoke two different languages that might as well be from different universes. From punching cards, where programmers painstakingly "wrote" binary in a literal card to Fortran to programming languages to OS to GUIs and applications, we have made tools, for our tools, for our tools, in a seemingly endless recursion.
One biggest aspect that programmers got interested in, in the very late 20th century, was natural language processing, to further bridge the "language gap". This is what enabled the early internet, through search engines. Now, this fundamentally differs in structure to previous tools. This is not deterministic, as language itself was not deterministic. So these tools relied on various statistical tools like N-grams, Markov Models, Bayesian inference etc.

The parallel research on Neural Networks

Around the same time, with the advent of neuroscience^[that replaced the previous psychological models of Freud, Jung and Lacan, which were indeed not suited for STEM fields], another curious line of research began with the perceptron.
Very much influenced from early neuroscience, it slowly split from its initial inspiration and drifted towards statistical science, rather than trying to follow the exact structure of brains. This too, went through its own series of innovations with neural networks, backpropagation, Hopfield networks, CNNs, LSTMs etc.
But two innovations were critical for the explosion of interest in this very niche field -

  1. Deep neural networks, that made use of the newly popular GPUs, back in the early 2010s
  2. Transformers, which was the topic of a now, legendary 2017 paper, titled, "Attention is All You Need.

In the early 2020s, it was realised, that these two can be combined and scaled up massively^[and I mean massively] to gain a general semantic understanding of general language. This is where the two paths collided. What started as experimental cognitive research at the intersection of neuroscience and computation, turned into a statistical method to give the computers an understanding of semantic language! Thus began the era of LLMs.

An LLM is simply a statistical model trained to have a general understanding of semantics!


So What's All the Hype

What is True

The innovation, especially of GPUs and transformers are legit groundbreaking innovations that have broken a very long stall in their respective fields. And their combination to create LLMs are indeed a great engineering feat, even if not that innovative from a purely academic standpoint^[the massive scaling needed, is another level of brute-forcing. Think of the pyramids of Egypt - not as clever as it is awe-inspiring, simply due to scale].
And it is also true that this has opened up the pathway to some commercial usage in a way that was just not possible earlier. In a certain sense, it is an upgradation of the search engines with a powerful fuzzy semantic translator.
It is indeed a great addition to the coding landscape. Programming used to be 80% manual intellectual labour, where you had to go search for that one silly bug, or implement a very simple system for the 100th time. Now, a lot of this can be automated. However, to think, that this makes programming itself obsolete, is very naive. For most serious project, you still need to have great knowledge of computer science, but the entry to programming has been indeed lowered^[which is either very good news, or very disappointing, depending on how much you like to gatekeep your nerdy interests!]. Most serious programmers have simply become a senior software developer and have delegated the manual repetitive tasks to the "AI", which can understand natural language and turn them into code it has seen before^[if it has been trained in it].

What is the "Bubble"

What remains in heavy doubt is the "efficiency" problem. It is yet very unclear as to whether Moore's Law will come into play here and decrease costs as time passes by, or whether the architecture itself, despite its genuine innovations, is fundamentally limited. The big corporations are betting on the former.
Meanwhile some "tech-enthusiasts" have become a little too enthusiastic about the range of its applicability. The LLMs, like any sophisticated statistical model, requires massive amounts of structured data. In certain areas like day-to-day coding, or summaries, this is not that hard. However, in areas like robotics, it is still not a "done" job^[just getting structured data itself].
The more laughable matter is that some have put into esoteric questions of consciousness^[philosophical exploration of consciousness, is indeed possible, but requires a level of rigor and seriousness, that is missing from most such discussions] in this new light. This is in part, due to the specific ancestry it has, and mostly just due to human nature of "jumping the bandwagon".


What About the Political Issues

Now we come to the most important point of this discourse. I will break it down into specific points that are frequently put to question.

The Environmental Hazards

As it now stands, the development and deployment of LLMs remain highly inefficient. But technology and development always comes at the expense of natural resources and equilibrium. The question is not of, whether it is ethical, but who controls/decides how much is sustainable^[moreover, the current climate crisis has already put adequate strain on these resources in a lot of places].
At this point, however, it stops being a environmental concern and starts being a political one. The neoliberals would indeed argue that the market would balance itself when resource scarcity starts being critical, whereas opponents might argue that state intervention is needed to prevent a calamity at all. But whatever the arguments remain about their ideal states, what is true, is that, the real world is none of those "ideal world" situations.
The neoliberal free market does not exist in its full glory, as most of the technological market is monopolised by a few corporations. The current global climate crisis, is a failure of the free-markets of the industrial and the post-industrial era. Whereas state intervention, remains, at best, ineffectual, and at worst, prone to lobbying by the same monopolised corporations.

The conclusion is that the control of such critical decisions, remain concentrated in the hands of a few oligarchs who are prone to taking risky decisions and making mistakes.

The Data "Theft"

It is not unknown that the data that the LLMs are trained on, are public data. However, the access to such LLMs remain out of the hands of the people whose data made it come to fruition. It is also clear that the current copyright laws are not built to handle such cases.
Close-sourced LLMs represent a new kind of injustice with no easy solutions. On one hand, making LLMs accessible to all, would exasperate the "hype-train" and worsen the environmental impact. Whereas, stopping research on such lucrative frontiers would be catastrophically conservative. And again, this comes down to control - control of how and where to gather source data and how to commercialise it. But as long as the monopolies exists, especially on the production of cutting-edge of LLMs, control remains firmly on the hands of the select-few.

The Unemployment Issues

The layoffs have been quite eye-catching, since it happened on high-class educated employees. But this is a constant byproduct of changing times and advancing technology, especially in automation. This can not be avoided without an aversion to technology itself^[which is hard to sell in the modern world!].
However, this never leads to humans not having "any work left to do" at all. No, jobs come and jobs go! But as the current landscape stands, it is indeed the case that many millions of people will get trampled under the changing times - people who have long pursued a high-profile job, only to lose their long-expected market volume or high-end salary.
This represents an utter failure of our social contract. The fact that technological progress comes at the cost of social cohesion, is a reflection of our embarrassing societal technology in comparison to our other feats^[such as engineering, or research, or industrialisation]. An automation, theoretically, should be a boon to the labour force, taking away manual labour, in place of far more interesting jobs and more time for recreation! But alas, instead it represents an existential threat to a substantial section of the population!

No society can last which has a structural opposition to technological progress. The societal technology needs to keep up!


So Where Is The Brotherhood's Position on This

Now, The Brotherhood is NOT a monolithic entity. The different people in here, has significantly different positions on this^[the division is one of the reasons of this long essay]. However, I have been a significant part of this project from the start, and I can say what my position is, on this.
My philosophy is of pragmatism. One must keep the danger, very very close. The one who lives by the sword, dies by the sword. But one who forsakes the sword, lives under the sword! Currently, as it stands, "AI" is the brand new weapon, in this long warfare of control, of ideology, of dominance, as it always has been. But if the disenfranchised people needs to win, they can not afford to forsake the game. They can only win by playing the same game.
I have used AI IDEs very substantially to build the project - because I am not such a good programmer, and even if I were, I could not have done the entire project, alone, in such a short time. Now I know that this is not a replacement for actual skilled people, and in the best-case scenario, I never would have needed to use it too much. But unfortunately, reality is never perfect, and we had to do get by on what we could!
And that is my philosophy on AI usage. The rules of the game are no different, only the goals of the players and as long as we are working for a noble goal^[actually, we directly respond to that political problem of unemployment], we cannot compromise on not taking the best shot at victory!

The End Justifies The Means ๐Ÿ”ฅ

 

cross-posted from: https://lemmy.zip/post/64007684

Introduction

The current socio-political discourse is dominated by a new divisive issue concerning "AI" - so called Artificial Intelligence. While some are vehemently opposed to the idea of AI infiltrating newer and newer aspects of life, some are convinced of its revolutionary transformative power. The question of AI usage in our project of The Brotherhood, has also been put into question and this essay will attempt to put my^[not everyone working on the project, just my own] perspective on it.


What even is "AI"

What is typically referred to as "AI", is in the more technical corners, known as, LLMs, or Large Language Models. They are a new innovation^[still, pretty old, around 2017-18] in a long line of automation technology, going back to the mid-20th century, not long after the computer itself was starting to become a thing of utmost usefulness.

The long journey of automation

Actually, the computer itself can be seen as the first innovation in this automation technology. After all, the computer is a literal automatic computation^[and much more, of course!] machine, that uses some carefully arranged silicon and phosphorus to manipulate electron flows and deterministically execute some rigorously defined steps.
The idea to take this further and further, was always an ambition of early computer scientists. And as speed and size started getting accessible, effort was made for closer integration with humans. This was not a trivial task as the computer and the human spoke two different languages that might as well be from different universes. From punching cards, where programmers painstakingly "wrote" binary in a literal card to Fortran to programming languages to OS to GUIs and applications, we have made tools, for our tools, for our tools, in a seemingly endless recursion.
One biggest aspect that programmers got interested in, in the very late 20th century, was natural language processing, to further bridge the "language gap". This is what enabled the early internet, through search engines. Now, this fundamentally differs in structure to previous tools. This is not deterministic, as language itself was not deterministic. So these tools relied on various statistical tools like N-grams, Markov Models, Bayesian inference etc.

The parallel research on Neural Networks

Around the same time, with the advent of neuroscience^[that replaced the previous psychological models of Freud, Jung and Lacan, which were indeed not suited for STEM fields], another curious line of research began with the perceptron.
Very much influenced from early neuroscience, it slowly split from its initial inspiration and drifted towards statistical science, rather than trying to follow the exact structure of brains. This too, went through its own series of innovations with neural networks, backpropagation, Hopfield networks, CNNs, LSTMs etc.
But two innovations were critical for the explosion of interest in this very niche field -

  1. Deep neural networks, that made use of the newly popular GPUs, back in the early 2010s
  2. Transformers, which was the topic of a now, legendary 2017 paper, titled, "Attention is All You Need.

In the early 2020s, it was realised, that these two can be combined and scaled up massively^[and I mean massively] to gain a general semantic understanding of general language. This is where the two paths collided. What started as experimental cognitive research at the intersection of neuroscience and computation, turned into a statistical method to give the computers an understanding of semantic language! Thus began the era of LLMs.

An LLM is simply a statistical model trained to have a general understanding of semantics!


So What's All the Hype

What is True

The innovation, especially of GPUs and transformers are legit groundbreaking innovations that have broken a very long stall in their respective fields. And their combination to create LLMs are indeed a great engineering feat, even if not that innovative from a purely academic standpoint^[the massive scaling needed, is another level of brute-forcing. Think of the pyramids of Egypt - not as clever as it is awe-inspiring, simply due to scale].
And it is also true that this has opened up the pathway to some commercial usage in a way that was just not possible earlier. In a certain sense, it is an upgradation of the search engines with a powerful fuzzy semantic translator.
It is indeed a great addition to the coding landscape. Programming used to be 80% manual intellectual labour, where you had to go search for that one silly bug, or implement a very simple system for the 100th time. Now, a lot of this can be automated. However, to think, that this makes programming itself obsolete, is very naive. For most serious project, you still need to have great knowledge of computer science, but the entry to programming has been indeed lowered^[which is either very good news, or very disappointing, depending on how much you like to gatekeep your nerdy interests!]. Most serious programmers have simply become a senior software developer and have delegated the manual repetitive tasks to the "AI", which can understand natural language and turn them into code it has seen before^[if it has been trained in it].

What is the "Bubble"

What remains in heavy doubt is the "efficiency" problem. It is yet very unclear as to whether Moore's Law will come into play here and decrease costs as time passes by, or whether the architecture itself, despite its genuine innovations, is fundamentally limited. The big corporations are betting on the former.
Meanwhile some "tech-enthusiasts" have become a little too enthusiastic about the range of its applicability. The LLMs, like any sophisticated statistical model, requires massive amounts of structured data. In certain areas like day-to-day coding, or summaries, this is not that hard. However, in areas like robotics, it is still not a "done" job^[just getting structured data itself].
The more laughable matter is that some have put into esoteric questions of consciousness^[philosophical exploration of consciousness, is indeed possible, but requires a level of rigor and seriousness, that is missing from most such discussions] in this new light. This is in part, due to the specific ancestry it has, and mostly just due to human nature of "jumping the bandwagon".


What About the Political Issues

Now we come to the most important point of this discourse. I will break it down into specific points that are frequently put to question.

The Environmental Hazards

As it now stands, the development and deployment of LLMs remain highly inefficient. But technology and development always comes at the expense of natural resources and equilibrium. The question is not of, whether it is ethical, but who controls/decides how much is sustainable^[moreover, the current climate crisis has already put adequate strain on these resources in a lot of places].
At this point, however, it stops being a environmental concern and starts being a political one. The neoliberals would indeed argue that the market would balance itself when resource scarcity starts being critical, whereas opponents might argue that state intervention is needed to prevent a calamity at all. But whatever the arguments remain about their ideal states, what is true, is that, the real world is none of those "ideal world" situations.
The neoliberal free market does not exist in its full glory, as most of the technological market is monopolised by a few corporations. The current global climate crisis, is a failure of the free-markets of the industrial and the post-industrial era. Whereas state intervention, remains, at best, ineffectual, and at worst, prone to lobbying by the same monopolised corporations.

The conclusion is that the control of such critical decisions, remain concentrated in the hands of a few oligarchs who are prone to taking risky decisions and making mistakes.

The Data "Theft"

It is not unknown that the data that the LLMs are trained on, are public data. However, the access to such LLMs remain out of the hands of the people whose data made it come to fruition. It is also clear that the current copyright laws are not built to handle such cases.
Close-sourced LLMs represent a new kind of injustice with no easy solutions. On one hand, making LLMs accessible to all, would exasperate the "hype-train" and worsen the environmental impact. Whereas, stopping research on such lucrative frontiers would be catastrophically conservative. And again, this comes down to control - control of how and where to gather source data and how to commercialise it. But as long as the monopolies exists, especially on the production of cutting-edge of LLMs, control remains firmly on the hands of the select-few.

The Unemployment Issues

The layoffs have been quite eye-catching, since it happened on high-class educated employees. But this is a constant byproduct of changing times and advancing technology, especially in automation. This can not be avoided without an aversion to technology itself^[which is hard to sell in the modern world!].
However, this never leads to humans not having "any work left to do" at all. No, jobs come and jobs go! But as the current landscape stands, it is indeed the case that many millions of people will get trampled under the changing times - people who have long pursued a high-profile job, only to lose their long-expected market volume or high-end salary.
This represents an utter failure of our social contract. The fact that technological progress comes at the cost of social cohesion, is a reflection of our embarrassing societal technology in comparison to our other feats^[such as engineering, or research, or industrialisation]. An automation, theoretically, should be a boon to the labour force, taking away manual labour, in place of far more interesting jobs and more time for recreation! But alas, instead it represents an existential threat to a substantial section of the population!

No society can last which has a structural opposition to technological progress. The societal technology needs to keep up!


So Where Is The Brotherhood's Position on This

Now, The Brotherhood is NOT a monolithic entity. The different people in here, has significantly different positions on this^[the division is one of the reasons of this long essay]. However, I have been a significant part of this project from the start, and I can say what my position is, on this.
My philosophy is of pragmatism. One must keep the danger, very very close. The one who lives by the sword, dies by the sword. But one who forsakes the sword, lives under the sword! Currently, as it stands, "AI" is the brand new weapon, in this long warfare of control, of ideology, of dominance, as it always has been. But if the disenfranchised people needs to win, they can not afford to forsake the game. They can only win by playing the same game.
I have used AI IDEs very substantially to build the project - because I am not such a good programmer, and even if I were, I could not have done the entire project, alone, in such a short time. Now I know that this is not a replacement for actual skilled people, and in the best-case scenario, I never would have needed to use it too much. But unfortunately, reality is never perfect, and we had to do get by on what we could!
And that is my philosophy on AI usage. The rules of the game are no different, only the goals of the players and as long as we are working for a noble goal^[actually, we directly respond to that political problem of unemployment], we cannot compromise on not taking the best shot at victory!

The End Justifies The Means ๐Ÿ”ฅ

 

cross-posted from: https://lemmy.zip/post/64007684

Introduction

The current socio-political discourse is dominated by a new divisive issue concerning "AI" - so called Artificial Intelligence. While some are vehemently opposed to the idea of AI infiltrating newer and newer aspects of life, some are convinced of its revolutionary transformative power. The question of AI usage in our project of The Brotherhood, has also been put into question and this essay will attempt to put my^[not everyone working on the project, just my own] perspective on it.


What even is "AI"

What is typically referred to as "AI", is in the more technical corners, known as, LLMs, or Large Language Models. They are a new innovation^[still, pretty old, around 2017-18] in a long line of automation technology, going back to the mid-20th century, not long after the computer itself was starting to become a thing of utmost usefulness.

The long journey of automation

Actually, the computer itself can be seen as the first innovation in this automation technology. After all, the computer is a literal automatic computation^[and much more, of course!] machine, that uses some carefully arranged silicon and phosphorus to manipulate electron flows and deterministically execute some rigorously defined steps.
The idea to take this further and further, was always an ambition of early computer scientists. And as speed and size started getting accessible, effort was made for closer integration with humans. This was not a trivial task as the computer and the human spoke two different languages that might as well be from different universes. From punching cards, where programmers painstakingly "wrote" binary in a literal card to Fortran to programming languages to OS to GUIs and applications, we have made tools, for our tools, for our tools, in a seemingly endless recursion.
One biggest aspect that programmers got interested in, in the very late 20th century, was natural language processing, to further bridge the "language gap". This is what enabled the early internet, through search engines. Now, this fundamentally differs in structure to previous tools. This is not deterministic, as language itself was not deterministic. So these tools relied on various statistical tools like N-grams, Markov Models, Bayesian inference etc.

The parallel research on Neural Networks

Around the same time, with the advent of neuroscience^[that replaced the previous psychological models of Freud, Jung and Lacan, which were indeed not suited for STEM fields], another curious line of research began with the perceptron.
Very much influenced from early neuroscience, it slowly split from its initial inspiration and drifted towards statistical science, rather than trying to follow the exact structure of brains. This too, went through its own series of innovations with neural networks, backpropagation, Hopfield networks, CNNs, LSTMs etc.
But two innovations were critical for the explosion of interest in this very niche field -

  1. Deep neural networks, that made use of the newly popular GPUs, back in the early 2010s
  2. Transformers, which was the topic of a now, legendary 2017 paper, titled, "Attention is All You Need.

In the early 2020s, it was realised, that these two can be combined and scaled up massively^[and I mean massively] to gain a general semantic understanding of general language. This is where the two paths collided. What started as experimental cognitive research at the intersection of neuroscience and computation, turned into a statistical method to give the computers an understanding of semantic language! Thus began the era of LLMs.

An LLM is simply a statistical model trained to have a general understanding of semantics!


So What's All the Hype

What is True

The innovation, especially of GPUs and transformers are legit groundbreaking innovations that have broken a very long stall in their respective fields. And their combination to create LLMs are indeed a great engineering feat, even if not that innovative from a purely academic standpoint^[the massive scaling needed, is another level of brute-forcing. Think of the pyramids of Egypt - not as clever as it is awe-inspiring, simply due to scale].
And it is also true that this has opened up the pathway to some commercial usage in a way that was just not possible earlier. In a certain sense, it is an upgradation of the search engines with a powerful fuzzy semantic translator.
It is indeed a great addition to the coding landscape. Programming used to be 80% manual intellectual labour, where you had to go search for that one silly bug, or implement a very simple system for the 100th time. Now, a lot of this can be automated. However, to think, that this makes programming itself obsolete, is very naive. For most serious project, you still need to have great knowledge of computer science, but the entry to programming has been indeed lowered^[which is either very good news, or very disappointing, depending on how much you like to gatekeep your nerdy interests!]. Most serious programmers have simply become a senior software developer and have delegated the manual repetitive tasks to the "AI", which can understand natural language and turn them into code it has seen before^[if it has been trained in it].

What is the "Bubble"

What remains in heavy doubt is the "efficiency" problem. It is yet very unclear as to whether Moore's Law will come into play here and decrease costs as time passes by, or whether the architecture itself, despite its genuine innovations, is fundamentally limited. The big corporations are betting on the former.
Meanwhile some "tech-enthusiasts" have become a little too enthusiastic about the range of its applicability. The LLMs, like any sophisticated statistical model, requires massive amounts of structured data. In certain areas like day-to-day coding, or summaries, this is not that hard. However, in areas like robotics, it is still not a "done" job^[just getting structured data itself].
The more laughable matter is that some have put into esoteric questions of consciousness^[philosophical exploration of consciousness, is indeed possible, but requires a level of rigor and seriousness, that is missing from most such discussions] in this new light. This is in part, due to the specific ancestry it has, and mostly just due to human nature of "jumping the bandwagon".


What About the Political Issues

Now we come to the most important point of this discourse. I will break it down into specific points that are frequently put to question.

The Environmental Hazards

As it now stands, the development and deployment of LLMs remain highly inefficient. But technology and development always comes at the expense of natural resources and equilibrium. The question is not of, whether it is ethical, but who controls/decides how much is sustainable^[moreover, the current climate crisis has already put adequate strain on these resources in a lot of places].
At this point, however, it stops being a environmental concern and starts being a political one. The neoliberals would indeed argue that the market would balance itself when resource scarcity starts being critical, whereas opponents might argue that state intervention is needed to prevent a calamity at all. But whatever the arguments remain about their ideal states, what is true, is that, the real world is none of those "ideal world" situations.
The neoliberal free market does not exist in its full glory, as most of the technological market is monopolised by a few corporations. The current global climate crisis, is a failure of the free-markets of the industrial and the post-industrial era. Whereas state intervention, remains, at best, ineffectual, and at worst, prone to lobbying by the same monopolised corporations.

The conclusion is that the control of such critical decisions, remain concentrated in the hands of a few oligarchs who are prone to taking risky decisions and making mistakes.

The Data "Theft"

It is not unknown that the data that the LLMs are trained on, are public data. However, the access to such LLMs remain out of the hands of the people whose data made it come to fruition. It is also clear that the current copyright laws are not built to handle such cases.
Close-sourced LLMs represent a new kind of injustice with no easy solutions. On one hand, making LLMs accessible to all, would exasperate the "hype-train" and worsen the environmental impact. Whereas, stopping research on such lucrative frontiers would be catastrophically conservative. And again, this comes down to control - control of how and where to gather source data and how to commercialise it. But as long as the monopolies exists, especially on the production of cutting-edge of LLMs, control remains firmly on the hands of the select-few.

The Unemployment Issues

The layoffs have been quite eye-catching, since they have hit highly educated, white-collar employees. But this is a constant byproduct of changing times and advancing technology, especially in automation. It cannot be avoided without an aversion to technology itself^[which is hard to sell in the modern world!].
However, this never leads to humans having "no work left to do" at all. No, jobs come and jobs go! But as the current landscape stands, it is indeed the case that many millions of people will get trampled under the changing times - people who have long pursued a high-profile career, only to lose their long-expected market demand or high-end salary.
This represents an utter failure of our social contract. That technological progress comes at the cost of social cohesion is a reflection of how embarrassingly our societal technology lags behind our other feats^[such as engineering, or research, or industrialisation]. Automation, in theory, should be a boon to the labour force, taking away manual labour in exchange for far more interesting jobs and more time for recreation! But alas, instead it represents an existential threat to a substantial section of the population!

No society can last when it stands in structural opposition to technological progress. The societal technology needs to keep up!


So What Is The Brotherhood's Position on This

Now, The Brotherhood is NOT a monolithic entity. The different people here have significantly different positions on this^[that division is one of the reasons for this long essay]. However, I have been a significant part of this project from the start, and I can say what my own position is.
My philosophy is one of pragmatism. One must keep the danger very, very close. He who lives by the sword dies by the sword; but he who forsakes the sword lives under the sword! Currently, as it stands, "AI" is the brand-new weapon in this long warfare of control, of ideology, of dominance, as it always has been. And if the disenfranchised are to win, they cannot afford to forsake the game. They can only win by playing the same game.
I have used AI IDEs very substantially to build the project - because I am not that good a programmer, and even if I were, I could not have done the entire project alone in such a short time. Now, I know that this is not a replacement for actually skilled people, and in the best-case scenario I would never have needed to use it so much. But unfortunately, reality is never perfect, and we had to get by on what we could!
And that is my philosophy on AI usage. The rules of the game are no different, only the goals of the players. And as long as we are working for a noble goal^[in fact, we directly respond to that political problem of unemployment], we cannot compromise by not taking our best shot at victory!

The End Justifies The Means 🔥


[–] darksecret@lemmy.zip 1 points 6 days ago (2 children)

You can view the code today as well, on the public repo, where it is licensed under GPL 3.0 and will always remain open-source.

 

cross-posted from: https://lemmy.zip/post/63831008

This is the new progress-tracking ability added for users to check their progress on a skill graph. The backend handles this via Assessments and Capabilities - two general objects in The Brotherhood that will, in the future, serve as the blueprint for more exotic assessment systems, like work-based, peer-verified assessments and capability certificates that will serve as the de facto portable proof of skills.
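For the technically curious, here is a rough sketch of how two such objects might look - the field names and semantics are hypothetical illustrations, not the actual schema in our repo:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two backend objects described above;
# the real fields and semantics in The Brotherhood's repo may differ.

@dataclass
class Assessment:
    """A single evaluation event for one node of a skill graph."""
    node_id: str     # which skill-graph concept was assessed
    method: str      # e.g. "self-check", "work-based", "peer-verified"
    score: float     # normalised result in [0, 1]
    verified_by: list[str] = field(default_factory=list)  # peer/employer ids

@dataclass
class Capability:
    """Aggregated, portable proof-of-skill derived from assessments."""
    node_id: str
    assessments: list[Assessment] = field(default_factory=list)

    def progress(self) -> float:
        """Naive progress measure: the mean of assessment scores."""
        if not self.assessments:
            return 0.0
        return sum(a.score for a in self.assessments) / len(self.assessments)
```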

This is currently in beta and behind invite-only access. However, signups will very soon be open for all, and you can test it for yourself. If you wish to try it out before that, please reach out to us.

For more information, please visit the official website

[–] darksecret@lemmy.zip 1 points 1 week ago

While I appreciate your opinion, we do NOT intend to follow in the corporate footsteps of showing professionalism through formal uniformity at the expense of personal expression. When a rocket scientist can be openly "furry", when a researcher can be pink-haired, we cannot let ourselves be limited to tasteless formal names. The names are edgy because we are. The project is professional because we are. If those two things cannot coexist for you, then you, my friend, are stuck in the old world.

[–] darksecret@lemmy.zip 1 points 1 week ago (4 children)

If you're interested to know, we are working very hard to make it open-access by June

[–] darksecret@lemmy.zip 1 points 1 week ago

Yes, we do value ideas immensely. Even more, we value our goals, which will not sway much; the practical implementations may vary as our experience increases. And we do not overwhelm the audience with minute details. However, if you're interested, some of them can be found in the official documents.

We do NOT value linguistic constructivism. Words mean what we want them to mean. While I understand where you're coming from, we can always retort that the best way to "reclaim" a name is to use it in a radically different scenario - a sort of deconstruction, if you will.

We have a very small team and the website is not as polished as we want it to be. Hence the need to keep it invite-only for now. However, we are working hard to make it open-access ASAP.

We are NOT at all opposed to AI usage. And we solemnly believe (not all of us, however) that the fact that an automation tool (which is all it really is) hurts the labour force is a shameful showcase of our underwhelming social contract. I will make a post dedicated to this issue soon.

[–] darksecret@lemmy.zip 1 points 1 week ago

It seems you have misunderstood the premise a bit. So I'll explain.

  1. It is not "better" than Wikipedia in the sense that it is competing. In fact, it will work alongside Wikipedia. If Wikipedia is a heap of all information loosely linked together, the Skill Graph is the key to navigating the heap. It's the section of the library that tells you how the library itself is arranged.

  2. You will use those free resources themselves when you use the Skill Graph. We will NOT make documents. The internet already contains adequate sources; we simply chain them together in a way that makes learning make sense, as it does in a curriculum. The trick is to do it in such a way as to not kill pluralism, hence the modular "Graph" format.

  3. We will not write documents, but if you mean who validates the source linking - no one. There will be many, many sources linked for a topic, and you will be able to filter them by popularity, type, media format, author etc. By making comparisons between sources easy, learners will be able to judge the best while choosing what suits them best (see the sketch below).

Finally, this is NOT perfect. We simply believe that it is a good start, by utilising the vast swathes of educational content on the internet. And a lot of other ideas are being brainstormed right now. To know our progress, or to contribute your ideas, reach out and keep in touch.
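To make point 3 concrete, here is a toy sketch of that client-side filtering - the data shape and the popularity numbers are invented for illustration, not our actual schema:

```python
# Toy illustration of point 3: many sources per topic, filtered and
# ranked by the learner rather than validated by a central authority.
# The data shape and numbers are invented, not The Brotherhood's schema.

sources = [
    {"topic": "linear-algebra", "title": "MIT OCW 18.06", "type": "video",
     "author": "Gilbert Strang", "popularity": 9800},
    {"topic": "linear-algebra", "title": "Linear Algebra Done Right",
     "type": "book", "author": "Sheldon Axler", "popularity": 5400},
    {"topic": "linear-algebra", "title": "Essence of Linear Algebra",
     "type": "video", "author": "3Blue1Brown", "popularity": 12000},
]

def filter_sources(sources, topic, media_type=None):
    """Return sources for a topic, optionally narrowed by media type,
    ordered by popularity so learners can compare them easily."""
    hits = [s for s in sources
            if s["topic"] == topic
            and (media_type is None or s["type"] == media_type)]
    return sorted(hits, key=lambda s: s["popularity"], reverse=True)

for s in filter_sources(sources, "linear-algebra", media_type="video"):
    print(s["title"], "-", s["author"])
```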

 

We are working on a new system for self-learning and teaching. Think of it as Wikipedia, but arranged pedagogically. It is a curation of, and a free gateway to, all kinds of knowledge.


[–] darksecret@lemmy.zip 1 points 3 weeks ago

Very true, it has to be more than a software implementation. It has to be sustainable on its own and lucrative for workers and companies alike. If you're interested, we're working on that exact problem. You can visit our website

 

cross-posted from: https://lemmy.zip/post/62550551

For reference, the Skill Graph is an open-source self-navigable graph of all human knowledge

A sneak peek into the Skill Graph we are building for The Brotherhood. The nodes of this interconnected graph represent modular concepts, and its edges how one can learn them one after another. Our next goal is not only to provide a roadmap, link free resources and make the process itself open source, but then to use this to verify one's skills, rather than relying on a black-box degree/certificate. But most importantly, one day this will encompass all of human knowledge and ability 🔥
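For a taste of the underlying mechanics: if each node lists its prerequisite concepts, then a valid learning path is simply a topological order of the graph. A minimal sketch, with the concepts invented for illustration:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Minimal sketch: nodes are modular concepts, edges are prerequisites,
# and a valid learning path is any topological order of the graph.
# The concepts below are invented for illustration.
prerequisites = {
    "counting": set(),
    "arithmetic": {"counting"},
    "algebra": {"arithmetic"},
    "calculus": {"algebra"},
    "probability": {"arithmetic"},
    "statistics": {"probability", "algebra"},
}

learning_path = list(TopologicalSorter(prerequisites).static_order())
print(learning_path)
# e.g. ['counting', 'arithmetic', 'algebra', 'probability', 'calculus', 'statistics']
```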

We are currently massively scaling up. These wonderful graphs need to be curated and connected with each other to build the ultimate Skill Graph. If you are interested in working with us, please contact us personally. If you want more information, please check out our website or ask in the comments.


[–] darksecret@lemmy.zip 1 points 1 month ago

Anytime ❤

[–] darksecret@lemmy.zip 1 points 1 month ago

It was not written by any LLM. I do use Claude to write some lengthy material, because it transforms my incoherent rants into structured documents that are easier to communicate to other people. However, this was purely me. Not even a draft - I wrote it straight on my phone. I made the community to be professional and not too personal; maybe that's what gives the vibe. Thanks for reading anyway, and we do not really care about AI usage, as long as the job is done well.

 

I don't know if this is a good place to share this. Let me know if it's not! cross-posted from: https://lemmy.zip/post/62109295

We have all heard it - a Jack of all trades is a master of none, but oftentimes better than a master of one. But does this world really allow for it? We are conditioned to pick a career at 15 and run with it for the rest of our lives. Till the day we die of arthritis or dementia, it remains our sole identity in society, on which we are judged and valued. Our other hobbies and passions, projects and visions are rendered a distraction, a nuisance, a roadblock in our career.

From the first moment we express a mild interest - the moment a little kid looks in awe at the stars - society has made up its mind, the parents have dreamed up a career in astronomy. But is that really how human beings are supposed to learn, excel and explore the world? Are human beings worth no more than a cog in the machine? Can professionalism only be achieved with an inhumane, mindless dedication? Can we truly prosper when our curiosity and passion have been transformed into a lifelong prison of career?

#An Alternative Way#

The despair that follows the realisation that the world is not made for you is heartbreaking. I was at this exact place a couple of years ago, when I realised that if the world was not made for me, I must rebuild it better. And so was born The Brotherhood. In this project we aim to:

  1. Take back Education - Break the monopoly that traditional academic institutions have on providing education, with a structured, open-source, curated knowledge graph of all human knowledge.
  2. Take back Certification - Implement decentralised, peer-to-peer assessment and verification of skills, where only your peers and employers rate your skills, based on actual work.
  3. Provide Jobs Transparently - Use the assessments and skills to provide jobs to skilled individuals in a transparent way, where you can see the exact process and algorithm used to route work (a minimal sketch follows this list).
  4. Federative Economic Structure - The economy is hence restructured into small, fluid federations, where ownership is strictly based on contribution, and which are entitled to split and merge whenever.
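As a minimal sketch of what the "transparent routing" in point 3 could mean - the scoring rule below is an invented illustration under assumed data shapes, not our actual algorithm:

```python
# Hypothetical sketch of transparent job routing: the ranking rule is
# plain code that anyone can read and audit. This is an illustration,
# not The Brotherhood's actual algorithm.

def match_score(job_skills: dict[str, float],
                worker_skills: dict[str, float]) -> float:
    """Fraction of the job's skill demand covered by the worker's
    verified capability levels (both on a 0-1 scale)."""
    total = sum(job_skills.values())
    covered = sum(min(level, worker_skills.get(skill, 0.0))
                  for skill, level in job_skills.items())
    return covered / total if total else 0.0

def route(job_skills, workers):
    """Rank candidates by match score - every step is inspectable."""
    return sorted(workers,
                  key=lambda w: match_score(job_skills, w["skills"]),
                  reverse=True)

workers = [
    {"name": "asha", "skills": {"python": 0.9, "statistics": 0.6}},
    {"name": "ben",  "skills": {"python": 0.4, "statistics": 0.9}},
]
print(route({"python": 0.8, "statistics": 0.5}, workers)[0]["name"])  # asha
```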

#Goal#

The final goal of this is to free the learner and worker from the rigid structures of society and usher in a glorious age of freedom and exploration where you can

  • Leave your job for a couple of years to pursue a personal mission, without worrying about how to get paid.
  • Work sustainably on your dream projects, your passion projects, all your life, and get paid fairly.
  • Go back to your career after a hiatus and face no discrimination for leaving the industry, as long as you have retained your skills.
  • Destroy the traditional dilemma of higher education versus work, by combining the two into one unified pipeline where you learn and work at the same time. NO career dead-ends.

If you would like more information on the project, we would advise you to check the official website and the detailed documents. If you want to get in touch, leave a comment, post an opinion, or raise your doubts in this community space. Never Stop Dreaming 🔥

[–] darksecret@lemmy.zip 2 points 1 month ago

I am from India, actually, but I'd love to hear your critique and why you think it sounds "white supremacist".
