Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Rats have reached the "put up stickers to proselytize" stage of their weird religion
https://www.lesswrong.com/posts/SvtKronRNrw9AxXwa/clement-l-s-shortform?commentId=ZifQXkhxo5dJvrL3Q
not sure if this is entirely ignorable as a tactic or if the counter-tactic is to post similar stickers but with references/QR codes to classic shock sites.
I hate LLMs so much. Now, every time I read student writing, I have to wonder if it's "normal overwrought" or "LLM bullshit." You can make educated guesses, but the reasoning behind this is really no better than what the LLM does with tokens (on top of any internalized biases I have), so of course I don't say anything (unless there is a guaranteed giveaway, like "as a language model").
No one describes their algorithm as "efficiently doing [intermediate step]" unless they're describing it to a general, non-technical audience
what a coincidence
and yet it keeps appearing in my students' writing. It's exhausting.
Edit: I really can't overemphasize how exhausting it is. Students will send you a direct message in MS Teams where they obviously used an LLM. We used to get
my algorithm checks if an array is already sorted by going through it one by one and seeing if every element is smaller than the next element
which is non-technical and could use a pass, but is succinct, clear, and correct. Now, we get^1^
In order to determine if an array is sorted, we must first iterate through the array. In order to iterate through the array, we create a looping variable `i` initialized to `0`. At each step of the loop, we check if `i` is less than `n - 1`. If so, we then check if the element at index `i` is less than or equal to the element at index `i + 1`. If not, we output `False`. Otherwise, we increment `i` and repeat. If the loop finishes successfully, we output `True`.
and I'm fucking tired. Like, use your own fucking voice, please! I want to hear your voice in your writing. PLEASE.
1: Made up the example out of whole cloth because I haven't determined if there are any LLMs I can use ethically. It gets the point across, but I suspect it's only half the length of what ChatGPT would output.
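For what it's worth, the check both versions describe really is only a couple of lines. A quick Python sketch of the same idea (mine, not something any student or LLM actually wrote):

```python
def is_sorted(arr):
    """Return True if arr is already in non-decreasing order."""
    # Compare each element with its neighbour, as both versions above describe.
    return all(arr[i] <= arr[i + 1] for i in range(len(arr) - 1))

print(is_sorted([1, 2, 2, 5]))  # True
print(is_sorted([3, 1, 2]))     # False
```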
My sympathies.
Read somewhere that the practice of defending one's thesis was established because buying a thesis was such a common practice. Scaling that up for every single text is of course utterly impractical.
I had a recent conversation with someone who was convinced that machines learn when they regurgitate text, because "that is what humans do". My counterargument was that if regurgitation is learning then every student who crammed, regurgitated and forgot, must have learnt much more than anyone thought. I didn't get any reply, so I must assume that by reading my reply and creating a version of it in their head they immediately understood the errors of their ways.
I had a recent conversation with someone who was convinced that machines learn when they regurgitate text, because “that is what humans do”.
But we know the tech behind these models, right? They don't change their weights when they produce output, right? You could have a discussion about whether updating the weights counts as learning, but it doesn't even do that, right? (Feeding the questions back into the dataset used to train them is a different mechanic.)
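To make the distinction concrete, here's a minimal PyTorch sketch (my own illustration, not anything from the thread): producing output is just a forward pass and leaves the weights alone, while learning needs a backward pass and an optimizer step that actually changes them.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)              # stand-in for "the model"
x = torch.randn(1, 8)
before = model.weight.clone()

# "Producing output": forward pass only, weights untouched.
with torch.no_grad():
    output = model(x)
assert torch.equal(model.weight, before)      # nothing was learned

# Learning: backward pass + optimizer step, which changes the weights.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = nn.functional.mse_loss(model(x), torch.tensor([[1.0, 0.0]]))
loss.backward()
optimizer.step()
assert not torch.equal(model.weight, before)  # now the weights have moved
```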
That's true, and that's one way to approach the topic.
I generally focus on humans being more complex than the caricature we need to be reduced to in order for the argument to appear plausible. Having some humanities training comes in handy because the prompt fans very rarely do.
i got quoted as an ai authority, talking about elon's rational boys https://www.dailydot.com/news/elon-musk-doge-coup-engineer-grant-democracy/
AI alignment is literally a bunch of amateur philosophers telling each other scary stories about The Terminator around a campfire
I love you, David.
If Jason Wilson calls you for a quote, you give him your best.
If only the Supreme Court had as much of a spine as the Romanian Constitutional Court
A law prohibiting driving after consuming substances was declared unconstitutional. Of course, the anti-drug agency bypassed the courts and parliament to pass it anyway.
See also: Constitutional Court cancels election result after Russian interference
I don't know if this is the right place to ask, but a friend from the field is wondering if there are any examples of good AI companies out there? With AI not meaning LLM companies. Thanks!
sounds a bit like an xy question imo, and a good answer with examples would depend on the y part of the question: whatever it is that (if my guess is right) your friend is actually looking to know/find
“AI” is branding, a marketing thing that a cadaverous swarm of ghouls got behind in the upswing of the slop wave (you can trace this by checking the popularity of the term in the months after deepdream), a banner with which to claim to be doing something new, a “new handle” to use to try to anchor anew in the imaginations of the many people who were (by normal and natural humanity) not yet aware of all the theft and exploitation. this was not by accident
there are a fair few good machine learning systems and companies out there (and by dint of hype and market forces, some end up sticking the “AI” label on their products, because that’s just what this deeply fucked capitalist market incentivises). as other posters have said, medical technology has seen some good uses, there’s things like recommender[0] and mass-analysis system improvements, and I’ve seen the same in process environments[1]. there’s even a lot of “quiet and useful” forms of this that have been getting added to many daily-use systems and products all around us: reasonably good text extractors as a baseline feature in pdf and image viewers, subject matchers to find pets and friends in photos, that sort of thing. but those don’t get the headlines or the silly valuation insanity that much of the industry is in the midst of
[0] - not always blanket good, there’s lots of critique possible here
[1] - things like production lines that can use correlative prediction for checking on likely faults
There are companies doing "cool-sounding" things with AI like Waymo. "Good" would require more definition.
The only thing that comes to mind is medical applications, drug research, etc. But that might just be a skewed perspective on my end because I know literally nothing about that industry or how AI technology is deployed there. I've just read research has been assisted by those tools and that seems, at least on the surface level, like a good thing.
apparent list of forbidden words in nsf grants
https://elk.zone/mathstodon.xyz/@sc_griffith/113947483650679740
Apparently "Trauma" is banned. That's going to be a problem.
This is what happens when you give power to bigoted morons.
Note: They're all problems. Just "Trauma" is kind of extra-important because of its use as a medical term.
Trauma surgery, Barotrauma, Traumatic Brain Injury, Penetrating Trauma, Blunt Trauma, Abdominal Trauma, Polytrauma, Etc.
Victim and Unjust are also there, which lawyers prob love to never be able to use.
I don't think "victim" is really a word that's even used especially much in "woke" (for lack of a better word) writing anyway. Hell, even for things like sexual violence, "survivor" is generally the preferred nomenclature, specifically because many people feel that "victim" reduces the person's agency.
It's the rightoid chuds who keep accusing the "wokes" of performative victimhood and a victim mentality, so I suppose that's why they somehow project and assume that "victim" is a particularly common word in left-wing vocabulary.
Good point, had not even thought of that. Shows how bad they are at understanding the people they are against. Reminds me of how, a while back, they went after the military for actually reading the 'woke' literature. Only the military was doing it explicitly so they would understand their enemies, so they could stop them.
I’m not sure lawyers file for many NIH grants, but "victim" probably comes up in medical/science research. Pathology would be one possibility.
A quick PubMed search finds such NIH-supported research as:
"In 2005 the genome of the 1918 influenza virus was completely determined by sequencing fragments of viral RNA preserved in autopsy tissues of 1918 victims"
Insights on influenza pathogenesis from the grave. 2011, Virus Research
"death of the child victim"
Characteristics, Classification, and Prevention of Child Maltreatment Fatalities. 2017 Military Medicine
Etc.
I was already assuming the pc word list would spread to other subjects.
"De colonized" is also on there, that will give some interesting problems when automated filters for this hit Dutch texts (De means the).
E: there are so many other words on there like victim, and unjust, and equity, this will cause so many dumb problems. And of course if you go on the first definition of political correct ('you must express the party line on certain ideas or be punished') they created their own PC culture. (I know pointing out hypocrisy does nothing, but it amuses me for now).
"Gender Neural”
That typo is probably going to screw a lot of Neuroscience grants just because it'll match on some dumb regex.
Also, apparently Hispanic and Latino people don't exist?
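A toy illustration of that failure mode, assuming (and this is pure guesswork about how any such filter actually works) that the screening is just a dumb word-level regex built from the list:

```python
import re

# Hypothetical entries from the list, including the "Gender Neural" typo.
flagged_phrases = ["Gender Neural", "victim", "equity"]

# A dumb filter might split multi-word entries into individual words and OR
# them together, which turns plain "neural" into a flagged token.
words = sorted({w.lower() for phrase in flagged_phrases for w in phrase.split()})
pattern = re.compile(r"\b(" + "|".join(map(re.escape, words)) + r")\b", re.IGNORECASE)

abstracts = [
    "We model neural circuits in the auditory cortex.",           # neuroscience
    "Rapid victim identification in mass-casualty triage.",       # emergency medicine
    "Pricing of equity derivatives under stochastic volatility.", # finance
]

for text in abstracts:
    hits = pattern.findall(text)
    print("FLAGGED" if hits else "ok     ", text, hits)
```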
at long last, we have found genai use case
result: decisive chinese cultural victory
+6 culture generation for each person gooning under a portrait of xi jinping. china's borders will expand quickly
ah yes, content from the well-known community mod Cursid Meier
He who controls the goons controls the universe. No wait that is only in EVE online.
courtesy of 404media: fuck almighty it’s all my nightmares all at once including the one where an amalgamation of the most aggressively mediocre VPs I’ve ever worked for replaces everything with AI and nobody stops them because tech is fucked and the horrors have already been normalized
“Both what I’ve seen, and what the administration sees, is you all are one of the most respected technology groups in the federal government,” Shedd told TTS workers. “You guys have been doing this far longer than I've been even aware that your group exists.”
(emphasis mine)
Well, maybe start acting like it, champ.
Minor note, but Musk wears that jacket everywhere, even at a suit-and-tie dinner with Trump. I don't get how Trump stands him (assuming I'm right that he wants to be seen as a certain classy, high-society type). Looking at the picture, he might be having second thoughts all the time.
I bet that jacket is some super-nerdy thing that William Gibson once mentioned in a book
@gerikson He probably bought it BECAUSE Gibson wrote about it in a novel and he thinks it makes him look cool and special.
Anyone matched the list of names of the dinguses currently wrecking US agencies from the inside with known LW or HN posters?