[-] CodeInvasion@sh.itjust.works 16 points 3 weeks ago

Coming from several people who work with SpaceX, there is a dedicated group of people that exist to distract Elon from all vital SpaceX functions.

[-] CodeInvasion@sh.itjust.works 13 points 2 months ago

This is truly a terrible accident. Given the flight tracking data and the cold, winter weather at the time, structural icing is likely to have caused the crash.

Ice will increase an aircraft’s stall speed, and especially when an aircraft is flown with autopilot on in icing conditions, the autopilot pitch trim can end up being set to the limits of the aircraft without the pilots ever knowing.

Eventually the icing becomes so severe that the stall speed of the ice-laden wing and elevator exceeds the current cruising speed, resulting in an aerodynamic stall, which if not immediately corrected with the right control inputs will develop into a spin.

The spin shown in several videos is a terrifying flat spin. Flat spins develop from normal spins after just a few rotations. It’s very sad and unfortunate that we can hear both engines producing power while the plane is in a flat spin toward the ground. The first thing to do when a spin is encountered is to eliminate all sources of power, because engine power will aggravate a spin into a flat spin.

Once a flat spin is encountered, recovery from that condition is not guaranteed, especially in multi-engine aircraft where the outboard engines create a lot of rotational inertia.

[-] CodeInvasion@sh.itjust.works 211 points 4 months ago

Valve is a unique company with no traditional hierarchy. In business school, I read a very interesting Harvard Business Review article on the subject. Unfortunately it’s locked behind a paywall, but this is Google AI’s summary of the article which I confirm to be true from what I remember:

According to a Harvard Business Review article from 2013, Valve, the gaming company that created Half Life and Portal, has a unique organizational structure that includes a flat management system called "Flatland". This structure eliminates traditional hierarchies and bosses, allowing employees to choose their own projects and have autonomy. Other features of Valve's structure include: 

  • Self-allocated time: Employees have complete control over how they allocate their time 
  • No managers: There is no managerial oversight 
  • Fluid structure: Desks have wheels so employees can easily move between teams, or "cabals" 
  • Peer-based performance reviews: Employees evaluate each other's performance and stack rank them 
  • Hiring: Valve has a unique hiring process that supports recruiting people with a variety of skills
[-] CodeInvasion@sh.itjust.works 37 points 4 months ago

Someone did the math and realized we would need a 130% tariff on all goods to replace current income tax revenue.

People’s number one concern is inflation. If that tariff were enacted, we would see 100% inflation overnight!
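A back-of-envelope sketch of why the required rate balloons: a tariff shrinks its own base, because imports fall as they get more expensive. All numbers below are hypothetical round figures for illustration, not the actual calculation referenced above.

```python
def required_tariff_rate(target_revenue, import_base, demand_elasticity):
    """Toy back-of-envelope: solve for the tariff rate that raises
    `target_revenue` when higher tariffs shrink the import base.
    Uses simple fixed-point iteration; all inputs are hypothetical."""
    rate = target_revenue / import_base  # naive first guess, static base
    for _ in range(100):
        # Higher rates suppress imports, shrinking the taxable base.
        shrunk_base = import_base * max(0.1, 1 - demand_elasticity * rate)
        rate = target_revenue / shrunk_base
    return rate

# Hypothetical: $2.2T of income tax revenue, $3.0T of imports, modest elasticity.
print(round(required_tariff_rate(2.2, 3.0, 0.3), 2))  # → about 1.09, i.e. a ~109% tariff
```

The naive static answer (2.2 / 3.0 ≈ 73%) understates it; once the shrinking base is accounted for, the required rate climbs well past 100%, consistent with estimates like the one above.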

[-] CodeInvasion@sh.itjust.works 19 points 5 months ago

You do realize that everything posted on the Fediverse is open and publicly available? It’s not locked behind some API or controlled by any one company or entity.

Fediverse is the Wikipedia of encyclopedias and any researcher or engineer, including myself, can and will use Lemmy data to create AI datasets with absolutely no restrictions.

[-] CodeInvasion@sh.itjust.works 37 points 5 months ago

It took Hawking minutes to create some responses. Without the use of his hand due to his disease, he relied on the twitch of a few facial muscles to select from a list of available words.

As funny as it is, that interview, like any interview with Hawking, contains pre-drafted responses from Hawking and follows a script.

But the small facial movements showing his emotion still showed Hawking had fun doing it.

[-] CodeInvasion@sh.itjust.works 47 points 5 months ago* (last edited 5 months ago)

I am an LLM researcher at MIT, and hopefully this will help.

As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word^+^, called tokens, with some randomness injected so the output isn’t exactly the same for any given prompt.
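To make that concrete, here is a toy sketch (not any real model’s code) of how the next token is sampled from the model’s scores, with a temperature parameter supplying the injected randomness:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick the next token index from raw model scores ("logits").

    Temperature injects randomness: higher values flatten the
    distribution, so the same prompt can yield different outputs."""
    # Softmax with temperature: convert scores into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token index according to those probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy scores for a 4-token vocabulary; token 0 is most likely but not guaranteed.
print(sample_next_token([2.0, 1.0, 0.5, -1.0]))
```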

The probability of the next word comes from what was in the model’s training data, in combination with a very complex mathematical method to compute the impact of all previous words with every other previous word and with the new predicted word, called self-attention, but you can think of this like a computed relatedness factor.
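A stripped-down, single-head sketch of that relatedness computation (scaled dot-product attention, with toy 2-dimensional embeddings instead of real model weights):

```python
import math

def attention_scores(query, keys):
    """Scaled dot-product "relatedness" of one new token (the query)
    against every previous token (the keys), normalized with softmax.
    This is the core of self-attention, shown for a single head."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three previous tokens as toy 2-d embeddings; weights sum to 1,
# and the key most similar to the query gets the most weight.
weights = attention_scores([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(weights)
```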

This relatedness factor is very computationally expensive, and the cost grows quadratically with the number of tokens, so models are limited in how many previous words can be used to compute relatedness. This limitation is called the Context Window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So literally, the model builds entire responses one word at a time from left to right.
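That loop can be sketched in a few lines. The `predict_next` function here is a trivial stand-in for a real LLM forward pass:

```python
def generate(prompt_tokens, predict_next, stop_token, max_len=50):
    """Autoregressive decoding: repeatedly predict one token and append
    it to the sequence, until the model emits a special stop token."""
    tokens = list(prompt_tokens)
    for _ in range(max_len):
        nxt = predict_next(tokens)  # stand-in for a real model call
        if nxt == stop_token:
            break
        tokens.append(nxt)
    return tokens

# Toy "model": counts upward and emits the stop token after 5.
out = generate([1, 2], lambda ts: ts[-1] + 1 if ts[-1] < 5 else "<eos>", "<eos>")
print(out)  # → [1, 2, 3, 4, 5]
```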

Because all future words are predicated on the previously stated words in either the prompt or subsequent generated words, it becomes impossible to apply even the most basic logical concepts, unless all the components required are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible sounding statement. Essentially, the model has been faking language understanding so much that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer to be correct.

---

^+^more specifically these words are tokens which usually contain some smaller part of a word. For instance, understand and able would be represented as two tokens that when put together would become the word understandable.
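A toy greedy longest-match tokenizer (a simplified stand-in for the BPE-style tokenizers real models use, with a made-up four-entry vocabulary) splits the word exactly that way:

```python
def toy_tokenize(word, vocab):
    """Greedy longest-match subword tokenization: at each position,
    take the longest vocabulary entry that matches, falling back to
    single characters. Real BPE tokenizers are more sophisticated."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: emit it alone
            i += 1
    return tokens

vocab = {"understand", "able", "under", "stand"}
print(toy_tokenize("understandable", vocab))  # → ['understand', 'able']
```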

[-] CodeInvasion@sh.itjust.works 41 points 8 months ago

AFAIK, there’s nothing stopping any company from scraping Lemmy either. The whole point of Reddit limiting API usage was so they could make money like this.

Outside of morals, there is nothing to stop anybody from training on data from Lemmy just like there’s nothing stopping me from using Wikipedia. Most conferences nowadays require a paragraph on ethics in the submission, but I and many of my colleagues would have no qualms saying we scraped our data from open source internet forums and blogs.

46

The aircraft’s last known position and speed show it climbing with decreasing speed. Based on the small loops shown, this was likely a training flight or proficiency check. It can be assumed the aircraft was placed into an intentional stall for training or a VMC demo, but quickly departed controlled flight for an unknown reason. It was very windy in Massachusetts (up to 50 mph at altitude) and wind shear may have also been a factor.

According to online aviation blogs, those who knew the pilots say that two of the fatally injured occupants were experienced senior instructors.

https://www.flightaware.com/live/flight/N7345R

[-] CodeInvasion@sh.itjust.works 17 points 1 year ago

This is done by combining a Diffusion model with ControlNet interface. As long as you have a decently modern Nvidia GPU and familiarity with Python and Pytorch it's relatively simple to create your own model.

The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf

I implemented this paper back in March. It’s as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the authors created a new model architecture that can better control the output of a diffusion model.
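The core trick is zero-initialized layers: the paper attaches a trainable copy of the network to the frozen pretrained model through convolutions whose weights start at zero, so training begins from exactly the pretrained behavior. This is a drastically simplified plain-Python sketch of that idea, not the paper’s actual code (which uses zero-initialized 1x1 convolutions in PyTorch):

```python
class ZeroLinear:
    """A layer whose weights start at zero, so at initialization it
    contributes nothing; the control branch fades in during training."""
    def __init__(self, size):
        self.weights = [0.0] * size  # updated later by gradient descent

    def __call__(self, features):
        return [w * f for w, f in zip(self.weights, features)]

def controlled_block(frozen_block, zero_layer, x, control):
    # Frozen pretrained path plus a trainable, zero-initialized branch.
    base = frozen_block(x)
    extra = zero_layer(control)
    return [b + e for b, e in zip(base, extra)]

frozen = lambda xs: [2 * v for v in xs]  # stand-in for a pretrained block
zl = ZeroLinear(3)
out = controlled_block(frozen, zl, [1.0, 2.0, 3.0], [9.0, 9.0, 9.0])
print(out)  # → [2.0, 4.0, 6.0]: at init the control input is inert
```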

[-] CodeInvasion@sh.itjust.works 12 points 1 year ago

I am a satellite software engineer turned program manager. This is not unexpected in the current environment; however, the conditions that created that environment are abnormal.

This solar cycle is much stronger than past cycles. I'm on mobile, so I can't get a good screenshot, but you can go here to see this cycle and the last cycle, as well as an overlay of a normal cycle https://www.swpc.noaa.gov/products/solar-cycle-progression

As solar flux increases, the atmosphere expands considerably, causing more drag than predicted. During periods of solar minimum, satellites can remain in a very low orbit with minimal station keeping. However, at normal levels of solar maximum, 5 year orbits can easily degrade to 1 year orbits. Forecasters say we are still a year away from solar maximum, and flux is already higher than last cycle's all time high (which was also an anomalously strong cycle). So it will get worse before it gets better.
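A toy illustration of the flux-to-drag relationship (not a real atmosphere model; the sensitivity constant and flux values are made up for illustration): thermospheric density at a fixed altitude rises roughly exponentially with solar activity, so drag, and with it the orbit decay rate, does too.

```python
import math

def decay_rate_ratio(flux_low, flux_high, scale=60.0):
    """How much faster an orbit decays at high solar flux vs. low,
    under a toy exponential density model. `scale` is a made-up
    sensitivity constant chosen purely for illustration."""
    return math.exp((flux_high - flux_low) / scale)

# Hypothetical F10.7 flux values: quiet sun (~70) vs. a strong maximum (~200).
print(round(decay_rate_ratio(70.0, 200.0), 1))  # drag, and decay rate, roughly 8-9x higher
```

That kind of multiplier is why a "5 year" orbit at solar minimum can shrink to a 1 year orbit at a strong maximum.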

TLDR: Satellites are falling out of the sky because the sun is angy

[-] CodeInvasion@sh.itjust.works 24 points 1 year ago

Small aircraft have a carbon equivalent to large cars. My plane is from 1961 and has a fuel economy of 15mpg as the crow flies (arguably closer to 25mpg because of straight line measurements versus winding roads that can almost double the distance), seats 4 people comfortably, and flies at 160 mph.
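The arithmetic behind that "arguably closer to 25mpg" claim, with hypothetical round numbers (a 100-mile straight-line trip and a 1.8x road-winding factor are assumptions, not measured figures):

```python
# Toy comparison: a plane flies the great-circle distance,
# a car drives the longer winding-road distance.
trip_straight_miles = 100.0   # hypothetical great-circle distance
road_factor = 1.8             # roads can be nearly 2x the straight line
plane_mpg = 15.0              # fuel burn measured over straight-line miles
car_mpg = 25.0

plane_gallons = trip_straight_miles / plane_mpg
car_gallons = (trip_straight_miles * road_factor) / car_mpg
print(round(plane_gallons, 1), round(car_gallons, 1))  # → 6.7 7.2
```

Under those assumptions the plane actually burns slightly less fuel for the trip, which is the sense in which its effective economy is closer to the car’s.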

[-] CodeInvasion@sh.itjust.works 26 points 1 year ago

The only upside I can think of is they'd actually start caring about the planet instead of thinking they'll be dead in 100 years anyway.

view more: next ›

CodeInvasion

joined 1 year ago