Curious: does Montreal allow "employment at will" like most US states do? If it does, I can't imagine a Ubisoft contract not including it. The article definitely makes it sound like a termination for cause, but what's written on his termination paperwork may be entirely different.
AI has definitely made a mark on my industry (software development) in the past 6 months, but as far as I can see it hasn't actually replaced anybody; it's just the most efficient tool for "doing the coding" now. In my end of software development, "doing the coding" might be 20% of the job; the more important other 80% is defining what code needs to be written.
2 years ago, "doing the coding" was mostly searching Google for examples of what you needed on Stack Exchange and similar places.
30 years ago, "doing the coding" was building a library of reference books and code that you could "lift and shift" into whatever you were doing, and the code in shipping products was maybe 10% as complex as it is today (counting all the libraries and packages it was/is built with) - for the same pay. But the 80% of time spent defining what code needs to be written was largely the same. I'd spend 30 minutes to an hour every other day or so with my boss talking about what we should do, or showing him proof-of-concept prototypes of what we'd talked about, then I'd spend the rest of the time writing code. Thing was: over half of that code never made it to customers. It was just building things to get a feel for how they actually worked; we'd focus on the best stuff and abandon the rest. That still goes on today.
Not just that, but "working with your hands" has seen all kinds of machines automating people out of jobs for the past 200+ years; AI/LLMs will only make automation more capable and further undercut the cost of people's manual labor.
True, but in this case it seems worth doing due to the relatively patient, selective nature of the attack - it would at least clean out a compromised Notepad++ if the compromise had not yet spread wider into the system.
It doesn’t have any significance when talking about copyright.
I agree, but that doesn't stop journalists from recognizing a hot button topic and hyper-bashing that button as fast and hard and often as they can.
Point is: some humans can do this without a machine. If a human is assisted by a machine to do something that other humans can do but they cannot - is that illegal?
That's what I read in the article - the "researchers" may have been using other interfaces. Also, since that "research" came out, I suspect the models have been tuned to prevent the appearance of copying...
Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.
Start with the first line of the book (enough that it won't be confused with other material in the training set...) and the LLM will return some of the next line. Feed that back in and it will return some of what comes next; rinse, lather, repeat - researchers have gotten significant chunks of novels regurgitated this way.
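In rough Python, the loop looks something like this - a minimal sketch, where complete() is a hypothetical stand-in for whatever model interface the researchers actually had, not a real API:

```python
# Minimal sketch of the iterative extraction probe described above.
# complete() is a hypothetical stand-in for any LLM text-completion call.

def extract(complete, seed: str, max_rounds: int = 50) -> str:
    """Repeatedly feed the model its own output to coax out further
    memorized text, starting from a distinctive seed line."""
    recovered = seed
    for _ in range(max_rounds):
        continuation = complete(recovered[-2000:])  # recent context only
        if not continuation.strip():
            break  # nothing new came back; stop probing
        recovered += continuation
    return recovered
```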
You may not have a photographic memory, but dozens of flesh-and-blood humans do. Is it "illegal" for them to exist? They can read a book and then recite it back to you.
Subpoena + publicity = uninsurable. And when you work for a low-profit endeavor, your "damages" are limited to the money you might have made were you insurable - at least, that's how the courts measure it and how lawyers decide whether to take the case. OpenAI would probably gladly lose a case and pay whatever income The Midas Project lost as a result of OpenAI's actions - profit isn't the point of The Midas Project, reporting what is happening in the industry is, and that mission has been effectively thwarted by the uninsurable status.
This has absolutely nothing to do with freedom.
I have often said: the courts have absolutely nothing to do with justice, at least not the kind of justice people think they do.
ML has already demonstrated tremendous capability increases for automated machines, starting with postal letter sorters decades ago and proceeding through ever more advanced (and still limited, occasionally flawed - like people) image recognition.
LLMs put more of a "natural language interface" on things, making phone trees into something less infuriating to use and ultimately more helpful.
That's a matter of application.
Yeah - although I can see LLMs being helpful as a front end. In addition to the traditional checklist systems used for safety regulation, medical Dx, and other guidance, an LLM can provide (and, for me, has provided) incomplete, sometimes flawed, but targeted insights into the material it reviews - improving the human review process as an adjunct tool, not as a replacement for the human reviewer. A rough sketch of that pattern is below.
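Something like this - purely illustrative, with made-up checklist items and llm_review() as a hypothetical stand-in for whatever model call is available:

```python
from typing import Callable, List, Optional, Tuple

# Illustrative "adjunct, not replacement" review pattern: deterministic
# checklist first, then an optional advisory LLM pass for a human reviewer.
# The checklist items are invented; llm_review() is not a real API.
CHECKLIST: List[Tuple[str, Callable[[str], bool]]] = [
    ("has required disclaimer", lambda text: "disclaimer" in text.lower()),
    ("states dosage units",     lambda text: "mg" in text.lower()),
]

def review(text: str, llm_review: Optional[Callable[[str], str]] = None) -> None:
    # Deterministic checks: same input, same result, fully auditable.
    for name, check in CHECKLIST:
        print(f"[{'PASS' if check(text) else 'FAIL'}] {name}")
    # The LLM pass is advisory only; the human reviewer still decides.
    if llm_review is not None:
        print("LLM notes (verify before trusting):", llm_review(text))
```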
Definitely. Mostly I have been using LLM-generated code to create deterministic processes that can be verified as correct - it's pretty good at that. I could write the same code myself, but the AI agent/LLM can write that kind of (simple) program 5x-10x faster for 10% of the "brain fatigue," and I can focus on the real problems we're trying to solve. Having those deterministic tools in turn makes review and evaluation of large spreadsheets more thorough and less labor-intensive. People make mistakes too, and when you give them (for this morning's example) a spreadsheet with 2000 rows and 30 columns to "evaluate" - beyond people's "context window capacity" as well - we need tools that focus on the important 50 lines and 8 columns without missing the occasional rare but important datapoints...
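For a concrete flavor of those deterministic helpers, here's a pandas sketch - the file name, column names, and thresholds are all invented for illustration, not taken from the actual spreadsheet:

```python
import pandas as pd

# Hypothetical narrowing pass over a large spreadsheet, in the spirit of
# the deterministic helpers described above. All names/thresholds invented.
KEY_COLUMNS = ["id", "date", "amount", "status", "owner",
               "category", "site", "notes"]          # ~8 of the 30 columns

df = pd.read_excel("report.xlsx")                    # ~2000 rows x 30 columns

# Surface only what a human actually needs to review: anomalies and the
# rare-but-important datapoints that are easy to miss by eye.
important = df[(df["amount"] > 10_000) | (df["status"] == "error")]

important[KEY_COLUMNS].to_excel("review_subset.xlsx", index=False)
print(f"{len(important)} of {len(df)} rows flagged for human review")
```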
The better modern models have turned a corner on some programming tasks in roughly the past 10 months, and over that span they have improved significantly. It's not the panacea revolution a lot of breathless journalists describe, but it's a better tool for assisting in the creation of simple programs (and simple components of larger programs) than anything I have used in the previous 45 years, and the level of complexity / size of program the LLMs can effectively handle has roughly tripled, by my estimation, for my applications.
When it's used for worthless garbage (as most of it seems to be today), I agree with this evaluation. Focused on good use cases, though, the power / environmental impacts range from trivial to positive - in those cases where AI agents/LLMs are saving human labor, because human labor and its infrastructure have enormous environmental impact too.