Things you don't|do need|like to know about generative AI

It's funny. Like other countries, we have several elections this year, so I'm shuffling through the parties' agendas to see what's what. It turns out one party has trained an AI to answer questions, though their programme isn't that big (311 pages). Hopefully it gives correct answers and doesn't make things up. (They didn't build it themselves, actually; it's developed by an AI company.) It's not bad.

And look! Another one has an AI, too! When I asked where they were situated, it replied 'on the left' - almost a euphemism when you see their priorities - and that they were 'fighting for [...] and peace'. I made a joke about that, but it understood the contradiction I was hinting at and elaborated in quite a convincing way. Nice job, too. Although, putting a machine in place of humans to answer questions could look a little odd coming from such a party. ;)

Welcome to the new AI world. That's actually a good use of generative AI, IMO.
 
Joined
Aug 29, 2020
Messages
12,482
Location
Good old Europe
Joined
Nov 11, 2019
Messages
2,706
Location
beRgen@noRway

I still prefer solid human-intelligence economy experts. They have predicted 14 of the last 3 major economic crises.

pibbuR who once again claims that his posts are created with HI, enhanced by two cups of coffee.
Haven't automated tools been responsible for market crashes in the past? Saying that LLMs should play a more active role in decision-making, when we know they have a fixed training set that doesn't contain the latest news and will likely all make the same decision at about the same time, is a bit disturbing.
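
The "same decision at the same time" worry can be sketched in a few lines (purely illustrative; the agent names and the toy model are made up, not any real trading system):

```python
# Toy sketch of the "shared frozen model" herding risk: many agents querying
# one deterministic model all reach the same conclusion at the same moment,
# so their trades correlate instead of cancelling each other out.
def shared_model(signal: float) -> str:
    """Stand-in for a frozen LLM: same input always yields the same output."""
    return "sell" if signal < 0 else "buy"

agents = [f"fund_{i}" for i in range(5)]
market_signal = -0.3  # the same headline reaches every agent at once
decisions = {name: shared_model(market_signal) for name in agents}
print(decisions)  # every single agent sells simultaneously
```

With independent human analysts, the decisions would at least be noisy; with one shared model, the noise that normally dampens a panic is gone.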
 
I think this current fad around 'AI' is lame and dangerous. All these systems seem to be doing is taking in a whole lot of information and spitting it back out. The problem is the system doesn't do a good (or even a bad) job of filtering truth from fiction. Feed it a lot of propaganda or false data and you get a lot of propaganda or false data out. Even with straightforward data it messes up. For example, ask it for the acceptable temperature range of an aquatic species of fish, and if some website it uses has bad data, it will blindly spit out that bad data.

Feed it a bunch of political propaganda and it will be overwhelmed by bad data.

Where it works well is when the data provided is 100% accurate and you need predictions based on historical results; it can then be relatively accurate (after all, a true upset is not likely to be predicted here). However, ask it for future results where there is no historical data or working model to draw on, and it will give you nonsense.

The real problem is social engineering. People blindly trust these systems, so when they are fed nonsense, this nonsense is propagated even further, and often turned into nonsensical action by the idiot who blindly believes the system.

Too many members of the general population are morons.
 
Joined
Jun 26, 2021
Messages
796
The future's so bright part 1...
"Every morning I wake up, an AI will tell me, 'Eric, you have five meetings scheduled today. You do not need to join four of the five. You only need to join one. You can send a digital version of yourself.'"

"We're not there yet," said Yuan, "but that's a reason why there's limitations in today's [large language models]. Everyone shares the same LLM. It doesn't make any sense. I should have my own LLM - Eric's LLM, Nilay's LLM. All of us, we will have our own LLM."


The future's so bright part 2...


pibbuR who already has his personal, wetware based, LLM

PS. The novel "Kiln People" by David Brin deals with somewhat similar topics to number 1.
From Wikipedia: "The novel takes place in a future in which people can create clay duplicates (called "dittos" or golems) of themselves. A ditto retains all of the archetype's memories up until the time of duplication. The duplicate lasts only about a day, and the original person (referred to in the book as an archie, from "archetype", or "rig", from "original") can then choose whether or not to upload the ditto's memories"

Hereby recommended.
DS.
 
Nope, Copilot hasn't stolen your code.


I more or less agree with it. Copilot does what we've all done when studying programming or any other subject. I've read a number of copyrighted books, and what I write now is influenced to some degree by what I've read and learned, but it's not a (significant) 1:1 copy, and that doesn't make it a copyright infringement.

Granted, Copilot does it badly, but I don't think it's punishable by law. :D
 
Partly agree. In the case of books, you probably bought those, so you have a certain right to that information. For public information, people posted it with a certain expectation of who could use it - an expectation that at the time certainly didn't involve an LLM using it to learn.
What the companies feeding their LLMs are currently doing is taking advantage of the absence of legal regulation. They're abusing that, at least until/if it gets regulated.

Taking the topic even broader, the artists and other creators whose work is being used also didn't agree to LLMs being fed their work.
Just because something is public doesn't mean it's not subject to laws and regulation. But until we have that regulation - if we even get it, given the lobbying currently going on - they'll feed the models as much as they can.

Once we have laws, Copilot is stealing your work. Unfortunately, they probably won't apply retroactively, if at all.
Fortunately, it sounds like these LLMs will quickly hit data starvation, since there's just no more data and the newer versions require quantities that are no longer feasible. Unless they start feeding what they produce back to them. Then it'll truly be on the way to garbage in, garbage out. :D
 
Joined
Jul 31, 2007
Messages
9,278
Solving the legal aspects seems to be a nightmare. It's already complicated for human work because of the differences between jurisdictions.

It's true that the usage is completely different from what people initially expected, for code too, and there's a non-negligible risk that the LLM produces something very close to an original work if instructed to. Although, from the way neural networks are designed, it seems hard to re-create an original when a number of similar examples have been fed to the model. Then again, a human could do it, too; I'm pretty sure many people take bits of code and use them in their own code, disregarding the licence of the source. On the other hand, an AI does it mechanically, on a lot of material, and makes the outcome available to a lot of people. And there's the notion of 'significantly close', which also depends on the size of what is copied or imitated.
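
One crude way 'significantly close' could be quantified is Jaccard similarity over token n-grams of two snippets. This is a toy sketch only (real clone/plagiarism detection uses far more robust techniques, such as winnowing fingerprints or AST comparison), and the example snippets are made up:

```python
import re

def ngrams(code, n=3):
    # Tokenize crudely: identifiers, numbers, or single symbols.
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of token n-gram sets: 0.0 (disjoint) to 1.0 (identical)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "def add(a, b):\n    return a + b"
paraphrase = "def add(x, y):\n    return x + y"
print(similarity(original, original), similarity(original, paraphrase))
```

Note that a simple rename of variables already drops the score sharply here, which is exactly why any legal threshold for 'close enough' is so hard to pin down mechanically.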

You could also see it as a smart system for massively sharing experience.

It's a nice puzzle. As I said, I more or less agree, but not fully.

There's also a debate on whether what is produced by AI can be copyrighted or not (with issues like: 'is an AI an author?', 'tools don't create', and so on).

(I don't think any of those issues is covered by the AI Act the EC is working on.)

Unless they start feeding what they produce back to them. Then it'll truly be on the way to be garbage in garbage out. :D
Yes, such a feedback loop would really be the end of us. :D
Or at least there'll be a lot more garbage out there.
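
The feedback loop can be demonstrated with a toy experiment (pure illustration, standard library only): fit a Gaussian to data, generate 'synthetic' data from the fit, refit on those samples, and repeat. Finite-sample error compounds every generation, so the fitted distribution tends to drift away from the real one over time - a statistical analogue of models eating their own output.

```python
import random
import statistics

def refit_on_own_samples(mu, sigma, n=100, generations=10, seed=42):
    """Repeatedly fit a Gaussian to samples drawn from the previous fit.

    Each generation trains only on the previous generation's output, so
    sampling error accumulates instead of being corrected by real data.
    """
    rng = random.Random(seed)
    history = [(mu, sigma)]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(samples)      # refit on synthetic data only
        sigma = statistics.stdev(samples)
        history.append((mu, sigma))
    return history

history = refit_on_own_samples(0.0, 1.0)
print(history[0], "->", history[-1])  # the estimate random-walks away from (0, 1)
```

With fewer samples per generation the drift gets worse faster, which matches the intuition that lower-quality scraping accelerates the degradation.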

Another question I have is whether the code fed to Copilot actually works and is somewhat curated. Or does it eat anything, including WIP and abandoned stuff? I mean, when you look at the average repository on GitHub... :LOL:
 
I think the legality really needs to be clarified, especially when your own work might be used to replace you. You should have the right to opt out of that.
Even worse, when some people wanted to delete their posts on sites like Stack Overflow, they were blocked from deleting their own posts. Goes to show who really owns those posts once you make them.

About the repos being used by Copilot, I kind of doubt they can filter; the quantity is just too large.
And about reproducing copyrighted code, I remember reading threads from developers who had private Git repos with very specific code; when they tested Copilot, it almost fully reproduced code they had written directly in those private repos.
That sounds like a lawsuit waiting to happen.
 
I also think they just fed it everything and conveniently ignored the licences, or even the state of the code.

Yeah, the SO issue could have been handled more gracefully. As much as I like the original idea and have benefited from the answers (I replied to a bunch of questions and did a few moderation chores, too), I think it's become a cesspit.
 

pibbuR who last time he checked knows more (a lot) about pibbuR than any AI (zero).
 
It's getting ridiculous. People who apparently have no clue how generative AI works are starting to use it to make administrative decisions that impact lives. The fact that the AI's recommendations are supposed to be supervised by a human doesn't impress me, as I doubt those people have the luxury of time to review each case properly.


PS: I double-checked it wasn't April Fools' Day, but no...
 
More lower-quality content on the horizon (not that their previous work was outstanding).


Lionsgate, Studio Behind 'John Wick,' Signs Deal With AI Startup Runway

[…] Michael Burns, vice chairman of Lionsgate Studio, expects the company to be able to save “millions and millions of dollars” from using the new model. The studio behind the “John Wick” franchise and “Megalopolis” plans to initially use the new AI tool for internal purposes like storyboarding—laying out a series of graphics to show how a story unfolds—and eventually creating backgrounds and special effects, like explosions, for the big screen. […]

I wonder how an AI would keep successive frames consistent enough that nothing looks off. I just don't believe it'll work. Hopefully for them, they'll only lose a few of those millions and millions before realizing it was a silly idea.
 
View: https://x.com/Grady_Booch/status/1837153297330778317


Crypto mining was a problem in terms of wasted energy, but this is not?
Let alone the hypocrisy of complaining about crypto and other such ventures when a whole lot more is wasted on the military industrial complex churning through material and energy.

The main problem with all of this is that the consumption is part of the cycle. It's not just producing stuff and stockpiling it; it's consuming it in order to produce more, to make endless amounts of money.
Production and consumption exist just for the sake of eating through everything, to feed the stock prices of private companies.

And then the same establishment has the gall to tell the average person to watch their carbon footprint and energy consumption.
Starting with the hypocrite-in-chief. He can have huge mansions, but god forbid people in Africa start having a house and a car. Fucking criminals.

View: https://www.youtube.com/watch?v=mU5W1LvHN-o

I'll stop since it's gonna get moved to P&R.
 
I saw that. And that's a few months after Amazon bought a data center near another nuclear station.

I despise some of the crypto blockchains (especially those with proof of work, like Bitcoin) because they intentionally waste power just to randomly select who appends the next block of the chain, but this new AI trend is even worse because it's nonsensical and based on false assumptions. I'm not saying it won't get somewhere, but so much of the training happening right now is useless. No, not just useless: harmful, on top of that.
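
'Intentionally wasting power' is not an exaggeration; a minimal Bitcoin-style proof-of-work sketch makes it concrete (illustrative only, not the real protocol; the block data is made up):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> tuple[int, str]:
    """Grind nonces until the SHA-256 block hash falls below a target.

    All the burned CPU time buys is a verifiable random lottery ticket
    for the right to append the next block.
    """
    target = 1 << (256 - difficulty_bits)  # smaller target = more work required
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1  # every failed attempt is pure discarded energy

nonce, digest = mine(b"previous-hash + transactions")
print(nonce, digest)
```

On average this needs about 2**difficulty_bits hash attempts; real Bitcoin difficulty is astronomically higher, which is exactly where the electricity goes.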

EDIT Oh, right, oops. I thought we were in P&R. :ROFLMAO:
 