Mount & Blade II - Bannerlord and ChatGPT

Joined
May 6, 2013
Messages
4,998
Location
Germany
Doesn't your own intelligence also rely only on data?
As already stated in the thread by @largh.
PS. I don't think these tools can be called artificial intelligence just yet. They are just language models that use statistical tools to put the most probable words in a sequence. The appearance/immersion is impressive, but it's still pretty much pure statistics; no intelligence/consciousness involved.
 
Joined
Oct 1, 2010
Messages
36,405
Location
Spudlandia
Doesn't your own intelligence also rely only on data?
No.

Human intelligence can choose not to follow the most sensible, most predictable, most "correct" path. No A.I. will ever write the Benjy chapter of "The Sound and the Fury" unless you feed it The Sound and the Fury and program it to reproduce those speech patterns. It will never make a creative decision. It will never decide to write run-on sentences like Quentin in that same book to represent his tormented stream of consciousness. If an A.I. chooses to use a suboptimal word or sentence structure, it'll only do so because it was programmed to do so every X sentences or Y pages, never because it's making an artistic decision.

It doesn't particularly surprise me that a lot of people find this bland output completely acceptable, because a lot of people wouldn't know good writing if it bit them on the ass.
 
Joined
Aug 31, 2013
Messages
4,927
Location
Portland, OR
I think you mean current AI. When true AI is achieved it will be a game changer. That could be a couple of hundred years away though. I imagine when that happens us gamers will be very happy indeed! No more waiting 6 years for BG3! It could spit out a couple of BG games every few hours by itself!
 
Joined
Oct 18, 2006
Messages
3,124
Location
Sigil
I think it was on The Infinite Monkey Cage (a BBC radio show) where I heard an AI researcher calling current AI a "revolution in computational statistics". She disliked calling it intelligent.

Anyway, the content generated was pretty cool, but very formulaic and unrealistic since it had no understanding of the situation outside of the language model. They all talked similarly, with no regard for the context apart from choice of words.

I tried AI Dungeon a year or two ago. It was cool for a while, but became predictable and more of a curiosity than a game.
 
Joined
Feb 15, 2009
Messages
1,981
Location
Sweden
I can see where this type of 'AI' could become helpful for indie devs, where some compromises have to be made. I could see the tools for automating landscape creation, dungeons, and so on, being much improved quite quickly. I can imagine a bunch of artists and level designers producing datasets for such tools, in a way where they're properly remunerated and sign off on that use. Then the devs become sort of editors of a bunch of AI 'proposals', letting non-artists get close to what they want.

I think it will become a lot easier to get decent results with those tools, which would let devs focus the human artists' talents where they have maximum impact. That sort of thing could work out quite well.
 
Joined
Nov 8, 2014
Messages
12,085
Doesn't your own intelligence also rely only on data?

As already stated in the thread by @largh.

Yeah, well, I guess AI is just a term - often used as a marketing buzzword - but I'd think that one of the requirements for "intelligence" would be inference and evaluation of "the output" (thoughts). In other words, the output is somehow compared to and weighted against other information collected by an individual/program.

Currently, one could simplify it like this: ChatGPT produces sequences of words based on a multidimensional likelihood defined from a huge dataset of human-written text. It's more complex than that in reality, but the point is that the program does not fact-check its output. I'd assume that adding such a capability would not be impossible, and it would be a next step toward a program that could be called AI.
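To make that concrete, here's a deliberately crude toy in Python: a bigram model that just counts which word follows which in a tiny made-up corpus, then samples a likely continuation. ChatGPT is vastly more sophisticated (a neural network conditioning on thousands of words of context), but the basic notion of "most probable next word given what came before" is the same. The corpus and seed word are invented purely for illustration.

Code:
import random
from collections import defaultdict, Counter

# Tiny invented corpus; a real model is trained on a huge dataset of human text.
corpus = ("the knight rode to the castle . "
          "the knight drew his sword . "
          "the dragon burned the castle .").split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed, length=8):
    """Repeatedly append a next word, weighted by how often it followed the last one."""
    words = [seed]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the knight rode to the castle . the dragon burned"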

Take all this with a grain of salt. I do statistics and data analysis, but I've never dabbled in machine learning or AI. Anyway, ChatGPT is a revolutionary piece of technology, and such programs will likely influence many things we'll witness in the near future.
 
Joined
Jun 19, 2020
Messages
1,116
Location
Norway
Well, yeah, my question was kind of intentionally naive. Actually, I know a thing or two about AI, although my days of active research are long in the past (they fell after the first machine-learning hype had died down and before the current one started).

Concerning ChatGPT, a nice explanation is a comparison with a feature of photo post-processing software. Say you want to create a panorama from single images, but you failed to shoot the whole scene, so there are gaps in the combined image, e.g. a forest and sky. An AI can fill in that blank space and connect the pictures seamlessly, e.g. by painting the missing parts of the forest and sky. It's so good that humans hardly notice the filled-in parts were created by an AI.

ChatGPT does the same, but with text. It's just a more sophisticated fill-in-the-blanks.

Both are possible because the AI systems have been trained on a lot of data.

And to come back to my initial naive question: currently it's hard to imagine how a potent AI system could even work without data.
Yes, there are systems with hand-crafted rules (e.g. in game AIs), but those operate at a much lower level of potency.
And there are systems that learn by "themselves" via so-called reinforcement learning, e.g. by playing a game against themselves billions of times and finding good strategies by learning which actions lead to victory. But that is still data; the only difference is that the data is created during the learning process by the AI system itself.
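As a toy sketch of that self-play idea (a trivially small game and made-up learning parameters, nothing like a real system): an agent plays a take-1-to-3-stones game against itself, remembers which moves ended up on the winning side, and thereby generates its own training data.

Code:
import random
from collections import defaultdict

# Toy game: a pile of stones, players alternately take 1-3 stones,
# whoever takes the last stone wins. The agent creates its own data
# by playing against itself and remembering which moves led to wins.
Q = defaultdict(float)       # estimated value of (pile_size, move) for the player to act
EPS, ALPHA = 0.2, 0.5        # exploration rate and learning rate (made-up values)

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def best_move(pile):
    return max(legal_moves(pile), key=lambda m: Q[(pile, m)])

for _ in range(20000):       # a real system would use vastly more self-play games
    pile, history = 10, []
    while pile > 0:
        move = (random.choice(legal_moves(pile)) if random.random() < EPS
                else best_move(pile))
        history.append((pile, move))
        pile -= move
    # The player who made the last move wins (+1), the opponent loses (-1);
    # walk backwards through the game, alternating perspective each ply.
    result = 1.0
    for state in reversed(history):
        Q[state] += ALPHA * (result - Q[state])
        result = -result

print({p: best_move(p) for p in range(1, 11)})
# Should roughly recover the known strategy: leave your opponent a multiple of 4.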

So as a summary: whether an AI relies on data or not is not a good predictor of its quality. And if it predicts anything, it's that AI systems that use (a lot of) data are usually far more powerful than those that don't.

(Which of course leaves out the much more complicated question of what human intelligence actually relies on. Isn't it also data? But that's not my area of expertise, so I'll stay quiet on that one.)
 
Joined
May 6, 2013
Messages
4,998
Location
Germany
I think you mean current AI. When true AI is achieved it will be a game changer. That could be a couple of hundred years away though. I imagine when that happens us gamers will be very happy indeed! No more waiting 6 years for BG3! It could spit out a couple of BG games every few hours by itself!
That sounds great until it becomes Skynet and massacres us. :)
 
Joined
Oct 21, 2006
Messages
39,401
Location
Florida, US
That sounds great until it becomes Skynet and massacres us. :)
I think that could potentially become a worry even sooner. When it comes to strategy games, these relatively primitive AIs already have us beat. A long time ago the chess grandmaster Bobby Fischer was concerned that top-level chess was becoming too much of a memory game, too much about pattern recognition, and he proposed variations with random starting positions, a larger grid, and so on. He wanted to force more genuine, novel, strategic thought.

The trouble is that with anything of that nature, if an AI is given enough time and horsepower to play against itself and build up huge amounts of data, it will handily outperform us. I'd bet dollars to donuts the military is working on AI systems that recognise real-world strategic situations, to help develop optimal strategies. I could imagine a situation where something like that could run amok, even while the AI is still a dumb cousin of Skynet.
 
Joined
Nov 8, 2014
Messages
12,085
I'd bet dollars to donuts the military is working on AI systems that recognise real-world strategic situations, to help develop optimal strategies. I could imagine a situation where something like that could run amok, even while the AI is still a dumb cousin of Skynet.
I think an AI that is good at world strategy is far in the future. The problem is that there is just one world the AI could learn from, and one example of world history just isn't enough.
As I hinted before, the AI could train itself via reinforcement learning by simulating world histories. However, that's not a good approach here because the rules are totally unclear: world politics depends on humans, and predicting human behaviour is quite difficult. The approach works well with things like games or physical systems, where we know and (mostly) understand the laws of nature.

I think the more imminent danger is a scenario where a questionable entity (e.g. a guy like Elon Musk) builds an AI and falsely claims that it is capable of making the best decisions in world politics. If political or military decision-makers believe those claims due to a lack of understanding of the technology and then blindly follow the AI's decisions, things could get ugly.

edit: Oh, I think I misread your comment. "real-world strategic" != "world strategy". I'll leave the post here, as it might still fit the context.
 
Joined
May 6, 2013
Messages
4,998
Location
Germany
edit: Oh, I think I misread your comment. "real-world strategic" != "world strategy". I'll leave the post here, as it might still fit the context.
Yes, I'm thinking of more limited strategic situations that could be modeled within the capabilities of this type of AI, not a sort of geopolitical supermind. That seems to require a far more advanced AI, capable of dealing with that level of nuance and complexity - the same sort of level you'd need for an AI to actually engage in what we might consider creative writing, as opposed to statistical pastiche.

I could see attempts at various types of autonomous combat drones, AI that directs their co-ordinated tactical behaviour, and so on. To stick with the sci-fi references, I could imagine us dealing with a rogue ED-209 situation long before we face HAL 9000.
 
Joined
Nov 8, 2014
Messages
12,085
Well, AI for warfare is scary, yes. But I don't see why it should go rogue.
 
Joined
May 6, 2013
Messages
4,998
Location
Germany
Yes, there's no 'incentive', in the way a self-aware AI might decide it's had enough of the silly primates giving the orders. But I think there's all sorts of room for things to go wrong with lower-level autonomous AIs - just error, bugs, not realising there are edge cases where the decision-making goes bonkers, or even the risk of these systems surprising us with what they can do - "Huh... would you look at that. That's not supposed to happen." :biggrin:
 
Joined
Nov 8, 2014
Messages
12,085
I like to think the AI could be used for more beneficial things like - "Create me an isometric RTwP cRPG that allows 6 player characters based on the 3.5 OGL with a story involving an evil dragon with a play time of 80-100 hours - GO!" - a few hours later I am playing a completely unique game with an original story!
 
Joined
Oct 18, 2006
Messages
3,124
Location
Sigil
Yes, there's no 'incentive', in the way a self-aware AI might decide it's had enough of the silly primates giving the orders. But I think there's all sorts of room for things to go wrong with lower-level autonomous AIs - just error, bugs, not realising there are edge cases where the decision-making goes bonkers, or even the risk of these systems surprising us with what they can do - "Huh... would you look at that. That's not supposed to happen." :biggrin:
Yeah, there is a lot of room for errors, of course. As hinted before, from my point of view the most intimidating thing is humans blindly trusting AIs with important decisions. Thus it's important to invest in research on Explainable AI, which can give humans an explanation of why a decision was proposed and why it is better than the alternatives. That's difficult for AIs using Deep Learning (so more or less all of the cool new stuff), because these are basically black-box methods where it's hard to tell why a result is the way it is.
With low-level errors it's difficult to tell. I mean, an inherent characteristic of Deep Learning is that it can handle uncertain knowledge very well. E.g. if the data used to train the AI (the training data) contains some wrong information, that's not a problem, because the trained model (which holds the core "knowledge") will still pick the most probable solution. It won't make the wrong decision because of that one piece of wrong information in the training data.
For example, if you're training an AI to classify animals in pictures and in one picture of your training data a dog is wrongly labelled as a cat, the AI will still be reliable, provided you have enough other data. It won't classify dogs as cats just because of this one wrong example.
But like any other software system, this "just" has to be tested with the kind of regular quality control that has been done for decades.
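A quick toy illustration of that robustness (invented numbers and a deliberately crude "model", just to show the statistical effect): fifty examples per class, one dog mislabelled as a cat, and the classifier still gets a new dog right.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Invented "pictures": two made-up features per animal (say, ear length and snout length).
cats = rng.normal(loc=[3.0, 2.0], scale=0.5, size=(50, 2))
dogs = rng.normal(loc=[6.0, 5.0], scale=0.5, size=(50, 2))

X = np.vstack([cats, dogs])
y = np.array([0] * 50 + [1] * 50)   # 0 = cat, 1 = dog
y[50] = 0                           # one dog is wrongly labelled as a cat

# Crude model: the mean of each class; predict whichever mean is closer.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

new_dog = np.array([6.2, 4.8])
print(predict(new_dog))   # still 1 (dog): one bad label barely shifts the statistics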
I like to think the AI could be used for more beneficial things like - "Create me an isometric RTwP cRPG that allows 6 player characters based on the 3.5 OGL with a story involving an evil dragon with a play time of 80-100 hours - GO!" - a few hours later I am playing a completely unique game with an original story!
I think we'll see this in our lifetime.
What we'll definitely see are 3D animated movies completely generated by AI systems. That's just a matter of years.
 
Joined
May 6, 2013
Messages
4,998
Location
Germany
I think we'll see this in our lifetime.
What we'll definitely see are 3D animated movies completely generated by AI systems. That's just a matter of years.
Perhaps in your lifetime :(

Sadly I think it will be around 50+ years till we get a truly capable AI, and I'm unlikely to last that long!!! :(
 
Joined
Oct 18, 2006
Messages
3,124
Location
Sigil
Yeah, there is a lot of room for errors, of course. As hinted before, from my point of view the most intimidating thing is humans blindly trusting AIs with important decisions. Thus it's important to invest in research on Explainable AI, which can give humans an explanation of why a decision was proposed and why it is better than the alternatives. That's difficult for AIs using Deep Learning (so more or less all of the cool new stuff), because these are basically black-box methods where it's hard to tell why a result is the way it is.
With low-level errors it's difficult to tell. I mean, an inherent characteristic of Deep Learning is that it can handle uncertain knowledge very well. E.g. if the data used to train the AI (the training data) contains some wrong information, that's not a problem, because the trained model (which holds the core "knowledge") will still pick the most probable solution. It won't make the wrong decision because of that one piece of wrong information in the training data.
For example, if you're training an AI to classify animals in pictures and in one picture of your training data a dog is wrongly labelled as a cat, the AI will still be reliable, provided you have enough other data. It won't classify dogs as cats just because of this one wrong example.
But like any other software system, this "just" has to be tested with the kind of regular quality control that has been done for decades.
Yes, but I think as more complex autonomous systems are developed, they will require several Deep Learning systems interacting with each other, mediated by some sophisticated code. The more that complexity increases, the more black-boxed things become to us, and there is some risk of unexpected outcomes emerging.

I think there is also the caution that we probably don't know all there is to know about computer science and mathematics, and that when we create these complex AI systems that become increasingly opaque to us, there is always the possibility of one of them making a leap towards more advanced AI behaviour we hadn't conceived of.
 
Joined
Nov 8, 2014
Messages
12,085
I think there is also the caution that we probably don't know all there is to know about computer science and mathematics, and that when we create these complex AI systems that become increasingly opaque to us, there is always the possibility of one of them making a leap towards more advanced AI behaviour we hadn't conceived of.
As I've said, I'm not an actual expert in Machine Learning, but I think I know the concepts. Still, I have no idea where an unintended leap might actually come from. How would that happen?

I think we'll more likely have a societal problem, because there will be those who understand and control the AI systems and those who don't. So it's a question of power distribution. E.g. OpenAI (which developed ChatGPT) was co-founded by Elon Musk (who resigned in 2018 but still gives money), and Microsoft also invests heavily. It's not open; it belongs to corporations. The next big systems will also be owned by companies - Meta/Google will respond with something of their own. So they will hold the power.
So it's up to governments and the UN (or, here, the EU), who somehow need to regulate it and prevent misuse, or at least make misuse more difficult.

Btw, I'm looking forward to the day someone (a lawmaker) actually has to define the term "Artificial Intelligence" or what an "AI system" is. In Germany that hasn't been done yet.
Is there some country where AI has already been defined in a legal context?
 
Joined
May 6, 2013
Messages
4,998
Location
Germany
As I've said, I'm not an actual expert in Machine Learning, but I think I know the concepts. Still, I have no idea where an unintended leap might actually come from. How would that happen?
Well, I'm no expert either, but I do read about the topic. In terms of where such a leap might come from, here's an article I read a while back, which I think gives a better idea of what I'm getting at.


I think the experts would probably agree that while a runaway process that leads to the 'singularity' remains quite far-fetched, the possibility of systems like this starting to do things we could not foresee is real.
 
Joined
Nov 8, 2014
Messages
12,085