> As pointed out, it's still not full AI but a dumb version relying on data.

Doesn't your own intelligence also rely only on data?
> Doesn't your own intelligence also rely only on data?

As already stated in the thread by @largh.
PS. I don't think these tools can be called artificial intelligence just yet. They are just language models using statistics to put the most probable words in a sequence. The appearance/immersion is impressive, but it's still pretty much pure statistics, no intelligence or consciousness involved.
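To make "putting the most probable words in a sequence" concrete, here's a toy sketch of the idea: a bigram model over a made-up corpus that picks each next word in proportion to how often it followed the previous one. Real language models are vastly more sophisticated (learned probabilities over long contexts rather than raw counts), but the spirit is the same.

```python
# Toy illustration of "most probable next word": a bigram model built from
# a made-up corpus. Purely a sketch, not how any real system is implemented.
import random
from collections import Counter, defaultdict

corpus = "the dragon guards the gold and the dragon sleeps on the gold".split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" by repeatedly picking a probable next word.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```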
> Doesn't your own intelligence also rely only on data?

No.
> I think you mean current AI. When true AI is achieved it will be a game changer. That could be a couple of hundred years away though. I imagine when that happens us gamers will be very happy indeed! No more waiting 6 years for BG3! It could spit out a couple of BG games every few hours by itself!

That sounds great until it becomes Skynet and massacres us.
> That sounds great until it becomes Skynet and massacres us.

I think that could potentially become a worry even sooner. When it comes to strategy games, these relatively primitive AIs already have us beat. A long time ago the chess grandmaster Bobby Fischer was concerned that top-level chess was becoming too much of a memory game, too much about pattern recognition, and he proposed variations with random starting positions, a larger grid, and so on. He wanted to force more genuine, novel, strategic thought.
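For reference, the random-starting-position idea is what became Fischer Random / Chess960, where the back rank is shuffled subject to two constraints (bishops on opposite-coloured squares, king somewhere between the rooks). A rough sketch of generating such a setup, purely as an illustration:

```python
# Sketch: generate a Fischer Random (Chess960) back rank by shuffling the
# pieces until the standard constraints hold.
import random

def random_back_rank():
    pieces = list("RNBQKBNR")
    while True:
        random.shuffle(pieces)
        bishops = [i for i, p in enumerate(pieces) if p == "B"]
        rooks = [i for i, p in enumerate(pieces) if p == "R"]
        king = pieces.index("K")
        opposite_colours = (bishops[0] + bishops[1]) % 2 == 1
        king_between_rooks = rooks[0] < king < rooks[1]
        if opposite_colours and king_between_rooks:
            return "".join(pieces)

print(random_back_rank())  # e.g. "RKNBBQNR", one of the 960 legal setups
```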
> I'd bet dollars to donuts the military is working on AI systems that recognise real-world strategic situations, to help develop optimal strategies. I could imagine a situation where something like that could run amok, even while the AI is still a dumb cousin of Skynet.

I think an AI that is good at world strategy is far in the future. The problem is that there is just one world the AI could learn from, and one example of world history just isn't enough.
Yes, I'm thinking of more limited strategic situations that could be modeled within the capabilities of this type of AI, not a sort of geopolitical supermind. That would seem to require a far more advanced AI, capable of dealing with that level of nuance and complexity - the same sort of level you'd need for an AI to actually engage in what we might consider creative writing, as opposed to statistical pastiche.

edit: Oh, I think I misread your comment. Real-world strategic != world strategy. I'll leave the post here as it might still fit the context.
> Yes, there's no 'incentive', in the way a self-aware AI might decide it's had enough of the silly primates giving the orders. But I think there's all sorts of room for things to go wrong with lower-level autonomous AIs - just error, bugs, not realising there are edge cases where the decision-making goes bonkers, or even the risk of these systems surprising us with what they can do - 'Huh... would you look at that. That's not supposed to happen.'

Yeah, there is a lot of room for error, of course. As hinted before, in my view the most intimidating thing is humans blindly trusting AIs with important decisions. Thus it's important to invest in research on Explainable AI, which can give humans an explanation of why a decision was proposed and why it is better than others. That's difficult for AIs using Deep Learning (so more or less all of the cool new stuff), because these are basically black-box methods where it's hard to tell why a result is what it is.
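One concrete example of the kind of explanation technique meant here (a generic, model-agnostic method, not anything specific from the thread) is permutation importance: shuffle one input feature at a time and watch how much the black-box model's accuracy drops. A rough sketch, assuming scikit-learn and one of its bundled toy datasets:

```python
# Sketch of permutation importance as a simple black-box explanation method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an opaque model on a bundled toy dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Shuffle one feature at a time; a large accuracy drop means the model
# leaned heavily on that feature for its decisions.
rng = np.random.default_rng(0)
drops = []
for feature in range(X_test.shape[1]):
    shuffled = X_test.copy()
    column = shuffled[:, feature].copy()
    rng.shuffle(column)
    shuffled[:, feature] = column
    drops.append(baseline - model.score(shuffled, y_test))

# Report the three features the model relied on most.
for feature in np.argsort(drops)[::-1][:3]:
    print(f"feature {feature}: accuracy drop {drops[feature]:.3f}")
```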
> I like to think the AI could be used for more beneficial things like - "Create me an isometric RTwP cRPG that allows 6 player characters based on the 3.5 OGL with a story involving an evil dragon with a play time of 80-100 hours - GO!" - a few hours later I am playing a completely unique game with an original story!

I think we'll see this in our lifetime.
> I think we'll see this in our lifetime.

Perhaps in your lifetime.
What we'll definitely see are 3D animated movies completely generated by AI systems. That's just a matter of years.
> Yeah, there is a lot of room for error, of course. As hinted before, in my view the most intimidating thing is humans blindly trusting AIs with important decisions. Thus it's important to invest in research on Explainable AI, which can give humans an explanation of why a decision was proposed and why it is better than others. That's difficult for AIs using Deep Learning (so more or less all of the cool new stuff), because these are basically black-box methods where it's hard to tell why a result is what it is.

Yes, but I think as more complex autonomous systems are developed, they will require several Deep Learning systems interacting with each other, mediated by some sophisticated code. The more that complexity increases, the more black-boxed things become to us, and there is some risk of unexpected outcomes emerging.
With low-level errors it's difficult to see that happening. I mean, an inherent characteristic of Deep Learning is that it handles uncertain knowledge very well. E.g. if the data used to train the AI (the training data) contains some wrong information, that's not a problem, because the trained model (which holds the core "knowledge") will just pick the most probable solution. It won't make the wrong decision because of that one piece of wrong information in the training data.

For example, if you're training an AI to classify animals in pictures and in one picture of your training data a dog is wrongly labelled as a cat, the AI will still be reliable, provided you have enough other data. It won't classify dogs as cats just because of this one wrong example (see the sketch below).

But like any other software system, this "just" has to be tested with the regular quality control that has been done for decades.
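A quick sketch of the mislabelled-example point, assuming scikit-learn and an arbitrary synthetic dataset: flip a small fraction of the training labels and the learned classifier's test accuracy barely moves, because it follows the dominant statistical pattern rather than any single example.

```python
# Sketch: a small amount of label noise in the training data barely affects
# the trained model. The dataset and model here are arbitrary stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Flip 2% of the training labels -- the "dog marked as a cat" situation.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.choice(len(noisy), size=len(noisy) // 50, replace=False)
noisy[flip] = 1 - noisy[flip]

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
noisy_acc = LogisticRegression(max_iter=1000).fit(X_train, noisy).score(X_test, y_test)
print(f"clean labels: {clean_acc:.3f}, 2% flipped labels: {noisy_acc:.3f}")
```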
> I think there is also the caution that we probably don't know all there is to know about computer science and mathematics, and that when we create these complex AI systems that become increasingly opaque to us, there is always the possibility of one of them making a leap towards more advanced AI behaviour we hadn't conceived of.

As I've said, I'm not an actual expert in Machine Learning, but I think I know the concepts. But I have no idea where an unintended leap might actually come from. How would that happen?
> As I've said, I'm not an actual expert in Machine Learning, but I think I know the concepts. But I have no idea where an unintended leap might actually come from. How would that happen?

Well, I'm no expert either, but I do read about the topic. In terms of where such a leap might come from, here's an article I read a while back, which I think gives a better idea of what I'm getting at.