Are you concerned about the use of generative AI by game developers?

  • Yes, it will decrease the quality of games

    Votes: 11 20.8%
  • Yes, it will impact people's jobs / shift the industry

    Votes: 14 26.4%
  • Yes, but it is too early to say whether the net result will be negative

    Votes: 21 39.6%
  • No, it will increase the quality of games

    Votes: 9 17.0%
  • No, it will improve people's jobs

    Votes: 3 5.7%
  • No, it won't substantially change anything

    Votes: 3 5.7%
  • I don't have any opinion for or against it / don't know enough about generative AI to say

    Votes: 6 11.3%

  • Total voters
    53
 
AI is certainly interesting. I don't think we need to worry about the quality of games overall, though those who work in various parts of the industry will either have to adapt to utilize these tools or get left behind, just like with any other disruptive technology. Over the past decade we've seen huge growth in the independent developer space, but there are still relatively few great indie games and still lots of great AAA games. If anything, much like how access to platforms like Steam and GOG drove the growth of the indie space, AI will continue it. Right now, a lot of decisions for indies are driven by cost.

Take graphics, for example. It's extremely expensive to hire a graphic artist for even basic graphics. One of the biggest impediments to making mods for the Infinity Engine games was that you pretty much had to reuse existing maps. Creating the art for a new map was damn near impossible, and that's not even going into the work to make it more than just an image. But what if you could go into an AI generator, tell it you're designing a new map for IE, and guide it through what you're looking to create? There's still work to be done, but a huge impediment has been removed.
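To make that concrete, here is a minimal sketch of what that "guide the generator" workflow could look like, assuming the OpenAI Python SDK and a DALL-E-style model (any image-generation API would do); the prompt, model choice, and output handling are purely illustrative, not a recommendation of a specific tool.

```python
# Hypothetical sketch: ask an image model for an Infinity Engine style background.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt=(
        "Pre-rendered isometric fantasy tavern interior, painterly 2D background "
        "in the style of classic Infinity Engine area art, top-down oblique view"
    ),
    size="1792x1024",
    n=1,
)

print(response.data[0].url)  # the image still needs tiling, walkmesh, and lighting work
```

The output would only be a starting point; turning it into a usable area still means cutting it into tiles, defining walkable regions, and so on, which is exactly the "still work to be done" part.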
 
I'm bastardising this quote I saw somewhere online:
I thought AI and robots were supposed to do the cleaning and the dishes to free up time for me to be creative, instead it looks like AI will be doing the creative work and I'll be doing the cleaning and the dishes.
I think it's a valid fear that AI will be creating all the music, art, engineering and any other creative work. Humans will be left doing any manual labour that still exists.
 
I have a constant debate in my office regarding this AI thing, and I'm the aggressive minority saying that it's not the AI we should fear, but our own stupidity.
Needless to say, I'm not popular among our hip young colleagues, especially when I asked them to define for me what this "AI" is. No one was able to answer in a cohesive manner...

AI will never ever do serious creative work, because we humans don't even understand how creativity works... hence we cannot develop an algorithm to do so.
All we can do is create a system that remixes existing data by certain criteria, and presto! ORIGINALITY! ARTIFICIAL CREATIVITY! SMART TECHNOLOGY!

Yup, Lord bless the brain dead, they are amusing folk to watch.
/rant
 
We truly don't want true A.I., as in the end it would either kill or enslave us all.
 
AI will never ever do serious creative work, because we humans don't even understand how creativity works... hence we cannot develop an algorithm to do so.
This is not a valid argument. Evolution also doesn't understand how creativity works, but it has still developed creative beings. Also, we do not understand how to build a human being, but we still produce new human beings every day.

Where I agree with you: today's "A.I." is not intelligent at all.
 
This is not a valid argument. Evolution also doesn't understand how creativity works, but it has still developed creative beings. Also, we do not understand how to build a human being, but we still produce new human beings every day.
Yes, but how much time did it take for this to work? That's how evolution succeeded.
 
Yes, but how much time did it take for this to work? That's how evolution succeeded.
I used evolution as a counterexample against the claim that we cannot develop things we do not understand. I didn't say that this is the only way to do it.

Creativity itself is another good example: many creative people cannot explain how they get their ideas. Often they don't even understand their own ideas. But still, they have them.

Edit: Before we had chess programs that play better than the best humans, a similar argument was used: we will never be able to create a chess program that plays better than the best humans. All these "philosophical" arguments that true A.I. or artificial creativity cannot be possible for some "principled" reason are invalid.
 
We shouldn't judge others by our own standards...:cm:
Not standards, just logic and a whole lot of fiction novels.

Even ChatGPT's creator thinks so. He even testified in front of Congress.
 
I used evolution as a counterexample against the claim that we cannot develop things we do not understand. I didn't say that this is the only way to do it.
@duerer said we can't make something complex without understanding it. You said it was possible: evolution created us and allowed us to make something without understanding. I'm just saying it had a tremendous cost, so I think it's not practical, and duerer's argument is valid.

Creativity itself is another good example: many creative people cannot explain how they get their ideas. Often they don't even understand their own ideas. But still, they have them.
I agree it's hard for someone to explain where their ideas come from, since they're drawn in large part from past experience. But it's also very hard for anyone else to have exactly the same ideas, which only confirms that it's hard to replicate something complex without understanding it.

Edit: Before we had chess programs that play better than the best humans, a similar argument was used: we will never be able to create a chess program that plays better than the best humans. All these "philosophical" arguments that true A.I. or artificial creativity cannot be possible for some "principled" reason are invalid.
I've never heard that we wouldn't be able to create a chess program that could beat everyone. Quite the opposite: predictions have often been too optimistic about it (like those who bet against David Levy). The algorithms used in many good chess programs are based on an understanding of the game and the techniques used by grandmasters, plus a solid dose of brute force to compensate for the lack of a neural network to recognize and generalize patterns. Now top engines like Stockfish, Fritz, and AlphaZero use neural networks, mimicking humans even further. So we fully understand how it works, actually.
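As a rough illustration of that "understanding of the game plus brute force" recipe, here is a minimal alpha-beta minimax sketch over a toy game tree; it is purely illustrative and not taken from any particular engine, which in practice add hand-crafted or learned evaluation, move ordering, and far deeper search.

```python
# Toy alpha-beta minimax: inner lists are positions, numbers are leaf evaluations.
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of node, pruning branches that cannot matter."""
    if depth == 0 or not isinstance(node, list):
        return node                      # leaf: return its static evaluation
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:            # prune: opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Hand-built example tree; the best achievable value for the maximizer is 6.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, 4, float("-inf"), float("inf"), True))
```

The "knowledge" lives in the leaf evaluations (a real engine scores positions by material, king safety, and so on, or with a learned network); the brute force is the tree search on top of it.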

PS: You shouldn't put a comma between the verb and its object. It makes it quite hard to read. :)
 
One thing is clear: if AI is going to be better than humans at everything humans do, it will be insanely good at irrational speculation and baseless fear-mongering. That being the case, it's possible that AIs in the future will be too busy calculating doomsday to ever do anything else, and so people won't have to be afraid of losing their jobs to a non-biological competitor that does everything better.

In truth, I'm looking forward to the day in which AI helps us see clearly that we are neither as intelligent nor even as sentient as we like to think we are. Like when we first began to get an inkling of the vastness of the Universe and how average and unremarkable our solar system is in the grand scheme of things, this should be a beautiful, humbling, and hopefully edifying milestone in our growth as a species.

And if we don't learn from this, and insist on dwelling in this foolish umbilical ignorance, I do hope AI kills us all and leaves room on the planet for something else to give it a better shot than we did.
 
Also, some rather nasty final comments on the topic:
First of all, let's consider what AI can do now for the user:
1. Filter complex solid data (e.g. IBM's Watson system, "making educated decisions" based on existing legal papers)
2. Resolve somewhat vague data into solid data (e.g. Alexa / Siri / Copilot "understanding" the user's incomprehensible request about where to dine)
3. Remix solid data based on vague inputs (e.g. MidJourney "creating original art" based on existing art, "AI" coding, etc.)

Can you see a pattern here? I'll help you.
All three scenarios handle solid data with vague queries.
Depending on how vague the query is, the process can be:
1. Usable - automated decision making systems, e.g. automatic factories
2. Somewhat usable - "humanize" user-computer interfaces
3. Questionable - create new content on the cheap

Furthermore:
All three scenarios automate mundane things with raw computing power.
1. Not-so-intellectual mundane work: replaces attorney interns, factory QA teams, etc.
2. Somewhat intellectual mundane work: replaces the user's learning and training process, etc.
3. Intellectual mundane work: replaces lower-grade creative work, etc.

So far so good, computers are here to serve us, hoooray.
But: what is the moral of this little rant?
1. AI lets us forget how to think and solve problems.
2. AI streamlines everything to the point we forget how things actually work (including social interactions).
3. AI speeds up the creation of crap to the point that we become indifferent to the difference between quality and crap.


In plain words:
THE MORE AI WE USE, THE DUMBER WE'LL BECOME.
THE DUMBER WE ARE, THE MORE AI WE WANT TO USE.


So, yeah. "The future Mr Gitts, the future!"
Long live AI.
 