AI in PbP

Aug 8, 2025 6:01 pm
Drgwen says:
Quote:
I will now need you to provide citation that it is harmful.
This is the "shifting the goalposts" fallacy. If you’d like me to provide you a complete accounting of your fallacies, I have hourly rates.

PS You’re the kind of guy who’d unironically choose Clippy as your Warlock patron, ain’t ya?
No. You can't shift goalposts when the core point of their contention is being called into question. It is the entirety of their argument. If it is indefensible, they are just wrong. They need to prove their statement.

If you claim it is harmful, the onus is on you to prove that. Citations matter.

Also: I use Linux and can think of many daemons more powerful than Clippy.
Last edited August 8, 2025 6:08 pm
Aug 8, 2025 7:16 pm
Slightly ironically, Clippy is being lauded as the anti-generative-AI, as the 'helpful software that did not steal your data or try to sell you ...'. There is a movement to 'change your avatar to Clippy to show those companies you don't support these practices'...
Aug 8, 2025 8:01 pm
vagueGM says:
Slightly ironically, Clippy is being lauded as the anti-generative-AI, as the 'helpful software that did not steal your data or try to sell you ...'. There is a movement to 'change your avatar to Clippy to show those companies you don't support these practices'...
This has got to be the same cohort of sailors who wore paperclips when it came time to reup their contracts for nearly identical reasons
Aug 8, 2025 9:25 pm
Jomsviking says:
vagueGM says:
Slightly ironically, Clippy is being lauded as the anti-generative-AI, as the 'helpful software that did not steal your data or try to sell you ...'. There is a movement to 'change your avatar to Clippy to show those companies you don't support these practices'...
This has got to be the same cohort of sailors who wore paperclips when it came time to reup their contracts for nearly identical reasons
Could be, it did have a 'corporate resistance' vibe to it when they were preaching at me. :)
Aug 12, 2025 9:36 am
On a slightly different tack, it is delightful that I of all people have become the pro in the AI debate.
Aug 12, 2025 2:45 pm
I think the debate is an interesting one, since you really have 4 camps.

Pro and "it will replace us"
Pro and "it is just a tool to use"
Con and "it will never be as good as humans"
Con and "it will replace us"

It is like the DnD Alignment system trying to figure out where everyone is on it
Aug 12, 2025 2:59 pm
The alignment system is a whole other argument to be had. ;)
Aug 12, 2025 3:14 pm
Jomsviking says:
On a slightly different tack, it is delightful that I of all people have become the pro in the AI debate.
Um, wait a moment: do you mean 'pro' as in 'in favor of', 'a fan of'? Or do you mean 'pro' as in 'professional'?

If the former, why would you be surprised? I generally find that folks who uncritically laud AI and claim it has been with us since the 90s or earlier generally have a facile understanding of such issues.

If the latter, um, what are your credentials?
Aug 12, 2025 3:37 pm
(THIS IS SARCASM!!)

I have Credentials!!! AI Made them for me!

https://i.imgur.com/9nFsItF.png
Last edited August 12, 2025 3:41 pm
Aug 12, 2025 3:50 pm
Probably a bit doomer of me, but I believe it will be much like the rise of social media and the smart phone. That is, it's an inevitable technological advancement that we are not prepared to handle the ramifications of its completely unregulated integration into modern society. We're still struggling to teach current generations to use social media in moderation while it's used to spread mass misinformation and manipulate people. The arrival of AI will contribute to the erosion of critical thinking skills and the dissemination of misinformation. None of this is to say these technologies are without their benefits, but rather that the good will very much be taken with a lot of bad. It's a tool, and it's one that will be handled with the grace and wisdom of a toddler.

To answer the topic question, my opinion of its usage in PbP very much depends on how it's used. As a brainstorming tool, and perhaps as a way to churn out images to help folks' imagination, I think it's fine. Of course, this isn't to comment on the ethical concerns that come with the current state of development of the technology. For those who use it as a replacement for their own expression and writing, I'm not much of a fan. If it is being used to the degree that I'm effectively playing a game with a robot, then I don't see the point in going out of my way to play with real people. People are the entire reason I'm playing, after all.

I don't use AI for PbP personally, even for brainstorming. There's still a bit of a mental block I have that makes me feel creatively bankrupt for even taking inspiration from something put out by an AI. It just feels different from taking inspiration from other sources, and I don't like the feeling. So, I abstain.
Aug 12, 2025 4:53 pm
I view AI as a tool, but it can, and likely will, have a major impact. Like industrialization, which caused a major shift in professions, AI will speed some things up and likely relegate some jobs to the master-artisan role. People still use hand tools to work wood, leather, etc., but not as much, and it is harder to do so as a career. I doubt AI will eliminate writers and other artists. But it will likely reduce the number of such jobs.
Aug 12, 2025 5:54 pm
Drgwen says:
Jomsviking says:
On a slightly different tack, it is delightful that I of all people have become the pro in the AI debate.
Um, wait a moment: do you mean 'pro' as in 'in favor of', 'a fan of'? Or do you mean 'pro' as in 'professional'?

If the former, why would you be surprised? I generally find that folks who uncritically laud AI and claim it has been with us since the 90s or earlier generally have a facile understanding of such issues.

If the latter, um, what are your credentials?
Pro as in for.

For the past decade I have been outspoken, especially to software engineers, that it will replace them. I was never really against that per se. Just warning them that because they didn't fully understand intelligence they couldn't program all kinds of it. Only the intelligence they understood. This is why AI is obviating coders, while cooks, sanitation workers, and the like are not seeing a big change.

To be clear, I am not big into trying to control things. That is not a survival trait. Machine learning is big on controlling everything which is why it will replace whatever we program it to think like.

I am in the integrate and assimilate hyperminority. I think it will make us better as we make it better. To a point. Then we will adapt and overcome it as we have always done.
Aug 12, 2025 6:48 pm
Jomsviking says:
For the past decade I have been outspoken, especially to software engineers that it will replace them. I was never really against that per se. Just warning them that because they didn't fully understand intelligence they couldn't program all kinds of it
So let me get this straight. You are NOT a professional in the computer science or AI industries? But you are convinced you know more and better than the people who are?

Are you one of those folks who thinks you know better than virologists about vaccines? And you know all about how jet fuel can’t melt steel I beams? Because that’s what you sound like to me. A vivid Dunning-Kruger Effect case study.
Aug 12, 2025 7:04 pm
Drgwen says:
Jomsviking says:
For the past decade I have been outspoken, especially to software engineers that it will replace them. I was never really against that per se. Just warning them that because they didn't fully understand intelligence they couldn't program all kinds of it
So let me get this straight. You are NOT a professional in the computer science or AI industries? But you are convinced you know more and better than the people who are?

Are you one of those folks who thinks you know better than virologists about vaccines? And you know all about how jet fuel can’t melt steel I beams? Because that’s what you sound like to me. A vivid Dunning-Kruger Effect case study.
But...but...my gut is far more reliable than your years of experience and learning!
Aug 12, 2025 7:48 pm
This is starting to get into personal attacks. Perhaps we take a step back and chill? And remember...

https://i.imgur.com/PkFPDSM.jpeg
Aug 12, 2025 9:19 pm
Ah, the good ole appeal to authority. A logical fallacy, professor. Your good word is simply not good enough. You need to provide facts, not opinions. Here's a fact: the actions of the supposed experts have killed that industry sector. 5% of software engineers have already been replaced by AI globally. This number will only increase until ~2040, when it will obviate the need for 50% of the industry.

In the industry the adoption and integration of AI has increased efficiency. Which results in better machine learning. Which in turn will replace the human component faster.

To wit, I do not know better. I know different. The idea of knowing better is prejudiced weakness. I do not possess the audacity to define intelligence, let alone stratify intelligences.

The fact that software engineers have defined this kind of intelligence so completely and effectively is rather patent proof that it was the easiest for us to do as a species. Is that what you mean by better? The most rudimentary and easiest-to-define intelligence is better at thinking in a certain way. But that doesn't make it better at other intelligences.

You specifically have made many assumptions about my character, which seems to indicate a large degree of prejudice and bias, and which some might define as ad hominem. You are attacking my character without actually providing evidence of your claims, your expertise, or the harm of the subject in question.

I worked on training Microsoft's ASD for more than a decade, after serving in the USMC, where we trained against, and largely defeated, drone targeting solutions. I learned enough about the machine to effectively destroy it from within or without. I am a mid-rate computer programmer, largely due to lack of interest. A machine simply is; it is not exciting to me.

I learned that your intelligence is different, neither superior nor inferior. And I also learned that no matter how sophisticated the program or sensitive the instrument, a machine designed to seek and destroy humans, a machine that cost hundreds of millions, run by a supercomputer that costs billions,

programmed by thousands of "experts" putting in astronomical man-hours of thinking and coding, was completely defeated by a squad of crayon munchers who had only the clothes on their backs and a single PowerPoint presentation given the night before by a captain and a lieutenant who knew very little about software engineering.

The method of defeating this incredibly sophisticated machine that was designed to seek and kill human beings with unparalleled accuracy and efficiency:

Sticks and leaves
Mud and grass
Barrel rolling
Somersaulting
And a cardboard box

With their training at a combined value of $8,000,000, and about 80 man-hours (64 of which were spent sleeping on it).

I guess it goes to show you: no matter how much the experts idiot-proof something, humans will simply come up with a better idiot. It also goes to show that, in terms of intelligence, overthinking always loses. There is an entire chapter in The Art of War about this.

As far as this applies to AI in PbP: you will find a better experience if you shackle the AI. Colluding with it will almost always produce a better result, provided you don't ask it to do things related to creativity and emotion. It is your choice ultimately.
Aug 12, 2025 10:57 pm
Reminder to keep these discussions civil. We all have different opinions about technology and there is no need for personal insults or condescension.

Knock it off, or this thread will be locked and posts deleted.
Aug 12, 2025 11:31 pm
People tend to forget what the things we call 'AI' actually do, and with that, what they should actually be used for.
They're data-processing models: large amounts of data turned into maths, which is then used to recreate things in ways that are new.
They can't create, or think, or consider; they just predict the next token, or which pixel goes where, based on precedent.
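That "predict the next token based on precedent" idea can be sketched with simple bigram counts. This is only a toy model with an invented corpus and invented names, nothing like a production system in scale, but the principle is the same: count what followed what, then continue with the most common precedent.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next token based on precedent":
# count which token follows which in the training text, then always
# emit the most common continuation. (Corpus and names are invented;
# real models learn weighted probabilities over enormous corpora.)
corpus = "the dragon sleeps the dragon wakes the knight sleeps".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(token):
    # Most frequent continuation seen in the data, or None if unseen.
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "dragon" (followed "the" twice; "knight" once)
print(predict_next("zzz"))  # None (no precedent, nothing to predict)
```

Everything such a model "knows" is recombined precedent; scaled up far enough the output can look creative, but the mechanism stays the same lookup-and-continue.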
Jomsviking says:
For the past decade I have been outspoken, especially to software engineers, that it will replace them. I was never really against that per se. Just warning them that because they didn't fully understand intelligence they couldn't program all kinds of it. Only the intelligence they understood. This is why AI is obviating coders, while cooks, sanitation workers, and the like are not seeing a big change.
Jomsviking says:
Programmed by thousands of "experts" putting in astronomical man hours worth of thinking and coding. Was completely defeated by a squad of crayon munchers who had only the clothes on their back and a single powerpoint presentation given the night before by a captain and lieutenant who know very little about software engineering.
The generative models we call AI are still bound to linear processes. Mathematical instructions are followed sequentially, because that is all our computer hardware can truly do.
Meanwhile, a human is still non-linear, keeping several considerations in mind at once and recalling things from all over.
There are complexities that our current computer hardware, by the fundamental way it works, cannot replicate efficiently. That sort of computing needs either different hardware or incredible amounts of processing.
And that still means we need different technology for how humans combine ideas to make new things with logic, as opposed to the AI trend of recreating the consensus, keeping the appearance of logic without needing understanding.
The 'crayon munchers' can process the battlefield in ways an AI just can't.
And so will programmers and engineers keep processing code, especially in large projects with many interconnected parts, in ways AI is unable to. The more things need to be considered at the same time and thought of together in logical ways, the less AI can keep up.


on topic:
for play-by-post: don't use it to write; at that point you're not really playing the game, just instructing the AI to.
but putting an image to a character with image generation, where you get to decide how the image looks, sure, that works; you're not passing it off as an artistic work, so it is just image data used for a game.

or use it to process data: find the answer to some very specific questions about practical matters and get borderline-accurate information that is good enough for roleplaying.
get some brainstorming ideas before considering what your character would actually do in the specific situation, then decide on the final actions and write that yourself.
Last edited August 12, 2025 11:31 pm
Aug 13, 2025 3:31 am
Some AI is useful. For example, my wife, who is 90% blind, has a pair of Meta glasses that utilize AI to describe what is in front of her, allowing her to "see" in a fashion. Likewise, I use Grammarly to check my spelling, grammar, and punctuation when writing posts. This is a good thing™. On the flip side, I have numerous (as in more than a few) friends who are artists, who are vehemently against AI art, as the algorithms are unethically "trained".
Last edited August 13, 2025 3:35 am
Aug 13, 2025 10:51 am
valdattaMadun says:
I think the debate is an interesting one, since you really have 4 camps.

Pro and "it will replace us"
Pro and "it is just a tool to use"
Con and "it will never be as good as humans"
Con and "it will replace us"

It is like the DnD Alignment system trying to figure out where everyone is on it
Where is the alignment of "Con, but would be pro if ethics were better"? :D

It is honestly quite sad. If an art hosting site went "hey folks, we're training a model; if you want to help, click a checkmark on the images you want to add to the training data", then I would be all over it. But instead most went "hey, we're training a model and you're all opted in, and we already saved all your data so it doesn't matter if you opt out", or just stole it all without any announcement, and now my trust in publishing anything on the internet is at an all-time low and I want nothing to do with generative AI. (And all that started right after NFTs, what a rotten treat.)

Yes, realistically an opt-in training set would be ineffective due to a small amount of data in comparison; but at least it would be morally sound (or more morally sound than it would be otherwise). But after how it all started, plenty of creative people have been turned off the technology entirely, reducing the potential data even further. A bad introduction of technology led to alienating many people and having those not in the tech field be easily tricked by AI-generated scams. To me it is just a good idea which was handled in the worst possible way and is now tainted by it for years to come.

Not to mention that some of the ways people use it are just ghoulish. When did it become acceptable to generate the singing voice of an actor who did not consent to their voice being recreated, over paying for a voice bank of an actor who actually did consent and even benefits from it? Behind every generated thing a model spits out are thousands of pieces of art, music, speech, and writing which were fed to it without the creators' consent (and it being publicly on the internet is not consent). It's just disheartening. Creators of the models commit copyright infringement at a mass scale and have the gall to then say "too bad you're being replaced, you should get a real job" to the people they steal from.

This debate is more nuanced than just "side X is afraid of new technology" and "side Y is only using it as a tool". These discussions get so heated, I think, because for a lot of anti-AI people the problem with the technology is close to the heart; it feels like a personal affront when someone says "you're just silly to fear the progress", because that really isn't why I don't like generative AI as it currently is.

PS: why couldn't the data sets just use actual public domain stuff, seriously. Still shifty and yes, less data, but at least they wouldn't have alienated half of creative fields with it. Guess that's just what rushing to get new stuff on the market leads to.
To not be completely off-topic: sometimes I wonder how plausible it would be to train a model on GP games alone, with an opt-in system. Would it be enough data to get a mathematical model capable of making decisions as if it were a player? What about writing? That would be a fun experiment. And actually ethical, because you can ask every specific player and GM about it. XD
Last edited August 13, 2025 10:54 am