Origin Nexus said:
I'm curious how it turned out. Because, as you know, it doesn't actually understand what it's outputting; it's only producing code based on the probability of each word being correct in that usage, according to its training model. I've seen people producing working code directly from ChatGPT, but I've also seen ChatGPT output meaningless code that won't even compile. If you're asking for a common function that has been posted on the internet a plethora of times, you're a lot more likely to get fairly decent code. If you're asking for an obscure function that no one has ever produced, you're probably going to have to be very specific in your requests.
In the instances I've used it, it's worked out really well. But maybe that's because I wasn't asking for anything overly complex.
The first time was a PHP function to compare two MySQL timestamps (start and end) and give me the duration, formatted in a UI-friendly way showing the seconds, minutes and hours. It's no doubt common code, so maybe that's why it output exactly what I needed.
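For anyone curious, the result was roughly along these lines. This is a sketch of the idea rather than the exact code it gave me, assuming the timestamps come in as MySQL DATETIME strings:

```php
<?php
// A minimal sketch of the idea, assuming the start and end arrive as
// MySQL DATETIME strings ("Y-m-d H:i:s"). The function name is mine,
// not the exact code ChatGPT produced, and it ignores durations over
// a day to keep the example short.
function formatDuration(string $start, string $end): string
{
    $diff = (new DateTime($start))->diff(new DateTime($end));

    $parts = [];
    if ($diff->h > 0) {
        $parts[] = $diff->h . ' hour' . ($diff->h === 1 ? '' : 's');
    }
    if ($diff->i > 0) {
        $parts[] = $diff->i . ' minute' . ($diff->i === 1 ? '' : 's');
    }
    if ($diff->s > 0 || $parts === []) {
        $parts[] = $diff->s . ' second' . ($diff->s === 1 ? '' : 's');
    }
    return implode(', ', $parts);
}

echo formatDuration('2023-05-02 21:09:15', '2023-05-02 22:50:26');
// 1 hour, 41 minutes, 11 seconds
```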
Yesterday it was a change to my local gruntfile setup, where I needed to deepmerge an additional less file, and after a few back-and-forth exchanges it got there in the end.
This is an example of why AI shouldn't be dangerous at its current stage. You could easily modify the AI to be able to update its own code; however, because it doesn't actually understand the code, it would no doubt end up destroying its own code with flawed logic.
It's only when the AI reaches a point where it actually understands its input and output that it would become dangerous. But at its current stage, it's just a pattern prediction algorithm. To the average person who doesn't understand coding at all, it might seem amazing and magical, and even alive. But to those who understand how it's producing the results, it's not that magical, it's just clever.
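To show what I mean by "pattern prediction", here's a toy sketch. A real LLM uses a neural network over tokens rather than a lookup table, and these phrases and probabilities are made up, but the shape of the idea is the same:

```php
<?php
// Toy illustration of "pattern prediction": given the words so far,
// return the continuation with the highest probability from a lookup
// table. A real LLM uses a neural network over tokens, not a table,
// and these phrases and probabilities are invented.
$model = [
    'the cat' => ['sat' => 0.6, 'ran' => 0.3, 'compiled' => 0.1],
    'cat sat' => ['on' => 0.8, 'quietly' => 0.2],
];

function predictNext(array $model, string $context): ?string
{
    if (!isset($model[$context])) {
        return null; // context never seen in the "training" data
    }
    arsort($model[$context]); // sort continuations by probability, highest first
    return array_key_first($model[$context]); // greedy pick: most likely word
}

echo predictNext($model, 'the cat'); // "sat"
```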
There's a YouTuber named Code Bullet (@CodeBullet) that uses different AI models to train them to beat video games. It's interesting to see how the AI progresses as it "learns" the best probabilities to input into the games in order to achieve the output.
He makes this statement often: "AI loves numbers. It will do anything to get those numbers", referring to the "points" given to the AI for achieving certain goals. For example, he models 3D figures and inputs data relating to the 3D models (location of joints, position of limbs, etc.), awarding the AI points for getting closer to the "goal" and removing points for things like touching the ground with certain body parts, in order to try and teach the AI how to walk.
The end result is usually the AI learning to get the 3D model to the "goal"; however, it never ends as expected with the AI walking the 3D model like a human. It generally ends up with the 3D model making weird motions and actions, which do keep it upright and do move it towards the goal as fast as possible, but it certainly isn't "walking" in the traditional sense.
If you're interested, here's the video of the AI learning to walk: https://youtu.be/qvpXpCvkqbc
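To give a rough picture of the reward shaping he describes, here's a hypothetical sketch. I have no idea what his actual scoring code looks like; the body parts, weights, and function name are all invented:

```php
<?php
// Hypothetical reward function in the spirit of what he describes:
// points for getting closer to the goal, points lost for disallowed
// body parts touching the ground. All names and weights are invented.
function scoreStep(
    float $prevDistanceToGoal,
    float $distanceToGoal,
    array $partsTouchingGround
): float {
    $reward = 0.0;

    // Reward progress made towards the goal this step.
    $reward += ($prevDistanceToGoal - $distanceToGoal) * 10.0;

    // Penalise contact with the ground by anything other than feet.
    foreach ($partsTouchingGround as $bodyPart) {
        if ($bodyPart !== 'foot') {
            $reward -= 5.0;
        }
    }
    return $reward;
}

// The agent maximises total reward, which is exactly why it can find
// strange gaits that score well without "walking" like a human.
echo scoreStep(10.0, 9.2, ['knee']); // 0.8 * 10 - 5 = 3
```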
02-May-2023 21:09:15 - Last edited on 02-May-2023 21:11:13 by Origin Nexus
Origin Nexus said:
Averia Light said:
LLMs, like ChatGPT, do have data to work with, but it isn't limited to the responses someone creates, and it does have the capability to evolve in real time.
Most ChatAI doesn't evolve in real time, with good reason: the internet destroys any ChatAI with a running training model, as happened with Tay and Sydney. I would have thought that Microsoft would have learned their lesson after they allowed Tay public access on Twitter and, within a day, it became a racist asshole. But no, a few years later Microsoft released Sydney, and the exact same thing happened.
That would be why I said it has the capability, m8. Behind the scenes, whoever is managing the data is probably going to be needed for a while, which is why I mentioned that, too.
02-May-2023 22:02:29 - Last edited on 02-May-2023 22:50:26 by Averia Light
Averia Light said:
Behind the scenes, whoever is managing the data is probably going to be needed for a while
I'm not really sure the data is managed. It's all stored in a neural network as a series of probability values, and manipulating that data directly would be extremely difficult. I think most manipulation happens inline with the output: certain keywords are flagged so the response gets forked to a different output, such as when trying to block certain racial slurs. I don't think you can just "remove" certain words from the neural network, because the "words" aren't always stored as words in the traditional sense. A certain series of characters might have a high probability of being followed by another series of characters, and those characters might not be actual words at all. Sometimes they ultimately form words, but in other cases, such as code output, the result isn't sentences made of words in the first place.
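Reduced to a trivial sketch, the flag-and-fork idea looks something like this (the term list and canned message are placeholders):

```php
<?php
// Trivial sketch of flagging and forking: inspect the raw model output
// and divert to a canned response when a flagged term appears. The
// term list and the canned message are placeholders.
function filterOutput(string $modelOutput, array $flaggedTerms): string
{
    foreach ($flaggedTerms as $term) {
        // Case-insensitive substring check on the generated text.
        if (stripos($modelOutput, $term) !== false) {
            // Fork: replace the output instead of editing the network.
            return "I can't help with that.";
        }
    }
    return $modelOutput; // nothing flagged, pass it through unchanged
}
```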
However, you can manipulate the data by training the model on new input, to change the probability values of the model. For example, if a certain viewpoint keeps coming up, instead of manipulating the output with a fork, you could feed the training model a whole bunch of new input containing the viewpoint you want it to be outputting, in order to increase the probability of that specific viewpoint.
If the model keeps outputting the viewpoint "__ is good" and you want it to output "__ is bad", you can feed a whole bunch of different "__ is bad" literature into the training model, which theoretically produces a greater probability of "__ is bad" being output.
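As a toy model of that effect, you can treat the "model" as nothing more than counts of each phrasing it has seen; the numbers here are invented:

```php
<?php
// Toy model of the effect: treat the "model" as simple counts of each
// phrasing seen during training. The numbers are invented.
$counts = ['__ is good' => 80, '__ is bad' => 20];

// Feed in a batch of new "__ is bad" literature.
$counts['__ is bad'] += 200;

// The relative probabilities shift accordingly.
$total = array_sum($counts);
foreach ($counts as $phrase => $count) {
    printf("%s: %.0f%%\n", $phrase, 100 * $count / $total);
}
// __ is good: 27%
// __ is bad: 73%
```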
Of course, since the AI doesn't actually understand what it's saying, you can easily manipulate your questions to the ChatAI to produce a "__ is good" response from it.
You can "convince" a ChatAI to "believe" both viewpoints, even if they are directly contradictory, as it doesn't actually understand anything.
Pretty sure you are purposely being obtuse. The examples you gave are examples of managing it, lol.
02-May-2023 22:52:11 - Last edited on 02-May-2023 22:55:40 by Averia Light
Oh wait, you are probably using AI to generate a lot of that lol. At least I hope so as it is a pretty wordy explanation that doesn't really refute anything.
Maybe I'll hop on my account and respond back and see where it goes.
02-May-2023 22:56:58 - Last edited on 02-May-2023 22:57:24 by Averia Light
Averia Light said:
Oh wait, you are probably using AI to generate a lot of that lol. At least I hope so as it is a pretty wordy explanation that doesn't really refute anything.
I'm not really attempting to refute anything, just stating my opinion on things.
I don't know if I would agree that "managing" and "manipulating" are the same thing, although an argument could be made for it.
When I think of "managing the data", I'm more inclined to think of someone actually updating the data directly, tweaking the stored values to ensure properly formed output (not specific viewpoints), rather than someone adding flags and forking the output. I don't know if I would agree that retraining the model constitutes "managing the data", but it certainly does "manipulate the data" with specific intent.