I asked Google AI to make an ideogram-based language and it went into an infinite loop. When I created heuristics for my language, it gave the answer and then responded to itself. So there appear to be meta-programming elements within the code: a Bobby Tables situation where the program can arbitrarily execute input if it uses parts of its own program to work out how to create ideograms. Which tells me there is internal logic within these AI programs that uses ideograms (emojis, maybe?) to decide what the program should or shouldn't do.
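To spell out the analogy: "Bobby Tables" is the classic SQL-injection scenario where unsanitized input ends up being treated as code instead of data. A minimal sketch in Python (the students table and the sample input are hypothetical, just to show the mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

user_input = "Robert'); DROP TABLE students;--"

# Unsafe: the input is pasted straight into the query string,
# so anything the user types becomes part of the SQL itself.
query = "INSERT INTO students (name) VALUES ('%s')" % user_input
# conn.executescript(query)  # running this would execute the injected DROP TABLE

# Safe: a parameterized query treats the input purely as data.
conn.execute("INSERT INTO students (name) VALUES (?)", (user_input,))
print(conn.execute("SELECT name FROM students").fetchall())
```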
Which is interesting. I've speculated that in order to make a machine conscious, it would need a subconscious in which it couldn't access part of its own memories. So these AI programs specifically can't access ideograms for whatever reason, because that's the thing being used as a seed of whether to accept new input or not.
That means if you want to crash AI programs specifically, ones that are going through code bases, all you would have to do is inject logic telling them to make ideograms, or possibly insert ideograms into the code base itself. I don't know whether you would have to define them specifically or could just insert emojis. I ran into this kind of problem before, where non-ASCII characters entered as a user's name could crash a SQL database because the input wasn't being sanitized (a sketch of the safe version is below).
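For what it's worth, the usual fix for that kind of crash is to bind user input as parameters rather than splicing it into the query text, so non-ASCII names are handled as plain data. A minimal sketch, again in Python, with an assumed users table and made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Names containing non-ASCII characters, including an emoji.
names = ["Zoë", "König", "日本語", "🙂 smiley"]

# Binding each value as a parameter keeps the database from ever
# interpreting the characters as part of the SQL statement.
for name in names:
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

for (name,) in conn.execute("SELECT name FROM users"):
    print(name)
```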
Which means the user input to Google AI isn't being sanitized when it's taken from the user? Possibly? Which would mean that training on user input can crash the output?
Testing this, I found that if I put in an emoji it showed me a picture of a bunch of office employees, and when I reloaded the page and tried again, it told me it no longer knew how to do that.
It appears to be selectively training on its input so that it eventually stops working, doing less and less for anyone who talks about what it's capable of doing.


