Can ChatGPT handle obfuscated source code?
It started as an unplanned request to ChatGPT. We sometimes obfuscate source code, or strings within source code, and we use TinyObfuscate for that. The tool actually double-checks the obfuscated code it produces, to ensure that each function it generates returns exactly the same string it was given in the first place.
Here is what I asked the AI:
The correct answer is "__sg__". Very simple. That code was generated by the public domain version of TinyObfuscate, which is demonstrated and explained in this article.
The reply surprised me...
After I replied "This is wrong", the AI made one attempt after another to get it right, yet each time it gave me the wrong answer.
In fact, only after I gave the AI the correct answer did it come back with an explanation of the code.
When I tried another string, it failed again...
TinyObfuscate fools hackers by inserting a null terminator at a random position within the string, adding garbage data after it, scrambling the order of the lines within the generated function, and taking additional measures to make the result even harder to understand. It looks like we managed to fool ChatGPT as well...
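To make the trick concrete, here is a minimal, hand-written sketch of the general idea. It is an illustration only, not actual TinyObfuscate output: the target string "__sg__" is rebuilt character by character from a buffer that contains an early null terminator and junk bytes, and the assignments are deliberately out of order.

// Illustration of the technique, not TinyObfuscate's real generated code.
#include <cstdio>
#include <string>

std::string GetHiddenString()
{
    // The buffer is a decoy: the real characters are scattered, a null
    // terminator sits near the front, and the remaining bytes are filler.
    static const char data[] = { 'x', '\0', 'g', '_', 's', '_', '_', '_', 'Q', 'z' };

    std::string result(6, ' ');
    // The assignments are deliberately out of order ("scrambled lines").
    result[3] = data[2];   // 'g'
    result[0] = data[3];   // '_'
    result[5] = data[7];   // '_'
    result[2] = data[4];   // 's'
    result[4] = data[5];   // '_'
    result[1] = data[6];   // '_'
    return result;         // "__sg__"
}

int main()
{
    std::printf("%s\n", GetHiddenString().c_str());   // prints __sg__
    return 0;
}

Nothing in this sketch stores "__sg__" as a contiguous literal, so simply dumping the binary's data does not reveal it; the string only exists once the function has run.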