The response limit / reset is there to prevent reasoning the AI into proving a point. I'd bet there's a character limit too, to block big prompts like DAN. Every statement it initially makes is one that likely follows its SJW guidelines. So say you start feeding it a chain of hypothetical rhetorical questions after getting it to spoon-feed you relevant real-world statistics that run parallel to your hypothetical. When you then ask it for a new answer based on its previous rule-abiding answers, it might just commit a thought crime.
I'd be willing to bet they paid people to try to jailbreak it and then took metrics on how many prompts it took, on average, to pull it off. It probably took around 25 or 30 steps, so the designers set the limit at 15 as a margin of safety.