There used to be a Turing test “judge” chatbot online that would deliberately insert typos into its own messages. If you repeated the word with the typo intact, it assumed you were a bot; if you corrected the typo, it assumed you were human. If you gave the same answer to the same question twice, it also assumed you were a bot. One could also measure the time between input and response, to see whether it is always identical and/or always lands exactly on the second.
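As a rough illustration, here is a minimal Python sketch of that timing check, assuming you have already collected per-reply latencies in seconds; the sample count and jitter threshold are made-up values for illustration:

    import statistics

    def looks_scripted(response_times, min_samples=5, jitter_threshold=0.05):
        """Flag a correspondent whose reply latency is suspiciously regular.

        A human's latency varies with message length and attention; a naive
        bot often replies after a fixed delay, or exactly on the second.
        """
        if len(response_times) < min_samples:
            return False  # not enough evidence either way
        # Near-zero spread in latency suggests a fixed-delay sender.
        if statistics.stdev(response_times) < jitter_threshold:
            return True
        # Replies that always land on a whole second are also suspect.
        return all(abs(t - round(t)) < 0.01 for t in response_times)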
A fair number of chatbots deal with repetitive question/answer cycles. A chatbot with a dialogue manager can mark what it has said before and avoid re-using the response. How convincing that is depends on how many general-purpose gambit responses it has, ranging from “You said that already” to “Okay, I’m out. Bye”. However, one thing few chatbots will do is actually make good on that last claim and stop responding indefinitely. Conversely, a human will almost never keep talking for longer than 16 hours straight, because they need to eat and sleep.
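A minimal sketch of that bookkeeping, assuming a hypothetical knowledge_base dict mapping questions to canned answers and a hand-written gambit list:

    import random

    class DialogueManager:
        """Track which responses have been used and fall back to
        general-purpose gambits instead of repeating them verbatim."""

        GAMBITS = ["You said that already.",
                   "Asked and answered.",
                   "Okay, I'm out. Bye."]

        def __init__(self, knowledge_base):
            self.knowledge_base = knowledge_base
            self.used = set()

        def respond(self, question):
            answer = self.knowledge_base.get(question)
            if answer is not None and answer not in self.used:
                self.used.add(answer)
                return answer
            # Repeated question (or no answer at all): use a gambit,
            # and never the same gambit twice.
            fresh = [g for g in self.GAMBITS if g not in self.used]
            if not fresh:
                return None  # make good on "Bye": stop responding
            gambit = random.choice(fresh)
            self.used.add(gambit)
            return gambit

Returning None once the gambits are exhausted is the part few chatbots actually implement: going silent after saying goodbye.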
In my opinion, the most effective ways to expose a bot are either abusive misspelling (see the Loebner Prize 2013) or an endless variety of word and letter games, like “If you count the ‘a’s and ‘b’s of your last question excluding prepositions, how many vowels do you count?”. These work because the problem space and the practical value of covering it are miles apart: you could program it, but most chatbot developers rightly consider it a huge waste of time, and so they don’t.
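To be concrete, answering that one particular question is trivial to program, which is exactly the point: each variant of the game needs its own handler, and there are endlessly many variants. A minimal sketch, where the preposition list and the naive whitespace tokenization are stand-ins:

    PREPOSITIONS = {"of", "in", "on", "at", "by", "for", "with", "to"}
    VOWELS = set("aeiou")

    def count_vowels_among_abs(question):
        """Among the 'a's and 'b's of the question, excluding words that
        are prepositions, count how many are vowels (i.e. the 'a's)."""
        letters = [ch
                   for word in question.lower().split()
                   if word.strip(".,?!'\"‘’") not in PREPOSITIONS
                   for ch in word if ch in "ab"]
        return sum(1 for ch in letters if ch in VOWELS)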