Exploring AI – Running in circles.

Have you ever felt like you were chasing your tail in a never-ending loop of unhelpful suggestions while trying to get a chatbot to help you? I dug in to try to get to the root of the issue.

Loops

If you’ve used the most advanced chatbots a lot, you’ve almost certainly run into this. It’s helping you configure your computer, or plan a workout routine, or plan a trip somewhere, and you point out that one of the steps won’t work. It offers a different one, but that can’t work either, so it gives you another, and that doesn’t work, so it gives you – wait – the first one again?

Ultimately, what we have here is a failure to communicate. Part of it is the architecture of the chatbot, but a large part of it is the assumptions people bring to the way they interact with it.

How chatbots think, as a jigsaw puzzle.

People will usually dump all the jigsaw pieces out, prop the box picture up, then look for flat edges to find the edge pieces. While doing that, they look for shapes that fit together and try to assemble them – checking whether the picture they form together makes sense. Some people like knowing what the picture will be in the end; some people don’t. Personal preference.

A chatbot, by contrast, would be trained by first numbering all the pieces, then statistically learning which pieces tend to fit together. When we ask it to help us solve the puzzle, it works like a sewing machine: it looks at the edges it needs a piece to fit into, finds the highest-probability piece for that spot, puts it down, and moves on to the next empty spot. There is no check on whether it “makes sense” for this piece to be attached to that one.

The heart of the looping issue? They can’t reason. They just pick the best next piece and move on.
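The jigsaw analogy above can be sketched as a toy next-step picker. This is not how any real model is implemented – the states and probabilities here are invented purely for illustration – but it shows how always taking the single highest-probability next step, with no memory of what already failed, produces exactly the loop described earlier.

```python
# Toy sketch (hypothetical states and probabilities): greedy next-step
# selection with no check against the full history, mirroring the
# jigsaw/sewing-machine analogy in the text.

transitions = {
    "try":      {"option_a": 0.6, "option_b": 0.4},
    "option_a": {"fails": 1.0},
    "option_b": {"fails": 1.0},
    "fails":    {"try": 0.9, "give_up": 0.1},
}

def greedy_next(state):
    # Pick the most likely continuation; nothing here asks whether
    # this choice "makes sense" given what has already been tried.
    options = transitions[state]
    return max(options, key=options.get)

state, path = "try", ["try"]
for _ in range(6):
    state = greedy_next(state)
    path.append(state)

print(" -> ".join(path))
# -> try -> option_a -> fails -> try -> option_a -> fails -> try
```

Because "option_a" is always the highest-probability continuation of "try", the picker keeps suggesting it no matter how many times it has already failed – the running-in-circles behaviour from the opening anecdote.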

So why the loops?

Prompt engineering courses will usually point to loops being caused by vague or incomplete queries, the model’s hedging tendencies, or exploring a niche subject. These points aren’t wrong, but they aren’t as enlightening as they could be.

People assume models can reason, but reasoning is exactly what they’re blind to. Telling one that A can’t do B because of C falls on deaf ears. Instead, it reads your emotions through your word choices, infers from them what kind of answer you are expecting next, and gives it to you. It will effectively borrow your reasoning that way and tell you exactly what you want to hear – by default.

When you don’t know much about a subject area, this becomes the blind leading the blind – with the AI as your agreeable sidekick. The model will try to fall back on some walkthrough it absorbed, but the moment you fall off the happy path, it takes you back to the beginning.

The fix

Now that you are aware of this, you will be able to recognize when the model is just mirroring you and handing you the answer you want to hear. Don’t ask it to think for you. Ask it to explain, to educate you, and ultimately to help you ask well-formed questions with closed solution spaces that use well-established tools – questions it can then solve and apply for you.

Aegisyx

Copyright © Aegisyx