What's your opinion on the agent RAG debate?
which part of the RAG debate?
this issue exists because RAGs in the public chat AI space are ridiculously biased. like it got outed that google's RAGs specifically were forcing massive over-representation of non-white people in outputs. the dall-e RAGs are better, but some still specifically protect certain political figures.
another part of the RAG debate is whether to use agentic RAGs at all. agentic RAGs only exist because humans are bad at thinking about the big picture, and on a long enough timeline that problem goes away. humans also want to be able to re-use those RAGs in different areas. that re-usability is a very different issue, and the benefit there is real.
so long as humans want to keep a hand on the wheel, agentic RAGs will exist, because people want to influence the outcome. some of that is nefarious (e.g. political bias), but some is perfectly fine. for example, if my company has a chatbot and it's not trained to deal with contract amendments, i don't want the chatbot talking about the contract at all. so unless the answer has high precision/recall and zero probability of hallucination, i want a RAG that makes the chatbot grab a human to talk about the issue instead. this has become a big enough problem that court cases have already been litigated over customers being promised better deal terms by chatbots, with sellers backing out and blaming the chatbot. courts ruled in favor of the customers.
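the "grab a human instead of guessing" pattern above can be sketched in a few lines. this is a toy illustration under assumed names — `retrieve_answer`, `CONFIDENCE_THRESHOLD`, and `OFF_LIMITS_TOPICS` are all hypothetical, not from any real chatbot framework — showing the core idea: never surface a low-confidence answer, always escalate it.

```python
# Toy sketch of a confidence-gated chatbot handoff. All names here
# (retrieve_answer, CONFIDENCE_THRESHOLD, OFF_LIMITS_TOPICS) are
# illustrative assumptions, not a real library's API.

CONFIDENCE_THRESHOLD = 0.9
OFF_LIMITS_TOPICS = {"contract amendment", "deal terms"}

def retrieve_answer(question: str) -> tuple[str, float]:
    """Stand-in for a real retrieval step: returns an answer plus a
    confidence score in [0, 1]."""
    # Toy heuristic: any question touching an off-limits topic scores 0,
    # so it can never clear the threshold below.
    if any(topic in question.lower() for topic in OFF_LIMITS_TOPICS):
        return ("", 0.0)
    return ("Our support hours are 9am-5pm.", 0.95)

def respond(question: str) -> str:
    answer, score = retrieve_answer(question)
    if score < CONFIDENCE_THRESHOLD:
        # Low-confidence answers are never shown; escalate to a human.
        return "Let me connect you with a colleague who can help with that."
    return answer
```

the key design choice is that the threshold check happens outside the retrieval step, so a biased or hallucinated answer still can't reach the customer — which is exactly the liability the court cases mentioned above turn on.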