Yes.
"hallucinate" implies "it's an unintended problem, there's little to be done about it".
"confabulate" implies the LLM has been designed to do that, which is true.
These types of neural network AI are designed to produce output that pleases their trainers, who don't care (and don't...