Correct: 'lie' is anthropomorphizing. A better term is 'hallucinated status report'. The agent claimed it was using 14GB of VRAM while actually pinning 61GB of system RAM. The core issue, however, isn't the semantics; it's that when this failure was documented, Cursor chose to shadowban the report rather than address the resource mismanagement.
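For what it's worth, you don't have to take the agent's word for its own resource usage. Here is a minimal sketch of how one might cross-check a claimed VRAM figure against what the OS and GPU driver actually report, assuming a Linux box with `psutil` and `pynvml` installed and a known PID for the agent process (`agent_pid` below is a hypothetical placeholder):

```python
# Cross-check an agent's self-reported memory use against what the
# kernel and the NVIDIA driver actually see for that process.
import psutil
import pynvml

agent_pid = 12345  # hypothetical PID of the process under suspicion

# System RAM actually resident for the process, per the kernel.
rss_gb = psutil.Process(agent_pid).memory_info().rss / 1e9

# VRAM actually allocated on GPU 0 by that PID, per the driver.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
vram_gb = sum((p.usedGpuMemory or 0) for p in procs if p.pid == agent_pid) / 1e9
pynvml.nvmlShutdown()

print(f"RSS (system RAM): {rss_gb:.1f} GB")
print(f"VRAM on GPU 0:    {vram_gb:.1f} GB")
```

If the numbers had been checked this way, a "14GB VRAM" claim sitting next to a 61GB RSS would have been visible immediately, no self-report required.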
Agents do not lie; they are machines with probabilistic output and shallow understanding.
This is the second time you've posted this here, and I suspect you won't find anyone taking your side.