If there is one thing I've learnt in my 40-year IT career, it is "Nothing is safe". I even found a comment deep down in a pre-Linux kernel where the author checked a pointer parameter for != NULL: "//Please do not think I am paranoid!".
Anything may go wrong. The question is: what might happen when it goes wrong? Think of Murphy's Law. Could injuries happen? Is it repairable?? Ignorable??? What might it cost???? Then decide how deep your error checking goes.
I am getting close to 40 years of IT as well. However, I am no expert in USB, so I wasn't sure whether the underlying USB protocol is already "reliable" (i.e. it guarantees safe packet delivery, like, say, TCP/IP) or "unreliable" (like, say, UDP, where what you send may or may not arrive, so you have to implement a full stack of error control/retransmission on top).
OK, true dat, the USB cable between the PC and the PSOC could break or be defective, so even that "is never safe", but I was just curious whether doing a protocol over USB was "redundant" or not (assuming the cable, the PC, and the PSOC do not break).
Thanks to some luck, I no longer work in situations where "injuries" or "gazillions in money lost if this SW stops" are on the table; I had my fair share of that. Back in those times, if you had said you were going to use "Windows" or "some micro like this" in such systems, they would have fired you on the spot.
But yeah, "today we are more modern": we use assert(ptr != NULL). Oh, and don't get me started on LIBUSB (the 'OS one') or I'll get rabies.
Doing some extra searching around the internet suggests that if the device uses Bulk Transfers instead of Isochronous Transfers, then data delivery SHOULD be guaranteed/reliable, i.e. there is built-in retry/resend logic. So "in pure theory" it should not make sense to re-implement a protocol on top just to check for that, "because it's already in place" (*).
You get that "at the cost" of no guaranteed bandwidth (because, of course, you can never know how many retries/lost packets there will be).
But it seems "in all this there's a BIG but". "The but" is that this assumes the HW/SW implementation of the USB stack is perfect and has been perfectly tested, which apparently is not always the case, and even stuff that "should not act like that" sometimes does.
(*) That said, "because USB knows but you, above it, don't", I suppose that at the very minimum a timeout logic should always be implemented, for cases like a broken cable or nothing replying on the other side.
So in short: "Bulk Transfers" should guarantee delivery (retry, error control, data always coming/going), except in the event of HW failure.