Ah yes. I remember Zed explaining this in a live session once, and it expanded my understanding of computing history!
A bit can be 1 or 0, as we know, and a byte is 8 of them. 8 bits can represent 256 distinct values, from the bitstring 00000000 to 11111111. On their own these are just numbers; they mean nothing without an agreed reference that says how they should be interpreted. “UTF-8” is one such agreed reference: for the values 0-127 it assigns the same characters as ASCII, and it combines multiple bytes into sequences to represent everything beyond that range.
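A quick Python sketch of that idea, just as an illustration (the byte values 72, 105, and 33 are ones I picked arbitrarily):

```python
# Raw bytes are just numbers from 0 to 255 until we pick an interpretation.
data = bytes([72, 105, 33])
print(list(data))             # [72, 105, 33] - plain numbers
print(data.decode("utf-8"))   # Hi! - the very same bytes, read as UTF-8 text
```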
“UTF-16” is another encoding of the same characters, using 16-bit units instead of single bytes, in recognition of other languages, characters etc. The character set behind both is Unicode, which isn't an encoding itself but a standard that assigns a number (a code point) to every character, and it's still being expanded frequently with little emoji - that we all know and love.
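To make that distinction concrete, here's a small sketch (just Python's built-in `encode`, nothing fancy) showing one Unicode code point turned into bytes by two different encodings:

```python
smiley = "🙂"                      # one character: Unicode code point U+1F642
print(hex(ord(smiley)))            # 0x1f642 - the code point Unicode assigns
print(smiley.encode("utf-8"))      # b'\xf0\x9f\x99\x82' - UTF-8's byte sequence
print(smiley.encode("utf-16-be"))  # b'\xd8=\xdeB' - UTF-16's byte sequence
```

Same character, same code point, two different byte strings on disk.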
As @florian says, without the encoding type, it's just a bitstring. With the encoding type known, it can be recognised as a text file, or a sound wave, or an image, etc…
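You can see that in practice too; here's a minimal sketch where the exact same two bytes come out as different text depending on which encoding we assume:

```python
raw = "é".encode("utf-8")      # b'\xc3\xa9' - two bytes
print(raw.decode("utf-8"))     # é  - decoded with the right reference
print(raw.decode("latin-1"))   # Ã© - same bytes, wrong reference (classic mojibake)
```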