I'm looking for a table that maps a given character encoding to the (maximum, in the case of variable length encodings) bytes per character. For fixed-width encodings this is easy enough, though I don't know, in the case of some of the more esoteric encodings, what that width is. For UTF-8 and the like it would also be nice to determine the maximum bytes per character depending on the highest codepoint in a string, but this is less pressing.
For some background (which you can ignore if you're not familiar with NumPy): I'm working on a prototype for an ndarray subclass that can, with some transparency, represent arrays of encoded bytes (including plain ASCII) as arrays of Unicode strings, without converting the entire array to UCS-4 at once. The idea is that the underlying dtype is still an S<N> dtype, where <N> is the (maximum) number of bytes per string in the array, but item lookups and string methods decode the strings on the fly using the correct encoding. A very rough prototype can be seen here, though eventually parts of this will likely be implemented in C. The most important thing for my use case is efficient use of memory; repeated decoding and re-encoding of strings is acceptable overhead.
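To make the decode-on-access idea concrete, here is a toy pure-Python sketch (not the prototype itself; the class and attribute names are mine): strings are stored as fixed-width encoded bytes, as an S<N> dtype would store them, and decoded only when an item is accessed.

```python
class EncodedArray:
    """Toy sketch: store fixed-width encoded bytes, decode only on access."""

    def __init__(self, strings, encoding):
        self.encoding = encoding
        encoded = [s.encode(encoding) for s in strings]
        # The <N> in S<N>: the widest encoded string, in bytes.
        self.itemsize = max(len(b) for b in encoded)
        # Null-pad each item to a fixed width, as NumPy's S<N> dtype does.
        self.buf = b"".join(b.ljust(self.itemsize, b"\x00") for b in encoded)

    def __getitem__(self, i):
        raw = self.buf[i * self.itemsize:(i + 1) * self.itemsize]
        # Strip the null padding before decoding, mirroring S<N> semantics.
        return raw.rstrip(b"\x00").decode(self.encoding)

a = EncodedArray(["café", "abc"], "utf-8")
```

Here `a.itemsize` is 5 ("café" is five bytes in UTF-8), and `a[0]` decodes back to "café" on the fly.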
Anyway, because the underlying dtype is measured in bytes, it does not tell users anything useful about the lengths of strings that can be written to a given encoded text array. So having such a map for arbitrary encodings would be very useful, if nothing else for improving the user interface.
Note: I found an answer to basically the same question that is specific to Java here: How can I programatically determine the maximum size in bytes of a character in a specific charset? However, I haven't been able to find any equivalent in Python, nor a useful database of information whereby I might implement my own.
The brute-force approach: iterate over every possible Unicode code point and track the greatest number of bytes a single encoded character uses.
def max_bytes_per_char(encoding):
    max_bytes = 0
    for codepoint in range(0x110000):  # all Unicode code points
        try:
            encoded = chr(codepoint).encode(encoding)
            max_bytes = max(max_bytes, len(encoded))
        except UnicodeError:
            # Code point not representable in this encoding
            # (or a surrogate); skip it.
            pass
    return max_bytes
>>> max_bytes_per_char('UTF-8')
4
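The question also asks about determining the maximum bytes per character from the highest code point actually present in a string. For UTF-8 specifically, the encoded width follows fixed code-point ranges (per RFC 3629), so no brute force is needed; a small sketch (the helper names are mine):

```python
def utf8_width(codepoint):
    # UTF-8 encoded width by code-point range (RFC 3629).
    if codepoint < 0x80:
        return 1
    if codepoint < 0x800:
        return 2
    if codepoint < 0x10000:
        return 3
    return 4

def max_utf8_bytes_per_char(s):
    # Maximum bytes any single character in s needs; 0 for an empty string.
    return max((utf8_width(ord(c)) for c in s), default=0)
```

For example, max_utf8_bytes_per_char("café") is 2, since "é" is the widest character present.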