Which is a reasonable and clean solution - I love the simplicity of ASCII, like every programmer does.
Except ASCII is not enough to represent my language, or even my name. Unicode is complex, but I'm glad it's here. I'm old enough to remember the absolute nightmare that was multi-language support before Unicode, and now the problem of encodings is... almost solved.
>ASCII is not enough to represent my language, or even my name.
Written Hebrew and Arabic mostly omit vowels. While you think that writing your language needs vowels, we can tell from the existence of Hebrew and Arabic that you are probably wrong. It would take some getting used to, but just like the famous "scramble the letters in the middle of words and you can still read it" demonstration:
>Aocdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt ting is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcusee the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
your language, too, is redundant and could be modified to be simpler to write.
I'm not asking you to write your language with no vowels; I'm simply saying you could reduce it to ASCII, get used to it, and civilization could move on. Stop clinging to the past, you are holding up the flying cars.
pot. kettle. unicode itself was the 95% change request, and this particular discussion is sparked by anguish about that change, from people such as yourself who want to discuss their anguish about it.
and you simply ignored the points that I went to the trouble to write down, and rather than considering them or thinking about them, you just started screaming "status quo status quo"
English itself lost some lovely letters because of the printing press (RIP, þ), so I suppose simplifying writing systems in the name of technological simplicity isn't unprecedented.
What would be neat is an ASCII or byte encoding that simplified foreign languages down to basically ASCII on the data side, then rendered them for display - only supporting a subset, unfortunately, but eliminating all these edge cases and moving them away from the logic and database layers.
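Roughly, that could look like the sketch below (just an illustration, not anyone's actual implementation): keep the original Unicode for display and derive an ASCII-folded key for the logic/database side. The ascii_fold helper and the record layout are invented for this example; the folding itself is Python's standard unicodedata module.

    # Illustrative sketch only; ascii_fold and the record layout are
    # invented for this example.
    import unicodedata

    def ascii_fold(text: str) -> str:
        """Fold text to an ASCII approximation for the data side.

        NFKD decomposition splits letters from combining marks
        ('ę' -> 'e' + ogonek), then the ASCII encode drops the marks.
        Letters with no decomposition (like 'ł') are dropped outright -
        the "only supporting a subset" trade-off.
        """
        decomposed = unicodedata.normalize("NFKD", text)
        return decomposed.encode("ascii", "ignore").decode("ascii")

    # Store both: the original for display, the folded key for the
    # ASCII-only logic and database layers.
    record = {
        "display": "Robię Ci łaskę",
        "ascii_key": ascii_fold("Robię Ci łaskę"),  # -> 'Robie Ci aske'
    }

Note the lossiness: 'ł' has no ASCII decomposition, so it vanishes rather than becoming 'l' - exactly the subset problem mentioned above.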
you let your fingers hit the keyboard before thinking at all.
in english, we have laws that sanction the selling of street drugs, and other laws that sanction funding for women's sports. in the first case, sanction means "forbid", and in the second case it means "encourage". although these usages are opposite in meaning, the word is used both ways on a daily basis and nobody gets confused, because context is everything.
Robię Ci łaskę could mean "badass" and robię ci laskę could mean "bad ass": if you read "robie ci laske" in ASCII (hey, i'm thinking that rhymes), nobody (except you) would be confused by that; it's not how functioning brains work.
i provided enough evidence in my original comment that you should have been able to realize that i was already talking about the issue you are pointing out, so to rebut what i suggest, you need to account for what i said and not argue against a strawman's tabula rasa.