How about this: we expose the known conversions of wxMBConv as wxruby
global methods. Then we create $WXSTRING_TO_RUBY and $RUBY_TO_WXSTRING
globals that hold references to those procs. In the Ruby-string to
wxString typemap, we use Ruby to invoke $WXSTRING_TO_RUBY and
$RUBY_TO_WXSTRING. That way, if people want a different method, they can
write one (even a pure-Ruby one) that changes the behavior.

Nick

> (replying to myself...see below)
>
> Kevin Smith wrote:
>
>> Nick wrote:
>>
>>> I'll be the first to admit I'm out of my league in this area. It looks
>>> like the KCODE built-in variable is the string encoding. To get
>>> started, we may need to start with ANSI to Unicode conversion, and go
>>> from there.
>>
>> You're probably right. I am still curious how the other Ruby libraries
>> have handled this issue, but for now I would propose the following
>> three-phase approach:
>>
>> 1. All calls passing strings either way would (for now) assume whatever
>> encoding is specified by KCODE, and would automatically convert all
>> strings to or from that encoding. At this point, we would be
>> internationalized, and if we stopped here, it would still be releasable.
>
> Ugh. I just now looked at the definition of KCODE, and I think we should
> skip step 1 and go directly to step 2 (below). KCODE seems to have only
> a single value to represent ALL 8-bit encodings, and that's not good
> enough for us. We need to know whether it's 8859-1 or 8859-7, in order
> to create the proper corresponding Unicode string.
>
> We should use KCODE to initialize our variable if it is one of the
> non-8-bit options. Otherwise, fall back to 8859-1 as a default, I suppose.
>
>> 2. Create our own wxruby-specific global and use that instead of KCODE.
>> That would allow apps to use different encodings on the fly (but only
>> one at a time).
>>
>> 3. Add additional parameters or methods (to be decided later) that would
>> allow the encoding to be specified for each individual call.
>>
>> One other thing: Should we expose the wx conversion routines? Or does
>> Ruby already have convenient calls to convert between UTF-8 and 8859-1,
>> or between UCS-16 and KOI-8?
>>
>> Kevin
>
> Kevin
> _______________________________________________
> wxruby-users mailing list
> wxruby-users@rubyforge.org
> http://rubyforge.org/mailman/listinfo/wxruby-users
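A rough sketch of what the proposal could look like on the Ruby side. The
$WXSTRING_TO_RUBY and $RUBY_TO_WXSTRING names come from the message above;
the conversion bodies and the to_wx/to_ruby helpers are only placeholders
using Ruby's own String#encode, since the actual wxMBConv wrappers the
proposal depends on have not been written yet:

  # Placeholder defaults: treat the wxString side as UTF-8 and the Ruby
  # side as ISO-8859-1, purely for illustration. The real procs would call
  # the wxMBConv conversions once they are exposed as wxruby globals.
  $WXSTRING_TO_RUBY = lambda { |wx_str| wx_str.encode("ISO-8859-1", "UTF-8") }
  $RUBY_TO_WXSTRING = lambda { |rb_str| rb_str.encode("UTF-8", "ISO-8859-1") }

  # The typemaps would invoke the procs on every string crossing the
  # boundary, roughly equivalent to these (hypothetical) helpers:
  def to_wx(rb_str)
    $RUBY_TO_WXSTRING.call(rb_str)
  end

  def to_ruby(wx_str)
    $WXSTRING_TO_RUBY.call(wx_str)
  end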
I think that would work, as long as the wxMBConv method signatures aren't
too ugly. It would probably be more effort than my suggestion, as well as
being slightly less simple for end-users to switch encodings. But as you
point out, it could be more flexible.

Kevin

Nick wrote:
> How about this: we expose the known conversions of wxMBConv as wxruby
> global methods. Then we create $WXSTRING_TO_RUBY and $RUBY_TO_WXSTRING
> globals that hold references to those procs. In the Ruby-string to
> wxString typemap, we use Ruby to invoke $WXSTRING_TO_RUBY and
> $RUBY_TO_WXSTRING. That way, if people want a different method, they can
> write one (even a pure-Ruby one) that changes the behavior.
>
> Nick
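To make the comparison concrete, here is roughly how an application would
switch encodings under each scheme. The proc names come from Nick's
message; $WXRUBY_ENCODING is only a hypothetical name for the
wxruby-specific global proposed as step 2 in the earlier thread, which the
thread itself never names, and String#encode again stands in for the real
conversions:

  # Proc-based proposal: the application swaps in its own conversions,
  # even pure-Ruby ones.
  $RUBY_TO_WXSTRING = lambda { |s| s.encode("UTF-8", "ISO-8859-7") }
  $WXSTRING_TO_RUBY = lambda { |s| s.encode("ISO-8859-7", "UTF-8") }

  # Single-global scheme: the application only names the encoding and
  # wxruby performs the conversion internally ($WXRUBY_ENCODING is a
  # hypothetical name).
  $WXRUBY_ENCODING = "ISO-8859-7"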