<!-- locale/doc/html/rationale.html, 2015-10-18 17:31:48 +03:00 -->

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.6"/>
<title>Boost.Locale: Design Rationale</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
$(window).load(resizeHeight);
</script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td id="projectlogo"><img alt="Logo" src="boost-small.png"/></td>
<td style="padding-left: 0.5em;">
<div id="projectname">Boost.Locale
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.6 -->
<div id="navrow1" class="tabs">
<ul class="tablist">
<li><a href="index.html"><span>Main&#160;Page</span></a></li>
<li class="current"><a href="pages.html"><span>Related&#160;Pages</span></a></li>
<li><a href="modules.html"><span>Modules</span></a></li>
<li><a href="namespaces.html"><span>Namespaces</span></a></li>
<li><a href="annotated.html"><span>Classes</span></a></li>
<li><a href="files.html"><span>Files</span></a></li>
<li><a href="examples.html"><span>Examples</span></a></li>
</ul>
</div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('rationale.html','');});
</script>
<div id="doc-content">
<div class="header">
<div class="headertitle">
<div class="title">Design Rationale </div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><ul>
<li><a class="el" href="rationale.html#rationale_why">Why is it needed?</a></li>
<li><a class="el" href="rationale.html#why_icu">Why use an ICU wrapper instead of ICU?</a></li>
<li><a class="el" href="rationale.html#why_icu_wrapper">Why an ICU wrapper and not an implementation-from-scratch?</a></li>
<li><a class="el" href="rationale.html#why_icu_api_is_hidden">Why is the ICU API not exposed to the user?</a></li>
<li><a class="el" href="rationale.html#why_gnu_gettext">Why use GNU Gettext catalogs for message formatting?</a></li>
<li><a class="el" href="rationale.html#why_posix_names">Why are POSIX locale names used and not something like the BCP-47 IETF language tag?</a></li>
<li><a class="el" href="rationale.html#why_linear_chunks">Why do most parts of Boost.Locale work only on linear/contiguous chunks of text?</a></li>
<li><a class="el" href="rationale.html#why_abstract_api">Why is the entire Boost.Locale implementation hidden behind abstract interfaces instead of using template metaprogramming?</a></li>
<li><a class="el" href="rationale.html#why_no_special_character_type">Why does Boost.Locale not provide char16_t/char32_t support on non-C++0x platforms?</a></li>
</ul>
<h1><a class="anchor" id="rationale_why"></a>
Why is it needed?</h1>
<p>Why do we need a localization library, when standard C++ facets (should) provide most of the required functionality:</p>
<ul>
<li>Case conversion is done using the <code>std::ctype</code> facet</li>
<li>Collation is supported by <code>std::collate</code> and has nice integration with <code>std::locale</code> </li>
<li>There are <code>std::num_put</code> , <code>std::num_get</code> , <code>std::money_put</code> , <code>std::money_get</code> , <code>std::time_put</code> and <code>std::time_get</code> for numbers, time, and currency formatting and parsing.</li>
<li>There is a <code>std::messages</code> class that supports localized message formatting.</li>
</ul>
<p>So why do we need such a library if we have all this functionality in the standard library?</p>
<p>Almost every(!) facet has design flaws:</p>
<ul>
<li><code>std::collate</code> supports only one level of collation, not allowing you to choose whether case- or accent-sensitive comparisons should be performed.</li>
<li><code>std::ctype</code>, which is responsible for case conversion, assumes that all conversions can be done on a per-character basis. This is probably correct for many languages but it isn't correct in general. <br/>
<ol type="1">
<li>Case conversion may change a string's length. For example, the German word "grüßen" should be converted to "GRÜSSEN" in upper case: the letter "ß" should be converted to "SS", but the <code>toupper</code> function works on a single-character basis.</li>
<li>Case conversion is context-sensitive. For example, the Greek word "ὈΔΥΣΣΕΎΣ" should be converted to "ὀδυσσεύς", where the Greek letter "Σ" is converted to "σ" or to "ς", depending on its position in the word.</li>
<li>Case conversion cannot assume that each character fits in a single code unit, which is incorrect for both the UTF-8 and UTF-16 encodings, where an individual code point may be represented by up to 4 <code>char</code>s, or by two <code>wchar_t</code>s on the Windows platform. This makes <code>std::ctype</code> essentially useless with these encodings.</li>
</ol>
</li>
<li><code>std::numpunct</code> and <code>std::moneypunct</code> do not specify the code points for digit representation at all, so they cannot format numbers with the digits used under Arabic locales. For example, the number "103" is expected to be displayed as "١٠٣" in the <code>ar_EG</code> locale. <br/>
<code>std::numpunct</code> and <code>std::moneypunct</code> assume that the thousands separator is a single character. This is untrue for the UTF-8 encoding, where only code points in the Unicode 0&ndash;0x7F range occupy a single <code>char</code>. As a result, localized numbers can't be represented correctly under locales that use the Unicode "EN SPACE" character for the thousands separator, such as Russian. <br/>
This actually causes real problems under GCC and SunStudio compilers, where formatting numbers under a Russian locale creates invalid UTF-8 sequences.</li>
<li><code>std::time_put</code> and <code>std::time_get</code> have several flaws:<ol type="1">
<li>They assume that the calendar is always Gregorian, by using <code>std::tm</code> for time representation, ignoring the fact that in many countries dates may be displayed using different calendars.</li>
<li>They always use a global time zone, not allowing specification of the time zone for formatting. The standard <code>std::tm</code> doesn't even include a timezone field at all.</li>
<li><code>std::time_get</code> is not symmetric with <code>std::time_put</code>, so you cannot parse the dates and times created with <code>std::time_put</code>. (This issue is addressed in C++0x and in some STL implementations, such as the Apache standard C++ library.)</li>
</ol>
</li>
<li><code>std::messages</code> does not provide support for plural forms, making it impossible to correctly localize such simple strings as "There are X files in the directory".</li>
</ul>
<p>Also, many features are not really supported by <code>std::locale</code> at all: timezones (as mentioned above), text boundary analysis, number spelling, and many others. So it is clear that the standard C++ locales are problematic for real-world applications.</p>
<h1><a class="anchor" id="why_icu"></a>
Why use an ICU wrapper instead of ICU?</h1>
<p>ICU is a very good localization library, but it has several serious flaws:</p>
<ul>
<li>It is absolutely unfriendly to C++ developers. It ignores popular C++ idioms (the STL, RTTI, exceptions, etc.), instead mostly mimicking the Java API.</li>
<li>It provides support for only one kind of string, UTF-16, when some users may want other Unicode encodings. For example, UTF-8 is much more convenient for XML or HTML processing, and UTF-32 is easier to use. There is also no support for the "narrow" encodings that are still very popular, such as the ISO-8859 encodings.</li>
</ul>
<p>In contrast, Boost.Locale provides direct integration with <code>iostream</code>, allowing a more natural way of formatting data. For example:</p>
<div class="fragment"><div class="line">cout &lt;&lt; <span class="stringliteral">&quot;You have &quot;</span>&lt;&lt;<a class="code" href="group__manipulators.html#ga97c4997f9692834ea7b5ed3e8137d5fd">as::currency</a> &lt;&lt; 134.45 &lt;&lt; <span class="stringliteral">&quot; in your account as of &quot;</span>&lt;&lt;<a class="code" href="group__manipulators.html#ga820edf843e20847a0c4ccb8da0c4acd8">as::datetime</a> &lt;&lt; <a class="code" href="group__manipulators.html#gae669b101cbeaed6f6d246ebdcaa8f39c">std::time</a>(0) &lt;&lt; endl;</div>
</div><!-- fragment --><h1><a class="anchor" id="why_icu_wrapper"></a>
Why an ICU wrapper and not an implementation-from-scratch?</h1>
<p>ICU is one of the best localization/Unicode libraries available. It consists of about half a million lines of well-tested, production-proven source code that today provides state-of-the art localization tools.</p>
<p>Reimplementing even a small part of ICU's abilities is an infeasible project that would require many man-years. So the question is not whether we need to reimplement the Unicode and localization algorithms from scratch, but rather "Do we need a good localization library in Boost?"</p>
<p>Thus Boost.Locale wraps ICU with a modern C++ interface, allowing future reimplementation of parts with better alternatives, while bringing localization support to Boost today rather than in the not-so-near-if-at-all future.</p>
<h1><a class="anchor" id="why_icu_api_is_hidden"></a>
Why is the ICU API not exposed to the user?</h1>
<p>Yes, the entire ICU API is hidden behind opaque pointers and users have no access to it. This is done for several reasons:</p>
<ul>
<li>At some point, better localization tools may be adopted by a future C++ standard, and those tools may not use ICU directly.</li>
<li>At some point, it should be possible to switch the underlying localization engine to something else, maybe the native operating system API or some other toolkit such as GLib or Qt that provides similar functionality.</li>
<li>Not all localization is done within ICU. For example, message formatting uses GNU Gettext message catalogs. In the future more functionality may be reimplemented directly in the Boost.Locale library.</li>
<li>Boost.Locale was designed with ABI stability in mind, as this library is being developed not only for Boost but also for the needs of the <a href="http://cppcms.sourceforge.net/">CppCMS C++ Web framework</a>.</li>
</ul>
<h1><a class="anchor" id="why_gnu_gettext"></a>
Why use GNU Gettext catalogs for message formatting?</h1>
<p>There are many available localization formats. The most popular so far are OASIS XLIFF, GNU gettext po/mo files, POSIX catalogs, Qt ts/tm files, Java properties, and Windows resources. However, the last three are useful only in their specific areas, and POSIX catalogs are too simple and limited, so there are only two reasonable options:</p>
<ol type="1">
<li>Standard localization format OASIS XLIFF.</li>
<li>GNU Gettext binary catalogs.</li>
</ol>
<p>The first generally seems like the more correct localization solution, but it requires XML parsing to load documents, it is a very complicated format, and even ICU requires it to be compiled into ICU resource bundles beforehand.</p>
<p>On the other hand:</p>
<ul>
<li>GNU Gettext binary catalogs have a very simple, robust and yet very useful file format.</li>
<li>It is at present the most popular and de-facto standard localization format (at least in the Open Source world).</li>
<li>It has very simple and powerful support for plural forms.</li>
<li>It uses the original English text as the key, making the process of internationalization much easier because at least one basic translation is always available.</li>
<li>There are many tools for editing and managing gettext catalogs, such as Poedit and KBabel.</li>
</ul>
<p>So, even though the GNU Gettext mo catalog format is not an officially approved file format:</p>
<ul>
<li>It is a de-facto standard and the most popular one.</li>
<li>Its implementation is much easier and does not require XML parsing and validation.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd>Boost.Locale does not use any of the GNU Gettext code; it just reimplements the tool for reading and using mo-files, eliminating the biggest GNU Gettext flaw at present &ndash; the lack of thread safety when using multiple locales.</dd></dl>
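<p>As an illustration of the plural-forms support mentioned above, a gettext catalog entry might look like the following sketch (the strings and the German translation are hypothetical examples, not taken from an actual catalog):</p>

```
# Header entry: defines how a count n selects a plural form.
"Plural-Forms: nplurals=2; plural=(n != 1);\n"

msgid "There is one file in the directory"
msgid_plural "There are {1} files in the directory"
msgstr[0] "Im Verzeichnis befindet sich eine Datei"
msgstr[1] "Im Verzeichnis befinden sich {1} Dateien"
```

<p>At run time the catalog evaluates the <code>plural</code> expression against the supplied count and picks the matching <code>msgstr</code> entry.</p>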
<h1><a class="anchor" id="why_plain_number"></a>
Why is a plain number used for the representation of a date-time, instead of a Boost.DateTime date or Boost.DateTime ptime?</h1>
<p>There are several reasons:</p>
<ol type="1">
<li>A Gregorian date by definition cannot be used to represent dates in a calendar-independent way, because not all calendars are Gregorian.</li>
<li><code>ptime</code> &ndash; definitely could be used, but it has several problems: <br/>
<ul>
<li>It is created against either the GMT or the local time clock, whereas <code><a class="el" href="group__manipulators.html#gae669b101cbeaed6f6d246ebdcaa8f39c">time()</a></code> gives a representation that is independent of time zones (usually GMT time); only later should it be rendered in the time zone the user requests. <br/>
 The time zone is not a property of the time itself; rather, it is a property of time formatting. <br/>
</li>
<li><code>ptime</code> already defines <code>operator&lt;&lt;</code> and <code>operator&gt;&gt;</code> for time formatting and parsing.</li>
<li>The existing facets for <code>ptime</code> formatting and parsing were not designed in a way that the user can override; the major formatting and parsing functions are not virtual. This makes it impossible to reimplement them unless the developers of the Boost.DateTime library decide to change them. <br/>
 Also, the facets of <code>ptime</code> are not "correctly" designed in terms of the separation between formatting information and locale information: formatting information should be stored within <code>std::ios_base</code>, while information about locale-specific formatting should be stored in the facet itself. <br/>
 A user of the library should not have to create new facets to change simple formatting details like "display only the date" or "display both date and time."</li>
</ul>
</li>
</ol>
<p>Thus, at this point, <code>ptime</code> is not supported for formatting localized dates and times.</p>
<h1><a class="anchor" id="why_posix_names"></a>
Why are POSIX locale names used and not something like the BCP-47 IETF language tag?</h1>
<p>There are several reasons:</p>
<ul>
<li>POSIX locale names have one very important feature: they carry the character encoding. When you specify, for example, fr-FR, you do not actually know how the text should be encoded &ndash; UTF-8, ISO-8859-1, ISO-8859-15, or maybe Windows-1252. This may vary between operating systems and depends on the particular installation, so it is critical to provide all the required information.</li>
<li>ICU fully understands POSIX locales and knows how to treat them correctly.</li>
<li>They are the native locale names for most operating system APIs (with the exception of Windows).</li>
</ul>
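<p>For reference, a POSIX locale name follows this general pattern (the example names below are illustrative):</p>

```
language[_COUNTRY][.encoding][@variant]

ru_RU.UTF-8        # Russian, Russia, UTF-8 encoding
de_DE.ISO8859-15   # German, Germany, Latin-9 encoding
en_US              # English, USA, default system encoding
```

<p>The encoding part is exactly the information that a BCP-47 tag such as <code>fr-FR</code> does not carry.</p>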
<h1><a class="anchor" id="why_linear_chunks"></a>
Why do most parts of Boost.Locale work only on linear/contiguous chunks of text?</h1>
<p>There are two reasons:</p>
<ul>
<li>Boost.Locale relies heavily on third-party APIs such as ICU, POSIX, and the Win32 API, all of which work only on linear chunks of text; providing a non-linear API would merely hide this fact and would not bring a real performance advantage.</li>
<li>In fact, all known libraries that work with Unicode &ndash; ICU, Qt, Glib, the Win32 API, the POSIX API, and others &ndash; accept their input as a single linear chunk of text, and there is a good reason for this: <br/>
<ol type="1">
<li>Most supported text operations, such as collation and case handling, usually work on small chunks of text. For example, you would probably never want to compare two chapters of a book, but rather their titles.</li>
<li>Even very large texts require quite a small amount of memory; for example, the entire book "War and Peace" takes only about 3&#160;MB. <br/>
However:</li>
</ol>
</li>
<li>There are APIs that support stream processing. For example, character set conversion via the <code>std::codecvt</code> API works on streams of any size without problems.</li>
<li>If an API that is likely to operate on large chunks of text is introduced into Boost.Locale in the future, it will provide an interface for non-linear text handling.</li>
</ul>
<h1><a class="anchor" id="why_abstract_api"></a>
Why is the entire Boost.Locale implementation hidden behind abstract interfaces instead of using template metaprogramming?</h1>
<p>There are several major reasons:</p>
<ul>
<li>This is how the C++ <code>std::locale</code> class is built. Each feature is represented by a subclass of <code>std::locale::facet</code> that provides an abstract API for the specific operations it performs; see <a class="el" href="std_locales.html">Introduction to C++ Standard Library localization support</a>.</li>
<li>This approach makes it possible to switch the underlying API without changing the actual application code, even at run time, depending on performance and localization requirements.</li>
<li>This approach reduces compilation times significantly, which is very important for a library that may be used in almost every part of a program.</li>
</ul>
<h1><a class="anchor" id="why_no_special_character_type"></a>
Why does Boost.Locale not provide char16_t/char32_t support on non-C++0x platforms?</h1>
<p>There are several reasons:</p>
<ul>
<li>C++0x defines <code>char16_t</code> and <code>char32_t</code> as distinct types, so substituting them with something like <code>uint16_t</code> or <code>uint32_t</code> would not work: for example, writing a <code>uint16_t</code> value to a <code>uint32_t</code>-based stream would write a number to the stream rather than a character.</li>
<li>The C++ locale system works only if the standard facets such as <code>std::num_put</code> are installed into the existing instance of <code>std::locale</code>; however, in many standard C++ libraries these facets are specialized for each specific character type that the standard library supports, so an attempt to create a new facet would fail because it is not specialized.</li>
</ul>
<p>These are exactly the reasons why Boost.Locale fails with the current limited C++0x character support on GCC-4.5 (the second reason) and MSVC-2010 (the first reason).</p>
<p>So, basically, it is impossible to use non-standard character types with the C++ locale framework.</p>
<p>The best and most portable solution is to use the C++ <code>char</code> type with UTF-8 encoding. </p>
</div></div><!-- contents -->
</div><!-- doc-content -->
<div id="nav-path" class="navpath">
<ul>
<li class="footer">
&copy; Copyright 2009-2012 Artyom Beilis. Distributed under the <a href="http://www.boost.org/LICENSE_1_0.txt">Boost Software License</a>, Version 1.0.
</li>
</ul>
</div>
</body>
</html>