The CStringW with wcout Bug Under the Hood

I discussed in a previous blog post a subtle bug involving CStringW and wcout, and later I showed how to fix it.

In this blog post, I’d like to discuss in more detail what’s happening under the hood, and what triggers that bug.

Well, to understand the dynamics of that bug, you can consider the following simplified case of a function and a function template, implemented like this:

void f(const void*) {
  cout << "f(const void*)\n";
}

template <typename CharT> 
void f(const CharT*) {
  cout << "f(const CharT*)\n";
}

If s is a CStringW object, and you write f(s), which function will be invoked?

Well, you can write a simple compilable program containing these two functions, the required headers, and a simple main implementation like this:

int main() {
  CStringW s = L"Connie";
  f(s);
}

Then compile it, and observe the output. You know, printf-debugging™ is so cool! 🙂

Well, you’ll see that the program outputs “f(const void*)”. This means that the first function (the non-templated one, taking a const void*) is invoked.

So, why did the C++ compiler choose that overload? Why not f(const wchar_t*), synthesized from the second function template?

Well, the answer is in the rules that C++ compilers follow when doing template argument deduction. In particular, when deducing template arguments, the implicit conversions are not considered. So, in this case, the implicit CStringW conversion to const wchar_t* is not considered.

So, when overload resolution happens later, the only candidate available is f(const void*). Now, the implicit CStringW conversion to const wchar_t* is considered, and the first function is invoked.

Out of curiosity, if you comment out the first function, you’ll get a compiler error. MSVC complains with a message like this:

error C2672: ‘f’: no matching overloaded function found

error C2784: ‘void f(const CharT *)’: could not deduce template argument for ‘const CharT *’ from ‘ATL::CStringW’

The message is clear (almost…): “Could not deduce template argument for const CharT* from CStringW”: that’s because implicit conversions like this are not considered when deducing template arguments.
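
Conversely, if you pass the underlying const wchar_t* pointer explicitly (for example via CStringW::GetString, as in the fix discussed in the previous blog post), template argument deduction has an actual pointer type to work with, and the function template wins:

CStringW s = L"Connie";

f(s);             // prints "f(const void*)": deduction ignores the implicit conversion
f(s.GetString()); // prints "f(const CharT*)": CharT is deduced as wchar_t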

Well, what I’ve described above in a simplified case is basically what happens in the slightly more complex case of wcout.

wcout is an instance of wostream. wostream is declared in <iosfwd> as:

typedef basic_ostream<wchar_t, char_traits<wchar_t>> wostream;

Instead of the initial function f, in this case you have operator<<. In particular, here the candidates are an operator<< overload that is a member function of basic_ostream:

basic_ostream& basic_ostream::operator<<(const void *_Val)

and a template non-member function:

template<class _Elem, class _Traits> 
inline basic_ostream<_Elem, _Traits>& 
operator<<(basic_ostream<_Elem, _Traits>& _Ostr, const _Elem *_Val)

(This code is edited from the <ostream> standard header that comes with MSVC.)

When you write code like “wcout << s” (for a CStringW s), the implicit conversion from CStringW to const wchar_t* is not considered during template argument deduction. Then, overload resolution picks the basic_ostream::operator<<(const void*) member function (corresponding to the first f in the initial simplified case), so the string’s address is printed via this “const void*” overload (instead of the string itself).

On the other hand, when CStringW::GetString is explicitly invoked (as in “wcout << s.GetString()”), the compiler successfully deduces the template arguments for the non-member operator<< (deducing wchar_t for _Elem). And this operator<<(wostream&, const wchar_t*) prints the expected wchar_t string.
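
To summarize the two behaviors in code:

CStringW s = L"Connie";

wcout << s;             // prints the string's address: the operator<<(const void*) member overload
wcout << s.GetString(); // prints "Connie": the deduced operator<<(wostream&, const wchar_t*)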

I know… There are aspects of C++ templates that are not easy.

 

Wiring CStringW with Output Streams

We saw earlier that there’s a subtle bug involving CStringW and wcout.

If you really want to make CStringW work with wcout without the additional GetString call, you can follow a common pattern that is used to enable a C++ class to work with output streams.

In particular, you can define an overload of operator<< that takes references to the output stream and to the class of interest, e.g.:

std::wostream& operator<<(std::wostream& os, 
                          const CStringW& str)
{
    return (os << str.GetString());
}

Note that the call to CStringW::GetString is hidden inside the implementation of this operator<< overload.

With this overload, the following code will output the expected string:

CStringW s = L"Connie";
wcout << s;

Note that this will work also for other wcout-ish objects (output streams), like wostringstream.
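
For example, assuming the operator<< overload above is in scope, the same CStringW can be streamed into a wostringstream as well:

#include <sstream>  // std::wostringstream

CStringW s = L"Connie";

std::wostringstream os;
os << s;                         // picks the custom operator<<(wostream&, const CStringW&)
std::wstring result = os.str();  // result contains L"Connie"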

 

Marshal STL string vectors Using Safe Arrays

Suppose you have a vector<string> in some cross-platform C++ code, and you want to marshal it across module or language boundaries on the Windows platform: Using a safe array is a valid option.

So, how can you achieve this goal?

Well, as it’s common in programming, you have to combine some building blocks, and you get the solution to your problem.

A safe array can contain many different types, but the “natural” type for a Unicode string is BSTR. A BSTR is basically a length-prefixed Unicode string encoded using UTF-16.

ATL offers a convenient helper class to simplify safe array programming in C++: CComSafeArray. The MSDN Magazine article “Simplify Safe Array Programming in C++ with CComSafeArray” discusses, with concrete sample code, how to use this class. In particular, the section “Producing a Safe Array of Strings” is of interest here.

So, this was the first building block. Now, let’s discuss the second.

You have a vector<string> as input. An important question to ask is what kind of encoding is used for the strings stored in the vector. It’s very common to store Unicode strings in std::string using the UTF-8 encoding. So, there’s an encoding mismatch here: The input strings stored in the std::vector use UTF-8, but the output strings that will be stored as BSTRs in the safe array use UTF-16. OK, not a big problem: You just have to convert from UTF-8 to UTF-16. This is the other building block to solve the initial problem, and it’s discussed in the MSDN Magazine article “Unicode Encoding Conversions with STL Strings and Win32 APIs”.

So, to wrap up: You can go from a vector<string> to a safe array of BSTR strings following this path:

  1. Create a CComSafeArray<BSTR> of the same size as the input std::vector
  2. For each string in the input vector<string>, convert the UTF-8-encoded string to the corresponding UTF-16 wstring
  3. Create a CComBSTR from the previous wstring
  4. Invoke CComSafeArray::SetAt() to copy the CComBSTR into the safe array

Steps #1, #3 and #4 are discussed in the CComSafeArray MSDN article; step #2 is discussed in the Unicode encoding conversion MSDN article. A minimal sketch combining these steps is shown below.
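
This is just a sketch, not code taken from either article: it assumes a Utf8ToUtf16 helper (a UTF-8 std::string to UTF-16 std::wstring conversion like the one discussed in the encoding conversion article).

#include <atlbase.h>  // CComBSTR, AtlThrow
#include <atlsafe.h>  // CComSafeArray
#include <string>
#include <vector>

// Assumed helper: converts a UTF-8 std::string to a UTF-16 std::wstring
// (see the Unicode encoding conversion article for an implementation).
std::wstring Utf8ToUtf16(const std::string& utf8);

CComSafeArray<BSTR> ToSafeArrayOfBstr(const std::vector<std::string>& utf8Strings)
{
    // Step #1: create a safe array with as many elements as the input vector
    CComSafeArray<BSTR> sa(static_cast<ULONG>(utf8Strings.size()));

    for (LONG i = 0; i < static_cast<LONG>(utf8Strings.size()); ++i)
    {
        // Step #2: convert the UTF-8 string to UTF-16
        const std::wstring utf16 = Utf8ToUtf16(utf8Strings[i]);

        // Step #3: wrap the UTF-16 string in a CComBSTR
        CComBSTR bstr(utf16.c_str());

        // Step #4: copy the BSTR into the safe array
        HRESULT hr = sa.SetAt(i, bstr);
        if (FAILED(hr))
        {
            AtlThrow(hr);
        }
    }

    return sa;
}

When the safe array has to cross the module boundary, you would typically call Detach() on the CComSafeArray and hand the raw SAFEARRAY pointer to the caller.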

Subtle Bug with std::min/max Function Templates

Suppose you have a function f that returns a double, and you want to store in a variable the value returned by this function if it’s a positive number, or zero if the return value is negative. This line of C++ code tries to do that:

double x = std::max(0, f(/* something */));

Unfortunately, this apparently innocent code won’t compile!

The error message produced by VS2015’s MSVC compiler is not very clear, as is often the case with C++ code involving templates.

So, what’s the problem with that code?

The problem is that the std::max function template is declared something like this:

template <typename T> 
const T& max(const T& a, const T& b)

If you look at the initial code invoking std::max, the first argument is of type int; the second argument is of type double (i.e. the return type of f).

Now, if you look at the declaration of std::max, you’ll see that both parameters are expected to be of the same type T. So, the C++ compiler complains as it’s unable to deduce the type of T in the code calling std::max: should T be int or double?

This ambiguity triggers a compile-time error.

To fix this error, you can use the double literal 0.0 instead of 0.
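
For example, with both arguments of type double, T is deduced as double and the code compiles:

double x = std::max(0.0, f(/* something */));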

And, what if instead of 0 there’s a variable of type int?

Well, in this case you can either static_cast that variable to double:

double x = std::max(static_cast<double>(n), f(/* something */));

or, as an alternative, you can explicitly specify the double template type for std::max:

double x = std::max<double>(n, f(/* something */));

C++ Wrappers for Windows Registry APIs

I uploaded on GitHub some C++ code of mine that wraps some Windows registry C-interface APIs using RAII and STL classes like std::wstring and std::vector, and signals error conditions using exceptions.

Using these high-level C++ wrappers, you can easily access the Windows registry with simple code like this:

// Open a registry key
RegKey key{ 
    HKEY_CURRENT_USER, 
    LR"(SOFTWARE\MyKey\SubKey)" 
};

// Read a DWORD value
DWORD dw = key.GetDwordValue(L"MyValue1");

// Read a string value
wstring s = key.GetStringValue(L"MyValue2");

// Enumerate the values under the given key
auto values = key.EnumValues();

// etc.

In the May issue of MSDN Magazine you’ll find an article describing some of the techniques applied in this code.

EDIT 2017-05-02: Added link to my MSDN Magazine article.

Updates to the ATL/STL Unicode Encoding Conversion Code

I’ve updated my code on GitHub for converting between UTF-8, using STL std::string, and UTF-16, using ATL CStringW.

Now, on errors, the code throws instances of a custom exception class that is derived from std::runtime_error, and is capable of containing more information than a simple CAtlException.

Moreover, I’ve added a couple of overloads for converting from source string views (specified using an STL-style [start, finish) pointer range). This makes it possible to efficiently convert only portions of longer strings, without creating ad hoc CString or std::string instances to store those partial views.

 

Custom C++ String Pool Allocator on GitHub

I’ve uploaded on GitHub some C++ code of mine, implementing a custom string pool allocator.

The basic idea is to allocate big chunks of memory, and then serve single string allocations carving memory from inside those blocks, with a simple, fast pointer increment.
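
To give an idea of the technique, here’s a minimal sketch of the pointer-bump approach described above (not the actual implementation on GitHub):

#include <cstddef>
#include <cwchar>   // std::wmemcpy
#include <memory>
#include <vector>

class StringPool {
public:
    // Copies [str, str + len) into the pool and returns a pointer to the
    // NUL-terminated copy. Most calls are served with a simple pointer bump;
    // a new big chunk is allocated only when the current one is exhausted.
    const wchar_t* Allocate(const wchar_t* str, size_t len) {
        const size_t needed = len + 1;  // include the terminating NUL
        if (needed > m_left) {
            size_t chunkLen = kChunkSize;
            if (needed > chunkLen) {
                chunkLen = needed;  // oversized string: give it its own chunk
            }
            m_chunks.push_back(std::make_unique<wchar_t[]>(chunkLen));
            m_next = m_chunks.back().get();
            m_left = chunkLen;
        }
        wchar_t* dest = m_next;
        std::wmemcpy(dest, str, len);
        dest[len] = L'\0';
        m_next += needed;  // the fast pointer increment
        m_left -= needed;
        return dest;
    }

private:
    static const size_t kChunkSize = 1024 * 1024;       // chunk size in wchar_t's (arbitrary)
    std::vector<std::unique_ptr<wchar_t[]>> m_chunks;    // owns the big chunks
    wchar_t* m_next = nullptr;  // next free slot in the current chunk
    size_t m_left = 0;          // wchar_t's left in the current chunk
};

Freeing everything at once, when the pool is destroyed, is what makes this so much cheaper than a separate heap allocation per string.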

There’s also a benchmark comparing this custom allocator vs. STL’s strings.

Custom string pool allocator benchmark results.

The results clearly show that both allocating strings that way and sorting them are faster than using the default std::wstring class.

 

The New C++11 u16string Doesn’t Play Well with Win32 APIs

Someone asked why in this article I used std::wstring instead of the new C++11 std::u16string for Unicode UTF-16 text.

The key point is that Win32 Unicode UTF-16 APIs use wchar_t as their code unit type; wstring is based on wchar_t, so it works fine with those APIs.

On the other hand, u16string is based on the char16_t type, which is a new built-in type introduced in C++11, and is different from wchar_t.

So, if you have a u16string variable and you try to use it with a Win32 Unicode API, e.g.:

// std::u16string s;
SetWindowText(hWnd, s.c_str());

Visual Studio 2015 complains:

error C2664: ‘BOOL SetWindowTextW(HWND,LPCWSTR)’: cannot convert argument 2 from ‘const char16_t *’ to ‘LPCWSTR’

note: Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

wchar_t is non-portable, in the sense that its size isn’t specified by the standard. But, all in all, if you are invoking Win32 APIs you are already in an area of code that is non-portable (Win32 APIs are Windows platform specific), so adding wstring (or even CString!) to that mix doesn’t change anything with respect to portability (or lack thereof).
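
A quick sketch of the contrast, assuming a Unicode build (so SetWindowText resolves to SetWindowTextW, as in the error above):

#include <windows.h>
#include <string>

void SetTitle(HWND hWnd)
{
    // std::wstring plays well with Win32 UTF-16 APIs:
    // wchar_t is the native Win32 code unit type, so no cast is needed.
    std::wstring ws = L"Connie";
    SetWindowText(hWnd, ws.c_str());

    // std::u16string only compiles with an explicit cast, as the compiler
    // note suggests. On Windows, wchar_t and char16_t are both 16-bit UTF-16
    // code units, so the cast works in practice, but it's an extra hoop.
    std::u16string us = u"Connie";
    SetWindowText(hWnd, reinterpret_cast<LPCWSTR>(us.c_str()));
}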

 

Unicode Encoding Conversions Between ATL and STL Strings

At the Windows platform-specific level, when using C++ frameworks like ATL (or MFC), it makes sense to represent strings using CString (CStringW in Unicode builds, which should be the default in modern Windows C++ code).

On the other hand, in cross-platform C++ code, using std::string makes sense as well.

The same Unicode text can be encoded in UTF-16 when stored in CString(W), and in UTF-8 for std::string.

So, there’s a need to convert between those two Unicode representations. I discussed the details of such conversions in my MSDN Magazine article “Unicode Encoding Conversions with STL Strings and Win32 APIs”. However, the C++ code associated with that article used std::wstring for UTF-16 strings.

I’ve created a new repository on GitHub, where I uploaded reusable code (in the form of a convenient header-only module) for converting between UTF-16 and UTF-8, using CStringW for UTF-16, and std::string for UTF-8. Please feel free to check it out!
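
For reference, the core of the UTF-8 to UTF-16 direction is a two-step call to the Win32 MultiByteToWideChar API, writing directly into a CStringW buffer. This is a simplified sketch (the function name is just for illustration, and the code in the repo adds better error reporting and the reverse direction as well):

#include <windows.h>
#include <atlbase.h>  // AtlThrowLastWin32
#include <atlstr.h>   // CStringW
#include <string>

CStringW Utf16FromUtf8(const std::string& utf8)
{
    if (utf8.empty())
    {
        return CStringW();
    }

    // First call: get the length, in wchar_t's, of the resulting UTF-16 string
    const int utf16Length = ::MultiByteToWideChar(
        CP_UTF8, MB_ERR_INVALID_CHARS,
        utf8.data(), static_cast<int>(utf8.length()),
        nullptr, 0);
    if (utf16Length == 0)
    {
        AtlThrowLastWin32();  // e.g. invalid UTF-8 sequence in the input
    }

    // Second call: do the actual conversion, directly into the CStringW buffer
    CStringW utf16;
    wchar_t* utf16Buffer = utf16.GetBuffer(utf16Length);
    ::MultiByteToWideChar(
        CP_UTF8, MB_ERR_INVALID_CHARS,
        utf8.data(), static_cast<int>(utf8.length()),
        utf16Buffer, utf16Length);
    utf16.ReleaseBufferSetLength(utf16Length);
    return utf16;
}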

 

Passing std::vector’s Underlying Array to C APIs

Often, there’s a need to pass some data stored as an array from C++ to C-interface APIs. The “default” first-choice STL container for storing arrays in C++ is std::vector. So, how to pass the array content managed by std::vector to a C-interface API?

The Wrong Way

I’ve seen this kind of C++ code:

// v is a std::vector<BYTE>.
// Pass it to a C-interface API: pointer + size in bytes
DoSomethingC( 
  /* Some cast, e.g.: (BYTE*) */ &v, 
  sizeof(v) 
);

That’s wrong, in two ways: for both the pointer and the size. Let’s talk about the size first: sizeof(v) is the size, in bytes, of an instance of the std::vector object itself, which is in general different from the size in bytes of the array data managed by the vector. For example, suppose that a std::vector is implemented using three pointers (e.g. to the beginning of the data, to the end of the data, and to the end of the reserved capacity): in this case, sizeof(v) would be sizeof(pointer) * 3, that is 8 bytes (the pointer size on 64-bit architectures) * 3 = 24 bytes on 64-bit, or 4 * 3 = 12 bytes on 32-bit.

But what the author of that piece of code actually wanted was the size in bytes of the array data managed (pointed to) by the std::vector, which you can get by multiplying the vector’s element count, returned by v.size(), by the size in bytes of a single vector element. For a vector<BYTE>, the value returned by v.size() is just fine (in fact, sizeof(BYTE) is one).
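
To make the difference concrete (the exact value of sizeof(v) is implementation-dependent; 24 matches the three-pointer 64-bit layout described above):

std::vector<BYTE> v(1000);  // a vector managing 1000 bytes of data

size_t wrongSize = sizeof(v);                  // size of the vector object itself
                                               // (e.g. 24 bytes for the 64-bit three-pointer layout)
size_t correctSize = v.size() * sizeof(BYTE);  // 1000: size in bytes of the managed array data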

Now let’s discuss the address (pointer) problem. “&v” points to the beginning of the std::vector’s internal representation (i.e. the internal “guts” of std::vector), which is implementation-defined, and isn’t interesting at all for the purpose of that piece of code. Actually, confusing the std::vector’s internal implementation details with the array data managed by the vector is dangerous: in case of write access, the called function will end up stomping over the vector’s internal state with unrelated bytes. So, on return, the vector object will be in a corrupted and unusable state, and the memory previously owned by the vector will be leaked.

In case of read access, the vector’s internal state will be read, instead of the intended array content managed by the std::vector.

The presence of a cast is also a signal that something may be wrong in the user’s code, and maybe the C++ compiler was actually helping with a warning or an error message, but it was silenced instead.

So, how to fix that? Well, the pointer to the array data managed by std::vector can be retrieved calling the vector::data() method. This method is offered in both a const version for read-only access to the vector’s content, and in a non-const version, for read-write access.

The Correct Way

So, the correct code to pass the std::vector’s underlying array data to a C-interface API expecting a pointer and a size would be, for the case discussed above:

DoSomethingC(v.data(), v.size());

Or, if you have e.g. a std::vector<double> and the size parameter is expressed in bytes (instead of element count):

DoSomethingC(v.data(), v.size() * sizeof(double));

An alternative syntax to calling vector::data() would be “&v[0]”, although using vector::data() makes the intent clearer to me. Moreover, vector::data() works also for empty vectors, returning nullptr in this case. Instead, “&v[0]” triggers a “vector subscript out of range” debug assertion failure in MSVC when used on an empty vector (in fact, for an empty vector it doesn’t make sense to access the first item at index zero, as the vector is empty and there’s no first item).
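
In code (with MSVC, as noted above, data() returns nullptr for an empty vector):

std::vector<BYTE> empty;

const BYTE* p = empty.data();  // OK: nullptr, since the vector has no elements (just don't dereference it)
// const BYTE* q = &empty[0];  // out-of-range access: triggers the debug assertion shown below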

&v[0] on an empty vector: debug assertion failure