The Case of the Context-Menu Shell Extension That Doesn’t Get Invoked by Explorer

A developer was working on a context-menu shell extension.

He wrote all the skeleton infrastructure code, including some code to display the selected files when the shell extension gets invoked by Windows Explorer. It’s kind of the “Hello World” for context-menu shell extensions.

He happily builds his code within Visual Studio, copies the shell extension DLL to a virtual machine, registers the extension in the VM, right-clicks on a bunch of files… but nothing happens. His extension’s menu items just don’t show up in the Explorer context-menu.

“What’s going wrong?” he thinks.

He starts thinking of all sorts of hypotheses and troubleshooting strategies.

He once again manually registers the COM DLL that implements the shell extension from the Command Prompt:

regsvr32 CoolContextMenuExtension.dll

A message box pops up, informing him that the registration has succeeded.

He tries right-clicking some files, but, once again, his extension’s menu items don’t show up.

He then checks that the extension was properly registered under the list of approved shell extensions (this was part of his shell extension infrastructure C++ code):

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved

He finds the GUID of the shell extension in there, as expected.

He then checks that the extension was registered under the proper subkey in HKEY_CLASSES_ROOT. Found it there, too!

He starts wondering… what could be going wrong?

He checks the C++ infrastructure code generated by the ATL Visual Studio wizard, including his own additions and modifications. The implementations of DllRegisterServer and DllUnregisterServer seem correct.

He also checks his own C++ code in the C++ class that implements the shell extension’s COM object. He has correctly implemented IShellExtInit::Initialize, and the three methods of IContextMenu: GetCommandString, InvokeCommand, and QueryContextMenu.

He takes a look at the class inheritance list: both IShellExtInit and IContextMenu are listed among the base classes.

The IShellExtInit and IContextMenu COM interfaces are correctly listed among the base classes.

He feels desperate. What’s going wrong?? He wrote several shell extensions in C++ in the past. Maybe there’s some bug in the current version of Visual Studio 2019 he is using?

Why wasn’t his context-menu shell extension getting called by Explorer?

 

I took a look at that code.

At some point, I was enlightened.

I examined the COM interface map in the header file of the C++ shell extension class.

There’s a bug in the shell extension C++ object’s COM interface map.

Wow! Can you spot the error?

Something is missing in there!

In fact, in addition to deriving the C++ shell extension class from IContextMenu, you also have to add an entry for IContextMenu in the COM map!

I quickly added the missing line:

    COM_INTERFACE_ENTRY(IContextMenu)

The COM interface map, with both the IShellExtInit and IContextMenu entries.
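For illustration, here’s a minimal sketch of what such a corrected ATL class and COM map might look like (the class name, CLSID, and method list are hypothetical, modeled on the DLL name mentioned above):

// Minimal sketch of a corrected ATL shell extension class
// (class and CLSID names are hypothetical)
class ATL_NO_VTABLE CCoolContextMenuExtension :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CCoolContextMenuExtension, &CLSID_CoolContextMenuExtension>,
    public IShellExtInit,
    public IContextMenu
{
public:
    BEGIN_COM_MAP(CCoolContextMenuExtension)
        COM_INTERFACE_ENTRY(IShellExtInit)
        COM_INTERFACE_ENTRY(IContextMenu)   // <-- the previously missing entry
    END_COM_MAP()

    // IShellExtInit
    STDMETHODIMP Initialize(PCIDLIST_ABSOLUTE pidlFolder,
                            IDataObject* pDataObject, HKEY hKeyProgId) override;

    // IContextMenu
    STDMETHODIMP QueryContextMenu(HMENU hMenu, UINT indexMenu, UINT idCmdFirst,
                                  UINT idCmdLast, UINT uFlags) override;
    STDMETHODIMP InvokeCommand(CMINVOKECOMMANDINFO* pInfo) override;
    STDMETHODIMP GetCommandString(UINT_PTR idCmd, UINT uType, UINT* pReserved,
                                  CHAR* pszName, UINT cchMax) override;
};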

I rebuilt the extension, registered it, and, with great pleasure and satisfaction, the new custom items in the Explorer’s context-menu showed up! And, after selecting a bunch of files to test the extension, the message box invoked by the shell extension C++ code was correctly shown.

All right! 😊

So, what was going wrong under the hood?

Basically, since the context-menu shell extension was correctly registered under the proper key in the Windows Registry, I think that Windows Explorer was actually trying to invoke the shell extension’s methods.

In particular, I think Explorer called QueryInterface on the shell extension’s COM object, to get a pointer to IContextMenu. But, since IContextMenu was missing from the COM map, the QueryInterface code implemented by the ATL framework was unable to return the interface pointer back to Explorer. As a consequence of that, Explorer didn’t recognize the extension as a valid context-menu extension (despite the extension being correctly registered in the Windows Registry), and didn’t even call the IShellExtInit::Initialize method (that was actually available, as IShellExtInit had its own entry correctly listed in the COM map from the beginning).
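In other words, the failing step was presumably something like the following (a simplified sketch of the kind of call Explorer performs, not Explorer’s actual code; spExtension is a hypothetical CComPtr<IUnknown> pointing to the extension’s COM object):

// Simplified sketch: Explorer asks the extension's COM object for IContextMenu
CComPtr<IContextMenu> spContextMenu;
HRESULT hr = spExtension->QueryInterface(IID_PPV_ARGS(&spContextMenu));
// With IContextMenu missing from the ATL COM map, ATL's QueryInterface
// implementation returns E_NOINTERFACE here, so Explorer discards the
// extension and never even calls IShellExtInit::Initialize.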

The bug is in the details!

So, the moral of the story is to always check the COM map in addition to the base class list in your shell extension’s C++ class header file. The COM interfaces need to both be in the base class list and have their own entries in the COM map.

In addition, this experience suggests that it would be great if Visual Studio issued at least a warning when a COM interface like IContextMenu is listed among a class’s base classes but is missing from its COM map. That would be an excellent time-and-bug-saving addition, Visual Studio IDE/Visual C++/Visual Assist X teams!

 

Where Should I Register My Context-Menu Shell Extension to Operate on All Files and Folders?

A developer was working on a context-menu shell extension. He registered the extension under:

HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers

The developer noted that his context-menu extension was called for all files. But he also wanted his extension to be called for all directories.

If you want your context-menu shell extension to be invoked on all files and all file folders, the correct key to use for registration is HKCR\AllFileSystemObjects, not HKCR\*.
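In other words, the extension should be registered under a key like this (the handler name is hypothetical):

HKEY_CLASSES_ROOT\AllFileSystemObjects\shellex\ContextMenuHandlers\CoolContextMenuExtension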

For more information, MSDN has a list of Predefined Shell Objects.

 

Simplifying Windows Registry Programming with the C++ WinReg Library

The native Windows Registry API is a low-level C-interface API that is kind of hard and cumbersome to use.

For example, suppose that you simply want to read a string value under a given key. You would end up writing code like this:

Sample code excerpt to read a string value from the Windows Registry using the native Windows API.
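A minimal sketch of that kind of native-API code follows (using RegGetValue; the key and value names are hypothetical, and error handling is abbreviated):

// Sketch: query the size (in bytes) of a registry string value,
// using the native Windows Registry API.
// Key and value names are hypothetical; error handling is abbreviated.
DWORD dataSizeInBytes = 0;
LONG result = ::RegGetValueW(
    HKEY_CURRENT_USER,
    L"SOFTWARE\\MyKey",     // hypothetical subkey
    L"ReadMe",              // hypothetical value name
    RRF_RT_REG_SZ,
    nullptr,                // type information not needed
    nullptr,                // no destination buffer yet: just ask for the size
    &dataSizeInBytes);
if (result != ERROR_SUCCESS)
{
    // ... handle the error ...
}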

And this is just the part to query the destination string length. Then, you need to allocate a string object with proper size (and pay attention to proper size-in-bytes-to-size-in-wchar_ts conversion!), and after that you can finally read the actual string value into the local string object.

That’s definitely a lot of bug-prone C++ code, and this is just to query a string value!

Moreover, in modern C++ code you should prefer using nice higher-level resource manager classes with automatic resource cleanup, instead of raw HKEY handles.

Fortunately, it’s possible to hide that kind of complex and bug-prone code in a nice C++ library that offers a much more programmer-friendly interface. This is basically what my C++ WinReg library does.

You can query a string value with just one simple line of C++ code using WinReg.
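For example (a sketch using the library’s RegKey class, which is shown in more detail later in this post; key and value names are hypothetical):

RegKey key{ HKEY_CURRENT_USER, L"SOFTWARE\\MyKey" };
std::wstring s = key.GetStringValue(L"ReadMe");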

WinReg is an open-source C++ library, available on GitHub. For the sake of convenience, I package and distribute it as a header-only library, which is also available via the vcpkg package manager.

If you need to access the Windows Registry from your C++ code, you may want to give C++ WinReg a try.

 

Printing UTF-8 Text to the Windows Console: Sample Code

In a previous blog post, you saw that to print some UTF-8-encoded text to the Windows console, you first have to convert it to UTF-16.

In fact, calling _setmode to change stdout to _O_U8TEXT, and then trying to print UTF-8-encoded text with cout, resulted in a debug assertion failure in the VC++ runtime library. (Please take a look at the aforementioned blog post for more details.)

That blog post lacked some compilable demo code showing the solution, though. So, here you are:

// Test printing UTF-8-encoded text to the Windows console

#include "UnicodeConv.hpp"  // My Unicode conversion helpers

#include <fcntl.h>
#include <io.h>
#include <stdint.h>
#include <stdio.h>   // for _fileno and stdout
#include <iostream>
#include <string>

int main()
{
    // Change stdout to Unicode UTF-16.
    //
    // Note: _O_U8TEXT doesn't seem to work, e.g.:
    // https://blogs.msmvps.com/gdicanio/2017/08/22/printing-utf-8-text-to-the-windows-console/
    //
    _setmode(_fileno(stdout), _O_U16TEXT);


    // Japanese name for Japan, encoded in UTF-8
    uint8_t utf8[] = {
        0xE6, 0x97, 0xA5, // U+65E5
        0xE6, 0x9C, 0xAC, // U+672C
        0x00
    };
    std::string japan(reinterpret_cast<const char*>(utf8));

    // Print UTF-16-encoded text
    std::wcout << Utf16FromUtf8("Japan") << L"\n\n";
    std::wcout << Utf16FromUtf8(japan)   << L'\n';
}

This is the output:

Unicode UTF-8 text converted to UTF-16 and printed out to the Windows console

All right.

Note also that I set the Windows console font to MS Gothic to be able to correctly render the Japanese kanji.

The compilable C++ source code, including the implementation of the Utf16FromUtf8 Unicode conversion helper function, can be found here on GitHub.

 

C++ WinReg Update

The Windows Registry native C API is very low-level and hard to use.

A while back I developed some convenient higher-level C++ wrappers that should make Windows Registry programming easier and more fun. You can find them on GitHub.

Just to give you a taste of this C++ WinReg library, you can simply open a registry key with code like this:

RegKey key{ HKEY_CURRENT_USER, L"SOFTWARE\\MyKey" };

And you can read registry values like this:

DWORD dw = key.GetDwordValue(L"MagicCode");
wstring s = key.GetStringValue(L"ReadMe");

Enumerating values under a given key is as easy as:

auto values = key.EnumValues();

Error management is performed using C++ exceptions (instead of “raw” error codes like in the native C API).
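For example, a failed read might be handled like this (a sketch; I’m assuming the winreg::RegException type described in the project’s README):

// Sketch: handling a registry error via C++ exceptions
try
{
    RegKey key{ HKEY_CURRENT_USER, L"SOFTWARE\\MyKey" };
    DWORD dw = key.GetDwordValue(L"MagicCode");
    // ... use the value ...
}
catch (const winreg::RegException& e)
{
    // The exception wraps the underlying Windows Registry error code
    std::cout << "Registry error: " << e.what() << '\n';
}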

You can find more info in the README file.

I haven’t touched that project in a few months (being very busy), but yesterday I accepted an external contribution and merged a pull request adding a DeleteTree feature.

I’d like to take this occasion to thank everyone who has shown interest in and appreciation for this project.

Happy coding!

 

ATL::CStringW vs. std::wstring Performance: String Sorting and Comparisons

According to previous measurements, std::wstring performs better than ATL::CStringW:

  1. in all string concatenation tests
  2. when sorting string vectors that are made of small strings, thanks to std::basic_string’s SSO (small string optimization)

So, I focused my attention on the string vector sorting scenario, and it seems to me that (at least in the VS2015 implementation) the slowdown of (non-SSO) wstrings is caused by wmemcmp calls, which are used to compare wstrings when sorting the string vectors. On the other hand, CStringW invokes wcscmp, which seems to run faster.

In fact, invoking std::sort with a custom comparator function that calls wcscmp to compare the C-style string pointers returned by wstring::c_str results in faster vector<wstring> sorting times. So, in this case wstring sorting performs better than CStringW even for non-SSO strings.
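For example, a sketch of this kind of custom comparator (safe only if the strings contain no embedded NULs; see the caveat below):

// Sort a vector<wstring> comparing the underlying C-style strings with wcscmp
// (assumes the strings contain no embedded NULs)
std::sort(v.begin(), v.end(),
    [](const std::wstring& a, const std::wstring& b)
    {
        return wcscmp(a.c_str(), b.c_str()) < 0;
    });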

However, as Stephan T. Lavavej pointed out, std::basic_string supports embedded NULs, so wstring cannot use wcscmp (which works only for C-style NUL-terminated strings, without embedded NULs).

 

String Performance Tests: ATL vs. STL

I wrote some C++ code to test the performance of the ATL CStringW class vs. the C++ Standard Library’s std::wstring.

There are several aspects that can be considered when comparing string class performance; in the aforementioned code, I tested string vector sorting and string concatenation.

The code is available in this repository on GitHub.

You can take a look at the README for further details (including my test results).

 

Subtle Bug When Converting Strings to Lowercase

Suppose that you want to convert a std::string object to lowercase.

The first thing you would probably do is search the std::string documentation for a convenient method named to_lower, or something like that. Unfortunately, there’s no such method.

So, you might start developing your own “to_lower” function. A typical implementation I’ve seen of such a custom function goes something like this: for each character in the input string, convert it to lowercase invoking std::tolower. In fact, there’s even this sample code on cppreference.com:

// From http://en.cppreference.com/w/cpp/string/byte/tolower

std::string str_tolower(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(), 
                   [](unsigned char c) { return std::tolower(c); }
                  );
    return s;
}

Well, if you try this code with something like str_tolower("Connie"), everything seems to work fine, and you get "connie" as expected.

Now, since C++ folks like storing UTF-8-encoded text in std::string objects, in some large code base someone happily takes the aforementioned str_tolower function, and invokes it with their lovely UTF-8 strings. Fun ensues! …Well, actually, bugs ensue.

So, the problem is that str_tolower, under the hood, calls std::tolower on each char in the input string. While this works fine for pure ASCII strings like “Connie”, such code is a bug farm for UTF-8 strings. In fact, UTF-8 is a variable-width character encoding. So, there are some Unicode “characters” (code points) that are encoded in UTF-8 using one byte, while other characters are encoded using two bytes, and so on, up to four bytes. The poor std::tolower has no clue of such UTF-8 encoding features, so it innocently spits out wrong results, char by char.

For example, I tried invoking the above function on “PERCHÉ” (the last character is the Unicode U+00C9 LATIN CAPITAL LETTER E WITH ACUTE, encoded in UTF-8 as the two-byte sequence 0xC3 0x89), and the result I got was “perchÉ” instead of the expected “perché” (é is Unicode U+00E9, LATIN SMALL LETTER E WITH ACUTE). So, the pure ASCII characters in the input string were all correctly converted to lowercase, but the final non-ASCII character wasn’t.

Actually, the bug is not in the std::tolower function itself: the function was misused, invoked in a way it was not designed for.

This is one of the perils of taking std::string-based C++ code that initially worked with ASCII strings, and thoughtlessly reusing it for UTF-8-encoded text.

In fact, we saw a very similar bug in a previous blog post.

So, how can you fix that problem? Well, a portable way is using the ICU library with its icu::UnicodeString class and its toLower method.

On the other hand, if you are writing Windows-specific C++ code, you can use the LCMapStringEx API. Note that this function uses the UTF-16 encoding (as almost all Windows Unicode APIs do). So, if you have UTF-8-encoded text stored in std::string objects, you first have to convert it from UTF-8 to UTF-16, then invoke the aforementioned API, and finally convert the UTF-16-encoded result back to UTF-8. For these UTF-8/UTF-16 conversions, you may find my MSDN Magazine article on “Unicode Encoding Conversions with STL Strings and Win32 APIs” interesting.
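For illustration, here’s a minimal sketch of the UTF-16 lowercase step using LCMapStringEx (error handling is abbreviated, and the invariant locale is a simplification: pick the locale that fits your scenario):

// Sketch: lowercase a UTF-16 string using the LCMapStringEx Windows API
inline std::wstring ToLower(const std::wstring& source)
{
    if (source.empty())
    {
        return std::wstring{};
    }

    // First call: ask for the required length of the destination string
    const int destLength = ::LCMapStringEx(
        LOCALE_NAME_INVARIANT,  // simplification: use the proper locale here
        LCMAP_LOWERCASE,
        source.c_str(), static_cast<int>(source.length()),
        nullptr, 0,             // no destination buffer yet
        nullptr, nullptr, 0);   // reserved
    if (destLength == 0) { /* ... handle the error ... */ }

    std::wstring dest(destLength, L'\0');

    // Second call: do the actual lowercase mapping
    const int result = ::LCMapStringEx(
        LOCALE_NAME_INVARIANT,
        LCMAP_LOWERCASE,
        source.c_str(), static_cast<int>(source.length()),
        &dest[0], destLength,
        nullptr, nullptr, 0);
    if (result == 0) { /* ... handle the error ... */ }

    return dest;
}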

 

Maps with Case Insensitive String Keys

How to implement a map with case insensitive string keys? If you use the standard std::map associative container with std::string or std::wstring as key types, you get a case sensitive comparison by default.

If you take a look at std::map documentation, you’ll see that in addition to the key type and value type, there’s also a third template parameter that you can plug into std::map: it’s a comparison function object to sort the keys. The default option for this comparison function is std::less<Key>.

So, if you provide a custom comparison object that ignores the key string case, you can have a map with case insensitive keys:

map<string, ValueType, StringIgnoreCaseLess> myMap;

Now, the question is: What would such comparison object look like?

A Failed Approach

An approach I have sometimes seen (e.g. on StackOverflow) is to use std::lexicographical_compare, comparing the strings char by char, after invoking tolower on each char. Basically, the idea is to compare the lowercase versions of each pair of corresponding characters in the input strings. The code from the aforementioned SO answer follows:

// Code from: https://stackoverflow.com/a/1801913/1629821

struct ci_less
{
  // case-independent (ci) compare_less binary function
  struct nocase_compare
  {
    bool operator() (const unsigned char& c1, const unsigned char& c2) const {
      return tolower (c1) < tolower (c2); 
    }
  };
  
  bool operator() (const std::string & s1, const std::string & s2) const {
    return std::lexicographical_compare 
        (s1.begin (), s1.end (),   // source range
         s2.begin (), s2.end (),   // dest range
         nocase_compare ());       // comparison
  }
};

The problem with this code is that it doesn’t work for international strings. In fact, while for a pure ASCII string like “Connie” you can use this technique to successfully compare “Connie” with “connie” or “CONNIE”, this won’t work for strings containing international characters.

For example, consider the Italian word “perché”. The last character in “perché” is U+00E9, i.e. the ‘LATIN SMALL LETTER E WITH ACUTE’, which is encoded in UTF-8 as the hex byte sequence 0xC3 0xA9. Its uppercase form is É U+00C9 (encoded in UTF-8 as 0xC3 0x89). So, let’s assume you use the UTF-8 encoding to store your international text in std::string objects. Well, invoking tolower char by char, as implemented in the aforementioned SO answer, will fail. In fact, tolower is unable to correctly process UTF-8 sequences (at least in the Microsoft VS2015 CRT implementation I used in my tests).

A Better Approach for International Text

So, how to fix that? Well, on Windows there’s a CompareStringEx API (available since Vista, according to the MSDN documentation) that seems to work, at least in my tests with some Italian text. You can call this API passing the NORM_IGNORECASE comparison flag.

As with most Windows APIs, the Unicode encoding used for text is UTF-16. So, let’s start writing a nice C++ wrapper around this CompareStringEx C-interface API. Let’s assume that we want to compare two UTF-16 wstrings, and that they are “ordinary” strings that don’t contain embedded NULs. A possible implementation of this C++ helper function follows:

// C++ wrapper around the Windows CompareStringEx C API
inline int CompareStringIgnoreCase(const std::wstring& s1, 
                                   const std::wstring& s2)
{
    // According to the MSDN documentation, the CompareStringEx function 
    // is optimized for NORM_IGNORECASE and string lengths specified as -1.

    return ::CompareStringEx(
        LOCALE_NAME_INVARIANT,
        NORM_IGNORECASE,
        s1.c_str(),
        -1,
        s2.c_str(),
        -1,
        nullptr,        // reserved
        nullptr,        // reserved
        0               // reserved
    );
}

Now, you can simply invoke this C++ helper function inside the comparison object that will be used with std::map:

// Comparison object for std::map, ignoring string case
struct StringIgnoreCaseLess
{
    bool operator()(const std::wstring& s1, const std::wstring& s2) const
    {
        // (s1 < s2) ignoring string case
        return CompareStringIgnoreCase(s1, s2) == CSTR_LESS_THAN;
    }
};

This basically implements the condition (s1 < s2) ignoring the string case.

And, finally, you can simply plug this comparison object into a std::map with UTF-16-encoded wstring keys:

map<wstring, ValueType, StringIgnoreCaseLess> myCaseInsensitiveStringMap;

Or, using the nice C++11 alias template feature:

template <typename ValueType>
using CaseInsensitiveStringMap = std::map<std::wstring, ValueType, 
                                          StringIgnoreCaseLess>;

// Simply use CaseInsensitiveStringMap<ValueType>

You can find some compilable C++ sample code on GitHub.

Note on UTF-8 String Keys

If you really want to use UTF-8-encoded std::string keys, then you have to add some code to the comparison object, to first convert the input strings from UTF-8 to UTF-16, and then invoke CompareStringEx for the UTF-16 text comparison.
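For example, a sketch of such a comparison object (reusing the CompareStringIgnoreCase wrapper from above, and assuming a Utf16FromUtf8 helper like the one used earlier in this post, implementable e.g. with the MultiByteToWideChar API):

// Sketch: comparison object for std::map with UTF-8-encoded string keys.
// Utf16FromUtf8 is assumed to convert from UTF-8 to UTF-16
// (e.g. implemented with MultiByteToWideChar).
struct Utf8StringIgnoreCaseLess
{
    bool operator()(const std::string& s1, const std::string& s2) const
    {
        // Convert both keys to UTF-16, then compare them ignoring case
        return CompareStringIgnoreCase(Utf16FromUtf8(s1),
                                       Utf16FromUtf8(s2)) == CSTR_LESS_THAN;
    }
};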

If you are working on C++ code that is already Windows platform specific, I think that choosing the UTF-16 encoding to represent international text is more “natural” (and more efficient) than converting back and forth between UTF-8 and UTF-16.