Tech-Ed Barcelona 2007: day 2

I just found out that the session booklet only contains 2 pages for taking notes on 5 sessions. It is beyond me how they could have thought this would suffice.


During the next break I will have a look around the exhibition hall to see if I can get a notepad or some other source of paper.


DAT202: Overview of SQL Server 2008


This session is hosted by Francois Ajenstat. It is only a level 200 session, but I figured it would be useful to get an idea of the new features in SQL 2008. After all, it will be released in a couple of months' time.


The session itself started a couple of minutes late because of a technical problem with one of the controllers. I hope this isn't becoming a trend; yesterday's last session started with a power outage.


The first thing that is immediately obvious is that Francois is a gifted speaker. He really connects with the audience and has the kind of presence that makes it appear as if it is the most natural thing in the world to talk in front of a large audience.


Security:


  • SQL 2008 now has support for transparent encryption of table data without requiring any changes from developers, and it can use external key management.
  • Every action in a database is auditable. This is very nice in regulated environments where auditing is of critical importance. You now get it integrated in the database, for free.
  • Database mirroring has been made a lot simpler, and it now has ways to recover from errors and corrupt database pages, making everything more reliable.

Performance:


  • Both data and backup archives can be compressed on the fly, resulting in significant performance increases due to decreased IO.
  • SQL 2008 comes with an integrated resource governor, allowing you to allocate resources to users, jobs, or anything with a GUID. There was a nice demo of the concurrent execution of a payroll job (critical) and a simple reporting job (less important) that were fighting over resources.
    The resource governor made it very easy to assign the payroll job to a fast resource pool that could use up to 80% of CPU time, and the reporting job to a slow pool limited to 20%.
    The performance monitor immediately showed how the jobs reacted to this change.
  • Performance data collection and analysis has been simplified.

Policy based management:


  • A lot of the behavior in SQL 2008 can now be managed through policies. For example: who can do what, what should table names look like, are free-form queries allowed,… all those things can be configured through a policy, just like normal group policies.

Misc:


  • SQL 2008 now has intellisense to help you write queries. This can be a real timesaver.
  • There is support for something called the Entity Framework. This is basically a way to map logical data from tables and stored procedures to conceptual entities, even though the data can come from different tables or data sources. It is nothing revolutionary, but it can make life easier for developers.
  • It has become very easy to expose data on the internet in various ways.
  • There are several new data types, making it easier to store unstructured data like documents and MP3s, and spatial information like GPS coordinates.
  • There is a lot of new support for Business Intelligence, such as powerful integrated reporting tools with graphs, controls, gauges, …

All in all this was a very interesting session. I am not a database expert, so perhaps I missed or misunderstood some things, but it became clear that developing a database is going to encompass more than setting up a table structure and providing a set of stored procedures to access it.


WIN202: Introduction to the Microsoft Sync Framework


This session is hosted by Phillip Vaughn.


The sync framework is a new addition to the .NET framework. I don't know anything about it, so I attended this introductory talk primarily to find out whether it is something I should care about.


The key idea is to provide you as a programmer with a simple means to enable your applications to use data while disconnected from the data store, and then automatically synchronize when the connection returns. Conflicts should be detected and resolved, and users should be able to concurrently collaborate on the same data.


It also increases performance because your application will work on local data which gets synchronized in the background.


The sync framework is


Powerful, because:


  • It supports conflict detection and resolution.
  • It handles connection and storage errors.
  • It handles all the corner cases that are notoriously hard to solve, like: A works independently on a dataset, copies it to B, B changes it, A changes its own copy again, both upload at the same time, and halfway through the conflict resolution the connection drops…

Flexible, because it can work with:


  • Arbitrary data stores.
  • Arbitrary protocols.
  • Arbitrary network topologies (peer-to-peer, master-slave).

And finally, it lets you be productive because:


  • Creating offline-capable apps with VS2008 is dead easy.
  • It has built in support for lots of endpoints and protocols.
  • The runtime is expandable.

There was a demo with a customer database app that synchronized data between a PDA, Outlook, and Vista Contacts.


Then there was also a demo of a sample app called 'SyncToy' which can be used to synchronize files and folders. Its code is available and it works really well. So well, in fact, that I am probably going to use it at home to spread the data from my file server across different disks in a 'set it and forget it' way, to safeguard the data against disk crashes.


The key to synchronization resolution is to use metadata to solve all sorts of common problems.
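

To make that concrete, here is a minimal sketch of my own – not the Sync Framework's actual API, every name in it is hypothetical – of how per-item version metadata makes conflict detection possible:

    #include <string>
    #include <map>

    // Every replica stamps each change with its own id and an increasing
    // tick count, and remembers how far it has seen every other replica's
    // clock advance.
    struct Version
    {
        std::string replicaId; // which replica made the change
        long long   tick;      // that replica's logical clock at change time
    };

    struct Knowledge
    {
        // Highest tick seen from each replica.
        std::map<std::string, long long> seen;

        // A change is already known if this replica has seen the author's
        // clock advance at least that far.
        bool Contains(const Version& v) const
        {
            std::map<std::string, long long>::const_iterator it =
                seen.find(v.replicaId);
            return it != seen.end() && it->second >= v.tick;
        }
    };

    // Two changes to the same item conflict when neither side already knew
    // about the other side's change: they were made concurrently.
    bool IsConflict(const Version& local,  const Knowledge& localKnowledge,
                    const Version& remote, const Knowledge& remoteKnowledge)
    {
        return !localKnowledge.Contains(remote)
            && !remoteKnowledge.Contains(local);
    }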


The sync framework is really impressive for a first release, and I think it is really worth looking into if you develop applications with offline capabilities.


TLA323: What’s new in C++ 2008


This session is hosted by Kate Gregory.


I was hoping to see Steve Teixeira here as well, but he was missing in action.


This session is also hosted in one of the bigger rooms, and it was fairly crowded. If I had to guess, I’d say that there are about 200 or 250 people here. Last year all the C++ sessions were shoved in the smaller side rooms, but they were overflowing. Luckily, the event organizers have responded to that.


This talk primarily discussed the changes to the way in which you use VC++, and the way you should make your apps work with Vista.


The first part of the talk handled UAC (User Annoying Component) and what you can do to make it less annoying. Basically, you have three options (a rough sketch of the manifest fragment follows the list):


  • Instruct the linker to insert a manifest, declaring that you run elevated, so your app triggers the confirmation dialog at startup.
  • Instruct the linker to insert a manifest, declaring that you run without elevated privileges. But if your app does something it would need elevation for, it will fail.
  • The third option is not to use a manifest, but you shouldn’t do that because your app will run in a virtualized file system with virtualized registry.
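
From memory – so treat this as a sketch rather than gospel – the manifest fragment behind the first two options looks roughly like this:

    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
      <security>
        <requestedPrivileges>
          <!-- requireAdministrator: trigger the elevation prompt at startup.
               asInvoker: run with the caller's normal, unelevated token. -->
          <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
        </requestedPrivileges>
      </security>
    </trustInfo>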

Another important issue: Visual Studio itself doesn’t need to run elevated anymore.


VC++ 2008 also comes with a class designer, which is really a class viewer that lets you see a class diagram of the code. It is not a true designer because, in a survey, all the corporate programmers that were asked indicated that they would still make changes in their code, not in a class designer.


The resource editor can now also work with the high res Vista icons, though you cannot edit them. The justification for this is that programmers generally don’t design high res images. That is done by graphics people, and they have their own tools for that.


There is a new compiler switch /MPn that allows you to compile files on n processors at the same time. In a project with dozens, hundreds or thousands of source files, this can make a big difference.
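

As an illustration – with hypothetical file names – telling the compiler to use up to four parallel processes looks like this:

    cl /MP4 /O2 /EHsc main.cpp parser.cpp codegen.cpp utils.cpp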


If your project depends on another assembly, it used to be the case that your entire project would be recompiled whenever that assembly changed, because the only way VC detected changes was by timestamp. In a large project, this would trigger a full rebuild almost every time. Now VC looks only at the signatures of the public classes (the metadata). As long as those stay the same, the assembly will not be marked as changed.


And finally, VC2008 supports multi-targeting, so you can specify that your app runs on .NET 2.0, 3.0, or 3.5 without needing to swap development environments like you have to do today.


This session zoomed past. I had high expectations because I saw Kate speak before, and I was not disappointed. This was a really great session.


TLA408: Multicore is here! But how do you resolve data bottlenecks in native code?


This session is hosted by Michael Wall.


Despite the fact that this is a level 400 session, it is crowded. Another 200 or 250 people would be my rough guess.


The session started off pretty dry, with a lot of slides about the new AMD processor. Every slide was followed by '…but I am not going to talk about that'.


As soon as that was over it got a lot better, and the session continued with a simple example to illustrate the difference between array-based operations and linked-list-based operations.


The idea is that with array-based operations, the memory accesses can be calculated in advance and anticipated by the processor, so a lot can be prefetched. With list-based access this is no longer true.


To solve this, you can use an array of pointers to the list items, which can be prefetched.
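

A rough sketch of that idea (my own example, not the presenter's code): gather the node pointers into a flat array once, then iterate over the array instead of chasing the links.

    #include <vector>
    #include <cstddef>

    struct Node
    {
        int   payload;
        Node* next;
    };

    // Walking the list directly chases pointers: the address of node i+1
    // is unknown until node i has been loaded, so the hardware prefetcher
    // cannot run ahead.
    int SumList(const Node* head)
    {
        int sum = 0;
        for (const Node* n = head; n != 0; n = n->next)
            sum += n->payload;
        return sum;
    }

    // Gather the node pointers into a contiguous array once. Iterating
    // that array is a predictable, sequential access pattern, so the
    // pointers to upcoming nodes can be prefetched well in advance.
    int SumViaPointerArray(const Node* head)
    {
        std::vector<const Node*> nodes;
        for (const Node* n = head; n != 0; n = n->next)
            nodes.push_back(n);

        int sum = 0;
        for (std::size_t i = 0; i < nodes.size(); ++i)
            sum += nodes[i]->payload;
        return sum;
    }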


The processor has something called a Translation Lookaside Buffer which stores memory page addresses. That list is limited in size, so your code should keep its data accesses as local as possible to keep the TLB from having to look up different memory pages.


A cache line is 64 bytes long. If you need to access one byte, you get access to the next 63 bytes almost for free. So if you can make those 63 bytes useful, that is another performance win. Split often-used data (hot) and rarely-used data (cold) so that caching is efficient, and use small data types where possible.
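

A hypothetical before/after sketch of hot/cold splitting:

    // Before: every lookup of an Order drags the rarely used fields into
    // the cache along with the hot ones, wasting most of each 64-byte line.
    struct OrderBefore
    {
        int    id;                 // hot: touched on every lookup
        double total;              // hot
        char   shippingNotes[256]; // cold: only touched when printing labels
        char   auditTrail[512];    // cold
    };

    // After: keep only the hot fields in the main struct and move the cold
    // data behind a pointer. Many more Orders now fit in the cache.
    struct OrderCold
    {
        char shippingNotes[256];
        char auditTrail[512];
    };

    struct Order
    {
        int        id;
        double     total;
        OrderCold* cold; // fetched only on the rare cold path
    };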


The cache itself consists of several layers, and you should avoid as many cache loads as possible. If you can avoid touching variables until you really need them, you don't disrupt the cache. You can also manually prefetch data with compiler intrinsics; _mm_prefetch can do that for you.
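

A minimal sketch of manual prefetching (my own example; the prefetch distance of 8 iterations is an arbitrary tuning parameter, not a magic number):

    #include <xmmintrin.h> // _mm_prefetch, _MM_HINT_T0

    void Process(float* data, int count)
    {
        for (int i = 0; i < count; ++i)
        {
            // Start fetching the element we will need a few iterations
            // from now, so it is in the cache by the time we get there.
            if (i + 8 < count)
                _mm_prefetch(reinterpret_cast<const char*>(data + i + 8),
                             _MM_HINT_T0);
            data[i] *= 2.0f; // stand-in for the real work
        }
    }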


You can also use _mm_stream_ps, _mm_stream_ss, and _mm_stream_sd to transfer data directly to RAM instead of letting it flow through the cache like it normally would. Suppose you write data to a large array and you are not going to need it for a while. If you just write it the normal way, the entire cache is blasted with useless data. Using the intrinsics you avoid this, and you also avoid having to flush those cache lines back to RAM later.
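

A minimal sketch of the streaming-store idea with _mm_stream_ps (my own example; note that the destination has to be 16-byte aligned):

    #include <xmmintrin.h> // _mm_set1_ps, _mm_stream_ps, _mm_sfence

    // Fill a large array with non-temporal stores so the output does not
    // evict useful data from the cache on its way to RAM.
    void FillLargeArray(float* dst, int count, float value)
    {
        __m128 v = _mm_set1_ps(value);
        for (int i = 0; i + 4 <= count; i += 4)
            _mm_stream_ps(dst + i, v); // write 4 floats straight to RAM
        _mm_sfence(); // make the streaming stores globally visible
    }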


Compiling for smallest code (which might be less efficient) can sometimes yield faster execution times than optimizing for speed. The reason is that smaller code causes fewer cache misses. Using 'whole program optimization' also helps.


If your application is multithreaded, it should be made NUMA aware. NUMA means that a CPU can access its own local memory faster than memory that is local to another processor. If your app runs on multiple cores, you should use the available Win32 APIs like GetLogicalProcessorInformation and SetThreadAffinityMask to make sure that your threads stay on one NUMA node.
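

A rough sketch of the affinity part (my own example; a real application would first query the machine topology, for instance through GetLogicalProcessorInformation, to pick a sensible processor):

    #include <windows.h>

    // Pin the current thread to one logical processor so that its memory
    // accesses keep hitting the same NUMA node.
    bool PinCurrentThreadToProcessor(DWORD processorIndex)
    {
        DWORD_PTR mask = static_cast<DWORD_PTR>(1) << processorIndex;
        // Returns the previous affinity mask, or 0 on failure.
        return SetThreadAffinityMask(GetCurrentThread(), mask) != 0;
    }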


And finally, 64-bit compiled code is usually faster than the same code compiled for 32 bit, for the simple reason that twice as many registers are available in 64-bit mode.


Applications that are slower in 64-bit mode usually suffer because code size increases (and with it the number of cache misses), and because data size increases if the app uses a lot of pointers.


This session was interesting and contained some good information for developing performant code.


TLA302: Best practices for native – managed interop in Visual C++ 2008


This session is hosted by Kate Gregory and is about the additional STL implementation that is delivered with VS2008: STL/CLR.


C++ programmers –well, some of them – often use the STL because it is a high performance library that is very flexible as well. It also comes with a wealth of containers and algorithms.


The problem with the existing STL is that it didn't allow you to put managed pointers into the container classes. So you couldn't simply have a vector of String^, because vector simply did not handle String^ correctly.


VC2008 now ships with a second implementation of the STL in a new namespace 'cliext' that is designed to work with CLR types. That STL has the same rules and features as the old STL, but its containers and algorithms are faster than any .NET equivalent.
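

A small taste of what that looks like (my own sketch, compiled with /clr):

    #include <cliext/vector>

    using namespace System;

    int main()
    {
        // cliext::vector accepts managed handles, which the classic
        // std::vector could not store.
        cliext::vector<String^> names;
        names.push_back("Kate");
        names.push_back("Steve");

        // The familiar iterator-based STL style works on managed types.
        for (cliext::vector<String^>::iterator it = names.begin();
             it != names.end(); ++it)
        {
            Console::WriteLine(*it);
        }
        return 0;
    }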


The reason is that, as opposed to generics, C++ programmers pay the piper when they hit the build button. The compiler checks all types, method accesses, and basically everything else at compile time. If your code is wrong, you will get compiler errors. If it isn't, you get fast code, because all the checking has already been done and isn't repeated at runtime.


Converting from .NET collections to STL/CLR collections can be done by explicitly implementing the conversion routines, which is pretty trivial.


Additionally, you can’t pass templates across DLL boundaries. There are several technical reasons which I am not going to get into here, but those reasons are the reason that I cannot pas an STL vector directly to another assembly, even if that assembly also uses C++/CLI.


To solve this, every container implements a type 'generic_container', which is a .NET wrapper of the container that can be passed across DLL boundaries so that other STL/CLR code can happily work with it.


There were a lot of code demos to show how easy it is to use if you are familiar with the STL.


At the end of the session there was also some attention for the marshaling library. This library contains template functions that allow you to marshal native types to .NET types in a very convenient way. Currently this library 'only' provides conversions for all the string types.


But – C++ rules – since they are template functions, you can easily provide your own specializations for converting a .NET Rect to an MFC RECT or whatever.
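

A minimal sketch of the string conversions (my own example, using the msclr marshaling headers):

    #include <string>
    #include <msclr/marshal.h>
    #include <msclr/marshal_cppstd.h> // std::string overloads

    using namespace System;
    using namespace msclr::interop;

    int main()
    {
        // Managed to native and back again with the template functions.
        String^ managed    = "hello interop";
        std::string native = marshal_as<std::string>(managed);
        String^ roundTrip  = marshal_as<String^>(native);

        Console::WriteLine(roundTrip);
        return 0;
    }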


Again, there was a large audience here. About 200 people or so.


This session was very good, with lots of praise due to Kate.


Afterthoughts


Today alone there were 3 sessions centered on Visual C++, and all 3 had great attendance. I think that shows that after the initial .NET hype, a lot of companies are coming to their senses again and realizing that there are some good reasons why C++ exists.


There is a tremendous amount of new stuff in Vista that can only be accessed easily from the C++ side, and it is going to stay like that for a long, long time because of a little thing called reality, which has shown that a completely managed platform is not yet feasible.


C++ has a niche where it fits, and it is not going to be replaced by anything, anytime soon.
