Technology vs Magic

In honor of Terry Pratchett’s death, I’ve been re-reading a few of his books. One in particular is “The Amazing Maurice and His Educated Rodents”. The premise of the book is that a cat and some rats come across some magical debris that gives them human-like intelligence and speech.

Something in that got me thinking about the difference between magic and technology. People do research all the time to examine and improve the intelligence of rats and other mammals. Imagine what a wondrous technological advancement it would be to create a drug that could give human-like intelligence to a colony of rats! But the cat and rats in the book could also speak, and not just speak: they used human language. That’s clearly magic. Not only do we get the intelligence increase, but the knowledge transfer as well; there’s just something there that knows what you need for the advance to be practical.

Posted in non-computer | 1 Comment

Issues with HTC 8x 8.1 Upgrade

I have an HTC 8x phone on a Verizon MVNO (Page Plus). Today, I was excited to see that the Windows Phone 8.1 update was finally available for my device. That excitement was to be short-lived.

As the update finished installing, the final step was to reboot the phone into the new OS. Unfortunately, this didn’t go so well: my phone is now stuck in a reboot loop that I can’t get out of.

After consulting Google, I learned that my only recourse is a full factory reset. Unfortunately, the normal procedure for this does not work for my phone. What you are supposed to do is hold down the volume down button while the phone is off, then tap the power button again at just the right time after turning it on. After many, many attempts, I gave up and concluded that this would not work for my phone.

Using the online chat features for HTC, Page Plus, Microsoft Windows Phone Support, and Verizon also got me nowhere (I would have called, but, well…). Page Plus particularly was unhelpful, which was disappointing. After 5 straight hours of work on this problem, I am still without a phone.

Early on in the process I was able to get to the screen with the ! icon on two occasions, but was not able to complete the hardware reset, nor was I able to reproduce the steps that produced that screen. I was also able on three occasions to get the lightning bolt/gear screen, but I wasn’t able to find any useful information on the purpose of that screen.

The good news is that I can force the phone to shut down and stay shut down. That’s really why I made this post: I haven’t seen that information anywhere else yet. To do this, hold down both the volume up and volume down buttons at the same time. This will bring you to a new screen with three barcodes. From here, you can turn the phone off by holding down the camera button. Unfortunately, as soon as you connect the phone to a charger, it will start up again and re-enter the reboot cycle. The other thing you can do from this screen is connect the phone to a computer. Windows 7 and Windows 8 have drivers for this out of the box, and you can make it work with Vista and XP. You can’t really do anything normal with the link, but later on it may be required to replace the system ROM.

I have a theory as to what went wrong. I believe that the update botched the battery calibration, such that the phone believes the battery is nearly empty (it clearly is not, or the phone would be dead by now). When the phone starts, it reads the battery state, decides the battery is too low to boot into the OS, or even the reset screen, and instead restarts itself. One other thing I was asked to do was to charge the phone for 10 minutes, and then hold down the volume down, volume up, and power buttons for 2 minutes. This is another item that I haven’t seen recorded anywhere else yet. They never said what this was supposed to do, but I suspect it was intended to reset the battery calibration.

Perhaps allowing the phone to fully discharge will make the battery calibration more accurate, allowing me to charge it somewhat and enter the factory reset screen, or even avoid the need to do the factory reset at all, if that is enough to allow the phone to finish booting (I’m not holding my breath here). Before I let that happen, though, I have one other option.

At this point, what I believe I really need is to restore the original ROM. My time on chat with HTC and Microsoft leads me to understand that, for this product at least, Microsoft supplies materials for a stock ROM to HTC. HTC must customize it for the phone’s specific hardware, and in turn provides materials for the customized ROM to Verizon. Verizon then customizes it further for their network and produces the final ROM update to distribute. Therefore, in this case, the only place to get the ROM I need is Verizon. As I am not a direct Verizon customer, I was unable to communicate with them on the issue. I had to go through Page Plus, who seriously dropped the ball in supporting me. They may have lost a customer over this issue.

Page Plus did suggest I try bringing the phone to a dealer, but I am an online customer and the nearest dealer is, shall we say, less than convenient. What I will do instead is try to bring the phone to a Verizon retail store, and see if they can help. I may be able to bypass the barrier in person that I could not over the web (seriously Verizon: if you’re going to allow MVNOs, accept the MVNO phone numbers as valid for creating support accounts). If that doesn’t work, I’ll have to let the phone drain and start looking on shady bittorrent sites for a download with the software I need (and am licensed for).

Posted in Uncategorized | 1 Comment

Installing AirServer on Windows

It’s no secret that I’m a fan of AirServer over AppleTVs for classrooms. The ability to mirror a faculty iPad to a large projector screen turns it into a powerful educational tool. I even have AirServer installed on an HTPC at home. My extensive use of this software means that I need to be able to install and activate the software in a reproducible way for our classroom computers. What you may not know is that installing AirServer on Windows is not as straightforward as we’d like.

This isn’t entirely AirServer’s fault. The AirPlay protocol (and the processing power limitations of your iOS device) mandate that the video streams sent from your device keep the original encoding of whatever the current app is showing. AirServer depends on the operating system’s ability to decode these streams, and the variety of video types used across iOS apps far exceeds the codecs included out of the box with Windows.

I expect that AirServer could, of course, package the most-used of these codecs with their software (and I expect they do this to some extent already), but there are way too many to get them all in one place, and doing so would add licensing costs to the product that I’m happy to avoid. In order to reliably mirror your device there are still some common and uncommon codecs that you’ll want to be able to handle at the operating system level, as well as an additional networking standard you’ll need to support. In short, there are prerequisites you’ll need to get the most out of AirServer that are not included in the software’s installer. I think AirServer could do more with their Windows installer to make these easy to acquire, but until then I’ve got your back.

While I do need to install AirServer more than most, it’s still not all that often. To avoid mistakes, I keep copies of the prerequisites in the same network folder where I keep the AirServer installer itself, and arrange things in a way that encourages success. Here’s what that folder looks like:


Note how I’ve renamed files so that there is an intuitive flow for the installation process. We’ll go through the steps indicated one by one. AirServer itself has an automatic update mechanism, but most of the other items do not. I want to make sure I’m not pushing obsolete (and possibly insecure) software to my classrooms, and so the first step (Step 0) is to make sure that each of the items I’m using is the most recent (read: fully-patched) version available. I’m considering replacing several of the actual download packages with shortcut files to the download page for the project, to ensure I always get the latest version.

This brings us (at last) to the prerequisites themselves. The full list is available here. The first is Bonjour Print Services for Windows. The documentation says that iTunes is enough, but I’ve had better results when I ensure that the Print Services package is installed. Note that I don’t deploy iTunes to my classroom computers; for home machines, it would be an okay addition. Print Services is a free download from Apple, and it allows your Windows computer to support the multicast DNS protocol. I have strong feelings about this protocol that are not fit for public print, but for better or worse you need it for anything Apple, including AirPlay mirroring. This is the most important prerequisite: without it, AirPlay just won’t work. Next up is QuickTime, also from Apple. You may already have this one installed, but you’ll need it for the basic compression/decompression used for video rendered and compressed by iOS itself, as well as some app content.

The remainder are various open codec packages for use with Windows DirectShow. The packages combined allow you to play almost anything. Be sure to pick the correct x86 or x64 installer, depending on your operating system type. I also need to mention here that there is a current bug in the iOS YouTube app (YouTube videos still play through iOS Safari) and that some apps use copyright protection for their content and just will not mirror, even on a real AppleTV.

Now at last we come to installing AirServer itself. As you run through the installer, I need to call out a few of the options. The first is that you should NOT activate AirServer during the install process. This is especially important for my classrooms, where I need to support many users, but even on your home computer, if you have more than one user account that may want to use AirServer, do not activate at this time. The other option is whether you want to have AirServer run in the background automatically. For my classrooms, where many users log in and out throughout the day, I’ve found this option can cause problems. If you are the only (or primary) user on the machine, where it’s less common to be logging others in and out of the computer, it’s probably safe to let it run in the background.

Now at last AirServer is installed. However, it’s not activated yet, and won’t let you mirror. Let’s take care of that. To do this, you’ll need your license key. You’ll also need to start a command prompt. When the command prompt is open, enter the following commands:

"%ProgramFiles%\App Dynamic\AirServer\AirServerConsole.exe" activate <<License Key>>
"%ProgramFiles%\App Dynamic\AirServer\AirServerConsole.exe" set name <<Mirror Name>>

Replace “<<License Key>>” with your license key, and “<<Mirror Name>>” with the name you want to show on your iPad or iPhone when you open up the AirPlay control panel to start mirroring. If you don’t activate and set the name in the console, it will only activate for the current user. When other users try to use the software, they’ll have to reactivate it and set their own name. The key will be saved, and they’ll be successful… at first. But it’s a step they shouldn’t have to take, and soon you’ll run out of activations for your license. The console method activates it once for every user on that PC.

Posted in Windows | Leave a comment

There are worse things than Exceptions

A piece of advice I’ve given on Stack Overflow more than once is to avoid the File.Exists() method, and others like it. Instead, I’ll tell people to just use a try/catch block and put their time into writing a good exception handler. I won’t re-hash the reasoning here, as I’ve already covered it before. One of those links was even Gold badge-worthy.

One of the responses I often get to this strategy is that handling exceptions is slow. Why risk a slow exception handler if you can avoid it most of the time with a quick File.Exists() check? I think this argument misses the point, first of all, for correctness reasons: you still need the exception handler, and using File.Exists() to avoid it is a mistake. But more than that, I think it’s just plain wrong about the performance, too. Here’s why.

Yes, handling exceptions is expensive from a performance standpoint; very expensive. Let’s get that out of the way: I’m not trying to say that exceptions should be your first choice in every situation. The list of things you can do in programming that are slower is very short. However, the list is not empty. Do you know what’s worse than exceptions? I/O. Disk and network are far and away worse. Here’s a link and excerpt that show just how much worse they can be:

Latency Comparison Numbers
L1 cache reference                            0.5 ns
Branch mispredict                             5   ns
L2 cache reference                            7   ns             14x L1 cache
Mutex lock/unlock                            25   ns
Main memory reference                       100   ns             20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy              3,000   ns
Send 1K bytes over 1 Gbps network        10,000   ns    0.01 ms
Read 4K randomly from SSD*              150,000   ns    0.15 ms
Read 1 MB sequentially from memory      250,000   ns    0.25 ms
Round trip within same datacenter       500,000   ns    0.5  ms
Read 1 MB sequentially from SSD*      1,000,000   ns    1    ms  4X memory
Disk seek                            10,000,000   ns   10    ms  20x datacenter roundtrip
Read 1 MB sequentially from disk     20,000,000   ns   20    ms  80x memory, 20X SSD
Send packet CA->Netherlands->CA     150,000,000   ns  150    ms

If thinking in nanoseconds isn’t your thing, here’s another reference that normalizes a single CPU cycle as 1 second and scales from there:

1 CPU cycle             0.3 ns      1 s
Level 1 cache access    0.9 ns      3 s
Level 2 cache access    2.8 ns      9 s
Level 3 cache access    12.9 ns     43 s
Main memory access      120 ns      6 min
Solid-state disk I/O    50-150 μs   2-6 days
Rotational disk I/O     1-10 ms     1-12 months
Internet: SF to NYC     40 ms       4 years
Internet: SF to UK      81 ms       8 years
Internet: SF to AUS     183 ms      19 years
OS virt. reboot         4 s         423 years
SCSI command time-out   30 s        3000 years
Hardware virt. reboot   40 s        4000 years
Physical system reboot  5 m         32 millenia

Taking even the best-case scenario for exceptions, you can access memory at least 480 times while waiting on the first response from a disk, and that’s assuming a very fast SSD. Many of us still need spinning hard drives, where things get much, much worse.

For a comparison reference, Jon Skeet has blogged about exception handling, where he was able to handle exceptions at a rate of between 42 and 188 per millisecond. While there were some issues with his benchmark, I think the point is spot on: relative to other options, exceptions may not be as bad as you think.

And that’s only the beginning of the story. When you use .Exists(), you incur this additional cost (and it is additional: you have to do the same work again when you go to open the file) on every attempt. You pay this cost whether the file exists or not, because the disk still has to look for it in its file tables. With the exception method, you only pay the extra costs, like unwinding the call stack, in the case of failure.

In other words, yes: exceptions are horribly costly. But compared to the disk check, they’re still faster, and not by just a small margin. Thankfully, this is unlikely to drive your app’s general performance... but I still want to put to bed the “exceptions are slow” argument for this specific task.
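To make the comparison concrete, here’s a minimal sketch of the two approaches (the null fallback and IOException handling are just placeholder choices for illustration). The check-first version pays for an extra disk lookup on every call and still needs a handler anyway, while the try-first version pays the exception cost only when the open actually fails:

```csharp
using System;
using System.IO;

public static class FileReadDemo
{
    // Check-first style: an extra disk hit on every call, and the file
    // can still vanish (or be locked) between the check and the read,
    // so this is not actually safe without a try/catch anyway.
    public static string ReadWithCheck(string path)
    {
        if (!File.Exists(path))        // disk lookup #1
            return null;
        return File.ReadAllText(path); // disk lookup #2; can still throw!
    }

    // Try-first style: one disk lookup, and the (expensive) exception
    // machinery only runs in the failure case.
    public static string ReadWithCatch(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException)
        {
            return null; // the handler you needed to write regardless
        }
    }
}
```

Note that FileNotFoundException and DirectoryNotFoundException both derive from IOException, so the single catch covers the common missing-file cases.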

Posted in .net, c#, development, stackoverflow | Leave a comment

MVP No More

For the past five years I’ve been honored to be recognized as a recipient of a Microsoft MVP Award. As one of roughly 4,000 awardees world-wide, this is an incredible honor, and I’ve been humbled to be part of that group. Today my most recent award expired, and it will not be renewed.

I can’t say I didn’t see this coming. When I was first awarded five years ago, I was at the top of my game in the programming world. However, shortly thereafter my career took a turn as I left full-time programming to become a Systems Administrator. This was an amazing opportunity for me, but it also meant starting back at the bottom of the knowledge curve. Since then, it’s been harder and harder every year to continue contributing as a programmer at that same high level (though I still do participate a lot on Stack Overflow), and while my systems administration skills have grown, I have years to go in that area before I’ll be anywhere close to MVP-level.

So it’s been fun. I appreciated all the benefits, and I’ll forever remember the one Summit I was able to attend. I’m not bitter. I wasn’t looking for the award when it came, and it means more that I had it if Microsoft continues to protect the integrity of the program. The award will stay on my resume (with notes for the award years), and probably continue to open doors for me long into the future. Maybe someday I’ll be re-awarded as a Microsoft Server MVP. Until then, I’ll just keep doing what I do.

As a closing note, one of the nice things is that, as I understand it, I can still use this MS MVPs blog site, so I won’t have to move any content I’ve posted here, and I can continue to post whenever I feel I have something worth posting.

Posted in non-computer | Leave a comment

Cleaning an Infected Computer at Work

I have two basic philosophies underpinning how I approach infected computers. To begin with, I don’t really believe in cleaning an infected computer at all. I could cover the reasoning for this in more detail, but I already have a well-voted answer that I think says it better than I could fit here. For computers that I manage at work, I capture base hard disk images for our deployed PCs, and can use those to rebuild an infected computer from scratch. Combine this with the fact that most end-user data lives on a server, rather than the local machine, and this process is often faster than cleaning the computer anyway.

That said, I don’t use roaming profiles, and therefore this process is still very disruptive for users. There are literally thousands of settings that go into a user profile, and while most will never change from the default, over time the cumulative effect of a setting here, an option there, can make a real difference. Additionally, just because you have a few pop-ups, it doesn’t mean you have a rootkit.

Therefore, the policy I follow at work is that we do allow some clean-up before resorting to wiping or replacing a computer. However, I limit the techniques I’ll use. Here is the full enumerated list:

  • Uninstalling unwanted items via the Control Panel
  • Editing specific registry keys where startup programs are kept
  • Manually disabling Services and Scheduled Tasks
  • Using MSConfig or the StartUp tab in the Task Manager (Windows 8 and later)
  • Editing the registry to remove a stubborn IE Addon or Chrome Extension
  • Manually deleting any files or folders left behind from an uninstall process
  • Using existing Antivirus software already on the computer

This is the extent of it. If these don’t get the job done, it’s time for a wipe. Some notable items that are not in the list include rebooting to safe mode, installing an anti-malware tool, and running an anti-virus scan in a clean environment. If I have to do those things, I usually figure I’m better off wiping the machine.

Even with the tools I will use, there’s a catch: I’ll only do this once for a given infection. If, after an initial clean-up attempt, there are still pop-ups or other signs of infection, or if the symptoms return, that’s it. It’s time to nuke the machine and start over.

The other philosophy I follow regards administrator rights. I do allow staff to have administrator access on their own machines by default. This is a practice that pre-dates my time here, and one I was not fond of when I started. However, over time I’ve come to accept it as more helpful than hurtful... especially since the introduction of UAC. Under no circumstances do I permit UAC to be disabled, and some settings are enforced through Active Directory Group Policy as well. But the main point is that, by and large, I do permit administrator rights on end-user PCs.

This is important because I’m only willing to wipe a machine for free once. If it gets to the point where we’re replacing your machine for the second time, you’ll find you no longer have administrator rights when the third machine arrives. I worry that eventually this policy will lead to unreported infections, especially if it’s ever embraced by non-technical management to the point that keeping administrative access is seen as necessary to doing your job. However, to date I’ve only had to enforce it one time.

Posted in security, superuser | Leave a comment

What it’s like to live through a Disaster

Less than a week ago, a tornado tore through a small town about 25 miles from my home, leaving it almost completely devastated. I am thankful that no one I know personally was hurt or even lost significant property, but I’ve heard some stories from the experience, and I am very mindful and prayerful for those still living in this community.

Part of this experience has brought me a new understanding of what it means to live through a disaster like this, which I hope I can share with you now. I will list the implications below. Not all of these apply to every family unit, but some family units will be subject to all of them, and some of them may surprise you:

  • No electricity for nearly a week, with no idea when it’s coming back.
  • No refrigeration
  • Personal food reserves destroyed, contaminated, or depleted, with no clear way to get more
  • No running water or sanitation
  • No shelter
  • No cell phone service in the area. While coverage survived the initial disaster, the lack of power eventually overwhelmed providers’ abilities to keep the cell towers running. Even if coverage had survived, there would be no way to charge your phone.
  • No news of the outside world. Help is on the way to this community, but many there have no way to know this, because they have lost the ability to use TV, Radio, and even cellular internet.
  • No way to leave, in the numerous cases where vehicles were destroyed.
  • No way to call for help, or any indication that it’s coming, because of the earlier mentioned isolation from electronic communications

Even in the United States, with all of our resources, it’s scary how quickly you can become isolated and helpless. While people just a few miles away are fine, this small town is back in the stone age. And if you were hit particularly hard (loss of vehicle and food supplies) and don’t know your neighbors well, you could be in a particularly bad spot. Even if you have a strong family or other support network outside of town, you have no way to contact these people, or anyone else who could help. This is real desperation.

Fortunately, help is coming. Tomorrow morning, the church I attend is coordinating with Church of Christ Disaster Relief to open a location that will provide food and supplies to the victims of this disaster. So far, this is the only relief effort to visit this town, though I suspect it’s only the first.

As a member of the technical community, I was particularly interested in writing about this, because of attitudes I saw on some technical community web sites the last time a Christian relief organization provided disaster support. Technical folks often have a decidedly secular mindset; a common sentiment was that Christian relief organizations were really only interested in distributing Bibles, and that Bibles would be the bulk of the “supplies” provided.

I can tell you that nothing is further from the truth. Church of Christ Disaster Relief maintains pre-loaded trucks that are ready to depart as soon as a need is identified. Some of the contents of these trucks are perishable foodstuffs that would need to be rotated if a truck sat too long... which doesn’t really happen, because the organization is so active. There are several categories of box in each truck: food boxes that contain enough material to feed a family of four for a week, infant care boxes with diapers and other necessities, bottled water boxes, cleaning supplies, clothing, and others. All of this is provided at no cost to victims, without discrimination. If more material is needed, more trucks will be sent (later trucks are more selectively loaded). And this is just the first wave. Later efforts will even provide furniture and appliances free of charge to those with real need.

Yes, there are a few Bibles included (one in each food box), but they are not a significant part of the cost or mass/volume of the materials provided. The organization also often makes use of church buildings as convenient pre-existing locations to centralize distribution, and members of those congregations provide volunteer staffing at the distribution points. Yes, we do this in the name of Christ, because He first loved us, and we are not ashamed of this. But this is real relief, meeting real needs.


Posted in non-computer | Leave a comment

Can we stop using AddWithValue() already?

I see code examples posted online all the time that look like this:

cmd.Parameters.AddWithValue("@Parameter", txtTextBox1.Text);

This needs to stop. There is a problem with the AddWithValue() function: it has to infer the database type for your query parameter. Here’s the thing: sometimes it gets that type wrong. This especially happens with database layers that deal in Object arrays or similar for the parameter data, where some of the information ADO.Net uses to infer the type is missing. But it can happen even when the .Net type is known: VarChar vs. NVarChar or Char for strings is one example; Date vs. DateTime is another.

The good news is that most of the time, these type mismatches don’t matter. Unfortunately, that’s not the whole story. Sometimes they do matter, and when it matters, it can matter in a big way.

For example, say you have a varchar database column, but send a string parameter using the AddWithValue() function. ADO.Net will send this to the database as an nvarchar value. The database is not permitted to implicitly convert your nvarchar parameter to a varchar value to match the column type for the query. That would be a narrowing conversion that has the potential to lose information from the original value (because you might have non-Latin characters in the parameter), and if that happened the database might produce the wrong query results. Instead, the database will likely need to convert the varchar column to nvarchar for this query (which is a widening conversion that is guaranteed not to lose information). The problem is that it will need to do this for every row in your table.

This conversion can also happen with other mismatches: for example, date columns may need to be widened to datetime values. And don’t even get me started on what happens if you have a mismatch between a date or number type and a string type. Even with nvarchar or nchar, the lengths may not match up, such that every value in an nvarchar field of a specific length has to be converted to match a parameter of a different length.

If that kind of operation sounds expensive to you (potential run-time conversions for data in a table containing possibly millions of rows), you’re right. It is. But that’s only the beginning. These newly converted values are technically no longer the same values that are stored in any indexes on the column, making those indexes useless for completing your query. Now we’re really hitting below the belt. Index use cuts to the core of database performance. Failing to hit an index can be the difference between a query taking hours and taking seconds, between taking minutes and returning instantly. And it all began with AddWithValue().

So what should you do instead? The solution is to be aware of the underlying database type you need to end up with, and then create a query parameter that uses this exact type. Here’s an example using a DateTime database type:

cmd.Parameters.Add("@Parameter", SqlDbType.DateTime).Value = MyDateTimeVariable;

Here’s another example using a decimal(11,4):

cmd.Parameters.Add("@Parameter", SqlDbType.Decimal, 11, 4).Value = MyDecimalVariable;

Note that while this is slightly longer, it’s still a single line of code. That’s it. This simple change to how you define parameters can potentially save significant performance penalties.

Posted in .net, c#, sql | 3 Comments

The N Word

No, not that N word. I’m talking about the N string literal prefix in T-SQL. Like this:

SELECT * FROM Foo WHERE Bar = N'Baz'

If you don’t know what that N is for, it tells Sql Server that your string literal is an nvarchar, rather than a varchar… that is, that the string literal may contain Unicode characters, so it can support non-ASCII characters. Things like this: 例子. But I can hear you now: that sample is all ASCII. Why does it matter? I’m glad you asked.

Let’s pretend for a minute that the Bar column from that example is a varchar column, and not an nvarchar column after all. We have a type mismatch on the comparison. Pop Quiz: what happens?

We’d like Sql Server to convert the ‘Baz’ literal to a varchar, because that is obviously more efficient. Unfortunately, it won’t work that way. Converting from nvarchar to varchar is a narrowing conversion. There are some things that can’t be accurately expressed when converting from nvarchar to varchar, which means there is a potential to lose information in the conversion. Sql Server is not smart enough to know that this particular literal will map to the smaller data type without data loss. If it converts the literal to a varchar, it might give you the wrong result, and Sql Server won’t do that.

Instead, it has no choice but to convert your Bar column to an nvarchar. I’ll say that again: it has no choice but to convert the value from every row in your Bar column to an nvarchar, even if you only get one row in the results. It can’t know if a given row matches your literal until it completes that conversion. Moreover, if you have an index on that column that would have helped, these converted values are not really the same value any more as what is stored in your index, meaning Sql Server can’t even use the index.

This could easily mean a night and day performance difference. A query that used to return instantly could literally take minutes to complete. A query that used to take a few seconds might now run for an hour.

Just in case you think this scenario seems unlikely, keep in mind that ADO.Net uses nvarchar parameter types by default if you use the AddWithValue() function or it otherwise can’t infer the parameter type. If that query parameter compares to a varchar column, you’ll end up in this exact situation, and I see it all the time.

The good news is that you’re okay going the other direction… at least in this scenario. If Bar is an nvarchar column and you define Baz as a varchar literal, converting the Baz literal would be a widening conversion, which Sql Server will be more than happy to perform. Your Bar column values are unchanged, and so you can still use an index with the Bar column.

I hope your conclusion from this example is not that you should always just omit the N prefix. That’s not the message I want to send at all. In fact, the same Stack Overflow question that prompted this example also included an example that would fail to even execute in the case of a type mismatch. Instead, I hope I’ve shown here that it really can matter whether you get your SQL string literals right, and that it pays to keep the exact data types of your columns in mind.

Posted in sql, Sql Server | Leave a comment

The single most broken thing in CSS

Like most web people, I have tasted the Kool-aid, and it was good. I believe in the use of CSS for layout over tables (except, of course, for tabular data, which happens more than people realize). However, CSS is also known for being quirky and difficult to master, largely because of weak browser implementations. If you’re reading this, you probably just thought of Internet Explorer, but IE is not alone here. Even in the context of browser quirks I think CSS, with practice, holds up pretty well and is actually a very nice system… with one major exception.

CSS needs a built-in, intuitive way to position an arbitrary number of block-level elements visually side by side.

Let’s list out the requirements for what this feature should support, keeping in mind the spirit of separating content from presentation:

  1. It should scale to any number of elements, not just two or three.
  2. Multiple options for what to do as content gets wider or the window gets narrower: wrap, scroll, hide, etc.
  3. When styling dictates that the elements wrap to a new “line”, they should still follow the same flow, in an order that mimics how text would flow (including awareness of the current browser culture).
  4. When styling dictates that elements wrap to a new “line”, you should be able to intuitively (and optionally) style them so each element on a new line takes a position below an element on the first line, so the result resembles a grid. If elements have varying widths, there should be multiple options for how to account for the space when an element further down is much wider than an element above it.
  5. You should not need to do any special styling for the first or last element that is different from the other elements.
  6. You should not need to add any extra markup to the document to indicate the first or last element, or to mark the beginning or end of the side-by-side sequence. We want the content separate from the styles, after all.
  7. Since potential side-by-side elements are siblings, in a sense, it is reasonable (and possibly necessary) to expect them to belong to some common parent element, perhaps even as the exclusive direct children of that parent.

I want to point out here that I’d be surprised if everything, or nearly everything, I just described isn’t already possible today. However, it’s not even close to intuitive. It requires hacks and a level of css-fu not easily attained by the common designer or developer, or else over-reliance on systems like Bootstrap.

I believe that what CSS needs — what it’s really missing — is for this feature set to be supported in a first-class way that is discoverable by new and self-taught web developers and designers; a way that works because side-by-side layout is the whole point of that specific set of styles, not because some designer figured out how to shoehorn something else into doing what they wanted. This is the Elephant in the CSS Room.

I feel pretty comfortable with that requirement outline. Sadly, I no longer do enough web design to take the next step: thinking through how the exact styles and syntax needed to implement this should look. I definitely lack the influence to take the step after that: getting a proposal before the committee that could actually get this accepted into the spec. And no one is in a position to take the final step: getting browsers to support it in a reasonably uniform way on a reasonably prompt time frame. All of that makes this post little different from a rant… but a guy can dream, can’t he?

Posted in development, web | Leave a comment