There are worse things than Exceptions

A piece of advice I've given on Stack Overflow more than once is to avoid the File.Exists() method, and others like it. Instead, I'll tell people to just use a try/catch block and put their time into writing a good exception handler. I won't rehash the reasoning here, as I've already covered it before. One of those links was even Gold badge-worthy.

One of the responses I often get to this strategy is that handling exceptions is slow. Why risk a slow exception handler if you can avoid it most of the time with a quick File.Exists() check? I think this argument misses the point, first of all, for correctness reasons: you still need the exception handler, and using File.Exists() to avoid it is a mistake. But beyond that, I think the argument is just plain wrong about the performance issue, too. Here's why.
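
To make that concrete, here is a minimal sketch of the approach I'm advocating. The path argument, the ReadConfig name, and the Log() helper are all hypothetical, and the catch list isn't exhaustive; the point is that the failure cases are handled right where the file is actually opened:

using System.IO;

static string ReadConfig(string path)
{
    try
    {
        return File.ReadAllText(path);
    }
    catch (FileNotFoundException)
    {
        // The condition File.Exists() would have checked for, handled here
        // atomically, with no race between the check and the open.
        return null;
    }
    catch (IOException ex)
    {
        // The file exists but couldn't be read (locked, disk error, network drop):
        // cases a File.Exists() check would never have protected you from anyway.
        Log(ex); // hypothetical logging helper
        return null;
    }
}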

Yes, handling exceptions is expensive from a performance standpoint; very expensive. Let's get that out of the way: I'm not trying to say that exceptions should be your first choice in every situation. The list of things you can do in programming that are slower is very short. However, the list is not empty. Do you know what's worse than exceptions? I/O. Disk and network are far and away worse. Here's a link and excerpt that show just how much worse they can be:
https://gist.github.com/jboner/2841832

Latency Comparison Numbers
--------------------------
L1 cache reference                            0.5 ns
Branch mispredict                             5   ns
L2 cache reference                            7   ns             14x L1 cache
Mutex lock/unlock                            25   ns
Main memory reference                       100   ns             20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy              3,000   ns
Send 1K bytes over 1 Gbps network        10,000   ns    0.01 ms
Read 4K randomly from SSD*              150,000   ns    0.15 ms
Read 1 MB sequentially from memory      250,000   ns    0.25 ms
Round trip within same datacenter       500,000   ns    0.5  ms
Read 1 MB sequentially from SSD*      1,000,000   ns    1    ms  4X memory
Disk seek                            10,000,000   ns   10    ms  20x datacenter roundtrip
Read 1 MB sequentially from disk     20,000,000   ns   20    ms  80x memory, 20X SSD
Send packet CA->Netherlands->CA     150,000,000   ns  150    ms

If thinking in nanoseconds isn’t your thing, here’s another reference that normalizes a single CPU cycle as 1 second and scales from there:
http://blog.codinghorror.com/the-infinite-space-between-words/

1 CPU cycle             0.3 ns      1 s
Level 1 cache access    0.9 ns      3 s
Level 2 cache access    2.8 ns      9 s
Level 3 cache access    12.9 ns     43 s
Main memory access      120 ns      6 min
Solid-state disk I/O    50-150 μs   2-6 days
Rotational disk I/O     1-10 ms     1-12 months
Internet: SF to NYC     40 ms       4 years
Internet: SF to UK      81 ms       8 years
Internet: SF to AUS     183 ms      19 years
OS virt. reboot         4 s         423 years
SCSI command time-out   30 s        3000 years
Hardware virt. reboot   40 s        4000 years
Physical system reboot  5 m         32 millenia

Taking even the best-case scenario for exceptions, you can access memory at least 480 times while waiting on the first response from a disk, and that's assuming a very fast SSD. Many of us still deal with spinning hard drives, where things get much, much worse.

For a comparison reference, Jon Skeet has blogged about exception handling, where he was able to handle exceptions at a rate of between 42 and 188 per millisecond. While there were some issues with his benchmark, I think the point is spot on: relative to other options, exceptions may not be as bad as you think.
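
If you want a rough feel for the number on your own hardware, a micro-benchmark along these lines will do. This is just a sketch (it is not Jon Skeet's benchmark), and the result will vary wildly by runtime and by whether a debugger is attached:

using System;
using System.Diagnostics;

const int iterations = 100000;
var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    try { throw new InvalidOperationException(); }
    catch (InvalidOperationException) { } // swallow it: we only care about the cost
}
sw.Stop();
Console.WriteLine(iterations / sw.Elapsed.TotalMilliseconds + " exceptions handled per ms");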

And that's only the beginning of the story. When you use .Exists(), you incur this additional cost (and it is an addition: you have to do the same work again when you go to open the file) on every attempt. You pay this cost whether the file exists or not, because the disk still has to go look for it in its file tables. With the exception method, you only pay the extra costs, like unwinding the call stack, in the case of failure.

In other words, yes: exceptions are horribly costly. But compared to the disk check, they're still faster, and not by just a small margin. Thankfully, this check is unlikely to drive your app's overall performance… but I still want to put to bed the "exceptions are slow" argument for this specific task.

MVP No More

For the past five years I've been honored to be recognized as a recipient of a Microsoft MVP Award. As one of roughly 4,000 awardees worldwide, it's an incredible honor, and I've been humbled to be part of that group. Today my most recent award expired, and it will not be renewed.

I can’t say I didn’t see this coming. When I was first awarded five years ago, I was at the top of my game in the programming world. However, shortly thereafter my career took a turn as I left full-time programming to become a Systems Administrator. This was an amazing opportunity for me, but it also meant starting back at the bottom of the knowledge curve. Since then, it’s been harder and harder every year to continue contributing as a programmer at that same high level (though I still do participate a lot on Stack Overflow), and while my systems administration skills have grown, I have years to go in that area before I’ll be anywhere close to MVP-level.

So it's been fun. I appreciated all the benefits, and I'll forever remember the one Summit I was able to attend. I'm not bitter. I wasn't looking for the award when it came, and it means more that I had it at all if Microsoft continues to protect the integrity of the program. It will stay on my resume (with notes for the award years), and it will probably continue to open doors for me long into the future. Maybe someday I'll be re-awarded as a Microsoft Server MVP. Until then, I'll just keep doing what I do.

As a closing note, one of the nice things is that, as I understand it, I can still use this MS MVPs blog site, so I won't have to move any content I've posted here, and I can continue to post whenever I feel like I have something worth posting.

Cleaning an Infected Computer at Work

I have two basic philosophies underpinning how I approach infected computers. To begin with, I don’t really believe in cleaning an infected computer at all. I could cover the reasoning for this in more detail, but I already have a well-voted answer on SuperUser.com that I think says it better than I could fit here. For computers that I manage at work, I capture base hard disk images for our deployed PCs, and can use those to rebuild an infected computer from scratch. Combine this with the fact that most end-user data lives on a server, rather than the local machine, and this process is often faster than cleaning the computer anyway.

That said, I don’t use roaming profiles, and therefore this process is still very disruptive for users. There are literally thousands of settings that go into a user profile, and while most will never change from the default, over time the cumulative effect of a setting here, an option there, can make a real difference. Additionally, just because you have a few pop-ups, it doesn’t mean you have a rootkit.

Therefore, the policy I follow at work is that we do allow some clean-up before resorting to wiping or replacing a computer. However, I limit the techniques I'll use. Here is the full enumerated list:
  • Uninstalling unwanted items via the Control Panel
  • Editing specific registry keys where startup programs are kept (the usual Run keys; see the sketch just after this list)
  • Manually disabling Services and Scheduled Tasks
  • Using MSConfig or the StartUp tab in the Task Manager (Windows 8 and later)
  • Editing the registry to remove a stubborn IE Addon or Chrome Extension
  • Manually deleting any files or folders left behind from an uninstall process
  • Using existing Antivirus software already on the computer
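
As a footnote to the registry item above, here is a minimal C# sketch (assuming .NET on Windows and the Microsoft.Win32 registry classes) that lists the per-machine and per-user Run entries, so you can see what launches at startup before deciding what to remove:

using System;
using Microsoft.Win32;

foreach (var hive in new[] { Registry.LocalMachine, Registry.CurrentUser })
{
    using (var key = hive.OpenSubKey(@"Software\Microsoft\Windows\CurrentVersion\Run"))
    {
        if (key == null) continue;
        foreach (var name in key.GetValueNames())
            Console.WriteLine(hive.Name + @"\...\Run: " + name + " = " + key.GetValue(name));
    }
}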

This is the extent of it. If these don’t get the job done, it’s time for a wipe. Some notable items that are not in the list include rebooting to safe mode, installing an anti-malware tool, and running an anti-virus scan in a clean environment. If I have to do those things, I usually figure I’m better off wiping the machine.

Even with the tools I will use, there’s a catch: I’ll only do this once for a given infection. If, after an initial clean-up attempt, there are still pop-ups or other signs of infection, or if the symptoms return, that’s it. It’s time to nuke the machine and start over.

The other philosophy I follow regards administrator rights. I do allow staff to have administrator access on their own machines by default. This is a practice that pre-dates my time here, and one I was not fond of when I started. However, over time I've come to accept it as more helpful than hurtful… especially since the introduction of UAC. Under no circumstances do I permit UAC to be disabled, and there are some settings that are enforced through Active Directory Group Policy as well. But the main thing is that, by and large, I do permit administrator rights on end-user PCs.

This is important because I'm only willing to wipe a machine for free once. For an end user, if it gets to the point where we're replacing your machine for the second time, you'll find you no longer have administrator rights to your computer when the third machine arrives. I worry that eventually this policy will lead to unreported infections, especially if non-technical management ever embraces it to the point that keeping administrative access becomes tied to being able to do your job. However, to date I've only had to enforce it one time.

What it’s like to live through a Disaster

Less than a week ago, a tornado tore through a small town about 25 miles from my home, leaving it almost completely devastated. I am thankful that no one I know personally was hurt or even lost significant property, but I've heard some stories from the experience, and I am very mindful and prayerful for those still living in this community.

Part of this experience has given me a new understanding of what it means to live through a disaster like this, which I hope I can share with you now. I will list the implications below. Not all of these apply to every family unit, but some family units will be subject to all of them, and some of them may surprise you:
  • No electricity for nearly a week, with no idea when it’s coming back.
  • No refrigeration
  • Personal food reserves destroyed, contaminated, or depleted, with no clear way to get more
  • No running water or sanitation
  • No shelter
  • No cell phone service in the area. While coverage survived the initial disaster, the lack of power in the area eventually overwhelmed providers' ability to keep the cell towers running. And even if service had survived, there would be no way to charge your phone.
  • No news of the outside world. Help is on the way to this community, but many there have no way to know this, because they have lost the ability to use TV, Radio, and even cellular internet.
  • No way to leave, in the numerous cases where vehicles were destroyed.
  • No way to call for help, or any indication that it’s coming, because of the earlier mentioned isolation from electronic communications

Even in the United States, with all of our resources, it's scary how quickly you can become isolated and helpless. While people just a few miles away are fine, this small town is back in the stone age. And if you were hit particularly hard (loss of vehicle and food supplies) and don't know your neighbors well, you could be in a particularly bad spot. Even if you have a strong family or other support network outside of town, you have no way to contact those people, or anyone else who could help. This is real desperation.

Fortunately, help is coming. Tomorrow morning, the church I attend is coordinating with Church of Christ Disaster Relief to open a location that will provide food and supplies to the victims of this disaster. So far, this is the only relief effort to visit this town, though I suspect it’s only the first.

As a member of the technical community, I was particularly interested in writing about this, because of attitudes I saw on some technical community web sites the last time a Christian relief organization provided disaster support. Technical folks often have a decidedly secular mindset; a common sentiment was that Christian relief organizations were really only interested in distributing Bibles, and that Bibles would make up the bulk of the "supplies" provided.

I can tell you that nothing could be further from the truth. Churches of Christ Disaster Relief maintains pre-loaded trucks that are ready to depart as soon as a need is identified. Some of the contents of these trucks are perishable food-stuffs that would need to be rotated if the truck sat too long… which doesn't really happen, because the organization is so active. There are several categories of box in each truck: food boxes that contain enough to feed a family of four for a week; infant care boxes, with diapers and other necessaries; bottled water; cleaning supplies; clothing; and others. All of this is provided at no cost to victims, without discrimination. If more material is needed, more trucks will be sent (later trucks are more selectively loaded). And this is just the first wave. Later efforts will even provide furniture and appliances free of charge to those with real need.

Yes, there are a few Bibles included (one in each food box), but they are not a significant part of the cost or the mass and volume of the materials provided. The organization also often makes use of church buildings as convenient pre-existing locations to centralize its distribution efforts, and of members of those congregations to provide volunteer staffing at the distribution points. Yes, we do this in the name of Christ, because He first loved us, and we are not ashamed of this. But this is real relief, meeting real needs.

 

Can we stop using AddWithValue() already?

I see code examples posted online all the time that look like this:
cmd.Parameters.AddWithValue("@Parameter", txtTextBox1.Text);

This needs to stop. There is a problem with the AddWithValue() function: it has to infer the database type for your query parameter, and sometimes it gets it wrong. This especially happens with database layers that deal in Object arrays or similar for the parameter data, where some of the important information ADO.Net uses to infer the type is missing. However, it can happen even when the .Net type is known. Strings mapping to NVarChar when the column is VarChar or Char is one way; Date vs DateTime is another.

The good news is that most of the time, these type mismatches don’t matter. Unfortunately, that’s not the whole story. Sometimes they do matter, and when it matters, it can matter in a big way.

For example, say you have a varchar database column, but send a string parameter using the AddWithValue() function. ADO.Net will send this to the database as an nvarchar value. The database is not permitted to implicitly convert your nvarchar parameter to a varchar value to match the column type for the query. That would be a narrowing conversion that has the potential to lose information from the original value (because you might have non-Latin characters in the parameter), and if that happened the database might produce the wrong query results. Instead, the database will likely need to convert the varchar column to nvarchar for this query (which is a widening conversion that is guaranteed not to lose information). The problem is that it will need to do this for every row in your table.

This conversion can also happen with other mismatches: for example, date columns may need to be widened to datetime values. And don't even get me started on what happens if you have a mismatch between a date or number type and a string type. Even with nvarchar or nchar, you may find the declared lengths don't match up, such that every value in a column of one length has to be converted to match a parameter of a different length.

If that kind of operation sounds expensive to you (potential run-time conversions for data in a table containing possibly millions of rows), you're right. It is. But that's only the beginning. These newly converted values are technically no longer the same values that are stored in any indexes on this column, making those indexes useless for completing your query. Now we're really hitting below the belt. Index use cuts to the core of database performance. Failing to hit an index can be the difference between a query taking hours or taking seconds, between a query taking minutes or returning instantly. And it all began with AddWithValue().

So what should you do instead? The solution is to be aware of the underlying database type you need to end up with, and then create a query parameter that uses this exact type. Here’s an example using a DateTime database type:
cmd.Parameters.Add("@Parameter", SqlDbType.DateTime).Value = MyDateTimeVariable;

Here’s another example using a decimal(11,4):
cmd.Parameters.Add("@Parameter", SqlDbType.Decimal, 11, 4).Value = MyDecimalVariable;
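
And since strings are where AddWithValue() most often guesses wrong, here's one more for a varchar(50) column (the variable name is hypothetical, like the others):
cmd.Parameters.Add("@Parameter", SqlDbType.VarChar, 50).Value = MyStringVariable;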

Note that while this is slightly longer, each of these is still a single line of code. That's it. This simple change to how you define parameters can save you from significant performance penalties.

The N Word

No, not that N word. I’m talking about N string literal prefixes in T-SQL. Like this:
SELECT * FROM Foo WHERE Bar = N'Baz'

If you don’t know what that N is for, it tells Sql Server that your string literal is an nvarchar, rather than a varchar… that is, that the string literal may contain Unicode characters, so it can support non-ASCII characters. Things like this: 例子. But I can hear you now: that sample is all ASCII. Why does it matter? I’m glad you asked.

Let’s pretend for a minute that the Bar column from that example is a varchar column, and not an nvarchar column after all. We have a type mismatch on the comparison. Pop Quiz: what happens?

We’d like Sql Server to convert the ‘Baz’ literal to a varchar, because that is obviously more efficient. Unfortunately, it won’t work that way. Converting from nvarchar to varchar is a narrowing conversion. There are some things that can’t be accurately expressed when converting from nvarchar to varchar, which means there is a potential to lose information in the conversion. Sql Server is not smart enough to know that this particular literal will map to the smaller data type without data loss. If it converts the literal to a varchar, it might give you the wrong result, and Sql Server won’t do that.

Instead, it has no choice but to convert your Bar column to an nvarchar. I'll say that again: it has no choice but to convert the value from every row in your Bar column to an nvarchar, even if you only get one row in the results. It can't know whether a given row matches your literal until it completes that conversion. Moreover, if you have an index on that column that would have helped, these converted values are no longer the same values that are stored in your index, meaning Sql Server can't even use the index.

This could easily mean a night and day performance difference. A query that used to return instantly could literally take minutes to complete. A query that used to take a few seconds might now run for an hour.

Just in case you think this scenario seems unlikely, keep in mind that ADO.Net uses nvarchar parameter types by default if you use the AddWithValue() function or it otherwise can’t infer the parameter type. If that query parameter compares to a varchar column, you’ll end up in this exact situation, and I see it all the time.
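
The fix on the ADO.Net side is the same one from the AddWithValue() post: declare the parameter's type (and length) explicitly to match the column, so Sql Server never has to touch the column values. A sketch, assuming Bar is a varchar(20):
cmd.Parameters.Add("@Bar", SqlDbType.VarChar, 20).Value = "Baz";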

The good news is that you’re okay going the other direction… at least in this scenario. If Bar is an nvarchar column and you define Baz as a varchar literal, converting the Baz literal would be a widening conversion, which Sql Server will be more than happy to perform. Your Bar column values are unchanged, and so you can still use an index with the Bar column.

I hope your conclusion from this example is not that you should always just omit the N prefix. That’s not the message I want to send at all. In fact, the same Stack Overflow question that prompted this example also included an example that would fail to even execute in the case of type mismatch. Instead, I hope I’ve shown here that it can really matter whether you get your SQL string literals right, and that it pays to keep the exact data types of your columns in mind.

The single most broken thing in CSS

Like most web people, I have tasted the Kool-aid, and it was good. I believe in the use of CSS for layout over tables (except, of course, for tabular data, which happens more than people realize). However, CSS is also known for being quirky and difficult to master, largely because of weak browser implementations. If you’re reading this, you probably just thought of Internet Explorer, but IE is not alone here. Even in the context of browser quirks I think CSS, with practice, holds up pretty well and is actually a very nice system… with one major exception.

CSS needs a built-in, intuitive way to position an arbitrary number of block-level elements visually side-by-side.

Let’s list out the requirements for what this feature should support, keeping in mind the spirit of separating content from presentation:
  1. It should scale to any number of elements, not just two or three.
  2. Multiple options for what to do as content gets wider or the window gets narrower: wrap, scroll, hide, etc.
  3. When styling dictates that the elements wrap to a new “line”, this should happen such that the elements still follow the same flow, in an order that mimics how text would flow (including awareness of the current browser culture).
  4. When styling dictates that elements wrap to a new “line”, you should be able to intuitively (and optionally) style them so each element on a new line will take positions below an element on the first line, so the result resembles a grid. If elements have varying widths, there should be multiple options for how to account for the space when an element further down is much wider than an element above it.
  5. You should not need to do any special styling for the first or last element that is different from other elements.
  6. You should not need to add any extra markup to the document to indicate the first or last element, or to mark the beginning or end of the side-by-side sequence. We want the content separate from the styles, after all.
  7. Since potential side-by-side elements are siblings, in a sense, it is reasonable (and possibly necessary) to expect them to belong to some common parent element, perhaps even as the exclusive direct children of that parent.

I want to point out here that I'd be surprised if everything, or nearly everything, I just described isn't already possible today. However, it's not even close to intuitive. It requires hacks and a level of CSS-fu not easily attained by the common designer or developer, or else an over-reliance on frameworks like Bootstrap.

I believe that what CSS needs, what it's really missing, is for this feature set to be supported in a first-class way: one that is discoverable for new and self-taught web developers and designers, and that works because this is the whole point of this specific set of styles, not because some designer figured out how to shoehorn something else into doing what they wanted. This is the elephant in the CSS room.

I feel pretty comfortable with that requirement outline. Sadly, I no longer do enough web design to take the next step: thinking through how the exact styles and syntax needed to implement this should actually look. I definitely lack the influence to take the step after that: getting a proposal before the committee that could actually get this accepted into the spec. And no one is in a position to take the final step: getting browsers to support it in a reasonably uniform way in a reasonably prompt time frame. All of that makes this post little different from a rant… but a guy can dream, can't he?

What a hunk of Junk!

I admit it: I'm a Star Wars fan, including the Expanded Universe. I've read and reread (recently, even) a number of the books. There's one thing that bothers me about the whole thing: the Millennium Falcon. I feel like other fans focus more than they should on lines like “She's the fastest ship in the fleet” and less on lines like “You came in that thing? You're braver than I thought.”

I think I've finally figured out how best to express this frustration. Take spaceships from the Star Wars universe and translate them into real-world cars. See, I feel like fans have an image of the Millennium Falcon as something like this:

[Image: Chevy Impala Autobot from the Transformers movie. Caption: “Not the Millennium Falcon.”]

Yes, it’s fast. Yes, it’s heavily modified. Yes, it has weapons. Most of all, it’s cool. But does it fit what I see as the Millenium Falcon’s place in the Star Wars universe? No. Not even close. That would look something more like this:

[Image: an old box truck. Caption: “Millennium Falcon.”]

Now that's more like it. In fact, this may even be too nice. The Millennium Falcon is already supposed to be kind of… old by the time the movies start. Above all else, it's supposed to be a light freighter, and nothing says light freighter like the ubiquitous white box truck. Han was a smuggler, and as a smuggler he would not have always wanted to draw attention to himself.

This isn't to say there was nothing special at all about the Millennium Falcon. Picture the truck above after it's had its engine and transmission replaced with the fastest set that can be made to fit, turbo and nitrous canisters included. Maybe throw in some armor plating on the rear door, and give it an upgraded suspension that can handle the speed and weight. This truck could really fly. But in the end, it's still a truck.

Four basic security lessons for undergrad CS Students

Security is a huge problem in the IT industry. It seems like we hear almost weekly about a new systems breach resulting in the leak of millions of user accounts. The recent breaches at Target and Kickstarter come to mind, and those are just the ones that made news. Often this is actually more of a people problem than a technology problem: convincing non-technical employees of the importance of following correct security procedures, and convincing non-technical managers to allow developers to make the proper security investments. But many of these breaches are the result of easily-correctable technical issues.

When looking at student work, I don’t expect them to be hardcore security experts. But I do expect that students have learned four basic lessons by the time they finish their undergrad work. Those lessons are, in no particular order:

1. A general idea of the correct way to store passwords.
Probably this general idea will be slightly wrong, but that's okay. A recent grad is unlikely to be asked to build a new authentication system from the ground up. However, they should know enough to raise red flags if they see something done horribly wrong, and they should know enough to follow what's going on if asked to fix a bug in an existing system. Getting down to nuts and bolts, the student should understand the difference between encrypting and hashing, they should know to use bcrypt or scrypt (or at least not to use md5), and they should know they need a per-user salt.
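
For the "general idea", here is a minimal sketch. It uses PBKDF2 via the built-in Rfc2898DeriveBytes class rather than bcrypt or scrypt (those require a third-party library), but the shape is the same: a random per-user salt, a deliberately slow hash, and storing the salt and hash instead of the password. The PasswordHasher class name is hypothetical.

using System.Security.Cryptography;

static class PasswordHasher // hypothetical helper class
{
    public static (byte[] Salt, byte[] Hash) Hash(string password)
    {
        var salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);                             // random per-user salt

        // 100,000 iterations of PBKDF2-SHA256; tune the iteration count to your hardware
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000, HashAlgorithmName.SHA256))
            return (salt, kdf.GetBytes(32));                // store both; never store the password itself
    }
}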

2. How to avoid Sql Injection Attacks
Or, put another way, how to use sql query parameters in their platform of choice. This assumes, of course, that students have at least some exposure to databases as part of their degree (they should). Sql Injection should be part of that exposure.

I also take issue with the standard comic on the subject (see here). The problem is that it talks about sanitizing database inputs, and that's just the wrong approach. If you're thinking "sanitize", you've already lost: it implies you should write code that examines user input and removes bad things. Real sql injection security lies in quarantining unsafe data. And, yes, I do think undergrad students should know the difference.

Quarantined data does not need to be sanitized. There can be all kinds of attempted bad things in the data, but if the programmer used a mechanism that transmits this data to the database in a completely separate data block from the sql query, the chances of that data being executed as code drop to 0%. On the other hand, the chances of a bug, logical flaw, or ignorance of a potential exploit creeping into sanitizing code? Significantly higher.
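
Here's what quarantining looks like in ADO.Net. The table and column names are hypothetical, and the connection and userInput variables are assumed to already exist; the point is that the user's input travels to the server in a separate data block from the sql text, so it is never parsed as code:

using System.Data;
using System.Data.SqlClient;

// 'connection' is an open SqlConnection and 'userInput' came straight from the user
using (var cmd = new SqlCommand("SELECT Id FROM Users WHERE Name = @Name", connection))
{
    cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 50).Value = userInput;
    var id = cmd.ExecuteScalar();
}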

3. How to avoid Cross-site Scripting / Cross-site Request Forgery Issues
If you work with the web (and today, who doesn't?) you need to understand XSS/CSRF issues and how to mitigate them. This is definitely an issue for students, because they often go straight from college to working on a web property, and may even do some web work before graduating. Simply put, they're at risk for this from day one. The solution to this issue is to be diligent about escaping data, and it's better still if your web platform helps you do this in the appropriate places.
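
For the XSS half, "escaping data" mostly means HTML-encoding anything user-supplied before it gets written into a page. A one-line sketch using the encoder built into .Net (the commentFromUser variable is hypothetical; CSRF protection is a separate mechanism, usually an anti-forgery token supplied by your web framework):
string safeHtml = System.Net.WebUtility.HtmlEncode(commentFromUser);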

4. Don’t write your own security code
Perhaps the most important lesson. Security code is one of those areas that has hidden complexity. It’s easy to write security code that seems to work just fine — it may even pass an exhaustive suite of unit tests — but is still flawed in subtle ways such that you don’t catch it until six months after you get hacked. The solution here is to lean as much as possible on security code provided by the platform of your choice. This code will be written by people who understand the security issues in play. It will be battle-tested and designed in such a way that it helps you get the details right. Most of all, it’s backed and serviced by a vendor such that when (not if) flaws are discovered you are generally able to patch them without having to re-write code yourself.

Bonus Resources: I have two resources that I recommend that students at least be aware of. The idea is not to have a complete grasp of everything they cover, but to know where to look to get more information. The first is OWASP, especially their Top 10 list. The second is this Stack Exchange question: What technical details should a programmer of a web application consider before making the site public?

What to look for in a bargain Android Tablet

I’ve seen a lot of bargain Android Tablets lately, and I know a lot of people who are interested in getting one, but don’t think they can afford it. I’ve got news for you: they’re cheaper than you might think. Tiger Direct recently had one for $20 after rebate. These tablets are a hot item. The trick is, how do you know that you’re getting something worth having? A lot of those cheap tablets are not going to do what you expect of them.

Here are my tips for finding a worth-while bargain Android tablet (January 2014 edition):

1. Look for at least Android 4.1 or newer out of the box. Android is a free operating system (well, sort of), and so in theory you could update an older tablet yourself, but there's more going on here than that. Anything older than Android 4.1, and you're likely looking at last year's tablet coming off the shelf, and last year's bargain tablets were, well, just plain bad. There's a reason I don't have a 2013 edition of this post. Android is also free (or nearly so) to manufacturers, and so there's no reason to see anything older than this on a new device.
2. Minimum 1.2 GHz dual-core processor. Emphasis on the dual core; that is what will keep the operating system feeling responsive, even when running some of the more demanding apps.
3. Dual (front and rear) cameras. Many of the bargain tablets will cut out one or both cameras to keep costs down, but as someone who’s had a couple different tablets for a while now I can say with confidence that you really will want a camera on both sides. It’s the feature I miss most on my Kindle Fire. The front camera will be used mainly for video chat, and doesn’t need to be great, but the rear camera should be at least 3MP (more would be better, but remember: we’re bargain hunting).
4. A MicroSD card slot. This will let you turn a cheap 4 GB tablet into a generous 36 GB device for less than $30 extra. Take that, iPad. You can skip this if you find one that has generous storage out of the box.
5. Capacitive touch screen. It's rare to see a resistive touch screen on a tablet any more, but if you don't pay attention you can get caught out here. Even with a capacitive screen, this is the place where the manufacturer is most likely to cut corners, and you may end up with a display that isn't sensitive or responsive enough. At this point, though, it's hard to suss out the good ones from the bad.
6. Minimum 200 ppi (pixels per inch). You'll have to do the math here (there's a worked example just below this list), but if you're looking at bargain tablets that likely means a 7-inch device, and that means at least something around 1280×720. 800×600, or even 1024×768, are not likely to cut it. Anything less than this, and the tablet screen won't look clear, and small text on the small screen will be harder to read. More is better, but remember: we're shopping for bargains. Sadly, this item is likely to push your purchase up over $100 at the moment. If you're willing to fudge on this (I advise against it), you can get some crazy deals that meet all the other points.
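
To make the math in item 6 concrete: pixels per inch is the diagonal resolution divided by the diagonal size, i.e. sqrt(width² + height²) / inches. For a 7-inch screen at 1280×720, that's sqrt(1280² + 720²) ≈ 1469 pixels across the diagonal, and 1469 / 7 ≈ 210 ppi, which clears the bar. The same screen at 1024×768 works out to about 183 ppi, and at 800×600 only about 143 ppi.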

I’d like to have a note about the battery, but at this point I don’t have a feel yet for what to look for in that department. Still, follow these six rules, and you should be able to get a decent, off-brand tablet that you’ll be very happy to have, for a price much less than you’d expect.