"What's the point," pondered Alice, "of getting other people to stuff things in a box, if one cannot ever get them out?"
OK, she never did say that, but it's the sort of thing Alice would wonder.
Particularly if she noticed how often modern businesses send around Word forms with input fields designed to be filled out by team members, only to then be manually copied into spreadsheets, databases, or other documents.
I'd put this as the second most irritating waste of document functionality.
And it doesn't have to be this way.
First, let's look at what you get with a Word form. There really isn't any beast quite as specific as a "Word form". It's just a Word document. With form fields. Form fields are places into which users can type text, check boxes, select from drop-down lists, etc.
Once form fields have been put into a document, the original document author can "restrict" the document such that only editing the form fields is allowed. This is usually done with a password, to make it less likely that others will edit the document beyond the form fields.
The presence of a password should not be taken to indicate that this is a security measure.
Removing the restriction can be done by guessing the password, or by accessing the settings.xml inside the docx file and changing the value of "w:enforcement" from "1" to "0". Other methods include saving to RTF, then editing the file in a text editor before saving it as docx again.
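If you're curious, that settings.xml flip can even be automated. Here's a rough PowerShell sketch – the function name is mine, it assumes the document uses the standard w:documentProtection markup in word/settings.xml, and you should run it on a copy rather than your only original:
# Load zip handling, then rewrite word/settings.xml in place.
Add-Type -AssemblyName System.IO.Compression.FileSystem
Function Remove-DocXRestriction ($filename) {
    $fileloc = [System.IO.Path]::Combine($pwd, $filename)
    $zip = [System.IO.Compression.ZipFile]::Open($fileloc, 'Update')
    $entry = $zip.Entries | where { $_.FullName -eq 'word/settings.xml' }
    $reader = New-Object System.IO.StreamReader($entry.Open())
    $settings = $reader.ReadToEnd()
    $reader.Close()
    # Turn off enforcement; the stored password hash no longer matters.
    $settings = $settings -replace 'w:enforcement="1"', 'w:enforcement="0"'
    $entry.Delete()
    $newentry = $zip.CreateEntry('word/settings.xml')
    $writer = New-Object System.IO.StreamWriter($newentry.Open())
    $writer.Write($settings)
    $writer.Close()
    $zip.Dispose()
}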
Restricting the document is done to make it less likely that blithe nonces will return your document to you with changes outside of the fields you've provided to them, or with fields removed. This is important, because you can't as easily extract data from a document if you don't know where it is.
Here's what a form looks like when it's restricted for editing, and has a number of form field elements provided – I've given a text field for a person's name, a drop-down list for their zodiac sign, and a check box for education level. This is the sort of thing you might expect a form to be really useful for collecting.
Now that you've sent this out to a hundred recipients, though, you want to extract the data from each form.
First we've got to get the part of the document containing the data out. Knowing, as we do, that a docx file is just a ZIP file full of XML files, we could unzip it and go searching for the data. I've already done that – the data is in the file called "word/document.xml". You could just rename the docx file to a zip file, open it in Explorer, navigate into the "word" folder, and then drag the document.xml file out for handling, but that's cumbersome, and we want an eventual automated solution.
Yes, you could write this in a batch file using whatever ZIP program you've downloaded – it wouldn't be that difficult – but I'm thinking about PowerShell a lot these days for my automation. Here's code that will take a docx file and extract just the word/document.xml component into an output file whose name is provided.
# Load up the types required to handle a zip file.
Add-Type -AssemblyName System.IO.Compression.FileSystem

Function Get-DocXDocFile ($infilename, $outfilename) {
    $infileloc = [System.IO.Path]::Combine($pwd, $infilename)
    $zip = [System.IO.Compression.ZipFile]::OpenRead($infileloc)
    $zip.Entries | where { $_.FullName -eq "word/document.xml" } | foreach {
        $outfileloc = [System.IO.Path]::Combine($pwd, $outfilename)
        # Overwrite the output file if it already exists.
        [System.IO.Compression.ZipFileExtensions]::ExtractToFile($_, $outfileloc, $true)
    }
    # Release the file handle on the docx.
    $zip.Dispose()
}
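For example, with a made-up form file name:
Get-DocXDocFile "AliceForm.docx" "AliceForm.xml"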
By now, if you're like me, you've opened up that XML file and looked into it, and decided you don't care that much to read its entrails.
That's OK, I did it for you.
The new-style fields are all in "w:sdt" elements, and can be found by the "w:tag" name under the "w:sdtPr" element.
Old-style fields are all in "w:fldChar" elements, and can be found by the "w:name" value under the "w:ffData" element.
In XPath – a way of describing how to find a specific element or attribute in an XML file – that's expressed as follows:
//w:sdt/w:sdtPr/w:tag[@w:val='Txt1Tag']/../..
//w:fldChar/w:ffData/w:name[@w:val='Text1']/../..
This does assume that you gave each of your fields names or tags. But it would be madness to expect data out if you aren't naming your fields.
If you're handy with .NET programming, you're probably half way done writing the code to parse this using XmlDocument.
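If PowerShell counts as handy enough, here's a rough sketch of that parsing, using Select-Xml rather than raw XmlDocument – the file name and the Txt1Tag tag come from my example above, and depending on how the field is laid out in your document, the text may sit a level or two deeper than I show here:
# The w: prefix must be registered as a namespace before XPath will match it.
$ns = @{ w = 'http://schemas.openxmlformats.org/wordprocessingml/2006/main' }
$field = Select-Xml -Path .\document.xml -Namespace $ns `
    -XPath "//w:sdt/w:sdtPr/w:tag[@w:val='Txt1Tag']/../.."
# The field's current contents live under its w:sdtContent element.
$field.Node.sdtContent.r.t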
If you're not handy with .NET programming, you might need something a little (but sadly, not a lot) easier.
Remember those XPath elements? Wouldn't it be really cool if we could embed those into a document, and then have that document automatically expand them into their contents, so we could do that for every form file we've got?
Well, we can.
Short for Extensible Stylesheet Language Transformations (which is definitely long enough to need something to be short for it), XSLT really has no good pronunciation, because I'm never going to say something that sounds like "ex-slut" at work. It's a way to turn one XML-formatted document into some other kind of output.
Let's say we're working with the document I outlined above (and which I will forget to attach to this blog post until someone points it out). We've already extracted document.xml, and with the right XSL file, and a suitable XSLT command (such as Microsoft's msxsl command-line tool, or whatever works in your native environment), we can do something like this:
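In PowerShell, that transformation step might look like this – a minimal sketch, where fields.xsl and fields.txt are placeholder names for your stylesheet and output:
$xslt = New-Object System.Xml.Xsl.XslCompiledTransform
$xslt.Load([System.IO.Path]::Combine($pwd, 'fields.xsl'))
# Turn the extracted document.xml into a plain-text report.
$xslt.Transform([System.IO.Path]::Combine($pwd, 'document.xml'), [System.IO.Path]::Combine($pwd, 'fields.txt'))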
Maybe instead of text, you prefer something more like CSV:
I will probably forget to attach the XSL stylesheets that I used for these two transformations to this blog post.
Maybe next time we can see about building this into a tool...
Here are the files I forgot to add: ExtractData
Whether it's "No-shave November" or "Movember", there's a lot of attention given this time of year to men's health in general, and cancer in particular.
I don't take part in either of these events, partly because I don't like the way a beard / moustache feels, but mostly because I already spend my November extremely aware of men's cancer issues.
So, let me tell you, on this International Men's Day, how my right testicle tried to kill me. Warning – rude words ahead, including the dreaded "c-word".
A little over fifteen years ago, I was living a fantastic life.
A wife, a six-year-old son, a house in a nice suburb of Austin, working from home on my own projects, and making enough money with those projects to justify doing so.
As anyone who's ever watched any "funny home video" shows on TV will tell you, the purpose of a six-year-old is to throw things at your crotch, or to swing things at your crotch, or to hit you in your crotch, or to head-butt you in your crotch.
OK, so that's maybe not his sole purpose, but that year it seemed like this was happening more often than usual. It wasn't, of course, but it was noticeable that I was ... feeling the impact a little more keenly than usual.
I checked, my wife checked, and we concurred – something was definitely not as it had been. I mean, everyone knows that a man's testicles aren't the same size and shape on each side, and I'd been blessed with a particularly disparate pair from my teenage years.
But this was something new – swelling that just increased gradually, and a firmness that was inappropriately placed.
It was time to see the doctor.
Even knowing this, and reading about how badly – and how quickly – testicular diseases can impact men, it was extraordinarily difficult to face the task of picking up the phone, calling to speak to a doctor's [female] receptionist, and telling her exactly why I wanted to come and see the doctor. Nonetheless, I girded my loins as much as I could, swallowed hard, and made the call.
The key is to remind yourself that this is probably the fifth call that receptionist has received this week on the same topic, and that she wouldn't be working in a doctor's office if she weren't ready to hear medical terms briefly describing anatomical parts. I'm surprised how quickly I came to this conclusion, given how many decades it took me to learn that when a doctor asks "so, how are you doing today?", they actually want to hear the details, rather than "oh, fine, thanks, and you?"
The doctor's visit was quick and clinical, just what you'd hope for. A flashlight applied to the nether regions, in much the same way you might check a hen's egg for occupants, a little uncomfortable palpation, and a quick inspection of nearby things while you have your underpants down.
"You've got a hydrocele," he said, doing that thing with the rubber gloves where you snap them off, startling an already nervous patient. "A short surgery should fix that."
Relief. Nothing quite as horrifying or scary as I had expected.
"I'll set you up with a urologist, and we'll get that taken care of in the next couple of weeks. Good luck."
I'd never had a doctor wish me "good luck" before, and it quite chilled me.
I visited the urologist, got set up for surgery, and discussed plans with my wife.
It was always in the back of my head that this could be something more than merely having a little extra fluid to drain.
So we talked about the C-word. I think of it that way, because on all the forms since, this is the one word the medical establishment goes out of its way to avoid writing in full. There are long words, foreign words, culturally taboo words, and all of them are written in full on some or other medical forms. There are abbreviations, but no word more than this one results in hardened medical professionals ceding to decency and refusing to name it in full:
You kind of guessed that was going to be the result, right?
We kind of did, too, and had discussed the idea that if there were any cancerous signs, quite frankly I preferred being a living eunuch, if that was necessary, to being a dead, but otherwise intact, cancerous corpse. It seems such an obvious decision to make, but it's still a very hard one to bring to bear.
And my wife did so on her own.
Because the only way to tell if the testicle looked cancerous was while I was under general anaesthetic in the operating room.
And sure enough, the doctor came out mid-surgery, while I was away with the fairies, to talk to my wife about the situation at hand. I can only imagine how that conversation went, so I shan't try to replay it here. I can only express how truly grateful I am that my wife gave consent to do what we had already discussed – to remove that cancerous nasty thing and send it to a lab for study.
So I woke up to a woman looking unutterably upset at having had to make life-altering medical decisions – decisions for which I have always been truly grateful. There literally isn't a day that goes by on which I wish she'd made any other choice.
And yet even to this day, it still bothers her – that's how upsetting it is to be on the outside of this disease.
It wasn't much fun on the inside, either, to be honest, and that's my story which I can tell.
This was all in the week before Thanksgiving, 2002, a year when the first movie featuring an all-CGI Incredible Hulk was being advertised on the TV.
Poor Bruce Banner, strapped to a table, unable to move, while gamma rays coursed through his body under the control of a malfunctioning computer, turning him into the hangriest super-anti-hero ever.
After a trip to San Antonio, during which I felt every pothole on I-35 from Austin, to have Thanksgiving dinner with my in-laws, we returned home and started observational and preventive treatment as follow-up for good ole "testicular C".
First, the tattoos. I have five tattoos now, each one a single dot, in the shape of a cross.
For targeting.
I wasn't exactly strapped to a table, but I was unable to move, while gamma rays coursed through my body, laser cross-hairs ensuring that the focused radiation hit only the right parts of my intestines. They call it radiotherapy, and when you go to an oncologist / radiologist to get radiotherapy in Austin in 2002, you sit in a waiting room surrounded by inspirational photos of Lance Armstrong. Whatever you feel about his drug use while winning the Tour de France competing against others who almost certainly used most of the same drugs themselves, he continues to be inspirational to many cancer survivors like myself, simply for having survived enough to be able to ride a bike.
Testicular cancer doesn't travel across, it goes up – so the process is: remove the testicle, fry the intestines lightly, and monitor the chest with ongoing X-rays just to make sure. Removing the testicle is called an "orchiectomy" – true story, the orchid plant is named after testicles, because that's what the plant's bulbs allegedly look like. This is why testicular cancer awareness pins are orchid-coloured.
One of the side effects you think of with any cancer treatment is serious nausea, and this is definitely the case with radiotherapy. It makes you feel uncomfortably unwell. American medical care being run by insurance companies, I was given leave to have fifteen anti-nausea pills. For 25 days of treatment. During which I'd need multiple pills per day.
The only thing to do – snack on saltine crackers, and where possible actually cook some meals, at least for my son. Bland food was really pretty much all I could manage. To this day, he quite rightly refuses to eat chicken and rice.
Because my wife had to return to work, and was travelling as a result, I drove myself to appointments. That's probably my biggest mistake in all of this – the American Cancer Society offers free rides to patients attending hospital and doctor appointments, and has many other services besides. Take advantage of them; I donate to them specifically for you to use their services.
After that, every six months to a year, I'd get a CT scan of my abdomen, and a blood test every month. CT scans are not the most comfortable of procedures, particularly with the iodine contrast dyes.
Once in a while, the person administering the blood test would question whether the test was really for me. On my doctor's advice, I would ask them to re-check the form. It turns out that I was basically being given a monthly pregnancy test, to ensure the cancer wasn't coming back.
Still more surgeries were in my future over the next year – apparently, skin likes to stick to skin in unusual situations and in uncomfortable ways.
The insurance company raised our rates – presumably in line with regular price rises, but to the point where it was difficult to afford. After all, even back before the ACA, it wasn't right to raise insurance rates just because someone got sick. However, what WAS legal back then was the ability of other insurance providers to call the cancer a pre-existing condition, and to use that as reason to either refuse to sell me a policy, or to jack up the rates. Personal insurance policies are expensive to begin with, but when you can't shop around (or threaten to do so), you're really out of luck.
And that's why I took the Microsoft job, and jacked in my personal business for the most part. Because American health insurance kills the American dream more often than it deserves to.
So, the final lesson – and there always is one – is that if you are a man, aged between twenty and thirty-five, or you know someone who fits, or will fit, that description, know that it's important to check your health – actually touch and feel your body, particularly your "man parts" – on a regular basis. When things change in a way that isn't expected, it's really important to give your doctor a call. That week. Perhaps even the day that you notice it. The person who takes your call has heard it all before – and if you aren't comfortable talking to them, you can actually ask to speak to a nurse, a physician's assistant, or even specifically to a man, if that's what you need to feel comfortable covering this.
Your doctor will tell you if it's important, or something not to worry about. They'll give you advice on what to watch for in future, and wish you good luck if you need it.
Above all, don't literally die of embarrassment.
I've posted before how I'd like to get my source code out of the version control system I used to use, because it was no longer supported by the manufacturer, and into something else.
I chose git, in large part because it uses an open format, and as such isn't going to suffer the same problem I had with ComponentSoftware's CS-RCS.
Now that I've figured out how to use Bash on Ubuntu on Windows to convert from CS-RCS to git, using the rcs-fast-export.rb script, I'm also looking to protect my source control investment by storing it somewhere off-site.
This has a couple of good benefits – one is that I'll have access to it when I'm away from my home machine, another is that I'll be able to handle catastrophic outages, and a third is that I'll be able to share more easily with co-conspirators.
I'm going to use Visual Studio Team Services (VSTS), formerly known as Visual Studio Online, and before that as Team Foundation Service. You can install its on-premises sibling, Team Foundation Server, on your own server, or you can use the online tool at <yourdomain>.visualstudio.com. If your team is smaller than five people, you can do this for free, just like you can use Visual Studio 2015 Community Edition for free. This is a great way in which Microsoft supports hobbyist developers, open source projects, college students, etc.
After my last post on the topic, you will have used git and rcs-fast-export.rb to create a Git repository.
You may even have done a "git checkout" command to get the source code into a place where you can work on it. That's not necessary for our synchronisation to VSTS, because we're going to sync the entire repository. This will work whether you are using the Bash shell or the regular Command Prompt, as long as you have git installed and in your PATH.
If you've actually made any changes, be sure to add and commit them to the local Git repository. We don't want to lose those!
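From the root of the workspace, that's something like this (the message is only an example):
git add -A
git commit -m "Changes made since the RCS export"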
I'm also going to assume you have a VSTS account. First, visit the home page.
Under "Recent Projects & Teams", click "New".
Give it a name and a description – I suggest leaving the other settings at their default of "Agile" and "Git" unless you have reason to change. The setting of "Git" in particular is required if you're following along, because that's how we're going to synchronise next.
When you click "Create project", it'll think for a while...
And then you'll have the ability to continue on. Not sure my team's actually "going to love this", considering it's just me!
Yes, it's not just your eyes, the whole dialog moved down the screen, so you can't hover over the button waiting to hit it.
Click "Navigate to project", and you'll discover that there's a lot waiting for you. Fortunately a quick popup gives you the two most likely choices you'll have for any new project.
As my team-mates will attest, I don't do Kanban very well, so we'll ignore that side of things; I'm mostly using this just to track my source code. So, hit "Add Code", and you get this:
Don't choose any yet
"Clone to your computer" – an odd choice of direction, since this is an empty source directory. But, since it has a "Clone in Visual Studio" button, this may be an easy way to go if you already have a Visual Studio project working with Git that you want to tie into this. There is a problem with this, however: if you're working with multiple versions of Visual Studio, any attempt from VSTS to open Visual Studio will only open the most recently installed version. I found no way to make Visual Studio 2013 automatically open from the web for Visual Studio 2013 projects, although the Visual Studio Version Selector will make the right choice if you double-click the SLN file.
"Push an existing repository from command line" – this is what I used. A simple press of the "Copy to clipboard" button gives me the right commands to feed to my command shell. You should run these commands from somewhere in your workspace; I would suggest the root of the workspace, so you can check that you have a .git folder to import before you run the commands.
BUT – I would strongly recommend not dismissing this screen while you run these commands; you can't come back to it later, and you'll want to add a .gitignore file.
The other options are:
"Import a repository" – this is if you're already hosting your git repository on some other web site (like Github, etc), and want to make a copy here. This isn't a place for uploading a fast-import file, sadly, or we could shortcut the git process locally. (Hey, Microsoft, you missed a trick!)
"Initialize with a README or gitignore" – a useful couple of things to do. A README.md file is associated with git projects, and instructs newcomers to the project about it – how to build it, what it's for, where to find documentation, etc, etc – and you can add this at any time. The .gitignore file tells git what file names and extensions to not bother putting into version control. Object files, executables, temporary files, machine-generated code, PCH & PDB files, etc, etc. You can see the list is long, and there's no way to add a .gitignore file with a single button click after you've left this page. You can steal one from an empty project, by simply copying it – but the button press is easier.
I've found it useful to run the "git remote" and "git push" commands from the command-line (and I choose to run them from the Bash window, because I'm already there after running the RCS export), and then add the .gitignore. So, I copy the commands and send them to the shell window, before I press the "Add a .gitignore" button, choose "Visual Studio" as my gitignore type, and then select "Initialize":
First, let's start with a recap of using the rcs-fast-export command to bring the code over from the old RCS to a new Git repository:
Commands in that window:
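The gist of those commands is something like the following – the path is a placeholder for wherever your CS-RCS repository tree lives:
cd /mnt/c/RCS/C/Projects/EFSExt
git init
rcs-fast-export.rb . | git fast-import
git checkout -f master    # populate the working tree from the imported history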
Commands:
No commands – we've imported and are ready to sync up to the VSTS server.
Commands (copied from the "Add Code" window):
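Give or take your own project's URL (this one is made up), those commands are:
git remote add origin https://mydomain.visualstudio.com/DefaultCollection/_git/EFSExt
git push -u origin --all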
Your solution still has lines in it dictating what version control you're using. So you want to unbind that.
[If you don't unbind existing version control, you won't be able to use the built-in version control features in Visual Studio, and you'll keep getting warnings from your old version control software. When you uninstall your old version control software, Visual Studio will refuse to load your projects. So, unbinding your old version control is really important!]
I like to do that in a different directory from the original, for two reasons:
So, now it's Command Prompt window time...
Yes, you could do that from Visual Studio, but it's just as easy from the command line. Note that I didn't actually enter credentials here – they're cached by Windows.
Commands entered in that window:
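Roughly this, again with a made-up URL, cloning into a fresh directory so the original stays untouched:
git clone https://mydomain.visualstudio.com/DefaultCollection/_git/EFSExt EFSExt-clone
cd EFSExt-clone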
Your version control system may complain when opening this project that it's not in the place it remembers being in... I know mine does. Tell it that's OK.
[Yes, I've changed projects, from Juggler to EFSExt. I suddenly realised that Juggler is for Visual Studio 2010, which is old, and not installed on this system.]
Now that we've opened the solution in Visual Studio, it's time to unbind the old source control. This is done by visiting the File => Source Control => Change Source Control menu option:
You'll get a dialog that lists every project in this solution. You need to select every project that has a check-mark in the "Connected" column, and click the "Unbind" button.
Luckily, in this case, they're already selected for me, and I just have to click "Unbind":
You are warned:
Note that this unbinding happens in the local copy of the SLN and VCPROJ, etc files – it's not actually going to make any changes to your version control. [But you made a backup anyway, because you're cautious, right?]
Click "Unbind" and the dialog changes:
Click OK, and we're nearly there...
Finally, we have to sync this up to the Git server. And to do that, we have to change the Source Control option (which was set when we first loaded the project) to Git.
This is under Tools => Options => Source Control. Select the "Microsoft Git Provider" (or in Visual Studio 2015, simply "Git"):
Press "OK". You'll be warned if your solution is still bound in some part to a previous version control system. This can happen in particular if you have a project which didn't load, but which is part of this solution. I'm not addressing here what you have to do for that, because it involves editing your project files by hand, or removing projects from the solution. You should decide for yourself which of those steps carries the least risk of losing something important. Remember that you still have your files and their history in at least THREE version control systems at this point – your old version control, the VSTS system, and the local Git repository. So even if you screw this up, there's little real risk.
Now that you have Git selected as your solution provider, you'll see that the "Changes" option is now available in the Team Explorer window:
Save all the files (but I don't have any open!) by pressing Ctrl-Shift-S, or selecting File => Save All.
If you skip this step, there will be no changes to commit, and you will be confused.
Select "Changes", and you'll see that the SLN files and VCPROJ files have been changed. You can preview these changes, but they basically are to do with removing the old version control from the projects and solution.
It wants a commit message. This should be short and explanatory. I like "Removed references to old version control from solution". Once you've entered a commit message, the Commit button is available. Click it.
It now prompts you to Sync to the server.
So click the highlighted word, "Sync", to see all the unsynced commits – you should only have one at this point, but as you can imagine, if you make several commits before syncing, these can pile up.
Press the "Sync" button to send the commit up to the server. This is also how you should usually get changes others have made to the code on the server. Note that "others" could simply mean "you, from a different computer or repository".
Check on the server that the history on the branch now mentions this commit, so that you know your syncing works well.
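If you'd rather stay at the command line for the commit-and-sync step, the rough equivalent is:
git add -A
git commit -m "Removed references to old version control from solution"
git push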
Sure, it seems like a long-winded process, but most of what I've included here is pictures of me doing stuff, and the stuff I'm doing is only done once, when you create the repository and populate it from another. Once it's in VSTS, I recommend building your solution, to make sure it still builds. Run whatever tests you have to make sure that you didn't break the build. Make sure that you still have valid history on all your files, especially binary files. If you don't have valid history on any files in particular, check the original version control, to see if you ever did have it. I found that my old CS-RCS implementation was storing .bmp files as text, so the current version was always fine, but the history was corrupted. That's history I can't retrieve, even with the new source control.
Now, what about those temporary repositories? Git makes things really easy – the Git repository is in a directory off the root of the workspace, called ".git". It's hidden, but if you want to delete the repository, just delete the ".git" folder and its contents. You can delete any temporary workspaces the same way, of course.
I did spend a little time automating the conversion of multiple repositories to Git, but that was rather ad-hoc and wobbly, so I'm not posting it here. I'd love to think that some of the rest of this could be automated, but I have only a few projects, so it was good to do by hand.
No programmer should be running an unsupported, unpatched, out-of-date old version control system. That's risky, not just from a security perspective, but from the perspective that it may screw up your files, as you vary the sort of projects you build.
No programmer should be required to drop their history when moving to a new version control system. There is always a way to move your history. Maybe that way is to hire a grunt developer to fetch versions dated at random/significant dates throughout history out of the old version control system, and check them in to the new version control system. Maybe you can write automation around that. Or maybe you'll be lucky and find that someone else has already done the automation work for you.
Hopefully I've inspired you to take the plunge of moving to a new version control system, and you've successfully managed to bring all your precious code history with you. By using Visual Studio Team Services, you've also got a place to track features and bugs, and collaborate with other members of a development team, if that's what you choose to do. Because you've chosen Git, you can separate the code and history at any time from the issue tracking systems, should you choose to do so.
Let me know how (if?) it worked for you!
In which I move my version control from ComponentSoftware's CS-RCS Pro to Git while preserving commit history.
[If you don't want the back story, click here for the instructions!]
OK, so having watched the video I linked to earlier, I thought I'd move some of my old projects to Git.
I picked one at random, and went looking for tools.
I'm hampered a little by the fact that all my old projects used ComponentSoftware's "CS-RCS Pro".
A couple of really good reasons:
But you know who doesn't use CS-RCS Pro any more?
That's right, ComponentSoftware.
It's a dead platform, unsupported, unpatched, and belongs off my systems.
One simple reason – if I move off the platform, I face the usual choice when migrating from one version control system to another:
The second option seems a bit of a waste to me.
OK, so yes, technically I could mix the two modes, by using CS-RCS Pro to browse the ancient history when I need to, and Git to browse recent history, after starting Git from a clean working folder. But I could see a couple of problems:
So, really, I wanted to make sure that I could move my files, history and all.
I really didn't have a good way to do it.
Clearly, any version control system can be moved to any other version control system by the simple expedient of:
But, as you can imagine, that's really long-winded and manual. That should be automatable.
In fact, given the shared APIs of VSS-compatible source control services, I'm truly surprised that nobody has yet written a tool to do basically this task. I'd get on it myself, but I have other things to do. Maybe someone will write a "VSS2Git" or "VSS2VSS" toolkit to do just this.
There is a format for creating a single-file copy of a Git repository, which Git can process using the command "git fast-import". So all I have to find is a tool that goes from a CS-RCS repository to the fast-import file format.
So, clearly there's no tool to go from CS-RCS Pro to Git. There's a tool to go from CS-RCS Pro to CVS, or there was, but that was on the now-defunct CS-RCS web site.
But... remember I said that it's compatible with GNU RCS?
And there are scripts to go from GNU RCS to Git.
OK, so the script for this is written in Ruby, and as I read it, there seemed to be a few things that made it look like it might be for Linux only.
I really wasn't interested in making a Linux VM (easy though that may be) just so I could convert my data.
Everything changed with the arrival of the recent Windows 10 Anniversary Update, because along with it came a new component.
Bash on Ubuntu on Windows.
It's like a Linux VM, without needing a VM, without having to install Linux, and it works really well.
With this, I could get all the tools I needed – GNU RCS, in case I needed it; Ruby; the Git command line – and then I could try this out for myself.
Of course, I wouldn't be publishing this if it wasn't somewhat successful. But there are some caveats, OK?
I've tried this a few times, on ONE of my own projects. This isn't robustly tested, so if something goes all wrong, please by all means share, and people who are interested (maybe me) will probably offer suggestions, some of them useful. I'm not remotely warrantying this or suggesting it's perfect. It may wipe your development history out of your one and only copy of version control... so don't do it on your one and only copy. Make a backup first.
GNU RCS likes to store files in one of two places – either in the same directory as the working files, but with a ",v" pseudo-extension added to the filename, or in a sub-directory off each working folder, called "RCS" and with the same ",v" extension on the files. If you did either of these things, there are no surprises. But...
CS-RCS Pro doesn't do this. It has a separate RCS Repository Root. I put mine in C:\RCS, but you may have yours somewhere else. Underneath that RCS Repository Root is a full tree of the drives you've used CS-RCS to store (without the ":"), and a tree under that. I really hope you didn't embed anything too deep, because that might bode ill.
Initially, this seemed like a bad thing, but because you don't actually need the working files for this task, you can pretend that the RCS Repository is actually your working space.
Maybe this is obvious, but it took me a moment of thinking to decide I didn't have to move files into RCS sub-folders of my working directories.
Make this a "flag day". After you do this conversion, never use CS-RCS Pro again. It was good, and it did the job, and it's now buried in the garden next to Old Yeller. Do not sprinkle the zombification water on that hallowed ground to revive it.
This also means you MUST check in all your code before converting, because checking it in afterwards will be ... difficult.
Assumption: You have Windows 10.
This might look like a lot of instructions, but I mostly just wanted to be clear. This is really quick work. If you screw up after the "git init" command, simply "rm -rf .git" to remove the new repository.
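Condensed, the whole dance looks something like this – the paths are placeholders, and this is a sketch of what worked for me rather than a guaranteed recipe:
# Inside Bash on Ubuntu on Windows:
sudo apt-get install git ruby rcs       # rcs itself is only a safety net
cd /mnt/c/RCS/C/Projects/MyProject      # your CS-RCS repository tree, not the working folder
git init
/path/to/rcs-fast-export.rb . | git fast-import
git checkout -f master                  # populate the working tree from the imported history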
The Atlantic today published a reminder that the Associated Press has declared in their style guide as of today that the word "Internet" will be spelt with a lowercase "i" rather than an uppercase "I".
The title is "Elegy for the Capital-I Internet", but manages to be neither elegy nor eulogy, and misses the mark entirely, focusing as it does on the awe-inspiring size of the Internet being why the upper-case initial was important; then moving to describe how its sheer ubiquity should lead us to associating it with a lower-case i.
The "Internet", capital I, gives the information that this is the only one of its kind, anywhere, ever. There is only one Internet. A lower-case I would indicate that there are several "internets". And, sure enough, there are several lower-class networks-of-networks (which is the definition of "internet" as a lower-case noun).
I'd like to inform the people who are engaging in this navel-gazing debate over big-I or small-i that there functionally is only exactly one Internet. When their cable company came to "install the Internet", there was no question on the form to say "which internet do you want to connect to?", and people would have been rightly upset if there had been.
So, from that perspective, very much capital-I is still the right term for the Internet. There's only one. Those other smaller internets are not comparable to "the Internet".
From a technical perspective, we're actually at the time when it's closest to being true that there are two internets. We're in the midst of the long, long switch from IPv4 to IPv6. We've never done that before. And, while there are components of each that will talk to the other, it's possible to describe the IPv6 and IPv4 collections of networks as two different "internets". So, maybe small-i is appropriate, but for none of the reasons this article describes.
Having said that, IPv6 engineers work really, really hard to make sure that users just plain don't notice that there's a second internet while they're building it, and it just feels exactly like it would if there was still only one Internet.
Again, you come back to "there is only one Internet"; you don't get to check a box that selects which of several internets you are going to connect to; it's not like "the cloud", where there are multiple options. You are either connected to the one Internet, or you're not connected to any internet at all.
Capital I, and bollocks to the argument from the associated press – lower-cased, because it's not really that big or important, and neither is the atlantic. So, with their own arguments (which I believe are fallacious anyway), I don't see why they deserve an upper-case initial.
The Atlantic, on the other hand – that's huge and I wouldn't want to cross it under my own steam.
And the Internet, different from many other internets, deserves its capital I as a designation of its singular nature. Because it's a proper noun.
The Ubuntu "Circle of Friends" logo.
Depending on the kind of company you work at, it's either:
If you work at the first place, reach out to me on LinkedIn – I know some people who might want to work with you.
If you're at the third place, you should probably get out now. Whatever they're paying you, or however much the stock might be worth come the IPO, it's not worth the pain and suffering.
If you're at the second place, congratulations – you're at a regular, ordinary workplace that could do with a little better management.
A surprisingly great deal.
Whenever there's a security incident, there should be an investigation as to its cause.
Clearly the cause is always human error. Machines don't make mistakes; they act in predictable ways – even when they are acting randomly, they can be stochastically modeled, and errors taken into consideration. Your computer behaves like a predictable machine, but at various levels it actually routinely behaves like it's rolling dice, and there are mechanisms in place to bias those random results towards the predictable answers you expect from it.
Humans, not so much.
Humans make all the mistakes. They choose to continue using parts that are likely to break, because they are past their supported lifecycle; they choose to implement only part of a security mechanism; they forget to finish implementing functionality; they fail to understand the problem at hand; etc, etc.
It always comes back to human error.
Occasionally I will experience these great flashes of inspiration from observing behaviour, and these flashes dramatically affect my way of doing things.
One such was when I attended the weekly incident review board meetings at my employer of the time – a health insurance company.
Once each incident had been resolved and addressed, it was submitted to the incident review board for discussion, so that the company could learn from the cause of the problem, and make sure similar problems were forestalled in future.
These weren't just security incidents; they could be system outages, problems with power supplies, really anything that wasn't quickly fixed as part of normal process.
But the principles I learned there apply just as well to security incidents.
The biggest principle I learned was "root cause analysis" – that you look beyond the immediate cause of a problem to find what actually caused it in the long view.
At other companies, which can't bear to think that they didn't invent absolutely everything, this is termed differently – for instance, "the five whys" (suggesting that if you ask "why did that happen?" five times, you'll get to the root cause). Other names are possible, but the majority of the English-speaking world knows it as "root cause analysis".
This is where I learned that if you believe the answer is that a single human's error caused the problem, you don't have the root cause.
Whenever I discuss this with friends, they always say "But! What about this example, or that?"
You should always ask those questions.
Here are some possible individual causes, and some of their associated actual causes:
Bob pulled the wrong lever – Who trained Bob about the levers to pull? Was there documentation? Were the levers labeled? Did anyone assess Bob's ability to identify the right lever to pull by testing him with scenarios?
Kate was evil and did a bad thing – Why was Kate allowed to have unsupervised access? Where was the monitoring? Did we hire Kate? Why didn't the background check identify the evil?
Jeremy told everyone the wrong information – Was Jeremy given the right information? Why was Jeremy able to interpret the information from right to wrong? Should this information have been automatically communicated without going through a Jeremy? Was Jeremy trained in how to transmute information? Why did nobody receiving the information verify it?
Grace left her laptop in a taxi – Why does Grace have data that we care about losing – on her laptop? Can we disable the laptop remotely? Why does she even have a laptop? What is our general solution for people, who will be people, leaving laptops in a taxi?
Jane wrote the algorithm with a bug in it – Who reviews Jane's code? Who tests the code? Is the test automated? Was Jane given adequate training and resources to write the algorithm in the first place? Is this her first time writing an algorithm – did she need help? Who hired Jane for that position – what process did they follow?
I could go on and on, and I usually do, but it's important to remember that if you ever find yourself blaming an individual and saying "human error caused this fault", you don't yet have the root cause: humans, just like machines, are random and only stochastically predictable, and if you want to get predictable results, you have to have a framework that brings that randomness and unpredictability into some form of logical operation.
Many of the questions I asked above are also going to end up with the blame apparently being assigned to an individual – that's just a sign that it needs to keep going until you find an organisational fix. Because if all you do is fix individuals, and you hire new individuals and lose old individuals, your organisation itself will never improve.
[Yes, for the pedants, your organisation is made up of individuals, and any organisational fix is embodied in those individuals – so blog about how the organisation can train individuals to make sure that organisational learning is passed on.]
Finally, if you'd rather I not use Ubuntu as my "circle of blame" logo, there are plenty of others out there – for instance, Microsoft Alumni:
Tomorrow is April 1, also known as April Fools' Day.
As a result, you should expect that anything I say on this blog is fabrication, fantasy, foolery and snark.
Apparently, this hasn't previously been completely stupidly blindly obvious.
I've mentioned before how much I love the vagaries of dates and times in computing, and I'm glad it's not a part of my regular day-to-day work or hobby coding.
Here are some of the things I expect to happen this year as a result of the leap year:
And then there are the ordinary issues with dates that programmers can't understand – like the fact that there are more than 52 weeks in a year. "ASSERT(weeknum>0 && weeknum<53);", anyone? 52 weeks is only 364 days, and every year has more days than that. [Pedantic mathematical note – maybe this somewhat offsets the "employer's extra day" item above]
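If you want to see a week 53 for yourself, here's a quick PowerShell check – the calendar settings below approximate ISO 8601 week numbering:
$cal = [System.Globalization.CultureInfo]::InvariantCulture.Calendar
# New Year's Eve 2015 fell in week 53 - so much for weeknum < 53.
$cal.GetWeekOfYear([datetime]'2015-12-31', 'FirstFourDayWeek', 'Monday')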
Happy Leap Day – and always remember to test your code in your head as well as in real life, to find its extreme input cases and associated behaviours. They'll get tested anyway, but you don't want it to be your users who find the bugs.
Version control is one of those vital tools for developers that everyone has to use but very few people actually enjoy or understand.
So, it's no surprise that I noted a few months ago that the version control software on which I've relied for several years for my personal projects, ComponentSoftware's CS-RCS, has not been built on in years, and cannot now be downloaded from its source site. [Hence no link from this blog]
I've used git before a few times in professional projects while I was working at Amazon, but relatively reluctantly – it has incredibly baroque and meaningless command-line options, and gives the impression that it was written by people who expected their users to be just as proficient with the ins and outs of version control as they are.
While I think it's a great idea for developers to build software they would use themselves, I think it's important to make sure that the software you build is also accessible by people who aren't at the same level of expertise as yourself. After all, if your users were as capable as the developer, they would already have built the solution for themselves, so your greater user-base comes from accommodating everyone from novices to experts with simple points of entry and levels of improved mastery.
git, along with many other open source, community-supported tools, doesn't really accommodate the novice.
As such, it means that most people who use it rely on "cookbooks" of sets of instructions. "If you want to do X, type commands Y and Z" – without an emphasis on understanding why you're doing this.
This leads inexorably to a feeling that you're setting yourself up for a later fall, when you decide you want to do an advanced task, but discover that a decision you've made early on has prevented you from doing the advanced task in the way you want.
That's why I've been reluctant to switch to git.
But it's clear that git is the way forward in the tools I'm most familiar with – Visual Studio and its surrounding set of developer applications.
It's one of those decisions I made some time ago, but didn't enact until now, because I had no idea how to start – properly. Every git repository I've worked with so far has either been set up by someone else, or set up by me, based on a cookbook, for a new project, and in a git environment that's managed by someone else. I don't even know if those terms, repository and environment, are the right terms for the things I mean.
There are a number of advanced things I want to do from the very first – particularly, I want to bring my code from the old version control system, along with its history where possible, into the new system.
And I have a feeling that this requires I understand the decisions I make when setting this up.
So, it was with much excitement that I saw a link to this arrive in my email:
Next thing is I'm going to watch this, and see how I'm supposed to work with git. I'll let you know how it goes.
I happened upon a blog post by the Office team yesterday which surprised me, because it talked about a feature in PowerPoint that I've wanted ever since I first got my Surface 2.
Here's a link to documentation on how to use this feature in PowerPoint.
It seems like the obvious feature a tablet should have.
Here's a video of me using it to draw a few random shapes:
But not just in PowerPoint – this should be in Word, in OneNote, in Paint, and pretty much any app that accepts ink.
So here's the blog post from Office noting that this feature will finally be available for OneNote in November.
On iPad, iPhone and Windows 10. Which I presume means it'll only be on the Windows Store / Metro / Modern / Immersive version of OneNote.
That's disappointing, because it should really be in every Office app. Hell, I'd update from Office 2013 tomorrow if this was a feature in Office 2016!
Please, Microsoft, don't stop at the Windows Store version of OneNote.
Shape recognition, along with handwriting recognition (which is apparently also hard), should be a natural part of my use of the Surface Pen. It should work the same across multiple apps.
That's only going to happen if it's present in multiple apps, and is a documented API which developers – of desktop apps as well as Store apps – can call into.
Well, desktop apps can definitely get that.
I'll admit that I haven't had the time yet to build my own sample, but I'm hoping that this still works – there's an API called "Ink Analysis", which is exactly how you would achieve this in your app:
https://msdn.microsoft.com/en-us/library/ms704040.aspx
It allows you to analyse ink you've captured, and decide if it's text or a drawing, and if it's a drawing, what kind of drawing it might be.
[I've marked this with the tag "Alun's Code" because I want to write a sample eventually that demonstrates this function.]