For a few days from July 19th to July 23rd 2021, a short, and unwanted, piece of JavaScript became a required part of the flow of every Office 365 setup, impacting over a quarter of a million Office users.
This script had immediate access to the first and last name, as well as the email address, of anyone who started their Office setup during that time, and much more besides.
Well, if you know anything about me, you'll already have figured out a couple of things:
For the last three and a bit years, I've been trying to figure out ways to find, and protect against, subdomain takeovers of various kinds, to the point that I now use the shorthand term "SDTO" to describe a subdomain takeover. It's two fewer syllables.
In the weeks before pandemic shutdowns struck the US, I took the calculated risk of flying to San Francisco to give a talk at the RSA Security Conference on some of my tried-and-tested methods for detecting and automatically preventing subdomain takeovers. You can see the talk here…
One of the slides in that deck talked very briefly about other kinds of subdomain takeovers:
I went through all the source code I had access to at work, and didn't find a single 2nd Order SDTO, but I was definitely told by others in the InfoSec community that they exist.
I'm a little skeptical of things I can't directly see or experience, so I took stories of their existence with a grain of salt.
It certainly was, and it wasn't until January 2021 that I thought I'd engage in some personal research, outside of work, to try to find 2nd order SDTOs - another sign of how little I expected to find any.
I learned (actually, re-learned) how to create a Chrome extension.
It's not hard. It's just JavaScript.
I wrote an extension that looks for 2nd Order SDTOs in every page I visit.
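The interesting part isn't the extension plumbing, it's the check, so here it is as a rough PowerShell sketch rather than the extension's actual JavaScript (the URL and details are invented): fetch a page, pull out every external script reference, and flag any whose host no longer resolves.

# Rough sketch: flag external scripts whose host returns NXDOMAIN
$page = Invoke-WebRequest -Uri 'https://example.test/somepage'
$srcs = [regex]::Matches($page.Content, '<script[^>]+src="(https?://[^"/]+)') |
    ForEach-Object { $_.Groups[1].Value }
foreach ($src in $srcs) {
    $scriptHost = ([uri]$src).Host
    if (-not (Resolve-DnsName $scriptHost -ErrorAction SilentlyContinue)) {
        Write-Warning "$scriptHost doesn't resolve - possible 2nd order SDTO"
    }
}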
What I got: loads of false positives, which are good because they let you know your code's running, but which are also irritating, because they interrupt your everyday life.
That's exactly right. July 2021, and by the most peculiar of circumstances. Instead of going out looking for sites that might be vulnerable, I was literally checking on whether I had registered enough copies of Office this year for my family.
I went to setup.office.com, logged in, and my plugin popped up.
It's … not a pretty plugin (and yes, that's a slightly faked-up version of the dialog).
But what it's saying is that the part of the Office 365 Setup page where you enter your PIN is trying to call a file named "developer.intercept.js", and that the host on which that file sits (uxmicrosoft-uat.azurewebsites.net) doesn't actually exist.
So I naturally got rather excited.
If you watched my RSA talk on SDTOs, you'll know that NXDOMAIN is just the start of the possibility of a useful SDTO.
Many of my false positives had this same NXDOMAIN error, but I couldn't turn them into an SDTO, because their targets were at cloud sites where it's impossible to retake a name previously used by another owner.
But this one? Any idiot could have registered uxmicrosoft-uat.azurewebsites.net, and so I did.
For free.
Then I created the JavaScript file, and added a command to log a simple fixed string to the console log.
And I loaded the page.
As the log says, "Way hey"! My script executed. And not because there's anything special on my account.
I totally resisted the temptation to:
All of this took only a few short minutes!
Now I've got the web site under my belt and the script in place, I have all the elements I need to submit a bug report to Microsoft. They've got a bug bounty page, with a list of all the services that are covered.
I fill out the first page of the form, where it asks me for a short description and a proof of concept. To my irritation, I realise there is no second page, and that this is the version of my report that they're going to see.
So here's my really inadequate report as it stood when I submitted it:
Everything's there; that should be enough for anyone on the bug bounty team to recognise and reproduce this bug, evaluate it for badness, and close things up.
And you can't edit this, or the metadata.
You can't go in and change the Security Impact or the Reported products (why the plural? You can only select one!)
I can upload some additional files, though, so I upload a screenshot of a page of logs of people from all over the world executing my script, and a video of how you can spot my code executing in your browser.
And I also sent follow-up emails with a more detailed description of what I had done, how I had found the bug, what kinds of behaviours it let me do, and so on.
At this point, I've got the code executing in my own browser, but I want to know if other people are fetching it.
So I enable logging, and as I stream the logs, I realise that there are a lot of people accessing this page and fetching my script, presumably executing it, too.
Bear in mind that, since the script is executed with a simple <script> tag in the main document, it's executing with exactly the same privileges as the Office web site and the Microsoft Account user who has logged in.
Following my report, I watched the number of requests: not a very big flow, but certainly as many as a dozen or a score per second. By the time Microsoft seems to have fixed their web site, sometime around July 23, I'd seen around 250,000 requests for that script. All told, as of time of writing, 271,194 fetches (and presumably executions) of this script have occurred. Weirdly, they still keep occurring, in dribs and drabs, and from a different referrer; perhaps these are requests from spiders?
Either way, I'm keeping the page up, just in case. It's free, and it keeps this from being exploited by someone else.
Now I feel like some kind of Mr. Big, with dreams of all the malfeasance I could have gotten up to had I wanted to do so.
Subdomain takeovers are specifically ruled out of scope, even though I was able to use this one to inject code (in scope) into the web site.
setup.office.com is also not in the list of "in-scope domains", and so is specifically out of scope.
Lesson to me: always read the scope document to the end. You might still submit the bug, but at least you aren't getting needlessly excited about the prospect of a non-existent bounty.
If there's a way to cause a subdomain takeover - if there's a way to abandon a named resource such that someone else can create a resource with that name and receive its traffic - someone will have screwed up, and it might just as well be Microsoft as anyone else, because the cloud providers are building services faster than they can create meaningful threat models.
Instead of a single, focused effort to prevent subdomain takeovers in Azure, Microsoft has put together any number of different approaches, sometimes separately inventing the same solution inside a company that really needs to spend more time talking internally.
It should be possible for Azure to completely prevent subdomain takeovers using resources in Azure.
Where the platform isn't helping prevent subdomain takeovers, developers will keep creating the vulnerability.
We are left with training, detection, and trying to be smarter than the hackers.
I can't believe it's been over thirteen years since I last wrote about NTFS Alternate Data Streams.
A lot has changed since then, including the fact that I've taken down the site where my download for "sdir" was listed. But that's an old tool, and I don't think we need it any more.
What else has changed is that my wife is studying a number of security courses with the SANS Women's Academy, which is an excellent initiative to bring more women into the world of information security, where they, along with the rest of humanity (for whom SANS has other programs), are sorely needed. One of the classes she was studying included a piece on NTFS Alternate Data Streams, or ADS.
An Alternate Data Stream, or ADS, is a parallel stream of data, as the name implies, to the default data stream of a particular file. This default data stream is what most users have spent their lives thinking of as "the file".
The file is more than just the bytes it contains, in this case. You can go a long way without realising this.
Alternate Data Streams were originally created to support Apple Mac Resource Forks, in files copied from Mac systems to NTFS and back. I'm not sure Apple even bothers with them any more, now that they've moved to a Unix-based OS.
Created as part of the original NTFS in 1993, these Alternate Data Streams shouldn't be confused with:
Not really easily. At the command prompt, you can use "dir /r" to view the files in your current directory along with all their attendant streams, but you can't combine the "/r" and "/b" options, so you can't get a really succinct list of all the streams in your system. Here's an example listing of a download directory:
In PowerShell, you have more control, and you can even call in to .NET, but you don't need to in order to see file streams. Here's a simple command to display just the non-default data streams on files in a particular directory:
Get-ChildItem | Get-Item -Stream * | Where-Object Stream -ne ':$DATA' | Format-Table FileName,Stream,Length
The output this produces looks like this:
Left as an exercise for the reader: how to do this recursively through subdirectories to find all the streams.
The most common ADS in your directory is almost certainly the stream named "Zone.Identifier", as this is created on every file you download from the web using Internet Explorer, Edge, Chrome, Outlook, or any application that cooperates with Microsoft's idea of marking files that have been downloaded. If you open Explorer and view properties on a file that's been downloaded, you'll see there's a checkbox allowing you to "Unblock" this file, along with a note that it came from another computer. Checking the "Unblock" box and clicking OK or Apply will remove this Zone.Identifier stream.
This stream is known as the "Mark Of The Web" or "MOTW" in some documentation, so that's another term to use if you're searching for this stream.
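If you'd rather not round-trip through Explorer, the same stream is directly visible and removable from PowerShell; a quick sketch (the file name is just an example):

# Read the mark-of-the-web from a downloaded file
Get-Content -Path .\download.exe -Stream Zone.Identifier
# Typically contains:
#   [ZoneTransfer]
#   ZoneId=3
# Removing it is what the "Unblock" checkbox does; PowerShell has a cmdlet for it
Unblock-File -Path .\download.exe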
Other stream names I find on my hard drive:
uidStream - I found this on some eBooks in my "My Kindle Books" folder, but whether they're specific to the Kindle app, or some other e-reader I've used, I can't be certain.
SmartScreen - these are on some downloaded .EXE files, so from the name and file type, I'll assume this is from virus scanning downloaded EXEs. Since the stream contains just the word "Anaheim", I'm not sure how useful this is.
ms-properties - a binary stream on a few of the JPG images I have on my computer, all of which are photos I took on my Surface Pro.
And some very oddly-named streams on some scanned files, because there's just as much of a standard for stream names as there is for file names, and it's completely a Wild West out there, so the best way to make sure you're not going to be overwritten by someone else's stream is to pick a completely weird and off-the-wall stream name.
Joking aside, the second of those shows that choosing a GUID is actually a good way to name a stream so it doesn't collide with others: it's random, and you can make it searchable on the web by documenting it.
Sure enough, if we search for that GUID, there's some interesting information to be found at https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/4f3837c4-2f96-40d7-b0bf-80dd1d0b0da0, among other places. This particular GUID is used to include some kind of summary information.
I've also read in a couple of places that the Windows File Classification Infrastructure uses ADS to carry information.
It doesn't take much thinking to come up with other uses for alternate data streams: really, any time you might want to associate data with a file, or several files, without bothering the application that might want to read the file. Here are some suggestions:
Thinking on this, there's a couple of ideas I already have: if I can extract ID3 tags from files and put them into an ADS, it's going to be quicker and easier to find that information than parsing the entire MP4/MP3/M4A file each time I want to look at the data.
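The write side of that idea is a one-liner with the -Stream parameter; a sketch, with stream names of my own invention (which, per the Wild West point above, is rather the idea):

# Cache a tag in a stream alongside the media file
Set-Content -Path .\track01.mp3 -Stream 'ID3.Title' -Value 'Some Track Title'
# Reading it back is far cheaper than re-parsing the whole MP3
Get-Content -Path .\track01.mp3 -Stream 'ID3.Title'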
I've commented on this before, and I've read a lot about how "obviously" viruses will use ADS to hide, or that exfiltration will use ADS to avoid detection, and while there's some truth to this idea, I think both threats are overblown.
For exfiltration, the problem is essentially the same as that with using EFS to encrypt a file that's being exfiltrated: in order for the data to leave the system, it has to pass through the usual file APIs that your DLP solution is hooked into, and unless your DLP solution is being too smart for its britches, the data will be noticed and blocked. Copying a file from NTFS to FAT or exFAT will destroy the associated ADS data as if it was never there, just as it will destroy EFS encryption.
For virus hiding, while it's not impossible to execute from an ADS, it's not particularly easy, and the methods used can themselves trigger your antivirus. To load and execute the data in the ADS, you have to use normal means to load and execute code in a default data stream. And those normal means can be detected by the virus scanner just as easily as they detect any other executable content. If your virus scanner hooks normal load/execute APIs, it'll also intercept the loading and execution of the ADS.
This is probably why there's only one virus I found significant information on that uses ADS to hide parts of itself - Backdoor:Win32/Rustock.A - which copies itself into streams off the system32 folder. From the technical description of this virus, it's also clear that the virus has a fall-back facility for when it's trying to install itself on a system with no ADS support (really, who installs Windows on a FAT partition? Maybe they mean ReFS, which didn't initially support ADS).
The most likely ADS security threat is still the one for which it's best known: that of accessing the default data stream of a file by appending "::$DATA" to the requested filename, and getting around restrictions that an application might have in place.
Years and years ago (1998), this was a trick you could use against IIS to fetch the source code of an ASP page. Instead of fetching "pagename.asp" (which gave you the output of executing the code), you'd fetch "pagename.asp::$DATA".
Obviously, IIS fixed this years and years ago, and yet the problem comes up over and over again, in other applications which map incoming requests to files through a simple mapping (file name requested to file name fetched), and which aren't aware of this kind of issue. (On Windows, you can open a file "for information only" and then query the handle for its canonical name, if you need to write code to get around this - see Writing Secure Code 2nd Edition for details.)
So, every now and again, if you're a hacker and you can't get a file, try getting it with "::$DATA" at the end of its name.
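As a sketch of what that probe looks like in practice (the site is invented, and this only pays off against an application doing the naive filename mapping described above):

# Normal request: the server executes the ASP and returns its output
Invoke-WebRequest -Uri 'http://vulnerable.example/pagename.asp'
# Same file, named via its default data stream: a naive filename-to-file
# mapping may hand back the raw source instead
Invoke-WebRequest -Uri 'http://vulnerable.example/pagename.asp::$DATA'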
The command prompt has a few ways to handle Alternate Data Streams with files.
Very limited, as you can tell: you can't do "dir /s/r/b" to get a list of all the streams, because the /b parameter ignores the /r parameter. You can't directly load an executable from a stream, but you can use another EXE to load it for you (there are examples available online of using WMIC and PSEXEC to do this).
If you absolutely have to remove an alternate data stream from an NTFS file with only Explorer or the Command Prompt, moving it to and from a FAT or exFAT formatted drive (such as a USB stick) will do that, but it will also kill any other NTFS properties, such as ownership, permissions, EFS encryption, etc., as well as killing any audit continuity on the file. I don't recommend this, particularly for those files that you really don't want your name associated with the creation of.
The news is supposedly a little better in PowerShell, which is meant to have built-in support for ADS.
In PowerShell, we use Get-ChildItem to navigate through folders, and Get-Item to look at individual files. Remove-Item is what we use to delete files. Each of these commands has a "-Stream" parameter, so it seems we are set for our alternate data stream handling.
We can delete a stream from a file as easily(!) as this:
Remove-Item <file> -Stream <stream>
It feels a little weird, but it only deletes the stream, not the file itself.
Seems like this should work to list all streams from our current directory going down, right?
Get-ChildItem -Recurse | Get-Item -Stream * | Where-Object Stream -ne ':$DATA' | Format-Table FileName,Stream,Length
Well, it does most of what we're looking for.
What it specifically misses is the directories.
Yeah, you can put an alternate data stream on a folder. You can't put a default data stream on a directory, but you can put any number of alternate data streams there.
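Creating one is easiest from cmd.exe, since PowerShell's -Stream support on folders is exactly the gap being discussed; a sketch with invented paths, and it's how the "whatnot.txt" stream below got there:

# cmd.exe redirection will happily attach a stream to a folder
cmd /c 'echo pssst > C:\Temp\MyFolder:whatnot.txt'
# ...and read it back
cmd /c 'more < C:\Temp\MyFolder:whatnot.txt'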
My PowerShell script won't find that "whatnot.txt" stream. Curiously enough, this is documented at https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-item?view=powershell-5.1 even though it's clearly an oversight. "This parameter isn't valid on folders" - well, it should be.
Can we use the Remove-Item -Stream parameter to delete streams from a directory, even if we can't actually find them using PowerShell?
Sure, but it's even more scary:
Contrary to what the warning says, the directory and all its children were not deleted, just the stream. Everything is safe and well.
So, what needs fixing?
Oh, yeah, and what on earth is with this lack of support for actual, real, file system features in PowerShell?
And yes, I'm kind of cheating here, but not much!
Oh, and this folder confuses the command prompt's "dir /r /s" as well. Note that the directory "AUX" doesn't have a stream, but when listing the contents of that directory, the directory "." DOES.
The words, the exploration and examples, the concepts and the thinking, are all shared work between Debbie Lester-Jones and myself.
At some point, when she's done with her classes, one of you could be lucky enough to employ her. Or any of the other awesome students of the SANS Women's Academy.
Information Security is full of terminology.
Sometimes we even understand what we mean. I've yet to come across a truly awesome, yet brief, definition of "threat", for instance.
But one that bugs me, because it shouldn't be that hard to get right, and because I hear it from people I otherwise respect greatly, is that of "input validation".
Fight me on this, but I think that validation is essentially a yes/no decision on a set of input, whether it's textual, binary, or whatever other format you care to define.
Exactly what you are validating is up for debate, whether you're looking at syntax or semantics: is it formatted correctly, versus does it actually make sense?
"Green ideas sleep furiously" is a famous example of a sentence that is syntactically correct - it follows an adjective-noun-verb-adverb pattern that is common in English - but semantically it makes no sense: ideas can't be green, ideas can't sleep, and nothing can sleep furiously (although my son used to sleep with his fists clenched really tight when he was a little baby).
"0 / 0" is a syntactically correct mathematical expression, but you can argue whether it's semantically correct.
"Sell 1000 shares" might be a syntactically correct instruction, but semantically, it could be that you don't have 1000 shares, or that there's a business-logic limit which says such a transaction requires extra authentication.
So there's a difference between syntactic validation and semantic validation, but…
Injection attacks occur when input data - a string of characters - is semantically valid in the language of the enclosing code, as code itself and not just as data. Sometimes (but not always) this means the data contains a character or character sequence that allows it to "escape" from its data context to a code context.
This is a question I ask, in various round-about ways, in a lot of job interviews, so it's quite an important question.
The answer is really simple.
Yes. And no.
If you can validate your input, such that it is always syntactically and semantically correct, you can absolutely prevent injection exploits.
But this is really only possible for relatively simple sets of inputs, and where the processing is safe for that set of inputs.
An example: suppose I've got a product ordering site, and I'm selling books.
You can order an integer number of books. Strictly speaking, positive integers; 0 makes no sense, so start at 1. You probably want to put a maximum limit on that field too, perhaps restricting people to buying no more than a hundred of that book. If they're buying more, they'll want to go wholesale anyway.
So, your validation is really simple: "is the field an integer, and is the integer value between 1 and 100?"
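As a sketch, that rule is only a few lines in any language; here it is as a PowerShell function (the function name is mine):

# Whitelist validation for the book-quantity field
function Test-Quantity([string]$Value) {
    $qty = 0
    # Syntactic check: is it an integer at all?
    if (-not [int]::TryParse($Value, [ref]$qty)) { return $false }
    # Semantic check: is it a quantity we're prepared to sell?
    return ($qty -ge 1 -and $qty -le 100)
}
Test-Quantity '42'    # True
Test-Quantity '101'   # False: out of range
Test-Quantity 'abc'   # False: not an integer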
Having said "yes, and no", I have to show you an example of the "no", right?
OK, let's say you're asking for validation of names of people. What are your validation rules?
Let's assume you're expecting everyone to have "latinised" their name, to make it easy. All the letters are in the range a-z, or A-Z if there's a capital letter.
Great, so there's a rule: only match "[A-Za-z]".
Unless, you know, Leonardo da Vinci. Or di Caprio. So you need spaces.
Or Daniel Day-Lewis. So there are also hyphens to add.
And if you have an O'Reilly, an O'Brian, or a D'Artagnan, or a N'Dour, yes, you're going to add apostrophes.
Now your validation rule is letting in a far broader range of characters than you started out with, and there's enough there to allow SQL injection to happen.
Input can now be syntactically correct by your validation rule, and yet semantically equivalent to data plus SQL code.
I have a working hypothesis. It goes like this.
As a neophyte in information security, you learn a trick.
That trick is validation, and it's a great thing to share with developers.
They don't need to be clever or worry hard about the input that comes in; they simply need to validate it.
It actually feels good to reject incorrect input, because you know you're keeping the bad guys out, and the good guys in.
Then you find an input field where validation alone isn't sufficient.
But you've told everyone - and had other security folk agree with you - that validation is the way to solve injection attacks.
So you learn a new trick: a new way of protecting inputs.
After all, it … uhh, kind of does the same thing. It stops injection attacks, so it must be validation.
This new trick is encoding, quoting, or in some way transforming the data, so the newly transformed data is safe to accept.
Every one of those apostrophes? Turn them into the sequence "&apos;" if they're going into HTML, or double them if they're in a SQL string, or - and this is FAR better - use parameterised queries, so you don't have to even know how the input string is being encoded on its way into the SQL command.
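To make the parameterised-query point concrete, here's a minimal sketch over .NET's SqlClient (the connection string and table are invented); the apostrophe in O'Reilly travels as data and never becomes SQL text:

$conn = New-Object System.Data.SqlClient.SqlConnection $connectionString
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'SELECT * FROM Customers WHERE Surname = @surname'
# The parameter is bound, not concatenated, so it cannot escape into a code context
$null = $cmd.Parameters.AddWithValue('@surname', "O'Reilly")
$conn.Open()
$reader = $cmd.ExecuteReader()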
Now your input can be validated - and injection attacks are stopped.
In fact, once you've encoded your inputs properly, your validation can be entirely open and empty! At least from the security standpoint, because you've made the string semantically entirely meaningless to the code in which it is to be embedded as data. There are no escape characters or sequences, because they, too, have been encoded or transformed into semantically safe data.
And I happen to think it's important to separate the two concepts of validation and encoding.
Validation is saying "yes" or "no" to the question "is this string 'good' data?" You can validate in a number of different ways, and with good defence in depth, you'll validate at different locations, based on different knowledge about what is "good". This matches very strongly with the primary dictionary definition of "validation"; it's awesome when a technical term matches very closely with a common language term, because teaching it to others becomes easier.
Encoding doesn't say "yes" or "no"; encoding simply takes whatever input it's given, and makes it safe for the next layer to which the data will be handed.
It's not.
Just a quick note, because I've been sick this week, but last weekend, I put a little more work into my Padding Oracle exploit tool.
You can find the new code up at https://github.com/alunmj/PaddingOracle, and because of all the refactoring, it's going to look like a completely new batch of code. But I promise that most of it is just moving code from Program.cs into classes, and adding parsing of command-line arguments.
I don't pretend to be the world's greatest programmer by any stretch, so if you can tell me a better way to do what I've done here, do let me know, and I'll make changes and post something about them here.
Also, please let me know if you use the tool, and how well it worked (or didn’t!) for you.
The arguments currently supported are:
The only parameter unadorned with an option letter: this is the URL for the resource the Padding Oracle code will be pounding to test guesses at the encrypted code.
Also, --cipher. This provides a .NET regular expression which matches the ciphertext in the URL.
Also, --textencoding, --encoding. This sets the encoding that's used to specify the ciphertext (and IV) in the URL. The default is b64.
Also, --iv. This provides a .NET regular expression which matches the IV in the URL if it's not part of the ciphertext.
Also, --blocksize. This sets the block size in bytes for the encryption algorithm. It defaults to 16, but should work for values up to 32.
Also, --verbose. Output information about the packets we're decrypting, and statistics on speed at the end.
Also, --help. Outputs a brief help message.
Also, --parallelism. Dictates how much to parallelise. Specifying "1" means to use one thread, which can be useful to see what's going on. "-1" means maximum parallelisation: as many threads as possible. Any other integer is roughly akin to saying "no more than this number of threads", but may be overridden by other aspects of the Windows OS. The default is -1.
Instead of decrypting, this will encrypt the provided text, and provide a URL in return that will be decrypted by the endpoint to match your provided text.
These examples are run against the WebAPI project that's included in the PadOracle solution.
Let's say you've got an example URL like this:
http://localhost:31140/api/encrypted/submit?iv=WnfvRLbKsbYufMWXnOXy2Q%3d%3d&ciphertext=087gbLKbFeRcyPUR2tCTajMQAeVp0r50g07%2bLKh7zSyt%2fs3mHO96JYTlgCWsEjutmrexAV5HFyontkMcbNLciPr51LYPY%2f%2bfhB9TghbR9kZQ2nQBmnStr%2bhI32tPpaT6Jl9IHjOtVwI18riyRuWMLDn6sBPWMAoxQi6vKcnrFNLkuIPLe0RU63vd6Up9XlozU529v5Z8Kqdz2NPBvfYfCQ%3d%3d
This strongly suggests (because who would use "iv" and "ciphertext" to mean anything other than the initialisation vector and cipher text?) that you have an IV and a ciphertext, separate from one another. We have the IV, so let's use it. Here's the command line I'd try:
PadOracle "http://localhost:31140/api/encrypted/submit?iv=WnfvRLbKsbYufMWXnOXy2Q%3d%3d&ciphertext=087gbLKbFeRcyPUR2tCTajMQAeVp0r50g07%2bLKh7zSyt%2fs3mHO96JYTlgCWsEjutmrexAV5HFyontkMcbNLciPr51LYPY%2f%2bfhB9TghbR9kZQ2nQBmnStr%2bhI32tPpaT6Jl9IHjOtVwI18riyRuWMLDn6sBPWMAoxQi6vKcnrFNLkuIPLe0RU63vd6Up9XlozU529v5Z8Kqdz2NPBvfYfCQ%3d%3d" -c "087gb.*%3d%3d" -i "WnfvRL.*2Q%3d%3d"
This is the result of running that command:
Same URL, but this time I want to encrypt some text.
Our command line this time is:
PadOracle "http://localhost:31140/api/encrypted/submit?iv=WnfvRLbKsbYufMWXnOXy2Q%3d%3d&ciphertext=087gbLKbFeRcyPUR2tCTajMQAeVp0r50g07%2bLKh7zSyt%2fs3mHO96JYTlgCWsEjutmrexAV5HFyontkMcbNLciPr51LYPY%2f%2bfhB9TghbR9kZQ2nQBmnStr%2bhI32tPpaT6Jl9IHjOtVwI18riyRuWMLDn6sBPWMAoxQi6vKcnrFNLkuIPLe0RU63vd6Up9XlozU529v5Z8Kqdz2NPBvfYfCQ%3d%3d" -c "087gb.*%3d%3d" âi "WnfvRL.*2Q%3d%3d" âe "Hereâs some text I want to encrypt"
When we run this, it warns us itâs going to take a very long time, and boy itâs not kidding â we donât get any benefit from the frequency table, and we canât parallelise the work.
And you can see it took about two hours.
I've been a little absent from this blog for a while, mostly because I've been settling in to a new job where I've briefly changed my focus almost completely from application security to being a software developer.
The blog absence is going to change now, and I'd like to start that with a renewed effort to write something every week. In addition to whatever grabs my attention from the security news feeds I still suck up, I want to get across some of the knowledge and approaches I've used while working as an application security guy. I'll likely be an application security guy in my next job, whenever that is, so it'll stand me in good stead to write what I think.
The phrase "One Simple Thing" underscores what I try to return to repeatedly in my work: if you can get to the heart of what you're working on, everything else flows easily and smoothly.
This does not mean that there's only one thing to think about with regards to security, but that when you start asking clarifying questions about the "one simple thing" that drives - or stops - a project in the moment, it's a great way to make tremendous progress.
I'll start by discussing the One Simple Thing I pick up by default whenever I'm given a security challenge.
What are we protecting?
This is the first question I ask on joining a new security team, often as early as the first interviews. Everyone has a different answer, and it's a great way to find out what approaches you're likely to encounter. The question also has several cling-on questions that it demands be asked and answered at the same time:
Why are we protecting it?
Who are we protecting it from?
Why do they want it?
Why shouldnât they get it?
What are our resources?
These come very quickly out of the One Simple Thing of "what are we protecting?".
Here are some typical answers:
You can see from the selection of answers that not everyone has anything like the same approach, and that they don't all line up exactly under the typical buckets of Confidentiality, Integrity and Availability.
Do you think someone can solve your security issues or set up a security team without first finding out what it is you're protecting?
Do you think you can engage with a team on security issues without understanding what they think they're supposed to be protecting?
You've seen from my [short] list above that there are many answers to be had between different organisations and companies.
I'd expect there to be different answers within an organisation, within a team, within a meeting room, and even depending on the time I ask the question.
"What are we protecting?" on the day of the Equifax leak quickly becomes a conversation on personal data, and the damaging effect of a leak to "customers". [I prefer to call them "data subjects", because they aren't always your customers.]
On the day that Yahoo gets bought by Verizon for substantially less than initially offered, the answer becomes more about company value, and even perhaps executive stability.
Next time you're confused by a security problem, step back and ask yourself - and others - "What are we protecting?" and see how much it clarifies your understanding.
Sometimes, it's just my job to find vulnerabilities, and while that's kind of fun, it's also a little unexciting compared to the thrill of finding bugs in other people's software and getting an actual "thank you", whether monetarily or just a brief word.
About a year ago, I found a minor Cross-Site Scripting (XSS) flaw in a major company's web page, and while it wasn't a huge issue, I decided to report it, as I had a few years back with a similar issue in the same web site. I was pleased to find that the company was offering a bounty programme, and simply emailing them would submit my issue.
The first thing to notice, as with all XSS issues, is that there were protections in place that had to be got around. In this case, some special characters or sequences were being blocked by a WAF. But not all. And it's really telling that there are still many websites which have not implemented widespread input validation / output encoding as their XSS protection. So, while the WAF slowed me down even when I knew the flaw existed, it only added about 20 minutes to the exploit time. My example had to use "confirm()" instead of "alert()" or "prompt()". But really, if I was an attacker, my exploit wouldn't have any of those functions, and would probably include an encoded script that wouldn't be detected by the WAF either. WAFs are great for preventing specific attacks, but they aren't a strong protection against an adversary with a little intelligence and understanding.
My email resulted in an answer that same day, less than an hour after my initial report. A simple "thank you", and "we're forwarding this to our developers", goes a long way to keeping a security researcher from idly playing with the thought of publishing their findings and moving on to the next game.
In under a week, I found that the original demo exploit was being blocked by the WAF - but if I replaced "onclick" with "oclick", "onmouseover" with "omouseover", and "confirm" with "cofirm", I found the blocking didn't get in the way. Granted, since those aren't real event handlers or JavaScript functions, I can't use them in a real exploit, but it does mean that once again, all the WAF does is block the original example of the attack, and it took only a few minutes to come up with another exploit string.
If they'd told me "hey, we're putting in a WAF rule while we work on fixing the actual bug", I wouldn't have been so eager to grump back at them and say they hadn't fixed the issue by applying their WAF and, by the way, here's another URL to exploit it. But they did at least respond to my grump and reassure me that, yes, they were still going to fix the application.
I heard nothing after that, until in February of this year, over six months later, I replied to the original thread and asked if the report qualified for a bounty, since I noticed that they had actually fixed the vulnerability.
No response. I was thinking of writing this up as an example of how security researchers still get shafted by businesses - bear in mind that my approach is not to seek out bounties for reward, but that I really think it's common courtesy to thank researchers for reporting to you, rather than have them pwning your website and/or your customers.
About a month later, while looking into other things, I found that the company exists on HackerOne, where they run a bug bounty. This renewed my interest in seeing this fixed. So I reported the email exchange from earlier, noted that the bug was fixed, and asked if it constituted a rewardable finding. Again, a simple "thanks for the report, but this doesn't really rise to the level of a bounty" is something I've been comfortable with from many companies (though it is nice when you do get something, even if it's just a keychain or a t-shirt, or a bag full of stickers).
3/14: I got a reply the next day, indicating that "we are investigating".
3/28: Then nothing for two weeks, so I posted another response asking where things were going.
4/3: Then a week later, a response: "We're looking into this and will be in touch soon with an update."
4/18: Me: Ping?
5/7: Me: Hey, how are we doing?
5/16: Anything happening?
5/18: Finally, over two months after my report to the company through HackerOne, and ten months after my original email to the first bug bounty address, it's addressed.
5/19: The severity of the bug report is lowered (quite rightly; the questionnaire they used pushed me to a priority of "high", which was by no means warranted). A very welcome bounty, and a bonus for my patience (unexpected, but welcome), are issued.
The cheapest way to learn things is from someone else's mistakes. So I decided to share with my readers the things I picked up from this experience.
Here are a few other lessons I've picked up from bug bounties I've observed:
If you start a bug bounty, consider how ready you might be. Are you already fixing all the security bugs you can find for yourself? Are you at least fixing those bugs faster than you can find more? Do your developers actually know how to fix a security bug, or how to verify a vulnerability report? Do you know how to expand on an exploit, and find occurrences of the same class of bug? [If you don't, someone will milk your bounty programme by continually filing variations on the same basic flaw.]
How many security vulnerabilities do you think you have? Multiply that by an order of magnitude or two. Now multiply that by the average bounty you expect to offer. Add the cost of the personnel who are going to handle incoming bugs, and the cost of the projects they could otherwise be engaged in. Add the cost of the developers whose work will be interrupted to fix security bugs, and add the cost of the features that didn't get shipped on time while those bugs were fixed. Sure, some of that is just a normal cost of doing business, when a security report could come at you out of the blue and interrupt development until it's fixed, but starting a bug bounty paints a huge target on you.
Hiring a penetration tester, or renting a tool to scan for programming flaws, has a fixed cost: you can simply tell them how much you're willing to pay, and they'll work for that long. A bug bounty may result in multiple orders of magnitude more findings than you expected. Are you going to pay them all? What happens when your bounty programme runs out of money?
Finding bugs internally, using bug bashes, software scanning tools or dedicated development staff, has a fixed cost, which is probably still smaller than the amount of money you're considering putting into that bounty programme.
That's not to say bug bounties are always going to be uneconomical. At some point, in theory at least, your development staff will be sufficiently good at resolving and preventing security vulnerabilities that are discovered internally that they will be running short of security bugs to fix. They still exist, of course, but they're more complex and harder to find. This is where it becomes economical to lure a bunch of suckers - excuse me, security researchers - to pound against your brick walls until one of them, either stronger or smarter than the others, finds the open window nobody saw, and reports it to you. And you give them a few hundred bucks - or a few thousand, if it's a really good find - for the time that they and their friends spent hammering away in futility until that one successful exploit.
At that point, your bug bounty programme is actually the least expensive tool in your arsenal.
I'm pretty much unhappy with the use of "Security Questions" - things like "what's your mother's maiden name", or "what was your first pet". These questions are sometimes used to strengthen an existing authentication control (e.g. "you've entered your password on a device that wasn't recognised, from a country you normally don't visit - please answer a security question"), but far more often they are used as a means to recover an account after the password has been lost, stolen or changed.
I've been asked a few times, given that these are pretty widely used, to explain objectively why I have so little regard for them as a security measure. Here's the Too Long; Didn't Read summary:
Let's take them one by one:
What's your favourite colour? Blue, or green. At the outside, red, yellow, orange or purple. That covers most people's choices, in less than 3 bits of entropy.
What's your favourite NBA team? There's 29 of those - 30, if you count the 76ers. That's not even 5 bits of entropy.
Obviously, there are questions that broaden this, but the answers are still relatively easy to guess in a small number of tries - particularly when you can use the next fact about Security Questions.
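To put a number on what "bits of entropy" means here: a uniformly-chosen answer from N possibilities carries log2(N) bits, which you can eyeball in one line of PowerShell (or any calculator):

[math]::Log(6, 2)       # favourite colour, six plausible choices: ~2.6 bits
[math]::Log(30, 2)      # favourite NBA team: ~4.9 bits
8 * [math]::Log(95, 2)  # compare: 8 random printable characters, ~52.6 bits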
What's your mother's maiden name? It's a matter of public record.
What school did you go to? If we know where you grew up, it's easy to guess this, since there were probably only a handful of schools you could possibly have attended.
Who was your first boyfriend/girlfriend? Many people go on about this at length in Facebook posts, I'm told. Or there's this fact:
What's your porn name? What's your Star Wars name? What's your Harry Potter name?
All these stupid quizzes, and they get you to identify something about yourself: the street you grew up on, the first initial of your secret crush, how old you were when you first heard saxophones.
And, of course, because of the next fact, all I really have to do is convince you that you want a free account at my site.
Every site that you visit asks you variants of the same security questions, which means that you'll have told multiple sites the same answers.
You've been told over and over not to share your password across multiple sites - but here you are, sharing the security answers that will reset your password, and doing so across multiple sites that should not be connected.
And do you think those answers (and the questions they refer back to) are kept securely by these various sites? No, because:
There's regulatory protection, under regimes such as PCI, telling providers how to protect your passwords.
There is no such advice for protecting security questions (which are usually public) and the answers to them, which are at least presumed to be stored in a back-end database, but are occasionally sent to the client for comparison against the answers! That's truly a bad security measure, because of course you're telling the attacker the answers.
Even assuming the security answers are stored in a database, they're generally stored in plain text, so that they can be accessed by phone support staff to verify your answers when you call up crying that you've forgotten your password. [Awesome pen-testing trick.]
And because the answers are shared everywhere, all it takes is a breach at one provider for the security questions and answers held everywhere else to lose what little security value they had.
There's an old joke in security circles: "my password got hacked, and now I have to rename my dog". It's really funny, because so many of these security answers are matters of historical fact - while you can choose different questions, you can't generally choose a different answer to the same question.
Well, obviously, you can, but then you've lost the point of a security question and answer, because now you have to remember what random lie you used to answer that particular question on that particular site.
Yes, I know you can lie, you can put in random letters or phrases, and the system may take them ("Your place of birth cannot contain spaces" - so Las Vegas, New York, and Lake Windermere are all unusable). But then you've just created another password to remember, and the point of these security questions is to let you log on once you've forgotten your password.
So, you've forgotten your password, but to get it back, you have to remember a different password, one that you never used. There's not much point there.
Security questions and answers, when used for password recovery / reset, are complete rubbish.
Security questions are low-entropy, predictable and discoverable password substitutes that are shared across multiple sites, are under- or un-protected, and (like fingerprints) really can't be changed if they become exposed. This makes them totally unsuited to being used as password equivalents in account recovery / password reset schemes.
If you have to implement an account recovery scheme, find something better to use. In an enterprise, as I've said before, your best bet is to use something that the enterprise does well: the management hierarchy. Every time you forget your password, you have to get your manager, or someone at the next level up from them, to reset your password for you, or to vouch for you to tech support. That way, someone who knows you, and can affect your behaviour in a positive way, will know that you keep forgetting your password and could do with some assistance. In a social network, require the
Also, password hints are bullshit. Many of the Adobe breach's "password hints" were actually just the password in plain text. And, because Adobe didn't salt their password hashes, you could sort the list of password hashes and pick whichever of the password hints was either the password itself, or an easy clue for the password. So, even if you didn't use the password hint yourself, or chose a really cryptic clue, some other idiot came up with the same password, and gave a "Daily Express Quick Crossword" quality clue.
Credentials include a Claim and a Proof (possibly many).
The Claim is what states one or more facts about your identity.
A Username is one example of a Claim. So is Group Membership, Age, Eye Colour, Operating System, Installed Software, etc.
The Proof is what allows someone to reliably trust the Claim is true.
A Password is one example of a Proof. So is a Signature, a Passport, etc.
Claims are generally public, or at least non-secret, and if not unique, are at least specific (e.g. membership of the group "Brown eyes" isn't open to people with blue eyes).
Proofs are generally secret, and may be shared, but such sharing should not be discoverable except by brute force. (Which is why we salt passwords).
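Since that parenthetical carries a lot of weight, here's a minimal sketch of what salting looks like in practice, using PBKDF2 via .NET's Rfc2898DeriveBytes (the iteration count and sizes are illustrative, not a recommendation):

# A fresh random salt per user means identical passwords produce different hashes
$salt = New-Object byte[] 16
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($salt)
# Iterated, salted derivation; store $salt and $hash, never the password
$kdf = New-Object System.Security.Cryptography.Rfc2898DeriveBytes('hunter2', $salt, 100000)
$hash = $kdf.GetBytes(32)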
Password resets can occur for a number of reasons - you've forgotten your password, or the password change functionality is more cumbersome than the password reset, or the owner of the account has changed (is that allowable?) - but the basic principle is that an account needs a new password, and there needs to be a way to achieve that without knowledge of the existing password.
Let's talk as if it's a forgotten password.
So we have a Claim - we want to assert that we possess an identity - but we have to prove this without using the primary Proof.
Which means we have to know of a secondary Proof. There are common ways to do this: an alternate ID, issued by someone you trust (like a government authority, etc). It's important, in the days of parody accounts or simply shared names (is that Bob Smith, his son, Bob Smith, or his unrelated neighbour, Bob Smith?), that you have associated this alternate ID with the account using the primary Proof, or as a part of the process of setting up the account with the primary Proof. Otherwise, you're open to account takeover by people who share the same name as their target.
And you can legally change your name.
E-mail.
Pretty much every public web site relies on the use of email for password reset, and uses that email address to provide a secondary Proof.
It's not enough to know the email address - that's unique and public, and so it matches the properties of a Claim, not a Proof, of identity.
We have to prove that we own the email address.
It's not enough to send email FROM the email address - email is known to be easily forged, and so there's no actual proof embodied in being able to send an email.
That leaves the server with the prospect of sending something TO the email address, and the recipient having proved that they received it.
You could send a code-word, and then have the recipient give you the code-word back. A shared secret, if you like.
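Here's a sketch of the generation side, with the two properties that matter: the code-word is unguessably random, and it has a server-side expiry (more on the expiry window below). The storage details are invented for illustration:

# Generate an unguessable code-word
$bytes = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
$codeWord = [Convert]::ToBase64String($bytes)
# Record when we stop honouring it; an hour is argued for below
$expiresAt = (Get-Date).AddHours(1)
# Store (account, hash of code-word, expiry) server-side; email the code-word itself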
And if you want to do that without adding another page to the already-too-large security area of the site, you look for the first place that allows you to provide your Claim and Proof, and you find the logon page.
By reusing the logon page, you're going to say that code-word is a new password.
[This is not to say that email is the only, or even the best, way to reset passwords. In an enterprise, you have more reliable proofs of identity than an email provider outside of your control. You know people who should be able to tell you with some surety that a particular person is who they claim to be. Another common secondary identification is the use of Security Questions. See my upcoming article, "Security Questions are Bullshit", for why this is a bad idea.]
Well, yes and no. No, actually. Pretty much definitely no, it's not your new password.
Let's imagine what can go wrong. If I don't know your password, but I can guess your username (because it's not secret), I can claim to be you wanting to reset your password. That not only creates an opportunity for me to fill your mailbox with code-words, but it also prevents you from logging on while the code-words are your new password. A self-inflicted denial of service.
So your old password should continue working, and if you never use the code-word, because you're busy ignoring and deleting the emails that come in, it should keep working for you.
I've frequently encountered situations in my own life where I've forgotten my password, gone through the reset process, and it's only while typing in the new password, and being told what restrictions there are on characters allowed in the new password, that I remember what my password was, and I go back to using that one.
In a very real sense, the code-word sent to you is NOT your new password; it's a code-word that indicates you've gone the password reset route, and should be given the opportunity to set a new password.
Try not to think of it as your "temporary password"; it's a special flag in the logon process, just like a "duress password". It doesn't replace your actual password.
Shared secrets are fantastic, useful, and often necessary - TLS uses them to encrypt data, after the initial certificate exchange.
But the trouble with shared secrets is, you can't really trust that the other party is going to keep them secret very long. So you have to expire them pretty quickly.
The same is true of your password reset code-word.
In most cases, a user will forget their password, click the reset link, wait for an email, and then immediately follow the password reset process.
Users are slow, in computing terms, and email systems aren't always directly linked and always-connected. But I see no reason why the most usual automated password reset process should allow the code-word to continue working after an hour.
[If the process requires a manual step, you have to count that in. Especially if the manual step is something like "contact a manager for approval", because managers aren't generally 24/7 workers, the code-word is going to need to last much longer. But start your discussion with an hour as the base-point, and make people fight for why it'll take longer to follow the password reset process.]
You can absolutely supply a URL in the email that will take the user to the right page to enter the code-word. But you can't carry the code-word in the URL.
Why? Check out these presentations from this year's Black Hat and DefCon, showing the use of a malicious WPAD server on a local - or remote - network whose purpose is to trap and save URLs, EVEN HTTPS URLs, and their query strings.
Every URL you send in an email is an HTTP or HTTPS GET, meaning all the parameters are in the URL or in the query string portion of the URL.
This means the code-word can be sniffed and usurped if it's in the URL. And the username is already assumed to be known, since it's a non-secret. [Just because it's assumed to be known, don't give the attacker an even break: your message should simply say "you requested a password reset on an account at our website". A valid request will come from someone who knows which account at your website they chose to request.]
So, don't put the code-word in the URL that you send in the email.
DON'T LOG THE PASSWORD
I have to say that, because otherwise people do that, as obviously wrong as it may seem.
But log the fact that you've changed a password for that user, along with when you did it, and what information you have about where the user reset their password from.
Multiple users resetting their password from the same IP address: that's a bad sign.
The same user resetting their password multiple times: that's a bad sign.
Multiple expired code-words: that's a bad sign.
Some of the bad things being signaled include failures in your own design â for instance, multiple expired code-words could mean that your password reset function has stopped working and needs checking. You have code to measure how many abandoned shopping carts you have, so include code that measures how many abandoned password reset attempts you have.
Did I miss something, or get something wrong? Let me know by posting a comment!
Sometimes I think that title is the job of the Security Engineer: as a Subject Matter Expert, we're supposed to meet with teams and tell them how their dreams are going to come crashing down around their ears because of something they hadn't thought of, but which is obvious to us.
This can make us just a little bit unpopular.
But being argumentative and sceptical isn't entirely a bad trait to have.
Sometimes it comes in handy when other security guys spread their various statements of doom and gloom - or joy and excitement.
"Rename your administrator account so it's more secure" - or lengthen the password and achieve the exact same effect without breaking scripts or requiring extra documentation so people know what the administrator is called this week.
"Encrypt your data at rest by using automatic database encryption" - which means any app that authenticates to the database can read that data back out, voiding the protection that was the point of encrypting at rest. If fields need encrypting, maybe they need field-level access control, too.
"Complex passwords: one lower case, one upper case, one number, one symbol, no repeated letters" - or else, measure strength in more interesting ways, and display to users how strong their password is, so that a longish phrase, used by a competent typist, becomes an acceptable password.
Now I'm going to commit absolute heresy, as I'm going against the biggest recent shock news in security advice.
I understand the arguments, and I know I'm frequently irritated with the unnecessary requirement to change my password after sixty days, and even more so, I know that the reasons behind password expiration settings are entirely arbitrary.
There's a good side to password expiry.
These aren't the only ways in which passwords are discovered.
The method that frequently gets overlooked is when they are deliberately shared.
"Bob's out this week, because his mother died, and he has to arrange details in another state. He didn't have time to set up access control changes before he left, but he gave me a sticky-note with his password on it, so that we don't need to bother him for anything."
"Everyone on this team has to monitor and interact with so many shared service accounts, we just print off a list of all the service account passwords. You can photocopy my laminated card with the list, if you like."
Yes, those are real situations I've dealt with, and they have some pretty obvious replacement solutions:
Bob (or Bob's manager, if Bob is too distraught to talk to anyone, which isn't at all surprising) should notify a system administrator, who can then respond to requests to open up ACLs as needed, rather than have someone using Bob's password. But he didn't.
When Bob comes back, is he going to change his password?
No, because he trusts his assistant, Dave, with his communications.
But, of course, Dave handed out his password to the sales VP, because it was easier for him than fetching up the document she wanted. And sales VPs just can't be trusted. Now the entire sales team knows Bob's password. And then one of them gets fired, or hired on at a new competitor. The temptation to log on to Bob's account - just once - is immense, because that list of customers is just so enticing. And really, who would ever know? And if they did know, everyone has Bob's password, so it's not like they could prosecute you, because they couldn't prove it was you.
What's going to save Bob is if he is required to change his password when he returns.
Yes, this also happened. Because we found the photocopy of the laminated sheet folded up on the floor of a hallway outside the lavatory door.
There was some disciplining involved. Up to, and possibly including, the termination of employment, as policy allows.
Then the bad stuff happened.
The team who shared all these passwords pointed out that, as well as these being admin-level accounts, they had other special privileges, including the avoidance of any requirement to change passwords.
These passwords hadn't changed in six years.
And the team had no idea what would break if they changed the passwords.
Maybe one of those passwords is hard-coded into a script somewhere, and vital business processes would grind to a halt if the password was changed.
When I left six months later, they were still (successfully) arguing that it would be too dangerous to try changing the passwords.
I'm not familiar with any company whose policy acknowledges that users share passwords, or spells out the expected behaviour when they do [log when you shared it and who you shared it with, then change it as soon as possible once it no longer needs to be shared].
Once you accept that passwords are shared for valid reasons, even if you don't enumerate what those reasons are, you can come up with processes and tools to make that sharing more secure.
If there was a process for Bob to share his password with Dave - maybe one outlining the creation of a temporary password, reading Dave in on when he can share the password (probably never) and how he is expected to behave, and making him co-responsible for any bad things done in Bob's account - suddenly there's a better chance Dave's not going to share. "I can't give you Bob's password, but I can get you that document you're after."
If there were a tool in which the team managing shared service accounts could find and unlock access to passwords, that tool could also be configured to distribute changed passwords to the affected systems after work had been performed.
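Here's a rough sketch of how such a tool might behave; the vault, its method names, and the audit behaviour are all hypothetical, invented for illustration rather than taken from any real product.

```javascript
// Hypothetical check-out/check-in vault for shared service accounts.
const crypto = require("crypto");

class SharedAccountVault {
  constructor() {
    this.accounts = new Map(); // name -> { password, checkedOutBy }
  }

  register(name, password) {
    this.accounts.set(name, { password, checkedOutBy: null });
  }

  checkOut(name, user) {
    const acct = this.accounts.get(name);
    if (!acct || acct.checkedOutBy) throw new Error("unavailable");
    acct.checkedOutBy = user;
    console.log(`AUDIT: ${user} checked out ${name}`); // who had it, and when
    return acct.password;
  }

  checkIn(name, user) {
    const acct = this.accounts.get(name);
    if (!acct || acct.checkedOutBy !== user) throw new Error("not checked out by you");
    acct.checkedOutBy = null;
    // A human has seen the password, so rotate it immediately...
    acct.password = crypto.randomBytes(24).toString("base64");
    // ...and push the change out to every system that uses the account.
    this.distribute(name, acct.password);
  }

  distribute(name, newPassword) {
    // Stub: a real tool would update the affected systems here.
    console.log(`AUDIT: rotated ${name} and distributed the new password`);
  }
}
```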
If you don't have these processes or tools, the only protection you have against password sharing (apart from the obviously failing advice to "just don't do it") is regular password expiry.
I'm also fond of talking about password expiration as a means to train your users.
Certificates expire once a year, and as a result, programmers write code as if it's never going to happen. After all, there's plenty of time between now and next year to write the "renew certificate" logic, and by the time it's needed, I'll be working on another project anyway.
If passwords don't expire, or don't expire often enough, users won't have changed their passwords recently enough to remember how to do it when an emergency demands it.
So, when a keylogger is discovered to have been caching all the logons in the conference room, or the password hashes have been posted on Pastebin, most of your users - even the ones approving the company-wide email request for action - will fight against a password change request, because they just don't know how to do it, or what might break when they do.
Unless they've been through it before, and it was no great thing. In which case, they'll briefly sigh, and then change their password.
This is where I equivocate and say that yes, I broadly think the advice to reduce or remove password expiration is appropriate. My arguments above are mainly about the reasons we might have had for including password expiry to begin with, which we've since forgotten.
Here, in closing, are some ways in which password expiry is bad, just to balance things out:
Right now, this is where we stand:
As a result, especially of this last item, I don't think businesses can currently afford to remove password expiry from their accounts.
But any fool can see which way the wind is blowing: at some point, you will be able to exempt your company from password expiry, but just in case your compliance standard requires it, you should have a very clear and strong story about how you have addressed the risks that were previously resolved by expiring passwords as frequently as once a quarter.
I tweeted this the other day, after reading about Microsoft's Project Bletchley:
With Microsoft releasing "blockchain as a service", how long till privacy rules suggest using blockchains to track data provenance?
- Alun Jones (@ftp_alun) June 16, 2016
I've been asked how I can tweet something as specific as this, when in a subsequent tweet, I noted:
[I readily admit I didn’t understand the announcement, or what it’s /supposed/ to be for, but that didn’t stop me thinking about it]
- Alun Jones (@ftp_alun) June 17, 2016
Despite having a reasonably strong background in the use of crypto, and a little dabbling in the analysis of crypto, I don't really follow the whole "blockchain" thing.
So, here's my attempt to explain what little I understand of blockchains and their potential uses, with an open invitation to come and correct me.
The most widely-known use of blockchains is that of Bit Coin and other "digital currencies".
Bit Coins are essentially numbers with special properties, that make them progressively harder to find as time goes on. Because they are scarce and getting scarcer, it becomes possible for people of a certain mindset to ascribe a "value" to them, much as we assign value to precious metals or gemstones aside from their mere attractiveness. [Bit Coins have no intrinsic attractiveness as far as I can tell.] That there is no actual intrinsic value leads me to refer to Bit Coin as a kind of shared madness, in which everyone who believes there is value to the Bit Coin shares this delusion with many others, and can use that shared delusion as a basis for trading other valued objects. Of course, the same kind of shared madness is what makes regular financial markets and country-run money work, too.
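For the curious, here's a minimal sketch of the kind of hash puzzle that makes such numbers "progressively harder to find": hunt for a nonce whose hash carries a required number of leading zero bits, and raise that requirement over time. This is my own illustration of the idea, not Bit Coin's actual mining algorithm.

```javascript
// Toy proof-of-work: each extra difficulty bit doubles the expected effort.
const crypto = require("crypto");

function leadingZeroBits(buf) {
  let bits = 0;
  for (const byte of buf) {
    if (byte === 0) { bits += 8; continue; }
    bits += Math.clz32(byte) - 24; // clz32 works on 32 bits; a byte uses the low 8
    break;
  }
  return bits;
}

function mine(data, difficultyBits) {
  for (let nonce = 0; ; nonce++) {
    const hash = crypto.createHash("sha256").update(`${data}:${nonce}`).digest();
    if (leadingZeroBits(hash) >= difficultyBits) {
      return { nonce, hash: hash.toString("hex") };
    }
  }
}

console.log(mine("example block", 16)); // ~65,000 attempts on average
```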
Because of this value, people will trade them for other things of value, whether that's shiny rocks, or other forms of currency, digital or otherwise. It's a great way to turn traceable goods into far less-traceable digital commodities, so its use for money laundering is obvious. Its use for online transactions should also be obvious, as it's an irrevocable and verifiable transfer of value, unlike a credit card, which many vendors will tell you from their own experience can be stolen, with transactions revoked as a result, whether or not you've shipped valuable goods.
What makes this an irrevocable and verifiable transfer is the principle of a "blockchain", which is recorded in a distributed ledger. Anyone can, at any time, at least in theory, download the entire history of ownership of a particular Bit Coin, and verify that the person who's selling you theirs is truly the current and correct owner of it.
I'm going to assume you understand how digital signatures work at this point, because that's a whole 'nother explanation.
Remember that a Bit Coin starts as a number. It could be any kind of data, because all data can be represented as a number. That's important, later.
The first owner of that number signs it, and then distributes the number and signature out to the world. This is the "distributed ledger". For Bit Coins, the "world" in this case is everyone else who signs up to the Bit Coin madness.
When someone wants to buy that Bit Coin (presumably another item of mutually agreed similar value changes hands, to buy the Bit Coin), the seller signs the buyer's signature of the Bit Coin, acknowledging transfer of ownership, and then the buyer distributes that signature out to the distributed ledger. You can now use the distributed ledger at any time to verify that the Bit Coin has a story from creation and initial signature, unbroken, all the way up to current ownership.
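Here's a toy model of that chain of signatures, using Node's built-in crypto. Real Bit Coin transactions are structured rather differently, so treat this as a sketch of the idea described above, not of the actual protocol.

```javascript
// Each owner signs the transfer to the next; the ledger records every hop.
const crypto = require("crypto");

const makeKeys = () => crypto.generateKeyPairSync("ed25519");
const alice = makeKeys(); // creator and first owner
const bob = makeKeys();   // buyer

// The coin starts life as a number, signed by its creator.
const coin = Buffer.from("42");
const creationSig = crypto.sign(null, coin, alice.privateKey);

// To transfer, the seller signs (coin + buyer's public key), acknowledging
// the new owner, and the buyer publishes that to the distributed ledger.
const bobPub = bob.publicKey.export({ type: "spki", format: "der" });
const transferSig = crypto.sign(null, Buffer.concat([coin, bobPub]), alice.privateKey);

const ledger = [
  { coin, owner: alice.publicKey, sig: creationSig },
  { coin, newOwner: bob.publicKey, sig: transferSig },
];

// Anyone can replay the ledger to verify the unbroken story of ownership.
console.log(crypto.verify(null, coin, alice.publicKey, creationSig));         // true
console.log(crypto.verify(null, Buffer.concat([coin, bobPub]),
                          alice.publicKey, transferSig));                      // true
```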
I'm a little flaky on what, other than a search in the distributed ledger for previous sales of this Bit Coin, prevents a seller from signing the same Bit Coin over simultaneously to two other buyers (the classic "double spend" problem). Maybe that's enough: after all, if the distributed ledger contains a demonstration that you were unreliable once, your other signed Bit Coins will presumably have zero value.
So, from this perspective, a blockchain is simply an unbroken record of ownership or provenance of a piece of data from creation to current owner, and one that can be extended onwards.
In the world of financial use, of course, there are some disadvantages, the most obvious being that if I can make you sign over a Bit Coin against your will, it's irrevocably mine. There is no overarching authority that can say "no, let's back up on that transaction, and say it never happened". This is also pitched as an advantage, although many Bit Coin owners have been quite upset to find that their hugely-valuable piles of Bit Coins are now in someone else's ownership.
With the above perspective in the back of my head, I read the Project Bletchley report.
I even looked at the pictures.
I still didn't really understand it, but something went "ping" in my head.
Maybe this is how C-level executives feel.
With Microsoft releasing "blockchain as a service", how long till privacy rules suggest using blockchains to track data provenance?
- Alun Jones (@ftp_alun) June 16, 2016
Here's my thought:
Businesses get data from customers, users, partners, competitors, outright theft and shenanigans.
Maybe in environments where privacy is respected, like the EU, blockchains could be an avenue by which regulators require companies to describe, and PROVE, where their data comes from, and that it was not acquired or used in an inappropriate manner?
When I give you my data, I sign it as coming from me, and sign that it's now legitimately possessed by you (I won't say "owned", because I feel that personal data is irrevocably "owned" by the person it describes). Unlike Bit Coin, I can do this several times with the same packet of data, or different packets of data containing various other information. That information might also contain details of what I'm approving you to do with it.
This is the start of a blockchain.
When information is transferred to a new party, that transfer will be signed, and the blockchain can be verified at that point. Further usage restrictions can be added.
Finally, when an information commissioner wants to check whether a company is handling data appropriately, they can ask for the blockchains associated with data that has been used in various ways. That then allows the commissioner to verify whether reported use or abuse has been legitimately approved or not.
And before this sounds like too much regulatory intervention, it also allows businesses to verify the provenance of the data they have, and to determine where sensitive data resides in their systems, because if it always travels with its blockchain, it's always possible to find and trace it.
[Of course, if it travels without its blockchain, then it just looks like you either have old, outdated software which doesn't understand the blockchain and needs to be retired, or you're doing something underhanded and inappropriate with customers' data.]
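To make the thought concrete, here's a hand-wavy sketch of such a provenance chain; the record format and every name in it are my own invention, not anything taken from Project Bletchley.

```javascript
// Each hop signs (previous signature + new entry), so tampering with any
// earlier record breaks every later signature.
const crypto = require("crypto");

function extendChain(chain, payload, privateKey) {
  const previous = chain.length ? chain[chain.length - 1].sig : Buffer.alloc(0);
  const body = Buffer.from(JSON.stringify(payload));
  const sig = crypto.sign(null, Buffer.concat([previous, body]), privateKey);
  return [...chain, { payload, sig }];
}

const customer = crypto.generateKeyPairSync("ed25519");
const business = crypto.generateKeyPairSync("ed25519");

// The customer hands over their data with explicit usage restrictions...
let chain = extendChain([], {
  data: { email: "customer@example.com" },
  allowedUses: ["billing"],
}, customer.privateKey);

// ...and the business signs each onward transfer, adding its own limits.
chain = extendChain(chain, {
  transferredTo: "payment-processor",
  allowedUses: ["billing"],
}, business.privateKey);

// A regulator (or the business itself) can walk the chain and check that
// every recorded use was approved at some earlier hop.
console.log(chain.length, "hops in the provenance chain");
```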
The scheme even allows the revocation of a set of data to be performed, when a customer moves to another provider, for instance.
Yes, there's the downside of hugely increased storage requirements. Oh well.
Oh, and that revocation request on behalf of the customer would then be signed by the business to acknowledge it had been received, and would be passed on to partners: another blockchain.
So, maybe I've misunderstood, and this isn't how it's going to be used, but I think it's an intriguing thought, and I'd love to hear your comments.