SimpleLisp (1) Compiling Lisp to JavaScript

I have already implemented a Lisp interpreter several times, in C#, in Java, and in JavaScript. See the projects:

https://github.com/ajlopez/AjLisp
https://github.com/ajlopez/AjLispJava
https://github.com/ajlopez/AjLispJs

It is always good to implement Lisp. It is a simple, straightforward, powerful language, with functions as first-class citizens, and with a "twist" to implement: some functions that do not evaluate their arguments in advance, and macros.

But this time I want to implement a Lisp compiler, that is, something that translates the Lisp program into the host language. I have chosen JavaScript as the host language, and also as the language of the compiler: JavaScript itself is in charge of reading Lisp code and translating it to JavaScript. You can see my progress at

https://github.com/ajlopez/SimpleLisp

As usual, everything is built following the TDD (Test-Driven Development) workflow, without having the whole design up front; it simply emerges from the small use cases I keep adding. Since this is my first Lisp compiler, I am learning new ways of implementing the language. I picked up a few ideas in recent years from running into Clojure, which compiles to Java, to the CLR, and to JavaScript. What I see I have to implement is:

Symbols: identifiers by name, with an associated value. I am compiling them to JavaScript variables. In SimpleLisp, a symbol can be defined at the top level of the program, inside a let block, or as a function argument. So, depending on the case, I translate it to a top-level JavaScript variable (or at least, top-level within the module being built during compilation), to a local variable, or to a named argument.

Functions: translating a normal Lisp function into a normal JavaScript function. The only difference is that Lisp functions always return a value; the "commands" are always expressions, as in Ruby. A list to be evaluated in SimpleLisp is compiled to a plain function call in JavaScript.

Special forms: their implementation is new to me. Now that I have to compile rather than interpret, every time I run into a list whose head is an if, a do, a let, etc., I compile it to JavaScript in a special way. An (if ... ) in SimpleLisp gets transformed into a JavaScript if (it is a bit more complicated, because the SimpleLisp if has to return a value, it is an expression, while a JavaScript if is a statement; we will see in a future post the implementation I adopted).

Macros: again, since I have to compile, I can adopt a new implementation: expanding the macro AT COMPILE TIME. We will see how far I can get down this road. One consequence I can already see: I cannot pass a macro as a functional value in an argument to another function. That is exactly what happens in Clojure: macros are not functional values, they are "tricks" that get expanded at compile time.

And as in other implementations, I am adding the ability to invoke native JavaScript from SimpleLisp. My idea is to be able to write a Node.js/Express site in SimpleLisp.

Until next time!

Angel “Java” Lopez
http://www.ajlopez.com
http://twitter.com/ajlopez

Window sizing..

Here we go again..

[Screenshot: the IE window opening at half size]

Half a window.. Grrrrrrrrrrr

I have yet to find any method to make it stick at the size I want it to be. It only happens with an IE window. Everything else stays put.

Bitlocker..

I am going to say it again..

………………..

Don’t use Bitlocker to protect your data unless it is military grade and would threaten the security of your nation were it to become public knowledge..

………………..

Bitlocker is NOT a toy utility..

………………..

Bitlocker doesn’t encrypt files. It encrypts drives..

………………..

Use Bitlocker at your own risk. If your Bitlocker credentials decide not to work, the only way to get your drive back is essentially to trash it as a storage device and start over..

………………..

If you want or need to encrypt files and folders, use this method..

EFS

Microsoft Azure: Linux

During the recent Connect(); event in New York, Microsoft made clear they love Linux and open source. Their .NET Framework is now open source, which shows this love. But Satya Nadella had already stated earlier that Microsoft loves Linux.

[Image: Microsoft loves Linux]

In the beginning Azure consisted only of Microsoft-related products. In fact Azure was only PaaS. Virtual machines, as with Amazon, weren't possible. There was something called VM Role, but luckily this was replaced very quickly by real IaaS.

Since the new HTML portal (http://manage.windowsazure.com), IaaS and virtual machines have belonged to the Microsoft cloud landscape. And Linux images were part of it from day one! Many raised their eyebrows, but the number of images just kept going up. On the current preview portal (http://portal.azure.com) the list of Linux images is really big, and I think it is not the end yet. Perhaps other vendors will sell and service other Linux images in the Marketplace. The bigger the demand, the bigger the offering.

[Screenshot: Linux images available in the Azure preview portal]

During my daily walk through the Azure portal (Microsoft Azure grows very quickly) I saw I could set up my own Minecraft server. The only things you need are a name and 15 minutes. After that you have your own Minecraft server. How cool! This one runs on a Linux image, by the way.

[Screenshot: Minecraft server offering in the Azure portal]

What is your reason not to use Microsoft Azure?

Visual Studio 2015 Preview: C# 6

On November 12 and 13 Microsoft organized the Connect(); event in New York. The keynote was live, and the Dutch communities (WAZUG, DotNed, VB Central and of course SDN) had organized an evening event to watch the keynote together. If you weren't there or missed the event completely, you can still watch it via Channel 9. Not the least of speakers were there: Scott Guthrie, S. Somasegar, Brian Harry and Scott Hanselman.

During this event Microsoft showed that the words of CEO Satya Nadella, Mobile First, Cloud First, weren't just a hollow remark. The statement runs through the whole of Microsoft and through everything they do. The keynote showed it.

They made a lot of announcements; one was the fact that Microsoft made the .NET platform open source via GitHub. This means anyone can open a pull request and fix bugs, etc. Microsoft will pick up these changes (after testing, etc.) and incorporate them into the branch. How cool is that! The first pull request came within an hour of opening.

Another announcement was a preview of Visual Studio 2015, and besides that also new previews of ASP.NET vNext and C# 6. No big new language concepts were added; rather, the language and tooling were cleaned up and optimized. All adjustments are meant to make life easier for the developer.

In this blog post, a few of the changes.

Code should be not only easier to read but also cleaner. Take a simple method where we could do without the full function body.


By using the lambda arrow (=>) it is possible to make it shorter (see the sketch below). The lambda arrow was already in the language, but now you can use it for this too. There is another usage as well.

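The original screenshots are missing, so here is a minimal sketch of what they most likely showed (the method name and body are purely illustrative):

// A simple method with a full body
public int Add(int x, int y)
{
    return x + y;
}

// The same method in C# 6 using the lambda arrow: no function body needed
public int Add(int x, int y) => x + y;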

Clean code, understandable and readable. Self-documenting code ;)

Another change. You will recognize the following code construction.


With the new Elvis operator it can be shorter and more readable (both forms are sketched below). The ?. operator (imagine some Elvis hair and eyes) checks whether the object is null.

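Again the screenshots are gone; a small sketch of both forms, assuming an object x with a Name property and a Calc method:

// The classic construction: check for null before dereferencing
string name = null;
if (x != null)
{
    name = x.Name;
}

// C# 6 with the Elvis operator: name2 is simply null when x is null
string name2 = x?.Name;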

It is also possible with methods in a class: x?.Calc(1,2); Personally I find this dangerous, because the Calc method is not executed if x is null. And if your tests or the structure of the application are not good, this will lead to unpredictable situations.

Another construction you and I use a lot: we build strings for messages and of course we use the string.Format method. Super easy and fairly readable; the disadvantage is that you need to count the positions and it is not always clear at a glance.


So this should be different. Simply put {<variable>} in the placeholder and the code documents itself. Of course the formatting rules of string.Format are still available, with complete freedom for the text (both forms are sketched below).

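As a sketch of the missing screenshots, using the interpolation syntax as it finally shipped in C# 6 (the names are illustrative; name and count could be local variables or public fields of the class):

// Classic string.Format: you have to count the positions
var message = string.Format("Hello {0}, you have {1} new messages", name, count);

// C# 6 string interpolation: the variable sits right in the placeholder
var message2 = $"Hello {name}, you have {count} new messages";

// The string.Format formatting rules still work
var total = $"Total: {amount:C2}";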

In an example like that, the variable could simply be a public field of the class/object; by making it public, you can manipulate it directly, but you are better off using properties. Until now auto-implemented properties needed both a getter and a setter. In C# 6 there is the notion of a read-only property: a property with a getter and no setter.

In the example this is done with the lambda arrow, which automatically turns it into a read-only property (sketched below). You can see this via the CodeLens reference counter, and if you try to change the property in code you get a compiler error.


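A sketch of what the missing example probably looked like, with illustrative names:

public class Person
{
    public Person(string name)
    {
        Name = name;               // a getter-only auto-property may be set in the constructor
    }

    // C# 6 getter-only auto-property: a getter and no setter
    public string Name { get; }

    // Lambda-arrow (expression-bodied) property: also read-only
    public string Greeting => "Hello " + Name;
}

// Elsewhere in code:
// person.Name = "Other";          // does not compile, the property is read-only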

There are lots more, more about that later.

Creating NIC team without knowing the team members

I was asked how to create a NIC team using only the 1 Gb adapters, without knowing how many 1 Gb NICs were on the server.

 

I think this should solve the problem

 

New-NetLbfoTeam -TeamMembers (Get-NetAdapter | where Speed -ge 1gb | select -ExpandProperty Name) -Name MyTeam

 

Use New-NetLbfoTeam to create the team. The team member names are generated by

 

Get-NetAdapter | where Speed -ge 1gb | select -ExpandProperty Name

 

By putting that statement in parentheses as the value for the –TeamMembers parameter, its results are used as the parameter value. It shouldn’t matter now how many NICs there are or what they are called. You can modify the filter criteria as required.

Weekend reading

Girls CAN Do Anything

 

Mattel has angered a large segment of the Girl Geek world with a book with a decidedly sexist slant that portrays Barbie as needing help from boys to successfully complete a coding project. While Mattel has apologized, “pulled” the publication and promised a better attitude going forward, the important concept that girls can be what they dream to be and should reach for the stars is what really matters. Women in tech have been successful for years. I’m not just talking about the Marissa Mayers of this world. There are thousands (and hopefully tens of thousands) of us who have built successful tech careers, many of them, like mine, going back 20 or 30 years.

There’s been lots of activity on Barbie’s Facebook page where posts are being deleted by Barbie’s social media staff (see Posts to Page) and some still are up at https://www.facebook.com/BarbieNAD/posts/362944293876701

One of my peers has started a #realgirlgeek campaign to highlight the fact that women and girls CAN be anything they want to be, including computer programmers and IT professionals.

The White House has also recently stepped up a campaign to highlight and help close the pay gap in ALL professions.

 

[Image: White House pay equality campaign]

Let’s encourage our girls to pursue and excel in technical careers.

Oh yeah, for what it’s worth, here’s my remix of the horrendous “Barbie: I Can Be a Computer Engineer.”

Facing Compliance Challenges with Outsourcing

Introduction


Financial institutions around the world are moving their core processes to outsourcing and offshoring models, with the goal of reducing annual costs and improving the quality of the development and improvement of these processes. This allows managers to stay focused on the core business and on market strategies.

Motivation for Outsourcing Compliance Processes


Many companies currently face compliance challenges. The scope varies by industry; however, companies in the healthcare and financial services sectors are the ones most affected by government regulations. The decision to adopt an outsourcing model is driven by the following factors.

Shortage of Specialized Professionals


A strong reason for outsourcing is the lack of qualified professionals. In addition, recruiting and training qualified professionals has become extremely difficult. The number of compliance specialists is small compared with the growing demand from companies subject to regulatory requirements. Companies are increasingly concerned about the risks related to the complexity and constant change of compliance rules, and about the levels of investment needed to recruit, train and retain professionals with the required knowledge and experience.

Constant Changes in Compliance Processes


The continuous evolution of regulatory requirements makes investments in compliance processes reactive. As a result, organizations face high costs and low quality in the development of compliance solutions.

High Investments in Technology Infrastructure


Companies in the financial sector need to invest continuously in new technologies and infrastructure to meet compliance requirements. This scenario is aggravated by the constant changes in existing rules and the emergence of new regulations. Companies operating globally face the need for even larger investments to meet the compliance requirements that apply to their global operations.

Operating Costs


The growing need for resources (professionals, processes and technology) driven by regulatory requirements directly affects operating costs.

Moving Compliance Challenges to the Outsourcing Model


The expression “compliance outsourcing” means outsourcing the processes related to legal rules and regulations to a service provider located domestically or in another country, in the latter case called “offshoring.” Companies unfamiliar with these models may see them as unfeasible or even impossible; the most common reason for this is associated with compliance challenges.

Common Challenges


Data security, regulatory complexity, reporting reliability, responsiveness and infrastructure are the factors most commonly cited by organizations as making outsourcing or offshoring impracticable. However, the same factors argue in favor of outsourcing: specialized firms have qualified professionals who can address these challenges more effectively while reducing operating costs. Having compliance processes handled effectively and economically is the greatest benefit added by outsourcing.

Benefits of Outsourcing Compliance Processes


The ideal service provider should deliver the following benefits:
  • Gains in efficiency and quality through the use of structured processes.
  • Access to specialized and experienced professionals.
  • Transparent end-to-end execution of processes, from interpreting regulatory requirements to the corrective actions needed to meet them.
  • Flexibility to scale the team of qualified professionals up or down according to project needs.
  • Use of data analysis tools that provide trends and insights.
  • Reduced load on internal infrastructure and human resources.
  • Effective reduction of operating costs.

Selective Outsourcing


A selective outsourcing strategy, choosing which compliance processes to run internally and which to outsource, can help companies optimize their resource allocation. This reflects the main goal of outsourcing: placing compliance processes with a third party that executes them with a high level of quality, responsiveness and cost effectiveness, allowing internal resources to focus their efforts on the core business.

Outsourcing Models


Compliance outsourcing is a type of specialized-knowledge outsourcing, known as Knowledge Process Outsourcing (KPO), which in the past was seen as part of the Information Technology Outsourcing (ITO) and Business Process Outsourcing (BPO) models. Activities related to the KPO model tend to be more complex, since they require specialized knowledge of the processes of the industry in question, such as standards, regulations, frameworks, and prior experience with this model.

Comparing In-House and Outsourced Compliance


Each company should build and weigh its own business case for outsourcing its compliance processes compared with developing and supporting them in-house, which may require constant investments in staff and infrastructure. Service providers prepared to address compliance challenges can offer significant advantages over the in-house model, mainly because these providers have to stay up to date in order to remain in the market. The table below compares the two models.
Factor                    In-house          Outsourced
Cost                      Fixed             Variable/Reduced
Resource flexibility      Limited           Immediate
Competence/Skills         Restricted        On demand
Talent availability       Limited           Readily available
Training impact           Time and cost     None
Global challenges         Significant       Minimal
Speed of change           Slow              Proactive

 

Conclusion


For any industry, the more it is subject to legal regulations and the more dynamic the pace of change in the regulatory landscape, the greater the motivation to outsource this process should be. The financial services industry leads the market when it comes to outsourcing infrastructure and the development of compliance-oriented solutions.

The article is also available in English! Click here to download it.

Simplifying ADO.NET Data Access, Part 4

This is a continuation of a series on simplifying the use of ADO.NET. In the last article we added the ability to cleanly separate query definition from the underlying data provider. But we left out parameters, which are generally critical (and specific) to ADO.NET providers. In this article we will add support for parameters and a fluent interface to make them easy to use.

Commands and parameters in ADO.NET

In ADO.NET you create a parameter by creating a provider-specific instance of DbParameter and then set the values accordingly.


var cmd = new SqlCommand("SELECT Id, Name FROM Roles where Id = @id");
cmd.Parameters.AddWithValue("@id", id);

This has several disadvantages.


  • The instance is tied to the provider
  • The parameter can be associated with only one command making reuse harder
  • The parameter name is provider-specific making it harder to build generic queries
  • There is no type checking to ensure that the parameter value matches the parameter type

DataParameter


As with the other areas we will create a generic type to represent parameters called DataParameter. It will expose the standard properties common to all parameters such as name, type, value, direction, precision, etc.


public class DataParameter
{   
   private DataParameter ()    
   {
      IsNullable = true;        
      SourceVersion = DataRowVersion.Current;    
   }

   public DataParameter ( string name, DbType type ) : this(name, type, ParameterDirection.Input)    
   { }    

   public DataParameter ( string name, DbType type, ParameterDirection direction ) : this()    
   {        
      Name = name;        
      DbType = type;        
      Direction = direction;    
   }

   public DbType DbType { get; private set; }    
   public ParameterDirection Direction { get; private set; }    

   public bool IsNullable { get; set; }    
   public string Name { get; set; }    
   public int Precision { get; set; }    
   public int Scale { get; set; }        
   public int Size { get; set; }    
   public string SourceColumn { get; set; }       
   public DataRowVersion SourceVersion { get; set; }
    
   public object Value { get; set; }
}

Ignoring error checking, it is simply a collection of properties. Notice that the type and direction are fixed at creation time. The type is of type DbType, which is the database-agnostic list of types. If you need to support provider-specific types then you’ll have to create a derived type, as will be discussed later. Finally, note that the parameter name does not contain any provider prefixes like @, ? or colon (:).


To hook everything up we need only add a Parameters property to DataCommand. We’ll use a KeyedCollection so that the parameters are accessible by name as well.


public class DataCommand 
{    
   public DataCommand ( string commandText, CommandType type )    
   {
      ...
      Parameters = new DataParameterCollection();    
   } 

   public DataParameterCollection Parameters { get; private set; }
}

We can now update the original code with a more generic version.


var cmd = new DataCommand("SELECT Id, Name FROM Roles where Id = @id", CommandType.Text);
cmd.Parameters.Add(new DataParameter("id", DbType.Int32) { Value = id });

Converting to ADO.NET


Now that the generic code is in place, we need to hook up the provider back end. This is more complicated but ultimately not that hard. Most of the changes are in the ConnectionManager.CreateCommandBase abstract method.


  • For each parameter associated with the command create a provider-specific instance
  • Copy the property values making any adjustments based upon the provider such as the data type and parameter name
  • For input or input-output parameters, copy the value

For the generic DbProviderFactoryConnectionManager we wrote it is straightforward.


protected override DbCommand CreateCommandBase ( DataCommand command )
{
    DbCommand cmdDb = Factory.CreateCommand();
    ...
    foreach (var parm in command.Parameters)
    {
        cmdDb.Parameters.Add(CreateParameterBase(parm, cmdDb));
    };

    return cmdDb;
}

CreateParameterBase is a new virtual method that handles the heavy lifting. It determines if the provider requires any formatting of a parameter name and applies it accordingly. Refer to the code if you are interested in the details.


protected virtual DbParameter CreateParameterBase (DataParameter source, DbCommand command)
{
    DbParameter target = command.CreateParameter();

    target.ParameterName = FormatParameterName(source.Name);
    target.DbType = source.DbType;
    target.Direction = source.Direction;
    target.Size = source.Size;
    target.SourceColumn = source.SourceColumn;
    target.SourceVersion = source.SourceVersion;

    switch (source.Direction)
    {
        case ParameterDirection.InputOutput:
        case ParameterDirection.Input: target.Value = source.Value ?? DBNull.Value; break;
    };

    return target;
}
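As an illustration, a SQL Server oriented override of the name formatting might look like the following sketch (not the exact code from the download):

protected virtual string FormatParameterName ( string name )
{
    //SQL Server style: ensure the name carries the @ prefix expected by the provider
    return name.StartsWith("@") ? name : "@" + name;
}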

One small change is also needed in ConnectionManager.PrepareCommandCore to handle null. Any parameter that has no value is set to DBNull.Value so that the database will properly see the null value.


Passing the parameter is only part of the solution. It is also necessary to get any output parameter values after the call is made. UpdateParameterCore is a new method in ConnectionManager that is responsible for copying any output parameter value back to the original DataParameter.


private static void UpdateParameterCore (DbCommand command, DataCommand target)
{
    for (int nIdx = 0; nIdx < target.Parameters.Count; ++nIdx)
    {
        switch (command.Parameters[nIdx].Direction)
        {
            case ParameterDirection.InputOutput:
            case ParameterDirection.Output:
            case ParameterDirection.ReturnValue:
            {
                target.Parameters[nIdx].Value = command.Parameters[nIdx].Value;
                break;
            };
        };
    };
}

Every place a command is created is updated to call this method after the connection is closed (output parameters don’t get their value until the command is cleaned up).


Custom Commands


At this point we have everything we need to support parameterized commands, but we can clean up the code a little more. In general commands are either adhoc queries or stored procedures, so we can make the code a little clearer by creating derived types. For a stored procedure it is common to want to view the return value, so we expose a property containing the result.


public class AdhocQuery : DataCommand
{
    public AdhocQuery (string commandText) : base(commandText, CommandType.Text)
    {
    }
}

public class StoredProcedure : DataCommand
{
    public StoredProcedure (string name) : base(name, CommandType.StoredProcedure)
    {
    }

    public int ReturnValue { get; internal set; }
}
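As a usage sketch (the execution call goes through the ConnectionManager from the earlier articles; ExecuteNonQuery is a hypothetical method name here, not necessarily the one in the download):

var proc = new StoredProcedure("UpdateRole");
proc.Parameters.Add(new DataParameter("id", DbType.Int32) { Value = 10 });
proc.Parameters.Add(new DataParameter("name", DbType.String) { Value = "Admin" });

//Execute through the connection manager instance from the earlier parts (hypothetical call)
connectionManager.ExecuteNonQuery(proc);

//After execution the return value has been copied back for us
int result = proc.ReturnValue;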

The client could easily add a parameter for that, but we can extend the existing code to determine if a stored procedure is being executed and add the parameter on the provider side. This requires a change to ConnectionManager.PrepareCommandCore. Note that we could generalize this behavior to all commands, but I’ll leave it specific to stored procedures.


//Automatically capture the return value, if a sproc
var sproc = command as StoredProcedure;
if (sproc != null)
{
    //If there isn't a return value parameter already then add one
    if (!cmd.Parameters.OfType<DbParameter>().Any(p => p.Direction == ParameterDirection.ReturnValue))
    {
        var pReturn = cmd.CreateParameter();
        pReturn.ParameterName = "return";
        pReturn.DbType = DbType.Int32;
        pReturn.Direction = ParameterDirection.ReturnValue;
        cmd.Parameters.Add(pReturn);
    };
};

We have to make a similar change to the ConnectionManager.UpdateParameterCore method to copy the value back to the StoredProcedure.ReturnValue property after the command executes. Refer to the code for the details; a possible shape is sketched below.
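One possible shape of that change (a sketch only; the actual code in the download may differ):

//Inside UpdateParameterCore, after the output parameter values have been copied back
var sproc = target as StoredProcedure;
if (sproc != null)
{
    var pReturn = command.Parameters.OfType<DbParameter>()
                         .FirstOrDefault(p => p.Direction == ParameterDirection.ReturnValue);
    if (pReturn != null && pReturn.Value != DBNull.Value)
        sproc.ReturnValue = Convert.ToInt32(pReturn.Value);
};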


Next Time


We are almost done. Next time we will clean up the parameter API a little by making it fluent and generic. We will also clean up the reader code to make it easier to use.


DataAccessDemo.zip

This Blog.. updated

You may have noticed that the original header image has disappeared. All of the work that I put into making it has come to nothing. For reasons known only to the WordPress theme, it has decided to compress the header image to only 80 pixels high, which of course has meant more work.

So, what do you think of the new one? I don’t like the placement of the images on the header, but I am only working with MS Paint. For now, it will have to do..

Updated bit..

OK, so it looks like the author of the theme has changed something. When I originally picked the theme, the header image was very different to what is being shown now. It was difficult enough to place the images in my original custom header image, and trying this time is just not working.

If I use MS Paint to create a 1600 x 80 pixel image, it doesn’t translate to WordPress at that size. It is all over the place. I tried in Arcsoft’s Photostudio 5 and had the same issue there. Not funny..

Another error I made was to remove what I considered to be duplicate images from the WordPress library. A consequence of this action is that some graphics no longer show in posts. Where they were important to the text, you will now have to use your imagination to its fullest.. :)

TFS announcements roundup

There have been a load of announcements about TFS, VSO and Visual Studio in general in the past couple of weeks, mostly at the Connect() event.

Just to touch on a few items

If you have not had a chance to have a look at these features, try the videos of all the sessions on Channel 9; the keynotes are a good place to start. Also look, as usual, at the various posts on Brian Harry’s blog. It is a time of rapid change in ALM tooling.


Source: Rfennell

‘Visual Studio Community 2013 with Update 4’ is released

[Image: Visual Studio Community 2013]

Download Visual Studio Community 2013.

The Visual Studio Express products have been a huge success – hundreds of millions of downloads – and now the team has brought the Express SKUs together into one product that can do everything from desktop development to Store development to Azure and ASP.NET development. Plus, it includes full extensibility, so you can use all your favorite extensions from the VS Gallery and elsewhere. Built off the Visual Studio 2013 Update 4 release, VS Community enables you to develop everything from Windows Forms, WPF and MFC to Windows Phone and Store to Azure and ASP.NET – it’s basically a superset of the existing VS Express products. More than that, it includes support for the ecosystem of over 5,000 Visual Studio extensions. Read the Visual Studio Community 2013 release notes and watch the Visual Studio Community 2013 video to learn all about what you can do with this release. Visual Studio Community 2013 is meant for use by open source developers, startups, students, and hobbyists, rather than enterprises. To try it out you could use an Azure VM image.

 

Microsoft Visual Studio Community 2013 with Update 4 – English Install now

Microsoft Visual Studio Community 2013 with Update 4 – English DVD5 ISO image

Microsoft Visual Studio 2013 Language Pack – English Install now

WordPress 4.0.1 is a Critical Security Release that Fixes a Cross-Site Scripting Vulnerability

WordPress core contributors released a security update today. All users who have not yet received the automatic update are encouraged to update as soon as possible. WordPress 4.0.1 is a critical security release that provides a fix for a critical cross-site scripting vulnerability, originally reported by Jouko Pynnonen on September 26th.

http://wptavern.com/wordpress-4-0-1-is-a-critical-security-release-that-fixes-a-cross-site-scripting-vulnerability

Errors running tests via TCM as part of a Release Management pipeline

Whilst getting integration tests running as part of a Release Management pipeline within Lab Management, I hit a problem: TCM-triggered tests failed because the tool claimed it could not access the TFS build drops location, and no .TRX (test results) files were being produced. This was strange as it used to work (the RM system had worked when it was 2013.2; it seems to have become an issue with 2013.3 and 2013.4, but this might be a coincidence).

The issue was twofold..

Permissions/Path Problems accessing the build drops location

The build drops location is passed into the component using the argument $(PackageLocation). This is pulled from the component properties; it is the TFS-provided build drop location with a \ appended on the end.

[Screenshot: Release Management component properties showing the build drop path]

Note that the \ in the text box is there because the textbox cannot be empty. It tells the component to use the root of the drops location. This is the issue: when you are in a network-isolated environment and have had to use NET USE to authenticate with the TFS drops share, the trailing \ causes a permissions error (it might occur in other scenarios too; I have not tested it).

Removing the slash or adding a . (period) after the \ fixes the path issue, so..

  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779    – works
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\   – fails
  • \\server\Drops\Services.Release\Services.Release_1.0.227.19779\.  – works

So the answer is either to add a . (period) in the pipeline workflow component so the build location is $(PackageLocation)\. as opposed to $(PackageLocation)\, or to edit the PS1 file that is run so it does some validation and strips out any trailing \ characters. I chose the latter, making the edit below.

if ([string]::IsNullOrEmpty($BuildDirectory))
    {
        $buildDirectoryParameter = [string]::Empty
    } else
    {
        # make sure we remove any trailing slashes as they cause permission issues
        $BuildDirectory = $BuildDirectory.Trim()
        while ($BuildDirectory.EndsWith("\"))
        {
            $BuildDirectory = $BuildDirectory.Substring(0,$BuildDirectory.Length-1)
        }
        $buildDirectoryParameter = "/builddir:""$BuildDirectory"""
    }
   

Cannot find the TRX file even though it is present


Once the tests were running I still had an issue: even though TCM had run the tests, produced a .TRX file and published its contents back to TFS, the script claimed the file did not exist and so could not pass the test results back to Release Management.


The issue was the call being used to check for the file existence.


[System.IO.File]::Exists($testRunResultsTrxFileName)


As soon as I swapped to the recommended PowerShell way to check for files


Test-Path($testRunResultsTrxFileName)


it all worked.


Source: Rfennell

Removal instructions for GoSave

What is GoSave?

The Malwarebytes research team has determined that GoSave is a browser hijacker. These so-called “hijackers” manipulate your browser(s), for example to change your startpage or searchscopes, so that the affected browser visits their site or one of their choice. This one also displays advertisements.

https://forums.malwarebytes.org/index.php?/topic/161230-removal-instructions-for-gosave/

Removal instructions for Zorton Win 7 Protection 2014

What is Zorton Win 7 Protection 2014?

The Malwarebytes research team has determined that Zorton Win 7 Protection 2014 is a fake anti-malware application. These so-called “rogues” use intentional false positives to convince users that their systems have been compromised. Then they try to sell you their software, claiming it will remove these threats. In extreme cases the false threats are actually the very trojans that advertise or even directly install the rogue.

https://forums.malwarebytes.org/index.php?/topic/161221-removal-instructions-for-zorton-win-7-protection-2014/

Azure IaaS for IT Pros Online Event

From December 1 to 4 there is a free Microsoft online event about Azure IaaS every evening (German time). The training is also ideal preparation for the Microsoft exam 70-533, “Implementing Microsoft Azure Infrastructure Solutions”. As a reward for regular attendance there is a 50% voucher for the exam fee.

The “keynote” is given by Mark Russinovich (Microsoft Chief Technology Officer, Azure); Rick Claus (Senior Technical Evangelist) and members of the Azure team guide you through the program.

More information:

Have fun!

Best regards
Dieter


Dieter Rauscher
MVP Enterprise Security

How to Create a Charms Bar Shortcut in Windows 10

Windows 10 has the Settings, Devices, Start, Share, and Search charms available. Charms are context sensitive to the location (desktop vs Start screen) and application that is running when opened.

This tutorial provides a download for a charms bar shortcut that always opens directly on your desktop or Start screen and also shows the clock in Windows 10.

Read more…

How to Open and Use Disk Cleanup in Windows 10

You can use Disk Cleanup to reduce the number of unnecessary files on your drives, which can help your PC run faster. It can delete temporary files and system files, empty the Recycle Bin, and remove a variety of other items that you might no longer need.

This tutorial will show you how to open and use Disk Cleanup to help free up space by removing unneeded files in Windows 10.

Read more…
