
Cloud computing types, perspectives, and disadvantages

Around the beginning of the 20th century, factories were built near rivers in order to secure their own source of power. Today electricity is no longer a differentiating factor among factories, so they simply outsource this service to specialized electrical companies. I think the same will happen with computing sooner or later. In a previous post, I explored the context in which cloud computing was born, and why it is an effective way to reduce computing costs by turning fixed costs into variable costs. This is one of its main attractions, but it is not the only one. Other advantages emerge when we consider cloud computing from several different viewpoints:

  • Cloud computing as a use model, where we are interested in computing results and not in how we implement them.
  • Cloud computing as an access model, where we can access an application from everywhere.
  • Cloud computing as an infrastructure model, where capacity is not reserved upfront but grows and shrinks elastically with demand.


We therefore cannot talk about a single cloud computing model anymore. Multiple models make up the cloud, and they are actually independent problems, each with its own research efforts, difficulties, and implementations. There are also different types, or deployment models. In public clouds (also known as external clouds), users access the cloud through web-browser interfaces, and resources are provisioned on a fine-grained basis, which clearly constitutes a form of utility computing. Private clouds are set up inside an organization’s internal data center; they provide more control over deployment and use than public clouds, in much the same way an intranet does. Between these two approaches, hybrid clouds are private clouds linked to one or more external cloud services. They provide more secure control of data while still allowing various parties to access information over the Internet, and they have open architectures so that they can be integrated into larger, external management systems.

From these different models we can draw up a list of aspects that every company should assess before adopting a cloud computing solution. In the following list I summarize some of the ones I consider most relevant:

  • Cloud providers are responsible for security management using (hopefully) a specialized team that applies state-of-the-art technologies.
  • The coherence problem derived from data synchronization is now the provider’s responsibility.
  • Licenses are “pay-per-use”, instead of a high initial fixed cost.
  • Systems are automatically upgraded by the cloud provider.
  • Collaborative work is easier (Google Docs and similar products are a great example of this).
  • Lightweight clients can access the cloud from anywhere there is an Internet connection.
  • Energy costs are minimized.


In my opinion, cloud computing helps companies differentiate themselves by reducing the time spent on tasks that are not central to their business and, at the same time, enables new delivery methods that increase competitiveness. However, cloud computing is not a universal solution to every computing need. Some problems, scenarios, and applications are not appropriate for a cloud environment, and there is also a risk of downtime, so the recommendation is that businesses migrate their less critical services first. Businesses must conduct a rigorous analysis before committing to a migration to the cloud, and this analysis must not overlook some weak points:

  • There is no full control over the service’s internal management.
  • There is a high dependency on an Internet connection.
  • There is an inherent migration risk if the cloud provider goes out of business: which tools and procedures are available to export our data from the provider? This is commonly known as the “lock-in” problem.
  • There is some uncertainty about the performance of computations in the cloud. Bottlenecks can occur in data-centric applications.


I have left out two of the most important aspects of cloud computing, or at least the two that, according to many surveys, cause the most concern among users and companies around the world: security and privacy. They are intertwined concepts that have received a great deal of attention from the research community, and I will try to explain them in a future post.

A brief introduction to grid, virtualization, and cloud computing

“Cloud” is one of the latest buzzwords in the media today. People in technology circles often associate the cloud with Dropbox, Facebook, or Windows Azure, but in fact the cloud is a broader concept. What exactly is cloud computing?

A little bit of context

We have to introduce the context in which cloud computing was born: parallel and distributed computing. We can say that the aim of parallel and distributed computing is to run data- or compute-intensive applications efficiently. There are two main paradigms in parallel and distributed computing:

  • High performance computing, or HPC
  • High throughput computing, or HTC

In HPC we are interested in running one parallel application as fast as possible. This paradigm is therefore useful in domains where time is constrained. For example, in computational fluid dynamics engineers simulate how an airplane wing behaves in a wind tunnel using complex algorithms; here it is obvious that we need to reduce the computational time as much as we can. Another typical example is weather forecasting. It is also a CPU-intensive activity that computers need to finish in a timely fashion (i.e. it’s not acceptable for a forecast of next week’s weather to take two weeks to compute). In HTC we are interested in completing as many jobs as possible over a given period of time. For example, financial modeling and bioinformatics are two fields where HTC is commonly used.
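To make the contrast more concrete, here is a minimal, purely illustrative Python sketch (not a real scheduler, and the workloads are made up): the same pool of cores can be used HPC-style, where every core cooperates on a single job so that it finishes sooner, or HTC-style, where the cores are kept busy with many independent jobs and what matters is how many of them complete over time.

```python
# Toy illustration (not a real scheduler): the same worker pool used in an
# "HPC-like" way (all cores cooperate on ONE job so it finishes sooner) and
# in an "HTC-like" way (many independent jobs are pushed through over time).
from multiprocessing import Pool

def simulate_chunk(chunk_id):
    """Stand-in for one slice of a single large simulation (HPC view)."""
    return sum(i * i for i in range(chunk_id * 100_000, (chunk_id + 1) * 100_000))

def independent_job(job_id):
    """Stand-in for one of many unrelated jobs in a workload (HTC view)."""
    return job_id, sum(i % 7 for i in range(500_000))

if __name__ == "__main__":
    with Pool() as pool:
        # HPC-like: split one problem across all cores to minimize its runtime.
        single_result = sum(pool.map(simulate_chunk, range(8)))

        # HTC-like: keep the cores busy with many independent jobs; what matters
        # is how many of them complete per unit of time, not any single job.
        batch_results = pool.map(independent_job, range(32))

    print(single_result, len(batch_results))
```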

Now that we have presented the two paradigms, let’s look at the types of platforms found in each. In HPC there are two important platforms: Symmetric Multi-Processors, or SMP, and Massively Parallel Processors, or MPP. In HTC, clusters and network systems are the rule. You can see clearly that there is a shift from the centralized, easy-to-administer, homogeneous, tightly coupled platforms of HPC to the more heterogeneous and harder-to-administer platforms of HTC. This motivates the addition of a layer of abstraction that offers a uniform view of a system and simplifies administration. This layer is what computer scientists call a local resource management system, or LRMS. One of the most popular open-source LRMSs is Oracle Grid Engine. Local resource management systems are not free of disadvantages, though. The most important one is that they lack a common interface and security infrastructure, which makes it difficult to build computational architectures that span multiple administrative domains.
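As an illustration of that uniform view, here is a small sketch that submits a job through DRMAA, a standard API implemented by several resource managers, Grid Engine among them. It assumes the drmaa-python bindings are installed and that a DRMAA-capable LRMS is already configured on the machine; the command and its arguments are placeholders.

```python
# Sketch: submit one job through DRMAA, a standard API exposed by several
# LRMSs (Grid Engine among them). Assumes the drmaa-python bindings and a
# configured, DRMAA-capable resource manager are available on this machine.
import drmaa

session = drmaa.Session()
session.initialize()
try:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"   # placeholder executable the LRMS will run
    jt.args = ["10"]                  # placeholder arguments
    job_id = session.runJob(jt)       # the LRMS decides where the job runs
    print("Submitted job:", job_id)

    # Block until the resource manager reports the job as finished.
    session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print("Job finished.")

    session.deleteJobTemplate(jt)
finally:
    session.exit()
```

The point is that these few calls look the same regardless of which nodes the LRMS manages underneath.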

Many, if not all, computer science problems can be solved by adding a new layer of indirection, and that is exactly what is done to integrate different administrative domains: we add a computational grid as a middleware layer above the LRMSs. A computational grid is essentially a system that meets the following requirements, formulated by Ian Foster in 2002:

  • Coordinates decentralized resources.
  • Uses open protocols and standards.
  • Provides non-trivial quality of service.

Examples of grid middleware are the Globus Toolkit (from the Globus Alliance), UNICORE, and GRIA.

Servers, virtualization, and the cloud

Classic servers are static, pre-configured environments that run a particular instance of an operating system. There is therefore a tight coupling between a server and its resources (be it an operating system or a hardware configuration). This means high costs for fault-tolerance monitoring and hardware maintenance, costly downtime during hardware upgrades, and so on. The alternative is virtualization, which decouples servers from physical resources: many virtual machines can run on top of the same physical resource, even simultaneously.
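Here is a minimal sketch of that “many servers on one physical resource” idea, assuming the libvirt-python bindings and a local hypervisor (for example KVM reachable at qemu:///system) are available; it simply lists the virtual machines defined on a single host.

```python
# Minimal sketch: list the virtual machines defined on one physical host.
# Assumes the libvirt-python bindings and a local hypervisor reachable at
# qemu:///system (e.g. KVM) are available.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only connection to the hypervisor
try:
    for dom in conn.listAllDomains():          # every VM defined on this host
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```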

Cloud computing can be understood as a paradigm for delivering on-demand hardware resources by means of virtualization. The assumption is that, in many scenarios, hardware resources sit underused for long periods of time, so it is more efficient to consume computing power the way we consume cable TV or electricity. More precisely, we can look at cloud computing from multiple angles (a small sketch of the first one follows this list):

  • Infrastructure as a Service (IaaS), which delivers raw computing infrastructure, like Amazon Web Services.
  • Platform as a Service (PaaS), which provides a platform for developing and delivering web applications, like Windows Azure.
  • Software as a Service (SaaS), which provides on-demand access to applications, like Skype or Gmail.
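To make the IaaS angle concrete, here is a hedged sketch using boto3 to request and then release a single virtual machine from Amazon Web Services; the AMI id is a placeholder and valid AWS credentials are assumed to be configured in the environment.

```python
# Hedged sketch of the IaaS model: request a raw virtual machine on demand
# from Amazon Web Services with boto3, then release it when no longer needed.
# The AMI id below is a placeholder, and AWS credentials are assumed to be
# configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-00000000000000000",  # placeholder image id
    InstanceType="t3.micro",          # small, pay-per-use instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# Capacity is not reserved upfront: when it is no longer needed, we simply
# terminate the instance and stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```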

At the simplest level, every cloud architecture has two principal components that we will see in greater detail in future posts:

  • A front-end, which is basically a remote interface, like Globus Nimbus.
  • A back-end, which behaves as a local manager, like OpenNebula (see the sketch below).
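Below is a very rough sketch of what talking to such a back-end can look like, using OpenNebula’s XML-RPC interface from Python’s standard library. The endpoint, the “user:password” session string, and the exact method names and signatures are assumptions that should be checked against the OpenNebula version actually deployed.

```python
# Very rough sketch: query a cloud back-end manager (here OpenNebula) through
# its remote XML-RPC interface. The endpoint and the "user:password" session
# string are placeholders, and method names/signatures should be verified
# against the deployed OpenNebula version.
import xmlrpc.client

ENDPOINT = "http://localhost:2633/RPC2"    # assumed default OpenNebula endpoint
SESSION = "oneadmin:placeholder-password"  # assumed session string format

server = xmlrpc.client.ServerProxy(ENDPOINT)

# Ask the back-end which version it is running. OpenNebula replies with an
# array whose first element indicates success.
reply = server.one.system.version(SESSION)
print("Back-end reachable:", reply[0], "version:", reply[1])
```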

In this article my intention was to give a high-level view of what cloud computing is and where it comes from. Many people are still unwilling to adopt this emerging paradigm, just as some people were reluctant to use cars at the beginning of the 20th century. In my opinion, when a new technology arises, people usually try to adapt it to what they already use, instead of trying to discover potential new use cases and functionality. In a nutshell, virtualization, cloud, and grid are complementary technologies that will coexist and cooperate at different levels of abstraction. One attraction of virtualization and the cloud for end users is that, from their point of view, nothing has to change. In the next posts I will explore the cloud concept in more detail, along with some relevant issues that are not yet solved, such as providing a security model for the cloud.

We need to explore new ways to teach Mathematics to high school students

It’s no secret that a popular software company offers its employees a percentage of their time to contribute to a project of their choice. I think most people, when asked “how would you spend that time?”, would answer: “I’d contribute to new project X,” or “I’d add feature Y to project Z”…

Those are all very good options that would impact millions of people around the world. But what if we want to impact future generations of users, or even more, the future of computing? I would use that time to improve education. Recent studies report that not many people are choosing Computer Science as a major nowadays, especially in the U.S. This is a serious problem that has caught the attention of many well-known people, like Bill Gates. Indeed, if we want to progress in this digital era, we need qualified and creative engineers who are prepared to invent and develop the “next big thing”. How could we generate more interest in Computer Science and, consequently, increase competitiveness? By changing the way Mathematics and, in general, the exact sciences are taught in high school.

I don’t know about every country and school in the world, but Mathematics in high school tends to be a subject about calculating things: one of the most prominent goals is that you can solve something like a complicated integral without the slightest error. You don’t even need to know what an integral is, or why integrals are important. Of course, calculating is also important and cannot be ignored, but computers can now do these kinds of computations in less time and without a single mistake. Moreover, computers are now able to solve the trickiest differential equations, but that does not change the fact that we don’t know how to model some important “real world” problems as differential equations.

That’s the point I’d like to make. We need to focus the teaching of Mathematics on problem-solving skills. Once we’ve modeled a problem, we can use a computer or a cloud computing service to get the exact answer. Problem-solving skills are not only essential for engineers; they are useful to professionals in general in their everyday activities. Consequently, I believe that with this change of mentality Engineering would appeal more to students, there would be fewer drop-outs after the first year, and there would be more opportunities to succeed in this field.

What do you think? Do you also think that the method of teaching Mathematics at pre-university level is becoming obsolete? How would you foster creativity and innovation to keep up with the world as it is now?