Cloud Computing And The Birth Of OpenStack
A little history…
In the 1970s, at the origin of the first enterprise information systems, the dominant technology was the mainframe, a building block considered the core of the system, holding all of the business logic and functionality the company needed. The 1980s brought the client-server model, a first step toward the fragmentation of the IS: clients were no longer simple terminals but real PCs with their own integrated software. The same client could therefore interact with different servers, each providing business data within a defined perimeter.
In parallel, in the 1990s, it was not just the enterprise that underwent a revolution but the whole world: the Internet was born, bringing with it standards that have since become unavoidable, namely TCP/IP and the HTTP protocol.
In the 2000s, internet technologies came to dominate system architectures through the concept of SOA (Service-Oriented Architecture), built mainly on web services (HTTP/SOAP/XML) and asynchronous exchanges (queues). At this point in history, the information system decoupled hardware from business logic and broke down into "services", each responsible for a business domain. A service can be accessed, through the interface it exposes, by any application via loose or nonexistent coupling (stateless). This evolution led to components dedicated to concentrating and managing the exchanges between all these services, called ESBs (Enterprise Service Bus), in order to manage the complexity of the overall system.
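As a minimal sketch of the stateless principle described above (the service name and payload fields are invented for the example), a service handler derives its entire response from the request alone and holds no session state between calls:

```python
import json

def handle_order_quote(request_body: str) -> str:
    """Stateless handler: the response depends only on the request payload.

    No session or server-side state is consulted, so any instance of the
    service can answer any call -- the loose coupling described above.
    """
    order = json.loads(request_body)
    total = sum(item["unit_price"] * item["qty"] for item in order["items"])
    return json.dumps({"order_id": order["order_id"], "total": total})

# Two identical requests always yield identical responses.
request = json.dumps({
    "order_id": "A-42",
    "items": [{"unit_price": 10.0, "qty": 3}, {"unit_price": 2.5, "qty": 4}],
})
print(handle_order_quote(request))  # {"order_id": "A-42", "total": 40.0}
```

Because nothing is stored between calls, such a service can be replicated freely behind a load balancer, which is exactly what makes the later cloud model possible.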
In parallel with this flourishing era of internet technologies, another major revolution was developing on the hardware side: virtualization. It makes it possible to segment a physical server into a multitude of logical servers, each with its own software stack from the operating system up to the installed applications. This made it possible to recycle under-used servers deemed "obsolete" and to rationalize system resources, moving away from the very expensive and then-dominant model of one application = one physical server.
The cloud is the culmination of the meeting between "service" technologies and virtualization: it is a model of on-demand network access to shared, configurable computing resources. These resources can be provisioned quickly and put into operation without interaction with the service provider, all managed through a subscription model based on the user's needs.
The different cloud models
Cloud computing, like every significant development in computing, grew rapidly, sometimes to the detriment of security, which remains a limiting factor in the adoption of any new technology. Companies were understandably wary of offers that amounted, no more and no less, to storing their data outside their private IS, regardless of its degree of sensitivity. To address this major constraint, the market responded mainly by declining the cloud computing business model into four main categories, so that each company, depending on its own characteristics and expectations, can find a suitable offer and make a smooth transition.
This gives us the following categories:
Public cloud
In this model, a provider owns an infrastructure that it segments and rents out to different customers. This solution is by far the most economical given its on-demand operation: the customer bears neither the acquisition nor the operation of the equipment. Nevertheless, it implies moving all or part of a company's IT activity outside its IS, where control over the data remains largely theoretical. Each client's data is stored within the same infrastructure, with access managed through authentication and accreditations. Still, any CIO must ask what happens to the data if the provider ceases its activity, or in case of an attack and data theft. Accepting this type of service is therefore a careful trade-off between profitability, flexibility, and security.
Private cloud
This model is the antithesis of the public cloud given its private, not to say isolated, nature: it is not intended to be shared with other customers. A private cloud can be hosted either within the company or at a provider that supplies a complete dedicated infrastructure for that business. The benefits are overall control of the data, network capabilities, infrastructure, and security, all tailor-made. Indeed, a private cloud is sized for the specific needs of the company, which is not forced to adapt to existing offers. Many government entities are increasingly adopting this model, with the goal of retaining data sovereignty.
Hybrid cloud
This model follows from the advantages and disadvantages stated above for public and private clouds: it was designed to spread a company's information system between the two models, according to the sensitivity of the data.
Community cloud
This model is the most recent. It is used by several organizations that share similar needs, for example to host generic applications with the specificities expected by each member of the group. Each entity participates in the effort and invests in the structure of the shared cloud.
OpenStack: an open-source cloud
OpenStack was born from the initiative to give businesses the opportunity to set up their own private cloud using a free, modular, and scalable ecosystem.
However, public cloud providers can also rely on this solution to diversify their catalog of offers. The project, originally born from a collaboration between NASA and Rackspace, is supported by the OpenStack Foundation, a non-commercial organization whose purpose is to promote the OpenStack project and maintain its overall coherence. OpenStack is licensed under the Apache license and is one of the largest open-source projects in the world. All major market players have joined the foundation to ensure consistency in its choices and directions; this is why companies such as Dell, HP, IBM, and SUSE are members. From a software point of view, OpenStack's goal is to abstract away the infrastructure layer and provide a standard solution regardless of the available hardware.
The overall solution is an assembly of independent, specialized modules, each fulfilling a specific function such as networking, the hypervisor, or storage. Given the goal of making the software layer the sole holder of the intelligence, OpenStack implements a set of innovative approaches, such as Software-Defined Storage to manage block and object storage, and Software-Defined Networking to manage virtual networks through a control layer and NFV (Network Function Virtualization) concepts.
A future series of articles will delve into these different themes. The foundation and the developer community are very active: releases come at a biannual pace, which demands a certain flexibility from companies wishing to implement OpenStack.
OpenStack: a modular approach
In more practical terms, OpenStack is a framework containing a large number of independent services that, once assembled, make it possible to build a cloud. In operation, an OpenStack infrastructure is a service architecture based on a set of synchronous exchanges (REST) and asynchronous exchanges (queues). Below is a first overview of the OpenStack architecture, with its major functions.
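As a small illustration of the synchronous REST exchanges mentioned above, the sketch below builds the JSON body a client sends to the OpenStack identity service (Keystone, API v3) to obtain a token, which is the first call in almost any OpenStack interaction. The endpoint URL and credentials are placeholders, not a real deployment:

```python
import json

# Placeholder Keystone endpoint; a real deployment exposes its own URL.
KEYSTONE_URL = "http://controller:5000/v3/auth/tokens"

def build_auth_request(username: str, password: str, domain: str = "Default") -> dict:
    """Build the JSON body for POST /v3/auth/tokens (password method)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            }
        }
    }

body = build_auth_request("demo", "secret")
print(json.dumps(body, indent=2))
# A real client would then POST this body to KEYSTONE_URL and read the
# issued token from the X-Subject-Token response header.
```

The other OpenStack services (compute, storage, networking) expose the same kind of REST interface, while longer-running internal work is dispatched between their components over a message queue.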