Developer Exchange Blog
Let’s look at the Wikipedia definition of cloud computing:
Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user’s data, software and computation.
There are many types of public cloud computing:
- Infrastructure as a service (IaaS)
- Platform as a service (PaaS)
- Software as a service (SaaS)
- Storage as a service (STaaS)
- Security as a service (SECaaS)
- Data as a service (DaaS)
- Business process as a service (BPaaS)
- Test environment as a service (TEaaS)
- Desktop as a service (DaaS)
- API as a service (APIaaS)
From a consumer’s perspective, the cloud means that your data is stored somewhere on the Internet. For example, Apple’s iCloud product keeps all your data in sync between computers and iOS devices, the iTunes Match service stores your music on Apple’s servers, and the same goes for the Google Music service.
To many IT professionals, the cloud is simply a way to offload services they currently run in house to a server outside their own infrastructure, managed to varying degrees by someone else. For example, many organizations have moved away from internal mail servers, migrating to Google for Business, where every aspect of the infrastructure is managed by Google. Another example is moving your internal Microsoft Exchange mail server to a hosting provider who manages it for you. The end result, again to varying degrees, is greater reliability and less burden on internal IT resources. JFrog’s Artifactory dependency-management product is offered both as a cloud service and as a server you can download and run yourself. If I purchase the cloud service, can I say that my dependency management is in the cloud, regardless of how JFrog implements it? The answer is yes, even though I don’t know exactly how ‘cloudy’ their implementation may be.
The True Cloud
While the cloud may mean different things to different people, preparing an application for a cloud architecture is not as simple as one might think. There are all kinds of decisions that need to be made and coding practices that should be adhered to. Typically, if you follow Java conventions, then you won’t have much of an issue with cloud deployments. Let’s look at an example Java web application and some of the concerns that need to be addressed prior to cloud deployment to a service such as Cloud Foundry.
- Your application must be bundled as a war file.
- Your application must not access the file system. You may utilize the class loader to load files from within the war, but don’t write to files.
- Resources/Services should be looked up via JNDI.
- If you cache, utilize caching solutions like EHCache that propagate automatically across instances.
- Persist data to a database such as MongoDB, MySQL, or PostgreSQL so that it remains available to every instance.
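The file-system rule above is worth illustrating. A minimal sketch of loading configuration from the classpath (inside the war) rather than from a disk path; the resource name `app.properties` is a hypothetical example:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ClasspathConfig {

    // Load a properties file bundled inside the war (e.g. under
    // WEB-INF/classes) via the class loader, instead of reading it
    // from an absolute path on the local file system.
    public static Properties load(String resourceName) throws IOException {
        Properties props = new Properties();
        try (InputStream in = ClasspathConfig.class
                .getClassLoader()
                .getResourceAsStream(resourceName)) {
            if (in != null) {
                props.load(in);
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // "app.properties" is a hypothetical resource name; a missing
        // resource simply yields an empty Properties object here.
        Properties props = load("app.properties");
        System.out.println("loaded " + props.size() + " properties");
    }
}
```

Because nothing is written to local disk, any instance of the application can be destroyed and recreated by the cloud infrastructure without losing state.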
Cloud Architecture: Dynamic Scaling vs. Auto-Scaling
One of the biggest advantages of a cloud architecture is the ability to scale up instances of your application and pay your provider only for what you use. There are two types of scaling. The first is auto-scaling, where a tool measures load and automatically increases instances of your application to accommodate demand. The second is dynamic scaling, where you control the scale based on load or a capacity-planning forecast. Auto-scaling depends on dynamic scaling. Dynamic scaling is the ability, through software commands, to dramatically scale up your runtime instances without manually configuring new servers and so forth; the cloud infrastructure knows how to expand the runtime environment. Auto-scaling simply automates the controls, deciding on its own how large or small the runtime environment should be, and then uses dynamic scaling to make the changes it desires.
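The division of labor can be sketched in a few lines of Java: the auto-scaling half is the decision (load in, desired instance count out), and the dynamic-scaling half is whatever provider API then applies it. The tuning numbers below (100 users per instance, a floor of 2 and a ceiling of 100 instances) are hypothetical:

```java
public class AutoScaler {
    // Hypothetical tuning values: target concurrent users per instance,
    // plus a hard floor and ceiling on the instance count.
    static final int USERS_PER_INSTANCE = 100;
    static final int MIN_INSTANCES = 2;
    static final int MAX_INSTANCES = 100;

    // The auto-scaling decision: translate measured load into a desired
    // instance count. Dynamic scaling then applies that count via the
    // provider's API.
    public static int desiredInstances(int concurrentUsers) {
        // Ceiling division: 101 users need 2 instances, not 1.
        int needed = (concurrentUsers + USERS_PER_INSTANCE - 1) / USERS_PER_INSTANCE;
        return Math.max(MIN_INSTANCES, Math.min(MAX_INSTANCES, needed));
    }

    public static void main(String[] args) {
        System.out.println(desiredInstances(1000));  // normal load  -> 10
        System.out.println(desiredInstances(10000)); // event load   -> 100
    }
}
```

Note that with these assumed numbers, a load of 1,000 users maps to 10 instances and 10,000 users to 100, matching the event scenario described below.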
An example where there’s a need for scaling would be a situation where you normally have a load of 1,000 concurrent users but have an upcoming event that is expected to increase your load to 10,000 users. In a traditional architecture, you would need to purchase servers and hardware to accommodate the worst-case scenario, leaving your infrastructure largely underutilized most of the time and causing inefficiency, waste, and a lot of added cost for a temporary surge in use. In a cloud scenario, you might run 10 instances to service your normal load; for the 24-hour period of your exceptional event, you would scale to 100 instances and then, when the smoke clears, return to 10 instances. All of this can be done with a single click of a button in a cloud infrastructure.
Some cloud providers offer automatic scaling in the form of a tool. Amazon provides such a tool that monitors load and lets you set triggers to determine how many instances to ramp up and when. Automatic scaling is very useful for handling the unforeseen, but it should not be used in lieu of true capacity planning. Auto-scaling can be dangerous: intelligence needs to be built in to determine the difference between junk or malicious activity and legitimate traffic. For example, think of what would happen if a DoS attack were performed against your application. Remember, you pay based on the number of servers you ramp up. A dumb auto-scaled solution could be driven to very expensive heights by a sophisticated (or maybe not so sophisticated) attack.
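One simple defense against a runaway ramp-up is to clamp every scaling request before it reaches the provider. A minimal sketch, with an assumed absolute ceiling and a per-step growth cap, so that even a flood of traffic can only double the bill per scaling step:

```java
public class ScaleGuard {
    // Hypothetical safeguards: an absolute ceiling plus a per-step growth
    // factor, so a traffic flood (e.g. a DoS attack) cannot ramp costs
    // without bound in a single decision.
    private final int maxInstances;
    private final double maxGrowthFactor;

    public ScaleGuard(int maxInstances, double maxGrowthFactor) {
        this.maxInstances = maxInstances;
        this.maxGrowthFactor = maxGrowthFactor;
    }

    // Clamp a requested instance count before passing it to the scaling API.
    public int approve(int current, int requested) {
        int growthCap = (int) Math.ceil(current * maxGrowthFactor);
        return Math.min(requested, Math.min(maxInstances, growthCap));
    }

    public static void main(String[] args) {
        ScaleGuard guard = new ScaleGuard(100, 2.0);
        // A spike asking for 500 instances from a base of 10 is held to 20;
        // sustained legitimate growth still reaches the ceiling over
        // several steps.
        System.out.println(guard.approve(10, 500));
    }
}
```

This is no substitute for distinguishing legitimate from malicious traffic, but it bounds the worst-case cost while you investigate.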
Under the hood, scaling works by issuing commands to the server through an API. For example, with Cloud Foundry, a vmc command will let you monitor load and add instances. These commands, in turn, create JSON-formatted requests that are sent to the server to give it instructions. You can also use third-party tools that interface in the same fashion, or you can build scaling intelligence directly into your own application by hitting the server API yourself. Using the latter technique, your application can control itself, making it very intelligent.
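The command-to-JSON pattern can be sketched as follows. The payload shape here is a hypothetical illustration, not the actual Cloud Foundry wire format; a real client would POST this body to the provider’s endpoint (for example with `java.net.HttpURLConnection`):

```java
public class ScalingRequest {
    // Build the JSON body for a hypothetical "set instance count" call.
    // The real Cloud Foundry payload differs; this only illustrates how a
    // CLI command or an application translates a scaling decision into a
    // JSON request for the server.
    public static String instancesPayload(String appName, int instances) {
        return String.format("{\"name\":\"%s\",\"instances\":%d}",
                appName, instances);
    }

    public static void main(String[] args) {
        // An application scaling itself would send this body to the
        // provider's HTTP API.
        System.out.println(instancesPayload("myapp", 100));
    }
}
```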
Regardless of how you see the cloud and how you utilize it, the endgame for your organization should be to offload the burden from your internal staff, decrease your expenses, provide greater uptime and flexibility and give you the ability to scale dynamically. That's what the Cloud is really all about.