Saturday, July 27, 2013

EXCHANGE RATES

What do ‘real’ numbers mean?
The word ‘real’ in economics — as opposed to ‘nominal’ — is used to describe a metric in which the impact of price changes has been taken into account. For example, real GDP captures output of goods and services at constant prices, removing the effect of inflation.
What is the real exchange rate?
The real exchange rate can be defined as the exchange rate that takes into account the inflation differential between two countries. Suppose the rupee was trading at Rs 40 to a dollar at the beginning of 2009. Assuming 10% inflation in the Indian economy and 5% inflation in the US economy for the whole year, this model says the rupee should depreciate by 5% (10% minus 5%) to Rs 42 to a dollar, other things being equal.
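As a rough illustration of this arithmetic, here is a minimal sketch using the example's own figures (Rs 40 to a dollar, 10% and 5% inflation); the function name is purely illustrative.

```python
# Minimal sketch of the inflation-differential adjustment described above.
# The figures (Rs 40/$, 10% and 5% inflation) are the example's own; the
# function name is purely illustrative.

def expected_rate(spot, domestic_inflation, foreign_inflation):
    """Depreciate the domestic currency by the inflation differential."""
    differential = domestic_inflation - foreign_inflation
    return spot * (1 + differential)

print(expected_rate(40.0, 0.10, 0.05))  # -> 42.0, i.e. Rs 42 to a dollar
```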
Why is the real exchange rate important?
Competitiveness of a country’s exports is decided not only by the nominal exchange rate, but also by relative price movements in domestic and foreign markets. For instance, even if the nominal exchange rate of the rupee remains unchanged with respect to, say, the dollar, India’s exports to the US will become less competitive if inflation in India is higher than in the US. This means the nominal exchange rate has to be adjusted for the effect of inflation.
How is the nominal exchange rate adjusted for inflation?
Central banks use the concept of the ‘real effective exchange rate’, or REER, to adjust the nominal effective exchange rate for inflation. Conceptually, the REER is the weighted average of nominal exchange rates adjusted for the price differential between the domestic and foreign countries. The price differential, however, is based on the purchasing power concept. The currencies used are those of the countries with which trade is highest.
How does the RBI calculate REER?
The RBI calculates REER for India. It calculates the value of the rupee with respect to two indices, one comprising six countries and the other 36 countries, with a 2004-05 base. The RBI, however, uses wholesale price index-based inflation, whereas consumer price indices are used globally. One conceptual flaw with this model is that it assumes the base exchange rate is the correct exchange rate, or that it represents purchasing power parities accurately, which may not be the case.
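To make the "weighted average of nominal exchange rates adjusted for price differentials" idea concrete, here is a simplified, arithmetically weighted sketch; the trade weights, indices and partner currencies below are invented for illustration and are not the RBI's actual series (real REER indices typically use geometric weighting).

```python
# Illustrative REER calculation: a trade-weighted average of nominal
# exchange-rate indices, each adjusted by the ratio of domestic to
# foreign price indices. All numbers are invented for illustration.

def reer(components):
    """components: list of (trade_weight, nominal_rate_index,
    domestic_price_index, foreign_price_index) tuples."""
    total = 0.0
    for weight, nominal, p_dom, p_for in components:
        total += weight * nominal * (p_dom / p_for)
    return total

basket = [
    (0.5, 100.0, 110.0, 105.0),  # partner A
    (0.3, 98.0, 110.0, 102.0),   # partner B
    (0.2, 103.0, 110.0, 108.0),  # partner C
]
print(round(reer(basket), 1))
```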


BASE RATE

What is base rate?
It is the minimum lending rate that banks can charge their customers from July 1, 2010. Until now, all lending rates were pegged to a bank’s prime lending rate (PLR). Under the existing system, banks charge customers an interest rate either above or below the PLR; the PLR thus works as an anchor rate. From July 1, the base rate will not only replace the PLR as the benchmark, it will also be the new floor rate below which no bank can lend. India’s largest bank, the State Bank of India, has indicated that it plans to peg its base rate in the range of 7.5-8%.
What will happen to loans linked to PLR?
Outstanding loans that are linked to PLR will continue to exist alongside the new loans linked to the base rate. As the old loans get repaid or the contract comes up for renewal, the base rate will become the sole benchmark.
From July 1, on all new loans, banks will charge customers at the base rate or above, depending on the borrower's rating and relationship. As and when a loan contract comes up for renewal, banks will link the interest rate to the base rate. Existing customers will also get the choice to migrate to the base rate, and will not be charged any penalty if they wish to migrate from the PLR to the base rate before the contract is due for renewal.
Why is base rate being introduced?
It is aimed at bringing more transparency to the lending market. As of now, prime customers bargain for rates below the PLR, while average to risky customers are charged at or above the PLR. About 70% of loans given by banks are at rates below the PLR; some banks have lent at 6% when their PLR is 13%. As a result, the customer who bargains hardest gets the best rate. Under the base rate system, no bank will be able to lend below its base rate, making lending rates comparable.
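The floor mechanics can be sketched in a few lines; the rates and spreads below are hypothetical, not any bank's actual pricing.

```python
# Under the PLR system a prime borrower could be quoted well below the
# anchor rate; under the base rate system the anchor is also a floor.
# All figures are hypothetical.

def plr_quote(plr, discount_or_premium):
    return plr + discount_or_premium           # can fall below the PLR

def base_rate_quote(base_rate, spread):
    return max(base_rate, base_rate + spread)  # never below the base rate

print(plr_quote(13.0, -7.0))        # 6.0%  - possible under the PLR system
print(base_rate_quote(8.0, -1.0))   # 8.0%  - clamped to the floor
print(base_rate_quote(8.0, 2.5))    # 10.5%
```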
What will happen to home loan customers?
Home loans are long-term contracts and thus do not come up for renewal like most corporate loans. Therefore, banks may give all existing customers a choice to move to the base rate. There are also home loans where the interest rate is fixed for the initial years and floating thereafter; in those subsequent years, the interest rate is set a few basis points below the PLR once the loan moves to the floating-rate regime.
In such cases, the base rate will not automatically replace the PLR. If the loan document says, for instance, that after three years the home loan will be 300 bps below the PLR, it does not mean the loan will be 300 bps below the base rate once the base rate regime comes into being, because no loan can be priced below the base rate.


Monday, July 1, 2013

CLOUD COMPUTING

Cloud computing is a colloquial expression used to describe a variety of different computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet). Cloud computing is a jargon term without a commonly accepted, non-ambiguous scientific or technical definition. In science, cloud computing is a synonym for distributed computing over a network and means the ability to run a program on many connected computers at the same time. The popularity of the term can be attributed to its use in marketing to sell hosted services, in the sense of application service provisioning, that run client-server software at a remote location.

Cloud computing logical diagram


Advantages

Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network.[2] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated according to demand. This can work for allocating resources to users in different time zones: for example, a cloud facility that serves European users with a specific application (e.g. email) during European business hours can reallocate the same resources to serve North American users with another application (e.g. a web server) during North American business hours. This approach should maximize the use of computing power and thus also reduce environmental damage, since less power, air conditioning, rack space, and so on is required for the same functions.
The term 'moving to the cloud' also refers to an organization moving away from a traditional capex model (buy the dedicated hardware and depreciate it over a period of time) to an opex model (use a shared cloud infrastructure and pay as you use it).
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of infrastructure.[3] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[3][4][5]

Hosted services

In marketing, cloud computing is mostly used to sell hosted services, in the sense of application service provisioning, that run client-server software at a remote location. Such services are given popular acronyms like 'SaaS' (Software as a Service) and 'PaaS' (Platform as a Service). End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and the user's data are stored on servers at a remote location.

History

The 1950s

The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe computers became available in academia and corporations, accessible via thin clients/terminal computers, often referred to as "dumb terminals", because they were used for communications but had no internal computational capacities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both the physical access to the computer from multiple terminals as well as to share the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing.[6]

The 1960s–1990s

John McCarthy opined in the 1960s that "computation may someday be organized as a public utility."[7] Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.[8] Due to the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time-sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN), marketed time-sharing as a commercial venture.

The 1990s

In the 1990s, telecommunications companies, which had previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure.[9]
As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform and applications with prioritized access to the CPU and efficiency for the end users.[10]

Since 2000

After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing its data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements, whereby small, fast-moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.[11][12]
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[13] In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment.[14] By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them"[15] and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."[16]
On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.[17] Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.

Growth and popularity

The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web".[18] This contribution focused in particular on the need for better metadata able to describe not only implementation details but also conceptual details of model-based applications.
The ubiquitous availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, have led to a growth in cloud computing.[19][20][21]
Financials

Cloud vendors are experiencing growth rates of 90% per annum.[22]

Origin of the term

The origin of the term cloud computing is unclear. The expression cloud is commonly used in science to describe a large agglomeration of objects that visually appear from a distance as a cloud and describes any set of things whose details are not inspected further in a given context.
  • Meteorology: a weather cloud is an agglomeration;
  • Mathematics: a large number of points in a coordinate system is seen as a point cloud;
  • Astronomy: many stars that crowd together are seen as star clouds (also known as star mist) in the sky, e.g. the Milky Way;
  • Physics: the indeterminate positions of electrons around an atomic nucleus appear like a cloud to a distant observer;
  • Video games: "The Cloud" was what followed Mario characters around, allowing them to store and access extra items.
By analogy to the above usage, the word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. The cloud symbol was used to represent the Internet as early as 1994.[23][24] Servers were then shown connected to, but external to, the cloud symbol.
Urban legends claim that usage of the expression is directly derived from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems.
The term became popular after Amazon.com introduced the Elastic Compute Cloud in 2006.

Similar systems and concepts

Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge about or expertise with each one of them. The cloud aims to cut costs and help users focus on their core business instead of being impeded by IT obstacles.[25]
The main enabling technology for cloud computing is virtualization. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On the other hand, autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.[25]
Users face difficult business problems every day. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.
Cloud computing also leverages concepts from utility computing in order to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery.
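A pay-per-use bill is essentially a sum over metered usage. Here is a minimal sketch of that idea; the metrics, unit prices and usage figures are invented for illustration.

```python
# Toy pay-per-use metering: multiply each metered quantity by its unit
# price and sum. Prices and usage below are invented for illustration.

usage = {"cpu_hours": 120.0, "storage_gb_month": 50.0, "egress_gb": 10.0}
unit_price = {"cpu_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

bill = sum(usage[k] * unit_price[k] for k in usage)
print(f"monthly charge: ${bill:.2f}")
```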
Cloud computing is a kind of grid computing; it has evolved from grid computing by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data/compute intensive parallel applications with much more affordable prices compared to traditional parallel computing techniques.[25]
Cloud computing shares characteristics with:
  • Client–server model — Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).[26]
  • Grid computing — "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
  • Mainframe computer — Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.[27]
  • Utility computing — The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[28][29]
  • Peer-to-peer means distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
  • Cloud gaming—also known as on-demand gaming—is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of client computers used to play the game.

Characteristics

Cloud computing exhibits the following key characteristics:
  • Agility improves with users' ability to re-provision technological infrastructure resources.
  • Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs; a minimal illustrative call is sketched after this list.
  • Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure.[30] This is purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).[31] The e-FISCAL project's state of the art repository[32] contains several articles looking into cost aspects in more detail, most of them concluding that costs savings depend on the type of activities supported and the type of infrastructure available in-house.
  • Device and location independence[33] enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.[31]
  • Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.
  • Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilisation and efficiency improvements for systems that are often only 10–20% utilised.[11][34]
  • Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.[35]
  • Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time,[36][37] without users having to engineer for peak loads.[38][39][40]
  • Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.[31]
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels.[41] Security is often as good as or better than in other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford.[42] However, the complexity of security is greatly increased when data is distributed over a wider area or a greater number of devices, and in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
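As an illustration of the REST-style access mentioned in the API item above, here is a hypothetical call to a cloud provider's API; the endpoint, token and JSON field names are invented and do not correspond to any particular provider.

```python
# Hypothetical REST calls to a cloud provider's API: list virtual machines
# and start one. The endpoint, token and field names are invented.
import requests

API = "https://cloud.example.com/v1"
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/json"}

resp = requests.get(f"{API}/instances", headers=HEADERS)
resp.raise_for_status()
for vm in resp.json()["instances"]:
    print(vm["id"], vm["state"])

# Start a specific (hypothetical) instance.
requests.post(f"{API}/instances/vm-123/start", headers=HEADERS)
```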
The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. ...
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
—National Institute of Standards and Technology[2]

On-demand self-service

On-demand self-service allows users to obtain, configure and deploy cloud services themselves using cloud service catalogues, without requiring the assistance of IT.[43][44] This feature is listed by the National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.[2]
The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Makers of such templates or blueprints include BMC Software (BMC), with Service Blueprints as part of its cloud management platform;[45] Hewlett-Packard (HP), which names its templates HP Cloud Maps;[46] RightScale;[47] and Red Hat, which names its templates CloudForms.[48]
The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds.[47] Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on.[47] The templates also include the predefined Web services, operating system, database, security configurations and load balancing.[48]
Cloud computing consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments. For example, a template could define how the same application could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat.[49] The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with the push of a button.[50][51] Cloud templates can also be used by developers to create a catalog of cloud services.[52]
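A blueprint of this kind is essentially structured configuration data. The hypothetical, much-simplified template below illustrates the idea; the field names and values are invented and do not reflect any vendor's actual blueprint format.

```python
# Hypothetical, much-simplified cloud template. Field names and values are
# invented; real vendor blueprints (HP Cloud Maps, CloudForms, etc.) use
# their own richer formats.
web_app_blueprint = {
    "name": "three-tier-web-app",
    "servers": [
        {"role": "web", "count": 2, "os": "linux", "software": ["nginx"]},
        {"role": "app", "count": 2, "os": "linux", "software": ["tomcat"]},
        {"role": "db",  "count": 1, "os": "linux", "software": ["postgresql"]},
    ],
    "load_balancer": {"listen": 443, "forward_to": "web"},
    "security": {"open_ports": [443], "admin_access": "vpn-only"},
}
```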

Service models

Cloud computing providers offer their services according to several fundamental models:[2][53] infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) where IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key components in XaaS are described in a comprehensive taxonomy model published in 2009,[54] such as Strategy-as-a-Service, Collaboration-as-a-Service, Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by ITU (International Telecommunication Union) as part of the basic cloud computing models, recognized service categories of a telecommunication-centric cloud ecosystem.[55]
Cloud computing layers diagram

Infrastructure as a service (IaaS)

In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support-system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[56] IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
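To make the "user chooses an operating-system image and launches a machine" step concrete, here is a minimal provisioning sketch using the AWS boto3 SDK; the AMI ID, key pair and instance type are placeholders, and any IaaS provider's own API would serve equally well.

```python
# Minimal IaaS provisioning sketch with boto3 (AWS). The AMI ID, key pair
# and instance type are placeholders; credentials are assumed to be
# configured in the environment.
import boto3

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # operating-system image chosen by the user
    InstanceType="t2.micro",           # size of the virtual machine
    KeyName="my-keypair",              # SSH key pair for access
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)
```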
Spending on cloud services is expected to show the largest increase in the IT marketplace, with North Africa and the Middle East having growth of over 20% through 2016, according to analysts at Gartner. The first cloud service in the United Arab Emirates for SMBs and enterprises was announced in June 2013, when Etisalat, the leading telecom operator in the Middle East and Africa, launched its first cloud service in the UAE. The IaaS cloud model was believed to reduce IT costs by up to 60% and speed time to market by up to 90%.
Cloud communications and cloud telephony, rather than replacing local computing infrastructure, replace local telecommunications infrastructure with Voice over IP and other off-site Internet services.

Platform as a service (PaaS)