Sunday, 7 February 2016

Understanding Email Archiving


Email Archiving


The pervasiveness of email in conducting business continues to fuel email growth. However, today's businesses must be able to manage email growth while satisfying requirements for email availability, retention, and content-oriented retrieval. Content-oriented retrieval is of particular relevance in regulatory audits or electronic discovery, in which emails associated with a particular keyword or relevant to a specific topic are frequently sought. Email archiving applications and solutions address both of these technical and business requirements.

An email archiving system is simply a stand-alone application that works with your email server to help you manage email. For example, email archiving captures and preserves all email traffic flowing in and out of the email server. It processes the email data and stores it on magnetic disk storage. When the need arises to search historical email for internal investigations or for a court-ordered legal discovery, you can search thousands of email records in seconds by using search tools embedded in the email archiving system.

Below are a few signs that can help an email admin decide whether an email archiving solution is required:

  1. Users are constantly asking for more mailbox capacity.
  2. HR is calling you to search email and you are in a panic.
  3. A major law suit is pending and email records are part of it.
  4. Your email server is full and you need to add five new employees.
  5. Industry regulations require that you preserve email for five years.
  6. Email is being lost in local PST files.
  7. Attachment files are filling your email server.
  8. Your email server just plain runs slow.

In-House and Hosted Email Archiving Solutions

In-house email archiving solutions are installed on a dedicated server in the same data centre as the email server. Users access the email archive using their email client via the existing company network. Sensitive corporate data is kept and controlled by your employees in your facility.
The archived email can be searched as many times as needed for discovery. In-house email archival solutions are customizable and are available from multiple vendors to support the leading email applications. Email archiving software is typically licensed by the number of mailboxes, and additional fees apply for support and maintenance. Long-term costs can be noticeably lower than hosted solutions.

Hosted Solution

Hosted email archiving solutions are an option for customers who don't wish to install their own email archiving solution in-house, for whatever reason. For what's usually a monthly fee, a number of mailboxes can be archived off-site with no on-site setup required. Because the email is captured while in transit, all email applications are supported. With a hosted email archiving solution, you need not worry about managing the email archival servers, and you can access archived email via a web-based search application provided by the hosted service provider. This solution usually has lower start-up costs, but service costs can be high over the long term.

To reduce the amount of email data on the email application server, email archival solutions remove email and attachments using policies based on age and size; it is that simple. Once an email is removed, it is stored safely in the archive and replaced with a small stub file (or shortcut).
Users access archived email in the same manner they access normal email: they simply double-click the shortcut in the email client. The email and attachments in the archive can also be searched for legal discovery and compliance.
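To make the age- and size-based stubbing policy concrete, here is a minimal Python sketch. The message dictionary, thresholds, and archive ID are hypothetical; real products implement this logic inside the mail server and let the administrator tune the policy:

```python
from datetime import datetime, timedelta

# Hypothetical policy thresholds; real products let the admin tune these.
MAX_AGE = timedelta(days=90)      # archive anything older than 90 days
MAX_SIZE = 5 * 1024 * 1024        # archive anything larger than 5 MB

def should_archive(message, now=None):
    """Apply the age- and size-based policy described above."""
    now = now or datetime.utcnow()
    too_old = (now - message["received"]) > MAX_AGE
    too_big = message["size_bytes"] > MAX_SIZE
    return too_old or too_big

def stub_for(message, archive_id):
    """Build the small stub (shortcut) left behind in the mailbox; the
    archive_id points back at the full message stored in the archive."""
    return {
        "subject": message["subject"],
        "received": message["received"],
        "archive_id": archive_id,
        "is_stub": True,
    }
```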

Storing Email in the Archive

Single Instance Storage
Single Instance Storage (SIS) is a process that email archival solutions use to help reduce total archive storage. The basic process is simple: the archive stores only one copy of each original email and attachment, so the same mail sent to or from many users is not duplicated in the archive. SIS can help reduce total archive storage by as much as 40%.
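A minimal sketch of the idea, using a content hash as the single-instance key. Vendors differ in how they fingerprint content; SHA-256 here is just an illustrative choice:

```python
import hashlib

class SingleInstanceStore:
    """Toy illustration of Single Instance Storage: each unique email or
    attachment body is stored once, keyed by its content hash, no matter
    how many mailboxes reference it."""

    def __init__(self):
        self._blobs = {}   # content hash -> raw bytes, stored exactly once
        self._refs = {}    # content hash -> reference count

    def store(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if digest not in self._blobs:
            self._blobs[digest] = content          # first copy: store it
        self._refs[digest] = self._refs.get(digest, 0) + 1
        return digest                              # mailboxes keep only this key

store = SingleInstanceStore()
key1 = store.store(b"quarterly report attachment")   # stored
key2 = store.store(b"quarterly report attachment")   # deduplicated, not stored again
assert key1 == key2 and len(store._blobs) == 1
```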

WORM Disks
WORM stands for "write once, read many". Are you wondering what this has to do with email archiving? Well, if your organization operates under the guidance of the Securities and Exchange Commission, then you may already know the answer. In the financial services industry, Rule 17a-4 specifies that all email transactions be stored on non-erasable, non-rewritable WORM storage; it is that simple.

Long-Term Archival
How long must email be kept in the archive? Well, it depends. In the healthcare industry, email information that is part of the patient record must be retained for the life of the patient. In the financial services industry, email must be kept for up to seven years. Every industry is different.

Email archival solutions manage long-term email storage using admin-defined retention and disposition policies. Using the policies as a guide, email archival solutions store the email in the archive and protect it against unauthorized access or tampering. When the retention period expires, the email archival system disposes of the email record completely.

Look for email archival solutions that save a digital signature for each email. Years later, the archived email can be compared to its original signature to verify authenticity.
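As a sketch of how such signatures can work, here is one simple approach using an HMAC over the raw message. This is only an illustration, not any particular vendor's scheme, and a real archive would keep the signing key somewhere far safer than a constant:

```python
import hashlib
import hmac

SIGNING_KEY = b"archive-secret-key"   # hypothetical; protect the real key carefully

def sign_email(raw_email: bytes) -> str:
    """Compute a signature at ingest time and store it alongside the email."""
    return hmac.new(SIGNING_KEY, raw_email, hashlib.sha256).hexdigest()

def verify_email(raw_email: bytes, stored_signature: str) -> bool:
    """Years later, recompute the signature and compare it to the stored one
    to prove the archived email has not been tampered with."""
    return hmac.compare_digest(sign_email(raw_email), stored_signature)
```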

Finding Email in the Archive

Finding one file among millions of files will make your eyes red, so your email archiving solution must have a good, fast search capability.

When everything shakes out, an email archiving solution should offer two main functions:

  1. The ability to capture email and other Exchange objects in a secure archive.
  2. The ability to quickly and easily find information within the archive.

Having both a quick and simple search capability for end-users and powerful, comprehensive search capabilities for IT and legal is an absolute must for the solution to be useful.

The email archiving system should automatically keep access and audit records of who has accessed the archive as an administrator or auditor and exactly what material they accessed. This audit capability protects all parties from misuse of the archive.

Sometimes more granular searches are required. You may need to consider the point in time at which the mailbox needs to be accessed: do you need to see the mailbox as it was six months ago, four months ago, two weeks ago, or yesterday? Being able to look at Exchange objects at different points in time can give you a lot of interesting information, like when a message was received, when it was deleted, what mailbox folders it resided in, and so on.
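One way to picture point-in-time access is as an event log that the archive replays. The event records below are hypothetical; actual products keep far richer metadata:

```python
from datetime import datetime

# Hypothetical event log for one message: what happened to it, and when.
events = [
    {"time": datetime(2015, 8, 1),  "action": "received", "folder": "Inbox"},
    {"time": datetime(2015, 9, 15), "action": "moved",    "folder": "Projects"},
    {"time": datetime(2015, 12, 2), "action": "deleted",  "folder": None},
]

def state_as_of(events, point_in_time):
    """Replay the event log up to a point in time to answer questions like
    'what folder did this message live in six months ago?'."""
    state = {"exists": False, "folder": None}
    for event in sorted(events, key=lambda e: e["time"]):
        if event["time"] > point_in_time:
            break
        state["exists"] = event["action"] != "deleted"
        state["folder"] = event["folder"]
    return state

print(state_as_of(events, datetime(2015, 10, 1)))  # {'exists': True, 'folder': 'Projects'}
```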

Retention and Disposition of Email in the Archive

Electronic storage isn't as costly as a physical warehouse, but you need to clean each one out eventually to make room for new records. It's just too expensive in time and money to keep everything. That's why it's important to decide what email will be archived and how long it will be kept.

For an effective email retention policy, you need to distribute a written policy to all employees. This policy should include several of the topics shown below.

Effective date: This lets you know whether the policy is currently in effect or is an old version that should be discarded.

Last change date and changes made: This information confirms the policy's authenticity and appropriateness, because regulations change over time.

Person or department responsible for the policy: This gives employees or their managers someone to contact with questions.

Scope/Coverage: This includes the geographic limit of the policy (if any), affected departments and offices, and a definition of what company information is covered.

Purpose of the policy statement: This can include a company philosophy statement about the business/legal/regulatory reasons for records retention.

Definitions: Defines what constitutes business records and applicable exceptions.

Responsibilities:
  1. Business units/subsidiaries and special departments (such as the legal department).
  2. General employees.
  3. Records retention coordinators.
  4. Procedures for retention and deletion of email and attachments (if no automated email archiving system is employed).
  5. How the emails should be stored (usually in PSTs).
  6. Where those PSTs should be stored, such as a network storage target/share drive.
  7. How often those files should be cleaned out.
  8. How duplicate copies/convenience copies are treated.


A manually managed email retention policy relies on employees understanding and following the email retention policy. The obvious problem is that each employee will interpret the policy a little differently, so in reality you will have many different email retention policies. This fact is the main reason you need to adopt an email archiving solution.

Email Retention Policy Benefits

  1. The first is regulatory compliance. Email retention for regulatory compliance isn't a choice, but rather an absolute requirement.
  2. The second benefit addresses legal risk management. 
  3. The last obvious benefit relates to document retention for corporate governance.

Disposition of email from the email archive
Based on the retention time period governed by the email archiving solution, records such as email, attachments, and other objects will be automatically deleted from the email archive if no litigation hold process is in place. Some archiving solutions automatically hold email that has reached the end of its retention period until someone, usually IT, approves the deletion. As you can imagine, in a large system with millions or billions of emails, this isn't an effective policy.
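The interplay between retention and litigation hold can be captured in a few lines. A minimal sketch, assuming a seven-year retention period and a simple set of record IDs under hold:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=7 * 365)   # e.g., a seven-year retention policy

def disposition_pass(archive_records, legal_holds, now=None):
    """Dispose of records whose retention period has expired, but never
    touch anything that is under a litigation hold."""
    now = now or datetime.utcnow()
    to_delete = []
    for record in archive_records:
        if record["id"] in legal_holds:
            continue                        # litigation hold overrides retention
        if now - record["archived_at"] > RETENTION:
            to_delete.append(record["id"])  # a real system would purge the data
    return to_delete
```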

eDiscovery and Email Archiving

What is eDiscovery: eDiscovery (or electronic discovery) refers to the legal process where electronic data is sought, found, reviewed, and produced with the intent of supporting or proving a case by the opposing legal team. All electronic data is subject to the discovery process, which means that email server message stores, email server backup tapes, network share drives, PDAs, cell phones, USB thumb drives, removable media such as CDs or DVDs, iPods, digital cameras, and even employees' personal email accounts are subject to legal eDiscovery. In all cases, the judge is the final decision maker as to what data is discoverable.

The Litigation Hold: A litigation hold (or stop destruction notice) is the systematic process, including communication to affected employees, of stopping the routine records retention/destruction procedures for records that may be required in an upcoming legal proceeding. A litigation hold applies to all responsive records, documents, files, and emails (both electronic and hardcopy) that exist. In many companies, the email system is the largest repository of responsive records due to the nature of corporate email. Email messages can include business instructions, legal contracts, financial data, presentations, and unguarded opinions about many business-related activities, and email is therefore a major target of discovery in litigation.

Another driver for email system eDiscovery is internal or corporate governance. Corporate governance includes conducting internal investigations for all sorts of reasons, including:

  1. Internal investigations for criminal activity or sexual harassment.
  2. Enforcing corporate policies.
  3. Tracking intellectual property leakage.

A Few Questions to Ask Yourself Before Shopping for an Email Archiving Solution

It's always good to ask questions to improve your knowledge and understanding. Always understand why your company is contemplating purchasing an email archiving solution; this will help you understand what the email archiving solution needs to provide.

The first thing to decide is which deployment model is best for your company: a hosted solution or an in-house solution.

A hosted email archiving solution means that all of your company email is stored off-site by another company. This solution usually has lower start-up costs and is a little quicker to implement, because the service provider supplies all the required hardware and software within their facility; you have no upfront equipment or software costs.

However, your hosted email archiving service costs can also be much higher, especially over the long term. Providers may charge by the space consumed or per user, and may even charge for additional services such as accessing the archive and eDiscovery.

Many companies are uncomfortable with trusting another company to store and have access to their corporate communications. You can never be entirely sure who has access to your information after it leaves your building and your servers.

An in-house email archiving solution refers to the requirement that you purchase the software and hardware and install and manage the solution within your own data centers. The solution and all equipment are owned and managed by you. That means your sensitive corporate data is kept and controlled by your employees in your facilities.

The in-house solution is much more customizable. Because you own and control all aspects of the solution, changes beneficial to your specific requirements are quick to implement and don't require additional service costs. Long-term costs can be noticeably lower.

What about scalability? Always go for a scalable archive solution.

How long will I be archiving email? Understand the retention policy of your email archive so that you can properly manage your email and storage space. Each organization needs to decide this for itself. For example: is there any government regulatory retention requirement? How long is access to old email productive for your employees? What kind of access does your HR department need? And finally, what does your corporate legal department think?

Do I need journaling or log shipping capability? If you need to capture each and every message without interference from your employees, then you need journaling or log shipping.

Journaling: saves a copy of each email and attachment sent or received and stores the email in a separate Exchange mailbox called the journal mailbox. However, email within this journal mailbox loses all audit and tracking information.

Log shipping: uses the standard Exchange transaction log files to capture all Exchange information and has no impact on the performance of the email server.

Do I give employees access to their specific mailbox archive? In most cases the answer is absolutely. By allowing employees access to their archived mailboxes, you increase employee productivity. It also frees up your IT staff: those intrepid workers are no longer dealing with requests to recover old emails from backup tapes.

Should I incorporate PSTs into the archive? Incorporating employees' PSTs into a central archive saves storage space, because many email archiving solutions create a single instance of each message. Plus, incorporating all emails, including those in PSTs, greatly simplifies discovery of the email system, because all email-related data is located in the archive, not on employee desktops, network drives, or even backup tapes.

How does an email archiving solution change my Exchange server backup requirement?
Having an email archiving solution in place doesn't necessarily remove your requirement to back up your email servers. Your Exchange system has many different types of data files that enable it to run effectively. Most email archiving systems capture copies of email messages and attachments, but they don't capture all the other data resident in the server, such as the Exchange database, calendar data, the contacts database, and tasks, so they can't act as a backup and restore application. Ask the vendor whether its archiving system captures all the data resident in Exchange; many email archiving solutions don't capture all the required files to act as a backup and restore application.

How does an email archiving solution change my Exchange storage requirement?
In some cases, your email archival solution can reduce your existing email storage by as much as 80 percent, and it can manage your email storage moving ahead. Now you can manage and control your email storage budget without sacrificing user productivity.

Keep in mind that the email archive can either completely remove email from the email server, or remove it and replace it with a small stub file. Both methods will reduce email storage, but the latter potentially introduces thousands of small files that may cause performance problems with Outlook as their numbers increase.

Major Benefits of Email Archiving

  1. Meet Federal or State Regulatory Requirements.
  2. Respond to eDiscovery orders and litigation hold requirement.
  3. Increase your Exchange server's performance.
  4. Reduce your Exchange server backup and recovery times.
  5. Reduce storage requirements to lower your overall storage costs.
  6. Increase employee productivity.
  7. Adopt internal corporate governance processes.
  8. Achieve more effective corporate knowledge management.
  9. Lower migration costs.
  10. Eliminate PSTs.


Sunday, 26 July 2015

VNX Systems: Storage Pools, RAID Groups, and Their LUNs (Thick and Thin)




VNX systems allow the creation of two types of block storage structures: RAID Groups and Pools.

A RAID Group (RG) is the traditional way to group disks into sets. Rules regarding the number of disks allowed in an RG, and the minimum/maximum number for a given RAID type, are enforced by the system. Supported RAID types are RAID 1, 1/0, 3, 5, and 6.
Note: Only Traditional LUNs can be created on an RG.

Pools are required for FAST VP (auto-tiering) and may have mixed disk types (Flash, SAS, and NL-SAS). The number of disks in a single pool depends on the VNX model and is the maximum number of disks in the storage system less 4. As an example, the VNX5700 has a maximum capacity of 500 disks and a maximum pool size of 496 disks. The remaining 4 disks are the system disks and cannot be part of a pool. Only RAID 5, 6, and 1/0 are supported in pools, and the entire pool (all tiers) will be one RAID type.

Traditional LUNs are created on RGs. They exhibit the highest level of performance of any LUN type and are recommended where predictable performance is required. All LUNs in an RG will be of the same RAID type.

Two different types of LUN can be created on Pools: Thick LUNs and Thin LUNs.
Thick LUN: When a thick LUN is created, the entire space that will be used for the LUN is reserved; if there is insufficient space in the pool, the thick LUN will not be created. An initial allocation of 3 GiB is made, and further slices are allocated as needed. These slices contain 1 GiB of contiguous Logical Block Addresses (LBAs), so when a slice is written to for the first time, it is allocated to the thick LUN. Because tracking happens at a granularity of 1 GiB, the amount of metadata is relatively low, and the lookups required to find the location of a slice in the pool are fast. Because lookups are required, thick LUN access will be slower than access to traditional LUNs.

Thin LUN: Thin LUNs also allocate 1 GiB slices when space is needed, but the granularity inside those slices is at the 8 KiB block level. Any 1 GiB slice will be allocated to only one thin LUN, but the 8 KiB blocks will not necessarily be from contiguous LBAs. Oversubscription is allowed, so the total size of the thin LUNs in a pool can exceed the size of the available physical data space; monitoring is required to ensure that out-of-space conditions do not occur. There is appreciably more overhead associated with thin LUNs than with thick LUNs and traditional LUNs, and performance is substantially reduced as a result.

Metadata
Pool LUNs also have metadata; metadata is associated with the use of both thick LUNs and thin LUNs. The metadata is used to locate the data on the private LUNs used in the Pool structure. The amount of metadata depends on the type and size of the LUN.

For example:
Thin LUN metadata (GB) = LUN capacity * 0.02 + 3 GB
e.g., a 500 GB thin LUN (when fully allocated) carries 13 GB of metadata.
Thick LUN metadata (GB) = LUN capacity * 0.001 + 3 GB
e.g., a 500 GB thick LUN carries 3.5 GB of metadata.

So from the example above we can see that a thin LUN requires roughly 9.5 GB more metadata space than a thick LUN of the same size.
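The rules of thumb above translate directly into a small calculator. A minimal Python sketch; treat the formulas as vendor estimates, not exact figures:

```python
def thin_lun_metadata_gb(lun_capacity_gb: float) -> float:
    # Thin LUN metadata (GB) = LUN capacity * 0.02 + 3 GB
    return lun_capacity_gb * 0.02 + 3

def thick_lun_metadata_gb(lun_capacity_gb: float) -> float:
    # Thick LUN metadata (GB) = LUN capacity * 0.001 + 3 GB
    return lun_capacity_gb * 0.001 + 3

print(thin_lun_metadata_gb(500))    # 13.0 GB for a fully allocated 500 GB thin LUN
print(thick_lun_metadata_gb(500))   # 3.5 GB for a 500 GB thick LUN
```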

Positioning Thin LUNs
Thin LUNs should be positioned in block environments where space saving and storage efficiency outweigh performance as the main goals. Areas where storage space is traditionally over-allocated, and where the thin LUN "allocate space on demand" functionality would be an advantage, include user home directories and shared data space.

If FAST VP is a requirement, and Pool LUNs are being proposed for that reason, it is important to remember that thick LUNs achieve better performance than thin LUNs.

Be aware that thin LUNs are not recommended in certain environments, notably Exchange 2010 and the file systems on VNX.

Thin LUN Performance Implications
Space is assigned to thin LUNs at a granularity of 8 KiB (inside a 1 GiB slice). The implication here is that tracking is required for each 8 KiB piece of data saved on a thin LUN, and that tracking involves capacity overhead in the form of metadata. In addition, since the location of any 8 KiB piece of data cannot be predicted, each data access to a thin LUN requires a lookup to determine the data location. If the metadata is not currently memory-resident, a disk access will be required, and an extended response time will result. This makes thin LUNs appreciably slower than traditional LUNs, and slower than thick LUNs.

Because thin LUNs make use of this additional metadata, recovery of thin LUNs after certain types of failure (e.g., cache dirty faults) will take appreciably longer than recovery of thick LUNs or traditional LUNs. A strong recommendation, therefore, is to place mission-critical applications on thick LUNs or traditional LUNs.

In some environments (those with a high locality of data reference), FAST Cache may help to reduce the performance impact of the metadata lookup.

Thin LUNs should not be used for VNX file systems, and they should never be used where high performance is an important goal.
Pool space should be monitored carefully (thin LUNs allow pool oversubscription, whereas thick LUNs do not). The system issues an alert when the consumption of any pool reaches a user-selectable limit. By default, this limit is 70%, which allows ample time for the user to take any corrective action required.
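A small sketch of that monitoring logic; the numbers are hypothetical, and with oversubscribed thin LUNs it is the consumed (physical) space you watch, since the subscribed (logical) total may legitimately exceed the pool size:

```python
def pool_alert(consumed_gb: float, usable_gb: float, threshold_pct: float = 70.0) -> str:
    """Mirror the pool-consumption alert described above: warn once the
    pool crosses a user-selectable threshold (70% by default)."""
    used_pct = consumed_gb / usable_gb * 100
    if used_pct >= threshold_pct:
        return f"ALERT: pool at {used_pct:.1f}% (threshold {threshold_pct}%)"
    return f"OK: pool at {used_pct:.1f}%"

print(pool_alert(consumed_gb=7_200, usable_gb=10_000))  # ALERT: pool at 72.0% ...
```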

Hope you all enjoyed reading it.


Saturday, 26 October 2013

Understanding SAN vs NAS for SMB Markets



 

One of the greatest downsides of a SAN is that, while it offers multi-host access to the device, it does not offer multi-host access to the file, whereas many applications want multi-host file access (for example, applications used in the media industry). If you want systems connected to the SAN to be able to read and write the same file, you require a clustered file system. Such file systems are available, such as the Quantum StorNext file system, but they are quite expensive, whereas NAS systems, or filers, have had multi-host file access technology for a long time.

NAS is considered easier to understand and manage than SAN. With a NAS device there is only the NAS GUI to understand, and from that GUI you can do almost everything and manage the NAS comfortably; the CIFS and NFS concepts are also not difficult to understand. A SAN with Fibre Channel, on the other hand, makes you learn about fibre technology, SAN switch concepts, zoning, loops, and HBAs, plus the SAN device management itself. So there are HBA vendor manuals to read, switch vendor manuals to read, and SAN storage vendor manuals to read, whereas with NAS there is only one NAS vendor manual to read and understand.

Because the whole NAS comes from a single vendor, if any problem happens you can contact that one vendor for troubleshooting. As we know, storage is the one box in the IT environment that takes the initial blame for every problem: everybody blames storage if data is not available, for performance issues, and so on. But if we look clearly, there are lots of factors, components, and hardware between the storage and the server. In a SAN environment, many vendors get involved in performance investigation, hardware failure, or management issues, because the HBA is from one vendor and the SAN switch from another, which can make you call all those vendors during troubleshooting. So management of NAS compared to SAN is quite handy and easy.

NAS filers can be difficult to back up to tape. Although the snapshot and off-site replication software sold by some NAS vendors offers some wonderful recovery possibilities that are rather difficult to achieve with a SAN, filers must still be backed up to tape at some point, and that can be a challenge. One of the reasons is that performing a full backup to tape will typically tax an I/O system much more than any other application. This means that backing up a really large filer to tape will create quite a load on the system. Although many filers have significantly improved their backup and recovery speeds, SANs are still faster when it comes to raw throughput to tape.

The throughput possible with a SAN makes large-scale backup and recovery much easier. In fact, large NAS environments take advantage of SAN technology in order to share a tape library and perform LAN-less backup.

 

SAN is costlier than NAS, and that's quite understandable, but nowadays SAN solutions are getting cheaper, or at least quite affordable. The NetApp FAS series, EMC VNX series, and Hitachi HUS series are vendor offerings that provide unified storage solutions usable as both SAN and NAS, and their costing is also quite affordable. But solution selling varies from country to country. I am a presales engineer in India, and when I go to a customer to discuss storage solutions, no matter how good the technology is or how nicely it will improve performance, in the end it can all go to waste because of budget issues. For IT managers of Indian companies (I am not talking about the enterprise segment here, but about the SMB market), it becomes quite difficult to sell storage solutions, because they don't have the budget, or they don't look at long-term savings; they look at what can be bought with the approved budget, and they want the best at the smallest price. The SMB market is itself divided into three segments (small SMB, mid SMB, and enterprise SMB), and there are lots of opportunities to do business with those companies: they have business potential, they have budget, and they could easily go for good IT solutions. But as these companies grow, their mentality does not grow, until and unless they see a disaster happen because of the low-budget IT solutions they are using. I have seen companies that have good money with them, but if you visit their IT infrastructure, they are not using central storage, not using virtualization, and not using any backup solutions; they just keep buying servers and keep taking manual backups of their servers and desktops/laptops when they need to protect them. They don't understand how a one-time investment can save their time and money and protect their data nicely, and they end up spending or losing more money than they thought they saved by not buying a good solution.

But if you look from the other side of the table, the customer will say that the vendor's presales or sales people do not spend much time counselling those IT managers and telling them how they can save a lot of money by buying the right solution for their environment. If you ask the IT sales guy, his performance is measured by his targets, so it does not make sense for him to spend his time, or his presales engineer's time, on customers whose budget is in the low range. It becomes difficult for one more reason: there are solutions that can be bought on a low budget, but they solve your problem for the short term, not the long term, and IT managers of SMBs in India look for short-term solutions, not long-term ones. It doesn't mean they cannot be changed; they can be, with proper counselling, proper knowledge sharing, and some good time spent with them.

Enterprises don't compromise on quality, while SMBs can be fooled easily.

Thank you.

 

Monday, 21 October 2013

Understanding Cloud Technology



After a long time I am back to share some new things with you. This time it's about cloud technology: there is a lot of buzz in the market about cloud, and a lot of confusion too, so I thought of writing something about this technology.

Let's start with the "what"...

What is cloud? Well, in the environmental sense the cloud brings us rain and keeps our environment green and healthy so that we can grow nice crops on our land, but sometimes unwanted rain can destroy our crops too. So, all things considered, we have no control over the environmental cloud that brings us rain; it is not user friendly and not flexible. Imagine if we could control the environmental cloud; then we could use it at our convenience.

The IT cloud, however, is a service given to us when we need it, so we can use it at our convenience. I am not really comparing the environmental cloud with the IT cloud; I am just telling you why we call it cloud technology.

So cloud is not a type of software or hardware product, but a way of delivering IT services that are consumable on demand, scalable on demand, elastic enough to scale up and scale down as needed, and paid for as you grow.

Cloud technology not only gives you better services but saves a lot of IT money; it brings down your IT cost. The cloud gives you the flexibility to pay only for the software or hardware service you need to keep your work going. You don't have to buy it and don't have to hire someone to manage it; you just pay a cloud vendor, who provides all those services and charges you for them, so the managerial headache of running the IT is not on you but on the cloud vendor.

Cloud can take many forms (storage-as-a-service, compute-as-a-service, application-as-a-service), but without the fundamental storage pieces, none of the other applications are possible.

While there are still varying definitions and much hype around what cloud does and does not mean, the key attributes that cloud computing must provide include:

1.       The ability to rapidly provision and de-provision a service.

2.       A consumption model where users pay for what they use.

3.       The agility to flexibly scale (flex up and flex down) the service without extensive pre-planning.

4.       A secure direct connection to the cloud without having to recode applications.

5.       Multi-tenancy capabilities that segregate and protect the data.

Now let's come to the "why"...

Why do we need cloud? As I stated earlier, as data grows, IT cost grows with it, so nowadays a lot of work is going on in the IT world to bring down IT costs. Cloud technology charges you only as per your need, so it drastically reduces IT cost.

Why should we choose cloud?

1.       Cost reduction by leveraging economies of scale beyond the four walls of the data center.

2.       IT agility to respond faster to changing business needs.

3.       100 per cent resource utilization.

Technical terms commonly used in cloud technology

1.       Multi-tenancy is a secure way to partition the IT infrastructure (applications, storage pools, network) so multiple customers share a single resource pool. Multi-tenancy is one of the key ways cloud can achieve massive economies of scale.

2.       REST (representational state transfer) is a type of software architecture for client/server communication over the web.

3.       Chargeback is the ability to report on capacity and utilization by application or dataset and charge business users or departments based on how much they use (see the sketch below).
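As a toy illustration of chargeback, here is a sketch that turns metered consumption into a per-department bill. The departments, usage figures, and rate are all hypothetical; real systems pull these numbers from the platform's metering APIs:

```python
# Hypothetical per-department metering, in gigabyte-hours consumed.
usage_gb_hours = {"engineering": 12_000, "marketing": 3_500, "finance": 1_200}
RATE_PER_GB_HOUR = 0.0002   # illustrative rate, in dollars

def chargeback_report(usage, rate):
    """Turn measured consumption into a per-department bill."""
    return {dept: round(gb_hours * rate, 2) for dept, gb_hours in usage.items()}

print(chargeback_report(usage_gb_hours, RATE_PER_GB_HOUR))
# {'engineering': 2.4, 'marketing': 0.7, 'finance': 0.24}
```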

Simplifying planning and using resources more cost-effectively is appealing to every organization. Utilizing cloud delivers time and cost savings.

Cloud technology distributes IT resources in a better, more cost-effective way than buying everything at once and then maintaining and managing it without knowing whether you are fully utilizing the IT resources you bought, wasting time and money in the process. In the cloud you buy resources as you grow, so you utilize your resources well; and if you need to scale the IT resources down, you can do that and save money too, so there is no extra waste of money.

Many organizations overprovision to handle storage bursts or to meet capacity planning, or even buy simply because budget is available. These efforts result in a lot of idle capacity and a longer time to realize a return on assets (ROA).

Employing cloud instead can simplify long-range financial and storage planning, as the redeployment of resources is performed instantly, anytime and anywhere, to scale up and down and to support business objectives as needed.

Cloud subscribers and Providers

Cloud technology involves subscribers and providers. The service provider could be the company's internal IT group, a third party, or a combination of both. The subscriber is the one who uses the cloud services. Providers gain economies of scale using multi-tenant infrastructure and a predictable, recurring revenue stream.

Subscribers' benefits include:

1.       Shifting storage costs to an operating expense: pay for use.

2.       Lowering operating expense and reducing the drain on IT resources.

3.       Balancing the value of data with service level agreements (SLAs) and cost.

4.       Gaining business flexibility with subscriber-controlled, on-demand capacity and performance.

5.       Future-proofing, because storage media can change below the cloud layer without disturbing the services.

 

What does "as-a-service" mean in cloud technology?

A frequently used term in any cloud-related book is as-a-service. It really means that a resource or task has been packaged so it can be delivered automatically to customers on demand in a repeatable fashion. It is commonly used to describe cloud delivery models.

For example:

Infrastructure-as-a-service (IaaS) delivers compute hardware (servers, network or storage) as a service. The characteristics commonly seen with IaaS are:

•Subscribers provision the resource without control of the underlying cloud infrastructure.

•The service is paid for on a usage basis.

•Infrastructure can be automatically scaled up or down.

An example of infrastructure-as-a-service is Amazon’s Elastic Compute Cloud (EC2), http://aws.amazon.com/ec2.

Storage-as-a-service (STaaS) provides storage resources as a pay-per-use utility to end users. It is one flavor or type of infrastructure-as-a-service and therefore shares the common characteristics described in the preceding point.

Hitachi’s Private File Tiering Cloud (www.hds.com/solutions/storage-strategies/cloud/index.html?WT.ac=us_hp_flash_r1) is an example of storage-as-a-service.
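To show what consuming storage-as-a-service looks like in practice, here is a minimal sketch against Amazon S3 using the boto3 Python SDK. The bucket name is hypothetical, and the sketch assumes AWS credentials are already configured:

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Store an object: capacity is provisioned for you on demand and billed
# per use; there is no storage hardware to buy or manage.
s3.put_object(Bucket="my-example-bucket", Key="reports/q3.pdf", Body=b"report bytes")

# Retrieve it back over the internet whenever it is needed.
response = s3.get_object(Bucket="my-example-bucket", Key="reports/q3.pdf")
data = response["Body"].read()
```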

Platform-as-a-service (PaaS) provides more than just the infrastructure. It is a comprehensive stack for developers to create cloud-ready business applications. The characteristics commonly seen with PaaS are that it:

•Is multi-tenant

•Supports web services standards

•Dynamically scales based on demand

An example of platform-as-a-service is Microsoft Azure (www.microsoft.com/windowsazure).

Software-as-a-service (SaaS) cloud providers host and deliver business applications as a service. The characteristics commonly seen with SaaS include:

•Multi-tenancy

•Consumer uses applications running on a cloud infrastructure

•Accessible from various client devices through web browser

•CRM (customer relationship management) is one of the most commonly seen SaaS applications; Salesforce.com (www.salesforce.com) is an example of software-as-a-service.

Main categories of cloud

The three main categories of cloud models are private, hybrid and public. Each one may offer varying levels of security, services, access, service level agreements (SLAs) and value to end users.

Private cloud: the name itself states that it is private, which means all components reside within the firewall of an organization. The infrastructure is either managed by the internal IT team or managed and delivered by a cloud provider.

How is private cloud used?

Private cloud can leverage existing infrastructure, deliver massive scale and enable chargeback, run either by the organization's own IT staff or as a vendor-managed service, but within the privacy of an organization's network.

Additional benefits you can get:

1.       Can deliver IaaS or STaaS internally to employees or business units through an intranet or the internet via a virtual private network (VPN).

2.       Can deliver software (applications) as a service to branch offices.

3.       Can include database on demand, email on demand or storage on demand.

Security in private cloud

With private cloud, security of the data and physical premises is determined and monitored by the IT team, and its high-quality SLAs remain intact. In a private cloud environment, the network bandwidth is under IT's control as well, which also helps ensure SLAs.

An organization maintains its own strong security practices for both the data and the physical location, such as key codes, passwords and badging. Access to data is determined internally and may resemble existing role-based access controls; or separate administration and data permissions, based on data types and security practices, may be granted.

Why use private cloud?

Reasons for using private cloud include

To the end users: Quick and easy resource sharing, rapid deployment, self-service and the ability to perform chargeback to departments or user groups.

To the service provider (in this case, an organization): The ability to initiate chargeback accounting for usage while maintaining control over data access and security.

Public cloud

In public cloud, as the name itself says, the cloud is public: it is a multi-tenant infrastructure, meaning the same hardware or IT infrastructure is shared by multiple companies, and all major components are located in a multi-tenant infrastructure outside an organization's firewall. Applications and storage are made available over the internet and can be free or offered at a pay-per-usage fee.

The key characteristics of public cloud are:

1.       Elasticity

2.       Ease of use.

3.       Low entry costs

4.       Pay-per-use

Examples of public cloud services include picture and music sharing, laptop backup and file sharing. Examples of providers include Amazon and Google on-demand web applications, Yahoo Mail, Facebook and LinkedIn.

Why use public cloud?

Public cloud focuses on consumers and small to medium size businesses, where pay-per-use pricing is available, often equating to pennies per gigabyte. For end users it is very cheap: compared to buying a small removable hard disk, we can store data in the cloud, share it easily, and benefit from rapid deployment and self-service.

Note: public cloud offers lower-level SLAs and may not offer guarantees against data loss or corruption.

Hybrid Cloud

Hybrid cloud is a combination of public and private: selected data or applications of the IT infrastructure are allowed to be punched through the corporate firewall and provided by a trusted cloud provider, whose multi-tenant infrastructure outside the firewall is leveraged for further cost reduction. The IT organization makes the decision regarding what type of services and data can live outside the firewall, to be managed by a trusted third-party partner such as telcos, system integrators and internet service providers.

How is cost saving achieved?

Hybrid cloud usually provides an attractive alternative when an organization's internal processes can no longer be optimized; further cost reduction is achieved by leveraging a trusted service provider's ability to deliver to more than a single customer.

The service provider's costs are lower because they amortize infrastructure across many customers, and this helps even out supply 'peaks and valleys'. The service provider passes along those savings to the customer base.

An organization’s cost infrastructure may only be amortized across business units or a small customer base. By moving certain data and applications to a hybrid cloud, the organization is able to take advantage of the multi-tenant capabilities and economies of scale.

The overall outlay of service delivery shifts to the pay-for-usage model for an organization, while the trusted provider appreciates higher utilization rates through its shared infrastructure. The result is reduced costs for any given service offered through the hybrid cloud. Building bridges between an organization and its trusted partners is critical to ensuring data is protected. Hybrid cloud providers use stringent security practices and uphold high-quality SLAs to help the organization mitigate risks and maintain control over data, managed services and application hosting services delivered through multi-tenancy. An organization also determines access limitations for the provider and whether the services will be delivered via VPNs or dedicated networks.

Why use hybrid cloud?

Reasons for using hybrid cloud include:

To the organization: Cost reductions — well-managed services that are seamlessly and securely accessed by its end users.

To the trusted provider: The economies of scale — supplying services to multiple customers while increasing utilization rates of highly scalable cloud-enabled infrastructure.
 
Finally, I would like to suggest that cloud technology is the technology of the future; actually, "future" may not be the right word, because this technology is already being adopted in the market, though a lot of the market is still there for cloud technology providers to capture. All the leading storage vendors have started providing cloud technology, and new cloud providers have already started their business in the Indian market, though the Indian market will take some more time to move to cloud technology.
So, for those who are presales engineers at partners, implementation engineers, or engineering students: please start studying cloud technology and start getting certified in it, because in the future a lot of jobs will be created around cloud technology, and those who have a basic understanding will be ahead.
 
Thanks to everybody who visits my blog. I hope my post on cloud will help you all learn a little bit about cloud technology.

Tuesday, 17 September 2013

Quantum Tape Library i40/i80


Quantum Tape Library i40/i80: Comparison with Other Tape Libraries


Please find below some of the problems that tape admins face, and their solutions via Quantum tape products.

 

Some common challenges and their solutions via Quantum tape libraries.

 

1. The amount of data we have to protect is growing, and we don't know how much capacity will be needed in 3 years.

Sol: The Scalar i40/i80 products provide market-leading investment protection with Capacity-on-Demand (COD) scalability. This allows you to expand your capacity by 60% through a simple software license; there is no hardware to purchase or install, saving you time and money.

The Quantum Scalar i40 gives you the first 25 slots up front, and with a COD license you can grow to 40 slots with no need to buy new hardware.

 

2. We do not have a large technical staff; we cannot afford to spend time managing a complex automation product.

Sol: The Scalar i40/i80 simplifies everything from initial setup and ongoing management to adding capacity over time. With over 30,000 iLayer libraries shipped, the iLayer management software has been shown to reduce management time by over 50% in most instances.

Quantum iLayer management software reduces management time by over 50%.

 

3. We do not have technical staff onsite; how do we swap the correct tapes for offsite disaster recovery protection with our non-technical resources?

Sol: The Scalar i40/i80 has large import/export (I/E) slots to simplify the exchange of media for offsite disaster recovery. Administrative personnel can simply replace tapes placed in the I/E slots for offsite storage, without complicated commands and without access to the internal library tapes, ensuring only the correct tapes are removed and backup operations continue without interference.

The Quantum i40 has 5 I/E slots, which help you exchange media for offsite disaster recovery.

 

4. We spend too much time dealing with failed backup jobs; what's available to reduce this issue for us?

Sol: The iLayer feature in the Scalar i40/i80 proactively monitors events inside the library and sends email alerts to assigned personnel and/or Quantum service to ensure the library is not the cause of a failed backup or restore. iLayer has been shown to reduce service calls by 50%.

The Quantum i40 iLayer management software proactively monitors events inside the library, sends email alerts, and reduces service calls by 50%.

 

5. We need to protect our data from getting into the wrong hands, both from a government compliance concern and from a corporate security concern.

Sol: The Scalar i40/i80 supports the highest level of encryption, the AES-256 encryption standard, to ensure regulatory compliance and to keep sensitive company data protected, even while being stored offsite.

The Quantum i40's AES-256 encryption will protect your data.
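The Scalar library performs this encryption in the drive hardware, so the sketch below is purely to illustrate what AES-256 authenticated encryption looks like in software, using Python's third-party cryptography package:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)   # a 256-bit AES key
nonce = os.urandom(12)                      # must be unique per encryption
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"backup data block", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"backup data block"    # decrypts and verifies integrity
```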

 

Thursday, 25 April 2013

Hitachi Dynamic Link Manager (HDLM)



What is HDLM? It is a server-based software solution that directly addresses the challenges associated with a single point of failure in the data path.

HDLM manages iSCSI devices; Hitachi storage system command devices, such as Hitachi RAID Manager command devices; and EMC DMX series, EMC CX series, and HP EVA series arrays. Tape devices and the disks internal to the host are not managed by HDLM.

HDLM features:

1.     Multipathing: Multiple paths can also be used to share I/O workloads and improve performance.

2.     Path failover: By removing the threat of I/O bottlenecks, HDLM protects your data paths and increases performance and reliability.

3.     Failback: By recovering a failed path and placing it back online when it becomes available, the maximum number of paths available for load balancing and failover is assured.

4.     Load balancing: By allocating I/O requests across all paths, load balancing ensures continuous operation at optimum performance levels, along with improved system and application performance. Several load balancing policies are supported.

Since HDLM automatically performs path health checking, the need to perform repeated manual path status checks is eliminated.

 

With multi-pathing, a failure of one or more components still allows applications to access their data. In addition to providing fault tolerance, multi-pathing also serves to redistribute the read/write load among multiple paths between the server and storage, helping to remove bottlenecks and balance workloads. In addition, distributing data access across all the available paths increases performance, allowing more applications to be run and more work to be performed in a shorter period of time.

 

How HDLM works:

1.     The HDLM driver interfaces with the HBA driver or the multipathing framework provided by the OS.

2.     It assigns a unique identifier to each path between a storage device and the host.

3.     It distributes application I/O across the paths according to the failover and load balancing policies.

4.     When a path fails, all outstanding and subsequent I/O requests shift automatically and transparently from the failed or down path to alternative paths.

 

Two types of failover happen: automatic and manual.

 

Failover keeps your mission-critical operations running without interruption, the use of storage assets is maximized, and business operations remain online.

 

A path can go offline for the following reasons:

1.     An error occurred on the path.

2.     A user intentionally placed the path offline by using the path management window in the HDLM GUI.

3.     A user executed the HDLM command's offline operation.

4.     Hardware, such as cables or HBAs, has been removed.

 

You can manually place a path online or offline by doing the following:

1.     Use the HDLM GUI Path Management window.

2.     Execute the “dlnkmgr” command’s online or offline operation.

The default algorithm used for load balancing is round-robin. This algorithm simply distributes I/O by alternating requests across all available data paths. Some multipath solutions, such as the IBM MPIO default PCM, only provide this type of load balancing.

If we use extended round-robin for load balancing, it distributes I/O to paths depending on whether the I/O involves sequential or random access:

. For sequential access, a certain number of I/Os are issued to one path in succession. The next path is then chosen according to the round-robin algorithm.

. For random access, I/O is distributed to multiple paths according to the round-robin algorithm.
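A simplified sketch of the two policies follows; this is not HDLM's actual implementation, and the burst length and sequential-access detection are assumptions made for illustration:

```python
from itertools import cycle

class RoundRobin:
    """Plain round-robin: alternate every I/O across all online paths."""
    def __init__(self, paths):
        self._paths = cycle(paths)

    def next_path(self, io):
        return next(self._paths)

class ExtendedRoundRobin:
    """Extended round-robin: keep a sequential run on one path for a burst,
    rotate paths round-robin for random access."""
    def __init__(self, paths, burst=8):
        self._paths = cycle(paths)
        self._current = next(self._paths)
        self._burst = burst
        self._run = 0
        self._next_lba = None   # LBA we would expect next if access is sequential

    def next_path(self, io):
        sequential = io["lba"] == self._next_lba
        self._next_lba = io["lba"] + io["blocks"]
        if sequential and self._run < self._burst:
            self._run += 1                     # keep the sequential run on one path
        else:
            self._current = next(self._paths)  # random access or burst done: rotate
            self._run = 0
        return self._current

rr = RoundRobin(["path0", "path1", "path2"])
print([rr.next_path({}) for _ in range(4)])    # path0, path1, path2, path0
```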

Multiple HDLM instances can be centrally managed with Hitachi Global Link Manager (HGLM).

With HGLM we can centrally administer multiple HDLM multipath environments from a single point of control, and consolidate and present complex multipath configuration information in simplified host- and storage-centric views.

 

Summary:

1.     Provides a centralized facility for managing path failover, automatic failback, and selection of I/O balancing techniques through integration with Hitachi Global Link Manager.

2.     Eases installation and use through an auto-discovery function, which automatically detects all available paths for failover and load balancing.

3.     Provides one path-management tool for all operating systems, and includes a CLI that allows administrators flexibility in managing paths across networks.

4.     Provides manual and automatic failover and failback support.

5.     Monitors the status of online paths through a health-check facility at customer-specified intervals, and places a failed path offline when an error is detected.