Saturday, 26 October 2013

Understanding SAN vs NAS for SMB Markets



 

One of the greatest downsides of a SAN is that, while it offers multi-host access to the device, it does not offer multi-host access to the file, whereas many applications want multi-host file access (for example, applications used in the media industry). If you want several systems connected to the SAN to be able to read and write the same file, then you require a clustered file system. Such file systems are available, such as the Quantum StorNext file system, but they are quite expensive, whereas NAS systems or filers have had multi-host file access for a long time.

NAS is considered easier to understand and manage than SAN. On a NAS device there is only the NAS GUI to learn, and from that GUI you can do almost everything and manage the NAS comfortably; the CIFS and NFS concepts are also not difficult to understand. SAN with Fibre Channel, on the other hand, makes you learn fibre technology, SAN switch concepts, zoning, loops and HBAs, and an understanding of the SAN device management is also required. So there are HBA vendor manuals to read, switch vendor manuals to read and SAN storage vendor manuals to read, whereas with NAS there is only one vendor manual to read and understand.

Since the whole NAS comes from a single vendor, whenever a problem happens you can contact that one vendor for troubleshooting. As we know, storage is the box in the IT environment that takes the initial blame for every problem: everybody blames storage if data is not available, if there is a performance issue and so on. But if we look clearly, there are many factors, components and pieces of hardware between storage and server. In a SAN environment many vendors get involved when it comes to performance investigation, hardware failure or management issues, because an HBA from one vendor and a SAN switch from another can make you call all those vendors during troubleshooting. So management of NAS compared to SAN is quite handy and easy.

NAS filers can be difficult to back up to tape. Although the snapshot and off-site replication software sold by some NAS vendors offers some wonderful recovery possibilities that are rather difficult to achieve with a SAN, filers must still be backed up to tape at some point, and that can be a challenge. One of the reasons is that performing a full backup to tape will typically tax an I/O system more than almost any other application. This means that backing up a really large filer to tape will create quite a load on the system. Although many filers have significantly improved their backup and recovery speeds, SANs are still faster when it comes to raw throughput to tape.

The throughput possible with a SAN makes large-scale backup and recovery much easier. In fact, large NAS environments take advantage of SAN technology in order to share a tape library and perform LAN-less backup.

 

SAN is costlier than NAS, and that's quite understandable, but nowadays SAN solutions are getting cheaper and quite affordable. The NetApp FAS series, EMC VNX series and Hitachi HUS series are unified storage solutions that can be used as both SAN and NAS, and their pricing is also quite affordable. But solution selling varies from country to country. I am a presales engineer in India, and when I go to a customer to discuss storage solutions, no matter how good the technology is or how nicely it will improve performance, in the end it all goes to waste because of budget issues. For IT managers of Indian companies (I am not talking about the enterprise, but about the SMB market) it becomes quite difficult to sell storage solutions, because they don't have the budget or they don't look at long-term savings; they only look at what can be bought with the approved budget. They want the best at the smallest price.

The SMB market itself is divided into three segments: small SMB, mid SMB and enterprise SMB, and there are lots of opportunities to do business with those companies. They have business potential, they have budget, and they could easily go for good IT solutions. But as these companies grow, their mentality does not grow; until they see a disaster happen because of the low-budget IT solutions they are using, nothing changes. I have seen companies with good money that, if you visit their IT infrastructure, are not using central storage, not using virtualization and not using any backup solution. They just keep buying servers and keep taking manual backups of their servers, desktops and laptops when they need to protect them. They don't understand how a one-time investment can save their time and money and protect their data nicely, and they end up spending or losing more money than they thought they saved by not buying a good solution.

But if you look from the other side of the table, the customer will say that the vendor's presales or sales guy does not spend much time counselling those IT managers and telling them how they can save a lot of money by buying the right solution for their environment. If you ask the IT sales guy, though, his performance is measured by his targets, so it does not make sense for him to spend his or his presales engineer's time on customers whose budget is in the low range. It becomes difficult for one more reason: there are solutions that can be bought on a low budget, but they solve your problem for the short term, not the long term, and IT managers of those SMBs in India look for short-term solutions, not long-term ones. It doesn't mean they cannot be changed; they can be, with proper counselling, proper knowledge sharing and some good time spent with them.

Enterprises don't compromise on quality; SMBs can be fooled easily….

Thank you..

 

Monday, 21 October 2013

Understanding Cloud technology



After a long time I am back to share something new with you. It's about cloud technology: there is a lot of buzz in the market about cloud, and a lot of confusion too, so I thought of writing something about this technology.

Let's start with the what…

What is cloud? Well, in the environmental sense a cloud brings us rain and keeps our environment green and healthy so that we can grow nice crops on our land, but sometimes unwanted rain can even destroy those crops. So, altogether, we have no control over the environmental clouds that bring us rain; they are not user friendly and not flexible. Imagine if we had control over environmental clouds: then we could use them at our convenience.

The IT cloud, on the other hand, is a service given to us when we need it, so we can use it at our convenience. I am not really comparing the environmental cloud with the IT cloud; I am just telling you why we call it cloud technology.

So cloud is not any type of software or hardware product, but a way of delivering IT services that are consumable on demand, scalable on demand, elastic to scale up and down as needed, and paid for as you grow.

Cloud technology not only gives you better service but also saves a lot of IT money; that is, it brings down your IT cost. The IT cloud gives you the flexibility to pay only for what you need, for any type of software or hardware service required to keep your work going. You don't have to buy it and you don't have to hire someone to manage it: just pay a cloud vendor, who will provide all those services and charge you for them. All the managerial headache of running the IT is then not on you but on the cloud vendor.

Cloud can take many forms (storage-as-a-service, compute-as-a-service, application-as-a-service), but without the fundamental storage piece, none of the other applications are possible.

While there are still varying definitions and much hype around what cloud does and does not mean, the key attributes that cloud computing must provide include:

1.       The ability to rapidly provision and de-provision a service.

2.       A consumption model where users pay for what they use.

3.       The agility to flexibly scale (flex up and flex down) the service without extensive pre-planning.

4.       A secure direct connection to cloud without having to recode applications.

5.       Multi-tenancy capabilities that segregate and protect the data.

Now let's come to the why…

Why do we need cloud? As I stated in the previous paragraph, data is growing and with it the IT cost is also growing, so nowadays a lot of work is going on in the IT world to bring down IT cost. Cloud technology charges you only for what you need, so it drastically reduces the IT cost.

Why should we choose cloud?

1.       Cost reduction by leveraging the economics of scale beyond the four walls of the data center.

2.       IT agility to respond faster to changing business needs.

3.       100 per cent resource utilization.

Technical Terms highly used in Cloud technology

1.       Multi-tenancy is a secure way to partition the IT infrastructure (applications, storage pools, networks) so that multiple customers share a single resource pool. Multi-tenancy is one of the key ways cloud achieves massive economies of scale.

2.       REST (representational state transfer) is a type of software architecture for client/server communication over the web.

3.       Chargeback is the ability to report on capacity and utilization by application or dataset and charge business users or departments based on how much they use.
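To make chargeback concrete, here is a minimal sketch in Python; the departments, capacities and per-GB rate are invented for illustration, not taken from any real billing product:

```python
# Minimal chargeback sketch: bill each department for the capacity it used.
# Departments, capacities and the per-GB rate below are hypothetical examples.

RATE_PER_GB = 0.10  # assumed price per GB per month

usage_gb = {
    "finance": 500,
    "engineering": 2000,
    "hr": 120,
}

def chargeback_report(usage, rate):
    """Return a {department: monthly_charge} mapping."""
    return {dept: round(gb * rate, 2) for dept, gb in usage.items()}

report = chargeback_report(usage_gb, RATE_PER_GB)
for dept, charge in sorted(report.items()):
    print(f"{dept:12s} {charge:8.2f}")
```

A real system would meter usage continuously and pull rates from the provider's price book, but the principle stays the same: usage times rate, per consumer.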

Simplifying planning and using resources more cost-effectively is appealing to every organization. Utilizing cloud delivers time and cost savings.

Cloud technology distributes IT resources in a better and more cost-effective way than buying everything at once and then maintaining and managing it without knowing whether you are fully utilizing the IT resources you bought, ending up wasting time and money. In cloud you buy resources as you grow, so you not only utilize your resources properly; if you need to scale the IT resources down and save money, you can do that too, so there is no extra waste of money.

Many organizations overprovision to manage storage bursts, to attempt to meet capacity planning, or even buy simply because budget is available. These efforts result in a lot of idle capacity and a longer time to realize a return on assets (ROA).

Employing cloud instead can simplify long-range financial and storage planning, as the redeployment of resources is performed instantly, anytime and anywhere, to scale up and down and to support business objectives as needed.

Cloud subscribers and Providers

So cloud technology involves subscribers and providers. The service provider could be the company's internal IT group, a third party, or a combination of both. The subscriber is the one who uses the cloud services. Providers gain economies of scale using a multi-tenant infrastructure and a predictable, recurring revenue stream.

Subscriber’s benefits include:

1.       Shifting storage cost to an operating expense: pay for use.

2.       Lowering operating expenses and reducing the drain on IT resources.

3.       Balancing the value of data with service level agreements (SLAs) and cost.

4.       Gaining business flexibility with subscriber-controlled, on-demand capacity and performance.

5.       Future-proofing, because storage media can change below the cloud layer without disturbing the services.

 

What is “as-a-service” in Cloud technology?

A frequently used term in any cloud-related book is as-a-service. It really means that a resource or task has been packaged so it can be delivered automatically to customers on demand in a repeatable fashion. It is commonly used to describe cloud delivery models.

For example:

Infrastructure-as-a-service (IaaS) delivers compute hardware (servers, network or storage) as a service. The characteristics commonly seen with IaaS are

•Subscribers provision the resource without control of the underlying cloud infrastructure.

•The service is paid for on a usage basis.

•Infrastructure can be automatically scaled up or down.

An example of infrastructure-as-a-service is Amazon’s Elastic Compute Cloud (EC2), http://aws.amazon.com/ec2.

Storage-as-a-service (STaaS) provides storage resources as a pay-per-use utility to end users. It is one flavor or type of infrastructure-as-a-service and therefore shares the common characteristics described in the preceding point.

Hitachi’s Private File Tiering Cloud (www.hds.com/solutions/storage-strategies/cloud/index.html?WT.ac=us_hp_flash_r1) is an example of storage-as-a-service.

Platform-as-a-service (PaaS) provides more than just the infrastructure. It is a comprehensive stack for developers to create cloud-ready business applications. The characteristics commonly seen with PaaS are that it:

•Is multi-tenant

•Supports web services standards

•Scales dynamically based on demand

An example of platform-as-a-service is Microsoft Azure www.microsoft.com/windowsazure.

Software-as-a-service (SaaS) cloud providers host and deliver business applications as a service. The characteristics commonly seen with SaaS include:

•Multi-tenancy

•Consumer uses applications running on a cloud infrastructure

•Accessible from various client devices through web browser

•CRM (customer relationship management) is one of the most commonly seen SaaS applications

Salesforce.com (www.salesforce.com) is an example of software-as-a-service.

Main categories of cloud

The three main categories of cloud models are private, hybrid and public. Each one may offer varying levels of security, services, access, service level agreements (SLAs) and value to end users.

Private cloud: the word itself states that it is private, meaning that all components reside within the firewall of an organization; the infrastructure is either managed by the internal IT team or managed and delivered by a cloud provider.

How is private cloud used?

Private cloud can leverage existing infrastructure, deliver massive scale and enable chargeback, run either by the organization's own IT staff or as a vendor-managed service, but within the privacy of the organization's network.

Additional benefits you can get:

1.       Can deliver IaaS or STaaS internally to employees or business units through an intranet or the internet via a virtual private network (VPN).

2.       Can deliver software (applications) as a service to branch offices.

3.       Can include database on demand, email on demand or storage on demand.

Security in private cloud

With private cloud, security of the data and the physical premises is determined and monitored by the IT team, and high-quality SLAs remain intact. In a private cloud environment, the network bandwidth is under IT's control as well, which also helps ensure SLAs.

An organization maintains its own strong security practices for both the data and the physical location, such as key codes, passwords and badging. Access to data is determined internally and may resemble existing role-based access controls; alternatively, separate administration and data permissions may be granted based on data types and security practices.

Why use private cloud?

Reasons for using private cloud include

To the end users: Quick and easy resource sharing, rapid deployment, self-service and the ability to perform chargeback to departments or user groups.

To the service provider (in this case, an organization): The ability to initiate chargeback accounting for usage while maintaining control over data access and security.

Public cloud

In public cloud, as the name itself says, the cloud is public: it is a multi-tenant infrastructure, meaning the same hardware or IT infrastructure is shared by multiple companies, and all major components are located outside an organization's firewall. Applications and storage are made available over the internet and can be free or offered at a pay-per-usage fee.
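The multi-tenant sharing described above boils down to one capacity pool with per-tenant quotas and per-tenant namespaces. Here is a toy Python sketch; the tenant names, quotas and object sizes are made up for illustration:

```python
# Toy multi-tenant storage pool: tenants share one pool of capacity, but each
# sees only its own objects and is capped by its own quota.

class SharedPool:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0
        self.tenants = {}  # name -> {"quota": gb, "objects": {key: size}}

    def add_tenant(self, name, quota_gb):
        self.tenants[name] = {"quota": quota_gb, "objects": {}}

    def put(self, tenant, key, size_gb):
        t = self.tenants[tenant]
        if sum(t["objects"].values()) + size_gb > t["quota"]:
            raise ValueError(f"{tenant}: quota exceeded")
        if self.used + size_gb > self.capacity:
            raise ValueError("pool full")
        t["objects"][key] = size_gb      # visible only to this tenant
        self.used += size_gb

pool = SharedPool(capacity_gb=1000)
pool.add_tenant("acme", quota_gb=300)
pool.add_tenant("globex", quota_gb=300)
pool.put("acme", "backup.img", 120)
pool.put("globex", "db.dump", 80)
print(pool.used)  # both tenants draw from the same 1000 GB pool
```

Real providers add authentication and network isolation on top, but the economics come from exactly this sharing of one physical pool.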

The key characteristics of public cloud are:

1.       Elasticity

2.       Ease of use

3.       Low entry costs

4.       Pay-per-use

Examples of public cloud services include picture and music sharing, laptop backup and file sharing. Examples of providers include Amazon and Google on-demand web applications, Yahoo Mail, Facebook and LinkedIn.

Why use public cloud?

Public cloud focuses on consumers and small to medium-size businesses, where pay-per-use pricing is available, often equating to pennies per gigabyte. For an end user it is very cheap: instead of buying a small removable hard disk for storing data, we can store it in the cloud, with easy sharing, rapid deployment and self-service.
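As a back-of-the-envelope illustration of the pay-per-use economics, here is a tiny Python calculation; both prices are hypothetical examples, not quotes from any provider:

```python
# Back-of-the-envelope comparison of pay-per-use cloud storage vs buying a
# removable disk. All prices are hypothetical, for illustration only.

DISK_PRICE = 60.0          # assumed one-time cost of a 1 TB removable disk
CLOUD_PER_GB_MONTH = 0.03  # assumed "pennies per gigabyte" cloud rate

def cloud_cost(gb_stored, months, rate=CLOUD_PER_GB_MONTH):
    """Total pay-per-use cost of keeping gb_stored in the cloud."""
    return gb_stored * rate * months

# Storing 50 GB for a year in the cloud vs buying a disk outright:
print(f"cloud, 50 GB x 12 months: ${cloud_cost(50, 12):.2f}")
print(f"removable 1 TB disk:      ${DISK_PRICE:.2f}")
```

The point is that a small user pays only for the gigabytes actually stored, while the disk buyer pays for the full terabyte up front whether it is used or not.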

Note: public cloud offers a low-level SLA and may not offer guarantees against data loss or corruption.

Hybrid Cloud

Hybrid cloud is a combination of public and private: selected data or applications of the IT infrastructure are allowed to be punched through the corporate firewall and provided by a trusted cloud provider, and the multi-tenant infrastructure outside the firewall delivered by that provider is leveraged for further cost reduction. The IT organization makes the decision regarding what types of services and data can live outside the firewall to be managed by a trusted third-party partner, such as telcos, system integrators and internet service providers.

How are cost savings achieved?

Hybrid cloud usually provides an attractive alternative when an organization's internal processes can no longer be optimized, because further cost reduction comes from leveraging a trusted service provider's ability to deliver to more than a single customer.

The service provider’s costs are lower because they amortize infrastructure across many customers and this helps even out supply ‘peaks and valleys’. The service provider passes along those savings to the customer base.

An organization’s cost infrastructure may only be amortized across business units or a small customer base. By moving certain data and applications to a hybrid cloud, the organization is able to take advantage of the multi-tenant capabilities and economies of scale.

The overall outlay of service delivery shifts to the pay-for-usage model for the organization, while the trusted provider achieves higher utilization rates through its shared infrastructure. The result is reduced cost for any given service offered through the hybrid cloud. Building bridges between an organization and its trusted partners is critical to ensuring data is protected. Hybrid cloud providers use stringent security practices and uphold high-quality SLAs to help the organization mitigate risk and maintain control over data, managed services and application hosting services delivered through multi-tenancy. The organization also determines access limitations for the provider and whether the services will be delivered via VPNs or dedicated networks.

Why use hybrid cloud?

Reasons for using hybrid cloud include:

To the organization: Cost reductions — well-managed services that are seamlessly and securely accessed by its end users.

To the trusted provider: The economies of scale — supplying services to multiple customers while increasing utilization rates of highly scalable cloud-enabled infrastructure.
 
Finally, I would like to suggest that cloud technology is the technology of the future. Actually, “future” is no longer the right word, because this technology is already being adopted in the market, though a lot of the market still needs to be captured by cloud technology providers. All the leading storage vendors have started providing cloud technology, and new cloud providers have already started their business in the Indian market, although the Indian market will take some more time to get into cloud technology.

So those of you who are presales engineers at partners, implementation engineers or engineering students, please start studying cloud technology and start getting certified in it, because in future a lot of jobs will be created around cloud technology, and those who have a basic understanding will be ahead.

Thanks to everybody who visits my blog. I hope this post on cloud will help you all to know a little bit about cloud technology.
 

Tuesday, 17 September 2013

Quantum Tape Library i40/i80


Quantum Tape Library i40/i80 Comparison with other Tape Libraries.

 



 

And please find below some of the problems that tape admins face, and their solutions via installing Quantum tape solutions.

Some common challenges and their solutions via Quantum tape libraries.

 

1. The amount of data we have to protect is growing; we don't know how much capacity will be needed in 3 years.

Sol: The Scalar i40/i80 products provide market-leading investment protection with Capacity-on-Demand (COD) scalability. This allows you to expand your capacity by 60% simply through a software license; there is no hardware to purchase or install, saving you time and money.

The Quantum i40 tape library gives you the first 25 slots, and with a COD license you can grow to 40 slots; no need to buy new hardware.

 

2. We do not have a large technical staff; we cannot afford to spend time managing a complex automation product.

Sol: The Scalar i40/i80 simplifies everything from initial setup and ongoing management to adding capacity over time. With over 30,000 iLayer libraries shipped, the iLayer management software has been shown to reduce management time by over 50% in most instances.

Quantum iLayer management software reduces management time by over 50%.

 

3. We do not have technical staff onsite; how do we swap the correct tapes for offsite disaster recovery protection with our non-technical resources?

Sol: The Scalar i40/i80 has large import/export (I/E) slots to simplify the exchange of media for offsite disaster recovery. Administrative personnel can simply replace tapes placed in the I/E slots for offsite storage, without complicated commands and without access to the internal library tapes, ensuring only the correct tapes are removed and backup operations continue without interference.

The Quantum i40 has 5 I/E slots, which help you exchange media for offsite disaster recovery.

 

4. We spend too much time dealing with failed backup jobs; what is available to reduce this issue for us?

Sol: The iLayer feature in the Scalar i40/i80 proactively monitors events inside the library and sends email alerts to assigned personnel and/or Quantum service to ensure the library is not the cause of a failed backup or restore; iLayer has been shown to reduce service calls by 50%.

The Quantum i40 iLayer management software proactively monitors events inside the library, sends alert mails and reduces service calls by 50%.

 

5. We need to protect our data from getting into the wrong hands, both from a government compliance standpoint and from a corporate security standpoint.

Sol: The Scalar i40/i80 supports the highest level of encryption, the AES-256 encryption standard, to ensure regulatory compliance and that sensitive company data is protected, even while being stored offsite.

Quantum i40 AES-256 encryption will protect your data.

 

Thursday, 25 April 2013

Hitachi Dynamic Link Manager (HDLM)



What is HDLM? It is a server-based software solution that directly addresses the challenges associated with a single point of failure.

HDLM manages iSCSI devices, Hitachi storage system command devices (such as Hitachi RAID Manager command devices), and EMC DMX series, EMC CX series and HP EVA series arrays; tape devices and the disks internal to the host are not managed by HDLM.

HDLM features.

1.     Multipathing: multiple paths can be used to share I/O workloads and improve performance.

2.     Path failover: by removing the threat of I/O bottlenecks, HDLM protects your data paths and increases performance and reliability.

3.     Failback: by recovering a failed path and placing it back online when it becomes available, the maximum number of paths available for load balancing and failover is assured.

4.     Load balancing: by allocating I/O requests across all paths, load balancing ensures continuous operation at optimum performance levels, along with improved system and application performance. Several load balancing policies are supported.

Since HDLM automatically performs path health checking, the need to perform repeated manual path status checks is eliminated.

 

With multipathing, a failure of one or more components still allows applications to access their data. In addition to providing fault tolerance, multipathing also serves to redistribute the read/write load among the multiple paths between the server and storage, helping to remove bottlenecks and balance workloads. Distributing data access across all the available paths also increases performance, allowing more applications to be run and more work to be performed in a shorter period of time.

 

How HDLM works:

1.     The HDLM driver interfaces with the HBA driver or with a multipathing framework provided by the OS.

2.     It assigns a unique identifier to each path between a storage device and the host.

3.     It distributes application I/O across the paths according to the failover and load-balancing policies.

4.     When a path fails, all outstanding and subsequent I/O requests shift automatically and transparently from the failed or down path to the alternative paths.
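Step 4 above, shifting I/O from a failed path to an alternative one, can be sketched in a few lines of Python; the path names are hypothetical, and real HDLM of course works at the driver level rather than in application code:

```python
# Sketch of transparent path failover: I/O goes down the first online path;
# when that path fails, requests shift to the next available path.
# Path names are invented examples, not real HDLM identifiers.

class PathManager:
    def __init__(self, paths):
        self.paths = list(paths)          # e.g. ["hba0-port0", "hba1-port1"]
        self.offline = set()

    def mark_failed(self, path):
        """A health check or I/O error takes the path offline."""
        self.offline.add(path)

    def send_io(self, request):
        for path in self.paths:           # pick the first online path
            if path not in self.offline:
                return f"{request} via {path}"
        raise RuntimeError("no online paths: I/O fails")

pm = PathManager(["hba0-port0", "hba1-port1"])
print(pm.send_io("read block 42"))        # goes via hba0-port0
pm.mark_failed("hba0-port0")              # cable pull, HBA fault, ...
print(pm.send_io("read block 42"))        # shifts to hba1-port1 transparently
```

The application never sees the failure: the same call succeeds as long as any path remains online, which is exactly what keeps mission-critical operations running.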

 

Two types of failover can happen: automatic and manual.

Failover keeps your mission-critical operations running without interruption, the use of storage assets is maximized, and business operations remain online.

 

A path can go offline for the following reasons:

1.     An error occurred on the path.

2.     A user intentionally placed the path offline using the Path Management window in the HDLM GUI.

3.     A user executed the HDLM command's offline operation.

4.     Hardware, such as cables or HBAs, has been removed.

 

You can manually place a path online or offline by doing the following:

1.     Use the HDLM GUI Path Management window.

2.     Execute the “dlnkmgr” command's online or offline operation.

The default algorithm used for load balancing is round robin. This algorithm simply distributes I/O by alternating requests across all available data paths. Some multipath solutions, such as the IBM MPIO default PCM, only provide this type of load balancing.

If we use extended round robin for load balancing, it distributes I/O to paths depending on whether the I/O involves sequential or random access:

. For sequential access, a certain number of I/Os is issued to one path in succession. The next path is then chosen according to the round-robin algorithm.

. For random access, I/O is distributed to multiple paths according to the round-robin algorithm.
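The two policies above can be sketched in Python as follows; the path names and the sequential burst length are invented illustration values, not HDLM defaults:

```python
# Sketch of the two load-balancing policies described above.
from itertools import cycle

class RoundRobin:
    """Plain round robin: alternate every request across all paths."""
    def __init__(self, paths):
        self._paths = cycle(paths)
    def next_path(self, lba=None):
        return next(self._paths)

class ExtendedRoundRobin:
    """Keep sequential I/O on one path for a burst, then rotate."""
    def __init__(self, paths, burst=4):
        self.paths = paths
        self.burst = burst      # assumed burst length, for illustration
        self.idx = 0
        self.count = 0
        self.last_lba = None
    def next_path(self, lba):
        sequential = self.last_lba is not None and lba == self.last_lba + 1
        if not sequential or self.count >= self.burst:
            self.idx = (self.idx + 1) % len(self.paths)  # rotate paths
            self.count = 0
        self.count += 1
        self.last_lba = lba
        return self.paths[self.idx]

err = ExtendedRoundRobin(["p0", "p1"], burst=4)
# A sequential run stays on one path for the burst, then rotates:
print([err.next_path(lba) for lba in range(100, 106)])
# → ['p1', 'p1', 'p1', 'p1', 'p0', 'p0']
```

Keeping a sequential burst on one path preserves read-ahead efficiency on that path, while random I/O still spreads evenly, which is the rationale behind the extended policy.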

Multiple HDLM instances can be centrally managed with Hitachi Global Link Manager (HGLM).

With HGLM we can centrally administer multiple HDLM multipath environments from a single point of control, and consolidate and present complex multipath configuration information in simplified host-centric and storage-centric views.

 

Summary:

1.     Provides a centralized facility for managing path failover, automatic failback, and selection of I/O balancing techniques through integration with Hitachi Global Link Manager.

2.     Eases installation and use through an auto-discovery function, which automatically detects all available paths for failover and load balancing.

3.     Provides one path-management tool for all operating systems, and includes a CLI that gives administrators flexibility in managing paths across networks.

4.     Provides manual and automatic failover and failback support.

5.     Monitors the status of online paths through a health-check facility at customer-specified intervals, and places a failed path offline when an error is detected.

 

 

 

Wednesday, 13 March 2013

Actifio PAS: Protection and availability storage



Last month I went to Hyderabad for technical training on a new product, Actifio PAS; above I have already mentioned what PAS stands for. The training was wonderful and the technology too is very good.

I will explain the Actifio technology: how it works, where it can be used and which section of the market can be targeted.

Actifio—Who We Are

Actifio is radically simple copy data management.

Our Protection and Availability Storage (PAS) lets businesses recover anything instantly, for up to 90 percent less. PAS eliminates siloed data protection applications, virtualizing data management to deliver an application-centric, SLA-driven solution that decouples the management of data from storage, network and server infrastructure. Actifio has helped liberate IT organizations and service providers of all sizes from vendor lock-in and the management challenges associated with exploding data growth. (definition taken from Actifio site).


So in the figure above you can easily see how we keep copying the same production data again and again, in different forms for different purposes; that's why we keep buying different hardware to store those copies, and because of that our IT budget keeps increasing just to store copies of the same data.

Now you may be thinking that there are technologies that avoid storing duplicate data again in the appliance. That's right: today there is a lot of buzz about deduplication technology in the market, and it really helps you avoid copying duplicate data again.

Let's take a quick look at how deduplication technology is arriving in the market. For example, the backup software itself may have a deduplication feature, so that you can easily save disk space by not copying duplicate data again and again; this may be a licensed feature, so again extra cost if you want to use it.

Or you buy dedicated hardware deduplication appliances, like EMC Data Domain and Quantum DXi boxes, for a dedicated deduplication backup solution. These are disk-based inline deduplication solutions that integrate with your existing backup software and take backups onto their disks. So again, a new piece of hardware and extra cost for storing a copy of the production data on the new device.

Nowadays deduplication features are also present in NAS storage, but they are not inline deduplication: they run the deduplication service on a schedule, find the duplicate data and keep pointers to it. This is not very successful, because it uses a lot of CPU resources and then a lot of manual work is required; for example, I have seen with NetApp deduplication volumes that you need to undo the deduplication on a volume if you want to increase its size. And these production storage systems (SAN and NAS) are not meant for this type of work; they are meant for serving data to users, servers and applications with the greatest performance. That is why they are bought: to centrally store the data and give good performance when fetching it.
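The "find duplicate data and keep a pointer to it" step boils down to content hashing. Here is a toy Python sketch; real dedup engines hash fixed-size disk blocks rather than short in-memory byte strings:

```python
# Toy post-process deduplication: hash each block; store unique blocks once
# and keep pointers (hashes) for the rest. Block contents are invented
# examples for illustration.

import hashlib

def dedupe(blocks):
    store = {}     # hash -> unique block data, stored once
    pointers = []  # one pointer per logical block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block   # first copy of this content is kept
        pointers.append(digest)     # duplicates become mere pointers
    return store, pointers

blocks = [b"alpha", b"beta", b"alpha", b"alpha", b"gamma"]
store, pointers = dedupe(blocks)
print(f"{len(blocks)} logical blocks stored as {len(store)} unique blocks")
```

The hashing pass over every block is exactly the CPU-heavy work described above, which is why running it on a production filer competes with serving user I/O.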

In many IT environments these production storage systems are doing a lot of extra work because of extra features like deduplication, snapshots, cloning, migration, replication and so on, and because of all those processes the performance of the production storage box decreases drastically; then the IT admin or IT manager complains that the production storage is not giving better performance.

Now Actifio PAS removes all that burden from the production storage (SAN and NAS) and comes as a backup appliance that not only does deduplication but also offers migration, cloning, mounting, snapshots, backup and restore, with default SLAs defined; if users want, they can define their own SLAs to protect their data.

The figure below shows how Actifio PAS takes this new approach.

 

“Actifio virtualizes your copy data storage applications, consolidating them to enable you to recover anything instantly for up to 90% less.”

Actifio—A New Approach 

Actifio virtualizes your copy data storage applications, providing backup, snapshot, BC/DR, test & dev, analytics and disaster recovery all through a single storage system. This approach enables you to consolidate all those functions and recover anything instantly; this is why Actifio is Radically Simple. (Definition taken from the Actifio site.)

As the image makes clear, Actifio stores the data from the production storage on its appliance and then virtualizes that copy out to different environments, saving you a great deal of time otherwise spent copying or moving data from one place to another.

Actifio’s PAS is a copy data management system that can be deployed either within the SAN fabric or “out-of-band,” over the network. Both methods have their advantages and give customers options when deploying the Actifio solution. The heart of the Actifio solution is the Virtual Data Pipeline (VDP) technology. Its function is to virtualize production data copy management, eliminating redundancies and re-purposing the unique data for multiple data management applications. VDP efficiently captures a single “gold copy” of changed data from the server and reuses it for multiple purposes, allowing applications to access copy data directly from Actifio PAS without any data movement.
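The "capture only the changed data against a gold copy" idea can be sketched as follows. This is a simplified model under my own assumptions, not Actifio's actual VDP code: the gold copy holds a hash per block, and each capture ingests only blocks whose content no longer matches.

```python
import hashlib

def capture(gold, volume):
    """Incremental ('changed block') capture: compare each block's hash
    against the gold copy and ingest only the blocks that changed."""
    changed = {}
    for offset, block in volume.items():
        digest = hashlib.sha256(block).hexdigest()
        if gold.get(offset) != digest:
            changed[offset] = block   # only this data moves to the appliance
            gold[offset] = digest     # gold copy now reflects the new state
    return changed

gold = {}                             # empty gold copy before the first capture
v1 = {0: b"boot", 1: b"data", 2: b"logs"}
assert len(capture(gold, v1)) == 3    # first capture ingests everything
v1[1] = b"DATA"                       # only block 1 changes on the server
assert list(capture(gold, v1)) == [1] # second capture moves just that block
```

The design point is that after the first full ingest, every later capture costs only as much as the data that actually changed, which is what keeps the production server and network lightly loaded.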

One of the good features of Actifio is the discovery of critical applications on a server, along with their drives, simply by installing the Actifio Connector. This is a piece of software installed on the server that needs to be backed up; it automatically discovers the critical applications and protects them under a defined SLA. The connector comes free of cost, so you no longer need to depend on separate backup software to take backups: just install the connector and you can easily back up the server's critical applications as well as its files.

It also easily discovers ESX servers and the VMs hosted on them. Those VMs can be viewed in a filesystem-style, properly structured layout, so it is easy to find out which ESX host is running which VMs, what applications each VM has, and the layout of its drives and files. That makes it easy to back up a single file inside a VM, and just as easy to restore one.

Several enterprise-class applications and platforms provide advanced interfaces for better manageability of copy data; examples include Microsoft Windows VSS and Oracle RMAN. Actifio PAS directly interfaces with these APIs to capture an application-consistent snapshot of the data and import only the changed blocks into Actifio PAS. This provides the most efficient level of data capture at the application level.

In addition to reducing the storage footprint by over 10X, Actifio PAS enables customers to use any storage device. Customers can now have their SLAs dictate the type of storage they use, rather than be constrained by their storage vendor. Many users have re-purposed their existing storage for storing data copies or have opted for lower-cost storage devices. This capability further lowers overall storage costs by over 50%.

Most production storage does not support synchronous replication beyond 100 km; Hitachi's HNAS white paper, for example, cites 100 km as its synchronous replication limit. If you go through Actifio's white paper, however, it supports synchronous replication up to 300 km, which is unique. And none of the backup deduplication disk solutions support synchronous replication at all; they all support only asynchronous replication. Actifio, your new-generation backup device, can synchronously replicate your backup data up to 300 km, so that you can achieve the fastest recovery at the time of a disaster.

Sync replication: Synchronous replication can be guaranteed between customer sites up to 300 km apart.

The reason I am telling you about Actifio's replication is that synchronous replication is mostly needed for production storage, but it usually brings a lot of cost into the picture: buying the license, doing the setup, and even then not getting 300 km of distance. With Actifio the synchronous replication license is free, so you pay no additional cost and can easily do a synchronous setup over 300 km.

Async replication: Asynchronous replication has no distance limitation and will send data over the WAN as fast as network bandwidth allows.
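The practical difference between the two modes is when the write is acknowledged. Here is a toy model of that semantic difference (my own sketch, nothing vendor-specific): a synchronous write returns only after the remote copy has the data, while an asynchronous write returns immediately and a background worker drains a queue over the "WAN".

```python
import queue
import threading

class Replicator:
    """Toy model: 'sync' acknowledges a write only once the remote side
    has the data; 'async' returns at once and replicates in the background."""
    def __init__(self, mode):
        self.mode = mode
        self.remote = []              # stands in for the remote site's copy
        self.q = queue.Queue()
        if mode == "async":
            threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:                   # background 'WAN' shipping loop
            self.remote.append(self.q.get())
            self.q.task_done()

    def write(self, block):
        if self.mode == "sync":
            self.remote.append(block) # ack only after remote has the data
        else:
            self.q.put(block)         # return before replication completes

sync = Replicator("sync")
sync.write(b"x")
assert sync.remote == [b"x"]          # remote is current the moment write returns

a = Replicator("async")
a.write(b"y")                         # returns immediately; remote may lag
a.q.join()                            # wait for the background drain to catch up
assert a.remote == [b"y"]
```

The sync mode's "ack after remote" step is also why distance matters: every write pays a round trip, so link latency (hence the 100 km vs 300 km figures above) directly limits write performance.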

You do not need to replace any of your devices; Actifio easily integrates with your existing SAN and NAS storage. By integration I mean you can present storage disks to Actifio PAS from your existing arrays, so if you have storage from multiple vendors, you can assign LUNs from those arrays via Actifio and easily present those LUNs to different servers.

The figure below shows the product packages.



 

Where this solution fits in an IT environment:



After all the solution discussion comes management, as managing a good solution can sometimes become very complex. Actifio Desktop is the GUI through which you manage the Actifio appliance: define your SLAs, run replication, easily restore backed-up data, and easily back up critical data. Below are some screenshots of the Actifio Desktop, from which you can easily see how it looks and how easily you can manage the appliance.

How to create an SLA



 





 

In the figures above you have seen how easy it is to define your own SLA, how easy it is to protect a particular application under that SLA, and then how easy it is to restore files and data. Restores are fast because the data is not copied or moved back to the client or server; instead, it is mounted back to the server, or you can create a clone copy of the data and use it for testing purposes.
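To make the SLA idea concrete, here is a minimal sketch of what a protection policy amounts to. All names and fields here are illustrative assumptions of mine; the real Actifio Desktop defines SLA templates through its GUI, not through this structure.

```python
# Hypothetical SLA template: how often each protection job runs
# and how long its copies are retained.
sla = {
    "name": "gold-db-sla",
    "snapshot":  {"every_hours": 4,  "retain_days": 7},
    "dedup":     {"every_hours": 24, "retain_days": 90},
    "replicate": {"mode": "async", "target": "dr-site"},
}

def jobs_due(sla, hour):
    """Return which protection jobs this SLA schedules at a given hour."""
    due = []
    if hour % sla["snapshot"]["every_hours"] == 0:
        due.append("snapshot")
    if hour % sla["dedup"]["every_hours"] == 0:
        due.append("dedup")
    return due

assert jobs_due(sla, 0) == ["snapshot", "dedup"]  # midnight: both jobs run
assert jobs_due(sla, 4) == ["snapshot"]           # 4am: snapshot only
```

The point of an SLA-driven model is exactly this: you state frequency and retention once, and the appliance derives every snapshot, dedup, and replication job from it, instead of you scheduling each job by hand.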

Below is a screenshot of the management window in Actifio Desktop, which shows which applications are protected and which are unprotected; you can also see system health, the space view, job history, hardware health, and much more.



 

I have now written enough about Actifio to help you understand the product. If you still want to know more, you can visit the Actifio site at www.actifio.com for more information, or you can write to me at my email address.

I also want to tell you that this product was tested by ESG Lab, and they gave it a good report. You can download that report from the ESG site or from the Actifio site to get a proper, deep understanding of how good this product is, how easily it can save a lot of your IT money, and how it can reduce your backup and restore windows.

Below is the concluding report that ESG Lab gave about Actifio.

The Bigger Truth 

Everyone is talking about the data explosion occurring in organizations around the globe. More data — and bigger data sets — are being created to drive business. What few realize, however, is that while production data is certainly growing, copies of production data are growing exponentially. Copies are made for physical backup, virtual backup, snapshots, disaster recovery, business continuity, business analytics, compliance, and test/development. The cost of managing and retaining these copies is often many times higher than the cost to store the original data. In particular, storing and managing all of the copies using separate tools and duplicate infrastructures is a tremendous expense, and it consumes much of the database, application, server, and storage administrators’ time.

Actifio PAS separates what you do with your data copies from how you store them. Why should copies have to be created differently and managed separately just because they have different purposes, copy frequencies, retention times, and recovery needs?

Actifio PAS virtualizes the management of all copies so they work for the customer instead of the other way around. Users make a single copy and use it for different purposes with different SLAs — dramatically reducing unnecessary data growth, reclaiming tier 1 storage space, and getting instant backup and recovery. Along the way, they can get rid of backup software, point solutions, dedupe appliances, tape libraries, tapes, replication tools, and WAN optimization products. Offloading copying and copy management makes a production environment more efficient. Customers gain freedom of choice because any production storage can be attached to Actifio PAS appliances to be used as the data protection store. In addition, that freedom makes offsite replication more affordable and ensures that future storage decisions aren’t dictated by today’s needs.

Through validation testing, ESG Lab was able to confirm a 29 times (2900%) capacity reduction compared with the traditional copy management approach. By leveraging deduplicated replication, we also observed a 97% reduction in the bandwidth required for replication. ESG Lab expects real-world savings to be even greater than the testing results were, as field experience has shown typical data to exhibit even better deduplication rates than the test data set.

Actifio PAS represents a new class of storage designed to simplify and streamline the IT infrastructure, and the benefits are dramatic. Most organizations make between three and 20 copies of production data. Some may have over one hundred copies. That’s great news for storage vendors, but it is a huge capital and operational expense for IT organizations.

Actifio PAS is an application-centric, SLA-driven solution that, through virtualization, decouples the management of data from storage, network, and server infrastructures. The efficiency improvements and cost reductions will benefit not only corporate IT organizations, but also service providers that can leverage these features to differentiate their offerings, prices, and margins. Although Actifio PAS is new to the market, ESG expects it to significantly affect the bottom line of any corporate IT department or service provider, as well as enable new levels of protection for “big data” sets. We look forward to following Actifio PAS as it expands in its scope and capabilities.

 

If you enjoyed my blog on Actifio, or any of my other posts, please do comment on it; your comments keep me motivated to write more and share all my knowledge with you. Do write in if you have any suggestions.