
Not only backup, but also reconstruction (#2)

Such mechanisms do not stand in the way of quickly reconstructing information that is spread across a drive, and they work at the highest speed, saving and restoring a data set at the same time. A complete copy has no such drawbacks, but it requires software that can radically reduce the amount of stored data and cooperate with a virtualized environment, ‘understanding’ and restoring data in such installations.

Backup which understands virtualization
Virtualization of storage means working in a highly consolidated environment, which is very different from an installation that runs directly on hardware. All the I/O operations from one server's virtual machines pass through one bus, and the whole load is served by one set of processors doing the same task repeatedly. Under such a massive load we have to take into account the influence of a backup task on the performance of the production environment. If the load is very high, you can expect stuttering in the work of production tasks. Manually assigning virtual devices and limiting simultaneous copies is incompatible with the automation and flexibility of a virtualized environment.
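
As a simple illustration of keeping backup load in check, the sketch below (not taken from any particular product; the copy_vm() routine and VM names are placeholders) caps how many backup streams run at once with a semaphore, so the shared bus and processors are never saturated by the backup window:

```python
import threading

MAX_PARALLEL_STREAMS = 2  # assumed limit; tune to the host's I/O headroom
throttle = threading.BoundedSemaphore(MAX_PARALLEL_STREAMS)

def copy_vm(vm_name: str) -> None:
    """Placeholder for the actual per-VM copy operation."""
    print(f"backing up {vm_name}")

def backup_job(vm_name: str) -> None:
    with throttle:  # at most MAX_PARALLEL_STREAMS copies run at once
        copy_vm(vm_name)

threads = [threading.Thread(target=backup_job, args=(vm,))
           for vm in ("web01", "db01", "app01", "app02")]  # example VM names
for t in threads:
    t.start()
for t in threads:
    t.join()
```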

No future for agents
In the classic model, agent software has to be installed with each operating system instance, so every computer runs its own agent. Transferring this backup model to a virtualized environment is possible, and the backup can still be made, but it is not an optimal solution. What is worse is the reconstruction of such an environment: a machine template with a pre-installed agent has to be started first, and only then can the data inside the installation be recovered through the agent. The process reconstructs the same information many times over. An agentless tool that communicates with the hypervisor through an API can restore data directly to blocks on the array, which is far faster and puts less load on the CPU and storage. Another disadvantage of an agent-based solution is that the backup cannot be checked until it is restored into a target environment with a running operating system; a solution that uses the hypervisor API can verify integrity directly.
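
The agentless idea can be sketched roughly as follows. The endpoint paths, JSON fields and token below are hypothetical, not any specific hypervisor's API; the point is only that the tool talks to the hypervisor, not to an agent inside each guest:

```python
import requests

HYPERVISOR = "https://hypervisor.example.local/api"   # assumed endpoint
AUTH = {"Authorization": "Bearer <token>"}            # placeholder token

def backup_vm(vm_id: str, out_path: str) -> None:
    # 1. Ask the hypervisor to snapshot the VM; no agent inside the guest.
    snap = requests.post(f"{HYPERVISOR}/vms/{vm_id}/snapshots",
                         headers=AUTH, timeout=30).json()
    # 2. Stream the frozen disk image block by block, straight off the array.
    with requests.get(f"{HYPERVISOR}/snapshots/{snap['id']}/disk",
                      headers=AUTH, stream=True, timeout=30) as resp, \
         open(out_path, "wb") as out:
        for block in resp.iter_content(chunk_size=1 << 20):  # 1 MiB blocks
            out.write(block)
    # 3. Release the snapshot so the hypervisor can clean up.
    requests.delete(f"{HYPERVISOR}/snapshots/{snap['id']}",
                    headers=AUTH, timeout=30)
```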

Deduplication and compression
Comparing backup of an agent-equipped physical device with backup of a virtualized one, some elements are similar in both environments. Both models assume skipping the contents of certain objects, such as page files or swap partitions, since those objects are recreated at each system start. Deduplication and compression are the foundations of a functional backup in both models, because many data blocks recur among the disk images of an operating system and its applications, and also within a single disk image. In each model, however, the content has to be prepared differently, respecting the specifics of the environment: instead of asking the Windows system for the location of its swap files, which a backup agent can do, that information has to be derived from details inside the virtual disk images.
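
A minimal sketch of how block-level deduplication with compression might look; fixed-size blocks and an in-memory store keep the example short, whereas real products typically use variable-size chunking and persistent storage:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096
store = {}  # hash -> compressed block: the deduplicated block store

def backup_image(path):
    """Return the recipe (list of block hashes) needed to rebuild the image."""
    recipe = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:          # store each unique block only once
                store[digest] = zlib.compress(block)
            recipe.append(digest)
    return recipe

def restore_image(recipe, path):
    with open(path, "wb") as f:
        for digest in recipe:
            f.write(zlib.decompress(store[digest]))
```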

Laborious reconstruction or an immediate start-up?

If restoring a copy involves importing many terabytes of data, the time to reconstruct the resources will be measured in hours. Sometimes such operations have to be done anyway, as happens after serious incidents. Even so, most restore tasks from a backup concern single virtual machines or single objects. These actions can be sped up if the software allows a machine to be mounted quickly, directly from the backup files, or restored from deduplicated resources. There is an essential dependence on the way deduplication is carried out. If the whole backup space is deduplicated, rehydrating the data into its original form relies on the complete archive that contains all the virtual machines. A copy of a single machine kept this way cannot be transferred between installations without first rehydrating the data.

Deduplication carried out at the level of a single task means the copy does not depend on a global resource store, so it is portable. The price for this is a bigger need for disk space, as global deduplication works more efficiently. A copy made with per-task deduplication enables not only quick reconstruction of a single machine but also the possibility of mounting a virtual machine directly from the backup files. The backup software can immediately present the reconstructed content of a machine as a disk image to the hypervisor, running the machine straight from the data saved in the backup copy. In this way you can test the functionality of the production environment in a standby center.
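
For illustration, 'instant start' can be approximated with stock Linux tooling rather than any particular backup product: qemu-nbd can expose a disk image straight from the backup repository as a block device, so a test machine can use it without first copying terabytes back. The paths below are examples; this needs root rights and the nbd kernel module:

```python
import subprocess

BACKUP_IMAGE = "/backup/repo/web01-disk0.qcow2"  # assumed repository layout
NBD_DEVICE = "/dev/nbd0"

subprocess.run(["modprobe", "nbd"], check=True)
# Present the backed-up image as a block device, read-only so the backup
# copy stays intact while the machine is being tested.
subprocess.run(["qemu-nbd", "--read-only",
                f"--connect={NBD_DEVICE}", BACKUP_IMAGE], check=True)
# ... boot a test VM from NBD_DEVICE or mount a partition from it ...
subprocess.run(["qemu-nbd", "--disconnect", NBD_DEVICE], check=True)
```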


Not only backup, but also reconstruction (#1)

While planning protection against damage, many companies consider and test only the process of copying data to other drives or locations. Omitting the process of copy reconstruction is a serious mistake.

The main task of a backup system is copying data from a production system to another resource, drive or even location. The technology has evolved: starting from transferring information to tape by means of simple devices, through systems that perform incremental and differential backups, and ending with a radical reduction of the volume of stored data by means of deduplication and compression. The source of a backup has also changed, ranging from large mainframe-class machines and big UNIX servers to broad environments of distributed data processing on Linux and Windows servers or in virtualized environments. Despite big changes in the data processing model, backup is still focused on copying data and restoring the functionality of a system within the same data processing structure. Virtualization has changed the vision of work in IT, as it has radically sped up the process of system implementation. The old backup model does not keep up with this process.
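
As a toy illustration of the incremental model mentioned above, the sketch below copies only files changed since the previous run; the paths and the timestamp state file are illustrative, not any real product's format:

```python
import os
import shutil
import time

SOURCE = "/srv/data"               # illustrative paths
TARGET = "/backup/data"
STATE_FILE = "/backup/.last_run"   # timestamp of the previous backup run

last_run = 0.0
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        last_run = float(f.read())

for root, _dirs, files in os.walk(SOURCE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) > last_run:   # changed since the last run
            dst = os.path.join(TARGET, os.path.relpath(src, SOURCE))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)             # copy data plus metadata

with open(STATE_FILE, "w") as f:
    f.write(str(time.time()))
```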

What matters most is not ‘backup’, but ‘restore’

The effectiveness of a backup system can be checked in only one way: by restoring the data in the way the system was designed to do it. In many implementations the first tests revealed that restoring data took far longer than expected. Performance problems appeared, along with compatibility problems with the software or drives in use. Sometimes there are even critical failures: a completed copy turns out to be incomplete or damaged, and there is no way to find out other than by restoring it to a test environment. Problems can also be connected with the throughput of the links between the backup servers, or with the process of retrieving information from a tape drive.
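
One simple, product-neutral way to catch a damaged or incomplete copy early is to record checksums at backup time and compare them after a test restore; the directory paths below are illustrative:

```python
import hashlib
import os

def checksum_tree(root):
    """Map each file's relative path to the SHA-256 of its contents."""
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            sums[os.path.relpath(path, root)] = digest
    return sums

expected = checksum_tree("/srv/production")    # recorded at backup time
restored = checksum_tree("/mnt/test-restore")  # computed after a test restore

missing = expected.keys() - restored.keys()
corrupt = {p for p in expected.keys() & restored.keys()
           if expected[p] != restored[p]}
print(f"missing files: {len(missing)}, corrupt files: {len(corrupt)}")
```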

# to be continued


Powerful storage system

Network attached storage is a powerful storage system which provides centralized data storage management. But to make the most of the possibilities of a NAS solution, it is crucial to choose the right NAS software.

Nowadays, many operating systems support not only basic NAS functionality but also more complex solutions. Therefore, setting up a NAS system on a standard operating system is not a problem. Additionally, every NAS server ships with an operating system for network storage already installed and configured.

NAS hardware is a relatively cost-effective solution, so you can minimize the costs of adopting and upgrading your storage resources. NAS software simplifies network administration services, reduces the complexity of the system, automates many tedious tasks and satisfies the requirements of high performance and reliability.

A growing number of enterprises already deploy NAS technology, even if only with devices such as CD-ROM towers connected directly to the network. NAS software enables one of the most attractive characteristics of the system: expandability. If more storage space is required, the user can add another NAS device and expand the available storage. NAS also brings an additional level of fault tolerance to the network. One such fault-tolerant solution is RAID: disk arrays can be used to make sure that the NAS device does not become a single point of failure.
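
As a toy example of why parity-based RAID removes that point of failure, the XOR arithmetic below rebuilds a lost disk's data from the surviving disk and the parity block (real arrays do this per stripe, across many disks):

```python
def xor_blocks(*blocks):
    """XOR same-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk1 = b"AAAA"                      # data block on disk 1
disk2 = b"BBBB"                      # data block on disk 2
parity = xor_blocks(disk1, disk2)    # parity block written to a third disk

# Disk 2 fails: rebuild its data from the surviving disk and the parity.
rebuilt = xor_blocks(disk1, parity)
assert rebuilt == disk2
```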

The main processes served by NAS software are data deduplication, backup storage, system security, disaster recovery, automatic failover and NAS storage virtualization. The software also supports iSCSI functionality, so it is possible to create a SAN-NAS hybrid system. This means you can add iSCSI target and initiator functionality to a NAS storage device.


QNAP servers

QNAP Systems has presented two eight-disk NAS servers. The TS-EC879U-RP is equipped with a quad-core Intel Xeon E3-1225 processor (3.1 GHz), 4 GB of DDR3 ECC RAM, a redundant power supply, four Gigabit LAN ports and two ports for connecting an extra NIC or mass storage. The TS-879U-RP has a dual-core Intel Core i3-2120 processor (3.3 GHz), 2 GB of DDR3 RAM, two Gigabit LAN ports and two expansion ports. Both devices fit a 2U case.

Apart from offering advanced iSCSI functions used in IP SANs, both devices also handle several simultaneously running iSCSI targets and provide the full functionality of a NAS server. QNAP is one of the world's leading providers of Network Attached Storage and Network Video Recorder solutions.


NAS OS

Network Attached Storage appliances have an architecture designed for one purpose: to serve data files to clients in heterogeneous network environments. A NAS system managed by a NAS OS is optimized for file input/output activity, so its file-serving performance is greater than that of a general-purpose server designed to perform a multitude of functions. Capacity and transfer speed also benefit from the storage server's specialisation. NAS enables you to locate storage where it is needed on the network and provides clients with direct, server-independent communication with storage resources. Localizing file I/O traffic makes for a more efficient use of network resources.

A NAS appliance connects directly to your existing LAN. The NAS OS present in the storage device transfers data over the standard TCP/IP or IPX network protocols, using standard file-sharing protocols such as SMB/CIFS, NCP, NFS, FTP or HTTP. No additional software or client licenses are required for clients to access storage. This lets you implement an attractive storage solution while preserving existing network investments. Management of the NAS OS can be performed from anywhere on your network, or over the Internet using a standard web browser.
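
To illustrate the 'no additional client software' point, a Linux client can reach the NAS with nothing but its standard mount tools and the protocols listed above. The server name, export and mount points below are examples; the CIFS mount needs the cifs-utils package and root rights:

```python
import subprocess

# NFS: mount an export published by the NAS (example name and paths).
subprocess.run(["mount", "-t", "nfs",
                "nas.example.local:/export/projects", "/mnt/projects"],
               check=True)

# SMB/CIFS: mount a Windows-style share from the same device.
subprocess.run(["mount", "-t", "cifs",
                "//nas.example.local/public", "/mnt/public",
                "-o", "username=guest"],
               check=True)
```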

Although it is not possible to eliminate server downtime entirely, whether for planned maintenance or due to unexpected crashes and outages, the NAS OS provides tools for protecting valuable storage resources. The NAS OS makes storage operate independently of network servers and communicate directly with clients. Thanks to this and to automatic failover, files remain available in the event of network server downtime. Separating storage resources from the server decreases both the number of components and the amount of file I/O activity on the server, reducing the probability of downtime and increasing the reliability of the network and application servers. Such a solution provides a more reliable and efficient network storage system.


SAN Software

SAN software is an important element of the whole storage area network. It consists of many solutions and applications which enable successful management of the system. Since SANs are becoming more complex and powerful, they are also more difficult to implement. Appropriate SAN software reduces these difficulties and makes implementation, configuration and management of the system a lot easier.

However, since users have different needs and requirements, not all types of SAN software are universal; some are tailored to particular business environments, so the choice and configuration of SAN software cannot be made carelessly. Much of the technology and many of a SAN's basic functions are embedded in its software. The applications used in SAN management control and protect the data storage systems and ensure comfort for their users.

Unauthorized access to information storage systems is a common and serious problem, which is why SAN software always addresses this issue. Apart from components that deal with data protection, file deduplication, backups and extensive disaster recovery, there is yet another safeguard. For example, in an environment where resources are not contained, every Windows NT server would see every available LUN and try to take it over. Many types of software therefore include a containment mechanism which keeps servers from gaining unauthorized or accidental access to certain resources in the SAN. SAN software can also be beneficial in another respect, one especially important for a company's budget and outgoings.

By implementing a technique called hierarchical storage management, data can be recognized, monitored and classified on the basis of its importance. Files that are accessed less frequently can be moved to slower and cheaper devices, whereas those that are used all the time are kept on devices which are more costly to maintain but allow very quick access.
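
A small sketch of that tiering idea: files untouched for a set period are demoted to the cheaper tier. The thresholds and paths are illustrative, and real HSM systems also leave stubs or links behind so the move is transparent:

```python
import os
import shutil
import time

FAST_TIER = "/mnt/fast"      # expensive, quick storage
SLOW_TIER = "/mnt/archive"   # cheap, slower storage
AGE_LIMIT = 90 * 24 * 3600   # demote after 90 days without access

now = time.time()
for name in os.listdir(FAST_TIER):
    path = os.path.join(FAST_TIER, name)
    if os.path.isfile(path) and now - os.path.getatime(path) > AGE_LIMIT:
        shutil.move(path, os.path.join(SLOW_TIER, name))  # demote cold file
```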


iSCSI target

A storage area network infrastructure allows users to create multiple iSCSI targets. iSCSI is the leading protocol used in SAN systems, faster and more efficient than other solutions such as Fibre Channel, which is also used in SAN networks.

iSCSI target features can be created on a network device by means of iSCSI software. To turn a device into an iSCSI target, you can use either the iSCSI software available in the SAN server's operating system or applications available separately from SAN devices. A device with the iSCSI target feature is able to receive SCSI commands sent via iSCSI from an iSCSI initiator. Moreover, most modern operating systems are already equipped to provide the services of iSCSI targets and initiators.
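
As one concrete open-source example (not the only software that can do this), the Linux kernel's LIO stack managed with the targetcli tool can turn an ordinary server into an iSCSI target. The IQN, file path and size below are examples; this must run as root:

```python
import subprocess

IQN = "iqn.2024-01.local.example:storage.disk0"  # example target name

def targetcli(*args):
    subprocess.run(["targetcli", *args], check=True)

# 1. Back the future LUN with a file on the server's local storage.
targetcli("/backstores/fileio", "create", "name=disk0",
          "file_or_dev=/srv/iscsi/disk0.img", "size=10G")
# 2. Create the iSCSI target itself.
targetcli("/iscsi", "create", IQN)
# 3. Export the backstore as a LUN in the target's first portal group.
targetcli(f"/iscsi/{IQN}/tpg1/luns", "create", "/backstores/fileio/disk0")
```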

Using the iSCSI target feature allows users to consolidate, share and access all of their storage resources inside a high-availability system. iSCSI is definitely the more cost-effective solution when one considers building an iSCSI SAN.

An iSCSI target can also refer to a virtualised device, since many network storage devices can be virtualised machines that keep their resources on multiple devices, often many kilometres away from the iSCSI initiator. In this way, a client can access remote iSCSI storage as a local disk resource.

Using an iSCSI SAN allows you to share a server's disk, partition, VMDK file, cluster, ESX or ISO file with iSCSI targets and initiators. iSCSI targets offer safe storage sharing and free the user from worrying about the safety of data shared on the server, because everything is recovered after the client disconnects. SAN system applications with iSCSI help maintain business continuity and provide disaster recovery by means of automatic failover and data deduplication technologies.
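
For completeness, here is the client side sketched with the standard open-iscsi initiator: after discovery and login, the remote LUN appears as a local block device. The portal address and IQN are examples; this requires the open-iscsi package and root rights:

```python
import subprocess

PORTAL = "192.168.1.10"                          # example SAN portal address
IQN = "iqn.2024-01.local.example:storage.disk0"  # example target name

# 1. Ask the portal which targets it offers.
subprocess.run(["iscsiadm", "-m", "discovery",
                "-t", "sendtargets", "-p", PORTAL], check=True)
# 2. Log in; the target's LUNs then appear as local block devices.
subprocess.run(["iscsiadm", "-m", "node", "-T", IQN,
                "-p", PORTAL, "--login"], check=True)
```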
