Looking back at what was important in networking and data storage over 20 years ago
I was recently contacted by Google to be told that my blog created at blogspot.com - which was later acquired by Google - was to be closed down and deleted as part of its data housekeeping efforts.
No complaints from me, I hadn't even accessed it since 2007! I thought I would repost the whole continuous blog here for a little fun, and to allow my features for various magazines and websites - which no longer exist - to be saved and read by those who like to read such things as part of their research, or for historical interest.
Maybe there are those out there who would like to take issue with some of my points - over 20 years later!
Happy reading
Antony Savvas
Tuesday, July 23, 2002
STORAGE AREA NETWORKS
Communications News - Summer 2002
Antony Savvas

In the present economic and political climate the need for the protection of valuable data has never been more pronounced, but how should companies approach the technical demands of storing, managing and backing up their corporate assets?

Analyst Gartner Group says the average company will have collected 120 terabytes of customer data by 2004. This is the equivalent of a staggering 4,560 miles of full filing cabinets, which, incidentally, would stretch the entire length of the Great Wall of China.

E-commerce activities or even an e-mail system with a small number of highly active users have the potential to bring an organisation’s storage and back-up strategy to its knees. Analyst IDC says nearly 90% of businesses use e-mail, creating 9.7bn messages every day. And e-mail has proved so useful to commerce and addictive to consumers that analysts predict more than 35bn e-mails will be sent per day by 2005. When considering that it is estimated that one in 17 e-mails is stored for a significant length of time, the burden placed on network managers and their infrastructure becomes apparent. And if European Union proposals to make companies keep a record of all outgoing and incoming e-mails, as well as Internet URLs visited by staff, for at least seven years become law, this situation will only get worse.

SANs AND NAS

An increasing number of companies are therefore implementing storage area networks (SANs) to cope with the data explosion. But what exactly is a SAN, and how does it differ from its cheaper rival, the NAS (network attached storage) system?

A SAN should provide high stability and availability, fast throughput, and large amounts of storage that can be accessed by various machines while minimising degradation of network and system performance. SANs support the interconnection of servers, clients, disk storage (RAID and/or non-RAID), CD/DVD storage libraries, and tape drives and libraries into a manageable solution. SANs should also provide inter-platform communication and management and should be designed to scale to meet the growing needs of the user. Various analysts, including IDC, predicted that the SAN industry would be worth around $10bn by this year, but so far it is mainly larger firms that have taken the plunge.

A SAN is typically based on the Fibre Channel protocol, but can also use protocols like ATM (asynchronous transfer mode), Fast Ethernet (100Mbps) and Gigabit Ethernet, with 10 Gigabit Ethernet soon to follow. Because a SAN running on Fibre Channel is complicated and expensive to implement, smaller companies, or those not wanting to choose a large dedicated system now, have opted for a NAS solution instead.

A SAN provides a scalable storage environment where data is shared and stored on the network. Its large capacity and the high cost of the many components forming its architecture make it an expensive option unless the company’s storage needs are in the range of multiple terabytes. An alternative NAS system shares several qualities with a SAN, but NAS acts like another client on your LAN – rather than connecting servers to one another – and is cheaper to install and run. A NAS solution provides sharable storage space to your existing network and can be a single device or a family of devices, rather than a sub-network devoted to storage.
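To put the e-mail figures above in context, here is a minimal back-of-the-envelope sketch of the long-term storage those worldwide volumes imply. The message counts and the one-in-17 retention rate come from the article; the 50KB average message size is an assumption added purely for illustration.

# Rough check of the e-mail storage burden described above.
# Message volumes are the article's figures; the average size is an assumption.

DAILY_MESSAGES_NOW = 9.7e9         # messages per day (IDC figure cited above)
DAILY_MESSAGES_2005 = 35e9         # predicted messages per day by 2005
STORED_FRACTION = 1 / 17           # one in 17 e-mails kept for a significant time
AVG_MESSAGE_SIZE_KB = 50           # assumed average size, including attachments

def stored_terabytes_per_day(daily_messages: float) -> float:
    """Terabytes of long-term e-mail storage accumulated per day."""
    stored_bytes = daily_messages * STORED_FRACTION * AVG_MESSAGE_SIZE_KB * 1024
    return stored_bytes / 1024 ** 4  # bytes -> terabytes

if __name__ == "__main__":
    print(f"Today: ~{stored_terabytes_per_day(DAILY_MESSAGES_NOW):.0f} TB retained per day")
    print(f"2005:  ~{stored_terabytes_per_day(DAILY_MESSAGES_2005):.0f} TB retained per day")

On those assumptions the figures work out at roughly 27TB of retained mail per day now, rising towards 100TB per day by 2005, which is why even a modest e-mail system can strain a back-up strategy.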
NAS architecture often consists of an operating system and several ports allowing the addition of multiple hard disks to increase storage capacity. The system software is embedded, and software upgrades are often free via Web downloading, which eliminates costly upgrades. The user pays only for those hardware and software components necessary for network storage.

Benefits of NAS

The key advantages of NAS over SANs are:
- Cost savings
- Implementation speed
- Deployable on any LAN
- Low maintenance costs

Emerging Protocols

But whether a user chooses a SAN or NAS solution, there are various other factors to consider involving emerging protocols, back-up technology, and storage management. In the case of protocols, the biggest factor is the widespread adoption of the IP protocol and increased use of the Internet, which potentially makes the reach of a SAN infinite. In addition, much has been made of 10 Gigabit Ethernet as a fast and easy-to-deploy data transport protocol, but one of its greatest potential applications is as a storage solution. The fact that corporates already rely on Ethernet technology means that 10 Gigabit Ethernet is a serious threat to the established position of Fibre Channel when it comes to SANs.

Paul Hammond, vice-president of professional services at storage solutions company BI-Tech, says: “10 Gigabit Ethernet switched networks should prove excellent for shipping storage data over wide areas, even complementing existing fibre channel-based infrastructures.” But Hammond points out: “One has to remember that as Ethernet wire speeds have increased, the development of Fibre Channel has never been far behind.” Hammond is certainly right about Fibre Channel’s development, as 10 Gigabit Fibre Channel is already in the offing.

As far as future-proofing the network is concerned, 10 Gigabit Ethernet is particularly important when considering the impending introduction of even newer protocols. The two main ones to look out for are Fibre Channel over IP (FCIP) and iSCSI (see box). iSCSI basically uses the same architecture as Gigabit Ethernet solutions, so the interoperability question is more easily addressed in networks that come to rely on mixed transport protocols. Analyst IDC estimates that usage of Gigabit Ethernet ports will increase more than fivefold by 2004, so the companies buying these ports will logically want to make sure that the investment in Gigabit Ethernet is fully realised by adopting or expanding towards 10 Gigabit Ethernet.

iSCSI has now been accepted by the industry as a potential solid standard, but so far the industry hasn’t been promoting it too hard. This is partly because it doesn’t want to kill its existing sales as a result of users holding out for something better, and partly because storage over plain Fibre Channel probably has a good three years still left in it before the first iSCSI products come on stream. For those users suffering from investment squeezes implemented by the board as a result of the downturn, maybe it’s no bad thing to use a solution like 10 Gigabit Ethernet now, seeing that it will be easily interoperable with iSCSI when it arrives.
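The box that follows describes iSCSI as SCSI commands and data encapsulated in IP packets for transmission over Ethernet. As a purely illustrative sketch of that idea only - the framing and the target name below are invented for clarity and do not follow the real iSCSI PDU format - the shape of the approach looks something like this:

# Simplified illustration of the idea behind iSCSI: a SCSI command is wrapped
# in a payload and carried over an ordinary TCP/IP connection instead of a
# dedicated SCSI or Fibre Channel link. The framing is a toy format, not real iSCSI.

import socket
import struct

def wrap_scsi_command(lun: int, cdb: bytes) -> bytes:
    """Prefix a SCSI command descriptor block (CDB) with a toy header."""
    # Toy header: 2-byte LUN, 2-byte CDB length, then the CDB itself.
    return struct.pack("!HH", lun, len(cdb)) + cdb

def send_over_ip(host: str, port: int, payload: bytes) -> None:
    """Ship the wrapped command over a plain TCP connection."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

if __name__ == "__main__":
    # READ(10) CDB asking for 8 blocks starting at logical block 0 (illustrative).
    read10 = bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 8, 0])
    frame = wrap_scsi_command(lun=0, cdb=read10)
    # send_over_ip("storage-target.example", 3260, frame)  # hypothetical target
    print(f"{len(frame)} bytes ready to send over Ethernet/IP")

The point is simply that once block-level commands travel inside ordinary IP traffic, they can ride on whatever Ethernet plumbing the company already owns, which is why iSCSI and 10 Gigabit Ethernet are discussed together above.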
Dr Geoff Barrall, chief technical officer at storage systems company BlueArc, says: “10 Gigabit Ethernet is giving the Internet a ‘leg-up’ when it comes to storage applications in metropolitan areas, but iSCSI will be easier to use, as well as cheaper, and will make ‘storage over Ethernet’ a reality.”

A spokesman at Nortel Networks sees room in the market for both 10 Gigabit Ethernet and present and future versions of Fibre Channel, even though Fibre Channel is viewed by many as being more reliable than Ethernet. He says: “I feel that in the mid-term, going to 2005, Fibre Channel will be the only acceptable protocol for storage traffic when no loss of data and a deterministic network is required to keep e-business alive.” Nortel believes this analysis, however, mainly applies to the big data centres that already form most existing Fibre Channel business. The company says that 10 Gigabit Ethernet will find an immediate market in NAS systems and in SANs which can afford to live with the less than perfect performance currently offered by Ethernet.

Fibre Channel Replacements:
- 10 Gigabit Ethernet: faster than plain Fibre Channel, although 10 Gigabit Fibre Channel is on the horizon.
- iSCSI: SCSI commands generated from user requests, and their data, are encapsulated into IP packets for transmission over an Ethernet connection. iSCSI overcomes plain SCSI’s latency problem and 50m distance barrier.
- FCIP (Fibre Channel over IP): developed by the Internet Engineering Task Force, it enables transmission of Fibre Channel information by tunnelling data between SANs over IP networks. Particularly suited to data sharing between geographically disparate organisations.
- iFCP (Internet Fibre Channel Protocol): a hybrid solution, essentially a version of FCIP that moves Fibre Channel data over IP networks using iSCSI protocols. Designed to interconnect existing Fibre Channel SANs.

Back-Up

As far as data back-up is concerned, users have to consider a migration away from unconnected tape drives. But tape is still king for most users, and there are a number of new solutions to help carry on flying the flag. Nick Charles, European vice-president at Overland Data, says: “Whether it’s DLT1, AIT, DLTtape, Super DLTtape or LTO (Linear Tape Open), there’s no escaping the fact that despite years of industry observers predicting its death-knell, tape continues to go from strength to strength.” Overland’s Neo Series also has a Fibre Channel option for LAN-free or server-free back-up in a SAN environment, and this model has been followed by other suppliers too.

Mark Boulton, sales and marketing director at high-end ‘jukebox’ storage back-up company Gresham Enterprise Software, says: “The biggest development over the last 12 months has been the introduction of LAN-free products. These products allow a server to back up direct to tape via the SAN. The benefit of this to the user is that no traffic moves over the LAN, saving on bandwidth.” Boulton says: “As SANs have matured and LAN-free products have been introduced, the user now has a viable way of managing the explosion of data taking place. The cost of archiving to tape continues to fall and the introduction of higher density media means tape continues to be the most cost effective method of archiving data.”

Offsite Back-up and Restore

Hiving off the responsibilities of back-up to another company is another option for dealing with growing amounts of tape, however. InTechnology says it has 40 companies signed up to its remote back-up solution using its wide area, private network infrastructure built around its VBAK service.
The company’s entry-level 500GB data service for SMEs costs £15,000 to set up, which includes the £3,000 cost of setting up a leased line to gain access to the data. The ongoing charge is £3,500 a month, and InTechnology claims its solution works out 10% cheaper than a company managing its own tape solution. InTechnology claims it can provide a faster back-up of data, as well as instant restore when needed. The latest companies to choose InTechnology’s solution are the Heritage Lottery Fund, Porsche UK, Teather & Greenwood Holdings, Newsplay International, and IMG UK.

The use of fast optical networking technologies over metropolitan area networks has also created an opening for storage service providers (SSPs). The benefits of using an SSP include the availability of trained staff, a full suite of services, the ability to easily add more capacity, and the offer of robust disaster plans. The drawbacks, however, include whether an organisation is willing to trust an outside partner with its mission-critical data and security, and potential problems or fears around quality of service and lack of local control. The wider use of optical technologies is also reliant on an extensive fibre network in the MAN, and the lack of this in many areas means the market is currently restricted. Alan Zeichick, principal analyst at Camden Associates, predicts the growth of the SSP market will be slow. He says: “While there is no technical reason why many companies couldn’t migrate to an SSP, companies will still be slow to trust their data to an outside provider’s care.”

Storage Management

With storage management, the companies selling the storage solutions didn’t seem to realise, or didn’t care, that certain customers had specific needs when it came to managing their data. Companies like EMC, Compaq, Hitachi, StorageTek, and IBM, which sold the big expensive boxes, didn’t feel obliged to configure their solutions to enable them to work with rival products, for instance, even though it is established practice in most IT departments to work with solutions from different vendors. And although it was clear that the massive increase in data being held by users would soon create the need for data management solutions that could be used flexibly, hardly any R&D money has been spent on this area by the big box shifters.

Although it is not in the short-term interests of the storage giants to allow their customers to more easily use hardware from different vendors, thereby cannibalising their own sales, the giants are at least looking at supplying easier management solutions for their own products. The Holy Grail of storage management, however, remains the mixing and matching of storage boxes on one network through one hub – a process known as “virtualisation” – but the number of solutions out there which allow this is very limited.

Wayne Budgen, senior consultant at business continuity company HarrierZeuros, says: “Centralised management of a SAN through a single management interface increases efficiency. Potentially, hundreds of servers and devices can be managed and supported as a single entity, lowering the cost of storage management on a per unit basis, whilst increasing the functionality available to administrators.” Susan Clarke, senior research analyst at Butler Group, says: “As far as virtualisation is concerned, DataCore was the first to market with a product, but unfortunately is likely to suffer because of its size and low market profile.
“Butler Group believes DataCore is ideally positioned to be acquired by one of the large storage players that are currently playing catch-up as far as virtualisation is concerned.” Big storage player Fujitsu Softek is among many companies taking advantage of DataCore’s solutions to give users the flexibility they need. Other players in the virtualisation market include ExaNet, StorageAge, and Hewlett-Packard. HP bought into the market with its recent acquisition of StorageApps. EMC has also recently launched a solution.

Virtualisation software usually involves the suppliers logging all the set-up and connection methods used by the main storage vendors and creating a system to connect all the different storage systems onto one platform. This process obviously takes time and must also be constantly updated as more suppliers and additional storage systems come onto the market. It would therefore be far more helpful to users to be able to work on a single storage standard based on IP. The two main alternatives to the current interoperability debacle are Fibre Channel over IP (FCIP) and iSCSI. The various vendors are alleviating problems by agreeing through the Fibre Channel Industry Association to work to a minimum set of configuration standards to improve interoperability. But this process will obviously be limited and not all-encompassing.

Chris Atkins, product manager for storage at Sun Microsystems, says a storage management solution should offer efficiency, cost allocation, and management reporting. Regarding efficiency, the software should be able to measure capacity, what percentage is actually holding data, and how much of that data is inactive and could be archived off. With cost allocation, the question is which applications, departments, and users are using what amounts of storage? With this ability, says Atkins, a “chargeback” should be achievable, based on an allocation of costs in proportion to usage. And with management reporting, users need to know at what speed capacity is increasing, which applications are close to running out of capacity, and whether any applications’ capacity needs have started growing unexpectedly.

Requirements of a storage management solution:
- The measurement of capacity and efficiency
- Cost allocation
- Management reporting
- Management of back-up and restore functions

Although there are many obstacles to overcome when implementing a SAN, the market conditions which have seen a fall in prices for hardware, software and bandwidth are bound to tempt many new entrants into the SAN market.

Copyright Protected - Antony Savvas - [email protected]

posted by Antony 9:24 AM
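Atkins’ “chargeback” suggestion in the article above amounts to allocating the storage bill across departments in proportion to the capacity each one uses. A minimal sketch of that arithmetic, using hypothetical department names and figures purely for illustration, looks like this:

# Minimal sketch of proportional chargeback: storage costs are split across
# departments pro rata to the capacity they actually use. Figures are hypothetical.

from typing import Dict

def chargeback(total_monthly_cost: float, usage_gb: Dict[str, float]) -> Dict[str, float]:
    """Split a monthly storage bill across departments in proportion to usage."""
    total_usage = sum(usage_gb.values())
    return {dept: total_monthly_cost * gb / total_usage for dept, gb in usage_gb.items()}

if __name__ == "__main__":
    usage = {"finance": 900.0, "e-commerce": 2_400.0, "e-mail": 1_700.0}  # GB in use
    for dept, cost in chargeback(total_monthly_cost=10_000.0, usage_gb=usage).items():
        print(f"{dept}: £{cost:,.2f}")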
Friday, May 24, 2002
SUPPLIER STRATEGIES
Lightwave Europe – May 2002
Antony Savvas

Telcos and service providers met at the IDC European Telecoms Forum in Rome last month (April 2002) to discuss how they could squeeze more out of their customers. The necessary move to find new, higher margin customer solutions was emphasised by IDC analysts. Mark Winther, IDC group vice president, worldwide telecommunications, said there was now an enterprise buying climate because telecoms prices were dropping 25% to 30% a year. He said bandwidth was plentiful, Internet circuits were better than ever, and there was a renewed emphasis on customer service from the telcos and service providers.

Some of the key industry areas which delegates were asked to consider were IP telephony, convergence, and broadband access. IDC analysts at the Forum presented research which showed that there were opportunities for both telcos and hardware suppliers to take advantage of growth in all three areas, providing they got their marketing and technical strategies right.

In the case of convergence, IDC said only 11% of companies in Europe had so far fully converged their voice and data in their WAN, but that another 11% planned to do so in the next 12 months. A further 37% had plans to make the leap at a later date, although 41% had no plans to. On the face of it, these figures aren’t exactly startling, but that’s probably because for most companies a tidy conversion from various infrastructures into one isn’t an option. Instead, says IDC, companies are throwing everything they have into the pot – including X.25, ATM and Frame Relay – to enable as many users as possible to enjoy “pervasive” computing and connectivity from any location. Winther said a typical pervasive computing contract would see the provider guarantee QoS to 800 to 1,200 connection points, demonstrating the potential business to the industry, if it can convince companies of the benefits of moving to the operation of a single network instead of two separate ones.

The gradual approach taken by companies can also be seen in the take-up of IP VPNs, said IDC. When a company adopts an IP VPN it doesn’t normally connect every employee to it immediately, but does so in stages, depending on what network infrastructure it currently has. In 2000, only 14% of European companies had an IP VPN, said IDC. By the end of this year there will be a leap to 42%. These corporate IP VPNs are being used by companies to create an extranet for trading partners, to connect remote workers to the corporate network, provide basic Internet access to employees, connect mobile workers, and build an intranet for employees.

As far as Voice over IP is concerned, Pim Bilderbeek, IDC vice president, European e-business and networking research, said firms had a choice as to how they tackled the technology. They could opt for an IP-enabled PBX, an IP/circuit-switched PBX, or a total replacement solution in the form of a softswitch/fully packetised PBX. IDC says 25% of companies have now adopted VoIP in one form or another and that there will be 25bn minutes of business IP telephony traffic by 2005. The current figure is around 5bn minutes. The key message for the delegates, though, was that they should not get bogged down trying to sell new technology, but should instead focus on the business benefits of using it, in the form of new and convenient applications built around services, like unified messaging, as well as cheaper calls.
On the broadband side of things, Bill Pearon, Alcatel global marketing director, said only 0.3% of lines in Europe – 500,000 – have so far been unbundled in the local loop. Pearon said that up to now telcos and governments had mainly concentrated on the question of broadband infrastructure builds and legislation, but that there was now a general move towards paying more attention to convincing businesses and consumers as to why they should have it.

He said, though, that telcos had to do more to make it cheaper for themselves to sign up DSL customers. He said a typical scenario would be that out of every 100 potential customers who contacted a provider about DSL, only 20 would actually subscribe. He cited lack of initial information about DSL from providers, technical problems, poor service, and competition from areas like cable getting to the customer first, for this poor take-up. Pearon said telcos would have to aim for at least a 50% hit rate from initial enquiries to make DSL profitable, as enquiries cost providers money. Pearon added that, when it came to marketing DSL, providers would also have to offer segmented broadband packages to business, distinct from the straightforward high-speed connection rates offered to consumers. Service level agreements for broadband should be included in such business packages.

For both consumers and businesses, though, telcos had to aim for a rich multimedia services package over the next few years, taking in services over DSL like VPNs, Voice over DSL, video conferencing, video on demand, multi-player gaming, MPEG music, and broadcast television. Telecom Italia took up the theme of segmented services by presenting to delegates its new approach to DSL access. It is offering DSL to customers who aren’t necessarily high Internet users, by charging cheaper prices for a fixed number of connected hours every month, as part of a strategy to grow its DSL market to cover not only access, but also services.

How service providers can make full use of Ethernet:

The IDC event was used by Cisco and its partners to explain how service providers across Europe are using Gigabit Ethernet to connect users to broadband multimedia services in urban areas. Mark de Simone, Cisco vice president, technology solutions and corporate marketing, told delegates: “The benefits of Internet business solutions (IBS) have to be extended to SMEs and the residential market, and our strategy is to promote metro Ethernet.” De Simone said that a wider take-up of Internet business solutions via broadband could increase Europe’s GDP (gross domestic product) by 30% when considering EU and OECD financial parameters, but that telcos would only build such networks if they could see a quick return.

He said local traffic was changing, with 80% now routed in the metro area and the remaining 20% in the WAN. He said Gigabit Ethernet was the cheapest way to deliver bandwidth for either LAN, MAN or WAN applications on a port by port basis. His estimates on the growing importance of Gigabit Ethernet showed that by 2005, Gigabit Ethernet would be 10 times cheaper than ATM, and not far off 100 times cheaper than SDH. Cisco’s strategy now involves selling its routers and switches to service providers and telcos in the metro market to enable them to connect businesses and multi-tenanted units (MTUs) to deliver high-speed Internet connections for voice and data.
Where Cisco has already struck deals, customers have enjoyed multimedia services like digital TV, video conferencing, music, household security, video on demand, and gaming at speeds far faster than the 500kbps possible via ADSL. All this is possible via 10/100/1000Mbps access directly to users over copper, fibre and wireless, using direct Gigabit Ethernet connections from the metro optical transport backbone. With MTUs, the main focus of Cisco’s attention, a switch sits in the basement of the building and is fed an IP multicast from the metro core router. The bandwidth is then split between the various end users who have signed up to the service.

Service providers who are already feeding customers services this way include FastWeb in Italy, which is providing a metro Ethernet service to users in cities including Milan, Turin, Genoa, Naples, and Bologna. FastWeb’s 100,000 customers are also located in Hamburg, Germany. Customers pay around 50 euros a month for all their voice and data services, which includes Internet connections through standard TVs as well as PCs. Another company using Cisco’s solution is B2 in Sweden, which has connected 220,000 households across 40 cities. It has 80,000 paying subscribers who pay around $30 a month to get a two-way 10Mbps broadband connection – 20 times faster than ADSL, which only works at 500kbps when receiving data.

De Simone says: “Broadband is not just a regulatory issue, it is a matter of will.” De Simone believes the newly created Greater London Assembly in England, for instance, should offer incentives to businesses, local communities and tenant groups to create the kind of metro Ethernet networks built elsewhere in Europe.

Copyright Protected – Antony Savvas – [email protected]

posted by Antony 5:12 AM
? NETWORK PERCEPTIONS Lightwave Europe – May 2002 Antony Savvas There is talk of optical networking to the desktop, but where exactly is the market in terms of understanding how the definitions of networking have changed, or should have changed, over the last few years? Is the LAN dead? With the growth of IP and the development of intranets, extranets and IP virtual private networks, and ultra-fast technologies in the MAN like 10 Gigabit Ethernet which almost make distance between sites irrelevant, have perceptions changed among customers? Most people using IT at companies would now probably have an idea as to what a LAN was, but it is debatable whether many would know what a MAN was, and hardly any would understand the importance of a WAN. The standard retort among many suppliers, of course, is that users, their customers, shouldn’t really have to know. But the possibilities which have opened up for cheaper and more efficient communications over the last few years mean a better understanding is beneficial on the customer side, even if the distinction between the LAN, MAN, and WAN seems to have become somewhat blurred. The days of the orthodox campus network, where almost everyone works on one particular site via a single LAN, are now over, thanks to the growth of hot-desking, home working and mobile communications, which have brought us the PAN (personal area network), involving mobile phones and PDAs connected to each other and to other devices, and the WLAN (wireless local area network). Customers should know which new technologies address which areas of the global network to enable them to make informed buying decisions. For instance, outside the specialist trade press, there has been a lack of information for companies about the importance of ADSL for their business and its potential to cut costs. Leased lines and ISDN currently generate a lot of money for the telcos by providing established WAN links, and many people are still using them simply because they are used to them. There haven’t been many telcos in Europe who have sold ADSL as a cheaper replacement WAN technology for leased lines and ISDN, because they don’t want to cannibalise their own revenues. Arguably one of the few exceptions has seen Deutsche Telekom sell ADSL as an upgrade to ISDN, but BT, for instance, has barely said anything to its customers about upgrading their slower and more expensive ISDN lines to ADSL. If more companies thought laterally about their communications they would realise that new opportunities in the MAN and WAN mean not just clever conversations among networking cliques but bottom-line savings. Alan McGibbon, managing director at network integrator Scalable Networks, says the existence of the campus network LAN is not threatened, but that customers should pin down their suppliers more on the opportunities that make linking the LAN to the outside world more efficient. McGibbon says: “Until wide area services are available without geographic limitation, the LAN and WAN will retain their own technological identities – Ethernet in the LAN, Frame Relay, ISDN and leased lines in the WAN. “Whilst LAN extension services are being offered by a number of carriers, they are not generally available, and certainly not outside major cities at anything like affordable costs.” But, McGibbon adds: “Revenues from traditional leased line services, such as Megastream, have suffered significantly at the hands of metropolitan LAN extension services.
“This trend is set to continue, with some providers even offering international Ethernet/IP services.” McGibbon points out that networking definitions have always been blurred, with most of the confusion generated by suppliers trying to be “all things to all men” and not wanting to be pinned down on too many specifics. He says customers should have two important objectives: to deal with a reduced number of suppliers, and to only deal with suppliers that are expert in their fields, to help overcome the problems created by complex and mixed network infrastructures. If they can’t achieve these two goals, says McGibbon, they are on a hiding to nothing as a result of poor decision making and/or conflicting information. McGibbon says Scalable’s customers are often confused about the services available to them on their LANs or WANs, with the situation worsened by the way some telcos do business. He says: “Market protectionism is rife, with sales tactics that include discounts on bandwidth in return for three-, five-, or even seven-year contracts – this smacks of photocopier leasing!” Logical, another network integrator, whose clients include large UK names Egg, Prudential, PowerGen and Marks & Spencer, also says the LAN is here to stay, largely because of cost factors and the confusion over technologies that seems to reign over the market. Gavin Blunt, senior network consultant at Logical, says: “The LAN is still extremely relevant to most organisations as very few can afford to make use of the bandwidth available from carriers offering the ability to create large-scale MANs over long distances.” Blunt says this position is still the case despite the appearance of Gigabit Ethernet, 10 Gigabit Ethernet, fixed broadband wireless and satellite. He says: “There has certainly been an increase in interest in developing MANs based on the new high bandwidth technologies available today, but it is not always economically viable for small- to medium-sized companies in particular.” Blunt concludes: “I believe the established networking definitions of LANs, MANs and WANs are still clear, but those carriers providing a wide range of products that cross over these boundaries are causing confusion with the consumer, so it is up to the integrator to provide the necessary guidance to the user.” French-based IP telephony company QuesCom sees the debate over terms in a different way to others. The company helps firms update their existing PBXs to allow VoIP applications, as a first step to perhaps completely replacing their infrastructure with a pure IP telephony solution in the future. Peter Derks, QuesCom vice president of marketing, says: “We think the difference between the network terms is now pretty artificial within the industry, mainly because of the take-up of IP, and for the customer the only question is whether the data is inside or outside the company.” Derks says the debate over networking solutions is now mainly focused on issues like security. He says: “You can offer customers unlimited bandwidth between their sites, but if it isn’t secured, like with an IP VPN, they won’t use it for critical information whether it’s a LAN, MAN or WAN, and this is what the industry should be addressing.” Mabel Brooks, senior product manager for data services at Telewest Communications, says: “The LAN has not died at all as it does a lot of work the WAN cannot do, especially where IP VPNs are concerned.
“The WAN still does not go to the desktop, and the LAN has to perform aggregation and linking functions between servers and end user PCs, switchboards and routers, for instance.” Brooks says most customers are clear what a LAN is, but there is less clarity about what a WAN or a MAN is. She says most customers use the term “network” to describe both a MAN and a WAN, which obviously isn’t wrong. Brooks says Telewest explains to customers that a MAN is vital in big campus environments or to enable it, as the service provider, to reach as many users as possible on a ring topology. Telewest moved into the IP VPN market at the end of last year, and sees this solution as potentially bringing the various terms closer together by giving customers a future-proof way of easily expanding their connectivity requirements between sites and individual users. Some could also argue that the importance of the LAN is actually increasing as a result of faster carrier-class services which seek to use the Ethernet platform the LAN relies on. This can be seen with the metro Ethernet strategy vigorously being pursued by Cisco (see separate feature on page…), in addition to the industry adoption of 10Gb Ethernet in the MAN, designed to give firms an easier broadband interface to the external networking cloud. Mark Weeks, EMEA vice president at Appian Communications, says: “The MAN is certainly the next major area of investment for carriers who are looking to address a bandwidth bottleneck. “The good news is that technologies like Ethernet promise to dramatically change the price/performance, flexibility and scale of the data services available to customers for the better.” So it’s by no means goodbye to the LAN or the MAN, but what was once thought of as the WAN can now increasingly be described as the Internet via an IP VPN.
How the IP VPN is taking over from the WAN
For most companies the WAN is enabled through expensive and less flexible technologies like leased circuits and, at the bottom end, ISDN. But research from analyst IDC shows that over 40% of companies in Europe are now operating an IP VPN, using a universal browser and simple portal solutions to take full advantage of the Internet to link sites and individual users. UK-based IT training company QA Plc used project manager Infonet Solutions, which appointed European telecoms provider Telia IC to develop an IP VPN that uses both new and old bandwidth and hardware to make it work. QA provides classroom training at various regional centres to IT professionals, using “live” Internet connections to simulate the technology being studied. The first priority for QA was to separate the corporate and classroom traffic in a secure and flexible way. But two separate networks were not a financially viable option, nor was the traditional way of allocating bandwidth on a straight percentage ratio like 50/50 or 30/70, because this did not provide management with the flexibility it sought for the business. Instead, MPLS (multi-protocol label switching) was used both to prioritise certain traffic in the two business streams and to provide the security which prevented students from being tempted to pry on corporate traffic. QA IT manager Pat Farrow said Telia provided around 70% of the network bandwidth to make the new IP VPN work, with the rest coming from other providers under existing contracts. But he said Telia would eventually have direct control over the whole network as the other contracts expired.
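For readers unfamiliar with how label switching keeps two traffic streams apart and prioritised on shared bandwidth, the toy Python sketch below illustrates the general idea. The labels, priority values and next-hop names are invented for illustration only; it is not QA’s or Telia’s configuration, and a real MPLS service is provisioned in the carrier’s routers rather than in application code.

# A minimal, illustrative sketch of label-based separation and prioritisation.
# Packets are tagged with a label at the edge of the network and then queued
# and forwarded purely on that label. The labels, priorities and next-hop
# names are made up for this example.

from collections import deque

# What an edge device might do with each label (illustrative values only).
LABEL_TABLE = {
    "corporate": {"priority": 0, "next_hop": "pe-router-corp"},  # forwarded first
    "classroom": {"priority": 1, "next_hop": "pe-router-lab"},
}

# One FIFO queue per label keeps the two traffic streams logically separate.
queues = {label: deque() for label in LABEL_TABLE}

def classify(payload: str, label: str) -> None:
    """Tag a packet with its label and place it on that label's queue."""
    queues[label].append(payload)

def forward_all() -> None:
    """Drain the queues in priority order, handing each packet to the
    next hop configured for its label."""
    for label in sorted(LABEL_TABLE, key=lambda l: LABEL_TABLE[l]["priority"]):
        hop = LABEL_TABLE[label]["next_hop"]
        while queues[label]:
            print(f"{label:>9} -> {hop}: {queues[label].popleft()}")

if __name__ == "__main__":
    classify("student lab traffic", "classroom")
    classify("payroll transfer", "corporate")
    classify("web exercise", "classroom")
    forward_all()   # corporate traffic is always handled ahead of classroom traffic

The point of the sketch is simply that the two streams never share a queue and the higher-priority stream is always served first, which is the behaviour QA was after without building two physical networks.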
Why MANs now don’t need cables and can become a “Wireless Metropolitan Area Network”:
A group of seven schools in Kings Cross, north London, wanted to form a cheap network to handle group e-mail and deliver high-bandwidth Internet-based learning, and they ended up choosing an optical wireless system for their needs. They appointed Cablefree to link them up with a Gigabit Ethernet laser system at the end of last year. A secondary school for older children acts as a central hub to six primary schools for younger children, which all enjoy 100Mbps laser links through line of sight connections. The total cost of the hardware and software from Cablefree was £180,000 and the total project cost was £300,000. The schools involved, which form the Kings Cross Education Action Zone, part of a Government drive to improve education standards in inner-city areas, say they have already covered the installation costs in communications savings. Some might describe this as a MAN, others may see it as a WAN. Copyright Protected – Antony Savvas - [email protected] posted by Antony 5:05 AM
Thursday, April 25, 2002
? UNIFIED MESSAGING The-technology-channel.com - Autumn 2001 Antony Savvas With analyst Ovum predicting there will be 218m active users of unified messaging by 2007, with a worldwide revenue take of $31bn (£22bn), perhaps now is the time for potential customers to see what’s on offer. The idea that users should be able to access their messages through a single medium has been around for a while. Initially, the main target of the solution was to unify voice landlines, faxes and pagers. Next came mobile phone messages, and then the biggest driver in the form of e-mail. Most companies involved in supplying unified messaging solutions would admit that the current market is relatively modest though, with many businesses still struggling to improve and consolidate the various single communications systems at their disposal. There has also been the question of finding the right supplier at the right price who can accommodate all the solutions now available and expected to be available in the near to medium term. In addition, when it comes to e-mail and items such as attachments, the unified chain has often been broken as a result of a lack of bandwidth and a lack of standards to deliver all messages in an acceptable form. When considering that there has also been talk of video e-mail, some companies have understandably held back to see what services are developed. There are signs now though that true unified solutions are on their way, thanks to the wider use of IP (Internet Protocol) to carry all traffic and the impending wider commercial use of mobile technologies like GPRS (General Packet Radio Service). The promise of even faster mobile broadband speeds via 3G has also given a much-needed kick-start to the unified messaging market. The fact that companies like Vodafone offer consumers the chance to access their e-mail accounts from suppliers like Freeserve, Yahoo!, Hotmail and AOL using their phones shows what is possible. If consumers can verbally forward quick replies to downloaded webmail while on the move using a mobile phone, it can’t be that demanding for companies to set up similar systems for their corporate accounts. One company which is looking to take advantage of IP solutions and the advent of GPRS and 3G is Teamphone, which already counts BT, WorldCom, Equant, Microsoft, Barclays, Prudential, and Virgin Atlantic amongst its clients. With its recently introduced Teamphone IP solution, the company allows business users to control multi-channel communications via both traditional and IP voice and data networks, providing services using any device from anywhere. Teamphone IP’s features include a single number service, which gives users a single phone number that follows them wherever they go; a single mailbox which can contain voice messages, faxes, e-mail and SMS messages, and which can be accessed by any phone or Internet-enabled device; and a “live” directory service containing the most up-to-date contact information for individual users.
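To make the “single mailbox” idea more concrete, here is a minimal Python sketch of how messages arriving as voice, fax, e-mail or SMS might be normalised into one store that any device can query in the same way. The class names, fields and channel list are assumptions for illustration only, not Teamphone’s actual data model or interfaces.

# A toy unified mailbox: every channel delivers into one store, and any
# device reads back one combined, newest-first view.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

CHANNELS = {"voice", "fax", "email", "sms"}

@dataclass
class Message:
    channel: str          # medium the message arrived on: voice, fax, email or sms
    sender: str
    received: datetime
    summary: str          # subject line, caller ID note, first words of an SMS, etc.

class UnifiedMailbox:
    """One store for every channel, queried the same way from any device."""

    def __init__(self) -> None:
        self._messages: List[Message] = []

    def deliver(self, msg: Message) -> None:
        # Whatever system takes the call, fax, e-mail or SMS hands it in here.
        if msg.channel not in CHANNELS:
            raise ValueError(f"unknown channel: {msg.channel}")
        self._messages.append(msg)

    def unified_view(self, channel: Optional[str] = None) -> List[Message]:
        """All messages, newest first, optionally filtered to one channel."""
        items = [m for m in self._messages if channel is None or m.channel == channel]
        return sorted(items, key=lambda m: m.received, reverse=True)

if __name__ == "__main__":
    box = UnifiedMailbox()
    box.deliver(Message("voice", "+44 20 7946 0000", datetime(2001, 10, 1, 9, 5), "voicemail, 40 seconds"))
    box.deliver(Message("email", "[email protected]", datetime(2001, 10, 1, 9, 30), "Q4 forecast attached"))
    box.deliver(Message("sms", "+44 7700 900123", datetime(2001, 10, 1, 10, 0), "running late"))
    for m in box.unified_view():
        print(f"{m.received:%H:%M} [{m.channel}] {m.sender}: {m.summary}")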
Teamphone president Stephen Meyler says: “Teamphone IP will play a major role as a killer application for the emerging GPRS and 3G networks; it is one of the first solutions to take advantage of GPRS and WAP on both mobile phones and telephony-enabled PDAs (personal digital assistants).” Meyler says: “There is much talk about the changing workplace and the growth in flexible working practices, and Teamphone IP has been developed in direct response to this.” An important feature of Teamphone IP is its ability to offer users, through its “easireach” function, the choice of how to respond to people who’ve left messages in the most cost-effective way. For instance, dial-backs can be arranged via a PDA or WAP phone to a landline phone, so avoiding higher mobile phone charges. Services in more detail include user availability and location indicators in unified mailboxes, enabling users to see at a glance a colleague’s location, availability to respond, and preferred means of communication at any given time. This information can be updated in real time at the touch of a button. “Tenancies” also enable an organisation to run services for multiple companies on a single platform, with each company’s data remaining secure. This ability is critical for an Application Service Provider (ASP) to sell on the service, but also enables an organisation to deploy an internal “Chinese Wall” to protect certain data from specific users. Teamphone IP’s e-mail client connects to existing e-mail accounts and enables mobile workers to be more responsive and manage e-mail more effectively when away from the office, through being able to call, text, record or reply in response to an e-mail. Other unified messaging suppliers include Call Sciences and TOPCALL. Call Sciences is an enthusiastic member of the recently established Unified Communications Consortium, and its managing director Piers Mummery is European director of the Consortium. Mummery says key battle areas for the unified messaging market include unified communications in a 3G environment; addressing the needs of mobile workers in general, including those who “hot-desk”; and the selling and billing of unified communications, including through the ASP model. One recent adherent to unified messaging is the RSPCA, which adopted the technology at the same time as it completed the installation of a VoIP (Voice over IP) solution from Cable & Wireless and Cisco. The RSPCA chose TOPCALL as its unified messaging supplier, and the organisation’s main aim in using the technology is the integration of all voice, e-mail and fax communications into one message store. There is a unified messaging interface with the organisation’s existing Lotus Notes, Microsoft Exchange, and Novell GroupWise e-mail systems, which allows RSPCA officials to get their messages via any fixed or mobile device. Teamphone’s Meyler says: “With the advent of flexi-time, hot-desking, increased part-time working, mobile working, tele-working, and part- and full-time home-working, now has got to be the time for companies of all sizes to consider unified messaging.” Copyright Protected - Antony Savvas - [email protected] posted by Antony 8:22 AM
? STORAGE MANAGEMENT StorAge - Winter 2002 Antony Savvas Both analysts and the storage industry estimate corporate storage needs are increasing at between 60% and 100% every year, but how can users manage so much extra capacity? With storage hardware housing increasingly mission-critical data, from e-commerce transactions to data which has to be kept for data protection reasons, one would have expected the storage industry to have easily accessible solutions on offer to help with the management of this traffic. But for various reasons this is not the case, so it is important for users to know the latest state of play before they adopt a new storage strategy or expand an existing one. After the vast explosion in the storage industry’s fortunes as a result of the growth in e-mail and the rapid creation of an e-commerce industry to take advantage of Internet access growth, the companies selling the solutions didn’t seem to realise, or didn’t care, that certain customers had specific needs when it came to managing their data. Companies like EMC, Compaq, Hitachi, StorageTek, and IBM, which sold the big expensive boxes, didn’t feel obliged to configure their solutions to work with rival products, for instance, even though it is established practice in most IT departments to work with solutions from different vendors. And although it was clear that the massive increase in data being held by users would soon create the need for data management solutions that could be used flexibly, hardly any R&D money has been spent on this area by the big box shifters. It’s as if no one learnt from the days of the simple PC dealers ten years ago, when selling lots of boxes was enough to make you a major player in the channel, without realising that one day you would be expected to offer something extra, like flexible services and full product support. We have now reached a stage where a giant like EMC is seen as potentially dying, while little start-up companies are expected to start making the real money by making available software solutions which allow all the various big boxes to work together – unless of course the big dinosaurs snap them up first. And as well as companies making available software solutions to make all types of storage hardware work together, there are a few new storage standards on the way which will allow stored data to be managed and moved over the all-conquering IP standard, which will make it even harder for the dinosaurs to trap their customers into single-branded SAN or NAS storage systems. Although it is not in the short-term interests of the storage giants to allow their customers to more easily use other hardware from different vendors, thereby cannibalising their own sales, the giants are at least looking at supplying easier management solutions for their own products. EMC now has a storage management solution which it sells separately, including through integrators like EDS. And EMC recently went as far as admitting that it was previously wrong in allowing its salesforce to simply sell ever-bigger storage boxes instead of working with customers to tailor their solutions for optimum use. And IBM is now enthusiastically pushing the storage management solution sold by its subsidiary Tivoli.
Neither of these solutions, though, is designed to allow users to mix and match their storage boxes on one network through one hub – a process known as “virtualisation” – and the number of solutions out there which allow this is very limited. Wayne Budgen, senior storage consultant at business continuity company HarrierZeuros, says: “Centralised management of a SAN through a single management interface increases efficiency. Potentially, hundreds of servers and devices can be managed and supported as a single entity, lowering the cost of storage management on a per unit basis, whilst increasing the functionality available to administrators.” Susan Clarke, senior research analyst at Butler Group, says: “As far as virtualisation is concerned, DataCore was the first to market with a product, but unfortunately is likely to suffer because of its size and low market profile. “Butler Group believes DataCore is ideally positioned to be acquired by one of the large storage players that are currently playing catch-up as far as virtualisation is concerned.” Big storage player Fujitsu Softek is among many companies taking advantage of DataCore’s solutions to give users the flexibility they need. Other players in the virtualisation market include ExaNet, StorageAge, and Hewlett-Packard. HP bought into the market with its recent acquisition of StorageApps. Virtualisation software usually involves the suppliers logging all the set-up and connection methods used by the main storage vendors and creating a system to connect all the different storage systems onto one platform. This process obviously takes time, and the software must also be constantly updated as more suppliers and additional storage systems come onto the market. It would therefore be far more helpful to users to be able to work to a single storage standard based on IP. The two main alternatives to the current interoperability debacle are Fibre Channel over IP (FCIP) and iSCSI. iSCSI has now been accepted by the industry as a potential solid standard, but so far the industry hasn’t been promoting it too hard. This is partly because it doesn’t want to kill its existing sales as a result of users holding out for something better, and partly because storage over plain Fibre Channel probably has a good three years still left in it before the first iSCSI products come on stream. The various vendors are also alleviating the problems by agreeing, through the Fibre Channel Industry Association, to work to a minimum set of configuration standards to improve interoperability. But this process will obviously be limited and not all-encompassing. For those users suffering from investment squeezes imposed by the board as a result of the downturn, however, it may be no bad thing to wait for a far more all-encompassing solution in the form of iSCSI.
Fibre Channel Replacements:
iSCSI: SCSI commands generated from user requests, and their data, are encapsulated into IP packets for transmission over an Ethernet connection (a simplified sketch of the encapsulation idea follows this list). iSCSI overcomes plain SCSI’s latency problem and 50m distance barrier.
FCIP (Fibre Channel over IP): Developed by the Internet Engineering Task Force, FCIP enables transmission of Fibre Channel information by tunnelling data between SANs over IP networks. Particularly suited to data sharing between geographically disparate organisations.
iFCP (Internet Fibre Channel Protocol): A hybrid solution, essentially a version of FCIP that moves Fibre Channel data over IP networks using iSCSI protocols. Designed to interconnect existing Fibre Channel SANs.
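As a rough illustration of what “encapsulated into IP packets” means in practice, the Python sketch below wraps a simplified SCSI read command in a toy protocol data unit and shows how it could be pushed down an ordinary TCP connection. The field names, JSON body and header layout are assumptions made for this example; the real iSCSI specification defines its own binary PDU format, and only the TCP port number (3260, the port registered for iSCSI) is taken from the real protocol.

# A toy illustration of the encapsulation idea behind iSCSI: a SCSI command
# is wrapped in a protocol data unit and carried inside an ordinary TCP/IP
# stream over Ethernet, so block storage traffic can cross a standard IP
# network instead of a dedicated Fibre Channel fabric. The layout below is
# simplified and is NOT the PDU format defined by the iSCSI specification.

import json
import socket
import struct

def build_scsi_read(lba: int, blocks: int) -> dict:
    """A stand-in for a SCSI READ command descriptor block (simplified)."""
    return {"opcode": "READ", "lba": lba, "blocks": blocks}

def encapsulate(command: dict, target: str, lun: int) -> bytes:
    """Wrap the SCSI command in a toy 'PDU': a fixed-size length header
    followed by a JSON body, ready to be written to a TCP socket."""
    body = json.dumps({"target": target, "lun": lun, "command": command}).encode()
    header = struct.pack("!I", len(body))   # 4-byte big-endian length prefix
    return header + body

def send_over_ip(pdu: bytes, host: str, port: int = 3260) -> None:
    """Push the PDU down an ordinary TCP connection; 3260 is the registered
    iSCSI port, everything else here is illustrative."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(pdu)

if __name__ == "__main__":
    cmd = build_scsi_read(lba=2048, blocks=8)
    pdu = encapsulate(cmd, target="iqn.2002-01.example:disk1", lun=0)
    print(f"{len(pdu)} bytes ready to travel over plain Ethernet/IP")
    # send_over_ip(pdu, "192.0.2.10")   # would need a listener at the far end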
But as far as a spending squeeze is concerned, storage integrator CSF Group doesn’t think storage will be affected much. Ian Williams, CSF chief technology officer, claims: “In an economic downturn companies often look inwards to themselves to improve their services and infrastructures, and the centralising and effective use of data is central to this process. “They need to pull together disparate systems that need to be connected to share data and ensure it is not duplicated, and is easily accessible enterprise wide.” Graeme Rowe, marketing director at storage consultancy Posetiv, acknowledges the lack of virtualisation solutions around, pointing out that companies like Compaq are still working on their products. But he says there are still opportunities to be had in taking advantage of new storage management solutions. He says: “Virtualisation is a much used term in the storage industry, with many companies claiming to offer THE solution. As a technology that controls where data is stored, and makes this ‘invisible’ to the application, it is often a case of using the best solution applicable in each instance to match the different flavours of storage system being connected. “With storage management, it has always been a problem for businesses, but with companies like BMC Software putting their weight behind it, this is set to be a real growth area in storage.” SAN supplier Gadzoox Networks also tips BMC for the top, along with other solutions. Nigel Houghton, Gadzoox northern Europe manager, says: “The storage management market is likely to be dominated by the three main players in the form of Computer Associates, Tivoli, and Hewlett-Packard, along with the largest independent player Veritas and BMC, whose latest offering, Patrol, enjoys impressive functionality.” Chris Atkins, product manager for storage at Sun Microsystems, says a storage management solution should offer efficiency, cost allocation, and management reporting. Regarding efficiency, the software should be able to measure capacity, what percentage is actually holding data, and how much of that data is inactive and could be archived off. With cost allocation, which applications, departments, and users are using what amounts of storage? With this ability, says Atkins, a “chargeback” should be achievable, based on an allocation of costs in proportion to usage. And with management reporting, users need to know how fast capacity is increasing, which applications are close to running out of capacity, and whether any application’s capacity needs have started growing unexpectedly.
Requirements of a storage management solution:
The measurement of capacity and efficiency
Cost allocation
Management reporting
Management of back-up and restore functions
Storage Management Solutions:
DataCore
DataCore’s virtualisation product is SANsymphony, and one of its first UK customers is 7 Global, a business services aggregator which integrates best-of-breed services from multiple application service providers into a single unified service. 7 Global uses SANsymphony to integrate its storage hardware from Hitachi Data Systems and IBM with SAN switches from Brocade and Intel-based servers running Windows 2000.
Raidtec
Raidtec’s SANfinity SAN management software is already well-established, but the company recently launched a virtualisation solution called SNAZ SVA.
Computer Associates
CA’s BrightStor software covers just about every area connected to storage management, from back-up, restore and replication to measuring storage capacity and breaking down the types of files which may need archiving. But the modular BrightStor, which works on just about every platform you can imagine, is still playing catch-up on the virtualisation front.
Tivoli
Tivoli Storage Manager is one of the most established storage management software solutions around, and the fact that it is promoted by IBM means it is destined to remain a market leader, but, like CA, there is still a lot to be done on the interoperability front.
BMC
BMC’s modular Patrol solution offers both storage management and virtualisation, and parts of the system are designed to work with the storage management products from Tivoli and Veritas, two far bigger rivals.
Quest Software
Quest Software has just launched Storage Xpert, a management tool to eliminate bottlenecks in storage devices and make sure users are getting the best out of their existing hardware.
LSI Logic Storage Systems
LSI’s ContinuStor Director is a remote replication and virtualisation device which allows users to migrate data between different types of hardware. LSI claims ContinuStor Director can move data ten times faster than other solutions on the market. Copyright Protected - Antony Savvas - [email protected] posted by Antony 8:16 AM
? NETWORK SOFTWARE DISTRIBUTION Network Computing - Autumn 2001 Antony Savvas The distribution and management of software applications can create major upheaval within an organisation, and can result in a great deal of time spent in making sure implementations have gone to plan. Network Computing looks at what IT departments have to consider and views some of the solutions on the market which can ease the process. With the advent of the multimedia PC, more and more applications are being used on company desktops, resulting in more frequent software distribution challenges and greater management requirements. This is borne out by research carried out by applications management company Vector Networks. When Vector asked companies how often they were involved in software distribution, very few were prepared to put a regular time on such a requirement. Vector found that seven companies said they were involved in software distribution every month, ten said every three months, eight every six months, and 25 revealed an annual distribution requirement, but 200 companies said they were involved in “frequent” distributions. This illustrates the haphazard environment IT departments now work within. Vector found that just 25 companies said they experienced no technical problems with software distribution, but many others did, with 60 admitting that manual installation posed problems and 50 citing the time needed to manage software distribution in general. To a lesser extent, network and bandwidth restrictions were cited, as well as dealing with remote locations. When asked how software was distributed across the enterprise, 100 said every desktop was visited to install the software, while 120 said they used centrally implemented software distribution packages. Others (55) said the applications were hosted by a server. Vector found that the use of centrally implemented software distribution packages had gained a particularly enthusiastic following over the last 12 months. Vector’s Colin Bartram says: “Network managers can only cope with software distribution through the enterprise if they have a tool that does not compromise their existing desktops.” The advent of more complex and interrelated 32bit applications has made software distribution even more complicated. Vector’s LANutil Suite is one of a number of solutions which now recognise this fact by taking into consideration the client’s relationship with shared files, sensitivity to the presence of other applications, registry entries, and other factors. When it comes to choosing the system to distribute new applications, IT departments will be roughly split into two camps: those committed to the server-based computing model of “thin clients”, which run only the bare bones and access other applications on demand, and those having to service the traditional model of “fat clients”, equipped with a host of permanent applications which have to be constantly updated. Three years ago the thin client model was dressed up as the way to go, with organisations like Oracle predicting the death of the fat client equipped with everything, and analysts proffering figures which showed potential savings of hundreds of pounds a year on each user seat. Companies including Citrix have made a reasonable niche business from the thin client model with Citrix MetaFrame, and IBM has built a following with its Workspace On-Demand solution aimed at the same market.
Server-based computing forms a major slice of the business generated by IT services company Computer Solutions and Finance (CSF). Paul Reeve, business manager of server-based computing at CSF, says: “Network managers face multiple problems when implementing an enterprise application or upgrade. “Many companies, even ones of considerable size, still upgrade basic applications one desktop at a time. The time and cost implications in support staff alone are huge, and then you have to add to this the disruption to business.” Reeve says: “With the Citrix MetaFrame solution for instance, you can deliver new or updated applications across the enterprise and only have to load it once onto the server, then everyone can access those applications from their desktop. You can also prevent users from loading applications they shouldn’t have onto their desktops, preventing potential licensing problems.” But what about Unix applications, aren’t they more complicated to distribute to users? Reeve admits: “It’s not straightforward, it typically requires an emulation package and specific protocols which in turn need bandwidth and integration into the infrastructure, a resource-intensive process.” Reeve says Citrix MetaFrame for Unix, for instance, can be used to allow native Unix applications to be distributed to the desktop without the machines having an on-board Unix operating system. While it is undisputed that many organisations have adopted thin clients, the fact is, however, that most have stuck with the fat client model. But until relatively recently there weren’t many solutions available which offered the automatic distribution of applications to these devices. One of the first solutions which became available was ZENworks from Novell. Original versions of ZENworks relied on organisations having a Novell NetWare server operating system in place to use the software, and as a result usage was curtailed by the corporate world’s general preference for Microsoft Windows operating systems on both the server and desktop side. The latest version, ZENworks for Desktops 3.2, allows both Windows NT and Windows 2000-based systems, as well as NetWare ones, to be the host. ZENworks for Servers is also available. Novell claims the cost of managing user workstations can be slashed by half by using ZENworks. The latest version also adds support for thin client devices. Built on Novell’s eDirectory and Novell Directory Service (NDS) technology, ZENworks has gained the praise of many analysts for its ability to be easily integrated into any Windows environment, and to compete with its main Microsoft rival in the form of IntelliMirror, which is bundled with Windows 2000. One recent major ZENworks gain for Novell was the announcement from Computer Associates that it would be integrating ZENworks into its Unicenter systems management suite of products. Microsoft’s IntelliMirror offering, bundled with Windows 2000 Server and Windows 2000 Professional, has a lot of ground to make up on ZENworks, and some comparisons from analysts illustrate why. IDC says of IntelliMirror: “IntelliMirror can improve client management in four main areas: desktop settings management, user data management, applications deployment and management, and operating systems remote install. “But the downside for users is that gaining the benefits requires that Windows 2000’s bundled Active Directory is used to store user account information, hardware and software configuration information, and access control information.
“Without a full-blown Active Directory implementation there is little opportunity to exploit Windows 2000’s client management capabilities. By choosing to take a forward-looking approach without regard for legacy clients, Microsoft has left the door wide open for Novell and IBM to come in and offer management solutions to users who plan to have mixed infrastructures for a long time to come.” And as many companies are now only jumping on the Windows 2000 bandwagon in the form of upgrades from their existing Windows NT systems, many haven’t even thought about adopting the new way of working offered by Active Directory. The use of IntelliMirror is therefore so far not widespread. Analyst Giga Information Group confirms: “ZENworks for Desktops and ZENworks for Servers are definitely worth a look for any NT or Windows 2000 business looking for an advanced management package that is tried and proven in production.” For those companies that are still pondering whether or not to make use of Active Directory as part of their roll-out of Windows 2000, instead of keeping their existing NDS (Novell Directory Services) system, Novell is attempting to keep them on its side by integrating the best features of IntelliMirror with ZENworks for Desktops. With this move, Novell says, customers get true policy-based management for all their Windows workstations, with no need to completely overhaul their domain name system with the installation of Active Directory. When it comes to the distribution of new applications in the organisation, though, there are a number of other solutions on the market.
The Mainframe
Ian Benn, marketing director for systems and technology at Unisys, says: “We are finding an increasing resurgence of the datacentre – the mainframe is back. “It is evident that whilst the client device market is getting more and more diverse, with Symbian, Pocket PC, thin clients, Java devices, Linux edge devices, games consoles, WAP, SMS, and set top boxes, the datacentre end of things is getting more homogenous, giving organisations a clearer way of working. “This is driving demand for our ES7000 Windows mainframe and keeping our traditional mainframe customer base busy with new developments and incremental applications. Operations managers are therefore avoiding software distribution altogether.”
Targeted Multicasting
Intel is one of a number of companies with a solution which allows software to be distributed using the “targeted multicasting” method. Intel’s LANDesk Management Suite creates inventories of the existing applications on clients and distributes new applications automatically without clogging up the network. LANDesk, which has been available in one form or another for 11 years, supports Windows, Linux, OS/2 and Macintosh desktop operating systems, and Windows, Novell, UnixWare and Linux server operating systems. Along with management services such as automated distribution, migration to new systems, remote control of user desktops, and the creation of software and hardware inventories, LANDesk delivers targeted multicasting for large-scale software deployments. As large deployments can overwhelm network and server bandwidth, LANDesk avoids sending a copy of the new application straight to each desktop. Instead, data goes to a single client, which then uses the multicasting software to relay the application to other target clients via an ad hoc-created subnet. This is targeted multicasting.
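The toy Python sketch below models the fan-out idea in general terms: the server hands one copy of a package to a designated relay client on each subnet, and the relay passes it on to its local peers, so the backbone carries far fewer copies. The client names, subnet layout and package size are invented for illustration, and the sketch does not represent how LANDesk itself is implemented.

# A toy model of the fan-out behind targeted multicasting: instead of pushing
# a large package once per desktop across the network, the server sends a
# single copy to one designated client on each subnet, and that client relays
# the package to its local peers. All names and figures are hypothetical.

from collections import defaultdict

# Hypothetical inventory: which clients sit on which subnet.
CLIENTS = {
    "pc-101": "10.1.0.0/24", "pc-102": "10.1.0.0/24", "pc-103": "10.1.0.0/24",
    "pc-201": "10.2.0.0/24", "pc-202": "10.2.0.0/24",
}

def naive_push(package_mb: int) -> int:
    """Server sends the whole package to every client individually."""
    return package_mb * len(CLIENTS)

def targeted_multicast(package_mb: int) -> int:
    """Server sends one copy per subnet; a relay client fans it out locally,
    so only the local segment carries the remaining copies."""
    subnets = defaultdict(list)
    for client, subnet in CLIENTS.items():
        subnets[subnet].append(client)

    backbone_traffic = 0
    for subnet, members in subnets.items():
        relay, peers = members[0], members[1:]
        backbone_traffic += package_mb   # one copy crosses the backbone per subnet
        print(f"{subnet}: server -> {relay}, relay -> {', '.join(peers) or '(no peers)'}")
    return backbone_traffic

if __name__ == "__main__":
    size = 120  # MB, e.g. an office suite update (illustrative figure)
    print("backbone traffic, per-desktop push:", naive_push(size), "MB")
    multicast_mb = targeted_multicast(size)
    print("backbone traffic, targeted multicast:", multicast_mb, "MB")

With five clients on two subnets, the per-desktop push carries five copies over the backbone while the relay approach carries two, which is the bandwidth saving the technique is aiming for; the real product also handles inventory, scheduling and retries, none of which is modelled here.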
This solution speeds up the overall data distribution while keeping the network bandwidth available for other needs. The latest version of LANDesk – version 6.5 – is launched this month (October) and offers greater support for mobile clients. IT departments will be able to set up policies on how much bandwidth should be available before laptop users are sent any software. This is aimed at reducing the high support costs of dealing with mobile users, and ensures that they are always included when software is updated. Previously, the targeted multicasting function was sold as a separate module, but it will now be included with LANDesk as standard. Intel says this decision will make it easier for companies to cope with more regular anti-virus software upgrades, for instance, in the face of increased attacks.
Tivoli
Tivoli is amongst many companies which have packages that focus specifically on the straight distribution of software to clients, without many of the other management features offered by ZENworks and IntelliMirror. Tivoli Software Distribution supports all the main Windows operating systems including Windows 2000, along with NetWare and a number of Unix operating environments. Copyright Protected - Antony Savvas - [email protected]
To correct the typo in the post - none of these titles exist anymore.