As cloud data centers scale up, how can Ethernet be streamlined?

The deployment of cloud computing infrastructure is growing at an astonishing rate, faster even than Moore's Law. Annual growth in cloud workloads is believed to be 30x in some cases, and as much as 100x in others. To meet this demand, cloud data centers must scale up: deployments of hundreds or even thousands of servers are now a common configuration in the data center market.

At this scale, the network faces serious challenges. The ever-growing number of switches drives up both capital investment and management complexity. To rein in rising expenditure, network disaggregation has become an increasingly popular approach. By separating switch hardware from the software that runs on it, vendor lock-in can be reduced or even eliminated: OEM hardware can be paired with software developed in-house or sourced from third-party vendors, saving money.

While network disaggregation addresses the immediate problem of high capital expenditure, operating costs remain high: the number of switches that must be managed stays essentially the same. To reduce operating costs, network complexity itself must be addressed.

Network disaggregation

Today, almost every application we use at home or at work is connected to the cloud in some way. Our email services, mobile apps, corporate websites, virtual desktops, and enterprise workloads all run on servers in the cloud.

For cloud service providers, this incredible growth is both a tremendous opportunity and a major challenge. Moore's Law has struggled to keep pace with demand, so data center expansion now also means scaling out: buying more compute and storage capacity and connecting it through network investments, which rapidly increases the cost and complexity of managing it all.

Network hardware and software have traditionally been bundled together: when you purchased a switch, router, or firewall from a vendor, you ran that vendor's software on the hardware. The larger cloud service providers saw a market opportunity. They had no shortage of skilled software engineers, and once they reached sufficient scale, they found they could save substantial capital expenditure by buying commodity network hardware and running their own software on it.

This disaggregation of software from hardware may be financially attractive, but it does not address the complexity of the network infrastructure, where there is still considerable room for optimization.

802.1BR

Most cloud data centers today use a tiered architecture, typically configured as a fat-tree or leaf-spine topology. Racks of servers connect to top-of-rack (ToR) switches, which in turn connect upstream to spine switches. In practice, ToR switches perform simple network traffic aggregation, yet using relatively complex, power-hungry switches for this task inflates capital expenditure and management costs without easing the operational headache.
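
To make the scale of the problem concrete, the following sketch estimates how many switches a simple two-tier leaf-spine fabric requires. All parameters (rack count, port counts, uplinks) are illustrative assumptions, not figures from the article.

```python
def leaf_spine_sizing(racks, servers_per_rack, tor_ports,
                      uplinks_per_tor, spine_ports):
    """Return (leaf_count, spine_count, total_managed_switches) for a
    simple two-tier leaf-spine fabric with one ToR switch per rack."""
    leaves = racks  # one ToR (leaf) switch per rack
    # Each ToR must have enough ports for its servers plus its uplinks.
    assert servers_per_rack + uplinks_per_tor <= tor_ports, "ToR port budget exceeded"
    # Every leaf uplink must terminate on a spine port, so the spine
    # count is bounded below by total uplinks / spine port count.
    total_uplinks = leaves * uplinks_per_tor
    spines = -(-total_uplinks // spine_ports)  # ceiling division
    return leaves, spines, leaves + spines

# Hypothetical fabric: 100 racks of 40 servers, 48-port ToR switches
# with 4 uplinks each, 128-port spine switches.
leaves, spines, managed = leaf_spine_sizing(100, 40, 48, 4, 128)
print(leaves, spines, managed)  # 100 4 104
```

Even in this modest example, the rack tier accounts for the overwhelming majority of the managed switch count, which is exactly the layer port extension targets.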

The port extension technology defined in the IEEE 802.1BR standard aims to simplify this architecture. By replacing each ToR switch with a port extender, rack-level ports are connected directly to the upstream tier. Network management is consolidated into a smaller number of spine switches, eliminating dozens or even hundreds of managed switches at the rack level.
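
A minimal sketch of the management consolidation described above: under 802.1BR, a port extender presents its ports as additional ports on an upstream controlling bridge, so only the bridges remain as managed entities. The device counts below are illustrative assumptions.

```python
def managed_devices(racks, controlling_bridges, use_port_extenders):
    """Count the network devices an operator must individually manage."""
    if use_port_extenders:
        # 802.1BR port extenders are configured through their upstream
        # controlling bridge, so they are not independently managed.
        return controlling_bridges
    # Classic design: every ToR switch plus every upstream switch.
    return racks + controlling_bridges

before = managed_devices(racks=100, controlling_bridges=4, use_port_extenders=False)
after = managed_devices(racks=100, controlling_bridges=4, use_port_extenders=True)
print(before, after)  # 104 4
```

Under these assumed numbers, the managed-device count drops by more than an order of magnitude, which is the operational saving the standard is designed to deliver.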

Port extenders can greatly reduce the complexity of switch management, and this approach has gained broad industry acceptance. However, although a variety of 802.1BR-compliant switches are now on the market, not all of the standard's advantages can be realized with them.

The future of network disaggregation

While many port extenders on the market today support 802.1BR, they are built from legacy components designed for earlier switching products rather than for the 802.1BR standard itself. This limits the cost and power advantages the new architecture can offer.

Marvell's Passive Intelligent Port Extender (PIPE) products, by contrast, are designed specifically for the 802.1BR standard and optimized for this architecture. PIPE interoperates with 802.1BR-compliant upstream controlling bridge switches from all industry-leading OEMs, enabling fanless, cost-effective port extender deployments that cut both the upfront investment in a cloud data center and its ongoing operating costs, while reducing power consumption and switch management complexity by an order of magnitude.

The first wave of network disaggregation separated switch software from the hardware it runs on. The 802.1BR port extension architecture is bringing the second wave, in which ports are separated from the switches that manage them. This modular approach to networking will further reduce costs, cut energy consumption, and significantly simplify network management.
