Introduction
As AI infrastructure scales to meet the exponential compute demands of large language models and agentic AI workloads, data center interconnect architecture faces mounting pressure to deliver higher throughput, tighter latency budgets, and stricter power envelopes for sustainable AI growth. Traditionally, copper and optics have been deployed for short and long reaches, respectively, but each has reached a critical inflection point.
Copper cables, while low-cost and reliable, struggle with fundamental material limitations, especially high-frequency loss from the skin effect, at 224 Gbps SERDES speeds and above. Meanwhile, optics offer long reach but impose high complexity, excessive power draw, and cost barriers for short-reach applications of 5-10 meters.
Point2 Technology’s e-Tube, a novel interconnect based on Radio Frequency (RF) transmission over a plastic dielectric waveguide, offers a compelling alternative. It extends copper’s reliability and economics to terabit-scale data rates in a lightweight, power-efficient, and highly manufacturable form that addresses short-reach AI cluster scale-up in data centers. With roadmap scalability to multi-terabit speeds and compatibility with standard cable form factors, e-Tube redefines the compute fabric for next-generation AI clusters.
AI Clusters Demand High Bandwidth and Low Power for Sustainable AI Growth
Modern AI clusters are GPU-centric, with thousands of interconnected accelerators communicating across a tightly integrated fabric. Emerging designs—such as those underpinning training of trillion-parameter models—often include disaggregated CPU-GPU topologies, flyover interconnects inside the rack, and adjacent-rack connectivity for modular scale-out.
Figure 1. Data center leaf-spine architecture (left) and AI GPU & CPU architecture (NVIDIA) (right).
In this environment, short-reach interconnects (5-10m) carry immense data volume between compute nodes. Interconnect density and speed become critical bottlenecks, and any limitations in channel performance directly impact workload efficiency.
Key requirements for such infrastructure include:
- Terabit throughput per link to minimize serialization bottlenecks.
- Low power per bit to avoid rack-level thermal challenges.
- Scalability to support next-generation SERDES (224 Gbps, 448 Gbps, and beyond).
- Long-term reliability for uninterrupted operation after deployment.
- Form-factor compatibility for seamless deployment.
Copper and optics each fall short when faced with this list.
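To put these requirements in concrete terms, the following back-of-envelope sketch (with illustrative numbers, not vendor specifications) shows how electrical lane count and per-link power scale with SERDES speed and per-bit energy:

```python
import math

# Back-of-envelope sizing for a terabit-class link.
# All figures are illustrative assumptions, not vendor specifications.
LINK_TBPS = 1.6  # target aggregate link rate, Tbps

# Electrical lanes required per link (deployed designs round to
# power-of-two lane counts and add FEC overhead, e.g. 8 x 224G for 1.6 T).
for serdes_gbps in (112, 224, 448):
    lanes = math.ceil(LINK_TBPS * 1000 / serdes_gbps)
    print(f"{serdes_gbps}G SERDES: ~{lanes} lanes per {LINK_TBPS} T link")

# Per-end link power at several energy-efficiency operating points:
for pj_per_bit in (3, 5, 15):
    watts = LINK_TBPS * 1e12 * pj_per_bit * 1e-12  # (bits/s) x (J/bit) = W
    print(f"{pj_per_bit} pJ/bit -> {watts:.1f} W per {LINK_TBPS} T link end")
```

At cluster scale these per-link watts multiply across tens of thousands of cable ends, which is why the pJ/bit figure dominates the interconnect power budget.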
Copper at its Limits
Copper cables dominate in short-reach deployments due to their simplicity and low cost. However, as data rates rise, copper faces physical constraints:
- Skin Effect: High-frequency signals concentrate near the surface, increasing attenuation and reducing bandwidth.
- Insertion Loss: Thinner, lighter copper twin-ax (e.g., 30 AWG) suffers significant signal degradation at 224 Gbps and beyond.
- Length Restrictions: PHYs are challenged to drive copper beyond 1 meter at 224 Gbps speeds without active electronics to reduce loss.
- Scaling Difficulty: Copper cables optimized for slower SERDES speeds require re-engineering to handle 224 Gbps, due to non-flat frequency response.
As the simulation results in Figure 2 show, thinner copper twin-ax cables (34/32/30 AWG) exhibit loss curves that limit reach and reliability, making them increasingly impractical for high-performance inside-the-rack and cross-rack computing systems. The alternative is thicker, lower-gauge copper wire (26/28 AWG), which creates thick and heavy cable bundles that make high-density AI cluster deployments impossible.
Figure 2. Copper twin-ax loss.
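The severity of the skin effect can be estimated from first principles. This minimal sketch computes the classical skin depth, delta = sqrt(rho / (pi * f * mu)), for copper at the approximate Nyquist frequencies of successive PAM-4 SERDES generations (a PAM-4 link at R Gbps signals at R/2 Gbaud, so its Nyquist frequency is near R/4 GHz):

```python
import math

# Classical skin depth for copper: delta = sqrt(rho / (pi * f * mu)).
# A smaller delta means current crowds into a thinner surface shell,
# raising effective resistance and insertion loss.
RHO_CU = 1.68e-8           # copper resistivity, ohm-m
MU = 4 * math.pi * 1e-7    # permeability (mu_r ~ 1 for copper), H/m

for serdes_gbps in (112, 224, 448):
    f_nyq = serdes_gbps / 4 * 1e9   # PAM-4 Nyquist frequency, Hz
    delta_um = math.sqrt(RHO_CU / (math.pi * f_nyq * MU)) * 1e6
    print(f"{serdes_gbps}G PAM-4: Nyquist ~{f_nyq/1e9:.0f} GHz, "
          f"skin depth ~{delta_um:.2f} um")
```

At 224 Gbps the current is confined to a shell roughly a quarter-micron thick, which is why the thinner twin-ax gauges lose viability at these speeds.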
Optical Interconnects: Power-Hungry, Expensive, and Inherently Less Reliable
Optical technologies, retimed active optical cables (AOCs) and linear pluggable optics (LPO), provide longer reach, but their complexity introduces substantial penalties:
- Power Consumption: Optical digital signal processors (DSPs), lasers, and transimpedance amplifiers (TIAs) consume 3-5x more power than copper solutions. In hyperscale clusters with terabit-scale interconnect, this leads to megawatts of additional power draw.
- Cost: Optical components, retimers, and linear equalizers are expensive, driving cable cost to as much as 5x that of copper.
- Design Complexity: DSPs, linear equalizers, optical-to-electrical (O/E) conversion circuitry, optical alignment, and temperature sensitivity add layers of integration difficulty.
- Reliability: With many components and optics that inherently fail over time, optical module failure-in-time (FIT) rates will be 10x those of electrical-only solutions, creating costly operational interruptions after deployment.
- LPO Trade-offs: LPO variants attempt to reduce power by eliminating the DSP but still fall short in cost and reliability compared to electrical-only solutions.
For short-reach, high-density AI cluster scale-up applications, optical technologies are overengineered, too costly, and carry reliability risks that create unnecessary operational interruptions after deployment.
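To put the FIT claim in operational terms, the short sketch below converts FIT rates into expected failures per year for a large deployment. The absolute FIT values and fleet size are assumptions for illustration; the text claims only the ~10x ratio:

```python
# FIT = failures per 1e9 device-hours.
# Expected annual failures = devices * FIT * hours_per_year / 1e9.
HOURS_PER_YEAR = 8760
CABLE_ENDS = 50_000   # assumed fleet: e.g., 25,000 cables x 2 ends

for name, fit in (("optical modules (assumed FIT)", 1000),
                  ("electrical-only cable ends (assumed FIT)", 100)):
    per_year = CABLE_ENDS * fit * HOURS_PER_YEAR / 1e9
    print(f"{name}: ~{per_year:.0f} expected failures/year")
```

Under these assumptions, the 10x FIT gap is the difference between hundreds of failure events per year and a few tens, i.e., between continuous operational churn and routine maintenance.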
e-Tube Architecture: RF Transmission over Plastic Cable
e-Tube leverages millimeter-wave RF signals transmitted through a plastic dielectric waveguide. Plastic is not subject to the skin effect, the phenomenon in which an alternating current tends to flow predominantly near the surface of a conductor such as copper. As frequencies rise, the current concentrates in an ever-thinner shell at the conductor surface, reducing the effective cross-section available for current flow. Resistance therefore increases with speed, making data harder to transmit and forcing shorter copper cables. The alternative, thicker wire with more surface area for current flow, means thicker and heavier cables.
The e-Tube plastic waveguide structure enables high-frequency transmission with no skin effect: there is no current flow, just RF signals propagating through the plastic material. The waveguides are designed to exhibit a flat frequency response over the transmission spectrum, enabling scalability and reusability across generations of SERDES speeds (112G/224G/448G).
e-Tube Signal Path Overview
- The RF Transmitter IC up-converts the baseband PAM-4 electrical data input into the mmWave RF domain.
- The RF signal is launched into the e-Tube Core (plastic waveguide) via an on-chip antenna.
- The RF signal travels through the e-Tube Core with minimal attenuation.
- A corresponding RF Receiver IC down-converts the RF signal back to a baseband PAM-4 electrical data output.
Figure 3. e-Tube data transmission path.
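The signal path can be illustrated with a toy baseband simulation. The sketch below is an idealized model with scaled-down, assumed frequencies (it is not the actual e-Tube RF design, and it models the waveguide as lossless): PAM-4 symbols are up-converted onto a carrier, coherently down-converted, and low-pass filtered to recover the data:

```python
import numpy as np

# Toy model of the path: PAM-4 -> up-convert -> waveguide (ideal) ->
# down-convert -> low-pass filter -> PAM-4. Frequencies are scaled,
# illustrative values, not actual e-Tube design numbers.
rng = np.random.default_rng(0)
fs, f_carrier, baud = 64e9, 8e9, 1e9   # sample rate, carrier, symbol rate
sps = int(fs / baud)                    # samples per symbol

symbols = rng.choice([-3, -1, 1, 3], size=256)     # PAM-4 levels
baseband = np.repeat(symbols, sps)                 # simple NRZ pulse shaping
t = np.arange(baseband.size) / fs

tx = baseband * np.cos(2 * np.pi * f_carrier * t)  # up-convert (mixer)
# ... ideal propagation through the waveguide (no loss modeled) ...
mixed = tx * 2 * np.cos(2 * np.pi * f_carrier * t) # coherent down-convert

# Crude low-pass filter: moving average over one symbol period removes
# the 2*f_carrier mixing product and leaves the baseband levels.
kernel = np.ones(sps) / sps
recovered = np.convolve(mixed, kernel, mode="same")

# Slice at mid-symbol instants and check symbol recovery.
samples = recovered[sps // 2::sps][:symbols.size]
decisions = np.array([min((-3, -1, 1, 3), key=lambda lv: abs(s - lv))
                      for s in samples])
print("symbol errors:", int(np.sum(decisions != symbols)))
```

In the real system, the up/down conversion happens in the monolithic RF SoCs and the channel is the plastic waveguide; the point of the sketch is only that the path is a plain modulate/demodulate chain with no optical conversion anywhere.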
This direct e-Tube RF path is a complete electrical system. It eliminates the need for laser drivers, DSPs, TIAs, and any optical components, reducing power, latency, and cost while eliminating the reliability issues inherent in optical technologies.
Platform Architecture Delivers Unmatched Performance Metrics
RF SoCs are designed to output carriers across the millimeter-wave frequency band into each e-Tube Core. Each RF carrier supports the bandwidth needed to transmit and receive the equivalent data channel in the electrical domain. For data links with 224 Gbps SERDES, the dual-carrier approach enables a transmission data rate of 448 Gbps for each e-Tube Core. Each e-Tube Core thus effectively replaces two copper twin-ax cables, doubling the density of the interconnect.
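The aggregation arithmetic follows directly, as this minimal sketch restates (the per-core rate and twin-ax replacement come from the text; the four-core bundle count is an inference from those rates):

```python
import math

# Dual-carrier aggregation per e-Tube Core (rates from the text).
SERDES_GBPS = 224        # electrical lane rate
CARRIERS_PER_CORE = 2    # one mmWave carrier per electrical channel

core_gbps = SERDES_GBPS * CARRIERS_PER_CORE
print(f"per-core rate: {core_gbps} Gbps")                    # 448 Gbps
print(f"twin-ax pairs replaced per core: {CARRIERS_PER_CORE}")

# Inference: cores needed to carry a 1.6 T bundle at this rate.
cores = math.ceil(1600 / core_gbps)
print(f"cores per 1.6 T bundle: {cores}")                    # 4
```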
e-Tube cables are built with monolithic RF SoCs, eliminating the DSP, electrical integrated circuit, photonic integrated circuit, and other components needed for optical data transmission. This approach allows e-Tube to deliver best-in-class 3 pJ/bit energy efficiency with near-zero, picosecond-scale latency. Active RF Cables (ARC) based on e-Tube support 10x copper cable lengths at a similar cost structure, while being 3x lower in power, 3x lower in cost, and 1000x lower in latency than active optical cabling.
Figure 4. 1.6 T cable comparative analysis.
e-Tube delivers copper-like performance at a similar cost structure, making it the ideal replacement for copper in short-reach connectivity, eliminating the interconnect bottleneck and maximizing compute efficiency.
Energy-Efficient Interconnect for Sustainable AI Growth
A case study comparing ARCs and AOCs in a 10,000-GPU accelerator cluster deployment shows that e-Tube provides annual savings of up to 6,000 MWh.
Figure 5. e-Tube annual energy savings connecting GPUs (e-Tube ARC vs. AOCs).
The 6,000 MWh annual energy savings enabled by e-Tube is equivalent to the consumption of roughly 600 average U.S. households. When benchmarked against coal-based power generation, this translates to an estimated reduction of ~5,600 metric tons of CO2 emissions.
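The magnitude of these figures can be sanity-checked with rough arithmetic. In the sketch below, the links-per-GPU count and the optical pJ/bit figure are illustrative assumptions chosen to be consistent with the numbers quoted above; only the 3 pJ/bit ARC figure comes from the text:

```python
# Rough reconstruction of the cluster-level savings estimate.
# LINKS_PER_GPU and PJ_PER_BIT_AOC are illustrative assumptions tuned to
# land near the ~6,000 MWh/year figure quoted above.
GPUS = 10_000
LINKS_PER_GPU = 3            # scale-up cables per GPU (assumed)
LINK_TBPS = 1.6
PJ_PER_BIT_AOC = 10          # retimed optics, per cable end (assumed)
PJ_PER_BIT_ARC = 3           # e-Tube ARC, per cable end (from the text)
HOURS_PER_YEAR = 8760

watts_saved_per_end = LINK_TBPS * 1e12 * (PJ_PER_BIT_AOC - PJ_PER_BIT_ARC) * 1e-12
total_kw = GPUS * LINKS_PER_GPU * 2 * watts_saved_per_end / 1e3  # 2 ends/cable
mwh_per_year = total_kw * HOURS_PER_YEAR / 1e3
print(f"~{mwh_per_year:,.0f} MWh/year saved")                 # ~5,900 MWh

# Context: ~10.5 MWh/year per average U.S. household; ~0.95 t CO2/MWh for coal.
print(f"~{mwh_per_year / 10.5:,.0f} households")              # ~560
print(f"~{mwh_per_year * 0.95:,.0f} t CO2 avoided vs. coal")  # ~5,600 t
```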
Cable Design and Manufacturability
An ARC comprises the following components:
- e-Tube RF SoCs: Monolithic RF transmitter and receiver SoCs with integrated antennae. These are the engine of the e-Tube platform, enabling scalable, multi-terabit data transmission. The RF SoCs are built using off-the-shelf, mature foundry processes and are manufactured in 300mm wafer fabs with ample capacity.
- e-Tube Core: An architected waveguide structure that serves as the transmission medium within the e-Tube platform. The waveguide is built with standard plastic material (no special compound materials required) and utilizes copper-cable manufacturing techniques to minimize capital expense and production costs.
- Metal Waveguide: Enables RF signal transition from the e-Tube RF SoC antennae to the e-Tube Core. The metal waveguide is designed with openings that align with the antennae locations and includes interior pathways to direct RF signals output from the RF SoC, through the metal waveguide, and to the e-Tube Core. The metal waveguide utilizes existing copper connector manufacturing techniques to minimize capital expense and production costs.
The e-Tube platform, from RF SoCs to cabling and connectorization, is architected to use mature manufacturing techniques. This enables scaling to high manufacturing volumes while minimizing production cost.
Figure 6. ARC components.
Technology Scalability
An e-Tube proof of concept in the Octal Small Form-factor Pluggable (OSFP) form factor has been tested to interoperate with data center switches and GPU accelerators at 1e-9 Bit Error Rate (BER) performance. The production product supporting 224 Gbps SERDES for 1.6 T OSFP cable bundles will support 10m reach at 1e-12 BER for short-reach intra-rack and cross-rack applications. The e-Tube Core’s flat frequency response allows the same core to scale with data center speeds, enabling equipment manufacturers and hyperscalers to reuse the cable for next-generation 448 Gbps data speeds.
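For a sense of what these BER figures mean in operation, the sketch below converts BER and aggregate line rate into an expected raw bit-error rate per unit time (whether the quoted figures are pre- or post-FEC is not stated here):

```python
# Expected raw bit errors per second at a given BER and line rate.
LINE_RATE_BPS = 1.6e12   # 1.6 T cable bundle

for ber in (1e-9, 1e-12):
    errors_per_sec = LINE_RATE_BPS * ber
    print(f"BER {ber:.0e} at 1.6 Tbps -> ~{errors_per_sec:g} bit errors/s")
```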
RF SoCs enable best-in-class energy efficiency of 3 pJ/bit at the 224 Gbps speed node. The e-Tube architecture enables RF SoCs to scale with standard foundry processes, continuing to increase SERDES speed while improving energy efficiency. This approach also allows the e-Tube roadmap to achieve beachfront bandwidth densities of 1 Tb/mm and higher for near-package and co-package e-Tube form factors, replacing the copper backplane in inside-the-rack GPU scale-up use cases.
Applications and Use Cases
ARCs enabled by the e-Tube technology platform are designed to be agnostic to network protocols, supporting Ethernet, InfiniBand, PCIe, UALink, and other proprietary protocols. This makes them an ideal interconnect solution for the high-performance, high-density compute fabric connecting GPU accelerators within AI cluster scale-up applications.
To expedite deployment in intra-rack and cross-rack use cases, ARCs will be designed to comply with the MSA-defined OSFP, OSFP Extended Density (OSFP-XD), and OSFP Double Density (OSFP-DD) form factors for 1.6 and 3.2 T data speeds. Beyond these form factors, high-density near-packaged and co-packaged e-Tube modules can be deployed next to switch ASICs and GPU accelerators to dramatically expand the high-speed I/O bandwidth connecting GPU accelerators inside the rack.
While data center AI interconnect infrastructure is the flagship use case, the technology’s value proposition makes it ideal for future network-centric autonomous vehicles, where high-bandwidth, lightweight interconnects between sensor arrays and control units will be essential.
Conclusion
Copper’s reign as the default short-reach interconnect is coming to an end. At 224 Gbps SERDES speeds and above, signal degradation, limited reach, thickness, and weight make it unscalable. Optics, while functional, bring prohibitive complexity, power overhead, and reliability concerns for short-distance applications.
Point2’s e-Tube offers a better path, one that embraces simplicity through an RF transmission approach without abandoning the cost and manufacturability principles that made copper ubiquitous. The architecture scales from 224 Gbps to 448 Gbps and beyond. The technology will deploy in common module form factors for intra-rack and cross-rack use cases and follow quickly with near-package and co-package form factors inside the rack, with power efficiency and reliability unmatched by other interconnect options.
For data center architects building terabit-scale compute fabrics, e-Tube delivers scalable, low-power, cost-effective interconnect infrastructure for generations of AI clusters to come.


