Marco Hogewoning

Do We Need a New IP?


A number of recent publications have addressed Huawei’s proposal for a new internet-like architecture, called “New IP”, which aims to develop a set of protocols that could replace the current Internet. We believe that any evolution of the Internet should be left to the IETF, and we want to explain why.

The proposal has been put forward in the International Telecommunication Union (ITU). As part of our ongoing participation in the ITU and the Internet Engineering Task Force (IETF), we have been tracking this and related work for a while now and used our ITU membership to send a response objecting to this course of action.

What is this all about?

You may have seen the article in the Financial Times (soft paywall) or blog posts regarding “New IP”, a proposal by Huawei for a new architecture to connect heterogeneous networks and systems in an internet. Although not stated explicitly, it is clear that the long-term vision is for this new architecture to supersede TCP/IP and replace the Internet.

The original proposal for New IP was made at an ITU meeting in September last year where, together with a proposal to set up a formal structure to work on this, Huawei presented some details of its vision of what a future internet would look like. Around the same time, a similar presentation took place at a side meeting organised alongside the IETF 106 meeting in Singapore. Since then, a number of similar presentations have been delivered at other meetings, including a number of ITU study groups and a virtual session organised on the side of IETF 107.

As is the case with many of these long-term visionary approaches, the proposals do not contain a lot of technical details and mostly limit themselves to discussing functional models. From several presentations made by the proponents (e.g. during IETF 106), it has become clear that the proposed architecture will use a number of existing technologies or build upon work and studies already undertaken in a number of other standards development organisations (SDOs) and industry consortia such as IEEE, 3GPP and the IETF itself.

At the same time, it has been made clear that the proponents envision departing from a number of key components of the current Internet architecture, in particular where it concerns addressing and forwarding. With that, they also depart from the core philosophy behind TCP/IP and the later Internet: an open and flexible system that is much more the result of decades of evolution rather than a single master plan.

Why now?

The ITU and its membership are preparing for the World Telecommunication Standardisation Assembly (WTSA), which is expected to be held in November in Hyderabad, India. These conferences are held every four years and mark the start of a new study period. During the conference, the membership decides what the ITU should be working on at a high level. We expect this proposal to play an important part in the discussions at and ahead of WTSA.

Should it be proposed?

The simple fact that you can change things doesn’t mean you should. As the current situation with COVID-19 has once again demonstrated, the Internet has become an integral part of our societies and economies, in ways far beyond what its creators originally envisioned.

As we wrote in our response, whereas the telecommunication infrastructure once enabled us to carry IP packets around the world, things have turned and most of the traditional telecommunications services are now reliant on the Internet and its protocols, or have been replaced by novel applications that were developed using the open framework provided by the Internet protocol suite and its layered model. This has provided an enabling environment which is often championed as “permissionless innovation”.

We need to carefully evaluate which venue would be the most appropriate to make changes to this model and, more importantly, a suitable governance model to discuss whether any changes need to be made at all, let alone how to best develop solutions for the problems that have or will come to light under a new model.

The most problematic and dangerous part of the proposal is not the technology, but the fundamental beliefs behind it, which represent a departure from the Internet’s fundamental values of openness, transparency and putting the end user in control. The current Internet was not so much designed as grown over time and often only documented ex post. The multistakeholder model wasn’t invented; it was the best description we had of how things had been working for decades already. It served a purpose in documenting how it was different from the traditional multilateral decision-making processes (such as those still practised by the ITU).

Staff from Huawei and Futurewei, its R&D branch, have made it clear on several occasions that they see New IP as an opportunity to redesign the governance model into a top-down structure. This is the case with the design and standardisation efforts, but also comes through in the network’s envisioned functioning. Despite the many claims of taking a decentralised approach, forwarding and access to the network itself would be controlled from centralised authorities who, for instance, would be able to signal subordinate network elements to block a particular data flow. This is much more of a fundamental shift than it first appears to be, as it would give control to the core of the network instead of leaving it to the end points, as we mostly do on the Internet today.

The technology’s virtual reality

As I have already mentioned, there isn’t much detail on the technology; certainly there are not yet any standards or specifications that would allow you to implement New IP. Having said that, there are some functional descriptions out there and several work items have already been proposed in existing ITU study groups that provide a glimpse of the future.

Most important is that New IP isn’t totally new, but appears to integrate a lot of existing technology developed by the ITU and, importantly, by a number of other SDOs such as IEEE, 3GPP and the IETF. The novelty is not so much in the technology itself, but in how and where New IP would be deployed.

We already noted in our response that some of the rationale behind the proposed changes is flawed. One example is the claim that the current IP address space of 128 bits would not be enough to cover all people and devices, even though the New IP architecture would itself be built on an initial base of 128-bit addressing. This base would then be extended into a flexible space in which the position of certain bits in the address would carry semantic meaning, or in which the address space would be extended with additional “headers” in front or behind.
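To put the “128 bits is not enough” claim in perspective, a back-of-the-envelope calculation shows the sheer scale of a 128-bit address space. The population figure of 10 billion is an illustrative assumption, chosen to be generous:

```python
# Back-of-the-envelope check on the "128 bits is not enough" claim.
total = 2 ** 128                    # size of a 128-bit address space
per_person = total // 10 ** 10      # shared across a generous 10 billion people

print(f"{total:.3e} addresses in a 128-bit space")  # ~3.4e38
print(f"{per_person:.3e} addresses per person")     # ~3.4e28
```

Even with tens of billions of devices per person, the space is nowhere near exhaustion; any practical pressure comes from how addresses are structured and allocated, not from the raw number of bits.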

If you have been around a bit longer, this might sound familiar. It somewhat resembles the initial thinking around IPv6 address allocations, such as that described in RFC 1887, which assumed a hierarchical model that the community has since moved away from. Also, while some people argue that the hard 64-bit boundary for interface identifiers is wasteful or inflexible, it is precisely what makes it possible to integrate lower-layer identifiers such as IEEE MAC addresses with ease, without the need to build custom extension headers or other modifications to the IP layer stack and forwarding systems.
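That integration of lower-layer identifiers is not hypothetical: the modified EUI-64 scheme from RFC 4291 maps a 48-bit MAC address directly into a 64-bit IPv6 interface identifier. A minimal sketch of the transformation:

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Derive an IPv6 interface identifier from a 48-bit MAC address
    using the modified EUI-64 scheme (RFC 4291, Appendix A)."""
    octets = [int(b, 16) for b in mac.split(":")]
    # Insert the fixed value 0xFFFE between the OUI and the NIC part,
    # then flip the universal/local bit (bit 0x02 of the first octet).
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02
    # Group into four 16-bit hextets, as in IPv6 textual notation.
    return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:04x}"
                    for i in range(0, 8, 2))

print(mac_to_modified_eui64("00:25:96:12:34:56"))  # 0225:96ff:fe12:3456
```

No extension headers, no changes to forwarding: the mapping fits entirely inside the existing 64-bit interface identifier. (Privacy extensions have since made MAC-derived identifiers less common, but the point about the boundary's flexibility stands.)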

Another persistent argument has been that the current Internet would not have the capacity or provide the low latency required to support future applications such as holographic video calling or remote surgery. While I might agree that the overhead associated with using IP is not suitable for all environments – especially those networks which have limitations on resources such as bandwidth or power – I also don’t see how this new architecture would avoid such problems. The speed of light is fixed, and error correction will also take time and consume bandwidth, regardless of the layer in which it is done. There might be room for improvement, but I don’t see why that work cannot be done in IEEE or 3GPP, which are already working on it and have the required expertise.

Finally, as was flagged in a recent presentation about the security aspects of New IP, it’s expected that the availability of quantum computers will dramatically change the encryption landscape. The ultimate solution will come when we manage to find and implement cipher suites that are more robust and hardened against these new capabilities. It’s a stretch to blame the network protocols for not implementing something that has yet to be developed. It also completely ignores the fact that current protocols, such as DNSSEC or TLS, will be able to adopt new algorithms relatively easily as they become available. This property is also highlighted in ITU Recommendation Y.4807 (Agility by design for Telecommunications/ICT Systems Security used in the Internet of Things), which cites several existing Internet protocols as examples.
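That algorithm agility is built into the protocols themselves: DNSSEC, for instance, carries an algorithm number in every DNSKEY and RRSIG record, drawn from an IANA registry, so new algorithms can be added without touching the protocol. The sketch below illustrates the idea with a few real registry entries (the `describe` helper is illustrative, not part of any standard):

```python
# DNSSEC signals its signing algorithm as a number in each DNSKEY/RRSIG
# record, so new algorithms slot in without changing the protocol itself.
# A few entries from the IANA "DNS Security Algorithm Numbers" registry:
DNSSEC_ALGORITHMS = {
    5: "RSASHA1",           # early deployments
    8: "RSASHA256",         # widely used today
    13: "ECDSAP256SHA256",  # elliptic-curve, smaller keys and signatures
    15: "ED25519",          # newer still; added without any protocol change
}

def describe(algorithm_number: int) -> str:
    # A validator that doesn't recognise a number treats the zone as
    # insecure rather than failing hard: unknown algorithms degrade safely.
    return DNSSEC_ALGORITHMS.get(algorithm_number,
                                 "unknown (treated as insecure)")

print(describe(13))  # ECDSAP256SHA256
print(describe(23))  # unknown (treated as insecure)
```

A post-quantum signature algorithm, once standardised, would simply be assigned the next free number in the same registry.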


Do we need New IP? I don’t think we do. Although certain technical challenges exist with the current Internet model, I do not believe that we need a whole new architecture to address them. I believe the current Internet governance model is especially well-suited to address those needs: one in which we can reinvent and build upon existing structures and implement changes if and when they are proven to be useful.

This is the model we have been working with for the past 40 years, and it has proven over time that the Internet is not a static design but can and will evolve to accommodate uses far beyond what its founders originally envisioned.

If there is a need to evolve the IP protocol layers or other aspects of Internet technology, it should be done by established SDOs that have control over the current standards and which have the expertise to develop new solutions. More importantly, that work should be done “the Internet way”: in an open and transparent model, driven by the needs of the network’s users and with contributions from all stakeholders. This was the main argument we made in our formal response.

Can you help?

Many people and organisations have reached out to me personally as well as to the RIPE NCC about this. For now, what is most important is that we, as an industry, state our needs and let decision makers know that New IP is not what we need. Talk to your government representatives at the ITU and elsewhere and make sure they understand that this proposal is not about a real need for new technology, but about trying to alter the governance structure of the Internet – one of the most fundamental aspects of this hugely important technology that has made it the success it is today.



About the author

Marco Hogewoning is acting Manager Public Policy and Internet Governance with the RIPE NCC. As part of the External Relations department, he helps lead the RIPE NCC's engagement with membership, the RIPE community, government, law enforcement and other Internet stakeholders. Marco joined the RIPE NCC in 2011, working for two years in the Training Services team. Prior to joining the RIPE NCC, he worked as a Network Engineer for various Dutch Internet Service Providers. As well as designing and operating the networks, he was also involved in running the Local Internet Registries. During 2009 and 2010, Marco worked on introducing native IPv6 as a standard service on the XS4ALL DSL network. In November 2010, this project was awarded a Dutch IPv6 award. More recently, he has contributed to the MENOG / RIPE NCC IPv6 Roadshow, a hands-on training initiative in the Middle East. Marco has been involved with the RIPE community since 2001 and was involved with various policy proposals over that period. In February 2010, he was appointed by the RIPE community as one of the RIPE IPv6 Working Group Co-Chairs.
