Archives

These are unedited transcripts and may contain errors.


PLENARY SESSION,
24TH SEPTEMBER, 2012, AT 2 P.M.:

ROB BLOKZIJL: All right. I think it's time to start. If people who are still standing at the back can find a seat, then we will start. Good afternoon, my name is Rob Blokzijl, I am the Chairman of RIPE and I extend a warm welcome to all of you for the 65th RIPE Meeting in Amsterdam.

It's a bit of a special meeting this week. 65 is an age at which, in many countries, people retire and start enjoying their pension. It's also this week that IPv4 sort of retired in our area. It's on pension now; like all pensioners, hopefully it will live for a couple more years, but the next generation is knocking on the door, has been knocking for many, many years, and I think the time is RIPE for everybody to realise that the future is with IPv6 and nothing else. That might still come as a shock to some of you, but you have been warned for 15 years or so. But this time it's for real. The barrel is empty... well, there are a few scrapings left and we have a few remaining policies left, but for your everyday consumption, it's gone, and if you become hungry for address space very soon, it's IPv6. You might have to swallow a little bit on that, but there are enough people in the room already who can tell you that it's not only an acquired taste, but once you have tasted it, you never want to go back.

Right... there is not much more for me to say. We have a full programme as usual. There are a few interesting bits for discussion later on this week. I expect the Address Policy Working Group to do some soul-searching as well. For those of you who form the RIPE NCC, the members: at the membership meeting on Wednesday you can have your final say about the financial future of the RIPE NCC. There are a few other interesting bits this week... no, that's not true, there are only interesting bits this week. So, without further ado... no, there is one thing I want to mention. You may have noticed we have a different type of badge; we now have a multipurpose, multimedia badge, which contains all the information that used to be in your registration pack. It's the first time we have done it, and I have two remarks. First, the registration team would like to hear some feedback, so if you think it's a nice idea, pass by the registration desk and say "great". Second, spot the mistake in the programme: it gives the impression that all afternoon sessions start at 2:30 in the afternoon. That is the mistake; they start at two o'clock, as usual. They have been doing so for 64 RIPE Meetings, so it's easy to remember: two o'clock is the start of the afternoon sessions.

So, without further ado, I wish you all a pleasant week, a joyful week and a very profitable week, profitable in the sense that you contribute a lot to the community and you receive a lot from the community. So, without further ado, this RIPE Meeting is open and I hand over to the Chair of this afternoon's first session, Jan Zorz, who was immediately relieved of his microphone, because now the Chairman of the Programme Committee, Todd Underwood, has a few words to say.

TODD UNDERWOOD: I am Todd Underwood, Chair of the Programme Committee. Just a couple of quick things. Thing number one: there are elections for members of the Programme Committee. The Programme Committee has expanded from the paltry eight members we had to 12, count them, 12 members. What that means is that there are five open slots to be selected at this RIPE Meeting. There will almost certainly never be a better chance to join the RIPE Programme Committee. So if this is something that interests you, please send a picture of yourself and a bio/statement of interest to pc [at] ripe [dot] net by lunch on Thursday. After that, we will place candidates into an electronic election system. Secondly, the election system requires that you have a RIPE NCC Access account. But you already have this account, or you want it anyway, and the reason you want it is that it enables you to rate talks. And I want you to rate talks. But why do you want to rate talks? You want to rate talks because Serge is going to give you, well, maybe not you, but maybe you, one of two €100 gift certificates tomorrow if you rate talks today. So, the moral of this portion of this presentation is: get an NCC Access account if you don't have one already. Sign in, look at the Plenary programme, which will be magically transformed in front of your eyes into a talk-rating system with which you will then rate talks, which will tell the current Programme Committee what you think. And it enters you for a chance to win one of two €100 gift cards, so this is fantastic. Then you'll be able to vote for people on Friday as well.

That was the first thing. On ripe65.ripe.net, in the upper right corner, there is a "sign in with RIPE NCC Access" link, and you can create an account; anyone can do so. On the agenda, we mentioned the apparent confusion about the time: the time in bold is correct, so things start at 14:00.

I think that's all I have. So, rate the talks. Consider joining the Programme Committee, and Jan will introduce our first speaker. Thank you very much.

CHAIR: Hello, my name is Jan Zorz and welcome to RIPE 65. I hope you recovered from RIPE 64 in Ljubljana. We repaired the damage; everything is back to normal. I would like to introduce the first speaker, who comes from my homeland. We received very good feedback on his talk at the last RIPE Meeting. He will talk about OpenFlow and SDN, and here he comes: Ivan Pepelnjak. Thank you.

IVAN PEPELNJAK: Okay. So, as you probably know, last March there was an explosion of hype around something called OpenFlow and the Open Networking Foundation and something called SDN. And if you had been active in the community, you probably had heard about OpenFlow for a few years before that, because that's a great tool the academics use to model new ideas and new protocols on production networks. Now, why is it, all of a sudden, of interest to everyone here? Because we are, after all, service providers and not someone who would be interested in academic ideas, at least not the theoretical ones. Before I go into details I have to warn you: I have been in this industry for way too long, so I have seen every idea reinvented three or four times, and OpenFlow is no different, and that irritates me. Or, as someone else said a while ago, if everything you look at starts looking like LAN emulation, you have been in this business for too long.

Okay, what is OpenFlow? Let's start with this one first. As you know, every box we have in our networks today has at least three different planes: the management plane, where you configure it; the control plane, where it tries to figure out who the neighbours are, what they should do, how we should propagate packets around the network; and finally, when we agree what the topology is and what the best paths are, we build the IP routing table and download that into the forwarding plane, or data plane if you wish. That's how we have been working for the last 30 years. Well known, well understood.

Now, OpenFlow has this great idea. Well, the idea is this: Let's take the control plane, let's rip it out, let's put it somewhere else. So, let's make the boxes in the network totally dumb; they will have no control plane, they will have no control-plane protocols, there will be no OSPF, there will be no LACP, there will be no BFD, there will be nothing. These boxes will be totally dumb. And the only device in the network that will have some intelligence will be the dedicated OpenFlow controller. Well, actually, these switches still have to communicate with that controller, so they need IP connectivity, which means that they have to run something like DHCP. They have some out-of-band management LAN to reach that controller. But apart from that, it's totally out of band, so the controller makes all the intelligent decisions and the boxes themselves are totally stupid. That's the basic idea, and the proponents are saying, well, you know, these routing protocols are distributed things, they are so complex, they are so unreliable, let's make things centralised. Because then we can control the whole environment. Does it make sense? As you will see, maybe not.

How does this work? We have all sorts of message types in the OpenFlow standards. By the way, now we are at version 1.3, so it's the fourth revision of the standard. It has been totally changed in the meantime, so 1.0 is totally incompatible with 1.3, by the way. But it doesn't matter, and the message types are simple things like: give me the configuration of the switch, tell me what features you support and, most importantly, allow me to download data into your forwarding table. This is the essence of OpenFlow. OpenFlow is actually a TCAM download protocol. It's a protocol that allows someone from the outside to download data into the forwarding table of a switch. That's all there is. There is another goodie: if the hardware supports that, you can have statistics assigned to each forwarding entry. That's why some service providers love that. The moment they start using the right hardware and OpenFlow, they get automatic billing information out of these statistics. Okay?

And because sometimes, if we have multiple devices in sequence, we have to know which one has the proper forwarding tables, they had to solve something like transactions, which are called barriers here. Now, moving deeper down into the details of this TCAM. OpenFlow switches are like Java virtual machines, if you wish: a hypothetical thing that can match on a large number of fields and perform some actions. And then it's the job of the actual switch vendor to transform that hypothetical thingy into the actual instructions in the TCAM. Okay? And we can match on almost anything you can imagine. You can match on source and destination MAC addresses, IPv4 or IPv6 addresses, VLAN tags, provider backbone bridge tags, you name it, and you can do header rewrites, you can do source or destination IP rewrites, you can do port rewrites, you can do anything you wish. So, in principle, anything that you have been able to do on a networking device before can now be programmed through OpenFlow. Okay? Makes sense?
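
The match-plus-actions flow entry described above can be sketched in a few lines of Python. The field names and the entry layout here are illustrative, not the actual OpenFlow wire structures, and the lookup runs in software, which is exactly the slow path the talk warns about:

```python
def matches(entry, packet):
    """A packet matches if every field in the entry's match dict agrees;
    fields absent from the match act as wildcards."""
    return all(packet.get(field) == value for field, value in entry["match"].items())

def apply_actions(entry, packet):
    """Apply the entry's header rewrites, then return the rewritten
    packet and the output port."""
    pkt = dict(packet)
    out_port = None
    for action in entry["actions"]:
        if action[0] == "set_field":      # header rewrite, e.g. src/dst IP or port
            _, field, value = action
            pkt[field] = value
        elif action[0] == "output":       # forward out a given port
            out_port = action[1]
    return pkt, out_port

# An entry a controller might download: match on destination IP,
# rewrite it (NAT-style) and forward out port 3.
entry = {
    "match": {"eth_type": 0x0800, "ipv4_dst": "203.0.113.10"},
    "actions": [("set_field", "ipv4_dst", "10.0.0.5"), ("output", 3)],
}

packet = {"eth_type": 0x0800, "ipv4_src": "192.0.2.1", "ipv4_dst": "203.0.113.10"}
rewritten, port = apply_actions(entry, packet) if matches(entry, packet) else (packet, None)
```

Hardware TCAM does the same match in one parallel lookup; the point of the sketch is only the shape of the data a controller downloads.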

Well, it actually doesn't. This thing is way too rich. The problem is that this is not how the hardware in the switches is built. We don't have TCAMs with this many entries. Our TCAMs are smaller. So the problem we are facing today with OpenFlow is that, yes, we can specify all these things, but then most of them work in software, not in hardware. Which reduces the performance by a factor of 100 or so. Just to give you an example, and I'm not picking on HP in any way.

HP has OpenFlow in all ProCurve switches. When I was reading the release notes, they were able to do layer 3 matching in hardware and layer 2 matching only in software. So if you were matching on IP addresses in these flow entries, then it was done by the chipsets. If you were matching on MAC addresses, it was done in software and really, really slowly. Why is that? Because no one has hardware in their chips that would allow them to match on a source MAC address. Who would ever need that? We always match on destination MAC addresses, not on source ones. So, if someone wants to program a flow entry that says match the source MAC address, we can't do it in hardware. It goes back to the software. And it's slow.

And by the way, there is one other thing: if the switch doesn't know what to do, it can send the packet to the controller, and this is, for example, how some people will implement load balancers. For every new flow, the controller will download the flow information into these devices and then we will have load balancing. Yeah, sure, that's chaos.

Okay. And by the way, because these devices are totally dumb, they can't even do topology discovery. The device can't figure out who is connected to it. So what happens? The controller tells a device: please send whatever packet, LLDP in my case, out this particular port. And then, because the next device is as dumb as this one, it doesn't know what to do with that packet, so the packet lands back at the controller. Okay? And now the controller says: ah, I see, I sent the packet through that port and it arrived through this port, so they must be connected. And so on and so on and so on and so on. So all the intelligence we had in the switches for the last 30 years is supposed to be gone now. And everything should be moved to the controller. That makes perfect sense, right?
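
The discovery dance described above can be sketched like this. The switch names, probe format and handler are all made up for illustration; real controllers do LLDP-based discovery in essentially this shape, just with real packets:

```python
def make_probe(src_switch, src_port):
    """The controller encodes the probe's origin in its payload,
    as real LLDP-based discovery does."""
    return {"type": "lldp", "origin": (src_switch, src_port)}

def handle_packet_in(links, probe, in_switch, in_port):
    """Called when a dumb switch punts the probe back to the controller:
    the (origin, arrival) pair must be a link."""
    src = probe["origin"]
    dst = (in_switch, in_port)
    links.add(frozenset([src, dst]))   # undirected link
    return links

links = set()
probe = make_probe("sw1", 2)
# ... the controller tells sw1 to emit `probe` out port 2; the neighbour
# sw2 doesn't know what to do with it and punts it back on port 7 ...
handle_packet_in(links, probe, "sw2", 7)
```

One probe per (switch, port) pair, round-tripped through the controller, rebuilds the adjacency that LLDP between the switches used to give for free.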

And the first thing someone claimed at the world launch of this is that with OpenFlow you don't need routing protocols any more. Yeah, sure...

Anyone who has ever seen a frame relay or ATM network knows how false that is.

So, there are some drawbacks to routing protocols and we are all only too aware of them. But there are some benefits, like: they are distributed and they work. And when you start talking to the OpenFlow guys, you start asking them simple questions like: how do you do reliability, how will you do auto-discovery, how will you do fast convergence, how will you do BFD? Imagine doing BFD from the controller to the switch to the other switch and back to the controller. That will have some fast response time, right? So, it turns out that yes, you can do that, but you have to reinvent all the wheels that have ever been invented, because most of them were invented for a reason.

So, as RFC 1925 says, every idea will be reinvented and reproposed again, regardless of how good it is.

So, there has been Frame Relay and ATM, there has been SONET and SDH, there is MPLS-TP. Is anyone using ForCES? Thought so...

So, all these things are identical to OpenFlow in a way. All of them expect a central controller to download forwarding information into individual switches, and we know how well that works. And there is always the same set of problems. Scalability: the OpenFlow proponents wanted to install a flow entry for every TCP session into every switch. Yeah, that scales...

And then the hardware vendors tell you that they can only install 1,000 flows per second in a switch. Yeah, that scales really well. So, people that have actually faced these problems in real life have already come to the conclusion that we need two OpenFlows: the edge OpenFlow, where we will deal with five-tuples, and the core OpenFlow, where we will deal with MPLS labels. So we are back to the old world.

And then there is the problem of feedback loops, fast convergence, running things like line-card protocols through the controller. That doesn't make sense. By the way, the biggest commercial implementation that I have ever heard of is from NEC; they have an OpenFlow controller that controls data centre switches, and what they can do at the moment is 50 top-of-rack switches. That's how many they can control with one controller. Well, not much. That's like 2,000 ports, maybe.

The important difference this time, as compared to, for example, ForCES, is that this time there is enormous customer pressure. For example, in the US, OpenFlow is a checkbox on some of the academic tenders, so if you don't have OpenFlow, you can't sell your boxes. And therefore, every single switch vendor has something that they claim is OpenFlow-enabled.

Anyway, now that you know what OpenFlow is (remember, it's a very simple protocol to download data into TCAM), let's see what SDN is. And I have real fun with this acronym, because I can't remember the days when networking was not software defined. Or maybe I was too young for that. IBM had mainframes, front-end processors and control units, and all of them were running software. But, anyway, what does SDN stand for?

There is this Open Networking Foundation, that's the group that forces the vendors to implement OpenFlow, and they have this great idea in their architectural paper saying that the control plane will be completely separated from the forwarding plane... I mean, this thing works great in PowerPoint. Then you have vendors like Arista. They are notorious. Whatever they do is SDN. They have a new feature: it's SDN. They have this other feature: it's SDN. They have something else: it's SDN. Whatever they have, it's SDN-washed. And startups are even better. I mean, you have startups that, I don't know, someone was doing load balancers, I mean, load balancers, and they call them SDN appliances. But that does attract venture capital funds, so it works.

And some vendors, including Cisco, by the way, think that if you have an API, that's SDN. Well, no...

Okay. Where does SDN as a concept make sense? As we all know, there are things we do well. The Internet works. So layer 3 destination-only forwarding: we know how to do it, it works well. Then there are things that we don't do so well. Large-scale layer 2 forwarding we don't do well, and it's not just the spanning tree, it's the flooding. So, if someone would do layer 2 forwarding without flooding, that would make sense. From the last RIPE Meeting, there was this great presentation about MPLS traffic engineering auto-bandwidth problems. So placing traffic engineering tunnels on a network is a tough problem. Theoretically, it's NP-complete; it's a knapsack problem. Doing this from a distributed platform, like switches and routers, gives you the results that we all love to hate. Now, imagine that you would be able to compute all the tunnels in advance and download them into the switches. And don't tell me some people have been doing that for 15 years, because I know that. But now, with OpenFlow, this is so much better. And there is always the routing of elephant flows. So these are the things that we do, but we don't do them so well. So this is where OpenFlow might make sense. And then there are things we don't do at all, or we don't do well.

Like distributing security policies on thousands of devices, particularly if the devices come from different vendors. That's a nightmare. And Indiana University, for example, is solving this nightmare with OpenFlow, and they are doing a really great job of solving that problem. So what they do is, the moment a new port is enabled, a user-facing port, they redirect all that traffic to a controller. The controller authenticates the user and then downloads an ACL onto that port as OpenFlow flow entries. So, you can dynamically identify users, and you know that, for example, some Apple devices don't support some of the protocols we use to authenticate users on wireless, so you can authenticate any device using any protocol that device supports, then look up the security policy for that particular user and download it to that switch.
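
The per-user ACL workflow described above might look roughly like this. The policy table, the entry format and the authentication stub are all invented for illustration; they are not Indiana University's actual implementation:

```python
# Hypothetical per-user security policies, keyed by user name.
USER_POLICIES = {
    "alice": [{"match": {"tcp_dst": 22}, "action": "drop"}],   # no SSH for alice
    "bob":   [],                                               # no restrictions
}

def authenticate(credentials):
    """Stand-in for whatever protocol the user's device actually supports."""
    return credentials.get("user")

def flows_for_port(switch, port, user):
    """Turn the authenticated user's policy into flow entries scoped
    to the port they appeared on."""
    flows = []
    for rule in USER_POLICIES.get(user, []):
        match = dict(rule["match"], in_port=port)
        flows.append({"switch": switch, "match": match, "action": rule["action"]})
    # Default entry: stop punting this port's traffic to the controller.
    flows.append({"switch": switch, "match": {"in_port": port}, "action": "forward"})
    return flows

# A user-facing port comes up, traffic is punted, the user authenticates,
# and the controller pushes the resulting entries down to the edge switch.
flows = flows_for_port("edge1", 5, authenticate({"user": "alice"}))
```

The interesting part is that the policy follows the user, not the port: the same entries would be computed wherever alice plugs in.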

Load-based forwarding adaptation you could theoretically do with OpenFlow, because you could reroute the elephant flows in real time; or policy-based routing: with OpenFlow you could download those policies consistently across the network, or at least on the network edge. So the best approach at the moment seems to be to combine SDN and OpenFlow with some traditional mechanisms.

And almost everyone has already realised that. So, there are still people who are pushing this OpenFlow-only approach and, as I said, the most they got so far was 50 switches. There are people who use OpenFlow and then, of course, we have vendor-specific extensions. That makes for great compatibility. So, instead of doing line-card functionality like LLDP or BFD through the OpenFlow controller, you can do that on the line card and then just use OpenFlow to download the flow information.

Academics use this, the ships-in-the-night approach, where you run two forwarding planes in the same switch. One is controlled with the traditional management and control plane, so half of the switch is a traditional switch, and then the other half is OpenFlow-controlled. And you can say these three ports are controlled through this OpenFlow controller, and those two ports are controlled through another OpenFlow controller, and the rest of the switch works as usual. Some vendors can do that only on a per-port basis; at least Brocade can do that on a per-VLAN basis. So you could have some switches that you can play with, and a VLAN connecting two switches, so that you have an end-to-end OpenFlow-controlled system. That's great for the academics to test new protocols.

And this is what Juniper is doing. Juniper is saying: you know what, existing protocols do make sense, but there are things we just can't do with them. For example, I would like to insert an additional route that points that way, but without BGP or OSPF or whatever it is; or I would like to install a new ACL on a port, but not permanently, not through the router configuration. So, they are using OpenFlow to create temporary state instead of modifying the router configuration.

Another really good use case, and this is where OpenFlow makes perfect sense: don't do it in the core, because the core we know how to do; do it on the edges. Nicira is a company that has built a solution that supports MAC-over-IP virtual networks. So, layer 2, but not with VLANs; with IP as the underlying infrastructure. And what they are doing is, they have this central controller, and, for example, it can be integrated with OpenStack, so the central controller knows every MAC address of every virtual machine in the whole network, and now this controller can download the forwarding information to individual virtual switches. And they can do a lot of really cool tricks with this, because in these virtual switches this is all software-based, so you can do whatever you wish there. It's just software lookups.

By the way, OpenFlow is just one of the tools that you need to implement something; some people usually forget that. Remember, OpenFlow is a TCAM download protocol. Now, who will configure the switches? Okay?

So, there are a number of other tools in the toolbox, OpenFlow being one of them, and NETCONF for configuration management. Even the Open Networking Foundation has started a configuration Working Group, which is using NETCONF for OpenFlow switch configuration, and what this Working Group actually did was create their own YANG schema for how to configure OpenFlow devices. So, we are using NETCONF, we are using YANG, and, by the way, this is the data model that you can use to configure OpenFlow switches. Okay?
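
To make the division of labour concrete: a NETCONF client configures the switch (for example, which controller to talk to), while OpenFlow later programs its forwarding table. The `<rpc>`/`<edit-config>` envelope below follows RFC 6241, but the payload element names and namespace are invented for illustration; they are not the actual OF-Config YANG schema:

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><running/></target>
    <config>
      <!-- Hypothetical payload: point the switch at its controller. -->
      <openflow-switch xmlns="urn:example:params:xml:ns:yang:of-switch">
        <controller>
          <ip-address>192.0.2.100</ip-address>
          <port>6633</port>
        </controller>
      </openflow-switch>
    </config>
  </edit-config>
</rpc>
```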

And there is a totally new initiative, the Interface to the Routing System (IRS); that's within the IETF, but it hasn't yet become a Working Group, so right now it's only a few people using an IETF-based mailing list. And this thing has moved one layer above OpenFlow. So, instead of manipulating TCAMs, IRS is supposed to be manipulating the routing tables and interacting with the routing protocols. So, think of IRS as yet another routing protocol: yet another source of routing information that integrates with the other routing protocols to build the routing table.

So, for example, when you are doing remotely triggered black holes for denial-of-service attacks, instead of using BGP to send them down into the boxes, you could use IRS to do the same. And by the way, if anyone is interested, a draft with numerous use cases was published a few days ago, so if you are interested, go and read that draft.

And then, of course, you have vendor APIs. Every single vendor has some APIs that you can use, and no two of them are alike. Those vendors that run UNIX or Linux behind the scenes usually give you Perl or Python; some others, like Cisco and F5, give you something of their own; Juniper is big on XML. And Arista, this is interesting, they have XMPP, where the switches talk to a central chat room, if you wish, and you can tell the switches what commands to execute, and then they execute the commands and report the results back. Which is cool. I mean, it's an amazing use of a very simple technology to give you some immediate results.

As I said, because of the pressure from the customers, almost every switch vendor has an OpenFlow product. HP was definitely first, with ProCurve. Brocade, I think, launched theirs in the summer on the MLX; that was a 100-gig OpenFlow-enabled interface, if you need that. IBM has an OpenFlow switch. NEC has their own switch that they use with their controller as well. Juniper has an SDK for the MX series. Cisco has announced OpenFlow on the Nexus 3000, I think, initially. And there are all sorts of smaller vendors that make really cheap OpenFlow-enabled 10-gig switches.

On the software side, you have Open vSwitch, which is now part of the official Linux kernel. So in every single Linux distro you will have an OpenFlow switch in a few months, when this gets propagated down. And Xen, for example, is already using that for their security management tool. There is a reference implementation on NetFPGA. If you are interested, there is an OpenWrt image for low-end routers. If you want to play with a large network, there is Mininet, which is an emulation tool. On the controller side, there is not much. As I told you before, implementing a new TCAM download mechanism in a switch is a simple thing to do. Creating a controller that will replicate everything we have learned in the last 30 years is a hard thing to do.

Big Switch Networks has been talking about their controller for a year or so. It's still a mythical beast that few people have seen. NEC has their own OpenFlow controller that they use for data centre switches. It does do some nice integration with VMware, but, for example, it has no layer 3 routing protocols, so when I said, well, what about the routing protocols? They told me: use static routes. Yes, sure...

What about LACP? Well, use static port channels. That's safe. How about spanning tree? Oh, don't worry, we pass it transparently.

So, I'm really amazed that they got this thing working at this scale. But there are a lot of things missing there. And Nicira, their network virtualisation platform, is really an edge application, so they avoided all the problems we had to deal with in the last 30 years and they do something on the edge, where it really makes sense. On the other hand, if you really want to play with this stuff, there are a lot of open-source controllers; you can choose Java or Python. RouteFlow is an interesting one. The whole idea behind RouteFlow is like this: I don't want to have my edge routers peering with all my customers' routers over BGP. I want all the routers of my customers to peer over BGP with a central route server. So, I only have to configure things and routing policies once. And, you know, you could have been doing that with EBGP multihop sessions, but these guys are saying, no, no, we can do that without EBGP multihop. Because if the first switch is an OpenFlow switch, it can encapsulate the BGP packet and send it to a controller through a tunnel. So, the route server that I'm running in the core of the network appears to be directly connected to all customer routers. Okay?
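
The RouteFlow-style trick described above amounts to one classification decision on the edge switch: punt BGP towards the central route server, forward everything else normally. The function and decision labels below are purely illustrative, not RouteFlow's actual code:

```python
BGP_PORT = 179  # BGP runs over TCP port 179

def classify(packet):
    """Decide what the OpenFlow edge switch should do with a packet:
    BGP sessions get encapsulated and tunnelled to the central route
    server; ordinary traffic is forwarded as usual."""
    if packet.get("tcp_dst") == BGP_PORT or packet.get("tcp_src") == BGP_PORT:
        return "tunnel-to-route-server"
    return "forward-normally"

bgp_open = {"ipv4_src": "198.51.100.2", "tcp_dst": 179}   # customer BGP session
data_pkt = {"ipv4_src": "198.51.100.2", "tcp_dst": 80}    # ordinary traffic
decision_bgp = classify(bgp_open)
decision_data = classify(data_pkt)
```

Because only the BGP packets take the detour, the customer router believes the route server is a directly connected neighbour, with no EBGP multihop configuration anywhere.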

Now, what happens if you lose half the network and the controller is in the other half? Well that never happens, right?

So, a good idea, but I have a problem with having centralised control.

And the other problem these guys are having is that, after they have computed the routing tables, they have to download them to the switches, and these are not switches, these are routers. And the high-end routers don't have TCAMs; they have some sort of trees to implement prefix-based routing. And mapping a 12-tuple onto a tree that is implemented in the actual hardware of a high-end router is an interesting problem. So, I don't know how far they got with this idea.

Anyway, I think this is my last slide... no, there is one more.

So, SDN is definitely an interesting concept. Google definitely did some really, really, really cool stuff with that, and I think Todd was telling you about that during the last RIPE Meeting. But remember, OpenFlow is just one of the low-level tools you need, and we'll probably see the whole thing in large-scale data centres initially; Facebook is really involved in this, Google is involved in this. The whole thing, from my perspective as a regular end user, is still pretty immature. We are missing the controllers and, more importantly, no one has even started working on a northbound API. Because, without a northbound API from the controller to the rest of your applications, you have just traded the lock-in. Today, you are locked into Cisco or Juniper or whoever your vendor is. Tomorrow, those boxes all speak OpenFlow, but you have an OpenFlow controller that has a proprietary API, so now you are locked into the controller vendor. Unless you are Google and you write your own controller, in which case you own your destiny.

As I said, it has already crossed the academic-to-commercial gap, so we are seeing a lot of products, primarily on the switch side. And by the way, here is some more information, the websites that you can use to find additional information. This is how Scott Adams envisions OpenFlow. When I saw this, I was so amazed...

So that's all I have. If you have any questions, ask them now, or send them to my e-mail or send me a tweet, and I'll try my best to answer you.

CHAIR: Thank you, Ivan. That was great.

(Applause)

CHAIR: Thank you, Ivan, for this really great presentation. I love the ending. And I see we have Randy at the mike, probably with a question.

RANDY BUSH: Funny thing. Randy Bush, IIJ. The telcos didn't believe that datagrams would work. But here is one right here, one of my favourite telcos: the Internet will never work, circuits are what's needed. So we gave them MPLS. ATM didn't work, so they tried ATM 2, and it kind of works, and those who use it don't. They do not believe in distributed algorithms. They cannot believe that routing protocols work. Okay. They believe in centralisation, highly reliable devices, etc., when, in fact, the Internet is building a reliable network out of unreliable devices. Okay. So, we now have centralisation and fear of distributed routing algorithms, and those people who fear those things and need centralisation will do it, and they will have control failures more often than they have management reorganisations. And yes, I'm thinking of AT&T. And this is going to be life. The vendors need some magic pixie dust to sell new equipment, so they will back it, and life will go on. I work in a network that doesn't have MPLS and doesn't have OpenFlow. We all have choices.
(Applause)

CHAIR: Thank you, Randy. Do you have any opinion on that?

IVAN PEPELNJAK: No. Apart from agreeing with him.

CHAIR: That was an unusually long comment from Randy. Thank you.

AUDIENCE SPEAKER: Maksym Tulyuk, Amsterdam Internet Exchange. Was there any look at the OpenFlow model from a security point of view? Because we have one controller, and if some guy gets access to it, he immediately gets access to the whole network, which is practically impossible in a modern network.

IVAN PEPELNJAK: So, this is true for every semi-centralised system; if someone gets access to your ATM network management station, you have the same problem. The first thing that you should start with is an out-of-band control-plane network. So, don't mix the user traffic with your OpenFlow sessions. You should have a totally separate out-of-band network for the OpenFlow sessions.

The other thing that they did was implement TLS on those OpenFlow sessions, so, SSL; if you want, those sessions can be encrypted. Apart from that, I totally agree with you. If someone gets to the central controller, and it could be in any organisation, you are gone.

AUDIENCE SPEAKER: Rudiger Volk. Ivan, I wonder whether you can give us some hints on what the customer demand that you are pointing to actually means semantically. You used the words "checkbox item", and I remember, 20 years ago, well, okay, not talking about what was happening around here, but over the ocean, there was a checkbox item by a very potent customer demanding to get GOSIP, and we know what came out of it.

IVAN PEPELNJAK: So, the first thing that I'm... well, you should tell us about OpenFlow at Google. That's a customer.

RUDIGER VOLK: Well, customer demand from customers who cannot do their own controller because that's really a very specific segment I think.

IVAN PEPELNJAK: Oh, it is. So there are the academic customers in the US who want to build the next-generation Internet, the academic Internet, so that they can play over it. And instead of giving them GRE tunnels to play with, they are giving them OpenFlow circuits to play with.

The only other use case that has real deployment is Nicira's NVP, and that's because they have solved the problem of layer 2 flooding over very large virtualised multi-tenant IaaS networks. But to call them an OpenFlow company is an exaggeration. They needed some mechanism to download the information from the controller to the switches and they just chose OpenFlow because it was available.

CHAIR: So, the unknown gentleman in the front.

TODD UNDERWOOD: So, with this presentation, I think it's useful to step back and say, sort of, what is the problem we are trying to solve here, and what is the approach to that problem? I think that there is an unreasonable amount of FUD about this right now. So, for example, Randy's comments are sort of bizarre to me, in that he attributes this to three communities of people, none of whom are actually interested in OpenFlow, as being the primary motivators for this. Vendors were dragged into OpenFlow rather than moving into it aggressively. These other constituents, and specifically telcos: I am not aware of any telco who is actually that interested.

IVAN PEPELNJAK: Deutsche Telekom.

TODD UNDERWOOD: From what I can see, this is being driven by people who run large, single-tenant data centres that are already layer 3 routed and that have a serious traffic engineering problem that's not solvable with current distributed routing. As we know, distributed routing protocols work wonderfully except for traffic engineering, which they have never solved and which may not be solvable with a distributed protocol. If it is, I haven't seen a good solution and I suspect nobody else here has either. So I think, from my perspective, what's interesting about this is to think about either: this came from nowhere and nobody ever wanted it and it's just stupid, made-up nonsense, just as IPv6 was, or... anyway, no, that was the other check box, sorry, we are back to that. Or, there is some reason smart people think this is necessary for their networks, and we should consider what they say their reasons are. I think the real question is: are we going to have open source, reliable controllers that are sufficiently feature-rich that people can deploy them? And if we are, then lots of people will use OpenFlow for lots of interesting things, and, if we aren't, then this will all wander off and people who need to do traffic engineering will find another way to put policy on their forwarding devices. Thanks.

IVAN PEPELNJAK: Thank you.

CHAIR: Thank you. I see there are no more questions. Thank you, Ivan, again.

(Applause)

CHAIR: I'm really happy that even the first talk generated so much discussion, and we at the RIPE Programme Committee worked very hard to select only the best for you. And now, from down under, you all know him, Geoff Huston will talk about the concept of Quality of Service, and I expect this also to be a great talk. Thank you.

GEOFF HUSTON: Thanks, and good afternoon and thank you for having me talk this afternoon. I appreciate the opportunity.

Here we are back in Amsterdam. And you know what's going through my head is the kind of question, is it possible to have too much fun? You know, think about what's happening this year. You have got the excitement of running out of v4 addresses, and now you are going to have to do things with your network that are not only completely unnatural and bizarre, but at a network engineering level, all of a sudden you have got employment for the next couple of years, because what was working before won't work in the next world, and you have got to do v6 as well; this is getting even better. But you know, there are even more things to play with, because if you get bored with that, you can turn your entire network inside out and, as we have just seen, run a software-defined network with OpenFlow. This is so cool. You can have so much fun. If that's not enough, you can do even more, because everyone is rolling out massive layer 2 networks as part of their national broadband infrastructure, aren't they? Which reminds me, as a cautionary note: friends don't let friends build large layer 2 networks. But you know...

But not only that, you are now no longer a secret. You are now no longer a counter-culture. You are so, so, so in the mainstream.

There is no such thing left as a telephone company. We're it. So all of a sudden, when the ITU appears, it's us. And when they start making bizarre pronouncements, it's all about us too.

As you are probably aware, or maybe you're not, they are having this grand meeting in Dubai at the end of this year. The last time they had such a wonderful grand meeting was 25 years ago, and they sorted out the international telecommunications regulations. The thing that kept the telephone networks running for the last 25 years, according to them, were these regulations. They defined the way in which money circulated in the world of telephony. And bizarrely, or maybe it's about time, they have decided to do it again, so in Dubai at the end of this year they are talking again about rewriting those regulations. But the telephone companies are different. They are you and I. And telephony is not where the money is any more. Probably Google and Apple is where the money is, but what's left of it in telephony is actually all now about the broader area of telecommunications and networking.

And the worry is that we have created this massive mythology around us. This huge bubble of bizarreness. So bizarre that when you see this, which is a draft from, oh, only a few days ago, from the grand poobah, no, it's the Secretary General of the ITU, talking about networking and interconnection, all of a sudden it's about QoS, and what I like is this assumption that, while not everybody does full integrated services, everybody is doing DiffServ QoS in their networks, aren't they? Where did this crap come from? So what I'd like this afternoon is to actually talk about QoS and actually try and see if we can get through some of this cloud of mythology and weird belief systems and actually see if there is anything behind all of this crap.

I at least know a little bit about what I'm talking about. With Paul Ferguson, way, way back about 14 years ago, we wrote the Amazon best-seller of a microsecond, this wonderful book, which is now so heavily cited that if you look closely at Amazon you can buy it for the princely sum of 1 cent.

Buoyed by such obvious commercial success, I did a successor, which equally you can buy for a cent. I encourage you to buy many of them.

But what's it all about? Why are we having this? Why were these books even written at the time? I'd like to go back a little bit and get into this idea of why were networks built, what were they built for, and where does QoS come from? And so the first thing to actually figure out about networking and voice networks and telephony is the beauty of it: they were built for me. Oh, and you as well, but they were built for me. Because the lowest frequency that my voice can produce is about 300 hertz. The highest is about 3,500 hertz. Shannon's law says I have to sample at twice the highest frequency. Basically, if I want to digitise this, I do about 8,000 samples a second and you'd have me. And about the softest and loudest I can speak is about 50 decibels of range, and most of the power is around 1 kilohertz and below. So that's humans. What if I wanted to digitise that? Well, 8,000 samples a second. I'd normally do 65,000 levels, but because most of the energy is down low, I can actually use a skewed distribution, and if you are in a sensible part of the world, Australia and Europe and a few others, you'd use A-law, and if you are in this weird place over the pond over there, you'd use mu-law, because standards really work, don't they? And you can encode all of this over here at 64 kilobits per second and over there at 56, because standards really, really work in that industry.
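The companding trick described here can be sketched in a few lines. This is an illustrative approximation of the G.711 A-law and mu-law curves on samples normalised to [-1, 1], not the bit-exact 8-bit encoder:

```python
import math

MU = 255      # mu-law parameter (North America, Japan)
A = 87.6      # A-law parameter (Europe, Australia and others)

def mu_law(x, mu=MU):
    """Logarithmic compression: quiet samples get most of the code space."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def a_law(x, a=A):
    """The European variant: linear near zero, logarithmic above 1/A."""
    ax = abs(x)
    if ax < 1.0 / a:
        y = (a * ax) / (1.0 + math.log(a))
    else:
        y = (1.0 + math.log(a * ax)) / (1.0 + math.log(a))
    return math.copysign(y, x)

# A whisper at 1% of full scale still maps to a substantial slice of
# the output range, which is the whole point of companding.
print(round(mu_law(0.01), 2), round(a_law(0.01), 2))
```

At 8,000 samples a second and 8 bits per compressed sample, that gives the 64 kilobits per second channel the rest of the talk is built on.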

So, that was the way it sort of all worked in telephony, because it was all about you and I and the way we spoke. So, when we did the last great change, which was actually around in the seventies or so, and digitised all the world's telephony, the unit of telephony was a 64 kilobit per second bit stream. No jitter, no nothing, 64,000 bits per second, really, really tightly defined. And it was completely intolerant of jitter. Because, as you have probably seen on your mobile phones when it all gets a little bit stuttery, any kind of jitter in that stream and you lose it, the voice just disappears. It was also completely intolerant of drop. You couldn't lose bits of that digital stream and still get a voice back. So, the networking that we built was synchronous. It was all clocked off a gigantic thumping atomic clock running at 8,000 cycles per second, and every telephone company worth its salt had at least three of these clocks just to make sure it never lost a single clock pulse. How did they serve the network? These were end-to-end synchronous virtual circuits. The network had a fixed capacity because voice had a fixed capacity.

That meant, a bit like the road system out there, there was no elasticity. At Christmas, we all wanted to speak to ourselves and to our mums and dads or whatever. So you had to build networks for the peak load, and for the rest of the year the network sat there idly. This was, I think, the best thing we had ever built in a century of networking, and the 1980s telephone network was a marvel, and it cost a massive amount to run. It was incredibly difficult to keep this up in the air. We had 500,000 people working for BT. Even more for AT&T, and so on. It was kept together with a whole bunch of people polishing all the wheels. The services that came out were hideously expensive. We complain about data roaming, but do you remember international phone costs about 20 years ago? They make international data roaming look cheap, because this stuff was expensive. Voice networks were difficult to run.

The other thing about these networks, being circuit-switched the way they were built, is that it was actually really difficult to add capacity on the fly. Adding capacity to these networks was actually incredibly difficult. So, it was very common to build the biggest network you could think of at the time and then gradually let voice fill it up. It was very common to over-provision. And all of a sudden, that let in the computers, because all of a sudden here were you and I with our mainframes looking at these networks saying, ah-ha, spare capacity. What's the price of running a voice circuit from here to Frankfurt? Oh, that cheap? I'll buy two. And originally we built our networks, even IP networks, on those margins of over-supply of the voice network, and the first protocols we used were actually designed to be perfectly analogous to voice. We used X25. What was X25? It was actually virtual circuit switching. We used frame relay, which was a slightly lossy version of precisely the same thing. The capacity of the system was defined by the network itself, and the services were synchronous. That's bizarre. I have this whacking great computer. It can do anything on the planet. It can retransmit lost bits. You now have to plug it into a network. Talk about over-engineering. So it wasn't long before someone figured out that you didn't need to switch circuits. Because if you have computers at either end, all of a sudden I don't need synchronicity. I can withstand loss. It doesn't matter. I can withstand jitter, because who gives a stuff? My computer certainly doesn't. So, now I'm highly adaptive, I am error tolerant, I am loss tolerant and I am jitter tolerant. So what about the network? I don't need much. In fact, I need remarkably little. If I get rid of the entire circuit concept and simply write in the top of the packet where I'm going to, 203.10.60.1, all of the switches between here and there don't need to know about a circuit state any longer.
They just simply need to know approximately where most addresses are. The packets are unreliable? Who cares? Because computers can fix it up. All of a sudden, things are adaptive. How fast does TCP run? It depends on what the rest of you were doing. It doesn't have a speed. It just adapts. So, this whole question about what's the load on your network? The computers go, quite frankly, I'll take what you have got, and if there are three of us trying to do it, sooner or later we'll figure it out and take roundabout a third each. It's fully adaptive. There is no requirement for centralised network resource management. None whatsoever. That's what we built. Why did we do this? Ultimately, because we could stuff traffic into the network really efficiently. That meant that we actually ran the networks using effectively a minimal amount of resource for the maximal amount of benefit. In economic terms, it's dirt cheap. So instead of being really expensive, the whole reason why the world went to the Internet is that it's really inexpensive.
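The "sooner or later we'll figure it out and take roundabout a third each" behaviour can be sketched with a toy additive-increase, multiplicative-decrease loop. The numbers are illustrative and nothing below comes from a real TCP stack; it just shows that flows converge to a fair share with no central resource manager:

```python
# Each round every flow adds a fixed increment to its rate; when the
# shared bottleneck fills, every flow halves. Halving also halves the
# gap between flows, so rates converge toward capacity / n_flows.
def aimd(n_flows=3, capacity=90.0, rounds=2000, incr=1.0):
    rates = [1.0 + 10.0 * i for i in range(n_flows)]  # deliberately unequal start
    for _ in range(rounds):
        rates = [r + incr for r in rates]       # additive increase
        if sum(rates) > capacity:               # congestion event
            rates = [r / 2.0 for r in rates]    # multiplicative decrease
    return rates

final = aimd()
# Despite the unequal starting rates, each flow ends up oscillating
# around capacity / 3.
print([round(r, 1) for r in final])
```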

The voice guys weren't happy. We were spending 90% of the telcos' money on these wonderful voice networks and no one wanted to use them. So when you start looking at data, where services are really cheap and the whole thing is elastic, and voice, where services are really expensive, and you start adding up the costs of what you are doing, it wasn't long before the mantra appeared in some telcos, I think Sprint was very early on with this, everything over IP. But these days everything is over IP. Why? Because it's cheap. And costs are what drive the industry.

So, that glosses over one essential problem: I haven't changed, neither has my voice box, and neither has the physics of me speaking. If you are far away and want to hear me speak, somehow I have to replicate my voice; somehow or another, at the other end, we need to rebuild that 64 kilobit per second stream of digital stuff that you can convert to an analogue signal so you can hear me. So the question is: How do you mix the two together? How do you mix congestion-prone and congestion-intolerant? How do you mix the data and the voice? Well, that was the way it worked.

You could just add more bandwidth. And if you are not from the telephone world, it's blindingly obvious. But if you are from the telephone world, this is the worst possible answer. Because the telephone world was used to building extremely large over-provisioned networks and not letting you use them. That was the business model. If I put in an undersea cable, it has to last for 20 years. I can't give it all away in the first month. God, no. So this idea of adding more bandwidth was completely foreign to these folk. It was a language they weren't speaking. Add bandwidth? Of course not, can't do that. Well, in that case, change the applications. And they sit there and go, applications? You know, the things in software. Oh, that's not our department. That's a Microsoft problem. That's an Apple problem. That's somebody else's problem. We don't do apps, say these folks. So, if all you have got is a network and you have got a problem, obviously the answer is to make the network more complicated, and this is this kind of Goldilocks answer: realistically, what made sense to these network guys was to ignore everything else that they could possibly do, concentrate on the things that they had control of, and say, let's make them more complicated. Why? Because that will solve the problem. Well, that's QoS. That was precisely what QoS was. Let's rebuild the voice network in IP. Oh yes, Dave Clark came from MIT to the IETF, he visited us from his spaceship back in 1994 and said, I have the answer, and we all came in to hear it. You start off with a call request, sorry, a reservation. You end with a call acknowledgment, sorry, a confirmation, and then you put some traffic down a circuit, no, no, you don't, you put it down a reserved path, and then you hang up. No, sorry, you release the call. That was it.
That was the entire integrated services architecture that they were presented with and you know we bought it, hook, line and sinker. Bizarre.

Because what we were really doing was rebuilding telephony inside IP. It was hideously complicated. But some of the terms are with you right now. Because what we wanted was a flow state inside the network and a distributed resource reservation. So who runs MPLS? Oh, come on, most of you do. And what's the protocol to use for these dynamic MPLS circuits with resources? Oh, it's RSVP. The ghosts of integrated services are there with you now for these static pinned paths. But you use them as infrastructure. You use them to aggregate millions. Oh, no, the original model wasn't like that. The original model was that every single TCP session, or in particular every single UDP flow, had its own reservation. If I wanted to hook up to Hulu or Netflix and download a movie and stream it, I was doing an IntServ reservation and, all of a sudden, you had state complexity and fragility and cost loaded right up inside your network, because this was meant to work per end-to-end flow. This wasn't trunking. So, obviously, that was crap. Even at the time we couldn't build networks and switches that big, and if you think about it, trying to divide up a world per individual flow is insanity. For all those who are thinking of deploying CGNs in the next few months, let me say this again, because you are going to have to remember this: trying to divide the network up per individual flow is insanity. You will go mad. Your customers will hate you. Just remember that.

So, anyway, where were we? This sort of architecture doesn't scale; the same applies to CGNs, but that's a different thing. We are clever folk at the IETF and we never say die on a bad idea, so you have got to make the bad idea even worse, and what we tried was this differentiated services. The packets come in, you figure out whether you like them or not, and if you like them, by how much? And you get out your paint brush and you paint them a colour, then you send them into the network with colours, some kind of differentiation. Anyone who remembers their IP header, there are only 20 bytes so you should all remember it. Do you remember those 8 bits that went TOS? That stood for Type of Service. So, you know, being the IETF, we reinvented TOS and called it DiffServ, and everyone thought it was wonderful. Does that remind you of SDN? No, we reinvented TOS and called it wonderful with a new name. We did lots of slideware, and that's some of it there, and it was all meant to work, wasn't it? Well, the problem was that it was never absolute. You could never say I want 64k or 128k, or I want jitter within 2 microseconds. You could never figure out what was going on, because there was no state feedback inside the network, because one of the things we still haven't figured out in IP is feedback control, and that's a feature, not a bug. So what actually happens inside DiffServ was really quite unexpected, I suppose, because if you think about a network, the quality goes from amazingly fast down to so slow you can have a cup of tea and nothing has happened. Over here is kind of network load; as the load gets higher, things start to get worse. Interestingly, in IP, you don't notice other people much. If I had a terabit network coming out of this hotel, whatever the problem is that you have on your screen, it would not be the people around you causing it.
If you have enough capacity there, if the network is running just fine, then quite frankly, what's going to happen is you are going to get maximal performance no matter what, because the buffers are open. There is no queuing going on. Jitter disappears, do what you want. Knock yourself out. But IP doesn't degrade gracefully. When it degrades, it goes to hell really quickly. Once the buffers fill, you start dropping packets. Once you start dropping packets, TCP goes, oh my God, I am really going to slow down, and if you have overloaded with UDP, you are still overloaded with UDP and the quality gets abysmal. So you are either up here, or in the overloaded abyss of despair. That's life. All that DiffServ did was slightly alter your trajectory as you plummeted to your death. Some folks splattered on the ground a bit later than others. You paid a lot of money for that. That was all it was. There were no guarantees. When there was too much high quality traffic, high service traffic, that congested as much as everyone else. So you couldn't do what was promised. You couldn't do these per-flow reservations. You couldn't deliver any kind of assured outcome. So if I said to you, look, it's triple the cost because it's really cool technology, you'd go, but what do I get for it? I'd say, well, you die a millisecond later than the rest of the room. You know, that's really what it was. You couldn't guarantee this fixed service response, and, if you tried to measure it, you really couldn't, because what were you comparing it against? So DiffServ didn't work either. We are inventive people at the IETF and we have to meet three times a year, so we have got to do something. Then we discovered that the next thing to do was to talk about the next steps in signalling. Instead of trying to fix the network, we were going to fix the control plane of the network, and that was going to solve all our problems.
Then MPLS came along and we thought it was going to solve everything, because you were going to do elastic QoS and this was going to be wonderful. We heard from the vendors that those four letters solve everything. Everything. It doesn't matter what your problem is, they will solve it for you. And now we are even getting things like aggregated QoS, trying to take the best, or maybe the worst, of all these technologies and match them all together. And I was being nice to the ITU, I did not use... did I use NGN here? No, I did not. NGN. Just there, NGN. We have tried as hard as we can to do this and it just doesn't work. And you kind of go, what is QoS, other than something vendors say to gullible customers? What is the real issue here and what are you trying to do? There is a balancing act going on and it's a tough one. Some of you run QoS in small-scale administrative environments. You have got 128k circuits and life is hard and you prioritise voice and video. Knock yourself out and go for it. Just cool. But if you try and make this work in a large-scale public network, all of a sudden the costs go right up and the relative benefits start to disappear, and it's this skewed exercise of spending: sooner or later, when you look at it, you are spending 95% of your engineering budget to change all the routers to manage all the queues in complicated ways for a small percentage of your traffic, less than 1%. All of a sudden cost and benefit don't work.
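For reference, the TOS byte Geoff mentions is still settable from any application today. A minimal sketch, assuming a Linux host; the socket calls are standard, but whether anything in the path honours the mark is, as the talk argues, another question entirely:

```python
import socket

# The DSCP codepoint occupies the top six bits of the old IPv4 Type of
# Service byte. "Expedited Forwarding" is DSCP 46, so the byte value
# written to the header is 46 << 2 = 184.
EF = 46 << 2

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)
# Read the marking back; every datagram sent on this socket now
# carries the EF codepoint, honoured or not.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
s.close()
```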

So, why is it a failure? You are all rational people, you work for this industry. Maybe that's asking too much. The real issue here: there is no magic. I can't invent more resources or a faster network. QoS doesn't make more network, it just redistributes the damage ever so slightly. So if you over-subscribe your network, life will be crap and QoS won't help you. If you manage to run as much buffering as you can possibly buy from the memory vendors, life will be crap. There is nothing you can do about it. If your design was invented back at the wrong time and someone just spilled a cup of coffee over a piece of paper and called it a design, I can't help you and neither can QoS, and, quite frankly, your poor business plans are your problem. QoS won't stop it or fix it. And if you think you are too far away from America, I can't help that. And, in fact, the way continental drift is working, the problem is getting worse every day, not better. And if you don't like the speed of light, find another universe; the physicists tell us there are millions of alternates, but this one we're stuck with. It's a constant and you're stuffed. QoS won't help with any of those. It doesn't make those fundamentals any better. There is no magic there. So, why, oh why, oh why, is your local network association, ETNO, here in Europe, so rabidly, I use that word deliberately, rabidly keen on QoS? Why do they persist with what is obviously bullshit mythology technology? Why do they parade around all the public forums saying Internet inter-provider settlements must be based on QoS? I suspect there is a different reason, and it's nothing to do with technology and everything to do with money and control. Let me digress for a second.

Most of you would have done networking, or I hope you did, and of course, the one thing that OSI left with us, the one legacy of that debacle of Open Systems Interconnection in the 1980s, was the 7-layer stack. Even though IP doesn't use it, we still all draw it; it's a wonderful thing. You know, this is relevant to the story, because at that top level is service. And if you controlled the entire stack, as the telco did, people didn't pay for the presentation layer or the session layer or the transport layer. They paid for service. The money was all about allowing folks to talk to each other. Why is Apple a seven-hundred-million-dollar company? Because they think hard about you and services; apps, they call them. Same with Google, it's all about service. What used to happen was you'd make all your money at the top floor of the building and siphon it down the lift-well and pay for all the rest. And then we deregulated. Now, the theory was that what would happen is more telephone companies would appear in this world, that hell would replicate. What actually happened was entirely different. We didn't get other versions of the same hell. We actually got folks saying, stuff the rest of the layers, services is where it's at. So the most intense competition came at the service level, and the rest of the network was, oh well, that just all works, doesn't it? Because deregulation forced attention to the one thing that people valued and actually started to create problems further down the stack in terms of money and investment. So, where is the money to build more networks? Who is paying for it? Because now, you pay separately for services. You pay separately for apps. You don't just pay huge dollars to your local telco who makes the Internet work for you. There are lots of relationships that you and I now enjoy. So now they are looking at this going, where is my money? And maybe I could redraw the picture a little bit more to show precisely where their thinking is.

They, the access providers, are servicing all these users with all of those services, and the money is only coming in from the users. Now, the problem with that is that there are only so many users and so much money, and they look over the fence and see Apple, what was that, seven hundred million dollars again? Google, and so on and so on and so on. All of the money, they claim, is on that side of the fence. And what they would dearly like to do is a bit of monetary extraction, and all of a sudden QoS comes along and they go, hang on a second, if all the traffic is going from here to there, why don't I make them pay a toll? Why don't I put an access gateway here and go, you know, it could be so much better if you just paid me money? And that's what they are trying to do. It's as simple as that, it's as blatant as that and, in some ways, it's as silly as that, because I think it's basically a doomed attempt. Because the underlying motivation is that they want that control to extort revenue from what they believe is the source of all bountiful revenue, the content service providers. But why are they heading off to the ITU? Why do they want the ITRs to help them? Why are they seeking regulatory relief? Well, there is this axiom in business: if your business plan is completely broken and you are about to go bankrupt, if you go to Brussels and make the right case, they'll save you. Speak to the Greeks.

What's actually going on there... it had to be said... what's actually going on, though, is that if your business plan is not good enough, if the dynamics of the market aren't working in your favour, you don't necessarily have to give up. There is one group of folk who might come to your help, and that's the regulatory sector. If you can convince the regulatory sector that you need money, then by regulatory fiat you're saved. And that's precisely what's going on now. They are flying a kite. They are trying to see if there is a regulatory impulse that they can generate against those evil content folk who are just refusing to give them money, and try and make the regulator take their perspective. So that's why they appealed to the ITU, because there is nothing like governmental support. But Goldilocks was wrong. The whole idea that QoS was all about networking is complete crap. It goes against everything we ever understood about what makes IP effective. You only needed 20 bytes of header. You only needed stateless networking. What you do need is to add more bandwidth where it's needed. Build 100 gig, build terabit, just go and do it, and if your business model sucks and you can't afford it, change your business model. Look at folk who are successful commodity providers. Folks like Free seem to do the job just fine, while France Telecom has a problem. Is it infrastructural, or is it the attitude to their business and customers? It's not just the French. A lot of this is actually all about business models needing to evolve. And that evolution is painful but necessary. And yes, you can add more bandwidth. Moore's Law continues to be bountiful. But thinking that applications can't change is equally insane and stupid. Why do we deliver television content over UDP streaming? Because it stops me recording it. Crap. You and I can record RTSP with about 60 different apps. Why don't we do the stuff in TCP like the rest of the folk running Torrent? I have no idea why not.
Why don't we make our applications congestion-sensitive, like DCCP and the whole explicit congestion notification thing? Surely it's not hard? Of course it's not hard. Surely it's oh so difficult to change those behaviours? Of course not. They are just computers. It's just software. If you honestly want software-defined networking, try software-defined apps first and see how far you get. I suspect that's where the true gold lies.

Thank you very much.

(Applause)

CHAIR: Thank you, Geoff. Great presentation. Actually, I'm really sorry that our telecom regulator is not sitting in this room, usually they are. I will send them your presentation, because they are dreaming about this stuff and they are bothering me about it. So, I'll just send them the video recording and the presentation and say, just hear that. So, I see...

AUDIENCE SPEAKER: Hi, my name is Lee, from... Hi, Geoff. Are you saying, just to see if I understand you correctly, that whatever the question is, QoS is never the answer?

GEOFF HUSTON: What I said was, if you have got a problem with the way in which things behave on the Internet, maybe you should look at the underlying issues of bandwidth and the overlying ways in which applications behave. What I see at the moment is a huge call to go and redo a huge amount of the signalling and control feedback mechanisms to put QoS in the routers. Why, I ask? Oh, streaming UDP. What's streaming UDP? Oh, video. Now, come on, most video comes from massive digital playback machines on one side of the fence and your digital system on the other. I could run that in TCP in seconds. And I could run it a lot faster than realtime and get your load out of the way. And if I'm running in TCP, if someone else wants to do something down the same wire, it will adapt, the application will adapt. So why does one side of the folk think, oh no, it will always be UDP, when I'm looking at it going, this is nonsense? The only applications for UDP, large scale UDP: DNS at the small scale, and at the large scale, build yourself a massive square kilometre telescope and let's talk UDP. But video streaming? Pfft.

AUDIENCE SPEAKER: So if I'm trying to video-conference, or just do voice-over-IP, and my computer or iPhone decides it's time to download one of those 600-megabyte system updates, or maybe someone starts a torrent or something, what is it that you can offer to me?

GEOFF HUSTON: Put yourself on LTE, run 4G and don't worry about it. There is always an answer.

AUDIENCE SPEAKER: That's a very good answer if you sell equipment or services. It's not such a good answer if you are a user and the best you can get is 10 megabits.

GEOFF HUSTON: Hold on a second. It's really easy to engineer networks to perfect yesterday. We understood yesterday really well. We understood its constraints. We understood what worked and what didn't work. And it's really easy today to try and change things to adapt to yesterday. I would rather we looked at terabit networks. I would rather we tried to understand what we could do and try and make things go faster, try and make things go better, because, quite frankly, it's tomorrow's network that's far more interesting to me than yesterday's. So, I appreciate your problems with your iPhone 2, buy a new 5 or whatever it is; in other words, push the envelope, don't accept it.

AUDIENCE SPEAKER: Okay, I'll take that, but then I'll offer you something in return: the same innovation you are asking for, I'm sure the application people are also doing, and they'll be able to fill up your terabit networks. Mark my words.

GEOFF HUSTON: Great.

CHAIR: We have another question.

AUDIENCE SPEAKER: Hi, James Rice. I was just wondering: if you have, say, a couple of providers peering at maybe 10, 20, 40 gigabits per second, with a few gigabits per second of traffic and maybe a few hundred megabits per second of voice, and then somebody decides to send 200 gigabits per second at it, is this not a case where QoS is good?

GEOFF HUSTON: One of the most insidious forms of denial of service has actually been the SYN attack. If you really want systems that are resilient to denial of service, there is a whole lot more going on than simple filtering and selective damage to packets. So, no, I would actually argue that's the wrong kind of response to that kind of problem. What you are after, in a world where anyone can launch an attack, is actually trying to understand how things work and what kind of ameliorating architectures would help. Simply going 'the answer is QoS' is, I suspect, not really a viable long-term response.
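
As an illustration of an ameliorating architecture rather than a queuing fix: the classic mitigation for the SYN attack is SYN cookies, which remove the per-connection state the flood is trying to exhaust. A sketch, assuming a Linux host; the sysctl names are real, but the tuning values are illustrative, not a recommendation:

```shell
# Under a SYN flood, stop keeping state for half-open connections and
# encode it in the SYN-ACK sequence number instead (SYN cookies).
sysctl -w net.ipv4.tcp_syncookies=1

# Illustrative extra headroom: a larger half-open queue and fewer
# SYN-ACK retransmissions so stale entries expire sooner.
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
sysctl -w net.ipv4.tcp_synack_retries=3
```

The design point matches the argument above: the server survives because its architecture no longer has the resource the attack targets, not because a router preferred some packets over others.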

AUDIENCE SPEAKER: I loved your presentation. But I found a little flaw in the logic there. You keep talking about deregulation and investment in service by the service providers, and all of this, but then you say: don't accept it, push the envelope, you want a terabit network. Who is going to pay for it? The network providers don't want to pay for it. Then you are talking about getting 4G and LTE. Great. What about areas of the world where that's a dream they are maybe going to see in ten years? I mean Africa, the Middle East, parts of Asia: there is a big section of the world where the majority of the population doesn't have that kind of access. There is a fundamental flaw there, I think. How do you address it?

GEOFF HUSTON: One of the great mistakes the ITU made in the 1950s was to confuse financial accounting and inter-carrier settlement with international aid and assistance. And turning the telephony network into a multi-billion-dollar money-siphoning exercise created a culture of dependence that is now biting hard politically. Aid is aid, and it's a wonderful thing. But if you want efficient industries and markets to innovate, to constantly push, to strive to be better, loading them down with the structural inefficiency of a historical cross-subsidisation tax is the worst possible thing we could have done, and the institutional relic of that structure, the ITU, is the price this world is paying for such a misguided decision. So, we should have done better then. We shouldn't accept it now. It's the wrong answer. The problem is there, but making the telephone network, or these days the Internet, pay that tax is structurally the wrong answer.

(Applause)

CHAIR: Thank you. We are running into the coffee break, so I will leave it to Todd to do the housekeeping and closing.

TODD UNDERWOOD: Just really briefly: lightning talk submissions are supposed to be open, but somehow they have been closed; they'll be reopened again in a minute, and when they are, you should submit lightning talks. We will have three of them this afternoon. We have spots for more tomorrow and Friday. They do fill up fast, so if you have an idea to present, you should slap it in there. You do not need slides of any sort, or a presentation of any sort, to submit a proposal. Just an abstract and a title and your smiling face, and we'll tell you whether we accept it. So cheers... and do vote for talks if you want the €100 gift card, which, if everyone votes for talks, you won't get. It's the tragedy of the commons.

(Coffee break)