
These are unedited transcripts and may contain errors.


DNS Working Group, 26th of September, 2012, at 2 p.m.:

JIM REID: Good afternoon, ladies and gentlemen. This is the second session of the DNS Working Group for the RIPE meeting. Thank you all for showing up. Just before we get started, one or two of the obvious formalities to remind you all about: Please set your mobile phones and other electronic devices to standby mode. The session is being webcast and recorded, so please make a point of speaking clearly, giving your name and your affiliation; this also helps the nice lady down at the front who is scribing the meeting for everybody, and the people out in web land who follow things over the Internet. We are going to have a panel session to do with nasty DDoS attacks and amplification issues towards the end, but because we have had to juggle the agenda from time to time because of various other things going on, sorry, we had to make one or two last-minute tweaks to the order that was published today. The first presentation is Marek Vavrusa.

MAREK VAVRUSA: All right. I am happy to be scheduled right after lunch because your attention is pretty high. So, my name is Marek Vavrusa, I work for CZ.NIC, and I am here to give you an update on Knot DNS: three things, what we have done in the last weeks, what we are planning to do in the next release in a month or so, and also some great feedback from our users.

So, about what Knot DNS is: it's an authoritative-only DNS server developed at CZ.NIC. It works on most platforms, different flavours of Linux, BSD. When we designed it we tried to strike a balance between a feature-rich server and a strong focus on performance. As usual, speaking of which, I'd like to show you some new graphs showing how fast we are, and to boast, but this time I took a bit of a different approach, because today is not really about performance; we feel it's pretty all right as it is, so we decided to focus on some features that would make the users' life easier, and some fixes and stuff.

So, what have we got now? We wrote a pretty decent user manual. It's still work in progress but you get the point. We also have a very nice experimental feature that allows you to track differences between zone file reloads, and it's able to build the changes for the outgoing IXFR, so we can call the server fully ready for being deployed as a master.

And yeah, we have also done some great work on the control utility, we have added a few commands and so on, and last but not least, we have implemented the hot topic of the day, the DANE protocol support. So that's what we have got right now.

We asked our users what they think about Knot DNS, what they miss and hate, and the answers weren't surprising at all, but we picked four new features that we would like to see in the next release. The first and foremost, and definitely the most requested, is dynamic updates, which is already done; the working code is in our testing phase. We'd like to stress that it's fully RFC compliant, with the exception of packing multiple update messages into one. It also supports forwarding of update messages to the primary master and, as is our good habit, it does not interrupt the server or its answering at all; it's fully non-blocking.
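As a hedged illustration of what such an RFC 2136 dynamic update looks like from a client's point of view (a sketch using the dnspython library; the zone name, TSIG key and server address below are placeholders, not anything Knot-specific):

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # TSIG key shared with the server; name and secret are placeholders
    keyring = dns.tsigkeyring.from_text({'update-key.': 'c2VjcmV0S2V5RGF0YQ=='})

    # build an UPDATE message for the zone and replace one record
    update = dns.update.Update('example.com', keyring=keyring)
    update.replace('www', 300, 'A', '192.0.2.1')

    # send it to the (placeholder) primary master over TCP
    response = dns.query.tcp(update, '192.0.2.53', timeout=5)
    print(response.rcode())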

Next thing is a project we have been working on for some time, and it's a brand new zone file parser. We already have one. It's very good, it's feature-rich and all, but it's kind of slow, and we have had some feedback that it's been getting tedious for people to compile zones over and over and it takes some time. But we didn't want to lose any of the features we have, so we decided to create something completely new from scratch, and I think we have it prepared now.

You get the point that it's pretty fast. I don't have any hard data on it, but it should be in the next release.

One thing I'd like to stress is that it started as a completely separate project; it does not depend on Knot DNS in any way, so it's possible for you to take it and use it with your favourite library, like ldns, and so on. We don't have any official packages yet, so you have to go and get our Git repository if you want to see it.

Next thing is also very popular with our users. We had this control utility which is kind of simple, but it doesn't allow you many things: because it's based on signals, it does not allow our users to control the daemon from a remote location, and it doesn't allow our users to, for example, refresh a specific zone, because it's simply not possible to add any more data. So we gave it a thought and designed a simple protocol, secured something like TSIG; it's based on one-way hashing, so you get keys and all. We designed it in a way that's extensible for the future, so it's possible to refresh specific zones or add specific zones, flush them, whatever you like.
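The protocol itself is unpublished, but the general TSIG-like idea described here -- appending a keyed one-way hash so the daemon can authenticate remote commands -- can be sketched roughly as follows (a hypothetical illustration in Python, not Knot's actual wire format):

    import hashlib
    import hmac

    MAC_LEN = hashlib.sha256().digest_size  # 32 bytes

    def sign_command(command, key):
        # append an HMAC-SHA256 tag so the receiver can verify that the
        # sender holds the shared secret (one-way hashing, as in TSIG)
        return command + hmac.new(key, command, hashlib.sha256).digest()

    def verify_command(blob, key):
        # recompute the tag and compare in constant time
        command, tag = blob[:-MAC_LEN], blob[-MAC_LEN:]
        expected = hmac.new(key, command, hashlib.sha256).digest()
        return command if hmac.compare_digest(tag, expected) else None

    key = b'shared-secret-distributed-out-of-band'  # placeholder
    blob = sign_command(b'refresh example.com', key)
    assert verify_command(blob, key) == b'refresh example.com'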

And the last thing is not a feature per se, but it's my favourite one; we had some feedback that in some scenarios Knot DNS consumes quite a lot of memory, so we said, all right, we will do some measurements, and we found out that they are right and we should do something about it. So, why are we in this position in the first place? When we designed Knot DNS we did everything for answering performance, it was first and foremost, so we ended up with four different structures to hold zone data in, a lot of pointers and extra data and so on, so it's getting kind of bloated. What do we do with it?

First, what we want to do is merge all those data structures into one or two.

Second, we use our own DNS library. It's pretty nice to work with, but keeping so many structures also means a lot of small allocations, and that means the generic memory allocator that we use simply can't handle it.

And the last thing, we already have a few custom memory allocators that we could use for this purpose, so we are going to do that.

So, those were the four new features we have prepared for the next release, which is coming in a month or two. Let me quickly give you three reasons why I think you should give it a try if you run any kind of DNS service.

The first is that we have a very thorough testing process; we take user-reported issues and we take them seriously. We have this toolset we run every night, four different scenarios: from compiling a zone, running it through various sorts of compliance tests where we take an interesting query set, play it to different name servers and compare the results, and also some penetration tests and static code analysis to catch our mistakes, and so on and so on.

The next very good reason is, we really talk to our users, and when you write to us it's always a priority, so most of the bug reports are dealt with within the same day they come to us. It depends on the complexity.

You can also reach us on our Google Plus or Twitter accounts, the good old-fashioned mailing list, an issue tracking system and so on.

And my favourite part is our users. We had some really great feedback from them. We already had some people from the L root toying with Knot DNS, and they found out that our packet compression algorithm is not really doing its best and we should do some work there. We have also had some great experience with a couple of hosting companies; again, we use it ourselves for our zones, and I have also heard that people are playing with it and they think it's great.

A recap: Next release in about two months. We have four new features: DDNS, the new zone file parser, the remote control utility and some work on the memory architecture.

That's it. Thank you.
(Applause)

JIM REID: Thank you. Are there any questions? Nothing at all?

SARA DICKINSON: You mentioned your remote control utility; is that a proprietary protocol you have developed for that?

MAREK VAVRUSA: It's a protocol that kind of resembles DNS, because we thought we know how to parse DNS packets and how to do TSIG and stuff, so it's nothing that we would like to publish, but it's doing the work for us right now.

SARA DICKINSON: My next question is, is there anything published on that protocol or your schema?

MAREK VAVRUSA: Not yet.

JIM REID: Will you publish anything?

MAREK VAVRUSA: Yes, we should have a dinner tonight, because we and some guys decided it would be great to have some sort of standard way to control different name server implementations, so yes, it is a good idea.

JIM REID: Thank you, Marek, thank you. Next up is Warren Kumari.

WARREN KUMARI: This isn't my presentation, these aren't my slides; Ondrej was going to be doing this. Unfortunately his passport got stolen and he had to rush back to Prague to get a new one issued. We are largely going to be finding out what is in these slides together. It's kind of long, so I am going to be going through this fairly quickly; if I skip over anything, interrupt me.

So what is wrong with PKI, the CA system, what are we trying to solve in DANE? When somebody initially connects, they have no way of knowing which CA that specific site uses. All of the CAs are equally trusted in the model. If the CA is in your Trust Anchor store you are going to use it, and there are a large number of them, a ridiculously large number of CAs. There are also a couple of different types of certificates now: extended validation, where a CA actually does a lot of validation before signing the cert; there is organisational validation; and then there is the standard one which many people use, which is domain validation.

In order to get a domain-validated cert, all you need to do is prove that you can get e-mail at a specific address at that domain, or at the record in Whois, or the e-mail address in Whois. So what all of this means is, if the operator of, for example, example.com goes off and gets a cert from a really well-run and trusted CA and goes to the expense of getting a validated cert, and a malicious party can get another cert issued through another certificate authority, then when you get handed that cert -- you trust them all equally -- you have no way of knowing this isn't the correct cert for example.com and you will happily use it. There are some fairly well-publicised examples of stuff like this happening recently. There was an incident recently -- and I apologise, I am sure I would muck up the pronunciation of the name horrendously -- where somebody was able to compromise a CA and issue a large number of certificates for well-known, high-traffic, high-visibility sites, and they actually used these for man-in-the-middle attacks against what appears to be an entire country, or most of it, and so they read dissidents' e-mails and things like that. These have real-world implications.

So a question for you: Do you know which CA your bank uses? You presumably connect to them over HTTPS, SSL, fairly often. Do you know which CA they are using? If a different certificate shows up, would you have any way of knowing that? Do you know if they use DV certs or EV certs or OV certs? Does your mother or sister know? And more importantly, should you, or anybody, need to know?

So, the way we are trying to address this in DANE is we are saying: while the person who is initially connecting to example.com might not know which cert it should be, the operator definitely knows which cert they are using. And so what we are suggesting, or actually what we have specified in the documents, is you take the cert that you are currently using, you calculate the fingerprint of that, and you publish that in the DNS at wherever in the DNS you are; so if you are example.com you publish this in the DNS at example.com, and the only person who can publish that there is the person who controls the DNS for example.com. You sign this with DNSSEC -- I mean, it's important security information; the relying parties need to have a way of knowing that this is actually validated. And then when the relying party comes along to use this, they make an HTTPS connection, they get the certificate as part of the standard handshake, and then before trusting it they look this up in the DNS. If the fingerprint that's published matches the fingerprint of the certificate they received through TLS, everything is good, they can trust it. If it doesn't, there is evidence of malicious behaviour and you should not proceed.
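The fingerprint mentioned here is simply a hash over the certificate. As a rough sketch (assuming a SHA-256 digest over the DER-encoded certificate, with example.com as a placeholder), computing it in Python looks like this:

    import hashlib
    import ssl

    # fetch the certificate the server presents in the TLS handshake
    pem = ssl.get_server_certificate(('example.com', 443))
    der = ssl.PEM_cert_to_DER_cert(pem)

    # this digest is the value that would be published in the DNS, and
    # what a relying party compares against the handshake certificate
    print(hashlib.sha256(der).hexdigest())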

What is the status of this DANE thing? We have the use cases and requirements document, which got published about a year ago; the RFC number is up there, by Richard Barnes, who I think I saw in the audience a short while ago. And then just last month we published the actual main document that describes the DANE framework, how you use this, what the DANE resource record, the TLSA record, looks like; that was written by Paul Hoffman. And this is what the DNS resource record looks like: someone trying to connect over TCP on port 443, like HTTPS, gets back this resource record with the bunch of stuff down the bottom, which is a hash of the certificate or the certificate itself, and then these three magic numbers before that: a 3, a 0 and a 1. And what these numbers actually represent is how you should be using this information, what the information actually is.
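For illustration, a relying party's lookup of such a record might look like this with the dnspython library (the owner name encodes the port and transport; example.com is a placeholder):

    import dns.resolver

    # TLSA record for TCP port 443 on example.com
    answers = dns.resolver.resolve('_443._tcp.example.com', 'TLSA')
    for rr in answers:
        # the three magic numbers: usage (e.g. 3), selector (e.g. 0) and
        # matching type (e.g. 1), followed by the certificate data or hash
        # to compare against the cert received in the TLS handshake
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())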

So, the first number, for which there are currently four values defined -- zero, one, two and three -- specifies what the certificate that's published represents. You can use this to specify: I only use this CA cert, or the key that's associated with this CA cert, or I only use Verisign, who is now actually Symantec. Or you can publish a TLSA record saying: this is actually the specific certificate that I use; not a certificate that chains up to a specific CA, this is the actual certificate, and if you get a different one, don't trust it.

So, these allow you to use your current existing sort of CA-signed certificates. There are two other numbers you can use, which are a little more sort of innovative and interesting; you can create your own CA and publish a TLSA record saying: when you get the certificate and do certificate validation, it needs to chain up to this specific CA cert. Or you can just generate your own self-signed certificate -- you know, the sorts of certificates that you get when you browse to certain websites and get the big red box that says this certificate is signed by an unknown authority -- you can generate one of those, publish it in the DNS, and when relying parties come to use it they can look in the DNS, see that this information is there, see that it was signed, and have a reason to actually trust this information.

The next two numbers are a selector and a matching type, very important for DANE, not specifically important for this presentation; if you are interested, go along and read the document.

So, what I said earlier is that the only person who can publish something at example.com is the person who controls the DNS for example.com. And I chose those words relatively carefully, because the person who runs the DNS for example.com isn't example.com in many cases; often it's a third-party hosting provider. Your registrar can publish information at example.com: they change the NS records, they can publish something there. The registry could publish information, could update the DNS records, blah-blah-blah. This initially seems really scary: you are now trusting a large number of people who you weren't trusting before. Unfortunately, you currently are trusting those people. Anybody who controls the DNS for example.com could just as easily update the MX records for example.com, could then go along and apply for a DV cert, have a mail server, get the authorisation token. So you are currently trusting the DNS hosting provider and the registrar, you are trusting the registry; these are already people in the trust chain who can screw you over if they want.

DANE and DNSSEC: because there is important security information in this, you require the DANE records to be signed with DNSSEC. You also require that validation is done all the way down to the application, or sort of potentially on the end host, but the application needs a way to know that validation was performed so it can trust this. If you are interested in DANE and would like to play, there is a tool over here called Swede; here is an example of how to create a TLSA record, so feel free to publish these in your zone. The future of DANE:

The primary document just lists how you do DANE with sort of generic web services and in general. Certain things use DNS in slightly different ways or find records in different ways; this is how you do DANE with SMTP, S/MIME, Jabber, etc. If you actually want to do validation with DANE -- and I am running really low on time -- there is a Firefox add-on that will let you do this. Unfortunately, no browsers really support this natively yet. This is largely because the browser vendors haven't seen enough people deploying the records, and people aren't deploying them because there aren't people using them yet: classic chicken-and-egg problem. If you would like to break this, please publish records, tell your browser vendor that you would like this. And, with seconds to spare: questions? Comments?

JIM REID: Any questions for Warren?
(Applause)

Thanks very much as well especially for standing in for Ondrej at such short notice. Olaf is going to talk about something weird.

OLAF KOLKMAN: I am going to talk about WEIRDS, the weirdest acronym around at the moment. If you have been to the Database Working Group this morning you have seen this presentation. WEIRDS is a Working Group in the IETF; the acronym expands to Web Extensible Internet Registration Data Service, I forget it all the time.

What is it doing? It's chartered to standardise a single data framework for registration data related to numbers and names; to deliver that type of data encapsulated over a RESTful service -- what RESTful is I will get back to -- and generally to follow requirements that were set up previously when people thought about replacing the Whois.

The goal is to keep it as simple as possible. Simple: easy to implement, easy to read. Really make this simple but also international, and capture all the things that we cannot do in the Whois today, for whatever value of Whois we are talking about.

It should have a possibility for differential services, like making a differentiation between law enforcement and regular users, or doing all kinds of other nice things. And that's basically what the purpose of the Working Group is. And we don't speak of a follow-up on Whois, because Whois is an overloaded term; we speak about a registration data access service protocol. So that's sort of the context.

It's intended to be a RESTful protocol. RESTful is a term of art used in web architectures; the acronym is representational state transfer. In essence, you use verbs to act on objects -- GET, POST, DELETE -- and all those objects are put in URIs and represented by URIs.

Using the web architecture framework is really nice because, as an implementer, that provides you with a shitload of libraries which you can use.

As a practical example, you see two types of URLs representing a query on the screen. And in fact, this type of technology is already available, specifically at RIRs; I believe that at ARIN -- I don't remember the exact number -- more than 50% of their queries are using this sort of protocol.
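As a hedged sketch of what such a RESTful query looks like from a client (the URL pattern below is illustrative only, not the standardised one):

    import requests

    # the object is named in the URI; the HTTP verb (GET) is the operation
    resp = requests.get('https://rdap.example.net/ip/192.0.2.0/24',
                        headers={'Accept': 'application/json'})
    resp.raise_for_status()
    print(resp.json())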

The Working Group is working on registration data for both the numbers and the names people, so RIRs, IP addresses and ASes, and on the names side TLDs: ccTLDs, gTLDs, the whole lot. That makes it an interesting Working Group. On the numbers side, it's a small number of RIRs, five in total; on the names side the number of players is significantly bigger: there are over 2,000 TLDs on their way, about as many registries, and a multitude of registrars, and all those people are stakeholders for this protocol.

A number of people have made an implementation; as I just said, RIPE has done so, ARIN has done so, and they came together and wanted to standardise this. The names people saw this and said, hey, that's a good idea, we want to hop on and see if we can benefit from this work.

And it's generally considered that having one protocol to do approximately the same thing is a good thing, so that's why we have the charter as we have. I am on videotape, hello.

The numbers people, as I said, have running code; this is an example for ARIN, and again, in the RIPE region we have a similar thing. And if we look at the work that's going on in the Working Group, then I don't think the difficulties are with the RESTfulness of the protocol and the implementation specifics of that; the big elephant in the room is actually the data model: what are the elements and objects that you are going to standardise in this Working Group? And this is a journey that the Working Group is undertaking now. So this is also a good time to hop on with the work if you are actually interested and want to give input.

The state of the Working Group is that we have just started to adopt Working Group documents, and we are starting to look at the data model, so this is actually a good time. If you want to know more, start off with reading the charter and the mailing list; I have got pointers at the end of the presentation.

Currently there are four Working Group documents -- there are actually more, but these are sort of the ones that work on a unified model of names and numbers requirements. The first one basically describes the HTTP transport layer, the second one and the third one are the query and the response that are used, standardising those, and then there is one on security services. If you read that document you see a lot of sections that say "to be determined", so a lot of work needs to be done there.

Where are we? Two BoFs before the Working Group was chartered in the summer of this year; the first face-to-face at the summer meeting, and the next face-to-face meeting will be in Atlanta in November. Of course, the question to you as an audience, as stakeholders, I would say is: write, review and implement; running code is important. And that's not only the server code but also the client code.

These are the coordinates for the Working Group. That's the summary. Thank you.

JIM REID: Thank you, Olaf. Any questions?
(Applause)

I see someone standing up.

AUDIENCE SPEAKER: Hosting -- and I will ask you why we need this new protocol, and what are the differences between this new protocol and the Extensible Provisioning Protocol that's defined in RFC 5731?

OLAF KOLKMAN: Which protocol?

OLAF KOLKMAN: The EPP protocol is on the inside of the data; this is how registrars communicate with registries. This lives on the other side of the data, where the general public wants to get information from registries about what is out there. And that's a completely different type of data set, a completely different set of requirements, and there might be some overlap in what people use, but we are also a few steps further in time in what is available as libraries and so on and so forth. So, completely different work; not completely unrelated, but a different problem to solve, hence a different protocol.

JIM REID: Next up we have Daniel Karrenberg, who was going to be giving us an update on the future of DNSMON. I think we have another change of speaker, and Romeo will step in.

ROMEO ZWART: This will be a very poor impersonation of Daniel, so apologies for that in advance. This will be a very short one, basically to clear up some potential confusion that we may have caused ourselves with regard to the DNSMON service.

As you may or may not be aware, we have been announcing that the TTM service will be wound down at some point in time, and that point in time may actually vary depending on your point of view. The winding down of TTM has been discussed extensively and will be a bit more during this week. That has a bit of impact on the DNSMON service, because there is overlap in the systems doing the monitoring. That has created some questions with our DNSMON users as to whether DNSMON will cease to exist as well when we wind down TTM. So the answer is no: DNSMON will remain. As far as we are concerned it's business as usual with the DNSMON service, although, as I said, there is some interconnect at the lower level, which has, in the recent past, led to some minor outages in the DNSMON service that were basically TTM-related.

The DNSMON service exists now and will remain an existing service also in the future. However, we will basically be doing some stuff beneath the surface, if you will, in terms of how the system actually works. And that has to do with a migration of the vantage points we have, which are currently TTM systems; they will be moving over gradually to Atlas anchors, which is a different project that is, in a way, the progression or next phase of the TTM services.

Now, one impact of that might be that at some point we might see the number of vantage points for DNSMON lowered a little bit. We will certainly keep an eye on that and migrate in a coordinated way, so that there will be a substantial number of vantage points for DNSMON left even while TTM is being closed down.

What we will do, actually, as I said, is migrate the measurement points to Atlas anchors, and at some future point in time there will also be some changes on the reporting side, so the GUI for the service will be moving into the RIPEstat interface.

The basic message is: yes, DNSMON will stay. However -- and this is the next and basically my last slide -- we expect there to be a change to the charging model, and this is obviously dependent upon the approval of the charging scheme, which has been -- will be -- proposed at the General Meeting. But under the new charging scheme that we think will be implemented, DNSMON as a service will be a member service, which means that current DNSMON customers will have to become RIPE NCC members.

That's a bit of a change.

Finally, the change shouldn't be too big, and that's certainly not something we should discuss here; it's more of a discussion for the Services Working Group, I think. But it is a contextual difference, it's a bit of a different model.

One other thing related to this: as it would become a member service, it makes sense that it will become available to members in general, not specifically to the current DNSMON customers. Our current planning is that roughly during the second half of next year we will be able to open up the service beyond the current DNSMON customers, so at that point in time we will be looking to expand it to other potential members that are interested. Although we will maintain a focus on, if we can say that, important domains, so we will be focusing on TLDs and other similar high-level domains to be monitored.

And for those, basically, we will put an emphasis from our part on getting them as part of this service as well.

Having said that, are there any questions?



JOAO DAMAS: When you say you will open it to all members, can you be a bit more specific? Because you have a very large member base, so it becomes a little less clear who is looking at the real data, for instance. I am concerned about who gets to see realtime data; there were concerns in the past about accessibility to the graphs in realtime and what people may be using those for. Can you elaborate a bit on that?

ROMEO ZWART: What we mean to say with the service is that we will be open to monitoring domains for members that want their domains to be monitored. The focus there -- our preference -- will be important domains, for some measure of importance. About the visibility of that information, that's a different point, and actually, I personally haven't considered that aspect; it may well be that it has been considered more deeply, but I would need to get back to you on that. It's a very good point.

JIM REID: Questions? Thank you very much, Romeo, for standing in for Daniel, thank you.

(Applause)

And our next talker is Roland van Rijswijk from SURFnet, with a follow-up to the Ljubljana meeting.

ROLAND VAN RIJSWIJK: Yes, I am going to go back to something that I talked about at the last Working Group meeting in Ljubljana, where I presented some work that we had been doing on fragmentation issues, specifically in DNSSEC, but it applies to anything that uses EDNS0.

So, just for those of you who weren't there and haven't seen the slides from the last time, a quick recap of the problem we are dealing with. If you have larger answers in DNS that exceed the MTU size of the transport medium -- say I have a cache that's sitting behind a firewall, it sends a request and gets a fragmented answer back (arrows 2 and 3 on the slide) -- then there are firewalls out there that may be blocking those fragments, and that means that the caching resolver behind the firewall is not getting any answers. We did some investigation into this problem; the research was done by a master's degree student in our office, and what we noticed is that the problem is much bigger than we initially expected. There was some prior research into which hosts can and cannot receive fragments, done by Nicholas Weaver from the Netalyzr initiative, and he found that up to 9% of all hosts on the Internet are unable to receive fragmented messages.

We did this specifically for DNS and DNSSEC, and we noticed that between two and ten percent of all the hosts that send queries to our authoritative name servers are unable to receive fragmented responses, and that's a big problem, because it means that they cannot get answers for our DNSSEC-signed zones.

Jim Reid came up to me after the presentation last time and said: can you give some kind of recommendation on how we should deal with this? So, we sat down and we decided that we should write a proposal for a recommendation, and that's what I am presenting here. Of course, there are a number of solutions out there; the real solution is that the people that sit behind these firewalls set their resolver settings such that they do not receive fragmented answers. That's the real solution. And the update to RFC 2671, which is the EDNS0 RFC, states that resolving name servers SHOULD -- and I capitalised this in the slides, but I didn't know I was going to use the PDF here -- advertise a proper maximum size if they cannot deal with fragments. Unfortunately, that RFC is still a draft, and there are very few people that run resolvers who actually change this; they keep using the default setting.

What we did is approach the problem from a different perspective, because it is a problem for us as operators: if people are unable to resolve data in our zone, they are very likely to think there is something wrong with us and not with their Internet provider. To be more specific, we had an issue with the largest ISP in the Netherlands, where we have a signed domain, and they were blocking fragments on the edge of their service network, which meant that any customer of that ISP was unable to send us e-mail or reach our website. Some of those customers were my colleagues, so that's how I learned about this, and I had my colleagues call the help desk and, lo and behold, they said SURFnet must be doing something wrong because their website is unavailable. That goes back to the remark that Patrik made this morning in the panel session on DNSSEC, where he said they are always blaming the wrong guy. We thought: can we as a zone operator do something to deal with this? And one of the solutions is that we could lower the EDNS maximum response size on our side, and specifically do that on some of our authoritative name servers, so that people that are doing it right can still get the big answers with all the nice additional data in there, but people who have problems will at least get an answer, or fall back to TCP because they get a truncated answer.

So I asked the student who did the initial study into the extent of the problem to give us some numbers on what normal behaviour for resolvers would be and what you will see happening if fragments get blocked. So these are normal resolver operations, and the numbers show you the 95th percentile of response times; you can see that they are all very low. These are all for empty caches, so they are not representative, but it's just to get a feeling for what you could expect if everything was working as it should be.

Now, these are the same numbers, but with fragmented responses blocked. What you will see is that, for instance, for Windows Server 2012, on average it takes up to 18 seconds before you get an answer, because they have some sort of fall-back mechanism in there, but it's really bad. Unbound from NLnet Labs will fall back to a smaller EDNS buffer size after a failed query, and in parallel try to see if it can get an answer by setting a small maximum response size; that was recently changed, in I think Unbound 1.4.17 or 1.4.18, and you can see that even though it takes longer they still get a response, but it takes twice as long if fragments are getting blocked. The same applies to BIND from ISC; it's a little bit more conservative, and we actually had a look at what they do: after about five failed queries they will fall back to the minimum EDNS buffer size, which is 512 bytes. But still, they manage to get an answer out of it, although it takes a lot longer, ten times longer than the normal performance.

So what we did is, we changed the setting on one of our authoritative name servers and set the EDNS buffer size on the authoritative server to something that ensures that no fragmented responses get sent back; we set it to just under 1,500 bytes, below the Ethernet MTU. The result is that Windows is still struggling, but the time is reduced. Unbound does a little bit better, and BIND much better; you see an improvement of a factor of five.

If we do this for two of the five authoritative name servers, then we see a dramatic performance improvement. Windows Server will now get a response in two seconds, which is still bad, but not as bad as no answer at all. Unbound is nearly down to normal response times, and the same goes for BIND, which is about two times slower than it is when there are no problems.

The experiments for which I showed you the graphs were all in a controlled environment, because obviously we don't want to experiment with this in a live setting, and it wasn't very hard to set up: we could easily manipulate the network settings. But we also did an experiment on our live authoritative name servers, where we took one of our name servers and specifically set the EDNS buffer size to 1232, which means that even on IPv6 with the lowest possible path MTU you wouldn't get fragmentation. And what you see is that, obviously, the number of fragmented responses drops to zero. Interestingly, there is hardly any increase in truncated UDP messages; the name server will adapt and put less information in the response, and we saw no statistically significant increase in truncated UDP responses, which is very good.
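From the client side, the same 1232-byte cap described here can be expressed with the dnspython library, falling back to TCP when the TC bit is set (the server address is a placeholder):

    import dns.flags
    import dns.message
    import dns.query

    # advertise a 1232-byte EDNS buffer: below the IPv6 minimum MTU,
    # so the answer never needs fragmentation
    query = dns.message.make_query('example.com', 'DNSKEY',
                                   use_edns=0, payload=1232)
    response = dns.query.udp(query, '192.0.2.53', timeout=2)
    if response.flags & dns.flags.TC:
        # truncated answer: retry over TCP, as the protocol requires
        response = dns.query.tcp(query, '192.0.2.53', timeout=5)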

One of the ways we detected that this issue occurred was that we saw ICMP "fragment reassembly time exceeded" (FRTE) messages coming back from hosts that got the first fragment but not the follow-up fragments, and when we changed this setting on this specific name server we saw that problem completely go away, as you would expect. And also, the total number of retries did not increase; we monitored whether the behaviour would change, and that turned out not to be the case in a real-world situation.

Obviously, if you want to recommend to people to change this kind of setting on their authoritative name servers, you need to know if it causes trouble. So what we did is, we looked at SecSpider, picked 1,000 of the domains in there, and sent queries with a certain maximum response size set on the querying side. And what we wanted to see is whether these resulted in loads more truncated messages. If we set the maximum response size to 4096, which is the default setting on most resolvers, then we don't see any truncated responses in the set that we probed. If we set it to 1472, just below the Ethernet MTU, which would ensure that any response you get back does not get fragmented if it's carried over Ethernet, then you see that only for a small number of domains do we get truncated responses, and further investigation showed that these were domains that were doing all kinds of experiments: they have tonnes of keys in their zones, so they had tonnes of signatures in there, yielding big answers; the "normal" domains did not suffer and did not send these truncated responses. If we drop it down to 1232, so that it would also ensure that there would be no fragmentation on IPv6, then we get into a bit of a problem, because then we see, specifically for DNSKEY queries, that up to 40% of all DNSKEY queries -- so 40% of the 1,000 domains we sent the query to -- will give us a truncated response. That would be a problem; that would be a reason not to set the EDNS maximum response size on the authoritative side to 1232.
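A minimal sketch of that probing methodology -- counting TC bits at a given advertised buffer size -- might look like this (the name server address and domain list are placeholders; the actual study queried the domains' own authoritative servers):

    import dns.exception
    import dns.flags
    import dns.message
    import dns.query

    def truncation_rate(domains, payload, server='192.0.2.53'):
        # query each domain for DNSKEY with a given EDNS buffer size
        # and count how many answers come back truncated
        truncated = 0
        for name in domains:
            q = dns.message.make_query(name, 'DNSKEY',
                                       use_edns=0, payload=payload)
            try:
                r = dns.query.udp(q, server, timeout=2)
            except dns.exception.DNSException:
                continue
            if r.flags & dns.flags.TC:
                truncated += 1
        return truncated / len(domains)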

Finally, I circulated the document on the RIPE DNS Working Group mailing list last week, which has these recommendations in it. I know that some people in the room will feel these are controversial, but I want to start a discussion on this. And given the experiments that we did and the data that we have, we would argue that, for the time being, as long as there is no large-scale implementation of the recommendations in RFC 2671bis -- which says resolvers should configure their hosts correctly -- there should be a way to deal with this for people on the operator side. For us there is a big business case for our zone being resolvable, because obviously we want our clients to be able to reach us, and that goes for everybody that signs their domain with DNSSEC, and it also goes for everybody who has, for instance, large record sets in their zone, like some of the blacklist people that do spam detection have.

So what we are recommending is that at least 50% -- and that's a bit of an arbitrary figure, but it is based upon the results that we got with our own authoritative name server set -- at least 50% of your authoritative set should give answers that are not fragmented, either by setting the maximum response size to something that avoids fragmentation, or by setting minimal responses, which is not something I put on the slide, but it's an alternative.

And obviously, because you are likely to see some extra truncation, this is a big business case for finally enabling TCP for DNS, which some people still don't seem to do.

Jim asked me to leave some time for discussion, so these are the slides. Fire away.

JIM REID: Any questions? Please form a queue at the mikes.



RICK VAN REIN: Rick van Rein. You mentioned that there are different safe boundaries for IPv4 and IPv6 return traffic, and you also mentioned the problem is caused in firewalls. Now, IPv4 and IPv6 have different firewalls. Have you investigated whether this problem is an IPv4-only firewall problem or an IPv6 --

ROLAND VAN RIJSWIJK: We have investigated that, and the problem is much worse on IPv6. IPv6 has the principle that fragment reassembly is only done at the end points. That means that if you have a firewall that sits in the path but is not an end point, it will not do fragment reassembly and will discard the fragments, so the problem is worse on IPv6.

RICK VAN REIN: Have you investigated if firewalls in practice have a problem with dealing with too long packets?

ROLAND VAN RIJSWIJK: Sorry I don't understand the question.

RICK VAN REIN: Sorry, you have answered basically, thank you.

JOAO DAMAS: One data point and then some questions. It has taken a long time, longer than I wished for, sorry for that, but there is a proof -- I asked for it yesterday, so I need to clean it up a little bit and we should have a final document soon. BIND behaviour: it turns out that when you start using things, there is such a thing as trying too hard, which was what BIND was doing, basically: trying all possible combinations to get to an end result that worked for everyone. And that takes time, so we are going to spend the autumn of 2012, the next months until the end of the year, rewriting the EDNS behaviour in BIND 9. As we do this, there is also discussion internally about how far one should go, also related to the sizes of the EDNS buffers. Should one compromise the efficiency and performance of DNS because there are some firewalls that, if you work around them by making all these sorts of changes to your system, will remain in place, broken, forever? Or shall we just make things go well for the people who configure their stuff well, and somehow make everyone else suffer for having broken stuff? There are both sides of the argument; I would like to hear what you have to say about that.

ROLAND VAN RIJSWIJK: I remember that discussion came up in Ljubljana as well. There are two things. One, all I really care about is that my users are able to resolve my zones. I care about the user being able to use the Internet, and that user may not have influence on the firewall that's mucking up his or her resolving. Should you compromise? Well, the approach that BIND has already taken, having a fall-back mechanism, means you are considering that, hey, in the end we want the user to get an answer. What the guys from NLnet Labs have done is gone one step further and said: if we don't get an answer on the first try, we will try a different strategy on the second attempt, and thus get them an answer a lot quicker than BIND is doing now. And I think the user still has a penalty, right? Users cannot reach me now if it takes four seconds for them to get an answer, because, as Geoff Huston pointed out this morning, they have a very small attention span. If their Internet feels slow because there is a fall-back mechanism, or because we are implementing this recommendation, they are less likely to blame it on us as a zone operator and more likely to blame it on their network operator, because their Internet is slow. And if their neighbour's ISP is doing it right, they can go to their neighbour and say: that site has been slow for a very long time, but I see it loads very quickly on your computer; do you use a different ISP? And if the answer is yes, they might call their ISP and tell them: you are doing something wrong, my Internet is slow.

JOAO DAMAS: A follow-up question. As a zone -- authoritative server -- operator, how happy are you with an increase in TCP for DNS?

ROLAND VAN RIJSWIJK: That wasn't in the stats, but we obviously monitored whether there was an increase in TCP, and there was no statistically significant increase in TCP fall-back.

JIM REID: We are running a little bit over schedule, so I would ask people to make their answers and questions brief. I will close the mikes off after this.

PAUL VIXIE: Paul Vixie, ISC. A couple of questions and comments. Did the total number of UDP queries go up when you dropped the buffer size? In other words, when you say that there weren't a lot of extra truncations, I am wondering: did that cause the recursive servers to ask you more questions to get the various data you did not include in the additional section?

ROLAND VAN RIJSWIJK: We did not specifically monitor that but as far as I can tell from the data that we have, there were no spikes that indicated there was an increase in extra queries, no. But we could look at it specifically.

PAUL VIXIE: I would love to know that answer, because it seems to argue that we are putting unnecessary stuff into the packet simply because the packet can be large, and it might be that we should have a soft and a hard limit in the servers, so that if you are exceeding 1,500 bytes with something that doesn't really need to be there, we could leave that out and avoid a lot of this trouble. So my comment is: as the author of 2671, I was aware that fragmentation was a bad idea. I decided that we would do it anyway and basically use it as a forcing function to make the Internet fix itself. It hasn't worked, and I can see it has not worked, because you have to be resolvable and you will do whatever it takes to be resolvable, including discarding your first-mover advantage that would otherwise eventually cause these firewall operators to improve their configurations. So I apologise.

ROLAND VAN RIJSWIJK: Thank you.

AUDIENCE SPEAKER: From Netnod. About the proposed recommendations: in what way, shape and form do you intend to convey these recommendations, and to whom?

ROLAND VAN RIJSWIJK: OK, well, based on the comment from Jim last time, we have written a draft RIPE Working Group recommendation, which we circulated on the mailing list; that was sort of a starting point. I am open to alternatives. But, as Paul stated, 2671bis contains a big SHOULD on the behaviour on the resolving side, and once that gets out there, that's a good starting point.

AUDIENCE SPEAKER: DNS operators hitherto haven't been too happy to be told how to run their servers, either on the authoritative side or the resolving side, so even though the ideas may be well intended, I kind of --

JIM REID: If I could interject there: we realise we don't have a big stick and we are not the protocol police, but at least we would have something documented that gives an example of what some people do; for a large number of people it could be a useful thing to do.

AUDIENCE SPEAKER: I agree but I would rephrase entirely and say we have done this analysis, these are the results of various types of configurations, please use this as input in your decision process.

ROLAND VAN RIJSWIJK: That's a good suggestion, yes.

ERIC OSTERWEIL: Eric Osterweil of Verisign Labs, also of SecSpider. I guess having a recommendation seems like a sort of slippery slope, because at some point you might be telling somebody how to write their code, and that tends to go awry. But I think it's useful to point out that there is a handful of conflations here, and if we separate them we might see what is really going on. I think the problem is not at the recursive side or the name server side; it's in the network in between, and the paths between those two end points change all the time. You have name servers that don't want to keep state and resolvers that do keep state, and if you apply a hard limit on either side because of something in the middle that can change -- I change my firewall, my path, I upgrade the firmware on something -- then you have hard-coded something for an effect that has been ameliorated. My personal opinion would be that the software should adapt to the situation. We already try to avoid doing anything stateful at the name server side -- you are a big name server operator and we are too, so we understand the semantics of that -- but at the same time, if you want to issue something for people to consume, it might be to really nicely decouple what is happening and where the problem is happening: there is a network problem, we can put protocol semantics into EDNS0, we can say things like "I have measured my buffer size and adjust it dynamically", but any hard limit risks becoming obsolete and a debugging problem.

ROLAND VAN RIJSWIJK: I concur. I argued this the last time in Ljubljana, and unfortunately that turned out to be a bridge too far at that time, because I got a lot of kick-back saying people should just know how to configure their resolver. As it turns out, many users don't know that; there is somebody sitting somewhere in this room who has been told by his boss that he needs a new DNS server and who, within five minutes, grabs something off the shelf and installs it, and it should just work. I agree with you that adaptive behaviour is better, and I would like to challenge the people that create resolver software to create this adaptive behaviour.

JIM REID: Thank you very much for that, Roland.
(Applause)

So, while the panel gets organised for the next part of the agenda, I would like to close off this part of the discussion. We would like to see this document that Roland prepared given more consideration in the Working Group, and if there is consensus in the Working Group that this document is in reasonably good shape, or after some fine-tuning along the lines mentioned, perhaps we could get it published as a RIPE document in a month's time or something like that. If there are any further comments, make them on the mailing list; the document just came out before the Working Group met, so there wasn't much time, and we still have a chance to discuss this further. I personally think it's a useful piece of work. With that, then, we now have our panel session coming together. We have Andrew Sullivan, Olaf Kolkman, Paul, and I hope Xander Jansen is going to join us too. Peter is going to chair this session. We are discussing some of the concerns about DDoS and DNS amplification attacks.

PETER KOCH: So the panel participants are already on their way. Who of you knows what DNS amplification attacks are? OK. Who of you is harmless? Also OK. Thank you. So, this is obviously a topic that almost everyone has been in touch with, voluntarily or not, and we thought that it might be a good idea to have a panel session on the current state of affairs, opportunities and threats, possible solutions and such. The idea here was not to restrict ourselves to a particular, say, technical solution, but to take a bit of a broader perspective, because in the end there might be bad guys doing these things, and society has certain measures to deal with bad guys, and not all of them are technical. So, with that said, there is Olaf Kolkman, also an ex-IAB member. We have this new "former" title -- like Gordon yesterday was an official ex-EU representative -- so, you know, an ex-IAB representative.

OLAF KOLKMAN: A lot of ex.

PETER KOCH: Paul Vixie, with a lot of hats and a lot of ex-hats, and I leave it to him to explain whatever role he prefers. Andrew Sullivan, still not the ex co-chair --

ANDREW SULLIVAN: Any day now.

PETER KOCH: And Xander Jansen. Please go ahead, introduce yourselves with a short perspective, and we will frame some questions. And in the second half, I will invite the audience to throw questions at the panel. Thank you.

XANDER JANSEN: I work for SURFnet, and I was a bit surprised to be asked to be on this panel. SURFnet is a Dutch ISP for the universities, the academic world, schools. We have been middleman, victim and researcher of DNS amplification attacks, and that's the reason I am now sitting here.

ANDREW SULLIVAN: I am Andrew Sullivan. I am the erstwhile co-chair of the DNS Extensions Working Group in the IETF. I also work for Dyn, who, among other things, run DNS servers, and so we see some of these attacks. A number of years ago I worked for Afilias and we saw the attacks there, so this is a problem that has been going on for a long time, and pretty clearly we haven't moved very far in solving it.

PAUL VIXIE: I am Paul Vixie, and I accidentally coined the term "orbital death ray", referring to the dotcom servers, but really to the roots and any TLD server that lacks any discipline about rate limiting, because you can send a tiny trickle of tiny packets and it will answer back with these mammoth EDNS mobygrams from hell that will bury someone whose IP address was forged. There is all kinds of stuff to talk about, what needs to be done about it, and I am hoping to do that.

OLAF KOLKMAN: NLnet Labs. We make name server software that can also be used to create -- what was it? -- an orbital death ray, like any authoritative name server that's not protected, by the way, and we don't have any technical measures implemented yet, because we are trying to figure out what the role of the DNS server and the DNS operator is. From a little bit more distance, it feels to me that we might have the same discussion that we had this morning, and just a minute ago, about being a good netizen, a first mover, and behaving yourself on the Internet. Who is responsible for fixing what? I think those are the questions that we will see popping up in this discussion, and I do not have answers, only questions, at this moment.

PETER KOCH: Thank you. So, first of all, this may look like a bit of a flashback, because reflection attacks aren't really new; we have at least one RFC in the 5300 range -- João is one of the co-editors, if I remember correctly -- and that's ages ago, and even then EDNS0 was present and abused. So I would like to ask the panel to give a short perspective: what is different this time, why are we actually dealing with this today, and what do you think has changed? Just go ahead.

PAUL VIXIE: What has changed is that everybody now knows about it. It's been popularised. There are easy software kits; any angry teenager can now use this technology. It's been sort of an open hole, a big vulnerability in the fabric of the Internet architecture forever, since day one, but now it's popular, and that is what has changed.

PETER KOCH: Any other stance? This RFC specifically targets open recursive name servers.

ANDREW SULLIVAN: Right, but the fact is, BCP38 came out in 2000. We have been telling people for a long time that their networks are filthy and they should keep them clean, and yet we haven't managed to convince the network operators to do that. So it's a particular example of a general problem that we have on the Internet, and it's fundamentally a kind of commons problem: it costs money to keep your network clean, and therefore you are not going to do it when the victims are someone else.

PETER KOCH: I would just like to remind the community that this forum, not this Working Group but the RIPE community, actually had an anti-spoofing task force that was tasked with encouraging the deployment of BCP38, which is the filtering practice, so yeah, indeed. Olaf.

OLAF KOLKMAN: Well, you took away my question, because I was going to ask the audience -- and I expect a lot of hands -- who knows what BCP38 is and who doesn't? So first, who knows? And who doesn't know what BCP38 is? So, in a room full of specialists, I still see about ten hands. If you were to ask this question in any other operator forum or industry forum, say somewhere with more of the commercial type of people, there would not be any hands raised if you asked: do you know what BCP38 is? So, what we have is a sort of general unawareness about public health. That's one of the aspects.

Second, you were asking about solutions. One of the other solutions is actually prosecution. Do you have an answer on that? It's more a rhetorical question, by the way.

ANDREW SULLIVAN: I am very nervous about meatspace-based law here because, I mean, as we heard in the plenary yesterday, there are people who are really keen to apply what are effectively old-fashioned top-down laws to the Internet, and we already know that part of the reason the Internet is successful is because we didn't use that style of operation for it. Once you involve people whose natural response to everything is "we will involve the national regulator and get people to do it the way we tell them to", we are going to change the way our network works. So, you know, we may kill the patient while we are trying to cure the disease.

PAUL VIXIE: I don't think regulation is automatically a bad word, or that regulators are automatically bad people. The Internet cannot always be a place where you have no recourse. We need accountability from the owners and operators of various resources. If somebody is spamming you, you need a way to possibly reject traffic from them, instead of letting them bounce it off a whole lot of other places and thus have no accountability for their actions, and the same is true of packets. If this kind of thing was happening in the mails, where you were getting letter-bombed in this same way, it would be thought of as a disaster and, you know, people would march in the streets. We can't say that the Internet, because it lacks streets, should have no one marching.

OLAF KOLKMAN: Yes, you just said, Andrew, the Internet is about cooperation, but in terms of source address validation and BCP38 that cooperation essentially failed. But here is a nice one: who is accountable for this? Because I heard in another forum a conversation about whether authoritative name servers, if they deploy defences, create a precedent in terms of allowing themselves to be held accountable for the bad things that others, in fact, cause. So, if you, as an authoritative name server, deploy mechanisms that stop or reduce the effect of the death ray, does that make you a regulatory target, because you just took responsibility? And the next time that something bad happens, who is going to knock on your door? And maybe the moderator of this forum has something to say about that?

PETER KOCH: The point of making me the moderator was actually to avoid that. But still, I thank Olaf for that perspective. You mentioned the deployment of defences, and I guess we have talked about the particularities of the technology. This long-term effect of taking on responsibility is of course important; but it's not so much about the blaming, so let's set that question aside for a second and maybe blame ourselves for a collective failure. I can remind you, or us, that a famous member of the community, who doesn't usually show up at these meetings but actually works at a university in this country, and who wrote one well-known piece of mail software and one of DNS software, has been warning for years about this potential, and obviously nobody followed suit. So there are tactics, as in deploying the defensive measures, but is there a long-term answer, that the protocol needs to be changed, or is that fruit hanging too high?

PAUL VIXIE: So, we have two bad choices available to us as protocols to carry DNS traffic: we can either use UDP or TCP. This is similar to my political system in the United States; my next president will either be a Democrat or a Republican, and there is nothing I can do about it. TCP is wrong because it requires too many packets and too much state held for too long, so you can never achieve the transaction rates you need with the size of the pipeline that we have. UDP is wrong because there is no repudiation, no way to be certain that the source address is correct, because you never have enough packets. William Simpson and I designed an option for the TCP protocol to give it the ability to answer with one packet in each direction, and the response by the community was: why would we ever need anything like that? So although we could potentially move beyond the straitjacket of TCP versus UDP, I don't think we have the will, and in any case, the firewalls have first-mover advantage. No protocol other than UDP and TCP is going to work in this room, or in any hotel room or coffee shop, or in most houses in the world, so we really are stuck. So, no: the answer is that we should not be using the protocols we are forced to use. What is your plan B?
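
The straitjacket Vixie describes is visible in how little a UDP DNS exchange involves: one datagram out, one back, with no handshake and nothing that ties the reply to a verified source. A minimal sketch; the resolver address 192.0.2.53 is a documentation placeholder and will not actually answer:

    import socket
    import struct

    def build_query(name: str, qtype: int = 1) -> bytes:
        # 12-byte header: ID, flags (recursion desired), QDCOUNT=1,
        # zeroed answer/authority/additional counts, then the question.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(label)]) + label.encode()
                         for label in name.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    # One packet out; the kernel stamps our real source address here, but
    # nothing in the protocol verifies it -- a raw-socket sender can forge
    # any source it likes, which is what makes reflection possible.
    sock.sendto(build_query("example.com"), ("192.0.2.53", 53))
    try:
        reply, server = sock.recvfrom(4096)
        print(f"{len(reply)} bytes back in a single datagram")
    except socket.timeout:
        print("no answer (placeholder server)")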

ANDREW SULLIVAN: One of the things we could do, instead of invoking laws that are first of all national in scope and secondly well-designed for environments that are not like the Internet, is to try to figure out mechanisms by which, as network operators, we can do something. One answer here, of course, is some sort of reputation system, not for particular end points but for networks. So if what you start to discover is that certain networks become the source of a lot of abusive traffic, you gradually turn down those networks: you just don't accept traffic from them, or you filter the traffic, or you limit it so that it can't come through. Now, this has nasty effects; it makes me very uncomfortable to suggest it, so I am not advocating it, but it is the kind of thing I hear people suggest from time to time, and it seems to me that if we think about these things with a completely open mind, we will maybe stumble across something that's a less bad idea than balkanizing the Internet that way, which is what I am afraid is going to happen if we don't.
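
A toy model of the reputation idea Sullivan sketches, with invented scoring and thresholds; a real system would need score decay, appeal paths, and far more care:

    # Hypothetical per-network reputation throttle: abuse reports raise a
    # network's score, and the fraction of its traffic we accept degrades
    # gradually rather than cutting off entirely.
    from collections import defaultdict

    abuse_score = defaultdict(float)

    def report_abuse(network: str, weight: float = 1.0) -> None:
        abuse_score[network] += weight

    def accept_fraction(network: str) -> float:
        """Fraction of traffic to accept: degrade gradually, floor at 10%."""
        return max(0.1, 1.0 / (1.0 + abuse_score[network]))

    report_abuse("198.51.100.0/24", 5.0)
    print(accept_fraction("198.51.100.0/24"))  # ~0.17: heavily throttled
    print(accept_fraction("192.0.2.0/24"))     # 1.0: clean network, full service

The point is only that "gradually turn down" can be made mechanical; whether it should be is exactly the policy question the panel is circling.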

PETER KOCH: So before we open the floor, I would like to ask one question of Xander, because you mentioned that you had been on several sides of this party. Could you share with the audience what, say, the costs were on your side of being an accomplice, of being a reflecting, amplifying party?

XANDER JANSEN: The cost, not in money, but in the many hours we spent on it: it's a lot. But basically, we are a target: we have three name servers which are politically a bit dangerous, and from time to time they get blasted away from the net with, amongst others, these amplification attacks, so a lot of fragments coming in. The last one was a few weeks ago: 30 gigabits per second, which is not much, but it is 75% of our backbone in places. That does something to the network. So it does harm.

We are also being used in the recent 9K amplification attack. We have four name servers on our network which are being used to generate large answers to spoofed source addresses, and recently we have also had customers being the victims of these attacks. Combined with what Paul said, it's popularised: gamers just have software, they push a button, and from all over the world packets are generated, sending queries to the name servers, which give large answers. So it affects both us, because our backbone is being saturated and our name servers are handling, instead of the normal 200 queries per second or so, 10,000 queries per second continuously for the last month (they don't hurt us because we filter them), and our customers, who are being blasted away from the net because some student on their network thinks it's funny to press that button. "The Internet is getting slow", he said, but basically their connection was saturated. And it's getting more and more, and that's the damage we see at the moment.
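
The arithmetic behind figures like these is simple, which is exactly the problem. A back-of-the-envelope sketch, with assumed packet sizes (illustrative only; real sizes depend on the zone and the EDNS buffer size):

    # Assumed sizes: a ~64-byte query reflected as a ~3,000-byte answer.
    query_bytes = 64
    answer_bytes = 3000
    qps = 10_000                 # queries per second, per the figure above

    amplification = answer_bytes / query_bytes
    outbound_bps = answer_bytes * 8 * qps

    print(f"amplification factor: ~{amplification:.0f}x")           # ~47x
    print(f"reflected traffic: {outbound_bps / 1e6:.0f} Mbit/s")    # 240 Mbit/s
    # Four such name servers on one network would already reflect close to
    # 1 Gbit/s; many abused networks combined is how attacks reach tens of
    # gigabits per second.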

PETER KOCH: Thank you. I think we have time for one or two questions from the floor.

JOAO DAMAS: I am just wondering, given all the discussion and the length of time it has taken to get us here, and we are still where we were when this discussion began 10 or 15 years ago, whether the right thing to do here is simply to admit that the solution to this problem cannot come from the people in this room; that maybe, as much as you are afraid of meatspace-based law, that will in the end be the only solution.

OLAF KOLKMAN: The problem with meatspace-based law is enforcement. If you were to try and really catch the culprits behind this type of reflection attack, that takes a serious amount of investigative resources; that's not a trivial thing. And you also need international treaties and so on and so forth. Now, I think there is a role in meatspace for having regulation in place that allows that type of cooperation with operators. That said, there is also the arms race that we are all involved in, and we are still trying to keep up, and part of that is trying to figure out mechanisms to keep all this nastiness within acceptable limits. Now, I don't say that 30 gigabits per second is an acceptable limit, but that seems, for this type of audience, the only way out. Or universal deployment of, what is the number, RFC 3514.

PAUL VIXIE: So, recourse ultimately has to come down to criminal punishment, and I want to just note, to this audience: if you are a victim, if you are either being used as an amplifier and reflector, or you are being attacked by others who are doing that, and you have data that you can share about it, and you are willing to participate in an investigation involving real-time data sharing, where you might receive or send information and join the global effort to track down the people who are doing this, please contact me off-line. It's a secret-handshake society, but I can get you in.

JIM REID: There have been a number of defences suggested for this; we have to take a multi-discipline approach: in one case it's going to be law enforcement, in others things we can do at the applications level and the network operations level; there is no single magic rule that's going to put a stop to this. One thing I think is that by the time the packets arrive at my name server the bad guys have already won, and what I would like to try to do is stop the stuff from coming in the first place, and again BCP38 is perhaps one of the best ways of trying to achieve that. I wonder if we could somehow find economic incentives to encourage ISPs to deploy BCP38: if core Internet Exchanges gave people discounts or better prices or fatter pipes if they had BCP38 in place before they got access for peering, or something like that. Just a thought.

ANDREW SULLIVAN: This is entirely in line with my thinking. I think that something we can do, that does not involve prosecution across national boundaries with possibly uncooperative foreign governments, is simply to say that if you become the kind of network that sends us crappy traffic, then you are just not going to get good bandwidth. That answer scares me, as I already suggested, because it potentially says that we have different classes of Internet service. On the other hand, currently we have different classes of Internet service anyway, because if you are the target network your service sucks, and that seems to me to be a worse answer. If we could get Internet Exchanges to encourage this sort of behaviour, it would be a huge benefit.

PETER KOCH: OK. I am sorry, I had to cut the queues; we are again running into the break, but I will take that as encouragement for those in the audience who did not raise their hands when asked whether they knew what BCP38 is to approach their neighbour or one of the panelists for an explanation. One of the reasons we are doing this before the break is to give you food for thought and discussion, so mingle around and get yourselves involved.

With that, I will hand back to Jim for the closing remarks and I guess we have one item under any other business.

JIM REID: Thanks to the panel and thanks to Peter.

(Applause)

We have two short items of other business. First of all, I wonder if there were any comments or feedback on Paul Ebersman's presentation in the plenary, in which he talked about issues around IPv6 and PTR records; are there any particular comments that anybody wants to make on Paul's presentation? I guess not. Well, in that case, then, Anand.

ANAND BUDDHDEV: I work for the RIPE NCC; just a very quick AOB. About six or seven weeks ago I sent a proposal around on the DNS Working Group mailing list, and this had to do with the nserver attribute in domain objects in the RIPE database. At the moment, the nserver attribute is optional, and we would like to make it mandatory. My proposal also talked about cleaning up, or tightening, the syntax of the nserver attribute, so that it's easier for operators to define their name servers and easier for us to parse these domain objects, because the current syntax makes it very complicated for us to write code to handle lists in nserver attributes. So if you haven't seen this discussion, please look at the archives of the DNS Working Group mailing list and feel free to comment there. And if anyone has any quick comments now, we are happy to hear them, or come talk to me during the break, but please do participate and let us know what you think.
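
As a rough illustration of why a tighter syntax helps the parsing side, here is a hypothetical validator that accepts exactly one hostname and at most one glue address per nserver line. The grammar is invented for the example and is not the actual proposal text:

    # Hypothetical strict parser for an "nserver:" attribute line of the
    # form: "nserver: <hostname> [<glue address>]". One server per line
    # avoids the list-handling complexity described above.
    import ipaddress
    import re

    NSERVER_RE = re.compile(
        r"^nserver:\s+(?P<host>[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?"
        r"(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*)"
        r"(\s+(?P<glue>\S+))?\s*$"
    )

    def parse_nserver(line: str):
        m = NSERVER_RE.match(line)
        if not m:
            raise ValueError(f"malformed nserver attribute: {line!r}")
        glue = m.group("glue")
        if glue is not None:
            ipaddress.ip_address(glue)  # raises ValueError if not an IP
        return m.group("host"), glue

    print(parse_nserver("nserver: ns1.example.net"))
    print(parse_nserver("nserver: ns2.example.net 192.0.2.53"))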

JIM REID: OK. Thank you, Anand. With that, I will call the meeting to a close. I'd like to thank everybody who has helped make this happen: the speakers, the panelists, Peter for chairing the two panel discussions, Jaap for chairing this morning, the NCC staff, who have done a great job taking minutes and helping with the presentations, and the lady doing the stenography as well, taking down all these acronyms and my terrible Scottish accent.

(Applause)