MavEtJu's Distorted View of the World - 2008-05
BATV and Postgrey
FreeBSD IPv6 Divert socket adventures (3)
FreeBSD IPv6 Divert socket adventures (2)
FreeBSD IPv6 Divert socket adventures
IPv6 training and thoughts
APNIC IPv6 training
Three Mobile roaming 3G network

BATV and Postgrey
Posted on 2008-05-22 19:00:00, modified on 2008-06-03 19:00:00

BATV stands for Bounce Address Tag Validation and is a method to prevent backscatter from spam runs. It works by modifying (danger! technical content ahead!) the Envelope From address in an SMTP session from user@example.org to prvs=tag-value=user@example.org. If this email is undeliverable, it will be sent back to prvs=tag-value=user@example.org instead of to user@example.org, and your mail host knows that this is a valid undeliverable message.

So what has Postgrey to do with this? Postgrey is a greylisting server. It (danger! technical content ahead!) forces email deliveries from addresses and hosts which are not yet known to be retried later. Why? Earlier this century, emails sent by viruses and spam-hosts weren't smart enough to understand this, and so the email with the malicious payload was not accepted by your mailhost.

Yes, but what has greylisting to do with it? Greylisting delays every email-from / email-to / sending-host combination it hasn't seen before. So if BATV changes the envelope from address every day, the first email from that user will be delayed every day. Every day!

So Postgrey needs to be taught what the real email address is. Luckily BATV keeps this information in the from address: prvs=tag-value=user@example.org. Small patch, and it works.

And now the tricky stuff: not everybody read the documentation properly, and the two following formats have been seen:

prvs=tag-value=user@example.org
prvs=user=tag-value@example.org

Brilliant! They swapped it around! So my four line patch becomes an eight line patch.

Anyway, the patch is available and submitted to the Postgrey author.

Note: Please note that I've made a little change to the patch to pick the second field (as the standard suggests) instead of the wrong standard. Not that it should ever come to that, but it's a "just in case" thing.

FreeBSD IPv6 Divert socket adventures (3)
Posted on 2008-05-13 23:00:00

Victory! Tonight I managed to get the nat6to4 daemon working.

Remember what went wrong yesterday: the IPv6 packet did not go from the nat6to4 daemon into divert. What?!?!? Yes, that would have worked in one go if I had actually pushed the data into the right IPv6 socket instead of into the wrong IPv4 socket. It happens, especially when you are copying whole functions around.

As a demo:

[~] edwin@freefall>telnet 2001:5c0:8fff:ffff::c3 80
Trying 2001:5c0:8fff:ffff::c3...
Connected to 2001:5c0:8fff:ffff::c3.
Escape character is '^]'.
HEAD / HTTP/1.0

HTTP/1.1 200 OK
Content-Type: text/xml; charset="utf-8"
Date: Tue, 13 May 2008 14:10:23 GMT
Expires: Thu, 26 Oct 1995 00:00:00 GMT
Last-Modified: Tue, 13 May 2008 14:10:23 GMT
Pragma: no-cache
Content-Length: 32
Server: Allegro-Software-RomPager/4.34

Connection closed by foreign host.

That is pretty uninteresting for the naked eye, but the thing is that the Allegro-Software-RomPager doesn't support IPv6; it's mapped via the nat6to4 gateway.

So, my nat6to4 daemon works for mapping the TCP or UDP payload of basic IPv6 packets (single header, nothing fancy) onto IPv4 packets. Now I can make all basic services on my jails (webservers, LDAP servers, DNS servers, POP and IMAP servers) available via IPv6.
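For those who want to play along: the kernel side is the ip6_divert.c patch (see below), and the userland side of the daemon is in essence just two divert sockets with a read / translate / write loop between them. Below is a rough sketch of that plumbing, not the actual nat6to4 code. It assumes that the patched kernel accepts socket(PF_INET6, SOCK_RAW, IPPROTO_DIVERT) the same way the stock kernel accepts the PF_INET variant, and it uses the divert ports from my ipfw rules (8664 for IPv4, 8666 for IPv6). The translation step itself is omitted; the sketch only picks up a diverted IPv6 packet and reinjects it where it came from.

/*
 * Rough sketch of the nat6to4 divert plumbing -- not the real daemon.
 * Assumption: the patched kernel allows an IPv6 divert socket to be
 * created with socket(PF_INET6, SOCK_RAW, IPPROTO_DIVERT), analogous
 * to the stock IPv4 divert socket.  Ports 8664/8666 match the ipfw
 * divert rules shown further down.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <err.h>
#include <string.h>

int
main(void)
{
	struct sockaddr_in sin;
	struct sockaddr_in6 sin6, from6;
	socklen_t fromlen;
	char pkt[65535];
	ssize_t len;
	int fd4, fd6;

	/* IPv4 divert socket, matches "divert 8664 ip from any to ..." */
	if ((fd4 = socket(PF_INET, SOCK_RAW, IPPROTO_DIVERT)) == -1)
		err(1, "socket(PF_INET, IPPROTO_DIVERT)");
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(8664);
	if (bind(fd4, (struct sockaddr *)&sin, sizeof(sin)) == -1)
		err(1, "bind(divert4)");

	/* IPv6 divert socket, matches "divert 8666 ip6 from any to me6" */
	if ((fd6 = socket(PF_INET6, SOCK_RAW, IPPROTO_DIVERT)) == -1)
		err(1, "socket(PF_INET6, IPPROTO_DIVERT)");
	memset(&sin6, 0, sizeof(sin6));
	sin6.sin6_family = AF_INET6;
	sin6.sin6_port = htons(8666);
	if (bind(fd6, (struct sockaddr *)&sin6, sizeof(sin6)) == -1)
		err(1, "bind(divert6)");

	for (;;) {
		/* Pick up a diverted IPv6 packet, remember where it was diverted. */
		fromlen = sizeof(from6);
		len = recvfrom(fd6, pkt, sizeof(pkt), 0,
		    (struct sockaddr *)&from6, &fromlen);
		if (len == -1)
			err(1, "recvfrom(divert6)");

		/*
		 * Here the real daemon rewrites the IPv6 header into an IPv4
		 * one (and fixes up the TCP/UDP checksums) before pushing the
		 * result into fd4.  The lesson of this post: whatever you
		 * reinject has to go into the socket of the matching address
		 * family -- an IPv6 packet written into the IPv4 divert
		 * socket silently goes nowhere.
		 */
		if (sendto(fd6, pkt, len, 0,
		    (struct sockaddr *)&from6, fromlen) == -1)
			err(1, "sendto(divert6)");
	}
	/* NOTREACHED */
}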
Now I have to get netinet6/ip6_divert.c into a shape that will be accepted by the FreeBSD project; right now there are too many things commented out because I didn't know what they were for. Yes, I feel like an apprentice magician left alone with a handful of scrolls and asked to find out whether they are interesting.
Patches for FreeBSD 6.3 are available from
http://people.freebsd.org/~edwin/freebsd63-ip6divert-20080513.patch.
The ipfw rules involved (with their packet and byte counters) look like this:

01005 606 245695 divert 8664 ip from any to 192.168.253.2
01006 452 33304 allow ipv6-icmp from any to me6 via tun1
01006 517 51742 divert 8666 ip6 from any to me6 via tun1
65535 79349 20349420 allow ip from any to any

FreeBSD IPv6 Divert socket adventures (2)
Posted on 2008-05-12 10:00:00

A small update:
So what works and what doesn't?
But that is an adventure for later when I have some spare time again... work and two kids, that doesn't leave much time for adventures like this (except between 22:00 and 01:00, which is very bad for everybody).

FreeBSD IPv6 Divert socket adventures
Posted on 2008-05-06 10:00:00

This whole IPv6 Divert idea seems to be a little bit optimistic now. As with all new technology (I wouldn't call IPv6 new technology, but still), not everything you have available now supports it. So, how far did we get?
So, this is getting trickier and trickier for a userland guy like me to experiment with on a Sunday afternoon... But I'm not going to give up! (yet)

On the other hand, there is an interesting paper by Dr. Goto (I'm not kidding) and friends, available as IPV4/V6 NETWORK EMULATOR USING DIVERT SOCKET, which says:

[...] We have ported divert socket to IPv6 by adding about 1000 lines of C program code either on FreeBSD and Linux kernel for this research. [...]

I have asked him if he wanted to share his code but haven't heard anything back from him.

IPv6 training and thoughts
Posted on 2008-05-03 09:00:00

After two days of the IPv6 training workshop organized by APNIC, I think I'm ready for it. Mentally that is. I will guide BarNet safely into the 21st century! (Yes, I know that that century is already eight years old, but then IPv6 is already more than eight years old.)

There are a couple of interesting things about IPv6:

The first one is the absence of the checksum field in the IP header. Where an IPv4 header has a checksum which has to be recalculated by every router it passes through (that's no fun for high-speed routers, I tell you), an IPv6 header doesn't carry one anymore.

The second thing is the absence of the ARP protocol: it's gone now. It's over for it. Bye! It's now part of the ICMPv6 protocol: if you need to know the Ethernet address of a host, you send an ICMPv6 packet asking for it. I'm pretty sure that RARP (RFC 903) is obsolete now.

The third one is the absence of the subnet broadcast address: no more all 1's, it just doesn't exist. If you want to tell something to all hosts, use a local multicast.

Related to the third one, the network troubleshooting trick of seeing how many hosts are alive by pinging everybody in the subnet, which is now a /64, is obsolete: a /64 holds 2^64 addresses, so even at a million pings per second you would be busy for more than half a million years before the full sweep was complete.

The famous IPv6 autoconfiguration... It is great for a simple network where hosts have everything (DNS, WINS, proxy etc.) statically configured, but I don't really believe in it for a properly managed network: DHCPv6 is the way to go. Still, I will have to figure out how it works: I saw that the ISC-DHCP port in the FreeBSD ports tree was very outdated and that there wasn't a 3.1 nor a 4.x version in it... I will have to take things into my own hands!

With regard to our network equipment I'm not too worried: the routing Extreme Networks boxes and the Juniper boxes do support IPv6, and the Cisco PoE switches should be fine. The FreeBSD, Linux and Windows 2K3 devices do support it. And PIPE Networks does support it.

With the FreeBSD boxes there is a small problem right now though... In the past we carefully designed our network and services around jails and such, and a jail supports only one IP address. That's the whole idea behind a jail, nothing you can do about it. And that works great, oh, except for the fact that in dual stack mode you need two IP addresses: an IPv4 and an IPv6 one. I know that there are patches around for FreeBSD 7.0 to support multiple IP addresses, but that doesn't help me yet because we have just migrated everything to 6.3...
So, how can we provide IPv6 access to our services easily? The simplest
way is to make some kind of IPv6-to-IPv4 gateway, which translates
an IPv6 packet into an IPv4 packet and forwards that to our servers.
Nothing difficult, this is just plain NAT. And it would expose all our services in one go to the IPv6 world, without having to change anything on the services. Okay, you would lose the information of who it really was that asked, but that is just a small price to pay until the full IPv6 service is in place. Let's do some hacking!

APNIC IPv6 training
Posted on 2008-05-01 09:00:00, modified on 2008-06-01 09:00:00

The coming two days I'll be at the IPv6 Workshop of APNIC. Of course this workshop is in the middle of nowhere, which is impossible for a Sydney-based event, so let me rephrase it: it is held in a non-central location unreachable by train. The options? Take the train to the city (one hour) and then the bus (one hour), or take the train to Parramatta (1.5 hours) and take a taxi from there. But the good news is: thanks to the speed of the Cronulla / Bondi train this morning I was able to catch one train earlier at Redfern, and that one only stops at Strathfield, Lidcombe, Granville and Parramatta, which will save me some hassles... I hope :-)

As an aside, I checked out when my first IPv6 capable program was created: it was the Fatal Dimensions Mud server, and the commit date was 29 April 2000, eight years ago. The IPv6 connection came via FreeNet6 in Canada and that was an IPv6-over-IPv4 tunnel. Thanks to my FreeBSD port of their tunnel software I got a t-shirt from them!

Update: That taxi took half an hour to get there....

Three Mobile roaming 3G network
Posted on 2008-05-01 08:00:00, modified on 2008-07-14 16:00:00

Thanks to my job, or sometimes cursed by my job, I have access to a 3G modem of Three Mobile. It works fine in areas where there is Three Mobile coverage, but outside these areas, where we are roaming (and probably on the Telstra 3G network), there are interesting issues with the IPCP phase (IP Control Protocol) of PPP which make the PPP setup fail.

So what works and what doesn't? According to the PPP manual, this is what we should see:

ppp ON awfulhak> # No link has been established
Ppp ON awfulhak> # We've connected & finished LCP
PPp ON awfulhak> # We've authenticated
PPP ON awfulhak> # We've agreed IP numbers

The first two Ps become capitals, but the third one doesn't when we are roaming.

What do the logs say? The short version of a successful handshake is as follows (the full log is at the end of this writeup):

IPCP: FSM: Using "deflink" as a transport
IPCP: deflink: State change Initial --> Closed
IPCP: deflink: LayerStart.
IPCP: deflink: SendConfigReq(1) state = Closed
IPCP: deflink: State change Closed --> Req-Sent
IPCP: deflink: RecvConfigNak(1) state = Req-Sent
IPCP: deflink: SendConfigReq(2) state = Req-Sent
IPCP: deflink: RecvConfigNak(2) state = Req-Sent
IPCP: deflink: SendConfigReq(3) state = Req-Sent
IPCP: deflink: RecvConfigReq(0) state = Req-Sent
IPCP: deflink: SendConfigNak(0) state = Req-Sent
IPCP: deflink: RecvConfigRej(3) state = Req-Sent
IPCP: deflink: SendConfigReq(4) state = Req-Sent
IPCP: deflink: RecvConfigReq(1) state = Req-Sent
IPCP: deflink: SendConfigAck(1) state = Req-Sent
IPCP: deflink: State change Req-Sent --> Ack-Sent
IPCP: deflink: RecvConfigNak(4) state = Ack-Sent
IPCP: deflink: SendConfigReq(5) state = Ack-Sent
IPCP: deflink: RecvConfigAck(5) state = Ack-Sent
IPCP: deflink: State change Ack-Sent --> Opened

The discussion goes like this: I propose something three times (SendConfigReq) and they reject it (RecvConfigNak), but at the third time they say "Let's try it with this" (RecvConfigReq) and we come up with a common ground. And everybody is happy!

The short version of an unsuccessful handshake is as follows (again, the full log is at the end of this writeup):

IPCP: FSM: Using "deflink" as a transport
IPCP: deflink: State change Initial --> Closed
IPCP: deflink: LayerStart.
IPCP: deflink: SendConfigReq(1) state = Closed
IPCP: deflink: State change Closed --> Req-Sent
IPCP: deflink: RecvConfigNak(1) state = Req-Sent
IPCP: deflink: SendConfigReq(2) state = Req-Sent
IPCP: deflink: RecvConfigNak(2) state = Req-Sent
IPCP: deflink: SendConfigReq(3) state = Req-Sent
IPCP: deflink: RecvConfigNak(3) state = Req-Sent
IPCP: deflink: SendConfigReq(4) state = Req-Sent
IPCP: deflink: RecvConfigNak(4) state = Req-Sent
IPCP: deflink: SendConfigReq(5) state = Req-Sent
IPCP: deflink: RecvConfigNak(5) state = Req-Sent
IPCP: deflink: SendConfigReq(6) state = Req-Sent
IPCP: deflink: RecvConfigNak(6) state = Req-Sent
IPCP: deflink: SendConfigReq(7) state = Req-Sent
IPCP: deflink: RecvConfigNak(7) state = Req-Sent
IPCP: deflink: SendConfigReq(8) state = Req-Sent
IPCP: deflink: RecvConfigNak(8) state = Req-Sent
IPCP: deflink: SendConfigReq(9) state = Req-Sent
IPCP: deflink: RecvConfigNak(9) state = Req-Sent
IPCP: deflink: SendConfigReq(10) state = Req-Sent
IPCP: deflink: RecvConfigNak(10) state = Req-Sent
IPCP: deflink: SendConfigReq(11) state = Req-Sent
IPCP: deflink: SendTerminateReq(11) state = Req-Sent
IPCP: deflink: State change Req-Sent --> Closing
IPCP: deflink: RecvTerminateAck(11) state = Closing
IPCP: deflink: LayerFinish.
IPCP: deflink: State change Closing --> Closed
IPCP: deflink: RecvConfigNak(11), dropped (expected 12)

Here there are only SendConfigReq and RecvConfigNak; there is no RecvConfigReq coming from them! If anybody who has managed to set up a successful PPP connection on a 3G network other than Three Mobile's can send me some logs, I would be very happy to see if I can fix things.

Update: According to John Marshall, sending a string similar to this one could resolve the issue:

AT+CGDCONT=1,"IP","telstra.internet"

Next time when I'm in the roaming zone I'll try it! It worked, once I used "3netaccess" as the identifier.

Here are the full logs. This is the IPCP part of a successful handshake:

ppp[1098]: tun0: Phase: bundle: Network
ppp[1098]: tun0: IPCP: FSM: Using "deflink" as a transport
ppp[1098]: tun0: IPCP: deflink: State change Initial --> Closed
ppp[1098]: tun0: IPCP: deflink: LayerStart.
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(1) state = Closed
ppp[1098]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP: COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP: PRIDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: SECDNS[6] 255.255.255.255
ppp[1098]: tun0: IPCP: deflink: State change Closed --> Req-Sent
ppp[1098]: tun0: LCP: deflink: RecvProtocolRej(2) state = Opened
ppp[1098]: tun0: LCP: deflink: -- Protocol 0x80fd (Compression Control Protocol) was rejected!
ppp[1098]: tun0: CCP: deflink: State change Req-Sent --> Stopped
ppp[1098]: tun0: IPCP: deflink: RecvConfigNak(1) state = Req-Sent
ppp[1098]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1098]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1098]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1098]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(2) state = Req-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP: COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: RecvConfigNak(2) state = Req-Sent
ppp[1098]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1098]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1098]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1098]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(3) state = Req-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP: COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: RecvConfigReq(0) state = Req-Sent
ppp[1098]: tun0: IPCP: [EMPTY]
ppp[1098]: tun0: IPCP: deflink: SendConfigNak(0) state = Req-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 10.0.0.2
ppp[1098]: tun0: IPCP: deflink: RecvConfigRej(3) state = Req-Sent
ppp[1098]: tun0: IPCP: COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(4) state = Req-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: RecvConfigReq(1) state = Req-Sent
ppp[1098]: tun0: IPCP: [EMPTY]
ppp[1098]: tun0: IPCP: deflink: SendConfigAck(1) state = Req-Sent
ppp[1098]: tun0: IPCP: [EMPTY]
ppp[1098]: tun0: IPCP: deflink: State change Req-Sent --> Ack-Sent
ppp[1098]: tun0: IPCP: deflink: RecvConfigNak(4) state = Ack-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 119.11.8.120
ppp[1098]: tun0: IPCP: IPADDR[6] changing address: 0.0.0.0 --> 119.11.8.120
ppp[1098]: tun0: IPCP: PRIDNS[6] 202.124.68.130
ppp[1098]: tun0: IPCP: SECDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: Primary nameserver set to 202.124.68.130
ppp[1098]: tun0: IPCP: Secondary nameserver set to 202.124.76.66
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(5) state = Ack-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 119.11.8.120
ppp[1098]: tun0: IPCP: PRIDNS[6] 202.124.68.130
ppp[1098]: tun0: IPCP: SECDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: deflink: RecvConfigAck(5) state = Ack-Sent
ppp[1098]: tun0: IPCP: IPADDR[6] 119.11.8.120
ppp[1098]: tun0: IPCP: PRIDNS[6] 202.124.68.130
ppp[1098]: tun0: IPCP: SECDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: deflink: State change Ack-Sent --> Opened
ppp[1098]: tun0: IPCP: deflink: LayerUp.

This is the IPCP part of an unsuccessful handshake:

ppp[1661]: tun0: Phase: bundle: Network
ppp[1661]: tun0: IPCP: FSM: Using "deflink" as a transport
ppp[1661]: tun0: IPCP: deflink: State change Initial --> Closed
ppp[1661]: tun0: IPCP: deflink: LayerStart.
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(1) state = Closed
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: State change Closed --> Req-Sent
ppp[1661]: tun0: LCP: deflink: RecvProtocolRej(2) state = Opened
ppp[1661]: tun0: LCP: deflink: -- Protocol 0x80fd (Compression Control Protocol) was rejected!
ppp[1661]: tun0: CCP: deflink: State change Req-Sent --> Stopped
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(1) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(2) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(2) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(3) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(3) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(4) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(4) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(5) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(5) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(6) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(6) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(7) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(7) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(8) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(8) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(9) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(9) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(10) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(10) state = Req-Sent
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP: SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(11) state = Req-Sent
ppp[1661]: tun0: IPCP: IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP: PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: SECDNS[6] 10.11.12.14
ppp[1661]: tun0: Command: /dev/ttyp1: quit
ppp[1661]: tun0: IPCP: deflink: SendTerminateReq(11) state = Req-Sent
ppp[1661]: tun0: IPCP: deflink: State change Req-Sent --> Closing
ppp[1661]: tun0: IPCP: deflink: RecvTerminateAck(11) state = Closing
ppp[1661]: tun0: IPCP: deflink: LayerFinish.
ppp[1661]: tun0: IPCP: Connect time: 11 secs: 0 octets in, 0 octets out
ppp[1661]: tun0: IPCP: 0 packets in, 0 packets out
ppp[1661]: tun0: IPCP: total 0 bytes/sec, peak 0 bytes/sec on Thu Apr 24 13:39:49 2008
ppp[1661]: tun0: IPCP: deflink: State change Closing --> Closed
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(11), dropped (expected 12)
ppp[1661]: tun0: Phase: bundle: Terminate
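For reference, this is roughly where such an AT+CGDCONT line ends up in ppp.conf: inside the dial chat, before the actual dial command. This is only a trimmed sketch, not my real configuration; the label, device name, speed and dial command are placeholders, and the quote escaping should be double-checked against ppp(8).

threemobile:
 # Placeholder device node; use whatever node your 3G modem shows up as.
 set device /dev/cuaU0
 set speed 115200
 # Set the PDP context (APN) before dialling; "3netaccess" is the
 # identifier that worked for me while roaming.
 set dial "ABORT BUSY ABORT NO\\sCARRIER TIMEOUT 5 \"\" AT OK-AT-OK AT+CGDCONT=1,\\\"IP\\\",\\\"3netaccess\\\" OK \\dATD*99# TIMEOUT 40 CONNECT"
 set timeout 0
 enable dns
 add default HISADDR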