MavEtJu's Distorted View of the World - 2008-05

BATV and Postgrey
FreeBSD IPv6 Divert socket adventures (3)
FreeBSD IPv6 Divert socket adventures (2)
FreeBSD IPv6 Divert socket adventures
IPv6 training and thoughts
APNIC IPv6 training
Three Mobile roaming 3G network


BATV and Postgrey

Posted on 2008-05-22 19:00:00, modified on 2008-06-03 19:00:00
Tags: SMTP, Spam, Email

BATV stands for Bounce Address Tag Validation and is a method to prevent backscatter from spam runs. It works by modifying (danger! technical content ahead!) the Envelope From address in an SMTP session from user@example.com to prvs=tag-value=user@example.com. If this email is undeliverable, the bounce will be sent back to prvs=tag-value=user@example.com instead of to user@example.com, and your mail host knows that this is a valid undeliverable message.

So what has Postgrey to do with this? Postgrey is a greylisting server. It works by (danger! technical content ahead!) forcing email deliveries from addresses and hosts which are not yet known to be retried later. Why? Earlier this century, emails sent by viruses and spam hosts weren't smart enough to retry, so email with a malicious payload was never accepted by your mail host.

Yes, but what has greylisting to do with BATV? Greylisting delays every email-from / email-to / sending-host combination it hasn't seen before. So if BATV changes the envelope-from address every day, the first email from that user will be delayed every day. Every day! So Postgrey needs to be taught what the real email address is. Luckily, BATV keeps this information in the from address: prvs=tag-value=user@example.com. Small patch, and it works.
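
To make that concrete, here is a minimal sketch in C of the address reduction the patch performs (the real patch is a few lines of Perl inside Postgrey, and it also has to handle the swapped format described below):

#include <stdio.h>
#include <string.h>

/*
 * Reduce a BATV envelope-from address to the underlying address, so
 * the greylisting key stays stable from day to day.  This handles
 * only the standard form (prvs=tag-value=user@example.com); it is a
 * sketch of the idea behind the patch, not the patch itself.
 */
static const char *
batv_strip(const char *addr)
{
        const char *second;

        if (strncmp(addr, "prvs=", 5) == 0 &&
            (second = strchr(addr + 5, '=')) != NULL &&
            strchr(second, '@') != NULL)
                return (second + 1);    /* second field: the real address */
        return (addr);                  /* not BATV-tagged, leave untouched */
}

int
main(void)
{
        printf("%s\n", batv_strip("prvs=1234abcdef=user@example.com"));
        printf("%s\n", batv_strip("user@example.com"));
        return (0);
}

Both calls print user@example.com, which is exactly the point: the greylisting triplet no longer changes when the tag does.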

And now the tricky stuff: not everybody read the documentation properly, and the two following formats have been seen in the wild:

prvs=tag-value=user@example.com
prvs=user=tag-value@example.com

Brilliant! They swapped it around! So my four-line patch becomes an eight-line patch.

Anyway, the patch is available and submitted to the Postgrey author.

Note: I've made a little change to the patch so that it picks the second field (as the standard suggests) instead of following the wrong format. Not that it should ever come to that, but it's a "just in case" thing.



FreeBSD IPv6 Divert socket adventures (3)

Posted on 2008-05-13 23:00:00
Tags: IPv6, FreeBSD, Networking

Victory! Tonight I managed to get the nat6to4 daemon working. Remember what went wrong yesterday:

IPv6 packet does not go from the nat6to4 daemon into divert. What?!?!?
Yes, it would have worked in one go if I had actually pushed the data into the right IPv6 socket instead of into the wrong IPv4 socket. It happens, especially when you are copying whole functions around.

As a demo:

[~] edwin@freefall>telnet 2001:5c0:8fff:ffff::c3 80
Trying 2001:5c0:8fff:ffff::c3...
Connected to 2001:5c0:8fff:ffff::c3.
Escape character is '^]'.
HEAD / HTTP/1.0

HTTP/1.1 200 OK
Content-Type: text/xml; charset="utf-8"
Date: Tue, 13 May 2008 14:10:23 GMT
Expires: Thu, 26 Oct 1995 00:00:00 GMT
Last-Modified: Tue, 13 May 2008 14:10:23 GMT
Pragma: no-cache
Content-Length: 32
Server: Allegro-Software-RomPager/4.34

Connection closed by foreign host.
That is pretty uninteresting to the naked eye, but the thing is that Allegro-Software-RomPager doesn't support IPv6: it's reached via the nat6to4 gateway.

So my nat6to4 daemon works for mapping the TCP or UDP payload of basic IPv6 packets (single header, nothing fancy) onto IPv4 packets. Now I can make all basic services in my jails (webservers, LDAP servers, DNS servers, POP and IMAP servers) available via IPv6.

Now I have to knock netinet6/ip6_divert.c into shape so that it gets accepted by the FreeBSD project; right now there are too many things commented out because I didn't know what they were for. Yes, I feel like an apprentice magician left alone with a handful of scrolls and asked to find out whether they are interesting.

Patches for FreeBSD 6.3 are available from http://people.freebsd.org/~edwin/freebsd63-ip6divert-20080513.patch.
The nat6to4d is available from http://people.freebsd.org/~edwin/nat6to4d-20080513.c.
And the ipfw rules:

01005   606   245695 divert 8664 ip from any to 192.168.253.2
01006   452    33304 allow ipv6-icmp from any to me6 via tun1
01006   517    51742 divert 8666 ip6 from any to me6 via tun1
65535 79349 20349420 allow ip from any to any
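
For the curious: the heart of a daemon like nat6to4d is just a read/rewrite/reinject loop on the divert socket. Below is a minimal sketch, assuming the patched kernel from above where IPPROTO_DIVERT works on PF_INET6 sockets; the actual IPv6-to-IPv4 translation is left out, and nat6to4d itself (linked above) differs in the details:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>

/*
 * Minimal divert-socket loop: read a packet that ipfw diverted to
 * port 8666, rewrite it, and reinject it.
 */
int
main(void)
{
        struct sockaddr_in6 sin6;
        socklen_t len;
        char packet[65535];
        ssize_t n;
        int fd;

        if ((fd = socket(PF_INET6, SOCK_RAW, IPPROTO_DIVERT)) < 0)
                err(1, "socket");
        memset(&sin6, 0, sizeof(sin6));
        sin6.sin6_len = sizeof(sin6);
        sin6.sin6_family = AF_INET6;
        sin6.sin6_port = htons(8666);           /* matches the ipfw rule */
        if (bind(fd, (struct sockaddr *)&sin6, sizeof(sin6)) < 0)
                err(1, "bind");
        for (;;) {
                len = sizeof(sin6);
                n = recvfrom(fd, packet, sizeof(packet), 0,
                    (struct sockaddr *)&sin6, &len);
                if (n < 0)
                        err(1, "recvfrom");
                /* ... translate the packet here ... */
                if (sendto(fd, packet, n, 0,
                    (struct sockaddr *)&sin6, len) < 0)
                        err(1, "sendto");
        }
}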



FreeBSD IPv6 Divert socket adventures (2)

Posted on 2008-05-12 10:00:00
Tags: IPv6, FreeBSD, Networking

A small update:

  • Dr. Goto has replied. It seems that I contacted him at the beginning of Golden Week, a week in Japan with a lot of public holidays, which is why the reply took longer. His patches are against 5.2.1 and applied without problems, but didn't compile. Still, they gave me some good hints.
  • Three kernel panics later, I was told by Bruce M Simpson to use Qemu for kernel development work. Two days later I have it up and running. It works nicely: I have it booting via PXE and the disks are mounted via NFS. I can boot several FreeBSD versions, but not 5.2.1, because that one hangs just before starting the userland. So much for my reference platform...
  • I have copied src/sys/netinet/ip_divert.c to src/sys/netinet6/ip6_divert.c and modified all IPv4 functions into their IPv6 equivalents. I also added IPPROTO_SPACER entries to netinet6/in6_proto.c. And finally, when I open a socket with PF_INET6, SOCK_RAW and IPPROTO_DIVERT, I get a proper socket:
    nat6to4d 2578  root    3u  IPv6    0xc1df8ec4        0t0  DIVERT *:8666
    nat6to4d 2578  root    4u  IPv4    0xc1dfaec4        0t0  DIVERT *:8664
    sockstat still doesn't show it though...
  • Internally in ip6_divert.c, instead of abusing the sin_zero[] fields of struct sockaddr_in, I have created a new sockaddr type, struct sockaddr_div:
    struct sockaddr_div {
    	uint8_t         div_len;
    	sa_family_t     div_family;     /* AF_INET / AF_INET6 */
    	in_port_t       div_cookie;     /* was: sin_port */
    	char            div_iface[8];
    	struct in6_addr div6_addr;      /* IPv6 address */
    	struct in_addr  div4_addr;      /* IPv4 address */
    };
    This could/should also be used in the normal ip_divert code.

So what works and what doesn't?

  • IPv6 packet goes from divert to the nat6to4 daemon. Yay!
    (TCP SYN packet from IPv6 host to the IPv6 address of the machine with the nat6to4 daemon)
  • IPv4 packet goes from the nat6to4 daemon to divert. Yay!
    (TCP SYN packet from the IPv4 address of the machine with the nat6to4 daemon to the IPv4 host. Trivia: why do you need to recalculate the TCP checksum when you haven't changed the TCP header or the TCP payload? See the checksum note after this list.)
  • IPv4 packet goes from divert to the nat6to4 daemon. Yay!
    (TCP SYN-ACK packet from IPv4 host to the IPv4 address of the machine with the nat6to4 daemon)
  • IPv6 packet does not go from the nat6to4 daemon into divert. What?!?!?
I'm not sure what goes wrong here: sendto() says that the packet is accepted, and it should end up in the div6_send() function, but it never arrives there. For some reason. Which is kind of annoying. Robert Watson suggested using GDB to see what happens inside the sendto() call.
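
The answer to the trivia question above: the TCP checksum is calculated over a pseudo-header that includes the source and destination IP addresses, so replacing the IPv6 header with an IPv4 header invalidates the checksum even though the TCP header and payload are untouched. For illustration, the textbook one's-complement sum over the IPv4 pseudo-header (RFC 1071 style; this is not nat6to4d's actual code):

#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <stddef.h>

static uint16_t
tcp4_cksum(struct in_addr src, struct in_addr dst,
    const uint16_t *seg, size_t len)
{
        uint32_t sum = 0;
        size_t i;

        /* Pseudo-header: source + destination address, protocol, length. */
        sum += (ntohl(src.s_addr) >> 16) + (ntohl(src.s_addr) & 0xffff);
        sum += (ntohl(dst.s_addr) >> 16) + (ntohl(dst.s_addr) & 0xffff);
        sum += IPPROTO_TCP + len;

        /* TCP header and payload, 16 bits at a time. */
        for (i = 0; i < len / 2; i++)
                sum += ntohs(seg[i]);
        if (len & 1)                    /* odd trailing byte, zero-padded */
                sum += ((const uint8_t *)seg)[len - 1] << 8;

        while (sum >> 16)               /* fold the carries back in */
                sum = (sum & 0xffff) + (sum >> 16);
        return (htons(~sum & 0xffff));
}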

But that is an adventure for later, when I have some spare time again... work and two kids don't leave much time for adventures like this (except between 22:00 and 01:00, which is very bad for everybody).



FreeBSD IPv6 Divert socket adventures

Posted on 2008-05-06 10:00:00
Tags: IPv6, FreeBSD, Networking

This whole IPv6 divert idea seems to be a little bit optimistic now. As with all new technology (I wouldn't call IPv6 new technology, but still), not everything you have available now supports it.

So, how far did we get?

  • FreeBSD's ipfw2 supports diverting of IPv6 packets. Yay!
  • A simple program which listens on the divert socket (IPPROTO_DIVERT) for IPv4 (PF_INET) and prints some information about the packets received works (see the sketch after this list). Yay!
  • That same program for IPv6 (PF_INET6) doesn't work; lsof says that it is listening for protocol 0 (the catch-all) instead of protocol IPPROTO_DIVERT.
  • Looking through src/netinet/ip_divert.c shows how the protocol gets registered (thanks to bms@ for hints): pf_proto_register(PF_INET, &div_protosw). But when I do this for PF_INET6, it returns an error related to the absence of a free IPPROTO_SPACER, and those are (thanks to rwatson@ for hints) not available for IPv6.
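
That simple IPv4 listener looks roughly like this (a from-memory sketch using the standard divert API, not the exact program; note that diverted packets are consumed, so a real program would reinject them with sendto() after looking at them):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <err.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
        struct sockaddr_in sin;
        char packet[65535], src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
        struct ip *ip;
        ssize_t n;
        int fd;

        if ((fd = socket(PF_INET, SOCK_RAW, IPPROTO_DIVERT)) < 0)
                err(1, "socket");
        memset(&sin, 0, sizeof(sin));
        sin.sin_len = sizeof(sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8664);             /* the divert port */
        if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                err(1, "bind");
        for (;;) {
                if ((n = recv(fd, packet, sizeof(packet), 0)) < 0)
                        err(1, "recv");
                ip = (struct ip *)packet;
                inet_ntop(AF_INET, &ip->ip_src, src, sizeof(src));
                inet_ntop(AF_INET, &ip->ip_dst, dst, sizeof(dst));
                printf("%zd bytes: %s -> %s\n", n, src, dst);
        }
}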

So, this is getting trickier and trickier for a userland guy like me to experiment with on a Sunday afternoon... But I'm not going to give up! (yet)

On the other hand, an interesting paper by Dr. Goto (I'm not kidding) and friends, available as "IPV4/V6 NETWORK EMULATOR USING DIVERT SOCKET", says:

[...]
We have ported divert socket to IPv6 by adding about 1000
lines of C program code either on FreeBSD and Linux kernel
for this research.
[...]
I have asked him whether he wants to share his code, but I haven't heard anything back from him.



IPv6 training and thoughts

Posted on 2008-05-03 09:00:00
Tags: IPv6, Networking

After two days of the IPv6 training workshop organized by APNIC, I think I'm ready for it. Mentally, that is. I will guide BarNet safely into the 21st century! (Yes, I know that century is already eight years old, but then IPv6 is more than eight years old too.)

There are a couple of interesting things about IPv6. The first one is the absence of the checksum field in the IP header: where an IPv4 header has a checksum which has to be recalculated by every router the packet passes through (that's no fun for high-speed routers, I tell you), IPv6 packets don't have to do this anymore.

The second thing is the absence of the ARP protocol: it's gone now. It's over for it. Bye! Its job is now part of the ICMPv6 protocol: if you need to know the Ethernet address of a host, you send an ICMPv6 Neighbour Solicitation asking for it. I'm pretty sure that RARP (RFC 903) is obsolete now.

The third one is the absence of the subnet broadcast address: no more all-1's, it just doesn't exist. If you want to tell something to all hosts, use a link-local multicast (the all-nodes address, ff02::1).

Related to the third one: the network troubleshooting trick of seeing how many hosts are alive by pinging everybody in the subnet is obsolete, because the subnet is now a /64 and you will never complete the full sweep.
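
To put a number on that (assuming a rather aggressive million probes per second):

    2^64 addresses / 10^6 probes per second ≈ 1.8 × 10^13 seconds ≈ 585,000 years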

The famous IPv6 autoconfiguration... It is great for a simple network where the hosts have everything else (DNS, WINS, proxy etc.) statically configured, but I don't really believe in it for a properly managed network: DHCPv6 is the way to go. Still, I will have to figure out how it works. I saw that the ISC DHCP port in the FreeBSD ports tree was very outdated, with neither a 3.1 nor a 4.x version in it... I will have to take things into my own hands!

With regard to our network equipment I'm not too worried: the routing Extreme Networks boxes and the Juniper boxes support IPv6, and the Cisco PoE switches should be fine. The FreeBSD, Linux and Windows 2K3 devices support it. And PIPE Networks supports it too.

With the FreeBSD boxes there is a small problem right now, though... In the past we carefully designed our network and services around jails and such, and a jail supports only one IP address. That's the whole idea behind a jail; nothing you can do about it. And that works great... oh, except for the fact that in dual-stack mode you need two IP addresses: an IPv4 and an IPv6 one. I know that there are patches around for FreeBSD 7.0 to support multiple IP addresses per jail, but that doesn't help me yet because we have just migrated everything to 6.3...

So, how can we provide IPv6 access to our services easily? The simplest way is to make some kind of IPv6-to-IPv4 gateway, which translates an IPv6 packet into an IPv4 packet and forwards it to our servers.
Confused? Here is an example: www.mavetju.org has a DNS A record of 202.83.176.248. The IPv6 DNS AAAA record would be 2001:0DF0:0009::CA53:B0F8 (or 2001:DF0:0009::0202:0083:0176:0248 to prevent nasty dec-to-hex conversion errors). As you can see, the last 32 bits are the IPv4 address and the first 48 bits are our network. The IPv6-to-IPv4 gateway would carefully craft an IPv4 packet with the right IPv4 address and send it off to my webserver. The answer would be translated back into an IPv6 packet.
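
In code, the gateway's address bookkeeping is trivial. A sketch (2001:df0:9::/48 is our example prefix from above):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

/*
 * The last 32 bits of the AAAA address are the IPv4 address of the
 * real server, so the gateway recovers the destination with a memcpy.
 */
static struct in_addr
v6_to_v4(const struct in6_addr *six)
{
        struct in_addr four;

        memcpy(&four.s_addr, &six->s6_addr[12], 4);     /* last 4 bytes */
        return (four);
}

int
main(void)
{
        struct in6_addr six;
        struct in_addr four;
        char buf[INET_ADDRSTRLEN];

        inet_pton(AF_INET6, "2001:df0:9::ca53:b0f8", &six);
        four = v6_to_v4(&six);
        printf("%s\n", inet_ntop(AF_INET, &four, buf, sizeof(buf)));
        return (0);                     /* prints 202.83.176.248 */
}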

Nothing difficult, this is just plain NAT. And it would expose all our services to the IPv6 world in one go, without changing anything on the services themselves. Okay, you would lose the information about who it really was that asked, but that is a small price to pay until the full IPv6 service is in place. Let's do some hacking!



APNIC IPv6 training

Posted on 2008-05-01 09:00:00, modified on 2008-06-01 09:00:00
Tags: IPv6, Trains, Rant, Memories

The coming two days I'll be at the IPv6 Workshop of APNIC. Of course this workshop is in the middle of nowhere, which is impossible for a Sydney-based event, so let me rephrase that: it is held in a non-central location unreachable by train. The options? Take the train to the city (one hour) and then the bus (one hour), or take the train to Parramatta (1.5 hours) and a taxi from there.

But the good news is: thanks to the speed of the Cronulla / Bondi train this morning, I was able to catch an earlier train at Redfern, one which only stops at Strathfield, Lidcombe, Granville and Parramatta. That will save me some hassle... I hope :-)

As a sideline, I checked when my first IPv6-capable program was created: it was the Fatal Dimensions MUD server, and the commit date was 29 April 2000, eight years ago. The IPv6 connectivity came via FreeNet6 in Canada, over an IPv6-over-IPv4 tunnel. Thanks to my FreeBSD port of their tunnel software I got a t-shirt from them!

Update: That taxi took half an hour to get there...



Three Mobile roaming 3G network

Posted on 2008-05-01 08:00:00, modified on 2008-07-14 16:00:00
Tags: Networking, FreeBSD, 3G

Thanks to my job, or sometimes cursed by it, I have access to a 3G modem from Three Mobile. It works fine in areas where there is Three Mobile coverage; outside these areas, where we are roaming (probably on the Telstra 3G network), there are interesting issues in the IPCP phase (Internet Protocol Control Protocol) of PPP which make the PPP setup fail.

So what works and what doesn't? According to the PPP manual, this is what we should see:

ppp ON awfulhak>    # No link has been established
Ppp ON awfulhak>    # We've connected & finished LCP
PPp ON awfulhak>    # We've authenticated
PPP ON awfulhak>    # We've agreed IP numbers
The first two Ps become capitals, but the third one doesn't when we are roaming.

What do the logs say? The short version of a successful handshake is:

IPCP: FSM: Using "deflink" as a transport
IPCP: deflink: State change Initial --> Closed
IPCP: deflink: LayerStart.
IPCP: deflink: SendConfigReq(1) state = Closed
IPCP: deflink: State change Closed --> Req-Sent
IPCP: deflink: RecvConfigNak(1) state = Req-Sent
IPCP: deflink: SendConfigReq(2) state = Req-Sent
IPCP: deflink: RecvConfigNak(2) state = Req-Sent
IPCP: deflink: SendConfigReq(3) state = Req-Sent
IPCP: deflink: RecvConfigReq(0) state = Req-Sent
IPCP: deflink: SendConfigNak(0) state = Req-Sent
IPCP: deflink: RecvConfigRej(3) state = Req-Sent
IPCP: deflink: SendConfigReq(4) state = Req-Sent
IPCP: deflink: RecvConfigReq(1) state = Req-Sent
IPCP: deflink: SendConfigAck(1) state = Req-Sent
IPCP: deflink: State change Req-Sent --> Ack-Sent
IPCP: deflink: RecvConfigNak(4) state = Ack-Sent
IPCP: deflink: SendConfigReq(5) state = Ack-Sent
IPCP: deflink: RecvConfigAck(5) state = Ack-Sent
IPCP: deflink: State change Ack-Sent --> Opened
The full log is at the end of this writeup.
The discussion goes like this: I propose something three times (SendConfigReq) and they reject it (RecvConfigNak), but after the third attempt they say "let's try it with this" (RecvConfigReq) and we find common ground. And everybody is happy!

The short version of an unsuccessful handshake is:

IPCP: FSM: Using "deflink" as a transport
IPCP: deflink: State change Initial --> Closed
IPCP: deflink: LayerStart.
IPCP: deflink: SendConfigReq(1) state = Closed
IPCP: deflink: State change Closed --> Req-Sent
IPCP: deflink: RecvConfigNak(1) state = Req-Sent
IPCP: deflink: SendConfigReq(2) state = Req-Sent
IPCP: deflink: RecvConfigNak(2) state = Req-Sent
IPCP: deflink: SendConfigReq(3) state = Req-Sent
IPCP: deflink: RecvConfigNak(3) state = Req-Sent
IPCP: deflink: SendConfigReq(4) state = Req-Sent
IPCP: deflink: RecvConfigNak(4) state = Req-Sent
IPCP: deflink: SendConfigReq(5) state = Req-Sent
IPCP: deflink: RecvConfigNak(5) state = Req-Sent
IPCP: deflink: SendConfigReq(6) state = Req-Sent
IPCP: deflink: RecvConfigNak(6) state = Req-Sent
IPCP: deflink: SendConfigReq(7) state = Req-Sent
IPCP: deflink: RecvConfigNak(7) state = Req-Sent
IPCP: deflink: SendConfigReq(8) state = Req-Sent
IPCP: deflink: RecvConfigNak(8) state = Req-Sent
IPCP: deflink: SendConfigReq(9) state = Req-Sent
IPCP: deflink: RecvConfigNak(9) state = Req-Sent
IPCP: deflink: SendConfigReq(10) state = Req-Sent
IPCP: deflink: RecvConfigNak(10) state = Req-Sent
IPCP: deflink: SendConfigReq(11) state = Req-Sent
IPCP: deflink: SendTerminateReq(11) state = Req-Sent
IPCP: deflink: State change Req-Sent --> Closing
IPCP: deflink: RecvTerminateAck(11) state = Closing
IPCP: deflink: LayerFinish.
IPCP: deflink: State change Closing --> Closed
IPCP: deflink: RecvConfigNak(11), dropped (expected 12)
The full log is at the end of this writeup.
Here there are only SendConfigReq and RecvConfigNak; no RecvConfigReq ever comes from their side!

If anybody who has managed to set up a successful PPP connection on the 3G network of anyone but Three Mobile can send me some logs, I would be very happy to see if I can fix things.

Update: According to John Marshall, sending a string similar to this:

AT+CGDCONT=1,"IP","telstra.internet"
could resolve the issue. Next time I'm in the roaming zone, I'll try it!

It worked, once I used "3netaccess" as the identifier.
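
For reference, this is roughly where such a string lives with FreeBSD's user-ppp: an untested sketch of an /etc/ppp/ppp.conf profile. The device node and the rest of the dial string are assumptions; only the AT+CGDCONT command and the "3netaccess" APN come from the story above.

threemobile:
 # Hypothetical device node; adjust for your modem.
 set device /dev/cuaU0
 set speed 115200
 # Set the PDP context (APN) before dialling.
 set dial "ABORT BUSY ABORT NO\\sCARRIER TIMEOUT 5 \"\" AT OK-AT-OK AT+CGDCONT=1,\\\"IP\\\",\\\"3netaccess\\\" OK \\dATD*99# TIMEOUT 40 CONNECT"
 set timeout 0
 enable dns
 add default HISADDR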


Here are the full logs:

This is the IPCP part of a successful handshake

ppp[1098]: tun0: Phase: bundle: Network
ppp[1098]: tun0: IPCP: FSM: Using "deflink" as a transport
ppp[1098]: tun0: IPCP: deflink: State change Initial --> Closed
ppp[1098]: tun0: IPCP: deflink: LayerStart.
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(1) state = Closed
ppp[1098]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP:  PRIDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP:  SECDNS[6] 255.255.255.255
ppp[1098]: tun0: IPCP: deflink: State change Closed --> Req-Sent
ppp[1098]: tun0: LCP: deflink: RecvProtocolRej(2) state = Opened
ppp[1098]: tun0: LCP: deflink: -- Protocol 0x80fd (Compression Control Protocol) was rejected!
ppp[1098]: tun0: CCP: deflink: State change Req-Sent --> Stopped
ppp[1098]: tun0: IPCP: deflink: RecvConfigNak(1) state = Req-Sent
ppp[1098]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1098]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1098]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1098]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(2) state = Req-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: RecvConfigNak(2) state = Req-Sent
ppp[1098]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1098]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1098]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1098]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(3) state = Req-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: RecvConfigReq(0) state = Req-Sent
ppp[1098]: tun0: IPCP:   [EMPTY]
ppp[1098]: tun0: IPCP: deflink: SendConfigNak(0) state = Req-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 10.0.0.2
ppp[1098]: tun0: IPCP: deflink: RecvConfigRej(3) state = Req-Sent
ppp[1098]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots without slot compression
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(4) state = Req-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1098]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1098]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1098]: tun0: IPCP: deflink: RecvConfigReq(1) state = Req-Sent
ppp[1098]: tun0: IPCP:   [EMPTY]
ppp[1098]: tun0: IPCP: deflink: SendConfigAck(1) state = Req-Sent
ppp[1098]: tun0: IPCP:   [EMPTY]
ppp[1098]: tun0: IPCP: deflink: State change Req-Sent --> Ack-Sent
ppp[1098]: tun0: IPCP: deflink: RecvConfigNak(4) state = Ack-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 119.11.8.120
ppp[1098]: tun0: IPCP:  IPADDR[6] changing address: 0.0.0.0  --> 119.11.8.120
ppp[1098]: tun0: IPCP:  PRIDNS[6] 202.124.68.130
ppp[1098]: tun0: IPCP:  SECDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: Primary nameserver set to 202.124.68.130
ppp[1098]: tun0: IPCP: Secondary nameserver set to 202.124.76.66
ppp[1098]: tun0: IPCP: deflink: SendConfigReq(5) state = Ack-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 119.11.8.120
ppp[1098]: tun0: IPCP:  PRIDNS[6] 202.124.68.130
ppp[1098]: tun0: IPCP:  SECDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: deflink: RecvConfigAck(5) state = Ack-Sent
ppp[1098]: tun0: IPCP:  IPADDR[6] 119.11.8.120
ppp[1098]: tun0: IPCP:  PRIDNS[6] 202.124.68.130
ppp[1098]: tun0: IPCP:  SECDNS[6] 202.124.76.66
ppp[1098]: tun0: IPCP: deflink: State change Ack-Sent --> Opened
ppp[1098]: tun0: IPCP: deflink: LayerUp.

This is the IPCP part of an unsuccessful handshake

ppp[1661]: tun0: Phase: bundle: Network
ppp[1661]: tun0: IPCP: FSM: Using "deflink" as a transport
ppp[1661]: tun0: IPCP: deflink: State change Initial --> Closed
ppp[1661]: tun0: IPCP: deflink: LayerStart.
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(1) state = Closed
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: State change Closed --> Req-Sent
ppp[1661]: tun0: LCP: deflink: RecvProtocolRej(2) state = Opened
ppp[1661]: tun0: LCP: deflink: -- Protocol 0x80fd (Compression Control Protocol) was rejected!
ppp[1661]: tun0: CCP: deflink: State change Req-Sent --> Stopped
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(1) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(2) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(2) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(3) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(3) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(4) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(4) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(5) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(5) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(6) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(6) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(7) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(7) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(8) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(8) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(9) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(9) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(10) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(10) state = Req-Sent
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP:  PRINBNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP: MS NBNS req 130 - NAK??
ppp[1661]: tun0: IPCP:  SECNBNS[6] 10.11.12.14
ppp[1661]: tun0: IPCP: MS NBNS req 132 - NAK??
ppp[1661]: tun0: IPCP: Primary nameserver set to 10.11.12.13
ppp[1661]: tun0: IPCP: Secondary nameserver set to 10.11.12.14
ppp[1661]: tun0: IPCP: deflink: SendConfigReq(11) state = Req-Sent
ppp[1661]: tun0: IPCP:  IPADDR[6] 0.0.0.0
ppp[1661]: tun0: IPCP:  COMPPROTO[6] 16 VJ slots with slot compression
ppp[1661]: tun0: IPCP:  PRIDNS[6] 10.11.12.13
ppp[1661]: tun0: IPCP:  SECDNS[6] 10.11.12.14
ppp[1661]: tun0: Command: /dev/ttyp1: quit
ppp[1661]: tun0: IPCP: deflink: SendTerminateReq(11) state = Req-Sent
ppp[1661]: tun0: IPCP: deflink: State change Req-Sent --> Closing
ppp[1661]: tun0: IPCP: deflink: RecvTerminateAck(11) state = Closing
ppp[1661]: tun0: IPCP: deflink: LayerFinish.
ppp[1661]: tun0: IPCP: Connect time: 11 secs: 0 octets in, 0 octets out
ppp[1661]: tun0: IPCP: 0 packets in, 0 packets out
ppp[1661]: tun0: IPCP:  total 0 bytes/sec, peak 0 bytes/sec on Thu Apr 24 13:39:49 2008
ppp[1661]: tun0: IPCP: deflink: State change Closing --> Closed
ppp[1661]: tun0: IPCP: deflink: RecvConfigNak(11), dropped (expected 12)
ppp[1661]: tun0: Phase: bundle: Terminate

