Backup Pi

My online backup provider CrashPlan cancelled home users' accounts to focus on business customers. Now what?

Spoilers: This is unfinished at time of publishing. I’ll update this post as I progress towards a solution I’m happy with.

There aren’t any obvious Linux-compatible unlimited (priced per computer rather than per GB) online backup solutions left, and I didn’t fancy running one in a VM or any other jiggery-pokery like that.

Cloud storage providers would be ideal. Backblaze B2 is a good candidate (https://www.backblaze.com/b2/cloud-storage-pricing.html), but Wasabi is the cheapest I’ve found (https://wasabi.com/pricing/). Even so, 8TiB over 3 years with one full restore works out at about $1468. I suspect any cheaper providers will change or increase their pricing model, or close, in a few years, and I don’t want to find a new solution every two years — but this is still too expensive for my needs.
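For the curious, that figure falls out of simple arithmetic. A back-of-envelope sketch using assumed rates of roughly $0.0039/GB/month for storage plus $0.04/GB for the one full restore (approximately what Wasabi advertised at the time) lands within a percent or so of the quoted number:

```shell
#!/bin/sh
# Back-of-envelope cost check. The rates are assumptions, not quotes.
GB=8192      # 8 TiB expressed in GiB
total=$(awk -v gb="$GB" 'BEGIN {
  storage = gb * 0.0039 * 36   # 36 months of storage
  restore = gb * 0.04          # one full egress
  printf "%.0f", storage + restore
}')
echo "approx 3-year cost: \$$total"
```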

Potential Solution: run my own offsite backup system.

Hardware List

$35 Raspberry Pi 3

$16 SD Card

$30 USB Dual HDD Dock

$275 2 × WD Red 8TB

Total $666

Seeding the backup onsite saves the usual ~month of initial upload nonsense.

Configuring the Pi

Power

It runs fine on normal USB power, as nothing here is demanding enough to need the extra watts.

Operating system

Raspbian, no GUI, whatever release was latest at the time.

Basic WiFi Networking

The location the Pi lives at has public WiFi; that will do.

#/etc/network/interfaces.d/wlan0
auto wlan0
iface wlan0 inet dhcp
    wireless-essid GuestWifiNoCrypto
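This only works because the guest network is open. For completeness, if it were WPA-protected the stanza would instead point at a wpa_supplicant config — a hypothetical sketch with made-up SSID and passphrase:

```shell
#/etc/network/interfaces.d/wlan0 (WPA variant, hypothetical)
auto wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

#/etc/wpa_supplicant/wpa_supplicant.conf
network={
    ssid="SomeSecuredWifi"
    psk="not-the-real-passphrase"
}
```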

VPN

My pi is installed in a firewalled location and can’t act as a server that my backup machine can reach.

Simplest VPN: ssh tunnel!

However, an ssh tunnel started by hand is very temporary. autossh is a solution that will start ssh on boot and restart it if the connection fails, but it needs some encouragement to get the right, most robust config. The flag that bit me, missing from other examples, was ExitOnForwardFailure: until I had that, the ssh VPN was reconnecting but not establishing the reverse tunnel when an old stale one hadn’t quite been cleaned up.

# Create a dedicated account on the home machine for the inbound tunnel:
homemachine$ sudo adduser --system pitunnel
# Create an ssh key on the Pi:
pi$ ssh-keygen -t rsa -b 4096
# Install the key into my home machine:
pi$ ssh-copy-id [email protected]
# Test the key; this should connect without a password:
pi$ ssh [email protected]

Here is my autossh setup.

# /lib/systemd/system/autossh.service 
[Unit]
Description=tunnel
After=network.target
Wants=network-online.target
StartLimitIntervalSec=0

[Service]
User=pi
Restart=always
RestartSec=5
Environment=AUTOSSH_GATETIME=0
ExecStart=/usr/bin/autossh -M 0 -N -q -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -o "ExitOnForwardFailure yes" -p 22 -l pitunnel my.dyndns.address -R 2200:127.0.0.1:22 -i /home/pi/.ssh/id_rsa

[Install]
WantedBy=multi-user.target

Now enable it, start it, and check it is running OK.

$ sudo systemctl enable autossh
$ sudo systemctl start autossh
$ sudo systemctl status autossh
● autossh.service - tunnel
   Loaded: loaded (/lib/systemd/system/autossh.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-10-xx 01:01:38 UTC; 0 days ago
 Main PID: 739 (autossh)
   CGroup: /system.slice/autossh.service
           ├─ 739 /usr/lib/autossh/autossh -M 0 -N -q -o ServerAliveInterval 60 -o ServerAliveCountMax 3 -o ExitOnForwardFailure yes -p 22 -l pitunnel my.dyndns.address -R 2200:127.0.0.1:22 -i /home/pi/.ssh/id_
           └─2748 /usr/bin/ssh -N -q -o ServerAliveInterval 60 -o ServerAliveCountMax 3 -o ExitOnForwardFailure yes -p 22 -l pitunnel -R 2200:127.0.0.1:22 -i /home/pi/.ssh/id_rsa my.dyndns.address

Set up a convenience ssh host entry, so you don’t need all the usernames and ports in every ssh and scp command you want to use:

# /root/.ssh/config 
Host pitunnel
    Hostname 127.0.0.1
    User borg
    Port 2200
    IdentityFile ~/.ssh/borg

Test it with

ssh pitunnel

It should connect with no password prompt.

Installing the disks

TODO I’ve not really finished this section.

I had initially used a btrfs raid1 mirror instead of a linux mdadm raid, but one of the drives in my dock had connectivity issues.

  • Running a btrfs balance took far, far too long (days). I have no idea why btrfs takes so long to rebalance; it is not IO-speed limited. I’d like someone to benchmark a btrfs rebalance in RAM and see how slow it still is.
  • It would crash when trying to mount after a bad reset, needing to be mounted from my PC with more ram / different kernel / whatever else, in order to clean up the filesystem state. This happened more than once.
  • It would force the filesystem read-only after one degraded mount after I took a disk out; this is supposedly fixed in Linux 4.14: http://lkml.iu.edu/hypermail/linux/kernel/1709.1/00609.html

I refuse to use a non check-summing filesystem, and zfs is out of the question for me right now (not open for discussion here, may work for you), so I’m sticking with btrfs, but ignoring the raid modes.

$EDITOR /etc/fstab

/dev/disk/by-label/borgbackup /backup btrfs defaults,noatime,nofail 0 0

Linux mdadm raid jiggery: I’ve done this a squillion times, but haven’t actually set it up here yet.
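When I do get around to it, the mirror setup should look something like this. A sketch from memory rather than a transcript: the device names assume the two dock slots appear as /dev/sda and /dev/sdb, and mkfs will destroy whatever is on them.

```shell
# Build the mirror, put btrfs on top with the label fstab expects,
# and record the array so it assembles on boot.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.btrfs -L borgbackup /dev/md0
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo mount /backup
```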

$ sudo apt install smartmontools

TODO: You should probably configure a mail relay so emailing from the pi works.
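In the meantime, a minimal smartd config that schedules self-tests and mails on trouble. This is essentially the stock Debian example, and the mail to root goes nowhere useful until that relay exists:

```shell
# /etc/smartd.conf
# Monitor all disks (-a), enable offline testing (-o) and attribute
# autosave (-S), run a short self-test nightly at 02:00 and a long one
# on Saturdays at 03:00, and mail root on any failure.
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
```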

watchdog

The systemd watchdog looks like an interesting framework, but it didn’t have an immediately obvious way of checking the network connection was OK, so I opted for the lazy/easy watchdog package.

sudo apt install watchdog

$EDITOR /etc/watchdog.conf
# set the following lines
ping = 8.8.8.8 # google dns
interface = wlan0 # check wifi interface
watchdog-timeout = 10 # close to the largest value the Pi hardware watchdog supports; the default of 60 errors out
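With the config in place, the daemon still needs enabling. On my Raspbian the bcm2835_wdt module already provided /dev/watchdog; if yours doesn’t, you may need to load it (and uncomment watchdog-device = /dev/watchdog in the same file):

```shell
ls -l /dev/watchdog            # confirm the hardware watchdog device exists
sudo systemctl enable watchdog
sudo systemctl start watchdog
sudo systemctl status watchdog
```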

Misc Config

I had the Pi plugged into a spare HDMI port on a nearby monitor so I could check in on it from time to time, and into a USB switcher to swap the keyboard/mouse over.

An AliExpress $6 USB KVM switcher was the cheapest I found; I just ignore the VGA ports.

Annoyingly, when my machine was locked, the Pi would occasionally reboot or steal the monitor’s input via HDMI.

#/boot/config.txt
hdmi_blanking=1

This setting is supposed to fix it, at the cost of horribly breaking media player software — which is OK, as I don’t use any on this machine. I’ll update with my experience.

Backup software

restic

I first tried restic; it ticked most of my boxes: Go, encrypted, authenticated, de-duplicating, incremental. I hit a few problems though: restic struggles with pruning over slow links, and it ran out of memory when I tried to prune from the Pi itself. It looks like it has scope to grow into a solid backup tool, but it doesn’t quite meet my low-memory / high-latency needs at the moment.

Issues I ran into:

No snapshot usage stats https://github.com/restic/restic/issues/693

Random Crash: https://github.com/restic/restic/issues/1251

Poor errors with SFTP backend: https://github.com/restic/restic/issues/1323

Very little progress output on prune: https://github.com/restic/restic/issues/1322

No network stats https://github.com/restic/restic/issues/1033

The restic author maintains a detailed feature list of alternative backup software, which I can highly recommend for those exploring backup systems.

The most mature alternative candidate seems to be borgbackup. I didn’t really need the compression that borgbackup adds, and I didn’t want the overhead of Python, but it seems to tick most boxes.

borgbackup

I used the borgbackup 1.1.1 release from pip, as it fixes some issues I had with clearing out stale locks (after the watchdog resets the Pi) and Raspbian hasn’t updated yet.
Version 1.1.0 managed to crash my host machine due to a bug where it opened every block device, and possibly a secondary bug in my kernel; the kernel side needs further investigation – https://github.com/borgbackup/borg/issues/3213

$ sudo pip3 install borgbackup

I dropped the script from the borgbackup Quick Start – Automating Backups into /etc/cron.daily/borgbackup, added the -x flag to stay on one filesystem, and then added the paths to the mounts I wanted:

 export BORG_REPO=ssh://pitunnel/~/repo
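Trimmed to its essentials, the resulting script looks roughly like this. The paths and retention numbers are mine, the passphrase handling is elided, and ‘{hostname}-{now}’ is borg’s own placeholder syntax:

```shell
#!/bin/sh
# /etc/cron.daily/borgbackup -- cut-down sketch of borg's Quick Start script
export BORG_REPO=ssh://pitunnel/~/repo

# -x: stay on one filesystem, so other mounts under these paths
# are not swept up by accident
borg create --stats -x ::'{hostname}-{now}' / /home /srv

# thin out old archives so the repo doesn't grow without bound
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```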

Oops: I slightly corrupted my repo with the watchdog resets and lock fudgery; the prune was crashing on a missing directory.

Python: single-threaded, CPU-bound, and a multi-day check… awarghgjbwakwffkawkfkafhgh. It’s still not finished, I have little idea about its progress (it’s spent all the time on Analyzing archive “hostname-2017-10-18T11:44:25.checkpoint (3/19)”), and I have 15 idle cores.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
22288 root      20   0 1000796 897456   6984 R 100.0  5.5   3844:24 /usr/bin/python3 /usr/bin/borg check --verbose --repair

Day 7: This breached 10000 minutes (7 days) of runtime, so I’ve shut it down.

Day 14: I forgot to check up on it for a while; miraculously it’s been running for a week, making backups every day! Hooray.

But the repo isn’t “healthy”…

$ borg delete $BORG_REPO::darkskiez-2017-10-23T23:05:06.checkpoint
borg.repository.Repository.ObjectNotFound: Object with key fe00c10f6ce4b733d013ee30a60f47284368e4fa47164cb0d8af507dc0acedf0 not found in repository ssh://pitunnel/~/reopo::darkskiez-2017-10-23T23:05:06.checkpoint.

Sod it, I’ll force it and nuke the lot of them.

borg list --short|grep checkpoint|xargs -i borg delete --force $BORG_REPO::{}
forced deletion succeeded, but the deleted archive was corrupted.
borg check --repair is required to free all space. 

Argh! borg list still shows a checkpoint file… and borg delete --force errors with the same not-found error. How hard is it to delete something that isn’t there?!

We know how well borg check worked last time, and I really didn’t want it repairing these checkpoints before, but maybe now it will work 😐 I don’t have high hopes.

using builtin fallback logging configuration
35 self tests completed in 0.08 seconds
SSH command line: ['ssh', 'pitunnel', 'borg', 'serve', '--umask=077', '--debug']
Remote: using builtin fallback logging configuration
Remote: 35 self tests completed in 0.72 seconds
Remote: using builtin fallback logging configuration
Remote: Initialized logging system for JSON-based protocol
Remote: Resolving repository path b'/~/repo'
Remote: Resolved repository path to '/backup/repo'
'check --repair' is an experimental feature that might result in data loss.
Type 'YES' if you understand this and want to continue: YES
Remote: Starting repository check
Remote: Verified integrity of /backup/repo/index.179252
Remote: Read committed index of transaction 179252
Remote: Cleaned up 0 uncommitted segment files (== everything after segment 179252).
Remote: Segment transaction is    179252
Remote: Determined transaction is 179252
Remote: Found 166446 segments
Remote: Checking segments 100%
Remote: Completed repository check, no problems found.
Starting archive consistency check...
Remote: Verified integrity of /backup/repo/index.179253
TAM-verified manifest
Analyzing archive darkskiez-2017-10-12T11:01:00 (1/9)
Remote: Cleaned up 0 uncommitted segment files (== everything after segment 179253).
Remote: Verified integrity of /backup/repo/hints.179253
Analyzing archive darkskiez-2017-10-18T10:22:54.checkpoint (2/9)
Archive metadata block is missing!
Analyzing archive darkskiez-2017-10-24T10:02:14 (3/9)
/srv/photos/IMG_2445.JPG: New missing file chunk detected (Byte 0-888561). Replacing with all-zero chunk.
/srv/photos/IMG_2445.JPG: New missing file chunk detected (Byte 888561-1153253). Replacing with all-zero chunk.
item metadata chunk missing [chunk: 000528_fe00c10f6ce4b733d013ee30a60f47284368e4fa47164cb0d8af507dc0acedf0]

It got to this point before in the 7-day run, so I assume the fixing doesn’t go so well.

I’m going to delete this specific archive.

Day 21: For various reasons this check couldn’t finish either; it still takes too long.

I am very sad. I am resisting the strong urge to write my own backup system. Perhaps I should retry restic, but with much smaller repos. Perhaps I should try another system… knoxite looks better than restic on paper, but has nearly zero development activity or users. I refuse to believe it is mature on that evidence — but then again, I was wholly wrong about borgbackup, it seems.

btrfs snapshots

Ok, maybe all this backup software is too fancy.. Maybe btrfs is fancy enough a filesystem that I don’t need the extra things..

There are a few btrfs backup scripts based on the principles in: https://btrfs.wiki.kernel.org/index.php/Incremental_Backup

I have quite a lot of data I wish to exclude from my backup, and reorganizing my filesystem into different (sub)volumes doesn’t sound like fun, especially when we’re talking about lots of ‘cache’ dirs and whatnot.

I wonder if btrfs snapshot diffing is intelligent enough that, if I snapshot filesystem A into S1 and S2 at different times, prune the files I don’t want from the snapshots, and then btrfs send the delta S1→S2, it will work at all — and be clever enough not to include the deleted files.

Crude test time.

btrfs subvol create /home/snaptest
# Create a bigfile1 /home/snaptest/b1
btrfs subvolume snapshot /home/snaptest /home/snaptest-s1
# Create a bigfile2 /home/snaptest/b2
# Create a small file /home/snaptest/s
btrfs subvolume snapshot /home/snaptest /home/snaptest-s2
# Delete the unwanted file: rm /home/snaptest/b2 /home/snaptest-s2/b2 (s1 never had b2)

btrfs send -p /home/snaptest-s1 /home/snaptest-s2|wc -c
ERROR: subvolume /home/snaptest-s1 is not read-only
# Whoops: snapshots must be read-only before they can be sent
btrfs property set -ts /home/snaptest-s1 ro true
btrfs property set -ts /home/snaptest-s2 ro true

btrfs send -p /home/snaptest-s1 /home/snaptest-s2|wc -c
# Wahoo: the output is only a bit bigger than the small file
btrfs sub del /home/snaptest*

There may be some hope here yet.

Encrypting the data sent with ‘btrfs send’ keeps it secure, but makes it very awkward to use remotely in an emergency when I can’t restore the snapshots.
Perhaps layering it on something like https://nuetzlich.net/gocryptfs/ will do the job; if I really need to, I can use that to mount the encrypted snapshots remotely.
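Another option, sketched here rather than deployed: symmetrically encrypt the serialized stream itself with gpg before it leaves the machine. The snapshot paths and the destination directory on the Pi are hypothetical:

```shell
# Ship a btrfs send delta, encrypted in transit and at rest on the Pi.
# gpg --symmetric will prompt for a passphrase interactively.
btrfs send -p /snapshots/s1 /snapshots/s2 \
  | gpg --symmetric --cipher-algo AES256 \
  | ssh pitunnel 'cat > /backup/deltas/s2.btrfs.gpg'

# Restoring means decrypting before btrfs receive can see the stream.
ssh pitunnel 'cat /backup/deltas/s2.btrfs.gpg' \
  | gpg --decrypt | btrfs receive /restore/
```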

My concerns mostly revolve around how to recover from partial data loss. If my btrfs base snapshot becomes corrupt, is there a way to recover data from a serialized snapshot diff? I’d like a btrfs receive --ignore-inconsistencies option, so I could apply a partial snapshot diff to a blank snapshot and get any new files that were included in the delta.

Stenaline Public Wifi Very Insecure – SSL MITM Attacks

Update: Stenaline have responded and disabled email inspection

Fabulous: they disabled the configuration that was causing me the most problems within 24 hours of my original post. Congratulations Stena Line, and I look forward to my return journey!

As a direct action on your mail we immediately notified our service provider regarding this issue and the problem is now located and solved.

IMAP protocol inspection is now disabled. The previous setting was an unintentional mistake by our service provider.

Original Post:

Stena Line provide free wifi, which is awesome; however, their egregious content filtering system compromises passengers’ online safety far more than normal public wifi hotspots do.

They claim on their captive portal login screen on their on-ship wifi:

 

Privacy protection
All open wireless networks are by nature insecure. Please, take the necessary precautions to protect your privacy and data communications.

What kind of security, does the network provide?
The network security level is basically the same as you find on public hotspots.

However, this is clearly not the case, because they go out of their way to defeat “the necessary precautions” I normally take — namely, checking my email over SSL-protected connections.

It appears Stena Line think it is OK for them to snoop on my SSL-secured private and work emails with some Fortinet/FortiGate snooping appliance they seem to have. This appliance proxies SSL connections and re-signs the certificates with its own keys, effectively performing an SSL MITM (Man In The Middle) attack. SSL is designed to prevent exactly this sort of thing, which is why your browser throws up an error message when something has gone wrong. All in all, this means nobody can tell whether it is them “protecting” me, or in fact a rogue hacker running a fake access point (a so-called “Evil Twin” network) collecting people’s Google and corporate credentials, or snooping on my emails. Thanks very much, guys.

Update: It appears it’s not limited to email or non-HTTPS web ports; lots of websites with HTTPS connections blow up too, even common ones like Google Plus, which loads content from https://fls.doubleclick.net (a Google-owned site). They appear to whitelist certain popular websites to allow HTTPS directly, but this is unmanageable, unmaintainable, and wholly ridiculous. I can’t safely sign into my work SSL VPN with this either.

Here’s an example where they have stripped the regular CA and added the self-signed Fortinet CA certificate for imap.gmail.com:

$ openssl s_client -connect imap.gmail.com:993
CONNECTED(00000003)
depth=1 C = US, ST = California, L = Sunnyvale, O = Fortinet, OU = Certificate Authority, CN = FortiGate CA, emailAddress = [email protected]
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=imap.gmail.com
i:/C=US/ST=California/L=Sunnyvale/O=Fortinet/OU=Certificate Authority/CN=FortiGate CA/[email protected]
1 s:/C=US/ST=California/L=Sunnyvale/O=Fortinet/OU=Certificate Authority/CN=FortiGate CA/[email protected]
i:/C=US/ST=California/L=Sunnyvale/O=Fortinet/OU=Certificate Authority/CN=FortiGate CA/[email protected]
---
Server certificate
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=imap.gmail.com
issuer=/C=US/ST=California/L=Sunnyvale/O=Fortinet/OU=Certificate Authority/CN=FortiGate CA/[email protected]

It should look like this:

$ openssl s_client -connect imap.gmail.com:993 -showcerts
CONNECTED(00000003)
depth=1 C = US, O = Google Inc, CN = Google Internet Authority
---
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=imap.gmail.com
   i:/C=US/O=Google Inc/CN=Google Internet Authority
 1 s:/C=US/O=Google Inc/CN=Google Internet Authority
   i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority
---
Server certificate
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=imap.gmail.com
issuer=/C=US/O=Google Inc/CN=Google Internet Authority

All in all, the standard advice of when using a public wifi connection, use a VPN still stands…

As most regular folk don’t have a VPN (and even SSL VPNs are compromised here), the best you can hope for is that the sites you use support SSL, and that you don’t just willy-nilly click “OK” on browser SSL warning pages. Sadly, given that standard web-safety advice, you just can’t browse the web safely on Stena Line ferries.

Google take a very dim view on this sort of thing: Google security blog: Attempted man in the middle attacks

Another interesting thing is that this may also be illegal in the UK under the Computer Misuse Act or the telecommunications acts, as they are attempting to decrypt an encrypted communication, though I’m not a lawyer. If anyone can shed any light on this I’d like to know.

I’ve emailed Stena a link to this post to see if they respond, and hopefully to get a timeframe for removing this egregious device/configuration from their network.

How to get rid of GPG NO_PUBKEY errors when doing apt-get update

When doing apt-get update you might see a lot of errors like:

W: GPG error: http://ceph.newdream.net lenny Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY DA4420ED288995C8
W: GPG error: http://download.opensuse.org Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 85753AA5EEFEFDE9
W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 28577FE31F882273
W: GPG error: http://download.virtualbox.org lenny Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 54422A4B98AB5139

For the most part you should install the appropriate keyrings;

apt-cache search keyring$

should list most of them. Sometimes they don’t exist for third-party repositories, so try this one-liner, split for (a little) clarity:

for KEY in `apt-get update 2>&1 |grep NO_PUBKEY|awk  '{print $NF}'`; do
 gpg --keyserver subkeys.pgp.net --recv $KEY; gpg --export --armor $KEY|apt-key add -;
done

Caveat: this is insecure, though more secure than disabling validation. Be aware that for full security you should validate the key signatures you are importing via private, quantumly-secured links to the originator, obtained at your own cost, etc. etc.

Validating and Exploring DNSSEC with dig

Now that the Root DNS nameservers and .org TLD have both been signed, you can validate DNS server responses are legitimate.

In an attempt to learn better how this all hangs together, I thought I’d first try and validate some requests.

My first difficulty was figuring out what the root nameserver key is, what format it needs to be in, where to store it, and how to use it with dig to validate. Of course, the keys themselves are stored in DNS; you can query them in the format dig needs to read them back with:

dig +nocomments +nostats +nocmd +noquestion -t dnskey . > trusted-key.key

This can be placed in /etc/trusted-key.key for site-wide dig use; otherwise dig searches the current directory for it.

If the file cannot be parsed, dig will print one of the following when you try to use it in dnssec mode:

No trusted keys present
or
;; No trusted key, +sigchase option is disabled

To test out a full chain of validation from the root, you can now try to resolve www.isc.org:

dig +topdown +sigchase +multiline -ta www.isc.org

-- snip --

;; OK a DS valids a DNSKEY in the RRset
;; Now verify that this DNSKEY validates the DNSKEY RRset
;; VERIFYING DNSKEY RRset for isc.org. with DNSKEY:12892: success
;; VERIFYING A RRset for www.isc.org. with DNSKEY:7617: success
;; The Answer:
www.isc.org.		600 IN A 149.20.64.42

;; FINISH : we have validate the DNSSEC chain of trust: SUCCESS

Hooray, we have validation.

Blog is now IPv6 Enabled

Although my awesome hosting company Dreamhost are not serving up via IPv6 yet, I’ve IPv6-enabled my blog for them by using an Apache reverse proxy on my home machine.

I configured Apache much like http://linux.yyz.us/ipv6/proxy.html, with a few differences:

1. I didn’t use NameVirtualHost, just put a specific IP in. Name-based virtual hosts feel a bit unnecessary with IPv6, and this way I can point several aliases at the entry without defining them all. This may be considered a small security issue, as other people could point their illegitimate domain names at your site and it would still work and look official.

2. So I didn’t have to register all of the aliases, I did not enable ProxyPreserveHost.

3. As bryars.eu resolves to both AAAA and A records, I didn’t want Apache to get in a loop and proxy to itself, so I added a v4 alias to ensure v4-only forwarding.

For some reason it was sending 301 redirects until the DNS was all sorted.

# Contents of /etc/apache2/sites-available/bryars.eu-proxy
<VirtualHost [2001:470:9272:1::2]>
 ServerName bryars.eu
 CustomLog /var/log/apache2/bryars.eu-proxy.log combined

 ProxyRequests Off
 <Proxy http://v4.bryars.eu/*>
 Order deny,allow
 Allow from all
 </Proxy>
 ProxyPass             /       http://v4.bryars.eu/
 ProxyPassReverse      /       http://v4.bryars.eu/
</VirtualHost>

I also added the IPv6 address to my /etc/network/interfaces so it gets allocated when the tunnel comes up, by adding the following line to my tunnel interface stanza:

up ip addr add 2001:470:9272:1::2/64 dev $IFACE preferred_lft 0

The preferred_lft 0 is to mark the ip as deprecated so it doesn’t get used as the default address for outgoing connections. For more information see: http://www.davidc.net/networking/ipv6-source-address-selection-linux

Debian IPv6 Configuration and Lessons Learned

I’ve had a few issues configuring IPv6 on Debian

If, due to a misconfiguration, a v4tunnel interface you have brought up with ifup has failed, you need to delete it manually before trying again, or this annoying error will happen:


# ifup somev6tunnel
ioctl: No buffer space available
# ip tunnel del somev6tunnel
# ifup somev6tunnel
#

I was trying to configure a 6to4 tunnel without specifying a local interface address, by using “local any endpoint any”, but that gives an unhelpful and increasingly familiar error message:


# ifup 6to4
ioctl: No buffer space available

Linux doesn’t like both local and remote values unset, so I thought: aha, I’ll just use “local any endpoint 192.88.99.1”. It appeared to work, but I have since realised it only works for talking to non-6to4 hosts; if I tried to talk to another 6to4 host, the packets were routed through the gateway instead of directly, and the return packets were also lost. So I just specified the local address, and it works.

My working 6to4 Debian /etc/network/interfaces:


auto 6to4
iface 6to4 inet6 v4tunnel
address 2002:561e:XXXX::1 # ipv6calc -I ipv4addr -O ipv6addr -A conv6to4 86.30.XX.XX
netmask 16
local 192.168.1.2 # address assigned by wifi router
endpoint any
gateway ::192.88.99.1 # 6to4 anycast address

Though it’s best to use a managed tunnel, like Hurricane Electric’s tunnelbroker.net.

This is the Debian network interfaces config I used to connect to my tunnelbroker.net IPv6 tunnel, reconfigure the tunnel endpoint dynamically, and also add one of my routed /48 subnets to the interface (so I can use pretty reverse DNS addresses from my host).


auto he-ipv6
iface he-ipv6 inet6 v4tunnel
address 2001:470:1f08:xxxx::2
netmask 64
endpoint 216.66.80.26
gateway ::216.66.80.26
# Docs to generate pass etc from http://ipv4.tunnelbroker.net/ipv4_end.php
up wget --no-check-certificate -O - 'https://ipv4.tunnelbroker.net/ipv4_end.php?ipv4b=AUTO&pass=9c4db7a186c8xxxxxxxxxxxxxx&user_id=ef2ffab0c775dxxxxxx&tunnel_id=19xxx' 2>/dev/null
up ip addr add 2001:470:XXXX:1::1/64 dev $IFACE